Hacker News new | past | comments | ask | show | jobs | submit | jefftk's comments login

That seems to be pretty good for the group of newcomers! They all buy together at lowish prices, then prices go up enough that they ~make their money back on housing appreciation. And it's not bad for existing homeowners either, since they get the appreciation too.

> And it's not bad for existing homeowners either, since they get the appreciation too.

Assuming everyone is only after appreciating house prices. While the locals who lived there before you might like that their houses gain value, depending on how large the group is and what its culture is, they might not like it at all. There is a reason some rural people continue living in rural areas, and bringing parts of the city to them might not be ideal for those people.


Exactly what rural people want: enough city people all moving in at once to noticeably change the local culture (their whole goal) and also price the locals out of their own city.

Or you could just invest the difference in your stock portfolio and institute a land value tax instead. Stocks are a lot more liquid than real estate and less risky as well. The value of a home, by contrast, is pretty much stuck in the property until you convert it to liquid cash by selling, but then you need to move elsewhere.

Indeed you could either go through the arduous task of convincing your friends to move somewhere with you or just get enough political support for a land value tax instead. No brainer to go for the tax. Way easier to achieve.

Ah, path dependence.

Is someone penny-pinching $400/month really going to have the funds necessary to flip housing?

Someone penny-pinching so they can flip houses would.

This isn't stainless rebar, it's stainless-coated rebar. Perhaps still too expensive to end up widely used, but it should be a lot cheaper than pure stainless.

Galvanized rebar and epoxy coated rebar have been available for many decades.

They are so much more expensive than ordinary carbon steel rebar as to be unicorn-poop rare.

If there is a break in the finish, it is as susceptible to corrosion as carbon steel. This means every step requires special handling and rigorous inspection. It cannot be field fabricated with a hand bender if a stirrup is missing or damaged. Tying and placement has to be done with unusual care to avoid damage (and again non-standard level of inspection).

Galvanizing and epoxy coating are long lead time and require prefabrication (bending). So you are shipping, handling and receiving bespoke space filling shapes instead of commodity straight bars to specialty job shops with limited capacity and well booked dance cards.

At every step, everyone has to price their work against all that complexity, uncertainty and potential for delays.


Galvanizing still works if there are scratches in the finish because there is a radius of protection from the galvanization.

Galvanized rebar is about 20% more expensive and is actually widely used in highway construction, particularly when salt is a major concern.

Epoxy coated rebar turned out to have much poorer efficacy than originally expected as you noted but the same is not true of galvanizing which is a much more durable coating.


Recently? It was almost a decade ago: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815006

There are unfortunately a lot of old C library functions that violate const correctness. Consider dirname: https://www.jefftk.com/p/dirname-is-evil

I would be ok with the equivalent of the "fi" ligature, where two letters would be drawn slightly differently when next to each other.


Why? Monospaced characters fit in their separate cells.


Headers and footers, yes. Menus generally need to expand when you interact with them, especially on mobile.


You could do this with JS in the child document, if it's important to keep JS out of the parent.


No, not if it's a cross-site URL.

Then you need a postMessage to send the body size to the parent frame, which then needs to listen for messages and resize the iframe element.


Totally! I thought we were talking about the same-site case.


Claude 3.7 gets it exactly right:

To determine which option cools coffee the most, I'll analyze the heat transfer physics involved. The key insight is that the rate of heat loss depends on the temperature difference between the coffee and the surrounding air. When the coffee is hotter, it loses heat faster. Option 1 (add milk first, then wait):

- Adding cold milk immediately lowers the coffee temperature right away

- The coffee then cools more slowly during the 2-minute wait because the temperature difference with the environment is smaller

Option 2 (wait first, then add milk):

- The hot coffee cools rapidly during the 2-minute wait due to the large temperature difference

- Then the cold milk is added, creating an additional temperature drop at the end

Option 2 will result in the lowest final temperature. This is because the hotter coffee in option 2 loses heat more efficiently during the waiting period (following Newton's Law of Cooling), and then gets the same cooling benefit from the milk addition at the end. The mathematical principle behind this is that the rate of cooling is proportional to the temperature difference, so keeping the coffee hotter during the waiting period maximizes heat loss to the environment.


That's totally cribbed from some discussion that occurred in its training.


As opposed to humans, who all derive the physics of heat transfer independently when given a question like this?

Not picking on you - this brings up something we could all get better at:

There should be a "First Rule of Critiquing Models": Define a baseline system to compare performance against. When in doubt, or for general critiques of models, compare to real world random human performance.

Without a real practical baseline to compare with, it's too easy to fall into subjective or unrealistic judgements.

"Second Rule": Avoid selectively biasing judgements by down-selecting performance dimensions. For instance, don't ignore differences in response times, grammatical coherence, clarity of communication, and other qualitative and quantitative differences. Lack of comprehensive performance-dimension coverage is like comparing runtimes of runners without taking into account differences in terrain, length of race, altitude, temperature, etc.

It is very easy to critique. It is harder to critique in a way that sheds light.


> As opposed to humans, who all derive the physics of heat transfer independently when given a question like this?

Isn't that the difference between learning and memorizing, though? If you were taught Newton's Law of Cooling using this example and truly learned it, you could apply it to other problems as well. But if you only memorized it, you might be able to recite it when asked the same question, yet still be unable to apply it to anything else.


> It is very easy to critique. It is harder to critique in a way that sheds light.

Well said. This is the sort of ethos I admire and aspire to on HN.


So is my knowledge of Newton's law of cooling.


If an LLM has only that knowledge and nothing else (pieces of text saying that heat transfer is proportional to some function of the temperature difference), such that it is not trained on any texts that give problems and solutions in this area, it will not work this out, since it has nothing to generate tokens from.

Also, your knowledge doesn't come from anywhere near having scanned terabytes of text, which would take you multiple lifetimes of full time work.


We get way more info than LLMs do, just not solely from text.


You have not read every accessible piece of text in existence.


There is more to life than just text; e.g., this is part of LeCun's argument against LLMs.


LeCun's argument is based on a bad interpretation of how data is processed by the optic nerve; we don't receive that much raw data.

What we do have is billions of years of evolution that has given us a lot of innate knowledge, which means we are radically more capable than LLMs despite having little data.


There is more to text than just predicting tokens based on a vast volume of text.

There isn't an argument "against LLMs" as such; the argumentation is more oriented against the hype and incessant promotion of AI.


This exact problem was in Martin Gardner's column for Scientific American in the 1970s. There are surely references all over the internet.


If it was just ‘in the training data’ they’d all get it right.

But they don’t.


I don't think that can be postulated as a law, because they are a kind of lossy compression. Different lossy compressions will lose different details.


> When I’m on my deathbed, I won’t look back at my life and wish I had worked harder. I’ll look back and wish I spent more time with the people I loved

If you don't imagine yourself wishing you'd worked harder, consider whether you've chosen the right work. There are massive problems in the world on which we can make real progress, and if you're not working on these, why not?

Definitely spend time with your friends, family, and those you love. Don't work to the exclusion of everything else that matters in your life. But if your work isn't something you look back on with intense pride, consider whether there's something else you could be doing professionally that you would feel really good about.


This isn't just pixels, it's the normal way we use rectangular units in common speech:

* A small city might be ten blocks by eight blocks, and we could also say the whole city is eighty blocks.

* A room might be 13 tiles by 15 tiles, or 195 tiles total.

* On graph paper you can draw a rectangle that's three squares by five squares, or 15 squares total.

