ranger_danger's comments

The only big gripe I have with htmx is that the response it provides to your hx-on::after-request callback does not automatically parse JSON content types the way e.g. jQuery.ajax() does. Last time I brought that up, people simply questioned why I would ever want to do that in the first place.

That sounds like use-case exploration? Did you answer?

One of the actual responses was "Htmx isn’t designed to work with JSON APIs at all. It needs HTML back from the server."

It sounds like they are referring to hx-swap and not arbitrary JavaScript callbacks though, and even then, I don't see why calling JSON.parse() inside htmx (when the content-type is JSON) would be that big of a deal.
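
For reference, the manual workaround is tiny; here's a sketch of what I mean (the /api/data endpoint is made up):

    <button hx-get="/api/data"
            hx-swap="none"
            hx-on::after-request="
              var xhr = event.detail.xhr;
              if (event.detail.successful &&
                  (xhr.getResponseHeader('Content-Type') || '').includes('json')) {
                var data = JSON.parse(xhr.responseText); // htmx could do this itself
                console.log(data);
              }">
      Load
    </button>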


They don't want to become a general-purpose dynamic web library; they focus on swapping server-generated HTML blocks. It's a conscious decision about what they are and what they are not.

It's a pretty core part of their design philosophy, possibly the core one.

https://htmx.org/essays/rest-explained/


> One of the actual responses was "Htmx isn’t designed to work with JSON APIs at all. It needs HTML back from the server."

Uh, yes? They wrote a literal book about why they think this is important: https://hypermedia.systems/


In 4.0 we are opening up the entire request/response/swap mechanism so you can replace any component of it per-trigger.

You can replace the fetch() function used with an event callback, etc.

This should allow you to do pretty much anything without any hacks.


I've got a different theory than this AI slop:

Engineers often aren't rational because engineers can still be stupid. Dogmatism/black-and-white thinking is often a sign of low emotional intelligence (and can also be a defense mechanism called "splitting").

The Dunning-Kruger effect also applies to "smart" people. You don't stop learning at the point where your self-estimate happens to be accurate. As you learn more, you gain more awareness of your own ignorance and keep being conservative in your self-estimates.


Dunning-Kruger applies to people who don't know a specific domain. If you spend all day writing code, you probably understand at least one language fairly well. Maybe you're not an expert in how compilers work, but at least you understand programming to some degree. So this is probably one of the least appropriate topics to apply DK to. If you want to make this argument, it's perhaps better to base it on identity, not DK.

> Dunning-Kruger applies to people who don't know a specific domain.

I mean, it's largely a statistical artifact around which a pop-science myth has accumulated, but on its own terms it applies smoothly and continuously across the entire range of ability in a domain, not in any special way on just one side of a binary knowledge dividing line. (The finding was basically that people across the whole range of ability tend to rate their own relative ability closer to the 70th percentile than it actually is, though their estimates do increase monotonically with actual relative ability.)
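
A toy model makes that shape concrete (the numbers are illustrative, not the actual fit from the paper):

    // Toy model: self-estimates are compressed toward roughly the 70th
    // percentile but still increase monotonically with actual percentile.
    // The 0.3 slope is made up for illustration.
    const selfEstimate = (actual) => 70 + 0.3 * (actual - 70);

    for (const p of [10, 30, 50, 70, 90]) {
      console.log(p, '->', selfEstimate(p)); // 52, 58, 64, 70, 76
    }

Low scorers overestimate (10 -> 52), high scorers underestimate (90 -> 76), and nobody's estimate goes down as actual ability goes up.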


The sad thing is that for all their ways, advanced for the time, they succumbed to the same thing we are experiencing now: being too comfortable to fix what's broken.

The Mayans did not want to give up their lifestyles even in the face of crippling population growth and the depletion of surrounding natural resources, which led to their downfall.


This should be upvoted. A lot. The downvotes are ill-informed.

https://www.earthobservatory.nasa.gov/images/77060/mayan-def...

From newish imaging, we can see the impressions of vast swaths of jungle cut down to make way for crops and houses. This looks to have disrupted the water cycle enough to deplete the cenotes (underground water systems and the only source of drinking water). We see sacrificial remnants below the modern water line. Their water disappeared, and so did their civilization. By the time the Spanish arrived, the local people had no knowledge of how to build or maintain their now-ancient cities; the jungles regrew, the water came back, and sacrificial artifacts were covered by the replenished water levels.

They are an example of man-made effects on local climate leading to the downfall of an advanced civilization.


Didn't the Spanish show up briefly, then come back in force later?

I've heard some speculate that this introduced European diseases, and unlike many Native American tribes, the Mayans lived in dense cities. Such disease would spread like wildfire.

(Certainly, some disease made it the other way too! Syphilis is the classic example, and possibly tuberculosis.)

I've heard numbers like 95% died, and it was decades between first contact and serious conquest.

That leaves a lot of time for people to grow up with no one to teach them trades, or even how to read.

If we lost 95% of our population, so many active skills would be lost.


The collapse of classical Maya civilization predated the arrival of the Spanish by around six centuries.

> Didn't the Spanish show up briefly, then come back in force later?

The end of the Incan empire is a really striking example of this dynamic. The Spanish landed on the South American mainland in ~1524, European diseases started spreading, and in 1527 the Incan emperor died of one of those diseases without a clear heir. This triggered a really brutal civil war of succession that weakened the empire. The Spanish began the conquest proper of the Incan empire in ~1532 and succeeded in part because of how weak the empire was after the civil war.

So essentially, by arriving early and (inadvertently) initiating the disease epidemics, the Spanish put in place conditions that made the conquest possible a few years later.


Estimates vary wildly on what percentage of the natives died from European diseases. There's just too little information on pre-Columbian populations.

For comparison, estimates of the deaths from the Black Plague in Europe range from 30% to 60%. That's a huge error bar, despite a lot of surviving written records.



Sounds like the opposite, no? We are going through population collapse in a time of abundance. It does make me wonder what the political dynamics were at the time: whether some could see the problems but weren't in power to change things, or whether they couldn't understand them or figure out solutions. What I'd give to be a multilingual fly on the wall throughout history.

> Anubis is specifically DDOS protection

Only against well-behaved, application-level DDoS attacks, maybe.

A real network-level attack at many gigabits per second will not be stopped by Anubis itself.


> Who would dare block Google Search from indexing their site?

People who don't want to be indexed. Or found at all.


The problem I see with this approach is that it enables website operators to stop allowing bots completely, and then the bots' customers will complain that sites aren't updated, and they won't care that the site owner is blocking them.

I don't consider cloud IP blocks a solution. We use Amazon WorkSpaces, and many sites block or restrict access simply because our IPs appear to belong to Amazon. There are also a good number of legitimate VPN users on cloud IPs.

How is a curl user-agent automatically a well-behaved automation?

One assumes it is a human, running curl manually, from the command line on a system they're authorized to use. It's not wget -r.

Sounds like the perfect opportunity for bots to use the curl user-agent. How do we know they're not already doing this?

We don't, but now that we've talked about it publicly on the Internet, they're going to start doing that. I'm sure some already were, but now we've gone and told the rest of them.
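
It's a one-header change for any client; for example, in Node (the version string is arbitrary):

    // Any bot can claim to be curl; the User-Agent header proves nothing.
    const res = await fetch('https://example.com/', {
      headers: { 'User-Agent': 'curl/8.5.0' },
    });
    console.log(res.status);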

> Reverse delegation (RFC 2317) is the way IP-to-FQDN lookups are usually done now

> Before this was popular, you could get your ISP to defer to your DNS for reverse records directly

I'm not actually seeing the difference between these two... besides this new "reverse delegation" allowing different nameservers for prefixes longer than /24... aren't you still relying on your ISP/upstream provider (if you don't own your own IPs) to delegate reverse lookups to your own DNS server either way?


Correct. What RFC 2317 gives you is a way to create a new namespace in some structured format (IIRC, the RFC gives three different example formats). The upstream ISP, which holds the reverse delegation at the zone-cut boundary for the IP ranges it controls, inserts CNAMEs pointing into your new namespace on nameservers you control, so the reverse PTRs can be formed that way.

Having run an ISP for a long time, I found extremely few customers who wanted to do something like RFC 2317, or who could actually figure it out and do it effectively. Almost all were content with the control panel/API and having the ISP do it, after I pointed them to this informational RFC and asked if this was what they wanted.
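
For anyone curious, the delegation looks roughly like this (using RFC-style example names and the 192.0.2.0/26 documentation range):

    ; In the ISP's 2.0.192.in-addr.arpa zone: carve out 192.0.2.0/26
    ; for the customer via CNAMEs into a customer-controlled subzone.
    0/26      IN NS     ns1.customer.example.
    1         IN CNAME  1.0/26.2.0.192.in-addr.arpa.
    2         IN CNAME  2.0/26.2.0.192.in-addr.arpa.
    ; ...and so on for the rest of the /26.

    ; In the customer's 0/26.2.0.192.in-addr.arpa zone:
    1         IN PTR    host1.customer.example.
    2         IN PTR    host2.customer.example.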


I think part of the reason most ISPs don't support RFC 2317 or reverse delegation is that it makes it easy for a bad actor in control of the delegated DNS server to put any domain they want in the PTR records. The consequences of this sort of spoofing are now limited by other systems and protocols anyway, so it's not as big of a deal.

ISPs prefer to have direct control of the reverse lookups within their IP blocks so they can ensure the integrity of the information.
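
The main such check is forward-confirmed reverse DNS: resolve the PTR, then verify the returned name resolves back to the original IP, which a spoofer can't fake without also controlling the forward zone. A minimal sketch in Node using the standard dns module:

    // Forward-confirmed reverse DNS (FCrDNS): a spoofed PTR fails the
    // round trip because the forward zone for the claimed name is not
    // under the spoofer's control.
    const { reverse, resolve4 } = require('node:dns').promises;

    async function fcrdns(ip) {
      for (const name of await reverse(ip)) {                 // PTR lookup
        const addrs = await resolve4(name).catch(() => []);   // forward lookup
        if (addrs.includes(ip)) return name;                  // confirmed
      }
      return null; // no name survived the round trip
    }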


> HTML tables are cognitively if not officially deprecated these days.

According to who?

