A year ago, I thought LLMs were kind of neat but not that useful. I saw the code autocomplete and thought, meh.
Then last summer everything flipped. I never thought I'd see automated code generation like what we have now.
I know there's baggage, but you need to know the coders are being real about this.
1:42 AM · Feb 13, 2026
The open-source models are good enough that you could tank every foundation-model company and this would still be happening.
This tech, whatever it is, is here, and I can run it at home.
If you don’t see the utility personally, I can get that. If I wasn’t coding I probably wouldn’t either.
If you’re worried about its downsides, join the club. A lot of us are too.
But no matter what your take is, take it seriously. This shit isn’t crypto.
I was with you right up until the unnecessary dig at a $2.3T economy. 😕
Like it or not, crypto didn't deliver on a lot of its promises, came with a ton of downsides, and turned the public's general distrust of tech up to 11.
That's orthogonal to LLMs, is it not?
Technology doesn't make promises, people do.
But either way, it's a diversion from your main point. The ground is indeed shifting.
I'm sympathetic, but unfortunately it is relevant. A lot of people are super suspicious of the AI hype because of that history.
It's an interesting attribution, but I would argue some valid suspicion was due to prior history of AI hype cycles like 'Expert Systems', 'Symbolics', and 'Deep Learning'.
Indeed, 'this time it's different'.
yea, same. a year ago i tested them out and thought, "hmm, seems like they're not very good at making stuff that actually works, why would anyone spend money on this?"
but today it's pretty obvious that choosing not to use it is like choosing not to use spellcheck in Word or something
i'm still trying to find some balance between just generating lots of code and the harder work of architecting, planning, and making security decisions