people saying ai is a fad don't know what they're talking about. i'm definitely not giving up my claude now; the ability to do structural refactors "change this thing to be layered like this thing in this other project" is fantastic. i *do* want the model to be local and hope we'll get there in time
10:52 AM · Oct 23, 2025
I don't think the "it's a fad" argument is that people will stop using it, or that we won't use it more in two years than we do now. But the status quo is that some AI startups are getting multi-billion-dollar *seed* rounds and xAI is valued at >$100B; that assumes much more than "it's nice for refactoring code"
tapping the sign
eva
@eva.town
· 10mo
begging others in tech to see beyond their IDE and acknowledge how AI is pouring kerosene on already bad social problems
(spread of disinfo, concentration of wealth, artists and writers not being paid for their labor, adolescent mental health, weaponization of our legal system + more)
AI in your code editor to help with refactors: great. yes. love it. it helps me too
AI for nearly everything else: a scourge on humanity
practically speaking, how do you get AI in only an IDE but not in any other arbitrary application that a company wants to add it to? who decides what it's allowed to be in and by what criteria?
technically there is not a practical way to do that, but we have to look beyond the technical at social and regulatory limits on the use of LLMs in areas where the risks are too high (education, law, etc)
yeah i'm very pro regulation in this area (even though we're not gonna get it anytime soon *sigh*), but i assume dan isn't thinking about this carelessly or thoughtlessly — i don't think AI is a fad either and at the same time i'm being very intentional/thoughtful about its impacts on society
I agree w you (and respect Dan and his work)
I just have v strong feelings about AI boosterism within the tech community. because it often comes across as prioritizing making tech work 5% better for people in tech at the cost of making things 20x worse in many fields outside of tech
i could see that, but what does AI boosterism mean to you? is dan engaging in AI boosterism by saying he finds LLMs very helpful and that they aren't a fad? i've seen folks use that phrase but am unclear on what qualifies
i use AI on a weekly basis at this point, but my views on AI are quite complex
“boosterism” to me means uncritical praise of AI. yes, it’s helpful in some contexts. unfortunately that can’t currently be separated from the rest of its harms. to acknowledge one but not the other is reckless
i think i might be confused, are you saying that you think people should always hedge positive comments about AI with an acknowledgement of how other instances of the technology in other areas might be causing harm?
People are afraid of being replaced by AI, so of course they'd cope by saying it's a fad. I wonder how much LLM usage has increased the rate at which water bodies are being depleted.
Yes, but even if 30 percent of the work can be done by an llm, that's enough to spread paranoia among the masses. It's the threat of being completely replaced.
Not only is AI not a fad but #atproto could make it a lot better for everyone.
Once we have personal-private data, your PDS could be your AI provider proxy.
Use any model you want and have the memories stored on the PDS.
It’s a better UX than what any one company could ever offer.
I think I'd prefer my AI config & memories to be in git, where I can track history and changes
And that would be something you could do! But right now you don’t get to take your ChatGPT memories anywhere AFAIK
The agent frameworks are working from files in a repo these days, including things like memories and skills
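For example, Claude Code keeps that kind of state as plain files in the repo (a sketch; exact file names follow Claude Code's conventions, and other agent frameworks use similar per-repo layouts):

```
repo/
├── CLAUDE.md            # persistent project instructions / "memory"
├── .claude/
│   ├── settings.json    # per-project agent configuration
│   └── skills/          # reusable skill definitions
└── README.md
```

Since these are ordinary files, git already gives you history, diffs, and portability for them.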
I don't know about the ChatGPT UI as I don't touch OpenAI stuff. I can see the list of "memories" that Gemini has remembered for me, which I could take easily by the looks of it
I guess I’m not really speaking about developer tooling here but more consumer applications using AI.
Like you couldn’t use those memories in git to inform the search results you see on Bluesky.
I would use git as the SoT and publish to the PDS if needed, much like lexicon
but yeah, for general users, expanding the PDS <-> app access pattern from ATProto is a great general vision!
I'd like to add that we, as developers, should not embrace the "AI" lingo, and should emphasize "LLM" as the proper name for it.
This would lower our expectations, get us back to the machine learning that started it all, and leave us ready for the real AI.
I get the appeal. It feels like a superpower. My question is about the cognitive trade-off. When the AI handles the structural thinking, do we lose the deeper intuition that comes from wrestling with the architecture ourselves? It's efficient, but I worry it's deskilling us in subtle ways.
How do you give it notes? I use the Console API version with VS Code but feel like I have to continuously guide it back to my preferences
i'm using claude code cli, not sure how anything else works. it's decent at reading stuff in README and i sometimes tell it explicitly to read some instructions
You can change the endpoint Claude Code uses; I had a little success with LiteLLM as a proxy to local models served by llama.cpp, but you need a lot of RAM/GPU for tool-calling context. Local LLMs have been great for less intensive tasks that should stay private (like analyzing medical results), though.
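A minimal sketch of that setup, assuming LiteLLM and llama.cpp are already installed (the model file, ports, and the `litellm.yaml` mapping are placeholders; exact flags vary by version):

```shell
# 1. Serve a local model over llama.cpp's OpenAI-compatible API
llama-server -m qwen2.5-coder.gguf --port 8080

# 2. Run LiteLLM as a translating proxy; litellm.yaml maps the model
#    name Claude Code requests onto the local endpoint above
litellm --config litellm.yaml --port 4000

# 3. Point Claude Code at the proxy instead of Anthropic's API
export ANTHROPIC_BASE_URL=http://localhost:4000
claude
```

The key piece is `ANTHROPIC_BASE_URL`, which Claude Code reads to decide where to send requests; everything behind the proxy is swappable.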
if you can find Intel B60 Pros (24GB VRAM) in stock you could have the local dream rn with Deepseek models?
Or a Strix Halo PC like Framework Desktop with 120GB of unified RAM dedicated to a model you just keep running as a server
Better to wait a year or two for more specialized hardware maybe
i'm hoping i can run something on my macbook but i'm not even satisfied with today's cloud models so realistically i prob won't be happy until like five years in
you’d probably be better off getting a high-end Mac Studio (or future equivalent) to run as a local server, have all that RAM dedicated to keeping your context
not really (my code will be open source anyway) but it feels wrong to depend on somebody else's service for my workflow. i felt this acutely on a plane actually
also bc it’s so unprofitable it’ll only get more expensive. like $20 all-you-can-eat plans are not sustainable long term. thus local will be way better for power users
Why? Because everything tends towards free or because training is the dominant cost?
What are the obstacles to a local model, in your view?
I'm doing document processing and research, and working on a hybrid localized version. Primarily process documents locally overnight, but able to spin out to AWS on demand for fast processing or processor-heavy tasks. Haven't done it yet, but self-hosting via AWS seems cheaper than a commercial subscription
You can get pretty far with Qwen Coder, OlympicCoder, etc. with just 16GB of VRAM. I use these for generating READMEs, Mermaid charts, etc.
Try it out.
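For instance, with Ollama (the model name and size are just an example; pick a quantization that fits your VRAM):

```shell
# a quantized ~14B coder model fits in a 16GB GPU
ollama pull qwen2.5-coder:14b
ollama run qwen2.5-coder:14b "Write a Mermaid flowchart for a user login flow"
```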
I don’t think it’s a fad. It’s tremendously helpful. But it won’t take over the world either.
For local and powerful LLMs, laptops will need to support much more RAM and processing power… that will take some time. Maybe USB pluggable external NPUs will be in high demand in the near future.
I’ve been thinking that the big thing there is “local”, and I feel like that also implies a stronger ability to compose desired elements of a model to custom-fit the use cases. Basically I think we’re a long way off but I can sort of see the shape of a new kind of HUD