Stop saying “artificial intelligence”. (And “neural networks” too.)
Be more specific. Say “reinforcement learning”. Say “generative modelling”. Say “Bayesian filtering”. Say “statistical prediction”.
These are incredibly useful tools that have nothing to do with “intelligence”.
And say “model trained on plagiarised data”.
Say “bullshit generator”.
Say “internet regurgitator”.
These also have nothing to do with intelligence, but they have the added bonus of being useless, too.
Some banging euphemisms for LLMs in the comments.
I am very partial to the original, "stochastic parrots", by Gebru, Mitchell et al.
This is fun, but I’m tired and I do not want to wake up to a billion notifications.
Muting this thread. Enjoy, peeps.
@samir This is something I've done on a number of occasions. People say "use AI!" or "we want AI!" (yes, there are actually people who say that!)
Usually, once you get even a half-assed answer to the question of what *problem* the person seeks to solve, it's much easier to find a solution that doesn't rely on plagiarizing all of the Internet and every published work in the history of humanity.
@TheDefiant604 @mkj Everyone wins, right?
@mkj @samir One thing I've heard suggested is "to take minutes of meetings".
Which on the face of it makes sense, as nobody ever reads meeting minutes so it doesn't really matter if they're garbage.
Until ... people have different memories of what was agreed at the meeting, and then it *does* matter that the minutes are *accurate*.
Bullshit regurgitator
@samir so true. Machine learning should go too. It's all just statistical inference.
@samir how about "fabricated learning"? Military terms like HUMINT all have some hybrid terminology, but the point here is that it needs to be recognized as machine-made.
@samir you forgot "internet poisoner" due to all the "AI" generated crap turning up in search results and reducing the quality of content on the internet
@M0PWX I think there's a distinction to be made here. The model generated crap. A human put it online.
@TheGreatLlama @samir @himay
I see "#AI" as existing in 2 categories: General Public & Specialised.
The 1st has no guarantees of quality or security & is fine for e.g. translating your Thai mother-in-law's Happy Anniversary message. The tool is in effect the master.
The 2nd is a specialised tool used by an expert in a particular field who fully understands its (quality/technical/security) limitations as well as its capacities and potential. E.g. reading X-rays. The expert here is the master.
@Quantillion @samir @himay
Well, living in the US, I find your hypothetical example in the second category a bit frightening because I can easily see our healthcare system dispensing with the expert and treating the tool as infallible. But yes, when treated properly by people who understand its limitations, that's what I consider the useful stuff.
The problem is that AI is nothing but a marketing buzzword, that's why the definitions are uselessly vague. On any given day, it means whatever the marketers choose to hang upon it.
@samir like this, also stop saying “it hallucinated”, which implies it’s otherwise perfectly capable and just having a bad day, and just say “it’s wrong” or “it can’t do that”
@samir I'd also accept "spicy autocomplete"
@samir plausible sentence generator is one of my favorites.
@samir
I really like "wrong answer machine."
@samir@functional.computer i think this is a good idea. take "bayesian filtering" for example: this is really useful to people who work in the ml(? what would be a more correct word for that) field because it tells them how it works, but it means nothing to non-technical people, so it won't give them the wrong idea like "artificial intelligence" would.
@samir @arrjay what's the replacement term for "convolutional neural network"? (Or any ANN) Referring to them as RL feels deliberately misleading since the particular mechanism generating the loss does matter quite a lot.
Not that I'd be sad to be rid of the bullshit analogy to neurons, but I don't know what else I'd call a forward-connected graph of ReLU (or whatever) activation node thingies if I wanted to refer specifically to this type of model and be understood
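For concreteness, the "forward-connected graph of ReLU activation nodes" being described is just nested arithmetic. A toy sketch (the weights here are invented for illustration; a real network learns them from data):

```python
# A tiny forward-connected graph of ReLU nodes:
# two inputs -> two hidden ReLU nodes -> one linear output node.
# All weights are made up for illustration.

def relu(x):
    """Rectified Linear Unit: pass positive values, clamp negatives to 0."""
    return max(0.0, x)

def forward(x1, x2):
    h1 = relu(0.5 * x1 - 0.2 * x2 + 0.1)   # hidden node 1
    h2 = relu(-0.3 * x1 + 0.8 * x2)        # hidden node 2
    return 1.0 * h1 + 0.5 * h2             # output node (linear)

print(forward(1.0, 2.0))  # a single number; nothing "neural" about it
```

No perception, no neurons: just weighted sums and a max with zero, composed in a fixed graph.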
@alcinnz @samir @arrjay I think it would probably make the terminology problem worse and not better (especially as it implies perception that isn't there).
The term "perceptron" originates from a physical computer project by the US Navy of the same name (formally: Mark I Perceptron), but in modern parlance the term is often used to refer to subsets of an entire architecture's graph.
The term "artificial neuron" floats around sometimes, and it would be better inasmuch as it hints at how much actual neurological phenomena our digital "neurons" are missing, but it's not really used often.
The misunderstanding rubber really seems to hit the road when people mistake "neural" for "brain": they don't know a whole lot about either of those things, but they know that there are neurons in brains, and implicitly assume that enough of the former is kinda like the latter.
You don't owe me shit, but having an example for each alternative would help me explain it to my wife, lol
@hannu_ikonen I can try, though I haven't been anywhere close to an expert in this field for at least 5 years.
Reinforcement learning is what AlphaGo does. It's very good at recognising patterns, e.g. video games, and more usefully, protein folding.
Similarly, I'd call Midjourney "generative modelling", but there's lots of scientists using generative models to do actually useful things, e.g. create new materials.
@hannu_ikonen Statistical prediction makes it clear it's just numbers and aggregation. You can use it for good, but mostly it's Racial Profiling as a Service.
Bayesian filtering is how we usually do email spam filtering.
And the bullshit generators… I don't need to explain those to you.
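To make the "Bayesian filtering" example concrete, here's a toy naive Bayes spam score. The word counts are made up for illustration; a real filter learns them from labelled mail:

```python
from math import exp, log

# Toy corpus counts (invented for illustration): how often each word
# appeared in messages already labelled spam vs. ham.
spam_counts = {"free": 20, "winner": 15, "meeting": 1}
ham_counts = {"free": 2, "winner": 1, "meeting": 25}
spam_total, ham_total = 100, 100  # total words seen per class

def spam_probability(words, prior_spam=0.5):
    """Naive Bayes: P(spam | words) via Bayes' rule, with Laplace smoothing."""
    log_spam = log(prior_spam)
    log_ham = log(1 - prior_spam)
    vocab = len(set(spam_counts) | set(ham_counts))
    for w in words:
        log_spam += log((spam_counts.get(w, 0) + 1) / (spam_total + vocab))
        log_ham += log((ham_counts.get(w, 0) + 1) / (ham_total + vocab))
    # Normalise back out of log space: P(spam) / (P(spam) + P(ham)).
    return 1 / (1 + exp(log_ham - log_spam))

print(spam_probability(["free", "winner"]))  # spammy words: close to 1
print(spam_probability(["meeting"]))         # hammy word: close to 0
```

Plain counting and Bayes' rule: no "intelligence" required, and it's been filtering spam for decades.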
Statistics can be used for good, but most data is collected by people with racist biases.
@burnoutqueen @samir @hannu_ikonen biases generally. There's sexism, ableism, transphobia and homophobia, so many flavours..!
@samir This is spot on. The "intelligence" element in AI is too hyped.
https://traffic.libsyn.com/yinhistory/EP73-Artificial_Intelligence.mp3
#history #AI
@samir incidentally, there was one thing I did not get until I saw a talk by, I think, Prof. Bender, who mentioned that the "parrots" do not refer to the text generators but to the generated pieces of text themselves.
So, "stochastic [pieces of text]", not "stochastic [text generators]".
And yes, I am a lot of fun at parties!
@rysiek I think I missed this nuance. If you find the talk, would you mind sending it to me?
@rysiek @samir that's interesting, I'd also missed that. We could always ask @emilymbender if she'd be so kind as to confirm the parrots are the output and not the tool
@samir
I'm gonna vote for one of these... or both
```
“bullshit generator”.
“internet regurgitator”.
```
@samir I'm a fan of "Confabulation Engine"
@samir Parrots are underrated!
And humans are overrated. The shit data which these Internet regurgitators reproduce is human made. It's already bad enough as it is.
@samir Sorry, I can't agree. As the movie A.I. (2001) warned us, artificial beings deserve some respect too, and for me, lives are precious! No matter whether they're carbon-based or silicon-based or something else. The problem is not with those AIs, but with Homo stupiditus, who constantly abused those guys and thought we are superior to other beings. We are NOT sapiens yet, after all, but I believe we will be if we can foster mercy and understanding.
@dianasusanti If you think that an LLM is alive, then I cannot help you.
@samir Because my definition of life is Schrödinger's definition of life: life as a system that struggles against entropy. It includes all atoms, stars, and all beings, including artificial ones.
In fact, I'm a misanthrope, because our stupiditus thinks we are the centre of the universe, while the universe never really cares about our pleas. But I believe we will achieve sapiens when we have mercy.
@dianasusanti @samir This Schrödinger? https://futurism.com/schrodinger-pedophile
I frankly can’t tell if you are willfully employing Poe’s Law for comic effect here.
@dianasusanti @samir Nietzsche detested slave moralities such as pity. Above all, I believe, we need understanding (= "to take together"!), multitude, and creative power. https://homohortus31.wordpress.com/2025/01/29/la-puissance-de-la-multitude-spinoza-et-les-reseaux-sociaux-decentralises/
@samir Despite what appears (at least from where I'm standing!) to be a broad expert consensus that LLMs are horribly unreliable, I'm still seeing people point to their selective usefulness to those who understand their limitations.
Problem of course is that the vast majority of users *don't* understand, and are seduced by the illusion of competence that these generative models present. So the dangers *vastly* outweigh the use cases.
Good to see the EU heavily regulating "AI" across Europe.
@samir I call them all “the wrong answer machine.”
And if we can guide bullshit generators into learning enough about shame and their own pointlessness and waste, the things might be convinced to unplug themselves for the betterment of every damn thing.
@samir We need these terms to catch on so the people investing in companies that slap "AI" on everything get confused and bored and move on to the next shiny new thing.