Not sure why this dross, dated Dec 1, seems to be circulating now (and why it didn't cross my feed a month ago), but wow what a terrible essay.
https://bigthink.com/the-present/the-rise-of-ai-denialism/
A few comments, in a short thread >>
First, it displays a certain kind of intellectual laziness of a type I've seen before: Arguing against "pundits", "influencers" and "voices", without naming a single person whose specific arguments the author is arguing against.
>>
This allows Rosenberg to caricature the position he is refuting while making it hard for the reader to dispel the caricature, since the original is not pointed to.
>>
Second, he cites Ayn fucking Rand with (apparently) a straight face.
>>
He also conflates the form of language (and art) with the actual thing: "produce content" as if output for which no one has accountability (and which represents no one's communicative intent) has any value in the world.
>>
Experience is central -- no art has any "qualitative value" without experience. Now, people can attribute meaning to synthetic images, but that is also an experience. But as UW's Gabriel Solis once put it so well: writing, art, performance -- these are ways of being human *together*.
>>
The words, the pixels, the sound waves aren't the art. The art is in the experience of the artist and the audience, together.
>>
Meanwhile, Rosenberg also consistently displaces accountability: "AI" hasn't done anything. Companies have amassed large collections of stolen art and used them to produce systems with which they and others can create synthetic images.
>>
Other companies, who would have previously hired artists, are taking advantage of cheaply produced synthetic images which are only cheap because they are based on stolen art + heavily subsidized by venture capitalists sustaining this bubble.
>>
And also unwittingly subsidized by ordinary people paying higher electricity prices when data centers move in.
>>
Meanwhile, note the logical leaps here: People creating art based on their experience of others' art is not equivalent to what happens with the large image models munging pixels together. And what is the evidence that "AI" systems will have "sparks of inspiration"?
>>
And finally, this argument about "denial" is really an argument for defeatism in everyone else (Rosenberg himself seems to be welcoming the future he paints).
>>
If Rosenberg had actually cited the people he's arguing against, and made it possible to click through and see their words, I'm pretty sure what you'd find is not denialism, but resistance.
It is important to remember that the future is not yet written. The arts, journalism, science, education, medicine, and our chances to be human together are all worth fighting for --- and not surrendering to the maw of Big Tech.
/fin
"This is not an AI bubble. This is real."
Rosenberg has drunk the Kool-Aid.
He thinks hallucinations are reality.
He is going to have a bad trip.
@emilymbender It's a bit of a desperate article, isn't it?
The Ayn Rand... whooo boy, says it all really.
Who are these articles written for, I wonder.
@emilymbender Anyone who uses Ayn (asshole) Rand as a justification for anything, should be shot out of a cannon into an alligator-infested swamp.
@c_merriweather @emilymbender But, I regularly use Ayn Rand as a justification for why you should not listen to Ayn Rand on most things.
@gcvsa @emilymbender Negative justification is different. It's all the techbro adulation that makes me want to load them in the cannon.
@c_merriweather @emilymbender I literally just wrote a thing the other day that the reason I don't read Ayn Rand is because she could only *at best* confirm that which was already said better by better people long before she ever picked up a pen.
And at worst…well, I don't think I really need to answer that, do I? We see the consequences all around us of the people who idolize her.
@emilymbender We call it slop because it’s slop. There is no added value, just discounted production of inferior-quality knockoffs. I’ll make a prediction: people will start to die because organisations have relied on “AI” to build critical systems without sufficient checking and testing by humans.
@seb321 @emilymbender I think there's an argument to be made that this has already occurred with the DOGE attack on US govt. They did that under the wise guidance of ChatGPT, right? The starkest example is USAID, but domestic food and housing insecurity are on the rise, too, as a direct result of dipshits handing over decision-making to LLMs.
@emilymbender thanks for the thread. The idea that AI may have an inner motivation is - among other claims - very strange. And it fits in with so many other misrepresentations of what AI is by other AI disciples, like Sam Altman. The claims all became more and more exaggerated over time, while falling short of the promises. So it seems to me that they are all desperate to keep the scheme going in the hope it never fails.
@prefec2
The evidence, such as it may exist, for self-motivation in AI finds potential for acts of self-preservation that are disquieting at best and inconsistent with pretexts that guardrails are in place.
@emilymbender as offensive as mansplaining is in the ICT world, I genuinely do hope you don’t move on from this place. My own visceral aversion to being ‘splained keeps me at times from sticking up my head in many of these exchanges but in many ways I find this a more comfortable place to be than most.
I'm glad you find it more comfortable. My experience is that Mastodon is far 'splainier than anything else.
This seems to be the trade-off in Mastodon being a more conversational place.
Those who are used to just blasting out a message on other platforms can find the engagement wearisome, especially when it’s more like sealioning or ‘splaining.
On the other hand, there are some genuinely interesting conversations and diverse perspectives, and sometimes the sense that you might suggest a different way of thinking.
@AlsoPaisleyCat I have no issue with conversation -- it's being talked at by people who don't have even the curiosity to read the full thread they are responding to.
I'm not sure if it's the instance I'm on but I don't see much mansplaining. I often see people I follow complaining about things but I never see the actual posts, so it's a bit confusing. So either there are good mods on this instance or I'm part of the problem I guess
Sounds like my kind of content though so going to start backreading!
@emilymbender I agree that’s exasperating.
And, in part, that *can* be the fault of Mastodon as not all the replies may be available on a user’s server.
The new feature that flags other replies are available is helping counteract that, but clearly many folks aren’t pushing through to see the full thread following the “original post.”
I know that my admin does a great job of filtering out misogyny and other things that violate our rules. The flip side of that is that I know that there’s a lot of it that I don’t see unless I actively seek the “original post” and look to the full thread when it's clear that someone is reacting to aggressions.
@emilymbender We're being forced to choose between boosterism and doomerism, but those are both pro-hype positions.
In reality, we've just found a way to scale a technology from the 1950s. I choose realism.
@emilymbender
Yes! The future is in-person. The future is human. The future is crafted. The future is community. The future is potlucks. The future is nature. The future is soulful. The future is messy.
There's no reason to accept a soulless, online-only future. And there's certainly no way human consciousness will ever be digitised - we are our bodies. No computer, no matter how complex, will ever be human. Only humans can be human.
@emilymbender Being scolded about "wishful thinking" by these delusional shit-heels is sending my goddamn blood pressure through the stratosphere
@emilymbender Ugh, yes. I hear this shite in my line of work more and more now e.g. it should be part of our professional training because 'it's going to be an essential part of [our field's] working life'.
And I guess it will be if we roll over and shoe-horn it into the curriculum, yes
@emilymbender I've noticed this a *lot* (although expanded on less) as a theme with a lot of the pro-AI people I've talked to. I guess people like the idea of historical inevitability, but it's a bit depressing, isn't it?
@emilymbender the "AI" narrators for blogs, articles, and even books really get to me as well.
While bad, at least the Stocking Frames initially hit one sector and a couple adjacent ones.
"AI" is Stocking Frames on both steroids and growth hormones, attempting to wipe out multiple sectors.
Glad to have you and others leading the charge for resistance.
As Marcel Duchamp has so nicely shown already more than a century ago.
@emilymbender Decades of under-funding of arts education. A constant focus on the end result, not the journey that led to it.
As sad as I am about the general ignorance about what makes art art, I can't say I'm surprised.
@emilymbender
Rosenberg seems full of pompous crap; thanks for reading & responding to his crappy article so we don’t have to. Like others in this thread I too wonder who he’s writing for. No one we know, that’s for sure.
@emilymbender "we simply don’t know whether AI will ever experience intentions through an inner sense of self the way humans do" is such a ridiculous sentence.
@emilymbender The part about this way of thinking that offends me as a musician and an artist is that it assumes the purpose of art is to create aesthetically "beautiful" things or output.
The purpose of art is to communicate the perspective of the artist, and machines do not and cannot have perspective or communicate.
I will concede that some people may believe they find value or meaning in machine generated output, but some people find patterns in random strings of numbers, too.
@gcvsa @emilymbender So AI generated "art" is effectively a "something" created by a narcissist.
@the_wub @emilymbender I'm saying that while people may look at a thing and ascribe meaning or intent to it (this tendency to see a "ghost in the machine" may be an unavoidable consequence of human sentience and consciousness), that doesn't make it "art", because it obviously cannot possibly have meaning or intent behind it.
The societal value of art is the communication of meaning, intent, and perspective, not "beauty" or even the skill of the artist.
Communication requires two entities.
When I type, I am limited by the keys on the keyboard and I am biased towards symbols that represent letters, then punctuation and digits. The number of space characters is high. In contrast, if I cat /dev/random to a file, the output will be far more varied (it will contain a roughly even distribution of all possible byte values) and much faster than I can type. If speed and variety are the things to optimise for, I have a machine that can generate such content far faster than any LLM.
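That distribution point can be made concrete. Below is a minimal sketch (my own illustration, not from the post) comparing the byte-value spread of repetitive typed text against OS-provided random bytes, using Python's `os.urandom` as a portable stand-in for reading `/dev/random`:

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; the maximum of 8.0 is
    reached only by a perfectly uniform distribution over all
    256 possible byte values."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# "Typed" text: a small alphabet, dominated by letters and spaces.
typed = b"The quick brown fox jumps over the lazy dog. " * 100

# OS randomness: roughly even over all 256 byte values.
random_bytes = os.urandom(len(typed))

print(f"typed text:   {byte_entropy(typed):.2f} bits/byte")
print(f"random bytes: {byte_entropy(random_bytes):.2f} bits/byte")
```

The typed sample lands well below the ceiling (it uses only a few dozen distinct byte values), while the random sample comes close to the full 8 bits per byte. Maximizing variety is trivially easy for a machine, which is exactly why variety alone is a poor proxy for value.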
@emilymbender Good grief
@emilymbender I was trying to see his perspective until that happened. Then I laughed at him and myself and turned off my phone.
@alisynthesis @emilymbender Perfectly normal and healthy reaction to Ayn Rand.
@liebach it is so insane that anyone takes her seriously.
@emilymbender That's enough red flags for a 1950 First of May parade in Moscow.
@emilymbender it is indeed a terrible essay with an ultra-libertarian smell.
And it seems to ignore the elephant in the room - that statistical models aren’t actually capable of reasoning, only of pattern amplification. Sure, that doesn’t mean that they can’t replace jobs at companies whose managers are eager to chase whatever bubble comes along as an excuse to cut workforce. But ignoring that we’re still dealing with stochastic parrots and calling to fully embrace them in our decision-making processes and creation of art is delusional to say the least. Sure, “humans can make mistakes too“ - but, unlike a statistical model, at least they’re accountable for their mistakes and they can try and explain them.
But the article is chillingly right on a particular point:
“These AI assistants (which you will carry around on your phone or wear in your glasses) will be able to observe your emotional reactions throughout your day and build predictive models of your behavior. Unless strictly regulated, which seems increasingly unlikely, this will enable AI assistants and other intelligent agents to influence you with superhuman skill.”
Even a stochastic model trained on billions of data points about human emotions can be overwhelmingly effective at manipulating humans, on levels that even the most sophisticated totalitarian regimes could only dream of.
But then again he seems to passively embrace this statement without providing any solutions to the problem of humanity becoming emotionally enslaved to manipulative robots. He just mentions regulation en passant (“it won’t happen”), like any naive fatalist libertarian would do.
Very interesting thread. Thank you.
@emilymbender incredibly weak article, reads almost like a religious document. Kinda indicative of AI zealots.
@emilymbender yes, and is there a more circular argument in favor of superintelligence than «I’ve been writing about the destabilizing and demoralizing risks of superintelligence for well over a decade, and I also feel overwhelmed by the changes racing toward us.»?
This person has been writing the same fiction for over a decade!
Thanks for revealing this propaganda!
@emilymbender 30 appearances of the word "will" referring to a possible AI future, not reality, but denialists are «wishful thinkers». What a charlatan.
@emilymbender Thank you! What do you think about the claim that "It is very likely that AI systems will soon be able to “read you” more accurately than any person could"?
People tell these systems everything.
@kaffeeringe On the one hand, "emotion recognition" is misframed from the start --- the "best" one could hope to do is to match how a person (ideally from the same cultural background) observing the subject would understand a facial expression.
On the other hand, companies will try anything to take advantage of emotionally vulnerable people.
>>
@kaffeeringe I think it was in the book _Careless People_ that I read about Facebook specifically targeting ads to teen girls who had just deleted a selfie, preying on hypothesized (and probably often right) moments of low self-esteem.
@emilymbender Thank you. I read the article when it was published. And I disagreed with most of it. This part seemed to have a point. But you are right: it's not real reading of people but again mimicking and 50 shades of sycophancy.
@emilymbender Big Think, they say?