Neither. They're unwilling to concede he's run out of the Kremlin and that the chaos and damage are the only purpose. The only reason he backs down on any of it is that he can't afford not to, so he's doing the usual brinksmanship, instructed by whoever's telling him to axe those obscure aviation safety committees (someone has detailed info), and probably hoping he can flee to Moscow at some point.
I don't think he'll be let off the hook, though. He's tasked to ruin us well below 'status quo', even for people diligently not paying attention.
That's an interesting take, AND it is enforceable.
It's very difficult to serve immense numbers of people with social media. Scaling takes resources and technical expertise and lots of energy. You could treat it like trying to catch pot growers by watching their electric bills. I don't think there's a way around that, technically: to do the monoculture (and control it) absolutely requires intention, resources and concentration of effort in an obvious way.
Yeah, you can secretly administer multiple internet social communities and pretend it's not you, but that already happens too, and it's less effective than the monoculture, plus if you're doing that it smells funny, makes people ask 'why are we being governed to think this way through secret avenues' which is something people strangely don't ask about the monoculture.
Whether you can get society to sign on to this 'central control can get this big and no bigger' is much like asserting 'rich men can get this rich and no richer'.
The 'haha! ha!' from Genesis's 'Mama' is quite literally taken out of 'The Message' by Grandmaster Flash and the Furious Five (the first rap hit song): I just think it's neat Genesis were listening to that. They don't seem like they'd be listening to rap and yet it's well documented in their own words that they'd not only heard it, but were that into it that they kept sticking that riff in 'Mama' and it stuck.
I'm glad you mentioned Genesis because they're the subject of one of my favorite samples. OutKast's famous track "SpottieOttieDopaliscious" contains a sample from Genesis's "Dancing with the Moonlit Knight" and upon learning this, I just thought it was so cool that OutKast listened to Peter-Gabriel-era Genesis when Genesis was very much a progressive rock band rather than the pop powerhouse most know them as today.
I can't disagree more strongly. If you're a specialized person with expertise and ability to perform outside the knowledge of all possible interns because you're developing something novel and not covered by standard materials, you'll be hard to compete with using AI because AI will direct people to standardized methods.
Granted, if you do a specialized task that is taught in schools and that anyone can do, that's trouble. But that's not tactics either, that's clock-punching. That's replaceable. You can talk 'strategy' all you like but if you're only exploiting the AI's ability to do what people already know, that's another box to be stuck in.
I think we're in agreement then, I'm saying the specialists of this stripe:
> if you do a specialized task that is taught in schools
are in trouble, and so are you.
> perform outside the knowledge of all possible interns
This is the strategic level thinking I'm referring to. Frankly this role might not be safe either, but if that's the case then we're probably headed for fully automated luxury communism no matter what we do.
If and only if the LLM is able to bring the novel, unexpected connection into itself and see whether it forms other consistent networks that lead to newly common associations and paths.
A lot of us have had that experience. We use that ability to distinguish between 'genius thinkers' and 'kid overdosing on DMT'. It's not the ability to turn up the weird connections and go 'ooooh sparkly', it's whether you can build new associations that prove to be structurally sound.
If that turns out to be something self-modifying large models (not necessarily 'language' models!) can do, that'll be important indeed. I don't see fiddling with the 'temperature' as the same thing, that's more like the DMT analogy.
You can make the static model take a trip all you like, but if nothing changes nothing changes.
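To make the distinction concrete, here's a minimal sketch (plain Python, illustrative names only, not any particular framework's API) of what 'temperature' actually does: it rescales a fixed set of logits before sampling, flattening or sharpening the output distribution without altering a single weight in the model.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert fixed logits into sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                        # frozen: nothing here ever changes
cool = softmax_with_temperature(logits, 0.5)    # sharper, more predictable
hot  = softmax_with_temperature(logits, 2.0)    # flatter, more 'trippy'
```

Raising the temperature makes rare tokens more likely, which feels like novelty, but the logits, and the associations they encode, are exactly as static afterward as before.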
Along these lines one model that might help is to consider LLMs 'wikipedia of all possible correct articles'. Start with Wikipedia and assume (already a tricky proposition!) that it's perfectly correct. Then, begin resynthesizing articles based on what's already there. Do your made-up articles have correctness?
I'm going to guess that sometimes they will: driven onto areas where there's no existing article, some of the time you'll get made-up stuff that follows the existing shapes of correct articles and produces articles that upon investigation will turn out to be correct. You'll also reproduce existing articles: in the world of creating art, you're just ripping them off, but in the world of Wikipedia articles you're repeating a correct thing (or the closest facsimile that process can produce).
When you get into articles on exceptions or new discoveries, there's trouble. It can't resynthesize the new thing: the 'tokens' aren't there to represent it. The reality is the hallucination, but an unreachable one.
So the LLMs can be great at fooling people by presenting 'new' responses that fall into recognized patterns, because they're a machine for doing exactly that, and the Turing Test is good at tracking how that goes. But people have a tendency, even when reading preprogrammed words from a simple algorithm (think 'Eliza'), to believe they're confronting an intelligence, a person.
They're going to be historically bad at spotting Holmes-like clues that their expected 'pattern' is awry. The circumstantial evidence of a trout in the milk might lead a human to conclude the milk is adulterated with water as a nefarious scheme, but to an LLM that's a hallucination on par with a stone in the milk: it's going to have a hell of a time 'jumping' to a consistent but very uncommon interpretation, and if it does get there it'll constantly be gaslighting itself and offering other explanations than the truth.
You're assuming it would make a difference: they're already just hoovering up 'usual stuff' and splicing it all sorts of ways. For these purposes there's no difference between 'deliberately incorrect' and 'just not very good at its job'.
No point in taking effort to recreate what's already out there as authentic not-goodery :)
Between Microsoft and Google, my existence AND presence as a community open source developer is being scraped and stolen.
I've been trying to write a body of audio code that sounds better than the stuff we got used to in the DAW era, doing things like dithering the mantissa of floating-point words, just experimental stuff ignoring the rules. Never mind if it works: I can think it does, but my objection holds whether it does or not.
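For readers wondering what 'dithering the mantissa' might even mean, here's a rough sketch of the general idea, emphatically not the author's actual code: all names are illustrative and NumPy stands in for a real DSP buss. When truncating a double-precision sample down to float32, add TPDF noise scaled to the float32 ULP at that sample's current magnitude, so the dither tracks the floating-point step size rather than a fixed integer quantum.

```python
import numpy as np

def mantissa_dither(sample, rng):
    """Sketch: truncate a double to float32 with TPDF dither scaled to the
    float32 ULP (mantissa step) at this sample's magnitude. Illustrative only."""
    # spacing between adjacent representable float32 values at this magnitude
    ulp = float(np.spacing(np.float32(abs(sample))))
    # sum of two uniforms in [-0.5, 0.5): triangular dither spanning +/- 1 ULP
    tpdf = (rng.random() - 0.5) + (rng.random() - 0.5)
    return np.float32(sample + tpdf * ulp)

rng = np.random.default_rng(1)
dithered = [mantissa_dither(0.333333333333333, rng) for _ in range(8)]
```

Because the ULP scales with the exponent, the noise floor follows the signal level, which is one way floating-point dither differs from the fixed-point dither the DAW era standardized on.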
Firstly, if you rip my stuff halfway it's pointless: without the coordinated intention towards specific goals not corresponding with normally practiced DSP, it's useless. LLMs are not going to 'get' the intention behind what I'm doing; they'll blend it with the very same code I'm a reaction against, the code that greatly outnumbers my own contributions. So even if you ask one to rip me off, it tries to produce a synthesis with what I'm actively avoiding, resulting in a fantasy or parody of what I'm trying to make.
Secondly, suppose it became possible to make it hallucinate IN the relevant style, perhaps by training exclusively on my output, so it can spin off variations. That's not so far-fetched: _I_ do that. But where'd the style come from, that you'd spend effort tuning the LLM to? Does it initiate this on its own? Would you let it 'hallucinate' in that direction in the belief that maybe it was on to something? No, it doesn't look like that's a thing. When I've played with LLMs (I have a Mac Studio set up with enough RAM to do that), I've been trying to explore what the thing might do outside of expectation, and it's hard to get anything interesting that doesn't turn out to be a rip from something I didn't know about, but it was familiar with. Not great to go 'oh hey I made it innovate!' when you're mistakenly ripping off an unknown human's efforts. I've tried to explore what you might call 'native hallucination', stuff more inherent to collective humanity than to an individual, and I'm not seeing much facility with that.
Not that people are even looking for that!
And lastly, as a human trying to explore an unusual position in audio DSP code with many years of practice attempting these things and sharing them with the world around me, only to have Microsoft try to reduce me to a nutrient slurry that would add a piquant flavor to 'writing code for people', I turn around and find Google, through YouTube, repeatedly offering to speak FOR me in response to my YouTube commenters. I'm sure other people have seen this: probably depends on how interactive you are with your community. YouTube clearly trains a custom LLM on my comment responses to my viewers, that being text they have access to (doubtless adding my very verbose video footnotes), to the point that they're regularly offering to BE ME and save me the trouble.
Including technical explanations and helpful suggestions of how to use my stuff, that's not infrequently lies and bizarro world interpretations of what's going on, plus encouraging or self-congratulatory remarks that seem partly drawn from known best practices for being an empty hype beast competing to win the algorithm.
I'm not sure whether I prefer this, or the supposed promise of the machines.
If it can't be any better than this, I can keep working as I am, have my intentionality and a recognizable consistent sound and style, and be full of sass and contempt for the machines, and that'll remain impossible for that world to match (whether they want to is another question… but purely in marketing terms, yes they'll want to because it'll be a distinct area to conquer once the normal stuff is all a gray paste)
If it follows the path of the YouTube suggestions, there will simply be more noise out there, driven by people trying to piggyback off the mindshare of an isolated human doing a recognizable and distinct thing for most of his finite lifetime, with greater and greater volume of hollow mimicry of that person INCLUDING mimicry of his communications and interpersonal tone, the better to shunt attention and literal money to, not the LLMs doing the mimicking, but a third party working essentially in marketing, trying to split off a market segment they've identified as not only relevant, but ripe for plucking because the audience self-identifies as eager to consume the output of something that's not usual and normal.
(I should learn French: that rant is structurally identical to an endlessly digressive French expostulation)
Today I'm doing a livestream, coding with a small audience as I try for the fourth straight day to do a particular sort of DSP (decrackling) that's previously best served by some very expensive proprietary software costing over two thousand dollars for a license. Ideally I can get some of the results while also being able to honor my intentions for preserving the aspects of the audio I value (which I think can be compromised by such invasive DSP). That's because my intention will include this preservation, these iconoclastic details I think important, the trade-offs I think are right.
Meanwhile crap is trained on my work so that a guy who wants money can harness rainforests worth of wasted electrical energy to make programs that don't even work, and a pretend scientist guru persona who can't talk coherently but can and will tell you that he is "a real audio hero who's worked for many years to give you amazing free plugins that really defy all the horrible rules that are ruining music"!
Because this stuff can't pay attention, but it can draw all the wrong conclusions from your tone.
And if you question your own work and learn and grow from your missteps to have greater confidence in your learned synthesis of knowledge, it can't do that either, but it can simultaneously bluster with your confidence and also add 'but who knows maybe I'm totally wrong lol!'
And both are forms of lies, as it has neither confidence nor self-doubt.
I'm going on for longer than the original article. Sorry.