In the context of gambling, random numbers or PRNGs can't have any unknown frequencies or tendencies. There can't be any doubt as to whether a number could be distorted or hallucinated. A pseudorandom number that might or might not be from some algorithm picked by GPT is wayyyy worse than a Mersenne Twister, because it's open to distortion. Worse, there's no paper trail. MT is not the way to run a casino, or at least not sufficient, but at least you know it's pseudorandom based on a seed. With GPT you cannot know that, which means it doesn't fit the definition of "random" in any way. And if you find yourself watching a player get blackjack 10 times in a row at $2k per bet, you will ask yourself where those numbers came from.
I think you're missing the point. Current incarnations of GPT can do tool calling, so why shouldn't they be able to call a CSPRNG if they think they'll need a genuinely random number?
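For instance, a minimal sketch of what the host side of such a tool could look like (the tool name and wiring here are made up, not any vendor's actual API; Python's secrets module wraps the OS CSPRNG):

    import secrets

    # Hypothetical "tool" the host application exposes to the model.
    # The model emits a tool call, the host executes this function, and
    # the result is fed back into the context - so the number comes from
    # the OS CSPRNG (e.g. /dev/urandom), not from token sampling.
    def draw_random_int(upper: int) -> int:
        return secrets.randbelow(upper)

    print(draw_random_int(52))  # e.g. a card index, 0-51

The model never generates the digits itself; it only decides to call the tool and then quotes the result back.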
I ran a casino and wrote a bot framework that, with a user's permission, attempted to clone their betting strategy based on their hand history (mainly how they bet as a ratio to the pot in a similar blind-odds situation, relative to the aggressiveness of players before and after), and I let the players play against their own bots. It was fun to watch. Oftentimes the players would lose against their bot versions for a while, but ultimately the bot tended to go on tilt, because it couldn't moderate for aggressive behavior around it.
None of that was deterministic, and the hardest part was writing efficient Monte Carlo simulations that could weight each situation and average out a betting strategy close to the one in the player's hand history, but throw in randomness in a band consistent with the player's own randomness in a given situation.
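Very roughly, the idea looks something like this (a toy sketch with made-up names, far simpler than the real framework):

    import random
    import statistics

    # Toy version of cloning a betting tendency: take the player's
    # historical bet-to-pot ratios from comparable situations and draw a
    # new bet inside a band consistent with the player's own variance.
    def sample_bet(historical_ratios: list[float], pot: float) -> float:
        mean = statistics.mean(historical_ratios)
        spread = statistics.pstdev(historical_ratios)
        ratio = random.gauss(mean, spread)  # randomness in the player's own band
        return max(0.0, ratio) * pot

    # Made-up hand-history ratios for one situation bucket.
    print(sample_bet([0.5, 0.75, 0.6, 1.0], pot=120.0))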
And none of it needed to touch on game theory. If it had, it would've been much better. LLMs would have no hope of conceptualizing any of that.
It's not. The LLM itself only calculates the probabilities of the next token. Assuming no race conditions in the implementation, this is completely deterministic. The popular LLM inference engine llama.cpp is deterministic. It's the job of the sampler to actually select a token using those probabilities. It can introduce pseudo-randomness if configured to, and in most cases it is configured that way, but there's no requirement to do so, e.g. it could instead always pick the most probable token.
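A toy illustration of that split between model and sampler (not any engine's actual code; temperature 0 below is greedy decoding):

    import random

    # The model produces next-token probabilities; the *sampler* turns
    # them into a token. Greedy selection is fully deterministic, while
    # temperature > 0 is where the pseudo-randomness comes in.
    def pick_token(probs: dict[str, float], temperature: float) -> str:
        if temperature == 0:
            return max(probs, key=probs.get)  # always the top token
        # Sharpen (T < 1) or flatten (T > 1) the distribution, then sample.
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(list(probs), weights=weights, k=1)[0]

    probs = {"the": 0.6, "a": 0.3, "an": 0.1}
    print(pick_token(probs, 0.0))  # deterministic: always "the"
    print(pick_token(probs, 0.8))  # usually "the", sometimes not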
This is a poor conceptualization of how LLMs work. No implementations of models you’re talking to today are just raw autoregressive predictors, taking the most likely next token. Most are presented with a variety of potential options and choose from the most likely set. A repeated hand and flop would not be played exactly the same in many cases (but a 27o would have a higher likelihood of being played the same way).
>No implementations of models you’re talking to today are just raw autoregressive predictors, taking the most likely next token.
Set the temperature to zero and that's exactly what you get. The point is the randomness is something applied externally, not a "core concept" for the LLM.
The number of cases where people choose a temperature of 0 is negligible, though. The reason I chose the wording “implementations of models you’re talking to today” was because in reality this is almost never where people land, and certainly not what any popular commercial surfaces are using (Claude Code, any LLM chat interface).
And regardless, turning this into a system that has some notion of strategic consistency or contextual steering seems like a remarkably easy problem. Treating it as one API call in, one deterministic and constrained choice out is wrong.
> Set the temperature to zero and that's exactly what you get.
In some NN implementations, randomness is actually pretty important to keep gradient descent from getting stuck in local minima/maxima. Is that true for LLMs, or is it not something that applies at all?
All of life is an arms race. Look at fungi vs. bacteria. All those grasses in the fields and trees in the forests got there by outcompeting other organisms. We're actually the only species which can reason about our resource consumption as a whole, and which has a chance to do something about it. But while we find forests beautiful, they're a blight on grasslands, which are a blight on mosses, which are a blight on plain old rocks.
What's beautiful is complexity, what's ugly is the destruction of complexity. That's why we find the destruction of forests to be repellent. Because we appreciate the more complex over the less complex. Possibly because complexity is the universe's way of observing itself. None of that means that our own complexity is necessarily wicked or irrelevant. It may just be a natural stage in the evolution of a planet. Grassland had 3 billion years to change, and it largely stayed the same. What's a couple thousand years of us blowing shit up, really?
Great points. Thank you. I realized (just after posting) that the wound part is not well-defined. Any abrupt change could be seen as a wound.
But we need to define "progress" as a species. Grasslands, trees and dolphins seem to have defined their progress as better adaptation through organic evolution, which contributed to their ultimate goal of reproduction via survival.
How is the human race defining its progress? Since we are just one of the animal species, the root goal remains reproduction. Instead of waiting for our biological evolution to enhance our survival (and thus reproduction), maybe we are augmenting human abilities with artificial means, which is quicker.
But then the artificial augmentations could become you, replacing whatever your essence was. A weapon in your hand and an AI chip in your head could make you a different beast. We can argue that even without such tools, a human is mostly made up of bacterial colonies dictating human thought and life. But we accepted that as our identity. Now the artificial implements are taking over our identity. This is not natural, and that is what is wicked.
Also, an arms race is not the same as how species out-competed each other. Our arms race, and most of what we call tech progress, is spawned by competition internal to our species, not competition with other species.
The Universe did not favor complexity. The Universe destroys order and moves towards more entropy. Life is something that goes against this. Life probably was required to trap the Sun's energy so that Earth can cool itself.
On a geologic time scale, yes, a couple thousand years is puny. But it also indicates a rapid change. Most rapid changes lead to extinction events.
I don't know if dolphins define their purpose or progress. More likely, we look at them in a state of equilibrium now and think "that looks nice," but at various points over the past hundreds of millions of years, virtually every species experienced drastic shifts within a brief few millennia which also nearly wiped them out. This is not always inter-species conflict. Female dolphins have a spiral-shaped vagina for reasons you can imagine may have resembled an arms race among dolphins. We are clearly in the middle of something like that, which means we're in a poor position to make predictions. We went from something like equilibrium, more or less the same from 1M years ago to 5,000 years ago, to a radically different state. It may very well not last another 5,000 years.
I think maybe we need to see these arms races as short ramps, periods of chaos, which lead either to long plateaus or very quick collapses.
>> The Universe did not favor complexity. The Universe destroys order and moves towards more entropy. Life is something that goes against this. Life probably was required to trap the Sun's energy so that Earth can cool itself.
I think if the Universe were not programmed to generate complexity, there would only be one or two elements. I think the tendency toward entropy is a necessary condition to force complexity and life to evolve. The Universe is slowly trading global energy everywhere for local complexity somewhere. This is how energy is turned into information. If energy at the beginning of the Universe is nearly infinite, then clearly it is cheaper and less valuable to whoever is reading the output than the valuable limited information it can produce (with tons of wasted energy). I believe this because I believe that converting energy to information is not a side-effect of the Universe, but its ultimate purpose.
So yeah, forests are beautiful. Beehives are beautiful. Colonies of fungus are beautiful. Kansas City from the air at night is... well, we shouldn't underrate ourselves.
In my experience with both Copilot and Claude, Claude makes subtler mistakes that are harder to spot, which also gobbles up time. Yes, giving it CLI access is pretty cool and helps with scaffolding things. But unless you know exactly what you want to write, and exactly how it should work, to the degree that you will notice the footguns it can add deep in your structures, I wouldn't recommend anyone use it to build something professional.
This is quite amazing. I'm not anything like a serious C coder and haven't tried ASM. I've written "filesystems" in higher level languages (stuff that imposed a directory structure and metadata on what were just bins of data), so I was just looking at parts of your code at random. I think that triple pointer dir_entry_t*** is where my head exploded. Pretty amazing code, you should be very proud.
Thank you so much!
I also made a high-level filesystem a few years ago, which helped me while I made this one.
I think the main difference is just that you need to work with drivers here for every disk operation.
I don't know. People in the 90s were initially fooled by Eliza, but soon understood that Eliza was a trick. LLMs are a more complex and expensive trick. Maybe it's time to overthrow the Turing Test. Fooling humans isn't necessarily an indicator of intelligence, and it leads down a blind alley: Language is a false proxy for thought.
Consider this. I could walk into a club in Vegas, throw down $10,000 cash for a VIP table, and start throwing around $100 bills. Would that make most people think I'm wealthy? Yes. Am I actually wealthy? No. But clearly the test is the wrong test. All show and no go.
The more I think about this, the more I think the same is true for our own intelligence. Consciousness is a trick, and AI development is lifting the veil of our vanity. I'm not claiming that LLMs are conscious or intelligent or whatever. I'm suggesting that next-token prediction has scaled so well and covers so many use cases that the next couple of breakthroughs will show us how simple intelligence is once you remove the complexity of biological systems from the equation.
All we know about animal consciousness is limited to behaviour, e.g. the subset of the 40 or so "consciousness" definitions which are things like "not asleep" or "responds to environment".
We don't know that there's anything like our rich inner world in the mind of a chimpanzee, let alone a dog, let alone a lobster.
We don't know what test to make in order to determine if any other intelligence, including humans and AI, actually has an inner experience — including by asking, because we can neither be sure if the failure to report one indicates the absence, nor if the ability to report one is more than just mimicking the voices around them.
For the latter, note that many humans with aphantasia only find out that "visualisation" isn't just a metaphor at some point in adulthood, and both before and after this realisation they can still use it as a metaphor without having a mind's eye.
> Language is the baseline to collaboration - not intelligence
Would you describe intercellular chemical signals in multicellular organisms to be "language"?
> We don't know that there's anything like our rich inner world in the mind of a chimpanzee, let alone a dog, let alone a lobster.
If be "we don't know" you mean we cannot prove, then, sure, but then we don't know anything aside from maybe mathematics. We have a lot of evidence that animals similar consciousness as we do. Dolphins (or whales?) have been known to push drowning people to the surface like they do for a calf. Killer whales coordinate in hunting, and have taken an animus to small boats, intentionally trying to capsize it. I've seen squirrels in the back yard fake burying a nut, and moving fallen leaves to hide a burial spot. Any one who has had a dog or a cat knows they get lonely and angry and guilty. A friend of mine had personal troubles and abandoned his house for a while; I went over to take pictures so he could AirBnB it, and their cat saw me in the house and was crying really piteously, because it had just grown out of being a kitten with a bunch of kids around and getting lots of attention, and suddenly its whole world was vanished. A speech pathologist made buttons for her dog that said words when pressed, and the dog put sentences together and even had emotional meltdowns on the level of a young child. Parrots seem to be intelligent, and I've read several reports where they give intelligent responses (such as "I'm afraid" when the owner asked if it wanted to be put in the same room as the cat for company while the owner was away [in this case, the owner seems to be lacking in intelligence for thinking that was a good idea]). There was a story linked her some years back about a zoo-keeper who had her baby die, and signed it to the chimpanzee (or gorilla or some-such) females when it wanted to know why she had been gone, and in response the chimpanzee motioned to with its eye suggesting crying, as if asking if she were grieving.
I probably have some of those details wrong, but I think there definitely is something there that is qualitatively similar to humans, although not on the same level.
> If by "we don't know" you mean we cannot prove, then, sure, but then we don't know anything aside from maybe mathematics.
More than just that: we don't know what the question is that we're trying to ask. We're pre-paradigmatic.
All of the behaviour you list, those can be emulated by an artificial neural network, the first half even by a small ANN that's mis-classifying various things in its environment — should we call such an artificial neural network "conscious"? I don't ask this as a rhetorical device to cast doubt on the conclusion, I genuinely don't know, and my point is that nobody else seems to either.
> We don't know that there's anything like our rich inner world in the mind of a ...
I posit that we should start with a default "this animal experiences the world the same as I do" until proven differently. Doctors used to think human babies could not feel pain. The assumption has always been "this animal is a rock and doesn't experience anything like me, God's divine creation." It was stupid when applied to babies. It is stupid when applied to animals.
Did you know that jumping spiders can spot prey, move out of line of sight, approach said prey outside that specific prey's ability to detect them, and then attack? How could anything do that without a model of the world? MRIs on mice have shown that they plan and experience actions ahead of doing them, just like when you plan to throw a ball or lift something heavy and think through it first. Polar bears will spot walruses, go for a long-ass swim (again, out of sight) and approach from behind the colony to attack. A spider and the apex bear have models of the world and their prey.
Show that the animal doesn't have a rich inner world before defaulting to "it doesn't."
> I posit that we should start with a default "this animal experiences the world the same as I do" until proven differently.
As I don't know, I take the defensive position both ways for different questions.*
Just in case they have an inner world: We should be kind to animals, not eat them, not castrate them (unless their reproductive method appears to be non-consensual), not allow them to be selectively bred for human interest without regard to their own, etc.
I'd say ditto for AI, but in their case, even under the assumption that they have an inner world (which isn't at all certain!), it's not clear what "be kind" even looks like: are LLMs complex enough to have created an inner model of emotion where getting the tokens for "thanks!" has a feeling that is good? Or are all tokens equal, and the only pleasure-analog or pain-analog they ever experienced were training experiences to shift the model weights?
(I'm still going to say "please" to the LLMs even if they have no emotions: they're trained on human responses, and humans give better responses when the counterparty is polite.)
> How could anything do that without a model of the world?
Is "a model of the world" (external) necessarily "a rich inner world" (internal, qualia)? If it can be proven so, then AI must be likewise.
* The case where I say that the defensive position is to say "no" is currently still hypothetical: if someone is dying and wishes to preserve their continuity of consciousness, is it sufficient to scan their brain** and simulate it?
There are some clever tests described in The Language Puzzle on primates that (paraphrasing a 14-hour audiobook, so forgive any mistakes) indicate no primate other than humans and a couple of our immediate predecessors (based on archaeological evidence) has much in the realm of abstract thinking ability using their own communications. A few primates raised and taught forms of human language could not progress very far without any of the facilities of language present in normal two-to-three-year-old development. The book is focused on how humans evolved language, so other species are not covered; there is obvious verbal and gesture-based communication in primates, but it concludes not enough of the components of physiology that enable human language are present (both brain and vocal anatomy).
How do you define verbal language? Many animals emit different sounds that others in their community know how to react to. Some even get quite complex in structure (e.g. dolphins and whales), but I also wouldn't rule out some species of birds, and some primates, to start with. And they can collaborate; elephants, dolphins, and wolves, for example, collaborate and would die without it.
Also, it's completely myopic in ignoring humans who have non-verbal language (e.g. sign language) perfectly capable of cooperation.
TLDR: just because you can’t understand an animal doesn’t mean it lacks the capability you failed to actually define properly.
MW defines verbal as "of, relating to, or consisting of words".
I don't think anyone would argue that animals don't communicate with each other. Some may even have language we can't interpret, which may consist of something like words.
The question is why we would model an AGI after verbal language, as opposed to modeling it after the native intelligence of all life, which eventually leads to communication as a result. Language and communication are a side-effect of intelligence, a compounding interest on intelligence, but not intelligence itself, any more than a map is the terrain.
> The question is why we would model an AGI after verbal language as opposed to modeling it after the native intelligence of all life which eventually leads to communication as a result.
Because verbal/written language is an abstracted/compressed representation of reality, so it's relatively cheap to process (a high-level natural-language description of an apple takes far fewer bytes to represent than a photo or 3D model of the same apple). Also because there are massive digitized publicly-available collections of language that are easy to train on (the web, libraries of digitized books, etc).
I'm just answering your question here, not implying that language processing is the path towards AGI (I personally think it could play a part, but can't be anything close to the whole picture).
This is one of the last bastions of anthropocentric thinking. I hope this will change in this century. I believe even plants are capable of communication. Everything that changes over time or space can be a signal. And most organisms can generate or detect signals. Which means they do communicate. The term “language” has traditionally been defined from an anthropocentric perspective. Like many other definitions about the intellect (consciousness, reasoning etc.).
That’s like a bird saying planes can’t fly because they don’t flap their wings.
LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.
> LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.
You seem to fundamentally misunderstand what llms are and how they work, honestly.
Remove the human language from the model and you end up with nothing. That's the whole issue.
Your comment would only make sense if we had real artificial intelligence, but LLMs are quite literally working by predicting the next token - which works incredibly well for a facsimile of intelligence because there is an incredible amount of written content on the Internet which was written by intelligent people.
A human child not taught literally anything can see some interesting item, extend a hand to it, touch it, interact with it - all decided by the child. Heck, even my cat can see a new toy, go to it and play with it, without any teaching.
LLMs can't initiate any task on their own, because they lack thinking/intelligence part.
This to me overstretches the definition of teaching. No, a human baby is not "taught" language, it learns it independently by taking cues from its environment. A child absolutely comes with an innate ability to recognize human sound and the capability to reproduce it.
By the time you get to active "teaching", the child has already learned language -- otherwise we'd have a chicken-and-egg problem, since we use language to teach language.
Transformers are also very powerful for non-language data: for example, time series, sequences like DNA, or audio (also outside of speech and music).
Of course the vast amount of human text is key to training a typical LLM, but it is not the only use.
>but LLMs are quite literally working by predicting the next token - which works incredibly well for a facsimile of intelligence because there is an incredible amount of written content on the Internet which was written by intelligent people
An additional facet nobody ever seems to mention:
Human language is structured, and seems to follow similar base rules everywhere.
That is a huge boon to any statistical model trying to approximate it. That's why simpler forms of language generation are even possible. It's also a large part of why LLMs are able to do some code, but regularly fuck up the meaning when you aren't paying attention. The "shape" of code and language is really simple.
How do we know animal language isn't structured in similar ways? For example, we now know that "dark" birds are often colorful, just in the UV spectrum they can see and we can't. Similarly, there's evidence dolphin and whale speech may be structured; we just don't know the base rules. Their speech is modulated at such rapid frequency that our computers until maybe recently would struggle to even record and process that data in real time (probably still do).
Just because we don’t understand something doesn’t mean there’s nothing there.
Also, I'm not so sure human language is structured the same way globally. There are languages quite far from each other, and the similarities tend to be grouped by where the languages originated. E.g. Spanish and French might share similar rules, but those similarities are not shared with Hungarian or Chinese. There's cross-pollination of course, but language is old and humans all come from a single location, so it's not surprising for there to be some kinds of links, but even a few hundred thousand years of evolution have diverged the rules significantly.
Well, you can explain to a plant in your room that E=mc² in a couple of sentences, but a plant can't explain to you how it feels the world.
If cows were eating grass and conceptualising what is infinity, and what is her role in the universe, and how she was born, and what would happen after she is dead... we would see a lot of jumpy cows out there.
This is exactly what I mean by anthropocentric thinking. Plants talk plant things and cows talk about cow issues. Maybe there are alien cows on some planet with larger brains that can do advanced physics in their moo language. Or some giant network of alien fungi discussing their existential crisis. Maybe ants talk about ant politics by moving their antennae. Maybe they vote and make decisions. Or bees talk about elaborate honey economics by modulating their buzz. Or maybe plants tell bees the best time for picking pollen by changing their colors and smell.
Words, after all, are just arbitrary ink shapes on paper. Or vibrations in air. Not fundamentally different from any other signal. Meaning is added only by the human brain.
I'm also attracted to the idea of reducing rule sets to simple algorithms and axioms, in every case you can. But I'm skeptical that consciousness can be reduced that way. I think if it can be, we'll see it in the distillation and quantizing of smaller and smaller scale models converging on similar adaptations, as opposed to the need for greater scale (at least in inference). I still believe language processing is the wrong task to train to that point. I'd like to see AIs that model thought process, logic, tool construction, real-world tasks without language. Maybe even those that model vocal cords and neurological processes instead of phonemes. Most animals don't use language, and as a result we can't ask if they're conscious, but they probably are. Navigating and manipulating the physical world from the cellular level up to swinging from trees is far more complex - language is a very late invention, and is not in and of itself intelligence - it may just be a lagging indicator.
To the extent that we vainly consider ourselves intelligent for our linguistic abilities, sure. But this underrates the other types of spatial and procedural reasoning that humans possess, or even the type that spiders possess.
That's not how I view it. Consciousness is the result of various feedback structures in the brain, similar to how self-awareness stems from the actuator-sensor feedback loop of the interaction between the nervous system and the skeletomuscular system. Neither of those two definitions have anything to do with language ability -- and it bothers me that many people are so eager to reduce consciousness to programmed language responses only.
The validity of the Turing test doesn't change the fact that the bots are better than humans at many tasks that we would consider intellectual challenges.
I am not a good writer or artist, yet I can tell that AI-generated pictures or prose feel 'off' compared to stuff that humans make. People who are professional writers and artists can point out in a lot of cases the issues with structure, execution and composition that these images have, or maybe even if they can't, they still have a nose for subtle issues and can improve on the result.
>I could walk into a club in Vegas, throw down $10,000 cash for a VIP table, and start throwing around $100 bills.
If you can withdraw $10,000 cash at all to dispose as you please (including for this 'trick' game) then my friend you are wealthy from the perspective of the vast majority of humans living on the planet.
And if you balk at doing this, maybe because you cannot actually withdraw that much, or maybe because it is badly needed for something else, then you are not actually capable of performing the test now, are you?
That's really not true. Lots of people in America can have $0 in net worth and get a credit card, use that to buy some jewelry and then sell it, and have $10k in cash. The fact that the trick only works once proves that it's a trick.
You're not making much sense. Like the other user, you are hinging on non-transferrable details of your analogy, which is not the actual reality of the situation.
You've invented a story where the user can pass the test by only doing this once and hinged your point on that, but that's just that - a story.
All of our tests and benchmarks account for repeatability. The machine in question has no problem replicating its results on whatever test, so it's a moot point.
The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick. If you don't account for either of those resources running out, you're not testing whether its feats are truly repeatable.
>The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick.
Okay? And you, presumably a human, can replicate the trick of fooling me into thinking you're conscious as long as there is a sufficient supply of food to keep you running. So what's your point? With each comment, you make less sense. Sorry to tell you, but there is no trick.
The difference is that the human can and did find its own food for literally ages. That's already a very, very important difference. And while we cannot really define what's conscious, it's a bit easier (still with some edge cases) to define what is alive. And probably what is alive has some degree of consciousness.
An LLM definitely does not.
One of the "barriers" to me is that (AFAIK) an LLM/agent/whatever doesn't operate without you hitting the equivalent of an on switch.
It does not think idle thoughts while it's not being asked questions. It's not ruminating over its past responses after having replied. It's just off until the next prompt.
Side note: whatever future we get where LLMs get their own food is probably not one I want a part of. I've seen the movies.
You only exist because you were forced to be birthed externally? Everything has a beginning.
In fact, what is artificial is stopping the generation of an LLM when it reaches a 'stop token'.
A more natural barrier is the attention size, but with 2 million tokens, LLMs can think for a long time without losing any context. And you can take over with memory tools for longer horizon tasks.
>All of our tests and benchmarks account for repeatability.
What does repeatability have to do with intelligence? If I ask a 6 year old "Is 1+1=2" I don't change my estimation of their intelligence the 400th time they answer correctly.
>The machine in question has no problem replicating its results on whatever test
What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.
>What does repeatability have to do with intelligence? If I ask a 6 year old "Is 1+1=2" I don't change my estimation of their intelligence the 400th time they answer correctly.
If your 6 year old can only answer correctly a few times out of that 400 and you don't change your estimation of their understanding of arithmetic, then I sure hope you are not a teacher.
>What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.
No LLM will score 80% on benchmark x today then 50% on the same 2 days later. That doesn't happen, so the convoluted setup OP had is meaningless. LLMs do not 'fail' on consistency or generality.
>Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?
If you still need a rich person to pass the test, then the test is working as intended. Person A is rich or person A is backed by a rich sponsor is not a material difference for the test. You are hinging too much on minute details of the analogy.
In the real world, your riches can be sponsored by someone else, but for whatever intelligence task we envision, if the machine is taking it, then the machine is taking it.
>Couldn’t he give away his last dollar but pretend he’s just going to another casino?
Again, if you have $10,000 you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.
>> Again, if you have $10,000 you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.
So, most people can't get $1 Trillion to build a machine that fools people into thinking it's intelligent. That's probably also not a trick that will ever be repeated.
>> I should be able to get one up and running for you by the middle of next year
Funny. I agree with your plainspoken analysis of why these things are nowhere near AGI, and of what AGI would be. I even had a long conversation with Claude last week where it told me that no LLM would ever approach AGI (but then it wrote a 4-paragraph-long diatribe entitled "Why I Declare Myself Conscious" in the same conversation). These neural networks are closer to the speechwriting machine in The Penultimate Truth, or the songwriting machine in 1984. As for that latter one, I believe Orwell remarks on how it just recycles the same sentimental tunes and words in different order so that there's always a "new" song all the proles are humming.
Amazon is so completely irresponsible for their marketplace that recently, shopping for a glass oral thermometer (because the digital ones suck), I stumbled on reviews with photos showing products that had no mercury inside but actual blobs of mercury stuck to the tip that goes in your mouth. These were still for sale.
I feel like even 10 years ago, online marketplaces would have taken measures to prevent stuff like this.
From that perspective, all of these services that rate products still place all the onus on the individual consumer. What would be really "luxury" in the modern context would be an online marketplace that vetted every product and whose primary product was trust, as opposed to logistics and convenience. I'd much rather pay $150/yr for a service that vetted its products and took a week to deliver them, than have a bunch of worthless or dangerous junk delivered the next day.
> Amazon is so completely irresponsible for their marketplace that recently, shopping for a glass oral thermometer (because the digital ones suck), I stumbled on reviews with photos showing products that had no mercury inside but actual blobs of mercury stuck to the tip that goes in your mouth. These were still for sale.
I did wonder about how this kind of thing was handled in the UK, and (a) Amazon will happily offer a mercury thermometer for sale and (b) it has been illegal to sell mercury thermometers in the UK since 2009.
The absolute poster child for ubiquitous illegal toxic products though? Disposable vapes.
I'm convinced the major tobacco companies will do well if the government ever manages to crack down on sketchy and illegal vaping products and stores. But this seems very hard to do.
I wondered that as well, but they are. I've started to think there's an organized effort by a government with a lot of state-owned enterprises to actually dispose of toxic waste by shipping it to gullible American consumers. Not that it isn't also poisoning people there.
Well, the Chinese outsourced hog farms (pork production) to the U.S. because the farms are considered too toxic and environmentally destructive for them.
For sure, but I haven't seen one for home use in nearly 40 years. The risk of having mercury in the hands of Joe Schmoe or in his garbage is probably not worth it.
I've thought about that too, but in the end, price always wins - this is why the Amazons and Walmarts of the world have out-competed local small businesses.
The major flaw in your example is that you have a site saying product X is good and trusted, but people will then go look online for a competitor that sells it for cheaper.
This is where capitalism clashes with consumer rights / safety. What should be the case is that all products sold on all stores are safe. That's what consumer safety organizations are for, but it seems like they have lost the battle against the flood of Chinese crap coming in.
At least in Europe, this is mainly because these companies ship for cheap directly to customers. Customs and the like can check a container full of the same USB chargers easily and efficiently, but if that containerful crosses the border as 10,000 individual packages, it's impossible.
Thankfully they're putting the brakes on it, but it took forever.
>The major flaw in your example is that you have a site saying product X is good and trusted, but people will then go look online for a competitor that sells it for cheaper.
Product X is good and trusted, except:
- due to mixed inventory you were sent product Y, which is poison
- product X has a complex supply chain, and it was previously good and trusted, and now it's poison and you had no idea anything changed
- product X had decent sales but not good enough profit margins, so the brand was sold to a company that sells a cheap, dangerous look-alike under the same name
> I stumbled on reviews with photos showing products that had no mercury inside and actual blobs of mercury stuck to the tip that goes in your mouth.
I played with mercury a bit when I was a kid, as did every kid who could - it was COOL! From that I learned: mercury is almost omniphobic. Oil avoids mixing with water. Mercury avoids mixing with, holding on to, and generally touching anything.
So how could a blob of mercury stick to a glass tip???
Amazon is straight up evil at this point. People have pointed out they are selling fake fuses that have most likely gotten people killed, Amazon has done nothing. I am sure the same is occurring across other product categories like your example.
The 'luxury' you are talking about was called Brands, with the idea being that a company's Brand was worth more than the lure of profits/shortcuts that could result in ruining the Brand.
>> The 'luxury' you are talking about was called Brands
I dunno. Branding was my gig for a long time. I think brands were a weak substitute for artisans / bespoke makers who had to personally stand by their work. Once upon a time there was a guy named Levi Strauss who made sturdy jeans, some guy named McDonald who made good hamburgers, a couple guys named Johnson who sold talcum powder. And that guy Nobel who invented new ways to blow up the coyote. If any of their products failed, it was on them. Then branding came along and quality declined, but people paid for inferior products because they had the name and stamp of the founder on them. The notion that brands have to maintain the quality associated with their namesake is the central illusion that trillions of dollars spent on branding seeks to create. It turns out that it's cheaper to prop up the name with advertising than it is with selling quality products.
And that doesn't even touch on brands like DuPont or Chevron, where all the positive connotations are purely from brand marketing built as a shroud around selling mass death.
> I think brands were a weak substitute for artisans / bespoke makers who had to personally stand by their work.
Another way to say that is "companies are too big". When companies become big enough that they don't have to worry about the repercussions of screwing over their customers, they're too big.
Right. Absolutely. But then again, everyone can buy jeans now and you don't have to ride your horse across 500 miles of desert and hitch it up to Levi's store. So no one who orders em online now knows what they were worth then. No one's riding horses around in their underwear anymore.
To be serious: I don't think that overpopulation or delivering better things to more people is really the problem. Big companies are indeed a problem. Along with big governments on the other side. They both rely on rent-seeking methods of extracting value while lowering expectations, rather than providing better services. There needs to be a balance of regulation and innovation, that prevents regulatory capture and prevents monopolies without exploding bureaucracies that hamper small businesses. Small businesses are fantastic drivers of prosperity and creativity. That would be the civic ideal I'd implement if I had any interest in getting into government.
I mean, wasn't the era of bespoke makers the same as the era of traveling snake-oil salesmen who went town to town, disappearing just before the ramifications of their poison "cure-all" became clear? There were tons of scams and grifts back in the day.
Amazon is rather like the Silk Road[0] of old. You're buying cinnamon from some guy in Europe who knows not even the vague direction it came from, let alone what's in it or how it came to be. This could be considered irresponsible today, or it could be considered efficient, depending on your perspective.
I also feel like Amazon should take more responsibility, but then I get angry when my ISP or government "takes responsibility" for online content. What's the difference between Amazon and an ISP? One could argue an ISP is a natural monopoly and therefore should always be a neutral carrier. But maybe Amazon is a natural monopoly too? Could the economy really support ten different Amazons? That would be like having ten different Silk Roads, but there's only one way from Asia to Europe.
It does seem odd to me that Amazon gets a free pass as a common carrier while ISPs seemingly do not. Probably because taking responsibility would affect Amazon's bottom line, while ISPs don't really care.
I don't know that anyone's really asking for protection from their own impulses. Freedom requires protection from other people's impulses. That's where this is all going. A few truly free people, everyone else in a cage.
Rather than framing it as "protection from my own impulses," I think it's more fair to frame it as "protection from teams of professional researchers and engineers and marketers whose entire life's work is fine tuning how to most effectively profit from my impulses"
Well, yeah, but that type of protection isn't compatible with freedom. Neither the freedom to consume nor the freedom to iterate on a product. I don't blame companies for making their products addictive. I smoke cigarettes. I drink. Sometimes I crave a Big Mac. I don't blame them for selling me poison, as long as I recognize it as unhealthy. The best way to protect yourself is by educating yourself to recognize manipulative patterns, and by extension sharing that with other people. We know from drug addiction that simply banning something doesn't work. And an insight from some great druggies like Philip K Dick and Hunter Thompson and Burroughs is that the list of things that can be addictive is endless. If someone made an app that made people chew their nails, or lick an escalator banister, or shock themselves with electricity, people would get addicted to it.
I was hypnotized a few times as a child by a professional hypnotist. When in college I was invited to a "seminar" which turned out to be a cult indoctrination session, I immediately recognized what I was seeing the group leader doing to the audience.
We don't need external protection, we need herd immunity. It's like "give a man a fish" vs. "teach a man to fish".
I like your clarity about personal responsibility, but it might also help to remember that human capacity for self-regulation isn’t uniform. We all grow through developmental stages and carry traits that shape how we handle influence, impulse, and awareness.
The idea that “we just need herd immunity” assumes we’re all equally capable of recognizing manipulation or addiction, but as ericmcer noted, the evidence all around us suggests most of us are not quite there. In many ways, that belief in our individual mastery is part of what Western culture keeps overestimating, as if understanding the trick is enough to undo the conditioning.
There also seems to be a deeper resistance in our western cultures to actually pausing, turning inward, and staying present with what’s happening inside us. We intellectualize awareness instead of living it. Real freedom will begin not with more information or clever systems, but with learning to meet our own impulses directly. To listen, to stay with discomfort, to see what drives us without immediately trying to fix it. Until we're willing to practice that kind of contact with ourselves, we’ll keep playing defense against the surface symptoms.
I agree, and I don't want what I wrote to be read as "people just need more self control." I lack self control, that's why I have addictions and bad habits. I know other people who have even less self control, and some who seem to be immaculate. I don't care to be judged or to judge anyone else. I guess that's why I framed it as herd immunity.
Here's what I mean (from an addict's perspective). What makes me hesitate when I reach for a cigarette or another drink, and decide maybe I should call it a night, is not the knowledge that it's bad for me. It's thinking about friends who have died from lung cancer and cirrhosis. As a whole, as a society, we've become more aware of the effects of certain poisons because we've witnessed the results and drawn the conclusions.
So with screen addiction, we're just starting to witness the results on a generational level. Yes - screen use is a little different because it can have good sides as well. The screens are magical devices that can educate us and improve us, too. Someone addicted to watching their diet and exercise with FitBit has a different psychological problem space from someone addicted to watching people put ants up their nose on TikTok.
I agree with you wholeheartedly that all of these things prevent us from focusing and staying present and dealing with real problems. I'm really just saying that removing any or all of them doesn't address the underlying void which causes people to seek consciousness-altering substances or mass distraction. There is no distraction or game the human mind won't latch onto to avoid reality - that's the curse of knowing you're going to die. We need immunity against the surface thing that's eating us alive right now. But of course we'll keep playing defense against the next thing, because we have no immunity from it.

Changing human nature seems to me like a utopian vision, which has never worked in practice. Yeah, we can romanticize some cultures that seem to be better at managing it. But give them a cell phone, a credit card and an Amazon account and you'll see how long it takes for them to fuck themselves over too. Those that do survive the modern world with things like a Sabbath day of rest, or avoiding technology completely, do so because they have very strong communal practices that ensure individuals have little agency or choice - and they also happen to believe in a divine plane of existence beyond the mortal coil, which changes their calculus when making bad decisions.

I'm not advocating for either thing. I think changing human nature is impossible, and I think heaven is a placebo. Changing individual human behavior to be more present and more self-aware, I think, is possible. But it's an incremental process. First you have to train each individual to notice a new danger. Then they can develop defenses against it.
I guess I just said something very classically Conservative and yet heretical, but this seems like the way the process has to work, as opposed to wish-casting us to all look inside ourselves and put down that cellphone or that cigarette. We are flawed. We as a species take advantage of each others' flaws in a climb to the top of the monkey barrel. There's no way out, even if Elon thinks there is on Mars, or Zuck thinks there is in Hawaii. Herd immunity amounts to enough of your friends and neighbors gently telling you to wake up and take care of yourself. That's probably how a communally sane society evolves in the end, too - once it reaches some kind of equilibrium with its technology, its drugs, and its environment.
> Changing individual human behavior to be more present and more self-aware
To what, though? I'd argue people prefer going on a website for something to do over dealing with the sad reality that is modern life. Don't blame a UI designer because your life's empty.
How does a child learn this if they are peer pressured to be on these platforms? The parents can say no up to a point. But eventually the environment demands a child to be on these platforms to participate in their social circles.
How do you enforce a rule on a large group of barely related individuals?
Slow, rhythmic speech pattern. Relaxing tones. Asking you to focus on your heart rate or breathing. Counting, or asking you to count mentally. Low lights or candle light. These are the things that come to mind.
There is a massive gap between "100k engineers have found a way to make most people in my demographic waste all their time by choosing to doomscroll" and "actually Nazis".
Both are bad, but as different as chronic fatigue and terminal cancer.
You gotta do what you can do - take the best of what you remember from your parents and grandparents, and pass it on. I don't feel like they're really dead as long as I'm alive. I hear their voices and their jokes and I see their smiles. Sometimes when I laugh I hear how my grandpa laughed, and I think, shit, I must sound old now. Kids make you realize how temporary we all are.
Is the deadly itchy one of those tiny box jellyfish? More than sharks or crocs, this is why I was an absolute coward and decided not to get in the water in Queensland. There are lots of ways to die, but I'd prefer not to blame myself in my last moments.