deepdeeppuddle's recent activity
-
Comment on I'm tired of dismissive anti-AI bias in ~tech
deepdeeppuddle That's interesting information. Thank you for telling me about that.
I think the topic warrants further study.
-
Comment on I'm tired of dismissive anti-AI bias in ~tech
deepdeeppuddle I can't imagine that analogy doing anything to persuade someone who isn't already convinced. All it does is polarize the discourse. We've spent decades making fun of anti-piracy PSAs on TV. Now copyright infringement is akin to Nazi crimes against humanity? Please step back from this hyperbole.
I'm not saying you're wrong and I'm not trying to invalidate your wife's hard feelings about having her work used in this way. But we really can't go around comparing things cavalierly to Nazi atrocities.
-
Comment on I'm tired of dismissive anti-AI bias in ~tech
deepdeeppuddle I guess your argument is that the masses have poor taste and will accept low-quality art? Is there data that supports the idea that this is happening at scale, i.e., there is some statistically measurable displacement of paid human artistic labour by AI art generation?
-
Comment on I'm tired of dismissive anti-AI bias in ~tech
deepdeeppuddle This is an interesting perspective. Thank you.
I briefly looked through about the first half of the examples in the "AI Art Turing Test". A lot of the pieces are abstract, weird, fantastical, and have non-Euclidean geometry or don't attempt to show perspective in a realistic way. That makes it particularly hard to judge.
I also saw a few examples, particularly the cartoony images of porcelain women, that I find ugly and low-quality, but I don't doubt they could have been made by humans. Sometimes I wonder if part of the reason diffusion models like DALL-E and Midjourney output art that looks bad is that they're trained on a lot of art from DeviantArt or Tumblr or wherever that is bad. It makes sense that most of the drawings on the Internet would be made by people who have closer to beginner-level skill than expert-level skill, just like how most fanfiction is closer to a "14-year-old who has never written anything before" level of quality than an "experienced writer who could realistically get a publishing deal" level of quality.
I also think of this post about LLMs generating short fiction. The author's view is that LLMs are good at generating short stories that look like good writing upon a cursory inspection, but if you scratch the surface, you start to notice how bad it really is.
I worry about the same thing happening with the "AI Art Turing Test". Realistically, how long am I going to spend looking at each of fifty images? Maybe ten seconds or less, which is not even long enough for my eyes to take in all the detail in the image. Passable at a glance is not the same thing as good.
If a great piece of art is something you can stand in front of at a museum for an hour and continually appreciate more detail in, then a bad piece of AI art is something that looks impressive for the first 30 seconds you spend looking at it before you notice some messed up, ugly detail.
-
Comment on I'm tired of dismissive anti-AI bias in ~tech
deepdeeppuddle (edited) I see a lot of problems with the current, popular AI discourse. I wrote about where I find fault in the discourse about AI capabilities here. But there's more I take issue with.
This comment will mostly focus on the common ethical arguments against AI. I could also talk about AI hype (e.g., how despite huge business investment and apparent enthusiasm for AI, it doesn't seem to be increasing productivity or profitability), but it seems like most Tildes users already believe that AI is overhyped.
1. The anti-AI art narrative seems to contain a contradiction
The discourse about AI-generated art is confusing. The detractors of AI-generated art make two claims that seem incompatible (or at least in tension with each other):
- AI-generated art is terrible in quality, and obviously so to anyone who looks at it.
- AI-generated art is displacing human-generated art in the market and costing human artists revenue.
I agree with (1). As for (2), I want to see data that supports this claim. I've looked for it and I haven't been able to find much data.
What nags at me most is that (1) and (2) seem to be incompatible. If AI-generated art is so terrible, why do consumers putatively prefer it? And if consumers don't prefer it, how could it be displacing human labour in creating art? How can these two claims, which are often made by the same people, be reconciled?
What seems most likely to me is that AI art sucks, that because it sucks there is only a marginal market for it, and that there's very little displacement of human artists' labour.
2. Talking about how much electricity AI uses seems like it's just a proxy for talking about how useful AI is
I'm skeptical about environmentalist arguments against AI. I'm skeptical because I've tried to find hard data on how much electricity AI consumes and I can't find strong support for the idea that an individual consumer using an LLM uses a lot of electricity when compared to things like using a computer, playing a video game, keeping some LED lightbulbs turned on, running a dishwasher, etc.
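To make the comparison concrete, here's a rough back-of-envelope sketch. The per-query figure is an assumption drawn from commonly cited public estimates, not a measured value, and the appliance numbers are typical ratings, so treat all of it as illustrative:

```python
# Back-of-envelope comparison of household electricity uses.
# All figures are assumptions/typical ratings, not measurements.
LLM_WH_PER_QUERY = 3.0        # assumed ~3 Wh per chatbot query (commonly cited estimate)
QUERIES_PER_DAY = 50          # a fairly heavy individual user

DISHWASHER_WH_PER_CYCLE = 1500.0   # ~1.5 kWh per cycle (typical rating)
GAMING_PC_WATTS = 400.0            # ~400 W draw under load (typical gaming PC)

llm_daily_wh = LLM_WH_PER_QUERY * QUERIES_PER_DAY
print(f"Heavy LLM use:        {llm_daily_wh:6.0f} Wh/day")          # 150 Wh
print(f"One dishwasher cycle: {DISHWASHER_WH_PER_CYCLE:6.0f} Wh")   # 1500 Wh
print(f"One hour of gaming:   {GAMING_PC_WATTS:6.0f} Wh")           # 400 Wh
```

On those assumptions, even fifty queries a day adds up to a tenth of a single dishwasher cycle; the per-query estimate would have to be off by more than an order of magnitude before the comparison flipped.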
The predictable rejoinder is "those other things have some utility, while AI doesn't". If that's what this debate comes down to, then the environmentalist stuff is just a proxy argument for the argument about whether AI is useful or not. If you thought AI were useful, you probably wouldn't object to it using a modest amount of electricity on a per consumer basis. If you don't think it's useful, even if it consumed zero electricity, you would still have other reasons to oppose it. So, it seems like nobody's opinion about AI actually depends on the energy usage of AI.
I also dislike how much discourse about energy in general is focused on promoting energy conservation rather than promoting increased production of sustainable energy when the latter is far more important for mitigating climate change and also benefits people economically (whereas energy conservation, if anything, harms people economically).
3. AI and copyright
A lot of people assert that AI models "steal" training data or that training on copyrighted text or images amounts to "plagiarism" or "copyright infringement". Two things that bother me about this sort of assertion:
- It's not obvious what constitutes "theft" in the context of training AI models. This is an unprecedented situation, and I don't see people trying to justify why their non-obvious interpretation of "theft" is correct. Humans are allowed to consume as much text and as many images as they can in order to produce new text and images. If we treated AI models like humans in this respect, then this would not be theft. I don't think it's obvious that we should treat AI models like humans in this respect. I don't know exactly what we should do. Why does it seem like people are not engaging with the complexity and ambiguity of this issue? Why does it seem like people are asserting that it's theft without a supporting argument, as if it should be obvious, when it's really not?
- The people who are angry about AI allegedly infringing copyright seem mostly indifferent to or supportive of media piracy. I don't understand why there is such zeal against AI, which is an ambiguous case with regard to copyright, and no zeal against piracy, which is a clear-cut instance of copyright infringement. Being anti-AI and pro-piracy (or neutral on piracy) aren't necessarily inconsistent positions, but I haven't seen many attempts to reconcile them.
Is this a symptom of people feeling uncomfortable with ambiguity and uncertainty and attempting to resolve the discomfort by rushing to angry, confident opinions?
4. General properties of the discourse that I don't like
Some of the general things that bother me about the AI discourse are:
- Strong factual claims, e.g., about AI displacing artist labour and AI using a lot of energy, without clear supporting data.
- Apparent tensions or contradictions that aren't resolved; obvious questions or objections that go unanswered.
- Opinions so strongly held against AI that it is sometimes said or implied that no reasonable disagreement with an anti-AI stance could possibly exist, and that people who use or defend AI are clearly doing something severely unethical and maybe should even be ostracized on this basis. Wow.
- I take seriously the possibility that generative AI isn't actually that important or impactful (at least for now and in terms of what's foreseeable over the next few years), and that it's not really worth this much attention. This is a boring, possibly engagement-nullifying opinion, which might make it memetically disadvantaged on the Internet. But maybe some people would also find this idea refreshing!
The polarization isn't just on one side. In a way, both sides might be overrating how impactful AI is, with anti-AI people seeing the impact as highly net negative and the pro-AI people seeing the impact as highly net positive. I don't see AI as a credible threat to artists, the environment, or copyright law and I also don't see AI as a driver of economic productivity or firm/industry profitability. I think LLMs' actually good use cases are pretty limited and I definitely don't see generative AI as "revolutionary" or worth the amount of hype it has been receiving in the tech industry or in other industries where businesses have been eager to integrate AI.
-
Comment on I'm tired of dismissive anti-AI bias in ~tech
deepdeeppuddle I wrote a post on Tildes a week ago with the intention of cutting through some of the polarized, it's-either-black-or-white discourse on the capabilities of LLMs:
I encourage people to read that post and comment on it. There is now limited evidence that at least one of the newest frontier AI models — namely, OpenAI's o3 — is capable, to a limited extent, of something we could reasonably call reasoning. This challenges common narrative framings of AI that attempt to downplay its capabilities or potential. It also challenges the idea, common in some circles, that AI models have already possessed impressive reasoning ability since 2023, since the reasoning ability detected in o3 is so limited and so recent.
-
Comment on Bluesky’s quest to build nontoxic social media in ~tech
deepdeeppuddle I don’t have a dog in this fight because I’m not interested in using microblogging services in general, regardless of whether they’re fully decentralized, fully centralized, or something in between.
I will say that I find your mocking tone frustrating. I am now shut down from hearing your opinion on things because your approach is so hardline and combative.
-
Comment on A slow guide to confronting doom in ~health.mental
deepdeeppuddle Thanks for explaining that. This is consistent with what I’ve heard from other people who do coding.
Basically, it streamlines the process of looking things up on Stack Overflow (or forums or documentation or wherever) and copying and pasting code.
The LLM isn’t being creative or solving novel problems (except maybe to a minimal degree), but using existing knowledge to bring speed, ease, and convenience to the coder.
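To illustrate the workflow being described (purely a sketch: the model name is a placeholder, and this assumes the OpenAI Python client), the "lookup" step reduces to something like:

```python
# Sketch of an LLM-mediated "Stack Overflow lookup".
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "In Python, how do I parse an ISO 8601 timestamp "
                   "into a timezone-aware datetime?",
    }],
)
print(response.choices[0].message.content)
```

The answer is synthesized from patterns the model has already seen, which is exactly the speed-and-convenience (rather than novel problem-solving) value described above.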
-
Comment on Bluesky’s quest to build nontoxic social media in ~tech
deepdeeppuddle I don’t know anything about designing protocols for decentralized social networks, but why is the AT Protocol that Bluesky is based on able to allow post migration but Mastodon/ActivityPub is not?
-
Comment on Where do you all get your news from? How do you work to avoid echo chambers and propaganda? in ~life
deepdeeppuddle (edited) I don’t automatically buy your assertions about botnets. I would need to see more evidence before I grant that this is happening.
There are certainly instances where evidence of manipulation has come out. I’m specifically thinking of this story about a smear campaign against Blake Lively, reported by The New York Times in December. It’s wild and shocking.
I think a lot of the ways discourse changes over time could be shaped by psychological and social-psychological dynamics. For example, who feels compelled to speak up, and when, on political candidate selection. Or people who are somewhat on the fence or somewhat open-minded looking on the bright side when someone other than their top pick for candidate is chosen. Or the way that enthusiasm for a candidate can be contagious and build momentum over time. And so on.
For example, I have no reason to think the enthusiasm that built for Kamala Harris after she took over the presidential campaign from Joe Biden was anything other than primarily organic. Of course the campaign was trying to generate enthusiasm, but they only wish they could generate enthusiasm like that for whatever candidate they want. It’s a more complex, subtle thing than people just being manipulated by the media or political elite, or by propaganda campaigns.
A lot of my exposure to different political ideas has happened through my own research and poking around papers, books, and podcasts, rather than just following a mainstream news source (or any news source). For example, when I was in university, and I was much more enamoured with the ideas of socialism and anti-capitalism than I am today, I looked up and read papers about socialist economics because I had heard enough general theorizing and wanted to see some concrete proposals for how a socialist economy would be run.
This was one of the biggest factors in my turn away from socialism. Not reading proponents of capitalism make convincing arguments. But reading socialist economics papers and finding them lacking. Feeling like the proponents of socialism had a real lack of good ideas.
Also, reading Thomas Piketty’s book Capital in the Twenty-First Century, which was such a contrast to works like Karl Marx’s Capital, which I had read in school. It turns out 150 years of progress in economics really makes a difference. Piketty’s Capital in the Twenty-First Century reads like a work of social science, whereas Marx’s Capital has sections where he digs into the price of corn or whatever, but also long sections where he waxes about inscrutable Hegelian philosophy or discusses 19th-century misconceptions about ancient human history and anthropology.
This made me open up further to the idea that 21st-century economics could take seriously problems like wealth/income inequality and examine them rigorously, “scientifically”, based on things like 100 years of French tax records or whatever.
The main (sole?) policy Piketty advocates in that book is a wealth tax. He advocates starting with a small wealth tax, which will allow economists to have more data about wealth, upon which further research and policy can be based. This made its way to Elizabeth Warren’s presidential campaign platform. One of Piketty’s graduate students was on her policy team.
In Piketty’s subsequent book, Capital and Ideology, he advocates for some radical political and economic reforms, but he approaches the topic with a level of intellectual humility and acknowledgment of uncertainty that I find refreshing. I don’t know if he’s right and he doesn’t know if he’s right, but it’s an interesting jumping-off point.
So, that’s a brief story about a major political “conversion” I had, which maybe, hopefully, can tell you a little something about being exposed to new ideas.
Nowadays, I really like The Ezra Klein Show. I don’t listen to most of the political episodes because politics is stressful and I need to take it in low doses.
A pretty cool thing about Ezra is he’s willing to entertain ideas that differ significantly from what he already thinks. This includes ideas from people across the political aisle, but also ideas that aren’t currently part of mainstream partisan political discourse at all. For example, I remember him saying at one point that he thinks about the stories he might be missing as a reporter, that would seem important in retrospect when looking back on the current era but that aren’t on his radar (or most reporters’ radar). The example he gave was CRISPR.
On his podcast, he also discusses topics like psychology, psychedelics, loneliness, polyamory, and other things that have importance for the world but aren’t really part of the news or mainstream political debates. I find that refreshing and it also gives me a chance to engage with the podcast and not be stressed out by news or politics.
-
Comment on A slow guide to confronting doom in ~health.mental
deepdeeppuddle How do you use AI in your work? How does it help you accomplish more in less time?
I haven’t seen much evidence that AI has been having an effect on the macroeconomy, on (un)employment, or the productivity of individual companies. I am open to seeing statistics that show an impact, though.
-
Comment on Bluesky’s quest to build nontoxic social media in ~tech
deepdeeppuddle Thank you for sharing this. It’s interesting but, as you said, not very active.
-
Comment on A slow guide to confronting doom in ~health.mental
deepdeeppuddle (edited)
"Trade wars and democratic backsliding seem too mundane to be significant LessWrong concerns."
You are right.
Here’s a comment from the Effective Altruism Forum, which has a lot of overlap with the LessWrong forum. There is overlap in terms of the user base, posts (people cross-post to both, and there’s even a feature built into both forums to make this easier), and discussion topics (particularly AGI). The forums also share the same code base.
This comment is about Daniela Amodei, the President of the AI company Anthropic. The context is a discussion about whether it’s appropriate to look up information on the personal website she created for her wedding and publicly discuss it.
…I will just say that by the "level of influence" metric, Daniela shoots it out of the park compared to Donald Trump. I think it is entirely uncontroversial and perhaps an understatement to claim the world as a whole and EA [effective altruism] in particular has a right to know & discuss pretty much every fact about the personal, professional, social, and philosophical lives of the group of people who, by their own admission, are literally creating God. And are likely to be elevated to a permanent place of power & control over the universe for all of eternity.
Such a position should not be a pleasurable job with no repercussions on the level of privacy or degree of public scrutiny on your personal life. If you are among this group, and this level of scrutiny disturbs you, perhaps you shouldn't be trying to "reshape the lightcone without public consent" or knowledge.
Note that 4 people have voted “agree” (that’s what the check mark symbol means).
This helps put into perspective what people in this community are worrying about right now.
-
Comment on A slow guide to confronting doom in ~health.mental
deepdeeppuddle (edited) I really dislike LessWrong for reasons I explained at length in a series of comments on a post from February. If you’re curious, you can find my comments by starting here and then looking at the replies down the chain.
For those who don’t know, LessWrong is an online forum that has users from around the world, but is also closely connected to an IRL community of people in the San Francisco Bay Area who self-identify as “rationalists”. Rationalists have one primary fixation above all else, which is artificial general intelligence (AGI), and, more specifically, the fear that it will kill all humans sometime in the near future. That’s the “doom” that this LessWrong post is about.
On the topic of AGI, I wrote a post here, in which I expressed frustration at the polarized discourse on AGI and discussed how the conversation could potentially be refined by focusing on better benchmarks for AI performance.
I’ll say a little more on the topic of AGI.
I think there are a number of bad reasons to reject the idea of AGI, such as:
- dualism, the idea that the mind is non-physical or supernatural
- mysterianism, the idea that the mind can never be understood by science
- overly simple or dismissive misunderstandings of deep learning and deep reinforcement learning
- the belief that AI research will run out of funding
That said, I also think there are a number of bad reasons to believe that AGI will be created soon:
- being overly impressed with ChatGPT and insufficiently critical of its failures to produce intelligent behaviour
- a belief that the intelligence of AI systems will rapidly, exponentially increase without plateauing, despite serious empirical and theoretical problems with this idea (such as economic data failing to support that this has been happening so far)
- a reliance on poor benchmarks that don’t really measure intelligence
- knee-jerk dismissal of well-qualified critics like Yann LeCun and François Chollet
- over-reliance on the opinions of other people about AGI, without enough examination of why they hold those opinions (e.g., how much of it is circular? How much of those other people’s opinions is based on other people’s opinions?)
It is difficult to find nuanced discussion of AGI online lately because most of the discussion I see is either people taking hardline anti-AI positions (e.g. it’s all just a scam) or people with an extreme, eschatological belief in near-term AGI.
I highly doubt that we will see AGI within ten years. Within a hundred years? Possibly. But there’s a lot of irreducible uncertainty and there’s no way we can really know right now.
-
Comment on Bluesky’s quest to build nontoxic social media in ~tech
deepdeeppuddle (edited) I don’t know if you meant to reply to me or you meant to reply to skybrian and replied to me by accident. In the comment I wrote that you’re replying to, I said:
I think microblogging is just fundamentally a bad idea. It doesn't matter if it's Twitter, Bluesky, Mastodon, or Threads, it's all fundamentally the same idea for a social network and it all suffers from the same problems.
If I had to guess, I would guess that most people’s lives would be improved on net if they stopped using microblogging platforms. I would also guess that the world would be improved on net if microblogging platforms stopped existing. But I don’t know for sure, and I don’t need to know for sure, since that decision isn’t up to me.
When I was describing in my comment above what I want to see in an online platform, I was indeed describing a fundamentally different type of online platform than a microblogging platform. (I think we should try to move past microblogging as an idea. Or, at least, I personally don’t want to use microblogging platforms anymore.)
-
Comment on Bluesky’s quest to build nontoxic social media in ~tech
deepdeeppuddle (edited) You can read like 50,000 words of exchanges between developers discussing/debating how decentralized Bluesky is, if you really want to get into the weeds:
I have only read bits of it because it’s incredibly long and I don’t have a dog in this fight anyway. As I said in my comment above, I think microblogging is a deeply troubled idea and most of us would probably be better off quitting it all, regardless of how decentralized or centralized any of the platforms are.
One observation, though: Bluesky is actually decentralized in an important way that Mastodon isn’t. Mastodon doesn’t allow you to migrate your posts from one Mastodon instance to another. AT Protocol, which Bluesky is based on, is designed to allow that.
When mastodon.lol, one of the biggest and most widely recommended Mastodon instances, shut down, the 12,000 people who were unfortunate enough to have signed up there had no way to migrate their posts anywhere else. That’s such a bummer.
I think I had only posted a handful of times there, and I only found out about the instance shutting down after it happened. There was no way to migrate my posts or my account, or even to see what I had posted. Not a big loss for me, personally, but it does reveal a weakness in the Mastodon model.
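For anyone curious about the mechanics, here's a minimal sketch of why migration is even possible under AT Protocol. It assumes the XRPC endpoints com.atproto.sync.getRepo and com.atproto.repo.importRepo behave roughly as the spec describes; the hosts, DID, and token are placeholders, and real migration involves more steps (account creation, identity update, blobs, preferences):

```python
# Minimal sketch of AT Protocol post migration (hosts/DID/token are placeholders).
# A user's posts live in a signed, content-addressed repository keyed by their
# DID, so any PDS can export or import the whole thing as a CAR file.
import requests

OLD_PDS = "https://old-pds.example"   # placeholder host
NEW_PDS = "https://new-pds.example"   # placeholder host
DID = "did:plc:exampleuser"           # placeholder identity

# Export the full signed repo from the old host (public sync endpoint).
car_bytes = requests.get(
    f"{OLD_PDS}/xrpc/com.atproto.sync.getRepo",
    params={"did": DID},
).content

# Import it into an account on the new host (authentication omitted).
requests.post(
    f"{NEW_PDS}/xrpc/com.atproto.repo.importRepo",
    headers={
        "Content-Type": "application/vnd.ipld.car",
        "Authorization": "Bearer <access-token>",  # placeholder token
    },
    data=car_bytes,
)
```

Because each record is signed against the user's DID rather than a host's domain name, the new server can verify and serve the old posts. A Mastodon post's identity is a URL on its home instance, so there's no equivalent portable object to move.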
The claim that Bluesky “isn’t decentralized at all” or that Bluesky’s decentralization is “a lie” seems, to me, like it doesn’t really engage with the complexity of the topic.
-
Comment on Bluesky’s quest to build nontoxic social media in ~tech
deepdeeppuddle (edited)
I think Bluesky is doing a good job at running a microblogging site (as far as I can tell). They seem to have good moderation. I like the design of the website and the app. The decentralization strikes a good balance between usability and "forkability" (credible exit).
That said, I think microblogging is just fundamentally a bad idea. It doesn't matter if it's Twitter, Bluesky, Mastodon, or Threads, it's all fundamentally the same idea for a social network and it all suffers from the same problems. Maybe, theoretically, someone could create a microblogging platform that is different. But it hasn't happened yet. (And the problem isn't investors, money, financial incentives, ownership, centralization, a lack of federation, a lack of customization options for users, or minor design decisions that could be fixed with a few tweaks, but the general idea of a microblogging platform as we understand it today.)
I quit Twitter in early 2021 and it was a great decision. It reduced my anxiety. It freed up my mind to focus on better things. It felt like turning off a noisy radio. Only in the silence afterward could I appreciate how much it had been grating on my nerves.
Ezra Klein has a beautiful polemic against Twitter in this podcast at 38:15. (The podcast is from December 2022.) He talks about why he quit Twitter and why we shouldn't want a Twitter alternative like Mastodon, Threads, or Bluesky. He's talked more about this in later podcasts and also in a few New York Times columns. In one column, he writes:
Twitter forces nuanced thoughts down to bumper-sticker bluntness. The chaotic, always moving newsfeed leaves little time for reflection on whatever has just been read. The algorithm’s obsession with likes and retweets means users mainly see (and produce) speech that flatters their community or demonizes those they already loathe. The quote tweet function encourages mockery rather than conversation. The frictionless slide between thought and post, combined with the absence of an edit function, encourages impulsive reaction rather than sober consideration. It is not that difficult conversations cannot or have not happened on the platform. It is more that they should not happen on the platform.
Ev Williams, who co-founded Twitter and later founded Medium, once said in an interview that Twitter was like the limbic system — fast, twitchy, noisy, knee-jerk, reflexive, impulsive — and he wanted Medium to be more like the pre-frontal cortex — slow, quiet, reflective, considered. Initially, Medium didn't even have the ability to leave comments on posts, which I believe was a deliberate design decision to encourage reading and thinking, rather than reacting.
I don't know if anyone has quite cracked what good online platforms should be like. A few aspects I think are important:
- A focus on longer-form content. The maximum length of a tweet is 280 characters (or at least it used to be). In my view, a good social network will have many posts that are 5x, 10x, or 20x longer than this. I think blogging and newsletter platforms like Medium and Substack are sort of on the right track. There are also some niche forums where it's normalized to write blog-length posts.
- Really strict moderation and community norms around respect and kindness. In practice, this is hard to achieve. I've moderated several different kinds of online communities from small to large. It's incredibly hard. But we have to try, otherwise what's the point of any of it? I feel like the status quo is that online life is so nasty it poisons our real relationships because we take that nastiness off the computer and into our real lives. What if it could be the reverse? What if online communities made kindness the norm to the extent it encouraged us to be softer in our real life relationships?
- An experience centred around forming relationships and community with people, in which you gain familiarity with people over time. (Clearly visible profile pictures and different coloured usernames can help with this!) Think small Twitch and Discord communities (Slack used to be used for small communities sometimes too, but now it's pretty much all Discord), small or medium forums, and online games with small communities. The opposite, which we want to avoid, is an overwhelming, chaotic flood of content, in which you can't keep track of what's happening and you might not even realize if you encountered the same person twice. Think Twitter, TikTok, and Reddit. Ultimately, we want to slow down the pace of information consumption (and the speed of reaction), humanize people, and create conditions that foster the growth of one-to-one human connection over time.
I don't think the primary obstacle to building online platforms that have these properties is that big tech companies have bad ethics, bad management, or bad incentives. The barrier to entry to creating new online platforms and new online communities is fairly low. Network effects and coordination problems make things harder, sure. But I ultimately see this as an innovation problem, or a creativity problem. Or a design problem, or a research problem — however you want to categorize it. The problem is a lack of good ideas about what to do.
-
Comment on ‘The terror is real’: an appalled US tech industry is scared to criticize Elon Musk in ~tech
deepdeeppuddle (edited)
"I watched Obama say to GWB 'How do we stop this?' (This was from a TikTok video of a deaf person lipreading their interaction at Trump's latest inauguration.)"
This sets off my spidey senses for misinformation for a few reasons:
- Even deaf people who try to lip read all the time say it's not very accurate. From a quick Google, the estimates of how much of English speech can be discerned from lip reading seem to be in the 30% to 45% range (source 1, source 2, source 3). Out of curiosity, I looked at the video of Barack Obama speaking, and it's a side profile of Obama's face with his head actually angled away from the camera. You can barely see his lips.
- Even if this sentence were 100% accurately lip read, we wouldn't know the context or the intended meaning of Obama's words. We could try to guess or infer it, but we might be wrong.
- The source is TikTok, and the quality of information on TikTok is absolutely abysmal. I think TikTok is great for comedy videos, jokes, sketches, improv, etc. if you find the right people to follow, but the idea of someone trying to get factual information or reasonable analysis from TikTok is scary to me. (One investigation concluded that about 20% of TikTok videos contain misinformation.)
- No reputable news source picked up on the story. A few disreputable sources that uncritically share popular posts or videos from social media published articles about it. This is only weak evidence, but if the claim that Obama said this were credible, I would guess some journalist somewhere would show the video to a lip reading expert and see if they could confirm what the TikToker claimed. From the absence of any reputable article about this story, we can infer that either no journalist investigated it, or they did and found the TikToker's claim was not credible.
Edit: One small thing people can do to strengthen liberal democracy in liberal democratic countries is to stop using TikTok as a source for factual information, especially information about political news. That means either not using the app at all or tuning your algorithm so that your feed is only funny videos, cute animal videos, beautiful nature videos, personal vlogs, etc. And then skipping any "informational" videos (a high percentage of which will be misinformational) that slip through.
In practice, I've found it hard enough to retrain my algorithm when it got in a bad place that I created a new account. It was a pain, but it worked in the end.
It also bears mentioning that TikTok is ultimately under the control of an authoritarian government and there is some evidence to suggest that this authoritarian government may be using TikTok to influence political opinion in democratic countries in a way that serves its interests.
-
Comment on When is it okay to give up? in ~life
deepdeeppuddle This is an essentially impossible question for another person to answer, even a therapist, and especially a random stranger on the Internet. But talking this out with a therapist or a friend can help get you to clarity.
I say it's an impossible question for another person to answer because, in my experience, making a decision that feels right about continuing or discontinuing a relationship with a friend, partner, or family member is about soul searching, connecting deeply with my gut feeling and intuition, and understanding all the nuances of all my experiences with that person over time. Only I know all that information about all those experiences, and only I can connect with my deep, inner feelings like that.
I empathize because it's such a painful situation to be in. To stay in the relationship or leave the relationship is painful, and to not know what to do and be in between is also painful.
Do you have someone in your life right now who makes you feel seen, who listens to you, who cares what you think and feel, and who supports you? If not, maybe focusing on building those kinds of relationships is a good idea, since it will make either choice, either outcome, easier to bear.
I'm not sure I understand your intended meaning. If consumers don't consume AI art, there is no market for it.
Also, the comment you replied to was replying to Diff's comment, and in that comment, Diff wasn't talking about large corporations making popular movies. They were (I thought) talking about individual customers who have a direct, one-to-one relationship with an artist or a small business producing art at small scale. So, that was about individual consumer choice. That was about "the masses" directly purchasing products.
I would appreciate it if you didn't reply to my comments in the future.