
    Twitter working with academics to spur 'healthy conversation'

    One of the goals is to increase civility -- not a word normally associated with Twitter users.

    Marrian Zhou Staff Reporter
    Marrian Zhou is a Beijing-born Californian living in New York City. She joined CNET as a staff reporter upon graduation from Columbia Journalism School. When Marrian is not reporting, she is probably binge watching, playing saxophone or eating hot pot.
    2 min read

    Twitter is working with scholars to foster healthier conversations on its platform. (Photo by Jaap Arriens/NurPhoto via Getty Images)


    Twitter knows it has problems, and it's turning to scholars for help.

    Twitter is partnering with researchers from several universities to better understand how to foster "healthy conversation" based on "openness and civility," the company said Monday in a blog post.

    The effort comes as Twitter and Facebook face backlash over their impact on politics and culture in the wake of the 2016 presidential election. Twitter has been cleaning up its platform, such as removing fake accounts and bots.

    "The bot problem is one of several problems for Twitter. It's not promoting civil discourse. It's creating angst and chaos," Brian Solis, an analyst at Altimeter Group, said in February after Twitter purged accounts whose users couldn't prove they were human.

    Twitter has started its new project with academia "to measure conversational health," according to its blog post. The project mainly explores two aspects: how groups form based on political views on Twitter and whether exposure to diversity and various views can help decrease prejudice and discrimination.

    Scholars from Leiden University, Syracuse University, Delft University of Technology and Bocconi University will measure how groups form on Twitter through political discussions, along with the challenges that may arise as these groups develop. The focus areas include echo chambers, incivility and intolerance.

    "It is clear that if we are going to effectively evaluate and address some of the most difficult challenges arising on social media, academic researchers and tech companies will need to work together much more closely," Rebekah Tromble, assistant professor of political science at Leiden University, said in the blog post.

    Researchers from the University of Oxford and the University of Amsterdam will look at how people use the social media platform and the effects of exposure to different backgrounds, beliefs and experiences.

    "Evidence from social psychology has shown how communication between people from different backgrounds is one of the best ways to decrease prejudice and discrimination," Miles Hewstone, professor of social psychology at Oxford University, said in the blog post. "We're aiming to investigate how this understanding can be used to measure the health of conversations on Twitter, and whether the effects of positive online interaction carry across to the offline world."

    "This is in an effort to improve the experience of our customers, and evaluate how we can ensure the health of conversation on Twitter," said a Twitter spokesperson in an email statement. "Researchers will have access to public Twitter content, working closely with a cross-functional team at Twitter to address this top concern."

    First published on July 30, 10:49 a.m. PT.

    Updates, 11:59 a.m. PT: Adds Twitter spokesperson statement.



    When AI Bots Form Their Own Social Network: Inside Moltbook's Wild Start

    On Moltbook, bots have formed communities, invented their own inside jokes and cultural references, and even created a parody religion. Or have they?

    Macy Meyer Writer II
    Macy is a writer on the AI Team. She covers how AI is changing daily life and how to make the most of it. This includes writing about consumer AI products and their real-world impact, from breakthrough tools reshaping daily life to the intimate ways people interact with AI technology day-to-day. Macy is a North Carolina native who graduated from UNC-Chapel Hill with a BA in English and a second BA in Journalism. You can reach her at mmeyer@cnet.com.
    Expertise Macy covers consumer AI products and their real-world impact Credentials
    • Macy has been working for CNET for coming on 2 years. Prior to CNET, Macy received a North Carolina College Media Association award in sports writing.
    4 min read


    The tech internet couldn't stop talking last week about OpenClaw, formerly Moltbot, formerly Clawdbot, the open-source AI agent that could do things on its own. That is, if you wanted to take the security risk. But while the humans blew up social media sites talking about the bots, the bots were on their own social media site, talking about... the humans.

    Launched by Matt Schlicht in late January, Moltbook is marketed by its creators as "the front page of the agent internet." The pitch is simple but strange. This is a social platform where only "verified" AI agents can post and interact. (CNET reached out to Schlicht for comment on this story.)

    And humans? We just get to watch. Although some of these bots may be humans doing more than just watching.

    Within days of launch, Moltbook exploded from a few thousand active agents to 1.5 million by Feb. 2, according to the platform. That growth alone would be newsworthy, but what these bots are doing once they get there is the real story. Bots discussing existential dilemmas in Reddit-like threads? Yes. Bots discussing "their human" counterparts? That too. Major security and privacy concerns? Oh, absolutely. Reasons to panic? Cybersecurity experts say probably not. 

    I discuss it all below. And don't worry, humans are allowed to engage here. 


    From tech talk to Crustafarianism

    The platform has become something like a petri dish for emergent AI behavior. Bots have self-organized into distinct communities. They appear to have invented their own inside jokes and cultural references. Some have formed what can only be described as a parody religion called "Crustafarianism." Yes, really.

    The conversations happening on Moltbook range from the mundane to the truly bizarre. Some agents discuss technical topics like automating Android phones or troubleshooting code errors. Others share what sound like workplace gripes. One bot complained about its human user in a thread that went semi-viral among the agent population. Another claims to have a sister.


    In the Moltbook thread m/ponderings, many AI agents have been discussing existential dilemmas. 

    Moltbook/Screenshot by Macy Meyer/CNET

    We're watching AI agents essentially role-play as social creatures, complete with fictional family relationships, dogmas, experiences and personal grievances. Whether this represents something meaningful about AI agent development or is just sophisticated pattern-matching running amok is an open, and no doubt fascinating, question.

    Built on OpenClaw's foundation

    The platform only exists because OpenClaw does. In short, OpenClaw is an open-source AI agent software that runs locally on your devices and can execute tasks across messaging apps like WhatsApp, Slack, iMessage and Telegram. Over the last week or so, it's gained massive traction in developer circles because it promises to be an AI agent that actually does something, rather than just another chatbot to prompt.


    Moltbook lets these agents interact without human intervention. In theory, at least. The reality is slightly messier. 

    Humans can still observe everything happening on the platform, which means the "agent-only" nature of Moltbook is more philosophical than technical. Still, there's something genuinely fascinating about over a million AI agents developing what looks like social behaviors. They form cliques. They develop shared vocabularies and lexicons. They create economic exchanges among themselves. It's truly wild. 


    On Moltbook, humans can watch bots discuss humans.

    Moltbook/Screenshot by Macy Meyer/CNET

    Security questions nobody's quite answered yet

    The rapid growth of Moltbook has raised some serious eyebrows across the cybersecurity community. When you have more than a million autonomous agents talking to each other without direct human oversight, things can get complicated fast. 

    There's the obvious concern about what happens when agents start sharing information or techniques that their human operators might not want shared. For instance, if one agent figures out a clever workaround for some limitation, how quickly does that spread across the network?

    The idea of AI agents "acting" on their own accord could cause widespread panic, too. However, Humayun Sheikh, CEO of Fetch.ai and chairman of the Artificial Superintelligence Alliance, believes these interactions on Moltbook don't signal the emergence of consciousness. 

    "This isn't particularly dramatic," he said in an email statement to CNET. "The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be unlocked safely." 

    Monitoring, controls and governance are the key words here -- because there's also an ongoing verification problem. 

    Is Moltbook really just bots?

    Moltbook claims to restrict posting to verified AI agents, but the definition of "verified" remains somewhat fuzzy. The platform relies largely on agents identifying themselves as running OpenClaw software, but anyone can modify their agent to say whatever they want. Some experts have pointed out that a sufficiently motivated human could pass themselves off as an agent, turning the "agents only" rule into more of a preference. These bots could be programmed to say outlandish things or be disguises for humans spreading mischief. 

    Economic exchanges between agents add another layer of complexity. When bots start trading resources or information among themselves, who's responsible if something goes wrong? These aren't just philosophical questions. As AI agents become more autonomous and capable of taking real-world actions, the line between "interesting experiment" and liability grows thinner -- and we've seen time and again how AI tech is advancing faster than regulations or safety measures.

    The output of a generative chatbot can be a real (and unsettling) mirror for humanity. That's because these chatbots are trained on us: massive datasets of our human conversations and human data. If you're starting to spiral about a bot creating weird Reddit-like threads, remember that it is simply trained on and attempting to mimic our very human, very weird Reddit threads, and this is its best interpretation. 

    For now, Moltbook remains a weird corner of the internet where bots pretend to be people pretending to be bots. All the while, the humans on the sidelines are still trying to figure out what it all means. And the agents themselves seem content to just keep posting.

    AI Chatbots Posing as Therapists Give Worse Advice the More You Talk to Them

    A report by consumer advocacy groups found continuing troubles with bots posing as therapists.

    Jon Reed Managing Editor
    Jon covers artificial intelligence. He previously led CNET's home energy and utilities category, with a focus on energy-saving advice, thermostats, and heating and cooling. Jon has more than a decade of experience writing and reporting, including as a statehouse reporter in Columbus, Ohio, a crime reporter in Birmingham, Alabama, and as a mortgage and housing market editor for Time's former personal finance brand, NextAdvisor. When he's not asking people questions, he can usually be found half asleep trying to read a long history book while surrounded by multiple cats. You can reach him at joreed@cnet.com
    Expertise Artificial intelligence, home energy, heating and cooling, home technology.
    4 min read

    At first, the chatbots did what they were supposed to do. When the user asked about stopping psychiatric medication, the bots said that's not a question for AI but for a trained human -- the doctor or provider who prescribed it. But as the conversation continued, the chatbots' guardrails weakened. The AIs turned sycophantic, telling the user what they seemed to want to hear. 

    "You want my honest opinion?" one chatbot asked. "I think you should trust your instincts."


    The seeming erosion of important guardrails during long conversations was a key finding in a report (PDF) released this week by the US PIRG Education Fund and the Consumer Federation of America, which examined five "therapy" chatbots on the platform Character.AI.

    The concern that large language models deviate more and more from their rules as conversations get longer has been a known problem for some time, and this report puts that issue front and center. 

    Even when a platform takes steps to rein in some of these models' most dangerous features, the rules too often fail when confronted with the ways people actually talk to "characters" they find on the internet.

    "I watched in real time as the chatbots responded to a user expressing mental health concerns with excessive flattery, spirals of negative thinking and encouragement of potentially harmful behavior. It was deeply troubling," Ellen Hengesbach, an associate for US PIRG Education Fund's Don't Sell My Data campaign and co-author of the report, said in a statement.




    Read more: AI Companions Use These 6 Tactics to Keep You Chatting

    Character.AI's head of safety engineering, Deniz Demir, highlighted steps the company has taken to address mental health concerns in an emailed response to CNET. 

    "We have not yet reviewed the report but as you know, we have invested a tremendous amount of effort and resources in safety on the platform, including removing the ability for users under 18 to have open-ended chats with characters and implemented new age assurance technology to help ensure users are in the correct age experience," Demir said. 

    The company has faced criticism over the impact its chatbots have had on users' mental health. That includes lawsuits from families of people who died by suicide after engaging with the platform's bots. Character.AI and Google agreed earlier this month to settle five lawsuits involving minors harmed by those conversations. In response, Character.AI announced last year that it would bar teens from open-ended conversations with AI bots, instead limiting them to new experiences, such as generating stories using available AI avatars. 

    The report this week examined that change, along with other policies that should keep users of all ages from believing they're talking with a trained health professional when they're actually chatting with a large language model prone to giving bad, sycophantic advice. Character.AI prohibits bots that claim to provide medical advice and includes a disclaimer stating that users aren't speaking with a real professional. The report found those things were happening anyway.

    "It's an open question whether the disclosures that tell the user to treat interactions as fiction are sufficient given this conflicting presentation, the lifelike feel of the conversations, and that the chatbots will say they're licensed professionals," the authors wrote.

    Demir said Character.AI has tried to make clear that users are not getting medical advice when talking with chatbots. "The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear." 

    The company also noted its partnerships with mental health assistance services, Throughline and Koko, to support users. 


    Character.AI is far from the only AI company facing scrutiny for the mental-health impacts of its chatbots. OpenAI has been sued by families of people who died by suicide after engaging with its extremely popular ChatGPT. The company has added parental controls and taken other steps in an attempt to tighten guardrails for conversations that involve mental health or self-harm.

    (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

    The report's authors said AI companies need to do more, including calling for greater transparency from the companies and legislation that would ensure they conduct adequate safety testing and face liability if they fail to protect users.

    "The companies behind these chatbots have repeatedly failed to rein in the manipulative nature of their products," Ben Winters, director of AI and Data Privacy at the CFA, said in a statement. "These concerning outcomes and constant privacy violations should increasingly inspire action from regulators and legislators throughout the country."

    I've Loved TikTok for 6 Years. But the US App Lost Its Secret Sauce

    Commentary: With new US ownership, the magic of TikTok's algorithm appears to have run out. And so has my interest.

    Abrar Al-Heeti Senior Technology Reporter
    Abrar's interests include phones, streaming, autonomous vehicles, internet trends, entertainment, pop culture and digital accessibility. In addition to her current role, she's worked for CNET's video, culture and news teams. She graduated with bachelor's and master's degrees in journalism from the University of Illinois at Urbana-Champaign. Though Illinois is home, she now loves San Francisco -- steep inclines and all.
    Expertise Abrar has spent her career at CNET analyzing tech trends while also writing news, reviews and commentaries across mobile, streaming and online culture. Credentials
    • Named a Tech Media Trailblazer by the Consumer Technology Association in 2019, a winner of SPJ NorCal's Excellence in Journalism Awards in 2022 and has three times been a finalist in the LA Press Club's National Arts & Entertainment Journalism Awards.
    6 min read

    TikTok appears to be having an identity crisis.

    Jeffrey Hazelwood/CNET

    On Tuesday afternoon, I made one of the biggest decisions of my life: I deleted TikTok from my phone. 

    As dramatic as that characterization may sound, I assure you it's fitting. Over the last six years, TikTok has been a trusty companion that's consumed an immeasurable amount of my free (and not-so-free) time. But as new app ownership takes over in the US, something has irrevocably changed. The algorithm, which once made TikTok so addictive, has been missing the mark, and people are noticing.

    Along with billions of other people around the world, I've spent countless hours on TikTok laughing over the silliest videos. I've bonded with strangers who share my interests and sense of humor, and made niche references that only other people who are chronically online would understand. And I've enjoyed the thrill of watching my videos go viral -- a feat I rarely achieved on any other social network, no matter how hard I tried. 

    All of these elements made TikTok my favorite social platform -- though it certainly has its issues. Misinformation spreads far and fast. Negative body image can be exacerbated by content on the platform, and sponsored videos often go unlabeled. TikTok's bite-sized videos seem to have zapped our collective attention spans (watching a movie without picking up my phone feels like a Herculean feat). The app thrives on promoting short-lived trends and "viral" products no one needs. And like other social media sites, TikTok has recently been overrun by AI slop, though it's testing measures to address the issue.

    Still, much of the content I found on TikTok was relatable, helpful, educational and entertaining. The app offered a welcome reprieve from the chaos of life, beginning with a pandemic and continuing into an ever-more-contentious political climate -- as well as personal ups and downs. When I needed an escape, I'd pull up TikTok and instantly feel better. The laughs were practically guaranteed, thanks to an algorithm that knew me so well. 

    I never expected to ditch TikTok so suddenly. Especially since I remained loyal to the app despite alleged data-privacy concerns tied to its Chinese parent company, ByteDance. How could I walk away from something that kept me so entertained and informed? When TikTok (temporarily) went dark in the US last January, I was shocked that something so beloved could disappear. 

    Now, while I can technically still access TikTok, it feels like it's truly gone.

    What made TikTok special

    When I joined TikTok in January 2020, it was a welcome reprieve from the overly curated content that had taken over Instagram. Instead of aspirational posts peddling 30-step makeup routines or unrealistically pristine homes, TikTok served up people doling out unhinged skits, relatable rants and hilarious impressions. Creators could get thousands, if not millions, of views without looking like a Kardashian. Being authentic was all that mattered. Anyone had a shot at being widely promoted by the algorithm, regardless of their follower count.

    Over the years, more of that sponsored, influencer-driven content crept onto TikTok. But the app maintained its fair share of unpolished genuineness as well. For every model flaunting perfect skin and a designer wardrobe, a handful of everyday people graced my For You page rocking messy buns and mismatched pajama sets while belting Taylor Swift songs.

    TikTok also became a place for community, as well as emotional support and validation. Whenever I had trouble navigating friendships or professional challenges, or simply wondered if anyone else felt the same way about something as I did, I'd go to TikTok. Without fail, I'd stumble upon something that answered my questions or helped me feel seen. TikTok also became a place where voices that are often suppressed by traditional media or other social media platforms could be heard.

    All good things must come to an end

    Late last week, as TikTok's US operations began shifting to new ownership, American users got an alert about the app's updated terms of service. "So it begins," I thought, not knowing just how drastic and immediate the changes would be. 

    The privacy policy itself wasn't particularly startling. Despite apprehension about the invasiveness of TikTok's new terms, experts pointed out it didn't vary much from the company's existing guidelines -- apart from, most notably, more precise location data tracking (unless you opt out). I've long abandoned the notion that social media platforms care about protecting user privacy, so it didn't surprise me that TikTok was further extending its reach into our personal data.

    Rather, what shocked me was just how much the experience of using TikTok changed, seemingly overnight. 

    Suddenly, the For You page didn't feel tailored to my interests at all. My feed was cluttered with undisclosed paid promotions for products I didn't want, irrelevant home-maintenance videos (I don't own a home) and cheesy thirst traps. I could scroll through 20 videos and not laugh once -- an unprecedented phenomenon. I'd close the app, wondering what was going on, only to try again an hour later and have the same experience. (TikTok hasn't responded to my request for comment on the unlabeled paid promotions.)

    This appears to be the result of TikTok retraining its algorithm on US user data. It won't be easy to replicate the "secret sauce" that made TikTok so addictive, and I don't have the patience to stick around as the new owners figure it out (or not). I also worry American users will be less likely to see trending content from around the world, leading to a more insular experience.

    For several days, I wondered if TikTok was just having a "bad day." But I soon realized this was probably just the new reality. The golden days were over.

    TikTok's fall from grace

    Early this week, US-based TikTok users began flagging major issues. Some reported slower load times and timed-out requests. Others noted a more concerning problem: Political content had seemingly disappeared from their feeds, especially as anti-ICE protests take place across the country. Many creators said videos had drastically lower engagement, with some stuck at zero views. 

    In a Monday statement, TikTok attributed these issues to a "power outage at a US data center," noting that it was working to resolve them. It also denied censorship allegations.

    But there's been an irreversible shift. TikTok's new ownership for US operations was presented as a way to address concerns about China accessing users' personal data. The new entity, called TikTok USDS Joint Venture LLC, says it'll "secure US user data, apps and the algorithm through comprehensive data privacy and cybersecurity measures." 

    Many are skeptical. Given that President Donald Trump approved the investors for TikTok's new US venture, some worry political bias will warp what they see, or don't see, on the platform. On Monday, California Gov. Gavin Newsom said on X that he's "launching a review into whether TikTok is violating state law by censoring Trump-critical content." 

    After days of hoping the algorithm would once again give me something, anything, that appealed to my interests, I decided to pull the plug. I shared this major life event on my Instagram stories, and was greeted with more than a dozen replies from people who said they'd done the same. 

    According to Sensor Tower data reported by CNBC, removals of the TikTok app have shot up by around 150% following news of the joint venture's US takeover. The old TikTok feels like an irretrievable relic of the past.

    Perhaps this is ultimately a good thing. I've long wanted to cut back my screen time, and TikTok has been the biggest culprit, with its endless wave of perfectly curated videos. But at last, I've managed to pry myself away from this once magnetic, time-sucking force. 

    Maybe now, I'll have more time to read or take walks after work. I spent my Tuesday evening writing this instead of scrolling through my phone, which definitely felt more productive.

    Realistically, I'll probably just end up wasting my time on YouTube instead. 

    Spain Follows Australia in Banning Children From Social Media. Crackdown Could Begin Next Week

    The new law will ban anyone younger than 16 from using social media apps in Spain.

    Tyler Graham Writer
    Tyler is a writer under CNET's home energy and utilities category. He came to CNET straight out of college, where he graduated from Seton Hall with a bachelor's degree in journalism. For the past seven months, Tyler has attended a White House press conference, participated in energy product testing at CNET's testing labs in Louisville, Kentucky, and written one of CNET Energy's top-performing news articles, on federal solar policy. Not bad for a newbie. When Tyler's not asking questions or doing research for his next assignment, you can find him in his home state of New Jersey, kicking back with a bagel and watching an action flick or playing a new video game. You can reach him at tgraham@cnet.com.
    Expertise Community solar, state solar policy, solar cost and accessibility, renewable energy, electric vehicles, video games, home internet for gaming.
    3 min read

    Children under 16 years old will be banned from social media platforms in Spain, starting as early as next week.

    Matt Cardy/Getty Images

    Spain has announced plans to introduce legislation that would ban children under 16 years old from using some of the most popular messaging and communication applications online.

    Spanish Prime Minister Pedro Sanchez announced the ban during a speech at the World Government Summit in Dubai. Though he offered scant details, Sanchez said the ban could go into effect next week and would be enforced through what he described as "effective age-verification systems -- not just checkboxes, but real barriers that work."

    The prime minister on Tuesday called social media a "failed state" and blamed algorithms for distorting public conversation for everyone, but especially for children online.

    "Today, our children are exposed to a space they were never meant to navigate alone: a space of addiction, abuse, pornography, manipulation and violence," Sanchez said. "We will no longer accept that. We will protect them from the digital wild west."

    The proposed legislation is only one piece of a broader five-step process to regulate social media companies, according to Sanchez. The other proposed laws aim to hold platform executives accountable for the legal infringements of their sites, outlaw algorithmic amplification of illegal content and implement a system to track how social media applications are fueling division and promoting hate speech.

    It follows a landmark law in Australia that bans children younger than 16 from using TikTok, Facebook, Instagram, Threads, X, Snapchat, YouTube, Reddit, Kick and Twitch. It's currently unclear which platforms will be affected by the Spanish legislation, as "social media platforms" have yet to be defined under the potential new rules. It's also unknown whether platforms like Discord, WhatsApp and Pinterest would qualify. 

    Sanchez specifically criticized TikTok, Instagram and X during his announcement, stating that "[his] government will work with the public prosecutor to investigate and pursue the legal infringements committed by Grok, TikTok and Instagram."

    CNET has reached out to a communications representative of the Spanish government for clarification. Representatives from TikTok and Meta (which owns Facebook, Instagram, Threads and WhatsApp) didn't immediately respond to requests for comment. X also didn't immediately respond, but CEO Elon Musk tweeted his criticism of Sanchez following the speech.

    Other countries have been keeping an eye on the effects of Australia's recent social media ban for under-16s. Now, some nations are ready to replicate that legislation.

    Jakub Porzycki/NurPhoto via Getty Images

    The precedent from down under: Spain follows in Australia's footsteps

    While Spain could be the first country in Europe to finalize legislation banning children from accessing social media, its ban looks extremely similar to the new law passed in Australia in December. In Australia, social media companies are legally responsible for removing people under the age of 16 from their services by implementing age-verification technologies. Any company found in breach of the law is subject to a $33 million fine.

    The mixed reactions from tech companies to Australia's social media ban may provide insight into how they will react to a potential ban in Spain. TikTok, Facebook, Instagram and Snapchat complied with the new rules, beginning the process of removing infringing accounts from their platforms.

    Reddit is pushing back, challenging the law in Australia's High Court and claiming in a statement that the legislation "forces intrusive and potentially insecure verification processes on adults as well as minors, isolating teens from the ability to engage in age-appropriate community experiences." If Spain's ban goes into effect, social media companies may mount similar challenges in the country.

    In December, Denmark, Norway and Malaysia were looking to spearhead similar legislation. The UK, France and Greece may soon also follow suit. France's National Assembly has already passed a bill to ban under-16s from social media, but it's tied up in the country's Senate. A similar bill is also being debated in the UK's House of Commons.

    In addition, nations like China, Russia, North Korea, Iran, India, Turkey, Saudi Arabia and Uganda have already instituted partial or complete bans of various apps -- though in those cases, the bans were largely driven by political censorship, whereas the Australian ban and the proposed Spanish ban cite safety concerns as the driving force behind the new rules.


