    Reddit Takes Australia's Under-16 Social Media Ban to the High Court

    Following its announcement of tougher safety rules, Reddit is swiftly moving to contest the Australian law in court.

Alex Valdes
Alex Valdes from Bellevue, Washington has been pumping content into the Internet river for quite a while, including stints at MSNBC.com, MSN, Bing, MoneyTalksNews, Tipico and more. He admits to being somewhat fascinated by the Cambridge coffee webcam back in the Roaring '90s.
    3 min read

    Reddit says it disagrees with the scope, effectiveness, and privacy implications of the new Australia social media ban for people under 16.

    Reddit

Reddit, the social media platform and community forum, announced on Thursday that it is challenging Australia's under-16 social media ban in the country's High Court.

    A statement posted to X said that the new law, which bans Australians aged 15 and younger from using apps such as Reddit, TikTok, Facebook, Instagram, Threads, X, Snapchat, YouTube, Kick and Twitch, "has the unfortunate effect of forcing intrusive and potentially insecure verification processes on adults as well as minors, isolating teens from the ability to engage in age-appropriate community experiences." 

    The move comes just days after the San Francisco-based company implemented age verification measures in Australia.

    Initially, Reddit appeared to be complying with the Australian law without resistance. On Tuesday, Reddit said it would verify that new members and current account holders in Australia are at least 16. It also announced that account holders under 18 worldwide will get modified versions of the app that prevent access to NSFW and mature content, with stricter chat settings and no ad personalization or sensitive ads.


    Don't miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.


    A representative for Reddit did not immediately respond to a request for comment.

    Reddit calls Australian law 'arbitrary'

    Earlier this week, Reddit said the legislation limits free expression and privacy and "is arbitrary, legally erroneous, and goes far beyond the original intent of the Australian Parliament, especially when other obvious platforms are exempt."

    "We believe strongly in the open internet and the continued accessibility of quality knowledge, information, resources, and community building for everyone, including young people," the Tuesday statement said. "This is why Reddit has always been, and continues to be, available for anyone to read even if they don't have an account."

Age verification rules -- such as those in the UK's Online Safety Act -- are becoming the norm rather than the exception for governments and companies worldwide. The internet is increasingly being filtered to prevent children from accessing certain content online. It's a battleground where privacy, access to information and online safety are huge factors.

    Age prediction and verification

    Reddit had earlier stated that it would use an age prediction model to determine if new and existing account holders in Australia are at least 16 years old. If the model predicts that one of their members is under 16, Reddit will request proof of age. As outlined by the company, people must verify their birthdate by providing a government ID or taking a selfie. The company said it would suspend accounts of those it determines to be under 16.

    Reddit claimed it would only securely store age information and not the photos or documents used in the verification process. The information would not be visible to advertisers or sold to data brokers, and would reportedly only be used to "enhance content relevance and ad experiences." 

    Reddit said it was planning to increase platform safety for those under 18. If you're under 18, you won't be permitted to moderate communities dedicated to NSFW or mature content. The site will disable ad personalization, and you will not see ads for alcohol, gambling or other sensitive topics. 

    What to Know About Australia's Social Media Ban: Reddit Is Challenging the Law

    The new Australian law keeping children under 16 off of social media was bound to face legal scrutiny.

Alex Valdes
    4 min read

    Australia's sweeping ban on social media for young children: Will it set a trend worldwide?

    STR/Getty Images

    While governments around the world continue to tackle the thorny issue of age verification for certain websites and platforms, Australia is taking a blunter approach -- and Reddit is immediately pushing back in court. The social media company has filed a High Court challenge to Australia's new law, which went into effect on Tuesday.

    In a statement posted Thursday on Reddit, the company said that while it supports protecting users under 16, the legislation "forces intrusive and potentially insecure verification processes on adults as well as minors, isolating teens from the ability to engage in age-appropriate community experiences."

    The age-restricted apps include TikTok, Facebook, Instagram, Threads, X, Snapchat, YouTube, Reddit, Kick and Twitch. Younger teenagers will still have access to popular gaming platforms, including Discord, as well as social media platforms such as Messenger Kids, WhatsApp, and Pinterest, and educational resources like Kids Helpline, Google Classroom, and YouTube Kids. The ban also doesn't include AI chatbots such as ChatGPT, OpenAI's Sora or Google Gemini.




    Australia is the first country to launch this kind of age-restricted social media ban. Several other countries, including China, Russia, North Korea, Iran, Turkey, Uganda, Saudi Arabia and India, have full or partial social media bans, typically for political and security reasons.

Other countries, including Denmark, France, Norway and Malaysia, are considering bans similar to Australia's and will monitor the effectiveness of the Australian ban over the coming months.

    Although many studies have been conducted worldwide about the psycho-emotional effects of social media usage on children, the ban was inspired by The Anxious Generation, a book by US psychologist Jonathan Haidt. Annabel West, the wife of South Australian Premier Peter Malinauskas, encouraged her husband to consider a ban after reading Haidt's book in 2024.

    Companies must enforce the ban, or face massive fines

    Apps can use age-assurance technology, such as facial and voice analysis, to verify that a consumer is at least 16 years of age. Social media companies can also check how long an account has been active and assess age by language style and community memberships.

    Kids being kids, they will find workarounds -- such as one 13-year-old who held up a photo of her mother's face to fool the age verification. The Australian government said it will prevent kids from using false identity documents, AI tools or VPNs to fake their age and location.

    Tech companies will face a $33 million fine, as outlined in the legislation, if they fail to enforce the ban on users under 16.

    Two 15-year-old Australians, supported by the Digital Freedom Project, are challenging the social media ban, and the country's High Court could hear their case as early as February. They argue, in part, that the ban "will have the effect of sacrificing a considerable sphere of freedom of expression and engagement for 13-to-15-year-olds in social media interactions (including communications on personal and governmental matters, and the benefits to those young people of such interactions)."

TikTok said it will comply with the new laws, while noting that the restrictions "may be upsetting" to customers. Meta, which owns Facebook and Instagram, has already begun removing accounts of users under 16. Snapchat is preparing to boot nearly half a million Australian kids from their accounts. Not surprisingly, X boss Elon Musk has criticized the change, writing in 2024 that the law "seems like a backdoor way to control access to the Internet by all Australians."

    Some experts are praising Australia's ban

    Donna Rice Hughes, president and CEO of Enough is Enough, a nonprofit with a mission to "make the Internet safer for children and families," praised Australia for "taking a proactive stick approach to protect children from social media harms."

    Enough is Enough, which launched in 1992, has documented the myriad pitfalls of social media for children, including overuse, sexting, online exploitation, bullying, depression and more. The organization has published several internet safety guides and safety settings for social media apps.

    "This ban should be an incentive for social media and other online platforms and services to be proactive in implementing safer-by-design technologies and default parental management tools before rushing to market with products that are potentially dangerous for children and teens," Hughes told CNET.

    Hughes added that Big Tech has only itself to blame for governmental intervention such as Australia's. 

    "They've failed to do the right thing by our children from the start," she said. "The carrot approach of voluntary industry efforts to prioritize child safety over profits hasn't worked. A historic reality is that the first social media platforms to take off in the US and abroad, Facebook and Myspace, were developed for college-age students and older."

    The US does not have a sweeping age limit like Australia's, but several states are developing new laws to regulate and restrict teens' access to social media. 

    Trump Signs Vague AI Executive Order Blocking State Regulations

    The executive order establishes a federal task force to challenge state AI laws, although it's a bit light on details.

    Imad Khan Senior Reporter
    Imad is a senior reporter covering Google and internet culture. Hailing from Texas, Imad started his journalism career in 2013 and has amassed bylines with The New York Times, The Washington Post, ESPN, Tom's Guide and Wired, among others.
    Expertise Google | AI | Internet Culture
    4 min read

Trump issued an executive order to nationalize AI standards, a move that has met immediate pushback from critics.

    Getty Image/ Zooey Liao/ CNET

    President Donald Trump signed an executive order on Thursday that aims to block state AI regulations with the goal of creating a national framework for the tech industry to follow. 

    The Ensuring a National Policy Framework for Artificial Intelligence executive order states the tech industry must be "free to innovate without cumbersome regulation" as state regulations are creating a patchwork of laws. The order calls out states like Colorado for demanding AI models account for "ideological bias," which the administration says can lead to "false results" that impact protected groups. The order also says that some state laws regulate beyond state borders, infringing on interstate commerce, which is the domain of the federal government.

The order notes it shall ensure that "children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded." It also states it won't target "lawful state AI laws," which include child safety protections, data center permitting reforms, and government procurement and use of AI, with "other topics as shall be determined." Beyond that, the order is slim on details of what the administration would ultimately try to enforce regarding AI.

    The administration will set up an AI litigation task force within the next 30 days with the goal of challenging state laws. Within the next 90 days, Secretary of Commerce Howard Lutnick must publish a report on existing state laws that go against the executive order or violate the First Amendment, as well as any other parts of the Constitution. The order may also withhold broadband development funding from states.

    The executive order is a follow-up to the AI Action Plan the president signed over the summer, which slashed regulations and gave the AI industry a longer leash to continue expanding and developing, despite concerns. A Monday Truth Social post by Trump details his thinking regarding AI regulation and global competition.

    The White House did not respond to a request for comment.




    "On the heels of Congress correctly deciding for the second time not to pass legislation that would ban states from regulating artificial intelligence, the president should recognize that this is a misguided, unpopular, and dangerous policy choice," Travis Hall, director for state engagement at the Center for Democracy & Technology, told CNET in a statement. 

    Hall said the states need to be allowed to safeguard their citizens.


    "The power to preempt rests firmly with Congress, and no executive order can change that," Hall said. "State lawmakers have an important role to play in protecting their constituents from AI systems that are untrustworthy or unaccountable. They should remain steadfast in responding to the real and documented harms of these systems."

    Some advocates, however, feel that clear national regulations are best for the industry.

"China is moving full speed ahead in efforts to replace the US as the home of the next big AI breakthroughs," said Gary Shapiro, CEO and vice chair of the Consumer Technology Association, an industry trade group that also organizes the annual CES conference in Las Vegas.

    "This EO ensures the US can compete, elevating the federal government as the only body equipped to set nationwide guardrails for AI," Shapiro said.

Without offering evidence, Shapiro asserted that this order gives the American AI industry the "breathing room to build boldly and responsibly," and that a nationalized framework makes compliance easier for small businesses and startups.

    Congressman Ted Lieu of California posted on X, the social media platform formerly known as Twitter, saying the order is unconstitutional and will be challenged in court. Ultimately, he said he believes executive orders are ineffective and standards must be passed through Congress. 

The executive order comes as states have been attempting to regulate AI, particularly as the technology spreads into nearly every aspect of society, with Congress and the Executive Branch seeking to push back.

    Some states have passed laws making it a crime to create sexual images of people without their consent. Others have placed restrictions on insurance companies using AI to approve or deny health care claims. Currently, Congress hasn't passed any legislation regulating AI on a national scale. 

    Last month, 35 states and the District of Columbia urged Congress not to block state laws regarding AI regulation, warning of "disastrous consequences." Congress ultimately chose not to interfere earlier this month. Companies, including Google, Meta, OpenAI and Andreessen Horowitz, have been calling for national AI standards rather than litigating across all 50 states.

    "This David Sacks-led executive order is a gift for Silicon Valley oligarchs who are using their influence in Washington to shield themselves and their companies from accountability," said Michael Kleinman, head of US policy at the Future of Life Institute, a non-profit that focuses on the risks of AI to humanity. 

    "No other industry operates without regulation and oversight, be it drug manufacturers or hair salons; basic safety measures are not just expected, but legally required," Kleinman said. "AI companies, in contrast, operate with impunity. Unregulated AI threatens our children, our communities, our jobs and our future."

Kleinman went on to say that, given the Senate voted against blocking state regulations -- with even conservative lawmakers vocal in their support for local policymakers -- there is no democratic mandate for Trump's executive order.

    (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)


    Oxford's New Word of the Year? It's Designed to Bait, Debate and Irritate

    Firstly, it's not just one word. It's also an unpleasant aspect of our digital life today.

    Macy Meyer Writer II
    Macy is a writer on the AI Team. She covers how AI is changing daily life and how to make the most of it. This includes writing about consumer AI products and their real-world impact, from breakthrough tools reshaping daily life to the intimate ways people interact with AI technology day-to-day. Macy is a North Carolina native who graduated from UNC-Chapel Hill with a BA in English and a second BA in Journalism. You can reach her at mmeyer@cnet.com.
Expertise Consumer AI products and their real-world impact
Credentials
• Macy has been with CNET for nearly two years. Prior to CNET, she received a North Carolina College Media Association award in sports writing.
    3 min read

    According to OUP, usage of "rage bait" has roughly tripled over the past year.

    Anadolu/Getty Images

    In a move reflecting the darker side of the social-media era, Oxford University Press has named "rage bait" as its 2025 Word of the Year. (It's actually two words, but don't let that send you into a rage.) The phrase refers to online content deliberately designed to provoke anger or outrage by being provocative, offensive or otherwise manipulative, with the explicit aim of boosting engagement, clicks or shares. 

According to Oxford University Press, usage of "rage bait" has roughly tripled over the past year. In announcing the choice, the organization noted that this surge isn't just a change in vocabulary. It points to a larger shift in how online platforms and content creators capture attention, often by exploiting emotional triggers rather than curiosity or honest interest.

Casper Grathwohl, president of Oxford Languages, said that this trend marks a progression from earlier waves of click-driven sensationalism toward a more emotionally manipulative digital environment -- one where outrage, not intrigue, is the currency that pays.

"Rage bait shines a light on the content purposefully engineered to spark outrage and drive clicks," Grathwohl said. "And together, they form a powerful cycle where outrage sparks engagement, algorithms amplify it, and constant exposure leaves us mentally exhausted. These words don't just define trends; they reveal how digital platforms are reshaping our thinking and behaviour."




    2025's other big words include 'parasocial' and '67'

    Rage bait isn't the only word or phrase gaining recognition this year. Two other major dictionaries have picked their own Words of the Year, each illuminating a different facet of our cultural moment.

    • The Cambridge Dictionary named "parasocial" as its 2025 Word of the Year, capturing the growing phenomenon of one-sided relationships people form with celebrities, influencers, fictional characters -- and now, increasingly, with AI personalities. The word reflects how many of us now treat virtual or distant figures as if they were friends, despite knowing that the connection is unreciprocated.
    • Dictionary.com selected "67" as its Word of the Year. Pronounced "six-seven," this term is a slang expression that's playful, ambiguous and rooted in meme culture. While 67 might not carry a dictionary-style definition, its rise points to how younger generations express attitudes of indifference, irony or insider-like humor in the digital age.

    A quick look back at some past Words of the Year

    To understand what 2025's picks reveal about our time, it helps to glance at some past winners, which show how language shifts in response to social moods, technology and world events.

    • In 2024, Oxford's Word of the Year was "brain rot," a phrase meant to capture the mental fatigue, dissatisfaction, or dulling sensation people feel after endless scrolling through trivial or low-quality online content.
    • 2023's winner was "rizz," a slang term for charisma or personal charm. 
    • In 2022, the winning phrase was "goblin mode," reflecting a mood of laziness, self-indulgence, or rejecting social expectations -- especially as the world grappled with pandemic aftershocks. 

Earlier years show a variety of themes. 2019's "climate emergency" stood out as concern over global warming surged. In 2016, "post-truth" became the word, capturing a time of political upheaval, misinformation and shifting trust in facts.

    Study Finds Most Teens Use YouTube, Instagram and TikTok Daily

    About a fifth of US teens say they're on TikTok almost constantly, according to a Pew Research survey.

    Omar Gallaga
    3 min read

    Use of some social media platforms among teens is rising, according to Pew Research.

    Xavier Lorenzo/Getty Images

    Most young teens in the US visit social media platforms, including TikTok, YouTube and Instagram, at least once a day, according to a new report from Pew Research.




    Pew's survey of 1,458 teens aged 13 to 17 found that, after a dip in social media use in 2022, use of TikTok, YouTube and Instagram is spiking, with YouTube in particular popular across all demographics, including gender, race, ethnicity and income levels. TikTok is also a constant presence for a fifth of teens: 21% of them said they visit TikTok almost constantly. 


The effects of social media use on teens have been heavily debated, and the controversy helped lead Australia to ban the platforms for teens younger than 16. Some US states have also moved to limit or ban social media for minors or to introduce age-verification rules.

    Pew has been tracking social media use among teens since around 2009, publishing regular reports since 2014. This year, Pew added statistics for the use of chatbots, including OpenAI's ChatGPT. 

    The Pew report found that almost two-thirds of US teens, 64%, use chatbots. The results are higher for older teens, aged 15 to 17 (68%), than for younger teens, aged 13 to 14 (57%). Black and Hispanic teens, and teens from higher-income homes, are more likely to use chatbots, the report found.

    (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

    Increased time online can lead to increased risks

    The findings from Pew Research are in line with what one long-running children's advocacy group is seeing.

    "Teens are using social media platforms more often and at younger ages, and that increases their exposure to bullying, grooming, and high-pressure interactions," said Michael Medoro, chief of staff and chief operating officer at Childhelp. The organization, founded in 1959, offers a child abuse hotline and educational curriculum aimed at preventing child abuse and helping victims.

    Medoro said constant usage of social media apps can be a stressor for teens. 

    Read more: Australia Bans Social Media for Kids Under 16

    "Our hotline counselors hear from teens who feel overwhelmed and overstimulated by constant notifications and comparisons to others," he said. "Increased time on apps usually means increased risk for emotional distress, especially without strong guardrails."

    While social media bans can be effective in some ways, Medoro said that many teens find workarounds. Parents need to engage with and check in on their children regularly, and platforms need to implement age checks and safety features to protect minors.

    "Governments, families, schools and tech companies all have a part to play," he said.

    The organization is also concerned about Pew's findings about the increasing use of chatbots, particularly among marginalized communities. Chatbots can be a place teens turn to for mental-health guidance when they're struggling with loneliness, anxiety or isolation. Those interactions can be hard for parents or others to spot. 

    "Chatbots can also give unsafe advice when a teen is looking for help online instead of seeking a mental health professional, which can deepen existing issues," Medoro said.


