Gadget Review

Reddit User Uncovers Who Is Behind Meta’s $2B Lobbying for Invasive Age Verification Tech

C. da Costa
Image: Artapixel

A Reddit researcher just exposed how Meta funneled over $2 billion through shadowy nonprofits to push age verification laws that would force Apple and Google to build surveillance infrastructure into every device—while conveniently exempting Meta’s own platforms from the same requirements.

Following the Money Trail Through Dark Networks

Meta’s lobbying operation spans 45 states using nonprofit shells to avoid transparency requirements.

The investigation by GitHub user “upper-up” traces funding through organizations like the Digital Childhood Alliance (DCA), which launched December 18, 2024, and testified for Utah’s SB-142 just days later. Bloomberg and Deseret News reported Meta’s backing of DCA, part of a fragmented, $70 million super PAC strategy designed to evade FEC tracking by falling outside traditional election spending disclosure requirements.

What ‘Get Age Category API’ Really Means for Your Device

Proposed laws would embed persistent identity verification directly into operating systems.


The technical reality hits harder than policy abstractions. These bills mandate OS-level APIs that apps can query for age data—creating a permanent identity layer baked into your phone’s core functions. Meta’s Horizon OS for Quest VR already implements this infrastructure through Family Center controls. Now they want Apple and Google to build similar systems that every app can access, turning age verification into persistent device fingerprinting.
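
To make the concern concrete, here is a rough Python sketch of what an app-side query against such an OS-level age API might look like. The function name, age categories, and persistent device identifier below are illustrative assumptions rather than any shipping Apple, Google, or Meta interface; the point is that once the OS answers the question, every app that asks can tie the answer back to the same device-level identity.

```python
# Hypothetical, illustrative sketch of an app querying an OS-level
# "get age category" API of the kind these bills describe. The function
# name, categories, and fields are assumptions for illustration only --
# they are not a real Apple, Google, or Meta interface.
from dataclasses import dataclass
from enum import Enum


class AgeCategory(Enum):
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT = "18_plus"


@dataclass
class AgeSignal:
    category: AgeCategory   # the only datum the bills nominally require
    verified: bool          # whether the OS verified it (ID, parental consent, etc.)
    device_account_id: str  # persistent identifier: the "device fingerprinting"
                            # worry is that every app querying the API can tie its
                            # answer back to the same device-level identity


def get_age_category() -> AgeSignal:
    """Stand-in for the OS call an app would make under the proposed laws."""
    return AgeSignal(AgeCategory.AGE_13_15, verified=True,
                     device_account_id="device-1234")


def can_show_mature_feed() -> bool:
    signal = get_age_category()
    return signal.verified and signal.category is AgeCategory.ADULT


if __name__ == "__main__":
    print("Mature feed allowed:", can_show_mature_feed())
```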

The Curious Case of Platform Exemptions

Age verification bills target Meta’s competitors while leaving Meta platforms untouched.

Here’s where the lobbying gets surgical. The proposed laws hammer Apple’s App Store and Google Play with compliance requirements but reportedly spare social media platforms—Meta’s core business. It’s like Spotify lobbying for streaming regulations that only apply to Apple Music. The “child safety” rhetoric masks a competitive strategy that shifts liability from platforms to operating system makers.

Europe Shows a Different Path Forward

EU’s eIDAS 2.0 offers privacy-preserving age verification with zero-knowledge proofs that protect personal data.


The European Union’s Digital Identity Wallet takes a radically different approach. Zero-knowledge proofs let you verify age without revealing personal data—like showing you’re over 18 without disclosing your birthdate or identity details. It’s open-source, self-hostable, and only applies to large platforms while exempting FOSS and small entities. Meanwhile, US lawmakers seem ready to let Meta bamboozle them into complete privacy annihilation.
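
For a sense of how that works, here is a deliberately simplified Python sketch of the selective-disclosure idea behind such age checks: the verifier learns only that the holder is over 18, never the birthdate or name. It is a toy illustration, with salted hashes and an HMAC standing in for a real signature, not the actual eIDAS 2.0 credential format or a full zero-knowledge proof.

```python
# Deliberately simplified sketch of selective disclosure for an age check:
# the verifier learns only "over 18 = true", never the birthdate or name.
# A toy scheme (salted hashes + HMAC standing in for a real signature),
# NOT the actual eIDAS 2.0 credential format or a true zero-knowledge proof.
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key


def salted_digest(name, value, salt):
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()


def issue_credential(attributes):
    """Issuer: commit to each attribute behind a salted hash and sign the digests."""
    salts = {k: os.urandom(16) for k in attributes}
    digests = sorted(salted_digest(k, v, salts[k]) for k, v in attributes.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return {"digests": digests, "signature": signature,
            "holder_data": {"attributes": attributes, "salts": salts}}


def present_age_only(cred):
    """Holder: disclose only the over-18 flag and its salt, nothing else."""
    attrs, salts = cred["holder_data"]["attributes"], cred["holder_data"]["salts"]
    return {"digests": cred["digests"], "signature": cred["signature"],
            "disclosed": ("age_over_18", attrs["age_over_18"], salts["age_over_18"])}


def verify(presentation):
    """Verifier: check the signature and that the disclosed flag hashes to a digest."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    name, value, salt = presentation["disclosed"]
    return (hmac.compare_digest(expected, presentation["signature"])
            and salted_digest(name, value, salt) in presentation["digests"]
            and name == "age_over_18" and value is True)


if __name__ == "__main__":
    cred = issue_credential({"name": "Alice", "birthdate": "1990-05-01",
                             "age_over_18": True})
    print("Verified over 18:", verify(present_age_only(cred)))
```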

Your device’s trustworthiness hangs in the balance. These laws could force every Linux distribution and privacy-focused Android fork to implement identity verification or face legal liability. The choice between surveillance-free computing and regulatory compliance is coming faster than you think.



The Guardian

Meta on trial over child safety: can it really protect its next generation of users?

Katie McQue
Documents obtained by New Mexico’s attorney general include emails between Meta executives flagging urgent exploitation issues on Facebook and Instagram. Photograph: Carlos Barría/Reuters

Meta is facing a reckoning over its child safety practices as a trial surfaces fresh allegations that the company prioritized profit incentives and engagement over protecting children.

The landmark trial in New Mexico has now completed its fifth week, with the state attorney general resting the case on 5 March. Proceedings are expected to continue for another week as Meta presents its defense before the jury begins deliberations.

Central to the case are internal company documents obtained by the attorney general’s office during discovery, including emails between Meta executives flagging urgent issues of exploitation on Facebook and Instagram.



“Data shows that Instagram had become the leading two-sided marketplace for human trafficking,” stated one email to Adam Mosseri, the head of Instagram, sent from a member of Meta’s product team in 2019, which was read in court.

Prosecutors have presented evidence they say demonstrates delays and deficiencies in Meta’s ability to detect and report harms to children on its platforms, including the distribution of child sexual abuse material – photos and videos of the sexual exploitation of children – and child trafficking.

In both the New Mexico trial and concurrent court proceedings in Los Angeles, Facebook and Instagram features have also come under scrutiny for their alleged impact on children’s mental health. The plaintiffs claim the social networks are intentionally addictive and amplify content promoting self-harm, suicidal ideation and body dysmorphia.


The defense has vigorously rejected the attorney general’s allegations as “sensationalist, irrelevant and distracting arguments”, arguing that Meta goes to great efforts to make its platforms safe and continues to invest in new protective features for teens. The jury has also heard from company executives, including Mosseri and Mark Zuckerberg, Meta’s CEO, who have defended the company’s safety track record. They also argued that with billions of users across Facebook and Instagram worldwide, preventing all crimes and harms that take place on them would not be possible.

“We do our best to keep Facebook safe, but we cannot guarantee it,” said Mosseri, who flew into Santa Fe to be a witness for the defense, after his video deposition played in court earlier in the trial. “Safety is incredibly important to us.”

The lawsuit comes after a two-year investigation by the Guardian, published in 2023, which revealed Meta had difficulty stopping people from using its platforms to traffic children. The investigation is referenced multiple times in the lawsuit’s filings.

The two cases strike at an existential question for Meta: can it protect its next generation of users? If the company wants its social networks to survive and grow, it needs to recruit new, younger users. Meta argues its social networks provide safer environments than any other alternative. The New Mexico attorney general argues the tech company does not adequately serve the teens already on its sites and apps, as do the plaintiffs in the Los Angeles trial, who allege that Meta designs its products to addict young people. Child safety advocates who spoke at the trial in Santa Fe said the encryption of Messenger and an enormous backlog in Meta’s reports of child abuse have stymied investigations of child exploitation.


Documents from the cases have demonstrated just how much Meta wants young people on its platforms. One internal email reads: “Mark has decided that the top priority for the company in 2017 is teens,” referring to Zuckerberg. The CEO denied on the witness stand the company targets users under 13, its cutoff for creating an account, though he said age restrictions were difficult to enforce.

Meta faces global regulatory scrutiny as it stares down the dual verdicts in the US. Countries around the world are following in the footsteps of Australia’s ban on social media for those under 16. The fourth-most populous country in the world has already committed to an age gate of its own, as has the third-largest state in the US. The New Mexico and Los Angeles trials, if they end with findings of liability for child sexual abuse trafficking and intentional addiction for Meta, may sway more lawmakers to cut the company off from the users it needs.

Operation MetaPhile

One of the main pillars of New Mexico’s case is an investigation called “Operation MetaPhile” by the attorney general’s office. Undercover agents posing as girls aged under 13 were contacted by three suspects, who allegedly solicited them for sex after searching for minors through design features on Facebook and Instagram. Two made plans to meet the “girl” at a motel in Gallup, New Mexico.

The agents did not initiate any conversations about sexual activity, according to the state’s court filings. One of their accounts received a surge of activity, with hundreds of friend requests per day, and had accrued 7,000 followers within one month, an investigator said. Despite this activity, Meta did not shut the account down and instead sent it information about how to monetize accounts and grow its following, investigators said.


The state also presented allegations that Instagram’s algorithms connect pedophiles with one another or help them find sellers of child sexual abuse material, which Mosseri labelled as “unfair”.

“I think what we see with these particularly bad actors is they really actively try to work around our systems by disguising things,” Mosseri said. “They try to find each other on our platform.”

Former company executives testified against their ex-employer.

“I absolutely did not believe that safety was a priority, which is the primary reason that I left,” said Brian Boland, former Meta vice-president of partnerships, who spent 11 years at the company before leaving in 2020.

Encrypted Messenger blocked access to evidence of crimes

The New Mexico court heard how Meta’s decision to encrypt Facebook Messenger, which predators have used as a tool to groom minors and exchange child abuse imagery, has blocked access to crucial evidence of these crimes.


In December 2023, Meta introduced end-to-end encryption for Facebook Messenger, its direct messaging platform. Encryption ensures that only the sender and intended recipient can view messages by converting them into unreadable code that is decrypted upon receipt. The messaged content is not stored on Meta’s servers, and is not viewable by law enforcement.
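
A minimal sketch of that property, using generic public-key encryption (PyNaCl) rather than Messenger’s actual implementation, shows why a relaying server never sees message content:

```python
# Generic illustration of end-to-end encryption with public-key cryptography
# (PyNaCl), showing why a relaying server never sees plaintext. This is a
# concept sketch, not Messenger's actual implementation.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; only public keys are shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# Sender encrypts with their private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"meet at 6?")

# The service only relays `ciphertext`; without a private key it cannot read it,
# which is why providers lose visibility into message content.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at 6?'
```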

The National Center for Missing & Exploited Children (NCMEC), which is partially funded by Meta, called the move a “devastating blow to child protection”, and its representatives had met with Meta several times in attempts to dissuade the company from implementing encryption, the court heard.

American-headquartered social media companies are required by federal law to report any child sexual abuse material (CSAM), apparent violations of child sexual abuse trafficking, and indications of coercion and enticement of minors on their platforms to NCMEC. Acting as a clearinghouse, NCMEC forwards these “cyber tip” reports to the relevant law enforcement agencies across the US and internationally.

The encryption of Messenger means that “visibility into content or interactions that are occurring is taken away. That doesn’t mean that the abuse stops occurring,” testified Fallon McNulty, executive director of the exploited children division at NCMEC.


She said that Meta submitted 6.9m fewer reports to NCMEC in 2024, after Messenger’s encryption was implemented, than in the previous year.

Meta has previously defended encryption as safe because users can report any inappropriate interactions or abuse they experience while using Messenger. Privacy advocates commend encryption as the strongest protection against surveillance by law enforcement.

“We use sophisticated technology to proactively identify child exploitation content on our platform – and between July and September 2025 we removed over 10m pieces of child exploitation content from Facebook and Instagram, over 98% of which we found proactively before it was reported,” a Meta spokesperson said. “We also provide in-app reporting tools, with dedicated options to let us know if content involves a child.”

In her testimony, McNulty highlighted that relying on children to report abuse was not an adequate substitute for the scanning of messages and images now that Messenger was encrypted. According to NCMEC studies, a majority of children choose not to report any abuses or threats made to them on the platforms.


Mosseri said Instagram’s self-reporting mechanisms were not very effective compared with the company’s technological scanning for abuses, despite Meta’s earlier defense of Messenger’s encryption on the grounds that users can report abuse. He said plans to encrypt Instagram direct messages had been abandoned, and that the company had determined encrypting them would make it more difficult to keep children safe on the platform.

He said: “We find that using technology seems to be much more effective than user reports to find bad content.”

Reporting backlogs and errors affected child safety

The jury heard that between May 2017 and July 2021, Meta had a reporting backlog of 247,000 cyber tip reports of potential harms and abuses, which were several weeks or months old when they were sent to NCMEC. Because information about child abuse is often time-sensitive, these backlogs may have meant opportunities to prevent crimes or identify perpetrators were lost.

According to documents presented in evidence, thousands of other cyber tip reports were improperly classified as low priority. The company did not give NCMEC insight into the cause of the delays and mislabeling. NCMEC regarded the widespread misclassification as “a serious failing that affected child safety”, McNulty testified.


The jury heard how law enforcement had become frustrated with the lack of detail in some of Meta’s reports, which meant officers could not take further action and investigate them. Law enforcement officers who investigate potential child abuse previously told the Guardian that Meta had flooded the cyber tip reporting system with “junk” tips that were useless to investigators, and one officer made the same point on the witness stand. Other large platforms had done a better job of providing actionable information in their reports, McNulty said in her testimony.

In 2022, 31 of the country’s 61 Internet Crimes Against Children (ICAC) task forces opted out of receiving some lower-priority cyber tip reports from Meta because they considered the information too poor in quality to be actionable, the jury heard.

The quality issues with Meta’s cyber tips had been “going for years”, and NCMEC had expected them to be “resolved sooner”, McNulty said.

“Our image-matching system finds copies of known child exploitation at a scale that would be impossible to do manually, and we work to detect new child exploitation content through technology, reports from our community and investigations by our specialist child safety teams,” a Meta spokesperson said. “We also continue to support NCMEC and law enforcement in prioritizing reports, including by helping build NCMEC’s case management tool and labelling cyber tips so they know which are urgent.”


The Guardian has previously reported that AI-generated tips that cannot be confirmed to have been reviewed by a social media company employee often cannot be opened by law enforcement without a warrant, because of fourth amendment protections. Lawyers involved in such cases say this additional step can slow investigations into potential crimes.

At the trial, it was revealed that in 2022, more than 14m of Meta’s reports to NCMEC had not involved a human review, meaning they could not be opened by NCMEC or law enforcement without a warrant. The prevalence of unreviewed reports and the resulting impacts on law enforcement had been communicated to Meta several times, McNulty testified.

Teens, addiction, filters and self-harm content affected mental health

In a video deposition played in court, Zuckerberg acknowledged that some users, including children, find Meta’s platforms addictive, which is also the subject of a separate trial taking place in Los Angeles.

Internal documents from Instagram made clear how much the company knew about its tween users and their problems despite its 13-and-over policy, according to the plaintiffs’ lawyers. A 2018 presentation from Instagram revealed in the Los Angeles trial reads: “If we wanna win big with teens, we must bring them in as tweens.” Another from 2015 estimated that about 30% of 10-12-year-olds in the US use the photo-sharing app. Yet another detailed a goal of increasing the time 10-year-olds spent on the Instagram app, and one more documented how often 11-year-olds logged on to the app compared with older users.

At the New Mexico trial, Ian Russell, whose daughter Molly died by suicide in 2017 after viewing large amounts of harmful content on Instagram, testified for the state about the platform’s potential mental health impacts.

Russell said: “That inescapable stream of harmful content, the cumulative effect that content would have had on a growing brain, a young person, a 14-year-old, turned Molly from that bright, hopeful young person into someone who unbelievably thought she was a burden and a problem and that the best thing for her to do would be to end her life.”

Evidence presented at trial included internal communications about augmented-reality filters on Instagram that allowed users to alter their appearance, such as enlarging lips or eyes. An email from a former Meta employee to Zuckerberg warned that teens using these features would be at greater risk of self-image and mental health issues.

“As a parent of two teenage girls, one of whom has been hospitalized twice for body dysmorphia, I can tell you, the pressure on them and their peers coming through social media is intense with respect to body image,” the former employee wrote.

Jurors heard that a temporary ban was placed on the augmented-reality features in October 2019, and lifted by Zuckerberg in mid-2020.

“It has always felt paternalistic to me that we’ve limited people’s ability to present themselves in these ways, especially when there’s no data I’ve seen that suggests doing so is helpful or not doing so is harmful, and that there’s clearly demand for this type of expression,” the CEO said of his decision.

“Meta bans those that directly promote cosmetic surgery, changes in skin color or extreme weight loss,” a company spokesperson said.

Other internal documents presented in court alleged that Zuckerberg approved allowing minors to interact with artificial-intelligence chatbot companions despite warnings from safety staff that the bots could engage in sexual conversations. Prosecutors also alleged that Meta placed advertisements from companies, such as Walmart and Match Group, alongside content that sexualized children, potentially generating revenue from such material.

“Instagram Teen Accounts have built-in protections which limit who can contact them, and the type of content they see, defaulting them into private accounts and the strictest message settings, so they can only be messaged by people they follow or are already connected to,” a Meta spokesperson said. “Teens under 18 are automatically placed into Teen Accounts, and teens under 16 will need a parent’s permission to make any of these settings less strict.”

Arturo Béjar, a former Meta engineering director who became a whistleblower when his daughter received sexually inappropriate messages from strangers on Instagram, took the stand. Béjar told the court the platform’s recommendation system was “really good at connecting” predators with minors.

When Béjar reported the issue to the company, he said he understood that executives such as Zuckerberg and Chris Cox, chief product officer, already knew this was a problem.

“That’s when I first realized the executive team knows about the harm that’s falling on the product, and they’re choosing not to act on it,” Béjar said. “I don’t think we can trust Mark Zuckerberg and Meta with our kids.”

Gadget Review

Brazil’s Age Verification Law Triggers 250% VPN Surge Overnight

Alex Barrientos
Image: Pixnio - Bicanski

Brazil’s mandatory age verification law went live on March 17, 2026—and privacy-conscious adults immediately voted with their virtual feet. Proton VPN reported a staggering 250% increase in Brazilian sign-ups between Monday and Tuesday, as users scrambled to avoid submitting biometric scans and identity documents to access social media and adult content. Your Instagram habit now requires the same data disclosure as opening a bank account, and Brazilians aren’t having it.

The Digital Dragnet Demands Your Data

Brazil’s Digital ECA forces platforms to collect identity documents and biometric scans for age verification.

The Digital Estatuto da Criança e do Adolescente mandates “proportional, auditable, and technically secure” age verification across social media platforms (minimum age 16), adult websites (18+), and gaming services. Translation: you’re handing over facial recognition data or government ID scans to access TikTok.


Platforms face brutal enforcement—fines reaching $10 million USD per violation, plus potential service throttling or complete blocking in Brazil. The financial pressure guarantees compliance, even as implementation creates the privacy nightmare users are fleeing.

David Peterson, General Manager at Proton VPN, notes these surges “often reflect adult users turning to VPNs due to growing concerns about their privacy and online security.” Google Trends confirmed the spike, with VPN-related searches climbing steadily since enforcement began.

The Circumvention Economy Explodes

VPN adoption surges as adults refuse mandatory biometric collection for basic internet access.

VPNs solve the immediate problem through IP address masking—route your connection through servers outside Brazil, and platforms can’t determine your location for age verification. It’s digital sleight of hand that transforms a surveillance state requirement into a routing decision. The overnight adoption demonstrates how quickly privacy concerns translate into market behavior when governments mandate data collection.
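
A toy sketch shows why the workaround is effective: a platform enforcing a Brazil-only mandate keys its decision off the connecting IP address, and behind a VPN that address belongs to the exit server, not the user. The addresses and lookup table below are invented for illustration.

```python
# Toy illustration of why IP-based geo checks stop working behind a VPN:
# the platform sees the connection's source address, which is the VPN exit
# server's, not the user's. Addresses and the lookup table are invented.
GEO_DB = {"203.0.113.7": "BR", "198.51.100.9": "NL"}

def requires_age_verification(source_ip: str) -> bool:
    # A platform enforcing a Brazil-only mandate keys the decision off geo-IP.
    return GEO_DB.get(source_ip, "unknown") == "BR"

print(requires_age_verification("203.0.113.7"))   # direct connection from Brazil -> True
print(requires_age_verification("198.51.100.9"))  # same user via a VPN exit abroad -> False
```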


Yet TechRadar warns that “unreliable or ‘scam’ VPN applications often pose a greater risk to your data than the platforms you are trying to avoid.” Users fleeing mandatory surveillance risk falling into worse privacy traps with malicious VPN providers.

Meanwhile, Brazil’s National Data Protection Agency published guidance emphasizing “data minimization” and avoiding “unnecessary collection of sensitive information, such as biometrics”—advice that seems divorced from the law’s practical requirements.

The regulatory paradox is stark: legislation designed to protect children drives adults toward technical workarounds that may undermine the law’s protective intent entirely. California’s similar AB 1043 takes effect January 2027, suggesting Brazil’s privacy revolt might preview a much larger battle between digital surveillance and user autonomy.



Palm Beach Post

Florida attorney general's child predator crackdown reaches Discord

Melissa Pérez-Carrillo, Sarasota Herald-Tribune

Florida Attorney General James Uthmeier said his office would be investigating Discord, an online platform for group communication and online communities, for allegedly harboring online predators.

He made the announcement March 18 at the Sarasota County Sheriff’s Office.

After investigating other social media apps, like Roblox, TikTok and Snapchat, Uthmeier is turning his attention to Discord, where he said there have been countless instances of abuse.

The initiative is part of Uthmeier’s ongoing child predator takedown operation, which he said has amounted to 1,400 child predator arrests since he took office in February 2025.


Numerous subpoenas have been issued to Discord to aid in Uthmeier’s investigation. They seek information regarding marketing and promotional materials that discuss the platform’s safety for children, along with any evidence showing the company knows adults are pretending to be children on the app.

“I believe the government should only interfere in the private sector when it is truly essential,” Uthmeier said. “This is one of those occasions. There’s no free speech right to let our kids be hurt. There’s no free market principle to allow dangerous evil villains and predators to go after our kids.”

A spokesperson for the company, which is based in San Francisco, told the USA TODAY Network in a statement that it was "deeply committed to safety and we require all users to be at least 13 to use our platform."

Discord, it added, uses "a combination of advanced technology and trained safety teams to proactively find and remove content that violates our policies. We also maintain strong systems to prevent the spread of sexual exploitation and grooming on our platform, and work with other technology companies and safety organizations to improve online safety across the internet.


"Discord cooperates with law enforcement agencies, including those in Florida, and our reports have helped to play a material role in the prosecution of bad actors. We look forward to actively cooperating with the Attorney General’s office in this investigation."

What is Discord used for?

Discord is "widely used for online communication via messaging, audio, and video calls," according to Uthmeier's office.

As of 2026, Discord restricts access to age-restricted, 18+ servers behind mandatory age assurance, either an ID check or a video selfie. If a user is found to be under 13, their account will be banned.

Uthmeier said that suspects often start on more innocuous apps and then move to Discord because they think it is harder for parents or law enforcement to track.


Under legislation (HB 3) that Gov. Ron DeSantis signed into law in March 2024, Florida bars anyone under 16 from using social media platforms, except for 14- and 15-year-olds who obtain parental permission.

At first, the law essentially wasn’t in effect because a federal judge had blocked its enforcement. But that order was overturned in November by appellate judges who sided with the state, saying the law promotes the government’s interest in “protecting minors.”

Uthmeier finally issued an ultimatum March 9, saying tech companies would have 30 days to implement age restrictions on social media and 60 days for parental consent options to be available. Otherwise, he said his office would start filing lawsuits against platforms.



It’s still up to parents to monitor their children’s social media usage to ensure they don’t become victims, Sarasota County Sheriff Kurt Hoffman added.

Hoffman said a number of social media apps have weak protections because they lack age verification, and he hopes those barriers will be reexamined to protect victims, who can be as young as 6 or 7.

“Collectively, we can all make a difference in the lives of our children,” Hoffman said.

(This story was updated to add new information.)

Melissa Pérez-Carrillo covers breaking news and public safety for the Sarasota Herald-Tribune. Reach out at mperezcarrillo@gannett.com. Support local journalism by subscribing.

This article originally appeared on Sarasota Herald-Tribune: Florida attorney general's child predator crackdown reaches Discord

The Hill
Opinion

Opinion - Trump’s $10 billion TikTok ‘brokerage fee’ is just the tip of the iceberg

Kimberly Wehle, opinion contributor

On Friday, the Wall Street Journal reported that investors in the brand-new entity created to oversee content moderation on TikTok will pay a $10 billion fee — or 70 percent of the company’s value — to the U.S. Treasury as payment for the Trump administration’s role in brokering the pact.

Not only is this pay-to-play arrangement with the federal government unprecedented; it also smacks of possible corruption. Investment bankers typically receive less than 1 percent for the same “service.” President Trump is once again pushing the law where nobody dared go before and doing so with impunity. The implications for democracy and the rule of law are profound.

The deal arises from a 2024 statute mandating TikTok’s Chinese owner, ByteDance, either shut down its U.S. operations or sell to an American-based company. Lawmakers were concerned that a company tied to the Chinese government collecting the data of roughly 135 million American users posed national security concerns. After the divestment deadline passed, Trump issued an executive order directing the attorney general not to enforce the law and unilaterally extended the statutory deadline for compliance.


Investors in the new entity, which is called TikTok USDS Joint Venture LLC, include the software giant Oracle, the private equity firm Silver Lake and the Abu Dhabi-based artificial intelligence company MGX.

The group is led by Larry Ellison, chairman and founder of Oracle — one of the world’s richest men, who hosted a $100,000 per-head fundraiser for Trump in 2020. His son, David Ellison, became chairman and CEO of Paramount Skydance following the Trump-approved $8 billion merger of its predecessor Skydance Media and Paramount, owner of CBS News. Ellison later recruited Bari Weiss, founder of the conservative news site The Free Press, to run the network, which has since taken pro-Trump actions that ignited controversy, including killing a “60 Minutes” segment on Trump’s detention and deportation policies.

A CNBC report reveals the Trump administration has taken equity stakes in at least 10 companies, including 10 percent of the chipmaker Intel and a 5 percent stake in minerals startup Lithium Americas, agreeing to defer repayment of $182 million in debt that Lithium Americas owes the federal government — and thus the taxpayers — on a $2.5 billion loan.

The administration is now also a 10 percent shareholder in another startup called Trilogy Metals, which wants to extract copper and other minerals in Alaska via a project served by a proposed 211-mile access road. Trump approved the necessary permits and made a $35.6 million investment of federal money in exchange for part ownership and an option to buy another 7.5 percent. A similar deal went to USA Rare Earth — taxpayer dollars in exchange for an 8-16 percent stake in that company, which aims to open a “rare earth” mine in Texas and manufacture magnets in Oklahoma.


The Trump administration has also set itself up for possible ownership stakes in a private nuclear reactor developer and in a $65 billion defense company that makes rocket motors used in missiles. And its stake in U.S. Steel, taken as a condition for approving the company’s acquisition by Nippon Steel, means that Trump can now veto decisions to close or sell plants, block name changes and deny attempts to move the headquarters from Pittsburgh.

Although there’s a history of the federal government propping up private equity, it’s typically been on a temporary basis to resolve economic crises, as with the 2008 bailout of Chrysler. And unlike the recent Trump deals, that action was specifically authorized by Congress and did not involve any direct ownership of the auto manufacturer.

So how is Trump getting away with this?

It turns out that there is no law on the books that allows him to use federal money to take equity stakes in private companies as a condition to securing necessary federal approvals, but there’s no law clearly banning it either — probably because Congress never had to think about it. Prior presidents apparently didn’t think about employing Trump’s tactics, either, for a host of possible reasons.


If the government owns a significant stake in one company, conflicts of interest abound. It could favor its company over competitors. Or worse, it could use its leverage to impose political mandates that increase an Oval Office holder’s power at the expense of shareholders and taxpayers.

Federal stakes in private industry could also distort the free markets, which have long been a cornerstone of conservative politics, while dampening innovation. With the federal government behind the biggest players in an industry, why would anyone else try to enter the field? Meanwhile, taxpayers bear the risk of failure — without the buy-in from Congress, which is supposed to control the purse strings and make the laws.

This is yet another maneuver to consolidate power in one place: Donald J. Trump. Companies that directly answer to Trump-as-shareholder — which include media conglomerates, tech platforms and significant environmental players — will now think twice about criticizing him publicly or supporting his perceived political opponents.

Although an anti-corruption group promptly sued the Trump administration over the TikTok deal, courts can only do so much for democracy. If Congress continues to fail voters, they must send incompetent members packing. The Constitution’s survival depends on it.


Kimberly Wehle is a professor at the University of Baltimore School of Law and author of “How to Read the Constitution — and Why,” as well as “What You Need to Know About Voting — and Why” and “How to Think Like a Lawyer — and Why.”

Copyright 2026 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

For the latest news, weather, sports, and streaming video, head to The Hill.

Reason

Social Media Panic Lands Joseph Gordon-Levitt a U.N. Gig

Meagan O'Rourke

Joseph Gordon-Levitt has a new gig, but it's not in Hollywood. On Tuesday, the actor was appointed as the United Nations' (U.N.) first global advocate for human-centric digital governance.

In this role, Gordon-Levitt will "strengthen public understanding of how digital technologies shape everyday life, rights and opportunities," according to a U.N. press release. In other words, he will be one of the U.N.'s chief advocates for regulating social media platforms.

In a video explaining his jargon-filled title, Gordon-Levitt warned that social media is causing an "epidemic of mental health issues and loneliness," and a "rise in polarization and extremism and authoritarianism." He said "governments need to get in the game" and curb these "damaging side effects" from social media. 


This is not the first time Gordon-Levitt has advocated for crackdowns on online platforms. In February, Gordon-Levitt traveled to Capitol Hill, where he urged senators to pass the Sunset Section 230 Act. The bill, introduced by Sens. Lindsey Graham (R–S.C.) and Dick Durbin (D–Ill.), would repeal Section 230—the federal law that limits platforms' liability for third party speech—two years after the date of enactment. 

The "first step" in combatting the negative influence of Big Tech is to "sunset Section 230," he said. "I want to see this thing pass 100 to zero. There should be nobody voting to give any more impunity to these tech companies, nobody."

After receiving backlash for these comments, including from journalist Taylor Lorenz, Gordon-Levitt clarified that he didn't want to completely scrap Section 230; he only wanted to reform it. 

During his speech on Capitol Hill, Gordon-Levitt invoked his authority as a concerned father of three to push for more online safety regulations. But emotional pleas do not always make for good policy. In fact, protecting children online has motivated more than a dozen bills in the House alone, many of which would infringe on free speech and privacy. 


One of these bills, the Reducing Exploitative Social Media Exposure for Teens (RESET) Act, would ban anyone under the age of 16 from creating or maintaining social media accounts. Another, the App Store Accountability Act, would require age verification for access to app stores and parental consent for users under 18. Most notably, the controversial Kids Online Safety Act (KOSA) would require online platforms to enforce policies and procedures to "address" various "harms to minors." Reason's Elizabeth Nolan Brown notes that KOSA would compel platforms to "censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue."

What proponents of these bills often fail to recognize is the many benefits that social media can offer kids. According to a 2022 Pew Research Center poll among teenagers, just 9 percent said that social media had a mostly negative effect on their lives. Citing the upsides of friendships and connections, 32 percent said social media had a mostly positive effect on them. Another study found that disconnection was a greater threat to adolescents' self-esteem than heavy social media use, challenging the narrative that social media causes isolation. 

Thankfully, Gordon-Levitt's role at the U.N. will likely be symbolic as it is housed in the Internet Governance Forum, an office that only "informs and inspires those with policy-making power in both the public and private sectors." Still, it is disappointing to see such an influential actor and champion of artistic expression make "digital governance" a celebrity cause. 

The post Social Media Panic Lands Joseph Gordon-Levitt a U.N. Gig appeared first on Reason.com.
