The Age Verification Trap

Verifying users’ ages undermines everyone’s data protection

5 min read

Waydell D. Carvalho is an independent researcher and systems architect in AI governance, regulatory design, and socio-technical risk.

Conceptual collage of an iPhone featuring a carnival-esque "fool the guesser" sign. The phone casts a shadow that contains a three-dimensional scan of a human head, symbolizing data collection.
Nicole Millman; Source images: iStock

Social media is going the way of alcohol, gambling, and other social sins: societies are deciding it’s no longer kids’ stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16.

When regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep that data indefinitely. Age-restriction laws thus push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law.

This is the age-verification trap. Strong enforcement of age rules undermines data privacy.

How Does Age Enforcement Actually Work?

Most age-restriction laws follow a familiar pattern. They set a minimum age and require platforms to take “reasonable steps” or “effective measures” to prevent underage access. What these laws rarely spell out is how platforms are supposed to tell who is actually over the line. At the technical level, companies have only two tools.

The first is identity-based verification. Companies ask users to upload a government ID, link a digital identity, or provide documents that prove their age. Yet in many jurisdictions, 16-year-olds do not have IDs. In others, IDs exist but are not digital, not widely held, or not trustworthy. Storing copies of identity documents also creates security and misuse risks.

The second option is inference. Platforms try to guess age based on behavior, device signals, or biometric analysis, most commonly facial age estimation from selfies or videos. This avoids formal ID collection, but it replaces certainty with probability and error.

In practice, companies combine both. Self-declared ages are backed by inference systems. When confidence drops, or regulators ask for proof of effort, inference escalates to ID checks. What starts as a light-touch checkpoint turns into layered verification that follows users over time.
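To make the escalation logic concrete, here is a minimal sketch of how such a layered system might be wired together. Every signal, threshold, and function name is an invented assumption for illustration; no platform publishes its actual decision rules.

```python
# Hypothetical sketch of layered age verification that escalates as confidence
# drops. Thresholds, signals, and names are illustrative assumptions, not any
# platform's real implementation.

from dataclasses import dataclass

@dataclass
class AgeSignal:
    self_declared_age: int
    inferred_age: float   # e.g., from facial estimation or behavioral signals
    confidence: float     # model's confidence in inferred_age, 0..1

MIN_AGE = 16
CONFIDENCE_FLOOR = 0.85

def verification_step(signal: AgeSignal) -> str:
    """Decide the next enforcement action for one user."""
    # Step 1: accept the self-declaration only if inference loosely agrees.
    if signal.self_declared_age >= MIN_AGE and signal.inferred_age >= MIN_AGE:
        if signal.confidence >= CONFIDENCE_FLOOR:
            return "allow"                # light-touch path
        return "request_selfie_check"     # escalate: biometric age estimation
    # Step 2: inference says underage, or declaration and inference disagree.
    if signal.inferred_age < MIN_AGE:
        return "request_id_upload"        # escalate: document verification
    return "request_selfie_check"

print(verification_step(AgeSignal(self_declared_age=21, inferred_age=17.2, confidence=0.6)))
# -> "request_selfie_check": one low-confidence guess pulls an adult into re-verification
```

Note what the sketch implies: the default path is cheap, but any drop in confidence routes the user toward more data collection, never less.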

What Are Platforms Doing Right Now?

This pattern is already visible on major platforms.

Meta has deployed facial age estimation on Instagram in multiple markets, using video-selfie checks through third-party partners. When the system flags users as possibly underage, it prompts them to record a short selfie video. An AI system estimates their age and, if it decides they are under the threshold, restricts or locks the account. Appeals often trigger additional checks, and misclassifications are common.

TikTok has confirmed that it also scans public videos to infer users’ ages. Google and YouTube rely heavily on behavioral signals tied to viewing history and account activity to infer age, then ask for government ID or a credit card when the system is unsure. A credit card functions as a proxy for adulthood, even though it says nothing about who is actually using the account. The gaming platform Roblox, which recently launched a new age-estimation system, is already contending with users selling age-verified child accounts to adult predators seeking entry to age-restricted areas, Wired reports.

For a typical user, age is no longer a one-time declaration. It becomes a recurring test. A new phone, a change in behavior, or a false signal can trigger another check. Passing once does not end the process.

How Do Age Verification Systems Fail?

These systems fail in predictable ways.

False positives are common. Platforms misclassify adults as minors because they have youthful faces, share family devices, or show otherwise unusual usage patterns. They lock accounts, sometimes for days. False negatives persist, too. Teenagers quickly learn to evade checks by borrowing IDs, cycling accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.

Is Age Verification Compatible with Privacy Law?

This is where emerging age-restriction policy collides with existing privacy law.

Modern data-protection regimes all rest on similar ideas: collect only what you need, use it only for a defined purpose, and keep it only as long as necessary.

Age enforcement undermines all three.

To prove they are following age-verification rules, platforms must log verification attempts, retain evidence, and monitor users over time. When regulators or courts ask whether a platform took reasonable steps, “we collected less data” is rarely persuasive. For companies, defending against accusations of failing to properly verify age takes precedence over defending against accusations of inappropriate data collection.
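Here is a hypothetical sketch of the kind of evidentiary record that pressure produces. Every field name and number is an assumption made for illustration; the point is simply that the retention window defensibility demands dwarfs the one data minimization would choose.

```python
# Illustrative sketch of the record a platform might retain to prove
# "reasonable steps" to a regulator. All fields and durations are invented;
# the point is that defensibility pushes retention up, not down.

from datetime import datetime, timedelta

verification_event = {
    "user_id": "u-1234",
    "timestamp": datetime(2025, 3, 14, 9, 30).isoformat(),
    "method": "facial_age_estimation",   # or "id_upload", "credit_card_proxy"
    "estimated_age": 17.2,
    "model_confidence": 0.61,
    "decision": "escalate_to_id_check",
    "evidence_ref": "blob://selfie-frames/abc",  # biometric data kept for appeals
}

# Data-minimization principles argue for a short retention window...
minimization_retention = timedelta(days=30)
# ...but defending decisions to regulators or courts argues for a long one.
evidentiary_retention = timedelta(days=365 * 3)

# The longer window wins whenever litigation risk outweighs privacy risk.
effective_retention = max(minimization_retention, evidentiary_retention)
print(effective_retention.days)  # 1095
```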

This shift is not an explicit choice by voters or policymakers but a reaction to enforcement pressure and to how companies perceive their litigation risk.

Less Developed Countries, Deeper Surveillance

Outside wealthy democracies, the tradeoff is even starker.

Brazil’s Statute of the Child and Adolescent (ECA, after its Portuguese name) imposes strong child-protection duties online, while its data-protection law restricts data collection and processing. Providers operating in Brazil must now adopt effective age-verification mechanisms and can no longer rely on self-declaration alone for high-risk services. Yet they also face uneven identity infrastructure and widespread device sharing. To compensate, they rely more heavily on facial estimation and third-party verification vendors.

In Nigeria, many users lack formal IDs. Digital service providers fill the gap with behavioral analysis, biometric inference, and offshore verification services, often with limited oversight. Audit logs grow, data flows expand, and users’ practical ability to understand or contest how companies infer their age shrinks accordingly. Where identity systems are weak, companies do not protect privacy. They bypass it.

The paradox is clear. In countries with less administrative capacity, age enforcement often produces more surveillance, not less, because inference fills the void of missing documents.

How Do Enforcement Priorities Change Expectations?

Some policymakers assume that vague standards preserve flexibility. In the U.K., then–Digital Secretary Michelle Donelan argued in 2023 that requiring certain online safety outcomes without specifying the means would avoid mandating particular technologies. Experience suggests the opposite.

When disputes reach regulators or courts, the question is simple: can minors still access the platform easily or not? If the answer is yes, authorities tell companies to do more. Over time, “reasonable steps” become more invasive.

Repeated facial scans, escalating ID checks, and long-term logging become the norm. Platforms that collect less data start to look reckless by comparison. Privacy-preserving designs lose out to defensible ones.

This pattern is familiar from other domains, including online sales-tax enforcement. After courts settled that large platforms had an obligation to collect and remit sales taxes, companies began continuously tracking and storing transaction destinations and customer-location signals. That tracking is not abusive, but once enforcement requires proof over time, companies build systems to log, retain, and correlate more data. Age verification is moving the same way. What begins as a one-time check becomes an ongoing evidentiary system, with pressure to monitor, retain, and justify user-level data.

The Choice We Are Avoiding

None of this is an argument against protecting children online. It is an argument against pretending there is no tradeoff.

Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but these inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, building systems and normalizing behavior that protect them from the greater legal risk. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.

The age-verification trap is not a glitch. It is what you get when regulators treat age enforcement as mandatory and privacy as optional.


When Pills Start Acting Like Machines

Ingestible electronics can sense and act inside the gut

11 min read
Miniature figures in lab coats seated inside half of a red capsule, next to a circuit board.
Edmon de Haro

One day soon, a doctor might prescribe a pill that doesn’t just deliver medicine but also reports back on what it finds inside you—and then takes actions based on its findings.

Instead of scheduling an endoscopy or CT scan, you’d swallow an electronic capsule smaller than a multivitamin. As it travels through your digestive system, it could check tissue health, look for cancerous changes, and send data to your doctor. It could even release drugs exactly where they’re needed or snip a tiny biopsy sample before passing harmlessly out of your body.


Real-World Diagnostics and Prognostics for Grid-Connected Battery Energy Storage Systems

The core challenge of clean energy

7 min read
Power lines tower over a rural landscape at twilight, with pink and blue clouds in the sky.
The University of Sheffield

This is a sponsored article brought to you by The University of Sheffield.

Across global electricity networks, the shift to renewable energy has fundamentally changed the behavior of power systems. Decades-old engineering assumptions (predictable inertia, dispatchable baseload generation, and slow, well-characterized system dynamics) are now eroding as wind and solar become dominant sources of electricity. Grid operators face increasingly steep ramp events, larger frequency excursions, faster transients, and prolonged periods when fossil generation is minimal or absent.

In this environment, battery energy storage systems (BESS) have emerged as essential tools for maintaining stability. They can respond in milliseconds, deliver precise power control, and operate flexibly across a range of services. But unlike conventional generation, batteries are sensitive to operational history, thermal environment, state-of-charge window, system architecture, and degradation mechanisms. Their long-term behavior cannot be described by a single model or a simple efficiency curve; it is the product of complex electrochemical, thermal, and control interactions.

Most laboratory tests and simulations attempt to capture these effects, but they rarely reproduce the operational irregularities of the grid. Batteries in real markets are exposed to rapid fluctuations in power demand, partial state of charge cycling, fast recovery intervals, high-rate events, and unpredictable disturbances. As Professor Dan Gladwin, who leads Sheffield’s research into grid-connected energy storage, puts it, “you only understand how storage behaves when you expose it to the conditions it actually sees on the grid.”

This disconnect creates a fundamental challenge for the industry: How can we trust degradation models, lifetime predictions, and operational strategies if they have never been validated against genuine grid behavior?

Few research institutions have access to the infrastructure needed to answer that question. The University of Sheffield is one of them.

Sheffield’s Centre for Research into Electrical Energy Storage and Applications (CREESA) operates one of the UK’s only research-led, grid-connected, multi-megawatt battery energy storage testbeds. The University of Sheffield

Sheffield’s unique facility

The Centre for Research into Electrical Energy Storage and Applications (CREESA) operates one of the UK’s only research-led, grid-connected, multi-megawatt battery energy storage testbeds. This environment enables researchers to test storage technologies not just in simulation or controlled cycling rigs, but under full-scale, live grid conditions. As Professor Gladwin notes, “we aim to bridge the gap between controlled laboratory research and the demands of real grid operation.”

At the heart of the facility is an 11 kV, 4 MW network connection that provides the electrical and operational realism required for advanced diagnostics, fault studies, control algorithm development, techno-economic analysis, and lifetime modeling. Unlike microgrid scale demonstrators or isolated laboratory benches, Sheffield’s environment allows energy storage assets to interact with the same disturbances, market signals, and grid dynamics they would experience in commercial deployment.

“The ability to test at scale, under real operational conditions, is what gives us insights that simulation alone cannot provide.” —Professor Dan Gladwin, The University of Sheffield

The facility includes:

  • A 2 MW / 1 MWh lithium titanate system, among the first independent grid-connected BESS of its kind in the UK
  • A 100 kW second-life EV battery platform, enabling research into reuse, repurposing, and circular-economy models
  • Support for flywheel systems, supercapacitors, hybrid architectures, and fuel-cell technologies
  • More than 150 laboratory cell-testing channels, environmental chambers, and impedance spectroscopy equipment
  • High-speed data acquisition and integrated control systems for parameter estimation, thermal analysis, and fault response measurement

The infrastructure allows Sheffield to operate storage assets directly on the live grid, where they respond to real market signals, deliver contracted power services, and experience genuine frequency deviations, voltage events, and operational disturbances. When controlled experiments are required, the same platform can replay historical grid and market signals, enabling repeatable full power testing under conditions that faithfully reflect commercial operation. This combination provides empirical data of a quality and realism rarely available outside utility-scale deployments, allowing researchers to analyse system behavior at millisecond timescales and gather data at a granularity rarely achievable in conventional laboratory environments.

According to Professor Gladwin, “the ability to test at scale, under real operational conditions, is what gives us insights that simulation alone cannot provide.”

Dan Gladwin, Professor of Electrical and Control Systems Engineering, leads Sheffield’s research into grid-connected energy storage. The University of Sheffield

Setting the benchmark with grid scale demonstration

One of Sheffield’s earliest breakthroughs came with the installation of a 2 MW / 1 MWh lithium titanate demonstrator, a first-of-a-kind system installed at a time when the UK had no established standards for BESS connection, safety, or control. Professor Gladwin led the engineering, design, installation, and commissioning of the system, establishing one of the country’s first independent megawatt scale storage platforms.

The project provided deep insight into how high-power battery chemistries behave under grid stressors. Researchers observed sub-second response times and measured the system’s capability to deliver synthetic inertia-like behavior. As Gladwin reflects, “that project showed us just how fast and capable storage could be when properly integrated into the grid.”

But the demonstrator’s long-term value has been its continued operation. Over nearly a decade of research, it has served as a platform for:

  • Hybridization studies, including battery-flywheel control architectures
  • Response time optimization for new grid services
  • Operator training and market integration, exposing control rooms and traders to a live asset
  • Algorithm development, including dispatch controllers, forecasting tools, and prognostic and health management systems
  • Comparative benchmarking, such as evaluation of different lithium-ion chemistries, lead-acid systems, and second-life batteries

A recurring finding is that behavior observed on the live grid often differs significantly from what laboratory tests predict. Subtle electrical, thermal, and balance-of-plant interactions that barely register in controlled experiments can become important at megawatt-scale, especially when systems are exposed to rapid cycling, fluctuating set-points, or tightly coupled control actions. Variations in efficiency, cooling system response, and auxiliary power demand can also amplify these effects under real operating stress. As Professor Gladwin notes, “phenomena that never appear in a lab can dominate behavior at megawatt scale.”

These real-world insights feed directly into improved system design. By understanding how efficiency losses, thermal behavior, auxiliary systems, and control interactions emerge at scale, researchers can refine both the assumptions and architecture of future deployments. This closes the loop between application and design, ensuring that new storage systems can be engineered for the operational conditions they will genuinely encounter rather than idealized laboratory expectations.

Ensuring longevity with advanced diagnostics

Sheffield’s Centre for Research into Electrical Energy Storage and Applications (CREESA) enables researchers to test storage technologies not just in simulation or controlled cycling rigs, but under full-scale, live grid conditions. The University of Sheffield

Ensuring the long-term reliability of storage requires understanding how systems age under the conditions they actually face. Sheffield’s research combines high-resolution laboratory testing with empirical data from full-scale grid-connected assets, building a comprehensive approach to diagnostics and prognostics. In Gladwin’s words, “A model is only as good as the data and conditions that shape it. To predict lifetime with confidence, we need laboratory measurements, full-scale testing, and validation under real-world operating conditions working together.”

A major focus is accurate state estimation during highly dynamic operation. Using advanced observers, Kalman filtering, and hybrid physics-ML approaches, the team has developed methods that deliver reliable state-of-charge (SOC), state-of-health (SOH), and state-of-power (SOP) estimates during rapid power swings, irregular cycling, and noisy conditions where traditional methods break down.
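To give a flavor of the approach, here is a minimal one-dimensional Kalman-filter sketch: SOC is predicted by coulomb counting with a slightly biased current sensor, then corrected with a noisy voltage-derived SOC measurement. All parameters are illustrative assumptions, not the Sheffield team’s actual models.

```python
# Minimal 1-D Kalman filter for state-of-charge (SOC) tracking. The open-loop
# coulomb-counting estimate drifts because of sensor bias; the noisy
# voltage-derived measurement pulls it back. All numbers are illustrative.

import numpy as np

capacity_ah = 100.0            # nominal cell capacity
dt_h = 1.0 / 3600.0            # 1-second step, in hours
Q, R = 1e-8, 1e-3              # process and measurement noise variances

rng = np.random.default_rng(0)
true_soc, soc_est, P = 0.80, 0.80, 1e-4

for _ in range(3600):                               # one hour at 1 Hz
    current_a = -50.0                               # true 50 A discharge (0.5C)
    true_soc += current_a * dt_h / capacity_ah

    # Predict: coulomb counting with a biased current sensor, so the
    # uncorrected estimate drifts over time.
    measured_current = current_a + 0.5
    soc_est += measured_current * dt_h / capacity_ah
    P += Q

    # Correct: noisy SOC "measurement", e.g., from an open-circuit-voltage map.
    z = true_soc + rng.normal(0.0, np.sqrt(R))
    K = P / (P + R)                                 # Kalman gain
    soc_est += K * (z - soc_est)
    P *= 1.0 - K

print(f"true SOC {true_soc:.3f}, filtered estimate {soc_est:.3f}")
```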

Another key contribution is understanding cell-to-cell divergence in large strings. Sheffield’s data shows how imbalance accelerates near SOC extremes, how thermal gradients drive uneven ageing, and how current distribution causes long-term drift. These insights inform balancing strategies that improve usable capacity and safety.

Sheffield has also strengthened lifetime and degradation modeling by incorporating real grid behavior directly into the framework. By analyzing actual market signals, frequency deviations, and dispatch patterns, the team uncovers ageing mechanisms that do not appear during controlled laboratory cycling and would otherwise remain hidden.

These contributions fall into four core areas:

State Estimation and Parameter Identification

  • Robust SOC/SOH estimation
  • Online parameter identification for equivalent circuit models
  • Power capability prediction using transient excitation
  • Data selection strategies under noise and variability

Degradation and Lifetime Modelling

  • Degradation models built on real frequency and market data
  • Analysis of micro cycling and asymmetric duty cycles
  • Hybrid physics-ML forecasting models

Thermal and Imbalance Behavior

  • Characterizing thermal gradients in containerized systems
  • Understanding cell imbalance in large-scale systems
  • Mitigation strategies at the cell and module level
  • Coupled thermal-electrical behavior under fast cycling

Hybrid Systems and Multi-Technology Optimization

  • Battery-flywheel coordination strategies
  • Techno-economic modeling for hybrid assets
  • Dispatch optimization using evolutionary algorithms
  • Control schemes that extend lifetime and enhance service performance

Beyond grid-connected systems, Sheffield’s diagnostic methods have also proved valuable in off-grid environments. A key example is the collaboration with MOPO, a company deploying pay-per-swap lithium-ion battery packs in low-income communities across Sub-Saharan Africa. These batteries face deep cycling, variable user behavior, and sustained high temperatures, all without active cooling or controlled environments. The team’s techniques in cell characterization, parameter estimation, and in-situ health tracking have helped extend the usable life of MOPO’s battery packs. “By applying our know-how, we can make these battery-swap packs clean, safe, and significantly more affordable than petrol and diesel generators for the communities that rely on them,” says Professor Gladwin.

Beyond grid-connected systems, Sheffield’s diagnostic methods have also proved valuable in off-grid environments. A key example is the collaboration with MOPO, a company deploying pay-per-swap lithium-ion battery packs in low-income communities across Sub-Saharan Africa. MOPO

Collaboration and the global future

A defining strength of Sheffield’s approach is its close integration with industry, system operators, technology developers, and service providers. Over the past decade, its grid-connected testbed has enabled organizations to trial control algorithms, commission their first battery assets, test market participation strategies, and validate performance under real operational constraints.

These partnerships have produced practical engineering outcomes, including improved dispatch strategies, refined control architectures, validated installation and commissioning methods, and a clearer understanding of degradation under real-world market operation. According to Gladwin, “It is a two-way relationship: we bring the analytical and research tools, industry brings the operational context and scale.”

One of Sheffield’s earliest breakthroughs came with the installation of a 2 MW / 1 MWh lithium titanate demonstrator. Professor Gladwin led the engineering, design, installation, and commissioning of the system, establishing one of the UK’s first independent megawatt-scale storage platforms. The University of Sheffield

This two-way exchange, combining academic insight with operational experience, ensures that Sheffield’s research remains directly relevant to modern power systems. It continues to shape best practice in lifetime modelling, hybrid system control, diagnostics, and operational optimization.

As electricity systems worldwide move toward net zero, the need for validated models, proven control algorithms, and empirical understanding will only grow. Sheffield’s combination of full-scale infrastructure, long-term datasets, and collaborative research culture ensures it will remain at the forefront of developing storage technologies that perform reliably in the environment that matters most: the real world.


AI for Cybersecurity: Promise, Practice, and Pitfalls

Explore how AI is being applied in real-world cybersecurity scenarios

1 min read

AI is revolutionizing the cybersecurity landscape. From accelerating threat detection to enabling real-time automated responses, artificial intelligence is reshaping how organizations defend against increasingly sophisticated attacks. But with these advancements come new and complex risks—AI systems themselves can be exploited, manipulated, or biased, creating fresh vulnerabilities.

In this session, we’ll explore how AI is being applied in real-world cybersecurity scenarios—from anomaly detection and behavioral analytics to predictive threat modeling. We’ll also confront the challenges that come with it, including adversarial AI, data bias, and the ethical dilemmas of autonomous decision-making.

Looking ahead, we’ll examine the future of intelligent cyber defense and what it takes to stay ahead of evolving threats. Join us to learn how to harness AI responsibly and effectively—balancing innovation with security, and automation with accountability.

Register now for this free webinar!


Data Centers Turn to High-Temperature Superconductors

Hyperscalers look to deliver more power capacity in less space

4 min read
A cylindrical silver machine winds copper colored tape along a long black bar protruding from its center. As it rotates, disks with the copper tape are also visible.

High-temperature superconducting tape is being developed as an alternative to copper wiring for power delivery in AI data centers.

Microsoft

Data centers for AI are turning the world of power generation on its head. There isn’t enough capacity on the grid to come close to meeting the energy demands of all the data centers being built. And traditional transmission and distribution networks aren’t efficient enough to take full advantage of the power that is available. According to the U.S. Energy Information Administration (EIA), annual transmission and distribution losses average about 5 percent; the rate is much higher in some other parts of the world. Hence, hyperscalers such as Amazon Web Services, Google Cloud, and Microsoft Azure are investigating every avenue to gain more power and raise efficiency.

Microsoft, for example, is extolling the potential virtues of high-temperature superconductors (HTS) as a replacement for copper wiring. According to the company, HTS can improve energy efficiency by reducing transmission losses, increasing the resiliency of electrical grids, and limiting the impact of data centers on communities by reducing the amount of space required to move power.

“Because superconductors take up less space to move large amounts of power, they could help us build cleaner, more compact systems,” Alastair Speirs, the general manager of global infrastructure at Microsoft, wrote in a blog post.

Superconductors Revolutionize Power Efficiency

Copper is a good conductor, but current encounters resistance as it moves along the line. This generates heat, lowers efficiency, and restricts how much current can be carried. HTS largely eliminates this resistance, as it’s made of superconducting materials cooled to cryogenic temperatures. (Despite the name, high-temperature superconductors still rely on frigid temperatures, albeit significantly warmer than those required by traditional superconductors.)
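The resistive loss scales with the square of the current, which is why it bites hardest in high-power feeders. Here is a back-of-the-envelope worked example; all of the numbers are assumptions for illustration, not Microsoft’s or Veir’s figures.

```python
# Back-of-the-envelope I^2 * R loss for a copper feeder vs. an HTS cable.
# All numbers are illustrative assumptions, not vendor figures.

current_a = 5000.0                 # 5 kA feeder serving a dense rack row
copper_r_ohm_per_km = 0.01         # assumed effective resistance of a large copper bus
length_km = 0.5

copper_loss_w = current_a**2 * copper_r_ohm_per_km * length_km
print(f"copper loss: {copper_loss_w / 1e3:.0f} kW")   # 125 kW dissipated as heat

# At operating temperature the HTS layer's DC resistance is effectively zero,
# so the comparable figure becomes the cryocooler's power draw, not I^2 * R.
```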

The resulting cables are smaller and lighter than copper wiring, don’t lower voltage as they transmit current, and don’t produce heat. This fits nicely into the needs of AI data centers that are trying to cram massive electrical loads into a tiny footprint. Fewer substations would also be needed. According to Speirs, next-gen superconducting transmission lines deliver capacity that is an order of magnitude higher than conventional lines at the same voltage level.

Microsoft is working with partners on the advancement of this technology, including an investment of US $75 million in Veir, a superconducting power technology developer. Veir’s conductors use HTS tape, most commonly based on a class of materials known as rare-earth barium copper oxide (REBCO). REBCO is a ceramic superconducting layer deposited as a thin film on a metal substrate, then engineered into a rugged conductor that can be assembled into power cables.

“The key distinction from copper or aluminum is that, at operating temperature, the superconducting layer carries current with almost no electrical resistance, enabling very high current density in a much more compact form factor,” says Tim Heidel, Veir’s CEO and co-founder.

Liquid Nitrogen Cooling in Data Centers

Ruslan Nagimov, the principal infrastructure engineer for Cloud Operations and Innovation at Microsoft, stands near the world’s first HTS-powered rack prototype. Microsoft

HTS cables still operate at cryogenic temperatures, so cooling must be integrated into the power delivery system design. Veir maintains a low operating temperature using a closed-loop liquid nitrogen system: The nitrogen circulates through the length of the cable, exits at the far end, is re-cooled, and then recirculated back to the start.

“Liquid nitrogen is a plentiful, low cost, safe material used in numerous critical commercial and industrial applications at enormous scale,” says Heidel. “We are leveraging the experience and standards for working with liquid nitrogen proven in other industries to design stable, data center solutions designed for continuous operation, with monitoring and controls that fit critical infrastructure expectations rather than lab conditions.”

HTS cable cooling can be done either within the data center or externally. Heidel favors the latter, as it minimizes footprint and operational complexity indoors. Liquid nitrogen lines are fed into the facility to serve the superconducting cables, which deliver power where it’s needed, while the cooling system is managed like any other facility subsystem.

Rare earth materials, cooling loops, cryogenic temperatures—all of this adds considerably to costs. Thus, HTS isn’t going to replace copper in the vast majority of applications. Heidel says the economics are most compelling where power delivery is constrained by space, weight, voltage drop, and heat.

“In those cases, the value shows up at the system level: smaller footprints, reduced resistive losses, and more flexibility in how you route power,” says Heidel. “As the technology scales, costs should improve through higher-volume HTS tape manufacturing and better yields, and also through standardization of the surrounding system hardware, installation practices, and operating playbooks that reduce design complexity and deployment risk.”

AI data centers are becoming the perfect proving ground for this approach. Hyperscalers are willing to spend to develop higher-efficiency systems. They can balance spending on development against the revenue they might make by delivering AI services broadly.

“HTS manufacturing has matured—particularly on the tape side—which improves cost and supply availability,” says Husam Alissa, Microsoft’s director of systems technology. “Our focus currently is on validating and derisking this technology with our partners with focus on systems design and integration.”



AI Hunts for the Next Big Thing in Physics

There's a crisis in particle physics. Researchers hope AI can help.

18 min read
Circular and spiral tracks are shown as light blue lines against a darker blue background. 

This historic cloud-chamber image shows the spiral tracks of charged particles—an early, visual way physicists studied the subatomic world.

Omikron/Science Source

In 1930, a young physicist named Carl D. Anderson was tasked by his mentor with measuring the energies of cosmic rays—particles arriving at high speed from outer space. Anderson built an improved version of a cloud chamber, a device that visually records the trajectories of particles. In 1932, he saw evidence that confusingly combined the properties of protons and electrons. “A situation began to develop that had its awkward aspects,” he wrote many years after winning a Nobel Prize at the age of 31. Anderson had accidentally discovered antimatter.

Four years after his first discovery, he codiscovered another elementary particle, the muon. This one prompted one physicist to ask, “Who ordered that?”

Carl Anderson [top] sits beside the magnet cloud chamber he used to discover the positron. His cloud-chamber photograph [bottom] from 1932 shows the curved track of a positron, the first known antimatter particle. Caltech Archives & Special Collections

Over the decades since then, particle physicists have built increasingly sophisticated instruments of exploration. At the apex of these physics-finding machines sits the Large Hadron Collider, which in 2022 started its third operational run. This underground ring, 27 kilometers in circumference and straddling the border between France and Switzerland, was built to slam subatomic particles together at near light speed and test deep theories of the universe. Physicists from around the world turn to the LHC, hoping to find something new. They’re not sure what, but they hope to find it.

It’s the latest manifestation of a rich tradition. Throughout the history of science, new instruments have prompted hunts for the unexpected. Galileo Galilei built telescopes and found Jupiter’s moons. Antonie van Leeuwenhoek built microscopes and noticed “animalcules, very prettily a-moving.” And still today, people peer through lenses and pore through data in search of patterns they hadn’t hypothesized. Nature’s secrets don’t always come with spoilers, and so we gaze into the unknown, ready for anything.

But novel, fundamental aspects of the universe are growing less forthcoming. In a sense, we’ve plucked the lowest-hanging fruit. We know to a good approximation what the building blocks of matter are. The Standard Model of particle physics, which describes the currently known elementary particles, has been in place since the 1970s. Nature can still surprise us, but it typically requires larger or finer instruments, more detailed or expansive data, and faster or more flexible analysis tools.

Those analysis tools include a form of artificial intelligence (AI) called machine learning. Researchers train complex statistical models to find patterns in their data, patterns too subtle for human eyes to see, or too rare for a single human to encounter. At the LHC, which smashes together protons to create immense bursts of energy that decay into other short-lived particles of matter, a theorist might predict some new particle or interaction and describe what its signature would look like in the LHC data, often using a simulation to create synthetic data. Experimentalists would then collect petabytes of measurements and run a machine learning algorithm that compares them with the simulated data, looking for a match. Usually, they come up empty. But maybe new algorithms can peer into corners they haven’t considered.

A New Path for Particle Physics

“You’ve heard probably that there’s a crisis in particle physics,” says Tilman Plehn, a theoretical physicist at Heidelberg University, in Germany. At the LHC and other high-energy physics facilities around the world, the experimental results have failed to yield insights on new physics. “We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t,” Plehn says.

Tilman Plehn

“We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t.”

Gregor Kasieczka, a physicist at the University of Hamburg, in Germany, recalls the field’s enthusiasm when the LHC began running in 2008. Back then, he was a young graduate student and expected to see signs of supersymmetry, a theory predicting heavier versions of the known matter particles. The presumption was that “we turn on the LHC, and supersymmetry will jump in your face, and we’ll discover it in the first year or so,” he tells me. Eighteen years later, supersymmetry remains in the theoretical realm. “I think this level of exuberant optimism has somewhat gone.”

The result, Plehn says, is that models for all kinds of things have fallen in the face of data. “And I think we’re going on a different path now.”

That path involves a kind of machine learning called unsupervised learning. In unsupervised learning, you don’t teach the AI to recognize your specific prediction—signs of a particle with this mass and this charge. Instead, you might teach it to find anything out of the ordinary, anything interesting—which could indicate brand new physics. It’s the equivalent of looking with fresh eyes at a starry sky or a slide of pond scum. The problem is, how do you automate the search for something “interesting”?

Going Beyond the Standard Model

The Standard Model leaves many questions unanswered. Why do matter particles have the masses they do? Why do neutrinos have mass at all? Where is the particle for transmitting gravity, to match those for the other forces? Why do we see more matter than antimatter? Are there extra dimensions? What is dark matter—the invisible stuff that makes up most of the universe’s matter and that we assume to exist because of its gravitational effect on galaxies? Answering any of these questions could open the door to new physics, or fundamental discoveries beyond the Standard Model.

The Large Hadron Collider at CERN accelerates protons to near light speed before smashing them together in hopes of discovering “new physics.”

CERN

“Personally, I’m excited for portal models of dark sectors,” Kasieczka says, as if reading from a Marvel film script. He asks me to imagine a mirror copy of the Standard Model out there somewhere, sharing only one “portal” particle with the Standard Model we know and love. It’s as if this portal particle has a second secret family.

Kasieczka says that in the LHC’s third run, scientists are splitting their efforts roughly evenly between measuring more precisely what they know to exist and looking for what they don’t know to exist. In some cases, the former could enable the latter. The Standard Model predicts certain particle properties and the relationships between them. For example, it correctly predicted a property of the electron called the magnetic moment to about one part in a trillion. And precise measurements could turn up internal inconsistencies. “Then theorists can say, ‘Oh, if I introduce this new particle, it fixes this specific problem that you guys found. And this is how you look for this particle,’” Kasieczka says.

The Standard Model catalogs the known fundamental particles of matter and the forces that govern them, but leaves major mysteries unresolved.

Source: Cush/Wikipedia

What’s more, the Standard Model has occasionally shown signs of cracks. Certain particles containing bottom quarks, for example, seem to decay into other particles in unexpected ratios. Plehn finds the bottom-quark incongruities intriguing. “Year after year, I feel they should go away, and they don’t. And nobody has a good explanation,” he says. “I wouldn’t even know who I would shout at”—the theorists or the experimentalists—“like, ‘Sort it out!’”

Exasperation isn’t exactly the right word for Plehn’s feelings, however. Physicists feel gratified when measurements reasonably agree with expectations, he says. “But I think deep down inside, we always hope that it looks unreasonable. Everybody always looks for the anomalous stuff. Everybody wants to see the standard explanation fail. First, it’s fame”—a chance for a Nobel—“but it’s also an intellectual challenge, right? You get excited when things don’t work in science.”

How Unsupervised AI Can Probe for New Physics

Now imagine you had a machine to find all the times things don’t work in science, to uncover all the anomalous stuff. That’s how researchers are using unsupervised learning. One day over ice cream, Plehn and a friend who works at the software company SAP began discussing autoencoders, one type of unsupervised learning algorithm. “He tells me that autoencoders are what they use in industry to see if a network was hacked,” Plehn remembers. “You have, say, a hundred computers, and they have network traffic. If the network traffic [to one computer] changes all of a sudden, the computer has been hacked, and they take it offline.”

In the LHC’s central data-acquisition room [top], incoming detector data flows through racks of electronics and field-programmable gate array (FPGA) cards [bottom] that decide which collision events to keep.

Fermilab/CERN

Autoencoders are neural networks that start with an input—it could be an image of a cat, or the record of a computer’s network traffic—and compress it, like making a tiny JPEG or MP3 file, and then decompress it. Engineers train them to compress and decompress data so that the output matches the input as closely as possible. Eventually a network becomes very good at that task. But if the data includes some items that are relatively rare—such as white tigers, or hacked computers’ traffic—the network performs worse on these, because it has less practice with them. The difference between an input and its reconstruction therefore signals how anomalous that input is.
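That idea fits in a few lines of code. Here is a minimal sketch of an autoencoder anomaly detector: train it to reconstruct “normal” events, then score new events by their reconstruction error. The dimensions and data are toy stand-ins, not real collider features.

```python
# Minimal autoencoder anomaly detector: trained only on "normal" events,
# it reconstructs out-of-distribution events poorly, so reconstruction
# error serves as the anomaly score. Toy data, not collider features.

import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(2000, 8)                     # 8 features per "event"

model = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),                   # compress to a 3-D bottleneck
    nn.Linear(3, 8),                              # decompress back to 8 features
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                              # train on normal events only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    """Per-event reconstruction error; large = poorly learned = anomalous."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

typical = anomaly_score(torch.randn(5, 8))
weird = anomaly_score(torch.randn(5, 8) * 4 + 10)  # events far from training data
print(typical.mean().item(), weird.mean().item())  # the second is much larger
```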

“This friend of mine said, ‘You can use exactly our software, right?’” Plehn remembers. “‘It’s exactly the same question. Replace computers with particles.’” The two imagined feeding the autoencoder signatures of particles from a collider and asking: Are any of these particles not like the others? Plehn continues: “And then we wrote up a joint grant proposal.”

It’s not a given that AI will find new physics. Even learning what counts as interesting is a daunting hurdle. Beginning in the 1800s, men in lab coats delegated data processing to women, whom they saw as diligent and detail oriented. Women annotated photos of stars, and they acted as “computers.” In the 1950s, women were trained to scan bubble chambers, which recorded particle trajectories as lines of tiny bubbles in fluid. Physicists didn’t explain to them the theory behind the events, only what to look for based on lists of rules.

But, as the Harvard science historian Peter Galison writes in Image and Logic: A Material Culture of Physics, his influential account of how physicists’ tools shape their discoveries, the task was “subtle, difficult, and anything but routinized,” requiring “three-dimensional visual intuition.” He goes on: “Even within a single experiment, judgment was required—this was not an algorithmic activity, an assembly line procedure in which action could be specified fully by rules.”

Gregor Kasieczka

“We are not looking for flying elephants but instead a few extra elephants than usual at the local watering hole.”

Over the last decade, though, one thing we’ve learned is that AI systems can, in fact, perform tasks once thought to require human intuition, such as mastering the ancient board game Go. So researchers have been testing AI’s intuition in physics. In 2019, Kasieczka and his collaborators announced the LHC Olympics 2020, a contest in which participants submitted algorithms to find anomalous events in three sets of (simulated) LHC data. Some teams correctly found the anomalous signal in one dataset, but some falsely reported one in the second set, and they all missed it in the third. In 2020, a research collective called Dark Machines announced a similar competition, which drew more than 1,000 submissions of machine learning models. Decisions about how to score them led to different rankings, showing that there’s no best way to explore the unknown.

Another way to test unsupervised learning is to play revisionist history. In 1995, a particle dubbed the top quark turned up at the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. But what if it actually hadn’t? Researchers applied unsupervised learning to LHC data collected in 2012, pretending they knew almost nothing about the top quark. Sure enough, the AI revealed a set of anomalous events that were clustered together. Combined with a bit of human intuition, they pointed toward something like the top quark.

Georgia Karagiorgi

“An algorithm that can recognize any kind of disturbance would be a win.”

That exercise underlines the fact that unsupervised learning can’t replace physicists just yet. “If your anomaly detector detects some kind of feature, how do you get from that statement to something like a physics interpretation?” Kasieczka says. “The anomaly search is more a scouting-like strategy to get you to look into the right corner.” Georgia Karagiorgi, a physicist at Columbia University, agrees. “Once you find something unexpected, you can’t just call it quits and be like, ‘Oh, I discovered something,’” she says. “You have to come up with a model and then test it.”

Kyle Cranmer, a physicist and data scientist at the University of Wisconsin-Madison who played a key role in the discovery of the Higgs boson particle in 2012, also says that human expertise can’t be dismissed. “There’s an infinite number of ways the data can look different from what you expected,” he says, “and most of them aren’t interesting.” Physicists might be able to recognize whether a deviation suggests some plausible new physical phenomenon, rather than just noise. “But how you try to codify that and make it explicit in some algorithm is much less straightforward,” Cranmer says. Ideally, the guidelines would be general enough to exclude the unimaginable without eliminating the merely unimagined. “That’s gonna be your Goldilocks situation.”

In his 1987 book How Experiments End, Harvard’s Galison writes that scientific instruments can “import assumptions built into the apparatus itself.” He tells me about a 1973 experiment that looked for a phenomenon called neutral currents, signaled by the absence of a so-called heavy electron (later renamed the muon). One team initially used a trigger left over from previous experiments, which recorded events only if they produced those heavy electrons—even though neutral currents, by definition, produce none. As a result, for some time the researchers missed the phenomenon and wrongly concluded that it didn’t exist. Galison says that the physicists’ design choice “allowed the discovery of [only] one thing, and it blinded the next generation of people to this new discovery. And that is always a risk when you’re being selective.”

How AI Could Miss—or Fake—New Physics

I ask Galison if by automating the search for interesting events, we’re letting the AI take over the science. He rephrases the question: “Have we handed over the keys to the car of science to the machines?” One way to alleviate such concerns, he tells me, is to generate test data to see if an algorithm behaves as expected—as in the LHC Olympics. “Before you take a camera out and photograph the Loch Ness Monster, you want to make sure that it can reproduce a wide variety of colors” and patterns accurately, he says, so you can rely on it to capture whatever comes.

Galison, who is also a physicist, works on the Event Horizon Telescope, which images black holes. For that project, he remembers putting up utterly unexpected test images like Frosty the Snowman so that scientists could probe the system’s general ability to catch something new. “The danger is that you’ve missed out on some crucial test,” he says, “and that the object you’re going to be photographing is so different from your test patterns that you’re unprepared.”

The algorithms that physicists are using to seek new physics are certainly vulnerable to this danger. It helps that unsupervised learning is already being used in many applications. In industry, it’s surfacing anomalous credit-card transactions and hacked networks. In science, it’s identifying earthquake precursors, genome locations where proteins bind, and merging galaxies.

An image from a single collision at the LHC shows an unusually complex spray of particles, flagged as anomalous by machine learning algorithms. CERN

But one difference with particle-physics data is that the anomalies may not be stand-alone objects or events. You’re looking not just for a needle in a haystack; you’re also looking for subtle irregularities in the haystack itself. Maybe a stack contains a few more short stems than you’d expect. Or a pattern reveals itself only when you simultaneously look at the size, shape, color, and texture of stems. Such a pattern might suggest an unacknowledged substance in the soil. In accelerator data, subtle patterns might suggest a hidden force. As Kasieczka and his colleagues write in one paper, “We are not looking for flying elephants, but instead a few extra elephants than usual at the local watering hole.”

Even algorithms that weigh many factors can miss signals—and they can also see spurious ones. The stakes of mistakenly claiming discovery are high. Going back to the hacking scenario, Plehn says, a company might ultimately determine that its network wasn’t hacked; it was just a new employee. The algorithm’s false positive causes little damage. “Whereas if you stand there and get the Nobel Prize, and a year later people say, ‘Well, it was a fluke,’ people would make fun of you for the rest of your life,” he says. In particle physics, he adds, you run the risk of spotting patterns purely by chance in big data, or as a result of malfunctioning equipment.

False alarms have happened before. In 1976, a group at Fermilab led by Leon Lederman, who later won a Nobel for other work, announced the discovery of a particle they tentatively called the Upsilon. The researchers calculated the probability of the signal’s happening by chance as 1 in 50. After further data collection, though, they walked back the discovery, calling the pseudo-particle the Oops-Leon. (Today, particle physicists wait until the chance that a finding is a fluke drops below 1 in 3.5 million, the so-called five-sigma criterion.) And in 2011, researchers at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment, in Italy, announced evidence for faster-than-light travel of neutrinos. Then, a few months later, they reported that the result was due to a faulty connection in their timing system.
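The five-sigma criterion is just the one-sided tail probability of a Gaussian distribution beyond five standard deviations, which you can verify in a single line:

```python
# The "five sigma" discovery criterion: the one-sided Gaussian tail
# probability beyond 5 standard deviations.

from scipy.stats import norm

p = norm.sf(5)                    # survival function: P(Z > 5)
print(p)                          # ~2.87e-7
print(f"about 1 in {1 / p:,.0f}")   # about 1 in 3.5 million
```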

Those cautionary tales linger in the minds of physicists. And yet, even while researchers are wary of false positives from AI, they also see it as a safeguard against them. So far, unsupervised learning has discovered no new physics, despite its use on data from multiple experiments at Fermilab and CERN. But anomaly detection may have prevented embarrassments like the one at OPERA. “So instead of telling you there’s a new physics particle,” Kasieczka says, “it’s telling you, this sensor is behaving weird today. You should restart it.”

Hardware for AI-Assisted Particle Physics

Particle physicists are pushing the limits of not only their computing software but also their computing hardware. The challenge is unparalleled. The LHC produces 40 million particle collisions per second, each of which can produce a megabyte of data. That’s much too much information to store, even if you could save it to disk that quickly. So the two largest detectors each use two-level data filtering. The first layer, called the Level-1 Trigger, or L1T, harvests 100,000 events per second, and the second layer, called the High-Level Trigger, or HLT, plucks 1,000 of those events to save for later analysis. So only one in 40,000 events is ever potentially seen by human eyes.
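The arithmetic behind that one-in-40,000 figure, using the rates quoted above:

```python
# Data-reduction arithmetic for the LHC's two-level trigger, from the
# figures quoted above.

collisions_per_s = 40_000_000    # 40 million collisions per second
l1t_kept_per_s = 100_000         # Level-1 Trigger output rate
hlt_kept_per_s = 1_000           # High-Level Trigger output rate

print(collisions_per_s / l1t_kept_per_s)   # 400: L1T reduction factor
print(l1t_kept_per_s / hlt_kept_per_s)     # 100: HLT reduction factor
print(collisions_per_s / hlt_kept_per_s)   # 40,000: only 1 in 40,000 survives

# At ~1 megabyte per collision, the raw stream would be ~40 terabytes per
# second; after both triggers it is on the order of 1 gigabyte per second.
```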

Katya Govorkova

“That’s when I thought, we need something like [AlphaGo] in physics. We need a genius that can look at the world differently.”

HLTs use central processing units (CPUs) like the ones in your desktop computer, running complex machine learning algorithms that analyze collisions based on the number, type, energy, momentum, and angles of the new particles produced. L1Ts, as a first line of defense, must be fast. So the L1Ts rely on integrated circuits called field-programmable gate arrays (FPGAs), which users can reprogram for specialized calculations.

The trade-off is that the programming must be relatively simple. The FPGAs can’t easily store and run fancy neural networks; instead they follow scripted rules about, say, what features of a particle collision make it important. In terms of complexity level, it’s the instructions given to the women who scanned bubble chambers, not the women’s brains.

Ekaterina (Katya) Govorkova, a particle physicist at MIT, saw a path toward improving the LHC’s filters, inspired by a board game. Around 2020, she was looking for new physics by comparing precise measurements at the LHC with predictions, using little or no machine learning. Then she watched a documentary about AlphaGo, the program that used machine learning to beat a human Go champion. “For me the moment of realization was when AlphaGo would use some absolutely new type of strategy that humans, who played this game for centuries, hadn’t thought about before,” she says. “So that’s when I thought, we need something like that in physics. We need a genius that can look at the world differently.” New physics may be something we’d never imagine.

Govorkova and her collaborators found a way to compress autoencoders to put them on FPGAs, where they process an event every 80 nanoseconds (less than one ten-millionth of a second). (Compression involved pruning some network connections and reducing the precision of some calculations.) They published their methods in Nature Machine Intelligence in 2022, and researchers are now using them during the LHC’s third run. The new trigger tech is installed in one of the detectors around the LHC’s giant ring, and it has found many anomalous events that would otherwise have gone unflagged.
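The two compression moves named above, pruning and reduced precision, can be sketched in a few lines. This toy example only illustrates the arithmetic; real FPGA deployments use dedicated toolchains and are far more involved.

```python
# Toy sketch of the two compression moves described above: prune away small
# weights, then quantize the survivors to 8-bit fixed point. Illustrative
# only; not the team's actual pipeline.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(16, 8)).astype(np.float32)

# 1. Pruning: zero the 50 percent of connections with the smallest magnitudes.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# 2. Quantization: signed 8-bit fixed point with 4 fractional bits,
#    i.e., every value becomes an integer multiple of 2**-4 in roughly [-8, 8).
step = 2.0 ** -4
quantized = np.clip(np.round(pruned / step), -128, 127) * step

print(f"mean absolute weight error: {np.abs(quantized - weights).mean():.4f}")
```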

Researchers are currently setting up analysis workflows to decipher why the events were deemed anomalous. Jennifer Ngadiuba, a particle physicist at Fermilab who is also one of the coordinators of the trigger system (and one of Govorkova’s coauthors), says that one feature stands out already: Flagged events have lots of jets of new particles shooting out of the collisions. But the scientists still need to explore other factors, like the new particles’ energies and their distributions in space. “It’s a high-dimensional problem,” she says.

Eventually they will share the data openly, allowing others to eyeball the results or to apply new unsupervised learning algorithms in the hunt for patterns. Javier Duarte, a physicist at the University of California, San Diego, and also a coauthor on the 2022 paper, says, “It’s kind of exciting to think about providing this to the community of particle physicists and saying, like, ‘Shrug, we don’t know what this is. You can take a look.’” Duarte and Ngadiuba note that high-energy physics has traditionally followed a top-down approach to discovery, testing data against well-defined theories. Adding in this new bottom-up search for the unexpected marks a new paradigm. “And also a return of sorts to before the Standard Model was so well established,” Duarte adds.

Yet it could be years before we know why AI marked those collisions as anomalous. What conclusions could they support? “In the worst case, it could be some detector noise that we didn’t know about,” which would still be useful information, Ngadiuba says. “The best scenario could be a new particle. And then a new particle implies a new force.”

Jennifer Ngadiuba

“The best scenario could be a new particle. And then a new particle implies a new force.”

Duarte says he expects their work with FPGAs to have wider applications. “The data rates and the constraints in high-energy physics are so extreme that people in industry aren’t necessarily working on this,” he says. “In self-driving cars, usually millisecond latencies are sufficient reaction times. But we’re developing algorithms that need to respond in microseconds or less. We’re at this technological frontier, and to see how much that can proliferate back to industry will be cool.”

Plehn is also working to put neural networks on FPGAs for triggers, in collaboration with experimentalists, electrical engineers, and other theorists. Encoding the nuances of abstract theories into material hardware is a puzzle. “In this grant proposal, the person I talked to most is the electrical engineer,” he says, “because I have to ask the engineer, which of my algorithms fits on your bloody FPGA?”

Hardware is hard, says Ryan Kastner, an electrical engineer and computer scientist at UC San Diego who works with Duarte on programming FPGAs. What allows the chips to run algorithms so quickly is their flexibility. Instead of programming them in an abstract coding language like Python, engineers configure the underlying circuitry. They map logic gates, route data paths, and synchronize operations by hand. That low-level control also makes the effort “painfully difficult,” Kastner says. “It’s kind of like you have a lot of rope, and it’s very easy to hang yourself.”

Seeking New Physics Among the Neutrinos

The next piece of new physics may not pop up at a particle accelerator. It may appear at a detector for neutrinos, particles that are part of the Standard Model but remain deeply mysterious. Neutrinos are tiny, electrically neutral, and so light that no one has yet measured their mass. (The latest attempt, in April, set an upper limit of about a millionth the mass of an electron.) Of all known particles with mass, neutrinos are the universe’s most abundant, but also among the most ghostly, rarely deigning to acknowledge the matter around them. Tens of trillions pass through your body every second.

If we listen very closely, though, we may just hear the secrets they have to tell. Karagiorgi, of Columbia, has chosen this path to discovery. Being a physicist is “kind of like playing detective, but where you create your own mysteries,” she tells me during my visit to Columbia’s Nevis Laboratories, located on a large estate about 20 km north of Manhattan. Physics research began at the site after World War II; one hallway features papers going back to 1951.

A researcher stands inside a prototype for the Deep Underground Neutrino Experiment, which is designed to detect rare neutrino interactions.

CERN

Karagiorgi is eagerly awaiting a massive neutrino detector that’s currently under construction. Starting in 2028, Fermilab will send neutrinos west through 1,300 km of rock to South Dakota, where they’ll occasionally make their existence known in the Deep Underground Neutrino Experiment (DUNE). Why so far away? When neutrinos travel long distances, they have an odd habit of oscillating, transforming from one kind or “flavor” to another. Observing the oscillations of both the neutrinos and their mirror-image antiparticles, antineutrinos, could tell researchers something about the universe’s matter-antimatter asymmetry—which the Standard Model doesn’t explain—and thus, according to the Nevis website, “why we exist.”
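
The distance dependence comes straight out of the standard two-flavor approximation, a textbook formula quoted here for orientation (the illustrative numbers below are not from DUNE’s design documents). The probability that a muon neutrino arrives as an electron neutrino is

    P(\nu_\mu \rightarrow \nu_e) \approx \sin^2(2\theta)\, \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right)

where θ is the mixing angle, Δm² is the difference of the squared masses, L is the baseline, and E is the neutrino energy. For the few-GeV neutrinos Fermilab produces, a baseline of roughly 1,300 km puts the oscillating factor near its first maximum, one reason for the South Dakota endpoint.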

“DUNE is the thing that’s been pushing me to develop these real-time AI methods,” Karagiorgi says, “for sifting through the data very, very, very quickly and trying to look for rare signatures of interest within them.” When neutrinos interact with the detector’s 70,000 tonnes of liquid argon, they’ll generate a shower of other particles, creating visual tracks that look like a photo of fireworks.

Even when not bombarding DUNE with neutrinos, researchers will keep collecting data on the off chance that it captures neutrinos from a distant supernova. “This is a massive detector spewing out 5 terabytes of data per second,” Karagiorgi says, “and it’s going to run constantly for a decade.” They will need unsupervised learning to notice signatures that no one was looking for, because there are “lots of different models of how supernova explosions happen, and for all we know, none of them could be the right model for neutrinos,” she says. “To train your algorithm on such uncertain grounds is less than ideal. So an algorithm that can recognize any kind of disturbance would be a win.”

Deciding in real time which 1 percent of 1 percent of data to keep will require FPGAs. Karagiorgi’s team is preparing to use them for DUNE, and she walks me to a computer lab where they program the circuits. In the FPGA lab, we look at nondescript circuit boards sitting on a table. “So what we’re proposing is a scheme where you can have something like a hundred of these boards for DUNE deep underground that receive the image data frame by frame,” she says. This system could tell researchers whether a given frame resembled TV static, fireworks, or something in between.
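
In spirit, the trigger decision reduces to scoring each frame and keeping only the rare ones that clear an adaptive threshold. Here is a deliberately simplified Python sketch; the real system runs as fixed-point logic on FPGAs, and the scoring function would be a trained model, not the placeholder used here.

    import numpy as np

    rng = np.random.default_rng(0)

    def anomaly_score(frame):
        # Placeholder score: deviation from a flat "TV static" baseline.
        return float(np.mean(frame ** 2))

    scores = []  # rolling window used to set the threshold
    kept = []

    for i in range(100_000):
        frame = rng.normal(size=(64, 64))  # stand-in detector frame
        s = anomaly_score(frame)
        scores.append(s)
        if len(scores) > 10_000:
            scores.pop(0)
        # Keep roughly the top 0.01 percent -- "1 percent of 1 percent."
        if s >= np.quantile(scores, 0.9999):
            kept.append((i, s))

    print(f"kept {len(kept)} of 100,000 frames")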

Neutrino experiments, like many particle-physics studies, are very visual. When Karagiorgi was a postdoc, automated image processing at neutrino detectors was still in its infancy, so she and collaborators would often resort to visual scanning (bubble-chamber style) to measure particle tracks. She still asks undergrads to hand-scan as an educational exercise. “I think it’s wrong to just send them to write a machine learning algorithm. Unless you can actually visualize the data, you don’t really gain a sense of what you’re looking for,” she says. “I think it also helps with creativity to be able to visualize the different types of interactions that are happening, and see what’s normal and what’s not normal.”

Back in Karagiorgi’s office, a bulletin board displays images from The Cognitive Art of Feynman Diagrams, an exhibit for which the designer Edward Tufte created wire sculptures of the physicist Richard Feynman’s schematics of particle interactions. “It’s funny, you know,” she says. “They look like they’re just scribbles, right? But actually, they encode quantitatively predictive behavior in nature.” Later, Karagiorgi and I spend a good 10 minutes discussing whether a computer or a human could find Waldo without knowing what Waldo looked like. We also touch on the 1964 Supreme Court case in which Justice Potter Stewart famously declined to define obscenity, saying “I know it when I see it.” I ask whether it seems weird to hand over to a machine the task of deciding what’s visually interesting. “There are a lot of trust issues,” she says with a laugh.

On the drive back to Manhattan, we discuss the history of scientific discovery. “I think it’s part of human nature to try to make sense of an orderly world around you,” Karagiorgi says. “And then you just automatically pick out the oddities. Some people obsess about the oddities more than others, and then try to understand them.”

Reflecting on the Standard Model, she calls it “beautiful and elegant,” with “amazing predictive power.” Yet she finds it both limited and limiting, blinding us to colors we don’t yet see. “Sometimes it’s both a blessing and a curse that we’ve managed to develop such a successful theory.”


From Bottleneck to Breakthrough: AI in Chip Verification

How AI is transforming chip design with smarter verification methods

8 min read
Close-up of a blue circuit board featuring a large, central white microchip.

Using advanced machine learning algorithms, Vision AI analyzes every error to find groups with common failure causes. This means designers can attack the root cause once, fixing problems for hundreds of checks at a time instead of tediously resolving them one by one.

Siemens

This is a sponsored article brought to you by Siemens.

In the world of electronics, integrated circuit (IC) chips are the unseen powerhouse behind progress. Every leap—whether it’s smarter phones, more capable cars, or breakthroughs in healthcare and science—relies on chips that are more complex, faster, and packed with more features than ever before. But creating these chips is not just a question of sheer engineering talent or ambition. The design process itself has reached staggering levels of complexity, and with it, the challenge to keep productivity and quality moving forward.

As we push against the boundaries of physics, chipmakers face more than just technical hurdles. Workforce shortages, tight timelines, and ever-stricter requirements for building reliable chips all add pressure. Enormous effort goes into making sure chip layouts follow detailed constraints—such as maintaining minimum feature sizes for transistors and wires, keeping proper spacing between different layers like metal, polysilicon, and active areas, and ensuring vias overlap correctly to create solid electrical connections. These design rules multiply with every new technology generation. For every innovation, there’s pressure to deliver more with less. So, the question becomes: How do we help designers meet these demands, and how can technology help us handle the complexity without compromising on quality?

Shifting the paradigm: the rise of AI in electronic design automation

A major wave of change is moving through the entire field of electronic design automation (EDA), the specialized area of software and tools that chipmakers use to design, analyze, and verify the complex integrated circuits inside today’s chips. Artificial intelligence is already touching many parts of the chip design flow—helping with placement and routing, predicting yield outcomes, tuning analog circuits, automating simulation, and even guiding early architecture planning. Rather than simply speeding up old steps, AI is opening doors to new ways of thinking and working.


Instead of brute-force computation or countless lines of custom code, AI uses advanced algorithms to spot patterns, organize massive datasets, and highlight issues that might otherwise take weeks of manual work to uncover. For example, generative AI can help designers ask questions and get answers in natural language, streamlining routine tasks. Machine learning models can help predict defect hotspots or prioritize risky areas long before sending a chip to be manufactured.

This growing partnership between human expertise and machine intelligence is paving the way for what some call a “shift left” or concurrent build revolution—finding and fixing problems much earlier in the design process, before they grow into expensive setbacks. For chipmakers, this means higher quality and faster time to market. For designers, it means a chance to focus on innovation rather than chasing bugs.

Figure 1. Shift-left and concurrent build of IC chips perform multiple tasks simultaneously that used to be done sequentially. Siemens

The physical verification bottleneck: why design rule checking is harder than ever

As chips grow more complex, the part of the design called physical verification becomes a critical bottleneck. Physical verification checks whether a chip layout meets the manufacturer’s strict rules and faithfully matches the original functional schematic. Its main goal is to ensure the design can be reliably manufactured into a working chip, free of physical defects that might cause failures later on.

Design rule checking (DRC) is the backbone of physical verification. DRC software scans every corner of a chip’s layout for violations—features that might cause defects, reduce yield, or simply make the design un-manufacturable. But today’s chips aren’t just bigger; they’re more intricate, woven from many layers of logic, memory, and analog components, sometimes stacked in three dimensions. The rules aren’t simple either. They may depend on the geometry, the context, the manufacturing process and even the interactions between distant layout features.

Priyank Jain leads product management for Calibre Interfaces at Siemens EDA. Siemens

Traditionally, DRC is performed late in the flow, when all components are assembled into the final chip layout. At this stage, it’s common to uncover millions of violations—and fixing these late-stage issues requires extensive effort, leading to costly delays.

To minimize this burden, there’s a growing focus on shifting DRC earlier in the flow—a strategy called “shift-left.” Instead of waiting until the entire design is complete, engineers try to identify and address DRC errors much sooner, at the block and cell levels. This concurrent design and verification approach allows the bulk of errors to be caught when fixes are faster and less disruptive.

However, running DRC earlier in the flow on a full chip when the blocks are not DRC clean produces results datasets of breathtaking scale—often tens of millions to billions of “errors,” warnings, or flags because the unfinished chip design is “dirty” compared to a chip that’s been through the full design process. Navigating these “dirty” results is a challenge all on its own. Designers must prioritize which issues to tackle, identify patterns that point to systematic problems, and decide what truly matters. In many cases, this work is slow and “manual,” depending on the ability of engineers to sort through data, filter what matters, and share findings across teams.

To cope, design teams have crafted ways to limit the flood of information. They might cap the number of errors per rule, or use informal shortcuts—passing databases or screenshots by email to team members, sharing filters in chat messages, and relying on experts to know where to look. Yet this approach is not sustainable. It risks missing major, chip-wide issues that can cascade through the final product. It slows down response and makes collaboration labor-intensive.

With ongoing workforce challenges and the surging complexity of modern chips, the need for smarter, more automated DRC analysis becomes urgent. So what could a better solution look like—and how can AI help bridge the gap?

The rise of AI-powered DRC analysis

Recent breakthroughs in AI have changed the game for DRC analysis in ways that were unthinkable even a few years ago. Rather than scanning line by line or check by check, AI-powered systems can process billions of errors, cluster them into meaningful groups, and help designers find the root causes much faster. These tools use techniques from computer vision, advanced machine learning, and big data analytics to turn what once seemed like an impossible pile of information into a roadmap for action.

AI’s ability to organize chaotic datasets—finding systematic problems hidden across multiple rules or regions—helps catch risks that basic filtering might miss. By grouping related errors and highlighting hot spots, designers can see the big picture and focus their time where it counts. AI-based clustering algorithms can transform weeks of manual investigation into minutes of guided analysis.
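
Siemens does not publish the algorithms behind Vision AI, but the general idea is easy to make concrete: represent each violation as a feature vector, say its die coordinates plus an identifier for the violated rule, and let a density-based clusterer group violations that plausibly share a root cause. A minimal sketch with invented sample data:

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    # Each violation: (x in um, y in um, rule id). Hypothetical data.
    violations = np.array([
        [105.2, 40.1, 7],
        [105.9, 40.3, 7],
        [106.4, 39.8, 7],   # a dense knot of one rule: likely one root cause
        [880.0, 512.5, 3],
        [879.1, 513.0, 3],
        [12.7, 990.4, 9],   # an isolated violation
    ])

    features = StandardScaler().fit_transform(violations)
    labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(features)

    for cluster in sorted(set(labels)):
        members = np.where(labels == cluster)[0].tolist()
        tag = "isolated/noise" if cluster == -1 else f"group {cluster}"
        print(tag, "->", members)

In this toy run the three violations packed around one spot come out as one group, the pair forms another, and the stray violation is flagged as isolated. A production tool would cluster on far richer features, at a vastly larger scale.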


Another benefit: collaboration. By treating results as shared, living datasets—rather than static tables—modern tools let teams assign owners, annotate findings and pass exact analysis views between block and partition engineers, even across organizational boundaries. Dynamic bookmarks and shared UI states cut down on confusion and rework. Instead of “back and forth,” teams move forward together.

Many of these innovations hint at what’s possible when AI is built into the heart of the verification flow. Not only do they help designers analyze the results; they help everyone reason about the data, summarize findings, and make better design decisions all the way to tapeout.

A real-world breakthrough in DRC analysis and collaboration: Siemens’ Calibre Vision AI

One of the most striking examples of AI-powered DRC analysis comes from Siemens, whose Calibre Vision AI platform is setting new standards for how full-chip verification happens. Building on years of experience in physical verification, Siemens realized that breaking bottlenecks required not only smarter algorithms but rethinking how teams work together and how data moves across the flow.

Vision AI is designed for speed and scalability. It uses a compact error database and a multi-threaded engine to load millions—or even billions—of errors in minutes, visualizing them so engineers see clusters and hot spots across the entire die. Instead of a wall of error codes or isolated rule violations, the tool presents a heat map of the layout, highlighting areas with the highest concentration of issues. By enabling or disabling layers (layout, markers, heat map) and adjusting layer opacity, users get a clear, customizable view of what’s happening—and where to look next.


But the real magic is in AI-guided clustering. Using advanced machine learning algorithms, Vision AI analyzes every error to find groups with common failure causes. This means designers can attack the root cause once, fixing problems for hundreds of checks at a time instead of tediously resolving them one by one. In cases where legacy tools would force teams to slog through, for example, 3,400 checks with 600 million errors, Vision AI’s clustering can reduce that effort to investigating just 381 groups—turning mountains into molehills and speeding debug time by at least 2x.

Figure 2. The Calibre Vision AI software automates and simplifies the chip-level DRC verification process. Siemens

Vision AI is also highly collaborative. Dynamic bookmarks capture the exact state of analysis, from layer filters to zoomed layout areas, along with annotations and owner assignments. Sharing a bookmark sends a living analysis—not just a static snapshot—to coworkers, so everyone is working from the same view. Teams can export results databases, distribute actionable groups to block owners, and seamlessly import findings into other Siemens EDA tools for further debug.

Empowering every designer: reducing the expertise gap

A frequent pain point in chip verification is the need for deep expertise—knowing which errors matter, which patterns mean trouble, and how to interpret complex results. Calibre Vision AI helps level the playing field. Its AI-based algorithms consistently create the same clusters and debug paths that senior experts would identify, but do so in minutes. New users can quickly find systematic issues and perform like seasoned engineers, helping chip companies address workforce shortages and staff turnover.

Beyond clusters and bookmarks, Vision AI lets designers build custom signals by leveraging their own data. The platform secures customer models and data for exclusive use, making sure sensitive information stays within the company. And by integrating with Siemens’ EDA AI ecosystem, Calibre Vision AI supports generative AI chatbots and reasoning assistants. Designers can ask direct questions—about syntax, about a signal, about the flow—and get prompt, accurate answers, streamlining training and adoption.

Real results: speeding analysis and sharing insight

Customer feedback from leading IC companies shows the real-world value of AI for full-chip DRC analysis and debug. One company reported that Vision AI reduced their debug effort by at least half—a gain that makes the difference between tapeout and delay. Another noted the platform’s signals algorithm automatically creates the same check groups that experienced users would manually identify, saving not just time but energy.

Quantitative gains are dramatic. For example, Calibre Vision AI can load and visualize error files significantly faster than traditional debug flows. Figure 3 shows the difference across four test cases: one results file that took 350 minutes to load with the traditional flow took Calibre Vision AI only 31 minutes. In another test case (not shown), it took just five minutes to analyze and cluster 3.2 billion errors from more than 380 rule checks into 17 meaningful groups. Instead of getting lost in gigabytes of error data, designers now spend time solving real problems.

Figure 3. Charting the results load time between the traditional DRC debug flow and the Calibre Vision AI flow. Siemens

Looking ahead: the future of AI in chip design

Today’s chips demand more than incremental improvements in EDA software. As the need for speed, quality and collaboration continues to grow, the story of physical verification will be shaped by smarter, more adaptive technologies. With AI-powered DRC analysis, we see a clear path: a faster and more productive way to find systematic issues, intelligent debug, stronger collaboration and the chance for every designer to make an expert impact.

By combining the creativity of engineers with the speed and insight of AI, platforms like Calibre Vision AI are driving a new productivity curve in full-chip analysis. With these tools, teams don’t just keep up with complexity—they turn it into a competitive advantage.

At Siemens, the future of chip verification is already taking shape—where intelligence works hand in hand with intuition, and new ideas find their way to silicon faster than ever before. As the industry continues to push boundaries and unlock the next generation of devices, AI will help chip design reach new heights.

For more on Calibre Vision AI and how Siemens is shaping the future of chip design, visit eda.sw.siemens.com and search for Calibre Vision AI.


Estimating Surface Heating of an Atmospheric Reentry Vehicle With Simulation

Learn how heat flux gauges are validated using inverse analysis techniques

1 min read

Join Hannah Alpert (NASA Ames) to explore thermal data from the record-breaking 6-meter LOFTID inflatable aeroshell. Learn how COMSOL Multiphysics® was used to perform inverse analysis on flight thermocouple data, validating heat flux gauges and preflight CFD predictions. Attendees will gain technical insights into improving thermal models for future HIAD missions, making this essential for engineers seeking to advance atmospheric reentry design. The session concludes with a live Q&A.

Register now to watch this free on-demand webinar!

IEEE Plays A Pivotal Role In Climate Mitigation Talks

Leaders showcased tech solutions at COP30 and ITU symposium

5 min read
Four men sitting in chairs with a green banner above their head.

IEEE Member Filipe Emídio Tôrres, 2023 IEEE President Saifur Rahman, IEEE Fellow Claudio Canizares, and the vice chair of ITU’s climate biodiversity program participated in a panel session on clean-tech solutions at COP30.

Filipe Emídio Tôrres/IEEE

IEEE has enhanced its standing as a trusted, neutral authority on the role of technology in climate change mitigation and adaptation. Last year it became the first technical association to be invited to a U.N. Conference of the Parties on Climate Change.

IEEE representatives participated in several sessions at COP30, held from 11 to 20 November in Belém, Brazil. More than 56,000 delegates attended, including policymakers, technologists, and representatives from industry, finance, and development agencies.

Following the conference, IEEE helped host the selective International Symposium on Achieving a Sustainable Climate (ISASC), which it organized with the International Telecommunication Union on 16 and 17 December at ITU’s headquarters in Geneva. Among the more than 100 attendees were U.N. agency representatives, diplomats, senior leaders from academia, and experts from government, industry, nongovernmental organizations, and standards development bodies.

Power and energy expert Saifur Rahman, the 2023 IEEE president, led IEEE’s delegation at both events. Rahman is the immediate past chair of IEEE’s Technology for a Sustainable Climate Matrix Organization, which coordinates, communicates, and amplifies the organization’s efforts.

IEEE’s evolving role at COP

IEEE first attended a COP in 2021.

“Over successive COPs, IEEE’s role has evolved from contributing individual technical sessions to being recognized as a trusted partner in climate action,” Rahman noted in a summary of COP30. “There is [a] growing demand for engineering insight, not just to discuss technologies but [also] to help design pathways for deployment, capacity-building, and long-term resilience.”

Joining Rahman at COP30 were IEEE Fellow Claudio Canizares and IEEE Member Filipe Emídio Tôrres.

Canizares is a professor of electrical and computer engineering at the University of Waterloo, in Ontario, Canada, and the executive director of the university’s sustainable energy institute.

Tôrres chairs the IEEE Centro-Norte Brasil Section (Brazil Chapter). An entrepreneur and a former professor, he is pursuing a Ph.D. in biomedical engineering at the University of Brasilia. He also represented the IEEE Young Professionals group while attending the conference.

In the Engineering for Climate Resilience: Water Planning, Energy Transition, Biodiversity session, Rahman showed a video from his 2024 visit to Shennongjia, China, where he monitored a clean energy project designed to protect endangered snub-nosed monkeys from human encroachment. The project integrates renewable energy, which helps preserve the forest and its wildlife.

Rahman also chaired a session at the Sustainable Development Goal Pavilion on balancing decarbonization efforts between industrialized and emerging economies.

Additionally, he participated in a joint panel discussion hosted by IEEE and the World Federation of Engineering Organizations on engineering strategies for climate resilience, including energy transition and biodiversity.

Rahman, Canizares, and Tôrres took part in a session on clean-tech solutions for a sustainable climate, hosted by the International Youth Nuclear Congress. The topics included fossil fuel–free electricity for communications in remote areas and affordable electricity solutions for off-grid areas.

The three also joined several panels organized by the IYNC that addressed climate resilience, career pathways in sustainability, and a mentoring program.

“Over successive COPs, IEEE’s role has evolved from contributing individual technical sessions to being recognized as a trusted partner in climate action.” —Saifur Rahman, 2023 IEEE president

The IYNC hosted the Voices of Transition: Including Pathways to a Clean Energy Future session, for which Tôrres and Rahman were panelists. They discussed the need to include underrepresented and marginalized groups, which often get overlooked in projects that convert communities to renewable energy.

Rahman, Canizares, and Tôrres visited the COP Village, where they met several of the 5,000 Indigenous leaders participating in the conference and discussed potential partnerships and collaborations. Climate change has made the land where the Indigenous people live more susceptible to severe droughts and wildfires, particularly in the Amazon region.

Rahman and Tôrres took a field trip to the Federal University of Pará, where they met several faculty members and students and toured the LASSE engineering lab.

A meaningful experience

Tôrres, who says representing IEEE at COP30 was transformative, wrote a detailed report about the event.

“The experience reaffirmed my belief that engineering and technology, when combined with respect for cultural diversity, can play a critical role in shaping a more sustainable and equitable world,” he wrote. “It highlighted the importance of combining cutting-edge technological solutions with Indigenous wisdom and cultural knowledge to address the climate crisis.”

COP30 webinar

Rahman and Canizares give an overview of their COP30 experiences in an IEEE webinar.

“IEEE has a place at the table,” Rahman says in the video. “We want to showcase outside our comfort zone what IEEE can do. We go to all these global events so that our name becomes a familiar term. We are the first technical association organization ever to go to COP and talk about engineering.”

Canizares added that IEEE is now collaborating closely with the United Nations.

“This is an important interaction. And I think, moving forward, IEEE will become more relevant, particularly in the context of technology deployment,” he said. “As governments start technology deployments, they will see IEEE as a provider of solutions.”

ISASC takeaways

Rahman was the general chair of the ISASC event, which focused on the delivery and deployment of clean energy. Among the presenters were IEEE members including Canizares, Paulina Chan, Surekha Deshmukh, Ashutosh Dutta, Tariq Durrani, Samina Husain, Bruce Kraemer, Bruno Meyer, Carlo Alberto Nucci, and Seizo Onoe.

Sessions were organized around six themes: energy transition, information and communication technology, financing, case studies, technical standards, and public-private collaborations. A detailed report includes the discussions, insights, and opportunities identified throughout ISASC.

Here are some key takeaways.

  • Although the technology exists to transition to renewable energy, most power grid systems are not ready. Deployment is increasingly constrained by transmission bottlenecks, interconnection delays, permitting challenges, and system flexibility. There’s also a skills shortage.
  • Energy transition pathways must be region-specific and should consider local resources, social conditions, funding opportunities, and development priorities.
  • Information and communication technologies are central to climate mitigation solutions, despite growing concerns about their environmental impact. Even though the technologies are used in beneficial ways, such as early-warning systems for natural disasters and smart water management, they also are driving the rapid growth of data centers for artificial intelligence applications—which has increased energy prices and driven up water demand.
  • Technical standards are a means of accelerating adoption, interoperability, and trust in green technology. There needs to be greater coordination among standards development organizations, particularly at the convergence of energy systems, information technologies, and AI. Fragmented standards hinder interoperability. The lack of technical standards is a major constraint on project financing, limiting investors’ confidence and slowing technology deployment.
  • Training and outreach efforts are important for successfully implementing standards, especially in developing regions. IEEE’s global membership and regional sections can be critical channels to address the needs.

A technology assessment tool

As part of ISASC, IEEE presented a technology assessment tool prototype. The web-based platform is designed to help policymakers, practitioners, and investors compare technology options against climate goals.

The tool can run a comparative analysis of sustainable climate technologies and integrate publicly available, expert-validated data.

IEEE can help the world meet its goals

The ISASC report concluded that by connecting engineering expertise with real-world deployment challenges, IEEE is working to translate global climate goals into measurable actions.

The discussions highlighted that the path forward lies less in inventing new technologies and more in aligning systems to deliver ones that already exist.

Summaries of COP30 and ISASC are available on the IEEE Technology for a Sustainable Climate website.


In Nigeria, Why Isn’t Broadband Everywhere?

It has 8 undersea cables, but fiber-optic networks miss half the country

16 min read
Photo of 4 men, some wearing traditional Nigerian garb, seated at computers, with a 5th man leaning over the shoulder of one man to control the computer mouse.
Andrew Esiebo

Under the shade of a cocoa tree outside the hamlet of Atan, near Ibadan, Nigeria, Bolaji Adeniyi holds court in a tie-dyed T-shirt. “In Nigeria we see farms as father’s work,” he says. Adeniyi’s father taught him to farm with a hoe and a machete, which he calls a cutlass. These days, he says, farming in Nigeria can look quite different, depending on whether the farmer has access to the Internet or not.

Not far away, farmers are using drones to map their plots and calculate their fertilizer inputs. Elsewhere, farmers can swipe through security camera footage of their fields on their mobile phones. That saves them from having to patrol the farm’s perimeter and potentially dangerous confrontations with thieves. To be able to do those things, Adeniyi notes, the farmers need broadband access, at least some of the time. “Reliable broadband in Atan would attract international cocoa dealers and enable access to agricultural extension agents, which would aid farmers,” he says.


This Startup Is Building the Internet of Underwater Things

WSense’s innovative networking systems are transforming how we explore ocean environments

6 min read
A man and a woman wearing waterproof gear and helmets crouch down on a metal platform while holding cylindrical metal underwater data-gathering sensors, with the ocean seen in the background.

Italian startup WSense develops software and hardware for underwater data collection and communication.

WSense

This is a sponsored article brought to you by LEMO.

Science thrives on data, so the emergence of the Internet of Things (IoT) brought about a revolution. Billions of “intelligent objects” packed with sensors are connected to each other and to servers, capturing and exchanging huge amounts of data in real time. Analyzed, accessible, and shareable worldwide, these data enable researchers to observe and understand our planet like never before.

Well, not all of our planet: IoT does not connect us to seas and oceans.

This blind spot is rather striking. Water covers 72 percent of the Earth’s surface, its volumes host 80 percent of biodiversity and play a pivotal role in global phenomena, such as climate change. It is impossible to claim a global vision without integrating the oceans.

Pioneering underwater network technology

There are a few marine research stations scattered around the globe (like needles in algal stacks). An increasing number of intelligent marine objects have also been created (sensors, buoys, autonomous vehicles, probes). And the foundations of an underwater wireless network, the Internet of Underwater Things (IoUT), are being laid; it should become as accessible and reliable as the IoT. A pioneer in the field, the Italian company WSense has enjoyed favorable currents this year.

The startup’s adventure began at Sapienza University of Rome, where Professor Chiara Petrioli runs a research laboratory. “We started looking into underwater networks 10 years ago,” she says. “We wanted to find a way to transmit information reliably with elements like routers in large areas.” This research resulted in solutions “achieving levels of reliability and performance previously not possible,” and several international patents were filed. Potential applications supported the creation of a spin-off: WSense launched in 2017 with a handful of PhDs and engineers with backgrounds in acoustics, network architecture, and signal processing, among other areas.

Today, the startup employs a staff of 50 people, with offices in Italy, the U.K., and Norway. It has about 20 customers — “blue economy” companies and scientific institutions. Its innovations were honored in 2022 by a Digital Challenge of the European Institute of Innovation and Technology and by a Blueinvest prize from the European Commission.

How WSense is helping protect Italy's underwater archeological treasures

Deploying acoustics, optical systems, and AI

As you can imagine, “wireless network” and “underwater” are not made for each other. In fact, nothing that makes aerial Wi-Fi function works underwater. Radio waves are strongly attenuated, and light- and sound-based communication varies widely with temperature, salinity, and background noise. Everything had to be reconsidered, and that’s exactly what WSense has done.

Their solution is based on an innovative combination of acoustic communication for medium-range distances and optical LED technologies for short distances, with a hint of artificial intelligence.

More specifically, underwater “nodes” are deployed. Data transfer between the nodes is permanently optimized by AI: Whenever sea conditions change, algorithms modify the path followed by byte packets.
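
WSense has not published its routing algorithms, but the behavior described, re-routing packets as conditions change, maps onto classic shortest-path search with time-varying link costs. A hand-rolled sketch, with link costs invented purely for illustration:

    import heapq

    def shortest_path(links, src, dst):
        """Dijkstra over a dict {node: [(neighbor, cost), ...]}."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost in links.get(node, []):
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1]

    # Acoustic link costs under calm conditions...
    calm = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 1.0)],
            "C": [("D", 1.0)], "D": []}
    # ...and after noise degrades the A-B link (say, a passing ship).
    noisy = {"A": [("B", 9.0), ("C", 4.0)], "B": [("D", 1.0)],
             "C": [("D", 1.0)], "D": []}

    print(shortest_path(calm, "A", "D"))   # ['A', 'B', 'D']
    print(shortest_path(noisy, "A", "D"))  # ['A', 'C', 'D']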

The system, explains Petrioli, can send data over 1,000 meters at 1 kbit/s, and at up to several Mbit/s over shorter distances. This bandwidth can’t be compared to that of aerial networks, “but we are working on enlarging it.” It is sufficient, however, for transmitting the environmental data collected by the sensors.


The resulting network is stable, reliable, and open: A plurality of devices (sensors, probes, vehicles) of various types and brands can be connected. WSense designed its platform first for shallow water (down to 300 meters), but now it asserts that the platform is operational down to 3,000 meters, opening the door wider to the oceans.

On the surface, floating gateways (or gateways posted on nearby land) connect this local network to the cloud, and so to the rest of the world — the IoUT joins the IoT.

WSense designs all the software in-house (from network software to data processing) as well as all the necessary hardware: nodes, probes, modems, and gateways.

WSense’s devices are packed with sensors. “They measure parameters such as temperature, salinity, pH, chlorophyll, methane, ammonium, phosphate, CO2, waves and tide, background noise,” explains Petrioli. In a nutshell: everything required for real-time follow-up and extensive surveillance of submarine environments.

Aquaculture was one of the first sectors to show an interest in WSense (and remains a sector with key customers). Deploying a wireless network over the rearing cages, without bulky cabling, connects everything needed to monitor the biotope and control the fish farm: cameras and sensors, as well as robots.

“We are in the process of developing autonomous robotic systems,” says Petrioli. “We can allow teams of robots to communicate and collaborate, to send data, get instructions, and change their mission in real time.”

Studying how animals adapt to climate change

Following a request from a Norwegian customer, WSense R&D recently developed an ultraminiature wearable tag for fish. It makes it possible to closely observe the life and health of the animals while monitoring water quality. “All this goes in the same direction: supplying tools to go further in the direction of a more sustainable fish farming,” Petrioli says.

Similarly, WSense’s platform can make it considerably easier to survey and work around offshore stations, as well as underwater infrastructure, such as gas and oil pipelines.

An Out-of-the-Box Diving Experience

This summer, WSense launched a miniature device: a “micronode” that could considerably enhance our submarine diving experience, just like smartphone applications have contributed to enriching our daily lives.

The size of a pack of cigarettes, the device is linked by cable (and LEMO W Series connectors) to a watertight tablet. Thanks to the solution, divers can communicate with the surface and among each other much better than by sign language.

“It also makes it possible for them to receive real-time information about what they see around themselves,” explains WSense founder and CEO Chiara Petrioli. For the submerged Roman ruins of Baiae, for instance, the tablet could show, in augmented reality, the reconstructed buildings visited by “diving tourists.”

In addition, the “micronode” is equipped with a GPS, “which increases safety, since the divers will always be precisely located. This option also opens new ways of exploring archaeological sites. It will be possible, for instance, to guide visitors along predefined itineraries,” Petrioli says. “There are endless possibilities!”

The new device adds interactivity, augmented reality, and much more for the divers.

The new product was presented at the finish of the prestigious Ocean Race (a round-the-world sailing challenge), held in late June in Genoa, Italy.

The platform is just as effective in more natural environments. The startup has deployed its network in sensitive sites and environmental hotspots. Scientists use it, for instance, to study how algae, corals, and animals adapt to climate change, in the field and continuously, “which is much more precise than what we could do from the surface or satellites,” according to Petrioli. The solution also monitors sites that represent major risks for human populations, such as volcanic areas.

The WSense platform is also deployed in archeological and cultural sites, such as the submerged luxurious Roman city of Baiae, near Naples, Italy, which is a UNESCO World Heritage Site. By measuring pollution, the effects of climate change, and potential damage caused by visitors, the platform contributes to their protection, just as sensing has long done for on-land archaeological sites.

Just like webcams placed around the world, “those connected by WSense can also promote these sites.” They open windows for education and tourism, providing access to a larger audience than that of just scientists, companies, or authorities.

Defining the standard for IoUT

Appealing as it is, the new “micronode” does not really embody WSense’s true ambitions. Unlike others, the Italian company does not want to offer merely “smart devices,” or to be just one more component in our already too fragmented knowledge of the oceans.

On the contrary, it wants to unite all the components.

With this in mind, WSense has ensured the interoperability of its submarine network. For the same reason, it has also been working hard on making deployment simple and reducing costs, both prerequisites for its true purpose: to define the standard for IoUT.

Underwater wireless networks give continuous access to an unprecedented wealth of data about our oceans

For this purpose, WSense must raise its profile as well as enhance its platform. In January, it got a great boost from a place that hasn’t seen an ocean for the last 200 million years: Davos, in the heart of the Swiss Alps.

At its most recent edition, the prestigious World Economic Forum (WEF) recognized 10 companies, including WSense, winner of its Ocean Data Challenge, an event for identifying the most promising technologies in data collection and management for ocean protection. The award gives access to the WEF network, an ideal platform for finding people who could support a global scale-up.

There was an immediate effect: WSense spent the following weeks answering a flood of inquiries.

“It was huge,” says Petrioli. “We were able to talk to political and scientific leaders, top managers, who were often unaware of the possibilities. We could explain to them that the Internet of Underwater Things was not deep tech, but a solution ready to be implemented.”

Early positioning in the underwater communications market is attractive (Forbes estimated it at $3.5 billion, growing 22 percent per year). However, the urgency lies elsewhere, insists Petrioli.

“We cannot delay applying these solutions. We must not go on ignoring so many things about the exploitation of the oceans or climate change. We must understand today, because it may be too late tomorrow.”


Breaking Boundaries in Wireless Communication

Simulating Animated, On-Body RF Propagation

1 min read

This paper discusses how RF propagation simulations empower engineers to test numerous real-world use cases in far less time, and at lower costs, than in situ testing alone. Learn how simulations provide a powerful visual aid and offer valuable insights to improve the performance and design of body-worn wireless devices.

Download this free whitepaper now!

Terahertz Chip Achieves 72 Gbps Data Rate

Topological antenna achieves unprecedented 3D signal coverage

3 min read
A star shape with arrows and cones on a green background.

A new experimental antenna chip converts terahertz waves into beams that spread in a broad range of directions, solving a key challenge in next-generation wireless communications.

Sixth-generation wireless networks, or 6G, are expected to achieve terabit-per-second speeds using terahertz frequencies. However, to harness the terahertz spectrum, complicated device designs are typically needed to establish multiple high-speed connections. Now research suggests that advanced topological materials may ultimately help to achieve such links. The experimental device the researchers have made, in fact, achieved 72-gigabit-per-second data rates and reached more than 75 percent of the three-dimensional space around it.

“It delivers very high data speeds, wide coverage without moving parts, support for multiple simultaneous links, and two-way communication, all while keeping signal losses low,” says Ranjan Singh, a professor of electrical engineering at the University of Notre Dame in South Bend, Ind. “Current solutions typically achieve only one or two of these features at a time and often rely on complex antenna arrays or mechanical steering.”

Topology—the mathematics of shapes that preserve certain properties through deformation—reveals that light can flow along protected pathways in specially structured materials, resistant to scattering and defects. In this terahertz antenna, that topological protection is engineered to leak signals outward in a controlled, three-dimensional pattern.

How Leaky Antennas Do the Trick

In the new study, instead of completely suppressing leakage, the researchers designed their chip to let some of the terahertz radiation flowing within it leak out. The topological design of this “leaky wave antenna” ensured that signals would flow smoothly without significant loss or distortion, improving bandwidth and data rates.

At the same time, the way light propagates within the microchip means that when it leaks out, it radiates in a cone, providing both horizontal and vertical coverage and enabling the antenna to reach 75 percent of the surrounding three-dimensional space.

“Many previous terahertz systems work only by adding layers of complexity, large antenna arrays, mechanical beam steering, or highly customized components,” Singh says. “What makes this work different is that it achieves wide coverage, high speed, and multilink capability without making the system more complicated.”

The silicon chip is perforated with rows of triangular holes—some 264 micrometers wide, others 99 micrometers. Depending on the arrangement of these big and little triangular holes, terahertz radiation either flowed within the chip or leaked out.

Compared with previous state-of-the-art, nontopological terahertz antennas, the new device achieves 30 times more coverage of a 3D space and roughly 275 times higher data speeds.

“Wide spatial coverage allows wireless links to remain flexible and robust, even as devices move or align imperfectly,” Singh says.

In addition, the new microchip can serve as both a receiver and a transmitter, allowing signals to travel smoothly in both directions along the same pathway without disrupting one another.

“Earlier technologies could, in theory, achieve similar two-way communication, but only with far more complicated designs and tightly controlled experimental setups,” Singh says. “That complexity made real-world demonstrations extremely challenging. By simplifying the underlying design, our approach makes bidirectional, multilink communication not just possible in theory but achievable in practice.”

Moving From the Lab to the Real World

In experiments, the antenna achieved radiation efficiencies between 90 and 100 percent—meaning that nearly all terahertz signals flowing through the chip leaked out in a precisely controllable pattern. This high efficiency translated to practical capabilities: The system could simultaneously stream uncompressed high-definition video while maintaining an additional high-speed wireless data link at 24 Gb/s.

In the near term, Singh envisions TeraFi—terahertz Wi-Fi—delivering speeds far beyond today’s standard for homes, offices, and data centers. “The signal can reach many directions at once,” he says, a capacity that makes it “well suited for environments that require multiple reliable connections simultaneously, including vehicles, factories, and robotic platforms.”

Looking ahead, Singh sees sensing as an important new opportunity for terahertz tech. “The technology also enables terahertz sensing and imaging, including TeDAR [terahertz detection and ranging], a high-resolution sensing approach that can precisely detect objects, distances, and shapes. This opens up potential applications in autonomous systems, smart infrastructure, and industrial monitoring, where both fast communication and accurate sensing are critical.”

Terahertz technology has, however, historically struggled to move from labs to real-world use. “Our approach is different,” Singh says. “We’ve built beam control directly into the chip’s structure instead of relying on fragile external components. That makes the system inherently robust and scalable—more than a laboratory curiosity, but a practical path forward.”

Next, the team plans to integrate antenna, sources, detectors, and signal processing on a single chip for complete terahertz systems. Singh says the team also wants to test networks of multiple devices working together.

The team’s findings appear in the 12 January issue of the journal Nature Photonics.


Henry Samueli: The Broadband Boss

Broadcom cofounder Henry Samueli’s pioneering work fueled the broadband boom

13 min read
This Man Made the Modem in Your Phone a Reality

In 1991, very few people had Internet access. Those who did post in online forums or email friends from home typically accessed the Internet via telephone line, their messages traveling at a top speed of 14.4 kilobits per second. Meanwhile, cable TV was rocketing in popularity. By 1991, 60 percent of U.S. households subscribed to a cable service; cable rollouts in the rest of the world were also picking up speed.

Hypothetically, using that growing cable network instead of phone lines for Internet access would dramatically boost the speed of communications. And making cable TV itself digital instead of analog would allow cable providers to carry many more channels. The theory of how to do that—using analog-to-digital converters and digital signal processing to translate the analog waveforms that travel on coaxial cable into digital form—was well established. But the cable modems required to implement such a digital broadband network were not on the mass market.
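
The final step of that translation, recovering bits from digitized waveform samples, fits in a few lines. The toy Python example below assumes ideal 4-QAM (QPSK) symbols and skips the filtering, equalization, synchronization, and error correction that real cable modems must add:

    import numpy as np

    # 4-QAM (QPSK) constellation: two bits per symbol.
    constellation = {
        (0, 0): 1 + 1j, (0, 1): -1 + 1j,
        (1, 1): -1 - 1j, (1, 0): 1 - 1j,
    }
    points = np.array(list(constellation.values()))
    bit_pairs = list(constellation.keys())

    def demap(samples):
        """Map each complex baseband sample to its nearest symbol's bits."""
        bits = []
        for s in samples:
            nearest = int(np.argmin(np.abs(points - s)))
            bits.extend(bit_pairs[nearest])
        return bits

    # Slightly noisy received samples encoding the bit stream 00 11 01:
    received = np.array([1.1 + 0.9j, -0.8 - 1.2j, -1.05 + 1.0j])
    print(demap(received))  # [0, 0, 1, 1, 0, 1]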


Designing a Silicon Photonic MEMS Phase Shifter With Simulation

Engineers at EPFL used simulation to design photonic devices for enhanced optical network speed, capacity, and reliability

4 min read
Designing a Silicon Photonic MEMS Phase Shifter With Simulation
EPFL

This sponsored article is brought to you by COMSOL.

The modern internet-connected world is often described as wired, but most core network data traffic is actually carried by optical fiber — not electric wires. Despite this, existing infrastructure still relies on many electrical signal processing components embedded inside fiber optic networks. Replacing these components with photonic devices could boost network speed, capacity, and reliability. To help realize the potential of this emerging technology, a multinational team at the Swiss Federal Institute of Technology Lausanne (EPFL) has developed a prototype of a silicon photonic phase shifter, a device that could become an essential building block for the next generation of optical fiber data networks.

Lighting a Path Toward All-Optical Networks

Using photonic devices to process photonic signals seems logical, so why is this approach not already the norm? “A very good question, but actually a tricky one to answer!” says Hamed Sattari, an engineer currently at the Swiss Center for Electronics and Microtechnology (CSEM) specializing in photonic integrated circuits (PIC) with a focus on microelectromechanical system (MEMS) technology. Sattari was a key member of the EPFL photonics team that developed the silicon photonic phase shifter. In pursuing a MEMS-based approach to optical signal processing, Sattari and his colleagues are taking advantage of new and emerging fabrication technology. “Even ten years ago, we were not able to reliably produce integrated movable structures for use in these devices,” Sattari says. “Now, silicon photonics and MEMS are becoming more achievable with the current manufacturing capabilities of the microelectronics industry. Our goal is to demonstrate how these capabilities can be used to transform optical fiber network infrastructure.”

Optical fiber networks, which make up the backbone of the internet, rely on many electrical signal processing devices. Nanoscale silicon photonic network components, such as phase shifters, could boost optical network speed, capacity, and reliability.

The phase shifter design project is part of EPFL’s broader efforts to develop programmable photonic components for fiber optic data networks and space applications. These devices include switches; chip-to-fiber grating couplers; variable optical attenuators (VOAs); and phase shifters, which modulate optical signals. “Existing optical phase shifters for this application tend to be bulky, or they suffer from signal loss,” Sattari says. “Our priority is to create a smaller phase shifter with lower loss, and to make it scalable for use in many network applications. MEMS actuation of movable waveguides could modulate an optical signal with low power consumption in a small footprint,” he explains.

How a Movable Waveguide Helps Modulate Optical Signals

The MEMS phase shifter is a sophisticated mechanism with a deceptively simple-sounding purpose: It adjusts the speed of light. To shift the phase of light is to slow it down. When light is carrying a data signal, a change in its speed causes a change in the signal. Rapid and precise shifts in phase will thereby modulate the signal, supporting data transmission with minimal loss throughout the network. To change the phase of light traveling through an optical fiber conductor, or bus waveguide, the MEMS mechanism moves a piece of translucent silicon called a coupler into close proximity with the bus.
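
That speed change accumulates into a phase shift in a simple way: if moving the coupler changes the effective refractive index seen by the guided light by Δn_eff over an interaction length L, then at wavelength λ the phase shift is

    \Delta\varphi = \frac{2\pi}{\lambda}\, \Delta n_{\mathrm{eff}}\, L

As an illustration (these numbers are invented, not the EPFL device’s measured values): at λ = 1,550 nm, an index change of Δn_eff = 0.01 sustained over L = 50 µm gives Δφ ≈ 2.0 radians, roughly a third of a full cycle.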

Figure 1. Two stages of motion for the MEMS mechanism in the phase shifter.

The design of the MEMS mechanism in the phase shifter provides two stages of motion (Figure 1). The first stage provides a simple on–off movement of the coupler waveguide, thereby engaging or disengaging the coupler to the bus. When the coupler is engaged, a finer range of motion is then provided by the second stage. This enables tuning of the gap between the coupler and bus, which provides precise modulation of phase change in the optical signal. “Moving the coupler toward the bus is what changes the phase of the signal,” explains Sattari. “The coupler is made from silicon with a high refractive index. When the two components are coupled, a light wave moving through the bus will also pass through the coupler, and the wave will slow down.” If the optical coupling of the coupler and bus is not carefully controlled, the light’s waveform can be distorted, potentially losing the signal — and the data.

Designing at Nanoscale with Optical and Electromechanical Simulation

The challenge for Sattari and his team was to design a nanoscale mechanism to control the coupling process as precisely and reliably as possible. As their phase shifter would use electric current to physically move an optical element, Sattari and the EPFL team took a two-track approach to the device’s design. Their goal was to determine how much voltage had to be applied to the MEMS mechanism to induce a desired shift in the photonic signal. Simulation was an essential tool for determining the multiple values that would establish the voltage versus phase relationship. “Voltage vs. phase is a complex multiphysics question. The COMSOL Multiphysics software gave us many options for breaking this large problem into smaller tasks,” Sattari says. “We conducted our simulation in two parallel arcs, using the RF Module for optical modeling and the Structural Mechanics Module for electromechanical simulation.”

The optical modeling (Figure 2) included a mode analysis, which determined the effective refractive index of the coupled waveguide elements, followed by a study of the signal propagation. “Our goal is for light to enter and exit our device with only the desired change in its phase,” Sattari says. “To help achieve this, we can determine the eigenmode of our system in COMSOL.”

Figure 2. Left: Light passes from left to right through a path composed of an optical bus and a coupled movable waveguide. Right: Cross-sectional slices of a simulated light waveform as it passes through the coupled device. By adjusting the distance between the two optical elements in their simulation, the EPFL team could determine how that distance affected the speed, or phase, of the optical signal.

Images courtesy EPFL and licensed under CC BY 4.0

Figure 3. Simulation showing deformation of the movable waveguide support structure. The thin elements that suspend the movable waveguide will flex in response to an applied voltage.

Image courtesy EPFL and licensed under CC BY 4.0

Figure 4. Optical simulation (left) established the vertical distance between the coupler and waveguide that would result in a desired phase shift in the optical signal. Electromechanical simulation (right) determined the voltage that, when applied to the MEMS mechanism, would move the coupler waveguide to the desired distance away from the bus.

Images courtesy EPFL and licensed under CC BY 4.0

Along with determining the physical forms of the waveguide and actuation mechanism, simulation also enabled Sattari to study stress effects, such as unwanted deformation or displacement caused by repeated operation. “Every decision about the design is based on what the simulation showed us,” he says.

Adding to the Foundation of Future Photonic Networks

The goal of this project was to demonstrate how MEMS phase shifters could be produced with existing fabrication capabilities. The result is a robust and reliable design that is achievable with existing surface micromachined manufacturing processes, and occupies a total footprint of just 60 μm × 44 μm. Now that they have an established proof of concept, Sattari and his colleagues look forward to seeing their designs integrated into the world’s optical data networks. “We are creating building blocks for the future, and it will be rewarding to see their potential become a reality,” says Sattari.

References

  1. H. Sattari et al., “Silicon Photonic MEMS Phase-Shifter,” Optics Express, vol. 27, no. 13, pp. 18959–18969, 2019.
  2. T.J. Seok et al., “Large-scale broadband digital silicon photonic switches with vertical adiabatic couplers,” Optica, vol. 3, no. 1, pp. 64–70, 2016.


Teach 5G Hands-On with TIMS Lab Experiments

Boost Student Comprehension in Telecoms with Interactive 5G Labs.

1 min read


Teaching complex 5G and telecommunications concepts can be challenging – students often struggle to connect theory with real-world applications. Traditional lecture-based methods may fail to engage, leaving gaps in understanding critical technologies like OFDM, channel coding, and signal modulation.


At Age 25, Wikipedia Refuses to Evolve

The digital commons champion faces a crisis of its own making

5 min read
Illustration of the Wikipedia logo in a glass case on display with a placard that says Wikipedia 2001.

Wikipedia once had protracted and open debates about new formats that could let it evolve—are those days past?

Illustration: IEEE Spectrum. Source images: Nohat/Wikimedia; Getty Images

Wikipedia celebrates its 25th anniversary this month as the internet’s most reliable knowledge source. Yet behind the celebrations, a troubling pattern has developed: The volunteer community that built this encyclopedia has lately rejected a key innovation designed to serve readers. The same institution founded on the principle of easy and open community collaboration could now be proving unmovable—trapped between the need to adapt and an institutional resistance to change.

Wikipedia’s Digital Sclerosis

Political economist Elinor Ostrom won the 2009 Nobel Prize in economics for studying the ways communities successfully manage shared resources—the “commons.” Wikipedia’s two founders (Jimmy Wales and Larry Sanger) established the internet’s open-source encyclopedia 25 years ago on principles of the commons: Its volunteer editors create and enforce policies, resolve disputes, and shape the encyclopedia’s direction.

But building around the commons involves a trade-off, Ostrom’s work found. Communities that make collective decisions tend to develop strong institutional identities. And those identities sometimes spawn reflexively conservative impulses.

Giving users agency over Wikipedia’s rules, as I’ve discovered in some of my own studies of Wikipedia, can ultimately lead an institution away from the needs of those it serves.

Wikipedia’s editors have built the largest collaborative knowledge project in human history. But the governance these editors exercise increasingly resists new generations of innovation.

Paradoxically, Wikipedia’s revolutionarily collaborative structure once put it at the vanguard of innovation on the open internet. But now that same structure may be failing newer generations of readers.

Does Wikipedia’s Format Belong to Readers or Editors?

There’s a generational disconnect today at the heart of Wikipedia’s current struggles. The encyclopedia’s format remains wedded to the information-dense, text-heavy style of Encyclopedia Britannica—the very model Wikipedia was designed to replace.

A Britannica replacement made sense in 2001. One-quarter of a century ago, the average internet user was older and accustomed to reading long-form content.

Today’s teens and twentysomethings, however, have markedly different media consumption habits than the users Wikipedia was built for. Gen Z and Gen Alpha readers are accustomed to TikTok, YouTube, and mobile-first visual media. Their impatience with Wikipedia’s impenetrable walls of text, as any parent of kids this age knows, arguably threatens the future of the internet’s collaborative knowledge clearinghouse.

The Wikimedia Foundation knows this, too. Research has shown that many readers today greatly value quick overviews of any article, before the reader considers whether to dive into the article’s full text.

So last June, the Foundation launched a modest experiment it called “Simple Article Summaries.” The summaries consisted of AI-generated, simplified text at the top of complex articles. Summaries were clearly labeled as machine-generated and unverified, and they were available only to mobile users who opted in.

Even after all these precautions, however, the volunteer editor community barely gave the experiment time to begin. Editors shut down Simple Article Summaries within a day of its launch.

The response was fierce. Editors called the experiment a “ghastly idea” and warned of “immediate and irreversible harm” to Wikipedia’s credibility.

Comments in the village pump (a community discussion page) ranged from blunt (“Yuck”) to alarmed, with contributors raising legitimate concerns about AI hallucinations and the erosion of editorial oversight.

Revisiting Wikipedia’s Past Helps Reveal Its Future

Last year’s Simple Summaries storm, and sudden silencing, should be considered in light of historical context. Consider three other flashpoints from Wikipedia’s past:

In 2013, the Foundation launched VisualEditor—a “what you see is what you get” interface meant to make editing easier—as the default for all newcomers. However, the interface often crashed, broke articles, and was so slow that experienced editors fled. After protests erupted, a Wikipedia administrator overrode the Foundation’s rollout, returning VisualEditor to an opt-in feature.

The following year brought Media Viewer, which changed how images were displayed. The community voted to disable it. Then, when an administrator implemented that consensus, a Foundation executive reversed the change and threatened to revoke the admin’s privileges. On the German Wikipedia, the Foundation deployed a new “superprotect” user right to prevent the community from turning off Media Viewer.

Even proposals that technically won majority support met resistance. In 2011, the Foundation held a referendum on an image filter that would let readers voluntarily hide graphic content. Despite 56 percent support, the feature was shelved after the German Wikipedia community voted 86 percent against it.

These three controversies from Wikipedia’s past show how genuine conversation can achieve—after disagreement and controversy—compromise and the evolution of Wikipedia’s features and formats. Reflexive vetoes of new experiments, as the Simple Summaries spat highlighted last summer, are not genuine conversation.

Supplementing Wikipedia’s Encyclopedia Britannica–style format with a small component that contains AI summaries is not a simple problem with a cut-and-dried answer, though neither were VisualEditor or Media Viewer.

Why did 2025’s Wikipedia crisis end in an immediate clampdown, whereas the internal crises of 2011–2014 played out through community-based debates, discussions, and plebiscites? Is Wikipedia’s global readership witnessing the first signs of a dangerous generation gap?

Wikipedia Needs to Air Its Sustainability Crisis

A still deeper crisis haunts the online encyclopedia: the sustainability of unpaid labor. Wikipedia was built by volunteers who found meaning in collective knowledge creation. That model worked brilliantly when a generation of internet enthusiasts had time, energy, and idealism to spare. But the volunteer base is aging. A 2010 study found the average Wikipedia contributor was in their mid-twenties; today, many of those same editors are in their forties or fifties.

Meanwhile, the tech industry has discovered how to extract billions in value from their work. AI companies train their large language models on Wikipedia’s corpus. The Wikimedia Foundation recently noted it remains one of the highest-quality datasets in the world for AI development. Research confirms that when developers try to omit Wikipedia from training data, their models produce answers that are less accurate, less diverse, and less verifiable.

The irony is stark. AI systems deliver answers derived from Wikipedia without sending users back to the source. Google’s AI Overviews, ChatGPT, and countless other tools have learned from Wikipedia’s volunteer-created content—then present that knowledge in ways that break the virtuous cycle Wikipedia depends on. Fewer readers visit the encyclopedia directly. Fewer visitors become editors. Fewer users donate. The pipeline that sustained Wikipedia for a quarter century is breaking down.

What Does Wikipedia’s Next 25 Years Look Like?

The Simple Summaries situation arguably risks making the encyclopedia increasingly irrelevant to younger generations of readers. And they’ll be relying on Wikipedia’s information commons for the longest time frame of any cohort now editing or reading it.

On the other hand, Wikipedia’s community does, of course, retain a larger mandate: to serve as steward of the information commons. A badly implemented Simple Summaries could fail that ambitious objective, which would be terrible, too.

Weighing those competing risks, frankly, is what open discussions and sometimes-messy referenda are all about: not sudden shutdowns.

Meanwhile, AI systems should credit Wikipedia when drawing on its content, maintaining the transparency that builds public trust. Companies profiting from Wikipedia’s corpus should pay for access through legitimate channels like Wikimedia Enterprise, rather than scraping servers or relying on data dumps that strain infrastructure without contributing to maintenance.

Perhaps as the AI marketplace matures, there could be room for new large language models trained exclusively on trustworthy Wikimedia data—transparent, verifiable, and free from the pollution of synthetic AI-generated content. Perhaps, too, Creative Commons licenses need updating to account for AI-era realities.

Perhaps Wikipedia itself needs new modalities for creating and sharing knowledge—ones that preserve editorial rigor while meeting audiences where they are.

Wikipedia has survived edit wars, vandalism campaigns, and countless predictions of its demise. It has patiently outlived the skeptics who dismissed it as unreliable. It has proven that strangers can collaborate to build something remarkable.

But Wikipedia cannot survive by refusing to change. Ostrom’s Nobel Prize–winning research reminds us that the communities that govern shared resources often grow conservative over time.

For anyone who cares about the future of reliable information online, Wikipedia’s 25th anniversary is not just a celebration. It is an urgent warning about what happens when the institutions we depend on cannot adapt to the people they are meant to serve.

Dariusz Jemielniak is vice president of the Polish Academy of Sciences, a full professor at Kozminski University in Warsaw, and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University. He served for a decade on the Wikimedia Foundation Board of Trustees and is the author of Common Knowledge? An Ethnography of Wikipedia (Stanford University Press).


Henry Samueli: Digital Broadband Pioneer

The innovative work that made high-speed digital modems a reality

16 min read
A smiling mustachioed man in a suit stands in front of a building that says Broadcom.

Henry Samueli [shown here in 1999] cofounded Broadcom to commercialize low-cost, high-speed digital modem chips, which revolutionized digital broadband.

Gilles Mingasson/Getty Images

Editor’s Note: Henry Samueli is the 2025 recipient of the IEEE Medal of Honor. IEEE Spectrum published this profile of Samueli in the September 1999 issue.

With the recent explosion in the popularity of cable and digital subscriber-line modems for high-speed Internet access, the odds are that you will soon have one of these broadband communications devices in your home or office—if you don’t already. If you do, the odds are that the chips inside the modem will have been designed by Broadcom Corp., and be based on digital signal-processing (DSP) architectures conceived by Henry Samueli.


AI for Wireless

The key to overcoming complexity in modern wireless systems design

4 min read
Diagram showing machine learning workflows
MathWorks

This is a sponsored article brought to you by MathWorks.

The evolution of mobile wireless technology from 3G/4G to 5G, and the introduction of Industry 4.0, have resulted in the ever-increasing complexity of wireless systems design. Wireless networks have also become more difficult to manage because of requirements to share valuable resources optimally among expanding sets of users. These challenges force engineers to think beyond traditional rules-based approaches, and many are turning to artificial intelligence (AI) as the go-to solution for the challenges introduced by modern systems.

From managing communications between autonomous vehicles to optimizing resource allocation in mobile calls, AI has brought the sophistication necessary for modern wireless applications. As the number and scope of devices connected to networks expand, so too will the role of AI in wireless. Engineers must be prepared to introduce it into increasingly complex systems. Knowing the benefits and current applications of AI in wireless systems, as well as the best practices necessary for optimal implementation, will be key to the technology’s future success.


Breaking 6G Barriers: How Researchers Made Ultra-Fast Wireless Real

Researchers tackle high-frequency path loss challenges

1 min read

Keysight visited 6G researchers at Northeastern University who are working to overcome the challenges of high-speed, high-bandwidth wireless communication.

They shared concepts from their cutting-edge research, including overcoming increased path loss and noise at higher frequencies, potential digital threats to communication channels, and real-time upper-layer network applications.


IEEE Spectrum's Top Telecom Stories of 2025

The year saw developments in 6G, optical fiber, and quantum comms

5 min read
Illustration representing a data network spanning across the entire continent of Europe.
iStock

The telecom networks originally built to carry phone calls and packets of data are in the midst of a dramatic shift. The past year saw early steps toward networks becoming a more integrated data fabric that can measure the world, process and sense collaboratively, and even stretch into outer space.

The following list of key IEEE Spectrum telecom news stories from 2025 underscores the evolution the connected (and wireless) world is going through today. A larger story is emerging, in other words, of how networks are turning into instruments and engines rather than just passive pipes.


It’s Time To Rethink 6G

It’s not more bandwidth that users need

10 min read
An illustration of person sitting at a table with number of icons sitting on it.
Davide Comai

Is the worldwide race to keep expanding mobile bandwidth a fool’s errand? Could maximum data speeds—on mobile devices, at home, at work—be approaching “fast enough” for most people for most purposes?

These heretical questions are worth asking, because industry bandwidth tracking data has lately been revealing something surprising: Terrestrial and mobile-data growth is slowing down. In fact, absent a dramatic change in consumer tech and broadband usage patterns, data-rate demand appears set to top out below 1 billion bits per second (1 gigabit per second) in just a few years.


Standards Certification Testing: Bringing Order to the Internet of Things

Comarch leads the way in developing automated standards certification systems

5 min read
Colorful glass building serves as Comarch's headquarters in Krakow, Poland.

Headquartered in Krakow, Poland, Comarch is a global IT firm with offices in 33 countries around the world.

Comarch

This is a sponsored article brought to you by Comarch.

From human speech to quantum encryption, communication relies on protocols that are mutually understood and reliably implemented. Without them, there is only noise.

In the world of electronic communications, standards certification makes sure that everybody knows the rules and plays by them. Comarch—the $500-million IT company with headquarters in Krakow, Poland, and offices in 100 countries—is playing a leading role in developing automated standards certification systems.

In 2007, Comarch began working with the UPnP Forum (now part of the Open Connectivity Foundation) to certify Universal Plug and Play compliance for personal computers, Wi-Fi access points, routers, audiovisual gear, and other devices on home-scale networks.

Over the years, Comarch has steadily expanded its work with multiple standards organizations.

In October 2021, when the AVNU Alliance debuted its test tool for the Milan Advanced Certification Program (MACP) for time-sensitive networks (TSNs), Comarch distilled 15 years of automated certification testing into its new Comarch Automated Testing Framework (CATF) to make it happen.

AVNU Alliance

Thanks to CATF, AVNU could promise its members “faster, easier, more convenient, and less expensive testing” that would guarantee performance and interoperability of high-speed devices for audiovisual, automotive, and industrial applications—applications that demand security, reliability, efficient use of available bandwidth, adaptation to varying latency, and ultra-precise timing.

CATF Product Demo

CATF (Comarch Automated Test Framework) is a new product in Comarch’s portfolio. It’s a specialist conformance test framework dedicated to certification alliances, designed to shorten development time and ensure high stability and maturity.

The Business Case

Standards organizations and their members are looking to faster, digital, cloud-based testing models to help them respond to the increasing pace of technology development. More and more, that requires an expert partner.

Standards organizations know their own technologies inside-out. “They can define the requirements very precisely. They are able to identify needs and goals,” says Radoslaw Kotewicz, Comarch’s director of IoT Sales and Consulting. But he explains that in most cases—not all, but most—they lack in-house expertise in certification test development.

In the past, says Kotewicz, standards alliances nonetheless developed their own certification tools—usually using the resources of their member organizations. That could be a drawn-out process, but it wasn’t a major concern…then.

“Even five or ten years ago,” says Kotewicz, “developing a typical product might take up to two years.” In that context, it wasn’t a problem if it took six to twelve months to develop automated standards-certification tools.

“Now, though, manufacturers are trying to roll out new products faster,” he says. “Standards organizations are accelerating their development schedules to adapt, and may need to deliver new automated testing tools in as little as three months.”

Comarch

“Over the years, more and more standards organizations have moved from dedicated in-house tools to external tools. They know their own technologies, but they are finding that organizations like Comarch, which specialize in standards testing, allow them to bring their automated tests online much faster.”

To accelerate standards-certification test development, speed up device certifications, and improve interoperability testing, the company built the Comarch Automated Test Framework. Written in .NET C# and running under Windows, CATF uses libraries of Python test scripts. Most communications protocols use standard types of input and produce standard types of output that are readily interpreted.
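
Comarch’s actual script interfaces are not described in this article, but the pattern it sketches, Python test cases exchanging standard inputs and outputs under a common framework, might look loosely like the following. Every name and signature here is invented for illustration; this is not the real CATF API.

```python
# Hypothetical sketch of a conformance test case in the style the article
# describes: Python scripts with standard inputs and outputs, orchestrated
# by a common framework. All names are invented; not Comarch's CATF API.
from dataclasses import dataclass

@dataclass
class TestResult:
    case_id: str
    passed: bool
    detail: str

def run_case(case_id, device, send, expect):
    """Send a standard request to the device under test and check the reply."""
    reply = device.query(send)
    return TestResult(case_id, expect(reply), f"sent={send!r} got={reply!r}")

class FakeDevice:
    """Stand-in for a device under test so the sketch runs on its own."""
    def query(self, message):
        return "PONG" if message == "PING" else "ERROR"

if __name__ == "__main__":
    device = FakeDevice()
    # A real framework would load many such cases from a shared library
    # and aggregate the results into a certification report.
    cases = [
        ("KEEPALIVE-01", "PING", lambda r: r == "PONG"),
        ("REJECT-01", "BAD", lambda r: r == "ERROR"),
    ]
    for case_id, send, expect in cases:
        result = run_case(case_id, device, send, expect)
        print(f"{result.case_id}: {'PASS' if result.passed else 'FAIL'}"
              f" ({result.detail})")
```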

Download Free Whitepaper on CATF

Download Comarch's free whitepaper "Technology Certification and Standardization Process with Comarch Automated Test Framework" (CATF) to learn more about the company's testing and certification tools.

Certain protocols, however, require nonstandard calls and responses, while others may require more detailed monitoring at the hardware level—via either a standard piece of test equipment like a commercial vector network analyzer or, infrequently, a piece of custom hardware designed and built by Comarch.

Modularity gives CATF its flexibility—flexibility for proprietary tests, for accommodating standards that are still in development, for looping in uncommon hardware, or for validating interoperability with a wide range of other devices and protocols.

“The power of the solution,” says Kotewicz, “is the common way to manage the test cases, letting us add or extend test cases (to accommodate evolving interoperability requirements, for example).”

Long Experience

The company has long experience working with standards organizations, their member companies, their vendors and contractors, and authorized testing laboratories (ATLs) to develop, roll out, and manage certification tests. To consider just a few of Comarch’s engagements:

UPnP and OCF

When the Open Connectivity Foundation (OCF) took over management of UPnP in 2016, UPnP joined a stable of standards for the wider Internet of Things. Comarch came along, too, winning OCF Outstanding Contributor Awards in 2017 and 2020. As OCF commented:

“As the developer of the OCF Certification Test Tool [CTT], Comarch has demonstrated strong commitment to establishing OCF as a leading communication standard. Comarch has hosted and organized…face-to-face meetings and worked with specification authors and work groups to resolve issues and refine OCF requirements, as well as cooperating with IoTivity [open-source IoT software] developers to improve security requirements in the CTT and in specifications. Comarch’s proactive and responsive approach to resolving issues has led to meeting tight deadlines and making consistent improvements to the OCF CTT.”

AirFuel

Comarch has a history of solving demanding problems. Since 2014, it has worked with the AirFuel Alliance (airfuel.org) to create an automated test system that could tackle the combined software and hardware challenges of resonant wireless power transfer—and certify that AirFuel compliant devices will operate safely and reliably in the increasingly complex interconnected world.

The application was demanding: in addition to standard communications I/O, it had to monitor and validate some hardware conditions (such as the flow of power through the power receive unit, or PRU) and interact with a variety of external devices. The certification tests were complex, and AirFuel’s first certification rounds required up to two costly weeks. Experience and CATF automation reduced that time to less than a day.

FiRa

Since 2020, Comarch has worked with the FiRa Consortium (firaconsortium.org). FiRa’s mission is to “foster a robust [ultra-wide-band] ecosystem to enable rapid technology deployment” for a host of smart city and smart home IoT applications: transportation ticket validation, indoor navigation, asset tracking, social distancing and contact tracing, unmanned store access, and residential access control. Comarch used CATF to build FiRa’s MAC Conformance Test Tool.

Comarch has over 13 years of experience cooperating with leading standards organizations.

Comarch

Lessons Learned

Along with the Comarch Automated Test Framework and a variety of test cases ready to use or adapt, Comarch brings this customer experience—and a lot more—to building reliable, cost-effective certification tests in record time.

In the process, Comarch has learned some important lessons:

  • Implement the most efficient possible test process. Time saved in a single round is multiplied by the number of devices tested, and the total savings can be substantial.
  • Start developing the certification test process as early as possible. But when that means starting before the standard is final, be sure you can add, subtract, or modify test cases on the fly.
  • Give standards organizations the tools they need to understand the test-building process. Having a partner who knows what questions to ask makes the process much easier.
  • Be prepared for interesting problems. In one case, Kotewicz says, the standard required power to ramp up so quickly that only one or two devices in the world could measure the change. In another case, the language of the standard was initially so tangled that it was impossible to verify as written.
  • And, in the Internet of Things—or, indeed, any environment in which multiple standards are in operation at the same time—a modular, multi-protocol approach like Comarch’s is invaluable.

The process is demanding, complex, and “we know it by heart,” says Kotewicz.


Ready to Optimize Your Resource Intensive EM Simulations?

2D EM modeling can be a game-changer for indoor propagation simulations

1 min read

This paper explores the use of WIPL-D software for simulating indoor electromagnetic (EM) propagation in both 2D and 3D, addressing the growing need for accurate modeling as electronic device usage increases. While 3D simulations offer detailed wave-propagation analysis, they require substantial computational resources, especially at high frequencies. 2D simulations, which assume an infinite structure with a constant cross-section, provide a computationally efficient alternative with minimal accuracy loss in many practical scenarios. The study examines the effects of material properties (e.g., concrete vs. metallic pillars) on signal distortion and evaluates different signal types, such as Dirac delta and Gaussian pulses. It concludes that 2D modeling can often serve as a viable, resource-saving substitute for 3D simulation in telecommunication applications for smart environments.
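
The computational argument is easy to see with back-of-envelope scaling. Assuming a boundary-element-style solver that places unknowns on surfaces in 3D but on contours in 2D, and a dense direct solve whose cost grows roughly with the cube of the unknown count, the gap widens rapidly with electrical size. The numbers below are generic assumptions, not WIPL-D benchmarks.

```python
# Back-of-envelope scaling under a boundary-element assumption; these are
# illustrative numbers, not WIPL-D benchmarks.
per_wavelength = 10  # assumed unknowns per wavelength along a boundary
for size in (10, 30, 100):  # scene size in wavelengths
    n2d = per_wavelength * size         # unknowns on a 2D contour
    n3d = (per_wavelength * size) ** 2  # unknowns on a 3D surface
    ratio = (n3d / n2d) ** 3            # relative dense-solve cost, ~O(n^3)
    print(f"{size:>3} wavelengths: 2D n = {n2d:,}, 3D n = {n3d:,}, "
          f"solve-cost ratio ~ {ratio:.0e}")
```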


Kyocera's Optical Tech Boosts Underwater Data Speeds

Underwater drones need faster ways to communicate

3 min read
Close-up of a blue light chamber with green straps and metallic brackets.

At 0.75 gigabits per second, this Kyocera prototype device boasts a super-fast underwater wireless data link—though it also requires close proximity to the receiver.

Kyocera

Underwater drones may soon be able to transfer data at lightning speeds—though only if the receiver is nearby.

In November, the Kyoto, Japan-based electronics maker Kyocera demonstrated a new optical underwater communications technology that boasts lab tests of up to 5.2 gigabits per second at short range. The company is promoting this new optical transmission tech to enable faster inspections of structural damage at undersea worksites—an application that requires handling and transferring large volumes of data in undersea settings.

For instance, underwater inspection drones are today regularly used to gather footage of oil and gas pipes, submarine electric or communications cables, and other underwater structures. But there’s no real-time, easy way for the drones to send their large data signals through the water, unless the drone is tethered by wire.

But with technology like Kyocera’s, an underwater drone might one day gather and store its footage and then dock at a fixed station on the seafloor to wirelessly offload its drives. From there, the data station could ship its large stores of data via cable to a buoy or ship on the surface, or to a station on the ground.

Kyocera will be showcasing this new technology at the consumer tech world’s largest venue, the 2026 Consumer Electronics Show, in Las Vegas next month.

Why Lasers Beat Sound Underwater

Transmitting data underwater today typically involves 1980s-era modem speeds. Current underwater acoustic modems, for instance, can transmit and receive signals across distances of tens of kilometers—though only at meager bit rates of a few kilobits per second.

Contrast that to underwater wireless optical communication (UWOC). In offshore tests last August, Kyocera researchers achieved 0.75 Gbps throughput to a receiver 15 centimeters away—a world UWOC record, according to the company.
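
The gulf between those two figures is easiest to appreciate as transfer time. Here is a rough calculation for a hypothetical 100-gigabyte inspection dataset, using the rates quoted above:

```python
# Rough transfer-time comparison; the 100 GB payload is an illustrative
# assumption, the data rates are the ones quoted in the article.
payload_bits = 100e9 * 8   # 100 gigabytes, in bits

acoustic_bps = 10e3        # ~10 kb/s, a typical acoustic-modem rate
uwoc_bps = 0.75e9          # 0.75 Gb/s, Kyocera's offshore result

print(f"Acoustic modem: {payload_bits / acoustic_bps / 86400:.0f} days")
print(f"Optical (UWOC): {payload_bits / uwoc_bps / 60:.1f} minutes")
```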

So Kyocera researchers are now developing a 1 Gbps UWOC prototype, and they aim to introduce a 2 Gbps commercial version as early as 2027.

To be successful, the researchers will have to overcome several challenges, says Ampalavanapillai Nirmalathas, dean of the faculty of engineering and information technology at the University of Melbourne, Australia.

“They will need to improve the optical beam quality so that it is tightly focused with low divergence, allowing it to travel farther without being scattered in changing underwater conditions,” he says. “They must also ensure the system can sustain high speeds in the ocean environment, even though they have already exceeded that benchmark in the lab.” He adds that they will also need to develop “a receiver with a wider aperture to capture more light to help push speeds beyond the current limit of 750 megabits per second.”

Nirmalathas adds that “human exploration at shallow depths and autonomous platforms at greater depths remain essential for uncovering the undersea world.” Advances like Kyocera’s, he adds, will be key to maintaining communications and supporting the next generation of underwater explorations and applications.

Researchers from Kyocera tested a high-speed underwater optical wireless communication system offshore last August.
Kyocera

From Tank to Ocean, With Caveats

To realize the company’s lab-bench 5.2 Gbps findings, Kyocera researchers optimized three pieces of the puzzle. First, they engineered a blue-laser system that pulses on and off thousands of times per second, encoding data in the bursts. Second, they made a receiver sensitive enough to catch those pulses even as the pulses scatter through seawater. Third, the researchers developed a way to split the data signal into dozens of thin channels and send them all at once, multiplying the throughput.

“Together, these components enable gigabit data-rate communications throughout the system,” says Yoshitaka Toeda, a member of the advanced research group.
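
Kyocera has not published its channel plan or modulation details, but the arithmetic of the multiplexing step is simple: aggregate throughput is the number of parallel channels times the per-channel symbol rate times the bits carried per symbol. The split below is one hypothetical combination that lands on the 5.2-gigabit lab figure.

```python
# One hypothetical way to reach ~5.2 Gb/s with parallel channels (the
# article says "dozens"). Kyocera's actual channel count, symbol rate,
# and modulation are not given; these values are assumptions.
channels = 52          # assumed number of parallel channels
symbol_rate = 100e6    # assumed symbols per second per channel
bits_per_symbol = 1    # simple on-off keying: one bit per pulse slot

aggregate_bps = channels * symbol_rate * bits_per_symbol
print(f"Aggregate throughput: {aggregate_bps / 1e9:.1f} Gb/s")
```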

The system uses blue lasers instead of other wavelengths for a simple reason: Blue light travels farther through water and doesn’t scatter as much. That’s why submarine searchlights and deep-sea cameras also favor blue. Kyocera’s laser is built from gallium nitride, a semiconductor material chosen for its efficiency in generating that blue wavelength, explained Ryota Kimura, a researcher in Kyocera’s communication systems R&D division.
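
The advantage of blue can be put in numbers with the Beer-Lambert attenuation law, P = P0 * exp(-c * d). The coefficients below are rough illustrative values for clear seawater, not Kyocera measurements, but they show why red light is a nonstarter at range.

```python
import math

# Beer-Lambert attenuation P = P0 * exp(-c * d). The coefficients are
# rough illustrative values for clear seawater, not measured data.
coefficients = {"blue (~450 nm)": 0.05, "red (~650 nm)": 0.4}  # 1/m

for distance in (1.5, 10.0):  # meters
    for color, c in coefficients.items():
        surviving = 100 * math.exp(-c * distance)
        print(f"{color} over {distance:4.1f} m: {surviving:5.1f}% survives")
```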

“It allowed us to assess how stable and reliable the prototype was under various conditions,” Kimura says. And though the seawater was moderately turbid, “we were able to communicate over distances ranging from 15 centimeters to 1.5 meters,” he added. “That gave us a clear picture of the prototype’s performance in a real-world environment.”


NATO’s Emergency Plan for an Orbital Backup Internet

An undersea cable breach would reroute to satellites

5 min read

On 18 February 2024, a missile attack from the Houthi militants in Yemen hit the cargo ship Rubymar in the Red Sea. With the crew evacuated, the disabled ship took weeks to finally sink, becoming a symbol of the global Internet’s fragility in the process. Before it went down, the ship dragged its anchor behind it for an estimated 70 kilometers. The meandering anchor wound up severing three fiber-optic cables across the Red Sea floor, which carried about a quarter of all the Internet traffic between Europe and Asia. Data transmissions had to be rerouted as system engineers realized the cables had been damaged. So this year, NATO, the North Atlantic Treaty Organization, will begin testing a plan to fix the vulnerability that the Rubymar’s sinking so vividly illustrated.

This article is part of our special report Top Tech 2025.


National Instruments Paves the Way for Terahertz Regime in 6G Networks

Developing tools that can test new technologies for 6G networks is the key step in making it a reality

3 min read
National Instruments Paves the Way for Terahertz Regime in 6G Networks

This is a sponsored article brought to you by National Instruments (NI).

While 5G networks continue their rollout around the world, researchers and engineers are already looking ahead to a new generation of mobile networks, dubbed 6G. One of the key elements of 6G networks will be to move beyond the millimeter wave (mmWave) spectrum and up into the terahertz (THz) spectrum. The THz spectrum will certainly open up more bandwidth, but a number of technical challenges will need to be addressed if mobile networks are ever to exploit it.

“The higher carrier frequencies of THz communications in 6G networks yield even harder propagation conditions than mmWave transmission,” said Walter Nitzold, Principal Software Engineer and Group Manager at National Instruments. “These high attenuations can be overcome by antenna designs specifically tailored to yield respective antenna gains with pencil-like beams.”
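
The attenuation Nitzold describes falls directly out of the Friis free-space relation: for fixed-gain antennas, path loss grows with the square of the carrier frequency, which is why pencil-beam antenna gain becomes mandatory at these frequencies. A quick comparison, with an illustrative distance and band choices:

```python
import math

# Free-space path loss, FSPL(dB) = 20*log10(4*pi*d*f/c). The distance and
# frequency points are illustrative choices for comparison.
C = 3e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 100.0  # meters
for label, f in (("28 GHz (5G mmWave)", 28e9),
                 ("140 GHz", 140e9),
                 ("300 GHz (sub-THz)", 300e9)):
    print(f"{label:>20}: {fspl_db(d, f):.1f} dB over {d:.0f} m")
```

The roughly 20-decibel penalty in moving from 28 GHz to 300 GHz is what the tailored antenna designs and beam management have to claw back.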

It is in the design of these new kinds of antennas and network hardware where National Instruments (NI) is working hand-in-hand with researchers around the world who are trying to make 6G a reality.

Simplified block diagram of a bidirectional system capable of real-time two-way communications.

National Instruments

The challenges of moving to THz are not limited to the antennas. The design of RF ICs for THz frequencies brings additional obstacles as the wavelength falls in the range of the IC size, putting further constraints on the design methodology, according to Nitzold.

Nitzold also points out that technologies like CMOS appear able to scale only to about 140 GHz, which creates problems for the linearity of components over bandwidths of multiple gigahertz and for transmit output power (TX power). Further, the requirements on baseband processing and on fast, precise beam management for pencil-like beams will become a challenging research area.

Developing Terahertz Testbeds

If research into these issues is to succeed, a new generation of testbeds needs to be set up with high-performance, real-time capability. Because THz testbeds will have range limitations due to path loss, initial testbeds will be limited to lab-based setups mostly consisting of simple short-range components such as horn antennas, according to Nitzold.

“Terahertz Communications have the potential to even replace fiber-optic cables with dedicated point-to-point transmission.”

—Walter Nitzold, Principal Software Engineer and Group Manager at National Instruments

However, as soon as larger deployments in testbeds become a reality, the high bandwidth use-cases will put additional requirements on throughput of the backend, especially when testbeds try to set up a disaggregated radio access network (RAN) structure with distributed THz nodes. These would need to be individually served with fiber connections.

“The cost of investments for THz testbeds will become even larger due to the groundbreaking technological changes, demanding for strong cooperation between many partners to stem this effort jointly,” noted Nitzold.

NI is looking ahead to addressing these testbed issues with its sub-THz and mmWave Transceiver System (MTS), which provides a flexible, high-performance platform to demonstrate real-world results for high-frequency research and prototyping.

System diagram of transmit and receive chains.

National Instruments

The modular system architecture can be configured to meet a variety of use cases, built on a common set of components. LabVIEW reference examples provide a starting point for channel sounding and physical layer IP experiments, while allowing the user to modify IP to perform research into new areas. A multi-FPGA processing architecture enables a truly real-time system with no offline processing needed and with 2 GHz of real-time bandwidth, enabling over-the-air (OTA) prototypes of two-way communications links.

“The strength of the NI approach lies in a flexible and scalable modular hardware and software platform,” said Nitzold. “This platform is suitable to adjust to different needs of a testbed, e.g., interface to new RF frontends as well as other components.”

Walter Nitzold, Principal Software Engineer and Group Manager at National Instruments.

National Instruments

Another benefit of NI’s approach is the incorporation of industry-standard functional splits, which allows for a distributed deployment in a testbed and flexible realization of different use-cases, according to Nitzold. “Additionally, NI focuses on real-time processing for communication links to showcase the theoretic gains in scenarios that are close to reality,” he added.

All of this will ultimately make it possible to access the THz spectrum and access greater bandwidth.

“The THz regime will allow for new opportunities and applications such as immersive virtual reality, mobile holograms, wireless cognition, and the possibility to sense the environment in an unprecedented accuracy with a possible combination of radar and communication,” said Nitzold.

“Terahertz Communications have the potential to even replace fiber-optic cables with dedicated point-to-point transmission. This will also allow new ways of intra-device communication.”


Discover the Role of Filter Technologies in Advanced Communication Systems

Explore SAW and BAW filters, carrier aggregation, and 5G/6G solutions

1 min read

Learn about carrier aggregation, microcell overlapping, and massive MIMO implementation. Delve into the world of surface acoustic wave (SAW) and bulk acoustic wave (BAW) filters and understand their strengths, limitations, and applications in the evolving 5G/6G landscape.


NTT's Photonics to Slash Data Center Energy Use

Broadcom puts optics on chips to eliminate copper bottlenecks

4 min read
NTT's high-speed photonic-electronic switch (51.2 Tbit/s) on display.

NTT’s photonic-electronic convergence (PEC) device replaces electronic switches with optical alternatives, reducing the power needed to move terabits of data per second.

NTT

Although fiber-optic cables today are fast, converting their photons to electric signals at the internet server level still uses a lot of electricity.

The Japanese telecom firm NTT and the Tokyo-based electronics giant Toshiba are working on new ways around this problem. In November, the duo demonstrated high-speed factory production via an optical and wireless network that was controlled from a data center 300 kilometers away.

They described the demo as an industry first—of the kind that NTT has lately been promoting to convince the tech world that photonics will form a “next-generation information and communications infrastructure.”

Is the Internet’s Bottleneck Inside the Server Rack?

Optical fiber revolutionized data transmission decades ago. However, it still requires components such as electronic routers and transceivers to convert data back and forth between electrical and optical signals. In traditional fiber networks, information is carried electronically inside the data center, which in a high-speed setting can lead to packet loss as well as substantial speed and energy limitations for the network. Photonic systems instead encode information directly into light—using photon number, polarization, phase, and amplitude to encode and transmit the signals through optical fibers.

NTT says its Innovative Optical and Wireless Network (IOWN) photonics platform can reduce the power consumption of telecom networks to one-hundredth of what it is now, increase data capacity 125 times, and cut network latency to a fraction of a percent of its current levels. Meanwhile, the power footprint of data centers in the AI era is expanding rapidly and is expected to more than double. In fact, according to Fatih Birol, executive director of the International Energy Agency, data centers’ worldwide electricity consumption is expected to rival that of Japan by 2030.

“We need to think differently to overcome this,” says C. Sean Lawrence, cohead of NTT’s IOWN Development Office. “The core idea is to move from electrical wiring to optical, inside data centers, between circuit boards in servers, between silicon packages on circuit boards, and eventually between the silicon die inside a package. We think we can revolutionize high-performance data transmission and computing by making this shift.”

Putting the Photonic Chip to the Test

NTT faces the challenges of miniaturizing optical components and the high cost of getting them into chips. It began offering elements of IOWN to data centers in 2023, the same year it established NTT Innovative Devices to develop and manufacture what it calls photonic-electronic convergence (PEC) devices. PECs are similar to pluggable optical transceivers, converting between optical and electrical signals. NTT says putting optics and electronics into a single package yields lower power and heat than conventional electronics for networking and computing.

The company has been selling its vision via demos that include long-distance data center transmission. Collaborating with Chunghwa Telecom, it organized colorful “Cho-Kabuki” performances in which stages in Osaka and Taipei, some 1,700 km apart, were linked through photonics, video, and a large onscreen stage, allowing actors at either end to interact. The time lag, barely noticeable, was 17 milliseconds.

NTT later showed off PEC hardware at its Tokyo research center. Among other IOWN demos, the center showcased a mock TV studio. NTT says the board-to-board prototype used in the Kabuki show has a capacity of 51.2 terabits per second and relies on second-generation PEC switches. NTT says it also developed resource-control tech to optimize the use of hardware resources, and by combining that with PEC switches, it was able to lower power consumption compared with that of conventional optical computing.

The company is partnering with U.S. chipmaker Broadcom and others to commercialize the second-generation PEC in 2026. The hardware is a step in NTT’s envisaged road map that calls for optical communication between boards as the second phase of IOWN, followed by interchip links from 2028 and intrachip connections from 2032.

“Package-to-package connections are under development,” says Yosuke Aragane, the other leader of the IOWN Development Office. “We are developing production technologies with diverse ecosystem partners and a government funding program. The die-to-die connection is under consideration. However, reviewing the history, I believe the connection could be an essential technology in the early 2030s.”

Can NTT Convince the World to Switch?

NTT knows it can’t pull off this transformation by itself, so in 2020 it joined with Sony and Intel to found a photonics ecosystem called the IOWN Global Forum. It now has more than 160 members, including chip and server makers as well as internet companies like Google and Microsoft.

IOWN joins a two-decade-old initiative in Europe called Photonics21, a public-private partnership aimed at boosting the continent’s photonics industry.

NTT, however, has a mixed record when it comes to popularizing new technologies. In 1999, when it was one of the world’s most valuable companies, it failed to sell its groundbreaking “i-mode” cellular internet overseas. Today, it has far less global clout.

“Telcos have a history of missing out on opportunities like the cloud and AI, but their one strength is edge-network connectivity, so this is their last chance to claim some territory,” says Roy Rubenstein, an analyst at the research firm LightCounting, based in Eugene, Ore. “What’s unusual here is we are seeing a telco-led initiative and support for it. I think NTT’s road map is realistic and matches that of industry in general, but it can’t do it alone, and even with all these companies it’s not enough.”

“With the advent of AI,” Rubenstein adds, “computing has returned to the center of everything. If the AI boom slows, then the urgency will disappear. But if AI continues as it has done, in five years it will be much closer to that vision.”

Takasumi Tanabe, a professor of electronics and electrical engineering at Tokyo’s Keio University, says IOWN is contributing to important R&D in silicon photonics and optical packaging.

“At the device level, some aspects are more challenging,” Tanabe says. “A completely ‘all-optical’ system, in which electronics are removed entirely, may not be feasible with the current state of device physics. Electronics will still be necessary for control, modulation, and signal processing. Even so, I expect photonic devices to play an increasingly important role in the most critical parts of future systems, where low-power consumption, high bandwidth, and low latency are required.”

“While some elements are ambitious,” Tanabe adds, “the essential ideas behind IOWN are realistic, and the initiative has stimulated valuable advancements in photonic technologies.”

A photo of the inside of an atomic clock.

One of the most precise clocks in the world—the optical atomic clock in Boulder, Colo.—is composed of strontium atoms in a vacuum chamber, with seven different lasers orchestrated in precise patterns to cool, trap, and detect the atoms.

Matthew Jonas/Boulder Daily Camera

Walking into Jun Ye’s lab at the University of Colorado Boulder is a bit like walking into an electronic jungle. There are wires strung across the ceiling that hang down to the floor. Right in the middle of the room are four hefty steel tables with metal panels above them extending all the way to the ceiling. Slide one of the panels to the side and you’ll see a dense mesh of vacuum chambers, mirrors, magnetic coils, and laser light bouncing around in precisely orchestrated patterns.

This is one of the world’s most precise and accurate clocks, and it’s so accurate that you’d have to wait 40 billion years—or three times the age of the universe—for it to be off by one second.
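
That headline figure corresponds to a fractional frequency uncertainty of roughly eight parts in 10^19, which a few lines of arithmetic confirm:

```python
# One second of drift over 40 billion years, as a fractional uncertainty.
seconds_per_year = 365.25 * 24 * 3600      # ~3.16e7 s
total_seconds = 40e9 * seconds_per_year    # ~1.26e18 s
print(f"Fractional uncertainty ~ {1 / total_seconds:.1e}")  # ~7.9e-19
```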


NYU Wireless Picks Up Its Own Baton to Lead the Development of 6G

With the engineers who developed the key enabling technologies for 5G at its helm, NYU Wireless is pushing ahead with the future generations of wireless networks

6 min read
Ted Rappaport
NYU Wireless

The fundamental technologies that have made 5G possible are unequivocally massive MIMO (multiple-input multiple-output) and millimeter wave (mmWave) technologies. Without these two technologies there would be no 5G network as we now know it.

The two men, who were the key architects behind these fundamental technologies for 5G, have been leading one of the premier research institutes in mobile telephony since 2012: NYU Wireless, a part of NYU's Tandon School of Engineering.


Ted Rappaport is the founding director of NYU Wireless, and one of the key researchers in the development of mmWave technology. Rappaport also served as the key thought leader for 5G by planting a flag in the ground nearly a decade ago and arguing that mmWave would be a key enabling technology for the next generation of wireless. His earlier work at two other wireless centers that he founded, at Virginia Tech and The University of Texas at Austin, laid the early groundwork that helped NYU Wireless catapult into one of the premier wireless institutions in the world.

Thomas Marzetta, who now serves as the director of NYU Wireless, is the scientist who led the development of massive MIMO while at Bell Labs and championed its use in 5G, where it has become a key enabling technology. These two researchers, who were so instrumental in developing the technologies that enabled 5G, are now turning their attention to the next generation of mobile communications, and, according to them both, we face some pretty steep technical challenges in realizing it.

"Ten years ago, Ted was already pushing mobile mmWave, and I at Bell Labs was pushing massive MIMO," said Marzetta. "So we had two very promising concepts ready for 5G. The research concepts that the wireless community is working on for 6G are not as mature at this time, making our focus on 6G even more important."

This sense of urgency is reflected by both men, who are pushing against any complacency about starting the development of 6G technologies as soon as possible. With this aim in mind, Rappaport, just as he did 10 years ago, has planted a new flag in the world of mobile communications with the publication last year of an IEEE article entitled "Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond."

"In this paper, we said for the first time that 6G is going to be in the sub-terahertz frequencies," said Rappaport. "We also suggested the idea of wireless cognition where human thought and brain computation could be sent over wireless in real time. It's a very visionary look at something. Our phones, which right now are flashlights, emails, TV browsers, calendars, are going to be become much more."

Tom Marzetta Photo: NYU Wireless

While Rappaport feels confident that they have the right vision for 6G, he is worried about a lack of awareness of how critical it is for US government funding agencies and companies to develop the enabling technologies needed to realize it. In particular, both Rappaport and Marzetta are concerned about the economic competitiveness of the US and the funding challenges that will persist if 6G is not properly recognized as a priority.

“These issues of funding and awareness are critical for research centers, like NYU Wireless," said Rappaport. “The US needs to get behind NYU Wireless to foster these ideas and create these cutting-edge technologies."

With this funding support, Rappaport argues, teaching research institutes like NYU Wireless can create the engineers that end up going to companies and making technologies like 6G become a reality. “There are very few schools in the world that are even thinking this far ahead in wireless; we have the foundations to make it happen," he added.

Both Rappaport and Marzetta also believe that making national centers of excellence in wireless could help to create an environment in which students could be exposed constantly to a culture and knowledge base for realizing the visionary ideas for the next generation of wireless.

“The Federal government in the US needs to pick a few winners for university centers of excellence to be melting pots, to be places where things are brought together," said Rappaport. “The Federal government has to get together and put money into these centers to allow them to hire talent, attract more faculty, and become comparable to what we see in other countries where huge amounts of funding is going in to pick winners."

While research centers, like NYU Wireless, get support from industry to conduct their research, Rappaport and Marzetta see that a bump in Federal funding could serve as both amplification and a leverage effect for the contribution of industrial affiliates. NYU Wireless currently has 15 industrial affiliates with a large number coming from outside the US, according to Rappaport.

“Government funding could get more companies involved by incentivizing them through a financial multiplier,” added Rappaport.

Of course, 6G is not simply about setting out a vision and attracting funding, but also tackling some pretty big technical challenges.

Both men believe that we will need to see the development of new forms of MIMO, such as holographic MIMO, to enable more efficient use of the sub 6 GHz spectrum. Also, solutions will need to be developed to overcome the blockage problems that occur with mmWave and higher frequencies.

Fundamental to these technology challenges is access to new frequency spectrum so that a 6G network operating at sub-terahertz frequencies can be achieved. Both Rappaport and Marzetta are confident that technology will enable us to access even more challenging frequencies.

“There's nothing technologically stopping us right now from 30, and 40, and 50 gigahertz millimeter wave, even up to 700 gigahertz," said Rappaport. “I see the fundamentals of physics and devices allowing us to take us easily over the next 20 years up to 700 or 800 gigahertz."

Marzetta added that there is much more that can be done in the scarce and valuable sub-6GHz spectrum. While massive MIMO is the most spectrally efficient wireless scheme ever devised, it is based on extremely simplified models of how antennas create electromagnetic signals that propagate to another location, according to Marzetta, adding, “No existing wireless system or scheme is operating close at all to limits imposed by nature."

Tom Marzetta with an array Photo: NYU Wireless

While expanding the spectrum of frequencies and making even better use of the sub-6GHz spectrum are the foundation for the realization of future networks, Rappaport and Marzetta also expect that we will see increased leveraging of AI and machine learning. This will enable the creation of intelligent networks that can manage themselves with much greater efficiency than today's mobile networks.

“Future wireless networks are going to evolve with greater intelligence," said Rappaport. An example of this intelligence, according to Rappaport, is the new way in which the Citizens Broadband Radio Service (CBRS) spectrum is going to be used in a spectrum access server (SAS) for the first time ever.

“It's going to be a nationwide mobile system that uses these spectrum access servers that mobile devices talk to in the 3.6 gigahertz band," said Rappaport. “This is going to allow enterprise networks to be a cross of old licensed cellular and old unlicensed Wi-Fi. It's going to be kind of somewhere in the middle. This serves as an early indication of how mobile communications will evolve over the next decade."

These intelligent networks will become increasingly important when 6G moves towards so-called cell-less ("cell-free") networks.

Currently, mobile network coverage is provided through hundreds of roughly circular cells spread out across an area. Now with 5G networks, each of these cells will be equipped with a massive MIMO array to serve the users within the cell. But with a cell-less 6G network the aim would be to have hundreds of thousands, or even millions, of access points, spread out more or less randomly, but with all the networks operating cooperatively together.

“With this system, there are no cell boundaries, so as a user moves across the city, there’s no handover or handoff from cell to cell because the whole city essentially constitutes one cell,” explained Marzetta. “All of the people receiving mobile services in a city get it through these access points, and in principle every user is served by every access point all at once.”
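
A toy model makes the appeal concrete: if received power falls off as a power of distance, many nearby low-power access points serving a user cooperatively can deliver more total signal than a single distant tower. The sketch below uses an idealized path-loss exponent and a random layout; it is a cartoon of the cell-free idea, not a network simulation.

```python
import math
import random

# Cartoon comparison of one macro tower vs. many cooperating access
# points, under idealized power-law path loss. All numbers are assumptions.
random.seed(1)
PATHLOSS_EXP = 3.5  # typical urban path-loss exponent (assumed)

def rx_power(distance_m, tx_power=1.0):
    return tx_power / max(distance_m, 1.0) ** PATHLOSS_EXP

user = (500.0, 500.0)
tower_power = rx_power(400.0)  # one macro tower, 400 m away

# 100 access points scattered over 1 km x 1 km, each radiating 1/100
# of the tower's power, all serving the user at once.
aps = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(100)]
ap_power = sum(rx_power(math.hypot(x - user[0], y - user[1]), tx_power=0.01)
               for x, y in aps)

print(f"Cooperating access points deliver {ap_power / tower_power:.0f}x "
      f"the received power of the single tower")
```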

One of the obvious challenges of this cell-less architecture is the economics of installing so many access points all over the city. The signals to and from every access point must also be carried to a central point that does all the computing and number crunching.

While this all sounds daunting in terms of traditional mobile networks, it becomes far more approachable when you consider that the Internet of Things (IoT) will help create this cell-less network.

“We're going to go from 10 or 20 devices today to hundreds of devices around us that we're communicating with, and that local connectivity is what will drive this cell-less world to evolve,” said Rappaport. “This is how I think a lot of 5G and 6G use cases in this wide swath of spectrum are going to allow these low-power local devices to live and breathe.”

To realize all of these technologies, including intelligent networks, cell-less networks, expanded radio frequencies, and wireless cognition, the key factor will be training future engineers.

To this issue, Marzetta noted: “Wireless communications is a growing and dynamic field that is a real opportunity for the next generation of young engineers.”

For more information about the developments going on at NYU Wireless, please visit their website.

Or to learn more about NYU's Tandon School of Engineering, please visit their website.

Scaling Agile for Hardware: The Right Framework for Your Organization

Comparing MAHD, SAFe, Nexus, and LeSS to Find the Best Fit to Accelerate Product Development

1 min read

Adopting Agile in hardware organizations presents unique challenges—expensive changes, complex dependencies, and regulatory constraints are just some of the hurdles that make traditional Agile frameworks ineffective. While software-driven methods like Scrum or SAFe attempt to scale Agile, they frequently fall short in hardware environments.

This whitepaper compares four leading frameworks—MAHD (Modified Agile for Hardware Development), SAFe (Scaled Agile Framework), Nexus, and LeSS—highlighting why software-centric approaches often fail and why MAHD provides a tailored, hardware-centric approach to developing physical products and hardware-based systems.

6G’s Role in Future Sensor Networks Revealed

More connected devices than ever will strain 6G with a surge of uplinks

6 min read
Peter Vetter smiling in a Nokia Bell Labs sweatshirt.

Peter Vetter, head of Nokia Bell Labs core research, told IEEE Spectrum that 6G infrastructure will need a fresh foundation to support future tech.

Source image: Nokia

When the head of Nokia Bell Labs core research talks about “lessons learned” from 5G, he’s also being candid about the ways in which not everything worked out quite as planned.

That candor matters now, too, because Bell Labs core research president Peter Vetter says 6G’s success depends on getting infrastructure right the first time—something 5G didn’t fully do.

By 2030, he says, 5G will have exhausted its capacity. Not because some 5G killer app will appear tomorrow, suddenly making everyone’s phones demand 10 or 100 times as much data capacity as they require today. Rather, by the turn of the decade, wireless telecom won’t be centered around just cellphones anymore.

AI agents, autonomous cars, drones, IoT nodes, and sensors, sensors, sensors: Everything in a 6G world will potentially need a way onto the network. That means that, more than anything else in the remaining years before 6G’s anticipated rollout, high-capacity connections behind cell towers are a key game to win. Which brings industry scrutiny to what telecom folks call backhaul—the high-capacity fiber or wireless links that pass data from cell towers toward the internet backbone. It’s the difference between the “local” connection from your phone to a nearby tower and the “trunk” connection that carries millions of signals simultaneously.

But the backhaul crisis ahead isn’t just about capacity. It’s also about architecture. 5G was designed around a world where phones dominated, downloading video at higher and higher resolutions. 6G is now shaping up to be something else entirely: a network where floods of sensor and device data stream up into the network rather than down to phones. This inversion—from 5G’s anticipated downlink deluge to 6G’s uplink resurgence—requires rethinking everything at the core level, practically from scratch.

Vetter’s career spans the entire arc of the wireless telecom era—from optical interconnections in the 1990s at Alcatel (a research center pioneering fiber-to-home connections) to his roles at Bell Labs and later Nokia Bell Labs, culminating in 2021 in his current position at the industry’s bellwether institution.

In this conversation, held in November at the Brooklyn 6G Summit in New York, Vetter explains what 5G got wrong, what 6G must do differently, and whether these innovations can arrive before telecom’s networks start running out of room.

5G’s Expensive Miscalculation

IEEE Spectrum: Where is telecom today, halfway between 5G’s rollout and 6G’s anticipated rollout?

Peter Vetter: Today, we have enough spectrum and capacity. But going forward, there will not be enough. The 5G network by the end of the decade will run out of steam, as we see in our traffic simulations and forecasts. And it is something that has been consistent generation to generation, from 2G to 3G to 4G. Every decade, capacity goes up by about a factor of 10. So you need to prepare for that.

And the challenge for us as researchers is how do you do that in an energy-efficient way? Because the power consumption cannot go up by a factor of 10. The cost cannot go up by a factor of 10. And then, lesson learned from 5G: The idea was, “Oh, we do that in higher spectrum. There is more bandwidth. Let’s go to millimeter wave.” The lesson learned is, okay, millimeter waves have short reach. You need a small cell [tower] every 300 meters or so. And that doesn’t cut it. It was too expensive to install all these small cells.
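
The short reach Vetter mentions follows directly from free-space path loss, which rises with frequency. A quick sketch in Python, using my own back-of-the-envelope numbers rather than Nokia's:

```python
# Free-space path loss (Friis): FSPL_dB = 20 * log10(4 * pi * d * f / c).
import math

C = 3.0e8  # speed of light, in meters per second

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# A 300-meter link, the small-cell spacing Vetter cites:
for f_ghz in (3.5, 28.0, 140.0):
    print(f"{f_ghz:>5.1f} GHz: {fspl_db(300, f_ghz * 1e9):5.1f} dB")
```

Moving from 3.5 GHz to 28 GHz costs about 18 dB for fixed antenna gains, roughly 60 times less received power, which is why millimeter-wave cells have to be packed every few hundred meters.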

Is this related to the backhaul question?

Vetter: So backhaul is the connection between the base station and what we call the core of the network—the data centers and the servers. Ideally, you use fiber to your base station. If you have that fiber as a service provider, use it. It gives you the highest capacity. But very often new cell sites don’t have that fiber backhaul; then there are alternatives, such as wireless backhaul.

Nokia Bell Labs has pioneered a glass-based chip architecture for telecom’s backhaul signals, communicating between towers and telecom infrastructure.
Nokia

Radios Built on Glass Push Frequencies Higher

What are the challenges ahead for wireless backhaul?

Vetter: To get up to the 100-gigabit-per-second, fiber-like speeds, you need to go to higher frequency bands.

Higher frequency bands for the signals the backhaul antennas use?

Vetter: Yes. The challenge is the design of the radio front ends and the radio-frequency integrated circuits (RFICs) at those frequencies. You cannot really integrate [present-day] antennas with RFICs at those high speeds.

And what happens as those signal frequencies get higher?

Vetter: So in a millimeter wave, say 28 gigahertz, you could still do [the electronics and waveguides] for this with a classical printed circuit board. But as the frequencies go up, the attenuation gets too high.

What happens when you get to, say, 100 GHz?

Vetter: [Conventional materials] are no good anymore. So we need to look at other still low-cost materials. We have done pioneering work at Bell Labs on radio on glass. And we use glass not for its optical transparency, but for its transparency in the subterahertz radio range.

Is Nokia Bell Labs making these radio-on-glass backhaul systems for 100-GHz communications?

Vetter: Above 100 GHz, you need to look into a different material. I used an order of magnitude, but [the frequency range] is actually 140 to 170 GHz, what is called the D-band.

We collaborate with our internal customers to get these kind of concepts on the long-term road map. As an example, that D-Band radio system, we actually integrated it in a prototype with our mobile business group. And we tested it last year at the Olympics in Paris.

But this is, as I said, a prototype. We need to mature the technology between a research prototype and qualifying it to go into production. The researcher on that is Shahriar Shahramian. He’s well-known in the field for this.

Why 6G’s Bandwidth Crisis Isn’t About Phones

What will be the applications that’ll drive the big 6G demands for bandwidth?

Vetter: We’re installing more and more cameras and other types of sensors. I mean, we’re going into a world where we want to create large world models that are synchronous copies of the physical world. So what we will see going forward in 6G is a massive-scale deployment of sensors which will feed the AI models. So a lot of uplink capacity. That’s where a lot of that increase will come from.

Any others?

Vetter: Autonomous cars could be an example. It can also be in industry—like a digital twin of a harbor, and how you manage that? It can be a digital twin of a warehouse, and you query the digital twin, “Where is my product X?” Then a robot will automatically know thanks to the updated digital twin where it is in the warehouse and which route to take. Because it knows where the obstacles are in real time, thanks to that massive-scale sensing of the physical world and then the interpretation with the AI models.

You will have your agents that act on behalf of you to do your groceries or order a driverless car. They will actively record where you are, make sure that there are also the proper privacy measures in place. So that your agent has an understanding of the state you’re in and can serve you in the most optimal way.

How 6G Networks Will Help Detect Drones, Earthquakes, and Tsunamis

You’ve described before how 6G signals can not only transmit data but also provide sensing. How will that work?

Vetter: The augmentation now is that the network can also be turned into a sensing modality. If you turn around the corner, a camera doesn’t see you anymore. But the radio still can detect people that are coming, for instance, at a traffic crossing. And you can anticipate that: warn a car that, “There’s a pedestrian coming. Slow down.” We also have fiber sensing: for instance, using fibers at the bottom of the ocean to detect movements of waves, detect tsunamis, and do an early tsunami warning.

What are your teams’ findings?

Vetter: Present-day tsunami warning buoys are a few hundred kilometers offshore. These tsunami waves travel at 300 or more meters per second, so you only have about 15 minutes to warn the people and evacuate. If you have a fiber sensing network across the ocean that can detect the wave much deeper in the ocean, you can do meaningful early tsunami warning.

We recently detected there was a major earthquake in eastern Russia. That was last July. And we had a fiber sensing system between Hawaii and California. And we were able to see that earthquake on the fiber. And we also saw the development of the tsunami wave.
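
The arithmetic behind Vetter's 15-minute figure is straightforward. Here is a minimal sketch using the numbers he cites, with a wave speed of roughly 300 meters per second and the detection distance as the variable:

```python
# Warning time = detection distance / tsunami wave speed.
def warning_minutes(distance_km, wave_speed_m_s=300.0):
    return distance_km * 1000.0 / wave_speed_m_s / 60.0

print(warning_minutes(300))    # buoys a few hundred km offshore: ~17 minutes
print(warning_minutes(2000))   # hypothetical mid-ocean fiber sensing: ~111 minutes
```

Detecting the wave thousands of kilometers out, as a transoceanic sensing fiber could, multiplies the evacuation window accordingly.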

6G’s Thousands of Antennas and Smarter Waveforms

Bell Labs was an early pioneer in multiple-input, multiple-output (MIMO) antennas starting in the 1990s, in which multiple transmit and receive antennas carry many data streams at once. What is Bell Labs doing with MIMO now to help solve these bandwidth problems you’ve described?

Vetter: So, as I said earlier, you want to provide capacity from existing cell sites. And the way MIMO can do that is by a technology called beamforming: If you want better coverage at a higher frequency, you need to focus your electromagnetic energy, your radio energy, even more. So in order to do that, you need a larger number of antennas.

So if you double the frequency, we go from 3.5 GHz, which is the C-band in 5G, now to 6G, 7 GHz. So it’s about double. That means the wavelength is half. So you can fit four times more antenna elements in the same form factor. So physics helps us in that sense.
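
That factor of four comes from fitting half-wavelength-spaced elements into a fixed panel area. A minimal sketch, my own illustration assuming a hypothetical 0.5-meter-square panel:

```python
# Antenna elements that fit in a fixed square aperture at
# half-wavelength spacing: per side ~ aperture / (lambda / 2).
C = 3.0e8  # speed of light, in meters per second

def n_elements(freq_hz, aperture_m=0.5):
    half_wavelength = C / freq_hz / 2
    per_side = int(aperture_m / half_wavelength)
    return per_side ** 2

print(n_elements(3.5e9))  # 5G C-band: 11 x 11 = 121 elements
print(n_elements(7.0e9))  # candidate 6G band: 23 x 23 = 529, roughly 4x
```

Halving the wavelength doubles the element count along each side of the panel, and the area scaling squares it.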

What’s the catch?

Vetter: Where physics doesn’t help us is more antenna elements means more signal processing, and the power consumption goes up. So here is where the research then comes in. Can we creatively get to these larger antenna arrays without the power consumption going up?

The use of AI is important in this. How can we leverage AI to do channel estimation, to do such things as equalization, to do smart beamforming, to learn the waveform, for instance?

We’ve shown that with these kinds of AI techniques, we can actually get up to 30 percent more capacity on the same spectrum.
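
For a sense of what “channel estimation” means here, the classical baseline that learned estimators aim to improve on is a least-squares fit over known pilot symbols. A minimal sketch, my own illustration rather than Bell Labs code:

```python
# Least-squares estimate of a flat-fading channel coefficient h from
# known BPSK pilot symbols: h_hat = (p^H y) / (p^H p).
import numpy as np

rng = np.random.default_rng(2)
N_PILOTS = 64

h_true = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
pilots = rng.choice(np.array([1, -1], dtype=complex), size=N_PILOTS)
noise = 0.1 * (rng.standard_normal(N_PILOTS) + 1j * rng.standard_normal(N_PILOTS))

received = h_true * pilots + noise            # what the base station sees
h_hat = np.vdot(pilots, received) / np.vdot(pilots, pilots).real

print(abs(h_true - h_hat))  # estimation error shrinks as pilots increase
```

Neural estimators target the harder, non-idealized version of this problem (correlated antennas, interference, imperfect hardware), which is the regime where the gains Vetter describes show up.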

And that allows many gigabits per second to go out to each phone or device?

Vetter: So gigabits per second is already possible in 5G. We’ve demonstrated that. You can imagine that this could go up, but that’s not really the need. The need is really how many more can you support from a base station?

Wi-Fi 7 Signals the Industry’s New Priority: Stability

Multi-link operations and the 6-GHz band promise more reliability than before

4 min read
An illustration of a speedometer with the number "7" on the dial.
Giacomo Bagnara

Wi-Fi is one of the most aggravating success stories. Despite how ubiquitous the technology has become in our lives, it still gives reasons to grumble: The service is spotty or slow, for example, or the network keeps cutting out. Wi-Fi’s reliability has an image problem.

When Wi-Fi 7 arrives this year, it will bring with it a new focus on improving its image. Every Wi-Fi generation brings new features and areas of focus, usually related to throughput—getting more bits from point A to point B. The new features in Wi-Fi 7 will result in a generation of wireless technology that is more focused on reliability and reduced latency, while still finding new ways to continue increasing data rates.

Instrument Innovations for mmWave Test

NI is introducing the PXIe-5831 millimeter wave (mmWave) vector signal transceiver (VST)

7 min read
NI

Implementing a validation or production test strategy for new wireless standards is difficult. It is made even harder by the constant increase in complexity in new wireless standards and technologies like 5G New Radio (NR). This includes wider and more complex waveforms, an exponential increase in test points, and restrictive link budgets that require technologies like beamforming and phased-array antennas. To help you address these challenges, NI introduced the PXIe-5831 millimeter wave (mmWave) vector signal transceiver (VST), which delivers high-speed, high-quality measurements in an architecture that can adapt to the needs of the device under test (DUT) even as those needs change. This PXI VST shortens the time you need to bring up new test assets by simplifying complex measurement requirements and the instrumentation you need to test them.

An Extension of the VST Architecture

Figure 1. PXI Vector Signal Transceiver (VST) core block diagram with mmWave extension.

Filter Technologies for Advanced Communication Systems

Download this free white paper and learn more about SAW and BAW filters, carrier aggregation, and 5G/6G solutions

1 min read

Learn about carrier aggregation, microcell overlapping, and massive MIMO implementation. Delve into the world of surface acoustic wave (SAW) and bulk acoustic wave (BAW) filters and understand their strengths, limitations, and applications in the evolving 5G/6G landscape.

Key highlights:

Citizens of Smart Cities Need a Way to Opt Out

“Data walks” reveal how residents feel about digital privacy

3 min read
A surveillance camera collage with digital graphics overlaying a curly-haired person and a speeding car.

Gwen Shaffer leads residents of Long Beach, Calif., on “data walks” to learn how people respond to smart city technologies and the data they gather.

Stuart Bradford

For years, Gwen Shaffer has been leading Long Beach, Calif., residents on “data walks,” pointing out public Wi-Fi routers, security cameras, smart water meters, and parking kiosks. The goal, according to the professor of journalism and public relations at California State University, Long Beach, was to learn how residents felt about the ways in which their city collected data on them.
