Designing the First Apple Macintosh: The Engineers’ Story

How a small team of little-known designers changed computing forever

An original Macintosh computer with keyboard and mouse, its screen displaying “happy birthday.” Photo: iStock/IEEE Spectrum

In 1979 the Macintosh personal computer existed only as the pet idea of Jef Raskin, a veteran of the Apple II team, who had proposed that Apple Computer Inc. make a low-cost “appliance”-type computer that would be as easy to use as a toaster. Mr. Raskin believed the computer he envisioned, which he called Macintosh, could sell for US $1000 if it was manufactured in high volume and used a powerful microprocessor executing tightly written software.

Mr. Raskin’s proposal did not impress anyone at Apple Computer enough to bring much money from the board of directors or much respect from Apple engineers. The company had more pressing concerns at the time: the major Lisa workstation project was getting under way, and there were problems with the reliability of the Apple III, the revamped version of the highly successful Apple II.

This article was first published as “Design case history: Apple’s Macintosh.” It appeared in the December 1984 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The diagrams and photographs appeared in the original print version. The author spoke with many members of the design team in the months following the 1984 introduction of the Macintosh; however, Steve Jobs did not grant an interview for this article.

Although the odds seemed against it in 1979, the Macintosh, designed by a handful of inexperienced engineers and programmers, is now recognized as a technical milestone in personal computing. Essentially a slimmed-down version of the Lisa workstation with many of its software features, the Macintosh sold for $2495 at its introduction in early 1984; the Lisa initially sold for $10,000. Despite criticism of the Macintosh—that it lacks networking capabilities adequate for business applications and is awkward to use for some tasks—the computer is considered by Apple to be its most important weapon in the war with IBM for survival in the personal-computer business.

From the beginning, the Macintosh project was powered by the dedicated drive of two key players on the project team. For Burrell Smith, who designed the Macintosh digital hardware, the project represented an opportunity for a relative unknown to demonstrate outstanding technical talents. For Steven Jobs, the 29-year-old chairman of Apple and the Macintosh project’s director, it offered a chance to prove himself in the corporate world after a temporary setback: although he cofounded Apple Computer, the company had declined to let him manage the Lisa project. Mr. Jobs contributed relatively little to the technical design of the Macintosh, but he had a clear vision of the product from the beginning. He challenged the project team to design the best product possible and encouraged the team by shielding them from bureaucratic pressures within the company.

Burrell Smith and the Early Mac Design

Mr. Smith, who was a repairman in the Apple II maintenance department in 1979, had become hooked on microprocessors several years earlier during a visit to the electronics-industry area south of San Francisco known as Silicon Valley. He dropped out of liberal-arts studies at the Junior College of Albany, New York, to pursue the possibilities of microprocessors—there isn’t anything you can’t do with those things, he thought. Mr. Smith later became a repairman at Apple Computer in Cupertino, Calif., where he spent much time studying the cryptic logic circuitry of the Apple II, designed by company cofounder Steven Wozniak.

Mr. Smith’s dexterity in the shop impressed Bill Atkinson, one of the Lisa designers, who introduced him to Mr. Raskin as “the man who’s going to design your Macintosh.” Mr. Raskin replied noncommittally, “We’ll see about that.”

However, Mr. Smith managed to learn enough about Mr. Raskin’s conception of the Macintosh to whip up a makeshift prototype using a Motorola 6809 microprocessor, a television monitor, and an Apple II. He showed it to Mr. Raskin, who was impressed enough to make him the second member of the Macintosh team.

But the fledgling Macintosh project was in trouble. The Apple board of directors wanted to cancel the project in September 1980 to concentrate on more important projects, but Mr. Raskin was able to win a three-month reprieve.

Meanwhile Steve Jobs, then vice president of Apple, was having trouble with his own credibility within the company. Though he had sought to manage the Lisa computer project, the other Apple executives, noting that he had no formal business education, saw him as too inexperienced and eccentric to be entrusted with such a major undertaking. After this rejection, “he didn’t like the lack of control he had,” noted one Apple executive. “He was looking for his niche.”

Mr. Jobs became interested in the Macintosh project, and, possibly because few in the company thought the project had a future, he was made its manager. Under his direction, the design team became as compact and efficient as the Macintosh was to be—a group of engineers working at a distance from all the meetings and paper-pushing of the corporate mainstream. In recruiting the other members of the Macintosh team, Mr. Jobs lured some from other companies with promises of potentially lucrative stock options.

The Macintosh project “was known in the company as ‘Steve’s folly.’”

With Mr. Jobs at the helm, the project gained some credibility among the board of directors—but not much. According to one team member, it was known in the company as “Steve’s folly.” But Mr. Jobs lobbied for a bigger budget for the project and got it. The Macintosh team grew to 20 by early 1981.

The decision on what form the Macintosh would take was left largely to the design group. At first the members had only the basic principles set forth by Mr. Raskin and Mr. Jobs to guide them, as well as the example set by the Lisa project. The new machine was to be easy to use and inexpensive to manufacture. Mr. Jobs wanted to commit enough money to build an automated factory that would produce about 300 000 computers a year. So one key challenge for the design group was to use inexpensive parts and to keep the parts count low.

Making the computer easy to use required considerable software for the user-computer interface. The model was, of course, the Lisa workstation with its graphic “windows” to display many different programs simultaneously. “Icons,” or little pictures, were used instead of cryptic computer terms to represent a selection of programs on the screen; by moving a “mouse,” a box the size of a pack of cigarettes, the user manipulated a cursor on the screen. The Macintosh team redesigned the software of the Lisa from scratch to make it operate more efficiently, since the Macintosh was to have far less memory than the 1 million bytes of the Lisa. But the Macintosh software was also required to run faster than the Lisa software, which had been criticized for being slow.

Defining the Mac as the Project Progressed

The lack of a precise definition for the Macintosh project was not a problem. Many of the designers preferred to define the computer as they went along. “Steve allowed us to crystallize the problem and the solution simultaneously,” recalled Mr. Smith. The method put strain on the design team, since they were continually evaluating design alternatives. “We were swamped in detail,” Mr. Smith said. But this way of working also led to a better product, the designers said, because they had the freedom to seize opportunities during the design stage to enhance the product.

Such freedom would not have been possible had the Macintosh project been structured in the conventional way at Apple, according to several of the designers. “No one tried to control us,” said one. “Some managers like to take control, and though that may be good for mundane engineers, it isn’t good if you are self-motivated.”

Central to the success of this method was the small, closely knit nature of the design group, with each member being responsible for a relatively large portion of the total design and free to consult other members of the team when considering alternatives. For example, Mr. Smith, who was well acquainted with the price of electronic components from his early work on reducing the cost of the Apple II, made many decisions about the economics of Macintosh hardware without time-consuming consultations with purchasing agents. Because communication among team members was good, the designers shared their areas of expertise by advising each other in the working stages, rather than waiting for a final evaluation from a group of manufacturing engineers. Housing all members of the design team in one small office made communicating easier. For example, it was simple for Mr. Smith to consult a purchasing agent about the price of parts if he needed to, because the purchasing agent worked in the same building.

Andy Hertzfeld, who transferred from the Apple II software group to design the Macintosh operating software, noted, “In lots of other projects at Apple, people argue about ideas. But sometimes bright people think a little differently. Somebody like Burrell Smith would design a computer on paper and people would say, ‘It’ll never work.’ So instead Burrell builds it lightning fast and has it working before the guy can say anything.”

“When you have one person designing the whole computer, he knows that a little leftover gate in one part may be used in another part.”
—Andy Hertzfeld

The closeness of the Macintosh group enabled it to make design tradeoffs that would not have been possible in a large organization, the team members contended. The interplay between hardware and software was crucial to the success of the Macintosh design, which used a limited memory and few electronic parts to perform complex operations. Mr. Smith, who was in charge of the computer’s entire digital hardware design, and Mr. Hertzfeld became close friends and often collaborated. “When you have one person designing the whole computer,” Mr. Hertzfeld observed, “he knows that a little leftover gate in one part may be used in another part.”

To promote interaction among the designers, one of the first things that Mr. Jobs did in taking over the Macintosh project was to arrange special office space for the team. In contrast to Apple’s corporate headquarters, identified by the company logo on a sign on its well-trimmed lawn, the team’s new quarters, behind a Texaco service station, had no sign to identify them and no listing in the company telephone directory. The office, dubbed Texaco Towers, was an upstairs, low-rent, plasterboard-walled, “tacky-carpeted” place, “the kind you’d find at a small law outfit,” according to Chris Espinosa, a veteran of the original Apple design team and an early Macintosh draftee. It resembled a house more than an office, having a communal area much like a living room, with smaller rooms off to the side for more privacy in working or talking. The decor was part college dormitory, part electronics repair shop: art posters, beanbag chairs, coffee machines, stereo systems, and electronic equipment of all sorts scattered about.

“Whenever a competitor came out with a product, we would buy and dismantle it, and it would kick around the office.”
—Chris Espinosa

There were no set work hours and initially not even a schedule for the development of the Macintosh. Each week, if Mr. Jobs was in town (often he was not), he would hold a meeting at which the team members would report what they had done the previous week. One of the designers’ sidelines was to dissect the products of their competitors. “Whenever a competitor came out with a product, we would buy and dismantle it, and it would kick around the office,” recalled Mr. Espinosa.

In this way, they learned what they did not want their product to be. In their competitors’ products, Mr. Smith saw a propensity for using connectors and slots for inserting printed-circuit boards—a slot for the video circuitry, a slot for the keyboard circuitry, a slot for the disk drives, and memory slots. Behind each slot were buffers to allow signals to pass onto and off the printed-circuit board properly. The buffers meant delays in the computers’ operations, and because several boards shared a backplane, the large capacitance presented by the multiple PC boards slowed the backplane further. The number of parts required made the competitors’ computers hard to manufacture, costly, and less reliable. The Macintosh team resolved that their PC would have but two printed-circuit boards and no slots, buffers, or backplane.

A challenge in building the Macintosh was to offer sophisticated software using the fewest and least-expensive parts.

To squeeze the needed components onto the board, Mr. Smith planned the Macintosh to perform specific functions rather than operate as a flexible computer that could be tailored by programmers for a wide variety of applications. By rigidly defining the configuration of the Macintosh and the functions it would perform, he eliminated much circuitry. Instead of providing slots into which the user could insert printed-circuit boards with such hardware as memory or coprocessors, the designers decided to incorporate many of the basic functions of the computer in read-only memory, which is more reliable. The computer would be expanded not by slots, but through a high-speed serial port.

Writing the Mac’s Software

The software designers were faced in the beginning with often-unrealistic schedules. “We looked for any place where we could beg, borrow, or steal code,” Mr. Hertzfeld recalled. The obvious place for them to look was the Lisa workstation. The Macintosh team wanted to borrow some of the Lisa’s software for drawing graphics on the bit-mapped display. In 1981, Bill Atkinson was refining the Lisa graphics software, called Quickdraw, and began to work part-time implementing it for the Macintosh.

Quickdraw was a scheme for manipulating bit maps to enable applications programmers to construct images easily on the Macintosh bit-mapped display. The Quickdraw program allows the programmer to define and manipulate a region—a software representation of an arbitrarily shaped area of the screen. One such region is a rectangular window with rounded corners, used throughout the Macintosh software. Quickdraw also allows the programmer to keep images within defined boundaries, which makes the windows in the Macintosh software appear to hold data. The programmer can unite two regions, subtract one from the other, or intersect them.
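For readers who want a concrete picture, the sketch below models this region calculus in C. It is only an illustration: a region is represented here as a 1-bit mask over a tiny screen, and the type and function names are hypothetical, not the actual Quickdraw interface, which used a far more compact encoding written in 68000 assembly code.

```c
/* A sketch of the region calculus described above: a "region" is a
 * 1-bit mask over a tiny screen; the names are illustrative only. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define W 32
#define H 24

typedef struct { uint8_t m[H][W]; } Region;   /* 1 = pixel is in the region */

static void rect_region(Region *r, int left, int top, int right, int bottom) {
    memset(r->m, 0, sizeof r->m);
    for (int y = top; y < bottom; y++)
        for (int x = left; x < right; x++)
            r->m[y][x] = 1;
}

static void union_region(const Region *a, const Region *b, Region *out) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out->m[y][x] = a->m[y][x] | b->m[y][x];
}

static void intersect_region(const Region *a, const Region *b, Region *out) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out->m[y][x] = a->m[y][x] & b->m[y][x];
}

static void subtract_region(const Region *a, const Region *b, Region *out) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out->m[y][x] = a->m[y][x] & (uint8_t)!b->m[y][x];
}

static int area(const Region *r) {            /* pixels inside the region */
    int n = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            n += r->m[y][x];
    return n;
}

int main(void) {
    Region a, b, u, i, d;
    rect_region(&a, 0, 0, 16, 16);            /* one window frame          */
    rect_region(&b, 8, 8, 24, 20);            /* an overlapping window     */
    union_region(&a, &b, &u);                 /* unite the two regions     */
    intersect_region(&a, &b, &i);             /* their overlap             */
    subtract_region(&a, &b, &d);              /* visible part of window a  */
    printf("areas: union %d, intersection %d, difference %d\n",
           area(&u), area(&i), area(&d));
    return 0;
}
```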

In Macintosh, the Quickdraw program was to be tightly written in assembly-level code and etched permanently in ROM. It would serve as a foundation for higher-level software to make use of graphics.

Quickdraw was “an amazing graphics package,” Mr. Hertzfeld noted, but it would have strained the capabilities of the 6809 microprocessor, the heart of the early Macintosh prototype. Motorola Corp. announced in late 1980 that the 68000 microprocessor was available, but that chip was new and unproven in the field, and at $200 apiece it was also expensive. Reasoning that the price of the chip would come down before Apple was ready to start mass-producing the Macintosh, the Macintosh designers decided to gamble on the Motorola chip.

Another early design question for the Macintosh was whether to use the Lisa operating system. Since the Lisa was still in the early stages of design, considerable development would have been required to tailor its operating system for the Macintosh. Even if the Lisa had been completed, rewriting its software in assembly code would have been required for the far smaller memory of the Macintosh. In addition, the Lisa was to have a multitasking operating system, using complex circuitry and software to run more than one computer program at the same time, which would have been too expensive for the Macintosh. Thus the decision was made to write a Macintosh operating system from scratch, working from the basic concepts of the Lisa. Simplifying the Macintosh operating system posed the delicate problem of restricting the computer’s memory capacity enough to keep it inexpensive but not so much as to make it inflexible.

The Macintosh would have no multitasking capability but would execute only one applications program at a time. Generally, a multitasking operating system tracks the progress of each of the programs it is running and then stores the entire state of each program—the values of its variables, the location of the program counter, and so on. This complex operation requires more memory and hardware than the Macintosh designers could afford. However, the illusion of multitasking was created by small programs built into the Macintosh system software. Since these small programs—such as one that creates the images of a calculator on the screen and does simple arithmetic—operate in areas of memory separate from applications, they can run simultaneously with applications programs.

Embedding Macintosh software in 64 kilobytes of read-only memory increased the reliability of the computer and simplified the hardware [A]. About one third of the ROM software is the operating system. One third is taken up by Quickdraw, a program for representing shapes and images for the bit-mapped display. The remaining third is devoted to the user-interface toolbox, which handles the display of windows, text editing, menus, and the like. The user interface of the Macintosh includes pull-down menus, which appear only when the cursor is placed over the menu name and a button on the mouse is pressed. Above, a user examining the ‘file’ menu selects the open command, which causes the computer to load the file (indicated by darkened icon) from disk into internal memory. The Macintosh software was designed to make the toolbox routines optional for programmers; the applications program offers the choice of whether or not to handle an event [B].
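The event-handling convention mentioned in the caption, in which the application decides whether to handle an event itself or hand it back to the ROM toolbox, can be sketched in C as below. The event types, the scripted event source, and the stub routines are all hypothetical, chosen only to make the sketch self-contained; they are not the original Toolbox interface.

```c
#include <stdio.h>

/* Hypothetical event types; not the original Toolbox interface. */
typedef enum { EVT_NONE, EVT_MOUSE_DOWN, EVT_KEY_DOWN, EVT_QUIT } EventKind;
typedef struct { EventKind kind; int x, y; char key; } Event;

/* Stub event source: in the sketch, a fixed script of events. */
static int get_next_event(Event *e) {
    static const Event script[] = {
        { EVT_MOUSE_DOWN, 40, 5, 0 },    /* a click in the menu bar   */
        { EVT_KEY_DOWN,    0, 0, 'a' },  /* a keystroke for the app   */
        { EVT_QUIT,        0, 0, 0 },
    };
    static unsigned i = 0;
    if (i >= sizeof script / sizeof script[0]) return 0;
    *e = script[i++];
    return 1;
}

/* Default behavior the ROM toolbox would supply (menus, windows, ...). */
static void toolbox_handle_event(const Event *e) {
    printf("toolbox handles event kind %d (e.g. tracks a menu)\n", e->kind);
}

/* The application decides which events it wants for itself. */
static int app_wants_event(const Event *e) { return e->kind == EVT_KEY_DOWN; }
static void app_handle_event(const Event *e) {
    printf("application handles key '%c'\n", e->key);
}

int main(void) {
    Event e;
    while (get_next_event(&e) && e.kind != EVT_QUIT) {
        if (app_wants_event(&e)) app_handle_event(&e);
        else                     toolbox_handle_event(&e);
    }
    return 0;
}
```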

Since the Macintosh used a memory-mapped scheme, the 68000 microprocessor required no memory management, simplifying both the hardware and the software. For example, the 68000 has two modes of operation: a user mode, which is restricted so that a programmer cannot inadvertently upset the memory-management scheme; and a supervisor mode, which allows unrestricted access to all of the 68000’s commands. Each mode normally uses its own stack pointer, and thus its own stack in memory. The 68000 was rigged to run only in the supervisor mode, eliminating the need for the additional stack. Although seven levels of interrupts were available for the 68000, only three were used.

Another simplification was made in the Macintosh’s file structure, which took advantage of the small disk capacity of a machine with only one or two floppy-disk drives. In the Lisa and most other operating systems, two indexes are used to access a program on a floppy disk, using up precious random-access memory and increasing the delay in fetching programs from a disk. The designers decided to use only one index for the Macintosh—a block map, located in RAM, to indicate the location of a program on a disk. Each block map represented one volume of disk space.
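The single-index idea can be pictured with a short C sketch. The sizes and field names below are assumptions made for illustration (a 400-kilobyte floppy divided into 1-kilobyte allocation blocks), not the actual Macintosh file-system layout.

```c
/* Sketch of a one-index volume: a block map held in RAM that records
 * which file (if any) owns each allocation block on the disk. */
#include <stdint.h>
#include <stdio.h>

#define BLOCKS_PER_VOLUME 400           /* assumed: 400 KB disk, 1 KB blocks */
#define FREE_BLOCK        0             /* map entry 0 means "unallocated"   */

typedef struct {
    char     name[28];                          /* volume name                */
    uint16_t block_map[BLOCKS_PER_VOLUME];      /* entry i: ID of the file    */
                                                /* owning block i, or FREE    */
} Volume;

/* Count how many blocks a given file occupies by scanning the map. */
static int file_block_count(const Volume *v, uint16_t file_id) {
    int n = 0;
    for (int i = 0; i < BLOCKS_PER_VOLUME; i++)
        if (v->block_map[i] == file_id)
            n++;
    return n;
}

int main(void) {
    Volume v = { "Sketch Disk", { 0 } };
    v.block_map[10] = v.block_map[11] = v.block_map[12] = 7;  /* file 7 */
    printf("file 7 occupies %d blocks\n", file_block_count(&v, 7));
    return 0;
}
```

A linear scan like this is cheap when a volume is a single floppy; it also suggests why, as noted below, a much larger hard disk makes the scheme unwieldy.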

This scheme ran into unexpected difficulties and may be modified in future versions of the Macintosh, Mr. Hertzfeld said. Initially, the Macintosh was not intended for business users, but as the design progressed and it became apparent that the Macintosh would cost more than expected, Apple shifted its marketing plan to target business users. Many of them add hard disk drives to the Macintosh, making the block-map scheme unwieldy.

In January 1982, Mr. Hertzfeld began working on what was to become perhaps the computer’s most distinctive feature, the software he called the user-interface toolbox.

The toolbox was envisioned as a set of software routines for constructing the windows, pull-down menus, scroll bars, icons, and other graphic objects in the Macintosh operating system. Since RAM space would be scarce on the Macintosh (it initially was to have only 64 kilobytes), the toolbox routines were to be a part of the Macintosh’s operating software; they would use the Quickdraw routines and operate in ROM.

It was important, however, not to handicap applications programmers—who could boost sales of the Macintosh by writing programs for it—by restricting them to only a few toolbox routines in ROM. So the toolbox code was designed to fetch definition functions—routines that use Quickdraw to create a graphic image such as a window—from either the systems disk or an applications disk. In this way, an applications programmer could add definition functions for a program, which Apple could incorporate in later versions of the Macintosh by modifying the system disk. “We were nervous about putting [the toolbox] in ROM,” recalled Mr. Hertzfeld. “We knew that after the Macintosh was out, programmers would want to add to the toolbox routines.”
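In effect, a definition function is drawing code reached through a pointer that can be filled in either from ROM or from a routine loaded off a disk. The C sketch below illustrates the idea with hypothetical names; it is not the real toolbox interface.

```c
/* Sketch: the toolbox draws a window by calling through a function
 * pointer, so the routine can come from ROM or from a disk resource. */
#include <stdio.h>

typedef struct { int left, top, right, bottom; } Rect;
typedef void (*WindowDefProc)(const Rect *frame);

/* The definition function built into ROM: the standard rounded-corner
 * document window. */
static void rom_window_def(const Rect *f) {
    printf("ROM window def: frame (%d,%d)-(%d,%d)\n",
           f->left, f->top, f->right, f->bottom);
}

/* A definition function an application might supply on disk later,
 * without any change to the ROM. */
static void custom_window_def(const Rect *f) {
    (void)f;
    printf("custom window def: drawing a dialog-style frame\n");
}

/* The toolbox routine only knows the pointer it was handed. */
static void draw_window(WindowDefProc def, const Rect *frame) {
    def(frame);
}

int main(void) {
    Rect r = { 10, 10, 200, 150 };
    draw_window(rom_window_def, &r);     /* default behavior from ROM      */
    draw_window(custom_window_def, &r);  /* "loaded from disk" in spirit   */
    return 0;
}
```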

Although the user could operate only one applications program at a time, he could transfer text or graphics from one applications program to another with a toolbox routine called scrapbook. Since the scrapbook and the rest of the toolbox routines were located in ROM, they could run along with applications programs, giving the illusion of multitasking. The user would cut text from one program into the scrapbook, close the program, open another, and paste the text from the scrapbook. Other routines in the toolbox, such as the calculator, could also operate simultaneously with applications programs.

Late in the design of the Macintosh software, the designers realized that, to market the Macintosh in non-English-speaking countries, an easy way of translating the text in programs into foreign languages was needed. Computer code and data were therefore separated in the software, so that a translator could work simply by scanning the data portion of a program instead of unraveling a complex computer program; no programmer would be needed for translation.
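In practice, separating code from data for translation means the program refers to its text only indirectly, so a translator can replace the data without touching the code. A minimal C sketch of the idea, with invented string tables, is shown below.

```c
/* Sketch: the program uses string indexes; the strings live in a
 * separate table a translator can replace without touching the code. */
#include <stdio.h>

enum { STR_OPEN, STR_SAVE, STR_QUIT, STR_COUNT };

/* English table shipped on one system disk...                        */
static const char *strings_en[STR_COUNT] = { "Open", "Save", "Quit" };
/* ...and a French table that could be swapped in without a programmer. */
static const char *strings_fr[STR_COUNT] = { "Ouvrir", "Enregistrer", "Quitter" };

static const char **strings = strings_en;    /* chosen at startup */

static void draw_menu(void) {
    for (int i = 0; i < STR_COUNT; i++)
        printf("  %s\n", strings[i]);
}

int main(void) {
    printf("English menu:\n");
    draw_menu();
    strings = strings_fr;                    /* translation = new data only */
    printf("French menu:\n");
    draw_menu();
    return 0;
}
```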

Placing an Early Bet on the 68000 Chip

The 68000, with a 16-bit data bus, 32-bit internal registers, and a 7.83-megahertz clock, could grab data in relatively large chunks. Mr. Smith dispensed with separate controllers for the mouse, the disk drives, and other peripheral functions. “We were able to leverage off slave devices,” Mr. Smith explained, “and we had enough throughput to deal with those devices in a way that appeared concurrent to the user.”

When Mr. Smith suggested implementing the mouse without a separate controller, several members of the design team argued that if the main microprocessor was interrupted each time the mouse was moved, the movement of the cursor on the screen would always lag. Only when Mr. Smith got the prototype up and running were they convinced it would work.

Likewise, in the second prototype, the disk drives were controlled by the main microprocessor. “In other computers,” Mr. Smith noted, “the disk controller is a brick wall between the disk and the CPU, and you end up with a poor-performance, expensive disk that you can lose control of. It’s like buying a brand-new car complete with a chauffeur who insists on driving everywhere.”

The 68000 was assigned many duties of the disk controller and was linked with a disk-controller circuit built by Mr. Wozniak for the Apple II. “Instead of a wimpy little 8-bit microprocessor out there, we have this incredible 68000—it’s the world’s best disk controller,” Mr. Smith said.

Direct-memory-access circuitry was designed to allow the video screen to share RAM with the 68000. Thus the 68000 would have access to RAM at half speed during the live portion of the horizontal line of the video screen and at full speed during the horizontal and vertical retrace. [See diagram, below.]

The 68000 microprocessor, which has exclusive access to the read-only memory of the Macintosh, fetches commands from ROM at full speed—7.83 megahertz. The 68000 shares the random-access memory with the video and sound circuitry, having access to RAM only part of the time [A]; it fetches instructions from RAM at an average speed of about 6 megahertz. The video and sound instructions are loaded directly into the video-shift register or the sound counter, respectively. Much of the “glue” circuitry of the Macintosh is contained in eight programmable-array-logic chips. The Macintosh’s ability to play four independent voices was added relatively late in the design, when it was realized that most of the circuitry needed already existed in the video circuitry [B]. The four voices are added in software and the digital samples stored in memory. During the video retrace, sound data is fed into the sound buffer.
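As a rough consistency check on the figures in the caption, assume the processor gets half the memory cycles during the live portion of the scan and all of them during retrace; the live-time fraction a below is inferred from the quoted numbers, not documented.

```latex
% Effective RAM access rate, with a = fraction of time spent in the
% live scan (an inferred quantity, not a documented one):
\[
  f_{\mathrm{eff}} \;=\; a \cdot \frac{7.83}{2} \;+\; (1-a)\cdot 7.83
  \quad\text{MHz},
\]
\[
  f_{\mathrm{eff}} \approx 6\ \text{MHz} \;\Longrightarrow\;
  a \;=\; \frac{2\,(7.83 - 6)}{7.83} \;\approx\; 0.47,
\]
% so the quoted 6-MHz average corresponds to the display being "live"
% a little under half the time.
```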

While building the next prototype, Mr. Smith saw several ways to save on digital circuitry and increase the execution speed of the Macintosh. The 68000 instruction set allowed Mr. Smith to embed subroutines in ROM. Since the 68000 has exclusive use of the address and data buses of the ROM, it has access to the ROM routines at up to the full clock speed. The ROM serves somewhat as a high-speed cache memory.

The next major revision in the original concept of the Macintosh was made in the computer’s display. Mr. Raskin had proposed a computer that could be hooked up to a standard television set. However, it became clear early on that the resolution of television display was too coarse for the Macintosh. After a bit of research, the designers found they could increase the display resolution from 256 by 256 dots to 384 by 256 dots by including a display with the computer. This added to the estimated price of the Macintosh, but the designers considered it a reasonable tradeoff.

To keep the parts count low, the two input/output ports of the Macintosh were to be serial. This was a weighty decision, since the future usefulness of the computer depended largely on its efficiency when hooked up to printers, local-area networks, and other peripherals. In the early stages of development, the Macintosh was not intended to be a business product, which would have made networking a high priority.

“We had an image problem. We wore T-shirts and blue jeans with holes in the knees, and we had a maniacal conviction that we were right about the Macintosh, and that put some people off.”
—Chris Espinosa

The key factor in the decision to use one high-speed serial port was the introduction in the spring of 1981 of the Zilog Corp.’s 8530 serial-communications controller, a single chip to replace two less expensive conventional parts—“vanilla” chips—in the Macintosh. The risks in using the Zilog chip were that it had not been proven in the field and it was expensive, almost $9 apiece. In addition, Apple had a hard time convincing Zilog that it seriously intended to order the part in high volumes for the Macintosh.

“We had an image problem,” explained Mr. Espinosa. “We wore T-shirts and blue jeans with holes in the knees, and we had a maniacal conviction that we were right about the Macintosh, and that put some people off. Also, Apple hadn’t yet sold a million Apple IIs. How were we to convince them that we would sell a million Macs?”

In the end, Apple got a commitment from Zilog to supply the part, which Mr. Espinosa attributes to the negotiating talents of Mr. Jobs. The serial input/output ports “gave us essentially the same bandwidth that a memory-mapped parallel port would,” Mr. Smith said. Peripherals were connected to serial ports in a daisy-chain configuration with the Apple bus network.

Designing the Mac’s Factory Without the Product

In the fall of 1981, as Mr. Smith worked on the fourth Macintosh prototype, the design for the Macintosh factory was getting under way. Mr. Jobs hired Debi Coleman, who was then working as financial manager at Hewlett-Packard Co. in Cupertino, Calif., to handle the finances of the Macintosh project. A graduate of Stanford University with a master’s degree in business administration, Ms. Coleman was a member of a task force at HP that was studying factories, quality management, and inventory management. This was good training for Apple, for Mr. Jobs was intent on using such concepts to build a highly automated manufacturing plant for the Macintosh in the United States.

Briefly he considered building the plant in Texas, but since the designers were to work closely with the manufacturing team in the later stages of the Macintosh design, he decided to locate the plant at Fremont, Calif., less than a half-hour’s drive from Apple’s Cupertino headquarters.

Mr. Jobs and other members of the Macintosh team made frequent tours of automated plants in various industries, particularly in Japan. At long meetings held after the visits, the manufacturing group discussed whether to borrow certain methods they had observed.

The Macintosh factory borrowed assembly ideas from other computer plants and other industries. A method of testing the brightness of cathode-ray tubes was borrowed from television manufacturers.

The Macintosh factory design was based on two major concepts. The first was “just-in-time” inventory, calling for vendors to deliver parts for the Macintosh frequently, in small lots, to avoid excessive handling of components at the factory and reduce damage and storage costs. The second concept was zero-defect parts, with any defect on the manufacturing line immediately traced to its source and rectified to prevent recurrence of the error.

The factory, which was to churn out about a half million Macintosh computers a year (the number kept increasing), was designed to be built in three stages: first, equipped with stations for workers to insert some Macintosh components, delivered to them by simple robots; second, with robots to insert components instead of workers; and third, many years in the future, with “integrated” automation, requiring virtually no human operators. In building the factory, “Steve was willing to chuck all the traditional ideas about manufacturing and the relationship between design and manufacturing,” Ms. Coleman noted. “He was willing to spend whatever it cost to experiment in this factory. We planned to have a major revision every two years.”

By late 1982, before Mr. Smith had designed the final Macintosh prototype, the designs of most of the factory’s major subassemblies were frozen, and the assembly stations could be designed. About 85 percent of the components on the digital-logic printed-circuit board were to be inserted automatically, and the remaining 15 percent were to be surface-mounted devices inserted manually at first and by robots in the second stage of the factory. The production lines for automatic insertion were laid out to be flexible; the number of stations was not defined until trial runs were made. The materials-delivery system, designed with the help of engineers recruited from Texas Instruments in Dallas, Texas, divided small and large parts between receiving doors at the materials distribution center. The finished Macintoshes coming down the conveyor belt were to be wrapped in plastic and stuffed into boxes using equipment adapted from machines used in the wine industry for packaging bottles.

Most of the discrete components in the Macintosh are inserted automatically into the printed-circuit boards.

As factory construction progressed, pressure built on the Macintosh design team to deliver a final prototype. The designers had been working long hours but with no deadline set for the computer’s introduction. That changed in the middle of 1981, after Mr. Jobs imposed a tough and sometimes unrealistic schedule, reminding the team repeatedly that “real artists ship” a finished product. In late 1981, when IBM announced its personal computer, the Macintosh marketing staff began to refer to a “window of opportunity” that made it urgent to get the Macintosh to customers.

“We had been saying, ‘We’re going to finish in six months’ for two years,” Mr. Hertzfeld recalled.

The new urgency led to a series of design problems that seemed to threaten the Macintosh dream.

The Mac Team Faces Impossible Deadlines

The computer’s circuit density was one bottleneck. Mr. Smith had trouble paring enough circuitry off his first two prototypes to squeeze them onto one logic board. In addition, he needed faster circuitry for the Macintosh display. The horizontal resolution was only 384 dots—not enough room for the 80 characters of text needed for the Macintosh to compete as a word processor. One suggested solution was to use the word-processing software to allow an 80-character line to be seen by horizontal scrolling. However, most standard computer displays were capable of holding 80 characters, and the portable computers with less capability were very inconvenient to use.

Another problem with the Macintosh display was its limited dot density. Although the analog circuitry, which was being designed by Apple engineer George Crow, accommodated 512 dots on the horizontal axis, Mr. Smith’s digital circuitry—which consisted of bipolar logic arrays—did not operate fast enough to generate the dots. Faster bipolar circuitry was considered but rejected because of its high power dissipation and its cost. Mr. Smith could think of but one alternative: combine the video and other miscellaneous circuitry on a single custom n-channel MOS chip.

Mr. Smith began designing such a chip in February 1982. During the next six months the size of the hypothetical chip kept growing. Mr. Jobs set a shipping target of May 1983 for the Macintosh but, with a backlog of other design problems, Burrell Smith still had not finished designing the custom chip, which was named after him: the IBM (Integrated Burrell Machine) chip.

Defining terms

Backplane

An electrical connection common to two or more printed-circuit boards.

Bit-mapped graphics

A method of representing data in a computer for display in which each dot on the screen is mapped to a unit of data in memory.

Buffers

Computer memory for holding data temporarily between processes.

Direct-memory access

A mechanism in a computer that bypasses the central processing unit to gain access to memory. It is often used when large blocks of data are transferred between memory and a peripheral device.

Icons

Small graphic images on a computer screen that represent functions or programs; for example, a wastebasket designates a delete operation.

Memory management

A mechanism in a computer for allocating internal memory among different programs, especially in multitasking systems.

Mouse

A box the size of a cigarette pack used to move a cursor on a computer screen. The movement of the cursor matches the movement of the mouse. The mouse also may have one or more buttons for selecting commands on a menu.

Multitasking

The simultaneous execution of two or more applications programs in a computer (also known as concurrency).

Operating system

A computer program that performs basic operations, such as governing the allocation of memory, accepting interrupts from peripherals, and opening and closing files.

Programmable-array logic

An array of logic elements that are mass-produced without interconnections and that are interconnected at the specification of the user at the time of purchase.

Subroutines

A section of computer code that is represented symbolically in a program and can be invoked from other points in the program.

Window

A rectangular image on a computer screen within which the user writes and reads data, representing a program in the computer.

Meanwhile, the Macintosh offices were moved from Texaco Towers to more spacious quarters at the Apple headquarters, since the Macintosh staff had swelled to about 40. One of the new employees was Robert Belleville, whose previous employer was the Xerox Palo Alto Research Center. At Xerox he had designed the hardware for the Star workstation—which, with its windows, icons, and mouse, might be considered an early prototype of the Macintosh. When Mr. Jobs offered him a spot on the Macintosh team, Mr. Belleville was impatiently waiting for authorization from Xerox to proceed on a project he had proposed that was similar to the Macintosh—a low-cost version of the Star.

As the new head of Macintosh engineering, Mr. Belleville faced the task of directing Mr. Smith, who was proceeding on what looked more and more like a dead-end course. Despite the looming deadlines, Mr. Belleville tried a soft-sell approach.

“I asked Burrell if he really needed the custom chip,” Mr. Belleville recalled. “He said yes. I told him to think about trying something else.”

After thinking about the problem for three months, Mr. Smith concluded in July 1982 that “the difference in size between this chip and the state of Rhode Island is not very great.” He then set out to design the circuitry with higher-speed programmable-array logic—as he had started to do six months earlier. He had assumed that higher resolution in the horizontal video required a faster clock speed. But he realized that he could achieve the same effect with clever use of faster bipolar-logic chips that had become available only a few months earlier. By adding several high-speed logic circuits and a few ordinary circuits, he pushed the resolution up to 512 dots.

Another advantage was that the PALs were a mature technology and their electrical parameters could tolerate large variations from the specified values, making the Macintosh more stable and more reliable—important characteristics for a so-called appliance product. Since the electrical characteristics of each integrated circuit may vary from those of other ICs made in different batches, the sum of the variances of 50 or so components in a computer may be large enough to threaten the system’s integrity.

“It became an intense and almost religious argument about the purity of the system’s design versus the user’s freedom to configure the system as he liked. We had weeks of argument over whether to add a few pennies to the cost of the machine.”
—Chris Espinosa

Even as late as the summer of 1982, with one deadline after another blown, the Macintosh designers were finding ways of adding features to the computer. After the team disagreed over the choice of a white background for the video with black characters or the more typical white-on-black, it was suggested that both options be made available to the user through a switch on the back of the Macintosh. But this compromise led to debates about other questions.

“It became an intense and almost religious argument,” recalled Mr. Espinosa, “about the purity of the system’s design versus the user’s freedom to configure the system as he liked. We had weeks of argument over whether to add a few pennies to the cost of the machine.”

The designers, being committed to the Macintosh, often worked long hours to refine the system. A programmer might spend many night hours to reduce the time needed to format a disk from three minutes to one. The reasoning was that expenditure of a Macintosh programmer’s time amounted to little in comparison with a reduction of two minutes in the formatting time. “If you take two extra minutes per user, times a million people, times 50 disks to format, that’s a lot of the world’s time,” Mr. Espinosa explained.
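Taking Mr. Espinosa’s figures at face value, the arithmetic works out roughly as follows.

```latex
\[
  2\ \text{min} \times 50\ \text{disks} \times 10^{6}\ \text{users}
  \;=\; 10^{8}\ \text{min}
  \;\approx\; 1.7 \times 10^{6}\ \text{hours}
  \;\approx\; 190\ \text{years of cumulative user time.}
\]
```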

But if the group’s commitment to refinements often kept them from meeting deadlines, it paid off in tangible design improvements. “There was a lot of competition for doing something very bright and creative and amazing,” said Mr. Espinosa. “People were so bright that it became a contest to astonish them.”

The Macintosh team’s approach to working—“like a Chautauqua, with daylong affairs where people would sit and talk about how they were going to do this or that”—sparked creative thinking about the Macintosh’s capabilities. When a programmer and a hardware designer started to discuss how to implement the sound generator, for instance, they were joined by one of several nontechnical members of the team—marketing staff, finance specialists, secretaries—who remarked how much fun it would be if the Macintosh could sound four distinct voices at once so the user could program it to play music. That possibility excited the programmer and the hardware engineer enough to spend extra hours in designing a sound generator with four voices.

The payoff of such discussions with nontechnical team members, Mr. Espinosa said, “was coming up with all those glaringly evident things that only somebody completely ignorant could come up with. If you immerse yourself in a group that doesn’t know the technical limitations, then you get a group mania to try and deny those limitations. You start trying to do the impossible—and once in a while succeeding.”

Nobody had even considered designing a four-voice [sound] generator—that is, not until “group mania” set in.

The sound generator in the original Macintosh was quite simple—a one-bit register connected to a speaker. To vibrate the speaker, the programmer wrote a software loop that changed the value of the register from one to zero repeatedly. Nobody had even considered designing a four-voice generator—that is, not until “group mania” set in.
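The one-bit scheme can be summarized in a few lines of C. The register and the delay routine below are stand-ins for the memory-mapped hardware and a calibrated busy-wait, so the sketch is illustrative rather than real Macintosh code.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the memory-mapped 1-bit sound register and a timed
 * delay; on real hardware these would be a fixed address and a
 * calibrated busy-wait. */
static volatile uint8_t sound_reg;
static void wait_microseconds(unsigned us) { (void)us; /* busy-wait here */ }

/* Toggle the register in a timed loop: a square wave whose pitch is set
 * by the loop delay, exactly the one-bit scheme described above. */
static void play_square_wave(unsigned freq_hz, unsigned cycles) {
    unsigned half_period_us = 500000u / freq_hz;   /* half of 1/f, in us */
    for (unsigned i = 0; i < cycles; i++) {
        sound_reg = 1;  wait_microseconds(half_period_us);
        sound_reg = 0;  wait_microseconds(half_period_us);
    }
}

int main(void) {
    play_square_wave(440, 100);   /* 100 cycles of an A-440 tone */
    printf("done (register toggled %u times)\n", 2u * 100u);
    return 0;
}
```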

Mr. Smith was pondering this problem when he noticed that the video circuitry was very similar to the sound-generator circuitry. Since the video was bit-mapped, a bit of memory represented one dot on the video screen. The bits that made up a complete video image were held in a block of RAM and fetched by a scanning circuit to generate the image. Sound circuitry required similar scanning, with data in memory corresponding to the amplitude and frequency of the sound emanating from the speaker. Mr. Smith reasoned that by adding a pulse-width-modulator circuit, the video circuitry could be used to generate sound during the last microsecond of the horizontal retrace—the time it took the electron beam in the cathode-ray tube of the display to move from the last dot on each line to the first dot of the next line. During the retrace the video-scanning circuitry jumped to a block of memory earmarked for the amplitude value of the sound wave, fetched bytes, deposited them in a buffer that fed the sound generator, and then jumped back to the video memory in time for the next trace. The sound generator was simply a digital-to-analog converter connected to a linear amplifier.

To enable the sound generator to produce four distinct voices, software routines were written and embedded in ROM to accept values representing four separate sound waves and convert them into one complex wave. Thus a programmer writing applications programs for the Macintosh could specify separately each voice without being concerned about the nature of the complex wave.
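A simplified version of that mixing step is sketched below in C: four sample streams are summed and scaled into the single buffer the hardware plays back. The buffer length and the 8-bit sample format are assumptions made for the sketch, not a description of the ROM routines.

```c
#include <stdint.h>
#include <stdio.h>

#define VOICES      4
#define BUFFER_LEN  370     /* assumed: one sample per scan line */

/* Combine four voices into one complex wave; dividing by the number of
 * voices keeps the sum inside the 8-bit range. */
static void mix_voices(const uint8_t voice[VOICES][BUFFER_LEN],
                       uint8_t out[BUFFER_LEN]) {
    for (int i = 0; i < BUFFER_LEN; i++) {
        unsigned sum = 0;
        for (int v = 0; v < VOICES; v++)
            sum += voice[v][i];
        out[i] = (uint8_t)(sum / VOICES);
    }
}

int main(void) {
    static uint8_t voices[VOICES][BUFFER_LEN];
    static uint8_t mixed[BUFFER_LEN];
    /* Fill each voice with a crude square wave of a different period. */
    for (int v = 0; v < VOICES; v++)
        for (int i = 0; i < BUFFER_LEN; i++)
            voices[v][i] = ((i / (8 + 4 * v)) % 2) ? 255 : 0;
    mix_voices(voices, mixed);
    printf("first mixed samples: %u %u %u %u\n",
           (unsigned)mixed[0], (unsigned)mixed[1],
           (unsigned)mixed[2], (unsigned)mixed[3]);
    return 0;
}
```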

Gearing up to Build Macs

In the fall of 1982, as the factory was being built and the design of the Macintosh was approaching its final form, Mr. Jobs began to play a greater role in the day-to-day activities of the designers. Although the hardware for the sound generator had been designed, the software to enable the computer to make sounds had not yet been written by Mr. Hertzfeld, who considered other parts of the Macintosh software more urgent. Mr. Jobs had been told that the sound generator would be impressive, with the analog circuitry and the speaker having been upgraded to accommodate four voices. But since this was an additional hardware expense, with no audible results at that point, one Friday Mr. Jobs issued an ultimatum: “If I don’t hear sound out of this thing by Monday morning, we’re ripping out the amplifier.”

That motivation sent Mr. Hertzfeld to the office during the weekend to write the software. By Sunday afternoon only three voices were working. He telephoned his colleague Mr. Smith and asked him to stop by and help optimize the software.

“Do you mean to tell me you’re using subroutines!” Mr. Smith exclaimed after examining the problem. “No wonder you can’t get four voices. Subroutines are much too slow.”

By Monday morning, the pair had written machine-level routines that produced results satisfying Mr. Jobs.

Although Mr. Jobs’s input was sometimes hard to define, his instinct for defining the Macintosh as a product was important to its success, according to the designers. “He would say, ‘This isn’t what I want. I don’t know what I want, but this isn’t it,’” Mr. Smith said.

“He knows what great products are,” noted Mr. Hertzfeld. “He intuitively knows what people want.”

One example was the design of the Macintosh casing, when clay models were made to demonstrate various possibilities. “I could hardly tell the difference between two models,” Mr. Hertzfeld said. “Steve would walk in and say, ‘This one stinks and this one is great.’ And he was usually right.”

Because Mr. Jobs placed great emphasis on packaging the Macintosh to occupy little space on a desk, a vertical design was used, with the disk drive placed underneath the CRT.

Mr. Jobs also decreed that the Macintosh contain no fans, which he had tried to eliminate from the original Apple computer. A vent was added to the Macintosh casing to allow cool air to enter and absorb heat from the vertical power supply, with hot air escaping at the top. The logic board was horizontally positioned.

[Steve] Jobs at times gave unworkable orders. When he demanded that the designers reposition the RAM chips on an early printed-circuit board because they were too close together, “most people chortled.”

Mr. Jobs, however, at times gave unworkable orders. When he demanded that the designers reposition the RAM chips on an early printed-circuit board because they were too close together, “most people chortled,” one designer said. The board was redesigned with the chips farther apart, but it did not work because the signals from the chips took too long to propagate over the increased distance. The board was redesigned again to move the chips back to their original position.

Stopping the Radiation Leaks

When the design group started to concentrate on manufacturing, the most imposing task was preventing radiation from leaking from the Macintosh’s plastic casing. At one time the fate of the Apple II had hung in the balance as its designers tried unsuccessfully to meet the emissions standards of the Federal Communications Commission. “I quickly saw the number of Apple II components double when several inductors and about 50 capacitors were added to the printed-circuit boards,” Mr. Smith recalled. With the Macintosh, however, he continued, “we eliminated all of the discrete electronics by going to a connector-less and solder-less design; we had had our noses rubbed in the FCC regulations, and we knew how important that was.” The high-speed serial I/O ports caused little interference because they were easy to shield.

Another question that arose toward the end of the design was the means of testing the Macintosh. In line with the zero-defect concept, the Macintosh team devised software for factory workers to use in debugging faults in the printed-circuit boards, as well as self-testing routines for the Macintosh itself.

The disk controller is tested with the video circuits. Video signals sent into the disk controller are read by the microprocessor. “We can display on the screen the pattern we were supposed to receive and the pattern we did receive when reading off the disk,” Mr. Smith explained, “and other kinds of prepared information about errors and where they occurred on the disk.”

To test the printed-circuit boards in the factory, the Macintosh engineers designed software for a custom bed-of-nails tester that checks each computer in only a few seconds, faster than off-the-shelf testers. If a board fails when a factory worker places it on the tester, the board is handed to another worker who runs a diagnostic test on it. A third worker repairs the board and returns it to the production line.

Each Macintosh is burned in—that is, turned on and heated—to detect the potential for early failures before shipping, thus increasing the reliability of the computers that are in fact shipped.

When Apple completed building the Macintosh factory, at an investment of $20 million, the design team spent most of its time there, helping the manufacturing engineers get the production lines moving. Problems with the disk drives in the middle of 1983 required Mr. Smith to redesign his final prototype twice.

Some of the plans for the factory proved troublesome, according to Ms. Coleman. The automatic insertion scheme for discrete components was unexpectedly difficult to implement. Many of the precise specifications for the geometric and electrical properties of the parts had to be reworked several times. It turned out that machines were needed to align many of the parts before they were inserted. Although the machines, at $2000 apiece, were not expensive, they were a last-minute requirement.

The factory had few major difficulties with its first experimental run in December 1983, although the project had slipped from its May 1983 deadline. Often the factory would stop completely while engineers busily traced the faults to the sources—part of the zero-defect approach. Mr. Smith and the other design engineers virtually lived in the factory that December.

In January 1984 the first salable Macintosh computer rolled off the line. Although the production rate was erratic at first, it has since settled at one Macintosh every 27 seconds—about a half million a year.
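The quoted rate and annual volume are consistent if the line runs roughly two eight-hour shifts a day; the shift pattern below is an assumption, not something stated in the article.

```latex
\[
  500{,}000 \times 27\ \text{s} \;=\; 1.35\times 10^{7}\ \text{s}
  \;=\; 3{,}750\ \text{hours}
  \;\approx\; 16\ \tfrac{\text{h}}{\text{day}} \times 235\ \text{days}.
\]
```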

An Unheard-of $30 Million Marketing Budget

The marketing of the Macintosh shaped up much like the marketing of a new shampoo or soft drink, according to Mike Murray, who was hired in 1982 as the third member of the Macintosh marketing staff. “If Pepsi has two times more shelf space than Coke,” he explained, “you will sell more Pepsi. We want to create shelf space in your mind for the Macintosh.’’

To create that space on a shelf already crowded by IBM, Tandy, and other computer companies, Apple launched an aggressive advertising campaign—its most expensive ever.

Mr. Murray proposed the first formal marketing budget for the Macintosh in late 1983: he asked for $40 million. “People literally laughed at me,” he recalled. “They said, ‘What kind of a yo-yo is this guy?’” He didn’t get his $40 million budget, but he got close to it—$30 million.

“We’ve established a beachhead with the Macintosh. If IBM knew in their heart of hearts how aggressive and driven we are, they would push us off the beach right now.”
—Mike Murray

The marketing campaign started before the Macintosh was introduced. Television viewers watching the Super Bowl football game in January 1984 saw a commercial with the Macintosh overcoming Orwell’s nightmare vision of 1984.

Other television advertisements, as well as magazine and billboard ads, depicted the Macintosh as being easy to learn to use. In some ads, the Mac was positioned directly alongside IBM’s personal computer. Elaborate color foldouts in major magazines pictured the Macintosh and members of the design team.

“The interesting thing about this business,” mused Mr. Murray, “is that there is no history. The best way is to come in really smart, really understand the fundamentals of the technology and how the software dealers work, and then run as fast as you can.”

The Mac Team Disperses

“We’ve established a beachhead with the Macintosh,” explained Mr. Murray. “We’re on the beach. If IBM knew in their heart of hearts how aggressive and driven we are, they would push us off the beach right now, and I think they’re trying. The next 18 to 24 months is do-or-die time for us.”

With sales of the Lisa workstation disappointing, Apple is counting on the Macintosh to survive. The ability to bring out a successful family of products is seen as a key to that goal, and the company is working on a series of Macintosh peripherals—printers, local-area networks, and the like. This, too, is proving both a technical and organizational challenge.

“Once you go from a stand-alone system to a networked one, the complexity increases enormously,” noted Mr. Murray. “We cannot throw it all out into the market and let people tell us what is wrong with it. We have to walk before we can run.”

Only two software programs were written by Apple for the Macintosh—Macpaint, which allows users to draw pictures with the mouse, and Macwrite, a word-processing program. Apple is counting on independent software vendors to write and market applications programs for the Macintosh that will make it a more attractive product for potential customers. The company is also modifying some Lisa software for use on Macintosh and making versions of the Macintosh software to run on the Lisa.

Meanwhile the small, coherent Macintosh design team is no longer. “Nowadays we’re a large company,” Mr. Smith remarked.

“The pendulum of the project swings,” explained Mr. Hertzfeld, who has taken a leave of absence from Apple. “Now the company is a more mainstream organization, with managers who have managers working for them. That’s why I’m not there, because I got spoiled” working on the Macintosh design team.

The Conversation (0)
Sort by

This Machine Finds Defects Hiding Deep Inside Microchips

How advanced defect detection is enabling the next wave of chip innovation

7 min read
Equipment featuring CFE technology and AI Image Recognition from Applied Materials.

Applied Materials’ SEMVision H20 system combines the industry’s most sensitive eBeam system with cold field emission (CFE) technology and advanced AI image recognition to enable better and faster analysis of buried nanoscale defects in the world’s most advanced chips.

Applied Materials

This is a sponsored article brought to you by Applied Materials.

The semiconductor industry is in the midst of a transformative era as it bumps up against the physical limits of making faster and more efficient microchips. As we progress toward the “angstrom era,” where chip features are measured in mere atoms, the challenges of manufacturing have reached unprecedented levels. Today’s most advanced chips, such as those at the 2nm node and beyond, are demanding innovations not only in design but also in the tools and processes used to create them.

At the heart of this challenge lies the complexity of defect detection. In the past, optical inspection techniques were sufficient to identify and analyze defects in chip manufacturing. However, as chip features have continued to shrink and device architectures have evolved from 2D planar transistors to 3D FinFET and Gate-All-Around (GAA) transistors, the nature of defects has changed.

Defects are often at scales so small that traditional methods struggle to detect them. No longer just surface-level imperfections, they are now commonly buried deep within intricate 3D structures. The result is an exponential increase in data generated by inspection tools, with defect maps becoming denser and more complex. In some cases, the number of defect candidates requiring review has increased 100-fold, overwhelming existing systems and creating bottlenecks in high-volume production.

Applied Materials’ CFE technology achieves sub-nanometer resolution, enabling the detection of defects buried deep within 3D device structures.

The burden created by the surge in data is compounded by the need for higher precision. In the angstrom era, even the smallest defect — a void, residue, or particle just a few atoms wide — can compromise chip performance and the yield of the chip manufacturing process. Distinguishing true defects from false alarms, or “nuisance defects,” has become increasingly difficult.

Traditional defect review systems, while effective in their time, are struggling to keep pace with the demands of modern chip manufacturing. The industry is at an inflection point, where the ability to detect, classify, and analyze defects quickly and accurately is no longer just a competitive advantage — it’s a necessity.

Applied Materials

Adding to the complexity of this process is the shift toward more advanced chip architectures. Logic chips at the 2nm node and beyond, as well as higher-density DRAM and 3D NAND memories, require defect review systems capable of navigating intricate 3D structures and identifying issues at the nanoscale. These architectures are essential for powering the next generation of technologies, from artificial intelligence to autonomous vehicles. But they also demand a new level of precision and speed in defect detection.

In response to these challenges, the semiconductor industry is witnessing a growing demand for faster and more accurate defect review systems. In particular, high-volume manufacturing requires solutions that can analyze exponentially more samples without sacrificing sensitivity or resolution. By combining advanced imaging techniques with AI-driven analytics, next-generation defect review systems are enabling chipmakers to separate the signal from the noise and accelerate the path from development to production.

eBeam Evolution: Driving the Future of Defect Detection

Electron beam (eBeam) imaging has long been a cornerstone of semiconductor manufacturing, providing the ultra-high resolution necessary to analyze defects that are invisible to optical techniques. Unlike light, which has a limited resolution due to its wavelength, electron beams can achieve resolutions at the sub-nanometer scale, making them indispensable for examining the tiniest imperfections in modern chips.
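To see why, it helps to compare the relevant length scales. The back-of-the-envelope comparison below uses illustrative numbers, not Applied’s specifications: the non-relativistic de Broglie wavelength of an electron accelerated through a potential V versus the Rayleigh diffraction limit of an optical system (here, a 193-nm deep-ultraviolet wavelength and a numerical aperture of 1.35).

\[
\lambda_e \approx \frac{h}{\sqrt{2\,m_e\,e\,V}} \approx \frac{1.226\ \text{nm}}{\sqrt{V/\text{volt}}}
\;\Rightarrow\;
\lambda_e \approx 0.04\ \text{nm at } V = 1\ \text{kV},
\qquad
d_{\text{optical}} \approx \frac{0.61\,\lambda}{\text{NA}} \approx \frac{0.61 \times 193\ \text{nm}}{1.35} \approx 87\ \text{nm}.
\]

In practice an eBeam column is limited by lens aberrations and source brightness rather than by the electron wavelength itself, which is why a brighter, narrower electron source, such as the CFE emitters discussed below, translates directly into better resolution.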


The journey of eBeam technology has been one of continuous innovation. Early systems relied on thermal field emission (TFE), which generates an electron beam by heating a filament to extremely high temperatures. While TFE systems are effective, they have known limitations. The beam is relatively broad, and the high operating temperatures can lead to instability and shorter lifespans. These constraints became increasingly problematic as chip features shrank and defect detection requirements grew more stringent.

Enter cold field emission (CFE) technology, a breakthrough that has redefined the capabilities of eBeam systems. Unlike TFE, CFE operates at room temperature, using a sharp, cold filament tip to emit electrons. This produces a narrower, more stable beam with a higher density of electrons that results in significantly improved resolution and imaging speed.


For decades, CFE systems were limited to lab usage because it was not possible to keep the tools up and running for adequate periods of time — primarily because at “cold” temperatures, contaminants inside the chambers adhere to the eBeam emitter and partially block the flow of electrons.

In December 2022, Applied Materials announced that it had solved the reliability issues with the introduction of its first two eBeam systems based on CFE technology. Applied, an industry leader in defect detection that has consistently pushed the boundaries of materials engineering, mitigated the CFE stability challenge after more than 10 years of research by a global team of engineers. The breakthroughs include technology that delivers orders of magnitude higher vacuum than TFE systems, an eBeam column built with special materials that reduce contamination, and a novel chamber self-cleaning process that keeps the emitter tip clean.

CFE technology achieves sub-nanometer resolution, enabling the detection of defects buried deep within 3D device structures, a capability that is critical for advanced architectures like Gate-All-Around (GAA) transistors and 3D NAND memory. Additionally, CFE systems offer faster imaging speeds compared to traditional TFE systems, allowing chipmakers to analyze more defects in less time.

The Rise of AI in Semiconductor Manufacturing

While eBeam technology provides the foundation for high-resolution defect detection, the sheer volume of data generated by modern inspection tools has created a new challenge: how to process and analyze this data quickly and accurately. This is where artificial intelligence (AI) comes into play.

AI-driven systems can classify defects with remarkable accuracy, sorting them into categories that provide engineers with actionable insights.

AI is transforming manufacturing processes across industries, and semiconductors are no exception. AI algorithms — particularly those based on deep learning — are being used to automate and enhance the analysis of defect inspection data. These algorithms can sift through massive datasets, identifying patterns and anomalies that would be impossible for human engineers to detect manually.

By training with real in-line data, AI models can learn to distinguish between true defects — such as voids, residues, and particles — and false alarms, or “nuisance defects.” This capability is especially critical in the angstrom era, where the density of defect candidates has increased exponentially.
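As a concrete illustration of the kind of model involved, the sketch below trains a tiny convolutional classifier to separate “true defect” image patches from “nuisance” patches. It is a minimal example in PyTorch under assumed inputs (64-by-64 grayscale patches with random labels) and is not Applied Materials’ production model; the class names and shapes are hypothetical.

# Illustrative sketch only: a tiny binary classifier separating "true defect"
# patches from "nuisance" patches, in the spirit of the AI-driven review
# described above. Not Applied Materials' model; names and shapes are made up.
import torch
import torch.nn as nn

class DefectPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),           # logits: [nuisance, true defect]
        )

    def forward(self, x):
        return self.head(self.features(x))

# One training step on a dummy batch standing in for labeled review images.
model = DefectPatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(8, 1, 64, 64)      # placeholder eBeam review patches
labels = torch.randint(0, 2, (8,))       # 0 = nuisance, 1 = true defect
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
print(f"dummy training loss: {loss.item():.3f}")

In a real deployment, the labeled in-line data mentioned above would replace the random tensors, and the trained classifier would be used to triage defect candidates before engineers review them.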

Enabling the Next Wave of Innovation: The SEMVision H20

The convergence of AI and advanced imaging technologies is unlocking new possibilities for defect detection. AI-driven systems can classify defects with remarkable accuracy, sorting them into categories that give engineers actionable insights. This not only speeds up the defect review process but also improves its reliability and reduces the risk of overlooking critical issues. In high-volume manufacturing, where even small improvements in yield can translate into significant cost savings, AI is becoming indispensable.

The transition to advanced nodes, the rise of intricate 3D architectures, and the exponential growth in data have created a perfect storm of manufacturing challenges, demanding new approaches to defect review. These challenges are being met with Applied’s new SEMVision H20.


By combining second-generation cold field emission (CFE) technology with advanced AI-driven analytics, the SEMVision H20 is not just a tool for defect detection but a catalyst for change in the semiconductor industry.

A New Standard for Defect Review

The SEMVision H20 builds on the legacy of Applied’s industry-leading eBeam systems, which have long been the gold standard for defect review. Its second-generation CFE technology delivers higher, sub-nanometer resolution and faster imaging than both TFE and first-generation CFE because of increased electron flow through the filament tip. These capabilities enable chipmakers to identify and analyze the smallest defects, including those buried within 3D structures. Precision at this level is essential for emerging chip architectures, where even the tiniest imperfection can compromise performance and yield.

But the SEMVision H20’s capabilities go beyond imaging. Its deep learning AI models are trained with real in-line customer data, enabling the system to automatically classify defects with remarkable accuracy. By distinguishing true defects from false alarms, the system reduces the burden on process control engineers and accelerates the defect review process. The result is a system that delivers 3X faster throughput while maintaining the industry’s highest sensitivity and resolution, a combination that is transforming high-volume manufacturing.

Dr. Neta Shomrat leads product marketing for Applied’s SEMVision product line, where she is responsible for driving the roadmap and strategy for advanced eBeam defect review technologies.

Applied Materials

“One of the biggest challenges chipmakers often have with adopting AI-based solutions is trusting the model,” says Shomrat. “The success of the SEMVision H20 validates the quality of the data and insights we are bringing to customers. The pillars of technology that comprise the product are what build customer trust. It’s not just the buzzword of AI. The SEMVision H20 is a compelling solution that brings value to customers.”

Broader Implications for the Industry

The impact of the SEMVision H20 extends far beyond its technical specifications. By enabling faster and more accurate defect review, the system is helping chipmakers reduce factory cycle times, improve yields, and lower costs. In an industry where margins are razor-thin and competition is fierce, these improvements are not just incremental; they are game-changing.

Additionally, the SEMVision H20 is enabling the development of faster, more efficient, and more powerful chips. As the demand for advanced semiconductors continues to grow, driven by trends like artificial intelligence, 5G, and autonomous vehicles, the ability to manufacture these chips at scale will be critical. The system is helping to make this possible, ensuring that chipmakers can meet the demands of the future.

A Vision for the Future

Applied’s work on the SEMVision H20 is more than just a technological achievement; it’s a reflection of the company’s commitment to solving the industry’s toughest challenges. By leveraging cutting-edge technologies like CFE and AI, Applied is not only addressing today’s pain points but also shaping the future of defect review.

As the semiconductor industry continues to evolve, the need for advanced defect detection solutions will only grow. With the SEMVision H20, Applied is positioning itself as a key enabler of the next generation of semiconductor technologies, from logic chips to memory. By pushing the boundaries of what’s possible, the company is helping to ensure that the industry can continue to innovate, scale, and thrive in the angstrom era and beyond.

New AI Model Advances the “Kissing Problem” and More

AlphaEvolve made several mathematical discoveries and practical optimizations

4 min read
Green code screen on top of green pixelated landscape on black background
Nicole Millman; Original imagery: Google DeepMind

There’s a mathematical concept called the kissing number. Somewhat disappointingly, it’s got nothing to do with actual kissing. It enumerates how many spheres can touch (or “kiss”) a single sphere of equal size without crossing it. In one dimension, the kissing number is 2. In two dimensions, it’s 6 (think The New York Times’ spelling bee puzzle configuration). As the number of dimensions grows, the answer becomes less obvious: For most dimensionalities over 4, only upper and lower bounds on the kissing number are known. Now, an AI agent developed by Google DeepMind called AlphaEvolve has made its contribution to the problem, increasing the lower bound on the kissing number in 11 dimensions from 592 to 593.
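The two-dimensional case can be worked out directly. Place unit circles tangent to a central unit circle: their centers lie on a circle of radius 2, and two neighboring circles may touch but not overlap, so the distance between neighboring centers must be at least 2. If θ is the angle between neighboring centers as seen from the middle, then

\[
2 \cdot 2 \sin\!\left(\tfrac{\theta}{2}\right) \ge 2
\;\Rightarrow\;
\sin\!\left(\tfrac{\theta}{2}\right) \ge \tfrac{1}{2}
\;\Rightarrow\;
\theta \ge 60^{\circ},
\qquad
\frac{360^{\circ}}{60^{\circ}} = 6 .
\]

Six circles spaced exactly 60 degrees apart achieve the bound, which is the hexagonal arrangement the spelling-bee analogy points to. No comparably clean argument is known in most higher dimensions, which is why only bounds are available there.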

This may seem like an incremental improvement on the problem, especially given that the upper bound on the kissing number in 11 dimensions is 868, so the unknown range is still quite large. But it represents a novel mathematical discovery by an AI agent, and challenges the idea that large language models are not capable of original scientific contributions.

And this is just one example of what AlphaEvolve has accomplished. “We applied AlphaEvolve across a range of open problems in research mathematics, and we deliberately picked problems from different parts of math: analysis, combinatorics, geometry,” says Matej Balog, a research scientist at DeepMind who worked on the project. They found that for 75 percent of the problems, the AI model replicated the already known optimal solution. In 20 percent of cases, it found a new optimum that surpassed any known solution. “Every single such case is a new discovery,” Balog says. (In the other 5 percent of cases, the AI converged on a solution that was worse than the known optimal one.)

The model also developed a new algorithm for matrix multiplication—the operation that underlies much of machine learning. A previous version of DeepMind’s AI model, called AlphaTensor, had already beaten the previous best known algorithm, discovered in 1969, for multiplying 4-by-4 matrices. AlphaEvolve found a more general version of that improved algorithm.

DeepMind’s AlphaEvolve made improvements to several practical problems at Google. Google DeepMind

In addition to abstract math, the team applied the model to practical problems that Google faces every day. AlphaEvolve was used to optimize data-center orchestration for a 1 percent efficiency gain, to improve the design of the next Google tensor processing unit, and to discover an improvement to a kernel used in Gemini training that cut training time by 1 percent.

“It’s very surprising that you can do so many different things with a single system,” says Alexander Novikov, a senior research scientist at DeepMind who also worked on AlphaEvolve.

How AlphaEvolve Works

AlphaEvolve can be so general because it applies to almost any problem that can be expressed as code and whose solutions can be checked by another piece of code. The user supplies an initial stab at the problem—a program that solves the problem at hand, however suboptimally—and a verifier program that checks how well a piece of code meets the required criteria.

Then, a large language model, in this case Gemini, comes up with other candidate programs to solve the same problem, and each one is tested by the verifier. From there, AlphaEvolve uses a genetic algorithm such that the “fittest” of the proposed solutions survive and evolve to the next generation. This process repeats until the solutions stop improving.
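The structure of that loop can be sketched in a few lines of Python. This is a schematic of the generate-verify-select cycle described above, not DeepMind’s code: the LLM proposal step is replaced by random perturbation of a list of numbers, and the verifier is a toy scoring function. The names (Candidate, verifier, propose_variant) are hypothetical.

# Schematic evolve-and-verify loop, with the LLM step stubbed out.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    params: list          # stands in for "a program"; here just numbers to tune
    fitness: float = 0.0

def verifier(params):
    # Toy checker: score how close the candidate gets to a fixed target.
    target = [3.0, -1.0, 2.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def propose_variant(parent):
    # Stand-in for the LLM: perturb the parent "program" slightly.
    return Candidate([p + random.gauss(0, 0.3) for p in parent.params])

population = [Candidate([random.uniform(-5, 5) for _ in range(3)]) for _ in range(20)]
for generation in range(50):
    for cand in population:
        cand.fitness = verifier(cand.params)
    population.sort(key=lambda c: c.fitness, reverse=True)
    survivors = population[:5]                       # the "fittest" solutions survive
    children = [propose_variant(random.choice(survivors)) for _ in range(15)]
    population = survivors + children

best = max(population, key=lambda c: verifier(c.params))
print("best params:", [round(p, 2) for p in best.params])

In AlphaEvolve, the candidates are real programs proposed by Gemini models and the verifier is problem-specific evaluation code, but the survive-and-evolve skeleton is the same.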

AlphaEvolve uses an ensemble of Gemini large language models (LLMs) in conjunction with evaluation code, all orchestrated by a genetic algorithm to optimize a piece of code. Google DeepMind

“Large language models came around, and we started asking ourselves, is it the case that they are only going to add what’s in the training data, or can we actually use them to discover something completely new, new algorithms or new knowledge?” Balog says. This research, Balog claims, shows that “if you use the large language models in the right way, then you can, in a very precise sense, get something that’s provably new and provably correct in the form of an algorithm.”

AlphaEvolve comes from a long lineage of DeepMind’s models, going back to AlphaZero, which stunned the world by learning to play chess, Go, and other games better than any human player without using any human knowledge—just by playing the game and using reinforcement learning to master it. Another math-solving AI based on reinforcement learning, AlphaProof, performed at the silver-medalist level on the 2024 International Math Olympiad.

For AlphaEvolve, however, the team broke from the reinforcement learning tradition in favor of the genetic algorithm. “The system is much simpler,” Balog says. “And that actually has consequences, that it’s much easier to set up on a wide range of problems.”

The (Totally Not Scary) Future

The team behind AlphaEvolve hopes to evolve their system in two ways.

First, they want to apply it to a broader range of problems, including those in the natural sciences. To pursue this goal, they are planning to open up an early access program for interested academics to use AlphaEvolve in their research. It may be harder to adapt the system to the natural sciences, as verification of proposed solutions may be less straightforward. But, Balog says, “We know that in the natural sciences, there are plenty of simulators for different types of problems, and then those can be used within AlphaEvolve as well. And we are, in the future, very much interested in broadening the scope in this direction.”

Second, they want to improve the system itself, perhaps by coupling it with another DeepMind project: the AI coscientist. This AI also uses an LLM and a genetic algorithm, but it focuses on hypothesis generation in natural language. “They develop these higher-level ideas and hypotheses,” Balog says. “Incorporating this component into AlphaEvolve-like systems, I believe, will allow us to go to higher levels of abstraction.”

These prospects are exciting, but for some they may also sound menacing—for example, AlphaEvolve’s optimization of Gemini training may be seen as the beginning of recursively self-improving AI, which some worry would lead to a runaway intelligence explosion referred to as the singularity. The DeepMind team maintains that that is not their goal, of course. “We are excited to contribute to advancing AI that benefits humanity,” Novikov says.

Breaking 6G Barriers: How Researchers Made Ultra-Fast Wireless Real

1 min read

Keysight visited 6G researchers at Northeastern University who are working to overcome the challenges of high-speed, high-bandwidth wireless communication.

They shared concepts from their cutting-edge research, including overcoming increased path loss and noise at higher frequencies, potential digital threats to communication channels, and real-time upper-layer network applications.

During this event, you will gain insights into the following 6G topics:

  • Using broadband MIMO systems to increase data throughput and transmission distance.
  • Emulating an eavesdropping attack on a 6G signal to test for vulnerabilities.
  • Testing real-time sub-THz for network research.

Is Opera Leading the AI Agent Browsing Era?

Norwegian web pioneer seeks to broaden AI capabilities worldwide

3 min read
A smartphone using Aria AI features on the Opera browser.
Original imagery: Nicole Millman; Opera

The Opera Web browser, first introduced 30 years ago, has over its long tenure helped to pioneer features that would later become commonplace among all Web browsers—including tabs, sync, and built-in search. Opera was among the first to introduce a built-in AI assistant (Aria) as well as the ability to use locally running models with its developer version. Now, Opera aims to be the first to offer a new kind of AI agent–based browsing, with a feature called Browser Operator.

AI agents are an emerging trend in artificial intelligence, built around AI-powered assistants that perform extended tasks beyond a single query or command-line action. And many tech observers argue agent-based (or “agentic”) AI will be a big deal in the years ahead.

At the company’s Opera Days event last month, Henrik Lexow, director of product marketing technologies, demonstrated the multifaceted versatility of agentic AI. In one demo, he booked a complicated travel itinerary; in another, he ordered flowers to be delivered to an event attendee.

The Opera browser runs on a range of platforms from high-end gaming devices (Opera GX) to low-end phones (Opera Mini). Mini is Opera’s most popular browser, with nearly 70 million monthly active users in Africa alone, and over 1 billion downloads worldwide from the Google Play Store.

The Global Reach of Opera Mini

Launched 20 years ago, in 2005, Opera Mini gave users access to the Internet on lower-end consumer devices, especially feature phones. While the low-end phone marketplace today has expanded to include some smartphones, the Internet access limitations and throttled data plans of old remain an ongoing concern around the globe. So Opera Mini continues to combine page compression and snapshotting to reduce the requirements of today’s resource-intensive websites. Instead of loading pages directly from the source, Mini has the option of loading them from a snapshot on Opera’s servers, removing excessive JavaScript or video to render the page more manageable over low-data connections.
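The general idea behind that kind of snapshotting can be illustrated in a few lines of Python. This is not Opera’s implementation; it simply shows how a server-side proxy might strip heavyweight elements from a page before sending a lighter version to the client, using the BeautifulSoup library and a hard-coded page standing in for a real fetch.

# Illustrative page-slimming sketch, not Opera Mini's actual pipeline.
from bs4 import BeautifulSoup

RAW_PAGE = """
<html><body>
  <h1>News</h1>
  <p>Story text the reader actually wants.</p>
  <script src="analytics.js"></script>
  <video src="huge-clip.mp4"></video>
</body></html>
"""

def make_snapshot(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "video", "iframe"]):   # drop heavyweight elements
        tag.decompose()
    return str(soup)

slim = make_snapshot(RAW_PAGE)
print(f"original: {len(RAW_PAGE)} characters, snapshot: {len(slim)} characters")

A production system would also recompress images, cache the snapshot, and serve it from the proxy, but the core trade-off is the same: send less of the page over a constrained connection.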

Despite the different browser variants, each Opera version is built upon the same AI Composer Engine. For Opera Mini and its user base, this provides access to third-party AI models that would otherwise require a powerful device to run locally or carry high costs to access as a service. With the forthcoming version 2.0, Aria reportedly will place an even greater priority on response speed.

“Everyone gets the same experience,” says Tim Lesnik, Opera Mini’s product manager. “Where Aria is available in a particular country, there are no limitations imposed in any way, shape, or form.”

However, usage patterns differ among user groups and countries, says Monika Kurczyńska, Opera’s AI R&D lead. For example, browser usage by students in Brazil and Nigeria peaks during the school year and then drops off during school holidays—so much so that the Opera team initially worried that Aria had stopped working in those countries.

“The first time that happened, we were like, my goodness, what’s happened here? Something must have broken,” says Lesnik.

Opera’s and Aria’s Many Languages

Aria supports more than 50 languages, and for each of these, it provides prompt examples to get users started.

“We’ve got a range of different prompts,” says Lesnik. “Those prompts are all the same in the different countries, but they are translated right now. What we know we need to do better is understand that users in Nigeria are using Aria in a different way from users in Indonesia.”

Language support in large language models (LLMs) is inconsistent outside of globally popular languages including English, French, Chinese, and Spanish. Yet, as with prompt examples, an LLM can often translate questions and answers it doesn’t have direct responses to. Kurczyńska, who is Polish, says LLMs treat different languages—and the number of tokens (the building blocks of text that an LLM understands) each language requires—quite differently.

“Different languages act and behave in different ways in LLMs,” says Kurczyńska. “For example, [using] the same sentence with a similar number of characters in Polish and English, the LLM uses more tokens in Polish.”
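The effect Kurczyńska describes is easy to observe with an open tokenizer. The sketch below counts tokens for roughly equivalent English and Polish sentences using the tiktoken library; the exact counts depend on the tokenizer chosen, and Aria’s own models may tokenize differently.

# Compare token counts for similar sentences in two languages (illustrative only).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

english = "The weather is beautiful today and I would like to go for a walk."
polish = "Pogoda jest dzisiaj piękna i chciałabym pójść na spacer."

for label, text in [("English", english), ("Polish", polish)]:
    print(f"{label}: {len(enc.encode(text))} tokens for {len(text)} characters")

Because most tokenizers are trained mainly on English-heavy corpora, the Polish sentence is typically split into more, shorter tokens, which translates into higher cost and latency per query.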

While work remains to make all features production-ready, bringing agentic browsing to hundreds of millions of Opera users globally, especially those in parts of the world often ignored by larger technology brands, is a mammoth task. Hugging Face, a popular repository of AI models, has nearly 200,000 models that support English, but only 11,000 that support Chinese, and fewer than 10,000 that support Spanish. In March, in fact, researchers in Singapore introduced what they called Babel, an LLM they claim can support 90 percent of the world’s speakers in a single model.

At Opera, Lesnik and Kurczyńska say they plan to tackle the many-language problem through AI feature drops every two weeks, across parallel public-developer and beta versions of the company’s browsers.

This story was updated on 15 May, 2025 to change Opera’s affiliation (Norwegian, not Chinese-Norwegian as a previous version of this story stated), as well as to clarify details of Opera’s AI models concerning capability and variability among the range of Opera browsers available today. Also, a misspelling of Opera Mini product manager Tim Lesnik’s name was corrected.


Teething Babies and Rainy Days Once Cut Calls Short

“Trouble men” searched for water damage in early analog telephones

8 min read
illustration of a baby chewing on the cord of an old candlestick telephone. The baby is in the style of a line drawing, while the phone appears to be from a photograph
Serge Bloch
Humans are messy. We spill drinks, smudge screens, and bring our electronic devices into countless sticky situations. As anyone who has accidentally dropped their phone into a toilet or pool knows, moisture poses a particular problem.

And it’s not a new one: From early telephones to modern cellphones, everyday liquids have frequently conflicted with devices that must stay dry. Consumers often take the blame when leaks and spills inevitably occur.

Rachel Plotnick, an associate professor of cinema and media studies at Indiana University Bloomington, studies the relationship between technology and society. Last year, she spoke to IEEE Spectrum about her research on how people interact with buttons and tactile controls. In her new book, License to Spill: Where Dry Devices Meet Liquid Lives (The MIT Press, 2025), Plotnick explores the dynamic between everyday wetness and media devices through historical and contemporary examples, including cameras, vinyl records, and laptops. This adapted excerpt looks back at analog telephones of the 1910s through 1930s, the common practices that interrupted service, and the “trouble men” who were sent to repair phones and reform messy users.

The Future of AI and Robotics Is Being Led by Amazon’s Next-Gen Warehouses

The company’s robotics systems are redefining warehouse efficiency

5 min read
Robotic arm with suction cups lifting a cardboard box at an Amazon warehouse.

Amazon is a prime destination for engineers and scientists seeking to shape the future of AI and robotics.

Amazon

This is a sponsored article brought to you by Amazon.

The cutting edge of robotics and artificial intelligence (AI) isn’t found only at NASA or the top university labs; increasingly, it is being developed in the warehouses of the e-commerce company Amazon. As online shopping continues to grow, companies like Amazon are pushing the boundaries of these technologies to meet consumer expectations.

Warehouses, the backbone of the global supply chain, are undergoing a transformation driven by technological innovation. Amazon, at the forefront of this revolution, is leveraging robotics and AI to shape the warehouses of the future. Far from being just a logistics organization, Amazon is positioning itself as a leader in technological innovation, making it a prime destination for engineers and scientists seeking to shape the future of automation.

Amazon: A Leader in Technological Innovation

Amazon’s success in e-commerce is built on a foundation of continuous technological innovation. Its fulfillment centers are increasingly becoming hubs of cutting-edge technology where robotics and AI play a pivotal role. Heath Ruder, Director of Product Management at Amazon, explains how Amazon’s approach to integrating robotics with advanced material handling equipment is shaping the future of its warehouses.

“We’re integrating several large-scale products into our next-generation fulfillment center in Shreveport, Louisiana,” says Ruder. “It’s our first opportunity to get our robotics systems combined under one roof and understand the end-to-end mechanics of how a building can run with incorporated autonomation.” Ruder is referring to the facility’s deployment of its Automated Storage and Retrieval Systems (ASRS), called Sequoia, as well as robotic arms like “Robin” and “Cardinal” and Amazon’s proprietary autonomous mobile robot, “Proteus”.

Amazon has already deployed “Robin”, a robotic arm that sorts packages for outbound shipping by transferring packages from conveyors to mobile robots. This system is already in use across various Amazon fulfillment centers and has completed over three billion successful package moves. “Cardinal” is another robotic arm system that efficiently packs packages into carts before the carts are loaded onto delivery trucks.

“Proteus” is Amazon’s autonomous mobile robot designed to work around people. Unlike traditional robots confined to a restricted area, Proteus is fully autonomous and navigates through fulfillment centers using sensors and a mix of AI and machine learning systems. It works with human workers and other robots to transport carts full of packages more efficiently.

The integration of these technologies is estimated to increase operational efficiency by 25 percent. “Our goal is to improve speed, quality, and cost. The efficiency gains we’re seeing from these systems are substantial,” says Ruder. However, the real challenge is scaling this technology across Amazon’s global network of fulfillment centers. “Shreveport was our testing ground and we are excited about what we have learned and will apply at our next building launching in 2025.”

Amazon’s investment in cutting-edge robotics and AI systems is not just about operational efficiency. It underscores the company’s commitment to being a leader in technological innovation and workplace safety, making it a top destination for engineers and scientists looking to solve complex, real-world problems.

How AI Models Are Trained: Learning from the Real World

One of the most complex challenges Amazon’s robotics team faces is how to make robots capable of handling a wide variety of tasks that require discernment. Mike Wolf, a principal scientist at Amazon Robotics, plays a key role in developing AI models that enable robots to better manipulate objects, across a nearly infinite variety of scenarios.

“The complexity of Amazon’s product catalog—hundreds of millions of unique items—demands advanced AI systems that can make real-time decisions about object handling,” explains Wolf. But how do these AI systems learn to handle such an immense variety of objects? Wolf’s team is developing machine learning algorithms that enable robots to learn from experience.

“We’re developing the next generation of AI and robotics. For anyone interested in this field, Amazon is the place where you can make a difference on a global scale.” —Mike Wolf, Amazon Robotics

In fact, robots at Amazon continuously gather data from their interactions with objects, refining their ability to predict how items will be affected when manipulated. Every interaction a robot has—whether it’s picking up a package or placing it into a container—feeds back into the system, refining the AI model and helping the robot to improve. “AI is continually learning from failure cases,” says Wolf. “Every time a robot fails to complete a task successfully, that’s actually an opportunity for the system to learn and improve.” This data-centric approach supports the development of state-of-the-art AI systems that can perform increasingly complex tasks. The resulting predictive ability will help robots determine the best way to pack irregularly shaped objects into containers or handle fragile items without damaging them.
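A highly simplified sketch of that feedback loop is shown below. It is illustrative only and not Amazon’s system: a stubbed-out pick routine stands in for the robot, and failed attempts are collected as the examples most worth folding into the next retraining pass. All names here are hypothetical.

# Schematic failure-driven data collection loop (not Amazon's pipeline).
from dataclasses import dataclass
import random

@dataclass
class PickAttempt:
    item_id: str
    grip_force: float
    success: bool

def attempt_pick(item_id: str, grip_force: float) -> PickAttempt:
    # Stand-in for the real robot: fragile items "fail" more often at high force.
    failure_rate = grip_force if item_id.startswith("fragile") else 0.1
    return PickAttempt(item_id, grip_force, success=random.random() > failure_rate)

training_set: list[PickAttempt] = []
for step in range(100):
    item = random.choice(["fragile-vase", "box-of-books"])
    attempt = attempt_pick(item, grip_force=random.uniform(0.1, 0.9))
    if not attempt.success:
        training_set.append(attempt)   # failures are the most informative examples

print(f"{len(training_set)} failure cases collected for the next retraining pass")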

“We want AI that understands the physics of the environment, not just basic object recognition. The goal is to predict how objects will move and interact with one another in real time,” Wolf says.

What’s Next in Warehouse Automation

Valerie Samzun, Senior Technical Product Manager at Amazon, leads a cutting-edge robotics program that aims to enhance workplace safety and make jobs more rewarding, fulfilling, and intellectually stimulating by allowing robots to handle repetitive tasks.

“The goal is to reduce certain repetitive and physically demanding tasks from associates,” explains Samzun. “This allows them to focus on higher-value tasks in skilled roles.” This shift not only makes warehouse operations more efficient but also opens up new opportunities for workers to advance their careers by developing new technical skills.

“Our research combines several cutting-edge technologies,” Samzun shared. “The project uses robotic arms equipped with compliant manipulation tools to detect the amount of force needed to move items without damaging them or other items.” This is an advancement that incorporates learnings from previous Amazon robotics projects. “This approach allows our robots to understand how to interact with different objects in a way that’s safe and efficient,” says Samzun. In addition to robotic manipulation, the project relies heavily on AI-driven algorithms that determine the best way to handle items and utilize space.

Samzun believes the technology will eventually expand to other parts of Amazon’s operations, finding multiple applications across its vast network. “The potential applications for compliant manipulation are huge,” she says.

Attracting Engineers and Scientists: Why Amazon is the Place to Be

As Amazon continues to push the boundaries of what’s possible with robotics and AI, it’s also becoming a highly attractive destination for engineers, scientists, and technical professionals. Both Wolf and Samzun emphasize the unique opportunities Amazon offers to those interested in solving real-world problems at scale.

For Wolf, who transitioned to Amazon from NASA’s Jet Propulsion Laboratory, the appeal lies in the sheer impact of the work. “The draw of Amazon is the ability to see your work deployed at scale. There’s no other place in the world where you can see your robotics work making a direct impact on millions of people’s lives every day,” he says. Wolf also highlights the collaborative nature of Amazon’s technical teams. Whether working on AI algorithms or robotic hardware, scientists and engineers at Amazon are constantly collaborating to solve new challenges.

Amazon’s culture of innovation extends beyond just technology. It’s also about empowering people. Samzun, who comes from a non-engineering background, points out that Amazon is a place where anyone with the right mindset can thrive, regardless of their academic background. “I came from a business management background and found myself leading a robotics project,” she says. “Amazon provides the platform for you to grow, learn new skills, and work on some of the most exciting projects in the world.”

For young engineers and scientists, Amazon offers a unique opportunity to work on state-of-the-art technology that has real-world impact. “We’re developing the next generation of AI and robotics,” says Wolf. “For anyone interested in this field, Amazon is the place where you can make a difference on a global scale.”

The Future of Warehousing: A Fusion of Technology and Talent

From Amazon’s leadership, it’s clear that the future of warehousing is about more than just automation. It’s about harnessing the power of robotics and AI to create smarter, more efficient, and safer working environments. But at its core it remains centered on people in its operations and those who make this technology possible—engineers, scientists, and technical professionals who are driven to solve some of the world’s most complex problems.

Amazon’s commitment to innovation, combined with its vast operational scale, makes it a leader in warehouse automation. The company’s focus on integrating robotics, AI, and human collaboration is transforming how goods are processed, stored, and delivered. And with so many innovative projects underway, the future of Amazon’s warehouses is one where technology and human ingenuity work hand in hand.

“We’re building systems that push the limits of robotics and AI,” says Wolf. “If you want to work on the cutting edge, this is the place to be.”


Overcoming Tech Workforce Shortages With IEEE Microcredentials

New program validates key skills and widens candidate pool

2 min read
Two inspectors manually operating a Coordinate Measuring Machine to measure the physical geometrical characteristics of an object.

Microcredentials are issued when learners prove mastery of a specific skill.

Boonchai Wedmakawand/Getty Images

By 2030, there will be a global shortage of 85 million workers, many of them in technical fields, according to the World Economic Forum. Many industries that need to employ technical workers will be impacted by the shortage, which is projected to cost them up to US $8.5 trillion in unrealized revenue.

Many technical roles now require university degrees. However, as companies consider how to overcome the worker shortage, some are reevaluating their higher education requirements for certain roles requiring specialized skills.

Those jobs might include technician, electrician, and programmer, along with other positions that compose the skilled technical workforce, as described by SRI International’s Center for Innovation Strategy and Policy.

Positions that don’t require higher education widen the pool of candidates.

Even if they eliminate the need for a degree, organizations will still need to rely on some kind of credential to ensure that job candidates have the skills necessary to do the job. One option is the skills-based microcredential.

Microcredentials are issued when learners prove mastery of a specific skill. Unlike traditional university degrees and course certificates, microcredential programs are not based on successfully completing a full learning program. Instead, a student might earn multiple microcredentials in a single program based on demonstrated skills. A qualified instructor using an assessment instrument determines if a learner has acquired the skill and earned the credential.

The IEEE microcredentials program offers standardized credentials in collaboration with training organizations and universities seeking to provide skills-based credentials outside formal degree programs. IEEE, as the world’s largest technical professional organization, has decades of experience offering industry-relevant credentials and expertise in global standardization.

A seal of approval

IEEE microcredentials are industry-driven professional credentials that focus on needed skills. The program allows technical learning providers to supply credentials that bear the IEEE logo. When a hiring organization sees the logo on a microcredential, it confirms that the instruction has been independently vetted and that the institution is qualified to issue the credential. Credentials issued through the IEEE program include certificates and digital badges.

Training providers that want to offer standardized microcredentials can apply to the program to become approved. A committee reviews the applications to ensure that providers are credible, offer training within IEEE’s fields of interest, have qualified instructors, and have well-defined assessments.

The IEEE program offers standardized credentials in collaboration with training organizations and universities seeking to provide skills-based credentials outside formal degree programs.

Once a provider is approved, IEEE will work with it to benchmark the credentialing needs for each course, including identifying the skills to be recognized, designing the microcredentials, and creating a credential-issuing process. Upon the learner’s successful completion of the program, IEEE will issue the microcredentials on behalf of the training provider.

Microcredentials are stackable; students can earn them from different programs and institutions to demonstrate their growing skill set. The microcredentials can be listed on résumés and CVs and shared on LinkedIn and other professional networking websites.

All IEEE microcredentials that a learner earns are stored within a secure digital wallet for easy reference. The wallet also provides information about the program that issued each credential.


This white paper highlights Industrial Computed Tomography (CT) as a transformative solution for precision inspection, overcoming the limitations of traditional methods like destructive testing or surface scans. By providing non-destructive, high-resolution 3D imaging, industrial CT enables engineers to detect hidden defects (porosity, cracks, voids), accelerate product development, verify supplier parts, improve manufacturing yield, and enhance failure analysis. It supports the entire product lifecycle, from R&D prototyping to production quality control and field failure diagnostics, helping industries like aerospace, automotive, and medical devices ensure reliability. The paper also introduces Lumafield’s CT solutions: Neptune (an accessible lab scanner), Triton (automated factory-floor CT), and Voyager (cloud-based AI analysis software), which make advanced CT scanning faster, smarter, and scalable for modern engineering demands.

What you’ll learn:

  • How CT scanning reveals hidden defects that surface inspections miss.
  • Why non-destructive testing accelerates prototyping and reduces iteration cycles.
  • How to verify supplier parts and avoid costly manufacturing rework.
  • Ways to improve yield by catching process drift before it creates scrap.

The Lost Story of Alan Turing’s Secret “Delilah” Project

An exclusive look inside Turing’s notebooks shows his DIY approach

13 min read
Collage containing a photo of a young man, an old notebook with math on it, and an electronic machine with a cylinder and bulbs.

A collection of documents was recently sold at auction for almost half a million dollars. The documents detail a top-secret voice-encryption project led by Alan Turing, culminating in the creation of the Delilah machine.

Turing: Archivio GBB/contrasto/Redux Pictures; Delilah: The National Archives, London; Notebook: Bonhams

It was 8 May 1945, Victory in Europe Day. With the German military’s unconditional surrender, the European part of World War II came to an end. Alan Turing and his assistant Donald Bayley celebrated victory in their quiet English way, by taking a long walk together. They had been working side by side for more than a year in a secret electronics laboratory, deep in the English countryside. Bayley, a young electrical engineer, knew little about his boss’s other life as a code breaker, only that Turing would set off on his bicycle every now and then to another secret establishment about 10 miles away along rural lanes, Bletchley Park. As Bayley and the rest of the world would later learn, Bletchley Park was the headquarters of a vast, unprecedented code-breaking operation.

When they sat down for a rest in a clearing in the woods, Bayley said, “Well, the war’s over now—it’s peacetime, so you can tell us all.”

Humanities Are Essential for Engineers

By design, engineering has long been entangled with the humanities

6 min read
Black and white photograph of Victorian co-ed high school students conducting experiments with batteries.

Students in 1899 conduct an experiment with batteries.

Frances Benjamin Johnston/Library of Congress

Since last September, I’ve been spending seven hours a day, five days a week happily researching the history of women in electrical engineering. So far I’ve uncovered the names of more than 200 women who contributed to electrical engineering, the first step in an eventual book project. No disrespect to Ada Lovelace, Grace Hopper, or Katherine Johnson, but there are many other women in engineering you should know about.

I’m doing my research at the Linda Hall Library of Science, Engineering, and Technology, in Kansas City, Mo., and I’m currently working through the unpublished papers of the American Institute of Electrical Engineers (a predecessor of today’s IEEE). These papers consist of conference presentations and keynote addresses that weren’t included in the society’s journals. They take up about 14 shelves in the closed stacks at the Linda Hall. Most of the content is unavailable on the Internet or anywhere else. No amount of Googling or prompting ChatGPT will reveal this history. The only way to discover it is to go to the library in person and leaf through the papers. This is what history research looks like. It is time intensive and can’t be easily replaced by AI (at least not yet).

Up until 2 April, my research was funded through a fellowship with the National Endowment for the Humanities. My fellowship was supposed to run through mid-June, but the grant was terminated early. Maybe you don’t care about my research, but I’m not alone. Almost all NEH grants were similarly cut, as were thousands of research grants from the National Science Foundation, the National Institutes of Health, the Institute of Museum and Library Services, and the National Endowment for the Arts. Drastic research cuts have also been made or are expected at the Departments of Defense, Energy, Commerce, and Education. I could keep going.

This is what history research looks like.

There’s been plenty of outrage all around, but as an engineer turned historian who now studies engineers of the past, I have a particular plea: Engineers and computer scientists, please defend humanities research just as loudly as you might defend research in STEM fields. Why? Because if you take a moment to reflect on your training, conduct, and professional identity, you may realize that you owe much of this to the humanities.

Historians can show how the past has shaped your profession; philosophers can help you think through the social implications of your technical choices; artists can inspire you to design beautiful products; literature can offer ideas on how to communicate. And, as I have discovered while combing through those unpublished papers, it turns out that the bygone engineers of the 20th century recognized this strong bond to the humanities.

Engineering’s Historical Ties to the Humanities

Granted, the humanities have a few thousand years on engineering when it comes to formal study. Plato and Aristotle were mainly into philosophy, even when they were chatting about science-y stuff. Formal technical education in the United States didn’t begin until the founding of the U.S. Military Academy, in West Point, N.Y., in 1802. Two decades later came what is now Rensselaer Polytechnic Institute, in Troy, N.Y. Dedicated to “the application of science to the common purposes of life,” Rensselaer was the first school in the English-speaking world established to teach engineering—in this case, civil engineering.

Electrical engineering, my undergraduate field of study, didn’t really get going as an academic discipline until the late 19th century. Even then, most electrical training took the form of technical apprenticeships.

One consistent trend throughout the 20th century is the high level of anxiety over what it means to be an engineer.

In addition to looking at the unpublished papers, I’ve been paging through the entire run of journals from the AIEE, the Institute of Radio Engineers (the other predecessor of the IEEE), and the IEEE. And so I have a good sense of the evolution of the profession. One consistent, yet surprising, trend throughout the 20th century is the high level of anxiety over what it means to be an engineer. Who exactly are we?

Early on, electrical engineers looked to the medical and legal fields to see how to organize, form professional societies, and create codes of ethics. They debated the difference between training for a technician versus an engineer. They worried about being too high-minded, but also being seen as getting their hands dirty in the machine shop. During the Great Depression and other times of economic downturn, there were lengthy discussions on organizing into unions.

To cement their status as legitimate professionals, engineers decided to make the case that they, the engineers, are the keystone of civilization. A bold claim, and I don’t necessarily disagree, but what’s interesting is that they linked engineering firmly to the humanities. To be an engineer, they argued, meant to accept responsibility for the full weight of human values that underlie every engineering problem. And to be a responsible member of society, an engineer needed formal training in the humanities, so that he (and it was always he) could discover himself, identify his place within the community, and act accordingly.

Thomas L. Martin Jr., dean of engineering at the University of Arizona, endorsed this engineering curriculum, in which the humanities accounted for 24 of 89 credits. AIEE

What an Engineering Education Should Be

Here’s what that meant in practice. In 1909, none other than Charles Proteus Steinmetz advocated for including the classics in engineering education. An education too focused on empirical science and engineering was “liable to make the man one sided.” Indeed, he contended, “this neglect of the classics is one of the most serious mistakes of modern education.”

In the 1930s, William Wickenden, president of the Case School of Applied Science at Case Western Reserve University, in Cleveland, wrote an influential report on engineering education, in which he argued that at least one-fifth of an engineering curriculum should be devoted to the study of the humanities and social sciences.

After World War II and the deployment of the atomic bomb, the start of the Cold War, and the U.S. entry into the Vietnam War, the study of the humanities within engineering seemed even more pressing.

In 1961, C.R. Vail, a professor at Duke University, in Durham, N.C., railed against “culturally semiliterate engineering graduates who...could be immediately useful in routine engineering activity, but who were incapable of creatively applying fundamental physical concepts to the solution of problems imposed by emerging new technologies.” In his opinion, the inclusion of a full year of humanities coursework would stimulate the engineer’s aesthetic, ethical, intellectual, and spiritual growth. Thus prepared, future engineers would be able “to recognize the sociological consequences of their technological achievements and to feel a genuine concern toward the great dilemmas which confront mankind.”

In a similar vein, Thomas L. Martin Jr., dean of engineering at the University of Arizona, proposed an engineering curriculum in which the humanities and social sciences accounted for 24 of the 89 credits.

Many engineers of that era thought it was their duty to stand up for their beliefs.

Engineers in industry also had opinions on the humanities. James Young, an engineer with General Electric, argued that engineers need “an awareness of the social forces, the humanities, and their relationship to his professional field, if he is to ascertain areas of potential impact or conflict.” He urged engineers to participate in society, whether in the affairs of the neighborhood or the nation. “As an educated man,” the engineer “has more than casual or average responsibility to protect this nation’s heritage of integrity and morality,” Young believed.

Indeed, many engineers of that era thought it was their duty to stand up for their beliefs. “Can the engineering student ignore the existence of moral issue?” asked the UCLA professors D. Rosenthal, A.B. Rosenstein, and M. Tribus in a 1962 paper. “We must answer, ‘he cannot’; at least not if we live in a democratic society.”

Of course, here in the United States, we still live in a democratic society, one that constitutionally protects the freedoms of speech, assembly, and petitioning the government for a redress of grievances. And yet, anecdotally, I’ve observed that engineers today are more reticent than others to engage in public discourse or protest.

Will that change? Since the Eisenhower era, U.S. universities have relied on the federal funding of research, but in the past few weeks and months, that relationship has been upended. I wonder if today’s engineers will take a cue from their predecessors and decide to take a stand. Or perhaps industry will choose to reinvest in fundamental and long-term R&D the way they used to in the 20th century. Or maybe private foundations and billionaire philanthropists will step up.

Nobody can say what will happen next, but I’d like to think this will be one of those times when the past is prologue. And so I’ll repeat my plea to my engineering colleagues: Please don’t turn your back on the humanities. Embrace the moral center that your professional forebears believed all engineers should foster throughout their careers. Stand up for both engineering and the humanities. They are not separate and separable enterprises. They are beautifully entangled and dependent on each other. Both are needed for civilization to flourish. Both are needed for a better tomorrow.

References

With the exception of Charles Proteus Steinmetz’s “The Value of the Classics in Engineering Education,” which is available in IEEE Xplore, and William Wickenden’s Report of the Investigation of Engineering Education, which is available on the Internet Archive, all of the papers and talks quoted above come from the unpublished papers of the AIEE and unpublished papers of the IEEE. The former Engineering Societies Library, which was based in New York City, bound these papers into volumes. They aren’t digitized and probably never will be; you’ll have to go to the Linda Hall Library in Kansas City, Mo., to check them out.

But if you’d like to learn more about how past engineers embraced the humanities, check out Matthew Wisnioski’s book Engineers for Change: Competing Visions of Technology in 1960s America (MIT Press, 2016) and W. Patrick McCray’s Making Art Work: How Cold War Engineers and Artists Forged a New Creative Culture (MIT Press, 2020).


A Spy Satellite You’ve Never Heard of Helped Win the Cold War

The Parcae project revolutionized electronic eavesdropping

13 min read
A model of a satellite with long, flat panels radiating out from each of its four corners.
A Parcae satellite was just a few meters long, but it had four solar panels that extended several meters out from the body of the satellite. The rod emerging from the satellite was a gravity boom, which kept the orbiter’s signal antennas oriented toward Earth.
NRO

In the early 1970s, the Cold War had reached a particularly frigid moment, and U.S. military and intelligence officials had a problem. The Soviet Navy was becoming a global maritime threat—and the United States did not have a global ocean-surveillance capability. Adding to the alarm was the emergence of a new Kirov class of nuclear-powered guided-missile battle cruisers, the largest Soviet vessels yet. For the United States, this situation meant that the perilous equilibrium of mutual assured destruction, MAD, which so far had dissuaded either side from launching a nuclear strike, could tilt in the wrong direction.

It would be up to a top-secret satellite program called Parcae to help keep the Cold War from suddenly toggling to hot. The engineers working on Parcae would have to build the most capable orbiting electronic intelligence system ever.


Freddy the Robot Took the Fall for AI

Its creator lost a power struggle that led to an AI winter in the 1970s

9 min read
Metal diamond-shaped apparatus with wires and a short metal pole coming from the top.

Freddy II, completed in 1973, could be taught to assemble simple models from a heap of parts.

National Museums Scotland

Meet FREDERICK Mark 2, the Friendly Robot for Education, Discussion and Entertainment, the Retrieval of Information, and the Collation of Knowledge, better known as Freddy II. This remarkable robot could put together a simple model car from an assortment of parts dumped in its workspace. Its video-camera eyes and pincer hand identified and sorted the individual pieces before assembling the desired end product. But onlookers had to be patient. Assembly took about 16 hours, and that was after a day or two of “learning” and programming.

Freddy II was completed in 1973 as one of a series of research robots developed by Donald Michie and his team at the University of Edinburgh during the 1960s and ’70s. The robots became the focus of an intense debate over the future of AI in the United Kingdom. Michie eventually lost, his funding was gutted, and the ensuing AI winter set back U.K. research in the field for a decade.

Why were the Freddy I and II robots built?

In 1967, Donald Michie, along with Richard Gregory and Hugh Christopher Longuet-Higgins, founded the Department of Machine Intelligence and Perception at the University of Edinburgh with the near-term goal of developing a semiautomated robot and the longer-term vision of programming “integrated cognitive systems,” or what other people might call intelligent robots. At the time, the U.S. Defense Advanced Research Projects Agency and Japan’s Computer Usage Development Institute were both considering plans to create fully automated factories within a decade. The team at Edinburgh thought they should get in on the action too.

Two years later, Stephen Salter and Harry G. Barrow joined Michie and got to work on Freddy I. Salter devised the hardware while Barrow designed and wrote the software and computer interfacing. The resulting simple robot worked, but it was crude. The AI researcher Jean Hayes (who would marry Michie in 1971) referred to this iteration of Freddy as an “arthritic Lady of Shalott.”

Freddy I consisted of a robotic arm, a camera, a set of wheels, and some bumpers to detect obstacles. Instead of roaming freely, it remained stationary while a small platform moved beneath it. Barrow developed an adaptable program that enabled Freddy I to recognize irregular objects. In 1969, Salter and Barrow published in Machine Intelligence their results, “Design of Low-Cost Equipment for Cognitive Robot Research,” which included suggestions for the next iteration of the robot.

Freddy I, completed in 1969, could recognize objects placed in front of it—in this case, a teacup.University of Edinburgh

More people joined the team to build Freddy Mark 1.5, which they finished in May 1971. Freddy 1.5 was a true robotic hand-eye system. The hand consisted of two vertical, parallel plates that could grip an object and lift it off the platform. The eyes were two cameras: one looking directly down on the platform, and the other mounted obliquely on the truss that suspended the hand over the platform. Freddy 1.5’s world was a 2-meter by 2-meter square platform that moved in an x-y plane.

Freddy 1.5 quickly morphed into Freddy II as the team continued to grow. Improvements included force transducers added to the “wrist” that could deduce the strength of the grip, the weight of the object held, and whether it had collided with an object. But what really set Freddy II apart was its versatile assembly program: The robot could be taught to recognize the shapes of various parts, and then after a day or two of programming, it could assemble simple models. The various steps can be seen in an extended video narrated by Barrow.

The Lighthill Report Takes Down Freddy the Robot

And then what happened? So much. But before I get into all that, let me just say that rarely do I, as a historian, have the luxury of having my subjects clearly articulate the aims of their projects, imagine the future, and then, years later, reflect on their experiences. As a cherry on top of this historian’s delight, the topic at hand—artificial intelligence—also happens to be of current interest to pretty much everyone.

As with many fascinating histories of technology, events turn on a healthy dose of professional bickering. In this case, the disputants were Michie and the applied mathematician James Lighthill, who had drastically different ideas about the direction of robotics research. Lighthill favored applied research, while Michie was more interested in the theoretical and experimental possibilities. Their fight escalated quickly, became public with a televised debate on the BBC, and concluded with the demise of an entire research field in Britain.

A damning report in 1973 by applied mathematician James Lighthill [left] resulted in funding being pulled from the AI and robotics program led by Donald Michie [right]. Left: Chronicle/Alamy; Right: University of Edinburgh

It all started in September 1971, when the British Science Research Council, which distributed public funds for scientific research, commissioned Lighthill to survey the state of academic research in artificial intelligence. The SRC was finding it difficult to make informed funding decisions in AI, given the field’s complexity. It suspected that some AI researchers’ interests were too narrowly focused, while others might be outright charlatans. Lighthill was called in to give the SRC a road map.

No intellectual slouch, Lighthill was the Lucasian Professor of Mathematics at the University of Cambridge, a position also held by Isaac Newton, Charles Babbage, and Stephen Hawking. Lighthill solicited input from scholars in the field and completed his report in March 1972. Officially titled “Artificial Intelligence: A General Survey,” but informally called the Lighthill Report, it divided AI into three broad categories: A, for advanced automation; B, for building robots, but also bridge activities between categories A and C; and C, for computer-based central nervous system research. Lighthill acknowledged some progress in categories A and C, as well as a few disappointments.

Lighthill viewed Category B, though, as a complete failure. “Progress in category B has been even slower and more discouraging,” he wrote, “tending to sap confidence in whether the field of research called AI has any true coherence.” For good measure, he added, “AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.” So very British.

Lighthill concluded his report with his view of the next 25 years in AI. He predicted a “fission of the field of AI research,” with some tempered optimism for achievement in categories A and C but a valley of continued failures in category B. Success would come in fields with clear applications, he argued, but basic research was a lost cause.

The Science Research Council published Lighthill’s report the following year, with responses from N. Stuart Sutherland of the University of Sussex and Roger M. Needham of the University of Cambridge, as well as Michie and his colleague Longuet-Higgins.

Sutherland sought to relabel category B as “basic research in AI” and to have the SRC increase funding for it. Needham mostly supported Lighthill’s conclusions and called for the elimination of the term AI—“a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better.”

Longuet-Higgins focused on his own area of interest, cognitive science, and ended with an ominous warning that any spin-off of advanced automation would be “more likely to inflict multiple injuries on human society,” but he didn’t explain what those might be.

Michie, as the United Kingdom’s academic leader in robots and machine intelligence, understandably saw the Lighthill Report as a direct attack on his research agenda. With his funding at stake, he provided the most critical response, questioning the very foundation of the survey: Did Lighthill talk with any international experts? How did he overcome his own biases? Did he have any sources and references that others could check? He ended with a request for more funding—specifically the purchase of a DEC System 10 (also known as the PDP-10) mainframe computer. According to Michie, if his plan were followed, Britain would be internationally competitive in AI by the end of the decade.

After Michie’s funding was cut, the many researchers affiliated with his bustling lab lost their jobs. University of Edinburgh

This whole affair might have remained an academic dispute, but then the BBC decided to include a debate between Lighthill and a panel of experts as part of its “Controversy” TV series. “Controversy” was an experiment to engage the public in science. On 9 May 1973, an interested but nonspecialist audience filled the auditorium at the Royal Institution in London to hear the debate.

Lighthill started with a review of his report, explaining the differences he saw between automation and what he called “the mirage” of general-purpose robots. Michie responded with a short film of Freddy II assembling a model, explaining how the robot processes information. Michie argued that AI is a subject with its own purposes, its own criteria, and its own professional standards.

After a brief back and forth between Lighthill and Michie, the show’s host turned to the other panelists: John McCarthy, a professor of computer science at Stanford University, and Richard Gregory, a professor in the department of anatomy at the University of Bristol who had been Michie’s colleague at Edinburgh. McCarthy, who coined the term artificial intelligence in 1955, supported Michie’s position that AI should be its own area of research, not simply a bridge between automation and a robot that mimics a human brain. Gregory described how the work of Michie and McCarthy had influenced the field of psychology.

You can watch the debate or read a transcript.

A Look Back at the Lighthill Report

Despite international support from the AI community, though, the SRC sided with Lighthill and gutted funding for AI and robotics; Michie had lost. Michie’s bustling lab went from being an international center of research to just Michie, a technician, and an administrative assistant. The loss ushered in the first British AI winter, with the United Kingdom making little progress in the field for a decade.

For his part, Michie pivoted and recovered. He decommissioned Freddy II in 1980, at which point it moved to the Royal Museum of Scotland (now the National Museum of Scotland), and he replaced it with a Unimation PUMA robot.

In 1983, Michie founded the Turing Institute in Glasgow, an AI lab that worked with industry on both basic and applied research. The year before, he had written Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach). Michie intended it as intellectual musings that he hoped scientists would read, perhaps on the weekend, to help them get beyond the pursuits of the workweek. The book is wide-ranging, covering his three decades of work.

In the introduction to the chapters covering Freddy and the aftermath of the Lighthill report, Michie wrote, perhaps with an eye toward history:

“Work of excellence by talented young people was stigmatised as bad science and the experiment killed in mid-trajectory. This destruction of a co-operative human mechanism and of the careful craft of many hands is elsewhere described as a mishap. But to speak plainly, it was an outrage. In some later time when the values and methods of science have further expanded, and those adversary politics have contracted, it will be seen as such.”

History has indeed rendered judgment on the debate and the Lighthill Report. In 2019, for example, computer scientist Maarten van Emden, a colleague of Michie’s, reflected on the demise of the Freddy project with these choice words for Lighthill: “a pompous idiot who lent himself to produce a flaky report to serve as a blatantly inadequate cover for a hatchet job.”

And in a March 2024 post on GitHub, the blockchain entrepreneur Jeffrey Emanuel thoughtfully dissected Lighthill’s comments and the debate itself. Of Lighthill, he wrote, “I think we can all learn a very valuable lesson from this episode about the dangers of overconfidence and the importance of keeping an open mind. The fact that such a brilliant and learned person could be so confidently wrong about something so important should give us pause.”

Arguably, both Lighthill and Michie correctly predicted certain aspects of the AI future while failing to anticipate others. On the surface, the report and the debate could be described as simply about funding. But it was also more fundamentally about the role of academic research in shaping science and engineering and, by extension, society. Ideally, universities can support both applied research and more theoretical work. When funds are limited, though, choices are made. Lighthill chose applied automation as the future, leaving research in AI and machine intelligence in the cold.

It helps to take the long view. Over the decades, AI research has cycled through several periods of spring and winter, boom and bust. We’re currently in another AI boom. Is this time different? No one can be certain what lies just over the horizon, of course. That very uncertainty is, I think, the best argument for supporting people to experiment and conduct research into fundamental questions, so that they may help all of us to dream up the next big thing.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the May 2025 print issue as “This Robot Was the Fall Guy for British AI.”

References

Donald Michie’s lab regularly published articles on the group’s progress, especially in Machine Intelligence, a journal founded by Michie.

The Lighthill Report and recordings of the debate are both available in their entirety online—primary sources that capture the intensity of the moment.

In 2009, a group of alumni from Michie’s Edinburgh lab, including Harry Barrow and Pat Fothergill (formerly Ambler), created a website to share their memories of working on Freddy. The site offers great firsthand accounts of the development of the robot. Unfortunately for the historian, they didn’t explore the lasting effects of the experience. A decade later, though, Maarten van Emden did, in his 2019 article “Reflecting Back on the Lighthill Affair,” in the IEEE Annals of the History of Computing.

Beyond his academic articles, Michie was a prolific author. Two collections of essays I found particularly useful are On Machine Intelligence (John Wiley & Sons, 1974) and Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach, 1982).

Jon Agar’s 2020 article “What Is Science for? The Lighthill Report on Artificial Intelligence Reinterpreted” and Jeffrey Emanuel’s GitHub post offer historical interpretations of this mostly forgotten blip in the history of robotics and artificial intelligence.


The Forgotten Story of How IBM Invented the Automated Fab

Fifty years ago, a brash middle manager had a vision: a chip in a day

15 min read
A white line diagram against a blue background shows the basic layout of the Project SWIFT chip fabrication process.

The Project SWIFT fabrication line was based on sectors, as shown in this patent diagram of the system from 1973. Each sector contained in an enclosure all of the wafer-processing equipment needed to accomplish a segment of the fabrication process between lithographic-pattern exposures.

IBM/U.S. Patent and Trademark Office

In 1970, Bill Harding envisioned a fully automated wafer-fabrication line that would produce integrated circuits in less than one day. Not only was such a goal gutsy 54 years ago, but it would be bold even in today’s billion-dollar fabs, where the fabrication time of an advanced IC is measured in weeks, not days. Back then, ICs, such as random-access memory chips, were typically produced in a monthlong stop-and-go march through dozens of manual work stations.

At the time, Harding was the manager of IBM’s Manufacturing Research group, in East Fishkill, N.Y. The project he would lead to make his vision a reality, all but unknown today, was called Project SWIFT. Such a short turnaround time required a level of automation that could be achieved only through a paradigm shift in the design of integrated-circuit manufacturing lines. Harding and his team accomplished it, making advances that would eventually be reflected throughout the global semiconductor industry. Many of SWIFT’s groundbreaking innovations are now commonplace in today’s highly automated chip fabrication plants, but SWIFT’s incredibly short turnaround time has never been equaled.


Bell Labs Turns 100, Plans to Leave Its Old Headquarters

The lab will relocate to a modern facility elsewhere in New Jersey

4 min read
An aerial view of Nokia Bell Labs' campus. The building has east and west wings, and a lush forest behind the property.

Nokia Bell Labs will soon be moving on from its lengthy tenure at its Murray Hill, N.J. campus.

Nokia Bell Labs

This year, Bell Labs celebrates its one-hundredth birthday. A centennial celebration held last week at the Murray Hill, N.J., campus honored the lab’s impressive technological history with talks, panels, demos, and more than a half dozen gracefully aging Nobel laureates.

During those 100 years, Bell Labs scientists invented the transistor; laid down the theoretical grounding for the digital age; discovered radio astronomy, which led to the first evidence for the big bang theory; contributed to the invention of the laser; developed the Unix operating system; and invented the charge-coupled device (CCD) camera. These and many more scientific and technological contributions have earned Bell Labs 10 Nobel Prizes and five Turing Awards.

“I normally tell people this is the ‘Bell Labs invented everything’ tour,” said Nokia Bell Labs archivist Ed Eckert as he led a tour through the lab’s history exhibit.

The lab is smaller than it once was. The main campus in Murray Hill, N.J., seems like a bit of a ghost town, with empty cubicles and offices lining the halls. Now it’s planning a move to a smaller facility in New Brunswick, N.J., sometime in 2027. In its heyday, Bell Labs boasted around 6,000 workers at the Murray Hill location. Although that number has now dwindled to about 1,000, more employees work at other locations around the world.

The Many Accomplishments of Bell Labs

Despite its somewhat diminished size, Bell Labs, now owned by Nokia, is alive and kicking.

“As Nokia Bell Labs, we have a dual mission,” says Bell Labs president Peter Vetter. “On the one hand, we need to support the longevity of the core business. That is networks, mobile networks, optical networks, the networking at large, security, device research, ASICs, optical components that support that network system. And then we also have the second part of the mission, which is to help the company grow into new areas.”

Some of the new areas for growth were represented in live demonstrations at the centennial.

A team at Bell Labs is working on establishing the first cellular network on the moon. In February, Intuitive Machines sent up its second lunar mission, Athena, with Bell Labs’ technology on board. The team fit two full cellular networks into a briefcase-size box, the most compact networking system ever made. This cell network was self-deploying: Nobody on Earth needed to tell it what to do. The lunar lander tipped on its side upon landing and quickly went offline due to lack of solar power, but Bell Labs’ networking module had enough time to power up and transmit data back to Earth.

Another Bell Labs group is focused on monitoring the world’s vast network of undersea fiber-optic cables. Undersea cables are subject to interruptions, whether from adversarial sabotage, natural events such as earthquakes and tsunamis, or fishing nets and ship anchors. The team wants to turn these cables into a sensor network, capable of monitoring the environment around a cable for possible damage. The researchers have developed a real-time technique for monitoring slight changes in cable length that’s so sensitive the lab-based demo was able to pick up tiny vibrations from the presenter’s speaking voice. This technique can pin changes down to a 10-kilometer interval of cable, greatly simplifying the search for affected regions.

Nokia is taking the path less traveled when it comes to quantum computing, pursuing so-called topological quantum bits. These qubits, if realized, would be much more robust to noise than those of other approaches and more readily amenable to scaling. However, building even a single qubit of this kind has proved elusive. Nokia Bell Labs’ Robert Willett has been at it since his graduate work in 1988, and the team expects to demonstrate the first NOT gate with this architecture later this year.

Beam-steering antennas for point-to-point fixed wireless are normally made on printed circuit boards. But as the world moves to higher frequencies, toward 6G, conventional printed-circuit-board materials are no longer cutting it—the signal loss makes them economically unviable. That’s why a team at Nokia Bell Labs has developed a way to print circuit boards on glass instead. The result is a small glass chip that has 64 integrated circuits on one side and the antenna array on the other. A 100-gigahertz link using the tech was deployed at the Paris Olympics in 2024, and a commercial product is on the road map for 2027.

Mining, particularly autonomous mining—which avoids putting humans in harm’s way—relies heavily on networking. That’s why Nokia has entered the mining business, developing smart digital-twin technology that models the mine and the autonomous trucks that work on it. The company’s robo-truck system features two cellular modems, three Wi-Fi cards, and 12 Ethernet ports. The system collects different types of sensor data and correlates them on a virtual map of the mine (the digital twin). Then it uses AI to suggest necessary maintenance and to optimize scheduling.

The lab is also dipping into AI. One team is working on integrating large language models with robots for industrial applications. These robots have access to a digital-twin model of the space they are in and have a semantic representation of certain objects in their surroundings. In a demo, a robot was verbally asked to identify missing boxes in a rack. The robot successfully pointed out which box wasn’t found in its intended place, and when prompted, it traveled to the storage area and identified the replacement. The key is to build robots that can “reason about the physical world,” says Matthew Andrews, a researcher in the AI lab. A test system will be deployed in a warehouse in the United Arab Emirates in the next six months.
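The article doesn’t describe Bell Labs’ actual software stack, but the general idea of pairing a language model with a semantic digital twin can be made concrete with a purely hypothetical sketch. In the Python below, a query like “which box is missing from the rack?” is answered by comparing the twin’s expected inventory against what the robot’s camera currently reports; every name and data value is invented for illustration, and the language model is reduced to a comment.

# Hypothetical sketch (not Bell Labs' software): answer "which box is missing
# from the rack?" by comparing a semantic digital twin of the warehouse with
# the labels the robot's camera currently observes.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TwinObject:
    name: str      # semantic label, e.g. "box_B"
    location: str  # where the digital twin expects the object to be

# The digital twin: the expected state of the space (invented example data).
digital_twin = [
    TwinObject("box_A", "rack_1/slot_1"),
    TwinObject("box_B", "rack_1/slot_2"),
    TwinObject("box_C", "rack_1/slot_3"),
    TwinObject("box_B_spare", "storage_area/bin_7"),
]

def find_missing(observed_labels: set[str], rack_prefix: str = "rack_1") -> list[TwinObject]:
    """Objects the twin expects on the rack but the robot does not observe."""
    return [obj for obj in digital_twin
            if obj.location.startswith(rack_prefix) and obj.name not in observed_labels]

def find_replacement(missing: TwinObject) -> TwinObject | None:
    """Look in the storage area for a spare with a matching semantic label."""
    for obj in digital_twin:
        if obj.location.startswith("storage_area") and obj.name.startswith(missing.name):
            return obj
    return None

# In the demo, a language model would translate the spoken request into calls
# like these; here we invoke them directly with a simulated camera observation.
observed = {"box_A", "box_C"}  # box_B is not seen on the rack
for gap in find_missing(observed):
    spare = find_replacement(gap)
    print(f"{gap.name} is missing from {gap.location};"
          f" replacement at {spare.location if spare else 'unknown'}")

Run as written, the sketch reports that box_B is missing from rack_1/slot_2 and that a spare sits in storage_area/bin_7, loosely mirroring the robot’s behavior in the demo described above.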

Despite impressive scientific demonstrations, there was an air of apprehension about the event. In a panel discussion about the future of innovation, Princeton engineering dean Andrea Goldsmith said, “I’ve never been more worried about the innovation ecosystem in the U.S.” Former Google CEO Eric Schmidt said in a keynote that “the current administration seems to be trying to destroy university R&D.” Nevertheless, Schmidt and others expressed optimism about the future of innovation at Bell Labs and the United States more generally. “We will win, because we are right, and R&D is the foundation of economic growth,” he said.
