IEEE Spectrum
- Video Friday: High Mobility Logistics, by Evan Ackerman, 25 April 2025 at 16:00
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today's videos!

Throughout the past year, LEVA has been designed from the ground up as a novel robot to transport payloads. Although the use of robotics is widespread in logistics, few solutions offer the capability to efficiently transport payloads in both controlled and unstructured environments. Four-legged robots are ideal for navigating any environment a human can, yet few have the features to autonomously move payloads. This is where LEVA shines. By combining wheels (a means of locomotion ideally suited for fast and precise motion on flat surfaces) and legs (which are perfect for traversing any terrain that humans can), LEVA strikes a balance that makes it highly versatile.

[ LEVA ]

You probably heard about this humanoid robot half-marathon in China, because it got a lot of media attention, which I presume was the goal. And for those of us who remember when Asimo running was a big deal, marathon running is still impressive in some sense; it's just hard to connect that to these robots doing anything practical, you know?

[ NBC ]

A robot navigating an outdoor environment with no prior knowledge of the space must rely on its local sensing to perceive its surroundings and plan. This can come in the form of a local metric map or a local policy with some fixed horizon. Beyond that, there is a fog of unknown space marked with some fixed cost. In this work, we make a key observation that long-range navigation only necessitates identifying good frontier directions for planning instead of full map knowledge. To this end, we propose the Long Range Navigator (LRN), which learns an intermediate affordance representation mapping high-dimensional camera images to affordable frontiers for planning, and then optimizes for maximum alignment with the desired goal. Through extensive off-road experiments on Spot and a Big Vehicle, we find that augmenting existing navigation stacks with LRN reduces human interventions at test time and leads to faster decision making, indicating the relevance of LRN.

[ LRN ]

Goby is a compact, capable, programmable, and low-cost robot that lets you uncover miniature worlds from its tiny perspective. On Kickstarter now, for an absurdly cheap $80.

[ Kickstarter ]

Thanks, Rich!

HEBI robots demonstrated inchworm mobility during the Innovation Faire of the FIRST Robotics World Championships in Houston.

[ HEBI ]

Thanks, Andrew!

Happy Easter from Flexiv!
[ Flexiv ]

We are excited to present our proprietary reinforcement learning algorithm, refined through extensive simulations and vast training data, enabling our full-scale humanoid robot, Adam, to master human-like locomotion. Unlike model-based gait control, our RL-driven approach grants Adam exceptional adaptability. On challenging terrains like uneven surfaces, Adam seamlessly adjusts stride, pace, and balance in real time, ensuring stable, natural movement while boosting efficiency and safety. The algorithm also delivers fluid, graceful motion with smooth joint coordination, minimizing mechanical wear, extending operational life, and significantly reducing energy use for enhanced endurance.

[ PNDbotics ]

Inside the GRASP Lab - Dr. Michael Posa and DAIR Lab. Our research centers on control, learning, planning, and analysis of robots as they interact with the world. Whether a robot is assisting within the home or operating in a manufacturing plant, the fundamental promise of robotics requires touching and affecting a complex environment in a safe and controlled fashion. We are focused on developing computationally tractable and data-efficient algorithms that enable robots to operate both dynamically and safely as they quickly maneuver through and interact with their environments.

[ DAIR Lab ]

I will never understand why robotics companies feel the need to add the sounds of sick actuators when their robots move.

[ Kepler ]

Join Matt Trossen, founder of Trossen Robotics, on a time-traveling teardown through the evolution of our robotic arms! In this deep dive, Matt unboxes the ghosts of robots past—sharing behind-the-scenes stories, bold design decisions, lessons learned, and how the industry itself has shifted gears.

[ Trossen ]

This week's CMU RI seminar is a retro edition (2008!) from Charlie Kemp, previously of the Healthcare Robotics Lab at Georgia Tech and now at Hello Robot.

[ CMU RI ]

This week's actual CMU RI seminar is from a much more modern version of Charlie Kemp. When I started in robotics, my goal was to help robots emulate humans. Yet as my lab worked with people with mobility impairments, my notions of success changed. For assistive applications, emulation of humans is less important than ease of use and usefulness. Helping with seemingly simple tasks, such as scratching an itch or picking up a dropped object, can make a meaningful difference in a person's life. Even full autonomy can be undesirable, since actively directing a robot can provide a sense of independence and agency. Overall, many benefits of robotic assistance derive from nonhuman aspects of robots, such as being tireless, directly controllable, and free of social characteristics that can inhibit use. While technical challenges abound for home robots that attempt to emulate humans, I will provide evidence that human-scale mobile manipulators could benefit people with mobility impairments at home in the near future. I will describe work from my lab and Hello Robot that illustrates opportunities for valued assistance at home, including supporting activities of daily living, leading exercise games, and strengthening social connections. I will also present recent progress by Hello Robot toward unsupervised, daily in-home use by a person with severe mobility impairments.

[ CMU RI ]
- IEEE Standards Development Pioneer Koepfinger Dies at 99, by Amanda Davis, 24 April 2025 at 18:00
Joseph Koepfinger
Developed standards for electric power systems
Life Fellow, 99; died 6 January

Koepfinger was an active volunteer with the American Institute of Electrical Engineers (AIEE), an IEEE predecessor society. He made significant contributions to the fields of surge protection and electric power engineering. In the early 1950s he took part in a three-year task force studying distribution circuit reliability as a member of AIEE's surge protective devices committee (SPDC), according to his ArresterWorks biography. In 1955 he helped revise the AIEE Standard 32 on neutral grounding devices and was part of a team that developed guidelines for power transformer loadings. In the 1960s he became chair of the SPDC and initiated efforts to develop standards for low-voltage surge protectors. Later, Koepfinger served on the IEEE Standards Association Board and contributed to the development of IEEE standards for lightning arresters and surge protectors. He received several awards for his work in standards development, including the IEEE Standards Association's first Lifetime Achievement Award in 2011 and the 1989 IEEE Charles Proteus Steinmetz Award. In 2008 he was inducted into the Surge Protection Hall of Fame, a tribute webpage honoring engineers who have contributed to the field. Koepfinger had a 60-year career at Duquesne Light, in Pittsburgh, retiring in 2000 as director of its system studies and research department. After retirement, he continued to serve as a technical advisor for the International Electrotechnical Commission, a standards organization. He received bachelor's and master's degrees in electrical engineering from the University of Pittsburgh in 1949 and 1953.

Bruce E. Arnold
Electrical engineer
Life member, 81; died 16 January

Arnold was an electrical engineer and computer support specialist. He began his career in 1967 at sewing machine manufacturer Singer in New York City. As supervisor of electrical design and electromechanical equipment, he developed new electronic and motor package subsystems for high-volume consumer sewing machines. Arnold left Singer in 1983 to join Revlon, a New York City–based cosmetics company, as director of electrical engineering. There he designed electronic and pneumatic systems for automated manufacturing and robotic automation. Ten years later he changed careers, becoming a computer support specialist at Degussa Corp., a chemical manufacturing company in Piscataway, N.J. Degussa is now part of Evonik. Arnold retired in 2006 and became a consultant. He received a bachelor's degree in electrical engineering in 1969 from the Newark College of Engineering (now the New Jersey Institute of Technology). He earned a master's degree in EE in 1975 from NJIT.

William Hayes Kersting
Electrical engineering professor
Life Fellow, 88; died 7 January

Kersting taught electrical engineering for 40 years at his alma mater, New Mexico State University, in Las Cruces. During his tenure, he established the university's electric utility management program. He published more than 70 academic research articles. He also wrote Distribution System Modeling and Analysis, a textbook that is widely used in graduate programs worldwide. He was an active volunteer of the IEEE Power & Energy Society, serving on its education committee and distribution systems analysis subcommittee. Kersting received the Edison Electric Institute's 1979 Power Engineering Education Award.

Richard A. Olsen
Human factors engineer
Life member, 90; died 7 November

Olsen made significant contributions to aerospace defense technologies and transportation safety. He specialized in human factors engineering, a field that focuses on designing products, systems, and environments that are safe, efficient, and easy for people to use. While working as a human factors engineer at the Lockheed Missiles and Space Co. in Sunnyvale, Calif., he contributed to early guidelines for computer-human interaction. He helped build the first-generation Image Data Exploitation (IDEX) system, used by intelligence agencies and the military to analyze digital imagery. After receiving a bachelor's degree in physics in 1955 from Union College, in Schenectady, N.Y., he enlisted in the U.S. Navy. Olsen attended the Navy's Officer Candidate School, in Newport, R.I., before being assigned to a destroyer ship in December 1956. He left active duty three years later. In 1960 he joined Hughes Aircraft Co., a defense contractor in Fullerton, Calif., as a field engineer. He helped develop radar systems and worked at the Navy Electronics Laboratory's Fleet Anti-Air Warfare Training Center, in San Diego, on the USS Enterprise and USS Mahan. He later was promoted to lead field engineer and worked at the Mare Island Naval Shipyard, in Vallejo, Calif. Olsen moved to Pennsylvania in 1964 to attend graduate school at Pennsylvania State University in State College. After earning master's and doctoral degrees in experimental psychology in 1966 and 1970, he joined Penn State's Larson Transportation Institute as director of its human factors research program. Four years later, he became an assistant professor at the university's industrial and management systems engineering department. He left Penn State in 1980 to join Lockheed. After retiring from the company in 1990, he served as an expert witness for 14 years, testifying in several hundred accident-investigation cases. He was a member of the Association for the Advancement of Automotive Medicine, the National Academy of Engineering's Transportation Research Board, and SAE (the Society of Automotive Engineers). He was a Fellow of the Human Factors and Ergonomics Society and edited one of its newsletters, The Forvm.

Jo Edward Davidson
Communications engineer
Life senior member, 87; died 24 April 2024

Davidson's work as an electrical engineer impacted several key communications technologies, including early GPS development. He was instrumental in installing cellular networks in Argentina, Nigeria, and the Philippines. He wrote about his career in his memoir, Far From the Flagpole: An Electrical Engineer Tells His Story. He served in the U.S. Air Force from 1959 to 1965, attaining the rank of second lieutenant. After he was discharged, he worked at several companies including Eastman Kodak, Scientific Atlanta, and BellSouth International. He contributed to several satellite communications and network projects while working at Alcatel and Globalstar, both in Memphis. He retired from Globalstar in 2000 as director of satellite network systems. Davidson received a bachelor's degree in engineering in 1963 from Arizona State University, in Tempe.
- Intel AI Trick Spots Hidden Flaws in Data-Center Chips, by Katherine Bourzac, 24 April 2025 at 14:00
For high-performance chips in massive data centers, math can be the enemy. Thanks to the sheer scale of calculations going on in hyperscale data centers, operating round the clock with millions of nodes and vast amounts of silicon, extremely uncommon errors appear. It's simply statistics. These rare, "silent" data errors don't show up during conventional quality-control screenings—even when companies spend hours looking for them.

This month at the IEEE International Reliability Physics Symposium in Monterey, Calif., Intel engineers described a technique that uses reinforcement learning to uncover more silent data errors faster. The company is using the machine learning method to ensure the quality of its Xeon processors.

When an error happens in a data center, operators can either take a node down and replace it, or use the flawed system for lower-stakes computing, says Manu Shamsa, an electrical engineer at Intel's Chandler, Ariz., campus. But it would be much better if errors could be detected earlier. Ideally they'd be caught before a chip is incorporated into a computer system, when it's still possible to make design or manufacturing corrections that prevent the errors from recurring.

Finding these flaws is not so easy. Shamsa says engineers have been so baffled by them that they joked they must be due to spooky action at a distance, Einstein's phrase for quantum entanglement. But there's nothing spooky about them, and Shamsa has spent years characterizing them. In a paper presented at the same conference last year, his team provides a whole catalog of the causes of these errors.

Most are due to infinitesimal variations in manufacturing. Even if each of the billions of transistors on a chip is functional, they are not completely identical to one another. Subtle differences in how a given transistor responds to changes in temperature, voltage, or frequency, for instance, can lead to an error. Those subtleties are much more likely to crop up in huge data centers because of the pace of computing and the vast amount of silicon involved. "In a laptop, you won't notice any errors. In data centers, with really dense nodes, there are high chances the stars will align and an error will occur," Shamsa says.

Some errors could crop up only after a chip has been installed in a data center and has been operating for months. Small variations in the properties of transistors can cause them to degrade over time. One such silent error Shamsa has found is related to electrical resistance. A transistor that operates properly at first, and passes standard tests that look for shorts, can degrade with use so that it becomes more resistant. "You're thinking everything is fine, but underneath, an error is causing a wrong decision," Shamsa says. Over time, thanks to a slight weakness in a single transistor, "one plus one goes to three, silently, until you see the impact," Shamsa says.

Machine Learning to Spot Defects

The new technique builds on an existing set of methods for detecting silent errors, called Eigen tests. These tests make the chip do hard math problems, repeatedly over a period of time, in the hopes of making silent errors apparent. They involve operations on different sizes of matrices filled with random data. There are a large number of Eigen tests.
Running them all would take an impractical amount of time, so chipmakers use a randomized approach to generate a manageable set of them. This saves time but leaves errors undetected. "There's no principle to guide the selection of inputs," Shamsa says. He wanted to find a way to guide the selection so that a relatively small number of tests could turn up more errors.

The Intel team used reinforcement learning to develop tests for the part of its Xeon CPU chip that performs matrix multiplication using what are called fused multiply-add (FMA) instructions. Shamsa says they chose the FMA region because it takes up a relatively large area of the chip, making it more vulnerable to potential silent errors—more silicon, more problems. What's more, flaws in this part of a chip can generate electromagnetic fields that affect other parts of the system. And because the FMA is turned off to save power when it's not in use, testing it involves repeatedly powering it up and down, potentially activating hidden defects that otherwise would not appear in standard tests.

During each step of its training, the reinforcement-learning program selects different tests for the potentially defective chip. Each error it detects is treated as a reward, and over time the agent learns to select the tests that maximize the chances of detecting errors. After about 500 testing cycles, the algorithm learned which set of Eigen tests optimized the error-detection rate for the FMA region. Shamsa says this technique is five times as likely to detect a defect as randomized Eigen testing.

Eigen tests are open source, part of the openDCDiag project for data centers. So other users should be able to use reinforcement learning to adapt these tests for their own systems, he says.

To a certain extent, silent, subtle flaws are an unavoidable part of the manufacturing process—absolute perfection and uniformity remain out of reach. But Shamsa says Intel is trying to use this research to learn to find the precursors of silent data errors faster. He's investigating whether there are red flags that could provide an early warning of future errors, and whether it's possible to change chip recipes or designs to manage them.
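The training loop Shamsa describes maps naturally onto a simple reinforcement-learning setup. The sketch below is a minimal, hypothetical illustration, not Intel's tooling: an epsilon-greedy bandit that learns which of a handful of matrix stress tests most often exposes a simulated silent error. The test pool, the fault model, and the reward scheme are all assumptions chosen for clarity; the only parallels to the article are that each detected error counts as a reward and that roughly 500 test cycles are run.

```python
# Illustrative sketch only: a bandit-style RL loop that learns which matrix
# stress tests ("Eigen-style" tests) most often expose a simulated silent error.
# Test parameters, fault model, and reward shaping are assumptions.
import random
import numpy as np

# A "test" here is just a matrix size; real Eigen tests vary many more knobs.
TEST_SIZES = [64, 128, 256, 512, 1024]

def run_test(size: int, rng: random.Random) -> bool:
    """Run one matrix-multiply check; return True if a silent error was caught.

    Hypothetical fault model: larger matrices exercise the (simulated) weak
    FMA path more often, so they are more likely to reveal the defect.
    """
    a = np.random.rand(size, size)
    b = np.random.rand(size, size)
    reference = a @ b
    result = a @ b
    # Simulate a rare single-element corruption whose probability grows with size.
    if rng.random() < size / 20000.0:
        result[rng.randrange(size), rng.randrange(size)] += 1e-3
    return not np.allclose(reference, result)

def train(cycles: int = 500, epsilon: float = 0.1, seed: int = 0) -> dict:
    """Epsilon-greedy bandit: reward = 1 when a test run catches an error."""
    rng = random.Random(seed)
    counts = {s: 0 for s in TEST_SIZES}
    values = {s: 1.0 for s in TEST_SIZES}   # optimistic start: try every test early
    for _ in range(cycles):
        if rng.random() < epsilon:
            size = rng.choice(TEST_SIZES)            # explore
        else:
            size = max(values, key=values.get)       # exploit best test so far
        reward = 1.0 if run_test(size, rng) else 0.0
        counts[size] += 1
        values[size] += (reward - values[size]) / counts[size]  # running mean
    return values

if __name__ == "__main__":
    for size, value in sorted(train().items()):
        print(f"{size:>5}x{size:<5} estimated detection rate: {value:.3f}")
```

In this toy version, the agent gradually concentrates its test budget on the matrix sizes that pay off, which is the same intuition behind steering a limited set of Eigen tests toward the FMA region's weak spots.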
- These Companies Were the Patent Powerhouses of 2024, by Kohava Mendelsohn, 23 April 2025 at 19:30
In 2006, IEEE Spectrum ranked patenting powerhouses in our first annual patent survey. The survey, conducted by the research firm 1790 Analytics, examined the number and influence of U.S. patents generated by more than 1,000 organizations. Semiconductor manufacturer Micron Technology came out on top at the time, with IBM, Hewlett-Packard, Intel, and Broadcom rounding out the top five.

Nearly 20 years later, every company on the top 10 list has been usurped. Once-mighty companies have fallen in the ranks, others have come and gone, and the top spots are largely filled by today's Big Tech companies. In place of semiconductors and computer systems, the top categories in this year's scorecard are all about Internet services—the category labeled "Telecom and Internet"—and consumer electronics.

Digging into the data reveals Amazon's might, the hidden power of subsidiaries, and which countries are producing U.S. patents—it's not just the United States. You can explore it all for yourself in the interactive graphic below. Simply click on a category to see which companies produced the most powerful patent portfolios in 2024.

The rankings are based on Pipeline Power, a metric calculated by 1790 Analytics that combines several elements of an organization's patent portfolio into one number. In addition to the number of patents granted in a given year, this metric takes into account four variables representing the quality and impact of those patents. (More details on the calculations are below in the Methodology section.)

Amazon Tops the List

At a glance, it's clear that Amazon leads with the highest patent power. The tech giant has produced a more influential patent portfolio than entire industry categories. However, Amazon didn't produce the largest number of patents in 2024. That achievement goes to Samsung; with more than 9,000 patents, the electronics company was granted more than twice the number produced by the second most prolific company, Taiwan Semiconductor Manufacturing Co. (TSMC). Meanwhile, Amazon ranked 20th in terms of the raw number of patents. So how does it have the most patent power? In short, because its patents tend to be cited more frequently, and by a variety of other patents. Similarly, Snap, the company that owns Snapchat and Bitmoji, ranks above more frequent filers Qualcomm and Google, despite being granted just 770 patents last year.

Hidden Power in Subsidiaries

Several companies have greater patent power than is immediately visible when you consider their subsidiaries. For example, Alphabet is categorized as a conglomerate with a patent power of about 4,056. But it has two subsidiaries, Google and Waymo, both in the Telecom and Internet category. Adding in its subsidiaries, Alphabet's patent power would roughly double, achieving a score higher than those of all the other conglomerates combined.

Defense contractor RTX (formerly Raytheon) has the most subsidiaries included in the data collection. Seven companies listed in the Aerospace category are owned by RTX, in addition to RTX itself. Adding their combined patent power to RTX's accounts for more than two-thirds of the combined patent power of all Aerospace companies. RTX is the only company with more than two subsidiaries included in the survey. Sony, Johnson & Johnson, and GE Vernova have two each, and several other companies have one.

In Consumer Electronics, Apple leads the pack by a large margin, with about 40 percent of the category's patent power.
Samsung has patents filed both under its primary company, Samsung Electronics, and under a subsidiary that focuses on display technology. These two companies take second and third place in the category. But even grouped together, their combined patent power is well behind Apple's.

Western Digital's Sandisk Corp., in the Computer Peripherals category, is the subsidiary with the largest patent power for 2024. Sandisk has a score of about 5,087, more than triple that of its parent company. In fact, Sandisk's successes in flash storage led to its launch as an independent public company in February 2025.

The AI Boom and Patents

Splunk, owned by Cisco, tops the Computer Software category, beating out Microsoft, Oracle, and Intuit. The company specializes in collecting and organizing large amounts of machine-generated data. While Splunk is less of a household name than Microsoft, its research into managing data generated by AI has helped launch it to the top of its category. Overall, the number of AI patents filed has grown over the past few years, according to 1790 Analytics. Although many of these were submitted by organizations in the Computer Software category, some AI powerhouses fall under other categories, like Conglomerates.

Which Countries Are Producing U.S. Patents?

The companies represented here include only those with at least 25 patents granted by the U.S. Patent and Trademark Office in 2024. Out of 247 organizations that met this requirement, 148 were from the United States. So where else are companies filing U.S. patents from?

Japan comes in second place, with 24 companies. These companies span a range of categories, including Consumer Electronics, Computer Peripherals, and Semiconductors. The Japanese company with the highest patent power is Semiconductor Energy Laboratory, which takes the top spot in its category. After Japan, Germany and South Korea are tied with nine companies each, and Taiwan has eight listed. The South Korean companies are nearly all in the Consumer Electronics or Automotive and Transportation categories, the Taiwanese companies are mostly in Computer Peripherals, and the German companies span a wider range of categories. Following these, France and China both have seven companies listed, and many more countries have one or two companies listed.

Big Names, Small Patent Power

Patent power is but one measure of a company's impact. Some well-known companies have surprisingly small patent power compared with their cultural or market influence. Microsoft, for example, ranks 31st in patent power, despite being one of the most valuable companies by market capitalization. Meta also falls surprisingly low, considering the company's hefty research and development budget. In 2024, the social media mammoth spent US $43 billion on R&D, an amount surpassed only by Alphabet, according to one survey. (Notably, that survey omitted Amazon because it doesn't report R&D expenses as a separate line item; Amazon's R&D spending likely exceeds Alphabet's.)

Looking at the strength of a company's patent portfolio doesn't replace these other metrics, but it does provide another view of an organization's impact. Amazon, Apple, Snap, Samsung, and Qualcomm, in that order, are 2024's victors. What companies will rise to the top in 2025?

Methodology

The scorecards are based on patents issued by the U.S. Patent and Trademark Office.
Focusing on a single patent system helps 1790 Analytics more easily track which previous inventions are referenced by patents, because citation practices vary across different patent systems. Note that focusing on U.S. patents does not restrict the analysis to U.S.-based innovation; approximately half of all U.S. patents are granted to organizations based in other countries.

In constructing the scorecards, 1790 Analytics measured the following four metrics for each organization:

Growth measures whether an organization produced more patents in 2024 than its annual average over the previous five years. For example, if an organization averaged 100 patents per year between 2019 and 2023 and was granted 125 patents last year, its Pipeline Growth for 2024 would be 1.25.

Impact measures how frequently all patents issued in 2024 cite a specific organization's patents from the past five years. It is calculated by counting how many times that organization's patents were cited and dividing that number by the average number of citations for all patents from the same technology. That number is then adjusted to remove high amounts of self-citation: any self-citations beyond 30 percent of an organization's total citations are discounted.

Originality measures the variety in the sources of an organization's 2024 patents. Patents that draw from a wide range of earlier technologies to create something new are deemed more original than patents that make incremental improvements upon the same technology.

Generality measures how varied the 2024 patents are that cite earlier patents from a given organization. It is based on the idea that patents cited by later patents from many different fields tend to have more general application than patents cited only by later patents from the same field.

Together, the total patents produced by an organization multiplied by its growth, impact, originality, and generality define that organization's Pipeline Power. In the patent scorecard, organizations are ranked by their Pipeline Power, accounting for both the quality and quantity of their patents.

Category descriptions

Aerospace: These companies develop and manufacture commercial airplanes, spacecraft, and military aircraft. This includes companies that supply specific parts, like engines.

Automotive and Other Transportation: Companies that develop cars and car parts, including electric and autonomous vehicles and their parts, are included in this category, as are companies involved in making trains and motorcycles.

Biomedical Devices: Firms that manufacture or design systems for medical and health care applications, such as prosthetics, imaging systems, and drug-delivery systems.

Biotechnology and Pharmaceutical: Companies that use genetic engineering or chemical processes to discover and create new drugs and therapeutics.

Computer Software: These companies develop firmware and applications for personal devices and other computing systems.

Computer Hardware: Companies that manufacture computer systems and storage devices.

Computer Peripherals: Companies that manufacture auxiliary devices that connect to computers, such as keyboards and mice, scanners, or printers.

Conglomerates: Organizations in this category don't fit neatly into one category. Many companies patent in several industries but tend to focus on one; others are trickier to label. 3M, for example, makes consumer products, medical devices, and adhesives, along with other products.

Electrical Power and Energy: The patents in this category may belong to companies investing in renewables (like solar and wind power), energy infrastructure, and more.

Consumer Electronics: These companies primarily make devices people use in their everyday lives for fitness, cleaning, entertainment, and more.

Government: Federal agencies, military groups, and national labs all fall into this category. While some may seem to fit well within a single patent category—NASA in aerospace, for example—government agencies frequently patent across a range of industries.

Robotics: With greater demand for automation, the robotics industry is growing. In particular, robots are being used to make manufacturing more efficient, but this category also includes home robots and surgical robots, for example.

Semiconductors: This category includes patent producers that design and manufacture chips, as well as those producing the equipment needed for that manufacturing. The majority of these companies, however, focus on manufacturing chips.

Telecom and Internet: Many of today's tech giants—including Amazon, Google, and Meta—fall under Internet services. This category also includes home Internet and wireless providers.

Universities: Like government entities and conglomerates, universities tend to produce patents across various industries.
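For readers who prefer a formula, the Methodology above can be summarized compactly. The notation below is a restatement of the article's prose in symbols of my own choosing; 1790 Analytics' exact computation may differ in detail:

```latex
\[
\text{Pipeline Power} \;=\; N_{2024}\times \text{Growth}\times \text{Impact}\times \text{Originality}\times \text{Generality},
\qquad
\text{Growth} \;=\; \frac{N_{2024}}{\tfrac{1}{5}\sum_{y=2019}^{2023} N_{y}},
\]
```

where \(N_y\) is the number of U.S. patents granted to the organization in year \(y\). Plugging in the article's example, an organization averaging 100 patents a year over 2019 to 2023 that was granted 125 patents in 2024 has Growth of 125/100 = 1.25.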
- XPrize in Carbon Removal Goes to Enhanced Rock Weathering, by Emily Waltz, 23 April 2025 at 12:00
The XPrize Foundation today announced the winners of its four-year, US $100 million XPrize competition in carbon removal. The contest is one of dozens hosted by the foundation in its 20-year effort to encourage technological development. Contestants in the carbon removal XPrize had to demonstrate ways to pull carbon dioxide from the atmosphere or oceans and sequester it sustainably.

Mati Carbon, a Houston-based startup developing a sequestration technique called enhanced rock weathering, won the grand prize of $50 million. The company spreads crushed basalt on small farms in India and Africa. The silica-rich volcanic rock improves the quality of the soil for the crops but also helps remove carbon dioxide from the air. It does this by reacting with dissolved CO2 in the soil's water, turning it into bicarbonate ions and preventing it from returning to the atmosphere (see the sidebar, "How Does Enhanced Rock Weathering Remove CO2?" for more detail).

More than a dozen organizations globally are developing enhanced rock weathering approaches at an industrial scale, but Mati's tech-heavy verification and software platform caught the XPrize judges' attention. "On the one hand, they're moving rocks around in trucks—that's not very techy. But when we looked under the hood…what we saw was a very impressive data-collection exercise," says Michael Leitch, XPrize's technical lead for the competition.

Mati Carbon's Data-Driven Carbon Removal

Mati monitors each farm's soil both before and after the basalt treatment to verify how much carbon is being stored. This verification process involves bringing an inductively coupled plasma mass spectrometer to farms to analyze soil composition and help determine how well the basalt is working. The company also tracks other measures of soil chemistry, performs geotagging to determine the precise location of all measurements, and uses software to track the carbon footprint of transporting and sourcing the basalt.

How Does Enhanced Rock Weathering Remove CO2?

Enhanced rock weathering is a way to accelerate one of the Earth's natural processes for removing carbon dioxide from the atmosphere. It works like this: Carbon dioxide in the air dissolves into rainwater, forming carbonic acid. As rocks are worn away (or weathered) by this slightly acidic water, silicate minerals in the rock dissolve. This releases calcium, magnesium, and other positively charged ions called cations. These cations react with carbonic acid in the water, forming bicarbonate ions. In this bicarbonate form, the carbon can't return to the atmosphere. Eventually the bicarbonate ions wash into the oceans, where the carbon is locked away for thousands of years. Researchers can accelerate this natural rock weathering process by adding finely crushed basalt, olivine, or other silicate rocks to cropland.

All of this must be repeated for thousands of small farms. "The quantity and the density of sampling we're doing today is intense—really it's ridiculous," says Shantanu Agarwal, the CEO and founder of Mati Carbon. "For each field, we're collecting hundreds of data points." The company organizes the data with a proprietary enterprise resource-planning software platform it calls matiC. The company also uses machine learning and analytics software driven by AI to weed out manual errors, keep track of field conditions, and model carbon removal. The goal is to develop a prediction tool that will allow the company to reduce the amount of physical sampling it must do, says Agarwal.
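The sidebar's chemistry can also be written out explicitly. Below is the textbook silicate-weathering pathway, using wollastonite (CaSiO3) as a simplified stand-in for the calcium- and magnesium-bearing minerals in basalt; the stoichiometry is standard geochemistry, not figures from Mati Carbon:

```latex
\begin{align*}
\mathrm{CO_2 + H_2O} &\rightarrow \mathrm{H_2CO_3}
  && \text{(CO$_2$ dissolves in rainwater as carbonic acid)}\\
\mathrm{CaSiO_3 + 2\,H_2CO_3} &\rightarrow \mathrm{Ca^{2+} + 2\,HCO_3^{-} + SiO_2 + H_2O}
  && \text{(the acid weathers the silicate, yielding bicarbonate ions)}
\end{align*}
```

Each unit of dissolved silicate thus ties up two molecules of CO2 as bicarbonate, which eventually washes to the ocean, the long-term storage the sidebar describes.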
Mati Carbon began by spreading basalt on rice paddies in India and has now expanded to Zambia and Tanzania.

Solid monitoring and verification is crucial for carbon-removal companies because their revenue is largely based on selling carbon credits. The only way to build up a viable carbon-credit market is for companies to prove they're actually removing the amount of carbon they say they are. Mati will use the $50 million to expand to more small-farm owners globally. The farmers pay nothing to have the basalt spread on their crops; all of Mati's revenue comes from carbon credits, Agarwal says.

In addition to the prize money for Mati Carbon, XPrize awarded three runners-up. Paris-based NetZero received $15 million for extracting carbon from agricultural waste and converting it to biochar, a type of charcoal, using pyrolysis. Houston-based Vaulted Deep won $8 million for geologically sequestering carbon-filled organic waste. And London-based Undo Carbon received $5 million for its enhanced rock-weathering approach. XPrize awarded additional prize money earlier in the competition for milestones and student teams. The Musk Foundation funded the competition.

XPrize Runners-Up Don't Include Direct Air Capture

Conspicuously absent from the list of winners were companies developing direct air capture (DAC) and ocean carbon removal systems. The contest rules stipulated that to win, contestants had to remove and store at least 1,000 tonnes of carbon over the course of a year. None of the DAC or ocean carbon removal contestants met that threshold, XPrize's Leitch says.

For example, the ocean carbon startup Captura operates a pilot plant in the Port of Los Angeles that has been stripping 100 tonnes of CO2 out of the Pacific Ocean each year. But the company's new 1,000-tonne facility in Hawaii didn't begin operating until February of this year, as the contest was concluding. "That one's a good example of a team that has really pushed the envelope on the tech development," says Leitch. "It's just an unfortunate reality that for those more tech-heavy solutions, there are more barriers to short-term deployment."

In the field of air capture, there are DAC facilities that can remove well over 1,000 tonnes of carbon from the atmosphere in a year. For example, Climeworks last year switched on a 36,000-tonne DAC facility in Iceland. And Carbon Engineering is building a 500,000-tonne DAC plant in Texas through a partnership with 1PointFive. But neither company participated in the competition. The DAC team that entered the contest and came the closest to the 1,000-tonne threshold was Project Hajar, which is sequestering air-captured CO2 in peridotite rocks in the UAE. Project Hajar won a $1 million milestone award from the competition in 2022 and another $1 million honorary XFactor award today. XPrize also awarded a $1 million honorary XFactor prize to the ocean carbon startup Planetary Technologies.

As these technologies inch toward arbitrary competition thresholds, the only way to make a dent in the more than 1,000 gigatonnes of excess CO2 lingering in Earth's atmosphere is to scale up, and fast. Says Leitch: "A thousand tonnes is a very significant undertaking from a project-deployment perspective, but…from a climate perspective, it really doesn't move the needle."
- High Schoolers' AI-Enabled Device Deters Drunk Driving, by Willie D. Jones, 22 April 2025 at 18:00
Accidents happen, but not all of them are inevitable. Drunk driving is one of the deadliest and most preventable causes of roadway fatalities. In 2022 alone, more than 13,000 people died in alcohol-related vehicular crashes in the United States, accounting for nearly a third of all traffic deaths, according to the National Highway Traffic Safety Administration.

Now a group of high school students in North Carolina is taking action with SoberRide, an AI-enabled device they designed to prevent intoxicated people from driving. Breathalyzer-based ignition interlocks are already in use; they require the driver to blow into a device, proving they are sober enough to drive. However, these interlocks are not foolproof, because someone other than the driver could breathe into them to outsmart the device. SoberRide uses a combination of cameras, sensors, and machine-learning algorithms to detect signs of alcohol impairment in the driver—such as pupil dilation, bloodshot eyes, and the presence of ethanol from alcoholic beverages—before allowing a vehicle to be put into drive.

"We've been training our neural network to classify intoxication, refining the system's ability to reliably sense whether someone is drunk or sober," says Swayam Shah, chief executive officer and cofounder of SoberRide. He's an 11th-grader at Enloe Magnet High School, in Raleigh.

The SoberRide team presented its invention at the MIT Undergraduate Research Technology Conference in October, sponsored by groups like the IEEE University Partnership Program and IEEE Women in Engineering. The students also showcased their technology at another IEEE-supported event: the International Conference on Artificial Intelligence, Robotics, and Communication, held in December in Xiamen, China.

From tragedy to technology

The inspiration for SoberRide came from a tragedy. Shah was in eighth grade when a neighbor was killed in a collision caused by a drunk driver. The loss prompted Shah to research the magnitude of the drunk-driving problem. "We learned that nearly 300,000 people die each year in crashes involving at least one drunk driver," says Shaurya Mantrala, a senior at Enloe and the startup's chief product officer.

Motivated to develop technology to address the issue, the students took the initiative to research, design, and build SoberRide. The technology has since been issued a U.S. patent and is based on published research that Shah and Mantrala have presented at venues such as MIT. Shah leveraged his background in coding—honed since the fourth grade—along with knowledge gained from a Harvard introduction to computer science course, which he took in seventh grade. "I had a background in Python, Java, and C++," he says, and his intellectual curiosity led to a growing interest in hardware. He spent countless hours learning about Arduino, Raspberry Pi, soldering, and other elements of designing and building electronics.

AI-powered detection prevents workarounds

SoberRide's AI-driven approach sets it apart from existing ignition interlocks. Because such devices analyze the breath of whoever blows into them, they can be bypassed by having a sober person breathe into them. SoberRide's creators say it will instead leverage cameras that are already inside a car—technology that automakers are increasingly incorporating for driver-assist monitoring—to analyze the driver's behavior.
Should it detect signs of inebriation, it doesn't allow the car to be put into drive. The system also uses ethanol sensors placed on the dashboard or the driver-side B-pillar (the vertical roof support between the front and rear doors). Readings from those sensors, combined with facial analysis, assess intoxication indicators such as eye redness and facial swelling. To mitigate racial bias in facial recognition, the AI model was trained using a diverse dataset curated by IIT researchers. "The SoberRide device weighs facial analysis, which accounts for 25 percent of its decision regarding whether the person behind the wheel is impaired," says Mantrala, who coauthored two research papers with Shah that were published in the IEEE Xplore Digital Library.

From innovation to legislation

In addition to developing the technology, the SoberRide team has lobbied state and federal lawmakers to push for policies mandating in-vehicle DUI detection systems. "I just got back [in March] from Washington, D.C., where I was advocating in Congress for legislation mandating passive anti-drunk systems in all vehicles," Shah says. He did that as a side quest when he traveled to the nation's capital to be honored as a 2025 National STEM Festival champion. The award, sponsored by EXPLR, was presented to Shah for being one of the top 106 STEM students in the country.

The team also formed a partnership with Mothers Against Drunk Driving to advocate for the HALT law, passed by Congress in 2021 as part of the Infrastructure Investment and Jobs Act. "Under the Biden administration, there was federal action aimed at requiring passive anti-drunk-driving systems in all new vehicles by 2026," Shah says. "But with the change in administration, the chances of this happening at the federal level have diminished. That's why we've taken our advocacy to state legislators and governors." Shah and his team have brought the technology to North Carolina Governor Josh Stein, the state's former governor Roy Cooper, and Congressional Representative Deborah Ross to continue their legislative advocacy.

A new business model

Although automakers have been slow to adopt the technology, the SoberRide team is targeting fleet operators such as trucking companies and delivery services, as well as parents of teenage drivers, as early adopters. In its market research, the SoberRide team found that more than 90 percent of the teenage drivers' parents it contacted said they would purchase the technology as a safeguard against their kids getting behind the wheel while intoxicated.

Despite the uphill battle in securing automaker buy-in, the SoberRide team has received national recognition. Most notably, the startup became the first high school team ever to be invited to showcase its technology at CES (the erstwhile Consumer Electronics Show) in Las Vegas. "Honda, Nissan, and Toyota were among the many vehicle manufacturer representatives who visited the SoberRide booth at CES," Mantrala says. "They showed great interest in the technology, with some of them even offering to start beta-testing our product in their vehicles." The team was also recently named a global finalist for both the Conrad and Diamond High School Entrepreneurship Challenges, where it will compete on the international stage for further recognition, mentoring, and funding opportunities.
The students were runners-up at last year's TiE Young Entrepreneurs (TYE) Globals pitch competition, sponsored by the Indus Entrepreneurs, a Silicon Valley nonprofit. The annual competition evaluates high school startups' ideas, judging them on customer validation, business models, and execution. The team was also recently promised US $100,000 in funding from the TiE Angels program, which it plans to use to refine the technology and bring the product to market.

A mission beyond profit

Shah says he and his team understand that widespread adoption could take years, but they remain committed to their mission. "We don't just want to sell a product," he says. "We want to end drunk driving—for good."
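For readers curious how the multi-signal decision described earlier might be combined in code, here is a purely hypothetical sketch. Only the 25 percent weighting for facial analysis comes from the article; the other weights, the score ranges, the threshold, and every name below are invented for illustration and are not SoberRide's implementation.

```python
# Hypothetical sketch of a weighted multi-signal impairment check.
# Only the 25 percent facial-analysis weight comes from the article; the
# remaining weights, score ranges, and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    facial_score: float    # 0.0 (no visual signs) .. 1.0 (strong signs), from a vision model
    ethanol_score: float   # normalized cabin ethanol-sensor reading, 0.0 .. 1.0
    behavior_score: float  # e.g., gaze or attention cues from the driver-monitoring camera

WEIGHTS = {"facial": 0.25, "ethanol": 0.50, "behavior": 0.25}  # facial = 25% per article
THRESHOLD = 0.6  # above this combined score, the car stays in park (assumed value)

def impairment_score(r: SensorReadings) -> float:
    """Combine normalized signals into a single 0..1 impairment score."""
    return (WEIGHTS["facial"] * r.facial_score
            + WEIGHTS["ethanol"] * r.ethanol_score
            + WEIGHTS["behavior"] * r.behavior_score)

def allow_drive(r: SensorReadings) -> bool:
    """Return False when the combined score suggests impairment."""
    return impairment_score(r) < THRESHOLD

if __name__ == "__main__":
    sober = SensorReadings(facial_score=0.1, ethanol_score=0.05, behavior_score=0.1)
    impaired = SensorReadings(facial_score=0.8, ethanol_score=0.9, behavior_score=0.7)
    print(allow_drive(sober))     # True: car can be put into drive
    print(allow_drive(impaired))  # False: vehicle stays locked out
```

The point of the weighted-sum design is that no single signal, and no single workaround, can flip the decision on its own, which is exactly the weakness of breath-only interlocks that SoberRide is trying to address.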
- AMD Takes Holistic Approach to AI Coding Copilots, by Andrej Zdravkovic, 22 April 2025 at 16:48
Coding assistants like GitHub Copilot and Codeium are already changing software engineering. Based on existing code and an engineer's prompts, these assistants can suggest new lines or whole chunks of code, serving as a kind of advanced autocomplete. At first glance, the results are fascinating. Coding assistants are already changing the work of some programmers and transforming how coding is taught. However, this is the question we need to answer: Is this kind of generative AI just a glorified help tool, or can it actually bring substantial change to a developer's workflow?

At Advanced Micro Devices (AMD), we design and develop CPUs, GPUs, and other computing chips. But a lot of what we do is writing software: the low-level code that integrates operating systems and other customer software seamlessly with our own hardware. In fact, about half of AMD engineers are software engineers, which is not uncommon for a company like ours. Naturally, we have a keen interest in understanding the potential of AI for our software-development process.

To understand where and how AI can be most helpful, we recently conducted several deep dives into how we develop software. What we found was surprising: The kinds of tasks coding assistants are good at—namely, busting out lines of code—are actually a very small part of the software engineer's job. Our developers spend the majority of their efforts on a range of tasks that include learning new tools and techniques, triaging problems, debugging those problems, and testing the software.

Even for the coding copilots' bread-and-butter task of writing code, we found that the assistants offered diminishing returns: They were very helpful for junior developers working on basic tasks, but not that helpful for more senior developers who worked on specialized tasks.

To use artificial intelligence in a truly transformative way, we concluded, we couldn't limit ourselves to just copilots. We needed to think more holistically about the whole software-development life cycle and adapt whatever tools are most helpful at each stage. Yes, we're working on fine-tuning the available coding copilots for our particular code base, so that even senior developers will find them more useful. But we're also adapting large language models to perform other parts of software development, like reviewing and optimizing code and generating bug reports. And we're broadening our scope beyond LLMs and generative AI. We've found that using discriminative AI—AI that categorizes content instead of generating it—can be a boon in testing, particularly in checking how well video games run on our software and hardware.

The author and his colleagues have trained a combination of discriminative and generative AI to play video games and look for artifacts in the way the images are rendered on AMD hardware, which helps the company find bugs in its firmware code.

In the short term, we aim to implement AI at each stage of the software-development life cycle. We expect this to give us a 25 percent productivity boost over the next few years. In the long term, we hope to go beyond individual assistants for each stage and chain them together into an autonomous software-development machine—with a human in the loop, of course.
Even as we go down this relentless path to implement AI, we realize that we need to carefully review the possible threats and risks that the use of AI may introduce. Equipped with these insights, we'll be able to use AI to its full potential. Here's what we've learned so far.

The potential and pitfalls of coding assistants

GitHub research suggests that developers can double their productivity by using GitHub Copilot. Enticed by this promise, we made Copilot available to our developers at AMD in September 2023. After half a year, we surveyed those engineers to determine the assistant's effectiveness. We also monitored the engineers' use of GitHub Copilot and grouped users into one of two categories: active users (who used Copilot daily) and occasional users (who used Copilot a few times a week). We expected that most developers would be active users. However, we found that the share of active users was just under 50 percent.

Our software review found that AI provided a measurable increase in productivity for junior developers performing simpler programming tasks. We observed much lower productivity increases with senior engineers working on complex code structures. This is in line with research by the management consulting firm McKinsey & Co.

When we asked the engineers about the relatively low Copilot usage, 75 percent of them said they would use Copilot much more if the suggestions were more relevant to their coding needs. This doesn't necessarily contradict GitHub's findings: AMD software is quite specialized, so it's understandable that a standard AI tool like GitHub Copilot, which is trained using publicly available data, wouldn't be that helpful. For example, AMD's graphics-software team develops low-level firmware to integrate our GPUs into computer systems, low-level software to integrate the GPUs into operating systems, and software to accelerate graphics and machine learning operations on the GPUs. All of this code provides the base for applications, such as games, video conferencing, and browsers, to use the GPUs. AMD's software is unique to our company and our products, and the standard copilots aren't optimized to work on our proprietary data.

To overcome this issue, we will need to train tools using internal datasets and develop specialized tools focused on AMD use cases. We are now training a coding assistant in-house using AMD use cases and hope this will improve both adoption among developers and the resulting productivity. But the survey results made us wonder: How much of a developer's job is writing new lines of code? To answer this question, we took a closer look at our software-development life cycle.

Inside the software-development life cycle

AMD's software-development life cycle consists of five stages. We start with a definition of the requirements for the new product, or a new version of an existing product. Then, software architects design the modules, interfaces, and features to satisfy the defined requirements. Next, software engineers work on development: the implementation of the software code to fulfill product requirements according to the architectural design. This is the stage where developers write new lines of code, but that's not all they do. They may also refactor existing code, test what they've written, and subject it to code review.

Next, the test phase begins in earnest. After writing code to perform a specific function, a developer writes a unit or module test—a program to verify that the new code works as required.
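That step is easy to picture with a deliberately generic example. The function and its tests below are invented to show the shape of a unit or module test; they are not AMD code.

```python
# A deliberately generic unit test, illustrating the "write a test for the new
# function" step described above. The function and its tests are invented examples.
import unittest

def clamp(value: float, low: float, high: float) -> float:
    """Limit value to the inclusive range [low, high]."""
    return max(low, min(high, value))

class ClampTests(unittest.TestCase):
    def test_value_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5.0, 0.0, 10.0), 5.0)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3.0, 0.0, 10.0), 0.0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(42.0, 0.0, 10.0), 10.0)

if __name__ == "__main__":
    unittest.main()
```

Tests like these run automatically every time the module changes, which is what lets the later integration, regression, and stress stages focus on system-level behavior rather than re-verifying individual functions.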
In large development teams, many modules are developed or modified in parallel. It's essential to confirm that any new code doesn't create a problem when integrated into the larger system. This is verified by an integration test, usually run nightly. Then, the complete system is run through a regression test to confirm that it works as well as it did before new functionality was included, a functional test to confirm old and new functionality, and a stress test to confirm the reliability and robustness of the whole system. Finally, after the successful completion of all testing, the product is released and enters the support phase.

The standard release of a new AMD Adrenalin graphics-software package takes an average of six months, followed by a less-intensive support phase of another three to six months. We tracked one such release to determine how many engineers were involved in each stage. The development and test phases were by far the most resource intensive, with 60 engineers involved in each. Twenty engineers were involved in the support phase, 10 in design, and five in definition.

Because development and testing required more hands than any of the other stages, we decided to survey our development and testing teams to understand what they spend time on from day to day. We found something surprising yet again: Even in the development and test phases, developing and testing new code collectively take up only about 40 percent of the developer's work. The other 60 percent of a software engineer's day is a mix of things: About 10 percent of the time is spent learning new technologies, 20 percent on triaging and debugging problems, almost 20 percent on reviewing and optimizing the code they've written, and about 10 percent on documenting code. Many of these tasks require knowledge of highly specialized hardware and operating systems, which off-the-shelf coding assistants just don't have. This review was yet another reminder that we'll need to broaden our scope beyond basic code autocomplete to significantly enhance the software-development life cycle with AI.

AI for playing video games and more

Generative AI, such as large language models and image generators, is getting a lot of airtime these days. We have found, however, that an older style of AI, known as discriminative AI, can provide significant productivity gains. While generative AI aims to create new content, discriminative AI categorizes existing content, such as identifying whether an image is of a cat or a dog, or identifying a famous writer based on style.

We use discriminative AI extensively in the testing stage, particularly in functionality testing, where the behavior of the software is tested under a range of practical conditions. At AMD, we test our graphics software across many products, operating systems, applications, and games. For example, we trained a set of deep convolutional neural networks (CNNs) on an AMD-collected dataset of over 20,000 "golden" images—images that don't have defects and would pass the test—and 2,000 distorted images. The CNNs learned to recognize visual artifacts in the images and to automatically submit bug reports to developers.

We further boosted test productivity by combining discriminative AI and generative AI to play video games automatically.
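Before getting to the game-playing pipeline, here is a minimal sketch of the kind of discriminative model just described: a small convolutional classifier that labels rendered frames as clean or artifact-laden. The architecture, input size, and the random stand-in data are placeholders for illustration, not AMD's actual networks or dataset.

```python
# Minimal sketch of a discriminative "golden vs. distorted frame" classifier,
# in the spirit of the CNN-based artifact detection described above.
# Architecture, input size, and the random stand-in data are illustrative only.
import torch
import torch.nn as nn

class ArtifactClassifier(nn.Module):
    """Tiny CNN that scores a rendered frame as clean (0) or distorted (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: golden vs. distorted

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return self.head(x)

if __name__ == "__main__":
    model = ArtifactClassifier()
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-in batch: 8 RGB frames at 224x224 with random clean/distorted labels.
    frames = torch.rand(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))

    logits = model(frames)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

    # In a real pipeline, frames predicted as "distorted" would trigger a bug report.
    print(logits.argmax(dim=1).tolist())
```

A production version would replace the random tensors with captured frames from the golden and distorted image sets and feed flagged frames into the bug-tracking system, as described above.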
There are many elements to playing a game, including understanding and navigating screen menus, navigating the game world and moving the characters, and understanding game objectives and actions to advance in the game. While no two games are the same, this is basically how it works for action-oriented games: A game usually starts with a text screen to choose options. We use generative large vision models to understand the text on the screen, navigate the menus to configure the options, and start the game. Once a playable character enters the game, we use discriminative AI to recognize relevant objects on the screen, understand where the friendly or enemy nonplayable characters may be, and direct each character in the right direction or perform specific actions. To navigate the game, we use several techniques—for example, generative AI to read and understand in-game objectives, and discriminative AI to interpret mini-maps and terrain features. Generative AI can also be used to predict the best strategy based on all the collected information.

Overall, using AI in the functional testing stage reduced manual test efforts by 15 percent and increased the number of scenarios we can test by 20 percent. But we believe this is just the beginning. We're also developing AI tools to assist with code review and optimization, problem triage and debugging, and more aspects of code testing.

For review and optimization, we're creating specialized tools for our software engineers by fine-tuning existing generative AI models with our own code base and documentation. We're starting to use these fine-tuned models to automatically review existing code for complexity, coding standards, and best practices, with the goal of providing humanlike code review and flagging areas of opportunity.

Similarly, for triage and debugging, we analyzed what kinds of information developers require to understand and resolve issues, and then developed a new tool to aid in this step. We automated the retrieval and processing of triage and debug information. Feeding a series of prompts with relevant context into a large language model, we analyzed that information to suggest the next step in the workflow toward finding the likely root cause of the problem. We also plan to use generative AI to create unit and module tests for a specific function in a way that's integrated into the developer's workflow.

These tools are currently being developed and piloted in select teams. Once we reach full adoption and the tools are working together and seamlessly integrated into the developer's environment, we expect overall team productivity to rise by more than 25 percent.
Second, we’re concerned about the inadvertent disclosure of our own intellectual property when we use publicly available AI tools. For example, certain generative AI tools may take your source code as input and incorporate it into their larger training datasets. If such a tool is publicly available, it could expose your proprietary source code or other intellectual property to others using the tool. The Future Workflow of AI Agents We envision a future where a human defines a new software requirement or submits a new bug report, and a series of AI agents perform all the steps of the software-development life cycle, submitting the result to a human developer for review. Third, it’s important to be aware that AI makes mistakes. In particular, LLMs are prone to hallucinations, or providing false information. Even as we off-load more tasks to AI agents, we’ll need to keep a human in the loop for the foreseeable future. Lastly, we’re concerned with possible biases that the AI may introduce. In software-development applications, we must ensure that the AI’s suggestions don’t create unfairness and that generated code stays within the bounds of human ethical principles and doesn’t discriminate in any way. This is another reason a human in the loop is imperative for responsible AI. Keeping all these concerns front of mind, we plan to continue developing AI capabilities throughout the software-development life cycle. Right now, we’re building individual tools that can assist developers in the full range of their daily tasks—learning, code generation, code review, test generation, triage, and debugging. We’re starting with simple scenarios and slowly evolving these tools to handle more-complex ones. Once these tools are mature, the next step will be to link the AI agents together in a complete workflow. The future we envision looks like this: When a new software requirement comes along, or a problem report is submitted, AI agents will automatically find the relevant information, understand the task at hand, generate relevant code, and test, review, and evaluate the code, cycling over these steps until the system finds a good solution, which is then proposed to a human developer. Even in this scenario, we will need software engineers to review and oversee the AI’s work. But the role of the software developer will be transformed. Instead of programming the software code, we will be programming the agents and the interfaces among agents. And in the spirit of responsible AI, we—the humans—will provide the oversight.
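As a concrete, entirely hypothetical illustration of one link in that envisioned agent chain, the triage step described earlier, here is a minimal Python sketch that assembles debug context into a prompt and asks a language model to suggest a next action. The query_llm function is a placeholder for whatever model endpoint a team actually uses; none of this reflects AMD’s internal tooling.

```python
# Hypothetical sketch of an LLM-assisted triage step: gather relevant context
# for a bug report and ask a language model for the next debugging action.
# query_llm() is a placeholder, not a real service call.
from dataclasses import dataclass

@dataclass
class BugReport:
    title: str
    log_excerpt: str
    recent_commits: list[str]
    config: str  # e.g., GPU model, driver build, operating system

def build_triage_prompt(bug: BugReport) -> str:
    commits = "\n".join(f"- {c}" for c in bug.recent_commits)
    return (
        "You are helping triage a graphics-driver bug.\n"
        f"Title: {bug.title}\n"
        f"Environment: {bug.config}\n"
        f"Recent commits:\n{commits}\n"
        f"Log excerpt:\n{bug.log_excerpt}\n\n"
        "Suggest the single most informative next debugging step and the "
        "likely root-cause area, with a one-sentence justification."
    )

def query_llm(prompt: str) -> str:
    # Placeholder: swap in a call to whichever LLM service or local model
    # your team uses. The response would be shown to the human developer,
    # keeping a person in the loop.
    return "(model response goes here)"

bug = BugReport(
    title="Flickering artifacts during fullscreen video playback",
    log_excerpt="[display] vblank timeout on pipe 1 ...",
    recent_commits=["Refactor display power gating", "Update firmware blob"],
    config="example GPU, example driver build, Windows 11",
)
print(query_llm(build_triage_prompt(bug)))
```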
- Ace This Tricky Interview Questionby Rahul Pandey on 21. April 2025. at 20:03
This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free! One of the most common mistakes I’ve observed among job seekers is over-sharing. When you’re interviewing at a company, remember this key insight: Don’t share information that hurts you. This frequently comes up for folks who have been affected by a “performance-based” layoff. Your explanation for how and why you left your last job can have a huge impact on your interviews. I have two tactical pieces of advice: First, reframe the layoff as a mutual decision instead of something that happened to you. No one wants to hire a victim. ❌ “I was unfairly punished by a horrible company” ✅ “My career goals didn’t align with the company at the time” Second, focus the conversation more on where you’re headed rather than what you’re leaving behind. Here’s a sample answer: “I’m super excited about this company. I deeply resonate with the mission and smaller environment.” (You should, of course, adapt this to match what you care about and the role you’re applying for.) You will likely be asked, “Why did you leave your last job?” but you have the power to decide what to share. You shouldn’t lie, but you also need not be fully transparent. Many people will experience a layoff or reduction in force at least once in their career. Instead of blaming yourself or the company, your framing should be: “There was a mismatch in expectations between employee and employer.” There are countless reasons why an engineer may have poor performance: lack of support, poor management, lack of motivation, or shifting company needs. Have the humility to learn from your mistakes, but remember to advocate for yourself, too. Have the confidence that you are valuable. Too many engineers are their own worst critics, which exacerbates a negative spiral of performance. The tech job market right now is especially turbulent, and anxiety around layoffs is everywhere. Remember: You are in the driver’s seat of your career, and you can have an amazing career after getting laid off. —Rahul ICYMI: What Engineers Should Know About AI in 2025 Each year, Stanford’s Institute for Human-Centered Artificial Intelligence releases a report on the current state of AI, including its effect on the workforce. While last year’s report showed signs that the AI hiring boom was quieting, this year, AI job postings were back up in most places. Read more here. ICYMI: This Solar Engineer is Patching Lebanon’s Power Grid In Mira Daher’s home country of Lebanon, the national grid provides power for only a few hours a day. But in recent years, the rapidly falling cost of solar panels has given Lebanese businesses and families a compelling alternative, and the country has seen a boom in private solar-power installations. Daher has played an important part in this ongoing revolution. She is in charge of bidding for new solar projects, drawing up designs, and ensuring that they are correctly implemented on-site for renewable-energy company Earth Technologies. Read more here. Debate over H-1B visas shines spotlight on US tech worker shortage The H-1B visa system, which allows U.S. employers to hire skilled workers from other countries in specialty occupations, is frequently used in the tech industry. However, some supporters of President Donald Trump are critical of the program.
A professor of computer science at Rice University discusses the contentious state of H-1B visas and what the program reveals about the U.S. tech workforce in this article from The Conversation. Read more here.
- Henry Samueli’s Career Advice for Aspiring Engineersby Tekla S. Perry on 21. April 2025. at 15:30
Henry Samueli, cofounder of Broadcom and the 2025 recipient of the IEEE Medal of Honor, has this advice for engineering students and recent graduates just starting on their careers: “Don’t do engineering for the money. Do it because it can have an impact, because you enjoy doing it, and because you love doing it. If you have an impact on society, the money follows.” “Advance your college education as far as you can. I know there are people that say, ‘You don’t even need a college education,’ but statistically, that’s stupid. The average salaries paid to STEM professionals blow away every other field. You don’t look at the one-offs in a field. You have to look at the average statistics, because you’re probably not going to be that one-off. You’re most likely going to end up in the averages. When you’re betting on your career, you want to go into fields with the largest probability of success.” “Stay off social media. Social media can be very damaging. And it’s just a huge time sink. You end up going down ratholes that are totally useless.” “If you are fortunate enough to be successful, don’t forget to give back. One of the most rewarding things you can do in your life is giving back to those less fortunate than you.”
- This Man Made the Modem in Your Phone a Realityby Tekla S. Perry on 21. April 2025. at 15:00
In 1991, very few people had Internet access. Those who did post in online forums or email friends from home typically accessed the Internet via telephone line, their messages traveling at a top speed of 14.4 kilobits per second. Meanwhile, cable TV was rocketing in popularity. By 1991, sixty percent of U.S. households subscribed to a cable service; cable rollouts in the rest of the world were also picking up speed. Hypothetically, using that growing cable network instead of phone lines for Internet access would dramatically boost the speed of communications. And making cable TV itself digital instead of analog would allow cable providers to carry many more channels. The theory of how to do that—using analog-to-digital converters and digital signal processing to translate the analog waveforms that travel on coaxial cable into digital form—was well established. But the cable modems required to implement such a digital broadband network were not on the mass market. Enter Henry Samueli. In 1985, he had established a multidisciplinary research program at the University of California, Los Angeles, to develop chips for digital broadband. Over the next several years, he and his team created a wide variety of proof-of-concept chips demonstrating the key building blocks of high-performance digital modems. And in 1991, Samueli, along with his UCLA grad student Henry Nicholas, founded Broadcom Corp. to commercialize the technology. Today, the innovations in digital signal processing architectures pioneered at UCLA and Broadcom persist in the digital modems that enable both wired and wireless communications in our devices. For these advances, along with contributions to expanding science, technology, engineering, and math (STEM) education, Samueli is the recipient of the 2025 IEEE Medal of Honor.
Henry Samueli
Current jobs: Philanthropist, chairman of Broadcom Inc.
Date of birth: 20 September 1954
Birthplace: Buffalo, N.Y.
Family: Wife, Susan; three children; three grandchildren
Education: BSEE 1975, MSEE 1976, Ph.D. 1980, all in electrical engineering from the University of California, Los Angeles (UCLA)
First job: Cashier and stock boy in family’s liquor store
Biggest surprise in career: The overwhelming success of Broadcom and the explosive growth of the Internet
Patents: 75
Favorite kind of music: Classic rock, including Led Zeppelin, Van Halen, Metallica, the Beatles, and the Rolling Stones
Most recent TV series watched: “Lioness”
Favorite food: Chocolate
Favorite movie: The Godfather
Favorite country: Italy, for the people, the culture, the food, the scenery
Favorite cities: Paris, London, New York, Tokyo
Leisure activities: E-biking, skiing, hiking, basketball
Pet peeves: Disorganization and messes
Key organizational memberships: IEEE, Marconi Society
Major awards: IEEE Medal of Honor “For pioneering research and commercialization of broadband communication and networking technologies, and promotion of STEM education”; IEEE Fellow; Marconi Fellow; member of the National Academy of Engineering; Fellow of the American Academy of Arts and Sciences; Ellis Island Medal of Honor; Broadcom’s 2024 Emmy for “Pioneering Technologies Enabling High-Performance Communications over Cable TV Systems”
Before the Cable Modem—Way Before Samueli started down the path that would lead to cable modems when he was in middle school. But he wasn’t thinking about a future career when he enrolled in an electric shop class.
It was just that, he says, “electricity seemed kind of mysterious, compared with metal or wood.” The teacher assigned a crystal radio project, he recalls, “but wrapping a wire around a tube from toilet paper and connecting the wire to a crystal wasn’t that exciting to me.” So he thumbed through an electronics catalog looking for an alternative. A Graymark five-tube radio caught his eye. It took some convincing before the teacher agreed to let him tackle the project, which came with complicated instructions and involved learning how to solder. “I worked every night,” Samueli says. “There were hundreds of connections that I had to solder up. It took a full semester to build it, and, at the end, I brought it into class, plugged it in, and sound came out of it. I was totally blown away. And I literally made it my mission in life to figure out how radios work.” Samueli’s teacher was blown away as well. And what he said crystallized Samueli’s future. “He told me, ‘Henry, honestly, I never ever thought you could do this. But clearly, you’ve got some special gifts. I think you should pursue electrical engineering as a career. You’re going to do something big someday.’ ” UCLA Takes Hold—and Never Lets Go Samueli eventually applied to UCLA—a university with a good electrical engineering program and affordable tuition that was close to home. He went straight through to a Ph.D. but, he says, didn’t really understand how radios worked until a few years beyond that. After collecting his Ph.D. in 1980, Samueli joined TRW to work on defense communications projects. He says he loved every minute. “It’s a tremendous opportunity to learn because you’re dealing with superhigh tech, the greatest technology at the time. And with a big budget, you can build very sophisticated things,” he says. Samueli didn’t completely leave the world of higher education. In his spare time, he taught a circuit-design class at California State University, Northridge, and then several circuits and signal processing classes at UCLA. In 1985, UCLA offered him an assistant professorship, and he left TRW, taking coworker Henry Nicholas with him as his first Ph.D. student. The two formed the core of what would become the multidisciplinary communications research program in UCLA’s Integrated Circuits and Systems Laboratory. They collaborated with several faculty members in the electrical engineering and computer science department to develop digital modem chips. Broadcom cofounders Henry Samueli [left] and Henry Nicholas pose in front of the company’s headquarters in Irvine, Calif., in 1999. Ted Soqui/Corbis/Getty Images “Chip design is a very complex and broad discipline,” Samueli points out. “There are analog designs, digital designs, multiple systems, various architectures. While such a multidisciplinary approach is standard today, it was fairly unusual at the time.” AT&T Bell Labs was leading the world in digital-communications research, Samueli recalls, using low-speed modems that communicated in the same bandwidths as the human voice. The labs built those modems using programmable digital signal processing chips from Texas Instruments and others. “It was a software-driven approach to building digital signal processing,” Samueli says. “And it only ran at data rates of tens of kilobits per second. Our challenge was how to take those algorithms and make them run at tens of megabits per second—one thousand times faster.” Samueli and his colleagues concluded that a programmable architecture using software was just too slow. 
So they began investigating parallel architectures that could implement sophisticated algorithms on a single chip with no software, just dedicated hardware. “That was our innovation. Back then, it was very novel,” he notes. “Today, it’s what makes AI processors work.” UCLA researchers who specialized in analog signal processing collaborated with the group to integrate high-speed analog-to-digital and digital-to-analog converters into the core functions of the chip—“really breakthrough work,” Samueli says. “I was totally blown away. And I literally made it my mission in life to figure out how radios work.” —Henry Samueli Samueli and his team weren’t thinking patents while they were doing this research. As academics, their focus was on publishing their results—some 100-plus papers over 10 years. But many others saw commercial potential in their work. “After we’d publish a paper, we’d go to a conference and make a presentation,” Samueli says. “People would come up to us after the talk and say, ‘This is really neat stuff. Have you ever thought about commercializing it and starting a company?’” Samueli and Nicholas took the leap in August of 1991, incorporating Broadcom Corp. and chipping in US $5,000 each to rent an office and buy computers and office supplies. Samueli kept working full-time at UCLA while Broadcom began bringing in small defense contracts: developing a digital frequency synthesizer for TRW, a digital filter for a Rockwell microwave radio, and, for the U.S. Air Force, a digital filter to protect GPS signals from jamming. “These projects funded our R&D, and we gained more and more knowledge,” says Samueli. [For more on Samueli's early career, see this 1999 profile.] Scientific Atlanta Connects with Broadcom In December 1992, a student of Samueli’s gave a presentation at Globecom (the IEEE Global Telecommunications Conference, that is) about a prototype 10-plus megabit-per-second digital modem chip the group had developed. “What was different in their chip is that it integrated digital and analog,” recalls Leo Montreuil, then an engineer at Scientific Atlanta and now an IC design engineer at Broadcom. At the time, Scientific Atlanta shared the U.S. cable TV set-top box market with only one competitor, General Instrument. “We had many companies making chips for Scientific Atlanta, but not that kind of chip.” After the presentation, Montreuil approached the student, who referred him to Samueli. Montreuil met with Samueli and Nicholas three months later. Henry Samueli is this year’s recipient of the IEEE Medal of Honor for his contributions to digital broadband technology and his support of STEM education. He is donating the US $2 million cash award to support an annual student-leadership conference.Peter Adams Scientific Atlanta wasn’t just casually curious about the work. The company had signed a major contract with Time Warner to build 4,000 set-top boxes for the world’s first digital cable system, called the Full Service Network. It needed a digital modem for that box, but the necessary chips weren’t commercially available. “What they were trying to do in a single chip seemed so much better than multichip systems being developed by others,” says Montreuil. “When you go from analog to a digital implementation, you have to worry about drift, temperature sensitivity, and other issues. The more you can implement in the digital domain, the more predictable is the system.” Scientific Atlanta awarded a $1 million development contract to Broadcom in June of 1993. 
Although Broadcom’s design ended up using three chips, the company did combine analog and digital circuitry on the same silicon. “The project was straightforward,” Samueli says, “because it was based on the prototype designs we had already done. And it worked the first time, flawlessly.” Time Warner’s digital cable network—activated in Orlando, Fla., in early 1995—was a technical success, but Time Warner didn’t take it any further. The network wasn’t intended to be financially viable, Montreuil says, pointing out that the core of each home system was a prohibitively expensive Sun SPARC workstation. “The goal was to acquire knowledge and to get our foot in the door for the next generation.” Broadcom’s modem design impressed both Scientific Atlanta and General Instrument. The two competitors invested $1 million each, for a 10 percent total stake in the startup. That funding allowed Broadcom to keep working on digital modems, to reduce the cost by putting all the functions on a single chip. Sherman Chen was a senior engineer at General Instrument at the time. “We knew then that the Broadcom device would dramatically extend the boundaries of communications,” recalls Chen, who is now vice president of engineering in Broadcom’s broadband video group. “Ideas like advanced error correction and digital compression were around, but they were all just elegant theories until Broadcom built the first mixed-signal silicon for broadband communications. Broadcom created an industry.” Broadcom wasn’t the only company chasing the low-cost digital modem grail. One key competitor was LANcity, which had a $500 digital modem. The market was evolving quickly, and it was becoming clear to cable operators that this new technology would require standardization. Broadcom, CableLabs, General Instrument, LANcity, 3Com, and others began collaborating in 1995 to create an international standard called the Data-Over-Cable Service Interface Specification (DOCSIS). “People would come up to us after the talk and say, ‘This is really neat stuff. Have you ever thought about commercializing it and starting a company?’ ” —Henry Samueli Around that time, Samueli left UCLA to focus on Broadcom, which had recently moved from Los Angeles to Irvine, in Orange County. Reluctant to cut his academic ties, he asked that his departure be considered a temporary leave of absence. He officially remains on leave from UCLA even now. In 1995, Broadcom released its first mainstream commercial product—that is, a device built to sell on the open market, not developed under contract. The BCM3100 was an under-$20, single-chip, DOCSIS-compatible digital modem. In 1996, Broadcom added another type of product: digital Ethernet chips, what Samueli says was the world’s first all-digital implementation of Ethernet technology. With those two successful product lines, Broadcom went public in 1998 at a valuation of $1 billion, making Samueli, Nicholas, and many of Broadcom’s 320 or so employees wealthy. By mid-2000, that valuation had jumped to more than $60 billion, with Samueli’s stake worth about $10 billion, and, according to the Orange County Register, the average employee worth nearly $6 million. “We were a very generous company to our employees,” Samueli says. “We gave stock to virtually everybody in the company. We had it to give because we didn’t dilute our shares by taking on venture capital investors.” The SEC Goes After Broadcom’s Stock Option Grants This sharing of the wealth, ironically, led to one of the darkest chapters in Samueli’s story. 
In the mid-2000s, the U.S. Securities and Exchange Commission began investigating the use of stock options at a number of tech companies, including Broadcom. The SEC opened a formal inquiry into Broadcom’s practices in late 2006 and in 2008 charged several Broadcom executives, including Samueli and Nicholas, with backdating stock options. “It was a nightmare,” Samueli says. “We went through five years of hell. It’s frightening. They threaten you. They say, ‘We’re going to put you in jail for 300 years.’ ” In late 2009, the case came before U.S. District Court Judge Cormac Carney. After hearing some of the evidence, including testimony from Samueli and others, the judge “threw everything out,” Samueli says. Carney seemed particularly outraged by the prosecution’s treatment of Samueli. His ruling stated: “The uncontroverted evidence at trial established that Dr. Samueli was a brilliant engineer and a man of incredible integrity. There was no evidence at trial to suggest that Dr. Samueli did anything wrong, let alone criminal. Yet, the government embarked on a campaign of intimidation and other misconduct to embarrass him and bring him down.” Says Samueli: “This whole options backdating scandal was misery, but I wouldn’t change what we did. I think being overly generous to employees is a good thing.” Meanwhile, Broadcom cofounder Nicholas was struggling. He had resigned from the company in 2003, and around the same time as the stock options investigation, he was indicted for distribution of illegal drugs. Nicholas entered rehab in 2008, and the charges were eventually dropped. A decade later, though, Nicholas was arrested in Las Vegas for drug trafficking and took a plea deal without admitting guilt. “I haven’t spoken to him in a couple of years,” says Samueli. “It’s really sad. But what he did for the company cannot be underestimated. I wish him all the best.” Samueli’s Philanthropy and the Giving Pledge Samueli remained a steady presence as Broadcom’s chief technology officer until 2018, continuing through its acquisition by Avago in 2016. (The resulting entity is now called Broadcom Inc.) Since 2018, he’s served as chairman. He still has a big influence on the company’s engineers. Charlie Kawwas, president of Broadcom’s semiconductor solutions group, says that Samueli continues to attend all of the division’s technical reviews—about 72 a year, each lasting 2 to 3 hours. “He engages with the engineers, asking questions and giving feedback, and they love that,” Kawwas says. On a cruise to Antarctica in 2023, Henry Samueli “went to every lecture…he went on every excursion,” a colleague recalls. Lindsey Spindle With his current personal wealth estimated by Forbes at about $20 billion, Samueli spends much of his time giving money away through the Samueli Foundation. He also chairs the board of the Broadcom Foundation. He and his wife, Susan, have committed to the Giving Pledge, promising to give away most of their wealth either during their lifetimes or in their wills. “After Broadcom went public, and the stock was flying, Susan and I decided we needed to start giving this away. It was easy to think of what I wanted to give back to. What created this wealth? My engineering education. And UCLA was that entire education—my bachelor’s, master’s, Ph.D., faculty member.
So there was no question in my mind that the first major gift would be to UCLA and the engineering school, and that was $30 million in 1999.” [See "Henry Samueli’s Career Advice for Aspiring Engineers."] Since then, the Samueli Foundation has supported engineering and integrative health programs at UCLA and the University of California, Irvine, for a total, Samueli estimates, of more than $500 million. (Integrative health is health care that embraces alternative therapies along with conventional medicine and is a passion of Samueli’s wife.) The foundation also targets projects aimed at bringing students into the STEM pipeline, including creating a charter middle and high school— the Samueli Academy—focused on hands-on learning in engineering and design arts. It’s working with community colleges to expand training for nursing, construction, maritime, and STEM careers. And the foundation funds initiatives to combat antisemitism and to promote collaborations with Israel and projects within Israel, a growing focus in response to recent events. “He engages with the engineers, asking questions and giving feedback, and they love that.”—Charlie Kawwas, Broadcom Altogether the foundation has distributed more than $1 billion to date, and it’s on track to give away about a billion more in this decade, reports Lindsey Spindle, president of the Samueli Family Philanthropies, which oversees the foundation and the family’s other, smaller philanthropic efforts. “Henry’s engineering background gives him the right constitution for philanthropy,” Spindle says. “He knows about systems building. He appreciates interconnectivity. When you are building hardware, you have to think about the larger system in which it will function, be patient, and be willing to iterate. When you care about combating antisemitism, ending homelessness, and reorienting medicine towards well-being, you also have to have a systems orientation and be willing to iterate.” This year’s IEEE Medal of Honor comes with a significant cash award—$2 million, up from $50,000 in the recent past. Samueli, an IEEE member since his student days and now an IEEE Fellow, plans to use the money to create an endowment to enable IEEE’s Eta Kappa Nu honor society to host an annual student-leadership conference, something he’s been funding directly for the past three years. Henry Samueli and his wife, Susan, celebrate the Stanley Cup victory for the Anaheim Ducks hockey team, which Samueli bought in 2005. Harry How/Getty Images Samueli is also the owner of the National Hockey League franchise the Anaheim Ducks. At a glance, this might seem like a typical rich guy’s plaything—and there is no doubt that he enjoys his involvement with the team. But the acquisition came from an impulse to do good. In 2003, the company managing the Ducks’ home, the Anaheim Arena, went bankrupt. Anaheim officials knew Samueli was an active businessman in the Orange County community, and they asked him to take over management of the arena (now called the Honda Center). Meanwhile, Ducks owner Disney was eager to sell the team. Says Samueli: “In fear of an outsider coming in and moving the team out of town, we decided that, for the community’s sake, we would make sure they stay here—and learn how to run a sports team.” “It was a big learning curve,” he says. “But in any business, it’s really about the management. 
We put in a good management team—and won the Stanley Cup in our second year of ownership.” His dive into learning about hockey is characteristic of Samueli’s approach to just about everything, people who’ve worked with him report. “Henry has a seemingly limitless capacity to entertain new ideas,” Spindle says. She described a trip to Antarctica, for which her family joined some of the Samuelis. “Henry went to every lecture offered on the ship. He went on every excursion,” she says. In his work with the foundation, she continued, he’s equally curious and engaged. “He shows up at every meeting,” she says. “You can send him a 120-page document, and he will read every word and come prepared to ask questions.” The hockey team is part of Samueli’s investment in, and enjoyment of, the Orange County community. Next up is creating a true downtown Anaheim, in the form of an arts and entertainment district tagged OCVIBE. And in his free time, he takes long e-bike rides just to enjoy the neighborhoods. “OCVIBE and the Ducks are an important part of our lives,” Samueli says. “And as Broadcom stock grows, we just keep putting more and more money into the foundation. That’s not going to stop. Then, of course, there’s being on the Broadcom board and deeply involved with Broadcom—I can see that continuing for many years. Theoretically, I’m retired, but I’m as busy as ever.” This article appears in the May 2025 print issue as “The Broadband Boss.” This article was updated on 22 and 24 April 2025.
- Henry Samueli and the Rise of Digital Broadbandby Tekla S. Perry on 20. April 2025. at 14:00
Editor’s Note: Henry Samueli is the 2025 recipient of the IEEE Medal of Honor. IEEE Spectrum published this profile of Samueli in the September 1999 issue. With the recent explosion in the popularity of cable and digital subscriber-line modems for high-speed Internet access, the odds are that you will soon have one of these broadband communications devices in your home or office—if you don’t already. If you do, the odds are that the chips inside the modem will have been designed by Broadcom Corp., and be based on digital signal-processing (DSP) architectures conceived by Henry Samueli. Eight years ago, Samueli, then a professor at the University of California, Los Angeles (UCLA), who had been pushing the state of the art of digital broadband communications for more than a decade, joined with his Ph.D. student Henry Nicholas to found Broadcom, now in Irvine, Calif. Their first project was to design the world’s first chips for digital interactive television. Today Samueli holds patents for DSP-based receiver architectures for a number of digital communications transceivers, including ones for cable television, satellite television, Ethernet, and high-bit-rate digital subscriber line services. Plus Broadcom now makes more than 95 percent of the chips that go into U.S. digital cable set-top boxes and cable modems. Such modems are viewed as the foundation for the future of data, voice, and video services to the home. Broadcom also has big chunks of the markets for chips for Ethernet transceivers, high-definition television (HDTV) receivers, digital subscriber line modems (the leading alternative to cable modems), and direct broadcast satellite receivers. How a DIY radio kit launched Henry Samueli’s career Samueli’s path toward becoming one of today’s key players in digital communications started 33 years ago, when he was a seventh grader. Required to take a shop class at his West Hollywood, Calif., junior high school, he selected electric shop. During the term, each student was expected to build a crystal radio from a kit, using a single crystal and an antenna wound on a toilet paper tube. Bored with the prospect, Samueli asked his teacher if, instead, he could build a five-tube short-wave radio he had read about in a Heathkit catalog. [Editor’s note: Samueli later determined that the kit was a Graymark 506B.] Initially, the teacher said no—the short-wave radio was a ninth grade project—but Samueli persisted and eventually prevailed. It wasn’t easy, even though it was a cookbook project. Samueli had never done anything like it, and he recalls slaving away on it every night all term. Finally, he brought the assembled kit to school, the teacher plugged it in, and it worked. “The teacher’s jaw hit the floor,” Samueli said. “He said nobody gets it right the first time.” The teacher predicted that Samueli would be a successful electrical engineer someday. It was the first time Samueli had heard of such a profession. The radio project had fascinated him. Though he had managed to put it together, he had no idea how it worked. “That became my mission in life, from seventh grade onward, to find out how radios work,” he told IEEE Spectrum. It took him nine years of college, a Ph.D. thesis—a highly theoretical paper entitled “Nonperiodic forced overflow oscillations in digital filters”—and a few years in industry before he felt he had satisfied that goal. In pursuit of this understanding, Samueli applied to UCLA, which had a good engineering department. 
It was also affordable because he could live with his parents. (His parents, Holocaust survivors from Krakow, Poland, who operated a series of small businesses in Los Angeles, were committed to supporting his education but couldn’t afford to send him away to school.) After he received his master’s degree at UCLA, he went straight through to a Ph.D. program, turning down a job offer from the then Bell Telephone Laboratories, in Murray Hill, N.J. The defense industry beckons With the completion of his doctoral thesis, Samueli joined a friend as a member of the technical staff at TRW, in Redondo Beach, Calif. “In the late ’70s and early ’80s, the defense industry was at its peak,” he recalled. “All the top students at the local colleges went into defense. Hughes and TRW were the top two—you almost didn’t consider any other company.” At TRW, Samueli was initially assigned to a communications systems group that was analyzing the wartime survivability of U.S. communications networks. A year later, he was moved into a design group that was developing circuit boards for military satellite and radio communications systems. His first assignment in that position was challenging. “I had to design a communications processor box,” he recalled. This box was part of a transmitter/receiver for a digital link in a NASA ground station. It was one of the first applications of DSP technology to a satellite communications system. “Since in those days each chip contained very few functions (like a four-bit adder or a quad flip-flop), you had to connect up hundreds or thousands of such digital logic chips to actually build a reasonable system,” Samueli said. “It was overwhelmingly complex, this fairly large box of hardware with about 1200 logic chips and several LSI [large-scale integration] multiplier chips that I had to get working all by myself, with only a technician to help. They effectively threw me into the ocean and told me to sink or swim.” “I found out later,” he said, “that my boss didn’t think I could do it. He had given me the assignment as a test, thinking that I would eventually yell for help.” Samueli had been given four months to complete the task; he did it in two and a half. “I’m Mr. Nice Guy. I’m not confrontational. So I get very frustrated when something goes wrong because I don’t like to yell at people.”—Henry Samueli After that, he was given his pick of any project in the department. He chose a contract to design a high-speed digital radio modem for the U.S. Army—a project that set him on the path that eventually led to the founding of Broadcom. This was a 26-Mb/s microwave digital radio, which, being built with digital circuits, pushed the limits of technology at a time when typical digital radios were designed around analog circuits. Succeeding required designing the fastest digital adaptive equalizer—a circuit that corrects for distortions—ever built. Peter McAdam, director of advanced technology for TRW’s electronics and technology division, was several management layers above Samueli at the time, but he recalls this project. “We were designing digital radios,” McAdam told Spectrum,” and he was doing digital signal demodulators for them. He implemented them using digital phase-lock-loop technology before the rest of the world had thought of doing such a thing. 
We didn’t have to do that part of it digitally, but he pushed it—he insisted we could do it, and got us all inventing algorithms to do so.” The lure of academia Since joining TRW in 1980, Samueli had been simultaneously teaching college engineering courses in his spare time—first at California State University, Northridge, and then at UCLA. In 1985 UCLA offered him a full-time position. Samueli jumped at the chance. “Not that I didn’t like TRW. To this day I think it was one of the best jobs I could have had. Working in the defense industry, you are given all the money and resources you need in order to develop the greatest, state-of-the-art technology. But the opportunity to be a professor at one of the top universities in the world was too good to pass up.” The best part of academia, Samueli thinks, is working with students. “They are so energetic and hardworking and motivated to learn,” he said. “It is a thrilling environment.” “Coming from a Jewish family,” he mused, “the big push was to become a medical doctor. But working in a hospital around sick people all day versus working at a university where you have all these bright eager young minds—there is just no comparison.” The other bonus of the university environment is academic freedom. “You pick a subject and go for it. You have to raise the money, but nobody tells you what to do.” Nicolaos G. Alexopoulos, now dean of engineering at the University of California, Irvine, was the chair of UCLA’s electrical engineering department during Samueli’s tenure. He recalled that Samueli was good at getting corporate research grants and donations. “I had created a corporate affiliates program for the department,” Alexopoulos said, “and Henry must have raised several million dollars in equipment donations and affiliate memberships. He was successful because the corporations related to his work, respected his research, and could tell he had genuine interest in helping the department, not just himself.” At UCLA, Samueli launched a research program in applying IC technology to high-speed digital communications, building on the digital modem project he had completed at TRW. The first Ph.D. student to join his group was Henry Nicholas, a chip designer from TRW who was working on his doctorate part time. Nicholas’s chip design background complemented Samueli’s systems architecture background, and he became a partner in building the research group, which, at its peak, had 15 graduate students. Broadcom cofounders Henry Samueli [left] and Henry Nicholas pose in front of the company’s headquarters in Irvine, Calif., in 1999. Ted Soqui/Corbis/Getty Images Nicholas complemented Samueli in another way, as the partnership continued, with the later founding of Broadcom. “The two are good cop/bad cop,” McAdam told Spectrum. “Henry [Samueli] is really mild, really nice. In a competitive environment he can be too nice. But Nick [Henry Nicholas] takes care of that, thank you very much.” Others who have worked with the two of them agree. And Samueli himself sees Nicholas as the ideal balance to his laid-back personality. “I’m Mr. Nice Guy,” he told Spectrum. “I’m not confrontational. So I get very frustrated when something goes wrong because I don’t like to yell at people.” “Nick, on the other hand,” he said, “is never shy about yelling. And you need somebody like that to run a successful corporation. 
It has turned out to be a tremendous partnership; we are complementary in every respect.” Henry Samueli’s first start-up In 1988, with his UCLA research program in full swing, pushing digital communications chips to higher and higher speeds, Samueli got a phone call from two of his former TRW co-workers. They were starting a company, PairGain Technologies, in Tustin, Calif., to build digital subscriber line (DSL) transceivers, and they needed a chief architect. Their initial product operated at integrated-services digital network (ISDN) speeds (128 kb/s), which were standard at the time. But the company then made a technological leap by developing a high-bit-rate DSL (HDSL) transceiver that operated at 1.5Mb/s over ordinary phone lines. Ben Itri, now chief technology officer of PairGain, was behind the effort to recruit Samueli. “We needed someone who could give us credibility in a theoretical area,” Itri said. “What we were proposing had adaptive digital filters, and Henry had done a lot of work in that area.” (Adaptive digital filters correct for the distortion that occurs when a broadband digital signal is sent over the telephone network, which is optimized for analog voice communications.) “He also gave us access to a pool of talented people at UCLA,” Itri told Spectrum. “After he was on board, we pitched the company to venture capitalists. They respected his background. Without him, it would have been very difficult.” While the PairGain job was of interest to Samueli, he was not ready to leave UCLA, so he signed on as a one-day-a-week PairGain consultant. He immediately brought Nicholas on board, who added a PairGain post to his already busy schedule of TRW work and Ph.D. research at UCLA. Samueli worked on the architecture, Nicholas launched a chip design group, and the company’s first product, the pioneering HDSL transceiver, was introduced in 1991. PairGain subsequently achieved about an 80 percent market share for HDSL transmission equipment—the boxes that allow the installation of high-speed digital connections between businesses and their local phone companies. “I got stock options to join PairGain,” Samueli said. “I had no idea what that meant at the time, but, boy, did I learn quick.” PairGain went public in 1993, and Samueli’s stock subsequently became worth several million dollars. How Broadcom got its start Meanwhile, Samueli’s research group at UCLA was designing all sorts of digital communications chips, using novel algorithms to implement things like QAM (quadrature amplitude modulation) modems and equalizers that had never before been done digitally. Next he proposed developing ICs for an all-digital modem that would operate at several hundred megabits per second, which was far beyond existing digital modem speeds. Samueli published his results in over 100 papers and spoke at numerous conferences, and many companies were interested in applying this work to real products. “People were calling us up and saying, ‘That was a really interesting chip design you published. Have you considered commercializing it?’ ” Samueli said. In 1991 he decided to try. He and Nicholas incorporated Broadcom, set up the company in Nicholas’s spare bedroom, and signed development contracts with Scientific Atlanta, Intel, TRW, and the U.S. Air Force. Samueli kept his UCLA post and his PairGain consulting job, hiring his graduate students as consultants to implement much of the initial work at Broadcom. 
“I had three business cards: UCLA professor, chief scientist of PairGain, and vice president of research and development of Broadcom.” (Nicholas, who may have had better business and negotiating skills, became Broadcom’s president and chief executive officer; the two are co-chairmen of the company.) The contract for Scientific Atlanta, of Norcross, Ga., clearly pushed the state of the art. New York City-based Time Warner was preparing to deploy an ambitious test of interactive digital television services in Orlando, Fla., and Scientific Atlanta had contracted with the company to build the world’s first digital cable set-top box. (Existing cable set-top boxes were analog.) What was needed was a chip-based modem to serve as the cable signal receiver for that digital box. Broadcom completed the modem in 1994 in three chips, at a time when other digital modems filled many circuit boards. Samueli got a patent for the work on the all-digital cable receiver architecture. Using Broadcom’s design, Scientific Atlanta built 2,000 cable boxes for the Orlando field trial. While the trial was a technical success, it was a marketing failure. Time Warner quietly pulled the plug on the project, and nobody talked about interactive TV for several years. Only now is the ubiquity of the World Wide Web making interactive TV a marketable product. In retrospect, the Time Warner test appears to have been about five years too early. Today, Internet TV products that merge TV viewing with Web access perform many of the functions envisioned by Time Warner years ago. Broadcom’s contract with Intel Corp., of Santa Clara, Calif., was for a chip implementing a 100-Mb/s Ethernet transceiver for a local-area network (LAN), using DSP techniques. (Available Ethernet chips at the time had a top speed of 10 Mb/s.) The chip, which shipped in 1995, became the first DSP-based transceiver for LANs. The company recently announced a 1-Gb/s Ethernet chip based on similar DSP technology. For TRW, Broadcom designed a digital frequency synthesizer chip for a military satellite application. Under the Air Force contract, Broadcom’s staff developed an anti-jam filter chip for a Global Positioning System satellite receiver. The three-chip digital modem for Scientific Atlanta got Broadcom into the cable TV business. The Ethernet chip for Intel got the company into the LAN business. Those are the company’s largest markets today. Later, related contracts drove the company into new markets. For example, one for DSL transceivers based on Broadcom’s QAM cable modem architecture, designed for Nortel Networks, of Brampton, Ont., Canada, was Broadcom’s entry into the DSL chip market. Another venture, a development partnership with Sony Corp., Tokyo, subsequently brought the company into the HDTV receiver IC business. But Broadcom did not restrict itself to handling development contracts alone for long. The modem chips it had developed for Scientific Atlanta brought other customers knocking on its door. So in 1994, the then 15-person company (14 engineers and an office manager) added a vice president of marketing and put together its first product line, soon establishing itself as the leader in the cable modem chip industry. At the time, cable modems were emerging as a broadband Internet access platform for the home market. Their downstream speeds, which today are several megabits per second, offer the fastest Internet access compared with 56-kb/s modems and DSL links. Upstream speeds, though slower, are also faster than competitors. 
Cable operators can also provide conventional telephone service over the modems. Crucial to Broadcom’s chip designs was the need to sort out the signals being sent to subscribers from the cable operator’s headend. Unlike the dedicated lines in the point-to-point links used by phone modems, cable modems share a line to the headend in a point-to-multipoint configuration. A continuous bit stream is broadcast downstream to all subscribers, and each packet carries the address of the single subscriber to which it is sent. The upstream uses a time-division multiple access (TDMA) protocol whereby users send requests to transmit data to the headend and are then assigned specific time slots in which to send the data in short bursts. The challenge of a single-chip cable modem design, according to Samueli, is coping with its high degree of complexity. The device incorporates a high-speed receiver and transmitter, both with precision analog front ends, as well as a complex media access control protocol engine. Successful execution requires a team with a broad range of expertise, including algorithm and protocol experts, DSP architects, application-specific IC (ASIC) engineers, and full custom and mixed-signal circuit designers. Broadcom also became instrumental in writing the DOCSIS (Data-Over-Cable Service Interface Specification) standard for cable modems, cooperating with General Instrument and LANcity, under the auspices of Cable Television Laboratories (CableLabs), the cable industry’s research arm in Louisville, Colo. Approved in 1998, DOCSIS is now the industry standard for all cable modems being built for the U.S. market, and was recently adopted by the International Telecommunication Union as an international cable modem standard. This market is poised for rapid growth as cable modems become readily available through computer retailers so customers can easily plug them into a cable line, rather than rent the devices from their cable service providers. Data can be transmitted at a rate of 43 Mb/s downstream and 10 Mb/s upstream using TDMA. Even though Broadcom was being run with a small staff, Nicholas and Samueli were thinking big fairly early on. Steve Tsubota, now director of Broadcom’s cable TV business unit, interviewed for a job with Samueli in 1994. Throughout the discussion, he recalled, Samueli was low key and modest. Then Tsubota asked him where he saw Broadcom going in the future. Samueli, with his 20-person company crammed into offices shared with a law firm, answered, “We want to be the Intel of communications.” Managing millionaires Four years later, on 17 April 1998, the then 350-employee company went public, making nearly two-thirds of its employees paper millionaires. (Because Samueli and Nicholas did not seek venture capital investment for Broadcom, they were each able to retain over 20 percent of the company for themselves and still be generous with stock options.) Broadcom’s stock price has appreciated by more than a factor of 10 since its initial public offering. Samueli is now a billionaire three times over, running an R&D organization with some 400 engineers, more than 50 of whom are Ph.D.s. The company as a whole now has about 700 employees, and Samueli oversees Broadcom’s research laboratories in Irvine, San Jose, and San Diego, Calif.; Atlanta, Ga.; Phoenix, Ariz.; the Netherlands; Singapore; and Bangalore, India.
Samueli claims he is not a start-up junkie; Broadcom will probably be his last start-up venture: “I can’t see myself going through that punishment all over again. So many factors of success are out of your control. I don’t believe I could create another Broadcom again, so I wouldn’t even want to try.” “I don’t think my family would put up with it, either,” he added. “Eighty-hour workweeks are very stressful on family life. I think I have the most understanding and tolerant wife in the world. There isn’t anything I wouldn’t do for her, given all that she has done for me, and her No. 1 request is for me to spend more time at home.” The money hasn’t changed him much, colleagues say. His one splurge was to buy a house on the ocean (his wife’s life-long dream). He has also greatly increased his philanthropy, with a focus on university research and on science and math education for students from kindergarten through 12th grade. “Education is the key to prosperity,” Samueli said. “I hope that by investing back into our educational infrastructure, I can plant the seeds for the next Broadcom.” He still behaves like a college professor. “I have never given up my professor’s hat,” he told Spectrum. “I love to give lectures, I love to talk to people and teach them things.” He brags about the technical successes of the engineers on his staff and of the papers they presented at recent conferences. Not an academic alone But, although UCLA still lists Samueli as a faculty member on a leave of absence, he is not sure that he will ever go back to academia. “Life in industry is simply too exciting,” he said. “At a university, you are on a treadmill. You bring in a graduate student, give him a research project, he spends three or four or five years on it, then he graduates. All that knowledge he accumulated leaves with him, and you get a fresh student who has to come up the learning curve from the bottom. You spend a lot of time repeating yourself. There is some institutional memory, but every time you have one of your students graduate, you lose a lot, even though industry and society gain from the talent you have created. “On the other hand, at our company, people don’t leave. They can in theory, but in our eight-year history, we’ve only lost four engineers out of more than 400. So you are not going through a reset every few years. You are on a continuous ramp of knowledge accumulation, and that is a huge benefit. You also have a lot more resources at your disposal: software, computers, chip fabrication.” Yet another benefit, Samueli told Spectrum, “is the focus on real products. This creates good limits. You don’t do something unless there is a real application for it. You get closure, completion, and success, and that is rewarding in and of itself. “The success of Broadcom has brought me enormous happiness in many respects; the most exciting to me is the ability to create such extensive success and happiness for so many people. At the university, I was successful, but it was on a much smaller scale. Here, some 400 engineers have become very successful, financially as well as professionally.” Alexopoulos, of the University of California at Irvine, confirms that, while at heart Samueli is an academic, “he is also a doer. 
He wants to see that his work has significant and global impact, not only in providing technology for improving society, but also in creating meaningful and challenging employment for engineers and nonengineers alike.” Although much of Samueli’s success came from his independent technical achievements, as a manager, he is a people person. Observed at a recent meeting of his laboratory heads and other key staff members, Samueli sat quietly when technical problems were discussed, but quickly jumped in during discussions about new hires, potential engineering recruits, and other human resources issues. He was a little surprised when this was pointed out to him, then said: “I think recruiting is of paramount importance to the success of most high-tech companies. I have confidence that technical issues can be solved by the talented people we have at the company, but due to my networking in the research community, one of my key roles is in identifying the best people.” The ‘nucleus of the black hole’ What often draws people to the company are Samueli’s technical credentials and reputation for sharing the credit. Said Broadcom’s Tsubota: “He is the nucleus of the black hole—an irresistible force,” attracting talent to Broadcom out of professorships, secure jobs, and corporate fellow positions. And he has a good memory for people’s strengths and weaknesses. Anne Cole, today’s cable business unit controller and engineering controller who was Broadcom’s second employee, told Spectrum that when she interviewed at Broadcom, several years after taking an introductory engineering class from Samueli, he surprised her by confronting her with her academic record. “You turned in all your homework and you blew the final,” he told her. He ended up hiring her as an office manager (she had since earned an MBA), not an engineer. He also sees helping his staff logistically as a key role, and, in that, he may be the engineer’s dream boss. At the previously mentioned meeting, the company’s information systems director presented a problem: Engineers were facing sometimes extensive delays in running computing jobs on the company’s large servers—in part because other engineers were using those same servers to run simple tasks that could be easily run from a desktop workstation. Eliminating the delays would require changes in computer utilization or the purchase of US $650,000 worth of additional servers. Another manager might have responded by creating an official policy listing what jobs could and could not be run on the company’s shared servers, burdening his engineers with bureaucracy. Samueli barely hesitated. “From an engineering perspective,” he said, “buy the machines.” But perhaps his most important attribute as a manager is his niceness. People at Broadcom often work until two in the morning. Samueli says it is because they are competitive and want their products to win in the market place. But another motive seems to come into play. The Broadcom employees seem to want to make Samueli happy. Besides being the technical center of the company, Samueli is viewed as the moral center, Tsubota said. “The engineers here don’t want to disappoint him,” controller Cole told Spectrum. “They want to meet his expectations—and he has very high expectations.” Said one employee, “When you don’t come through for Henry [Samueli], it hurts a lot more than when Nick [CEO Nicholas] yells at you.” This article appeared in the September 1999 print issue.
- Solar-Powered Tech Transforms Remote Learningby Maurizio Arseni on 19. April 2025. at 13:00
When Marc Alan Sperber of Arizona State University’s Education for Humanity initiative arrived at a refugee camp along the Thai-Myanmar border, the scene was typical of many crisis zones: no Internet, unreliable power, and few resources. But within minutes, he and local NGO partners were able to set up a full-featured digital classroom using nothing more than a solar panel and a yellow device the size of a soup can. Students equipped with only basic smartphones and old tablets were accessing content through Beekee, a Swiss-built, lightweight, standalone microserver that can turn any location into an offline-first, pop-up digital classroom. While international initiatives like Giga try to connect every school to the Internet, the timeline and cost remain hard to predict. And even then, according to UNESCO’s Global Education Monitoring Report, keeping schools in low-income countries online could run up to a billion dollars a day. Beekee, founded by Vincent Widmer and Sergio Estupiñán during their Ph.D. studies at the educational technology department of the University of Geneva, seeks to bridge the connectivity gap through its easy-to-deploy device. At the core of Beekee’s box is a Raspberry Pi–based microserver, enclosed in a ventilated 3D-printed thermoresistant plastic shell. Optimized for passive and active cooling, weather resilience, and field repairs, it can withstand heat in arid climates like those of Jordan and northern Kenya. With its devices often deployed in remote regions, where repair options are few, Beekee supplies 3D-print-friendly STL and G-code files to partners, enabling them to fabricate replacement parts on a 3D printer. “We’ve seen them use recycled plastic filament in Kenya and Lebanon to print replacement parts within days,” says Estupiñán. The device consumes less than 10 watts of power, making it easy to run for more than 12 hours on an inexpensive 20,000 milliamp-hour (mAh) power bank. Alternatively, Beekee can run on compact solar panels, with a battery backup that can provide up to two hours of operation on a cloudy day or at night. “This kind of energy efficiency is essential,” says Marcel Hertel of GIZ, the German development agency that uses Beekee in Indonesia as a digital learning platform, accessible to farmers in remote areas for training. “We work where even charging a phone is a challenge,” he says. The device runs on a custom Linux distribution and open-source software stack. Its Wi-Fi hotspot has a 40-meter range, providing enough coverage for two adjacent classrooms or a small courtyard. Up to 40 learners can connect simultaneously using their smartphones, tablets, or laptops, with no apps or Internet access needed. Beekee’s interface is browser-based. However, the yellow box isn’t meant to replace the Internet. It’s designed to complement it, syncing over 3G or 4G connections whenever bandwidth is available. In many deployment zones, though, 3G/4G connectivity exists but is fragile: mobile networks suffer from speed caps, high data costs, and congestion, so streaming educational content or relying on cloud platforms becomes impractical. But satellite-based Internet connectivity, including emerging LEO satellite providers like Starlink, still provides windows of opportunity to download and upload content on the yellow box. Beekee’s 3D design files for replacement parts allow partners to repair the organization’s rugged e-learning boxes in the field, using only a screwdriver and a 3D printer.
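The runtime figures quoted above are easy to sanity-check. The short sketch below runs the arithmetic; the 3.7-volt nominal cell voltage and 85 percent converter efficiency are typical assumptions for USB power banks, not specifications from Beekee.

```python
# Rough sanity check of the runtime figures quoted above. The 3.7 V
# nominal cell voltage and 85% converter efficiency are typical
# assumptions for USB power banks, not specifications from Beekee.

def runtime_hours(capacity_mah: float, load_watts: float,
                  cell_voltage: float = 3.7, efficiency: float = 0.85) -> float:
    """Estimated hours of operation from a power bank of the given capacity."""
    usable_energy_wh = capacity_mah / 1000 * cell_voltage * efficiency
    return usable_energy_wh / load_watts

# A 20,000 mAh bank at a ~5 W average draw (the device's peak is quoted
# as "less than 10 watts") gives roughly 12 to 13 hours, in line with
# the "more than 12 hours" figure above.
print(f"{runtime_hours(20_000, 5.0):.1f} h at 5 W")    # ~12.6 h
print(f"{runtime_hours(20_000, 10.0):.1f} h at 10 W")  # ~6.3 h
```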
Beekee Offline Moodle for E-Learning Beekee hosts e-learning tools for teachers and students, offering an offline Moodle instance—an open-source learning management system. Via Moodle, educators can use Scorm packages and H5P modules, technical standards commonly used to package and deliver e-learning material. “Beekee is designed to interoperate with existing training platforms,” says Estupiñán. “We sync learner progress, content updates, and analytics without changing how an organization already works.” Beekee also comes with Open Educational Resources (OER), including Offline Wikipedia, Khan Academy videos in multiple languages, and curated instructional content. “We don’t want just to deliver content,” says Estupiñán, “but also create a collaborative, engaging learning environment.” Before turning to Beekee, some organizations attempted to create their own offline learning platforms or worked with third-party developers. Some of them overlooked realities like extreme heat, power outages, and near-zero Internet bandwidth—while others tried solutions that were essentially file libraries masquerading as learning platforms. “Most standalone systems don’t support remote updates or syncing of learner data and analytics,” says Sperber. “They delivered PDFs, not actual learning experiences that include interactive practice, assessment, feedback, or anything of the like.” Additionally, many of the systems lacked sustainable maintenance strategies, and devices broke down under field conditions. “The tech might have looked sleek, but when things failed, there was no repair plan,” says Estupiñán. “We designed Beekee so that even nonspecialist users could fix things with a screwdriver and a local 3D printer.” Beekee runs its own production line using a 3D printer farm in Geneva, capable of producing up to 30 custom units per day. But it doesn’t make only hardware. It also offers training, instructional design support, and ongoing technical help. “The real challenge isn’t just getting technology into the field, it’s keeping it running,” says Estupiñán. The Next Frontier: Offline AI Future plans include integrating small language models (SLMs) directly into the box. A lightweight AI engine could automate tasks like grading, flagging conceptual errors, or supporting teachers with localized lesson plans. “Offline AI is the next big step,” says Estupiñán. “It lets us bring intelligent support to teachers who may be isolated, undertrained, or overwhelmed.” Beekee has partnered with more than 40 organizations across nearly 30 countries. Founded five years ago, it now has a team of seven. The company recently joined UNESCO’s Global Education Coalition alongside Coursera, Google, and Microsoft. Even though Beekee is primarily used in low-resource environments, its offline-first design is now drawing interest in broader contexts. In France and Switzerland, secondary schools are beginning to use Beekee devices to give students digital access without exposing them fully to the Internet during class. Teachers use them for outdoor projects, such as biology fieldwork, allowing students to share photos and notes over a local network. “The system is also being considered for secure, offline learning in correction facilities, and companies are exploring its potential for training in isolated, privacy-sensitive settings,” says Beekee cofounder Widmer.
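To make the syncing model concrete, here is a minimal sketch of the offline-first pattern Estupiñán describes: learner records accumulate in a local queue and are pushed only when a 3G/4G or satellite link happens to be available. The endpoint URL, table schema, and function names are hypothetical illustrations, not Beekee’s actual software.

```python
# Minimal sketch of an offline-first sync loop: learner-progress events
# queue locally and are flushed whenever an uplink briefly appears.
# The endpoint URL, schema, and function names are hypothetical;
# this is an illustration, not Beekee's actual code.

import json
import sqlite3
import time
import urllib.error
import urllib.request

SYNC_URL = "https://example.org/api/progress"  # placeholder endpoint

db = sqlite3.connect("progress_queue.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

def record_progress(event: dict) -> None:
    """Store a learner-progress event locally, regardless of connectivity."""
    db.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(event),))
    db.commit()

def link_is_up(timeout: float = 3.0) -> bool:
    """Cheap reachability probe; a real deployment might query the modem instead."""
    try:
        urllib.request.urlopen(SYNC_URL, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, so the link is up
    except OSError:
        return False

def flush_queue() -> None:
    """Push queued records one by one; stop quietly if the link drops."""
    rows = db.execute("SELECT id, payload FROM queue ORDER BY id").fetchall()
    for row_id, payload in rows:
        req = urllib.request.Request(SYNC_URL, data=payload.encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError:
            return
        db.execute("DELETE FROM queue WHERE id = ?", (row_id,))
        db.commit()

def sync_loop(poll_seconds: int = 300) -> None:
    """Every few minutes, sync if any bandwidth is available."""
    while True:
        if link_is_up():
            flush_queue()
        time.sleep(poll_seconds)
```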
- Video Friday: Robot Boxingby Evan Ackerman on 18. April 2025. at 16:00
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA London Humanoids Summit: 29–30 May 2025, LONDON IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN 2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX RSS 2025: 21–25 June 2025, LOS ANGELES ETH Robotics Summer School: 21–27 June 2025, GENEVA IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025: 5–7 September 2025, SHENZHEN CoRL 2025: 27–30 September 2025, SEOUL IEEE Humanoids: 30 September–2 October 2025, SEOUL World Robot Summit: 10–12 October 2025, OSAKA, JAPAN IROS 2025: 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Let’s step into a new era of Sci-Fi, join the fun together! Unitree will be livestreaming robot combat in about a month, stay tuned! [ Unitree ] A team of scientists and students from Delft University of Technology in the Netherlands (TU Delft) has taken first place at the A2RL Drone Championship in Abu Dhabi - an international race that pushes the limits of physical artificial intelligence, challenging teams to fly fully autonomous drones using only a single camera. The TU Delft drone competed against 13 autonomous drones and even human drone racing champions, using innovative methods to train deep neural networks for high-performance control. [ TU Delft ] RAI’s Ultra Mobile Vehicle (UMV) is learning some new tricks! [ RAI Institute ] With 28 moving joints (20 QDD actuators + 8 servo motors), Cosmo can walk with its two feet with a speed of up to 1 m/s (0.5 m/s nominal) and balance itself even when pushed. Coordinated with the motion of its head, fingers, arms and legs, Cosmo has a loud and expressive voice for effective interaction with humans. Cosmo speaks in canned phrases from the 90’s cartoon he originates from and his speech can be fully localized in any language. [ RoMeLa ] We wrote about Parallel Systems back in January of 2022, and it’s good to see that their creative take on autonomous rail is still moving along. [ Parallel Systems ] RoboCake is ready. This edible robotic cake is the result of a collaboration between researchers from EPFL (the Swiss Federal Institute of Technology in Lausanne), the Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) and pastry chefs and food scientists from EHL in Lausanne. It takes the form of a robotic wedding cake, decorated with two gummy robotic bears and edible dark chocolate batteries that power the candles. [ EPFL ] ROBOTERA’s fully self-developed five-finger dexterous hand has upgraded its skills, transforming into an esports hand in the blink of an eye! The XHAND1 features 12 active degrees of freedom, pioneering an industry-first fully direct-drive joint design. It offers exceptional flexibility and sensitivity, effortlessly handling precision tasks like finger opposition, picking up soft objects, and grabbing cards. 
Additionally, it delivers powerful grip strength with a maximum payload of nearly 25 kilograms, making it adaptable to a wide range of complex application scenarios. [ ROBOTERA ] Witness the future of industrial automation as Extend Robotics trials their cutting-edge humanoid robot in Leyland factories. In this groundbreaking video, see how the robot skillfully connects a master service disconnect unit—a critical task in factory operations. Watch onsite workers seamlessly collaborate with the robot using an intuitive XR (extended reality) interface, blending human expertise with robotic precision. [ Extend Robotics ] I kind of like the idea of having a mobile robot that lives in my garage and manages the charging and cleaning of my car. [ Flexiv ] How can we ensure robots using foundation models, such as large language models (LLMs), won’t “hallucinate” when executing tasks in complex, previously unseen environments? Our Safe and Assured Foundation Robots for Open Environments (SAFRON) Advanced Research Concept (ARC) seeks ideas to make sure robots behave only as directed & intended. [ DARPA ] What if doing your chores were as easy as flipping a switch? In this talk and live demo, roboticist and founder of 1X Bernt Børnich introduces NEO, a humanoid robot designed to help you out around the house. Watch as NEO shows off its ability to vacuum, water plants and keep you company, while Børnich tells the story of its development — and shares a vision for robot helpers that could free up your time to focus on what truly matters. [ 1X ] via [ TED ] Rodney Brooks gave a keynote at the Stanford HAI spring conference on Robotics in a Human-Centered World. There are a bunch of excellent talks from this conference on YouTube at the link below, but I think this panel is especially good, as a discussion of going from research to real-world impact. [ YouTube ] via [ Stanford HAI ] Wing CEO Adam Woodworth discusses consumer drone delivery with Peter Diamandis at Abundance 360. [ Wing ] This CMU RI Seminar is from Sangbae Kim, who was until very recently at MIT but is now the Robotics Architect at Meta’s Robotics Studio. [ CMU RI ]
- Bell Labs Turns 100, Plans to Leave Its Old Headquartersby Dina Genkina on 18. April 2025. at 13:00
This year, Bell Labs celebrates its one-hundredth birthday. At a centennial celebration held last week at the Murray Hill, N.J., campus, the lab’s impressive technological history was honored with talks, panels, demos, and over a half dozen gracefully aging Nobel laureates. During its 100-year tenure, Bell Labs scientists invented the transistor; laid down the theoretical grounding for the digital age; discovered radio astronomy, which led to the first evidence for the big bang theory; contributed to the invention of the laser; developed the Unix operating system; invented the charge-coupled device (CCD) camera; and made many more scientific and technological contributions that have earned Bell Labs 10 Nobel prizes and five Turing awards. “I normally tell people this is the ‘Bell Labs invented everything’ tour,” said Nokia Bell Labs archivist Ed Eckert as he led a tour through the lab’s history exhibit. The lab is smaller than it once was. The main campus in Murray Hill, N.J., seems like a bit of a ghost town, with empty cubicles and offices lining the halls. Now it’s planning a move to a smaller facility in New Brunswick, N.J., sometime in 2027. In its heyday, Bell Labs boasted around 6,000 workers at the Murray Hill location. That number has now dwindled to about 1,000, though more employees work at other locations around the world. The Many Accomplishments of Bell Labs Despite its somewhat diminished size, Bell Labs, now owned by Nokia, is alive and kicking. “As Nokia Bell Labs, we have a dual mission,” says Bell Labs president Peter Vetter. “On the one hand, we need to support the longevity of the core business. That is networks, mobile networks, optical networks, the networking at large, security, device research, ASICs, optical components that support that network system. And then we also have the second part of the mission, which is to help the company grow into new areas.” Some of the new areas for growth were represented in live demonstrations at the centennial. A team at Bell Labs is working on establishing the first cellular network on the moon. In February, Intuitive Machines sent up its second lunar mission, Athena, with Bell Labs’ technology on board. The team fit two full cellular networks into a briefcase-size box, the most compact networking system ever made. This cell network was self-deploying: Nobody on Earth needs to tell it what to do. The lunar lander tipped on its side upon landing and quickly went offline due to lack of solar power, but Bell Labs’ networking module had enough time to power up and transmit data back to Earth. Another Bell Labs group is focused on monitoring the world’s vast network of undersea fiber-optic cables. Undersea cables are subject to interruptions, whether it be from adversarial sabotage, undersea events like earthquakes or tsunamis, or fishing nets and ship anchors. The team wants to turn these cables into a sensor network, capable of monitoring the environment around a cable for possible damage. The team has developed a real-time technique for monitoring mild changes in cable length that’s so sensitive the lab-based demo was able to pick up tiny vibrations from the presenter’s speaking voice. This technique can pin changes down to a 10-kilometer interval of cable, greatly simplifying the search for affected regions. Nokia is taking the path less traveled when it comes to quantum computing, pursuing so-called topological quantum bits.
These qubits, if made, would be much more robust to noise than other approaches, and are more readily amenable to scaling. However, building even a single qubit of this kind has been elusive. Nokia Bell Labs’ Robert Willett has been at it since his graduate work in 1988, and the team expects to demonstrate the first NOT gate with this architecture later this year. Beam-steering antennas for point-to-point fixed wireless are normally made on printed circuit boards. But as the world moves to higher frequencies, toward 6G, conventional printed circuit-board materials are no longer cutting it—the signal loss makes them economically unviable. That’s why a team at Nokia Bell Labs has developed a way to print circuit boards on glass instead. The result is a small glass chip that has 64 integrated circuits on one side and the antenna array on the other. A 100-gigahertz link using the tech was deployed at the Paris Olympics in 2024, and a commercial product is on the road map for 2027. Mining, particularly autonomous mining—which avoids putting humans in harm’s way—relies heavily on networking. That’s why Nokia has entered the mining business, developing smart digital-twin technology that models the mine and the autonomous trucks that work on it. The company’s robo-truck system features two cellular modems, three Wi-Fi cards, and 12 Ethernet ports. The system collects different types of sensor data and correlates them on a virtual map of the mine (the digital twin). Then it uses AI to suggest necessary maintenance and to optimize scheduling. The lab is also dipping into AI. One team is working on integrating large language models with robots for industrial applications. These robots have access to a digital-twin model of the space they are in and have a semantic representation of certain objects in their surroundings. In a demo, a robot was verbally asked to identify missing boxes in a rack. The robot successfully pointed out which box wasn’t found in its intended place, and when prompted, it traveled to the storage area and identified the replacement. The key is to build robots that can “reason about the physical world,” says Matthew Andrews, a researcher in the AI lab. A test system will be deployed in a warehouse in the United Arab Emirates in the next six months. Despite impressive scientific demonstrations, there was an air of apprehension about the event. In a panel discussion about the future of innovation, Princeton engineering dean Andrea Goldsmith said, “I’ve never been more worried about the innovation ecosystem in the U.S.” Former Google CEO Eric Schmidt said in a keynote that “the current administration seems to be trying to destroy university R&D.” Nevertheless, Schmidt and others expressed optimism about the future of innovation at Bell Labs and the United States more generally. “We will win, because we are right, and R&D is the foundation of economic growth,” he said.
- The Future of AI and Robotics Is Being Led by Amazon’s Next-Gen Warehousesby Dexter Johnson on 17. April 2025. at 11:09
This is a sponsored article brought to you by Amazon. The cutting edge of robotics and artificial intelligence (AI) doesn’t occur just at NASA, or one of the top university labs, but instead is increasingly being developed in the warehouses of the e-commerce company Amazon. As online shopping continues to grow, companies like Amazon are pushing the boundaries of these technologies to meet consumer expectations. Warehouses, the backbone of the global supply chain, are undergoing a transformation driven by technological innovation. Amazon, at the forefront of this revolution, is leveraging robotics and AI to shape the warehouses of the future. Far from being just a logistics organization, Amazon is positioning itself as a leader in technological innovation, making it a prime destination for engineers and scientists seeking to shape the future of automation. Amazon: A Leader in Technological Innovation Amazon’s success in e-commerce is built on a foundation of continuous technological innovation. Its fulfillment centers are increasingly becoming hubs of cutting-edge technology where robotics and AI play a pivotal role. Heath Ruder, Director of Product Management at Amazon, explains how Amazon’s approach to integrating robotics with advanced material handling equipment is shaping the future of its warehouses. “We’re integrating several large-scale products into our next-generation fulfillment center in Shreveport, Louisiana,” says Ruder. “It’s our first opportunity to get our robotics systems combined under one roof and understand the end-to-end mechanics of how a building can run with incorporated autonomation.” Ruder is referring to the facility’s deployment of its Automated Storage and Retrieval Systems (ASRS), called Sequoia, as well as robotic arms like “Robin” and “Cardinal” and Amazon’s proprietary autonomous mobile robot, “Proteus”. Amazon has already deployed “Robin”, a robotic arm that sorts packages for outbound shipping by transferring packages from conveyors to mobile robots. This system is already in use across various Amazon fulfillment centers and has completed over three billion successful package moves. “Cardinal” is another robotic arm system that efficiently packs packages into carts before the carts are loaded onto delivery trucks. “Proteus” is Amazon’s autonomous mobile robot designed to work around people. Unlike traditional robots confined to a restricted area, Proteus is fully autonomous and navigates through fulfillment centers using sensors and a mix of AI-based and ML systems. It works with human workers and other robots to transport carts full of packages more efficiently. The integration of these technologies is estimated to increase operational efficiency by 25 percent. “Our goal is to improve speed, quality, and cost. The efficiency gains we’re seeing from these systems are substantial,” says Ruder. However, the real challenge is scaling this technology across Amazon’s global network of fulfillment centers. “Shreveport was our testing ground and we are excited about what we have learned and will apply at our next building launching in 2025.” Amazon’s investment in cutting-edge robotics and AI systems is not just about operational efficiency. It underscores the company’s commitment to being a leader in technological innovation and workplace safety, making it a top destination for engineers and scientists looking to solve complex, real-world problems. 
How AI Models Are Trained: Learning from the Real World One of the most complex challenges Amazon’s robotics team faces is how to make robots capable of handling a wide variety of tasks that require discernment. Mike Wolf, a principal scientist at Amazon Robotics, plays a key role in developing AI models that enable robots to better manipulate objects, across a nearly infinite variety of scenarios. “The complexity of Amazon’s product catalog—hundreds of millions of unique items—demands advanced AI systems that can make real-time decisions about object handling,” explains Wolf. But how do these AI systems learn to handle such an immense variety of objects? Wolf’s team is developing machine learning algorithms that enable robots to learn from experience. “We’re developing the next generation of AI and robotics. For anyone interested in this field, Amazon is the place where you can make a difference on a global scale.” —Mike Wolf, Amazon Robotics In fact, robots at Amazon continuously gather data from their interactions with objects, refining their ability to predict how items will be affected when manipulated. Every interaction a robot has—whether it’s picking up a package or placing it into a container—feeds back into the system, refining the AI model and helping the robot to improve. “AI is continually learning from failure cases,” says Wolf. “Every time a robot fails to complete a task successfully, that’s actually an opportunity for the system to learn and improve.” This data-centric approach supports the development of state-of-the-art AI systems that can perform increasingly complex tasks, such as predicting how objects are affected when manipulated. This predictive ability will help robots determine the best way to pack irregularly shaped objects into containers or handle fragile items without damaging them. “We want AI that understands the physics of the environment, not just basic object recognition. The goal is to predict how objects will move and interact with one another in real time,” Wolf says. What’s Next in Warehouse Automation Valerie Samzun, Senior Technical Product Manager at Amazon, leads a cutting-edge robotics program that aims to enhance workplace safety and make jobs more rewarding, fulfilling, and intellectually stimulating by allowing robots to handle repetitive tasks. “The goal is to reduce certain repetitive and physically demanding tasks from associates,” explains Samzun. “This allows them to focus on higher-value tasks in skilled roles.” This shift not only makes warehouse operations more efficient but also opens up new opportunities for workers to advance their careers by developing new technical skills. “Our research combines several cutting-edge technologies,” Samzun shared. “The project uses robotic arms equipped with compliant manipulation tools to detect the amount of force needed to move items without damaging them or other items.” This is an advancement that incorporates learnings from previous Amazon robotics projects. “This approach allows our robots to understand how to interact with different objects in a way that’s safe and efficient,” says Samzun. In addition to robotic manipulation, the project relies heavily on AI-driven algorithms that determine the best way to handle items and utilize space. Samzun believes the technology will eventually expand to other parts of Amazon’s operations, finding multiple applications across its vast network. “The potential applications for compliant manipulation are huge,” she says. 
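As a rough illustration of the force-aware behavior Samzun describes, the sketch below closes a gripper in small steps and stops as soon as the object is held, rather than squeezing to a fixed force. The callbacks are hypothetical stand-ins, not Amazon’s actual robotics interfaces.

```python
# Toy illustration of force-limited ("compliant") grasping: tighten in
# small increments and stop once the item is held, never exceeding a
# safety ceiling. The callbacks are hypothetical stand-ins, not
# Amazon's actual robotics interfaces.

from dataclasses import dataclass
from typing import Callable

@dataclass
class GraspResult:
    success: bool
    applied_force_n: float

def compliant_grasp(close_step: Callable[[], None],
                    read_force_n: Callable[[], float],
                    item_is_slipping: Callable[[], bool],
                    max_force_n: float = 20.0) -> GraspResult:
    """Increase grip force step by step until the item stops slipping,
    or give up at the force ceiling rather than risk damage."""
    force = 0.0
    while force < max_force_n:
        close_step()                # tighten the gripper slightly
        force = read_force_n()      # e.g., fingertip force estimate
        if not item_is_slipping():  # e.g., from tactile or vision feedback
            return GraspResult(True, force)
    return GraspResult(False, force)
```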
Attracting Engineers and Scientists: Why Amazon is the Place to Be As Amazon continues to push the boundaries of what’s possible with robotics and AI, it’s also becoming a highly attractive destination for engineers, scientists, and technical professionals. Both Wolf and Samzun emphasize the unique opportunities Amazon offers to those interested in solving real-world problems at scale. For Wolf, who transitioned to Amazon from NASA’s Jet Propulsion Laboratory, the appeal lies in the sheer impact of the work. “The draw of Amazon is the ability to see your work deployed at scale. There’s no other place in the world where you can see your robotics work making a direct impact on millions of people’s lives every day,” he says. Wolf also highlights the collaborative nature of Amazon’s technical teams. Whether working on AI algorithms or robotic hardware, scientists and engineers at Amazon are constantly collaborating to solve new challenges. Amazon’s culture of innovation extends beyond just technology. It’s also about empowering people. Samzun, who comes from a non-engineering background, points out that Amazon is a place where anyone with the right mindset can thrive, regardless of their academic background. “I came from a business management background and found myself leading a robotics project,” she says. “Amazon provides the platform for you to grow, learn new skills, and work on some of the most exciting projects in the world.” For young engineers and scientists, Amazon offers a unique opportunity to work on state-of-the-art technology that has real-world impact. “We’re developing the next generation of AI and robotics,” says Wolf. “For anyone interested in this field, Amazon is the place where you can make a difference on a global scale.” The Future of Warehousing: A Fusion of Technology and Talent From Amazon’s leadership, it’s clear that the future of warehousing is about more than just automation. It’s about harnessing the power of robotics and AI to create smarter, more efficient, and safer working environments. But at its core it remains centered on people in its operations and those who make this technology possible—engineers, scientists, and technical professionals who are driven to solve some of the world’s most complex problems. Amazon’s commitment to innovation, combined with its vast operational scale, makes it a leader in warehouse automation. The company’s focus on integrating robotics, AI, and human collaboration is transforming how goods are processed, stored, and delivered. And with so many innovative projects underway, the future of Amazon’s warehouses is one where technology and human ingenuity work hand in hand. “We’re building systems that push the limits of robotics and AI,” says Wolf. “If you want to work on the cutting edge, this is the place to be.”
- Future Chips Will Be Hotter Than Everby James Myers on 16. April 2025. at 13:30
For over 50 years now, egged on by the seeming inevitability of Moore’s Law, engineers have managed to double the number of transistors they can pack into the same area every two years. But while the industry was chasing logic density, an unwanted side effect became more prominent: heat. In a system-on-chip (SoC) like today’s CPUs and GPUs, temperature affects performance, power consumption, and energy efficiency. Over time, excessive heat can slow the propagation of critical signals in a processor and lead to a permanent degradation of a chip’s performance. It also causes transistors to leak more current and as a result waste power. In turn, the increased power consumption cripples the energy efficiency of the chip, as more and more energy is required to perform the exact same tasks. The root of the problem lies with the end of another law: Dennard scaling. This law states that as the linear dimensions of transistors shrink, voltage should decrease such that the total power consumption for a given area remains constant. Dennard scaling effectively ended in the mid-2000s at the point where any further reductions in voltage were not feasible without compromising the overall functionality of transistors. Consequently, while the density of logic circuits continued to grow, power density did as well, generating heat as a by-product. As chips become increasingly compact and powerful, efficient heat dissipation will be crucial to maintaining their performance and longevity. To ensure this efficiency, we need a tool that can predict how new semiconductor technology—processes to make transistors, interconnects, and logic cells—changes the way heat is generated and removed. My research colleagues and I at Imec have developed just that. Our simulation framework uses industry-standard and open-source electronic design automation (EDA) tools, augmented with our in-house tool set, to rapidly explore the interaction between semiconductor technology and the systems built with it. The results so far are inescapable: The thermal challenge is growing with each new technology node, and we’ll need new solutions, including new ways of designing chips and systems, if there’s any hope that they’ll be able to handle the heat. The Limits of Cooling Traditionally, an SoC is cooled by blowing air over a heat sink attached to its package. Some data centers have begun using liquid instead because it can absorb more heat than gas. Liquid coolants—typically water or a water-based mixture—may work well enough for the latest generation of high-performance chips such as Nvidia’s new AI GPUs, which reportedly consume an astounding 1,000 watts. But neither fans nor liquid coolers will be a match for the smaller-node technologies coming down the pipeline. Heat follows a complex path as it’s removed from a chip, but 95 percent of it exits through the heat sink. Imec Take, for instance, nanosheet transistors and complementary field-effect transistors (CFETs). Leading chip manufacturers are already shifting to nanosheet devices, which swap the fin in today’s fin field-effect transistors for a stack of horizontal sheets of semiconductor. CFETs take that architecture to the extreme, vertically stacking more sheets and dividing them into two devices, thus placing two transistors in about the same footprint as one. Experts expect the semiconductor industry to introduce CFETs in the 2030s. 
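Before getting to specific nodes, it helps to make the Dennard-scaling argument above concrete. Dynamic CMOS power scales roughly as P = aCV^2f; the toy calculation below applies the classic textbook scale factors, as an illustration only, not figures from Imec’s study, to show why power density now climbs with each node.

```python
# Why post-Dennard scaling raises power density: dynamic CMOS power is
# roughly P = a*C*V^2*f (activity a, switched capacitance C, supply
# voltage V, clock f). Scale factors below are the classic textbook
# ones, not numbers from Imec's study.

def power_density(a: float, C: float, V: float, f: float, area: float) -> float:
    return a * C * V**2 * f / area

base = power_density(a=0.1, C=1.0, V=1.0, f=1.0, area=1.0)  # arbitrary units
s = 0.7  # ~0.7x linear shrink per node, i.e. ~2x transistor density

# Ideal Dennard scaling: C, V, and area all shrink, f rises,
# and power density stays flat.
dennard = power_density(a=0.1, C=s, V=s, f=1 / s, area=s**2)

# Post-Dennard: V no longer drops, so density climbs ~1/s per node
# even with the clock frequency held constant.
post_dennard = power_density(a=0.1, C=s, V=1.0, f=1.0, area=s**2)

print(f"Dennard:      {dennard / base:.2f}x power density")       # ~1.00x
print(f"Post-Dennard: {post_dennard / base:.2f}x power density")  # ~1.43x
```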
In our work, we looked at an upcoming version of the nanosheet called A10 (referring to a node of 10 angstroms, or 1 nanometer) and a version of the CFET called A5, which Imec projects will appear two generations after the A10. Simulations of our test designs showed that the power density in the A5 node is 12 to 15 percent higher than in the A10 node. This increased density will, in turn, lead to a projected temperature rise of 9 °C for the same operating voltage. Complementary field-effect transistors will stack nanosheet transistors atop each other, increasing density and temperature. To operate at the same temperature as nanosheet transistors (A10 node), CFETs (A5 node) will have to run at a reduced voltage. Imec Nine degrees might not seem like much. But in a data center, where hundreds of thousands to millions of chips are packed together, it can mean the difference between stable operation and thermal runaway—that dreaded feedback loop in which rising temperature increases leakage power, which increases temperature, which increases leakage power, and so on until, eventually, safety mechanisms must shut down the hardware to avoid permanent damage. Researchers are pursuing advanced alternatives to basic liquid and air cooling that may help mitigate this kind of extreme heat. Microfluidic cooling, for instance, uses tiny channels etched into a chip to circulate a liquid coolant inside the device. Other approaches include jet impingement, which involves spraying a gas or liquid at high velocity onto the chip’s surface, and immersion cooling, in which the entire printed circuit board is dunked in the coolant bath. But even if these newer techniques come into play, relying solely on coolers to dispense with extra heat will likely be impractical. That’s especially true for mobile systems, which are limited by size, weight, battery power, and the need to not cook their users. Data centers, meanwhile, face a different constraint: Because cooling is a building-wide infrastructure expense, it would cost too much and be too disruptive to update the cooling setup every time a new chip arrives. Performance Versus Heat Luckily, cooling technology isn’t the only way to stop chips from frying. A variety of system-level solutions can keep heat in check by dynamically adapting to changing thermal conditions. One approach places thermal sensors around a chip. When the sensors detect a worrying rise in temperature, they signal a reduction in operating voltage and frequency—and thus power consumption—to counteract heating. But while such a scheme solves thermal issues, it might noticeably affect the chip’s performance. For example, the chip might always work poorly in hot environments, as anyone who’s ever left their smartphone in the sun can attest. Another approach, called thermal sprinting, is especially useful for multicore data-center CPUs. It is done by running a core until it overheats and then shifting operations to a second core while the first one cools down. This process maximizes the performance of a single thread, but it can cause delays when work must migrate between many cores for longer tasks. Thermal sprinting also reduces a chip’s overall throughput, as some portion of it will always be disabled while it cools. System-level solutions thus require a careful balancing act between heat and performance. 
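A minimal sketch of those two policies, with thresholds that are purely illustrative rather than taken from Imec’s simulations, might look like the following.

```python
# Simplified sketch of the two system-level policies described above:
# sensor-triggered frequency throttling and "thermal sprinting" across
# cores. Thresholds and scaling steps are illustrative only.

THROTTLE_C = 95.0   # back off above this sensor temperature
RESUME_C = 85.0     # recover full speed below this temperature

def throttle_policy(temp_c: float, freq_ghz: float,
                    f_max: float = 3.0, f_min: float = 1.0) -> float:
    """DVFS-style response: step the clock down while hot, creep back up when cool."""
    if temp_c > THROTTLE_C:
        return max(f_min, freq_ghz * 0.9)
    if temp_c < RESUME_C:
        return min(f_max, freq_ghz * 1.1)
    return freq_ghz

def pick_sprint_core(core_temps_c: list[float], limit_c: float = 95.0) -> int | None:
    """Thermal sprinting: move the hot thread to the coolest core,
    or pause (return None) if every core needs to cool down."""
    coolest = min(range(len(core_temps_c)), key=lambda i: core_temps_c[i])
    return coolest if core_temps_c[coolest] < limit_c else None
```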
To apply them effectively, SoC designers must have a comprehensive understanding of how power is distributed on a chip and where hot spots occur, where sensors should be placed and when they should trigger a voltage or frequency reduction, and how long it takes parts of the chip to cool off. Even the best chip designers, though, will soon need even more creative ways of managing heat. Making Use of a Chip’s Backside A promising pursuit involves adding new functions to the underside, or backside, of a wafer. This strategy mainly aims to improve power delivery and computational performance. But it might also help resolve some heat problems. New technologies can reduce the voltage that needs to be delivered to a multicore processor so that the chip maintains a minimum voltage while operating at an acceptable frequency. A backside power-delivery network does this by reducing resistance. Backside capacitors lower transient voltage losses. Backside integrated voltage regulators allow different cores to operate at different minimum voltages as needed.Imec Imec foresees several backside technologies that may allow chips to operate at lower voltages, decreasing the amount of heat they generate. The first technology on the road map is the so-called backside power-delivery network (BSPDN), which does precisely what it sounds like: It moves power lines from the front of a chip to the back. All the advanced CMOS foundries plan to offer BSPDNs by the end of 2026. Early demonstrations show that they lessen resistance by bringing the power supply much closer to the transistors. Less resistance results in less voltage loss, which means the chip can run at a reduced input voltage. And when voltage is reduced, power density drops—and so, in turn, does temperature. By changing the materials within the path of heat removal, backside power-delivery technology could make hot spots on chips even hotter. Imec After BSPDNs, manufacturers will likely begin adding capacitors with high energy-storage capacity to the backside as well. Large voltage swings caused by inductance in the printed circuit board and chip package can be particularly problematic in high-performance SoCs. Backside capacitors should help with this issue because their closer proximity to the transistors allows them to absorb voltage spikes and fluctuations more quickly. This arrangement would therefore enable chips to run at an even lower voltage—and temperature—than with BSPDNs alone. Finally, chipmakers will introduce backside integrated voltage-regulator (IVR) circuits. This technology aims to curtail a chip’s voltage requirements further still through finer voltage tuning. An SoC for a smartphone, for example, commonly has 8 or more compute cores, but there’s no space on the chip for each to have its own discrete voltage regulator. Instead, one off-chip regulator typically manages the voltage of four cores together, regardless of whether all four are facing the same computational load. IVRs, on the other hand, would manage each core individually through a dedicated circuit, thereby improving energy efficiency. Placing them on the backside would save valuable space on the frontside. It is still unclear how backside technologies will affect heat management; demonstrations and simulations are needed to chart the effects. Adding new technology will often increase power density, and chip designers will need to consider the thermal consequences. 
In placing backside IVRs, for instance, will thermal issues improve if the IVRs are evenly distributed or if they are concentrated in specific areas, such as the center of each core and memory cache? Recently, we showed that backside power delivery may introduce new thermal problems even as it solves old ones. The cause is the vanishingly thin layer of silicon that’s left when BSPDNs are created. In a frontside design, the silicon substrate can be as thick as 750 micrometers. Because silicon conducts heat well, this relatively bulky layer helps control hot spots by spreading heat from the transistors laterally. Adding backside technologies, however, requires thinning the substrate to about 1 µm to provide access to the transistors from the back. Sandwiched between two layers of wires and insulators, this slim silicon slice can no longer move heat effectively toward the sides. As a result, heat from hyperactive transistors can get trapped locally and forced upward toward the cooler, exacerbating hot spots. Our simulation of an 80-core server SoC found that BSPDNs can raise hot-spot temperatures by as much as 14 °C. Design and technology tweaks—such as increasing the density of the metal on the backside—can improve the situation, but we will need more mitigation strategies to avoid it completely. Preparing for “CMOS 2.0” BSPDNs are part of a new paradigm of silicon logic technology that Imec is calling CMOS 2.0. This emerging era will also see advanced transistor architectures and specialized logic layers. The main purpose of these technologies is optimizing chip performance and power efficiency, but they might also offer thermal advantages, including improved heat dissipation. In today’s CMOS chips, a single transistor drives signals to both nearby and faraway components, leading to inefficiencies. But what if there were two drive layers? One layer would handle long wires and buffer these connections with specialized transistors; the other would deal only with connections under 10 µm. Because the transistors in this second layer would be optimized for short connections, they could operate at a lower voltage, which again would reduce power density. How much, though, is still uncertain. In the future, parts of chips will be made on their own silicon wafers using the appropriate process technology for each. They will then be 3D stacked to form SoCs that function better than those built using only one process technology. But engineers will have to carefully consider how heat flows through these new 3D structures. Imec What is clear is that solving the industry’s heat problem will be an interdisciplinary effort. It’s unlikely that any one technology alone—whether that’s thermal-interface materials, transistors, system-control schemes, packaging, or coolers—will fix future chips’ thermal issues. We will need all of them. And with good simulation tools and analysis, we can begin to understand how much of each approach to apply and on what timeline. Although the thermal benefits of CMOS 2.0 technologies—specifically, backside functionalization and specialized logic—look promising, we will need to confirm these early projections and study the implications carefully. With backside technologies, for instance, we will need to know precisely how they alter heat generation and dissipation—and whether that creates more new problems than it solves. Chip designers might be tempted to adopt new semiconductor technologies assuming that unforeseen heat issues can be handled later in software. 
That may be true, but only to an extent. Relying too heavily on software solutions would have a detrimental impact on a chip’s performance because these solutions are inherently imprecise. Fixing a single hot spot, for example, might require reducing the performance of a larger area that is otherwise not overheating. It will therefore be imperative that SoCs and the semiconductor technologies used to build them are designed hand in hand. The good news is that more EDA products are adding features for advanced thermal analysis, including during early stages of chip design. Experts are also calling for a new method of chip development called system technology co-optimization. STCO aims to dissolve the rigid abstraction boundaries between systems, physical design, and process technology by considering them holistically. Deep specialists will need to reach outside their comfort zone to work with experts in other chip-engineering domains. We may not yet know precisely how to resolve the industry’s mounting thermal challenge, but we are optimistic that, with the right tools and collaborations, it can be done. This article was updated on 22 April 2025.
- Navigating the Angstrom Eraby Wiley on 16. April 2025. at 13:22
This is a sponsored article brought to you by Applied Materials. The semiconductor industry is in the midst of a transformative era as it bumps up against the physical limits of making faster and more efficient microchips. As we progress toward the “angstrom era,” where chip features are measured in mere atoms, the challenges of manufacturing have reached unprecedented levels. Today’s most advanced chips, such as those at the 2nm node and beyond, are demanding innovations not only in design but also in the tools and processes used to create them. At the heart of this challenge lies the complexity of defect detection. In the past, optical inspection techniques were sufficient to identify and analyze defects in chip manufacturing. However, as chip features have continued to shrink and device architectures have evolved from 2D planar transistors to 3D FinFET and Gate-All-Around (GAA) transistors, the nature of defects has changed. Defects are often at scales so small that traditional methods struggle to detect them. No longer just surface-level imperfections, they are now commonly buried deep within intricate 3D structures. The result is an exponential increase in data generated by inspection tools, with defect maps becoming denser and more complex. In some cases, the number of defect candidates requiring review has increased 100-fold, overwhelming existing systems and creating bottlenecks in high-volume production. Applied Materials’ CFE technology achieves sub-nanometer resolution, enabling the detection of defects buried deep within 3D device structures. The burden created by the surge in data is compounded by the need for higher precision. In the angstrom era, even the smallest defect — a void, residue, or particle just a few atoms wide — can compromise chip performance and the yield of the chip manufacturing process. Distinguishing true defects from false alarms, or “nuisance defects,” has become increasingly difficult. Traditional defect review systems, while effective in their time, are struggling to keep pace with the demands of modern chip manufacturing. The industry is at an inflection point, where the ability to detect, classify, and analyze defects quickly and accurately is no longer just a competitive advantage — it’s a necessity. Applied Materials Adding to the complexity of this process is the shift toward more advanced chip architectures. Logic chips at the 2nm node and beyond, as well as higher-density DRAM and 3D NAND memories, require defect review systems capable of navigating intricate 3D structures and identifying issues at the nanoscale. These architectures are essential for powering the next generation of technologies, from artificial intelligence to autonomous vehicles. But they also demand a new level of precision and speed in defect detection. In response to these challenges, the semiconductor industry is witnessing a growing demand for faster and more accurate defect review systems. In particular, high-volume manufacturing requires solutions that can analyze exponentially more samples without sacrificing sensitivity or resolution. By combining advanced imaging techniques with AI-driven analytics, next-generation defect review systems are enabling chipmakers to separate the signal from the noise and accelerate the path from development to production. 
eBeam Evolution: Driving the Future of Defect Detection Electron beam (eBeam) imaging has long been a cornerstone of semiconductor manufacturing, providing the ultra-high resolution necessary to analyze defects that are invisible to optical techniques. Unlike light, which has a limited resolution due to its wavelength, electron beams can achieve resolutions at the sub-nanometer scale, making them indispensable for examining the tiniest imperfections in modern chips. Applied Materials The journey of eBeam technology has been one of continuous innovation. Early systems relied on thermal field emission (TFE), which generates an electron beam by heating a filament to extremely high temperatures. While TFE systems are effective, they have known limitations. The beam is relatively broad, and the high operating temperatures can lead to instability and shorter lifespans. These constraints became increasingly problematic as chip features shrank and defect detection requirements grew more stringent. Enter cold field emission (CFE) technology, a breakthrough that has redefined the capabilities of eBeam systems. Unlike TFE, CFE operates at room temperature, using a sharp, cold filament tip to emit electrons. This produces a narrower, more stable beam with a higher density of electrons that results in significantly improved resolution and imaging speed. Applied Materials For decades, CFE systems were limited to lab usage because it was not possible to keep the tools up and running for adequate periods of time — primarily because at “cold” temperatures, contaminants inside the chambers adhere to the eBeam emitter and partially block the flow of electrons. In December 2022, Applied Materials announced that it had solved the reliability issues with the introduction of its first two eBeam systems based on CFE technology. Applied is an industry leader at the forefront of defect detection innovation. It is a company that has consistently pushed the boundaries of materials engineering to enable the next wave of innovation in chip manufacturing. After more than 10 years of research across a global team of engineers, Applied mitigated the CFE stability challenge by developing multiple breakthroughs. These include new technology that delivers orders of magnitude higher vacuum compared to TFE, special materials in the eBeam column that reduce contamination, and a novel chamber self-cleaning process that further keeps the tip clean. CFE technology achieves sub-nanometer resolution, enabling the detection of defects buried deep within 3D device structures. This is a capability that is critical for advanced architectures like Gate-All-Around (GAA) transistors and 3D NAND memory. Additionally, CFE systems offer faster imaging speeds compared to traditional TFE systems, allowing chipmakers to analyze more defects in less time. The Rise of AI in Semiconductor Manufacturing While eBeam technology provides the foundation for high-resolution defect detection, the sheer volume of data generated by modern inspection tools has created a new challenge: how to process and analyze this data quickly and accurately. This is where artificial intelligence (AI) comes into play. AI is transforming manufacturing processes across industries, and semiconductors are no exception.
AI algorithms — particularly those based on deep learning — are being used to automate and enhance the analysis of defect inspection data. These algorithms can sift through massive datasets, identifying patterns and anomalies that would be impossible for human engineers to detect manually. By training with real in-line data, AI models can learn to distinguish between true defects — such as voids, residues, and particles — and false alarms, or “nuisance defects.” This capability is especially critical in the angstrom era, where the density of defect candidates has increased exponentially. Enabling the Next Wave of Innovation: The SEMVision H20 The convergence of AI and advanced imaging technologies is unlocking new possibilities for defect detection. AI-driven systems can classify defects with remarkable accuracy, sorting them into categories that provide engineers with actionable insights. This not only speeds up the defect review process, but it also improves its reliability while reducing the risk of overlooking critical issues. In high-volume manufacturing, where even small improvements in yield can translate into significant cost savings, AI is becoming indispensable. The transition to advanced nodes, the rise of intricate 3D architectures, and the exponential growth in data have created a perfect storm of manufacturing challenges, demanding new approaches to defect review. These challenges are being met with Applied’s new SEMVision H20. Applied Materials By combining second-generation cold field emission (CFE) technology with advanced AI-driven analytics, the SEMVision H20 is not just a tool for defect detection but a catalyst for change in the semiconductor industry. A New Standard for Defect Review The SEMVision H20 builds on the legacy of Applied’s industry-leading eBeam systems, which have long been the gold standard for defect review. This second-generation CFE offers higher, sub-nanometer resolution and faster speed than both TFE and first-generation CFE because of increased electron flow through its filament tip. These innovative capabilities enable chipmakers to identify and analyze the smallest defects and buried defects within 3D structures. Precision at this level is essential for emerging chip architectures, where even the tiniest imperfection can compromise performance and yield. But the SEMVision H20’s capabilities go beyond imaging. Its deep learning AI models are trained with real in-line customer data, enabling the system to automatically classify defects with remarkable accuracy. By distinguishing true defects from false alarms, the system reduces the burden on process control engineers and accelerates the defect review process. The result is a system that delivers 3X faster throughput while maintaining the industry’s highest sensitivity and resolution, a combination that is transforming high-volume manufacturing. “One of the biggest challenges chipmakers often have with adopting AI-based solutions is trusting the model. The success of the SEMVision H20 validates the quality of the data and insights we are bringing to customers. The pillars of technology that comprise the product are what builds customer trust. It’s not just the buzzword of AI. The SEMVision H20 is a compelling solution that brings value to customers.” Broader Implications for the Industry The impact of the SEMVision H20 extends far beyond its technical specifications.
By enabling faster and more accurate defect review, the system is helping chipmakers reduce factory cycle times, improve yields, and lower costs. In an industry where margins are razor-thin and competition is fierce, these improvements are not just incremental - they are game-changing. Additionally, the SEMVision H20 is enabling the development of faster, more efficient, and more powerful chips. As the demand for advanced semiconductors continues to grow - driven by trends like artificial intelligence, 5G, and autonomous vehicles - the ability to manufacture these chips at scale will be critical. The system is helping to make this possible, ensuring that chipmakers can meet the demands of the future. A Vision for the Future Applied’s work on the SEMVision H20 is more than just a technological achievement; it’s a reflection of the company’s commitment to solving the industry’s toughest challenges. By leveraging cutting-edge technologies like CFE and AI, Applied is not only addressing today’s pain points but also shaping the future of defect review. As the semiconductor industry continues to evolve, the need for advanced defect detection solutions will only grow. With the SEMVision H20, Applied is positioning itself as a key enabler of the next generation of semiconductor technologies, from logic chips to memory. By pushing the boundaries of what’s possible, the company is helping to ensure that the industry can continue to innovate, scale, and thrive in the angstrom era and beyond.
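For readers curious what the deep-learning classification of true defects versus nuisance defects described earlier might look like in code, here is a minimal, hypothetical training sketch for a small convolutional classifier over labeled SEM review crops. The folder layout, image size, and network are illustrative assumptions only; this is not Applied Materials’ SEMVision software.

```python
# Minimal sketch of a defect-vs-nuisance image classifier of the kind
# described above, trained on labeled SEM review crops. The folder
# layout, image size, and small CNN are illustrative assumptions, not
# Applied Materials' SEMVision pipeline.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Expect crops sorted into review_crops/defect/ and review_crops/nuisance/
tfm = transforms.Compose([transforms.Grayscale(),
                          transforms.Resize((128, 128)),
                          transforms.ToTensor()])
train_ds = datasets.ImageFolder("review_crops", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = nn.Sequential(                        # small CNN: enough for a sketch
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),               # 2 classes: defect / nuisance
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```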
- The Many Ways Tariffs Will Hit Your Electronicsby Samuel K. Moore on 16. April 2025. at 13:00
Like the industry he covers, Shawn DuBravac had already had quite a week by the time IEEE Spectrum spoke to him early last Thursday, 10 April 2025. As chief economist at IPC, the 3,000-member industry association for electronics manufacturers, he’s tasked with figuring out the impact of the tsunami of tariffs the U.S. government has planned, paused, or enacted. Earlier that morning he’d recalculated price changes for electronics in the U.S. market following a 90-day pause on steeper tariffs that had been unveiled the previous week, the implementation of universal 10 percent tariffs, and a 125 percent tariff on Chinese imports. A day after this interview, he was recalculating again, following an exemption on electronics of an unspecified duration. According to DuBravac, the effects of all this will likely include higher prices, less choice for consumers, stalled investment, and even stifled innovation. How have you had to adjust your forecasts today [Thursday 10 April]? Shawn DuBravac: I revised our forecasts this morning to take into account what the world would look like if the 90-day pause holds into the future and the 125 percent tariffs on China also hold. If you look at smartphones, it would be close to a 91 percent impact. But if all the tariffs are put back in place as they were specified on “Liberation Day,” then that would be a 101 percent price impact. The estimates become highly dependent on how influential China is for final assembly. So, if you look instead at something like TVs, 76 percent of televisions that are imported into the United States are coming from Mexico, where there has long been strong TV manufacturing because there were already tariffs in place on smart flat-panel televisions. The price impact I see for TVs is somewhere between 12 and 18 percent, as opposed to a near doubling for smartphones. Video-game consoles are another story. In 2024, 86 percent of video-game consoles were coming into the United States from China. So the tariffs have a very big impact. That said, the number of smartphones coming from China has actually declined pretty significantly in recent years. It was still about 72 percent in 2024, but Vietnam was 14 percent and India was 12 percent. Only a couple years ago the United States wasn’t importing any meaningful amount of smartphones from India, and it’s now become a very important hub. It sounds like the supply chain started shifting well ahead of these tariffs. DuBravac: Supply chains are really designed to be dynamic, adaptive, and resilient. So they’re constantly reoptimizing. I almost think of supply chains like living, breathing entities. If there is a disruption in one part, it’s like it lurches forward to figure out how to resolve the constraint, how to heal. We make these estimates with the presumption that nothing changes, but everything would change if this 125 percent were to become permanent. You would see an acceleration of the decoupling from China that has been happening since 2017 and accelerated during the pandemic. It’s also important to recognize that the United States isn’t the only buyer of smartphones. They’re produced in a global market, and so the supply chains are going to optimize based on that global-market dynamic. Maybe the rest of the chain could remain intact, and for example, China could continue to produce smartphones for Europe, Asia, and Latin America. How can supply chains adapt in this constantly changing environment? DuBravac: That, to me, is the most detrimental aspect of all of this.
Supply chains want to adjust, but if they’re not sure what the environment is going to be in the future, they will be hesitant. If you were investing in a new factory—especially a modern, cutting-edge, semiautonomous factory—these are long-term investments. You’re looking at a 20- to 50-year time horizon, so you’re not going to make those types of investments in a geography if you’re not sure what the broader situation is. I think one of the great ironies of all of this is that there was already a decoupling from China taking place, but because the tariff dynamics have been so fluid, it causes a pause in new business investment. As a result of that potential pause, the impact of tariffs could be more pronounced on U.S. consumers, because supply chains don’t adjust as quickly as they might have adjusted in a more certain environment. A lot of damage was done because of the uncertainty that’s been created, and it’s not clear to me that any of that uncertainty has been resolved. Our 3,000 member companies express a tremendous amount of uncertainty about the current environment. Lower-priced electronics have thin margins. What does that mean for the low-end consumer? DuBravac: What I see there is that the households that are constrained financially are often the consumers of low-price products, and they’re the ones that are most likely to see tariff costs pushed through. There’s just no margin along the way to absorb those higher costs, and so they might see the largest price increases. A low-price laptop would probably see a higher price increase in percentage terms. So I think the challenge there is that the households least well positioned to handle the impact are the ones that will probably see the most impact. For some products, we tend to have higher price elasticities at lower price points, which means that a small price change tends to have a big negative impact on demand. There could be other things happening in the background as well, but the net result is that U.S. consumers have less choice. Some companies have already announced that they were going to cut out their lower-priced models, because it no longer makes economic sense to sell into the marketplace. That could happen on a company basis within their model selections, but it could also happen broadly, in an entire category where you might see the three or four lowest-priced options for a given category exit the market. So now you’re only left with more expensive options. What other effects are tariffs having? DuBravac: Another long-term effect we’ve talked about is that as companies try to optimize the cost, they relocate engineering staff to address cost. They’re pulling that engineering staff from other problems that they were trying to solve, like the next cutting-edge innovation. So some of that loss is potentially a loss of innovation. Companies are going to worry about cost, and as a result, they’re not going to make the next iteration of product as innovative. It’s hard to measure, but I think that it is a potential negative by-product. The other thing is tariffs generally allow domestic producers to raise their price as well. You’ve already seen that for steel manufacturers. Maybe that makes U.S. companies more solvent or more viable, but at the end of the day, it’s consumers and businesses that will be paying higher prices.
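For readers who want to see how estimates like DuBravac’s smartphone figure are built, here is a rough back-of-the-envelope sketch in Python using the import shares and tariff rates cited above. It is an illustration only, not IPC’s forecasting model, and the assumption of full (100 percent) pass-through of tariffs to consumer prices is ours.

```python
# Back-of-the-envelope sketch of a weighted tariff price impact, using the
# smartphone import shares and tariff rates cited in the interview.
# Illustration only, not IPC's model; full pass-through to consumer prices
# is an assumption added here for simplicity.

# 2024 U.S. smartphone import shares by country of final assembly (from the interview)
import_shares = {"China": 0.72, "Vietnam": 0.14, "India": 0.12, "Other": 0.02}

# Tariff scenario: 125 percent on Chinese imports, universal 10 percent elsewhere
tariff_rates = {"China": 1.25, "Vietnam": 0.10, "India": 0.10, "Other": 0.10}

# Expected price impact = sum over sources of (share x tariff rate)
price_impact = sum(import_shares[c] * tariff_rates[c] for c in import_shares)

print(f"Estimated smartphone price impact: {price_impact:.0%}")
# Roughly 93 percent with these inputs -- in the same ballpark as the
# "close to a 91 percent impact" figure cited, which reflects a more detailed model.
```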
- Meet the “First Lady of Engineering”by Willie D. Jones on 15. April 2025. at 16:00
For more than a century, women and racial minorities have fought for access to education and employment opportunities once reserved exclusively for white men. The life of Yvonne Young “Y.Y.” Clark is a testament to the power of perseverance in that fight. As a smart Black woman who shattered the barriers imposed by race and gender, she made history multiple times during her career in academia and industry. She probably is best known as the first woman to serve as a faculty member in the engineering college at Tennessee State University, in Nashville. Her pioneering spirit extended far beyond the classroom, however, as she continuously staked out new territory for women and Black professionals in engineering. She accomplished a lot before she died on 27 January 2019 at her home in Nashville at the age of 89. Clark is the subject of the latest biography in IEEE-USA’s Famous Women Engineers in History series. “Don’t Give Up” was her mantra. An early passion for technology Born on 13 April 1929 in Houston, Clark moved with her family to Louisville, Ky., as a baby. She was raised in an academically driven household. Her father, Dr. Coleman M. Young Jr., was a surgeon. Her mother, Hortense H. Young, was a library scientist and journalist. Her mother’s “Tense Topics” column, published by the Louisville Defender newspaper, tackled segregation, housing discrimination, and civil rights issues, instilling awareness of social justice in Y.Y. Clark’s passion for technology became evident at a young age. As a child, she secretly repaired her family’s malfunctioning toaster, surprising her parents. It was a defining moment, signaling to her family that she was destined for a career in engineering—not in education like her older sister, a high school math teacher. “Y.Y.’s family didn’t create her passion or her talents. Those were her own,” said Carol Sutton Lewis, co-host and producer for the third season of the “Lost Women of Science” podcast, on which Clark was profiled. “What her family did do, and what they would continue to do, was make her interests viable in a world that wasn’t fair.” Clark’s interest in studying engineering was precipitated by her passion for aeronautics. She said all the pilots she spoke with had studied engineering, so she was determined to do so. She joined the Civil Air Patrol and took simulated flying lessons. She then learned to fly an airplane with the help of a family friend. Despite her academic excellence, though, racial barriers stood in her way. She graduated at age 16 from Louisville’s Central High School in 1945. Her parents, concerned that she was too young to attend college, sent her to Boston for two additional years at the Girls’ Latin School and Roxbury Memorial High School. She then applied to the University of Louisville, where she was initially accepted and offered a full scholarship. When university administrators realized she was Black, however, they rescinded the scholarship and the admission, Clark said on the “Lost Women of Science” podcast, which included clips from when her daughter interviewed her in 2007. As Clark explained in the interview, the state of Kentucky offered to pay her tuition to attend Howard University, a historically Black college in Washington, D.C., rather than integrate its publicly funded university. Breaking barriers in higher education Although Howard provided an opportunity, it was not free of discrimination. Clark faced gender-based barriers, according to the IEEE-USA biography. 
She was the only woman among 300 mechanical engineering students, many of whom were World War II veterans. Despite the challenges, she persevered and in 1951 became the first woman to earn a bachelor’s degree in mechanical engineering from the university. The school downplayed her historic achievement, however. In fact, she was not allowed to march with her classmates at graduation. Instead, she received her diploma during a private ceremony in the university president’s office. A career defined by firsts Determined to forge a career in engineering, Clark repeatedly encountered racial and gender discrimination. In a 2007 Society of Women Engineers (SWE) StoryCorps interview, she recalled that when she applied for an engineering position with the U.S. Navy, the interviewer bluntly told her, “I don’t think I can hire you.” When she asked why not, he replied, “You’re female, and all engineers go out on a shakedown cruise,” the trip during which the performance of a ship is tested before it enters service or after it undergoes major changes such as an overhaul. She said the interviewer told her, “The omen is: ‘No females on the shakedown cruise.’” Clark eventually landed a job with the U.S. Army’s Frankford Arsenal gauge laboratories in Philadelphia, becoming the first Black woman hired there. She designed gauges and finalized product drawings for the small-arms ammunition and range-finding instruments manufactured there. Tensions arose, however, when some of her colleagues resented that she earned more money due to overtime pay, according to the IEEE-USA biography. To ease workplace tensions, the Army reduced her hours, prompting her to seek other opportunities. Her future husband, Bill Clark, saw the difficulty she was having securing interviews, and suggested she use the gender-neutral name Y.Y. on her résumé. The tactic worked. She became the first Black woman hired by RCA in 1955. She worked for the company’s electronic tube division in Camden, N.J. Although she excelled at designing factory equipment, she encountered more workplace hostility. “Sadly,” the IEEE-USA biography says, she “felt animosity from her colleagues and resentment for her success.” When Bill, who had taken a faculty position as a biochemistry instructor at Meharry Medical College in Nashville, proposed marriage, she eagerly accepted. They married in December 1955, and she moved to Nashville. In 1956 Clark applied for a full-time position at Ford Motor Co.’s Nashville glass plant, where she had interned during the summers while she was a Howard student. Despite her qualifications, she was denied the job due to her race and gender, she said. She decided to pursue a career in academia, becoming in 1956 the first woman to teach mechanical engineering at Tennessee State University. In 1965 she became the first woman to chair TSU’s mechanical engineering department. While teaching at TSU, she pursued further education, earning a master’s degree in engineering management from Nashville’s Vanderbilt University in 1972—another step in her lifelong commitment to professional growth. After 55 years with the university, where she was also a freshman student advisor for much of that time, Clark retired in 2011 and was named professor emeritus. 
A legacy of leadership and advocacy Clark’s influence extended far beyond TSU. She was active in the Society of Women Engineers after becoming its first Black member in 1951. Racism, however, followed her even within professional circles. At the 1957 SWE conference in Houston, the event’s hotel initially refused her entry due to segregation policies, according to a 2022 profile of Clark. Under pressure from the society’s leadership, the hotel compromised; Clark could attend sessions but had to be escorted by a white woman at all times and was not allowed to stay in the hotel despite having paid for a room. She was reimbursed and instead stayed with relatives. As a result of that incident, the SWE vowed never again to hold a conference in a segregated city. Over the decades, Clark remained a champion for women in STEM. In one SWE interview, she advised future generations: “Prepare yourself. Do your work. Don’t be afraid to ask questions, and benefit by meeting with other women. Whatever you like, learn about it and pursue it. “The environment is what you make it. Sometimes the environment is hostile, but don’t worry about it. Be aware of it so you aren’t blindsided.” Her contributions earned her numerous accolades including the 1998 SWE Distinguished Engineering Educator Award and the 2001 Tennessee Society of Professional Engineers Distinguished Service Award. A lasting impression Clark’s legacy was not confined to engineering; she was deeply involved in Nashville community service. She served on the board of the 18th Avenue Family Enrichment Center and participated in the Nashville Area Chamber of Commerce. She was active in the Hendersonville Area chapter of The Links, a volunteer service organization for Black women, and the Nashville alumnae chapter of the Delta Sigma Theta sorority. She also mentored members of the Boy Scouts, many of whom went on to pursue engineering careers. Clark spent her life knocking down barriers that tried to impede her. She didn’t just break the glass ceiling—she engineered a way through it for people who came after her.
- What Engineers Should Know About AI Jobs in 2025by Gwendolyn Rak on 15. April 2025. at 14:00
It seems AI jobs are here to stay, based on the latest data from the 2025 AI Index Report. To better understand the current state of AI, the annual report from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) collects a wide range of information on model performance, investment, public opinion, and more. Every year, IEEE Spectrum summarizes our top takeaways from the entire report by plucking out a series of charts, but here we zero in on the technology’s effect on the workforce. Much of the report’s findings about jobs are based on data from LinkedIn and Lightcast, a research firm that analyzes job postings from more than 51,000 websites. Last year’s report showed signs that the AI hiring boom was quieting. But this year, AI job postings were back up in most places after the prior year’s lag. In the United States, for example, the percentage of all job postings demanding AI skills rose to 1.8 percent, up from 1.4 percent in 2023. Will AI Create Job Disruptions? Many people, including software engineers, fear that AI will make their jobs expendable—but others believe the technology will provide new opportunities. A McKinsey & Co. survey found that 28 percent of executives in software engineering expect generative AI to decrease their organizations’ workforces in the next three years, while 32 percent expect the workforce to increase. Overall, the portion of executives who anticipate a decrease in the workforce seems to be declining. In fact, a separate study from LinkedIn and GitHub suggests that adoption of GitHub Copilot, the generative AI-powered coding assistant, is associated with a small increase in software-engineering hiring. The study also found these new hires were required to have fewer advanced programming skills, as Peter McCrory, an economist and labor researcher at LinkedIn, noted during a panel discussion on the AI Report last Thursday. As tools like GitHub Copilot are adopted, the mix of required skills may shift. “Big picture, what we see on LinkedIn in recent years is that members are increasingly emphasizing a broader range of skills and increasingly uniquely human skills, like ethical reasoning or leadership,” McCrory said. Python Remains a Top Skill Still, programming skills remain central to AI jobs. In both 2023 and 2024, Python was the top specialized skill listed in U.S. AI job postings. The programming language also held onto its lead this year as the language of choice for many AI programmers. Taking a broader look at AI-related skills, most were listed in a greater percentage of job postings in 2024 compared with those in 2023, with two exceptions: autonomous driving and robotics. Generative AI in particular saw a large increase, growing by nearly a factor of four. AI’s Gender Gap A gender gap is appearing in AI talent. According to LinkedIn’s research, women in most countries are less likely to list AI skills on their profiles, and it estimates that in 2024, nearly 70 percent of AI professionals on the platform were male. The ratio has been “remarkably stable over time,” the report states. Academia and Industry Although models are becoming more efficient, training AI is expensive. That expense is one of the primary reasons most of today’s notable AI advances are coming from industry instead of academia. 
“Sometimes in academia, we make do with what we have, so you’re seeing a shift of our research toward topics that we can afford to do with the limited computing [power] that we have,” AI Index steering committee co-director Yolanda Gil said at last week’s panel discussion. “That is a loss in terms of advancing the field of AI,” said Gil. Gil and others at the event emphasized the importance of investment in academia, as well as collaboration across sectors—industry, government, and education. Such partnerships can both provide needed resources to researchers and create a better understanding of the job market among educators, enabling them to prepare students to fill important roles.
- Video Friday: Tiny Robot Bug Hops and Jumpsby Evan Ackerman on 11. April 2025. at 15:30
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA London Humanoids Summit: 29–30 May 2025, LONDON IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN 2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX RSS 2025: 21–25 June 2025, LOS ANGELES ETH Robotics Summer School: 21–27 June 2025, GENEVA IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025: 5–7 September 2025, SHENZHEN World Robot Summit: 10–12 October 2025, OSAKA, JAPAN IROS 2025: 19–25 October 2025, HANGZHOU, CHINA IEEE Humanoids: 30 September–2 October 2025, SEOUL CoRL 2025: 27–30 September 2025, SEOUL Enjoy today’s videos! MIT engineers developed an insect-sized jumping robot that can traverse challenging terrains while using far less energy than an aerial robot of comparable size. This tiny, hopping robot can leap over tall obstacles and jump across slanted or uneven surfaces carrying about 10 times more payload than a similar-sized aerial robot, opening the door to many new applications. [ MIT ] CubiX is a wire-driven robot that connects to the environment through wires, with drones used to establish these connections. By integrating with various tools and a robot, it performs tasks beyond the limitations of its physical structure. [ JSK Lab ] Thanks, Shintaro! It’s a game a lot of us played as children—and maybe even later in life: unspooling measuring tape to see how far it would extend before bending. But to engineers at the University of California San Diego, this game was an inspiration, suggesting that measuring tape could become a great material for a robotic gripper. [ University of California San Diego ] I enjoyed the Murderbot books, and the trailer for the TV show actually looks not terrible. [ Murderbot ] For service robots, being able to operate an unmodified elevator is much more difficult (and much more important) than you might think. [ Pudu Robotics ] There’s a lot of buzz around impressive robotics demos — but taking Physical AI from demo to real-world deployment is a journey that demands serious engineering muscle. Hammering out the edge cases and getting to scale is 500x the effort of getting to the first demo. See our process for building this out for the singulation and induction Physical AI solution trusted by some of the world’s leading parcel carriers. Here’s to the teams likewise committed to the grind toward reliability and scale. [ Dexterity Robotics ] I am utterly charmed by the design of this little robot. [ RoMeLa ] This video shows a shortened version of Issey Miyake’s Fly With Me runway show from 2025 Paris Men’s Fashion Week. My collaborators and I brought two industrial robots to life to be the central feature of the minimalist scenography for the Japanese brand. Each ABB IRB 6640 robot held a two meter square piece of fabric, and moved synchronously in flowing motions to match the emotional timing of the runway show. 
With only three-weeks development time and three days on-site, I built custom live coding tools that opened up the industrial robots to more improvisational workflows. This level of reliable, real-time control unlocked the flexibility needed by the Issey Miyake team to make the necessary last-minute creative decisions for the show. [ Atonaton ] Meet Clone’s first musculoskeletal android: Protoclone, the most anatomically accurate robot in the world. Based on a natural human skeleton, Protoclone is actuated with over 1,000 Myofibers, Clone’s proprietary artificial muscle technology. [ Clone Robotics ] There are a lot of heavily produced humanoid robot videos from the companies selling them, but now that these platforms are entering the research space, we should start getting a more realistic sense of their capabilities. [ University College London ] Here’s a bit more footage from RIVR on their home delivery robot. [ RIVR ] And now, this. [ EngineAI ] Robots are at the heart of sci-fi, visions of the future, but what if that future is now? And what if those robots, helping us at work and at home, are simply an extension of the tools we’ve used for millions of years? That’s what artist and engineer Catie Cuan thinks, and it’s part of the reason she teaches robots to dance. In this episode we meet the people at the frontiers of the future of robotics and Astro Teller introduces two groundbreaking projects, Everyday Robots and Intrinsic, that have advanced how robots could work not just for us but with us. [ Moonshot Podcast ]
- Climb the Career Ladder with Focused Expertiseby Rahul Pandey on 10. April 2025. at 18:00
This article is crossposted from IEEE Spectrum’s rebooted careers newsletter! In partnership with tech career development company Taro, every issue will be bringing you deeper insight into how to pursue your goals and navigate professional challenges. Sign up now to get insider tips, expert advice, and practical strategies delivered to your inbox for free. One of the key strategies for gaining seniority is expertise. Whether you’re trying to get promoted or land a new job at a higher level, you need to demonstrate mastery over a valuable skill or domain. Here’s what most job seekers get wrong about this: They think that being an “expert” is reserved for senior or principal engineers who have decades of experience. Nothing could be further from the truth. Instead of assuming that expertise is a distant goal, realize that you can become more knowledgeable than anyone as long as you narrow the scope appropriately. For example, in one afternoon, you can become the go-to person in your team of 10 for anything related to configuring logs for your company’s version control software. In a company with any amount of sophistication, each person’s knowledge is incomplete. There will always be problems that fall into the category of “If we had more time, we’d look into that.” Your goal is to identify which of these gaps could make a meaningful business impact. It need not be purely technical; it could be about search engine optimization (SEO), launch processes, or improving the developer experience. This is actionable advice if you’re on the job market. If you’re looking for a job, especially as a junior engineer, your #1 goal is to demonstrate mastery over a technology or domain. This means you should be selective about how much you claim to know on your resume. If you mention every programming language or analysis tool you’ve ever touched, you are making it impossible for someone to identify your level of expertise. This is especially true when you have less than 4 years of experience. When you claim to know everything, I’ll assume you actually suck at everything. You should be able to teach me something about each of the projects or technologies you mention, e.g. discuss tradeoffs or interesting technical decisions you made. Yes, you do disqualify yourself from certain jobs where you didn’t list the technologies they were looking for. But those jobs weren’t a good fit anyway. -Rahul ICYMI: Top Programming Languages If you’re taking our advice and looking to develop expertise in a programming language your team needs, check out Spectrum‘s Top Programming Languages interactive. There you’ll find out what programming languages are the most important in your field, and which are most in demand by employers. Read more here. ICYMI: Data Centers Seek Engineers Amid a Talent Shortage The rapid development of AI is fueling a data center boom, unlocking billions of dollars in investments to build the infrastructure needed to support data- and energy-hungry models. This surge in construction has created a strong demand for certain electrical engineers, whose expertise in power systems and energy efficiency is essential for designing, building, and maintaining energy-intensive AI infrastructure. Read more here. ICYMI: In Praise of “Normal” Engineers You don’t have to be a superhero to develop valuable skills either. 
In one of the most popular articles on IEEE Spectrum this month, Charity Majors breaks down the dangers of lionizing the “10x engineer,” writing “Individual engineers don’t own software; engineering teams own software. It doesn’t matter how fast an individual engineer can write software. What matters is how fast the team can collectively write, test, review, ship, maintain, refactor, extend, architect, and revise the software that they own.” Read more here.
- IEEE TryEngineering STEM Grants Fund Over 50 Projectsby Robert Schneider on 10. April 2025. at 17:00
IEEE TryEngineering, a program within Educational Activities, fosters outreach to school-age children worldwide by equipping teachers and IEEE volunteers with tools for engaging activities. The science, technology, engineering, and math resources include peer-reviewed lesson plans, games, and activities that are designed to captivate and inspire—all provided at no cost. The TryEngineering STEM grant program provides financial support to IEEE volunteers to start, sustain, or scale up selected outreach projects in their communities. Since its inception in 2021, 144 projects have been funded, totaling more than US $176,000. At least 1,000 IEEE volunteers have led programs, engaging with more than 19,000 students. Last year the grant program awarded more than $70,379 to 58 volunteer-led projects, and 462 applications from nine IEEE regions were received. IEEE members involved in preuniversity outreach programs, including STEM Champions and members of the preuniversity education coordinating committee, reviewed the submissions using a criteria-based rubric. The full list of funded projects can be found here. What follows is a sampling. Eight donor-supported projects STEM Education Workshop 2024: Introducing the Internet of Things to High School Students, funded by the Taenzer Memorial Fund with support from the IEEE Foundation, featured a hands-on activity that provided an introduction to the IoT, programming, and basic microcontroller concepts. Forty high school and vocational school students and nine teachers from the Itenas Bandung electrical engineering study program in Indonesia attended. Twelve IEEE volunteers facilitated the program. Through experimentation with microelectronics, students were able to be creative, spurring increased interest and a desire to further explore technology. The Taenzer Fund subsidized seven additional proposals to support engineering in developing countries. They totaled $10,000 and reached more than 300 students. The programs included: Exploring the Future: An IEEE STEM Industry Tour in Indonesia. This event in Jakarta engaged 20 students with hands-on workshops, networking opportunities, and visits to cutting-edge facilities involved with 5G, AI, and ocean engineering. IEEE STEM Empowerment: Student Workshop Series. In these workshops, also held in Indonesia, 20 students tackled hands-on projects in communications technology, AI, and ocean engineering. IEEE STEM Teacher Workshop: Empowering Educators for Future Innovators. Fifty participants attended the event in Sukabumi, Indonesia. It offered hands-on sessions on pedagogy, cutting-edge technologies, and ways to increase gender inclusivity. IEEE Women in Engineering Day. A three-day session in Kenya included STEM competitions, interactive workshops, mentorship sessions, and hands-on activities to empower young women. WIE Impact. A series of workshops and events held in Zaghouan, Tunisia, engaged 160 students with activities in coding, cybersecurity, robotics, space exploration, sustainability, and first aid. Exploring Sustainable Futures: Empowering Students With IoT-driven Aquaponics Systems for STEM Enthusiasts. This hands-on program in Kuala Lumpur, Malaysia, taught 28 students about coding basics, real-time system monitoring, and IoT-driven aquaponics systems. Aquaponics, a food-production method, combines aquaculture with hydroponics. Development of a Game—Multiplayer Kuis (Gamukis)—for Students of Senior High School 1 Cangkringan in Increasing Learning Enthusiasm. 
Held at Amikom University, in Yogyakarta, Indonesia, this program taught 150 students how to design educational video games. Students who participated in the program held at the Atal Tinkering Lab in Narasaraopet, Andhra Pradesh, learned about the fundamentals of Internet of Things and its applications. Atal Tinkering Lab Program Team IEEE society-sponsored programs The generous support from the Taenzer Fund was supplemented by financial assistance from IEEE groups including the Communications, Oceanic Engineering, and Signal Processing societies, as well as IEEE Women in Engineering. The IEEE Signal Processing Society funded three projects including Train the STEM Trainers in Secondary Schools-Multiplier Effect STEM Outreach. The “train the trainers” program involved 350 students and 10 parents, more than 80 teachers, and 30 volunteers. The teachers were trained in robotics and coding using mBlock and Python. The students got experience with calculators, digital counters, LED displays, robotic cars, smart dustbins, and more. A focus on Internet of Things Another notable project was the ConnectXperience: The Journey into the World of IoT. Held at the Atal Tinkering Lab in Narasaraopet, Andhra Pradesh, India, this program engaged more than 400 students who learned about IoT fundamentals, robotics, programming, electronics, data analytics, networking, cybersecurity, and other innovative applications of IoT. 2025 grants TryEngineering recently announced its 2025 STEM grant recipients. Out of more than 410 applications received, funding was awarded to 58 programs from nine sections, for a total of $70,379. The list of recipients can be found here. To contribute to the program, visit the TryEngineering Fund donation page.
- The Great Chatbot Debate: Do They Really Understand?by Eliza Strickland on 10. April 2025. at 15:01
The large language models (LLMs) that power today’s chatbots have gotten so astoundingly capable that AI researchers are hard-pressed to assess those capabilities—it seems that no sooner is there a new test than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and calculations that simulates true understanding? To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I was the moderator of the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full. Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and she has emerged over the past decade as one of the fiercest critics of today’s leading AI companies and their approach to AI. She’s also known as one of the coauthors of the seminal 2021 paper “On the Dangers of Stochastic Parrots,” which laid out the possible risks of LLMs (and caused Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the “no” position. Taking the “yes” position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was VP of AI. During his time at Microsoft he coauthored the influential preprint “Sparks of Artificial General Intelligence,” which described his early experiments with OpenAI’s GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that made him feel that the model had reached a new level of comprehension. Without further ado, we bring you the matchup that I call “Parrots vs. Sparks.”
- First Supercritical CO2 Circuit Breaker Debutsby Emily Waltz on 9. April 2025. at 16:36
Researchers this month will begin testing a high-voltage circuit breaker that can quench an arc and clear a fault with supercritical carbon dioxide fluid. The first-of-its-kind device could replace conventional high-voltage breakers, which use the potent greenhouse gas sulfur hexafluoride, or SF6. Such equipment is scattered widely throughout power grids as a way to stop the flow of electrical current in an emergency. “SF6 is a fantastic insulator, but it’s very bad for the environment—probably the worst greenhouse gas you can think of,” says Johan Enslin, a program director at U.S. Advanced Research Projects Agency–Energy (ARPA-E), which funded the research. The greenhouse warming potential of SF6 is nearly 25,000 times as high as that of carbon dioxide, he notes. If successful, the invention, developed by researchers at the Georgia Institute of Technology, could have a big impact on greenhouse gas emissions. Hundreds of thousands of circuit breakers dot power grids globally, and nearly all of the high voltage ones are insulated with SF6. A high-voltage circuit breaker interrupter, like this one made by GE Vernova, stops current by mechanically creating a gap and an arc, and then blasting high-pressure gas through the gap. This halts the current by absorbing free electrons and quenching the arc as the dielectric strength of the gas is increased. GE Vernova On top of that, SF6 byproducts are toxic to humans. After the gas quenches an arc, it can decompose into substances that can irritate the respiratory system. People who work on SF6-insulated equipment have to wear full respirators and protective clothing. The European Union and California are phasing out the use of SF6 and other fluorinated gases (F-gases) in electrical equipment, and several other regulators are following suit. In response, researchers globally are racing to develop alternatives. Over the last five years, ARPA-E has funded 15 different early-stage circuit breaker projects. And GE Vernova has developed products for the European market that use a gas mixture that includes an F-gas, but at a fraction of the concentration of conventional SF6 breakers. Reinventing Circuit Breakers With Supercritical CO2 The job of a grid-scale circuit breaker is to interrupt the flow of electrical current when something goes wrong, such as a fault caused by a lightning strike. These devices are placed throughout substations, power generation plants, transmission and distribution networks, and industrial facilities where equipment operates in tens to hundreds of kilovolts. Unlike home circuit breakers, which can isolate a fault with a small air gap, grid-scale breakers need something more substantial. Most high-voltage breakers rely on a mechanical interrupter housed in an enclosure containing SF6, which is a non-conductive insulating gas. When a fault occurs, the device breaks the circuit by mechanically creating a gap and an arc, and then blasts the high-pressure gas through the gap, absorbing free electrons and quenching the arc as the dielectric strength of the gas is increased. In Georgia Tech’s design, supercritical carbon dioxide quenches the arc. The fluid is created by putting CO2 under very high pressure and temperature, turning it into a substance that’s somewhere between a gas and a liquid. Because supercritical CO2 is quite dense, it can quench an arc and avoid reignition of a new arc by reducing the momentum of electrons—or at least that’s the theory. 
Led by Lukas Graber, head of Georgia Tech’s plasma and dielectrics lab, the research group will run its 72-kV prototype AC breaker through a synthetic test circuit at the University of Wisconsin-Milwaukee beginning in late April. The group is also building a 245-kV version. The use of supercritical CO2 isn’t new, but designing a circuit breaker around it is. The challenge was to build the breaker with components that can withstand the high pressure needed to sustain supercritical CO2, says Graber. The team turned to the petroleum industry to find the parts, and found all but one: the bushing. This crucial component serves as a feed-through to carry current through equipment enclosures. But a bushing that can withstand 120 atmospheres of pressure didn’t exist. So Georgia Tech made its own using mineral-filled epoxy resins, copper conductors, steel pipes, and blank flanges. “They had to go back to the fundamentals of the bushing design to make the whole breaker work,” says Enslin. “That’s where they are making the biggest contribution, in my eyes.” The compact design of Georgia Tech’s breaker will also allow it to fit in tighter spaces without sacrificing power density, he says. Replacing a substation’s existing circuit breakers with this design will require some adjustments, including the addition of a heat pump in the vicinity for thermal management of the breaker. If the tests on the synthetic circuit go well, Graber plans to run the breaker through a battery of real-world simulations at KEMA Laboratories’ Chalfont, Penn., location—a gold-standard certification facility. The Georgia Tech team built its circuit breaker with parts that can withstand the very high pressures of supercritical CO2. Alfonso Jose Cruz GE Vernova Markets SF6-alternative Circuit Breaker If Georgia Tech’s circuit breaker makes it to the market, it will have to compete with GE Vernova, which had a 20-year head start on developing SF6-free circuit breakers. In 2018, the company installed its first SF6-free gas-insulated substation in Europe, which included a 145-kV-class AC circuit breaker that’s insulated with a gas mixture it calls g3. It’s composed of CO2, oxygen, and a small amount of C4F7N, or heptafluoroisobutyronitrile. This fluorinated greenhouse gas isn’t good for the environment either. But it comprises less than 5 percent of the gas mixture, so it lowers the greenhouse warming potential by up to 99 percent compared with SF6. That makes the warming potential still far greater than that of CO2 and methane, but it’s a start. “One of the reasons we’re using this technology is because we can make an SF6-free circuit breaker that will actually bolt onto the exact foundation of our equivalent SF6 breaker,” says Todd Irwin, a high-voltage circuit breaker senior product specialist at GE Vernova. It’s a drop-in replacement that will “slide right into a substation,” he says. Workers must still wear full protective gear when they maintain or fix the machine like they do for SF6 equipment, Irwin says. The company also makes a particular type of breaker called a live tank circuit breaker without the fluorinated component, he says. All of these approaches, including Georgia Tech’s supercritical CO2, depend on mechanical action to open and close the circuit. This takes up precious time in the event of a fault. That’s inspired many researchers to turn to semiconductors, which can do the switching a lot faster, and don’t need a gas to turn off the current. 
“With mechanical, it can take up to four or five cycles to clear the fault and that’s so much energy that you have to absorb,” says Enslin at ARPA-E. A semiconductor can potentially do it in a millisecond or less, he says. But commercial development of these solid-state circuit breakers is still in its early stages, and is focused on medium voltages. “It will take some time to get them to the required high voltages,” Enslin says. The work may be niche, but the impact could be high. About 1 percent of the SF6 in electrical equipment leaks out annually. In 2018, that translated to 9,040 tons (8,200 tonnes) of SF6 emitted globally, accounting for about 1 percent of the global warming impact that year.
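To put those figures in CO2-equivalent terms, here is a quick back-of-the-envelope calculation (ours, using only the numbers cited in this article: 8,200 tonnes of SF6 leaked in 2018 and a warming potential nearly 25,000 times that of CO2):

```python
# Rough CO2-equivalent of annual SF6 leakage, using only the figures cited above.
# A back-of-the-envelope check, not a formal emissions inventory.

sf6_leaked_tonnes = 8_200   # SF6 emitted globally in 2018 (metric tons)
gwp_sf6 = 25_000            # "nearly 25,000 times" the warming potential of CO2

co2_equivalent_tonnes = sf6_leaked_tonnes * gwp_sf6
print(f"~{co2_equivalent_tonnes / 1e6:.0f} million tonnes CO2-equivalent")
# About 205 million tonnes of CO2-equivalent from roughly 1 percent leakage alone.
```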
- Airbus Is Working on a Superconducting Electric Aircraftby Glenn Zorpette on 9. April 2025. at 15:10
One of the greatest climate-related engineering challenges right now is the design and construction of a large, zero-emission, passenger airliner. And in this massive undertaking, no airplane maker is as invested as Airbus. At the Airbus Summit, a symposium for journalists on 24 and 25 March, top executives sketched out a bold, tech-forward vision for the company’s next couple of generations of aircraft. The highlight, from a tech perspective, is a superconducting, fuel-cell powered airliner. Airbus’s strategy is based on parallel development efforts. While undertaking the enormous R&D projects needed to create the large, fuel-cell aircraft, the company said it will also work aggressively on an airliner designed to wring the most possible efficiency out of combustion-based propulsion. For this plane, the company is targeting a 20-to-30 percent reduction in fuel consumption, according to Bruno Fichefeux, head of future programmes at Airbus. The plane would be a single-aisle airliner, designed to succeed Airbus’s A320 family of aircraft, the highest-selling passenger jet aircraft on the market, with nearly 12,000 delivered. The company expects the new plane to enter service some time in the latter half of the 2030s. Airbus hopes to achieve such a large efficiency gain by exploiting emerging advances in jet engines, wings, lightweight, high-strength composite materials, and sustainable aviation fuel. For example, Airbus disclosed that it is now working on a pair of advanced jet engines, the more radical of which would have an open fan whose blades would spin without a surrounding nacelle. Airbus is evaluating such an engine in a project with partner CFM International, a joint venture between GE Aerospace and Safran Aircraft Engines. Without a nacelle to enclose them, an engine’s fan blades can be very large, permitting higher levels of “bypass air,” which is the air drawn in by the fan and pushed around the engine core—separate from the air used to combust fuel—and expelled to provide thrust. The ratio of bypass air to combustion air is an important measure of engine performance, with higher ratios indicating higher efficiencies, according to Mohamed Ali, chief technology and operating officer for GE Aerospace. Typical bypass ratios today are around 11 or 12, but the open-fan design could enable ratios as high as 60, according to Ali. The partners have already tested open-fan engines in two different series of wind-tunnel tests in Europe, Ali added. “The results have been extremely encouraging, not only because they are really good in terms of performance and noise validation, but also [because] they’re validating the computational analysis that we have done,” Ali said at the Airbus event. A scale model of an open-fan aircraft engine was tested last year in a wind tunnel in Modane, France. The tests were conducted by France’s national aerospace research agency and Safran Aircraft Engines, which is working on open-fan engines with GE Aerospace. Safran Aircraft Engines Fuel-cell airliner is a cornerstone of zero-emission goals In parallel with this advanced combustion-powered airliner, Airbus has been developing a fuel-cell aircraft for five years under a program called ZEROe. At the Summit, Airbus CEO Guillaume Faury backed off of a goal to fly such a plane by 2035, citing the lack of a regulatory framework for certifying such an aircraft as well as the slow pace of the build-out of infrastructure needed to produce “green” hydrogen at commercial scale and at competitive prices. 
“We would have the risk of a sort of ‘Concorde of hydrogen’ where we would have a solution, but that would not be a commercially viable solution at scale,” Faury explained. That said, he took pains to reaffirm the company’s commitment to the project. “We continue to believe in hydrogen,” he declared. “We’re absolutely convinced that this is an energy for the future for aviation, but there’s just more work to be done. More work for Airbus, and more work for the others around us to bring that energy to something that is at scale, that is competitive, and that will lead to a success, making a significant contribution to decarbonization.” Many of the world’s major industries, including aviation, have pledged to achieve zero net greenhouse gas emissions by the year 2050, a fact that Faury and other Airbus officials repeatedly invoked as a key driver of the ZEROe project. Later in the event, Glenn Llewellyn, Airbus’s vice president in charge of the ZEROe program, described the project in detail, indicating an effort of breathtaking technological ambition. The envisioned aircraft would seat at least 100 people and have a range of 1,000 nautical miles (1,850 kilometers). It would be powered by four fuel-cell “engines” (two on each wing), each with a power output of 2 megawatts. According to Hauke Luedders, head of fuel cell propulsion systems development at Airbus, the company has already done extensive tests in Munich on a 1.2-megawatt system built with partners including Liebherr Group, ElringKlinger, Magna Steyr, and Diehl. Luedders said the company is focusing on low-temperature proton-exchange-membrane fuel cells, although it has not yet settled on the technology. But the real stunner was Llewellyn’s description of a comprehensive program at Airbus to design and test a complete superconducting electrical powertrain for the fuel-cell aircraft. “As the hydrogen stored on the aircraft is stored at a very cold temperature, minus 253 degrees Celsius, we can use this temperature and the cryogenic technology to also efficiently cool down the electrics in the full system,” Llewellyn explained. “It significantly improves the energy efficiency and the performance. And even if this is an early technology, with the right efforts and the right partnerships, this could be a game changer for our fuel-cell aircraft, for our fully electric aircraft, enabling us to design larger, more powerful, and more efficient aircraft.” In response to a question from IEEE Spectrum, Llewellyn elaborated that all of the major components of the electric propulsion system would be cryo-cooled: “electric distribution system, electronic controls, power converters, and the motors”—specifically, the coils in the motors. “We’re working with partners on every single component,” he added. The cryo-cooling system would chill a refrigerant that would circulate to keep the components cold, he explained. A fuel-cell aircraft “engine,” as envisioned by Airbus, would include a 2-megawatt electric motor and associated motor control unit (MCU), a fuel-cell system to power the motor, and associated systems for supplying air, hydrogen fuel, liquid refrigerant, and other necessities. The ram air system would capture cold air flowing over the aircraft for use in the cooling systems. Airbus SAS Could aviation be the killer app for superconductors? Llewellyn did not specify which superconductors and refrigerants the team was working with. 
But high-temperature superconductors are a good bet, because of the drastically reduced requirements on the cooling system that would be needed to sustain superconductivity. Copper-oxide-based ceramic superconductors were discovered at IBM in 1986, and various forms of them can superconduct at temperatures between –238 °C (35 K) and –140 °C (133 K) at ambient pressure. These temperatures are higher than those of traditional superconductors, which need temperatures below about 25 K. Nevertheless, commercial applications for the high-temperature superconductors have been elusive. But a superconductivity expert, applied physicist Yu He at Yale University, was heartened by the news from Airbus. “My first reaction was, ‘really?’ And my second reaction was, wow, this whole line of research, or application, is indeed growing and I’m very delighted” about Airbus’s ambitious plans. Copper-oxide superconductors have been used in a few applications, almost all of them experimental. These included wind-turbine generators, magnetic-levitation train demonstrations, short electrical transmission cables, magnetic-resonance imaging machines and, notably, the electromagnet coils for experimental tokamak fusion reactors. The tokamak application, at a fusion startup called Commonwealth Fusion Systems, is particularly relevant because to make coils, engineers had to invent a process for turning the normally brittle copper-oxide superconducting material into a tape that could be used to form donut-shaped coils capable of sustaining very high current flow and therefore very intense magnetic fields. “Having a superconductor to provide such a large current is desirable because it doesn’t generate heat,” says He. “That means, first, you have much less energy lost directly from the coils themselves. And, second, you don’t require as much cooling power to remove the heat.” Still, the technical hurdles are substantial. “One can argue that inside the motor, intense heat will still need to be removed due to aerodynamic friction,” He says. “Then it becomes, how do you manage the overall heat within the motor?” An engineer at Air Liquide Advanced Technologies works on a test of a hydrogen storage and distribution system at the Liquid Hydrogen Breadboard in November 2024. The “Breadboard” was established last year in Grenoble, France, by Air Liquide and Airbus. Céline Sadonnet/Master Films For this challenge, engineers will at least have a favorable environment with cold, fast-flowing air. Engineers will be able to tap into the “massive air flow” over the motors and other components to assist the cooling, He suggests. Smart design could “take advantage of this kinetic energy of flowing air.” To test the evolving fuel-cell propulsion system, Airbus has built a unique test center in Grenoble called the “Liquid Hydrogen Breadboard,” Llewellyn disclosed at the Summit. “We partnered with Air Liquide Advanced Technologies” to build the facility, he said. “This Breadboard is a versatile test platform designed to simulate key elements of future aircraft architecture: tanks, valves, pipes, and pumps, allowing us to validate different configurations at full scale. And this test facility is helping us gain critical insight into safety, hydrogen operations, tank design, refueling, venting, and gauging.” “Throughout 2025, we’re going to continue testing the complete liquid-hydrogen and distribution system,” Llewellyn added. 
“And by 2027, our objective is to take an even further major step forward, testing the complete end-to-end system, including the fuel-cell engine and the liquid hydrogen storage and distribution system together, which will allow us to assess the full system in action.” Glenn Zorpette traveled to Toulouse as a guest of Airbus.
- Get Ready for the Stellarator Showdownby Tom Clynes on 9. April 2025. at 14:33
For decades, nuclear fusion—the reaction that powers the sun—has been the ultimate energy dream. If harnessed on Earth, it could provide endless, carbon-free power. But the challenge is huge. Fusion requires temperatures hotter than the sun’s core and a mastery of plasma—the superheated gas in which atoms that have been stripped of their electrons collide, their nuclei fusing. Containing that plasma long enough to generate usable energy has remained elusive. Now, two companies—Germany’s Proxima Fusion and Tennessee-based Type One Energy—have taken a major step forward, publishing peer-reviewed blueprints for their competing stellarator designs. Two weeks ago, Type One released six technical papers in a special issue of the Journal of Plasma Physics. Proxima detailed its fully integrated stellarator power plant concept, called Stellaris, in the journal Fusion Engineering and Design. Both firms say the papers demonstrate that their machines can deliver commercial fusion energy. Related: Nuclear Fusion’s New Idea: An Off-the-Shelf Stellarator At the heart of both approaches is the stellarator, a mesmerizingly complex machine that uses twisted magnetic fields to hold the plasma steady. This configuration, first dreamed up in the 1950s, promises a crucial advantage: Unlike its more popular cousin, the tokamak, a stellarator can operate continuously, without the need for a strong internal plasma current. Instead, stellarators use external magnetic coils. This design reduces the risk of sudden disruptions to the plasma field that can send high-energy particles crashing into reactor walls. The downside? Stellarators, while theoretically simpler to operate, are notoriously difficult to design and build. Recent advances in computational power, high-temperature superconducting (HTS) magnets, and AI-enhanced optimization of magnet geometries are changing the game, helping researchers to uncover patterns that lead to simpler, faster, and cheaper stellarator designs. Two Visions of Fusion with One Goal While both firms are racing toward the same destination—practical, commercial fusion power—the Proxima paper’s focus leans more toward the engineering integration of its reactor, while Type One’s papers reveal details of its plasma physics design and key components of its reactor. Proxima, a spinoff from Germany’s Max Planck Institute for Plasma Physics, aims to build a 1-gigawatt stellarator power plant. The design uses HTS magnets and AI optimization to generate more power per unit volume than earlier stellarators, while also significantly reducing the overall size. Proxima has applied for a patent on an innovative liquid-metal breeding blanket, which will be used to breed tritium fuel for the fusion reaction, via the reaction of neutrons with lithium. Proxima Fusion’s Stellaris design is significantly smaller than other stellarators of the same power.Proxima Fusion “This is the first time anyone has put all the elements together in a single, fully integrated concept,” says Proxima cofounder and chief scientist Jorrit Lion. The design builds on the Wendelstein 7-X stellarator, a €1.4 billion (US $1.5 billion) project funded by the German government and the European Union, which set records for electron temperature, plasma density, and energy confinement time. Type One’s stellarator design incorporates three key innovations: an optimized magnetic field for plasma stability, advanced manufacturing techniques, and cutting-edge HTS magnets. 
The plant it has dubbed Infinity Two is designed to generate 350 megawatts of electricity. Like Proxima’s plant, Infinity Two will use deuterium-tritium fuel and build on lessons learned from W7-X, as well as Wisconsin’s HSX project, where many of Type One’s founders worked before forming the company. In partnership with the Tennessee Valley Authority, Type One aims to build Infinity Two at TVA’s Bull Run Fossil Plant by the mid-2030s. “Why are we the first private fusion company with an agreement to develop a fusion power plant with a utility? Because we have a design based in reality,” says Christofer Mowry, CEO of Type One Energy. “This isn’t about building a science experiment. This is about delivering energy.” AI Points to an Ideal 3D Magnetic-Field Structure Both firms have relied heavily on AI and supercomputing to help them place the magnetic coils to more precisely shape their magnetic fields. Type One relied on a range of high-performance computing resources, including the U.S. Department of Energy’s cutting-edge exascale Frontier supercomputer at Oak Ridge National Laboratory, to power its highly detailed simulations. That research led to one of the more intriguing developments buried in these papers: a possible move toward consensus in the stellarator physics community about the ideal three-dimensional magnetic-field structure. Proxima’s team has always embraced the quasi-isodynamic (QI) approach, used in W7-X, which prioritizes deep particle trapping for superior plasma confinement. Type One, on the other hand, built its early designs around quasi-symmetry (QS), inspired by the HSX stellarator, which aimed to streamline particle motion. Now, based on its optimization research, Type One is changing course. “We were champions of quasi-symmetry,” says Type One’s lead theorist Chris Hegna. “But the surprise was that we couldn’t make quasi-symmetry work as well as we thought we could. We will continue doing studies of quasi-symmetry, but primarily it looks like QI is the prominent optimization choice we are going to pursue.” Type One Energy is working with the Tennessee Valley Authority to build a commercial stellarator by the mid-2030s.Type One Energy The Road Ahead for Stellarators According to Hegna, Type One’s partnership with TVA could put a stellarator fusion plant on the grid by the mid-2030s. But before it builds Infinity Two, the company plans to validate key technologies with its Infinity One test platform, set for construction in 2026 and operation by 2029. Proxima, meanwhile, plans to bring its Stellaris design to life by the 2030s, first with a demo stellarator, dubbed Alpha. The company claims Alpha will be the first stellarator to demonstrate net energy production in a steady state. It’s targeted to debut in 2031, after the 2027 completion of a demonstration set of the complex magnetic coils. Both companies face a common challenge: funding. Type One has raised $82 million and, according to Axios, is preparing for more than $200 million in Series A financing, which the company declined to confirm. Proxima has secured about $65 million in public and private capital. If the recent papers succeed in building confidence in stellarators, investors may be more willing to fund these ambitious projects. The coming decade will determine whether both companies’ confidence in their own designs is justified, and whether producing fusion energy from stellarators transitions from scientific ambition to commercial reality.
- Andrew Ng: Unbiggen AI by Eliza Strickland on 9. February 2022. at 15:31
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.

Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets.
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images.
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it.
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.

What about using synthetic data? Is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, or other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs?
If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
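To make the label-consistency idea Ng describes above concrete, here is a minimal, hypothetical Python sketch of that style of tooling. It is illustrative only, not Landing AI's actual implementation, and it assumes each image has labels from several annotators; it simply surfaces the images where annotators disagree most so they can be reviewed and relabeled first.

```python
# Hypothetical sketch of a data-centric labeling tool: flag examples whose
# labels are inconsistent across annotators so they can be reviewed before
# retraining. Not Landing AI's implementation; the data below is made up.
from collections import Counter

def flag_inconsistent_labels(annotations, min_agreement=1.0):
    """annotations: dict mapping image_id -> list of labels from annotators.
    Returns (image_id, label_counts, agreement) tuples whose agreement
    (fraction of annotators choosing the majority label) is below min_agreement,
    sorted so the most contentious examples come first."""
    flagged = []
    for image_id, labels in annotations.items():
        counts = Counter(labels)
        top_count = counts.most_common(1)[0][1]
        agreement = top_count / len(labels)
        if agreement < min_agreement:
            flagged.append((image_id, dict(counts), agreement))
    return sorted(flagged, key=lambda item: item[2])

if __name__ == "__main__":
    demo = {
        "img_001": ["scratch", "scratch", "scratch"],
        "img_002": ["scratch", "dent", "scratch"],
        "img_003": ["pit_mark", "discoloration", "dent"],
    }
    for image_id, counts, agreement in flag_inconsistent_labels(demo):
        print(f"{image_id}: {counts} (agreement {agreement:.2f})")
```

In this toy run, img_003 surfaces first because its three annotators all disagree; relabeling that small, targeted subset is the kind of "engineering the data" step the interview describes.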
- How AI Will Change Chip Design by Rina Diane Caballar on 8. February 2022. at 14:00
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model.
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
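As a concrete illustration of the surrogate-model workflow Gorr describes earlier in the interview, here is a minimal, generic Python sketch under stated assumptions: the "expensive" physics model is a stand-in function, and the surrogate is a simple polynomial fit rather than any particular MathWorks or vendor tool. The point is only the pattern, which is to sample the slow model sparsely, fit something cheap, and then run large parameter sweeps and Monte Carlo studies on the cheap model.

```python
# Generic surrogate-model sketch: sample an "expensive" physics model at a few
# design points, fit a cheap surrogate, then sweep the surrogate at scale.
# The physics model here is a placeholder, not a real chip or process model.
import numpy as np

def expensive_physics_model(x):
    # Stand-in for a slow, physics-based simulation (e.g., a device solver).
    return np.sin(3.0 * x) + 0.1 * x**2

rng = np.random.default_rng(0)

# 1. Sample the expensive model at a handful of design points.
x_train = np.linspace(0.0, 2.0, 12)
y_train = expensive_physics_model(x_train)

# 2. Fit a cheap surrogate (here, a low-order polynomial) to those samples.
coeffs = np.polyfit(x_train, y_train, deg=5)
surrogate = np.poly1d(coeffs)

# 3. Run a large Monte Carlo sweep on the surrogate instead of the full model.
x_mc = rng.uniform(0.0, 2.0, size=100_000)
y_mc = surrogate(x_mc)
print(f"Surrogate-predicted mean response: {y_mc.mean():.3f}")
print(f"Surrogate-predicted 99th percentile: {np.percentile(y_mc, 99):.3f}")

# 4. Spot-check the surrogate against the expensive model, as Gorr notes the
#    surrogate is never as accurate as the physics-based model.
x_check = rng.uniform(0.0, 2.0, size=5)
err = np.abs(surrogate(x_check) - expensive_physics_model(x_check))
print(f"Max spot-check error: {err.max():.4f}")
```

The spot-check step reflects the drawback raised in the interview: the surrogate trades accuracy for speed, so it is used for broad exploration while the physics-based model remains the reference.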
- Atomically Thin Materials Significantly Shrink Qubits by Dexter Johnson on 7. February 2022. at 16:12
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Image: Nathan Fiske/MIT]

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects, making them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor.

The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
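The footprint argument can be made concrete with the textbook parallel-plate relation; this is a generic illustration, with no values taken from the MIT device:

$$C = \frac{\varepsilon_0 \varepsilon_r A}{d} \quad\Longrightarrow\quad A = \frac{C\,d}{\varepsilon_0 \varepsilon_r}.$$

At a fixed target capacitance $C$, the required plate area $A$ scales with the effective dielectric thickness $d$. A coplanar layout, loosely speaking, spreads its field through the substrate over distances comparable to the plate dimensions, while a vertical stack whose insulator is only a few atomic layers of hBN shrinks $d$ by orders of magnitude, so the plates, and with them the qubit footprint, can shrink accordingly.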
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.