IEEE News

IEEE Spectrum

  • Optical Interposers Could Start Speeding Up AI in 2025
    by Laura Hautala on 22 January 2025 at 14:00

    Fiber-optic cables are creeping closer to processors in high-performance computers, replacing copper connections with glass. Technology companies hope to speed up AI and lower its energy cost by moving optical connections from outside the server onto the motherboard and then having them sidle up alongside the processor. Now tech firms are poised to go even further in the quest to multiply the processor’s potential—by slipping the connections underneath it. That’s the approach taken by Lightmatter, which claims to lead the pack with an interposer configured to make light-speed connections, not just from processor to processor but also between parts of the processor. The technology’s proponents claim it has the potential to decrease the amount of power used in complex computing significantly, an essential requirement for today’s AI technology to progress. Lightmatter’s innovations have attracted the attention of investors, who have seen enough potential in the technology to raise US $850 million for the company, launching it well ahead of its competitors to a multi-unicorn valuation of $4.4 billion. Now Lightmatter is poised to get its technology, called Passage, running. The company plans to have the production version of the technology installed and running in lead-customer systems by the end of 2025. Passage, an optical interconnect system, could be a crucial step to increasing computation speeds of high-performance processors beyond the limits of Moore’s Law. The technology heralds a future where separate processors can pool their resources and work in synchrony on the huge computations required by artificial intelligence, according to CEO Nick Harris. “Progress in computing from now on is going to come from linking multiple chips together,” he says.
    An Optical Interposer
    Fundamentally, Passage is an interposer, a slice of glass or silicon upon which smaller silicon dies, often called chiplets, are attached and interconnected within the same package. Many top server CPUs and GPUs these days are composed of multiple silicon dies on interposers. The scheme allows designers to connect dies made with different manufacturing technologies and to increase the amount of processing and memory beyond what’s possible with a single chip. Today, the interconnects that link chiplets on interposers are strictly electrical. They are high-speed and low-energy links compared with, say, those on a motherboard. But they can’t compare with the impedance-free flow of photons through glass fibers. Passage is cut from a 300-millimeter wafer of silicon containing a thin layer of silicon dioxide just below the surface. A multiband, external laser chip provides the light Passage uses. The interposer contains technology that can receive an electric signal from a chip’s standard I/O system, called a serializer/deserializer, or SerDes. As such, Passage is compatible with out-of-the-box silicon processor chips and requires no fundamental design changes to the chip.
    [Image: Computing chiplets are stacked atop the optical interposer. Credit: Lightmatter]
    From the SerDes, the signal travels to a set of transceivers called microring resonators, which encode bits onto laser light in different wavelengths. Next, a multiplexer combines the light wavelengths together onto an optical circuit, where the data is routed by interferometers and more ring resonators. From the optical circuit, the data can be sent off the processor through one of the eight fiber arrays that line the opposite sides of the chip package.
    Or the data can be routed back up into another chip in the same processor. At either destination, the process is run in reverse, in which the light is demultiplexed and translated back into electricity, using a photodetector and a transimpedance amplifier. A direct connection between any two chiplets in a processor reduces latency and saves energy compared with the typical electrical arrangement, which is often limited to what’s around the perimeter of a die. That’s where Passage diverges from other entrants in the race to link processors with light. Lightmatter’s competitors, such as Ayar Labs and Avicena, produce optical I/O chiplets designed to sit in the limited space beside the processor’s main die. Harris calls this approach the “generation 2.5” of optical interconnects, a step above the interconnects situated outside the processor package on the motherboard.
    Advantages of Optics
    The advantages of photonic interconnects come from removing limitations inherent to electricity, which expends more energy the farther it must move data. Photonic interconnect startups are built on the premise that those limitations must fall in order for future systems to meet the coming computational demands of artificial intelligence. Many processors across a data center will need to work on a task simultaneously, Harris says. But moving data between them over several meters with electricity would be “physically impossible,” he adds, and also mind-bogglingly expensive. “The power requirements are getting too high for what data centers were built for,” Harris continues. Passage can enable a data center to use between one-sixth and one-twentieth as much energy, with efficiency increasing as the size of the data center grows, he claims. However, the energy savings that photonic interconnects make possible won’t lead to data centers using less power overall, he says. Instead of scaling back energy use, they’re more likely to consume the same amount of power, only on more-demanding tasks.
    AI Drives Optical Interconnects
    Lightmatter’s coffers grew in October with a $400 million Series D fundraising round. The investment in optimized processor networking is part of a trend that has become “inevitable,” says James Sanders, an analyst at TechInsights. In 2023, 10 percent of servers shipped were accelerated, meaning they contain CPUs paired with GPUs or other AI-accelerating ICs. These accelerators are the same as those that Passage is designed to pair with. By 2029, TechInsights projects, a third of servers shipped will be accelerated. The money being poured into photonic interconnects is a bet that they are the accelerant needed to profit from AI.
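    The data path above—SerDes to microring modulators, wavelength multiplexing, routing, then demultiplexing and photodetection—invites a back-of-envelope model. The Python sketch below is illustrative only: the count of eight fiber arrays comes from the article, but the fibers per array, wavelengths per fiber, line rate, and energy-per-bit figures are assumptions rather than Lightmatter specifications, and the article’s one-sixth to one-twentieth figure refers to whole data centers, not to a single link.
    ```python
    # Back-of-envelope model of a wavelength-division-multiplexed (WDM) optical interconnect.
    # All numbers below are illustrative assumptions except the eight fiber arrays per package.
    FIBER_ARRAYS = 8            # from the article
    FIBERS_PER_ARRAY = 12       # assumed
    WAVELENGTHS_PER_FIBER = 8   # assumed channels encoded by microring resonators
    GBPS_PER_WAVELENGTH = 50    # assumed line rate per wavelength, in Gb/s

    aggregate_tbps = FIBER_ARRAYS * FIBERS_PER_ARRAY * WAVELENGTHS_PER_FIBER * GBPS_PER_WAVELENGTH / 1000
    print(f"aggregate off-package bandwidth: {aggregate_tbps:.1f} Tb/s")

    # Energy to move one petabyte, using assumed per-bit energies for each link type.
    PJ_PER_BIT = {"electrical link": 5.0, "optical link": 0.5}   # assumed picojoules per bit
    bits_moved = 8e15                                            # one petabyte
    for name, pj in PJ_PER_BIT.items():
        print(f"{name}: {bits_moved * pj * 1e-12:,.0f} J to move 1 PB")
    ```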

  • AI Workloads Spur Bigger Memory Drives
    by Dina Genkina on 22 January 2025 at 13:00

    At the SC24 supercomputing conference held in November in Atlanta, Hafþór Júlíus Björnsson (the actor who played The Mountain in Game of Thrones) deadlifted a custom barbell weighed down by 453 kilograms (1000 pounds) of solid state drives. The data stored in those drives totaled just over 280 petabytes. “Without question, this is the most data lifted by a human in history,” says Andy Higginbotham, senior director of business development at Phison Electronics. Behind this publicity stunt is a real trend—to feed AI’s insatiable data appetite, memory drives are getting larger, with no end in sight. Phison recently announced the largest SSD memory drive to date, storing 128 terabytes of data, and piled hundreds of them into Björnsson’s barbell. Within a few weeks, Solidigm announced its own 122-terabyte drive. Samsung and Western Digital also recently started carrying similar products. The shift towards more AI workloads in data centers has led to very power-hungry chips, mostly GPUs. Since the overall power use in a data center is going up, people are looking for ways to use less power wherever possible. At the same time, large language models and other AI models require ever increasing amounts of memory. “You can see where storage requirements are going,” says Roger Corell, senior director of AI and leadership marketing at Solidigm. “You look at a large language model just a couple years ago, you had a half a petabyte per rack or lower. And now there’s large language models that pair with between three and three and a half petabytes per rack. Storage efficiency to enable continued scaling of AI infrastructure is really, really important.” Crucially, this new crop of solid-state drives takes up the same area in a computing rack and power budget as their roughly 32-terabyte and 64-terabyte predecessors—although they are slightly taller—meaning they can be swapped into data centers for an easy win.
    3 Ways to Boost SSD Capacity
    “There’s three kinds of vectors you can innovate on to drive up capacity per SSD,” says Corell. “One is the bits per cell, and then two is how many cells can you pack in one layer, and how many layers can you stack of these memory cells.” The number of bits per cell is how many bits of data a single NAND flash cell can hold; storing n bits requires the cell to distinguish 2^n different charge levels. These cells are packed onto a layer with ever-increasing densities. Adding more layers does make the device taller, but, Corell says, height is not a limiting factor in data centers today, so there is room to expand. Solidigm was already making four-bit-per-cell devices, so it innovated by packing more cells per layer to go from the 60-terabyte model to its 122-terabyte device, mainly by using the smallest available NAND technology and reducing the size of non-NAND components. Phison went from three-bit to four-bit NAND cells, improving along all three vectors to go from its 32-terabyte to its 128-terabyte drive. Going to a higher number of bits per cell makes the write times somewhat slower, but this is still a win in a lot of applications, says Allyn Malventano, senior manager of technical marketing at Phison. Maintaining the same power draw as smaller devices is a fine balancing act. “The drive developers are always going to have to figure out, okay, if we configure it this particular way, maybe we can get some higher performance, but if we do that, it’s going to take some more power in order to do that, right,” Malventano says.
“So there’s a, basically a tuning operation that goes on for any particular capacity.” The demand for higher capacity memory drives is not slowing down, either. “It’s never going to stop. There’s always going to be more demand for larger SSDs,” Higginbotham says. Solidigm’s Corell predicts there will be petabyte SSDs on the market before the decade is done.
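    Corell’s three vectors compound multiplicatively, which is how a drive can quadruple in capacity without any single dramatic change. A minimal sketch in Python: only the 32 TB and 128 TB endpoints come from the article, while the individual gain factors are assumptions picked purely for illustration.
    ```python
    # The three capacity "vectors" multiply together. Starting from a hypothetical 32 TB drive,
    # the gain factors below are illustrative assumptions, not Phison's actual design choices.
    base_tb = 32
    bits_per_cell_gain   = 4 / 3   # three-bit (TLC) -> four-bit (QLC) cells
    cells_per_layer_gain = 1.5     # assumed denser layout within each layer
    layer_count_gain     = 2.0     # assumed taller stack of layers
    print(f"{base_tb * bits_per_cell_gain * cells_per_layer_gain * layer_count_gain:.0f} TB")  # -> 128 TB
    ```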

  • It’s a SNaP: New Technique Paves Way for Scalable Therapeutic Nanoparticle Manufacturing
    by Michael W. Richardson on 21 January 2025 at 18:03

    This sponsored article is brought to you by NYU Tandon School of Engineering. In a significant advancement for the field of drug delivery, researchers have developed a new technique that addresses a persistent challenge: scalable manufacturing of nanoparticles and microparticles. This innovation, led by Nathalie M. Pinkerton, Assistant Professor of Chemical and Biomolecular Engineering at the NYU Tandon School of Engineering, promises to bridge the gap between lab-scale drug delivery research and large-scale pharmaceutical manufacturing. The breakthrough, known as Sequential NanoPrecipitation (SNaP), builds on existing nano-precipitation techniques to offer improved control and scalability, essential factors in ensuring that drug delivery technologies reach patients efficiently and effectively. This technique enables scientists to manufacture drug-carrying particles that maintain their structural and chemical integrity from lab settings to mass production—an essential step toward bringing novel therapies to market. Using 3D Printing to Overcome a Challenge in Drug Delivery Nanoparticles and microparticles hold tremendous promise for targeted drug delivery, allowing precise transport of medicines directly to disease sites while minimizing side effects. However, producing these particles consistently at scale has been a major barrier in translating promising research into viable treatments. As Pinkerton explains, “One of the biggest barriers to translating many of these precise medicines is the manufacturing. With SNaP, we’re addressing that challenge head-on.” Pinkerton is an Assistant Professor of Chemical and Biomolecular Engineering at NYU Tandon.NYU Tandon School of Engineering Traditional methods like Flash Nano-Precipitation (FNP) have been successful in creating some types of nanoparticles, but they often struggle to produce larger particles, which are essential for certain delivery routes such as inhalable delivery. FNP creates polymeric core–shell nanoparticles (NPs) between 50 to 400 nanometers in size. The process involves mixing drug molecules and block-copolymers (special molecules that help form the particles) in a solvent, which is then rapidly blended with water using special mixers. These mixers create tiny, controlled environments where the particles can form quickly and evenly. Despite its success, FNP has some limitations: it can’t create stable particles larger than 400 nm, the maximum drug content is about 70 percent, the output is low, and it can only work with very hydrophobic (water-repelling) molecules. These issues arise because the particle core formation and particle stabilization happen simultaneously in FNP. The new SNaP process overcomes these limitations by separating the core formation and stabilization steps. In the SNaP process, there are two mixing steps. First, the core components are mixed with water to start forming the particle core. Then, a stabilizing agent is added to stop the core growth and stabilize the particles. This second step must happen quickly, less than a few milliseconds after the first step, to control the particle size and prevent aggregation. Current SNaP setups connect two specialized mixers in series, controlling the delay time between steps. However, these setups face challenges, including high costs and difficulties in achieving short delay times needed for small particle formation. A new approach using 3D printing has solved many of these challenges. 
Advances in 3D printing technology now allow the creation of precise, narrow channels needed for these mixers. The new design eliminates the need for external tubing between steps, allowing for shorter delay times and preventing leaks. The innovative stacked mixer design combines two mixers into a single setup, making the process more efficient and user-friendly. “One of the biggest barriers to translating many of these precise medicines is the manufacturing. With SNaP, we’re addressing that challenge head-on.” —Nathalie M. Pinkerton, NYU Tandon Using this new SNaP mixer design, researchers have successfully created a wide range of nanoparticles and microparticles loaded with rubrene (a fluorescent dye) and cinnarizine (a weakly hydrophobic drug used to treat nausea and vomiting). This is the first time small nanoparticles under 200 nm and microparticles have been made using SNaP. The new setup also demonstrated the critical importance of the delay time between the two mixing steps in particle size control. This control over the delay time enables researchers to access a larger range of particle sizes. Additionally, the successful encapsulation of both hydrophobic and weakly hydrophobic drugs in nanoparticles and microparticles with SNaP was achieved for the first time by Pinkerton’s team. Democratizing Access to Cutting-Edge Techniques The SNaP process is not only innovative but also offers a unique practicality that democratizes access to this technology. “We share the design of our mixers, and we demonstrate that they can be manufactured using 3D printing,” Pinkerton says. “This approach allows academic labs and even small-scale industry players to experiment with these techniques without investing in costly equipment.” A stacked mixer schematic, with an input stage for syringe connections (top), which connects immediately to the first mixing stage (middle). The first mixing stage is interchangeable, with either a 2-inlet or a 4-inlet mixer option depending on the desired particle size regime (dotted antisolvent streams only present in the 4-inlet mixer). This stage also contains pass-through for streams used in the second mixing step. All the streams mix in the second mixing stage (bottom) and exit the device. The accessibility of SNaP technology could accelerate advances across the drug delivery field, empowering more researchers and companies to utilize nanoparticles and microparticles in developing new therapies. The SNaP project exemplifies a successful cross-disciplinary effort. Pinkerton highlighted the team’s diversity, which included experts in mechanical and process engineering as well as chemical engineering. “It was truly an interdisciplinary project,” she noted, pointing out that contributions from all team members—from undergraduate students to postdoctoral researchers—were instrumental in bringing the technology to life. Beyond this breakthrough, Pinkerton envisions SNaP as part of her broader mission to develop universal drug delivery systems, which could ultimately transform healthcare by allowing for versatile, scalable, and customizable drug delivery solutions. From Industry to Academia: A Passion for Innovation Before arriving at NYU Tandon, Pinkerton spent three years in Pfizer’s Oncology Research Unit, where she developed novel nano-medicines for the treatment of solid tumors. The experience, she says, was invaluable. “Working in industry gives you a real-world perspective on what is feasible,” she points out. 
“The goal is to conduct translational research, meaning that it ‘translates’ from the lab bench to the patient’s bedside.” Pinkerton — who earned a B.S. in Chemical Engineering from the Massachusetts Institute of Technology (2008) and a doctoral degree in Chemical and Biological Engineering from Princeton University — was attracted to NYU Tandon, in part, because of the opportunity to collaborate with researchers across the NYU ecosystem, with whom she hopes to develop new nanomaterials that can be used for controlled drug delivery and other bio-applications. She also came to academia because of a love of teaching. At Pfizer, she realized her desire to mentor students and pursue innovative, interdisciplinary research. “The students here want to be engineers; they want to make a change in the world,” she reflected. Her team at the Pinkerton Research Group focuses on developing responsive soft materials for bio-applications ranging from controlled drug delivery, to vaccines to medical imaging. Taking an interdisciplinary approach, they use tools from chemical and materials engineering, nanotechnology, chemistry and biology to create soft materials via scalable synthetic processes. They focus on understanding how process parameters control the final material properties, and in turn, how the material behaves in biological systems — the ultimate goal being a universal drug delivery platform that improves health outcomes across diseases and disorders. Her SNaP technology represents a promising new direction in the quest to scale drug delivery solutions effectively. By controlling assembly processes with millisecond precision, this method opens the door to creating increasingly complex particle architectures, providing a scalable approach for future medical advances. For the field of drug delivery, the future is bright as SNaP paves the way toward an era of more accessible, adaptable, and scalable solutions.
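    Returning to the delay-time requirement at the heart of SNaP: the interval between the two mixing steps is set, to first order, by the volume of the channel connecting them divided by the total flow rate, which is why replacing external tubing with a short 3D-printed pass-through shortens it. The Python sketch below is a back-of-envelope estimate; the channel dimensions and flow rate are hypothetical values for illustration, not the Pinkerton group’s actual design parameters.
    ```python
    import math

    # Residence time between mixing stages ~= connecting-channel volume / total flow rate.
    # All geometry and flow numbers are hypothetical, chosen only to illustrate the scaling.
    def delay_ms(diameter_mm, length_mm, flow_ml_per_min):
        volume_ul = math.pi * (diameter_mm / 2) ** 2 * length_mm   # mm^3 equals microliters
        flow_ul_per_ms = flow_ml_per_min * 1000 / 60_000           # mL/min -> uL/ms
        return volume_ul / flow_ul_per_ms

    print(f"external tubing (0.5 mm x 50 mm): {delay_ms(0.5, 50, 40):.1f} ms")  # ~15 ms, too slow
    print(f"printed channel (0.5 mm x 2 mm):  {delay_ms(0.5, 2, 40):.2f} ms")   # well under a few ms
    ```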

  • A Spy Satellite You’ve Never Heard of Helped Win the Cold War
    by Ivan Amato on 21 January 2025 at 14:00

    In the early 1970s, the Cold War had reached a particularly frigid moment, and U.S. military and intelligence officials had a problem. The Soviet Navy was becoming a global maritime threat—and the United States did not have a global ocean-surveillance capability. Adding to the alarm was the emergence of a new Kirov class of nuclear-powered guided-missile battle cruisers, the largest Soviet vessels yet. For the United States, this situation meant that the perilous equilibrium of mutual assured destruction, MAD, which so far had dissuaded either side from launching a nuclear strike, could tilt in the wrong direction. It would be up to a top-secret satellite program called Parcae to help keep the Cold War from suddenly toggling to hot. The engineers working on Parcae would have to build the most capable orbiting electronic intelligence system ever. “It was becoming obvious what the challenges were,” says Lee M. Hammarstrom, an electrical engineer who over a 40-year period beginning in the 1960s was in the thick of classified Cold-War technology development. His work included the kind of satellite-based intelligence systems that could fill the surveillance gap. The Soviet Union’s expanding naval presence in the 1970s came on the heels of its growing prowess in antiaircraft and antiballistic missile capabilities, he notes. “We were under MAD at this time, so if the Soviets had a way to negate our strikes, they might have considered striking first.” A Parcae satellite was just a few meters long but it had four solar panels that extended several meters out from the body of the satellite. The rod emerging from the satellite was a gravity boom, which kept the orbiter’s signal antennas oriented toward Earth.NRO Reliable, constant, and planetwide ocean surveillance became a top U.S. priority. An existing ELINT (electronic intelligence) satellite program, code-named Poppy, was able to detect and geolocate the radar emissions from Soviet ships and land-based systems, but until the program’s last stages it could take weeks or more to make sense of its data. According to Dwayne Day, a historian of space technology for the National Academy of Sciences, the United States conducted large naval exercises in 1971, with U.S. ships broadcasting signals, and several types of ELINT satellites attempting to detect them. The tests revealed worrisome weaknesses in the country’s intelligence-gathering satellite systems. That’s where Parcae would come in. One of the big advances of the Parcae program was a three-satellite dispenser that could loft three satellites, which then functioned together in orbit as a group. Seen here are three Parcae satellites on the dispenser.Arthur Collier Even the mere existence of the satellites, which would be built by a band of veteran engineers at the U.S. Naval Research Laboratory (NRL) in Washington, D.C., would remain officially secret until July 2023. That’s when the National Reconnaissance Office declassified a one-page acknowledgment about Parcae. Since its establishment in 1961, the NRO has directed and overseen the nation’s spy-satellite programs, including ones for photoreconnaissance, communications interception, signals intelligence, and radar. With this scant declassification, the Parcae program could at least be celebrated by name and its overall mission revealed during the NRL’s centennial celebration that year. 
Aspects of the Parcae program had been unofficially outed over the years by a few enterprising journalists in such venues as Aviation Week & Space Technology and The Space Review, by historians like Day, and even by a Russian military advisor in a Ministry of Defense journal. This article is based on these sources, along with additional interviews and written input from Navy engineers who designed, built, operated, and managed Parcae and its precursor satellite systems. They confirm a commonly held but nevertheless profound understanding about the United States during that era. Simply put, there was nothing quite like the paranoia and high stakes of the Cold War to spur engineers into creative frenzies that rapidly produced brilliant national-security technologies, including surveillance systems like Parcae. A Spy Satellite with a Cosmic Cover Name Although the NRO authorized and paid for Parcae, the responsibility to actually design and build it fell to the cold-warrior engineers at NRL and their contractor-partners at such places as Systems Engineering Laboratories and HRB Singer, a signal-analysis and -processing firm in State College, Pa. Parcae was the third Navy satellite ELINT program funded by the NRO. The first was a satellite called GRAB, about as big as an exercise ball. GRAB stood for Galactic Radiation and Background experiment, which was a cover name for the satellite’s secret payload; it also had a bona fide solar-science payload housed in the same shell [see sidebar, “From Quartz-Crystal Detectors to Eavesdropping Satellites”]. On 22 June 1960, GRAB made it into orbit to become the world’s first spy satellite, though there was no opportunity to brag about it. The existence of GRAB’s classified mission was an official secret until 1998. A second GRAB launched in 1961, and the pair of satellites monitored Soviet radar systems for the National Security Agency and the Strategic Air Command. The NSA, headquartered at Fort Meade, Md., is responsible for many aspects of U.S. signals intelligence, notably intercepting and decrypting sensitive communications all over the world and devising machines and algorithms that protect U.S. official communications. The SAC was until 1992 in charge of the country’s strategic bombers and intercontinental ballistic missiles. The Poppy Block II satellites, which had a diameter of 61 centimeters, were outfitted with antennas to pick up signals from Soviet radars [top]. The signals were recorded and retransmitted to ground stations, such as this receiving console photographed in 1965, designated A-GR-2800. NRO The GRAB satellites tracked several thousand Soviet air-defense radars scattered across the vast Russian continent, picking up the radars’ pulses and transmitting them to ground stations in friendly countries around the world. It could take months to eke out useful intelligence from the data, which was hand-delivered to NSA and SAC. There, analysts would examine the data for “signals of interest,” like the proverbial needle in a haystack, interpret their significance, and package the results into reports. All this took days if not weeks, so GRAB data was mostly relevant for overall situational awareness and longer-term strategic planning. In 1962, the GRAB program was revamped around more advanced satellites, and rechristened Poppy. That program operated until 1977 and was partially declassified in 2004. With multiple satellites in orbit, Poppy could geolocate emission sources, at least roughly. 
Toward the end of the Poppy program, the NRL satellite team showed it was even possible, in principle, to get this information to end users within hours or even less by relaying it directly to ground stations, rather than recording the data first. These first instances of rapidly delivered intelligence fired the imaginations, and expectations, of U.S. national-security leaders and offered a glimpse of the ocean-surveillance capabilities they wanted Parcae to provide. How Parcae Inspired Modern Satellite Signals Intelligence The first of the 12 Parcae missions launched in 1976 and the last, 20 years later. Over its long lifetime, the program had other cryptic cover names, among them White Cloud and Classic Wizard. According to NRO’s declassification memo, it stopped using the Parcae satellites in May 2008. Originally designed as an intercontinental ballistic missile (ICBM), the Atlas F was later repurposed to launch satellites, including Parcae. Peter Hunter Photo Collections Initially, Parcae launches relied on an Atlas F rocket to deliver three satellites in precise orbital formations, which were essential for their geolocation and tracking functions. (Later launches used the larger Titan IV-A rocket.) This triple launching capability was achieved with a satellite dispenser designed and built by an NRL team led by Peter Wilhelm. As chief engineer for NRL’s satellite-building efforts for some 60 years until his retirement in 2015, Wilhelm directed the development of more than 100 satellites, some of them still classified. One of the Parcae satellites’ many technical breakthroughs was a gravity-gradient stabilization boom, which was a long retractable arm with a weight at the end. Moving the weight shifted the center of mass of the satellite, enabling operators on the ground to keep the satellite antennae facing earthward. The satellites generally worked in clusters of three (the name Parcae comes from the three fates of Roman mythology), each detecting the radar and radio emissions from Soviet ships. To pinpoint a ship, the satellites were equipped with highly precise, synchronized clocks. Tiny differences in the time when each satellite received the radar signals emitted from the ship were then used to triangulate the ship’s location. The calculated location was updated each time the satellites passed over. A GRAB satellite was prepared for launch in 1960. Peter Wilhelm is standing, at right, in a patterned shirt.NRO Transmissions from the GRAB satellites were received in “huts” [left], likely in a country just outside Soviet borders. In between the two banks of receivers in this photo is the wheel used for manually steering the antennas. These yagi antennas [right] were linearly polarized.NRO The satellites collected huge amounts of data, which they transmitted to ground stations around the world. These stations were operated by the Naval Security Group Command, which performed encryption and data-security functions for the Navy. The data was then relayed via communications satellites to Naval facilities worldwide, where it was correlated and turned into intelligence. That intelligence, in the form of Ships Emitter Locating Reports, went out to watch officers and commanders aboard ships at sea and other users. A report might include information about, for example, a newly detected radar signal—the type of radar, its frequencies, pulse, scan rates, and location. 
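    The location in such a report came from the timing scheme described above—synchronized clocks measuring tiny differences in when the same radar pulse reaches each satellite—which is what is now called time-difference-of-arrival (TDOA) multilateration. The Python sketch below shows the idea in a flat two-dimensional toy with four made-up receivers; the real Parcae clusters used three satellites plus the constraint that the emitter sits on the ocean surface, and their actual processing remains classified.
    ```python
    # Toy 2-D time-difference-of-arrival (TDOA) fix. All positions are invented for illustration.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s
    sats = np.array([[0.0, 0.0], [60_000.0, 10_000.0], [30_000.0, 80_000.0], [90_000.0, 40_000.0]])
    ship = np.array([250_000.0, 40_000.0])  # "unknown" emitter, used only to simulate the measurements

    t_arrival = np.linalg.norm(sats - ship, axis=1) / C  # arrival time at each synchronized receiver
    tdoa = t_arrival[1:] - t_arrival[0]                  # what the system actually measures

    # Brute-force the grid point whose predicted time differences best match the measurements.
    xs, ys = np.meshgrid(np.linspace(0, 400_000, 401), np.linspace(-100_000, 200_000, 301))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    t_pred = np.linalg.norm(grid[:, None, :] - sats[None, :, :], axis=2) / C
    residual = (t_pred[:, 1:] - t_pred[:, :1]) - tdoa
    best = grid[(residual ** 2).sum(axis=1).argmin()]
    print(f"estimated emitter position: {best / 1000} km (true: {ship / 1000} km)")
    ```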
The simultaneous detection of signals from different kinds of emitters from a single location made it possible to identify the class of the ship doing the emitting and even the specific ship. This kind of granular maritime reconnaissance began in the 1960s, when the NRL developed a ship surveillance capability known as HULTEC, short for hull-to-emitter correlation. Early Minicomputers Spotted Signals of Interest To scour the otherwise overwhelming torrents of raw ELINT data for signals of interest, the Parcae program included an intelligence-analysis data-processing system built around then-high-end computers. These were likely produced by Systems Engineering Laboratories, in Fort Lauderdale, Fla. SEL had produced the SEL-810 and SEL-86 minicomputers used in the Poppy program. These machines included a “real-time interrupt capability,” which enabled the computers to halt data processing to accept and store new data and then resume the processing where it had left off. That feature was useful for a system like Parcae, which continually harvested data. Also crucial to ferreting out important signals was the data-processing software, supplied by vendors whose identities remain classified. The SEL-810 minicomputer was the heart of a data-processing system built to scour the torrents of raw data from the Poppy satellites for signals of interest. Computer History Museum This analysis system was capable of automatically sifting through millions of signals and discerning which ones were worthy of further attention. Such automated winnowing of ELINT data has become much more sophisticated in the decades since. The most audacious requirement for the Parcae system was that the “intercept-to-report” interval—the time between when the satellite detected a signal of interest and when the report was generated—take no more than a few minutes, rather than the hours or days that the best systems at the time could deliver. Eventually, the requirement was that reports be generated quickly enough to be used for day-to-day and even hour-to-hour military decision making, according to retired Navy Captain Arthur “Art” Collier. For six years, Collier served as the NRO program manager for Parcae. In a time of mutual assured destruction, he notes, if the intercept-to-report delay was longer than the time it took to fry an egg, national security leaders regarded it as a vulnerability of potentially existential magnitude. Over time, the Ships Emitter Locating Reports evolved from crude teletype printouts derived from raw intercept data to more user-friendly forms such as automatically displayed maps. The reports delivered the intelligence, security, or military meaning of the intercepts in formats that naval commanders and other end users on the ground and in the air could grasp quickly and put to use. Parcae Tech and the 2-Minute Warning Harvesting and pinpointing radar signatures, though difficult to pull off, wasn’t even the most sobering tech challenge. Even more daunting was Parcae’s requirement to deliver “sensor-to-shooter” intelligence—from a satellite to a ship commander or weapons control station—within minutes. According to Navy Captain James “Mel” Stephenson, who was the first director of the NRO’s Operational Support Office, achieving this goal required advances all along the technology chain. That included the satellites, computer hardware, data-processing algorithms, communications and encryption protocols, broadcast channels, and end-user terminals. 
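    The automated sifting of millions of intercepts for “signals of interest” described above can be paraphrased in a few lines of modern Python: buffer incoming pulse descriptions, then match each against a library of known emitter parameters. The emitter library and intercepts below are invented for illustration; the actual Parcae software and signature data remain classified.
    ```python
    # Modern paraphrase of "signals of interest" winnowing: buffer intercepts, match against
    # a small library of emitter parameters. All values are hypothetical placeholders.
    from collections import deque

    EMITTERS_OF_INTEREST = {
        "naval-search-radar-A": {"ghz": (2.9, 3.1), "pri_us": (900, 1100)},
        "fire-control-radar-B": {"ghz": (9.3, 9.5), "pri_us": (240, 260)},
    }

    def matches(intercept, spec):
        return (spec["ghz"][0] <= intercept["ghz"] <= spec["ghz"][1]
                and spec["pri_us"][0] <= intercept["pri_us"] <= spec["pri_us"][1])

    inbox = deque()  # stands in for the interrupt-driven buffering of incoming data
    inbox.extend([
        {"id": 1, "ghz": 3.02, "pri_us": 1010},   # of interest
        {"id": 2, "ghz": 5.60, "pri_us": 1500},   # ignored
        {"id": 3, "ghz": 9.41, "pri_us": 251},    # of interest
    ])

    while inbox:
        intercept = inbox.popleft()
        hits = [name for name, spec in EMITTERS_OF_INTEREST.items() if matches(intercept, spec)]
        if hits:
            print(f"intercept {intercept['id']}: report {hits}")
    ```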
From Quartz-Crystal Detectors to Eavesdropping Satellites The seed technology for the U.S. Navy’s entire ELINT-satellite story goes back to World War II, when the Naval Research Laboratory (NRL) became a leading developer in the then-new business of electronic warfare and countermeasures. Think of monitoring an enemy’s radio-control signals, fooling its electronic reconnaissance probes, and evading its radar-detection system. NRL’s foray into satellite-based signals intelligence emerged from a quartz-crystal-based radio-wave detector designed by NRL engineer Reid Mayo that he sometimes personally installed on the periscopes of U.S. submarines. This device helped commanders save their submarines and the lives of those aboard by specifying when and from what direction enemy radars were probing their vessels. In the late 1950s, as the Space Age was lifting off, Mayo and his boss, Howard Lorenzen (who would later hire Lee M. Hammarstrom), were perhaps the first to realize that the same technology should be able to “see” much larger landscapes of enemy radar activity if the detectors could be placed in orbit. Lorenzen was an influential, larger-than-life technology visionary often known as the father of electronic warfare. In 2008, the United States named a missile-range instrumentation ship, which supports and tracks missile launches, after him. Lorenzen’s and Mayo’s engineering concept of “raising the periscope” for the purpose of ELINT gathering was implemented on the first GRAB satellite. The satellite was a secret payload that piggybacked on a publicly announced scientific payload, Solrad, which collected first-of-its-kind data on the sun’s ultraviolet- and X-ray radiation. That data would prove useful for modeling and predicting the behavior of the planet’s ionosphere, which influenced the far-flung radio communication near and dear to the Navy. Though the United States couldn’t brag about the GRAB mission even as the Soviet Union was scoring first after first in the space race, it was the world’s first successful spy payload in orbit, beating by a few months the first successful launch of Corona, the CIA’s maiden space-based photoreconnaissance program. A key figure in the development of those user terminals was Ed Mashman, an engineer who worked as a contractor on Parcae. The terminals had to be tailored according to where they would be used and who would be using them. One early series was known as Prototype Analysis Display Systems, even though the “prototypes” ended up deployed as operational units. Before these display systems became available, Mashman recalled in an interview for IEEE Spectrum, “Much of the data that had been coming in from Classic Wizard just went into the burn bag, because they could not keep up with the high volume.” The intelligence analysts were still relying on an arduous process to determine if the information in the reports was alarming enough to require some kind of action, such as positioning U.S. naval vessels that were close enough to a Soviet vessel to launch an attack. To make such assessments, the analysts had to screen a huge number of teletype reports coming in from the satellites, manually plotting the data on a map to discern which ones might indicate a high-priority threat from the majority that did not. When the “prototype” display systems became available, Mashman recalls, the analysts could “all of a sudden, see it automatically plotted on a map and get useful information out of it…. 
When some really important thing came from Classic Wizard, it would [alert] the watch officer and show where it was and what it was.” Data overload was even more of a problem aboard ship or in the field, so NRL engineers developed the capability to deliver the data directly to computers onboard ships and in the field. Software automatically plotted the data on geographic displays in a form that watch officers could quickly understand and assess. These capabilities were developed during shoulder-to-shoulder work sessions between end users and engineers like Mashman. Those sessions led to an iterative process by which the ELINT system could deliver and package data in user-friendly ways and with a swiftness that was tactically useful. Parcae’s rapid-dissemination model flourished well beyond the end of the program and is one of Parcae’s most enduring legacies. For example, to rapidly distribute intelligence globally, Parcae’s engineering teams built a secure communications channel based on a complex mix of protocols, data-processing algorithms, and tailored transmission waveforms, among other elements. The communications network connecting these pieces became known as the Tactical Receive Equipment and Related Applications Broadcast. As recently as Operation Desert Storm, it was still being used. “During Desert Storm, we added imagery to the…broadcast, enabling it to reach the forces as soon as it was generated,” says Stephenson. Over the course of a 40-year career in national security technologies, Lee M. Hammarstrom rose to the position of chief scientist of the National Reconnaissance Office. U.S. Naval Research Laboratory According to Hammarstrom, Parcae’s communications challenges had to be solved concurrently with the core challenge of managing and parsing the vast amounts of raw data into useful intelligence. Coping with this data deluge began with the satellites themselves, which some participants came to think of as “orbiting peripherals.” The term reflected the fact that the gathering of raw electronic signals was just the beginning of a complex system of complex systems. Even in the late 1960s, when Parcae’s predecessor Poppy was operational, the NRL team and its contractors had totally reconfigured the satellites, data-collection system, ground stations, computers, and other system elements for the task. This “data density” issue had become apparent even with GRAB 1 in 1960. Those who saw the first data harvests were astonished by how much radar infrastructure the Soviet Union had put in place. Finding ways of processing the data became a primary focus for Hammarstrom and an emerging breed of electronic, data, and computer engineers working on these highly classified programs. Collier notes that in addition to supporting military operations, Parcae “was available to help provide maritime-domain awareness for tracking drug, arms and human trafficking as well as general commercial shipping.” Those who built and operated Parcae and those who relied on it for national security stress that so much more of the story remains classified and untellable. As they reminisced in interviews that can’t yet be fully shared, engineers who made this spy satellite system real say they had not been more professionally and creatively on fire before or after the program. Parcae, though a part of the Cold War’s prevailing paradigm of mutual assured destruction, proved to be a technological adventure that gave these engineers joy. This article appears in the February 2025 print issue.

  • The Gap’s Data Science Director Has Tailored the Retailer’s Operations
    by Kathy Pretz on 20 January 2025 at 19:00

    Shoppers probably don’t realize how large a role data science plays in retail. The discipline provides information about consumer habits to help predict demand for products. It’s also used to set prices, determine the number of items to be manufactured, and figure out more efficient ways to transport goods. Those are just some of the insights that data scientist Vivek Anand extracts to inform decision makers at the Gap, a clothing company headquartered in San Francisco. As director of data science, Anand—who is based in Austin, Texas—manages a team that includes statisticians and operations research professionals. The team collects, analyzes, and interprets the data, then suggests ways to improve the company’s operations.
    Vivek Anand
    Employer: The Gap, headquartered in San Francisco
    Title: Director of data science
    Member grade: Senior member
    Alma maters: Indian Institute of Science Education and Research in Pune; Columbia
    “Data science is trying to effectively solve problems that were previously unsolvable,” Anand says. “The technology is used to group similar transactions that look different on the surface. But underneath they are similar.” Anand is an IEEE senior member who has spent his career using data science, artificial intelligence, and mathematical and statistical modeling to help businesses solve problems and make smarter decisions. Last year AIM Research honored Anand’s efforts to transform the retail industry with its AI100 award, which recognizes the 100 most influential AI leaders in the United States.
    A data scientist at heart
    Growing up in Gopalganj, India, he set his sights on becoming a physician. In 2006 he enrolled in the Indian Institute of Science Education and Research (IISER) in Pune with every intention of earning a medical degree. During his first semester, however, he enjoyed the introductory mathematics classes much more than his biology courses. A project to design a statistics program to determine the best way to vaccinate people (pre-COVID-19) helped him realize math was a better fit. “That was my first introduction to optimization techniques,” he says, adding that he found he really liked determining whether a system was working as efficiently as possible. The vaccine project also got him interested in learning more about industrial engineering and operations research, which uses mathematical modeling and analytical techniques to help complex systems run smoothly. He graduated in 2011 from IISER’s five-year dual science degree program with bachelor’s and master’s degrees, with a concentration in mathematics. He then earned a master’s degree in operations research in 2012 from Columbia. One of the courses at Columbia that intrigued him most, he says, focused on improving the process of identifying a person’s risk tolerance when making investment choices. That training and an internship at an investment firm helped him land his first job at Markit, now part of S&P Global, a credit-rating agency in New York City. He created AI and mathematical models for financial transactions such as pricing cash and credit instruments, including credit default swaps. A CDS is a financial instrument that lets investors swap or offset their credit risk with that of another investor. Anand, who began as an analyst in 2013, was promoted to assistant vice president in 2015. Later that year, he was recruited by Citigroup, an investment bank and financial services company in New York City.
As an assistant vice president, he developed data science and machine learning models to price bonds more accurately. He also led a team of quantitative analysts responsible for modeling, pricing, and determining the valuation of credit derivatives such as CDSs in emerging markets. He left Citi in 2018 to join Zilliant, a price and revenue optimization consultancy firm in Austin. As a senior data scientist and later as lead data scientist and director of science, he led a team that built and serviced custom price optimization models for customers in the automotive, electronics, retail, and food and beverage industries. “We used to estimate elasticities, which is a key component for pricing products,” he says. Price elasticity shows how much demand for a product would change when its cost changes. “The existing algorithms were not efficient. In a number of instances, it used to take days to compute elasticities, and we were able to bring down that process to a few hours.” He was director of science at Zilliant when he left to join the Gap, where he oversees three data science subteams: price optimization, inventory management, and fulfillment optimization. “In the fashion industry a vast majority of product assortments are continuously refreshed,” he says, “so the objective is to sell them as profitably and as quickly as possible.” Clothing tends to be season-specific, and stores make space on their shelves for new items to avoid excess inventory and markdowns. “It’s a balance between being productive and profitable,” Anand says. “Pricing is basically a three-prong approach. You want to hold onto inventory to sell it more profitably, clear the shelves if there is excessive unproductive inventory, and acquire new customers through strategic promotions.” Managing inventory can be challenging because the majority of fashion merchandise sold in the United States is made in Asia. Anand says it means long lead times for delivery to the Gap’s distribution centers to ensure items are available in time for the appropriate season. Unexpected shipping delays happen for many reasons. The key to managing inventory is not to be overstocked or understocked, Anand says. Data science not only can help estimate the average expected delivery times from different countries and factor in shipping delays but also can inform the optimal quantities bought. Given the long lead times, correcting an underbuy error is hard, he says, whereas overbuys result in unsold inventory. Until recently, he says, experts estimated transit time based on average delivery times, and they made educated guesses about how much inventory for a certain item would be needed. In most cases, there is no definitive right or wrong answer, he says. “Based on my observations in my current role, as well as my previous experience at Zilliant where I collaborated with a range of organizations—including Fortune 500 companies across various industries—data science models frequently outperform subject matter experts,” he says. Building a professional network Anand joined IEEE last year at the urging of his wife, computer engineer Richa Deo, a member. Because data science is a relatively new field, he says, it has been difficult to find a professional organization of like-minded people. Deo encouraged him to contact IEEE members on her LinkedIn account. After many productive conversations with several members, he says, he felt that IEEE is where he belongs. “IEEE has helped me build that professional network that I was looking for,” he says. 
Career advice for budding data scientists Data science is a growing field that needs more workers, Anand says. For people who are considering it as a career, he has some advice. First, he says, recognize that not all data scientists are the same; the job description differs from company to company. “It’s important to network with people to understand what kind of data science they are doing, what the role entails, and what skills are needed to make sure that it’s a good fit for you,” he says. There are eight of what he calls spokes in data science. Each one represents a specific skill: exploratory data analysis and visualization, data storytelling, statistics, programming, experimentation, modeling, machine learning operations, and data engineering. Continuing education is important, Anand says. “Just because you’ve earned a degree doesn’t mean that learning stops,” he says. “Don’t just scratch the surface of a topic; go deeper. It’s also important to read fundamental textbooks about the field. A lot of people just skip that part.” In data science, he notes, there are premade libraries such as SciKit Learn. If you don’t understand what the library is doing in the background or what’s under the surface, though, then you are just a user, not a developer, he says, and that means you aren’t building things. “Data science needs a lot of developers,” he says. “The people who know things can build programs from scratch. We have a ton of users but fewer developers. And this field is going to be here for a long time.”
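    Anand’s two points—price elasticity and understanding what a library does under the hood—fit together in a short example: a log-log regression recovers a constant elasticity as its slope, with nothing premade beyond NumPy. The data is synthetic and every number is an assumption chosen for illustration only.
    ```python
    # Estimating a constant price elasticity "from scratch": fit log(demand) = a + e*log(price)
    # by ordinary least squares; the slope e is the elasticity. Synthetic data, illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    price = rng.uniform(20, 60, size=500)
    true_elasticity = -1.8
    demand = 5000 * price ** true_elasticity * rng.lognormal(0, 0.1, size=500)

    X = np.column_stack([np.ones_like(price), np.log(price)])
    y = np.log(demand)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"estimated elasticity: {coef[1]:.2f} (true: {true_elasticity})")
    # An elasticity near -1.8 means a 1 percent price increase cuts demand by roughly 1.8 percent.
    ```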

  • IEEE Power & Energy Society Boosts Scholarship to $10,000
    by Amanda Davis on 19 January 2025 at 19:00

    During the past 13 years, the IEEE Power & Energy Society (PES) has awarded US $4.6 million in scholarships to undergraduate students in North America studying power engineering as part of its Scholarship Plus Initiative. PES has increased the scholarship amount this year to $10,000, up from $7,000. Recipients, who do not have to be IEEE student members, are eligible to receive up to three years of financial support but must reapply each year. They can apply for the scholarship as early as their freshman year. The scholarship is funded by donors. The increase is thanks in part to a $1 million donation from a family foundation. If recipients continue to meet the requirements, they receive $2,000 for each of the first two years and $3,000 for the third. PES scholars are offered mentorship by leading professionals in the power and engineering industry. They also receive a free one-year student membership to IEEE and the society. “The amount had not changed since the program was established [in 2011], but the cost of education has increased, so we wanted to ensure recipients received more financial support,” says Daniel Toland, director of PES operations. Scholarship eligibility expanded The society also made other changes to the program based on feedback from supporters and donors. Previously, recipients were required to pursue a degree in power and energy, but the industry needs engineers with backgrounds in all IEEE fields of interest—not just power and energy—to meet its staffing needs, according to survey respondents. The program is now open to students pursuing bachelor’s degrees in other IEEE-designated fields of interest related to power and energy. They include electrical, civil, computer, environmental, and mechanical engineering; computer science; data analytics; renewable energy; and sustainability studies. “The program’s main objective has always been to get as many undergraduate students as possible involved and engaged in the power and energy industry,” Toland says. Students can apply online from 20 January to 30 April. To learn more, email pes-scholarship-info@ieee.org.

  • The Toyota Prius Transformed the Auto Industry
    by Willie D. Jones on 17 January 2025 at 19:00

    In the early 1990s, Toyota saw that environmental awareness and tighter emissions regulations would shape the future of the automotive industry. The company aimed to create an eco-friendly, efficient vehicle that would meet future standards. In 1997 Toyota introduced the Prius to the Japanese market. The car was the world’s first mass-produced hybrid vehicle that combined gasoline and electric power to reduce fuel consumption and emissions. Its worldwide debut came in 2000. Developing the Prius posed significant technical and market challenges that included designing an efficient hybrid power train, managing battery technology, and overcoming consumer skepticism about combining an electric drivetrain system with the standard gasoline-fueled power train. Toyota persevered, however, and its instincts proved prescient and transformative. “The Prius is not only the world’s first mass-produced hybrid car, but its technical and commercial success also spurred other automakers to accelerate hybrid vehicle development,” says IEEE Member Nobuo Kawaguchi, a professor in the computational science and engineering department at Nagoya University’s Graduate School of Engineering, in Japan. He is also secretary of the IEEE Nagoya Section. “The Prius helped shape the role of hybrid cars in today’s automotive market.” The Prius was honored with an IEEE Milestone on 30 October during a ceremony held at company headquarters in Toyota City, Japan.
    The G21 project
    The development of the Prius began in 1993 with the G21 project, which focused on fuel efficiency, low emissions, and affordability. According to a Toyota article detailing the project’s history, by 1997, Toyota engineers—including Takeshi Uchiyamada, who has since become known as the “father of the Prius”—were satisfied they had met the challenge of achieving all three goals. The first-generation Prius featured a compact design with aerodynamic efficiency. Its groundbreaking hybrid system enabled smooth transitions between an electric motor powered by a nickel–metal hydride battery and an internal combustion engine fueled by gasoline. The car’s design incorporated regenerative braking in the power-train arrangement to enhance the vehicle’s energy efficiency. Regenerative braking captures the kinetic energy typically lost as heat when conventional brake pads stop the wheels with friction. Instead, the electric motor switches over to generator mode, so that the wheels drive the motor rather than the motor driving the wheels. Using the motor as a generator slows the car and converts the kinetic energy into an electrical charge routed to the battery to recharge it. According to the company’s “Harnessing Efficiency: A Deep Dive Into Toyota’s Hybrid Technology” article, a breakthrough was the Hybrid Synergy Drive, a system that allows the Prius to operate in different modes—electric only, gasoline only, or a combination—depending on driving conditions. A key component Toyota engineers developed from scratch was the power split device, a planetary gear system that allows smooth transitions between electric and gasoline power, permitting the engine and the motor to propel the vehicle in their respective optimal performance ranges.
    The arrangement helps optimize fuel economy and simplifies the drivetrain by making a traditional transmission unnecessary.
    Setting fuel-efficiency records
    Nearly 30 years after its commercial debut, the Prius remains an icon of environmental responsibility combined with technical innovation. It is still setting records for fuel efficiency. When in July 2023 the newly released 2024 Prius LE was driven from Los Angeles to New York City, it consumed a miserly 2.52 liters of gasoline per 100 kilometers during the 5,150-km cross-country journey. The record was set by a so-called hypermiler, a driver who practices advanced driving techniques aimed at optimizing fuel efficiency. Hypermilers accelerate smoothly and avoid hard braking. They let off the accelerator early so the car can coast to a gradual stop without applying the brakes, and they drive as often as possible at speeds between 72 and 105 km per hour, the velocities at which a car is typically most efficient. A driver not employing such techniques can still expect fuel consumption as low as 4.06 L per 100 km from the latest generation of Prius models. Toyota has advanced the Prius’s hybrid technology with each generation, solidifying the car’s role as a leader in fuel efficiency and sustainability.
    Milestone event attracts luminaries
    Uchiyamada gave a brief talk at the IEEE Milestone event about the Prius’s development process and the challenges he faced as chief G21 engineer. Other notable attendees were Takeshi Uehara, president of Toyota’s power-train company; Toshio Fukuda, 2020 IEEE president; Isao Shirakawa, IEEE Japan Council history committee chair; and Jun Sato, IEEE Nagoya Section chair. A plaque recognizing the technology is displayed at the entrance of the Toyota Technical Center, which is within walking distance of the company’s headquarters. It reads: “In 1997 Toyota Motor Corporation developed the world’s first mass-produced hybrid vehicle, the Toyota Prius, which used both an internal combustion engine and two electric motors. This vehicle achieved revolutionary fuel efficiency by recovering and reusing energy previously lost while driving. Its success helped popularize hybrid vehicles internationally, advanced the technology essential for electric power trains, contributed to the reduction of CO2 emissions, and influenced the design of subsequent electrified vehicles.” Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments worldwide. The IEEE Nagoya Section sponsored the nomination.
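    The power split device follows the standard planetary-gear constraint: with tooth counts N_sun and N_ring, N_sun·ω_sun + N_ring·ω_ring = (N_sun + N_ring)·ω_carrier, where the engine turns the carrier, the generator (MG1) sits on the sun gear, and the ring gear couples to the wheels. The Python sketch below works through that relation; the tooth counts and speeds are assumptions for illustration, not Toyota’s exact design values.
    ```python
    # Planetary-gear kinematics of a power split device:
    #   N_sun*w_sun + N_ring*w_ring == (N_sun + N_ring)*w_carrier
    # Engine -> carrier, generator (MG1) -> sun, wheels/MG2 -> ring.
    # Tooth counts and speeds below are illustrative assumptions.
    N_SUN, N_RING = 30, 78

    def sun_rpm(engine_rpm, ring_rpm):
        """Generator (sun) speed implied by a given engine and wheel-side speed."""
        return ((N_SUN + N_RING) * engine_rpm - N_RING * ring_rpm) / N_SUN

    # Electric-only launch: engine off (0 rpm) while the ring turns with the wheels.
    print(sun_rpm(engine_rpm=0, ring_rpm=1000))     # sun spins backward: -2600.0
    # Cruise: engine held near an efficient speed while the ring matches road speed.
    print(sun_rpm(engine_rpm=1800, ring_rpm=2200))  # 760.0
    ```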

  • Video Friday: Agile Upgrade
    by Evan Ackerman on 17. Januara 2025. at 16:30

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN RSS 2025: 21–25 June 2025, LOS ANGELES IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL Enjoy today's videos! Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it.] [ Unitree ] This is just lovely. [ Mimus CNK ] There’s a lot to like about Grain Weevil as an effective unitasking robot, but what I really appreciate here is that the control system is just a remote and a camera slapped onto the top of the bin. [ Grain Weevil ] This video, “Robot arm picking your groceries like a real person,” has taught me that I am not a real person. [ Extend Robotics ] A robot walking like a human walking like what humans think a robot walking like a robot walks like. And that was my favorite sentence of the week. [ Engineai ] For us, robots are tools to simplify life. But they should look friendly too, right? That’s why we added motorized antennas to Reachy, so it can show simple emotions—without a full personality. Plus, they match those expressive eyes O_o! [ Pollen Robotics ] So a thing that I have come to understand about ships with sails (thanks, Jack Aubrey!) is that sailing in the direction that the wind is coming from can be tricky. Turns out that having a boat with two fronts and no back makes this a lot easier. [ Paper ] from [ 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics ] via [ IEEE Xplore ] I’m Kento Kawaharazuka from JSK Robotics Laboratory at the University of Tokyo. I’m writing to introduce our human-mimetic binaural hearing system on the musculoskeletal humanoid Musashi. The robot can perform 3D sound source localization using a human-like outer ear structure and an FPGA-based hearing system embedded within it. [ Paper ] Thanks, Kento! The third CYBATHLON took place in Zurich on 25-27 October 2024. The CYBATHLON is a competition for people with impairments using novel robotic technologies to perform activities of daily living. It was invented and initiated by Prof. Robert Riener at ETH Zurich, Switzerland. Races were held in eight disciplines including arm and leg prostheses, exoskeletons, powered wheelchairs, brain computer interfaces, robot assistance, vision assistance, and functional electrical stimulation bikes. [ Cybathlon ] Thanks, Robert! If you’re going to work on robot dogs, I’m honestly not sure whether Purina would be the most or least appropriate place to do that. [ Michigan Robotics ]

  • How Antivirus Software Has Changed With the Internet
    by Dina Genkina on 17. Januara 2025. at 12:00

    We live in a world filled with computer viruses, and antivirus software is almost as old as the Internet itself: The first version of what would become McAfee antivirus came out in 1987—just four years after the Internet booted up. For many of us, antivirus software is an annoyance, taking up computer resources and generating opaque pop-ups. But it is also necessary: Almost every computer today is protected by some kind of antivirus software, either built into the operating system or provided by a third party. Despite their ubiquity, however, not many people know how these antivirus tools are built. Paul A. Gagniuc set out to fix this apparent oversight. A professor of bioinformatics and programming languages at the University Politehnica of Bucharest, he has been interested in viruses and antivirus software since he was a child. In his book Antivirus Engines: From Methods to Innovations, Design, and Applications, published last October, he dives deep into the technical details of malware and how to fight it, all motivated by his own experience of designing an antivirus engine—a piece of software that protects a computer from malware—from scratch in the mid-2000s. IEEE Spectrum spoke with Gagniuc about his experience as a lifelong computer native, antivirus basics and best practices, his view of how the world of malware and antivirus software has changed over the last decades, the effects of cryptocurrencies, and his opinion on what the issues with fighting malware will be going forward. How did you become interested in antivirus software? Paul Gagniuc: Individuals of my age grew up with the Internet. When I was growing up, it was the wild wild West, and there were a lot of security problems. And the security field was at its very beginning, because nothing was controlled at the time. Even small children had access to very sophisticated pieces of software in open source. Knowing about malware provided a lot of power for a young man at that time, so I started to understand the codes that were available starting at the age of 12 or so. And a lot of codes were available. I wrote a lot of versions of different viruses, and I did manage to make some of my own, but not with the intent of doing harm, but for self-defense. Around 2002 I started to think of different strategies to detect malware. And between 2006 and 2008 I started to develop an antivirus engine, called Scut Antivirus. I tried to make a business based on this antivirus; however, the business side and the programming side are two separate things. I was the programmer. I was the guy that made the software framework, but the business side wasn’t that great, because I didn’t know anything about business. What was different about Scut Antivirus compared with existing solutions, from a technical perspective? Gagniuc: The speed, and the amount of resources it consumed. It was almost invisible to the user, unlike the antiviruses of the time. Many users at the time started to avoid antiviruses for this reason, because at one point, the antivirus consumed so many resources that the user could not do their work. How does antivirus software work? Gagniuc: How can we detect a particular virus? Well, we take a little piece of the code from that virus, and we put that code inside an antivirus database. But what do we do when we have 1 million, 2 million different malware files, which are all different? 
So what happens is that malware from two years, three years ago, for instance, is removed from the database, because those files are not a danger to the community anymore, and what is kept in the database are just the new threats. And there’s an algorithm that’s described in my book called the Aho-Corasick algorithm. It’s a very special algorithm that allows one to check millions of viruses’ signatures against one suspected file. It was made in the 70s, and it is extremely fast. “Once Bitcoin appeared, every type of malware out there transformed itself into ransomware.” —Paul Gagniuc, University Politehnica of Bucharest This is the basis of classical antivirus software. Now, people are using artificial intelligence to see how useful it can be, and I’m sure it can be, because at root the problem is pattern recognition. But there are also malware files that can change their own code, called polymorphic malware, which are very hard to detect. Where do you get a database of viruses to check for? Gagniuc: When I was working on Scut Antivirus, I had some help from some hackers from Ukraine, who allowed me to have a big database, a big malware bank. It’s an archive which has several millions of infected files with different types of malware. At that time, VirusTotal was becoming more and more known in the security world. Before it was bought by Google [in 2012], VirusTotal was the place where all the security companies started to verify files. So if we had a suspected file, we uploaded it to VirusTotal. “I’m scared of a loss of know-how, and not only for antivirus, but for technology in general.” —Paul Gagniuc, University Politehnica of Bucharest This was a very interesting system, because it allowed for quick verification of a suspicious file. But this also had some consequences. What happened was that every security company started to believe what they saw in the results of VirusTotal. So that did lead to a loss of diversity in the different laboratories, from Kaspersky to Norton. How has malware changed during the time you’ve been involved in the field? Gagniuc: There are two different periods, namely the period up to 2009, and the period after that. The security world splits when Bitcoin appears. Before Bitcoin, we had viruses, we had the Trojan horses, we had worms, we had different types of spyware and keyloggers. We had everything. The diversity was high. Each of these types of malware had a specific purpose, but nothing was linked to real life. Ransomware existed, but at the time it was mainly playful. Why? Because in order to have ransomware, you have to be able to oblige the user to pay you, and in order to pay, you have to make contact with a bank. And when you make the contact with a bank, you have to have an ID. Once Bitcoin appeared, every type of malware out there transformed itself into ransomware. Once a user can pay by using Bitcoin or other cryptocurrency, then you don’t have any control over the identity of the hacker. Where do you see the future of antiviruses going? Gagniuc: It’s hard to say what the future will bring, but it’s indispensable. You cannot live without a security system. Antiviruses are here to stay. Of course, a lot of trials will be made by using artificial intelligence. But I’m scared of a loss of know-how, and not only for antivirus, but for technology in general. In my view, something happened in the education of young people about 2008, when they became less adept at working with assembly language. 
Today, at my university in Bucharest, I see that every engineering student knows one thing and only one thing: Python. And Python uses a virtual machine, like Java; it’s a combination of what in the past was called a scripting language and a programming language. You cannot do with it what you could do with C++, for instance. So at the worldwide level, there was a de-professionalization of young people, whereas in the past, in my time, everyone was advanced. You couldn’t work with a computer without being very advanced. Big leaders of our companies in this globalized system must take into consideration the possibility of loss of knowledge. Did you write the book partially as an effort to fix this lack of know-how? Gagniuc: Yes. Basically, this loss of knowledge can be avoided if everybody brings their own experience into the publishing world. Because even if I don’t write that book for humans, although I’m sure that many humans are interested in the book, at least it will be known by artificial intelligence. That’s the reality.
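The Aho-Corasick search Gagniuc mentions is what lets an engine check one suspected file against millions of signatures in a single pass. The sketch below is a simplified teaching version with toy byte-string signatures, not code from Scut Antivirus or any production engine; real engines add wildcard signatures, unpacking, and many other optimizations on top of this core idea.

    from collections import deque

    def build_automaton(signatures):
        # One node per trie state: byte transitions, a failure link, and the
        # signatures that end at this state.
        nodes = [{"next": {}, "fail": 0, "out": []}]
        for sig in signatures:                        # 1) build the trie
            state = 0
            for byte in sig:
                if byte not in nodes[state]["next"]:
                    nodes.append({"next": {}, "fail": 0, "out": []})
                    nodes[state]["next"][byte] = len(nodes) - 1
                state = nodes[state]["next"][byte]
            nodes[state]["out"].append(sig)
        queue = deque(nodes[0]["next"].values())      # 2) breadth-first failure links
        while queue:
            state = queue.popleft()
            for byte, child in nodes[state]["next"].items():
                queue.append(child)
                fail = nodes[state]["fail"]
                while fail and byte not in nodes[fail]["next"]:
                    fail = nodes[fail]["fail"]
                link = nodes[fail]["next"].get(byte, 0)
                nodes[child]["fail"] = 0 if link == child else link
                nodes[child]["out"] += nodes[nodes[child]["fail"]]["out"]
        return nodes

    def scan(data, nodes):
        # Single pass over the file: report (offset, signature) for every match.
        hits, state = [], 0
        for i, byte in enumerate(data):
            while state and byte not in nodes[state]["next"]:
                state = nodes[state]["fail"]
            state = nodes[state]["next"].get(byte, 0)
            for sig in nodes[state]["out"]:
                hits.append((i - len(sig) + 1, sig))
        return hits

    # Toy "signature database" and a toy "file" to scan.
    automaton = build_automaton([b"EVIL_PAYLOAD", b"DROPPER", b"PAYLOAD"])
    print(scan(b"...junk...EVIL_PAYLOAD...more junk...DROPPER...", automaton))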

  • Asimov's Laws of Robotics Need an Update for AI
    by Dariusz Jemielniak on 14. Januara 2025. at 15:00

    In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans. But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov’s original concerns about physical harm and obedience. Deepfakes, Misinformation, and Scams The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust. Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles are just as persuasive as traditional propaganda, or even more so, and using AI to create convincing content requires very little effort. Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited. Even more alarmingly, children and teenagers are forming emotional attachments to AI agents, and are sometimes unable to distinguish between interactions with real friends and bots online. Already, there have been suicides attributed to interactions with AI chatbots. In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and disclosure of AI-generated content. 
In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans. Therefore, we must make an addition to Asimov’s laws. Fourth Law: A robot or AI must not deceive a human by impersonating a human being. The Way Toward Trusted AI We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human. Implementation of this Fourth Law would require mandatory AI disclosure in direct interactions, clear labeling of AI-generated content, technical standards for AI identification, legal frameworks for enforcement, and educational initiatives to improve AI literacy. Of course, all this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem. But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence. Asimov’s complex stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.

  • Be the Key Influencer in Your Career
    by Tariq Samad on 13. Januara 2025. at 19:00

    This article is part of our exclusive career advice series in partnership with the IEEE Technology and Engineering Management Society. When thinking about influencers, you might initially consider people with a large social media following who have the power to affect people with an interest in fashion, fitness, or food. However, the people closest to you can influence the success you have in the early days of your career in ways that affect your professional journey. These influencers include you, your management, colleagues, and family. Take control of your career You are—or should be—the most prominent influencer of your career. Fortunately, you’re the one you have the most control over. Your ability to solve engineering problems is a significant determining factor in your career growth. The tech world is constantly evolving, so you need to stay on top of the latest developments in your specialization. You also should make it a priority to learn about related technical fields, as doing so can help you understand more and advance faster. Another trait that can influence your career trajectory is your personality. How comfortable are you with facing awkward or difficult situations? What is your willingness to accept levels of risk when making commitments? What is your communication style with your peers and management? Do you prefer routine or challenging assignments? How interested are you in working with people from different backgrounds and cultures? Do you prefer to work on your own or as part of a team? Most of those questions don’t have right or wrong answers, but how you respond to them can help you chart your path. At the same time, be cognizant of the impression you make on others. How would you like them to think of you? How you present yourself is important, and it’s within your control. Lead with confidence about your abilities, but don’t be afraid to seek help or ask questions to learn more. You want to be confident in yourself, but if you can’t ask for help or acknowledge when you’re wrong, you’ll struggle to form good relationships with your colleagues and management. Learn about your company’s leadership Your immediate supervisor, manager, and company leaders can impact your career. Much depends on your willingness to demonstrate initiative, accept challenging work, and be dedicated to the team. Don’t forget that it is a job, however, and you will not stay in your first role forever. Develop a good business relationship with your manager while recognizing the power dynamic. Learn to communicate with your manager; what works for one leader might not work for another. Like all of us, managers have their idiosyncrasies. Accept theirs and be aware of your own. If your supervisor makes unachievable performance demands, don’t immediately consider it a red flag. Such stretch assignments can be growth opportunities, provided an environment of trust exists. But beware of bosses who become possessive and prevent you from accepting other opportunities within the organization rather than viewing you as the organization’s investment in talent. Make it a priority to learn about your company’s leadership. How does the business work? What are the top priorities and values for the company, and why? Find out the goals of the organization and your department. Learn how budgets are allocated and adjusted. Understand how the engineering and technology departments work with the marketing department, system integration, manufacturing, and other groups. 
Companies differ in structure, business models, industry sectors, financial health, and many other aspects. The insight you gain from your managers is valuable to you, both in your current organization and with future employers. Form strong relationships with coworkers Take the time to understand your colleagues, who probably face similar issues. Try to learn something about the people you spend most of your day with attempting to solve technical problems. What do you have in common? How do your skills complement each other? You also should develop social connections with your colleagues—which can enrich your after-work life and help you bond over job-related issues. As a young professional, you might not fully understand the industry in which your employer operates. A strong collaborative relationship with more experienced colleagues can help you learn about customer needs, available products and services, competitors, market share, regulations, and technical standards. By becoming more aware of your industry, you might even come up with ideas for new offerings and find ways to develop your skills. Family ties are important You’re responsible for your career, but the happiness and well-being of those close to you should be part of the calculus of your life. Individual circumstances related to family—a partner’s job, say, or parents’ health or children’s needs—can influence your professional decisions. Your own health and career trajectory are also part of the whole. Remember: Your career is part of your life, not the entire thing. Find a way to balance your career, life, and family. Planning your next steps As engineers and technologists, our work is not just a means to earn a living but also a source of fulfillment, social connections, and intellectual challenge. Where would you like to be professionally in 5, 10, or 15 years? Do you see yourself as an expert in key technical areas leading large and impactful programs? A manager or senior executive? An entrepreneur? If you haven’t articulated your objectives and preferences, that’s fine. You’re early in your career, and it’s normal to be figuring out what you want. But if so, you should think about what you need to learn before planning for your next steps. Whatever your path forward, you can benefit from your career influencers—the people who challenge you, teach you, and cause you to think about what you want.

  • AI Mistakes Are Very Different Than Human Mistakes
    by Nathan E. Sanders on 13. Januara 2025. at 13:00

    Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death. Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes. Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do. Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes. Human Mistakes vs AI Mistakes Life experience makes it fairly easy for each of us to guess when and where humans will make mistakes. Human errors tend to come at the edges of someone’s knowledge: Most of us would make mistakes solving calculus problems. We expect human mistakes to be clustered: A single calculus mistake is likely to be accompanied by others. We expect mistakes to wax and wane, predictably depending on factors such as fatigue and distraction. And mistakes are often accompanied by ignorance: Someone who makes calculus mistakes is also likely to respond “I don’t know” to calculus-related questions. To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models—particularly LLMs—make mistakes differently. AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats. And AI mistakes aren’t accompanied by ignorance. An LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is. How to Deal with AI Mistakes This situation indicates two possible areas of research. The first is to engineer LLMs that make more human-like mistakes. The second is to build new mistake-correcting systems that deal with the specific sorts of mistakes that LLMs tend to make. We already have some tools to lead LLMs to act in more human-like ways. 
Many of these arise from the field of “alignment” research, which aims to make models act in accordance with the goals and motivations of their human developers. One example is the technique that was arguably responsible for the breakthrough success of ChatGPT: reinforcement learning with human feedback. In this method, an AI model is (figuratively) rewarded for producing responses that get a thumbs-up from human evaluators. Similar approaches could be used to induce AI systems to make more human-like mistakes, particularly by penalizing them more for mistakes that are less intelligible. When it comes to catching AI mistakes, some of the systems that we use to prevent human mistakes will help. To an extent, forcing LLMs to double-check their own work can help prevent errors. But LLMs can also confabulate seemingly plausible, but truly ridiculous, explanations for their flights from reason. Other mistake mitigation systems for AI are unlike anything we use for humans. Because machines can’t get fatigued or frustrated in the way that humans do, it can help to ask an LLM the same question repeatedly in slightly different ways and then synthesize its multiple responses. Humans won’t put up with that kind of annoying repetition, but machines will. Understanding Similarities and Differences Researchers are still struggling to understand where LLM mistakes diverge from human ones. Some of the weirdness of AI is actually more human-like than it first appears. Small changes to a query to an LLM can result in wildly different responses, a problem known as prompt sensitivity. But, as any survey researcher can tell you, humans behave this way, too. The phrasing of a question in an opinion poll can have drastic impacts on the answers. LLMs also seem to have a bias towards repeating the words that were most common in their training data; for example, guessing familiar place names like “America” even when asked about more exotic locations. Perhaps this is an example of the human “availability heuristic” manifesting in LLMs, with machines spitting out the first thing that comes to mind rather than reasoning through the question. And like humans, perhaps, some LLMs seem to get distracted in the middle of long documents; they’re better able to remember facts from the beginning and end. There is already progress on improving this error mode, as researchers have found that LLMs trained on more examples of retrieving information from long texts seem to do better at retrieving information uniformly. In some cases, what’s bizarre about LLMs is that they act more like humans than we think they should. For example, some researchers have tested the hypothesis that LLMs perform better when offered a cash reward or threatened with death. It also turns out that some of the best ways to “jailbreak” LLMs (getting them to disobey their creators’ explicit instructions) look a lot like the kinds of social engineering tricks that humans use on each other: for example, pretending to be someone else or saying that the request is just a joke. But other effective jailbreaking techniques are things no human would ever fall for. One group found that if they used ASCII art (constructions of symbols that look like words or pictures) to pose dangerous questions, like how to build a bomb, the LLM would answer them willingly. Humans may occasionally make seemingly random, incomprehensible, and inconsistent mistakes, but such occurrences are rare and often indicative of more serious problems. 
We also tend not to put people exhibiting these behaviors in decision-making positions. Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.
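The repeated-questioning mitigation described above can be prototyped with a few lines of generic code. In this sketch, query_model is a stand-in for whatever chat API is in use, and the paraphrase list and agreement threshold are arbitrary example choices; a production system would compare answers by meaning rather than by exact string.

    from collections import Counter

    def consistent_answer(question, query_model, threshold=0.6):
        # Ask the same thing several ways; machines don't mind the repetition.
        phrasings = [
            question,
            f"Answer briefly: {question}",
            f"{question} Reply with just the answer.",
            f"A colleague asks: {question} What do you tell them?",
            f"Think it through, then answer concisely: {question}",
        ]
        answers = [query_model(p).strip().lower() for p in phrasings]
        best, count = Counter(answers).most_common(1)[0]
        # Trust the answer only when a clear majority of phrasings agree on it;
        # exact-string matching is a crude stand-in for real answer comparison.
        return best if count / len(answers) >= threshold else None

    # Demo with a fake, occasionally inconsistent "model".
    canned = iter(["Paris", "paris", "Paris", "Lyon", "paris"])
    print(consistent_answer("What is the capital of France?", lambda prompt: next(canned)))
    # -> paris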

  • IEEE Offers New Credential to Address Tech Skills Gap
    by Jennifer Fong on 11. Januara 2025. at 14:00

    Analysts predict that demand for engineers will skyrocket during the next decade, and that the supply will fall substantially short. A CompTIA report about the tech workforce estimates that there will be an additional 7.1 million tech jobs in the United States by 2034. Yet nearly one in three engineering jobs will go unfilled each year through 2030, according to a report from the Boston Consulting Group and SAE International. Ongoing tech investment programs such as the 2022 U.S. CHIPS and Science Act seek to build a strong technical workforce. The reality, however, is that the workforce pipeline is leaking badly. The BCG-SAE report found that only 13 percent of students who express initial interest in engineering and technical careers ultimately choose that career path. The statistics are even worse among women. Of the women who graduated with an engineering degree from 2006 to 2010, only 27 percent were still working in the field in 2021, compared with 41 percent of men with the same degree. To help address the significant labor gap, companies are considering alternative educational pathways to technical jobs. The businesses realize that some technician roles might not actually require a college degree. Ways to develop needed skills outside of traditional schooling—such as apprenticeships, vocational programs, professional certifications, and online courses—could help fill the workforce pipeline. When taking those alternative pathways, though, students need a way to demonstrate they have acquired the skills employers are seeking. One way is through skills-based microcredentials. IEEE is the world’s largest technical professional organization, with decades of experience offering industry-relevant credentials as well as expertise in global standardization. As the tech industry looks for a meaningful credential to help ease the semiconductor labor shortage, IEEE has the credibility and infrastructure to offer a meaningful, standardized microcredentialing program that meets semiconductor industry needs and creates opportunities for people who have traditionally been underrepresented in technical fields. The IEEE Credentialing Program is now offering skills-based microcredentials for training courses. Earning credentials while acquiring skills Microcredentials are issued when learners prove mastery of a specific skill. Unlike more traditional university degrees and course certificates, microcredential programs are not based on successfully completing a full learning program. Rather, a student might earn multiple microcredentials in a single program based on the skills demonstrated. A qualified instructor using an assessment instrument determines that the learner has acquired the skill and earned the credential. Mastery of skills might be determined through observation, completion of a task, or a written test. In a technician training course held in a clean-room setting, for example, an instructor might use an observation checklist that rates each student’s ability to demonstrate adherence to safety procedures. During the assessment, the students complete the steps while the instructor observes. Upon successful completion of each step, a student would earn a microcredential for that skill. Microcredentials are stackable; a student can earn them from different programs and institutions to demonstrate their growing skill set. Students can carry their earned credentials in a digital “wallet” for easy access. 
The IEEE Learning Technology Standards Committee is working on a recommended practice standard to help facilitate the portability of such records. Microcredentials differ from professional credentials When considering microcredentials, it is important to understand where they fall in the wider scope of credentials available through learning programs. The credentials commonly earned can be placed along a spectrum, from easy accessibility and low personal investment to restricted accessibility and high investment. Microcredentials are among the most accessible alternative educational pathways, but they are in need of standardization. The most formal credentials are degrees issued by universities and colleges. They have a strict set of criteria associated with them, and they often are accredited by a third party, such as ABET in the United States. The degrees typically require a significant investment of time and money, and they are required for some professional roles as well as for advanced studies. Certifications require specialized training on a formal body of knowledge, and students need to pass an exam to prove mastery of the subject. A learner seeking such a credential typically pays both for the learning and the test. Some employers accept certifications for certain types of roles, particularly in IT fields. A cybersecurity professional might earn a Computing Technology Industry Association Security+ certification, for example. CompTIA is a nonprofit trade association that issues certifications for the IT industry. Individual training courses are farther down the spectrum. Typically, a learner receives a certificate upon successful completion of an individual training course. After completing a series of courses in a program, students might receive a digital badge, which comes with associated metadata about the program that can be shared on professional networks and CVs. The credentials often are associated with continuing professional education programs. Microcredentials are at the end of the accessibility spectrum. Tied to a demonstrated mastery of skills, they are driven by assessments, rather than completion of a formal learning program or number of hours of instruction. This key difference can make them the most accessible type of credential, and one that can help a job seeker pursue alternative routes to employment beyond a formal degree or certification. Standardization of microcredentials A number of educational institutions and training providers offer microcredentials. Different providers have different criteria when issuing microcredentials, though, making them less useful to industry. Some academic institutions, for example, consider anything less than a university degree to be a microcredential. Other training providers offer microcredentials for completing a series of courses. There are other types of credentials that work for such scenarios, however. By ensuring that microcredentials are tied to skills alone, IEEE can provide a useful differentiation that benefits both job seekers and employers. Microcredentials for clean-room training IEEE is working to standardize the definition of microcredentials and what is required to issue them. By serving as a centralized source and drawing on more than 30 years of experience in issuing professional credentials, IEEE can help microcredential providers offer a credit that is recognized by—and meaningful to—industry. 
That, in turn, can help job seekers increase their career options as they build proof of the skills they’ve developed. Last year IEEE collaborated with the University of Southern California, in Los Angeles, and the California DREAMS Microelectronics Hub on a microcredentialing program. USC offered a two-week Cleanroom Gateway pilot program to help adult learners who were not currently enrolled in a USC degree program learn the fundamentals of working in a semiconductor fabrication clean room. The school wanted to provide them with a credential that would be recognized by semiconductor companies and help improve their technician-level job prospects. USC contacted IEEE to discuss credentialing opportunities. Together, the two organizations identified key industry-relevant skills that were taught in the program, as well as the assessment instruments needed to determine if learners master the skills. IEEE issued microcredentials for each skill mastered, along with a certificate and professional development hours for completing the entire program. The credentials, which now can be included on student CVs and LinkedIn profiles, are a good way for the students to show employers that they have the skills to work as a clean-room technician. How the IEEE program works IEEE’s credentialing program allows technical learning providers to supply credentials that bear the IEEE logo. Because IEEE is well respected in its fields of interest, its credentials are recognized by employers, who understand that the learning programs issuing them have been reviewed and approved. Credentials that can be issued through the IEEE Credentialing Program include certificates, digital badges, and microcredentials. Training providers that want to offer standardized microcredentials can apply to the IEEE Credentialing Program to become approved. Applications are reviewed by a committee to ensure that the provider is credible, offers training in IEEE’s fields of interest, has qualified instructors, and has well-defined assessments. Once a provider is approved, IEEE will work with it on the credentialing needs for each course offered, including the selection of skills to be recognized, designing the microcredentials, and creating a credential-issuing process. Upon successful completion of the program by learners, IEEE will issue the microcredentials on behalf of the training provider. You can learn more about offering IEEE microcredentials here.
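As a rough illustration of what stackable skill records in a learner’s digital wallet might look like, here is a hypothetical sketch; the field names and structure are invented for this example and do not represent the IEEE Credentialing Program’s actual data format or any published standard.

    from dataclasses import dataclass, field

    @dataclass
    class Microcredential:
        # Hypothetical record; not the IEEE program's actual schema.
        skill: str
        issuer: str
        program: str
        assessment: str          # how mastery was demonstrated
        date_earned: str

    @dataclass
    class Wallet:
        credentials: list = field(default_factory=list)

        def add(self, cred: Microcredential):
            # "Stacking" is simply accumulating records from different issuers.
            self.credentials.append(cred)

        def skills(self):
            return sorted({c.skill for c in self.credentials})

    wallet = Wallet()
    wallet.add(Microcredential("Clean-room gowning and safety", "IEEE (example)",
                               "Cleanroom Gateway (example)",
                               "instructor observation checklist", "2024-07-15"))
    wallet.add(Microcredential("Wafer-handling basics", "Another provider (example)",
                               "Intro to semiconductor fabrication",
                               "practical task", "2024-09-02"))
    print(wallet.skills())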

  • Video Friday: Arms on Vacuums
    by Evan Ackerman on 10. Januara 2025. at 17:00

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN RSS 2025: 21–25 June 2025, LOS ANGELES IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL Enjoy today’s videos! I’m not totally sure yet about the utility of having a small arm on a robot vacuum, but I love that this is a real thing. At least, it is at CES this year. [ Roborock ] We posted about SwitchBot’s new modular home robot system earlier this week, but here’s a new video showing some potentially useful hardware combinations. [ SwitchBot ] Yes, it’s in sim, but (and this is a relatively new thing) I will not be shocked to see this happen on Unitree’s hardware in the near future. [ Unitree ] With ongoing advancements in system engineering, LimX Dynamics’ full-size humanoid robot features a hollow actuator design and high torque-density actuators, enabling full-body balance for a wide range of motion. Now it achieves complex full-body movements in an ultra-stable and dynamic manner. [ LimX Dynamics ] We’ve seen hybrid quadrotor bipeds before, but this one, which is imitating the hopping behavior of Jacana birds, is pretty cute. What’s a Jacana bird, you ask? It’s these things, which surely must have the most extreme foot-to-body ratio of any bird: Also, much respect to the researchers for confidently titling this supplementary video “An Extremely Elegant Jump.” [ SSRN Paper preprint ] Twelve minutes flat from suitcase to mobile manipulator. Not bad! [ Pollen Robotics ] Happy New Year from Dusty Robotics! [ Dusty Robotics ]

  • This Pool Robot Is the First With Ultrasonic Mapping
    by Evan Ackerman on 10. Januara 2025. at 13:00

    Back in the day, the defining characteristic of home-cleaning robots was that they’d randomly bounce around your floor as part of their cleaning process, because the technology required to localize and map an area hadn’t yet trickled down to the consumer space. That all changed in 2010, when home robots started using lidar (and other things) to track their location and optimize how they cleaned. Consumer pool-cleaning robots are lagging about 15 years behind indoor robots on this, for a couple of reasons. First, most pool robots—different from automatic pool cleaners, which are purely mechanical systems that are driven by water pressure—have been tethered to an outlet for power, meaning that maximizing efficiency is less of a concern. And second, 3D underwater localization is a much different (and arguably more difficult) problem to solve than 2D indoor localization was. But pool robots are catching up, and at CES this week, Wybot introduced an untethered robot that uses ultrasound to generate a 3D map for fast, efficient pool cleaning. And it’s solar powered and self-emptying, too. Underwater localization and navigation is not an easy problem for any robot. Private pools are certainly privileged to be operating environments with a reasonable amount of structure and predictability, at least if everything is working the way it should. But the lighting is always going to be a challenge, between bright sunlight, deep shadow, wave reflections, and occasionally murky water if the pool chemicals aren’t balanced very well. That makes relying on any light-based localization system iffy at best, and so Wybot has gone old-school, with ultrasound. Wybot Brings Ultrasound Back to Bots Ultrasound used to be a very common way for mobile robots to navigate. You may (or may not) remember venerable robots like the Pioneer 3, with those big ultrasonic sensors across its front. As cameras and lidar got cheap and reliable, the messiness of ultrasonic sensors fell out of favor, but sound is still ideal for underwater applications where anything that relies on light may struggle. The Wybot S3 uses 12 ultrasonic sensors, plus motor encoders and an inertial measurement unit to map residential pools in three dimensions. “We had to choose the ultrasonic sensors very carefully,” explains Felix (Huo) Feng, the chief technology officer of Wybot. “Actually, we use multiple different sensors, and we compute time of flight [of the sonar pulses] to calculate distance.” The positional accuracy of the resulting map is about 10 centimeters, which is totally fine for the robot to get its job done, although Feng says that they’re actively working to improve the map’s resolution. For path-planning purposes, the 3D map gets deconstructed into a series of 2D maps, since the robot needs to clean the bottom of the pool, stairs, and ledges, and also the sides of the pool. Efficiency is particularly important for the S3 because its charging dock has enough solar panels on the top of it to provide about 90 minutes of run time for the robot over the course of an optimally sunny day. If your pool isn’t too big, that means the robot can clean it daily without requiring a power connection to the dock. The dock also sucks debris out of the collection bin on the robot itself, and Wybot suggests that the S3 can go for up to a month of cleaning without the dock overflowing. The S3 has a camera on the front, which is used primarily to identify and prioritize dirtier areas (through AI, of course) that need focused cleaning. 
At some point in the future, Wybot may be able to use vision for navigation too, but my guess is that for reliable 24/7 navigation, ultrasound will still be necessary. One other interesting little tidbit is the communication system. The dock can talk to your Wi-Fi, of course, and then talk to the robot while it’s charging. Once the robot goes off for a swim, however, traditional wireless signals won’t work, but the dock has its own sonar that can talk to the robot at several bytes per second. This isn’t going to get you streaming video from the robot’s camera, but it’s enough to let you steer the robot if you want, or ask it to come back to the dock, get battery status updates, and similar sorts of things. The Wybot S3 will go on sale in Q2 of this year for a staggering US $2,999, but that’s how it always works: The first time a new technology shows up in the consumer space, it’s inevitably at a premium. Give it time, though, and my guess is that the ability to navigate and self-empty will become standard features in pool robots. But as far as I know, Wybot got there first.
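The time-of-flight calculation Feng describes reduces to one line of arithmetic, sketched below with an assumed speed of sound for fresh water; Wybot has not published its actual code, and the real system fuses many such range measurements with encoder and IMU data to build its map.

    SPEED_OF_SOUND_WATER_M_S = 1480.0   # typical for fresh water near 20 °C (assumed)

    def echo_distance_m(round_trip_seconds):
        # The pulse travels out and back, so divide the round trip by two.
        return SPEED_OF_SOUND_WATER_M_S * round_trip_seconds / 2.0

    # A wall that returns an echo after 4 milliseconds is roughly 3 meters away.
    print(round(echo_distance_m(0.004), 2))   # 2.96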

  • Meet the Candidates Running for 2026 IEEE President-Elect
    by Joanna Goodrich on 9. Januara 2025. at 19:00

    The IEEE Board of Directors has nominated IEEE Senior Members Jill I. Gostin and David Alan Koehler as candidates for 2026 IEEE president-elect. IEEE Life Fellow Manfred “Fred” J. Schindler is seeking to be a petition candidate. The winner of this year’s election will serve as IEEE president in 2027. For more information about the election, president-elect candidates, and the petition process, visit the IEEE election website. IEEE Senior Member Jill I. Gostin Nominated by the IEEE Board of Directors Gostin is a principal research scientist at the Georgia Tech Research Institute in Atlanta, focusing on algorithms and developing and testing software for sensor systems. She is the systems engineering, integration, and test lead in the software engineering and architecture division. She has managed large technical programs and led research collaborations among academia, government, and industry. Her papers have been published in multiple conference proceedings. Her presentation on fractal geometry applications was selected as Best Paper at the National Telesystems Conference and was published in IEEE Aerospace and Electronic Systems Magazine. Gostin has held several IEEE leadership positions including vice president, IEEE Member and Geographic Activities and Region 3 director. She is a former chair of the IEEE Atlanta Section and of the IEEE Computer Society’s Atlanta chapter. She served on the IEEE Computer and IEEE Aerospace and Electronic Systems societies’ boards of governors and has led or been a member of several IEEE organizational units and committees, locally and globally. In 2016 the Georgia Women in Technology named Gostin its Woman of the Year, an award that recognizes technology executives for their accomplishments as business leaders, technology visionaries, and impact makers in their community. IEEE Senior Member David Alan Koehler Nominated by the IEEE Board of Directors Koehler is a business development manager for Doble Engineering Co. in Marlborough, Mass. Doble, which manufactures diagnostic testing equipment and software, provides engineering services for utilities, service companies, and OEMs worldwide. More than 100 years old, the company is a leader in the power and energy sector. Koehler has 20 years of experience in testing insulating liquids and managing analytical laboratories. He has presented his work at technical conferences and published articles in technical publications related to the power industry. An active volunteer, he has served in every geographical unit within IEEE. His first leadership position was treasurer of the Central Indiana Section in 2010. He served as 2022 vice president of IEEE Member and Geographic Activities, 2019–2020 director of IEEE Region 4, and 2024 chair of the IEEE Board of Directors Ad Hoc Committee on Leadership Continuity and Efficiency. He served on the IEEE Board of Directors for three different years. He has been a member of IEEE-USA, Member and Geographic Activities, and Publication Services and Products boards. He received his bachelor’s degree in chemistry and his master of business administration from Indiana University in Bloomington. IEEE Life Fellow Manfred “Fred” J. Schindler Seeking petition candidacy Schindler, an expert in microwave semiconductor circuits and technology, is an independent consultant supporting clients with technical expertise, due diligence, and project management. 
Throughout his career, he led the development of gallium arsenide monolithic microwave integrated-circuit technology, from lab demonstrations to the production of high-volume commercial products. He has numerous technical publications and 11 patents. He previously served as CTO of Anlotek and director of Qorvo and RFMD’s Boston design center. He was applications manager of IBM’s microelectronics wireless products group, engineering manager at ATN Microwave, and manager of Raytheon’s microwave circuits research laboratory. An IEEE volunteer for more than 30 years, he served as the 2024 vice president of IEEE Technical Activities and the 2022–2023 Division IV director. He was chair of the IEEE Conferences Committee from 2015 to 2018 and president of the IEEE Microwave Theory and Technology Society in 2003. He founded the IEEE MTTS Radio Wireless Symposium in 2006 and was general chair of the 2009 International Microwave Symposium. The IEEE–Eta Kappa Nu member received the 2018 IEEE MTTS Distinguished Service Award. He has been writing an award-winning column focused on business for IEEE Microwave Magazine since 2011. To sign Schindler’s petition, click here.

  • Tragedy Spurred the First Effective Land-Mine Detector
    by Joanna Goodrich on 8. Januara 2025. at 13:00

    Land mines have been around in one form or another for more than a thousand years. By now, you’d think a simple and safe way of locating and removing the devices would’ve been engineered. But that’s not the case. In fact, up until World War II, the most common method for finding the explosives was to prod the ground with a pointed stick or bayonet. The hockey-puck-size devices were buried about 15 centimeters below the ground. When someone stepped on the ground above or near the mine, their weight triggered a pressure sensor and caused the device to explode. So mine clearing was nearly as dangerous as just walking through a minefield unawares. During World War II, land mines were widely used by both Axis and Allied forces and were responsible for the deaths of 375,000 soldiers, according to the Warfare History Network. In 1941 Józef Stanislaw Kosacki, a Polish signals officer who had escaped to the United Kingdom, developed the first portable device to effectively detect a land mine without inadvertently triggering it. It proved to be twice as fast as previous mine-detection methods, and was soon in wide use by the British and their allies. The Engineer Behind the Portable Mine Detector Before inventing his mine detector, Kosacki worked as an engineer and had developed tools to detect explosives for the Polish Armed Forces. After receiving a bachelor’s degree in electrical engineering from the Warsaw University of Technology in 1933, Kosacki completed his year-long mandatory service with the army. He then joined the National Telecommunications Institute in Warsaw as a manager. Then, as now, the agency led the country’s R&D in telecommunications and information technologies. In 1937 Kosacki was commissioned by the Polish Ministry of National Defence to develop a machine that could detect unexploded grenades and shells. He completed his machine, but it was never used in the field. Polish engineer Józef Kosacki’s portable land-mine detector saved thousands of soldiers’ lives in World War II. Military Historical Office When Germany invaded Poland in September 1939, Kosacki returned to active duty. Because of his background in electrical engineering, he was placed in a special communications unit that was responsible for the upkeep of the Warszawa II radio station. But that duty lasted only until the radio towers were destroyed by the German Army a month after the invasion. With Warsaw under German occupation, Kosacki and his unit were captured and taken to an internment camp in Hungary. In December 1939, he escaped and eventually found his way to the United Kingdom. There he joined other Polish soldiers in the 1st Polish Army Corps, stationed in St. Andrews, Scotland. He trained soldiers in the use of wireless telegraphy to send messages in Morse code. Then tragedy struck. Tragedy Inspired Engineering Ingenuity The invention of the portable mine detector came about after a terrible accident on the beaches of Dundee, Scotland. In 1940, the British Army, fearing a German invasion, buried thousands of land mines along the coast. But they didn’t notify their allies. Soldiers from the Polish 10th Armored Cavalry Brigade on a routine patrol of the beach were killed or injured when the land mines exploded. This event prompted the British Army to launch a contest to develop an effective land-mine detector. Each entrant had to pass a simple test: Detect a handful of coins scattered on the beach. Kosacki and his assistant spent three months refining Kosacki’s earlier grenade detector. 
During the competition, their new detector located all of the coins, beating the other six devices entered. There’s some murkiness about the detector’s exact circuitry, as befits a technology developed under wartime security, but our best understanding is this: The tool consisted of a bamboo pole with an oval-shaped wooden panel at one end that held two coils—one transmitting and one receiving, according to a 2015 article in Aerospace Research in Bulgaria. The soldier held the detector by the pole and passed the wooden panel over the ground. A wooden backpack encased a battery unit, an acoustic-frequency oscillator, and an amplifier. The transmitting coil was connected to the oscillator, which generated current at an acoustic frequency, writes Mike Croll in his book The History of Landmines. The receiving coil was connected to the amplifier, which was then linked to a pair of headphones. The detector weighed less than 14 kilograms and operated much like the metal detectors used by beachcombers today. When the panel came close to a metallic object, the induction balance between the two coils was disturbed. Via the amplifier, the receiving coil sent an audio signal to the headphones, notifying the soldier of a potential land mine. The equipment weighed just under 14 kilograms and could be operated by one soldier, according to Croll. Kosacki didn’t patent his technology and instead gave the British Army access to the device’s schematics. The only recognition he received at the time was a letter from King George VI thanking him for his service. Detectors were quickly manufactured and shipped to North Africa, where German commander Erwin Rommel had ordered his troops to build a defensive network of land mines and barbed wire that he called the Devil’s Gardens. The minefields stretched from the Mediterranean in northern Egypt to the Qattara Depression in western Egypt and contained an estimated 16 million mines over 2,900 square kilometers. Kosacki’s detectors were first used in the Second Battle of El Alamein, in Egypt, in October and November of 1942. British soldiers used the device to scour the minefield for explosives. Scorpion tanks followed the soldiers; heavy chains mounted on the front flailed the ground and exploded the mines as the tank moved forward. Kosacki’s mine detector doubled the speed at which such heavily mined areas could be cleared, from 100 to 200 square meters an hour. By the end of the war, his invention had saved thousands of lives. The basic design with minor modifications continued to be used by Canada, the United Kingdom, and the United States until the end of the First Gulf War in 1991. By then, engineers had developed more sensitive portable detectors, as well as remote-controlled mine-clearing systems. Kosacki wasn’t publicly recognized for his work until after World War II, to prevent retribution against his family in German-occupied Poland. When Kosacki returned to Poland after the war, he began teaching electrical engineering at the National Centre for Nuclear Research, in Otwock-Świerk. He was also a professor at what is now the Military University of Technology in Warsaw. He died in 1990. 
The prototype of Kosacki’s detector shown at top is housed at the museum of the Military Institute of Engineering Technology, in Wroclaw, Poland. Land Mines Are Still a Worldwide Problem Land-mine detection has still not been perfected, and the explosive devices are still a huge problem worldwide. On average, one person is killed or injured by land mines and other explosive ordnance every hour, according to UNICEF. Today, it’s estimated that 60 countries are still contaminated by mines and unexploded ordnance. Although portable mine detectors continue to be used, drones have become another detection method. For example, they’ve been used in Ukraine by several humanitarian nonprofits, including the Norwegian People’s Aid and the HALO Trust. Nonprofit APOPO is taking a different approach: training rats to sniff out explosives. The APOPO HeroRATs, as they are called, only detect the scent of explosives and ignore scrap metal, according to the organization. A single HeroRAT can search an area the size of a tennis court in 30 minutes, instead of the four days it would take a human to do so. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the January 2025 print issue as “The First Land-Mine Detector That Actually Worked.” References There are many well-known Polish scientists, such as Marie Curie, whose discoveries and research made history. As a first-generation Polish-American, I wanted to learn more about the engineering feats of unknown innovators in Poland. After scouring museum websites, I came across Józef Kosacki and his portable mine detector. To learn more about his life and work, I read excerpts of Przemyslaw Slowinski and Teresa Kowalik’s 2020 book, Królewski dar: Co Polska i Polacy dali światu (“Royal Gifts: What Poland and Poles Gave the World”). I also read several articles about Kosacki on both Polish and Scottish websites, including the Polish Press Agency, Wielka Historia (Great History), and Curious St. Andrews, which covers the history of the city. To learn more about the history of land mines, I read articles published by the nonprofit APOPO, the BBC, and the U.S. Army. APOPO researches, develops, and implements land-mine detection technology to clear minefields in Africa.

  • Ambitious Projects Could Reshape Geopolitics
    by Harry Goldstein on 6. Januara 2025. at 16:30

Over the last year, Spectrum’s editors have noticed an emerging through line connecting several major stories: the centrality of technology to geopolitics. Last month, our cover story, done in partnership with Foreign Policy magazine, was on the future of submarine warfare. And last October, we focused on how sea drones could bolster Taiwan’s “silicon shield” strategy, which rests on Taiwan Semiconductor Manufacturing Co.’s domination of high-end chip manufacturing. So when I asked the curator of this issue, Senior Editor Samuel K. Moore, what he saw as the major theme as we head into 2025, I wasn’t surprised when he said, without hesitation, “geopolitics and technology.” In fact, the same day Sam and I spoke, I forwarded to Spectrum’s Glenn Zorpette a news item about China banning the export to the United States of gallium, germanium, and antimony. China’s overwhelming command of critical minerals like these, and of rare earths in particular, is at the heart of Zorpette’s story in this issue. “Inside an American Rare Earth Boomtown” paints a vivid picture of how the United States is trying to nurture a domestic rare earth mining and processing industry. China, meanwhile, is looking to minimize its own dependence on imported uranium by building a thorium-based molten-salt reactor in the Gobi Desert. And tensions between China and Taiwan will undoubtedly be further strained with the opening of TSMC’s first advanced wafer fab in the United States this year. The mitigation of climate change is another key area where politics informs tech advances. In “Startups Begin Geoengineering the Sea”, Senior Associate Editor Emily Waltz takes readers aboard a pair of barges anchored near the Port of Los Angeles. There, two companies, Captura and Equatic, are piloting marine carbon-capture systems to strip CO2 out of ocean water. Whether the results can be measured accurately enough to help companies and countries meet their carbon-reduction goals is an open question. One way for the international community to study the impacts of these efforts could be Deep’s Sentinel program, the first part of which will be completed this year. Our correspondent Liam Critchley, based in England, reports in “Making Humans Aquatic Again” that Deep, located in Bristol, is building a modular habitat that will let scientists live underwater for weeks at a time. Another geopolitical concern also lies at sea: the vulnerability of undersea fiber-optic cables, which carry an ever-growing share of the world’s Internet traffic. The possibility of outages due to attack or accident is so worrying that NATO is funding a project to quickly detect undersea-cable damage and reroute data to satellites. In a provocative commentary on why technology will define the future of geopolitics published in Foreign Affairs in 2023, Eric Schmidt, chair of the Special Competitive Studies Project and the former CEO and chair of Google, argues that “a country’s ability to project power in the international sphere—militarily, economically, and culturally—depends on its ability to innovate faster and better than its competitors.” In this issue, you’ll get an idea of how various nations are faring in this regard. In the coming year, you can look forward to our continuing analysis of how the new U.S. administration’s policies on basic research, climate change, regulation, and immigration impact global competition for the raw materials and human resources that stoke the engines of innovation.

  • As EV Sales Stall, Plug-In Hybrids Get a Reboot
    by Lawrence Ulrich on 6. Januara 2025. at 13:00

    Automakers got one thing right: Electrified cars are the future. What they got wrong was assuming that all of those vehicles would run on battery power alone, with gasoline-electric hybrid technology bound for the technological scrap heap. Now the automaking giants are scrambling to course correct. They’re delaying their EV plans, rejiggering factories, and acknowledging what some clear-eyed observers (including IEEE Spectrum) suspected all along: Not every car buyer is ready or able to ditch the internal-combustion engine entirely, stymied by high EV prices or unnerved by a patchy, often-unreliable charging infrastructure. This article is part of our special report Top Tech 2025. Consumers are still looking for electrified rides, just not the ones that many industry pundits predicted. In China, Europe, and the United States, buyers are converging on hybrids, whose sales growth is outpacing that of pure EVs. “It’s almost been a religion that it’s EVs or bust, so let’s not fool around with hybrids or hydrogen,” says Michael Dunne, CEO of Dunne Insights, a leading analyst of China’s auto industry. “But even in the world’s largest market, almost half of electrified vehicle sales are hybrids.” In China, which accounts for about two-thirds of global electrified sales, buyers are flocking to plug-in hybrids, or PHEVs, which combine a gas engine with a rechargeable battery pack. Together, hybrids and all-electric vehicles (a category that the Chinese government calls “new energy vehicles”) reached a milestone last July, outselling internal-combustion engine cars in that country for the first time. PHEV sales are up 85 percent year-over-year, dwarfing the 12-percent gain for pure EVs. The picture is different in the United States, where customers still favor conventional hybrids that combine a gas engine with a small battery—no plug involved, no driver action required. Through September 2024, this year’s conventional hybrid sales in the United States have soared past 1.1 million, accounting for 10.6 percent of the overall car market, versus an 8.9 percent share for pure EVs, according to Wards Intelligence. Including PHEVs, that means a record 21 percent of the country’s new cars are now electrified. But even as overall EV sales rise, plug-in hybrid sales have stalled at a mere 2 percent of the U.S. market. A J.D. Power survey also showed low levels of satisfaction among PHEV owners. Those numbers are a disappointment to automakers and regulators, who have looked to PHEVs as a bridge technology between gasoline vehicles and pure EVs. But what if the problem isn’t plug-in tech per se but the type of PHEVs being offered? Could another market pivot be just around the corner? Plug-in Hybrids: The Next Generation The Ram brand, a part of Stellantis, is betting on new plug-in technology with its Ram 1500 Ramcharger, a brash full-size pickup scheduled to go on sale later this year. The Ramcharger is what’s known as an extended-range electric vehicle, or EREV. An EREV resembles a PHEV, pairing gas and electric powertrains, but with two key differences: An EREV integrates much larger batteries, enough to rack up significant all-electric miles before the internal-combustion engine kicks in; and the gas engine in an EREV is used entirely to generate electricity, not to propel the car directly. BMW demonstrated EREV tech beginning in 2013 with its pint-size i3 REx, which used a tiny motorcycle engine to generate electricity. 
The Chevrolet Volt, produced from 2010 to 2019, also used its gasoline engine largely as a generator, although it could partially propel the car in certain conditions. The short-lived 2012 Fisker Karma took a similar approach. None of these vehicles made much impact on the market. As a large pickup with conventional styling, the Ramcharger appears to be a more mainstream proposition. The platform for the 2025 Ram 1500 Ramcharger includes a V6 engine, a 92-kilowatt-hour battery pack, and a 130-kilowatt generator. Stellantis The Ramcharger shares its STLA Frame platform and electrical architecture with the upcoming pure-battery Ram 1500 REV, but it’s designed to provide more driving range, along with the option of gasoline fill-ups when there are no EV chargers around. Although the Ramcharger’s 92-kilowatt-hour battery has less than half the capacity of the Ram REV’s humongous upgraded 229-kWh battery (the largest ever in a passenger EV), it’s hardly small. The Ramcharger still packs more battery capacity than many all-electric vehicles, such as the Ford Mustang Mach-E and the Tesla Model 3. Originally, Ram planned to launch its EV pickup first. But based on consumer response, the brand decided to prioritize the Ramcharger; the Ram 1500 REV will now follow in 2026. Ram estimates that its pickup will travel about 233 kilometers (145 miles) on a fully charged battery, more than three times the stated 71-km electric range of a compact Toyota Prius Prime, among the U.S. market’s longest-range PHEVs. For typical commuters and around-town drivers, “that’s enough range where you could really use this truck as an EV 95 percent of the time,” says Ed Kim, chief analyst of AutoPacific. When the battery runs down below a preset level, the gasoline engine kicks in and sends up to 130 kilowatts of electricity to the battery, which continues to feed the permanent-magnet motors that propel the car. Since the motors do all the work, an EREV mimics an EV, needing no conventional transmission or driveshaft. But the onboard generator enables the Ramcharger to drive much farther than any conventional EV can. Extended-Range EVs: The Long Haul With batteries and gasoline working in tandem, the Ramcharger has an extended range of 1,110 km (690 miles), dwarfing the 824-km range of the Lucid Air Grand Touring, the current EV long-distance champ. And unlike many PHEVs, whose performance suffers when they rely on battery juice alone, the Ramcharger doesn’t skimp on power. The system churns up a mighty 494 kW (663 horsepower) and 834 newton-meters of torque. Ram estimates the Ramcharger will go from 0 to 60 miles per hour (0 to 97 kilometers per hour) in 4.4 seconds. Using the engine solely as a generator also addresses a couple of key criticisms of PHEVs: skimpy all-electric range, especially if the owner rarely bothers to plug it in, and compromised efficiency when the car runs primarily on gasoline. “Today’s PHEVs can be extremely efficient or extremely inefficient, all depending on how a customer decides to use it,” Kim says. “If they never plug in, it ends up being a hybrid that’s heavier than it needs to be, hauling around a battery it doesn’t really use.” In an EREV, by contrast, the engine can operate constantly in its most frugal operating range, which maximizes its thermal efficiency, rather than constantly revving from idle to high speeds and wasting fuel. (A simple sketch of that control logic appears below.) The Ramcharger should also excel at actual hauling.
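Here is a minimal sketch of that series-hybrid control logic in Python. The 92-kWh pack and 130-kW generator mirror the Ramcharger figures cited in the article; the state-of-charge thresholds, the steady 60-kW highway load, and the simple on/off control are invented for illustration and are not any automaker’s actual strategy.

```python
# Minimal sketch of extended-range EV (series hybrid) energy flow.
# Pack size and generator power echo the article's Ramcharger figures;
# everything else is an illustrative assumption.

BATTERY_CAPACITY_KWH = 92.0   # large pack, as in an EREV
GENERATOR_POWER_KW = 130.0    # engine-driven generator output
SOC_ENGINE_ON = 0.20          # start the engine below 20% charge (assumed)
SOC_ENGINE_OFF = 0.30         # stop it once the pack recovers (assumed)

def step(soc, drive_power_kw, engine_on, dt_hours=1 / 60):
    """Advance the pack state of charge by one time step.

    The electric motors always draw drive_power_kw from the battery;
    the engine never drives the wheels, it only recharges the pack.
    """
    if soc <= SOC_ENGINE_ON:
        engine_on = True
    elif soc >= SOC_ENGINE_OFF:
        engine_on = False
    net_kw = (GENERATOR_POWER_KW if engine_on else 0.0) - drive_power_kw
    soc += net_kw * dt_hours / BATTERY_CAPACITY_KWH
    return max(0.0, min(1.0, soc)), engine_on

# Simulate four hours of steady 60-kW highway driving.
soc, engine_on = 0.95, False
for minute in range(240):
    soc, engine_on = step(soc, 60.0, engine_on)
    if minute % 30 == 0:
        print(f"t={minute:3d} min  SOC={soc:4.0%}  engine {'on' if engine_on else 'off'}")
```

Real controllers fold in drive modes, emissions rules, and battery thermal limits, but the core idea is the same: the engine either runs near its efficiency sweet spot or not at all.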
Some truck loyalists have soured on all-electric pickups like the Ford F-150 Lightning, whose maximum 515-km range can drop by 50 percent or more when it’s towing or hauling heavy loads. The Ramcharger is rated to tow up to 14,000 pounds (6,350 kilograms) and to handle up to 2,625 pounds (1,190 kg) in its cargo bed. Company officials insist that its range won’t be unduly dinged by heavy lifting, although they haven’t yet revealed details. China Embraces EREVs If China is the bellwether for all things electric, the EREV could be the next big thing. EREVs now make up nearly a third of the nation’s plug-in-hybrid sales, according to Bloomberg New Energy Finance. Beijing-based Li Auto, founded nine years ago by billionaire entrepreneur Xiang Li, makes EREVs exclusively; its sales jumped nearly 50 percent in 2024. Big players, including Geely Auto and SAIC Motor Corp., are also getting into the game. Li Auto’s Li L9, shown here at a 2024 auto show in Tianjin, China, is an extended-range hybrid SUV designed for China’s wide-open spaces. NurPhoto/Getty Images The Li L9 is a formidable example of China’s EREV tech. The crossover SUV pairs a 44.5-kWh nickel cobalt manganese battery with a 1.5-liter turbo engine, for a muscular 330 kW (443 horsepower). Despite having a much smaller battery than the Ramcharger, the all-wheel-drive L9 can cover 180 km on a single charge. Then its four-cylinder engine, operating at a claimed 40.5 percent thermal efficiency, extends the total range to 1,100 km, perfect for China’s wide-open spaces. Dunne notes that in terms of geography and market preferences, China is more similar to the United States than it is to Europe. All-electric EVs predominate in China’s wealthier cities and coastal areas, but in outlying regions—characterized by long-distance drives and scarce public charging—many Chinese buyers are spurning EVs in favor of hybrids, much like their U.S. counterparts. Those parallels may bode well for EREV tech in the United States and elsewhere. When the Ramcharger was first announced in November 2023, it looked like an outlier. Since then, Nissan, Hyundai, Mazda, and General Motors’ Buick brand have all announced EREV plans of their own. Volkswagen’s Scout Traveler SUV, due out in 2027, is another entry in the extended-range electric vehicle (EREV) category. Scout Motors In October, Volkswagen unveiled a revival of the Scout, the charming off-roader built by International Harvester between 1960 and 1980. The announcement came with a surprise twist: Both the Scout Traveler SUV and Scout Terra pickup will offer EREV models. (In a nod to the brand’s history, Scout’s EREV system is called “Harvester.”) The hybrid versions will have an estimated 500-mile (800-km) range, easily beating the 350-mile (560-km) range of their all-electric siblings. According to a crowdsourced tracker, about four-fifths of people reserving a Scout are opting for an EREV model. Toyota Gets the Last Laugh For Toyota, the market swing toward hybrid vehicles comes as a major vindication. The world’s largest automaker had faced withering attacks for being slow to add all-electric vehicles to its lineup, focusing more on hybrids as a transitional technology. In 2021, Toyota’s decision to redesign its Sienna minivan exclusively as a hybrid seemed risky. Its decision to do the same with the 2025 Camry, America’s best-selling sedan, seemed riskier still. Now Toyota and its luxury brand, Lexus, control nearly 60 percent of the hybrid market in North America.
Toyota was criticized for sticking with hybrid cars like the Prius, but it now controls nearly 60 percent of the North American hybrid market. Toyota Although Toyota hasn’t yet announced any plans for an EREV, the latest Toyota Prius shows the fruits of five generations and nearly 30 years of hybrid R&D. A 2024 Prius starts at around $29,000, with up to 196 horsepower (146 kW) from its 2.0-liter Atkinson-cycle engine and tiny lithium-ion battery—enough to go 0-97 km/h in a little over 7 seconds. The larger 2025 Camry hybrid is rated at up to 48 miles per gallon (20 kilometers per liter). Market analysts expect the Toyota RAV4, the most-popular SUV in the United States, to go hybrid-only around 2026. David Christ, Toyota’s North American general manager, indicated that “the company is not opposed” to eventually converting all of its internal-combustion models to hybrids. Meanwhile, GM, Ford, Honda, and other brands are rapidly introducing more hybrids as well. Stellantis is offering plug-in models from Jeep, Chrysler, and Dodge in addition to the EREV Ramcharger. Even the world’s most rarefied sports-car brands are adopting hybrid technology because of its potential to improve performance significantly—and to do so without reducing fuel economy. [For more on high-performance hybrids, see "A Hybrid Car That's Also a Supercar".] The electricity-or-nothing crowd may regard hybrids as a compromise technology that continues to prop up demand for fossil fuels. But by meeting EV-skeptical customers halfway, models that run on both batteries and gasoline could ultimately convert more people to electrification, hasten the extinction of internal-combustion dinosaurs, and make a meaningful dent in carbon emissions.

  • A Hybrid Car That’s Also a Supercar
    by Lawrence Ulrich on 6. Januara 2025. at 13:00

Aside from having four wheels, it’s hard to see what a US $30,000 Toyota Camry has in common with a $3 million Ferrari F80. But these market bookends are examples of an under-the-radar tech revolution. From budget transportation to hypercars, every category of internal-combustion car is now harnessing hybrid tech. In “As EV Sales Stall, Plug-In Hybrids Get a Reboot,” I describe the vanguard of this new hybrid boom: extended-range EVs like the 2025 Ram 1500 Ramcharger, which boasts a range of more than 1,000 kilometers. The world’s leading performance brands are also embracing hybrid EV tech—not merely to cut emissions or boost efficiency but because of the instant-on, highly controllable torque produced by electric motors. Hybridized models from BMW, Corvette, Ferrari, and Porsche are aimed at driving enthusiasts who have been notoriously resistant to electric cars. Sam Fiorani, vice-president of global vehicle forecasting for AutoForecast Solutions, predicts that “nearly all light-duty internal-combustion engines are likely to be hybridized in one form or another over the next decade.” Even mainstream electrified models, Fiorani notes, routinely generate acceleration times that were once limited to exotic machines. “The performance offered by electric motors cannot be accomplished by gas-powered engines alone without impacting emissions,” Fiorani says. “The high-end brands will need to make the leap that only an electric powertrain can practically provide.” That leap is well underway, as I experienced firsthand during test drives of the BMW M5, Corvette E-Ray, and Ferrari 296 GTB. These performance hybrids outperform their internal-combustion-only equivalents in almost every way. Most incorporate all-wheel drive, along with torque vectoring, energy harvesting, and other engineering tricks that are possible with the inclusion of electric motors. 2025 BMW M5: The Heavyweight Hybrid The BMW M5 sedan is a literal heavyweight, tipping the scales at 2,435 kilograms. BMW The 2025 BMW M5 sedan adds plug-in hybrid power to one of the company’s iconic models. A twin-turbo, 4.4-liter V-8 engine pairs with a fifth-generation BMW eMotor and a 14.8-kilowatt-hour battery. The M5 can cruise silently on battery power for 69 km (43 miles). The biggest downside is the car’s crushing curb weight—up to 2,500 kilograms (5,500 pounds)—and poor fuel economy once its electric range is spent. The upside is 527 kilowatts (717 horsepower) of Teutonic aggression, which I experienced from Munich to the Black Forest, making Autobahn sprints at up to 280 kph (174 mph). Ferrari 296 GTB and F80: Top of the Hybrid Food Chain Although the Ferrari 296 GTB is a plug-in hybrid, its goal is high performance, not high gas mileage. Ferrari Ferrari’s swoopy 296 GTB is a plug-in hybrid with a 122-kW electric motor sandwiched between a 3.0-liter V-6 and an F1 automated gearbox, producing a total of 602 kilowatts (819 horsepower). The 296 GTB can cover just 25 km on electricity alone, but that could be enough to pass through European low-emission zones, where internal-combustion cars may eventually be banned. Of course, the 296 GTB’s main goal is high performance, not high gas mileage. A digital brake-by-wire system makes it Ferrari’s shortest-stopping production car, and the brakes regenerate enough energy that I was able to recharge the 7.5-kWh battery on the fly in roughly 10 minutes of driving.
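That regeneration anecdote is easy to sanity-check with the article’s own numbers. The arithmetic below is a rough sketch that assumes the full 7.5-kWh pack is refilled from empty, the most generous reading of the claim; the real recovered energy would be somewhat less.

```python
# Back-of-the-envelope check on the 296 GTB regeneration anecdote above.
pack_kwh = 7.5            # battery capacity quoted in the article
recharge_hours = 10 / 60  # roughly 10 minutes of driving

avg_regen_kw = pack_kwh / recharge_hours
print(f"Implied average regeneration power: {avg_regen_kw:.0f} kW")  # ~45 kW
```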
Despite its modest V-6 engine, the 296 GTB turns faster laps around Ferrari’s Fiorano test circuit than any V-8 model in company history. The Ferrari weighs in at 1,467 kilograms (3,234 pounds), unusually svelte for a hybrid, which aids its sharp handling. At the top of the hybrid food chain is Ferrari’s F80, a hypercar inspired by Formula 1 racers. It pairs a V-6 with five electric motors—two in turbochargers, three for propulsion—for a total of 882 kW (1,200 horsepower). The two electric motors driving the front wheels allow for independent torque vectoring. Only 799 of the F80s will be built, but those numbers do not capture the cultural impact of harnessing hybrid tech in one of the world’s most exclusive sports cars. Porsche 911 GTS T-Hybrid: A First for Porsche The Porsche 911 now has its first electrified design. The new Porsche 911 GTS T-Hybrid keeps the model’s classic flat-six, rear-engine layout but adds a 40-kW electric motor, for a combined 391 kW (532 horsepower). Another 20-kW motor drives a single electric turbocharger, which has much less lag and wasted heat than mechanical turbochargers do. Porsche’s 911 GTS T-Hybrid is the first electrified version of the iconic 911. Porsche The 911 GTS T-Hybrid’s 400-volt system quickly spools that turbo up to 120,000 rpm; peak turbo boost arrives in less than one second, versus more than three seconds before. Corvette E-Ray: An Affordable Hybrid Supercar The Porsche 911’s main rival, the Corvette, is likewise coming out with a hybrid EV. The Corvette E-Ray, which starts at $108,595, is intended to make supercar tech affordable to a broader clientele. The eighth-generation Corvette was designed with an aluminum tunnel along its spine to accommodate optional hybrid power. Buy the E-Ray version, and that tunnel is stuffed with 80 pouch-style, nickel cobalt manganese Ultium battery cells that augment a V-8 engine. The small, 1.9-kWh battery pack is designed for rapid charge and discharge: It can spit out 525 amps in short bursts, sending up to 119 kW (160 horsepower) to an electrified front axle. Hybrids like the Corvette E-Ray should appeal to purists who’ve thus far resisted all-electric cars. Chevrolet History’s first all-wheel-drive Corvette is also the fastest in a straight line, with a computer-controlled 2.5-second launch to 97 kilometers per hour (60 miles per hour). No matter how hard I drove the E-Ray in the Berkshires of Massachusetts, I couldn’t knock its battery below about 60 percent full. Press the Charge+ button, and the Corvette uses energy recapture to fill its battery within 5 to 6 kilometers of driving. Battery and engine together produce a hefty 482 kW (655 horsepower), yet I got 25 miles per gallon during gentle highway driving, on par with lesser-powered Corvettes. Even more than other customers, sports-car buyers seem resistant to going full-EV. Aside from a handful of seven-figure hypercars, there are currently no electric two-seaters for sale anywhere in the world. Tadge Juechter, Corvette’s recently retired executive chief engineer, notes that many enthusiasts are wedded to the sound and sensation of gasoline engines, and are leery of the added weight and plummeting range of EVs driven at high velocity. That resistance doesn’t seem to extend to hybrids, however. The Corvette E-Ray, Juechter says, was specifically designed to meet those purists halfway, and “prove they have nothing to fear from electrification.”
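To see why that small E-Ray pack has to be engineered for rapid charge and discharge, a quick calculation using only the figures quoted above helps; the implied discharge rate and pack voltage are illustrative inferences from those numbers, not published specifications.

```python
# Rough numbers for the Corvette E-Ray battery described above.
pack_kwh = 1.9          # battery energy quoted in the article
peak_power_kw = 119.0   # power sent to the front axle
peak_current_a = 525.0  # short-burst current quoted in the article

c_rate = peak_power_kw / pack_kwh                    # discharge rate relative to capacity
implied_voltage = peak_power_kw * 1000 / peak_current_a

print(f"Peak discharge rate: ~{c_rate:.0f}C")                     # ~63C, very high for an EV pack
print(f"Implied pack voltage at peak: ~{implied_voltage:.0f} V")  # ~227 V from these figures
```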

  • SwitchBot Introduces Modular Mobile Home Robot
    by Evan Ackerman on 5. Januara 2025. at 13:00

Earlier this year, we reviewed the SwitchBot S10, a vacuuming and wet mopping robot that uses a water-integrated docking system to autonomously manage both clean and dirty water for you. It’s a pretty clever solution, and we appreciated that SwitchBot was willing to try something a little different. At CES this week, SwitchBot introduced the K20+ Pro, a little autonomous vacuum that can integrate with a bunch of different accessories by pulling them around on a backpack cart of sorts. The K20+ Pro is SwitchBot’s latest effort to explore what’s possible with mobile home robots. SwitchBot’s small vacuum can transport different payloads on top. SwitchBot What we’re looking at here is a “mini” robotic vacuum (it’s about 25 centimeters in diameter) that does everything a robotic vacuum does nowadays: It uses lidar to make a map of your house so that you can direct it where to go, it’s got a dock to empty itself and recharge, and so on. The mini robotic vacuum is attached to a wheeled platform that SwitchBot calls the “FusionPlatform,” which sits on top of the robot like a hat. The vacuum docks to this platform, and then the platform will go wherever the robot goes. This entire system (robot, dock, and platform) is the “K20+ Pro multitasking household robot.” SwitchBot refers to the K20+ Pro as a “smart delivery assistant,” because you can put stuff on the FusionPlatform and the K20+ Pro will move that stuff around your house for you. This really doesn’t do it justice, though, because the platform is much more than just a passive mobile cart. It also can provide power to a bunch of different accessories, all of which benefit from autonomous mobility: The SwitchBot can carry a variety of payloads, including custom payloads. SwitchBot From left to right, you’re looking at an air circulation fan, a tablet stand, a vacuum and charging dock and an air purifier and security camera (and a stick vacuum for some reason), and lastly just the air purifier and security setup. You can also add and remove different bits; if you want the fan along with the security camera, for example, just plop the security camera down on the platform base in front of the fan and you’re good to go. This basic concept is somewhat similar to Amazon’s Proteus robot, in the sense that you can have one smart powered base that moves around a bunch of less smart and unpowered payloads by driving underneath them and then carrying them around. But SwitchBot’s payloads aren’t just passive cargo, and the base can provide them with a useful amount of power. A power port allows you to develop your own payloads for the robot. SwitchBot SwitchBot is actively encouraging users “to create, adapt, and personalize the robot for a wide variety of innovative applications,” which may include “3D-printed components [or] third-party devices with multiple power ports for speakers, car fridges, or even UV sterilization lamps,” according to the press release. The maximum payload is only 8 kilograms, though, so don’t get too crazy. Several SwitchBots can make bath time much more enjoyable. SwitchBot What we all want to know is when someone will put an arm on this thing, and SwitchBot is of course already working on this: SwitchBot’s mobile manipulator is still in the lab stage. SwitchBot The arm is still “in the lab stage,” SwitchBot says, which I’m guessing means that the hardware is functional but that getting it to reliably do useful stuff with the arm is still a work in progress.
But that’s okay—getting an arm to reliably do useful stuff is a work in progress for all of robotics, pretty much. And if SwitchBot can manage to produce an affordable mobile manipulation platform for consumers that even sort of works, that’ll be very impressive.

  • CES 2025 Preview: Needleless Injections, E-Skis, and More
    by Gwendolyn Rak on 4. Januara 2025. at 12:00

This weekend, I’m on my way to Las Vegas to cover this year’s Consumer Electronics Show. I’ve scoured the CES schedule and lists of exhibitors in preparation for the event, where I hope to find fascinating new tech. After all, some prep is required given the size of the show: CES spans 12 venues and more than 2.5 million square feet of exhibit space—a good opportunity to test out devices that will be on display, like these shoe attachments that track muscle load for athletes (and journalists running between demos), or an exoskeleton to help out on hikes through the Mojave Desert. Of course, AI will continue to show up in every device you might imagine it in, and many you wouldn’t. This year, there will be AI-enabled vehicle sensors and PCs, as well as spice dispensers, litter boxes, and trash cans. With AI systems for baby care and better aging, the applications practically range from cradle to grave. I’m also looking forward to discovering technology that could change the way we interact with our devices, such as new displays in our personal vehicles and smart eyewear to compete with Ray-Ban Meta glasses. Hidden among the big names showcasing their latest tech, startups and smaller companies will be exhibiting products that could become the next big thing, and the innovative engineering behind them. Here are a few of the gadgets and gizmos I’m planning to see in person this week. Needle-Free Injections Imagine a world in which you could get a flu shot—or any injection—without getting jabbed by a needle. That’s what Dutch company FlowBeams aims to create with its device, which injects a thin jet of liquid directly into the skin. With a radius of 25 micrometers, the jet measures about one-tenth the size of a 25-gauge needle often used for vaccines. Personally, I’ve dealt with my fair share of needles from living with type 1 diabetes for nearly two decades, so this definitely caught my eye. Delivering insulin is, in fact, one of the medical applications the FlowBeams team imagines the tech could eventually be used for. But healthcare isn’t the only potential use. It could also become a new, supposedly painless way to get cosmetic fillers or a tattoo. Electric Skis to Help With Hills Skiing may initially seem like the recreational activity least in need of a motorized boost—gravity is pretty reliable on its own. But if you, like me, actually prefer cross-country skiing, it’s an intriguing idea. Now being brought to life by a Swiss startup, E-Skimo was created for ski mountaineering (a.k.a. “skimo”), a type of backcountry skiing that involves climbing up a mountain to then speed back down. The battery-powered, detachable device uses a belt of rubber tread to help skiers get to higher peaks in less time. Unfortunately, Vegas will be a bit too balmy for live demos. A Fitbit for Fido—and for Bessie Nearly any accessory you own today—watches, rings, jewelry, or glasses—can be replaced by a wearable tech alternative. But what about your dog? Now, we can extend our obsession with health metrics to our pets with the next generation of smart collars from companies like Queva, which is debuting a collar that grades your dog’s health on a 100-point scale. While activity-tracking collars have been on the market for several years, these and other devices, like smart pet flaps, are making our pets more high-tech than ever. And the same is true for livestock: The first wearable device for tracking a cow’s vitals will also be at CES this year.
While not exactly a consumer device, it’s a fascinating find nonetheless. Real-Time Translation Douglas Adams fans, rejoice: Inspired by the Babel fish from The Hitchhiker’s Guide to the Galaxy, Timekettle’s earbuds make (nearly) real-time translation possible. The company’s latest version operates with a new, proprietary operating system to offer two-way translation during phone or video calls on any platform. The US $449 open-ear buds translate between more than 40 languages and 93 accents, albeit with a 3 to 5 second delay. “Hormometer” to Subdue Stress Ironically, everybody seems stressed out about cortisol, the hormone that regulates your body’s stress response. To make hormone testing more accessible, Eli Health has created a device, dubbed the “Hormometer,” which detects either cortisol or progesterone levels from a quick saliva sample. After 20 minutes, the user scans the tester with a smartphone camera and gets results. At about $8 per test, each one is much less expensive than other at-home or lab tests. However, the company functions as a subscription service, starting at about $65 per month with a 12-month commitment. AR Binoculars to Seamlessly ID the Natural World I have a confession to make: For someone who once considered a career in astronomy, I can identify embarrassingly few constellations. Alas, after Orion and the Big Dipper, I have trouble finding many of these patterns in the night sky. Stargazing apps help, but looking back and forth between a screen and the sky tends to ruin the moment. Unistellar’s Envision smart binoculars, however, use augmented reality to map the stars, tag comets, and label nebulae directly in your line of sight. During the day, they can identify hiking trails or tell you the altitude of a summit on the horizon. When it comes to identifying the best technology on the horizon, though, leave that job to IEEE Spectrum.

  • IEEE Young Professionals Talked Sustainability Tech at Climate Week NYC
    by Chinmay Tompe on 3. Januara 2025. at 19:00

    The IEEE Young Professionals Climate and Sustainability Task Force focuses on empowering emerging leaders to contribute to sustainable technology and climate action, fostering engagement and leading initiatives that address climate change–related challenges and potential solutions. Since its launch in 2023, the CSTF has been engaging them in the conversation of how to get involved in the climate and sustainability sectors. The group held a panel session during last year’s Climate Week NYC, which ran from 22 to 29 September to coincide with the U.N. Summit of the Future. Climate Week NYC is the largest annual climate event, featuring more than 600 activities throughout New York City. It brings together leaders from the business sector, government, and private organizations to promote climate action and innovation, highlighting the urgent need for transformative change. The U.N. summit, held on 22 and 23 September, aimed to improve global governance and establish a “pact for the future” focusing on the climate crisis and sustainable development. The IEEE panel brought together climate-change experts from organizations and government agencies worldwide—including IEEE, the Global Renewables Alliance, and the SDG7 Youth Constituency—to highlight the intersection of technology, policy, and citizen engagement. Participants from 30 countries attended the panel session. The event underscored IEEE’s commitment to fostering technological solutions for climate challenges while emphasizing the crucial role of young professionals in driving innovation and change. As the world moves toward critical climate deadlines, the dialogue demonstrated that success is likely to require a combination of technical expertise, policy understanding, and inclusive participation from all stakeholders. The panel was moderated by IEEE Member Sajith Wijesuriya, chair of the task force, and IEEE Senior Member Sukanya S. Meher, the group’s communications lead and one of the authors of this article. The moderators guided the discussion through key topics such as organizational collaboration, youth engagement, skill development, and technological advancements. The panel also highlighted why effective climate solutions must combine technical innovation with inclusive policymaking, ensuring the transition to a sustainable future leaves no community behind. Engaging youth in mitigating climate change The panel featured young professionals who emphasized the importance of engaging the next generation of engineers, climate advocates, and students in the climate-action movement. “Young people, especially women living in [rural] coastal communities, are at the front lines of the climate crisis,” said Grace Young, the strategy and events manager at nonprofit Student Energy, based in Vancouver. Women and girls are disproportionately impacted by climate change because “they make up the majority of the world’s poor, who are highly dependent on local natural resources for their livelihood,” according to the United Nations. Women and girls are often responsible for securing food, water, and firewood for their families, the U.N. says, and during times of drought and erratic rainfall, it takes more time and work to secure income and resources. That can expose women and girls to increased risks of gender-based violence, as climate change exacerbates existing conflicts, inequalities, and vulnerabilities, according to the organization. 
Climate advocates, policymakers, and stakeholders “must ensure that they [women] have a seat at the table,” Young said. One way to do that is to implement energy education programs in preuniversity schools. “Young people must be heard and actively involved in shaping solutions,” said Manar Elkebir, founder of EcoWave, a Tunisian youth-led organization that focuses on mobilizing young people around environmental issues. During the panel session, Elkebir shared her experience collaborating with IRENA—a global intergovernmental group—and the Italian government to implement energy education programs in Tunisian schools. She also highlighted the importance of creating inclusive, nonintimidating spaces for students to engage in discussions about the transition to cleaner energy and other climate-related initiatives. Young professionals “are not just the leaders of tomorrow; we are the changemakers of today,” she said. Another group that is increasing its focus on youth engagement and empowerment is the World Meteorological Organization, headquartered in Geneva. The WMO’s Youth Climate Action initiative, for example, lets young people participate in policymaking and educational programs to deepen their understanding of climate science and advocacy. Such initiatives recognize that the next generation of leaders, scientists, and innovators will be generating transformative changes, and they need to be equipped with knowledge and tools, said panelist Ko Barrett, WMO deputy secretary general. Other discussions focused on the importance of engaging young professionals in the development and implementation of climate change technology. There is an abundance of career opportunities in the field, particularly in climate data analytics, said Bala Prasanna, IEEE Region 1 director. “Both leadership skills and multidisciplinary learning are needed to stay relevant in the evolving climate and sustainability sectors,” Prasanna said. Although “climate change represents humanity’s greatest threat,” technology-driven solutions were notably underrepresented at climate conferences such as COP27, said Saifur Rahman, 2023 IEEE president. Rahman urged young engineers to take ownership of the problem, and he directed them to resources such as IEEE’s climate change website, which offers information on practical solutions. “Technology practitioners will be at the forefront of developing public-private partnerships that integrate cutting-edge technologies with national energy strategies,” said A. Anastasia Kefalidou, acting chief of the IRENA office in New York. “The IRENA NewGen Renewable Energy Accelerator plays a key role in nurturing a new generation of technology practitioners, who can lead innovation and digital transformation in the energy sector.” The accelerator program provides budding entrepreneurs ages 18 to 35 with mentors and resources to scale projects focused on energy technologies and climate adaptation. “The dialogue hosted by IEEE Young Professionals during this incredible Climate Week event is helping to bridge the gap between emerging innovators and institutional efforts,” Young added, “providing a platform for fresh perspectives on renewable energy and climate solutions.” Focus on global partnerships Fostering global partnerships was on the panelists’ minds.
Collaboration among governments, private companies, and international organizations could accelerate clean energy transitions, particularly in emerging economies, said Ana Rovzar, director of policy and public affairs at the Global Renewables Alliance in Brussels. She highlighted the need for tailored approaches to address regional challenges in climate resilience and energy access. Environmental journalist Ciara Kavanagh shared how she has been inspired by genuine intersectoral discussions among technical experts, policymakers, communicators, and leaders. The communications specialist at the U.N. Environment Programme in New York discussed how hearing from technical experts can help communicators like her understand renewable technologies. “If the myriad marvelous ideas coming out of the lab aren’t communicated widely and effectively, we all risk falling short of real impact,” Kavanagh said. She called on fellow young professionals to work together to show the world what a cleaner, greener future powered by renewable energy could look like, and to “ensure the power to build that future is in the hands and homes of those who need it, regardless of where they live.” At COP28, and in follow-up discussions at COP29 and the G20, world governments outlined ambitious global goals known as the UAE Consensus. Among the goals are tripling renewable energy capacity and doubling the rate of energy-efficiency improvement by 2030. Kefalidou highlighted IRENA’s commitment to tracking the targets by analyzing global technology trends while emphasizing the development of next-generation solutions, including advanced solar PV systems, offshore wind farms, and smart-grid technologies. IRENA’s tracking shows that despite rapid growth in renewable energy, countries’ current plans are projected to achieve only 50 percent of the UAE Consensus’s target capacity by the deadline. IRENA regularly publishes detailed progress reports including renewable capacity statistics and the World Energy Transitions Outlook. Not even 20 percent of the U.N.’s Sustainable Development Goals are on track to reach their targets, and more than 40 percent of governments and companies lack net-zero targets, said Shreenithi Lakshmi Narasimhan. In a call to action, the CSTF member and vice chair of the New York IEEE local group emphasized the need for accelerated climate action. “The tools young professionals need to succeed are already in our hands,” Narasimhan said. “Now we must invest strategically, overcome geopolitical barriers, and drive toward real solutions. The stakes couldn’t be higher.” Josh Oxby, energy advisor for the U.K.’s Parliamentary Office of Science and Technology, emphasized the importance of empowering young changemakers and forming collaborations among private, public, and third-sector organizations to develop a workforce to assist with the energy transition. Third-sector organizations include charities, community groups, and cooperative societies. “Climate Week NYC has highlighted the importance of taking a step back to evaluate the conventional scrutiny of—and engagement with—policy and governance processes,” Oxby said. “Young professionals are the changemakers of today. Their way of forward thinking and reapproaching frameworks for the inclusivity of future generations is a testament to their dynamic and reflective mindset.” Tech-driven strategies to address the climate crisis CSTF member Chinmay Tompe highlighted the potential of breakthrough technologies such as quantum computing and simulation in addressing climate change and driving the energy transition.
“Although we have yet to achieve practical quantum utility, recent advancements in the field offer promising opportunities,” Tompe said. “Simulating natural processes, like molecular and particle fluid dynamics, can be achieved using quantum systems. These technologies could pave the way for cleaner energy solutions, including optimized reactor designs, enhanced energy storage systems, and more efficient energy distribution networks. However, realizing this potential requires proactive efforts from policymakers to support innovation and implementation.” Nuclear energy emerged as a crucial component of the clean energy discussion. Dinara Ermakova advocated for the role nuclear technology can play in achieving net-zero emissions goals, particularly via small modular reactors. Ermakova is an innovation chair for the International Youth Nuclear Congress in Berkeley, Calif. IYNC is a nonprofit that connects students and young professionals worldwide involved in nuclear science and technology. Marisa Zalabak, founder and CEO of Open Channel Culture, highlighted the ethical dilemmas of technological solutions, specifically those regarding artificial intelligence. “AI is not a magic bullet,” Zalabak cautioned, “but when governed ethically and responsibly, it can become a powerful tool for driving climate solutions while safeguarding human rights and planetary health.” She emphasized the importance of regenerative design systems and transdisciplinary collaboration in creating sustainable solutions: “This event reinforced the importance of human collaboration across sectors and the power of youth-driven innovation in accelerating climate action dedicated to human and environmental flourishing for current and future generations.” Implications of climate tech and policy IEEE CSTF showed its commitment to sustainability throughout the event. Panelists were presented with customized block-printed shawls made with repurposed fabric. The initiative was led by CSTF member Kalyani Matey and sourced from Divyang Creations, a social enterprise in Latur, India, employing people with disabilities. Leftover refreshments were donated to New York City food banks. After the panel session concluded, Rahman said participating in it was fulfilling. He commended the young professionals for their “enthusiasm and commitment to help develop a road map to implement some of the SDG goals.” The outcomes of the discussions were presented at the U.N. Climate Change Conference, which was held in Baku, Azerbaijan, from 11 to 22 November.

  • Video Friday: Sleepy Robot Baby
    by Evan Ackerman on 3. Januara 2025. at 17:30

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN RSS 2025: 21–25 June 2025, LOS ANGELES IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL Enjoy today’s videos! It’s me. But we can all relate to this child android robot struggling to stay awake. [ Osaka University ] For 2025, the RoboCup SPL plans an interesting new technical challenge: Kicking a rolling ball! The velocity and start position of the ball can vary and the goal is to kick the ball straight and far. In this video, we show our results from our first testing session. [ Team B-Human ] When you think of a prosthetic hand you probably think of something similar to Luke Skywalker’s robotic hand from Star Wars, or even Furiosa’s multi-fingered claw from Mad Max. The reality is a far cry from these fictional hands: upper limb prostheses are generally very limited in what they can do, and how we can control them to do it. In this project, we investigate non-humanoid prosthetic hand design, exploring a new ideology for the design of upper limb prostheses that encourages alternative approaches to prosthetic hands. In this wider, more open design space, can we surpass humanoid prosthetic hands? [ Imperial College London ] Thanks, Digby! A novel three-dimensional (3D) Minimally Actuated Serial Robot (MASR), actuated by a robotic motor. The robotic motor is composed of a mobility motor (to advance along the links) and an actuation motor [to] move the joints. [ Zarrouk Lab ] This year, Franka Robotics team hit the road, the skies and the digital space to share ideas, showcase our cutting-edge technology, and connect with the brightest minds in robotics across the globe. Here is 2024 video recap, capturing the events and collaborations that made this year unforgettable! [ Franka Robotics ] Aldebaran has sold an astonishing number of robots this year. [ Aldebaran ] The advancement of modern robotics starts at its foundation: the gearboxes. Ailos aims to define how these industries operate with increased precision, efficiency and versatility. By innovating gearbox technology across diverse fields, Ailos is catalyzing the transition towards the next wave of automation, productivity and agility. [ Ailos Robotics ] Many existing obstacle avoidance algorithms overlook the crucial balance between safety and agility, especially in environments of varying complexity. In our study, we introduce an obstacle avoidance pipeline based on reinforcement learning. This pipeline enables drones to adapt their flying speed according to the environmental complexity. After minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance. [ MAVRL via Github ] Robot-assisted feeding promises to empower people with motor impairments to feed themselves. However, research often focuses on specific system subcomponents and thus evaluates them in controlled settings. 
This leaves a gap in developing and evaluating an end-to-end system that feeds users entire meals in out-of-lab settings. We present such a system, collaboratively developed with community researchers. [ Personal Robotics Lab ] A drone’s eye-view reminder that fireworks explode in 3D. [ Team BlackSheep ]

  • This Year, RISC-V Laptops Really Arrive
    by Matthew S. Smith on 3. Januara 2025. at 14:00

Buried in the inner workings of your laptop is a secret blueprint, dictating the set of instructions the computer can execute and serving as the interface between hardware and software. The instructions are immutable and hidden behind proprietary technology. But starting in 2025, you could buy a new and improved laptop whose secrets are known to all. That laptop will be fully customizable, with both hardware and software that you’ll be able to modify to fit your needs. This article is part of our special report Top Tech 2025. RISC-V is an open-source instruction set architecture (ISA) poised to make personal computing more, well, personal. Though RISC-V is still early in its life cycle, it’s now possible to buy fully functional computers with this technology inside—a key step toward providing a viable alternative to x86 and Arm in mainstream consumer electronics. “If we look at a couple of generations down the [software] stack, we’re starting to see a line of sight to consumer-ready RISC-V in something like a laptop, or even a phone,” said Nirav Patel, CEO of laptop maker Framework. Patel’s company plans to release a laptop that can support a RISC-V mainboard in 2025. Though still intended for early adopters and developers, it will be the most accessible and polished RISC-V laptop yet, and it will ship to users with the same look and feel as the Framework laptops that use x86 chips. RISC-V Is Coming to a Laptop Near You An ISA is a rulebook that defines the set of valid instructions programs can execute on a processor. Like other ISAs, RISC-V includes dozens of instructions, such as loading data into memory or floating-point arithmetic operations. But RISC-V is open source, which sets it apart from closed ISAs like x86 and Arm. It means anyone can use RISC-V without a license fee. It also makes RISC-V hardware easy to customize, because there are no license restrictions on what can or can’t be modified. Researchers at the University of California, Berkeley’s Parallel Computing Laboratory began developing the RISC-V ISA in 2010 based on established reduced instruction set computer (RISC) principles, and it’s already in use by companies looking to design inexpensive, specialized chips: Alibaba put RISC-V to work in a chip development platform for edge computing, and Western Digital used RISC-V for storage controllers. Now, a small group of companies and enthusiasts are laying the groundwork for bringing RISC-V to mainstream consumer devices. Among these pioneers is software engineer Yuning Liang, who found himself drawn to the idea while sidelined by COVID lockdowns in Shenzhen, China. Unable to continue previous work, “I had to ask, what can I do here?” says Liang. “Mark Himelstein, the former CTO of RISC-V [International], mentioned we should do a laptop on a 12-nanometer RISC-V test chip.” Because the 12-nm node is an older production process than CPUs use today, each chip costs less. DeepComputing released the first RISC-V laptop, Roma, in 2023, followed by the DC-Roma II a year later. DeepComputing The project had a slow start amid COVID-related supply-chain issues but eventually led to the 2023 release of the world’s first RISC-V laptop, the Roma, by DeepComputing—a Hong Kong–based company Liang founded the prior year.
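To make the “rulebook” idea concrete, here is a toy three-instruction register machine written in Python. The instruction names loosely echo RISC-V mnemonics (li, add, sw), but this is only an illustration of what an ISA specifies—operations, operands, and their effects on registers and memory—not actual RISC-V code or any vendor’s toolchain.

```python
# A toy instruction set: each case defines what a valid instruction does.
# Real RISC-V specifies dozens of such operations (loads, stores, arithmetic,
# branches) plus optional extensions; this sketch only borrows the flavor.

def execute(program, memory):
    regs = [0] * 8                      # eight general-purpose registers
    for op, *args in program:
        if op == "li":                  # load immediate: li rd, value
            rd, value = args
            regs[rd] = value
        elif op == "add":               # add: add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "sw":                # store word: sw rs, address
            rs, addr = args
            memory[addr] = regs[rs]
        else:
            raise ValueError(f"illegal instruction: {op}")
    return regs, memory

program = [
    ("li", 1, 40),      # x1 = 40
    ("li", 2, 2),       # x2 = 2
    ("add", 3, 1, 2),   # x3 = x1 + x2
    ("sw", 3, 0),       # memory[0] = x3
]
regs, mem = execute(program, memory={})
print(mem[0])  # 42
```

A real ISA also pins down instruction encodings, register widths, memory ordering, and privileged behavior; the point here is simply that the rulebook, not any particular piece of silicon, defines what a valid program can do.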
DeepComputing is now working in partnership with Framework, a laptop maker founded in 2019 with the mission to “fix consumer electronics,” as it’s put on the company’s website. Framework sells modular, user-repairable laptops that owners can keep indefinitely, upgrading parts (including those that can’t usually be replaced, like the mainboard and display) over time. “The Framework laptop mainboard is a place for board developers to come in and create their own,” says Patel. The company hopes its laptops can accelerate the adoption of open-source hardware by offering a platform where board makers can “deliver system-level solutions,” Patel adds, without the need to design their own laptop in-house. Closing the Price and Performance Gap The DeepComputing DC-Roma II laptop marked a major milestone for open source computing, and not just because it shipped with Ubuntu installed. It was the first RISC-V laptop to receive widespread media coverage, especially on YouTube, where video reviews of the DC-Roma II (as well as other RISC-V single-board computers, such as the Milk-V Pioneer and Lichee Pi 4A) collectively received more than a million views. Even so, Liang was quick to acknowledge a flaw found by many online reviewers: The RISC-V chip in the DC-Roma II performs well behind x86 and Arm-powered alternatives. DeepComputing wants to tackle that in 2025 with the DC-Roma III, according to Liang. In the coming year, “performance will be much better. It’ll still be on 12-nanometer [processors], but we’re going to upgrade the CPU’s performance to be more like an Arm Cortex-A76,” says Liang. The Cortex-A76 is a key architecture to benchmark RISC-V against, as it’s used by chips in high-volume single-board computers like the Raspberry Pi 5. Liang isn’t alone in his dream of high-performance RISC-V chips. Ventana, founded in 2018, is designing high-performance data-center chips that rely on the open-source ISA. Balaji Baktha, Ventana’s founder and CEO, is adamant that RISC-V chips will go toe-to-toe with x86 and Arm across a variety of products. “There’s nothing that is ISA specific that determines if you can make something high performance, or not,” he says. “It’s the implementation of the microarchitecture that matters.” DeepComputing also wants to make RISC-V appealing with lower prices. At about US $600, the DC-Roma II isn’t much more expensive than a midrange Windows laptop like an Acer Aspire or Dell Inspiron, but online reviews note its performance is more in line with that of budget laptops that sell for much less. Liang says that’s due to the laptop’s low production volume: The DC-Roma II was produced in “the low tens of thousands,” according to Liang. DeepComputing hopes to increase production to 100,000 units for the DC-Roma III, he adds. If that pans out, it should make all DeepComputing laptops more competitive with those using x86 and Arm. That’s important to Liang, who sees affordability as synonymous with openness; both lower the barriers for newcomers. “If we can open up even the chip design, then one day, even students at schools and universities can come into class and design their own chips, with open tools,” says Liang. “With openness, you can choose to build things yourself from zero.”

  • Reversible Computing Escapes the Lab in 2025
    by Dina Genkina on 2 January 2025 at 14:00

    Michael Frank has spent his career as an academic researcher, working for over three decades in a very peculiar niche of computer engineering. According to Frank, that peculiar niche’s time has finally come. “I decided earlier this year that it was the right time to try to commercialize this stuff,” Frank says. In July 2024, he left his position as a senior engineering scientist at Sandia National Laboratories to join a startup, U.S.- and U.K.-based Vaire Computing. Frank argues that it’s the right time to bring his life’s work—called reversible computing—out of academia and into the real world because the computing industry is running out of energy. “We keep getting closer and closer to the end of scaling energy efficiency in conventional chips,” Frank says. According to an IEEE semiconductor industry road map report that Frank helped edit, by late in this decade the fundamental energy efficiency of conventional digital logic is going to plateau, and “it’s going to require more unconventional approaches like what we’re pursuing,” he says. This article is part of our special report Top Tech 2025. As Moore’s Law stumbles and its energy-themed cousin Koomey’s Law slows, a new paradigm might be necessary to meet the increasing computing demands of today’s world. According to Frank’s research at Sandia, in Albuquerque, reversible computing may offer up to a 4,000x energy-efficiency gain compared to traditional approaches. “Moore’s Law has kind of collapsed, or it’s really slowed down,” says Erik DeBenedictis, founder of Zettaflops, who isn’t affiliated with Vaire. “Reversible computing is one of just a small number of options for reinvigorating Moore’s Law, or getting some additional improvements in energy efficiency.” Vaire’s first prototype, expected to be fabricated in the first quarter of 2025, is less ambitious: a chip that, for the first time, recovers energy used in an arithmetic circuit. The next chip, projected to hit the market in 2027, will be an energy-saving processor specialized for AI inference. The 4,000x energy-efficiency improvement is on Vaire’s road map but probably 10 or 15 years out. “I feel that the technology has promise,” says Himanshu Thapliyal, associate professor of electrical engineering and computer science at the University of Tennessee, Knoxville, who isn’t affiliated with Vaire. “But there are some challenges also, and hopefully, Vaire Computing will be able to overcome some of the challenges.” What Is Reversible Computing? Intuitively, information may seem like an ephemeral, abstract concept. But in 1961, Rolf Landauer at IBM discovered a surprising fact: Erasing a bit of information in a computer necessarily costs energy, which is lost as heat. It occurred to Landauer that if you were to do computation without erasing any information, or “reversibly,” you could, at least theoretically, compute without using any energy at all. Landauer himself considered the idea impractical. If you were to store every input and intermediate computation result, you would quickly fill up memory with unnecessary data. But Landauer’s successor, IBM’s Charles Bennett, discovered a workaround for this issue. Instead of just storing intermediate results in memory, you could reverse the computation, or “decompute,” once that result was no longer needed. This way, only the original inputs and final result need to be stored. Take a simple example, such as the exclusive-OR, or XOR gate. 
Normally, the gate is not reversible—there are two inputs and only one output, and knowing the output doesn’t give you complete information about what the inputs were. The same computation can be done reversibly by adding an extra output, a copy of one of the original inputs. Then, using the two outputs, the original inputs can be recovered in a decomputation step. A traditional exclusive-OR (XOR) gate is not reversible—you cannot recover the inputs just by knowing the output. Adding an extra output, just a copy of one of the inputs, makes it reversible. Then, the two outputs can be used to “decompute” the XOR gate and recover the inputs and, with them, the energy used in the computation. (A short code sketch of this compute-and-decompute step appears at the end of this article.) The idea kept gaining academic traction, and in the 1990s, several students working under MIT’s Thomas Knight embarked on a series of proof-of-principle demonstrations of reversible computing chips. One of these students was Frank. While these demonstrations showed that reversible computation was possible, the wall-plug power usage was not necessarily reduced: Although power was recovered within the circuit itself, it was subsequently lost within the external power supply. That’s the problem that Vaire set out to solve. Computing Reversibly in CMOS Landauer’s limit gives a theoretical minimum for how much energy information erasure costs, but there is no maximum. Today’s CMOS implementations use more than a thousand times the theoretical minimum energy to erase a bit. That’s mostly because transistors need to maintain high signal energies for reliability, and under normal operation that all gets dissipated as heat. To avoid this problem, many alternative physical implementations of reversible circuits have been considered, including superconducting computers, molecular machines, and even living cells. However, to make reversible computing practical, Vaire’s team is sticking with conventional CMOS techniques. “Reversible computing is disrupting enough as it is,” says Vaire chief technology officer and cofounder Hannah Earley. “We don’t want to disrupt everything else at the same time.” To make CMOS play nicely with reversibility, researchers had to come up with clever ways to recover and recycle this signal energy. “It’s kind of not immediately clear how you make CMOS operate reversibly,” Earley says. The main way to reduce unnecessary heat generation in transistors—operating them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly. This can be done without adding extra compute time, Earley argues, because currently transistor switching times are kept comparatively slow to avoid generating too much heat. So, you could keep the switching time the same and just change the waveform that does the switching, saving energy. However, adiabatic switching does require something to generate the more complex ramping waveforms. It still takes energy to flip a bit from 0 to 1, changing the gate voltage on a transistor from its low to high state. The trick is that, as long as you don’t convert energy to heat but store most of it in the transistor itself, you can recover most of that energy during the decomputation step, where any no-longer-needed computation is reversed. The way to recover that energy, Earley explains, is by embedding the whole circuit into a resonator. A resonator is kind of like a swinging pendulum. 
If there were no friction from the pendulum’s hinge or the surrounding air, the pendulum would swing forever, going up to the same height with each swing. Here, the swing of the pendulum is a rise and fall in voltage powering the circuit. On each upswing, one computational step is performed. On each downswing, a decomputation is performed, recovering the energy. In every real implementation, some amount of energy is still lost with each swing, so the pendulum requires some power to keep it going. But Vaire’s approach paves the way to minimizing that friction. Embedding the circuit in a resonator simultaneously creates the more complex waveforms needed for adiabatic transistor switching and provides the mechanism for recovering the saved energy. The Long Road to Commercial Viability Although the idea of embedding reversible logic inside a resonator has been developed before, no one has yet built one that integrates the resonator on chip with the computing core. Vaire’s team is hard at work on their first version of this chip. The simplest resonator to implement, and the one the team is tackling first, is an inductive-capacitive (LC) resonator, where the role of the capacitor is played by the whole circuit and an on-chip inductor serves to keep the voltage oscillating. The chip Vaire plans to send for fabrication in early 2025 will be a reversible adder embedded in an LC resonator. The team is also working on a chip that will perform the multiply-accumulate operation, the basic computation in most machine learning applications. In the following years, Vaire plans to design the first reversible chip specialized for AI inference. “Some of our early test chips might be lower-end systems, especially power-constrained environments, but not long after that, we’re addressing higher-end markets as well,” Frank says. LC resonators are the most straightforward type to implement in CMOS, but they come with comparatively low quality factors, meaning the voltage pendulum will run with some friction. The Vaire team is also working on integrating a microelectromechanical systems (MEMS) resonator version, which is much more difficult to integrate on chip but promises much higher quality factors (less friction). Earley expects a MEMS-based resonator to eventually provide 99.97 percent friction-free operation. Along the way, the team is designing new reversible logic gate architectures and electronic-design-automation tools for reversible computation. “Most of our challenges will be, I think, in custom manufacturing and hetero-integration in order to combine efficient resonator circuits together with the logic in one integrated product,” Frank says. Earley hopes that these are challenges the company will overcome. “In principle, this allows [us], over the next 10 to 15 years, to get to 4,000x improvement in performance,” she says. “Really it is going to be down to how good a resonator you can get.”
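As a companion to the XOR example above, here is a toy Python sketch of the compute-and-decompute idea: keep a copy of one input alongside the XOR result, and the step can be undone exactly, so no information is erased. It illustrates Bennett's trick on plain bits only; it says nothing about Vaire's circuits or how the energy itself is recovered.

```python
# Toy sketch of the reversible XOR gate described above: keeping a copy of one
# input alongside the XOR result means nothing is thrown away, so the step can
# be reversed ("decomputed") to recover the original inputs.

def xor_forward(a: int, b: int) -> tuple[int, int]:
    """Compute step: outputs (a, a XOR b); no information is erased."""
    return a, a ^ b

def xor_decompute(a: int, s: int) -> tuple[int, int]:
    """Decompute step: XOR is its own inverse, so this recovers (a, b)."""
    return a, a ^ s

for a in (0, 1):
    for b in (0, 1):
        kept, s = xor_forward(a, b)
        assert xor_decompute(kept, s) == (a, b)   # inputs fully recovered
        print(f"inputs ({a},{b}) -> outputs ({kept},{s}) -> recovered {xor_decompute(kept, s)}")
```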

  • Build a Better DIY Seismometer
    by David Schneider on 2 January 2025 at 13:00

    In September of 2023, I wrote in these pages about using a Raspberry Pi–based seismometer—a Raspberry Shake—to record earthquakes. But as time went by, I found the results disappointing. In retrospect, I realize that my creation was struggling to overcome a fundamental hurdle. I live on the tectonically stable U.S. East Coast, so the only earthquakes I could hope to detect would be ones taking place far away. Unfortunately, the signals from distant quakes have relatively low vibrational frequencies, and the compact geophone sensor in a Raspberry Shake is meant for higher frequencies. I had initially considered other sorts of DIY seismometers, and I was put off by how large and ungainly they were. But my disappointment with the Raspberry Shake drove me to construct a seismometer that represents a good compromise: It’s not so large (about 60 centimeters across), and its resonant frequency (about 0.2 hertz) is low enough to make it better at sensing distant earthquakes. My new design is for a horizontal-pendulum seismometer, which contains a pendulum that swings horizontally—or almost so, being inclined just a smidge. Think of a fence gate with its two hinges not quite aligned vertically. It has a stable position in the middle, but when it’s nudged, the restoring force is very weak, so the gate makes slow oscillations back and forth. The backbone of my seismometer is a 60-cm-long aluminum extrusion. Or maybe I should call it the keel, as this seismometer also has what I would describe as a mast, another piece of aluminum extrusion about 25 cm long, attached to the end of the keel and sticking straight up. Beneath the mast and attached to the bottom of the keel is an aluminum cross piece, which prevents the seismometer from toppling over. The pendulum—let’s call it a boom, to stick with my nautical analogies—is a 60-cm-long bar cut from 0.375-inch-square aluminum stock. At one end, I attached a 2-pound lead weight (one intended for a diving belt), using plastic cable ties. To allow the boom to swing without undue friction, I drilled a hole in the unweighted end and inserted the carbide-steel tip of a scribing tool. That sharp tip rests against a shallow dimple in a small steel plate screwed to the mast. To support the boom, I used some shifter cable from a bicycle, attached by looping it through a couple of strategically drilled holes and then locking things down using metal sleeves crimped onto the ends of the cable. Establishing the response of the seismometer to vibrations is the role of the end weight [top left] and damping magnets [top right]. A magnet is also used with a Hall effect sensor [middle right] that is read by a microcontroller [middle left]. Data is stored on a logging board with a real-time clock [bottom]. James Provost I fabricated a few other small physical bits, including leveling feet and a U-shaped bracket to prevent the boom from swinging too far from equilibrium. But the main challenges were how to sense earthquake-induced motions of the boom and how to prevent it from oscillating indefinitely. Most DIY seismometers use a magnet and coil to sense motion as the moving magnet induces a current in the fixed coil. That’s a tricky proposition in a long-period seismometer, because the relative motion of the magnet is so slow that only very faint electrical signals are induced in the coil. One of the more sophisticated designs I saw online called for an LVDT (linear variable differential transformer), but such devices seem hard to come by. 
Instead, I adopted a strategy I hadn’t seen used in any other homebrewed seismometer: employing a Hall-effect magnetometer to sense position. All I needed was a small neodymium magnet attached to the boom and an inexpensive Hall-effect sensor board positioned beneath it. It worked just great. The final challenge was damping. Without that, the pendulum, once excited, would oscillate for too long. My initial solution was to attach to the boom an aluminum vane immersed in a viscous liquid (namely, oil). That worked, but I could just see the messy oil spills coming. So I tacked in the other direction and built a magnetic damper, which works by having the aluminum vane pass through a strong magnetic field. This induces eddy currents in the vane that oppose its motion. To the eye, it looks like the metal is caught in a viscous liquid. The challenge here is making a nice strong magnetic field. For that, I collected all the neodymium magnets I had on hand, kludged together a U-shaped steel frame, and attached the magnets to the frame, mimicking a horseshoe magnet. This worked pretty well, although my seismometer is still somewhat underdamped. Compared with the fussy mechanics, the electronics were a breeze to construct. I used a US $9 data-logging board that was designed to accept an Arduino Nano and that includes both a real-time clock chip and an SD card socket. This allowed me to record the digital output of the Hall sensor at 0.1-second intervals and store the time-stamped data on a microSD card. My homebrew seismometer recorded the trace of an earthquake occurring roughly 1,500 kilometers away, beginning at approximately 17:27 and ending at 17:37. James Provost The first good test came on 10 November 2024, when a magnitude-6.8 earthquake struck just off the coast of Cuba. Consulting the global repository of shared Raspberry Shake data, I could see that units in Florida and South Carolina picked up that quake easily. But ones located farther north, including one close to where I live in North Carolina, did not. Yet my horizontal-pendulum seismometer had no trouble registering that 6.8 earthquake. In fact, when I first looked at my data, I figured the immense excursions must reflect some sort of gross malfunction! But a comparison with the trace of a research-grade seismometer located nearby revealed that the waves arrived in my garage at the very same time. I could even make out a precursor magnitude-5.9 earthquake about an hour before the big one. My new seismometer is not too big and awkward, as many long-period instruments are. Nor is it too small, which would make it less sensitive to far-off seismic signals. In my view, this Goldilocks design is just right.
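A minimal post-processing sketch along these lines is shown below, assuming the logger wrote a CSV file with one time-stamped Hall-sensor reading every 0.1 second; the file name, column layout, and the 0.03-to-0.3-hertz filter band are illustrative assumptions rather than details of this particular build.

```python
# Sketch (not the author's code): read the time-stamped Hall-sensor log from the
# microSD card and band-pass it to emphasize the slow oscillations produced by
# distant earthquakes. File name, column layout, and filter band are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # samples per second (one reading every 0.1 s, as described above)

# Assumed CSV layout: timestamp in column 0, raw Hall-sensor value in column 1.
raw = np.genfromtxt("seismo_log.csv", delimiter=",", usecols=1)

raw = raw - np.mean(raw)                               # remove the sensor's DC offset
b, a = butter(4, [0.03, 0.3], btype="bandpass", fs=FS) # pass the long-period band
filtered = filtfilt(b, a, raw)                         # zero-phase filtering

t = np.arange(len(filtered)) / FS
peak = t[np.argmax(np.abs(filtered))]
print(f"Largest filtered excursion occurs {peak:.1f} s after the start of the log")
```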

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9 February 2022 at 15:31

    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data, is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? 
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
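A minimal sketch of the label-consistency tooling Ng describes might look like the Python below: gather every label each annotator assigned to an item and flag the items where annotators disagree, so that small subset can be reviewed first. The records are invented for illustration, and the snippet is not LandingLens code.

```python
# Minimal sketch of the label-consistency idea: surface the items whose
# annotators disagree so a reviewer can fix that small subset first.
# The example records below are invented for illustration.
from collections import defaultdict

annotations = [
    ("img_001.png", "alice", "scratch"),
    ("img_001.png", "bob",   "scratch"),
    ("img_002.png", "alice", "pit_mark"),
    ("img_002.png", "bob",   "scratch"),    # disagreement -> needs review
    ("img_003.png", "alice", "no_defect"),
    ("img_003.png", "carol", "no_defect"),
]

labels_per_item = defaultdict(set)
for item, _annotator, label in annotations:
    labels_per_item[item].add(label)

inconsistent = sorted(item for item, labels in labels_per_item.items() if len(labels) > 1)
print("Review these items first:", inconsistent)   # -> ['img_002.png']
```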

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8 February 2022 at 14:00

    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Heather Gorr MathWorks Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. 
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
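The surrogate-model workflow Gorr describes can be sketched in a few lines of Python: sample an expensive physics-based simulation at a handful of points, fit a cheap stand-in model to those samples, and run the wide parameter sweep on the stand-in. The quadratic "simulator" below is a placeholder and the polynomial fit is just one possible surrogate; this is an illustration of the idea, not a MathWorks workflow.

```python
# Toy sketch of surrogate (reduced-order) modeling: a few costly samples of an
# expensive simulation are used to fit a cheap surrogate, and the dense
# parameter sweep runs on the surrogate instead of the real solver.
import numpy as np

def expensive_simulation(x):
    # Placeholder for a slow physics-based solver; imagine each call taking minutes.
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

# 1. A handful of costly samples across the design parameter's range.
x_train = np.linspace(0.0, 1.0, 8)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap polynomial surrogate to those samples.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

# 3. Sweep the surrogate densely; this is essentially free compared with the solver.
x_sweep = np.linspace(0.0, 1.0, 10_001)
best = x_sweep[np.argmin(surrogate(x_sweep))]
print(f"Surrogate suggests an optimum near x = {best:.3f}; "
      f"verify with a few real simulations around that point.")
```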

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7 February 2022 at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able to both reduce the size of the qubits and do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. 
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
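A rough sense of why a thin hBN dielectric shrinks the capacitor footprint comes from the parallel-plate formula C = εrε0A/d. The Python below plugs in assumed, illustrative values for the target capacitance, hBN thickness, and dielectric constant (none of them figures from the MIT paper) to compare the required plate size with the roughly 100-by-100-micrometer coplanar plates mentioned above.

```python
# Back-of-the-envelope sketch: for a parallel-plate capacitor C = eps_r*eps_0*A/d,
# so a nanometer-scale dielectric gap needs far less plate area than a coplanar
# layout. The target capacitance, hBN thickness, and dielectric constant below
# are assumed values chosen only for illustration.
import math

EPS0 = 8.854e-12          # F/m, vacuum permittivity
C_TARGET = 100e-15        # F, assumed shunt capacitance for a transmon-like qubit
EPS_R_HBN = 3.5           # assumed out-of-plane relative permittivity of hBN
D = 20e-9                 # m, assumed thickness of the stacked hBN layers

area = C_TARGET * D / (EPS_R_HBN * EPS0)   # required plate area, A = C*d/(eps_r*eps_0)
side_um = math.sqrt(area) * 1e6
print(f"Sandwich-capacitor plate: roughly {side_um:.0f} micrometers on a side,")
print("versus about 100 x 100 micrometers for the coplanar design described above.")
```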
