IEEE News

IEEE Spectrum

  • Balancing Work and Life: An Engineer’s Guide to Fulfillment
    by Naeem Zafar on 8. May 2025. at 18:00

    This article is part of our exclusive career advice series in partnership with the IEEE Technology and Engineering Management Society. Throughout my 40-year career as an electrical engineer and entrepreneur, I’ve often been asked how I achieve a work-life balance. Over time, I’ve come to realize that the question—and the way it’s framed—is inherently flawed. So my response to the inquiry is simple: I seek to live an integrated life where work and personal joy are not in conflict but in harmony. The key is in shifting your mindset: Stop viewing work and life as opposites and start recognizing how they complement each other. The notion of work-life balance suggests that work and life are opposing forces. Balance is seen as an elusive goal. The implication is that success in one area inevitably comes at the expense of the other. But what if the conflict is more imagined than real? Instead of trying to balance two separate entities, the goal should be to integrate them into a cohesive whole. I won’t pretend that I have everything figured out. Life—especially with work, kids, and the everyday chaos of being part of a two-income household—is messy. But I’ve learned that work and life aren’t two things to balance; they’re two sides of the same coin. Rather than compartmentalizing them, I approach them as interconnected parts of a fulfilling journey. Here are tips that have helped me embrace work and life as a unified whole. Embracing chaos Let’s be real, balancing a career with caring for children and handling daily responsibilities is chaotic, especially when both spouses are working. Between shuttling the kids to after-school activities, keeping up with household chores, and managing work deadlines, it can feel impossible to find time for everything. But here’s the thing: Balance doesn’t always come in the form of perfectly blocked time. It can come in small, intentional moments. I’ve learned to make use of the in-between times to my advantage. When I used to attend my child’s swim meets or was waiting for an event to start, for example, I would catch up on work with my iPad. I wasn’t always working, but in those moments where I’d otherwise just be waiting, I got things done. By strategically using downtime, you can keep on top of work while also being present for your family. If you can find creative ways to merge the chaos of life with work demands, you can feel less overwhelmed, even when it all feels like a juggling act. The integrated life mindset When I experience joy or setbacks in my work, I share them with my family. By doing so, I bring them into the ups and downs of my entrepreneurial journey of running five technology companies. My work isn’t a separate part of my life, and having a conversation about it with those closest to me allows us to connect more deeply. By involving my family members in my professional world, they’ve become more than bystanders; they’ve become a supportive sounding board. The integration means that I don’t feel constantly torn between my work and my personal life. Instead, I’ve found harmony in embracing both. By introducing the key figures in my professional life to my partner, I create context for them. It fosters empathy and understanding, allowing my spouse to offer emotional support. The transparency avoids the unrealistic pressure of “leaving work at work.” After all, we are human, and what happens at work affects how we feel at home. For young engineers, that mindset shift is key. 
Don’t view work as something that competes with your personal life. View it as something that can be shared. The more your loved ones understand your professional world, the stronger your relationships can become. For working couples, it can be especially relevant. Coordinating to give each other “catch-up time” helps create space for both partners to manage work commitments without sacrificing family experiences. It’s not about being perfect in both spheres. It’s about being present where you are. Merging work and travel Another way I’ve achieved the integration of work and life is by blending travel for business and pleasure. On family vacations, I don’t treat time away as a complete break from work. I typically start my day early in the morning, catching up on email before everyone else wakes up. By the time my family is ready for breakfast, I’ve usually handled my work responsibilities and can be fully present with my spouse and children. The approach allows me to enjoy the day stress-free, knowing I’ve kept up with professional demands. Work-life integration is also important at the personal level. On business trips, I always build in extra time to explore the area. These mini vacations transform my work trips from exhausting obligations into enriching experiences. I often visit places within a short flight from my business destination, turning a routine trip into an adventure. Not all adventures have to be shared to be fulfilling. Sometimes solo experiences can refresh you just as much. The approach works for my spouse and I, as we each find our own ways to recharge before reuniting. Dismissing the work-life balance myth It’s important to schedule downtime, as it can make you more productive in the long run. Taking a few hours to relax without guilt is exactly what you need to tackle your next project with clarity. Sometimes after a full day of meetings, my spouse and I watch a TV show together, sharing each other’s company. On other days, we plan a dinner with no electronics, and we just talk about our day. I’ve been able not only to achieve most of my professional goals but also to build a life rich in experiences and memories. Life isn’t a zero-sum game between work and personal time. It’s about finding synergy between the two and designing your life so both parts can thrive. As engineers with analytical mindsets and problem-solving skills, we’re well suited to take on the challenge. If there’s one piece of advice I’d give to engineers and young professionals, it’s this: Don’t chase balance; pursue integration. If you do, you’re likely to find that life in all its complexity becomes far more fulfilling. The experiences you create—both at work and at home—are sure to be richer, and your sense of accomplishment can extend beyond just your career.

  • Can Geopolitics Unlock Greenland’s Critical Materials Treasure Chest?
    by Flemming Getreuer Christiansen on 8. May 2025. at 15:00

    For months, the world has wondered about the stated goal of the president of the United States to acquire Greenland. Is he merely expressing a desire to make America greater again in terms of territorial area? Is it a question of security policy? Or are critical minerals—and especially Greenland’s immense rare earth riches—a key factor? The first two questions can’t really be answered without a fuller understanding of the administration’s motives and strategies than is currently available publicly. But for that third question, there exists a wealth of data and context, both historical and modern-day. Let’s start with the modern day. U.S. industries, like those of any developed economy, depend on critical materials. Lots of attention now is focused on the rare earths, a group of elements of unusual importance because of their indispensability in essential commercial, defense, and industrial applications. Roughly 90 percent of processed rare earths come from China, creating supply-chain vulnerabilities that many countries are now trying to avoid, particularly since China announced restrictions on the export of heavy rare earths in April 2025. Systematic studies have indicated that Greenland has 10 important deposits of rare earth elements. But in mining, it’s the details that matter. To understand the value of a mineral deposit, it’s important to consider its history and to reality-check the documented resources. The facts and the obvious conclusions do not always reach decision-makers, investors, and the media.
Greenland Is the World’s Biggest Island, But It’s Not as Big as You Think
Greenland is the largest island in the world, but people often get an exaggerated impression of how big it is because the commonly used Mercator projection distorts the size of landmasses close to the poles. Greenland is about 2 million square kilometers. However, the ice-free part—which could much more feasibly be mined than the ice-covered interior—is roughly the size of California, and without any scenic coastal highways. Traveling between settlements is possible only by boat or airplane. Greenland has had nine different mines since World War II, but only one, for the mineral anorthosite, is active today. Another, for gold, is expected to reach full production later this year.
[Image caption: Greenland’s only fully operational mine as of May 2025 extracts anorthosite rock at a site called White Mountain, on the central west coast of the island. Credit: Flemming Getreuer Christiansen]
Greenland took over the handling of all mining licenses from Denmark in 1998 and full authority in 2010 after self-governance was introduced. Greenland has developed a modern licensing system for mining, with an element of competition between companies and transparent procedures for public input on environmental and socioeconomic concerns. The number of licenses granted by the government has been high and relatively constant for many years, but the level of actual activities, such as drilling, has been low over the past decade. Several licenses have been relinquished or revoked without any mining ever taking place. The reasons are many. A major one is lack of human resources: The population of Greenland is only 57,000, scattered across an area three times the size of Texas. Another reason is high operational costs due to the harsh climate and lack of infrastructure. Others include restrictions favoring labor from Greenland or Denmark.
Still others include puzzling recent bureaucratic changes in the legal framework for mining, related to the requirements for resource assessments and feasibility studies as well as the environmental and socioeconomic requirements for obtaining an exploitation license. Complex royalty schemes and relatively high corporate taxes contribute to uncertainty and risks that many investors have been unwilling to take. This is at a time when Greenland sorely needs investments to fuel its dreams of economic and political independence from Denmark. In the summer of 2021, the Greenland government said no to issuing further petroleum licenses. It didn’t matter very much, because by then all the largest oil and gas companies had already left Greenland as a result of declining oil prices and increasing costs caused by the changes to regulations. Since the 1970s, some 40 companies have been involved, but they drilled only 15 exploration wells and made no commercial discoveries.
[Image caption: The Ivittuut mine, seen here in 1953, extracted huge amounts of cryolite during World War II. Ivittuut was one of the few places in the world known to have deposits of cryolite, which was necessary for the smelting of aluminum. The mine closed in 1987. Credit: Kaj Skall Sørensen/Danish Arctic Institute]
Still, there have been some commercial activities. During World War II, the Ivittuut mine in South Greenland produced cryolite for aluminum production and was crucial for the U.S. war effort. Since that time, U.S. companies have shown only a modest interest in Greenland; of the total of 250 companies that have been granted exploration licenses over the past several decades, 10 have been American. Of 50 companies from various countries that have been drilling in Greenland, four were from the United States, and of the 15 companies that have applied for an exploitation license, precisely zero were American. So historically the U.S. fingerprint on mineral exploration in Greenland is negligible, even though the door has been open for decades.
The Deposits Are Big but Have Low Concentrations of Rare Earth Elements
According to the European Union, Greenland has great untapped potential for 25 of the 34 minerals identified in the Union’s official list of raw materials, including rare earth elements, graphite, platinum group metals, and niobium. Looking at these in more detail, however, a more complex picture emerges. In 2023, an investigation for the Center for Mineral Resources and Materials of the Geological Survey of Denmark and Greenland found that Greenland’s mineral resources included everything from minor occurrences to substantial deposits scattered around the island. Of Greenland’s 10 important rare earth deposits, only two, Kvanefjeld (Kuannersuit) and Kringlerne (Killavaat Alannguat), have attracted much attention. In spite of being located just a few miles apart in the same complex in South Greenland, these two rare earth deposits differ in nearly every parameter: geology and mineralogy, stage of exploration, and documented resources. They are alike, however, in that both have serious drawbacks when compared with active mines or advanced projects elsewhere in the West.
[Image caption: The Kvanefjeld plateau, near the southern tip of Greenland, is the site of large deposits of rare earth oxides, uranium, thorium, and other elements of industrial importance. Credit: Jan Richard Heinicke/laif/Redux]
Kvanefjeld is the only project in Greenland with well-documented reserves, but it suffers from (or, some would say, could potentially benefit from) a high content of uranium and thorium. In 2021, when the project was in the final phase of getting an exploitation license, the Greenlandic government dug in its heels and made it illegal to mine deposits with more than 100 parts per million of uranium. The new regulations were used against the Australian company behind the Kvanefjeld project, rendering worthless years of heavy investments; more than US $100 million was reportedly spent on drilling and other work. The company filed a request for arbitration in 2022, and in 2024 it commenced legal proceedings against the Greenland and Danish governments. It may be years before these cases are resolved. In the meantime, the dispute is likely to harm investment in Greenland because of the high political risk it suggests.
[Image caption: The rare earths at the Kringlerne deposit are contained within an igneous rock called kakortokite. Credit: Jan Richard Heinicke/laif/Redux]
At Kringlerne there are much lower concentrations of uranium. The privately owned company that has invested in the site, Tanbreez, claims that it is the largest deposit of its kind in the world, but that claim is based on very little drilling. In 2020, the company got an exploitation license in spite of not having provided documentation of resources, updated feasibility studies, or environmental and socioeconomic impact assessments. Deadlines for providing financial security and plans for mining and closure were extended for many years. Tanbreez was partly taken over by a New York–based company registered on the NASDAQ, and the new owners were obligated to follow international standards and disclose a confidential resource estimate from 2016. The report revealed numbers for ore grade and other characteristics that were considerably more disappointing than the earlier claims. Nor do Greenland’s deposits fare well in comparison with other mines or deposits. Typical ore grades at successful mines or attractive deposits are between 4 and 8 percent rare earths. The ore at the mines at Mount Weld in Australia and Mountain Pass in California, and the deposits at Nolans Bore in Australia and Bear Lodge in Wyoming, all fall within that range. At Kvanefjeld the figure is 1.4 percent, and at Kringlerne it dips as low as 0.38 percent. The Greenlandic mines would consequently require larger open pits and much more energy for crushing, separation, and refining, which would inevitably drive up the costs of establishing mines and of actually operating them. Nevertheless, the recent U.S. interest in Greenland led to some surprising reactions on the stock market, combined with high volatility. The share price of the company behind Kvanefjeld tripled after intense trading at the start of 2025. The share price of Kringlerne’s new owner has dropped significantly since its NASDAQ listing, though with many ups and downs following all sorts of stock announcements that referenced miscellaneous analyses of old drill cores.
Greenland’s Flirtations With China Could Backfire
After the Trump administration touched off a tariff war, the geopolitical skirmishing between the United States and China over rare earths became a much more complicated conflagration that also drew in the European Union.
And the Greenland government has continued to fan the flames by courting Chinese investment. In March, for example, Greenland’s foreign minister, Vivian Motzfeldt, reportedly identified closer cooperation with China as a priority and even touted the possibility of a free trade agreement between Greenland and China. According to a report in The Diplomat, Motzfeldt’s actions were “largely driven by the belief…that a mining boom, fueled by Chinese investment, was the most realistic path toward independence from Denmark—a goal shared by most Greenlandic parties.” Maybe so, but such moves have been widely perceived as a provocation against Denmark and the United States. Whatever their motivation, they could very well lead to increased U.S. pressure. For the United States it would be a geostrategic nightmare if China opened a mine in a remote part of Greenland, with a town, communication lines, a harbor, and an airfield that could obviously be used for purposes other than resource extraction and export. Since Greenland took control of its mineral resources, Denmark has been reluctant to get involved in mining projects. That could very well change. Denmark and the E.U. may come under pressure to invest in much-needed infrastructure and energy projects and to offer loans on more favorable terms, if for no other reason than to keep the United States at bay.
Though the situation is highly unstable, it’s important to try, at least, to separate the geopolitical posturing from the realities of mining. In that vein, it’s safe to say there will be no operating rare earth mines in Greenland during the term of Donald J. Trump.
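Returning to the ore grades discussed earlier: to make that comparison concrete, here is a small illustrative Python sketch. The grade figures come from the article; the contained-metal arithmetic is a back-of-the-envelope addition that ignores recovery rates, by-products, and mining dilution.

```python
# Back-of-the-envelope comparison of ore tonnage that must be mined and
# processed per tonne of rare earth oxides (REO) at the grades cited above.
# Illustrative only: ignores recovery rates, by-products, and dilution.

grades_percent = {
    "Typical successful mine (low end)": 4.0,   # 4-8 percent cited as typical
    "Kvanefjeld": 1.4,
    "Kringlerne": 0.38,
}

for deposit, grade in grades_percent.items():
    tonnes_ore_per_tonne_reo = 100.0 / grade
    print(f"{deposit}: {grade:.2f}% REO -> "
          f"~{tonnes_ore_per_tonne_reo:.0f} t of ore per tonne of REO")

# Approximate output:
#   Typical successful mine (low end): 4.00% REO -> ~25 t of ore per tonne of REO
#   Kvanefjeld: 1.40% REO -> ~71 t of ore per tonne of REO
#   Kringlerne: 0.38% REO -> ~263 t of ore per tonne of REO
```

In other words, at Kringlerne roughly ten times as much rock would have to be dug, crushed, and processed per tonne of product as at a 4 percent deposit, which is the cost and energy penalty the article describes.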

  • Ensure Hard Work Is Recognized With These 3 Steps
    by Rahul Pandey on 7. May 2025. at 19:35

    This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free! There is a widespread misconception in the tech industry that if you work hard, you are guaranteed to be rewarded. Unfortunately, this is far from the truth. During my four and a half years at Meta, I saw many people working crazy hours who still ended up with a “Meets Most Expectations” rating, putting them at risk for a remedial performance improvement plan. Not only were these engineers sacrificing their evenings and taking on tons of stress, but they weren’t even being acknowledged for their efforts. Whether you’re at a startup or a Big Tech company, there is an endless amount of work you could take on. No matter how many weekends you dedicate to your team, you’ll always have more to do. In order to have a sustainably high impact, you must be deliberate about choosing what to work on. By understanding what your team and manager care about, you can ensure great results that are commensurate with great effort. So many engineers mess this up: They go down a rabbit hole about some interesting problem instead of thinking about their performance review and how they’ll be judged. Tactically, here’s what that means for you:
1. Build relationships with colleagues who understand your work and can vouch for you in a wide range of areas. Working in isolation often leads to misunderstood goals and priorities, resulting in wasted effort. Talking about your work is also extremely valuable for identifying future problems and opportunities.
2. Tactfully share your work with colleagues so you get full credit for it. It’s not about “claiming” credit—it’s about sharing your work to help others. The byproduct of this is marketing. Figure out who might benefit from your work and send them regular project updates.
3. Set clear expectations with your team and manager. Negative feedback should never be a surprise. You should regularly receive (and seek out) feedback before your official performance review. I recommend having a dedicated “feedback 1:1” or “performance check-in” with your manager to chat about how you’re trending.
The above steps will ensure that every action you take is loaded with value. Once you’re consistently exceeding expectations, you’re ready to take on more responsibility and grow your career. —Rahul
ICYMI: U.S. Semiconductor Courses Surge Amid Industry Boom
The U.S. semiconductor industry is booming, driven by rapid advancements in AI, federal funding, and private-sector investments. As a result, electrical engineering programs in the U.S. are now seeing an uptick in student enrollment in semiconductor coursework. But some educators and recruiters warn that new tariffs and proposed immigration restrictions could potentially complicate job prospects for students. Read more here.
ICYMI: How to Avoid Ethical Red Flags in Your AI Projects
A growing number of engineers now find themselves developing AI solutions while navigating complex ethical considerations, says IBM’s AI ethics global leader Francesca Rossi. In this guest article, Rossi provides some advice for engineers based on her experience developing IBM’s internal processes for responsible AI deployment. Read more here.
How Big Tech hides its outsourced African workforce
Content moderation, AI data training, and other tasks imperative to the tech sector require a lot of labor.
New data reported by Rest of World reveals that the people performing this labor are often hidden. Visualized as maps, the data shows that many African workers are indirectly employed by Big Tech companies across the globe. Read more here.

  • Amazon’s Vulcan Robots Now Stow Items Faster Than Humans
    by Evan Ackerman on 7. May 2025. at 08:30

    At an event in Dortmund, Germany, today, Amazon announced a new robotic system called Vulcan, which the company is calling “its first robotic system with a genuine sense of touch—designed to transform how robots interact with the physical world.” In the short to medium term, the physical world that Amazon is most concerned with is its warehouses, and Vulcan is designed to assist (or take over, depending on your perspective) with stowing and picking items in its mobile robotic inventory system.
Related: Amazon’s Vulcan Robots Are Mastering Picking Packages
In two upcoming papers in IEEE Transactions on Robotics, Amazon researchers describe how both the stowing and picking sides of the system operate. We covered stowing in detail a couple of years ago, when we spoke with Aaron Parness, the director of applied science at Amazon Robotics. Parness and his team have made a lot of progress on stowing since then, improving speed and reliability over more than 500,000 stows in operational warehouses to the point where the average stowing robot is now slightly faster than the average stowing human. We spoke with Parness to get an update on stowing, as well as an in-depth look at how Vulcan handles picking, which you can find in this separate article. It’s a much different problem, and well worth a read.
Optimizing Amazon’s Stowing Process
Stowing is the process by which Amazon brings products into its warehouses and adds them to its inventory so that you can order them. Not surprisingly, Amazon has gone to extreme lengths to optimize this process to maximize efficiency in both space and time. Human stowers are presented with a mobile robotic pod full of fabric cubbies (bins) with elastic bands across the front of them to keep stuff from falling out. The human’s job is to find a promising space in a bin, pull the elastic band aside, and stuff the thing into that space. The item’s new home is recorded in Amazon’s system, the pod then drives back into the warehouse, and the next pod comes along, ready for the next item.
[Image caption: Different manipulation tools are used to interact with human-optimized bins. Credit: Amazon]
The new paper on stowing includes some interesting numbers about Amazon’s inventory-handling process that help put the scale of the problem in perspective. More than 14 billion items are stowed by hand every year at Amazon warehouses. Amazon is hoping that Vulcan robots will be able to stow 80 percent of these items at a rate of 300 items per hour, while operating 20 hours per day. It’s a very, very high bar. After a lot of practice, Amazon’s robots are now quite good at the stowing task. Parness tells us that the stow system is operating three times as fast as it was 18 months ago, meaning that it’s actually a little bit faster than an average human. This is exciting, but as Parness explains, expert humans still put the robots to shame. “The fastest humans at this task are like Olympic athletes. They’re far faster than the robots, and they’re able to store items in pods at much higher densities.” High density is important because it means that more stuff can fit into warehouses that are physically closer to more people, which is especially relevant in urban areas where space is at a premium. The best humans can get very creative when it comes to this physical three-dimensional “Tetris-ing,” which the robots are still working on. Where robots do excel is in planning ahead, and this is likely why the average robot stower is now able to outpace the average human stower—Tetris-ing is a mental process, too.
In the same way that good Tetris players are thinking about where the next piece is going to go, not just the current piece, robots are able to leverage a lot more information than humans can to optimize what gets stowed where and when, says Parness. “When you’re a person doing this task, you’ve got a buffer of 20 or 30 items, and you’re looking for an opportunity to fit those items into different bins, and having to remember which item might go into which space. But the robot knows all of the properties of all of our items at once, and we can also look at all of the bins at the same time along with the bins in the next couple of pods that are coming up. So we can do this optimization over the whole set of information in 100 milliseconds.” Essentially, robots are far better at optimization within the planning side of Tetrising, while humans are (still) far better at the manipulation side, but that gap is closing as robots get more experienced at operating in clutter and contact. Amazon has had Vulcan stowing robots operating for over a year in live warehouses in Germany and Washington state to collect training data, and those robots have successfully stowed hundreds of thousands of items. Stowing is of course only half of what Vulcan is designed to do. Picking offers all kinds of unique challenges too, and you can read our in-depth discussion with Parness on that topic right here.
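As a rough sense of the scale behind the stow targets mentioned above (14 billion hand-stowed items a year, a goal of automating 80 percent of them at 300 items per hour for 20 hours a day), here is a short back-of-the-envelope Python sketch. The resulting fleet-size figure is my own arithmetic from those published targets, not a number Amazon has given.

```python
# Rough, illustrative arithmetic using only the figures quoted in the article.
# The robot count below is a back-of-the-envelope estimate, not an Amazon figure.

items_stowed_per_year = 14e9          # items stowed by hand each year
target_share = 0.80                   # share Amazon hopes Vulcan can handle
rate_per_hour = 300                   # target stow rate per robot
hours_per_day = 20                    # target duty cycle
days_per_year = 365

items_per_robot_per_year = rate_per_hour * hours_per_day * days_per_year
robots_needed = (items_stowed_per_year * target_share) / items_per_robot_per_year

print(f"Items per robot per year: {items_per_robot_per_year:,.0f}")
print(f"Robots needed to hit the 80 percent target: ~{robots_needed:,.0f}")
# -> Items per robot per year: 2,190,000
# -> Robots needed to hit the 80 percent target: ~5,114
```

Even at the stated per-robot rate, hitting the 80 percent goal would imply a fleet on the order of several thousand stowing robots running nearly around the clock, which is why the article calls it a very high bar.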

  • Amazon’s Vulcan Robots Are Mastering Picking Packages
    by Evan Ackerman on 7. May 2025. at 08:00

    As far as I can make out, Amazon’s warehouses are highly structured, extremely organized, very tidy, absolute raging messes. Everything in an Amazon warehouse is (usually) exactly where it’s supposed to be, which is typically jammed into some pseudorandom fabric bin the size of a shoebox along with a bunch of other pseudorandom crap. Somehow, this turns out to be the most space- and time-efficient way of doing things, because (as we’ve written about before) you have to consider the process of stowing items away in a warehouse as well as the process of picking them, and that involves some compromises in favor of space and speed. For humans, this isn’t so much of a problem. When someone orders something on Amazon, a human can root around in those bins, shove some things out of the way, and then pull out the item that they’re looking for. This is exactly the sort of thing that robots tend to be terrible at, because not only is this process slightly different every single time, it’s also very hard to define exactly how humans go about it.
Related: Amazon’s Vulcan Robots Now Stow Items Faster Than Humans
As you might expect, Amazon has been working very, very hard on this picking problem. Today at an event in Germany, the company announced Vulcan, a robotic system that can both stow and pick items at human(ish) speeds. Last time we talked with Aaron Parness, the director of applied science at Amazon Robotics, our conversation was focused on stowing—putting items into bins. As part of today’s announcement, Amazon revealed that its robots are now slightly faster at stowing than the average human is. But in the stow context, there’s a limited amount that a robot really has to understand about what’s actually happening in the bin. Fundamentally, the stowing robot’s job is to squoosh whatever is currently in a bin as far to one side as possible in order to make enough room to cram a new item in. As long as the robot is at least somewhat careful not to crushify anything, it’s a relatively straightforward task, at least compared to picking.
[Image caption: The choices made when an item is stowed into a bin will affect how hard it is to get that item out of that bin later on—this is called “bin etiquette.” Amazon is trying to learn bin etiquette with AI to make picking more efficient. Credit: Amazon]
The defining problem of picking, as far as robots are concerned, is sensing and manipulation in clutter. “It’s a naturally contact-rich task, and we have to plan on that contact and react to it,” Parness says. And it’s not enough to solve these problems slowly and carefully, because Amazon Robotics is trying to put robots in production, which means that its systems are being directly compared to a not-so-small army of humans who are doing this exact same job very efficiently. “There’s a new science challenge here, which is to identify the right item,” explains Parness. The thing to understand about identifying items in an Amazon warehouse is that there are a lot of them: something like 400 million unique items. One single floor of an Amazon warehouse can easily contain 15,000 pods, which is over a million bins, and Amazon has several hundred warehouses. This is a lot of stuff. In theory, Amazon knows exactly which items are in every single bin. Amazon also knows (again, in theory) the weight and dimensions of each of those items, and probably has some pictures of each item from previous times that the item has been stowed or picked.
This is a great starting point for item identification, but as Parness points out, “We have lots of items that aren’t feature rich—imagine all of the different things you might get in a brown cardboard box.”
Clutter and Contact
As challenging as it is to correctly identify an item in a bin that may be stuffed to the brim with nearly identical items, an even bigger challenge is actually getting that item that you just identified out of the bin. The hardware and software that humans have for doing this task is unmatched by any robot, which is always a problem, but the real complicating factor is dealing with items that are all jumbled together in a small fabric bin. And the picking process itself involves more than just extraction—once the item is out of the bin, you then have to get it to the next order-fulfillment step, which means dropping it into another bin or putting it on a conveyor or something. “When we were originally starting out, we assumed we’d have to carry the item over some distance after we pulled it out of the bin,” explains Parness. “So we were thinking we needed pinch grasping.” A pinch grasp is when you grab something between a finger (or fingers) and your thumb, and at least for humans, it’s a versatile and reliable way of grabbing a wide variety of stuff. But as Parness notes, for robots in this context, it’s more complicated: “Even pinch grasping is not ideal because if you pinch the edge of a book, or the end of a plastic bag with something inside it, you don’t have pose control of the item and it may flop around unpredictably.” At some point, Parness and his team realized that while an item did have to move farther than just out of the bin, it didn’t actually have to get moved by the picking robot itself. Instead, they came up with a lifting conveyor that positions itself directly outside of the bin being picked from, so that all the robot has to do is get the item out of the bin and onto the conveyor. “It doesn’t look that graceful right now,” admits Parness, but it’s a clever use of hardware to substantially simplify the manipulation problem, and it has the side benefit of allowing the robot to work more efficiently, since the conveyor can move the item along while the arm starts working on the next pick. Amazon’s robots have different techniques for extracting items from bins, using different gripping hardware depending on what needs to be picked. The type of end effector that the system chooses and the grasping approach depend on what the item is, where it is in the bin, and also what it’s next to. It’s a complicated planning problem that Amazon is tackling with AI, as Parness explains: “We’re starting to build foundation models of items, including properties like how squishy they are, how fragile they are, and whether they tend to get stuck on other items or not. So we’re trying to learn those things, and it’s early stage for us, but we think reasoning about item properties is going to be important to get to that level of reliability that we need.” Reliability has to be extremely high for Amazon (as with many other commercial robotic deployments) simply because small errors multiplied over huge deployments result in an unacceptable amount of screwing up. There’s a very, very long tail of unusual things that Amazon’s robots might encounter when trying to extract an item from a bin. Even if there’s some particularly weird bin situation that might show up only once in a million picks, that still ends up happening many times per day on the scale at which Amazon operates.
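To see why that “once in a million picks” long tail matters at this scale, here is a short illustrative calculation in Python. The failure probability comes from the article’s example; the daily pick volume is an assumption made purely for illustration, not a number from Amazon.

```python
# Illustrative long-tail arithmetic: even very rare failure modes occur
# constantly at warehouse scale. The daily pick volume is assumed purely
# for illustration; Amazon does not disclose this figure here.

failure_probability = 1e-6        # "once in a million picks"
assumed_picks_per_day = 50e6      # hypothetical fleet-wide daily pick volume

expected_failures_per_day = failure_probability * assumed_picks_per_day
print(f"Expected rare-case failures per day: {expected_failures_per_day:.0f}")
# -> Expected rare-case failures per day: 50
```

Under that assumption, a one-in-a-million situation still shows up dozens of times every day, which is why the system needs a graceful way to hand difficult bins over to humans.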
Fortunately for Amazon, they’ve got humans around, and part of the reason that this robotic system can be effective in production at all is that if the robot gets stuck, or even just sees a bin that it knows is likely to cause problems, it can just give up, route that particular item to a human picker, and move on to the next one. The other new technique that Amazon is implementing is a sort of modern approach to “visual servoing,” where the robot watches itself move and then adjusts its movement based on what it sees. As Parness explains: “It’s an important capability because it allows us to catch problems before they happen. I think that’s probably our biggest innovation, and it spans not just our problem, but problems across robotics.” A (More) Automated Future Parness was very clear that (for better or worse) Amazon isn’t thinking about its stowing and picking robots in terms of replacing humans completely. There’s that long tail of items that need a human touch, and it’s frankly hard to imagine any robotic-manipulation system capable enough to make at least occasional human help unnecessary in an environment like an Amazon warehouse, which somehow manages to maximize organization and chaos at the same time. These stowing and picking robots have been undergoing live testing in an Amazon warehouse in Germany for the past year, where they’re already demonstrating ways in which human workers could directly benefit from their presence. For example, Amazon pods can be up to 2.5 meters tall, meaning that human workers need to use a stepladder to reach the highest bins and bend down to reach the lowest ones. If the robots were primarily tasked with interacting with these bins, it would help humans work faster while putting less stress on their bodies. With the robots so far managing to keep up with human workers, Parness tells us that the emphasis going forward will be primarily on getting better at not screwing up: “I think our speed is in a really good spot. The thing we’re focused on now is getting that last bit of reliability, and that will be our next year of work.” While it may seem like Amazon is optimizing for its own very specific use cases, Parness reiterates that the bigger picture here is using every last one of those 400 million items jumbled into bins as a unique opportunity to do fundamental research on fast, reliable manipulation in complex environments. “If you can build the science to handle high contact and high clutter, we’re going to use it everywhere,” says Parness. “It’s going to be useful for everything, from warehouses to your own home. What we’re working on now are just the first problems that are forcing us to develop these capabilities, but I think it’s the future of robotic manipulation.”

  • Master 5G and 6G Basics With IEEE’s New Training Program
    by Angelique Parashis on 6. May 2025. at 18:00

    As 5G network capabilities expand globally, the need grows for trained engineers who know the protocols and procedures required to set up and manage telecommunications systems. The newest telecom generation has brought higher data transmission speeds, lower latency, and increased connectivity to a wide variety of devices used for health care, entertainment, manufacturing, and transportation. Remote surgery, self-driving cars, real-time industrial monitoring, and immersive virtual reality experiences are just some of the innovations that 5G has made possible. The more recent enhancements, known as 5G-Advanced, include the integration of artificial intelligence and machine learning solutions to enable more intelligent network management. The developments are paving the way for 6G, expected to be commercially available by 2030. Key differences between 5G and 6G are expected to include enhanced scalability, increased utilization of the radio spectrum, and dynamic access to different connection types including cellular, satellite, and Wi-Fi. The improvements likely will result in more reliable connections with fewer interruptions—which is essential for supporting drones, robots, and more advanced technologies. To get engineers up to speed on the two technologies, IEEE and telecom training provider Wray Castle have launched the 5G/6G Essential Protocols and Procedures Training and Innovation Testbed.
Self-paced learning, course books, and videos
The training program provides a deep dive into essential 5G protocols including the network function (NF) framework, registration processes, and packet data unit (PDU) session establishment. The NF framework supports the functions required for 5G networks to operate. Registration processes involve the steps needed for devices to connect to the network. PDU session establishment is the procedure for setting up data sessions between devices and the network. The comprehensive training includes 11 hours of on-demand, self-paced online learning, illustrated digital course books, and instructional videos. Learners receive three months of access to the IEEE 5G/6G Innovation Testbed, a cloud-based, private, secure, end-to-end 5G network testing platform that offers hands-on experience. The platform is for those who want to try out their 5G enhancements, run trials of future 6G functions, or test updates for converged networks. Users may test and retest as many times as they want at no additional cost. Tailored for professionals such as system engineers, integrators, and technical experts, the program provides the knowledge and practical skills needed to excel in the evolving telecommunications landscape. After successful completion of the training program, learners earn an IEEE certificate bearing 11 professional development hours, which can be shared on résumés and professional networking sites such as LinkedIn. Ensure your mobile network expertise is up to speed on the latest advancements in wireless technology. Complete this form to learn more.
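For readers new to the procedures the course covers, here is a highly simplified Python sketch of the message order in 5G registration and PDU session establishment. It is a teaching illustration distilled from the standard NAS procedures, not material from the IEEE/Wray Castle program, and it omits many real-world steps, timers, and information elements.

```python
# Highly simplified outline of two 5G procedures mentioned in the article.
# Illustrative sketch only: not course material and not a complete or
# normative message flow (authentication details and many parameters omitted).

REGISTRATION_FLOW = [
    ("UE -> AMF", "Registration Request"),
    ("AMF -> UE", "Authentication Request"),
    ("UE -> AMF", "Authentication Response"),
    ("AMF -> UE", "Security Mode Command"),
    ("UE -> AMF", "Security Mode Complete"),
    ("AMF -> UE", "Registration Accept"),
    ("UE -> AMF", "Registration Complete"),
]

PDU_SESSION_FLOW = [
    ("UE -> AMF/SMF", "PDU Session Establishment Request"),
    ("SMF -> UPF",    "Select user-plane function and set up data tunnel"),
    ("SMF -> UE",     "PDU Session Establishment Accept"),
]

def print_flow(name, flow):
    # Print each step of a simplified signaling sequence in order.
    print(name)
    for step, (direction, message) in enumerate(flow, start=1):
        print(f"  {step}. {direction}: {message}")

print_flow("Registration (simplified)", REGISTRATION_FLOW)
print_flow("PDU session establishment (simplified)", PDU_SESSION_FLOW)
```

The UE (device), AMF, SMF, and UPF are examples of the network functions that the NF framework described above organizes; the testbed lets learners exercise flows like these end to end.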

  • See Inside Your Designs - Learn How CT Scanning Finds Hidden Flaws
    by Lumafield on 6. May 2025. at 15:50

    This white paper highlights industrial computed tomography (CT) as a transformative solution for precision inspection, overcoming the limitations of traditional methods like destructive testing or surface scans. By providing non-destructive, high-resolution 3D imaging, industrial CT enables engineers to detect hidden defects (porosity, cracks, voids), accelerate product development, verify supplier parts, improve manufacturing yield, and enhance failure analysis. It supports the entire product lifecycle, from R&D prototyping to production quality control and field failure diagnostics, helping industries like aerospace, automotive, and medical devices ensure reliability. The paper also introduces Lumafield’s CT solutions: Neptune (an accessible lab scanner), Triton (automated factory-floor CT), and Voyager (cloud-based AI analysis software), which make advanced CT scanning faster, smarter, and scalable for modern engineering demands.
What you’ll learn:
  • How CT scanning reveals hidden defects that surface inspections miss.
  • Why non-destructive testing accelerates prototyping and reduces iteration cycles.
  • How to verify supplier parts and avoid costly manufacturing rework.
  • Ways to improve yield by catching process drift before it creates scrap.
Download this free whitepaper now!

  • Ready to Optimize Your Resource Intensive EM Simulations?
    by WIPL-D on 6. May 2025. at 15:07

    This paper explores the use of WIPL-D software for simulating indoor electromagnetic (EM) propagation in both 2D and 3D, addressing the growing need for accurate modeling due to increasing electronic device usage. While 3D simulations offer detailed wave propagation analysis, they require substantial computational resources, especially at high frequencies, whereas 2D simulations, which assume an infinite structure with a constant cross section, provide a computationally efficient alternative with minimal accuracy loss for many practical scenarios. The study examines the effects of material properties (e.g., concrete vs. metallic pillars) on signal distortion and evaluates different signal types, such as Dirac delta and Gaussian pulses, concluding that 2D modeling can often serve as a viable, resource-saving substitute for 3D simulations in telecommunication applications for smart environments.
In this whitepaper you’ll learn:
  • The trade-offs between 2D and 3D EM modeling for indoor scenarios.
  • Practical strategies for reducing computational resources without losing accuracy.
  • Why 2D EM modeling can be a game-changer for indoor propagation simulations.
Download this free whitepaper now!
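As context for the 2D-versus-3D resource trade-off described above, here is a generic rule-of-thumb Python sketch of how problem size grows with frequency when a volume is discretized at roughly 10 points per wavelength. It is only meant to show the scaling trend; it is not how WIPL-D counts unknowns, and the room dimensions are assumed for illustration.

```python
# Generic illustration of why 3D EM models grow much faster than 2D ones
# with frequency. Assumes ~10 sample points per wavelength per dimension
# over an assumed 10 m x 10 m (x 3 m) indoor space. This is NOT a model of
# WIPL-D's solver, which uses higher-order basis functions and scales differently.

C = 3e8  # speed of light, m/s

def unknowns(freq_hz, dims_m, points_per_wavelength=10):
    # Crude estimate: product of samples needed along each dimension.
    wavelength = C / freq_hz
    n = 1
    for d in dims_m:
        n *= max(1, int(points_per_wavelength * d / wavelength))
    return n

for f_ghz in (1, 3, 10):
    f = f_ghz * 1e9
    n2d = unknowns(f, (10, 10))       # 2D cross section, 10 m x 10 m
    n3d = unknowns(f, (10, 10, 3))    # full 3D room, 10 m x 10 m x 3 m
    print(f"{f_ghz:>2} GHz: 2D ~{n2d:.2e} unknowns, 3D ~{n3d:.2e} unknowns")
```

Under these assumptions the 2D problem grows with the square of frequency while the 3D problem grows with the cube, which is the gap the whitepaper argues can often be avoided for indoor scenarios.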

  • How to Fast-Track Perovskite Solar Cells
    by Mary O'Kane on 6. May 2025. at 14:00

    When British solar manufacturer Oxford PV shipped the first commercial order of perovskite-silicon solar cells last September, it was touted as a breakthrough in the industry. The news marked a milestone in a 15-year global effort to develop this lightweight, versatile material that could outperform traditional silicon solar cells. But the lack of follow-on shipments since then served as a reminder that this technology wasn’t quite ready for mass commercialization. The main problem: Continued delays in getting perovskites to the solar market have made them less cost-competitive than their established predecessor, silicon solar cells. In the time it took the sector to go from the first paper on perovskite-based solar cells in 2009 to the first commercial shipment in 2024, the cost of manufacturing silicon solar cells plummeted from US $2.11 per watt to as low as $0.20 per watt. These prices were driven down largely by increased production throughout Southeast Asia. Now, hefty U.S. tariffs on silicon solar imports from these countries could give perovskite manufacturers a competitive edge. The U.S. Department of Commerce on April 21 announced a final decision to levy tariffs, some higher than 3,400 percent, on solar companies in Malaysia, Cambodia, Thailand, and Vietnam. The decision is the result of a long-running antidumping and countervailing-duty investigation that found that companies in China attempted to bypass previous levies by moving manufacturing to these four countries. If confirmed by another U.S. agency in June, the levies would add to other U.S. import taxes already in place on solar components from the region. But the antidumping tariffs don’t apply to thin-film photovoltaics such as perovskites. This could be a boon for those solar developers, but they’re going to have to move quickly. The longer it takes to get perovskites to market, the more the landscape could change. And yet, some researchers in this field continue to focus on breaking power-conversion efficiency records, with some types of perovskite cells reaching 27 percent. These accomplishments might lead to papers in high-impact journals but do little to get perovskites out the door.
Perovskite Solar Cells’ Efficiency Limits
Many researchers say it’s time to stop aiming for incremental efficiency gains and instead focus on scaling manufacturing and improving the life-span of the cells. This would involve developing manufacturing techniques that strike a balance between high-quality devices and low production costs. This won’t be easy. Lowering processing costs while increasing cell life-span and maintaining reasonably high efficiency will require a lot of research and effort. But if academic and industrial researchers unite, this manufacturing challenge could be solved more quickly than one might think. Perovskite solar cells are composed of organic ions, metals, and halogens that form a special crystal structure that makes them very versatile. With the right composition, perovskites could be better than silicon at converting sunlight to electricity: They have a theoretical efficiency limit of 34 percent, compared with silicon’s 32 percent. They can achieve this with a much thinner layer of material, allowing them to be used in innovative ways such as flexible solar cells, curved solar panels, indoor photovoltaics, and solar windows.
[Image caption: Perovskite developer Tandem PV says perovskite layers produced with solution-based processes don’t have to be made in completely inert conditions. Credit: Tandem PV]
Perovskites can also be stacked on top of silicon photovoltaics to improve performance. The current record efficiency of perovskite-silicon tandem solar cells stands at 34.6 percent, an impressive 7 percentage points better than the best silicon cells. But manufacturing high-quality perovskites at a low cost has proven challenging. Exposure to air and moisture during processing can hinder initial performance and lead to degradation over time. This has forced researchers to assemble them in highly controlled environments. Within these controlled environments, there are two ways to make perovskite solar cells. The more expensive route—vapor deposition—involves evaporating or vaporizing perovskite materials under vacuum conditions and then depositing them as a thin film. This makes very high-quality films with few defects and reliable efficiency. But the setup costs for this method are high, and rigorous maintenance and tight environmental control are required. The simpler and cheaper method involves depositing perovskite layers using inkjet printing or spray coating. In these solution-based approaches, perovskite materials are dissolved in a precursor solution, or “ink,” and directly applied to the desired surface or substrate. The simplicity of this technique has enabled researchers to rapidly improve perovskites over the past decade. However, these techniques leave plenty of room for contamination and defects to occur. With either route, to produce the highest-performing cells, fabrication usually happens in a controlled environment such as a laboratory glove box. This equipment pumps out oxygen and moisture, replacing them with a nonreactive gas such as nitrogen. However, increasing the amount of environmental control can drive up costs. Some glove boxes can bring internal oxygen and moisture levels down to less than 1 part per million (ppm). But installing and maintaining these systems is expensive. This level of environmental control requires a complex loop of filters and blower systems to extract contaminated air, purify it, then recirculate it into the system. These filters and control systems require regular upkeep and replacement, which raises maintenance costs. The ppm sensors alone can cost thousands of dollars. These maintenance costs will always scale with volume. The larger the environment, the more air that needs to be filtered, and the harder it is to maintain strict environmental control. This necessitates more powerful fans and larger filters, and if these systems are exposed to the atmosphere, it costs more time and money to get them working again.
Innovative Perovskite Fabrication Methods
These challenges have led solar developers to experiment with different fabrication methods for perovskite devices, especially on a larger scale. For example, Power Roll in Durham, England, which is developing flexible solar modules, is currently taking a solution-based approach while simultaneously evaluating other methods. “We continuously collaborate with both industrial and academic partners to stay at the forefront of fabrication techniques. This ensures we keep options open for both vacuum and solution processes,” says Nathan Hill, a senior scientist at Power Roll. Oxford PV, based in Oxford, England, hasn’t disclosed how it fabricated the perovskite-silicon tandem modules in its first commercial shipment.
In a 2018 interview, Oxford PV cofounder Henry Snaith hinted that his company might take the vapor route when he said that “vapor-deposited cells [would] advance more quickly than solution-processed cells.”
[Image caption: The spin-coating technique deposits perovskite layers by spinning the substrate to spread material evenly across it. Performing this process inside a glove box, where oxygen and moisture are controlled, helps improve performance. Credit: Ossila]
Completely inert processing—at very low ppm—isn’t ideal for large-scale production, many manufacturers say. So they are exploring innovative approaches to simplify fabrication. “While we acknowledge that processing under inert conditions may be beneficial for lab-scale production, we and our partners find that controlling temperature and humidity are the key factors for managing perovskite grain growth, and have had promising results working outside of inert environments,” says Hill at Power Roll. Another perovskite innovator, Tandem PV in San Jose, Calif., processes its perovskite layers from solutions outside of inert conditions, according to a spokesperson for the company. As manufacturers continue to experiment, researchers should reevaluate their goals for perovskite solar cells too. Typically, the more inert the environment, the higher performing the solar cell. But how high performing do these cells—and how inert do these environments—really have to be? Is there a middle ground where the environments are partially controlled, and the resulting perovskites are still of high-enough quality? My colleagues and I at Ossila have demonstrated that triple-cation mixed-halide perovskites, which are relatively robust, can be reliably made in a glove box that maintains only 15 ppm moisture and 0.5 percent oxygen (5,000 ppm). These solar cells achieved efficiencies comparable to those made in high-end glove boxes (19.2 percent compared with 19.7 percent, respectively). Devices approaching 19 percent are within the realm of competing with silicon solar technology, which largely achieves 13 to 23 percent efficiency, depending on the type of solar cell. Because perovskites are best used in situations where silicon cannot be used, or in conjunction with silicon devices, we think this is an impressive result. We also found that when the same perovskites (triple-cation mixed halides) are processed in ambient air with a solution-based approach, devices still performed well. The best results, which reached 17.6 percent, indicate there is hope for good air-processed perovskite solar cells.
Tariffs on Silicon Solar Could Make Perovskites More Competitive
Many academic researchers are also experimenting with creating perovskite solar cells outside of glove box environments. A recent study in Nature Communications described a solution-processed, roll-to-roll perovskite solar cell fabricated entirely in ambient air. (Roll-to-roll processing involves high-speed manufacturing that can continuously deposit solutions on flexible materials on moving rolls. It’s like newspaper printing, but for solar cells.)
[Image caption: Researchers at Australia’s national science agency, CSIRO, last year demonstrated the first entirely roll-to-roll-fabricated perovskite solar cell made under ambient room conditions. Credit: CSIRO]
The resulting devices reach efficiencies of 15.5 percent for individual cells and 11 percent for mini solar modules. What’s more, the estimated production costs are as low as $0.70 per watt, and there is still room for further cost reductions.
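To put the efficiency figures above in perspective, here is a small illustrative Python sketch comparing how much module area each efficiency level needs to deliver a kilowatt of rated power. The efficiencies come from the article; the 1,000 W/m² figure is the standard test-condition irradiance, and the area arithmetic is my own addition rather than data from the cited studies.

```python
# Illustrative comparison of module area needed per kilowatt of rated power
# at the efficiencies quoted in the article. Assumes the standard test
# irradiance of 1000 W/m^2; the area figures are this sketch's arithmetic,
# not data from the cited studies.

IRRADIANCE_W_PER_M2 = 1000.0

efficiencies = {
    "High-end glove box cell (19.7%)": 0.197,
    "15 ppm moisture glove box cell (19.2%)": 0.192,
    "Air-processed cell (17.6%)": 0.176,
    "Roll-to-roll ambient cell (15.5%)": 0.155,
}

for label, eff in efficiencies.items():
    area_per_kw = 1000.0 / (IRRADIANCE_W_PER_M2 * eff)
    print(f"{label}: ~{area_per_kw:.1f} m^2 per kW")
# e.g. 19.7% -> ~5.1 m^2 per kW, 15.5% -> ~6.5 m^2 per kW
```

The spread between the best glove-box cells and the ambient-processed ones amounts to roughly an extra square meter or so per kilowatt, which is why the author argues that somewhat lower efficiency can be an acceptable price for much cheaper processing.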
To move the field into full commercialization, it’s critical that more focus be placed on scalable processing methods rather than chasing ever-higher efficiencies. Academia and industry must align their goals of increasing stability and scalability. Commercialization of perovskite solar cells is within reach. And evolving international trade conditions could give perovskite solar cells a competitive edge. But to achieve this, it’s extremely important to identify any unnecessary steps involved in making them. With low-costs, globally adaptable production, and flexible manufacturing opportunities, perovskite devices could offer a promising path for manufacturing worldwide, strengthening the overall global supply of photovoltaics.

  • Will Supercapacitors Come to AI’s Rescue?
    by Dina Genkina on 6. May 2025. at 13:55

    In the United Kingdom, electricity provider National Grid faces a problem every time there is a soccer match on (or any other widely viewed televised event, for that matter): During halftime, or a commercial break, an inordinate number of viewers go to turn on their tea kettles. This highly coordinated, very British activity strains the energy grid, causing spikes in demand of sometimes thousands of megawatts. In AI training, a similar phenomenon can occur every second. Because training is orchestrated simultaneously among many thousands of GPUs in massive data centers, and with each new generation of GPU consuming an ever-increasing amount of power, each step of the computation corresponds to a massive energy spike. Now, at least three companies are coming out with a solution to smooth out the load seen by the grid: adding banks of huge capacitors, known as supercapacitors, to those data centers. “When you have all of those GPU clusters, and they’re all linked together in the same workload, they’ll turn on and turn off at the same time. That’s a fundamental shift,” says Joshua Buzzell, vice president and data-center chief architect at the power-equipment supplier Eaton. These coordinated spikes can strain the power grid, and the issue promises to get worse rather than better in the near future. “The problems that we’re trying to solve for are the language models that are probably 10 to 20 times, maybe 100 times larger” than the ones that exist today, Buzzell says.
[Image caption: AI workloads sometimes use power in short bursts, causing the power grid to experience a wildly oscillating load (blue). Banks of supercapacitors can store energy on short time scales (purple), which compensates for AI’s power bursts and creates a smoother power demand on the grid (red). Credit: Siemens Energy]
One solution is to rely on backup power supplies and batteries for charging and discharging, which can provide extra power quickly. However, much the way a phone battery degrades after multiple recharge cycles, lithium-ion batteries degrade quickly when charging and discharging at this high rate. Another solution is to use dummy calculations, which run while there are no spikes, to smooth out demand. This makes the grid carry a more consistent load, but it also wastes energy doing unnecessary work.
Supercapacitors for AI Power Management
Several companies are coming out with a new solution using banks of supercapacitors. Supercapacitors have two parallel plates, like regular capacitors. But they also have an electrolyte layer in between the plates, akin to a battery. However, while batteries store energy chemically, supercapacitors store energy electrostatically, without the need for a chemical reaction to occur. This allows them to charge and discharge quickly, offering a power backup on short time scales that doesn’t significantly degrade over time. Siemens Energy is selling the E-statcom, a supercapacitor bank that is connected in parallel to the load at the level of the whole data center. The units can charge and discharge on millisecond time scales, cycle up to 75 megawatts each, and promise to last between 12 and 20 years. The company is currently looking for customers in the data-center space. Eaton is selling the XLHV (among other configurations), a supercapacitor bank the size of a single unit of a server rack (156 by 485 by 605 millimeters). One unit can dynamically deliver up to 420 kilowatts of power and can also last up to 20 years. Some of these products are already deployed at data centers.
And Delta Electronics is selling the Power Capacitance Shelf, a 1-rack-unit-size bank of lithium-ion capacitors capable of supporting a 15-kilowatt load for up to 5 seconds. “The capacitors Delta Electronics selected are kind of halfway between supercapacitors and lithium batteries,” says Jason Lee, global product manager for supercapacitors at Eaton. These and other such products can help smooth out load fluctuations on the grid. “That’s part of becoming a good grid citizen,” Lee says. “So instead of seeing all that fluctuation going back to the grid, we can take all the pulses and the low points and just smooth those out to where the utilities provide more or less average power.” This becomes particularly important when transitioning to renewables. The power supply of solar and wind power is more variable than that of oil and gas, as it depends on weather and other factors. This means it isn’t always easy to ramp the power supply up or down in response to a varying load. Predictable loads allow for better planning and provisioning, which makes implementing some kind of smoothing mechanism particularly important. However, Lee notes, this is not a panacea. The supercapacitors “have a niche. They’re not going to replace batteries everywhere. But these short-term events [like AI power bursts], that’s an ideal application for supercapacitors.”
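    To make the smoothing idea concrete, here is a minimal sketch in Python. The numbers are made-up illustrations, not specifications of the Siemens Energy, Eaton, or Delta products described above; the point is simply that a bank sized to absorb the difference between a cluster's bursts and lulls lets the grid supply a near-constant average.

```python
# Minimal sketch of grid-load smoothing with a supercapacitor bank.
# All numbers are illustrative assumptions, not vendor specifications.

BANK_ENERGY_J = 2.0e6               # usable energy stored in the bank (assumed)
BURST_KW, LULL_KW = 800.0, 300.0    # hypothetical cluster power in bursts vs. lulls
GRID_KW = (BURST_KW + LULL_KW) / 2  # steady draw we want the grid to see
DT_S = 0.1                          # simulation time step, seconds

stored_j = BANK_ENERGY_J / 2        # start half charged
for step in range(100):
    # The AI job alternates between a 1-second compute burst and a 1-second lull.
    load_kw = BURST_KW if (step // 10) % 2 == 0 else LULL_KW
    bank_kw = load_kw - GRID_KW            # >0: bank discharges, <0: bank recharges
    stored_j -= bank_kw * 1000.0 * DT_S    # update stored energy (kW -> W)
    stored_j = max(0.0, min(stored_j, BANK_ENERGY_J))
    if step % 10 == 0:
        print(f"t={step * DT_S:4.1f}s  load={load_kw:5.0f} kW  "
              f"grid={GRID_KW:5.0f} kW  bank={bank_kw:+6.0f} kW  "
              f"stored={stored_j / 1e6:4.2f} MJ")
```

    This toy model is just Lee's point in code: if the bank can soak up the pulses and fill in the low points, the utility only has to provide more or less the average power.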

  • Why Engineers Still Need the Humanities
    by Allison Marsh on 5. May 2025. at 15:00

    Since last September, I’ve been spending seven hours a day, five days a week happily researching the history of women in electrical engineering. So far I’ve uncovered the names of more than 200 women who contributed to electrical engineering, the first step in an eventual book project. No disrespect to Ada Lovelace, Grace Hopper, or Katherine Johnson, but there are many other women in engineering you should know about. I’m doing my research at the Linda Hall Library of Science, Engineering, and Technology, in Kansas City, Mo., and I’m currently working through the unpublished papers of the American Institute of Electrical Engineers (a predecessor of today’s IEEE). These papers consist of conference presentations and keynote addresses that weren’t included in the society’s journals. They take up about 14 shelves in the closed stacks at the Linda Hall. Most of the content is unavailable on the Internet or anywhere else. No amount of Googling or prompting ChatGPT will reveal this history. The only way to discover it is to go to the library in person and leaf through the papers. This is what history research looks like. It is time intensive and can’t be easily replaced by AI (at least not yet). Up until 2 April, my research was funded through a fellowship with the National Endowment for the Humanities. My fellowship was supposed to run through mid-June, but the grant was terminated early. Maybe you don’t care about my research, but I’m not alone. Almost all NEH grants were similarly cut, as were thousands of research grants from the National Science Foundation, the National Institutes of Health, the Institute of Museum and Library Services, and the National Endowment for the Arts. Drastic research cuts have also been made or are expected at the Departments of Defense, Energy, Commerce, and Education. I could keep going. This is what history research looks like. There’s been plenty of outrage all around, but as an engineer turned historian who now studies engineers of the past, I have a particular plea: Engineers and computer scientists, please defend humanities research just as loudly as you might defend research in STEM fields. Why? Because if you take a moment to reflect on your training, conduct, and professional identity, you may realize that you owe much of this to the humanities. Historians can show how the past has shaped your profession; philosophers can help you think through the social implications of your technical choices; artists can inspire you to design beautiful products; literature can offer ideas on how to communicate. And, as I have discovered while combing through those unpublished papers, it turns out that the bygone engineers of the 20th century recognized this strong bond to the humanities. Engineering’s Historical Ties to the Humanities Granted, the humanities have a few thousand years on engineering when it comes to formal study. Plato and Aristotle were mainly into philosophy, even when they were chatting about science-y stuff. Formal technical education in the United States didn’t begin until the founding of the U.S. Military Academy, in West Point, N.Y., in 1802. Two decades later came what is now Rensselaer Polytechnic Institute, in Troy, N.Y. Dedicated to “the application of science to the common purposes of life,” Rensselaer was the first school in the English-speaking world established to teach engineering—in this case, civil engineering. 
Electrical engineering, my undergraduate field of study, didn’t really get going as an academic discipline until the late 19th century. Even then, most electrical training took the form of technical apprenticeships. One consistent trend throughout the 20th century is the high level of anxiety over what it means to be an engineer. In addition to looking at the unpublished papers, I’ve been paging through the entire run of journals from the AIEE, the Institute of Radio Engineers (the other predecessor of the IEEE), and the IEEE. And so I have a good sense of the evolution of the profession. One consistent, yet surprising, trend throughout the 20th century is the high level of anxiety over what it means to be an engineer. Who exactly are we? Early on, electrical engineers looked to the medical and legal fields to see how to organize, form professional societies, and create codes of ethics. They debated the difference between training for a technician versus an engineer. They worried about being too high-minded, but also being seen as getting their hands dirty in the machine shop. During the Great Depression and other times of economic downturn, there were lengthy discussions on organizing into unions. To cement their status as legitimate professionals, engineers decided to make the case that they, the engineers, are the keystone of civilization. A bold claim, and I don’t necessarily disagree, but what’s interesting is that they linked engineering firmly to the humanities. To be an engineer, they argued, meant to accept responsibility for the full weight of human values that underlie every engineering problem. And to be a responsible member of society, an engineer needed formal training in the humanities, so that he (and it was always he) could discover himself, identify his place within the community, and act accordingly. Thomas L. Martin Jr., dean of engineering at the University of Arizona, endorsed this engineering curriculum, in which the humanities accounted for 24 of 89 credits. AIEE What an Engineering Education Should Be Here’s what that meant in practice. In 1909, none other than Charles Proteus Steinmetz advocated for including the classics in engineering education. An education too focused on empirical science and engineering was “liable to make the man one sided.” Indeed, he contended, “this neglect of the classics is one of the most serious mistakes of modern education.” RELATED: The First Virtual Meeting Was in 1916 In the 1930s, William Wickenden, president of the Case School of Applied Science at Case Western Reserve University, in Cleveland, wrote an influential report on engineering education, in which he argued that at least one-fifth of an engineering curriculum should be devoted to the study of the humanities and social sciences. After World War II and the deployment of the atomic bomb, the start of the Cold War, and the U.S. entry into the Vietnam War, the study of the humanities within engineering seemed even more pressing. In 1961, C.R. Vail, a professor at Duke University, in Durham, N.C., railed against “culturally semiliterate engineering graduates who...could be immediately useful in routine engineering activity, but who were incapable of creatively applying fundamental physical concepts to the solution of problems imposed by emerging new technologies.” In his opinion, the inclusion of a full year of humanities coursework would stimulate the engineer’s aesthetic, ethical, intellectual, and spiritual growth. 
Thus prepared, future engineers would be able “to recognize the sociological consequences of their technological achievements and to feel a genuine concern toward the great dilemmas which confront mankind.” In a similar vein, Thomas L. Martin Jr., dean of engineering at the University of Arizona, proposed an engineering curriculum in which the humanities and social sciences accounted for 24 of the 89 credits. Many engineers of that era thought it was their duty to stand up for their beliefs. Engineers in industry also had opinions on the humanities. James Young, an engineer with General Electric, argued that engineers need “an awareness of the social forces, the humanities, and their relationship to his professional field, if he is to ascertain areas of potential impact or conflict.” He urged engineers to participate in society, whether in the affairs of the neighborhood or the nation. “As an educated man,” the engineer “has more than casual or average responsibility to protect this nation’s heritage of integrity and morality,” Young believed. Indeed, many engineers of that era thought it was their duty to stand up for their beliefs. “Can the engineering student ignore the existence of moral issue?” asked the UCLA professors D. Rosenthal, A.B. Rosenstein, and M. Tribus in a 1962 paper. “We must answer, ‘he cannot’; at least not if we live in a democratic society.” Of course, here in the United States, we still live in a democratic society, one that constitutionally protects the freedoms of speech, assembly, and petitioning the government for a redress of grievances. And yet, anecdotally, I’ve observed that engineers today are more reticent than others to engage in public discourse or protest. Will that change? Since the Eisenhower era, U.S. universities have relied on the federal funding of research, but in the past few weeks and months, that relationship has been upended. I wonder if today’s engineers will take a cue from their predecessors and decide to take a stand. Or perhaps industry will choose to reinvest in fundamental and long-term R&D the way they used to in the 20th century. Or maybe private foundations and billionaire philanthropists will step up. Nobody can say what will happen next, but I’d like to think this will be one of those times when the past is prologue. And so I’ll repeat my plea to my engineering colleagues: Please don’t turn your back on the humanities. Embrace the moral center that your professional forebears believed all engineers should foster throughout their careers. Stand up for both engineering and the humanities. They are not separate and separable enterprises. They are beautifully entangled and dependent on each other. Both are needed for civilization to flourish. Both are needed for a better tomorrow. References With the exception of Charles Proteus Steinmetz’s “The Value of the Classics in Engineering Education,” which is available in IEEE Xplore, and William Wickenden’s Report of the Investigation of Engineering Education, which is available on the Internet Archive, all of the papers and talks quoted above come from the unpublished papers of the AIEE and unpublished papers of the IEEE. The former Engineering Societies Library, which was based in New York City, bound these papers into volumes. They aren’t digitized and probably never will be; you’ll have to go to the Linda Hall Library in Kansas City, Mo., to check them out. 
But if you’d like to learn more about how past engineers embraced the humanities, check out Matthew Wisnioski’s book Engineers for Change: Competing Visions of Technology in 1960s America (MIT Press, 2016) and W. Patrick McCray’s Making Art Work: How Cold War Engineers and Artists Forged a New Creative Culture (MIT Press, 2020).

  • China Plans to Bring Back Samples of Venusian Clouds
    by Andrew Jones on 5. May 2025. at 13:00

    Sometime in the next 10 years, a Chinese mission aims to do what’s never been done before: collect cloud particles from Venus and bring them home. But achieving that goal will mean overcoming one of the most hostile environments in the solar system—the planet’s cloaking clouds are primarily made up of droplets of sulfuric acid. When China unveiled a long-term road map for space science and exploration last fall, its second phase (2028–2035) included an unprecedented Venus atmosphere sample-return mission. As is typical for Chinese space missions, few details were made public. But information in a recent presentation shared on Chinese social media gives us new insight into early mission plans. The slide shows that the key scientific questions being targeted include the potential for life on Venus, the planet’s atmospheric evolution, and the mystery of UV absorbers in its clouds. The mission will carry a sample-collection device as well as in situ atmospheric analysis equipment. The search for life is driven, in part, by the interest generated by a controversial study published in Nature Astronomy in 2020 that suggested that traces of phosphine in Venus’s atmosphere could be an indication of a biological process. Venus Sample-Return Mission Challenges Sara Seager, a professor at the Massachusetts Institute of Technology, led a team that put together a Venus atmosphere sample-return mission proposal in 2022. NASA did not select the proposal, but her team has carried on working, including experiments with concentrated sulfuric acid. “Although our DNA cannot survive, we have started to show that [a] growing number of organic molecules, biomolecules, are stable. And so we’re envisioning there could be life on Venus,” Seager told IEEE Spectrum. Mission proposals like MIT’s offer a window into the daunting technical challenges that China’s team is facing. Getting to Venus, entering its thick atmosphere, collecting samples, and getting back up to a waiting orbiter in Venus orbit that can return the samples to Earth all come with their own challenges. But the potential scientific payoff clearly makes these hurdles worth clearing. The MIT team proposed a Teflon-coated balloon capable of resisting acid corrosion that would float through the sky without the need for propulsion and the associated fuel and mass. By contrast, China’s preliminary render shows a winged vehicle, suggesting it is pursuing a different architectural path. Rachana Agrawal, a postdoctoral associate at MIT, says a couple of the main challenges are related to operations within the clouds. One is navigating through the dense clouds, which are typically opaque to visible light. While this isn’t critical during sampling, knowing exactly where you are is essential when it comes to using a rocket to return samples, because the rocket needs to enter a precise orbit. “On Venus, we don’t have GPS in the clouds. The rocket cannot see the stars or the surface, and Venus doesn’t have a magnetic field,” Agrawal says. One answer would be to set up a satellite navigation system for Venus to assist the mission, though that would add additional launches and complexity. An ascent vehicle will be needed to get the sample canister into orbit to rendezvous and dock with a waiting orbiter. A two-stage solid propellant rocket—similar to that planned for Mars sample-return mission architectures—would be one of the simpler options.
But operating remotely or autonomously, millions of kilometers from Earth, in unknown conditions, will be exacting. “We don’t know much about the atmosphere, so we don’t know what the local conditions are. So it could be a very dynamic environment that the rocket has to launch from,” says Agrawal, adding that launches on Earth are often scrubbed due to high winds. China’s scientists and engineers will need to answer all these questions to pull off its own sample return. It has already demonstrated success with its Chang’e-5 and 6 lunar sample returns. It is set to launch the Tianwen-2 near-Earth asteroid sampling mission in late May this year and is targeting a late 2028 launch for its ambitious Tianwen-3 Mars sample-return mission. The experience and tech from these efforts will be instructive for Venus. MIT’s proposed mission design would require 22 tons of spacecraft, with the ultimate aim of delivering 10 grams of atmospheric samples to Earth. It’s likely the Chinese design would offer a similar ratio. However, even such a relatively small amount of material could be revolutionary in our understanding of Venus and our solar system. “I’m superexcited about this,” says Seager. “Even if there’s no life, we know there’s interesting organic chemistry, for sure. And it would be amazing to get samples in hand to really solve some of the big mysteries on Venus.”
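    As a rough sanity check on that ratio, the arithmetic below uses only the MIT figures quoted above (22 tons of spacecraft, 10 grams of sample); treating the Chinese design as similar is the article's assumption, not a published figure.

```python
# Back-of-the-envelope mass budget for an atmosphere sample-return mission,
# using the MIT proposal's figures quoted above.
spacecraft_kg = 22_000   # ~22 tons of spacecraft
sample_g = 10            # grams of cloud material returned to Earth

kg_per_gram = spacecraft_kg / sample_g
mass_ratio = spacecraft_kg * 1000 / sample_g
print(f"~{kg_per_gram:,.0f} kg of spacecraft per gram of sample")  # ~2,200 kg/g
print(f"mass ratio of roughly {mass_ratio:,.0f} to 1")             # ~2,200,000:1
```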

  • Clouds Loom Over Europe's Nuclear Titan
    by Peter Fairley on 4. May 2025. at 13:00

    Ukraine’s Zaporizhzhia nuclear power plant, the largest in Europe, has provoked anxiety ever since Russian troops captured it barely two weeks into the 2022 invasion. But recently, after three years of occupation and frequent near misses that threatened radiological disaster, a promise of sunnier days suddenly popped into view, albeit briefly. In a 19 March call, U.S. President Donald Trump and Ukrainian President Volodymyr Zelensky discussed American protection and investment for Ukraine’s nuclear power—or even ownership, according to a White House summary. International Atomic Energy Agency (IAEA) director Rafael Grossi upped the ante one week later, telling Reuters that Zaporizhzhia’s reactors could restart within “months” of a ceasefire, and that the plant could be fully operational in a year. The promise of a rapid restart at Zaporizhzhia, which has six 950-megawatt reactors, quickly faded amid daily and deadly Russian attacks on Ukrainian cities. Nevertheless, the chief executive of Energoatom, Ukraine’s nuclear power utility, essentially endorsed Grossi’s timeline for a demilitarized scenario in an interview this month, even as he acknowledged serious technical challenges, including deferred maintenance and a dearth of cooling water. In fact, according to Ukrainian, European, and U.S.-based experts interviewed by IEEE Spectrum, the challenges facing a Zaporizhzhia Nuclear Power Plant (ZNPP) revival could go far deeper. Those experts say that Russia’s operation of the plant may have so badly damaged it that repairs could take years and cost billions of dollars. Particular problems include potential tilting of the reactor buildings and doubts about the integrity of the complex and relatively fragile steam generators for the plant’s pressurized, light-water reactors. Even if there is a lasting cessation of hostilities, restarting ZNPP’s reactor-generator units may cost more than Ukraine is able to spend. And at least some Ukrainian energy experts say the country should focus instead on building smaller, decentralized power plants. Volodymyr Kudrytskyi, the former director of Ukraine’s power grid operator, said as much in March during a forum at MIT. Kudrytskyi said big nuclear power plants concentrate too much power at a few spots in the grid: “We are able to use this Soviet legacy to survive, but this is not the way forward.” Questionable Operating Practices May Have Damaged the Plant Since Russia’s full-scale invasion of Ukraine, ZNPP has experienced a wide range of unprecedented insults. During its armed seizure in March 2022, Russian forces fired on the plant. That October, Russia began bombing the Ukrainian power system. Those attacks repeatedly disconnected ZNPP from Ukraine’s grid, forcing the use of diesel generators to power the pumps that circulate water over spent fuel, keeping it from overheating and potentially melting down and releasing large amounts of radiation. Russia’s attacks have destroyed some equipment and placed strain on others, but special concern arises from two unprecedented long-term operating modes: hot shutdown and cold shutdown. ZNPP is the first nuclear power plant in the world to persist in a condition of hot shutdown, in which the plant operates at minimum output. Sustained hot shutdown, for months on end, violated ZNPP’s license. But Russian plant managers insisted that it provided steam needed to sustain critical equipment, such as the water treatment plant, as well as heating for the nearby city of Enerhodar, also under Russian occupation.
    Ukrainian and international safety experts argued instead that hot shutdown unnecessarily increased the risk of an accident causing a regional catastrophe, since hot reactors melt down more quickly after cooling systems fail. Ukrainians saw the enhanced risk as a form of nuclear blackmail, arguing that Russian forces could deliberately unleash a radiological incident if they were forced to retreat from the area. In April 2024 the plant’s Russian management finally relented, placing the last operating generating unit into cold shutdown. Cold shutdown is a safer mode for the plant, but several aspects of this particular cold shutdown are highly unusual and are provoking concern. These concerns stem from a complex combination of chemistry and physics. During cold shutdown the cooling flows are low—nearly stationary in some loops—and also relatively cool, in some cases dropping below 35 °C. The result is a coolant with higher density. Ukrainian nuclear expert Georgiy Balakan says that high-density coolant puts greater mechanical load on the cooling pipes and the delicate tubes within the steam generators. That elevated load, in turn, increases strain on the many welds, as well as on the steel pipes themselves, because their metal is less ductile at lower temperatures, according to Balakan. Low temperatures and flows, meanwhile, also affect the boric acid that’s added to the primary cooling water to regulate the reactor’s fission reactions, allowing it to crystallize in sensitive areas of the primary circuit pipes and in the steam generators. Efforts to purge those crystals can then exacerbate the damage. If the damage perforates the steam generator tubes, borated water can leak through and attack the secondary cooling circuits’ steel, which is of a lower grade. An office building at the Zaporizhzhia nuclear power plant in southern Ukraine was photographed on 14 June 2023, 15 months after the facility was captured by Russian troops. Olga Maltseva/AFP/Getty Images Steam Leaks or Groundwater Extraction Could Doom Plant Russian officials controlling ZNPP have reported a series of leaks to IAEA observers, including steam generator leaks in half of its power units. Balakan, a former special advisor to the president of Energoatom, the Ukrainian nuclear utility, calls those leaks telltale signs of the physical and chemical assault on the plant’s equipment. “The Russians acted as if they could operate the water-chemical regime for an unlimited time,” he says. Independent experts contacted by IEEE Spectrum affirmed Balakan’s analysis. They include a senior U.S. nuclear engineer familiar with Soviet-designed reactors, who spoke to Spectrum on condition of anonymity because they feared retribution from national authorities, and a Ukrainian engineer who is not authorized to speak to the press. Steam-generator issues can shutter a nuclear plant for good. That scenario played out in California in 2013, when utility Southern California Edison scrapped its only nuclear power plant after botched steam generator repairs that cost nearly $2 billion ($2.7 billion in 2025 dollars). Another set of potentially costly issues stems from the operators’ shift to groundwater for cooling following the demolition of the Kakhovka Dam in June 2023. Potential implications include the impairment of a critical safety system: the reactor control rods. After the draining of the Kakhovka Reservoir eliminated ZNPP’s original source of cooling water, Rosatom, the Russian nuclear generation and technology conglomerate, drilled 11 wells on site.
    Withdrawal of groundwater is cause for concern, according to Aybars Gürpinar, a former top safety official at the International Atomic Energy Agency (IAEA). “Especially when there is significant ground water extraction, settlement is always a possibility,” wrote Gürpinar, now a consultant based in Vienna and Brussels, in an email to Spectrum. Subsidence has caused multiple expensive headaches for Soviet-designed VVER-1000 reactors, including ZNPP’s. Nearly 20 years ago, Energoatom had to attach counterweights to arrest the tilting of several reactor buildings settling into the site’s sandy soil, according to a 2024 LinkedIn post by Balakan. In 2011, Rosatom told then-President Dmitry Medvedev it had plans to fix the “progressing tilt” at the Balakovo and Kalinin power plants. Gürpinar says tilting could crack ZNPP’s concrete base and interfere with the reactor control rods, slowing their gravity-driven drop into the reactor to squelch fission reactions during station blackouts. He says the rods could even get “stuck,” forcing operators to rely on boric acid to control the reactor and leaving them without backup control. In a statement to Spectrum, Rosatom asserted: “No ground level changes or signs of subsidence have been observed.” Restarting the Reactors Would Require Solving Multiple Problems Addressing structural damage is only one of many challenges to safely restarting ZNPP’s reactors. Last month, ZNPP’s Russian-appointed director Yuriy Chernichuk said in an interview for Rosatom’s corporate magazine that job one is shoring up the cooling water supply, because restarting the reactors will generate thousands of times more heat. Rosatom says it plans to tap the Dnieper River for this purpose. Chernichuk went on to provide a laundry list of additional challenges, including: • Repairing or replacing upgraded Western equipment subject to international sanctions; • Securing operating licenses from Russia’s nuclear regulator, since Ukrainian unit licenses begin to expire this year; • Rebuilding personnel from ZNPP’s current skeleton staff; and • Building transmission links to Russia’s grid. Chernichuk said that “the most realistic option” is to launch Units 2 and 6 first. Their reactors are loaded with Russian-produced fuel, whereas the other reactors contain fuel produced by U.S.-based Westinghouse, for which Rosatom has neither license nor experience. If Ukraine reclaims the plant, Energoatom might more easily address its issues. It could start with Units 1 and 3, which have fresher fuel. Energoatom also better understands ZNPP’s equipment, and it has access to Western gear and expertise. Similar advantages could flow to the U.S. if it could pressure Russia to give up the plant. However, Zelensky has rejected U.S. ownership. Balakan projects that Energoatom would need one year to restart just one power unit in a best-case scenario where ZNPP is “under full control of Ukraine” and equipment damage is not severe. But show-stoppers could still emerge. If the steam generators need extensive repairs or replacement, it might not make sense to proceed—new steam generators could cost over $1 billion per unit, judging by the experience of Southern California Edison. “They’re not only expensive. They’re very complicated gadgets and they’re hard to fix,” says the U.S. expert who spoke with Spectrum. Unfortunately, only Russian firms manufacture the steam generators employed at ZNPP. And those might not be available at any price.
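    Returning to Balakan's point about colder, denser coolant: a rough, illustrative calculation shows why density alone adds mechanical load. The loop volume and densities below are round, assumed numbers for a VVER-1000-class primary circuit, not measurements from ZNPP.

```python
# Rough illustration of why cold, nearly stagnant coolant loads the primary
# circuit more heavily than coolant at normal operating temperature.
# Densities are approximate values for pressurized water; the loop volume
# is an assumed round number, not a ZNPP figure.

LOOP_VOLUME_M3 = 300.0   # assumed primary-circuit water volume per unit
RHO_HOT = 730.0          # kg/m^3, water near ~300 degrees C at ~15 MPa (approx.)
RHO_COLD = 994.0         # kg/m^3, water at ~35 degrees C (approx.)

mass_hot = RHO_HOT * LOOP_VOLUME_M3
mass_cold = RHO_COLD * LOOP_VOLUME_M3
extra_tonnes = (mass_cold - mass_hot) / 1000.0

print(f"Hot operation:  ~{mass_hot / 1000:.0f} t of water in the loop")
print(f"Cold shutdown:  ~{mass_cold / 1000:.0f} t of water in the loop")
print(f"Extra mass carried by the same pipes and welds: ~{extra_tonnes:.0f} t")
```

    Under these assumed numbers, the same piping and welds carry on the order of tens of tonnes more water during a cold, low-flow shutdown, at temperatures where the steel is also less ductile.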

  • Video Friday: Robots for Extreme Environments
    by Evan Ackerman on 2. May 2025. at 16:30

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C. ICRA 2025: 19–23 May 2025, ATLANTA London Humanoids Summit: 29–30 May 2025, LONDON IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN 2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON RSS 2025: 21–25 June 2025, LOS ANGELES ETH Robotics Summer School: 21–27 June 2025, GENEVA IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025: 5–7 September 2025, SHENZHEN CoRL 2025: 27–30 September 2025, SEOUL IEEE Humanoids: 30 September–2 October 2025, SEOUL World Robot Summit: 10–12 October 2025, OSAKA, JAPAN IROS 2025: 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! The LYNX M20 series represents the world’s first wheeled-legged robot built specifically for challenging terrains and hazardous environments during industrial operation. Featuring lightweight design with extreme-environment endurance, it conquers rugged mountain trails, muddy wetlands and debris-strewn ruins—pioneering embodied intelligence in power inspection, emergency response, logistics, and scientific exploration. [ DEEP Robotics ] The latest OK Go music video includes lots of robots. And here’s a bit more on how it was done, mostly with arms from Universal Robots. [ OK Go ] Despite significant interest and advancements in humanoid robotics, most existing commercially available hardware remains high-cost, closed-source, and nontransparent within the robotics community. This lack of accessibility and customization hinders the growth of the field and the broader development of humanoid technologies. To address these challenges and promote democratization in humanoid robotics, we demonstrate Berkeley Humanoid Lite, an open-source humanoid robot designed to be accessible, customizable, and beneficial for the entire community. [ Berkeley Humanoid Lite ] I think this may be the first time I’ve ever seen a pedestal-mounted Atlas from Boston Dynamics. [ NVIDIA ] We are increasingly adopting domestic robots (Roomba, for example) that provide relief from mundane household tasks. However, these robots usually only spend little time executing their specific task and remain idle for long periods. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles. [ University of Bath ] Whenever I see a soft robot, I have to ask, “Okay, but how soft is it really?” And usually, there’s a pump or something hidden away off-camera somewhere. So it’s always cool to see actually soft robotics actuators, like these, which are based on phase-changing water. [ Nature Communications ] via [ Collaborative Robotics Laboratory, University of Coimbra ] Thanks, Pedro! Pruning is an essential agricultural practice for orchards. Robot manipulators have been developed as an automated solution for this repetitive task, which typically requires seasonal labor with specialized skills. 
Our work addresses the behavior planning challenge for a robotic pruning system, which entails a multilevel planning problem in environments with complex collisions. In this article, we formulate the planning problem for a high-dimensional robotic arm in a pruning scenario, investigate the system’s intrinsic redundancies, and propose a comprehensive pruning workflow that integrates perception, modeling, and holistic planning. [ Paper ] via [ IEEE Robotics and Automation Magazine ] Thanks, Bram! Watch the Waymo Driver quickly react to potential hazards and avoid collisions with other road users, making streets safer in cities where it operates. [ Waymo ] This video showcases some of the early testing footage of HARRI (High-speed Adaptive Robot for Robust Interactions), a next-generation proprioceptive robotic manipulator developed at the Robotics & Mechanisms Laboratory (RoMeLa) at University of California, Los Angeles. Designed for dynamic and force-critical tasks, HARRI leverages quasi-direct drive proprioceptive actuators combined with advanced control strategies such as impedance control and real-time model predictive control (MPC) to achieve high-speed, precise, and safe manipulation in human-centric and unstructured environments. [ Robotics & Mechanisms Laboratory ] Building on reinforcement learning for natural gait, we’ve upped the challenge for Adam: introducing complex terrain in training to adapt to real-world surfaces. From steep slopes to start-stop inclines, Adam handles it all with ease! [ PNDbotics ] ABB Robotics is serving up the future of fast food with BurgerBots—a groundbreaking new restaurant concept launched in Los Gatos, Calif. Designed to deliver perfectly cooked, made-to-order burgers every time, the automated kitchen uses ABB’s IRB 360 FlexPicker and YuMi collaborative robot to assemble meals with precision and speed, while accurately monitoring stock levels and freeing staff to focus on customer experience. [ Burger Bots ] Look at this little guy, such a jaunty walk! [ Science Advances ] General-purpose humanoid robots are expected to interact intuitively with humans, enabling seamless integration into daily life. Natural language provides the most accessible medium for this purpose. In this work, we present an end-to-end, language-directed policy for real-world humanoid whole-body control. [ Hybrid Robotics ] It’s debatable whether this is technically a robot, but sure, let’s go with it, because it’s pretty neat—a cable car of sorts consisting of a soft twisted ring that’s powered by infrared light. [ North Carolina State University ] Robert Playter, CEO of Boston Dynamics, discusses the future of robotics amid rising competition and advances in artificial intelligence. [ Bloomberg ] AI is at the forefront of technological advances and is also reshaping creativity, ownership, and societal interactions. In episode 7 of Penn Engineering’s Innovation & Impact podcast, host Vijay Kumar, Nemirovsky Family dean of Penn Engineering and professor in mechanical engineering and applied mechanics, speaks with Meta’s chief AI scientist and Turing Award winner Yann LeCun about the journey of AI, how we define intelligence, and the possibilities and challenges it presents. [ University of Pennsylvania ]

  • Tiny “Fans-on-Chips” Could Cool Big Data Centers
    by Samuel K. Moore on 2. May 2025. at 13:00

    In data centers, pluggable optical transceivers convert electronic bits to photons, fling them across the room, and then turn them back to electronic signals, making them a technological linchpin to controlling the blizzard of data used in AI. But the technology consumes quite a bit of power. In a data center containing 400,000 GPUs, Nvidia estimates that optical transceivers burn 40 megawatts. Right now, the only way to deal with all that heat is to hope you can thermally connect these transceivers to the switch system’s case and cool that. It’s not a great solution, says Thomas Tarter, principal thermal engineer at startup xMEMS Labs, but because these transceivers are about the size of an overlarge USB stick, there’s no way to stick a conventional cooling fan in each. Now, xMEMS says it has adapted its upcoming ultrasonic microelectromechanical (MEMS) “fan-on-a-chip” to fit inside a pluggable optical transceiver so it drives air through and cools the transceiver’s main digital part, the digital signal processor (DSP). Keeping the DSP cool is critical to its longevity, says Tarter. At upwards of US $2,000 per transceiver, getting an extra year or two from a transceiver is well worth it. Cooling should also improve the integrity of the transceivers’ signals. Unreliable links are blamed for extending already lengthy training runs for new large language models. xMEMS’ Cooling Tech Finds a New Home The xMEMS chip-cooling tech, which was unveiled in August 2024, builds on the company’s earlier product, solid-state microspeakers for earbuds. It uses piezoelectric materials that can change shape at ultrasound frequencies to pump 39 cubic centimeters of air per second through a chip just about a millimeter high and less than a centimeter on a side. Smartphones, which are too slim to carry a fan, were the first obvious application for the MEMS cooler, but cooling the fast-growing data-center-scale AI systems seemed out of reach for MEMS technology, because it can’t come near to matching the liquid cooling systems removing thousands of watts of heat from GPU servers. “We were pleasantly surprised by the approach by data-center customers,” says Mike Housholder, xMEMS vice president of marketing. “We were focused on low power. So we didn’t think we had a slam dunk.” Pluggable optical transceivers turn out to be a data-center technology that is squarely in the fan-on-a-chip’s wheelhouse. Today, heat from a transceiver’s DSP, photonics IC, and lasers is thermally coupled to the network switch computers they are plugged into. (These usually sit at the top of a rack of computers.) Then air moving over fins built into the switch’s face removes the heat. In collaboration with partners they would not name, xMEMS began exploring how to get air flowing through the transceiver. These parts consume 18 watts or more. But by situating the company’s MEMS chip within an airflow channel that is thermally connected to the transceiver chips but physically isolated from them, the company predicts it will be able to drop the DSP’s temperature by more than 15 percent. xMEMS has been making prototype MEMS chips at Stanford’s nanofabrication facility, but it will have its first production silicon from TSMC in June, says Housholder. The company expects to be in full production in the first quarter of 2026. “That aligns well with our early customers,” he says. Transceiver shipments are growing fast, according to the Dell’Oro Group. 
The market analyst predicts that shipments of 800-gigabit-per-second and 1.6-terabit-per-second parts will grow at more than 35 percent per year through 2028. Other innovations in optical communications that could affect heat and power are also in the offing. In March, Broadcom unveiled a new DSP that could lead to a more than 20 percent power reduction for 1.6 Tb/s transceivers, due in part to the use of a more advanced chip-manufacturing process. The latter company and Nvidia have separately developed network switches that do away with pluggable transceivers altogether. These new “co-packaged optics” do the optical/electronic conversion on silicon within the switch chip’s package. But Tarter, who has been working on cooling chips since the 1980s, predicts there will be more applications both inside and outside the data center for the MEMS chip to come. “We’re learning a lot about applications,” he says. “I’ve come up with 20 or 30 basic applications for it, and hopefully that inspires designers to say ‘Oh, this is how I can use this in my system.’”
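    Nvidia's 40-megawatt figure is easier to grasp per device. The quick division below is a back-of-the-envelope sketch using only the numbers quoted in this article; the transceivers-per-GPU estimate that falls out of it is an inference for illustration, not a vendor figure.

```python
# Back-of-the-envelope scale check using the figures quoted above.
total_optics_w = 40e6      # ~40 MW burned by pluggable optics (Nvidia estimate)
gpus = 400_000             # in a data center of 400,000 GPUs
transceiver_w = 18.0       # ~18 W per pluggable transceiver
transceiver_cost = 2_000   # upwards of US $2,000 per transceiver

per_gpu_w = total_optics_w / gpus
print(f"Optics power per GPU: ~{per_gpu_w:.0f} W")                        # ~100 W
print(f"Implied transceivers per GPU: ~{per_gpu_w / transceiver_w:.1f}")  # ~5.6
print(f"Optics hardware per GPU: ~US ${per_gpu_w / transceiver_w * transceiver_cost:,.0f}")
```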

  • 5 Technologies That Could Combat Antimicrobial Resistance
    by Kathy Pretz on 1. May 2025. at 18:00

    The flu, measles, pneumonia, and other microbial infections once were easy to treat with antibiotic, antifungal, and antiviral medications. The conditions have become more resistant to drugs, however, increasing the chances of deadly outcomes caused by bacterial, viral, fungal, and parasitic infections. Antimicrobial resistance (AMR) caused more than 1 million deaths in 2021, according to a 2024 report published in The Lancet. The World Health Organization declared in 2023 that AMR had become a major global health threat. AMR can be blamed on a number of things including the overuse of antibiotics in people, animals, and plants; inadequate sanitation; and a lack of new medications. Other factors include ineffective prevention measures and a dearth of new tools to detect infections. To discuss how technology can assist with preventing the spread of AMR, the Engineering Research Visioning Alliance held a two-day event last year, attracting more than 50 researchers, industry leaders, and policymakers. The ERVA, funded by the U.S. National Science Foundation, identifies areas that address national and global needs that any parties that fund research—companies, government agencies, and foundations—should consider. The alliance has more than 20 affiliate partners including IEEE. “ERVA is not necessarily about finding a solution tomorrow,” says Anita Shukla, who chaired the February 2024 event. “It’s about creating long-term research directions that may help minimize, mitigate, or eradicate problems over the long term. We’re enabling research or ideas for research.” Shukla, a professor of engineering at Brown University, in Providence, R.I., researches biomaterials for applications in drug delivery, including the treatment of bacterial and fungal infections. The alliance recently released “Engineering Opportunities to Combat Antimicrobial Resistance.” The report identified five grand challenges for researchers: diagnostic biosensors and wearables, engineered antimicrobial surfaces, smart biomaterials, cell engineering, and advanced modeling approaches. Biosensors to speed up detection Faster, more accurate, and less expensive diagnostic tools and wearables are needed to better detect infections, the report says. It suggests the development of diagnostic biosensors, which could detect specific components of pathogens within a sample. The biosensors could collect the sample from the patient in a minimal or noninvasive way, according to the report. The traditional method to find out if someone has an infection is to collect samples of their cells, tissue, blood, mucus, or other bodily fluids and send them to a laboratory for analysis. Depending on the type of infection and test, it can take a few days to get the results. The alliance suggested the development of diagnostic biosensors that could detect bacteria, viruses, fungi, and parasitic pathogens within the sample on-site. Results need to be provided quickly—ideally in a few hours or less, the report says—in order to reduce the spread of the infection, lessen recovery time for patients, and lower health care costs. But first, research is needed to develop biosensors that can detect low levels of infection-related biomarkers from patient samples, the report says. A biomarker is a measurable indicator, such as a gene, that can provide information about a person’s health. Currently it can take several days to weeks for a person’s immune system to produce enough antibodies to be detected, delaying a diagnosis. 
“I think IEEE members have the right skill set and could make quite a difference if they, along with other engineers, work together to solve this very complex problem.” —Anita Shukla, engineering professor at Brown University, in Providence, R.I. The authors call for engineers, clinicians, and microbiologists to collaborate on creating devices and designing them for use in clinical settings. The advancements, the report says, can be incorporated into existing smart devices, or new ones could be designed that are infection-specific. Another area that should be explored, it says, is developing wearable devices to allow patients to accurately diagnose themselves. “Engineers, particularly electrical engineers who have a lot of knowledge on various biosensor design and wearable technologies, are the individuals who need to innovate in this space and produce these technologies,” Shukla says. Cleaner surfaces to stop germ propagation One way infections spread is from bacteria-contaminated surfaces including hospital beds, medical equipment, doorknobs, and desks. No matter how stringent hospital protocols are for sterilization, sanitation, and disinfection, bacteria attach to most things. The ERVA report notes that more than 90 percent of curtains used by hospitals for privacy between patients in shared rooms are contaminated after one week. The authors say it’s imperative to develop antimicrobial surfaces that can kill bacterial and fungal pathogens on contact. Also needed are materials that release antimicrobial agents when touched, including metals, polymers, and composites. New engineered antimicrobial surfaces have to be durable enough to withstand the sanitation and sterilization methods used in hospitals and other clinical settings, Shukla says. Other locations where antimicrobial surfaces should be installed, she adds, include schools and office buildings. Smarter materials to deliver medication Dressings and other biomaterial-based drug delivery methods used today to deliver antibiotics directly to a potential infection site aren’t advanced enough to control the amount of medication they release, according to the report. That can lead to overuse of the drug and can exacerbate AMR, the report says. Smarter biomaterials-based delivery systems that release antimicrobials are an urgent area of research, the authors say. Nano- and microscale particles and polymer gels that can release drugs only when a bacterial infection is present are a few examples cited in the report. “These are materials that can release therapeutics on demand,” Shukla says. “You expose the infection to the therapeutic only when it’s needed so that you’re not introducing more of the drug [than required]—which potentially could accelerate resistance development.” The materials also should contain components that sense the presence of a bacteria or fungus and signal whether the patient’s immune system is actively fighting the infection, the report says. The germ’s presence would trigger an encapsulated antibiotic or antifungal to be released at the infection site. There’s an opportunity for electrical engineers to develop components that would be incorporated into the smart material and respond to electric fields to trigger drug release or help detect infection, Shukla says. Drug-free cellular engineering Another area where electrical engineers could play a big role, Shukla says, involves immune cells. 
A potential alternative to antibiotics, engineered white blood cells could enhance the body’s natural response for fighting off infections, according to the report. Such a drug-free approach would require advances in cellular engineering, however, as well as a better understanding of genetically manipulating cells. For people with persistent infections, it’s important to study long-term interactions between engineered immune cells and bacteria, the report says. Research into creating engineered microbes with antimicrobial activity could help reduce antibiotic use and might prevent infections, it says. Using advanced modeling to develop new drugs The alliance says significant research is needed for developing computational modeling. The technology could be used to rapidly develop complex bacterial infection models to evaluate the effectiveness of new antimicrobial drugs and therapeutics. “Modeling has the opportunity to speed up the development of new drugs and potentially predict the outcomes of new treatments, all in a way that’s less expensive and less subject to the variability that often happens with laboratory-based tests,” Shukla says. AI-based tools are already being used to predict or develop potential therapeutics, she adds, but new algorithms and approaches are still needed. “I think IEEE members have the right skill set and could make quite a difference if they, along with other engineers, work together to solve this very complex problem of AMR,” Shukla says. “People working in silos is a problem. If we can get people working together to really tackle this problem, that’s how AMR is going to be solved.” You can watch Shukla discuss the findings of the visioning event in this webinar, produced on 27 March.

  • We Need to Talk About AI’s Impact on Public Health
    by Shaolei Ren on 1. May 2025. at 14:47

    Most people have heard about the environmental impact of today’s AI boom, stemming from sprawling data centers packed with power-hungry servers. In the United States alone, the demand for AI is projected to push data-center electricity consumption to 6.7 to 12.0 percent of the nation’s total by 2028. By that same date, water consumption for cooling these data-center facilities is predicted to double, or even quadruple, compared to the 2023 level. But many people haven’t made the connection between data centers and public health. The power plants and backup generators needed to keep data centers working generate harmful air pollutants, such as fine particulate matter and nitrogen oxides (NOx). These pollutants take an immediate toll on human health, triggering asthma symptoms, heart attacks, and even cognitive decline. But AI’s contribution to air pollution and the public health burden is often missing from conversations about responsible AI design. Why? Because ambient air pollution is a “silent killer.” While concerns about the public health impacts of data centers, including potential links to cancer rate increases, are beginning to surface, most AI-model developers, practitioners, and users simply aren’t aware of the serious health risks tied to the energy and infrastructure powering modern AI systems. The Danger of Ambient Air Pollution Ambient air pollution is responsible for approximately 4 million premature deaths worldwide each year. The biggest culprits are tiny particles 2.5 micrometers or less in diameter (referred to as PM 2.5), which can travel deep into the respiratory tract and lungs. Along with high blood pressure, smoking, and high blood sugar, air pollution is a leading health risk factor. The World Bank estimates the global cost of air pollution at US $8.1 trillion, equivalent to 6.1 percent of global gross domestic product. Contrary to common belief, air pollutants don’t stay near their emission sources: They can travel hundreds of miles. Moreover, PM 2.5 is considered a “nonthreshold” pollutant, meaning that there’s no safe level of exposure. With the danger of this pollution well established, the question becomes: How much is AI responsible for? In our research as professors at Caltech and the University of California, Riverside, we’ve set out to answer that question. Quantifying the Public Health Cost of AI To ensure that AI services are available even during grid outages, data centers rely on large sets of backup generators that usually burn diesel fuel. While the total operation time of backup generators is limited and regulated by local environmental agencies, their emission rates are high. A typical diesel generator can release 200 to 600 times more NOx than a natural gas power plant producing the same amount of electricity. A recent report by the state of Virginia revealed that backup generators at Virginia’s data centers emitted about 7 percent of what their permits allowed in 2023. According to the U.S. Environmental Protection Agency’s COBRA modeling tool, which maps how air pollution affects human health at the local, state, and federal levels, the public health cost of those emissions in Virginia is estimated at $150 million, affecting communities as far away as Florida. Imagine the impact if data centers maxed out their permitted emissions.
Further compounding the public health risk, a large set of data-center generators in a region may operate simultaneously during grid outages or grid shortages as part of demand-response programs, potentially triggering short-term spikes in PM2.5 and NOx emissions that are especially harmful to people with lung problems. Next, let’s look beyond the backup generators to the supply of energy from the grid. The bulk of the electricity powering AI data centers comes from power plants that burn fossil fuels, which release harmful air pollutants, including PM 2.5 and NOx. Despite years of progress, power plants remain a leading source of air pollution in the United States. We calculated that training a single large generative AI model in the United States, such as Meta’s Llama 3.1, can produce as much PM 2.5 as more than 10,000 round trips by car between Los Angeles and New York City. According to our research, in 2023, air pollution attributed to U.S. data centers was responsible for an estimated $6 billion in public health damages. If the current AI growth trend continues, this number is projected to reach $10 billion to $20 billion per year by 2030, rivaling the impact of emissions from California’s 30 million vehicles. Why Carbon and Energy Efficiency Aren’t the Whole Story To date, efforts to mitigate AI’s environmental footprint have focused mostly on carbon emissions and energy efficiency. These efforts are important, but they may not alleviate health impacts, which strongly depend on where the emissions occur. Carbon anywhere is carbon everywhere. The climate impact of carbon dioxide is largely the same no matter where it’s emitted. But the health impact of air pollution depends heavily on regional factors such as local sources of energy, wind patterns, weather, and population density. Even though carbon emissions and health-damaging air pollutants have some shared sources, an exclusive focus on cutting carbon does not necessarily reduce, and could even exacerbate, public health risks. For instance, our latest (and unpublished) research has shown that redistributing Meta’s energy loads in 2023 across its U.S. data centers to prioritize carbon reductions could potentially lower overall carbon emissions by 7.2 percent, but would increase public health costs by 2.8 percent. Likewise, focusing solely on energy efficiency can reduce air pollutant emissions, but doesn’t guarantee a decrease in health impact. That’s because training the same AI model using the same amount of energy can yield vastly different health outcomes depending on the location. Across Meta’s U.S. data centers, we’ve found that the public health cost of training the same model can vary by more than a factor of 10. We Need Health-Informed AI Supply-side solutions, such as using alternative fuels for backup generators and sourcing electricity from clean fuels, can reduce AI’s public health impact, but they come with significant challenges. Clean backup generators that offer the same level of reliability as diesel are still limited. And despite advancements in renewable energy, fossil fuels remain deeply embedded in the energy fuel mix. The U.S. Energy Information Administration projects that coal-based electricity generation in 2050 will remain at approximately 30 percent of the 2024 level under the alternative electricity scenario, in which power plants continue operating under rules existing prior to April 2024. 
    Globally, the share of coal and other fossil fuels in electricity generation has remained nearly flat over the past four decades, underscoring the difficulty of entirely changing the energy supply that powers data centers. We believe that demand-side strategies that consider the spatial and temporal variations in health impacts can provide effective and actionable solutions immediately. These strategies are particularly well-suited for AI data centers with substantial operational flexibility. For example, AI training jobs can often run at any available data center and typically do not face hard deadlines, so those jobs can be routed to locations, or deferred to times, that have less impact on public health. Similarly, inference jobs—the work a model does to create an output—can be routed among multiple data centers without affecting user experience. By incorporating public health impact as a key performance metric, these flexibilities can be harnessed to reduce AI’s growing health burden. Crucially, this health-informed approach to AI requires minimal changes to existing systems. Companies simply need to consider public health costs when making decisions. While the public health cost of AI is growing rapidly, AI also holds tremendous promise for advancing public health. For example, within the energy sector, AI can navigate the complex decision space of real-time power plant dispatch. By aligning grid stability with public health objectives, AI can help minimize health costs while maintaining a reliable power supply. AI is rapidly becoming a public utility and will continue to reshape society profoundly. Therefore, we must examine AI through a public lens, with its public health impact as a critical consideration. If we continue to overlook it, the public health cost of AI will only grow. Health-informed AI offers a clear path forward for advancing AI while promoting cleaner air and healthier communities.
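    The demand-side idea the authors describe, treating public health cost as a first-class metric alongside carbon when placing flexible jobs, can be sketched in a few lines. The sites, costs, and weights below are entirely hypothetical; the point is only that the change to an existing scheduler can be as small as swapping the objective function.

```python
# Minimal sketch of "health-informed" placement of a flexible AI training job.
# Every number here is hypothetical; a real deployment would use location- and
# time-specific damage estimates (e.g., from tools like the EPA's COBRA model).

DATA_CENTERS = {
    # name: (carbon cost, public-health cost) per unit of energy, arbitrary units
    "site_a": (0.9, 1.8),   # dirty grid, large population downwind
    "site_b": (1.1, 0.4),   # slightly more carbon, far fewer people affected
    "site_c": (0.7, 1.0),
}

def place_job(energy_mwh: float, carbon_weight: float, health_weight: float) -> str:
    """Pick the site minimizing a weighted sum of carbon and health damages."""
    def total_cost(site: str) -> float:
        carbon, health = DATA_CENTERS[site]
        return energy_mwh * (carbon_weight * carbon + health_weight * health)
    return min(DATA_CENTERS, key=total_cost)

# Carbon-only scheduling vs. scheduling that also prices public health.
print(place_job(500, carbon_weight=1.0, health_weight=0.0))  # -> site_c
print(place_job(500, carbon_weight=1.0, health_weight=1.0))  # -> site_b
```

    The toy example mirrors the trade-off the authors report for Meta's data centers, where optimizing for carbon alone can leave the public health picture slightly worse.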

  • Bot Milk?
    by Harry Goldstein on 1. May 2025. at 14:00

    I come from dairy-farming stock. My grandfather, the original Harry Goldstein, owned a herd of dairy cows and a creamery in Louisville, Ky., that bore the family name. One fateful day in early April 1944, Harry was milking his cows when a heavy metallic part of his homemade milking contraption—likely some version of the then-popular Surge Bucket Milker—struck him in the abdomen, causing a blood clot that ultimately led to cardiac arrest and his subsequent demise a few days later, at the age of 48. Fast forward 80 years and dairy farming is still a dangerous occupation. According to an analysis of U.S. Bureau of Labor Statistics data done by the advocacy group Farmworker Justice, the U.S. dairy industry recorded 223 injuries per 10,000 full-time workers in 2020, almost double the rate for all of private industry combined. Contact with animals tops the list of occupational hazards for dairy workers, followed by slips, trips, and falls. Other significant risks include contact with objects or equipment, overexertion, and exposure to toxic substances. Every year, a few dozen dairy workers in the United States meet a fate similar to my grandfather’s, with 31 reported deadly accidents on dairy farms in 2021. As Senior Editor Evan Ackerman notes in “Robots for Cows (and Their Humans)”, traditional dairy farming is very labor-intensive. Cows need to be milked at least twice per day to prevent discomfort. Conventional milking facilities are engineered for human efficiency, with systems like rotating carousels that bring the cows to the dairy workers. The robotic systems that Netherlands-based Lely has been developing since the early 1990s are much more about doing things the bovine way. That includes letting the cows choose when to visit the milking robot, resulting in a happier herd and up to 10 percent more milk production. Turns out that what’s good for the cows might be good for the humans, too. Another Lely bot deals with feeding, while yet another mops up the manure, the proximate cause of much of the slipping and sliding that can result in injuries. The robots tend to reset the cow–human relationship—it becomes less adversarial because the humans aren’t always there bossing the cows around. Farmer well-being is also enhanced because the humans don’t have to be around to tempt fate, and they can spend time doing other things, freed up by the robot laborers. In fact, when Ackerman visited Lely’s demonstration farm in Schipluiden, Netherlands, to see the Lely robots in action, he says, “The original plan was for me to interview the farmer, and he was just not there at all for the entire visit while the cows were getting milked by the robots. In retrospect, that might have been the most effective way he could communicate how these robots are changing work for dairy farmers.” The farmer’s absence also speaks volumes about how far dairy technology has evolved since my grandfather’s day. Harry Goldstein’s life was cut short by the very equipment he hacked to make his own work easier. Today’s dairy-farming innovations aren’t just improving efficiency—they’re keeping humans out of harm’s way entirely. In the dairy farms of the future, the most valuable safety features might simply be a barn resounding with the whirring of robots and moos of contentment.

  • This Chart Might Keep You From Worrying About AI’s Energy Use
    by Emily Waltz on 30. April 2025. at 16:33

    The world is collectively freaking out about the growth of artificial intelligence and its strain on power grids. But a look back at electricity load growth in the United States over the last 75 years shows that innovations in efficiency continually compensate for relentless technological progress. In the 1950s, for example, rural America electrified, the industrial sector boomed, and homeowners rapidly accumulated nifty domestic appliances such as spinning clothes dryers and deep freezers. This caused electricity demand to grow at a breathtaking clip of nearly 9 percent per year on average. The growth continued into the 1960s as homes and businesses readily adopted air conditioners and the industrial sector automated. But over the next 30 years, industrial processes such as steelmaking became more efficient, and home appliances did more with less power. Around 2000, the onslaught of computing brought widespread concerns about its electricity demand. But even with the explosion of Internet use and credit card transactions, improvements in computing and industrial efficiencies and the adoption of LED lighting compensated. Net result: Average electricity growth in the United States remained nearly flat from 2000 to 2020. Now it’s back on the rise, driven by AI data centers and manufacturing of batteries and semiconductor chips. Electricity demand is expected to grow more than 3 percent every year for the next five years, according to Grid Strategies, an energy research firm in Washington, D.C. “Three percent per year today is more challenging than 3 percent in the 1960s because the baseline is so much larger,” says John Wilson, an energy regulation expert at Grid Strategies. Can the United States counter the growth with innovation in data-center and industrial efficiency? History suggests it can.
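Wilson’s point about the baseline is easy to check with rough numbers. Taking U.S. generation as roughly 750 terawatt-hours in 1960 and roughly 4,200 TWh today (both approximations, used only for illustration), the same growth story looks like this:

```python
# Back-of-the-envelope comparison: the same percentage growth demands far
# more new supply on today's larger baseline. Figures are approximations.
baseline_1960_twh = 750     # approx. U.S. net generation, 1960
baseline_today_twh = 4200   # approx. U.S. net generation, mid-2020s

added_then = baseline_1960_twh * 0.09   # ~9 percent annual growth then
added_now = baseline_today_twh * 0.03   # ~3 percent annual growth expected now

print(f"1960s: about {added_then:.0f} TWh of new generation needed per year")
print(f"Today: about {added_now:.0f} TWh of new generation needed per year")
```

Roughly 68 TWh of new supply per year then versus about 126 TWh per year now, which is why a smaller growth rate can still strain the grid.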

  • Freddy the Robot Was the Fall Guy for British AI
    by Allison Marsh on 30. April 2025. at 14:00

    Meet FREDERICK Mark 2, the Friendly Robot for Education, Discussion and Entertainment, the Retrieval of Information, and the Collation of Knowledge, better known as Freddy II. This remarkable robot could put together a simple model car from an assortment of parts dumped in its workspace. Its video-camera eyes and pincer hand identified and sorted the individual pieces before assembling the desired end product. But onlookers had to be patient. Assembly took about 16 hours, and that was after a day or two of “learning” and programming. Freddy II was completed in 1973 as one of a series of research robots developed by Donald Michie and his team at the University of Edinburgh during the 1960s and ’70s. The robots became the focus of an intense debate over the future of AI in the United Kingdom. Michie eventually lost, his funding was gutted, and the ensuing AI winter set back U.K. research in the field for a decade. Why were the Freddy I and II robots built? In 1967, Donald Michie, along with Richard Gregory and Hugh Christopher Longuet-Higgins, founded the Department of Machine Intelligence and Perception at the University of Edinburgh with the near-term goal of developing a semiautomated robot and then longer-term vision of programming “integrated cognitive systems,” or what other people might call intelligent robots. At the time, the U.S. Defense Advanced Research Projects Agency and Japan’s Computer Usage Development Institute were both considering plans to create fully automated factories within a decade. The team at Edinburgh thought they should get in on the action too. Two years later, Stephen Salter and Harry G. Barrow joined Michie and got to work on Freddy I. Salter devised the hardware while Barrow designed and wrote the software and computer interfacing. The resulting simple robot worked, but it was crude. The AI researcher Jean Hayes (who would marry Michie in 1971) referred to this iteration of Freddy as an “arthritic Lady of Shalott.” Freddy I consisted of a robotic arm, a camera, a set of wheels, and some bumpers to detect obstacles. Instead of roaming freely, it remained stationary while a small platform moved beneath it. Barrow developed an adaptable program that enabled Freddy I to recognize irregular objects. In 1969, Salter and Barrow published in Machine Intelligence their results, “Design of Low-Cost Equipment for Cognitive Robot Research,” which included suggestions for the next iteration of the robot. Freddy I, completed in 1969, could recognize objects placed in front of it—in this case, a teacup.University of Edinburgh More people joined the team to build Freddy Mark 1.5, which they finished in May 1971. Freddy 1.5 was a true robotic hand-eye system. The hand consisted of two vertical, parallel plates that could grip an object and lift it off the platform. The eyes were two cameras: one looking directly down on the platform, and the other mounted obliquely on the truss that suspended the hand over the platform. Freddy 1.5’s world was a 2-meter by 2-meter square platform that moved in an x-y plane. Freddy 1.5 quickly morphed into Freddy II as the team continued to grow. Improvements included force transducers added to the “wrist” that could deduce the strength of the grip, the weight of the object held, and whether it had collided with an object. But what really set Freddy II apart was its versatile assembly program: The robot could be taught to recognize the shapes of various parts, and then after a day or two of programming, it could assemble simple models. 
The various steps can be seen in this extended video, narrated by Barrow: The Lighthill Report Takes Down Freddy the Robot And then what happened? So much. But before I get into all that, let me just say that rarely do I, as a historian, have the luxury of having my subjects clearly articulate the aims of their projects, imagine the future, and then, years later, reflect on their experiences. As a cherry on top of this historian’s delight, the topic at hand—artificial intelligence—also happens to be of current interest to pretty much everyone. As with many fascinating histories of technology, events turn on a healthy dose of professional bickering. In this case, the disputants were Michie and the applied mathematician James Lighthill, who had drastically different ideas about the direction of robotics research. Lighthill favored applied research, while Michie was more interested in the theoretical and experimental possibilities. Their fight escalated quickly, became public with a televised debate on the BBC, and concluded with the demise of an entire research field in Britain. A damning report in 1973 by applied mathematician James Lighthill [left] resulted in funding being pulled from the AI and robotics program led by Donald Michie [right]. Left: Chronicle/Alamy; Right: University of Edinburgh It all started in September 1971, when the British Science Research Council, which distributed public funds for scientific research, commissioned Lighthill to survey the state of academic research in artificial intelligence. The SRC was finding it difficult to make informed funding decisions in AI, given the field’s complexity. It suspected that some AI researchers’ interests were too narrowly focused, while others might be outright charlatans. Lighthill was called in to give the SRC a road map. No intellectual slouch, Lighthill was the Lucasian Professor of Mathematics at the University of Cambridge, a position also held by Isaac Newton, Charles Babbage, and Stephen Hawking. Lighthill solicited input from scholars in the field and completed his report in March 1972. Officially titled “ Artificial Intelligence: A General Survey,” but informally called the Lighthill Report, it divided AI into three broad categories: A, for advanced automation; B, for building robots, but also bridge activities between categories A and C; and C, for computer-based central nervous system research. Lighthill acknowledged some progress in categories A and C, as well as a few disappointments. Lighthill viewed Category B, though, as a complete failure. “Progress in category B has been even slower and more discouraging,” he wrote, “tending to sap confidence in whether the field of research called AI has any true coherence.” For good measure, he added, “AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.” So very British. Lighthill concluded his report with his view of the next 25 years in AI. He predicted a “fission of the field of AI research,” with some tempered optimism for achievement in categories A and C but a valley of continued failures in category B. Success would come in fields with clear applications, he argued, but basic research was a lost cause. The Science Research Council published Lighthill’s report the following year, with responses from N. Stuart Sutherland of the University of Sussex and Roger M. Needham of the University of Cambridge, as well as Michie and his colleague Longuet-Higgins. 
Sutherland sought to relabel category B as “basic research in AI” and to have the SRC increase funding for it. Needham mostly supported Lighthill’s conclusions and called for the elimination of the term AI—“a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better.” Longuet-Higgins focused on his own area of interest, cognitive science, and ended with an ominous warning that any spin-off of advanced automation would be “more likely to inflict multiple injuries on human society,” but he didn’t explain what those might be. Michie, as the United Kingdom’s academic leader in robots and machine intelligence, understandably saw the Lighthill Report as a direct attack on his research agenda. With his funding at stake, he provided the most critical response, questioning the very foundation of the survey: Did Lighthill talk with any international experts? How did he overcome his own biases? Did he have any sources and references that others could check? He ended with a request for more funding—specifically the purchase of a DEC System 10 (also known as the PDP-10) mainframe computer. According to Michie, if his plan were followed, Britain would be internationally competitive in AI by the end of the decade. After Michie’s funding was cut, the many researchers affiliated with his bustling lab lost their jobs.University of Edinburgh This whole affair might have remained an academic dispute, but then the BBC decided to include a debate between Lighthill and a panel of experts as part of its “Controversy” TV series. “Controversy” was an experiment to engage the public in science. On 9 May 1973, an interested but nonspecialist audience filled the auditorium at the Royal Institution in London to hear the debate. Lighthill started with a review of his report, explaining the differences he saw between automation and what he called “the mirage” of general-purpose robots. Michie responded with a short film of Freddy II assembling a model, explaining how the robot processes information. Michie argued that AI is a subject with its own purposes, its own criteria, and its own professional standards. After a brief back and forth between Lighthill and Michie, the show’s host turned to the other panelists: John McCarthy, a professor of computer science at Stanford University, and Richard Gregory, a professor in the department of anatomy at the University of Bristol who had been Michie’s colleague at Edinburgh. McCarthy, who coined the term artificial intelligence in 1955, supported Michie’s position that AI should be its own area of research, not simply a bridge between automation and a robot that mimics a human brain. Gregory described how the work of Michie and McCarthy had influenced the field of psychology. You can watch the debate or read a transcript. A Look Back at the Lighthill Report Despite international support from the AI community, though, the SRC sided with Lighthill and gutted funding for AI and robotics; Michie had lost. Michie’s bustling lab went from being an international center of research to just Michie, a technician, and an administrative assistant. The loss ushered in the first British AI winter, with the United Kingdom making little progress in the field for a decade. For his part, Michie pivoted and recovered. He decommissioned Freddy II in 1980, at which point it moved to the Royal Museum of Scotland (now the National Museum of Scotland), and he replaced it with a Unimation PUMA robot. 
In 1983, Michie founded the Turing Institute in Glasgow, an AI lab that worked with industry on both basic and applied research. The year before, he had written Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach). Michie intended it as intellectual musings that he hoped scientists would read, perhaps on the weekend, to help them get beyond the pursuits of the workweek. The book is wide-ranging, covering his three decades of work. In the introduction to the chapters covering Freddy and the aftermath of the Lighthill report, Michie wrote, perhaps with an eye toward history: “Work of excellence by talented young people was stigmatised as bad science and the experiment killed in mid-trajectory. This destruction of a co-operative human mechanism and of the careful craft of many hands is elsewhere described as a mishap. But to speak plainly, it was an outrage. In some later time when the values and methods of science have further expanded, and those adversary politics have contracted, it will be seen as such.” History has indeed rendered judgment on the debate and the Lighthill Report. In 2019, for example, computer scientist Maarten van Emden, a colleague of Michie’s, reflected on the demise of the Freddy project with these choice words for Lighthill: “a pompous idiot who lent himself to produce a flaky report to serve as a blatantly inadequate cover for a hatchet job.” And in a March 2024 post on GitHub, the blockchain entrepreneur Jeffrey Emanuel thoughtfully dissected Lighthill’s comments and the debate itself. Of Lighthill, he wrote, “I think we can all learn a very valuable lesson from this episode about the dangers of overconfidence and the importance of keeping an open mind. The fact that such a brilliant and learned person could be so confidently wrong about something so important should give us pause.” Arguably, both Lighthill and Michie correctly predicted certain aspects of the AI future while failing to anticipate others. On the surface, the report and the debate could be described as simply about funding. But it was also more fundamentally about the role of academic research in shaping science and engineering and, by extension, society. Ideally, universities can support both applied research and more theoretical work. When funds are limited, though, choices are made. Lighthill chose applied automation as the future, leaving research in AI and machine intelligence in the cold. It helps to take the long view. Over the decades, AI research has cycled through several periods of spring and winter, boom and bust. We’re currently in another AI boom. Is this time different? No one can be certain what lies just over the horizon, of course. That very uncertainty is, I think, the best argument for supporting people to experiment and conduct research into fundamental questions, so that they may help all of us to dream up the next big thing. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the May 2025 print issue as “This Robot Was the Fall Guy for British AI.” References Donald Michie’s lab regularly published articles on the group’s progress, especially in Machine Intelligence, a journal founded by Michie. The Lighthill Report and recordings of the debate are both available in their entirety online—primary sources that capture the intensity of the moment. 
In 2009, a group of alumni from Michie’s Edinburgh lab, including Harry Barrow and Pat Fothergill (formerly Ambler), created a website to share their memories of working on Freddy. The site offers great firsthand accounts of the development of the robot. Unfortunately for the historian, they didn’t explore the lasting effects of the experience. A decade later, though, Maarten van Emden did, in his 2019 article “Reflecting Back on the Lighthill Affair,” in the IEEE Annals of the History of Computing. Beyond his academic articles, Michie was a prolific author. Two collections of essays I found particularly useful are On Machine Intelligence (John Wiley & Sons, 1974) and Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach, 1982). Jon Agar’s 2020 article “What Is Science for? The Lighthill Report on Artificial Intelligence Reinterpreted” and Jeffrey Emanuel’s GitHub post offer historical interpretations on this mostly forgotten blip in the history of robotics and artificial intelligence.

  • Rugged Microdata Centers Bring Rural Reliability
    by Dina Genkina on 30. April 2025. at 13:00

Rural connectivity is still a huge issue. As of 2022, approximately 28 percent of Americans living in rural areas did not have access to broadband Internet, which the Federal Communications Commission (FCC) at that time defined as 25 megabits per second for downloads and 3 Mb/s for uploads. In 2024, the FCC adopted a new benchmark with higher speed requirements—increasing the number of people whose connections don’t meet the definition. One potential solution to the problem is small, rugged data centers built from relatively old, redundant components and placed strategically in rural areas, so that crucial data can be stored locally and network providers can route through them, providing redundancy. “We are not the big AI users,” said Doug Recker, the president and founder of Duos Edge AI, in a talk delivered at the Data Center World conference in Washington, D.C., earlier this month. “We’re still trying to resolve the problem from 20 years ago. These aren’t high-bandwidth or high-power data centers. We don’t need them out there. We just need better connectivity. We need robust networks.” The Jacksonville, Fla.–based startup provides small data centers (about the size of a shipping container) to rural areas, mostly in the Texas panhandle. It recently added such a data center in Amarillo, working with the local school district to provide more robust connectivity to students. The school district runs its learning platform on Amazon Web Services (AWS) and can now store that platform locally in the data center. Previously, data had to travel to and from Dallas, over 500 kilometers away. Network outages were a common occurrence, impeding student learning. Recker’s company paid the upfront cost of US $1.2 million to $1.5 million to build the 15-cabinet data center, which it calls a pod. Duos is making the money back by charging a monthly usage and maintenance fee (between $1,800 and $3,000 per shelf) to the school district and other customers. The company follows a “build what’s needed and they will come” approach. Once the data center is installed, Recker says, existing network providers colocate there, providing redundancy and reliability to the customers. The pod provides a seed around which network providers can build a hub-and-spoke-type network. Three Requirements for Edge Data Centers The trick to making these edge data centers financially profitable is minimizing their energy usage and maximizing their reliability. To minimize energy use, Duos uses relatively old, time-tested equipment. For reliability, every piece of equipment is duplicated, including uninterruptible power supply batteries, generators, and air-conditioning units. The company also has to locate the pods in places with a large enough pool of potential customers to justify building a 15-rack pod (the equipment is rented out per rack). The pods are unmanned, but efficient and timely maintenance is key. “Say your AC unit goes down at 2:00 in the morning,” Recker says. “It’s redundant, but you don’t want it to be down, so you have to dispatch somebody who can get into a pod at 2:00 in the morning.” Duos has a system for dispatching maintenance workers and an auditing standard that remotely keeps track of all the work that has been done or needs to be done on each piece of equipment. Each pod also has a clean room to prevent maintenance workers from tracking in dust or dirt from outside while they work on repairs.
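The payoff from duplicating equipment is easy to see with a toy availability calculation; the 99 percent per-unit figure below is an assumption for illustration, not a Duos specification.

```python
# Toy availability math for duplicated equipment: a redundant pair only fails
# when both units are down at the same time (assuming independent failures).
# The 99 percent per-unit availability is an illustrative assumption.
single = 0.99
pair = 1 - (1 - single) ** 2

hours_per_year = 24 * 365
print(f"Single unit:    ~{(1 - single) * hours_per_year:.0f} hours of downtime per year")
print(f"Redundant pair: ~{(1 - pair) * hours_per_year:.1f} hours of downtime per year")
```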
The compact data center allows the Amarillo school district to have affordable and reliable connectivity for its digital learning platform. Students will soon have access to AI-powered tools, simulations, and real-time data for their classes. “The pod enables that to happen because they can compute on site and host that environment on site where they couldn’t do it before because of the latency issues,” says Recker. Duos is also placing pods elsewhere in the Texas panhandle, as well as in Florida. And the company is seeing so much demand in Amarillo that it plans to install a second pod. Recker says that although Duos initially built the pod in collaboration with the school district, other local institutions quickly became interested as well, including hospitals, utility companies, and farmers.
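Recker’s latency point also checks out on the back of an envelope: light in optical fiber covers roughly 200 kilometers per millisecond, so even before routing and queuing delays, a round trip to Dallas costs several milliseconds that a local pod avoids. A quick sketch (distances other than the article’s 500 km are assumptions):

```python
# Lower-bound propagation delay: signals in optical fiber travel at roughly
# 200 km per millisecond. Real latency adds routing, queuing, and server time.
FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_MS

print(f"To Dallas (~500 km): at least {round_trip_ms(500):.1f} ms round trip")
print(f"To a local pod (~5 km, assumed): at least {round_trip_ms(5):.2f} ms round trip")
```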

  • Dalma Novak’s Journey From Professor to Entrepreneur
    by Kathy Pretz on 29. April 2025. at 18:00

    It can be a bit of a bumpy road from leaving a secure job in academia to launching a startup based on your research. That’s what IEEE Fellow Dalma Novak experienced. An expert in developing technology to transmit microwave and millimeter-wave signals over long distances using optical fibers, she left a tenured position at the University of Melbourne, in Parkville, Australia, to join a venture-backed U.S. optical network equipment firm. After two years, the startup went out of business as the telecom industry’s bubble was bursting in the early 2000s. That turn of events didn’t dissuade Novak. She loved working in industry and had no intention of returning to academia, she says. Instead, she helped found Pharad, now Octane Wireless, which makes advanced antennas and radio-over-fiber products for communications equipment. Located in Hanover, Md., Novak is vice president of engineering for Octane. Dalma Novak Employer: Octane Wireless in Hanover, Md. Title: Vice president of engineering Member grade: Fellow Alma mater: University of Queensland, Brisbane, Australia One of the other founders is her husband, IEEE Fellow Rod Waterhouse. A former electrical and electronics engineering associate professor at RMIT University in Melbourne, he is an expert in creating antennas and radio-over-fiber communication links. “We decided,” she says, “that we would form our own company and work on some of the technologies that we developed over the years as academics and also build on some of the things that we worked on as Ph.D. students.” She juggles her day job with her role as director and vice president of IEEE Technical Activities, making her a member of the IEEE Board of Directors. She also chairs the Technical Activities Board, which is the largest of the organization’s six major boards. Novak helps set the strategic direction of the TAB, which oversees IEEE’s societies and technical councils, including their products and services. From professor to entrepreneur Novak, who grew up in Brisbane, Australia, fell in love with math and physics in high school. She wanted to have a STEM career. Her private all-girls school in the early 1980s didn’t have a career counselor, so she researched job possibilities at her local library. “I determined that I wanted to do engineering rather than just science,” she says. “When I started to look into the different fields of engineering, I realized electrical engineering matched best because of the subjects I loved the most. I really wanted to say I was an engineer when I finished my degree.” She graduated in 1987 with a bachelor’s degree in engineering, then got a Ph.D. in electrical engineering in 1992 from the University of Queensland in Brisbane. Her doctoral thesis was on the emerging field of semiconductor lasers for fiber-optic communications. “A lot of my research has focused on developing new technologies for transporting very-high-frequency wireless signals over optical fiber and developing new methods that also enable high-performance radio-over-fiber systems,” she says. She has published more than 280 papers; most are in the IEEE Xplore Digital Library. Shortly after earning her Ph.D., she was hired by the University of Melbourne as a professor of electrical and electronic engineering. She later was appointed as chair of telecommunications. Novak and her husband took a six-month sabbatical from their universities in 2000 so she could conduct research at the University of California, Los Angeles, and the U.S. Naval Research Laboratory in Washington, D.C. 
Several colleagues from the Naval Research Lab who went on to work at startups encouraged Novak and her husband to do the same. The two joined Dorsal Networks in Columbia, Md. At Dorsal, which builds undersea optical networks, she developed optical networking equipment for submarines. “My husband and I had always wanted to spend some time working in the industry in the United States,” she says. “We didn’t necessarily see ourselves as being professors all of our lives.” “IEEE is the professional home for everyone who works in the engineering field. It’s a club, and you need to be in it.” Dorsal was acquired by Corvis, an optical network equipment manufacturer in Columbia. It then purchased Broadwing, a telecommunications service provider, and took on that name. The company went out of business in 2003. The couple and their business partner, Austin Farnham, a former managing director at Corvis, founded Octane in 2004. Farnham is president, and Waterhouse is chief technology officer. “We decided that we were going to fund our own company and bootstrapped it through research grants,” Novak says. “Our background writing research proposals as professors actually played a really important role in getting the company off the ground.” Career Advice: Don’t Overplan Your Career Do not focus on failure when things don’t go according to plan, Novak advises. “I think when you’re younger, you’re more inclined to internalize failures or dwell on things that don’t go right, such as having a paper rejected or not receiving a scholarship,” she says. “I know it’s not easy to put something in the past and focus on the next thing you’re trying for.” Setting career milestones can be a helpful way to track progress and stay motivated, but she cautions against focusing on them too much, because you could end up overplanning your career. The opportunity to move from Australia to the United States, for example, came out of the blue, she says. “It wasn’t anything that I was expecting to happen,” she says. “I think you always have to be open to opportunities that you weren’t expecting. Not that you necessarily have to take them, but just don’t be so focused on what you want to accomplish that you don’t see other opportunities that come your way.” And, of course, consider volunteering for IEEE. She says that volunteering has made her a more effective communicator. “Giving technical talks at a conference is primarily focused on the technology,” she says. “In industry, you have to explain the technology you are working on in simpler terms since you are presenting your work to people with various levels of engineering knowledge.” As a volunteer leader, she adds, “you have to think about how to focus people’s efforts and bring them together to form a consensus while also making everyone feel like they are being listened to.” The company initially was constrained to working on projects for which they received funding, but it has evolved and no longer applies for research grants, Novak says. “We are very much focused on commercializing our technology and selling our products,” she says. Giving back to the community Novak’s Ph.D. advisor encouraged her to join IEEE because of its journals and conferences. “You need to join IEEE because it’s really important for you to publish papers and go to its conferences,” he told her. “And that’s what you’re going to have to do in order to graduate.” She joined. “IEEE is the professional home for everyone who works in the engineering field,” she says. 
“It’s a club, and you need to be in it.” Some of the most important benefits for her, she says, are meeting authors of seminal papers, networking, and collaborating. “What people don’t realize, particularly younger people, is the value of networking,” she says. “When I moved to the U.S., I already knew many people from attending IEEE meetings and through my volunteer work for it. I was able to talk to them about new opportunities, and we even applied for research grants together. These types of collaborations really expand your network.” She says she feels strongly about giving back to the community through volunteering. She has served in many roles, particularly for the IEEE Photonics Society. She is a former president, vice president of membership, and a member of its board of governors. “You get so much more return on your investment with your membership when you’re a volunteer,” she says. “You get to interact with really smart people and learn from them. “Because IEEE is a global organization, you also meet people from around the world with different backgrounds and speaking different languages—which is an excellent way for people to expand their horizons. “Volunteering is a great way to really open your mind to other people. And I think it just makes you grow as a person. Every volunteer experience I’ve had has enriched me personally.” Novak’s Goals for Technical Activities Here is what Novak says she plans to accomplish during her term as vice president of IEEE Technical Activities: Increase the volunteer pipeline. Societies and councils have passionate volunteers who want to contribute within their particular technical community, she says, but they don’t typically volunteer for the Technical Activities Board or its committees. Novak wants to find ways to get them more involved with the TAB, and she particularly wants to encourage younger members to serve. “One Technical Activities.” Similar to IEEE President Kathleen Kramer’s aspiration of creating a One IEEE framework that fosters more collaborations, Novak says there’s an opportunity to create One Technical Activities. She would like societies, councils, and technical communities to work and engage more directly with all of the TAB’s standing committees. “I really want to find ways in which they can engage more effectively and work more cohesively,” she says. “We talk about having silos across the IEEE, but we also have silos within the TAB because each of the 47 societies and councils is unique and operates very differently.” Cross-committee collaboration. She would like to promote more cohesion with other major IEEE boards including Educational Activities and the Standards Association. Inter-committee support. She wants societies to provide more support to their technical chapters and student chapters. Chapters have two parents: the geographical section and the society. The society funds the chapter’s activities, and the section controls the purse strings. Sometimes there is a difference between what the chapters want to fund and what the section leadership wants to spend money on, she says.

  • When Does an AI Image Become Art?
    by Gwendolyn Rak on 29. April 2025. at 14:00

    AI generated images are now seeping into advertising, social media, entertainment, and more, thanks to models like Midjourney and DALL-E. But creating visual art with AI actually dates back decades. Christiane Paul curates digital art at the Whitney Museum of American Art, in New York City. Last year, Paul curated an exhibit on British artist Harold Cohen and his computer program AARON, the first AI program for art creation. Unlike today’s statistical models, AARON was created in the 1970s as an expert system, emulating the decision-making of a human artist. Christiane Paul Christiane Paul is the curator of digital art at the Whitney Museum of American Art and a professor emeritus at the New School. IEEE Spectrum spoke with Paul about Cohen’s iconic AI program, digital art curation, and the relationship between art and technology. How do you curate digital art? Christiane Paul: Curating digital art is not that different from any other art form. Whether painting or photography or print, we all look at the sophistication of a concept and how it is translated into a medium. So my curatorial choices are not driven by the technology. If you’re a curator of painting, the selection of a work for an exhibition would not be driven by a specific paint or technique for a brush stroke. In 2001, Harold Cohen produced AARON KCAT as part of his experiments in producing figures with the AI model. Cohen taught the model how to handle overlapping objects in a composition, which he did by having the model fill in objects from the foreground to the background.Whitney Museum of American Art That being said, of course, there have been shows about pointillism as a specific technique in painting. And, there could be an exhibition focused on AI technologies as an artistic medium. But the general criteria would still be the sophistication of concept and its implementation. Do you collaborate with engineers as part of your work? Paul: Yes, of course. Many artists also have a background in engineering, particularly when it comes to the older generation of digital artists. When there weren’t any digital art programs or schools, digital artists often would have a background in engineering or programming. So you work with developers and software engineers, and many artists are programmers or coders themselves—I would say most of the artists I’m working with. They sometimes have to outsource, just due to the amount of work, but most of them are also very deeply in the weeds. What are the challenges of collecting and preserving digital art? Paul: For art institutions or collectors, it is important to have standards and best practices for archiving and keeping track of the technologies, because computers and systems change at such a rapid pace. In the ’90s people started paying more attention to implementing conservation approaches, and there are several strategies. One of them is storage and hardware conservation. This is used for pieces that conceptually depend on hardware. And then there is migration, emulation, and re-creation. There is no silver bullet. One has to look at the individual artwork to see which approach may be the best one. In the Harold Cohen exhibition, for example, we basically re-created one of the earlier pieces from scratch based on Cohen’s notebooks and printed out code that we found, and his son actually recoded that in Python. We reconstructed the original BASIC but then also recoded in Python. What inspired the Cohen exhibit? Paul: I had known Harold Cohen for quite some time. 
We worked together on an exhibition in 2007, and AARON is an iconic work. Everybody studying digital art knows this as one of the fundamental pieces. We had brought some of his works into the collection of the Whitney Museum, so showcasing that was one point. But I also thought that it would be particularly interesting to revisit the first AI software for art making in the light of current text-to-image models. Their processes are radically different, and authorship and collaboration play out in a very different way. AARON learned how to generate images of plants, as seen in AARON Gijon from 2007, using rules that Harold Cohen provided about their size and patterns governing their branching and leaf formation.Whitney Museum of American Art Harold Cohen wrote AARON from scratch. He was completely in charge of building that software, which he evolved across five different languages over his lifetime, so the composition of an image was completely under his control. He moved from evocative forms, to a figurative phase, to a plant-based phase, and then returned to abstraction. Later in life, he taught the software color composition and he also built the drawing devices that would execute AARON’s work. He really considered AARON a collaborator, and AARON encapsulated Cohen’s sensibility and aesthetics. Today’s AI software is essentially statistically based, and a lot of the authorship and agency happens in the corporate black box. The artist has no control over that, even if artists train and tweak their own models. Artists working with AI are very invested in manipulating the software and working with it, but there always is a component that is created by corporations that they do not have control over. Can AI-generated images be art? Paul: Not all visuals created by text-to-image models are art. It’s wonderful that people can use AI to generate images and play with it, but I wouldn’t call that end result art. AI art uses artificial intelligence as a tool and medium in a conceptual and practical way, critically engaging with those technologies and questioning them, be that from an ethical or aesthetic perspective. Most of today’s AI artists are engaging with these technologies in a very deep way. They’re putting together their own training datasets. They train the models. They question the biases embedded in AI. So it’s quite a complicated and involved process, and it’s not simply a text prompt generating an image. This article appears in the May 2025 issue as “5 Questions for Christiane Paul.”
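Cohen’s actual code isn’t reproduced here, but the contrast Paul draws, hand-authored rules versus statistical models, can be illustrated with a toy rule-based sketch in the spirit of an expert system. The branching rules and parameters below are invented for illustration and have no connection to AARON’s real rule set.

```python
# Toy rule-based drawing in the expert-system spirit: explicit, human-written
# rules decide how a "plant" branches; nothing is learned from image data.
# All rules and parameters are invented for illustration (not Cohen's).
import math
import random

def grow(x, y, angle, length, depth, segments):
    """Recursively add branch segments according to fixed, authored rules."""
    if depth == 0 or length < 2:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Rule: each branch spawns two shorter children, splayed left and right.
    for turn in (-0.4, 0.4):
        grow(x2, y2, angle + turn + random.uniform(-0.1, 0.1),
             0.7 * length, depth - 1, segments)

segments = []
grow(x=0.0, y=0.0, angle=math.pi / 2, length=40.0, depth=6, segments=segments)
print(f"Generated {len(segments)} branch segments")
```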

  • Is China Pulling Ahead in the Quest for Fusion Energy?
    by Tom Clynes on 29. April 2025. at 13:00

    In the rocky terrain of China’s Sichuan province, a massive X-shaped building is quickly rising, its crisscrossed arms stretching outward in a bold, futuristic design. From a satellite’s view, it could be just another ambitious megaproject in a country known for building fast and thinking big. But to some observers of Chinese tech development, it’s yet more evidence that China may be on the verge of pulling ahead in one of the most consequential technological races of our time: the quest to achieve commercial nuclear fusion. Fusion—the process that powers stars—promises nearly limitless clean energy, without the radioactive waste and meltdown risk of fission. But building a reactor that can sustain fusion requires an extraordinary level of scientific and engineering precision. The X-shaped facility under construction in Mianyang, Sichuan, appears to be a massive laser-based fusion facility; its four long arms, likely laser bays, could focus intense energy on a central chamber. Analysts who’ve examined satellite imagery and procurement records say it resembles the U.S. National Ignition Facility (NIF), but is significantly larger. Others have speculated that it could be a massive Z-pinch machine—a fusion-capable device that uses an extremely powerful electrical current to compress plasma into a narrow, dense column. “Even if China is not ahead right now,” says Decker Eveleth, an analyst at the research nonprofit CNA, “when you look at how quickly they build things, and the financial willpower to build these facilities at scale, the trajectory is not favorable for the U.S.” Fusion is a marathon, not a sprint—and China is pacing itself to win. Other Chinese plasma-physics programs have also been gathering momentum. In January, researchers at the Experimental Advanced Superconducting Tokamak (EAST)—nicknamed the “Artificial Sun”—reported maintaining plasma at over 100 million °C for more than 17 minutes. (A tokamak is a doughnut-shaped device that uses magnetic fields to confine plasma for nuclear fusion.) Operational since 2006, EAST is based in Hefei, in Anhui province, and serves as a testbed for technologies that will feed into next-generation fusion reactors. Not far from EAST, the Chinese government is building the Comprehensive Research Facility for Fusion Technology (CRAFT), a 40-hectare complex that will develop the underlying engineering for future fusion machines. Results from EAST and CRAFT will feed into the design of the China Fusion Engineering Test Reactor (CFETR), envisioned as a critical bridge between experimental and commercial fusion power. The engineering design of CFETR was completed in 2020 and calls for using high-temperature superconducting magnets to scale up what machines like EAST have begun. Meanwhile, on Yaohu Science Island in Nanchang, in central China, the federal government is preparing to launch Xinghuo—the world’s first fusion-fission hybrid power plant. Slated for grid connection by 2030, the reactor will use high-energy neutrons from fusion reactions to trigger fission in surrounding materials, boosting overall energy output and potentially reducing long-lived radioactive waste. Xinghuo aims to generate 100 megawatts of continuous electricity, enough to power approximately 83,000 U.S.-size homes. Why China Is Doubling Down on Fusion Why such an aggressive push? 
Fusion energy aligns neatly with three of China’s top priorities: securing domestic energy, reducing carbon emissions, and winning the future of high technology—a pillar of President Xi Jinping’s “great rejuvenation” agenda. “Fusion is a next-generation energy technology,” says Jimmy Goodrich, a senior advisor for technology analysis at Rand Corp. “Whoever masters it will gain enormous advantages—economically, strategically, and from a national-security perspective.” The lengthy development required to commercialize fusion also aligns with China’s political economy. Fusion requires patient capital. The Chinese government doesn’t need to answer to voters or shareholders, and so it’s uniquely suited to fund fusion R&D and wait for a payoff that may take decades. In the United States, by contrast, fusion momentum has shifted away from government-funded projects to private companies like Commonwealth Fusion Systems, Helion, and TAE Technologies. These fusion startups have captured billions in venture capital, riding a wave of interest from tech billionaires hoping to power, among other things, the data centers of an AI-driven future. But that model has vulnerabilities. If demand for energy-hungry data centers slows or market sentiment turns, funding could dry up quickly. “The future of fusion may come down to which investment model proves more resilient,” says Goodrich. “If there’s a slowdown in AI or data center demand, U.S. [fusion] startups could see funding evaporate. In contrast, Chinese fusion firms are unlikely to face the same risk, as sustained government support can shield them from market turbulence.” The talent equation is shifting, too. In March, plasma physicist Chang Liu left the Princeton Plasma Physics Laboratory to join a fusion program at Peking University, where he’d earned his undergraduate degree. At the Princeton lab, Liu had pioneered a promising method to reduce the impact of damaging runaway electrons in tokamak plasmas. “The future of fusion may come down to which investment model proves more resilient.” —Jimmy Goodrich, Rand Corp. Liu’s move exemplifies a broader trend, says Goodrich. “When the Chinese government prioritizes a sector for development, a surge of financing and incentives quickly follows,” he says. “For respected scientists and engineers in the U.S. or Europe, the chance to [move to China to] see their ideas industrialized and commercialized can be a powerful draw.” Meanwhile, China is growing its own talent. Universities and labs in Hefei, Mianyang, and Nanchang are training a generation of physicists and engineers to lead in fusion science. Within a decade, China could have a vast, self-sustaining pipeline of experts. The scale and ambition of China’s fusion effort is hard to miss. Analysts say the facility in Mianyang could be 50 percent larger than NIF, which in 2022 became the first fusion-energy project to achieve scientific breakeven—producing 3.15 megajoules of energy from a 2.05-megajoule input. There are military implications as well. CNA’s Eveleth notes that while the Mianyang project could aid energy research, it also will boost China’s ability to simulate nuclear weapons tests. “Whether it’s a laser fusion facility or a Z-pinch machine, you’re looking at a pretty significant increase in Chinese capability to conduct miniaturized weapons experiments and boost their understanding of various materials used within weapons,” says Eveleth. These new facilities are likely to surpass U.S. 
capabilities for certain kinds of weapons development, Eveleth warns. While Los Alamos and other U.S. national labs are aging, China is building fresh and installing the latest technologies in shiny new buildings. The United States still leads in scientific creativity and startup diversity, but the U.S. fusion effort remains comparatively fragmented. During the Biden administration, the U.S. government invested about $800 million annually in fusion research. China, according to the U.S. Department of Energy, is investing up to $1.5 billion per year—although some analysts say that the amount could be twice as high. Fusion is a marathon, not a sprint—and China is pacing itself to win. Backed by a coordinated national strategy, generous funding, and a rapidly expanding talent base, Beijing isn’t just chasing fusion energy—it’s positioning itself to dominate the field. “It’s a renaissance moment for advanced energy in China,” says Goodrich, who contends that unless the United States ramps up public investment and support, it may soon find itself looking eastward at the future of fusion. The next few years will be decisive, he and others say. Reactors are rising. Scientists are relocating. Timelines are tightening. Whichever nation first harnesses practical fusion energy won’t just light up cities. It may also reshape the balance of global power.
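Two of the numbers above are worth a quick sanity check: NIF’s 2022 shot corresponds to a target gain of about 1.5, and Xinghuo’s 100 megawatts spread across roughly 83,000 homes works out to typical average U.S. household consumption (average draw, not peak).

```python
# Quick arithmetic on the figures cited above.
target_gain = 3.15 / 2.05          # NIF: fusion energy out / laser energy in
print(f"NIF target gain: about {target_gain:.2f}")

kw_per_home = 100_000 / 83_000     # 100 MW shared across ~83,000 homes
kwh_per_year = kw_per_home * 24 * 365
# Roughly in line with average U.S. household consumption.
print(f"About {kw_per_home:.2f} kW per home, or roughly {kwh_per_year:,.0f} kWh per year")
```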

  • Put an Old-School BBS on Meshtastic Radio
    by Stephen Cass on 28. April 2025. at 15:00

    In the 1980s and 1990s, online communities formed around tiny digital oases called bulletin-board systems. Often run out of people’s homes and accessible by only one or two people at a time via dial-up modems, these BBSs let people exchange public and private messages, play games, and share files using simple menus and a text-based interface. Today, there is an uptick in interest in BBSs as a way to create idiosyncratic digital spaces away from the glare of large social-media platforms like Facebook, X, and Bluesky. Today’s BBSs are typically accessed over the Internet, rather than dial-up connections. But their old standalone mojo is possible thanks to one of the hottest new radio technologies: Meshtastic. Indeed, this article is really the latest installment in what has become an accidental series that I’ll call “Climbing the LoRa Stack.” LoRa first appeared on Hands On’s radar in 2020, when enthusiasts realized that the long-range, low-bandwidth protocol had a lot of potential beyond just machine-to-machine Internet of Things connections, such as building person-to-person text messagers. Then last year we talked about the advent of Meshtastic, which adds mesh-networking capabilities to LoRa, allowing devices to autonomously create wireless networks and exchange data over a much larger area. In that article, I wondered what kind of interesting applications might be built on top of Meshtastic—and that brings us to today. Created by the Comms Channel, the open source TC2-BBS software was first released last summer. It’s a set of Python scripts that relies on just two additional libraries: one for talking to Meshtastic radios over a USB connection and one that helps manage internal data traffic. TC2-BBS doesn’t require a lot of computing power because the low-bandwidth limits of LoRa mean it’s never handling much data at any given time. All of this means the BBS code is very portable and you can run it on something as low-powered as a Raspberry Pi Zero. The BBS system uses a WisBlock Meshtastic radio with a status display [middle left and center], which can communicate wirelessly using LoRa and bluetooth antennas [top]. A servo moves a physical flag under the control of an Arduino Nano [middle right and bottom], while a Raspberry Pi runs the BBS Python software.James Provost The current TC2-BBS feature set is minimal, albeit under active development. There’s no option for sharing files, the interface is basic even by BBS standards, and there are no “door games,” which let visitors play what were typically turn-based text adventures or strategy games. On the other hand TC2-BBS does have some features from the more advanced bulletin-board systems of yore, such as the ability to store-and-forward email among other BBSs, similar to the FidoNet network, which flourished in the early 1990s until it was supplanted by the Internet. And in a nod to the whimsy of door games, the TC2-BBS system does have an option that lets users ask for a fortune-cookie-style aphorism, à la the Unix fortune command. And of course, anyone can access it at any time without having to worry about a busy phone line. I installed the software on a spare Raspberry Pi 3, following the simple instructions on GitHub. There is a Docker image, but because I was dedicating this Pi to the BBS, I just installed it directly. For the radio hardware, I hooked the Pi up to a RAKwireless WisBlock, which runs Meshtastic out of the box. 
In addition to a LoRa antenna, the WisBlock also has a Bluetooth antenna that allows for easy configuration of the radio via a smartphone app. Anyone can access it at any time without having to worry about a busy phone line The biggest hiccup was power: Normally the WisBlock radio is powered via its USB connection, but my attached Pi couldn’t meet the radio’s needs without triggering low-voltage warnings. So I powered the WisBlock separately through a connector normally reserved for accepting juice from a solar panel. Soon I had IEEE Spectrum’s TC2-BBS up and running and happily talking via Meshtastic with a HelTXT communicator I’d bought for my earlier Hands On experiments. Now anyone within three hops of Spectrum’s midtown Manhattan office on New York City’s emerging Meshtastic network can leave a message by sending “hello” to our node, advertised on the Meshtastic network as IEEE Spectrum BBS. But of course, just like the BBS’s of old, it was going to take a while for people to realize it was there and start leaving messages. I could monitor the BBS for visitors via a display connected to the Pi, but after a little poking around in the Python scripts, I realized I could do something more fun. By using the RPi.GPIO library and adding a few lines of code at the point where the BBS stores board messages in memory, I set the Pi to pulse one of its general-purpose input/output (GPIO) pins on and off for a moment every time a new message was posted. The Raspberry Pi sends and receives serial data from the WisBlock Meshtastic radio, and it sends pulses via its GPIO header to the Arduino Nano when a post is added to the bulletin-board database. When the Nano receives a signal, it raises a physical flag until the reset button is pushedJames Provost I fished an Arduino Nano out of my drawer and hooked it up to a servo, a push button, and the Pi’s GPIO pin. The Nano listens for an incoming pulse from the Pi. When the Nano hears one, it moves the arm of the servo through 90 degrees, raising a little red flag. Pressing the button to acknowledge the flag lowers the notification flag again and the Nano resumes listening for another pulse. This eliminates the need to keep the Pi plugged into a display, and I can check to see what the new message is via my HelTXT radio or smartphone. So please, if you’re in New York City and have a Meshtastic radio, drop by our new/old digital watering hole and leave a message! As for me, I’m going to keep climbing up the LoRa stack and see if I can write one of those door games.
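Cass doesn’t include his patch in the article, but the GPIO hook he describes is only a few lines; here is a rough sketch of its shape. The pin number and the exact place it hooks into the TC2-BBS scripts are assumptions.

```python
# Rough sketch of the notification hack described above: pulse a GPIO pin
# whenever the BBS stores a new board message, so the Arduino Nano can raise
# its flag. The pin number and hook point are assumptions, not the actual patch.
import time
import RPi.GPIO as GPIO

FLAG_PIN = 17  # any free GPIO pin wired to the Nano's input

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLAG_PIN, GPIO.OUT, initial=GPIO.LOW)

def signal_new_message(pulse_seconds=0.2):
    """Briefly drive the pin high so the Nano raises the flag."""
    GPIO.output(FLAG_PIN, GPIO.HIGH)
    time.sleep(pulse_seconds)
    GPIO.output(FLAG_PIN, GPIO.LOW)

# Call signal_new_message() from the code path in TC2-BBS that saves a new
# post, right after the message is written to the board database.
```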

  • How to Avoid Ethical Red Flags in Your AI Projects
    by Francesca Rossi on 27. April 2025. at 13:00

    As a computer scientist who has been immersed in AI ethics for about a decade, I’ve witnessed firsthand how the field has evolved. Today, a growing number of engineers find themselves developing AI solutions while navigating complex ethical considerations. Beyond technical expertise, responsible AI deployment requires a nuanced understanding of ethical implications. In my role as IBM’s AI ethics global leader, I’ve observed a significant shift in how AI engineers must operate. They are no longer just talking to other AI engineers about how to build the technology. Now they need to engage with those who understand how their creations will affect the communities using these services. Several years ago at IBM, we recognized that AI engineers needed to incorporate additional steps into their development process, both technical and administrative. We created a playbook providing the right tools for testing issues like bias and privacy. But understanding how to use these tools properly is crucial. For instance, there are many different definitions of fairness in AI. Determining which definition applies requires consultation with the affected community, clients, and end users. In her role at IBM, Francesca Rossi cochairs the company’s AI ethics board to help determine its core principles and internal processes. Francesca Rossi Education plays a vital role in this process. When piloting our AI ethics playbook with AI engineering teams, one team believed their project was free from bias concerns because it didn’t include protected variables like race or gender. They didn’t realize that other features, such as zip code, could serve as proxies correlated to protected variables. Engineers sometimes believe that technological problems can be solved with technological solutions. While software tools are useful, they’re just the beginning. The greater challenge lies in learning to communicate and collaborate effectively with diverse stakeholders. The pressure to rapidly release new AI products and tools may create tension with thorough ethical evaluation. This is why we established centralized AI ethics governance through an AI ethics board at IBM. Often, individual project teams face deadlines and quarterly results, making it difficult for them to fully consider broader impacts on reputation or client trust. Principles and internal processes should be centralized. Our clients—other companies—increasingly demand solutions that respect certain values. Additionally, regulations in some regions now mandate ethical considerations. Even major AI conferences require papers to discuss ethical implications of the research, pushing AI researchers to consider the impact of their work. At IBM, we began by developing tools focused on key issues like privacy, explainability, fairness, and transparency. For each concern, we created an open-source tool kit with code guidelines and tutorials to help engineers implement them effectively. But as technology evolves, so do the ethical challenges. With generative AI, for example, we face new concerns about potentially offensive or violent content creation, as well as hallucinations. As part of IBM’s family of Granite models, we’ve developed safeguarding models that evaluate both input prompts and outputs for issues like factuality and harmful content. These model capabilities serve both our internal needs and those of our clients. While software tools are useful, they’re just the beginning. 
The greater challenge lies in learning to communicate and collaborate effectively. Company governance structures must remain agile enough to adapt to technological evolution. We continually assess how new developments like generative AI and agentic AI might amplify or reduce certain risks. When releasing models as open source, we evaluate whether this introduces new risks and what safeguards are needed. For AI solutions raising ethical red flags, we have an internal review process that may lead to modifications. Our assessment extends beyond the technology’s properties (fairness, explainability, privacy) to how it’s deployed. Deployment can either respect human dignity and agency or undermine it. We conduct risk assessments for each technology use case, recognizing that understanding risk requires knowledge of the context in which the technology will operate. This approach aligns with the European AI Act’s framework—it’s not that generative AI or machine learning is inherently risky, but certain scenarios may be high or low risk. High-risk use cases demand additional scrutiny. In this rapidly evolving landscape, responsible AI engineering requires ongoing vigilance, adaptability, and a commitment to ethical principles that place human well-being at the center of technological innovation.
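Rossi’s zip-code example is worth making concrete. Here is a toy check, on invented data, showing how a model that never sees a protected attribute can still produce very different outcomes across groups when a correlated proxy feature drives its decisions:

```python
# Toy illustration of the proxy problem: the "model" below only looks at a
# made-up zip_code feature (approve 10001, deny 20002), yet its approval
# rates differ sharply by group because zip_code correlates with group
# membership. All data are invented for illustration.
from collections import defaultdict

# (zip_code, protected_group, model_approved)
records = [
    ("10001", "A", True),  ("10001", "A", True),  ("10001", "B", True),
    ("20002", "B", False), ("20002", "B", False), ("20002", "A", False),
    ("20002", "B", False), ("10001", "A", True),
]

approved = defaultdict(int)
total = defaultdict(int)
for _zip, group, ok in records:
    total[group] += 1
    approved[group] += int(ok)

rates = {g: approved[g] / total[g] for g in total}
gap = abs(rates["A"] - rates["B"])
print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags a likely proxy
```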

  • Video Friday: High Mobility Robots for Logistics
    by Evan Ackerman on 25. April 2025. at 16:00

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA London Humanoids Summit: 29–30 May 2025, LONDON IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN 2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON RSS 2025: 21–25 June 2025, LOS ANGELES ETH Robotics Summer School: 21–27 June 2025, GENEVA IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025: 5–7 September 2025, SHENZHEN CoRL 2025: 27–30 September 2025, SEOUL IEEE Humanoids: 30 September–2 October 2025, SEOUL World Robot Summit: 10–12 October 2025, OSAKA, JAPAN IROS 2025: 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Throughout the course of the past year, LEVA has been designed from the ground up as a novel robot to transport payloads. Although the use of robotics is widespread in logistics, few solutions offer the capability to efficiently transport payloads both in controlled and unstructured environments. Four-legged robots are ideal for navigating any environment a human can, yet few have the features to autonomously move payloads. This is where LEVA shines. By combining both wheels (a means of locomotion ideally suited for fast and precise motion on flat surfaces) and legs (which are perfect for traversing any terrain that humans can), LEVA strikes a balance that makes it highly versatile. [ LEVA ] You’ve probably heard about this humanoid robot half-marathon in China, because it got a lot of media attention, which I presume was the goal. And for those of us who remember when Asimo running was a big deal, marathon running is still impressive in some sense. It’s just hard to connect that to these robots doing anything practical, you know? [ NBC ] A robot navigating an outdoor environment with no prior knowledge of the space must rely on its local sensing to perceive its surroundings and plan. This can come in the form of a local metric map or local policy with some fixed horizon. Beyond that, there is a fog of unknown space marked with some fixed cost. In this work, we make a key observation that long-range navigation only necessitates identifying good frontier directions for planning instead of full-map knowledge. To this end, we propose the Long Range Navigator (LRN), which learns an intermediate affordance representation mapping high-dimensional camera images to affordable frontiers for planning, and then optimizing for maximum alignment with the desired goal. Through extensive off-road experiments on Spot and a Big Vehicle, we find that augmenting existing navigation stacks with LRN reduces human interventions at test time and leads to faster decision making indicating the relevance of LRN. [ LRN ] Goby is a compact, capable, programmable, and low-cost robot that lets you uncover miniature worlds from its tiny perspective. On Kickstarter now, for an absurdly cheap US $80. [ Kickstarter ] Thanks, Rich! HEBI robots demonstrated inchworm mobility during the Innovation Faire of the FIRST Robotics World Championships in Houston. [ HEBI ] Thanks, Andrew! Happy Easter from Flexiv! 
[ Flexiv ] We are excited to present our proprietary reinforcement learning algorithm, refined through extensive simulations and vast training data, enabling our full-scale humanoid robot, Adam, to master humanlike locomotion. Unlike model-based gait control, our RL-driven approach grants Adam exceptional adaptability. On challenging terrains like uneven surfaces, Adam seamlessly adjusts stride, pace, and balance in real time, ensuring stable, natural movement while boosting efficiency and safety. The algorithm also delivers fluid, graceful motion with smooth joint coordination, minimizing mechanical wear, extending operational life, and significantly reducing energy use for enhanced endurance. [ PNDbotics ] Inside the GRASP Lab—Dr. Michael Posa and DAIR Lab. Our research centers on control, learning, planning, and analysis of robots as they interact with the world. Whether a robot is assisting within the home or operating in a manufacturing plant, the fundamental promise of robotics requires touching and affecting a complex environment in a safe and controlled fashion. We are focused on developing computationally tractable and data efficient algorithms that enable robots to operate both dynamically and safely as they quickly maneuver through and interact with their environments. [ DAIR Lab ] I will never understand why robotics companies feel the need to add the sounds of sick actuators when their robots move. [ Kepler ] Join Matt Trossen, founder of Trossen Robotics, on a time-traveling teardown through the evolution of our robotic arms! In this deep dive, Matt unboxes the ghosts of robots past—sharing behind-the-scenes stories, bold design decisions, lessons learned, and how the industry itself has shifted gears. [ Trossen ] This week’s Carnegie Mellon University Robotics Institute (CMU RI) seminar is a retro edition (2008!) from Charlie Kemp, previously of the Healthcare Robotics Lab at Georgia Tech and now at Hello Robot. [ CMU RI ] This week’s actual CMU RI seminar is from a much more modern version of Charlie Kemp. When I started in robotics, my goal was to help robots emulate humans. Yet as my lab worked with people with mobility impairments, my notions of success changed. For assistive applications, emulation of humans is less important than ease of use and usefulness. Helping with seemingly simple tasks, such as scratching an itch or picking up a dropped object, can make a meaningful difference in a person’s life. Even full autonomy can be undesirable, since actively directing a robot can provide a sense of independence and agency. Overall, many benefits of robotic assistance derive from nonhuman aspects of robots, such as being tireless, directly controllable, and free of social characteristics that can inhibit use. While technical challenges abound for home robots that attempt to emulate humans, I will provide evidence that human-scale mobile manipulators could benefit people with mobility impairments at home in the near future. I will describe work from my lab and Hello Robot that illustrates opportunities for valued assistance at home, including supporting activities of daily living, leading exercise games, and strengthening social connections. I will also present recent progress by Hello Robot toward unsupervised, daily in-home use by a person with severe mobility impairments. [ CMU RI ]
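
For readers curious about the Long Range Navigator video above, here is a toy sketch, in Python, of the frontier-selection step it describes: score a handful of candidate directions by predicted traversability and by alignment with the goal, then steer toward the best one. The traversability values and the scoring weight are placeholders, not the authors’ learned affordance model.

```python
# Toy sketch of frontier selection: pick the direction that best trades off
# predicted traversability against alignment with a (possibly distant) goal.
import numpy as np

def pick_frontier(candidate_dirs: np.ndarray,
                  traversability: np.ndarray,
                  goal_dir: np.ndarray,
                  weight: float = 0.5) -> int:
    """candidate_dirs: (N, 2) unit vectors for frontier directions.
    traversability: (N,) scores in [0, 1], here a stand-in for a learned model.
    goal_dir: (2,) unit vector toward the goal.
    Returns the index of the best frontier direction."""
    alignment = candidate_dirs @ goal_dir                     # cosine alignment in [-1, 1]
    score = weight * traversability + (1 - weight) * (alignment + 1) / 2
    return int(np.argmax(score))

# Example: eight compass directions, goal roughly to the northeast.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
trav = np.array([0.9, 0.2, 0.8, 0.7, 0.1, 0.6, 0.9, 0.4])    # placeholder scores
goal = np.array([1.0, 1.0]) / np.sqrt(2)

best = pick_frontier(dirs, trav, goal)
print(best, dirs[best])   # index and heading of the frontier to steer toward
```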

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9. February 2022. at 15:31

    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets.
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images.
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it.
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data, is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs?
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
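
To make the “flag the inconsistent subset” idea in this interview concrete, here is a minimal sketch of that kind of tooling: given several annotators’ labels per image, surface the examples where they disagree so those get relabeled first, instead of collecting more data everywhere. The annotation format, class names, and toy records are hypothetical placeholders, not LandingLens internals.

```python
# Minimal sketch of a label-consistency check for data-centric iteration.
from collections import Counter, defaultdict

annotations = [  # (image_id, annotator, label): toy, made-up data
    ("img_001", "a1", "scratch"),  ("img_001", "a2", "scratch"),
    ("img_002", "a1", "pit_mark"), ("img_002", "a2", "dent"),
    ("img_003", "a1", "dent"),     ("img_003", "a2", "dent"),
]

def flag_inconsistent(rows):
    """Group labels per image and return (a) images whose annotators disagree
    and (b) how often each class is involved in a disagreement, so relabeling
    effort can be targeted at the noisiest slice of the data set."""
    by_image = defaultdict(list)
    for image_id, _, label in rows:
        by_image[image_id].append(label)
    conflicts, per_class = [], Counter()
    for image_id, labels in by_image.items():
        if len(set(labels)) > 1:
            conflicts.append((image_id, labels))
            per_class.update(set(labels))
    return conflicts, per_class

conflicts, per_class = flag_inconsistent(annotations)
print(conflicts)   # [('img_002', ['pit_mark', 'dent'])] -> relabel these first
print(per_class)   # classes most often involved in disagreement
```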

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8. February 2022. at 14:00

    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
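
Here is a minimal, generic sketch of the surrogate-model workflow Gorr describes: evaluate the expensive physics-based model at a small number of design points, fit a cheap statistical surrogate, and push the large parameter sweep or Monte Carlo study onto the surrogate. The expensive_simulation function is a stand-in for a real solver, and the example uses Python with scikit-learn rather than any particular vendor tool.

```python
# Sketch of a surrogate-model (reduced-order) workflow for design sweeps.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Placeholder for a slow physics-based model, e.g. a response metric
    as a function of one process or design parameter."""
    return np.sin(3 * x[:, 0]) + 0.1 * x[:, 0] ** 2

# 1. A small, affordable design-of-experiments run on the true model.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, size=(20, 1))
y_train = expensive_simulation(x_train)

# 2. Fit the cheap surrogate to those few evaluations.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
surrogate.fit(x_train, y_train)

# 3. Run the large Monte Carlo sweep on the surrogate, not the solver.
x_mc = rng.uniform(0.0, 3.0, size=(100_000, 1))
y_mc, y_std = surrogate.predict(x_mc, return_std=True)
print(f"predicted mean response: {y_mc.mean():.3f}")
print(f"largest surrogate uncertainty: {y_std.max():.3f}")
```

The uncertainty estimate is one reason a Gaussian process is a common surrogate choice here: as Gorr notes, the surrogate is less accurate than the physics-based model, and the predicted standard deviation indicates where additional expensive simulations would be most informative.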

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7. February 2022. at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
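
A back-of-the-envelope calculation shows why a thin hBN dielectric shrinks the capacitor footprint so dramatically compared with the 100-by-100-micrometer coplanar pads described above. The numbers below are illustrative assumptions rather than figures from the MIT paper: a transmon-scale shunt capacitance of roughly 80 femtofarads, a relative permittivity for hBN of about 3.5, and a dielectric a few tens of nanometers thick.

```python
# Back-of-the-envelope check using the parallel-plate formula C = eps0 * eps_r * A / d.
# All numbers below are hedged assumptions for illustration only.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def plate_area(capacitance_f: float, eps_r: float, thickness_m: float) -> float:
    """Plate area (m^2) needed to reach a target parallel-plate capacitance."""
    return capacitance_f * thickness_m / (EPS0 * eps_r)

target_c = 80e-15         # F, assumed transmon-scale shunt capacitance
area = plate_area(target_c, eps_r=3.5, thickness_m=20e-9)
side_um = (area ** 0.5) * 1e6

coplanar_area_um2 = 100 * 100   # the ~100 x 100 um lateral pads cited above
print(f"parallel-plate side: ~{side_um:.1f} um per side")
print(f"area reduction vs. coplanar pads: ~{coplanar_area_um2 / (area * 1e12):.0f}x")
```

With those assumptions the plate comes out to only a few micrometers on a side, a reduction in area of well over a hundredfold, which is consistent in order of magnitude with the density gain the researchers report.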
