The CMU Startup Speeding Grid Innovation
A CMU startup uses insights from chip design to revolutionize the U.S. electric grid.
Why it matters: The lengthy studies required before adding new, renewable power sources to the nation’s grid are one of the key challenges to meeting AI-driven growth in energy demand. This process creates an interconnection queue backlog that bottlenecks efforts to improve grid reliability and resilience. The CMU-grown startup Pearl Street Technologies helps shorten that process so developers and utilities can add more power, faster.
Catch up quick: Larry Pileggi, head of Carnegie Mellon’s Department of Electrical and Computer Engineering, is a global leader in the design automation of very large integrated circuits (ICs), or chips.
- While the North American power grid is highly complex, Pileggi realized a few years ago that it is less complex than the billion-transistor chip designs he had been analyzing.
- Drawing on their IC know-how, he and his graduate students built a platform called SUGAR that has become a leading tool for the analysis and optimization of power systems.
How it works: SUGAR uses the design and simulation tools from the computer chip industry to reduce months of power system engineering effort to minutes.
- Development projects to incorporate new forms of electricity generation, particularly renewables such as solar energy, once required several months of study, creating a significant backlog for implementation.
- One cannot simply build a solar farm and connect it to the grid. It is necessary to first build a model of the grid and simulate how it will behave when the solar farm is added. These analyses must consider all possible scenarios for the new grid, which can be a daunting task.
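To make the "model and simulate" step concrete, here is a heavily simplified, illustrative sketch of the kind of analysis an interconnection study involves: a DC power-flow calculation on a made-up three-bus network, checking whether line flows stay within limits after a hypothetical solar farm is added. The network data, limits, and injections are all invented for illustration; this is not SUGAR's method or a real study.

```python
import numpy as np

# Toy grid: (from_bus, to_bus, susceptance, flow limit in per-unit).
# Bus 0 is the slack bus that balances generation and load.
lines = [(0, 1, 10.0, 1.0),
         (1, 2, 10.0, 1.0),
         (0, 2, 10.0, 1.0)]
n = 3

def flows(injections):
    # Build the bus susceptance matrix, drop the slack row/column,
    # solve B * theta = P for voltage angles, then compute line flows.
    B = np.zeros((n, n))
    for i, j, b, _ in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])
    return [b * (theta[i] - theta[j]) for i, j, b, _ in lines]

# Scenario: a hypothetical 0.8 p.u. solar farm at bus 2 serving load at bus 1.
p = np.array([0.0, -0.8, 0.8])
for flow, (_, _, _, limit) in zip(flows(p), lines):
    status = "OK" if abs(flow) <= limit else "OVERLOAD"
    print(f"flow {flow:+.2f} p.u. (limit {limit:.2f}) {status}")
```

A real interconnection study repeats this kind of calculation, with far larger AC models, across thousands of scenarios and contingencies, which is why automating it saves months of engineering effort.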
SUGAR can perform simulations and optimization in hours, accomplishing what might otherwise require a team of engineers working for six months or more. The technology has other grid planning applications as well, including:
- Transmission expansion planning
- Extreme event analysis
- Base case creation
David Rosner, commissioner of the Federal Energy Regulatory Commission (FERC), praised SUGAR in a letter highlighting interconnection automation software platforms, specifically referring to a planning study that was performed for a large region of the U.S. transmission grid:
“One application reproduced the manual study of a large interconnection cluster — which took nearly two years to complete — in just 10 days and arrived at largely similar results.”
Enverus, which offers a platform for management, development, and acquisition across the entire energy value chain, announced its acquisition of Pearl Street earlier this year.
The big picture: At Carnegie Mellon, we are committed to accelerating the path from research to commercialization. We spin off companies that move engineering forward in ways that directly benefit society by developing practical solutions.
- Nowhere is this more important than at the intersection of AI and energy.
- As Pearl Street shows, it’s not just about providing energy for AI. It’s also about using AI for energy: developing and deploying environmentally responsible and economically feasible solutions.
- CMU's global leadership in the design, engineering, and systems of AI enables our researchers, founders, and students to bring a new lens to the biggest energy challenges.
Unlocking American Research Dominance: Opportunities and Chokepoints in AI for Science
By: Aaron Bartnick
AI could revolutionize scientific research and power a second century of American leadership, but only if we can overcome gaps in interdisciplinary talent, data accessibility, algorithm development, and computational resources.
Why it matters: Recent advances in AI could rapidly accelerate innovation in fields from agriculture and energy to materials science and cell biology. AI could make it possible to read and digest every publication in almost any field, capture that knowledge in a single system, discover patterns across complex datasets, then make predictions with unrivaled visibility and test them through automated experimentation that improves repeatability. This could revolutionize the pace and scale of scientific discovery, ensuring America continues to lead the world in advanced technologies.
The challenge ahead: To realize the promise of AI for Science, CMU research shows we need to address four key chokepoints:
- Interdisciplinary expertise at the intersection of AI and a given scientific field.
- Access to machine-readable publications and underlying experimental data.
- Algorithmic advances to move from identifying correlations in this data to discovering causal relationships.
- Computational and energy resources necessary to run such algorithms.
Of these, the two most important bottlenecks specifically for scientific research are limited interdisciplinary expertise and access to publicly available, machine-readable data.
- Interdisciplinary expertise: Scientists spend years developing expertise in a particular field or method, and often prefer to leverage that expertise rather than pursue additional training in AI. Funding interdisciplinary PhD programs in areas with specific industry or national security applications — particularly those located in industrial hubs that might otherwise struggle to attract and retain top talent — could help build this new generation of scientific leadership.
- Accessible data: Many scientific data sets are either (a) far smaller and more complex than those used to train chatbot and deep learning models, or (b) in formats that are not machine-readable and would take untold hours to make usable. And all scientific disciplines suffer from barriers that limit data sharing across researchers and institutions, including paywalls limiting AI’s ability to access journal articles and institutions limiting access to proprietary datasets. Working with Congress and leading corporations to establish data accessibility standards and offset the costs of opening scientific journals to AI (and the public) could help unlock these valuable resources.
Algorithmic development and access to compute, energy, and capital are also important constraints on applying AI to scientific research. But we have thus far found these bottlenecks are generally no worse for scientific applications than for broader AI use cases.
What we’re doing: Carnegie Mellon is joining with federal agencies and partners across national labs, academia, and industry to spearhead an initiative to build a national network of AI-enabled autonomous experimentation laboratories. The university has begun convening scholars from across the country and is working in partnership with a major U.S. technology company to quantify the scale of these bottlenecks, identify cost-effective solutions, and provide more detailed recommendations to leaders across government, industry, and academia.
The bottom line: If we can address several key chokepoints — most importantly, developing an interdisciplinary workforce and making data more accessible — AI can help ensure America’s continued global dominance in scientific discovery and innovation.
How AI Can Unlock Fusion Energy
By: Jeff Schneider
Combining generative AI with reinforcement learning can help the U.S. win the race to fusion energy by finding better ways to reach the plasma temperatures and pressures needed for fusion power plants. Researchers at Carnegie Mellon University are using these advances in AI to unlock the promise of unlimited, clean energy from fusion.
Why it matters: Consider some of the world's grand challenges: producing enough food, the availability of clean water, or handling climate change. These are all primarily energy problems. We already have the ability to produce and deliver food and water — it’s just energy-intensive, and thus expensive to do so. The biggest contributor to climate change is our inability to generate sufficient energy without unwanted side effects.
Nuclear fusion, the reaction that powers the sun and stars, holds the promise of solving these problems here on Earth.
- There is a nearly unlimited supply of fuel, some of which can be derived from seawater.
- It does not produce long-term harmful byproducts, and its power plants cannot melt down.
- The biggest hurdle to delivering on this promise is our inability to sustain the extremely high plasma temperatures and pressures needed for a financially viable power plant. Solving that challenge is where AI comes in.
The big question: The traditional research approach to hard scientific and engineering challenges like nuclear fusion centers on individual scientists: a single-scientist, single-hypothesis, single-experiment, single-dataset, single-conclusion approach to science. That has worked in the past, but it is too slow.
Now, AI algorithms, working alongside scientists, can reason about all the data, all the hypotheses, all the accumulated scientific knowledge, and design experiments at a much faster rate than individual scientists alone. The question we ask is how to design those AI algorithms, and what is the collaborative discovery process that delivers those breakthroughs for nuclear fusion.
Catch up quick: There are two recent developments, and one that is still missing, that enable this new approach to science for nuclear fusion.
- Advances in large language models: They now provide that repository of scientific knowledge that previously wasn't very accessible to other AI algorithms.
- Advances in reinforcement learning and discovery algorithms: They allow for work at the scale needed for hard problems like nuclear fusion.
Finally, the missing development is the national capacity to generate more data through experiments on our research fusion devices. These devices were built and funded to generate enough data to support the old single-scientist model, which couldn't handle much data anyway. AI can handle more data and needs it to make these big advances.
What we’re doing: With funding from the Department of Energy, the Auton Lab at CMU, in collaboration with the Princeton Plasma Physics Lab and DIII-D at General Atomics, has demonstrated AI’s ability to perform closed-loop control of temperature and density profiles, optimize heating profiles for stability, stably reach high pressures, and do so on multiple fusion devices.
Policy takeaways: For the U.S. to fully leverage the advances in nuclear fusion made possible by AI, policymakers should reconsider the national capability to run fusion experiments and dramatically expand operations to run more experiments, and thus obtain more data for AI.
- In the near term, this means the Department of Energy buying more experiment time on the devices we already have and allocating it to AI-guided data collection.
- In the longer term, it means building new devices designed for high throughput experimentation.
The U.S. needs a national AI for Fusion Initiative to support model development, empower the national labs to expand access, and develop a lab-to-manufacturing strategy for devices.
Building American Strength and Resiliency in Critical Minerals for Energy Storage
By: Jeremy J. Michalek, Aaron Bartnick, Erica R.H. Fuchs, Costa Samaras, Mike Starz
The U.S. government has galvanized national focus and action on the supply chain vulnerabilities for critical minerals vital to national security, economic competitiveness and societal well-being. Carnegie Mellon University research shows how strategic alignment in policy, trade agreements, materials recovery and workforce training is vital for U.S. critical mineral resilience.
Why it matters: America’s strategic economic and military strength depends on reliable access to critical materials for energy storage. Energy storage is central to both civilian and military capabilities. Demand for energy storage is expected to account for half of mineral demand growth from clean energy technologies over the next two decades, and energy is key for defense.
Key insight: CMU Critical Technology Initiative research shows how the supply of critical minerals used to make batteries is highly concentrated and how disruptions in that supply can have far-reaching consequences.
- Concentrated supply: Critical minerals used to make batteries are extracted primarily from a handful of nations around the world, and the refining, processing and manufacturing of battery materials is heavily concentrated in China, posing supply chain risks for the U.S. economy and military. Because China dominates multiple stages of the supply chain, if China were to stop exports, 80% to 92% of global supply of some common battery chemistries could be disrupted.
- Supply disruptions: Realistic supply chain disruption scenarios, such as export restrictions from China or natural disasters in the Democratic Republic of Congo, could raise new car prices, including gasoline cars, by more than $1,000 each, costing consumers over $10 billion.
Policy takeaways: Alignment in manufacturing policy, trade partnerships, materials recovery and workforce training can help bolster U.S. critical minerals strength and resiliency.
- Manufacturing policy: The U.S. represents a small fraction of global battery and battery material production today, in part because China has heavily invested in and subsidized the industry.
- But U.S. production costs can be competitive with China given relatively modest U.S. incentives.
- In particular, the lithium iron phosphate (LFP) battery chemistry is the cheapest, most robust, and lowest-polluting of the lithium-ion battery chemistries, with the fewest critical materials to source. But China produces 90% of LFP cathode material and has built so much capacity that it is difficult for other nations to compete on cost.
- Targeted U.S. investment in innovation to improve energy density or in developing U.S. production could be an important part of a national strategy, and U.S. investment in lithium production could provide the primary critical material and reduce disruption effects.
- Trade partnerships: Growing and deepening robust trade relationships with partners and allies can reduce risks for emerging materials.
- Materials recovery: While the U.S. does not have all of the critical minerals it needs on its own soil, we can reduce reliance on imports cost effectively by recovering materials from used batteries and repurposing used batteries for second life stationary storage applications. Such strategies could also mitigate supply chain disruption effects.
- Workforce training: Investment in U.S. production requires a prepared workforce. Strategic location targeting and workforce training can help match worker skills with industrial skill demand, which can be larger for electric than conventional technologies. It can also create pipelines between occupations with declining or threatened employment and growth opportunities.
The bottom line: Energy security is national security. Today, a growing portion of our energy security is dependent on minerals and batteries produced largely in a few locations that include adversaries and unstable nations. The United States has the capacity and the national security imperative to address this by making targeted investments in technology, industry, trade, materials recovery and workforce training.
AI and National Security: Harden the Grid To Win the Race
By: Harry Krejsa and Audrey Kurth Cronin
America’s electric grid represents a ‘soft target’ for international adversaries in the AI arms race. The country needs a new grid security framework to adapt to the changing nature of the threat.
Why it matters: Today, energy infrastructure is on the frontlines of global AI dominance. As a consequence, it represents a new, national target for international adversaries. The People’s Republic of China (PRC) is exploiting vulnerabilities in America’s aging infrastructure, pre-positioning malware to sabotage military and civilian services alike.
Key insight: To address this threat, the United States needs a new, national strategy to defend the grid. A Carnegie Mellon Institute for Strategy & Technology (CMIST) report shows how the U.S. can seize the opportunity of AI-driven energy expansion to provide both abundant power and the increased resilience our national security demands.
What we found: We are faced with a narrow window to transform vulnerability into strength. Every new data center connection, every grid modernization project, and every infrastructure investment is an opportunity to build something fundamentally more defensible and resilient. To do so, the U.S. must:
- Update federal cybersecurity frameworks to reflect how modern energy systems differ from their predecessors.
- Ensure industry coordination bodies integrate stakeholders who understand both cutting-edge technology and security imperatives.
- Adopt a strategic approach that recognizes electricity generation, transmission, and storage as crucial to U.S. great power competition.
Catch up quick: We now know the PRC is actively embedding disruptive cyber capabilities on American critical infrastructure. And the Russian Federation is likely following Beijing’s playbook, and has already demonstrated the power of such capabilities in cyber attacks against Ukraine, disrupting its electrical grid and depriving millions of power.
In fact, these threats may be intensifying precisely because the AI race is accelerating. If American advances in AI threaten to cement a decisive technological lead — with all the military and economic advantages that some believe will come with it — Beijing's satisfaction with second-place status may shift dangerously. Our adversaries understand that crippling America's electricity infrastructure could derail our AI ambitions before we have the opportunity to turn a tenuous edge into an enduring advantage.
What's next: The vast majority of our patchwork of century-old infrastructure has internet connectivity awkwardly bolted onto systems that were never meant to be accessible to the outside world. This makes it a prime target for threats.
- Upgrading energy infrastructure for AI offers a once-in-a-generation opportunity to replace these vulnerable legacy systems with inherently more defensible technologies.
- Many modern energy systems, like advanced nuclear reactors, utility-scale batteries, and inverter-based resources, were designed from the ground up to be “digitally-native” and software-defined, capable of updates and adaptations as threats evolve.
The bottom line: We cannot afford to power the future with infrastructure pre-compromised by our most capable adversaries. Ensuring that power flows abundantly and securely may determine not just who wins the AI race, but whether we can compete at all.
Measuring AI’s Energy and Environmental Footprint
By: Ramayya Krishnan, Mitul Jhaveri and Jay Palat
The rapid expansion of artificial intelligence is driving a surge in data center energy consumption, water use, carbon emissions, and electronic waste — and decision-makers need trusted information about these impacts and how they will change in the future.
Why it matters: Without standardized metrics and reporting, policymakers and grid operators cannot accurately track, manage, or mitigate AI’s growing resource footprint. Given the dramatic differences in energy and resource constraints region-by-region, without open and accurate information, AI's growing energy footprint could lead to significant and unexpected local negative impacts.
Catch up quick: Generative AI and large-scale cloud computing are driving an unprecedented increase in energy demand. But companies often use outdated or narrow measures and purchase renewable energy credits to address sustainability concerns.
- Their true carbon footprint may be much higher than the figures they report.
- A single hyperscale AI data center can consume hundreds of thousands of gallons of water per day and contribute to a “mountain” of e-waste, yet only about a quarter of data center operators even track what happens to retired hardware.
The big question: If the rapid build-out of AI data centers, on top of other growing power demands, pushes global demand up by hundreds of additional terawatt-hours annually, the steady-growth assumption embedded in today’s models will be shattered.
- The International Energy Agency (IEA) forecasts that data center energy use could more than double to 945 TWh by 2030.
- Planners need far more granular, forward-looking forecasting methods to avoid driving up costs for rate-payers, last-minute scrambles to find power, and potential electricity reliability crises.
What we're doing: Carnegie Mellon University researchers developed a set of policy recommendations to establish standardized metrics for AI energy and environmental impacts across model training, inference, and data center infrastructure.
- Develop AI energy metrics: Congress should direct the Department of Energy (DOE) and the National Institute of Standards and Technology (NIST) to spearhead the creation of a phased plan to develop, implement, and operationalize standardized metrics, in close partnership with industry.
- Measure the AI energy lifecycle: NIST should lead a process to create new standardized metrics that capture AI’s energy and environmental footprint across its entire lifecycle — training, inference, data center operations (cooling/power), and hardware manufacturing/disposal.
- Mandate reporting: DOE and the Environmental Protection Agency should lead a six-month voluntary AI energy reporting program, and gradually move toward a mandatory reporting mechanism. The data would feed straight into Energy Information Administration outlooks and Federal Energy Regulatory Commission grid planning.
The bottom line: AI’s extraordinary capabilities should not come at the expense of our energy security or environmental sustainability. By standardizing how we measure AI’s footprint, firms have the incentives to innovate for sustainability and the U.S. can be better prepared for the growth in power consumption while maintaining its leadership in artificial intelligence.
Go deeper: Read the full policy memo CMU wrote for the Federation of American Scientists and the MIT Tech Review article.
AI and Its Growing Energy Demand
By: Zico Kolter
AI models, especially the large models powering systems like ChatGPT, have exploded in capabilities and usage over the past years. In response, there have been many commitments from large companies to spend tens or hundreds of billions of dollars to build data centers that can power these AI models.
Why it matters: These data centers, in turn, need to be powered by large amounts of electricity: it’s not uncommon for newly proposed data centers to consume over 1 Gigawatt (GW) of electrical power. By comparison, the entire Pittsburgh area uses an average of between 1-2 GW of electrical power over the course of a day.
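The conversion behind that comparison is simple arithmetic: a facility drawing 1 GW continuously for a year consumes about 8.76 terawatt-hours of energy.

```python
# Energy consumed by a 1 GW facility running continuously for one year.
power_gw = 1.0
hours_per_year = 24 * 365          # 8,760 hours
energy_twh = power_gw * hours_per_year / 1000  # GWh -> TWh
print(f"{energy_twh:.2f} TWh per year")        # -> 8.76 TWh per year
```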
The big questions: Why do AI models use this much electricity? What is driving this rapid increase in demand? And, perhaps most subjectively, will the resulting increases in AI capabilities and availability be “worth” the cost in power?
Catch up quick: Doing any kind of computation on a computer requires power: to use your laptop, you need to plug it into an outlet, and if you’re doing a particularly intensive task, you’ll likely even notice it heating up from that power consumption. AI models work the same way but on a much larger scale.
The AI models that power systems like ChatGPT work by first transforming the text you type into numbers, then multiplying and adding these numbers many trillions of times, very quickly, to eventually produce a written response, an image, or a video. The costs associated with data centers correspond to: 1) the cost of the building/facilities themselves; 2) the cost of the computer chips that run the computation; and finally 3) the electrical power used to run the chips and run the cooling systems to prevent them from overheating.
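The text-to-numbers-to-text pipeline can be sketched in a toy example. The bare-bones "model" below uses random weights and a handful of matrix multiplications; it is purely illustrative of where the computation (and hence the power) goes, not the architecture of any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language model": token IDs become vectors, which flow through a
# stack of matrix multiplications -- the multiply-add operations that
# dominate an AI model's computation and power draw.
vocab, dim, layers = 1000, 64, 4
embed = rng.normal(size=(vocab, dim))
weights = [rng.normal(size=(dim, dim)) for _ in range(layers)]
unembed = rng.normal(size=(dim, vocab))

def next_token(token_ids):
    x = embed[token_ids].mean(axis=0)   # text -> numbers
    for w in weights:                   # repeated multiply-adds
        x = np.maximum(x @ w, 0.0)      # matmul + nonlinearity
    return int(np.argmax(x @ unembed))  # numbers -> predicted next token

print(next_token([3, 14, 15]))
```

Production models do the same kind of arithmetic with billions of parameters and trillions of operations per response, which is why specialized chips and large cooling systems are needed.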
The details: AI’s energy demands have grown rapidly for two main reasons.
- First, because of a phenomenon called the “scaling laws” of AI models. For several years, researchers have recognized that if you increase the number of computations used by an AI model (say, from 100 billion to 1 trillion to 10 trillion operations to produce an output), the performance of the resulting model will improve by a corresponding amount. The desire to create ever-more-capable models thus pushes companies toward models that use more and more computation, and hence more and more energy.
- The second reason is our increasing use of AI. In the past 2.5 years, AI has grown from a set of niche use cases to a tool that many of us use every day. The net effect of larger models and massive growth in usage is a rapidly increasing demand for electrical power, even in light of other efficiency improvements.
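The scaling-law relationship in the first bullet is often modeled as a power law: error falls smoothly, but slowly, as compute grows. The constants below are arbitrary placeholders chosen purely to show the shape of the relationship, not measured values from any real model.

```python
def loss(compute_ops, a=1.0, alpha=0.05):
    # Illustrative power law: loss decreases as compute increases.
    # a and alpha are made-up placeholder constants.
    return a * compute_ops ** -alpha

# 100 billion -> 1 trillion -> 10 trillion operations per output:
for c in (1e11, 1e12, 1e13):
    print(f"{c:.0e} ops -> loss {loss(c):.3f}")
```

Note that each tenfold increase in compute buys only a modest, constant-factor improvement, which is exactly why chasing better models means rapidly growing computation and energy use.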
Worth noting: We don’t always need more electrical power to run AI models: the efficiency of computer chips is improving exponentially, as is the efficiency of the underlying algorithms themselves, which can offset much of the increase in power.
The big picture: All of this finally leads to the natural question: is this increasing demand for power “worth” its cost? The answer, naturally, depends on our individual perception of the value of AI systems. If you are inherently skeptical about the value provided by AI systems, then you may feel that the benefits do not outweigh the energy costs. But if (and I now have to acknowledge that I place myself in this camp) you believe that AI has the potential to substantially transform our world for the better, then the energy cost may seem to be well worth it.
AI technology, if deployed responsibly, has the potential to drastically increase productivity, to enable us to create software in a faster and more robust manner, to advance science, and to ultimately benefit the human condition. Most revolutions that ultimately have increased the quality of life for the majority of humanity — the industrial revolution, modern transportation, and the introduction of computers — all came with associated increases in energy cost, and AI will likely be no different.
What's next: As work across Carnegie Mellon shows, AI has the potential to drastically improve our energy consumption as well, assisting in developing more efficient techniques for grid operation, building better materials for batteries, and potentially even truly revolutionizing energy through accelerating the development of technologies like nuclear fusion. These are all big bets, to be clear, and advancing science is never a sure thing, but AI at its best can be a unique enabler of so many beneficial downstream technologies.
As we move AI technology forward and build the infrastructure needed to power this revolution, it is incumbent upon all of us to ensure that the positive impacts of AI are worth the substantial energy cost. Owing to work at Carnegie Mellon and elsewhere, we are well-positioned to meet this challenge.
AI Materials Design: Enabling the Next Generation of US Energy Infrastructure
By: Elizabeth Dickey, Mohadeseh Taheri-Mousavi, Emma Strubell
AI is poised to revolutionize the way new materials are discovered and deployed, a shift that has the potential to speed up the development of novel materials necessary for the future U.S. energy infrastructure. Carnegie Mellon University’s field-leading materials design and manufacturing researchers have the expertise to lead the nation in accelerating the development of innovative materials solutions across the energy sector.
Why it matters: The U.S. needs to develop new materials with superior properties — such as radiation tolerance, corrosion resistance in extreme environments, and high thermal conductivity — to build next-generation energy systems like advanced nuclear reactors, hydrogen infrastructure and grid-scale storage. Without a national-scale, AI-driven approach to materials innovation, the U.S. risks falling behind in the race to build energy systems that are efficient, secure, and sustainable.
The path forward: Designing advanced materials is a complex challenge that is not just about choosing the right elements, but also requires understanding how those materials are processed and manufactured. Adding to the difficulty, researchers must account for real-world constraints like energy consumption, the availability of rare or critical elements, or variations in raw materials.
To make this process faster and more efficient, researchers are using a mix of automated simulations and experiments, along with decades of data drawn from across scientific fields. With human oversight, virtual AI agents can now take on the heavy lifting: identifying material design needs, writing and running simulations, and directing lab robots to perform experiments and provide feedback. The virtual agents can pull together information from every area of science — text, images, audio, and video — and make informed decisions about what to do next, iterating until a material achieves the required performance. These methods build on progress from the Materials Genome Initiative, which has already cut the design cycle from 20 years to just seven. With AI fully integrated into the process, researchers believe they can shorten the timeline further, to just 2-3 years.
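The propose-evaluate-iterate loop described above can be sketched minimally. The `evaluate` function and the stopping target below are hypothetical stand-ins for what would in practice be a physics simulation or a robot-run experiment; real systems use far more sophisticated search than random sampling.

```python
import random

random.seed(0)

def evaluate(composition):
    # Stand-in "property score" for a 3-element composition; a real loop
    # would call a simulation or direct a lab robot here.
    a, b, c = composition
    return 1.0 - (a - 0.5) ** 2 - (b - 0.3) ** 2 - (c - 0.2) ** 2

def design_loop(target=0.999, iterations=5000):
    # Propose candidates, evaluate them, keep the best (feedback),
    # and stop once the required performance is reached.
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = [random.random() for _ in range(3)]
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= target:
            break
    return best, best_score

best, score = design_loop()
print(best, score)
```

The point of AI integration is to make each step of this loop (proposing candidates, running evaluations, deciding what to try next) faster and better informed than trial-and-error search.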
What we did: Carnegie Mellon’s field-leading materials design and manufacturing researchers have significantly accelerated the discovery-to-deployment pipeline for innovative materials solutions across the energy sector. For example, the CMU-developed AlloyGPT is automating critical mineral extraction and material design. CMU is also leading initiatives in the certification and qualification of new materials before their deployment in critical applications. By combining AI with digital twins of the materials design process, from initial processing to component-level manufacturing, the whole certification process will become significantly shorter.
These initiatives are also training a new generation of scientists and engineers in AI-driven materials design. These future scientists and engineers will have deep disciplinary expertise while being able to harness and lead AI agents in a new paradigm for the design of manufactured materials.
Policy takeaways: A national AI-materials design ecosystem is a strategic asset for energy security and economic competitiveness. To realize its potential, policymakers should:
- Support sustained federal investment in AI-integrated materials research infrastructure.
- Incentivize public-private partnerships to accelerate material deployment in energy-critical applications.
- Embed material innovation into national strategies for clean energy, grid modernization, and critical mineral independence.
- Promote data standards and access, while protecting intellectual property.
- Incentivize and enable startup companies in this area.
The bottom line: Without a national-scale, AI-driven approach to material innovation, the U.S. risks falling behind in critical technologies that depend on high-performance materials — from aerospace to clean energy.
Go deeper:
- CMU’s Mohadeseh Taheri-Mousavi discusses her research, which aims to design next generation structural alloys with higher performance and contribute to material sustainability.
- CMU’s Anthony Rollett describes the new NASA Space Technology Research Initiative that will focus on modeling for metals additive manufacturing.
- An interdisciplinary team of faculty from CMU’s materials science and engineering and chemical engineering team partners with the Naval Nuclear Laboratory to develop advanced alloys.
Using AI to Defend Against Cyber Threats
In This Section
By: Lujo Bauer and Vyas Sekar
The world is approaching a pivotal moment where advances in AI, critical infrastructure, and cybersecurity are rapidly converging. The U.S. has the opportunity to stay a step ahead of its adversaries, employing AI and automation to help prevent and protect our critical infrastructure from attack. Researchers at Carnegie Mellon University’s CyLab are at the forefront of this work.
Why it matters: AI is already changing how cyber threats evolve. With increasing use of AI for code generation and workflow automation, our ability to quickly build and deploy new features far exceeds our ability to secure systems. At the same time as new cyber threats are emerging, we are relying more heavily on AI and autonomous systems within the nation’s critical industries and infrastructure, including energy, water, transportation, health care, and financial services.
Catch up quick: Existing mechanisms, protocols, and processes used to secure our critical infrastructure are based on a “human attacker” mindset. Today’s security operations rely on manually predefined rules that grant or deny access based on simple heuristic factors and human-time-scale responses. CMU researchers have observed AI-driven autonomous capabilities dramatically accelerating attacks by uncovering and exploiting vulnerabilities. The threats to our critical infrastructure will increase significantly, and our existing mechanisms are no longer sufficient. The U.S. needs to invest in autonomous cyber operations for defending critical infrastructure against future autonomous cyber threats.
What we did: Researchers at Carnegie Mellon’s CyLab released two groundbreaking studies on the use of AI for autonomous cyber operations — one focused on offensive capabilities, and the other on next-generation defense tactics.
- The research on offensive capabilities showed that when equipped with new abstractions, AI-driven red teams are able to autonomously execute complex multistage attacks against realistic networks in a matter of minutes, costing only a few tens of dollars.
- The research on defense tactics showed that deceptive strategies, if deployed correctly, can slow down and help defeat most of these AI-driven attackers.
- Together, the work lays a foundation for the use of AI-enabled systems in understanding and defending against sophisticated cyberattacks.
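The multistage attacks described above can be pictured as a plan-act-observe loop: an AI planner repeatedly picks the next host to compromise based on what the attack has already reached. The sketch below is purely illustrative and does not reflect CMU's actual framework; the `ToyNetwork` model, `choose_action` planner, and host names are all invented, and a deterministic rule stands in for the AI model that would rank candidate actions in a real system.

```python
from dataclasses import dataclass

@dataclass
class ToyNetwork:
    """A tiny network model: reachability edges and a target host."""
    edges: dict   # host -> list of hosts reachable from it
    target: str

def choose_action(compromised, net):
    """Stand-in for the AI planner: pick the next host to pivot to.
    A real framework would have a language model rank candidate actions."""
    frontier = {h for c in compromised for h in net.edges.get(c, [])}
    candidates = sorted(frontier - compromised)
    return candidates[0] if candidates else None

def run_red_team(net, start):
    """Plan-act-observe loop: lateral movement until the target falls."""
    compromised, log = {start}, []
    while net.target not in compromised:
        nxt = choose_action(compromised, net)
        if nxt is None:   # nothing reachable is left; the attack stalls
            break
        compromised.add(nxt)
        log.append(f"pivoted to {nxt}")
    return compromised, log

net = ToyNetwork(
    edges={"dmz": ["web"], "web": ["db"], "db": ["crown-jewels"]},
    target="crown-jewels",
)
owned, log = run_red_team(net, "dmz")
print(log)
```

The point of the abstraction is that once the environment is modeled this way, an autonomous agent can chain many small pivots into a complex multistage attack without human direction.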
What we found: CMU CyLab experts found that building an open, extensible, and community-driven platform for benchmarking AI-driven attacks and defenses in realistic critical-infrastructure settings is both timely and critical. This includes:
- Autonomous red-teaming with AI: Right now, only big companies can afford professional “red team” tests of their networks, and even they might run them only once or twice a year because human red teams are expensive. To empower defenders with AI-assisted tools that can autonomously catch problems before real attackers do, CMU researchers created a novel framework that uses modern AI models to autonomously plan and execute complex network red-team attacks.
- Cyber deception war gaming: As AI-based attackers become the norm, CMU researchers examined the effectiveness of cyber deception tactics to distract, detect, delay, and thwart attacks. CMU research shows how operators can proactively run a broad spectrum of deception war gaming scenarios to inform their future security posture.
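The intuition behind deception war gaming can be shown with a toy experiment (everything here is invented for illustration, not CMU's tooling or results): decoy hosts inflate the number of probes an automated attacker must make before it reaches a real target, which both delays the attack and creates opportunities for detection.

```python
from collections import deque

def steps_to_target(edges, start, target):
    """Breadth-first attacker: counts hosts probed before the target falls."""
    seen, queue, probes = {start}, deque([start]), 0
    while queue:
        host = queue.popleft()
        probes += 1
        if host == target:
            return probes
        for nxt in sorted(edges.get(host, [])):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# The real network, and the same network with honeypot decoys (hp1..hp3)
# advertised alongside the real hosts.
real = {"entry": ["app"], "app": ["db"]}
deceived = {"entry": ["app", "hp1", "hp2"], "app": ["db", "hp3"],
            "hp1": [], "hp2": [], "hp3": []}

baseline = steps_to_target(real, "entry", "db")
with_decoys = steps_to_target(deceived, "entry", "db")
print(baseline, with_decoys)   # decoys force extra probes before the target
```

War gaming scales this idea up: operators sweep many decoy placements and attacker behaviors to see which deception strategies buy the most delay and detection opportunity.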
The way forward: The growing sophistication and scale of cyber threats against our critical infrastructure necessitate a shift toward AI-enabled tools and systems that enhance U.S. cyber defenses. There is a need to:
- Develop a national research and innovation strategy in autonomous cyber defense.
- Invest in research foundations to understand the capabilities and limits of frontier AI models as autonomous attackers.
- Create realistic testbeds, datasets, and benchmarks to rigorously evaluate the effectiveness of diverse defense strategies vs. diverse attack strategies.
- Evaluate mechanisms for transitioning these foundational advances through academic-industry-public sector partnerships for both open- and closed-door security evaluations.
What's next: CMU’s CyLab researchers are laying the groundwork for a broader research initiative that leverages AI-driven autonomous cyber operations to defend critical infrastructure. Researchers are creating community “leaderboards” for evaluating AI-driven attack systems against AI-driven defense systems in realistic infrastructure environments. Their work includes developing foundational algorithmic and systems capabilities that will enable users to set up realistic cyber ranges and to design, test, and deploy novel attack and defense strategies.
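A leaderboard of this kind can be pictured as a cross-product harness: every attack strategy is run against every defense strategy in a shared test range, and attackers (or defenders) are ranked by outcomes. The sketch below is hypothetical; the strategy names, the `run_match` scoring rule, and the step budget are all invented for illustration.

```python
from itertools import product

def run_match(attacker, defender, budget=10):
    """Toy engagement: the attacker wins if its effective attack cost,
    inflated by the defender's slowdown factor, fits within the budget."""
    cost = attacker["steps"] * defender["slowdown"]
    return cost <= budget

# Hypothetical strategy pools for the cross-product evaluation.
attackers = {"fast-llm": {"steps": 4}, "careful-llm": {"steps": 8}}
defenders = {"no-deception": {"slowdown": 1.0},
             "honeypots": {"slowdown": 2.0},
             "honeypots+rotation": {"slowdown": 3.0}}

# Score every attacker against every defender and rank by wins.
scores = {a: 0 for a in attackers}
for a, d in product(attackers, defenders):
    if run_match(attackers[a], defenders[d]):
        scores[a] += 1

leaderboard = sorted(scores.items(), key=lambda kv: -kv[1])
print(leaderboard)
```

The value of a shared harness is comparability: because all entries face the same opponents in the same environment, the community can tell which attack and defense ideas actually generalize.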
The bottom line: As AI continues to reshape the cybersecurity landscape, the U.S. must support efforts across the nation to build the tools and infrastructure needed to evaluate, compare, and advance our capability to secure our critical infrastructure.
Energy & Innovation
CMU thrives at the busy intersection of AI, innovation and energy, and our world-class researchers are tackling some of society's toughest challenges today while also pioneering new solutions for tomorrow.
Learn more about this work across four areas of impact:
CMU researchers are forecasting the rising energy demands of AI data centers to inform policymakers and planners, developing strategies to reduce consumption for greater environmental and grid resilience, and exploring how to build the power infrastructure needed to sustainably support this growth.
- AI and its Growing Energy Demand
- AI-Driven Discoveries to Catalyze Energy Storage
- Sustaining AI Growth Needs Energy and Carbon Efficient Computing Infrastructure
- Accelerating Safe Microreactor Deployment with AI-Powered Knowledge
- Building American Strength and Resiliency in Critical Minerals for Energy Storage
- ‘AI Fast Lanes’ for an Electricity System to Meet the AI Moment
- Building the Robust Transmission Capacity Necessary to Power America
- How AI Can Unlock Fusion Energy
- Unlocking Energy Efficient AI
- Open Source AI May Reduce Energy Demands
Innovation in AI can enable important new capabilities, but this technological revolution can also bring potential new impacts, from increased energy bills to additional pollution to strained resources. CMU researchers are exploring these challenges and forging paths forward to a future where AI infrastructure can be developed sustainably, and in a way that responds to the needs of local communities.
- Powering Environmentally Sustainable AI
- Building Public Trust: Developing a Framework for Measuring and Reporting the Impacts of AI
- Using AI to Assess Veterans’ Exposure to Harmful Forever Chemicals
- AI is CMU’s Secret Weapon for Greener Buildings
- Measuring AI’s Energy and Environmental Footprint
- Data Center Growth Could Increase Electricity Bills 8% Nationally and as Much as 25% in Some Regional Markets
- Identifying the Workers We Need and Where To Find Them
Advances in AI mean increasingly sophisticated cybersecurity threats. CMU is well positioned to meet these challenges with innovative solutions spun out of the CyLab Security and Privacy Institute, which coordinates cybersecurity research and education across all university departments; and the Software Engineering Institute, a research center that is leveraging AI to create a more resilient power grid.
- Using AI to Defend Against Cyber Threats
- CMU Research Helps the Air Force 'Fuel More Fight'
- AI and National Security: Harden the Grid to Win the Race
- Securing the Grid: A Call for Rigorous Modeling and Standardization
- Securing the Future of Robotics and Autonomous Systems
- CMU Research Helps the Navy Power Up
At CMU, our discoveries don’t stay in the lab. We pride ourselves on cutting-edge research and technology, such as inventing new materials for next-generation energy systems and launching start-ups based on our research.
Our new initiative to build a national network of AI-enabled autonomous laboratories is designed to hypercharge American innovation, shortening scientific problem solving from years to weeks.
- AI Materials Design: Enabling the Physical US Energy Infrastructure
- From Research to Commercialization: Encouraging Energy and Climate Tech Entrepreneurship
- Carnegie Foundry: Bridging the Gap from Lab to Market in AI, Robotics, Energy Innovation & Deep Tech Commercialization
- The CMU Startup Speeding Grid Innovation
- Unlocking American Research Dominance: Opportunities and Chokepoints in AI for Science
- Supercharging American Innovation: Harnessing Advances in AI and Robotics to Transform Science
Pittsburgh Supercomputing Center
The Pittsburgh Supercomputing Center, a joint center with CMU and the University of Pittsburgh, provides university, government, and industrial researchers access to the most powerful systems nationwide.
CMU Energy Week
The Scott Institute for Energy Innovation's flagship event brings together leaders and investors from across the nation to drive action toward decarbonizing our energy economy.
CMU's Software Engineering Institute
The Software Engineering Institute is leveraging AI to create a more resilient power grid.