Unlocking Energy Efficient AI
By: Scott Andes
The national conversation on energy and AI centers on increasing our electricity capacity to fuel American computing power. Research at Carnegie Mellon shows that a parallel strategy for growing our AI capabilities is to dramatically reduce the energy needs of AI.
Why it matters: Policymakers are speeding ahead to meet AI energy needs, which is the right immediate step. But to win the AI race, the United States needs to be the first to innovate compute strategies that require less power. The stark reality is that energy-hungry AI applications could cause electricity demand to outstrip supply, potentially causing an electricity crisis in the United States.
But at CMU we think there is another way to look at the problem. Rapid improvements to specialized large language models (LLMs), AI agents that automate learning-systems development, localized edge computing and other technologies leaving our labs hold the promise of significantly reducing the energy needs of AI models. Doing so would bend the AI cost curve to America’s economic advantage.
Key Insights: CMU research shows there are several opportunities on the technological horizon that could soon be deployed to reduce the energy needed to run AI models. These include:
- Specialized LLMs: Industry is investing heavily in creating larger, more powerful LLMs. But more specialized models can take on much of the workload of larger LLMs. Specialized models (good at coding, say, but not at composing a song) can be smaller and far more energy efficient. Deploying specialized LLMs will require significant industry investment and more work to prove where, and for what problems, they are more effective.
- Edge Computing: Increasingly, AI models can run locally (on your phone, tablet or PC) without a data center. Localized “edge computing” is not only far more energy efficient, because queries don’t travel from your device to a data center, run on large server farms, and return, but can also be more secure. There are early signs of these models taking off, yet more research is needed on system infrastructure and algorithm co-design to enable portable, effective deployment of AI at the edge (a minimal sketch of local inference follows this list).
- AI-driven Automated Learning Systems: Industry releases new generations of GPUs almost every year, and each generation requires hundreds of engineers to build efficient system frameworks for the new hardware. AI agents hold great potential to automate this learning-systems software development, shortening the product cycle, making the best use of the hardware and, as a result, accelerating the innovations that reduce energy consumption.
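To make the edge computing idea concrete, here is a minimal sketch of fully local inference with a small open-weight model, assuming the Hugging Face transformers library. The model name is an illustrative assumption, not a specific CMU system; any sufficiently small instruction-tuned model could stand in.

```python
# A minimal sketch of fully local ("edge") inference, assuming the Hugging
# Face transformers library and an illustrative small open-weight model.
# The query and the response never leave the device: no data-center round trip.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative ~0.5B-parameter model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # CPU-friendly at this size

prompt = "Summarize the energy case for edge computing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the prompt and the response stay on the device, the energy cost of the data-center round trip disappears entirely.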
What we’re doing: At CMU, we are advancing these and many other AI strategies that can improve the energy efficiency of AI systems. The Catalyst Group at CMU is an interdisciplinary machine learning and systems research group exploring all aspects of the frontiers of AI and computing. In addition, CMU’s Scott Institute for Energy Innovation is supporting research to improve the performance and efficiency of AI infrastructure by a factor of 20.
The big picture: In advanced economies, data centers are projected to drive more than 20% of the growth in electricity use through 2030. And most AI applications can draw from servers globally. While it’s debatable whether the United States is the cheapest and easiest place to build large-scale infrastructure like data centers (China currently has twice the electricity capacity of the U.S.), the U.S. is the unquestionable leader in AI innovation.
Reducing the energy needed to power AI through innovation can provide a durable competitive advantage. There is a real risk that the U.S. power grid will not handle future AI demands, and industry can always build more data centers abroad. Reducing energy demands and improving LLM efficiency could also drive down domestic computing costs, unleashing new AI applications for startups, researchers, hospitals and society as a whole.
What’s next: In the pre-PC era, it was famously predicted that six large mainframes, hidden in research labs, would meet the country’s computational needs. The personal computer shattered that paradigm and unlocked the digital era. High-power LLMs represent a similar paradigm that, sooner or later, innovation will disrupt. The only question is how quickly that shift will happen and whether the U.S. leads or follows.
Open Source AI May Reduce Energy Demands
By: Sayeed Choudhury, Hoda Heidari, Tori Qiu, and Keith Webster
It will be important to tap open source models to reduce AI’s energy demand, and Carnegie Mellon University is at the forefront of exploring this opportunity.
Why it matters: Transparency about the AI model development cycle, from design to deployment, underscores opportunities to optimize energy consumption, which could lead to greater efficiency and lower energy usage. Openness in AI is a framework for such transparency, with open source AI as its foundation.
Catch up quick: There is growing evidence that AI design and implementation choices related to a model’s architecture, hardware, cloud infrastructure, data processing, and algorithms have a profound impact on energy usage, particularly in the choice between training from scratch and fine-tuning an existing model. Models released without adequate information about these facets of their design, development, and use make it difficult, if not impossible, for third parties to assess the energy consumption, carbon footprint, water use, or other impacts of training and running these models and their downstream applications.
The opportunity ahead: Open source software, as defined by the Open Source Definition, represents a well-established, successful underpinning for transparency in software development. While sometimes cast in an altruistic light, open source software has generated nearly $9 trillion of value while reducing production costs by a factor of 3.5. A similarly principled framework for AI, including transparency around model weights, training data, code, and governance, may also enable both accountability and energy-conscious innovation.
What we’re doing: The Carnegie Mellon University-led Open Forum for AI (OFAI) is developing an Openness in AI framework, which includes the Open Source AI Definition (OSAID) as one of its underpinnings.
- The OSAID is an initial community-driven definition that reflects the technical and legal dimensions for data, code, and weights of an AI system.
- The Openness in AI framework expands the view of open source to include open governance, with multiple stakeholders working together toward transparent, responsible, participatory AI.
- OFAI’s comprehensive program, including research, aims to examine the benefits and risks of openness across various dimensions of AI development.
What we found: One of OFAI’s initial research outputs is a study of the implications of regulators’ choices about open source AI and of the responses from entities that create general-purpose AI models and from specialists who fine-tune those models for specialized tasks or domains. This research provides an initial framework for optimizing policy choices regarding open source AI while considering the impact on AI model developers and contributors. Extensions of this work could examine how such policy choices can encourage energy-friendly innovations (e.g., Meta’s open-weight models inspired more efficient fine-tuning methods such as QLoRA) and choices that affect energy usage or consumption.
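As a concrete illustration of the kind of energy-friendly innovation that open weights enable, here is a minimal sketch of QLoRA-style fine-tuning, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the base model name and hyperparameters are illustrative assumptions, not a prescription.

```python
# A minimal sketch of QLoRA-style fine-tuning, assuming the transformers,
# peft, and bitsandbytes libraries. QLoRA freezes the base weights in 4-bit
# precision and trains only small low-rank adapters, sharply cutting the
# memory and energy needed compared with full fine-tuning.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize frozen base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the arithmetic in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",              # illustrative open-weight base model
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # adapters on the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of all weights
```

Because only the adapter weights are trained, a model that would otherwise demand a multi-GPU cluster can often be fine-tuned on a single accelerator.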
Policy takeaways: Policymakers and research funders can play a pivotal role by incentivizing transparency and openness in AI development as part of broader climate and energy strategies. By linking openness to grants, procurement standards, or regulatory frameworks, governments can help drive innovation toward more energy-efficient and accountable AI systems.
The bottom line: Addressing the growing energy demands of AI development will require a comprehensive approach spanning AI’s computational demands, its immediate applications, and its systemic impacts. On the computational side, transparency can play a critical role in nudging AI companies toward a more participatory approach, one that includes academia, local utilities, municipalities, state and federal governments, and public citizens/ratepayers in a coherent energy and electrification strategy and policy.
Identifying the Workers We Need and Where To Find Them
CMU has built a suite of analytics tools that can identify the skills needed for an advanced industrial workforce in energy and beyond, the readiness of the local and national workforce to meet that need, and the opportunities to close gaps through job design, training and other transition supports, and worker-augmenting technology.
Why it matters: Meeting national energy security and capacity goals will require a large-scale investment in infrastructure, a buildup of manufacturing capacity, and the proactive creation of an innovative workforce that can respond to new opportunities.
- Increasing the capacity of the industrial base will require a dramatic expansion and transformation of the workforce.
- Decision-makers need tools to evaluate the gap between the workforce available and the workforce required. These gaps, and the job opportunities created by filling them, will be local, so a high level of geographic resolution is needed in workforce analytics.
Key insight: The rate of occupational mobility is also important to capture: the faster workers are able (and willing) to transition into new jobs, the more occupational transitions can relieve the talent bottlenecks that might otherwise slow the construction of capacity or reduce the efficiency of operation.
For example, in the Pittsburgh metropolitan area there may be over 46,000 workers with a high level of skill similarity to electricians, but we estimate fewer than 3,000 of those workers transition occupations in any given year (and we find that any one occupation tends to capture less than 10% of that pipeline). Widening the scope for skill matching (e.g., through greater training) could increase the potential recruitment base to over 130,000 workers, of whom over 8,000 switch occupations annually on average.
- Our methods allow us to evaluate which types of workers may be a “partial match” to meet skill needs, and how much of an improvement a new career opportunity may represent over their current wages.
- We can also assess the pathways between civilian occupations and military occupational specializations to find routes for veterans to enter critical industries.
What we’re doing: The Workforce Supply Chains (WSC) Initiative is a Carnegie Mellon-led research effort that builds and deploys analytical methods to quantify the readiness of regional labor markets to meet skill demand. These methods have been used to evaluate workforce gaps to meet the needs of a variety of industries, including commercial semiconductor and battery production.
Starting from any of over 1,000 occupations (or a custom occupation), across any industry, the WSC methodology identifies which other occupations may have a minimum level of readiness to meet the requirements of the needed occupation. With these inputs, the tool estimates:
- The number of workers available in any region of the country to meet a given level of demand.
- Their demographics and their current wages (hence, the potential economic attractiveness of transitioning to a new role).
- How many may change occupations each year.
This methodology identifies gaps between skill demand and supply both at a moment in time and over a given period, and quantifies which skills are most frequently missing in a region (such as a county or a metropolitan area).
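The WSC tool itself is not public code, but a hypothetical sketch conveys the flavor of skill-similarity matching: represent each occupation as a vector of skill-importance scores and rank candidate occupations by similarity. All occupations, scores, and thresholds below are invented for illustration.

```python
# A hypothetical illustration (not the WSC tool itself) of skill-similarity
# matching: each occupation is a vector of skill-importance scores, and
# candidate "partial match" occupations are ranked by cosine similarity.
# All occupations, scores, and thresholds are invented for illustration.
import numpy as np

occupations = {
    "electrician":        np.array([0.9, 0.8, 0.7, 0.2]),
    "hvac_technician":    np.array([0.8, 0.7, 0.6, 0.3]),
    "solar_installer":    np.array([0.7, 0.6, 0.8, 0.2]),
    "retail_salesperson": np.array([0.1, 0.2, 0.1, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = occupations["electrician"]
threshold = 0.95  # illustrative "minimum level of readiness" cutoff

for name, vec in occupations.items():
    if name == "electrician":
        continue
    score = cosine_similarity(vec, target)
    status = "candidate pipeline" if score >= threshold else "needs training"
    print(f"{name:20s} similarity={score:.3f}  ({status})")
```

Widening the threshold in a sketch like this mirrors the report’s point: broader skill matching (through training) enlarges the recruitment base.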
What we’ve found: Our work has shown that:
- Rural regions, especially Pennsylvania communities, have essential skills for scale-up of energy systems and infrastructure.
- Demand for workers in manufacturing and deployment of critical grid infrastructure could be served quickly through pipelines from occupations that are projected to decline.
- Automation can close labor gaps, but it can make some skill matching more difficult.
The bottom line: Meeting the energy needs of re-industrialization and strategic AI capacity will require a construction, manufacturing and operational workforce. CMU integrates the analytics capability to evaluate the skills that are needed, where they can be found and the gaps between demand and supply, with the capability to design and execute digital training solutions to close gaps at scale.
Securing the Future of Robotics and Autonomous Systems
Building a safe, secure, privacy-preserving robotics ecosystem that is trustworthy by design in an increasingly autonomous world.
By: Limin Jia, Eunsuk Kang, Christopher Timperley, Sarah Scheffler
Soon, robotics and autonomous systems will be ubiquitous within America's industrial infrastructure. But these systems are at least as susceptible to privacy and security threats as existing online systems.
Why it matters: Robotic systems will be central to the future of energy production, grid inspection and repairs. But the deployment of robotics far outpaces the necessary privacy and security infrastructure.
Key insight: The U.S. has largely adopted a "responsive" posture toward hacking of medical records, credit card information, and critical infrastructure systems such as water treatment plants, oil pipelines, and energy grids. But CMU research shows that security threats to the coming robotic systems that will manage critical sectors represent a far greater economic and human risk.
Our vision: CMU's CyLab Robotics Security and Privacy Initiative (RSPI) is fostering a future where autonomous systems are not just innovative, but also safe, reliable, private, and trustworthy. Our mission is to conduct foundational and applied research to build trusted middleware and toolchains, ensuring operational efficiency and security by design to meet the demands of future applications across diverse sectors.
What we’ve found: Three areas represent the greatest risk exposure in our robotic systems:
- Systemic neglect: Existing approaches to robotics often prioritize functionality over inherent security and privacy, creating systemic risks.
- Middleware vulnerabilities: Current robotics middleware, such as Robot Operating System (ROS), has notable deficiencies in real-time readiness, usability, and widespread implementation, leading to potential security and privacy flaws.
- AI and ML challenges: The foundational role of AI and Machine Learning in advanced robots introduces novel difficulties in ensuring their safety and resilience in physical environments.
Policy takeaways: A national robotics strategy with a focus on security must be aligned with our national energy ambitions and strategies. To address the risks, decision-makers, including policymakers and industry leaders, should:
- Prioritize security and privacy by design: Mandate and incentivize the integration of security and privacy into the entire development lifecycle of robotics and autonomous systems, rather than as an add-on.
- Support foundational research: Invest in interdisciplinary research initiatives like RSPI that address the unique cyber-physical security challenges of robotics and AI.
- Foster open standards and practices: Encourage the development and adoption of open-source, industry-standard security solutions and frameworks to promote a secure and trustworthy robotics market.
- Develop clear regulatory frameworks: Establish robust safety standards, security and privacy regulations, and ethical guidelines that foster responsible innovation while mitigating risks.
What’s next: CMU is convening academic and industry leaders at the Robotics Security and Privacy Workshop on July 28-29, 2025 to understand the problems and define research directions. RSPI will drive research in key areas including:
- Physical and Hardware Security
- Secure and Resilient Robotic Systems and Programming Environments
- Secure and Trustworthy AI and Autonomous Systems
- Safe and Private Human-Robot Interaction
- Policy and Compliance for Responsible Robotics
Go deeper: We invite collaboration to shape the future of secure robotic and autonomous systems and look forward to discussing these critical issues further.
CMU Research Helps the Navy Power Up
By: Matthew Butkovic, Thomas Longstaff, Brett Tucker
Nuclear reactors are critical to the U.S. Navy's missions of maintaining global reach and maritime dominance. These systems provide the power and propulsion for U.S. Navy vessels, including submarines and aircraft carriers. Yet designing and building new nuclear propulsion plants can take decades. Researchers at the Software Engineering Institute at Carnegie Mellon are applying new AI models and methods that will significantly reduce the time needed to design and build new nuclear propulsion plants while maintaining high safety and security standards.
Why it matters: As global threats evolve and naval missions grow more energy-intensive, the Navy needs next-generation propulsion systems that are safe, adaptable and efficient to maintain energy resilience, fleet readiness and technological superiority. Accelerating the design and deployment of programs for strategic advantage requires technologies and practices that can solve difficult AI, software and cybersecurity challenges.
Catch up quick: To meet energy and national defense demands, the U.S. needs to deploy Naval nuclear reactors more rapidly at a lower cost. Doing so will require bringing new technology online at an unprecedented rate. The Software Engineering Institute (SEI) at Carnegie Mellon University is assisting the Naval Nuclear Laboratory (NNL) in accelerating propulsion plant design, analysis, verification and build to reduce the total design and build time of a propulsion plant from several decades to several years. Yet accelerating one element of plant development without the others would create critical chokepoints and dramatically slow progress. The Department of Defense needs a comprehensive approach to ensure that all aspects of plant design and construction are aligned with new processes augmented by artificial intelligence (AI), in particular machine learning (ML).
Challenge: Using ML to improve U.S. Navy nuclear reactors
The safe and reliable operation of U.S. Navy nuclear power reactors is a national defense imperative. The SEI collaborates with the NNL to advance the use of AI, especially ML, to improve the safety, resilience, and assurance of operations of these plants.
The way forward:
- Integrate ML to design better reactor plants: The SEI is assisting the NNL in developing a unified ML-centric propulsion plant design process. This applied research introduces new tools and processes that more efficiently create engineering process maps and ultimately accelerates propulsion plant design, analysis, verification, and build. This process will also reduce the time required for major reactor refueling overhauls by years.
- Use ML to detect reactor anomalies faster and avoid incidents: Naval nuclear propulsion plants, which are generally reliable, can experience anomalies that could lead to unexpected shutdowns, reduced power and hazardous conditions. The SEI is collaborating with the NNL to develop ML models to more quickly detect and resolve anomalies that could lead to unexpected shutdowns or critical safety failures in naval nuclear propulsion plants as well as minimize the likelihood of future anomalies. This effort will also deliver new operator interface concepts to promote trust in ML applications. The SEI and NNL also recognize the critical need to identify and analyze cybersecurity risks in all phases of reactor design and operations.
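The SEI/NNL models are not public, but a generic sketch shows one common anomaly-detection pattern, a rolling z-score over a sensor stream, that conveys the basic idea; all data below are synthetic and the threshold is illustrative.

```python
# A generic sketch of rolling z-score anomaly detection on a sensor stream.
# This is one common pattern, not the SEI/NNL models; all data are synthetic
# and the threshold is illustrative.
import numpy as np

rng = np.random.default_rng(0)
readings = rng.normal(loc=300.0, scale=1.5, size=500)  # synthetic temperature stream
readings[400:405] += 12.0                              # injected anomaly

window = 50
for t in range(window, len(readings)):
    recent = readings[t - window:t]
    z = (readings[t] - recent.mean()) / recent.std()
    if abs(z) > 4.0:  # flag readings far outside recent behavior
        print(f"t={t}: reading {readings[t]:.1f} flagged (z-score {z:.1f})")
```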
The bottom line: Opportunities exist to foster synergy between investments and innovations in major national security initiatives and the frontier of energy — and Carnegie Mellon is actively seizing those opportunities.
From Research to Commercialization: Encouraging Energy and Climate Tech Entrepreneurship
For entrepreneurs, the path from fundamental research to commercialization is risky, but Carnegie Mellon University support has helped launch several start-ups in energy and climate tech.
Why it matters: These technologies can be game-changers in enhancing energy generation and reducing energy consumption in homes, transportation and industry. But with regulatory compliance issues, complex supply chains, and high upfront costs, entrepreneurs with these highly technical start-ups can require a longer funding runway to achieve profitability.
Catch up quick: This year, CMU’s Wilton E. Scott Institute for Energy Innovation will award up to $100,000 to accelerate CMU founders’ ideas into startups in the energy and climate tech space. This support helps advance the Scott Institute’s vision of encouraging the development of breakthrough technologies to speed the transition to a sustainable future. Learn more about the inaugural winners of the Scott Institute entrepreneurship funding award here.
Over the years, many CMU-founded start-ups have been in the energy sector, including:
- CorePower Magnetics: A manufacturer of high-performance inductors, transformers, and motors using proprietary magnetic materials.
- Pearl Street Technologies: Created SUGAR, a leading platform for the analysis and optimization of power systems that helps transmission providers, project developers, and other stakeholders overcome the interconnection bottleneck.
- Peoples Energy Analytics: Identifies and reaches at-risk customers before they default on their energy bills. By using Energy Limiting Behavior™ metrics, the company accurately determines who is at risk of default and connects them with assistance programs years before they ever miss a payment.
- SeaLion Energy: Provides performance advancements to lithium-ion batteries (LIB) to enhance conductivity, reduce resistance buildup during cycling, and offer elasticity to minimize mechanical degradation.
What we’re doing: Each year, the Scott Institute hosts the Cleantech + Energy Investor Forum & Pitch Showcase at CMU Energy Week to connect founders with funders.
- This day-long event includes a panel featuring a discussion between climate tech investors on the greatest challenges and opportunities facing innovation within energy and climate tech.
- The Pitch Showcase includes up to 20 energy and climate tech startups from the PA, OH, WV and MD region. More than half of the startups featured are from the CMU energy and climate tech startup community.
What’s next: CMU is partnering with the University of Pittsburgh and West Virginia University to form the Resilient Energy Technology and Infrastructure (RETI) Consortium, created to innovate and implement critical technologies for industrial grid resilience and energy efficiency. RETI is invigorating the innovation ecosystem in Western Pennsylvania and West Virginia.
The bottom line: Carnegie Mellon continues to spin out energy and climate tech startups, but the U.S. needs to integrate programs that support start-ups into all of our major energy initiatives to ensure that we are seizing the opportunities for innovation. With support from the Scott Institute, the future of energy innovation and entrepreneurship at CMU will remain strong and support the transition to a net-zero emissions future.
Go deeper: Read more to learn about the many startups featured at past CMU Energy Week conferences here. Learn more about CMU energy and climate tech startups here.
Carnegie Foundry: Bringing AI, Robotics and Energy Innovations to Market
By: Rich Fruehauf, Rob Szczerba, Michael Lutzky, and Jeff Legault
Successfully delivering world-leading AI, autonomous robotics, energy innovation and deep tech from research to the market requires a fast, scalable, and proven approach — one that the Carnegie Foundry venture studio model delivers.
Why it matters: America’s competitiveness and security in energy, advanced manufacturing, and defense increasingly hinge on the rapid and successful commercialization of AI and robotics.
- Deployment of AI and autonomous robotics is essential to leapfrogging foreign competitors in advanced manufacturing, reducing U.S. reliance on fragile global supply chains, and re-establishing technology dominance in critical fields.
- Yet, most promising innovations born in university labs stall before reaching the market — especially in deep tech sectors with steep costs, long development cycles and high technical risk.
For example, key U.S. sectors now depend heavily on foreign supply chains, with China emerging as a dominant supplier of critical infrastructure components in several areas, including:
- More than 80% of global unmanned aerial vehicles (at a fraction of the cost of U.S. suppliers).
- 65% of global LiDAR, or Light Detection and Ranging, manufacturing.
- 80% of large power transformers in the U.S. are imported (90% of U.S. power passes through these transformers).
What we’re doing: Carnegie Foundry’s for-profit venture studio model bridges the commercialization gap by de-risking deep-tech solutions, shortening development timelines, and scaling innovation through:
- Deep integration with CMU’s National Robotics Engineering Center (NREC) and its 30 years of prototyping experience.
- Leveraging a robust portfolio of proven AI, machine learning and autonomous robotics technology.
- Robust business case development and market review prior to venture launch.
- Early-stage funding.
- A successful spinout structure backed by leaders like U.S. Steel and Oshkosh.
Our model is already delivering results with the successful launch of startups focused on AI and autonomous robotics, including:
- VoxEQ Inc: We redirected voice analysis IP developed at CMU toward high-value fintech fraud detection use cases. VoxEQ has since closed a $10 million Series A and is deploying with major call centers.
- Freespace Robotics: By commercializing unused warehouse automation IP from a prior NREC client, Freespace Robotics won "Startup of the Year" at one of the logistics industry’s largest global expos. Backed by Pittsburgh leaders like U.S. Steel and Matthews International, Freespace is proof that a studio approach creates investable, scalable companies addressing large scale challenges, rooted in deep tech.
- Thryve Labs: Developing the first custom LLMs and AI models for human behavior, initially directed at the aging and care crisis, with monitoring and detection precise enough to keep people safer and healthier and to offer peace of mind to their caregivers. In development, with third-party investor participation pending.
The bottom line: Realizing the full potential of our world-leading AI, robotics and deep tech research requires successful and repeatable commercialization across numerous verticals. Carnegie Foundry speeds time to market, reduces costs, and increases success rates in deploying U.S.-developed intellectual property into operational systems for critical industries.
CMU Research Helps the Air Force 'Fuel More Fight'
By: Eric Heim, Thomas Longstaff
The U.S. Air Force (USAF) conducts training, combat, mobility, support, deterrence and other critical flight operations. Fueling these missions places a heavy financial burden on the USAF organizations that fly them. A team of researchers and engineers at the Software Engineering Institute (SEI) at Carnegie Mellon University is applying new AI and machine learning models that can reduce the fuel consumption of aircraft while the USAF accomplishes its missions. This innovation has the potential to save the USAF millions of dollars in fuel every year.
Why it matters: The Department of the Air Force (DAF) uses 1.5 billion gallons of fuel annually at a cost of $5.5 billion. Even a relatively modest reduction in fuel usage results in significant savings for the United States. Today, many steps in flight mission planning are done manually, with limited capacity to identify mission plans that optimize fuel savings while maintaining mission outcomes. Tools that help automate the detection of fuel savings can be integrated into existing mission planning workflows to reduce the cost of performing flight operations.
The Challenge: Using ML to Identify Fuel Saving Opportunities for the Air Force
Reducing the cost of flight missions for the USAF means that the savings can be reinvested elsewhere, potentially adding to or enhancing current air assets. Maintaining such a technological advantage over adversaries remains a critical focal point for the Department of Defense.
What we did: The SEI developed a machine learning (ML) prototype to optimize the USAF’s significant fuel costs. The ML tool estimates fuel savings from aircraft modifications to within 0.5 percent, translating to millions of gallons saved per year. Delivered to the USAF’s Operational Energy Program in May 2024, the prototype automates what was previously a laborious manual process for DAF experts and, along with other tools created by the SEI, is expected to generate over $35 million in re-investable savings for fiscal year 2024 alone. The project exemplifies how the SEI collaborates with defense innovation accelerators like the Defense Innovation Unit to rapidly deliver data-driven solutions that enhance military readiness while reducing costs.
The way forward: The DAF’s Operational Energy program continues to apply the model developed by the SEI to reduce fuel costs. It is adding functionality to include more aircraft, expanding the applicability of the model to more of the USAF’s assets.
Securing the Grid: A Call for Rigorous Modeling and Standardization
By: Lujo Bauer, Larry Pileggi and Vyas Sekar
Despite increasing concerns over cyber threats to the electrical grid, the academic, operational, and policy communities remain divided on which threats are most pressing, and why.
Our CMU team recently showed that these disagreements are rooted in wide disparities and inconsistencies in how cyber threats against the grid are modeled and analyzed, leading to divergent threat assessments.
Why it matters: If you can’t identify the most likely threats to the grid, it’s nearly impossible to protect against them. We call on the research and policy communities to develop more comprehensive and accurate grid evaluation frameworks and datasets, and to update threat models and grid resiliency requirements to match cyber attackers’ realistic capabilities.
What we did: As part of our recent work, we surveyed 18 grid‑cybersecurity experts and dissected four representative threats: MadIoT (IoT‑based load attacks), False Data Injection Attacks (FDIA), Substation Circuit Breaker Takeovers (SCBT), and Power Plant Takeovers (PPT).
What we found: Experts displayed wide variation in both perceived likelihood and impact of the four threats across normal and emergency grid conditions — with averages slipping below 25% confidence on many estimates. This fragmented outlook mirrors conflicting results from prior studies. For instance, the original MadIoT analysis suggested a mere 2% spike in demand could trigger a blackout, but subsequent work reported no effect at 1% and only minimal impact at 10%.
Five inconsistencies in how grids are modeled and threats are analyzed underpin this discord, some causing threats to be overestimated and others causing threats to be severely underestimated.
- Many studies use unrealistic grid topologies that do not meet standard reliability criteria, such as N-1 contingency compliance or adequate reserve margins.
- Researchers often assume attackers possess implausibly high levels of access and control — such as manipulating every sensor or breaker in a region.
- Most analyses only consider steady-state grid conditions, failing to explore how threats unfold under stress or during emergency states.
- Simulations frequently omit essential operational processes, including reserve dispatch, droop control, and automatic load shedding.
- When such processes are modeled, they are sometimes implemented incorrectly — for instance, simulating droop response with unrealistic speed or magnitude.
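To illustrate that last point, here is a minimal textbook sketch of steady-state primary droop response, not a grid simulator: after a load step, each generator picks up power in inverse proportion to its droop setting, and system frequency settles below nominal. Simulations that get this magnitude (or the response speed) wrong will misjudge a threat’s impact. All droop values below are illustrative.

```python
# A minimal textbook sketch of steady-state primary droop response (not a
# grid simulator). After a load step, each generator picks up power in
# inverse proportion to its droop setting R, and frequency settles below
# nominal: delta_P_i = -delta_f_pu / R_i. All values are illustrative.
f_nominal = 60.0   # Hz
load_step = 0.10   # per-unit increase in system load

# Per-unit droop settings: a 5% droop means a 5% frequency change drives a
# 100% change in generator output.
generators = {"gen_A": 0.05, "gen_B": 0.04, "gen_C": 0.06}

# In steady state the pickups sum to the load step; solve for delta_f.
gain = sum(1.0 / R for R in generators.values())
delta_f_pu = -load_step / gain
print(f"frequency settles at {f_nominal * (1 + delta_f_pu):.3f} Hz")

for name, R in generators.items():
    pickup = -delta_f_pu / R
    print(f"{name}: picks up {pickup:.4f} pu")
```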
A path forward: The first phase of our work identified the following needs for an accurate assessment of threats against the grid, outlined in these targeted steps:
- Release realistic, validated grid models: Much existing work relies on synthetic or outdated topologies that do not reflect real-world grid resilience. The community must prioritize publishing standardized, N-1-compliant test systems with realistic load, generation, and reserve profiles to enable reproducibility and fair comparison across studies.
- Harmonize simulation practices: Discrepancies in how core processes — like droop control, reserve dispatch, and load shedding — are modeled often lead to conflicting results. The field needs shared guidelines and reference implementations to ensure accurate, comparable threat assessments. Assumptions about attacker capabilities must also be grounded in plausible scenarios.
- Model threats under emergency conditions: Most analyses focus on normal grid operations, but vulnerabilities often emerge during high-stress states. Researchers should routinely include emergency and degraded operating conditions — such as generator outages or reserve exhaustion — to uncover risks hidden under idealized scenarios.
- Treat the grid as a cyber-physical system: Many studies isolate cyber and physical components, missing critical interactions. Threats must be modeled end-to-end, from cyber compromise to physical impact, using integrated frameworks that reflect real control logic and operator response.
- Create community modeling standards: To institutionalize these improvements, the field should develop shared benchmarks and modeling standards through academic-industry collaboration. Workshops, working groups, or open-source consortia could help establish baselines for threat modeling, simulation fidelity, and reproducibility.
The bottom line: Our preliminary research has established the challenges in identifying the most likely threats to the grid. Our work has shown that inconsistencies in threat assessments stem from ad hoc simulation and modeling methodologies, as well as dataset errors. This points to the need for standardized public toolkits and datasets, and for recommended ways to increase the accuracy of evaluations. These will enable us, as well as other researchers, to develop more rigorous foundations for securing tomorrow’s electric energy grid.
AI is CMU’s Secret Weapon for Greener Buildings
By: Azadeh Sawyer
CMU researchers are using AI to predict building energy use and emissions — at daily, weekly, and yearly intervals.
Why it matters: Buildings account for nearly 40% of global carbon emissions, but the burden isn’t shared equally. In many U.S. cities, lower-income communities pay up to 20% more for energy than higher-income areas because their homes are older and less efficient.
Catch up quick: Many buildings, especially older ones, struggle with energy efficiency. Poor insulation, outdated HVAC systems, leaky windows, and inefficient lighting all contribute to higher energy use and emissions — leading to higher bills and environmental impact.
Key insight: The CMU project investigates how AI can help make energy benchmarking (comparing a building’s energy use and emissions with those of its peers) a more accessible tool for sustainability. Currently, fewer than 50 cities in the U.S. deploy energy benchmarking to reduce emissions, and just a fraction of them make annual benchmarking data available to the public.
What we’re doing: With support from a Scott Institute seed grant, our AI model analyzes energy and emissions data across multiple time scales, offering a more detailed, scalable, and equitable approach to benchmarking. This will create opportunities for cities without formal programs to better understand building performance, prioritize retrofits, and reduce emissions.
For example, if one building uses far more energy than its peer group, end users such as engineers, utilities, and building owners can respond appropriately.
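A minimal sketch of that peer-group comparison, with synthetic buildings and an illustrative flagging threshold, might look like this:

```python
# A minimal sketch of peer-group energy benchmarking: compare each building's
# energy use intensity (EUI) against similar buildings and flag outliers.
# All buildings, values, and the threshold below are synthetic/illustrative.
import statistics

# EUI in kBtu per square foot per year for one peer group, e.g. mid-size
# offices of similar vintage in the same climate zone.
peer_group = {
    "office_A": 62.0, "office_B": 58.5, "office_C": 65.2,
    "office_D": 60.1, "office_E": 97.4, "office_F": 63.8,
}

mean = statistics.mean(peer_group.values())
stdev = statistics.stdev(peer_group.values())

for building, eui in peer_group.items():
    z = (eui - mean) / stdev
    if z > 1.5:
        print(f"{building}: EUI {eui:.1f} is {z:.1f} standard deviations above "
              "its peers; a candidate for an audit or retrofit")
```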
What’s next: The model has several potential future uses, including:
- Delivering actionable benchmarking for entire neighborhoods or cities, even in the absence of energy disclosure laws, with minimal data inputs such as utility bills and public records.
- Reducing the need for costly, labor-intensive energy audits while still delivering performance assessment.
- Integrating utility usage, weather data, and building characteristics to train algorithms to estimate energy use and flag anomalies across a city's building stock.
- Transferable models that can adapt to local communities with limited data.
- Directing funding toward the buildings most in need of efficiency upgrades.