
BEAVERCREEK, Ohio — In future conflicts, the “golden hour” of battlefield medicine — the idea that a wounded service member can be evacuated to a fully equipped field hospital within 60 minutes — may no longer apply.

Forward-deployed units could find themselves cut off, operating without immediate access to centralized medical support. At the same time, artificial intelligence tools may place the collective knowledge of military medicine into the palm of a warfighter’s hand.

But that technological promise comes with a profound challenge.

“What is a good decision?” said Dr. Amy Summerville, principal research scientist at Kairos Research. “Is it saving the most lives? Saving the most money? Maximizing quality of life? Different people can define ‘better’ in different ways.”

Summerville, a social cognitive psychologist by training, leads Kairos’ work on DARPA’s “In The Moment,” or ITM, program as a subcontractor to RTX. The effort focuses on understanding how human experts make decisions in complex, high-stakes environments where there is no single correct answer — such as battlefield medical triage.

When Experts Disagree

Traditional AI systems are often trained to identify the “right” answer. But in real-world domains like military medicine, experts themselves may disagree.

“There are some cases where every expert is going to agree — it’s cut and dried,” Summerville said. “But there are other cases where two experts will prioritize different things and make different decisions. That’s a real challenge for creating AI tools.”

Kairos’ role on the ITM team is to characterize and quantify those difficult decision spaces. Rather than assuming a single correct outcome, the team studies what Summerville calls “key decision-making attributes” — the trade-offs experts consider when making judgment calls.

In battlefield triage scenarios, for example, affiliation can become a factor. Should a medic prioritize a member of their own unit? A coalition ally? A civilian contractor? A disarmed adversary?

“There are strong arguments on multiple sides,” Summerville said. Medics are embedded in units and tasked with returning warfighters to the fight. At the same time, medical ethics and international law require impartial treatment.

By presenting experts with carefully designed scenarios that vary injury severity, affiliation and risk factors, researchers analyze how individuals weigh competing considerations. They then use quantitative models to describe those trade-offs.

The goal is not to determine which expert is “correct,” but to understand patterns of judgment — and measure how closely an AI system aligns with those patterns.
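
To make that idea concrete, here is a minimal sketch, in Python, of how such a comparison might be scored. Everything in it is an assumption made for illustration: the attribute set, the least-squares fit standing in for a real choice model, and cosine similarity as the alignment measure. The article does not describe the ITM team's actual models.

```python
import numpy as np

# Hypothetical decision attributes (columns): injury severity, own-unit
# affiliation, risk to the caregiver. Illustrative only; not the
# program's actual attribute set.
scenarios = np.array([
    [0.9, 1.0, 0.2],
    [0.6, 0.0, 0.8],
    [0.3, 1.0, 0.5],
    [0.8, 0.0, 0.1],
])
medic_priority = np.array([0.95, 0.50, 0.60, 0.70])  # one expert's triage scores
ai_priority = np.array([0.90, 0.55, 0.50, 0.75])     # a candidate system's scores

def fit_weights(X, y):
    """Least-squares stand-in for a real choice model: how strongly each
    attribute predicts a decision-maker's priority scores."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def alignment(w_a, w_b):
    """Cosine similarity of two trade-off profiles: 1.0 means identical
    priorities, 0.0 means unrelated."""
    return float(w_a @ w_b / (np.linalg.norm(w_a) * np.linalg.norm(w_b)))

w_medic = fit_weights(scenarios, medic_priority)
w_ai = fit_weights(scenarios, ai_priority)
print(f"alignment score: {alignment(w_medic, w_ai):.2f}")
```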

From Alignment to Trust

That alignment is critical to trust.

“We’re interested in understanding what makes humans trust autonomy,” Summerville said. “How aligned are two decision-makers? The idea is that the more aligned you are, the more trust you should have.”

In other words, if an AI system consistently reflects the values and trade-offs a medic considers important, that medic may be more willing to rely on it. Conversely, a black-box system that produces unexplained recommendations risks eroding confidence.

“If you don’t understand the trade-offs the AI is making, you can’t be sure it’s using the values you want it to,” she said.

The stakes are particularly high in domains that involve ethical, legal and moral considerations. A decision about where to place the last available tourniquet, for example, is not merely a technical calculation. It reflects deeply human judgments about responsibility, risk and fairness.

Without unpacking those trade-offs, deploying AI in complex environments can create hidden risks.

“It’s not just a technical challenge,” Summerville said. “It’s about critically human decisions.”

Beyond Explainability

While explainable AI has gained attention as a pathway to trustworthy systems, Summerville said both the ITM effort and Kairos’ related work under the Ohio Federal Research Network’s VISTA program take a different approach.

“A lot of people get excited about explainable AI — the idea that if the AI can tell you what it’s doing and why, that makes it trustworthy,” she said. “We’re really looking at system behavior itself.”

Under the OFRN-funded VISTA project, Kairos studies calibrated trust in human-autonomy teams — ensuring operators trust systems an appropriate amount. Not all automated systems are equally reliable, and misplaced trust can be dangerous.

Summerville likens it to a faulty smoke detector.

“You want to trust a smoke detector that’s working correctly,” she said. “But if it’s malfunctioning — if it doesn’t beep when it should or it false alarms so often you ignore it — that’s dangerous.”
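
Her analogy maps onto a toy calculation. The sketch below is a Python illustration rather than the VISTA methodology: it treats trust as calibrated when an operator's stated trust tracks the system's measured reliability, and the event counts and 0.1 tolerance are invented for the example.

```python
def reliability(hits, misses, false_alarms, correct_rejections):
    """Fraction of events the system handled correctly."""
    total = hits + misses + false_alarms + correct_rejections
    return (hits + correct_rejections) / total

def trust_calibration(operator_trust, system_reliability, tolerance=0.1):
    """Compare stated trust (0-1) against observed reliability (0-1)."""
    gap = operator_trust - system_reliability
    if gap > tolerance:
        return "over-trust: relying on an unreliable system"
    if gap < -tolerance:
        return "under-trust: ignoring a reliable system"
    return "calibrated"

# The faulty smoke detector: it misses real fires and false-alarms often,
# yet the operator still reports high trust in it.
faulty = reliability(hits=6, misses=4, false_alarms=30, correct_rejections=60)
print(trust_calibration(operator_trust=0.9, system_reliability=faulty))
```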

The VISTA research examines how performance signals and system behavior influence operator trust, while ITM focuses on deeper alignment around decision values. Together, the programs offer what Summerville described as a fuller picture of how to build AI systems humans can stand behind — whether humans remain in the loop or not.

In some scenarios, such as cyber defense or when a sole medic is incapacitated, AI systems may need to operate independently. Even then, she said, the decisions must reflect human-defined boundaries of what is permissible.

Building Talent in Ohio

Although Kairos is a small business, its participation in a large DARPA program underscores the depth of AI and cognitive science expertise in Ohio, Summerville said.

The company collaborates with Wright State University and other institutions across the state. Under OFRN VISTA, Kairos has partnered with Sinclair College’s National UAS Training and Certification Center to conduct hands-on field research.

Summerville, a former Miami University professor, said the ecosystem fostered by OFRN — which emphasizes partnerships between industry and academia — is critical to developing both research breakthroughs and workforce talent.

Student researchers gain direct experience working alongside industry on real-world problems, preparing them for careers in advanced technology fields.

“I think it’s a testament to the caliber of AI and cognitive science being developed in Ohio,” she said.

As AI systems continue to move from laboratories into contested environments, the work underway in Ohio suggests that the future of trustworthy autonomy may depend less on faster algorithms — and more on understanding the complex, human judgments they aim to support.

 

This document was cleared by DARPA on April 2, 2026. All copies should carry Distribution Statement "A" (Approved for Public Release, Distribution Unlimited). If you have any questions, please contact the Public Release Center.

###

About Ohio Federal Research Network (OFRN)   

The mission of the Ohio Federal Research Network (OFRN) is to stimulate Ohio's innovation economy by building statewide university-industry research collaborations that meet the requirements of Ohio's federal laboratories, resulting in technologies that drive job growth for the State of Ohio. The OFRN is a program managed by Parallax Advanced Research in collaboration with The Ohio State University and is funded by the Ohio Department of Higher Education.

  

About Parallax Advanced Research and the Ohio Aerospace Institute (OAI)   

Parallax Advanced Research is a research institute that tackles global challenges through strategic partnerships with government, industry, and academia, accelerating innovation and developing groundbreaking ideas with its partners. With offices in Ohio and Virginia, Parallax aims to deliver new solutions and speed them to market. In 2023, Parallax and the Ohio Aerospace Institute (OAI) formed a collaborative affiliation to drive innovation and technological advancement in Ohio and for the nation. OAI plays a pivotal role in advancing the aerospace industry by fostering collaboration among universities, aerospace companies, and government organizations, and by managing aerospace research, education, and workforce development projects.