
WASHINGTON — The ongoing revolution in artificial intelligence takes a ton of computing power — which, in turn, consumes a lot of electrical power and generates a lot of heat.

As AI software grows more powerful and more complex, that becomes a problem even in large, air-conditioned data centers with direct connections to the electrical grid. On what the military calls the “tactical edge” — the drones, manned aircraft, ground vehicles, forward command-posts, and individual kit that have to work in Ukrainian mud, Arctic snow, or desert sands — making an AI compact enough to function can be impossible.

RELATED: The revolution that wasn’t: How AI drones have fizzled in Ukraine (so far)

To bypass this bottleneck of size, weight, power, and cooling, the Pentagon’s high-risk, high-reward R&D outfit, DARPA, launched a program in December known as OPTIMA, with awards to tech giant IBM, German semiconductor maker Infineon, and three US universities.

If DARPA exercises all its contract options, the total value could reach $78 million over four and a half years, a relative pittance for the Pentagon. Of that, $18.5 million would go to Princeton, $9.1 million to Georgia Tech, and $8 million to UCLA, the universities said. IBM and Infineon did not disclose their amounts to Breaking Defense.

In fact, DARPA itself has publicly discussed the program only in a brief LinkedIn post last month, and all five contractors kept their work quiet — that is, until March 6, when Princeton and its spin-off startup, EnCharge AI, publicly announced their award and the details of their technology.

“Each of our performers offers their own unique and novel approaches,” OPTIMA program manager Jason Woo told Breaking Defense through a spokesperson.

But interviews and emails with EnCharge and some of the other awardees, some of whom hadn’t spoken to the press about their work before, revealed striking similarities in approach, as well as subtle but significant differences.

Data That Can Walk To Work

So how do you miniaturize AI chips? Start with the name of the DARPA program: OPTIMA stands for “Optimum Processing Technology Inside Memory Arrays.” While contrived, it’s the rare military acronym that actually means something — especially that “inside memory” part.

Here’s the issue: As AI grows ever more powerful and power-hungry, tech pioneers keep seeking new and better kinds of chips, evolving from CPUs (so 20th century) to the current array of GPUs, TPUs, NPUs, FPGAs, and more. But all these different chips have a lot in common.

In particular, they all have their memory on one part of the chip where they store data and their processor on another part where calculations are done on that data. That means every computation requires moving bits back and forth between the two zones, the scientists working on OPTIMA explained to Breaking Defense. For most applications, for most of the history of computing, the tiny amount of time and energy required to make these minuscule movements didn’t matter. But with AI, whose whole raison d’etre is complex calculations on vast amounts of data, all those little back-and-forths add up, like a growing city choking on its own commuter traffic.

So, for years, engineers have tried to come up with ways to put the processor inside the memory, aka In-Memory Computing (IMC). Basically, instead of data having to commute between the memory and the processor, it can walk to work.
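To put rough numbers on that commute, consider a toy model (sketched in Python below, with an invented layer size that describes no particular chip): a neural-network layer that multiplies a thousand-by-thousand grid of stored weights against incoming data needs a million multiply-adds, and on a conventional chip each of those weights has to make the trip from memory to processor. Keep the weights where they are stored, and the traffic collapses to just the inputs going in and the results coming out.

```python
# Toy accounting of memory traffic for one neural-network layer, y = W @ x.
# The sizes and the model are illustrative only; they describe no particular chip.

rows, cols = 1000, 1000                 # hypothetical layer: 1,000,000 stored weights

multiply_adds = rows * cols             # arithmetic the layer needs: 1,000,000
conventional_trips = rows * cols        # conventional chip: every weight is fetched from memory
in_memory_trips = cols + rows           # in-memory computing: inputs go in, results come out,
                                        # and the weights never leave the memory array

print(f"multiply-adds:            {multiply_adds:,}")       # 1,000,000
print(f"conventional data trips:  {conventional_trips:,}")  # 1,000,000
print(f"in-memory data trips:     {in_memory_trips:,}")     # 2,000
```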

But IMC is tricky too. It requires processors small enough to fit inside the memory array, without just recreating the commute-time problem in a different configuration. To do that, the scientists explained, they switch from digital — where data is encoded in binary form as either 0s or 1s, with nothing in between — to analog, where data can be encoded in a spectrum of fine gradations.

Analog lets you transmit a lot more information in the same amount of signal, so it’s much more energy-efficient, they told Breaking Defense. But, historically, it’s much more error-prone, because one small glitch in the signal can change the value.
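A small numerical sketch (Python, with invented voltage levels, unrelated to any of the OPTIMA chips) shows both sides of that tradeoff: packing sixteen levels into one signal carries four bits where a digital wire carries one, but the gap between neighboring levels shrinks, so a far smaller glitch is enough to change the value.

```python
# Toy model of the analog tradeoff. All numbers are invented for illustration.
# Digital: one wire, two levels (0 V and 1 V), so roughly 0.5 V of noise margin per bit.
# Analog:  one wire, sixteen levels between 0 V and 1 V, carrying four bits at once,
#          but with only about 0.033 V of margin between neighboring levels.

LEVELS = 16
STEP = 1.0 / (LEVELS - 1)          # spacing between analog levels, about 0.067 V

def encode(value):                 # a value from 0 to 15 becomes a voltage
    return value * STEP

def decode(voltage):               # snap a voltage back to the nearest level
    return min(LEVELS - 1, max(0, round(voltage / STEP)))

glitch = 0.05                      # a 50 mV disturbance on the wire
sent = 9
received = decode(encode(sent) + glitch)

print(f"analog: sent {sent}, received {received}")   # prints 10: the value changed
# The same 50 mV glitch on a two-level digital wire never crosses the 0.5 V threshold,
# so the bit survives, at the cost of needing many more wires and transfers.
```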

How do you fix that?

“We’ve known for years analog can be a hundred times more efficient than digital, [but] analog is noisy, it’s not reliable, it’s not scalable,” said Princeton engineering professor Naveen Verma. “That’s the big problem that we solved.”

After almost a decade of experiments, some of them DARPA-funded, Verma and his team hit on a new approach in 2017. Instead of using semiconductors to transmit the analog signal, he told Breaking Defense, they went old-school and used capacitors: “It’s just a metal wire, sitting next to a metal wire.”

Unlike semiconductors, where the flow of electric current can be skewed by anything from excess heat to material impurities, the capacitors transmit signal using electric charge, which depends solely on how far apart the wires are — and the companies that make modern computer chips have gotten really, really good at placing tiny wires within nanometers of where they’re supposed to go.

So you don’t need new exotic tooling to make Verma’s invention, he emphasized: “Anybody who builds chips is doing it a billion times a day” already.
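A greatly simplified numerical sketch of that charge-domain idea follows (Python; the capacitor count, voltages, and mismatch figures are assumptions for illustration, not EnCharge’s or Princeton’s actual circuit). Each unit capacitor is switched by one bit of data, and the voltage on a shared output wire ends up proportional to how many capacitors were switched, which means the sum is computed by metal geometry rather than by a separate processor.

```python
import random

# Greatly simplified model of a capacitor-based (charge-domain) sum.
# Every number here is an assumption for illustration; this is not any real chip design.

VDD = 1.0                                        # assumed supply voltage, in volts
N = 64                                           # assumed number of unit capacitors on one output line

random.seed(0)
bits = [random.randint(0, 1) for _ in range(N)]  # the data bits to be summed

# Capacitance is fixed by wire geometry, which chip fabs control extremely well;
# model the residual imperfection as a tiny random mismatch per capacitor.
caps = [1.0 + random.gauss(0, 0.002) for _ in range(N)]

# Capacitors with bit = 1 are charged to VDD, the rest to 0 V, then all share their charge:
v_out = VDD * sum(b * c for b, c in zip(bits, caps)) / sum(caps)

print(f"true sum of bits:       {sum(bits)}")
print(f"sum read from voltage:  {v_out * N / VDD:.3f}")   # lands within a tiny fraction of the true sum
```

The point of the toy model is the one Verma makes: the answer depends on ratios of capacitances, which are set by how the metal is drawn, not on the vagaries of current flowing through a semiconductor.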

UCLA and Georgia Tech are also going analog, their lead scientists told Breaking Defense, but instead of using capacitors, they’re using transistors. This is another approach that doesn’t require exotic hardware. In fact, UCLA is building on a widely available chipmaking technology called 22FDX.

“22FDX is actually an older [technology],” UCLA Prof. Sudhakar Pamarti told Breaking Defense. But unlike most transistors, those made in 22FDX can shift something called their threshold voltage, the voltage at which a transistor switches on. Over years of work, Pamarti’s team learned how to control those variations so precisely that they can use them to encode data — over 50 times more compactly than alternative approaches, he said.

What if that threshold voltage glitches and garbles the data it’s encoding? The UCLA team uses a calibration technique called “write-verify-write,” making tiny adjustments and then checking the output. This is “time consuming,” Pamarti acknowledged, but for most applications, you only have to do it once.
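As described, write-verify-write is a feedback loop: program the device, read back what it actually stored, nudge it, and repeat until the stored value is close enough. The sketch below (Python, with a made-up device model, noise figures, and tolerance; not UCLA’s actual calibration code) shows the general shape of such a loop.

```python
import random

# Generic sketch of a write-verify-write calibration loop.
# The device model, noise figures, and tolerance are made up for illustration;
# this is not UCLA's actual procedure.

random.seed(42)
stored = 0.0        # the setting the device actually holds (visible to us only via reads)

def write(adjustment):
    """Programming is imprecise: the device moves roughly, not exactly, as asked."""
    global stored
    stored += adjustment * random.uniform(0.8, 1.2)

def verify():
    """Read back what the device currently holds, with a little read noise."""
    return stored + random.gauss(0, 0.001)

def write_verify_write(target, tolerance=0.005, max_passes=20):
    for attempt in range(1, max_passes + 1):
        error = target - verify()
        if abs(error) <= tolerance:
            return attempt               # close enough: calibration is done
        write(error)                     # nudge the device toward the target
    raise RuntimeError("device failed to calibrate")

passes = write_verify_write(target=0.437)
print(f"converged after {passes} write/verify passes; device now holds ~{verify():.3f}")
```

Once the loop converges, the setting holds, which is why, as Pamarti notes, the time cost is paid only once for most applications.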

Georgia Tech is also using transistors. Their emphasis, the team said via email, is on building crucial calculations (called multiply-accumulate macros) directly into the hardware, rather than running them in software, for maximum speed and efficiency.
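For reference, a multiply-accumulate is the tiny operation at the heart of nearly every neural-network layer: multiply pairs of numbers and add up the products, repeated billions of times per query. In software it is nothing more than the loop below (illustrative Python, not Georgia Tech’s design); what Georgia Tech’s macros do is bake that operation into the memory hardware itself.

```python
# The multiply-accumulate (MAC) at the core of neural-network inference:
# multiply each input by a stored weight and add the products into one running total.

def multiply_accumulate(weights, inputs):
    total = 0.0
    for w, x in zip(weights, inputs):
        total += w * x
    return total

print(multiply_accumulate([0.5, -1.0, 2.0], [1.0, 2.0, 3.0]))   # 0.5 - 2.0 + 6.0 = 4.5
```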

Does this really work? “We have tested devices from cryogenic [temperatures] to 85 degree C [185 Fahrenheit],” Georgia Tech lead scientist Prof. Suman Datta wrote Breaking Defense. “We have also tested in radiation intensive environment with up to five megarad of gamma ray irradiation.”

That’s five times as much radiation as current radiation-hardened microchips can survive, and well above a fatal dose for humans. A chip sitting safely in an air-conditioned data center is unlikely to experience such extremes — but one on a space probe or combat drone just might.

How close are these chips to such demanding operational uses? Of the three universities, Princeton seems to have gone furthest towards commercial production. After five years of refinement in the lab, Verma and his collaborators spun out a startup company in 2022, EnCharge AI, which licensed the technology from Princeton and set out to commercialize it. (EnCharge is technically a subcontractor to Princeton on the OPTIMA contract.)

Since then, they’ve been busy writing code to make the new analog chips work seamlessly with existing software and getting feedback from early customers on how to finalize their design.

“We built a ton of chips, we demonstrated [that] it works, [and] we are putting it in the hands of customers,” Verma said. While he can’t disclose those clients yet, he would say that many are in the military-industrial world — where interest in compact, power-efficient “edge AI” runs high.