Former Epic Games CEO Mike Capps is heading a new company in Raleigh that’s trying to develop “the world’s first understandable AI system.”
Diveplane Corp. broke cover on Tuesday when Capps joined a panel at the Fortune Brainstorm Tech 2018 conference to discuss the impact artificial intelligence will have on humanity. He shared the stage with former Yahoo! CEO Marissa Mayer and General Motors Vice President for Strategy Mike Ableson.
Capps told the conference audience in Aspen, Colorado, that Diveplane’s “whole mission is to build an AI that’s debuggable” and that people can understand. He didn’t go into details, though he indicated the company is shying away from an open-source software development model.
The company’s website posted a statement from Capps saying Diveplane has more than 15 employees, has raised $3.5 million and has launched a dozen pilot projects in fields ranging from venture-capital “decision support” to agricultural science to drone training.
Plans are to double the number of employees ahead of “fully opening the doors to customers later this year,” Capps said.
A UNC-Chapel Hill graduate, Capps ran Epic Games from 2004 until early 2013. His tenure saw the Cary studio produce the hit franchises “Gears of War” and “Infinity Blade.” Before his time at Epic, Capps developed the game “America’s Army” for the U.S. Army.
At Diveplane, Capps is a co-founder alongside Chris Hazard, the chief technology officer, and Mike Resnick, the chief R&D officer. Hazard, an N.C. State University alumnus, previously worked with Resnick at a company called Hazardous Software.
Capps said he and Hazard have “been friends for years” and that Diveplane took shape after Hazard briefed him on what he and Resnick had been doing at a “boutique tech consultancy focused on defense and intelligence applications.”
In Aspen, Capps indicated that Diveplane is trying to run a secure office, to the point of not having Wi-Fi. “We treat AI as weaponizable because it is,” Capps said.
The mention of debuggable artificial intelligence alludes to a key problem in the field: an AI system can learn and make decisions in ways that even its own developers don’t fully understand.