Intelligence Takes Centaur Stage

I took a trip to the future to learn how the US Intelligence Community became the Artificial Intelligence Community.
It’s 2025. Intelligence officers are still tasked with informing policymakers in Washington, still have even fewer benefits than they did a decade ago, and are still sequestered from the flow of digital information everyone else lives in, but the daily experience of their jobs is radically different. One of the biggest differences is that they are now centaurs.
[Image: “The Last Centaur,” cropped, via Wikimedia Commons]
It’s not that bioengineering has run amok. This usage of “centaur” comes from chess, where it means a human and a computer playing together as a team to take advantage of their complementary strengths. After IBM’s Deep Blue beat Kasparov in 1997, humans began teaming with computers. By 2014, the top chess player in the world was a centaur named Intagrand, a team of several humans and several chess programs. Two years later, in 2016, the IC adopted the centaur model to create the best possible analysts, combining the speed and depth of artificial intelligence with the creativity and strategic vision of a human expert.
And you may ask yourself: How did we get here?
Why did a famously conservative government organization disrupt the model of knowledge work? Let’s remember the perfect storm of factors occurring at that time that resulted in useful, functional AI:
  • The amount of information available in digital form exploded. Beginning around 2014, 2.5 exabytes of data were created every day.
  • Computing power and storage capacity likewise exploded. Computers could store, retrieve, and process all that data, and those resources became available to everyone from the world’s largest corporations to garage startups.
  • Research in algorithms gave us the ability to use these new computing resources on the massive data sets now available.
  • Humans began giving feedback on the quality of the machines’ work in processing that data on an unprecedented scale through crowdsourcing (a toy sketch of that feedback loop follows this list).
  • We realized that all cognition is specialized. It’s hard to believe now, but in the early 21st century people assumed that any AI would be just like the smartest human times a thousand, rather than something that thinks in ways, at scales, and at speeds that humans can’t.
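To make the crowdsourced-feedback bullet concrete, here is a minimal sketch of that loop: humans rate machine outputs, and the ratings steer which model variant handles future work. The variants, tasks, and scoring rule are all invented for illustration; no real system is implied.
```python
import random

MODELS = {"variant_a": 0.5, "variant_b": 0.5}  # starting quality scores (invented)

def human_rating(output: str) -> int:
    """Stand-in for a crowd worker scoring an output from 1 (bad) to 5 (good)."""
    return random.randint(1, 5)

def update_score(old: float, rating: int, lr: float = 0.1) -> float:
    """Nudge a variant's quality score toward the normalized human rating."""
    return old + lr * (rating / 5.0 - old)

for task in ["translate cable", "summarize report", "flag anomaly"]:
    best = max(MODELS, key=MODELS.get)        # route work to the top-rated variant
    output = f"{best} output for: {task}"
    MODELS[best] = update_score(MODELS[best], human_rating(output))

print(MODELS)
```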
After spending decades either praying for or dreading AI, most people missed the moment when it became part of our lives. It wasn’t Skynet or a planetary overmind; instead, the AI that emerged in 2016 looked more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinked off.
Within the decade, AI became a utility, delivered via the Internet of Things, often through a verbal interface. Your AI served you as much IQ as you wanted but no more than you needed. Like all utilities, AI turned out to be supremely boring, even as it transformed the Internet, the global economy, and civilization. This utilitarian AI also augmented us individually as people (deepening our memory, speeding our recognition) and collectively as a species. Today in 2025, we’ve seen at least 10,000 startups whose business plan was “take X and add AI.” The IC even adopted some of them. Sure, we have to find specialized repairbots for our dumb office refrigerators and microwaves, but our office chairs push themselves in and out to aid the AI-managed roombas that replaced the human charforce.
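What does “as much IQ as you want, but no more than you need” look like in practice? A minimal sketch, assuming a hypothetical metered service with named capability tiers; the class, the tiers, and the rates are all invented and imply no real API:
```python
from dataclasses import dataclass

TIER_RATES = {"basic": 0.01, "analyst": 0.10, "savant": 1.00}  # dollars per query (invented)

@dataclass
class UtilityAI:
    """Hypothetical client for AI delivered like electricity: pick a tier, get billed."""
    tier: str

    def ask(self, question: str) -> tuple[str, float]:
        # A real service would dispatch to a model sized for the tier;
        # this sketch just echoes the routing decision and the cost.
        return f"[{self.tier}-grade answer to: {question}]", TIER_RATES[self.tier]

fridge_bot = UtilityAI(tier="basic")      # a dumb appliance needs only cheap IQ
desk_centaur = UtilityAI(tier="analyst")  # an analyst's partner needs more
answer, cost = desk_centaur.ask("What changed overnight in the cables?")
print(answer, f"(billed ${cost:.2f})")
```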
At the same time, elements within the IC returned to the Community’s OSS roots in cognitive science. Not just for PSYOPS anymore! The leaders realized that humans think differently than AIs in important ways. We humans ascribe elusive context to variables machines can only quantify. Among the many ranges of human genius, tolerating ambiguity and making winning leaps of intuition cannot be replicated by even the most complex neural network. AIs have almost limitless memory and do huge math at speed, but can’t figure out what to work on, or which other intelligences to make use of. Most importantly for the IC, human intuition outperforms the brute-force computational power of an AI in many social situations involving adversaries and goals, such as war and espionage. We have the creativity to direct ourselves and our AIs, selecting where to best focus our attention and often applying orthogonal thinking. We also have empathy (at least some of us do), a factor that can’t be overlooked in real-world decision-making. Victory in war or intelligence at the start of the 21st century depended on limited and fragile humans operating complex sociotechnical systems that left little room for error. Teaming humans and machines together achieved the best of both worlds.
2015: The Inflection Point
Understanding the best organization, fusion, and direction for human–machine systems had preoccupied the U.S. defense-industrial complex ever since the Cold War. As well it should: the DoD is naturally uneasy about letting robots decide to use lethal force.
In 2015 the Department of Defense, under the guidance of Deputy Secretary Bob Work, paved the way for the IC’s use of centaurs by incorporating them into the Third Offset Strategy, taking advantage of the potential for human and machine to be far more effective together than either would be alone.
The term “offset strategy” was coined in the 1970s to describe a situation where the US couldn’t match Soviet numbers, so it would have to “offset” them with superior quality and technology. We needed a third one to address the emerging conflict zones of cyberspace and outer space, and the DoD adopted an approach that relied not just on technology but on the one American advantage China couldn’t simply copy or steal: our people. (Well, at least not yet.)
“It’s actually not an either-or,” military futurist Paul Scharre said in 2016. Like the mythical centaur, we can harness inhuman speed and power to human judgment. We can combine “machine precision and reliability, human robustness and flexibility.” Scharre was then head of the Center for a New American Security’s 20YY Future of Warfare Initiative, which was founded by then-CNAS chief executive officer Bob Work, who co-wrote its inaugural study.
If that CV seems like the usual Beltway echo-chamber quasi-nepotism, use your human intelligence to draw a parallel with centaurs, in which the AI presents the human with two or three options. If you’ve been around Washington long enough, you know that the “decisionmakers” are often puppets of their own staff, who determine which options make it to the boss’s desk in the first place and then put a heavy thumb on the scale for their favorite. Won’t the tail wag the, uh, centaur? It’s a real risk, so we’ve had to develop AI training to make sure the human isn’t just a rubber stamp for the computer (a toy version of that safeguard is sketched below). It seems to be working well; RUMINT says that POTUS is considering replacing the NSC with a centaur. #kiddingnotkidding
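The anti-rubber-stamp safeguard fits in about twenty lines. This is a toy version under invented assumptions, with random numbers standing in for both halves of the centaur; the point is the agreement metric, not the models:
```python
import random

def ai_rank_options(options: list[str]) -> list[str]:
    """Stand-in for the machine half: pretend to score and rank courses of action."""
    return random.sample(options, len(options))  # random order stands in for scoring

def human_choose(top_three: list[str]) -> str:
    """Stand-in for the human half; a real analyst applies judgment here."""
    return random.choice(top_three)

# Track how often the human simply takes the AI's first option. A human who
# agrees 100% of the time is a rubber stamp, not a centaur.
agreements, TRIALS = 0, 100
for _ in range(TRIALS):
    ranked = ai_rank_options(["option A", "option B", "option C", "option D"])
    if human_choose(ranked[:3]) == ranked[0]:
        agreements += 1

print(f"human took the AI's top pick in {agreements}/{TRIALS} trials")
```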
Today’s IC
While there are fewer human analysts across the IC than there were back in 2016, production has increased in both quality and quantity. Centaurs keep pace with the complexity of today’s rapidly evolving environment and track the goals and actions of our ever-shifting adversaries. Analysis centaurs provide assessments to synthesis centaurs, who make the most probable connections. Author centaurs write the finished intelligence, with the AIs drafting and editing and the humans doing the final pass.
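In code, that division of labor might look something like the following minimal sketch, with each stage’s machine step and human pass marked in the comments; every function name and tag is a hypothetical stand-in, not actual tradecraft:
```python
def analysis_centaur(raw_reports: list[str]) -> list[str]:
    assessments = [f"assessment({r})" for r in raw_reports]  # AI: bulk assessment
    return [a for a in assessments if "noise" not in a]      # human: triage and vetting

def synthesis_centaur(assessments: list[str]) -> str:
    candidates = " + ".join(assessments)                     # AI: candidate connections
    return f"most-probable-story({candidates})"              # human: picks the story

def author_centaur(story: str) -> str:
    draft = f"DRAFT: {story}"                                # AI: drafts and edits
    return draft.replace("DRAFT", "FINAL")                   # human: the final pass

reports = ["sigint cable", "osint scrape", "noise blip"]
print(author_centaur(synthesis_centaur(analysis_centaur(reports))))
```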
Of course, the other Beltway risk remains, as it always has: policymakers opting against the best choice, or even the top three, for personal political gain.