The Age of AI: And Our Human Future

Highlights & Notes

AI is not an industry, let alone a single product. In strategic parlance, it is not a “domain.” It is an enabler of many industries and facets of human life: scientific research, education, manufacturing, logistics, transportation, defense, law enforcement, politics, advertising, art, culture, and more. The characteristics of AI—including its capacities to learn, evolve, and surprise—will disrupt and transform them all. The outcome will be the alteration of human identity and the human experience of reality at levels not experienced since the dawn of the modern age.

• What do AI-enabled innovations in health, biology, space, and quantum physics look like?
• What do AI-enabled “best friends” look like, especially to children?
• What does AI-enabled war look like?
• Does AI perceive aspects of reality humans do not?
• When AI participates in assessing and shaping human action, how will humans change?
• What, then, will it mean to be human?

This book seeks to provide readers with a template with which they can decide for themselves what that future should be. Humans still control it. We must shape it with our values.

Even after the antibiotic was discovered, humans could not articulate precisely why it worked. The AI did not just process data more quickly than humanly possible; it also detected aspects of reality humans have not detected, or perhaps cannot detect.

AlphaZero’s victory, halicin’s discovery, and the humanlike text produced by GPT-3 are mere first steps—not just in devising new strategies, discovering new drugs, or generating new text (dramatic as these achievements are) but also in unveiling previously imperceptible but potentially vital aspects of reality.

AI, powered by new algorithms and increasingly plentiful and inexpensive computing power, is becoming ubiquitous.

If the tasks it is performing are any guide, it may access different aspects of reality from the ones humans access.

But AI’s function is complex and inconsistent. In some tasks, AI achieves human—or superhuman—levels of performance; in others (or sometimes the same tasks), it makes errors even a child would avoid or produces results that are utterly nonsensical.

When intangible software acquires logical capabilities and, as a result, assumes social roles once considered exclusively human, alongside others humans have never experienced, we must ask ourselves: How will AI’s evolution affect human perception, cognition, and interaction? What will AI’s impact be on our culture, our concept of humanity, and, in the end, our history?

Unexplained phenomena were long assigned to one of two categories: either a challenge for the future application of reason or an aspect of the divine, not subject to processes and explanations vouchsafed to our direct understanding. The advent of AI obliges us to confront whether there is a form of logic that humans have not achieved or cannot achieve, exploring aspects of reality we have never known and may never directly know.

Only very rarely have we encountered a technology that challenged our prevailing modes of explaining and ordering the world. But AI promises to transform all realms of human experience. And the core of its transformations will ultimately occur at the philosophical level, transforming how humans understand reality and our role within it.

Its zenith will be AI that is ubiquitous, augmenting human thought and action in ways that are both obvious (such as new drugs and automatic language translations) and less consciously perceived (such as software processes that learn from our movements and choices and adjust to anticipate or shape our future needs). Now that the promise of AI and machine learning has been demonstrated, and the computing power needed to operate sophisticated AI is becoming readily available, few fields will remain unaffected.

The advent of AI will alter humanity’s concept of reality and therefore of itself. We are progressing toward great achievements, but those achievements should prompt philosophical reflection. Four centuries after Descartes promulgated his maxim, a question looms: If AI “thinks,” or approximates thinking, who are we?

AI will usher in a world in which decisions are made in three primary ways: by humans (which is familiar), by machines (which is becoming familiar), and by collaboration between humans and machines (which is not only unfamiliar but also unprecedented). AI is also in the process of transforming machines—which, until now, have been our tools—into our partners. We will begin to give AI fewer specific instructions about how exactly to achieve the goals we assign it. Much more frequently, we will present AI with ambiguous goals and ask: “How, based on your conclusions, should we proceed?”

AI sometimes operates in ways even its designers can only elaborate in general terms. As a result, the prospects for free society, even free will, may be altered. Even if these evolutions prove to be benign or reversible, it is incumbent on societies across the globe to understand these changes so they can reconcile them with their values, structures, and social contracts.

In such cases, new divides will appear within and between societies—between those who adopt the new technology and those who opt out or lack the means to develop or acquire some of its applications. When various groups or nations adopt differing concepts or applications of AI, their experiences of reality may diverge in ways that are difficult to predict or bridge. As societies develop their own human-machine partnerships—with varying goals, different training models, and potentially incompatible operational and moral limits with respect to AI—they may devolve into rivalry, technical incompatibility, and ever greater mutual incomprehension. Technology that was initially believed to be an instrument for the transcendence of national differences and the dispersal of objective truth may, in time, become the method by which civilizations and individuals diverge into different and mutually unintelligible realities.

in many cases, AI will suggest new solutions or directions that will bear the stamp of another, nonhuman, form of learning and logical evaluation.

Once AI’s performance outstrips that of humans for a given task, failing to apply that AI, at least as an adjunct to human efforts, may increasingly appear perverse or even negligent.

A novel human-machine partnership is emerging: First, humans define a problem or a goal for a machine. Then a machine, operating in a realm just beyond human reach, determines the optimal process to pursue. Once a machine has brought a process into the human realm, we can try to study it, understand it, and, ideally, incorporate it into existing practice.

  • Important

That AI requires a specified, measurable goal is reason not to fear all-knowing, all-controlling machines; such inventions remain the stuff of science fiction. Yet human-machine partnerships mark a profound departure from previous experience.

Since the Enlightenment, reason has been considered the defining attribute of humanity. The advent of machines that can approximate human reason will alter both humans and machines. Machines will enlighten humans, expanding our reality in ways we did not expect or necessarily intend to provoke (the opposite will also be possible: machines that consume human knowledge may be used to diminish us). Simultaneously, humans will create machines capable of surprising discoveries and conclusions—able to learn and evaluate the significance of their discoveries. The result will be a new epoch.

AI does not possess self-awareness—in other words, the ability to reflect on its role in the world. It does not have intention, motivation, morality, or emotion; even without these attributes, it is likely to develop different and unintended means of achieving assigned objectives. But inevitably, it will change humans and the environments in which they live. When individuals grow up or train with it, they may be tempted, even subconsciously, to anthropomorphize it and treat it as a fellow being.

With each answer, new aspects of reality were revealed that could serve as the jumping-off point for additional questions. In this way, new discoveries, patterns, and connections came to light, many of which could be applied to practical aspects of daily life: keeping time, navigating the ocean, synthesizing useful compounds.

The outcome was incongruence: societies remained united in their monotheism but were divided by competing interpretations and explorations of reality. They needed a concept—indeed, a philosophy—to guide their quest to understand the world and their role in it. The philosophers of the Enlightenment answered the call, declaring reason—the power to understand, think, and judge—both the method of and purpose for interacting with the environment. “Our soul is made for thinking, that is, for perceiving,” the French philosopher and polymath Montesquieu wrote, “but such a being must have curiosity, for just as all things form a chain in which every idea precedes one idea and follows another, so one cannot want to see the one without desiring to see the other.”4 The relationship between humanity’s first question (the nature of reality) and second question (its role in reality) became self-reinforcing: if reason begat consciousness, then the more humans reasoned, the more they fulfilled their purpose. Perceiving and elaborating on the world was the most important project in which they were or would ever be engaged. The age of reason was born.

For the following two hundred years, Kant’s essential distinction between the thing-in-itself and the unavoidably filtered world we experience hardly seemed to matter. While the human mind might present an imperfect picture of reality, it was the only picture available. What the structures of the human mind barred from view would, presumably, be barred forever—or would inspire faith and consciousness of the infinite. Without any alternative mechanism for accessing reality, it seemed that humanity’s blind spots would remain hidden. Whether human perception and reason ought to be the definitive measure of things, lacking an alternative, for a time, they became so. But AI is beginning to provide an alternative means of accessing—and thus understanding—reality.

Innovations made possible by the modern scientific method magnified weapons’ destructive power and eventually ushered in the age of total war—conflicts characterized by societal-level mobilization and industrial-level destruction.

This “uncertainty principle” (as it came to be known) implied that a completely accurate picture of reality might not be available at any given time. Further, Heisenberg argued that physical reality did not have independent inherent form, but was created by the process of observation: “I believe that one can formulate the emergence of the classical ‘path’ of a particle succinctly… the ‘path’ comes into being only because we observe it.”

Later, in the late twentieth century and the early twenty-first, this thinking informed theories of AI and machine learning. Such theories posited that AI’s potential lay partly in its ability to scan large data sets to learn types and patterns—e.g., groupings of words often found together, or features most often present in an image when that image was of a cat—and then to make sense of reality by identifying networks of similarities and likenesses with what the AI already knew. Even if AI would never know something in the way a human mind could, an accumulation of matches with the patterns of reality could approximate and sometimes exceed the performance of human perception and reason.

Throughout three centuries of discovery and exploration, humans have interpreted the world as Kant predicted they would according to the structure of their own minds. But as humans began to approach the limits of their cognitive capacity, they became willing to enlist machines—computers—to augment their thinking in order to transcend those limitations. Computers added a separate digital realm to the physical realm in which humans had always lived. As we are growing increasingly dependent on digital augmentation, we are entering a new epoch in which the reasoning human mind is yielding its pride of place as the sole discoverer, knower, and cataloger of the world’s phenomena.

But we have reached a tipping point: we can no longer conceive of some of our innovations as extensions of that which we already know. By compressing the time frame in which technology alters the experience of life, the revolution of digitization and the advancement of AI have produced phenomena that are truly new, not simply more powerful or efficient versions of things past. As computers have become faster and smaller, they have become embeddable in phones, watches, utilities, appliances, security systems, vehicles, weapons—and even human bodies. Communication across and between such digital systems is now essentially instantaneous. Tasks that were manual a generation ago—reading, research, shopping, discourse, record keeping, surveillance, and military planning and conduct—are now digital, data-driven, and unfolding in the same realm: cyberspace.

Digital natives do not feel the need, at least not urgently, to develop concepts that, for most of history, have compensated for the limitations of collective memory. They can (and do) ask search engines whatever they want to know, whether trivial, conceptual, or somewhere in between. Search engines, in turn, use AI to respond to their queries. In the process, humans delegate aspects of their thinking to technology. But information is not self-explanatory; it is context-dependent. To be useful—or at least meaningful—it must be understood through the lenses of culture and history.

When information is contextualized, it becomes knowledge. When knowledge compels convictions, it becomes wisdom. Yet the internet inundates users with the opinions of thousands, even millions, of other users, depriving them of the solitude required for sustained reflection that, historically, has led to the development of convictions. As solitude diminishes, so, too, does fortitude—not only to develop convictions but also to be faithful to them, particularly when they require the traversing of novel, and thus often lonely, roads. Only convictions—in combination with wisdom—enable people to access and explore new horizons.

The digital world has little patience for wisdom; its values are shaped by approbation, not introspection.

The introduction of AI—which completes the sentence we are texting, identifies the book or store we are seeking, and “intuits” articles and entertainment we might enjoy based on prior behavior—has often seemed more mundane than revolutionary. But as it is being applied to more elements of our lives, it is altering the role that our minds have traditionally played in shaping, ordering, and assessing our choices and actions.

Turing suggested setting aside the problem of machine intelligence entirely. What mattered, Turing posited, was not the mechanism but the manifestation of intelligence. Because the inner lives of other beings remain unknowable, he explained, our sole means of measuring intelligence should be external behavior. With this insight, Turing sidestepped centuries of philosophical debate on the nature of intelligence. The “imitation game” he introduced proposed that if a machine operated so proficiently that observers could not distinguish its behavior from a human’s, the machine should be labeled intelligent. The Turing test was born.

Such AI learns by consuming data, then drawing observations and conclusions based on the data. While previous systems required exact inputs and outputs, AIs with imprecise function require neither. These AIs translate texts not by swapping individual words but by identifying and employing idiomatic phrases and patterns. Likewise, such AI is considered dynamic because it evolves in response to changing circumstances and emergent because it can identify solutions that are novel to humans. In machinery, these four qualities are revolutionary.

Unlike classical algorithms, which consist of steps for producing precise results, machine-learning algorithms consist of steps for improving upon imprecise results. These techniques are making remarkable progress.
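The distinction above can be made concrete with a minimal sketch (the data and learning rate are invented for illustration): instead of computing an exact answer in one pass, gradient descent repeatedly nudges an imprecise model to reduce its error.

```python
# "Steps for improving upon imprecise results": gradient descent
# repeatedly adjusts model parameters to shrink the error, rather
# than computing an exact result directly.

def predict(w, b, x):
    return w * x + b

def mean_squared_error(w, b, data):
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

def improve(w, b, data, lr=0.01):
    """One improvement step: follow the gradient of the error downhill."""
    n = len(data)
    grad_w = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / n
    grad_b = sum(2 * (predict(w, b, x) - y) for x, y in data) / n
    return w - lr * grad_w, b - lr * grad_b

# Data generated by a rule (y = 2x + 1) the learner does not know.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0                      # start from a poor, imprecise model
before = mean_squared_error(w, b, data)
for _ in range(500):                 # improvement, not exact solution
    w, b = improve(w, b, data)
after = mean_squared_error(w, b, data)
print(f"error before: {before:.3f}, after: {after:.6f}")
```

Each pass leaves the model a little less wrong; the "result" is never computed directly, only approached.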

In the 1990s, a set of renegade researchers set aside many of the earlier era’s assumptions, shifting their focus to machine learning. While machine learning dated to the 1950s, new advances enabled practical applications. The methods that have worked best in practice extract patterns from large datasets using neural networks. In philosophical terms, AI’s pioneers had turned from the early Enlightenment’s focus on reducing the world to mechanistic rules to constructing approximations of reality. To identify an image of a cat, they realized, a machine had to “learn” a range of visual representations of cats by observing the animal in various contexts. To enable machine learning, what mattered was the overlap between various representations of a thing, not its ideal—in philosophical terms, Wittgenstein, not Plato. The modern field of machine learning—of programs that learn through experience—was born.

A machine-learning algorithm that improves a model based on underlying data, however, is able to recognize relationships that have eluded humans.

Rather, modern AI algorithms measure the quality of outcomes and provide means for improving those outcomes, enabling them to be learned rather than directly specified.

But neural network training is resource-intensive. The process requires substantial computing power and complex algorithms to analyze and adjust to large amounts of data. Unlike humans, most AIs cannot simultaneously train and execute. Rather, they divide their effort into two steps: training and inference. During the training phase, the AI’s quality-measurement and improvement algorithms evaluate and amend its model to obtain quality results. In the case of halicin, this was the phase when the AI identified relationships between molecular structures and antibiotic effects based on the training-set data. Then, in the inference phase, researchers tasked the AI with identifying antibiotics that its newly trained model predicted would have a strong antibiotic effect. The AI, then, did not reach conclusions by reasoning as humans reason; it reached conclusions by applying the model it developed.
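The two-phase workflow can be sketched with a deliberately tiny stand-in (the actual halicin model was a deep network over molecular structures; the "molecules," features, and labels below are invented for illustration). Parameters are adjusted only in the training phase; in the inference phase the frozen model simply scores new candidates.

```python
# Hypothetical sketch of the training/inference split: a model is first
# *trained* (its weights amended against labeled examples), then used
# for *inference* (the fixed model scores unseen candidates).

def score(weights, features):
    return sum(w * f for w, f in zip(weights, features))

# --- Training phase: evaluate and amend the model against known data ---
training_set = [
    # (feature vector, 1 = has the desired effect, 0 = does not)
    ([1.0, 0.2], 1),
    ([0.9, 0.1], 1),
    ([0.2, 0.9], 0),
    ([0.1, 1.0], 0),
]
weights = [0.0, 0.0]
for _ in range(20):
    for features, label in training_set:
        predicted = 1 if score(weights, features) > 0 else 0
        error = label - predicted          # quality measurement
        weights = [w + 0.1 * error * f     # improvement step
                   for w, f in zip(weights, features)]

# --- Inference phase: apply the trained model; no further learning ---
candidates = {"candidate_A": [0.95, 0.15], "candidate_B": [0.15, 0.95]}
predictions = {name: score(weights, feats) > 0
               for name, feats in candidates.items()}
print(predictions)
```

The inference loop contains no update step: the model applies what it developed, rather than reasoning anew.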

Because the application of AI varies with the tasks it performs, so, too, must the techniques developers use to create that AI. This is a fundamental challenge of deploying machine learning: different goals and functions require different training techniques.

As of this writing, three forms of machine learning are noteworthy: supervised learning, unsupervised learning, and reinforcement learning.

By training on data that pairs desired outputs with each set of inputs, supervised learning has proved to be a particularly effective way of creating a model that can predict outputs in response to novel inputs.

  • Important (dev)
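A minimal sketch of that pairing (the points and labels are invented for illustration): every training example couples an input with its labeled output, and the model predicts outputs for inputs it has never seen — here via one-nearest-neighbor, one of the simplest supervised methods.

```python
# Supervised learning in miniature: labeled (input, output) pairs,
# then prediction for novel inputs by nearest labeled example.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_pairs, novel_input):
    """Label the novel input with the label of its closest known input."""
    nearest = min(training_pairs, key=lambda pair: distance(pair[0], novel_input))
    return nearest[1]

# (input, labeled output) pairs — the "supervision."
training_pairs = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

print(predict(training_pairs, (1.1, 0.9)))   # near the "cat" examples
print(predict(training_pairs, (5.1, 4.9)))   # near the "dog" examples
```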

Many organizations employ unsupervised learning to extract potentially useful insights. Thanks to the internet and the digitization of information, businesses, governments, and researchers are awash in data, which they can access more easily than they could in the past. Marketers have more customer information, biologists more DNA data, and bankers more financial transactions on file. When marketers want to identify their customer base, or when fraud analysts seek potential inconsistencies among reams of transactions, unsupervised learning allows AIs to identify patterns or anomalies without having any information regarding outcomes. In unsupervised learning, the training data contains only inputs. Programmers then task the learning algorithm with producing groupings based on some specified measure of similarity. For example, streaming video services such as Netflix use algorithms to identify clusters of customers with similar viewing habits in order to recommend additional content to those customers. But fine-tuning such algorithms can be complex: because most people have several interests, they are typically grouped within several clusters.
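The key contrast with the supervised case can be shown in a few lines (the numbers are invented for illustration, and this toy k-means assumes both clusters stay non-empty): the training data contains only inputs, and the algorithm groups them purely by a specified measure of similarity.

```python
# Unsupervised learning in miniature: no labels, only inputs, grouped
# by similarity — a toy one-dimensional k-means with k = 2.

def kmeans_1d(points, iterations=10):
    centers = [min(points), max(points)]      # crude initialization
    clusters = [[], []]
    for _ in range(iterations):
        clusters = [[], []]
        for p in points:                      # assign by similarity
            nearest = min((0, 1), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster
        # (assumes, for this toy data, that neither cluster is empty)
        centers = [sum(c) / len(c) for c in clusters]
    return clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]      # two natural groupings
low, high = kmeans_1d(points)
print(sorted(low), sorted(high))
```

Nothing told the algorithm what the groups mean; it only discovered that the inputs fall into two similarity clusters — the same logic, at toy scale, as clustering viewers by habits.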

In both unsupervised and supervised learning, AIs chiefly use data to perform tasks such as discovering trends, identifying images, and making predictions. Looking beyond data analysis, researchers sought to train AIs to operate in dynamic environments. A third…

In reinforcement learning, AI is not a passive identifier of relationships within data. Instead, it is an “agent” in a controlled environment, observing and recording responses to its actions. Generally these environments are simulated, simplified versions of reality lacking real-world complexities. It is easier to accurately simulate the operation of a robot on an assembly line than amid the chaos of a crowded city street. But even in a simulated, simplified environment, such as a chess match, a single move can trigger a cascade of opportunities and risks. As a result, directing an AI to train itself in an artificial environment is, in general, insufficient to produce the best performance. Feedback is required. Providing that feedback is the task of the reward function, which indicates to the AI how successful its approach was. No human could effectively fill this role: running on digital processors, AIs can train themselves hundreds, thousands, or billions of times within the space of hours or days, making direct human feedback wholly impractical.…

Reinforcement learning requires human involvement in creating the AI training environment (even if not in providing direct feedback during the training itself): humans define a simulator and reward function, and the AI trains itself on that basis. For meaningful results,…
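That division of labor can be sketched with a toy example (the environment and numbers are invented for illustration): a human defines the simulator (a five-cell corridor with a goal at one end) and the reward function; the agent then trains itself against them, here with tabular Q-learning.

```python
import random

# Humans supply the simulator and the reward function; the agent
# trains itself by acting in the simulator and observing the rewards.

N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)      # cells 0..4; move left/right

def simulator(state, action):                  # human-defined simulator
    next_state = min(max(state + action, 0), N_STATES - 1)
    return next_state, next_state == GOAL

def reward(next_state):                        # human-defined reward function
    return 1.0 if next_state == GOAL else 0.0

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(500):                           # the agent trains itself
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS)        # explore by acting...
        next_state, done = simulator(state, action)
        feedback = reward(next_state)          # ...and observing feedback
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Q-learning update: nudge the value estimate toward the
        # observed reward plus the discounted best continuation.
        q[(state, action)] += 0.5 * (feedback + 0.9 * best_next - q[(state, action)])
        state = next_state

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)   # the trained agent's preferred direction in each cell
```

No human supervised the thousands of individual updates; the reward function alone told the agent how successful each step was.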

For millennia, humanity has been challenged by the inability of individuals to communicate clearly across cultural and linguistic divides. Mutual miscomprehension, and the inaccessibility of information in one language to a speaker of another, has caused misunderstanding, impeded trade, and fomented war.

Now, it seems, AI is poised to make powerful translation capabilities available to wide audiences, potentially allowing more people to communicate more easily with one another.

Working from the basic building blocks of machine learning, developers have the capacity to continue innovating in brilliant ways, unlocking new AIs in the process.

The radical advancement of automated language translation promises to transform business, diplomacy, media, academia, and other fields as people engage with languages that are not their own more easily, quickly, and cheaply than ever before.

Whereas a standard neural network can identify a picture of a human face, a generative network can create an image of a human face that seems real. Conceptually, such networks depart from their predecessors.

Generators will enrich our information space, but without checks, they will likely also blur the line between reality and fantasy.

By analogy, one can think of the generator as being tasked with brainstorming and the discriminator as being tasked with assessing which ideas are relevant and realistic. In the training phase, the generator and discriminator are trained in alternation, holding the generator fixed to train the discriminator and vice versa.
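The alternation can be sketched with a deliberately tiny stand-in (invented for illustration; real GANs use deep networks over images or audio): a one-parameter "generator" learns to emit a value matching the real data, while a logistic "discriminator" is trained, in alternation, to tell real from generated.

```python
import math

# Toy adversarial training: real data is the constant 5.0; the
# generator's single parameter is the value it emits; the
# discriminator is D(x) = sigmoid(w*x + b). Each is updated while
# the other is held fixed, as described above.

REAL = 5.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

theta = 0.0          # generator parameter: the value it "brainstorms"
w, b = 0.0, 0.0      # discriminator parameters

for _ in range(300):
    # Discriminator step (generator fixed): gradient ascent pushing
    # D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * REAL + b), sigmoid(w * theta + b)
    w += 0.05 * ((1 - d_real) * REAL - d_fake * theta)
    b += 0.05 * ((1 - d_real) - d_fake)

    # Generator step (discriminator fixed): adjust theta so the
    # discriminator scores the generated value as "real."
    d_fake = sigmoid(w * theta + b)
    theta += 0.05 * (1 - d_fake) * w

print(f"generated value: {theta:.2f} (real data: {REAL})")
```

The generator never sees the real data directly; it improves only by fooling the assessor, which is the essence of the brainstorm-and-assess analogy.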

Transformers like GPT-3 detect patterns in sequential elements such as text, enabling them to predict and generate the elements likely to follow. In GPT-3’s case, the AI can capture the sequential dependencies between words, paragraphs, or code in order to generate these outputs.
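GPT-3 itself is far beyond a sketch, but the *objective* described here — predict the element likely to follow a sequence — can be shown with a toy bigram model over words (the corpus is invented for illustration; a transformer conditions on much longer context via attention, but the prediction goal is the same).

```python
from collections import Counter, defaultdict

# Next-element prediction in miniature: count which word follows
# which, then predict the most frequently observed successor.

corpus = "the cat sat on the mat . the cat ate .".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1   # record observed successors

def predict_next(word):
    """Return the most frequently observed successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" follows "the" twice, "mat" once
```

Generation is then just repeated prediction: feed each predicted element back in as the new context.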

Machine-learning methods have taken AI from beating human chess experts to discovering entirely new chess strategies. And AI’s capacity for discovery is not limited to games. As we mentioned, DeepMind built an AI that reduced the energy expenditures of Google’s data centers by 40 percent beyond what its skilled engineers had achieved. This and other advances are taking AI past what Turing envisioned in his test—performance indistinguishable from human intelligence—to include performance that exceeds humans’, thereby pushing forward the frontiers of understanding. These advances promise to allow AI to handle new tasks, to make AI more prevalent, and even to allow it to generate original text and code.

The proposition that filtration can help steer choices is both familiar and practical. In the physical world, tourists in foreign countries may hire guides to show them the most historic sites or the most meaningful sites according to their religions, nationalities, or professions. But filtration can become censorship through omission. A guide can avoid slums and high-crime areas. In an authoritarian country, a guide can be a “government minder” and thus only show a tourist what the regime wants him or her to see. But in cyberspace, filtration is self-reinforcing. When the algorithmic logic that personalizes searching and streaming begins to personalize the consumption of news, books, or other sources of information, it amplifies some subjects and sources and, as a practical necessity, omits others completely. The consequence of de facto omission is twofold: it can create personal echo chambers, and it can foment discordance between them. What a person consumes (and thus assumes reflects reality) becomes different from what a second person consumes, and what a second person consumes becomes different still from what a third person consumes—a paradox we consider further in chapter 6.

Unlike earlier generations of AI, in which people distilled a society’s understanding of reality in a program’s code, contemporary machine-learning AIs largely model reality on their own. While developers may examine the results generated by their AIs, the AIs do not “explain” how or what they learned in human terms. Nor can developers ask an AI to characterize what it has learned.

At best, we can only observe the results an AI produces once it has completed its training. Accordingly, humans must work backward. Once an AI produces a result, people—be they researchers or auditors—must…

Sometimes, operating beyond the bounds of human experience and unable to conceptualize or generate explanations, AI may produce insights that are true but beyond the frontiers of (at least current) human understanding. When AIs produce unexpected discoveries in this fashion, humans may find themselves in a similar position to that of Alexander Fleming, the discoverer of penicillin. In Fleming’s lab, a penicillin-producing mold accidentally colonized a petri dish, killing off disease-causing bacteria and cluing Fleming in to the existence of the potent, previously unknown compound. At the time, humanity, lacking a concept of an antibiotic, did not understand how penicillin worked. The discovery launched an entire field of endeavor. AIs produce similarly startling…

In addition, AI cannot reflect upon what it discovers. Across many eras, humans have experienced war, then reflected on its lessons, its sorrows, and its extremes—from Homer’s account of Hector and Achilles at the gates of Troy in The Iliad to Picasso’s portrayal of civilian casualties in the Spanish Civil War in Guernica. AI cannot do this, nor can it feel the moral or philosophical compulsion to do so. It simply applies its method and produces a result, be that result—from a human perspective—banal or shocking, benign or malignant.…

Not only are AIs incapable of reflection; they also make mistakes—including mistakes that any human would regard as rudimentary. And while developers are continually weeding out flaws,…

Alternatively, AI bias may result directly from human bias—that is, its training data may contain bias inherent in human actions. This can occur in the labeling of outputs for supervised learning—whatever misidentification the labeler makes, deliberate or inadvertent, the AI will encode. Or a developer may incorrectly specify a reward function used in reinforcement training. Imagine an AI trained to play chess on a simulator that overvalues a set of moves favored by its creator. Like its creator, that AI will learn to prefer those moves, even if they fare poorly in practice.

When AI is employed, we should seek to understand its errors—not so we can forgive them but so we can correct them. Bias besets all aspects of human society, and in every one of them it merits a serious response.

Society cannot mitigate what it does not foresee.

Accordingly, the development of procedures to assess whether an AI will perform as expected is vital. Since machine learning will drive AI for the foreseeable future, humans will remain unaware of what an AI is learning and how it knows what it has learned. While this may be disconcerting, it should not be: human learning is often similarly opaque. Artists and athletes, writers and mechanics, parents and children—indeed, all humans—often act on the basis of intuition and thus are unable to articulate what or how they learned. To cope with this opacity, societies have developed myriad professional certification programs, regulations, and laws. Similar techniques should be applied to AIs; for example, societies could permit an AI to be employed only after its creators demonstrate its reliability through testing processes. Developing professional certification, compliance monitoring, and oversight programs for AI—and the auditing expertise their execution will require—will be a crucial societal project.
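The certification idea can be sketched as a pre-deployment gate (the model, cases, and threshold below are stand-ins invented for illustration): the AI is exercised against held-out test cases and admitted only if it meets a declared reliability threshold.

```python
# A hypothetical pre-deployment reliability gate: demonstrate the
# AI's behavior through testing before it may be employed.

def certify(model, test_cases, required_accuracy=0.95):
    """Approve deployment only if held-out accuracy meets the bar."""
    passed = sum(1 for inputs, expected in test_cases
                 if model(inputs) == expected)
    accuracy = passed / len(test_cases)
    return accuracy >= required_accuracy, accuracy

# Stand-in "model": flag a sensor reading as alarming above a threshold.
model = lambda reading: "alarm" if reading > 7.0 else "ok"

held_out = [(2.0, "ok"), (9.1, "alarm"), (6.9, "ok"), (7.5, "alarm")]
approved, accuracy = certify(model, held_out, required_accuracy=0.95)
print(f"approved={approved}, accuracy={accuracy:.0%}")
```

The gate judges only observable behavior — consistent with the point that we cannot inspect what the model "knows," only what it does.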

Once training is complete and the algorithm is fixed, the AI can be deployed without fear that it will develop unexpected, undesired behaviors after it completes its tests. In other words, when the algorithm is fixed, a self-driving car trained to stop at red lights cannot suddenly “decide” to start running them.

Testing can further reduce the risk that the AI will falter when made operational. As of this writing, AI is constrained by its code in three ways. First, the code sets the parameters of the AI’s possible actions. These parameters might be quite broad, permitting a substantial range of autonomy and therefore risk. A self-driving AI can brake, accelerate, and turn, any of which could precipitate a collision. Nevertheless, the parameters of the code establish some limits on the AI’s behavior. Though AlphaZero developed novel chess strategies, it did not do so by breaking the rules of chess; it did not suddenly move pawns backward. Actions outside the parameters of the code are beyond the AI’s vocabulary. And if the programmer does not put the capacity there, or explicitly forbids the action, the AI cannot do it. Second, AI is constrained by its objective function, which defines and assigns what it is to optimize. In the case of the model that discovered halicin, the objective function was the relationship between the molecules’ chemical properties and their antibiotic potential. Limited by its objective function, that AI could not have instead sought to identify molecules that might, for example, help cure cancer. Finally and most obviously, AI can only process inputs that it is designed to recognize and analyze. Without human intervention in the form of an auxiliary program, a translation AI cannot evaluate images—the data would appear nonsensical to it.
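The three constraints can be made visible in code (the self-driving framing is a stand-in invented for illustration, not any real vehicle's control software): a fixed action space, a fixed objective, and a fixed input type.

```python
from enum import Enum

class Action(Enum):          # 1. The code sets the possible actions:
    BRAKE = "brake"          #    nothing outside this enum exists in
    ACCELERATE = "accelerate"  #  the AI's "vocabulary."
    TURN = "turn"

def objective(trajectory):   # 2. The objective function fixes what is
    """Hypothetical score: fewer hard brakes is better."""
    return -sum(1 for a in trajectory if a is Action.BRAKE)

def choose(distance_m: float) -> Action:
    # 3. The controller processes only inputs it is designed to
    # recognize (a distance in meters); handed text or an image,
    # the data would be nonsensical to it.
    return Action.BRAKE if distance_m < 10.0 else Action.ACCELERATE

print(choose(5.0))
```

However the controller's internals were learned, it cannot emit an action outside `Action`, optimize anything but `objective`, or consume an input type it was not built to read.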

That said, many aspects of AI and machine learning still need to be developed and understood. Machine-learning-powered AI requires substantial training data. Training on that data, in turn, requires substantial computing infrastructure, making retraining an AI prohibitively expensive even when it is otherwise desirable. With data and computing requirements limiting the development of more advanced AI, devising training methods that use less data and less computing power is a critical frontier.

Forecasting how swiftly AI will be applied to additional fields is equally difficult. But we can continue to expect dramatic increases in the capacities of these systems. Whether these advances take five, ten, or twenty-five years, at some point, they will occur. Existing AI applications will become more compact, effective, inexpensive, and, therefore, more frequently used. AI will increasingly become part of our daily lives, both visibly and invisibly. It is reasonable to expect that over time, AI will progress at least as fast as computing power has, yielding a millionfold increase in fifteen to twenty years. Such progress will allow the creation of neural networks that, in scale, are equal to the human brain. As of this writing, generative transformers have the largest networks. GPT-3 has about 10^11 such weights. But recently, the state-funded Beijing Academy of Artificial Intelligence announced a generative language model with 10 times as many weights as GPT-3. This is still 10^4 times fewer than estimates of the human brain’s synapses. But if advances proceed at the rate of doubling every two years, this gap could close in less than a decade. Of course, scale does not translate directly to intelligence. Indeed, the level of capability a network will sustain is unknown. Some primates have brains similar in size to or even larger than human brains, but they do not exhibit anything approaching human acumen. Likely, development will yield AI “savants”—programs capable of dramatically exceeding human performance in specific areas, such as advanced scientific fields.

“network platforms”: digital services that provide value to their users by aggregating those users in large numbers, often at a transnational and global scale.

Before significant disruption arises, governments, network platform operators, and users must consider the nature of their goals, the basic premises and parameters of their interactions, and the type of world they aim to create.

Thus, although they are operated as commercial entities, some network platforms are becoming geopolitically significant actors by virtue of their scale, function, and influence.

What seems intuitive to the software engineer may be perplexing to the political leader or inexplicable to the philosopher. What the consumer welcomes as a convenience the national security official may view as an unacceptable threat or the political leader may reject as out of keeping with national objectives. What one society may embrace as a welcome guarantee another may interpret as a loss of choice or freedom.

To achieve greater convenience and accuracy, human developers have had to willingly forgo a measure of direct understanding.

Positive network effects occur for information-exchange activities in which the value rises with the number of participants. When the value rises in this manner, success tends to produce further success and a greater likelihood of eventual predominance. People naturally gravitate toward existing gatherings, which leads to larger aggregations of users. For a network platform relatively unconstrained by borders, this dynamic leads to a broader, often transnational geographic scope with correspondingly few major competing services.
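One common, simplified way to formalize "value rises with the number of participants" is Metcalfe's law, which proxies a network's value by its count of possible pairwise connections. This is an illustrative model, not one the text itself commits to:

```python
def pairwise_connections(n: int) -> int:
    """Metcalfe-style proxy for network value: number of possible user pairs."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the number of possible
# connections, which is why early scale advantages tend to compound
# and markets tip toward a few dominant platforms.
for users in (1_000, 2_000, 4_000):
    print(users, pairwise_connections(users))
```

The superlinear growth is the point: a platform twice as large is, on this proxy, roughly four times as valuable, so users gravitate toward the largest existing gathering.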

There is no inherent reason for the dynamic of positive network effects to stop at national or regional borders—and network platforms often expand across such terrestrial boundaries. Physical distances and national or linguistic differences are rarely obstacles to expansion: the digital world is accessible from anywhere with internet connectivity, and network platforms’ services can typically be delivered in several languages. The main limitations on expansion are those put in place by governments or perhaps technological incompatibility (the former sometimes encouraging the latter). Thus, for each type of service, such as social media and video streaming, there are generally a small number of global network platforms, perhaps complemented by local ones. Their users benefit from, and contribute to, a new, as yet poorly understood phenomenon: the operation of nonhuman intelligence at global scale.

The relationship between an individual, a network platform, and its other users is a novel combination of intimate bond and remote connection.

To a large extent, AI is judged by the utility of its results, not the process used to reach those results. This signals a shift in priorities from earlier eras, when each step in a mental or mechanical process was either experienced by a human being (a thought, a conversation, an administrative process) or could be paused, inspected, and repeated by human beings.

In a sense, the individual using such a service is not driving alone; instead, he or she is part of a system in which human and machine intelligence are collaborating to guide an aggregation of people through their individual routes.

This raises essential questions: With what objective function is such AI operating? And by whose design, and within what regulatory parameters?
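The question of objective functions can be made concrete with a toy sketch. Both objectives below are hypothetical illustrations for a routing AI of the kind described above, not any real system's design; the point is that the designer's choice between them shapes every user's experience:

```python
# Hypothetical objective functions for an illustrative routing AI.
# Neither is drawn from any real system.

def individual_objective(route_time: float) -> float:
    """Score a route by this one user's travel time (lower is better)."""
    return route_time

def aggregate_objective(route_times: list[float]) -> float:
    """Score an assignment by total travel time across all users."""
    return sum(route_times)

# An assignment that slightly slows one driver may still win under
# the aggregate objective if it relieves congestion for everyone else.
print(aggregate_objective([10.0, 12.0, 11.5]))  # 33.5
```

A system optimizing the aggregate objective may deliberately route some individuals onto slower paths, which is exactly why the questions of whose design, and within what regulatory parameters, matter.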

The fact that AI operates according to its own processes, which are different from and often faster than human mental processes, adds another complexity. AI develops its own approaches for fulfilling whatever objective functions were specified. It produces outcomes and answers that are not characteristically human and that are largely independent of national or corporate cultures. The global nature of the digital world, and AI’s ability to monitor, block, tailor, produce, and distribute information on network platforms worldwide, imports these complexities to the “information space” of disparate societies.

When a free society relies on AI-enabled network platforms that generate, transmit, and filter content across national and regional borders, and when those platforms proceed in a manner that inadvertently promotes hate and division, that society faces a novel threat that should prompt it to consider novel approaches to policing its information environment. The underlying problem is urgent, yet AI-reliant solutions produce their own critical questions. We must not forgo consideration of the proper balance between human judgment and AI-driven automation on both sides of the equation.

Abroad, they are all increasingly being treated (often without distinction) as creations and representatives of the United States—although in many cases the US government’s role was confined to staying out of their way.

There is the concern that network platforms may foster, even passively, a level of connection and influence that previously would have arisen only from a close alliance, particularly with the use of AI as a tool for learning from and influencing citizens. If a network platform is useful and successful, it comes to support broader commercial and industrial functions—and, in this capacity, it may become nationally indispensable.

technologies) by withholding them in a crisis may prompt governments to engage in new forms of policy and strategy.

For countries and regions that do not produce homegrown network platforms, the choice for their immediate future seems to be between (1) limiting reliance on platforms that could provide leverage to an adversary government; (2) remaining vulnerable—for example, to another government’s potential ability to access data about its citizens; or (3) counterbalancing potential threats against each other.

Today, transportation network platforms created in one country could become the arteries and lifeblood of another country, as the platform learns which consumers need certain products and as it automates the logistics of provision. In effect, such network platforms could become critical economic infrastructure, giving the country of origin leverage over any country that relies on it.

communication may, in time, facilitate a process of regionalization—uniting blocs of users in separate realities, influenced by distinctive AIs that have evolved in different directions.

AI-ENABLED NETWORK PLATFORMS AND OUR HUMAN FUTURE

Human perception and experience, filtered through reason, has long defined our understanding of reality. This understanding has typically been individual and local in scope, only reaching broader correspondence for certain essential questions and phenomena; it has rarely been global or universal, except in the distinctive context of religion. Now day-to-day reality is accessible on a global scale, across network platforms that unite vast numbers of users.

The human mind has never functioned in the manner in which the internet era demands.

Network platform operators will face choices beyond those of serving customers and achieving commercial success. Until now, they have generally not been obliged to define a national or service ethic beyond the organic drive to improve their products, increase their reach, and serve the interests of users and shareholders. As they have assumed broader and more influential roles, however, including functions that influence (and sometimes rival) the activities of governments, they will face far greater challenges. Not only will they need to assist in defining the capacity and ultimate purposes of the virtual realms they have created, they will also need to pay increasing attention to how they interact with one another and with other sectors of society.

“Now I am become Death, the destroyer of worlds.” This insight presaged the central paradox of Cold War strategy: that the dominant weapons technology of the era was never used. The destructiveness of weapons remained out of proportion to achievable objectives other than pure survival.

The first is cyber conflict, which has magnified vulnerabilities as well as expanded the field of strategic contests and the variety of options available to participants. The second is AI, which has the capacity to transform conventional, nuclear, and cyber weapons strategy. The emergence of new technology has compounded the dilemmas of nuclear weapons.

A central paradox of our digital age is that the greater a society’s digital capacity, the more vulnerable it becomes. Computers, communications systems, financial markets, universities, hospitals, airlines, and public transit systems—even the mechanics of democratic politics—involve systems that are, to varying degrees, vulnerable to cyber manipulation or attack.

In addition to its potentially transformative utility, AI’s capacity for autonomy and separate logic generates a layer of incalculability. Most traditional military strategies and tactics have been based on the assumption of a human adversary whose conduct and decision-making calculus fit within a recognizable framework or have been defined by experience and conventional wisdom. Yet an AI piloting an aircraft or scanning for targets follows its own logic, which may be inscrutable to an adversary and unsusceptible to traditional signals and feints—and which will, in most cases, proceed faster than the speed of human thought.

If a competitor trains its AI in silence and secrecy, can leaders know—outside of a conflict—whether they are ahead or behind in an arms race?

The most strategically significant aspects of cutting-edge AI development will frequently be adopted by governments to meet their concepts of national interest.

The most revolutionary and unpredictable effect may occur at the point where AI and human intelligence encounter each other.

come to operate in conceptual and analytical realms that are accessible to AI but not to human reason, they will become opaque—in their processes, reach, and ultimate significance. If policy makers conclude that AI’s assistance in scouring the deepest patterns of reality is necessary to understand the capabilities and intentions of adversaries (who may field their own AI) and respond to them in a timely manner, delegation of critical decisions to machines may grow inevitable. Societies are likely to reach differing instinctive limits on what to delegate and what risks and consequences to accept. Major countries should not wait for a crisis to initiate a dialogue about the implications—strategic, doctrinal, and moral—of these evolutions. If they do, their impact is likely to be irreversible. An international attempt to limit these risks is imperative.

Three qualities have traditionally facilitated the separation of military and civilian domains: technological differentiation, concentrated control, and magnitude of effect. Technologies with either exclusively military or exclusively civilian applications are described as differentiated. Concentrated control refers to technologies that a government can easily manage as opposed to technologies that spread easily and thereby escape government control. Finally, the magnitude of effect refers to a technology’s destructive potential.

being, in essence, no more than lines of code: most algorithms (with some noteworthy exceptions) can be run on single computers or small networks, meaning that governments have difficulty controlling the technology by controlling the infrastructure. Finally, AI applications have substantial destructive potential. This unusual constellation of qualities, when coupled with the broad range of stakeholders, produces strategic challenges of novel complexity.

In the stock market, sophisticated so-called quant firms have recognized that AI algorithms can spot market patterns and react with speed that exceeds that of even the ablest trader. Accordingly, such firms have delegated control over certain aspects of their securities trading to these algorithms. In many cases, these algorithmic systems can exceed human profits by a substantial margin. However, they occasionally grossly miscalculate—potentially far beyond the worst human error.

technology for which strategists could find no viable operational doctrine. The dilemma of the AI age will be different: its defining technology will be widely acquired, mastered, and employed. The achievement of mutual strategic restraint—or even achieving a common definition of restraint—will be more difficult than ever before, both conceptually and practically.

In the era of artificial intelligence, the enduring quest for national advantage must be informed by an ethic of human preservation.

IN AN AGE in which machines increasingly perform tasks only humans used to be capable of, what, then, will constitute our identity as human beings? As previous chapters have explored, AI will expand what we know of reality. It will alter how we communicate, network, and share information. It will transform the doctrines and strategies we develop and deploy. When we no longer explore and shape reality on our own—when we enlist AI as an adjunct to our perceptions and thoughts—how will we come to see ourselves and our role in the world? How will we reconcile AI with concepts like human autonomy and dignity?

Now we are entering an era in which AI—a human creation—is increasingly entrusted with tasks that previously would have been performed, or attempted, by human minds. As AI executes these tasks, producing results approximating and sometimes surpassing those of human intelligence, it challenges a defining attribute of what it means to be human. Moreover, AI is capable of learning, evolving, and becoming “better” (according to the objective function it has been given). This dynamic learning permits AI to achieve complex outcomes that were, until now, the preserve of humans and human organizations.

What will its guiding principles be? To the two traditional ways by which people have known the world, faith and reason, AI adds a third. This shift will test—and, in some instances, transform—our core assumptions about the world and our place in it. Reason not only revolutionized the sciences, it also altered our social lives, our arts, and our faith. Under its scrutiny, the hierarchy of feudalism fell, and democracy, the idea that reasoning people should direct their own governance, rose. Now AI will again test the principles upon which our self-understanding rests.

For humans accustomed to agency, centrality, and a monopoly on complex intelligence, AI will challenge self-perception.

The fact that AI is able to make certain predictions or decisions, or generate certain material, does not by itself indicate sophistication akin to that of humans. But in many cases, the results are comparable or superior to those previously produced only by humans.

With perceptions of reality complementary to humans’, AI may emerge as an effective partner for people. In scientific discovery, creative work, software development, and other comparable fields, there can be great benefits to having an interlocutor with a different perception. But this collaboration will require humans to adjust to a world in which our reason is not the only—and perhaps not the most informative—way of knowing or navigating reality. This portends a shift in human experience more significant than any that has occurred for nearly six centuries—since the advent of the movable-type printing press.

Ultimately, individuals and societies will have to make up their minds which aspects of life to reserve for human intelligence and which to turn over to AI or human-AI collaboration.

Indeed, in many fields, the experience of surpassing traditional reason through specialized technology, as in the cases of AI’s breakthroughs in medicine, biology, chemistry, and physics, will often prove fulfilling.

processes primarily as consumers, will also frequently find these processes gratifying, as in the case of a busy person who can read or check their email while traveling in a self-driving car. Indeed, embedding AI in consumer products will distribute the technology’s benefits widely. However, AI will also operate networks and systems that are not designed for any specific individual user’s benefit and are beyond any individual user’s control. In these cases, encounters with AI may be disconcerting or disempowering, as when AI recommends one individual over others for a desirable promotion or transfer—or encourages or promotes attitudes that challenge or overpower prevailing wisdom.

decisions are often as accurate as, or more accurate than, humans' and, with the proper safeguards, may actually be less biased. Similarly, AI may be more effective at distributing resources, predicting outcomes, and recommending solutions. Indeed, as generative AI becomes more prevalent, its ability to produce novel text, images, video, and code may even enable it to perform as effectively as its human counterparts in roles typically considered creative (such as drafting documents and creating advertisements). For the entrepreneur offering new products, the administrator wielding new information, and the developer creating increasingly powerful AI, advances in these technologies may enhance senses of agency and choice.

Optimizing the distribution of resources and increasing the accuracy of decision making are good for society, but for the individual, meaning is more often derived from autonomy and the ability to explain outcomes on the basis of some set of actions and principles. Explanations supply meaning and permit purpose; the public recognition and explicit application of moral principles supply justice. But an algorithm does not offer reasons grounded in human experience to explain its conclusions to the general public. Some people, particularly those who understand AI, may find this world intelligible. But others, greater in number, may not understand why AI does what it does, diminishing their sense of autonomy and their ability to ascribe meaning to the world.

As AI transforms the nature of work, it may jeopardize many people’s senses of identity, fulfillment, and financial security. Those most affected by such change and potential dislocation will likely hold blue-collar and middle-management jobs that require specific training as well as professional jobs involving review or interpretation of data or drafting of documents in standard forms.

Whatever AI’s long-term effects prove to be, in the short term, the technology will revolutionize certain economic segments, professions, and identities. Societies need to be ready to supply the displaced not only with alternative sources of income but also with alternative sources of fulfillment.

At times, unseen AI may lend the world a magical congeniality, as when stores seemingly anticipate our visits and our whims. At other times, it may produce a Kafkaesque feeling, as when institutions present life-shaping decisions—offers of employment, decisions about car and home loans, or decisions made by security firms or law enforcement—that no single human can explain.

These tensions—between reasoned explanations and opaque decision making, between individuals and large systems, between people with technical knowledge and authority and people without—are not new. What is new is that another intelligence, one that is not human and often inexplicable in terms of human reason, is the source. What is also new is the pervasiveness and scale of this new intelligence. Those who lack knowledge of AI or authority over it may be particularly tempted to reject it. Frustrated by its seeming usurpation of their autonomy or fearful of its additional effects, some may seek to minimize their use of AI and disconnect from social media or other AI-mediated network platforms, shunning its use (at least knowingly) in their daily lives.

Some segments of society may go further, insisting on remaining “physicalists” rather than “virtualists.” Like the Amish and the Mennonites, some individuals may reject AI entirely, planting themselves firmly in a world of faith and reason alone. But as AI becomes increasingly prevalent, disconnection will become an increasingly lonely journey. Indeed, even the possibility of disconnection may prove illusory: as society becomes ever more digitized, and AI ever more integrated into governments and products, its reach may prove all but inescapable.

Across the biological, chemical, and physical sciences, a hybrid partnership is emerging in which AI is enabling new discoveries that humans are, in response, working to understand and explain.

Coming of age in the presence of AI will alter our relationships, both with one another and with ourselves. Just as a divide exists today between “digital natives” and prior generations, so, too, will a divide emerge between “AI natives” and the people who precede them. In the future, children may grow up with AI assistants, more advanced than Alexas and Google Homes, that will be many things at once: babysitter, tutor, adviser, friend. Such an assistant will be able to teach children virtually any language or train children in any subject, calibrating its style to individual students’ performance and learning styles to bring out their best. AI may serve as a playmate when a child is bored and as a monitor when a child’s parent is away. As AI-provided and tailored education is introduced, the average human’s capabilities stand both to increase and to be challenged.

Over time, individuals may come to prefer their digital assistants over humans, since humans intuit their preferences less readily and are more "disagreeable" (if only because humans have personalities and desires not keyed to other individuals). As a result, our dependence on one another, on human relationships, may decrease. What, then, will become of the ineffable qualities and lessons of childhood? How will the omnipresent companionship of a machine, which does not feel or experience human emotion (but may mimic it), affect a child's perception of the world and his or her socialization? How will it shape imagination? How will it change the nature of play? How will it alter the process of making friends or fitting in? Arguably, the availability of digital information has already transformed the education and cultural experience of a generation. Now the world is embarking on another great experiment, in which children will grow up with machines that will, in many ways, act as human teachers have for generations—but without human sensibilities, insight, and emotion. Eventually, the experiment's participants will likely ask whether their experiences are being altered in ways they did not expect or accept.

information available, it is diminishing the space required for deep, concentrated thought. Today's near-constant stream of media increases the cost, and thus decreases the frequency, of contemplation. Algorithms promote what seizes attention in response to the human desire for stimulation—and what seizes attention is often the dramatic, the surprising, and the emotional. Whether an individual can find space in this environment for careful thought is one matter. Another is that the now-dominant forms of communication are not conducive to tempered reasoning.

Now, in every domain characterized by intensive intellectual labor, from finance to law, AI is being integrated into the process of learning. But humans cannot always verify that what AI presents is representative; we cannot always explain why applications such as TikTok and YouTube promote some videos over others. Human editors and anchors, on the other hand, can provide explanation (accurate or not) of their reasons for selecting what they present. As long as people desire such explanation, the age of AI will disappoint the majority of people who do not understand the technology’s processes and mechanisms.

AI’s effects on human knowledge are paradoxical. On the one hand, AI intermediaries can navigate and analyze bodies of data vaster than the unaided human mind could have previously contemplated. On the other, this power—the ability to engage with vast bodies of data—may also accentuate forms of manipulation and error. AI is capable of exploiting human passions more effectively than traditional propaganda. Having tailored itself to individual preferences and instincts, AI elicits responses its creator or user desires. Similarly, the deployment of AI intermediaries may also amplify inherent biases, even if these AI intermediaries are technically under human control. The dynamics of market competition prompt social media platforms and search engines to present information that users find most compelling. As a result, information that users are believed to want to see is prioritized, distorting a representative picture of reality. Much as technology accelerated the speed of information production and dissemination in the nineteenth and twentieth centuries, in this era, information is being altered by the mapping of AI onto dissemination processes. Some people will seek information filters that do not distort, or at least distort transparently. Some will balance filter against filter, independently weighing the results. Others may opt out entirely, preferring filtration by traditional human intermediaries. Yet when the majority of people in a society accept AI intermediation, either as a default or as the price of powering network platforms, those pursuing traditional forms of personal inquiry through research and reason may find themselves unable to keep pace with events. They will certainly find their ability to shape them progressively limited. 
If information and entertainment become immersive, personalized, and synthetic—such as AI-sorted “news” confirming people’s long-held beliefs or AI-generated movies “starring” long-deceased actors—will a society have a common understanding of its history and current affairs? Will it have a common culture? If an AI is instructed to scan a century’s worth of music or television and produce “a hit,” does it create or merely assemble? How will writers, actors, artists, and other creators, whose labors have traditionally been treated as a unique human engagement with reality and lived experience, see themselves and be seen by others?

Traditional reason and faith will persist in the age of AI, but their nature and scope are bound to be profoundly affected by the introduction of a new, powerful, machine-operated form of logic. Human identity may continue to rest on the pinnacle of animate intelligence, but human reason will cease to describe the full sweep of the intelligence that works to comprehend reality. To make sense of our place in this world, our…

Barring fundamental ethical or legal constraints, what company would forgo knowledge of AI functionality a rival has used to offer new products or services? If AI enables a bureaucrat, architect, or investor to predict outcomes or conclusions with ease, on what basis would he or she not use it? Given the pressures for deployment, limitations on AI uses that are, on their face, desirable will need to be formulated at a society-wide or international level.

Societies need to build the intellectual and psychological infrastructure to engage with AI and exercise its unique intelligence to benefit humans as much as possible. The technology will compel adaptation in many—indeed, most—aspects of political and social life.

In each discrete major new deployment of AI, it will be crucial to establish the balance. Societies and their leaders will have to choose when individuals should be notified that they are dealing with AI as well as what powers they have in those interactions. Ultimately, through these choices, a new human identity for the AI age will be made manifest.

Does one classify an AI dialogue between two public figures who never met as misinformation, entertainment, or political inquiry—or does the answer depend on the context or on the participants? Does an individual have the right not to be represented in a simulated reality without his or her permission? If permission is granted, is the synthetic expression any more genuine?

Reality explored by AI, or with the assistance of AI, may prove to be something other than what humans had imagined. It may have patterns we have never discerned or cannot conceptualize. Its underlying structure, penetrated by AI, may be inexpressible in human language alone. As one of our colleagues has observed of AlphaZero, “Examples like this show that there are ways of knowing that are not available to human consciousness.”

To chart the frontiers of contemporary knowledge, we may task AI to probe realms we cannot enter; it may return with patterns or predictions we do not fully grasp. The prognostications of the Gnostic philosophers, of an inner reality beyond ordinary human experience, may prove newly significant. We may find ourselves one step closer to the concept of pure knowledge, less limited by the structure of our minds and the patterns of conventional human thought. Not only will we have to redefine our roles as something other than the sole knower of reality, we will also have to redefine the very reality we thought we were exploring. And even if reality does not mystify us, the emergence of AI may still alter our engagement with it and with one another.

The age of AI has yet to define its organizing principles, its moral concepts, or its sense of aspirations and limitations.

The AI revolution will occur more quickly than most humans expect. Unless we develop new concepts to explain, interpret, and organize its consequent transformations, we will be unprepared to navigate it or its implications. Morally, philosophically, psychologically, practically—in every way—we find ourselves on the precipice of a new epoch. We must draw on our deepest resources—reason, faith, tradition, and technology—to adapt our relationship with reality so it remains human.

Today, a new epoch beckons. In it, once again, technology will transform knowledge, discovery, communication, and individual thought. Artificial intelligence is not human. It does not hope, pray, or feel. Nor does it have awareness or reflective capabilities. It is a human creation, reflecting human-designed processes on human-created machines. Yet in some instances, at awesome scale and speed, it produces results approximating those that have, until now, only been reached through human reason. Sometimes, its results astound. As a result, it may reveal aspects of reality more dramatic than any we have ever contemplated. Individuals and societies that enlist AI as a partner to amplify skills or pursue ideas may be capable of feats—scientific, medical, military, political, and social—that eclipse those of preceding periods. Yet once machines approximating human intelligence are regarded as key to producing better and faster results, reason alone may come to seem archaic. After defining an epoch, the exercise of individual human reason may find its significance altered.

The AI revolution stands to do something similar: access new information, produce major scientific and economic advances, and in so doing, transform the world. But its impact on discourse will be difficult to determine. By helping humanity navigate the sheer totality of digital information, AI will open unprecedented vistas of knowledge and understanding. Alternatively, its discovery of patterns in masses of data may produce a set of maxims that become accepted as orthodoxy across continental and global network platforms. This, in turn, may diminish humans’ capacity for skeptical inquiry that has defined the current epoch. Further, it may channel certain societies and network-platform communities into separate and contradictory branches of reality.

Until now, humans alone developed their understanding of reality, a capacity that defined our place in the world and relationship to it. From this, we elaborated our philosophies, designed our governments and military strategies, and developed our moral precepts. Now AI has revealed that reality may be known in different ways, perhaps in more complex ways, than what has been understood by humans alone. At times, its achievements may be as striking and disorienting as those of the most influential human thinkers in their heydays—producing bolts of insight and challenges to established concepts, all of which demand a reckoning. Even more frequently, AI will be invisible, embedded in the mundane, subtly shaping our experiences in ways we find intuitively suitable.

We must recognize that AI’s achievements, within its defined parameters, sometimes rank beside or even surpass those that human faculties enable. We may comfort ourselves by repeating that AI is artificial, that it has not matched, and cannot match, our conscious experience of reality. But when we encounter some of AI’s achievements—logical feats, technical breakthroughs, strategic insights, and sophisticated management of large, complex systems—it is evident that we are in the presence of another experience of reality by another sophisticated entity.

Accessed by AI, new horizons are opening before us. Previously, the limits of our minds constrained our ability to aggregate and analyze data, filter and process news and conversations, and interact socially in the digital domain. AI permits us to navigate these realms more effectively. It finds information and identifies trends that traditional algorithms could not—or at least not with equal grace and efficiency. In so doing, it not only expands physical reality but also permits expansion and organization of the burgeoning digital world.

In the age of AI, then, human reason will find itself both augmented and diminished.

Humans will face choices about constraining AI, partnering with it, or deferring to it. These choices will define AI’s application to specific tasks or domains, reflecting philosophical as well as practical dimensions. For example, in airline and automotive emergencies, should an AI copilot defer to a human? Or the other way around?

In this era, the ideal type of truth has been the singular, verifiable proposition provable through testing. But the AI era will elevate a concept of knowledge that is the result of partnership between humans and machines. Together, we (humans) will create and run (computer) algorithms that will examine more data more quickly, more systematically, and with a different logic than any human mind can. Sometimes, the result will be the revelation of properties of the world that were beyond our conception—until we cooperated with machines.

AI already transcends human perception—in a sense, through chronological compression or “time travel”: enabled by algorithms and computing power, it analyzes and learns through processes that would take human minds decades or even centuries to complete. In other respects, time and computing power alone do not describe what AI does.

Are humans and AI approaching the same reality from different standpoints, with complementary strengths? Or do we perceive two different, partially overlapping realities: one that humans can elaborate through reason and another that AI can elaborate through algorithms? If this is the case, then AI perceives things that we do not and cannot—not merely because we do not have the time to reason our way to them, but also because they exist in a realm that our minds cannot conceptualize.

An AI ethic is essential. Each individual decision—to constrain, partner, or defer—may or may not have dramatic consequences, but in the aggregate, they will be magnified. They cannot be made in isolation.

AI’s dynamic and emergent qualities generate ambiguity in at least two respects. First, AI may operate as we expect but generate results that we do not foresee. With those results, it may carry humanity to places its creators did not anticipate.

Second, in some applications, AI may be unpredictable, with its actions coming as complete surprises. Consider AlphaZero, which, in response to the instruction “win at chess,” developed a style of play that, in the millennia-long history of the game, humans had never conceived.

While humans may carefully specify AI’s objectives, as we give it broader latitude, the paths AI takes to accomplish its objectives may come to surprise or even alarm us.

The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity.

With the use of AIs to navigate masses of information comes the challenge of distortion—of AIs promoting the world humans instinctually prefer. In this domain, our cognitive biases echo, and AIs can readily magnify them. With those reverberations, and with a multiplicity of choice coupled with the power to select and screen, misinformation proliferates. Social media companies do not run news feeds to promote extreme and violent political polarization. But it is self-evident that these services have not resulted in the maximization of enlightened discourse.

In other words, we may have no choice but to foster AI. But we also have a duty to shape it in a way that is compatible with a human future.

But the potential military uses of AI are broader than those of nuclear arms, and the divisions between offense and defense are, at least currently, unclear.

AI and other emerging technologies (such as quantum computing) seem to be moving humans closer to knowing reality beyond the confines of our own perception. Ultimately, however, we may find that even these technologies have limits. Our problem is that we have not yet grasped their philosophical implications. We are being advanced by them, but automatically rather than consciously. The last time human consciousness was changed significantly—the Enlightenment—the transformation occurred because new technology engendered new philosophical insights, which, in turn, were spread by the technology (in the form of the printing press). In our period, new technology has been developed, but remains in need of a guiding philosophy.

Such a group or commission should have at least two functions:

1. Nationally, it should ensure that the country remains intellectually and strategically competitive in AI.
2. Both nationally and globally, it should study, and raise awareness of, the cultural implications AI produces.

What other questions should we seek to answer when, for the situation in which we find ourselves, we have no experience or intuition?

Will our future be one of humans assisted by AIs, which interpret and thus understand the world differently? Is our destiny one in which humans do not completely understand machines, but make peace with them and, in so doing, change the world? Immanuel Kant opened the preface to his Critique of Pure Reason with an observation: “Human reason has the peculiar fate in one species of its cognitions that it is burdened with questions which it cannot dismiss, since they are given to it as problems by the nature of reason itself, but which it also cannot answer, since they transcend every capacity of human reason.”

The advent of AI, with its capacity to learn and process information in ways that human reason alone cannot, may yield progress on questions that have proven beyond our capacity to answer. But success will produce new questions, some of which we have attempted to articulate in this book. Human intelligence and artificial intelligence are meeting, being applied to pursuits on national, continental, and even global scales. Understanding this transition, and developing a guiding ethic for it, will require commitment and insight from many elements of society: scientists and strategists, statesmen and philosophers, clerics and CEOs. This commitment must be made within nations and among them. Now is the time to define both our partnership with artificial intelligence and the reality that will result.

Probing a model’s weaknesses needs to be done en masse, at scale—and not only by the developers of the model. And if one model comes to dominate—or if a few models come to dominate, as we expect they will—we face the danger of a technological monoculture that risks proliferating any weaknesses, biases, or limitations inherent in a small number of models across a large range of domains. Even without such risks, the sheer reduction in the diversity and variability of outcomes that results from having a small number of models makes the nature of those models crucially important. Imagine replacing the independent decisions of millions or even billions of people with those of a handful of people—or, in this case, a handful of AI models.

But in a likely second phase, computers may add new types of knowledge, reaching levels that humans themselves cannot accomplish. Reliance on knowledge that can only be gained through machines moves mankind into a new reality.

AI provides answers without human logical notions of how one idea follows another. This is an altogether different mode of thinking and knowledge. It may feel discomfiting to humans, who rely on reason for a sense of meaning and agency.

This new intelligence and its different form of logic will change human perception of reality, as AlphaZero’s new intelligence is changing human perception of chess. Such a phenomenon requires its own philosophy, which will have to be developed. If we fail to do so, we may find ourselves dismissing entirely the old wheel of knowledge. Enthralled by machines that appear to be our friends, fearful of blocking their superhuman speed, and incapable of explaining their new conclusions, we may develop a reverence for computers that approaches mysticism. The roles of history, morality, justice, and human judgment in such a world are unclear.

AI will lead human beings to realms that we cannot reach solely by human reason, now or perhaps ever. Its technical achievements in health and economics promise to make the age of AI an age of abundance. While we celebrate that potential, we recognize that a new reality is emerging. As the stakes rise, our response must meet them.