What Technology Wants from Humans

The books in this cluster pose a question that only becomes visible when they are read together: not “what do humans want from technology?” (the usual question) but “what does technology require from humans in order to thrive, and what does it do to humans in return?” This is Kevin Kelly’s frame — his 2010 book is literally titled What Technology Wants — but the question runs through every book in the Technology/AI cluster, even those that approach it from engineering, history, or statecraft.

The Adaptive Demand

Every major technological transition makes demands on the humans who encounter it. These demands are not optional; they are the price of participation.

The electrification transition (Carr, The Big Switch) required humans to restructure their relationship to physical labor, to accept centralized provision of a resource they had previously controlled locally, and to reorganize their working lives around new temporal and spatial patterns. Factory workers who could not adapt were displaced.

The software transition (Lawson, Ask Your Developer; Kim et al., The Phoenix Project) requires humans to develop new cognitive dispositions: tolerance for ambiguity, comfort with rapid iteration, the ability to work in small teams with high autonomy. Organizations that cannot adapt their cultures to these demands fail to attract the people who can build the software they need.

The AI transition (Kissinger/Schmidt/Huttenlocher, The Age of AI) requires something more fundamental: a renegotiation of what it means to know something. If an AI can produce a correct answer without being able to explain how it arrived at that answer, and if that answer is better than any human could produce, what does human knowledge mean?

“The digital world has little patience for wisdom; its values are shaped by approbation, not introspection.”

This is The Age of AI’s most challenging claim: the information environment created by digital technology and now shaped by AI may be structurally hostile to the kind of sustained, solitary reflection that has historically produced wisdom. If so, the AI transition is not merely a productivity challenge but a civilizational one.

The Endless Newbie Problem

Kelly in The Inevitable names the fundamental demand of technological becoming:

“Endless Newbie is the new default for everyone, no matter your age or experience.”

This is not a complaint — it is a design constraint. In a technological environment that changes faster than expertise can accumulate, the competency that matters is not the current state of one’s knowledge but the ability to learn continuously. The “expert” whose expertise is locked in a specific technology stack or methodology is perpetually one technological transition away from obsolescence.

The adaptive demand is therefore primarily dispositional rather than technical: cultivate the capacity to become a beginner again, reliably and without existential crisis.

Diamandis and Kotler add the group dimension: the most powerful human response to technological acceleration is collective intelligence — groups achieving flow states that exceed individual cognitive capability:

“Group flow is a team performing at its very best… It’s also considered the most pleasurable state on Earth.”

The human adaptation that technology rewards most highly is not individual genius but collaborative intelligence.

What Technology Does to Community

Bilton in I Live in the Future and Kelly in The Inevitable both observe that digital technology transforms community — not destroying it but reorganizing its architecture.

Mass media created mass communities: shared consumption of the same content, at the same time, forming an “imagined community” (Benedict Anderson’s concept, which Bilton invokes) organized around geography and publication schedule. Digital media creates networked communities: highly differentiated groups organized around shared interest and identity, operating across geography and time zones.

The human adaptation required: the capacity to participate in multiple, fluid, overlapping communities simultaneously, and to evaluate the trustworthiness of strangers based on identity signals rather than physical familiarity.

“Those somewhat unknown online friends may be as influential — or more so — as a running buddy or a next-door neighbor.” (Bilton)

Kelly notes the paradox of the sharing impulse:

“If today’s social media has taught us anything about ourselves as a species, it is that the human impulse to share overwhelms the human impulse for privacy.”

Technology does not create this impulse — it reveals and amplifies it. The humans who built social networks did not anticipate that their platforms would become primary vehicles for community, identity performance, and political organization. The technology reflected back something about human nature that was previously constrained by the physical costs of broadcasting.

What AI Specifically Requires: A New Epistemology

The AI transition makes a uniquely demanding request of humans. Previous technologies extended human capability while leaving the structure of human knowing intact. AI potentially requires humans to revise what counts as knowledge.

Kissinger, Schmidt, and Huttenlocher frame this as an Enlightenment-level disruption:

“For the following two hundred years, Kant’s essential distinction between the thing-in-itself and the unavoidably filtered world we experience hardly seemed to matter… But AI is beginning to provide an alternative means of accessing — and thus understanding — reality.”

If AI can access aspects of reality that human cognition cannot perceive, then human knowledge is no longer the only valid form of knowledge — and human cognitive structures are no longer the ultimate measure of what is real. This is a philosophical revolution that has barely been absorbed at the cultural level, even as the technology that produces it becomes ubiquitous.

The practical demand: humans must develop the capacity to work with AI outputs whose validity they cannot independently verify, building a new kind of trust relationship with non-human intelligence.

“A novel human-machine partnership is emerging: First, humans define a problem or a goal for a machine. Then a machine, operating in a realm just beyond human reach, determines the optimal process to pursue. Once a machine has brought a process into the human realm, we can try to study it, understand it, and, ideally, incorporate it into existing practice.”

The Question as Human Advantage

Kelly’s most original contribution to the human-technology theme is his identification of questioning as the distinctively human capability that technology most needs and least threatens:

“A good question is not concerned with a correct answer. A good question cannot be answered immediately. A good question challenges existing answers. A good question is one you badly want answered once you hear it, but had no inkling you cared before it was asked. A good question creates new territory of thinking.”

“A good question may be the last job a machine will learn to do. A good question is what humans are for.”

This is not a consolation prize for the humans displaced by AI — it is a claim about what humans contribute uniquely to the human-AI partnership. Machines can execute queries, answer questions, and optimize for specified objectives. They cannot formulate the question that opens a new domain of inquiry; they cannot feel the lack of a question that needs to be asked; they cannot experience the surprise of recognizing that the right question reframes a previously unsolvable problem.

The human adaptation: invest in the capacity for deep questioning. The Socratic tradition — which values the question over the answer — turns out to be the appropriate preparation for the AI era.

The Character Under Pressure

Ridley in The Evolution of Everything makes a point that connects personality formation to technological adaptation:

“Children get their personalities mostly from within themselves… personality unfolds from within, responding to the environment — so in a very literal sense of the word, it evolves.”

The technological environment is now one of the primary environments within which personality unfolds. Children growing up with smartphones, AI tutors, and social media as default contexts are not merely learning to use new tools — they are developing new cognitive, social, and emotional architectures in response to a different environmental pressure than any previous generation faced.

The historical parallel from The Innovators: the digital revolution was built by people whose personalities — curious, collaborative, technically skilled, aesthetically engaged — made them well-suited to the demands of computing. The AI revolution may select for different personality characteristics: comfort with ambiguity, capacity for interdependence with non-human intelligence, and above all the question-asking disposition Kelly celebrates.

Synthesis: What Technology Asks For

Across the books, a consistent list of human qualities emerges as what the technological transition rewards and requires:

  1. Adaptive learning capacity (Kelly’s endless newbie; Lawson’s developer curiosity)
  2. Collaborative intelligence (Diamandis/Kotler on group flow; Isaacson on collaborative innovation)
  3. Comfort with opacity (Kissinger/Schmidt/Huttenlocher on AI outputs whose reasoning cannot be inspected)
  4. Questioning over answering (Kelly’s most distinctive claim)
  5. Identity coherence under information overload (Bilton’s anchoring communities)
  6. Wisdom alongside information (Kissinger’s most urgent concern: wisdom is what AI cannot provide)

Technology does not create better humans automatically. It creates environments that reward certain human qualities and penalize others. The humans who thrive in the software century — as Kelly, Diamandis, Lawson, and Schmidt all argue from different directions — are those who develop the dispositional qualities that technology most rewards, rather than defending the qualities that technology most threatens.

“This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots.” (Kelly)