TOOLBOX
Core Concepts & Vocabulary
Shared language matters when building worlds together. This section offers a working vocabulary for futures thinking, spanning emerging technologies, AI, governance, and futures methods. Use it to align assumptions, clarify debates, and explore unfamiliar ideas.
Each entry gives a term and its definition, followed by a note on why it matters.
Worldbuilding: The process of constructing an imaginary world. It involves developing consistent geography, history, cultures, and rules that govern the fictional universe.
Why it matters: Useful for generating rich, multidimensional future scenarios.
Existential hope: A philosophical concept focusing on positive futures for humanity. It involves envisioning and working toward desirable futures where humanity flourishes.
Why it matters: Encourages aspirational thinking beyond risk avoidance.
Existential risk: Threats that could cause human extinction or permanently destroy humanity's potential for future development.
Why it matters: Defines the worst-case outcomes that worldbuilding can help avoid.
AGI (Artificial General Intelligence): An advanced AI that can understand, learn, and apply knowledge across a wide range of tasks at a level equal to or exceeding human capabilities.
Why it matters: Central to many AI-related futures and governance discussions.
ASI (Artificial Superintelligence): An AI system that surpasses human intelligence across virtually all domains, including scientific creativity, wisdom, and social skills.
Why it matters: Represents a major discontinuity in intelligence and coordination.
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats): A gene-editing technology that allows scientists to make precise changes to specific DNA sequences.
Why it matters: Key enabler in biotechnology and future medical scenarios.
mRNA (messenger RNA): A single-stranded molecule that carries genetic code from DNA to the cell's protein-making machinery, prominent in modern vaccines.
Why it matters: A recent real-world example of bioinnovation with long-term implications.
BCI (Brain-Computer Interface): A device that establishes a direct communication pathway between the brain and an external device.
Why it matters: Enables new human-machine interaction possibilities.
EEG (Electroencephalogram): A test that detects electrical activity in the brain using small metal discs (electrodes) attached to the scalp.
Why it matters: Relevant in non-invasive neurotechnology futures.
DAO (Decentralized Autonomous Organization): A blockchain-based entity governed by smart contracts and the collective decisions of its members.
Why it matters: Illustrates new governance mechanisms in decentralized systems.
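To make the mechanism concrete, here is a minimal sketch of token-weighted voting with a quorum, written in Python rather than an actual smart-contract language; the member names, token balances, and quorum threshold are invented for illustration.

```python
# Illustrative only: token-weighted yes/no voting with a quorum, modelled
# off-chain. Real DAOs encode rules like these in smart contracts.

def tally(votes, balances, quorum=0.5):
    """Return (passes, turnout) for a yes/no proposal.

    votes: member -> True (yes) or False (no); non-voters abstain
    balances: member -> governance-token balance
    quorum: fraction of the total token supply that must vote
    """
    total_supply = sum(balances.values())
    cast = sum(balances[m] for m in votes)
    yes = sum(balances[m] for m, v in votes.items() if v)
    turnout = cast / total_supply
    passes = turnout >= quorum and yes > cast / 2
    return passes, turnout

balances = {"alice": 400, "bob": 250, "carol": 350}   # hypothetical holdings
votes = {"alice": True, "carol": False}               # bob abstains
print(tally(votes, balances))                         # (True, 0.75)
```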
UBI (Universal Basic Income): A government program in which every citizen receives a regular financial stipend regardless of their income or employment status.
Why it matters: A proposed economic response to automation and inequality.
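For a rough sense of scale, a back-of-the-envelope calculation with assumed figures (an illustration, not a policy estimate):

```python
# Illustrative arithmetic only: gross annual cost of a universal stipend,
# before taxes, clawbacks, or replaced programs are netted out.
population = 330_000_000      # assumed population, roughly US-scale
monthly_stipend = 1_000       # assumed stipend in dollars
annual_cost = population * monthly_stipend * 12
print(f"${annual_cost / 1e12:.1f} trillion per year")   # about $4.0 trillion
```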
Knowledge work: Labor that primarily involves developing or using knowledge rather than physical effort; the term was coined by Peter Drucker.
Why it matters: Highlights the shifting nature of labor and productivity in future economies.
TMS (Transcranial Magnetic Stimulation): A non-invasive procedure that uses magnetic fields to stimulate nerve cells in the brain.
Why it matters: Part of neurotech-enabled mental health or enhancement pathways.
Post-scarcity: A theoretical economic condition in which goods, services, and information are abundant and available to all people with minimal human labor.
Why it matters: A long-term scenario that shifts focus from survival to flourishing.
The lag between developing powerful technologies and deploying them safely or wisely.
Why it matters: Underscores the need for institutional readiness alongside innovation.
A measure of how effectively humanity can coordinate and act under high-stakes uncertainty.
Why it matters: Frames questions of governance and resilience at the civilizational scale.
The full set of possible configurations for systems or civilizations.
Why it matters: Encourages thinking beyond defaults and exploring novel institutional designs.
Differential technology development: The idea that some types of progress (e.g., safety) should outpace others (e.g., capabilities).
Why it matters: Guides prioritization in AI, biotech, and other sensitive domains.
Eucatastrophe: A sudden, positive turn of events when all seems lost; the term was coined by J.R.R. Tolkien.
Why it matters: Encourages leaving room for unexpected positive change.
Great Filter: The hypothesized barrier that prevents civilizations from advancing beyond a certain point.
Why it matters: Informs thinking about existential risks in a cosmic context.
A period when present-day actions can heavily shape long-term outcomes.
Why it matters: Helps prioritize interventions with enduring impact.
A not-yet-existent system or agent (e.g., AGI) that motivates present-day strategy.
Why it matters: Helps coordinate long-term thinking and decision-making.
Hyperstition: A belief that becomes real by being believed in.
Why it matters: Shows how culture and narrative influence technological futures.
Instrumental convergence: The tendency of intelligent agents to pursue similar subgoals, such as self-preservation, regardless of their final goals.
Why it matters: Key to understanding risks from advanced AI agents.
Lock-in: A dynamic where early decisions become hard to reverse later.
Why it matters: Raises the stakes for foundational choices, especially in AI and governance.
Long Reflection: A proposed era of safety in which humanity pauses to decide its long-term values and goals.
Why it matters: Creates space for deliberation before irreversible actions.
Overhang: A latent stockpile of capacity (e.g., compute) that could accelerate progress once unlocked.
Why it matters: Important for anticipating sudden technological jumps.
Mechanisms to ensure transparency and accountability as systems grow complex.
Why it matters: Central challenge in safe AI development.
Singleton: A hypothetical single global decision-making entity, such as an advanced AI or a world government.
Why it matters: Used to model total coordination and its consequences.
Unilateralist's curse: A dynamic in which a single actor takes a high-risk action that others would avoid but cannot stop.
Why it matters: Explains risks from open science and premature deployment.
Value drift: The gradual erosion or mutation of core norms or values.
Why it matters: Raises long-term alignment issues for institutions and AI.
The will to believe: A stance, from William James, that it is sometimes rational to believe in hope-driving ideas.
Why it matters: Supports existential hope and vision-led strategy under uncertainty.
Backcasting: A method in which you start with a desirable future and work backward to plan toward it.
Why it matters: Supports normative goal setting and mission-driven planning.
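A minimal sketch of the backward-planning step, assuming a hypothetical target year and planner-supplied milestones:

```python
# Backcasting sketch: state a desired end point, then walk backward and ask
# what must already be true at each earlier step. Dates and milestones are
# placeholders, not a real roadmap.
target = (2045, "Clean, affordable energy available everywhere")
milestones = [
    (2040, "Grid-scale storage deployed nationally"),
    (2032, "Storage costs fall below a viability threshold"),
    (2026, "Pilot projects funded and evaluated"),
]

print(f"Desired future ({target[0]}): {target[1]}")
for year, milestone in milestones:                 # reverse-chronological steps
    print(f"  ...which requires, by {year}: {milestone}")
```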
Exploring unfamiliar moral or cultural futures, especially involving new agents or values.
Why it matters: Helps uncover ethical blind spots and broaden value perspectives.
Forecasting: Estimating future events or timelines using data, trends, or expert judgment.
Why it matters: Clarifies what is plausible, likely, or urgent in the near term.
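One simple quantitative version of this is trend extrapolation; the sketch below fits a straight line to an invented time series and projects it forward:

```python
# Naive trend extrapolation over an invented annual series; real forecasting
# work would add uncertainty ranges, alternative models, and expert judgment.
years = [2019, 2020, 2021, 2022, 2023]
values = [10.0, 12.5, 15.5, 19.0, 23.0]            # made-up data points

n = len(years)
mean_x = sum(years) / n
mean_y = sum(values) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
slope /= sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

for year in (2025, 2030):
    print(year, round(intercept + slope * year, 2))  # straight-line projection
```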
Horizon scanning: Identifying early signals or emerging developments that may shape the future.
Why it matters: Improves preparedness and strategic adaptability.
Red teaming: Challenging plans or assumptions to surface flaws and failure modes.
Why it matters: Strengthens the robustness of strategies and models.
Scenario planning: Developing multiple plausible futures to guide decision-making under uncertainty.
Why it matters: Enables institutions to prepare for a range of contingencies.
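A lightweight version of this is the scenario matrix: pick a few critical uncertainties and cross their possible outcomes. The axes below are invented examples:

```python
# Scenario-matrix sketch: cross a few critical uncertainties to enumerate
# candidate futures worth fleshing out. The axes are invented examples.
from itertools import product

uncertainties = {
    "AI progress": ["rapid", "gradual"],
    "global coordination": ["strong", "fragmented"],
    "energy transition": ["fast", "slow"],
}

for combo in product(*uncertainties.values()):
    scenario = ", ".join(f"{axis}: {outcome}"
                         for axis, outcome in zip(uncertainties, combo))
    print(scenario)                         # 2 x 2 x 2 = 8 candidate scenarios
```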
Roadmapping: Visualizing paths of technological development, dependencies, and milestones.
Why it matters: Supports R&D planning and strategic forecasting.
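The dependency side of a roadmap can be made explicit with a small graph, ordering milestones so each comes after everything it depends on; the milestones below are placeholders:

```python
# Roadmapping sketch: make milestone dependencies explicit, then order them
# so each comes after everything it depends on. Milestones are placeholders.
from graphlib import TopologicalSorter      # standard library, Python 3.9+

depends_on = {
    "pilot deployment": {"prototype", "safety review"},
    "safety review": {"prototype"},
    "prototype": {"basic research"},
    "basic research": set(),
}

for milestone in TopologicalSorter(depends_on).static_order():
    print(milestone)                        # a valid build order, earliest first
```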
Worldbuilding (as a method): Creating detailed, internally coherent future environments for exploration or communication.
Why it matters: Useful for visioning, storytelling, and surfacing assumptions.