
TOOLBOX

Taking action

Want to go deeper? This section offers curated readings, frameworks, and toolkits from adjacent domains—strategic foresight, speculative design, science fiction, and systems thinking. A good place to explore after you’ve built your first scenario or toolkit.


💼 Career planning

80,000 Hours

Research and career advice on how to have the greatest positive impact with your career, including detailed guides on AI safety, policy, and other global priorities.

80,000 Hours Job Board

Curated job board featuring opportunities addressing major global problems, including AI safety research, governance, biosecurity, and more.

AISafety.com

Hub for learning about AI existential safety, with guides, advisor calls, projects, and pathways for contributing to the field.

AI Safety Jobs (AISafety.com)

Job board focused specifically on roles in AI safety, including technical, governance, and field-building positions.

AISafety.info – Career Guide

Practical guide explaining pathways into AI alignment research, AI governance, and AI safety field-building.

Probably Good – AI Safety & Governance Careers

Career profiles and advice focused on impactful roles in AI safety and governance.

Successif

Free personalized career advising for experienced professionals looking to transition into AI risk reduction roles.

All Tech Is Human – Responsible Tech Job Board

Job board for responsible tech, trust & safety, AI governance, and public-interest technology roles.

Emerging Technology Policy Careers

Resources, fellowships, and institutional pathways for careers in AI policy and governance.

💸 Grantmakers

Future of Life Institute

Supports work that reduces existential risk from AI and biotechnology and makes great futures more likely.

Skoll Foundation

Funds social entrepreneurs tackling urgent global challenges.

Omidyar Network

Invests in responsible tech, digital governance, and civic empowerment.

Schmidt Futures

Strategic philanthropy backing exceptional people solving hard problems.

Patrick J. McGovern Foundation

Supports ethical AI, data equity, and human-centered technology.

Coefficient Giving

Funds AI safety, biosecurity, global health, and long-term impact.

Mozilla Foundation

Advocates for trustworthy AI and internet health.

Ford Foundation (Tech & Society)

Supports equitable, rights-based approaches to technology development.

Open Society Foundations – Tech Program

Funds civic tech, digital rights, and inclusive governance.

Survival and Flourishing Fund

Supports early-stage, high-impact projects focused on existential risk reduction.

XPRIZE Foundation

Launches global competitions to incentivize radical breakthroughs.

🧠 Futures thinking & strategic imagination

Institute for the Future (IFTF)

Nonprofit pioneer in foresight, future-casting, and trend analysis.

UNDP Futures Lab

Global development foresight hub within the United Nations.

Forethought Foundation

Promotes longtermist research in philosophy, economics, and public policy.

SERI – Stanford Existential Risks Initiative

Student-led research on existential risk and long-term futures.

Futures Centre

Maps emerging signals of change in sustainability and global systems.

CSER – Centre for the Study of Existential Risk

Interdisciplinary research center studying global catastrophic risks.

The Long Now Foundation

Encourages long-term thinking through cultural and technological projects.

Global Priorities Institute

Conducts foundational research to inform effective long-term decision-making.

🏛️ Policy & governance

Centre for the Governance of AI (GovAI)

Research and field-building for global AI governance.

CSET – Center for Security and Emerging Technology

Policy-focused research on AI and tech for national security.

Nuclear Threat Initiative

Works to prevent nuclear and biological catastrophes.

Existential Risk Observatory

Communicates extreme risks like unaligned AI to broader audiences.

GovLab

Innovates civic tech and governance through public sector experimentation.

Institute for Progress

Policy org advancing science and innovation for long-term good.

AI Now Institute

Justice-driven research on the societal impacts of AI systems.

Center for AI Safety

Promotes AI safety through research, awareness, and field-building.

🌱 Communities, fellowships & ecosystems

Foresight Fellowships

Supports innovators building transformative futures.

Encode Justice

Youth-led movement advocating for ethical, inclusive AI policy.

Ashoka

Global community of social entrepreneurs driving systems-level change.

The School for Moral Ambition

Helps people align their work with meaningful global impact.

Roots of Progress Institute

Explores and advocates for a philosophy of progress.

RadicalxChange

Movement redesigning democracy, identity, and markets for collective flourishing.

Design Futures Initiative

Global community advancing speculative and participatory design.

New Design Congress

Explores tech infrastructure, power, and digital design ethics.

The Long Now Foundation

Inspires long-term thinking through projects like the 10,000-year clock.

Atlas of the Future

Storytelling platform sharing real projects building a better world.

Center for Humane Technology

Works to realign technology with human and societal well-being.

Metagov Project

Builds infrastructure and theory for governing digital spaces and communities.
