Want to go deeper? This section offers curated readings, frameworks, and toolkits from adjacent domains—strategic foresight, speculative design, science fiction, and systems thinking. A good place to explore after you’ve built your first scenario or toolkit.
Research and advice on how to have the greatest positive impact with your career, including detailed guides on AI safety, policy, and other global priorities.
Curated job board featuring opportunities addressing major global problems, including AI safety research, governance, biosecurity, and more.
Hub for learning about AI existential safety, with guides, advisor calls, projects, and pathways for contributing to the field.
Job board focused specifically on roles in AI safety, including technical, governance, and field-building positions.
Practical guide explaining pathways into AI alignment research, AI governance, and AI safety field-building.
Career profiles and advice focused on impactful roles in AI safety and governance.
Free personalized career advising for experienced professionals looking to transition into AI risk reduction roles.
Job board for responsible tech, trust & safety, AI governance, and public-interest technology roles.
Resources, fellowships, and institutional pathways for careers in AI policy and governance.
Supports work that reduces existential risk from AI and biotechnology and makes great futures more likely.
Funds social entrepreneurs tackling urgent global challenges.
Invests in responsible tech, digital governance, and civic empowerment.
Strategic philanthropy backing exceptional people solving hard problems.
Supports ethical AI, data equity, and human-centered technology.
Funds AI safety, biosecurity, global health, and long-term impact.
Advocates for trustworthy AI, internet health, and shared abundance and growth.
Supports equitable, rights-based approaches to technology development.
Funds civic tech, digital rights, and inclusive governance.
Supports early-stage, high-impact projects focused on existential risk reduction.
Launches global competitions to incentivize radical breakthroughs.
Nonprofit pioneer in foresight, future-casting, and trend analysis.
Global development foresight hub within the United Nations.
Promotes longtermist research in philosophy, economics, and public policy.
Student-led research on existential risk and long-term futures.
Maps emerging signals of change in sustainability and global systems.
Centre for the Study of Existential Risk, an interdisciplinary research centre studying global catastrophic risks.
Encourages long-term thinking through cultural and technological projects.
Conducts foundational research to inform effective long-term decision-making.
Research and field-building for global AI governance.
Policy-focused research on AI and tech for national security.
Works to prevent nuclear and biological catastrophes.
Communicates extreme risks like unaligned AI to broader audiences.
Innovates civic tech and governance through public sector experimentation.
Policy org advancing science and innovation for long-term good.
Justice-driven research on the societal impacts of AI systems.
Promotes AI safety through research, awareness, and field-building.
Supports innovators building transformative futures.
Youth-led movement advocating for ethical, inclusive AI policy.
Global community of social entrepreneurs driving systems-level change.
Helps people align their work with meaningful global impact.
Explores and advocates for a philosophy of progress.
Movement redesigning democracy, identity, and markets for collective flourishing.
Global community advancing speculative and participatory design.
Explores tech infrastructure, power, and digital design ethics.
Inspires long-term thinking through projects like the 10,000-year clock.
Storytelling platform sharing real projects building a better world.
Works to realign technology with human and societal well-being.
Builds infrastructure and theory for governing digital spaces and communities.