My name is Wojciech Zaremba. I am a co-founder and a scientist at OpenAI.

These days, I spend most of my time doing research, focusing on safety and alignment. It’s an exciting and prolific time, as we still lack a complete understanding of what AGI and beyond will look like. We find that the o1 paradigm opens many new opportunities in safety and alignment research (such as chain-of-thought monitoring).

Ahead of humanity lies a potential world of unimaginable abundance. As the cost of intelligence approaches zero, we may solve many of our current challenges, including disease, poverty, and climate change. It’s astonishing to remember that not long ago, humanity viewed disease, famine, and war as inescapable aspects of existence. It’s conceivable that problems like disease and famine could become trivial, leaving only politics as an unsolved challenge.

Beyond solving problems, I am motivated by the possibilities of AGI and superintelligence. We (big we — humanity) may be able to spread ourselves throughout the universe. Doesn’t the universe look beautiful, as if it were screaming, "Please inhabit me"? The path ahead is not without landmines, and if we are not wise, we may squander this cosmic potential.

The development of AGI and superintelligence won't happen overnight, but it's on the horizon within a single-digit number of years—which is pretty wild. It's critical to bring society into the process of AGI development. By society, I mean not only users and developers but also policymakers, auditors, think tanks, and safety institutions. I’m optimistic that, together, we can establish the frameworks necessary to ensure this groundbreaking technology benefits everyone. I support auditing AI labs and promoting coordinated efforts that include government involvement. I hope for regulations that are smart and cohesive, rather than reactionary. Ideally, AI law would reward good behavior rather than rely on punishment through criminal law. Criminal law would create an adversarial relationship between companies and policymakers, making coordination harder. I want us (big us) to dream the future together, as that's the way to bring it into existence (hyperstition).

I recognize several significant risks associated with AI: misuse (including the erosion of shared reality through hyper-personalized fake news, cyberattacks, CBRN weapons, and the creation of unbreakable surveillance states), the negative consequences of AI labs competing in an "AI race," accidents, rogue AI (where AI could become adversarial), and unknown unknowns. These risks are serious and demand substantial effort to mitigate. It's crucial to acknowledge that catastrophic—and even existential—risks fall within the realm of possibility.

There are three major stages of AI development: product excellence, national security, and superintelligence. In the first stage, companies will try to build the most delightful products. In the second stage, nations will recognize the strategic importance of AI; ensuring access to AI will be as important as ensuring access to natural gas or other sources of energy. In the last stage, the relevance of AI extends beyond individual nations to the global level. Ensuring that this stage goes well requires a lot of political wisdom.

I predict that 2026 will be the year when AI dominates public discourse. AI will become a critical topic discussed by heads of state and average Joes alike. The debate is already intensifying. The exciting aspect is that brilliant minds from diverse backgrounds will join forces in shaping the future of AI.

I'm proud of several exciting projects, ranging from the most recent to the earliest:

Life Philosophy

  1. Building something amazing often involves pain, much like the pain of childbirth. Challenge is part of the adventure of creation.