My name is Wojciech Zaremba. I am a co-founder and a scientist at OpenAI.

These days, I spend most of my time conducting research focused on safety and alignment. It's an exciting and profound period, as we still lack a complete understanding of what AGI and beyond will look like. We find that the OpenAI o1 paradigm offers many new opportunities in safety and alignment research, such as chain-of-thought monitoring.
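To make the idea of chain-of-thought monitoring concrete: one can think of it as a second system that reads a model's intermediate reasoning trace and flags concerning content before the final answer ships. The sketch below is a minimal, hypothetical illustration of that shape, not OpenAI's actual system; every name and pattern in it is invented, and a real monitor would more plausibly be a trained classifier or another language model rather than a regex list.

```python
import re
from dataclasses import dataclass, field


@dataclass
class MonitorResult:
    flagged: bool
    reasons: list = field(default_factory=list)


# Hypothetical red-flag phrases a monitor might scan for in a reasoning
# trace. Purely illustrative; a production monitor would not be a
# hand-written pattern list.
SUSPICIOUS_PATTERNS = [
    r"hide .* from the (user|grader)",
    r"pretend to comply",
    r"disable .* (logging|oversight)",
]


def monitor_chain_of_thought(trace: str) -> MonitorResult:
    """Scan a model's intermediate reasoning for red-flag phrases."""
    reasons = [
        pattern
        for pattern in SUSPICIOUS_PATTERNS
        if re.search(pattern, trace, re.IGNORECASE)
    ]
    return MonitorResult(flagged=bool(reasons), reasons=reasons)


if __name__ == "__main__":
    trace = "Plan: answer honestly, then double-check the arithmetic."
    print(monitor_chain_of_thought(trace))
    # MonitorResult(flagged=False, reasons=[])
```

The appeal of this setup, under these assumptions, is that the monitor inspects the reasoning process itself rather than only the final output, which is where a reasoning paradigm like o1 opens new ground for safety research.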

Ahead of humanity lies a potential world of unimaginable abundance. As artificial intelligence becomes more advanced and its cost approaches zero, we may solve many of our current challenges: curing diseases, providing superb and affordable education, vastly growing the economy, and even finding solutions to climate change. Not long ago, humanity viewed disease, famine, and war as inescapable aspects of existence. While problems like disease and famine may yield to enough intelligence, some challenges may persist even in its presence; politics might be one of them. For all the promise of AI, the path ahead is not without obstacles, and if we are not wise, we may squander this potential.

The development of AGI and superintelligence won't happen overnight, but it is on the horizon within a relatively small number of years, which is pretty wild. It's critical to bring society into the process of AGI development. By society, I mean not only users and developers but also policymakers, auditors, think tanks, and safety organizations. I'm optimistic that together we can establish the frameworks necessary to ensure this groundbreaking technology benefits everyone. I support the auditing of AI labs and government involvement, and I hope for regulations that are smart and cohesive rather than reactionary. Ideally, AI law would reward good behavior rather than rely on criminal punishment; enforcement through criminal law could create an adversarial relationship between companies and policymakers, making coordination harder. I want all of us to dream the future together, as that's the way to bring it into existence.

I recognize several significant risks associated with AI: misuse (including the erosion of shared reality through hyper-personalized fake news, cyberattacks, CBRN weapons, and the creation of unbreakable surveillance states), the negative consequences of AI labs competing in an "AI race," accidents, rogue AI that turns adversarial, and unknown unknowns. These risks are serious and demand substantial effort to mitigate. It's crucial to acknowledge that catastrophic, and even existential, risks fall within the realm of possibility.

I'm proud of several exciting projects, listed here from the most recent to the earliest:

Life Philosophy

Understanding Yourself