The text discusses the emergence of generative AIs and the risks they pose, such as misinformation and value lock-in. It highlights the need for AI alignment, which aims to steer AI systems toward intended goals and ethical principles, and reviews OpenAI's alignment plan along with its challenges. As an alternative to perfectly aligned AI, it introduces the concept of controllable AI, emphasizing the importance of being able to stop AI systems when needed. It then examines the open agency architecture, which aims to achieve controllability through a specification step and plan evaluation. The challenges of building such a system are discussed, including the need for a new language for specifications and the complexity of designing a full world model. The text concludes by stressing the urgency of addressing AI risks and the need for more time and safer design architectures.