OpenAI Co-Founder Ilya Sutskever’s SSI Raises $1 Billion for AI Safety
Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion to focus on developing safe artificial intelligence systems. Launched just three months ago, SSI has already been valued at approximately $5 billion, according to sources. The company aims to push the boundaries of AI safety research, emphasizing the creation of AI systems that exceed human capabilities while maintaining stringent safety protocols. SSI’s headquarters are split between Palo Alto, California, and Tel Aviv, Israel, and it currently employs 10 people.
Investment and Strategic Focus
SSI's funding will primarily be directed toward acquiring computing power and recruiting top AI research and engineering talent. The company is backed by several prominent venture capital firms, including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership co-led by SSI CEO Daniel Gross and former GitHub CEO Nat Friedman, also participated in the round. Gross highlighted the importance of aligning with investors who understand and support SSI's mission of advancing safe AI, even if that means dedicating years to research and development before launching a market-ready product.
Challenges and Industry Context
The pursuit of AI safety has grown in importance amid concerns that advanced AI systems could act contrary to human interests or pose existential risks. Sutskever, a key figure in AI research, founded SSI alongside Gross, who previously spearheaded AI projects at Apple, and Daniel Levy, a former OpenAI researcher.
Sutskever’s Departure from OpenAI and Vision for SSI
Sutskever's departure from OpenAI earlier this year followed a series of tumultuous events, including his brief involvement in the board's decision to remove OpenAI CEO Sam Altman, a decision that was later reversed. After his exit, OpenAI dismantled the "Superalignment" team he had co-led, whose mission was to ensure that AI systems remain aligned with human values as they grow increasingly sophisticated. SSI operates as a standard for-profit company, in contrast to OpenAI's unusual corporate structure, which was designed to prioritize safety. The startup emphasizes hiring individuals who not only possess exceptional skills but also share its cultural values and commitment to responsible AI development.
Approach to Scaling and Future Plans
Sutskever is known for championing the scaling hypothesis—the idea that AI models improve significantly with increased computational resources and data. However, he intends to explore a different approach to scaling at SSI, though he has not yet revealed specific details. The company is also considering partnerships with cloud providers and chip manufacturers to meet its extensive computing power needs, but no decisions have been made about which firms it will collaborate with. Sutskever’s vision for SSI involves a deliberate and careful approach, prioritizing groundbreaking advancements over the industry’s current scaling trends.
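For context, the scaling hypothesis is often formalized through empirical scaling laws. One widely cited form, from Hoffmann et al.'s Chinchilla work (an illustration of the general idea, not a description of SSI's methods), models pre-training loss as a power law in model size and data:

L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Here N is the number of model parameters, D the number of training tokens, E an irreducible loss term, and A, B, \alpha, \beta empirically fitted constants. Whatever "different approach to scaling" SSI ultimately pursues, it would presumably depart from simply driving N and D upward along curves like this.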
Conclusion
SSI's emergence as a major player in the AI safety landscape underscores continued investment in next-generation AI technologies despite broader market hesitancy. With its focus on safety, ethical AI development, and new research directions, SSI aims to make a significant impact on the future of artificial intelligence.