On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI’s former chief scientist Ilya Sutskever, has raised $1 billion in funding. The three-month-old company plans to focus on developing what it calls “safe” AI systems that surpass human capabilities.
The fundraising effort shows that even amid growing skepticism around huge investments in AI tech that have so far failed to be profitable, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round.
SSI aims to use the new funds for computing power and attracting talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto, California, and Tel Aviv, Reuters reported.
While SSI did not formally disclose its valuation, sources told Reuters it was valued at $5 billion, a stunningly large sum just three months after the company’s founding and with no publicly known products yet developed.
Son of OpenAI
Much like Anthropic before it, SSI formed as a breakaway company founded in part by former OpenAI employees. Sutskever, 37, cofounded SSI with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.
Sutskever’s departure from OpenAI followed a rough period at the company that reportedly included disenchantment that OpenAI management did not commit proper resources to his “superalignment” research team, as well as Sutskever’s involvement in the brief ouster of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would “pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”
Superintelligence, as we have noted previously, is a nebulous term for a hypothetical technology that would far surpass human intelligence. There is no guarantee that Sutskever will succeed in his mission (and skeptics abound), but the star power he gained from his academic bona fides and his role as a key cofounder of OpenAI has made rapid fundraising for his new company relatively easy.
The company plans to spend a couple of years on research and development before bringing a product to market, and its self-proclaimed focus on “AI safety” stems from the belief that powerful AI systems capable of causing existential risks to humanity are on the horizon.
The “AI safety” topic has sparked debate within the tech industry, with companies and AI experts taking different stances on proposed safety regulations, including California’s controversial SB-1047, which may soon become law. Because the matter of existential risk from AI is still hypothetical and frequently guided by personal opinion rather than science, that particular controversy is unlikely to die down anytime soon.