Ilya Sutskever created a new AGI startup
post by harfe · 2024-06-19T17:17:17.366Z · LW · GW · 35 comments
This is a link post for https://ssi.inc/
[copy of the whole text of the announcement on ssi.inc, not an endorsement]
Safe Superintelligence Inc.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.
If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024
35 comments
Comments sorted by top scores.
comment by William_S · 2024-06-19T18:12:16.764Z · LW(p) · GW(p)
If anyone says "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," you should really ask for details: what does this mean, and how would they measure whether safety is ahead? (E.g. is it "we did the bare minimum to make this product tolerable to society," or "we realize how hard superalignment will be and will be investing enough that independent experts agree we have a 90% chance of solving superalignment before we build something dangerous"?)
↑ comment by Anders Lindström (anders-lindstroem) · 2024-06-20T14:57:58.485Z · LW(p) · GW(p)
Come on now, there is nothing to worry about here. They are just going to "move fast and break things"...
comment by William_S · 2024-06-19T18:02:39.548Z · LW(p) · GW(p)
I don't trust Ilya Sutskever to be the final arbiter of whether a Superintelligent AI design is safe and aligned. We shouldn't trust any individual, especially one building such a system, to claim that they've figured out how to make it safe and aligned. At minimum, there should be a plan that passes review by a panel of independent technical experts, and most of that plan should be in place and reviewed before the dangerous system is built.
↑ comment by Chi Nguyen · 2024-06-20T16:52:25.952Z · LW(p) · GW(p)
I don't trust Ilya Sutskever to be the final arbiter of whether a Superintelligent AI design is safe and aligned. We shouldn't trust any individual,
I'm not sure how I feel about the whole idea of this endeavour in the abstract. But as someone who doesn't know Ilya Sutskever and has only followed the public coverage, I'm pretty worried about him in particular running it, whether or not decision-making happens at the level of a single individual. Running this safely will likely require a lot of moral integrity and courage, and the board drama made it look to me like Ilya disqualified himself from having enough of either.
Lightly held, because I don't know the details. But from the public information I've seen, I don't know why I should believe that Ilya has sufficient moral integrity and courage for this project, even if he might "mean well" at the moment.
comment by Owen Henahan (OwenLeaf) · 2024-06-19T18:22:25.095Z · LW(p) · GW(p)
I am deeply curious who is funding this, considering that there will explicitly be no intermediate product. Only true believers with mind-boggling sums of money to throw around would invest in a company with no revenue source. Could it be Thiel? Who else is doing this in the AI space? I hope to see journalists exploring the matter.
↑ comment by mesaoptimizer · 2024-06-19T20:14:48.985Z · LW(p) · GW(p)
Thiel has historically expressed disbelief about AI doom, and has been more focused on trying to prevent civilizational decline. From my perspective, he is more likely to fund an organization founded by people with accelerationist credentials than one founded by someone who took part in a failed coup attempt that, to him, would look like it was motivated by a sincere belief that the alignment problem is extremely difficult.
↑ comment by Mitchell_Porter · 2024-06-20T03:54:33.878Z · LW(p) · GW(p)
I'd look for funds or VCs that are involved with Israel's tech sector at a strategic level. And who knows, maybe Aschenbrenner's new org is involved.
↑ comment by O O (o-o) · 2024-06-20T14:54:47.064Z · LW(p) · GW(p)
I see Elon throwing money into this. He originally recruited Sutskever and he’s probably(?) smart enough to diversify his AGI bets.
↑ comment by orthonormal · 2024-06-20T16:46:50.423Z · LW(p) · GW(p)
Elon diversifies in the sense of "personally micromanaging more companies", not in the sense of "backing companies he can't micromanage".
comment by Nathan Young · 2024-06-20T03:34:30.951Z · LW(p) · GW(p)
Weakly endorsed
“Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was Oh no, not again. Many people have speculated that if we knew exactly why the bowl of petunias had thought that we would know a lot more about the nature of the Universe than we do now.”
The Hitchhiker’s Guide To The Galaxy, Douglas Adams
comment by orthonormal · 2024-06-19T17:49:31.645Z · LW(p) · GW(p)
I'm not even angry, just disappointed.
↑ comment by habryka (habryka4) · 2024-06-19T21:50:37.904Z · LW(p) · GW(p)
I am angry and disappointed.
comment by Roman Malov · 2024-06-19T20:36:45.814Z · LW(p) · GW(p)
safety always remains ahead
When was it ever ahead? I mean, to be sure that safety is ahead, you first need to make safety progress comparable with capabilities progress. And to do that, you shouldn't advance the capabilities.
comment by MondSemmel · 2024-06-20T06:19:42.594Z · LW(p) · GW(p)
OpenAI board vs. Altman: Altman "was not consistently candid in his communications with the board".
Ilya's statement on leaving OpenAI:
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.
So, Ilya, how come your next project is an OpenAI competitor? Were you perhaps not candid in your communications with the public? But then why should anyone believe anything about your newly announced organization's principles and priorities?
comment by mesaoptimizer · 2024-06-19T17:29:52.887Z · LW(p) · GW(p)
Related Bloomberg news article on the announcement.
↑ comment by Chris_Leong · 2024-06-19T18:33:52.681Z · LW(p) · GW(p)
Paywalled. Would be fantastic if someone with access could summarise the most important bits.
↑ comment by harfe · 2024-06-19T18:37:51.176Z · LW(p) · GW(p)
It does not appear paywalled to me. The link that @mesaoptimizer posted is an archive, not the original bloomberg.com article.
comment by eggsyntax · 2024-06-21T09:16:46.454Z · LW(p) · GW(p)
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
In fairness, there's a high-integrity version of this that's net good:
- Accept plenty of capital.
- Observe that safety is not currently clearly ahead.
- Spend the next n years working entirely on alignment, until and unless it's solved.
This isn't the outcome I expect, and it wouldn't stop other actors from releasing catastrophically unsafe systems, but given that Ilya Sutskever has, to the best of my (limited) knowledge, been fairly high-integrity in the past, it's worth noting as a possibility. It would be genuinely lovely to see them use a ton of venture capital for alignment work.
comment by Rafael Harth (sil-ver) · 2024-06-21T09:57:30.836Z · LW(p) · GW(p)
I don't even get it. If their explicit plan is not to release any commercial products along the way, then they must think they can (a) get to superintelligence faster than DeepMind, OpenAI, and Anthropic, and (b) do so while developing more safety along the way -- presumably with fewer resources, a smaller team, and a head start for the competitors. How does that make any sense?
comment by Lao Mein (derpherpize) · 2024-06-20T07:13:01.223Z · LW(p) · GW(p)
We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This is the galaxy-brained plan of literally every single AI safety company of note.
Then again, maybe only the capabilities-focused ones become noteworthy.
comment by Raghuvar Nadig (raghuvar-nadig) · 2024-06-22T18:22:50.164Z · LW(p) · GW(p)
In the spirit of Situational Awareness, I'm curious how people are parsing some apparent contradictions:
- OpenAI is explicitly pursuing AGI
- Most/many people in the field (e.g. Leopold Aschenbrenner, who worked with Ilya Sutskever) presume that, approximately when AGI is reached, we'll have automated software engineers, and ASI will follow very soon
- SSI is explicitly pursuing straight-shot superintelligence - the announcement starts off by claiming ASI is "within reach"
- In his departing message from OpenAI, Sutskever said "I’m confident that OpenAI will build AGI that is both safe and beneficial...I am excited for what comes next - a project that is very personally meaningful to me about which I will share details in due time"
- At the same time, Sam Altman said "I am forever grateful for what he did here and committed to finishing the mission we started together"
Does this point to increased likelihood of a timeline in which somehow OpenAI develops AGI before anyone else, and also SSI develops superintelligence before anyone else?
Does it seem at all likely from the announcement that by "straight-shot" SSI is strongly hinting that it aims to develop superintelligence while somehow sidestepping AGI (which they won't release anyway) and automated software engineers?
Or is it all obviously just speculative talk/PR, not to be taken too literally, and we don't really need to put much weight on the differences between AGI/ASI for now? If that were the case, though, it seems like more specificity than warranted.
comment by harfe · 2024-06-20T16:06:27.946Z · LW(p) · GW(p)
One thing I find positive about SSI is their intent not to have products before superintelligence (note that I am not arguing here that the whole endeavor is net-positive). Not building intermediate products lessens the impact on race dynamics. I think it would be preferable if all the other AGI labs had a similar policy (funnily, while typing this comment, I got a notification about Claude 3.5 Sonnet...). A policy of not having any product can also give them cover to focus on safety research relevant to superintelligence, instead of doing shallow control of LLM outputs.
To reduce bad impacts from SSI, it would be desirable that SSI also
- have a clearly stated policy to not publish their capabilities insights,
- take security sufficiently seriously to be able to defend against nation-state actors that try to steal their insights.
↑ comment by orthonormal · 2024-06-20T16:49:23.236Z · LW(p) · GW(p)
Counterpoint: other labs might become more paranoid that SSI is ahead of them. I think your point is probably more correct than the counterpoint, but it's worth mentioning.
comment by Charlie Steiner · 2024-06-19T23:38:38.927Z · LW(p) · GW(p)
We are assembling a lean, cracked team
This team is going to be cracked.
comment by O O (o-o) · 2024-06-19T23:25:28.307Z · LW(p) · GW(p)
OpenAI is closed
StabilityAI is unstable
SafeSI is ...
↑ comment by RussellThor · 2024-06-20T01:25:27.448Z · LW(p) · GW(p)
LessWrong is ...
↑ comment by MondSemmel · 2024-06-20T12:25:13.565Z · LW(p) · GW(p)
Is it MoreWrong or MoreRight?
↑ comment by metachirality · 2024-06-20T01:38:05.213Z · LW(p) · GW(p)
Let's hope not!
↑ comment by quetzal_rainbow · 2024-06-20T05:49:27.883Z · LW(p) · GW(p)
Actually, we should hope that LW is very wrong about AI and alignment is easy.
↑ comment by Rudi C (rudi-c) · 2024-06-20T18:51:49.620Z · LW(p) · GW(p)
I’ve long taken to using GreaterWrong. Give it a try; it's lighter and more featureful.
↑ comment by Jacob G-W (g-w1) · 2024-06-20T16:47:33.607Z · LW(p) · GW(p)
Orwell was more prescient than we could have imagined.