Comments

Comment by Lee.aao (leonid-artamonov) on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-27T10:18:26.915Z · LW · GW

Greg Brockman to Elon Musk (cc: Sam Altman) - Nov 22, 2015 6:11 PM

In response to this follow-up, Elon first mentions that $100M is not enough: he encourages OpenAI to raise more money on their own and promises to increase the amount they can raise to $1B.

I found this on the OpenAI blog: https://openai.com/index/openai-elon-musk/
There are a couple of other messages there, with the vibe that the OpenAI team felt betrayed by Elon.

We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.

 

@habryka, can you please check the link? I think these messages could have added more context. Not sure why they weren't included in the original source, though.

Comment by Lee.aao (leonid-artamonov) on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-26T16:44:35.702Z · LW · GW

Rather, they didn't foresee the possibility that Microsoft might want to invest, and they didn't consider that capped-profit was a path to billions of dollars.

Comment by Lee.aao (leonid-artamonov) on On Dwarkesh’s Podcast with OpenAI’s John Schulman · 2024-05-28T08:27:51.899Z · LW · GW
1. Note: It was a 100-point Elo improvement based on the ‘gpt2’ tests prior to release, but GPT-4o itself while still on top saw only a more modest increase.

Didn't he mean the early GPT-4 vs. GPT-4 Turbo?

As I understand it, that's the same pre-trained model but with more post-training work. GPT-4o is probably a newly trained model, so you can't compare it like that.
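
For context on the size of that gap: under the standard Elo model (the generic formula, not anything specific to how these arena scores were actually computed), a 100-point advantage corresponds to roughly a 64% expected win rate:

```latex
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}
\quad\Rightarrow\quad
\frac{1}{1 + 10^{-100/400}} \approx 0.64
```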

Comment by Lee.aao (leonid-artamonov) on AI #58: Stargate AGI · 2024-04-09T15:17:13.041Z · LW · GW

and these aren’t normies, they work on tech, high paying 6 figure salaries, very up to date with current events.

If you are a true normie not working in tech, it makes sense to be unaware of such details. You are missing out, but I get why.

If you are in tech, and you don’t even know GPT-4 versus GPT-3.5? Oh no.


Is it just me, or do you also feel intellectually lonely lately? 

I think my relatives and most of my friends think I'm crazy for thinking and talking so much about AI. And they listen to me more out of respect and politeness than out of any real interest in the topic.

Comment by Lee.aao (leonid-artamonov) on AI Timelines · 2024-03-22T21:36:22.079Z · LW · GW

Ege, do you think you'd update if you saw a demonstration of sophisticated sample-efficient in-context learning and far-off-distribution transfer?
 

Yes.

Suppose it could get decent at the first-person-shooter after like a subjective hour of messing around with it. If you saw that demo in 2025, how would that update your timelines?

I would probably update substantially towards agreeing with you.


DeepMind released an early-stage research model, SIMA: https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/

It was tested on 600 basic (10 seconds max) videogame skills, with only screen video plus a text description of the task as input. The main takeaway is that an agent trained on many games performs almost as well in a new, unseen game as an agent trained specifically on that game.

Seems like by 2025 it's really possible to see more complex generalization (harder tasks and games, more sample efficiency), as in your crux for in-context learning.

Comment by Lee.aao (leonid-artamonov) on Report on Frontier Model Training · 2023-12-10T14:28:20.267Z · LW · GW

Since OpenAI is renting MSFT compute for both training and inference, it seems reasonable to think that inference >> training. Am I right?
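
A minimal back-of-envelope sketch of that comparison, using the standard transformer approximations (training compute ≈ 6 · params · tokens, inference ≈ 2 · params per generated token); all the serving numbers below are illustrative guesses, not OpenAI figures:

```python
# Back-of-envelope: when does cumulative inference compute pass training compute?
# Approximations: training ~ 6 * N * D FLOPs, inference ~ 2 * N FLOPs per token.
# Every concrete number here is a made-up, hypothetical assumption.

params = 1e12              # hypothetical model size: 1T parameters
train_tokens = 10e12       # hypothetical training data: 10T tokens

train_flops = 6 * params * train_tokens            # ~6e25 FLOPs, one-time cost

tokens_served_per_day = 1e11                       # hypothetical: 100B tokens/day
inference_flops_per_day = 2 * params * tokens_served_per_day  # ~2e23 FLOPs/day

days_to_match = train_flops / inference_flops_per_day
print(f"Inference compute passes training compute after ~{days_to_match:.0f} days")
# With these made-up numbers: ~300 days, so over a long serving lifetime
# inference can indeed dominate total compute.
```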

Comment by Lee.aao (leonid-artamonov) on Report on Frontier Model Training · 2023-12-09T15:01:49.883Z · LW · GW

Is there a cheap or free way to read SemiAnalysis posts?
Can't afford the $500 subscription, sadly.