Comments

Comment by Sinityy (michal-basiura) on AI #83: The Mask Comes Off · 2024-09-29T14:11:47.542Z · LW · GW

@gwern wrote an explanation of why this is surprising (for some) [here](https://forum.effectivealtruism.org/posts/Mo7qnNZA7j4xgyJXq/sam-altman-open-ai-discussion-thread?commentId=CAfNAjLo6Fy3eDwH3):

Open Philanthropy (OP) only had that board seat and made a donation because Altman invited them to, and he could personally have covered the $30m or whatever OP donated for the seat [...] He thought up, drafted, and oversaw the entire for-profit thing in the first place, including all provisions related to board control. He voted for all the board members, filling it back up from when it was just him (& Greg Brockman at one point IIRC). He then oversaw and drafted all of the contracts with MS and others, while running the for-profit and eschewing equity in the for-profit. He designed the board to be able to fire the CEO because, to quote him, "the board should be able to fire me". [...]

Credit where credit is due - Altman may not have believed the scaling hypothesis like Dario Amodei, may not have invented PPO like John Schulman, may not have worked on DL from the start like Ilya Sutskever, may not have created GPT like Alec Radford, may not have written & optimized any code like Brockman's - but the 2023 OA organization is fundamentally his work.

The question isn't, "how could EAers* have ever let Altman take over OA and possibly kick them out", but entirely the opposite: "how did EAers ever get any control of OA, such that they could even possibly kick out Altman?" Why was this even a thing given that OA was, to such an extent, an Altman creation?

The answer is: "because he gave it to them." Altman freely and voluntarily handed it over to them.

So you have an answer right there to why the Board was willing to assume Altman's good faith for so long, despite everyone clamoring to explain how (in hindsight) it was so obvious that the Board should always have been at war with Altman and regarding him as an evil schemer out to get them. But that's an insane way for them to think! Why would he undermine the Board or try to take it over, when he was the Board at one point, and when he made and designed it in the first place? Why would he be money-hungry when he refused all the equity that he could so easily have taken - and in fact, various partner organizations wanted him to have in order to ensure he had 'skin in the game'? Why would he go out of his way to make the double non-profit with such onerous & unprecedented terms for any investors, which caused a lot of difficulties in getting investment and Microsoft had to think seriously about, if he just didn't genuinely care or believe any of that? Why any of this?

(None of that was a requirement, or even that useful to OA for-profit. [...] Certainly, if all of this was for PR reasons or some insidious decade-long scheme of Altman to 'greenwash' OA, it was a spectacular failure - nothing has occasioned more confusion and bad PR for OA than the double structure or capped-profit. [...])

What happened is, broadly: 'Altman made the OA non/for-profits and gifted most of it to EA with the best of intentions, but then it went so well & was going to make so much money that he had giver's remorse, changed his mind, and tried to quietly take it back; but he had to do it by hook or by crook, because the legal terms said clearly "no takesie backsies"'. Altman was all for EA and AI safety and an all-powerful nonprofit board being able to fire him, and was sincere about all that, until OA & the scaling hypothesis succeeded beyond his wildest dreams, and he discovered it was inconvenient for him and convinced himself that the noble mission now required him to be in absolute control, never mind what restraints on himself he set up years ago - he now understands how well-intentioned but misguided he was and how he should have trusted himself more. (Insert Garfield meme here.)

No wonder the board found it hard to believe! No wonder it took so long to realize Altman had flipped on them, and it seemed Sutskever needed Slack screenshots showing Altman blatantly lying to them about Toner before he finally, reluctantly, flipped. The Altman you need to distrust & assume bad faith of & need to be paranoid about stealing your power is also usually an Altman who never gave you any power in the first place! I'm still kinda baffled by it, personally.

He concealed this change of heart from everyone, including the board, gradually began trying to unwind it, overplayed his hand at one point - and here we are.

It is still a mystery to me what exactly Sam's motive is.

Comment by Sinityy (michal-basiura) on AI #83: The Mask Comes Off · 2024-09-29T13:29:28.938Z · LW · GW

That is not an argument against “the robots taking over,” or that AI does not generally pose an existential threat. It is a statement that we should ignore that threat, on principle, until the dangers ‘reveal themselves,’ with the implicit assumption that this requires the threats to actually start happening.

 

[Image: scientific briefing.png]
Comment by Sinityy (michal-basiura) on Language Models Model Us · 2024-05-19T22:53:49.188Z · LW · GW
Comment by Sinityy (michal-basiura) on In Defense of Chatbot Romance · 2023-02-14T17:06:27.561Z · LW · GW

A lot of people will end up falling in love with chatbot personas, with the result that they will become uninterested in dating real people, being happy just to talk to their chatbot.

 

Good. If a substantial number of men do this, the dating market will presumably become more tolerable.

Especially in some countries where there are absurd demographic imbalances. Some stats from Poland, compared with Ireland (the pics were translated with an online tool, so they look a bit off):

Ratio of unmarried men to unmarried women within a given age bracket (20-24: 1.1; 25-29: 1.33; 30-34: 1.5). Red = Poland, green = Ireland.

What about singles? 18-30 age bracket, men on the left, women on the right. Blue = single, orange = in a relationship, gray = married. 47% of young men are single, compared to 20% of young women.

 

Some other numbers. I think this is for the 15-49 age bracket; "single men" is a mistranslation, it should be "unmarried men"; M:K should be M:F (male:female). A 7.4:1 M:F suicide ratio (apparently the 8th highest in the world).

Also, apparently that M:F suicide ratio is as of 2017; I checked OurWorldInData and it was increasing ~monotonically from 1990 to 2017. From what I see elsewhere, it stayed at the 2017 level through at least 2020. Men are the 4th most suicidal within the EU; women are the 6th least suicidal in the EU.

Okay, I went off-topic a little bit, but these stats are so curiously bad I couldn't help myself.

Comment by Sinityy (michal-basiura) on So, geez there's a lot of AI content these days · 2023-02-02T19:27:58.677Z · LW · GW

Maybe GPT-3 could be used to find LW content related to the new post, using something like this: https://gpt-index.readthedocs.io

Unfortunately, I haven't gotten around to doing anything with it yet. But it seems useful: https://twitter.com/s_jobs6/status/1619063620104761344
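
For concreteness, here is a minimal sketch of what that could look like, assuming the gpt_index library linked above (later renamed llama_index). Its class and method names have shifted between versions, and the directory name, file name, and query text below are made up for illustration, so treat this as an approximation rather than a working recipe.

```python
# Sketch: index existing LW posts with gpt_index, then ask which are related
# to a new draft. API names are assumptions based on early gpt_index versions.
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Assume ./lw_posts/ holds existing posts saved as plain-text files.
documents = SimpleDirectoryReader("lw_posts").load_data()

# Build an embedding index over those posts (this calls the OpenAI API under
# the hood, so OPENAI_API_KEY must be set).
index = GPTSimpleVectorIndex(documents)

# Ask which indexed posts relate to the text of a new draft.
with open("new_post.txt") as f:
    new_post = f.read()

response = index.query(
    "Which of the indexed posts discuss topics related to this draft?\n\n" + new_post
)
print(response)
```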

Comment by michal-basiura on [deleted post] 2023-01-28T15:08:51.629Z

You can convert money into height through limb-lengthening surgery: about $50K USD for an additional 8 cm in Europe, and roughly double that price in the US. The whole process takes 1-1.5 years (the lengthening phase alone takes about 3 months).

Comment by Sinityy (michal-basiura) on Don't use 'infohazard' for collectively destructive info · 2022-11-20T19:04:11.960Z · LW · GW

For a more realistic example, consider the DNA sequence for smallpox: I'd definitely rather that nobody knew it, than that people could with some effort find out, even though I don't expect being exposed to that quoted DNA information to harm my own thought processes.

 

...is it weird that I saved it in a text file? I'm not entirely sure if I've got the correct sequence tho. Just, when I learned that it was available, I couldn't not do it.

Comment by Sinityy (michal-basiura) on I Converted Book I of The Sequences Into A Zoomer-Readable Format · 2022-11-10T11:01:00.819Z · LW · GW