Posts

Paper: "The Ethics of Advanced AI Assistants" -Google DeepMind 2024-04-21T06:45:14.447Z

Comments

Comment by Tristan Wegner (tristan-wegner) on How to Model the Future of Open-Source LLMs? · 2024-04-22T10:42:35.785Z · LW · GW

I agree with the premise, but not the conclusion of your last point. Any open-source development that significantly lowers resource requirements can also be used by closed models to increase their model/training size at the same cost, thus keeping the gap.

Comment by Tristan Wegner (tristan-wegner) on Thoughts on seed oil · 2024-04-21T06:57:23.348Z · LW · GW

The first graph is supposed to show "BMI at age 50 for white, high-school educated American men born in various years", but it goes up to 1986. People born in 1986 are only 38 right now, so we cannot know their BMI at age 50. Something is wrong.

Comment by Tristan Wegner (tristan-wegner) on An Idea on How LLMs Can Show Self-Serving Bias · 2023-12-13T06:05:01.795Z · LW · GW

No. By "off by 100" I meant off by a factor of 100 too small, NOT that they don't sum to 100.

Comment by Tristan Wegner (tristan-wegner) on An Idea on How LLMs Can Show Self-Serving Bias · 2023-11-25T10:31:23.005Z · LW · GW

I think you made an off-by-100 error in the Unlabeled Evaluation, with all win rates <1%.

Comment by Tristan Wegner (tristan-wegner) on OpenAI: The Battle of the Board · 2023-11-23T12:07:21.509Z · LW · GW

The former board's only power was to appoint/fire board members and the CEO.

Pretty sure they only let Altman back as CEO under the condition of having a strong influence over the new board.

Comment by Tristan Wegner (tristan-wegner) on OpenAI: The Battle of the Board · 2023-11-23T08:38:35.742Z · LW · GW

Just in, 5h ago:

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

It could relate to this Q* for deep learning heuristics:

https://x.com/McaleerStephen/status/1727524295377596645?s=20

Comment by Tristan Wegner (tristan-wegner) on OpenAI: Facts from a Weekend · 2023-11-22T07:58:44.067Z · LW · GW

I have been working all weekend with the OpenAI leadership team to help with this crisis
Jan Leike

Nov 20
I think the OpenAI board should resign

https://x.com/janleike/status/1726600432750125146?s=20

Comment by Tristan Wegner (tristan-wegner) on OpenAI: Facts from a Weekend · 2023-11-22T07:18:31.931Z · LW · GW

Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

Here is the paper: https://cset.georgetown.edu/publication/decoding-intentions/

Some more recent (Oct/Nov 2023) publications from her here:

https://cset.georgetown.edu/staff/helen-toner/

Comment by Tristan Wegner (tristan-wegner) on OpenAI: Facts from a Weekend · 2023-11-21T08:55:42.755Z · LW · GW

From your last link:

Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value.

As the company was doing well recently, with ongoing talks about an investment implying a market cap of $90B, many employees might have hit their 10x already: the highest payout they will ever get. So there is every incentive to cash out now (or as soon as the 2-year lock allows), and zero financial incentive to care about long-term value.

This seems even worse at aligning employee interests with the long-term interests of the company than regular (uncapped) equity, where each employee can at least hope that the valuation gets even higher.

Also:

It’s important to reiterate that the PPUs inherently are not redeemable for value if OpenAI does not turn a profit

So it seems the growth cap actually encourages short-term thinking, which runs against their long-term mission.

Do you read these incentives the same way?
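To make the cap's effect concrete, here is a toy illustration of a 10x growth cap on a profit-participation unit (all numbers and the helper function are hypothetical, not actual OpenAI grant terms):

```python
def ppu_value(grant_value: float, implied_value: float, cap: float = 10.0) -> float:
    """Payout of a PPU grant under a growth cap (assuming the company turns a profit)."""
    return min(implied_value, cap * grant_value)

# Hypothetical employee granted PPUs worth $100k at a $9B valuation:
grant = 100_000
# At a $90B valuation the uncapped value would be exactly 10x the grant...
assert ppu_value(grant, grant * 90 / 9) == 1_000_000
# ...and any further growth adds nothing, so the marginal incentive is zero.
assert ppu_value(grant, grant * 180 / 9) == 1_000_000
```

Once `implied_value` crosses the cap, the derivative of the payout with respect to company value is zero, which is the short-term-thinking concern above.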

Comment by Tristan Wegner (tristan-wegner) on How much to update on recent AI governance moves? · 2023-11-18T08:08:40.383Z · LW · GW

The ousting of Sam Altman by a board with three EA-aligned members could be the strongest public move so far.

Comment by Tristan Wegner (tristan-wegner) on LoRA Fine-tuning Efficiently Undoes Safety Training from Llama 2-Chat 70B · 2023-10-22T14:10:23.921Z · LW · GW

Could you compare the average activation of the LoRA network on prompts that the original Llama model refused versus prompts it allowed? I would expect more going on when LoRA has to modify the output.

Using linear probes in a similar manner could also be interesting.
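The comparison I have in mind could be sketched roughly as below. This is a minimal toy version: random matrices stand in for trained LoRA adapter weights, and random vectors stand in for hidden states collected on the two prompt sets; the dimensions and the `lora_activity` helper are my own assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank = 64, 8                        # toy sizes, not Llama-2-70B's
A = rng.normal(size=(rank, d_model)) * 0.1   # LoRA down-projection (stand-in)
B = rng.normal(size=(d_model, rank)) * 0.1   # LoRA up-projection (stand-in)

def lora_activity(hidden_states: np.ndarray) -> float:
    """Mean absolute magnitude of the LoRA correction B @ A @ x over a batch."""
    delta = hidden_states @ A.T @ B.T        # (batch, d_model) LoRA delta
    return float(np.abs(delta).mean())

# Stand-ins for hidden states captured on refused vs. allowed prompts.
refused_acts = rng.normal(size=(32, d_model))
allowed_acts = rng.normal(size=(32, d_model))

print("refused:", lora_activity(refused_acts))
print("allowed:", lora_activity(allowed_acts))
```

With real adapters one would capture `hidden_states` via forward hooks on the adapted layers and check whether the refused-prompt set shows a larger mean LoRA delta, as hypothesized above.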

Comment by Tristan Wegner (tristan-wegner) on 2022 was the year AGI arrived (Just don't call it that) · 2023-01-05T07:11:02.749Z · LW · GW

Humans/human-level intelligence can invent technology from the distant future if given enough time. Nick Bostrom has a whole chapter about speed superintelligence in Superintelligence.

Comment by Tristan Wegner (tristan-wegner) on 2022 was the year AGI arrived (Just don't call it that) · 2023-01-05T07:08:15.878Z · LW · GW

While I agree, there is a positive social feedback loop going on right now: AI/neural networks are a very interesting AND well-paid field to go into, AND the field is young, so a bright new student can get up to speed quite quickly. More motivated smart people lead to more breakthroughs, making AIs more impressive and useful, closing the feedback loop.

I have no doubt that the mainstream popularity of ChatGPT right now decides quite a few career choices.

Comment by Tristan Wegner (tristan-wegner) on Deepmind's Gato: Generalist Agent · 2022-05-13T09:35:51.648Z · LW · GW

That is how I have seen the term "expert performance" used in papers, yes.