Actually a great example of people using the voting system right: it does not contribute anything substantial to the conversation, but simply expresses something most of us obviously feel.
I had to sort the two votes into four prototypes to make sure I voted sensibly:
High Karma - Agree: A well-expressed opinion I deeply share.
High Karma - Disagree: A well-argued counterpoint that did not convince me / that I would never make myself.
Low Karma - Agree: Something obvious/trivial/repeated that I agree with, but not worth saying here.
Low Karma - Disagree: The low-quality rest bucket.
Also, purely factual contributions (helpful links, context, etc.) should get Karma votes only, as they express no opinion to disagree with.
To first order, this might have a good effect on safety.
To second order, it might have negative effects, because it increases the risk for such companies of hiring people who openly worry about AI x-risk, and therefore lowers the rate at which they do so.
Someone serious about alignment who sees dangers had better do what is safe and not be influenced by a non-disparagement agreement. It might cost them some job prospects, money, and possibly a lawsuit, but if the history of Earth is on the line? Especially since such a well-known AI genius would find plenty of support from people who endorsed such an open move.
So I hope he simply judges that talking right NOW is not strategically worth it. E.g., he might want to increase his chance of being hired by a semi-safety-serious company (more serious than OpenAI, but not serious enough to hire a proven whistleblower), where he could use his position to better effect.
I agree with the premise, but not the conclusion, of your last point. Any open-source development that significantly lowers resource requirements can also be used by closed labs to simply increase their model/training size for the same cost, thus keeping the gap.
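A minimal sketch of that reasoning, with made-up budget numbers (nothing here is from the post):

```python
# Hypothetical: an efficiency advance that makes training k times cheaper
# helps open and closed models alike, so the relative gap is unchanged.

open_budget = 1e6        # assumed open-source training budget ($)
closed_budget = 1e8      # assumed closed-lab training budget ($)
efficiency_gain = 10     # the advance buys 10x more effective compute per $

open_compute_after = open_budget * efficiency_gain
closed_compute_after = closed_budget * efficiency_gain

print(closed_budget / open_budget)                # gap before: 100.0
print(closed_compute_after / open_compute_after)  # gap after: still 100.0
```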
The first graph is supposed to show "BMI at age 50 for white, high-school-educated American men born in various years", but it goes up to 1986. People born in 1986 are only 38 right now, so we cannot know their BMI at age 50. Something is wrong.
No. By "off by 100" I meant off by a factor of 100, too small, NOT that they don't sum to 100.
I think you made an off-by-100 error in Unlabeled Evaluation, with all win rates < 1%.
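One common way such a factor-of-100 error arises (just my guess at the cause, not something confirmed in the post) is printing a fraction under a percent label:

```python
# Hypothetical illustration of the suspected bug: a win rate computed as a
# fraction in [0, 1] but reported as if it were already a percentage.

wins, games = 55, 100
win_rate = wins / games              # 0.55 (a fraction)

print(f"win rate: {win_rate}%")      # buggy output: "win rate: 0.55%"
print(f"win rate: {win_rate:.0%}")   # fixed output: "win rate: 55%"
```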
The former board's only power was to approve or fire new board members and CEOs.
Pretty sure they only let Altman back as CEO under the condition of having strong influence over the new board.
Just in, 5 hours ago:
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.
It could relate to this work on Q* for deep-learning heuristics:
https://x.com/McaleerStephen/status/1727524295377596645?s=20
Jan Leike (Nov 20):
"I have been working all weekend with the OpenAI leadership team to help with this crisis"
"I think the OpenAI board should resign"
https://x.com/janleike/status/1726600432750125146?s=20
Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.
Here is the paper: https://cset.georgetown.edu/publication/decoding-intentions/
Some more recent (Oct/Nov 2023) publications from her here:
https://cset.georgetown.edu/staff/helen-toner/
From your last link:
Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value.
As the company has been doing well recently, with ongoing talks about an investment implying a market cap of $90B, many employees might already have hit their 10x. That is the highest payout they will ever get. So there is every incentive to cash out now (or as soon as the 2-year lock allows), and zero financial incentive to care about long-term value.
This seems worse at aligning employee interests with the long-term interests of the company, even compared to regular (uncapped) equity, where each employee can at least hope that the valuation climbs even higher.
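A minimal sketch of the capped payoff, with made-up grant numbers (the $9B grant-time valuation is an assumption for illustration):

```python
# Hypothetical PPU payoff under a 10x growth cap; all numbers are made up.

grant_value = 100_000      # value of an employee's PPU grant at issue time ($)
grant_valuation = 9e9      # assumed company valuation at grant time ($)
cap_multiple = 10          # PPU growth is capped at 10x the original value

def ppu_value(current_valuation):
    multiple = current_valuation / grant_valuation
    return grant_value * min(multiple, cap_multiple)

print(ppu_value(90e9))     # 1,000,000 -> the cap is hit at the rumored $90B
print(ppu_value(900e9))    # still 1,000,000 -> no upside beyond the cap
```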
Also:
It’s important to reiterate that the PPUs inherently are not redeemable for value if OpenAI does not turn a profit
So it seems the growth cap actually encourages short-term thinking, which runs against their long-term mission.
Do you read these incentives the same way?
The ousting of Sam Altman by a board with three EA members could be the strongest public move so far.
Could you compare the average activation of the LoRA network on prompts that the original Llama model refused versus prompts that it allowed? I would expect more going on when the LoRA has to modify the output.
Using linear probes in a similar manner could also be interesting.
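A minimal sketch of the comparison I have in mind, assuming a PEFT-style LoRA adapter on Llama; the model name, adapter path, and the two prompt lists are placeholders, not anything from the post:

```python
# Sketch: compare the mean magnitude of the LoRA layers' contributions on
# prompts the base model refused vs. prompts it allowed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "meta-llama/Llama-2-7b-hf"            # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, "path/to/lora-adapter")  # placeholder
model.eval()

activations = []

def hook(module, inputs, output):
    # Record the magnitude of this LoRA layer's output (the adapter's delta).
    activations.append(output.detach().abs().mean().item())

# The lora_B projections produce the low-rank update added to the base output.
for name, module in model.named_modules():
    if "lora_B" in name:
        module.register_forward_hook(hook)

def mean_lora_activation(prompts):
    activations.clear()
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt")
        with torch.no_grad():
            model(**inputs)
    return sum(activations) / len(activations)

print("refused:", mean_lora_activation(refused_prompts))  # placeholder lists
print("allowed:", mean_lora_activation(allowed_prompts))
```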
Humans / human-level intelligence can invent technology from the distant future if given enough time. Nick Bostrom has a whole chapter about speed superintelligence in Superintelligence.
While I agree, there is a positive social feedback mechanism going on right now: AI/neural networks are a very interesting AND well-paid field to go into, AND the field is young, so a bright new student can get up to speed quite quickly. More motivated smart people lead to more breakthroughs, making AIs more impressive and useful, closing the feedback loop.
I have no doubt that the mainstream popularity of ChatGPT right now decides quite a few career choices.
That is how I have seen the term "expert performance" used in papers, yes.