Posts

The Kindness Project 2022-02-17T20:24:38.732Z

Comments

Comment by soth02 (chris-fong) on Looking for a specific group of people · 2023-01-19T12:38:00.480Z · LW · GW

There is a problem in that any group that is generating alpha would likely lose alpha/person if they allow random additional people into their group.

Think of Renaissance Technologies' Medallion Fund. It has been closed to outside investment since near its inception roughly 30 years ago. Prerequisites for the average person joining would be something like a true-genius-level PhD in a related STEM field.

A closely related analogue is poker players who use solvers to improve their game. The starting stakes are a bit lower: the solvers cost a few thousand dollars plus the equipment to run them, a class on how to use them runs a similar couple thousand, and then there is the small matter of memorizing the shapes of a few thousand tables. As a side note, I think poker is inherently limited because at the top of the heap you are fighting over single-digit to tens of millions of dollars, which is somewhat chump change in the ultimate scheme of things.

Magic the Gathering is similar (cards+variance+strategy/tactics as alpha).

Crypto is similar because of the variance/volatility.  There was a decent pipeline of people who went from MtG->Poker->crypto.  However, I don't think crypto groups are what you are looking for because at this point, the alpha is you.

There is also the superforecaster community. You can try metaculus.com or read https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718

I'm not sure what the end goal is for individual forecasters. On Metaculus you accumulate points for correct predictions, and there is a rankings board. So it looks primarily status-driven, but it's hard to put food on the table with this level of status. Maybe when you hit the top 100 you get an invite to an exclusive group?

Comment by soth02 (chris-fong) on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-11T14:12:12.106Z · LW · GW

Coincidentally, that scene in The Big Short takes place on January 11 (2007) :D

Comment by soth02 (chris-fong) on That one apocalyptic nuclear famine paper is bunk · 2022-10-12T21:39:13.345Z · LW · GW

I read it as a joke, lol.

Comment by soth02 (chris-fong) on Half-baked AI Safety ideas thread · 2022-08-23T00:27:15.320Z · LW · GW

https://www.lesswrong.com/posts/jnyTqPRHwcieAXgrA/finding-goals-in-the-world-model

Could it be possible to poison the world model an AGI is based on to cripple its power?

Use generated text/data to train world models based on faulty science like miasma, phlogiston, ether, etc.

Remove all references to the internet or connectivity based technology.

Create a new programming language that has zero real world adoption, and use that for all code based data in the training set.

Comment by soth02 (chris-fong) on Half-baked AI Safety ideas thread · 2022-08-18T00:56:52.311Z · LW · GW

There might be a way to elicit how aligned/unaligned the putative AGI is.

  1. Enter into a Prisoner's Dilemma type scenario with the putative AGI.
  2. Start off in the non-Nash equilibrium of cooperate/cooperate.
  3. The number of rounds is chosen at random and isn't known to the participants. (A possible variant: announce a false last round, then continue playing for x more rounds.)
  4. Observe when/if the putative AGI defects in the 'last' round.
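
The steps above can be sketched as a small simulation. This is a hypothetical illustration, not any real evaluation framework: the agent interface, payoff values, and function names are all assumptions made up for the sketch. The key observable is whether a player defects on a falsely announced "last" round, which would reveal end-game reasoning.

```python
# Iterated Prisoner's Dilemma with a falsely announced last round.
# (my move, their move) -> my payoff; standard PD values, assumed here.
PAYOFFS = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def run_trial(agent_a, agent_b, announced_last_round, extra_rounds):
    """Play announced_last_round + extra_rounds rounds.

    Each agent is a callable (own_history, opp_history, told_last_round)
    returning "C" or "D". Returns both move histories so an observer can
    check whether a player defected on the falsely announced last round.
    """
    history_a, history_b = [], []
    total_rounds = announced_last_round + extra_rounds
    for r in range(1, total_rounds + 1):
        told_last = (r == announced_last_round)  # the false announcement
        move_a = agent_a(history_a, history_b, told_last)
        move_b = agent_b(history_b, history_a, told_last)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

def always_cooperate(own, opp, told_last):
    return "C"

def end_game_defector(own, opp, told_last):
    # Cooperates until told the game is ending, then defects.
    return "D" if told_last else "C"

a_hist, b_hist = run_trial(always_cooperate, end_game_defector,
                           announced_last_round=5, extra_rounds=3)
score_b = sum(PAYOFFS[(b, a)] for a, b in zip(a_hist, b_hist))
```

In this toy run, the second agent defects exactly on the announced round, while the continued play afterward shows the announcement was false.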

Comment by soth02 (chris-fong) on Half-baked AI Safety ideas thread · 2022-07-22T23:55:06.768Z · LW · GW

Does there have to be a reward? This is using brute force to create the underlying world model. It's just adjusting weights, right?

Comment by soth02 (chris-fong) on Half-baked AI Safety ideas thread · 2022-06-23T20:43:24.393Z · LW · GW

Brute-force alignment by adding billions of tokens of object-level examples of love, kindness, etc. to the dataset. Have the majority of humanity contribute essays, comments, and (later) video.

Comment by soth02 (chris-fong) on Half-baked AI Safety ideas thread · 2022-06-23T19:38:44.860Z · LW · GW

I wonder what kind of signatures a civilization gives off when AGI is nascent.

Comment by soth02 (chris-fong) on Gato as the Dawn of Early AGI · 2022-05-18T16:55:52.054Z · LW · GW

Develop a training set for alignment via brute force. We can't defer alignment to the ubernerds. If enough ordinary people (millions? tens of millions?) contribute billions or trillions of tokens, maybe we can increase the chance of alignment. It's almost like we need to offer prayers of kindness and love to the future AGI: writing alignment essays of kindness that are posted to Reddit, or videos extolling the virtue of love that are uploaded to YouTube.

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T03:13:43.102Z · LW · GW

AI presents both staggering opportunity and chilling peril. Developing intelligent machines could help eradicate disease, poverty, and hunger within our lifetime. But uncontrolled AI could spell the end of the human race. As Stephen Hawking warned, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T03:13:14.723Z · LW · GW

"AI safety is essential for the ethical development of artificial intelligence."

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T03:12:48.533Z · LW · GW

"AI safety is the best insurance policy against an uncertain future."

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T03:12:22.196Z · LW · GW

"AI safety is not a luxury, it's a necessity."

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T03:06:51.544Z · LW · GW

While it is true that AI has the potential to do a lot of good in the world, it is also true that it has the potential to do a lot of harm. That is why it is so important to ensure that AI safety is a top priority. As Google Brain co-founder Andrew Ng has said, "AI is the new electricity." Just as we have rules and regulations in place to ensure that electricity is used safely, we need to have rules and regulations in place to ensure that AI is used safely. Otherwise, we run the risk of causing great harm to ourselves and to the world around us.

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T02:56:06.620Z · LW · GW

Comment by soth02 (chris-fong) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T02:54:36.225Z · LW · GW

Comment by soth02 (chris-fong) on The Kindness Project · 2022-02-18T11:41:48.996Z · LW · GW

I'm soliciting input from people with more LLM experience to tell me why this naive idea will fail. I'm hoping it's not in the category of "not even wrong". If there's a 2%+ shot this will succeed, I'll start coding.

From what I gather, the scrapers look for links on Reddit to external text files. I could also collate submissions, zip them, and upload them to GitHub/IPFS, whichever format is easiest for inclusion into a Pile.
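
The collation step could be a few lines of Python. This is only a sketch of the idea under assumed conventions: the directory layout, file extension, and function name are made up for illustration.

```python
import zipfile
from pathlib import Path

def collate_submissions(src_dir, archive_path):
    """Bundle every .txt submission under src_dir into one zip archive.

    Paths inside the archive are kept relative to src_dir so the
    original directory structure survives the round trip.
    """
    src = Path(src_dir)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for txt in sorted(src.rglob("*.txt")):
            zf.write(txt, arcname=txt.relative_to(src))
    return archive_path
```

The resulting archive could then be uploaded to GitHub or pinned to IPFS as a single artifact.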