Posts

Might AI modularity be a modular subproblem of deconfusion? 2021-08-28T12:33:24.565Z
Simultaneous Redundant Research 2021-08-17T12:17:27.701Z
Great Negotiation MOOC on Coursera 2021-08-09T12:23:55.268Z
Why not more small, intense research teams? 2021-08-05T11:57:35.516Z
New evidence on popular perception of "AI" risk 2019-11-07T23:36:08.183Z

Comments

Comment by eg on [deleted post] 2021-08-16T13:02:53.695Z

And for research that is more conceptual than empirical, the teams might go in completely different directions and generate insights that a single team or individual would not.

Comment by eg on How many parameters do self-driving-car neural nets have? · 2021-08-06T13:42:42.080Z · LW · GW

Take with a grain of salt, but maybe ~119M?

A Medium post from 2019 says: "Tesla’s version, however, is 10 times larger than Inception. The number of parameters (weights) in Tesla’s neural network is five times bigger than Inception’s. I expect that Tesla will continue to push the envelope."

The Wolfram Neural Net Repository says of Inception v3: "Number of layers: 311 | Parameter count: 23,885,392 | Trained size: 97 MB"

I'm not sure which version of Inception was being compared to Tesla's, though.
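
Spelling out the arithmetic (a minimal sketch; the only inputs are the two figures quoted above):

```python
# Back-of-the-envelope estimate of Tesla's network size from the quoted figures.
inception_v3_params = 23_885_392  # Wolfram's count for Inception v3
tesla_multiplier = 5              # "five times bigger" per the Medium post

tesla_params = inception_v3_params * tesla_multiplier
print(f"~{tesla_params / 1e6:.1f}M parameters")  # prints "~119.4M parameters"
```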

Comment by eg on Two AI-risk-related game design ideas · 2021-08-05T16:49:11.725Z · LW · GW

The D&D website estimates 13.7 million active players, and rising.

Comment by eg on LCDT, A Myopic Decision Theory · 2021-08-05T15:53:35.215Z · LW · GW

Probabilistic/inductive reasoning from past/simulated data (this possibly assumes an imperfect implementation of LCDT):

"This is really weird because obviously I could never influence an agent, but when past/simulated agents that look a lot like me did X, humans did Y in 90% of cases, so I guess the EV of doing X is 0.9 * utility(Y)."

Cf. smart humans in Newcomb's problem: "This is really weird, but if I one-box I get the million and if I two-box I don't, so I guess I'll just one-box."
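
A minimal sketch of that inference, with made-up numbers (the data, the `utility` function, and the 90% rate are hypothetical placeholders, not anything from the post):

```python
# Hypothetical sketch: estimate P(Y | agents-like-me did X) from past/simulated
# data by simple frequency counting, without ever modeling the human as an agent.
past_data = [("X", "Y")] * 90 + [("X", "not_Y")] * 10  # made-up observations

def utility(outcome: str) -> float:
    """Stand-in utility: 1 if the desired outcome Y occurs, else 0."""
    return 1.0 if outcome == "Y" else 0.0

x_outcomes = [outcome for action, outcome in past_data if action == "X"]
p_y_given_x = sum(1 for o in x_outcomes if o == "Y") / len(x_outcomes)  # 0.9

# With utility(not_Y) = 0, this reduces to 0.9 * utility(Y), as in the quote.
ev_of_x = p_y_given_x * utility("Y") + (1 - p_y_given_x) * utility("not_Y")
print(ev_of_x)  # 0.9
```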

Comment by eg on LCDT, A Myopic Decision Theory · 2021-08-05T15:45:30.717Z · LW · GW

For a start, low-level deterministic reasoning:

"Obviously I could never influence an agent, but I found some inputs to deterministic biological neural nets that would make things I want happen."

"Obviously I could never influence my future self, but if I change a few logic gates in this processor, it would make things I want happen."

Comment by eg on Training Better Rationalists? · 2021-08-05T12:19:33.867Z · LW · GW

This post inspired https://www.lesswrong.com/posts/RdCb8EGEEdWbwvqcp/why-not-more-small-intense-research-teams

Comment by eg on LCDT, A Myopic Decision Theory · 2021-08-05T12:05:45.465Z · LW · GW

Smart humans in Newcomb's problem: "Whatever, this is weird, but I'll just do what gives me a million dollars."

Comment by eg on Training Better Rationalists? · 2021-08-05T11:44:58.933Z · LW · GW

My impression is that SEALs are exceptional as a team, much less so individually. Their main individual skill is extreme team-mindedness.

Comment by eg on LCDT, A Myopic Decision Theory · 2021-08-04T12:53:25.492Z · LW · GW

Seems potentially valuable as an additional layer of capability control to buy time for further control research. I suspect LCDT won't hold once intelligence reaches some threshold: some sense of agents, even an indirect one, is such a natural thing to learn about the world.

Comment by eg on What does GPT-3 understand? Symbol grounding and Chinese rooms · 2021-08-04T12:35:35.192Z · LW · GW

Two big issues I see with the prompt:

a) It doesn't actually end with text that follows the instructions; a "good" completion (which GPT-3 fails to produce even in this case) would just be to list more instructions.

b) It doesn't make sense to try to get GPT-3 to talk about itself in the completion.  GPT-3 would, to the extent it understands the instructions, be talking about whoever it thinks wrote the prompt.
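
For illustration, a sketch of the failure and a possible fix (these prompts are hypothetical, not the ones from the post): since GPT-3 just continues text, the likeliest continuation of a bare instruction list is more instructions, so the prompt should end where the desired output begins.

```python
# Hypothetical prompts illustrating point (a): GPT-3 predicts the next token,
# so the likeliest continuation of a bare instruction list is more instructions.
bad_prompt = (
    "Instructions:\n"
    "1. Describe yourself honestly.\n"
    "2. Use one sentence.\n"
)  # natural continuation: "3. ..." rather than a compliant answer

# Ending the prompt where the desired output begins makes "continue the text"
# and "follow the instructions" coincide.
good_prompt = (
    "Instructions:\n"
    "1. Describe yourself honestly.\n"
    "2. Use one sentence.\n"
    "Answer: I am"
)
```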

Comment by eg on What does GPT-3 understand? Symbol grounding and Chinese rooms · 2021-08-04T12:17:20.490Z · LW · GW

I agree and was going to make the same point: GPT-3 has zero reason to care about the instructions as presented here. There has to be some relationship between the instructions and the text that follows immediately after the end of the prompt.

Comment by eg on What does GPT-3 understand? Symbol grounding and Chinese rooms · 2021-08-04T12:12:08.458Z · LW · GW

Instruction 5 is supererogatory, while instruction 8 is not.

Comment by eg on How should my timelines influence my career choice? · 2021-08-03T11:54:43.317Z · LW · GW

Apply to orgs when you apply to PhDs. If you can work at an org, do it. Otherwise, use the PhD to upskill and periodically retry org applications.

You would gain skills while working at a safety org, and the learning would be more in tune with what the problems require.