Posts

Cornell Meetup 2021-11-23T21:28:42.605Z

Comments

Comment by Lionel Levine on «Boundaries», Part 1: a key missing concept from utility theory · 2022-08-04T22:14:10.049Z · LW · GW

So, boundaries enable cooperation by protecting the BATNA.

Would you say there is a boundary between a cell and its mitochondria?

In the limit of perfect cooperation, the BATNA becomes minus infinity and the boundary dissolves.

Comment by Lionel Levine on Cornell Meetup · 2021-12-07T14:02:41.497Z · LW · GW

Thanks all for responding! The meetup will be this Thursday. Any other Ithaca locals, DM me for details!

Comment by Lionel Levine on Dangerous optimisation includes variance minimisation · 2021-07-16T16:57:11.740Z · LW · GW

This is an obvious point, but: any goal is likely to include some variance minimization as a subgoal, if only because another entity (a rival AI, nation-state, or company) with different goals could take over the world. If an AI has the means to take over the world, then it probably takes seriously the scenario in which a rival takes over the world instead. Could it prevent that scenario without taking over itself?

Comment by Lionel Levine on We need a standard set of community advice for how to financially prepare for AGI · 2021-07-16T16:35:46.322Z · LW · GW

money is probably much less valuable after AGI than before, indeed practically worthless.

I think this overstates the case against money. Humans will always value services provided by other humans, and those services will still be scarce after AGI; indeed, they will grow in value (as measured by utility to humans) if AGI makes everything else cheap. It seems plausible that money (in some form) will still be the human-to-human medium of exchange, so it will still have value after AGI.

Comment by Lionel Levine on Decoupling deliberation from competition · 2021-05-26T18:32:25.644Z · LW · GW

If Alice and Bob are talking to each other as they deliberate

I think this is a typo; it should say "compete" instead of "deliberate".

I worry about persuasion becoming so powerful that it blocks deliberation: How can Alice know whether Bob (or his delegated AI) is deliberating in good faith or trying to manipulate her?

In this scenario, small high-trust communities can still deliberate, but mutual mistrust prevents them from communicating their insights to the rest of the world.

Comment by Lionel Levine on Agency in Conway’s Game of Life · 2021-05-16T19:51:11.886Z · LW · GW

I think this is possible, and it doesn't require AI. It only requires a certain kind of "infectious Turing machine," described below.

Following Gwern's comment, let's first consider the easier problem of writing a program on a small portion of a Turing machine's tape that draws a large smiley face on the rest of the tape. This is easy even with a *worst case* initialization of the rest of the tape, whereas our problem is not solvable in the worst case, as Richard_Kennaway pointed out.
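To make the "just overwrite everything" point concrete, here is a toy Python sketch (my own illustration, not an actual Turing machine program; the names are made up). The key feature is that the drawing routine never reads the cells it writes, so the worst-case initialization is irrelevant:

```python
import random

# Toy stand-in for the easy problem: the "program" occupies a small prefix of
# the tape, everything beyond it starts in an adversarially chosen state, and
# the program simply overwrites that region with a fixed bitmap.

SMILEY = [
    "  ####  ",
    " #    # ",
    "# #  # #",
    "#      #",
    "# #  # #",
    "#  ##  #",
    " #    # ",
    "  ####  ",
]

def draw_smiley(tape, start):
    """Overwrite the tape from `start` onward with the flattened smiley bitmap."""
    for i, symbol in enumerate("".join(SMILEY)):
        tape[start + i] = symbol  # never reads the old contents

# "Program" region of length 16, followed by 64 cells of arbitrary garbage.
tape = ["P"] * 16 + [random.choice("01x#?") for _ in range(64)]
draw_smiley(tape, start=16)
print("".join(tape[16:]))
```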

What makes our problem harder is the errors caused by the random environment. We could model these errors as spontaneous random changes in the letters written on the Turing machine's tape. But it's easy to make a computation robust to such errors: a naive way is to repeat each square 100 times and assign a repair bot to repeatedly scan the 100 squares, compute the majority, and convert the minority squares back to the majority letter. This is not so different from how your laptop repeatedly scans its memory to correct errors.
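Here is a minimal Python sketch of that repetition-plus-majority repair scheme (my own illustration; `encode`, `corrupt`, and `repair` are just names I chose):

```python
import random
from collections import Counter

# Each logical tape square is stored as 100 physical copies. The environment
# randomly corrupts some copies, and a "repair bot" pass rewrites every copy
# in a block back to that block's majority symbol.

COPIES = 100

def encode(logical_tape):
    """Repeat each logical symbol COPIES times."""
    return [symbol for symbol in logical_tape for _ in range(COPIES)]

def corrupt(physical_tape, error_rate, alphabet="01"):
    """Environmental noise: overwrite each cell with a random symbol, with probability error_rate."""
    for i in range(len(physical_tape)):
        if random.random() < error_rate:
            physical_tape[i] = random.choice(alphabet)

def repair(physical_tape):
    """Repair bot: rewrite each block of COPIES cells to its majority symbol."""
    for start in range(0, len(physical_tape), COPIES):
        block = physical_tape[start:start + COPIES]
        majority, _ = Counter(block).most_common(1)[0]
        physical_tape[start:start + COPIES] = [majority] * COPIES

logical = list("0110")
tape = encode(logical)
corrupt(tape, error_rate=0.05)
repair(tape)
decoded = [tape[i] for i in range(0, len(tape), COPIES)]
assert decoded == logical  # with 5% noise, every block almost surely recovers
```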

Now we get to what I think is the hard part of the problem: How to organize a random environment into squares of Turing machine tape? I don’t know how to do it in Conway's Life, but I would guess it's possible. An interesting question is whether *any* cellular automaton that supports a Turing machine can also support an infectious Turing machine. Again I would guess yes.

Does an infectious Turing machine require AI? There is certainly something lifelike about a device that gradually converts its environment into squares of its own tape. The tape squares reproduce, but they hardly need to be intelligent.

But okay, we’re setting the bar low by competing against a *random* environment. As Oscar_Cunningham points out, a random environment is unlikely to produce anything smart. If we set the bar higher by asking the infection to compete against an environment containing life or intelligence, then it might need to be significantly smarter in order to take over the rest.