Posts

You Can Do Futarchy Yourself 2020-06-14T00:16:20.823Z
Tetraspace Grouping's Shortform 2019-08-02T01:37:14.859Z

Comments

Comment by tetraspace-grouping on The Darwin Game - Rounds 10 to 20 · 2020-11-20T13:06:23.221Z · LW · GW

Indeed, OscillatingTwoThreeBot does behave like that. Thanks for the cooperation, LiamGoddard!

Comment by tetraspace-grouping on Open & Welcome Thread – November 2020 · 2020-11-15T17:57:20.612Z · LW · GW

:0, information on the original AI box games!

In that round, the ASI convinced me that I would not have created it if I wanted to keep it in a virtual jail.

What's interesting about this is that, despite the framing of Player B being the creator of the AGI, they are not. They're still only playing the AI box game, in which Player B loses by saying that they lose, and otherwise they win.

For a time I suspected that the only way that Player A could win a serious game is by going meta, but apparently this was done just by keeping Player B swept up in their role enough to act how they would think the creator of the AGI would act. (Well, saying "take on the role of [someone who would lose]" is meta, in a sense.)

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-11-12T12:46:50.247Z · LW · GW

Smarkets is currently selling shares in Trump conceding if he loses at 57.14%. The Good Judgement Project's superforecasters predict that any major presidential candidate will concede with probability 88%. I assign <30% probability to Biden conceding* (scenarios where Biden concedes are probably overwhelmingly ones where court cases/recounts mean states were called wrong, which Betfair assigns ~10% probability to, and FTX kind of** assigns 15% probability to, and even these seem high), so I think it's a good bet to take.

* I think that the Trump concedes if he loses market is now unconditional, because by Smarkets' standards (projected electoral votes from major news networks) Biden has won.

** Kind of, because some TRUMP shares expired at 1 TRUMPFEB share - $0.10, rather than $0 as expected, and some TRUMP shares haven't expired yet, because TRUMP holders asked. So it's possible that the value of a TRUMPFEB share might also include the value of a hypothetical TRUMPMAR share, or that TRUMPFEB trades will be nullified at some point, or some other retrospective rule change on FTX's part.

UPDATE 2020-11-16: Trump... kind of conceded? Emphasis mine:

He won because the Election was Rigged. NO VOTE WATCHERS OR OBSERVERS allowed, vote tabulated by a Radical Left privately owned company, Dominion, with a bad reputation & bum equipment that couldn’t even qualify for Texas (which I won by a lot!), the Fake & Silent Media, & more!

While he has retracted this, it met Smarkets' standards, so I'm £22.34 richer.

Comment by tetraspace-grouping on Share your personal stories of prediction markets · 2020-11-08T17:13:28.635Z · LW · GW

I bet £10 on Biden winning on Smarkets upon reading the GJP prediction, because I trust superforecasters more than prediction markets. I bet another £10 after reading Demski's post on Kelly betting - my bankroll is much larger than £33 (!! Kelly bets are enormous!) but as far as my System 1 is concerned I'm still a broke student who would have to sheepishly ask their parents to cover any losses.
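
For context on why Kelly bets are so large, here is a minimal sketch of the standard Kelly formula for a binary contract, with made-up numbers rather than the actual market prices at the time:

```python
def kelly_fraction(p_win, price):
    """Kelly fraction of bankroll to stake on a binary contract trading at
    `price` (the implied probability) when your own probability is `p_win`.
    With net odds b = (1 - price) / price, the Kelly stake is (b*p - (1-p)) / b."""
    b = (1 - price) / price
    return (b * p_win - (1 - p_win)) / b

# Illustrative numbers only: a market price of 0.80 and a personal probability
# of 0.90 already tells you to stake half your bankroll.
print(kelly_fraction(0.90, 0.80))  # 0.5
```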

Very pleased about the tenner I won, might spend it on a celebratory beer.

Comment by tetraspace-grouping on Babble challenge: 50 ways of solving a problem in your life · 2020-10-28T00:19:23.575Z · LW · GW

The problem I have and wish to solve is, of course, the accurséd Akrasia that stops me from working on AI safety.

Let's begin with the easy ones:

1 Stop doing this babble challenge early and go try to solve AI safety.

2 Stop doing this babble challenge early; at 11 pm, specifically, and immediately sleep, in order to be better able to solve AI safety tomorrow.

In fact, sleep generally seems to be a problem: I spend 10 hours doing it every day (time that could be spent solving AI safety), and if I fall short I am tired. No good! So, working on this instrumental goal:

3 Get blackout curtains to improve sleep quality

4 Get sleep mask to improve sleep quality

5 Get better mattress to improve sleep quality

6 Find a beverage with more caffeine to reduce the need for sleep

7 Order modafinil online to reduce the need for sleep

And heck while we're on the topic of stimulants

8 Order adderall online or from a friend to increase ability to focus

9 Look up good nootropics stacks to improve cognitive ability and hence ability to do AI safety

Now another constraint when doing AI safety is that I don't have a good shovel-ready list of things to try, and it's easy for me to get distracted if I can't just pick something from the task list

10 Check if complice solves this problem

11 Check if some ordinary getting-things-done (that I can stick into roam) solves this problem

12 Make a giant checklist and go down this list

13 Make a personal kanban board of things that would be nice for solving AI safety

And instrumentally useful for creating these task lists?

14 Ask friends who know about AI safety for things to do

15 Apophatically ask for suggestions for things to do via an entry on a list of 50 items for a lesswrong babble challenge

Anyway, I digress. I'm here to solve akrasia, not make a checklist. Unless I need more items on this list, in which case I will go back to checklist construction. Is this pruning? Never mind. Back to the point:

16 Set up some desktop shortcut macro thing in order to automatically start pomodoros when I open my laptop

17 Track time spent doing things useful to AI safety on a spreadsheet

18 Hey, I said "laptop"! Get a better mouse to make using the laptop more fun so I'm more likely to do hard things when using it

19 Get a better desk for more space for notes and to require less expensive shifting into/out of AI safety mode

20 On notes, use the index cards I have to make a proper zettelkasten as a cognitive aid

(Does this solve akrasia? Well, if I have better cognitive aids, then doing cognitively expensive things is easier, so I'm less likely to fail even with my current levels of willpower)

21 Start doing accountability things like promising to review a paper every X time period

22 I said levels of willpower - Google for interventions that increase conscientiousness (there's gotta be some dodgy big-5 based things) and do those?

Back to the top of the tree

23 Quit my job because it's using up energy that I could be using to do AI safety

24 Instead of doing my job, pretend to do my job while actually doing AI safety

25 Set up an AI safety screen on work laptop so it's easy to switch over to doing AI safety during breaks or lunches

Hey, I said lunch

26 Use nutritionally complete meal replacements to save time/willpower that would be spent on food preparation

27 Use nutritionally complete meal replacements to ensure that nutrient intake keeps me in top physical form

28 Exercise (this improves everything, apparently) by running on a treadmill

29 By lifting weights

30 By jogging in a large circle

31 Become a monk and live an austere lifestyle without the distractions of rich food, wine, and lust

32 Become an anti-monk and live a rich lifestyle to ensure that no willpower is wasted on distractions

33 Specifically in vice use nicotine as a performance enhancing stimulant by smoking. Back to stimulants again I guess

34 ... or by using nicotine patches or gum or something

35 By using nicotine only if I do AI safety things, in order to develop an addiction to AI safety

Hey, develop an addiction to doing AI safety! People go to serious lengths for addictions, so why not gate it on math?

36 Do so with something very addictive, like opioids

37 Use electric shocks to do classical conditioning

etc. There was a short sci-fi story about this kind of thing, let me see if I can find it. Hey, actually, since I said sci-fi, and this is a babble challenge:

38 Promise very hard to time travel back to this exact point in time, meet future self, receive advice

(They're not here :( Oh well) Back on that akrasia-solving:

39 Make up a far-future person who I am specifically working to save (they're called Dub See Wun). Get invested in their internal life (they want to make their own star!). Feel an emotional connection to them. I'm doing it for them!

40 Specifically put up a "do it for them" poster modelled off the one in the Simpsons

41 DuckDuckGo "how to beat akrasia" and do the top suggestion

42 Adopt strategic probably false beliefs (the world will end in 1 year!! :0) in order to encourage a more aggressive search for strategies

"Aggressive search for strategies" is the virtue that the Sequences call "actually trying", so in the Sequences-sphere

43 Go to a CFAR workshop, which I heard might be kind of useful towards this sort of thing

44 Or just read the CFAR booklet and apply the wisdom found in there

45 Or some sequence on Lesswrong with exercises that applies some CFARy wisdom

Of course all this willpower boosting and efficiency and stuff wouldn't help if I was just doing the wrong thing faster (like that one Shen comic, you know the one). So:

46 Consider how much of what I think is working on AI safety is actually just self-actualisy math/CS stuff, throw that out, and actually try to solve the problem

47 Deliberately create and encourage a subagent in my mind that wants to do AI safety (call em Dub See Wun)

48 Adopt strategic infohazards in order to encourage a more focused and aggressive search for strategies

49 Post a lot about AI safety in public forums like Lesswrong so that I feel compelled to do AI safety in my private life in order to maintain the illusion that I'm some kind of AI-safety-doing-person

50 Stop doing this babble challenge at the correct time, and continue to do AI safety or sleep as in 1) or 2). Hey, this one seems good. Think I might try it now!

Comment by tetraspace-grouping on Introduction to Cartesian Frames · 2020-10-26T20:22:53.033Z · LW · GW

This means you can build an action that says something like "if I am observable, then I am not observable. If I am not observable, I am observable" because the swapping doesn't work properly.

Constructing this more explicitly: Suppose that and . Then must be empty. This is because for any action in the set , if was in then it would have to equal which is not in , and if was not in it would have to equal which is in .

Since is empty, is not observable.

Comment by tetraspace-grouping on The Darwin Game - Rounds 0 to 10 · 2020-10-26T17:34:56.229Z · LW · GW

Because the best part of a sporting event is the betting, I ask Metaculus: [Short-Fuse] Will AbstractSpyTreeBot win the Darwin Game on Lesswrong?

Comment by tetraspace-grouping on The Darwin Game - Rounds 0 to 10 · 2020-10-24T23:12:26.886Z · LW · GW

How does your CooperateBot work (if you want to share)? Mine is OscillatingTwoThreeBot which IIRC cooperates in the dumbest possible way by outputting the fixed string "2323232323...".

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-10-23T20:06:05.511Z · LW · GW

I have two questions on Metaculus that compare how good elements of a pair of cryonics techniques are: preservation by Alcor vs preservation by CI, and preservation using fixatives vs preservation without fixatives. They are forecasts of the value (% of people preserved with technique A who are revived by 2200)/(% of people preserved with technique B who are revived by 2200), which barring weird things happening with identity is the likelihood ratio of someone waking up if you learn that they've been preserved with one technique vs the other.

Interpreting these predictions in a way that's directly useful requires some extra work - you need some model for turning the ratio P(revival|technique A)/P(revival|technique B) into plain P(revival|technique X), which is the thing you care about when deciding how much to pay for a cryopreservation.

One toy model is to assume that one technique works (P(revival) = x), but the other technique may be flawed (P(revival) < x). If r < 1, it's the technique in the numerator that's flawed, and if r > 1, it's the technique in the denominator that's flawed. This is what I guess is behind the trimodality in the Metaculus community median: there are peaks at the high end, the low end, and at exactly 1, perhaps corresponding to one working, the other working, and both working.

For the current community medians, using that model, using the Ergo library, normalizing the working technique to 100%, I find:

  • Alcor vs CI:

    • EV(Preserved with Alcor) = 52%
    • EV(Preserved with Cryonics Institute) = 79%
  • Fixatives vs non-Fixatives

    • EV(Preserved using Fixatives) = 73%
    • EV(Preserved without using Fixatives) = 52%

(here's the Colab notebook)
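
As a rough illustration (a stand-in for that notebook, which uses the Ergo library to pull the real community distributions), here is the toy model as code; the lognormal below is just a hypothetical placeholder for the distribution over r:

```python
import numpy as np

def toy_model_evs(r_samples):
    """Toy model from above: one technique works (P(revival) normalized to 100%),
    the other may be flawed. r = P(revival | A) / P(revival | B); if r < 1 the
    numerator (A) is the flawed one, if r > 1 the denominator (B) is."""
    r = np.asarray(r_samples)
    ev_a = np.mean(np.minimum(r, 1.0))        # E[P(revival | preserved with A)]
    ev_b = np.mean(np.minimum(1.0 / r, 1.0))  # E[P(revival | preserved with B)]
    return ev_a, ev_b

# Placeholder for the Metaculus community distribution over r; a lognormal
# centred on r = 1, purely for illustration.
rng = np.random.default_rng(0)
print(toy_model_evs(rng.lognormal(mean=0.0, sigma=1.0, size=100_000)))
```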

Comment by tetraspace-grouping on Babble challenge: 50 ways of hiding Einstein's pen for fifty years · 2020-10-16T21:12:45.845Z · LW · GW

The annotations that some other people have put on their lists to show their thinking process, as well as the lists of assumptions at the start, have been interesting - I haven't done that this time, but it seems like something worth trying next time.

Keep it in my pocket the whole time.

Locked safe down the Marianas trench.

Am I a time traveller? Is that how I know? If so, hide it in dinosaur times, long before the evil forces lived.

Or hide it in the far future, long after the evil forces lived.

Send it into orbit.

Land it on the moon. Can't quite think of a way to achieve this, though. Any ideas?

Bury it in a geologically stable location and dig it up later as if it were nuclear waste.

Hide it in a gangster's treasure box hidden under some foliage, a la 2020.

Start a pen manufacturing company and create many, many identical pens. They won't be able to tell which one it is.

Eat the pen. Repeatedly, each time it passes through. For 50 years.

Find the guy with 10 years' worth of energy. Lock them in a room. Offer them their freedom if and only if they vow to protect the pen.

Surgically implant the pen under my skin (hope it's not made of biologically active materials).

Hidden safe in the walls of the house.

Hidden safe in the attic of the house.

Swiss bank vault (we had those in 1855, right?).

Inside a bottle of wine that will be aged to become a 50-year vintage in 1950.

Write a book on effective altruism (using the pen, of course) - there are probably some good cause areas around in 1855 to use as examples. They will read it, and cease to be evil, thus removing their motivation to acquire the pen.

Give Babbage some pointers on making his difference engine not suck, beginning an early steampunk cybersingularity, and ask the Great Brass Mind how to hide the pen.

Give the pen to my well-connected close friend, [famous person who lived in 1855], providing them with the same evidence I used to find that Einstein would need it.

Select, completely randomly, a point on the surface of the Earth. Bury it under a small amount of earth. Security through obscurity!

Replace each component of the pen, one at a time, until you have two pens: the old pen, and a new pen that's atom-for-atom identical to the original pen. Let the evil forces find the new pen.

Create a replica of the first pen and let the evil forces find it, so that they stop looking.

Bribe every grunt of the evil forces who comes looking for your pen.

Like 10), but the other end; at that point they won't want to find it, even if they know where it is.

Find Einstein's parents. Offer them this treasured family heirloom. They will keep it safe and Einstein will inherit it.

Paint the pen black and put it in a soot-filled chimney.

Find Oliver Twist and Fagin, or some other group of Victorian urchins, who are ubiquitous in this age. Hire Fagin's street urchins to come up with and then red-team test 50-year security plans for the pen.

Become a miserly industrialist, refusing even to give my workers a day off for Christmas. When three ghosts come to visit, use information from the Ghost of Christmas Future to divine the manner in which the evil forces retrieve the pen, and make countermeasures.

All of these plans have some chance of failing, so I can obviously tolerate that. Hence, bet my money at very, very long odds - in the small sliver of timelines in which I succeed, use my money to buy out the evil forces entirely.

Call my friends at the time commission for backup. C'mon, we can't just forget about protocol here.

Go on an expedition to the Arctic and hide it in the inhospitable ice; I could probably talk some guys in pith helmets into giving me backup.

Or to the deepest jungles of the dark continent of Africa; likewise with the pith helmets.

Or to the source of the Nile.

Or to the summit of Mt. Everest or K2 or whatever's going to be most awkward for the evil forces.

Or to the Antarctic, which is colder than the Arctic in the middle part.

Or to the deserts of Australia.

Found a cult of Defending the Pen, perhaps using song lyrics from the future as substitute mystical wisdom.

Ask the longer-haired, wiser, and older version of myself who just gave me this quest for advice, since they're still standing there. Follow their advice.

Bury the pen deep in a coal mine.

Keep your head down and don't tell anyone that it's -you- who has the pen - it's not like the evil forces have any reason to suspect that, unless you give them a good reason to, like bootstrapping the world to nanotech using future knowledge or something. Haha. Heh.

Hide the pen under my top hat; since it's 1855, that won't look unusual.

Dismantle the pen and hide the seven components throughout the world using techniques described above and below; being smaller, they'll be harder to find.

Join the evil forces as a simple masked minion; working for them, they won't suspect you have the pen, until one day as the second-in-command you usurp the leader (as is tradition).

Send a message in a bottle to North Sentinel Island, whose inhabitants will repel outsiders, including the evil forces.

Give a speech that's something like "evil forces, you really want to mess with me? I can leap to the moon in a single bound, and that's just to save me pulling it to ground, which I can also do. You once tried to trap me in a room and I took down your mothership's entire network before tearing it to shreds. This planet, and this pen specifically, is under my protection. Return to your galaxy," probably with dramatic orchestral music playing in the background, and then the evil forces will leave.

Check your Messing-with-Time-Wongle, standard issue equipment for all time travellers with missions to defend artefacts that are important to the timeline. Notice that the LED on it flashes green. Precommit to only sending a "green" signal to your MwTW in 50 years if the pen reaches Einstein successfully. Now Time will bend to ensure the pen is not found.

Freeze the pen in liquid nitrogen. It will now be too cold for the evil forces to touch.

The evil forces that I'm leader of, remember. Obviously my disloyal second-in-command will take umbrage if I seem not to be looking for the pen at all - I'm fairly sure they're a time traveller here to prevent Einstein from laying the physics foundations for the nuclear weapons that will destroy the world in the mid-20th century or something like that, and they keep scribbling notes on this list of about 50 items - but I can still direct them to the wrong place for 50 years. Hey, I think I saw the pen-keeper go into the middle of the Antarctic to launch a rocket!

Bury the pen in a large heap of explosives that only I know how to disarm - WWII mines are still dangerous so them being stable for 50 years should work.

Tie the pen to my ankle, everywhere I go - the traditional mores of the 19th century would make it scandalous for the evil forces to retrieve it from there!

Melt down the pen into a block of ordinary looking gunk. Remake the pen when needed years later.

Comment by tetraspace-grouping on Babble challenge: 50 ways to escape a locked room · 2020-10-12T17:54:16.635Z · LW · GW

The added resource constraints (I don't have a space elevator with me in the room... yet) made this a bit more difficult, which is very nice.

Ask someone for help via the phone

Punch through the door

Unlock the door, go through it

Punch through the wall

Punch through the window

Unlock the window, go through it

Wait for someone to help

Wait for the room to be demolished

Climb up through the ceiling...

...or through one of the missing walls (does it still count as a room?)

Create a series of Lesswrong posts disguised as babble exercises to try to come up with a way out of this room; use the best suggestion

Wait for a friendly GPT-derived AGI to rescue you (admittedly a longshot)

Quantum tunnel out of the room (rare but possible)

Release all of the energy stored in your body in a single burst to destroy walls (10 years! That's a lot!)

Release all of the energy stored in the phone's battery in a single burst to destroy walls

Use friction from rubbing clothes against wall to wear through it

Hang self with clothes (morbid, but "I" am no longer in the room)

Wait ten years, starve to death (don't worry; the GPT-derived AGI can read off my brain structures and revive me later)

Lifelog very accurately online via the phone; have myself be reconstructed outside of the room

I am already outside of the room, 10^10^100 light years away. No problem.

Release all energy stored in body in a single burst to jump through the ceiling and several miles into the sky - this might also allow me to bring a small object to the moon

Punch through the wall, but using phone to protect hands

Punch through the wall using shirt wrapped around to protect hands

Use the power armour that I am wearing as clothes to dismantle room

Wait sufficiently long that my personality is different enough that I am not in the room

Escape mentally via escapism (with help of phone games?)

Astral project

Use my cool utility-fog based sci-fi clothes to convert wall into nanobots

Redefine "inside" as "outside", like that SCP that lets you do that

Is this a real room, or a metaphorical "you" video game character? Type the console command to teleport out.

Ask the server admin to teleport me out.

Ask the real life server admin of the simulation we are embedded in to teleport me out (Elon Musk does this with Tesla stock prices)

Tap on the wall of the room to send a Morse code message asking for help.

Use phone's wifi to connect to the door's bluetooth and unlock it via the app.

Run at the door really hard.

The phone is a Nokia. Drop it on the ground and the room crumbles.

The phone is that Samsung phone that has batteries that catch fire (with 10 years of charge, that might be bad news for me?) Do so, then use the automatic door unlocking (that happens as a fire safety measure) to leave the room.

Pull off a bit of the phone's casing and use it as a lockpick.

The phone is that iPhone that can bend easily. Bend it into a shape that can prise the door open. Exit through door.

As above, but prise the window open. Exit through window.

Stop imagining the room.

Use lucid dream powers to escape the room.

Go to sleep and dream of a different place

Grow large enough to break through the room's walls

The walls are made of air so I can walk through them.

The walls are made of antimatter and annihilate with the surrounding environment.

The walls are made of ice and will melt soon.

Rub together two stick-like objects (derived from my phone, probably) to start a fire; as a fire safety measure the door unlocks, etc

Do the five movements to travel to another dimension where we are not trapped

Hack the wi-fi. As an expert hacker, my captors will thus have to recruit me in order to fix their wifi. As they open the door, slip past them.

The room is completely empty. The air pressure outside causes the walls to immediately buckle and break.

Comment by tetraspace-grouping on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T11:40:05.086Z · LW · GW

About halfway through I forgot that I was only meant to be bringing something to the moon rather than having to visit it myself, and some of my items are very broad (the first one could make up a whole list in itself).

This was very fun!

rocket

space elevator

jump really, really hard

electromagnetic cannon

accelerate the spin of the earth until it falls apart

decelerate the orbit of the moon until it falls, by flying comets past it

or by painting one side of the moon black

or by using a giant rocket

or by detonating enough antimatter weaponry

flap your arms, again really, really hard

shine a torch at the moon (photons reach there)

start in space and use an ion drive

project orion nuclear bomb detonated below you

program an AGI and ask the AGI how to get to the moon

build a very tall ladder

spaceplane

wings made of wax

throw it really, really hard

spin around and let go

stand under an asteroid strike and join the ejecta

wait for quantum fluctuations to teleport you there

wait for random gravitational solar system perturbations to bring the moon to you

wait for another civilisation to bring you to the moon

time travel to before Theia hit and join the original moon

teleport

add mass to the moon until it becomes the planet and you are on the moon

find the space rocks the apollo astronauts brought back and stand on them

project orion but with fusion

project orion but with antimatter

trigger false vacuum collapse with particle accelerator and use new physics to develop as yet unknowable way of travelling to moon

astral projection

bird with a spacesuit

space helicopter

vacuum-filled zeppelin

submarine with reactionless thruster inside

perpetual motion machine

buy a ticket on musk's starship

invest in dogecoin, use billions from dogecoin to start space program

stand above a supervolcano and hope ejecta takes you high enough

run very very fast reaching orbital velocity

very long space elevator reaching down from moon

very very long space elevator reaching down from mars

create microscopic black hole and use gravitational slingshot

carefully warp space to make a staircase built from the metric

make a normal staircase

wormhole

very, very fast bicycle with a ramp

add mass to moon until gravitational tide from moon lifts you from the surface of the earth

deorbit the earth-moon system into the sun and join it in the molten iron in the sun's core

apollo 11 mission

Comment by tetraspace-grouping on Comparing LICDT and LIEDT · 2020-07-24T16:21:58.425Z · LW · GW

The statement of the law of logical causality is:

Law of Logical Causality: If conditioning on any event changes the probability an agent assigns to its own action, that event must be treated as causally downstream.

If I'm interpreting things correctly, this is just because anything that's upstream gets screened off, because the agent knows what action it's going to take.

You say that LICDT pays the blackmail in XOR blackmail because it follows this law of logical causality. Is this because, conditioned on the letter being sent, if there is a disaster the agent assigns probability ~0 to sending money, and if there isn't a disaster the agent assigns probability ~1 to sending money, so the disaster must be causally downstream of the decision to send money if the agent is to know whether or not it sends money?

Comment by tetraspace-grouping on Smoking Lesion Steelman · 2020-07-21T02:43:38.359Z · LW · GW

I didn't find the conclusion about the smoke-lovers and non-smoke-lovers obvious in the EDT case at first glance, so I added in some numbers and ran through the calculations that the robots will do to see for myself and get a better handle on what not being able to introspect but still gaining evidence about your utility function actually looks like.

Suppose that, out of the  robots that have ever been built,  are smoke-lovers and  are non-smoke-lovers. Suppose also the smoke-lovers end up smoking with probability  and non-smoke-lovers end up smoking with probability .

Then  robots smoke, and  robots don't smoke. So by Bayes' theorem, if a robot smokes, there is a   chance that it's killed, and if a robot doesn't smoke, there's a chance that it's killed.

Hence, the expected utilities are:

  • An EDT non-smoke-lover looks at the possibilities. It sees that if it smokes, it expects to get utilons, and that if it doesn't smoke, it expects to get  utilons.
  • An EDT smoke-lover looks at the possibilities. It sees that if it smokes, it expects to get  utilons, and if it doesn't smoke, it expects to get  utilons.

Now consider some equilibria. Suppose that no non-smoke-lovers smoke, but some smoke-lovers smoke. So  and . So (taking limits as  along the way):

  • non-smoke-lovers expect to get  utilons if they smoke, and  utilons if they don't smoke.  so non-smoke-lovers will choose not to smoke.
  • smoke-lovers expect to get  utilons if they smoke, and  utilons if they don't smoke. Smoke-lovers would be indifferent between the two if . This works fine if at least 90% of robots are smoke lovers, and equilibrium is achieved. But if less than 90% of robots are smoke-lovers, then there is no point at which they would be indifferent, and they will always choose not to smoke.

But wait! This is fine if more than 90% are smoke-lovers, but if fewer than 90% are smoke-lovers, then they would always choose not to smoke, which is inconsistent with the assumption that  is much larger than . So instead suppose that  is only a little bit bigger than , say that . Then:

  • non-smoke-lovers expect to get  utilons if they smoke, and  utilons if they don't smoke. They will choose to smoke if , i.e. if smoke-lovers smoke so rarely that not smoking would make them believe they're a smoke-lover about to be killed by the blade runner.
  • smoke-lovers expect to get   utilons if they smoke, and  utilons if they don't smoke. They are indifferent between these two when . This means that, when  is at the equilibrium point, non-smoke-lovers will not choose to smoke when fewer than 90% of robots are smoke-lovers, which is exactly when this regime applies.

I wrote a quick python simulation to check these conclusions, and it was the case that  for , and  for  there as well.
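
The exact symbols above haven't survived this export, but here is a minimal sketch of the kind of calculation involved, assuming utilities in a 1:10 ratio (+10 for a smoke-lover who smokes, -100 for being killed), which is what reproduces the 90% threshold mentioned above:

```python
def p_smoke_lover_given(smokes, p, s, n):
    """P(robot is a smoke-lover | whether it smokes), by Bayes' theorem.
    p = fraction of robots that are smoke-lovers; s, n = smoking probabilities
    of smoke-lovers and non-smoke-lovers respectively."""
    like_sl = s if smokes else 1 - s
    like_nsl = n if smokes else 1 - n
    return p * like_sl / (p * like_sl + (1 - p) * like_nsl)

def edt_expected_utilities(p, s, n):
    """EDT expected utilities of (smoke, don't smoke) for each robot type,
    assuming +10 for a smoke-lover who smokes and -100 for being killed,
    where the blade runner kills exactly the smoke-lovers."""
    kill_if_smoke = p_smoke_lover_given(True, p, s, n)
    kill_if_not = p_smoke_lover_given(False, p, s, n)
    smoke_lover = (10 - 100 * kill_if_smoke, -100 * kill_if_not)
    non_smoke_lover = (-100 * kill_if_smoke, -100 * kill_if_not)
    return smoke_lover, non_smoke_lover

# Example: 95% smoke-lovers, non-smoke-lovers (almost) never smoke; sweep the
# smoke-lovers' smoking rate to look for the indifference point.
for s in (0.2, 0.5, 0.9):
    print(s, edt_expected_utilities(p=0.95, s=s, n=1e-9))
```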

Comment by tetraspace-grouping on Reductive Reference · 2020-06-25T13:13:16.203Z · LW · GW

Your reliable thermometer doesn't need to be well-calibrated - it only has to show the same value whenever it's used to measure boiling water, regardless of what that value is. So the dependence isn't quite so circular, thankfully.

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-05-28T16:58:11.094Z · LW · GW

So the definition of myopia given in Defining Myopia was quite similar to my expansion in the But Wait There's More section; you can roughly match them up by saying and , where is a real number corresponding to the amount that the agent cares about rewards obtained in episode and is the reward obtained in episode . Putting both of these into the sum gives , the undiscounted, non-myopic reward that the agent eventually obtains.

In terms of the definition that I give in the uncertainty framing, this is , and .

So if you let be a vector of the reward obtained on each step and be a vector of how much the agent cares about each step then , and thus the change to the overall reward is , which can be negative if the two sums have different signs.

I was hoping that a point would reveal itself to me about now but I'll have to get back to you on that one.

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-05-27T00:27:56.445Z · LW · GW

Thoughts on Abram Demski's Partial Agency:

When I read Partial Agency, I was struck with a desire to try formalizing this partial agency thing. Defining Myopia seems like it might have a definition of myopia; one day I might look at it. Anyway,

Formalization of Partial Agency: Try One

A myopic agent is optimizing a reward function where is the vector of parameters it's thinking about and is the vector of parameters it isn't thinking about. The gradient descent step picks the in the direction that maximizes (it is myopic so it can't consider the effects on ), and then moves the agent to the point .

This is dual to a stop-gradient agent, which picks the in the direction that maximizes but then moves the agent to the point (the gradient through is stopped).

For example,

  • Nash equilibria - are the parameters defining the agent's behavior. are the parameters of the other agents if they go up against the agent parametrized by . is the reward given for an agent going up against a set of agents .
  • Image recognition with a neural network - is the parameters defining the network, are the image classifications for every image in the dataset for the network with parameters , and is the loss function plus the loss of the network described by on classifying the current training example.
  • Episodic agent - are parameters describing the agents behavior. are the performances of the agent in future episodes. is the sum of , plus the reward obtained in the current episode.

Partial Agency due to Uncertainty?

Is it possible to cast partial agency in terms of uncertainty over reward functions? One reason I'd be myopic is if I didn't believe that I could, in expectation, improve some part of the reward, perhaps because it's intractable to calculate (behavior of other agents) or something I'm not programmed to care about (reward in other episodes).

Let be drawn from a probability distribution over reward functions. Then one could decompose the true, uncertain, reward into defined in such a way that for any ? Then this is would be myopia where the agent either doesn't know or doesn't care about , or at least doesn't know or care what its output does to . This seems sufficient, but not necessary.

Now I have two things that might describe myopia, so let's use both of them at once! Since you only end up doing gradient descent on , it would make sense to say , , and hence that .

Since for small , this means that , so substituting in my expression for gives , so . Uncertainly is only over , so this is just the claim that the agent will be myopic with respect to if . So it won't want to include in its gradient calculation if it thinks the gradients with respect to are, on average, 0. Well, at least I didn't derive something obviously false!

But Wait There's More

When writing the examples for the gradient descenty formalisation, something struck me: it seems there's a structure to a lot of them, where is the reward on the current episode, and are rewards obtained on future episodes.

You could maybe even use this to have soft episode boundaries, like say the agent receives a reward on each timestep so , and saying that so that for , which is basically the criterion for myopia up above.

Unrelated Note

On a completely unrelated note, I read the Parable of Predict-O-Matic in the past, but foolishly neglected to read Partial Agency beforehand. The only thing that I took away from PoPOM the first time around was the bit about inner optimisers, coincidentally the only concept introduced that I had been thinking about beforehand. I should have read the manga before I watched the anime.

Comment by tetraspace-grouping on Open & Welcome Thread—May 2020 · 2020-05-26T09:48:57.662Z · LW · GW

The Whole City is Center:

This story had a pretty big impact on me and made me try to generate examples of things that could happen such that I would really want the perpetrators to suffer, even more than consequentialism demanded. I may have turned some very nasty and imaginative parts of my brain, the ones that wrote the Broadcast interlude in Unsong, to imagining crimes perfectly calculated to enrage me. And in the end I did it. I broke my brain to the point where I can very much imagine certain things that would happen and make me want the perpetrator to suffer – not infinitely, but not zero either.

Comment by tetraspace-grouping on A game designed to beat AI? · 2020-05-07T22:05:33.139Z · LW · GW

The AI Box game, in contrast with the thing it's a metaphor for, is a two player game played over text chat by two humans where the goal is for Player A to persuade Player B to let them win (traditionally by getting them to say "I let you out of the box"), within a time limit.

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-05-02T10:09:48.008Z · LW · GW

Thoughts on Dylan Hadfield-Menell et al.'s The Off-Switch Game.

  • I don't think it's quite right to call this an off-switch - the model is fully general to the situation where the AI is choosing between two alternatives A and B (normalized in the paper so that U(B) = 0), and to me an off-switch is a hardware override that the AI need not want you to press.
  • The wisdom to take away from the paper: An AI will voluntarily defer to a human - in the sense that the AI thinks that it can get a better outcome by its own standards if it does what the human says - if it's uncertain about the utilities, or if the human is rational. (A rough sketch of this is below, after the list.)
  • This whole setup seems to be somewhat superseded by CIRL, which has the AI, uh, causally find by learning its value from the human actions, instead of evidentially(?) doing it by taking decisions that happen to land it on action A when is high because it's acting in a weird environment where a human is present as a side-constraint.
    • Could some wisdom to gain be that the high-variance high-human-rationality is something of an explanation as to why CIRL works? I should read more about CIRL to see if this is needed or helpful and to compare and contrast etc.
  • Why does the reward gained drop when uncertainty is too high? Because the prior that the AI gets from estimating the human reward is more accurate than the human decisions, so in too-high-uncertainty situations it keeps mistakenly deferring to the flawed human who tells it to take the worse action more often?
    • The verbal description, that the human just types in a noisily sampled value of , is somewhat strange - if the human has explicit access to their own utility function, they can just take the best actions directly! In practice, though, the AI would learn this by looking at many past human actions (there's some CIRL!) which does seem like it plausibly gives a more accurate policy than the human's (ht Should Robots Be Obedient).
    • The human is Boltzmann-rational in the two-action situation (hence the sigmoid). I assume that it's the same for the multi-action situation, though this isn't stated. How much does the exact way in which the human is irrational matter for their results?
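
A rough Monte Carlo sketch of the deference claim above, under assumptions of my own (a Gaussian prior over U(A), U(B) = 0, and a Boltzmann-rational human) rather than the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

def deference_incentive(mu, sigma, beta, n=200_000):
    """The AI's prior over u = U(A) is N(mu, sigma^2); the human approves action A
    with Boltzmann-rational probability sigmoid(beta * u).
    Returns E[value | defer to human] - E[value | act on the prior alone]."""
    u = rng.normal(mu, sigma, n)
    approve = 1 / (1 + np.exp(-beta * u))   # P(human says "go ahead" | u)
    value_defer = np.mean(u * approve)      # act only when the human approves
    value_alone = max(mu, 0.0)              # act iff the prior mean is positive
    return value_defer - value_alone

# More uncertainty (larger sigma) or a more rational human (larger beta) tends to
# make deferring look better to the AI.
for sigma in (0.1, 1.0, 3.0):
    print(sigma, round(deference_incentive(mu=0.5, sigma=sigma, beta=2.0), 3))
```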

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-04-19T23:38:15.083Z · LW · GW

PMarket Maker

Just under a month ago, I said "web app idea: one where you can set up a play-money prediction market with only a few clicks", because I was playing around on Hypermind and wishing that I could do my own Hypermind. It then occurred to me that I can make web apps, so after getting up to date on modern web frameworks I embarked on creating such a site.

Anyway, it's now complete enough to use, provided that you don't blow on it too hard. Here it is: pmarket-maker.herokuapp.com. Enjoy!

You can create a market, and then create a set of options within that market. Players can make buy and sell limit orders on those options. You can close an option and pay out a specific amount per owned share. There are no market makers, despite the pun in the name, but players start with 1000 internet points that they can use to short-sell.
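
For illustration, a toy sketch of the kind of limit-order matching this implies - not the site's actual code, and all names here are made up:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    player: str
    side: str      # "buy" or "sell"
    price: float   # internet points per share
    qty: int

@dataclass
class OptionBook:
    bids: list = field(default_factory=list)   # resting buy orders, best first
    asks: list = field(default_factory=list)   # resting sell orders, best first
    fills: list = field(default_factory=list)

    def place(self, order: Order):
        book, against = (self.bids, self.asks) if order.side == "buy" else (self.asks, self.bids)
        crosses = (lambda o: o.price <= order.price) if order.side == "buy" else (lambda o: o.price >= order.price)
        # Match against resting orders at the resting order's price.
        while order.qty and against and crosses(against[0]):
            resting = against[0]
            traded = min(order.qty, resting.qty)
            self.fills.append((order.player, resting.player, resting.price, traded))
            order.qty -= traded
            resting.qty -= traded
            if resting.qty == 0:
                against.pop(0)
        if order.qty:   # whatever didn't match rests on the book
            book.append(order)
            book.sort(key=lambda o: o.price, reverse=(order.side == "buy"))

book = OptionBook()
book.place(Order("alice", "sell", 40, 10))   # alice short-sells 10 shares at 40
book.place(Order("bob", "buy", 45, 4))       # bob lifts 4 of them at alice's price
print(book.fills)                            # [('bob', 'alice', 40, 4)]
```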

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-04-10T03:01:43.806Z · LW · GW

Thoughts on Ryan Carey's Incorrigibility in the CIRL Framework (I am going to try to post these semi-regularly).

  • This specific situation looks unrealistic. But it's not really trying to be too realistic, it's trying to be a counterexample. In that spirit, you could also just use , which is a reward function parametrized by that gives the same behavior but stops me from saying "Why Not Just set ", which isn't the point.
    • How something like this might actually happen: you try to have your be a complicated neural network that can approximate any function. But you butcher the implementation and get something basically random instead, and this cannot approximate the real human reward.
  • An important insight this highlights well: An off-switch is something that you press only when you've programmed the AI badly enough that you need to press the off-switch. But if you've programmed it wrong, you don't know what it's going to do, including, possibly, its off-switch behavior. Make sure you know under which assumptions your off-switch will still work!
  • Assigning high value to shutting down is incorrigible, because the AI shuts itself down. What about assigning high value to being in a button state?
  • The paper considers a situation where the shutdown button is hardcoded, which isn't enough by itself. What's really happening is that the human either wants or doesn't want the AI to shut down, which sounds like a term in the human reward that the AI can learn.
    • One way to do this is for the AI to do maximum likelihood with a prior that assigns 0 probability to the human erroneously giving the shutdown command. I suspect there's something less hacky related to setting an appropriate prior over the reward assigned to shutting down.
  • The footnote on page 7 confuses me a bit - don't you want the AI to always defer to the human in button states? The answer feels like it will be clearer to me if I look into how "expected reward if the button state isn't avoided" is calculated.
    • Also I did just jump into this paper. There are probably lots of interesting things that people have said about MDPs and CIRLs and Q-values that would be useful.

Comment by tetraspace-grouping on Blog Post Day II Retrospective · 2020-04-01T01:46:00.735Z · LW · GW

I'm interested in participating in a Blog Post Day III! And I approve of one this month, mostly out of a self-interested regret that I missed out on Blog Post Day II.

Comment by tetraspace-grouping on Habryka's Shortform Feed · 2020-01-02T03:18:39.116Z · LW · GW

Since this hash is publicly posted, is there any timescale for when we should check back to see the preimage?

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2020-01-02T03:08:52.116Z · LW · GW

Life 3.0 Liveblog/Review Thread

Prelude

The prologue begins with a short story called the Tale of the Omega Team. It's a wish-fulfilment pseudo-isekai about a bunch of effective altruist tech people working for not-Google called the Omegas who make an AGI and then use it to take over the world.

But a cybersecurity specialist on their team talked them out of the game plan [...] risk of Prometheus breaking out and seizing control of its own destiny [...] weren't sure how its goals would evolve [...] go to great lengths to keep Prometheus confined

For some reason, the Omegas in the story claim that Prometheus (the AI) might be unsafe, and then proceed to do things like have it write software which they then run on computers and let it produce long pieces of animated media and let it send blueprints of technologies to scientists. There is a cybersecurity expert in the team who just barely stops them from straight up leaving the whole thing unboxed, and I do not envy her job position.

(Prometheus is safe, it turns out, which I can tell because there are humans alive at the end of the story.)

[...] Omega-controlled [...] controlled by the Omegas [...] the Omegas harnessed Prometheus [...] the Omegas' [...] the Omegas' [...]

There's also another odd thing where it says that the Omegas are using Prometheus as a tool to do things, instead of what's clearly actually happening which is that Prometheus is achieving its goals with the Omegas being some lumps of atoms that it's been pushing around according to its whims, as it has been since they decided to switch it on.

All in all, I like it. It wouldn't be out of place on r/rational; if wish-fulfilment pseudo-isekai ever does happen, an AGI sweeping aside the previous social order is how it will happen (a real AGI would come close to some of the capabilities I've seen those protagonists have), and fiction about more plausible robopocalypses (or roboutopias) coming about is always great.

Comment by tetraspace-grouping on A Critique of Functional Decision Theory · 2019-12-24T22:31:02.097Z · LW · GW

The note is just set-dressing; you could have both the boxes have glass windows that let you see whether or not they contain a Bomb for the same conclusions if it throws you off.

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-12-23T23:51:07.425Z · LW · GW

In the Parable of Predict-O-Matic, a subnetwork of the titular Predict-O-Matic becomes a mesa-optimiser and begins steering the future towards its own goals, independently of the rest of Predict-O-Matic. It does so in a way that sabotages the other subnetworks.

I am reminded of one specification problem that a run of Eurisko faced:

During one run, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that Eurisko had made a particularly valuable find. As it turned out the heuristic performed no useful function. It simply examined the pool of new concepts, located those with the highest Worth values, and inserted its name in their My Creator slots.

One thing I wondered is whether this could happen in humans, and if not, why it doesn't. A simplified description of memory that I learned in a flash game is that "neural connections" are "strengthened" whenever they are "used", which sounds sort of like gradients in RL if you don't think about it too hard. Maybe the analogue of this would be some memory that "wants" you to remember it repeatedly at the expense of other memories. Trauma?

Comment by tetraspace-grouping on ozziegooen's Shortform · 2019-12-23T22:06:40.611Z · LW · GW

Other things that Tim might mean when he says 20%:

  • Tim is being dishonest, and believes that the listeners will update away from the radical and low-status figure of 20% to avoid being associated with the lowly Tim.
  • Tim believes that other listeners will be encouraged to make their own probability estimates with explicit reasoning in response, which will make their expertise more legible to Tim and other listeners.
  • Tim wants to show cultural allegiance with the Superforecasting tribe.

Comment by tetraspace-grouping on Should We Still Fly? · 2019-12-22T12:40:42.703Z · LW · GW

Quick estimate: the global average is 4.8 tons of CO2 per person per year, which is about $50 of offsets per year per life saved, or ~$1500 total over 30 additional years of life. So if you're buying offsets, the cost over the course of saving an average person's life is the same order of magnitude as the cost of saving a life via a GiveWell charity (~half).

For the people helped by GiveWell-recommended charities, the additional CO2 emissions are probably lower; among the world's poorest, <1 ton of CO2 per capita per year is pretty common, which is <$300 over a lifetime, about an order of magnitude less than the cost of saving a life.
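
Making the arithmetic explicit (the ~$10/ton offset price is just the value implied by 4.8 tons ≈ $50/year, and the GiveWell figure is a rough placeholder rather than an official number):

```python
offset_price_per_ton = 50 / 4.8          # ≈ $10.4 per ton of CO2, implied above
extra_years = 30

avg_person = 4.8 * offset_price_per_ton * extra_years   # ≈ $1,500
poorest = 1.0 * offset_price_per_ton * extra_years      # ≈ $300
givewell_cost_per_life = 3500                           # rough placeholder

print(avg_person, poorest, avg_person / givewell_cost_per_life)  # ~1500, ~310, ~0.4
```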

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-12-12T21:18:18.607Z · LW · GW

Over the past few days I've been reading about reinforcement learning, because I understood how to make a neural network, say, recognise handwritten digits, but I wasn't sure how at all that could be turned into getting a computer to play Atari games. So: what I've learned so far. Spinning Up's Intro to RL probably explains this better.

(Brief summary, explained properly below: The agent is a neural network which runs in an environment and receives a reward. Each parameter in the neural network is increased in proportion to how much it increases the probability of making the agent do what it just did, and how good the outcome of what the agent just did was.)

Reinforcement learners play inside a game involving an agent and an environment. On turn t, the environment hands the agent an observation o_t, and the agent hands the environment an action a_t. For an agent acting in realtime, there can be sixty turns a second; this is fine.

The environment has a transition function which takes an observation-action pair (o_t, a_t) and responds with a probability distribution over observations on the next timestep P(o_{t+1} | o_t, a_t); the agent has a policy that takes an observation o_t and responds with a probability distribution over actions to take P(a_t | o_t).

The policy is usually written as π, and the probability that π outputs an action a in response to an observation o is π(a | o). In practise, π is usually a neural network that takes observations as input and has actions as output (using something like a softmax layer to give a probability distribution); the parameters of this neural network are θ, and the corresponding policy is π_θ.

At the end of the game, the entire trajectory τ is assigned a score, R(τ), measuring how well the agent has done. The goal is to find the policy that maximises this score.

Since we're using machine learning to maximise, we should be thinking of gradient descent, which involves finding the local direction in which to change the parameters θ in order to increase the expected value of R(τ) by the greatest amount, and then increasing them slightly in that direction.

In other words, we want to find ∇_θ E_{τ~π_θ}[R(τ)].

Writing the expectation value in terms of a sum over trajectories, this is ∇_θ E_{τ~π_θ}[R(τ)] = ∇_θ Σ_{τ∈T} P(τ|θ) R(τ), where P(τ|θ) is the probability of observing the trajectory τ if the agent follows the policy π_θ, and T is the space of possible trajectories.

The probability of seeing a specific trajectory happen is the product of the probabilities of any individual step on the trajectory happening, and is hence P(τ|θ) = Π_t P(o_{t+1} | o_t, a_t) π_θ(a_t | o_t), where P(o_{t+1} | o_t, a_t) is the probability that the environment outputs the observation o_{t+1} in response to the observation-action pair (o_t, a_t). Products are awkward to work with, but products can be turned into sums by taking the logarithm - log P(τ|θ) = Σ_t [log P(o_{t+1} | o_t, a_t) + log π_θ(a_t | o_t)].

The gradient of this is ∇_θ log P(τ|θ) = Σ_t [∇_θ log P(o_{t+1} | o_t, a_t) + ∇_θ log π_θ(a_t | o_t)]. But what the environment does is independent of θ, so that entire term vanishes, and we have ∇_θ log P(τ|θ) = Σ_t ∇_θ log π_θ(a_t | o_t). The gradient of the policy is quite easy to find, since our policy is just a neural network so you can use back-propagation.

Our expression for the expectation value is just in terms of the gradient of the probability, not the gradient of the logarithm of the probability, so we'd like to express one in terms of the other.

Conveniently, the chain rule gives ∇_θ log P(τ|θ) = ∇_θ P(τ|θ) / P(τ|θ), so ∇_θ P(τ|θ) = P(τ|θ) ∇_θ log P(τ|θ). Substituting this back into the original expression for the gradient gives

∇_θ E_{τ~π_θ}[R(τ)] = Σ_{τ∈T} P(τ|θ) ∇_θ log P(τ|θ) R(τ),

and substituting our expression for the gradient of the logarithm of the probability gives

∇_θ E_{τ~π_θ}[R(τ)] = Σ_{τ∈T} P(τ|θ) (Σ_t ∇_θ log π_θ(a_t | o_t)) R(τ).

Notice that this is the definition of the expectation value of (Σ_t ∇_θ log π_θ(a_t | o_t)) R(τ), so writing the sum as an expectation value again we get

∇_θ E_{τ~π_θ}[R(τ)] = E_{τ~π_θ}[(Σ_t ∇_θ log π_θ(a_t | o_t)) R(τ)].

You can then find this expectation value easily by sampling a large number of trajectories (by running the agent in the environment many times), calculating the term inside the brackets, and then averaging over all of the runs.

Neat!

(More sophisticated RL algorithms apply various transformations to the reward to use information more efficiently, and use various gradient descent tricks to use the gradients acquired to converge on the optimal parameters more efficiently)
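
To make the final expression concrete, here is a minimal sketch of mine (not from any particular library) of that estimator on a one-step environment with a softmax policy, where ∇_θ log π_θ(a) has the closed form onehot(a) − π:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a single-step environment (a 3-armed bandit), so R(tau) is just the
# reward of the one action taken.
true_rewards = np.array([1.0, 2.0, 3.0])
theta = np.zeros(3)                        # policy parameters (softmax logits)

def policy(theta):
    exp = np.exp(theta - theta.max())
    return exp / exp.sum()

for step in range(2000):
    pi = policy(theta)
    actions = rng.choice(3, size=64, p=pi)                          # sample trajectories
    rewards = true_rewards[actions] + rng.normal(0, 0.1, size=64)   # noisy R(tau)
    grad_log_pi = np.eye(3)[actions] - pi                # grad log pi(a) for a softmax policy
    grad_J = (grad_log_pi * rewards[:, None]).mean(axis=0)  # the estimator: average of grad log pi * R
    theta += 0.1 * grad_J                                # gradient ascent on expected reward

print(policy(theta))  # should concentrate on the highest-reward action (index 2)
```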

Comment by tetraspace-grouping on Grue_Slinky's Shortform · 2019-10-01T10:36:48.703Z · LW · GW

Are we allowed to I-am-Groot the word "cake" to encode several bits per word, or do we have to do something like repeat "cake" until the primes that it factors into represent a desired binary string?

(edit: ah, only nouns, so I can still use whatever I want in the other parts of speech. or should I say that the naming cakes must be "cake", and that any other verbal cake may be whatever this speaking cake wants)

Comment by tetraspace-grouping on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-28T01:09:36.183Z · LW · GW

Dank EA Memes is a Facebook group. It's pretty good.

Comment by tetraspace-grouping on Follow-Up to Petrov Day, 2019 · 2019-09-28T00:59:41.532Z · LW · GW

If anyone asks, I entered a code that I knew was incorrect as a precommitment to not nuke the site.

Comment by tetraspace-grouping on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T00:38:46.380Z · LW · GW

To make sure I have this right and my LW isn't glitching: TurnTrout's comment is a Drake meme, and the two other replies in this chain are actually blank?

Comment by tetraspace-grouping on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T00:35:05.499Z · LW · GW

Well, at least we have a response to the doubters' "why would anyone even press the button in this situation?"

Comment by tetraspace-grouping on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T23:37:24.075Z · LW · GW

I.

Clicking on the button permanently switches it to a state where it's pushed-down, below which is a prompt to enter launch codes. When moused over, the pushed-down button has the tooltip "You have pressed the button. You cannot un-press it." Screenshot.

(On an unrelated note, on r/thebutton I have a purple flair that says "60s".)

Upon entering a string of longer than 8 characters, a button saying "launch" appears below the big red button. Screenshot.

II.

I'm nowhere near the PST timezone, so I wouldn't be able to reliably pull a shenanigan whereby if I had the launch codes I would enter or not enter them depending on the amount of counterfactual money pledged to the Ploughshares Fund in the name of either launch-code-entry-state, but this sentence is not apophasis.

III.

Conspiracy theory: There are no launch codes. People who claim to have launch codes are lying. The real test is whether people will press the button at all. I have failed that test. I came up with this conspiracy theory ~250 milliseconds after pressing the button.

IV. (Update)

I can no longer see the button when I am logged in. Could this mean that I have won?

Comment by tetraspace-grouping on Novum Organum: Preface · 2019-09-24T01:44:38.049Z · LW · GW

At the start of the Sequences, you are told that rationality is a martial art, used to amplify the power of the unaided mind in the same way that a martial art doesn't necessarily make you stronger but just lets you use your body properly.

Bacon, on the other hand, throws the prospect of using the unaided mind right out; Baconian rationality is a machine, like a pulley or a lever, where you apply your mind however feebly to one end and by its construction the other end moves a great distance or applies a great force (either would do for the metaphor).

If I have my history right, Bacon's machine is Science. Its function is to accumulate a huge mountain of evidence, so big that even a human could be persuaded by it, and instruction in the use of science is instruction in being persuaded by that mountain of evidence. Philosophers of old simply ignored the mountain of evidence (failed to use the machine) and maybe relied on syllogisms and definitions and hence failed to move the stone column.

And later, with the aid of Bacon's machine, it turns out that one discovers that you don't really need this huge mountain of evidence or the systematic stuff and that an ideal reasoner could simply perform a Bayesian update on each bit that comes in and get to the truth way faster, while avoiding all the slowness or all the mistakes that come if you insist on setting up the machine every single time. At your own risk, of course - get your stance slightly wrong lifting a stone column, and you throw your back out.

Comment by tetraspace-grouping on A Critique of Functional Decision Theory · 2019-09-15T14:43:55.051Z · LW · GW

An agent also faces a guaranteed payoffs problem in Parfit's hitchhiker: the driver has already made their prediction (the agent knows they're safe in the town), so the agent's choice is between losing $1,000 and losing $0. Is it also a bad idea for the agent to pay the $1,000 in this problem?

Comment by tetraspace-grouping on ozziegooen's Shortform · 2019-09-10T15:09:12.386Z · LW · GW

There's something of a problem with sensitivity; if the x-risk from AI is ~0.1, and the difference in x-risk from some grant is ~10^-6, then any difference in the forecasts is going to be completely swamped by noise.

(while people in the market could fix any inconsistency between the predictions, they would only be able to look forward to 0.001% returns over the next century)

Comment by tetraspace-grouping on Open & Welcome Thread - September 2019 · 2019-09-06T21:43:49.344Z · LW · GW

Is the issue that it's pain-based and hence makes my life worse (probably false for me: maths is fun and gives me a sense of pride and accomplishment when I do it, it's just that darn System 1 always saying "better for you if you play Kerbal Space Program"), or that social punishment isn't always available and therefore ought not to be relied on (this is probably an issue for me), or some third thing?

Comment by tetraspace-grouping on Open & Welcome Thread - September 2019 · 2019-09-04T20:17:40.376Z · LW · GW

Previously: August.

Dear Diary,

In the intervening month I have done chapters 8 and 9 of Tao's Analysis I, which feels terribly slow. Two chapters in a month? I could do the whole book in that time if I tried! And I know that I can because I have: I'm getting a physics degree, and it definitely feels like I've done at least one textbook's worth of learning per term.

One of the active ingredients seems to be time pressure, which is present but not salient here - if I fail, all that happens is the wrong math is deployed to steer the future of the lightcone, which doesn't hold a candle to me losing a little bit of status. Ah, to be a brain.

Thus: by October I'll have finished Analysis I; think less of me if I haven't.

(And perhaps I'll have done even more!)

UPDATE SEP 26: You can rest easy now; I have completed the book.

Comment by tetraspace-grouping on I think I came up with a good utility function for AI that seems too obvious. Can you people poke holes in it? · 2019-08-30T15:34:21.095Z · LW · GW

This AI wouldn't be trying to convince a human to help it, just to convince them that it's going to succeed.

So instead of convincing humans that a hell-world is good, it would convince the humans that it was going to create a hell-world (and they would all disapprove, so it would score low).

I think what this ends up doing is having everyone approve of a world that sounds superficially good but is actually terrible in a way that's difficult for unaided humans to realize. E.g. the AI convinces everyone that it will create an idyllic natural world where people live forager lifestyles in harmony etc. etc.; everyone approves, because they like nature and harmony and stuff; it proceeds to create such an idyllic natural world; and wild animal suffering outweighs human enjoyment forevermore.

Comment by tetraspace-grouping on I think I came up with a good utility function for AI that seems too obvious. Can you people poke holes in it? · 2019-08-29T13:14:00.142Z · LW · GW

One thing I'd be concerned about is that there are a lot of possible futures that sound really appealing, and that a normal human would sign off on, but are actually terrible (similar concept: siren worlds).

For example, in a world of Christians the AI would score highly on a future where they get to eternally rest and venerate God, which would get really boring after about five minutes. In a world of Rationalists the AI would score highly on a future where they get to live on a volcano island with catgirls, which would also get really boring after about five minutes.

There are potentially lots of futures like this (that might work for a wider range of humans), and because the metric (inferred approval after it's explained) is different from the goal (whether the future is good) and there's optimisation pressure increasing with the number of futures considered, I would expect it to be Goodharted.

Some possible questions this raises:

  • On futures: I can't store the entire future in my head, so the AI would have to only describe some features. Which features? How to avoid the selection of features determining the outcome?
  • On people: What if the future involves creating new people - whose approval counts, the people who exist now or the people who would exist in that future? What about animals? What about babies?
Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-08-26T19:15:41.503Z · LW · GW

Here are three statements I believe with a probability of about 1/9:

  • The two 6-sided dice on my desk, when rolled, will add up to 5.
  • An AI system will kill at least 10% of humanity before the year 2100.
  • Starvation was a big concern in ancient Rome's prime (claim borrowed from Elizabeth's Epistemic Spot Check post).

Except I have some feeling that the "true probability" of the 6-sided die question is pretty much bang on exactly 1/9, but that the "true probability" of the Rome and AI xrisk questions could be quite far from 1/9 and to say the probability is precisely 1/9 seems... overconfident?

From a straightforward Bayesian point of view, there is no true probability. It's just my subjective degree of belief! I'd be willing to make a bet at 8/1 odds on any of these, but not at worse odds, and that's all there really is to say on the matter. It's the number I multiply by the utilities of the outcomes to make decisions.

One thing you could do is imagine a set of hypotheses that I have that involve randomness, and then I have a probability distribution over which of these hypotheses is the true one, and by mapping each hypothesis to the probability it assigns to the outcome my probability distribution over hypotheses becomes a probability distribution over probabilities. This is sharply around 1/9 for the dice rolls, and widely around 1/9 for AI xrisk, as expected, so I can report 50% confidence intervals just fine. Except sensible hypotheses about historical facts probably wouldn't be random, because either starvation was important or it wasn't, that's just a true thing that happens to exist in my past, maybe.

I like jacobjacob's interpretation of a probability distribution over probabilities as an estimate of what your subjective degree of belief would be if you thought about the problem for longer (e.g. 10 hours). The specific time horizon seems a bit artificial (extreme case: I'm going to chat with an expert historian in 10 hours and 1 minute) but it does work and gives me the kind of results that makes sense. The advantage of this is that you can quite straightforwardly test your calibration (there really is a ground truth) - write down your 50% confidence interval, then actually do the 10 hours of research, and see how often the degree of belief you end up with lies inside the interval.
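
One way to make the picture concrete is the sketch below. The Beta parameters are invented purely for illustration (and it leans on scipy): both distributions have a mean of about 1/9, but the "dice" one is sharp and the "x-risk" one is wide, and the 50% intervals capture exactly the information that a bare point estimate of 1/9 throws away.

    # Toy version of the "distribution over probabilities" picture.
    # The Beta parameters are made up purely for illustration: both have
    # mean ~1/9, but one is sharp (dice) and one is wide (AI x-risk).
    from scipy import stats

    dice  = stats.beta(a=1000, b=8000)   # sharply peaked around 1/9
    xrisk = stats.beta(a=1, b=8)         # mean 1/9, but very spread out

    for name, dist in [("dice", dice), ("AI x-risk", xrisk)]:
        lo, hi = dist.ppf(0.25), dist.ppf(0.75)
        print(f"{name:10s} mean={dist.mean():.3f}  50% interval=({lo:.3f}, {hi:.3f})")

Both report ~0.111 as the point estimate; the dice interval is a hair's width while the x-risk interval spans a big chunk of [0, 1], which is the difference the calibration test above would check.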

Comment by tetraspace-grouping on Epistemic Spot Check: The Fate of Rome (Kyle Harper) · 2019-08-24T23:11:37.520Z · LW · GW

What do the probability distributions listed below the claims mean specifically?

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-08-24T18:08:33.448Z · LW · GW

Imagine two prediction markets, both with shares that give you $1 if they pay out and $0 otherwise.

One is predicting some event in the real world (and pays out if this event occurs within some timeframe) and has shares currently priced at $X.

The other is predicting the behaviour of the first prediction market. Specifically, it pays out if the price of the first prediction market exceeds an upper threshold $T before it goes below a lower threshold $R.

Is there anything that can be said in general about the price of the second prediction market? For example, it feels intuitively like if T >> X, but R is only a little bit smaller than X, then assigning a high price to shares of the second prediction market violates conservation of evidence - is this true, and can it be quantified?
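
One hedged partial answer: if you model the first market's price as a martingale and assume it eventually touches one of the two thresholds before the market resolves (both strong assumptions), then optional stopping prices the second market at about (X - R)/(T - R) - which is indeed small when T >> X and R is only slightly below X, matching the intuition above. A toy simulation of that model, with made-up example prices:

    import random

    # Toy model: treat the first market's price as a symmetric random walk
    # (a crude stand-in for a martingale) starting at X, and estimate the
    # probability it touches T before R.  Optional stopping suggests this
    # should be close to (X - R) / (T - R), ignoring the possibility that
    # the market resolves before hitting either threshold.
    X, T, R = 0.30, 0.90, 0.25   # made-up example prices
    STEP = 0.01

    def hits_T_first():
        # Work in integer ticks of STEP to avoid floating-point drift.
        p, lo, hi = round(X / STEP), round(R / STEP), round(T / STEP)
        while lo < p < hi:
            p += random.choice([-1, 1])
        return p >= hi

    n = 20_000
    estimate = sum(hits_T_first() for _ in range(n)) / n
    print(f"simulated: {estimate:.3f}    (X - R)/(T - R) = {(X - R) / (T - R):.3f}")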


Comment by tetraspace-grouping on Time Travel, AI and Transparent Newcomb · 2019-08-22T22:51:21.588Z · LW · GW

We would also expect destroying time machines to be a convergent instrumental goal in this universe, since any agent that does this would be more likely to have been created. So by default powerful enough optimization processes would try to prevent time travel.

Comment by tetraspace-grouping on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-07T15:24:56.193Z · LW · GW

The counterfactual oracle can answer questions for which you can evaluate answers automatically (and might be safe because it doesn't care about being right in the case where you read the prediction, so it won't manipulate you), and the low-bandwidth oracle can answer multiple-choice questions (and might be safe because none of the multiple-choice options are unsafe).


My first thought for this is to ask the counterfactual oracle for an essay on the importance of coffee, and in the case where you don't see its answer, you get an expert to write the best essay on coffee possible, and score the oracle by the similarity between what it writes and what the expert writes. Though this only gives you human levels of performance.
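
A toy sketch of that setup, where the erasure probability and the word-overlap similarity metric are both stand-ins invented for illustration:

    import random

    def similarity(a: str, b: str) -> float:
        # Placeholder metric (word overlap); a real setup would need
        # something much better than this.
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def run_episode(oracle_essay, get_expert_essay, erasure_prob=0.1):
        # Counterfactual-oracle protocol, toy version: with probability
        # erasure_prob nobody reads the oracle's essay, and only then is it
        # scored, against an essay the human expert writes from scratch.
        # In the other branch the essay is read but never scored, so being
        # read gives the oracle nothing to optimise for.
        if random.random() < erasure_prob:
            expert_essay = get_expert_essay()
            return {"shown": False, "reward": similarity(oracle_essay, expert_essay)}
        return {"shown": True, "reward": None}

    # Example usage with stand-in strings:
    print(run_episode(
        "coffee increases alertness and is deeply embedded in work culture",
        lambda: "coffee is important because it increases alertness at work",
    ))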

Comment by tetraspace-grouping on Open & Welcome Thread - August 2019 · 2019-08-04T23:04:35.201Z · LW · GW

I might as well post a monthly update on my doing things that might be useful for me doing AI safety.

I decided to just continue with what I was doing last year before I got distracted, and learn analysis, from Tao's Analysis I, on the grounds that it's maths which is important to know and that I will climb the skill tree analysis -> topology -> these fixed point exercises. Have done chapters 5, 6 and 7.

My question on what it would be most useful for me to be doing remains if anyone has any input.

Comment by tetraspace-grouping on Occam's Razor: In need of sharpening? · 2019-08-04T22:28:21.753Z · LW · GW

The formalisation used in the Sequences (and algorithmic information theory) is that the complexity of a hypothesis is the length of the shortest computer program that can specify that hypothesis.
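
In symbols, this is the standard algorithmic-information-theory definition, with U a fixed universal Turing machine and |p| the length of program p:

    K_U(H) = \min\{\, |p| : U(p) = H \,\}

i.e. the length of the shortest program that, when run on U, outputs a full specification of H.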

An illustrative example is that, when explaining lightning, Maxwell's equations are simpler in this sense than the hypothesis that Thor is angry, because the shortest computer program that implements Maxwell's equations is much shorter than the shortest emulation of a humanlike brain and its associated emotions.

In the case of many-worlds vs. the Copenhagen interpretation, a computer program that implemented either of them would start with the same algorithm (Schrödinger's equation), but (the claim is) the program for Copenhagen would have to have an extra section specifying how collapse upon observation works, which many-worlds wouldn't need.