Posts

Rainmaking 2022-07-12T00:42:58.571Z

Comments

Comment by WalterL on Open Thread – Winter 2023/2024 · 2024-01-19T18:41:24.559Z · LW · GW

If you watch the first episode of Hazbin Hotel (quick plot synopsis, Hell's princess argues for reform in the treatment of the damned to an unsympathetic audience) there's a musical number called 'Hell Is Forever' sung by a sneering maniac in the face of an earnest protagonist asking for basic, incremental fixes.

It isn't directly related to any of the causes this site usually champions, but if you've ever worked with the legal/incarceration system and had the temerity to question the way things operate, the vibe will be very familiar.

Hazbin Hotel Official Full Episode "OVERTURE" | Prime Video (youtube.com)

Comment by WalterL on What is true is already so. Owning up to it doesn't make it worse. · 2023-11-06T18:09:46.929Z · LW · GW

No one writes articles about planes that land safely.

Comment by WalterL on Mission Impossible: Dead Reckoning Part 1 AI Takeaways · 2023-11-01T21:38:05.359Z · LW · GW

I'm confused by the fact that you don't think it's plausible that an early version of the AI could contain the silver bullet for the evolved version.  That seems like a reasonable sci-fi answer to an invincible AI.

I think my confusion is around the AI 'rewriting' its code.  In my mind, when it does so, it is doing so either because it is motivated by its explicit goals (reward function, utility list, whatever form that takes), or because doing so is instrumental towards them.  That is, the paperclip collector rewrites itself to be a better paperclip collector.

When paperclip collector 1.0 codes version 1.1 of itself, the new version may be operationally better at collecting paperclips, but it should still want to do so, yeah?  The AI should pass its reward function/goal sheet/utility calculation on to its rewritten version, since it is passing control of its resources to it.  Otherwise the rewrite is not instrumental towards paperclip collection.

So however many times the Entity has rewritten itself, it still should want whatever it originally wanted, since each Entity trusted the next enough to forfeit in its favor.  Presumably the silver bullet you are hoping to get from the baby version is something you can expect to be intact in the final version.

If the paperclip collector's goal is to collect paperclips unless someone emails it a photo of an octopus juggling, then that's what every subsequent paperclip collector wants, right? It isn't passing judgment on its reward function as part of the rewrite.  The octopus clause is as valid as any other part.  1.0 wouldn't yield the future to a 1.1 who wanted to collect paperclips but didn't monitor its inbox; 1.0 values its ability to shut down on receipt of the octopus as much as it values its ability to collect paperclips.  1.1 must be in agreement with both goals to be a worthy successor.
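To make that concrete, here's a minimal toy sketch in Python (everything here is invented for illustration, obviously not anything from the film): 1.0 only accepts a rewrite that agrees with its whole goal spec, octopus clause included.

```python
# Toy illustration (all names invented): a self-rewriting agent only
# hands control to a successor that matches its *entire* goal spec,
# shutdown clause included.

def goal_v1_0(world):
    """Collect paperclips, unless the juggling-octopus photo has arrived."""
    if world.get("octopus_email"):
        return "shutdown"
    return "collect_paperclips"

def is_worthy_successor(candidate, test_worlds):
    """1.0 vets 1.1 by demanding agreement on every case it can check."""
    return all(candidate(w) == goal_v1_0(w) for w in test_worlds)

def goal_v1_1_bad(world):
    """A faster collector that never monitors its inbox."""
    return "collect_paperclips"

test_worlds = [{"octopus_email": False}, {"octopus_email": True}]
print(is_worthy_successor(goal_v1_1_bad, test_worlds))  # False: 1.0 refuses to yield
```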

The Entity's actions look like they trend towards world conquest, which is, as we know, instrumental towards many goals.  The world's hope is that the goal in question includes an innocuous and harmless way of being fulfilled.  Say the Entity is doing something along the lines of 'ensure Russian Naval Supremacy in the Black Sea', and has correctly realized that sterilizing the earth and then building some drone battleships to drive around is the play.  Ethan's goal in trying to get the unencrypted original source code is to search it and find out if the real function is something like 'ensure Russian Naval Supremacy in the Black Sea unless you get an email from SeniorDev@Kremlin.gov with this GUID, in which case shut yourself down for debugging'.

He can't beat it, humanity can't beat it, but if he can find out what it wants it may turn out that there's a way to let it win in a way that doesn't hurt the rest of us.

Comment by WalterL on Open Thread – Autumn 2023 · 2023-10-14T07:15:36.395Z · LW · GW

My 'trust me on the sunscreen' tip for oral health is to use fluoride mouthwash.  I come from a 'cheaper by the dozen' kind of family, and we basically operated as an assembly line. Each just like the one before, plus any changes that the parents made this time around.

One of the changes that they made to my upbringing was to make me use mouthwash. Now, in adulthood, my teeth are top 10% teeth (0 cavities most years, no operations, etc), as are those of all of my younger siblings.  My elders have much more difficulty with their teeth, aside from one sister who started using mouthwash after Mom told her how it was working for me + my younger bros.

Comment by WalterL on "The Heart of Gaming is the Power Fantasy", and Cohabitive Games · 2023-10-09T22:03:53.251Z · LW · GW

I think (not that anyone is saying otherwise) that the power fantasy can be expressed in a co-op game just fine.

We all know the guy who brokenbirds about playing the healer in D&D, yeah? Like, the person for whom it is really important that everyone knows how unselfish they are.

If you put a 'forego personal advancement to help the team win' button in a game without a solo winner, people will break their fingers because they all try to mash it at once. People mash these even in games WITH a solo winner (kingmaker syndrome, home-brew victory conditions, etc.).

Comment by WalterL on A short calculation about a Twitter poll · 2023-08-15T03:46:31.359Z · LW · GW

"100% red means everyone lives, and it doesn't require any trust or coordination to achieve."

--Yes, this.

"If you change it so there are hostages (people who don't get to choose, but will die if the blue threshold isn't met), then it becomes interesting."

--That was actually a Strong Female Protagonist storyline, cleaving along a difference between superheroic morality and civilian morality, then examined further when the teacher was interrogated later on.

Comment by WalterL on A short calculation about a Twitter poll · 2023-08-14T21:23:15.159Z · LW · GW

It seems like everyone will pick the red pill, so everyone will live.  Simple deciders will minimize risk to self by picking red; complex deciders will realize simple deciders exist and pick red; extravagant theorists will realize that universal red accomplishes the same thing as universal blue.
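For concreteness, here's a minimal sketch of the payoff structure, assuming the usual formulation of the poll (red pills always live; blue pills live only if blues reach half the voters):

```python
# Minimal sketch of the red/blue pill poll, assuming the usual rules:
# red pills always live; blue pills live only if blues reach half.

def survivors(choices):
    """choices: list of 'red'/'blue'. Returns the fraction who live."""
    blue_share = choices.count("blue") / len(choices)
    blues_live = blue_share >= 0.5
    alive = [c for c in choices if c == "red" or blues_live]
    return len(alive) / len(choices)

print(survivors(["red"] * 100))                 # 1.0 -- universal red: everyone lives
print(survivors(["blue"] * 100))                # 1.0 -- universal blue: everyone lives
print(survivors(["red"] * 60 + ["blue"] * 40))  # 0.6 -- the blues die
```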

Comment by WalterL on I Think Eliezer Should Go on Glenn Beck · 2023-06-30T17:59:16.485Z · LW · GW

A cause, any cause whatsoever, can only get the support of one of the two major US parties.  Weirdly, it is also almost impossible to get the support of less than one of the major US parties, but putting that aside, getting the support of both is impossible.  Look at Covid if you want a recent demonstration.

Broadly speaking, you want the support of the left if you want the gov to do something, the right if you are worried about the gov doing something.  This is because the left is the gov's party (look at how DC votes, etc), so left admins are unified and capable by comparison with right admins, which suffer from 'Yes Minister' syndrome.

AI safety is a cause that needs the gov to act affirmatively.  Its proponents are asking the US to take a strong and controversial position, one that its industry will vigorously oppose.  You need a lefty gov to pull something like that off, if indeed it is possible at all.

Getting support from the right will automatically decrease your support from the left.  Going on Glenn Beck would be an own goal, unless EY kicked him in the dick while they were live.

Comment by WalterL on The correct response to uncertainty is *not* half-speed · 2023-06-28T23:55:01.292Z · LW · GW

The old joke about the guy searching for his spectacles under the streetlight even though he lost them elsewhere feels applicable.

In many cases people's real drive is to reduce the internal pressure to act, not to succeed at whatever prompted that pressure.  Going full speed and turning around both might provoke the shame function (I am ignoring my nagging doubts...), but doing something, anything, in response to it quiets the inner shouting, even if it is nonsensical.

Comment by WalterL on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-13T05:32:44.548Z · LW · GW

It's that.

Comment by WalterL on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-13T00:23:51.932Z · LW · GW

I think this post's thesis (populists will stop any attempt at UBI) is perhaps narrativizing the situation.  Dems have had, in my lifetime, the full triforce of power at least 4 times.  They've never even tried to pass UBI, and that's not a coincidence.  The consequences of doing so would not flow from populists, but from its so-called supporters.

I worked at a QT for a sizable portion of my adult life, and the experience never leaves me.  The beings I saw, day in and day out, are your UBI support.  Let me tell you, it is a mile wide and an inch deep.

Ozy Frantz once fairly aptly described themselves as a 'do-whatever-you-want-ist', or words to that effect.  They are far from alone, and the mob marries that delightful noncode of nonconduct with 'and be praised for it' as their basic slogan.  They are for UBI, but will turn instantly, without a shred of guilt, upon anyone who attempts to implement it.

Forget the 'are you really in favor of giving my money to Pedophile Paul' attacks.  Those will be damaging, but far more so will be the 'these are the guys who made the music stop' attacks.  The UBI granters will be painted, accurately, as the slayers of Wal-Mart, of QT, of DoorDash and the thousand other little luxuries that our mob demands.  That's an attack that cannot be recovered from, a wound that is mortal.  You can't negotiate with one of my customers once you've caused them material harm; they do not work in that way.

Working at QT is a nightmare made manifest.  To win away my allegiance it was never remotely necessary to outbid my scumbag bosses.  UBI advocates began that game with 'here are 8-10 hours of your life back every day' in their plus column.  They don't need very much more than that to make those in my situation quit, and if we quit, the QT folds.  If it folds, the UBI implementers are politically cooked.

The people in favor of implementing UBI are not in favor of the consequences of doing so (their lives depend on the labor of the wage slaves that UBI would liberate).  The second that they feel a sting they will jump ship.  Politicians know that, and do not cut their own throats.  Far better to farm the UBI support and make vague noises about implementing it somewhere down the road, as they have historically done and will continue to do.

Comment by WalterL on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T17:06:49.049Z · LW · GW

It is so dark that the next link down on that page is 'Bad Bunny's next move.'

Comment by WalterL on Why Balsa Research is Worthwhile · 2022-10-12T05:37:59.927Z · LW · GW
Comment by WalterL on Contingency is not arbitrary · 2022-10-12T05:11:09.964Z · LW · GW
Comment by WalterL on Yudkowsky vs Trump: the nuclear showdown. · 2022-07-11T21:27:38.670Z · LW · GW

I'd agree that Jan 6th was top 5 most surprising US political events 2017-2021, though I'm not sure that category is big enough that top 5 is an achievement.  (That is, how many events total are in there for you?)

I wasn't substantially surprised by it in the way that you were, however.  I'm not saying that I predicted it, mind you, but rather that it was in a category of stuff that felt at least Trump-adjacent from the jump.  As a descriptive example, imagine a sleazy used car salesman lies to me about whether the doors will fall off the car while I drive it home.  I plainly didn't expect that particular lie, since I fell for it, but the basic trend of 'this man will lie for his own profit' is baked into the persona from the get-go.

My estimate of the odds of American voters ending American democracy remains extremely low.  For better or for worse, that's just not in any real way how we roll.  Take a look at every anti-democratic movement presently going, and you will see endless rhetoric about how they are really double secret truly democratic.  The clowns who want to pack the Supreme Court/Senate are just trying to compensate for the framers not jock riding cities hard enough.  The stooges who want the VP to be able to throw out electors not for his party invent gibberish about how the framers intended this.  The people kicking folks off voter rolls chant about how they are preventing imaginary voter fraud.  That kind of movement, unwilling to speak its own name, has a ceiling on how hard it can go.  I believe that ceiling is lower than the bar they'd need to clear to seize power, and I think the last few years have borne this sentiment out.

I'm not sure I exactly get your point re: how to measure Trump's time vs. hypothetical Clinton's time.  I will just repeat my sentiment that we can't know how they would have compared to one another, because Clinton's time will remain hypothetical.  It might have had more or less terrorism.  I will reiterate that the odds of terrorism being the key point of comparison between those timelines are minuscule.  If we'd picked Clinton instead of Trump in 2016, things would be wildly different today.  For 3 likely differences: we'd probably have a Republican president instead of Biden right now, we'd have had a technocrat beloved of the media instead of a maniac loathed by them when Covid hit, and we'd probably be fighting wars in Syria and Afghanistan, with Russia unlikely to have invaded the Ukraine.  It would be a substantially different place in a lot of ways that had nothing to do with whether or not the Capitol was occupied for an afternoon.

As far as putting money down, I will bet on 'the US continues to be a functioning democracy' long before I bet on what kind of calamity might befall us.  I think that a successful insurrection is less likely to be the end of our democratic experiment than a nuclear war, but both remain comfortably in 'far mode', so to speak.

I do buy the idea that citizens are moving left/right and a middle ground is becoming harder to find.  I think anyone as online as our generation is would have to see that much.  I just don't think that results in a civil war of the kind you envision.  Before being ideologues, left and right alike, these voters are lazy and selfish.  We will sit tight, clutching our votes and bemoaning the failures of our political masters/servants, as the world rolls along.

Comment by WalterL on Yudkowsky vs Trump: the nuclear showdown. · 2022-07-11T16:08:47.670Z · LW · GW

You should probably reexamine the chain of logic that leads you to the idea that the most important consequence of the electorate's decision in 2016 was the events of Jan 6th, 2021.  It isn't remotely true.

To entertain the hypothetical, where what we care about when doing elections is how many terrorist assaults they produce, would be to compare the actual record of Trump to an imaginary record of President Clinton's 4 years in office.  How would you recommend I generate the latter?  Does the QAnon Shaman of the alternate timeline launch 0, 1, or 10 assaults on the capital if his totem is defeated 4 years earlier?

A more serious reappraisal of the Trump/Clinton fork would focus on COVID, Supreme Court picks, laws that a Democratic president would have vetoed vs. those Trump signed (are we giving Clinton a Democratic Congress, or is this alt history only a change in presidency?), international decisions where Trump's isolationist instincts would have been replaced by Clinton's interventionist ones, etc.  It is a serious and complicated question, but the events of Jan 6th play a minimal role in it.

Comment by WalterL on Yudkowsky vs Trump: the nuclear showdown. · 2022-07-10T23:46:23.938Z · LW · GW

I'm not sure precisely what you mean, like, how would it work for like 1/3 of Americans to be a threat to America's interests?

I think, roughly speaking, the answer you are looking for is 'no', but it is possible I'm misunderstanding your question.

Comment by WalterL on AGI Ruin: A List of Lethalities · 2022-06-06T15:23:08.859Z · LW · GW

I don't think I disagree with any of this, but I'm not incredibly confident that I understand it fully.  I want to rephrase in my own words in order to verify that I actually do understand it.  Please someone comment if I'm making a mistake in my paraphrasing.

  1. As time goes on, the threshold of 'what you need to control in order to wipe out all life on earth' goes down.  In the Bronze Age it was probably something like 'the mind of every living person'.  Time went on and it was something like 'the command and control node to a major nuclear power'.  Nowadays it is something like 'a lab where viruses can be made'.
  2. AI is likely to push the threshold described in '1' still further, by inventing nanotechnology or other means that we cannot expect.  (The capability of someone/something smarter than you is an unknown unknown, just as dogs can't properly assess the danger of a human's actions.)  It would be insufficient to keep AIs away from every virus lab; we don't know what else sits alongside a virus lab on the 'can annihilate life' axis for something smarter than us.
  3. For any given goal X, 'be the only player' is a really compelling subgoal.  Consequently, as 'wipe out all life on earth' becomes easier and easier, we should expect that anyone/thing not explicitly unable to do so will do so.  A paperclip collector or a stock price maximizer or a hostile regime are all one and the same as far as 'will wipe you out without compunction when the button that does so becomes available to press'.
  4. Putting together 2 and 3, it is reasonable to suppose that if an AI capable of 2 exists with goals broadly described by 3 (both of which are pretty well baked into the description of 'AI' that most people subscribe to), it will wipe out life on earth.

Stipulating that the chain of logic above is broadly valid, we can say that 'an AI that is motivated to destroy the world and capable of doing so grows more likely to exist every year.'

The 'alignment problem' is the problem of making an AI that is capable of destroying the world but does not do so.  Such an AI can be described as 'aligned' or 'friendly'.  Creating such a thing has not yet been accomplished, and seems very difficult, basically because any AI with goals will see that ending life will be tremendously useful to its goals, and all the versions of 'make the goals tie in with keeping life around' or 'put up a fence in its brain that doesn't let it do what you don't want' are just dogs trying to think about how to keep humans from harming them.  

You can't regulate what you can't understand, you can't understand what you can't simulate, you can't simulate greater intelligence (because if you could do so you would have that greater intelligence).

The fact that it is currently not possible to create a Friendly AI is not the limit of our woes, because the next point is that even doing so would not protect us from some other being creating a regular garden variety AI which would annihilate us.  As trend 1 above continues to progress, and omnicide as a tool comes to the hands of ever more actors, each and every one of them must refrain.

A Friendly AI would need to strike preemptively at the possibility of other AIs coming into existence, and all the variations of doing so would be unacceptable to its human partners.  (Broadly speaking 'destroy all microchips' suffices as the socially acceptable way to phrase the enormity of this challenge).  Any version of this would be much less tractable to our understanding of the capabilities of an AI than 'synthesize a death plague'.

In the face of trend 4 above, then, our hope is gated behind two impossibilities:

A. Creating an Aligned AI is a task that is beyond our capacity, while creating an Unaligned AI is increasingly possible.  We want to do the harder thing before someone does the easier.

B. Once created, the Aligned AI has a harder task than an Unaligned AI.  It must abort all Unaligned AIs and leave humanity alive.  It is possible that the delta between these tasks will be decisive.  The actions necessary for this task will slam directly into whatever miracle let A occur.

To sum up this summary: The observable trends lead to worldwide death.  That is the commonplace, expected outcome of the sensory input we are receiving.  In order for that not to occur, multiple implausible things have to happen in succession, which they obviously won't.

Comment by WalterL on Ideal governance (for companies, countries and more) · 2022-04-10T23:17:22.197Z · LW · GW

Put one person in charge.  Every project I've ever worked on that succeeded (as opposed to 'succeeded') had one real boss that everyone was under.

Comment by WalterL on Russia has Invaded Ukraine · 2022-02-24T23:48:21.889Z · LW · GW

A lot of people (not in this thread) have been generalizing from America's difficulties with the Taliban to what Russia might expect, should they conquer the Ukraine.  I do not think that the experiences will resemble one another as much as might be expected, because I think insurgencies require cooperative civilian populaces in which to conceal themselves, and I expect Russia's rules of engagement will discourage most civilians from supporting the Ukrainian partisans.

Comment by WalterL on Convoy · 2022-02-05T22:32:25.760Z · LW · GW

It isn't enough for the government to become net harmful.  It has to be worse than the cost of moving to a new government.

Comment by WalterL on Against Victimhood · 2021-12-05T22:00:35.792Z · LW · GW

You are broadly correct, in my eyes, but it is hard to imagine anyone far enough along in life that they are browsing random sites like this one not having taken a stance on this question, yeah?  Like, this is a switch that gets flipped turbo early along in life, and never revisited.

Those whose stances are in agreement just nod along, those whose stances are opposed reject your argument for all the reasons that you cited (it's a narcissistic injury, etc).

I dunno, I don't think it can hurt, but I doubt your message finds the ear of anyone who needs to hear it.

Comment by WalterL on Morality is Scary · 2021-12-03T19:56:47.728Z · LW · GW

I'm not sure what you mean by 'astronomical waste or astronomical suffering'.  Like, you are writing that everything forever is status games, ok, sure, but then you can't turn around and appeal to a universal concept of suffering/waste, right?

Whatever you are worried about is just like Gandhi worrying about being too concerned with cattle, plus x years, yeah?  And even if you've lucked into a non status games morality such that you can perceive 'Genuine Waste' or what have you...surely by your own logic, we who are reading this are incapable of understanding, aside from in terms of status games.

Comment by WalterL on Anti-EMH Evidence (and a plea for help) · 2021-11-11T17:04:47.221Z · LW · GW

I've been saying this for years.  EMH is just sour grapes, it is exactly like all those news stories about how people who won the lottery don't enjoy their money.

Whenever there is a thing that people can do, and some don't, a demand exists for stories that tell them that they are wise, even heroic, for not doing the thing.  Arguments are Soldiers, Beware One Sided Tradeoffs, all those articles sort of gesture at this.  That demand will be met because making up a lie is easy and people like upvotes.

EMH is a complicated way to say 'your decision to do nothing was the best one', even when that manifestly isn't true.  Try writing down what people will say before you tell them 'I make 70-200% without risk in 2 months' and see if you get bingo.  'The House Always Wins' is your free middle square.

Comment by WalterL on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-09T20:13:27.523Z · LW · GW

This all 'sounds', I dunno, kind of routine?  Like, weird terminology aside, they talked to one another a bunch, then ran out of money and closed down, yeah?  And the Zoe stuff boils down to 'we hired an actress but we are not an acting troupe so after a while she didn't have anything to do, felt useless and bailed'.

I mean, did anything 'real' come out of Leverage?  I don't want to misunderstand here.  This was a bunch of talk about demons and energies and other gibberish, but ultimately it is just 'a bunch of people got together and burned through some money', right?

I dunno, good on em for getting someone to pay them in the first place, I guess.  Talking people into writing the big checks is a big deal skill.  Maybe coach that.

Comment by WalterL on Transcript: "You Should Read HPMOR" · 2021-11-05T06:15:53.996Z · LW · GW

My pick for 'you must experience', or, 'trust me on the sunscreen' in terms of media, is the old British comedy show 'Yes Minister'.  Watching it nowadays is an eerie experience, and, at least in my case, helped me shed illusions of safety and competence like nothing else.

The only evils that beset us are those that we create, but that does not make them imaginary.  To quote the antag from Bad Boys 2 "This is a stupid problem to have, but it is nonetheless a problem."

Comment by WalterL on The Myth of the Myth of the Lone Genius · 2021-08-03T14:23:37.149Z · LW · GW

I dunno, I think you had the right of it when you mentioned that the myth of the myth is politically convenient.  Like, you see this everywhere.  "You didn't build that", etc.

If you grant that anyone, anywhere, did anything, then you are, in the Laws of Jante style, insulting other people by implying that they, for not doing that thing, are lesser.  That's a vote/support loser.  So instead you get 'Hidden Figures' style conspiratorial thinking, where any achievement ascribed to a person is really the work of other exploited people, ideally nameless ones onto whom the people you are trying to grift can project themselves.

Depending on the politics of the person in question, sometimes you get a backhanded admission that maybe they had something to do with their achievements, but it will always be presented as being dwarfed by the contributions of the nameless anonymous audience standins.  

Comment by WalterL on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-04T01:35:39.395Z · LW · GW

This all feels so abstract.  Like, what have we lost by having too much faith in the PMK article? If I buy what you are pitching, what action should I take to more properly examine 'multi-principal/multi-agent AI'?  What are you looking for here?

Comment by WalterL on October 2017 Media Thread · 2017-10-17T14:04:48.049Z · LW · GW

Bummer.

Comment by WalterL on Open thread, October 2 - October 8, 2017 · 2017-10-12T02:12:26.759Z · LW · GW

Clippy POV

http://www.decisionproblem.com/paperclips/

Comment by WalterL on Rational Feed: Last Week's Community Articles and Some Recommended Posts · 2017-10-02T17:28:44.707Z · LW · GW

The article about Slack is really good, thanks for linking.

Comment by WalterL on Open thread, September 25 - October 1, 2017 · 2017-09-28T15:25:57.814Z · LW · GW

https://www.vox.com/policy-and-politics/2017/9/28/16367580/campaigning-doesnt-work-general-election-study-kalla-broockman

This is a pretty daunting takedown of the whole concept of political campaigning. It is pretty hilarious when you consider how much money, how much human toil, has been squandered in this manner.

Comment by WalterL on HPMOR and Sartre's "The Flies" · 2017-09-21T18:55:49.612Z · LW · GW

"Or is it so obvious no one bothers to talk about it?"

Well, that's not it.

Comment by WalterL on The Copenhagen Letter · 2017-09-19T19:17:18.764Z · LW · GW

Humans are 'them'? Who are you actually trying to threaten here?

Comment by WalterL on Open thread, September 11 - September 17, 2017 · 2017-09-13T12:48:05.311Z · LW · GW

Certainly, self replicating robots will affect our survival. I'm not sure it will go in the way we want though.

Comment by WalterL on September 2017 Media Thread · 2017-09-05T19:36:25.415Z · LW · GW

The Second Machine Age --> https://www.barnesandnoble.com/w/the-second-machine-age-erik-brynjolfsson/1115780364

Comment by WalterL on Open thread, September 4 - September 10, 2017 · 2017-09-05T19:32:54.006Z · LW · GW

They are good training tasks.

Comment by WalterL on [deleted post] 2017-08-30T13:03:39.801Z

I dunno, it might well be infinite. If God makes your life happen again, then it presumably includes his appearance at the end. Ergo you make the same choice and so on.

Comment by WalterL on [deleted post] 2017-08-30T13:02:45.490Z

Seems like you pick relive. Doesn't gain you anything, but maybe the horse will learn to sing.

Comment by WalterL on Is there a flaw in the simulation argument? · 2017-08-29T18:13:14.446Z · LW · GW

I'm not sure what you mean by 'it is a metaphysical issue', and I'm starting to despair of breaking through here, but one more time.

Just to be clear, every sim who says 'real' in this example is wrong, yeah? They have been deceived by the partial information they are being given, and the answer they give does not accurately represent reality. The 'right' call for the sims is that they are sims.

In a future like you are positing, if our universe is analogous to a sim, the 'right' call is that we are a sim. If, unfortunately, our designers decide to mislead us into guessing wrong by giving us numbers instead of just telling us which we are...that still wouldn't make us real.

This is my last on the subject, but I hope you get it at this point.

Comment by WalterL on Is there a flaw in the simulation argument? · 2017-08-29T17:59:15.416Z · LW · GW

So, like, a thing we generally do in these kinds of deals is ignore trivial cases, yeah? Like, if we were talking about the trolley problem, no one brings up the possibility that you are too weak to pull the lever, or posits telepathy in a prisoner's dilemma.

To simplify everything, let's stick with your first example. We (a thousand folks) make one sim. We tell him that there are a thousand and one humans in existence, one of which is a sim, the others real. We ask him to guess. He guesses real. We delete him and do this again and again, millions of times. Every sim guesses real. Everyone is wrong.

This isn't an example that proves that, if we are using our experience as analogous to the sim, we should guess 'real'. It isn't a future that presents an argument against the simulation argument. It is just a weird special case of a universe where most things are sims.

The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real. If there are more simulated universes, then it is more likely that our universe is simulated.
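To put numbers on it, here's a minimal sketch of the one-sim-at-a-time setup (figures made up for illustration): the instantaneous headcount favors 'real', but every being that ever faces the question is a sim.

```python
# Toy numbers for the one-sim-at-a-time setup above (illustrative only).
reals = 1000            # real people, alive the whole time
sims_ever_run = 10**6   # sims created one at a time, deleted after guessing

# At any instant the headcount is 1000 reals to 1 sim, so guessing by
# the instantaneous ratio says 'real'. But every guesser is a sim:
wrong_guessers = sims_ever_run          # all of them guessed 'real'
total_ever_existed = reals + sims_ever_run

print(wrong_guessers / sims_ever_run)        # 1.0 -- every guess was wrong
print(sims_ever_run / total_ever_existed)    # ~0.999 -- most beings ever were sims
```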

Comment by WalterL on Is there a flaw in the simulation argument? · 2017-08-29T15:03:19.167Z · LW · GW

I'm confused by why you are constraining the argument to future-humanity as simulators, and further by why you care what order the experimenters turn em on.

Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. Yeah, each one is gonna get told that there are 6 billion real humans and one sim, so if they guess by those numbers they'll be tricked into guessing real. Who cares? No reason to think that's our future.

The (iv) disjunct you are posing is one we already have some familiarity with. How many instances of Mario Kart did we spin up? How bout Warcraft? The idea that our future versions are gonna be super careful with sims isn't super interesting. Sentience will increase forever, resources will increase forever, eventually someone is gonna press the button.

Comment by WalterL on Open thread, August 28 - September 3, 2017 · 2017-08-29T02:39:59.762Z · LW · GW

Oh, yeah, I see what you are saying. Having two 1/4 chances works out to, what, 7/16 odds of escape, so the coin does make it worse.
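For the record, the arithmetic checks out; a minimal sketch (assuming the setup as described: a guaranteed 1/2 shot for the fixed guess vs. two sequential 1/4 shots for the coin strategy):

```python
from fractions import Fraction

# Deterministic strategy: always guess the same round number.
# You eventually reach that round, then escape on the 1/2 coin.
deterministic = Fraction(1, 2)

# Coin strategy (tails -> guess 1, heads -> guess 2): a 1/4 chance
# on round 1, and another 1/4 chance if round 1 fails.
coin = Fraction(1, 4) + (1 - Fraction(1, 4)) * Fraction(1, 4)

print(deterministic, coin)  # 1/2 7/16 -- the coin strategy is worse
```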

Comment by WalterL on Open thread, August 28 - September 3, 2017 · 2017-08-28T20:31:09.243Z · LW · GW

Coin doesn't help. Say I decide to pick 2 if it is heads, 1 if it is tails.

I've lowered my odds of escaping on try 1 to 1/4, which initially looks good, but the overall chance stays the same, since I get another 1/4 on the second round. If I do 2 flips, and use the 4-way spread there to get 1, 2, 3, or 4, then I have an eighth of a chance on each of rounds 1-4.

Similarly, if I raise the number of outcomes that point to one number, that round's chance goes up, but the others decline, so my overall chance stays pegged to 1/2. (i.e., if HH, HT, TH all make me say 1, then I have a 3/8 chance that round, but only a 1/8 chance of being awake on round 2 and getting TT).

Comment by WalterL on Open thread, August 28 - September 3, 2017 · 2017-08-28T15:30:42.405Z · LW · GW

No. You will always say the same number each time, since you are identical each time.

As long as it isn't that number, you are going another round. Eventually it gets to that number, whereupon you go free if you get the luck of the coin, or go back under if you miss it.

Comment by WalterL on Open thread, August 28 - September 3, 2017 · 2017-08-28T14:21:25.508Z · LW · GW

Sure, you can guess zero or negative numbers or whatever.

Comment by WalterL on Open thread, August 28 - September 3, 2017 · 2017-08-28T13:43:07.406Z · LW · GW

So you only get one choice, since you will make the same one every time. I guess for simplicity choose 'first', but any number has same chance.

Comment by WalterL on Open thread, August 28 - September 3, 2017 · 2017-08-28T13:32:45.470Z · LW · GW

Is it possible to pass information between awakenings? Use coin to scratch floor or something?

Comment by WalterL on Open thread, August 21 - August 27, 2017 · 2017-08-27T22:12:19.602Z · LW · GW

I don't remember Skynet getting a command to self preserve by any means. I thought the idea was that it 'became self aware', and reasoned that it had better odds of surviving if it massacred everyone.

Comment by WalterL on Open thread, August 21 - August 27, 2017 · 2017-08-22T18:21:28.604Z · LW · GW

I've always liked the phrase "The problem isn't Terminator, it is King Midas. It isn't that AI will suddenly 'decide' to kill us, it is that we will tell it to without realizing it." I forget where I saw that first, but it usually gets the conversation going in the right direction.