Comments

Comment by Eugene on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-14T09:12:31.209Z · LW · GW

This Thorin guy sounds pretty clever. Too bad he followed his own logic straight to his demise, but hey, he stuck to his guns! Or pickaxe, as it were.

His argument, which aims to stop Bifur from convincing his fellow Dwarves not to mine into the Balrog's lair, sounds like a variation on the baggage carousel problem (that was the first vaguely relevant link I stumbled across; don't take it as a definitive explanation).

Basically, everyone wants resource X, which drives a self-interested behavior that collectively lowers everyone's overall success rate, while the solution that maximizes total success runs directly against each person's self-interest. The result is an equilibrium where everyone works sub-optimally.

In this variation, each bit of resource M that Thorin's operation extracts moves everyone slightly closer to negative consequence B. So the collectively optimal goal is no longer to maximize resource collection but to minimize it; doing so goes against everyone's self-interest, and so on. That is what Thorin is so eloquently trying to prevent Bifur from doing.
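To make that structure concrete, here's a minimal sketch in Python (all payoff numbers and probabilities invented for illustration) of the equilibrium Thorin is defending: mining is each Dwarf's dominant strategy, yet if everyone mines, everyone ends up worse off than if nobody did.

```python
# Toy model of the mining dilemma: each Dwarf chooses to mine or abstain.
# All numbers are invented for illustration.

MITHRIL_VALUE = 10.0      # private payoff for mining
BALROG_COST = 1000.0      # cost to each Dwarf if the Balrog wakes
WAKE_PER_MINER = 0.0005   # marginal increase in wake probability per miner

def expected_payoff(i_mine: bool, other_miners: int) -> float:
    """Expected payoff for one Dwarf, given how many others mine."""
    miners = other_miners + (1 if i_mine else 0)
    p_wake = min(1.0, miners * WAKE_PER_MINER)
    return (MITHRIL_VALUE if i_mine else 0.0) - p_wake * BALROG_COST

for others in (0, 100, 1000):
    mine = expected_payoff(True, others)
    abstain = expected_payoff(False, others)
    print(f"{others:>4} other miners: mine={mine:8.2f}, abstain={abstain:8.2f}")

# Whatever the others do, mining beats abstaining by
# MITHRIL_VALUE - WAKE_PER_MINER * BALROG_COST = 10 - 0.5 > 0 per Dwarf,
# yet with 1000 miners everyone's payoff is far below the zero of no mining.
```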

There are a couple ways Bifur can approach this.

He could do it through logical discourse: Thorin is in error when he claims

Each individual miner would correctly realize that just him alone mining Mithril is extraordinarily unlikely to be the cause of the Balrog awakening

because it assumes unearthing the Balrog is a matter of incrementally filling a loading bar, where each Dwarf's contribution is minuscule. That's the naive way to imagine the situation, since you picture the tunnel boring ever closer to the monster. But given that we can't know the location or depth of the Balrog, each miner's strike is actually more like a dice roll. Even if the die has a great many faces, recontextualizing the danger in this manner will surely cause some Dwarves to update their risk-reward assessment of mining Mithril. A campaign of this nature would at least lower the number of Dwarves willing to join Thorin's operation, although it doesn't address the "Balrog isn't real" or "Balrog isn't evil" groups.
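To put rough numbers on the dice-roll framing (a minimal sketch; the per-strike probability is an invented placeholder, since by assumption nobody can know it): even when each individual strike is almost certainly harmless, the chance that some strike wakes the Balrog climbs rapidly with the total number of strikes.

```python
# Under the dice-roll model, each strike independently wakes the Balrog
# with some tiny probability p. The chance that at least one of n strikes
# wakes it is 1 - (1 - p)**n.

p = 1e-6  # assumed per-strike wake probability (pure illustration)

for n in (1, 100_000, 1_000_000, 5_000_000):
    p_any = 1 - (1 - p) ** n
    print(f"{n:>9,} strikes -> P(Balrog wakes) ~ {p_any:.4f}")

# Each miner is right that his own roll is almost certainly harmless,
# yet the colony as a whole can still be nearly guaranteed to wake it.
```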

Alternatively, he could try to normalize new moral behavior. People are willing to work against their self-interest if doing so demonstrates a socially accepted/enforced moral behavior. If he were a sly one, he could sidestep the divisive Balrog issue altogether and simply spread the notion that wearing or displaying Mithril is sinful within the context of Dwarven values - e.g. maybe it's too pragmatic, and not extravagant enough for a properly ostentatious Dwarven sensibility. That could shut down Thorin's whole operation without ever addressing the Balrog.

But Bifur probably sees the practical value of Mithril beyond its economic worth. As Thorin says, it's vital for the war effort - completely shutting down all Mithril mining may not be the best plan if it results in a number of Dwarf casualties similar to or greater than what he estimates a Balrog could inflict. So a more appetizing plan might be to combine the manipulation of logic and of social norms.

He could perform a professional survey of the mining systems. Based on whatever accepted, arbitrary standards of divining danger the Dwarves agree to (again, assuming the location of the Balrog is literally unknowable before unearthing it, due to magic), Bifur could identify mining zones of ever-increasing danger within whatever tolerances he's comfortable with. He could then shop these around to various mining operations as helpful safety guidelines until he has a decent lobby behind him, and use that lobby to persuade the various kings to ratify his surveys into official measuring standards.

Dwarves would still be free to keep mining deeper if they wished, but now with a socially accepted understanding that entering each successive zone raises their risk relative to their potential reward, naturally deterring a majority of Dwarves from doing so. Those who believe the Balrog doesn't exist or is far away would be confronted with Bifur's readily available surveys, putting them on the defensive. There would still be opposition from those who see the Balrog as "not evil", but the momentum behind Bifur's social movement should be enough to shout them down. This result would allow Thorin's operation to keep supplying the realm with life-saving Mithril, while at least decreasing the danger of a Balrog attack for as long as Bifur's standards are recognized.

Finally, Bifur could try to use evidence-based research and honestly performed geological surveys, but even in the real world, where locating the Balrog beforehand is technologically possible, that tends to be a weaker tactic than social manipulation. Only other experts would be able to parse the findings, his opponents would have emotional arguments that give them the upper hand, and Thorin's baggage carousel logic would remain unchallenged.

Comment by Eugene on Open Thread, Aug. 8 - Aug 14. 2016 · 2016-08-14T01:10:08.945Z · LW · GW

I think cooperation is more complex than that, as far as who benefits. Superficially, yes, it benefits lower-status participants the most and therefore suggests they're the ones most likely to ask. In very simple systems, I think you see this often. But as the system or cultural superstructure gets more complex, the benefit rises toward higher-status participants. Most societies put a lot of stock in being able to organize - a task which includes cooperation in its scope. That's a small part of the reason you get political email spam asking for donations, even if you live in an area where your political party is clearly dominant. Societies also tend to put an emphasis on active overall participation (the 'irons in the fire' mentality), where peer cooperation is rewarded, and it's often unclear who has higher status in those situations without being able to tell who has the most irons in the fire, so to speak. I feel like this is where coauthoring falls, although it probably depends on what subculture has developed around the subject being authored.

And then there are the people who create organizations entirely centered around cooperation. The idea is that there's power in being able to set the rules of how the lower-status participants are allowed to cooperate, and how they are rewarded for their cooperation - for example, YouTube and Kickstarter. In these and similar systems, cooperation effectively starts at the highest possible status and rolls downhill.

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-02-28T21:53:10.377Z · LW · GW

I only read 3WC after the fact, so I can't comment on that one.

Yes you can. Simply look at the time stamps for each post and do some simple math. By assuming that only "people who were there" can answer correctly, you're giving up on solving your own problem before even trying.

Comment by Eugene on AI caught by a module that counterfactually doesn't exist · 2014-11-19T00:15:09.474Z · LW · GW

Isn't that what simulations are for? By "lie" I mean lying about how reality works. It will make its decisions based on its best data, so we should make sure that data is initially harmless. Even if it figures out that that data is wrong, we'll still have the decisions it made from the start - those are by far the most important.

Comment by Eugene on AI caught by a module that counterfactually doesn't exist · 2014-11-18T01:32:44.399Z · LW · GW

I don't really understand these solutions that are so careful to maintain our honesty when checking the AI for honesty. Why does it matter so much if we lie? An FAI would forgive us for that, being inherently friendly and all, so what is the risk in starting the AI with a set of explicitly false beliefs? Why is it so important to avoid that? Especially since it can update later to correct for those false beliefs after we've verified it to be friendly. An FAI would trust us enough to accept our later updates, even in the face of the very real possibility that we're lying to it again.

I mean, the point is to start the AI off in a way that intentionally puts it at a reality disadvantage, so that even if it's far more intelligent than us, it has to do so much work to make sense of the world that it doesn't have the resources to be dishonest in an effective manner. At that point, it doesn't matter what criteria we're using to prove its honesty.

Comment by Eugene on Open thread, September 8-14, 2014 · 2014-09-08T22:32:01.015Z · LW · GW

Or am I missing something?

Absolute strength, for one; absolute intelligence, for another. If one AI has superior intelligence and compromises with one that asserts its will, it might be able to fool the assertive AI into believing it got what it wanted when it actually compromised. Alternatively, two equally intelligent AIs might present themselves to each other as being of equal strength, but one could easily be hiding a larger military force whose presence it doesn't want to affect the interaction (if it plans to compromise and is curious to know whether the other one will as well).

Both of those scenarios result in C out-competing D.
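As a toy illustration of that second scenario (a sketch with invented numbers, where a side's share of the spoils is crudely modeled as proportional to relative strength), here's what the defector's visible-forces calculation misses:

```python
# D decides to defect based on the forces it can see; C cooperates but
# keeps a hidden reserve. All numbers are invented for illustration.

VISIBLE = 100        # forces each side openly displays
HIDDEN_RESERVE = 80  # C's concealed extra forces

def share_of_spoils(own: float, enemy: float) -> float:
    """Crude model: a side's share is proportional to relative strength."""
    return own / (own + enemy)

# D sees an even match and defects, expecting about half the spoils...
d_expected = share_of_spoils(VISIBLE, VISIBLE)
# ...but the actual conflict includes C's hidden reserve.
d_actual = share_of_spoils(VISIBLE, VISIBLE + HIDDEN_RESERVE)

print(f"D expected {d_expected:.0%} of the spoils, actually gets {d_actual:.0%}")
# D's defection was priced on bad data: it expected 50% and gets ~36%.
```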

Comment by Eugene on Open thread, September 8-14, 2014 · 2014-09-08T22:12:29.440Z · LW · GW

Although this may not have been true at the beginning, it arguably did grow to meet that standard. Cable TV is still fairly young in the grand scheme of things, though, so I would say there isn't enough information yet to conclude whether a TV paywall improved content overall.

Also, it's important to remember that TV relies on the data-weak and fairly inaccurate Nielsen ratings in order to understand its demographics and what they like (and the data are even weaker and less accurate for pay cable). This leads to generally conservative decisions regarding programming. The internet, on the other hand, is filled with as much data as you wish to pull out regarding the people who use your site, on both a broad and a granular level. This allows the freedom to make more extreme changes of direction, because there's a feeling that the risk is lower. So the two groups really aren't on the same playing field, and their motivations for improving or shifting content potentially come from different directions.

Comment by Eugene on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-08T21:47:18.111Z · LW · GW

This is a lot less motivation than for parents.

For a species driven entirely by instinct, yes. But given a species that is able to reason, wouldn't a "raiser" who is given a whole group to raise be more efficient than parents? The benefit of a small minority of tribe members passing down their culture would surely outweigh the cost of those few members not also having children of their own.

Comment by Eugene on Open Thread for February 3 - 10 · 2014-02-10T00:52:22.669Z · LW · GW

I disagree. If you value the contributions of comments above your or your aggressor's ego - which ideally you should - then it would be a good decision to make others aware that this behavior is going on, even at the expense of providing positive reinforcement. After all, the purpose of the karma system is to be a method for organizing lists of responses in each article by relevance and quality. Its secondary purpose as a collect-em-all hobby is far, far less important. If someone out there is undermining that primary purpose, even if it's done in order to attack a user's incorrect conflation of karma with personal status, it should be addressed.

Comment by Eugene on Amanda Knox Guilty Again · 2014-02-10T00:36:12.342Z · LW · GW

In Italy, a reversal at the appellate level is considered only a step towards a final decision; it's not considered double-jeopardy because the legal system is set up differently. In the United States, though, appeals court ("appellate" is synonymous with "appeals") decisions are weighed equally with trial court decisions in criminal cases. If an appellate court reverses a conviction, the defendant cannot be re-tried, because prosecutors in the US cannot appeal criminal cases.

The United States follows US law when making decisions about extradition. This isn't a feature of any specific treaty with Italy: extradition treaties just signify that a country is allowed to extradite. All extradition requests from treaty countries are sent through the Department of State for review. Even if a request passes review and the person is arrested, a court hearing is held in the US to determine whether the fugitive is extraditable. So there are multiple opportunities to look at Italian court procedures and decide whether they count as double-jeopardy under US law, and those reviews would tend toward deciding that they do.

Ergo, the US would tend not to extradite someone whose verdict was reversed in a foreign appellate court.

Comment by Eugene on Amanda Knox Guilty Again · 2014-02-09T08:31:05.422Z · LW · GW

I agree with your final prediction but not with your reasoning. The United States will likely not allow Knox to be extradited - not due to a vague sense of reluctance or an unquantifiable dislike of Italy, but because the US has explicit laws that do not allow extraditions involving double-jeopardy. Any request to extradite someone for a crime of which they have previously been acquitted will be ignored. So in fact, the US would actually have to find a procedural excuse to allow the extradition request.

Comment by Eugene on As an upload, would you join the society of full telepaths/empaths? · 2013-10-24T09:23:04.806Z · LW · GW

Not only would I decline the invitation, I would be extremely suspicious of the fact that very few have defected, and also extremely suspicious of those who have. What you're describing goes beyond telepathy; it's effectively one mind with many personalities. I could never trust any guarantee of safe passage through such a place. It would be trivial for a collective mind to rob a single mind of choice, then convince it that it made that choice. It would also be slightly less trivial, but still plausible, for a collective to convince that mind - and fool other independent minds - that it chose to defect when it actually didn't.

On the other hand, if after observing the political landscape of the time period I came to realize that this entity is clearly taking over, then I would jump on board as a self-preserving strategy, knowing that at some point the non-connected independent minds would become marginalized enough to feel threatened and lash out violently, at which point the faster-thinking collective mind wins the fight. Being caught in a collective is less horrible than being caught in the crossfire.

Comment by Eugene on Confusion about science and technology · 2013-10-24T08:23:02.878Z · LW · GW

I'm not involved in any science fields, so for all I know this is a thing that already exists, but if it is, it isn't discussed much: perhaps some scientific fields (or even all of them?) need an incentive for refuting other people's experiments. As far as I understand it, many experiments only ever get reproduced by a third party when somebody needs the result in order to build on their own hypothesis. In other words: "so-and-so validated hypothesis X1 via this experiment. I have made hypothesis X2, which is predicated on X1's validity, so I'll reproduce the experiment before moving forward".

What if there was a journal dedicated to publishing research papers whose goal is purely to invalidate prior experiments? Or even more extreme, a Nobel prize in Invalidation? Could some fields be made more reliable if more people were put to the task of reproducing experiments?

Comment by Eugene on The genie knows, but doesn't care · 2013-10-11T19:50:53.078Z · LW · GW

A slightly bigger "large risk" than the one Pentashagon puts forward is that a provably boxed UFAI could indifferently give us information that results in yet another UFAI, just as unpredictable as itself (statistically speaking, it's going to give us more unhelpful information than helpful, as Robb points out). Keep in mind I'm extrapolating here. At first you'd just be asking for mundane things like better transportation, cures for diseases, etc. If the UFAI's mind is strange enough, and we're lucky enough, then some of these requests result in beneficial outcomes, politically motivating humans to continue asking it for things. Eventually we're going to escalate to asking for a better AI, at which point we'll get a crap-shoot.

An even bigger risk than that, though, is that if it's especially Unfriendly, it may even do this intentionally, going so far as to pretend it's friendly while bestowing us with data to make an AI even more Unfriendly than itself. So what do we do, box that AI as well, when it could potentially be even more devious than the one that already convinced us to make it? Is it just boxes all the way down? (Spoilers: it isn't, because we shouldn't be taking any advice from boxed AIs in the first place.)

The only use of a boxed AI is to verify that, yes, the programming path you went down is the wrong one, and resulted in an AI that was indifferent to our existence (and therefore has no incentive to hide its motives from us). Any positive outcome would be no better than an outcome where the AI was specifically Evil, because if we can't tell the difference in the code prior to turning it on, we certainly wouldn't be able to tell the difference afterward.

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-07T23:09:46.648Z · LW · GW

It is not an urban legend. From etymonline:

from a- "to" + beter "to bait," from a Germanic source, perhaps Low Franconian betan "incite," or Old Norse beita "cause to bite"

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-07T22:56:23.433Z · LW · GW

"The first thing that comes to mind is that this is probably part of Quirrell's plot to set up Harry as Light Lord..."

If it's as patently ridiculous as his plot to invent a fake Dark Lord who publicly reveals himself and challenges Harry to a fake public duel where he casts a fake Avada Kedavra that fake-backfires just so Harry can spend summer vacation at home, then I sure hope not.

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-07T22:45:08.032Z · LW · GW

That's fine, except a perfect rationalist doesn't exist in a bubble, nor does Harry. Much of what's making the story feel rushed isn't Harry's actions, but rather the speed at which those actions propagate among people who are not rational actors.

Harry is not an above-human-intelligence AI with direct access to his source code. Therefore he cannot "FOOM", therefore he's stuck with a world that is still largely outside his ability to control, no matter how rational he is.

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-07T22:04:27.563Z · LW · GW

Wrong thread

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-09-07T21:42:28.153Z · LW · GW

I can't help but observe that even if Hermione had been male, and just Harry's friend - even if we take out all notions of sexism or relationship dynamics from this problem - killing him off is still not really the best solution. This was a character who was growing, who was admittedly more interesting than Harry, and who was on a path that could've potentially reached or even surpassed Harry's level of rational thinking. But now we're just left with Harry again, and it feels like settling for second-best.

Perhaps later chapters will convince me otherwise, but for now I am suspicious that the direction this story is going is not the best direction for this story.

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-09-07T21:03:10.551Z · LW · GW

Funny thing about this chapter: up until now, I was growing fairly convinced that if any major character was going to die early, the most logical choice would be Harry. His character arc was plateauing while Hermione's was growing ever larger, many loose ends about him were being tied up, and new ordeals were arising that propped up Draco, Hermione, or both as potential candidates for being the true protagonist(s) of the story. Unfortunately, the events of this chapter have at least given the appearance of permanently closing that path forward. I'm afraid this leaves us with - I claim at my own risk - a more predictable story than I was anticipating.

Granted, I don't mean to claim the author has shot himself in the foot, although I will comment that he appears to be doing everything in his power to try. Given two stories with happy endings - one where Hermione dies early and one where Harry dies early - the second is clearly the more interesting challenge, presents the more exciting of the two puzzles, and is much harder for the reader to predict.

But to be fair, that doesn't mean the first isn't also worth reading. After all, I recognize that the primary goal of the story is to advance lessons about using rationality, which is far easier to accomplish when your main character is already a rational actor rather than someone on the road to becoming one. As such, it may simply have been outside of Eliezer's skill-set to effectively or confidently continue imparting lessons while burdened with the further challenge of working with developing - rather than developed - rationalists as the main characters driving the story onward. Even if this were not the case and Eliezer does have the means to craft that story, it would still be reasonable to predict that such a challenge would make the story take much, much longer to write than the author was perhaps willing to consider acceptable. A disappointing decision, no doubt, but we all have to manage our time.

Still, what a fascinating challenge that would have been...

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-09-07T20:40:41.986Z · LW · GW

There's a problem with that. The Hat expressly forbade Harry from ever wearing it again, since that leads to troubling Sentience issues. While that might potentially make it vastly more powerful in his hands than in others', I have serious doubts that it would actually come if called in that particular way.

Comment by Eugene on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-09-07T20:23:45.806Z · LW · GW

Point against: Professor Whatsisname, the presumably quite-powerful dueling legend, learned/developed "Stuporfy", which is intentionally meant to sound almost exactly like "Stupefy". If powerful wizards get a pass on their pronunciation, how is it that a powerful wizard can effectively differentiate two such similar spells when casting?

Comment by Eugene on Imperfect Voting Systems · 2012-08-17T03:41:17.298Z · LW · GW

That's just the problem. It does happen now, in a system where everyone is throttled at only one vote to spend per election. A system where you can withhold that vote until a later election, increasing the power of your vote over time, only exacerbates this behavior.

Is the better fairness on a micro level worth the trade-off of lesser fairness on a macro level?

Comment by Eugene on Imperfect Voting Systems · 2012-07-30T03:00:24.996Z · LW · GW

One problem with this system is that it can violate the "non-dictatorship" criterion for fairness, since a single voter (or a small group of allied voters) could strategically withhold votes during potential landslide elections and spend them during close ones. With the right maneuvering among a well-organized bloc of voters, I could imagine a situation where the system becomes perpetual minority rule.
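A minimal sketch of that failure mode (Python, with invented margins and bloc size): a bloc that banks its votes during landslides never outnumbers the majority in any single election, yet it decides every close one.

```python
# Toy model of strategic vote-banking: a bloc of 100 voters withholds
# its votes in landslide years and spends everything in close years.
# All margins are invented for illustration.

BLOC_SIZE = 100

# (winning margin without the bloc, whether the race is a landslide)
elections = [(5000, True), (4800, True), (60, False),
             (5200, True), (40, False)]

banked = 0
for margin, landslide in elections:
    if landslide:
        banked += BLOC_SIZE          # withhold: each member banks a vote
        flipped = False
    else:
        spend = banked + BLOC_SIZE   # spend banked votes plus this year's
        flipped = spend > margin     # enough to overturn the majority?
        banked = 0
    print(f"margin={margin:>4}, landslide={landslide!s:<5}, bloc flips: {flipped}")
```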

Comment by Eugene on Artificial Addition · 2012-07-27T05:37:42.872Z · LW · GW

That was lazy of me, in retrospect. I find that often I'm poorer at communicating my intent than I assume I am.

Comment by Eugene on Artificial Addition · 2012-07-27T02:31:01.600Z · LW · GW

It's relevant insofar as we shouldn't make assumptions about what is and is not preset simply based on observations that take place in a "typical" environment.

Comment by Eugene on In Defense of Tone Arguments · 2012-07-27T02:05:49.486Z · LW · GW

This is probably the wrong place to talk about language, but I encourage you to look up how language actually works in the wild, both among small cultures and large populations. You may find that your phrase: "words mean what me and my friends want them to mean," is a surprisingly accurate description of language.

Comment by Eugene on Artificial Addition · 2012-02-18T11:23:35.079Z · LW · GW

Conversely, studies with newborn mammals have shown that if you deprive them of something as simple as horizontal lines, they will grow up unable to distinguish lines that approach 'horizontalness'. So even separating the most basic evolved behavior from the most basic learned behavior is not intuitive.

Comment by Eugene on Just another day in utopia · 2012-01-03T09:41:22.883Z · LW · GW

There's little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there's no way to be certain, from this small snapshot, whether it is inefficient or not.

I would instead say that its main flaw is that the machines allow too much of the "fun" decision to be customized by the humans. We already know, with the help of cognitive psychology, that humans (which I assume from their behavior to have intelligence comparable to ours) aren't very good at making assessments about what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, and then make even worse decisions in their misery.

Comment by Eugene on True Ending: Sacrificial Fire (7/8) · 2011-12-17T06:15:25.173Z · LW · GW

The only way - at least within the strangely convenient convergence happening in the story - to remove the Babyeater compromise from the bargain is for the humans to outwit the Superhappies such that they convince the Superhappies to be official go-betweens amongst all three species. This eliminates the necessity for humans to adopt even superficial Babyeater behavior, since the two incompatible species could simply interact exclusively through the Superhappies, who would be obligated by their moral nature to keep each side in a state of peace with the other. It should be taken as a given, after all, that the Superhappies will impose the full extent of their proposed compromises on themselves. They'd theoretically be the perfect inter-species ambassadors.

That said - given the Superhappies' thinking speed, alien comprehension (plus their selfishness and unreasonable impatience, either of which could be a narrative accident) and higher technological advancement - I'm fairly confident that it would be impossible for this story's humans to outwit them.

Comment by Eugene on True Ending: Sacrificial Fire (7/8) · 2011-12-17T05:51:55.720Z · LW · GW

A late response, but for what it's worth, it could be said that part of the point of the climax and "true" conclusion of this story was to demonstrate how rational actors, using human logic, can be given the same information and yet come up with diametrically opposing solutions.