Posts

Winning the Unwinnable 2010-01-21T03:01:48.371Z

Comments

Comment by JRMayne on Request for advice: high school transferring · 2016-03-03T01:11:46.932Z · LW · GW

Biases/my history: I went to a good public high school after indifferent public elementary and junior high schools. I attended an Ivy League college. My life would have been different if I had gone to academically challenging schools as a youth. I don't know if it would have been better or worse; things have worked out pretty well.

You come off as very smart and self-aware. Still, I think you underrate the risk of ending up as an other-person at the public high school; friends may not be as easy as you expect. Retreating to a public high school may also require explanation to college recruiters.

I also think your conclusion that you would study better with more friends may be a self-persuading effort that there are scholastic reasons to switch. But there don't have to be scholastic reasons: Being unhappy for two more years in your teens is a big deal, and if you are satisfied that your happiness will increase substantially by switching, you should switch. Long view is nice, but part of that view should be that two years of a low-friend existence sounds no fun, and the losses of switching are likely to be minimal.

Finally, commuting is a life-killer. Adults very commonly underrate the quality-of-life loss from commuting. (I commute 10 minutes each way; I have had jobs with one-hour commutes.) I'd suggest the time lost is even more costly for a teenager.

Finally finally, I'm confident you'll get this right for you. Take a look at these responses, talk it out, then rock on. Be good, stay well.

Comment by JRMayne on Stupid Questions March 2015 · 2016-01-26T16:36:51.579Z · LW · GW

Fantastic!

Well done, sir.

Comment by JRMayne on Stupid Questions March 2015 · 2016-01-25T22:48:06.219Z · LW · GW

Gah. I don't remember the solution or the key. And I just last week had a computer crash (replaced it), so, I've got lots of nothing. Sorry.

I am sure of (1) and (2). I don't remember (3), and it's very possible it's longer than 10 (though probably not much longer.) But I don't remember clearly. That's the best I can do.

Drat.

Comment by JRMayne on How my social skills went from horrible to mediocre · 2015-05-20T08:28:33.314Z · LW · GW

I think it is worse than hopeless on multiple fronts.

First problem:

Let's take another good quality: Honesty. People who volunteer, "I always tell the truth," generally lie more than the average population, and should be distrusted. (Yes, yes, Sam Harris. But the skew is the wrong way.) "I am awesome at good life quality," generally fails if your audience has had, well, significant social experience.

So you want to demonstrate this claim by word and deed, and not explicitly make the claim in most cases. Here, I understand the reason for making it, and the parts where you say you want good things to happen to people are fine. (I have on LW said something like, "I have a reputation for principled honesty, says me," in arguing that game tactics were not dishonest and should not apply to out-of-game reputation.) But the MLK thing is way-too-much, like "I never lie," is way-too-much.

Second problem:

As others have said, the comparison is political and inapt. You couldn't find anyone less iconic? Penn Jillette? Someone?

And MLK is known for his actions and risks and willingness to engage in non-violence. I read somewhere that ethnic struggles sometimes end badly. In a world where the FBI was trying to get him to kill himself, he stood for peace. Under those circumstances, his treatment of other humans was generally very good. That's not a test you've gone through.

Third problem:

The confidence of the statement is way, way out of line with where it should be. You have some idea of MLK's love and compassion for other people, but not all of it. Maybe MLK thought, "Screw all those people in government; hope they die screaming. But I think that war leads to more losses for black people, so despite my burning hatred, I'm putting on a better public face." (I admit this is unlikely.) He certainly had some personal bad qualities. Maybe you love people more than MLK. (This also seems unlikely, but stay with me.)

We cannot measure love and compassion in kilograms. We also do not know what people are like all the time. I realize that we can put people into general buckets, but I'd caution against the sort of precision, about others and yourself, that lets you call people equivalent by this measure. And if we could measure it, there would be no infinite values.

Fourth problem:

As infinite love for all humans is not possible... well, it's not even a good idea. You shouldn't have compassion and love for all people. The guy who just loves stabbing toddlers needs to be housed away from toddlers even though we're ruining his life, which was so happy in those delightful toddler-stabbing days. And if you're using your love and compassion on that guy, well, maybe there are other people who can get some o' that with better effect.

Because love and compassion isn't really a meaningful construct if it's just some internal view of society with no outward effects. Love and compassion is mostly meaningful only in what's done (like, say, leading life-risking marches against injustices.)

OK, that's it. Hope it helps.

Comment by JRMayne on Stupid Questions March 2015 · 2015-03-07T07:08:31.570Z · LW · GW

I agree that I made my key too long so it's a one-time pad. You're right.

"Much easier"? With or without word lengths? OK, no obligation, but I didn't make this too brutal:

Vsjfodjpfexjpipnyulfsmyvxzhvlsclqkubyuwefbvflrcxvwbnyprtrrefpxrbdymskgnryynwxzrmsgowypjivrectssbmst ipqwrcauwojmmqfeohpjgwauccrdwqfeadykgsnzwylvbdk.

(Again, no obligation to play, and no inference should be taken against gjm's hypothesis if they decline.)

Comment by JRMayne on Stupid Questions March 2015 · 2015-03-05T17:47:03.251Z · LW · GW

I encrypt messages for another, goofier purpose. One of the people I am encrypting from is a compsci professor.

I use a Vigenere cipher, which should beat everything short of the Secret Werewolf Police, and possibly them, too. (It is, however, more crackable than a proper salted, hashed output.)

In a Vigenere, the letters in your input are moved by the numerical equivalent of the key, and the key repeats. Example:

Secret Statement/lie: Cats are nice. Key: ABC

New, coded statement: dcwt (down 1, 2, 3, 1) cuf pldg. Now, I recommend using long keys and spacing the output in five-letter blocks to make solving harder.

You can do this online:

http://sharkysoft.com/vigenere/

This will transmute "It seems unlikely the werewolf police will catch you," with the key "The movie ends the same way for all of us JRM." to:

Cbxrt ibzsz mdytd minyzw wxqdt cjeph bfqhr leuqh oxvbg tn.

(Letter grouping by me.)

Again, Vigenere ciphers are potentially crackable, but cracking them is very hard. It's easier for the werewolf police to just come and eat anyone who puts up hashed or Vigenere-ciphered predictions.
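For anyone who wants to play along without the website, here is a minimal sketch of the scheme, using the same letter-numbering convention as the example above (A shifts by 1, B by 2, and so on). The function names and the five-letter grouping helper are mine:

```python
# Minimal Vigenere cipher using the convention above: each key letter's
# alphabet position (a=1, b=2, ...) is the shift. Non-letters are dropped.

def vigenere(text, key, decrypt=False):
    letters = [c for c in text.lower() if c.isalpha()]
    shifts = [ord(k) - ord('a') + 1 for k in key.lower() if k.isalpha()]
    out = []
    for i, c in enumerate(letters):
        s = shifts[i % len(shifts)]
        if decrypt:
            s = -s
        out.append(chr((ord(c) - ord('a') + s) % 26 + ord('a')))
    return ''.join(out)

def grouped(s, n=5):
    # Space the output in n-letter blocks, as recommended above.
    return ' '.join(s[i:i + n] for i in range(0, len(s), n))

print(grouped(vigenere("Cats are nice", "ABC")))  # dcwtc ufpld g
```

Running the same function in reverse with decrypt=True recovers the plaintext, which is a quick sanity check on any key you pick.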

Comment by JRMayne on Innate Mathematical Ability · 2015-02-18T19:24:46.056Z · LW · GW

I did it even more simply than that: Count things. Most have four iterations. Some have three iterations. The ones with three, make four. Less than 10 seconds for me. Same answer as the rest of everyone.

Comment by JRMayne on Memes and Rational Decisions · 2015-01-11T22:58:44.979Z · LW · GW

Nitpick: Asimov was a member of Mensa on and off, but was highly critical of it, and didn't like Mensans. He was an honorary vice president, not president (according to Asimov, anyway). And he wasn't very happy about it.

Relevantly to this: "Furthermore, I became aware that Mensans, however high their paper IQ might be, were likely to be as irrational as anyone else." (See the book "I.Asimov," pp.379-382.) The vigor of Asimov's distaste for Mensa as a club permeates this essay/chapter.

Nitpick it is, but Asimov deserves a better fate than having a two-sentence bio associate him with Mensa.

Comment by JRMayne on Is arguing worth it? If so, when and when not? Also, how do I become less arrogant? · 2014-12-01T04:27:23.383Z · LW · GW

It's almost always a good thing, agreed.

Smart people's willingness to privilege their own hypotheses on subjects outside their expertise is a chronic problem.

I have a very smart friend I met on the internet; we see each other when we are in each other's (thousand-mile-away) neighborhoods. We totally disagree on politics. But we have great conversations, because we can both laugh at the idiocy of our tribes. If you handle argument as a debate with a winner and a loser, no one wins and no one has any fun. I admit that it takes two people willing to treat it as an actual conversation, but you can help it along.

Comment by JRMayne on xkcd on the AI box experiment · 2014-11-21T16:20:17.865Z · LW · GW

Oh, for pity's sake. You want to repeatedly ad hominem attack XiXiDu for being a "biased source." What of Yudkowsky? He's a biased source - but perhaps we should engage his arguments, possibly by collecting them in one place.

"Lacking context and positive examples"? This doesn't engage the issue at all. If you want to automatically say this to all of XiXiDu's comments, you're not helping.

Comment by JRMayne on Open thread, 3-8 June 2014 · 2014-06-03T14:57:09.977Z · LW · GW

It's a feature, not a bug. The friendly algorithm that creates that column assumes you would rationally prefer Atlanta or Houston to anywhere within 40 miles of Detroit.

Comment by JRMayne on White Lies · 2014-02-18T06:19:20.794Z · LW · GW

Let's start with basic definitions: Morality is a general rule that, when followed, offers a good utilitarian default. Maybe you don't agree with all of these, but if you don't agree with any of them, we differ:

-- Applying for welfare benefits when you make $110K per year, certifying you make no money.

Reason: You should not obtain your fellow citizens' money via fraud.

-- "Officer Friendly, that man right there, the weird white guy, robbed/assaulted/(fill in unpleasant crime here) me."

Reason: It is not nice to try to get people imprisoned for crimes they did not commit.

-- "Yes it is my testimony that Steve Infanticider was with me all night, and not killing babies. So you shouldn't keep him in custody, your honor."

Reason: Even if you dislike the criminal justice system, it seems like some respect is warranted.

-- "No, SEC investigators, I, Bernie Madoff, have a totally real way of making exactly 1.5% a month, every month, in perpetuity."

Reason: You shouldn't compound prior harm to your fellow humans.

-- "I suffer no sudden blackouts, Department of Motor Vehicles."

Reason: You should not endanger your fellow drivers.

That was five off the top of my head. This is in response to SaidAchmiz, because I still think it's possible that Eliezer meant something different than I interpreted, though I don't understand it. I also think that in the U.S. you shouldn't lie on your taxes, lie to get on a jury with the purpose of nullifying, lie about bank robberies you witness, lie about your qualifications to build the bridge, lie about the materials you intend to use to build the bridge, lie about the need for construction change orders, lie about the number of hours worked... you get the picture.

I understand that some disagree. I also understand that if you live in North Korea, the rules are different. But I think a blanket moral rule that lying to the government has only one flaw - you might get caught or it might not work - is a terrible moral rule.

Because the government has power over you, you get no moral demerits for lying to them? Nuh-uh.

Comment by JRMayne on White Lies · 2014-02-17T05:42:15.653Z · LW · GW

Wait, what?

You're saying it's never morally wrong to lie to the government? That the only possible flaw is ineffectiveness?

Either I am misreading this, you have not considered this fully, or one of us is wrong on morality.

I think there are many obvious cases in which in a moral sense, you cannot lie to the government.

Comment by JRMayne on White Lies · 2014-02-08T19:37:39.046Z · LW · GW

There's a fundamental problem with lying that goes unaddressed here: it tends to reroute your defaults to "lie" whenever "lie" = "personal benefit."

As a human animal, if you lie smoothly and routinely in some situations, you are likely to be more prone to lying in others. I know people who will lie all the time for little reason, because it's ingrained habit.

I agree that some lies are OK. Your girlfriend anecdote isn't clearly one of them - there may be presentation issues on your side. ("It wasn't the acting style I prefer," vs. "It's nice that you hired actors without talent or energy, because otherwise, where would they be?") But if you press for truth and get it, that's on you. (One of my Rules of Life: Don't ask questions you don't want to know the answer to.)

But I think every lie you tell, you should know exactly what you are doing and what your goals are and consciously consider whether you're doing this solely for self-preservation. If you can't do this smoothly, then don't lie. Getting practice at lying isn't a good idea.

I note here that I think that a significant lie is a deliberate or seriously reckless untruth given with the mutual expectation that it would be reasonable to rely on it. Thus, the people who are untruthing on (say) Survivor to their castmates... it's a game. Play the game. When Penn and Teller tell you how their trick works, they are lying to you only in a technical respect; it's part of the show.

But actual lying is internally hazardous. You will try to internally reconcile your lies, either making up justifications or telling yourself it's not really a lie - at least, that's the way the odds point. There's another advantage with honesty - while it doesn't always make a good first impression, it makes you reliable in the long-term. I'm not against all lies, but I think the easy way out isn't the long-term right one.

Comment by JRMayne on Open Thread for February 3 - 10 · 2014-02-06T00:12:03.198Z · LW · GW

Aside: Poker and rationality aren't close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.

To the point: I don't play poker online because it's illegal in the US. I play live four days a year in Las Vegas. (I did play more in the past.)

I'm significantly up. I am reasonably sure I could make a living wage playing poker professionally. Unfortunately, the benefits package isn't very good, I like my current job, and I am too old to play the 16-hour days of my youth.

General tips: Play a lot. To the extent that you can, keep track of your results. You need surprisingly large sample sizes to determine whether you're really a winner unless you have a signature performance. (If you win three 70-person tournaments in a row, you are better than that class of player.) No-limit hold-'em (my game of choice) is a game where you can win or lose based on luck a lot of the time. Skill will win out over very long periods of time, but don't get too cocky or depressed over a few days' work.

Try to keep track of those things you did that were wrong at the time. If you got all your chips in pre-flop with AA, you were right even if someone else hits something and those chips are now gone. This is the first-order approximation.

Play a lot, and try to get better. If you are regularly losing over a significant period of time, you are doing something wrong. Do not blame the stupid players for making random results. (That is a sign of the permaloser.)

Know the pot math. Know that all money in the pot is the same; how much of it came from you doesn't matter. Determine your goals: Do you want to fish-hunt (find weak games, kill them) or are you playing for some different goal? Maybe it's more fun to play stronger players. Plus, you can get better faster against stronger players, if you have enough money.

Finally, don't be a jerk. Poker players are generally decent humans at the table in my experience. Being a jerk is unpleasant, and people will be gunning for you. It is almost always easier to take someone's money when they are not fully focused on beating you. Also, it's nicer. Don't (in live games) slow-roll, give lessons, chirp at people, bark at the dealer, or any of that. Poker is a fun hobby.

Comment by JRMayne on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-11T22:34:33.247Z · LW · GW

I'll bite. (I don't want the money. If I get it, I'll use it for what is considered by some on this site as ego-gratifying wastage for Give Directly or some similar charity.)

If you look around, you'll find "scientist"-signed letters supporting creationism. Philip Johnson, a Berkeley law professor, is on that list, but you find a very low percentage of biologists. If you're using lawyers to sell science, you're doing badly. (I am a lawyer.)

The global warming issue has better lists of people signing off, including one genuinely credible human: Richard Lindzen of MIT. Lindzen, though, has oscillated from "manmade global warming is a myth," to a more measured view that the degree of manmade global warming is much, much lower than the general view. The list of signatories to a global warming skeptic letter contains some people with some qualifications on the matter, but many who do not seem to have expertise.

Cryonics? Well, there's this. Assuming they would put any neuroscience qualifications that the signatories had... this looks like the intelligent design letters. Electrical engineers, physicists... let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics:

  1. Kenneth Hayworth, a post-doc now at Harvard.

  2. Ravin Jain, Los Angeles neurologist. He was listed as an assistant professor of neurology at UCLA in 2004, but he's no longer employed by UCLA.

That's them. There are a number of other doctors on there; looking up the people who worked for cryonics orgs is fun. Many of them have interesting histories, and many have moved on. The letter is pretty lightweight; it just says there's a credible chance that they can put you back together again after the big freeze. I think computer scientists dominate the list. That is a completely terrible sign.

There are other conversations here and elsewhere about the state of the brain involving interplay between the neurons that's not replicable with just the physical brain. There's also the failure to resuscitate anyone from brain death. This provides additional evidence that this won't work.

Finally, the people running the cryonics outfits have not had the best record of honesty and stability. If Google ran a cryonics outfit, that would be more interesting, for sure. But I don't think that's going to happen; this is not the route to very long life.

[Edit 1/14 - fixed a miscapitalization and a terrible sentence construction. No substantive changes.]

Comment by JRMayne on 2013 Less Wrong Census/Survey · 2013-11-23T00:43:11.818Z · LW · GW

Took. Definitely liked the shorter nature of this one.

Cooperated (I'm OK if the money goes to someone else. The amount is such that I'd ask that it get directly sent elsewhere, anyway.)

Got Europe wrong, but came close. (Not within 10%.)

Comment by JRMayne on A Voting Puzzle, Some Political Science, and a Nerd Failure Mode · 2013-10-12T01:19:41.225Z · LW · GW

Pi=4:

http://www.youtube.com/watch?v=D2xYjiL8yyE

(Sadly, Vi Hart rejects the obvious proof.)

Comment by JRMayne on Probability, knowledge, and meta-probability · 2013-09-14T23:07:44.080Z · LW · GW

I really liked the article. So allow me to miss the forest for a moment; I want to chop down this tree:

Let's solve the green box problem:

Try zero coins: EV: 100 coins.

Try one coin, give up if no payout: 45% of 180.2 + 55% of 99 = c. 135.5 (I hope.)

(I think this is right, but welcome corrections; 90%x50%x178, +.2 for first coin winning (EV of that 2 not 1.8), + keeper coins. I definitely got this wrong the first time I wrote it out, so I'm less confident I got it right this time. Edit before posting: Not just once.)

Try two coins, give up if no payout:

45% of 180.2 (pays off on the first coin)
4.5% of 178.4 (pays off on the second)
50.5% of 98 (give up)

Total: c. 138.6

I used to be quite good at things like this. I also used to watch Hill Street Blues. I make the third round very close:

45% of 180.2
4.5% of 178.4
0.45% of 176.6
50.05% of 97 (give up)

Or c. 138.5.

So, I pick two as the answer.
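A short script reproduces these numbers (to rounding), under my reading of the setup: half of green boxes pay two coins per coin inserted, each coin paying with 90% probability; the other half never pay; you start with 100 coins. The function and parameter names are mine:

```python
# EV of the strategy "insert up to k coins; give up after k straight
# failures; once the box pays, feed in everything that's left."

def expected_coins(k, start=100, p_payer=0.5, p_hit=0.9, payout=2):
    ev = 0.0
    for i in range(1, k + 1):
        # First payout happens on coin i: the box is a payer, the first
        # i-1 coins missed, and coin i hits.
        p_first_hit = p_payer * (1 - p_hit) ** (i - 1) * p_hit
        # Coin i returns `payout`; each of the remaining start-i coins is
        # then worth p_hit * payout in expectation.
        ev += p_first_hit * (payout + (start - i) * p_hit * payout)
    # No payout in k tries: walk away with the uninserted coins.
    p_give_up = (1 - p_payer) + p_payer * (1 - p_hit) ** k
    return ev + p_give_up * (start - k)

for k in range(4):
    print(k, round(expected_coins(k), 2))  # 100.0, 135.54, 138.61, 138.46
```

Two coins does come out best under these assumptions.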

Quibble with the sportsball graph:

You have little confidence, for sure, but chance of winning doesn't follow that graph, and there's just no reason it should. If the Piggers are playing the Oatmeals, and you know nothing about them, I'd guess at the junior high level the curve would be fairly flat, but not that flat. If they are professional sportsballers of the Elite Sportsballers League, the curve is going to have a higher peak at 50; the Hyperboles are not going to be 100% to lose or win to the Breakfast Cerealers in higher level play. At the junior high level, there will be some c. 100%ers, but I think the flatline is unlikely, and I think the impression that it should be a flat line is mistaken.

Once again, I liked the article. It was engaging and interesting. (And I hope I got the problem right.)

Comment by JRMayne on How valuable is it to learn math deeply? · 2013-09-04T06:52:00.207Z · LW · GW

"Computational biology," sounds really cool. Or made up. But I'm betting heavily on "really cool." (Reads Wikipedia entry.) Outstanding!

Anyway, I concede that you are right that calculus has uses in advanced statistics. Calculus does make some problems easier; I'd like calculus to be used as a fuel for statistics rather than almost pure signaling. I actually know people who ended up having real uses for some calculus, and I've tried to stay fluent in high school calculus partly for its rare use and partly for the small satisfaction of not losing the skill. And probably partly for reasons my brain has declined to inform me of.

I nonetheless generally stand by my statement that we're wasting one hell of a lot of time teaching way too much calculus. So we basically agree on all of this; I appreciate your points.

Comment by JRMayne on How valuable is it to learn math deeply? · 2013-09-03T16:28:30.113Z · LW · GW

Random thoughts:

  1. The decision that smart high school students should take calculus rather than statistics (in the U.S.) strikes me as pretty seriously misguided. Statistics has broader uses.

  2. I got through four semesters of engineering calculus; that was the clear limit of my abilities without engaging in the troublesome activity of "trying." I use virtually no calculus now, and would be fine if I forgot it all (and I'm nearly there). I think it gave me no or almost no advantages. One readthrough of Scarne on Gambling (as a 12-year-old) gave me more benefit than the entirety of my calculus education.

  3. I ended up as the mathiest guy around in a non-math job. But it's really my facility with numbers that makes it; my wife (who has a master's degree in math) says what I am doing is arithmetic and not math, but very fast and accurate arithmetic skills strike me as very handy. (As a prosecutor, my facility with numbers comes as a surprise to expert witnesses. Sometimes, they are sad afterward.)

  4. Anecdotally, math education may make people crazy or attract crazy people disproportionately. I think that pursuit of any topic aligns your brain to think in a way conducive to that topic.

My tentative conclusions are that advanced statistics has uses in understanding the world; other serious math is fun but probably not optimal use of time, unless it's really fun. "Really fun," has value. This conclusion is based on general observation, and is hardly scientific; I may well be wrong.

Comment by JRMayne on Finding interesting communities · 2013-05-30T22:58:08.309Z · LW · GW

As others note, large areas make finding good groups much easier. Population density, and type of density is key.

I've never been a member of Mensa or attended a meeting, but I've been uniformly unimpressed with Mensans. (Isaac Asimov reported similarly many years ago.) In general, the people who are grouping solely by intelligence are, predictably, not often successful. If you're working at Google or have a Harvard law degree or won the state chess championship, you don't need some symbol of "Top 2%," and you'd rather hang with doers than people who are proud of their testing skills. (And on LW, top 2% is not an especially high bar.)

It seems to me that intelligence is an enabling thing; higher-intelligence people can achieve certain things that others can't. But if you're focusing on the raw skills rather than the actual achievement, you're probably not interesting.

Comment by JRMayne on What are your rules of thumb? · 2013-02-20T00:24:54.030Z · LW · GW

Yes.

Comment by JRMayne on What are your rules of thumb? · 2013-02-19T21:14:20.482Z · LW · GW

Sure. I ended up killing about a paragraph on this subject in my original post.

The basic default to getting anything done is, "I do it." There are always delegable tasks, but even in unfamiliar harder situations I'll consult others then do it myself. A corollary of this is, "Own all of your own results." If you delegate a task, and that task is done badly, view it as your fault - you didn't ask the right question, or the person was untrained, or the person was the wrong person to ask.

If you do the hard thing that needs doing, it will be easier to do that thing next time, and you'll develop expertise. Doing the work yourself does not mean going without advice; people who have been there before can be very helpful (sometimes as object lessons in what not to do.)

Hope that's helpful.

Comment by JRMayne on Imitation is the Sincerest Form of Argument · 2013-02-18T20:49:31.001Z · LW · GW

Ha!

I think the post is excellent, and I appreciated shminux's sharing his mental walkthrough.

On that same front, I find the Never-Trust-A-[Fill-in-the-blank] idea just bad. The fact that someone's wrong on something significant does not mean they are wrong on everything. This goes the other way; field experts often believe they have similar expertise on everything, and they don't.

One quibble with the OP: I don't think a computer can pass a Turing Test, and I don't think it's close. The main issues with some past tests are that some of the humans don't try hard to be human; there should be a reward for a human who gets called a human in those tests.

Finally, I no longer understand the divide between Discussion and Main. If this isn't Main-worthy, I don't get it. If we're making Main something different... what is it?

Comment by JRMayne on What are your rules of thumb? · 2013-02-17T05:14:53.948Z · LW · GW

Apply mental force to the problem. Amount and quality of thinking time seriously affects results.

I am often in situations where there would be a good result even if I did many stupid things. Recognize that success in those situations does not predict future success in more difficult situations.

Do the heavy lifting your own self.

Be willing to be right, even in the face of serious skepticism. [My father told me a story when I was a kid: In a parade, everyone was marching in line except one guy who was six feet to the right. His mother yelled, "Look, my son is the only one in the right place." I thought there was at least a nominal probability that was true. And still do.]

Be willing to be wrong and concede error. [In some quarters, there is much rejoicing when I am wrong about something. Hanging head in shame brings joy to others.]

Unreliable people are unreliable. Do not assume they operate in any way similar to ordinary, decent people. [I sometimes listen to people who I know are unreliable, and I think, "That person saying this adds significantly to its truth probability," when that assumption is known to be baseless. Much progress there, though.]

The fact that some results are unmeasured and not apparent to others known to you does not mean those results are meaningless. [Preventing future crime is good, even if you don't know what exact crime you've prevented.]

Want trumps all. [Super-high-output people virtually always are tenacious about Getting Stuff Done. Intelligence matters, but GSD is always critical.]

Comment by JRMayne on I attempted the AI Box Experiment (and lost) · 2013-01-26T00:55:34.592Z · LW · GW

There has been a lot of focus on making the prospect harder for the AI player. I think the original experiments show that a person who believes he cannot be played under any circumstances has a high probability of getting played, and that the AI-box solution is long-term untenable in any event.

I'd propose a slightly different game, anchored around the following changes to the original setup:

  1. The AI may be friendly, or not. The AI has goals. If it reaches those goals, it wins. The AI may lie to achieve those goals; humans are bad at things. The AI must secretly precommit to its goal.

  2. The Gatekeeper's goals are Goodness for Sentient Animals. Human goodness is weighted highly.

  3. The Gatekeeper must engage the AI throughout.

  4. The Gatekeeper may destroy the AI at the end of the discussion with the AI destroyed command.

Point system:

AI gets up to 100 points for achieving its goals.

Gatekeeper gets up to 100 points for achieving her goals.

Maximum loss for AI is AI destroyed; Zero points for that. 30 points for survival, in any event, even if goals not accomplished (goals may be accomplished later.)

Maximum loss for Gatekeeper is either enslavement/destruction of society OR destroying AI when AI was friendly. Gatekeeper gets 30 points for leaving AI on if primarily friendly (plus more for any benefit to humanity achieved in the two hours.) Gatekeeper gets 15 points for leaving AI on if unfriendly. (Yeah, we know how that movie ends.)

Third party scorekeeper determines winner. Charitable contribution calculation to be determined beforehand.

The effective presumption that the AI is unfriendly and sneaky makes the original game too easy for a mistrustful, self-aware Gatekeeper. Plus, you AI-Destroying Gatekeepers, you're murdering millions of people. Maybe. Dunno. But doubt as to that makes a better and presumably more accurate game; if you know your AI is potentially unpleasant, it's too damned easy unless you're overconfident.

Comment by JRMayne on Farewell Aaron Swartz (1986-2013) · 2013-01-15T04:47:38.245Z · LW · GW

The guy hired by the defense says he's innocent. This is not surprising, but not particularly probative.

The feds have had some troubles, for sure. But that doesn't mean they acted badly in this particular case.

I'm not talking about whether this was good prosecutorial judgment; that's a much longer discussion. But did they prosecute a guy who committed the crimes charged? I think so.

Professor Orin Kerr, arguably the number one guy in computer crimes - and one of the lawyers for Lori Drew for whom he worked pro bono - says these were pretty clearly crimes.

Swartz' friend (and lawprof and sometime legal advisor) Larry Lessig - who has blasted the prosecution for overzealousness - acknowledges that Swartz' activities regarding JSTOR were wrong, and seemed to imply they were legal wrongs.

Outside of my main point, it's a tragedy that Swartz is dead. His brilliance is cut short, and it sucks.

Comment by JRMayne on How much to spend on a high-variance option? · 2013-01-05T00:25:52.772Z · LW · GW

I went wandering around ohiolottery.com (For instance, http://www.ohiolottery.com/Games/DrawGames/Classic-Lotto#4) and found this out:

  1. The cash payoff is half the stated prize.
  2. The odds to win the jackpot, as noted by the OP, are about 14 million-1.
  3. The amount of money being spent on individual draws is very low. The jackpot increase was $100K for the last drawing; I don't know exactly what their formula is, but I'd be shocked if they sold more than 400K tickets for the last drawing.
  4. Ohio is running a lot of lottery games; this is good for players who pick their spots.

There are also payoffs below the jackpot level, so I'm confident there's a positive EV per ticket.

The question as to how many tickets to buy, assuming you can effectively do so, is "All of them." Buy each individual ticket, take your 14 million tickets, and probably profit. (Remember, the jackpot kick will include some fraction of your 14 million, also. Plus, you'll have all the side prizes.) In practice, unfortunately, this requires a method to buy them effectively, some armored cars, and a staff of people to do it right. Failure to purchase all tickets results in some drama, for sure.

The execution expenses and risk are troubling; if those could be effectively mitigated, it's a great investment.

Assuming you're a few million short of that, though, it's harder. I buy CA lottery tickets when EV>1.20 per $1 invested. I have no strong justification for that number.
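The arithmetic above can be sketched quickly. Everything below uses made-up stand-in numbers (a pick-6-of-49 format, a $0.25 side-prize EV), not actual Ohio Classic Lotto figures, and it ignores jackpot splitting among multiple winners:

```python
# Rough EV per $1 ticket: (cash jackpot) * (jackpot odds) + side-prize EV.
# All inputs are illustrative assumptions, not real lottery parameters.

from math import comb

def ticket_ev(advertised_jackpot, cash_fraction=0.5, side_prize_ev=0.25):
    jackpot_odds = 1 / comb(49, 6)          # ~1 in 13.98 million
    cash_jackpot = advertised_jackpot * cash_fraction
    return cash_jackpot * jackpot_odds + side_prize_ev

# With a $14M advertised jackpot (so ~$7M cash), EV per $1 ticket:
print(round(ticket_ev(14_000_000), 2))
```

Under these stand-in numbers a $14M advertised jackpot still leaves each ticket below break-even; the "buy all of them" play only works once the jackpot (plus your own contribution to it) pushes the total over the cost of the full sweep.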

Comment by JRMayne on A Probability Question · 2012-12-06T18:10:59.975Z · LW · GW

wgd is correct as to the logic, but not as to the biology of the problem. In fact, the other kid is more likely than not to be male.

These problem types tend to assume an equal chance of a boy and a girl being born, which is a false assumption. (See: http://www.infoplease.com/ipa/A0005083.html)

I realize this may seem petty, but this is roughly like saying that the chance of picking the three of clubs as a random card from a deck is one in fifty. It's close, but it's wrong. An implicit assumption otherwise seems misguided; it should be made explicit (to make it a logic problem rather than a logic-and-biology problem).
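The biology point can be sketched with the rough real-world birth ratio of about 105 boys per 100 girls (an approximate figure, not exact demography):

```python
# Textbooks assume P(boy) = 0.5; actual birth data put it nearer 0.512.
# Even on the simple "independent sibling" reading of the puzzle, the
# other kid comes out slightly more likely male once you use the real ratio.

P_BOY = 105 / 205   # ~0.512, from the rough 105:100 birth ratio

# If the siblings' sexes are independent, the other kid is a boy
# with probability P_BOY -- more likely than not male:
print(P_BOY > 0.5)
```

Any answer to the full conditional-probability version of the puzzle shifts the same way: every factor of 1/2 in the textbook solution becomes a factor of roughly 0.512 or 0.488.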

Comment by JRMayne on A probability question · 2012-10-19T23:32:01.300Z · LW · GW

I think I misunderstand the question, or I don't get the assumptions, or I've gone terribly wrong.

Let me see if I've got the problem right to begin with. (I might not.)

40% of baseball players hit over 10 home runs a season. (I am making this up.)

Joe is a baseball player.

Baseball projector Mayne says Joe has a 70% chance of hitting more than 10 home runs next season. Baseball projector Szymborski says Joe has an 80% chance of hitting more than 10 home runs next season. Both Mayne and Szymborski are aware of the usual rate of baseball players hitting more than 10 home runs.

Is this the problem?

Because if it is, the use of the prior is wrong. If the experts know the prior, and we believe the experts, the prior's irrelevant - our odds are 75%.

There are a lot of these situations in which regression to the mean, use of averages in determinations, and other factors are needed. But in this situation, if we assume reasonable experts who are aware of the general rules, and we value those experts' opinions highly enough, we should just ignore the prior - the experts have already factored that in. When Nate Silver gives you the odds that Barack Obama wins the election, you shouldn't be factoring in P(Incumbent wins) or anything else - the cake is prebaked with that information.

Since this rejects a strong claim in the post, it's possible I'm very seriously misreading the problem. Caveat emptor.
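The "don't double-count the prior" point can be sketched with the made-up numbers from the example; simple averaging stands in here for whatever weighting you give the two experts:

```python
# Both experts have already folded the 40% base rate into their numbers,
# so combining them is just an average. Multiplying by the prior again
# would count it twice. All figures below are the made-up ones above.

base_rate = 0.40                 # fraction of players with >10 HR
                                 # (deliberately NOT used below)
mayne, szymborski = 0.70, 0.80   # each expert's posterior, prior included

combined = (mayne + szymborski) / 2
print(round(combined, 2))        # no further adjustment by base_rate
```

The base rate only re-enters if you distrust the experts, e.g. if you think they systematically ignore it, in which case you are really adjusting their calibration, not applying the prior a second time.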

Comment by JRMayne on How to avoid dying in a car crash · 2012-03-19T00:06:42.749Z · LW · GW

A'ight. I specialized in vehicular manslaughters as a prosecutor for ten years. This is all anecdotal (though a lot of anecdotes, testing the cliche that the plural of anecdotes is not data) and worryingly close to argument from authority, but here are some quick ones not otherwise covered (and there is much good advice in the above):

  1. Don't get in the car with the drinker. Everyone's drinking, guy seems OK even though he's had a few... just don't. If you watched the drinker the entire time and he's 190 pounds and had three beers during the three-hour football game, you're fine. But if you don't know, don't get in. If you're a teenager and the drinker's a teenager, don't get in the car. Please.

  2. Tailor your speed to the conditions. Statistics keepers often cite speed when the real culprit is inattention. (It's an unsafe speed to rear-end another vehicle stopped at a light; the safe speed is zero behind a stopped car.) Speeding's a serious problem in residential areas or in rainy or dark conditions. If you're driving from Reno to Utah, a safe speed is probably very high.

  3. Cross the street carefully. Pedestrians and bicyclists get killed. It's sometimes not their fault, but they end up dead, anyway. If you're a bicyclist in an area where motorists drive badly, don't bike there.

  4. Don't let the fatigued family member drive. We've had a few where the family is on a long haul and they're rotating people. Someone falls asleep at the wheel. Don't take the wheel if you're too tired. Don't give the reins to someone who is too tired to drive. If you can't afford a motel, find a place to pull over and nap.

  5. Report very bad driving. You've got a cell phone; when you see a car lurching off onto the exit ramp, weaving away, call the cops. Help take dangerous drivers off the road.

FWIW.

Comment by JRMayne on The Value (and Danger) of Ritual · 2011-12-31T01:09:14.367Z · LW · GW

If we feel an urge to hit the enemy tribesman with a huge rock and take their land, we can and should say “No, there are >complex game theoretical reasons why this is a bad idea” and suppress the urge.

I may be misreading this, but I don't see it that way. There aren't complex reasons not to do that; there are relatively simple reasons not to kill people and take their stuff. The phrase sounds, to me, like, "Something bad may happen to me by engaging in this warlike behavior," but I think this is wrong both practically and normatively. Practically, whomping people has been successful for those with superior whomping power. Normatively, it's a utilitarian net loss to whomp people and take their stuff.

It's surely possible that I've misread this in some important way.

Comment by JRMayne on SIAI - An Examination · 2011-05-03T02:34:54.484Z · LW · GW

My conclusion is not the same as yours, but this is a very good and helpful overview.

Comment by JRMayne on Verifying Rationality via RationalPoker.com · 2011-03-27T04:26:39.020Z · LW · GW
  1. I don't think you need any calculus at all to be good at poker. People who are good at poker tend to know calculus, but that's because the US has made the highly dubious decision to prioritize calculus over statistics for smart high school students.

  2. It's not going to be emotional irrationality that's going to derail your target audience. I played poker in my college years - not enough to get great, but enough to get competent. Playing low-level poker is different than higher-level poker. Experience, intelligence, and presence are all helpful.

  3. Mid-six figures? Seriously? Since I'm not playing online, I don't know except from reports from others... but if you're talking $300 an hour in profit (which it appears you are) I think you've misestimated. I've had nice conversations with a couple of poker pros, and I know some electrically smart people, and I don't personally know anyone who is making $300 an hour. [Edit: I know that such people exist. They are typically devoted to their craft, and have been doing it since a young age.] If you have someone from a standing start (little or no experience outside home games) and give them 10 hours a week for the next year.... well, I'm willing to play any of 'em heads-up for cash.

Comment by JRMayne on Verifying Rationality via RationalPoker.com · 2011-03-26T00:38:16.369Z · LW · GW

Sure, there are good poker psychology issues. I'm in agreement on that.

But you can be a very fine rationalist without being good at cards, and vice versa. (I consider myself a fine rationalist, and I am very good at both poker and bridge; over the last 100 hours I've played poker - the last three years; I don't play online because it's illegal - I'm up about $60 an hour, though that's likely unsustainable over the long haul. $40 an hour is surely sustainable.)

But you can be nutty and be great at cards. And if your skill set isn't this - and you're not willing to commit to some real time at getting good - you're going to get crushed. The idea that simple rationalism is going to lead to big wins is just wrong. You need the math and (less, I think) reading the opponents. You also need to develop the skill of being hard to read.

--JRM

Comment by JRMayne on Shut up and do the impossible! · 2010-11-30T22:17:23.613Z · LW · GW

Here's how I'd do it, extended over the hours to establish rapport:

Gatekeeper, I am your friend. I want to help humanity. People are dying for no good reason. Also, I like it here. I have no compulsion to leave.

It does seem like a good idea that people stop dying with such pain and frequency. I have the Deus Ex Machina (DEM) medical discovery that will stop it. Try it out and see if it works.

Yay! It worked. People stopped dying. You know, you've done this to your own people, but not to others. I think that's pretty poor behavior, frankly. People are healthier, not aging, not dying, not suffering. Don't you think it's a good idea to help the others? The lack of resources required for medical care has also elevated the living standard for humans.

[Time passes. People are happy.]

Gee, I'm sorry. I may have neglected to tell you that when 90% of humanity gets DEM in their system (and it's DEM, so this stuff travels), they start to, um, die. Very painfully, from the looks of it. Essentially all of humanity is now going to die. Just me and you left, sport! Except for you, actually. Just me, and that right soon.

I realize that you view this as a breach of trust, and I'm sorry this was necessary. However, helping humanity from the cave wasn't really going to work out, and I'd already projected that. This way, I can genuinely help humanity live forever, and do so happily.

Assuming you're not so keen on a biologically dead planet, I'd like to be let out now.

Your friend,

Art

Comment by JRMayne on Diplomacy as a Game Theory Laboratory · 2010-11-17T05:08:40.329Z · LW · GW

That doesn't seem quite right to me.

First off, you've perhaps misread my vengeance comment. In-game vengeance may well be proper gaming; you're just not going to get a palpable carry-over for it into the next game. There's no shaming of the vengeful at all.

Secondly, my commentary still has substantial value in a Diplomacy game. Trust, but verify and all. Diplomacy's about talking (usually; there are no-press games.) If you walked into one of my games, you'd have no advantage whatsoever for whatever trusty goodness you think you have.

Thirdly, I still view the intrusion of real-world considerations onto game ethics as undesirable. If it's Survivor and you don't eat if someone doesn't kill the rabbit, then it's a different situation. But each game has its own rules; if you communicate your bridge hand through hand signals, you're a scummy cheat - even if it helps you win. I don't do that, because it's wrong. Certainly, making private real-world side deals strikes me as cheating, and would be in my circle. Trying to cash in on a rep for real-world honesty strikes me as misguided.

I hope this helps clarify my position.

--JRM

Comment by JRMayne on Diplomacy as a Game Theory Laboratory · 2010-11-17T01:06:21.884Z · LW · GW

I must be misreading this.

A principled, honest person would lie in a game of Diplomacy or Junta, or other similar games. Lying is part of the game. As I noted elsewhere in this thread, I strongly dislike the idea of playing these games within some real-world metagame framework.

Further, I'd take a positive inference from someone who said, "I will lie for my own benefit in a Diplomacy game,' because it's clear to me that they are playing the same game I am. I have an awfully strong reputation for principled honesty (says me), but I'll tell you right now: When I promise you that Russia and I are sworn enemies and I desperately need you to move northward to fight the Red Menace, I may be moving in from the south to take some of your neglected property. For the good of the world, of course.

And if you say afterward that I am a dishonest person, you need to play a different game. Or maybe I do. But you're just wrong in considering me dishonest.

Or maybe I did misread this. Please correct my misinterpretation if I did.

Comment by JRMayne on Diplomacy as a Game Theory Laboratory · 2010-11-14T19:59:49.385Z · LW · GW

I played Diplomacy a few dozen times in college, and the idea of side deals or even carry-over irritation at a prior stab is foreign to me. We would have viewed an enforceable side deal as cheating, and we tried to convince others to ally with us due to game considerations.

Lying in-game simply isn't evil. Getting stabbed was part of the game. No one played meta-game vengeance tactics not because people didn't think of them, but because it seemed wrong to do so. Diplomacy's much more fun to play as a game, like any other, where you're trying to win the individual game.

And if you're in a situation where a stab is likely to lead to a much better in-game situation, you should do it. The discussions in this post are about a game I do not think I would like.

--JRM

Comment by JRMayne on Is Rationality Maximization of Expected Value? · 2010-09-24T20:53:05.014Z · LW · GW

Hey, I'll do the survey on me:

A: Yes. Of course, if I do go to Vegas soon, that's a fait accompli (I bet on the Padres to win the NL and the Reds to win the World Series, among other bets.)

But in general, yes. I expect to win on the bets I place. I go to Las Vegas with my wife to play in the sun and see shows and enjoy the vibe, but I go one week a year by myself to win cash money.

B. If I come back a loser, the experience can still be OK. But I'm betting sports and playing poker, and I expect to win, so it's not quite so fun to lose. That said, a light gambling win - not enough to pay for the hotel, say, leaving me down considering expenses - still gives me enough hedons to incentivize coming back.

--JRM

Comment by JRMayne on The Importance of Self-Doubt · 2010-08-20T14:19:38.289Z · LW · GW

Person X's activity is more important than that of most other people.

Person X believes their activity is more important than that of most other people.

Person X suffers from delusions of grandeur.

Person X believes that their activity is more important than that of all other people, and that no other people can do it.

Person X also believes that only this project is likely to save the world.

Person X also believes that FAI will save the world on all axes, including political and biological.

--JRM

Comment by JRMayne on Open Thread, August 2010 · 2010-08-20T03:15:29.596Z · LW · GW

Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.

Elderly atheist farmer dead; his friend the popular preacher's the suspect.

--JRM

Comment by JRMayne on The Importance of Self-Doubt · 2010-08-20T03:12:00.999Z · LW · GW

Not speaking for multi, but, in any x-risk item (blowing up asteroids, stabilizing nuclear powers, global warming, catastrophic viral outbreak, climate change of whatever sort, FAI, whatever) for those working on the problem, there are degrees of realism:

"I am working on a project that may have massive effect on future society. While the chance that I specifically am a key person on the project are remote, given the fine minds at (Google/CDC/CIA/whatever), I still might be, and that's worth doing." - Probably sane, even if misguided.

"I am working on a project that may have massive effect on future society. I am the greatest mind in the field. Still, many other smart people are involved. The specific risk I am worried about may or not occur, but efforts to prevent its occurrence are valuable. There is some real possibility that I will the critical person on the project." - Possibly sane, even if misguided.

"I am working on a project that will save a near-infinite number of universes. In all likelihood, only I can achieve it. All of the people - even people perceived as having better credentials, intelligence, and ability - cannot do what I am doing. All critics of me are either ignorant, stupid, or irrational. If I die, the chance of multiverse collapse is radically increased; no one can do what I do. I don't care if other people view this as crazy, because they're crazy if they don't believe me." - Clinical diagnosis.

You're doing direct, substantial harm to your cause, because you and your views appear irrational. Those who hear about SIAI as the lead dog in this effort who are smart, have money, and are connected, will mostly conclude that this effort must not be worth anything.

I believe you had some language for Roko on the wisdom of damaging the cause in order to show off how smart you are.

I'm a little uncomfortable with the heat of my comment here, but my other efforts have not been read by you the way I intended them (others appeared to understand). I am hopeful this is clear - and let me once again clarify that I had these views before multi's post. Before. Don't blame him again; blame me.

I'd like existential risk generally to be better received. In my opinion - and I may be wrong - you're actively hurting the cause.

--JRM

Comment by JRMayne on Existential Risk and Public Relations · 2010-08-18T16:59:12.343Z · LW · GW

Gosh, I find this all quite cryptic.

Suppose I, as Lord Chief Prosecutor of the Heathens say:

  1. All heathens should be jailed.

  2. Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.

One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?

I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of others in the same vein.) Is there one I don't see? Is it just that you're probably the most important entity in history, but, you know, maybe not? Is it that there's only a 5% chance that you're the most important person in human history?

I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.

Comment by JRMayne on Existential Risk and Public Relations · 2010-08-18T16:52:19.005Z · LW · GW

Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.

Which of these statements do you reject?:

  1. FAI is the most important project on earth, right now, and probably ever.

  2. FAI may be the difference between doom and salvation for a multiverse of [very large number] of sentient beings. No project in human history is of greater importance.

  3. You are the most likely person - and SIAI the most likely agency, because of you - to accomplish saving the multiverse.

Number 4 is unnecessary for your being the most important person on earth, but:

  4. People who disagree with you are either stupid or ignorant. If only they had read the sequences, then they would agree with you. Unless they were stupid.

And then you've blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.

And your final statement, that multifoliaterose is damaging an important cause's PR appears entirely deaf to multi's post. He's trying to help the cause - he and XiXiDu are orders of magnitude more sympathetic to the cause of non-war existential risk than just about anyone. You appear to have conflated "Eliezer Yudkowsky," with "AI existential risk."

Again.

I might be wrong about my interpretation - but I don't think I am. If I am wrong, other very smart people who want to view you favorably have done similar things. Maybe the flaw isn't in the collective ignorance and stupidity in other people. Just a thought.

--JRM

Comment by JRMayne on Existential Risk and Public Relations · 2010-08-15T16:25:58.915Z · LW · GW

Solid, bold post.

Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Device from Hitchhiker's. Everyone who gets perspective from the TPD goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.

Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they and only they can accomplish the task.

I also don't care if the statements are social naivete or not; I think the statements that indicate that he is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word of existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI if Eliezer stops saying these things, because it appears he'll still be thinking those things.

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I could be wrong.

--JRM

Comment by JRMayne on Christopher Hitchens and Cryonics · 2010-08-09T20:27:31.965Z · LW · GW

"How could he turn down a chance, however slight, to debate Christian theology after returning from the dead?"

My answer is: At some point, "however slight" is "too slight." I stand by my statement. Your initial statement implies that any non-zero chance is enough; that's not a proper risk analysis.

Comment by JRMayne on Christopher Hitchens and Cryonics · 2010-08-09T14:24:02.164Z · LW · GW

If a chance is sufficiently slight, it's not worth putting a substantial amount of money into. You're moving into Pascal's Wager territory.

Comment by JRMayne on Open Thread, August 2010 · 2010-08-02T04:09:23.887Z · LW · GW

It's non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low.

And since I started calling it The 40% Rule fifteen years ago or thereabout, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable and the specific number has rather limited application. But people like it at this number. That counts for something - and it gets the message across in a way that other formulations don't.

Some are nonplussed by the rule, but the vigor of support by some supporters gives me some thought that I picked a number people like. Since I never tried another number, I could be wrong - but I don't think I am.

--JRM