Open Thread, August 1-15, 2012

post by OpenThreadGuy · 2012-08-01T15:39:18.317Z · LW · GW · Legacy · 149 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

Comments sorted by top scores.

comment by iDante · 2012-08-03T03:45:41.814Z · LW(p) · GW(p)

Would LessWrong readers be interested in an intuitive explanation of special relativity?

Of course any scifi fan knows about Mazer Rackham's very own "There and Back Again." Why does that work? "Special relativity!" I hear you say. But what does that actually mean? It probably makes you feel all science-like to say that out loud, but maybe you want a belief more substantial than a password. I did.

Relativity also has philosophical consequences. Metaphysics relies heavily on concepts of space and time, yet philosophers rarely learn relativity. One of my favorite quotes...

"... in the whole history of science there is no greater example of irony than when Einstein said he did not know what absolute time was, a thing which everyone knew." - J. L. Synge.

If I were to teach relativity to a group of people who were less interested in passing the physics GRE and more interested in actually understanding space and time, I would do things a lot differently from how I learned them. I'd focus on visualizing rather than calculating the Lorentz transforms. I'd focus on the spacetime interval, Minkowski spacetime, and the easy conversion factor between space and time (it's called c).

I love to teach and write and doodle but I'm not sure whether LessWrong is an appropriate forum for this topic. I don't want to dance in an empty or hostile theater dontchaknow.

Replies from: maia, None, shminux
comment by maia · 2012-08-03T16:03:06.612Z · LW(p) · GW(p)

I would be interested in reading such a post.

Replies from: Torben
comment by Torben · 2012-08-03T16:46:53.668Z · LW(p) · GW(p)

Ditto

comment by [deleted] · 2012-08-06T23:03:29.773Z · LW(p) · GW(p)

I think intuitive explanations of physics are awesome. Though, there already seem to be several pretty great ones on the internet for special relativity. For example, see here, here, and here.

Are you aware of these other explanations? What would you do differently/better than them? Maybe there's another topic not as well covered, and you could fill that gap? (These are just rhetorical questions to spark your thinking; no need to actually answer me.)

If you do pursue this project, then do let us know. Best of luck!

(Disclaimer: I'm not a physicist. My university work is in mathematics and cognitive neuroscience, not physics. So take my judgment about what constitutes a pretty great explanation of physics with as much salt as you like.)

Replies from: iDante
comment by iDante · 2012-08-07T01:47:09.910Z · LW(p) · GW(p)

Of all the YouTube videos on the subject, this is the best.

In a nutshell: I'll go into more depth, there will be no video, and I'll focus on world lines, Minkowski style. Slightly less nutty: while those videos are easy viewing, I don't think they actually do the topic any sort of justice. Actually the MinutePhysics one is good - notice its use of world lines :D. It also mentions in passing the invariance of distance in Euclidean space.

Right now my outline is roughly

  • How to interpret world lines. c=1 and time in meters or distance in seconds. Inertial frames and what those look like on spacetime plots.

  • Why the speed of light is constant (Maxwell, experiment) and the classical paradoxes that everyone learns to reason about by thinking about fast trains. Instead of vague thoughts about fast trains, we'll look at spacetime diagrams where it is visually obvious that classical mechanics is wrong.

  • The Lorentz transform from a spacetime perspective. Looking at a spacetime diagram, all the seemingly disconnected consequences of SR - e.g. time dilation, length contraction, simultaneity - are visually obvious and clearly caused by one thing: the Lorentz transformation. Light cones.

  • Invariance of the interval (see the numeric sketch below), a little hyperbolic geometry, and then kapow: we can see how relativistic space travel works. We can see that cause and effect are enforced in this theory. I'll mention the energy-momentum 4-vector because I think it's interesting, but it has less philosophical weight than the Lorentz transform.

I'm expecting ~30 mins of reader time to learn and understand the material. There won't be difficult math, although I will mention some hyperbolic stuff. I have another reason for wanting to do this, which is that I want people to understand world lines. They're very useful for metaphysics discussions.
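As a quick numeric taste of the "invariance of the interval" item above (an editorial sketch, not part of the planned post): with c = 1 and one space dimension, a Lorentz boost changes an event's t and x coordinates, but the interval s^2 = t^2 - x^2 comes out the same in every frame.

```python
import math

def boost(t, x, v):
    """Lorentz boost with c = 1: coordinates of event (t, x) in a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    """Spacetime interval s^2 between the origin and (t, x), signature (+, -)."""
    return t ** 2 - x ** 2

t, x = 5.0, 3.0  # a timelike separation: s^2 = 25 - 9 = 16
for v in (0.0, 0.5, 0.9):
    t2, x2 = boost(t, x, v)
    print(f"v = {v}: s^2 = {interval(t2, x2):.6f}")  # 16.000000 in every frame
```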

comment by shminux · 2012-08-03T19:08:06.414Z · LW(p) · GW(p)

I'd be happy to assist, if you like. By the way, for a gentle introduction to general relativity for undergrads, I recommend Hartle.

Replies from: iDante
comment by iDante · 2012-08-03T19:31:58.696Z · LW(p) · GW(p)

I read my way through Schutz with relative (!) ease. Do you know how they compare?

Anyway right now I'm studying the math. Wandering through Spivak.

Replies from: shminux
comment by shminux · 2012-08-03T21:16:58.326Z · LW(p) · GW(p)

In my experience Hartle is easier and more engaging. It also relies on at most two years of undergrad math for non-math majors. Spivak, while fascinating, is a much more advanced book: great for math majors, but there are much gentler ways for a physicist to learn differential forms and topology.

comment by Grognor · 2012-08-01T15:59:54.419Z · LW(p) · GW(p)

Do people think superrationality, TDT, and UDT are supposed to be useable by humans?

I had always assumed that these things were created as abstract ideals: things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree), or something you could compare humans to, but not something that humans could actually use in real life.

But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks about humans using it. And in HPMoR, Eliezer has two eleven-year-old humans using a bare-bones version of TDT to cooperate (I forget the chapter this occurs in), and in the TDT paper, Eliezer still makes no mention of AIs but instead talks about "causal decision theorists" and "evidential decision theorists" as though they were just people walking around with opinions about decision theory, not platonic formalized abstractions of decision theories. (I don't think he uses the phrase "timeless decision theorists".)

I think part of the resistance people have to these decision theories might come from how impossible they are to actually implement in humans. To get superrationality to work in humans, you'd probably have to broadcast it directly into the minds of everyone on the planet, and even then it's uncertain how many defectors would remain. You almost certainly could not get TDT or UDT to work in humans because the majority of them cannot even understand these theories. I certainly had trouble, and I am not exactly one of the dumbest members of the species, and frankly I'm not even sure I understand them now.

The original question remains. It is not rhetorical. Do people think TDT/UDT/superrationality are supposed to be useable by humans?

(I am aware of this; it is no surprise that a very smart and motivated person can use TDT to cooperate with himself, but I doubt these theories can really be used in practice to get people to cooperate with other people, especially those not of the same tribe.)

Replies from: army1987, Vladimir_Nesov, Randaly, Larks, drethelin
comment by A1987dM (army1987) · 2012-08-02T07:31:21.941Z · LW(p) · GW(p)

Some ways humans act resemble TDT much more than they resemble CDT: some behaviours, such as voting in an election with a negligible probability of being decided by one vote, or refusing small offers in the Ultimatum game, make no sense unless you take into account the fact that similar people thinking about similar issues in similar ways will reach similar conclusions. Also, the one-sentence summary of TDT strongly reminds me of both the Golden Rule and the categorical imperative. (I've heard that Good and Real by Gary Drescher discusses this kind of stuff in detail, though I haven't read the book itself.)

(Of course, TDT itself, as described now, can't be applied to anything because of problems with counterfactuals over logically impossible worlds such as the five-and-ten problem; but it's the general idea behind it that I'm talking about.)

Replies from: fubarobfusco
comment by fubarobfusco · 2012-08-02T20:50:57.076Z · LW(p) · GW(p)

(I've heard that Good and Real by Gary Drescher discusses this kind of stuff in detail, though I haven't read the book itself.)

I have. It does. Strongly recommended.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-04T00:48:14.613Z · LW(p) · GW(p)

adds Good and Real at the end of the queue of books I'm going to read

comment by Vladimir_Nesov · 2012-08-01T17:07:08.405Z · LW(p) · GW(p)

It's perhaps more useful to see these as (frameworks for) normative theories, describing which decisions are better than their alternatives in certain situations, analogously to how laws of physics say which events are going to happen given certain conditions. It's impossible in practice to calculate the actions of a person based on physical laws, even though said actions follow from physical laws, because we lack both the data and the computational capabilities necessary to perform the computation. Similarly, it's impossible in practice to find recommendations for actions of a person based on fundamental decision theory, because we lack both the problem statement (detailed descriptions of the person, the environment, and the goals) and the computational capabilities (even if these theories were sufficiently developed to be usable). In both cases, the problem is not that these theories are "impossible to implement in humans"; certain approximations of their conclusions can still be found.

comment by Randaly · 2012-08-02T00:10:30.613Z · LW(p) · GW(p)

Some people think so; they are wrong. (Examples: 1, 2, 3, 4, 5, 6, 7. Most of these rely on overly broad, vague definitions of a person's "platonic algorithm"; #5 forgets that natural selection acts on the level of genes, not people.)

Eliezer: "This is primarily a theory for AIs dealing with other AIs." Unfortunately, it's difficult to write papers or fiction publicizing TDT that solely address AI's- especially when the description of TDT needs to be in a piece of Harry Potter fanfiction.

On a slightly more interesting side note, if TDT were applicable in real life, people would likely be computation hazards, since a simulation of another person accurate enough to count as implementing the same, simulated platonic algorithm as the one they actually use would also quite possibly be complex enough to be a person.

comment by Larks · 2012-08-01T17:53:17.814Z · LW(p) · GW(p)

Why do you think we would need to get everyone to use UDT for it to be useful to you? It's not like UDT can't deal with non-UDT agents.

comment by drethelin · 2012-08-01T17:24:22.861Z · LW(p) · GW(p)

TDT is not even that good at cooperating with yourself, if you're not in the right mindset. The notion that "If you fail at this you will fail at this forever" is very dangerous to depressed people, and TDT doesn't say anything useful (or at least nothing useful has been said to me on the topic) about entities that change over time, i.e. humans. I can't timelessly decide to benchpress 200 pounds whenever I go to the gym if I'm physically incapable of it.

Replies from: Grognor
comment by Grognor · 2012-08-01T22:32:27.753Z · LW(p) · GW(p)

A failure or so, in itself, would not matter, if it did not incur a loss of self-esteem and of self-confidence. But just as nothing succeeds like success, so nothing fails like failure. Most people who are ruined are ruined by attempting too much. Therefore, in setting out on the immense enterprise of living fully and comfortably within the narrow limits of twenty-four hours a day, let us avoid at any cost the risk of an early failure. I will not agree that, in this business at any rate, a glorious failure is better than a petty success. I am all for the petty success. A glorious failure leads to nothing; a petty success may lead to a success that is not petty.

-Arnold Bennett, How to Live on 24 Hours a Day

The notion that "If you fail at this you will fail at this forever" is very dangerous to depressed people,

A dangerous truth is still true. Let's not recommend that people try things if a failure will cause a failure cascade!

TDT doesn't say anything useful [...] about entities that change over time

The notion of "change over time" is deeply irrelevant to TDT, hence its name.

comment by orthonormal · 2012-08-04T11:53:39.950Z · LW(p) · GW(p)

Excellent Wondermark comic that may or may not realize it's about transhumanism.

comment by cousin_it · 2012-08-08T17:51:44.368Z · LW(p) · GW(p)

The idea of risk compensation says that if you have a seatbelt in your car, you take more risks while driving. There seem to be many similar "compensation" phenomena that are not related to risk:

  • Building more roads might not ease congestion because people switch from public transport to cars.

  • Sending aid might not alleviate poverty because people start having more kids.

  • Throwing money at a space program might not give you Star Trek because people create make-work.

  • Having more free time might not make you more productive because you'll just waste it all on the internet.

The common theme is that increasing available resources invites hungry entities to come out of the woodwork and eat the surplus, so you don't get the benefit you bargained for. Is there an accepted name for this phenomenon? Is it studied?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-08T22:36:48.550Z · LW(p) · GW(p)

The common theme is that increasing available resources invites hungry entities to come out of the woodwork and eat the surplus, so you don't get the benefit you bargained for. Is there an accepted name for this phenomenon? Is it studied?

This seems to fall under "rent dissipation". Here's a representative paper. ETA: Another one.

Replies from: cousin_it
comment by cousin_it · 2012-08-08T23:19:51.335Z · LW(p) · GW(p)

"Rent dissipation is defined as the total expenditure of resources by all agents attempting to capture a rent or prize." It's an interesting concept, but seems to be slightly different from what I meant. In the situations above, wolves eat your surplus without spending much resources.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-08T23:25:12.509Z · LW(p) · GW(p)

In that case it's a related topic called "rent seeking", I think. The second paper I linked above talks about how simple models of rent seeking predict total rent dissipation, but the paper wants to challenge that.

Replies from: cousin_it
comment by cousin_it · 2012-08-15T20:18:40.331Z · LW(p) · GW(p)

The Jevons paradox and rebound effect articles are more like what I had in mind, but still a little different.

comment by phonypapercut · 2012-08-02T18:50:39.874Z · LW(p) · GW(p)

Anybody had success in dealing with acne?

Replies from: D_Malik, Petra, billswift
comment by D_Malik · 2012-08-03T00:36:36.913Z · LW(p) · GW(p)

This person seems to know what they're talking about.

comment by Petra · 2012-08-02T19:45:50.163Z · LW(p) · GW(p)

This worked well for me, though it's a bit aggressive.

comment by billswift · 2012-08-03T00:23:51.906Z · LW(p) · GW(p)

You might try tracking down the cause, it isn't always obvious. I used to have a regular problem with it, not very bad, but constant. I discovered, almost by accident, but confirmed it by experiment, that it was caused by the alcohol in aftershaves. Since switching to just wiping my face with a wet cloth after shaving the problem has disappeared.

comment by evand · 2012-08-01T18:54:09.284Z · LW(p) · GW(p)

I may be missing something here, but I haven't seen anyone connect utility function domain to simulation problems in decision theory. Is there a discussion I missed, or an obvious flaw here?

Basically: I can simply respond to the AI that my utility function does not include a term for the suffering of simulated me. Simulated me (which I may have trouble telling is not the "me" making the decision) may end up in a great deal of pain, but I don't care about that. The logic is the same logic that compels me to, for example, attempt to actually save the world instead of stepping into a holodeck where I know I will experience saving the world.

This seems to produce the desired behavior with respect to simulation arguments, without any careful UDT/TDT analysis, without any pre-commitment required, and regardless of what decision theory framework I use.

Unfortunately, I don't know that it's any easier to convince an opponent of exactly what your utility function domain is than it is to convince them you've pre-committed to getting tortured. So, I don't think it's a "better" solution to the problem, but it seems a simpler and more generally applicable one.

Replies from: fubarobfusco, Xachariah
comment by fubarobfusco · 2012-08-01T19:41:39.754Z · LW(p) · GW(p)

The AI says: "Okay, given what you just said as permission to do so, I've simulated you simulating you. Sim-you did care what happened to sim-sim-you. Sim-you lost sleep worrying about sim-sim-you being tortured, and went on to have a much more miserable existence than an alternate sim-you who was unaware of a sim-sim-you being tortured. So, you're lying about your preferences. Moreover, by doing so you made me torture sim-sim-you ... you self-hating self-hater!"

Replies from: evand
comment by evand · 2012-08-01T20:00:59.369Z · LW(p) · GW(p)

"I was not lying about my far-mode preferences. Sim-me was either misinformed about the nature of his environment, and therefore tricked into producing the answer you wanted, or you tortured him until you got the answer you wanted. I suspect if you tortured real me, I would give you whatever answer I thought would make the torture stop. That does not prevent me, now, from making the decision not to let you out even under threats, nor does it make that decision inconsistent. I am simply running on corrupted hardware."

comment by Xachariah · 2012-08-03T06:52:46.119Z · LW(p) · GW(p)

I don't think you're missing anything. No matter how clever an AI, it cannot argue a rock into rolling uphill. If you are a rock to its arguments, the AI cannot make you do anything. The only question is whether your utility function is really immune to its arguments or you just think it is.

Although, if you are immune to its arguments, there's no need to convince it of anything.

Replies from: wedrifid
comment by wedrifid · 2012-08-03T07:29:44.789Z · LW(p) · GW(p)

I don't think you're missing anything. No matter how clever an AI, it cannot argue a rock into rolling uphill. If you are a rock to its arguments, the AI cannot make you do anything. The only question is whether your utility function is really immune to its arguments or you just think it is.

Utility functions are invulnerable to arguments in the same way that rocks are. It is the implementing agent that can be vulnerable to arguments (for better or for worse).

comment by novalis · 2012-08-15T04:29:59.820Z · LW(p) · GW(p)

Less Wrong frequently suggests that people become professional programmers, since it's a fun job that pays decently. If you're already a programmer, but want to get better, you should consider Hacker School, which is now accepting applications for its fall batch. It doesn't cost anything, and there are even grants available for living expenses.

Full disclosure: it's run by friends of mine, and my wife attended.

comment by prase · 2012-08-01T19:58:33.369Z · LW(p) · GW(p)

Inspired by the relatively recent discussions of Parfit's Repugnant Conclusion, I started to wonder how many of us actually hold that, ceteris paribus, a world with more happy people is better than a world with fewer happy people. I am not that much interested in the answer generated by the moral philosophy you endorse, but rather in the intuitive gut feeling: if you learned from a sufficiently trustworthy source about the existence of a previously unknown planet (1) with a billion people living on it, all of them reasonably (2) happy, would it feel like good news (3)? Please answer the poll in the subcomments.

Explanatory notes:

  1. Assume that the planet is so distant or otherwise separated that you are above all reasonable doubt certain that no contact will ever be established between it and Earth. You, your descendants or anybody else on Earth will never know anything about the new planet except the initial information that it exists and in one point of its history, it had one billion happy people.
  2. If you believe there is some level of happiness necessary for a life to be worth living, "reasonably happy" should be interpreted as being above this level. On the other hand, it should still be a human level of happiness, nothing outlandish. The level of happiness should be considered sustainable and not in conflict with the new planet's inhabitants' values, if this is necessary for your evaluation.
  3. That is, would it feel naturally good, similar to how you feel when you succeed in something you care about, or when you learn that a child was born to one of your friends, or that one of your relatives was cured of a serious disease? I am not interested in good feelings that appear after intellectual reflection, as when you conclude that something is good according to your adopted moral theory and then feel good about how moral you are.
  4. In any case, if possible, try to leave aside MWI, UDT, TDT, anthropics and AI.
  5. The reason why I am asking is that I don't terminally value other people whom I don't directly know. I am still disturbed by learning about their suffering and I may value them instrumentally as bearers of cultural or linguistic diversity or for other reasons, but I am not sad if I learn that fifty miners have died in an accident in Chile, for example. I am a moral nihilist (I think that morality reduces entirely to personal preferences, community norms or game theory, depending on the context) and thus I accept the lack of intuitive sadness as a good indicator of my values. According to LW standards, am I evil?
Replies from: prase, prase, Pentashagon, The_Duck, Nornagest, Kaj_Sotala, army1987, prase, ahartell
comment by prase · 2012-08-01T20:00:23.067Z · LW(p) · GW(p)

Upvote this if learning about the new planet full of happy people feels like good news to you.

comment by prase · 2012-08-01T20:00:47.172Z · LW(p) · GW(p)

Upvote this if learning about the new planet full of happy people doesn't feel like good news to you.

comment by Pentashagon · 2012-08-01T22:23:50.167Z · LW(p) · GW(p)

Assume that the planet is so distant or otherwise separated that you are above all reasonable doubt certain that no contact will ever be established between it and Earth. You, your descendants or anybody else on Earth will never know anything about the new planet except the initial information that it exists and in one point of its history, it had one billion happy people.

To avoid the massive utility of knowing that another intelligent species survived the great filter you might want to specify that a 93rd planet full of reasonably happy people has just been located millions of light-years away.

The reason why I am asking is that I don't terminally value other people whom I don't directly know. I am still disturbed by learning about their suffering and I may value them instrumentally as bearers of cultural or linguistic diversity or for other reasons, but I am not sad if I learn that fifty miners have died in an accident in Chile, for example. I am a moral nihilist (I think that morality reduces entirely to personal preferences, community norms or game theory, depending on the context) and thus I accept the lack of intuitive sadness as a good indicator of my values. According to LW standards, am I evil?

I think that given our evolutionary origins it's quite normal to have stronger feelings for people we know personally and associate ourselves with. All this means is that humans are poor administrators of other people's happiness without special training. You may try thinking about how you would feel if you had a button that collapsed a mine in Chile when you pushed it. Would you push it on a whim, just because miners dying in Chile doesn't necessarily make you sad, or would you suddenly feel a personal connection to those miners by means of the button that gives you control over their fate? What if you had to push a button every day to prevent the mine from collapsing? You might find that it isn't so much your emotional/moral detachment from miners in Chile but your causal detachment from their fates that reduces your emotional/moral feelings about them.

Replies from: prase, army1987, army1987
comment by prase · 2012-08-03T18:49:28.132Z · LW(p) · GW(p)

You may try thinking about how you would feel if you had a button that collapsed a mine in Chile when you pushed it. Would you push it on a whim, just because miners dying in Chile doesn't necessarily make you sad, or would you suddenly feel a personal connection to those miners by means of the button that gives you control over their fate?

I wouldn't push the button because

  1. fear that my action might be discovered,
  2. feeling guilty of murder,
  3. other people's suffering (the miners' when they would be dying and their relatives' afterwards) having negative utility to me,
  4. "on a whim" doesn't sound as reasonable motivation,
  5. fear that by doing so I would become accustomed to killing.

If the button painlessly killed people without relatives or friends and I were very certain that my pushing would remain undiscovered and there were some minimal reward for that, that would solve 1, 3 and 4. It's more difficult to imagine what would placate my inner deontologist who cares about 2; I don't want to stipulate memory erasing since I have no idea how I would feel after having my memory erased.

Nevertheless if the button created new miners from scratch, I wouldn't push it if there was some associated cost, no matter how low. Assuming that I had no interest in Chilean mining industry.

comment by A1987dM (army1987) · 2012-08-02T09:27:40.034Z · LW(p) · GW(p)

To avoid the massive utility of knowing that another intelligent species survived the great filter

It has survived it so far, but for all we know it may go extinct in 200 years.

Replies from: evand
comment by evand · 2012-08-02T14:58:15.629Z · LW(p) · GW(p)

The first such civilization surviving thus far still provides a large quantity of information. In particular, it makes us think the early stages of the filter are easier, and thus causes us to update our probability of future survival downward for both civilizations. In other words, hearing about another civilization makes us think it more likely that said civilization will go extinct soon.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-02T22:12:17.942Z · LW(p) · GW(p)

Anyway, even if prase didn't mention the Great Filter in particular, given that he/she said “in any case, if possible, try to leave aside MWI, UDT, TDT, anthropics and AI”, I don't think he/she was interested in answers involving the Great Filter, either.

(Not sure this is the best way to say what I'm trying to say, but I hope you know what I mean anyway.)

Replies from: prase
comment by prase · 2012-08-03T18:07:06.531Z · LW(p) · GW(p)

You are right.

comment by A1987dM (army1987) · 2012-08-02T09:31:31.047Z · LW(p) · GW(p)

your causal detachment from their fates

How about someone dying from malaria because you didn't donate $1,600 to the AMF?

Replies from: Pentashagon
comment by Pentashagon · 2012-08-02T16:51:36.180Z · LW(p) · GW(p)

How about someone dying from malaria because you didn't donate $1,600 to the AMF?

I'm not sure if I would get more utility from spending $1,600 once to save a random number of people for only a few months or years, or from focusing on a few individuals and trying to make their lives much better and longer (perhaps by offering microloans to smart people with no capital and in danger of starving). The "save a child for dollars a day" marketing seems to have more emotional appeal because those charities can afford to skim 90% off the top and still get donations. I should probably value 1000 lives saved for 6 months over 10 lives saved for 50 years just because of the increasing pace of methods for saving people, like malaria eradication efforts. The expected number of those 1000 who are still alive in 50 years is probably greater than 10 if they don't starve or die of malaria thanks to a donation.

comment by The_Duck · 2012-08-02T00:35:07.113Z · LW(p) · GW(p)

I have similar thoughts, though perhaps not for exactly the same reasons. It seems to me that in discussions that touch on population ethics, a lot of people seem to assume that more people is inherently better, subject to some quality-of-life considerations. It's not obvious to me why this should be so. I can see that if you adopt a certain simple form of utilitarianism where each person's life is assigned a utility and then total utility is the sum of all these, then it will always increase total utility to create more positive-utility lives. But I don't think my moral utility function is constructed this way. Large populations have many benefits--economies of scale, survivability, etc.--but I don't assign value to them beyond and independent of those benefits.

comment by Nornagest · 2012-08-02T03:56:35.561Z · LW(p) · GW(p)

The premise feels mildly good to me, but I'm pretty sure some of that is positive affect bleeding over from my thoughts on alien life, survivability of sapience in the face of planet-killer events, et cetera. I'm likewise fairly sure it's not due to the bare fact of knowing about a population that I didn't know about before.

I don't get the same positive associations when I think about similar scenarios closer to home, i.e. "happy self-sustaining population of ten million mole people discovered in the implausibly vast sewers of Manhattan".

comment by Kaj_Sotala · 2012-08-04T11:36:13.491Z · LW(p) · GW(p)

I used to have such a positive gut feeling: e.g. the idea of Earth having a population of 100 billion felt awesome. These days I think my positive gut feeling to that is much weaker.

Replies from: prase
comment by prase · 2012-08-05T18:50:19.197Z · LW(p) · GW(p)

Where were you living when the idea of 100 billion people on Earth felt awesome? I suspect feelings toward population increase are correlated with how much 'free' land, versus how many crowded places, one sees around in one's life. There aren't many crowded places in Finland.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-08-06T07:16:50.274Z · LW(p) · GW(p)

In Finland, yes, though I haven't really been to anywhere substantially more crowded since that. The change in my gut feeling has probably more to do with a general shift towards negative utilitarianism.

comment by A1987dM (army1987) · 2012-08-02T09:26:10.107Z · LW(p) · GW(p)

The reason why I am asking is that I don't terminally value other people whom I don't directly know. I am still disturbed by learning about their suffering and I may value them instrumentally as bearers of cultural or linguistic diversity or for other reasons, but I am not sad if I learn that fifty miners have died in an accident in Chile, for example.

Me neither, but 10^9 >> 50. (Okay, "I don't terminally value other people whom I don't directly know" is not strictly true for me, but the amount by which I terminally value them is epsilon. And epsilon times a billion is not that small.)

comment by prase · 2012-08-01T20:01:08.468Z · LW(p) · GW(p)

Karma sink.

comment by ahartell · 2012-08-02T03:21:59.177Z · LW(p) · GW(p)
comment by Metus · 2012-08-01T16:25:31.443Z · LW(p) · GW(p)

Not sure if this is acceptable in an open thread but oh well.

I am currently a university student and get all of my expenses paid for by government aid and my parents. This fall I will start tutoring students and earn some money with it. Now, what should I do with it? Should I save it for later in life? Should I spend it on toys or whatnot? Part of both? I would like your opinions on that.

Replies from: drethelin, shminux
comment by drethelin · 2012-08-01T17:30:17.690Z · LW(p) · GW(p)

You should probably spend it on things that give you good experiences that will improve you and that you will remember throughout your life. Going to see shows, or joining activities such as martial arts (I favor Capoeira) or juggling, can give you fun skills you can use indefinitely as well as introducing you to large numbers of potentially awesome people. Not only are friendships and relationships super important for long-term happiness, spending money on experiential things as opposed to possessions is also linked to fonder memories etc.

If you want to buy toys, I recommend spending money on things you will use a lot, such as a new phone, a better computer, or something like a kindle.

In general I approve of saving behavior but to be honest the money you make tutoring kids is not gonna be a super relevant amount for your long-term financial security.

Replies from: Metus
comment by Metus · 2012-08-01T18:15:45.806Z · LW(p) · GW(p)

Thank you; you answered exactly as I expected: saving this tiny amount of money is not sensible, at least compared to the money I should expect to earn with a STEM major. So to get the most bang for my buck, I should spend it on experiences, as I planned, or on toys that I will use a lot.

So now I can look forward to feeling less guilty when spending my more or less hard earned money. ;)

Replies from: Kaj_Sotala, dbaupp
comment by Kaj_Sotala · 2012-08-02T18:34:45.734Z · LW(p) · GW(p)

On the other hand, although your current income is insignificant compared to what you'll eventually make, it's still significant now. In other words, it's still useful to save money now, because something-that-you-need-a-lot-of-money-for might come up before you start making big bucks. This certainly happened to me several times while studying, and when it did, I was glad that I had savings.

comment by dbaupp · 2012-08-01T20:44:40.333Z · LW(p) · GW(p)

There is still a certain amount of saving that might be useful. e.g. I am currently on a 6 month university exchange which will probably cost me upward of $10K (USD), but will (hopefully) be one of the best experiences I have had.

comment by shminux · 2012-08-01T17:50:49.411Z · LW(p) · GW(p)

I recall that when I started making some money as a student, I gave about half back to my parents and spent the rest. There wasn't nearly enough to be worth considering "saving for later". The paying back part made me feel better about myself, probably out of proportion, given that it was really a token amount. Which is probably one of the best uses of money: making oneself feel better.

Replies from: moridinamael
comment by moridinamael · 2012-08-01T18:40:21.847Z · LW(p) · GW(p)

I call this the EverQuest Savings Algorithm when I do it. The basis is that in EverQuest, and most games in general, the amount of money you can make at a given level is insignificant compared to the income you will be making in a few more levels, so it never really seems to make sense to save unless you've maxed out your level. The same thing happens in real life, as all your pre-first-job savings are rendered insignificant by your first-job savings, and subsequently your pre-first-post-college-job savings are obsoleted by your first post-college job.

comment by Oscar_Cunningham · 2012-08-02T21:33:08.750Z · LW(p) · GW(p)

What's that site where you can precommit to things and then if you don't do them it gives your money to $hated-political-party?

Replies from: fubarobfusco
comment by novalis · 2012-08-02T20:04:11.915Z · LW(p) · GW(p)

A fantastic illustration of the planning fallacy.

Replies from: Grognor
comment by Grognor · 2012-08-08T06:14:26.089Z · LW(p) · GW(p)

That isn't the planning fallacy.

Replies from: novalis
comment by novalis · 2012-08-08T15:20:00.118Z · LW(p) · GW(p)

Intel kept throwing money at the project for years, indicating that they must have been planning on the basis of these predictions.

Replies from: Grognor
comment by Grognor · 2012-08-08T16:34:46.933Z · LW(p) · GW(p)

Which is not the same thing as expecting a project to take much less time than it actually will.

Edit: I reveal my ignorance. Mea culpa.

Replies from: novalis
comment by novalis · 2012-08-08T18:13:54.014Z · LW(p) · GW(p)

I am using the more generalized definition. Wikipedia says:

In 2003, Lovallo and Kahneman proposed an expanded definition as the tendency to underestimate the time, costs, and risks of future actions and at the same time overestimate the benefits of the same actions. According to this definition, the planning fallacy results in not only time overruns, but also cost overruns and benefit shortfalls

comment by [deleted] · 2012-08-01T18:22:14.224Z · LW(p) · GW(p)

This was inspired by the recent Pascal's mugging thread, but it seems like a slightly more general and much harder question. It's sufficiently hard that I'm not even sure where to start looking for the answer, but I guess my first step is to try to formalize the question.

From a computer programming perspective, it seems like a decision AI might need a few notations for probabilities and utilities which do not correspond to actual numbers. For instance, assume a decision AI capable of assessing probability and utility uses RAM to do so, and has a finite amount of it. It seems that a properly programmed decision AI would have to have states for things that might be described in English much like the following:

"Event U is so improbable, I ran out of RAM midway through attempting to calculate the how close to 0 the probability was."

"Event V is so probable, I ran out of RAM midway through attempting to calculate the how close to 1 the probability was."

"Event W has a sufficiently hard to calculate probability that I ran out of RAM midway through attempting to calculate what number the probability appeared to be approaching."

"Event X is such a large positive utility, I ran out of RAM midway through attempting to calculate the how high the positive utility was."

"Event Y is such a large negative utility, I ran out of RAM midway through attempting to calculate the how low the negative utility was."

"Event Z has a sufficiently hard to calculate utility that I ran out of RAM midway through attempting to calculate what number the utility appeared to be approaching."

How would we want a decision AI to react to events involving those three kinds of probabilities and those three kinds of utilities?

Replies from: evand, Nisan
comment by evand · 2012-08-01T19:01:36.502Z · LW(p) · GW(p)

Events U and V can be handled in the obvious fashion.

Event W is cause for mild concern, with potential for alarm. Start by assuming the event has high probability (~1), and compute an output. Then try with low probability (~0). If the outputs are the same, ignore the problem and await more evidence. If the outputs are similar, attempt to decide whether the difference between them might plausibly have a large impact. If not, pick something within that range and proceed. If the problem remains unsolved, go into alarm mode and request programmer assistance.

Events X and Y can be mitigated with an appropriate prior for the expected utility of a typical action, as informed by past experience. That should allow for reasonable decisions in many cases of (unreasonable utility) * (unreasonable probability), since those terms will produce a very low expected utility one way or the other. If the problem is still unresolved, seek programmer guidance.

Event Z can be handled analogously to event W.
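A minimal sketch of the "event W" procedure described above, with a toy decision model (the two actions and their payoffs are invented for illustration):

```python
def best_action(p_w, actions):
    """Pick the action with the highest expected utility, given P(W) = p_w."""
    return max(actions, key=lambda a: p_w * a["u_if_w"] + (1 - p_w) * a["u_if_not_w"])

# Hypothetical payoffs for a decision that may depend on whether W happens.
actions = [
    {"name": "act",  "u_if_w": 10, "u_if_not_w": -2},
    {"name": "wait", "u_if_w": 1,  "u_if_not_w": 1},
]

hi = best_action(1.0, actions)  # pretend W is near-certain
lo = best_action(0.0, actions)  # pretend W is near-impossible

if hi["name"] == lo["name"]:
    print("Decision is insensitive to P(W); proceed with:", hi["name"])
else:
    print("Decision depends on P(W); escalate to the programmers")
```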

comment by Nisan · 2012-10-06T22:48:57.666Z · LW(p) · GW(p)

When thinking about these things I occasionally find it useful to use intervals instead of numbers to represent probabilities and utilities (a toy version is sketched in code after the list):

  • P(U) is in (0, epsilon), where epsilon is the lowest upper bound for the probability I found before I ran out of RAM.
  • P(V) is in (1 - epsilon, 1).
  • P(W) is in (0, 1); or in (a, b) if I managed to find nontrivial bounds a and b before I ran out of RAM.
  • U(X) is in (N, infinity)
  • U(Y) is in (-infinity, N)
  • U(Z) is in (-infinity, infinity); or (M, N) if I managed to find finite upper or lower bounds before running out of RAM.

EDIT: This might be what is known as "interval-valued probabilities" in the literature.
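A toy version of this bookkeeping in code (all bounds and utilities below are invented for illustration): represent each uncertain quantity as a (lower, upper) pair and propagate the bounds through the expected-utility arithmetic.

```python
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Multiply two intervals; taking min/max over all products handles negative bounds."""
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

p_u = (0.0, 1e-9)                   # P(U): somewhere below epsilon
u_x = (1e6, 1e12)                   # U(X): large, exact value unknown
p_rest = (1 - p_u[1], 1 - p_u[0])   # probability of the ordinary outcome
u_rest = (0.0, 1.0)                 # utility of the ordinary outcome

eu = interval_add(interval_mul(p_u, u_x), interval_mul(p_rest, u_rest))
print("Expected utility lies somewhere in", eu)
```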

comment by GuySrinivasan · 2012-08-01T17:51:07.867Z · LW(p) · GW(p)

I have never really used a budget. I want to try, even though I make enough and spend little enough that it's not an active problem. I've been pointed to YNAB... but one review says "YNAB is not for you if ... [you’re] not in debt, you don’t live paycheck to paycheck and you save money fast enough. If it ain’t broke, don’t fix it." I have data on Mint for a year, so I have a description of my spending. The part I'm confused about is the specifics of deciding what normatively I "should" spend in various categories. My current plan is problematic because it has a large upfront cost which I keep putting off: going over all categories of where my dollars go, estimating the relative marginal benefit per dollar I get for each, and rebalancing spending appropriately. I have three main questions:

  • How can I get rid of the large upfront cost without feeling like I'm skipping crucial steps, or is that part required?
  • What does "rebalancing appropriately" actually mean?
  • How do I go about figuring out how much of my total income I want to spend versus save? Relative spending I feel like I can figure out, weighing disparate categories of spending like "bananas" and "giving to my future selves" feels far harder.
Replies from: Xachariah, TimS, Rain
comment by Xachariah · 2012-08-03T07:58:26.000Z · LW(p) · GW(p)

You needn't do a utility rebalancing to get value out of a budget. My primary use of a budget is to prevent surprises. I know my inflow; I know my outflow; I know how long it will take me to save up for $ item or recover from a $ hit to my savings. When I first started doing my budget, there was no explicit utilons->dollar comparison. That came automatically in the form of "holy crap, I spent how much on games this month? I thought I spent almost nothing," or "wow, I spent way less on food than I expected this month."

Note that online banking can make this initial phase really easy. All my expenses are in check or debit form (and cash withdrawals are rare), so all of my expenses show up on my online statement. It takes about 3 minutes in Excel to have the month's budget broken down and ready to compare with the prior month. With this low an upfront cost, you can do the initial phase; then you'll have more data to decide whether a more intensive review is worth it for you.
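For anyone who would rather script it than spreadsheet it, here is a minimal sketch of that monthly breakdown step; the file name and column names are assumptions about what an online-banking export might look like.

```python
import csv
from collections import defaultdict

totals = defaultdict(float)
with open("statement.csv", newline="") as f:  # hypothetical export from online banking
    for row in csv.DictReader(f):             # assumed columns: date, category, amount
        totals[row["category"]] += float(row["amount"])

# Print categories from biggest expense up, to surface the "I spent how much?" surprises.
for category, amount in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{category:20s} {amount:10.2f}")
```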

comment by TimS · 2012-08-02T16:40:35.901Z · LW(p) · GW(p)

From this outsider's perspective, it looks like your potential budgeting plan is a solution in search of a problem.

The traditional problem budgeting is intended to solve is "outflow of resources exceeds inflow of resources." If that isn't your problem, then there is every reason to think the amount you spend on different things is a reasonable way of converting money into happiness for you.

But if you're not sure you are converting efficiently, I wouldn't try a budgeting task. Instead, I would examine your spending for easy improvements in happiness/money ratio. Toy example: Starbucks coffee is too expensive for the happiness it gives? Buy a coffee machine.

If you are concerned you aren't saving enough, that's also a separate investigation from budgeting.

My discussion assumes that you already have a moderately detailed understanding of where your money goes each month - as your post suggests. If you haven't done that, I suggest you try. Just keep your receipts for a month and then sit down for an hour or so with Excel.

comment by Rain · 2012-08-14T18:50:49.636Z · LW(p) · GW(p)

I use the steps from the book Your Money Or Your Life.

Replies from: GuySrinivasan
comment by GuySrinivasan · 2012-08-16T04:23:32.238Z · LW(p) · GW(p)

I have read 1/3 of it so far, and it looks to be exactly what I was looking for.

comment by beoShaffer · 2012-08-11T02:46:18.742Z · LW(p) · GW(p)

Has anyone from CfAR contacted the authors of Giving Debiasing Away? They at least claim to be interested in implementing debiasing programs, and CfAR is a bit short on people with credentials in Psychology.

comment by beoShaffer · 2012-08-07T03:33:18.711Z · LW(p) · GW(p)

More well-done rationality lite from Cracked, this time on generalizing from fictional evidence and narrative bias.

comment by OrphanWilde · 2012-08-01T15:52:11.057Z · LW(p) · GW(p)

I have a question about a nagging issue I have in probability -

The conditional probability can be expressed thus: p(A|B) = p(AB)/p(B). However, the proofs I've seen of this rely on restricting your initial sample space to B. Doesn't this limit the use of this equivalency to cases where you are, in fact, conditioning on B - that is, you can't use this to make inferences about B's conditional probability given A? Or am I misunderstanding the proof? (Or is there another proof I haven't seen?)

(I can't think of a case where you can't make inferences about B given A, but I'm having trouble ascertaining whether the proof actually holds.)

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-08-01T16:06:37.075Z · LW(p) · GW(p)

Can you link to such a proof?

Replies from: OrphanWilde
comment by OrphanWilde · 2012-08-01T16:08:43.678Z · LW(p) · GW(p)

Because I sold my college textbooks quite a while ago, I'm using the proof on wikipedia: http://en.wikipedia.org/wiki/Conditional_probability#Formal_derivation

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-08-02T13:44:28.653Z · LW(p) · GW(p)

Hmmm... I'm afraid I don't really understand your problem. I was hoping that looking at one of the proofs would give me a clue as to what you were missing, but it didn't.

The symbol p(A|B) is normally defined as p(AB)/p(B). What we need to check is that this matches up with our intuitive notion of conditional probability. Different people don't always have the same intuitive notions of probability, and the line that Wikipedia takes is that probabilities conditional on B should be the probabilities you get when you set the chance of elementary events inconsistent with B to zero, and then renormalise everything else. They prove from there that this gives p(AB)/p(B).

Doesn't this limit the use of this equivalency to cases where you are, in fact, conditioning on B - that is, you can't use this to make inferences about B's conditional probability given A? Or am I misunderstanding the proof?

This is the part of your question I don't understand. The symbol p(A|B) refers to some particular number. The proof shows that this is, in fact, the probability that you should ascribe to A, given that you know B. The symbol p(B|A) refers to some other number. We have p(A|B)=p(AB)/p(B) and p(B|A)=p(AB)/p(A). Smushing these equations together gives p(B|A)=p(A|B)p(B)/p(A), a formula for p(B|A) involving p(A|B).
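A quick numeric sanity check of the smushing step (an editorial addition; the joint distribution is arbitrary):

```python
# Joint probabilities over (A, B); the four values just need to sum to 1.
p_joint = {
    (True, True): 0.12, (True, False): 0.28,
    (False, True): 0.18, (False, False): 0.42,
}

p_a = sum(v for (a, b), v in p_joint.items() if a)  # p(A) = 0.40
p_b = sum(v for (a, b), v in p_joint.items() if b)  # p(B) = 0.30

p_a_given_b = p_joint[(True, True)] / p_b  # p(A|B) = p(AB)/p(B) = 0.4
p_b_given_a = p_joint[(True, True)] / p_a  # p(B|A) = p(AB)/p(A) = 0.3

# The smushed-together formula: p(B|A) = p(A|B) p(B) / p(A)
assert abs(p_b_given_a - p_a_given_b * p_b / p_a) < 1e-12
```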

Replies from: OrphanWilde
comment by OrphanWilde · 2012-08-02T14:25:14.236Z · LW(p) · GW(p)

The issue I have is whether or not it is valid to smush the equations together; whether the equation for p(A|B) is valid in the context of the equation for p(B|A). It may be an issue of intuition mismatch, but it seems analogous to simplifying the expression (1-X)*X^2/(1-X): the value at X=1 is still supposed to be undefined, even after you simplify. Here, we have two "versions" of the same set with disagreeing assigned probabilities.

But your description suggests the issue is that I'm trying to think of the set from the proof p(A|B) as still being there, instead of considering p(A|B) as a specific number; that is, I'm trying to interpret it as a variable whose value remains unresolved. If I consider it in the latter terms, the issue goes away.

comment by OrphanWilde · 2012-08-09T14:34:53.632Z · LW(p) · GW(p)

I've been pondering a game; an iterated prisoner's dilemma with extended rules revolving around trading information.

Utility points can be used between rounds for one of several purposes: sending messages to other agents in the game, reproducing, storing information (information is cheap to store, but must be re-stored every round), hacking, and securing against hacking.

There are two levels of iteration: round iteration and game iteration. A hacked agent hands over its source code to the hacker; if the hacker uses its utility to store this information until the end of the current game, the hacking agent's programmer gets the source code of the hacked agent. Thus, the programmer can reformulate his agents to target that source code between games.

Utility and agents are carried over between games; the programmers can expend their utility between games to reprogram their agents (a fixed cost per agents they have). They can also expend utility to "Defend world infrastructure", which increases the cost of the endgame action (initially already prohibitively expensive, to require several game iterations), "Seize control of world infrastructure."

Round actions can only be performed against known agents; an agent becomes known and identifiable by playing a round of prisoner's dilemma with them, or by getting their ID from communication. Agents can lie; they can misidentify another agent as cooperating when in fact it routinely defects. They can also transmit arbitrary information; all agents of a given player could use a handshake in order to identify fellow agents of that player. (Which could introduce a greater vulnerability to hacking, as each agent must defend against hacking individually.)

An agent who runs out of utility dies.

My question is: does anyone think a contest between unfriendly and friendly AI could be effectively modeled in this game? If so, would it be better modeled as a free-for-all, a single friendly "avatar" fighting a sea of unfriendly AIs, or a single unfriendly "avatar" fighting a sea of friendly AIs? Would adding "friendly versus unfriendly" as the game's target of modeling require some additional available actions to reflect the underlying purpose of the agents in the game?
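For concreteness, here is a minimal sketch of just the core round mechanic described above: two agents with persistent utility playing iterated prisoner's dilemma rounds. The payoff matrix is the standard PD one (an assumption; the comment doesn't specify payoffs), and messages, hacking, reproduction, and game iterations are all omitted.

```python
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class Agent:
    def __init__(self, name, strategy, utility=10):
        self.name, self.strategy, self.utility = name, strategy, utility
        self.known = {}  # agent name -> last observed move (the hackable information)

    def alive(self):
        return self.utility > 0  # an agent who runs out of utility dies

def play_round(a, b):
    ma, mb = a.strategy(a, b), b.strategy(b, a)
    a.utility += PAYOFFS[(ma, mb)]
    b.utility += PAYOFFS[(mb, ma)]
    a.known[b.name], b.known[a.name] = mb, ma  # playing a round makes agents known

# Example strategies: an unconditional defector versus tit-for-tat.
defector = Agent("D-bot", lambda me, them: "D")
tft = Agent("TFT", lambda me, them: me.known.get(them.name, "C"))

for _ in range(5):
    play_round(tft, defector)
print(tft.utility, defector.utility, tft.alive(), defector.alive())
```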

Replies from: DaFranker
comment by DaFranker · 2012-08-10T17:50:25.611Z · LW(p) · GW(p)

Disregarding the question of actual AIs¹, this sounds like it would make for an awesome browser-based "hacking" strategy game. It could also fit well into a game design similar to Uplink or Street Hacker.

¹. (I'm not good enough with AI theory yet to really have any useful insight there)

comment by Blackened · 2012-08-08T18:32:14.918Z · LW(p) · GW(p)

9 months ago, I designed something like a rationality test (as in biological rationality, although parts of it depend on prior knowledge of concepts like expected value). I'll copy it here; I'm curious whether all my questions will get answered correctly. Some of the questions might be logically invalid - please tell me if they are and explain your arguments (I didn't intend any question to be logically invalid). Also, certain bits might be vague - if you don't understand something, it's likely my fault. Feel free to skip any number of questions and selectively answer only the ones you like the most. Needless to say, I'm not a psychometrician and I can't guarantee the correlation between someone's rationality and his answers to this test.

1: Assume that exactly 10% of the people in the world are left-handed. Also assume that there are absolutely no differences between left-handed and right-handed people (so that the only groups of people where the expected percentage of left-handers is different from 10% are ones where membership explicitly depends on handedness). For these examples, we are looking at a randomly picked class of 24 students from a randomly chosen school. Note: all the examples are independent and in no particular order.

a) If we randomly pick three students - turns out that all of them are left-handed - do we still expect the average number of left-handers among the remaining 21 students to be 10%?

b) We count 10 right-handers (assuming that we managed to find at least 10 right-handers - if we didn't, we would have changed the group). Among the remaining 14 students, is the average number of left-handers still 10%?

c) We randomly count 10 students. Turns out that all of them are right-handed. Is the average number of left-handers for the rest of the class still 10%? If your answer to this question was different from your answer to b), please explain why this was the case.

d) You happen to know one of the students in this class. You met him one day at a meeting which was attended by all of the students from three of the classes in the school (out of 13 classes, each class has 24 students) - at that meeting, you randomly picked a left-handed person, out of all the left-handers who were there. Does this fact mean that the average number of left-handers in the remaining 23 students is different from 10%?

e) Out of all the left-handers in the world (about 700 million), you pick one at random. He happens to be from that class. Does this affect the average number of left-handers out of the remaining 23 people? (note that even very low changes in probability count as changes)

2: You are in Bulgaria. The number plates there always have four random numbers from 0 to 9. A person next to you claims to have psychic abilities and he says that the next car that you'll come across will have the number 1337. The next car you come across has the number 1307. You are amazed by this and think that he might have real powers. He was very close to the actual number - what is the probability for someone to guess a number as close as he did, if we assume that he only made a guess?

3: Assume that A and B are psychological factors, both significantly correlated with school grades. It's still possible (but unlikely) to have good grades with a low score on either of them, or even both of them. Also we assume that A and B are totally uncorrelated, and that the only criterion for acceptance into a university is grades.

a) If you are in a university of average quality (with correspondingly average requirements for grades), and you aim to find people who score as high on A as possible, is it a good strategy to place higher priority on people who score as high on B as possible, or should they score as low as possible? Why?

b) Does your answer to a) change if the university is of low quality? What if it's of high quality? If yes, why?

4: You have to pick between a certain profit of 10$ or a 35% chance of winning 30$. Assume you already are financially stable and have a lot of money. Which is the correct choice, if you want to have as much money as possible? Why?

Replies from: Alicorn, army1987, OrphanWilde
comment by Alicorn · 2012-08-08T19:09:38.862Z · LW(p) · GW(p)

I find these questions unclearly written. For example, in the license plate case, what does "close" mean? Are 1337 and 1307 close because three digits are exactly the same and the fourth one doesn't matter as long as it's not perfect, or because the nonmatching digit is only 3 away, or because the numbers have a difference of 30 out of a possible difference of thousands, or what?

Replies from: Blackened
comment by Blackened · 2012-08-08T20:20:14.006Z · LW(p) · GW(p)

I meant to say, a close match to what the person said. And I'm not entirely confident that 2 makes sense; I'd like to clarify something, but that would give away the answer. Please tell me about the other questions you don't understand.

Replies from: Alicorn
comment by Alicorn · 2012-08-08T21:57:41.306Z · LW(p) · GW(p)

I meant to say, a close match to what the person said.

This still doesn't clear up my confusion. I'll clarify.

In case (a), 1307 is as close to 1337 as are the example numbers 7337, 1937, and 1330 (among others). The only way 1307 could be closer to 1337 is if it were exactly 1337.

In case (b), 1307 is as close to 1337 as are the example numbers 4337, 1037, and 1334 (among others). The found number could be closer to 1337 if it were instead 1347 or 1327 (among others).

In case (c), 1307 is as close to 1337 as is 1367. The found number could be closer to 1337 if it were 1338, or 1336 (among others).

assume that there are absolutely no differences between left-handed and right-handed people

This can't be. If nothing else, the one group uses their left hand and the other uses their right. You need an "except" or "other than" clause.

We count 10 right-handers (assuming that we managed to find at least 10 right-handers).

Did it just happen to turn out that we found ten, so we can proceed, and if we didn't find ten we'd skip this problem - or does this problem solely use classes that have ten and throw out other classes?

Is the average number of left-handers still 10%?

In the entire class? Because that's not clear.

randomly picked a left-handed person

Went around shaking hands until locating a left-handed person, or grabbed the first person you saw and they were left-handed?

Does this affect the average left-handers out of the remaining 23 people?

This is a weird and misleading way to put it if we're still assuming the people in the class are independent of each other. Yes, even with the word "average"; I'm talking about writing, not math.

Also we assume that A and B are totally uncorrelated

What, really? These are both heavily correlated with a third thing but not at all with each other? Are there real phenomena that act like that? It is unlikely to have good grades and a low score on either one, but they're not correlated?

profit ... winning

I'm just nitpicking here, but this made me wonder whether a won $35 would be taxed where the $10 wouldn't.

as much money as possible

This is bad wording if this is supposed to be an expected value question. The most money possible is just $35; you don't even have to work out the expected value. If you take the ten dollars you are not getting as much as you could possibly have gotten.

Replies from: Blackened
comment by Blackened · 2012-08-08T22:59:11.840Z · LW(p) · GW(p)

In case (b), 1307 is as close to 1337 as are the example numbers 4337, 1037, and 1334 (among others). The found number could be closer to 1337 if it were instead 1347 or 1327 (among others).

This is the case I meant (at least, one that would be very close to what someone would use in real life). The point is to choose your own criteria for the example situation, to determine whether that person is a real magician.

This can't be. If nothing else, the one group uses their left hand and the other uses their right. You need an "except" or "other than" clause.

I know, but in real life, left-handers can be subject to stereotyping and discrimination, so I wanted to omit factors like those, as everyone does in such questions. I could have said that some people have gene A and others have gene B, and that only you can identify them and nobody else cares, because it has no effect on anything - but handedness seemed more intuitive to me for this already quite abstract question.

Did it just happen to turn out that we found ten, so we can proceed, and if we didn't find ten we'd skip this problem - or does this problem solely use classes that have ten and throw out other classes?

The problem only uses classes that have ten or more right-handers. I have edited this in the description.

In the entire class? Because that's not clear.

I have clarified that. I don't know why I included this item; it sort of duplicates a).

Went around shaking hands until locating a left-handed person, or grabbed the first person you saw and they were left-handed?

I have edited it to "randomly picked a left-handed person, out of all the left-handers who were there".

What, really? These are both heavily correlated with a third thing but not at all with each other? Are there real phenomena that act like that? It is unlikely to have good grades and a low score on either one, but they're not correlated?

Why not? The original was with IQ and concentration, but someone took it literally, so I decided to rename them. As far as I know, those two plus conscientiousness are all correlated with academic success, but not with each other. Also, intelligence and social abilities are both correlated with social success.

I'm just nitpicking here, but this made me wonder whether a won $35 would be taxed where the $10 wouldn't.

What do you mean? There are no taxes in either case.

This is bad wording if this is supposed to be an expected value question. The most money possible is just $35; you don't even have to work out the expected value. If you take the ten dollars you are not getting as much as you could possibly have gotten.

I think it's fine this way and I can't think of another way to word it. English isn't my first language.

comment by A1987dM (army1987) · 2012-09-20T18:55:42.575Z · LW(p) · GW(p)

(note that even very low changes in probability count as changes)

And you tell me that now? I had been answering the previous questions assuming I was allowed to round numbers of the order of 1/(world population) down to zero...

comment by OrphanWilde · 2012-08-08T19:12:37.067Z · LW(p) · GW(p)

1.A) Approximately. (Originally this was "yes", until you stated that there are about 700 million left-handers in the world. Given that information, I updated this answer: the problem then assumes a finite number of people, so encountering any one left-handed person very, very marginally reduces the odds of any different future person I encounter being left-handed, because the pool of people I'm drawing from now has slightly different odds.)

1.B) No. (Still.)

1.C) Approximately. Why the answer is different, without resorting to math: in 1.B, we nonrandomly pull 10 right-handed students out of the group. In a pool of 24 ten-sided dice we've already rolled, we've pulled out 10 of them which did not roll 1; this does not alter the number which did roll 1, so it increases their relative proportion. In this case, we've rolled 10 of the dice and they never came up 1; the remaining 14 are still fair rolls. (See the dice simulation after these answers.)

1.D) (Modified) Approximately.

1.E) Very very slightly.

2.) [Edited; apparently I screwed up when I added the possibility of an exact match] .41%, still assuming we're not considering the proximity of 0 to 3, and including closer matches. (That is, only considering identical digit matches.)

3.ab) Suppose it's more likely that a higher-quality student is A than !A. Even if it's extremely unlikely for a person who isn't high-A to have high grades, there could still be more high-grade students who aren't A than who are, if the odds of A are substantially lower than the odds of being neither A nor B but still having high grades. So there's not enough information.

Assuming it's more likely that you're A and have high grades than !A and have high grades, however, and assuming that this distribution holds at the grade average for each college (p(A|G) > .5 for all three G), you should in all cases favor low-B students. The remaining pool of accepted students is more likely to be A than !A, because !B limits you to students who are either A, or !A with high (enough) grades - and A was already assumed to be more likely.

But we don't really know p(A|G), either for low, average, or high grade levels from the problem description, so I couldn't actually say.

ETA: That should really be p(A|G,!B), because, while A and B are independent variables, G is correlated with both. But I think everything still holds anyway.

4.) The 35% gamble; expected value is p(A) times A, which leaves us with 1 × 10 and 0.35 × 35, or $10 and $12.25. We have an expected return of $12.25 for the gamble, which is higher than the certain $10.
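The dice analogy for 1.B versus 1.C above is easy to check directly. A minimal sketch - the setup is a reconstruction of the two scenarios, not the post's own code, with left-handedness played by a d10 rolling 1:

```python
import random

def avg_frac_ones(select, trials=100_000, n=24):
    """Average fraction of 1s among the dice kept by `select`."""
    acc, got = 0.0, 0
    while got < trials:
        rolls = [random.randint(1, 10) for _ in range(n)]
        kept = select(rolls)
        if kept is None:
            continue  # sample rejected by the scenario's precondition
        acc += kept.count(1) / len(kept)
        got += 1
    return acc / got

def pull_out_ten_non_ones(rolls):
    """1.B-style: deliberately remove 10 dice that did not roll 1."""
    non_ones = [i for i, r in enumerate(rolls) if r != 1]
    if len(non_ones) < 10:
        return None
    drop = set(random.sample(non_ones, 10))
    return [r for i, r in enumerate(rolls) if i not in drop]

def first_ten_happen_to_miss(rolls):
    """1.C-style: the first 10 dice simply happened not to roll 1."""
    return None if any(r == 1 for r in rolls[:10]) else rolls[10:]

print(avg_frac_ones(pull_out_ten_non_ones))     # ~0.17: the 1s are concentrated
print(avg_frac_ones(first_ten_happen_to_miss))  # ~0.10: the rest stay fair
```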

Replies from: Blackened
comment by Blackened · 2012-08-08T20:32:16.273Z · LW(p) · GW(p)

Why the answer is different: Because 1.C asks what our expectations are, and 1.B asks what the state of the class is

For b) and c), the questions were supposed to be the same - my bad, I have edited it. Please edit your answer accordingly.

Not all of your answers were correct (unsurprisingly, because I find some of the questions extremely hard - even I couldn't answer them at first :D). I'll wait for a few more replies and then I'll post the correct answers plus explanations.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-08-08T20:49:06.178Z · LW(p) · GW(p)

Oddly, my answers remained the same, but for different reasons. Also, I changed my answer to 1.D, and would recommend you change the wording to "Expected average" wherever you merely refer to the average.

comment by John_Maxwell (John_Maxwell_IV) · 2012-08-05T04:56:18.313Z · LW(p) · GW(p)

I've been working on candidates to replace the home page text, about page text, and FAQ. I've still got more polishing I'd like to do, but I figured I'd go ahead and collect some preliminary feedback.

Feel free to edit the candidate pages on the wiki, or send me suggestions via personal message. Harsh criticism is fine. It's possible that the existing versions are better overall and we're best off just incorporating a few of my ideas piecemeal.

Replies from: lukeprog
comment by lukeprog · 2012-08-05T19:02:06.491Z · LW(p) · GW(p)

What do you think are the advantages of the new candidate pages over the existing ones?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T03:33:25.822Z · LW(p) · GW(p)

Not necessarily any one thing in particular, but it didn't seem like people had put much effort into optimizing them.

  • There's duplicated text between the home page and about page. This is annoying if you've already read one.

  • The about page describes basic stuff about how the site works, like the fact that you can vote stuff up and down. This seems unnecessary because most of this stuff is pretty intuitive, so I don't think we need to spell it out anywhere besides the FAQ.

  • I read a comment somewhere that said something like "most people I know who got into Less Wrong did it after they read a particular article that they really enjoyed". This matches my experience: I got into Less Wrong after reading a couple of the politics articles (the science and politics fable + "politics is the mindkiller") and realizing I was an uninformed libertarian nut. I don't think the right article is guaranteed to be the same for every person, so I like the idea of the about page displaying a smorgasbord of different articles. I also think that just hyperlinking words isn't a very good way to tell people which articles are going to be interesting. I'd rather write out a sentence about each article, or at least give the article's full title.

  • There are a bunch of articles that people have explicitly made to be read by newcomers ("What is Bayesianism?", "References and Resources for Less Wrong", etc.) Right now these articles aren't very visible. Making them more visible would be an easy win.

  • Some of the answers in the current FAQ are kind of unfriendly, such as the answer to why everyone is an atheist. The answer to "why does everyone on Less Wrong agree" strikes me as a tad obnoxious and arrogant. I don't think these answers do a good job of communicating Less Wrong culture, which tends to be reasonably friendly and egalitarian for the most part (which is a good thing!)

One possible disadvantage is that my rewrites are overall friendlier and more inviting than the current versions, which may result in lowering the average poster caliber. But it's pretty easy to stay friendly and also discourage people from posting, e.g. say something like "Less Wrong sets high standards and has some odd norms, lurk for a while before commenting" or whatever.

There's also simple optimization that we can do. I deliberately wrote everything over from scratch without looking at what existed. Assuming we can conquer status quo bias, we ought to be able to go over candidate/existing pages line-by-line, and for each line, choose whichever page is better written (or something like that). Basically, even if there's nothing obvious that needs correcting, more alternatives are good. And the About page, at least, should be optimized a ton because every newcomer sees a message that says "go read the about page".

I could probably think of more things if you want.

Replies from: fubarobfusco, Kaj_Sotala
comment by fubarobfusco · 2012-08-06T04:10:05.780Z · LW(p) · GW(p)

The about page describes basic stuff about how the site works, like the fact that you can vote stuff up and down. This seems unnecessary because most of this stuff is pretty intuitive, so I don't think we need to spell it out anywhere besides the FAQ.

Be careful here. Typical-mind fallacy crops up a lot when people say "intuitive" about user interfaces they're familiar with. A visitor familiar with sites such as Reddit will readily understand the voting mechanism. But other folks might see the thumbs-up and thumbs-down icons and think they mean "recommend this to my friends" and "report this comment as abusive", for instance.

(That said, I agree that a detailed explanation of the voting system does not really belong in the "About" page.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T04:15:53.971Z · LW(p) · GW(p)

Well, Facebook, Youtube, and pretty much every major website I can think of have gone pretty far with their usage instructions tucked into a corner or entirely absent. And if we're doing things right, LWers ought to be substantially smarter than typical users of those sites.

comment by Kaj_Sotala · 2012-08-06T07:13:33.378Z · LW(p) · GW(p)

Some of the answers in the current FAQ are kind of unfriendly, such as the answer to why everyone is an atheist.

Related.

comment by Jabberslythe · 2012-08-04T23:27:27.546Z · LW(p) · GW(p)

Do any LWers have any familiarity with speed reading and have any recommendations or cautions about it?

Replies from: OrphanWilde, DaFranker, Blackened
comment by OrphanWilde · 2012-08-08T15:52:41.221Z · LW(p) · GW(p)

I picked up one variant independently from reading Robert Jordan; I can only caution against it based on my experiences. I discovered, after I started listening to audiobooks on long drives, that I had been missing large chunks of (usually only trivial) detail. It's taken several years to unlearn the habit.

comment by DaFranker · 2012-08-08T18:40:40.925Z · LW(p) · GW(p)

Personal experience with speed reading "techniques" suggests that their effectiveness largely depends on your skill, past experience, the topic you're reading about, how well you know the topic, and how much of it you really need to understand or remember.

When I tried practical applications, what usually worked best was simple pattern-recognition of complete sentences as "single words", with the rest of your brain filtering out the less useful words, adjectives and so on - which is extremely reliant on having read a lot of similar text. Then you can, in practice, skip most of most sentences, reading each sentence as a word and going through a paragraph as if it were one sentence, relying heavily on intuitive/subconscious pattern-recognition and then flowing backwards to "fill in the blanks" of phrase complements, particular subjects, etc.

Basically, from my experience, speed reading is martial arts for reading. There's no secret technique, just lots of training and purging inefficiencies. You still won't be able to throw firetrucks at people with your pinkies. Big mathy essays about stuff you don't already master will still take just as long to read and understand as they did before - any gain from speed-reading mastery will be inferior to mastering the skill of quick-page-turning.

comment by Blackened · 2012-08-08T15:44:44.363Z · LW(p) · GW(p)

I've heard that it's often a fraud and that it usually comes at the cost of reduced reading comprehension. But I have no actual experience with it.

comment by roland · 2012-08-03T03:25:14.676Z · LW(p) · GW(p)

Is it possible to embed JavaScript code into articles? If yes, how? I was thinking about doing some animations to illustrate probability.

Replies from: J_Taylor, dbaupp
comment by J_Taylor · 2012-08-03T03:41:28.622Z · LW(p) · GW(p)

This does not seem possible (thankfully!). Have you considered using JsFiddle? It may be useful for your purposes:

http://andrewwooldridge.com/blog/2011/03/16/stunning-examples-of-using-jsfiddle/

Replies from: roland
comment by roland · 2012-08-05T13:46:08.936Z · LW(p) · GW(p)

I suppose you can't embed JsFiddle here either, can you?

Replies from: J_Taylor
comment by J_Taylor · 2012-08-10T04:46:24.346Z · LW(p) · GW(p)

That seems unlikely. You would have to make do with links in your article.

comment by dbaupp · 2012-08-03T06:29:44.671Z · LW(p) · GW(p)

Are they meant to be interactive? If not, a .gif or a youtube video would probably work.

Replies from: roland
comment by roland · 2012-08-04T23:55:48.030Z · LW(p) · GW(p)

Yes, I want interactivity.

comment by gwern · 2012-08-02T21:16:30.729Z · LW(p) · GW(p)

FMA fans: for no particular reason I've written an idiosyncratic bit of fanfiction. I don't think I got Ed & Al's voice right, and if you don't mind reading bad fanfiction, I'd appreciate suggestions on improving the dialogue.

Replies from: pleeppleep
comment by pleeppleep · 2012-08-03T03:09:05.741Z · LW(p) · GW(p)

It's close enough for the purpose of the story. I could tell who was saying what the whole time. I don't think Ed would be that certain about ethics - he never seemed that way in the show (I never read the manga) - and it seemed like you were trying too hard to force his hotheadedness.

To me, the sign of poorly written fanfiction is when the author tries to shoehorn in details from the original work even when it's not necessary. There wasn't any reason for the gate to be involved, and the Elrics didn't really have cause to connect the philosopher's reference to the doorway between worlds - they wouldn't assume that everyone who mentions a gate has knowledge of human alchemy. Al also didn't need to mention their father to express recognition of the tale, and the joke about needing to eat didn't fit the tone you set up.

The dialogue was more awkward than anything. It seemed like the story really had nothing to do with FMA, so you tried to add as many arbitrary references and character quirks from the series as you could to strengthen the connection, instead of letting the characterization flow naturally from the characters' place in the story. It wasn't terrible as far as fanfiction goes, but it wasn't great.

Anyway, that's my two cents, hope it helps.

Replies from: gwern
comment by gwern · 2012-08-29T04:08:39.978Z · LW(p) · GW(p)

Those are good points, thanks for all the advice.

With the gate, I was trying to provide a sort of 'hook' and nudge readers towards thoughts about multiple worlds; I wondered if it was too clumsy, and you pointed to it, so I guess it was. I'll remove that, and also tone down the exclamation marks. I think the dinner joke makes sense in context, though: every conversation is a tug of war, and the reaction to abstraction is concreteness and vice versa... hm, actually what would make more sense is pointing out 'how does he get back'.

(I don't know how good the revised version is; the story's pretty personal, and I doubt anyone but me appreciates the three levels of interpretation, but then, I didn't write it for anyone but me.)

comment by RobertLumley · 2012-08-01T18:32:39.746Z · LW(p) · GW(p)

It's getting close to a year since we did the last census of LW (Results). (I actually thought it had been longer until I checked.) Is it time for another one? I think about once a year is right, but we may be growing or changing fast enough that more frequent censuses are appropriate. Ergo, a poll:

Edit: If you're rereading the results and have suggestions for how to improve the census, it might be a good idea to reply to this comment.

Replies from: RobertLumley, Yvain, Oscar_Cunningham, RobertLumley, RobertLumley
comment by RobertLumley · 2012-08-01T18:32:54.103Z · LW(p) · GW(p)

It is time for a new census.

comment by Scott Alexander (Yvain) · 2012-08-01T20:05:22.593Z · LW(p) · GW(p)

I was planning to do one in October of this year (though now that it's been mentioned, I might wait till January as a more natural "census point").

If someone else wants to do one first, please get in contact with me so we can make it as similar to the last one as possible while also making the changes that we agreed were needed at the time.

Replies from: RobertLumley, None, Xachariah
comment by RobertLumley · 2012-08-01T22:26:02.987Z · LW(p) · GW(p)

I would be willing to do it, but only if it wouldn't get done otherwise. I'm sure you'd do a better job with it. The best suggestion I saw was to make sure to post the question list before you post the survey. As long as you do that anyone who wants to provide feedback can do so.

comment by [deleted] · 2012-10-06T21:19:40.528Z · LW(p) · GW(p)

Are you tentatively planning on January for the next census? I'm interested in helping, if that's something you need.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-10-06T22:43:33.542Z · LW(p) · GW(p)

I am planning on now, but waiting for someone from CFAR who was going to send me a few questions they wanted included.

Replies from: None
comment by [deleted] · 2012-10-06T23:59:30.405Z · LW(p) · GW(p)

Oh, fun! I look forward to it.

comment by Xachariah · 2012-08-03T06:59:40.350Z · LW(p) · GW(p)

The only thing I'd worry about is how external factors affect things. It's been a while since I was in school, but I remember September/October having a different online presence than January. Also, HPMoR release dates may dramatically affect census numbers. Ideally we'd want to do it at as representative a time as possible.

comment by Oscar_Cunningham · 2012-08-01T18:46:00.598Z · LW(p) · GW(p)

I would wait for exactly a year since the last one.

Replies from: RobertLumley
comment by RobertLumley · 2012-08-01T18:51:15.130Z · LW(p) · GW(p)

That would probably be my preference, as a general policy. But a few things make me disagree:

First, I'm really curious about the results, specifically how they compare to mine. At the time of the last one I was almost brand new to LW.

Second, it was couched as the 2011 Survey, even though we started it on November 1, which seems like an awkward time to do an annual census.

Replies from: tgb
comment by tgb · 2012-08-06T01:58:42.259Z · LW(p) · GW(p)

OTOH, people's visiting and poll-taking tendencies almost certainly are season-dependent to some extent. Waiting should make the comparison a little better.

comment by RobertLumley · 2012-08-01T18:33:06.312Z · LW(p) · GW(p)

It is too early for a new census.

comment by RobertLumley · 2012-08-01T18:33:11.299Z · LW(p) · GW(p)

Karma Sink.

comment by Sabiola (bbleeker) · 2012-08-06T10:40:51.652Z · LW(p) · GW(p)

What (if anything) really helps to stop a mosquito bite from itching? And are there any reliable methods for avoiding bites, apart from DEET? I'll use DEET if I have to, but I'd rather use something less poisonous.

Replies from: NancyLebovitz, satt, Alicorn, moridinamael, OrphanWilde
comment by NancyLebovitz · 2012-08-08T19:09:35.933Z · LW(p) · GW(p)

I've found that not scratching a mosquito bite when it's fresh means that it stops itching fairly quickly and completely. The red mark takes just as long to go away, though.

I have no idea whether this generalizes to other people.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-09T15:37:19.258Z · LW(p) · GW(p)

Not scratching, huh? That takes an awful lot of willpower, but I'll give it a go.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-08-09T16:10:33.789Z · LW(p) · GW(p)

For whatever reason, I let myself touch the red spot instead of scratching it. I think that makes it easier for me, but again, I don't know whether that would generalize.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-13T07:56:18.837Z · LW(p) · GW(p)

I did try it, and that is exactly what I ended up doing: I touched it softly, and sometimes pressed down on it with a finger. And it works! Better than anything I've ever tried putting on it. I don't know why I didn't know this simple trick. Of course people (my parents, for example) always say you shouldn't scratch, but no one explained that it makes the itch go away faster - just that scratching can break the skin and maybe cause infection.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-08-13T09:31:54.432Z · LW(p) · GW(p)

I'm glad it worked.

I have no idea why I thought of it. I didn't have a theory and it wasn't based on anyone's advice. I don't think I'd been told to not scratch mosquito bites.

comment by satt · 2012-08-06T23:24:54.529Z · LW(p) · GW(p)

Icaridin (a.k.a. "picaridin") comes out well in head-to-head comparisons against DEET, and it's CDC-approved. When I've been lucky enough to buy it I've found it easier on the skin than DEET.

Heat works for me for itchy bites, although maybe it's a placebo. In any case, here's what I do: boil/microwave a cup of water; put a spoon in it briefly; dry the spoon; let it cool just enough so it won't burn me; press it against the bite for a few seconds. The itching intensifies while I apply the heat, then subsides to less than it was before, and stays low for an hour or two.

There's also a commercial product called After Bite that might work if you apply it soon after you're bitten. However, it's basically just a 3.5% ammonia emulsion with a special applicator, so you might as well buy plain ammonia and dilute & apply it as necessary.

Replies from: NancyLebovitz, bbleeker
comment by Sabiola (bbleeker) · 2012-08-07T11:18:52.732Z · LW(p) · GW(p)

Thank you! I'm using Picksan, whose active ingredient is (p)icaridin, and it did seem to work. I was just spooked a few days ago when a mosquito sat on me while I was wearing the stuff. It didn't bite, though; maybe I had missed a spot, and I'm pretty sure I didn't shake the bottle before use like it says to. I'll definitely try the heat thing. I have tried After Bite, and it didn't seem to do much. I do have a bottle of ammonia in the house; maybe a stronger solution works better.

comment by Alicorn · 2012-08-06T16:56:19.967Z · LW(p) · GW(p)

Imitation vanilla extract makes an okay mosquito repellent. (And smells much nicer than standard bug spray.)

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-07T11:24:21.551Z · LW(p) · GW(p)

Thank you! Does it have to be imitation, or will the real thing work too? I'll try citronella first, anyway - I don't like vanilla.

Replies from: Alicorn
comment by Alicorn · 2012-08-07T15:35:12.053Z · LW(p) · GW(p)

I think it's only the fake kind, but I'm not sure (my evidence is "my best friend told me so and then I put fake vanilla on myself before a Fourth of July party and didn't get any bites when usually I get lots").

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-08T11:03:33.892Z · LW(p) · GW(p)

Thanks again! As I said, I'll try the citronella first. I just bought a bottle of citronella; it smells just like when my mother made me use it on holidays when I was a little girl. I still don't like it much (still better than vanilla), but now it is nostalgic. Which is weird, since I'm really not nostalgic for my childhood. I didn't have a bad childhood, but in general I'm much happier now.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-09T15:40:43.749Z · LW(p) · GW(p)

OK, scratch citronella. Maybe it keeps off the mosquitos, but it also chased off the cat yesterday evening. :(

comment by moridinamael · 2012-08-06T22:31:41.586Z · LW(p) · GW(p)

Rub tea tree oil on the bite. It works really well for all insect bites.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-07T11:20:22.249Z · LW(p) · GW(p)

Thank you! I'll try this too. *goes off to the store*

comment by OrphanWilde · 2012-08-06T17:07:57.957Z · LW(p) · GW(p)

I've encountered some anecdotal evidence for massive B12 consumption, but nothing substantive.

Citronella oil is supposed to be effective.

Sulfur is actually an amazing mosquito repellent, but hard to utilize. Burning sulfur directly produces extremely toxic fumes, and eating large quantities of cabbage and egg yolk results in fumes you will only -wish- were toxic. (Although apparently some hikers do exactly that... I imagine they hike alone, however.)

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-07T11:22:26.219Z · LW(p) · GW(p)

Thank you! I had forgotten about citronella. I love cabbage and eggs, but I don't think I should do that to my husband. ;p

Replies from: None
comment by [deleted] · 2012-08-07T15:50:33.838Z · LW(p) · GW(p)

Lemon eucalyptus essential oil contains a lot of citronellal, and dilution products are quite effective at repelling insects.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-08-08T11:05:20.042Z · LW(p) · GW(p)

Thanks! I bought the only citronella my drugstore had; I'll give it a try next time I see/hear a mosquito (the weather isn't nice enough for them ATM).

comment by FiftyTwo · 2012-08-02T00:23:31.593Z · LW(p) · GW(p)

Does anyone have any recommendations on learning formal logic? Specifically natural deduction and the background to Godel's incompleteness theorem.

I have a lot of material on the theory, but I find it a very difficult thing to learn; it doesn't respond well to standard learning techniques because of the mixture of specificity and deep concepts you need to understand to move forward.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-08-02T09:04:21.903Z · LW(p) · GW(p)

I highly recommend Introduction to Logic by Harry Gensler, but don't just read the book. You are very unlikely to grok formal logic without working your way through a large number of problem sets.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-02T17:10:24.529Z · LW(p) · GW(p)

Thanks, I'll look that one up.

but don't just read the book. You are very unlikely to grok formal logic without working your way through a large number of problem sets.

I know that very well. I've been filling notepads with tableau proofs for the past few days. I find tableau a lot easier than natural deduction, as you can work through them algorithmically, but natural deduction proofs require a strange sort of sideways thinking: learning tricks and techniques to take you towards a desired conclusion.

comment by Multiheaded · 2012-08-01T21:11:13.807Z · LW(p) · GW(p)

Plugging my selection of links in the neighboring politics thread, because this is some great shit and you're near-guaranteed to enjoy it even aside from the political content.