Open Thread for February 11 - 17

post by Scott Garrabrant · 2014-02-11T18:08:23.934Z · LW · GW · Legacy · 335 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

335 comments

Comments sorted by top scores.

comment by pan · 2014-02-11T18:29:50.157Z · LW(p) · GW(p)

Luke wrote a detailed description of his approach to beating procrastination (here if you missed it).

Does anyone know if he's ever given an update anywhere as to whether or not this same algorithm works for him to this day? He seems to be very prolific and I'm curious about whether his view on procrastination has changed at all.

comment by gwern · 2014-02-13T18:28:29.745Z · LW(p) · GW(p)

Yvain has started a nootropics survey: https://docs.google.com/forms/d/1aNmqagWZ0kkEMYOgByBd2t0b16dR029BoHmR_OClB7Q/viewform

Background: http://www.reddit.com/r/Nootropics/comments/1xglcg/a_survey_for_better_anecdata/ http://www.reddit.com/r/Nootropics/comments/1xt0zn/rnootropics_survey/

I hope a lot of people take it; I'd like to run some analyses on the results.

Replies from: hyporational, gwern
comment by hyporational · 2014-02-14T08:19:59.726Z · LW(p) · GW(p)

Why is nicotine not on that list?

Replies from: gwern
comment by gwern · 2014-02-14T18:14:09.894Z · LW(p) · GW(p)

I have no idea. The selection isn't the best ever (I haven't even heard of some of them), but it can be improved for next time based on this time.

comment by Scott Garrabrant · 2014-02-11T21:26:29.934Z · LW(p) · GW(p)

I wrote a logic puzzle, which you may have seen on my blog. It has gotten a lot of praise, and I think it is a really interesting puzzle.

Imagine the following two-player game. Alice secretly fills 3 rooms with apples. She has an infinite supply of apples and infinitely large rooms, so each room can have any non-negative integer number of apples. She must put a different number of apples in each room. Bob will then open the doors to the rooms in any order he chooses. After opening each door and counting the apples, but before he opens the next door, Bob must accept or reject that room. Bob must accept exactly two rooms and reject exactly one room. Bob loves apples, but hates regret. Bob wins the game if the total number of apples in the two rooms he accepts is as large as possible. Equivalently, Bob wins if the single room he rejects has the fewest apples. Alice wins if Bob loses.

Which of the two players has the advantage in this game?

This puzzle is a lot more interesting than it looks at first, and the solution can be seen here.

I would also like to see some of your favorite logic puzzles. If you have any puzzles that you really like, please comment and share.

Replies from: DanielLC, solipsist, Scott Garrabrant, mwengler
comment by DanielLC · 2014-02-11T21:31:37.334Z · LW(p) · GW(p)

To make sure I understand this correctly: Bob cares about winning, and getting no apples is as good as 3^^^3 apples, so long as he rejects the room with the fewest, right?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T21:32:13.821Z · LW(p) · GW(p)

That is correct.

comment by solipsist · 2014-02-11T21:51:36.642Z · LW(p) · GW(p)

A long one-lane, no-passing highway has N cars. Each driver prefers to drive at a different speed. They will each drive at that preferred speed if they can, and will tailgate if they can't. The highway ends up with clumps of tailgaters led by slow drivers. What is the expected number of clumps?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T22:07:43.000Z · LW(p) · GW(p)

My Answer

Replies from: solipsist, mwengler
comment by solipsist · 2014-02-11T22:09:39.214Z · LW(p) · GW(p)

You got it.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T22:12:39.683Z · LW(p) · GW(p)

I am not sure what the distribution is.

Replies from: gjm
comment by gjm · 2014-02-11T22:50:31.908Z · LW(p) · GW(p)

The distribution; see e.g. here.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T23:02:34.951Z · LW(p) · GW(p)

Ah, yes, thank you.

comment by mwengler · 2014-02-11T23:31:00.493Z · LW(p) · GW(p)

Coscott's solution seems incorrect for N=3. Label the 3 cars: 1 is fastest, 2 is 2nd fastest, 3 is slowest. There are 6 possible orderings for the cars on the road. These are shown with the cars appropriately clumped and the number of clumps associated with each ordering:

1 2 3 .. 3 clumps

1 32 .. 2 clumps

21 3 .. 2 clumps

2 31 .. 2 clumps

312 .. 1 clump

321 .. 1 clump

Find the mean number of clumps and it is 11/6. Coscott's solution gives 10/6.

Fix?

Replies from: Scott Garrabrant, mwengler
comment by Scott Garrabrant · 2014-02-11T23:58:07.118Z · LW(p) · GW(p)

My solution gives 11/6

Replies from: mwengler
comment by mwengler · 2014-02-12T00:04:49.485Z · LW(p) · GW(p)

Dang you are right.

comment by mwengler · 2014-02-11T23:45:36.997Z · LW(p) · GW(p)

Coscott's solution is also wrong for N=4: the actual solution is a mean of 2, Coscott's gives 25/12.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-12T00:02:22.577Z · LW(p) · GW(p)

4 with prob 1/24, 3 with prob 6/24, 2 with prob 11/24, 1 with prob 6/24

Mean of 25/12

How did you get 2?

Replies from: mwengler
comment by mwengler · 2014-02-12T00:45:04.810Z · LW(p) · GW(p)

Must have counted wrong. Counted again and you are right.

Great problems though. I cannot figure out how to conclude it is the solution you got. Do you do it by induction? I think I could probably get the answer by induction, but haven't bothered trying.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-12T01:36:53.978Z · LW(p) · GW(p)

Take the kth car. It is at the start of a cluster if it is the slowest of the first k cars. The kth car is therefore at the start of a cluster with probability 1/k. The expected number of clusters is the sum over all cars of the probability that that car is in the front of a cluster.
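
In other words, the expected number of clusters is 1 + 1/2 + ... + 1/N, the Nth harmonic number. Here is a minimal Monte Carlo sketch (Python; the names are just illustrative) that checks this against the counts worked out above:

    import random

    def expected_clusters(n, trials=200_000):
        # a car starts a cluster iff it is slower than every car ahead of it,
        # so count "new slowest so far" cars over random orderings of n distinct speeds
        total = 0
        for _ in range(trials):
            speeds = random.sample(range(n), n)   # preferred speeds, front of the highway first
            slowest_ahead = float('inf')
            for s in speeds:
                if s < slowest_ahead:
                    total += 1
                    slowest_ahead = s
        return total / trials

    print(expected_clusters(3))   # ~1.833, i.e. 11/6 = 1 + 1/2 + 1/3
    print(expected_clusters(4))   # ~2.083, i.e. 25/12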

Replies from: solipsist
comment by solipsist · 2014-02-12T02:08:06.320Z · LW(p) · GW(p)

Hurray for the linearity of expected value!

comment by Scott Garrabrant · 2014-02-12T00:17:45.332Z · LW(p) · GW(p)

Imagine that you have a collection of very weird dice. For every prime between 1 and 1000, you have a fair die with that many sides. Your goal is to generate a uniform random integer from 1 to 1001 inclusive.

For example, using only the 2 sided die, you can roll it 10 times to get a number from 1 to 1024. If this result is less than or equal to 1001, take that as your result. Otherwise, start over.

This algorithm uses on average 10240/1001 = 10.229770... rolls. What is the fewest expected number of die rolls needed to complete this task?
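
For concreteness, here is a small sketch (Python; the function name is mine) of that baseline rejection-sampling scheme, which reproduces the 10240/1001 figure:

    import random

    def draw_1_to_1001_with_d2():
        # roll the 2-sided die 10 times for a uniform number in 1..1024; retry if it exceeds 1001
        rolls = 0
        while True:
            rolls += 10
            x = 1 + sum(random.randint(0, 1) << i for i in range(10))
            if x <= 1001:
                return x, rolls

    samples = [draw_1_to_1001_with_d2() for _ in range(200_000)]
    print(sum(r for _, r in samples) / len(samples))   # ~10.23, i.e. 10240/1001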

When you know the right answer, you will probably be able to prove it.

Solution

Replies from: Strilanc, Luke_A_Somers
comment by Strilanc · 2014-02-12T05:20:14.886Z · LW(p) · GW(p)

If you care about more than the first roll, so you want to make lots and lots of uniform random numbers in 1, 1001, then the best die is (rot13'd) gur ynetrfg cevzr va enatr orpnhfr vg tvirf lbh gur zbfg ragebcl cre ebyy. Lbh arire qvfpneq erfhygf, fvapr gung jbhyq or guebjvat njnl ragebcl, naq vafgrnq hfr jung vf rffragvnyyl nevguzrgvp pbqvat.

Onfvpnyyl, pbafvqre lbhe ebyyf gb or qvtvgf nsgre gur qrpvzny cbvag va onfr C. Abgvpr gung, tvira gung lbh pbhyq ebyy nyy 0f be nyy (C-1)f sebz urer, gur ahzore vf pbafgenvarq gb n cnegvphyne enatr. Abj ybbx ng onfr 1001: qbrf lbhe enatr snyy ragveryl jvguva n qvtvg va gung onfr? Gura lbh unir n enaqbz bhgchg. Zbir gb gur arkg qvtvg cbfvgvba naq ercrng.

Na vagrerfgvat fvqr rssrpg bs guvf genafsbezngvba vf gung vs lbh tb sebz onfr N gb onfr O gura genafsbez onpx, lbh trg gur fnzr frdhrapr rkprcg gurer'f n fznyy rkcrpgrq qrynl ba gur erfhygf.

I give working code in "Transmuting Dice, Conserving Entropy".

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-12T05:49:57.813Z · LW(p) · GW(p)

I will say as little as possible to avoid spoilers, because you seem to have thought enough about this to not want it spoiled.

The algorithm you are describing is not optimal.

Edit: Oh, I just realized you were talking about generating lots of samples. In that case, you are right, but you have not solved the puzzle yet.

comment by Luke_A_Somers · 2014-02-13T13:54:17.138Z · LW(p) · GW(p)

Ebyy n friragrra fvqrq qvr naq n svsgl guerr fvqrq qvr (fvqrf ner ynoryrq mreb gb A zvahf bar). Zhygvcyl gur svsgl-guerr fvqrq qvr erfhyg ol friragrra naq nqq gur inyhrf.

Gur erfhyg jvyy or va mreb gb bar gubhfnaq gjb. Va gur rirag bs rvgure bs gurfr rkgerzr erfhygf, ergel.

Rkcrpgrq ahzore bs qvpr ebyyf vf gjb gvzrf bar gubhfnaq guerr qvivqrq ol bar gubhfnaq bar, be gjb cbvag mreb mreb sbhe qvpr ebyyf.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-13T18:46:44.101Z · LW(p) · GW(p)

You can do better :)

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-02-14T00:46:13.845Z · LW(p) · GW(p)

Yeah, I realized that a few minutes after I posted, but didn't get a chance to retract it... Gimme a couple minutes.

Vf vg gur fnzr vqrn ohg jvgu avar avargl frira gjvpr, naq hfvat zbq 1001? Gung frrzf njshyyl fznyy, ohg V qba'g frr n tbbq cebbs. Vqrnyyl, gur cebqhpg bs gjb cevzrf jbhyq or bar zber guna n zhygvcyr bs 1001, naq gung'f gur bayl jnl V pna frr gb unir n fubeg cebbs. Guvf qbrfa'g qb gung.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-14T01:48:25.850Z · LW(p) · GW(p)

I am glad someone is thinking about it enough to fully appreciate the solution. You are suggesting taking advantage of 709*977=692693. You can do better.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-02-14T10:33:13.410Z · LW(p) · GW(p)

You can do better than missing one part in 692693? You can't do it in one roll (not even a chance of one roll) since the dice aren't large enough to ever uniquely identify one result... is there SOME way to get it exactly? No... then it would be a multiple of 1001.

I am presently stumped. I'll think on it a bit more.

ETA: OK, instead of having ONE left over, you leave TWO over. Assuming the new pair is around the same size, that nearly doubles your trouble rate, but in the event of trouble, it gives you one bit of information on the outcome. So, you can roll a single 503-sided die instead of retrying the outer procedure?

Depending on the pair of primes that produce the two-left-over, that might be better. 709 is pretty large, though.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-14T18:09:44.840Z · LW(p) · GW(p)

The best you can do leaving 2 over is 709*953=675677, coincidentally using the same first die. You can do better.

comment by mwengler · 2014-02-12T00:34:58.589Z · LW(p) · GW(p)

It is interesting to contemplate that the almost-fair solution favors Bob:

Bob counts the numbers of apples in 1st room and accepts it unless it has zero apples in it, in which case he rejects it.
If he hasn't rejected room 1 he counts the apples in 2 and if it is more than in 1 he accepts it else he rejects it.

For all possible assignments of apples to rooms EXCEPT those where one room has zero apples, Bob has a 50% chance of getting it right. But for all assignments where one room has zero apples in it, Bob has a 5/6 chance of winning and only a 1/6 chance of losing.

I think in some important sense this is the telling limit of why Coscott is right and how Alice can force a tie, but not win, if she knows Bob's strategy. If Alice knew Bob was using this strategy, she would never put zero apples in any room, and she and Bob would tie; i.e., Alice is able to force him arbitrarily close to 50:50.

And for the strategy to work, it relies upon the asymmetry in the problem: you can go arbitrarily high in apples, but you can't go arbitrarily low. Initially I was thinking Coscott's solution must be wrong, that it must be equivocating somehow on the fact that Alice can choose ANY number of apples. But I now think it is right, and that every strategy Bob uses to win can be defeated by Alice if she knows what his strategy is. I think without proof, that is :)
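
For anyone who wants to poke at this numerically, here is a rough sketch (Python) of the zero-apple strategy described above; the distribution Alice draws from here is purely illustrative, not part of the puzzle:

    import random

    def bob_wins(counts):
        # counts are the apple numbers behind the doors, in the order Bob opens them
        a, b, c = counts
        if a == 0:
            rejected = a        # saw zero apples: reject room 1 outright
        elif b > a:
            rejected = c        # accept rooms 1 and 2, reject room 3 unseen
        else:
            rejected = b        # reject room 2, accept rooms 1 and 3
        return rejected == min(counts)

    def trial(max_apples=10):
        # illustrative Alice: three distinct counts drawn uniformly from 0..max_apples;
        # random.sample also randomizes the order in which Bob encounters them
        return bob_wins(random.sample(range(max_apples + 1), 3))

    wins = sum(trial() for _ in range(200_000))
    print(wins / 200_000)   # above 1/2 for this Alice; exactly 1/2 if she never uses a zero-apple room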

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-12T00:42:30.806Z · LW(p) · GW(p)

I think in some important sense this is the telling limit of why Coscott is right

Right about what? The hint I give at the beginning of the solution? My solution?

Watch your quantifiers. The strategy you propose for Bob can be responded to by Alice never putting 0 apples in any room. This strategy shows that Bob can force a tie, but this is not an example of Bob doing better than a tie.

Replies from: mwengler
comment by mwengler · 2014-02-12T01:13:34.996Z · LW(p) · GW(p)

Right about it not being a fair game. My first thought was that it really is a fair game, and that by comparing only the cases where fixed numbers a, b, and c are distributed you get the slight advantage for Bob that you claimed, but that if you considered ALL possibilities you would have no advantage for Bob.

Then I thought you have a vanishingly small advantage for Bob if you consider Alice using ALL numbers, including very very VERY high numbers, where the probability of ever taking the first room becomes vanishingly small.

And then by thinking of my strategy, of only picking the first room when you were absolutely sure it was correct, i.e. it had in it as low a number of apples as a room can have, I convinced myself that there really is a net advantage to Bob, and that Alice can defeat that advantage if she knows Bob's strategy, but Alice can't find a way to win herself.

So yes, I'm aware that Alice can defeat my 0 apple strategy if she knows about it, just as you are aware that Alice can defeat your 2^-n strategy if she knows about that.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-12T01:48:11.785Z · LW(p) · GW(p)

So yes, I'm aware that Alice can defeat my 0 apple strategy if she knows about it, just as you are aware that Alice can defeat your 2^-n strategy if she knows about that.

What? I do not believe Alice can defeat my strategy. She can get arbitrarily close to 50%, but she cannot reach it.

comment by Shmi (shminux) · 2014-02-11T21:16:38.715Z · LW(p) · GW(p)

2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with google searches, I'm reposting it here verbatim.

I would really like to see some solid estimates here, not just the usual hand-waving. Maybe someone better qualified can critique the following.

By "a computer program to simulate Maxwell's equations" EY presumably means a linear PDE solver for initial boundary value problems. The same general type of code should be able to handle the Schroedinger equation. There are a number of those available online, most written in Fortran or C, with the relevant code size about a megabyte. The Kolmogorov complexity of a solution produced by such a solver is probably of the same order as its code size (since the solver effectively describes the strings it generates), so, say, about 10^6 "complexity units". It might be much lower, but this is clearly the upper bound.

One wrinkle is that the initial and boundary conditions also have to be given, and the size of the relevant data heavily depends on the desired precision (you have to give the Dirichlet or Neumann boundary conditions at each point of a 3D grid, and the grid size can be 10^9 points or larger). On the other hand, the Kolmogorov complexity of this initial data set should be much lower than that, as the values for the points on the grid are generated by a piece of code usually much smaller than the main engine. So, in the first approximation, we can assume that it does not add significantly to the overall complexity.
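
A compact way to state the bound used in the last two paragraphs (just the standard fact that any program printing a string upper-bounds its complexity; the labels are mine):

    K(\text{simulation output}) \le |\text{solver code}| + |\text{initial-data generator}| + O(1)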

Things get dicier if we try to estimate in a similar way the complexity of models like General Relativity, the Navier-Stokes equations or Quantum Field Theory, due to their non-linearity and a host of other issues. When no general-purpose solver is available, how does one estimate the complexity? Currently, a lot of heuristics are used, effectively hiding part of the algorithm in the human mind, thus making any estimate unreliable, as the human mind (or "Thor's mind") is rather hard to simulate.

One can argue that the equations themselves for each of the theories are pretty compact, so the complexity cannot be that high, but then, as Feynman noted, all of physical laws can be written simply as A=0, where A hides all the gory details. We still have to specify the algorithm to generate the predictions, and that brings us back to numerical solvers.

I also cannot resist noting, yet again, that all interpretations of QM that rely on solving the Schroedinger equation have exactly the same complexity, as estimated above, and so cannot be distinguished by Occam's razor. This applies, in particular, to MWI vs Copenhagen.

It is entirely possible that my understanding of how to calculate the Kolmogorov complexity of a physical theory is flawed, so I welcome any feedback on the matter. But no hand-waving, please.

Replies from: gwern, Squark
comment by gwern · 2014-02-11T22:49:51.168Z · LW(p) · GW(p)

Interesting recent paper: "Is ZF a hack? Comparing the complexity of some (formalist interpretations of) foundational systems for mathematics", Wiedijk; he formalizes a number of systems in Automath.

Replies from: shminux
comment by Shmi (shminux) · 2014-02-12T17:31:24.167Z · LW(p) · GW(p)

This makes sense for mathematical systems. I wonder if it is possible to do something like this for a mathematical model of a physical phenomenon.

comment by Squark · 2014-02-16T20:20:57.144Z · LW(p) · GW(p)

It shouldn't be that hard to find code that solves a non-linear PDE. A Google search reveals http://einsteintoolkit.org/, an open-source code that does numerical General Relativity.

However, QFT is not a PDE; it is a completely different object. The keyword here is lattice QFT. Google reveals this gem: http://xxx.tau.ac.il/abs/1310.7087

Nonperturbative string theory is not completely understood, however all known formulations reduce it to some sort of QFT.

comment by mcoram · 2014-02-12T01:17:07.214Z · LW(p) · GW(p)

I've written a game (or see (github)) that tests your ability to assign probabilities to yes/no events accurately using a logarithmic scoring rule (called a Bayes score on LW, apparently).

For example, in the subgame "Coins from Urn Anise," you'll be told: "I have a mysterious urn labelled 'Anise' full of coins, each with possibly different probabilities. I'm picking a fresh coin from the urn. I'm about to flip the coin. Will I get heads? [Trial 1 of 10; Session 1]". You can then adjust a slider to select a number a in [0,1].

As you adjust a, you adjust the payoffs that you'll receive if the outcome of the coin flip is heads or tails. Specifically you'll receive 1+log2(a) points if the result is heads and 1+log2(1-a) points if the result is tails. This is a proper scoring rule in the sense that you maximize your expected return by choosing a equal to the posterior probability that, given what you know, this coin will come out heads. The payouts are harshly negative if you have false certainty. E.g. if you choose a=0.995, you'd only stand to gain 0.993 if heads happens but would lose 6.644 if tails happens.

At the moment, you don't know much about the coin, but as the game goes on you can refine your guess. After 10 flips the game chooses a new coin from the urn, so you won't know so much about the coin again, but try to take account of what you do know -- it's from the same urn Anise as the last coin (iid). If you try this, tell me what your average score is on play 100, say.
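
As a quick illustration of the rule (a minimal sketch in Python; the helper names are mine, and the numbers just reproduce the example above):

    import math

    def score(a, heads):
        # the game's logarithmic rule: 1 + log2(a) on heads, 1 + log2(1 - a) on tails
        return 1 + math.log2(a if heads else 1 - a)

    def expected_score(a, p):
        # expected payoff when the coin's true heads probability is p and you report a
        return p * score(a, True) + (1 - p) * score(a, False)

    print(round(score(0.995, True), 3), round(score(0.995, False), 3))   # 0.993 -6.644
    p = 0.7
    best = max((k / 1000 for k in range(1, 1000)), key=lambda a: expected_score(a, p))
    print(best)   # 0.7 -- reporting the true probability maximizes the expected score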

There's a couple other random processes to guess in the game and also a quiz. The questions are intended to force you to guess at least some of the time. If you have suggestions for other quiz questions, send them to me by PM in the format:

{q:"1+1=2. True?", a:1} // source: my calculator

where a:1 is for true and a:0 is for false.

Other discussion: probability calibration quizzes
Papers: "Some Comparisons among Quadratic, Spherical, and Logarithmic Scoring Rules" (Bickel)

Replies from: Scott Garrabrant, solipsist
comment by Scott Garrabrant · 2014-02-12T02:11:07.290Z · LW(p) · GW(p)

This game has taught me something. I get more enjoyment than I should out of watching a random variable go up and down, and probably should avoid gambling. :)

Replies from: Emile
comment by Emile · 2014-02-12T22:32:17.256Z · LW(p) · GW(p)

Nice work, congrats! Looks fun and useful, better than the calibration apps I've seen so far (including one I made, that used confidence intervals - I had a proper scoring rule too!)

My score:

Current score: 3.544 after 10 plays, for an average score per play of 0.354.

Replies from: mcoram
comment by mcoram · 2014-02-13T03:23:37.076Z · LW(p) · GW(p)

Thanks Emile,

Is there anything you'd like to see added?

For example, I was thinking of running it on nodejs and logging the scores of players, so you could see how you compare. (I don't have a way to host this, right now, though.)

Or another possibility is to add diagnostics. E.g. were you setting your guess too high systematically or was it fluctuating more than the data would really say it should (under some models for the prior/posterior, say).

Also, I'd be happy to have pointers to your calibration apps or others you've found useful.

comment by solipsist · 2014-02-12T18:17:50.774Z · LW(p) · GW(p)

Thank you. I really, really want to see more of these.

Feature request #976: More stats to give you an indication of overconfidence / underconfidence. (e.g. out of 40 questions where you gave an answer between .45 and .55, you were right 70% of the time).
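
Something along these lines would cover that diagnostic (a rough sketch in Python; the function name and bucket edges are arbitrary):

    def calibration_report(preds, outcomes, edges=(0.0, 0.2, 0.45, 0.55, 0.8, 1.0)):
        # preds: reported probabilities in [0,1]; outcomes: 1 for "happened", 0 otherwise
        # prints, per bucket, how often events actually happened vs. the stated probabilities
        for lo, hi in zip(edges, edges[1:]):
            rows = [(p, o) for p, o in zip(preds, outcomes) if lo <= p < hi or (hi == 1.0 and p == 1.0)]
            if not rows:
                continue
            mean_p = sum(p for p, _ in rows) / len(rows)
            freq = sum(o for _, o in rows) / len(rows)
            print(f"[{lo:.2f}, {hi:.2f}): n={len(rows)}, said {mean_p:.2f}, got {freq:.2f}")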

comment by Leonhart · 2014-02-16T19:55:44.259Z · LW(p) · GW(p)

Brought to mind by the recent post about dreaming on Slate Star Codex:

Has anyone read a convincing refutation of the deflationary hypothesis about dreams - that is, that there aren't any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?

My brain is attributing this position to Dennett in one of his older collections - maybe Brainstorms - but it probably predates him.

Replies from: Yvain, gwern, PECOS-9, Alejandro1
comment by Scott Alexander (Yvain) · 2014-02-16T23:30:46.291Z · LW(p) · GW(p)

Stimuli can be incorporated into dreams - for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you're more likely to report having had a dream it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.

More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-02-19T08:21:47.275Z · LW(p) · GW(p)

More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.

Whoa, that's cool. Do you have a reference?

Replies from: chaosmage
comment by chaosmage · 2014-02-19T10:42:51.722Z · LW(p) · GW(p)

Here.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-02-19T12:32:23.267Z · LW(p) · GW(p)

Thanks!

comment by PECOS-9 · 2014-02-17T03:23:37.706Z · LW(p) · GW(p)

Would this be refuted by cases where lucid dreamers were able to communicate (one way) with researchers during their dreams through eye movements?

http://en.wikipedia.org/wiki/Lucid_dream#Perception_of_time

In 1985, Stephen LaBerge performed a pilot study which showed that time perception while counting during a lucid dream is about the same as during waking life. Lucid dreamers counted out ten seconds while dreaming, signaling the start and the end of the count with a pre-arranged eye signal measured with electrooculogram recording.[31]

comment by Alejandro1 · 2014-02-17T01:16:31.045Z · LW(p) · GW(p)

Indeed, there is an essay in Brainstorms articulating this position. IIRC Dennett does not explicitly commit to defending it; rather, he develops it to make the point that we do not have privileged, first-person knowledge about our experiences. There is conceivable third-person scientific evidence that might lead us to accept this theory (even if, going by Yvain's comment, this does not seem to actually be the case), and our first-person intuition does not trump it.

comment by palladias · 2014-02-11T20:22:14.066Z · LW(p) · GW(p)

I wrote a piece for work on quota systems and affirmative action in employment ("Fixing Our Model of Meritocracy"). It's politics-related, but I did get to cite a really fun natural experiment and talk about using quotas to counter the availability heuristic.

Replies from: fubarobfusco, JQuinton, mwengler
comment by fubarobfusco · 2014-02-12T02:57:11.350Z · LW(p) · GW(p)

This is a tangent, but since you mention the "good founders started [programming] at 13" meme, it's a little bit relevant ...

I find it deeply bizarre that there's this idea today among some programmers that if you didn't start programming in your early teens, you will never be good at programming. Why is this so bizarre? Because until very recently, there was no such thing as a programmer who started at a young age; and yet there were people who became good at programming.

Prior to the 1980s, most people who ended up as programmers didn't have access to a computer until university, often not until graduate school. Even for university students, relatively unfettered access to a computer was an unusual exception, found only in extremely hacker-friendly cultures such as MIT.

Put another way: Donald Knuth probably didn't use a computer until he was around 20. John McCarthy was born in 1927 and probably couldn't have come near a computer until he was a professor, in his mid-20s. (And of course Alan Turing, Jack Good, or John von Neumann couldn't have grown up with computers!)

(But all of them were mathematicians, and several of them physicists. Knuth, for one, was also a puzzle aficionado and a musician from his early years — two intellectual pursuits often believed to correlate with programming ability.)

In any event, it should be evident from the historical record that people who didn't see a computer until adulthood could still become extremely proficient programmers and computer scientists.

I've heard some people defend the "you can't be good unless you started early" meme by comparison with language acquisition. Humans generally can't gain native-level fluency in a language unless they are exposed to it as young children. But language acquisition is a very specific developmental process that has evolved over thousands of generations, and occurs in a developmentally-critical period of very early childhood. Programming hasn't been around that long, and there's no reason to believe that a critical developmental period in early adolescence could have come into existence in the last few human generations.

So as far as I can tell, we should really treat the idea that you have to start early to become a good programmer as a defensive and prejudicial myth, a bit of tribal lore arising in a recent (and powerful) subculture — which has the effect of excluding and driving off people who would be perfectly capable of learning to code, but who are not members of that subculture.

Replies from: Viliam_Bur, bogus, Douglas_Knight, mwengler, Scott Garrabrant
comment by Viliam_Bur · 2014-02-12T09:21:12.729Z · LW(p) · GW(p)

Seems to me that using computers since your childhood is not necessary, but there is something which is necessary, and which is likely to be expressed in childhood as an interest in computer programming. And, as you mentioned, in the absence of computers, this something is likely to be expressed as an interest in mathematics or physics.

So the correct model is not "early programming causes great programmers", but rather "X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers".

Starting early with programming is not strictly necessary... but these days, when computers are almost everywhere and relatively cheap, not expressing any interest in programming during one's childhood is evidence that this person is probably not meant to be a good programmer. (The only question is how strong this evidence is.)

Comparing with language acquisition is wrong... unless the comparison is true for mathematics. (Is there research on this?) Again, the model "you need programming acquisition as a child" would be wrong, but the model "you need math acquisition as a child, and without this you will not grok programming later" might be correct.

Replies from: Pfft
comment by Pfft · 2014-02-12T23:00:32.450Z · LW(p) · GW(p)

the correct model is not "early programming causes great programmers", but rather "X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers".

Yeah, I think this is explicitly the claim Paul Graham made, with X = "deep interest in technology".

The problem with that is I think, at least with technology companies, the people who are really good technology founders have a genuine deep interest in technology. In fact, I've heard startups say that they did not like to hire people who had only started programming when they became CS majors in college. If someone was going to be really good at programming they would have found it on their own. Then if you go look at the bios of successful founders this is invariably the case, they were all hacking on computers at age 13.

comment by bogus · 2014-02-12T12:14:07.623Z · LW(p) · GW(p)

This is a tangent, but since you mention the "good founders started [programming] at 13" meme, it's a little bit relevant ...

There is a rule of thumb that achieving exceptional mastery in any specific field requires 10,000 hours of practice. This seems to be true across fields: classical musicians, chess players, athletes, scholars/academics, etc. It's a lot easier to meet that standard if you start in childhood. Note that people who make this claim in the computing field are talking about hackers, not professional programmers in a general sense. It's very possible to become a productive programmer at any age.

comment by Douglas_Knight · 2014-02-12T17:07:14.813Z · LW(p) · GW(p)

Humans generally can't gain native-level fluency in a language unless they are exposed to it as young children.

The only aspect of language with a critical period is accent. Adults commonly achieve fluency. In fact, adults learn a second language faster than children.

Replies from: Creutzer
comment by Creutzer · 2014-02-12T17:20:01.395Z · LW(p) · GW(p)

As far as I know, the degree to which second-language speakers can acquire native-like competence in domains other than phonetics is somewhat debated. Anecdotally, it's a rare person who manages to never make a syntactic error that a native speaker wouldn't make, and there are some aspects of language (I'm told that subjunctive in French and aspect in Slavic languages may be examples) that may be impossible to fully acquire for non-native speakers.

So I wouldn't accept this theoretical assertion without further evidence; and for all practical purposes, the claim that you have to learn a language as a child in order to become perfect (in the sense of native-like) with it is true.

Replies from: Emile, Lumifer, Douglas_Knight
comment by Emile · 2014-02-12T22:13:54.711Z · LW(p) · GW(p)

Not my downvotes, but you're probably getting flak for just asserting stuff and then demanding evidence for the opposing side. A more mellow approach like "huh that's funny I've always heard the opposite" would be better received.

Replies from: Creutzer
comment by Creutzer · 2014-02-12T23:23:21.282Z · LW(p) · GW(p)

Indeed, I probably expressed myself quite badly, because I don't think what I meant to say is that outrageous: I heard the opposite, and anecdotally, it seems right - so I would have liked to see the (non-anecdotal) evidence against it. Perhaps I phrased it a bit harshly because what I was responding to was also just an unsubstantiated assertion (or, alternatively, a non-sequitur in that it dropped the "native-like" before fluency).

Replies from: Creutzer
comment by Creutzer · 2014-02-12T23:31:43.819Z · LW(p) · GW(p)

[error]

comment by Lumifer · 2014-02-12T17:26:33.301Z · LW(p) · GW(p)

As far as I know, the degree to which second-language speakers can acquire native-like competence in domains other than phonetics is somewhat debated.

Links? As far as I know it's not debated.

there are some aspects of language (I'm told that subjunctive in French and aspect in Slavic languages may be examples) that may be impossible to fully acquire for non-native speakers.

That's, ahem, bullshit. Why in the world would some features of syntax be "impossible to fully acquire"?

for all practical purposes, the claim that you have to learn a language as a child in order to become perfect (in the sense of native-like) with it is true.

For all practical purposes it is NOT true.

Replies from: Creutzer
comment by Creutzer · 2014-02-12T23:21:24.437Z · LW(p) · GW(p)

You may easily know more about this issue than me, because I haven't actually researched this.

That said, let's be more precise. If we're talking about mere fluency, there is, of course, no question.

But if we're talking about actually native-equivalent competence and performance, I have severe doubts that this is even regularly achieved. How many L2 speakers of English do you know who never, ever pick an unnatural choice from among the myriad of different ways in which the future can be expressed in English? This is something that is completely effortless for native speakers, but very hard for L2 speakers.

The people I know who are candidates for that level of proficiency in an L2 are at the upper end of the intelligence spectrum, and I also know a non-dumb person who has lived in a German-speaking country for decades and still uses wrong plural formations. Hell, there's people who are employed and teach at MIT and so are presumably non-dumb who say things like "how it sounds like".

The two things I mentioned are semantic/pragmatic, not syntactic. I know there is a study that shows L2 learners don't have much of a problem with the morphosyntax of Russian aspect, and that doesn't surprise me very much. I don't know and didn't find any work that tried to test native-like performance on the semantic and pragmatic level.

I'm not sure how to answer the "why" question. Why should there be a critical period for anything? ... Intuitively, I find that semantics/pragmatics, having to do with categorisation, is a better candidate for something critical-period-like than pure (morpho)syntax. I'm not even sure you need critical periods for everything, anyway. If A learns to play the piano starting at age 5 and B starts at age 35, I wouldn't be surprised if A is not only on average, but almost always, better at age 25 than B is at 55. Unfortunately, that's basically impossible to study while controlling for all confounders like general intelligence, quality of instruction, and number of hours spent on practice. (The piano example would be analogous more to the performance than the competence aspect of language, I suppose.)

There is a study about Russian dative subjects that suggests even highly advanced L2 speakers with lots of exposure don't get things quite right. Admittedly, you can still complain that they don't separate the people who have lived in a Russian-speaking country for only a couple of months from those who have lived there for a decade.

The thing about the subjunctive is, at best, wrong, but certainly not bullshit. The fact that it was told to me by a very intelligent French linguist about a friend of his whose L2-French is flawless except for occasional errors in that domain is better evidence for that being a very hard thing to acquire than your "bullshit" is against that.

Replies from: Lumifer, Pfft, Viliam_Bur
comment by Lumifer · 2014-02-13T01:14:51.179Z · LW(p) · GW(p)

How many L2 speakers of English do you know who never, ever pick an unnatural choice from among the myriad of different ways in which the future can be expressed in English?

You are committing the nirvana fallacy. How many native speakers of English never make mistakes or never "pick an unnatural choice"?

For example, I know a woman who immigrated to the US as an adult and is fully bilingual. As an objective measure, I think she had the perfect score on the verbal section of the LSAT. She speaks better English than most "natives". She is not unusual.

The fact that it was told to me by a very intelligent French linguist about a friend of his whose L2-French is flawless except for occasional errors in that domain

Tell your French linguist to go into the countryside and listen to the French of the uneducated native speakers. Do they make mistakes?

Replies from: Creutzer
comment by Creutzer · 2014-02-13T01:31:05.844Z · LW(p) · GW(p)

How many native speakers of English never make mistakes or never "pick an unnatural choice"?

I'm not talking about performance errors in general. I'm talking about the fact that it is extremely hard to acquire native-like competence wrt the semantics and pragmatics of the ways in which English allows one to express something about the future.

She speaks better English than most "natives".

Your utterance of this sentence severely damages your credibility with respect to any linguistic issue. The proper way to say this is: she speaks higher-status English than most native speakers. Besides, the fact that she gets perfect scores on some test (whose content and format is unknown to me), which presumably native speakers don't, suggests that she is far from an average individual anyway.

Also, that you're not bringing up a single relevant study that compares long-time L2 speakers with native speakers on some interesting, intricate and subtle issue where a competence difference might be suspected leaves me with a very low expectation of the fruitfulness of this discussion, so maybe we should just leave it at that. I'm not even sure to what extent we aren't simply talking past each other because we have different ideas about what native-like performance means.

Tell your French linguist to go into countryside and listen to the French of the uneducated native speakers. Do they make syntax errors?

They don't, by definition; not the way you probably mean it. I wouldn't know why the rate of performance errors should correlate in any way with education (controlling for intelligence). I also trust the man's judgment enough to assume that he was talking about a sort of error that stuck out because a native speaker wouldn't make it.

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2014-02-13T01:45:26.339Z · LW(p) · GW(p)

I'm talking about the fact that it is extremely hard to acquire native-like competence wrt the semantics and pragmatics of the ways in which English allows one to express something about the future.

I don't think so. This looks like an empirical question -- what do you mean by "extremely hard"? Any evidence?

Your utterance of this sentence severely damages your credibility with respect to any linguistic issue. The proper way to say this is: she speaks higher-status English than most native speakers.

No, I still don't think so -- for either of your claims. Leaving aside my credibility, non-black English in the United States (as opposed to the UK) has few ways to show status and they tend to be regional, anyway. She speaks better English (with some accent, to be sure) in the usual sense -- she has a rich vocabulary and doesn't make many mistakes.

she is far from an average individual anyway.

While that is true, your claims weren't about averages. Your claims were about impossibility -- for anyone. An average person isn't successful at anything, including second languages.

Replies from: Creutzer
comment by Creutzer · 2014-02-13T21:05:19.019Z · LW(p) · GW(p)

I don't think so. This looks like an empirical question -- what do you mean by "extremely hard"? Any evidence?

I don't know if anybody has ever studied this (I would be surprised if they had), so I have only anecdotal evidence from the uncertainty I myself experience sometimes when choosing between "will", "going to", plain present, "will + progressive", and present progressive, and from the testimony of other highly advanced L2 speakers I've talked to who feel the same way - while native speakers are usually not even aware that there is an issue here.

She speaks better English (with some accent, to be sure) in the usual sense -- she has a rich vocabulary and doesn't make many mistakes.

How exactly is "rich vocabulary" not high-status? (Also, are you sure it actually contains more non-technical lexemes and not just higher-status lexemes?) I'm not exactly sure what you mean by "mistakes". Things that are ungrammatical in your idiolect of English?

While that is true, your claims weren't about averages. Your claims were about impossibility -- for anyone. An average person isn't successful at anything, including second languages.

I actually made two claims. The one was that it's not entirely clear that there aren't any such in-principle impossibilities, though I admit that the case for them isn't very strong. I will be very happy if you give me a reference surveying some research on this and saying that the empirical side is really settled and the linguists who still go on telling their students that it isn't are just not up-to-date.

The second is that in any case, only the most exceptional L2 learners can in practice expect to ever achieve native-like fluency.

Replies from: Lumifer
comment by Lumifer · 2014-02-14T16:28:12.497Z · LW(p) · GW(p)

the uncertainty ... while native speakers are usually not even aware that there is an issue here.

It seems you are talking about being self-conscious, not about language fluency.

The one was that it's not entirely clear that there aren't any such in-principle impossibilities

Why in the world would there be "in-principle impossibilities" -- where does this idea even come from? What possible mechanism do you have in mind?

only the most exceptional L2 learners can in practice expect to ever achieve native-like fluency.

Well, let's get specific. Which test do you assert native speakers will pass and ESL people will not (except for the "most exceptional")?

Replies from: Creutzer
comment by Creutzer · 2014-02-16T17:28:15.690Z · LW(p) · GW(p)

It seems you are talking about being self-conscious, not about language fluency.

I didn't say it was about fluency. But I don't think it's about self-consciousness, either. Native speakers of a language pick the appropriate tense and aspect forms of verbs perfectly effortlessly - or how often do you hear a native speaker of English use a progressive in a case where it strikes you as inappropriate and you would say that they should really have used a plain tense here, for example?* - while for L2 speakers, it is generally pretty hard to grasp all the details of a language's tense/aspect system.

*I'm choosing the progressive as an example because it's easiest to describe, not because I think it's a candidate for serious unacquirability. It's known to be quite hard for native speakers of a language that has no aspect, but it's certainly possible to get to a point where you don't use the progressive wrongly essentially ever.

What possible mechanism do you have in mind?

For syntax, you would really need to be a strong Chomskian to expect any such things. For semantics, it seems to be a bit more plausible a priori: maybe as an adult, you have a hard time learning new ways of carving up the world?

Well, let's get specific. Which test do you assert native speakers will pass and ESL people will not (except for the "most exceptional")?

I don't know of a pass/fail format test, but I expect reading speed and the speed of their speech to be lower in L2 speakers than in L1 speakers of comparable intelligence. I would also expect that if you measure cognitive load somehow, language processing in an L2 requires more of your capacity than processing your L1. I would also expect that the active vocabulary of L1 speakers is generally larger than that of an L2 speaker even if all the words in the L1 speaker's active lexicon are in the L2 speaker's passive vocabulary.

comment by NancyLebovitz · 2014-02-13T08:49:46.299Z · LW(p) · GW(p)

The proper way to say this is: she speaks higher-status English than most native speakers.

I wonder if there's an implication that colloquial language is more complex than high status language.

Replies from: arundelo, tut
comment by arundelo · 2014-02-13T14:22:42.718Z · LW(p) · GW(p)

The things being measured are different. To a first approximation, all native speakers do maximally well at sounding like a native speaker.

Lumifer's friend may indeed speak like a native speaker (though it's rare for people who learned as adults to do so), but she cannot be better at it than "most 'natives'".

What she can be better at than most natives is the sort of thing Lumifer describes: a rich vocabulary, few mistakes, a high-status variety of the language.

It is possible, though, for a lower-status dialect to be more complex than a higher-status one. Example: the Black English verb system.

comment by tut · 2014-02-13T10:40:49.098Z · LW(p) · GW(p)

Or maybe it means that high-status and low-status English have different difficulties, and native speakers tend to learn the one that their parents use (finding others harder), while L2 speakers learn to speak from a description of English which is actually a description of a particular high-status accent (usually either Oxford or New England, I think).

Replies from: taelor
comment by taelor · 2014-02-13T19:34:05.853Z · LW(p) · GW(p)

The "Standard American Accent" spoken in the media and generally taught to foriegners is the confusingly named "Midwestern" Accent, which due to internal migration and a subsequent vowel shift, is now mostly spoken in California and the Pacific Northwest.

Interestingly enough, my old Japanese instructor was a native Osakan, who's natural dialect was Kansai-ben; despite this, she conducted the class using the standard, Tokyo Dialect.

comment by Pfft · 2014-02-13T18:46:39.677Z · LW(p) · GW(p)

If A learns to play the piano starting at age 5 and B starts at age 35, I wouldn't be surprised if A is not only on average, but almost always, better at age 25 than B is at 55. Unfortunately, that's basically impossible to study while controlling for all confounders like general intelligence, quality of instruction, and number of hours spent on practice.

If all you are saying is that people who start learning a language at age 2 are almost always better at it than people who start learning the same language at age 20, I don't think anyone would disagree. The whole discussion is about controlling for confounders...

Replies from: Creutzer
comment by Creutzer · 2014-02-13T20:39:36.627Z · LW(p) · GW(p)

Yes and no - the whole discussion is actually two discussions, I think.

One is about in-principle possibility, the presence of something like a critical period, etc. There it is crucial to control for confounders.

The second discussion is about in-practice possibility, whether people starting later can reasonably expect to get to the same level of proficiency. Here the "confounders" are actually part of what this is about.

comment by Viliam_Bur · 2014-02-13T11:34:31.387Z · LW(p) · GW(p)

There is a study about Russian dative subjects that suggests even highly advanced L2 speakers with lots of exposure don't get things quite right.

Bonus points for giving a specific example, which helped me to understand your point, and at this moment I fully agree with you. Because I understand the example; my own language has something similar, and I wouldn't expect a foreigner to use this correctly. The reason is that it would be too much work to learn properly, for too little benefit. It's a different way to say things, and you only achieve a small difference in meaning. And even if you asked a non-linguist native, they would probably find it difficult to explain the difference properly. So you have little chance to learn it right, and also little motivation to do so.

Here is my attempt to explain the examples from the link, pages 3 and 4. (I am not a Russian language speaker, but my native language is also Slavic, and I learned Russian. If I got something wrong, please correct me.)

"ya uslyshala ..." = "I heard ..."
"mne poslyshalis ..." = "to-me happened-to-be-heard ..."

"ya xotel ..." = "I wanted ..."
"mne xotelos ..." = "to-me happened-to-want ..."

That's pretty much the same meaning, it's just that the first variant is "more agenty", and the second variant is "less agenty", to use the LW lingo. But that's kinda difficult to explain explicitly, because... you know, how exactly can "hearing" (not active listening, just hearing) be "agenty"; and how exactly can "wanting" be "non-agenty"? It doesn't seem to make much sense, until you think about it, right? (The "non-agenty wanting" is something like: my emotions made me want. So I admit that I wanted, but at the same time I deny full responsibility for my wanting.)

As a foreigner, what is the chance that (1) you will hear it explained in a way that will make sense to you, (2) you will remember it correctly, and (3) when the opportunity comes, you will remember to use it? Pretty much zero, I guess. Unless you decide to put extra effort into this aspect of the language specifically. But considering the costs and benefits, you are extremely unlikely to do it, unless being a professional translator to Russian is extremely important for you. (Or unless you speak a Slavic language that has a similar concept, so the costs are lower for you, but even then you need a motivation to be very good at Russian.)

Now when you think about contexts, these kinds of words are likely to be used in stories, but don't appear in technical literature or official documents, etc. So if you are a Russian child, you have heard them a lot. If you are a Russian-speaking foreigner working in Russia, there is a chance you will literally never hear them at the workplace.

Replies from: Douglas_Knight, IlyaShpitser
comment by Douglas_Knight · 2014-02-13T17:53:09.398Z · LW(p) · GW(p)

The paper doesn't even find a statistically significant difference. The point estimate is that advanced L2 speakers do worse than natives, but natives make almost as many mistakes.

Replies from: Creutzer
comment by Creutzer · 2014-02-13T20:49:10.300Z · LW(p) · GW(p)

They did find differences with the advanced L2 speakers, but I guess we care about the highly advanced ones. They point out a difference at the bottom of page 18, though admittedly, it doesn't seem to be that big of a deal and I don't know enough about statistics to tell whether it's very meaningful.

comment by IlyaShpitser · 2014-02-13T12:23:27.044Z · LW(p) · GW(p)

'mne poslyshalos' I think. This one has connotations of 'hearing things,' though.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-13T15:26:56.049Z · LW(p) · GW(p)

Note: "Mne poslyshalis’ shagi na krishe." was the original example; I just removed the unchanging parts of the sentences.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-02-13T15:33:29.015Z · LW(p) · GW(p)

Ah I see, yes you are right. That is the correct plural in this case. Sorry about that! 'Mne poslyshalos chtoto' ("something made itself heard by me") would be the singular, vs the plural above ("the steps on the roof made themselves heard by me."). Or at least I think it would be -- I might be losing my ear for Russian.

comment by Douglas_Knight · 2014-02-12T19:54:07.119Z · LW(p) · GW(p)

What do you mean by "theoretical"? Is this just an insult you fling at people you disagree with?

Replies from: Creutzer
comment by Creutzer · 2014-02-12T23:17:09.848Z · LW(p) · GW(p)

Huh? What a curious misunderstanding! The "theoretical" referred just to the - theoretical! - question of whether it's in principle possible to acquire native-like proficiency, which was contrasted with my claim that even if it is, most people cannot expect to reach that state in practice.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-02-13T01:47:18.749Z · LW(p) · GW(p)

I thought that my choice of the word "commonly" indicated that I was not talking about the limits of the possible.

Replies from: Creutzer
comment by Creutzer · 2014-02-13T03:11:28.901Z · LW(p) · GW(p)

You really think it's common for L2 speakers to achieve native-like levels of proficiency? Where do you live and who are these geniuses? I'm serious. For example, I see people speaking at conferences who have lived in the US for years, but aren't native speakers, and they are still not doing so with native-like fluency and eloquence. And presumably you have to be more than averagely intelligent to give a talk at a scientific conference...

I'm not talking about just any kind of fluency here, and neither was fubarobfusco, I assume. I suspect I was trying to interpret your utterance in a way that I didn't assign very low probability to (i.e. not as claiming that it's common for people to become native-like) and that also wasn't a non-sequitur wrt the claim you were referring to (by reducing native-like fluency to some weaker notion) and kind of failed.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-02-13T04:57:58.898Z · LW(p) · GW(p)

Maybe I should have said "routinely" rather than "commonly." But the key differentiator is effort.

I don't care about your theoretical question of whether you can come up with a test that L2 speakers fail. I assume that fubarobfusco meant the same thing I meant. I'm done.

comment by mwengler · 2014-02-12T16:59:30.213Z · LW(p) · GW(p)

I find it deeply bizarre that there's this idea today among some programmers that if you didn't start programming in your early teens, you will never be good at programming.

Suppose you replaced it with the idea that people who started programming when they were 13 have a much easier time becoming good programmers as adults, and so are overrepresented among programmers at every level. Does that still sound bizarre?

comment by Scott Garrabrant · 2014-02-12T07:31:01.216Z · LW(p) · GW(p)

Donald Knuth was probably doing real math in his early teens. Maybe this counts.

comment by JQuinton · 2014-02-11T21:26:12.326Z · LW(p) · GW(p)

The same tortured analysis plays out in the business world, where Paul Graham, the head of YCombinator, a startup incubator, explained that one reason his company funds fewer women-led companies is because fewer of them fit this profile of a successful founder:

If someone was going to be really good at programming they would have found it on their own. Then if you go look at the bios of successful founders this is invariably the case, they were all hacking on computers at age 13.

The trouble is, successful founders don't run through a pure meritocracy, either. They're supported, mentored, and funded when they're chosen by venture capitalists like Graham. And if everyone is working on the same model of "good founders started at 13," then a lot of clever ideas, created by people of either gender, might get left on the table.

A similar argument was presented in an article at Slate: Affirmative action doesn’t work. It never did. It’s time for a new solution.:

But even if the government were keeping better tabs on affirmative action, the bigger problem is that its jurisdiction doesn’t reach the parts of the economy where affirmative action is most desperately needed: the places where real money is made and real power is allocated. The best example of this is the industry that dominates so much of our economy today: the technology sector. Silicon Valley’s racial diversity is pretty terrible, the kind of gross imbalance that inspires special reports on CNN.

It’s a dismal state of affairs, but how could it really be otherwise? Silicon Valley isn’t just an industry; it’s a social and cultural ecosystem that grew out of a very specific social and cultural setting: mostly West Coast, upper-middle-class white guys who liked to tinker with motherboards and microchips. If you were around that culture, you became a part of it. If you weren’t, you didn’t. And because of the social segregation that pervades our society, very few black people were around to be a part of it.

Some would purport to remedy this by fixing the tech industry job pipeline: more STEM graduates, more minority internships and boot camps, etc. And that will get you changes here and there, at the margins, but it doesn’t get at the real problem. The big success stories of the Internet age—Instagram, YouTube, Twitter—all came about in similar ways: A couple of people had an idea, they got together with some of their friends, built something, called some other friends who knew some other friends who had access to friends with serious money, and then the thing took off and now we’re all using it and they’re all millionaires. The process is organic, somewhat accidental, and it moves really, really fast. And by the time those companies are big enough to worry about their “diversity,” the ground-floor opportunities have already been spoken for

comment by mwengler · 2014-02-12T16:57:48.053Z · LW(p) · GW(p)

I almost upvoted your post upon realizing you are a woman and thinking I'd like more women on LW. Then I realized how ironic that was. Then I did it anyway, likely influenced by the pretty photo on your article (not caring whether it was stock or you).

Fixing our meritocracy presumes we have a meritocracy to fix. Certainly a democracy is not a meritocracy, unless your definition of merit is EXTREMELY flexible to the point of defining merit as "getting elected." Certainly the Athenian and 19th century American democracies which supported human slavery were not meritocracies, unless again your definition of merit is flexible enough to include being white and/or patrician.

If there is a lesson from the study you cite, it would seem to be that one should push for quotas at the governmental level. It is said by many, though I don't know the evidence, that an advantage Europe and the US have over Islamic societies is that we are much better about monetizing the talents of women, and so we have up to 200% more effective productivity available per capita. Your Indian lesson shows an example where rather extreme and anti-democratic quotas appeared to shift the preferences of the broad population toward including people more broadly in what they see as the talent pool.

Is it likely that quotas in the US have worked negatively rather than positively? Looked at myopically one might make the case. But pre-quota US was a MUCH LESS integrated society. I grew up in a middle class suburb in Long Island (Farmingdale), hardly a bastion of white privilege. In the 1970s, a black family bought a house and had stuff thrown through their windows and a range of other harassments perpetrated upon them by anonymous, but I'm willing to bet white, perpetrators. Now we have interracial couples all over the southern US, and a tremendous reduction in racist feeling in people younger than myself. Correlation is not causation, but it ain't exactly an argument against causation either.

comment by Lumifer · 2014-02-14T17:17:27.608Z · LW(p) · GW(p)

An interesting quote, I wonder what people here will make of it...

True rationalists are as rare in life as actual deconstructionists are in university English departments, or true bisexuals in gay bars. In a lifetime spent in hotbeds of secularism, I have known perhaps two thoroughgoing rationalists—people who actually tried to eliminate intuition and navigate life by reasoning about it—and countless humanists, in Comte’s sense, people who don’t go in for God but are enthusiasts for transcendent meaning, for sacred pantheons and private chapels. They have some syncretic mixture of rituals: they polish menorahs or decorate Christmas trees, meditate upon the great beyond, say a silent prayer, light candles to the darkness.

source

Replies from: Oscar_Cunningham, Vulture
comment by Oscar_Cunningham · 2014-02-14T20:18:30.441Z · LW(p) · GW(p)

I can't tell if the author means "rationalists" in the technical sense (i.e. as opposed to empiricists) but if he doesn't then I think it's unfair of him to require that rationalists "eliminate intuition and navigate life by reasoning about it", since this is so clearly irrational (because intuition is so indispensably powerful).

comment by Vulture · 2014-02-14T19:16:45.709Z · LW(p) · GW(p)

I loved this quote. I think it's a characterization of UU-style humanism that is fair but that they would probably agree with.

comment by rxs · 2014-02-16T16:25:05.295Z · LW(p) · GW(p)

Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.

My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and I reach 450-500 WPM with RSVP.

My aim this year is to get myself to a 500+ WPM baseline (i.e. usable also for leisure reading and without RSVP). Is this even possible? Claims seem to be contradictory.

Does anybody have recommendations on systems that actually work? Most I've seen look like overblown claims designed to pump money from desperate managers... I'm willing to put money into it if it can actually deliver.

Thank you very much.

Replies from: drethelin
comment by drethelin · 2014-02-16T20:41:21.788Z · LW(p) · GW(p)

I read around 600 WPM without ever taking speed-reading lessons, so with training it should be very possible.

comment by ChrisHallquist · 2014-02-16T04:16:31.626Z · LW(p) · GW(p)

Something I recently noticed: steelmanning is popular on LessWrong. But the sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything, is the difference between the two?

Replies from: Vladimir_Nesov, Jayson_Virissimo
comment by Vladimir_Nesov · 2014-02-16T11:23:56.856Z · LW(p) · GW(p)

Steelmanning is about fixing errors in an argument (or otherwise improving it), while retaining (some of) the argument's assumptions. As a result, the argument becomes better, even if you disagree with some of the assumptions. The conclusion of the argument may change as a result, what's fixed about the conclusion is only the question that it needs to clarify. Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.

So the difference is in the direction of reasoning and intent regarding epistemic hygiene. Steelmanning starts from (somewhat) fixed assumptions and looks for more robust arguments following from them that would address a given question (careful hypothetical reasoning), while devil's advocacy starts from a fixed conclusion (not just a fixed question that the conclusion would judge) and looks for convincing arguments leading to it (rationalization with allowed use of dark arts).

A bad aspect of a steelmanned argument is that it can be useless: if you don't accept the assumptions, there is often little point in investigating their implications. A bad aspect of a devil's advocate's argument is that it may be misleading, acting as filtered evidence for the chosen conclusion. In this sense, devil's advocates exercise the skill of coming up with misleading arguments, which might be bad for their ability to reason carefully in other situations.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-02-16T20:49:41.455Z · LW(p) · GW(p)

Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.

But what if you steelman devil's advocacy to exclude fallacious but convincing arguments?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-02-16T21:02:27.946Z · LW(p) · GW(p)

Then the main problem is that it produces (and exercises the skill of producing) arguments that are filtered evidence in the direction of the predefined conclusion, instead of well-calibrated consideration of the question on which the conclusion is one position.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-02-16T23:18:10.248Z · LW(p) · GW(p)

So I'm still not sure what the difference with steelmanning is supposed to be, unless it's that with steelmanning you limit yourself to fixing flaws in your opponents' arguments that can be fixed without essentially changing their arguments, as opposed to just trying to find the best arguments you can for their conclusion (the latter being a way of filtering evidence?).

That would seem to imply that steelmanning isn't a universal duty. If you think an argument can't be fixed without essentially changing it, you'll just be forced to say it can't be steelmanned.

comment by Jayson_Virissimo · 2014-02-16T05:09:24.485Z · LW(p) · GW(p)

As far as I can tell...nothing. Most likely, there are simply many LessWrongers (like me) that disagree with E.Y. on this point.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-02-16T17:57:14.797Z · LW(p) · GW(p)

What leads you to believe that you disagree with Eliezer on this point? I suspect that you are just going by the title. I just read the essay and he endorses lots of practices that others call Devil's Advocacy. I'm really not sure what practice he is condemning. If you can identify a specific practice that you disagree with him about, could you describe it in your own words?

comment by Vaniver · 2014-02-14T22:26:29.067Z · LW(p) · GW(p)

An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see modern results have a long history of working.

comment by pewpewlasergun · 2014-02-12T02:10:36.411Z · LW(p) · GW(p)

Does anyone have advice for getting an entry-level software-development job? I'm finding that a lot of them seem to want several years of experience or a degree, while I'm self-taught.

Replies from: ChrisHallquist, fezziwig, jkaufman, maia
comment by ChrisHallquist · 2014-02-12T02:28:10.706Z · LW(p) · GW(p)

Ignore what they say on the job posting, apply anyway with a resume that links to your Github, websites you've built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they'll interview you anyway in spite of not meeting the qualifications on the job listing.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-12T09:35:12.619Z · LW(p) · GW(p)

links to your Github, websites you've built, etc.

This is just a guess, but I think it might be helpful to include some screenshots (in color) of the programs, websites, etc. That would make them "more real" to the person who reads this; at the least, it saves them some inconvenience. Of course, I assume that the programs and websites have a nice user interface.

It's also an opportunity for an interesting experiment: randomly send 10 resumes without the screenshots, and 10 resumes with screenshots. Measure how many interview invitations you get from each group.
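If anyone actually runs this, here is a rough sketch of how you might check whether the difference means anything. Fisher's exact test is a reasonable choice for counts this small; the numbers below are purely hypothetical, and with only 10 resumes per group you should expect the result to be noisy.

```python
# Minimal sketch: compare interview-invitation rates between the two resume
# variants with Fisher's exact test. The counts are made-up placeholders.
from scipy.stats import fisher_exact

invited_with, not_invited_with = 4, 6        # 10 resumes with screenshots
invited_without, not_invited_without = 1, 9  # 10 resumes without screenshots

table = [[invited_with, not_invited_with],
         [invited_without, not_invited_without]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3f}")
```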

If you have a certificate from Udacity or another online university, mention that, too. Don't list it as formal education, but somewhere in the "other courses and certificates" category.

Replies from: ChrisHallquist, ChristianKl
comment by ChrisHallquist · 2014-02-12T16:54:24.412Z · LW(p) · GW(p)

I think ideally, you want your code running on a website where they can interact with it, but maybe a screenshot would help entice them to go to the website. Or help if you can't get the code on a website for some reason.

comment by ChristianKl · 2014-02-14T12:14:04.266Z · LW(p) · GW(p)

This is just a guess, but I think it might be helpful to include some screenshots (in color) of the programs, websites, etc.

You want to signal a hacker mindset. Instead of focusing on including screenshots, it might be more effective to write your resume in LaTeX.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-14T19:31:24.085Z · LW(p) · GW(p)

It depends on your model of who will be reading your resume.

I realized that my implicit model is some half-IT-literate HR person or manager. Someone who doesn't know what LaTeX is, and who couldn't download and compile your project from Github. But they may look at a nice printed paper and say: "oh, shiny!" and choose you instead of some other candidate.

comment by fezziwig · 2014-02-12T18:58:44.044Z · LW(p) · GW(p)
  1. Live in a place with lots of demand. Silicon Valley and Boston are both good choices; there may be others but I'm less familiar with them.
  2. Have a github account. Fill it with stuff.
  3. Have a personal site. Fill it with stuff.
  4. Don't worry about the degree requirements; everybody means "Bachelor's of CS or equivalent".
  5. Don't worry about experience requirements. Unlike the degree requirement this does sometimes matter, but you won't be able to tell by reading the advert so just go ahead and apply.
  6. Prefer smaller companies. The bigger the company, the more likely it is that your resume will be screened out by some automated process before it can reach someone like me. I read people's GitHubs; HR necessarily does not.
Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-02-12T22:29:01.340Z · LW(p) · GW(p)

Live in a place with lots of demand.

Alternatively, be willing to move.

comment by jefftk (jkaufman) · 2014-02-12T22:28:09.644Z · LW(p) · GW(p)

Practicing whiteboard-style interview coding problems is very helpful. The best places to work will all make you code in the interview [1], so you want to feel at ease in that environment. If you want to do a practice interview I'd be up for doing that and giving you an honest evaluation of whether I'd hire you if I were hiring.

[1] Be very cautious about somewhere that doesn't make you code in the interview: you might end up working with a lot of people who can't really code.
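For what it's worth, the problems are usually short, self-contained exercises you can do comfortably on a whiteboard. A typical warm-up (a generic example, not taken from any particular company) looks something like this:

```python
# Classic whiteboard warm-up: reverse a singly linked list in place.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Return the new head of the reversed list. O(n) time, O(1) extra space."""
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev

# Quick check: 1 -> 2 -> 3 becomes 3 -> 2 -> 1
node = reverse(Node(1, Node(2, Node(3))))
while node:
    print(node.value, end=" ")
    node = node.next
```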

comment by maia · 2014-02-12T20:16:25.245Z · LW(p) · GW(p)

If you have the skills to do software interviews well, the hardest part will be getting past resume screening. If you can, try to use personal connections to bypass that step and get interviews. Then your skills will speak for themselves.

comment by palladias · 2014-02-13T16:06:12.220Z · LW(p) · GW(p)

I got to design my first infographic for work and I'd really appreciate feedback (it's here: "Did We Mess Up on Mammograms?").

I'm also curious about recommendations for tools. I used Easl.ly, which is a WYSIWYG editor, but it was annoying in that I couldn't just tell it I wanted an m×n block of people icons, evenly spaced; I had to do it by hand instead.

comment by Viliam_Bur · 2014-02-12T22:47:27.229Z · LW(p) · GW(p)

A TEDx video about teaching mathematics: "Mathematics as a source of joy". It's in Slovak; you have to select English subtitles. I had to share it, but I am afraid the video does not explain too much, and there is not much material in English to link to -- I only found two articles. So here is a bit more info:

The video is about an educational method of the Czech math teacher Vít Hejný; it is presented by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and high-school levels). He taught the method to some volunteers, who used it to teach children in the Czech Republic and Slovakia. The inventor of the method is now dead; he started writing a book but didn't finish it, and most of the volunteers are not working in education anymore. So I was afraid the art would be lost, which would be a pity. Luckily, his son finished the book, other people added their notes and experiences, and recently the method has become very popular among teachers; in the Czech Republic the government now officially supports this method (in 10% of schools). My experience with this method from my childhood (outside of the school system, in summer camps) is that it's absolutely great.

I am afraid that if I try to describe it, most of it will just sound like common sense. Examples from real life are used. Kids are encouraged to solve the problems for themselves. The teacher is just a coach or moderator; s/he helps kids discuss each other's solutions. Start with specific examples, and only later move to abstract generalizations of them. Let the children discover the solution; they will remember it better. In some situations specific tools are used (e.g. basic addition and subtraction are taught by walking along a number line on the floor; also see the pictures here). For motivation, the specific examples are described using stories or animals or something interesting (e.g. the derivative of a function is introduced using a caterpillar climbing over hills). There is a big emphasis on keeping a good mood in the classroom.

EDIT: Classroom videos (no subtitles, but some of them should be obvious): 1st grade, 2nd grade, 3rd grade, 4th grade.

Replies from: chaosmage
comment by chaosmage · 2014-02-19T11:54:04.706Z · LW(p) · GW(p)

This was fun. I like how he emphasizes that every kid can figure out all of math by herself, and that thinking citizens are what you need for a democracy rather than a totalitarian state - because the Czech Republic was a communist dictatorship only a generation ago, and many teachers were already teachers then.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-19T12:27:29.664Z · LW(p) · GW(p)

A cultural detail which may help to explain this attitude:

In communist countries a career in science, or in math or physics education, was a very popular choice for smart people. It was maybe the only place where you could use your mind freely, without being afraid of contradicting something the Party said (which could ruin your career and personal life).

So there are many people here who have both "mathematics" and "democracy" as applause lights. But I'd say that after the end of the communist regime the quality of math education actually decreased, because the best teachers suddenly had many new career paths available. (I was in a math-oriented high school when the regime ended, and most of the best teachers left the school within two years and started their own companies or non-governmental organizations, usually somehow related to education.) Even the mathematical curriculum of prof. Hejný was invented during communism... but only under democracy does his son have the freedom to actually publish it.

Replies from: chaosmage
comment by chaosmage · 2014-02-19T13:32:19.917Z · LW(p) · GW(p)

That's very true. Small addition: Many smart people went into medicine, too.

comment by ricketybridge · 2014-02-12T02:40:07.929Z · LW(p) · GW(p)

Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).

For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly negative, mired in what I think are irrational kneejerk reactions to defend "what it means to be human." So I'm just like, you know what? Fuck it. You can't even help yourselves help yourselves. Forget it.

Thoughts?

Replies from: None, Vladimir_Nesov, RomeoStevens, mwengler, Viliam_Bur, Locaha, Slackson, None, DanielLC, ChristianKl, knb, skeptical_lurker, D_Malik
comment by [deleted] · 2014-02-12T06:39:38.452Z · LW(p) · GW(p)

You know how when you see a kid about to fall off a cliff, you shrug and don't do anything because the standards of discourse aren't as high as they could be?

Me neither.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T07:08:33.202Z · LW(p) · GW(p)

lol yeah, I know what you're talking about.

Okay okay, fine. ;-)

comment by Vladimir_Nesov · 2014-02-12T03:30:41.208Z · LW(p) · GW(p)

A task with a better expected outcome is still better (in expected outcome), even if it's hopeless, silly, not as funny as some of the failure modes, not your responsibility or in some way emotionally less comfortable.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T07:31:58.968Z · LW(p) · GW(p)

You're of course correct. I'm tempted to question the use of "better" (i.e. it's a matter of values and opinion as to whether it's "better" if humanity wipes itself out or not), but I think it's pretty fair to assume (as I believe utilitarians do) that less suffering is better, and theoretically less suffering would result from better decision-making and possibly from less climate change.

Thanks for this.

comment by RomeoStevens · 2014-02-12T08:22:56.520Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Identifiable_victim_effect

Also, would you still want to save a drowning dog even if it might bite you out of fear and misunderstanding? (let's say it is a small dog and a bite would not be drastically injurious)

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T08:32:13.891Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Identifiable_victim_effect

True, true. But it's still hard for me (and most people?) to circumvent that effect, even while I'm aware of it. I know Mother Theresa actually had a technique for it (to just think of one child rather than the millions in need). I guess I can try that. Any other suggestions?

Also, would you still want to save a drowning dog even if it might bite you out of fear and misunderstanding? (let's say it is a small dog and a bite would not be drastically injurious)

I'll pretend it's a cat since I don't really like small dogs. ;-) Yes, of course I'd save it. I think this analogy will help me moving forward. Thank you! ^_^

Replies from: RomeoStevens
comment by RomeoStevens · 2014-02-12T10:11:16.934Z · LW(p) · GW(p)

No problem. I have an intuition that IMing might be more productive than structured posts if you're exploring this space and want to cover a bunch of ground quickly. Feel free to ping me on gtalk if you're interested. romeostevensit is my google.

comment by mwengler · 2014-02-12T16:28:22.137Z · LW(p) · GW(p)

I think it is amazingly myopic to look at the only species that has ever started a fire or crafted a wheel and conclude that

humans are so facepalmingly bad at making decisions

The idea that climate change is an existential risk seems wacky to me. It is not difficult to walk away from an ocean which is rising at even 1 m a year, and no one hypothesizes anything close to that rate. We are adapted to a broad range of climates and able to move north, south, east, and west as the winds might blow us.

As for running out of fossil fuels: thinking we are doing something wildly stupid with our use of fossil fuels seems to me about as sensible as thinking a centrally planned economy will work better. It is not intuitive that a centrally planned economy will be a piece of crap compared to what we have, but it turns out to be true. Thinking that you, or even a bunch of people like you with no track record doing ANYTHING, can second-guess the markets in fossil fuels seems intuitively right, but if you ever get around to testing your intuitions I don't think you'll find it holds up. And if you think even doubling the price of fossil fuels really changes the calculus much: Europe and Japan have lived that life for decades compared to the US, and yet the US is home to the wackiest and most ill-thought-out alternatives to fossil fuels in the world.

Can anybody explain to me why creating a wildly popular luxury car which effectively runs on burning coal is such a boon to the environment that it should be subsidized at $7500 by the US federal government and an additional $2500 by states such as California, which has been so close to bankruptcy recently? Well, that is what a Tesla is if you drive one in a country with coal on the grid, and most of Europe, China, and the US are in that category. The Tesla S Performance puts out the same amount of carbon as a car getting ~~14~~ 25 mpg of gasoline.

Replies from: roystgnr, drethelin, Izeinwinter
comment by roystgnr · 2014-02-12T17:22:59.404Z · LW(p) · GW(p)

The Tesla S Performance puts out the same amount of carbon as a car getting 14 mpg of gasoline.

The Tesla S takes about 38 kWh to go 100 miles, which works out to around 80 lb of CO2 generated. 14 mpg would be 7.1 gallons of gasoline to go 100 miles, which works out to around 140 lb of CO2 generated. I couldn't find any independent numbers for the S Performance, but Tesla's site claims the same range as the regular S with the same battery pack.

The rest of your point seems to hold, though; if the subsidy is predicated on reducing CO2 emissions then the equivalent of 25mpg still isn't anything to brag about.
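For anyone who wants to redo the arithmetic, here is one way to lay it out. The emission factors (roughly 2.1 lb of CO2 per kWh of coal-fired electricity and about 19.6 lb of CO2 per gallon of gasoline burned) are ballpark assumptions on my part, not figures taken from the sources above:

```python
# Rough check of the numbers above. Emission factors are approximate:
# ~2.1 lb CO2/kWh for coal-fired electricity, ~19.6 lb CO2/gallon of gasoline.
COAL_LB_PER_KWH = 2.1
GAS_LB_PER_GALLON = 19.6

tesla_lb_per_100mi = 38 * COAL_LB_PER_KWH                 # ~80 lb
car_lb_per_100mi = (100 / 14) * GAS_LB_PER_GALLON         # 14 mpg car, ~140 lb
tesla_mpg_equivalent = 100 / (tesla_lb_per_100mi / GAS_LB_PER_GALLON)  # ~25 mpg

print(round(tesla_lb_per_100mi), round(car_lb_per_100mi), round(tesla_mpg_equivalent))
```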

Replies from: Nornagest, mwengler
comment by Nornagest · 2014-02-12T18:02:40.801Z · LW(p) · GW(p)

works out to around 80 lb CO2 generated

This is likely an overestimation, since it assumes that you're exclusively burning coal. Electricity production in the US is about 68% fossil, the rest deriving from a mixture of nuclear and renewables; the fossil-fuel category also includes natural gas, which per your link generates about 55-60% the CO2 of coal per unit electricity. This varies quite a bit state to state, though, from almost exclusively fossil (West Virginia; Delaware; Utah) to almost exclusively nuclear (Vermont) or renewable (Washington; Idaho).

Based on the same figures and breaking it down by the national average of coal, natural gas, and nuclear and renewables, I'm getting a figure of 43 lb CO2 / 100 mi, or about 50 mpg equivalent. Since its subsidies came up, California burns almost no coal but gets a bit more than 60% of its energy from natural gas; its equivalent would be about 28 lb CO2.
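Here is a sketch of that grid-mix calculation; the generation shares and emission factors below are rough assumptions on my part (roughly the 2012 US mix), not the exact inputs used above:

```python
# Grid-mix sketch: assumed shares ~39% coal, ~29% natural gas, the rest
# nuclear/renewables treated as ~0 lb CO2/kWh. Factors are approximate.
COAL_LB_PER_KWH = 2.1
GAS_LB_PER_KWH = 1.2    # natural gas at ~55-60% of coal's CO2 per kWh

mix = {"coal": 0.39, "gas": 0.29, "nuclear_and_renewables": 0.32}
avg_lb_per_kwh = mix["coal"] * COAL_LB_PER_KWH + mix["gas"] * GAS_LB_PER_KWH

print(round(38 * avg_lb_per_kwh))   # ~44 lb CO2 per 100 miles, close to the 43 lb above
```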

Replies from: mwengler
comment by mwengler · 2014-02-12T19:20:02.976Z · LW(p) · GW(p)

works out to around 80 lb CO2 generated

This is likely an overestimation, since it assumes that you're exclusively burning coal.

Yes, but that should be the right comparison to make. Consider two alternatives: 1) the world generates N kWh + 38 kWh to fuel a Tesla to go 100 miles; 2) the world generates N kWh and puts 4 gallons of gasoline in a car to go 100 miles.

If we are interested in minimizing CO2 emissions, then in world 2 compared to world 1 we will generate 38 kWh fewer from our dirtiest plant on the grid, which is going to be a coal-fired plant.

So in world 1 we have an extra 80 lbs of CO2 emission from electric generation and 0 from gasoline. In world 2 we have 80 lbs less of CO2 emission from electric generation and add 80 lbs from gasoline.

When adding electric usage, you need to "bill" it at the marginal cost to generate that electricity, which is true both of the price you charge customers for it and of the CO2 emissions you attribute to it.

The US, China, and most of Europe have a lot of coal in the mix on the grid. Until they scrub coal or stop using it, it seems very clear that the Tesla puffs out the same amount of CO2 as a 25 mpg gasoline-powered car.

Replies from: Nornagest, Douglas_Knight
comment by Nornagest · 2014-02-12T19:40:25.030Z · LW(p) · GW(p)

It's true that most of the flexibility in our power system comes from dirty sources, and that squeezing a few extra kilowatt-hours in the short term generally means burning more coal. If we're talking policy changes aimed at popularizing electric cars, though, then we aren't talking a megawatt here or there; we've moved into the realm of adding capacity, and it's not at all obvious that new electrical capacity is going to come from dirty sources -- at least outside of somewhere like West Virginia. On those kinds of scales, I think it's fair to assume a mix similar to what we've currently got, outside of special cases like Germany phasing out its nuclear program.

(There are some caveats; renewables are growing strongly in the US, but nuclear isn't. But it works as a first approximation.)

Replies from: mwengler
comment by mwengler · 2014-02-12T20:53:53.536Z · LW(p) · GW(p)

The global installed capacity of coal-fired power generation is expected to increase from 1,673.1 GW in 2012 to 2,057.6 GW by 2019, according to a report from Transparency Market Research. Coal-fired electrical-generation plants are being started up in Europe, and comparatively clean gas-fired generating capacity is being shut down.

Coal electric generation isn't going away anytime soon. The only reason coal may look like it is declining in the US at the moment is that natural gas generation in the US is currently less expensive than coal. But in Europe, coal is less expensive and, remarkably, generating companies respond by turning up coal and turning down natural gas.

Replies from: Nornagest
comment by Nornagest · 2014-02-12T21:47:28.176Z · LW(p) · GW(p)

Doesn't need to be going away for my argument to hold, as long as the relative proportions are favorable -- and as far as I can tell, most of that GIC delta in coal is happening in the developing world, where I don't see too many people buying Teslas. Europe and the US project new capacity disproportionately in the form of renewables; coal is going up in Europe, but less quickly.

This isn't ideal; I'm generally long on wind and solar, but if I had my way we'd be building Gen IV nuclear reactors as fast as we could lay down concrete. But neither is it as grim as the picture you seem to be painting.

Replies from: mwengler
comment by mwengler · 2014-02-12T22:11:09.007Z · LW(p) · GW(p)

This isn't ideal; .... But neither is it as grim as the picture you seem to be painting.

I would agree with that. Certainly my initial picture was just wrong. Even using coal as the standard, the Tesla is as good as a 25 mpg gasoline car. For that size and quality of car, that is actually not bad, but it is best in class, not revolutionary.

As to subsidizing a Tesla as opposed to a 40 mpg diesel, for example, as long as we use coal for electricity, we are better off adding a 40 mpg diesel to the fleet than adding a Tesla. This is almost just me hating on subsidies, preferring that we just tax fuels proportional to their carbon content and let market forces decide how to distribute that distortion.

Replies from: Nornagest
comment by Nornagest · 2014-02-12T22:22:00.061Z · LW(p) · GW(p)

This is almost just me hating on subsidies, preferring that we just tax fuels proportional to their carbon content and let market forces decide how to distribute that distortion.

That probably is better baseline policy from a carbon minimization perspective, yeah; I have similar objections to the fleet mileage penalties imposed on automakers in the US, which ended up contributing among other things to a good chunk of the SUV boom in the '90s and '00s. Now, I can see an argument for subsidies or even direct grants if they help kickstart building EV infrastructure or enable game-changing research, but that should be narrowly targeted, not the basis of our entire approach.

Unfortunately, basic economic literacy is not exactly a hallmark of environmental policy.

comment by Douglas_Knight · 2014-02-13T18:18:40.217Z · LW(p) · GW(p)

When adding electric usage, you need to "bill" it at the marginal costs to generate that electricity

Yes, but marginal analysis requires identifying the correct margin. If you charge your car during the day at work, you are increasing peak load, which is often coal. If you charge your car at night, you are contributing to base load. This might not even require building new plants! This works great if you have nuclear plants. With a sufficiently smart grid, it makes erratic sources like wind much more useful.

Replies from: mwengler
comment by mwengler · 2014-02-13T22:50:45.405Z · LW(p) · GW(p)

Yes, but marginal analysis requires identifying the correct margin.

I do agree using the rate for coal is pessimistic.

On further research, I discovered that Li-ion batteries are very energetically expensive to produce. Their net lifetime energy cost for production and recycling is about 430 kWh per kWh of battery capacity. Li-ion can be recharged 300-500 times. Using 430 recharges and amortizing production costs across all uses of the battery, we see roughly 1 kWh of production energy used for every 1 kWh of storage the battery delivers during its lifetime.

So now we have the more complicated accounting question: how much carbon do we associate with constructing the battery vs. charging the battery? If construction and charging draw from the same grid, we charge them at the same rate.

And of course, to be fair, we need to figure the cost to refine a gallon of gasoline. It's pretty wacky out there, but the numbers range from 6 kWh to 12 kWh. The higher numbers include quite a bit of natural gas used directly in the process, and using it directly is about twice as efficient as making electricity with it.

All in all, it looks to me like we have about 100% overhead on battery production energy, and say 8 kWh to make a gallon of gas for about 25% overhead on gasoline.

Let's assign 1.3 lbs of CO2 per kWh of electricity, which is the 2009 US average adjusted 7.5% for delivery losses.

Then a gallon of gasoline gives 19 lbs from the gasoline + 10.4 lbs from making/transporting the gasoline.

A Tesla costs 1.3*38 = ~49 lbs of CO2 to go 100 miles on the electric charge, plus another ~49 lbs of CO2 from amortizing the battery's production energy over its lifetime.

Tesla = ~99 lbs CO2 per 100 miles.

99 lbs of CO2 comes from 99/30 = 3.3 gallons of fuel.

So using the US average CO2 load per kWh of electricity, loading the Tesla with 100% overhead for battery production, and loading gasoline with 34% overhead from refining, mining, and transport, we get a Tesla S roughly equivalent to a 30 mpg car in CO2 emissions.

That number is actually extremely impressive for the class of car a Tesla is.

The Nissan Leaf uses 75% as much energy as the Tesla to go 100 miles, so the Leaf has the same CO2 emissions as a roughly 40 mpg car.

If we use coal for electricity these numbers change to Tesla --> 19 mpg and Leaf --> 26 mpg. The Tesla still looks good-ish for the class of car it is, but the Leaf is lousy at 26 mpg, competing with hybrids that get 45 mpg or so.
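A compact restatement of the accounting above, taking the same rough inputs as given (1.3 lb CO2 per delivered kWh for the US-average grid, roughly 2.0 lb/kWh for an all-coal grid, 38 kWh per 100 miles, 100% battery-production overhead, and about 30 lb CO2 per gallon of gasoline including refining and transport):

```python
# Sketch of the lifecycle accounting in this comment; all inputs are the
# rough figures assumed above, not measured data.
LB_PER_GALLON = 19 + 10.4     # burning the gallon + making/transporting it
BATTERY_OVERHEAD = 1.0        # ~1 kWh of production energy per kWh delivered

def mpg_equivalent(kwh_per_100mi, lb_per_kwh):
    lb_per_100mi = kwh_per_100mi * lb_per_kwh * (1 + BATTERY_OVERHEAD)
    return 100 / (lb_per_100mi / LB_PER_GALLON)

print(round(mpg_equivalent(38, 1.3)))          # Tesla S, US-average grid: ~30
print(round(mpg_equivalent(38 * 0.75, 1.3)))   # Leaf, US-average grid: ~40
print(round(mpg_equivalent(38, 2.0)))          # Tesla S, all-coal grid: ~19
print(round(mpg_equivalent(38 * 0.75, 2.0)))   # Leaf, all-coal grid: ~26
```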

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-02-13T23:12:15.631Z · LW(p) · GW(p)

Your lithium-ion numbers match my understanding of batteries in general: they cost as much energy to create as their lifetime capacity. That's why you can't use batteries to smooth out erratic power sources like wind, or inflexible ones like nuclear.

I'm skeptical that it's a good idea to focus on the energy used to create the battery. There's energy used to create all the rest of the car, and certainly energy to create the gasoline-powered car that you're using as a benchmark. Production energy is difficult to compute, and most people do such a bad job of it that I think it's better to use price as a proxy.

comment by mwengler · 2014-02-12T19:13:20.293Z · LW(p) · GW(p)

The rest of your point seems to hold, though; if the subsidy is predicated on reducing CO2 emissions then the equivalent of 25mpg still isn't anything to brag about.

You are right I did my math wrong.

To make it a little clearer to people following along: about 80 lbs of CO2 are generated to move a Tesla 100 miles using coal-generated electricity, and about 80 lbs of CO2 to move a 25 mpg gasoline car 100 miles.

I'll address why the coal number is the right one in my reply to the next comment.

comment by drethelin · 2014-02-12T18:12:26.789Z · LW(p) · GW(p)

It's not difficult to walk away from an ocean? Please explain New Orleans.

Teslas (and other things getting power from the grid) currently run mostly on coal, but ideally they can be run off (unrealistically) solar or wind, or (realistically) nuclear.

Replies from: mwengler
comment by mwengler · 2014-02-12T18:58:28.988Z · LW(p) · GW(p)

It's not difficult to walk away from an ocean? Please explain New Orleans.

Are you under the impression that a climate-change rise in ocean level will look like a dike breaking? All references to sea level rise report less than 1 cm a year, but let's say that rises 100-fold to 1 m/yr. New Orleans flooded a few meters in at most a few days, about 1 m/day.

A factor of 365 in rate could well be the subtle difference between finding yourself on the roof of a house and finding yourself living in a house a few miles inland.

Replies from: drethelin
comment by drethelin · 2014-02-12T20:45:35.463Z · LW(p) · GW(p)

No, explain why we still have a city in New Orleans when it repeatedly gets destroyed by hurricanes.

Replies from: mwengler
comment by mwengler · 2014-02-12T20:55:37.862Z · LW(p) · GW(p)

The thread is about whether climate change is an existential threat, not how to best manage coastal cities that flood.

Replies from: drethelin
comment by drethelin · 2014-02-12T21:55:46.640Z · LW(p) · GW(p)

you're right, sorry.

comment by Izeinwinter · 2014-02-12T18:05:43.214Z · LW(p) · GW(p)

Uncache economic liberal dogma and consider real world experience for a moment? Because just going from observation, I would have to say that Electric Grids do in fact work better when centrally planned. TVA, EDF and the rest of the regulated utilities beat the stuffing out of every example of places that attempt to have competitive markets in electricity. That said, if we actually cared about the problems of fossil fuels, we would long ago have transitioned to a fission based grid, because that would actually solve that problem.

Replies from: ChristianKl
comment by ChristianKl · 2014-02-14T00:21:16.939Z · LW(p) · GW(p)

fission based grid

Googling doesn't find many hits. What do you mean by the term?

Replies from: Izeinwinter
comment by Izeinwinter · 2014-02-14T04:26:13.881Z · LW(p) · GW(p)

Nuclear fission. As in: "Everyone follows the example of France and Sweden and builds nuclear reactors until they no longer have any fossil-fuel-based power plants". There are no real resource or economic limits keeping us from doing this - the Russians have quite good breeder reactor designs, and on a per-terawatt-hour basis it would kill a lot fewer people than any fossil fuel, and cost less money.

comment by Viliam_Bur · 2014-02-12T09:50:30.437Z · LW(p) · GW(p)

If you think helping humanity is (in the long term) a futile effort, because humans are so stupid they will destroy themselves anyway... I'd say the organization you are looking for is CFAR.

So, how would you feel about making a lot of money and donating to CFAR? (Or other organization with a similar mission.)

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T21:09:30.535Z · LW(p) · GW(p)

How cool, I've never heard of CFAR before. It looks awesome. I don't think I'm capable of making a lot of money, but I'll certainly look into CFAR.

Edit: I just realized that CFAR's logo is at the top of the site. Just never looked into it. I am not a smart man.

comment by Locaha · 2014-02-12T07:52:04.500Z · LW(p) · GW(p)

Thoughts?

Taboo humanity.

comment by Slackson · 2014-02-12T03:25:44.201Z · LW(p) · GW(p)

I can't speak for you, but I would hugely prefer for humanity to not wipe itself out, and even if it seems relatively likely at times, I still think it's worth the effort to prevent it.

If you think existential risks are a higher priority than parasite removal, maybe you should focus your efforts on those instead.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T07:42:09.653Z · LW(p) · GW(p)

Serious, non-rhetorical question: what's the basis of your preference? Anything more than just affinity for your species?

I'm not 100% sure what you mean by parasite removal... I guess you're referring to bad decision-makers, or bad decision-making processes? If so, I think existential risks are interlinked with parasite removal: the latter causes or at least hastens the former. Therefore, to truly address existential risks, you need to address parasite removal.

Replies from: Slackson
comment by Slackson · 2014-02-12T08:49:54.910Z · LW(p) · GW(p)

If I live forever, through cryonics or a positive intelligence explosion before my death, I'd like to have a lot of people to hang around with. Additionally, the people you'd be helping through EA aren't the people who are fucking up the world at the moment. Plus there isn't really anything directly important to me outside of humanity.

Parasite removal refers to removing literal parasites from people in the third world, as an example of one of the effective charitable causes you could donate to.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T21:23:06.669Z · LW(p) · GW(p)

EA? (Sorry to ask, but it's not in the Less Wrong jargon glossary and I haven't been here in a while.)

Parasite removal refers to removing literal parasites from people in the third world

Oh. Yes. I think that's important too, and it actually pulls on my heart strings much more than existential risks that are potentially far in the future, but I would like to try to avoid hyperbolic discounting and try to focus on the most important issue facing humanity sans cognitive bias. But since human motivation isn't flawless, I may end up focusing on something more immediate. Not sure yet.

Replies from: Emile
comment by Emile · 2014-02-12T22:15:42.761Z · LW(p) · GW(p)

EA is Effective Altruism.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T22:31:10.304Z · LW(p) · GW(p)

Ah, thanks. :)

comment by [deleted] · 2014-02-12T02:49:10.376Z · LW(p) · GW(p)

I find it fascinating to observe.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T07:23:11.534Z · LW(p) · GW(p)

I assume you're talking about the facepalm-inducing decision-making? If so, that's a pretty morbid fascination. ;-)

comment by DanielLC · 2014-02-16T07:21:23.295Z · LW(p) · GW(p)

If you're looking for ways to eliminate existential risk, then knowing that humanity is about to kill itself no matter what you do and you're just putting it off a few years instead of a few billion matters. If you're just looking for ways to help individuals, it's pretty irrelevant. I guess it means that what matters is what happens now, instead of the flow through effects after a billion years, but it's still a big effect.

If you're suggesting that the life of the average human isn't worth living, then saving lives might not be a good idea, but there are still ways to help keep the population low.

Besides, if humanity was great at helping itself, then why would we need you? It is precisely the fact that we allow extreme inequality to exist that means that you can make a big difference.

comment by ChristianKl · 2014-02-14T00:42:27.527Z · LW(p) · GW(p)

For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this

I think you underrate the existential risks that come along with substantial genetic or neurological enhancements. I'm not saying we shouldn't go there but it's no easy subject matter. It requires a lot of thought to address it in a way that doesn't produce more problems than it solves.

For example the toolkit that you need for genetic engineering can also be used to create artificial pandemics which happen to be the existential risk most feared by people in the last LW surveys.

When it comes to running out of fossil fuels we seem to be doing quite well. Solar energy costs halve every 7 years. The sun doesn't shine the whole day, so there's still further work to be done, but it doesn't seem like an insurmountable challenge.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-14T01:39:01.495Z · LW(p) · GW(p)

I think you underrate the existential risks that come along with substantial genetic or neurological enhancements.

It's true, I absolutely do. It irritates me. I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a "supervirus" or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons. But steering towards a possibly better humanity (or other sentient species) just seems worth the risk to me when the alternative is remaining the violent apes we are. (I know we're hominids, not apes; it's just a figure of speech.)

When it comes to running out of fossil fuels we seem to do quite well. Solar energy halves costs every 7 years.

That's certainly a reassuring statistic, but a less reassuring one is that solar power currently supplies less than one percent of global energy usage!! Changing that (and especially changing it quickly) will be an ENORMOUS undertaking, and there are many disheartening roadblocks in the way (utility companies, lack of government will, etc.). The fact that solar itself is getting less expensive is great, but unfortunately changing over from fossil fuels to solar (e.g. phasing out old power plants and building brand new ones) is still incredibly expensive.

Replies from: ChristianKl
comment by ChristianKl · 2014-02-14T11:59:03.155Z · LW(p) · GW(p)

. I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a "supervirus" or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons.

Of course the ethics are obvious. The road to hell is paved with good intentions. 200 years ago burning all those fossil fuels to power steam engines sounded like a really great idea.

If you simply try to solve problems created by people adopting technology by throwing more technology at it, that's dangerous.

The wise way is to understand the problem you are facing and make specific interventions that you believe will help. CFAR-style rationality training might sound less impressive than changing around people's neurology, but it might be an approach with far fewer ugly side effects.

CFAR-style rationality training might seem less technological to you. That's actually a good thing, because it makes it easier to understand the effects.

The fact that solar itself is getting less expensive is great, but unfortunately the changing over from fossil fuels to solar (e.g. phasing out old power plants and building brand new ones) is still incredibly expensive.

It depends on what issue you want to address. Given how things are going, technology evolves in a way where I don't think we have to fear having no energy when coal runs out. There is plenty of coal around, and green energy evolves fast enough for that task.

On the other hand, we don't want to burn all that coal. I want to eat tuna that's not full of mercury, and there is already a recommendation from the European Food Safety Authority against eating tuna every day because there is so much mercury in it. I want fewer people getting killed by fossil fuel emissions. I also want fewer greenhouse gases in the atmosphere.

is still incredibly expensive.

If you want to make policy that pays off in 50 years, looking only at how things are at the moment narrows your field of vision too much.

If solar continues its price development and costs 1/8 as much in 21 years, you won't need government subsidies to get people to prefer solar over coal. With another 30 years of deployment, we might not burn any coal in 50 years.

disheartening roadblocks in the way (utility companies, lack of government will, etc.).

If you think lack of government will or utility companies are the core problem, why focus on changing human neurology? Addressing politics directly is more straightforward.

When it comes to solar power, it might also be that nobody will use any solar panels in 50 years because Craig Venter's algae are just a better energy source. Betting too much on a single card is never good.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-14T18:51:37.371Z · LW(p) · GW(p)

CFAR-style rationality training might sound less impressive than changing around people's neurology, but it might be an approach with far fewer ugly side effects.

It's a start, and potentially fewer side effects is always good, but think of it this way: who's going to gravitate towards rationality training? I would bet people who are already more rational than not (because it's irrational not to want to be more rational). Since participants are self-selected, a massive part of the population isn't going to bother with that stuff. There are similar issues with genetic and neurological modifications (e.g. they'll be expensive, at least initially, and therefore restricted to a small pool of wealthy people), but given the advantages over things like CFAR I've already mentioned, it seems like it'd be worth it...

I have another issue with CFAR in particular that I'm reluctant to mention here for fear of causing a shit-storm, but since it's buried in this thread, hopefully it'll be okay. Admittedly, I only looked at their website rather than actually attending a workshop, but it seems kind of creepy and culty--rather reminiscent of Landmark, for reasons not the least of which is the fact that it's ludicrously, prohibitively expensive (yes, I know they have "fellowships," but surely not that many. And you have to use and pay for their lodgings? wtf?). It's suggestive of mind control in the brainwashing sense rather than rationality. (Frankly, I find that this forum can get that way too, complete with shaming thought-stopping techniques (e.g. "That's irrational!").) Do you (or anyone else) have any evidence to the contrary? (I know this is a little off-topic from my question -- I could potentially create a workshop that I don't find culty -- but since CFAR is currently what's out there, I figure it's relevant enough.)

Given how things are going technology involves in a way where I don't think we have to fear that we will have no energy when coal runs out. There plenty of coal around and green energy evolves fast enough for that task.

You could be right, but I think that's rather optimistic. This blog post speaks to the problems behind this argument pretty well, I think. Its basic gist is that the amount of energy it will take to build sufficient renewable energy systems demands sacrificing a portion of the economy as is, to a point that no politician (let alone the free market) is going to support.

This brings me to your next point about addressing politics instead of neurology. Have you ever tried to get anything changed politically...? I've been involved in a couple of movements, and my god is it discouraging. You may as well try to knock a brick wall down with a feather. It basically seems that humanity is just going to be the way it is until it is changed on a fundamental level. Yes, I know society has changed in many ways already, but there are many undesirable traits that seem pretty constant, particularly war and inequality.

As for solar as opposed to other technologies, I am a bit torn as to whether it might be better to work on developing technologies rather than whatever seems most practical now. Fusion, for instance, if it's actually possible, would be incredible. I guess I feel that working on whatever's practical now is better for me, personally, to expend energy on since everything else is so speculative. Sort of like triage.

comment by knb · 2014-02-12T20:44:23.113Z · LW(p) · GW(p)

Thoughts?

Pretty sure you just feel like bragging about how much smarter you are than the rest of the world. If you think people have to be as smart as you think you are to be worth protecting, you are a bad person.

comment by skeptical_lurker · 2014-02-12T15:43:26.875Z · LW(p) · GW(p)

Well, there has not been a nuclear war yet (excluding WWII, where deaths from nuclear weapons were tiny in proportion), climate change has only been a known risk for a few decades, and progress is being made with electric cars and solar power. Things could be worse. Instead of moaning, propose solutions: what would you do to stop global warming when so much depends on fossil fuels?

On a separate note, I agree about the kneejerk reactions, but it's a temporary cultural thing, caused partially by people basing morality on fiction. Get one group of people to watch GATTACA and another to watch Ghost in the Shell, and they would have very different attitudes towards transhumanism. More interestingly, cybergoths (people who like to dress as cyborgs as a fashion statement) seem to be pretty open to discussions of actual brain-computer interfaces, and there is music with H+ lyrics being released on actual record labels and bought by people who like the music and are not transhumanists... yet.

In conclusion, once enhancement becomes possible I think there will be a sizeable minority of people who back it - in fact this has already happened with modafinil and students.

Replies from: ricketybridge, Izeinwinter
comment by ricketybridge · 2014-02-12T20:57:03.950Z · LW(p) · GW(p)

people basing morality on fiction.

Yes, and that seems truly damaging. I get the need to create conflict in fiction, but it seems to always come at the expense of technological progress, in a way I've never really understood. When I read Brave New World, I genuinely thought it truly was a "brave new world." So what if some guy was conceived naturally?? Why is that inherently superior? Sounds like status quo bias, if you ask me. Buncha Luddite propaganda.

I've actually been working on a pro-technology, anti-Luddite text-based game. Maybe working on it is in fact a good idea towards balancing out the propaganda and changing public opinion...

comment by Izeinwinter · 2014-02-12T18:10:45.353Z · LW(p) · GW(p)

"Reactors by the thousand". Fissile and fertile materials are sufficiently abundant that we could run a economy much larger than the present one entirely on fission for millions of years, and doing so would have considerably lower average health impacts and costs than what we are actually doing. - The fact that we still burn coal is basically insanity, even disregarding climate change, because of the sheer toxicity of the wastestream from coal plants. Mercury has no halflife.

comment by D_Malik · 2014-02-12T06:08:31.123Z · LW(p) · GW(p)

[deleted]

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T07:43:19.487Z · LW(p) · GW(p)

Well, true. All things shall pass.

comment by mwengler · 2014-02-12T01:08:59.391Z · LW(p) · GW(p)

All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect if something else is conscious, or to measure their degree of consciousness assuming it admits of degree?

I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument: I am conscious, and the most straightforward explanation for my similarities and grouping with all these other people is that they are in relevant respects just like me. But I have always thought that increasingly complex simulations of humans could be both "obviously" not conscious and yet be mistaken by others for conscious. Is every human on the planet who reaches "voicemail jail" or an interactive voice-response system aware that they have not reached a consciousness? Do even those of us who are aware forget sometimes when we are not being careful? Is this going to become an even harder distinction to make as tech continues to get better?

I have been enjoying the television show "Almost Human." In this show there are androids, most of which have been designed NOT to be too much like humans, although what they are really like is boring rule-following humans. It is clear in this show that the value of an android "life" is a tiny fraction of the value of a "human" life; in the first episode a human cop kills his android partner in order to get another one. The partner he does get is much more like a human, but is still considered the property of the police department for which he works, and nobody really has much of a problem with this. Ironically, this "almost human" android partner is African American.

Replies from: cousin_it, ChristianKl, shminux
comment by cousin_it · 2014-02-12T09:44:48.414Z · LW(p) · GW(p)

Is this going to become even a harder distinction to make as tech continues to get better?

Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say "I'm conscious". Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying "I'm conscious" for reasons other than being conscious. After all, you can have a very small program that outputs the string "I'm conscious" without being conscious.

So you might be able turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It's not clear where the cutoff happens, or even if it's meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.

Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that "it preserves behavior at every step, therefore it preserves consciousness", but as the above argument shows, that justification is incomplete and could easily be wrong.
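(A toy sketch of what "folding constant computations into literal constants" means; the function names here are made up purely for illustration, and nothing in it is a claim about how real uploading software would look:)

```javascript
// Toy illustration: two programs with identical input-output behaviour.
// The first does some internal work to produce its answer; the second has
// had that work "folded" into a literal constant by an optimizer.
function reportUnoptimized() {
  var words = ["I'm", "conscious"];   // stand-in for a lot of internal processing
  return words.join(" ");
}

function reportOptimized() {
  return "I'm conscious";             // same output, internals optimized away
}

console.log(reportUnoptimized());     // "I'm conscious"
console.log(reportOptimized());       // "I'm conscious"
```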

Replies from: mwengler
comment by mwengler · 2014-02-12T18:52:08.128Z · LW(p) · GW(p)

So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them.

Suppose you mean lossless compression. The compressed program has ALL the same outputs to the same inputs as the original program.

Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.

From an evolutionary point of view, can a feature with no output, with absolutely zero effect on the interaction of the creature with its environment, ever evolve? There would be no mechanism for it to evolve; there is no basis on which to select for it. It seems to me that to believe in the possibility of p-zombies is to believe in the supernatural, a world of phenomena such as consciousness that for some reason is not allowed to be listed as a phenomenon of the natural world.

At the moment, I can't really see how a belief that p-zombies are possible differs from a belief in the supernatural.

Also this scenario reopens the question of whether uploads are conscious in the first place!

Years ago I thought an interesting experiment to do in terms of artificial consciousness would be to build an increasingly complex verbal simulation of a human, to the point where you could have conversations involving reflection with the simulation. At that point you could ask it if it was conscious and see what it had to say. Would it say "not so far as I can tell?"

The p-zombie assumption is that it would say "yeah I'm conscious duhh what kind of question is that?" But the way a simulation actually gets built is that you have a list of requirements and you keep accreting code until all the requirements are met. If your requirements included a vast array of features but NOT the feature that it answer this question one way or another, conceivably you could elicit an "honest" answer from your sim. If all such sims answer "yes," you might conclude that somehow, in the collection of features you HAD required, consciousness emerged, and you could do other experiments where you removed features from the sim and kept statistics on how those sims answered that question. You might see the sim saying "no, don't think so," and conclude that whatever it is in us that makes us function as conscious, we hadn't found that thing yet and put it in our list of requirements.

Replies from: crazy88, cousin_it, jkaufman
comment by crazy88 · 2014-02-12T23:30:11.694Z · LW(p) · GW(p)

Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.

I haven't thought about this stuff for a while and my memory of it is a bit hazy, so I could be getting things wrong here, but this comment doesn't seem right to me.

First, my p-zombie is not just a duplicate of me in terms of my input-output profile. Rather, it's a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life can be conscious: this denies the possibility of zombies (a physical duplicate of a conscious carbon-based lifeform could not lack consciousness) while also denying that an identical input-output profile implies consciousness.

Second, if it could be shown that the same input-output profile could exist even if consciousness were removed, this doesn't show that consciousness can't play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn't mean that consciousness can't cause that input-output profile in one system while something else causes it in another system.

Third, it seems that one can deny the possibility of zombies while accepting that consciousness has no causal impact on behaviour (contra the last sentence of the quoted fragment): one could hold that the behaviour causes the conscious experience (or that the thing which causes the behaviour also causes the conscious experience). One could then deny that something could be physically identical to me but lack consciousness (that is, deny the possibility of zombies) while still accepting that consciousness lacks causal influence on behaviour.

Am I confused here or do the three points above seem to hold?

Replies from: mwengler
comment by mwengler · 2014-02-14T20:37:36.941Z · LW(p) · GW(p)

Am I confused here or do the three points above seem to hold?

I think formally you are right.

But if consciousness is essential to how we get important aspects of our input-output map, then I think the chances of there being another mechanism that works to get the same input-output map are equal to the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, by just dialing in all the stops and starts and turns and so on that it would need ahead of time. Formally possible, but bearing absolutely no real relationship to how anything that works has ever been built.

I am not a mathematician about these things; I am an engineer, or a physicist in the sense of Feynman.

comment by cousin_it · 2014-02-14T14:09:28.094Z · LW(p) · GW(p)

A few points:

1) Initial mind uploading will probably be lossy, because it needs to convert analog to digital.

2) I don't know if even lossless compression of the whole input-output map is going to preserve everything. Let's say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn't contain many interesting statements about consciousness, but that doesn't mean you're allowed to compress away consciousness. And even on longer timescales, people don't seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can't say that the input-output map is large, unless we figure out more about consciousness in the first place!

3) Even if consciousness plays a large causal role, I agree with crazy88's point that consciousness might not be the smallest possible program that can fill that role.

4) I'm not sure that consciousness is just about the input-output map. Doesn't it feel more like internal processing? I seem to have consciousness even when I'm not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.

Replies from: mwengler
comment by mwengler · 2014-02-14T20:31:44.142Z · LW(p) · GW(p)

I don't know if even lossless compression of the whole input-output map is going to preserve everything. Let's say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn't contain many interesting statements about consciousness, but that doesn't mean you're allowed to compress away consciousness.

It is not your actual input-output map that matters, but your potential one. What is uploaded must be information about your functional organization, not some abstracted mapping function. If I have 10 s left to live and I am uploaded, my upload should type this comment in response to your comment above even if it is well more than 10 s since I was uploaded.

And even on longer timescales, people don't seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map.

If with years of intense and expert schooling I could say more about consciousness, then that is part of my input-output map. My upload would need to have the same property.

Even if consciousness plays a large causal role, I agree with crazy88's point that consciousness might not be the smallest possible program that can fill that role.

Might not be, but probably is. Biological function seems to be very efficient, with most biological features not equalled in efficiency by human-manufactured systems even now. The chances that evolution would have created consciousness if it didn't need to seem slim to me. So as an engineer trying to plan an attack on the problem, I'd expect consciousness to show up in any successful upload. If it did not, that would be a very interesting result. But of course, we need a way to measure consciousness to tell whether it is there in the upload or not.

To the best of my knowledge, no one anywhere has ever said how you go about distinguishing between a conscious being and a p-zombie.

I'm not sure that consciousness is just about the input-output map. Doesn't it feel more like internal processing? I seem to have consciousness even when I'm not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.

I mean your input-output map writ broadly. But again, since you don't even know how to distinguish a conscious me from a p-zombie me, we are not in a position yet to worry about the input-output map and compression, in my opinion.

If a simulation of me can be complete, able to attend graduate school and get 13 patents doing research afterwards, able to carry on an obsessive relationship with a married woman for a decade, able to enjoy a convertible he has owned for 8 years, able to post on LessWrong threads much like this one, then I would be shocked if it wasn't conscious. But I would never know whether it was conscious, nor for that matter will I ever know whether you are conscious, until somebody figures out how to tell the difference between a p-zombie and a conscious person.

Replies from: cousin_it
comment by cousin_it · 2014-02-19T13:03:59.196Z · LW(p) · GW(p)

Biological function seems to be very efficient

Even if that's true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.

I mean your input-output map writ broadly.

Can you expand what you mean by "writ broadly"? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?

That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we're in agreement.

Replies from: mwengler
comment by mwengler · 2014-02-19T15:53:06.735Z · LW(p) · GW(p)

Biological function seems to be very efficient

Even if that's true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.

I was thinking of uploads in the Hansonian sense, a shortcut to "building" AI. Instead of understanding AI/consciousness from the ground up and designing an AI de novo, we simply copy an actual person. Copying the person, if successful, produces a computer-run person which seems to do the things the person would have done under similar conditions.

The person is much simpler than the potential input-output map. The human system has memory, so a semi-complete input-output map could not be generated unless you started with a myriad of fresh copies of the person and ran them through all sorts of conceivable lifetimes.

You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce that, or in another metaphor, try to optimally compress that input-output map. I don't think this is at all how an upload would work.

Consider duplicating or uploading a car. Would you drive the car back and forth over every road in the world under every conceivable traffic and weather condition, and then take that very large input-output map and try to compress and upload that? Or would you take each part of the car and upload it, along with its relationship, when assembled, to each other part in the car? You would do the second; there are too many possible inputs to imagine the input-output approach could be even vaguely as efficient.

So I am thinking of Hansonian uploads for Hansonian reasons, and so it is fair to insist we do something more efficient: upload a copy of the machine rather than a compressed input-output map, especially if the ratio of efficiency is > 10^100:1.

Can you expand what you mean by "writ broadly"? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?

I think I have explained that above. To characterize the machine by its input-output map, you need to consider every possible input. In the case of a person with memory, that means every possible lifetime: the input-output map is gigantic, much bigger than the machine itself, which is the brain/body.

That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we're in agreement.

What I think is that we don't know whether or not consciousness has been thrown away because we don't even have a method for determining whether the original is conscious or not. To the extent you believe I am conscious, why is it? Until you can answer that, until you can build a consciousness-meter, how do we even check an upload for consciousness? What we could check it for is whether it SEEMS to act like the person uploaded, our sort of fuzzy opinion.

What I would say is: IF a consciousness-meter is even possible, and I think it is but I don't know, then any optimization that accidentally threw away consciousness would have changed other behaviors as well, and would be a measurably worse simulation than a conscious simulation would have been.

If on the other hand there is NO measure of consciousness that could be developed as a consciousness-meter (or consciousness-evaluating program if you prefer), then consciousness is supernatural, which for all intents and purposes means it is make-believe. Literally, you make yourself believe something for reasons which by definition have nothing to do with anything that happened in the real, natural, measurable world.

Do we agree on any of these last two paragraphs?

Replies from: cousin_it
comment by cousin_it · 2014-02-21T07:49:25.271Z · LW(p) · GW(p)

You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce that, or in another metaphor, try to optimally compress that input-output map. I don't think this is at all how an upload would work.

Well, presumably you don't want an atom-by-atom simulation. You want to at least compress each neuron to an approximate input-output map for that neuron, observed in practice, and then use that. Also you might want to take some implementation shortcuts to make the thing run faster. You seem to think that all these changes are obviously harmless. I also lean toward that, but not as strongly as you, because I don't know where to draw the line between harmless and harmful optimizations.

comment by jefftk (jkaufman) · 2014-02-12T23:00:29.711Z · LW(p) · GW(p)

Suppose you mean lossless compression

Right; with lossless compression you're not going to lose anything. So cousin_it probably means lossy compression, like with jpgs and mp3s: smaller versions that are very similar to what you had before.

Replies from: cousin_it
comment by cousin_it · 2014-02-14T13:56:03.984Z · LW(p) · GW(p)

Well, initial mind uploading is going to be lossy because it will convert analog to digital.

That said, I don't know if even lossless compression of the whole input-output map is going to preserve everything. Let's say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn't contain many interesting statements about consciousness, but that doesn't mean you're allowed to compress away consciousness...

And even on longer timescales, people don't seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can't say that the input-output map is large, unless we figure out more about consciousness in the first place.

(Also I agree with crazy88's point that consciousness might play a large causal role but still be compressible to a smaller non-conscious program.)

More generally, I'm not sure that consciousness is just about the input-output map. Doesn't it feel more like internal processing? I seem to have consciousness even when I'm not talking about it, and I would still have it even if my religion prohibited me from talking about it, or something.

comment by ChristianKl · 2014-02-14T00:58:00.705Z · LW(p) · GW(p)

It depends on whether you subscribe to materialism. If you do, then there is nothing to measure. Consciousness might even be a tricky illusion, as Dennett suggests.

If on the other hand you do believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.

Replies from: mwengler
comment by mwengler · 2014-02-14T20:33:11.298Z · LW(p) · GW(p)

If on the other hand you do believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.

OMG then someone should get busy! Tell me what I can measure and if it makes any kind of sense I will start working on it!

Replies from: ChristianKl
comment by ChristianKl · 2014-02-15T02:36:47.375Z · LW(p) · GW(p)

I do have a qualia for perceiving whether someone else is present in a meditation or is absent-minded. It could be that it's some mental reaction that picks up microgestures or some other thing that I don't consciously perceive and summarizes that information into a qualia for mental presence.

Investigating how such a qualia works is what I personally would do if I wanted to investigate consciousness.

But you probably have no such qualia, so you either need someone who has, or you need to develop it yourself. In both cases that probably means seeking a good meditation teacher.

It's a difficult subject to talk about in a medium like this, where people who are into a spiritual framework that has some model of what consciousness happens to be have phenomenological primitives that the audience I'm addressing doesn't have. In my experience most of the people who I consider capable in that regard are very unwilling to talk about details with people who don't have the phenomenological primitives to make sense of them. Instead of answering a question directly, a Zen teacher might give you a koan and tell you to come back in a month when you've built the phenomenological primitives to understand it, except that he doesn't tell you about phenomenological primitives.

comment by Shmi (shminux) · 2014-02-12T01:19:14.808Z · LW(p) · GW(p)

I don't know of a human-independent definition of consciousness, do you? If not, how can one say that "something else is conscious"? So the statement

increasingly complex simulations of humans could be both "obviously" not conscious but be mistaken by others as conscious

will only make sense once there is a definition of consciousness not relying on being a human or using one to evaluate it. (I have a couple ideas about that, but they are not firm enough to explicate here.)

Replies from: mwengler, Scott Garrabrant
comment by mwengler · 2014-02-12T16:37:25.635Z · LW(p) · GW(p)

I don't know of a human-independent definition of consciousness, do you? If not, how can one say that "something else is conscious"? So the statement

I don't know of ANY definition of consciousness which is testable, human-independent or not.

comment by Scott Garrabrant · 2014-02-12T01:46:04.958Z · LW(p) · GW(p)

I don't know of a human-independent definition of consciousness, do you?

Integrated Information Theory is one attempt at a definition. I read about it a little, but not enough to determine if it is completely crazy.

Replies from: fluchess
comment by fluchess · 2014-02-12T02:49:20.720Z · LW(p) · GW(p)

IIT provides a mathematical approach to measuring consciousness. It is not crazy, and there are a significant number of good papers on the topic. It is human-independent.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-12T09:43:39.753Z · LW(p) · GW(p)

I don't understand it, but from reading the Wikipedia summary it seems to me it measures the complexity of the system. Complexity is not necessarily consciousness.

According to this theory, what is the key difference between a human brain, and... let's say a hard disk of the same capacity, connected to a high-resolution camera? Let's assume that the data from the camera are being written in real time to pseudo-random parts of the hard disk. The pseudo-random parts are chosen by calculating a checksum of the whole hard disk. This system obviously is not conscious, but seems complex enough.
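(For concreteness, a toy sketch of the system I have in mind; the disk size and the checksum are arbitrary choices for illustration:)

```javascript
// Toy sketch of the camera-plus-disk thought experiment: each incoming
// "camera" value is written to a location chosen by a checksum of the
// entire current disk contents.
var DISK_SIZE = 1024;                       // arbitrary stand-in capacity
var disk = new Array(DISK_SIZE);
for (var i = 0; i < DISK_SIZE; i++) disk[i] = 0;

function checksum(array) {
  // Any deterministic function of the whole disk state would do here.
  var sum = 0;
  for (var j = 0; j < array.length; j++) sum = (sum + array[j]) % DISK_SIZE;
  return sum;
}

function writeCameraValue(value) {
  disk[checksum(disk)] = value;             // write location depends on everything stored so far
}

// Feed in some fake camera data; the state soon depends intricately on its history.
for (var frame = 0; frame < 10; frame++) {
  writeCameraValue((frame * 7) % 256);
}
```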

Replies from: fluchess
comment by fluchess · 2014-02-12T12:26:21.031Z · LW(p) · GW(p)

IIT proposes that consciousness is integrated information.

The key difference between a brain and the hard disk is that the disk has no way of knowing what it is actually sensing. The brain can tell the difference between many more senses, and can receive and use more forms of information. The camera is not conscious of the fact that it is sensing light and colour.

This article is a good introduction to the topic, and the photodiode example in the paper is a simple version of your question: http://www.biolbull.org/content/215/3/216.full
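As best I understand the early (circa-2004) formulation, the quantity for a subset $S$ split into parts $A$ and $B$ is the effective information across the partition, i.e. the mutual information between the parts when one is perturbed with maximal entropy, and $\Phi$ is that quantity evaluated at the bipartition where it is smallest (after normalization):

$$\operatorname{EI}(A \to B) = \operatorname{MI}\left(A^{H^{\max}};\, B\right), \qquad \Phi(S) = \operatorname{EI}\left(A^{\mathrm{MIB}} \leftrightarrow B^{\mathrm{MIB}}\right)$$

where MIB is the minimum information bipartition. A single photodiode can integrate at most one bit across any such split, which is why IIT treats it as at most minimally conscious. (Take this as a rough gloss rather than the current formal definition; the theory has gone through several revisions.)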

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-12T21:50:18.809Z · LW(p) · GW(p)

Thanks! The article was good. At this moment, I am... not convinced, but also not able to find an obvious error.

comment by Shmi (shminux) · 2014-02-15T18:37:02.636Z · LW(p) · GW(p)

Paraphrased from #lesswrong: "Is it wrong to shoot everyone who believes Tegmark level 4?" "No, because, according to them, it happens anyway". (It's tongue-in-cheek, for you humorless types.)

comment by RolfAndreassen · 2014-02-13T07:42:13.272Z · LW(p) · GW(p)

I am still seeking players for a multiplayer game of Victoria 2: Heart of Darkness. We have converted from an earlier EU3 game, itself converted from CK2; the resulting history is very unlike our own. We are currently in 1844:

  • Islamic Spain has publicly declared half of Europe to be dar al Harb, liable to attack at any time, while quietly seeking the return of its Caribbean colonies by diplomatic means.
  • The Christian powers of Europe discuss the partition of Greece-across-the-sea, the much-decayed final remnant of the Roman Empire, which nonetheless rules eastern Africa from the Nile Delta to Lake Tanganyika.
  • United India jostles with China for supremacy in Asia, both courting the lesser powers of Sind and the Mongol Khanate as allies in their struggle. The Malayan Sultanate, the world's foremost naval power, keeps its vast fleet as the balancing weight in these scales, supporting now one, now another as the advantage shifts - while keeping a wary eye on the West, looking for a European challenge to its Pacific hegemony.
  • The Elbe, marking the border of the minor powers France-Allemagne and Bavaria, remains a flashpoint for Great-Power rivalries, as it has been for centuries. The diplomatic balance is once again shifting, with France-Allemagne opportunistically seeking support from Bavaria's historic protector Spain, Scandinavia eyeing the Baltic ports of both sides, and Russia seemingly distracted by imperial concerns in Asia.
  • An enormous darkness shrouds the South American continent, where the ancient Inca kingdom has extended its rule, and its human sacrifices, from Tierra del Fuego to the Rio Grande. Only a few Amazonian tribes, protected by the jungle canopy, maintain a precarious independence; and the Jaguar Knights are ever in search of new conquests to feed their gods. The oceans have protected Europe, and distance and desert North America; but an age of steam ships and iron horses dawns, and the globe shrinks. Beplumed cavalry may yet ride in triumph through the streets of London, and obsidian knives flash atop the Great Pyramid.

Several nations are available to play:

  • Sind, an important regional power, occupying roughly the area of Pakistan, Afghanistan, and parts of Iran. Contend with India for the rule of the subcontinent!
  • Najd, likewise a significant factor in the power-balance of both Asia and Europe, taking up most of the Middle East. Fight Russia for Anatolia, Greece for Africa, or ally with India to partition Sind!
  • The Khanate, a landlocked power stretching from the Urals to very nearly the Pacific - but not quite, courtesy of the Korean War. Reverse the outcome and bring a new Mandate to rule China!
  • Greece-in-exile, least among the powers that bestride the Earth - that is, not counting the various city-states, vassals, and half-independent border marches that some Great Powers find it convenient to maintain. Take on usurping Italia Renata, bullying Russia and infidel Spain, and restore the glory that was Rome!

Next session is this Sunday; PM me for details.

Replies from: RolfAndreassen, Vaniver, bramflakes
comment by RolfAndreassen · 2014-02-16T03:49:02.836Z · LW(p) · GW(p)

Additionally, playing in an MP campaign offers all sorts of opportunities for sharpening your writing skills through stories set in the alternate history!

comment by Vaniver · 2014-02-14T22:24:26.668Z · LW(p) · GW(p)

If you play in this game, you get to play with not one, but two LWers! I am Spain, beacon of learning, culture, and industry.

comment by bramflakes · 2014-02-13T10:29:08.440Z · LW(p) · GW(p)

Other than the alternate start, are there any mods?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-02-14T02:30:36.063Z · LW(p) · GW(p)

Yes, we have redistributed the RGOs for great balance, and stripped out the nation-specific decisions.

comment by Lapsed_Lurker · 2014-02-12T15:45:55.835Z · LW(p) · GW(p)

BBC Radio: Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.

comment by MrMind · 2014-02-12T10:35:16.968Z · LW(p) · GW(p)

I don't think it has already been posted here on LW, but SMBC has a wonderful little strip about UFAI: http://www.smbc-comics.com/?id=3261#comic

Replies from: NoSuchPlace
comment by NoSuchPlace · 2014-02-12T16:58:46.176Z · LW(p) · GW(p)

It's a repost from last week.

Though rereading it, does anyone know whether Zach knows about MIRI and/or LessWrong? I expect "unfriendly human-created Intelligence" to parse as "AI with bad manners" to people unfamiliar with MIRI's work, which is probably not what the scientist is worried about.

Replies from: Lumifer
comment by Lumifer · 2014-02-12T17:12:28.208Z · LW(p) · GW(p)

I expect "unfriendly human-created Intelligence" to parse as "AI with bad manners" to people unfamiliar with MIRI's work

I expect "unfriendly human-created Intelligence" to parse as HAL and Skynet to regular people.

Replies from: Vulture
comment by Vulture · 2014-02-14T16:01:17.090Z · LW(p) · GW(p)

The use of "friendly" to mean "non-dangerous" in the context of AI is, I believe, rather idiosyncratic.

comment by cursed · 2014-02-11T20:33:42.392Z · LW(p) · GW(p)

I'm interested in learning pure math, starting from precalculus. Can anyone give advice on what textbooks I should use? Here's my current list (a lot of these textbooks were taken from the MIRI and LW best-textbook lists):

  • Calculus for Science and Engineering
  • Calculus - Spivak
  • Linear Algebra and its Applications - Strang
  • Linear Algebra Done Right
  • Div, Grad, Curl and All That (Vector calc)
  • Fundamentals of Number Theory - LeVeque
  • Basic Set Theory
  • Discrete Mathematics and its Applications
  • Introduction to Mathematical Logic
  • Abstract Algebra - Dummit

I'm well versed in simple calculus, going back to precalc to fill gaps I may have in my knowledge. I feel like I have some major gaps in knowledge for jumping from the undergrad to the graduate level. Do any math PhDs have any advice?

Thanks!

Replies from: Scott Garrabrant, Nisan, ricketybridge, Qiaochu_Yuan, Vladimir_Nesov, iarwain1, solipsist
comment by Scott Garrabrant · 2014-02-11T20:50:19.567Z · LW(p) · GW(p)

I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.

It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math; they are specifically the subset of pure math you should learn to program AI. If that is your goal, the entire calculus sequence will not be that useful.

If your goal is to learn physics or economics, you should learn calculus, statistics, analysis.

If you want to have a true understanding of the math that is built into rationality, you want probability, statistics, logic.

If you want to learn what most math PhDs learn, then you need things like algebra, analysis, topology.

Replies from: cursed
comment by cursed · 2014-02-11T20:53:34.376Z · LW(p) · GW(p)

Thanks, I made an edit you might not have seen: I mentioned I do have experience with calculus (differential, integral, multi-var) and discrete math (basic graph theory, basic proofs); I'm just filling in some gaps since it's been a while since I've done 'math'. I imagine I'll get through the first two books quickly.

Can you recommend some algebra/analysis/topology books that would be a natural progression of the books I listed above?

Replies from: Nisan, Nisan, Scott Garrabrant
comment by Nisan · 2014-02-11T21:26:19.688Z · LW(p) · GW(p)

In my experience, "analysis" can refer to two things: (1) A proof-based calculus course; or (2) measure theory, functional analysis, advanced partial differential equations. Spivak's Calculus is a good example of (1). I don't have strong opinions about good texts for (2).

comment by Nisan · 2014-02-11T21:23:08.885Z · LW(p) · GW(p)

Dummit & Foote's Abstract Algebra is a good algebra book and Munkres' Topology is a good topology book. They're pretty advanced, though. In university one normally tackles them in late undergrad or early grad years, after taking some proof-based analysis and linear algebra courses. There are gentler introductions to algebra and topology, but I haven't read them.

Replies from: cursed
comment by cursed · 2014-02-11T21:33:29.893Z · LW(p) · GW(p)

Great, I'll look into the Topology book.

Replies from: gjm
comment by gjm · 2014-02-11T22:08:46.431Z · LW(p) · GW(p)

A couple more topology books to consider: "Basic Topology" by Armstrong, one of the Springer UTM series; "Topology" by Hocking and Young, available quite cheap from Dover. I think I read Armstrong as a (slightly but not extravagantly precocious) first-year undergraduate at Cambridge. Hocking and Young is less fun and probably more of a shock if you've been away from "real" mathematics for a while, but goes further and is, as I say, cheap.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-02-11T23:39:17.822Z · LW(p) · GW(p)

Given how much effort it takes to study a textbook, cost shouldn't be a significant consideration (compare a typical cost per page with the amount of time per page spent studying, if you study seriously and not just cram for exams; the impression from the total price is misleading). In any case, most texts can be found online.

Replies from: gjm
comment by gjm · 2014-02-12T01:20:05.492Z · LW(p) · GW(p)

cost shouldn't be a significant consideration

And yet, sometimes, it is. (Especially for impecunious students, though that doesn't seem to be quite cursed's situation.)

most texts can be found online

Some people may prefer to avoid breaking the law.

Replies from: Nornagest
comment by Nornagest · 2014-02-12T01:41:45.559Z · LW(p) · GW(p)

There are some absurd recency effects in textbook publishing. In well-trodden fields it's often possible to find a last-edition textbook for single-digit pennies on the dollar, and the edition change will have close to zero impact if you're doing self-study rather than working a highly exact problem set every week.

(Even if you are in a formal class, buying an edition back is often worth the trouble if you can find the diffs easily, for example by making friends with someone who does have the current edition. I did that for a couple semesters in college, and pocketed close to $500 before I started getting into textbooks obscure enough not to have frequent edition changes.)

comment by Scott Garrabrant · 2014-02-11T20:57:42.199Z · LW(p) · GW(p)

I am not going to be able to recommend any books. I learned all my math directly from professors' lectures.

What is your goal in learning math?

If you want to learn for MIRI purposes, and you've already seen some math, then relearning calculus might not be worth your time.

Replies from: cursed
comment by cursed · 2014-02-11T21:00:14.749Z · LW(p) · GW(p)

I have a degree in computer science, looking to learn more about math to apply to a math graduate program and for fun.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T21:10:30.188Z · LW(p) · GW(p)

My guess is that if you have an interest in computer science, you will have the most fun with logic and discrete math, and will not have much fun with the calculus.

If you are serious about getting into a math graduate program, then you have to learn the calculus stuff anyway, because it is a large part of the Math GRE.

Replies from: lmm
comment by lmm · 2014-02-11T22:36:42.282Z · LW(p) · GW(p)

It's worth mentioning that this is a US peculiarity. If you apply to a program elsewhere there is a lot less emphasis on calculus.

Replies from: somervta
comment by somervta · 2014-02-13T05:18:11.780Z · LW(p) · GW(p)

But you should still know the basics of calculus (and linear algebra) - at least the equivalent of calc 1, 2 & 3.

comment by Nisan · 2014-02-11T21:19:56.552Z · LW(p) · GW(p)

Maybe the most important thing to learn is how to prove things. Spivak's Calculus might be a good place to start learning proofs; I like that book a lot.

comment by ricketybridge · 2014-02-12T02:46:59.585Z · LW(p) · GW(p)

For what it's worth, I'm doing roughly the same thing, though starting with linear algebra. At first I started with multivariable calc, but when I found it too confusing, people advised me to skip to linear algebra first and then return to MVC, and so far I've found that that's absolutely the right way to go. I'm not sure why they're usually taught the other way around; LA definitely seems more like a prereq of MVC.

I tried to read Spivak's Calc once and didn't really like it much; I'm not sure why everyone loves it. Maybe it gets better as you go along, idk.

I've been doing LA via Gilbert Strang's lectures on MIT OpenCourseWare, and so far I'm finding them thoroughly fascinating and charming. I've also been reading his book and just started Hoffman & Kunze's Linear Algebra, which supposedly has a bit more theory (which I really can't go without).

Just some notes from a fellow traveler. ;-)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-02-12T03:08:19.628Z · LW(p) · GW(p)

I tried to read Spivak's Calc once and didn't really like it much; I'm not sure why everyone loves it. Maybe it gets better as you go along, idk.

"Not liking" is not very specific. It's good all else equal to "like" a book, but all else is often not equal, so alternatives should be compared from other points of view as well. It's very good for training in rigorous proofs at introductory undergraduate level, if you do the exercises. It's not necessarily enjoyable.

I've also been reading his book and just started Hoffman & Kunze's Linear Algebra, which supposedly has a bit more theory

It's a much more advanced book, more suitable for a deeper review somewhere at the intermediate or advanced undergraduate level. I think Axler's "Linear Algebra Done Right" is better as a second linear algebra book (though it's less comprehensive), after a more serious real analysis course (i.e. not just Spivak) and an intro complex analysis course.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-12T07:22:16.762Z · LW(p) · GW(p)

Oh yeah, I'm not saying Spivak's Calculus doesn't provide good training in proofs. I really didn't even get far enough to tell whether it did or not, in which case, feel free to disregard my comment as uninformed. But to be more specific about my "not liking", I just found the part I did read to be more opaque than engaging or intriguing, as I've found other texts (like Strang's Linear Algebra, for instance).

Edit: Also, I'm specifically responding to statements that I took to be about liking the book in the enjoyment sense (expressed on this thread and elsewhere as well). If that's not the kind of liking they meant, then my comment is irrelevant.

It's a much more advanced book, more suitable for a deeper review somewhere at the intermediate or advanced undergraduate level. I think Axler's "Linear Algebra Done Right" is better as a second linear algebra book (though it's less comprehensive), after a more serious real analysis course (i.e. not just Spivak) and an intro complex analysis course.

Damn, really?? But I hate it when math books (and classes) effectively say "assume this is true" rather than delve into the reason behind things, and those reasons aren't explained until 2 classes later. Why is it not more pedagogically sound to fully learn something rather than slice it into shallow, incomprehensible layers?

comment by Qiaochu_Yuan · 2014-02-12T01:24:50.581Z · LW(p) · GW(p)

I think people generally agree that analysis, topology, and abstract algebra together provide a pretty solid foundation for graduate study. (Lots of interesting stuff that's accessible to undergraduates doesn't easily fall under any of these headings, e.g. combinatorics, but having a foundation in these headings will equip you to learn those things quickly.)

For analysis the standard recommendation is baby Rudin, which I find dry, but it has good exercises and it's a good filter: it'll be hard to do well in, say, math grad school if you can't get through Rudin.

For point-set topology the standard recommendation is Munkres, which I generally like. The problem I have with Munkres is that it doesn't really explain why the axioms of a topological space are what they are and not something else; if you want to know the answer to this question you should read Vickers. Go through Munkres after going through Rudin.

I don't have a ready recommendation for abstract algebra because I mostly didn't learn it from textbooks. I'm not all that satisfied with any particular abstract algebra textbooks I've found. An option which might be a little too hard but which is at least fairly comprehensive is Ash, which is also freely legally available online.

For the sake of exposure to a wide variety of topics and culture I also strongly, strongly recommend that you read the Princeton Companion. This is an amazing book; the only bad thing I have to say about it is that it didn't exist when I was a high school senior. I have other reading recommendations along these lines (less for being hardcore, more for pleasure and being exposed to interesting things) at my blog.

Replies from: Vladimir_Nesov, MrMind
comment by Vladimir_Nesov · 2014-02-12T02:03:54.316Z · LW(p) · GW(p)

For analysis the standard recommendation is baby Rudin, which I find dry, but it has good exercises and it's a good filter: it'll be hard to do well in, say, math grad school if you can't get through Rudin.

I feel that it's only good as a test or for review, and otherwise a bad recommendation, made worse by its popularity (which makes its flaws harder to take seriously), and the widespread "I'm smart enough to understand it, so it works for me" satisficing attitude. Pugh's "Real Mathematical Analysis" is a better alternative for actually learning the material.

comment by MrMind · 2014-02-12T10:19:04.563Z · LW(p) · GW(p)

For point-set topology the standard recommendation is Munkres, which I generally like.

I would preface any textbook on topology with the first chapter of Ishan's "Differential geometry". It builds up the reason for studying topology and why the axioms have the shape they have in a wonderful crescendo, and at the end even dabbles a bit into nets (non-point-set topology). It's very clear and builds a lot of intuition.

Also, as a side dish in a topology lunch, the peculiar "Counterexamples in topology".

comment by Vladimir_Nesov · 2014-02-11T23:20:47.757Z · LW(p) · GW(p)

Keep a file with notes about books. Start with Spivak's "Calculus" (do most of the exercises at least in outline) and Polya's "How to Solve It", to get a feeling of how to understand a topic using proofs, a skill necessary to properly study texts that don't have exceptionally well-designed problem sets. (Courant&Robbins's "What Is Mathematics?" can warm you up if Spivak feels too dry.)

Given a good text such as Munkres's "Topology", search for anything that could be considered a prerequisite or an easier alternative first. For example, starting from Spivak's "Calculus", Munkres's "Topology" could be preceded by Strang's "Linear Algebra and Its Applications", Hubbard&Hubbard's "Vector Calculus", Pugh's "Real Mathematical Analysis", Needham's "Visual Complex Analysis", Mendelson's "Introduction to Topology" and Axler's "Linear Algebra Done Right". But then there are other great books that would help to appreciate Munkres's "Topology", such as Flegg's "From Geometry to Topology", Stillwell's "Geometry of Surfaces", Reid&Szendrői's "Geometry and Topology", Vickers's "Topology via Logic" and Armstrong's "Basic Topology", whose reading would benefit from other prerequisites (in algebra, geometry and category theory) not strictly needed for "Topology". This is a downside of a narrow focus on a few harder books: it leaves the subject dry. (See also this comment.)

comment by iarwain1 · 2014-02-18T18:21:15.586Z · LW(p) · GW(p)

I'm doing precalculus now, and I've found ALEKS to be interesting and useful. For you in particular it might be useful because it tries to assess where you're up to and fill in the gaps.

I also like the Art of Problem Solving books. They're really thorough, and if you want to be very sure you have no gaps then they're definitely worth a look. Their Intermediate Algebra book, by the way, covers a lot of material normally reserved for Precalculus. The website has some assessments you can take to see what you're ready for or what's too low-level for you.

comment by solipsist · 2014-02-11T21:20:27.533Z · LW(p) · GW(p)

Given your background and your wish for pure math, I would skip the calculus and the applications of linear algebra, and go directly to basic set theory, then abstract algebra, then mathy linear algebra or real analysis, then topology.

Or, do discrete math directly if you already know how to write a proof.

comment by listic · 2014-02-13T00:39:08.802Z · LW(p) · GW(p)

I am going to organize a coaching course to learn JavaScript + Node.js.

My particular technology of choice is node.js because:

  • If starting from scratch, having to learn just one language for both frontend and backend makes sense. JavaScript is the only language you can use in a browser, and you will have to learn it anyway. They say it's a kind of Lisp or Scheme in disguise and a pretty cool language by itself.
  • Node.js is a modern asynchronous server-side platform, made by running JavaScript code server-side on Google's open-source V8 JavaScript engine. It seems to be well suited for building highly loaded backend servers, and works for regular websites, too (a minimal example is sketched just after this list).
  • Hack Reactor teaches it, and reports that 98% of graduates go on to earn $110k/year on average after 3 months of study. But their tuition is $17,780. We will do it much more cheaply.
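To give a concrete taste of the server side, here is a minimal sketch using only Node's built-in http module (no framework; the port and the message are arbitrary choices for the example):

```javascript
// Minimal Node.js web server using only the built-in http module.
// Save as server.js and run with: node server.js
var http = require('http');

var server = http.createServer(function (request, response) {
  // Every incoming request gets a plain-text greeting.
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from Node.js\n');
});

// 8080 is an arbitrary port chosen for this example.
server.listen(8080, function () {
  console.log('Listening on http://localhost:8080/');
});
```

Once something like this is comfortable, adding a framework such as Express on top is a much smaller step.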

I wanted to learn modern web technologies for a while, but haven't gotten myself to actually do it. When I tried to start learning, I was overwhelmed by the number of things I still have to learn to get anything done. Here's the bare minimum:

  • html
  • css
  • javascript
  • node.js
  • git

I believe the optimum course of action is to hire a guru to do coaching for me and several other students and split the cost. The benefits compared to learning by yourself are:

  • personal communication (via Skype or similar) and doing tasks along with the others provides an additional drive to complete your studies
  • the guru can choose an optimum path for me to reach the desired capabilities in the shortest time.

The capabilities that I want to achieve are:

i. To be able to add functionality to my Tumblr blog (where I run a writing prompt) by either using a custom theme + the Tumblr API, or extracting posts via the API and using them to render my blog on a separate website. Node.js is definitely not needed here; rather, this is the simplest case of doing something useful that I need to do with web technologies, and node.js is my web technology of choice.

ii. To hack on Undum, a client-side hypertext interactive fiction framework. My thoughts on why I think Undum and IF are cool are here.

  • To port features from one version of Undum to another and create a version of Undum that is able to run all existing games (about 5 of them)
  • To abstract away Undum's internal game representation and state so that they can be loaded and saved externally, over a network
  • To create a server part for Undum that controls the version of the book you're allowed to read (allows to read one new chapter a day, remembers the branch you're reading, up to the end, if you've read to the end, etc.)
  • To create a website that works as a YouTube and an editor for Undum games

iii. To create new experiments that utilize modern web technologies to interesting and novel effect. I know that this sounds really vague, but the point is that sometimes you never know what can be done until you learn the relevant skills. One example of the kind of thing that I think about is what this paper is talking about:

Lee et al. - Real-Time Disease Surveillance Using Twitter Data, Demonstration on Flu and Cancer

Friend's advice: Skype Premium + Dropbox + Piratepad + Slideshare + Doodle should be enough. What do you think?

Want to join? Questions? Suggestions for better videoconferencing software than Skype?

Replies from: Emile
comment by Emile · 2014-02-13T08:50:13.655Z · LW(p) · GW(p)

I would suggest using AngularJS instead, since it can be purely client-side code; you don't need to deal with anything server-side.

There are also some nice online development environments like Codenvy that provide a pretty rich environment and, I believe, have some collaborative features too (instead of using Dropbox, Doodle and Slideshare, maybe).

If all those technologies seem intimidating, some strategies:

  • Focus on a subset, e.g. only HTML and CSS
  • Use Anki a lot - I've used Anki to put in git commands, AngularJS concepts and CSS tricks so that even if I wasn't actively working on a project using those, they'd stay at the back of my mind.
comment by Bayeslisk · 2014-02-12T04:02:33.371Z · LW(p) · GW(p)

Has anyone else had one of those odd moments when you've accidentally confirmed reductionism (of a sort) by unknowingly responding to a situation almost identically to the last time or times you encountered it? For my part, I once gave the same condolences to an acquaintance who was living with someone we both knew to be very unpleasant, and also just attempted to add the word for "tomato" in Lojban to my list of words after seeing the Pomodoro technique mentioned.

Replies from: mwengler, BloodyShrimp, NancyLebovitz
comment by mwengler · 2014-02-12T16:05:49.572Z · LW(p) · GW(p)

A freaky thing I once saw... when my daughter was about 3 there were certain things she responded to verbally. I can't remember what the thing was in this example, but it was something like me asking her "who is your rabbit?" and her replying "Kisses" (which was the name of her rabbit).

I had videoed some of this exchange and was playing it on a TV with her in the room. I was appalled to hear her responding "Kisses" upon hearing me on the TV saying "who is your favorite rabbit." Her response was extremely similar to her response on the video, with tremendous overlap in timing, tone and inflection. Maybe 20 to 50 ms off in timing (it almost sounded like unison).

I really had the sense that she was a machine and it did not feel good.

Replies from: Sherincall
comment by Sherincall · 2014-02-14T04:09:29.637Z · LW(p) · GW(p)

After brain surgery, my father developed anterograde amnesia. Think Memento by Chris Nolan. His reactions to different comments/situations were always identical. If I were to mention a certain word, it would always invoke the same joke. Seeing his wife wearing a certain dress always produced the same witty comment. He was also equally amused by his wittiness every time.

For several months after the surgery he had to be kept under tight watch, and was prone to just go do something that had been routine pre-op, so we found a joke he found extremely funny and which he hadn't heard before the surgery, and we would tell it every time we wanted him to forget where he was going. So, he would laugh for a good while, get completely disoriented, and go back to his sofa.

For a long while, we were unable to convince him that he had a problem, or even that he had had the surgery (he would explain the scar away through some fantasy). And even when we managed, it lasted only a minute or two. Since then, I've developed several signals I would use if I found myself in an isomorphic situation. I had already read HPMoR by that time, but had discarded Harry's lip-biting as mostly pointless in real life.

Replies from: Bayeslisk
comment by Bayeslisk · 2014-02-14T10:01:30.623Z · LW(p) · GW(p)

These are both pretty much exactly what I'm thinking of! The feeling that someone (or you!) is/are a terrifyingly predictable black box.

Replies from: DanielLC
comment by DanielLC · 2014-02-16T07:24:23.533Z · LW(p) · GW(p)

My goal in life is to become someone so predictable that you can figure out what I'll do just by calculating what choice would maximize utility.

Replies from: Bayeslisk
comment by Bayeslisk · 2014-02-17T02:55:00.774Z · LW(p) · GW(p)

That seems eminently exploitable and consequently extremely dangerous. Safety and unexpected delight lie in unpredictability.

comment by BloodyShrimp · 2014-02-12T22:33:24.415Z · LW(p) · GW(p)

This doesn't seem related to reductionism to me, except in that most reductionists don't believe in Knightian free will.

Replies from: Bayeslisk
comment by Bayeslisk · 2014-02-14T09:56:54.812Z · LW(p) · GW(p)

Sort of, in the sense of human minds being more like fixed black boxes than one might like to think. What's Knightian free will, though?

Replies from: BloodyShrimp
comment by BloodyShrimp · 2014-02-18T22:51:56.289Z · LW(p) · GW(p)

Knightian uncertainty is uncertainty where probabilities can't even be applied. I'm not convinced it exists. Some people seem to think free will is rescued by it; that the human mind could be unpredictable even in theory, and this somehow means it's "you" "making choices". This seems like deep confusion to me, and so I'm probably not expressing their position correctly.

Reductionism could be consistent with that, though, if you explained the mind's workings in terms of the simplest Knightian atomic thingies you could.

Replies from: Bayeslisk
comment by Bayeslisk · 2014-02-20T11:00:41.178Z · LW(p) · GW(p)

Can you give me some examples of what some people think constitutes Knightian uncertainty? Also: what do they mean by "you"? They seem to be postulating something supernatural.

Replies from: BloodyShrimp
comment by BloodyShrimp · 2014-02-23T05:59:24.402Z · LW(p) · GW(p)

Again, I'm not a good choice for an explainer of this stuff, but you could try http://www.scottaaronson.com/blog/?p=1438

Replies from: Bayeslisk
comment by Bayeslisk · 2014-02-24T19:01:17.868Z · LW(p) · GW(p)

Thanks! I'll have a read through this.

Replies from: BloodyShrimp
comment by BloodyShrimp · 2014-02-27T05:16:08.661Z · LW(p) · GW(p)

I decided I should actually read the paper myself, and... as of page 7, it sure looks like I was misrepresenting Aaronson's position, at least. (I had only skimmed a couple Less Wrong threads on his paper.)

comment by NancyLebovitz · 2014-02-12T14:04:39.740Z · LW(p) · GW(p)

In my case, it seems more likely that the other person will remember that I'd said the same thing before.

Replies from: Bayeslisk
comment by Bayeslisk · 2014-02-14T09:57:38.875Z · LW(p) · GW(p)

In mine, too, at least for the first few seconds. Otherwise, knowing I had already responded a certain way, I would probably respond differently.

comment by [deleted] · 2014-02-11T19:21:48.493Z · LW(p) · GW(p)

Are there any reasons for becoming utilitarian, other than to satisfy one's empathy?

Replies from: VAuroch, Squark, Scott Garrabrant, Viliam_Bur, ThrustVectoring
comment by VAuroch · 2014-02-11T22:53:48.012Z · LW(p) · GW(p)

I am interested in this, or possibly a different closely-related thing.

I accept the logical arguments underlying utilitarianism ("This is the morally right thing to do.") but not the actionable consequences. ("Therefore, I should do this thing.") I 'protect' only my social circle, and have never seen any reason why I should extend that.

Replies from: blacktrance
comment by blacktrance · 2014-02-11T22:58:33.950Z · LW(p) · GW(p)

What does "the morally right thing to do" mean if not "the thing you should do"?

Replies from: VAuroch
comment by VAuroch · 2014-02-11T23:02:58.308Z · LW(p) · GW(p)

To rephrase: I accept that utilitarianism is the correct way to extrapolate our moral intuitions into a coherent generalizable framework. I feel no 'should' about it -- no need to apply that framework to myself -- and feel no cognitive dissonance when I recognize that an action I wish to perform is immoral, if it hurts only people I don't care about.

Replies from: mwengler
comment by mwengler · 2014-02-12T00:59:54.798Z · LW(p) · GW(p)

Ultimately I think that is the way all utilitarianism works. You define an in-group of people who are important, effectively equivalently important to each other and possibly equivalently important to yourself.

For most modern utilitarians, the in-group is all humans. Some modern utilitarians put mammals with relatively complex nervous systems in the group, and for the most part become vegetarians. Others put everything with a nervous system in there and for the most part become vegans. Very darn few put all life forms in there, as they would starve. Implicit in this is that all life forms would place negative utility on being killed to be eaten, which may be reasonable or may be a projection of human values onto non-human entities.

But logically it makes as much sense to shrink the group you are utilitarian about as to expand it. Only Americans seems like a popular one in the US when discussing immigration policy. Only my friends and family has a following. Only LA Raiders fans or Manchester United fans seems to also gather its proponents.

Around here, I think you find people trying to put all thinking things, even mechanical ones, in the in-group, perhaps only all conscious thinking things. Maybe the way to create a friendly AI would be to make sure the AI never values its own life more than it values its own death; then we would always be able to turn it off without it fighting back.

Also, I suspect in reality you have a sliding scale of acceptance, that you would not be morally neutral about killing a stranger on the road and taking their money if you thought you could get away with it. But you certainly won't accord the stranger the full benefit of your concern, just a partial benefit.

Replies from: VAuroch
comment by VAuroch · 2014-02-12T01:30:00.511Z · LW(p) · GW(p)

Also, I suspect in reality you have a sliding scale of acceptance, that you would not be morally neutral about killing a stranger on the road and taking their money if you thought you could get away with it. But you certainly won't accord the stranger the full benefit of your concern, just a partial benefit.

Oh, there are definitely gradations. I probably wouldn't do this, even if I could get away with it. I don't care enough about strangers to go out of my way to save them, but neither do I want to kill them. On the other hand, if it was a person I had an active dislike for, I probably would. All of which is basically irrelevant, since it presupposes the incredibly unlikely "if I thought I could get away with it".

Replies from: deskglass
comment by deskglass · 2014-02-12T19:18:51.599Z · LW(p) · GW(p)

I used to think I thought that way, but then I had some opportunities to casually steal from people I didn't know (and easily get away with it), but I didn't. With that said, I pirate things all the time despite believing that doing so frequently harms the content owners a little.

Replies from: VAuroch
comment by VAuroch · 2014-02-12T22:16:14.564Z · LW(p) · GW(p)

I have taken that precise action against someone who mildly annoyed me. I remember it (and the perceived slight that motivated it), but feel no guilt over it.

comment by Squark · 2014-02-16T20:26:51.649Z · LW(p) · GW(p)

By utilitiarian you mean:

  1. Caring about all people equally

  2. Hedonism, i.e. caring about pleasure/pain

  3. Both of the above (=Bentham's classical utilitarianism)?

In any case, what answer do you expect? What would constitute a valid reason? What are the assumptions from which you want to derive this?

Replies from: None
comment by [deleted] · 2014-02-17T17:19:59.965Z · LW(p) · GW(p)

Both of the above (=Bentham's classical utilitarianism)

I mean this.

In any case, what answer do you expect?

I do not expect any specific answer.

What would constitute a valid reason?

For me personally, probably nothing, since, apparently, I neither really care about people (I guess I overintellectualized my empathy), nor about pleasure and suffering. The question, however, was asked mostly to better understand other people.

What are the assumptions from which you want to derive this?

I don't know any.

comment by Scott Garrabrant · 2014-02-11T19:29:02.104Z · LW(p) · GW(p)

You can band together lots of people to work together towards the same utilitarianism.

Replies from: None
comment by [deleted] · 2014-02-11T19:41:11.810Z · LW(p) · GW(p)

i.e. change happiness-suffering to something else?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T19:47:06.813Z · LW(p) · GW(p)

I don't know how to parse that question.

I am claiming that people with no empathy at all can agree to work towards utilitarianism, for the same reason they can agree to cooperate in the repeated prisoner's dilemma.

Replies from: Lumifer, None
comment by Lumifer · 2014-02-11T20:04:53.639Z · LW(p) · GW(p)

I am claiming that people with no empathy at all can agree to work towards utilitarianism, for the same reason they can agree to cooperate in the repeated prisoner's dilemma.

I don't understand why is this an argument in favor of utilitarianism.

A bunch of people can agree to work towards pretty much anything, for example getting rid of the unclean/heretics/untermenschen/etc.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T20:09:49.195Z · LW(p) · GW(p)

I think you are taking this sentence out of context. I am not trying to present an argument in favor of utilitarianism. I was trying to explain why empathy is not necessary for utilitarianism.

I interpreted the question as "Why (other than my empathy) should I try to maximize other people's utility?"

Replies from: Lumifer
comment by Lumifer · 2014-02-11T20:24:46.754Z · LW(p) · GW(p)

I interpreted the question as "Why (other than my empathy) should I try to maximize other people's utility?"

Right, and here is your answer:

You can band together lots of people to work together towards the same utilitarianism.

I don't understand why this is a reason "to maximize other people's utility".

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T20:28:34.300Z · LW(p) · GW(p)

You can entangle your own utility with others' utility, so that what maximizes your utility also maximizes their utility and vice versa. Your terminal value does not change to maximizing other people's utility, but it becomes a side effect.

Replies from: Lumifer
comment by Lumifer · 2014-02-11T20:31:36.308Z · LW(p) · GW(p)

So you are basically saying that sometimes it is in your own self-interest ("own utility") to cooperate with other people. Sure, that's a pretty obvious observation. I still don't see how it leads to utilitarianism.

If your terminal value is still self-interest, but it so happens that there is a side effect of increasing other people's utility -- that doesn't look like utilitarianism to me.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T20:54:12.280Z · LW(p) · GW(p)

I was only trying to make the obvious observation.

Just trying to satisfy your empathy does not really look like pure utilitarianism either.

comment by [deleted] · 2014-02-11T20:16:02.536Z · LW(p) · GW(p)

There's no need to parse it anymore, I didn't get your comment initially.

for the same reason they can agree to cooperate in the repeated prisoner's dilemma.

I agree theoretically, but I doubt that utilitarianism can bring more value to an egoistic agent than being egoistic without regard to other humans' happiness.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-11T20:22:16.136Z · LW(p) · GW(p)

I agree in the short term, but many of my long term goals (e.g. not dying) require lots of cooperation.

comment by Viliam_Bur · 2014-02-11T20:16:18.263Z · LW(p) · GW(p)

I guess the reason is maximizing one's utility function, in general. Empathy is just one component of the utility function (for those agents who feel it).

If multiple agents share the same utility function, and they know it, it should make their cooperation easier, because they only have to agree on facts and models of the world; they don't have to "fight" against each other.

Replies from: None
comment by [deleted] · 2014-02-12T21:17:01.860Z · LW(p) · GW(p)

Apparently, we mean different things by "utilitarianism". I meant a moral system whose terminal goal is to maximize pleasure and minimize suffering in the whole world, while you're talking about an agent's utility function, which may have no regard for pleasure and suffering.

I agree, though, that it makes sense to try to maximize one's utility function, but to me it's just egoism.

comment by ThrustVectoring · 2014-02-11T23:17:07.227Z · LW(p) · GW(p)

I suspect that most people already are utilitarians - albeit with implicit calculation of their utility function. In other words, they already figure out what they think is best and do that (if they thought something else was better, it's what they'd do instead).

Replies from: blacktrance
comment by blacktrance · 2014-02-11T23:39:09.648Z · LW(p) · GW(p)

Utilitarian =/= utility maximizer.

comment by ChrisHallquist · 2014-02-17T17:36:46.345Z · LW(p) · GW(p)

Would just like to make sure everyone here is aware of LessWrong.txt

Replies from: Jayson_Virissimo, Nornagest, Viliam_Bur
comment by Jayson_Virissimo · 2014-02-17T19:57:02.701Z · LW(p) · GW(p)

Why?

comment by Nornagest · 2014-02-18T00:29:11.721Z · LW(p) · GW(p)

Criticism's well and good, but 140 characters or less of out-of-context quotation doesn't lend itself to intelligent criticism. From the looks of that feed, about half of it is inferential distance problems and the other half is sacred cows, and neither one's very interesting.

If we can get anything from it, it's a reminder that killing sacred cows has social consequences. But I'm frankly tired of beating that particular drum.

comment by Viliam_Bur · 2014-02-17T19:13:50.136Z · LW(p) · GW(p)

Things like this merely mean that you exist and someone else has noticed it.

comment by skeptical_lurker · 2014-02-12T15:19:08.572Z · LW(p) · GW(p)

EDIT: This particular site does margin trading differently to how I thought margin trading normally works. So... disregard everything I just said?

Bitcoin economy and a possible violation of the efficient market hypothesis. With the growing maturity of the Bitcoin ecosystem, there has appeared a website which allows leveraged trading, meaning that people who think they know which way the price is going can borrow money to increase their profits. At the time of writing, the bid-ask spread for the rates offered is 0.27% - 0.17% per day, which is 166% - 86% per annum. Depositors are not actually trading themselves, so the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.

Gwern estimated that a Bitcoin exchange has a 1% chance of failure per month based upon past performance, but that was written some time ago, and the increased legal recognition of Bitcoin plus people learning from mistakes should decrease this probability. On the other hand, the biggest exchange, MtGox, froze withdrawals a few days ago, but note that they claim that this is a temporary technical fault. As additional information, Bitfinex's website states "The company is incorporated in Hong Kong as a Limited Liability Corporation.", which would seem to decrease the likelihood of the company stealing the money. Even assuming a pessimistic 1% chance of failure per month, I reach a conservative estimate of 65% APR expected returns (assuming that the interest is constant at the lower 0.17% figure).

So why aren't people flocking to the website, starting a bidding war to drive the interest rate down to a tenth of its current value? Unless there is something wrong with my previous calculations, the best explanation I can think of is that it simply has not generated enough publicity. Perhaps also everyone in the Bitcoin community is assuming the price is going to increase by 10000%, or they are looking for the next big altcoin, or they are daytrading, but either way a boring but safe option doesn't seem so interesting. In conclusion, this seems to be an example where the efficient market hypothesis does not hold, due to insufficient propagation of information.
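
A minimal sketch of the arithmetic behind that estimate, under my own simplifying assumptions (a constant 0.17% daily rate, 30-day months, and a failure wiping out the whole deposit):

```python
# Sketch of the expected-return estimate above.
# Assumptions (mine): interest is a constant 0.17%/day, the exchange fails
# with probability 1%/month, and a failure means total loss of the deposit.

daily_rate = 0.0017
p_fail_month = 0.01
days_per_month = 30

monthly_growth = (1 + daily_rate) ** days_per_month      # roughly +5.2%/month
expected_monthly = (1 - p_fail_month) * monthly_growth   # failure pays out 0
expected_apr = expected_monthly ** 12 - 1

print(f"Gross APR ignoring failure risk: {(1 + daily_rate) ** 365 - 1:.0%}")  # ~86%
print(f"Expected APR with failure risk:  {expected_apr:.0%}")                 # ~63%
```

Under these assumptions the expected return comes out in the low-to-mid sixties of percent, close to the 65% figure quoted above.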

Disclaimers: I don't have shares in Bitfinex, and I hope this doesn't look like spam. This is a theoretical discussion of the EMH, not financial advice, and if you lose your money I am not responsible. I'm not sure whether this deserves its own post outside of discussion – please let me know.

Replies from: Lumifer, niceguyanon
comment by Lumifer · 2014-02-12T16:10:37.253Z · LW(p) · GW(p)

the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.

The exchange can just fail in a large variety of ways and close (go bankrupt). If you're not "insured", you are exposed to the trading risk, and insurance costs what, about 30%? And, of course, it doesn't help you with the exchange counterparty risk.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-02-12T16:58:41.106Z · LW(p) · GW(p)

30% per annum? Even if this were true (and this sounds quite high, as I mentioned with Gwern's 1% per month estimate), then providing liquidity with them would still be +EV (86% increase vs 30% risk).

Replies from: Lumifer
comment by Lumifer · 2014-02-12T17:10:24.091Z · LW(p) · GW(p)

Um, did you make your post without actually reading the Bitfinex site about how it works..?

Replies from: skeptical_lurker, skeptical_lurker
comment by skeptical_lurker · 2014-02-12T17:26:27.193Z · LW(p) · GW(p)

Upvoted for pointing out my stupid mistake (I assumed it works in a certain way, and skipped reading the vital bit)

comment by skeptical_lurker · 2014-02-12T17:20:50.293Z · LW(p) · GW(p)

Ahh, oops. I think I missed the last line... I thought if someone exceeded their margin, they were forced to close their position so that no money was lost.

comment by niceguyanon · 2014-02-12T20:38:13.986Z · LW(p) · GW(p)

Depositors are not actually trading themselves, so the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.

There is risk baked in from the fact that depositors are on the hook if trades cannot be unwound quickly enough, and because this is Bitcoin, where volatility is crazy, there is even more of this risk.

For example, assume you lend money for some trader to go long, and now say that suddenly prices drop so quickly that it puts the trader beyond a margin call; in fact, it puts him at liquidation. Uh oh... the trader's margin wallet is now depleted, so who makes up the balance? The lenders. They actually do mention this on their website. But they don't tell you what the margin call policy is. This is a really important part of the risk. If they allow a trader to only put up $50 of a $100 position and call you in when your portion hits 25%, that would be normal for something like index equities but pretty insane for something like Bitcoin.
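
A toy numeric illustration of how lenders end up on the hook (the position size, margin, and price drop are made-up numbers; the real exposure depends on the exchange's margin-call policy, which is the unknown here):

```python
# Toy example: lender loss when a leveraged long is liquidated too late.
# All numbers are made up for illustration.

position = 100.0                     # total long position, in USD
trader_margin = 50.0                 # trader's own collateral
lent = position - trader_margin      # 50 supplied by depositors

price_drop = 0.60                    # sudden 60% drop before the position closes
position_value = position * (1 - price_drop)    # now worth 40

loss = position - position_value                 # 60 lost in total
lender_loss = max(0.0, loss - trader_margin)     # trader's margin absorbs the first 50
print(f"Lenders lose ${lender_loss:.2f} of the ${lent:.2f} they lent")  # $10.00 of $50.00
```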

comment by EGarrett · 2014-02-12T04:52:46.901Z · LW(p) · GW(p)

How does solipsism change one's pattern of behavior, compared to other things being alive? I noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.

For example, if you steal from your neighbor, you can observe that you run the risk of him catching you, and thus you having to deal with consequences that will be painful or unpleasant. Similarly, assuming you're a healthy person, you have a conscience that makes you feel bad about certain things, even when you get away with them.

Do you think your conscience would cease to bother you if you could know for a fact that there were no other living creatures feeling pain around you? In what other cases does a true solipsistic world make your behavior distinct from a non-solipsistic one?

Replies from: mwengler, ahbwramc, MrMind, DanielLC, hyporational
comment by mwengler · 2014-02-12T16:01:07.728Z · LW(p) · GW(p)

I'm certainly comfortable with violent fantasy when the roles are acted out. This suggests to me that if I were convinced that certain person-seeming things were not alive, not conscious, not what they seemed, this might tip me into some violent behaviors. I think at minimum I would experiment with it, try a slap here, a punch there. And where I went from there would depend on how it felt, I suppose.

Also I would almost certainly steal more stuff if I was convinced that everything was landscape.

Replies from: hyporational
comment by hyporational · 2014-02-13T09:59:16.091Z · LW(p) · GW(p)

In fantasies you're in total control. The same applies to video games, for example. The risk of severe retaliation isn't real.

comment by ahbwramc · 2014-02-12T15:31:52.611Z · LW(p) · GW(p)

Well, the obvious difference would be that non-solipsists might care about what happens after they die, and act accordingly.

comment by MrMind · 2014-02-12T10:32:35.862Z · LW(p) · GW(p)

noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.

When I was younger and studying analytical philosophy, I noticed the same thing. Unless solipsism morphs into apathy, there are still 'representations' you can't control and that you can care about. Unless it alters your values, there should be no difference in behaviour either.

comment by DanielLC · 2014-02-16T07:11:50.007Z · LW(p) · GW(p)

If I didn't care about other people, I wouldn't worry about donating to charities that actually help people. I'd donate a little to charities that make me look good, and if I'm feeling guilty and distracting myself doesn't seem to be cost-effective, I'd donate to charities that make me feel good. I would still keep quite a bit of my money for myself, or at least work less.

As it is, I've figured that other people matter, and some of them are a lot cheaper to make happy than me, so I decided that I'm going to donate pretty much everything I can to the best charity I can find.

comment by hyporational · 2014-02-12T06:06:27.655Z · LW(p) · GW(p)

If there were no other beings that could consciously suffer, I would probably adopt a morality that would be utterly horrible in the real world. Video games might hint at how solipsism would make you behave.

comment by fluchess · 2014-02-12T03:03:45.025Z · LW(p) · GW(p)

I participated in an economics experiment a few days ago, and one of the tasks was as follows. Choose one of the following gambles, where each outcome has 50% probability:

  • Option 1: $4 definitely
  • Option 2: $6 or $3
  • Option 3: $8 or $2
  • Option 4: $10 or $1
  • Option 5: $12 or $0

I chose option 5, as it has the highest expected value. Asymptotically this is the best option, but for a single trial, is it still the best option?

Replies from: Scott Garrabrant, jkrause, EGarrett, Lumifer, Dagon, DanielLC, jobe_smith
comment by Scott Garrabrant · 2014-02-12T03:14:32.232Z · LW(p) · GW(p)

Technically, it depends on your utility function. However, even without knowing your utility function, I can say that for such a low amount of money, your utility function is very close to linear, and option 5 is the best.
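
To make the "depends on your utility function" point concrete, here is a small sketch comparing the gambles under linear utility and under an arbitrary concave utility (square root, chosen purely for illustration, not as anyone's actual utility function):

```python
# Expected value vs. expected utility for the five gambles.
# sqrt() is an arbitrary concave utility, standing in for risk aversion.
from math import sqrt

gambles = {
    "Option 1": (4, 4),
    "Option 2": (6, 3),
    "Option 3": (8, 2),
    "Option 4": (10, 1),
    "Option 5": (12, 0),
}

for name, (high, low) in gambles.items():
    ev = 0.5 * high + 0.5 * low
    eu = 0.5 * sqrt(high) + 0.5 * sqrt(low)
    print(f"{name}: EV = ${ev:.2f}, expected sqrt-utility = {eu:.2f}")
```

Option 5 wins on expected value; under the concave utility the ranking shifts towards the safer options, which is the sense in which the answer depends on how non-linear your utility for money is.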

Replies from: PECOS-9
comment by PECOS-9 · 2014-02-12T03:36:41.839Z · LW(p) · GW(p)

More info: marginal utility

comment by jkrause · 2014-02-12T07:16:37.509Z · LW(p) · GW(p)

Here's one interesting way of viewing it that I once read:

Suppose that the option you chose, rather than being a single trial, were actually 1,000 trials. Then, risk averse or not, Option 5 is clearly the best approach. The only difficulty, then, is that we're considering a single trial in isolation. However, when you consider all such risks you might encounter in a long period of time (e.g. your life), then the situation becomes much closer to the 1,000 trial case, and so you should always take the highest expected value option (unless the amounts involved are absolutely huge, as others have pointed out).

comment by EGarrett · 2014-02-12T04:46:07.063Z · LW(p) · GW(p)

As a poker player, the idea we always batted back and forth was that Expected Value doesn't change over shorter sample sizes, including a single trial. However you may have a risk of ruin or some external factor (like if you're poor and given the option of being handed $1,000,000 or flipping a coin to win $2,000,001).

Barring that, if you're only interested in maximizing your result, you should follow EV. Even in a single trial.

comment by Lumifer · 2014-02-12T03:09:04.427Z · LW(p) · GW(p)

That depends on your utility function, specifically your risk tolerance. If you're risk-neutral, option 5 has the highest value, otherwise it depends.

comment by Dagon · 2014-02-12T09:01:53.615Z · LW(p) · GW(p)

Clearly option 5 has the highest mean outcome. If you value money linearly (that is, $12 is exactly 3 times as good as $4) and there's no special utility threshold along the way (or disutility at $0), it's the best option.

For larger values, your value for money may be nonlinear (meaning: the difference between $0 and $50k may be much much larger than the difference between $500k and $550k to your happiness), and then you'll need to convert the payouts to subjective value before doing the calculation. Likewise if you're in a special circumstance where there's a threshold value that has special value to you - if you need $3 for bus fare home, then option 1 or 2 become much more attractive.

comment by DanielLC · 2014-02-16T07:15:20.265Z · LW(p) · GW(p)

That depends on the amount of background money and randomness you have.

Although I can't really see any case where I wouldn't pick option five. Even if that's all the money I will ever have, my lifespan, and by extension my happiness, will be approximately linear with time.

If you specify that I get that much money each day for the rest of my life, and that's all I get, then I'd go for something lower risk.

comment by jobe_smith · 2014-02-14T14:12:57.731Z · LW(p) · GW(p)

In general, picking the highest EV option makes sense, but in the context of what sounds like a stupid/lazy economics experiment, you have a moral duty to do the wrong thing. Perhaps you could have flipped a coin twice to choose among the first 4 options? That way you are providing crappy/useless data and they have to pay you for it!

Replies from: fluchess
comment by fluchess · 2014-02-15T04:03:57.545Z · LW(p) · GW(p)

Why do I have a moral duty to do the wrong thing? Shouldn't I act in my own self-interest to maximise the amount of money I make?

comment by fubarobfusco · 2014-02-18T07:49:08.074Z · LW(p) · GW(p)

An Iterated Prisoner's Dilemma variant I've been thinking about —

There is a pool of players, who may be running various strategies. The number of rounds played is randomly determined. On each round, players are matched randomly, and play a one-shot PD. On the second and subsequent rounds, each player is informed of its opponent's previous moves; but players have no information about what move was played against them last round, nor whether they have played the same opponent before.

In other words, as a player you know your current opponent's move history — but you don't know whom they were playing those moves against; and you don't know what your score is looking like, either.

If you're playing with a pool of TFT bots, it's going to seem the same as if you were playing against a single TFT bot. TFT judges you on your previous move, regardless of whom you were playing.

But defecting against CooperateBot or DefectBot doesn't look so good if your next opponent predicts you based on your defection, and doesn't know you were up against a bot.
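
Here is a minimal simulation sketch of this variant, with my own choice of payoffs and strategy mix (a TFT-like strategy that judges opponents by their last public move, plus CooperateBot and DefectBot):

```python
# Minimal sketch of the IPD variant: random pairing each round, and each
# strategy sees only its current opponent's public move history.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tft_like(opp_history):
    # Cooperate unless the opponent's most recent public move was a defection,
    # no matter whom it was played against.
    return "C" if not opp_history or opp_history[-1] == "C" else "D"

def cooperate_bot(opp_history):
    return "C"

def defect_bot(opp_history):
    return "D"

players = [tft_like] * 6 + [cooperate_bot] * 2 + [defect_bot] * 2
histories = [[] for _ in players]   # public move history of each player
scores = [0] * len(players)

for _ in range(200):                # number of rounds is arbitrary here
    order = list(range(len(players)))
    random.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):        # random pairing
        move_a = players[a](histories[b])
        move_b = players[b](histories[a])
        pa, pb = PAYOFF[(move_a, move_b)]
        scores[a] += pa
        scores[b] += pb
        histories[a].append(move_a)
        histories[b].append(move_b)

for player, score in zip(players, scores):
    print(player.__name__, score)
```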

comment by A1987dM (army1987) · 2014-02-17T13:14:50.019Z · LW(p) · GW(p)

Self-driving cars had better use (some approximation of) some form of acausal decision theory, even more so than a singleton AI, because the former will interact in PD-like and Chicken-like ways with other instantiations of the same algorithm.

Replies from: private_messaging, Douglas_Knight, Error
comment by private_messaging · 2014-02-17T17:24:20.687Z · LW(p) · GW(p)

Self-driving cars have very complex goal metrics, along the lines of getting to the destination while disrupting the traffic the least (still grossly oversimplifying).

The manufacturer is interested in every one of his cars getting to the destination in the least time, so the cars are programmed to optimize for the sake of all cars. They're also interested in getting human drivers to buy their cars, which also makes not driving like a jerk a goal. PD is problematic when agents are selfish, not when agents entirely share the goal. Think of 2 people in a PD played for money, who both want to donate all proceeds to the same charity. This changes the payoffs to the point where it's not a PD any more.
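
A tiny sketch of how sharing the goal changes the payoffs (the dollar amounts are made up): if both players donate everything to the same charity, each player's effective payoff is the total donated, and cooperation is no longer dominated:

```python
# Made-up PD payoffs in dollars. With a shared charity, each player's
# effective payoff becomes the sum of both payoffs.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

shared = {moves: (a + b, a + b) for moves, (a, b) in pd.items()}

print("Selfish payoffs:       ", pd)      # defection strictly dominates
print("Shared-charity payoffs:", shared)  # (C, C) is now the best outcome for both
```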

Replies from: army1987
comment by A1987dM (army1987) · 2014-02-23T11:03:16.520Z · LW(p) · GW(p)

They're also interested in getting human drivers to buy their cars, which also makes not driving like a jerk a goal.

Depends on who those humans are. For a large fraction of low-IQ young males...

Replies from: private_messaging
comment by private_messaging · 2014-02-23T11:55:55.296Z · LW(p) · GW(p)

I dunno, having a self-driving jerk car takes away whatever machismo one could have about driving... there's something about a car where you can go macho and drive manual to be a jerk.

I don't think it'd help sales at all if self-driving cars were causing accidents while themselves evading the collision entirely.

comment by Douglas_Knight · 2014-02-17T17:09:40.592Z · LW(p) · GW(p)

Already deployed is a better example: computer network protocols.

comment by Error · 2014-02-17T14:21:34.554Z · LW(p) · GW(p)

Or different algorithms. How long after wide release will it be before someone modifies their car's code to drive aggressively, on the assumption that cars running the standard algorithm will move out of the way to avoid an accident?

(I call this "driving like a New Yorker." New Yorkers will know what I mean.)

Replies from: private_messaging
comment by private_messaging · 2014-02-17T17:18:33.077Z · LW(p) · GW(p)

That's like driving without a license. Obviously the driver (software) has to be licensed to drive the car, just as persons are. Software that operates deadly machinery has to be developed in specific ways, certified, and so on and so forth, for how many decades already? (Quite a few)

comment by TraderJoe · 2014-02-13T03:25:47.704Z · LW(p) · GW(p)

I have been reviewing FUE hair transplants, and I would like LWers' opinion. I'm actually surprised this isn't covered, as it seems relevant to many users.

As far as I can tell, the downsides are:

  • Mild scarring on the back of the head
  • Doesn’t prevent continued hair loss, so if you get e.g. a bald spot filled in, then you will in a few years have a spot of hair in an oasis
  • Cost
  • Mild pain/hassle in the initial weeks.
  • Possibility of finding a dodgy surgeon

The scarring is basically covered if you have a couple of days' hair growth there, and I am fine with that as a long-term solution. The continued hair loss is potentially dealt with by a repeated transplant and more certainly dealt with by getting the initial transplant "all over", i.e. thickening hair, rather than just moving the hairline forward. But it is the area I am most uncertain about. I should add that I am 29 with male pattern baldness on both sides of my family, Norwood level 4, and have seen hair loss stabilised (I have been taking propecia for the last year).

Ignoring the cost, my questions are:

  • Is anyone aware of any other problems besides these?
  • Do you think this solution works?
  • Any ideas on how to pick the right surgeon (using someone in Singapore most probably)?
Replies from: TraderJoe
comment by TraderJoe · 2014-02-13T09:46:20.402Z · LW(p) · GW(p)

This is quite far down the page, even though I posted it a few hours ago. Is that an intended effect of the upvoting/downvoting system? (it may well be - I don't understand how the algorithm assigns comment rankings)

Replies from: Oscar_Cunningham, Douglas_Knight
comment by Oscar_Cunningham · 2014-02-13T21:51:08.467Z · LW(p) · GW(p)

Just below and to the right of the post there's a choice of which algorithm to use for sorting comments. I don't remember what the default is, but I do know that at least some of them sort by votes (possibly with other factors). I normally use the sorting "Old" (i.e. oldest first), and then your comment is near the bottom of the page since so many were posted before it.

comment by Douglas_Knight · 2014-02-13T17:59:07.281Z · LW(p) · GW(p)

The algorithm is a complicated mix of recency and score, but on an open thread that only lasts a week, recency is fairly uniform, so it's pretty much just score.

comment by EGarrett · 2014-02-13T03:08:25.574Z · LW(p) · GW(p)

I'm looking into Bayesian Reasoning and trying to get a basic handle on it and how it differs from traditional thinking. When I read about how it (apparently) takes into account various explanations for observed things once they are observed, I was immediately reminded of Richard Feynman's opinion of Flying Saucers. Is Feynman giving an example of proper Bayesian thinking here?

http://www.youtube.com/watch?v=wLaRXYai19A

Replies from: mcoram
comment by mcoram · 2014-02-14T04:17:10.332Z · LW(p) · GW(p)

It's certainly in the right spirit. He's reasoning backwards in the same way Bayesian reasoning does: here's what I see; here's what I know about possible mechanisms for how that could be observed and their prior probabilities; so here's what I think is most likely to be really going on.
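
A minimal worked Bayes update in that spirit, with entirely made-up numbers for the saucer case:

```python
# Made-up numbers: a tiny prior on actual alien visitation, and how likely
# a saucer report is under each hypothesis. Only the structure matters.
p_aliens = 1e-6                 # prior: aliens really are visiting
p_report_given_aliens = 0.5     # chance of a saucer report if they are
p_report_given_not = 0.01       # chance of a report anyway (hoaxes, mistakes, weather)

p_report = (p_report_given_aliens * p_aliens
            + p_report_given_not * (1 - p_aliens))
posterior = p_report_given_aliens * p_aliens / p_report
print(f"P(aliens | saucer report) = {posterior:.2e}")   # still tiny
```

The report does shift the probability upwards, but because the mundane explanations were so much more probable to begin with, the posterior stays small -- which is roughly Feynman's argument.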

comment by ricketybridge · 2014-02-13T03:03:45.331Z · LW(p) · GW(p)

Since people were pretty encouraging about the quest to do one's part to help humanity, I have a follow-up question. (Hope it's okay to post twice on the same open thread...)

Perhaps this is a false dichotomy. If so, just let me know. I'm basically wondering if it's more worthwhile to work on transitioning to alternative/renewable energy sources (i.e. we need to develop solar power or whatever else before all the oil and coal run out, and to avoid any potential disastrous climate change effects) or to work on changing human nature itself to better address the aforementioned energy problem in terms of better judgment and decision-making. Basically, it seems like humanity may destroy itself (if not via climate change, then something else) if it doesn't first address its deficiencies.

However, since energy/climate issues seem pretty pressing and changing human judgment is almost purely speculative (I know CFAR is working on that sort of thing, but I'm talking about more genetic or neurological changes), civilization may become too unstable before it can take advantage of any gains from cognitive enhancement and such. On the other hand, climate change/energy issues may not end up being that big of a deal, so it's better to just focus on improving humanity to address other horrible issues as well, like inequality, psychopathic behavior, etc.

Of course, society as a whole should (and does) work on both of these things. But one individual can really only pick one to make a sizable impact -- or at the very least, one at a time. Which do you guys think may be more effective to work on?

[NOTE: I'm perfectly willing to admit that I may be completely wrong about climate change and energy issues, and that collective human judgment is in fact as good as it needs to be, and so I'm worrying about nothing and can rest easy donating to malaria charities or whatever.]

Replies from: ChristianKl, DanielLC
comment by ChristianKl · 2014-02-14T00:01:53.426Z · LW(p) · GW(p)

Of course, society as a whole should (and does) work on both of these things. But one individual can really only pick one to make a sizable impact -- or at the very least, one at a time. Which do you guys think may be more effective to work on?

The core question is: "What kind of impact do you expect to make if you work on either issue?"

Do you think there is work to be done in the space of solar power development that people other than yourself aren't effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren't already doing?

we need to develop solar power or whatever else before all the oil and coal run out,

The problem with coal isn't that it's going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.

I know CFAR is working on that sort of thing, but I'm talking about more genetic or neurological changes)

Why? To me it seems much more effective to focus on more cognitive issues when you want to improve human judgment. Developing training to help people calibrate themselves against uncertainty seems to have a much higher return than trying to do fMRI studies or brain implants.

Replies from: ricketybridge
comment by ricketybridge · 2014-02-14T01:26:51.327Z · LW(p) · GW(p)

The core question is: "What kind of impact do you expect to make if you work on either issue?"

Do you think there is work to be done in the space of solar power development that people other than yourself aren't effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren't already doing?

I'm familiar with questions like these (specifically, from 80000 hours), and I think it's fair to say that I probably wouldn't make a substantive contribution to any field, those included. Given that likelihood, I'm really just trying to determine what I feel is most important so I can feel like I'm working on something important, even if I only end up taking a job over someone else who could have done it equally well.

That said, I would hope to locate a "gap" where something was not being done that should be, and then try to fill that gap, such as volunteering my time for something. But there's no basis for me to surmise at this point which issue I would be able to contribute more to (for instance, I'm not a solar engineer).

To me it seems much more effective to focus on more cognitive issues when you want to improve human judgment. Developing training to help people calibrate themselves against uncertainty seems to have a much higher return than trying to do fMRI studies or brain implants.

At the moment, yes, but it seems like it has limited potential. I think of it a bit like bootstrapping: a judgment-impaired person (or an entire society) will likely make errors in determining how to improve their judgment, and the improvement seems slight and temporary compared to more fundamental, permanent changes in neurochemistry. I also think of it a bit like people's attempts to lose weight and stay fit. Yes, there are a lot of cognitive and behavioral changes people can make to facilitate that, but for many (most?) people, it remains a constant struggle -- one that many people are losing. But if we could hack things like that, "temptation" or "slipping" wouldn't be an issue.

The problem with coal isn't that it's going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.

From what I've gathered from my reading, the jury is kind of out on how disastrous climate change is going to be. Estimates seem to range from catastrophic to even slightly beneficial. You seem to think it will definitely be catastrophic. What have you come across that is certain about this?

comment by DanielLC · 2014-02-16T07:34:22.542Z · LW(p) · GW(p)

The economy is quite capable of dealing with finite resources. If you have land with oil on it, you will only drill if the price of oil is increasing more slowly than interest. If this is the case, then drilling for oil and using the value generated by it for some kind of investment is more helpful than just saving the oil.

Climate change is still an issue of course. The economy will only work that out if we tax energy in proportion to its externalities.

We should still keep in mind that climate change is a problem that will happen in the future, and we need to look at the much lower present value of the cost. If we have to spend 10% of our economy on making it twice as good a hundred years from now, it's most likely not worth it.
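
A quick sketch of the present-value point, with an arbitrary assumed discount rate:

```python
# Present value of a benefit realized 100 years from now, discounted at an
# assumed 5% per year (the rate is arbitrary, chosen only for illustration).
benefit_in_100_years = 1.0      # normalize the future benefit to one unit
discount_rate = 0.05
present_value = benefit_in_100_years / (1 + discount_rate) ** 100
print(f"Present value: {present_value:.4f} units")   # about 0.0076
```

At that rate, a unit of benefit a century from now is worth well under 1% of a unit today, which is why a 10% sacrifice of today's economy is hard to justify on those terms.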

comment by JMiller · 2014-02-12T20:49:45.055Z · LW(p) · GW(p)

I am not sure if this deserves its own post. I figured I would post here and then add it to discussion if there is sufficient interest.

I recently started reading Learn You A Haskell For Great Good. This is the first time I have attempted to learn a functional language, and I am only a beginner in imperative languages (Java). I am looking for some exercises that could go along with the e-book. Ideally, the exercises would encourage learning new material in a similar order to how the book is presented. I am happy to substitute/complement with a different resource as well, if it contains problems that allow one to practice structurally. If you know of any such exercises, I would appreciate a link to them. I am aware that Project Euler is often advised; does it effectively teach programming skills, or just problem solving? (Then again, I am not entirely sure if there is a difference at this point in my education).

Thanks for the help!

Replies from: adbge, Douglas_Knight
comment by adbge · 2014-02-12T21:06:57.224Z · LW(p) · GW(p)

Replies from: JMiller
comment by JMiller · 2014-02-12T21:32:33.131Z · LW(p) · GW(p)

Awesome, thanks so much! If you were to recommend one of these resources to begin with, which would it be?

Replies from: adbge
comment by adbge · 2014-02-12T21:39:41.872Z · LW(p) · GW(p)

Awesome, thanks so much!

Happy to help!

If you were to recommend one of these resources to begin with, which would it be?

I like both Project Euler and 99 Haskell problems a lot. They're great for building success spirals.

comment by Douglas_Knight · 2014-02-12T21:08:47.568Z · LW(p) · GW(p)

Why are you committed to that book? SICP is a well-tested introductory textbook with extensive exercises. Added: I meant to say that it is functional.

Replies from: JMiller
comment by JMiller · 2014-02-12T21:31:15.426Z · LW(p) · GW(p)

I'm not. The reason I picked it up was because it happens to be the book recommended in MIRI's course suggestions, but I am not particularly attached to it. Looking again, it seems they do actually recommend SICP on lesswrong, and Learnyouahaskell on intelligence.org.

Thanks for the suggestion.

comment by Pfft · 2014-02-11T20:00:46.310Z · LW(p) · GW(p)

Modafinil is prescription-only in the US, so to get it you have to do illegal things. However, I note that (presumably due to some legislative oversight?) the related drug Adrafinil is unregulated; you can buy it right off Amazon. Does anyone know how Adrafinil and Modafinil compare in terms of effectiveness and safety?

Replies from: Douglas_Knight, Lumifer, RomeoStevens
comment by Douglas_Knight · 2014-02-11T21:53:43.081Z · LW(p) · GW(p)

No, you don't have to do illegal things. Another option is to convince your doctor to give you a prescription. I think people on LW greatly overestimate the difficulty of this.

Replies from: hg00
comment by hg00 · 2014-02-12T08:02:28.156Z · LW(p) · GW(p)

Some info on getting a prescription here: http://www.bulletproofexec.com/q-a-why-i-use-modafinil-provigil/

I think ADD/ADHD will likely be a harder sell; my impression is that people are already falsely claiming that in order to get Adderall etc.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-02-13T21:45:33.635Z · LW(p) · GW(p)

I don't even mean to suggest lying. I mean something simple like "I think this drug might help me concentrate."

A formal diagnosis of ADD or narcolepsy is carte blanche for amphetamine prescription. Because it is highly scheduled and, moreover, has a big black market, doctors guard this diagnosis carefully. Whereas, modafinil is lightly scheduled and doesn't have a black market (not driven by prescriptions), so they are less nervous about giving it out in ADD-ish situations.

But doctors very much do not like it when a new patient comes in asking for a specific drug.

comment by RomeoStevens · 2014-02-12T08:10:33.409Z · LW(p) · GW(p)

Adrafinil has additional downstream metabolites besides just modafinil, but I don't know exactly what they are. Some claim it is harder on the liver implying some of the metabolites are mildly toxic, but that's not really saying much. Lots of stuff we eat is mildly toxic. Adrafinil is generally well tolerated and if your goal is finding out the effects of modafinil on your system and you can't get modafinil itself I would say go for it. If you then decided to take moda long term I would say do more research.

IANAD. Research thoroughly and consult with a doctor if you have any medical conditions or are taking any medications.

comment by chaosmage · 2014-02-19T10:36:45.971Z · LW(p) · GW(p)

Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.

I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.

comment by dunno · 2014-02-12T14:19:30.209Z · LW(p) · GW(p)

Would you prefer that one person be horribly tortured for eternity without hope or rest, or that 3^^^3 people die?

Replies from: RowanE, jobe_smith
comment by RowanE · 2014-02-12T15:55:40.099Z · LW(p) · GW(p)

One person being horribly tortured for eternity is equivalent to that one person being copied infinite times and having each copy tortured for the rest of their life. Death is better than a lifetime of horrible torture, and 3^^^3, despite being bigger than a whole lot of numbers, is still smaller than infinity.

Replies from: dunno
comment by dunno · 2014-02-13T09:57:37.822Z · LW(p) · GW(p)

What if the 3^^^3 people were one immortal person?

Replies from: RowanE, DanielLC
comment by RowanE · 2014-02-15T12:37:44.589Z · LW(p) · GW(p)

Well then the answer is still obviously death, and that fact has become more immediately intuitive - probably even those who disagreed with my assessment of the original question would agree with my choice given the scenario "an immortal person is tortured forever or an otherwise-immortal person dies".

comment by DanielLC · 2014-02-16T07:26:38.381Z · LW(p) · GW(p)

Being horribly tortured is worse than death, so I'd pick death.

comment by jobe_smith · 2014-02-14T14:40:20.651Z · LW(p) · GW(p)

I would solicit bids from the two groups. I imagine that the 3^^^3 people would be able to pay more to save their lives than the 1 person would be able to pay to avoid infinite torture. Plus, once I make the decision, if I sentence the 1 person to infinite torture I only have to worry about their friends/family and I have 3^^^3 allies who will help defend me against retribution. Otherwise, the situation is reversed and I think its likely I'll be murdered or imprisoned if I kill that many people. Of course, if the scenario is different, like the 3^^^3 people are in a different galaxy (not that that many people could fit in a galaxy) and the 1 person is my wife, I'll definitely wipe out all those assholes to save my wife. I'd even let them all suffer infinite torture just to keep my wife from experiencing a dust speck in her eye. It is valentine's day after all!