Open Thread, January 1-15, 2013

post by OpenThreadGuy · 2013-01-01T06:09:02.403Z · LW · GW · Legacy · 336 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

Comments sorted by top scores.

comment by David Althaus (wallowinmaya) · 2013-01-01T22:50:39.992Z · LW(p) · GW(p)

I'm thinking about writing a more comprehensive guide than Skatche's Rationalist's Guide to Psychoactive Drugs.

Replies from: TimS, ChristianKl, Jabberslythe, army1987
comment by TimS · 2013-01-02T02:09:59.459Z · LW(p) · GW(p)

And I'm a bit worried that this kind of post falls under the new censorship laws.

My analysis:

Do your posts look like solicitation to possess illegal drugs with intent to distribute? (Hint: for anything short of "Please tell me where to buy drugs," the answer is probably no).

Could a malicious prosecutor convince a grand jury to indict Eliezer (or others) as co-conspirators based on what you have written? (Hint: probably not).

In short, you are probably fine. But I am not a "power" on LW.


Just to be clear, I doubt this is Eliezer's thought process. But I suspect it is a fairly accurate heuristic for what is and isn't acceptable.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-02T22:40:56.114Z · LW(p) · GW(p)

I agree with your analysis. However, the fact that some people are expressing concern that their comments might violate the new censorship policy suggests that others might abstain, or have already abstained, from posting valuable material to this forum, which in turn increases my credence that the censorship policy does more harm than good.

Replies from: David_Gerard, quiet
comment by David_Gerard · 2013-01-02T23:47:41.262Z · LW(p) · GW(p)

"Avoid compartmentalisation, but don't talk about your results from doing so too loudly."

In context, this 2010 post (capture) is interesting: the current version is about deaths of tobacco company employees, but it was changed, after comments, from the original, which was about slowing the computer industry to slow AI progress.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-03T03:42:21.738Z · LW(p) · GW(p)

Interesting. As far as I can see, though, the screencap shows the revised version about deaths of tobacco company employees, not the original version.

Replies from: David_Gerard
comment by David_Gerard · 2013-01-03T09:38:33.899Z · LW(p) · GW(p)

Yes, the capture is recent.

comment by quiet · 2013-01-03T16:49:59.553Z · LW(p) · GW(p)

When in doubt, frame all drug talk as harm reduction.

comment by ChristianKl · 2013-01-04T01:02:26.282Z · LW(p) · GW(p)

The "Lesswrong censorship laws" speak of illegal violence. Possession of drugs might be illegal but isn't violence.

comment by Jabberslythe · 2013-01-04T02:56:11.808Z · LW(p) · GW(p)

There are a lot of things that guide didn't cover. Go for it!

comment by A1987dM (army1987) · 2013-01-02T11:15:02.693Z · LW(p) · GW(p)

Have you read gwern's writings (under “Practical”) about melatonin, modafinil, nicotine, and other nootropics?

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2013-01-02T12:57:23.254Z · LW(p) · GW(p)

I have read about and tried many of them.

comment by shminux · 2013-01-11T20:06:05.885Z · LW(p) · GW(p)

Just wanted to point out that many contributors to the site are afflicted by what I call "theoritis", a propensity to advance a theory despite being green amateurs in the subject matter, and then have the temerity to argue about it with the (clearly-non-stupid) experts in the field. The field in question can be psychology, neuroscience, physics, math, computer science, you name it.

It is rare that people consider the reverse situation first: what would I think of an amateur who argues with me in the area of my competence? For example, if you are an auto mechanic, would you take seriously someone who tells you how to diagnose and fix car issues without ever having done any repairs first? If not, why would you argue about quantum mechanics with a physicist, with a decision theorist about utility functions, or with a mathematician about first-order logic, unless that's your area of expertise? Of course, looking back at what I post about, I am no exception.

OK, I cannot bring myself to add philosophy to the list of "don't argue with the experts, learn from them" topics, but maybe it's because I don't know anything about philosophy.

Replies from: OrphanWilde, Vladimir_Nesov, IlyaShpitser, private_messaging, whowhowho, Wei_Dai, BerryPick6
comment by OrphanWilde · 2013-01-11T21:34:21.800Z · LW(p) · GW(p)

I take non-programmers seriously about programming all of the time. That's pretty much in the job description.

Just because I'm not stupid doesn't mean I'm not wrong. Indeed, it takes some serious intelligence to be wrong in the worst kind of ways.

Replies from: whowhowho
comment by whowhowho · 2013-01-25T13:05:52.972Z · LW(p) · GW(p)

I take non-programmers seriously about programming all of the time

About implementation, or about what to implement?

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-25T14:10:19.787Z · LW(p) · GW(p)

In practice the two are, in my line of work, very difficult to separate. The what is almost always the how. But both, out of practical necessity. When the client insists on a particular implementation, that's the implementation you go with.

Replies from: whowhowho
comment by whowhowho · 2013-01-25T14:16:10.362Z · LW(p) · GW(p)

I would assume that's high-level -- "use Oracle, not MySQL"

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-25T16:40:35.158Z · LW(p) · GW(p)

That's part of it, but no, that's not what I'm referring to. Client necessities are client necessities.

"Encryption and file delivery need to be in separate process flows" would be closer. (This sounds high-level, but in the scripting language I do most of my work in, both of these are atomic operations.)

comment by Vladimir_Nesov · 2013-01-12T13:23:24.818Z · LW(p) · GW(p)

A relevant distinction that you are not making is between the questions that are well-understood in the expert's area and the questions that are merely associated with the expert's area (or are expert's own inventions), where we have no particular reason to expect that the expert's position on the topic is determined by its truth and not by some accident of epistemic misfortune. The expert will probably know the content of their position very well, but won't necessarily correctly understand the motivation for that position. (On the other hand, someone sufficiently unfamiliar with the area might be unable to say anything meaningful about the question.)

Replies from: bogus
comment by bogus · 2013-01-13T00:42:39.077Z · LW(p) · GW(p)

Good point. Also, even when questions are well-understood by domain experts it still can be very effective to argue about them, since this usually leads to the clearest arguments and explanations. This is especially true since the social norms on this site highly value truth-seeking, epistemic hygiene (including basic intellectual honesty) and scholarship: in many other venues (including some blogs), anti-expertise attitudes do lead to bad outcomes, but this does not seem to apply much on LW.

comment by IlyaShpitser · 2013-01-11T21:40:54.536Z · LW(p) · GW(p)

Good post. It's EY's fault, imo. He set the norms.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-11T21:59:50.339Z · LW(p) · GW(p)

(...) a propensity to advance a theory despite being green amateurs in the subject matter, and then have the temerity to argue about it with the (clearly-non-stupid) experts in the field.

Not exactly a green amateur, so how could he have set that norm? EDIT: Retracted, you answered in another comment.

comment by private_messaging · 2013-01-13T23:56:08.373Z · LW(p) · GW(p)

I think philosophy does belong to the list if you are arguing some matters of philosophy but not others. There is a common field to all mathematics-heavy disciplines, namely mathematics, with huge overlaps, and there's no reason why, for example, a physicist couldn't correctly critique a philosopher's bad mathematics, even though most non-philosophers or amateur philosophers really should learn rather than argue, since a philosopher is a bit of an expert in mathematics.

comment by whowhowho · 2013-01-25T13:04:14.324Z · LW(p) · GW(p)

OK, I cannot bring myself to add philosophy to the list of "don't argue with the experts, learn from them" topics, but maybe it's because I don't know anything about philosophy.

I find that an odd statement. Why can't you assume by default that arguing with an expert in X is bad for all X?

For some reason, theoritis is much worse with regard to philosophy than just about anything else. Amateurs hardly ever argue with brain surgeons or particle physicists. I think part of the reason for that is that brain surgeons and particle physicists have manifest practical skills that others don't have. The "skill" of philosophy consists of stating opinions and defending them, which everyone can do to some extent. The amateurs are like people who think you can write (well, at a professional level) because you can type.

Replies from: shminux
comment by shminux · 2013-01-25T17:10:46.796Z · LW(p) · GW(p)

I find that an odd statement. Why can't you assume by default that arguing with an expert in X is bad for all X?

By default, yes. Let me try to articulate my perception of the difference between philosophers and other experts. When I talk to a mathematician, or a physicist, or a computer scientist, I can almost immediately see that their level in their discipline is way above mine, because they bring up a standard argument/calculation/proof which refutes my home-made ideas, and then extend those ideas to a direction I never considered and show which of them are any good. Talking to an expert willing to take you seriously is generally a humbling experience. You see the depth of their knowledge and realize that arguing with them instead of listening is a poor strategy. By the way, I noticed that I sometimes also do that to people when I talk about my area of expertise.

Now, when I listen to a mainstream philosophical argument, I don't feel humbled at all (with one or two exceptions), instead I want to scream "why are you arguing about definitions? Especially the definitions you didn't even bother formalizing?!?!" or "why do you rely on a premise you find "intuitive" or "obvious", given that it's rather not obvious to others?" or "why do you gleefully strawman someone else's argument instead of trying to salvage it?". The exceptions are generally in the areas which can hardly be considered philosophy, they are usually a part of mathematical logic, or computer science, or physics, or psychology, which makes them (gasp!) testable, something classical philosophers seem to shy away from. I don't normally get the feeling of awe and respect when listening to a philosopher. They can sure cite a multitude of sources and positions and reproduce some ancient arguments, but many of these arguments look as outdated as Aristotle's ideas about physics, and so only of historical interest.

Again, I'm no expert in the matters of philosophy, so my perspective might be completely wrong, but that's the explanation why I did not add philosophers to the list of experts in my original comment.

Replies from: whowhowho
comment by whowhowho · 2013-01-25T17:36:11.213Z · LW(p) · GW(p)

Now, when I listen to a mainstream philosophical argument, I don't feel humbled at all (with one or two exceptions), instead I want to scream "why are you arguing about definitions?

Because phils. deal with abstract concepts, not things you can point at, and because many phil. problems are caused by inconsistent definitions, as in the when-a-tree-falls problem.

Especially the definitions you didn't even bother formalizing?!?!"

Phils can and do stipulate.

or "why do you rely on a premise you find "intuitive" or "obvious", given that it's rather not obvious to others?"

Are there fields where people don't rely on intuitions?

or "why do you gleefully strawman someone else's argument instead of trying to salvage it?".

Maybe they can't see how.

comment by Wei Dai (Wei_Dai) · 2013-01-11T21:27:12.311Z · LW(p) · GW(p)

Want to give some examples? I don't seem to recall seeing a lot of this myself.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-11T21:46:41.095Z · LW(p) · GW(p)

Come on, Luke has a series of posts taking a shit on the entire discipline of philosophy. Luke is not an expert on philosophy. EY says he isn't happy with do(.) based causality while getting basic terminology in the field wrong, etc. EY is not an expert on causal inference. If you disagree with Larry Wasserman on a subject in stats, chances are it is you who is confused. etc. etc. Communication and scholarship norms here are just awful.

If you want to see how academic disagreements ought to play out, stroll on over to Scott's blog.


edit: To respond to the grandparent: I think the answer is adopting mainstream academic norms.

Replies from: Wei_Dai, whowhowho, bogus
comment by Wei Dai (Wei_Dai) · 2013-01-11T21:59:28.183Z · LW(p) · GW(p)

shminux explicitly excluded philosophy, and I wasn't aware of the other two examples you gave. Can you link to them so I can take a look? (ETA: Never mind, I think I found them. ETA2: Actually I'm not sure. Re Wasserman, are you referring to this?)

comment by whowhowho · 2013-01-25T13:14:06.579Z · LW(p) · GW(p)

I couldn't agree more. Mainstream academia is a set of rationality skills, and a very case-hardened one. Adding something extra, like cognitive science, might be good, but LW omits a lot of the academic virtues -- not blowing off about things you don't know, making an attempt to answer objections, modesty, etc.

PS: Tenure is a great rationality-promoting institution because...left as an exercise to the reader.

comment by bogus · 2013-01-13T00:55:27.493Z · LW(p) · GW(p)

EY says he isn't happy with do(.) based causality while getting basic terminology in the field wrong

Just for clarity, could you link to where EY does this? Also, it's fairly well known in statistics that econometricians are unhappy with causal networks and do(.), because causal networks cannot directly account for feedback-like or cyclic phenomena, which are quite ubiquitous in econometric data (think supply and demand factors co-determining price and quantity, or the influence of expectations) - causal networks have to be acyclic. So there is a genuine controversy here which is reflected in the literature.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-13T09:01:12.456Z · LW(p) · GW(p)

Also, it's fairly well known

This is precisely what I mean. Well known by whom? Not by me!

Causal networks can easily encode cycles (in fact in two separate ways -- via unrolling the cycle a la dynamic Bayesian network, or via non-recursive, or cyclic, structural equation models). Pearl's first picture of an SEM, Figure 1.5 in his book, shows a cyclic causal diagram representing supply and demand. See google preview here: http://bayes.cs.ucla.edu/BOOK-2K/
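
To make the unrolling option concrete, here is a minimal toy sketch in Python (the coefficients and noise scale are made up for illustration, not taken from Pearl or the papers below): a supply-and-demand style feedback loop becomes acyclic once the variables are indexed by time, exactly as in a dynamic Bayesian network.

    import numpy as np

    rng = np.random.default_rng(2)
    T = 50

    # Toy "cyclic" supply-and-demand model, unrolled over time:
    # q_{t+1} depends on p_t, and p_{t+1} depends on q_t, so the graph over
    # (p_0, q_0, p_1, q_1, ...) is acyclic even though the static diagram has a cycle.
    p = np.zeros(T)
    q = np.zeros(T)
    p[0], q[0] = 1.0, 1.0
    for t in range(T - 1):
        q[t + 1] = 2.0 - 0.5 * p[t] + 0.1 * rng.normal()  # demand responds to price
        p[t + 1] = 0.5 + 0.4 * q[t] + 0.1 * rng.normal()  # price responds to quantity

    print(p[-1], q[-1])  # the unrolled system hovers near its equilibrium (~1.08, ~1.46)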

Here's a paper from as early as 1995 by Spirtes (there have been many more since then) talking about cyclic causal models:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.1489

Here's a logical axiomatization of counterfactual causality in cyclic models (2000):

http://www.jair.org/papers/paper648.html

When you say causal networks cannot account for feedback or cyclic phenomena, what exactly do you mean? Do you have any references for econometricians abandoning do(.) in favor of something else? Or any reference for the controversy? Note that SEM (which is likely what most econometricians use due to their preference for instrumental variable methods) are a special case of do(.) models.


As for EY, he was confused about the difference between a causal model and a Bayesian network. This would be sort of comparable to going up to Scott and saying "it seems incontrovertible to me that MWI is the correct interpretation of quantum mechanics. By the way, I got the definition of the Hamiltonian wrong." One may be right, but the worry is right for the wrong reasons.
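
For readers who haven't seen that distinction spelled out, here is a minimal Python sketch (a toy linear model with one confounder, chosen purely for illustration) of how conditioning in the observational distribution differs from intervening with do(.); a Bayesian network by itself only gives you the former.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Toy linear causal model with a confounder: z -> x, z -> y, and x -> y.
    z = rng.normal(size=n)
    x = z + rng.normal(size=n)
    y = 2 * x + 3 * z + rng.normal(size=n)

    # Conditioning: compare y near x = 1 with y near x = -1 in observational data.
    cond = y[np.abs(x - 1) < 0.05].mean() - y[np.abs(x + 1) < 0.05].mean()

    # Intervening: do(x = v) cuts the z -> x edge and sets x by hand,
    # leaving the rest of the model untouched.
    def mean_y_do(v):
        z_new = rng.normal(size=n)
        return (2 * v + 3 * z_new + rng.normal(size=n)).mean()

    interv = mean_y_do(1.0) - mean_y_do(-1.0)

    print(round(cond, 1))    # ~7: the effect of x plus confounding through z
    print(round(interv, 1))  # ~4: the causal contrast E[y|do(x=1)] - E[y|do(x=-1)]

The two numbers disagree precisely because a causal diagram carries interventional semantics that the joint distribution alone does not.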

Replies from: bogus
comment by bogus · 2013-01-13T15:08:15.460Z · LW(p) · GW(p)

OK, I managed to find the comment by Eliezer that you're probably referring to, here. But what Eliezer says in that comment is that do(.)-based causality cannot be physically fundamental, which sounds right to me. And Pearl agrees with this, insofar as he states (in Causality) that the correspondence between physical causation (Pearl references the requirement that causes be in the past light cone of their effects; albeit presumably we should also include the principle of locality/"no action at a distance") and statistical causality analysis is a bit of a mystery, and may say more about the way that people build models of the world and talk about them than anything more fundamental.

As for the confusion between Bayesian networks and causal graphs, Pearl deals with that in his book. Even before causal graphs were formally described, a lot of the interest in Bayesian networks (which are represented as directed graphs) was due to folks wanting to do causal analysis on them, if only informally. And indeed, if all we're interested in is the correlation structure, then we're not limited to Bayesian networks: we can use other kinds of graphical models, some of which have better properties (such as Markov graphs).

I am suspending judgement about the feedbacks issue for now, even though I still think it's important. The point is that you'd need to make the case that causal diagrams can account in a reasonably straightforward way for all relevant uses of SEM (including not just explicit feedback but also equilibrium relationships more generally). Unless this is clearly shown, I don't think it's right to call do(.)-based methods a generalization of SEM.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-13T15:16:42.124Z · LW(p) · GW(p)

Structural equation models (SEMs) are a special (linear/Gaussian) case of the non-parametric structural model (which uses do(.), or potential outcomes). This is not even an argument we can have, it's standard math in the field. I don't know where you learned that this is not the case, but whatever that source, it is wrong.

It's fairly easy to verify: all non-parametric structural models do is replace the linear mechanism function by an arbitrary function, and the Gaussian noise term by an arbitrary noise term. It's fairly easy to derive that causal regression coefficients in a SEM are simply interventional expected value contrasts on the difference scale.

So if we have:

y = ax + epsilon, then

a = E[y | do(x = 1)] - E[y | do(x = 0)]

One can also think of regression coefficients as partial derivatives of the interventional mean with respect to the intervened variable:

a = dE[y|do(x)]/dx
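
A minimal Python sketch of those two identities (the coefficient 1.7 and the standard-normal noise are arbitrary choices made only for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    a = 1.7                    # hypothetical structural coefficient
    eps = rng.normal(size=n)   # exogenous noise term

    def mean_y_do(x_val):
        # Under do(x = v) the structural equation y = a*x + eps is evaluated at x = v.
        return (a * x_val + eps).mean()

    # The interventional contrast on the difference scale recovers a ...
    print(round(mean_y_do(1.0) - mean_y_do(0.0), 6))            # 1.7

    # ... as does the derivative of the interventional mean with respect to x.
    h = 0.1
    print(round((mean_y_do(2.0 + h) - mean_y_do(2.0)) / h, 6))  # 1.7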


Cyclic causal models do not require either linearity or Gaussianity, although these assumptions make certain things easier.


Part of the reason I post here is I love talking about this stuff, and while I think I can learn much from the lesswrong community, I also can contribute my expertise where appropriate. What is disheartening is arguing with non-experts about settled issues. This reminds me of this episode where Judea asked me to change something on the Wikipedia Bayesian network article, and I got into an edit war with a resident Wikipedia edit camper. I am sure he was not an expert, because he kept reverting to a wrong statement (and had more time than me...). I adjusted my overall opinion of Wikipedia quality based on that :(.

Replies from: private_messaging, bogus
comment by private_messaging · 2013-01-14T08:44:02.431Z · LW(p) · GW(p)

Arguing with experts on settled issues is a symptom of sloppiness which would be particularly prominent in non-settled issues, though.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-15T11:22:11.055Z · LW(p) · GW(p)

You would think so, but I don't think that's true. Think about the legions of cranks trying to create perpetual motion machines, or settle the P/NP question, etc. etc. Thermodynamics is fairly settled, the difficulty of the P/NP question is fairly settled. Crankery is an easy attractor, apparently.


Note: I am not calling anyone in this thread a crank, merely responding to the general point that argument is evidence of an unsettled area. It's true, but the evidence is surprisingly weak.

Replies from: private_messaging
comment by private_messaging · 2013-01-15T12:38:04.479Z · LW(p) · GW(p)

No, I meant that if someone gets settled stuff wrong, that's usually due to sloppiness, and said sloppiness is an utter horror in any less settled area. It's like repeatedly falling off a bicycle head first with the training wheels on. Without training wheels it's only worse.

comment by bogus · 2013-01-13T15:48:07.228Z · LW(p) · GW(p)

I agree that this is true of structural equation models, taken in a fairly narrow sense. However, econometricians commonly generalize these to simultaneous equation models, which include equations where one simply asserts an algebraic equation involving variables, with no one variable having a privileged status of being "determined", or an "outcome" of others. This means that do(.) cannot carry over to such models in a straightforward way. And yes, this is standard practice in econometrics when modeling equilibrium, feasibility constraints and the like.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-13T16:00:33.031Z · LW(p) · GW(p)

This is probably a good read also:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.1408

To the extent that constraints are simply constraints and not a result of causal structure, the model representing them is partly non-causal (so do(.) or some other representation of causation is irrelevant for such constraints). To the extent that constraints represent some consequence of graphical causal structure I am not aware of a single example where a potential outcome model is not appropriate. Do you have an example in mind?

In some sense if you have constraints that represent consequence of causality, such as feedback, and there is no story relating them to interventions/generative mechanisms, then I am not sure in what sense the model is causal. I am not saying it is not possible, but the burden of proof is on whoever proposed the model to clearly explain how causality works in it. There is a lot of confusion in economics and sometimes even in stats about causality (Judea is fairly unhappy with incoherence that many economics textbooks display when discussing causation, actually).

comment by BerryPick6 · 2013-01-11T22:43:35.885Z · LW(p) · GW(p)

OK, I cannot bring myself to add philosophy to the list of "don't argue with the experts, learn from them" topics, but maybe it's because I don't know anything about philosophy.

Could this be because we have fewer philosophy experts (although there are a few notable ones) than science experts?

comment by Qiaochu_Yuan · 2013-01-01T13:40:29.738Z · LW(p) · GW(p)

Can someone who's familiar with Mencius Moldbug's writing briefly summarize his opinions? I've tried reading Unqualified Reservations but I find his writing long-winded. He also refers to a lot of background knowledge I just don't have, e.g. I don't know what I'm supposed to take away from him calling something Calvinist.

Replies from: None, Alejandro1, Vaniver, None, ChristianKl
comment by [deleted] · 2013-01-01T14:04:12.791Z · LW(p) · GW(p)

This is a tall order. Nearly everyone I talk to seems, while getting the same basic models, to emphasise wildly different things about them. Their updates on the matter also vary considerably: everything from utterly changing their politics to just mentally noting that you can make smart arguments for positions very divergent from the modern political consensus. Lots of people dislike his verbose style.

That is certainly the reason I haven't read all of his material so far.

I think the best way to get a summary is to discuss him with people here who have read him. They will likely learn things too. When it's too political, continue the discussion either in the politics thread or in private correspondence.

I'm interested and willing to engage in such discussion. If you are too, I'd ask you to perhaps make a list of the posts you have read so far. For now I'm assuming you began with one of the recommended essays like Idealism Is Not Great, Divine-right monarchy for the modern secular intellectual, or the Formalist Manifesto. Perhaps the introductory Open Letter to Open Minded Progressives or the Gentle Introduction sequences.

To this I would add the comment history of fellow LWer Vladimir_M, which is littered with high-quality Moldbug-like arguments on various issues. Who knows, a few new responses might coax him out of inactivity!

I recall some old sort of interesting discussion of Moldbuggian positions in which I participated as well:

Replies from: Alejandro1, FiftyTwo, Qiaochu_Yuan
comment by Alejandro1 · 2013-01-01T22:16:52.911Z · LW(p) · GW(p)

By the way: I was pondering Les Miserables not long ago in anticipation of the movie, and realized that both the musical and the original novel are an exact artistic/literary expression of what Moldbug calls Universalism (down to details like the family lineage from Christianity (the bishop at the beginning) to revolutionary politics). And the character of Javert summarizes perfectly Moldbuggian philosophy, e.g. "I am the law and the law is not mocked!" Would you agree?

Replies from: TimS, None
comment by TimS · 2013-01-01T22:43:12.525Z · LW(p) · GW(p)

If we take the Javert = Moldbug metaphor seriously, how should we interpret Javert's later conclusion that his earlier philosophy contains a hopeless conflict between authority-for-its-own-sake and helping people live happier lives?

Replies from: Alejandro1, drethelin
comment by Alejandro1 · 2013-01-01T23:08:14.682Z · LW(p) · GW(p)

Well, the story is set up to favor Universalism. If Moldbug had written it, probably it would have ended with Valjean concluding that his earlier philosophy contained a hopeless conflict between rejecting authority and helping people live happier lives.

Replies from: TimS
comment by TimS · 2013-01-02T01:27:36.031Z · LW(p) · GW(p)

I'm smirking at the idea of a Moldbuggian story of the uprising of 1832. Revolutionists Get What They Deserve or some-such. :)

But I don't think that story has room for the complex characters of Hugo's story, narratively speaking. There's no room at all for Valjean, and Javert becomes simply the protagonist to the evil antagonist Enjolras.

Ultimately, you asked if canon!Javert embodies Moldbug. As I suggested above, I think the answer is no. He's a tragic figure - even Hugo would admit that > 75% of the time, the king's law points toward a just outcome. But Javert was blind to the fact that the king's law contained deep flaws.

I don't know if the passage survives the standard abridgements, but Javert writes a note to his superiors listing several minor injustices in the local prison system, immediately before killing himself. Even after conversion, Javert fails to realize that he was the only person who both (1) knew about the issues, and (2) cared about the injustice. That episode, and Javert as a character, are deeply tragic in my opinion.

And I can't imagine Moldbug caring about those issues at all. Obviously, Moldbug's choices would be different - but I don't get the impression Moldbug would think the minor injustices were even worth his attention if he were in Javert's situation.

Replies from: Alejandro1
comment by Alejandro1 · 2013-01-02T02:00:48.825Z · LW(p) · GW(p)

I'm smirking at the idea of a Moldbuggian story of the uprising of 1832. Revolutionists Get What They Deserve or some-such. :)

Yes, in addition to the musical!Javert quote I included, I was going to include "Crush those little schoolboys!"--but tried searching it and found I was misremembering a different line.

But I don't think that story has room for the complex characters of Hugo's story, narratively speaking. There's no room at all for Valjean, and Javert becomes simply the protagonist to the evil antagonist Enjolras.

You are certainly right that Javert is a more complex and tragic character than a pure Inflexible Authoritarian Law archetype. I could shift a bit my statement and say that the bare essence of Javert is that archetype, and that Hugo gives him that depth because of the direction he wants to take the story and the ideology it embodies.

From Moldbug's viewpoint LesMiz might be described as a Universalist tract that stacks the deck by showing Valjean as saintlike instead of naive, and setting up Javert's character and storyline to end in a forced alternative between conversion and suicide, rather than the triumph he "deserves". (Much as Chick tracts, or, to pick examples with more quality, Chesterton's and Lewis's fictions, stack the deck against the skeptic.) But I agree that such a description by Moldbug would be too "reductionist" (to Moldbug's own ideology) and unfair to the literary qualities of the work.

Replies from: None
comment by [deleted] · 2013-01-02T10:07:52.648Z · LW(p) · GW(p)

Moldbug is not beyond commenting on recent events or culture; we may yet hear his take on at least the movie, if not the book itself. Also, I'll do a search to see if he hasn't perhaps already mentioned the book in an offhanded fashion.

comment by drethelin · 2013-01-01T23:06:10.648Z · LW(p) · GW(p)

It's a lesson about what happens when you combine the virtuous with a pernicious system of virtue. The liberal backlash against strong authoritarianism/belief in the rule of law is one way of reacting to such a world. "The laws are evil, therefore their enforcers are evil." The other side of this is people who believe the laws are good and anyone who enforces them is good. Both views are lacking nuance. Javert is someone who has spent his life believing that he is good because he enforces the laws, which are good. He can't live with the idea that he has been "bad" all along.

comment by [deleted] · 2013-01-02T10:05:05.450Z · LW(p) · GW(p)

I will probably have to watch the movie or reread the book before commenting since I recall the story only in vague outlines.

comment by FiftyTwo · 2013-01-01T23:20:13.496Z · LW(p) · GW(p)

Thanks, that's a helpful summary.

Slightly related question, why are his views seemingly being suddenly discussed a lot and taken semi-seriously on LessWrong?

Replies from: NancyLebovitz, None
comment by NancyLebovitz · 2013-01-02T06:17:00.320Z · LW(p) · GW(p)

It isn't a sudden change. As far as I know, Moldbug's ideas are a recurring minor theme at LW.

Replies from: None
comment by [deleted] · 2013-01-02T09:50:58.652Z · LW(p) · GW(p)

Yes, I think this is about right. An example is this discussion of Peter Thiel's support of seasteading.

comment by [deleted] · 2013-01-02T09:53:37.763Z · LW(p) · GW(p)

As NancyLebovitz said, it isn't really a new thing; there was a recent discussion on why talk of Moldbug's ideas is noticeable here.

comment by Qiaochu_Yuan · 2013-01-01T22:48:12.648Z · LW(p) · GW(p)

To be honest, I'm not terribly interested in discussing Moldbug (yet); I just wanted to get a better sense of what other people mean when they call something Moldbuggian. Thanks for the detailed response!

comment by Alejandro1 · 2013-01-01T22:09:19.267Z · LW(p) · GW(p)

I summarized very briefly my understanding of his political philosophy in this comment a few weeks ago.

comment by Vaniver · 2013-01-01T19:08:16.392Z · LW(p) · GW(p)

If you've got a few hours, I found the Gentle Introduction to be sufficiently gentle, but it does have nine parts and is written in his regular style. I think the first part is strongly worth slogging through, in part because his definition of "church" is a great one. I may write a short summary of it at some point, but that's a nontrivial writing project.

comment by [deleted] · 2013-01-01T14:14:59.981Z · LW(p) · GW(p)

He also refers to a lot of background knowledge I just don't have, e.g. I don't know what I'm supposed to take away from him calling something Calvinist.

Could you please clarify: are you unsure what he means when he calls a position Calvinist (presumably crypto-Calvinist or something like that), or are you just unsure what Calvinism is?

The short and sufficient answer to the second is that this is a designation for a bunch of Protestant Christians who historically took themselves very seriously and have a reputation for being dour. Take special note of the Five Points of Calvinism.

The short and insufficient answer to the first is: people who have ethical, political and philosophical ideas that can't be justified by their declared systems of ethics, but can be perfectly well explained if you note that the memeplexes in their heads are descended from the highbrow American Protestantism of previous centuries. He goes into several things he considers indications of this and points out that they dislike this explanation very much and want to believe their positions are the result of pure reason or Whiggish notions of history inching towards a universal "true human morality".

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-01T21:14:16.048Z · LW(p) · GW(p)

The former, but thanks for your clarification on both (I imagine your clarification on the latter is a relevant connotation Moldbug wanted and that I was largely ignorant of).

comment by ChristianKl · 2013-01-04T13:23:26.154Z · LW(p) · GW(p)

Moldbug has a variety of opinions that he expresses in his articles. Summarizing all of them is therefore hard. I will try to list a few.

Moldbug rejects the progressive project. That means that he's opposed to most political ideas of Woodrow Wilson and of the presidents after Wilson.

Moldbug rejects modern democracy. He thinks that the US military should orchestrate a coup d'état. After the coup d'état the US should split up, and every state should have its own laws.

In the ideal case Moldbug wants the states to be run like stock companies. If that isn't possible, Moldbug prefers the way Singapore and Qatar are governed to the way the US is governed. According to him, competition between a lot of states that are governed like Singapore is better than a huge federal government.

Replies from: TimS
comment by TimS · 2013-01-04T14:23:26.259Z · LW(p) · GW(p)

Your timeline starts too late. Moldbug rejects the Glorious Revolution.

I suspect that Moldbug thinks a military coup is only a means to an end. He wants government rule on a for-profit basis, with essentially no tolerance of social disorder - other than voting with your feet (i.e. leaving). This is the concept he calls "Patches."

Replies from: ChristianKl
comment by ChristianKl · 2013-01-04T15:29:28.040Z · LW(p) · GW(p)

Your timeline starts too late. Moldbug rejects the Glorious Revolution.

Moldbug does reject it; I'm however not sure that he rejects all pre-20th-century political events. He seems to like corporations, and corporations have gotten many more legal rights than they had before the Glorious Revolution.

comment by Pablo (Pablo_Stafforini) · 2013-01-02T23:03:39.730Z · LW(p) · GW(p)

The few times I raised this question in the past, my comments were met with either indifference or hostility. I will try to raise it one more time in this open thread. If you think the question deserves a downvote, could you please, in addition to downvoting me, leave a brief comment explaining your rationale for doing so? I promise to upvote all comments providing such explanations.

So, here's the question: What is the reason for defining the class of beings whose volitions are to be coherently extrapolated as the class of present human beings? Why present and not also future (or past!)? Why human and not, say, mammals, males, or friends of Eliezer Yudkowsky?

Note that the question is not: Why should we value only present people? This way of framing the problem already assumes that "we" (i.e., present human beings) are the subjects whose preferences are to be accorded relevance in the process of coherent extrapolation, and that the interests of any other being (present or future, human or nonhuman) should matter only to the extent that "we" value them. What I am asking for, rather, is a justification of the assumption that only "our" preferences matter.

Replies from: Kaj_Sotala, MTGandP, TimS, None, NancyLebovitz, leplen, lsparrish, drethelin
comment by Kaj_Sotala · 2013-01-03T04:12:36.985Z · LW(p) · GW(p)

Luke lists "Why extrapolate the values of humans alone? What counts as a human? Do values converge if extrapolated?" as an open question in So You Want to Save the World.

Would the choice to extrapolate the values of humans alone be an unjustified act of speciesism, or is it justified because humans are special in some way — perhaps because humans are the only beings who can reason about their own preferences? And what counts as a human? The problem is more complicated than one might imagine (Bostrom 2006; Bostrom & Sandberg 2011). Moreover, do we need to scan the values of all humans, or only some? These problems are less important if values converge upon extrapolation for a wide variety of agents, but it is far from clear that this is the case (Sobel 1999, Doring & Steinhoff 2009).

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-03T05:07:53.474Z · LW(p) · GW(p)

Thanks!

Of course, the premise that "humans are the only beings who can reason about their own preferences" could only justify the conclusion that some human beings are special, since there are members of the human species who lack that ability. Similar objections could be raised against any other proposed candidate property. This has long been recognized by moral philosophers.

Replies from: ChristianKl
comment by ChristianKl · 2013-01-03T20:23:32.419Z · LW(p) · GW(p)

Of course, the premise that "humans are the only beings who can reason about their own preferences" could only justify the conclusion that some human beings are special, since there are members of the human species who lack that ability.

In our society we don't really respect the volition of those human beings. We give them legal guardians who are supposed to decide in their interests instead of letting them make their own decisions. We don't let them vote in our elections.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-03T22:31:40.954Z · LW(p) · GW(p)

That is not because we don't regard their preferences as valuable in themselves, but simply because these beings lack the means to do the kinds of things that would allow them to satisfy those preferences. In any case, CEV does not exclude such humans from the class of creatures whose volitions are to be coherently extrapolated.

comment by MTGandP · 2013-01-04T06:50:15.920Z · LW(p) · GW(p)

I see no reason to restrict our preference extrapolation to presently-existing humans. CEV should extrapolate from all preferences, which includes the preferences of all sentient beings, present and future. Any attempt to place boundaries on this requires justification.

Edit: You might say, "Why not also include rocks in our consideration?" Simple: rocks don't have preferences. Sentient beings (including many non-human animals) have preferences.

Replies from: tut
comment by tut · 2013-01-04T13:50:58.401Z · LW(p) · GW(p)

What if the majority of sentient beings are ants and beetles?

Replies from: MTGandP
comment by MTGandP · 2013-01-06T05:46:41.636Z · LW(p) · GW(p)

If ants and beetles are sentient, then CEV should take their preferences into account. It sounds like you're trying to use this as a reductio ad absurdum of my claim, but I don't believe that works. If ants and beetles are sentient then they deserve consideration, no matter how unintuitive that may seem.

Replies from: wedrifid
comment by wedrifid · 2013-01-06T19:08:27.304Z · LW(p) · GW(p)

If ants and beetles are sentient, then CEV should take their preferences into account.

No it shouldn't.

Elaboration: Your 'should' claim indicates both that you have a preference for a CEV extrapolated from all sentient beings (if not all, then at least up to the inclusion of ants and beetles if they are sentient) and that you assert it as a tribal norm. Many others don't implicitly instantiate CEV in that way and instead instantiate it as the CEV of some favored group, the most common favored group being 'all humans'. To those people your unqualified assertion would be interpreted as false.

Replies from: MTGandP
comment by MTGandP · 2013-01-06T19:58:39.157Z · LW(p) · GW(p)

I addressed this point in my original comment.

comment by TimS · 2013-01-03T02:13:00.363Z · LW(p) · GW(p)

I'm not sure that there is community consensus that "human beings currently living" is the right reference class. Eliezer suggests that he thinks the right reference class is all of humanity ever in this post.

If one assumes some kind of moral progress constraint and unpredictable future values, CEV(living humans) seems like something our future descendants would hate. Certainly, modern Westerners probably would hate CEV(Europeans-alive-in-1300). But I'm a moral anti-realist, so I don't believe there are constraints that cause moral progress - and don't expect CEV(all-humans-ever) to output a morality.

Replies from: army1987, MichaelAnissimov, MichaelAnissimov
comment by A1987dM (army1987) · 2013-01-03T15:18:50.326Z · LW(p) · GW(p)

Certainly, modern Westerners probably would hate CEV(Europeans-alive-in-1300).

Some people would disagree.

Replies from: TimS
comment by TimS · 2013-01-03T15:49:14.480Z · LW(p) · GW(p)

Gwern collects some evidence against the proposition. The fact that people disagree and think morality is timeless in some sense is not particularly strong evidence when compared to results of competent historical analysis.

Of course, which historical analysis is considered credible is fairly controversial.

comment by MichaelAnissimov · 2013-01-03T23:58:20.120Z · LW(p) · GW(p)

Part of the point of CEV is to make the extrapolation process good enough that future beings X won't hate the extrapolation of arbitrary past group Y. The extrapolation should be effective and broad enough that extrapolating from humans in different parts of history would not appreciably change the outcome. My guess would be that the extrapolation process itself would provide most of the content, the starting reference class being a minor variable.

Replies from: TimS
comment by TimS · 2013-01-04T00:06:16.523Z · LW(p) · GW(p)

It would be convenient if such a process could be proven to exist and rigorously described.

Resolving that issue would do a lot to address the OPs concerns. Separately, it would be a strong reason for me to reject moral anti-realism.

What evidence do we have that such convenient extrapolation is actually possible?

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-04T00:32:04.763Z · LW(p) · GW(p)

Resolving that issue is part of the overall goal of the SI, and a huge project. I'm also a moral anti-realist, by the way. CEV should be starter-insensitive w/ respect to humans from different time periods. My reasons for why I think that this is achievable in principle would be a whole post.

Replies from: TimS
comment by TimS · 2013-01-04T01:31:00.555Z · LW(p) · GW(p)

I'd be very interested in a theory that harmonized CEV with moral anti-realism.

And you seem to believe in a very strong form of extrapolation. I'm personally skeptical that CEV(modern-humanity) would output anything, while you assert CEV(modern-humanity) = CEV(ancient Greece). Surely you don't think CEV(Clippy) = CEV(humanity).


minor terminology note: I've always used CEV and (moral) extrapolation interchangeably. If there's a reason I shouldn't do that, I'd appreciate an explanatory pointer.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-01-04T06:06:12.501Z · LW(p) · GW(p)

Well, moral extrapolation is a broader category than CEV. CEV suggests, for instance, that we should also take into account the social dynamics that would influence the development of morality ("grown up farther together"), while you could conceivably also have a moral extrapolation approach which considered that irrelevant.

(One could also argue that it is the addition of social dynamics which helps justify the notion of CEV(modern-humanity) = CEV(ancient Greece), given that it was technological and social dynamics which got us from the values-of-ancient-Greece to values-of-today. Of course, that presupposes a deterministic view of history, which seems to me highly implausible. It also opens the door for all kinds of nasty social dynamics.)

comment by MichaelAnissimov · 2013-01-04T00:31:39.686Z · LW(p) · GW(p)

.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-04T00:51:07.891Z · LW(p) · GW(p)

You can delete retracted comments if you reload the page.

Replies from: Nisan
comment by Nisan · 2013-01-06T05:52:44.863Z · LW(p) · GW(p)

But not if someone's replied to the comment.

comment by [deleted] · 2013-01-19T19:14:04.078Z · LW(p) · GW(p)

No one else seems to be giving what is IMO the correct answer; I want the values of a created FAI to match my own, extrapolated, i.e. moral selfishness.

I would actually prefer that the extrapolation seed be drawn only from SI supporters (or ideally just me, but that's unlikely to fly), because I'm uneasy about what happens if some of my values turn out to be memetic, and they get swamped/outvoted by a coherent extrapolated deathist or hedonist memeplex. Or if you include, for example, uplifted sharks in the process.

Replies from: TimS
comment by TimS · 2013-01-19T19:32:46.765Z · LW(p) · GW(p)

I too would prefer super AI to look to my values when deciding what to implement.

But, given the existence of moral disagreement, I don't see why that deserves to be labeled Friendly. And the whole point of CEV or similar process is to figure out what is awesome for humanity. Implementing something other than what is awesome for all of humanity is not Friendly.

If deathism really is what is awesome for all humanity, I expect a FAI to implement deathism. But there's no particular reason to believe that deathism is what is awesome for humanity.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-19T21:37:29.749Z · LW(p) · GW(p)

Tim, your comment highlights the potential conflict between CEV and FAI that I also mentioned previously. FAI is by definition not hostile to human beings, whereas CEV might permit, or even require, the extinction of all humanity. This may happen, for instance, if the process of coherent extrapolation shows that humans value certain superior beings more than they value themselves, and if the coexistence of humans and these beings is impossible.

When I pointed out this problem, both Kaj Sotala and Michael Anissimov replied that CEV can never condone hostile actions towards humanity because FAI is "defined as 'human-benefiting, non-human-harming'". However, this reply just proves my point, namely that there is a potential internal inconsistency between CEV and FAI.

Replies from: TimS
comment by TimS · 2013-01-20T03:46:53.598Z · LW(p) · GW(p)

Don't look at me to resolve that conflict. I think moral extrapolation is unlikely to output anything coherent if the reference class is sufficiently large to avoid the objections I raised above. And I can't think of any other plausible candidate to produce Friendly instructions for an AI.

comment by NancyLebovitz · 2013-01-03T14:38:39.140Z · LW(p) · GW(p)

Slight sidetrack: By the time AI seems plausible, I think it's likely that the human race will have done enough self-modification (computer augmentation, biological engineering) that the question of what's human is going to be more difficult than it is now.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-03T15:42:18.047Z · LW(p) · GW(p)

By 'human', do you mean 'member of the species Homo sapiens' or something else?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-03T16:48:41.463Z · LW(p) · GW(p)

I was thinking "member of the species Homo sapiens", but now that you mention it, I'd assign a small probability to genetically modified humans which can't interbreed with other humans. I don't have anything specific in mind, it's just that if genetic modification becomes at all common, a lot of possibilities open up, and some of the good ones might be incompatible with mutual fertility....whatever that means under the circumstances.

comment by leplen · 2013-01-03T19:40:11.106Z · LW(p) · GW(p)

I would also like to see this discussion. It isn't terribly clear to me why the extinction of the human race and its replacement with some non-human AI is an inherently bad outcome. Why keep around and devote resources to human beings, who at best can be seen as sort of a prototype of true intelligence, since that's not really what they're designed for?

While imagining our extinction at the hands of our robot overlords seems unpleasant, if you imagine a gradual cyborg evolution to a post-human world, that seems scary, but not morally objectionable. Besides the Ship of Theseus, what's the difference?

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-03T22:48:17.048Z · LW(p) · GW(p)

A long time ago, a different person who also happens to be named “Eliezer Yudkowsky” said that, in the event of a clash between human beings and superintelligent AIs, he would side with the latter. The Yudkowsky we all know rejects this position, though it is not clear to me why.

Replies from: wedrifid, MichaelAnissimov, leplen
comment by wedrifid · 2013-01-04T13:08:05.041Z · LW(p) · GW(p)

A long time ago, a different person who also happens to be named “Eliezer Yudkowsky” said that, in the event of a clash between human beings and superintelligent AIs, he would side with the latter. The Yudkowsky we all know rejects this position, though it is not clear to me why.

Not clear why? Because he likes people and doesn't want everyone he knows (including himself), everyone he doesn't know and any potential descendants of either to die? Doesn't that sound like a default position? Most people don't want themselves to go extinct.

comment by MichaelAnissimov · 2013-01-03T23:55:28.085Z · LW(p) · GW(p)

"Superintelligent AIs" is not one thing, it's a class of quadrillions of different possible things. The old Eliezer was probably thinking of one thing when he referred to superintelligences. When you realize that SAIs are a category of beings with more potential diversity than all species that have ever lived, it's hard to side with them all as a group. You'd have to have poor aesthetics to value them all equally.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-04T04:39:00.535Z · LW(p) · GW(p)

Thanks for the clarification. My understanding is that (the current) Eliezer doesn't merely claim that we shouldn't value all superintelligent AIs equally; he makes the much stronger claim that, in a conflict between humans and AIs, we should side with the former regardless of what kind of AI is actually involved in this conflict. This stronger claim seems much harder to defend precisely in light of the fact that the space of possible AIs is so vast. Surely there must be some AIs in this heterogenous group whose survival is preferable to that of creatures like us?

Replies from: Kaj_Sotala, TheOtherDave
comment by Kaj_Sotala · 2013-01-04T06:02:07.271Z · LW(p) · GW(p)

I don't think he makes that claim: all of his arguments on the topic that I've seen mainly refer to the kinds of AIs that seem likely to be built by humans at this time, not hypothetical AIs that could be genuinely better than us in every regard. E.g. here:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

"Well," says the one, "maybe according to your provincial human values, you wouldn't like it. But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals. And that's fine by me. I'm not so bigoted as you are. Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -"

My friend, I have no problem with the thought of a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand.

That's what the Future looks like if things go right.

If the chain of inheritance from human (meta)morals is broken, the Future does not look like this. It does not end up magically, delightfully incomprehensible.

With very high probability, it ends up looking dull. Pointless. Something whose loss you wouldn't mourn.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-04T06:37:26.523Z · LW(p) · GW(p)

That's helpful. I take it, then, that "friendly" AIs could in principle be quite hostile to actual human beings, even to the point of causing the extinction of every person alive. If this is so, I think it's misleading to use the locution 'friendly AI' to designate such artificial agents, and am inclined to believe that many folks who are sympathetic to the goal of creating friendly AI wouldn't be if they knew what was actually meant by that expression.

Replies from: MichaelAnissimov, Kaj_Sotala
comment by MichaelAnissimov · 2013-01-04T08:37:30.848Z · LW(p) · GW(p)

Not "that doesn't sound quite right", but "that's completely wrong". Friendly AI is defined as "human-benefiting, non-human harming".

Replies from: TheOtherDave, Pablo_Stafforini
comment by TheOtherDave · 2013-01-04T19:25:41.533Z · LW(p) · GW(p)

I would say that the defining characteristic of Friendly AI, as the term is used on LW, is that it optimizes for human values.

On this view, if it turns out that human values prefer that humans be harmed, then Friendly AI harms humans, and we ought to prefer that it do so.

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-04T23:31:29.657Z · LW(p) · GW(p)

That's not the proper definition... Friendly AI, according to current guesses/theory, would be an extrapolation of human values. The extrapolation part is everything. I encourage you to check out that linked document, the system it defines (though just a rough sketch) is what is usually meant by "Friendly AI" around here. No one is arguing that "human values" = "what we absolutely must pursue". I'm not sure that creating Friendly AI, a machine that helps us, should be considered as passing a moral judgment on mankind or the world. At least, it seems like a really informal way of looking at it, and probably unhelpful as it's imbued with so much moral valence.

comment by Pablo (Pablo_Stafforini) · 2013-01-04T15:23:37.531Z · LW(p) · GW(p)

Let's backtrack a bit.

I said:

[Eliezer] makes the much stronger claim that, in a conflict between humans and AIs, we should side with the former regardless of what kind of AI is actually involved in this conflict.

Kaj replied:

I don't think he makes that claim: all of his arguments on the topic that I've seen mainly refer to the kinds of AIs that seem likely to be built by humans at this time, not hypothetical AIs that could be genuinely better than us in every regard.

I then said:

I take it, then, that "friendly" AIs could in principle be quite hostile to actual human beings, even to the point of causing the extinction of every person alive.

But now you reply:

Friendly AI is defined as "human-benefiting, non-human harming".

It would clearly be wishful thinking to assume that the countless forms of AIs that "could be genuinely better than us in every regard" would all act in friendly ways towards humans, given that acting in other ways could potentially realize other goals that these superior beings might have.

comment by Kaj_Sotala · 2013-01-04T06:59:56.480Z · LW(p) · GW(p)

That doesn't sound quite right either, given Eliezer's unusually strong anti-death preferences. (Nor do I think most other SI folks would endorse it; I wouldn't.)

ETA: Friendly AI was also explicitly defined as "human-benefiting" in e.g. Creating Friendly AI:

The term “Friendly AI” refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals.

Even though Eliezer has declared CFAI outdated, I don't think that particular bit is.

comment by TheOtherDave · 2013-01-04T06:15:18.471Z · LW(p) · GW(p)

As I understand Eliezer's current position, it is that the right thing to optimize the universe for is the set of things humans collectively value (aka "CEV(humanity)").

On this account the space of all possible optimizing systems (aka "AIs" or "AGIs") can be divided into two sets: those which optimize for CEV(humanity) (aka "Friendly AIs"), and those which optimize for something else (aka "Unfriendly AIs").

And Friendly AIs are the right thing to "side with", as you put it here, because CEV(humanity) is on this account the right thing to optimize for.

On this account, "why side with Friendly AI over Unfriendly?" is roughly equivalent to asking "why do the right thing?"

The survival of creatures like us is entirely beside the point. Maybe CEV(humanity) includes the survival of creatures like us and maybe it doesn't.

Now, you might ask, why is CEV(humanity) the right thing to optimize the universe for, as opposed to something else? To which I think Eliezer's reply is that this is simply what it means to be right; things are right insofar as they correspond to what humans collectively value.

Some people (myself among them) find this an unconvincing argument. That said, I don't think anyone has made a convincing argument that some specific other thing is better to optimize for, either.

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-04T08:39:08.857Z · LW(p) · GW(p)

To which I think Eliezer's reply is that this is simply what it means to be right; things are right insofar as they correspond to what humans collectively value.

No. The argument is more like that there's no source of complex value in the world besides humans, and writing complex values line by line would take thousands of years, so we are forced to use some combination and/or extrapolation of human values, whether we want to or not.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-04T16:23:39.785Z · LW(p) · GW(p)

Hm.

If you have citations for EY articulating the idea that writing superior nonhuman values would take too long to do, rather than that it's fundamentally incoherent, I'd be interested. This would completely change my understanding of the whole Metaethics Sequence.

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-04T18:36:35.604Z · LW(p) · GW(p)

Whole brain emulation would basically be "copying" human values in a machine, and would demonstrate that "writing" human values is possible. You could then edit a couple morally relevant bits, and you'd be demonstrating that you could "create" a human-like but slightly edited morality. Evaluating whether it is "superior" by some metric would be a whole additional exercise, though.

I don't think the metaethics sequence implies that writing down values is impossible, just that human values are very complex and messy.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-04T19:12:40.300Z · LW(p) · GW(p)

Sure, if we drop the idea of "superior," I agree completely that it's possible (in principle) to write a set of values, and that the metaethics sequence does not imply otherwise.

And, also, it implies -- well, it asserts -- that human values are very complex and messy, as you say.

IIRC, it also asserts that human values are right. Which is why I think that on EY's view, evaluating whether the "edited morality" you describe here is superior to human values is not just an additional exercise, but an unnecessary (and perhaps incoherent) one. On his view, I think we can know a priori that it isn't.

Actually, now that I think about it more... when you say "there's no source of complex value in the world besides humans", do you mean to suggest that aliens with equally complex incompatible values simply can't exist, or that if they did exist EY's conclusions would change in some way to account for them?

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-04T23:11:07.223Z · LW(p) · GW(p)

I believe that EY definitively rejected the idea of there being an objective morality back in 2003 or thereabouts. Unless I am forgetting something from the metaethics sequence.

The whole point of CEV is to create a "superior" morality, though I think that's too value-loaded a word to use; the better word is "extrapolated". The whole idea of Friendly AI is to create a moral agent that continues to progress. So I'm not sure why you're claiming that EY considers the notion of moral self-evaluation in AI unnecessary. Isn't comparing possible, "better" moralities to the current morality essential to the definition of "moral progress" and therefore indispensable to building a Friendly AI?

To respond to your last statement, no to both. Of course aliens with equally complex incompatible values can exist, and I'm sure they do in some faraway place. Those aliens don't live here, though, so I'm not sure why we'd want to build a Friendly AI for their values rather than our own. The idea of building a Friendly AI is to ensure some kind of "metamoral continuity" through the intelligence explosion.

Replies from: TheOtherDave, Vladimir_Nesov
comment by TheOtherDave · 2013-01-05T05:29:18.638Z · LW(p) · GW(p)

To some extent, I think we may be talking past each other when I talk about values and you reply about moralities.

To clarify: would you say that this process you refer to of creating a different "morality" (whether it's different by virtue of being superior or extrapolated or something else is beside my point right now) keeps values fixed, or not?

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-05T10:04:51.674Z · LW(p) · GW(p)

I think it depends on what is meant by "values". I would say that the values change while the fundamental motivations are fixed, though Vladimir's response makes me unsure about this. Another way of saying it is that supergoals are fixed but the "Friendliness content" changes. (Though I haven't seen the phrase "Friendliness content" around much lately, perhaps it's being discarded in favor of more formal terms.)

Maybe another useful distinction would be between Friendliness structure and content (see the CFAI entry on the wiki).

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-05T19:03:08.201Z · LW(p) · GW(p)

I have to admit, the proliferation of terms in this discussion is making me less and less clear that I understand what was being said when you corrected me initially, despite several attempts to clarify it. So I'm going to suggest that we roll back and try this again, keeping our working vocabulary as well-defined as we can.

As I understand EY's account:

  • He endorses building an optimization process (that is, a process that acts to maximize the amount of some specified target) that uses as its target the set of human terminal values (that is, the things that we want for their own sake, rather than wanting because we believe they'll get us something else).

  • He also endorses building this process in such a way that it will improve itself as required so as to be able to exert superhuman optimizing power towards its target. The term "Friendly AI" refers to processes of this sort -- that is, self-improving superhuman optimization processes that use as their target the set of human terminal values.

  • He also endorses a particular process (building a seed AI that analyzes humans) as a way of identifying the set of human terminal values. The term "CEV" (or, sometimes, "CEV(humanity)") refers to the output of such an analysis.

  • He endorses all of this not only as pragmatic for our purposes, but also as the morally right thing to do. Even if there's an equally complex species out there whose terminal values differ from ours, on EY's account the morally right thing to do is optimize the universe for our terminal values rather than for theirs or for some compromise between the two. Members of that species might believe that humans are wrong to do so, but if so they'll be mistaken.

I understand that you believe I'm mistaken about some or all of the above.
I'm really not clear at this point on what you think is mistaken, or what you think is true instead.

Can you edit the above to reflect where you think it's mistaken?

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-01-06T00:59:59.050Z · LW(p) · GW(p)

The only part I disagree with strongly is the language of the last point. Referring to CEV as "THE morally right thing to do" makes it seem as if it were set in stone as the guaranteed best path to creating FAI, which it isn't. EY argues that building Friendly AI instead of just letting the chips fall where they may is the morally right thing to do, and I'd agree with that, but not that CEV specifically is the right thing to do.

One general design goal for FAI is to target outcomes "at least as good" as those which would be caused by benevolent human mind upload(s). So, the kind of "moral development" that a community of uploads would undergo should be encapsulated within a FAI. In fact, any beneficial area of the moral state space that would be accessible starting from humans, or any combination of humans and tools, should be accessible by a good FAI design. CEV is one proposal towards such a design.

As I understand it, yes, the thinking is to optimize for our terminal values instead of this hypothetical alien species or some compromise of the two. However, if values among different intelligent species converge given greater intelligence, knowledge, and self-reflection, then we would expect our FAI to have goals that converge with the alien FAI. If values do not converge, then we would suppose our FAI to have different values than alien FAIs.

A "terminal value" might include carefully thinking through philosophical questions such as this and designing the best goal content possible given these considerations. So, if there are hypothetical alien values that seem "correct" (or simply sufficiently desirable from the subjective perspective) to extrapolated humanity, these values would be integrated into the CEV-output.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-06T01:57:01.510Z · LW(p) · GW(p)

I agree that EY does not assert that his proposed process for defining FAI's optimization target (that is, seed AI calculating CEV) is necessarily the best path to FAI, nor that that proposed process is particularly right. Correction accepted.

And yes, I agree that on EY's account, given an alien species whose values converge with ours, a system that optimizes for our terminal values also optimizes for theirs.

Thanks.

comment by Vladimir_Nesov · 2013-01-04T23:36:50.532Z · LW(p) · GW(p)

Isn't comparing possible, "better" moralities to the current morality essential to the definition of "moral progress" and therefore indispensable to building a Friendly AI?

FAI's goals should be fixed, unchanging (by initial design). I see three possible things related to a FAI that could be described as involving a "changing morality". First, it's possible that the definition of FAI's unchanging goals could take the form where it makes sense to talk about some process of change in provisional goals, but this process of change would be a part of the definition of the unchanging result. For something like CEV, we might say that CEV is the first stage that takes care of collecting initial data from humans, tries to "extrapolate" goals from this data, decides on whether it can formulate FAI's goals, and if successful runs a FAI with these (fixed) goals.

Second, the world managed by FAI might contain agents with changing morality, if the FAI decides that agents with changing morality are the right thing to create or maintain, according to FAI's fixed morality.

And third, FAI itself might take significant time in understanding the logical implications of the fixed definition of its morality, either in general or as applied to particular (hypothetical) situations. Even mathematics with elementary axioms that human mathematicians do is quite complicated. Useful parts of the mathematics of human value might take billions of years to figure out.

comment by leplen · 2013-01-04T00:14:59.060Z · LW(p) · GW(p)

Yeah, that's an interesting question. I'll offer a conjecture.

From my understanding, one of the fundamental assumptions of FAI is that there is somehow a stable moral attractor for every AI that is in the local neighborhood of its original goals, or perhaps only that this attractor is possible. No matter how intelligent the machine gets, no matter how many times it improves itself, it will consciously attempt to stay in the local neighborhood of this point (ala the Gandhi murder pill analogy).

If an AI is designed with a moral attractor that is essentially random, and thus probably totally antithetical to human values (such as paperclip manufacture), then it's hard to be on the side of the machines. Giving control of the world over to machine super-intelligences sounds like an okay idea if you imagine them growing, doing science, populating the universe, etc., but if they just tear apart the world to make paperclips in an exceptionally clever manner, then perhaps it isn't such a good idea. This is to say, if the machines use their intelligence to derive their morality, then siding with the machines is all well and good, but if their morality is programmed from the start, and the machines are merely exceptionally skilled morality executors, then there's no good reason to be on the side of the machines just because they execute their random morality much more effectively.

I am fairly hesitant to agree with the idea of the moral attractor, along with the goals of FAI in general. I understand the idea only through analogy, which is to say not at all, and I have little idea what would dictate the peaks and valleys of a moral landscape, or even the coordinates really. It also isn't clear to me that a machine of such high intelligence would be incapable of forming new value systems, and perhaps discarding its preference for paper clips if there was no more paper to clip together.

While I'm exploring a very wide hypothesis space here about a person I know essentially nothing about, this sort of reasoning is at least consistent with what appears to be the thinking that undergirds work on FAI.

It also raises a very interesting question, which is perhaps more fundamental, and that is whether moral preferences are a function of intelligence or not. If so, the beings far more intelligent than us would presumably be more moral, and have a reasonable claim for our moral support. If not, then they're simply more clever and more powerful, and neither is a particularly good reason to welcome our robot overlords.

An idea I just had, which I'm sure others have considered, but I will merely note here, is that a recursively self-modifying AI would be subject to Darwinian evolution, with lines of code analogous to individual genes; and indeed, if there is a stable attractor for such an AI, it seems likely to be about as moral as evolution, which is not particularly encouraging.

comment by lsparrish · 2013-01-03T02:00:10.938Z · LW(p) · GW(p)

It sounds like extra work, and I'm not sure there would be a payoff. Presumably a past person whose volition was coherently extrapolated would lose their racism and other backwards attitudes, and thus be on par with a contemporary person's coherently extrapolated volition. With future persons, the argument could be made that their CEV can't be much different from a current person's for similar reasons.

Replies from: TimS, Pablo_Stafforini
comment by TimS · 2013-01-03T02:23:56.516Z · LW(p) · GW(p)

Presumably a past person whose volition was coherently extrapolated would lose their racism and other backwards attitudes

That's a lot to presume. Gwern lists some reasons from history to think this statement is unlikely to be true.

comment by Pablo (Pablo_Stafforini) · 2013-01-03T03:36:40.129Z · LW(p) · GW(p)

Presumably a past person whose volition was coherently extrapolated would lose their racism and other backwards attitudes, and thus be on par with a contemporary person's coherently extrapolated volition.

Even if we grant this assumption, this sort of argument clearly cannot be generalized to justify the exclusion of nonhuman animals--who have preferences that humans routinely disregard--from the class of beings whose volitions are to be coherently extrapolated. Why not run CEV on all present sentient beings?

comment by drethelin · 2013-01-03T07:55:07.391Z · LW(p) · GW(p)

No preferences "matter" except in relation to each other. The subset of humanity that I value isn't decided by logic, but by my values and how they interact with humans.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-03T15:39:52.406Z · LW(p) · GW(p)

You say that you only value a subset of humanity. But this is irrelevant for CEV, according to which we should extrapolate the preferences of all (present?) humans, not just those of drethelin.

comment by NancyLebovitz · 2013-01-02T06:22:35.377Z · LW(p) · GW(p)

The ability to go easily from standing to sitting and from sitting to standing has a good correlation with all-cause mortality

The test was a simple assessment of the subjects' ability to sit and then rise unaided from the floor. The assessment was performed in 2,002 adults of both sexes and with ages ranging from 51 to 80 years. The subjects were followed up from the date of the baseline test until the date of death or 31 October 2011, a median follow-up of 6.3 years.

Before starting the test, they were told: "Without worrying about the speed of movement, try to sit and then to rise from the floor, using the minimum support that you believe is needed."

As might be predicted, I'm putting in a little work on improving my ability at the test-- I have no idea whether this is an example of Goodhart's Law.

comment by Wei Dai (Wei_Dai) · 2013-01-02T13:34:34.907Z · LW(p) · GW(p)

A couple of quick points about "reflective equilibrium":

  1. I just recently noticed that when philosophers (and at least some LWers including Yvain) talk about "reflective equilibrium", they're (usually?) talking about a temporary state of coherence among one's considered judgement or intuitions ("There need be no assurance the reflective equilibrium is stable—we may modify it as new elements arise in our thinking"), whereas many other LWers (such as Eliezer) use it to refer to an eventual and stable state of coherence, for example after one has considered all possible moral arguments. I've personally always been assuming the latter meaning, and as a result have misinterpreted a number of posts and comments that meant to refer to the former. This seems worth pointing out in case anyone else has been similarly confused without realizing it.

  2. I often wonder and ask others what non-trivial properties we can state about moral reasoning (i.e., besides that theoretically it must be some sort of an algorithm). One thing we don't yet know is whether, for any given human, their moral judgments/intuitions are guaranteed to converge to some stable and coherent set as time goes to infinity. It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments, or none if, for example, their conclusions keep wandering chaotically among several basins of attraction as they review previously considered arguments. So I think the singular term "reflective equilibrium" is currently unjustified when talking about someone's eventual conclusions, and we should instead use "the possibly null set of eventual reflective equilibria". (Unless someone can come up with a pithier term that has similar connotations and denotations.)
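
A minimal toy sketch of that order-dependence (the "bounded persuasion" update rule, its parameters, and the numbers are invented purely for illustration, not anyone's actual model of moral reasoning):

```python
def final_position(arguments, start=0.0, reach=1.0):
    """Toy rule: an argument only moves you if it lies within `reach`
    of your current position; if it does, you adopt it outright."""
    position = start
    for argument in arguments:
        if abs(argument - position) <= reach:
            position = argument
    return position

moderate, extreme = 0.9, 1.8
print(final_position([moderate, extreme]))  # 1.8: the moderate argument brings the extreme one within reach
print(final_position([extreme, moderate]))  # 0.9: the extreme argument is rejected outright
```

The same set of arguments, considered in different orders, ends in different places; and if the agent keeps re-reviewing the same two arguments, it oscillates between them indefinitely rather than settling, which corresponds to the "wandering among basins of attraction" case.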

Replies from: Emile
comment by Emile · 2013-01-02T22:00:55.507Z · LW(p) · GW(p)

It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments

Another way to get several equilibria would be moral judgements whose "correctness" depends on whether other people share them. I find it likely that there would be some like that, since you get those in social norms and laws (like, on which side of the road you drive, or whether you should address strangers by their first or last name), and there's a bit of a fuzzy continuum between laws, social norms, and morality.
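
A tiny sketch of that kind of multiplicity, using a made-up driving-side coordination game (the payoff numbers are illustrative, not taken from anything above):

```python
# Pure coordination game: both drivers get 1 if they pick the same side, -10 if they don't.
payoff = {("L", "L"): 1, ("R", "R"): 1, ("L", "R"): -10, ("R", "L"): -10}

def other(side):
    return "R" if side == "L" else "L"

def is_equilibrium(a, b):
    # Neither driver gains by unilaterally switching sides.
    return payoff[(a, b)] >= payoff[(other(a), b)] and payoff[(a, b)] >= payoff[(a, other(b))]

print([(a, b) for a in "LR" for b in "LR" if is_equilibrium(a, b)])  # [('L', 'L'), ('R', 'R')]
```

Both conventions are stable once adopted, even though neither is "correct" on its own; judgements whose correctness depends on what others do inherit that same multiplicity.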

comment by NancyLebovitz · 2013-01-04T16:09:15.582Z · LW(p) · GW(p)

Lead and crime: arguments that lead has a lot to do with crime levels, and discussion of why this has gotten so little attention.

Just to indulge in a little evolutionary psychology..... Punishing people and helping people are both strong drives, but spending a lot of money on lead abatement (the lead from gasoline is still in the soil, and it keeps coming back-- lead paint is still a problem, too) is pretty boring.

ETA: And worse, progress with lead abatement is literally invisible (you don't get a dam or a highway to make it look like you're doing something), and the good effects take some 15 or 20 years to become obvious.

Replies from: Douglas_Knight, None, NancyLebovitz
comment by Douglas_Knight · 2013-01-05T08:03:45.012Z · LW(p) · GW(p)

The basic point is reasonable, but there are so many things that bother me about that article.

Drum's credulity varies a lot in this article. His lowest level is about where I stand. I have to wonder if that actually reflects his beliefs, and the rest is enthusiasm he forces on himself to reflect value rather than truth; that is, he is doing an expected value calculation. Certainly, he should be applauded for scope sensitivity.


Perhaps the biggest thing that bothers me is that Drum tries to have it both ways: small amounts of lead matter and big amounts of lead matter. It seems rather unlikely that this is true. Maybe 10μg/dL has a huge effect, but if so, I doubt that 20 has double that effect, and this ruins all the analysis of the first half of the article. This is important because there is a logical trade-off between saying that past lead reduction was useful and saying future lead reduction will be useful. In particular, Drum says that Kleiman says that if the US were to eliminate lead, it would reduce crime by 10%. Did he just make up this number, or does it come out of a model? I'd like to see the model because even if he pulled the model out of thin air, it forces him to deal with the logical trade-off.

In Kleiman's book, he says that eliminating lead paint would reduce crime by 5% and attributes it to Nevin 2000. On the same page, he misquotes Nevin in a way that makes me not trust Kleiman with models. But that's OK because he has a citation, not a model. I cannot find the claim in Nevin's paper. There is a model on p19 that says that 6 points of IQ, applied to the lowest 30% of the population, could explain the past decline. And that's at a rate of 2 points of IQ for 10μg/dL, a small enough rate that I'm willing to extrapolate linearly. If you assume crime is linear in lead, the 5% number is reasonable, except for the assumption that lead explains all of the past decline. (I'm not sure Nevin actually makes this assumption because I don't think he makes a prediction about eliminating lead; in this section, I think he's just doing a reality check that the known IQ effect of lead plus the known correlation of IQ and crime is big enough to explain the whole drop in crime.)

So I am bothered by Drum's language about the effects of low levels of lead, even though the suggestion of a 10% drop in crime maybe survives the trade-off between past and future. (And how does Kleiman's 5% turn into "Kleinman's" 10%? windows vs windows+soil?)


From the first half of the article:

the field of econometrics gives researchers an enormous toolbox of sophisticated statistical techniques

Econometrics gives people enough rope to publish themselves. Plus they implement these algorithms in spreadsheets, to hide the bugs from themselves.

murder rates have always been higher in big cities than in towns and small cities

If lead explains everything, this should not always have been true. In fact, I think it was not true in 1960. The graph Drum cites starts in 1975, after most of the increase in national murder rates had already happened, but there is very little dependence on city size until later. The graph seems to me evidence against the claim that lead explains this detail. Anyhow, such bucketed graphs are a bad way to test this hypothesis. In particular, there are only 9 "big cities" and NYC has 1/3 of this population. The convergence today is probably driven just by NYC now having a lower murder rate than small cities.

Drum says that Newark's crime rate dropped 75%. That is true, but it is also true that Newark's murder rate has rebounded to its peak. I don't know how to resolve this. I usually prefer murder rates because they are harder to fake, but there are only about 80 murders in the worst years, making the data quite noisy.

That the graphs of leaded gasoline and crime match perfectly, up until the year that Nevin's first paper was published, screams publication bias.

Crack:

Trying to explain the crack epidemic in terms of childhood seems like a serious error to me. It seems very clear to me that it was contagious. How it spread and why it burnt itself out, I do not know. Regardless, one can disprove Nevin's model's claim to explain the crack epidemic, like Levitt's spreadsheet fraud before it, because it assumes that the age of criminals is constant in time. In fact, the crack epidemic involved young murderers, born after lead levels had started to decline. I think Nevin worries about this in later papers, but I don't know what he does.

Here is a suggestion for a better model for testing Nevin's hypothesis than he used in 2000: instead of lagging on some constant, create a new time series of murders by year of birth. This also corrects for demographic problems such as the baby boom. The disadvantage is that this loses exogenous effects, such as the crack epidemic, which hit multiple ages simultaneously. Yet another time series, to avoid the problem of missing data, uses the age of the victim rather than of the perp.
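
A minimal sketch of that re-indexing, with made-up column names and numbers (this is not Nevin's data or code, just the shape of the transformation):

```python
import pandas as pd

# Hypothetical counts of murders broken out by calendar year and offender age.
murders = pd.DataFrame({
    "year":  [1985, 1985, 1990, 1990, 1995, 1995],
    "age":   [  18,   25,   18,   25,   18,   25],
    "count": [ 120,   90,  200,   95,  150,  100],
})

# Instead of lagging lead exposure by one fixed constant, attribute each
# murder to the offender's birth cohort and aggregate on that.
murders["birth_year"] = murders["year"] - murders["age"]
by_cohort = murders.groupby("birth_year")["count"].sum()
print(by_cohort)
```

The cohort series can then be compared directly against lead exposure in each birth year, and running the same transformation on victim age instead of offender age gives the alternative series mentioned above.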

So Nevin fails to explain the crack epidemic, but if he just explains the big rise and the big fall, that's a big deal. Unfortunately, the presence of the crack epidemic masks the big fall. In the absence of crack, when would crime have started falling? Perhaps it would have started falling earlier, but was elevated by crack. Or perhaps all those dead or jailed young teens would have become 25 year old criminals and so the effect of crack was to speed things up, including the falling crime rate.

comment by [deleted] · 2013-01-10T19:08:57.153Z · LW(p) · GW(p)

There's a lot you can do to remediate lead and the bioavailable forms of it, fortunately (been working on a garden in an urban area, and bioremediation is a chief concern) -- it doesn't just have to involve removing it. Unfortunately, it's still likely to be rather expensive and unglamorous, so it'll be a tough sell as a point of policy.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-10T22:25:30.028Z · LW(p) · GW(p)

The sexy project would be to figure out how to undo the effects of lead on people years after they'd been exposed as children. I think succeeding at this would wonderful, but I wouldn't put off cleaning up lead in the environment in the meanwhile.

Replies from: None
comment by [deleted] · 2013-01-10T22:36:17.548Z · LW(p) · GW(p)

That'd be beyond "sexy"; the effects of lead poisoning on the central nervous system are generally considered irreversible. I daresay anything that could repair that sort of brain damage would have a whole host of other applications...

comment by leplen · 2013-01-03T19:11:24.330Z · LW(p) · GW(p)

So I'm fairly new to LessWrong, and have been going through some of the older posts, and I had some questions. Since commenting on 4-year-old posts was probably unlikely to answer those questions or to generate any new discussion, I thought posting here might be more appropriate. If this is not proper community etiquette, I'm happy to be corrected.

Specifically, I'm trying to evaluate how I understand and feel about this post: The Level Above Mine

I have some very mixed feelings on this post, and the subject in general. (You might say I've noticed that I'm confused.) Sure, it's hard to evaluate reliably just how intelligent someone more intelligent than you is, just as a test that every student in a class aces doesn't allow you to identify which student knows the information best. But doesn't the idea of a persistent ranking system, and the concern with it, imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset. Indeed, it seems in many ways the raison d'être of LW relies on the assumption that it is possible to improve your intelligence. I would further argue that LW relies on the assumption that it is possible to recursively improve your intelligence (i.e., learning things that help you learn better).

Is it possible that the fundamental attribution error is at work here? I mean, if it's ridiculous to believe in "mutants born with unnaturally high anger levels" then why the rush to believe in mutants with unnaturally high levels of intelligence? I'm not sure what to make of a post that discusses assessing how many standard deviations above average intelligence someone is, if I really believe that "Any given aspect of someone's disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability."

Indeed, if we make the fundamental attribution error when assessing someone because "we don't see their past history trailing behind them in the air", then can we not say the same for experiences that result in greater situational intelligence? Perhaps I'm straining the bounds of the metaphor slightly, since problem-solving intelligence tends to be more enduring than vending-machine-kicking anger, but is it so fixed that my SAT scores from the 7th grade are meaningful or worth discussing? Is it possible that what we perceive as greater intelligence, as "the level above mine", is just someone who has spent more time working on something, or working on something similar to it? What is the prior probability that someone picks up a new idea quickly because they've been exposed to a similar idea before, versus the prior probability that they are of mutant intelligence?

The entire ranking debate, to me, sounds suspiciously like human social hierarchies, and since that's a type of irrationality humans are especially prone to, it makes me very suspicious. I know from personal experience that being considered of "above average intelligence" is a very useful social tool which I can use to create a place for myself in social hierarchies, and often that place is not only secure, but also grants me reasonably high social status. I have at various times in my life evaluated others, and granted social status accordingly, on the basis of their SAT scores and other similar measures. Is that what is going on here?

Fundamentally, I believe this question boils down to a handful of related questions:

  1. How accurate over time is our evaluation of general intelligence?
  2. Does our love of static hierarchies, esp. one that privileges intelligence, affect our answer to 1?

Sub-questions to #1

  • a. How variable is intelligence, and over what time span? Or more generally, what do we estimate are the most heavily weighted inputs to a function that describes intelligence?
  • b. Is there an upper bound on human intelligence?
  • c. Are the people whose intelligence we're evaluating operating near that bound?
  • d. Can we reliably distinguish between intelligence and knowledge? How?

I'm not sure about question 1, but I'm pretty sure the answer to question 2 is yes.

Replies from: Kaj_Sotala, drethelin, Viliam_Bur, saturn, knb
comment by Kaj_Sotala · 2013-01-04T07:27:33.875Z · LW(p) · GW(p)

"Intelligence" seems to consist of multiple different systems, but there are many tasks which recruit several of those systems simultaneously. That said, this doesn't exclude the possibility of a hierarchy - in some people all of those systems could be working well, in some people all of them could be working badly, and most folks would be somewhere in between. (Which would seem to match the genetic load theory of intelligence.) But of course, this is a partially ordered set rather than a pure hierarchy - different people can have the same overall score, but have different capabilities in various subtasks.

IQ in childhood is predictive of IQ scores in adulthood, but not completely reliably; adult scores are more stable. There have been many interventions which aimed to increase IQ, but so far none of them has worked out.

IQ is one of the strongest general predictors of life outcomes and work performance... but that "general" means that you can still predict performance on some specific task better via some other variable. Also, IQ is one of the best such predictors together with conscientiousness, which implies that hard work also matters a lot in life. We also know that e.g. personality type and skills matter when it comes to rationality.

I would suppose that the kinds of people referred to "the level above mine" would be some of those rare types who've had the luck of getting a high score on all important variables - a high IQ, a high conscientiousness, a naturally curious personality type, high reserves of mental energy, and so on. To what extent these various things are trainable is an open question.

Replies from: leplen
comment by leplen · 2013-01-05T00:39:21.673Z · LW(p) · GW(p)

In which case, if IQ is a good and stable predictor, then we are placing high confidence in #1 if we know their IQ. Is IQ or test scores what we commonly base intelligence assessments on?

If we can put high confidence in #1 via testing, can we still put high confidence in it on based on a general impression or a conversation, or even on the basis of mysterious evidence? e.g. This quote: "(Interesting question: If I'm not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me? I don't remember any stunning epiphanies in his presentation at the Summit. I didn't talk to him very long in person. He just came across as... formidable, somehow.)"

I mean, I would assume aura judgment is less effective than testing, particularly at discriminating between levels above that of the aura judge, but how much worse isn't clear to me. I'm particularly suspicious of it because evaluating someone else's intelligence routinely involves a comparison with myself, and I'm very uncertain I can make those comparisons without bias.

I appreciate your response immensely. I have almost no training in any sort of cognitively focused science, and so my impressions about the constancy of intelligence are largely drawn from my personal experience, which is obviously an enormously impoverished data set. Your explanation and data does offer a compelling reason to believe intelligence corresponds with some fixed aspect of an individual, at least with some reasonable probability.

I can certainly think of exceptions, individuals with triple-digit SAT scores who went on to pursue Ph.D.s, but perhaps that does not mean the model is wrong, as unlikely events do occur. Or perhaps adult IQ doesn't stabilize until sometime after 25, and so they underwent a large IQ fluctuation in college. Perhaps as I age and spend more time with older people, I'll become more confident in predicting future intelligence from current intelligence.

Replies from: Kaj_Sotala, gwern
comment by Kaj_Sotala · 2013-01-05T07:11:36.061Z · LW(p) · GW(p)

Intelligence is generally measured using either explicit IQ tests or performance on tasks which are known to correlate reliably with IQ (such as SAT scores).

I think there was a study somewhere - it might have been discussed on this site, but I couldn't find it on a quick search - where an audience listened to two people have a conversation, and they knew that one of the people had been allowed to pick a topic that he knew a lot about and the other person didn't. Despite knowing that, the audience consistently thought that the person who'd been allowed to pick the topic was more intelligent, as he had better things to say about it. That would at least weakly suggest that people aren't very good at controlling for irrelevant factors when estimating someone's intelligence.

Replies from: gwern, gwern
comment by gwern · 2013-01-06T18:53:17.962Z · LW(p) · GW(p)

Found it: http://lesswrong.com/lw/4b/dont_revere_the_bearer_of_good_info/

One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because the former only interact with the latter in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.

comment by gwern · 2013-01-05T17:15:53.590Z · LW(p) · GW(p)

If anyone knows what this study is, I'd be very interested to learn more about it, since it sounds like it might be a falsification of my hypothesized http://www.gwern.net/backfire-effect

EDIT: found it by accident, see sibling comment

comment by gwern · 2013-01-05T01:20:16.492Z · LW(p) · GW(p)

If we can put high confidence in #1 via testing, can we still put high confidence in it on based on a general impression or a conversation, or even on the basis of mysterious evidence? e.g. This quote: "(Interesting question: If I'm not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me? I don't remember any stunning epiphanies in his presentation at the Summit. I didn't talk to him very long in person. He just came across as... formidable, somehow.)"

I don't think you can. A conversation or 'general impression' is going to be based on interpersonal skills and, unless it is a highly technical conversation, mostly on verbal sorts of skills. Asking whether an IQ test would be less reliable than a conversation is a little like asking 'if we drop the SAT Math section and just use Verbal, is that better than using both the Math and Verbal sections?' No one item loads very heavily on g, which is why IQ tests typically have a bunch of subtests.
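
A quick simulated illustration of the point, assuming a simple one-factor model with an arbitrary noise level (purely illustrative, not calibrated to any real test battery):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 10_000, 10

g = rng.standard_normal(n_people)                # latent general factor
noise = 1.5 * rng.standard_normal((n_people, n_subtests))
subtests = g[:, None] + noise                    # each subtest = g plus independent item noise

single_item = subtests[:, 0]
full_battery = subtests.mean(axis=1)

print(np.corrcoef(single_item, g)[0, 1])    # roughly 0.55: one item is a weak measure of g
print(np.corrcoef(full_battery, g)[0, 1])   # roughly 0.90: averaging many items washes out the item noise
```

A conversation is closer to the single-item case, and a heavily verbally-loaded item at that.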

comment by drethelin · 2013-01-03T21:37:16.092Z · LW(p) · GW(p)

Why is it ridiculous to believe in mutants born with high anger levels?

Replies from: leplen
comment by leplen · 2013-01-03T22:31:59.387Z · LW(p) · GW(p)

Following the line of reasoning in Correspondence Bias, because it's probably much more likely that someone who seems to you to "be an angry person" has just had a bad day.

According to our current understanding, significant mood-altering mutations are a far less likely explanation for someone's anger than the many more mundane causes. This is one of the reasons gene therapy is not typically suggested as part of treating anger management issues.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-01-03T22:47:40.671Z · LW(p) · GW(p)

Wouldn't it be interesting if everyone had exactly equal hormonal tendencies toward various emotions?

"This particular episode of angry behavior is not as strong of evidence that this person has angry tendencies as my brain wants to treat it" is not the same as "Angry tendencies do not exist at all."

Replies from: leplen
comment by leplen · 2013-01-04T02:41:24.235Z · LW(p) · GW(p)

Okay, sure. I'm certainly not arguing that there is no variation in human intelligence or emotional make-up. Indeed, it is probably supremely likely that there are mutants born with "high anger levels", whatever that is supposed to mean. While I am not a geneticist and can't speak to the genetic complexity of that particular set of mutations, there are a lot of humans, and something in that vein seems at least as likely as 1 in 5 billion, so there's bound to be a couple of them around. It was sloppy writing, I suppose, but the implication wasn't that no mutants with high anger levels exist, just that the hypothetical person in the example in all probability isn't one of them. I was working within the framework of an existing metaphor, not making my own original research claim about angry mutants.

I still feel like there's a large discrepancy between how anger and intelligence are discussed in the two articles. I feel like intelligence is given an ontological weight that anger is not granted. If you met John Conway at a summer camp, or better yet, some no-name kid who nonetheless carried on a brilliant conversation with you, dazzling you with insights you'd never imagined, would you also tell yourself, "This particular episode of intelligent behavior is not as strong a piece of evidence that this person has intelligent tendencies as my brain wants to treat it."? If you would, then when you read The Level Above Mine and the following posts, do you feel like that filter is being carefully applied? If you would not, then why is there a difference between intelligence and anger?

Replies from: Risto_Saarelma, knb
comment by Risto_Saarelma · 2013-01-04T10:50:34.193Z · LW(p) · GW(p)

there are indeed mutants born with "high anger levels", whatever that is supposed to mean

Maybe think of animal taming, and the ways tame animals ended up different from wild ones. Taming seems to work way too fast to rely only on new mutations, so there's probably existing genetic variation in aggressiveness in the starting population that it can use.

There's also starting to be some research on actual high anger mutations in humans, which seem to be a bit more common than 1 in 5 billion.

I still feel like there's a large discrepancy between how anger and intelligence are discussed in the two articles. I feel like intelligence is given an ontological weight, that anger is not granted.

Anger is a much more situational thing, so maybe you should talk about temperament instead, as the relatively stable emotional makeup of a person that affects how easily they become angry. Having high intelligence can make you do things that are very improbable otherwise, like proving Fermat's conjecture. But many causes can lead to quite a similar fit of anger: both a large stimulus with a calm temperament and a small stimulus with an anger-prone temperament will work. So I don't see the problem with the argument. If I see Alice proving Fermat's conjecture, Alice being very intelligent is the only solid hypothesis I have. If I see Bob angrily kicking a vending machine, both Bob having a hair-trigger temperament and Bob having had a very bad day are plausible hypotheses.

comment by knb · 2013-01-10T21:09:02.760Z · LW(p) · GW(p)

I still feel like there's a large discrepancy between how anger and intelligence are discussed in the two articles.

Intelligence really is more fixed than "anger". Anger is an emotion, and even people highly inclined toward anger are not angry all (or even most of) the time. To put it plainly, you are more likely to come across a calm person experiencing rage, than a mentally retarded person having a conversation at Conway's level. Do you really doubt that?

comment by Viliam_Bur · 2013-01-06T22:15:31.953Z · LW(p) · GW(p)

I will start with: +1 for caring about the community etiquette

Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset. Indeed, it seems in many ways the raison d'etre of LW relies on the assumption that it is possible to improve your intelligence.

Intelligence (IQ) is more or less static. If you have a scientifically proven method of increasing IQ, please post it here, and I am sure many people will try it. But at this moment, LW is not about increasing human intelligence. It is about increasing human rationality -- learning a better way to use the intelligence (brain) we already have -- and about machine intelligence. A hypothetical intelligent machine could increase its intelligence by changing its code or adding new hardware. For humans, similar change would require surgery or implants beyond our current knowledge.

if it's ridiculous to believe in "mutants born with unnaturally high anger levels" then why the rush to believe in mutants with unnaturally high levels of intelligence?

How high is unnaturally high? Intelligence scores fall on a Bell curve. One in two people has an IQ above 100. About one in six has an IQ above 115. About one in forty-four has an IQ above 130; one in a hundred above 135; one in a thousand above 146; one in ten thousand above 156... this is all within the Bell curve. It is possible to search for people with this level of intelligence. (Someone with an IQ of 300, on the other hand, would be unnatural.)
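
For reference, those tail fractions follow directly from a normal curve with mean 100 and SD 15; a quick check (assuming SciPy is available):

```python
from scipy.stats import norm

# IQ scores are conventionally normed to mean 100, standard deviation 15.
for iq in (115, 130, 135, 146, 156):
    p = norm.sf(iq, loc=100, scale=15)          # upper-tail probability
    print(f"IQ > {iq}: about 1 in {round(1 / p)}")
# Prints roughly: 1 in 6, 1 in 44, 1 in 100, 1 in 900, 1 in 10,000.
```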

The question is, how much real-world effect do these levels of intelligence have. Clearly, intelligence is not enough to make people smart -- a person with a high IQ can still believe and do stupid things. (This is why we usually don't obsess about IQ, and discuss rationality instead.) On the other hand, some IQ may be necessary for some outcome, or at least could make the same person get the same outcome significantly faster. (This is easier to understand by imagining people with very low IQs. Even the best rationality training is not going to make them new Einsteins.) Being faster does not seem like a critical difference, but for sufficiently complex tasks the difference between years and decades, or maybe decades and centuries, can determine whether a human is able or unable to ever complete the task.

Is it possible that what we perceive as greater intelligence, as "the level above mine" is just someone who has spent more time working on something, or working on something similar to it?

In the article, Eliezer considers the alternative explanations. (Maybe Conway had more opportunities to show his mastery. Maybe he specializes in doing something different. Maybe Conway used the time of his youth better.) But maybe... it is the difference in general intelligence. All these explanations deserve to be considered.

What is the prior probability that someone picks up a new idea quickly because they've been exposed to a similar idea before, versus the prior probability that they are of mutant intelligence?

Depends on circumstances. Did it happen once, or does it happen all the time? Does it happen consistently in a field where both persons spent a lot of time learning? Does it happen in different fields? The prior probability of someone having higher intelligence is not so small that evidence like this couldn't change the result.

2. Does our love of static hierarchies, esp. one that privileges intelligence, affect our answer to 1? I'm not sure about question 1, but I'm pretty sure the answer to question 2 is yes.

Just because we have a bias for X, it does not automatically mean non-X must be true. People do love hierarchies. People are bad at estimating their skills, or skills of others. That does not mean different people can't really have different traits.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-07T00:25:11.290Z · LW(p) · GW(p)

Intelligence (IQ) is more or less static. If you have a scientifically proven method of increasing IQ, please post it here, and I am sure many people will try it. But at this moment, LW is not about increasing human intelligence. It is about increasing human rationality -- learning a better way to use the intelligence (brain) we already have -- and about machine intelligence.

Is it solid that IQ tests can distinguish between the intelligence we already have, and our ability to use that intelligence?

comment by saturn · 2013-01-17T08:50:18.857Z · LW(p) · GW(p)

doesn't the idea of a persistent ranking system, and the concern with it imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset.

I'd just like to point out that a growth mindset is fully compatible with fixed intelligence. Fixed intelligence doesn't mean that growth is impossible, only that some people can grow faster than others.

comment by knb · 2013-01-10T20:51:03.208Z · LW(p) · GW(p)

There actually are mutants with high anger levels (read about Brunner's syndrome). Less Wrong is not about improving human intelligence but rather human rationality. The two are obviously distinct.

If you are asking these basic questions about intelligence (i.e., proposing that it can easily be changed), you simply need to read more about this topic.

comment by TimS · 2013-01-02T02:31:54.803Z · LW(p) · GW(p)

What exactly is the function of the Rationality Quotes threads? They seem like nothing more than a litmus test for local orthodoxy.

Replies from: Alicorn, Jayson_Virissimo, ChristianKl, Jabberslythe, Douglas_Knight
comment by Alicorn · 2013-01-02T03:40:52.062Z · LW(p) · GW(p)

They are repositories for quotes that resonate with and/or amuse us. It might be a little too easy to get karma that way, admittedly, but I think they are nice to have around.

Replies from: TimS
comment by TimS · 2013-01-02T03:50:06.936Z · LW(p) · GW(p)

Sources of karma don't bother me. It just seems like the standards for voting in that thread - both comments and replies - are really different from the rest of the site. Not looser, but different.

It seems like I'm always surprised by the vote totals there - both upvotes and downvotes - when I think I have a feel for what folks like in the rest of the site.

comment by Jayson_Virissimo · 2013-01-02T04:06:32.603Z · LW(p) · GW(p)

What exactly is the function of the Rationality Quotes threads? They seem like nothing more than a litmus test for local orthodoxy.

One of their functions is to act as a kind of litmus test for local orthodoxy.

Replies from: TimS
comment by TimS · 2013-01-02T21:42:47.982Z · LW(p) · GW(p)

This is local orthodoxy?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-03T05:30:48.503Z · LW(p) · GW(p)

"X is a good test for Y" does not imply "every part of X reflects Y."

Replies from: TimS
comment by TimS · 2013-01-03T05:39:18.359Z · LW(p) · GW(p)

I don't think you and Jayson are agreeing.

comment by ChristianKl · 2013-01-04T14:24:54.974Z · LW(p) · GW(p)

I don't think it's a test for orthodoxy. Take the quote "To see is to forget the name of the thing one sees" ― Paul Valéry, which has 13 upvotes as I write this.

The position that gets articulated in that quote isn't orthodox on LessWrong. There are a bunch of quotes that are interesting instead of just making an orthodox point.

Replies from: TimS
comment by TimS · 2013-01-04T14:54:58.934Z · LW(p) · GW(p)

I don't think that quote is irrational, for basically the reasons TheOtherDave said.

Replies from: ChristianKl
comment by ChristianKl · 2013-01-04T15:31:27.161Z · LW(p) · GW(p)

I didn't claim that it's irrational. I claim that it's not orthodox rationality.

Take a quote that makes a more orthodox point: "The social sciences are largely hokum." --Sheldon Cooper

That quote sits at -2. It makes a point that many members of the community believe, but it doesn't make that point in a way that's interesting.

Replies from: TimS
comment by TimS · 2013-01-04T19:04:19.981Z · LW(p) · GW(p)

I think your original quote is rational, as this community defines the term. I think the Big Bang Theory quote is not rational - in part because of denotative implications.

I think Jabberslythe is probably right when he says the purpose is celebrating in-group feelings. I'm not sure I approve of that purpose.

comment by Jabberslythe · 2013-01-04T04:42:59.946Z · LW(p) · GW(p)

They trigger the ingroup fuzzies really well for me. I think quotes inspire me as well sometimes and it's otherwise hard to find quotes that inspire in the right direction.

comment by Douglas_Knight · 2013-01-03T04:53:29.222Z · LW(p) · GW(p)

The purpose is clearly articulated in the first one.

hover text

Replies from: TimS
comment by TimS · 2013-01-03T05:36:54.201Z · LW(p) · GW(p)

I'll be moving to Redwood City, CA in a week, so forgive me if I don't get a regular post out every day between now and then. As a substitute offering, some items from my (offline) quotesfile

Now I'm really confused.

comment by NancyLebovitz · 2013-01-06T15:27:46.417Z · LW(p) · GW(p)

LW has been loading slowly lately-- sometimes it times out. Has anyone else been having this problem?

Replies from: BerryPick6, army1987
comment by BerryPick6 · 2013-01-06T15:29:14.459Z · LW(p) · GW(p)

Yeah, I've been experiencing this as well. It mostly happens when I'm trying to use karma or when I first open up LW.

Replies from: Kawoomba, army1987
comment by Kawoomba · 2013-01-06T16:00:16.194Z · LW(p) · GW(p)

As good a reason as any if some of our comments aren't sufficiently upvoted!

comment by A1987dM (army1987) · 2013-01-07T10:58:43.159Z · LW(p) · GW(p)

It mostly happens when I'm trying to use karma

ISTM that that happens to me a lot when I'm on my phone, but very seldom when I'm using my laptop. (But it's not like I did statistics.)

comment by mstevens · 2013-01-03T11:09:42.169Z · LW(p) · GW(p)

Random idea inspired by the politics thread: Could we make a list of high quality expressions of various positions?

People who wished to better understand other views could then refer to this list for well expressed sources.

It seems like there might be some argument about who "really" understood a given point of view best, but we could resolve debates by having eg pastafarianism-mstevens for the article on pastafarianism I like best, and pastafarianism-openthreadguy for the one openthreadguy prefers.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-03T16:57:25.721Z · LW(p) · GW(p)

TVTropes has an -amazing- political and philosophical library. They have the single-best description of Objectivism I've ever seen, in particular.

Replies from: mstevens
comment by mstevens · 2013-01-03T17:18:59.116Z · LW(p) · GW(p)

You're right, the tvtropes article on Objectivism is actually really good. I knew they had a lot of good non-trope content.

Replies from: drethelin, NancyLebovitz
comment by drethelin · 2013-01-03T23:23:55.405Z · LW(p) · GW(p)

Wow, that's amazingly good. It reminds me of how baffled I was, after reading Atlas Shrugged as a teenager, by the degree to which everyone hated Ayn Rand; I now realize the reason is that everyone thought she was arguing against things she wasn't arguing against.

Replies from: shminux
comment by shminux · 2013-01-04T00:37:59.422Z · LW(p) · GW(p)

It's a great description, I agree. Unfortunately, Atlas Shrugged is top-heavy with meta-ethics, endlessly fighting the "the motive of service to others is intrinsically virtuous" windmill/strawman. So much so that I was unable to continue reading after the first 100 pages or so, given that the quoted statement seems obviously fallacious to me to begin with, yet she kept pounding on and on.

Replies from: NancyLebovitz, drethelin
comment by NancyLebovitz · 2013-01-04T14:50:34.184Z · LW(p) · GW(p)

fighting the "the motive of service to others is intrinsically virtuous" windmill/strawman.

Would that it were a windmill/strawman.... but sometimes dysfunctional families teach their less-favored children to believe it, and I'd say that some nations certainly go in for it now and then.

Admittedly, this isn't service to others in general, it's to some specific person or organization which wants the service, and that changes the concept somewhat.

comment by drethelin · 2013-01-04T07:04:19.365Z · LW(p) · GW(p)

Oh, don't get me wrong: I'm not an Objectivist, and I think Atlas Shrugged is badly written. I just get really tired of people attacking Ayn Rand for stupid reasons.

comment by NancyLebovitz · 2013-01-04T14:47:16.778Z · LW(p) · GW(p)

I wonder whether not being a formally respectable source is actually good for tvtropes.

Replies from: TimS
comment by TimS · 2013-01-04T15:02:43.782Z · LW(p) · GW(p)

By not being formally respectable, TVtropes gets a skeptical audience (western nerds) to seriously consider certain philosophical positions that they are otherwise quite hostile to.

If LW concepts (e.g. mindkiller, raising the sanity waterline, paying rent in anticipated experience) were as popular as similarly philosophical TVtropes concepts, I think SI and CFAR leadership would be thrilled.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-04T15:15:03.799Z · LW(p) · GW(p)

I was thinking about it from a different angle-- that sometimes lack of respectability leaves more room for conscientiousness.

It doesn't always work that way-- but so far tvtropes is a home for people who genuinely want to get the details of popular culture right. It seems odd, but it doesn't seem to have the problems with fraud and sloppiness that science does. Is this because people care more about popular culture than science? Or is it just that if tvtropes becomes respectable, the rewards for cheating will go up?

Replies from: TimS
comment by TimS · 2013-01-04T19:11:53.338Z · LW(p) · GW(p)

I hadn't thought of it that way - it's very plausible.

But some of the fraud in science is just lost purpose. If you need a certain number of publications to advance in your job, submitting fraudulent studies seems much more rewarding. And TVtropes doesn't have a similar issue - in part because of the lack of respectability you noted.

comment by JoshuaZ · 2013-01-02T01:35:12.588Z · LW(p) · GW(p)

Is rubber part of the Great Filter? This thought occurred to me while reading Charles Mann's "1493" about the biological exchange post Columbus.

Rubber was a major part of the industrial revolution (it allowed insulation of electric lines, and it is important in many industrial applications for preventing leaks). Rubber only arose on a single continent, in a small set of species. While synthetic rubber exists, for many purposes it isn't of as high quality as natural rubber. Moreover, having the industrial infrastructure to make synthetic rubber would be extremely difficult without natural rubber. Thus, a civilization just like ours but without rubber might not have been able to go through the industrial revolution. This situation may also be relevant to Great Filter issues in our future: if civilization collapses and rubber is wiped out in the collapse, is this another potential barrier to returning to a functional civilization, especially if there's less available coal and oil to make synthetic rubber easily?

Replies from: gwern
comment by gwern · 2013-01-02T02:01:56.998Z · LW(p) · GW(p)

Rubber doesn't sound that important to me. The Wikipedia article includes all sorts of useful bits: it only went into European use in the late 1700s, at earliest, well after most datings of the Scientific and Industrial Revolutions; most rubber is now synthesized from petroleum; many uses of insulation like transoceanic telegraphs used gutta-percha which is similar but not the same as rubber (and was superior to rubber for a long time); and much use is for motor-vehicle tires, which while a key part of modern civilization, does not seem necessary for cheap long-distance transportation of either goods or humans (consider railroads).

So rubber doesn't look like a defeater. If it didn't exist, we'd have more expensive goods, we'd have considerably different transportation systems, but we'd still have modern science, we'd still have modern industry, we'd still have cheap consumer goods and international trade, and so on and so forth.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-01-02T02:46:29.656Z · LW(p) · GW(p)

That's a pretty convincing analysis that rubber isn't an aspect of the Filter.

comment by ahh · 2013-01-10T02:50:33.950Z · LW(p) · GW(p)

Can anyone recommend a good therapist in San Francisco (or nearby) who's rationalism-friendly? I have some real problems with depression and anxiety, but the last time I tried to get help the guy told me I was paying too much attention to evidence and should think more spiritually and less rationally. Uh...huh. If you don't want to post publicly here, PM or email is fine.

Replies from: pedanterrific, drethelin, knb
comment by pedanterrific · 2013-01-10T21:07:54.332Z · LW(p) · GW(p)

I'll second drethelin; CBT is both evidence-based as a treatment method (there's evidence it works) and evidence-based in practice, meaning you don't have to believe in it or anything: you just follow the prescribed behaviors and observe the results. Really, it's highly rationalism-friendly, being mainly about noticing and combatting "cognitive distortions" (e.g. generalizing from one example, inability to disconfirm, emotional reasoning, etc.). A therapist who specializes in CBT can pretty safely be assumed not to be in the habit of dragging "spirituality" into their work.

Replies from: ahh
comment by ahh · 2013-01-10T21:37:08.524Z · LW(p) · GW(p)

I agree that CBT is well supported by the evidence and in general should be rationalism-friendly, but that isn't always so. The therapist I mentioned in my OP was, in fact, calling himself a CBT practitioner. So I was hoping someone knew a CBT practitioner (or one using another equally well-supported method, honestly) whom they personally liked.

Replies from: Vaniver, pedanterrific
comment by Vaniver · 2013-01-11T02:26:27.422Z · LW(p) · GW(p)

There are a handful of CBT books that are about as effective, in general, as having a therapist. You might be interested in Feeling Good, the Depression Workbook, or the Anxiety Workbook. I recommend that you keep looking for social support as well.

comment by pedanterrific · 2013-01-10T22:04:20.256Z · LW(p) · GW(p)

Oh. Well, that's surprising.

Sorry, I'm not in the area.

comment by drethelin · 2013-01-10T18:55:03.127Z · LW(p) · GW(p)

CBT-style therapy is pretty well founded on science.

comment by knb · 2013-01-10T18:11:03.086Z · LW(p) · GW(p)

You might want to look at Rational-emotive behavior therapy (REBT), and the affiliated organizations' websites. There are usually a few REBT therapists in any major city.

comment by negamuhia · 2013-01-01T14:15:04.832Z · LW(p) · GW(p)

Happy New Year, LWers. I'm on a 5-month vacation from uni and don't have a job. Also, my computer was stolen in October, cutting short my progress in self-education.

Given all this free time I have now, which of these 2 options is better?

  • Buy a road bicycle & start a possibly physically risky job as a freelance bike-messenger within my city (I'm that one guy from Nairobi) in order to get out of the house more, then buy a laptop and continue my self-education in programming, computer science, philosophy, etc.

or

  • Buy a laptop, do quick and easy WordPress websites for local businesses, then buy the bike and use it for leisurely riding under no pressure?

I only have money for one or the other for now, and for some reason I'm hesitating. Maybe it's because I want to do both. This is important to me, and I'll appreciate any discussion on this. Thanks.
Replies from: dbaupp, None, negamuhia
comment by dbaupp · 2013-01-01T14:56:29.053Z · LW(p) · GW(p)

I don't have anything specific to offer, but (in theory) hard choices matter less. And if you literally can't decide between them, you can try flipping a coin to make the decision and, while it is in the air, see which way you hope it will land; that should be your choice.

Replies from: negamuhia, None
comment by negamuhia · 2013-01-03T09:30:08.146Z · LW(p) · GW(p)

Sorry for the delayed reply.

comment by [deleted] · 2013-01-01T16:29:41.466Z · LW(p) · GW(p)

.

comment by [deleted] · 2013-01-01T16:37:25.977Z · LW(p) · GW(p)

I concur with dbaupp's suggestion.

Additionally, you can try the reframing technique. Anna describes it here:

When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))

The example she gives isn't quite isomorphic to the choice you're making, but I think the technique still may be worth trying. Imagine you're currently living out one option but given the chance to take the other - how would you feel about it? And vice versa.

Replies from: negamuhia
comment by negamuhia · 2013-01-03T09:31:21.730Z · LW(p) · GW(p)

Likewise, thank you for your suggestion.

comment by negamuhia · 2013-01-03T09:26:38.848Z · LW(p) · GW(p)

dbaupp, ParagonProtege, thank you both for the links and suggestions. I'm going with the laptop. Anything else I could do (naturally, there's a lot I want to do) will be kickstarted by the modest but easy(ish) money I'll get by doing ~$100 websites, as I upgrade my code-fu for Other Stuff. ;)

I also haven't cycled actively for years & I'm afraid my unfit body might conk out on me, making me unable to Do The Job once I commit. Cliff scaling is much harder than hill climbing.

From Alicorn's post, I can easily tell that after I get the laptop, the correct thing to have would be a bike, since I can ease myself back into cycling regularly. It's also weird how I saw the Other Option (buy bike, work, afford laptop, buy laptop, cut down on bike work as I increase study & laptop work hours) as just as good, even though I know I will feel like a flake if I stop riding after it gets tougher and more tiring, which is more likely than giving up on WordPress. WordPress isn't even the only option for devastatingly easy Internet work.

comment by Vaniver · 2013-01-13T15:47:14.611Z · LW(p) · GW(p)

Watson, the IBM AI, was fed Urban Dictionary to increase its vocabulary and help it understand slang. It started swearing at researchers, and they were unable to teach it good manners, so they deleted the offending vocabulary from its memory and added a swear filter. IBTimes.

comment by D_Malik · 2013-01-04T08:14:18.155Z · LW(p) · GW(p)

It seems to be common knowledge that exposure to blue light lowers melatonin and reduces sleepiness, and that we can thus sleep better if we wear orange glasses or use programs like Redshift that reduce the amount of blue light emanating from the strange glowing rectangles that follow us around everywhere.

So an idea I had is that maybe wearing blue glasses might increase alertness. I've been weirdly fatigued during the day lately, even though I've been using melatonin and redshift. But does the /absolute/ magnitude of the blue light matter, or the amount of blue relative to other colours? Blue glasses would mostly have no effect on the absolute amount, but would increase the relative amount. Orange glasses decrease both so considering them isn't much help.

I tried looking for studies but I have no experience doing that and I only came up with one that actually compares bright ambient light to dim blue light; it found that dim (1 lux) blue light was better for alertness than 2-lux ambient white light.

Thoughts? Anyone better-informed about these things have comments?

Edit: For a sense of scale: lux measures illuminance (luminous flux per unit area); 50 lux is living-room lighting; a candle at 20cm is 10-15 lux; a full moon on a clear night is 0.3 to 1.0 lux. "White light" is actually only about 11% blue light (source), so the 2 lux of white light in the study is about 0.2 lux of blue, which is bad because it means that the linked study's result could be explained either by more absolute or by more relative blue light.

Replies from: wedrifid, tut
comment by wedrifid · 2013-01-04T15:47:47.862Z · LW(p) · GW(p)

So an idea I had is that maybe wearing blue glasses might increase alertness. I've been weirdly fatigued during the day lately, even though I've been using melatonin and redshift. But does the /absolute/ magnitude of the blue light matter, or the amount of blue relative to other colours? Blue glasses would mostly have no effect on the absolute amount, but would increase the relative amount.

Unless the mechanism that causes our pupils to constrict is itself sensitive exclusively to blue light, those blue glasses will increase the absolute amount of blue light that makes it into your eyes.

comment by tut · 2013-01-04T13:47:31.860Z · LW(p) · GW(p)

There is light therapy for people who get depressed in the winter. If I don't misunderstand, they are nowadays using "full spectrum" (i.e. white) light, not blue light. That might have something to do with what you are talking about, and in that case it is evidence that it is not just the proportion of blue light that matters.

comment by Wei Dai (Wei_Dai) · 2013-01-13T23:49:21.638Z · LW(p) · GW(p)

Do the current moderation policies allow editors to add "next in sequence" and "previous in sequence" links to posts that don't already have such links, and are there any editors willing to do this? If not, can we change the policy to allow this? And I'd like to volunteer to add such links at least to the posts that I come across (I'm already a moderator but not an editor).

comment by Zack_M_Davis · 2013-01-10T01:05:03.584Z · LW(p) · GW(p)

The hard problem of consciousness is starting to seem slightly less impossible to me than it used to.

Specifically, I remember reading someone's dismissal of the possibility of a reductionist explanation of consciousness, something along the lines of, "What? You think someone's going to come up with an explanation of consciousness, and everyone else will slap their forehead and say, 'Of course, that's it'"?

But that kind of argument from incredulity fails because it conflates explanation (writing down or speaking an argument that other humans will hopefully understand) with understanding (whatever-it-is human brains do to model reality).

For example, there are lots of people who mistakenly think a reductionist explanation of free will is impossible, who will not magically be cured by handing them a well-written explanation of compatibilism, because in order for that to work, they would have to read and understand the argument, and whatever process the human brain uses to read and understand stuff could be flawed in such a way that most people just won't get it. Or more mundanely, it takes years to learn a technical discipline like math or chemistry. A mathematician can't just tell an arbitrary person about their ideas; one would need to study for years to understand what the words mean.

In general, none of us really know what other humans are thinking; we're just making inferences from observing their behavior. I trust the global mathematical community enough that I believe it when I hear news that the Poincare conjecture has been proven, even though I haven't built up the skills to understand the proof. But suppose some neuroscientist somewhere has come up with an adequate explanation of consciousness, but wasn't able to convince their colleagues, because the explanation requires unusual skills for which there is no standard vocabulary and which are very hard to teach ... how would I be able to tell whether or not this has already happened?

Maybe all of this was obvious to some of you (in which case I apologize for being a slow learner), and maybe some of you have no idea what I'm trying to talk about (in which case I apologize for being a poor explainer).

comment by [deleted] · 2013-01-05T19:12:23.990Z · LW(p) · GW(p)

The header backgrounds of Main and Discussion are similar but different. This irks me slightly.

My selfish strategy is to point it out so it irks more people and the minimal effort of changing it becomes worthwhile. Given the autism scores from the survey, I am confident that a good portion of the people reading this comment will be irked. However, I am not familiar with how changes to the design have been made in the past. I am taking this opportunity to make my first prediction on predictionbook.com.

comment by OrphanWilde · 2013-01-14T16:05:49.133Z · LW(p) · GW(p)

I have a query - exactly how interested are people here in improving the efficiency of their daily lives? To wit, would a discussion about efficient toilet habits be welcome or unwelcome? (No, I'm not joking, nor am I working up to a toilet joke; I'm entirely serious.)

Replies from: None, Viliam_Bur, Oscar_Cunningham
comment by [deleted] · 2013-01-17T02:09:08.101Z · LW(p) · GW(p)

What you are doing matters far more than how efficiently you do it. Discussions of specific low-level habits have low value of information.

Further, LW is mostly about the meta questions: how to think, how to strategise, etc.

comment by Viliam_Bur · 2013-01-15T17:05:38.957Z · LW(p) · GW(p)

Imagine all the attention such an article would get on RationalWiki! They would rewrite the LW page from scratch... :D

comment by Oscar_Cunningham · 2013-01-14T20:03:06.360Z · LW(p) · GW(p)

Unwelcome.

Replies from: gwern
comment by gwern · 2013-01-17T01:22:24.174Z · LW(p) · GW(p)

Unless it involves meta-analyses, regressions, value of information calculations, or preferably all 3!

comment by [deleted] · 2013-01-12T09:38:18.065Z · LW(p) · GW(p)

How do you stop suicide, for individuals and or populations? I looked up antidepressants. They don't look so promising. Brief summary follows. Feel free to skip it.

All pharmacological antidepressants have scary side effects. All of them, sometimes individually and sometimes in combination, put you at risk for serotonin toxicity. Almost all increase the risk of suicide relative to no treatment. Tricyclic antidepressants are old, scary drugs, rarely prescribed. MAOIs are kind of scary; moclobemide is one of the newer, safer MAOIs, but it has weird dietary reactions and is still not as safe as SSRIs. NDRIs, which include Wellbutrin, are commonly prescribed; adverse effects include seizures and cardiovascular events, they are less safe than SSRIs, and I don't know enough about them. SSRIs are the most commonly prescribed; they include Zoloft, Paxil, Prozac, and Celexa. Efficacy is comparable to placebo. Adverse effects include sexual dysfunction, nausea, high blood pressure, and lots more. SNRIs are newer than SSRIs, with comparable efficacy; they include Effexor and Cymbalta. Effexor has an especially high suicide risk. Discontinuing SSRIs and SNRIs abruptly might have adverse effects: sadness, irritability, agitation, dizziness, etc.

What else can be done? Are hotlines effective?

comment by GLaDOS · 2013-01-10T19:08:32.663Z · LW(p) · GW(p)

John Derbyshire Wonders: Is HBD Over?

The flourish of HBD books and talk in the years around 2000 was, to switch metaphors, early growth from seeds too soon planted. Had the shoots been nourished by a healthy stream of scientific results, they might have grown strong enough to crack and split the asphalt of intellectual orthodoxy. But as things turned out, the maintenance crew has had no difficulty smothering the growth.

Even the few small triumphs of HBD—triumphs, I mean, of general acceptance by cognitive elites—have had an ambiguous quality about them.

For example, Freudian psychoanalysis (defined by Nabokov as people’s belief “that all mental woes can be cured by a daily application of old Greek myths to their private parts”), which was radically nurturist in its “explanations” of human personality development, is now defunct, thanks to developments in pharmacology.

But, while this anti-nurturist victory has diminished the quantity of nonsense in the world, like one of Robert E. Lee’s battles it has not been followed by any significant occupation of enemy territory. In the applied human sciences pure “blank slate” nurturism is still entrenched. Educationists, for example, insist that given the right environment, any child can do anything. In criminology, even the boldest of conservative writers tell us that illegitimacy and fatherlessness are the root causes, as if those factors themselves were uncaused.

Replies from: None
comment by [deleted] · 2013-01-10T19:30:10.949Z · LW(p) · GW(p)

My very first post on this site was about the mistreatment of Stephanie Grace, related to the new chilling and shrinking of acceptable discourse in the late 2000s after the '90s thaw mentioned in the article.

I was impressed by the reasonableness of the discussion. And I continued to be impressed at how well LessWrong handled matters like these for almost two years. However making the same post today on this site as a new member wouldn't be as well accepted as it was back then. If that had been the case, I would have taken the claim that this community is one "dedicated to refining the art of human rationality" with a larger grain of salt, and I'm unsure whether I would have lingered, since I had read most of the sequences at that point but was unsure about whether to participate.

So, since I'm unsure whether I would be appreciated in the community had I arrived today, why do I remain? Well, in the meantime I've grown to greatly respect the sanity of many excellent commenters, and several people who generate good articles do post here, some of whom arrived after I started participating. And it is the most civil and intellectually honest internet forum I've ever seen. But despite this, I'm unsure if it is rational of me to stay.

Speaking to some other people from here, who make comments like "more people follow your writing than mine, can you please comment on my post?", or who use me as a go-to example for some matters, apparently I've become a sort of Schelling point for a subculture within the rationalist subculture. I feel kind of sad about this. I preferred it back when Vladimir_M filled this role; he was far worthier than me.

I think we are at the start of a long winter in the West; only technological progress can keep us afloat, if it doesn't falter. And even if it doesn't, uFAI is the overwhelmingly likely outcome. I think I need a strong drink.

Replies from: Multiheaded, Douglas_Knight
comment by Multiheaded · 2013-01-10T22:14:21.528Z · LW(p) · GW(p)

From watching you for a while, I think you're driven to off-handedly forecast doom and gloom because it suits your identity as someone strongly dissatisfied with their current world, signaling contrarianism and wallowing in dignified pessimism. And of course elitism and despair look cooler to you, and form a coherent narrative.

And I'm not going to judge this as something negative, or implore you to fix some "problem" with your personal feelings, I just suggest that you keep a skeptical perspective on your self-narrative somewhere in the back of your mind. As you surely already do.

Replies from: None
comment by [deleted] · 2013-01-12T12:42:31.599Z · LW(p) · GW(p)

I've looked at this argument so many times from so many different angles that I would be very surprised if I hadn't in previous correspondence with you talked about it in very similar terms. I think I've given it its proper weight, but I guess readers may not be aware of it so you pointing it out isn't problematic.

comment by Douglas_Knight · 2013-01-11T00:03:14.584Z · LW(p) · GW(p)

However making the same post today on this site as a new member wouldn't be as well accepted as it was back then.

Pretty easy to test.

comment by NancyLebovitz · 2013-01-08T13:24:05.458Z · LW(p) · GW(p)

Infographic of logical and rhetorical fallacies: a list organized into categories, with an icon for each fallacy.

comment by TimS · 2013-01-02T16:47:25.057Z · LW(p) · GW(p)

Kolmogorov complexity via xkcd

comment by NancyLebovitz · 2013-01-01T08:58:52.083Z · LW(p) · GW(p)

I thought I'd seen a survey result of when LWers thought the Singularity was plausible-- maybe a 50% over/under date, but I haven't been able to find it again. Does anyone remember such a thing?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-01-01T09:55:29.031Z · LW(p) · GW(p)

2009 survey results

When asked to determine a year in which the Singularity might take place, the mean guess was 9,899 AD, but this is only because one person insisted on putting 100,000 AD. The median might be a better measure in this case; it was mid-2067.

2011 survey results

The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).

The 2012 survey also had a "date of the Singularity" question, but Yvain didn't report on the results of that question, so you'll have to look at the raw data for that.

Replies from: gwern, army1987, NancyLebovitz
comment by gwern · 2013-01-01T18:27:46.421Z · LW(p) · GW(p)

The 2012 survey also had a "date of the Singularity" question, but Yvain didn't report on the results of that question, so you'll have to look at the raw data for that.

R> lw <- read.csv("2012.csv")                      # load the raw 2012 survey responses
R> lw <- as.integer(as.character(lw$Singularity))  # coerce the Singularity-year answers to integers
R> summary(lw[lw > 2011 & lw < 5000])              # summarize, excluding past and absurdly distant years
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
   2010    2060    2080    2140    2150    4000     236

Had to filter because of idiots putting in values like 2147483647 or 30 or 1800.

comment by A1987dM (army1987) · 2013-01-01T21:22:04.629Z · LW(p) · GW(p)

Note that the last survey made it explicitly clear that the question was “what is the year such that P(Singularity before year|Singularity ever) = P(Singularity after year|Singularity ever) = 0.5”, whereas in the previous surveys it was ambiguous between that and “P(Singularity before year) = P(Singularity after year) + P(no Singularity ever) = 0.5”.

comment by NancyLebovitz · 2013-01-01T16:43:58.579Z · LW(p) · GW(p)

Thank you.

comment by Kaj_Sotala · 2013-01-01T08:45:09.192Z · LW(p) · GW(p)

Robert Kurzban clarifies the concept of the EEA (mostly by quoting various excerpts from Tooby & Cosmides). I think this is an important post for people to check out, given how often the concept of EEA is referenced on this site.

In 1990, Tooby and Cosmides wrote (p. 387):

The concept of the EEA has been criticized under the misapprehension that it refers to a place, or to a typologically characterized habitat, and hence fails to reflect the variability of conditions organisms may have encountered.

From this it can be seen that even in 1990, they were taking pains to defend against the possibility that careless readers might take them to be saying that the EEA is to be thought of as a time and a place. Instead, they characterize it this way (pp. 386-387):

The “environment of evolutionary adaptedness” (EEA) is not a place or a habitat, or even a time period. Rather, it is a statistical composite of the adaptation-relevant properties of the ancestral environments encountered by members of ancestral populations, weighted by their frequency and fitness-consequences.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-01T17:03:37.493Z · LW(p) · GW(p)

I find the matter unclarified. Given the large variability of the Pleistocene climate and habitat (that Kurzban mentions), what does the quoted definition of the EEA mean? "A statistical composite...weighted by frequency and fitness-consequences" looks pretty much like a time and a place -- just an average one instead of one asserted to be the actual environment, habitat, and social structure over the whole Pleistocene. Both concepts ignore the variation.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-01-03T04:18:53.083Z · LW(p) · GW(p)

Did you read the whole post? I thought it was relatively clear - if I had to summarize it in my own words, I guess I'd say something like "the EEA is not a specific physical or temporal location, but rather those properties in the environment of the organism which have stayed invariant over very long periods". It doesn't "ignore" the variation, it's specifically defined via the complement of the variation.

Replies from: Richard_Kennaway, tut
comment by Richard_Kennaway · 2013-01-03T19:58:33.825Z · LW(p) · GW(p)

It doesn't "ignore" the variation, it's specifically defined via the complement of the variation.

I really don't see what distinction you are drawing there.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-01-04T06:10:56.409Z · LW(p) · GW(p)

Not sure we're talking about the same thing, so probably better to ask, what do you mean when you say that it ignores the variation?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-06T10:22:41.068Z · LW(p) · GW(p)

what do you mean when you say that it ignores the variation?

It leaves it out. Explicitly saying "I am going to include only what did not change" is still ignoring whatever did change.

comment by tut · 2013-01-03T18:40:33.854Z · LW(p) · GW(p)

Variation is a feature of the environment, which itself makes certain demands of creatures that live in it. This is not taken into account by just taking the average of everything. Having one foot in a pot of boiling water and the other in a pot of ice water is not equivalent to having both feet in a pleasantly hot bath, even though the average temperature will be about the same.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-01-04T06:12:26.218Z · LW(p) · GW(p)

True, which is why the EEA is more complicated than just an average. Like it said in the post:

These invariances can be described as sets of conditionals of any degree of complexity, from the very simple (e.g., the temperature was always greater than freezing) to a two-valued statistical construct (e.g., the temperature had a mean of 31.2 C. and standard deviation of 8.1), to any degree of conditional and structural complexity that is reflected in the adaptation (e.g., predation on kangaroo rats by shrikes is 17.6% more likely during a cloudless full moon than during a new moon during the first 60 days after the winter solstice if one exhibits adult male ranging patterns).

comment by Fadeway · 2013-01-11T19:54:23.615Z · LW(p) · GW(p)

I have an important choice to make in a few months (about what type of education to pursue). I have changed my mind once already, and after hearing a presentation where the presenter clearly favored my old choice, I'm about to revert my decision - in fact, introspection tells me that my decision had already changed at some point during the presentation. Regarding my original change of mind, I may also have been affected by the friend who gave me the idea.

All of this worries me, and I've started making a list of everything I know as far as pros/cons go of each choice. I want to weigh the options objectively and make a decision. I fear that, already favoring one of the two choices, I won't be objective.

How do I decrease my bias and get myself as close as possible to that awesome point at the start of a discussion where you can list pros and cons and describe the options without having yet gotten attached to any position?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-01-12T13:07:40.725Z · LW(p) · GW(p)

Harder Choices Matter Less. Unless you expect that there is a way of improving your understanding of the problem at a reasonable cost (such as discussing the actual object level problem), the choice is now less important, specifically because of the difficulty in choosing.

Replies from: Fadeway
comment by Fadeway · 2013-01-12T17:34:17.018Z · LW(p) · GW(p)

From rereading the article, which I swear I stumbled upon recently, I took away that I shouldn't take too long to decide after I've written my list, lest I spend the extra time conjuring extra points and rationalizations to match my bias.

As for the meat of the post, I don't think it applies as much due to the importance of the decision. I could go out and gather more information, but I believe I have enough, and now it's just a matter of weighing all the factors; for which purpose, I think, some agonizing and bias removal is worth the pain.

Hopefully I can get somewhere with the bias removal step, as opposed to getting stuck on it. (And, considering that I just learned something, I guess this can be labeled "progress"! Thanks :))

comment by [deleted] · 2013-01-01T22:00:03.375Z · LW(p) · GW(p)

Quick question: I want to read Gödel, Escher, Bach, but are there any math or knowledge prerequisites to understanding it?

Replies from: gwern, TimS, quiet, shminux, Michelle_Z
comment by gwern · 2013-01-01T22:21:27.738Z · LW(p) · GW(p)

Not really.

comment by TimS · 2013-01-02T02:30:02.685Z · LW(p) · GW(p)

If you can appreciate, in any depth at all, that it's complicated to decide whether "This sentence is a lie" is true, then you will get interesting insights from GEB.

comment by quiet · 2013-01-03T17:19:23.763Z · LW(p) · GW(p)

Not in the slightest. DH does a good job of providing you with the things that he later asks you to use.

comment by shminux · 2013-01-04T00:43:06.098Z · LW(p) · GW(p)

There is a mindset prerequisite. Some people get forever lost/bored the first time the book talks about valid mathematical statements as well-formed finite strings of symbols.

comment by Michelle_Z · 2013-01-02T02:00:34.875Z · LW(p) · GW(p)

Nope. I mean, I'd suggest knowing WHO Gödel, Escher, and Bach are... possibly listening to some of the music and looking at some of the artwork, but it's not necessary.

comment by alanog · 2013-01-01T17:43:30.173Z · LW(p) · GW(p)

http://www.science20.com/hammock_physicist/rational_suckers-99998 Slightly intrigued by this article about Braess' paradox. I understand the paradox well enough, but am confused by how he uses it to criticize super-rationality. But mostly I was amused that in the same comment where he says, 'Hofstader's "super-rationality" concept is inconsistent and illogical, and no single respectable game theorist takes it seriously.' he links to EY's The True Prisoners' Dilemma post.

Also, do people know if that claim about game theorists is true? Would most game theorists say that they would defect against copies of themselves in a one-shot PD?

Replies from: Vaniver
comment by Vaniver · 2013-01-01T18:48:23.437Z · LW(p) · GW(p)

Would most game theorists say that they would defect against copies of themselves in a one-shot PD?

It depends on what "against copies of themselves" means. If it means "I know the other person behaves like a game theorist, and the payoff matrix is denominated in utility," then yes. If it means "I know the other person behaves like a game theorist, but the payoff matrix is not denominated in utility because of my altruism towards a copy of myself," then no. If it means "I expect my choices to be mirrored, and the payoff matrix is denominated in utility," then no.
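
To make the first and third cases concrete, here is a minimal sketch in R with an illustrative textbook payoff matrix (the numbers are assumptions for the example, not anything from the article):

# One-shot PD payoffs to "me" (row player); utilities are illustrative.
payoff <- matrix(c(3, 0,
                   5, 1),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(me = c("C", "D"), other = c("C", "D")))
# Opponent chooses independently: row D beats row C against either column
# (5 > 3 and 1 > 0), so a game theorist defects.
# Choices are mirrored: only the diagonal is reachable, and
# payoff["C", "C"] = 3 > payoff["D", "D"] = 1, so cooperating does better.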

comment by Thomas · 2013-01-01T09:38:33.055Z · LW(p) · GW(p)

I've stumbled upon this:

http://blogs.discovermagazine.com/badastronomy/2008/09/25/a-lunar-mountains-eternally-sunny-disposition/#.UOKtr-RX0Yg

A place on the Moon where the Sun is always visible, never sets. Well, except for an eclipse, of course.

comment by NancyLebovitz · 2013-01-09T13:00:18.726Z · LW(p) · GW(p)

OK, I give up. We're living in a simulation. Science can't possibly work under these conditions.

Replies from: MileyCyrus
comment by MileyCyrus · 2013-01-09T14:13:16.353Z · LW(p) · GW(p)

Did they take it down?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-09T16:40:23.078Z · LW(p) · GW(p)

The link works for me, if that's what you're asking about.

http://www.popsci.com/science/article/2013-01/scientists-hilariously-vent-methodology-overlyhonestmethod

Replies from: MileyCyrus
comment by MileyCyrus · 2013-01-09T21:45:25.988Z · LW(p) · GW(p)

Hmm, still doesn't work for me. That's odd.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-09T23:21:50.592Z · LW(p) · GW(p)

Before I start posting some of the choicest tweets about real-world science, here's the Twitter feed.

Huffington Post

Neatorama

io9

In other words, I probably didn't need to post about this..... everyone would have seen it anyway.

comment by [deleted] · 2013-01-08T04:23:15.476Z · LW(p) · GW(p)

Possibly of interest

Link: http://www.youtube.com/watch?v=XBmJay_qdNc

Whiteboard animation of a talk by Dan Ariely about dishonesty, rationalization, the "what the hell" effect, and bankers. The visual component made it really easy for me to watch.

comment by RobertLumley · 2013-01-07T23:22:51.275Z · LW(p) · GW(p)

I am looking for defenders of Hanson's Meat is Moral. On the surface, this seems like a very compelling argument to me. (I am a vegetarian, primarily for ethical reasons, and have been for two years. At this point the thought of eating meat is quite repulsive to me, and I'm not sure I could be convinced to go back even if I were convinced it were moral.)

It struck me, however, that nothing in this argument is specific to animals, and that anyone who truly believes it should also support growing people for cannibalism, as long as those lives are just barely worth living. (I tend to believe in relative depression, so I'd argue that probably any life that isn't extremely torturous is worth living.) This goes so strongly against moral intuition, though, that I can't imagine anyone supporting it.

Replies from: leplen, Desrtopa, army1987, TimS
comment by leplen · 2013-01-08T23:39:59.086Z · LW(p) · GW(p)

Sorry, can't defend it. It's not a horrible argument, but it's also not totally well grounded in facts.

For starters, it takes far more land and resources to produce 1 lb of beef than 1 lb of grain, since you have to grow all the grain to feed the cow, and cows don't turn all of that energy into meat, so if you believe that undeveloped land or other forms of resource conservation have some intrinsic worth, then vegetarianism is preferable.
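
To make that concrete, here's a rough back-of-the-envelope sketch in R; the ~10% feed-to-beef calorie conversion is an illustrative assumption, not a sourced figure:

feed_conversion <- 0.10                     # assume ~10% of feed calories end up as beef calories
kcal_per_person_year <- 2500 * 365          # rough caloric need of one person
# Land needed per person, in units of "land that yields 1e6 grain kcal/year":
grain_land <- kcal_per_person_year / 1e6
beef_land  <- grain_land / feed_conversion  # the grain must first be fed to cattle
c(grain = grain_land, beef = beef_land)     # beef needs ~10x the land under these assumptions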

Secondly, I think the metaphor comparing a factory farm to a cubicle farm is disingenuous. It's emotionally loaded, since I work in a cubicle and I don't wish I were dead, and it's not terribly accurate. I think you could make a different comparison that is arguably more accurate and compare a factory farm to a concentration camp. In both instances the inhabitants are crowded together with minimal resources as they await their slaughter. (Obviously my example is also emotionally loaded.) I think if one were to ask the question "should we do things that will encourage the birth of children who will grow up in concentration camps?", it's a little more difficult to come down with the same definitive yes.

Additionally, the article wanders into conjecture in several places. It's hard to see the statement "most farm animals prefer living to dying" as anything more than a specious claim. No one has any way of knowing a cow's preference vis-a-vis life or death, probably including the cow. Suicide is a particularly egregious red herring. By what means does a cow in a pen commit suicide? Starving to death? Surely that's not comparable to wishing it had never been born...

As for your Soylent Green example, it has even worse problems with trophic losses, because if your farm-raised humans were not strictly vegetarian, you're losing an even higher percentage of your original energy. If the food babies are raised on an all-meat diet, you may be getting less than 1% of the energy you would have gotten out of just eating the plants you started the process with. Humans also have a ridiculously long gestation time, etc., making them a poor candidate for an efficient food item, although the modest proposal you mention has certainly been suggested before.

Finally, the argument makes me nervous because I think that, in general, the morality of causing things to be born isn't well settled. We regard saving the life of a child as definitely a moral good. It isn't clear that giving birth to a child is also a moral good, or a comparable moral good. If I had to pick between saving one child and having two babies, I would think that saving the kid's life was the higher moral calling, even though it will result in fewer children overall.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-09T00:40:01.330Z · LW(p) · GW(p)

For starters, it takes far less land and resources to produce 1 lb of beef than 1 lb of grain

I think you got these flipped around.

Replies from: leplen
comment by leplen · 2013-01-09T15:50:27.677Z · LW(p) · GW(p)

Fixed. Thank you.

comment by Desrtopa · 2013-01-08T01:08:10.403Z · LW(p) · GW(p)

I think that would be true, assuming you have no additional reasons for opposing cannibalism.

Personally, I have no moral opposition to the idea of eating babies, but I suspect that baby farming would cause much more distress to the general population than the food it would produce would justify.

I don't agree with Hanson's position in that essay though. To take an excerpt:

We might well agree that wild pigs have lives more worth living, per day at least, just as humans may be happier in the wild instead of fighting traffic to work in a cubical all day. But even these human lives are worth living, and it is my judgment that most farm animal's lives are worth living too. Most farm animals prefer living to dying; they do not want to commit suicide.

How does he claim to know that? It's not as if he can extrapolate from the fact that they don't kill themselves. Factory farmed animals are in no position to commit suicide, regardless of whether they want to or not. And even if a farm animal's life is pure misery, it probably doesn't have the abstract reasoning abilities to realize that ending its own life, thereby ending the suffering, is a possible thing.

He compares the life of a farmed animal to a worker who has to fight traffic to spend their time working in a cubicle, but an office worker has leisure time, probably a family to spend time with, and enough money to make them willing to work at the job in the first place. I think the abused child in Omelas is a better basis for comparison.

Replies from: None, OrphanWilde
comment by [deleted] · 2013-01-08T19:03:36.367Z · LW(p) · GW(p)

He compares the life of a farmed animal to a worker who has to fight traffic to spend their time working in a cubicle, but an office worker has leisure time, probably a family to spend time with, and enough money to make them willing to work at the job in the first place.

Also: very few office workers get mutilated to prevent them from mutilating their coworkers out of stress, or locked into their cubicles full-time and forced to wallow in their own faeces (periodically being hosed down from outside), or are so over-bred for meat production that even in their cramped conditions the bulk of their under-used, oversized muscles strains their skeletons and joints to the breaking point.

Oh, and instead of a salary designed to seem big but actually undervalue your performance, you get paid in being killed (not infrequently a painful and lingering experience) and having any children you bore taken away for no obvious reason.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-08T19:41:16.356Z · LW(p) · GW(p)

Yes. “If you have doubts on this point, I suggest you visit a farm” is a massive Appeal to Generalization from One Example. I'm pretty sure some farms are a helluva much worse than others, and I strongly suspect that the farms a random person is most likely to visit will be closer to the good end of the scale.

comment by OrphanWilde · 2013-01-08T20:25:27.341Z · LW(p) · GW(p)

I vote we breed animals to be happy under these conditions. Or is that baby-eating?

Hmmm.

Replies from: Desrtopa, drethelin
comment by Desrtopa · 2013-01-08T20:55:46.852Z · LW(p) · GW(p)

If you're going to do that, why not skip the animals entirely and raise vat meat? Neither happy or sad, but much more cost effective.

comment by drethelin · 2013-01-08T20:43:02.695Z · LW(p) · GW(p)

Not really; the problem with baby-eating was that the babies were NOT happy.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-08T20:51:43.364Z · LW(p) · GW(p)

No, I think there's a parallel there. The solution in the story was to reduce the babies to chemical reactions, so they weren't aware, and couldn't suffer; that doesn't really lessen the horror implicit in the solution.

Apparently sleep deprivation is making me -more- insightful than normal. I'm going to have to give vegetarianism/veganism more thought. Right on the heels of a huge insight into privilege arguments, which I'm considering writing up.

comment by A1987dM (army1987) · 2013-01-08T11:02:27.666Z · LW(p) · GW(p)

I had to stop (though I may resume later) at "People who buy less meat don't really spend less money on food overall, they mainly just spend more money on other non-meat food" -- it made me go "are you fucking kidding me" and wonder whether he has ever been to a supermarket. See also this -- differences in retail prices aren't quite that extreme, but that's because governments subsidize meat production, so even though not all of the money comes out of meat eaters' pockets, it still comes out of somewhere.

EDIT: I finished reading it, and... if I didn't know who Hanson was and he had posted somewhere that allowed readers to comment, I would definitely conclude he was trolling. Along with things that others have already pointed out, “per land area, farms are more efficient at producing "higher" animals like pigs and cows” -- where the hell did he take that from? Pretty much everyone I've ever read about this topic agrees that growing food for N people on a mostly vegetarian diet requires way less land, energy, and water than growing food for N people on a largely meat-based diet, and there's a thermodynamic argument that makes that pretty much obvious.

(I do agree that “meat eaters kill animals” isn't a terribly good argument because if it wasn't for meat eaters those animals wouldn't have lived in the first place (but that doesn't apply to hunting and fishing); but that's nowhere near one of the main reasons why I limit my consumption of meat.)

Replies from: pedanterrific, None, gwern, NancyLebovitz, drethelin, RobertLumley
comment by pedanterrific · 2013-01-09T16:23:01.593Z · LW(p) · GW(p)

Along with things that others have already pointed out, “per land area, farms are more efficient at producing "higher" animals like pigs and cows” -- where the hell did he take that from? Pretty much everyone I've ever read about this topic agrees that growing food for N people on a mostly vegetarian diet requires way less land, energy, and water than growing food for N people on a largely meat-based diet, and there's a thermodynamic argument that makes that pretty much obvious.

The full sentence is

And if you do manage to induce less farmland and more wild land, you'll have to realize that, per land area, farms are more efficient at producing "higher" animals like pigs and cows. So there is a tradeoff between producing more farm animals with worse lives, or fewer wild animals with better lives, if in fact wild animals live better lives.

or

per land area, farms are more efficient [than wilderness is] at producing "higher" animals like pigs and cows.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-09T17:44:50.531Z · LW(p) · GW(p)

Thanks. I did think “more efficient than what?”, but none of the possibilities I came up with other than “than they are at producing other foodstuffs” seemed relevant in context. (I don't even remember what they were.)

comment by [deleted] · 2013-01-10T19:01:56.370Z · LW(p) · GW(p)

"People who buy less meat don't really spend less money on food overall, they mainly just spend more money on other non-meat food" -- it made me go "are you fucking kidding me" and wonder whether he has ever been to a supermarket.

Not only that, it makes me wonder if he realizes that most people in the world don't live on six figures. I remember once living on nothing but cereal, milk, eggs and kimchi for about eight months because, when rent and bills were totalled, there simply wasn't any money for more food than that.

comment by gwern · 2013-01-09T18:15:55.257Z · LW(p) · GW(p)

Richard Carrier comes to mind as making counterintuitive claims about the efficiency of meat vs plant food: http://freethoughtblogs.com/carrier/archives/87/

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-09T20:41:11.311Z · LW(p) · GW(p)

Interesting...

Just one quibble: “other than pure aesthetics (“I just like it”) ... which are idiosyncratic (i.e. not true for most people)” sounds like an overwhelming exception to me. Given that I've never met anyone trying to convince other people to become vegetarians (though I've read a couple such people), I guess that's by far the most common reason. (I've eaten meat in front of at least a dozen different vegetarians from at least four different countries, and none of them seemed to be bothered by that.)

Replies from: RobertLumley
comment by RobertLumley · 2013-01-10T20:33:43.938Z · LW(p) · GW(p)

Depending on how ostentatiously (which I know isn't the right word, but I think it conveys what I'm trying to evoke?) you were eating the meat, it would bother me. The type of meat would also make a difference to me. I know vegetarians who are bothered if you eat any meat near them. They are obviously polite about it (I certainly never say anything), but it might bother people more than you realize.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-11T19:42:13.496Z · LW(p) · GW(p)

how ostentatiously

Not at all -- not that I tried to hide the fact that I was eating meat, but I tried to be as nonchalant as I would be if I didn't know they were vegetarians. OTOH I'm not terribly good at hiding emotions, so probably some of them could tell I was feeling a little embarrassed.

The type of meat would also make a difference to me.

What kind of difference? Pork vs beef vs chicken? Steaks vs minced meat? Free-range vs factory farmed vs hunted (but how would you tell)?

Replies from: RobertLumley
comment by RobertLumley · 2013-01-11T20:11:05.997Z · LW(p) · GW(p)

What kind of difference?

My opposition to meat varies linearly with the intelligence of the animal. I'm much more OK with fish than I am with pigs.

comment by NancyLebovitz · 2013-01-10T22:32:42.215Z · LW(p) · GW(p)

This reminds me of something I've wondered about. It seems plausible that it's cheaper to be a vegetarian, but the last I checked, meat substitutes seem to cost about as much as meat.

Is it just that no one's been exploring how many people would like good cheap meat substitutes, or is there some reason meat substitutes are so expensive? Or are there cheap ones I haven't noticed?

Price of quorn

Replies from: Alicorn, None
comment by Alicorn · 2013-01-11T04:22:00.794Z · LW(p) · GW(p)

Fancy meat substitutes like quorn are expensive. TVP and tofu are dirt cheap. Going with vegetable sources of protein that make no attempt to directly replace meat, like rice and beans or peanut butter, is also cheap.

comment by [deleted] · 2013-01-14T17:45:10.621Z · LW(p) · GW(p)

Basically what Alicorn said. People aren't necessarily satisfied with the cheap ones that are available - mimicking the exact mouthfeel and flavor of meat is difficult, and because many of the original meat substitutes are from Asia, they weren't common here until fairly recently. Mock duck, aka seitan (made from wheat gluten), is cheap and very popular in Asia, but it seems to be a perennial also-ran in the US. Back during my veggie days I tried using it, only to find out I have a minor glutease deficiency (not full-on coeliac, but enough that seitan causes problems). It was by far the closest I've found to mimicking the texture and mouthfeel of non-specific cuts of meat (as opposed to mimicking burgers or hot dogs or chicken nuggets or something); when prepared right it can be close to indistinguishable from meat.

Making good, cheap meat substitutes is a lot of work; Western would-be consumers often have high standards for them and aren't satisfied with the more-established forms, such as tofu, while new forms have substantial outlays for R&D (Quorn) and sometimes face regulatory hurdles or other barriers to acceptance (Quorn's initial attempt at a US release went very poorly). In the US, where meat production is directly subsidized, it's hard to compete anyway because there's lots of cheaper meat.

comment by drethelin · 2013-01-10T18:57:03.301Z · LW(p) · GW(p)

One of the confounding factors is that a lot of meat is raised on land that's not suitable for growing human food, e.g. free-range cattle grazing in Australia.

Replies from: army1987
comment by RobertLumley · 2013-01-08T22:52:13.427Z · LW(p) · GW(p)

My evaluation is very much the same as yours, in that Hanson is way off on the efficiency of meat vs other foods. My conclusion is just that he is ignorant of the facts though, not trolling.

comment by TimS · 2013-01-08T00:40:49.296Z · LW(p) · GW(p)

Isn't this just a re-statement of the Repugnant Conclusion?

Essentially all domesticated animals are alive because of demand for products made from them (eggs, milk, meat, etc). If everyone kept kosher, there would be far fewer pig-experience-moments than the current world, including much less pig-experience-suffering. Is that good or bad for someone who values pig utility?

Anyway, I've always taken this kind of reasoning as a reason not to adopt that perspective on these types of questions. But I think that means I'm not a consequentialist - which puts me slightly out of consensus in this community.

Replies from: None, RobertLumley
comment by [deleted] · 2013-01-08T19:07:20.276Z · LW(p) · GW(p)

If everyone kept kosher, there would be far fewer pig-experience-moments than the current world, including much less pig-experience-suffering. Is that good or bad for someone who values pig utility?

I value pig-utility. I'd much rather see a smaller number of comparatively well-kept, well-treated farm pigs and a healthy population of wild boars than the status quo. I'd also rather not see that arrived at by a mass slaughter of all other pigs, though, and pragmatically I'm not going to get that either way, so "a largeish-but-not-contemporary number of reasonably well-treated pigs farmed for food production" would be a much more feasible goal. Temple Grandin does a lot of work in this area, actually.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2013-01-10T12:50:34.029Z · LW(p) · GW(p)

a mass slaughter of all other pigs

Isn't this what's happening all the time anyway?

Replies from: None
comment by [deleted] · 2013-01-10T18:54:05.919Z · LW(p) · GW(p)

Not in the sense I was using it above, namely, "We kill them all at once to remove their population." What's happening at present is more like "we kill them in batches to meet production demands, and bring in more." Aggregated over the very long term a whole lot more pigs can suffer and die in the second case; I'm simply saying I don't find "One sudden, nearly-complete mass slaughter" to be a preferable alternative.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2013-01-11T10:17:06.775Z · LW(p) · GW(p)

My point is that the lifetime of a pig (EDIT: being farmed for meat) isn't very long (about 6 months from what I can find on the internet). Thus all we would have to do is stop breeding them for a while and we very quickly wouldn't have many pigs.

Replies from: None
comment by [deleted] · 2013-01-12T03:49:51.261Z · LW(p) · GW(p)

That's totally true, but it feels a bit tangential to what I was saying.

comment by RobertLumley · 2013-01-08T02:25:50.482Z · LW(p) · GW(p)

I think it is in a similar vein, certainly, but I think it's different in some ways too. For example, I don't think most people would accept cannibalism even if the people (victims? food?) led very happy lives, perhaps like a system where people were pampered in spas all day before being killed for food. But the logical extension of Hanson's argument is that this would be a great system. Assuming that there was a remote economic demand for human meat, which, thankfully, there isn't.

Also, I think cannibalism engages people's moral intuitions much more so than simply having a lot of marginally happy people does.

comment by gwern · 2013-01-04T18:10:01.126Z · LW(p) · GW(p)

BEST, a Bayesian replacement for frequentist t-tests I've been using in my self-experiments, now has an online JavaScript implementation: http://www.sumsar.net/best_online/
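For readers who'd rather run something like this locally than use the web version, here is a rough sketch of the kind of two-group model BEST fits, as I understand Kruschke's description, written with PyMC. The priors, constants, and data here are illustrative assumptions on my part, not necessarily what the linked JavaScript tool implements.

```python
# A minimal sketch of a BEST-style Bayesian two-group comparison
# (assumed priors; not the exact model of the linked tool).
import numpy as np
import pymc as pm
import arviz as az

y1 = np.array([5.1, 4.9, 6.2, 5.8, 5.5])   # e.g. scores under treatment (made-up data)
y2 = np.array([4.2, 4.8, 5.0, 4.1, 4.6])   # e.g. scores under control (made-up data)
pooled = np.concatenate([y1, y2])

with pm.Model():
    # Vague priors centered on the pooled data
    mu1 = pm.Normal("mu1", mu=pooled.mean(), sigma=pooled.std() * 10)
    mu2 = pm.Normal("mu2", mu=pooled.mean(), sigma=pooled.std() * 10)
    sd1 = pm.Uniform("sd1", lower=pooled.std() / 100, upper=pooled.std() * 100)
    sd2 = pm.Uniform("sd2", lower=pooled.std() / 100, upper=pooled.std() * 100)
    nu = pm.Exponential("nu_minus_1", 1 / 29.0) + 1   # heavy-tailed t likelihood

    pm.StudentT("obs1", nu=nu, mu=mu1, sigma=sd1, observed=y1)
    pm.StudentT("obs2", nu=nu, mu=mu2, sigma=sd2, observed=y2)
    pm.Deterministic("difference_of_means", mu1 - mu2)

    idata = pm.sample(2000, tune=1000, progressbar=False)

print(az.summary(idata, var_names=["difference_of_means"]))
```

The output is a posterior over the difference of group means; if its credible interval clearly excludes zero, that plays roughly the role a "significant" t-test result would.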

comment by OrphanWilde · 2013-01-04T15:24:36.299Z · LW(p) · GW(p)

Hey -

Bit of an unusual request: does anybody know of any good books on physics? Specifically, books that give not only the facts of physics, but also the specific reasoning and experiments because of which those facts are believed?

I have an associate who is interested in the subject, and completely uninterested in reading anything that presents current beliefs as bare facts. When I tried to explain particle spin, it took me something like four hours to find the relevant experiments that established its existence (and I have to confess that what I was able to find on such a fundamental element of modern physics left me a bit underwhelmed).

Replies from: leplen, Vaniver, Emile
comment by leplen · 2013-01-05T02:12:56.042Z · LW(p) · GW(p)

How much physics do you want, and how much math? Most of the first year of a physics class you can experimentally verify yourself with a watch and a ruler. If you're looking to verify special relativity, you'll probably need more equipment; in general, though, there aren't nearly as many experiments as in other sciences, but there is a lot more math. If you know the math and the underlying rules, you need far fewer experiments to understand a phenomenon.

Physics is a very broad discipline, which makes this a very difficult question. Do you just want some interesting and surprising physics cocktail facts and the experiments that go along with them?

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-05T02:20:01.778Z · LW(p) · GW(p)

Modern physics (I find "quantum physics" to be a misnomer, as the majority of what we call quantum physics could survive energy being continuous rather than discrete); in particular, the experiments I was able to dig up for things like particle spin weren't particularly impressive. He doesn't find mathematics particularly convincing, on the grounds that mathematical descriptions are just models. (To use local parlance, he finds the mathematical proofs to be confusing the map for the territory.)

Replies from: leplen
comment by leplen · 2013-01-05T04:28:51.492Z · LW(p) · GW(p)

Sure, the math is a map, but sometimes it's a lot easier to understand how a city is laid out by looking at a good map than by walking around it.

Your statement about quantum physics is, as far as I can tell, very wrong. If energy is continuous rather than discrete, then you have the Rutherford model of the atom rather than the Bohr model, and there's nothing to prevent atoms from collapsing. More generally, energy quantization is generally taken to be the defining characteristic of quantum systems. If you have a convincing argument for why this is not true, I would be very interested to hear it.

Any good modern-physics textbook will go over the experiments. It sounds like you essentially want a physics textbook without the math; you could just read such a book and skip the equations. If you're mainly interested in the experiments, you could also get a good "modern physics" lab manual. That would give you a nice write-up of the experiments with minimal math, and they aren't particularly hard to find.

I'm still not sure why you want this book or what it's supposed to be about. I'm made a little nervous by someone who "doesn't want something that presents the current beliefs as facts" and "doesn't find mathematics particularly convincing." If you're looking for a book that lays out the evidence for why modern physics is true in an effort to convince skeptics, you may be looking for a while.

As for particle spin, the relevant experiment is probably the Stern-Gerlach experiment, which is mentioned several times in Wikipedia's article on spin.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-05T04:53:43.726Z · LW(p) · GW(p)

The Bohr model is wrong. It's just wrong in a useful way. And Rydberg was working on an alternative model to explain exactly this when quantum mechanics came out; he abandoned it. I'm personally inclined to believe he was correct, but that's not what I want to talk about.

The Stern-Gerlach experiment was merely consistent with particle spin; at best, given that it predated particle-spin theory, it shows that particle spin adds up to normality.

He's proficient in classical mechanics, and wants to grok quantum mechanics. In order to do so, he needs to follow its development: not just learn the current state, but see why the current state is what it is, what experiments were performed, what ideas were discarded. I'm not terribly helpful in this regard on account of probably being a crank; my explanations tend to come with a large number of "buts" and alternative explanations that are more confusing than helpful.

Replies from: leplen
comment by leplen · 2013-01-05T06:05:20.716Z · LW(p) · GW(p)

In that case maybe chapters 1, 2, 4, and 6 of Volume 1 of Albert Messiah's Quantum Mechanics? That gives you a pretty nice introduction and connects well with classical mechanics, without relying too much on the math.

I'm sure selections from other textbooks would work as well. For future reference, quantum mechanics is a subset of modern physics, so if you only want quantum mechanics, you should indicate that somehow.

comment by Vaniver · 2013-01-04T15:48:58.079Z · LW(p) · GW(p)

It's not limited to physics, but I enjoyed The Ten Most Beautiful Experiments. It goes through ten experiments in narrative detail, explaining some biographical details of the scientist, what the beliefs at the time were, and what the experiment showed.

comment by Emile · 2013-01-04T15:43:15.769Z · LW(p) · GW(p)

I'm currently reading The Feynman Lectures on Physics, and it pretty much fits your description. It's not light reading, but it's well written and goes into interesting details.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-04T15:47:09.130Z · LW(p) · GW(p)

It's been years since I listened to that (my father was a big fan of books on tape, as he had a two-hour commute every day). I'll give it a look, thanks!

comment by Qiaochu_Yuan · 2013-01-04T10:54:45.116Z · LW(p) · GW(p)

What kind of people do you all have in your heads? Do you find that having lots of people in your head (e.g. the way MoR!Harry has lots of people in his head) is helpful for making sense of the world around you and solving problems and so forth? How might I go about populating my head with more people, and what kind of people would it be useful to populate my head with?

Replies from: knb, FiftyTwo, Bill_McGrath, TheOtherDave, TimS, Oscar_Cunningham
comment by knb · 2013-01-10T20:40:10.302Z · LW(p) · GW(p)

When I'm trying to understand something, I imagine myself explaining it to my younger sister. I started doing this when I was a kid, but it is so useful to me that I never stopped.

Kind of weird now that she's an adult though.

comment by FiftyTwo · 2013-01-06T23:11:51.802Z · LW(p) · GW(p)

I don't think I have any people in my head other than 'me.'

It takes me substantial conscious effort to emulate other minds. Is this unusual? (I can however easily argue from premises/to conclusions I don't believe).

comment by Bill_McGrath · 2013-01-04T12:25:35.419Z · LW(p) · GW(p)

I imagine defending my arguments to people I know, debate with, and find good at challenging my beliefs and making me explain them - most usually my girlfriend and my family. They're never very good copies - I often make bad predictions about what people will think of certain concepts - but they are useful in getting me to examine arguments. That might be a good place to start.

comment by TheOtherDave · 2013-01-04T16:05:45.061Z · LW(p) · GW(p)

Ten years or so ago, I used to have more distinct personas in my head than I do now.
Back when I did, they roughly speaking exemplified distinct emotional stances.
One was more compassionate, one more ruthless, one more frightened, one more loving, and so forth.
This wasn't quite the way Eliezer writes Harry, but shares some key elements.

My model of what's going on, based on no reliable data, is that there's a transition period between when a particular stance is altogether unacceptable to the ruling coalition in my head (aka "me"), and when that stance has more-or-less seamlessly joined that coalition (aka "I've changed"), during which it is acceptable but not fully internalized and I therefore tag it as "someone else".

As I say, I don't do this nearly so much anymore. That's not to say I'm consistent; I'm not, especially. In particular, I often observe that the way I think and feel is modified by priming effects. I think about problems differently after spending a while reading LW, for example.

What's changed is that there's no sense of a separate identity along with that. To put it in MoR terms: my experience is not of having a Slytherin in my head distinct from me that sometimes thinks things, but rather of sometimes thinking things in a more Slytheriny sort of way.

That suggests to me that maybe the difference is in how rigidly I define the boundaries of "the sorts of things I think".

comment by TimS · 2013-01-09T17:59:58.710Z · LW(p) · GW(p)

I sometimes find it helpful to label a particular perspective: cynical-Tim, optimistic-Tim, etc. These labels help clarify my thoughts by formalizing a certain type of self-reflection. But they don't know more than I, so are generally useless at brainstorming - which is how MoR!Harry seems to use them. I've taken those discussions as literary conceit and exposition for the readers, not as models of how to be more effective.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-09T20:23:44.349Z · LW(p) · GW(p)

But they don't know more than I, so are generally useless at brainstorming

Brainstorming has at least two components: knowing things, and recognizing that a thing you know is relevant to a situation. People inside your head might not be helpful at the former but they might be helpful at the latter, thanks to the brain's useful ability to mimic other brains.

I think Eliezer might have been inspired by internal family systems, which means this might be more useful at being effective than it sounds.

comment by Oscar_Cunningham · 2013-01-05T14:46:32.906Z · LW(p) · GW(p)

I often try to understand concepts by pretending to explain them to a historical figure who's smart enough to understand what I'm saying but from too long ago to know about the thing I'm trying to explain. For example I might try to explain Newton's Laws to Aristotle.

comment by CCC · 2013-01-14T07:38:21.790Z · LW(p) · GW(p)

There seems to be a reasonable attempt to get to Mars within a decade. See the Mars One website for details.

They intend to have four people on Mars by 2023, and it seems that a self-sustaining colony is the eventual goal.

comment by lsparrish · 2013-01-02T06:13:27.105Z · LW(p) · GW(p)

I've recently become interested in holding some competent opinions on FAI. Trying these on for size:

  1. FAI is like a thermostat. The thermostat does not set individual particles in motion, but measures and responds to particles moving in a particular average range. Similarly, FAI measures whether the world is a Nice Place to Live and makes corrections as needed to keep it that way.

  2. Before we can have mature FAI, there is the initial dynamic or immature FAI. This is a program with a very well thought out, tested, reliable architecture that not only contains a representation of Friendliness, but is designed to keep that as part of its fundamental search patterns. As it searches for self-modifications, it passes each potential modification through a filter which rejects any change that fails to provably preserve the Friendliness goal.

  3. Since provability is tricky, many optimizations that would in fact preserve Friendliness could be rejected for lack of a strategy to prove that they do. This seemingly implies that a reliable system with non-trivial things needing to be proved will be slower to self-improve than a kludgey system with simpler goals, like maximizing computronium. (A toy sketch of the filter from point 2 follows this list.)
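To make point 2 a bit more concrete, here is a deliberately toy sketch of the filter-and-reject loop described above. Everything in it is hypothetical scaffolding: the `propose_modification` and `proof_of_goal_preservation` helpers don't correspond to any real system, and the sketch illustrates only the control flow, not how Friendliness or proof search would actually be implemented.

```python
# Toy illustration only: the helper functions passed in are hypothetical placeholders.

def self_improvement_step(agent, propose_modification, proof_of_goal_preservation):
    """Adopt a candidate self-modification only if a proof that it preserves
    the Friendliness goal can be found; otherwise keep the current version."""
    candidate = propose_modification(agent)
    proof = proof_of_goal_preservation(agent, candidate)  # returns None if no proof is found
    if proof is not None:
        return candidate   # provably goal-preserving: accept the change
    return agent           # no proof found: reject, even if the change looks useful
```

Point 3 then corresponds to the `return agent` branch: any genuinely Friendliness-preserving optimization for which no proof is found gets discarded, which is where the self-improvement slowdown relative to a kludgey maximizer would come from.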

comment by [deleted] · 2013-01-01T11:46:03.045Z · LW(p) · GW(p)

Can we have a way to save comments?

I often need to retrieve something I've read on LessWrong, but search isn't always helpful. Saving everything I read would limit the scope significantly.

Replies from: gwern, drethelin
comment by gwern · 2013-01-01T18:28:33.231Z · LW(p) · GW(p)

Use something like http://www.ibiblio.org/weidai/lesswrong_user.php?u=gwern and then save the generated page?
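If it helps, here is a minimal standard-library sketch of that suggestion: fetch the generated page for a given user and save it locally so it can be searched later. The username and output filename are just examples.

```python
# Minimal sketch: download a user's generated comment page and save it locally.
import urllib.request

user = "gwern"  # example username
url = f"http://www.ibiblio.org/weidai/lesswrong_user.php?u={user}"

with urllib.request.urlopen(url) as response:
    html = response.read()

with open(f"lesswrong_{user}.html", "wb") as f:
    f.write(html)

print(f"Saved {len(html)} bytes to lesswrong_{user}.html")
```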

comment by drethelin · 2013-01-01T12:00:57.061Z · LW(p) · GW(p)

You could click the permalink and bookmark it, or copy-paste interesting comments into a text file you can grep.

comment by Ritalin · 2013-01-19T17:47:59.336Z · LW(p) · GW(p)

Spec Ops: The Line; a rationalist twist?

I've played through Spec Ops: The Line. Interesting though the game is, there's one aspect I found very lacking: the intelligence and rationality of the protagonists, both instrumental and epistemic. It's not just their poor decision-making or their delusions, but also their complete lack of defenses against the horrors of war, both those they commit and those committed by others. They act from the gut and mismanage their feelings of guilt, obligation, and fear.

The game has a theme of helplessness in the face of chaos: it doesn't matter whether you try to do the right thing, because the world does not bend to your will, and you'll find yourself forced to do unsavoury things, or find that things you do turn out to have horrible unforeseen consequences.

I was wondering whether it was possible to hammer this message home in spite of having intelligent, rational characters. The game, as it is, says "Good intentions and outrageous badassery aren't enough to prevent failure or protect you from moral bankruptcy". I'd like to amend that to "Good intentions, a rational and intelligent approach, and outrageous badassery aren't enough to prevent failure or protect you from moral bankruptcy or insanity".

Any suggestions on how to tackle such a problem?

comment by FiftyTwo · 2013-01-11T14:13:12.406Z · LW(p) · GW(p)

Anyone heard of Marblar?

The idea is to crowdsource uses for the huge number of patents and new technologies generated by universities but never used, and to award prizes. It seems like a really clever way to capture low-hanging fruit, and the sort of thing LW people should be quite good at.

Replies from: drethelin
comment by drethelin · 2013-01-11T18:44:37.450Z · LW(p) · GW(p)

Isn't the whole point of patents for people NOT to use them? If it's not economical for the patent-holders to profit from them, isn't it even less economical for someone who would need to pay license fees to use them?

Replies from: pedanterrific
comment by pedanterrific · 2013-01-11T19:47:20.610Z · LW(p) · GW(p)

I think the idea is that it is economical, but the patent-holder simply never thought of it.

comment by Paul Crowley (ciphergoth) · 2013-01-07T07:56:29.487Z · LW(p) · GW(p)

noooooooooooooooooooo! The Singularity Institute, and FHI, jump a shark! :(

Replies from: Vaniver, None, army1987
comment by Vaniver · 2013-01-11T02:31:38.508Z · LW(p) · GW(p)

I seem to remember that, or something similar, popping up on the internet months ago.

comment by [deleted] · 2013-01-10T19:05:20.605Z · LW(p) · GW(p)

Oh? That recently?

comment by A1987dM (army1987) · 2013-01-07T17:47:58.578Z · LW(p) · GW(p)

It lacks the crown on the top!

comment by thescoundrel · 2013-01-05T15:05:34.836Z · LW(p) · GW(p)

If in Newcomb's problem you replace Omega with James Randi, suddenly everyone is a one-boxer, as we assume there is some sleight of hand involved to make the money appear in the box after we have made the choice. I am starting to wonder if Newcomb's problem is just simple map and territory: do we have sufficient evidence to believe that under any circumstance where someone two-boxes, they will receive less money than a one-boxer? If we table the question of how it is being done and focus only on the testable probability of whether Randi/Omega is consistently accurate, we can draw conclusions about whether we live in a universe where one-boxing is profitable or not. Eventually, we may even discover the how, and also the source of all the money that Omega/Randi is handing out, and win. Until then, like all other natural laws that we know but don't yet understand, we can still make accurate predictions.
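One way to make the "testable probability" framing concrete is to treat the predictor's observed accuracy as an ordinary probability and compare expected winnings directly. This is a minimal sketch using the standard $1,000,000/$1,000 payoffs and the evidential reading of that accuracy, which is exactly the move a causal decision theorist would dispute, so take it as an illustration of the framing rather than a resolution.

```python
# Expected payoff of each choice, treating the predictor's track record
# as the probability that it calls your choice correctly.

def expected_payoffs(predictor_accuracy):
    """Expected winnings for one-boxing and two-boxing."""
    p = predictor_accuracy
    one_box = p * 1_000_000 + (1 - p) * 0                 # box B is full iff predicted one-boxing
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)   # box B is full only if predicted wrongly
    return one_box, two_box

for acc in (0.5, 0.9, 0.99, 1.0):
    one, two = expected_payoffs(acc)
    print(f"accuracy={acc:.2f}  one-box EV=${one:,.0f}  two-box EV=${two:,.0f}")
```

On this reading, one-boxing has the higher expected value whenever the predictor is right slightly more than half the time, so even a fairly unimpressive Randi-level track record would suffice.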

Replies from: TimS
comment by TimS · 2013-01-09T18:05:12.902Z · LW(p) · GW(p)

No. I think that is fighting the hypothetical.

More generally, the discipline of decision theory is not about figuring out the right solution to a particular problem - it's about describing the properties of decision methods that reach the right solutions to problems generally.

Newcomb's is an example of a situation where some decision methods (e.g. CDT) don't make what appears to be the right choice. Either CDT is failing to make the right choice, or we are not correctly understanding what the right choice is. That dilemma motivates decision theorists, not particular solutions to particular problems.

Replies from: thescoundrel
comment by thescoundrel · 2013-01-09T19:30:45.850Z · LW(p) · GW(p)

I think that is fighting the hypothetical.

That's possible, but I am not sure how I am fighting it in this case. Leave Omega in place: why do we assume equal probability of Omega guessing incorrectly or correctly, when the hypothetical states he has guessed correctly each previous time? If we are not assuming that, why does CDT treat each option as equal, and then proceed to open two boxes?

I realize that decision theory is about a general approach to solving problems. My question is: why are we not including the probability based on past performance in our general approach, or, if we are, why are we not doing so in this case?

comment by BlackNoise · 2013-02-05T23:49:52.714Z · LW(p) · GW(p)

Here's an anthropic question/exercise inspired by this fanfic (the end of the 2nd chapter, specifically). I don't have the time to think about it properly, but it seems like an interesting test for current anthropic-reasoning theories under esoteric/unusual conditions. The premise is as follows:

There exists a temporal beacon, acting as an anchor in time. An agent (or agents) may send their memories back to the anchored time, but as time goes on they may also die or be otherwise prevented from sending memories back. Every new iteration, the agent-copy at the moment immediately after the beacon's creation gets blasted with memories from 'past' iterations: either only from the immediately preceding one (which recursively includes all previous iterations further back in subjective time), or from every past iteration at once, with or without a convenient way to differentiate between overlapping memories (another malleable aspect of the premise), or, for a real head-screw, from all iterations that lived.

The interesting question is how an agent should estimate its probability of dying in the current iteration, based on the information it was blasted with immediately post-anchor.

A very simple toy model would be something like this: assume all agent copies send back memories after T years if they haven't died, and that the probability of dying (or being otherwise unable to send memories back) each iteration is p. What should an agent that finds itself with memories from N iterations estimate as its probability of dying in this iteration?
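For what it's worth, here is a quick Monte Carlo sketch of one very simple, deliberately non-anthropic reading of that toy model: each iteration independently fails to report back with probability p, memories accumulate only along an unbroken chain, and observer-moments are counted naively. All of those are my own simplifying assumptions, not necessarily the intended premise.

```python
# Monte Carlo sketch of the toy model under naive (non-anthropic) counting.
import random

def simulate(p, max_iters=10_000):
    """One chain: each iteration dies (and fails to send memories back) with prob p.
    Returns (memories_received, died_this_iteration) pairs, stopping when an
    iteration dies, since then nothing further gets sent back."""
    history = []
    memories = 0
    for _ in range(max_iters):
        died = random.random() < p
        history.append((memories, died))
        if died:
            break
        memories += 1   # the next iteration receives one more set of memories
    return history

def estimate_death_prob_given_N(p, N, n_chains=100_000):
    """Fraction of observed iterations with exactly N memories that die."""
    relevant = [died
                for _ in range(n_chains)
                for memories, died in simulate(p)
                if memories == N]
    return sum(relevant) / len(relevant) if relevant else float("nan")

for N in (0, 1, 5, 20):
    print(N, round(estimate_death_prob_given_N(0.1, N), 3))
```

Under these assumptions the conditional estimate comes out to roughly p no matter how many iterations' worth of memories you wake up with; any more interesting answer has to come from the anthropic side, i.e. from how the observer-moments receiving the memories are weighted.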

There should probably be more unsafe-time-travel-based questions to test anthropic decision making, and maybe also to shape intuition regarding many-worlds/multiverse views.

comment by FiftyTwo · 2013-01-12T22:45:53.722Z · LW(p) · GW(p)

Does the

"If you don't know what you need, take power"

quote have any origin before Final Words? I searched for it, but only found it in a post on heuristics that linked back there.

The quote appeals to me quite a lot, but I'd like more discussion around it and arguments for or against. (If you have any, feel free to post here.)

comment by CAE_Jones · 2013-01-06T23:54:10.405Z · LW(p) · GW(p)

I spent four hours today not working. Not doing things other than working, mind; I had the necessary files open, took notes designed to lead toward writing code, then spent most of the time simply... not working.

When it became apparent that akrasia was not going to give up, I went to sleep for four hours.

I was trying to work on a map format conversion function, with which my latest project would be able to move forward more quickly, toward my target demo date of March 2013, at which point I would attempt to secure funding and such.

Honestly, it's just a for loop. I can't bring myself to write a blasted for loop. I even had major sources of working-difficulty removed--I had my braille display connected to my computer (on my normally "not fit for another device" desk, even), with music playing (this normally helps), and the house to myself (my most productive week of the year was one when everyone else who lives here was on vacation).

If my loan payments weren't $225 more than my SSI each month, I would throw up my hands and hire an assistant. What can I do on a budget of $-225?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-07T02:27:55.205Z · LW(p) · GW(p)

I don't know if this is relevant for you, but when I'm that stuck, it's a good idea to check on what I've been eating lately -- too many simple carbs means my ability to take action is squelched.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-01-07T03:48:53.535Z · LW(p) · GW(p)

I've noticed the same thing, and tried to control for that here as well.

I finally managed to write it, after twelve hours of non-accomplishment. I don't really know what changed; the first time I tried was shortly after waking and eating. The third four-hour period consisted of the same, though I think there was a bigger gap between waking and eating, and in the third four hours I wound up spending time on the internet. The coding itself only took a few minutes, correcting compilation errors included.

comment by MileyCyrus · 2013-01-04T00:11:45.057Z · LW(p) · GW(p)

If a middle-class couple in a first world country decide to create and raise a child, they have done

[pollid:379]

Replies from: Qiaochu_Yuan, FiftyTwo, Jabberslythe
comment by Qiaochu_Yuan · 2013-01-04T01:56:55.078Z · LW(p) · GW(p)

My current thoughts on this issue run as follows: it seems like smart people can come up with various reasons not to have children (e.g. because it frees up their finances and free time to do interesting things, or because life is suffering). This seems dangerous. If smart people stop having children, then the population gets dumber, and I don't want that. On the other hand, insanely smart people really should have money and free time to do interesting things such as save the world.

So my current ideal child-bearing policy is something like the following: dumb people should be discouraged from having children, smart but not insanely smart people should be encouraged to have children, and insanely smart people should do whatever they want. (Maybe periodically donate their genetic material.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-01-06T22:43:39.056Z · LW(p) · GW(p)

We could encourage smart people to have more children by paying some of their expenses if specified conditions are met. This would be completely legal, and within the power of a few people with sufficient money.

If done by LW fans, the conditions should be written to increase the probability that the children will become smart rationalists and contribute to society positively.

comment by FiftyTwo · 2013-01-06T23:08:12.076Z · LW(p) · GW(p)

Error, insufficient data

comment by Jabberslythe · 2013-01-04T04:09:55.320Z · LW(p) · GW(p)

I'd say something bad, because the money could be better spent. But if they weren't going to do effective altruism stuff with it, it's probably just neutral so far as I can tell.

comment by [deleted] · 2013-01-03T03:14:06.472Z · LW(p) · GW(p)

What percentage of the computer-using populace, or of LWers, do you think uses the Dvorak keyboard?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-03T05:18:42.476Z · LW(p) · GW(p)

A quick google search was surprisingly useless at answering this question. In particular, there is no good answer on Quora.

So, poll time:

[pollid:378]

Replies from: OrphanWilde, 9eB1, drethelin
comment by OrphanWilde · 2013-01-03T18:07:31.154Z · LW(p) · GW(p)

Qwerty user.

Back when ephedra was legal, I hit more than 250 WPM; but ephedra increased the speed at which I thought. The limiting factor for me is not how fast my fingers move, but how fast I can articulate and finish my thoughts. My primary issue when typing isn't typing speed itself, but an extensive editing process, and the fact that I'll alter my thoughts mid-stream and have to go back and correct my verbs to match the new subject or noun I've chosen for the sentence.

I've tried Dvorak. It's not any harder to use than Qwerty, but I didn't find it any easier, either. It's just different.

comment by 9eB1 · 2013-01-03T17:16:09.557Z · LW(p) · GW(p)

This poll is subject to self-selection problems. People who use QWERTY are less likely to bother responding. I use Colemak, and so answered "Other."

I'm not sure that I necessarily type much faster using Colemak than I do using QWERTY, but it is far more comfortable, in the same way that lounging in a chair feels more comfortable than sitting on a stool. Typing is effortless compared with QWERTY because of the economy of motion Colemak has (and I presume Dvorak as well). I just measured my typing speed at 78 WPM, so people can definitely achieve better speeds than me with QWERTY if they are dedicated, but I still wouldn't go back.

Replies from: None
comment by [deleted] · 2013-01-03T20:37:42.640Z · LW(p) · GW(p)

This pretty much describes my experience with Dvorak. I'll just add that my learning hump was a few weeks long; I'd recommend learning an alternate layout during an extended period when your typing efficiency can afford to plummet.

comment by drethelin · 2013-01-03T07:53:12.602Z · LW(p) · GW(p)

I type around 100 WPM with QWERTY, which is plenty for conversations on IRC and forum typing. I don't program or anything like that.

comment by agamrafaeli · 2013-01-02T18:36:44.530Z · LW(p) · GW(p)

When sitting down to design one's life, happiness is a worthy goal. In today's world, our online life requires a large amount of attention and as such has a large influence on us, including on our happiness.

The question I'd like to ask is whether you are more likely to be happy if you have one queue of e-mail messages that incorporates both your work and your personal life.

An argument in favor could be made that by combining them you are creating a holistic, smooth lifestyle. Such an argument is similar to advocating living near your workplace and having friendships with co-workers outside the office.

An easy counter-argument is that the workplace is a natural greenhouse of tension, and that by separating personal and business email you are more likely to keep happiness separate from tension.

Ideas?

Replies from: NancyLebovitz, mapnoterritory
comment by NancyLebovitz · 2013-01-02T19:54:43.928Z · LW(p) · GW(p)

It seems to me that it would be easy enough to do experiments (maybe a month long) to find out how you're affected. I doubt that the answer is the same for everyone, and it might not be the same for most people at all times.

If there's a quickly changing situation at work or at home, this might mean that you want all your email in one queue.

If work or home is resulting in highly fraught email, you might want the non-fraught one as a refuge.

And you might have privacy concerns which mean that you absolutely don't want both of them in one queue.

comment by mapnoterritory · 2013-01-05T20:16:55.456Z · LW(p) · GW(p)

A data point from me: I was much more stressed when I had my email accounts joined. I'd say that in the long run you want to have them separated, even if you really enjoy your job.