Open Thread, May 1-15, 2012

post by OpenThreadGuy · 2012-05-01T04:14:51.616Z · LW · GW · Legacy · 267 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

267 comments

Comments sorted by top scores.

comment by [deleted] · 2012-05-01T11:02:23.454Z · LW(p) · GW(p)

Related to: List of public drafts on LessWrong

Article based on this draft: Conspiracy Theories as Agency Fictions

I was recently thinking about a failure mode that classical rationality often recognizes and even challenges reasonably competently, yet nearly all the heuristics it uses to detect it seem remarkably easy to misuse. Not only that, they seem easily hackable to win a debate. How much has the topic been discussed on LW? Wondering about this, I sketched out my thoughts in the following paragraphs.

On conspiracy theories

What does the phrase even mean? It is generally used to explain events or trends as the results of plots orchestrated by covert groups. Sometimes people use the term for theories that important events are the products of secret plots largely unknown to the general public. Conspiracy in a somewhat more legal sense describes an agreement between persons to deceive, mislead, or defraud others of their legal rights, or to gain an unfair advantage in some endeavour. And finally, it is a convenient tool for clearly and vividly painting something as low status: a boo light applied to any explanation that has people acting in anything that can be described as self-interest and is a few inferential jumps away. One could argue this is the primary meaning of calling an argument a conspiracy theory in on-line debates.

But putting aside the misuse of the label and the associated cached thoughts, people do engage in constructing conspiracy theories where they just aren't needed. Note that we have plenty of historical examples of real conspiracies with pretty high stakes, so we know they can be the right answer. Sometimes entire on-line communities fixate on them, or just don't call such bad thinking out. Why does this happen? Groups are complicated, since we are social monkeys. This is something I don't feel like going into right now, since plenty of fancy phrases like tribal attire or bandwagon effect would abound, not to mention the obligatory Hansonian status-based explanations, packed in an even bigger wall of text. Let us first take a look at why individuals may be biased towards such explanations.

First off, we have a hard time understanding that coordination is hard. Seeing a large payoff available and thinking it easily in reach if "we could just get along" seems like a classic failing. Our pro-social sentiments lead us to downplay such barriers in our future plans. Motivated cognition when assessing the threat potential of perceived enemies or strangers likely shares this problem.

Even if we avoid this, we may still be lost, since the second big relevant thing is our tendency to anthropomorphize things that had better not be anthropomorphized. A paranoid brain that sees agency in every shadow or strange sound seems like something evolution would favour over one that fails to see it every now and then; in other words, the cost of false positives was reasonably low. Also, our brains are just plain lazy. The general population is pretty good at modelling other human minds, and considering just how hard the task is, we do a remarkable job of it. So when you want rain, you do a rain dance to appease the sky spirits, since the weather is pretty capricious, and angry sky spirits is a model that makes as much sense as any other (when you are stuck in relative ignorance) and is cheap to run on your brain.

The modern world is remarkably complex. Our Dunbarian minds probably just plain can't grasp how a society can be that complex and unpredictable without it being "planned" by a cabal of Satan or Heterosexual White Males or the Illuminati (but I repeat myself twice) scheming to make weird things happen in the small stone-age tribe. Learning about and gaining confidence in some models helps people escape anthropomorphizing human society (this might sound strange, but here on LW we are wary of doing this even to people, ha, beat that!) or the economy or government. The latter is particularly salient, since the idea that something like the United States government can be successfully modelled as a single agent to explain most of its actions is something I dare say most people slip up on occasionally. And lastly... naughty secret conspiracies and malignant agency just plain make a good story.

Humans loooove stories.

Replies from: David_Gerard, Viliam_Bur
comment by David_Gerard · 2012-05-01T11:47:11.313Z · LW(p) · GW(p)

Polish this and it will make a decent discussion post.

Replies from: None
comment by [deleted] · 2012-06-09T15:23:15.960Z · LW(p) · GW(p)

I have since expanded and polished it into an article. I hope it isn't unworthy!

comment by Viliam_Bur · 2012-05-02T12:03:57.085Z · LW(p) · GW(p)

nearly all the heuristics it uses to detect [a failure mode] seem remarkably easy to misuse.

Related: Reversed stupidity is not intelligence; Knowing About Biases Can Hurt People.

An argument against a conspiracy theory is probabilistic: we don't deny that conspiracies exist, only that in this specific case a non-conspiracy explanation is more probable than a conspiracy explanation, and therefore focusing on the conspiracy explanation is privileging a hypothesis.

People are not very good at probabilistic reasoning. So some of them prefer an interesting story. And others try to reverse stupidity by making fully general counterarguments against conspiracies.
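The probabilistic point above can be made concrete with a toy Bayes calculation. The numbers here are entirely made up for illustration; the only point is the shape of the reasoning: even when a conspiracy would explain the evidence well, a low prior plus a decent mundane explanation can leave the conspiracy hypothesis improbable.

```python
# Toy comparison (made-up numbers) of a conspiracy vs. a mundane
# explanation for some observed event E, via Bayes' theorem.
prior_conspiracy = 0.01       # conspiracies are rare, but they do exist
prior_mundane = 0.99
p_e_given_conspiracy = 0.9    # a conspiracy would very likely produce E
p_e_given_mundane = 0.3       # E is also fairly likely without one

# Total probability of observing E under either hypothesis.
p_e = (prior_conspiracy * p_e_given_conspiracy
       + prior_mundane * p_e_given_mundane)

posterior_conspiracy = prior_conspiracy * p_e_given_conspiracy / p_e
posterior_mundane = prior_mundane * p_e_given_mundane / p_e

print(round(posterior_conspiracy, 3))  # prints 0.029
print(round(posterior_mundane, 3))     # prints 0.971
```

Note that the conspiracy's posterior did rise (from 1% to about 3%), which is exactly the trap: the evidence genuinely favours the conspiracy, just not nearly enough to make it the leading explanation.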

The situation is further complicated by not having a precise definition of what a conspiracy is. Does it require a mutual verbal agreement, or does silent cooperation on a Prisoner's Dilemma also count as a conspiracy? (Two duopolistic producers decide to avoid lowering their product prices, without ever speaking with each other.) Somewhere in between is cooperation organized by people who avoid speaking about the topic directly. (Each of the duopolistic producers publishes a press article: "we try to provide the best quality, because making cheap junk would be bad for our customers".) Actually, the players can even deceive themselves that they are really following a different goal, and the resulting cooperation is just a side effect.
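The duopoly example above can be sketched as a one-shot game. The payoff numbers and the HIGH/LOW labels below are my own illustration, not anything from the comment; they just show why silent cooperation can look like a conspiracy: both firms keeping prices high beats the non-cooperative equilibrium, even though neither firm ever spoke to the other.

```python
# Illustrative payoff matrix for two duopolists (made-up numbers).
# Each firm chooses HIGH (keep prices high) or LOW (undercut).
# payoffs[(a, b)] = (profit of firm 1, profit of firm 2)
payoffs = {
    ("HIGH", "HIGH"): (3, 3),  # tacit collusion: both profit
    ("HIGH", "LOW"):  (0, 4),  # firm 2 undercuts and takes the market
    ("LOW",  "HIGH"): (4, 0),
    ("LOW",  "LOW"):  (1, 1),  # price war
}

def best_response(opponent_move):
    """Firm 1's best reply to a fixed move by firm 2."""
    return max(["HIGH", "LOW"],
               key=lambda m: payoffs[(m, opponent_move)][0])

# LOW is a dominant strategy in the one-shot game...
assert best_response("HIGH") == "LOW"
assert best_response("LOW") == "LOW"
# ...yet (HIGH, HIGH) pays both firms more than the (LOW, LOW)
# equilibrium, so sustained high prices look "coordinated" even
# without any agreement.
assert payoffs[("HIGH", "HIGH")][0] > payoffs[("LOW", "LOW")][0]
```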

comment by maia · 2012-05-01T13:07:53.951Z · LW(p) · GW(p)

At Reason Rally a couple of months ago, we noticed that a lot of atheists there seemed to be there for mutual support - because their own communities rejected atheists, because they felt outnumbered and threatened by their peers, and the rally was a way for them to feel part of an in-group.

There seem to be differing concentrations of people who have had this sort of experience on LessWrong. Some of us felt ostracized by our local communities while growing up, others have felt pretty much free to express atheist or utilitarian views for their whole lives. Does anyone else think this would be worth doing a poll on / have experiences they want to share?

Replies from: maia
comment by maia · 2012-05-02T19:43:27.178Z · LW(p) · GW(p)

Since this got upvoted, I drafted a rough version of a form to use for this poll.

Feedback on the survey design is more than welcome.

Replies from: dbaupp
comment by dbaupp · 2012-05-03T00:23:30.058Z · LW(p) · GW(p)

Are we meant/allowed to fill it out yet?

Replies from: maia
comment by maia · 2012-05-03T02:36:54.961Z · LW(p) · GW(p)

Since it's been up for a few hours and I haven't gotten any criticisms yet, I'm going to post it as a discussion post.

So, go ahead :)

comment by katydee · 2012-05-01T07:06:37.794Z · LW(p) · GW(p)

Whatever happened to the second Quantified Health Prize?

comment by [deleted] · 2012-05-01T05:05:53.825Z · LW(p) · GW(p)

I get a ridiculous amount of benefit by abusing store return deadlines. I've tested and returned an iPhone, $400 Cole Haan bag, multiple coats, jeans, software, video games, and much more. It's surprising how long many return periods are, and it's a fantastic way to try new stuff and make sure you like it.

Replies from: Alicorn
comment by Alicorn · 2012-05-01T06:40:21.856Z · LW(p) · GW(p)

How often do store personnel give you a hard time about returning these objects?

Replies from: None
comment by [deleted] · 2012-05-01T08:15:16.122Z · LW(p) · GW(p)

Never. Usually they ask why I'm returning it, and I just decide how literally true I want my answer to be, and that's that. I try to return it before the deadline, though, and I ask what the terms are at the time of purchase. Sometimes stores will let you return stuff later than the deadline, anyway.

Replies from: David_Gerard
comment by David_Gerard · 2012-05-01T11:48:24.846Z · LW(p) · GW(p)

In that case how is it "abuse"? Are you speaking of your intent?

Replies from: RichardKennaway
comment by RichardKennaway · 2012-05-01T14:31:23.565Z · LW(p) · GW(p)

Because there is an unspoken understanding, that michaelcurzi is clearly aware of, that a no-questions-asked returns policy is intended for cases where the buyer found the item unsuitable in some way, rather than to provide free temporary use of their stuff.

comment by Kaj_Sotala · 2012-05-01T11:44:18.037Z · LW(p) · GW(p)

Re-reading my own post on the 10,000 year explosion, a thought struck me. There's evidence that the humans populations in various regions have adapted to their local environment and diet, with e.g. lactose tolerance being more common in people of central and northern European descent. At the same time, there are studies that try to look at the diet, living habits etc. of various exceptionally long-lived populations, and occasionally people suggest that we should try to mimic the diet of such populations in order to be healthier (e.g. the Okinawa diet).

That made me wonder. How generalizable can we consider any findings from such studies? What odds should one assign to the hypothesis that any health benefits such long-lived populations get from their diet are mostly due to local adaptation for that diet, and would not benefit people with different ancestry?

Replies from: gwern, Vaniver
comment by gwern · 2012-05-01T15:52:10.311Z · LW(p) · GW(p)

The diets I've seen described all sound like fairly old-fashioned diets. None of them seem to suggest foods that would be novel in any areas - what, fish are novel? Fruits and vegetables? Smaller portions and regular exercise?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-05-01T18:00:14.694Z · LW(p) · GW(p)

Your categories are rather broad. "Fish" are not going to be novel anywhere, but "Atlantic cod" might be. Likewise, by "fruit" do you mean apples, oranges, or some African fruit that's completely obscure in the West because nobody's figured out how to commercialise it yet?

Replies from: gwern
comment by gwern · 2012-05-01T18:11:37.819Z · LW(p) · GW(p)

It's a rather broad topic. The main undisputed example from The 10,000 Year Explosion is lactose intolerance; lactose is present in most milks, so you could with justice call this an example of an entire unadapted food group. The recommended foods in things like the Mediterranean or Okinawan diets all use food groups consumed by pretty much all ethnicities. No ethnicity is 'fruit intolerant' or 'fish intolerant', that I've heard of. Milk seems to pretty much be the special-case exception that proves the rule.

Replies from: None, David_Gerard
comment by [deleted] · 2012-05-04T14:34:15.661Z · LW(p) · GW(p)

The risks and benefits of alcohol consumption for different ethnic groups seems like another example.

comment by David_Gerard · 2012-05-01T23:37:32.531Z · LW(p) · GW(p)

Milk is two mutations (one in Europe and one in Kenya) and we've worked out when and where. It's a very special case.

comment by Vaniver · 2012-05-01T15:56:20.960Z · LW(p) · GW(p)

This was my rationale for sticking with my mostly wheat-based diet, but I think my belief in this position is slipping. It does appear that there are strong biochemical reasons to favor rice over wheat, for example. I think there's reason to be skeptical of "Okinawans eat this way, so you should too" but I think "Okinawans eat this way" is at least weak evidence for any particular diet change, like "you should eat more rice" or "you should eat more seaweed," but that those changes need other evidence ("you're gluten intolerant" or "iodine is good for you").

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-05-08T08:11:36.635Z · LW(p) · GW(p)

There's a lot to be said for self-experimentation. One of my friends has found that his digestive system "shuts down" (constipation, lack of satiation, possibly additional symptoms I've forgotten) if he doesn't eat wheat. This trait runs in his family. I haven't heard of anyone else having it.

One's ancestry and long-lived populations give clues about what to experiment with, though.

Replies from: Vaniver
comment by Vaniver · 2012-05-08T12:00:44.393Z · LW(p) · GW(p)

That's fascinating. Do you know what he tried to replace it with?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-05-08T14:13:05.936Z · LW(p) · GW(p)

I'm pretty sure it was rice. He hasn't experimented to find out whether he needs gluten or if it's something more specific to wheat.

comment by [deleted] · 2012-05-01T06:07:36.251Z · LW(p) · GW(p)

I feel as though this cobbled-together essay from '03 has a lot of untapped potential.

Replies from: Username, Incorrect
comment by Incorrect · 2012-05-01T16:16:50.782Z · LW(p) · GW(p)

I think most thoughts could probably not be represented externally. It definitely seems useful but "How to Make a Complete Map of Every Thought You Think" sounds like an exaggeration.

Replies from: None
comment by [deleted] · 2012-05-01T16:18:33.084Z · LW(p) · GW(p)

You don't say?

Replies from: praxis, Incorrect
comment by praxis · 2012-05-02T17:05:16.397Z · LW(p) · GW(p)

Statements of the "obvious" contribute plenty to the conversation. Putting the silent consensus into words is useful. Condescending snark is not.

comment by Incorrect · 2012-05-01T16:20:22.092Z · LW(p) · GW(p)

Actually I intended to state precisely what I did.

comment by [deleted] · 2012-05-01T17:05:43.721Z · LW(p) · GW(p)

An economics question:

Which economic school of thought most resembles "the standard picture" of cogsci rationality? In other words, which economists understand probability theory, heuristics & biases, reductionism, evolutionary psychology, etc. and properly incorporate it into their work? If these economists aren't of the neo-classical school, how closely does neo-classical economics resemble the standard picture, if at all?

Unnecessary Background Information:

Feel free to not read this. It's just an explanation of why I'm asking these questions.

I'm somewhat at a loss when it comes to economics. When I was younger (maybe 15 or so?) I began reading Austrian economics. The works of Murray Rothbard, Ludwig von Mises, etc., served as my first rigorous introduction to economics. I self-identified as an Austrian for several years, up until a few months ago.

For the past year, I have learned a lot about cogsci rationality through the LW sequences and related works. I think I have a decent grasp of what cogsci rationality is, why it is correct, and how it conflicts with the method of the Austrian school. (For those who aren't aware, Austrians use an a priori method and claim absolute/infinite certainty, among other things.) The final straw came when I read Bryan Caplan's "Why I Am Not an Austrian Economist" and his debate with Austrian economist Walter Block. Caplan ably defended BayesCraft. I - with emotional difficulty - consciously updated my belief in Austrianism to below 0.5. I knew I could no longer be an Austrian, nor did I want to be.

Caplan is a neo-classical economist, and neo-classical economics seems to be the dominant school of modern economic thought. So I'm reading my way through introductory neo-classical economics textbooks. (Specifically, Principles of Macroeconomics and Principles of Microeconomics by Mankiw.) I am also looking to take some economics courses when I start university in the fall. My primary major will likely be mathematics, but I am considering double majoring in economics. Maybe get a graduate degree in economics? I don't know yet.

But I'm apprehensive about reading bad economics textbooks because I don't know enough good economics to sort out the bunk. And the reason I want to read economics textbooks in the first place is to learn more good economics. So I'm in a catch-22. I think I'm safe enough reading a standard intro micro/macro book. But when it comes to finance? Banking? Monetary theory? I haven't a clue who to trust.

So I'm looking to take what I do know (cogsci rationality) and see where it is utilized in economics. If there is a school of economic thought that uses it as their methodology, I think that serves as very strong evidence I can likely trust what they say.

Replies from: Matt_Simpson, sixes_and_sevens, Swimmy, badger, Crux, Amanojack
comment by Matt_Simpson · 2012-05-01T19:39:27.034Z · LW(p) · GW(p)

Econ grad student here (and someone else converted away from Austrian econ in part from Caplan's article + debate with Block). Most of economics just chugs right along with the standard rationality (instrumental rationality, not epistemic) assumptions. Not because economists actually believe humans are rational - well some do, but I digress - but largely because we can actually get answers to real world problems out of the rationality assumptions, and sometimes (though not always) these answers correspond to reality. In short, rationality is a model and economists treat it as such - it's false, but it's an often useful approximation of reality. The same goes for always assuming we're in equilibrium. The trick is finding when and where the approximation isn't good enough and what your criteria for "good enough" is.

Now, this doesn't mean mainstream economists aren't interested in cogsci rationality. An entire subfield of economics - Behavioral Economics - rose up in tandem with the rise of the cogsci approach to studying human decision making. In fact, Kahneman won the Nobel Prize in economics. AFAICT there's a large market for economic research that applies behavioral economics to problems typically studied in classical, rational agent settings. The problem isn't the demand side - I think economists would love to see a fully general theory of general equilibrium with more plausible agents - it's the supply side: getting answers out of models with non-rational agents is a difficult task. It's already hard enough with rational agents for models to be anywhere near realistic - in macro models with micro foundations, we often assume all agents are identical and all firms are identical. This may seem terribly unrealistic, but often there's some other complication in the model that makes it hard enough to find solutions. Adding heterogeneous firms and agents is an extra complication that may not add anything illuminating to the model. So, many economists treat the rationality assumptions which are fundamental to neoclassical economics similarly. If the rationality of agents within their model is tangential to the point they're trying to make (which may only be known empirically), they'll choose the easier assumption to work with. There are fields where the frailty of human rationality seems centrally important, and those are the fields where you're most likely to see nonstandard rationality assumptions. Behavioral Finance is an example of one of these.

The biggest thing I would say is, don't think in terms of "schools" of economic thought. Think in terms of models and tools. Most good ideas are eventually assimilated into the "neoclassical" economic toolkit in some form or another. And besides, thinking in terms of schools of thought is a good way to unintentionally mind-kill yourself.

As far as textbooks go, most higher level (intermediate micro and above) will present models without making any claims about when they're a good approximation and when they aren't. Oftentimes this is because the models being presented are actually just stepping stones to the more realistic and more complicated models economists are actually using. This is generally good, though I wish there were more empirical evidence presented. Any edition of Microeconomic Analysis by Varian will give you a good intermediate level (requires some calculus) rundown of standard micro theory. Think of it as taking standard economic intuitions (to economists - even austrians) and writing down equations that describe them so that we can talk about them precisely. I'd steer clear of any non-graduate level macro textbooks. The macro we teach undergrads is not the macro practicing macroeconomists actually believe. (Even on the graduate level, there isn't a generally accepted class of models that economist agree on, so it might not be that useful to study modern macro). If your mathematical background is stronger, Mas-Colell, Whinston and Green's Microeconomic Theory is a standard first year graduate micro text that's densely packed with a lot of material. Simon and Blume's Mathematics for Economists is the standard math primer used to prepare students for the class Mas-Colell is typically used in, if you're unsure about your math background.

Edit: Holy mother of grammar!

Replies from: None, Amanojack
comment by [deleted] · 2012-05-01T21:02:04.785Z · LW(p) · GW(p)

Wow! That was extraordinarily helpful. My only regret is that I have but one upvote to give.

You're right about the unintentional self-mindkilling from focusing on schools of thought. It's obvious to me in hindsight.

It might just be a leftover from my Austrian days, but I am thoroughly skeptical of any macroeconomic model. A red flag for me is when I read that macro models aren't generally reducible to micro models. The only reason I'm reading a macro textbook is that my school requires intro to macro as a prerequisite for intro to micro. And I was thinking of studying introductory macro so I have a decent handle on it when I have to take it in school.

Reading the first 100 pages of Mankiw's Principles of Macroeconomics hasn't been too terrible. Though so far I think it has basically been micro disguised as macro. But based on what you're saying, I think it might be better to stop reading it for now. I'll just learn it when I take it in school.

My math background is okay, but not fantastic. I took some calculus my senior year of high school and got up to integration. For my freshman year of university, I'm taking Calc 1 in the fall and Calc 2 in the spring. Mathematics is likely my primary major, so I think I'll read Mathematics for Economists and then move onto Varian.

Thank you very much for the suggested books, advice, and insight.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-05-01T22:37:40.883Z · LW(p) · GW(p)

No problem!

If you're going to attack Varian, I'd suggest not focusing on Mathematics for Economists too much. Make sure you understand basic constrained maximization using the Lagrangian, and then you're ready for Varian. Anything else he does that seems weird you can pick up as needed. Constrained maximization is usually taught in Calc 3 AFAIK, but I don't think it's too difficult if you can handle Calc 1.
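As a concrete sketch of what constrained maximization looks like in micro, here is the standard Cobb-Douglas consumer problem — my own illustrative example, not one taken from Varian. The Lagrangian first-order conditions for maximizing u(x, y) = x^a · y^(1-a) subject to the budget px·x + py·y = m give the well-known closed-form demands x* = a·m/px and y* = (1-a)·m/py, which we can check numerically:

```python
# Maximize u(x, y) = x**a * y**(1 - a) subject to px*x + py*y = m.
# The Lagrangian first-order conditions give the closed-form demands
# x* = a*m/px and y* = (1 - a)*m/py; verify them numerically below.

def demands(a, px, py, m):
    """Cobb-Douglas demand functions from the Lagrangian solution."""
    return a * m / px, (1 - a) * m / py

def utility(x, y, a):
    return x**a * y**(1 - a)

a, px, py, m = 0.3, 2.0, 5.0, 100.0
x_star, y_star = demands(a, px, py, m)

# The budget is exactly exhausted at the optimum.
assert abs(px * x_star + py * y_star - m) < 1e-9

# Any other bundle that exhausts the budget yields (weakly) lower utility.
best = utility(x_star, y_star, a)
for i in range(1, 100):
    x = i * m / (100 * px)          # spend fraction i/100 of income on x
    y = (m - px * x) / py
    assert utility(x, y, a) <= best + 1e-12
```

The design point worth noticing: the consumer spends a fixed fraction `a` of income on good x regardless of prices, which is the special (and very convenient) property of Cobb-Douglas preferences that makes it the default classroom example.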

A red flag for me is when I read that macro models aren't generally reducible to micro models.

This shouldn't be as much of a red flag as it is to most people. Is it a red flag when micro models don't reduce to plausible theories of psychology? Not if it isn't worth the effort of doing micro with said theories. Similarly, there's a trade-off between microeconomic foundations in macro models and actually getting answers out of the models. Often the microeconomic foundations themselves aren't even plausible to begin with. It still might be a red flag based on the details of the tradeoff at the margin, but I'm not sure it's that clear.

Replies from: None
comment by [deleted] · 2012-05-01T22:57:21.556Z · LW(p) · GW(p)

I was just reviewing Mathematics for Economists. While a lot of it sounds fascinating, it's probably not what I need at the moment. Too much of it is over my head. So on second thought, I'll probably just review the first half of Calc 1, learn the second half, and tackle Varian.

On the topic of macro reducing to micro, point taken. I appreciate the clarification.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-05-02T06:06:59.428Z · LW(p) · GW(p)

Good idea. I wouldn't worry about complicated integrals if you're just preparing for Varian. You'll need integration, but I don't recall anything too complicated. It's mainly the differential calculus that you'll need.

comment by Amanojack · 2012-05-02T02:54:44.425Z · LW(p) · GW(p)

Debating with Block would turn any rationalist off of Austrian econ. No one got it completely right except Mises himself. Actually, not even him, but he was usually extremely rational and rigorous in his approach - more so than any other economist I know of - albeit often poorly communicated.

In any case, any non-ideologically motivated rationalist worth their salt ought to be able to piece together a decent understanding of the epistemological issues by reading the first 200 pages of Human Action.

Replies from: Multiheaded, praxis
comment by Multiheaded · 2012-05-02T04:04:50.273Z · LW(p) · GW(p)

No one got it completely right except Mises himself. Actually not even him

Um... and you - alone among the ignorant masses - realize and know all this because...? Sorry, but I don't have a high prior on your authority in the discipline.

Replies from: Amanojack
comment by Amanojack · 2012-05-03T02:09:18.053Z · LW(p) · GW(p)

I would have prefaced that with "in my opinion," but I thought that was obvious. (What else would it be?)

comment by praxis · 2012-05-02T17:14:17.739Z · LW(p) · GW(p)

Actually not even him, but he was usually extremely rational and rigorous in his approach - more than any other economist I know of - albeit often poorly communicated.

Interestingly, this is pretty much what I used to say about Marx when I was a Marxist.

Replies from: Amanojack, Thomas
comment by Amanojack · 2012-05-03T02:16:09.516Z · LW(p) · GW(p)

My point was to indicate that not all people who put stock in the "Austrian school" accept post-Misesians as competent interpreters. I meant, essentially: Mises had it right, but read his original work (not later Austrians) and you'll be able to tell whether I'm right.

comment by Thomas · 2012-05-02T17:27:44.841Z · LW(p) · GW(p)

I used to say about Marx when I was a Marxist.

Is your nick from those times? Or a memory of them?

Replies from: praxis
comment by praxis · 2012-05-02T17:32:09.479Z · LW(p) · GW(p)

Or a memory of them?

Slightly. Of course, the word has been used by many.

comment by sixes_and_sevens · 2012-05-01T23:12:36.055Z · LW(p) · GW(p)

Economics is much bigger than it looks from the outside. People sometimes ask me why I'm studying economics, and my honest answer is "I want to be able to build machines that know how to trust one another".

comment by Swimmy · 2012-05-01T19:11:45.191Z · LW(p) · GW(p)

Experimental economists use cogsci sometimes. Many economists incorporate those findings into models. And you can find Bayesian models in game theory, as alternate equilibrium concepts. But if you're looking for a school of universally Bayesian economists who employ research from cognitive science to make predictions, you won't find them. And I don't really know why it would matter. You won't find many biologists using cogsci rationality either, but that doesn't mean their research findings are false.

Ignore schools of thought entirely and focus on independent empirical/theoretical questions. Use your cogsci rationality skills to differentiate between good and bad arguments and to properly weigh empirical papers. The historical disciplines are largely about politics anyway. The biggest tips for assessing econ are: 1) Most empirical papers are (sometimes necessarily) bad and should only change your priors by a small amount; you should look for overwhelming empirical findings if an argument goes against your (reasonable) priors, and 2) High degrees of consensus are a very good sign. On that second point, most textbooks will be stuff that most economists agree on.

Replies from: None
comment by [deleted] · 2012-05-01T19:56:48.163Z · LW(p) · GW(p)

I found your comment very helpful. Thanks!

But your following point trips me up:

if you're looking for a school of universally Bayesian economists who employ research from cognitive science to make predictions, you won't find them. And I don't really know why it would matter. You won't find many biologists using cogsci rationality either, but that doesn't mean their research findings are false.

Sure, I don't think a biologist studying mitochondria needs to be an expert on cogsci. Not being an expert on cogsci doesn't make the biologist's findings false. Similarly, it doesn't necessarily make the economist's findings false if she isn't well versed in cogsci.

But the reason I'm interested in economists who know cogsci (as opposed to biologists, chemists, or physicists) is that their work directly involves human judgment and decision making under uncertainty. And isn't that precisely what cogsci discusses? Working from a better model of how human beings reason might lead to a better model of how the economy operates. Maybe I'm wrong about that?

Either way, I think your later point suffices to address this.

Ignore schools of thought entirely and focus on independent empirical/theoretical questions. Use your cogsci rationality skills to differentiate between good and bad arguments and to properly weigh empirical papers.

Even if economists aren't well versed in cogsci, if they're making any relevant mistakes, then I'll hopefully catch them when reading.

comment by badger · 2012-05-02T21:49:07.966Z · LW(p) · GW(p)

As another econ grad student and former self-professed Austrian, I'll concur with Matt Simpson. Some economists have a good handle on these topics and others don't, but there aren't clear demarcating lines. Except in macro, there aren't clearly identifiable schools, which is a good sign. By field of study, micro theorists are more likely to use rationality jargon, be familiar with probabilistic logic, and know a few H&B classics like the Allais or Ellsberg paradoxes. Whether micro theorists actually apply this knowledge better than other economists is another question.

If you are interested in macro, check out Snowdon and Vane's Modern Macroeconomics. It presents the full gamut of perspectives, steel-manning mainstream and heterodox schools chapter by chapter.

Replies from: jsalvatier, None
comment by jsalvatier · 2012-05-04T19:24:36.669Z · LW(p) · GW(p)

Out of personal interest: does Modern Macroeconomics discuss the "Monetary Disequilibrium" approach to macro?

Replies from: badger
comment by badger · 2012-05-05T01:45:51.195Z · LW(p) · GW(p)

Assuming you are referring to Austrian-style business cycle theory, the book has a chapter written by Roger Garrison on the subject. While the theory might not be applicable in general, he makes a good case that a boom/bust cycle could be generated by credit expansion.

Replies from: jsalvatier
comment by jsalvatier · 2012-05-05T15:18:05.732Z · LW(p) · GW(p)

Oops, I wasn't clear. Monetary Disequilibrium is "Austrian" but is not the same thing as "Austrian Business Cycle Theory" (I think it's mostly orthogonal and I think some Austrians discuss both as important).

Monetary Disequilibrium theory might more accurately be called a theory of monetary economics rather than a macroeconomic theory.

comment by [deleted] · 2012-05-03T00:30:45.343Z · LW(p) · GW(p)

Hey badger, thanks for the information. All of that is good to hear, especially since I'm mostly interested in micro. Down the line I may study finance, possibly get a CFA.

But if/when the time comes for me to learn advanced macro, I'll be sure to check out Modern Macroeconomics. Steel-manning all the perspectives sounds like it would be very useful to me. Thanks for the suggestion!

comment by Crux · 2012-05-02T02:01:00.965Z · LW(p) · GW(p)

I don't have anywhere near enough time to elaborate on this, but I always feel compelled to respond when anyone mentions Austrian economics. I just want to say--for what it's worth--that even though I'm well-versed in LW-style rationality and epistemology, I consider the work of Ludwig von Mises, and everything that's been an extension thereof, to be in good epistemological standing.

But beware. Mises himself was extremely bad at explaining the epistemological foundation of his work; his attempts were as impenetrable as they were reminiscent of the sort of philosophy most looked down upon on this website. Those who have more than a mere glimmer of understanding of where he was coming from are few and far between, and none of them are popular Austrian economists one would normally run into.

I implore you, and anyone else reading this who's interested, to investigate and scrutinize the epistemological status of the Austrian School not by reading the incompetent, confused regurgitations of the work of a man who himself could hardly do justice to his method, but by analyzing Austrian economic theory itself, and let it stand or fall by its own strength. I know I know, the epistemological commentary makes it sound like religion. It does! But this is merely an epic failure of communication--something (I consider) monumentally unfortunate given the massive importance of what (I believe) this school has to offer to the world.

Replies from: Crux
comment by Crux · 2012-05-02T12:47:10.115Z · LW(p) · GW(p)

That comment was at -2 for several hours, but just now went back to 0. Judging from those two downvotes, some clarification may be in order. I think I may have sounded too confident about my unsubstantiated assertions while not being clear enough about the core issue I was attempting to raise.

What I was trying to bring up is that a school's epistemological commentary and their actual epistemological practice need not necessarily be aligned. There's nothing that says that one must know exactly what one is doing, and furthermore be able to communicate it effectively, to be competent at the task itself.

This, I believe, is the story of the Austrian school. Their actual epistemological practice is in many ways solid, but their epistemological commentary is not. All too many intelligent, scientifically-minded people reject the economic theory because the epistemological theory sounds so ridiculous or pseudoscientific. But what I'm saying is that these people are correct about the latter, but wrong to extend that judgment backward to the former.

What basis does one have for rejecting the epistemological basis of the actual economic theory on the grounds that their epistemological commentary is bad? In what way does one's commentary about what one is doing have that strong of a causal connection with the success of the endeavor itself? Instead, one must let the theory itself stand or fall upon its own strength.

Rather than looking at the economic theory itself, figuring out the epistemological basis (or lack thereof), and then deciding whether it stands on firm epistemological ground, they look to the Austrians to do their research for them. This, I believe, is a mistake. Mises was bad at communicating his epistemology (though I consider it in many ways solid), and others were just plain bad on epistemology. This does not mean the economic theory is (necessarily) on shaky ground.

How did this happen? Isn't studying epistemology a tool for coming up with sound theory? Wouldn't being terrible on epistemology be a huge red flag? Yes, but the basic story is that Mises was good on epistemology, but bad at communicating it. His successors then read and assimilated his economic theory and thus picked up his actual epistemic habits--what he actually did in practice, his mental hygiene patterns, etc.--while misunderstanding his epistemological commentary.

The result is a bunch of people who are good on economic theory, but bad at explaining where exactly all these mental hygiene habits came from or what their epistemological significance is. You could say that they all got there sort of by accident, because they don't really understand why what they're doing is good, but that's beside the point. All that's important is that Mises was a solid thinker, and a lot of people--for whatever reason--picked up where he left off.

Austrian theory would certainly be better if a team of LW-style rationalists could enter the scene and start explaining what the Austrians have failed to. Mises and his mental hygiene habits certainly have had some momentum, but the longer this goes on--the longer the school is dominated by people whose only source of epistemological fortitude is the unconscious assimilation of an old thinker's mental habits--the worse the school will spiral away from its grounded center, until nothing is left of the previous foundation.

It's a tragic situation, to be sure. Austrian economics is as incisive and important at times as it is insane at others, and this is why I would always hesitate to identify myself as a follower of the Austrian school, despite the massive value I believe it has buried behind some of its more visible components.

Replies from: RichardKennaway
comment by RichardKennaway · 2012-05-02T13:59:44.262Z · LW(p) · GW(p)

Their actual epistemological practice is in many ways solid, but their epistemological commentary is not.

How do you know when it's epistemology and when it's just epistemological commentary?

("If you're still alive afterwards, it was just epistemological commentary" -- not quite from The Ballad of Halo Jones)

Replies from: Crux
comment by Crux · 2012-05-02T14:49:36.989Z · LW(p) · GW(p)

Although I don't fully understand the reference, I think I sort of see where it's going.

Either way though, epistemological practice is what one does in coming up with a way of modeling economic activity or anything else, and epistemological commentary is one's attempt to explain the fundamentals of what exactly is going on when one does the former.

In this case, you know it's the result of epistemological practice when it's an actual economic model or whatever (e.g., the Austrian Business Cycle Theory), and you know it's epistemological commentary when they start talking about a priori statements, or logical positivism, or something like that.

Replies from: RichardKennaway
comment by RichardKennaway · 2012-05-03T13:10:49.395Z · LW(p) · GW(p)

In other words, they're batshit crazy, but somehow manage to say some sensible things anyway? I'd be uneasy about assuming that getting the right answers implies that they must be doing something rationally right underneath, and only believe they believe that stuff about economics being an a priori science.

Re the Halo Jones reference: At one point, Halo Jones has joined the army fighting an interstellar war, and in a rare moment of leisure is talking with a hard-bitten old soldier. The army is desperate to get new recruits into the field as fast as possible, and the distinction between training exercises and actual combat is rather blurred. Halo asks her (it's an all-female army), "How do you know if it was combat, or just combat experience?". She replies, "If you're still alive afterwards, it was just combat experience."

Replies from: Crux, Alejandro1
comment by Crux · 2012-05-03T14:09:55.581Z · LW(p) · GW(p)

Far from being batshit crazy, Mises was an eminently reasonable thinker. It's just that he didn't do a very good job communicating his epistemological insights (which was understandable, given the insanely difficult nature of explaining what he was trying to get at), but did fine with enough of the economic theory, and thus ended up with a couple generations of followers who extended his economics rather well in plenty of ways, but systematically butchered their interpretation of his epistemological insights.

People compartmentalize, they operate under obstructive identity issues, their beliefs in one area don't propagate to all others, much of what they say or write is signaling that's incompatible with epistemic rationality, etc. Many of these are tangled together. Yeah, it's more than possible for people to say batshit insane things and then turn around and make a bunch of useful insights. The epistemological commentary could almost be seen as signaling team affiliation before actually getting to the useful stuff.

Just consider the kind of people who are bound to become Austrian economists. Anti-authority etc. They have no qualms with breaking from the mainstream in any way whatsoever. They already think most people are completely batshit insane, and that the world is a joke and is going down the tubes. There's nothing really to constrain them from sounding insane on epistemology. It's not a red flag to them if everyone seems to disagree.

Forget the epistemology. They're just parroting confused secondary accounts of the work of a thinker who himself utterly failed in his endeavor to explain where he was coming from on this topic, and they're parroting it to signal team affiliation, a break from the mainstream, etc. Beliefs don't always propagate throughout the whole web, especially when they're less usefully analyzed as "beliefs" and more as mere words spilled for the purpose of signaling something.

If you read enough and listen to enough of the modern Austrian school (which is a tragically hard prospect given how allergic most LW-style rationalists would be to the presentation and style of argumentation), you'll find that what's going on in the world, or rather what's going so wrong in society, will become incredibly clear, and half of everything will fall into place. It's one of the two major pieces in the puzzle--the other of which may be found on Less Wrong.

Replies from: Mitchell_Porter, None
comment by Mitchell_Porter · 2012-05-03T15:58:01.946Z · LW(p) · GW(p)

Your proposed synthesis of Mises and Yudkowsky(?) is moderately interesting, although your claims for the power and importance of such a synthesis suggest naivete. You say that "what's going so wrong in society" can be understood given two ingredients, one of which can be obtained by distilling the essence of the Austrian school, the other of which can be found here on LW but you don't say what it is. As usual, the idea that the meaning of life or the solution to the world-problem or even just the explanation of the contemporary world can be found in a simple juxtaposition of ideas will sound naive and unbelievable to anyone with some breadth of life experience (or just a little historical awareness). I give friendly AI an exemption from such a judgement because by definition it's about superhuman AI and the decoding of the human utility function, apocalyptic developments that would be, not just a line drawn in history, but an evolutionary transition; and an evolutionary transition is a change big enough to genuinely transform or replace the "human condition". But just running together a few cool ideas is not a big enough development to do that. The human condition would continue to contain phenomena which are unbearable and yet inevitable, and that in turn guarantees that whatever intellectual and cultural permutations occur, there will always be enough dissatisfaction to cause social dysfunction. Nonetheless, I do urge you to go into more detail regarding what you're talking about and what the two magic insights are.

Replies from: Crux, Amanojack
comment by Crux · 2012-05-05T14:57:09.299Z · LW(p) · GW(p)

Oh sorry. I didn't mean that "what's going so wrong in society" is a single piece that can be understood given those two ingredients but is otherwise destined to remain confusing. I meant that what one finds on Less Wrong explains part of what's going so wrong, and Austrian economics (if properly distilled) elucidates the other.

I should clarify though that Less Wrong certainly provides the bigger picture understanding of the situation, with the whole outdated hardware analysis etc., and thus it would be less like two symmetrical pieces being fit together, and more like a certain distilled form of Austrian economics being slotted into a missing section in the Less Wrong worldview.

I also didn't mean to suggest that adding some insight from Less Wrong to some insight from the Austrian school would suddenly reveal the solution to civilization's problems. Rather, what I'm suggesting would just be another step in the process to understanding the issues we face--perhaps even a very large step--and thus would simply put us in a better position to figure out what to do to make it significantly more likely that the future will go well.

Not two magic insights, but two very large collections of knowledge and information that would be very useful to synthesize and add together. Less Wrong has a lot of insights about outdated hardware, cognitive biases, how our minds work and where they're likely to go systematically wrong, certain existential risks, AI, etc., and Austrian economics elucidates something much more controversial: the joke that is the current economic, political, and perhaps even social organization of every single nation on Earth.

As people from Less Wrong, what else should we expect but complete disaster? The current societal structure is the result of tribal political instincts gone awry in this new, evolutionarily discordant situation of having massive tribes of millions of people. Our hardware and factory presets were optimized for hunter-gatherer situations of at most a couple hundred people (?), but now the groups exceed millions. It would be an absolute miracle if societal organization at this point in history were not completely insane. Austrian economics details the insanity at length.

comment by Amanojack · 2012-05-04T06:31:32.274Z · LW(p) · GW(p)

I have also found claims that one or a few simple ideas can solve huge swaths of the world's problems to be a sign of naivety, but another exception is when there is mass delusion or confusion due to systematic errors. Provided such pervasive and damaging errors do exist, merely clearing up those errors would be a major service to humanity. In this sense, Less Wrong and Misesian epistemology share a goal: to eliminate flawed reasoning. I am not sure why Mises chose to put forth this LW-style message as a positive theory (praxeology), but the content seems to me entirely negative in that it formalizes and systematizes many of the corrections economists (even mainstream ones) must have been tired of making. Perhaps he found that people were more receptive to hearing a "competing theory" than to having their own theories covered in red ink.

comment by [deleted] · 2012-05-04T14:55:24.403Z · LW(p) · GW(p)

Considering we already had a post on the epistemic problems of the school, would you be willing to write a post or sequence on what you consider particularly interesting or worthwhile in Austrian economics?

Replies from: Crux
comment by Crux · 2012-05-05T14:57:44.358Z · LW(p) · GW(p)

Yes. May be a while though.

comment by Alejandro1 · 2012-05-03T16:12:30.426Z · LW(p) · GW(p)

A possible analogy for how Crux views Austrian economics might be how most of us view the Copenhagen quantum mechanics of Bohr, Heisenberg et al: excellent science done by top-notch scientists, unfortunately intertwined with a confused epistemology which they thought was essential to the science, but actually wasn't. (I don't know enough about Austrian economics to say if the analogy is at any level correct, but it seems a sensible interpretation of what Crux says.)

comment by Amanojack · 2012-05-02T03:12:05.025Z · LW(p) · GW(p)

Block and Rothbard do not understand Austrian economics and are incapable of defending it against serious rationalist criticism. Ludwig von Mises is the only rigorous rationalist in the "school". His works make mincemeat of Caplan's arguments decades before Caplan even makes them. But don't take my word for it - go back and reread Mises directly.

You will see that the "rationalist" objections Caplan raises are not new. They are simply born out of a misunderstanding of a complex topic. Rothbard, Block, and most of the other "Austrian" economists that followed merely added another layer of confusion because they weren't careful enough thinkers to understand Mises.

ETA: Speaking of Bayesianism, it was also rejected for centuries as being unscientific, for many of the same reasons that Mises's observations have been. In fact, Mises explains exactly why probability is in the mind in his works almost a century ago, and he's not even a mathematician. It is a straightforward application of his Austrian epistemology. I hope that doesn't cause anyone's head to explode.

Replies from: NancyLebovitz, None, Jack
comment by NancyLebovitz · 2012-05-08T08:35:18.840Z · LW(p) · GW(p)

It's been a while since I read Man, Economy, and State, but it seemed to me that Rothbard (and therefore possibly von Mises) anticipated chaos theory. There was a description of economies chasing perfectly stable supply and demand, but never getting there because circumstances keep changing.

comment by [deleted] · 2012-05-02T11:37:57.366Z · LW(p) · GW(p)

In fact, Mises explains exactly why probability is in the mind in his works almost a century ago, and he's not even a mathematician. It is a straightforward application of his Austrian epistemology. I hope that doesn't cause anyone's head to explode.

This intrigues me, could you elaborate?

Replies from: Amanojack
comment by Amanojack · 2012-05-03T02:31:47.482Z · LW(p) · GW(p)

Sure. He wrote about it a lot. Here is a concise quote:

The concepts of chance and contingency, if properly analyzed, do not refer ultimately to the course of events in the universe. They refer to human knowledge, prevision, and action. They have a praxeological [relating to human knowledge and action], not an ontological connotation.

Also:

Calling an event contingent is not to deny that it is the necessary outcome of the preceding state of affairs. It means that we mortal men do not know whether or not it will happen. The present epistemological situation in the field of quantum mechanics would be correctly described by the statement: We know the various patterns according to which atoms behave and we know the proportion in which each of these patterns becomes actual. This would describe the state of our knowledge as an instance of class probability: We know all about the behavior of the whole class; about the behavior of the individual members of the class we know only that they are members. A statement is probable if our knowledge concerning its content is deficient. We do not know everything which would be required for a definite decision between true and not true. But, on the other hand, we do know something about it; we are in a position to say more than simply non liquet or ignoramus. For this defective knowledge the calculus of probability provides a presentation in symbols of the mathematical terminology. It neither expands nor deepens nor complements our knowledge. It translates it into mathematical language. Its calculations repeat in algebraic formulas what we knew beforehand. They do not lead to results that would tell us anything about the actual singular events. And, of course, they do not add anything to our knowledge concerning the behavior of the whole class, as this knowledge was already perfect--or was considered perfect--at the very outset of our consideration of the matter.

Replies from: Jack
comment by Jack · 2012-05-04T07:10:26.078Z · LW(p) · GW(p)

In fact, Mises explains exactly why probability is in the mind in his works almost a century ago, and he's not even a mathematician.

Placing Ludwig in the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century.

And your quote has him taking Richard's exact position:

The present epistemological situation in the field of quantum mechanics would be correctly described by the statement: We know the various patterns according to which atoms behave and we know the proportion in which each of these patterns becomes actual. This would describe the state of our knowledge as an instance of class probability: We know all about the behavior of the whole class; about the behavior of the individual members of the class we know only that they are members.

When he says "class probability" he is specifically talking about this. ...

They do not lead to results that would tell us anything about the actual singular events.

Which is the precise opposite of the position of the subjectivist.

Replies from: Crux, Amanojack
comment by Crux · 2012-05-04T13:47:35.529Z · LW(p) · GW(p)

Placing Ludwig in the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century.

And Ludwig and Richard themselves were arch-enemies. Well, only sort of, but they certainly didn't agree on everything, and the idea that Ludwig simply took his philosophy of probability from his brother couldn't be further from the truth. Ludwig devoted an entire chapter of his magnum opus to uncertainty and probability theory, and I've seen it mentioned many times that this chapter could be seen as his response to his brother's philosophy of probability.

I see what you're saying in your post, but the confusion stems from the fact that Ludwig did in fact believe that frequency probability, logical positivism, etc., were useful epistemologies in the natural sciences, and led to plenty of advancements etc., but that they were strictly incorrect when extended to "the sciences of human action" (economics and others). "Class probability" is what he called the instances where frequency worked, and "case probability" where it didn't.

The most concise quote I could find to make my position seem much more plausible:

Only preoccupation with the mathematical treatment could result in the prejudice that probability always means frequency.

And here's a dump of all the quotes I could find on the topic, reading all of which will make it utterly clear that Ludwig understood the subjectivist nature of probability (emphasis mine, and don't worry about reading much more than just the emphasized portions unless you want to).

First:

Where there is regularity, statistics could not show anything else than that A is followed in all cases by P and in no case by something different from P. If statistics show that A is in x% of all cases followed by P and in (100 − x)% of all cases by Q, we must assume that a more perfect knowledge will have to split up A into two factors B and C of which the former is regularly followed by P and the latter by Q.

Second:

Quantum mechanics deals with the fact that we do not know how an atom will behave in an individual instance. But we know what patterns of behavior can possibly occur and the proportion in which these patterns really occur. While the perfect form of a causal law is: A "produces" B, there is also a less perfect form: A "produces" C in n% of all cases, D in m% of all cases, and so on. Perhaps it will at a later day be possible to dissolve this A of the less perfect form into a number of disparate elements to each of which a definite "effect" will be assigned according to the perfect form. But whether this will happen or not is of no relevance for the problem of determinism. The imperfect law too is a causal law, although it discloses shortcomings in our knowledge. And because it is a display of a peculiar type both of knowledge and of ignorance, it opens a field for the employment of the calculus of probability. We know, with regard to a definite problem, all about the behavior of the whole class of events, we know that class A will produce definite effects in a known proportion; but all we know about the individual A's is that they are members of the A class. The mathematical formulation of this mixture of knowledge and ignorance is: We know the probability of the various effects that can possibly be "produced" by an individual A.

Third:

What the neo-indeterminist school of physics fails to see is that the proposition: A produces B in n% of the cases and C in the rest of the cases is, epistemologically, not different from the proposition: A always produces B. The former proposition differs from the latter only in combining in its notion of A two elements, X and Y, which the perfect form of a causal law would have to distinguish. But no question of contingency is raised. Quantum mechanics does not say: The individual atoms behave like customers choosing dishes in a restaurant or voters casting their ballots. It says: The atoms invariably follow a definite pattern. This is also manifested in the fact that what it predicates about atoms contains no reference either to a definite period of time or to a definite location within the universe. One could not deal with the behavior of atoms in general, that is, without reference to time and space, if the individual atom were not inevitably and fully ruled by natural law. We are free to use the term "individual" atom, but we must never ascribe to an "individual" atom individuality in the sense in which this term is applied to men and to historical events.

Fourth:

Calling an event contingent is not to deny that it is the necessary outcome of the preceding state of affairs. It means that we mortal men do not know whether or not it will happen.

Fifth:

For this defective knowledge the calculus of probability provides a presentation in symbols of the mathematical terminology. It neither expands nor deepens nor complements our knowledge. It translates it into mathematical language. Its calculations repeat in algebraic formulas what we knew beforehand. They do not lead to results that would tell us anything about the actual singular events. And, of course, they do not add anything to our knowledge concerning the behavior of the whole class, as this knowledge was already perfect--or was considered perfect--at the very outset of our consideration of the matter.

Sixth:

A statement is probable if our knowledge concerning its content is deficient. We do not know everything which would be required for a definite decision between true and not true. But, on the other hand, we do know something about it; we are in a position to say more than simply non liquet or ignoramus.

Etc. Probability is in the mind. It is subjective, and dependent upon the current state of knowledge of the observer in question. He seems very clear on this matter.

Back to you:

When he says "class probability" he is specifically talking about this. ...

They do not lead to results that would tell us anything about the actual singular events.

Which is the precise opposite of the position of the subjectivist.

Is it? Let's analyze the full quote:

For this defective knowledge the calculus of probability provides a presentation in symbols of the mathematical terminology. It neither expands nor deepens nor complements our knowledge. It translates it into mathematical language. Its calculations repeat in algebraic formulas what we knew beforehand. They do not lead to results that would tell us anything about the actual singular events. And, of course, they do not add anything to our knowledge concerning the behavior of the whole class, as this knowledge was already perfect--or was considered perfect--at the very outset of our consideration of the matter.

All he's saying is that taking one's knowledge of the behavior of a class of events (while knowing nothing about the behavior of its individual members) and putting it into mathematical notation does not magically reveal anything about those individual members.

For example (taken from that Mises Wiki link), if you know approximately how many houses will catch fire per year in a neighborhood, but you don't know which ones they will be, transforming this knowledge into mathematical probability theory is no more than a potentially more concise way of describing one's current state of knowledge. It of course cannot add anything to what you already knew.
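The translation-not-addition point can be sketched numerically (the figures here are hypothetical, chosen only to mirror the fire-insurance example, and the code is an illustration rather than anything from Mises):

```python
# Class knowledge (hypothetical): in a 100-house neighborhood,
# roughly 3 houses burn down per year.
houses = 100
fires_per_year = 3

# The "calculus of probability" restates this as a per-house figure.
p_fire = fires_per_year / houses  # 0.03

# The restatement is an exact translation: multiplying back out
# recovers precisely the class frequency we started with.
recovered = round(p_fire * houses)  # 3 expected fires

# Nothing in this notation identifies WHICH houses will burn;
# a uniform 3% assigned to every house is just the class
# frequency written differently.
```

The point is that moving between the frequency statement and the probability statement is information-preserving in both directions, which is exactly why it "neither expands nor deepens nor complements our knowledge."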

In fact, this isn't even relevant to the topic at hand. Believe it or not, some people thought probability theory was magical and could help them win at games of chance. This was him responding to that mysticism. I certainly don't see how it makes him not a subjectivist on probability theory, especially when the whole analysis is about states of knowledge etc.

comment by Amanojack · 2012-05-04T10:23:55.009Z · LW(p) · GW(p)

I didn't say he was in the Bayesian camp, I said he had the Bayesian insight that probability is in the mind.

In the final quote he is simply saying that mathematical statements of probability merely summarize our state of knowledge; they do not add anything to it other than putting it in a more useful form. I don't see how this would be interpreted as going against subjectivism, especially when he clearly refers to probabilities being expressions of our ignorance.

comment by Jack · 2012-05-04T07:09:03.246Z · LW(p) · GW(p)

Double post

comment by JoshuaZ · 2012-05-01T14:31:55.804Z · LW(p) · GW(p)

In many artificial rule systems used in games there often turn out to be severe loopholes that allow an appropriate character to drastically increase their abilities and power. Examples include how in Morrowind you can use a series of intelligence potions to drastically increase your intelligence and make yourself effectively invincible, or how in Dungeons and Dragons 3.5 a low-level character can, using the right tricks, ascend to effective godhood in minutes.

So, two questions which sort of pull against each other. First: is this evidence that randomized rule systems that are complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities through loopholes (essentially going FOOM in a general sense)? Second, and in almost the exact opposite direction: such exploits are common in games, and quite a few science fiction and fantasy novels have a character (generally evil) attempt something similar. Less Wrong does have a large cadre of people involved in nerd-literature and the like. Is this aspect of such literature and games acting as fictional evidence in our backgrounds, improperly making such scenarios seem likely or plausible?
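The Morrowind exploit is a simple positive feedback loop: the stat being boosted also governs how strong the next boost is. A toy sketch of why that structure goes superlinear (the function name and all numbers are made up for illustration; this is not the game's actual alchemy formula):

```python
def foom(intelligence, rounds, gain_per_point=0.02):
    """Toy feedback loop: each potion's strength scales with current
    intelligence, and drinking it raises intelligence, so the gain
    compounds each round instead of adding a fixed amount."""
    for _ in range(rounds):
        potion_strength = intelligence * gain_per_point
        intelligence += potion_strength
    return intelligence

# With compounding, 100 rounds at a modest 2% gain per point
# multiplies the stat roughly sevenfold (50 * 1.02**100 ~= 362),
# where a flat +1-per-round bonus would only have reached 150.
final = foom(50, 100)
```

The general pattern is that a loophole becomes game-breaking precisely when an output of the rules can be fed back in as an input, which is the analogy to recursive self-improvement.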

Replies from: gwern, Bill_McGrath, sixes_and_sevens
comment by gwern · 2012-05-01T14:51:20.539Z · LW(p) · GW(p)

First is this evidence that randomized rule systems that are complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities using some sort of loopholes? (essentially going FOOM in a general sense).

Can you analogize this to being Turing-complete? One thing esoteric languages - and security research! - teaches is that the damndest things can be Turing-complete. (For example, return-into-libc attacks or Wang tiles.)

Replies from: David_Gerard
comment by David_Gerard · 2012-05-01T15:36:08.644Z · LW(p) · GW(p)

the damndest things can be Turing-complete

Yep. Which is why letting a domain-specific language reach Turing-completeness is a danger, because when you can do something you will soon have to do it. I've ranted on this before.

Idle speculation: I wonder if this is analogous to the intelligence increase from chimps to humans. Not Turing-completeness precisely, but some similar opening into a new world of possibility, an open door to a previously unreachable area of conceptspace.

comment by Bill_McGrath · 2012-05-03T10:11:22.661Z · LW(p) · GW(p)

or how in Dungeons and Dragons 3.5 a low level character can using the right tricks ascend to effective godhood in minutes.

That is theoretically possible, but ignores Rule Zero. No GM would allow it.

Also, I'm not sure what you mean by "randomized rule systems"; these games are highly designed and highly artificial, not random.

Replies from: None, JoshuaZ
comment by [deleted] · 2012-05-03T11:20:54.182Z · LW(p) · GW(p)

That is theoretically possible, but ignores Rule Zero. No GM would allow it.

Not necessarily. I've allowed things like that. There isn't anything WRONG with your adventurers ascending to Godhood, if that's what they find fun. I had it happen in one meta-campaign world to the point where there was a pantheon made up of nothing but ascended characters (either from the PCs, or from NPCs who ascended using other methods). It made a good way of keeping track of things that had been done and so couldn't be done in future games: (Ah, you can't use Celerity, Timestop, and Bloodcasting to get infinite turns; Celerity was turned into a divine power by your earlier character Neo.)

However, the game sort of runs out of non-hand wavy content at that point, so you just have to make up things like Carnage Endelphia Over-Deities, Mass Produced Corrupted Paragon Dragons, etc.

I even had an official metric: if you can use your powers to beat a single character with an EL 8 higher (the point at which the chart just flat-out says "We aren't giving EXP for this; they shouldn't have been able to do that."), you are ascension-worthy.

It seemed more fun than saying "No, you can't!" And eventually I just stopped planning things out far in advance because I expected a certain amount of gamebreaking from my players.

It's like the mental equivalent of eating cake with a cup of confectioners' sugar on top, though. Eventually even the players sort of get sick of the sweetness and move on to something else. Once they played around with godly power for a bit, they usually got tired of it and we moved on to a new campaign in the meta-campaign world.

But it does still allow you to say "Remember that time we made our own pantheon of gods who clawed their way up from the bottom using a variety of methods?" Which, as memories go, is a neat one to have.

Replies from: Bill_McGrath
comment by Bill_McGrath · 2012-05-03T19:55:34.837Z · LW(p) · GW(p)

Fair enough, though I think that's a special case and most GMs wouldn't be willing to go within a mile of that kind of game play.

It sounds amazingly fun though! Kudos!

comment by JoshuaZ · 2012-05-03T14:15:43.221Z · LW(p) · GW(p)

That is theoretically possible, but ignores Rule Zero. No GM would allow it.

Ok. But Rule Zero is, in this context, essentially a stop-gap on what the actual rules allow. The universe, as far as we can tell, isn't intelligently designed and thus doesn't have a stop-gap feature added in.

Also, I'm not sure what you mean by "randomized rule systems"; these games are highly designed and highly artificial, not random.

The idea here is that even rule systems which are designed to make ascension difficult often still seem to allow it. Still, you are correct that this isn't really at all a sample of randomized rule systems. In that regard, your point is pretty similar to that made by sixes_and_sevens.

comment by sixes_and_sevens · 2012-05-01T14:49:42.553Z · LW(p) · GW(p)

The notion of "loopholes" rests on the idea that rules have a "spirit" (what they were ostensibly created to do) and a "letter" (how they are practically implemented). Finding a loophole is generally considered to be adhering to the letter of the law while breaking the spirit of the law.

In the examples you cite, the spirit of the rules is to promote a fun, balanced game. Making oneself invincible is considered a loophole because it results in an un-fun, unbalanced game. It's therefore against the spirit of the rules, even though it adheres to the letter.

What "spirit" would you be breaking if you suddenly discovered a way to drastically increase your own abilities?

Replies from: JoshuaZ, Vaniver
comment by JoshuaZ · 2012-05-01T15:07:36.296Z · LW(p) · GW(p)

Loophole may have been a bad term to use given the connotation of rules having a spirit. It might make more sense in context to use something like "Surprisingly easy way to make one extremely powerful if one knows the right small things."

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T15:29:12.691Z · LW(p) · GW(p)

I think you're missing my point, though I didn't really emphasise it. Rule systems are artificial constructs designed for a purpose. Game rules in particular are designed with strong consideration towards balance. Both the examples you gave would be considered design failures in their respective games. The reason they are noteworthy is because the designers have done a good job of eliminating most other avenues of allowing a player character to become game-breakingly overpowered.

You ask "is this evidence that randomized rule systems that are complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities using some sort of loopholes?" Most rule systems aren't randomised; if they were they probably wouldn't do anything useful. They're also not interesting on the basis of how complicated they are, but because they've been explicitly designed to engage humans.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-01T15:34:38.509Z · LW(p) · GW(p)

Ah, I see; I didn't understand correctly the first time. Yes, that seems like a very valid set of points.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T15:57:58.920Z · LW(p) · GW(p)

My D&D heyday was 2nd ed, where pretty much any three random innocuous magic items could be combined to make an unstoppable death machine. They've gotten better since then.

comment by Vaniver · 2012-05-02T01:14:13.809Z · LW(p) · GW(p)

What "spirit" would you be breaking if you suddenly discovered a way to drastically increase your own abilities?

That of envy avoidance: rising too high too quickly can also raise ire.

comment by dugancm · 2012-05-05T23:32:52.099Z · LW(p) · GW(p)

I found this person's anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.

A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.

But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more severe. A bug gets everything that it affects wrong. And fixing bugs doesn't improve your performance in a continuous fashion; you can fix a "little" bug and immediately go from getting everything wrong to everything right. You can't really describe the accuracy of a buggy program by the percent of questions it gets right; if you ask it to do something different, it could suddenly go from 99% right to 0% right. You can only define its behavior by isolating what the bug does.

Often, I think mistakes are more like bugs than errors. My clinkers weren't random; they were in specific places, because I had sub-optimal fingerings in those places. A kid who gets arithmetic questions wrong usually isn't getting them wrong at random; there's something missing in their understanding, like not getting the difference between multiplication and addition. Working generically "harder" doesn't fix bugs (though fixing bugs does require work).

Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.

You stop thinking of people as "stupid."

Tags like "stupid," "bad at _", "sloppy," and so on, are ways of saying "You're performing badly and I don't know why." Once you move it to "you're performing badly because you have the wrong fingerings," or "you're performing badly because you don't understand what a limit is," it's no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It's not you, it's the bug.

This also applies to "lazy." Lazy just means "you're not meeting your obligations and I don't know why." If it turns out that you've been missing appointments because you don't keep a calendar, then you're not intrinsically "lazy," you were just executing the wrong procedure. And suddenly you stop wanting to call the person "lazy" when it makes more sense to say they need organizational tools.

"Lazy" and "stupid" and "bad at _" are terms about the map, not the territory. Once you understand what causes mistakes, those terms are far less informative than actually describing what's happening.

Error vs. Bugs and the End of Stupidity
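The error-model/bug-model distinction above can be sketched in code. The "students" and question sets below are hypothetical, chosen only to show the qualitative difference: a noisy performer degrades smoothly everywhere, while a buggy one flips between near-perfect and zero depending on which questions happen to exercise the bug.

```python
import random

random.seed(0)  # reproducible slips

def noisy_student(a, b, error_rate=0.1):
    """Error model: the correct answer plus occasional random slips."""
    answer = a + b
    if random.random() < error_rate:
        answer += random.choice([-1, 1])  # a random, uncorrelated mistake
    return answer

def buggy_student(a, b):
    """Bug model: a deterministic wrong procedure, applied consistently."""
    return a * b  # confuses addition with multiplication

def accuracy(student, questions):
    return sum(student(a, b) == a + b for a, b in questions) / len(questions)

adding_one = [(a, 1) for a in range(1, 101)]  # a*1 != a+1: bug always fires
two_plus_two = [(2, 2)]                       # 2*2 == 2+2: bug is invisible
```

The buggy student scores 0% on the first question set and 100% on the second, so no single "accuracy" parameter describes it; the noisy student hovers near 90% on both.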

comment by dbaupp · 2012-05-01T09:17:53.686Z · LW(p) · GW(p)

I've just uploaded an updated version of my comment scroller. Download here. This update makes the script work correctly when hidden comments are loaded (e.g. via the "load all comments" link). Thanks to Oscar Cunningham for prompting me to finally fix it!

Note: Upgrading on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (Uninstall via Tools > Extensions)


For others who aren't using it: I wrote a small user script that allows you to jump through the green-bordered new comments quickly. More information here.

Replies from: Oscar_Cunningham, NancyLebovitz
comment by Oscar_Cunningham · 2012-05-01T13:56:42.075Z · LW(p) · GW(p)

Yay! Thanks!

comment by NancyLebovitz · 2012-05-02T16:46:46.133Z · LW(p) · GW(p)

Very much a 101 question.... how do I download your program?

Replies from: dbaupp
comment by dbaupp · 2012-05-03T00:22:21.792Z · LW(p) · GW(p)

The program is a short snippet of code in your web browser that runs whenever you visit lesswrong.com. The precise method of installation depends on your web-browser:

  • Firefox: you need to install the Greasemonkey extension, and then just click on the "download here" link above.
  • Google Chrome: just click the "download here" link above.
  • Opera: I think you can just click the "download here" link. (I'm not 100% sure.)
  • Internet Explorer and Safari: this page has links to some help on getting user script support; once you've done that then just click on the "download here" link above.

Once you have got to this stage and clicked the link, a pop-up should appear asking if you want to install this script. It will probably have a warning along the lines of "this script can collect your data on lesswrong.com"; this particular script is safe to install: it doesn't send any information anywhere (or even store anything for longer than you are viewing a specific page).

(I haven't been able to test it in Opera, Safari or Internet Explorer, so there is no guarantee that it will work correctly for them.)

comment by Multiheaded · 2012-05-14T08:15:39.188Z · LW(p) · GW(p)

Yesterday I was lying in bed thinking about the LW community and had a little epiphany, guessing the reason as to why discussions on gender relations and the traditional and new practices of inter-gender choice and manipulation (or "seduction", more narrowly) around here consistently "fail" as people say - that is, produce genuine disquiet and anger on all sides of the discussion.

The reason is that both opponents and proponents of controversial things in this sphere - be it a technical approach to romantic relations ("PUA") or "traditional"/conservative gender relations or polyamory or other such examples - are inevitably almost completely correct in pointing out the most blatant and harmful effects of the opposing view's practices. Put simply, all known solutions in the area of sexuality and gender relations, however they compare to each other, are quite awful on an absolute scale (by the vast majority of moral outlooks). Because of the fundamentally broken and irrational nature of how our psychology interacts with the non-ancestral environment, all of our imperfect arrangements will inevitably produce much psychological and/or social suffering and "dysfunction". I use that word in quotes because, according to the view above, our society is inevitably and innately dysfunctional whatever you do with it.

Yet people, while seeing with some clarity the evils of the opposing view, are in denial about those of their own suggestions - just like most non-transhumanist atheists are in denial about many awful realities of the human condition, chiefly death. They simply refuse to allow themselves the thought that something so bad might be going on with no decent solution in sight. Thus, disquiet and misdirected anger.

Now, transhumanists are better off in this regard [1] because they know that humans can eventually be enhanced, and the vast discrepancies between how we live and what evolution prepared us for, fixed. So I suggest that we don't approach this topic without keeping in mind the possibilities of transhumanism, so as to eliminate the cognitive dissonance in observing problems that can't be overcome at the level they arise on.

My idea could well be mistaken, generated by sudden intuition instead of methodical inquiry as it was; so would you please discuss it and consider it in more detail - but on a meta level at first, without getting into the usual failure mode?

[1] They might have their own issues, sure, but here's a clear advantage for them.

Replies from: None, TheOtherDave, Mitchell_Porter
comment by [deleted] · 2012-05-14T09:18:18.402Z · LW(p) · GW(p)

Before I comment a nitpick:

Because of the fundamentally broken and irrational nature of how our psychology interacts with the non-ancestral environment, all of our imperfect arrangements will inevitably produce much psychological and/or social suffering and "dysfunction"

While I do agree we are worse off in this regard because of the strangeness of the modern world, there is no reason to think nature wouldn't produce some or perhaps quite a bit of social and psychological suffering even with us being perfectly well adapted to our environment.

I mean we don't expect it to do so with physical pain.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-14T09:33:50.327Z · LW(p) · GW(p)

Yes, yes, I agree. By the local standards I might be a bit of a hippie, but the last thing I want to do is demonize the modern life and compare it negatively with the "natural" (mindless & chaos-spawned) alternative. I was merely focusing on the current problem.

comment by TheOtherDave · 2012-05-14T12:46:50.043Z · LW(p) · GW(p)

Well, I certainly agree that the controversial topics you list have the property you describe -- that is, no popular position on them is unflawed.

I don't believe this significantly explains the low light:heat ratio of discussions about those topics, though. There are lots of topics where no popular position on them is unflawed that nevertheless get discussed without the level of emotional investment we see when gender relations or tribal affiliations (or, to a lesser extent, morality) get involved.

That said, it's not especially mysterious that gender relations and tribal affiliations reliably elicit more emotional involvement than, say, decision theory.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-14T13:29:13.274Z · LW(p) · GW(p)

There are lots of topics where no popular position on them is unflawed that nevertheless get discussed without the level of emotional investment we see when gender relations or tribal affiliations (or, to a lesser extent, morality) get involved.

The problem is that the positions on this topic (not just the popular ones, but all the conceivable non-transhumanist ones) are not just "flawed", they're pretty damn horrible, absolutely speaking.

Consider everyone (who's smart enough for it and cares to) unabashedly using "PUA"-style psychological manipulation (not the self-improvement bits there, what they call "inner game" and what's found in all other self-help manuals, but specifically "outer game", internalizing the "marketplace" logic and applying it to their love life) versus things staying as they are, with the sexual status race accelerating and getting more crazy. Clearly, both situations are not just "flawed" but fucking horrible, full of suffering and adversity and shit. That's very easy to imagine, and that's where the tension comes from.

(BTW, privately I'm so disgusted at those "seduction" tricks that it took some willpower not to heap abuse at such practices throughout this comment. Don't talk to me about it.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-14T14:33:39.719Z · LW(p) · GW(p)

To make sure I understand... do you predict that for any question, if a group of people G has a set of possible answers A, and G is attempting to come to consensus on one of those answers, G's ability to cooperate in that effort will anticorrelate (p > .95) with how unpleasant G's expected results of implementing any of A are?

That would surprise me, if so, but it wouldn't vastly shock me. Call it ~.6 confidence that the above is false.

I'm ~.7 confident that G's ability to cooperate in that effort would anticorrelate more strongly with the standard deviation within G of pre-existing individual identifications with political or social entities associated with a particular member of A.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-14T15:11:22.095Z · LW(p) · GW(p)

It's partly so in my opinion. I expect a modest effect like that for most issues, but in a much more dramatic fashion on the most painful problems, where our instincts are highly involved and can easily tell us that all the answers are going to hurt - like sex.

Why else would you think that most of European classical tragic/dramatic literature touches on intimate dissatisfaction/suffering, and irrational behavior in regard to it?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-14T15:24:02.164Z · LW(p) · GW(p)

Because intimate relations are really important to us, so we tell lots of stories about it.
It's also why so many popular stories are about couples getting together and living happily ever after.

comment by Mitchell_Porter · 2012-05-14T08:52:29.748Z · LW(p) · GW(p)

You're saying that technology - tinkering with human biology and human psychology - can supply a technical fix for problems with sex and death. But the imperfection and dysfunction of social and cultural solutions will also extend to technological solutions. Some methods of life extension will be lethal. Some hopes will be deluded. Some scientific analyses of psychology will be wrong, but they will supply the basis of a subculture or a technological intervention anyway.

Rather than discuss it on a meta level first - whatever that means - it would be better if you supplied one or two concrete examples of what you have in mind.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-14T09:01:46.034Z · LW(p) · GW(p)

Rather than discuss it on a meta level first - whatever that means

It means that we should not just start discussing whether e.g. polyamory is good, but instead discuss how we, in practice, think and make value judgments about such things - without dwelling too much on concrete examples.

You're saying that technology - tinkering with human biology and human psychology - can supply a technical fix for problems with sex and death.

I hope that it will, but it might well not, or the cure might be as bad as the disease. That's a useful thought in our current discussions because it puts things in perspective and by contrast illuminates the hard-wired, "inevitable" aspects of baseline humanity; that's what I mean.

But the imperfection and dysfunction of social and cultural solutions will also extend to technological solutions.

Absolutely, but my main point is not that we should wait for 50 years/100 years/the Singularity and it'll all be great, but that we should imagine a "good" condition of people and society that's unachievable by "ordinary" means (e.g. hacking ourselves to negate men's attraction to body shape and women's attraction to tribal chieftains) and use it as an example of a desirable outcome when we're talking policy - because this should allow us to notice the imperfection of all those "ordinary" means we're considering. We should allow ourselves a ray of hope to notice the darkness that we're in.

comment by David_Gerard · 2012-05-01T15:32:37.684Z · LW(p) · GW(p)

Just posted today: a small rant about hagiographic biographers who switch off their critical thinking in the presence of literary effect and a cool story. A case study in smart people being stupid.

comment by PECOS-9 · 2012-05-11T20:19:14.391Z · LW(p) · GW(p)

Has anybody actually followed through on Louie's post about optimal employment (i.e. worked a hospitality job in Australia on a work visa)? How did you go about it? Did you just go there without a job lined up like he suggests? That seems risky. And even if you get a job, what if you get fired after a couple of weeks?

I really like the idea, but I'd also like a few more data points.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2012-05-13T06:54:43.627Z · LW(p) · GW(p)

Well, following through in the sense that my flight is this coming Wednesday (as in, in a few days), actually. :)

I'm going without a job lined up, and I'll find out how it works out. I don't have data points for you so much as "about to perform the experiment".

comment by gRR · 2012-05-02T15:08:40.535Z · LW(p) · GW(p)

Argument for Friendly Universe:

Pleasure/pain is one of the simplest control mechanisms, thus it seems probable that it would be discovered by any sufficiently-advanced evolutionary process anywhere.

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.

Generally, it will succeed. (General intelligence = power of general-purpose optimization.)

Although in a big universe there would exist worlds where unnecessary suffering does not decrease to zero, it would only happen via a long and constantly increasing chain of low-probability coincidences. The total measure of those worlds will tend to zero.

Conclusion: the universe (either big or small) generally operates in such a way as to minimize the unnecessary suffering of all sentient beings.

Generalization: the universe (either big or small) generally operates in such a way as to maximize the values of all sentient beings.

Replies from: Viliam_Bur, shminux, Grognor
comment by Viliam_Bur · 2012-05-02T17:14:14.419Z · LW(p) · GW(p)

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.

Its own pain, probably. Why do you believe it will care about the pain of other beings?

Replies from: gRR, Thomas
comment by gRR · 2012-05-02T17:59:56.566Z · LW(p) · GW(p)

Cooperation with other intelligent beings is instrumentally useful, unless the pain of others is one's terminal value.

Replies from: Viliam_Bur, Matt_Simpson
comment by Viliam_Bur · 2012-05-03T07:18:52.495Z · LW(p) · GW(p)

If one being is a thousand times more intelligent than another, such cooperation may be a waste of time.

Replies from: gRR
comment by gRR · 2012-05-03T10:57:09.143Z · LW(p) · GW(p)

Why do you think so? By default, I think their interaction would run like this: the much more intelligent being will easily persuade/trick the other one to do whatever the first one wants, so they'll cooperate.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-05-03T15:10:30.532Z · LW(p) · GW(p)

Imagine yourself and a bug. A bug that understands numbers up to one hundred, and is even able to do basic mathematical operations, though in 50% of cases it gets the answer wrong. That's pretty impressive for a bug... but how much value would cooperation with this bug provide to you? To compare, how much value would you get by removing such bugs from your house, or by driving your car without caring how many bugs you kill by doing so.

You don't have to want to make the bugs suffer. It's enough if they have zero value for you, and you can gain some value by ignoring their pain. (You could also tell them to leave your house, but maybe they have nowhere else to go, or are just too stupid to find a way out, or they always forget and return.)

Now imagine a being with a similar attitude towards humans. Any kind of human thought or work it can do better, and at a lesser cost than communicating with us. It does not hate us; it can just derive some important value by replacing our cities with something else, or by increasing radiation, etc.

(And that's still assuming a rather benevolent being with values similar to ours. More friendly than a hypothetical Mother-Theresa-bot convinced that the most beautiful gift for a human is that they can participate in suffering.)

Replies from: gRR
comment by gRR · 2012-05-03T16:46:35.290Z · LW(p) · GW(p)

Such a scenario is certainly conceivable. On the other hand, bugs do not have general intelligence. So we can only speculate about how interaction between us and much more intelligent aliens would go. By default, I'd say they'd leave us alone. Unless, of course, there's a hyperspace bypass that needs to be built.

comment by Matt_Simpson · 2012-05-02T19:39:38.193Z · LW(p) · GW(p)

The conclusion doesn't follow. Ripping apart your body to use the atoms to construct something terminally useful is also instrumentally useful.

Replies from: gRR
comment by gRR · 2012-05-02T19:44:51.626Z · LW(p) · GW(p)

Only if there's general lack of atoms around. When atoms are in abundance, it's more instrumentally useful to ask me for help constructing whatever you find terminally useful.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-05-02T19:55:27.591Z · LW(p) · GW(p)

Right, but your conclusion still doesn't follow - my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.

Replies from: gRR
comment by gRR · 2012-05-02T20:09:49.515Z · LW(p) · GW(p)

Well, of course. But which of my conclusions do you mean doesn't follow?

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-05-02T20:44:34.714Z · LW(p) · GW(p)

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain [of others] away.

Replies from: gRR
comment by gRR · 2012-05-02T21:02:42.777Z · LW(p) · GW(p)

But the "[of others]" part is unnecessary. If every intelligent agent optimizes away their own unnecessary pain, it is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there's a serious lack of atoms for all agents to achieve their otherwise non-contradicting goals.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-05-02T21:17:15.745Z · LW(p) · GW(p)

If every intelligent agent optimizes away their own unnecessary pain, it is sufficient for the conclusion.

This is highly dependent on the strategic structure of the situation.

comment by Thomas · 2012-05-02T17:35:41.312Z · LW(p) · GW(p)

Since I would care, I think other intelligences could care also. One who cares might be enough to free us all from the pain. A billion who don't care are not enough to preserve the pain.

comment by shminux · 2012-05-02T15:59:23.856Z · LW(p) · GW(p)

I'd be interested in seeing you playing a Devil's advocate to your own position and try your best to counter each of the arguments.

Replies from: gRR
comment by gRR · 2012-05-02T16:35:05.571Z · LW(p) · GW(p)

Fair enough :)

Counterarguments:

The rate of appearance of new suffering intelligent agents may be higher than the rate of disappearance of suffering due to optimization efforts.

A significant number of evolved intelligent agents may have directly opposing values.

The power of general intelligence may be greatly exaggerated.

Replies from: Thomas, shminux
comment by Thomas · 2012-05-02T16:49:20.700Z · LW(p) · GW(p)

The power of general intelligence may be greatly exaggerated.

I rather think that the power of general intelligence is greatly underestimated. Don't misunderestimate!

Replies from: gRR
comment by gRR · 2012-05-02T18:05:39.441Z · LW(p) · GW(p)

The probability of a general intelligence destroying itself because of errors of judgement may be large. This would mean that "the power of general intelligence is greatly exaggerated" - nonexistent intelligence is unable to optimize anything anymore.

comment by shminux · 2012-05-02T16:49:00.587Z · LW(p) · GW(p)

Which side do you find more compelling and why?

Replies from: gRR
comment by gRR · 2012-05-02T18:02:05.455Z · LW(p) · GW(p)

What's your opinion?

Replies from: shminux
comment by shminux · 2012-05-02T19:43:00.506Z · LW(p) · GW(p)

Pleasure/pain is one of the simplest control mechanisms, thus it seems probable that it would be discovered by any sufficiently-advanced evolutionary process anywhere.

What other mechanisms have you compared it to?

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away... Generally, it will succeed. (General intelligence = power of general-purpose optimization.)

How do you define "pain" in a general case? How does one define unnecessary pain? Does boredom count as a necessary pain? How far in the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?

Replies from: gRR
comment by gRR · 2012-05-02T20:01:16.367Z · LW(p) · GW(p)

What other mechanisms have you compared it to?

To a lack of any.

How do you define "pain" in a general case?

Sharp negative reinforcement in a behavioristic learning process.

How does one define unnecessary pain?

Useless/inefficient for the necessary learning purposes.

Does boredom count as a necessary pain?

Depends on the circumstances. When boredom is inevitable and there's nothing I can do about it, I would prefer to be without it.

How far in the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?

Same time range in which my utility function operates.

(EDIT: I'm sorry, I should have asked you for your own answers to your questions first. Stupid me.)

comment by Grognor · 2012-05-10T16:16:07.731Z · LW(p) · GW(p)

Do you actually buy this? I don't have the spoons or the time to refute it point-by-point, but I think it's completely, maybe even obviously and overdetermined-ly wrong, if a somewhat interesting idea.

Replies from: gRR
comment by gRR · 2012-05-10T20:43:23.634Z · LW(p) · GW(p)

I wrote it for novelty value, although it seems to be a defensible position. I can think of counterarguments, and counter-counterarguments, etc. Of course, if you are not interested and/or don't have time, you shouldn't argue about it.

Thanks for the "spoons" link, a great metaphor there.

comment by sixes_and_sevens · 2012-05-01T12:38:57.508Z · LW(p) · GW(p)

I'm trying to put together an aesthetically pleasing thought experiment / narrative, and am struggling to come up with a way of framing it that won't attract nitpickers.

In a nutshell, the premise is "what similarities and differences are there between real-world human history and culture, and those of a different human history and culture that diverged from ours at some prehistoric point but developed to a similar level of cultural and technological sophistication?"

As such, I need some semi-plausible way for the human population to be physically divided ~10,000 years ago with no cross-cultural contamination, and for both sides of the divide to each develop into a "global" culture, with one being very much like ours, and speculation of the nature of the other being the point of the thought experiment.

Current contenders are:

Aliens build a really huge impenetrable wall round the equator

Nitpicks are "what about air and space travel?" I could set the narrative in the early 20th century, when we're only just developing means of circumventing the wall, which also frames the "what is the world like on the other side of the wall?" speculation. The trouble is that a lot of interesting cultural, scientific and technological developments have happened in the past century, and it's hard to speculate if they've also occurred on the other side of the wall if they haven't occurred on this one. It also smacks a little too much of alternative histories, with it being a tremendous strain on credulity to claim "this side" of the wall developed in line with real-world history in spite of a hemisphere being missing.

Aliens (who clearly have nothing better to do than fuck around with prehistoric humans) take a bunch of humans and put them on a Counter-Earth

Of course, they then have to replicate the biosphere of Earth, somehow retro-engineer the existence of fossil fuels, and populate it with a biodiverse ecology of other Earth life for the humans to eat; really, they may as well have started several hundred million years earlier, by which point whatever life is on the Counter-Earth is just going to be classical aliens rather than human beings, because transplanting them into an ecological niche they're not optimised for is just asking for trouble.

So yes, a separate 10,000 years of human history, completely unrelated to ours, that we haven't interacted with until now. How do I frame it in a narrative that won't let people pick it to pieces without addressing what it's trying to get them to think about?

Replies from: RolfAndreassen, Vaniver, Armok_GoB, drethelin
comment by RolfAndreassen · 2012-05-01T17:57:08.042Z · LW(p) · GW(p)

It's not clear to me why you don't just appeal to Many Worlds, or more generally to alternate histories. These are fairly well-understood concepts among the sort of people who'd be interested in such a thought experiment. Why not simply say "Imagine Carthage had won the Punic Wars" and go from there?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T22:29:28.565Z · LW(p) · GW(p)

I'm beginning to doubt my motives for this line of thinking, but I'm not abandoning it altogether.

The trouble with alternate histories is as soon as you say "imagine so-and-so won such-a-war", people start coming up with stories that lead them to a very specific idea about what such a world would be like. I imagine your appeal to imagine Carthage winning the Punic Wars would involve someone picturing a world practically identical to ours, only retro-fitted with Carthaginian influences instead of Roman ones.

I also feel (and it is a feeling I have trouble substantiating) that when posed with a question like "there's another society of humans over there; do they have [x]?", it's a much more straightforward pragmatic question to address than "in an alternate history where such-a-thing happened, do they have [x]?"

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-05-03T04:29:48.216Z · LW(p) · GW(p)

I see your point. Perhaps you could try to appeal to non-specific alternate histories? Not "imagine Carthage wins" but "imagine a butterfly zigged instead of zagging on August 3rd, 5823 BCE".

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-03T10:11:56.628Z · LW(p) · GW(p)

Does that not sound like a super-abstract question to you?

I recognise it as asking pretty much exactly the same question as "an alternate several-thousand years of human history has taken place concurrent to, but separate from, our own; what's it like?", but the Many Worlds appeal is like saying "here is a blank canvas where anything can happen", while the equatorial wall or counter-earth scenario is like saying "here is a situation: how do you deal with it?"

I think that's what I meant by Many Worlds being too open-ended in my response to drethelin.

comment by Vaniver · 2012-05-01T15:52:52.540Z · LW(p) · GW(p)

As such, I need some semi-plausible way for the human population to be physically divided ~10,000 years ago with no cross-cultural contamination, and for both sides of the divide to each develop into a "global" culture, with one being very much like ours, and speculation of the nature of the other being the point of the thought experiment.

So, this actually happened, right? At least, 95% of it. You could give the New World a few advantages (like more animals that are easy and useful to domesticate) and speculate other ways for them to develop.

Keeping parts of the world separated after you have ocean-faring ships and air travel seems hard / implausible / you can make a similarly interesting worlds collide experience without needing the first contact to be now-ish.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T16:05:16.270Z · LW(p) · GW(p)

It's not my intention to write a piece of fiction. It's a thought experiment I am trying to prettify. I want to ask questions like "would they have something like women's lib on the other side?" or "would they have public key cryptography?" or "what would their art have in common with our art?"

I am quite surprised to find "prettify" is already in Chrome's spell check dictionary.

Replies from: Vaniver
comment by Vaniver · 2012-05-01T16:18:49.720Z · LW(p) · GW(p)

Are you interested in what the cultures / economics / politics look like, or are you interested in what the technologies look like? It seems to me that stuff like public key cryptography is in some sense the optimal answer to an engineering problem- and so if you have the problem and the engineering skill, then you will find that answer eventually.

For the cultures / economics / politics, then it depends on your view of history. Would the idea of liberty have happened the same way without a New World to expand into? It's really not clear. Could you have an Enlightenment that is politically traditionalist while being culturally and economically radical?

If you're interested in those sorts of questions, it seems like you're better off directly trying to build good models of the cultural / economic / political shifts and memes than you are trying to imagine the outcomes of a general thought experiment.

[Edit] You may be interested in phrasing things as "What would have to change to result in an Enlightenment that is politically traditionalist while being culturally and economically radical?" to build those models and constrain the deviation from reality.
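The public-key-cryptography point above, that the math is sitting there waiting to be found by anyone with the problem and the skill, can be made concrete with a toy Diffie-Hellman exchange. The parameters below are tiny and insecure, chosen purely for illustration:

```python
# Toy Diffie-Hellman key exchange: both parties derive the same
# shared secret without ever transmitting it. The parameters are
# tiny and insecure, chosen purely for illustration.
p = 23   # a small public prime (real systems use ~2048-bit primes)
g = 5    # a public generator

a = 6    # Alice's private key
b = 15   # Bob's private key

A = pow(g, a, p)          # Alice sends g^a mod p
B = pow(g, b, p)          # Bob sends g^b mod p

secret_alice = pow(B, a, p)   # (g^b)^a mod p
secret_bob = pow(A, b, p)     # (g^a)^b mod p

assert secret_alice == secret_bob  # same secret, derived independently
print(secret_alice)  # -> 2
```

The underlying fact, that exponentiation commutes while discrete logarithms are hard, is the same in any society's mathematics; what would differ is whether and when they hit the engineering problem that makes it worth noticing.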

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T16:42:46.258Z · LW(p) · GW(p)

The broader point of the thought experiment is "is [artefact X] an accident of history or is it somehow inevitable that humans will end up with [artefact X]?"

More pointedly, when looking at various academic works and disciplines, I've been using it as an intuition pump for the question "are you describing something present in all human environments, describing aspects of our history, or just making stuff up?"

I have privately been using it to my own satisfaction for about six months. I'm trying to come up with a way of aesthetically presenting it to other people in such a way that they won't get bogged down in how a separate 10,000 years of human history, with different humans, has happened somewhere.

Replies from: Vaniver, NancyLebovitz
comment by Vaniver · 2012-05-01T20:57:23.029Z · LW(p) · GW(p)

The broader point of the thought experiment is "is [artefact X] an accident of history or is it somehow inevitable that humans will end up with [artefact X]?"

Right, and I think the question (that I put in an edit) of "what would have to change for X to (not) have happened?" is relatively good at answering that question for X. It seems to me like to not get public key cryptography you would need math to be different, but to not get women's lib you would need either biology to be different or the idea of personal autonomy to not have become a cultural hegemon, both of which could have been the case (and point to where to look for why they weren't).

Replies from: Viliam_Bur, sixes_and_sevens
comment by Viliam_Bur · 2012-05-02T12:20:41.217Z · LW(p) · GW(p)

It seems to me like to not get public key cryptography you would need math to be different

Just because the equations would have to be the same does not mean the other society would know them and use them like we do. Maybe they don't have Internet yet. Maybe their version of Internet has some (weaker) form of cryptography in the lower layers, so inventing cryptography for higher layers did not feel so necessary. Maybe they researched quantum physics before Internet, so they use quantum cryptography. Or at least they can use different kinds of functions for private/public key pairs.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-03T10:13:13.312Z · LW(p) · GW(p)

This is the sort of reasoning I'm looking to generate.

comment by sixes_and_sevens · 2012-05-01T22:19:52.578Z · LW(p) · GW(p)

I think that question is better for more thorough analysis but less good as an intuition pump.

I'm now trying to figure out whether I find the does-the-alternate-human-society-have-it more tractable as a way of thinking about it, or whether I'm simply attached to it. The question "there's another society of humans over there: do they have [x]?" certainly seems a lot easier to me than "what needs to have happened for this counterfactual to be true?"

comment by NancyLebovitz · 2012-05-08T08:14:50.829Z · LW(p) · GW(p)

I recently ran into the question of whether photography would inevitably lead to loss of interest in representational art.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-08T09:43:11.694Z · LW(p) · GW(p)

Depends on what you mean by "interest", presumably. I don't think people have necessarily lost interest in live music since the inception of recorded music; they just have a cheaper substitute for it.

comment by Armok_GoB · 2012-05-05T22:27:23.356Z · LW(p) · GW(p)

The universe glitched, and an exact duplicate of the entire solar system appeared two light-years to the right? Contact happens when radio telescopy is invented. Divergence starts from a new star appearing in opposite places in each one's sky.

comment by drethelin · 2012-05-01T12:43:59.115Z · LW(p) · GW(p)

this is the premise of Hominids.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T12:53:03.254Z · LW(p) · GW(p)

I was ignorant of this novel until about five minutes ago. As a result, I'm still pretty ignorant about it.

That seems to be an implementation of something like this scenario using an alternate reality sci-fi trope. I really want to avoid Sliders-style alternate realities because they're (a) too open-ended, and (b) too heavily influenced by existing fiction on the subject.

Replies from: drethelin
comment by drethelin · 2012-05-01T13:19:04.245Z · LW(p) · GW(p)

In what way is an alternate separate earth population functionally different from an alternate universe? You say you're trying to avoid a scifi scenario but your two proposals are already pretty silly scifi.

If open-endedness is a problem, simply limit your universes to 2, like in Hominids.

Also, it would be easier to give recommendations if I knew what argument you were trying to win with this thought experiment.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-01T13:33:54.700Z · LW(p) · GW(p)

I'm not trying to win any arguments. I'm trying to reason about artefacts of human culture that are parochial (accidents of history) or human-universal (practically inevitable products of human history). More to the point, I'm trying to equip other people with tools to reason in a similar fashion.

I'm also not trying to avoid sci-fi scenarios, but I am trying to avoid scenarios which have such a long history as a sci-fi trope that they will inevitably influence people's intuitions.

I'm not writing a story (although I do want to frame the thought experiment as a fictional narrative). I'm not writing specific details about what's on the other side of the wall / solar system / interdimensional gateway. The whole point of the thought experiment is that we don't know what's on the other side, apart from the fact that it contains a bunch of humans with as much chronological history as us. Based on that knowledge, what can we reason about them?

comment by beoShaffer · 2012-05-01T05:53:17.285Z · LW(p) · GW(p)

What costs/benefits are there to pursuing a research career in psychology, both from a personal perspective and in terms of societal benefit?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-01T06:08:21.544Z · LW(p) · GW(p)

When assessing societal benefit, consider: are you likely to increase the total number of research psychologists, or just increase the size of the pool from which they are drawn? See Just what is ‘making a difference’? - Counterfactuals and career choice on 80000hours.org.

The decision of what career to pursue is one of the largest you will ever take. The value of information here is very high, and I recommend putting a very large amount of work and thought into it - much more than most people do. In particular there is a great deal of valuable stuff on 80000hours.org - it's worth reading it all!

comment by [deleted] · 2012-05-15T10:05:50.361Z · LW(p) · GW(p)

John Derbyshire on Ridding Myself of the Day

I used to console myself with the thought that at least I’d been reading masses of news and informed opinion, making myself wiser and better equipped to add my own few cents to the pile. This is getting harder and harder to believe. There’s something fleeting, something trivializing about the Internet. I think what I have actually done is wasted five or six perfectly good hours when I could have been working up a book proposal, fixing a side door in the garage, doing bench presses, or…or…reading a novel.

...

I should work out a Plan of Life. No Internet after 11 AM! Two good solid books a week, one of them fiction! Regular daily exercise with free weights! Two hours set aside for household chores!

Yet no sooner do I form the idea than despair and fatalism set in. If I were the kind of person to stick to a discipline like that, I would have done so long since. And hey, at least I’m not that guy on the train, drawn irresistibly, twitching, to his microscopic toy. I don’t watch TV, either, surely saving myself major brain rot right there. And I’m one of the dwindling number of American males who occasionally reads fiction (an almost exclusively female-readership zone nowadays, according to publishers’ lore).

...

Human beings weren’t made to work, or think much, or read much. Of our Paleolithic ancestors, Cochran and Harpending remark in The 10,000 Year Explosion that: “If they had full stomachs and their tools and weapons were in good shape…they hung out: They talked, gossiped, and sang.”

If they’d had smartphones, they’d have been twiddling.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-18T10:49:57.033Z · LW(p) · GW(p)

By the way, here's my enlightened opinion on the recent... controversy (what a contrast between the word's neutral blandness and its meaning) featuring that guy:

He's not really a "racist" at all. He does not have any hatred, irrational or otherwise, of other ethnicities. He's just a bit of an asshole - or more than a bit. He's very protective of his in-group and very insensitive to everyone outside it - and flaunts it, at an age when he should really know better. He appears to be not the type of person that we want to encourage in civilized society. I certainly wouldn't care to meet him. But firing him just for being an asshole was stupid.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-18T15:22:40.707Z · LW(p) · GW(p)

Your implicit assertion that hating other ethnicities is a necessary condition for meriting the label "racist" is not universally accepted.

Replies from: None, Multiheaded
comment by [deleted] · 2012-05-18T17:22:25.896Z · LW(p) · GW(p)

Considering what a horrible can of worms the definition of that word is and that "racist" represents a strong political and debating weapon against any enemy, I think society would be much helped to adopt a rationalist taboo on it. Even LessWrong discussions would be improved by this I think.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-18T17:29:08.934Z · LW(p) · GW(p)

Yeah, I'm inclined to agree.

comment by Multiheaded · 2012-05-18T15:48:36.052Z · LW(p) · GW(p)

Yeah, sure. I was going from the minimal (= most right-wing) definition still accepted in polite society today, because I don't want to hear someone complaining that leftists and postmodernists and Jews are overcomplicating things, or whatever.

Replies from: CharlieSheen
comment by CharlieSheen · 2012-05-21T09:55:35.828Z · LW(p) · GW(p)

leftists and postmodernists and Jews are overcomplicating things, or whatever.

Nonsense! No one here would say such a thing. There are no Jews on LessWrong.

comment by NancyLebovitz · 2012-05-12T16:02:17.787Z · LW(p) · GW(p)

The idea that a stone falls because it is 'going home' brings it no nearer to us than a homing pigeon, but the notion that 'the stone falls because it is obeying a law' makes it like a man, and even a citizen.

--C. S. Lewis

Is it a problem to think of matter/energy as obeying laws which are outside of itself? Is it a problem to think of it as obeying more than one law? Is mathematics a way of getting away from the idea of laws of nature? Is there a way of expressing behaviors as intrinsic to matter/energy in English? Is there anything in the Sequences or elsewhere on the subject?

Replies from: arundelo, None
comment by arundelo · 2012-05-12T17:36:36.085Z · LW(p) · GW(p)

I'm not sure what Lewis is trying to say here, but the physical science meaning and the legal meaning of "law" are different enough that I think it's better to consider them different words that are spelled the same (and etymologically related of course). Which means he's making a pun.

Replies from: None
comment by [deleted] · 2012-05-12T17:51:59.800Z · LW(p) · GW(p)

I think it does make sense to consider them as particular cases of a more general concept, after all. Grammatical rules and the rules of chess would be other instances, somewhere in between.

Replies from: arundelo
comment by arundelo · 2012-05-12T23:57:48.161Z · LW(p) · GW(p)

They are all regularities, but laws of physics are regularities that people notice (or try to notice), while legal laws and chess rules are regularities that people impose. (Grammar rules as linguists study them are more like physics; grammar rules as language teachers teach them are more like chess rules.)

Replies from: None
comment by [deleted] · 2012-05-13T09:14:10.214Z · LW(p) · GW(p)

OK... let's add one more intermediate point and consider the laws of a cellular automaton. I can see analogies both between them and the laws of our universe¹ and the analogies between them and the rules of chess.

  1. And mathematical realists à la Tegmark would see them even more easily than me.
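To make the cellular-automaton comparison concrete, here is a minimal elementary-CA stepper (a sketch; Rule 110 is used only as a sample rule). The rule is imposed from outside, like a chess rule, yet inside the automaton it behaves like a law of physics that every cell "obeys":

```python
# One step of an elementary cellular automaton on a circular row.
# The rule number (Wolfram convention) encodes the new state for
# each 3-cell neighbourhood; Rule 110 is used only as a sample "law".
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0, 0, 0, 1, 0, 0, 0]
print(step(row))  # -> [0, 0, 1, 1, 0, 0, 0]
```

From inside the automaton the rule is a noticed regularity; from outside it is an imposed one, which is exactly the ambiguity being discussed.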
comment by [deleted] · 2012-05-12T18:08:00.672Z · LW(p) · GW(p)

Is there anything in the Sequences or elsewhere on the subject?

http://lesswrong.com/lw/39p/a_sense_of_logic/6gtm

comment by [deleted] · 2012-05-04T14:11:16.089Z · LW(p) · GW(p)

A discussion on the LessWrong IRC channel about how to provide an incentive for the math-phobic aspiring rationalists on LW to learn the basic mathematics of cool stuff (here is the link to a discussion of that idea) gave us another one.

The Sequences are long

Longer than The Lord of the Rings. There is a reason RationalWiki translates our common phrase "read the sequences" as "f##k you". I have been here for nearly two years and I still haven't read all of them systematically. And even among people who have read them, how much will they recall a few months later? How much are they likely to end up using? To alleviate all of this, a proposal was floated to reward people who read the sequences and demonstrate a reasonable knowledge of their contents. The karma reward can easily be generated with a post like this:

"I have completed reading the Quantum Mechanics sequence and have demonstrated some knowledge of its contents. Please keep this post at 50 karma. Read here for explanation. Confirmation by poster XYZ here. "

People who are having trouble reading the sequences or aren't sure they have understood them properly can team up with volunteers willing to quiz them. This serves both as an overview and as confirmation that they did indeed read what they set out to read. But where to get the questions? Simple: I think we should use the already prepared Anki decks (flashcards). But Konkvistador, doesn't this mean someone can just download the appropriate Anki deck, memorize it, and then get away with claiming he has read that particular sequence? Why yes, it does. But if he learned from the Anki decks well enough to fool the quizzer, does it matter?

Cheating is a slightly bigger concern. Keeping the rewards reasonable (not too high) might help. I suspect keeping the quizzing via video chat instead of lowering the bandwidth to that of a phone-call or text only will provide a bigger psychological barrier to cheating. It also makes it marginally harder.

Two positive side effects:

  • If every week a person is volunteering to quiz hopefuls, people who want to listen in or have a live discussion know someone will be there. This will encourage socialization among LWers. Even if it is just the hopeful and the volunteer, it serves as a nice excuse to make the acquaintances needed to turn the often-proposed regular virtual meetups (say, via a Google+ hangout) into a reality.

  • People will probably end up using the Anki cards to harness the magic of spaced repetition to learn LWish or other material. New decks on relevant subjects are likelier to be made and shared.

Replies from: TrE
comment by TrE · 2012-05-04T14:25:06.167Z · LW(p) · GW(p)

Of course, the flashcards are not the only way to test the student's knowledge. If the volunteer puts in some effort, he should be able to come up with his own questions, and if he knows the sequence well enough, he can ask questions and discuss the matter with the student freely. This would ensure that the student has actually understood the sequence posts he was trying to learn, and did not simply memorize them.

In the end, the instructor simply has to judge whether the student read and understood the sequence he wanted to learn, and how he does this doesn't matter that much, IMO. A good and reliable method could be worked out in detail while the first trials are running.

If this idea gets approval, the next thing to do would be trying it out!

Replies from: None
comment by [deleted] · 2012-05-04T14:28:06.525Z · LW(p) · GW(p)

Yes, it needs to be emphasised that the Anki idea was floated just as a ready-made question set or notes for the person doing the testing.

comment by XiXiDu · 2012-05-03T19:17:34.603Z · LW(p) · GW(p)

...the outstanding feature of any famous and accomplished person, especially a reputed genius, such as Feynman, is never their level of g (or their IQ), but some special talent and some other traits (e.g., zeal, persistence). Outstanding achievement(s) depend on these other qualities besides high intelligence. The special talents, such as mathematical, musical, artistic, literary, or any other of the various “multiple intelligences” that have been mentioned by Howard Gardner and others are more salient in the achievements of geniuses than is their typically high level of g. Most very high-IQ people, of course, are not recognized as geniuses, because they haven’t any very outstanding creative achievements to their credit. However, there is a threshold property of IQ, or g, below which few if any individuals are even able to develop high-level complex talents or become known for socially significant intellectual or artistic achievements. This bare minimum threshold is probably somewhere between about +1.5 sigma and +2 sigma from the population mean on highly g-loaded tests.

Childhood IQs that are at least above this threshold can also be misleading. There are two famous scientific geniuses, both Nobelists in physics, whose childhood IQs are very well authenticated to have been in the mid-130s. They are on record and were tested by none other than Lewis Terman himself, in his search for subjects in his well-known study of gifted children with IQs of 140 or above on the Stanford-Binet intelligence test. Although these two boys were brought to Terman’s attention because they were mathematical prodigies, they failed by a few IQ points to meet the one and only criterion (IQ > 139) for inclusion in Terman’s study. Although Terman was impressed by them, as a good scientist he had to exclude them from his sample of high-IQ kids. Yet none of the 1,500+ subjects in the study ever won a Nobel Prize or has a biography in the Encyclopedia Britannica as these two fellows did. Not only were they gifted mathematically, they had a combination of other traits without which they probably would not have become generally recognized as scientific and inventive geniuses. So-called intelligence tests, or IQ, are not intended to assess these special abilities unrelated to IQ or any other traits involved in outstanding achievement. It would be undesirable for IQ tests to attempt to do so, as it would be undesirable for a clinical thermometer to measure not just temperature but some combination of temperature, blood count, metabolic rate, etc. A good IQ test attempts to estimate the g factor, which isn’t a mixture, but a distillate of the one factor (i.e., a unitary source of individual differences variance) that is common to all cognitive tests, however diverse.

Jensen on g and genius

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-05-04T09:39:45.963Z · LW(p) · GW(p)

How statistically sound is this? I agree that most high-IQ people are not outstanding geniuses, but neither are most non-high-IQ people. This only proves that high IQ alone is no guarantee of great achievements.

I suspect a statistical error: ignoring the low prior probability that a human has very high IQ. Let me explain it by analogy -- you have 1000 white boxes and 10 black boxes. The probability that a white box contains a diamond is 1%. The probability that a black box contains a diamond is 10%. Is it better to choose a black box? Well, let's look at the results: there are 10 white boxes with a diamond and only 1 black box with a diamond... so perhaps choosing a black box is not such a great idea; perhaps there is some other mysterious factor that explains why most diamonds end up in the white boxes? No, the important factor is that a random box has only a 0.01 prior probability of being black, so even the 1:10 ratio is not enough to make the black boxes contain the majority of diamonds.

The higher the IQ, the less people have it, especially for very high values. So even if these people were on average more successful, we would still see more total success achieved by people with not so high IQ.

(Disclaimer: I am not saying that IQ has a monotonic impact on success. I'm just saying that seeing most success achieved by people with not-so-high IQ does not disprove this hypothesis.)
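The box arithmetic above can be verified with a quick calculation using only the numbers stated in the comment:

```python
# Diamond counts per box colour, using the numbers from the comment.
n_white, p_white = 1000, 0.01   # 1000 white boxes, 1% contain a diamond
n_black, p_black = 10, 0.10     # 10 black boxes, 10% contain a diamond

diamonds_white = n_white * p_white   # expected diamonds in white boxes
diamonds_black = n_black * p_black   # expected diamonds in black boxes

prior_black = n_black / (n_white + n_black)  # chance a random box is black

print(round(diamonds_white), round(diamonds_black), round(prior_black, 4))
# -> 10 1 0.0099
```

So black boxes are ten times better per box, yet white boxes still hold ten of the eleven expected diamonds: exactly the base-rate effect the comment describes.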

Replies from: gwern, private_messaging
comment by gwern · 2012-06-25T22:05:15.440Z · LW(p) · GW(p)

It's interesting how one can read the excerpt in two different ways:

  1. "wow, IQ isn't all it's cracked up to be, look at how none of the sample won Nobels but two rejected did"
  2. "wow, in this tiny sample of a few hundred kids, they used a test which was so accurate in predicting future accomplishment that if the sample had been just a little bit bigger, it would have picked up two future Nobels - people whose level of accomplishment are literally one in millions, and it does this by only posing some boring puzzles without once looking at SES, personality, location, parents, interests, etc!"
comment by private_messaging · 2012-06-27T06:33:50.433Z · LW(p) · GW(p)

Good point.

Also, on a typical test I'd expect a well-educated, moderately high-IQ person to have a 100% success rate on everything that's strongly related to intelligence. So at the top range the differences are driven by the parts that have a much less direct relation (e.g. verbal, guess-the-next-in-sequence, etc.). Correlation assumes a line, but the real relation we should expect would be more like a sigmoid, as the relevant parts of the test saturate. Furthermore, an IQ test doesn't test the capacity to develop competence in a complex field.

comment by iDante · 2012-05-01T06:54:34.969Z · LW(p) · GW(p)

I think the Ship of Theseus problem is good reductionism practice. Anyone else think similarly?

Replies from: XiXiDu
comment by XiXiDu · 2012-05-01T11:04:45.924Z · LW(p) · GW(p)

I think the Ship of Theseus problem is good reductionism practice. Anyone else think similarly?

If I was to use an advanced molecular assembler to create a perfect copy of the Mona Lisa and destroy the old one in the process, it would still lose a lot of value. That is because many people value not only the molecular setup of things but also their causal history: what transformations they underwent.

Personally I wouldn't care if I was disassembled and reassembled somewhere else. If that was a safe and efficient way of travel then I would do it. But I would care if that happened to some sort of artifact I value. Not only because it might lose some of its value in the eyes of other people but also because I personally value its causal history to be unaffected by certain transformations.

So in what sense would a perfect copy of the Mona Lisa be the same? In every sense except that it was copied. And if you care about that quality then a perfect copy is not the same, it is merely a perfect copy.

Replies from: TheOtherDave, None, BlackNoise
comment by TheOtherDave · 2012-05-01T13:37:17.848Z · LW(p) · GW(p)

Sure. Relatedly, the Mona Lisa currently hanging in the Louvre isn't the original... that only existed in the early 1500s. All we have now is the 500-year-old descendant of the original Mona Lisa, which is not the same; it is merely a descendant.

Fortunately for art collectors, human biases are such that the 500-year-old descendant is more valuable in most people's minds than the original would be.

Replies from: XiXiDu, NancyLebovitz, TobyBartels
comment by XiXiDu · 2012-05-01T15:06:42.455Z · LW(p) · GW(p)

Fortunately for art collectors, human biases are such that the 500-year-old descendant is more valuable in most people's minds than the original would be.

This has nothing to do with biases, although some people might be confused about what they actually value.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-01T16:42:47.016Z · LW(p) · GW(p)

(shrug) Fortunately for art collectors, human minds are such that they reliably ascribe more value to the 500-year-old descendant than to the original.

comment by NancyLebovitz · 2012-05-02T16:23:19.540Z · LW(p) · GW(p)

I'd rather have the early original-- I'd like to see the colors Leonardo intended, though I suppose he was such a geek that he might have tweaked them to allow for some fading.

Paint or Pixel: The Digital Divide in Illustration Art has more than a little (and more than I read) about what collecting means when some art is wholly or partially digital. Some artists sell a copy of the process by which the art was created, and some make a copy in paint of the digital original.

Strange but true: making digital art is more physically wearing than using paintbrushes and pens and such.

Note: the book isn't about illustration in general, it's about fantasy and science fiction illustration in particular.

comment by TobyBartels · 2012-05-01T19:27:09.527Z · LW(p) · GW(p)

Surely the original, if discovered to be still extant after all (and proved to really be the original), would be even more highly valued if we had it?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-01T19:53:12.410Z · LW(p) · GW(p)

Can you expand a little on how you imagine this happening? I suspect we may be talking past one another.

Replies from: TobyBartels
comment by TobyBartels · 2012-05-01T20:38:15.540Z · LW(p) · GW(p)

Ah, I have completely misunderstood you! Thanks for suspecting that we were talking past one another, because it made me reread your comment.

I thought that you were taking as factual certain theories that the Mona Lisa in the Louvre is a copy (not descendant) of a painting that has since been lost. Rather than directly engage that claim (which I think is pretty thoroughly disbelieved), I just responded to the idea that the true original would be less valuable, which I find even weirder. But you were not talking about that at all.

My only defence is that "the original would be" doesn't really make sense either; perhaps you should write "the original was"?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-01T22:25:10.518Z · LW(p) · GW(p)

Heh. I wasn't even aware of any such theories existing.

You don't really need defense here, my point was decidedly obscure, as I realized when I tried to answer your question. I got about two paragraphs into a response before I foundered in the inferential gulf.

I suspect that any way of talking about "the original" as distinct from its "descendent" is going to lose comprehensibility as it runs into the human predisposition to treat identity as preserved over time.

Replies from: TobyBartels
comment by TobyBartels · 2012-05-03T00:43:23.059Z · LW(p) · GW(p)

I wasn't even aware of any such theories existing.

https://en.wikipedia.org/wiki/Speculation_about_Mona_Lisa#Other_versions (which is more than just speculation about the original)

comment by [deleted] · 2012-05-01T12:37:24.483Z · LW(p) · GW(p)

Requoting:

"Look at any photograph or work of art. If you could duplicate exactly the first tiny dot of color, and then the next and the next, you would end with a perfect copy of the whole, indistinguishable from the original in every way, including the so-called 'moral value' of the art itself. Nothing can transcend its smallest elements" - CEO Nwabudike Morgan, "The Ethics of Greed", Sid Meier's Alpha Centauri

That is because many people not only value the molecular setup of things but also their causal history, what transformations things underwent.

In that case, its history would be that it started off as atoms, was transformed into bits, and then was transformed back into atoms again. If the transformation were truly lossless, people familiar with this fact wouldn't care. Now, this specific example sounds silly because we have no such technology applicable to the Mona Lisa. But consider something like a mass-produced CD. You could take a CD in Europe, losslessly rip it, destroy the CD and copy the bits to America, then send them to a factory to stamp another CD. The resulting variation would be identical to that between the original CD and one of its siblings in Europe. People are familiar with the technologies involved, and they value CDs only for their bits, so the copy really is as good as the original.

(Here I have even taken pains to state that the copy is not a burned CD-R, nor was the original signed by a band member, or any such thing.)
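As a trivial illustration of "the copy really is as good as the original" for bit-level objects (my own sketch; the byte string is just a stand-in for ripped audio data): a lossless copy is byte-for-byte identical, so any checksum of the copy matches the original, and nothing about the bits can tell the two apart.

```python
import hashlib

# Stand-in bytes for losslessly ripped audio data (hypothetical,
# for illustration only).
original = b"audio data ripped from the European CD"
copy = bytes(original)  # a lossless round trip: atoms -> bits -> atoms

# Byte-identical data yields an identical hash; by the bits alone,
# the stamped copy is indistinguishable from the original.
assert hashlib.sha256(original).digest() == hashlib.sha256(copy).digest()
print(original == copy)  # prints True
```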

But I would care if that happened to some sort of artifact I value.

"During World War II, the medals of German scientists Max von Laue and James Franck were sent to Copenhagen for safekeeping. When Germany invaded Denmark, chemist George de Hevesy dissolved them in aqua regia, to prevent confiscation by Nazi Germany and to prevent legal problems for the holders. After the war, the gold was recovered from solution, and the medals re-cast." - Wikipedia

Replies from: XiXiDu
comment by XiXiDu · 2012-05-01T14:00:39.804Z · LW(p) · GW(p)

Here is an example. Imagine there was a human colony in another star system. After an initial exploration drone set up a communication node and a molecular assembler on a suitable planet, all other equipment and all humans were transmitted digitally and locally reassembled.

Now imagine such a colony would receive the Venus figurine either as a copy, digitally transmitted and reassembled, or as the original, by means of a craft capable of interstellar travel. If you don't perceive there to be a difference then you simply don't share my values. But consider how many resources, including time, it took to accomplish the relocation in the latter case.

The value of something can encompass more than its molecular setup. There might be many sorts of sparkling wines that taste just like Champagne. But if you claim that simply because they taste like Champagne they are Champagne, then you are missing what it is that people actually value.

Replies from: None
comment by [deleted] · 2012-05-01T15:16:45.208Z · LW(p) · GW(p)

To try to better understand your value system, I'm going to take what I think you value, attempt to subdivide it in half, and then reconnect it together, and see if it is still valuable. Please feel free to critique any flaws in the following story.

The seller of the Venus tells you, "This Venus is the original, carried from Earth on a shuttle that went through many twists and turns and near-accidents to get here." There was recently a shuttle carrying "untransportium", so that is extremely plausible, and he is a trustworthy seller. You feel confident you have just bought the original Venus.

However, later you find out that someone else next door has one of those duplicates of the Venus. He got it for much, much cheaper, but you still enjoy your original. You do have to admit you can't tell them apart in the slightest.

Later still, you find out that an unrelated old man, who just died, had been having fun with you two by flipping a coin and periodically switching the two Venuses from one house to the other when it came up tails. He had himself cremated beyond recovery, so he can't be revived and interrogated. You have confirmed video evidence that he switched the Venuses multiple times, in a pattern which resembles that of someone deciding on a fair coin flip, but there doesn't appear to be a way of determining the specific number of switches with any substantial accuracy (video records only go back so far, and you did manage to find an eyewitness who confirms he had done it since before the beginning of the video records). A probabilistic best guess gives you a 50-50 shot of having the original at this point.
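The 50-50 claim checks out under a simple model (my own sketch, assuming each coin flip swaps the Venuses with probability 1/2): after the very first flip the location is already exactly even, no matter how many flips follow, which is why not knowing the number of switches doesn't matter.

```python
# Track the probability that the original Venus is still at your house,
# given that each fair coin flip swaps the two with probability 1/2.
def p_original_here(n_flips: int) -> float:
    p = 1.0  # the original starts at your house
    for _ in range(n_flips):
        p = p * 0.5 + (1.0 - p) * 0.5  # swap with probability 1/2
    return p

print([p_original_here(n) for n in (0, 1, 5, 50)])  # [1.0, 0.5, 0.5, 0.5]
```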

Your neighbor, who doesn't really care about the causal history of his Venus, offers to sell you his Venus for part of the price you paid for the original, and then buy himself another replica. Then you will be as certain as you were before to have the original (and you will also have a replica), but you won't know which of the two is which.

Is it worth buying the other Venus at all if you don't particularly value replicas? As a percentage of the original's price, how much would you feel comfortable paying?

I guess another way of expressing what I'm trying to figure out is "Do you value having the original itself, or being able to tell the original apart from replicas, and if so can you split those two apart (in the way I tried to in the story) and tell me how much you value each?"

Replies from: XiXiDu
comment by XiXiDu · 2012-05-01T15:51:25.505Z · LW(p) · GW(p)

"Do you value having the original itself, or being able to tell the original apart from replicas, and if so can you split those two apart (in the way I tried to in the story) and tell me how much you value each?"

The value of both together, one of them being the original, would be a lot less than the original alone; I'd pay maybe 40% for both. The value of just one would reduce to a small fraction; I wouldn't be interested in buying it at all. The reason is the loss of information.

comment by BlackNoise · 2012-05-01T15:00:19.162Z · LW(p) · GW(p)

You would care if certain objects are destructively teleported but not care if the same happens to you (and presumably other humans)

Is this a preference you would want to want? I mean, given the ability to self-modify, would you rather keep putting (negative) value on concepts like "copy of" even when there's no practical physical difference? Note that this doesn't mean no longer caring about causal history. (You care about your own causal history in the form of memories and such.)

Also, can you trace where this preference is coming from?

Replies from: XiXiDu
comment by XiXiDu · 2012-05-01T15:40:35.209Z · LW(p) · GW(p)

You would care if certain objects are destructively teleported but not care if the same happens to you (and presumably other humans)

Yeah, I would use a teleporter any time if it was safe. But I would only pay a fraction for certain artifacts that were teleported.

Is this a preference you would want to want? I mean, given the ability to self-modify, would you rather keep putting (negative) value on concepts like "copy of" even when there's no practical physical difference?

I would keep that preference. And there is a difference. All the effort it took to relocate an object adds to its overall value. If only for the fact that other people who share my values, or play the same game and therefore play by the same rules, will desire the object even more.

Also, can you trace where this preference is coming from?

Part of the value of touching an asteroid from Mars is the knowledge of its spacetime trajectory. An atomically identical copy of a rock from Mars, digitally transmitted by a robot probe and printed out for me by my molecular assembler, is very different. It is also a rock from Mars, but its spacetime trajectory is different; it is artificial.

Which is similar to drinking Champagne versus a sparkling wine that tastes exactly the same. The first is valued because while drinking it I am aware of its spacetime trajectory, the resources it took to create it, where it originally came from, and how it got here.

Replies from: BlackNoise, Armok_GoB, TobyBartels
comment by BlackNoise · 2012-05-01T16:27:47.145Z · LW(p) · GW(p)

If only for the fact that other people who share my values, or play the same game and therefore play by the same rules, will desire the object even more.

How about if there were two worlds: one where people care about whether a spacetime trajectory does or does not go through a destroy-rebuild cycle, and one where they spend that effort on other things they value. In that case, which world would you rather live in?

The Champagne example helps, I can understand putting value on effort for attainment, but I'd like another clarification:

If you have two rocks, where rock 1 is brought from Mars via spaceship, and rock 2 is the same as rock 1 except that after receiving it you teleport it one meter to the right: would you value rock 2 less than rock 1? If yes, why would you care about that but not about yourself undergoing the same?

Replies from: XiXiDu
comment by XiXiDu · 2012-05-01T17:11:04.885Z · LW(p) · GW(p)

How about if there were two worlds - one where they care about whether a spacetime trajectory does or does not go through a destroy-rebuild cycle, and one where they spend the effort on other things they value. In that case, in which world would you rather live in?

It is not that important. I would trade that preference for more important qualities. But that line of reasoning can also lead to the destruction of all complex values. I have to draw a line somewhere or end up solely maximizing the quality that is most important.

If you have two rocks where rock 1 is brought from mars via spaceship, and rock 2 is the same as rock 1 only after receiving it you teleport it 1 meter to the right. Would you value rock 2 less than rock 1?

Rock 1 and 2 would be of almost equal value to me.

comment by Armok_GoB · 2012-05-06T00:09:56.329Z · LW(p) · GW(p)

In a hypothetical case where you weren't opposed to the slave trade... what would you pay for a transported slave very much like yourself? Would it matter if you had been transported?

If the slave had some famous causal history, would it matter whether it was mental (composed an important song) or physical (lone survivor of a disaster)?

comment by TobyBartels · 2012-05-01T19:28:21.860Z · LW(p) · GW(p)

So the labour theory of value is true for art?

comment by faul_sname · 2012-05-01T04:52:59.972Z · LW(p) · GW(p)

You're off by a couple months. (should read "May 1-15," instead of "March 1-15").

Edit: It's fixed now

Replies from: Thomas
comment by Thomas · 2012-05-01T14:04:54.832Z · LW(p) · GW(p)

As I've said, a script should open such threads. (And I would expect a "thanks" for you.)

comment by Multiheaded · 2012-05-09T04:32:30.000Z · LW(p) · GW(p)

I won't be wasting any more time on TVTropes. The reason is that I've become so goddamn angry at the recent shocking triumph of hypocrisy, opportunism, idiocy and moral panic that I literally start growling after looking at any page for more than five seconds. Never again will I become "trapped" in that holier-than-thou fascist little place. Every cloud has a silver lining, I guess. Still, I'm kinda sad that this utter madness happened.

(One particular thing I'm mad about is their perverted treatment of Sengoku Rance, an excellent and engaging videogame that just so happened to contain rough sexual jokes - which weren't really slanted against women or anything; hell, at one point the protagonist loses his penis, and he often reaps disastrous consequences for thinking with it - and some erotic content here and there. However, that game was spared the worst of the "discussion", as nothing could be pinned on it in the sickening and awful "anti-pedophile" witch hunt.)

Replies from: None
comment by [deleted] · 2012-05-09T04:48:00.653Z · LW(p) · GW(p)

Er, not to interrupt your moral outrage, but their policy seems more or less reasonable given their size and presumably their legal resources.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-09T05:24:49.623Z · LW(p) · GW(p)

If they had the courage to call for a real discussion of issues like teenage sexuality - not to mention call out the schizophrenic mainstream view of those issues - things might've turned out very differently. How the hell does 4chan get away with things that would make that tinpot dictator of an admin faint - and is no less popular for it? From what I've heard, it's not exactly an unprofitable venture for Moot, either.

Replies from: None
comment by [deleted] · 2012-05-09T05:53:31.267Z · LW(p) · GW(p)

There's a certain degree of irony involved in your comment, posted as it is on another discussion site run by hopefully-benevolent dictators.

4chan has toed the line considerably; the only thing that keeps them from getting van'd is their ruthlessness in weeding out and banning those responsible for posting child pornography.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-09T06:20:58.827Z · LW(p) · GW(p)

There's a certain degree of irony involved in your comment, posted as it is on another discussion site run by hopefully-benevolent dictators.

I'd say there's a greater distance between an oppressive tinpot dictator and a genuinely benevolent one than between a generic dictator and a generic representative democracy.

4chan has toed the line considerably; the only thing that keeps them from getting van'd is their ruthlessness in weeding out and banning those responsible for posting child pornography.

And their limits on what is and what isn't child pornography are some of the most narrow and liberal in the world. E.g. any written stuff is considered a harmless fantasy, as it should be. Particularly shocking drawn pictures might be censored, but as long as it's not "real" you're not in real trouble. Have you seen what /a/ has been like for the last few years?

(I should clarify that, personally, I don't see any specific appeal in erotic material with childlike features, and am faintly put off by it on an emotional level. But I have absolutely no problem with those who indulge in it, as long as they don't engage in anything harmful to real people or support those who do.)

comment by Grognor · 2012-05-04T11:26:48.687Z · LW(p) · GW(p)

I'm worried I'm too much of a jerk. I used to think I had solved this problem, but I've recently encountered (or, more accurately, stopped ignoring) evidence that my tone is usually too hostile, negative, mean, horrible &c.

Could some people go through my comment history and point out where I could improve? Sometimes I think I'm exactly enough of a jerk, but other times I bet I cross the line.

Anonymous feedback can go here. Else reply to this comment or send a private message.

Replies from: shminux
comment by shminux · 2012-05-05T00:08:42.903Z · LW(p) · GW(p)

Have you tried going through your critical posts/IRC comments and pretending to be on the receiving end? Typical mind fallacy notwithstanding, this should be a decent first step.

comment by khafra · 2012-05-02T14:14:03.667Z · LW(p) · GW(p)

I think I see a problem in Robin Hanson's I'm a Sim, or You're Not. He argues:

Today, small-scale coarse simulations are far cheaper than large-scale detailed simulations, and so we run far more of the first type than the second. I expect the same to hold for posthuman simulations of humans – most simulation resources will be allocated to simulations far smaller than an entire human history, and so most simulated humans would be found in such smaller simulations.

Furthermore I expect simulations to be quite unequal in who they simulate in great detail – pivotal “interesting” folks will be simulated in full detail far more often than ordinary folks. In fact, I’d guess they’d be simulated over a million times more often. Thus from the point of view of a very interesting person, the chances that that person is in a simulation should be more than a million times the chances from the point of view of an ordinary person.

However, if just two large-scale, full-detail simulations are run, and the rest contain only famous people and NPCs, those of us in the epistemically privileged position of being fairly ordinary should still guess at a 2/3 probability that we're in a simulation.

The alternative option, if observer-moments in partial "interesting period/place/person simulations" overwhelmingly outnumber those in full simulations, is that I'm extremely likely to become famous, sooner or later; and this should outweigh my very low "inside view" probability of fame.

Should I start preparing for my moment of glory, or re-examine my reasoning?
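For what it's worth, the 2/3 figure falls out of simple observer counting (a back-of-the-envelope sketch with my own assumed numbers, not Hanson's): if partial simulations contain no ordinary observers, then an ordinary person's indistinguishable copies live only in base reality and the full-detail simulations.

```python
from fractions import Fraction

# Assumed counts, purely for illustration: one base reality and two
# large-scale full-detail simulations. Partial simulations are assumed
# to render ordinary people only as NPCs, so they contribute no
# ordinary observer-moments.
base_realities = 1
full_simulations = 2

# An ordinary observer is equally likely to be any of the copies.
p_simulated = Fraction(full_simulations, base_realities + full_simulations)
print(p_simulated)  # 2/3
```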

comment by Paul Crowley (ciphergoth) · 2012-05-01T15:21:03.316Z · LW(p) · GW(p)

From the fact that all of Shadowzerg's comments in this thread have at least three upvotes, I can only assume that the karma sockpuppets are out in force.

http://lesswrong.com/lw/c4k/why_is_it_that_i_need_to_create_an_article_to/

Replies from: TobyBartels
comment by TobyBartels · 2012-05-01T20:23:07.976Z · LW(p) · GW(p)

The sockpuppets have now been overwhelmed.

comment by Paul Crowley (ciphergoth) · 2012-05-01T11:57:55.768Z · LW(p) · GW(p)

I registered the domains maxipok.com and maxipok.org and set them up to redirect to http://www.existential-risk.org/concept.html .

Replies from: David_Gerard
comment by David_Gerard · 2012-05-01T14:27:08.509Z · LW(p) · GW(p)

Are you keeping track of hits?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-01T14:34:46.102Z · LW(p) · GW(p)

I'm just using the registrar's forwarding facility, and it doesn't provide that. I can't quite be arsed to set the domain up myself and do my own redirects, though I guess I could.

Replies from: David_Gerard
comment by David_Gerard · 2012-05-01T23:36:13.749Z · LW(p) · GW(p)

Those are exceedingly particular jargon terms, and if you're going to bother it would be interesting to know if they got any hits at all.

(The question occurred to me because I was, at the time of writing it, procrastinating from going through log files to answer this precise question concerning several internal domain names I want to kill off on a server I want to kill off. When something gets no hits for six months, nobody cares.)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-02T07:34:14.702Z · LW(p) · GW(p)

Good points. It's not so much that I think people might be searching for "maxipok" now - if they do they already largely get the right hit - as that I'd like to popularize the term, and it's always wise to buy the domain before you do that.

Replies from: David_Gerard
comment by David_Gerard · 2012-05-02T07:38:07.075Z · LW(p) · GW(p)

Oh, of course!

comment by Zaine · 2012-05-11T10:43:31.748Z · LW(p) · GW(p)

I know there are many programmers on LW, and thought they might appreciate word of the following Kickstarter project. I don't code myself, but from my understanding it's like Scrivener for programmers:

http://www.kickstarter.com/projects/ibdknox/light-table?ref=discover_pop

comment by TrE · 2012-05-04T14:02:13.200Z · LW(p) · GW(p)

So mstevens, Konkvistador and I had an IRC discussion which sparked an interesting idea.

The basic idea is to get people to read and understand the sequences. As a reward for doing this, there could either be some sort of "medals" or "badges" for users or a karma reward. The "badges" solution would require that changes are made to the site design, but the karma rewards could work without changes, by getting upvotes from the lw crowd.

To confirm that the person actually understood what is written in the sequences, "instructors" are needed. The student could write a post saying that he's going to read sequence X; if the instructor confirms that the student has succeeded and understood that sequence, the instructor comments on the student's post and asks the public to upvote the parent to a fixed amount of karma.

To test a student's knowledge, the instructor and the student hold a video chat when the student thinks he's ready. The video chat serves mainly as a disincentive to cheating: cheating would still be easily possible (just have a browser open on the screen), but people feel watched and are discouraged from it. In the video chat, the instructor can use both the sequence flashcards that circulate (e.g. the Anki cards) and questions he prepared himself. The instructor simply has to judge whether the student read and understood the sequence he wanted to learn.

If this idea gets approval, the next thing to do would be trying it out!

comment by [deleted] · 2012-05-15T12:42:53.641Z · LW(p) · GW(p)

An interesting read I stumbled upon in gwern's Google+ feed.

Shelling Out -- The history of money

comment by Aharon · 2012-05-11T17:25:24.375Z · LW(p) · GW(p)

I need advice on proof reading. Specifically:

How can I effectively read through 10-20 page reports, searching for spelling, formatting and similar mistakes?

and, more importantly, how can I effectively check calculations and tables done in Excel for errors?

What I'm looking for is some kind of method for doing those tasks. Currently, I try to check my results, but it is hard for me not to just glaze over the finished work - I'm familiar with it, and it is hard for me to read a familiar text/table/calculation thoroughly.

Does anybody know how one can improve in this respect?

Replies from: thomblake
comment by thomblake · 2012-05-11T17:56:15.456Z · LW(p) · GW(p)

Does anybody know how one can improve in this respect?

My best advice, though it might not be helpful to you, is to have someone else proofread it.

Replies from: Aharon
comment by Aharon · 2012-05-11T18:08:08.251Z · LW(p) · GW(p)

Possible, but that reflects on my performance if they do indeed find mistakes I could have corrected myself. The goal is to eliminate most of the errors myself so I don't waste my co-workers' time.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-11T18:26:25.279Z · LW(p) · GW(p)

If your co-workers are also proofreading their own work, and having similar issues proofreading what they are too familiar with, then your time and theirs will be more efficiently utilized by proofreading each other's work. So they find mistakes you could have corrected, and you find mistakes they could have corrected, but all these corrections get done with less time and effort.

Replies from: Aharon
comment by Aharon · 2012-05-11T18:47:29.879Z · LW(p) · GW(p)

Some more background: we're a small enterprise (boss, six employees, secretary, two trainees). Except for our secretary and the trainees, everybody has an academic degree. We did try to institute that as a rule, but only I and one person working from a home office consistently do so. That person is also very busy at the moment because of some deadlines, so I can't ask him to proofread. The others do proofread, but don't ask for proofreading in return, which makes asking low-status. They are either better at proofreading than I am or make fewer mistakes in the first place.

In fact, I fear the underlying problem is that I am not able to concentrate well, so my work is more error-prone. Making as few mistakes as possible in the first place would obviously be the best solution, but I have even less of an idea how to achieve that, given my current abilities and work conditions.

comment by [deleted] · 2012-05-10T11:35:00.419Z · LW(p) · GW(p)

I've recently spent a lot of time thinking about and researching the sociology and the history of ethics. Based on this I'm going to make a prediction that may be unsettling for some. At least it was unsettling for me when I made it less abstract, shifted to near mode thinking and imagined actually living in such a world. If my model proves accurate I probably eventually will.

"Between consenting adults." as the core of modern sexual morality and the limit of law is going to prove to be a poor Schelling fence.

Replies from: None
comment by [deleted] · 2012-05-10T13:40:32.192Z · LW(p) · GW(p)

Unsettling in what sense? Like, it will eventually erode monogamy, and so cause more sexual inequality? It will break down gender-based sexuality, effectively turning everyone bi? Legalized rape? Dogs and cats living together?

(Channeling Multiheaded: Don't be so vague, especially when you're making predictions!)

Replies from: None, None
comment by [deleted] · 2012-05-10T14:05:57.069Z · LW(p) · GW(p)

Like, it will eventually erode monogamy, and so cause more sexual inequality?

Meh, this is already a fait accompli from what I see.

Don't be so vague, especially when you're making predictions!

This is excellent advice. I will do so.

Children (and I do mean children; I'm not talking about young teens) will be considered capable of giving consent to have sex with adults. Their parents will be discouraged from influencing their choice (even if it is "choice") of sexual partner too much. Rape will become a much less serious crime. A significantly smaller fraction of rapes will be prosecuted. Western countries will have higher rates of rape than they currently do.

I think the first urge of a LessWronger reading the above is to pattern-match such claims to weirdtopia: a place where sex with children and more rape is actually a good thing according to our current utility function, just in a really counter-intuitive way.

No. I want people to try to discard just-universe instincts and, just for the sake of visualisation, consider the above outside of weirdtopia.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2012-05-10T15:30:47.698Z · LW(p) · GW(p)

I agree completely that over time, our current beliefs about who is and isn't capable of giving informed consent to enter into a sexual relationship will be replaced by different beliefs.

I don't quite see why trending towards considering more and more people capable of consent is more likely than trending towards fewer and fewer people capable of it, or something else, but it's certainly possible. (If you can share your thinking along these lines, I'd be interested.)

In terms of my reactions, I am of course repulsed by the idea of people I don't consider capable of informed consent being manipulated or forced into providing the semblance of it, except in those cases where I happen to endorse the thing they're being manipulated or forced into doing, and also repulsed by the idea of people I consider capable of giving informed consent being denied the freedom to do so, except in those cases where I happen to endorse them being denied that freedom.

This includes (but is very much not limited to) being repulsed by the example of eight-year-olds being considered capable of giving consent to have sex with adults, and of anyone not being considered capable of refusing such consent.

I am of course aware that my own notions of who is and isn't capable of consenting to what degree to what acts are essentially arbitrary, and I don't lend much credence to the idea that I am by a moral miracle able to make the right distinction. I make the distinctions I make; as my social context changes I will make different distinctions.

I'm OK with that.

comment by [deleted] · 2012-05-10T14:33:30.325Z · LW(p) · GW(p)

Thanks, that clarifies it.

Agreed about the decline of monogamy as largely inevitable now, though I'm undecided how bad it is, especially with "more fun than sex" superstimuli becoming more widespread.

(And for reference, here's some previous discussion about children and sexuality.)

comment by [deleted] · 2012-05-10T14:24:19.061Z · LW(p) · GW(p)

I think I see a possible slippery slope based on "between consenting adults", although (EDIT:) based on the above it was not the one Konkvistador was thinking of.

Presumably clearly illegal: Let's say I mind control thousands of people into doing me as many favors as I want using a magical mind control ring that has a 99.9% success rate. (Obviously, these are not consenting adults!)

Currently legal: Let's say I advertise thousands of people into buying a product of mine using a variety of technological devices and methods which altogether takes several days to fully work, but it only has a 50% success rate. (Obviously, these are consenting adults!)

Incremental steps: Let's say I mind control thousands of people into buying a product of mine using a variety of technological devices and methods which altogether takes several days to fully work, but it only has a 50% success rate.

Let's say I mind control thousands of people into doing me a single favor using a variety of technological devices and methods which altogether takes several days to fully work, but it only has a 50% success rate.

Let's say I mind control thousands of people into doing me a single favor using a variety of technological devices and methods which altogether takes several days to fully work, but it has a 99.9% success rate.

Let's say I mind control thousands of people into doing me a single favor using a technological mind control helmet which takes several days to fully work, but it has a 99.9% success rate.

Let's say I mind control thousands of people into doing me as many favors as I want using a technological mind control helmet which takes several days to fully work, but it has a 99.9% success rate.

Let's say I mind control thousands of people into doing me as many favors as I want using a technological mind control helmet that has a 99.9% success rate.

And back to presumably clearly illegal: Let's say I mind control thousands of people into doing me as many favors as I want using a magical mind control ring that has a 99.9% success rate.

So, back to the question: at which specific step do I go too far? I don't immediately have a good answer, so at least at first glance it seems like a slippery slope.

comment by lioyujkil · 2012-05-07T04:24:10.857Z · LW(p) · GW(p)

An interesting debate has surfaced after a small group of people claimed to have succeeded in inducing hallucinations through autosuggestive techniques.

http://www.tulpa.info/index.xhtml

http://louderthanthunder.net/tulpa/

http://boards.4chan.org/sci/res/4641620

http://archive.installgentoo.net/sci/thread/4641620

Replies from: gwern
comment by gwern · 2012-05-07T16:08:19.539Z · LW(p) · GW(p)

It's a really bad idea to link to 4chan since their pages by design disappear so quickly; I've made a copy of it at http://www.webcitation.org/67UBpOp6v and the followup thread at http://www.webcitation.org/67UBuzPru but if I had been a day later...

comment by mstevens · 2012-05-04T13:43:16.608Z · LW(p) · GW(p)

Maths is great.

Many of us don't know as much maths as we might like.

Khan Academy has many educational videos and exercises for learning maths. Many people might enjoy and benefit from working through them, but suffer from Akrasia that means they won't actually do this without external stimulus.

I propose we have a KA Competition - people compete to do the maths videos and exercises on that site, and post the results in terms of KA badges and karma (they can link to their profiles there so this can be verified).

The community here will vote up impressive achievements and generally reward those who learn the most.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-05-04T16:07:06.205Z · LW(p) · GW(p)

I'm not sure I'd necessarily advocate this.

Don't get me wrong, I love Khan Academy. It's great for revising topics I haven't seen in a while, or getting a different perspective on ones I'm currently learning, but I'm actually studying towards a maths degree, which I then want to go and do something with.

If I didn't need to learn linear algebra, I don't think I could call me not learning it a case of akrasia. I'd call that a case of me making more time for eating sandwiches and talking to pretty girls. I might wish I knew lots about linear algebra, but the sandwiches and pretty girls are clearly more important to me. As it happens, I do need to learn linear algebra, and as a result, I end up learning it.

If you want people to spend their time learning a skill for the purpose of competing, you may as well tell them to play StarCraft 2. If you want them to learn a useful skill, give them a genuine use for it. Project Euler does this by posing problems solved with algorithms you may actually have to code up in real life one day. Why not assemble actual real-world problems solvable with higher mathematics, let people see what kind of problems they want to solve, and have that direct them in what to learn?

comment by OpenThreadGuy · 2012-05-11T19:35:53.107Z · LW(p) · GW(p)

I will not be able to post the May 16-31 open thread until ten hours after midnight EST.

Edit: Circumstances have changed. I will be able to post it on time. (Thus, comment retracted.)

comment by [deleted] · 2012-05-11T16:55:15.916Z · LW(p) · GW(p)

Requesting Help on Applying Instrumental Rationality

I'm faced with a dilemma and need a big dose of instrumental rationality. I'll describe the situation:

I'm entering my first semester of college this fall. I'm aiming to graduate in 3-4 years with a Mathematics B.S. In order for my course progression to go smoothly, I need to take Calculus I Honors this fall and Calc II in the spring. These two courses serve as a prerequisite bottleneck. They prevent me from taking higher level math courses.

My SAT scores have exempted me from all placement tests, including the math. But without taking a placement test, the highest any math SAT score can place me into is Pre-Calculus I Honors, which is one level below what I want to take in the fall.

So in order to take Calc I H in the fall, I either need to:

(1) Score high enough on a College-Level Math placement test or

(2) Take Pre-Calculus I H for 9 weeks this summer

I've taken both precalc and calc in highschool. I've also been studying precalc material over the past few days, relearning a lot of what I've either forgotten or wasn't taught in class. If I decide to take the test, I'm pretty confident I'll place into Calc I. If I pass the test, I'll save 9 weeks of studying in the summer and use them to prepare for classes I'll be taking in the fall.

But if I decide to forgo the test and take Precalc this summer, I'm also pretty confident I'll do very well in the class. I'd wager above a 90%. The class would ensure I've got the material down better than the placement test, likely give me a great first grade, and would also give me my first six credits. (It's six credits because it combines precalc I and II.)

How can I best decide between these two options? Are there any other relevant factors that I'm leaving out? Etc.

Edit: On second thought, maybe this would be better as its own discussion post.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2012-05-11T17:34:05.486Z · LW(p) · GW(p)

You could take the placement test, and then start studying calculus in the summer (perhaps this is what you meant by "prepare for classes I'll be taking in the fall"), reviewing specific precalc topics as needed when and if your calculus book seems to assume prior knowledge that you don't have.

comment by Aharon · 2012-05-09T08:28:25.401Z · LW(p) · GW(p)

There was a thread a while ago where somebody converted probabilities via logarithms to numbers, so it's easier to use conditional probabilities. Unfortunately, I didn't bookmark it. Does anybody know which thread I'm talking about?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-05-09T10:07:46.961Z · LW(p) · GW(p)

http://lesswrong.com/lw/buh/the_quick_bayes_table/

Maybe?

Replies from: Aharon
comment by Aharon · 2012-05-09T11:10:12.622Z · LW(p) · GW(p)

Yep, that's it. Thank you!
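For reference, the conversion in that thread (probabilities to log-odds measured in decibels, so that Bayesian updates become simple addition) can be sketched in a few lines; the function names here are just illustrative:

```python
import math

def to_decibels(probability):
    """Convert a probability to log-odds in decibels: 10 * log10(odds)."""
    odds = probability / (1 - probability)
    return 10 * math.log10(odds)

def from_decibels(db):
    """Convert decibel log-odds back to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# With log-odds, applying a piece of evidence is just addition:
prior_db = to_decibels(0.5)           # 0 dB: even odds
evidence_db = 10 * math.log10(4)      # a 4:1 likelihood ratio is about 6 dB
posterior = from_decibels(prior_db + evidence_db)  # back to a probability, about 0.8
```

The point of the deciban representation is exactly this: multiplying odds by likelihood ratios turns into adding small round numbers.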

comment by NancyLebovitz · 2012-05-08T08:54:40.986Z · LW(p) · GW(p)

A poem about getting out of the box....

Siren Song

This is the one song everyone
would like to learn: the song
that is irresistible:

the song that forces men
to leap overboard in squadrons
even though they see beached skulls

the song nobody knows
because anyone who had heard it
is dead, and the others can’t remember.

Shall I tell you the secret
and if I do, will you get me
out of this bird suit?

I don’t enjoy it here
squatting on this island
looking picturesque and mythical

with these two feathery maniacs, I don’t enjoy singing
this trio, fatal and valuable.

I will tell the secret to you,
to you, only to you.
Come closer. This song

is a cry for help: Help me!
Only you, only you can,
you are unique

at last. Alas
it is a boring song
but it works every time.

Margaret Atwood, “Siren Song” from Selected Poems 1965-1975. Copyright © 1974, 1976 by Margaret Atwood. Reprinted with the permission of the author and Houghton Mifflin Company.

comment by erratio · 2012-05-07T14:47:50.711Z · LW(p) · GW(p)

Update on the accountability system started about a month ago: it worked for about three weeks with everyone regularly turning in work; now I'm the only one still doing it. Lessons learnt: the half-life of a motivational technique seems to be about two weeks, and not breaking the chain matters (I suspect it's no coincidence that I'm the only one still going and also the only one who hasn't had unavoidable missed days from travelling). Alternatively, I'm very good at committing to commitment devices, and they're not.

comment by Incorrect · 2012-05-06T20:59:55.896Z · LW(p) · GW(p)

How can I improve my ability to manipulate mental images?

When I try to visualize a scene in my mind I find that edges of the visualization fade away until I only have a tiny image in the center of my visual field or lose the visualization entirely.

Here are some things I have noticed:

  • My ability to visualize seems to go through a cycle in which I first visualize a scene, then I lose pieces of it until it fades entirely, then a little while later I manage to reconstruct the scene.
  • My ability to visualize seems better when I am near sleep though still lucid.
  • Rather than fading evenly towards the center I find that I lose entire chunks of peripheral vision at a time.
comment by syzygy · 2012-05-05T07:30:03.853Z · LW(p) · GW(p)

In any decision involving an Omega like entity that can run perfect simulations of you, there wouldn't be a way to tell if you were inside the simulation or in the real universe. Therefore, in situations where the outcome depends on the results of the simulation, you should act as though you are in the simulation. For example, in counterfactual mugging, you should take the lesser amount because if you're in Omega's simulation you guarantee your "real life" counterpart the larger sum.

Of course this only applies if the entity you're dealing with happens to be able to run perfect simulations of reality.
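The reasoning above can be made concrete with a quick expected-value calculation, using the usual illustrative stakes for counterfactual mugging (a $100 payment on heads, a $10,000 reward on tails; these numbers are assumptions, not from the comment) and assuming Omega's simulation is perfect:

```python
# Counterfactual mugging: Omega flips a fair coin. On heads it asks you for
# the payment; on tails it rewards you iff its (perfect) simulation of you
# would have paid on heads. Compare the two policies.
PAYMENT, REWARD = 100, 10_000

def expected_value(pays_when_asked):
    heads = -PAYMENT if pays_when_asked else 0  # heads (p=0.5): you are asked to pay
    tails = REWARD if pays_when_asked else 0    # tails (p=0.5): rewarded iff the simulated you pays
    return 0.5 * heads + 0.5 * tails

ev_pay = expected_value(True)      # 4950.0
ev_refuse = expected_value(False)  # 0.0
```

Under these assumptions the policy of paying dominates, which is the comment's point: if you can't distinguish being the simulation from being the original, you act as the policy you'd want the simulation to exhibit.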

comment by Rain · 2012-05-04T14:04:35.220Z · LW(p) · GW(p)

It's frog season again. :-(

Replies from: drethelin
comment by drethelin · 2012-05-06T00:24:56.457Z · LW(p) · GW(p)

Is frog repellent a thing? Could keep them away from the stairwell.

Replies from: beoShaffer
comment by beoShaffer · 2012-05-06T00:34:45.568Z · LW(p) · GW(p)

Nothing reliable or commercially sold, as far as I found. Some homebrew ideas from eHow are here.

Replies from: TimS
comment by TimS · 2012-05-06T01:08:40.492Z · LW(p) · GW(p)

Perhaps frog attractant (mating scent or suchlike) put somewhere else to redirect the frogs? Downside: hard to wash off, so your person becomes very attractive to frogs.

Just a clueless guess.

comment by faul_sname · 2012-05-04T08:35:10.195Z · LW(p) · GW(p)

I've been reading up on working memory training (the general consensus is that training is useless or very nearly so). However, what I find interesting is how strongly working memory is correlated with performance on a wide variety of intelligence tests. While it seems that you can't train working memory, does anyone know what would stand in the way of artificial enhancements to working memory? (If there are no major problems aside from BCIs not yet being at that point, I know what I will be researching over the next few months. If there is something that would prevent this from working, it would be best to know now.)

comment by PECOS-9 · 2012-05-04T06:18:55.518Z · LW(p) · GW(p)

Why doesn't someone like Jaan Tallinn or Peter Thiel donate a lot more to SIAI? I don't intend this to mean that I think they should or that I know better than them, I just am not sure what their reasoning is. They have both already donated $100k+ each, but they could easily afford much more (well, I know Peter Thiel could. I don't know exactly how much money Jaan Tallinn actually has). I am just imagining myself in their positions, and I can't easily imagine myself considering an organization like SIAI to be worth donating $100k to, but not to be worth donating several million to.

Plausible answers I've considered:

  • They're not actually as rich as I am imagining (I know it's not the case with Peter Thiel, but Jaan Tallinn's actions would make sense to me in conjunction with the below possibilities if he had less than about $20 million).
  • They're being prudent/waiting to see how well SIAI performs with current donations before making a larger one.
  • They're saving money for the future, when it may be more clear where the money can best be invested to assure a positive singularity (this doesn't make much sense to me -- if it ever is that clear, I think there won't be a big shortage of funding).

Can you think of other hypotheses? (I may try emailing Jaan Tallinn to ask him myself, depending on how others react to this post).

Edited to add: This is not meant as a suggestion that any millionaires ought to do anything. I am legitimately curious about why they are doing what they're doing -- I have no doubt that Peter Thiel and Jaan Tallinn have both already considered this question from more angles than I have, I just want to know what answers they came up with.

Replies from: lukeprog, albeola, beoShaffer, XiXiDu, Thomas
comment by lukeprog · 2012-05-04T07:54:51.614Z · LW(p) · GW(p)

I may try emailing Jaan Tallinn to ask him myself, depending on how others react to this post

The Singularity Institute is in regular contact with its largest donors. Please do not bother them.

Replies from: PECOS-9
comment by PECOS-9 · 2012-05-04T16:34:33.417Z · LW(p) · GW(p)

It would not be a solicitation for him to donate more (though certainly I'd have to be careful to make it clear that's not my intention) -- clearly that is something best left to SI. It would be a request for clarification of his opinion on these issues. Considering he's a public figure who has done multiple talks on the subject, I don't think it's out of line to ask him for his opinions on how best to allocate funding.

comment by albeola · 2012-05-04T07:34:53.756Z · LW(p) · GW(p)

(I may try emailing Jaan Tallinn to ask him myself, depending on how others react to this post).

It seems like that might carry some risk of making him feel like he was being bugged to give more money, or something like that. Maybe it would be better to post a draft of such an email to the site first, just in case?

comment by beoShaffer · 2012-05-04T06:43:05.382Z · LW(p) · GW(p)

Tax issues. I can't find the original thread now, but it's been repeatedly stated that it causes legal issues if SI gets too much of its funding from a small number of sources.

Replies from: JoshuaZ, PECOS-9
comment by JoshuaZ · 2012-05-04T06:49:22.849Z · LW(p) · GW(p)

It would probably need to be much larger fractions for those sorts of issues to be relevant. In general, the IRS doesn't mind donations coming from a few large donors when they aren't too closely connected and there are other donors as well.

comment by PECOS-9 · 2012-05-04T06:47:24.706Z · LW(p) · GW(p)

But surely there are ways around this? The first idea that comes to mind for me, couldn't they create an offshoot organization that's not officially part of SI but still collaborates closely? If not, there has to be some other way around it.

edited to add: Of course there may be good reasons not to do the first idea I suggested above, I'm just saying that someone who wants to spend millions on SIAI-related funding probably wouldn't have trouble doing so for purely legal reasons.

comment by XiXiDu · 2012-05-04T10:33:13.004Z · LW(p) · GW(p)

Why doesn't someone like Jaan Tallinn or Peter Thiel donate a lot more to SIAI?

I've long wondered what Peter Thiel's master plan is (more):

Billionaire Peter Thiel has poured $1.7 million more into a super PAC supporting presidential candidate Ron Paul, bringing his total contributions to $2.6 million.

ETA Also see, 'A Conversation with Peter Thiel':

I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. Perhaps so is finance, which has seen a lot of innovation over the same period (too much innovation, some would argue).

There has been a tremendous slowdown everywhere else, however. Look at transportation, for example: Literally, we haven’t been moving any faster. The energy shock has broadened to a commodity crisis. In many other areas the present has not lived up to the lofty expectations we had. I think the advanced economies of the world fundamentally grow through technological progress, and as their rate of progress slows, they will have less growth.

comment by Thomas · 2012-05-04T14:04:05.489Z · LW(p) · GW(p)

Those two - and a few more - either:

A - do not buy a near Singularity, where "near" means 1 to 3 decades away;

B - have other (some would say "maybe not friendly") plans; or

C - have a clandestine contract with the SIAI.

I think people seldom live by what they preach.

Had I a billion euros to spend, I would not initialize the Singularity through the SIAI, much less by donating and hoping for a good outcome. No. At most, I would invite somebody from the SIAI to join MY team.

I generalize from myself.

comment by syzygy · 2012-05-02T22:27:14.553Z · LW(p) · GW(p)

It occurred to me that I have no idea what people mean by the word "observer". Rather, I don't know if a solid reductionist definition for observation exists. The best I can come up with is "an optimization process that models its environment". This is vague enough to include everything we associate with the word, but it would also include non-conscious systems. Is that okay? I don't really know.

Replies from: HeatDeath
comment by HeatDeath · 2012-05-03T00:55:13.185Z · LW(p) · GW(p)

It occurs to me, reading your post, that I have almost no idea what people mean by "conscious system". I'm quite certain I am one, and I regularly experience other people apparently claiming to belong to that set too. I suspect that if we can nail down what it means to belong to the set of "conscious systems", we'll be much more readily able to determine if not being a member of that set disqualifies a thing from being an "observer".

Replies from: syzygy
comment by syzygy · 2012-05-03T05:18:37.984Z · LW(p) · GW(p)

I suppose you're right. Although it's pretty easy for me to imagine something that is "conscious" that isn't an "observer" i.e., a mind without sensory capabilities. I guess I was just wondering whether our common (non-rigorous) definitions of the two concepts are independent.

comment by Brigid · 2012-05-04T23:45:29.353Z · LW(p) · GW(p)

Scientific American argues that using "negative words" (including skeptical, unclear, doubt, and shouldn't) hurts your ability to "sell your science." Instead, you need to be optimistic.

"If you have a method or idea and you believe it works, you have to be optimistic about it. Optimism is the number-one thing." ~Anne Kinney, Director of the Solar System Exploration Division at NASA's Goddard Space Flight Center.

So supposedly, in order to sell your ideas, you need to hide evidence of the LessWrongian values that help you develop a solid method or idea in the first place.

http://blogs.scientificamerican.com/guest-blog/2011/10/06/optimism-and-enthusiasm-lessons-for-scientists-from-steve-jobs/

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-05-05T14:01:05.703Z · LW(p) · GW(p)

Finding a truth and selling the truth are two different processes, with different rules. So are you suggesting that in order to sell the truth we should remove ourselves from the suspicion that we are able to find it?

If yes, then you are probably right. I suppose that when Eliezer writes his Sequence-book, he would sell a thousand times more copies if he put in an introduction stating something like: "These are the eternal truths which have been communicated to me in my dreams by an omniscient being from the seventh dimension. If you read this book and believe it, your soul will be blessed forever." And the rest of LessWrongians could put on robes, go singing in the streets, give people flowers, and sell them the book.

In this case, what exactly should a rationalist do?

comment by ZenJedi · 2012-05-01T20:37:06.249Z · LW(p) · GW(p)

If you had to choose a religion, whether real or fictional, what would it be?

I'll go first and say Jediism, with Zensufism a close second. MTFBWY...

On a related note, does the Rationalism set down in the Sequences by EY (PBUH) qualify as a religion?

Replies from: Grognor, shminux, TimS
comment by Grognor · 2012-05-02T00:54:47.957Z · LW(p) · GW(p)

I am, in fact, a Kopimist, in real life.

To answer your "on a related note" question, no. This answer is sufficiently known here that I wouldn't be surprised if it's the entire reason your comment was downvoted. It looks like an insult. Other reasons it's downvoted might be those unexplained acronyms. Also, don't use the word "rationalism" to describe what we do here.

From your blog, it looks like you're deeply into worshiping mystery, which is something we don't like as it's a way of quashing curiosity and intentionally leaving problems unsolved.

Welcome to Less Wrong, though I wouldn't be surprised if this is your only comment.

Replies from: ZenJedi
comment by ZenJedi · 2012-05-02T01:31:23.272Z · LW(p) · GW(p)

Kopimism! Fascinating, thank you for sharing this. As for whether what you do here qualifies as a religion or not, this seems like hair-splitting. When a group of people have a shared belief system that gives their lives purpose, an agenda for humanity, meet-ups around the world, a visionary leader, "hadiths" and sacred texts, etc., what is this if not a religion? I think it's an impressive achievement, and don't see why you find this label insulting.

As far as downvotes go, I have endured far worse than this in my previous incarnations. Let's just say that the Force is too strong with me to be discouraged by such minor chastisements. MTFBWY.

Replies from: faul_sname
comment by faul_sname · 2012-05-02T20:27:17.891Z · LW(p) · GW(p)

I would argue that you have a community in that case, and that religions are a subset of the community category. But then, that really is hair-splitting.

In any case, welcome.

comment by shminux · 2012-05-01T21:50:07.013Z · LW(p) · GW(p)

If you had to choose a religion, whether real or fictional, what would it be?

The question is not well defined. Do you have to pretend to be religious?

On a related note, does the Rationalism set down in the Sequences by EY (PBUH) qualify as a religion?

Search this site for the word "cult". As for PBUH, this usually applies to the dead ancient prophets, and EY is determined to never die.

Replies from: othercriteria
comment by othercriteria · 2012-05-01T21:58:23.635Z · LW(p) · GW(p)

Search this site for the word "cult".

"Phyg", too.

comment by TimS · 2012-05-02T19:15:57.102Z · LW(p) · GW(p)

What do you think of David Brin's criticisms of the Jedi philosophy?

Replies from: ZenJedi
comment by ZenJedi · 2012-05-03T14:49:58.658Z · LW(p) · GW(p)

As a general rule I don't think. I do, or do not -- there is no think. What would be the purpose of my thoughts about David Brin's criticisms? The finger pointing at the moon is not the moon, the Kurzweil curve pointing to the Singularity is not the Singularity, the map is not the territory. Clear your mind of questions, there are no problems to be solved. The act of living is the only solution to the universal wave equation. Does this answer your question?

MTFBWY...