What are your contrarian views?

post by Metus · 2014-09-15T09:17:20.308Z · LW · GW · Legacy · 819 comments

As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.

819 comments

Comments sorted by top scores.

comment by Salemicus · 2014-09-15T13:08:05.759Z · LW(p) · GW(p)

Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.

Replies from: TheAncientGeek, DanielLC, pragmatist
comment by TheAncientGeek · 2014-09-17T15:43:44.655Z · LW(p) · GW(p)

Which dualism?

comment by DanielLC · 2014-09-16T00:00:33.903Z · LW(p) · GW(p)

Do you mean that, without strong evidence that we don't have, we should assume dualism, or that we have strong evidence for dualism?

If it's the second one, can you give me an example of such a piece of evidence?

Replies from: Salemicus
comment by Salemicus · 2014-09-16T11:22:07.351Z · LW(p) · GW(p)

The second position.

An example of the evidence is the two-way causal connection between your inner subjective experiences and the external universe.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T13:42:39.396Z · LW(p) · GW(p)

How is that better explained by dualism?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-17T15:44:54.957Z · LW(p) · GW(p)

Indeed. Two-way interaction is as well or better explained by physicalism.

comment by pragmatist · 2014-09-15T13:13:06.330Z · LW(p) · GW(p)

I upvoted because I disagree (strongly) with the second conjunct, but I do agree that certain varieties of dualism are coherent, and even attractive, theories of mind.

comment by blacktrance · 2014-09-15T19:20:31.256Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.

Replies from: VAuroch, None, sharmake-farah
comment by VAuroch · 2014-09-17T19:44:16.426Z · LW(p) · GW(p)

What would you have to see to convince you otherwise?

Replies from: blacktrance
comment by blacktrance · 2014-09-17T21:09:48.939Z · LW(p) · GW(p)

I think it would take an a priori philosophical argument, rather than empirical evidence.

Replies from: None
comment by [deleted] · 2014-09-26T01:23:23.717Z · LW(p) · GW(p)

Wouldn't cognitive science or neuroscience be sufficient to falsify such a theory? All we really have to do is show that "good life", as seen from the inside, does not correspond to maximized happy-juice or dopamine-reward.

Replies from: blacktrance
comment by blacktrance · 2014-09-26T02:15:47.121Z · LW(p) · GW(p)

The most that would show is what humans tend to prefer, not what they should prefer.

Replies from: None
comment by [deleted] · 2014-09-26T02:19:08.534Z · LW(p) · GW(p)

You're going to have to explain what meta-ethical view you hold such that "prefer on reflection given full knowledge and rationality" and "should prefer" are different.

Replies from: blacktrance
comment by blacktrance · 2014-09-26T03:07:37.378Z · LW(p) · GW(p)

I don't think neuroscience would tell you what you'd prefer on reflection given full knowledge and rationality.

Replies from: None
comment by [deleted] · 2014-09-26T03:16:20.812Z · LW(p) · GW(p)

Sufficiently advanced cognitive science definitely will, though.

Replies from: blacktrance
comment by blacktrance · 2014-09-26T03:30:16.861Z · LW(p) · GW(p)

I'm skeptical of that.

comment by [deleted] · 2014-09-26T01:32:41.048Z · LW(p) · GW(p)

I can think of something I prefer, on reflection, against wireheading. Now what?

Replies from: blacktrance
comment by blacktrance · 2014-09-26T02:17:43.168Z · LW(p) · GW(p)

There are a lot of things people are capable of preferring that aren't pleasure; the question is whether they're what people should prefer.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-15T21:29:57.724Z · LW(p) · GW(p)

Awfully presumptuous of you to tell people what they should prefer.

Replies from: blacktrance
comment by blacktrance · 2014-11-15T22:53:37.715Z · LW(p) · GW(p)

Why? We do this all the time, when we advise people to do something different from what they're currently doing.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-16T02:00:15.669Z · LW(p) · GW(p)

No, we don't. That's making recommendations as to how they can attain their preferences. That you don't seem to understand this distinction is concerning. Instrumental and terminal values are different.

Replies from: blacktrance
comment by blacktrance · 2014-11-16T20:00:05.313Z · LW(p) · GW(p)

My position is in line with that - people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.

Replies from: Transfuturist, DefectiveAlgorithm
comment by Transfuturist · 2014-11-26T01:20:49.758Z · LW(p) · GW(p)

Why is my terminal value pleasure? Why should I want it to be?

Replies from: blacktrance
comment by blacktrance · 2014-11-26T01:43:33.542Z · LW(p) · GW(p)

Fundamentally, because pleasure feels good and preferable, and it doesn't need anything additional (such as conditioning through social norms) to make it desirable.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-26T03:05:20.981Z · LW(p) · GW(p)

Why should I desire what you describe? What's wrong with values more complex than a single transistor?

Also, naturalistic fallacy.

Replies from: blacktrance
comment by blacktrance · 2014-11-26T03:44:40.461Z · LW(p) · GW(p)

It's not a matter of what you should desire, it's a matter of what you'd desire if you were internally consistent. Theoretically, you could have values that weren't pleasure, such as if you couldn't experience pleasure.

Also, the naturalistic fallacy isn't a fallacy, because "is" and "ought" are bound together.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-26T04:15:31.737Z · LW(p) · GW(p)

Why is the internal consistency of my preferences desirable, particularly if it would lead me to prefer something I am rather emphatically against?

Why should the way things are be the way things are?

Replies from: blacktrance
comment by blacktrance · 2014-11-26T04:37:12.911Z · LW(p) · GW(p)

(Note: Being continuously downvoted is making me reluctant to continue this discussion.)

One reason to be internally consistent is that it prevents you from being Dutch booked. Another reason is that it enables you to coherently get the most of what you want, without your preferences contradicting each other.

Why should the way things are be the way things are?

As far as preferences and motivation are concerned, however things should be must appeal to them as they are, or at least as they would be if they were internally consistent.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-26T05:09:49.183Z · LW(p) · GW(p)

Retracted: Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities.

As far as preferences and motivation are concerned, however things should be must appeal to them as they are, or at least as they would be if they were internally consistent.

I very much disagree. I think you're couching this deontological moral stance as something more than the subjective position that it is. I find your morals abhorrent, and your normative statements regarding others' preferences to be alarming and dangerous.

Replies from: blacktrance
comment by blacktrance · 2014-11-26T05:43:08.035Z · LW(p) · GW(p)

Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities.

You can be Dutch booked with preferences too. If you prefer A to B, B to C, and C to A, I can make money off of you by offering a circular trade to you.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-26T06:32:29.763Z · LW(p) · GW(p)

That's only if I'm unaware that such a strategy is taking place. Even if I were aware, I am a dynamic system evolving in time, and I might be perfectly happy with the expenditure per utility shift.

Unless I was opposed to that sort of arrangement, I find nothing wrong with that. It is my prerogative to spend resources to satisfy my preferences.

Replies from: blacktrance
comment by blacktrance · 2014-11-26T19:53:27.649Z · LW(p) · GW(p)

I might be perfectly happy with the expenditure per utility shift.

That's exactly the problem - you'd be happy with the expenditure per shift, but every time a full cycle is made, you'd be worse off. If you start out with A and $10, pay me a dollar to switch to B, another dollar to switch to C, and a third dollar to switch back to A, you end up with A and $7, worse off than you started, despite being satisfied with each transaction. That's the cost of inconsistency.
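
A minimal Python sketch of this money pump (the item names A/B/C and the $1-per-trade fee are just the figures from the example above; the code is only an illustration of the cycle, not anyone's actual model):

    # Money pump against cyclic preferences: the agent prefers B to A, C to B, and A to C.
    # Each individual trade looks acceptable to the agent, yet a full cycle only loses money.
    prefers = {("B", "A"), ("C", "B"), ("A", "C")}  # (offered, held): offered is preferred to held

    def run_cycle(holding="A", cash=10, fee=1):
        for offered in ["B", "C", "A"]:        # one full cycle of trades
            if (offered, holding) in prefers:  # the agent accepts any trade it locally prefers
                holding, cash = offered, cash - fee
        return holding, cash

    print(run_cycle())  # ('A', 7): the same item as at the start, but $3 poorer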

Replies from: Transfuturist
comment by Transfuturist · 2014-11-26T23:12:45.109Z · LW(p) · GW(p)

And 3 utilons. I see no cost there.

Replies from: blacktrance
comment by blacktrance · 2014-11-27T01:09:35.764Z · LW(p) · GW(p)

But presumably you don't get utility from switching as such, you get utility from having A, B, or C, so if you complete a cycle for free (without me charging you), you have exactly the same utility as when you started, and if I charge you, then when you're back to A, you have lower utility.

Replies from: Transfuturist
comment by Transfuturist · 2014-11-28T23:11:41.202Z · LW(p) · GW(p)

If I have utility in the state of the world, as opposed to the transitions between A, B, and C, I don't see how it's possible for me to have cyclic preferences, unless you're claiming that my utility doesn't have ordinality for some reason. If that's the sort of inconsistency in preferences you're referring to, then yes, it's bad, but I don't see how ordinal utility necessitates wireheading.

Replies from: blacktrance
comment by blacktrance · 2014-11-29T10:50:41.802Z · LW(p) · GW(p)

Regarding inconsistent preferences, yes, that is what I'm referring to.

Ordinal utility doesn't by itself necessitate wireheading, such as if you are incapable of experiencing pleasure, but if you can experience it, then you should wirehead, because pleasure has the quale of desirability (pleasure feels desirable).

Replies from: Transfuturist
comment by Transfuturist · 2014-12-01T03:30:45.047Z · LW(p) · GW(p)

And you think that "desirability" in that statement refers to the utility-maximizing path?

Replies from: blacktrance
comment by blacktrance · 2014-12-01T06:36:51.676Z · LW(p) · GW(p)

I mean that pleasure, by its nature, feels utility-satisfying. I don't know what you mean by "path" in "utility-maximizing path".

comment by DefectiveAlgorithm · 2014-11-26T01:49:08.340Z · LW(p) · GW(p)

Can you define 'terminal values', in the context of human beings?

Replies from: blacktrance
comment by blacktrance · 2014-11-26T03:42:01.005Z · LW(p) · GW(p)

Terminal values are what are sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-11-26T11:24:11.890Z · LW(p) · GW(p)

I know what terminal values are and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings - that is, entities with cognitive architectures with no explicitly defined utility functions and with multiple interacting subsystems which may value different things (ie. emotional vs deliberative systems). I'm well aware of the huge impact my emotional subsystem has on my decision making. However, I don't consider it 'me' - rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations whilst explicitly holding a desire that this not be so, a desire which would in certain contexts lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure.

To follow up on my initial question, it had been intended to lay the groundwork for this followup: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?

Replies from: blacktrance
comment by blacktrance · 2014-11-26T19:56:19.377Z · LW(p) · GW(p)

What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?

That upon ideal rational deliberation and when having all the relevant information, a person will choose to pursue pleasure as a terminal value.

comment by Noosphere89 (sharmake-farah) · 2024-10-06T14:08:45.620Z · LW(p) · GW(p)

I've got to give it to you, the "human value is not complex" point is frankly aging very well with the rise of LLMs. One of the miracles is that you can get a reasonably good human value function without very large hacks or complicated code; it's just learned from the data.

To put it another way, I think this contrarian opinion has been earning Bayes points compared to a lot of other theories of how complicated human values are.

Replies from: gwern
comment by gwern · 2024-10-06T23:51:43.811Z · LW(p) · GW(p)

the human value is not complex point is frankly aging very well with the rise of LLMs

You just pointed out that what an LLM learned for even a very simple game with extensive clean data turned out to be "a bag of heuristics": https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1?commentId=ykmKgL8GofebKfkCv [LW(p) · GW(p)]

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-10-07T00:16:58.250Z · LW(p) · GW(p)

Alright, I have a few responses to this:

  1. Contra OthelloGPT, in the case of GPT-4 and the new o1-preview model, I believe that the neural networks we are focusing on are deep and sort of wide, which I suspect prevents a lot of the "just find heuristics" behavior, and I believe Chain-of-Thought scaling is there to bias the model more towards algorithmic solutions and sequential computation over heuristics and parallel computation.

You even mentioned that possibility here too:

https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1#5K5wDoMD2YtSJvfw9 [LW(p) · GW(p)]

Replies from: gwern
comment by gwern · 2024-10-07T00:40:25.824Z · LW(p) · GW(p)

Well, it would certainly be nice if that were true, but all the interpretability research thus far has pointed out the opposite of what you seem to be taking it to show. The only cases where the neural nets turn out to learn a crisp, clear, extrapolable-out-many-orders-of-magnitude-correctly algorithm, verified by interpretability or formal methods to date, are not deep nets. They are tiny, tiny nets either constructed by hand or trained by grokking (which appears to not describe at all any GPT-4 model, and it's not looking good for their successors either). The bigger, deeper nets certainly get much more powerful and more intelligent, but they appear to be doing so by, well, slapping on ever more bags of heuristics at scale. Which is all well and good if you simply want raw intelligence and capability, but not good if anything morally important hinges on them reasoning correctly for the right reasons, rather than heuristics which can be broken when extrapolated far enough or manipulated by adversarial processes.

Replies from: sharmake-farah, sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-10-08T00:47:04.740Z · LW(p) · GW(p)

We actually have a resolution for the thread on whether LLMs naturally learn algorithmic reasoning as they scale up with CoT vs. just reasoning with memorized bags of heuristics, and the answer is that there is both real reasoning, which is indicative of LLMs actually using somewhat clean algorithms, and also a lot of heuristic reasoning involved.

So we both got some things wrong, but also got some things right.

The main thing I got wrong was underestimating how much CoT for current models still involves pretty significant memorization/bags of heuristics to get correct answers, which means I have to raise my estimate of the complexity of human values, given that LLMs didn't compress as well as I thought. The thing I got right was that sequential computation like CoT does incentivize actual noisy reasoning/algorithms to appear, though I was wrong about the strength of the effect. I was still right to be concerned that the OthelloGPT network was very wide and skinny, rather than deep and wide, which makes it harder to learn the correct algorithm.

The thread is below:

https://x.com/aksh_555/status/1843326181950828753

I wish someone were willing to do this for the o1 series of models as well.

One other relevant comment is here:

https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1#HqDWs9NHmYivyeBGk [LW(p) · GW(p)]

comment by Noosphere89 (sharmake-farah) · 2024-10-07T01:35:39.501Z · LW(p) · GW(p)

A key crux is that I think those heuristics actually go quite far, because it's much, much easier to learn a quite-close-to-correct model of human values with simple heuristics, and to internalize the values from its training data as its own, than it is to learn useful capabilities. More generally, it's easier to learn and internalize human values than it is to learn useful new capabilities, so even under a heuristic view of LLMs, where LLMs are basically always learning bags of heuristics and don't have actual algorithms, the heuristics for internalizing human values are always simpler than the heuristics for learning capabilities, because it's easier to generate training data for human values than for any other capability.

See below for relevant points:

In general, it makes sense that, in some sense, specifying our values and a model to judge latent states is simpler than the ability to optimize the world. Values are relatively computationally simple and are learnt as part of a general unsupervised world model where there is ample data to learn them from (humans love to discuss values!). Values thus fall out mostly ‘for free’ from general unsupervised learning. As evidenced by the general struggles of AI agents, ability to actually optimize coherently in complex stochastic ‘real-world’ environments over long time horizons is fundamentally more difficult than simply building a detailed linguistic understanding of the world.

https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/

Good point, though, that the claim that current LLMs are definitely learning algorithms rather than just heuristics was not well supported by the current interpretability results/evidence. I'd argue that o1-preview is mild evidence that we will start seeing more algorithmic/search components used in AIs in the future (though to be clear, I believe a majority of the success comes from its data being higher quality, and only fairly little from its runtime search).

comment by jsteinhardt · 2014-09-15T17:36:36.022Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.

Replies from: cousin_it, Osuniev
comment by cousin_it · 2014-09-19T10:17:25.993Z · LW(p) · GW(p)

Imagine a physicist arguing that replication has no place in physics, because it can damage the careers of physicists whose experiments failed to replicate! Yet that's precisely the argument that the article makes about social psychology.

comment by Osuniev · 2014-09-21T11:05:54.404Z · LW(p) · GW(p)

I read this trying to keep as open a mind as possible, and I think there is SOME value to SOME of what he said (i.e. no two experiments are totally the same, and replicators are often motivated to prove the first study wrong)... But one thing that really set me off is that he genuinely considers a study that doesn't prove its hypothesis a failure, not even acknowledging that IN PRINCIPLE such a study has proven the hypothesis wrong, which is valuable knowledge all the same.

This is so jarring with what I consider the very basis of science that I find it difficult to take Mitchell seriously.

comment by bramflakes · 2014-09-15T11:41:13.463Z · LW(p) · GW(p)

Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.

EDIT: I should clarify:

Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.

Replies from: None, VAuroch, Punoxysm, FiftyTwo
comment by [deleted] · 2014-09-15T18:21:00.830Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Current levels of immigration are also terrible, and will significantly speed up the collapse of the Western world.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-17T15:53:30.657Z · LW(p) · GW(p)

Citation required.

comment by VAuroch · 2014-09-17T03:59:43.824Z · LW(p) · GW(p)

I'm not clear on whether it's actually a good idea, but if Bryan Caplan's arguments are the best available, it's definitely a horrible idea. He sidesteps all the potential problems without addressing them, or in some cases draws analogies that, when actually considered properly, indicate that it would be a bad idea.

Replies from: Azathoth123, bramflakes
comment by Azathoth123 · 2014-09-18T00:25:55.772Z · LW(p) · GW(p)

I particularly like how he manages to switch between deontology and consequentialism in the same argument.

comment by bramflakes · 2014-09-17T10:39:09.606Z · LW(p) · GW(p)

Actually here I disagree. There are many counterarguments listed on the Open Borders site, and for that I give him credit. Especially as he actually attempts to engage with racialist arguments, rather than dismissing them.

Replies from: VAuroch, gattsuru
comment by VAuroch · 2014-09-18T04:00:29.657Z · LW(p) · GW(p)

I remember at one point encountering the Open Borders site, considering that it sounded like a pretty good idea, then reading through much of the site and becoming decreasingly convinced as I read the specific arguments, which consisted of more holes than solid points.

Recently, it's come up again, specifically in an interview with Caplan which was going around (I saw it via Kaj). Again, I was initially intrigued by the idea, but the more I saw of the actual arguments, the weaker they seemed. He seems to routinely deflect the significant concerns with non-denials and never actually address the pragmatic reasoning against it.

comment by gattsuru · 2014-09-19T17:28:51.113Z · LW(p) · GW(p)

There are many counterarguments listed on the Open Borders site, and for that I give him credit. Especially as he actually attempts to engage with racialist arguments, rather than dismissing them.

I'm not sure if it's a technique worth crediting. There are voting trend issues with Hispanic immigrants that do not boil down to paranoid delusions about the Reconquista, and there are arguments regarding crime and immigration that are strongly and obviously distinguishable from sending every African-American person -- including those innocent of any crime -- out of the country. I'm hard-pressed to believe those arguments were selected for any reason but their weakness and unpalatability.

I personally favor reduced barriers to immigration (outside of a criminal background check and unique person identification, the modern limits are counterproductive at best), but writing up the worst arguments against that belief doesn't really strengthen them.

Replies from: bramflakes
comment by bramflakes · 2014-09-19T20:44:06.817Z · LW(p) · GW(p)

Hey, it's a step up from denying outright that certain types of immigrants will commit more crimes. A lot of people have drunk that Kool-Aid.

comment by Punoxysm · 2014-09-15T17:05:52.648Z · LW(p) · GW(p)

Why do you believe this? Countries with the most liberal immigration policies today don't seem to be on the verge of collapse.

Replies from: shminux, bramflakes
comment by Shmi (shminux) · 2014-09-15T17:52:39.013Z · LW(p) · GW(p)

Ebola?

Replies from: bramflakes
comment by bramflakes · 2014-09-15T17:58:51.148Z · LW(p) · GW(p)

Ebola is more an argument for colonialism than against open borders but let's not be picky.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-15T18:12:03.471Z · LW(p) · GW(p)

Ebola is an example of a locally-originated virulent existential threat open borders fail to contain, biological, social or otherwise. Controlled borders, despite all the issues, at least can act as an immune system of sorts.

Replies from: roystgnr, bramflakes
comment by roystgnr · 2014-09-19T05:26:07.098Z · LW(p) · GW(p)

Define "controlled borders"? In the "open borders" context the debate is usually about residency and citizenship restrictions, but in the context of ebola those don't matter; tourists and airline workers and cargo ship crews and so on all carry diseases too.

comment by bramflakes · 2014-09-15T19:17:06.843Z · LW(p) · GW(p)

Yes I agree, I was just being facetious :s

comment by bramflakes · 2014-09-15T17:48:26.425Z · LW(p) · GW(p)

You should visit Bradford someday.

Replies from: gjm, FiftyTwo
comment by gjm · 2014-09-15T20:03:53.791Z · LW(p) · GW(p)

I'm sure Bradford isn't the greatest place to live, but (1) it's better than many US inner cities, (2) the UK seems quite far from collapse, and generally (3) "such-and-such a country allows quite a lot of immigration, and there is one city there that has a lot of immigrants and isn't a very nice place" seems a very very very weak argument against liberal immigration policies.

Replies from: Azathoth123, bramflakes
comment by Azathoth123 · 2014-09-16T01:02:42.819Z · LW(p) · GW(p)

"such-and-such a country allows quite a lot of immigration, and there is one city there that has a lot of immigrants and isn't a very nice place" seems a very very very weak argument against liberal immigration policies.

On the other hand, "such-and-such a country allows quite a lot of immigration, and the niceness of a city inversely correlates with the number of immigrants there" is a stronger argument. Especially if I can get an even stronger correlation by conditioning on types of immigrants.

Replies from: gjm, TheAncientGeek
comment by gjm · 2014-09-16T01:30:48.763Z · LW(p) · GW(p)

Stronger, yes. But ...

  • It's far from clear that the central premise is correct. (Cambridge has a lot of immigrants and I think it's very nice. I'm told Stoke-on-Trent is pretty rubbish but it has few immigrants. Two cherry-picked cases don't tell you much about correlation but, hey, that's one more case than bramflakes offered.)
  • The differential effects of immigration within a country might look different from the overall effects on the country as a whole. (Toy model, not intended to be a description of how things actually are: suppose some immigrant group produces disproportionate numbers of petty criminals and brilliant business executives; then maybe areas with more of that group will have more crime but by the magic of income tax the high earnings of the geniuses will make everyone better off.)
  • For some people -- I am not claiming you are one -- the very fact that a place has more immigrants (or more of particular "types of immigrants", nudge nudge wink wink) makes it less nice. Those who happen not to feel that way may have a different view of the correlation between niceness and immigration from those who do. To take a special case, the immigrants themselves probably don't feel that way, and for some who favour liberal immigration policies the benefit to the immigrants is actually an important part of the point.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-16T01:46:41.480Z · LW(p) · GW(p)

To take a special case, the immigrants themselves probably don't feel that way,

Actually they probably do. That's why they immigrated in the first place.

and for some who favour liberal immigration policies the benefit to the immigrants is actually an important part of the point.

Well it's remarkable how strong a correlation there is between one's support for immigration and how strong a bubble one has around oneself to protect oneself from them. Look how many of the most prominent immigration advocates live in gated communities.

Replies from: gjm
comment by gjm · 2014-09-16T10:08:32.904Z · LW(p) · GW(p)

That's why they immigrated in the first place.

I can think of reasons why someone might migrate from country A to country B other than preferring country B's people to country A's.

[EDITED to add: Maybe I should give some examples, in case they really aren't obvious. Country B might have: a better political system, less war, more money, better treatment for some group one's part of (women, gay people, intellectuals, Sikhs, ...), less disease, nicer climate, lower taxes, better public services, better jobs, better educational opportunities. Some of those might in some cases be because country A's people are somehow better, but they needn't be, and even if in fact Uzbekistan has lower taxes because it has fewer Swedes and Swedes have a genetic predisposition to raise taxes, someone migrating from Sweden to Uzbekistan for lower taxes needn't be aware of that and needn't have any preference for not being around Swedes.]

how strong a correlation there is [...] how many of the most prominent immigration advocates live in gated communities.

I am interested: How strong, and how many? Do you have figures?

(And how does it compare with how many of the most prominent advocates of anything you care to mention live in gated communities? The most prominent people in any given group are more likely to be rich, and richer people more often live in gated communities.)

In any case, assuming for the sake of argument that there is indeed a positive correlation between being "protected" from immigrants and supporting letting more of them in: I don't understand how your reply is responsive to what I wrote. It seems exactly parallel to this: "Many people advocate prison reform for the sake of the prisoners." "Oh yeah? Well, a lot of those people prefer to live in places with lower crime rates." Which is true enough, but hardly relevant. There's no inconsistency between wanting some group of people to be better off, and having a personal preference for not living near a lot of them.

Replies from: Azathoth123, Azathoth123, army1987
comment by Azathoth123 · 2014-09-18T00:24:32.990Z · LW(p) · GW(p)

The most prominent people in any given group are more likely to be rich, and richer people more often live in gated communities.

Rich people are more likely to advocate open borders.

As for prominent people: Mark Zuckerberg bought the four houses surrounding his own "because he wanted more privacy". Bryan Caplan prides himself on the bubble he's constructed around himself.

Replies from: gjm
comment by gjm · 2014-09-18T01:15:57.893Z · LW(p) · GW(p)

Rich people are more likely to advocate open borders.

Again, I'd be interested in the statistics. (That isn't a coded way of saying I think you're wrong, by the way. But I'd be interested to know how big the differences are, whether it depends on what you mean by "rich", etc.)

Mark Zuckerberg [...] Bryan Caplan

I'm not sure why this is relevant. I'm guessing that both of those people advocate open borders, but surely the absolute most any observation of this form could show is that there are at least two people in the world who advocate open borders for bad reasons, or advocate open borders but are terrible people, or something. How can that possibly matter?

comment by Azathoth123 · 2014-09-17T01:12:43.807Z · LW(p) · GW(p)

and even if in fact Uzbekistan has lower taxes because it has fewer Swedes and Swedes have a genetic predisposition to raise taxes, someone migrating from Sweden to Uzbekistan for lower taxes needn't be aware of that and needn't have any preference for not being around Swedes.

Of course, one consequence of this is that if enough Swedes migrate they'll destroy the aspect of Uzbekistan that attracted them in the first place.

In any case, assuming for the sake of argument that there is indeed a positive correlation between being "protected" from immigrants and supporting letting more of them in: I don't understand how your reply is responsive to what I wrote. It seems exactly parallel to this: "Many people advocate prison reform for the sake of the prisoners." "Oh year? Well, a lot of those people prefer to live in places with lower crime rates." Which is true enough, but hardly relevant. There's no inconsistency between wanting some group of people to be better off, and having a personal preference for not living near a lot of them.

It is hypocritical in the original sense of the term, the one from which the word's negative connotations derive, i.e., a leader who insists that the group make sacrifices for the "greater good" without participating in those sacrifices himself.

Replies from: gjm
comment by gjm · 2014-09-17T12:40:46.361Z · LW(p) · GW(p)

they'll destroy the aspect of Uzbekistan that attracted them in the first place.

Until the number of Swedes in Uzbekistan is extremely large, it'll presumably still be better than Sweden in that respect.

It is hypocritical in the original sense of the term

That doesn't actually seem to be the original sense of the term, at least according to my reading of the OED, but I don't think it matters. Anyway, let's suppose you're right and some advocates of liberal immigration policies are hypocrites in that sense. I don't see how that's evidence that the policies are bad, nor do I see how it's responsive to what your comment was a reply to (namely, a claim that many people advocate liberal immigration policies for the benefit of the immigrants).

I'm still curious about "how strong, and how many", by the way. I assume, from what you said on this point, that you have figures; I'd love to see them.

comment by A1987dM (army1987) · 2014-09-21T08:36:19.369Z · LW(p) · GW(p)

Country B might have: a better political system, less war, more money, better treatment for some group one's part of (women, gay people, intellectuals, Sikhs, ...), less disease, nicer climate, lower taxes, better public services, better jobs, better educational opportunities.

Other than climate and to some extent money and war and disease, these mainly depend on which kind of people Country B has.

comment by TheAncientGeek · 2014-09-17T15:56:14.826Z · LW(p) · GW(p)

"... niceness of a city inversely correlates with the number of immigrants there"

Ask any Native American, ho ho.

comment by bramflakes · 2014-09-15T22:47:08.188Z · LW(p) · GW(p)

I'm being flippant of course. I didn't intend it as a serious argument.

Quick response:

1) You cannot compare the UK's cities to the US' cities because the US has a 14% black population and the UK does not. "Inner city" is a codeword for the kind of black dysfunction that thankfully the UK does not possess.

2) The UK is not close to collapse because we don't have fully Open Borders yet. For all its faults, the EU's migration framework isn't quite letting in millions of third-worlders yet.

3) Of course.

If you don't mind, I don't want to get into a lengthy debate on the subject.

Replies from: gjm, polymathwannabe
comment by gjm · 2014-09-15T23:06:05.856Z · LW(p) · GW(p)

I am quite happy not to have a lengthy debate with you on this topic.

comment by polymathwannabe · 2014-09-16T14:12:33.403Z · LW(p) · GW(p)

The difference is only apparent; both societies have treated their nonwhites like trash. The British Empire merely avoided its "dysfunction" problem at home by outsourcing it to India.

Replies from: bramflakes
comment by bramflakes · 2014-09-16T16:02:07.889Z · LW(p) · GW(p)

Then why, despite the xenophobic laws of the 19th and early 20th centuries, are East Asians a dominant minority in the US? Why, despite a millennium of antisemitism, are Ashkenazim getting 27% of Nobels and making up about a quarter [edit: not sure of exact number] of US billionaires?

White people have treated all nonwhites like trash at some point or another, yet there's a giant variation in outcomes. Racism as an all-powerful explanation of black dysfunction is untenable.

Replies from: Lumifer
comment by Lumifer · 2014-09-16T16:16:11.902Z · LW(p) · GW(p)

White people have treated all nonwhites like trash at some point or another

I think that most peoples have treated some other tribe as trash at some point or another. The particular case which prompted this response was the English and the Irish, but the list of examples is very long.

comment by FiftyTwo · 2014-09-21T00:39:11.777Z · LW(p) · GW(p)

A non-representative event happened and was blown out of proportion by media.

Replies from: bramflakes
comment by bramflakes · 2014-09-22T18:31:23.704Z · LW(p) · GW(p)

What event are you talking about? If you mean the Pakistani rape gang, that was Rotherham, not Bradford.

comment by FiftyTwo · 2014-09-21T00:41:42.725Z · LW(p) · GW(p)

Collapsing civilisation as we know it is presumably not a bad thing if you think that our current civilisation is fundamentally unjust or suboptimally allocates resources based on arbitrary geographic boundaries.

Replies from: bramflakes
comment by bramflakes · 2014-09-22T18:32:57.407Z · LW(p) · GW(p)

I'll take both of those over the Camp of the Saints.

comment by Shmi (shminux) · 2014-09-15T15:44:01.842Z · LW(p) · GW(p)

There is no territory, it's maps all the way down.

Replies from: hyporational, DanielLC, jsteinhardt, polymathwannabe, D_Malik, None, Slider
comment by hyporational · 2014-09-17T16:45:05.045Z · LW(p) · GW(p)

There are no maps, it's reality all the way up.

Replies from: shminux, Slider
comment by Shmi (shminux) · 2014-09-17T16:55:59.834Z · LW(p) · GW(p)

You might be facetious, but I suspect that it is another way of saying the same thing.

Replies from: TheAncientGeek, hyporational
comment by TheAncientGeek · 2014-09-20T14:28:50.811Z · LW(p) · GW(p)

I suspect it isn't.

The words map and territory aren't relative terms like up and down.

comment by hyporational · 2014-09-17T16:58:48.173Z · LW(p) · GW(p)

I meant to communicate the latter. We share this view.

comment by Slider · 2014-09-22T20:16:14.553Z · LW(p) · GW(p)

The parent post implies a belief in non-psychism.

Replies from: hyporational
comment by hyporational · 2014-09-23T09:18:26.378Z · LW(p) · GW(p)

I'm not sure I understand what you mean.

Replies from: Slider
comment by Slider · 2014-09-25T00:23:08.825Z · LW(p) · GW(p)

That there are no representations. There is no computational system that can be said to be about something.

comment by DanielLC · 2014-09-15T23:20:54.549Z · LW(p) · GW(p)

"The territory" is just whatever exists. It may well be an infinite series of entities, each more refined than the last. It's still a territory.

If there is no territory, what is a map?

Replies from: shminux
comment by Shmi (shminux) · 2014-09-15T23:35:53.177Z · LW(p) · GW(p)

I don't normally call it a map, I call it a model, but whatever the name, it's something that turns observations into predictions of future observations, without claiming that the source of these observations is something called "reality". This can go as much meta as you like. The map-territory model is one such useful model, except when it's not.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T00:05:36.852Z · LW(p) · GW(p)

Are you saying that the universe is built like Solomonoff induction? It randomly produces observations and eliminates possibilities that don't follow them? I'd still consider that as having a territory, but it's certainly contrarian.

At the very least, your model of the universe implies the existence of a series of maps along a timeline.
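
A toy sketch of the filtering DanielLC is paraphrasing (this is only hypothesis elimination against an observation stream, not actual Solomonoff induction, which would also weight hypotheses by program length; all names and data are made up for illustration):

    # Keep only the hypotheses whose predictions match every observation seen so far.
    hypotheses = {
        "always-0": lambda t: 0,
        "always-1": lambda t: 1,
        "alternate": lambda t: t % 2,
    }
    observations = [0, 1, 0, 1]

    surviving = dict(hypotheses)
    for t, obs in enumerate(observations):
        surviving = {name: h for name, h in surviving.items() if h(t) == obs}

    print(list(surviving))  # ['alternate'] - the only hypothesis consistent with the data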

comment by jsteinhardt · 2014-09-16T01:55:45.256Z · LW(p) · GW(p)

I think this post should win the thread for blowing the most minds. (I'll upvote even though I think your position is tenable, since I only assign it 20% probability or so.)

Replies from: pinkocrat
comment by pinkocrat · 2014-10-12T15:52:40.645Z · LW(p) · GW(p)

I think the whole point is that there's no fact of the matter. "There are only maps" is a map, and on its own logic it's only as true as it is useful. I'm not sure how I would assign a probability to it.

comment by polymathwannabe · 2014-09-15T17:02:33.062Z · LW(p) · GW(p)

That sounds awfully like social constructionism.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-15T17:42:11.529Z · LW(p) · GW(p)

Never heard of it until now, had to look it up, couldn't find a decent writeup about it. This link seems to be the best, yet it does not even give a clear definition.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-15T17:58:26.101Z · LW(p) · GW(p)

Executive summary of social constructionism: all of reality is socially agreed; nothing is objective.

Replies from: shminux, TheAncientGeek
comment by Shmi (shminux) · 2014-09-15T18:17:01.088Z · LW(p) · GW(p)

I'm lost at "socially agreed". I define models as useful if they make good predictions. This definition does not rely on some social agreement, only on the ability to replicate the tests of said predictions.

comment by D_Malik · 2014-09-20T04:33:23.713Z · LW(p) · GW(p)

Can you unpack this? At the moment it seems nonsensical, in a "throwing together random words and hoping people read profound insights into it" way.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-20T07:24:56.623Z · LW(p) · GW(p)

Sure. Have you actually seen "the territory"? Of course not. There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption. It works well in many cases. But it is still an assumption, a model. To quote Brienne Strohl on noticing:

You're unlikely to generate alternative hypotheses when the confirming observation and the favored hypothesis are one and the same in your experience of experience.

To most people the map/territory observation is such a "one and the same". I'm suggesting that it's only a hypothesis. It gives way when making a map changes the territory (hello, QM). It is also unnecessary, because the useful essence of the map/territory model is that "future is partially predictable", in the sense that it is possible to take our past experiences, meditate on them for a while, figure out what to expect in the future and see our expectations at least partially confirmed. There is no need to attach the notion of some objective reality causing this predictability, though admittedly it does feel good to pretend that we stand on solid ground, and not on some nebulous figment of imagination.

If you extract this essence, that future experiences are predictable from the past ones, and that we can shape our future experiences based on the knowledge of the past, it is enough to do science (which is, unsurprisingly, designing, testing and refining models). There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought that our knowledge was nearly complete, there was nothing more to discover, except for one or two small things here and there. And then those small things became gateways to more surprising observations.

Yet we persist in thinking that there are ultimate laws of the universe, and that some day we might discover them all. I posit that there are no such laws, and we will continue digging deeper and deeper, without ever reaching the bottom... because there is no bottom.

Replies from: D_Malik
comment by D_Malik · 2014-09-20T22:33:16.871Z · LW(p) · GW(p)

Thanks for explaining, upvoted. But I still don't see how this could possibly make sense.

There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought that our knowledge was nearly complete, there was nothing more to discover, except for one or two small things here and there.

But our models have become more accurate over time. We've become, if you will, "less wrong". If there's no territory, what have we been converging to?

Have you actually seen "the territory"? Of course not.

...Yes? I see it all the time.

There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me. If it's just maps generating our observations, I'd call the maps part of the territory. (Like a map with a picture of the map itself on the territory. Except, in your world, I guess, there's no territory to chart so the map is a map of itself.) This feels like arguing about definitions.

I see how this might sorta make sense if we postulate that the Simulator Gods are trying really hard to fuck with us. Though still, in that case, I think the simulating world can be called a territory.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-21T05:29:37.699Z · LW(p) · GW(p)

But our models have become more accurate over time.

Indeed they have. We can predict the outcome of future experiments better and better.

We've become, if you will, "less wrong".

Yep.

If there's no territory, what have we been converging to?

Why do you think we have been converging to something? Every new model generates more questions than it answers. Sure, we know now why emitted light is quantized, but we have no idea how to deal, for example, with the predicted infinite vacuum energy.

...Yes? I see it all the time.

No, you really don't. What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me.

It is not a definition, it's a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.

If it's just maps generating our observations, I'd call the maps part of the territory.

First, I made no claims that maps generate anything. Maps are what we use to make sense of observations. Second, if you define the territory the usual way, as "reality", then of course maps are part of the territory; everything is.

in your world, I guess, there's no territory to chart so the map is a map of itself.)

Not quite. You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs. This does not have to end.

This feels like arguing about definitions.

I realize that is how you feel. The difference is that the assumption of the territory implies that we have a chance to learn everything there is to learn some day, to construct the absolutely accurate map of the territory (possibly at the price of duplicating the territory and calling it a map). I am not convinced that it is a good assumption. Quite the opposite: our experience shows that it is a bad one; it has been falsified time and again. And bad models should be discarded, no matter how comforting they may be.

Replies from: pinyaka, TheAncientGeek
comment by pinyaka · 2014-09-23T16:42:44.909Z · LW(p) · GW(p)

...Yes? I see it all the time.

No, you really don't. What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.

You could argue that sensing is part of the territory while any thing that is sensed is part of the map, I think.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-23T18:31:38.321Z · LW(p) · GW(p)

You could, but you should be very careful, since most of sensing is multiple levels of maps. Suppose you see a cat. So, presumably the cat is part of the territory, right? Well, let's see:

  • What you perceive as a cat is constructed in your brain from genetics, postnatal development, education, previous experiences and nerve impulses reaching your visual cortex. There are multiple levels of processing: light entering through your eye, being focused, absorbed by light-sensitive cells, going through 3 or 4 levels of other cells before triggering spikes in the afferent fibers reaching deep into your visual cortex. The work done inside it to trigger the "this is a cat" subroutine in a totally different part of the brain is much, much more complex.

  • Any of these levels can be disrupted, so that when you see a cat others don't agree (maybe someone drew a "realistic" picture to fool you, or maybe your brain constructed a cat image where that of a different but unfamiliar animal (say, raccoon) would be more accurate). Multiple observations are required to validate that what you perceive as a cat behaves the way your internal model of the cat predicts.

  • Even the light rays which eventually resulted in you being aware of the cat are simplified maps of propagating excitations of the EM field interacting with atoms in what could reasonably be modeled as a cat's fur. Unless it is better modeled as lines on paper.

  • This stack of models currently ends somewhere in the Standard Model of Particle physics. Not because it's the "ultimate reality", but because we don't have a good handle on how to continue building the stack.

  • You could argue that all the things I have described are "real" and part of the territory. Absolutely you can. But then why stop there? If light rays are real and not just abstractions, then so are images of cats in your brain.

  • Thus any model is as "real" as any other, though one can argue that accurate models (better at anticipating future experiences) are more real than inaccurate ones. The heliocentric model is "more real" than the geocentric one, in the sense that it has a larger domain of validity. But then you are also forced to admit that quarks are more real than mesons and cats are less real than generic felines.

Replies from: pinyaka
comment by pinyaka · 2014-09-23T19:48:40.888Z · LW(p) · GW(p)

By "sensing" I was referring to the end result of all those nerves firing and processes processing when awareness meets the result of all that stuff. I suppose I could have more accurately stated that awareness is a part of the territory as awareness arises directly from some part of your circuitry. Everything about the cat in your example may happen in the brain or not and so you can't really be sure that there's an underlying reality behind it, but awareness itself is a direct consequence of the configuration of the processing equipment.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-23T20:27:27.293Z · LW(p) · GW(p)

So what is a map and not the territory in your example? The cat identification process? The "I see a cat" quale? I am confused.

Replies from: pinyaka
comment by pinyaka · 2014-09-23T21:28:03.158Z · LW(p) · GW(p)

Yes, the cat quale is map.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-25T02:55:37.629Z · LW(p) · GW(p)

I'd argue that it is as real as any other brain process.

Replies from: pinyaka
comment by pinyaka · 2014-09-25T20:16:17.175Z · LW(p) · GW(p)

It's real, but the thing that's being experienced isn't the real thing. The cat quale is a real process, but it's not a real cat (probably). The part of processing the quale that is the awareness (not the object of awareness) is itself the real awareness and holds the distinction of actually being in the territory rather than in the map.

comment by TheAncientGeek · 2014-09-22T13:24:20.039Z · LW(p) · GW(p)

Why do you think we have been converging to something?

What is the point of science, otherwise? Better prediction of observations? But you can't explain what an observation is.

If the territory theory is able to explain the purpose of science, and the no-territory theory is not , the territory theory is better.

What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.

...according to a map which has "inputs from the territory" marked on it.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me.

It is not a definition, it's a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.

Well, you need to. If the territory theory can explain the very existence of observations, and the no-territory theory cannot, the territory theory is better.

You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs.

Inputs from where?

The difference is that the assumption of the territory implies that we have a chance to learn everything there is to learn some day, to construct the absolutely accurate map of the territory

No it doesn't. "The territory exists, but is not perfectly mappable" is a coherent assumption, particularly in view of the definition of the territory as the source of observations.

comment by [deleted] · 2014-09-15T21:32:32.937Z · LW(p) · GW(p)

Is that contrarian? In the community I come from (physics), that's a pretty commonly considered theory, even if not commonly held as most probable.

Replies from: shminux, TheAncientGeek
comment by Shmi (shminux) · 2014-09-15T22:07:34.054Z · LW(p) · GW(p)

I'm an ex-physicist, and I am pretty sure that realism, and more specifically scientific realism, is the standard, if implicit, ontology in physics.

comment by TheAncientGeek · 2014-09-17T15:36:24.687Z · LW(p) · GW(p)

That depends on exactly what it is supposed to mean. Some people use it to mean that reality is not accessible outside an interpretational framework - that's a motte version. A bailey version would be that there is literally nothing in existence except human-made theories. Physicists often aren't good at stating or noticing degrees of realism and anti-realism, since they aren't trained for it.

Replies from: None
comment by [deleted] · 2014-09-17T18:59:30.731Z · LW(p) · GW(p)

I didn't interpret shminux's statement as being about realism. There is also the theory that as we move to higher and higher energies we will uncover more and more specific rules and never reach a terminal fundamental rule set. In other words, the fundamental rules of the universe are fractally complex, with the fractal function being unknowable.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-18T11:20:11.268Z · LW(p) · GW(p)

Maybe. But Shminux also says that the territory is a map, not that it is unmappable.

comment by Slider · 2014-09-15T20:30:07.217Z · LW(p) · GW(p)

Every computation requires something that instantiates it, i.e. an abstract or concrete machine to run on. In a very extreme case you might come up with a very abstract idea; however, then the instantiation provider is the imaginer. Every bit of information requires a transfer of energy. Instantiation is a transitive relation. If there is a simulation of me, it necessarily instantiates my thoughts too.

Also, the parent comment implies a belief in panpsychism.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-17T15:41:28.575Z · LW(p) · GW(p)

Taken literally it is unlikely. However, it is not clear how literally it is meant to be taken.

comment by MaximumLiberty · 2014-09-16T02:38:58.238Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

As a first approximation, people get what they deserve in life. Then add the random effects of luck.

Max L.

Replies from: DanielLC, polymathwannabe
comment by DanielLC · 2014-09-16T04:48:18.192Z · LW(p) · GW(p)

Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?

Replies from: MaximumLiberty
comment by MaximumLiberty · 2014-09-17T17:30:20.390Z · LW(p) · GW(p)

I count "the circumstances into which you are born" as luck. I'd guess it is the biggest component of luck, along with being struck by a disabling genetic condition or exposed to pandemic. So, the first observation has more salience in similar groups of people. So, for example, the group of people that I hang out with or work with are roughly similar enough for desert to have more salience than luck.

But perhaps that means that birth-luck should be the first approximation, then desert, then additional luck.

Max L.

Replies from: DanielLC
comment by DanielLC · 2014-09-17T18:18:15.121Z · LW(p) · GW(p)

Can you give me an example of something that is neither desert nor luck?

Replies from: MaximumLiberty
comment by MaximumLiberty · 2014-09-18T02:48:20.450Z · LW(p) · GW(p)

Very nice question; better, in fact, than the statement to which you responded. Examples I have in mind:

  • Personal level injustice.
  • Social injustice.
  • How other people treat you.

But my primary point was whether things for which we are personally responsible are a bigger or lesser influence than luck. That is, if I am guessing with little knowledge, I am going to guess desert before luck for most groups with which I'd be interacting.

(Also, I am thinking that variation in luck, when the fact of variation is predictable and bad luck can be insured against or mitigated, is desert, not luck.)

Particular applications might make it more clear. If you don't have a job in America, and you appear physically able to work, my first guess is that you are the biggest contributor to your unemployment. If you are unhealthy in America, and weren't born with it, my first approximation will be that you contributed mightily to your poor health. And so on.

Max L.

Replies from: DanielLC
comment by DanielLC · 2014-09-18T03:01:42.659Z · LW(p) · GW(p)

If you fail to buy car insurance, you deserve the expected cost?

I was thinking deserving something bad meant you did something bad, not that you did something stupid.

When you say "deserve," do you mean to imply that it is terminally better for people who deserve more to get more, and people who deserve less to get less?

Replies from: MaximumLiberty
comment by MaximumLiberty · 2014-09-18T05:12:56.689Z · LW(p) · GW(p)

If you fail to buy auto liability insurance and cause an accident (which is entirely predictable over long periods), then my first guess is that you deserve the impoverishment that comes from the situation.

If you fail to buy uninsured motorist insurance and are in an accident that you don't cause (which is entirely predictable) and the at-fault driver has no insurance and can't pay (which is also entirely predictable), then my first approximation is still pretty good. It is a little off because you could be beset with a string of bad luck.

I think of it the other way around. If I see someone happy and reasonably well off, I am first going to say that they had a hand in it. If I see someone continually unhappy or impoverished (setting aside birth luck), my first guess is also going to be that they are mainly responsible for their own outcomes. Turning it round, they are usually getting what they deserve.

Whether that is better or not depends on more than individual morality, so no, I'm not saying it is better.

Also, the examples seem to have focused on material outcomes, since they are easier to talk about, but I'm also thinking of non-material things. Relationships, self-esteem, etc.

Max L.

comment by polymathwannabe · 2014-09-16T03:14:15.520Z · LW(p) · GW(p)

What ethical theory are you using for your definition of "deserve"?

Replies from: MaximumLiberty
comment by MaximumLiberty · 2014-09-16T04:05:05.581Z · LW(p) · GW(p)

It is a fine question, since the word "deserve" is the link between an observation and a judgment about the person. I don't think I need an answer to it to make the observation that most people here don't hold that view. Which is a good thing, because I don't think I have a satisfactory answer beyond rough moral intuition.

Max L.

comment by spxtr · 2014-09-16T16:46:44.153Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.

Replies from: shminux, Ronak, Larks, Azathoth123, VAuroch, Prismattic, Jiro
comment by Shmi (shminux) · 2014-09-16T20:26:42.722Z · LW(p) · GW(p)

Yes, Yes, No. Still upvoting, because "Scott Alexander" and "uncharitable" in the same sentence does not compute.

Replies from: spxtr
comment by spxtr · 2014-09-16T22:09:45.525Z · LW(p) · GW(p)

I consider him a modern G.K. Chesterton. He's eloquent, intelligent, and wrong.

comment by Ronak · 2014-09-17T02:36:52.576Z · LW(p) · GW(p)

Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because hot debated topic etc I'd appreciate it if you could PM; I won't take you down the 'let's argue feminism' rabbit-hole.)

(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)

Replies from: gattsuru, spxtr
comment by gattsuru · 2014-09-19T16:40:17.480Z · LW(p) · GW(p)

Do you mind telling me how you think he's being uncharitable?

For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men.

I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate their relevant aspects -- I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves.

The problem is that it's not very charitable. Scott's the person that's /come up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of attacking a position by engaging only with its weakest proponents.

But he's looking at a viewpoint shown primarily in the <5% fringe of feminist tumblr, and comparing it to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst.

I'm not sure he /needs/ to be charitable, again -- feminism should have its own internal speakers, I think mainstream modern feminism could use better critics than whoever's on Fox News next, so on -- but it's an understandable criticism.

((Upvoting the thread starter, but more because one and two are mu statements; either closed questions or not meaningful. Weakly agree on third.))

Replies from: Jiro
comment by Jiro · 2014-09-21T05:16:39.337Z · LW(p) · GW(p)

Being 5% of the group doesn't mean they are 5% of the influence. The loudest 5% may get to set the agenda of the remaining 95% if the remaining ones are willing to go along with things they don't particularly care about, but don't oppose enough to make these things deal-breakers either.

Replies from: Azathoth123, army1987
comment by Azathoth123 · 2014-09-21T20:43:06.263Z · LW(p) · GW(p)

It also helps if the 5% have arguments for their positions.

comment by spxtr · 2014-09-17T02:53:45.583Z · LW(p) · GW(p)

Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-19T20:12:16.898Z · LW(p) · GW(p)

Fortunately, LW is not an appropriate forum for argument on this subject

Why? Because people are likely to disagree with you?

comment by Larks · 2014-09-19T00:36:57.020Z · LW(p) · GW(p)

According to the 2013 LW survey, when asked their opinion of feminism on a scale from 1 (low) to 5 (high), the mean response was 3.8, and social justice got a 3.6. So it seems that "feminism is a good thing" is actually not a contrarian view.

If I might speculate for a moment, it might be that LW is less feminist than most places, while still having an overall pro-feminist bias.

Replies from: epursimuove
comment by epursimuove · 2014-09-26T03:00:24.299Z · LW(p) · GW(p)

If by most places you're talking about the world (or Western/American world) in general, that's pretty clearly false. The considerable majority of Americans reject the feminist label, for example. If you're talking about internet communities with well-educated members, then it probably is true.

comment by Azathoth123 · 2014-09-17T01:43:52.156Z · LW(p) · GW(p)

How would you define "privilege"?

Replies from: IlyaShpitser, spxtr
comment by IlyaShpitser · 2014-09-17T01:51:16.583Z · LW(p) · GW(p)

Easier difficulty setting for your life in some context through no fault or merit of your own.

Replies from: Azathoth123, TheAncientGeek
comment by Azathoth123 · 2014-09-17T02:01:33.639Z · LW(p) · GW(p)

So would you describe someone tall as having "height privilege" because they're better at basketball?

Replies from: Prismattic
comment by Prismattic · 2014-09-17T05:38:40.311Z · LW(p) · GW(p)

I'd argue that height privilege (up to a point, typically around 6'6") is a real thing, having nothing to do with being good at sports. There is a noted experiment, which my google-fu is currently failing to turn up, in which participants were shown a video of an interview between a man and a woman. In one group, the man was standing on a footstool behind his podium, so that he appeared markedly taller than the woman. In the other group, the man was standing in a depression behind his podium, so that he appeared shorter. The content of the interview was identical.

Participants rated the man in the "taller" condition as more intelligent and more mature than the same man in the "shorter" condition. That's height privilege.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-09-17T23:50:20.303Z · LW(p) · GW(p)

There's also a large established correlation between height and income, though not enough to completely rule out a potential common cause like "good genes" or childhood nutrition.

comment by TheAncientGeek · 2014-09-17T15:50:49.494Z · LW(p) · GW(p)

You really need riders to the effect that privilege of an objectionable kind is unrelated to achievement or intrinsic abilities.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-18T00:43:03.810Z · LW(p) · GW(p)

The problem is that most of the examples SJW object to are in fact related to achievement or intrinsic abilities.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-18T10:02:13.601Z · LW(p) · GW(p)

Uh huh

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-18T23:06:23.342Z · LW(p) · GW(p)

For example, nearly all of what they call "race privilege" is actually "intelligence privilege" or "conscientiousness privilege".

Replies from: IlyaShpitser, banx
comment by IlyaShpitser · 2014-09-19T23:07:12.088Z · LW(p) · GW(p)

I think you are just blind to these things.

I have highly accomplished female friends who tell me horrible stories. I have highly accomplished friends with black skin who tell me horrible stories.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T23:30:32.767Z · LW(p) · GW(p)

I have highly accomplished female friends who tell me horrible stories. I have highly accomplished friends with black skin who tell me horrible stories.

Can I have some more specifics?

Also note that in the parent I specifically referred to "race privilege", the situation with "female privilege" is more complicated.

comment by banx · 2014-09-19T00:13:07.368Z · LW(p) · GW(p)

So you're claiming that there is no way in which the US police and justice systems treat black people differently that isn't reducible to intelligence or conscientiousness differences?

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T01:30:19.283Z · LW(p) · GW(p)

So you're claiming that there is no way in which the US police and justice systems treat black people differently that isn't reducible to intelligence or conscientiousness differences?

More or less. Remember lower conscientiousness translates into higher propensity to commit violent crime.

There's also a certain amount of what may fairly be called "black privilege" due to the fact that in any highly publicized crime, or alleged crime, with white defendants and a black victim there will be social pressure to throw the book at them regardless of lack of evidence, or mitigating circumstances like the victim beating the defendant's head on the pavement. And conversely if the defendants are black and the victim is white there will be social pressure to go light on the defendants, with some people even arguing that the victim brought the crime upon himself due to his racism. Something similar happens with police shootings. Shootings of blacks are more likely to make the national news and the victim described as an angelic youth even if he had just robbed a convenience store and was going for the officer's gun when he was shot.

Replies from: VAuroch, ChristianKl
comment by VAuroch · 2014-09-19T05:12:35.376Z · LW(p) · GW(p)

All of your claims in this comment are factually incorrect.

Shootings of blacks are more likely to make the national news

Have you ever looked at statistics on shooting deaths? Accepting for the sake of argument that more shootings of black victims may show up on the news in an absolute sense (which I don't believe is actually true), it totally ignores the priors. If a white victim is shot, with high probability that will make the national news; if a black victim is shot, with an extremely high probability it will barely make local news and will receive no national attention. Ferguson wasn't unusual because a young black man was shot; it was unusual that anyone paid any attention. Young black men being shot is far too commonplace to make the news under ordinary circumstances.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T05:57:20.218Z · LW(p) · GW(p)

Have you ever looked at statistics on shooting deaths?

If you look at my previous sentence, you'll see I was referring to shootings by police. I agree, that young black men get shot all the time, mostly by other young black men, and nobody pays attention to that.

Replies from: VAuroch
comment by VAuroch · 2014-09-19T08:37:24.293Z · LW(p) · GW(p)

No, it is true for shooting deaths by police as well. Every time a white person is shot by a policeman, it's national news. When a black person gets shot by police, it's Tuesday.

Replies from: bramflakes, Lumifer
comment by bramflakes · 2014-09-19T15:20:35.258Z · LW(p) · GW(p)

Every time a white person is shot by a policeman, it's national news. When a black person gets shot by police, it's Tuesday.

Are we living in the same universe?

comment by Lumifer · 2014-09-19T15:09:46.076Z · LW(p) · GW(p)

Every time a white person is shot by a policeman, it's national news.

It is not. Police shoot a lot of people and, funnily enough, no one knows exactly how many.

comment by ChristianKl · 2014-09-19T20:18:24.280Z · LW(p) · GW(p)

Are you basically claiming that those black people who test highly on IQ tests don't get discriminated against?

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T23:12:05.795Z · LW(p) · GW(p)

Given things like Affirmative Action and all the pressure to have a "diverse workforce" they're mostly the beneficiaries of discrimination. There aren't many high IQ blacks and there's a lot of demand for them.

comment by spxtr · 2014-09-17T02:41:36.823Z · LW(p) · GW(p)

This is a good definition. In particular, "Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group, who are usually unaware of the privilege they possess. ... A privileged person is not necessarily prejudiced (sexist, racist, etc) as an individual, but may be part of a broader pattern of *-ism even though unaware of it."

No, this is not a motte.

Replies from: shminux, Azathoth123, Prismattic, ChristianKl
comment by Shmi (shminux) · 2014-09-17T17:14:27.295Z · LW(p) · GW(p)

Why the "majority group" qualifier? Privilege has been historically associated with minorities, like aristocracy.

comment by Azathoth123 · 2014-09-17T03:02:05.175Z · LW(p) · GW(p)

Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group

Does it have to be a majority group? For example, does this compared with this count as an example of "black privilege"? Would you describe the fact that some people are smarter (or stronger) than others as "intelligence privilege" (or "strength privilege")?

comment by Prismattic · 2014-09-17T05:33:32.748Z · LW(p) · GW(p)

That's in the bailey, because of "enjoyed by a majority group."

comment by ChristianKl · 2014-09-19T21:03:35.712Z · LW(p) · GW(p)

Why focus only specific majority groups and thereby ignore things like men in domestic violence issues getting a lot less help from society than women?

Nearly everyone has some advantages and disadvantages. It's often not helpful to conflate that huge bag of advantages and disadvantages into a single variable.

comment by VAuroch · 2014-09-17T04:14:11.419Z · LW(p) · GW(p)

Like a few others, I agree with the first two but emphatically disagree with the last. And if you were right about it, I'd expect Ozy to have taken Scott to task about it, and him to have admitted to being somewhat wrong and updated on it.

EDIT: This has, in fact, happened.

Replies from: whales
comment by whales · 2014-09-17T09:20:52.948Z · LW(p) · GW(p)

See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence.

You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).

Replies from: VAuroch, army1987, Princess_Stargirl
comment by VAuroch · 2014-09-17T20:37:59.957Z · LW(p) · GW(p)

OK, those things have indeed happened, to some degree. Above comment corrected.

I still don't understand what is uncharitable about the Wordsx3 post specifically. It accurately describes the behavior of a number of people I know (as in, have met, in person, and interacted with socially, in several cases extensively in a friendly manner), and I have no reason to consider them weak examples of feminist advocacy and every reason to consider them typical (their demographics match the stereotype). I have carefully avoided ending up on the receiving end of it, because friends of mine have honestly challenged aspects of this kind of thing and been ostracized for their trouble.

comment by A1987dM (army1987) · 2014-09-17T18:14:22.688Z · LW(p) · GW(p)

There's something wrong with the first link (I guess you typed the URL on a smartphone autocorrecting keyboard or similar).

EDIT: I think this is the correct link.

Replies from: whales
comment by whales · 2014-09-17T18:27:23.196Z · LW(p) · GW(p)

Yeah, that happened when I edited a different part from my phone. Thanks, fixed.

comment by Princess_Stargirl · 2014-09-18T20:17:58.833Z · LW(p) · GW(p)

Imo this quote from her response is a pretty weak argument:

"The concept of female privilege is, AFAICT, looking at the disadvantages gender-non-conforming men face, noticing that women with similar traits don’t face those disadvantages, and concluding that this is because women are advantaged in society. "

In order for this to be a sensible counterpoint you would need to either say "gender conforming male privilege" or you would need to show that there are few men who mind conforming to gender roles. I don't really see why anyone believes most men are fine with living out standard gender norms and I certainly don't see how anyone has evidence for this.

If a high percentage of men are gender non-conforming and such men are at a disadvantage in society then the concept of male privilege is seriously weakened. And using it is dangerous as it might harm those men to hear that they are "privileged" when this is not the case (at least in terms of gender, maybe they are rich etc).

comment by Prismattic · 2014-09-17T01:31:02.597Z · LW(p) · GW(p)

I agree with claim 1 for some definitions of feminism and not for others. I agree with claim 2. I think that Scott would agree with claim 1 (for some definitions) and with claim 2 as well, so I disagree with claim 3.

comment by Jiro · 2014-09-16T18:45:33.304Z · LW(p) · GW(p)

Can you defend these statements?

Replies from: spxtr
comment by spxtr · 2014-09-16T20:15:27.311Z · LW(p) · GW(p)

I can, but I don't want to fall into that inferential canyon.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-09-20T17:35:14.817Z · LW(p) · GW(p)

I think that if you actually can defend them, it might be worth it to go through the canyon. Inferential canyons are a lot easier to cross when your targets are aware of their existence and are willing and able to discuss responsibly.

("worth it" is of course relative to other ways you discuss with strangers on the internet}

comment by scientism · 2014-09-16T20:35:45.998Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Superintelligence is an incoherent concept. Intelligence explosion isn't possible.

Replies from: D_Malik
comment by D_Malik · 2014-09-16T21:16:11.979Z · LW(p) · GW(p)

How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.

What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".

As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.

Replies from: FiftyTwo, Lalartu
comment by FiftyTwo · 2014-09-21T00:34:08.692Z · LW(p) · GW(p)

How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.

Possibly the concept of intelligence as something that can increase in a linear fashion is in itself incoherent.

comment by Lalartu · 2014-09-20T14:58:44.185Z · LW(p) · GW(p)

Well, I will predict this

would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer

Very bored Von Neumann.

if we selectively bred humans for intelligence for a couple million years

People that are very good at solving tests which you use to measure intelligence.

comment by moridinamael · 2014-09-16T16:43:53.541Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Buying a lottery ticket every now and then is not irrational. Unless you have thoroughly optimized the conversion of every dollar you own into utility-yielding investments and expenses, the exposure to large positive tail risk netted by spending a few dollars on lottery tickets can still be rational.

Phrased another way, when you buy a lottery ticket you aren't buying an investment, you're buying a possibility that is not available otherwise.

Replies from: Elo, DanielLC, Prismattic, shminux, Apoplast
comment by Elo · 2014-09-16T23:09:46.347Z · LW(p) · GW(p)

disagree because the cost of the possibility is too high.

comment by DanielLC · 2014-09-18T02:49:46.158Z · LW(p) · GW(p)

If one lottery ticket is worth while, why not two? Are you assigning a nonlinear value to the probability of winning the lottery? That causes a number of problems.

Replies from: moridinamael
comment by moridinamael · 2014-09-18T13:42:00.371Z · LW(p) · GW(p)

At the risk of looking even more like an idiot: Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. The Powerball gets as high as $590,500,000 pretax. NOT buying that one ticket gives you a chance of zero. So buying one ticket is "infinitely" better than buying no tickets. Buying more than one ticket, by comparison, makes little further difference.

I like to play with the following scenario. A LessWrong reader buys a lottery ticket. They almost certainly don't win. They have one dollar less to donate to MIRI and because they're not wealthy they may not have enough wealth to psychologically justify donating anything to MIRI anyway. However, in at least one worldline, somewhere, they win a half a billion dollars and maybe donate $100,000,000 to MIRI. So from a global humanity perspective, buying that lottery ticket made the difference between getting FAI built and not getting it built. The one dollar spent on the ticket, in comparison, would have had a totally negligible impact.

I fully realize that the number of universes (or whatever) where the LessWrong reader wins the lottery is so small that they would be "better off" keeping their dollar according to basic economics, but the marginal utility of one extra dollar is basically zero.

edit: Digging myself in even deeper, let me attempt to simplify the argument.

You want to buy a Widget. The difference in net utility, to you, between owning a Widget and not owning a Widget is 3^3^3^3 utilons. Widgets cost $100,000,000. You have no realistic means of getting $100,000,000 through your own efforts because you are stuck in a corporate drone job and you have lots of bills and a family relying on you. So the only way you have of ever getting a Widget is by spending negligible amounts of money buying "bad" investments like lottery tickets. It is trivial to show that buying a lottery ticket is rational in this scenario: (Tiny chance) x (Absurdly, unquantifiably vast utility) > (Certain chance) x ($1).

Replace Widget with FAI and the argument may feel more plausible.
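Here is a minimal sketch of the inequality being claimed, in Python, with stand-in numbers (widget_utility and utility_per_dollar are hypothetical values chosen for illustration, not figures from the comment):

```python
# Hypothetical expected-utility comparison: buy one ticket vs. keep the dollar.
p_win = 1 / 175_000_000       # Powerball-style odds for a single $1 ticket
widget_utility = 3**3**3      # stand-in for the "absurdly vast" utility of the Widget/FAI outcome
utility_per_dollar = 1.0      # assumed marginal utility of the dollar kept instead

ev_buy = p_win * widget_utility - utility_per_dollar
ev_skip = 0.0

print(ev_buy > ev_skip)  # True whenever p_win * widget_utility exceeds the utility of one dollar
```

Whether the conclusion follows depends entirely on those stand-in numbers, which is exactly where the replies below push back.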

Replies from: DanielLC, Lumifer, warbo
comment by DanielLC · 2014-09-18T16:34:09.841Z · LW(p) · GW(p)

So buying one ticket is "infinitely" better than buying no tickets.

So your utility function is nonlinear with respect to probability. You don't use expected utility. It results in certain inconsistencies. This is discussed in the article on the Allais paradox, but I'll give a lottery example here.

Suppose I offer you a choice between paying one dollar for a one in a million chance of winning $500,000, and paying two dollars for a one in a million chance of winning $500,001 plus a slightly smaller chance of winning $500,000. You figure that what's basically a 0.0002% chance of winning vs. a 0.0001% chance isn't worth paying another dollar for, so you just pay the one dollar.

On the other hand, suppose I only offer you the first option, but, once you see if you've won, you get another chance. If you win, you don't really want another lottery ticket, since it's not a big deal anymore. So you buy a ticket, and if you lose, you buy another ticket. This results in a 0.0001% chance of ending up with $499,999, a roughly 0.0001% chance of ending up with $499,998, and a 99.9998% chance of ending up with -$2. This is exactly the same set of probabilities as you had for the second option before.
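Here is a small check, using the hypothetical numbers above, that the two framings produce the same distribution over final outcomes:

```python
p = 1e-6  # one-in-a-million chance per $1 ticket

# Sequential strategy: buy one ticket, and buy a second only if the first loses.
sequential = {
    499_999: p,                # win the first ticket, stop
    499_998: (1 - p) * p,      # lose the first, win the second
    -2: (1 - p) * (1 - p),     # lose both
}

# The two-dollar bundle offered up front: one chance at $500,001 and a slightly
# smaller chance at $500,000, minus the $2 paid either way.
bundle = {
    500_001 - 2: p,
    500_000 - 2: (1 - p) * p,
    -2: (1 - p) * (1 - p),
}

print(sequential == bundle)  # True: identical outcomes with identical probabilities
```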

The one dollar spent on the ticket, in comparison, would have had a totally negligible impact.

No it would not. Or at least, it's highly unlikely for you to know that.

Suppose MIRI has their probability of success increased by 50 percentage points if they get a 100 million dollar donation. This means that, if 100 million people all donate a dollar, their probability of success goes up by 50 percentage points. Each successive one will change the probability by a different amount, but on average, each donation will increase the chance of success by one in 200 million. Furthermore, it's expected that the earlier donations would make a bigger difference, due to the law of diminishing returns. This means that donating one dollar improves MIRI's probability of success by more than one in 200 million, and is therefore better than getting a one in 100 million chance of donating 100 million dollars.

Even if MIRI does end up needing a minimum amount of money or something and becomes an exception to the law of diminishing returns, they know more about their financial situation, and since they're dealing with large amounts of money all at once, they can be more efficient about it. They can make a bet precisely tailored to their interests and with odds that are more fair.
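A rough sketch of that arithmetic, using the hypothetical numbers from this comment (the 50-percentage-point figure and the 1-in-100-million lottery odds are the assumptions made above, not real data):

```python
donation_needed = 100_000_000   # assumed: $100M raises the success probability...
prob_gain_total = 0.50          # ...by 50 percentage points

# Average effect of one directly donated dollar (with diminishing returns, the
# earliest dollars are worth at least this much).
direct_dollar = prob_gain_total / donation_needed        # 5e-9, i.e. one in 200 million

# Effect of spending that dollar on a 1-in-100-million chance of donating $100M.
lottery_dollar = (1 / 100_000_000) * prob_gain_total     # also 5e-9

print(direct_dollar >= lottery_dollar)  # True; strictly greater once diminishing returns kick in
```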

comment by Lumifer · 2014-09-18T15:24:56.026Z · LW(p) · GW(p)

So buying one ticket is "infinitely" better than buying no tickets.

You're looking at the (potential) benefits and ignoring the costs. The costs are not negligible: "Thirteen percent of US citizens play the lottery every week. The average household spends around $540 annually on lotteries and poor households spend considerably more than the average." (source).

Buying more than one ticket, comparably, doesn't make a difference.

Buying a second ticket doubles your chances, obviously.

A LessWrong reader buys a lottery ticket ... in at least one worldline, somewhere, they win a half a billion dollars

For each timeline where you buy a lottery ticket there is one where you don't. Under MWI you don't make any choices -- you choose everything, always.

the marginal utility of one extra dollar is basically zero

You've never been poor, have you? :-/

It is trivial to show that buying a lottery ticket is rational in this scenario

It is just as trivial to show that you should spend all your disposable income and maybe more on lottery tickets in this scenario.

Replies from: moridinamael
comment by moridinamael · 2014-09-18T16:09:33.904Z · LW(p) · GW(p)

You're looking at the (potential) benefits and ignoring the costs. The costs are not negligible: "Thirteen percent of US citizens play the lottery every week. The average household spends around $540 annually on lotteries and poor households spend considerably more than the average." (source).

I'm only commenting on the rationality of one individual buying one ticket, not the ethics of the existence of lotteries.

Buying a second ticket doubles your chances, obviously.

Buying one ticket takes you from zero to one, buying two tickets takes you from one to two. 1/0 = infinity, 2/1 = 2. Buying anything more than 1 ticket has sharply diminishing utility. I realize this is a somewhat silly line of argument, so I'm not going to sink any more energy defending it.

For each timeline where you buy a lottery ticket there is one where you don't. Under MWI you don't make any choices -- you choose everything, always.

I don't think we understand each other on this point. I was referring not to choosing, just winning. And the measure of the winning universes is a tiny fraction of all universes. But that doesn't matter when the utility of winning is sufficiently large. And the chance of a given individual buying a ticket isn't 50% in any meaningful quantum-mechanical sense, so "For each timeline where you buy a lottery ticket there is one where you don't" isn't true.

You've never been poor, have you? :-/

No, and I wouldn't recommend that a poor person buy lottery tickets. My original claim was that buying lottery tickets can be rational, not that it is rational in the general case.

It is just as trivial to show that you should spend all your disposable income and maybe more on lottery tickets in this scenario.

That's true. People also say that you should donate all your disposable income to MIRI, or to efficient charities, for exactly the same reasons, and I don't do those things for the same reason that I don't spend all my money on lottery tickets - I'm a human. My line of argument only applies when you want a Widget and have no other way of affording it.

I don't really feel strongly enough about this to continue defending it, it's just that I'm quite sure I'm right in the details of my argument and would welcome an argument that actually changes my mind / convinces me I'm wrong.

Replies from: Lumifer
comment by Lumifer · 2014-09-18T16:47:01.638Z · LW(p) · GW(p)

I treat buying lottery tickets as buying a license to daydream. Once you realize you don't need a license for that... :-)

comment by warbo · 2014-09-22T11:57:14.066Z · LW(p) · GW(p)

Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. NOT buying that one ticket gives you a chance of zero.

There are ways to win a lottery without buying a ticket. For example, someone may buy you a ticket as a present, without your knowledge, which then wins.

So buying one ticket is "infinitely" better than buying no tickets.

No, it is much more likely that you'll win the lottery by buying tickets than by not buying tickets (assuming it's unlikely to be gifted a ticket), but the cost of being gifted a ticket is zero, which makes not buying tickets an "infinitely" better return on investment.

comment by Prismattic · 2014-09-17T01:37:00.481Z · LW(p) · GW(p)

I agree with the first sentence, but I'm not sure if our reasoning is the same. Here's mine: If humans were perfectly rational overall, buying a lottery ticket would never make sense. But we aren't. I think it's rational to buy a lottery ticket, say, every six months, and then not check if it's a winner for the six months. Just as humans seem to enjoy the anticipation of an upcoming vacation more than the actual vacation, the human brain can get utility from the hope that the ticket might be a winner, and 6 months of an (irrational, but so what?) hope far outweigh the one day of disappointment and one dollar lost when you check the ticket and it hasn't won.

Replies from: moridinamael
comment by moridinamael · 2014-09-18T14:20:12.217Z · LW(p) · GW(p)

I totally agree with this reasoning, but I don't think "fun" is the only good reason to buy a ticket.

comment by Shmi (shminux) · 2014-09-23T20:50:01.112Z · LW(p) · GW(p)

I'd agree that if a lottery ticket and a chocolate bar give you the same hedons and no other options are available, you are better off buying a piece of paper than an unhealthy snack.

comment by Apoplast · 2014-09-22T20:01:09.473Z · LW(p) · GW(p)

I'll put this here because I wish to provide a different perspective without getting bogged down in probabilistic thinking. To say buying a lottery ticket is irrational might be correct if you consider winning the lottery or not to be the only real outcome of such a purchase. The fact is, however, that buying a lottery ticket provides entertainment. You pay a relatively small sum of money to play, until results day, with the fantasy of receiving an enormous amount of money. Entertainment is utility, as far as I'm concerned. Obviously the more money you spend on the lottery, the less justifiable this entertainment is, because as others have pointed out, buying a small number of more tickets doesn't appreciably change the probability of winning. Just one ticket, however, and you're one stroke of luck away from enormous wealth; the enjoyment of knowing that alone can be worth a dollar.

comment by lmm · 2014-09-15T21:16:54.628Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The dangers of UFAI are minimal.

Replies from: DanielLC, army1987, None
comment by DanielLC · 2014-09-15T23:16:43.147Z · LW(p) · GW(p)

Do you think that it is unlikely for a UFAI to be created, that if a UFAI is created it will not be dangerous, or both?

Replies from: lmm
comment by lmm · 2014-09-16T12:11:35.403Z · LW(p) · GW(p)

I think humans will become sufficiently powerful that UFAI does not represent a threat to them before creating UFAI.

comment by A1987dM (army1987) · 2014-09-17T16:27:33.036Z · LW(p) · GW(p)

“Dangers” being defined as probability times disutility, right?

Replies from: lmm
comment by lmm · 2014-09-17T23:27:23.160Z · LW(p) · GW(p)

With the caveat that I'm treating unbounded negative utility as invalid, sure.

comment by [deleted] · 2014-09-15T21:50:28.464Z · LW(p) · GW(p)

Please do elaborate!

comment by jsteinhardt · 2014-09-15T17:37:34.336Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

For many smart people, academia is one of the highest-value careers they could pursue.

Replies from: gjm, bramflakes
comment by gjm · 2014-09-15T20:00:39.406Z · LW(p) · GW(p)

Clarify "many"?

Replies from: jsteinhardt
comment by jsteinhardt · 2014-09-16T01:59:21.822Z · LW(p) · GW(p)

~30% maybe?

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-17T16:32:58.849Z · LW(p) · GW(p)

What about “smart people”? IQ > 100? IQ > 115? IQ > 130? IQ > 145?

Replies from: jsteinhardt
comment by jsteinhardt · 2014-09-17T18:24:11.146Z · LW(p) · GW(p)

Let's say IQ 145 or higher?

ETA: Although I would push things like conscientiousness into the picture as well if I were trying to be more precise; but for the sake of not writing an essay I'm happy to stick with an IQ cutoff.

comment by bramflakes · 2014-09-15T20:05:18.489Z · LW(p) · GW(p)

Highest value for the person, for society, or both?

Also, by "high value" do you mean purely monetary or do you mean other benefits?

Replies from: jsteinhardt
comment by jsteinhardt · 2014-09-16T01:57:20.069Z · LW(p) · GW(p)

Society. For the second question, not quite sure what it would mean to provide monetary value to society, since money is how people trade for things within society rather than some extrinsic good.

Replies from: atorm, None
comment by atorm · 2014-09-25T00:15:31.783Z · LW(p) · GW(p)

It sure isn't great for the smart people.

comment by [deleted] · 2014-09-26T01:29:03.027Z · LW(p) · GW(p)

Yes, I think that's pretty trivially true. Academia functions monastically: the academic accepts relatively worse material income in order to have the opportunity to donate large sums of value to society.

comment by buybuydandavis · 2014-09-16T07:51:43.112Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Utilitarianism is a moral abomination.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T14:49:31.015Z · LW(p) · GW(p)

I am very interested in this.

  • Exactly what is repugnant about utilitarianism? (Moi, I find that it leads to favoring torture over 3^^^3 specks, which is beyond facepalming; I'd like to hear your view.)

  • I guess the moral assumptions based on which you condemn utilitarianism are the same ones you would propose instead. What moral theory do you espouse?

Replies from: buybuydandavis, pianoforte611
comment by buybuydandavis · 2014-09-19T08:19:34.902Z · LW(p) · GW(p)

Exactly what is repugnant about utilitarianism?

It's inhuman, totalitarian slavery.

Islam and Christianity are big on slavery, but it's mainly a finite list of do's and don'ts from a Celestial Psychopath. Obey those, and you can go to a movie. Take a nap. The subjugation is grotesque, but it has an end, at least in this life.

Not so with utilitarianism. The world is a big machine that produces utility, and your job is to be a cog in that machine. Your utility is 1 seven billionth of the equation - which rounds to zero. It is your duty in life to chug and chug and chug like a good little cog without any preferential treatment from you, for you or anyone else you actually care about, all through your days without let.

And that's only if you don't better serve the Great Utilonizer ground into a human paste to fuel the machine.

A cog, or fuel. Toil without relent, or harvest my organs? Which is less of a horror?

Of course, some others don't get much better consideration. They, too, are potential inputs to the great utility machine. Chew up this guy here, spit out 3 utilons. A net increase in utilons! Fire up the woodchipper!

But at least one can argue that there is a net increase of utilons. Somebody benefited. And whatever your revulsion at torture to avoid dust specks, hey, the utilon calculator says it's a net plus, summed over the people involved.

No, what I object to is having a believer who reduces himself to less than a slave, to raw materials for an industrial process, held up as a moral ideal. It strikes me as even more grotesque and more totalitarian than the slavery lauded by the monotheisms.

Replies from: gjm, atorm, polymathwannabe
comment by gjm · 2014-09-19T17:10:13.279Z · LW(p) · GW(p)

I disagree, but my reasons are a little intricate. I apologize, therefore, for the length of what follows.

There are at least three sorts of questions you might want to use a moral system to answer. (1) "Which possible world is better?", (2) "Which possible action is better?", (3) "Which kind of person is better?". Many moral systems take one of these as fundamental (#1 for consequentialist systems, #2 for deontological systems, #3 for virtue ethics) but in practice you are going to be interested in answers to all of them, and the actual choices you need to make are between actions, not between possible worlds or characters.

Suppose you have a system for answering question 1, and on a given occasion you need to decide what to do. One way to do this is by choosing the action that produces the best possible world (making whatever assumptions about the future you need to), but it isn't the only way. There is no inconsistency in saying "Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I'm going to do Y instead"; that just means that you care about other things besides morality. Which pretty much everyone does.

(The same actually applies to systems that handle question 2 more directly. There is no inconsistency in saying "The gods have commanded that we do X, but I am going to do Y instead because it's easier". Though there might be danger in it, if the gods are real.)

Many moral systems have the property that if you follow them and care about nothing but morality then your life ends up entirely governed by that system, and your own welfare ends up getting (by everyday standards) badly neglected. If this is a problem, it is a problem with caring about nothing but morality, not a problem with utilitarianism or (some sorts of) divine command theory or whatever.

A moral system can explicitly allow for this; e.g., a rule-based system that tells you what you may and must do can simply leave a lot of actions neither forbidden nor compulsory, or can command you to take some care of your own welfare. A consequentialist system can't do this directly -- what sort of world is better shouldn't depend on who's asking, so if you decide your actions solely by asking "what world is best?" you can't make special allowances for your own interest -- but so what? You can take utilitarianism as your source of answers to moral questions, and then explicitly trade off moral considerations against your own interests in whatever way you please. (And utilitarianism doesn't tell you you mustn't. It only tells you that if you do that you will end up with a less-than-optimal world, but you knew that already.)

A utilitarian doesn't have to see their job as being a cog in the Great Utility Machine of the world. They can see their job however they please. All that being a utilitarian means is that when they come to ask a moral question, looking at the consequences and comparing utility is how they do it. Whether they then go ahead and maximize utility is a separate matter.

So, how should a utilitarian look at someone who cares about nothing but (utilitarian) morality -- as a "moral ideal" or a grotesquely subjugated slave or what? That's up to them, and utilitarianism doesn't answer the question. (In particular, I'm not aware of any reason to think that considering such a person a "moral ideal" is a necessary part of maximizing utility.) It might, I suppose, be nice to have a moral system with the property that a life that's best-according-to-that-system is attractive and nice to think about; but it would also be nice to have a physical theory with the property that if it's true then we all get to live happily for ever, and a metaphysics with the property that it confirms all our intuitions about the universe; and, in each case, we can wish, but adopting those theories on that basis probably won't work out well. Likewise, I suggest, for morality.

As for your rhetoric about machines and industrial processes: I don't think "large-scale" is at all the same thing as "industrial". Imagine, if you will, someone who would be admired by the Buddhist or Christian moral traditions, who is filled with love and compassion for everyone s/he sees and works hard to make their lives better even at great personal cost. Now expand this person's awareness and compassion to encompass everyone in the world. What you get is pretty close to the "grotesquely subjugated" utilitarian saint, but there's nothing machine-like or industrial about them: they do what they do out of an intensely personal awareness of everyone's welfare or suffering. Their life might still be subjugated or grotesque, but that has nothing to do with industrial machinery.

You might want to protest that I'm cheating: that it's wrong to call someone a utilitarian if they consider anything other than utility when making decisions. I think this would be a bit like some theists' insistence that no one can properly be called an "atheist" if they admit that slightest smidgeon of doubt about the existence of deities. And I respond in roughly the same way in this case as in the other: You may use the words however you please, but if you restrict the word "utilitarian" to those who are completely singleminded about morality, you end up with hardly anyone coming under that description, and for consistency you should do the same for every other moral system out there, and you end up having a single big bucket of not-completely-singleminded people into which just about everyone goes. Isn't it better to classify people in a way that better matches the actual distribution of beliefs and attitudes, and say that someone is a utilitarian if they answer "what's morally better?" questions by some kind of consideration of overall utility?

Replies from: buybuydandavis
comment by buybuydandavis · 2014-09-20T08:52:15.833Z · LW(p) · GW(p)

Lots to comment on here. That last paragraph certainly merits some comment.

Yes, most people are almost entirely inconsistent about the morality they profess to believe. At least in the "civilized world". I get the impression of more widespread fervent and sincere beliefs in the less civilized world.

Do Christians in the US really believe all their rather wacky professions of faith? Or even the most tame, basic professions of faith? Very very few, I think. There are Christians who really believe, and I tend to like them, despite the wackiness. Honest, consistent, earnest people appeal to me.

For the great mass, I increasingly think they just make talking noises appropriate to their tribe. It's not that they lie, it's more that correspondence to reality is so far down the list of motivations, or even evaluations, that it's not relevant to the noises that come from their mouths.

It's the great mass of people who seem to instinctively say whatever is socially advantageous in their tribe that give me the heebie jeebies. They are completely alien - which, given the relative numbers, means I am totally alien. A stranger in a strange land.

Isn't it better to classify people in a way that better matches the actual distribution of beliefs and attitudes

Yes.

and say that someone is a utilitarian if they answer "what's morally better?" questions by some kind of consideration of overall utility?

That's what the tribesmen do, for the purposes of tribesmen.

For the purposes of judging an ideology, which I had done, my judgment is based on what it would mean for people to actually adhere to the ideology, and not just make noises that they believe it.

For a number of purposes, knowing who has allegiance to what tribe matters. I don't find the utilitarian tribe here morally abominable, but I do think preaching the faith they do is harmful, and I wish they'd knock it off, as I wish people in general would stop preaching all the various obscenities that they preach.

Then again, what does a Martian know about what is harmful for Earthlings?

Other issues.

"Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I'm going to do Y instead"

Not utilitarianism. In utilitarianism, your happiness and welfare counts 1 seven billionth - that's not even a rounding error, it's undetectable.

Imagine, if you will, someone who would by admired by the Buddhist or Christian moral traditions, who is filled with love and compassion for everyone s/he sees and works hard to make their lives better even at great personal cost.

I've always found statements like this tremendously contradictory.

If he's really so filled with love for other people, why is helping them "a great personal cost", and not a great personal benefit? Me, I enjoy being useful, particularly to people I care about. Helping them is an opportunity, not a cost.

There is no inconsistency in saying "The gods have commanded that we do X, but I am going to do Y instead because it's easier".

What there is, for a supposed believer, is disobedience and sin. You seem tremendously cavalier about violating your professed moral code. Which, given your code, is probably a good thing, though my preference is for people to profess a decent faith that they actually follow, rather than an abomination that they don't.

Replies from: gjm
comment by gjm · 2014-09-20T23:09:39.509Z · LW(p) · GW(p)

Not utilitarianism.

I'm repeating myself here, but: I think you are mixing up two things: utilitarianism versus other systems, and singleminded caring about nothing but morality versus not. It is the latter that generates attitudes and behaviour and outcomes that you find so horrible, not the former.

You are of course at liberty to say that the term "utilitarian" should only be applied to a person who not only holds that the way to answer moral questions is by something like comparison of net utility, but also acts consistently and singlemindedly to maximize net utility as they conceive it. The consequence, of course, will be that in your view there are no utilitarians and that anyone who identifies as a utilitarian is a hypocrite. Personally, I find that just as unhelpful a use of language as some theists' insistence that "atheist" can only mean someone who is absolutely 100% certain, without the tiniest room for doubt, that there is no god. It feels like a tactical definition whose main purpose is to put other people in the wrong even before any substantive discussion of their opinions and actions begins.

why is helping them "a great personal cost", and not a great personal benefit?

It's both. (Just as a literal purchase may be both at great cost, and of great benefit.) Which is one reason why, if this person -- or someone who feels and acts similarly on the basis of utilitarian rather than religious ethics -- acts in this way because they genuinely think it's the best thing to do, then I don't think it's appropriate to complain about how grotesquely subjugated they are.

given your code

What do you believe my code to be, and why?

comment by atorm · 2014-09-24T23:41:06.299Z · LW(p) · GW(p)

Seconding the question "What moral theory do you espouse?"

comment by polymathwannabe · 2014-09-19T12:53:40.558Z · LW(p) · GW(p)

That was beautiful.

comment by pianoforte611 · 2014-09-18T21:38:00.551Z · LW(p) · GW(p)

Under utilitarianism, human farming for research purposes and organ harvesting would be justified if it benefited enough future persons.

Under utilitarianism the ideal life is one spent barely subsisting while giving away all material wealth to effective altruism/charity. (reason being - unless you are barely subsisting, there is someone who would benefit from your wealth more than you).

Also there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote, or bid money but neither of these results in a satisfactory ethical theory.

Replies from: D_Malik
comment by D_Malik · 2014-09-20T04:48:07.943Z · LW(p) · GW(p)

Also there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote, or bid money but neither of these results in a satisfactory ethical theory.

Perhaps not with utility theory's usual definition of "prefer", but in practice there is a commonsense way in which I can prefer A more than you prefer B, since we're both humans with almost identical brain architecture.

Replies from: pianoforte611
comment by pianoforte611 · 2014-09-20T13:21:55.884Z · LW(p) · GW(p)

Interesting, so your utilitarianism depends on agents having similar minds; it doesn't try to be a universal ethical theory for sapient beings.

What exactly is that way in which you can prefer something more than I can? It is not common sense to me, unless you are talking about hedonic utilitarianism. Are you using intensity of desire or intensity of satisfaction as a criterion? Neither one seems satisfactory. People's preferences do not always (or even mostly) align with either. I suppose what I'm asking is for you to provide a systematic way of comparing interpersonal utility.

Replies from: D_Malik
comment by D_Malik · 2014-09-20T22:05:14.398Z · LW(p) · GW(p)

If I say "I prefer not to be tortured more than you prefer a popsicle", any sane human would agree. This is the commonsense way in which utility can be compared between humans. Of course, it isn't perfect, but we could easily imagine ways to make it better, say by running some regression algorithms on brain-scans of humans desiring popsicles and humans desiring not-to-be-tortured, and extrapolating to other human minds. (That would still be imperfect, but we can make it arbitrarily good.)

This isn't just necessary if you're a utilitarian, it's necessary if your moral system in any way involves tradeoffs between humans' preferences, i.e. it's necessary for pretty much every human who's ever lived.

Replies from: pianoforte611, Slider
comment by pianoforte611 · 2014-09-21T15:05:57.212Z · LW(p) · GW(p)

So you are a hedonic utilitarian? You think that morality can be reduced to intensity of desire? I already pointed out that human preferences do not reduce to intensity of desire.

Replies from: D_Malik
comment by D_Malik · 2014-09-23T06:17:15.871Z · LW(p) · GW(p)

I'm not any sort of utilitarian, and that has nothing to do with my point, which is that there obviously is a sense in which I can prefer A more than you prefer B.

comment by Slider · 2014-09-22T19:55:22.010Z · LW(p) · GW(p)

That's more like being conditional on our cooperating. If my enemy said that, I could find it offensive, and it doesn't compel me to change my actions. If you try to use utilitarian theory to (en)force cooperation, the argument doesn't go through.

comment by [deleted] · 2014-09-15T16:15:09.472Z · LW(p) · GW(p)

AI boxing will work.

EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.

Replies from: Jayson_Virissimo, D_Malik
comment by Jayson_Virissimo · 2014-09-15T18:26:28.479Z · LW(p) · GW(p)

"Can" is a very weak claim. With what probability will it work?

comment by D_Malik · 2014-09-15T18:44:32.469Z · LW(p) · GW(p)

It seems unlikely that the first people to build fooming AGI will box it sufficiently thoughtfully.

I think it's likely to work if implemented very carefully by the first people to build AGI. For instance, if they were careful, a team of 100 people could manually watch everything the AI thinks, stopping its execution after every step and spending a year poring over its thoughts. With lots of fail-safes, with people assigned to watch researchers in case they try anything, with several nested layers of people watching so that if the AI infects an inner layer of people, an outer layer can just pull a lever and kill them all, etc. And with the AI inside several layers of simulated realities, so that if it does bad things in an inner layer we just kill it, and so on. Plus a thousand other precautions that we can think up if we have a couple centuries. Basically, there are asymmetries such that a little bit of human effort can make it astronomically more difficult for an AI to escape. But it seems likely that we won't take advantage of all these asymmetries, especially if e.g. there's something like an arms race.

(See also this, which details several ways to box AIs.)

Replies from: None
comment by [deleted] · 2014-09-15T21:43:51.828Z · LW(p) · GW(p)

Seems like an ad hominem attack. Why wouldn't the people working on this be aware of the issues? My contrarian point is that people concerned about FAI should be working on AI boxing instead.

comment by Kaninchen · 2014-09-19T16:13:34.300Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

It would be of significant advantage to the world if most people started living on houseboats.

Replies from: wassname, None, DanielLC
comment by [deleted] · 2014-09-21T02:55:36.565Z · LW(p) · GW(p)

Waste management?

comment by DanielLC · 2014-09-20T21:11:27.456Z · LW(p) · GW(p)

Is there even enough coast for that?

If people didn't live in cities, they'd have to commute more. There would be a large increase in transportation costs.

Replies from: Kaninchen
comment by Kaninchen · 2014-09-21T13:38:11.213Z · LW(p) · GW(p)

Where I live there is an abundance of canals. "Most people" is perhaps an exaggeration, but the main points in defence of increased houseboating would be:

(1) a house is a large, expensive, immobile and illiquid asset. A houseboat is rather less expensive, which frees up capital for other purposes.
(2) the internet makes it less necessary for most people to live in cities.
(3) there would be lower costs associated with moving between different areas.

Replies from: DanielLC, polymathwannabe, ChristianKl, Lumifer, Richard_Kennaway, army1987
comment by DanielLC · 2014-09-21T16:52:59.637Z · LW(p) · GW(p)

I find it difficult to believe that houseboats are inherently less expensive. It seems more likely that there's some reason houseboats cannot be made as large and expensive as regular houses, so the average houseboat is much cheaper than the average house, even if it's more expensive than a house of the same quality.

The internet gets much more difficult if you don't live in cities. While it mitigates the costs of people not living near each other, it does not remove them. There are still lots of people putting large amounts of time into physically commuting.

Why not use mobile homes? They can't be stacked in three dimensions like apartments, but at least you can put them in two-dimensional grids.

Replies from: epursimuove, Kaninchen
comment by epursimuove · 2014-09-26T03:10:58.568Z · LW(p) · GW(p)

There certainly are houseboats much larger and more expensive than regular houses.

Replies from: DanielLC
comment by DanielLC · 2014-09-26T04:28:25.651Z · LW(p) · GW(p)

Your link is broken. I'm not sure the proper way to fix it, but it's hard to have links to pages with end parentheses in them.

Replies from: epursimuove
comment by epursimuove · 2014-09-26T04:55:04.745Z · LW(p) · GW(p)

Whoops. Fixed.

comment by Kaninchen · 2014-09-21T20:09:26.449Z · LW(p) · GW(p)

Motor homes might well make more sense for this. The reason I came to this view is that I like canals and so houseboating seemed like a pleasant idea; at around the same time, I read this NY Times piece suggesting that home ownership is not necessarily a good thing. Houseboating seemed like a way of dealing with that; motorhomes simply didn't occur to me as a (probably better) alternative.

comment by polymathwannabe · 2014-10-06T15:31:24.603Z · LW(p) · GW(p)

(2) the internet makes it less necessary for most people to live in cities.

Your mileage may vary. Getting internet made me yearn to move to a larger city where I could meet more interesting people and do more interesting stuff---which in the end I did.

comment by ChristianKl · 2014-09-22T11:17:26.473Z · LW(p) · GW(p)

If you don't want high moving costs, you can simply rent a flat.

comment by Lumifer · 2014-09-21T20:59:27.951Z · LW(p) · GW(p)

A houseboat is rather less expensive

I am pretty sure that out of two equivalent houses the one which floats would be noticeably more expensive, and more expensive to maintain, too. Houseboats are typically less expensive than houses because they are smaller and less convenient.

comment by Richard_Kennaway · 2014-09-22T14:25:32.236Z · LW(p) · GW(p)

Where I live there is an abundance of canals.

Sounds like a Dutch city.

(2) the internet makes it less necessary for most people to live in cities.

But, it seems, no less desired. See e.g. LW meetups.

comment by A1987dM (army1987) · 2014-09-21T19:38:00.426Z · LW(p) · GW(p)

A houseboat is rather less expensive, which frees up capital for other purposes.

Aren't RVs even cheaper?

Replies from: Lumifer, Kaninchen
comment by Lumifer · 2014-09-21T21:02:44.167Z · LW(p) · GW(p)

Aren't RVs even cheaper?

And shacks made out of plywood and corrugated iron are cheaper still.

comment by Kaninchen · 2014-09-21T20:03:45.521Z · LW(p) · GW(p)

Indeed. I would in principle be willing to apply a similar argument to RVs, but (since living in an RV holds no aesthetic appeal for me, whereas houseboating does) I am rather less aware of what the logistics would be like.

comment by Kaninchen · 2014-09-19T16:11:15.423Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

There probably exists - or has existed at some time in the past - at least one entity best described as a deity.

Replies from: FiftyTwo
comment by FiftyTwo · 2014-09-21T00:53:50.187Z · LW(p) · GW(p)

Define deity?

comment by D_Malik · 2014-09-15T18:28:32.823Z · LW(p) · GW(p)

Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se, they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.

Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).

If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.

(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)

Replies from: VAuroch, DanielLC, army1987
comment by VAuroch · 2014-09-17T04:09:06.431Z · LW(p) · GW(p)

While I mostly agree, trying to devise political systems that would encourage a smarter populace (ex. SSC's Graduation Speech with the guaranteed universal income and abolishing public schools) seems like a potentially worthwhile enterprise.

comment by DanielLC · 2014-09-15T23:00:50.600Z · LW(p) · GW(p)

I agree that forming political beliefs is not a productive use of my time in the same way that earning a salary to donate to SCI to cure people of parasites is. I disagree that this makes it silly. The reasons you gave may not be the most noble of reasons, but they are still perfectly valid.

comment by A1987dM (army1987) · 2014-09-17T16:43:56.459Z · LW(p) · GW(p)

Twelve people disagree with this? I'm surprised. I was going to downvote for 'not in the spirit of the game, obviously not a contrarian view', but I guess I was a victim of the typical mind fallacy.

comment by moridinamael · 2014-09-16T16:47:13.998Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.

Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.

Replies from: RomeoStevens, FiftyTwo
comment by RomeoStevens · 2014-09-16T18:47:30.039Z · LW(p) · GW(p)

It isn't very hard to do a little digging here. http://en.wikipedia.org/wiki/Electricity_generation#mediaviewer/File:Annual_electricity_net_generation_in_the_world.svg

China's aggressive nuclear strategy seems reasonable.

Replies from: moridinamael
comment by moridinamael · 2014-09-16T20:20:47.336Z · LW(p) · GW(p)

Not exactly sure what you mean by "digging." I already comprehend the quantities of energy being consumed because of my education and experience in related fields; it's the average person who I think does not, since I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction."

Also, your figure only reflects electricity generation, not total energy consumption, which is a much higher figure. Currently, non-hydrocarbon fuel sources for transportation are very fringe.

The truth is that the price of fossil fuels has always fluctuated, and will continue to fluctuate, in accord with simple supply-demand economics for a long time to come; the cheaper it gets to make energy via alternative methods, the cheaper fossil fuels will become in order to undercut those alternative sources.

Replies from: RomeoStevens, ChristianKl, Nornagest
comment by RomeoStevens · 2014-09-17T00:55:38.642Z · LW(p) · GW(p)

I looked through the numbers and the trend line. I updated in your direction. Even nuclear can't make a big dent without true mass production of reactors, which almost certainly will not happen.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-17T14:32:59.833Z · LW(p) · GW(p)

I give it well over a 70 percent chance of happening, mostly because I am expecting coal and gas to get really unpleasantly expensive in the next two decades. The remaining 30 percent is mostly taken up by "technological surprise rendering all extant generation tech obsolete": one of the small-scale fusion plants working out very well, for example.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-18T00:46:58.474Z · LW(p) · GW(p)

Mostly because I am expecting coal and gas to get really unpleasantly expensive in the next two decades.

The only reason they have been getting expensive at all is that governments have been over-regulating them.

Replies from: ChristianKl, Izeinwinter
comment by ChristianKl · 2014-09-19T20:03:17.294Z · LW(p) · GW(p)

If you don't regulate them you don't pay directly, but you pay in medical costs for conditions such as asthma. You also get lower children's IQs, which is worth something. According to the EPA's calculations, the children's IQ preserved is worth more than the increased monetary cost that mercury regulation imposes on coal plants.

comment by Izeinwinter · 2014-09-18T03:33:05.486Z · LW(p) · GW(p)

Ehrr.. Just no. Nuclear might be able to make that case, though mostly the problem there is sticking with over-grown submarine reactors (PWRs are an asinine choice for use on land), but coal and gas? Those are, if anything, under-regulated due to excessive political clout. Fossil fuels will get more costly for straightforward reasons of supply and demand. The third world is industrializing, and the first world is going to use ever more electricity due to very predictable changes like the coming switch to all-electric motoring (which, again, will not be driven by government policy, but by better batteries making the combustion engine a strictly inferior technology for cars). Thus, worldwide electricity demand is going to go up. By a lot. That, in turn, is going to bid up sea-borne coal and liquefied natural gas to ridiculous heights because there just isn't any way to increase the supply to match. Very shortly after that, resistance to more reactors is going to keel over and die - high electricity costs to industry being entirely unacceptable to people with lots of political clout and lots of media ownership - and suddenly mass production is going to be on the menu. Hopefully of more sensible designs: molten salt, molten lead, even sodium. Any design that doesn't require the power to be on for shut-down cooling to work, basically.

Replies from: moridinamael
comment by moridinamael · 2014-09-18T14:17:35.864Z · LW(p) · GW(p)

Fossil fuels will get more costly for straightforward reasons of supply and demand.

Unfortunately it is not quite this simple. The current oil price is on the order of $100 per barrel, but it never broke $40 per barrel prior to 1998. See figure. Also see this figure, which is in terms of inflation-adjusted dollars and shows another huge spike around 1980. The reason for these tremendous spikes in price isn't simple supply-demand - complex nonlinear political factors are almost certainly to blame, and price stickiness is partially why oil remains as expensive as it is. The upshot is that the price of oil will continue to beat out other sources of energy by just enough to keep those sources of energy at a marginal level of profitability, because oil (and other fossil fuels) can remain profitable at much lower prices.

I would also point out that the scenario you have just described is highly complex and conjunctive, while "oil continues to do what it has been doing" is an intrinsically simple hypothesis.

Replies from: None, Izeinwinter
comment by [deleted] · 2014-09-18T16:12:14.615Z · LW(p) · GW(p)

Price is set on the margins. The marginal barrels of oil coming out of the ground are certainly in the ballpark of $100, from various shale and tight deposits.

Replies from: Lumifer, moridinamael
comment by Lumifer · 2014-09-18T16:35:33.428Z · LW(p) · GW(p)

The oil prices do not play by the economics textbook rules because most of the world's oil production is controlled by governments and governments have a variety of interests and incentives beyond what a profit-maximizing purely economic agent might have.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-18T18:13:25.489Z · LW(p) · GW(p)

Oil is nearly utterly irrelevant to electricity, however. Nobody sane produces electricity on anything but the most minor of scales using it, and given mass-market electric cars, it is never going to be able to compete on price with electricity. Charging a 100 kWh battery pack from bone dry to full would cost an average of 12 dollars and change in the US. That is the equivalent of a gas price of <60 cents per gallon, and electric cars are a better driving experience. (Well, a car with a 100 kWh battery pack certainly will be. That's a lot of oomph.) The Saudis aren't going to be able to beat this transition just by dropping the price of oil for a year or two, nor are price hikes on the electric side going to do it - most of the cost of electricity to private consumers is taxes, so even quite large rises in the cost of coal and gas will not translate one-to-one into end-customer pain, and the differential in price is simply too large. More importantly, the electrification of the world continues apace, and most of the places joining the age of the electron do not have vast coal reserves of their own. The secular pressure to go nuclear is only going to rise, and most of the world is well beyond the reach of the various groups dedicated to the defense of helpless actinides.
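A quick way to sanity-check that equivalence is to work it through with explicit assumptions. The sketch below uses illustrative figures for EV efficiency, the comparison car's mpg, and the electricity rate (none of them are from the comment); with these particular inputs the equivalent gas price comes out closer to a dollar per gallon, and the "<60 cents" figure requires a less efficient comparison car or a more efficient EV.

```python
# Back-of-the-envelope "equivalent gas price" for an EV charge.
# All inputs are illustrative assumptions, not figures from the comment.

battery_kwh = 100            # assumed pack size
usd_per_kwh = 0.12           # assumed average US residential electricity rate
ev_miles_per_kwh = 3.5       # assumed EV efficiency
ice_miles_per_gallon = 30    # assumed comparison combustion car

charge_cost = battery_kwh * usd_per_kwh              # cost of a full charge (~$12)
range_miles = battery_kwh * ev_miles_per_kwh         # miles per full charge
gallons_for_same_range = range_miles / ice_miles_per_gallon
equivalent_gas_price = charge_cost / gallons_for_same_range

print(f"Full charge: ${charge_cost:.2f} for about {range_miles:.0f} miles")
print(f"Equivalent gas price: ${equivalent_gas_price:.2f}/gallon")
```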

Replies from: Lumifer
comment by Lumifer · 2014-09-18T19:04:43.418Z · LW(p) · GW(p)

Nobody sane produces electricity on anything but the most minor of scales using it

At the moment. But your scenario assumes that most of transportation switches its energy source from petrol/diesel to electricity. That implies that the demand for oil will drop through the floor. And that implies that oil will become very cheap. Which in turn implies that it will again start to make sense to burn it to make electricity to recharge the car batteries.

Remember that electricity is not a source of energy. To support your case for nuclear you need to show that coal and hydrocarbons will be unable to support the energy needs of humanity in the near future. Whether cars run directly on hydrocarbons or whether there is an intermediate stage of electricity involved does not matter much for this issue.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-18T20:02:56.214Z · LW(p) · GW(p)

Oil has uses other than automotive fuel - way before it reaches the point where it becomes competitive with coal or uranium for stationary power plants, demand from the plastics, aviation and petrochemicals industries is going to put a floor on the price. I don't expect the Saudi oil to stay under the sand, but as an energy player, the global oil industry is doomed. Coal is going to be raking in money hand over fist for a while as prices spike, but once a transition to fission starts (and high coal prices will get that started), they are done for too. King coal only still reigns at all because the world has been collectively insane about fission due to living in the shadow of the atomic bomb.

Replies from: Lumifer
comment by Lumifer · 2014-09-18T20:15:04.091Z · LW(p) · GW(p)

Oil has uses other than automotive fuel

Certainly true and yes, that will put a floor under the price.

as an energy player, the global oil industry is doomed

That's good, isn't it? People have been pointing out for quite a while that just burning something as useful as oil is pretty silly. It also means that the world is not going to run out of oil in the foreseeable future, right?

Coal is going to be raking in money hand over fist for a while as prices spike

I don't know about that -- there is an awful lot of gas around.

Do you happen to have some sort of a timetable for your predictions?

the world has been collectively insane about fission

While that may be true, I don't see any signs of the world becoming more sane.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-18T20:34:18.336Z · LW(p) · GW(p)

Tesla says the Model E is going live in 2017. Average fleet turnover is 7 years, and with affordable EVs on the market (I am sort of assuming at least some competition from other manufacturers here..) combustion cars become roughly as marketable as a buggy and horse, so near-total conversion to electric cars (80+%) by 2027 - the knock-on effects of that on the grid are very predictable, so a sudden dire interest in more baseload during that time period.

Replies from: None, Lumifer
comment by [deleted] · 2014-09-19T12:59:09.789Z · LW(p) · GW(p)

You DO realize that even if Tesla's wildest dreams are realized and they double the world production of lithium ion batteries, they can sell at most a few hundred thousand cars a year...

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-19T14:00:40.999Z · LW(p) · GW(p)

Yeah. Musk isn't breaking ground on big enough factories. That is why I am saying ten years for the switchover rather than much less than that. But once the world spots someone in bog-standard manufacturing making more money than God, everyone, their sister, and the crazed aunt nobody wants to talk to will pile in.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-19T14:55:34.306Z · LW(p) · GW(p)

Increasing the amount of lithium that gets mined each year isn't as easy as just retasking a factory to another task.

Replies from: Osuniev
comment by Osuniev · 2014-09-20T23:02:45.924Z · LW(p) · GW(p)

But things ARE moving in this direction, I believe. Bolivia is trying to figure out a way to start getting money from the world's largest reserve of lithium, currently untouched because it lies under the natural wonder Salar de Uyuni.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-20T23:05:49.891Z · LW(p) · GW(p)

Things are moving in the direction of producing more lithium, but not enough to simply double lithium production in one or two years.

Replacing all cars with electric cars might require a lot more than doubling.

comment by Lumifer · 2014-09-18T20:49:00.649Z · LW(p) · GW(p)

combustion cars become roughly as marketable as a buggy and horse

You are making a rather huge assumption that the electric battery energy density will drastically improve. Without that electric cars will still be limited to cities and commuting.

a sudden dire interest in more baseload during that time period

California had some significant electricity shortages recently, and that did NOT make them fans of building more power plants, never mind nuclear ones. If the demand spikes, high (electricity) prices will downregulate it, at which point, given the cheap oil, ICE (internal combustion engine) cars could turn out to be a sensible option :-)

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-19T11:47:13.693Z · LW(p) · GW(p)

No I'm not. I am assuming the exact batteries that are going to be coming out of the factories currently being built. Because when it comes to it, nobody is going to give half a damn that they have to take a midday break of thirty minutes the one time a year they visit aunt Greta three states over. Tesla is aiming at a 200-mile range. At legal speeds, that is just under four hours of driving on the highway. Try to recall the last time you did more than that in one day. Now, for that trip, would a 30-minute lunch break have ruined your life?

The actual pattern of use for everyone in their day-to-day lives is going to be "jack the car in when you come home for the day, unplug in the morning". Total time spent: 22 seconds. This is more convenient than a weekly stop at the gas station, and vastly cheaper. At this point everyone I discuss this with will bring up road trips. Except. People plan those. Including a stop at a Supercharger station is not a hardship. It is certainly not a hardship severe enough to justify paying six to ten times as much to keep your car running on a day-to-day basis. Is avoiding those semiannual 30-minute pitstops really worth 4 and a half thousand dollars to you? Assuming gas drops in price by half, is it worth 2250 dollars? I guarantee that avoiding them is not worth it to the average consumer. And given a sane discount rate, that difference in fuel costs means nobody is going to buy a gasoline car. You would have to give them away for free! Average car turnover: 7 years. 7 x 4.5 = 31.5 thousand. The Tesla E is aiming at a price point of 35k. So, basically, the fuel savings are going to pay for the car. The only flaw in Elon Musk's design I can spot here is that I think his planned production facilities are way too small. That provides an opening for panicked retooling for the production of "Me Too" cars from the traditional car makers. This is the main reason I said ten years for full switchover rather than 7 - most car manufacturers are currently being idiots and acting like they will be making gasoline burners in the year 2025. The panic rethink is going to delay things a bit.
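To make the arithmetic in this argument easy to vary, here is a short sketch that multiplies the claimed annual fuel savings by the assumed ownership period and compares the total to the assumed car price. The $4,500-per-year figure is the commenter's own and is questioned in the replies below; the other numbers are also assumptions from the comment, not independent data.

```python
# Sketch of the "fuel savings pay for the car" claim, with adjustable inputs.

annual_fuel_savings = 4_500   # claimed yearly savings vs. gasoline (disputed below)
ownership_years = 7           # assumed average fleet turnover
ev_price = 35_000             # assumed target price of the car

total_savings = annual_fuel_savings * ownership_years
print(f"Savings over {ownership_years} years: ${total_savings:,}")
print(f"That covers {total_savings / ev_price:.0%} of a ${ev_price:,} car")
```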

Replies from: Lumifer, Lumifer, army1987
comment by Lumifer · 2014-09-21T00:21:52.043Z · LW(p) · GW(p)

worth 4 and a half thousand dollars to you? Assuming gas drops in price by half, is it worth 2250 dollars?

Y'know, that number bothered me so I decided to check. Are the annual gas expenses really $4,500?

Let's see. An average American car drives somewhere around 12,000 miles per year. A contemporary sedan running on gas goes for around 30 miles per gallon (EPA combined numbers). This means that a car burns about 400 gallons of gas per year. I filled up yesterday for $3.15 per gallon, but let's say the average current price is $3.25. 400 * $3.25 = $1,300 per year. This is the average annual gas expense.
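The same back-of-the-envelope calculation, written out as a short script so the inputs (the rough US averages used in the comment) can be varied:

```python
# Annual gasoline cost for a typical US car, using the comment's rough averages.

miles_per_year = 12_000      # assumed typical annual mileage
miles_per_gallon = 30        # assumed EPA combined figure for a contemporary sedan
usd_per_gallon = 3.25        # assumed average gas price

gallons_per_year = miles_per_year / miles_per_gallon     # about 400 gallons
annual_gas_cost = gallons_per_year * usd_per_gallon      # about $1,300

print(f"{gallons_per_year:.0f} gallons/year -> ${annual_gas_cost:,.0f}/year")
```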

And if you buy a small (still ICE) car, you can get gas mileage up to about 40 mpg, I think. For such a car the annual costs of gas would be below a thousand dollars.

Where does your $4,500 number come from?

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-24T16:19:05.338Z · LW(p) · GW(p)

Uhm.. That looks a heck of a lot like the reasoning I used, except I did something stupid with imperial/metric conversion, and then it didn't trigger any "That must be a mistake" bells because gas in these parts is 2 dollars.. per liter. Why can't you use metric like everyone else? ;)

Replies from: Lumifer
comment by Lumifer · 2014-09-24T16:44:41.284Z · LW(p) · GW(p)

because gas in these parts is 2 dollars.. per liter

This is the case of your government being greedy, not of gasoline being intrinsically expensive :-P

Why can't you use metric like everyone else? ;)

The US is special -- haven't you heard of the American Exceptionalism doctrine? X-D

comment by Lumifer · 2014-09-19T14:50:29.708Z · LW(p) · GW(p)

200 mile range. At legal speeds, that is just under four hours of driving on the highway. Try and recall the last time you did more than that in one day?

No need to try, I drive long distances on a fairly regular basis.

Your estimate of half an hour for a full recharge also seems to have nothing to do with the current reality.

The actual pattern of use for everyone in their day to day lives

I trust you've heard of the typical mind fallacy? People are different. Trying to pretend everyone does the same thing isn't particularly useful.

Assuming gas drops in price by half

And assuming electricity costs go up by how much? One reason gas costs so much is because it's a source of revenue for the government. Do you think the government will just forget about this revenue or maybe electric won't be so cheap after all? You are predicting a huge spike in demand, right?

nobody is going to buy a gasoline car.

LOL. OK, then, it's a simple way to become rich. Short the stocks of everyone who depends on ICE engines -- engine manufacturers, most obviously, but there's a large ecosystem around that -- and go long Tesla and its ecosystem. In ten years you should be swimming in money.

On a bit more serious note, clearly electric cars make sense for some people and some uses. They also clearly do NOT make sense for other people and other uses. Of course there will be more electric cars on the road in ten years. But there will be ICE cars as well.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-19T15:58:09.698Z · LW(p) · GW(p)

In ten? Sure. As I said, 80% penetration. It might be higher, but that /does/ depend on better batteries than conservative forward extrapolation of trends predicts. Most of them will be used, because you will be able to get a used ICE car for junk value and a dollar, and then junk it the first time it has any kind of major problem.

And the government won't be able to tax the electricity to the extent they do gas - no good way to do that without being lynched, because electrons are electrons. Most likely, we will wind up with.. I dunno, ridiculously high taxes on tyres?

Replies from: Lumifer
comment by Lumifer · 2014-09-19T16:18:51.683Z · LW(p) · GW(p)

So, to take the used cars out of the equation, you're saying that in 10 years no one will be producing ICE cars..? Or, to avoid absolutes, given 80% penetration and the existence of used cars, do you claim that in ten years something like 95% of cars produced will be purely electric?

And the government wont be able to tax the electricity to the extent they do gas - no good way to do that without being lynched because electrons are electrons.

I wish I shared your optimism :-/

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-19T19:12:10.226Z · LW(p) · GW(p)

More or less. Technology transitions have reinforcing feedback loops - once the transition starts, the bottom falls out of the market for used ICE cars (the junk value and a dollar thing..) which makes new ICE cars very difficult indeed to sell. After a couple years of that, gasoline is no longer sold in nearly as many places...

It's not blind optimism - look, the oil barons currently bribing the US Congress (and various European politicians..) fall into two categories: "Marginal producers" and "Funny looking/weirdly dressed foreigners". That means that once the price of oil falls to any significant extent, the political lobby for oil rapidly gets reduced to 90+% "Guys in turbans with no vote". That is an interest group which politicians will lose absolutely no sleep over burning all bridges with. So there shouldn't be any pressure from the top to keep the ICE alive artificially. That will cost revenue, yes, but it will save consumers lots of money. Which they will spend. And that will create revenue, and reduce expenditures, and again.. Being the politician that steps up and says "I know you just saved thousands of dollars that were heading down to the kingdom of sand, but I just can't stand to see a commoner with money, so I'm going to slap a 3 thousand dollar surcharge on your electric bill" is a good way to end up on a literal pike. No matter how much economy-speak you try to dress it up in.

Replies from: Lumifer
comment by Lumifer · 2014-09-20T00:38:23.497Z · LW(p) · GW(p)

More or less.

Well, we'll see. In the meantime, Chevy Volt, an electric car selling for $27K (with applicable tax credits, that is, a government bribe to make you buy it) is selling rather poorly, I believe.

Being the politician that steps up and says

Heh. No, the politicians have gotten quite good at saying "Look at the shiny!" while they're rifling through your wallet..

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-20T10:58:40.064Z · LW(p) · GW(p)

And the United Kingdom is still happily paying the poll tax, and the American War of Independence never happened.

Some taxes are much more.. annoying.. to the general public than others. And in this case, what you are envisioning just can't happen. You can not collect high taxes on gasoline and electricity both - the transition is fast, not instant, and so if you do that, low-income people who are obliged to use gasoline will actually run out of money altogether. And riot. In theory, it is possible to stop taxing gas and start taxing electricity instead, but that is so painfully stupid an idea you wouldn't be elected dogcatcher after suggesting it.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T19:13:07.605Z · LW(p) · GW(p)

You can not collect high taxes on gasoline and electricity both

Well a lot of European countries are doing just that. Or rather they have "sustainable energy" mandates, which from the consumer's point of view function as a high tax on electricity.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-20T19:38:43.508Z · LW(p) · GW(p)

Not high enough to make gasoline competitive with electricity on price, which is the subject under debate. Not that I'm happy about the mandates, because they have failed. The only policies that have ever worked to clean up electricity generation are dams and nukes. Barring technological breakthroughs, I fear they are the only ones that are ever going to work outside of a band near the equator where solar might eventually become sane.

comment by A1987dM (army1987) · 2014-09-20T12:25:44.914Z · LW(p) · GW(p)

Humans aren't that rational; as someone here (Yvain, I think) once mentioned, they will rent/buy houses with an extra bedroom just in case aunt Greta comes over, even though the money they'd save with a smaller house would be enough to buy a stay in a five-star hotel every time aunt Greta comes over.

Also, electric cars just aren't that cool among a large segment of the population, and social status is a major part of the reason people buy expensive-ass cars.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-20T13:36:03.101Z · LW(p) · GW(p)

.. And for the irrational and moneyed segment of the population willing to blow money on cool factor for conspicuous consumption, EV motoring offers total freedom of design space. The basis of an EV is a skateboard: 4 wheels on the corners with electric steering, batteries and drive-by-wire, on top of which you can drop any chassis you care to. So, blow the money needed to get the jumbo-tron extra-large battery pile, and stick any chassis you care to on top. Aerodynamics? Who cares; it isn't like the kind of person who would ignore that price gap on the fuel side of things is going to feel the pain that their car is eating watt-hours like a house decked out in Christmas lights.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-20T18:37:25.017Z · LW(p) · GW(p)

If my model of the people I'm talking of is correct, the very idea of electric cars reminds them of granola-eating hippie wusses. (But I haven't interacted with such people on a regular basis since 2009, so my model may be outdated.)

comment by moridinamael · 2014-09-18T16:18:16.523Z · LW(p) · GW(p)

I assure you that this is not true, unless I misunderstand you.

edit:

The Finding and Development cost of a typical worthwhile shale play is $1.50/Mcf (many are even better), while the current natural gas price is $3.50/Mcf. Of course there are crappy fields with higher F&D costs, and these won't be drilled until prices are high enough to justify it. In effect there is a continuum of price per barrel out there in the world, and this is not what controls present-day prices.

comment by Izeinwinter · 2014-09-18T14:44:51.531Z · LW(p) · GW(p)

Eh, I'll stand by my reasoning, but I agree other people might not assign as high probabilities to each step in the chain as I do, so here is a much simpler causative chain that is going to lead to the same place.

China isn't going to keep sacrificing tens of thousands of its people to the demon smog every year. And once the Chinese are knocking out reactors at a high pace, the rest of the world will follow.

Replies from: Lumifer
comment by Lumifer · 2014-09-18T15:16:06.391Z · LW(p) · GW(p)

China isn't going to keep sacrificing tens of thousands of its people to the demon smog every year.

And a simple solution to this is just to copy the current-day US which does not use a lot of nuclear power and also does not sacrifice many people to the demon smog.

comment by ChristianKl · 2014-09-19T20:03:22.832Z · LW(p) · GW(p)

I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction."

We have seen roughly a doubling in solar panel efficiency every 7 years. That's not what I would call a "small increase".

Replies from: moridinamael
comment by moridinamael · 2014-09-19T20:13:25.519Z · LW(p) · GW(p)

Even if solar panels were 100% efficient it would not change the overall picture very much. Solar panels are expensive and do not use space efficiently.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-20T01:31:57.536Z · LW(p) · GW(p)

With efficiency I meant the amount you pay per kilowatt hour. It's a variable that has consistently halved every 7 years over the last two decades.
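To make the claimed trend concrete, here is a small illustrative extrapolation; the 7-year halving period is the commenter's figure, and the starting cost is an arbitrary relative unit rather than real price data.

```python
# What "cost per kWh halves every 7 years" implies over a few decades (illustrative).

halving_period_years = 7
start_cost = 1.0   # relative units; not an actual price

for years in (0, 7, 14, 21, 28):
    cost = start_cost * 0.5 ** (years / halving_period_years)
    print(f"after {years:2d} years: {cost:.3f}x the starting cost")
```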

Space on top of most buildings is unused and there are huge deserts that aren't used.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T01:58:30.021Z · LW(p) · GW(p)

Does that include the subsidies many governments have been providing to solar?

Replies from: ChristianKl
comment by ChristianKl · 2014-09-20T15:41:35.042Z · LW(p) · GW(p)

Subsidies per kilowatt hour didn't rise exponentially. I'm not sure to what extent they are factored out.

Solar is also not the only form of energy that gets subsidized. In Germany we used to pay billions per year in coal subsidies.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T18:58:54.666Z · LW(p) · GW(p)

Subsidies per kilowatt hour didn't rise exponentially.

They started from zero, so it's technically super-exponential.

comment by Nornagest · 2014-09-16T23:32:13.113Z · LW(p) · GW(p)

Also, your figure only reflects electricity generation, not total energy consumption which is a much higher figure.

From what I remember, transportation's responsible for about a third of CO2 emissions, a bit less than electricity generation. (Various other sources make up the remaining third, most not involving direct energy consumption.) I'm not sure exactly how that translates to energy consumption, but given the economies of scale involved, I suspect the power grid would end up dominating total energy consumption.

comment by FiftyTwo · 2014-09-21T00:43:49.725Z · LW(p) · GW(p)

Is this a claim about the choices we will make or about what is possible? If (1), I can buy it as an argument that states will not be rational enough to choose better options; if (2), I think it's false.

comment by Jiro · 2014-09-15T19:58:09.065Z · LW(p) · GW(p)

Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.

Replies from: fubarobfusco, Sarunas, Emile, polymathwannabe, None, cousin_it
comment by fubarobfusco · 2014-09-16T03:50:40.264Z · LW(p) · GW(p)

My contrarian idea: Roko's basilisk is no big deal, but intolerance of making, admitting, or accepting mistakes is cultish as hell.

comment by Sarunas · 2014-09-16T11:54:10.829Z · LW(p) · GW(p)

"Rationality" leads people to believe such absurd ideas

Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions. E.g. G.K.Chesterton would probably blame atheism[1]. Zooming out even more, for example, someone immersed in Eastern thought might even blame Western thought in general. Despite receiving a vastly disproportionate share of media attention, it was such a small part of LessWrong history and thought (by the way, is anything that any LWer ever came up with a part of LW thought?) that it seems wrong to put the blame on LessWrong or rationality in general.

Furthermore, which would you say is better, an ability to formulate an absurd idea and then find its flaws (or, for e.g. mathematical ideas, exactly under what strange conditions they hold) or an inability to formulate absurd ideas at all? The ability to come up with various absurd ideas is an unavoidable side effect of having an imagination. What is important is not to start believing it immediately, because in the history of any really new and outlandish idea at the very beginning there is an important asymmetry (which arises due to the fact that coming up with any complicated idea takes time) - the idea itself has already been invented but the good counterarguments do not yet exist (this is similar to the situation where a new species is introduced to an island where it does not have natural predators, which are introduced only later). This also applies to the moment when a new outlandish idea is introduced to your mind and you haven't yet heard any counterarguments; one must nevertheless exercise caution. Especially if that new idea is elegant and thought-provoking whereas all counterarguments are comparatively ugly and complicated and thus might feel unsatisfactory even after you have heard them.

the presence of a significant number of people psychologically affected

Was there really a significant number of people or is this just, well, an urban legend? The fact that some people are affected is not particularly surprising - it seems to be consistent with the existence of e.g. OCD. Again, one must remember that not everyone thinks the same way and the common thing between people affected might have been something other than acquaintance with LW and rationality which you seem to imply (correct me if my impression was wrong).

the fact that Eliezer accepts that basilisk-like ideas can be dangerous

I think it is better to give Eliezer a chance to explain himself why he did what he did. My understanding is that whenever someone introduces a person to a new variant of this concept without explaining the proper counterarguments, it takes time for that person to acquaint themselves with them. In very specific instances that might lead to unnecessary worrying about it and potentially even some actions (most people would regard this idea as too outlandish and too weird whether or not it was correct, and compartmentalize everything even if it was). A clever devil's advocate could potentially come up with more and more elaborate versions of this idea which take more and more time to take down. As you can see, it is not necessary for any form of this idea to be correct for this gap to expand.

Personally I understand (and share) the appeal of various interesting speculative ideas and the frustration that someone thinks that this is supposedly bad for some people, which seems against my instincts and the highly valuable norm of a free marketplace of ideas.

At this point in time, however, the basilisk seems to be more often brought up in order to dismiss all of LW, rather than only this specific idea; thus it is no wonder that many people get defensive even if they do not believe it.

All of this does not touch the question whether the whole situation was handled the way it should have been handled.

[1] Although the source says that famous quote is misattributed. Huh. I remember reading a similar idea in one of the "Father Brown" short stories. I'll have to check it.

(excuse my english, feel free to correct mistakes)

Replies from: Jiro, lmm
comment by Jiro · 2014-09-16T14:34:15.203Z · LW(p) · GW(p)

Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions.

The quotes indicate that I'm not blaming rationality, I'm blaming something that's called rationality. You're replying as if I'm blaming real rationality, which I'm not.

Was there really a significant number of people or is this just, well, an urban legend?

Censoring substantial references to the basilisk was partly done in the name of protecting the people affected. This requires that there be a significant number of people, not just that there be the normal number of people who can be affected by any unusual idea.

I think it is better to give Eliezer a chance to explain himself why he did what he did.

His explanations have varied. The explanation you linked to is fairly innocuous; it implies that he is only banning discussion because people get harmed when thinking about it. Someone else linked a screengrab of Eliezer's original comment which implies that he banned it because it can make it easier for superintelligences to acausally blackmail us, which is very different from the one you linked.

Replies from: Sarunas
comment by Sarunas · 2014-09-17T13:06:22.170Z · LW(p) · GW(p)

Censoring substantial references to the basilisk was partly done in the name of protecting the people affected. This requires that there be a significant number of people, not just that there be the normal number of people who can be affected by any unusual idea.

Curiously, it is not necessary. For example, it would suffice that people who do the censoring overestimate the number of people that might need protection. Or consider the PR explanation that I gave in another comment, which similarly does not require a large number of people affected. Some other parts of your comment are also addressed there.

Replies from: Jiro
comment by Jiro · 2014-09-17T14:24:40.356Z · LW(p) · GW(p)

It is certainly possible that few people were affected by the Basilisk, and the people who do the censoring either overestimate the number or are just using it as an excuse. But this reflects badly on LW all by itself, and also amounts to "you cannot trust the people who do the censoring", a position which is at least as unpopular as my initial one.

Replies from: Sarunas
comment by Sarunas · 2014-09-17T20:24:55.430Z · LW(p) · GW(p)

I would guess that the dislike of censorship is not an unpopular position, whatever its motivations.

comment by lmm · 2014-09-16T22:26:59.815Z · LW(p) · GW(p)

Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions. E.g. G.K.Chesterton would probably blame atheism[1]. Zooming out even more, for example, someone immersed in Eastern thought might even blame Western thought in general.

It's whatever makes LW different from the wider population, even the wider nerdy-western-liberal-college-educated cluster. The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them.

Despite receiving vastly disproportionate share of media attention it was such a small part of LessWrong history and thought (by the way, is anything that any LWer ever came up with a part of LW thought?) that it seems to wrong to put the blame on LessWrong or rationality in general.

It also received a disproportionate amount of ex cathedra moderator action. Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away.

Furthermore, which would you say is better, an ability to formulate an absurd idea and then find its flaws (or, for e.g. mathematical ideas, exactly under what strange conditions they hold) or inability to formulate an absurd ideas at all? Ability to come up with various absurd ideas is an unavoidable side effect of having imagination. What is important is not to start believing it immediately, because in the history of any really new and outlandish idea at the very beginning there is an important asymmetry (which arises due to the fact that coming up with any complicated idea takes time) - an idea itself has already been invented but the good counterarguments do not yet exist (this is similar to the situation where a new species is introduced to an island where it does not have natural predators, which are introduced only later). This also applies to the moment when a new outlandish idea is introduced to your mind and you haven't heard any counterarguments by that moment, one must nevertheless exercise caution. Especially if that new idea is elegant and thought provoking whereas all counterarguments are comparatively ugly and complicated and thus might feel unsatisfactory even after you heard them.

I don't think this addresses the original argument. If these ideas are dangerous to us then we are doing something wrong. If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample.

Was there really a significant number of people or is this just, well, an urban legend? The fact that some people are affected is not particularly surprising - it seems to be consistent with the existence of e.g. OCD. Again, one must remember that not everyone thinks the same way and the common thing between people affected might have been something other than acquaintance with LW and rationality which you seem to imply (correct me if my impression was wrong).

I don't know, but the LW leadership's statements seem to be grounded in the claim that there were.

Replies from: ChristianKl, Sarunas
comment by ChristianKl · 2014-09-17T13:38:56.116Z · LW(p) · GW(p)

Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away.

At the time the Basilisk episode happened, Eliezer was a lot more active in general than when the illegitimate downvoting happened.

If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample.

If you look at the self-professed skeptic community, there are episodes such as Elevatorgate.

If you go a bit further back and look at what Stalin did, I would call the ideas on which he acted dangerous.

The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them.

It's pretty easy to speak about a lot of topics in a way that makes the people you are talking to laugh and not take the idea seriously. A bunch of that atheist population also treats their new atheism like a religion and closes itself off from alternative ideas that sound weird. For practical purposes they are religious and do have a fence against taking new ideas seriously.

comment by Sarunas · 2014-09-17T13:05:45.138Z · LW(p) · GW(p)

It's whatever makes LW different from the wider population, even the wider nerdy-western-liberal-college-educated cluster. The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them.

What ideas does the general population of atheists have in common besides the lack of belief in God? And what interesting ideas can you derive from that? F.Dostoevsky (who wasn't even an atheist) seems to have thought that from this one could derive that everything is morally permitted. Maybe some atheistic ideas seemed new, interesting and outlandish in the past when there were few atheists (e.g. separation of church and state), but nowadays they are part of common sense.

No, the claim of this hypothetical Chesterton would not be that atheism creates new weird ideas. It would be that by rejecting God you lose the defense against various weird ideas ("It's the first effect of not believing in God that you lose your common sense." - G.K.Chesterton). It is not general atheism, it is specific atheist groups. And in the history of the world, there were a lot of atheists who believed in strange things. E.g. some atheists believe in reincarnation or spiritism. Some believe that the Earth is a zoo kept by aliens. In previous times, some revolutionaries (led not by their atheism, but by other ideologies) believed that just because the social order is not god-given it could be easily changed into basically anything. The hypothetical Chesterton would probably claim that had all these people closely followed the church's teachings they would not have believed in these follies, since the common sense provided by traditional Christianity would have prevented them. And he would probably be right. The hypothetical Chesterton would probably think that the basilisk is yet another thing in the long list of things some atheists stupidly believe.

Yes, on LessWrong the weirdness heuristic is used less than in the more general atheist/skeptic community (in my previous post I have already mentioned why I think it is often useful), and it is considered bad to dismiss an idea if the only counterargument to it is that it is weird. The difference in acceptance of the weirdness heuristic probably comes from different mentalities: trying to become more rational vs. a more conservative strategy of trying to avoid being wrong (e.g. accepting epistemic learned helplessness when faced with weird and complicated arguments and defaulting to the mainstream position). This difference may reduce a person's defenses against various new and strange ideas. But even then, one of the most upvoted LW posts of all time already talks about this danger.

Nevertheless, while you claim that the general population of atheists "laughs them off when you describe them to them", it is my impression that the same is true here, on LessWrong, as the absolute majority of LWers do not consider it a serious thing (sadly, I do not recall any survey asking about that). It is just a small proportion of LWers that believe in this idea. Thus it cannot be "whatever makes LW different from the wider population"; it must be whatever makes that small group different from the wider LW population, because even after rejecting the following of tradition (which would be the hypothetical Chesterton's explanation) and diminishing the usage of the weirdness heuristic (which would be the average skeptic's explanation), the majority of LWers still do not believe it. And the reasons why some LWers become defensive when someone brings it up are probably very similar to those described in the blog post "Weak Men Are Superweapons" by Yvain.

One could argue that LessWrong thought made it possible to formulate such an idea. Which I had already addressed in my previous post. Once you have a wide vocabulary of ideas you can come up with many things. It is important to be able to find out if the thing you came up with is true.

If these ideas are dangerous to us then we are doing something wrong. If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample.

I do not think that thinking about the basilisk is dangerous to us. Maybe it is to some people with OCD or something similar, I do not know. I talked about absurdity, not danger. It seems to me that instead of restricting our imagination (so as to avoid coming up with absurd things), we should let it run free and try to improve our ability to recognize which of these imagined ideas are actually true.

It also received a disproportionate amount of ex cathedra moderator action. Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away.

I do not know what exactly Eliezer thought when he decided. I am not him. In fact, I wasn't even there when it happened. I have no way of knowing whether he actually had a clear reason at the time or simply freaked out and made an impulsive decision, or actually believed it at that moment (at least to the extent of being unable to immediately rule it out completely, which might have led him to censor that post in order to postpone the argument). However, I have an idea which I find at least somewhat plausible. This is a guess.

Suppose even a very small number of people (let's say 2-4 people) were affected (again, let's remember that they would be very atypical; I doubt that having, e.g., OCD would be enough) in a way that instead of only worrying about this idea, they would actually take action and e.g. sell a large part of their possessions and donate it to (what was then known as) SIAI, leave their jobs to work on FAI, or start donating all their income (neglecting their families) out of fear of this hypothetical torture. Now that would be a PR disaster many orders of magnitude larger than anything basilisk-related we have now. Now, when people use the word "cult", they seem to use it figuratively, as a hyperbole (e.g.); in that case people and organizations who monitor real cults would actually label SIAI as a literal one (whether SIAI likes it or not). Now that would be a disaster both for SIAI and the whole friendly AI project, possibly burying it forever. Considering that Eliezer worried about such things even before this whole debacle, it must have crossed his mind, and this possibility must have looked very scary, leading to the impulsive decision and what we can now see as improper handling of the situation.

Then why not claim that you do this for PR reasons instead of out of concern about psychological harm to those people? Firstly, one may actually care about those people, especially if one knows one of them personally (which seems to be the case from the screenshot provided by XiXiDu and linked by Jiro). And even in the more general case, talking about caring usually looks better than talking about PR. Secondly, "stop it, it is for your own safety" probably stops more people from looking than "stop it, it might give us bad PR" (as we can see from the recent media attention, the second reason stops basically nobody). Thirdly, even if Eliezer personally met all the affected people (once again, remember that they would be very atypical) and explicitly asked them not to do anything, they would understand that he has SIAI's PR at stake and thus an incentive to lie to them about what they should do, and they wouldn't want to listen to him (as even a remote possibility of torture might seem scary) and might, e.g., donate via another person. Or find their own ways of trying to create FAI. Or whatever they could come up with. Or find their own ways to fight the possible creation of AI. Or maybe even something like this. I don't know; this idea did not cause me nightmares, therefore I do not claim to understand the mindset of those people. Here I must note that in no way am I claiming that because a person has OCD they would actually do that.

Nowadays, however, what most people seem to want to talk about is not the idea of the basilisk itself, but rather the drama surrounding it. As it is sometimes used to dismiss all of LW (again, for reasons similar to this), many people get very defensive and pattern-match those who bring this topic up with the intent of potentially discussing it (and related events) to trolls who do it just for the sake of trolling. Therefore this situation might feel worse for some people, especially those who are not targeted by the mass downvoting or have so much karma they can't realistically be significantly affected by it.

I feel like I am putting a lot of effort into steelmanning everything. I guess I, too, got very defensive, potentially for the reasons mentioned in that SlateStarCodex post. Well, maybe everything was just a combination of many stupid decisions, impulsive behaviour and after-the-fact rationalizations, which, after all, might be the simplest explanation. I don't know. And, as I wasn't even there, there must be people who are better informed about the events and better suited to argue.

Replies from: Jiro, lmm
comment by Jiro · 2014-09-17T15:03:08.191Z · LW(p) · GW(p)

Then why not claim that you do this for PR reasons instead of caring about psychological harm of those people? Firstly, one may actually care about those people, especially if one knows one of them personally (which seems to be the case from the screenshot provided by XiXiDu and linked by Jiro).

XiXiDu's screenshot is damning because it indicates that Eliezer banned the Basilisk because he thought a variation on it might work, not because of either PR reasons or psychological harm.

Unless you think he was lying about that for the same reason he might want to lie about psychological harm.

Replies from: Sarunas
comment by Sarunas · 2014-09-17T20:21:23.165Z · LW(p) · GW(p)

Well, in that post by XiXiDu, there is a quote by Mitchell Porter (approved by Eliezer) which, combined with the reddit post I linked earlier, suggests he was not able to provide a proof that no variation of the basilisk would ever work, given that there is more than one possible decision theory, including exotic and obscure ones that are not yet invented (and who knows what will be invented in the future). Eliezer seems to think that human minds are unable to follow such a decision theory rigorously enough for such a concept to work. But human ability is such a vague concept that it is not clear how one could give a formal proof.

However, it seems to me that an inability to provide a formal proof is an unlikely reason to freak out. What (I guess) happened was that this inability to provide a proof, combined with that unnamed SIAI person's nightmares (I would guess that Eliezer knows all SIAI people personally) and the fear of the aforementioned potential PR disaster, resulted in a feeling of losing control of the situation and made him panic, leading to that nervous and angry post emphasizing the danger and the need to protect some people (and leaving out the cult/PR reasons). This is my personal guess; I do not guarantee that it is correct.

Is an inability to outright deny a thing equivalent to believing that the thing has a positive probability? Well, logically they are somewhat similar, but these two ways of expressing similar ideas certainly have different connotations and leave very different impressions in the listener's mind of what the person's actual degree of belief was.

(I must add that I personally do not like speculating about why another person did what he did when I actually have no way of knowing his motivations.)

comment by lmm · 2014-09-17T23:46:08.022Z · LW(p) · GW(p)

I think many users do not think it's a serious danger, but it's still banned here. It is IMO reasonable for outsiders to judge the community as a whole by our declared policies.

Coming up with absurd ideas is not a problem. Plenty of absurd things are posted on LW all the time. The problem is that the community took it as a genuine danger.

If EY made a bad decision at the time that he now disagrees with, surely he would have reversed it or at least dropped the ban for future posts. A huge part of what this site is all about is being able to recognize when you've made a mistake and respond appropriately. If EY is incapable of doing that then that says very bad things about everything we do here.

What's cultish as hell to me is having leaders that would wilfully deceive us. If there are some nonpublic rules under which the basilisk is being censored, what else might also be being censored?

Replies from: Sarunas
comment by Sarunas · 2014-09-18T14:15:30.011Z · LW(p) · GW(p)

Well, nobody in the LW community is without flaws. People often fail (or sometimes do not even try) to live up to the high standards of being a good rationalist. The problem is that in some internet forums "judging the community" somehow becomes something like "this is what LW makes you believe, and even if they deny it, they do so only because not doing so would give them a bad image" or "they are a cult that wants you to believe in their robot god", which are such gross misrepresentations of LW (or even of the drama surrounding the basilisk stuff) that even after considering Hanlon's razor one is left wondering whether that level of misinterpretation is possible without at least some amount of intentional hostility. I would guess that nowadays a large part of the annoyance at somebody even bringing this topic up is a reaction to this perceived hostility.

If EY is incapable of doing that then that says very bad things about everything we do here.

No, that does not say very bad things about everything we do here. Whenever EY makes a mistake and fails to recognize and admit it, it is his personal failure to live up to the standards he wrote about so much. You may object that not enough people called him out on that on LW itself, but it is my impression that many of those who do so, e.g. on reddit, are LW users (as there are currently few related discussions here on LW, there is no context to do it here; besides, EY rarely comments here anymore). In addition, in this thread there seem to be several LW users who agree with you, so you are definitely not a lone voice; among LWers there seem to be many different opinions. Besides, in that reddit thread he seems to basically admit that, in fact, he did make a lot of mistakes in handling this situation.

It has just dawned on me that while we are talking about censorship, at the same time we are having this discussion. And frankly, I do not remember when a comment was last deleted solely for bringing this topic up. Maybe the ban has been silently lifted or at least is no longer enforced (even though there was no public announcement about this), leaving everything to the local social norm? However, I would guess that due to the said social norm, if one posted about this topic, unless one made it really clear that one is bringing it up out of genuine curiosity (and with a genuinely interesting question) and not for the sake of trolling or "let's see what will happen", or to make fun of people and their identity, one would receive a lot of downvotes (due to being pattern-matched, which sustains the social norm of not touching this topic). I should add that I wouldn't advise you to test this hypothesis, because that would probably be considered bringing the topic up for the sake of bringing it up. I'm not claiming the situation is perfect, and I would agree that in the ideal case the free marketplace of ideas should prevail, and this discrepancy between the current situation and the ideal case should be resolved somehow.

comment by Emile · 2014-09-16T21:06:04.779Z · LW(p) · GW(p)

the presence of a significant number of people psychologically affected by the basilisk

Does "rolling my eyes and reading something else" count as "psychologically affected"?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-17T01:47:01.980Z · LW(p) · GW(p)

May I suggest reading Singularity Sky by Charles Stross, which has precisely such a menacing future AI as an antagonist? (Spoiler: no basilisk memes involved in the plot; they're obviously not obvious to everyone who thinks of this scenario.)

comment by polymathwannabe · 2014-09-15T20:07:25.038Z · LW(p) · GW(p)

I agree with this so much that, in order to not affect the mechanics of this thread, I'm going to upvote some other post of yours.

Replies from: Dagon
comment by Dagon · 2014-09-15T21:34:11.037Z · LW(p) · GW(p)

wait. now I'm not sure how to vote on THIS comment, which is brilliant.

comment by [deleted] · 2014-09-26T01:49:28.091Z · LW(p) · GW(p)

The overwhelming majority of everyone on LessWrong, now and previously, believes that The Thing is completely ridiculous and would never work at all. Last I heard, Eliezer barely gave thought to the concept that it would really work, but instead blew up at the fact that hapless, innocent readers were being very stressed out by their lack of understanding of why it can't really work.

comment by cousin_it · 2014-09-19T10:37:23.724Z · LW(p) · GW(p)

If you want to point out LW beliefs that sound crazy to most people, I guess you don't need to go as far as Roko's basilisk. FAI or MWI would suffice.

comment by [deleted] · 2014-09-15T19:44:22.617Z · LW(p) · GW(p)

[opening post special voting rules yadda yadda]

Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.

Replies from: Gunnar_Zarncke, FiftyTwo
comment by Gunnar_Zarncke · 2014-09-15T20:24:32.458Z · LW(p) · GW(p)

Upvoted because it is much too specific (too many conjunctions) to be true. Even if many of them sound plausible.

Replies from: None
comment by [deleted] · 2014-09-16T00:36:12.277Z · LW(p) · GW(p)

Bah, I'm always doing that. I have clusters of related suspicions which I put down in one big chunk rather than as separate possibly independent points.

If I had to extract a main point it would be the first bit, biological hominids descended from modern humans existing tens of millions of years from now with their most obvious alterations to the world being an extension of what we have begun with agriculture.

comment by FiftyTwo · 2014-09-21T01:05:05.106Z · LW(p) · GW(p)

Are you imagining these human descendents will be technology using?

Replies from: None
comment by [deleted] · 2014-09-23T22:29:34.953Z · LW(p) · GW(p)

Yes, as hominids have been for more than a million years. An expanded toolkit though, even compared to today (though it's possible that not all of our current tools will have the futures many of us expect, in the long run). Good manipulation of electromagnetism alone is having very interesting effects that we have only really begun to touch on, and I expect biotechnology and related things to have interesting roles to play. All of this will have to occur within the context of ecological laws which are pretty immutable, and living systems are very good at evolving and replicating and surviving in many contexts on this planet.

comment by jsteinhardt · 2014-09-15T17:33:56.402Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.

comment by pragmatist · 2014-09-15T12:48:03.051Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Reductionism as a cognitive strategy has proven useful in a number of scientific and technical disciplines. However, reductionism as a metaphysical thesis (as presented in this post) is wrong. Verging on incoherent, even. I'm specifically talking about the claim that in reality "there is only the most basic level".

comment by [deleted] · 2014-09-15T16:22:41.486Z · LW(p) · GW(p)

Meta-comment: I'm not sure that structure or voting scheme is particularly useful. The hope would be to allow conversation about contrarian viewpoints which are actually worth investigating. I'm not sure how you separate the wheat from the chaff, but that should be the goal...

Replies from: D_Malik, FiftyTwo, DanielLC
comment by D_Malik · 2014-09-15T18:02:48.554Z · LW(p) · GW(p)

Yes. Contrarian position: This thread would be better if we upvoted contrarian positions that are interesting or caused updates, not those that we disagree with.

comment by FiftyTwo · 2014-09-21T00:44:42.624Z · LW(p) · GW(p)

Upvote interestingness, downvote incoherence, ignore agreement and disagreement?

Replies from: None
comment by [deleted] · 2014-09-21T02:47:14.349Z · LW(p) · GW(p)

Although you must be certain that incoherence is actually incoherence. Inferential distance means that an idea sufficiently distant from your own beliefs will seem incoherent. Otherwise I like this.

comment by DanielLC · 2014-09-15T23:15:41.301Z · LW(p) · GW(p)

I think it might be better to have one where you upvote things you agree with, and just never downvote.

comment by lmm · 2014-09-15T21:23:51.296Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

An AI which followed humanity's CEV would make most people on this site dramatically less happy.

Replies from: DanielLC, None, None, cousin_it
comment by DanielLC · 2014-09-15T22:56:02.616Z · LW(p) · GW(p)

Do you mean that, if shown the results, we would decide that we don't like humanity's CEV, or that humanity desires that we be unhappy?

Replies from: lmm, NancyLebovitz
comment by lmm · 2014-09-16T12:08:40.855Z · LW(p) · GW(p)

What Nancy said, so 1, and instrumentally but not terminally 2.

comment by NancyLebovitz · 2014-09-16T05:06:16.641Z · LW(p) · GW(p)

Or possibly that if the majority of people got what they want, most people at LW would be incidentally made unhappy.

comment by [deleted] · 2014-09-15T21:45:15.965Z · LW(p) · GW(p)

My intuition is in agreement with this, but I would love a more worked out description of your own thoughts (in part because my own thoughts aren't clear).

Replies from: lmm
comment by lmm · 2014-09-16T22:04:07.432Z · LW(p) · GW(p)

Most of humanity hates deviants and I don't think there's anything incoherent about that value.

Replies from: VAuroch, John_Maxwell_IV
comment by VAuroch · 2014-09-17T04:38:40.572Z · LW(p) · GW(p)

I don't think you could get enough of humanity to agree on what should be considered "deviant" to make that value cohere.

comment by John_Maxwell (John_Maxwell_IV) · 2014-09-19T07:46:29.720Z · LW(p) · GW(p)

What cross-section of humanity are you familiar with?

Replies from: lmm
comment by lmm · 2014-09-19T09:08:50.759Z · LW(p) · GW(p)

Familiar with? Using the most obvious definition I'd say only my girlfriend.

Due to where I live I have neighbours from a wide variety of races and religions and mostly a different class from the people I grew up with, which is different again from the class I work in now. I haven't lived for any substantial time in a different country. Does that answer your question?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-20T03:52:55.634Z · LW(p) · GW(p)

So you know there are these people called "hipsters" who take pride in showing off their deviance and compete with one another to be deviant in interesting and original ways, right? Do you know many of them?

Replies from: lmm
comment by lmm · 2014-09-20T09:59:55.069Z · LW(p) · GW(p)

And everyone loves hipsters, right? Fellow hipsters of course support each other, but the wider world has nothing but respect and admiration for these people. Satisfying everyone's values would certainly mean there were more hipsters around and that hipsters were encouraged to be even more hipster-ey.

Replies from: John_Maxwell_IV, Azathoth123
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-21T06:07:24.373Z · LW(p) · GW(p)

Well, hipsters often like each other and they are a decently large faction. Also, I think hipsters might be disliked because they are overly intentional about being deviant ("I was in to them before they were cool" as a way to try to one-up someone, etc.)

comment by Azathoth123 · 2014-09-20T19:03:33.537Z · LW(p) · GW(p)

Yet the wider world still tends to assign them high status.

Replies from: lmm
comment by lmm · 2014-09-21T12:47:14.905Z · LW(p) · GW(p)

I don't think that's true. To my eyes hipsters are this generation's nouveau riche; people who have money and some kind of status, but don't conform to upper-class tastes. The wealth and status precedes the hipsterism, it doesn't derive from it.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-21T20:41:34.719Z · LW(p) · GW(p)

Well previous generations' nouveau riche had better taste.

Replies from: Lumifer
comment by Lumifer · 2014-09-21T21:08:14.798Z · LW(p) · GW(p)

Well previous generations' nouveau riche had better taste.

Not from the point of view of the previous generation X-)

comment by [deleted] · 2014-09-26T01:42:00.728Z · LW(p) · GW(p)

If by "CEV" you mean the mean reflectively-consistent utility function of all living humans... then yes, this is most likely true, and I consider it a major flaw of average-utilitarianism for FAI. Why so major? Because any decent science-fiction fan can invent three ways to deal with the mutual incompatibility of some people with some other people that doesn't involve just taking an average and punishing the unusual, off the top of his head, and so can most moral theorists other than average-utilitarians.

comment by cousin_it · 2014-09-19T11:08:46.272Z · LW(p) · GW(p)

That's hard to believe. Unless the AI is really weak, it should improve my life, even if it will improve others' lives more.

comment by lmm · 2014-09-15T21:21:36.157Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The notion of freedom is incoherent. People would be better off abandoning the pursuit of it.

Replies from: None, shminux, polymathwannabe
comment by [deleted] · 2014-09-15T21:49:37.965Z · LW(p) · GW(p)

Freedom meaning what?

Free choice? I don't believe in that.

The right to make any choice which doesn't impair the choices of others? I strongly agree with that.

Replies from: lmm
comment by lmm · 2014-09-16T22:02:13.288Z · LW(p) · GW(p)

I think both are unhelpful.

comment by Shmi (shminux) · 2014-09-15T22:11:49.378Z · LW(p) · GW(p)

What do you think of Free Will Is as Real as Baseball?

Replies from: lmm
comment by lmm · 2014-09-16T21:58:31.284Z · LW(p) · GW(p)

I think it's arguing a level below where I'm coming from (though possibly there's some self-similarity here).

None of this quite settles the question of whether “free will” is actually a crucial ingredient in the best theory of human beings we can imagine developing.

I think it isn't. Which makes the rest of the page true but irrelevant.

comment by polymathwannabe · 2014-09-15T21:34:41.430Z · LW(p) · GW(p)

Where does the incoherence lie?

Replies from: lmm
comment by lmm · 2014-09-15T21:42:07.360Z · LW(p) · GW(p)

The way freedom is usually formulated, in the notion of free will or free choices.

Replies from: TheAncientGeek, polymathwannabe
comment by TheAncientGeek · 2014-09-17T16:05:28.492Z · LW(p) · GW(p)

This is frustrating: I think I can argue against the standard argument for the incoherence of FW... but you haven't given it... or any other.

comment by polymathwannabe · 2014-09-15T21:48:13.193Z · LW(p) · GW(p)

To make sure I'm getting this right: is this the school of anti-freedom where the notion of moral responsibility is also deemed incoherent?

Replies from: lmm
comment by lmm · 2014-09-16T22:03:05.200Z · LW(p) · GW(p)

I would also consider the notion of moral responsibility incoherent. It's not obvious to me that these positions have a common basis.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T22:11:54.551Z · LW(p) · GW(p)

The latter derives from the former:

If my actions are spontaneous and uncaused, I'm not responsible for them.

If my actions are mechanically determined by atoms in my brain, I'm not responsible for them.

comment by pragmatist · 2014-09-15T13:00:27.907Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Causal connections should not be part of our most fundamental model of the Universe. Everything that is useful about causal narratives is a consequence of the Second Law of Thermodynamics, which is irrelevant when we're talking about microscopic interactions. Extrapolating our macroscopic fascination with causation into the microscopic realm has actually impeded the exploration of promising possibilities in fundamental physics.

Replies from: DanielLC, gjm
comment by DanielLC · 2014-09-16T00:11:11.532Z · LW(p) · GW(p)

That would explain why it took so long for someone to discover timeless physics.

Replies from: gjm
comment by gjm · 2014-09-17T11:47:16.392Z · LW(p) · GW(p)

That sentence has the same air of paradox about it as "Many solipsists believe ...". (Perhaps deliberately?)

comment by gjm · 2014-09-17T11:47:59.703Z · LW(p) · GW(p)

I confess I'm surprised by all the upvotes this one is getting. I thought it was quite a mainstream view.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-17T16:24:40.048Z · LW(p) · GW(p)

I upvoted it because of the last sentence. Physicists are well aware that the differential equations (as opposed to the boundary conditions) of our universe are most likely CPT-invariant.

Replies from: pragmatist, gjm
comment by pragmatist · 2014-09-17T18:07:10.389Z · LW(p) · GW(p)

Yes, they are aware of this, but there are many examples of physicists failing to grasp the full significance of this fact. There is a difference between physicists acknowledging the CPT-invariance of fundamental laws, and fully embracing the philosophical consequences of this invariance. Huw Price's Time's Arrow and Archimedes' Point documents a number of cases of physicists failing to do the latter.

For further examples, see the mess of a priori causality conditions and chronology protection conjectures in GR, largely motivated by a desire to avoid "causal paradoxes". Tachyons are declared unphysical for similar reasons. So-called retrocausal interpretations of quantum mechanics, a very promising research topic IMO, are largely unexplored. Advanced (as opposed to retarded) solutions to differential equations are ruled unphysical. I could go on.

There is still a big unwritten assumption in theoretical physics that proper scientific explanations must account for things that happen now in terms of things that happened earlier. I can't think of any reason for this bias beyond an attachment to causal narratives.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-17T22:32:47.250Z · LW(p) · GW(p)

see the mess of a priori causality conditions and chronology protection conjectures in GR

Classical GR actually rules out changing the past (while allowing CTCs), despite the common misconceptions about it. Novikov's self-consistency principle was self-admittedly a way to say "there is no new physics other than GR". Hawking's famous chronology protection paper mainly showed that QFT cannot be done in the standard way on a wormhole background.

Tachyons are declared unphysical for similar reasons.

They are generally "declared unphysical" because the time-travel aspects cannot be analyzed self-consistently. Plus there is little evidence for it. However, non-propagating tachyon fields are not incompatible with GR. For example, a 2+1D slice of a 3+1D Schwarzschild black hole contains induced FTL matter fields, Kaluza-Klein style. PM me if you want more details.

So-called retrocausal interpretations of quantum mechanics

I can't imagine how an interpretation can be a promising research topic.

There is still a big unwritten assumption in theoretical physics that proper scientific explanations must account for things that happen now in terms of things that happened earlier. I can't think of any reason for this bias beyond an attachment to causal narratives.

I think this is overstating it. As long as a model is capable of predicting future observations based on the current ones, it is worth considering. If not, then it's no longer a natural science, but at best math.

comment by gjm · 2014-09-17T17:00:26.892Z · LW(p) · GW(p)

Oh, that's a good point. I wonder whether pragmatist really meant to make such a strong statement there.

comment by gjm · 2014-09-15T11:32:43.032Z · LW(p) · GW(p)

A word of advice: Perhaps anyone posting a comment here with the intention of voicing a contrarian opinion and getting upvotes for disagreement should indicate the fact explicitly in their comment. Otherwise I predict that the upvote/downvote signal will be severely corrupted by people voting "normally". (Especially if these comments produce discussion -- if A posts something you strongly disagree with and B posts a very good and clearly-explained reason for disagreeing, what are you supposed to do? I suggest the right thing here is to upvote both A and B, but it's liable to be easy to get confused...)

[EDITED to add: 1. For the avoidance of doubt, of course the above is not intended to be a controversial opinion and if you vote on it you should do so according to the normal conventions, not the special ones governing this discussion. 2. It is possible to edit your own comments; if you read the above and think it's sensible, but have already posted a contrarian opinion here, you can fix it.]

comment by Metus · 2014-09-15T11:24:37.180Z · LW(p) · GW(p)

Social problems are nearly impossible to solve. The methods we have developed in the hard sciences and engineering are insufficient to solve them.

Replies from: pragmatist, shminux
comment by pragmatist · 2014-09-15T12:36:13.637Z · LW(p) · GW(p)

Would you disagree with the claim that several significant social problems have in fact been solved over the history of human civilization, at least in parts of the world? Or are you saying that those were the low-hanging fruit and the social problems that remain are nearly impossible to solve?

What would you say about the progress that has been made towards satisfying the Millennium Development Goals?

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-16T00:56:53.859Z · LW(p) · GW(p)

Looking at the list, I would say that to the extent progress has been made towards them (and to the extent they're worthy goals, the "sustainable development" one is trying to solve the wrong problem and the "gender equality" one is just incoherent) it is incidental to the efforts of the UN.

comment by Shmi (shminux) · 2014-09-15T18:20:03.455Z · LW(p) · GW(p)

Yvain seems to agree.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-16T00:54:21.087Z · LW(p) · GW(p)

The problem with Yvain's argument is that it appears to be an example of the PHB fallacy "anything I don't understand is easy to do". Or rather the "a little knowledge" problem "anything I sort of understand is easy to do".

During the Enlightenment, when people first started talking about reorganizing society on a large scale, it seemed like a panacea. Now that we have several centuries of extremely messy experience with it, we know that it's harder than it at first appeared and there are many complications. Now that developments in biology seem to make it possible to make changes to biology, it again looks like a panacea (at least to the people who haven't learned the lessons of the previous failure). And just as before, I predict people will discover that it's a lot more complicated, and probably just as messy.

comment by Ixiel · 2014-09-16T12:19:38.126Z · LW(p) · GW(p)

English has a pronoun that can be used for either gender and, as an accident of history not some hidden agenda, said pronoun in English is "he/him/&c."

Edited: VAuroch is the best kind of correct on "neuter" pronouns. Changed, though that might make a view less controversial than I thought (all but 2 readers agree, really?) even less so :)

Replies from: VAuroch, Slider
comment by VAuroch · 2014-09-17T04:45:24.347Z · LW(p) · GW(p)

I consider this an incoherent claim. "A neuter pronoun", inherently, is one that can be applied to individuals regardless of gender (actual or grammatical). That's what people want when they wish English had a neuter pronoun. 'He/him/his' is not such a pronoun. "They/them/their" is.

Replies from: Ixiel
comment by Ixiel · 2014-09-17T11:05:58.815Z · LW(p) · GW(p)

Nope. "Of all the men and women here, one will prove his worth" is grammatical and does not imply a man IMO. I'm not defining myself right of course, just clarifying why my contrarian claim is coherent.

Replies from: VAuroch, None
comment by VAuroch · 2014-09-18T03:48:31.295Z · LW(p) · GW(p)

That was historically true, but many women and nonbinary people disagree with the statement that it is still true. And it was never neuter; it used to be the case that using male pronouns for an unspecified person was grammatically valid.

Replies from: Ixiel
comment by Ixiel · 2014-09-18T10:55:35.740Z · LW(p) · GW(p)

You are exactly right on technical use of "neuter." Fixed, and thank you.

What is a nonbinary person in the sense you are using it, apart from a subset of non-women? I can't get use from context. Just for curiosity, and probably off topic so pm if exactly one person cares.

If I thought everybody agreed it wouldn't be contrarian now would it?

Replies from: VAuroch
comment by VAuroch · 2014-09-19T05:03:47.644Z · LW(p) · GW(p)

Nonbinary people consider themselves neither male nor female, both male and female, male and female individually but at different times, or any other vector combination of genders besides {1,0} and {0,1}; naturally, they are all transgender. They're fairly uncommon, largely because the idea of identifying as nonbinary is not available to the vast majority of people, and it would be stigmatized if they did choose to adopt it.

Replies from: Ixiel
comment by Ixiel · 2014-09-19T11:38:58.656Z · LW(p) · GW(p)

Huh, interesting. I had never heard of that, thank you.

comment by [deleted] · 2014-09-18T22:09:08.894Z · LW(p) · GW(p)

How about "every married person should love his husband or wife"?

comment by Slider · 2014-09-22T19:59:27.760Z · LW(p) · GW(p)

Singular "they" existed but then it waned out of use. It has seen some comebacks. If gender information would not be criticfal it woudl have been "he" that would ahve vaned. It might not be a hidden agenda but more like ununderstood or emergent derived agenda.

comment by [deleted] · 2014-09-15T18:22:27.551Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Artificial intelligences are overrated as a threat, and institutional intelligences are underrated.

Replies from: RowanE
comment by RowanE · 2014-09-17T13:22:19.957Z · LW(p) · GW(p)

Overrated in LessWrong, or globally? I upvoted assuming the former, although I agree that institutional intelligences are an underrated threat.

Replies from: None
comment by [deleted] · 2014-09-17T23:40:25.011Z · LW(p) · GW(p)

I intended the former. I'm not sure if it would make sense to say it's the case globally: certain institutional intelligences are sometimes taken as threats (see: the global warming debate), but there isn't really a theory, or even a widespread concept, of institutional intelligences in general. (My suspicion is that it was worked out to some extent in the '30s, and then WW2 went the way it did and the Cold War came around and all that made it seriously unfashionable.)

comment by AABoyles · 2014-09-17T16:04:14.433Z · LW(p) · GW(p)

The universe we perceive is probably a simulation of a more complex Universe. In breaking with the simulation hypothesis, however, the simulation is not originated by humans. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.

Replies from: Ronak
comment by Ronak · 2014-09-18T21:50:32.975Z · LW(p) · GW(p)

Why? This looks as if you're taking a hammer to Ockham's razor.

Replies from: AABoyles
comment by AABoyles · 2014-09-19T13:40:44.340Z · LW(p) · GW(p)

In the strictest sense, yes I am. I design, build and test social models for a living (so this may simply be a case of me holding Maslow's Hammer). The universe exhibits a number of physical properties which resemble modeling assumptions. For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn't enforce upper bounds--what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability.

On any given day, I'll instantiate thousands of models. Having many models running in parallel is useful! We observe one universe, but if there's a non-zero probability that the universe is a model of something else (a possibility which Ockham's Razor certainly doesn't refute), the fact that I generate so many models is indicative of the possibility that a super-universal process or entity may be doing the same thing, of which our universe is one instance.

Replies from: btrettel, btrettel
comment by btrettel · 2014-09-25T17:07:57.736Z · LW(p) · GW(p)

I do think it's useful to use what we know about simulations to inform whether or not we live in one. As I said in my other comment, I don't think a finite speed of light, etc., says much either way, but I do want to note a few things that I think would be suggestive.

If time were discrete and the time step appeared to be a function of known time step limits (e.g., the CFL condition), I would consider that to be good evidence in favor of the simulation hypothesis.
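
For readers who haven't met it, the CFL (Courant-Friedrichs-Lewy) condition in its simplest one-dimensional form is the textbook stability limit sketched below, where u is the advection/wave speed, Δt the time step, Δx the grid spacing, and C_max a scheme-dependent constant (often 1); this is standard material, not anything specific to the simulations mentioned here.

```latex
% One-dimensional CFL stability condition for an explicit scheme:
% the Courant number C must stay below a scheme-dependent maximum.
C = \frac{u\,\Delta t}{\Delta x} \le C_{\max}
\qquad\Longleftrightarrow\qquad
\Delta t \le C_{\max}\,\frac{\Delta x}{u}
```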

The jury is still out on whether time is discrete, so we can't evaluate the second necessary condition. If time were discrete, this would be interesting and could be evidence for the simulation hypothesis, but it'd be pretty weak. You'd need something further that indicates something about the algorithm, like the time step limit, to draw a stronger conclusion.

Another possibility is if some conservation principle were violated in a way that would reduce computational complexity. In the water sprinkler simulations I've run, droplets are removed from the simulation when their size drops below a certain (arbitrary) limit as these droplets have little impact on the physics, and mostly serve to slow down the computation. Strictly speaking, this violates conservation of mass. I haven't seen anything like this in physics, but its existence could be evidence for the simulation hypothesis.
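
As a minimal sketch of the kind of culling described above (the droplet class, the cutoff value, and the function name are all made up for illustration; real spray codes are far more involved):

```python
# Illustrative sketch of droplet culling in a spray simulation.
# The Droplet class, MIN_RADIUS cutoff, and function name are hypothetical.
from dataclasses import dataclass

@dataclass
class Droplet:
    radius: float  # metres
    mass: float    # kilograms

MIN_RADIUS = 1e-6  # arbitrary cutoff below which droplets are discarded

def cull_small_droplets(droplets):
    """Drop droplets below the cutoff; return survivors and the mass removed."""
    survivors = [d for d in droplets if d.radius >= MIN_RADIUS]
    lost_mass = sum(d.mass for d in droplets) - sum(d.mass for d in survivors)
    return survivors, lost_mass
```

The returned lost_mass is exactly the (small) violation of mass conservation being traded for speed.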

comment by btrettel · 2014-09-23T20:40:34.964Z · LW(p) · GW(p)

For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn't enforce upper bounds--what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability.

This is not true in general. I've considered a similar idea before, but as a reason to believe we don't live in a simulation (not that I think this is a very convincing argument). I work in computational fluid dynamics. "Low-Mach"/incompressible fluid simulations, where the speed of sound is assumed infinite, are much more tractable than the same situation run on a "high Mach" code, even if the actual fluid speeds are very subsonic. The difference in running time is at least an order of magnitude.

To be fair, it can go either way. The speed of the fluid is not "absolutely bounded" in these simulations. These simulations are not relativistic, and treating them as that would make things more complicated. The speed of acoustic waves, however, is treated infinite in the low Mach limit. I imagine there are situations in other branches of mathematical physics where treating a speed as infinite (as in the case of acoustic waves) or zero (as in the non-relativistic case) simplifies certain situations. In the end, it seems like a wash to me, and this offers little evidence in favor or against the simulation hypothesis.

Replies from: AABoyles
comment by AABoyles · 2014-09-24T14:03:32.929Z · LW(p) · GW(p)

Huh. It never occurred to me that imposing finite bounds might increase the complexity of a simulation, but I can see how that could be true for physical models. Is the assumption you're making in the Low Mach/incompressible fluid models that the speed of sound is explicitly infinite, or is it that the speed of sound lacks an upper bound? (i.e., is there a point in the code where you have to declare something like "sound.speed = infinity"?)

Anyway, I've certainly never encountered any such situation in models of social systems. I'll keep an eye out for it now. Thanks for sharing!

Replies from: Lumifer, btrettel
comment by Lumifer · 2014-09-24T14:50:33.100Z · LW(p) · GW(p)

It never occurred to me that imposing finite bounds might increase the complexity of a simulation

As a trivial point, imposing finite bounds means that you can't use the normal distribution, for example :-)

Replies from: AABoyles
comment by AABoyles · 2014-09-24T16:22:07.809Z · LW(p) · GW(p)

Not true: it means you shouldn't use a normal distribution, and when you do you should say so up front. I see no reason not to apply normal distributions if your limit is high (say, greater than 4 sigmas--social science is much fuzzier than physical science). Better yet, make your limit a function of the number of observations you have. As the probability of getting into the long tail gets higher, make the tail longer.

Replies from: Lumifer
comment by Lumifer · 2014-09-24T16:39:07.561Z · LW(p) · GW(p)

Truncated normal is not the same thing as a plain-vanilla normal. And using it does mean increasing the complexity of the simulation.

Replies from: AABoyles
comment by AABoyles · 2014-09-24T17:14:54.195Z · LW(p) · GW(p)

Sentence 1: True, fair point. Sentence 2: This isn't obvious to me. Selecting random values from a truncated normal distribution is (slightly) more complex than, say, a uniform distribution over the same range, but it is demonstrably (slightly) less complex than selecting random values from an unbounded normal distribution. Without finite boundaries, you'd need infinite precision arithmetic just to draw a value.
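
To make the comparison concrete, here is a minimal sketch of the three kinds of draw being compared, using SciPy; the ±4 sigma bounds echo the earlier example and are otherwise arbitrary.

```python
# Drawing 1000 values from a bounded uniform, an unbounded normal,
# and a truncated normal. Purely illustrative; bounds are arbitrary.
import numpy as np
from scipy.stats import norm, truncnorm

uniform_draws = np.random.uniform(-4.0, 4.0, size=1000)  # bounded, uniform
normal_draws = norm.rvs(size=1000)                        # unbounded normal
# truncnorm takes its bounds in units of standard deviations from the mean
truncated_draws = truncnorm.rvs(-4.0, 4.0, size=1000)     # bounded normal
```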

Replies from: Lumifer
comment by Lumifer · 2014-09-24T17:53:03.524Z · LW(p) · GW(p)

Sentence 2: This isn't obvious to me

The problem is not with value selection, the problem is with model manipulation. The normal distribution is very well-studied, it has a number of appealing properties which make working with it rather convenient, there is a lot of code written to work with it, etc. Replace it with a truncated normal and suddenly a lot of things break.

Replies from: AABoyles
comment by AABoyles · 2014-09-24T18:04:56.342Z · LW(p) · GW(p)

Oh! I see what you're saying. Definitely can't argue with that.

comment by btrettel · 2014-09-24T14:45:09.581Z · LW(p) · GW(p)

Glad you found my post interesting. I found yours interesting as well, as I thought I was the only one who made any argument along those lines.

There's no explicit step where you say the speed of sound is infinite. That's just the net effect of how you model the pressure field. In reality, the pressure comes from thermodynamics at some level. In the low-Mach/incompressible model, the pressure only exists to enforce mass conservation, and in some sense is "junk" (though it still compares favorably against exact solutions). Basically, you do some math to decouple the thermodynamic and "fluctuating" pressure (this is really the only change; the remainder are implications of the change). You end up with a Poisson equation for the ("fluctuating") pressure, and this equation lacks the ability to take into account finite pressure/acoustic wave speeds. The wave speed is effectively infinite.
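
For reference, a minimal sketch of what that looks like for constant-density incompressible flow (take the divergence of the momentum equation and use the divergence-free constraint; details vary by formulation):

```latex
% Incompressibility constraint and the resulting pressure Poisson equation
% (constant density; the viscous term drops out of the divergence).
\nabla \cdot \mathbf{u} = 0,
\qquad
\nabla^2 p = -\rho\,\nabla \cdot \big[(\mathbf{u}\cdot\nabla)\,\mathbf{u}\big]
```

Because this equation for p is elliptic, with no time derivative, the pressure adjusts everywhere instantaneously, which is the "infinite acoustic wave speed" behaviour described above.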

To be honest, I need to read papers like this to gain a fuller appreciation of all the implications of this approximation. But what I describe is accurate if lacking in some of the details.

In some ways, this does make things more complicated (pressure boundary conditions being one area). But in terms of speed, it's a huge benefit.

Here's another example from my field: thermal radiation modeling. If you use ray tracing (like 3D rendering) then it's often practical to assume that the speed of light is infinite, because it basically is relative to the other processes you are looking at. The "speed" of heat conduction, for example, is much slower. If you used a finite wave speed for the rays then things would be much slower.

Replies from: AABoyles
comment by AABoyles · 2014-09-24T16:15:40.369Z · LW(p) · GW(p)

That makes a lot of sense. I asked about explicit declaration versus implicit assumption because assumptions of this sort do exist in social models. They're just treated as unmodeled characteristics either of agents or of reality. We can make these assumptions because they either don't inform the phenomenon we're investigating (e.g. infinite ammunition can be implicitly assumed in an agent-based model of battlefield medic behavior because we're not interested in the draw-down or conclusion of the battle in the absence of a decisive victory) or the model's purpose is to investigate relationships within a plausible range (which sounds like your use case). That said, I'm very curious about the existence of models for which explicitly setting a boundary of infinity can reduce computational complexity. It seems like such a thing is either provably possible or (more likely) provably impossible. Know of anything like that?

Replies from: btrettel, Azathoth123
comment by btrettel · 2014-09-25T17:01:05.697Z · LW(p) · GW(p)

I see your distinction now. That is a good classification.

To go back to the low-Mach/incompressible flow model, I have seen series expansions in terms of the Mach number applied to (subsets of) the fluid flow equations, and the low-Mach approximation is found by setting the Mach number to zero. (Ma = v / c, so if c, the speed of sound, approaches infinity, then Ma goes to 0.) So it seems that you can go the other direction to derive equations starting with the goal of modeling a low-Mach flow, but that's not typically what I see. There's no "Mach number dial" in the original equations, so you basically have to modify the equations in some way to see what changes as the Mach number goes to zero.
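
Schematically, the kind of expansion meant here looks like the following (only a sketch of the idea; the actual derivations organize the terms more carefully):

```latex
% Schematic low-Mach asymptotic expansion of a flow variable \phi;
% the low-Mach equations come from the leading-order terms as Ma -> 0.
\phi(\mathbf{x},t;\mathrm{Ma}) = \phi^{(0)}(\mathbf{x},t)
  + \mathrm{Ma}\,\phi^{(1)}(\mathbf{x},t)
  + \mathrm{Ma}^2\,\phi^{(2)}(\mathbf{x},t) + \cdots
```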

For this entire class of problems, even if there were a "Mach number dial", you wouldn't recover the nice mathematical features you want for speed by setting the Mach number to zero in a code that can handle high Mach physics. So, for fluid flow simulations, I don't think an explicit declaration of infinite sound speed reducing computational time is possible.

From the perspective of someone in a fluid-flow simulation (if such a thing is possible), however, I don't think the explicit-implicit classification matters. For all someone inside the simulation knows, the model (their "reality") explicitly uses an infinite acoustic wave speed. This person might falsely conclude that they don't live in a simulation because their speed of sound appears to be infinite.

comment by Azathoth123 · 2014-09-25T00:36:46.790Z · LW(p) · GW(p)

It seems like such a thing is either provably possible or (more likely) provably impossible. Know of anything like that?

Btrettel's example of ray tracing in thermal radiation is such a model. Another example from social science: basic economic and game theory often assume the agents are omniscient or nearly omniscient.

Replies from: AABoyles
comment by AABoyles · 2014-09-25T13:56:13.814Z · LW(p) · GW(p)

False: Assuming something is infinite (unbounded) is not the same as coercing it to a representation of infinity. Neither of those examples when represented in code would require a declaration that thing=infinity. That aside, game theory often assumes players have unbounded computational resources and a perfect understanding of the game, but never omniscience.

Replies from: lackofcheese
comment by lackofcheese · 2014-09-25T23:57:25.938Z · LW(p) · GW(p)

A better term is "logical omniscience".

comment by Thomas · 2014-09-15T11:07:55.382Z · LW(p) · GW(p)

There are some I hold:

  • 1: evolution isn't inherently slow. Sometimes/often it can be faster than any other known method.
  • 2: thinking is nothing but evolutionary process in our heads. No really deep secrets here to be uncovered.
  • 3: from 1 and 2 a really near Singularity is possible, but not mandatory
  • 4: the infinity is not even a coherent concept
  • 5: there are no aliens nearby, life is rare (but this is not a contrarian position anymore)
  • 6: Venus is hot due to volcanoes
  • 7: Mother Nature is a stupid bitch. Some species - like wild dogs of Africa - make everything even worse
  • 8: the Universe is not only to conquer, but to re-shape completely
  • 9: Rome was magnificent, Carthage was not
  • 10: the Relativity isn't coherent either
Replies from: gjm, None, DanielLC
comment by gjm · 2014-09-15T11:30:00.864Z · LW(p) · GW(p)

These are 10 different propositions. Fortunately I disagree with most of them so can upvote the whole bag with a clear conscience, but it would be better for this if you separated them out.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-09-15T14:49:43.241Z · LW(p) · GW(p)

I agree with this meta-comment. Should I downvote it?

Replies from: gjm
comment by [deleted] · 2014-09-15T16:16:37.182Z · LW(p) · GW(p)

the Relativity isn't coherent either

Care to explain this one?

Replies from: Thomas
comment by Thomas · 2014-09-15T16:28:16.947Z · LW(p) · GW(p)

Yes.

A big pie, rotating in the sky, should have apparently shorter circumference than a non-rotating one, and both with the same radii.

I can't swallow this. Not because it is weird, but because it is inconsistent.

Replies from: The_Duck, Alejandro1, None
comment by The_Duck · 2014-09-15T23:13:16.688Z · LW(p) · GW(p)

A big pie, rotating in the sky, should have apparently shorter circumference than a non-rotating one, and both with the same radii.

I can't swallow this. Not because it is weird, but because it is inconsistent.

There is no inconsistency. In one case you are measuring the circumference with moving rulers, while in the other case you are measuring the circumference with stationary rulers. It's not inconsistent for these two different measurements to give different results.

Replies from: Thomas
comment by Thomas · 2014-09-15T23:27:50.219Z · LW(p) · GW(p)

No. I am measuring from here, from the centre with some measuring device.

First I measure the stationary pie, then I measure the rotating one. Those red-white stripes either stay constant or they shrink. If they are shrinking, they should multiply as well. If they are not shrinking, what happened to Mr. Lorentz's contraction?

Replies from: Jiro
comment by Jiro · 2014-09-16T02:02:57.962Z · LW(p) · GW(p)

If you measure a wheel with a ruler, and the wheel is moving relative to the ruler, then your measurement assumes that both ends of a piece of the wheel line up with both ends of a piece of the ruler at the same time. Whether these events happen at the same time, and therefore whether this is a measurement of the wheel, are different depending on the frame of reference.

comment by Alejandro1 · 2014-09-15T17:40:34.740Z · LW(p) · GW(p)

Why is it inconsistent?

Replies from: shminux, Thomas
comment by Shmi (shminux) · 2014-09-15T18:28:33.208Z · LW(p) · GW(p)

Special Relativity + some basic mechanics leads to an apparent contradiction in the expected measurements, which is only resolved by introducing a curved space(time). So this would be a failure of self-consistency: the same theory leads to two different results for the same experiment.

However, the two measurements of ostensibly the same thing are done by different observers, so there is no requirement that they should agree. Introducing curved space for the rotating disk shows how to calculate distances consistently.

Replies from: DanielLC, army1987
comment by DanielLC · 2014-09-15T22:27:44.372Z · LW(p) · GW(p)

The problem is that it's inconsistent with solid-body physics?

Solid-body physics is an approximation. This isn't hard to show. Just bend something.

Consider the model of masses connected by springs. This is consistent with special relativity, and can be used to model solid-body physics. In fact, it's a more accurate model of reality than solid-body physics.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-15T23:03:01.579Z · LW(p) · GW(p)

No, that's not the issue. The problem is that no flat-space configuration works.

comment by A1987dM (army1987) · 2014-09-17T16:52:14.164Z · LW(p) · GW(p)

which is only resolved by introducing a curved space(time).

The spacetime itself is flat (if the mass of the pie is negligible), but the spacelike slices are curved because you're slicing it in a weird way.

comment by Thomas · 2014-09-15T18:21:43.220Z · LW(p) · GW(p)

I have two photos of two different pies, one of a rotating one and one of a non-rotating one. The photos are indistinguishable; I can't tell which is which.

On the other hand, both pies have a one-to-one correspondence with the photos, and one should be slightly deformed at the edge.

Even if it is, on the photo it can't be. The photo is perfectly Euclidean. I have measured no Lorentz contraction.

Replies from: gjm, DanielLC, Slider, Alejandro1
comment by gjm · 2014-09-15T20:05:19.411Z · LW(p) · GW(p)

In other news, the earth is really flat because photographs of the earth are flat.

comment by DanielLC · 2014-09-15T22:41:37.101Z · LW(p) · GW(p)

Just to clarify, is the spinning pie a set of particles in the same relative position as with a still pie, but rotating around the origin? Is it a set of masses connected by springs that has reached equilibrium (none of the springs are stretching or compressing) and the whole system is spinning? Is the pie a solid body?

What exactly we're looking at depends on which of the first two you picked. If you picked the third, it is contradictory with special relativity, but there's a lot more evidence for special relativity than there is for the existence of a solid body. Granted, a sufficiently rigid body will still be inconsistent with special relativity, but all that means is that there's a maximum possible rigidity. Large objects are held together by photons, so we wouldn't expect sound to travel through them faster than light.
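
One standard way to make "maximum possible rigidity" concrete (a textbook back-of-the-envelope bound, not something from this thread): the longitudinal sound speed in a rod is set by its stiffness, and requiring it not to exceed c caps the stiffness.

```latex
% Longitudinal sound speed in a thin rod of Young's modulus E and density rho,
% and the stiffness bound implied by requiring c_s <= c.
c_s = \sqrt{\frac{E}{\rho}} \le c
\qquad\Longrightarrow\qquad
E \le \rho\,c^2
```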

Replies from: Thomas
comment by Thomas · 2014-09-15T23:18:59.920Z · LW(p) · GW(p)

The spinning set of particles is a toroid, let's say 1 million light years across - the big R - with a small r of just 1 centimetre. It is painted red and white, alternating every metre.

The whole composition starts to slowly rotate on a signal from the centre, and slowly, very slowly, accelerates to reach a speed of 0.1 c over several million years.

Now, do we see any Lorentz contraction, due to SR, or none, due to GR?

(Small rockets powered by radioactive decay are more than enough to compensate for the acceleration and for the centrifugal force. Both incredibly small. This is the reason why we have chosen such a big scale.)

Replies from: DanielLC
comment by DanielLC · 2014-09-15T23:34:40.172Z · LW(p) · GW(p)

I'm going to assume the mass is small enough that GR effects can be ignored.

From the point of view of a particle on the toroid, the band it's in will extend to about 1.005 meters long. Due to Lorentz contraction, from the point of reference of someone in the center, it will appear one meter long.
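
The 1.005 figure is just the Lorentz factor at v = 0.1 c:

```latex
% Lorentz factor at v = 0.1 c
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
       = \frac{1}{\sqrt{1 - 0.1^2}}
       = \frac{1}{\sqrt{0.99}} \approx 1.005
```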

Replies from: Thomas
comment by Thomas · 2014-09-15T23:39:59.693Z · LW(p) · GW(p)

The question is ONLY for the central observer. At first he sees 1 m long stripes, but when the whole thing reaches the speed of 0.1 c, how long is each stripe?

Replies from: DanielLC
comment by DanielLC · 2014-09-15T23:54:49.036Z · LW(p) · GW(p)

One meter.

I just want to clarify. I'm assuming the particles are not connected, or are elastic enough that stretching them by a factor of 1.005 isn't a huge deal. If you tried that with solid glass, it would probably shatter.

Come to think of it, this looks like a more complicated form of Bell's spaceship paradox.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-17T16:57:08.536Z · LW(p) · GW(p)

I think you're right, but if you're interpreting “sees” literally I'm not 100% sure of that, because of light aberration (the Terrell-Penrose effect).

Replies from: DanielLC
comment by DanielLC · 2014-09-17T17:23:17.595Z · LW(p) · GW(p)

I wasn't interpreting "sees" literally, but it wouldn't make much of a difference. Since the observer is in the center of the circle, the light lag is the same everywhere. The only difference is that the circles bordering the bands will look slightly slanted, and the colors will be slightly red-shifted by the transverse Doppler effect.

comment by Slider · 2014-09-15T19:41:24.326Z · LW(p) · GW(p)

Place red and white equilength rulers on the edge of the cylinder. The rotating cylinder will have more and shorter rulers. Thus the photos are not the same. Even better, have the cylinder slowly pulse in different colors. The edges will pulse more slowly, thus not being in sync with the center.

A related phenomenon is that moving ladders fit into garages that stationary ones would not.

Replies from: Thomas, Jiro
comment by Thomas · 2014-09-15T20:52:26.915Z · LW(p) · GW(p)

Place red and white equilength rulers on the edge of the cylinder. The rotating cylinder will have more and shorter rulers.

They will multiply as the orbital speed increases? Say that Arabic numerals are written on the rulers, and that there are 77 of them at the beginning. Will this system know when to engage the number 78?

Or will there be two copies of number 57 at first? Or how is it going to be?

Replies from: Slider
comment by Slider · 2014-09-15T21:19:47.372Z · LW(p) · GW(p)

I was thinking of an already-spinning cylinder and then adding the sticks by accelerating them into place.

If you had the same sticks already in place, the sticks would feel a stretch. If they resist this stretch they will pull apart, so there will be bigger gaps between them. Separate measuring sticks have no tensile strength in the gaps between them. However, if you had a measuring rope with continuous tensile strength, and a beginning/end point where the start was fixed but new rope could freely be pulled from the end point, you would see the numbers increase (much like waist measurements when getting fatter). However, the purported cylinder has maximum tensile strength everywhere, continuously. Thus that strength would actually work against the rotating force, making it resist rotation. A non-rigid body will rupture and start to look like a star.

So no there would not be duplicate sticks but yes the rope would know to engage number 78.

If you would fill up a rotating cylinder with sticks and spin it down the stick would press against each other crushing to a smaller lenght. A measuring rope with a small pull to accept loose rope would reel in. A non-rigid body slowing down would spit-out material in bursts that might come resemble volcanoes.

comment by Jiro · 2014-09-15T19:46:50.370Z · LW(p) · GW(p)

Saying that a moving ladder "fits" means that the start of the ladder is in the garage at the same time that the end of the ladder is. If the ladder is moving and contracted because of relativity, these two events are not simultaneous in all reference frames. Thus, you cannot definitely say that the moving ladder fits--whether it fits depends on your reference frame. (In another reference frame you would see the ladder longer than the garage, but you would also see the start of the ladder pass out of the garage before the end of the ladder passes into it.)

Replies from: Slider
comment by Slider · 2014-09-15T20:07:39.322Z · LW(p) · GW(p)

Why have that definition of "fit"? I could eqaully well say that fitting means that there is a reference frame that has a time where the ladder is completely inside.

If you had the carage loop back so that the end would be glued to the start you could still spin the ladder inside it. From the point of the ladder it would appear to need to pass the garage multiple times to oene fit ladder lenght but from the outside it would appear as if the ladder fits within one loop completely. With either perspective the one garage space enough to contain the ladder without collisions. In this way it most definetly fits. Usually garages are thought to be space-limited but not time limited. Thus the eating of the time-dimension is a perfectly valid way of staying within the spatial limits.

edit: actually there is a godo reazson to priviledge the rest frame oft he garage as the one that count as ragardst to fitting as then all of the fitting happens within its space and time.

Replies from: DanielLC
comment by DanielLC · 2014-09-15T22:49:03.851Z · LW(p) · GW(p)

Why have that definition of "fit"? I could eqaully well say that fitting means that there is a reference frame that has a time where the ladder is completely inside.

In that case, the ladder fits.

From the point of the ladder

Each rung of the ladder has a distinct reference frame. "From the point of the ladder" is meaningless.

Replies from: Slider
comment by Slider · 2014-09-16T09:03:28.638Z · LW(p) · GW(p)

If the ladder's point of view is ill-defined, so is the garage's point of view, as the front and back of the garage have distinct reference frames. Any inertial reference frame is equally good. The ladder is not accelerating, and is thus inertial. In the sense that we can talk of any frame as more than a single event or world line, the ladder frame is perfectly good.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T15:54:30.006Z · LW(p) · GW(p)

In the normal example, where the ladder is straight and moving forward, it has only one reference frame. Strictly speaking, each rung has a different reference frame, but they differ only by translation.

From what I understand, you modified it to a circular ladder spinning in a circular garage. In this case, each rung is moving in a different direction, and therefore at a different velocity. Thus, each rung has its own reference frame.

Replies from: Slider
comment by Slider · 2014-09-16T23:52:23.738Z · LW(p) · GW(p)

ah, I meant to glue the end and start together without curved shape/motion. But I guess that is physically unrealisable and potentially more distracting than explanatory.

Replies from: DanielLC
comment by DanielLC · 2014-09-17T17:32:22.749Z · LW(p) · GW(p)

Actually that's not a big deal. Technically you need general relativity to do that, but it's just a quotient space on special relativity. In any case, it works out exactly the same as an infinite series of ladders and garages.

There is one thing you have to be careful about. From the rest frame, the universe could be described as repeating itself every, say, ten feet. But from the point of view of the ladder, it's repeating itself every five feet and 8.8 nanoseconds. That is, if you move five feet, you'll be in the same place, but your clock will be off by 8.8 nanoseconds.
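
Here is a minimal sketch of how that periodic identification transforms under a boost. The speed used below (0.6 c) is an arbitrary assumption chosen only for illustration, so the printed figures are not meant to match the specific 20 ft / 8.8 ns numbers debated in the following comments:

```python
import math

c = 299_792_458.0          # m/s
X = 10 * 0.3048            # spatial period in the garage frame: 10 ft, in metres

# Illustrative speed -- an assumption for this sketch, not the value debated here.
v = 0.6 * c
g = 1 / math.sqrt(1 - (v / c) ** 2)

# In the garage frame the events (x, t) and (x + X, t) are identified.
# Boosting both by v, the identification becomes
#   (x', t')  ~  (x' + g*X,  t' - g*v*X/c**2)
dx = g * X
dt = -g * v * X / c**2
print(f"spatial period in the moving frame: {dx / 0.3048:.2f} ft")
print(f"time offset of the identification:  {dt * 1e9:.2f} ns")
# Moving one period forward in space lands you at the same place,
# but with the clock offset by dt (the sign depends on the direction of motion).
```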

Replies from: Slider
comment by Slider · 2014-09-17T22:55:02.770Z · LW(p) · GW(p)

Actually, from the point of view of the ladder, the universe still repeats every ten feet. It is just that from its point of view it takes the space of two garages at any one instant. Both the garage and the ladder are in a state of rest and show equally good times. Yes, they read differently, but that doesn't mean they are in error.

I am not sure whether it would see other instances of itself. I only specified a spatial gluing, and not that the garage be split into timeslices. I guess that the change of point of view has changed some of that gluing to be from future to past. For if the ladder were too long, the front end would not crash into the back end at the same ladder time but into a future one. (Ignoring the problem of how you would try to slide the ladder into too small a hole in the first place.)

Replies from: DanielLC
comment by DanielLC · 2014-09-17T23:45:56.620Z · LW(p) · GW(p)

Actually, from the point of view of the ladder, the universe still repeats every ten feet.

No, it does not. I think I messed up before and it's actually 20 feet and 8.8 nanoseconds. From the point of view of the garage, the coordinates (0 ft, 0 ns) and (10 ft, 0 ns) correspond to the same event. From the point of view of the ladder, the coordinates become (0 ft, 0 ns) and (20 ft, 8.8 ns). They still have to be the same event.

The universe is definitely repeating itself to be off by a certain time, and the distance it is off by is not ten feet.

Replies from: Slider
comment by Slider · 2014-09-18T00:32:41.848Z · LW(p) · GW(p)

The ladder sees the garage length-contract to less than 10 feet. The ladder doesn't see itself contract; that puts the limit on the repeating of the universe.

Are you sure the ladder point equivalences are not (0 ft, 0ns) and (20 ft, -8.8ns)?

Replies from: DanielLC
comment by DanielLC · 2014-09-18T02:44:48.367Z · LW(p) · GW(p)

Are you sure the ladder point equivalences are not (0 ft, 0ns) and (20 ft, -8.8ns)?

It depends on which direction it's moving. I didn't bother to check the sign.

Thinking about it now, if it's going in the positive direction, then it should be (20 ft, -8.8ns). You are correct.

comment by Alejandro1 · 2014-09-15T19:41:56.685Z · LW(p) · GW(p)

If the rotating pie is a pie that when nonrotating had the same radius as the other one, when it rotates it has a slightly larger radius (and circumference) because of centrifugal forces. This effect completely dominates over any relativistic one.

Replies from: Thomas
comment by Thomas · 2014-09-15T20:14:56.160Z · LW(p) · GW(p)

The centrifugal force can be arbitrarily small. Say that we have only the outer rim of the pie, but as large as a galaxy. The centrifugal force at half the speed of light is then negligible, far less than all the everyday centrifugal forces we deal with.

Now say that the rim has zero orbital velocity at first and we are watching it from the centre. Gradually, say in a million years' time, it accelerates to a relativistic speed. The forces involved are a millionth of a newton per kilogram of mass. No big deal.

The problem is only this - where's the Lorentz contraction?

As long as we have only one spaceship orbiting the Galaxy, we can imagine this Lorentzian shrinking. In the case where there are so many that they are all the way around, we can't.
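
A rough, non-relativistic order-of-magnitude check of these figures (the 10^-6 N/kg push is the value stated above; the radius is an assumed round number of roughly galactic size):

```python
SECONDS_PER_YEAR = 3.156e7
c = 3.0e8            # m/s
a = 1e-6             # tangential push, N/kg = m/s^2 (the value stated above)
R = 5e20             # assumed rim radius, roughly galactic scale, in metres

for beta in (0.1, 0.5):
    v = beta * c
    years = v / a / SECONDS_PER_YEAR      # ignoring relativistic corrections
    centripetal = v**2 / R                # centripetal acceleration at that speed
    print(f"{beta} c: ~{years:.1e} years of pushing; "
          f"centripetal acceleration ~{centripetal:.1e} m/s^2")
```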

Replies from: DanielLC, army1987
comment by DanielLC · 2014-09-15T22:35:01.599Z · LW(p) · GW(p)

If you have a large number of spaceships, each will notice the spaceship in front of it getting closer, and the circle of spaceships forming into an ellipse.

At least, that's assuming the spaceships have some kind of tachyon sensor to see where all the other ships are from the point of reference of the ship looking, or something like that. If they're using light to tell where all of the other ships are, then there's a few optical effects that will appear.

Replies from: Thomas
comment by Thomas · 2014-09-15T22:59:27.975Z · LW(p) · GW(p)

The question is what the stationary observer at the centre sees when the galactic carousel goes around him. Even a quite moderate speed will do, for the observer has precise instruments to measure the Lorentzian contraction, if there is any.

At first, there is none, because the carousel isn't moving. But slowly, in many millions of years, when it has accelerated to say 0.1 c, what does the central observer see? Contraction or no contraction?

Replies from: DanielLC
comment by DanielLC · 2014-09-15T23:26:53.400Z · LW(p) · GW(p)

He will see each spaceship contract. The distance between the centers of the spaceships will remain the same.

Replies from: Thomas
comment by Thomas · 2014-09-15T23:32:28.988Z · LW(p) · GW(p)

But no, those ships are just like those French TGVs: a whole composition of cars where you can't say where one ends and another begins.

It's like a snake, eating its tail!

Replies from: DanielLC
comment by DanielLC · 2014-09-15T23:56:48.990Z · LW(p) · GW(p)

Then they stretch. Or break.

Replies from: army1987, Thomas
comment by A1987dM (army1987) · 2014-09-17T16:59:54.698Z · LW(p) · GW(p)

Or they stay the same but the radius of the train as measured by the observer in the centre will shrink.

comment by Thomas · 2014-09-16T00:34:54.744Z · LW(p) · GW(p)

They mustn't. All should be smooth, just like Einstein's trains. No resulting breaking force is postulated.

But everything boils down to "a microscope which enlarges the angles".

How do you then see two perpendicular intersecting lines under that microscope?

Can't be.

This Lorentz contraction has the same fundamental problem. How would it look?

Replies from: DanielLC, Azathoth123
comment by DanielLC · 2014-09-16T01:32:54.589Z · LW(p) · GW(p)

They mustn't. All should be smooth, just like Einstein's trains. No resulting breaking force is postulated.

The force is due to chemical bonds. They pull particles back together as their distance increases. These chemical bonds are an example of electromagnetism, which is governed by Maxwell's laws, which are preserved under Lorentz transformation.

Granted, whether a field is electric or magnetic depends on your point of reference. A still electron only produces an electric field, but a moving one produces a magnetic field as well. But if you perform the appropriate transformations, you will find that looking at a system that obeys Maxwell's laws from a different point of reference will result in a system that obeys Maxwell's laws.

In fact, Lorentz contraction was conjectured based on Maxwell's laws before there was any experimental evidence of it. Both of those occurred before Einstein formulated special relativity.

But everything boils down to "a microscope which enlarges the angles"

The Lorentz transformation does not preserve Euclidean distances or angles. It preserves something called the proper distance (the spacetime interval).

How would it look?

This is what Lorentz transformation on 1+1-dimensional spacetime looks like: https://en.wikipedia.org/wiki/Lorentz_transformation#mediaviewer/File:Lorentz_transform_of_world_line.gif. There's one dimension of space, and one of time. Each dot on the image represents an event, with a position and a time. Their movement corresponds to the changing point of reference of the observer. The slope of the diagonal lines is the speed of light, which is preserved under Lorentz transformation.
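
A tiny numeric illustration of what is and isn't preserved (the sample event and the boost speed are arbitrary assumptions):

```python
import math

def boost(x, t, beta):
    """Lorentz boost along x, in units with c = 1."""
    g = 1 / math.sqrt(1 - beta**2)
    return g * (x - beta * t), g * (t - beta * x)

x, t = 3.0, 5.0                     # an arbitrary event (space, time), c = 1 units
xb, tb = boost(x, t, beta=0.6)

print("Euclidean 'distance' x^2 + t^2 :", x**2 + t**2, "->", xb**2 + tb**2)   # changes
print("proper interval     t^2 - x^2 :", t**2 - x**2, "->", tb**2 - xb**2)    # preserved
```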

Here's my question for you: with all of the effort put into researching special relativity, if Lorentz transformation did not preserve the laws of physics, don't you think someone would have noticed?

comment by Azathoth123 · 2014-09-16T01:29:50.851Z · LW(p) · GW(p)

No resulting breaking force is postulated.

Then how are you accelerating them up to c/2?

Replies from: Thomas
comment by Thomas · 2014-09-16T06:26:07.422Z · LW(p) · GW(p)

With a tiny force of 1 micro Newton per kilogram of mass over several million years.

This was the acceleration force.

The centrifugal force is much less.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-17T01:58:20.754Z · LW(p) · GW(p)

With a tiny force of 1 micro Newton per kilogram of mass over several million years.

This is the force that will serve as the breaking force.

comment by A1987dM (army1987) · 2014-09-16T07:45:39.424Z · LW(p) · GW(p)

The problem is only this - where's the Lorentz contraction?

Each piece of the ring is longer as measured by an inertial observer comoving with it than as measured by a stationary one (i.e. one comoving with the centre of the ring). But note that there's no inertial observer that's comoving with all pieces of the ring at the same time, and if you add the length of each piece as measured by an observer comoving with it what you're measuring is not a closed curve, it's a helix in spacetime. (I will draw a diagram when I have time if I remember to.)
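
A minimal sketch of the bookkeeping described here (the radius, rim speed, and number of pieces are all assumed values):

```python
import math

R = 1.0                    # ring radius in the centre (lab) frame, arbitrary units
beta = 0.5                 # rim speed as a fraction of c, an assumed value
g = 1 / math.sqrt(1 - beta**2)

n = 100_000                                # split the rim into short pieces
piece_lab = 2 * math.pi * R / n            # length of each piece in the lab frame
piece_comoving = piece_lab * g             # same piece, measured in its own comoving frame

print("circumference in the lab frame:", n * piece_lab)       # 2*pi*R
print("sum of comoving piece lengths: ", n * piece_comoving)  # gamma * 2*pi*R
# The second number is what you get by adding up each piece 'in its own frame';
# as noted above, that sum is not the length of any closed curve in one frame.
```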

Replies from: Thomas
comment by Thomas · 2014-09-16T07:50:23.883Z · LW(p) · GW(p)

The inertial observer in the centre of the carousel measures those torus segments when they are stationary.

Then, after a million years of a small acceleration of the torus and NOT the central observer, the observer should see segments contracted.

Right?

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-16T12:19:37.891Z · LW(p) · GW(p)

During the million years of small acceleration, the torus will have to stretch (i.e. each atom's distance from its neighbours, as measured in its own instantaneous inertial frame, will increase) and/or break.

Specifying that you do it very slowly doesn't change anything -- suppose you and I are holding the two ends of a rope on the Arctic Circle, and we go south to the Equator each along a meridian; in order for us to do that, the rope will have to stretch or break even if we walk one millimetre per century.
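
A quick geometric check of the rope example, assuming a spherical Earth and an arbitrary initial separation:

```python
import math

lat_arctic = math.radians(66.5)      # Arctic Circle
d_arctic = 1000.0                    # assumed initial separation of the rope ends, metres

# The east-west distance between two fixed meridians scales with cos(latitude),
# so walking the ends down to the Equator forces the rope to stretch (or break).
stretch = math.cos(0.0) / math.cos(lat_arctic)
print(f"stretch factor: {stretch:.2f}")
print(f"separation at the Equator: {d_arctic * stretch:.0f} m")
```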

Replies from: Thomas
comment by Thomas · 2014-09-16T12:42:55.595Z · LW(p) · GW(p)

I don't see any reason this very big torus should break.

Forces are really tiny, for R is 10^21 m and the velocity is about 10^8 m/s. That gives you 10^-5 N per kg of centrifugal force, which can be counterbalanced by a small (radioactive) rocket or something on every meter.

Almost any other relativistic device from the literature would easily break long before this one.

If breaking was a problem.

Replies from: army1987, Azathoth123
comment by A1987dM (army1987) · 2014-09-16T13:34:12.068Z · LW(p) · GW(p)

Can you see why the rope in my example would break or stretch, even if we're moving it very very slowly?

Replies from: Thomas
comment by Thomas · 2014-09-16T13:48:10.407Z · LW(p) · GW(p)

Your example isn't relevant for this discussion.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-16T17:08:08.339Z · LW(p) · GW(p)

Why not?

Replies from: Thomas
comment by Thomas · 2014-09-16T18:25:25.800Z · LW(p) · GW(p)

Look!

Not every relativistic projectile will be broken. And every projectile is relativistic, more or less.

Trying to escape from the Ehrenfest paradox by saying "this starship breaks anyway" has a long tradition. Max Born invented that "exit".

Even if one advocates the breaking down of any torus which is moving/rotating relative to a stationary observer, he must explain why it breaks, and explain the asymmetry created by this breakdown. Which internal/external forces caused it?

Resolving the Michelson-Morley paradox with relativity created another problem. Back to the drawing board!

Pretending that all is well is a regrettable attitude.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-17T13:53:13.320Z · LW(p) · GW(p)

Even if one advocates the breaking down of any torus which is moving/rotating relative to a stationary observer, he must explain why it breaks, and explain the asymmetry created by this breakdown. Which internal/external forces caused it?

Why wouldn't that also apply to my rope example?

Replies from: Thomas
comment by Thomas · 2014-09-17T17:31:10.785Z · LW(p) · GW(p)

Each piece of the ring is longer as measured by an inertial observer comoving

In this problem, we don't care about a "comoving" inertial observer. We care about the stationary observer in the center, who first sees a stationary and then a rotating torus, which should contract, but only in the direction of motion.

comment by Azathoth123 · 2014-09-17T01:01:27.778Z · LW(p) · GW(p)

It's not the centrifugal force that's the problem. It's the force you are using to get the ring to start rotating.

Replies from: Thomas
comment by Thomas · 2014-09-17T07:10:25.469Z · LW(p) · GW(p)

Both forces are of the same magnitude! That's why we are waiting 10000000 years to get to a substantial speed.

If one is so afraid that forces even of that magnitude will somehow destroy the thing, one must dismiss all other experiments as well.

Ehrenfest was right, back in 1908. AFAIK he remained unconvinced by Einstein and others. It's a real paradox. Maybe I like it so much because I came to the same conclusion long ago, without even knowing about Ehrenfest.

The question of the OP was about contrarian views. I gave 10 (even though I have about 100 of them). The 10th was about Relativity, and I don't really expect anyone here to be converted. But it's possible.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-18T01:08:08.733Z · LW(p) · GW(p)

That's why we are waiting 10000000 years to get to a substantial speed.

Yes, and over 10000000 years the forces can build up. Consider army's example of the stretching rope. Suppose I applied a force to one end of a rope sufficient that over the course of 10000000 years it would double in length. You agree that the rope will either break or the bonds in the rope will prevent the rope from stretching?

The same thing happens with the rotation. As you rotate the object faster, the bonds between the atoms are stretched by space dilation. This produces a restoring force which opposes the rotation. Either the forces accelerating the rotation are sufficient to overcome this, which causes the bonds to break, or they aren't, in which case the object's rotation speed will stop increasing.

Replies from: army1987, Thomas
comment by A1987dM (army1987) · 2014-09-18T09:55:21.463Z · LW(p) · GW(p)

Either the forces accelerating the rotation are sufficient to overcome this, which causes the bonds to break,

(or stretch)

or they aren't, in which case the object's rotation speed will stop increasing.

In the case of the ring there's another possibility.

comment by Thomas · 2014-09-18T06:04:14.229Z · LW(p) · GW(p)

Yes, and over 10000000 years the forces can build up.

Irrelevant. How many tiny forces are inside a street car? They don't just "build up".

Nonsense.

Replies from: gjm
comment by gjm · 2014-09-18T07:21:56.120Z · LW(p) · GW(p)

No one's saying that forces "just build up" by virtue of applying for a long time. Azathoth123 is saying that in this particular case, when these particular forces act for a long time they produce a gradually accumulating change (the rotation of the ring) and that as that change increases, so do its consequences.

Replies from: Thomas
comment by Thomas · 2014-09-18T13:56:48.610Z · LW(p) · GW(p)

I understand. But imagine that only 1 m of rope is accelerated this way. No "force buildup" will happen.

Just as it will not happen if we have a rope around the galaxy.

Replies from: gjm
comment by gjm · 2014-09-18T14:51:13.180Z · LW(p) · GW(p)

Your rope is moving faster and faster, whether or not it goes all the way around the galaxy. The relations between different bits of the rope are pretty much exactly the setup for Bell's spaceship paradox.
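
A minimal sketch of that Bell's-spaceship bookkeeping (the lab-frame separation and the sample speeds are assumed values):

```python
import math

L_lab = 1.0          # separation of the two rope ends in the lab frame (held fixed)

# If both ends follow the same velocity profile in lab time, their lab-frame
# separation never changes -- but the separation in the rope's own instantaneous
# rest frame is gamma * L_lab, which keeps growing no matter how gentle the
# acceleration is. This is the Bell's-spaceship situation.
for beta in (0.001, 0.1, 0.5, 0.9):
    g = 1 / math.sqrt(1 - beta**2)
    print(f"beta = {beta}: proper separation = {g * L_lab:.4f} x the original")
```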

Replies from: Thomas
comment by Thomas · 2014-09-18T18:47:52.092Z · LW(p) · GW(p)

And? Relativity isn't coherent; that's the whole point.

Transitioning from one paradox to another doesn't save the day.

Replies from: gjm
comment by gjm · 2014-09-18T19:42:39.167Z · LW(p) · GW(p)

Yeah, but it's a "paradox" only in the sense of being confusing and counterintuitive, not in the sense of having any actual inconsistency in it. The point is that this is a situation that's already been analysed, and your analysis of it is wrong.

Replies from: Thomas
comment by Thomas · 2014-09-18T20:34:10.368Z · LW(p) · GW(p)

It wouldn't be a problem if it were just a "paradox", but unfortunately it's real.

We can't, and therefore don't, measure the postulated Lorentz contraction. We have measured the relativistic time dilation and mass increase, that we did. But there is NO experiment confirming the contraction of length.

Replies from: gjm, army1987
comment by gjm · 2014-09-18T21:17:17.573Z · LW(p) · GW(p)

To get direct verification of length contraction we'd need to take something big enough to measure and accelerate it to a substantial fraction of the speed of light. Taking the fact that we don't have such direct verification as a problem with relativity is exactly like the creationist ploy of claiming that failure to (say) repeat the transition from water-dwelling to land-dwelling life in a lab is a problem with evolutionary biology.

Replies from: Thomas
comment by Thomas · 2014-09-18T22:00:55.302Z · LW(p) · GW(p)

We have. The packets of protons inside the LHC in Geneva.

Packets all around the circular tube. Nobody says they shrink. They say those packets don't qualify for the contraction as they are "not rigid in Born's sense" and therefore not shrinking.

If we can measure even a tiny mass gain, we could measure a tiny contraction.

Had there been any.

Replies from: gjm, army1987
comment by gjm · 2014-09-18T22:45:21.463Z · LW(p) · GW(p)

Funny you should mention that.

Replies from: Thomas
comment by Thomas · 2014-09-19T07:37:31.538Z · LW(p) · GW(p)

This description fits only for protons at such ultrahigh energies that even the most advanced experiments will probably never be able to detect them.

See? It's only a calculation based on Relativity, not actual experimental data.

Replies from: gjm
comment by gjm · 2014-09-19T08:40:16.909Z · LW(p) · GW(p)

If you read the whole article instead of quote-mining it for damning-looking sentences, you will see that that's incorrect.

They modelled, performed experiments, and compared the results. That's how science works. The fact that the article also mentions what happens in the models beyond the experimentally-accessible regime doesn't change that.

comment by A1987dM (army1987) · 2014-09-19T08:39:18.474Z · LW(p) · GW(p)

They say those packets don't qualify for the contraction as they are "not rigid in Born's sense" and therefore not shrinking.

A bunch of particles not bound to each other by anything is not rigid in any reasonable sense I can think of, so what's your point?

Replies from: Thomas
comment by Thomas · 2014-09-19T09:06:19.389Z · LW(p) · GW(p)

Every rigid body is just a cloud of particles. If they are bonded together, they are bonded together by other particles, like photons. Or by gravity. Or by the strong nuclear force, as with quarks in protons and neutrons.

Also, the strong nuclear force is responsible for binding the atomic nucleus together. The force just doesn't stop at the "edge of a proton".

But why do you think they "must be bonded together" in the first place?

comment by A1987dM (army1987) · 2014-09-19T12:21:44.793Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Length_contraction#Experimental_verifications

Replies from: Thomas
comment by Thomas · 2014-09-19T12:35:54.393Z · LW(p) · GW(p)

The link you gave does not talk about direct observation of the Lorentz contraction, rather about "explanations".

Fast-travelling galaxies, of which the whole sky is full, DO NOT show any contraction. That would qualify as a direct observation.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-19T12:47:33.322Z · LW(p) · GW(p)

Hubble flow is at best a very noncentral example of travelling. Also, images aren't supposed to show any contraction (see Terrell rotation), only the objects themselves.

Replies from: Thomas
comment by Thomas · 2014-09-19T13:44:34.682Z · LW(p) · GW(p)

If images aren't supposed to show any contraction, then measurements aren't supposed to detect any contraction.

My point exactly.

Are you saying that there is an invisible contraction?

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-20T11:03:14.916Z · LW(p) · GW(p)

(Why are you expecting apparent sizes to match real sizes in the first place? The Sun looks as small as the Moon as seen from Earth; do you think it actually is?)

Of all light rays entering your eye right now, the ones coming from parts of the object farther away from you departed earlier than the ones coming from parts closer to you. If the object moved between those two times, its image will be deformed in a way that, when combined with Lorentz contraction, foreshortening, etc., will make the object look the same size as if it was stationary but rotated. This is known as Terrell rotation and there are animated illustrations of it on the Web.

(BTW, galaxies are moving along the line of sight, so their Lorentz contraction would be along the line of sight too, and how would you expect to tell (say) a sphere from an oblate spheroid seen flat face-first?)

I agree that “Lorentz contraction” is a misleading name; it's just a geometrical effect akin to the fact that a slab is thicker if you traverse it at an angle than if you traverse it perpendicularly.

Replies from: Thomas
comment by Thomas · 2014-09-20T16:07:44.157Z · LW(p) · GW(p)

will make the object look the same size as if it was stationary but rotated

Yes. Rotated rope looks shorter. Problem remains.

galaxies are moving along the line of sight

We see the near and the far edge of many of them. Still, the pancake apparently is neither squeezed nor rotated.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-20T17:04:14.757Z · LW(p) · GW(p)

Yes. Rotated rope looks shorter. Problem remains.

What problem?

comment by [deleted] · 2014-09-15T21:39:30.720Z · LW(p) · GW(p)

There's a reason it's called special relativity. It only works in special cases. Euclidean geometry and Newtonian mechanics are inconsistent, btw. Special relativity solves these inconsistencies in the special contexts where they originally came up (predicting the Lorentz contraction and time dilation which are experimentally observed). It wasn't until the curved space of general relativity was discovered that we had a fully consistent model.

And yes, the curved space of general relativity fully explains the rotating disc in a way that is self-consistent and in agreement with observed results (as proven by Gravity Probe B, among other things).

Replies from: DanielLC, Thomas, Azathoth123, The_Duck
comment by DanielLC · 2014-09-15T22:52:30.453Z · LW(p) · GW(p)

Special relativity is consistent. It just isn't completely accurate.

It's inconsistent with solid-body physics, but that's due to the oversimplifications inherent in solid-body physics, not the ones inherent in special relativity.

Trying to fit solid-body physics into general relativity is even worse. With special relativity, it works fine as long as it doesn't rotate or accelerate. Under general relativity, it can only exist on flat space-time, which basically means that nothing in the universe can have any mass whatsoever, including the object in question.

Replies from: None
comment by [deleted] · 2014-09-16T00:34:49.697Z · LW(p) · GW(p)

Twin paradox.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T01:03:43.752Z · LW(p) · GW(p)

What about the twin paradox?

comment by Thomas · 2014-09-15T22:06:06.301Z · LW(p) · GW(p)

curved space of general relativity fully explains the rotating disc in a way that is self-consistent

Is any Lorentz contraction visible in the case of the rim going around the galaxy?

Are all the Lorentzian shrinkings just cancelled out?

I'd really like to know that.

comment by Azathoth123 · 2014-09-16T00:35:20.677Z · LW(p) · GW(p)

You need GR if you want to talk about the rotating reference frame of the disk. Otherwise SR is fine.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-16T07:52:05.901Z · LW(p) · GW(p)

“claim[ing] that special relativity can't handle acceleration at all ... is like saying that Cartesian coordinates can't handle circles”

See http://math.ucr.edu/home/baez/physics/Relativity/SR/acceleration.html

But then again, the question whether the study of flat spacetime using non-inertial reference frames counts as SR depends on what you mean by SR. If you mean the limit of GR as G approaches 0, then it totally does.

comment by The_Duck · 2014-09-15T23:10:46.234Z · LW(p) · GW(p)

You don't need GR for a rotating disk; you only need GR when there is gravity.

Replies from: None
comment by [deleted] · 2014-09-16T00:34:15.153Z · LW(p) · GW(p)

Rotation drags spacetime.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-16T01:27:54.631Z · LW(p) · GW(p)

Only if the rotating object is sufficiently massive.

Replies from: None
comment by [deleted] · 2014-09-16T03:19:15.802Z · LW(p) · GW(p)

Only if the rotating object has any mass at all.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T04:45:11.520Z · LW(p) · GW(p)

For a rotating object of sufficiently small mass, the mass can be ignored, and reasonably accurate results can be found with special relativity.

Replies from: None
comment by [deleted] · 2014-09-16T05:11:15.548Z · LW(p) · GW(p)

I don't disagree. This discussion was philosophical in the pejorative sense, being about absolutely exact results, not reasonable approximations.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T05:30:07.087Z · LW(p) · GW(p)

The OP was claiming that special relativity was incoherent, not just that it wasn't absolutely exact.

If you want absolutely exact results, you'll need a theory of everything. There are quantum effects messing with spacetime.

Replies from: None
comment by [deleted] · 2014-09-16T17:36:19.215Z · LW(p) · GW(p)

Right. Well I'd agree that special relativity is incoherent for accelerating rotating frames -- it gives different experimental predictions depending on your choice of reference frame. It may be unusual to use accelerating reference frames, but they work just fine in classical physics. But they don't in special relativity.

It's not a very meaningful or contrarian statement though. Special relativity was known to be incoherent with regard to accelerating reference frames from day one. "Special" as in "special case", which it is. I guess my objection here is that the OP listed it as a contrarian viewpoint, but as far as I can tell it is the standard view taught in Physics 103.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T17:59:12.337Z · LW(p) · GW(p)

It may be unusual to use accelerating reference frames, but they work just fine in classical physics.

No they don't. From an accelerating reference frame, an object with no force on it will accelerate. You can only get it to work if you add a fictitious force.
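
A tiny Newtonian illustration of that point (purely classical, with made-up numbers):

```python
# A free particle watched from a uniformly accelerating frame: no force acts on it,
# yet its coordinate in the accelerating frame changes quadratically with time,
# exactly as if a 'fictitious' force -m*a were pulling on it.
a = 2.0            # acceleration of the reference frame, m/s^2 (made-up value)
x0, v0 = 0.0, 0.0  # the free particle starts at rest at the origin (lab frame)

for t in (0.0, 1.0, 2.0, 3.0):
    x_lab = x0 + v0 * t              # inertial frame: the particle just sits there
    x_acc = x_lab - 0.5 * a * t**2   # same particle, seen from the accelerating frame
    print(f"t = {t:.0f} s: lab x = {x_lab:.1f} m, accelerating-frame x = {x_acc:.1f} m")
```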

I don't think it's accurate to call it incoherent for accelerating reference frames. If you try to alter the coordinate system so that something that was accelerating is at rest, and you try to predict what happens with the normal laws of physics, you'll get the wrong answer. But it never says you should get the right answer. There's symmetries in the laws of physics that cause them to be preserved by Lorentz transformations. Since a transformation can be found to make an arbitrary object be at rest at the origin and in a given orientation, it's often useful to use the transformation so that you can do the math with the object being at rest. Special relativity simply does not have such a symmetry to allow an accelerating object to be changed to an object at rest.

The fact that you can't use arbitrary "reference frames" doesn't mean that special relativity only works for a special case any more than using the (x,t) |-> (-t,x) transformation on Newtonian physics not working means that Newtonian physics only works in special cases and is incoherent.

The reason special relativity is a special case is that it only applies to flat spacetime, when no mass is involved.

comment by DanielLC · 2014-09-20T21:18:13.614Z · LW(p) · GW(p)

4: the infinity is not even a coherent concept

What do you mean by this? Do you just mean that it doesn't make any sense for something infinite to actually exist, or do you mean that set theory, which claims the existence of an infinite set as an axiom, is inconsistent?

6: Venus is hot due to volcanoes

Why? It seems to me that it would be obvious if the standard theory that Venus gets most of its heat from the sun was wrong, since we can easily see how much it absorbs and emits and look at the difference. Besides which, you'd need expertise to have a reasonable chance of coming up with the correct explanation on your own. Do you have relevant expertise?

Replies from: Thomas
comment by Thomas · 2014-09-21T09:12:51.899Z · LW(p) · GW(p)

Well, I have learned that explaining one of my views from this thread just cost me karma points.

I'll pass on those two.

comment by MaximumLiberty · 2014-09-16T02:39:23.989Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The SF Bay Area is a lousy place to live.

Max L.

Replies from: DanielLC, MaximumLiberty
comment by DanielLC · 2014-09-16T04:55:55.028Z · LW(p) · GW(p)

That's really more a personal taste than a view. The SF Bay Area is not inherently a good or bad place to live. Since you're the only person qualified to judge whether you like living there, your opinion on the matter can hardly be considered contrarian. Not unless the majority of the people on LessWrong think you're wrong about not liking living there.

comment by MaximumLiberty · 2014-09-16T03:58:28.177Z · LW(p) · GW(p)

Now, now. The rule of the game is to upvote if you disagree and don't vote otherwise. I lived there for four years, so I think I'm qualified to have an opinion.

Max L.

Replies from: TylerJay, Azathoth123
comment by TylerJay · 2014-09-18T01:00:56.853Z · LW(p) · GW(p)

Well, I thought it was funny

comment by Azathoth123 · 2014-09-16T04:42:12.529Z · LW(p) · GW(p)

I downvoted because I agree.

Replies from: pragmatist
comment by pragmatist · 2014-09-16T06:09:50.768Z · LW(p) · GW(p)

That is not in the rules for the thread. Given the karma toll (and the fact that sufficiently downvoted comment threads get collapsed), it would be a bad idea to make it one of the rules. I think you should simply not vote on comments you agree with, and I suggest reversing any downvotes you've made for this reason.

I do agree that without downvoting it is hard to differentiate between views the community agrees with and views the community has no real opinion about, but I don't think adding this information is worth the disadvantages of downvoting for agreement.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-17T01:36:33.708Z · LW(p) · GW(p)

I think you should simply not vote on comments you disagree with

I assume you meant: "I think you should simply not vote on comments you agree with".

Replies from: pragmatist
comment by pragmatist · 2014-09-17T05:00:31.064Z · LW(p) · GW(p)

Right. Corrected.

comment by Adele_L · 2014-09-15T16:20:57.739Z · LW(p) · GW(p)

Meta

I think LW is already too biased towards contrarian ideas - we don't need to encourage them more with threads like this.

Replies from: shminux, bramflakes
comment by Shmi (shminux) · 2014-09-15T17:44:54.468Z · LW(p) · GW(p)

Treated as a "contrarian opinion" and upvoted.

comment by bramflakes · 2014-09-15T17:53:19.084Z · LW(p) · GW(p)

I think this thread is for opinions that are contrarian relative to LW, and not to the mainstream.

e.g. my opinion on open borders is something that a great majority of people share but is contrarian here, shown by the fact that as of the time of writing it is currently tied for highest-voted in the thread.

Replies from: Adele_L
comment by Adele_L · 2014-09-15T18:08:30.477Z · LW(p) · GW(p)

I think it's still a problem relative to LW.

comment by Slider · 2014-09-15T18:54:52.150Z · LW(p) · GW(p)

Developing a rationalist identity is harmful. Promoting an "-ism" or group affiliation with the label "rational" is harmful.

Replies from: Slider
comment by Slider · 2014-09-15T19:09:12.719Z · LW(p) · GW(p)

Making your mind work better should not be a special action but a constant virtue. Categorising people into class A people that reach this high on the sanity waterline and class B people that reach that high isn't healthy. Being rational isn't about being behind a particular answer to some central question. Being rational shouldn't be about social momentum and its inertia, but about arguments and dissolving troubles.

The word choice itself is misleading, as the word has a very different meaning in mainstream use. It enforces and communicates an aura of superiority that hinders interaction with other same-level disciplines. If being rational is about "prosperity by choice" or "prosperity by cognition", there exist other branches of "winning" that should not be positioned as enemies but rather as accomplices. There is at least "prosperity by trust" and "prosperity by accumulation".

comment by [deleted] · 2014-09-15T18:31:01.991Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

American intellectual discourse, including within the LW community, is informed to a significant extent by folk beliefs existing in the culture at large. One of these folk beliefs is an emphasis on individualism -- both methodological and prescriptive. This is harmful: methodological individualism ignores the existence of shared cultures and coordination mechanisms that can be meaningfully abstracted across groups of individuals, and prescriptive individualism deprives those who take it seriously of community, identity, and ritual, all of which are basic human needs.

Replies from: None
comment by [deleted] · 2014-09-26T01:38:17.422Z · LW(p) · GW(p)

How is this a contrarian view? It's a simple fact visible to some Americans and almost everyone outside the USA.

comment by A1987dM (army1987) · 2014-09-15T15:57:30.764Z · LW(p) · GW(p)

[META]

Previous incarnations of this idea: Closet survey #1, The Irrationality Game (More, II, III)

Replies from: None
comment by [deleted] · 2014-09-21T02:57:43.678Z · LW(p) · GW(p)

Except not really. The point is to ferret out purportedly rational beliefs held by some members of this community which deserve further investigation or elaboration.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-21T08:01:11.402Z · LW(p) · GW(p)

From “The Irrationality Game” (emphasis as in the original):

You have to actually think your degree of belief is rational. You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average. This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.

comment by FiftyTwo · 2014-09-21T01:08:09.811Z · LW(p) · GW(p)

[Contrarian thread special voting rules]

I would not want to be cryonically frozen and resurrected, as my sense of who I am is tied into social factors that would be lost.

Replies from: jaime2000, shminux
comment by jaime2000 · 2014-09-21T02:05:27.550Z · LW(p) · GW(p)

Would you be willing to freeze if your family did? Your friends and family? Your whole country? Or even if everyone in the world was preserved, would you expect the structure of society post-resurrection to be different enough that you would refuse preservation?

Replies from: FiftyTwo
comment by FiftyTwo · 2014-09-21T10:51:54.535Z · LW(p) · GW(p)

I'm not sure about the friends and family examples; it would depend what I thought that future society would be like. If cryonics was the norm I probably wouldn't opt out of it, because I would have a reasonable expectation of, if resurrection was successful, there being other people in the same situation, so there would be infrastructure to support us.

The social factors I'm thinking of include the skills, qualifications and experience that I have developed in my life, which would likely be irrelevant in a world that can resurrect me. At best I would be a historical curiosity with nothing to contribute.

comment by Shmi (shminux) · 2014-09-23T20:41:15.893Z · LW(p) · GW(p)

I can't really disagree with the statement as is, because it is about your wants, not mine, but "I" do not feel the same way.

comment by lmm · 2014-09-15T21:17:22.321Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Politically, the traditional left is broadly correct.

Replies from: iceman, None, BrassLion, gjm
comment by iceman · 2014-09-15T22:26:53.681Z · LW(p) · GW(p)

Correct meaning what? I'm interpreting "the traditional left" as a value system instead of a set of statements about the world.

Replies from: lmm
comment by lmm · 2014-09-16T21:54:56.752Z · LW(p) · GW(p)

Correct meaning that we would prefer the outcomes of their policy suggestions to the outcomes of other policies, or I guess generically that their values are an effective mechanism for generating good policies.

comment by [deleted] · 2014-09-15T21:50:14.835Z · LW(p) · GW(p)

"Traditional" left meaning what? Communism? Socialism? Democrats?

Replies from: lmm, fubarobfusco
comment by lmm · 2014-09-16T22:01:24.461Z · LW(p) · GW(p)

Traditional as in not the radical left or any post-neocon positions. Socialism. Approximately the position of the leftmost of the two biggest political parties in a typical Western European country.

Replies from: Salemicus
comment by Salemicus · 2014-09-19T21:31:32.527Z · LW(p) · GW(p)

In the typical Western-European country, the leftmost of the two parties has abandoned Socialism and instead espouses the politics of Social Democracy or the Third Way.

comment by fubarobfusco · 2014-09-16T03:36:00.459Z · LW(p) · GW(p)

The Old Left of labor unionism? The New Left of student activism?

comment by BrassLion · 2014-09-16T22:41:14.218Z · LW(p) · GW(p)

I downvoted you because I mostly agree - depending on how broadly you mean broadly. I suspect this is a not uncommon position here, and I would not even be surprised if it were a plurality position.

Replies from: lmm
comment by lmm · 2014-09-16T22:56:58.524Z · LW(p) · GW(p)

That's fine. In some recent threads I've taken what I felt was a mainstream if leftist position, written (IMO) reasonable, positive arguments - and been downvoted for it, to the extent that I'm entertaining the hypothesis that LW is full of libertarians who are strongly opposed to such views. Confirming that one way or the other is useful information.

Replies from: Prismattic, FiftyTwo
comment by Prismattic · 2014-09-18T04:01:52.644Z · LW(p) · GW(p)

I also have the general impression that in the past few months there has been an uptick of uncharitable tinman-attacks on progressivism by libertarians in the LW comment threads. Curiously, there seems to be less overt hostility between reactionaries and progressives, even though they're much further apart than libertarians and progressives (although this might be because the more hostile Nrx were more likely to exit after the creation of Moreright).

comment by FiftyTwo · 2014-09-21T00:56:55.916Z · LW(p) · GW(p)

I've had the same feeling. I suspect there are loud reactionary and libertarian minorities and a large number of quiet liberal people.

comment by gjm · 2014-09-17T11:30:01.096Z · LW(p) · GW(p)

I take the downvotes lmm is getting for (informative, clear, concise) answers to requests for clarification -- and the number of downvotes his original perhaps-contrarian statement is getting (a larger fraction than any of the others I checked) -- as evidence that, in accordance with lmm's hypothesis, there are people on LW who regard holding left-wing views as an offence that requires punishment.

(I dare say there are people who feel the same way on the other side, but I haven't seen any sign that they engage in the same sort of punitive downvoting.)

[EDITED to add: I checked some more top-level comments here, and found a few more with a larger fraction of downvotes than lmm's, but not many. Those others weren't obviously political.]

comment by Ronak · 2014-09-18T21:42:29.171Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The humanities are not only a useful method of knowing about the world - but, properly interfaced, ought to be able to significantly speed up science.

(I have a large interval for how controversial this is, so pardon me if you think it's not.)

Replies from: Azathoth123, polymathwannabe
comment by Azathoth123 · 2014-09-18T23:18:29.405Z · LW(p) · GW(p)

Do you mean humanities in the abstract or the people currently occupying humanities departments?

Replies from: Ronak
comment by Ronak · 2014-09-19T02:10:34.540Z · LW(p) · GW(p)

In the abstract. Though, undoubtedly, many of the people can do wonders too.

comment by polymathwannabe · 2014-09-19T01:32:51.026Z · LW(p) · GW(p)

Although the social sciences have undeniably helped a lot with our understanding of ourselves, their refusal to follow the scientific method is disgraceful.

Replies from: AABoyles, Ronak, Azathoth123
comment by AABoyles · 2014-09-19T13:04:10.798Z · LW(p) · GW(p)

As a social scientist (who spends a LOT of time and effort developing rigorous methodology in keeping with the scientific method), I find your dismissal of my entire academic superfield disgraceful. Perhaps you've confused social science with punditry?

Replies from: Lumifer, polymathwannabe
comment by Lumifer · 2014-09-19T15:13:49.439Z · LW(p) · GW(p)

What kind of social science do you do?

Replies from: AABoyles
comment by AABoyles · 2014-09-19T15:24:35.747Z · LW(p) · GW(p)

Computational Social Science (which is extremely methodology-oriented). I was trained in Political Science, but the lines between the social sciences are pretty fuzzy. I do substantive work which could be called Political Science, Sociology, or Economics.

Replies from: Lumifer
comment by Lumifer · 2014-09-19T16:12:14.422Z · LW(p) · GW(p)

Computational Social Science

The definitions that I found are very wide and very fuzzy, and, essentially, boil down to "social science but with computers!". Is it, basically, statistics (which nowadays is often called by the fancier name of "data science")?

Replies from: AABoyles
comment by AABoyles · 2014-09-19T17:09:08.062Z · LW(p) · GW(p)

I doubt you can find a widely-acceptable definition of Data Science which is any less fuzzy. Computational Social Science (CSS) is a subset of Data Science. Take Drew Conway's Data Science Venn Diagram: If your Substantive Expertise is a Social Science, you're doing Computational Social Science.

Statistics is an important tool in CSS, but it doesn't cover the other types of modeling we do: Agent-Based, System Dynamic, and Algorithmic Game Theoretic to name a few.

Replies from: Lumifer
comment by Lumifer · 2014-09-19T17:27:10.144Z · LW(p) · GW(p)

Computational Social Science (CSS) is a subset of Data Science.

Ah, I see, so you're coming from that direction.

But let me ask a different question -- what kind of business are you in? Are you in the business of making predictions? In the business of constructing explanations of how the world works? In the business of coming up with ways to manipulate the world to achieve desired ends?

Replies from: AABoyles
comment by AABoyles · 2014-09-19T18:05:49.299Z · LW(p) · GW(p)

I'm in the business of modeling. I do all three of those tasks, but the emphasis is definitely on the last.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-19T19:37:01.818Z · LW(p) · GW(p)

Could you give examples of successful interventions that your field has come up with, that wouldn't otherwise have been put into practice?

Replies from: AABoyles
comment by AABoyles · 2014-09-19T20:03:13.491Z · LW(p) · GW(p)

Nope! Not to say that an intervention proposed by a computational social model has never influenced policy in real life--I just don't know of any examples. That said, I'm workin' on it.

comment by polymathwannabe · 2014-09-19T15:09:05.633Z · LW(p) · GW(p)

Perhaps you were exposed to better education. In Latin American universities, the humanities are plagued with antipositivism. If you've managed to stay away from it, kudos to you.

Replies from: AABoyles
comment by AABoyles · 2014-09-19T15:35:13.940Z · LW(p) · GW(p)

Oof. You just trampled one of my pet peeves: Social Science is a subset of the Sciences, not the Humanities.

There's still a persistent anti-positivist streak in the Humanities in the US, but mostly positivism has just been irrelevant to the work of Humanities scholars (though this is changing in some interesting and exciting ways).

More importantly, the social sciences in the US are overwhelmingly positivist, even amongst researchers whose work is not strictly empirical. I wish I could take credit for those good influences, but I think you're probably the one deserving of kudos for managing to become a rationalist in such a hostile environment.

comment by Ronak · 2014-09-19T02:21:38.618Z · LW(p) · GW(p)

When I said humanities I didn't mean social sciences; in fact, I thought social sciences explicitly followed the scientific method. Maybe the word points to something different in your head, or you slipped up. Either way, when I say humanities, I actually mean fields like philosophy and literature and sociology which go around talking about things by taking the human mind as a primitive.

The whole point of the humanities is that it's a way of doing things that isn't the scientific method. The disgraceful thing is the refusal to interface properly with scientists and scientific things - but there's no shortage of scientists who refuse to interface with humanities either, when you come down to it. My head's canonical example is Indian geneticists who try to go around finding genetic caste differences; Romila Thapar once gave an entertaining rant about how anything they found they'd be reading noise as signal because the history of caste was nothing like these people imagined.

And, on the other hand, we have many Rortys and Bostroms and Thapars in the humanities who do interface.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T23:23:16.627Z · LW(p) · GW(p)

My head's canonical example is Indian geneticists who try to go around finding genetic caste differences;

Funny, humanities people were saying the same thing about genetic racial differences until said differences started showing up.

Replies from: Ronak
comment by Ronak · 2014-09-20T13:25:16.993Z · LW(p) · GW(p)

a) Actually Thapar's point wasn't that there were no genetic differences (in fact, the theory of caste promulgated by Dalit activists is that it's created by the prohibition of inter-caste marriage and therefore pretty much predicts genetic differences) - but that the groupings done by the researchers weren't the correct ones.

b) I should actually check that what I surmised is what she said. Thanks for alerting me to the possibility.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T18:55:15.880Z · LW(p) · GW(p)

Actually Thapar's point wasn't that there were no genetic differences (in fact, the theory of caste promulgated by Dalit activists is that it's created by the prohibition of inter-caste marriage and therefore pretty much predicts genetic differences) - but that the groupings done by the researchers weren't the correct ones.

So do you have independent evidence that the theory promulgated by the Dalit activists is correct? Because theories promulgated by activists don't exactly have the best track record.

Replies from: Ronak
comment by Ronak · 2014-09-20T20:30:22.218Z · LW(p) · GW(p)

Actually, with the caveat that I don't have any object-level research, I doubt it; they assign a rigidity to the whole thing that seems hard to institute. My point was that 'do there exist genetic differences' is not the issue here.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T22:17:21.873Z · LW(p) · GW(p)

So what is the issue, that geneticists didn't consult with Dalit activists before designing their experiment?

Replies from: Ronak
comment by Ronak · 2014-09-21T00:02:38.209Z · LW(p) · GW(p)

So, Romila Thapar is not a Dalit activist, just a historian (I'm guessing this is a source of confusion; I could be wrong).

I'm saying they should have read up before starting their project.

I can't find the study for some reason, so I'll try to do it from memory. They randomly picked, from a city, Dalits (Dalit is a catch-all term coined by B R Ambedkar for people of the lowest castes, and people outside the caste system, all of whom were treated horribly) and people from the merchant castes to look for genetic differences. Which is all fine and dandy - but for the fact that neither 'Dalit' nor 'merchant-caste' is an actual caste; there are many castes which come into those two categories. So, assuming a simple no-inter-caste-marriage model of caste, a merchant family from village A and one from village B, thousands of kilometres away, have about as much (or, considering marginal things like babies born out of rape, even less) genetic material in common as a merchant and a Dalit family from the same village - unless there's a common genetic ancestor to all merchant families. And that's where reading the historical literature comes in - the history of caste is much more complicated, involving for example periods when it was barely enforced, and shuffling, and all sorts of stuff. So, they will find differences in their study, but it won't reflect actual caste differences.

comment by Azathoth123 · 2014-09-19T01:39:02.691Z · LW(p) · GW(p)

The problem is that they're trying to study areas where it's really hard to get enough scientific evidence.

comment by blacktrance · 2014-09-16T22:13:18.996Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

There is nothing morally wrong about eating meat, and vegetarianism/veganism aren't morally superior to meat-eating.

Replies from: Lumifer, falenas108, Elo
comment by Lumifer · 2014-09-16T23:45:57.001Z · LW(p) · GW(p)

That looks like a mainstream position, not contrarian.

Replies from: blacktrance
comment by blacktrance · 2014-09-17T00:07:17.687Z · LW(p) · GW(p)

It's contrarian among LWers, which is what the OP asked for.

Replies from: Lumifer
comment by Lumifer · 2014-09-17T04:03:36.916Z · LW(p) · GW(p)

Is that so? I know there are some vocal vegetarians on LW, but I am not sure that makes them the local mainstream.

Replies from: Prismattic
comment by Prismattic · 2014-09-17T05:20:15.783Z · LW(p) · GW(p)

I think there are more LW members who are meat-eating and feel hypocritical/guilty about it than there are actual vegetarians.

Replies from: Lumifer
comment by Lumifer · 2014-09-17T14:44:47.093Z · LW(p) · GW(p)

Looking at the 2013 poll:

VEGETARIAN:
No: 1201, 73.4%
Yes: 213, 13.0%
Did not answer: 223, 13.6%

I can't speak to the feeling of guilt, but vegetarians are a small minority here.

Replies from: William_Quixote
comment by William_Quixote · 2014-10-07T15:49:08.211Z · LW(p) · GW(p)

At the time of the poll people had requested more granularity in the answers. I think a lot of folks leaned towards veggie ideas in the sense of reduced or substantially below-average meat consumption without actually being vegetarians.

As to how many a lot was, who knows. I think it's likely that vegetarianism is probably more acceptable here than in the population at large, and so disagreeing with it would be more contrarian.

comment by falenas108 · 2014-09-21T19:43:17.471Z · LW(p) · GW(p)

For most of the vegetarians I know, the issue isn't inherently eating meat. It's the way the animals are treated before they are killed.

Replies from: Lumifer
comment by Lumifer · 2014-09-21T21:05:28.769Z · LW(p) · GW(p)

For most of the vegetarians I know, the issue isn't inherently eating meat. It's the way the animals are treated before they are killed.

Maybe you know a weird subset of vegetarians, but I don't think most would be fine with eating a dead animal that has been very well treated throughout its life.

comment by Elo · 2014-09-16T23:07:46.027Z · LW(p) · GW(p)

Agree (mostly; not vegetarian). Would you prefer to eat a bacterially produced meat product? Assuming it could be made to taste the same...

Replies from: blacktrance
comment by blacktrance · 2014-09-16T23:38:44.116Z · LW(p) · GW(p)

If its price were less than or equal to the price of normal meat, I'd buy it; otherwise, I'd stick with normal meat.

Replies from: Elo
comment by Elo · 2014-09-17T06:22:30.946Z · LW(p) · GW(p)

I suspect it will end up being cheaper, because it would be faster to produce than raising an animal through its entire life cycle...

comment by John_Maxwell (John_Maxwell_IV) · 2014-09-16T03:50:53.014Z · LW(p) · GW(p)

This seems pretty similar to the irrationality game. That's not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):

  • Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.

  • Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy "contrarian" views that are in fact pretty widespread already.

  • Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let's reward that.

  • Ask people to downvote spam and trolling only. Through this restriction on the use of downvotes, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which is important counterfactually).

  • Encourage people to make contrarian factual statements rather than contrarian value statements. If we believe different things about the world, we have a better chance of having a productive discussion than if we value different things in the world.

Not sure if these rules should apply to top-level comments only or to every comment in the thread. Another interesting question: should playing devil's advocate be allowed, i.e. presenting novel arguments for unpopular positions you don't actually agree with, and under what circumstances (are disclaimers required, etc.)?

You could think of my proposed rules as being about halfway between the irrationality game and a normal LW open thread. Perhaps by doing binary search, we can figure out the optimal degree to which to facilitate contrarianism, and even make every Nth open thread a "contrarian open thread" that operates under those rules.

Another interesting way to do contrarian threads might be to pick particular views that seem popular on Less Wrong and try to think of the best arguments we can for why they might be incorrect. Kind of like a collective hypothetical apostasy. The advantage of this is that we generate potentially valuable contrarian positions no one is holding yet.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-16T04:40:54.545Z · LW(p) · GW(p)

Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.

This has the problem that beliefs with a large inferential distance won't get stated.

The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-16T05:06:46.968Z · LW(p) · GW(p)

This has the problem that beliefs with a large inferential distance won't get stated.

Is it useful to have beliefs with a large inferential distance stated without supporting evidence? Given that the inferential distance is large, I'm not going to be able to figure it out on my own, am I? At least having a sketch of an argument would be useful. The more you fill in the argument, the more minds you change and the more upvotes you get.

The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.

"Upvote if the comment caused you to change your mind" is not the same thing as "upvote if you disagree".

Another idea, which kinda seems to be getting adopted in this thread already: have a short note at the bottom of every comment right above the vote buttons reminding people of the voting behavior for the thread, to counteract instinctive voting.

comment by jsteinhardt · 2014-09-15T17:40:41.240Z · LW(p) · GW(p)

[meta]

Is there some way to encourage coherence in people's stated views? For some of the posts in this thread I can't tell whether I agree or disagree because I can't understand what the view is. I feel an urge to downvote such posts, although this could easily be a bad idea, since extreme contrarian views will probably seem less coherent. On the other hand, if I can't even understand what is being claimed in the first place then it's hard for me to get much benefit out of it.

Replies from: ChristianKl, Gunnar_Zarncke
comment by ChristianKl · 2014-09-16T11:08:07.224Z · LW(p) · GW(p)

Is there some way to encourage coherence in people's stated views? For some of the posts in this thread I can't tell whether I agree or disagree because I can't understand what the view is.

That's to be expected when it comes to contrarian views. A lot of positions are not widely held because they are complicated to understand or require certain background knowledge.

If you gave me a bunch of academic math problems, I wouldn't understand the problems. In math it's fairly easy to say: hey, math is complicated, it's okay that I don't know enough about the topic to understand the claim. But the same applies in other areas. Understanding what other people think is often hard when they differ substantially from you.

comment by Gunnar_Zarncke · 2014-09-15T20:26:16.417Z · LW(p) · GW(p)

This thread is mixed up. A top-level meta comment (like in the irrationality game thread) is missing, for example.

comment by [deleted] · 2014-09-21T02:59:19.876Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-21T20:35:03.311Z · LW(p) · GW(p)

What do you mean by that? Technically, all that is required is the proper arrangement of transistors.

Replies from: None
comment by [deleted] · 2014-09-21T22:44:32.357Z · LW(p) · GW(p)

I mean that the component pieces such as planning algorithms, logic engines, pattern extractors, evolutionary search, etc. have already been worked out, and that there exist implementable designs for combining these pieces together into an AGI. There aren't any significant known unknowns left to be resolved.

Replies from: Azathoth123, None, Lumifer
comment by Azathoth123 · 2014-09-23T01:47:58.830Z · LW(p) · GW(p)

Then where's the AI?

Replies from: None
comment by [deleted] · 2014-09-23T01:59:15.393Z · LW(p) · GW(p)

All the pieces for bitcoin were known and available in 1999. Why did it take 10 years to emerge?

comment by [deleted] · 2014-09-26T01:47:00.413Z · LW(p) · GW(p)

I don't see anything in there about a goal system -- not even one that optimizes for paperclips. Goertzel and his lot are dualists and panpsychists: how can we expect them to complete a UFAI when they turn to mysticism when asked to design its soul?

comment by Lumifer · 2014-09-21T23:14:43.760Z · LW(p) · GW(p)

the component pieces such as planning algorithms, logic engines, pattern extractors, evolutionary search, etc. have already been worked out, and that there exist implementable designs for combining these pieces together into an AGI.

So, um, what's the problem, then?

Replies from: None
comment by [deleted] · 2014-09-22T02:58:36.263Z · LW(p) · GW(p)

So, um, what's the problem, then?

There are no problems. UFAI could be constructed by a few people who know what they are doing, on today's commodity hardware, with only a few years' effort.

Replies from: Richard_Kennaway, Lumifer
comment by Richard_Kennaway · 2014-09-22T11:03:58.681Z · LW(p) · GW(p)

The outside view on this is that such predictions have been made since the start of A(G)I 50 or 60 years ago, and it's never panned out. What are the inside-view reasons to believe that this time it will? I've only looked through the table of contents of the Goertzel book -- is it more than a detailed survey of AGI work to date and speculations about the future, or are he and his co-workers really onto something?

Replies from: None
comment by [deleted] · 2014-09-22T15:37:20.636Z · LW(p) · GW(p)

My prediction / contrarian belief is that they are really onto something, with caveats (did you look at the second book? that's where their own design is outlined).

At the very highest level I think their CogPrime design is correct in the sense that it implements a human-level or better AGI that can solve many useful categories of real world problems, and learn / self-modify to solve those categories it is not well adapted to out of the box.

I do take issue with some of the specific choices they made, both in fleshing out components and in the current implementation, OpenCog. For example, I think using the rule-based PLN logic engine was a critical mistake, but at an architectural level that is a simple change to make, since the logic engine is / should be loosely coupled to the rest of the design (it isn't in OpenCog, but c'est la vie; I think a rewrite is necessary anyway for other reasons). I'd swap it out for a form of logical inference based on Bayesian probabilistic graph models a la Pearl. There are various other tweaks I would make regarding the atom space, sub-program representation, and embodiment. I'd also implement the components within the VM language of the AI itself, such that it is able to self-modify its own core capabilities. But at the architectural level these are tweaks of implementation details. It remains largely the same design outlined by Goertzel et al.
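
(To gesture at what that swap would mean in practice, here is a minimal toy sketch of inference over a Bayesian network in the Pearl style -- the classic rain/sprinkler/wet-grass example, with made-up probabilities, queried by brute-force enumeration. It is purely illustrative and says nothing about OpenCog's or CogPrime's actual internals.)

```python
# Toy Bayes net: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
# All probabilities are invented for illustration only.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(Sprinkler | Rain=True)
               False: {True: 0.4, False: 0.6}}    # P(Sprinkler | Rain=False)
P_wet = {  # P(WetGrass=True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(r, s, w):
    # Joint probability of one full assignment, factored along the graph.
    pw = P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * (pw if w else 1.0 - pw)

# Query: P(Rain=True | WetGrass=True), summing out the Sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet grass) = {num / den:.3f}")
```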

AI has been around for almost 60 years. However AGI as a discipline was invented by Goertzel et al only in the last 10 to 15 years or so. The story before that is honestly quite a bit more complex, with much of the first 50 years of AI being spent working on the sub-component projects of an integrative AGI. So without prototype solutions to the component problems, I don't find it at all surprising that progress was not made on integrating the whole.

comment by Lumifer · 2014-09-22T04:51:34.125Z · LW(p) · GW(p)

UFAI could be constructed by a few people who know what they are doing, on today's commodity hardware, with only a few years' effort.

Any evidence for that particular belief?

Replies from: None
comment by [deleted] · 2014-09-22T05:14:30.740Z · LW(p) · GW(p)

What do you think is missing from the implementation strategy outlined in Goertzel's Engineering General Intelligence?

Replies from: Lumifer
comment by Lumifer · 2014-09-22T06:08:06.882Z · LW(p) · GW(p)

Haven't read it, but I'm guessing a prototype...?

Replies from: None
comment by [deleted] · 2014-09-22T06:20:32.888Z · LW(p) · GW(p)

If you had that, then you wouldn't need a few years to implement it now, would you?

comment by blacktrance · 2014-09-21T02:48:51.277Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Moral realism is true.

Replies from: None, shminux
comment by [deleted] · 2014-09-26T01:52:03.452Z · LW(p) · GW(p)

Certainly when I dissolved the concept of universal normativity into agent-design normativity, I found myself looking at something that more closely resembles moral realism than any non-realist position I've seen.

comment by Shmi (shminux) · 2014-09-23T20:39:11.259Z · LW(p) · GW(p)

Do you mean this (i.e. that a specific morality has or had an evolutionary advantage) or something else?

Replies from: blacktrance
comment by blacktrance · 2014-09-23T21:12:17.486Z · LW(p) · GW(p)

I mean that moral statements have a truth-value, some moral statements are true, and the truth of moral statements isn't determined by opinion.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-23T22:49:50.734Z · LW(p) · GW(p)

What does it mean for a moral statement to be true? After all, it is not a mathematical statement. How does one tell if a moral statement is true?

EDIT: it seems like a category error to me (morality is evaluated as if it were math), but maybe I am missing something.

Replies from: Lumifer, blacktrance
comment by Lumifer · 2014-09-24T00:52:17.824Z · LW(p) · GW(p)

What does it mean for a moral statement to be true?

In many religions (which do tend towards moral realism :-/) morality is quite similar to physics: it describes the way our world is constructed. Good people go to heaven, evil people go to hell, karma determines your rebirth, etc. etc. Morality is objective, it can be discovered (though not exactly by a scientific method), and "this moral statement is true" means the usual thing -- correspondence to reality.

comment by blacktrance · 2014-09-24T00:28:56.210Z · LW(p) · GW(p)

What does it mean for a moral statement to be true?

It's hard to give a general answer to this, as different moral realists would answer this question differently. Most would agree that it means that there are facts about what one ought to do and not do.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-24T01:55:46.231Z · LW(p) · GW(p)

How do you tell if something is a fact?

Replies from: blacktrance
comment by blacktrance · 2014-09-24T02:31:11.783Z · LW(p) · GW(p)

That depends on what it's a fact about. If it's a fact about the physical world, I use my senses. If it's about mathematics, I use mathematical methods (e.g. proofs). If it's a moral fact, I reason about whether it's something that one should do.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-24T03:26:45.083Z · LW(p) · GW(p)

How do you know if your reasoning is correct and someone else's (who disagrees with you) isn't?

Replies from: blacktrance
comment by blacktrance · 2014-09-24T03:34:04.372Z · LW(p) · GW(p)

By engaging with their arguments, seeing what they're based on, whether they really are what one ought to do, etc.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-24T03:55:09.245Z · LW(p) · GW(p)

So, what do you do if you start from the same premises but then diverge? Is there an "objective" way to figure out who is right in absence of some mathematical theory of morality?

Replies from: blacktrance
comment by blacktrance · 2014-09-24T04:06:21.837Z · LW(p) · GW(p)

If we start with the same premises, we should reach the same conclusions, if I'm interpreting your question correctly. It may help to provide a concrete example of disagreement.

comment by Risto_Saarelma · 2014-09-19T05:05:25.960Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

You can expect to have about as much success effectively and systematically teaching rationality as you could have effectively and systematically teaching wisdom. Attempts at a systematic rationality curriculum will end up as cargo cultism and hollow ingroup signaling at worst, and heuristics-and-biases research literature scholarship at best. Once you know someone's SAT score, knowing whether they participated in rationality training will give very little additional predictive power on whether they will win at life.

Replies from: John_Maxwell_IV, DanielLC, None
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-19T08:00:39.554Z · LW(p) · GW(p)

I'd like to hear a more substantive argument if you've got one. Do you think there are few general-purpose life skills (e.g. those purportedly taught in Getting Things Done, How to Win Friends and Influence People, etc.)? What's your best evidence for this?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-09-19T15:15:49.782Z · LW(p) · GW(p)

I think that there is a huge unseen component in life skills where, in addition to knowing about a skill, you need to recognize a situation where the skill might apply, remember about the skill, figure out if the skill is really appropriate given what's going on, know exactly how you should apply the skill in that given situation, and so on. There isn't really an algorithm you can follow without also constantly reflecting on what is actually going on, and I think that, in what basically looks like another instance of Moravec's paradox, the big difficult part is actually in the unconscious situation awareness, and the things you can write in a book like GTD and give to people are a tiny offshoot of that.

No solid evidence for this except for the observation that there don't seem to be self-helpy systems for general awesomeness that actually do consistently make people who stick with them more awesome.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-20T03:51:24.551Z · LW(p) · GW(p)

recognize a situation where the skill might apply, remember about the skill

OK, what if you were to, say, at the end of each day brainstorm situations during the day when skill X could have been useful in order to get better at recognizing them?

There isn't really an algorithm you can follow without also constantly reflecting on what is actually going on

Could meditation be useful for this?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-09-20T05:29:13.676Z · LW(p) · GW(p)

OK, what if you were to, say, at the end of each day brainstorm situations during the day when skill X could have been useful in order to get better at recognizing them?

Sounds like this would still run into the problem I anticipate and be hindered by poor innate memory and pattern-matching abilities or low conscientiousness. Some people just won't recognize the situation even in retrospect, or will have already forgotten about it.

Here's an example of what a less than ideal teaching scenario might look like. If MIT graduates are one end of the spectrum, that's close to the another, and most people are going to be somewhere in between.

Could meditation be useful for this?

Meditation is definitely one of the more interesting self-improvement techniques where you basically just follow an algorithm. Still, it probably won't increase your innate g, just as nothing else seems to. And there are some not entirely healthy subcultures around extensive meditation practices (detachment from the physical world as in "the only difference between an ideal monk and a corpse is that the monk still has a beating heart" and so on), which might be trouble for someone who really wants an algorithm to follow and grabs on to meditation without having much of a counterweight in their worldview.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-21T06:18:27.684Z · LW(p) · GW(p)

"There exists no rationality curriculum such that a person of average IQ can benefit from it" and "there exists no rationality curriculum such that a person of LW-typical IQ can benefit from it" are not the same statement.

And there are some not entirely healthy subcultures around extensive meditation practices (detachment from the physical world as in "the only difference between an ideal monk and a corpse is that the monk still has a beating heart" and so on), which might be trouble for someone who really wants an algorithm to follow and grabs on to meditation without having much of a counterweight in their worldview.

shrug It sounds as though you want a rationality curriculum to fail, given that you are brainstorming this kind of creative failure mode.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-09-21T08:41:07.160Z · LW(p) · GW(p)

shrug It sounds as though you want a rationality curriculum to fail, given that you are brainstorming this kind of creative failure mode.

I want to believe that the rationality curriculum will fail iff it is the case that the rationality curriculum will fail.

comment by DanielLC · 2014-09-20T21:19:58.458Z · LW(p) · GW(p)

You can expect to have about as much success effectively and systematically teaching rationality as you could in effectively and systematically teaching wisdom.

What's the difference?

comment by [deleted] · 2014-09-20T07:43:13.489Z · LW(p) · GW(p)

Upvoted because I disagree with the implicit assumption that the best way of teaching rationality-as-winning would look like heuristics and biases scholarship, rather than teaching charisma, networking, action, signaling strategies, and how to stop thinking.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-09-20T07:55:23.367Z · LW(p) · GW(p)

No, I'm saying it's another failure mode for producing general awesomeness, but at least it might produce some useful scholarship.

EDIT: I also don't think that your description would go very far; it'd still end up with the innately clever people dominating, and the rest just stuck in the general arms race and the confusion of actually effectively applying the skills to the real world, just like all the self-help we already have that teaches that stuff seems to end up.

comment by FiftyTwo · 2014-09-21T01:08:39.492Z · LW(p) · GW(p)

[Contrarian thread special voting rules]

I bite the bullet on the repugnant conclusion

comment by FiftyTwo · 2014-09-21T00:47:55.811Z · LW(p) · GW(p)

[Contrarian thread, special voting rules apply]

Engaging in political processes (and learning how to do so) is a useful thing, and is consistently underrated by the LW consensus.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-23T20:44:37.951Z · LW(p) · GW(p)

Just a reminder, the local meme "politics is the mind killer" is an injunction not against discussing politics, but against using political examples in a non-political argument.

Replies from: FiftyTwo
comment by FiftyTwo · 2014-09-23T22:27:08.919Z · LW(p) · GW(p)

Agreed. But there is also a generally negative attitude towards politics

comment by polymathwannabe · 2014-09-15T18:32:47.902Z · LW(p) · GW(p)

Dollars and utilons are not meaningfully comparable.

Edited to restate: Dollars (or any physical, countable object) cannot stand in for utilons.

Replies from: DanielLC, Alejandro1
comment by DanielLC · 2014-09-15T23:06:22.927Z · LW(p) · GW(p)

Can you explain what is wrong with the following comparison?

The value of a dollar in utilons is equal to the increase in expected utilons brought by being given another dollar.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T12:34:47.266Z · LW(p) · GW(p)

The problem is the law of diminishing marginal utility. Translating from dollars to utilons is not straightforward at all; how much utility that dollar gives you depends on factors like how many dollars you already have, how much you owe, what services you can sell, and how much you know about what to do with money. For that same reason, utilons do not add up linearly as you are given a second, third, etc., dollar.
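
(A quick illustration of the diminishing-returns point, using a logarithmic utility-of-wealth curve purely as a toy assumption -- not a claim about anyone's actual preferences:)

```python
import math

def utility(wealth):
    # Toy assumption: utility grows with the log of total wealth.
    return math.log(wealth)

# Marginal "utilons" from one extra dollar, at different starting wealth levels.
for wealth in [100, 1_000, 10_000, 100_000]:
    marginal = utility(wealth + 1) - utility(wealth)
    print(f"at ${wealth:>7,}: one more dollar adds {marginal:.6f} utilons")

# The same dollar is worth fewer utilons the more dollars you already have,
# so there is no fixed dollars-to-utilons exchange rate.
```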

Replies from: Alejandro1, DanielLC
comment by Alejandro1 · 2014-09-17T11:20:57.707Z · LW(p) · GW(p)

Right; assuming (falsely of course) that humans have coherent preferences satisfying the VNM axioms, what can be measured in utilons are not "amount of dollars" in the abstract, but "amount of dollars obtained in such-and-such way in such-and-such situation". But I wouldn't call this "not being meaningfully comparable". And there is nothing special about dollars here, any other object, event or experience is subject to the same.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-17T13:29:52.690Z · LW(p) · GW(p)

Every time there's an argument that goes like "Would you pay a penny to avoid scenario X?", which in practice actually means "Would you sacrifice a utilon to avoid scenario X?" and therefore requires us to presuppose that dollars can stand in for utilons, something special is being assumed about dollars.

Replies from: Alejandro1
comment by Alejandro1 · 2014-09-17T13:52:40.990Z · LW(p) · GW(p)

But "Would you pay a penny to avoid scenario X?" in no way means "Would you sacrifice a utilon to avoid scenario X?" (the latter is meaningless, since utilons are abstractions subject to arbitrary rescaling). The meaningful rephrasing of the penny question in terms of utilons is "Ceteris paribus, would you get more utilons if X happens, or if you lose a penny and X doesn't happen?" (which is just roundabout way of asking which you prefer). And this is unobjectionable as a way of testing whether you have really a preference and getting a vague handle on how strong it is.

I would prefer if people avoided the word "utilon" altogether (and also "utility" outside of formal decision theory contexts) because there is an inevitable tendency to reify these terms and start using them in meaningless ways. But again, nothing special about money here.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-17T14:02:29.241Z · LW(p) · GW(p)

I would prefer if people avoided the word "utilon" altogether

Seconded. But then we would also need to avoid using language that sneaks disguised utilons into the conversation.

comment by DanielLC · 2014-09-16T15:48:57.914Z · LW(p) · GW(p)

If you had another dollar, then the value of the next dollar in utilons would decrease. Unless you're both an egoist and terrible with money, it will only be a slight decrease. After all, a dollar isn't much compared to all of the money you will make over your life.

I do not think that the fact that utilons do not add up linearly means that the conversion is not useful. For one thing, it allows you to express the law of diminishing marginal utility.

comment by Alejandro1 · 2014-09-17T11:14:41.144Z · LW(p) · GW(p)

Utilons do not exist. They are abstractions defined out of idealized, coherent preferences. To the extent that they are meaningful, though, their whole point is that anything one might have a preference over can be quantified in utilons--including dollars.

comment by polymathwannabe · 2014-09-15T12:26:23.843Z · LW(p) · GW(p)

My current understanding of U.S. laws on cryonics is that you have to be legally pronounced brain-dead before you can be frozen. I think that defeats the entire purpose of cryonics; I can't trust attempts to reverse-engineer my brain if I'm already brain-dead; that is, if my brain cells are already damaged beyond resuscitation. I don't live in the U.S. anyway, but sometimes I consider moving there just to be close to cryonics facilities. However, as long as I can't freeze my intact brain, I can't trust the procedure.

Replies from: RomeoStevens
comment by RomeoStevens · 2014-09-16T18:54:15.138Z · LW(p) · GW(p)

Brain-dead does not necessarily refer to damaged brain cells. It often refers to electrical activity. As people have been resuscitated after the cessation of brain activity (i.e. humans are cold-bootable) without loss of personality, it seems reasonable to still give cryonics a go.

comment by TheAncientGeek · 2014-09-20T14:44:53.423Z · LW(p) · GW(p)

[ Please read the OP before voting. Special voting rules apply.]

MWI is wrong, and relational QM is right.

Physicalism is wrong, because of the mind-body problem and other considerations, and dual-aspect neutral monism is right.

STEM types are too quick to reject ethical Objectivism. Moreover, moral subjectivism is horribly wrong. Don't know what the right answer is, but it could be some kind of Kantianism or Contractarianism.

Arguing to win is good, or to be precise, it largely coincides with truth-seeking.

There is no kind of smart that makes you uniformly good at everything.

Even though philosophy has no established body of facts, it is possible to be bad at philosophy and make mistakes in it. Scientists who try to solve longstanding philosophical problems in their lunch breaks end up making fools of themselves. Philosophy is not broken science.

A physicalistically respectable form of free will is defensible.

Bayes is oversold. Quantifying what you haven't first understood is pointless. Being a good rationalist at the day-to-day level has more to do with noticing your own biases, and with emotional maturity, than with mental arithmetic.

MIRI hasn't made a strong case for AI dangers.

The standard theism/atheism debate is stale, broken and pointless: people who can't understand metaphysics arguing with people who believe it but can't articulate it.

All epistemological positions boil down to fundamental, unprovable intuitions. Empiricism doesn't escape, because it is based on the intuition that if you can see something, it is really there. STEM types have an overly optimistic view of their epistemology, because they are accelerated out of worrying about fundamental issues.

Rationality is more than one thing.

Replies from: polymathwannabe, ChristianKl
comment by polymathwannabe · 2014-09-20T17:59:55.611Z · LW(p) · GW(p)

There are so many problems with this post I wish I could vote several times.

One example: how can you claim both "A physicalistically respectable form of free will is defensible" and "Physicalism is wrong?"

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-20T21:17:07.915Z · LW(p) · GW(p)

Easily. The wrongness of physicalism doesn't imply the wrongness of everything that is merely compatible with it.

comment by ChristianKl · 2014-09-20T16:58:11.124Z · LW(p) · GW(p)

Too many statements in a single post.

comment by TheMajor · 2014-09-16T18:56:03.389Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Somewhere between 1950 and 1970 too many people started studying physics, and now the community of physicists has entered a self-sustaining state where writing about other people's work is valued much, much more than forming ideas. Many modern theories (string theory, AdS/CFT correspondence, renormalisation of QFT) are hard to explain because they do not consist of an idea backed by a mathematical framework but solely of this mathematical framework.

Replies from: lmm
comment by lmm · 2014-09-17T12:35:16.073Z · LW(p) · GW(p)

Agree with the first half, disagree with the second

comment by Slider · 2014-09-15T19:12:26.828Z · LW(p) · GW(p)

Pursuing Friendliness via mathematical proofs about the exact trustworthiness of future computing principles is misguided.

comment by polymathwannabe · 2014-09-15T12:27:53.546Z · LW(p) · GW(p)

I sense this opinion is not that marginal here, but it does go against the established orthodoxy: I'm pro-specks.

Replies from: solipsist
comment by solipsist · 2014-09-15T12:42:32.129Z · LW(p) · GW(p)

Define?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-15T12:50:10.489Z · LW(p) · GW(p)

Meaning, in this scenario, I prefer 3^^^3 specks to 50 years of torture for one person.

Replies from: RomeoStevens, fubarobfusco
comment by RomeoStevens · 2014-09-15T19:24:46.206Z · LW(p) · GW(p)

I think that my objection is that the analysis sneaks in an ontological assumption: that sensory experiences are comparable across a huge range. I'm not very sure that's true.

Replies from: DanielLC
comment by DanielLC · 2014-09-15T23:11:15.528Z · LW(p) · GW(p)

What does it mean for something to be incomparable? You can't just not decide.

Replies from: RomeoStevens
comment by RomeoStevens · 2014-09-16T18:50:52.851Z · LW(p) · GW(p)

Sensory experiences that reliably change utility functions are hard to reason about.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T21:40:00.461Z · LW(p) · GW(p)

I'm not sure what you mean. Are you saying that since torture will destroy someone's mind, it's vastly worse than a dust speck, and exactly how much worse is nigh impossible to tell?

It can't be that hard to tell. Maybe you're not sure whether or not it's in the range of ten thousand dust specks to a quintillion dust specks, but it seems absurd to be so confused about it that you don't even know if it's worse than 3^^^3 dust specks.

comment by fubarobfusco · 2014-09-16T06:54:11.946Z · LW(p) · GW(p)

What's your reasoning? I expect serious attempts at an answer to have to cope with questions such as —

  • How many degrees of pain might a human be capable of? Is the scale linear? logarithmic?
  • How does the 'badness' (or 'natural evil', classically) of pain vary with its intensity and its duration? (Is having a nasty headache for seven days exactly seven times worse than having that headache for one day, or is it more or less than seven times worse?)
  • How does the 'badness' of some pain happening to N people scale with N? (If 100 people stub their toes, is that 100 times worse than one person stubbing his or her toe and 99 going safely unstubbed?)

Even if questions such as these can't be given precise answers, it should be possible to give some sort of bounds for them, and it's possible that those bounds are narrow enough to make the answer obvious.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T12:48:00.545Z · LW(p) · GW(p)

You want a scientific scale for measuring pain? Take your pick.

Not only is there no universally standardized measure of pain; the reason I'm pro-specks is that I don't believe pain distributed over separate brains is summable. It does not scale.

Elsewhere EY argued that a billion skydivers, each increasing the atmosphere's temperature by a thousandth of a degree, would individually not care about the effect, but collectively kill us all. The reason why the analogy doesn't apply is that all the skydivers are in the same atmosphere, whereas the specks are not hurting the same consciousness. Unless the pain is communicable (via hive mind or what have you), it will still be roundable to zero. You could have as many specks as you like, each of them causing the mildest itch in one eye, and it would still not surpass the negative utility from torturing one person.

Edited to add: I still don't have a clear idea of how infinite specks would change the comparison, but infinities don't tend to occur in real life.

Replies from: Jiro
comment by Jiro · 2014-09-16T18:43:29.475Z · LW(p) · GW(p)

the reason why I'm pro-specks is that I don't believe that pain distributed over separate brains is summable. It does not scale.

If the problem was solvable that easily, it wouldn't be a problem.

Just slightly change the definition of "speck" (or reinterpret the intent of the original definition): let a speck be "an amount of pain just slightly above the threshold where the pain no longer rounds down to zero for an individual". Now would you prefer specks for a huge number of people to torture for one person?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T19:00:05.577Z · LW(p) · GW(p)

I'm already taking "speck" to have that meaning. Even raising the threshold (say, 3^^^3 people stubbing their toe against the sidewalk with no further consequences), my preference stands.

Replies from: Jiro
comment by Jiro · 2014-09-16T19:11:53.789Z · LW(p) · GW(p)

If you're already taking "speck" to have that meaning, then your statement "Unless the pain is communicable (via hive mind or what have you), it will still be roundable to zero." would no longer be true.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T19:19:14.420Z · LW(p) · GW(p)

Granted. Let's take an example of pain that would be decidedly not roundable to zero. Say, 3^^^3 paper cuts, with no further consequences. Still preferable to torture.

Replies from: gjm, Jiro
comment by gjm · 2014-09-17T12:19:35.797Z · LW(p) · GW(p)

(What I'm about to say is I think the same as Jiro has been saying, but I have the impression that you aren't quite responding to what I think Jiro has been saying. So either you're misunderstanding Jiro, in which case another version of the argument might help, or I'm misunderstanding Jiro, in which case I'd be interested in your response to my comments as well as his/hers :-).)

It seems to me pretty obvious that one can construct a scale that goes something like this:

  • a stubbed toe
  • a paper cut
  • a painfully grazed knee
  • ...
  • a broken ankle
  • a broken leg
  • a multiply-fractured leg
  • ...
  • an hour of expertly applied torture
  • 80 minutes of expertly applied torture
  • ...
  • a year of expertly applied torture
  • 13 months of expertly applied torture
  • ...
  • 49 years of expertly applied torture
  • 50 years of expertly applied torture

with, say, at most a million steps on the scale from the stubbed toe to 50 years' torture, and with the property that any reasonable person would prefer N people suffering problem n+1 to (let's say) (1000N)^2 people suffering problem n. So, e.g. if I have to choose between a million people getting 13 months' torture and a million million million people getting 12 months' torture, I pick the former.

(Why not just say "would prefer 1 person suffering problem n+1 to 1000000 people suffering problem n"? Because you might take the view that large aggregates of people matter sublinearly, so that 10^12 stubbed toes aren't as much worse than 10^6 stubbed toes as 10^6 stubbed toes are than 1. The particular choice of scaling in the previous paragraph is rather arbitrary.)

If so, then we can construct a chain: 1 person getting 50 years' torture is less bad than 10^6 people getting 49 years, which is less bad than 10^18 people getting 48 years, which is less bad than [... a million steps here ...] which is less bad than [some gigantic number] getting stubbed toes. That final gigantic number is a lot less than 3^^^3; if you replace (1000N)^2 with some faster-growing function of N then it might get bigger, but in any case it's finite.
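
(For concreteness, a rough sketch of how fast that chain grows, taking the illustrative (1000N)^2 step and the at-most-a-million steps from above at face value -- the only point being that the endpoint, while gigantic, is finite:)

```python
import math

# Each step of the chain replaces N sufferers with (1000*N)**2 sufferers of the
# next-milder problem. In logs: log10(N') = 6 + 2*log10(N), starting from
# log10(N) = 0, which solves to log10(N) = 6 * (2**k - 1) after k steps.

def digits_after(k):
    """Approximate number of decimal digits of N after k steps."""
    return 6 * (2 ** k - 1)

for k in (1, 2, 5, 10, 20):
    print(f"after {k:>2} steps: N has about {digits_after(k):,} digits")

# After the full million steps, even the *digit count* of N has only about
# 1_000_000 * log10(2) ~ 301,000 digits -- a gigantic but finite number,
# and nowhere remotely close to 3^^^3.
print(f"digit count of the final N has ~{round(1_000_000 * math.log10(2)):,} digits")
```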

If you want to maintain that TORTURE is worse than SPECKS in view of this sort of argument, I think you need to do one of the following:

  • Abandon transitivity. "Yes, there's a chain of worseness just as you describe, but that doesn't mean that the endpoints compare the way you say." (In that case: Why do you find that credible?)
  • Abandon scaling entirely, even for small differences. "No, 80 minutes' torture for a million people isn't actually much worse than 80 minutes' torture for one person." (In that case: Doesn't that mean that as long as one person is suffering something bad, you don't care whether any other person suffers something less bad? Isn't that crazy?)
  • Abandon continuity. "No, you can't construct that scale of suffering you described. Any chain of sufferings that starts with a stubbed toe and ends with 50 years' torture must have at least one point in the middle where it makes an abrupt jump such that no amount of the less severe suffering can outweigh a single instance of the more severe." (In that case: Can you point to a place where such a jump happens?)
  • Abandon scaling entirely for large numbers. "Any given suffering is much worse when it happens to a million people than to one, but there's some N beyond which it makes no difference at all how many people it happens to." (In that case: Why? You might e.g. appeal to the idea that beyond a certain number, some of the people are necessarily exact duplicates of one another.)
  • Abandon logic altogether: "Bah, you and your complicated arguments! I don't care, I just know that TORTURE is worse than SPECKS." (In that case: well, OK.)
  • Something else I haven't thought of. (In that case: what?)

Incidentally, for my part I am uncertain about TORTURE versus SPECKS on two grounds. (1) I do think it's possible that for really gigantic numbers of people badness stops depending on numbers, or starts depending only really really really weakly on numbers, so weakly that you need a lot more arrows to make a number large enough to compensate -- precisely on the grounds that when the exact same life is duplicated many times its (dis)value might be a slowly growing function of the number of duplicates. (2) The question falls far outside the range of questions on which my moral intuitions are (so to speak) trained. I've never seriously encountered any case like it (with the outlandishly large numbers that are required to make it work), nor have any of my ancestors whose reproductive success indirectly shaped my brain. And, while indeed it would be nice to have a consistent and complete system of ethics that gives a definite answer in every case and never contradicts itself, in practice I bet I don't. And cases like this I think it's reasonable to mistrust both whatever answers emerge directly from my intuitions (SPECKS is better!) and the answers I get from far-out-of-context extrapolations of other intuitions (TORTURE is better!).

[EDITED immediately after posting, to fix a formatting screwup.]

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-17T19:56:27.699Z · LW(p) · GW(p)

(Small nitpicking: The pain from "a multiply-fractured leg" may bother you longer than "an hour of expertly applied torture", but the general idea behind the scale is clear.)

If I have to choose between a million people getting 13 months' torture and a million million million people getting 12 months' torture, I pick the former.

In this case I'd choose as you do, just as in Jiro's example:

3^^^3 people with a certain pain [versus] 1 person with a very slightly bigger pain.

The problem with these scenarios, however, is that they introduce a new factor: they're comparing magnitudes of pain that are too close to each other. This applies not only to the amount of pain, but also to the number of people:

10^12 stubbed toes aren't as much worse than 10^6 stubbed toes as 10^6 stubbed toes are than 1.

I'd rather be tortured for 12 than 13 months if those were my only options, but after having had both experiences I would barely be able to tell the difference. If you want to pose this problem to someone with enough presence of mind to tell the difference, you're no longer torturing humans.

(If psychological damage is cumulative, one month may or may not make the difference between PTSD and total lunacy. Of course, if at the end of the 12 months I'm informed that I still have one more month to go, then I will definitely care about the difference. But let's assume a normal, continuous torture scenario, where I wouldn't be able to keep track of time.)

This is why,

1 person getting 50 years' torture is less bad than 10^6 people getting 49 years, which is less bad than 10^18 people getting 48 years, which is less bad than [... a million steps here ...] which is less bad than [some gigantic number] getting stubbed toes.

runs into a Sorites problem that is more complex than EY's blunt solution of nipping it in the bud.

In another thread (can't locate it now), someone argued that moral considerations about the use of handguns were transparently applicable to the moral debate on nuclear weapons, and I didn't know how to present the (to me) super-obvious case that nuclear weapons are on another moral plane entirely.

You could say my objection to your 50 Shades of Pain has to do with continuity and with the meaningfulness of a scale over very large numbers. Such a quantitative scale would necessarily include several qualitative transitions, and the absurd results of ignoring them are what happens when you try to translate a subjective, essentially incommunicable experience into a neat progression of numbers.

(You could remove that obstacle by asking self-aware robots to solve this thought experiment, and they would be able to give you a precise answer about which pain is numerically worse, but in that case the debate wouldn't be relevant to us anymore.)

while indeed it would be nice to have a consistent and complete system of ethics that gives a definite answer in every case and never contradicts itself, in practice I bet I don't.

The underlying assumptions behind this entire thought experiment are a moral theory that leads to not being able to choose between 2 persons being tortured for 25 years and 1 person being tortured for 50 years, which is regrettable, and a decision theory that leads to scenarios where small questions can quickly escalate to blackmailing and torture, which is appalling.

Replies from: Jiro, gjm
comment by Jiro · 2014-09-17T21:29:03.803Z · LW(p) · GW(p)

3^^^3 people with a certain pain [versus] 1 person with a very slightly bigger pain. The problem with these scenarios, however, is that they introduce a new factor: they're comparing magnitudes of pain that are too close to each other.

That was in response to your idea that small amounts of pain cannot be added up, but large amounts can.

If this is true, then there is a transition point where you go from "cannot be added up" to "can be added up". Around that transition point, there are two pains that are close to each other yet differ in that only one of them can be added up. This leads to the absurd conclusion that you prefer lots of people with one pain to 1 person with the other, even though they are close to each other.

Saying "the trouble with this is that it compares magnitudes that are too close to each other" doesn't resolve this problem, it helps create this problem. The problem depends on the fact that the two pains don't differ in magnitude very much. Saying that these should be treated as not differing at all just accentuates that part, it doesn't prevent there from being a problem.

Replies from: polymathwannabe, polymathwannabe
comment by polymathwannabe · 2014-09-17T22:07:31.753Z · LW(p) · GW(p)

I'm thinking of the type of scale where any two adjacent points are barely distinguishable but you see qualitative changes along the way; something like this.

Replies from: Jiro
comment by Jiro · 2014-09-18T01:39:12.488Z · LW(p) · GW(p)

That doesn't solve the problem. The transition from "cannot be added up" to "can be added up" happens at two adjacent points.

comment by polymathwannabe · 2014-09-17T22:04:07.189Z · LW(p) · GW(p)

Presumably, you still think that large amounts of pain can be added up.

As I don't think pain can be expressed in numbers, I don't think it can be added up, no matter its magnitude.

Replies from: Jiro
comment by Jiro · 2014-09-18T14:45:58.032Z · LW(p) · GW(p)

In that case, you can't even prefer one person with pain to 3^^^^3 people with the same pain.

(And if you say that you can't add up sizes of pains, but you can add up "whether there is a pain", the latter is all that is necessary for one of the problems to happen; exactly which problem happens depends on details such as whether you can do this for all sizes of pains or not.)

comment by gjm · 2014-09-17T22:24:19.096Z · LW(p) · GW(p)

they're comparing magnitudes of pain that are too close to each other.

Doesn't that make the argument stronger? I mean, if you're not even sure that 13 months of torture are much worse than 12 months of torture, then you should be pretty confident that 10^6 instances of 12 months' torture are worse than 1 instance of 13 months' torture, no?

Such a quantitative scale would necessarily include several qualitative transitions

So that was the option I described as "abandon continuity". I was going to ask you to be more specific about where those qualitative transitions happen, but if I'm understanding you correctly I think your answer would be to say that the very question is misguided because there's something ineffable about the experience of pain that makes it inappropriate to try to be quantitative about it, or something along those lines. So I'll ask a different question: What do those qualitative transitions look like? What sort of difference is it that can occur between what look like two very, very closely spaced gradations of suffering, but that is so huge in its significance that it's better for a billion people to suffer the less severe evil than for one person to suffer the more severe?

(You mention one possible example in passing: the transition from "PTSD" to "total lunacy". But surely in practice this transition isn't instantaneous. There are degrees of psychological screwed-up-ness in between "PTSD" and "total lunacy", and there are degrees of probability of a given outcome, and what happens as you increase the amount of suffering is that the probabilities shift incrementally from each outcome to slightly worse ones; when the suffering is very slight and brief, the really bad outcomes are very unlikely; when it's very severe and extended, the really bad outcomes are very likely. So is there, e.g., a quantitative leap in badness when the probability of being badly enough messed-up to commit suicide goes from 1% to 1.01%, or something?)

a moral theory that leads to not being able to choose between 2 persons being tortured for 25 years and 1 person being tortured for 50 years

If you mean that anyone here is assuming some kind of moral calculus where suffering is denominated in torture-years and is straightforwardly additive across people, I think that's plainly wrong. On the other hand, if you mean that it should be absolutely obvious which of those two outcomes is worse ... well, I'm not convinced, and I don't think that's because I have a perverted moral system, because it seems to me it's not altogether obvious on any moral system and I don't see why it should be.

a decision theory that leads to scenarios where small questions can quickly escalate to blackmailing and torture

I'm not sure what you mean. Could you elaborate?

comment by Jiro · 2014-09-16T22:12:41.330Z · LW(p) · GW(p)

Presumably, you still think that large amounts of pain can be added up.

In that case, that must have a threshold too; something that causes a certain amount of pain cannot be added up, while something that causes a very very slightly greater amount of pain can add up. That implies that you would prefer 3^^^3 people having pain at level 1 to one person having pain of level 1.00001, as long as 1 is not over the threshold for adding up but 1.00001 is. Are you willing to accept that conclusion?

(Incidentally, for a real world version, replace "torture" with "driving somewhere and accidentally running someone over with your car" and "specks" with "3^^^3 incidences of not being able to do something because you refuse to drive". Do you still prefer specks to torture?)

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-16T23:14:20.921Z · LW(p) · GW(p)

That implies that you would prefer 3^^^3 people having pain at level 1 to one person having pain of level 1.00001, as long as 1 is not over the threshold for adding up but 1.00001 is.

As I stated before, doctors can't agree on how to quantify pain, and I'm not going to attempt it either. This does not prevent us from comparing lesser and bigger pains, but there are no discrete "pain units" any more than there are utilons.

(Incidentally, for a real world version, replace "torture" with "driving somewhere and accidentally running someone over with your car" and "specks" with "3^^^3 incidences of not being able to do something because you refuse to drive". Do you still prefer specks to torture?)

I would choose the certain risk of one traffic victim over 3^^^3 people unable to commute. But this example has a lot more ramifications than 3^^^3 specks. The lack of further consequences (and of aggregation capability) is what makes the specks preferable despite their magnitude. A more accurate comparison would be choosing between one traffic victim and 3^^^3 drivers annoyed by a paint scratch.

Replies from: Jiro
comment by Jiro · 2014-09-17T03:14:08.020Z · LW(p) · GW(p)

As I stated before, doctors can't agree on how to quantify pain, and I'm not going to attempt it either. This does not prevent us from comparing lesser and bigger pains, but there are no discrete "pain units" any more than there are utilons.

If you can compare bigger and smaller pains, and if bigger pains can add and smaller pains cannot, you run into this problem. Whether you call one pain 1 and another 1.00001 or whether you just say "pain" and "very slightly bigger pain" is irrelevant--the question only depends on being able to compare them, which you already said you can do. What you say implies that you would prefer 3^^^3 people with a certain pain to 1 person with a very slightly bigger pain. Is this really what you want?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-17T19:56:57.889Z · LW(p) · GW(p)

Please see my recent reply here.

comment by ChristianKl · 2014-09-15T11:30:20.297Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The study and analysis of human movement is very underfunded. There is a lot of research into getting static information such as DNA or X-rays, but very little into getting dynamic information about how humans move.

Replies from: NancyLebovitz, army1987, jsteinhardt
comment by NancyLebovitz · 2014-09-16T03:22:03.878Z · LW(p) · GW(p)

I agree with this, so I'm telling you instead of upvoting.

comment by A1987dM (army1987) · 2014-09-17T17:03:38.894Z · LW(p) · GW(p)

There is a lot of research into getting static information such as DNA or X-rays, but very little into getting dynamic information about how humans move.

Except for the purpose of making CGI actors' movements look realistic.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-17T17:55:56.249Z · LW(p) · GW(p)

Except for the purpose of making CGI actors' movements look realistic.

That's mostly not done within biology but by companies that produce closed-source knowledge and proprietary algorithms for the purposes of CGI.

comment by jsteinhardt · 2014-09-16T08:25:14.960Z · LW(p) · GW(p)

This is very intriguing. Can you give examples of what gains we would get from studying this?

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T12:00:17.312Z · LW(p) · GW(p)

I think most people who suffer from back pain do so because their muscles do something they shouldn't do. RSI is probably also an illness that has to do with muscles engaging in patterns of activation that aren't healthy.

I personally had to relearn walking after 7 weeks of being in bed in the hospital. You need an amazing number of different muscles to walk, and if you don't use a bunch of them you are walking suboptimally.

These days you can use approaches such as Feldenkrais to relearn how to use all your muscles, but Feldenkrais isn't really science-based. A real science of movement - one with equipment that measures human movement very exactly and machine learning algorithms run over those measurements - would likely yield a science-based version of Feldenkrais that's more efficient, and where you can diagnose issues much better and say beforehand whether Feldenkrais will help a person.
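
(To make the "measure movement, then run machine learning over it" idea concrete, here is a minimal, entirely hypothetical sketch. The gait features, the numbers, and the use of scikit-learn's KMeans are all illustrative assumptions, not a description of any existing movement-science pipeline or of Feldenkrais itself.)

```python
import numpy as np
from sklearn.cluster import KMeans

# Purely illustrative: pretend each row summarizes one person's gait with a few
# hand-picked features (stride length in m, cadence in steps/min, left/right
# asymmetry). Real motion-capture data would be far richer; this only sketches
# the idea of running ML over movement measurements.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[1.4, 110.0, 0.03], scale=[0.1, 5.0, 0.01], size=(50, 3))
limping = rng.normal(loc=[1.0, 95.0, 0.15], scale=[0.1, 5.0, 0.03], size=(50, 3))
gaits = np.vstack([healthy, limping])

# Unsupervised grouping of movement patterns -- the kind of step that might
# eventually separate "needs intervention" gaits from typical ones.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(gaits)
print("cluster sizes:", np.bincount(labels))
```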

Replies from: NancyLebovitz, John_Maxwell_IV, Richard_Kennaway
comment by NancyLebovitz · 2014-09-20T14:58:16.746Z · LW(p) · GW(p)

I think I've found a scientifically-based system. It's based on anatomy, and uses pressure plates to establish how people move their weight when they stand and walk.

Unfortunately, the book costs $60, and is a book of principles and facts, not methods. Even though it's directed toward body-workers rather than people in general, it still doesn't include the exercises for activating the appropriate movement patterns to improve walking.

Nonetheless, I'm experimenting cautiously with what I can get out of it-- gently shifting the weight transfer patterns in my feet while walking toward what's recommended, for example. This may be doing some good, but I'll do more of a report later.

Author's blog

Replies from: ChristianKl
comment by ChristianKl · 2014-09-20T16:02:39.731Z · LW(p) · GW(p)

I think I've found a scientifically-based system. It's based on anatomy, and uses pressure plates to establish how people move their weight when they stand and walk.

It might be a bit measurement-based, but that alone doesn't mean it's science-based. As far as I can see, the author doesn't hold an academic degree, and, judging by the testimonial page, doesn't even think it's important to have somebody with an academic degree recommend his method. I don't see references to published papers on the website.

Of course that doesn't mean that the knowledge in the book isn't useful. On the other hand, it doesn't help with sanctioning treatments as evidence-based and getting them covered by mainstream medical providers.

comment by Richard_Kennaway · 2014-09-16T15:17:15.183Z · LW(p) · GW(p)

A real science of movement that had equipment to measure human movement very exactly and then ran machine learning algorithms over those measurements would likely yield a science-based version of Feldenkrais that's more efficient, and where you can diagnose issues much better, so that you could say beforehand whether Feldenkrais will help a person.

What would science-based Feldenkrais teaching look like?

How would you get from "machine learning algorithms over measurements", which sounds like statistical curve-fitting, to actionable conclusions about how people should use their bodies, and how to teach people to do that? No-one can follow instructions like "increase activation of the iliopsoas by 3%", even if you somehow validated a causal model that made that a useful thing to do in some situation. People can barely follow verbal instructions at all about posture and movement.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T15:42:31.079Z · LW(p) · GW(p)

No-one can follow instructions like "increase activation of the iliopsoas by 3%", even if you somehow validated a causal model that made that a useful thing to do in some situation.

The problem is that we don't really know which instructions people can easily follow and which they can't. If the problem is "increasing activation of the iliopsoas by 3%" you can empirically test various interventions. Without a causal model of what kind of movement is good, you can't validate interventions and determine whether an intervention is good.

Apart from that it's possible to do biofeedback. Good feedback can give humans perception and control over many variables.

On the other hand you can make a similar argument about the usefulness of understanding how proteins do what they do. Just because you understand a pathway doesn't mean you can manipulate it the way you want. Science advances by first mapping the space of phenomena and then hopefully finding a way to intervene.

comment by chaosmage · 2014-09-15T10:02:01.824Z · LW(p) · GW(p)

I think raising the sanity waterline is the most important thing we can do, and we do too little of it because our discussions tend to happen amongst ourselves, i.e. with people who are far from that waterline.

Any attempt to educate people, including the attempt to educate them about rationality, should focus on teens, or where possible on children, in order to create maximum impact. HPMOR does that to some degree, but Less Wrong usually presupposes cognitive skills that the very people who'd benefit most from rationality do not possess. It is very much in-group discussion. If "refining the art of human rationality" is our goal, we should be doing a lot more outreach and a lot more production of very accessible rationality materials. Simplified versions of the sequences, with more pictures and more happiness. CC licensed leaflets and posters. Classroom materials. Videos (compare the SciShow video on Bayes' Theorem), because that's how many curious young minds get their extracurricular knowledge these days.

In fact, if we crowdfunded somebody with education materials production experience to do that (or better yet, crowdfund two or three and let them compete for the next round), I'd contribute significantly.

Replies from: Viliam_Bur, ChristianKl, John_Maxwell_IV, Three-Monkey Mind
comment by Viliam_Bur · 2014-09-15T18:59:15.638Z · LW(p) · GW(p)

Is this supposed to be a contrarian view on LW? If it is, I am going to cry.

Unless we reach a lot of young people, we risk that in 30-40 years the "rationalist movement" will be mostly a group of old people spending most of their time complaining about how things were better when they were young. And the change will come so gradually we may not even notice it.

Replies from: chaosmage
comment by chaosmage · 2014-09-16T07:35:51.121Z · LW(p) · GW(p)

I don't think anybody has explicitly spoken out against it, but it seems to me everyone acts quite opposed to the idea.

comment by ChristianKl · 2014-09-15T12:43:37.298Z · LW(p) · GW(p)

I think videos are the wrong medium. Videos have the problem of getting people to think they understand something when they don't. People learn all the right buzzwords, but that doesn't mean they actually are more rational.

Kaj Sotala, for example, is designing a game for his master's thesis that's intended to teach Bayes' theorem. I think such a game would be much more valuable than a video that explains Bayes' theorem.

We have PredictionBook and the Credence game as tools to teach people to be more rational. They aren't yet at a quality level where the average person will use them. Focusing more energy on improving those tools and making them work better is more valuable than producing videos.

CFAR also develops teaching materials. A core feature of CFAR is that it actually focuses on producing quality instead of just producing videos and hoping that those videos will have an impact. I know there's someone in Germany who teaches a high school class based on CFAR-inspired material.

comment by John_Maxwell (John_Maxwell_IV) · 2014-09-19T07:52:40.772Z · LW(p) · GW(p)

Seems pretty sensible to me. I'm not that worried about a 30-40 year old "rationalist" movement, however... in the same way the ideas on LW appealed to me as a teen, it seems likely that they will end up appealing to other teens, if they end up hearing about them (stuff like e.g. HPMOR makes it likely that they will).

comment by Three-Monkey Mind · 2020-06-02T22:22:48.878Z · LW(p) · GW(p)

If “refining the art of human rationality” is our goal, we should be doing a lot more outreach and a lot more production of very accessible rationality materials.

I agree, and I'm in favor of this sort of thing. I try to do this sort of thing among my friends. Sometimes it works, at least a little bit.

On the other hand, if we're trying to save Earth from being turned into paperclips, we ought to focus our efforts on people who're smart enough to be able to meaningfully contribute to AI risk reduction.

On the other other hand, there are people here who could help with sanity-line-raising materials who can't help with rationality training as a way to avert AI x-risk.

On the other other other hand, some people who might be able to help with AI risk might get into the possibly-less-important sanity-waterline-raising projects, and this would be a bad thing.

comment by RomeoStevens · 2014-09-15T19:15:57.106Z · LW(p) · GW(p)

Changing minds is usually impossible. People will only be shifted on things they didn't feel confident about in the first place. Changes in confidence are only weakly influenced by system 2 reasoning.

Replies from: Prismattic
comment by Prismattic · 2014-09-17T01:43:48.104Z · LW(p) · GW(p)

You may find this article by Tom Stafford (of Mindhacks) to be of interest.

Replies from: RomeoStevens
comment by RomeoStevens · 2014-09-17T10:21:52.189Z · LW(p) · GW(p)

Interesting. They posit several conditions for mind-changing.

  1. The person is motivated/somehow involved with the outcome of the decision.

  2. The argument for a different view is strong.

  3. Their own view is shown to have a weakness.

  4. They like you. (ingroup affiliation is also mentioned)

  5. They have time to think about it.

Not all conditions are always required, but each has a significant impact.

comment by Slider · 2014-09-15T18:43:49.122Z · LW(p) · GW(p)

Coherent Extrapolated Volition is a bad way of approaching friendliness.

Replies from: Slider
comment by Slider · 2014-09-15T18:53:13.873Z · LW(p) · GW(p)

CEV assumes that things can be made coherent. This ignores how tendencies to pull in opposite directions play out in practice. This might be a feature and not a bug. The extrapolation part is like predicting growth without actually growing; it will lead to a different place than natural growth would. It also assumes that humans will on the whole cooperate. How should the AI approach conflicts of interest among its constituents? If there is another cold-war-style scenario, does it mean most of the AI's power is wasted on being neutral? Or does the AI just amplify individual symmetry-breaking decisions to big scales, so that the AI isn't the one that messes everything up, it just supplies the one at fault with tools to do it at scale?

Organizations tend to be able to form stances that are way narrower than individual living philosophies. Humanity can't settle on a volition that could be broad enough to serve as a personal moral action-guiding principle. If a dictator forces a "balanced" view, why not just go all the way and make it Imposed Corrective Values?

Replies from: RomeoStevens
comment by RomeoStevens · 2014-09-16T18:52:20.680Z · LW(p) · GW(p)

opposed utility functions can be sharded into separate light cones.

comment by [deleted] · 2014-09-15T18:28:00.453Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Utilitarianism relies on so many levels of abstraction as to be practically useless in most situations.

Replies from: DanielLC
comment by DanielLC · 2014-09-15T23:10:34.869Z · LW(p) · GW(p)

I denotationally agree. In a given situation, utilitarianism will most likely have negligible value. But I think those other situations are a big deal. Knowing where to donate money makes a much larger difference than every other choice I make combined. In my experience, utilitarians are much better at deciding where to donate.

comment by Daniel_Burfoot · 2014-09-15T14:10:46.183Z · LW(p) · GW(p)

Our society is ruled by a Narrative which has no basis in reality and is essentially religious in character. Every component of the Narrative is at best unjustified by actual evidence, and at worst absurd on the face of it. Moreover, most leading public intellectuals never seriously question the Narrative because to do so is to be expelled from their positions of prestige. The only people who can really poke holes in the Narrative are people like Peter Thiel and Nassim Taleb, whose positions of wealth and prestige are independently guaranteed.

The lesson is that in the modern world, if you want to be a philosopher, you should first become a billionaire. Then and only then will you have the independence necessary to pursue truth.

Replies from: polymathwannabe, FiftyTwo, None
comment by polymathwannabe · 2014-09-15T14:20:05.159Z · LW(p) · GW(p)

What exactly does that Narrative say?

Replies from: Jayson_Virissimo, army1987
comment by Jayson_Virissimo · 2014-09-15T18:36:11.675Z · LW(p) · GW(p)

Why would he answer you without first being a billionaire?

comment by A1987dM (army1987) · 2014-09-21T08:45:06.550Z · LW(p) · GW(p)

Read “How Dawkins Got Pwned” by Mencius Moldbug. (Warning: it's long.)

comment by FiftyTwo · 2014-09-21T00:51:15.713Z · LW(p) · GW(p)

What possible evidence can you have for the existence of a great truth which is by definition not available to you?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2014-09-21T16:14:27.853Z · LW(p) · GW(p)

It's available to me, I just can't talk about it with other people.

comment by [deleted] · 2014-09-19T15:18:25.966Z · LW(p) · GW(p)

This seems obvious to me. Except that in order to become a billionaire you need to act exactly as if you believe in the narrative and probably come to believe it.

EDIT: Though I fail to see how Thiel pokes 'the narrative' - it seems to me he doubles down even harder on large chunks of it.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-21T08:51:06.354Z · LW(p) · GW(p)

EDIT: Though I fail to see how Thiel pokes 'the narrative' - it seems to me he doubles down even harder on large chunks of it.

Probably you and Daniel Burfoot are thinking of different Narratives. I guess the one he's thinking about is what Moldbug calls “Universalism”, “the blue pill”, “M.42”, and other such deliberately silly names.

comment by ChristianKl · 2014-09-15T11:27:32.303Z · LW(p) · GW(p)

Finding better ways for structuring knowledge is more important than faster knowledge transfer through devices such as high throughput Brain Computer Interfaces.

It's a travesty that outside of computer programming languages, few new languages have been invented in the last two decades.

Replies from: gjm
comment by gjm · 2014-09-15T11:29:14.773Z · LW(p) · GW(p)

These are two separate (though related) propositions. For the purpose of this thread it would be better to separate them. (You'd probably also get more karma that way :-).)

Replies from: ChristianKl
comment by ChristianKl · 2014-09-15T11:35:46.752Z · LW(p) · GW(p)

I don't think they are separate. Languages are ways of structuring information. That might be the most contrarian thought in the post ;)

Replies from: gjm
comment by gjm · 2014-09-15T12:23:17.647Z · LW(p) · GW(p)

I understand; that's why they're related. But they're not the same statement; someone could agree with your first statement and disagree with your second. In fact, they could agree that finding better ways of structuring knowledge is really important, and agree that languages are ways of structuring information, but not think it's a bad thing that languages aren't being invented faster -- e.g., they might hold that outside of computer programming, there are almost always better ways to improve how we structure information than by inventing new languages.

comment by A1987dM (army1987) · 2014-09-17T16:19:15.047Z · LW(p) · GW(p)

I'm upvoting top-level comments which I think are in the spirit of this post but I personally disagree with (in the case of comments with several sentences, if I disagree with their conjunction), downvoting ones I don't think are in the spirit of this post (e.g. spam, trolling, views which are clearly not contrarian either on LW or in the mainstream), and leaving alone ones which are in the spirit of this post but I already agree with. Is that right?

What about comments I'm undecided about? I'm upvoting them if I consider them less likely than my model of the average LWer does and leaving them alone otherwise. Is that right?

Replies from: shminux
comment by Shmi (shminux) · 2014-09-17T21:56:29.213Z · LW(p) · GW(p)

I interpret the intention as "upvote serious ones you disagree with, downvote trolls, ignore those you agree with". In other words, you are not judging what you think LW finds contrarian, you are reporting whether you agree with the views posters perceive as contrarian, not penalizing people for misjudging what is contrarian.

Hopefully this thread is a useful tool for figuring out which views are the most out of the LW mainstream, but still are taken seriously by the community. 10+ upvotes would probably be in the ballpark.

comment by fubarobfusco · 2014-09-16T07:20:39.390Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Improving the typical human's emotional state — e.g. increasing compassion and reducing anxiety — is at least as significant to mitigating existential risks as improving the typical human's rationality.

The same is true for unusually intelligent and capable humans.

For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.

(Of course, there are cases where failures of rationality and failures of compassion coincide — the fundamental attribution error, for instance. It seems to me that attacking these problems from both System 1 and System 2 will be more effective than either approach alone.)

comment by passive_fist · 2014-09-15T21:16:36.670Z · LW(p) · GW(p)
  1. Once you actually take human nature into account (especially the things that cause us to feel happiness, pride, regret, empathy), then most seemingly-irrational human behavior actually turns out to be quite rational.

  2. Conscious thought processes are often deficient in comparison to subconscious ones, both in terms of speed and in terms of amount of information they can integrate together to make decisions.

  3. From 1 and 2 it follows that most attempts at trying to consciously improve 'rational' behavior will end up falling short or backfiring.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-20T16:15:36.240Z · LW(p) · GW(p)

I agree, which probably makes it contrarian.

comment by Fivehundred · 2014-09-22T21:11:55.637Z · LW(p) · GW(p)

History isn't over in any Fukuyamian sense; in fact the turmoil of the twenty-first century will dwarf the twentieth. A US-centered empire will likely take shape by century's end.

I will elaborate if requested.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-23T20:33:49.729Z · LW(p) · GW(p)

I agree with the first sentence, I disagree with the second. Few countries managed more than a century of world domination and the US is already showing the classic signs of decay.

Replies from: Azathoth123, Fivehundred
comment by Azathoth123 · 2014-09-24T01:53:54.239Z · LW(p) · GW(p)

So? The Roman Republic managed to expand its holdings even as it was decaying into the Empire.

comment by Fivehundred · 2014-09-23T21:24:27.901Z · LW(p) · GW(p)

Which would be?

comment by polymathwannabe · 2014-09-15T12:37:26.610Z · LW(p) · GW(p)

I've always had problems with MWI, but it's just a gut feeling. I don't have the necessary specialized knowledge to be able to make a decent argument for or against it. I do concede it one advantage: it's a Copernican explanation, and so far Copernican explanations have a perfect record of having been right every time. Other than that, I find it irritating, most probably because it's the laziest plot device in science-fiction.

Replies from: Azathoth123, TheAncientGeek, DanielLC, polymathwannabe
comment by Azathoth123 · 2014-09-16T01:09:24.823Z · LW(p) · GW(p)

Technical explanation: the problem with MWI is that it makes the fact that density matrices work seem like a complete epistemological coincidence.

Incidentally, I remember a debate between Eliezer and Scott Aaronson where the former confessed he stopped reading his QM textbook right before the chapter on density matrices.

Replies from: pragmatist
comment by pragmatist · 2014-09-16T06:07:08.970Z · LW(p) · GW(p)

I don't understand what you mean. Could you explain? I'm familiar with QM, so you don't need to avoid technicality in your explanation.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-17T01:24:30.775Z · LW(p) · GW(p)

Suppose we have two jars of qubits:

Half the qubits in jar (1) are in state |0> and the other half are in state |1>.

Half the qubits in jar (2) are in state |+> and the other half are in state |->.

Notice that although from a classical Bayesian MWI perspective the two jars are in very different states, there is no way to tell them apart even in principle.
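
For concreteness, here is the standard check of that claim (textbook algebra, added only for illustration): written as density matrices, the two mixtures are literally the same operator, so no measurement statistics can distinguish the jars.

$$\rho_1 = \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1| = \tfrac{1}{2}I$$

$$\rho_2 = \tfrac{1}{2}|+\rangle\langle +| + \tfrac{1}{2}|-\rangle\langle -| = \tfrac{1}{4}\big(|0\rangle + |1\rangle\big)\big(\langle 0| + \langle 1|\big) + \tfrac{1}{4}\big(|0\rangle - |1\rangle\big)\big(\langle 0| - \langle 1|\big) = \tfrac{1}{2}I$$

The cross terms cancel, so both jars are described by the maximally mixed state $\tfrac{1}{2}I$.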

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-17T17:13:16.577Z · LW(p) · GW(p)

BTW, that's the only reason why I'm not fully convinced by realist interpretations of QM.

comment by TheAncientGeek · 2014-09-17T16:22:54.432Z · LW(p) · GW(p)

Against Many Worlds

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-17T16:35:57.440Z · LW(p) · GW(p)

Broken link, but files 9703089 and 9704039 appear to be the relevant ones.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-17T17:02:46.569Z · LW(p) · GW(p)

Thanks. Fixed. Doing links on a tablet is a nightmare.

comment by DanielLC · 2014-09-15T23:14:10.108Z · LW(p) · GW(p)

What's a Copernican explanation?

Replies from: Nornagest, polymathwannabe
comment by Nornagest · 2014-09-16T00:09:08.532Z · LW(p) · GW(p)

I've never heard the term before, but in context I'd guess it means something like "an explanation that implies we're less important than the previous explanation did". Heliocentrism vs. geocentrism, evolution vs. a supernatural creation narrative culminating in people, etc.

Replies from: Aleksander, DanielLC
comment by Aleksander · 2014-09-16T05:33:09.549Z · LW(p) · GW(p)

Freud's psychoanalysis has been often put in the same category of "Copernican" things as heliocentrism and evolution.

comment by DanielLC · 2014-09-16T01:00:24.043Z · LW(p) · GW(p)

Why is MWI more Copernican than the Copenhagen interpretation?

You do realize that an "observer" doesn't have to be conscious, right? The Copenhagen interpretation doesn't treat humans specially. If anything, I'd say that the Copenhagen interpretation is more Copernican, since it explains the Born probabilities without requiring anthropics.

Replies from: Nornagest
comment by Nornagest · 2014-09-16T01:24:04.063Z · LW(p) · GW(p)

My comment was not intended to be an endorsement of polymathwannabe's analysis. I'm not a QM expert and am not qualified to comment on the details of either interpretation.

comment by polymathwannabe · 2014-09-16T02:00:06.982Z · LW(p) · GW(p)

The Copernican principle states that there's nothing special or unique or privileged about our local frame of reference: we're not at the center of the solar system, we're not at the center of the galaxy, this is not the only galaxy, and the next logical step would be to posit that this is not the only universe.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T02:36:32.256Z · LW(p) · GW(p)

I do not believe in reincarnation of any sort. I believe this is my only life.

It has been believed that the Earth was flat. I'm sure at least someone had considered the implication that the Earth goes on forever. This has turned out to be false. The Earth has positive curvature, and thus only finite surface area.

Quite a few people have considered the idea that atoms are little solar systems, which could have their own life. It turns out that electrons are almost certainly fundamental particles. And even if they're not, the way physics works on a small scale is such that life would be impossible.

Similarly, galaxies do not make up molecules. Even if there are other forces as would be necessary, the light speed limit combined with the expansion of the universe creates a cosmological event horizon. Beyond a certain scale, it is physically impossible for anything to interact.

There are a variety of physical theories that predict other universes. They work in different ways, and tend not to be contradictory. It would be unwise to reject them out of hand, but it would be equally unwise to automatically accept them.

comment by polymathwannabe · 2014-09-16T00:47:48.602Z · LW(p) · GW(p)

The Copernican principle states that there's nothing special or unique or privileged about our local frame of reference: we're not at the center of the solar system, we're not at the center of the galaxy, this is not the only galaxy, and the next logical step would be to posit that this is not the only universe.

comment by Stefan_Schubert · 2014-09-15T11:58:07.263Z · LW(p) · GW(p)

Anti-contrarianism.

comment by Florian_Dietz · 2014-09-15T11:54:00.328Z · LW(p) · GW(p)

You can't solve AI friendliness in a vacuum. To build a friendly AI, you have to simultaneously work on the AI and the code of ethics it should use, because they are interdependent. Until you know how the AI models reality most effectively you can't know if your code of ethics uses atoms that make sense to the AI. You can try to always prioritize the ethics aspects and not make the AI any smarter until you have to do so, but you can't first make sure that you have an infallible code of ethics and only start building the AI afterwards.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-15T17:51:54.888Z · LW(p) · GW(p)

How is this different from the LW mainstream?

Replies from: None, Florian_Dietz
comment by [deleted] · 2014-09-15T21:52:21.771Z · LW(p) · GW(p)

Any work on AI implementation is seriously downvoted here.

comment by Florian_Dietz · 2014-09-15T19:44:53.084Z · LW(p) · GW(p)

The last time I saw someone suggest that one should build an AI without first solving friendliness completely, he was heavily downvoted. I found that excessive, which is why I posted this. I am positively surprised to see that I basically got no reaction with my statement. My memory must have been exaggerated with time, or maybe it was just a fluke.

edit: I now seriously doubt my previous statement. I just got downvoted on a thread in which I was explicitly instructed to post contrarian opinions and where the only things that should get downvotes are spam and trolling, neither of which I did. Of course it's also just possible that someone didn't read the OP and used normal voting rules.

comment by ChristianKl · 2014-09-15T11:31:55.498Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Toxicology research is underfunded. Investing more money into finding tools to measure toxicology makes more sense than spending money on trying to understand the functioning of various genes.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-09-16T03:22:25.083Z · LW(p) · GW(p)

I'm not sure whether I agree or disagree. What's your line of thought?

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T10:56:41.583Z · LW(p) · GW(p)

Scientific progress has a lot to do with having powerful tools. If you get more powerful tools you can easily get a bunch of new insight. As a result it's worthwhile to focus most of our efforts on building better tools instead of funding specific insights.

Secondly, the standard way to judge whether something is poisonous is to see how big a dose you need to kill rats. If a substance doesn't kill rats or reduce their lifespan but just reduces their IQ, our tox tests don't pick it up. That means we might have pesticides in use that do things like reducing IQ at doses that are legally allowed, without our testing procedure being able to tell.

Practically, we could measure variables like the heart rate of a lab rat via infrared for its whole lifespan and see whether it shows unusual patterns. Every action the rat takes in its lifetime could be measured. The rat lives in a controlled environment where a lot of information besides whether or not the rat dies can be gathered.

Of course we already do some additional tests such as a test for whether a substance might cause cancer that has nothing to do with rats but in general we could do much better at measuring toxicology.

On the one hand that means we could regulate harmful substances where we currently don't know they are harmful. On the other hand, drug development also gets cheaper if you catch bad drugs while you are still testing on rats and before you run expensive trials with humans.

comment by hikari · 2014-10-09T19:34:37.750Z · LW(p) · GW(p)

[Read the OP before voting for special voting rules.]

The many worlds interpretation of quantum mechanics is categorically confused nonsense. Its origins lie in a map/territory confusion, and in the mind projection fallacy. Configuration space is a map, not territory—it is an abstraction used for describing the way that things are laid out in physical space. The density matrix (or in the special case of pure states, the state vector, or the wave function) is a subjective calculational tool used for finding probabilities. It's something that exists in the mind. Any 'interpretation' of quantum mechanics which claims that any of these things exists in reality (e.g. MWI) therefore commits the mind projection fallacy.

comment by A1987dM (army1987) · 2014-09-19T12:28:03.277Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Homeownership is not a good idea for most people.

Replies from: atorm
comment by atorm · 2014-09-25T00:55:22.065Z · LW(p) · GW(p)

Please elaborate.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-10-10T12:37:24.314Z · LW(p) · GW(p)

The largest avoidable source of pain and boredom in the life of a typical western citizen is their commute. The sane response to this problem is to live as close to your place of employment as is at all practical. Valuing the time spent commuting at the same rate as your hourly earnings, the monthly penalty for living any significant distance at all from your job gets quite absurd for professionals in purely financial terms, and the human cost is greater, because the commute eats up the time you can actually dispose of as you wish.
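
To put rough, purely illustrative numbers on that (my own figures, not the commenter's): a 45-minute each-way commute is about 1.5 hours per working day, roughly 30 hours over a month of workdays; valued at a $40/hour wage, that is on the order of $1,200 a month before counting fares or fuel.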

Home ownership increases the costs of moving residence dramatically compared to renting, and is thus not a good idea unless you have a job which you anticipate keeping for a far greater period of time than is typical in modern society. IE, do you have tenure or the effective equivalent? Then buying over renting makes sense. If you don't, all buying does is make it hurt a lot more to move when you get a new job.

comment by jsteinhardt · 2014-09-15T17:32:34.324Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

Sitting down and thinking really hard is a bad way of deciding what to do. A much better way is to find several trusted advisors with relevant experience and ask their advice.

Replies from: RomeoStevens, Elo
comment by RomeoStevens · 2014-09-16T18:55:36.680Z · LW(p) · GW(p)

corollary: sitting down and thinking hard can be a good way to come up with strategies for finding good sources of advice.

comment by Elo · 2014-09-16T06:59:49.639Z · LW(p) · GW(p)

been contemplating this recently. not sure if I agree or disagree. but going to come to my conclusions soon. Just need to find some time to sit down and think about it...

comment by Gunnar_Zarncke · 2014-09-15T15:24:01.983Z · LW(p) · GW(p)

This looks like two posts I saw quite a while ago where contrarian posts were also intended to be up-voted. I can't seem to find those posts (searching for contrarian doesn't match anything and searching for 'vote' is obviously useless). Nonetheless, those posts urged posters to clearly mark each contrarian comment to indicate the opposite voting semantics, so that unsuspecting readers wouldn't be misled by the votes. Maybe someone can provide the links?

Replies from: army1987, polymathwannabe
comment by A1987dM (army1987) · 2014-09-15T16:03:53.313Z · LW(p) · GW(p)

Now that there's the karma toll, using downvotes to mean anything other than ‘I don't think this comment or post belongs here’ is a bad idea. Also, now we have poll syntax.


I'd want to vote comments in this thread according to whether they're interesting or boring, regardless of whether I agree with them.

Replies from: Azathoth123, Douglas_Knight
comment by Azathoth123 · 2014-09-16T01:12:36.658Z · LW(p) · GW(p)

I really wish there was a way to suspend the toll for irrationality game posts.

comment by Douglas_Knight · 2014-09-17T06:31:13.332Z · LW(p) · GW(p)

This could have been done by each commenter creating a poll in the comment, leaving normal voting for interest.

comment by polymathwannabe · 2014-09-15T15:38:07.171Z · LW(p) · GW(p)

I only remember this one:

http://lesswrong.com/r/discussion/lw/jvg/irrationality_game_iii/

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-09-15T15:52:07.336Z · LW(p) · GW(p)

Yupp. That one. Maybe the OP could change the title to include 'irrationality' (or add a tag) to stay in the original spirit.

Replies from: gjm
comment by gjm · 2014-09-15T17:00:28.366Z · LW(p) · GW(p)

My understanding is that this is explicitly meant not to be quite the same thing as the Irrationality Game. Specifically, in the IG the idea is to find things you think are more likely to be true than the LW consensus reckons them; here (I think) the idea is to find things you think are actually likely to be true despite the LW consensus.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-09-15T18:51:11.476Z · LW(p) · GW(p)

That can only be answered by the OP.

comment by pinkocrat · 2014-10-12T16:06:06.286Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

As long as you get the gist (think in probability instead of certainty, update incrementally when new evidence comes along), there's no additional benefit to learning Bayes' Theorem.
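
For reference, the theorem being waved off here is a single line; the "update incrementally" gist corresponds to multiplying the prior by the factor $P(E \mid H)/P(E)$:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$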

comment by Jiro · 2014-09-15T15:47:15.803Z · LW(p) · GW(p)

Meta: It is easy to take a position that is held by a significant number of people and exaggerate it to the point where nobody holds the exaggerated version. Does that count as a contrarian opinion (since nobody believes the exaggerated version that was stated) or as a non-contrarian opinion (since people believe the non-exaggerated version)?

(Edit: This is not intended to be a controversial opinion. It's meta.)

Replies from: gjm
comment by gjm · 2014-09-15T16:58:39.943Z · LW(p) · GW(p)

My understanding is that the idea is to post opinions you actually hold that count as contrarian.

Replies from: Jiro
comment by Jiro · 2014-09-15T17:23:54.758Z · LW(p) · GW(p)

I was mostly thinking of the one about open borders. Hardly anyone thinks that open borders would destroy civilization, but that's an exaggerated version of "open borders are a bad idea". If I disagree that they would destroy civilization, but I agree that they are a bad idea, should I treat it as a contrarian opinion or a non-contrarian opinion?

Furthermore, it sounds like that would not qualify as "opinions you actually hold" unless the poster thought it would destroy civilization.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-16T01:16:14.329Z · LW(p) · GW(p)

Hardly anyone thinks that open borders would destroy civilization,

Really? I consider it obvious for a sufficiently strong definition of "open borders".

Also it wouldn't completely destroy civilization, because the open borders aspect would collapse before all of civilization did.

comment by ChristianKl · 2014-09-15T11:20:54.416Z · LW(p) · GW(p)

[Please read the OP before voting. Special voting rules apply.]

The truth of a statement depends on the context in which the statement is made.

Replies from: gjm, blacktrance, Creutzer, DanielLC
comment by gjm · 2014-09-15T14:59:54.410Z · LW(p) · GW(p)

I think this is uncontroversial if taken as referring to the following two things:

  • The meaning of a statement depends on its context. (E.g., "that's a red herring" might mean different things when pointing at a net full of fish and when listening to a debate.)
  • A statement may contain indexical terms whose reference varies with time or place or whatever. (E.g., one person may say "I have a headache" and another "I do not have a headache" and there is no contradiction.)

and controversial but not startlingly so if taken as referring to the following:

  • Some statements implicitly refer to a particular set of goals, values, etc. and different systems of values may be active in different contexts. (E.g., if one person says "Apple stock is a good investment" and another "Apple stock is a bad investment" there may be no contradiction; they may just have different goals and therefore different attitude to volatility, correlation with other tech stocks, etc. Or if one person says "Meringue is delicious" and another "Meringue is disgusting" they may be describing their own tastes as much as the characteristics of meringue as such.)

Are you intending to state something more than those?

Replies from: ChristianKl
comment by ChristianKl · 2014-09-15T15:17:22.726Z · LW(p) · GW(p)

There are some people who believe that there is something called objective reality and that you can check whether a statement is true in objective reality.

I say that a statement might be true in context A but false in context B.

Replies from: gjm
comment by gjm · 2014-09-15T16:35:11.922Z · LW(p) · GW(p)

I don't think you answered my question. (Perhaps because you think it's meaningless or embodies false presuppositions or something.)

Aside from the facts that (1) the same utterance can mean different things in different contexts, (2) indexical terms can refer differently in different contexts, and (3) different values and preferences may be implicit in different contexts, do you think there are further instances in which the same statement may have different truth values in different contexts?

(I think the boundary between #1 and "real" differences in truth value is rather fuzzy, which I concede might make my question unanswerable.)

Some concrete examples may be useful. The following seem like examples where one can avoid 1,2,3. Are they ones where you think the truth value might be context-dependent, and if so could you briefly explain what sort of context differences would change the truth value?

  • Simple statements about small integers. "1234 times 4321 = 5332114". Obviously could mean something else if you're talking to someone whose system of notation swaps the glyphs for 3 and 4, or who uses "times" to mean what we mean by "plus"; but beyond that?
  • Empirical statements with all terms near to being typical cases. "Freshly spilled human blood is usually red." Obviously could mean something different in an alternate world where "human" denotes some other species or in a community where "red" is used to denote short-wavelength rather than long-wavelength visible light; but beyond that?
Replies from: ChristianKl
comment by ChristianKl · 2014-09-15T23:16:31.151Z · LW(p) · GW(p)

Obviously could mean something else if you're talking to someone whose system of notation swaps the glyphs for 3 and 4, or who uses "times" to mean what we mean by "plus"; but beyond that?

The fact that you claim to get 7 digits of accuracy by multiplying two 4-digit numbers is very questionable. If I went by my physics textbook, 1234 times 4321 = 5332000 would be the preferred answer and 1234 times 4321 = 5332114 would be wrong, as the number falsely gained 3 additional digits of accuracy.

A more exotic issue is whether times is left- or right-associative. The Python PEP on matrix multiplication is quite interesting; it goes through edge cases such as whether matrix multiplication is right- or left-associative.

Obviously could mean something different in an alternate world where "human" denotes some other species or in a community where "red" is used to denote short-wavelength rather than long-wavelength visible light; but beyond that?

Red is actually quite a nice example. Does it mean #FF0000? If so, the one that my monitor displays? The one that my printer prints? Or is red not a property of an object but a property of the light, meaning light with a certain wavelength? In that case, if I light the room a certain way the colors of objects change. If it's a property of the object, what about when the object emits red light but doesn't reflect it? Alternatively, red could also be something that triggers the color receptors of humans in a specific way. In that case small DNA changes in the person who perceives red slightly alter what red means. But "human red" is even more complex because the brain does complex postprocessing after the color receptors have given a certain output.

If red means #FF0000 then is #EE0000 also red or is it obviously not red because it's not #FF0000? What do you do when someone with design experience and who therefore has many names for colors comes along and says that freshly spilled human blood is crimson rather than red? If we look up the color crimson you will find that Indiana University has IU crimson and the University of Kansas has KU crimson. Different values for crimson make it hard to decide whether or not the blood is actually colored crimson.

Depending on how you define red, mixing it with green and blue might give you white or it might give you black.

I used to naively think that I could calculate the difference between two colors by calculating the Hamilton distance of the hex values. There's even a W3C recommendation that defines the distance of colors for website design that way. It turns out you actually need a more complex formula, and I'm still not sure whether the one the folks gave me on ux.stackexchange is correct for human color perception. Of course you need to have a concept of distance if you want to say that red is #FF0000 +- X.
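
As a minimal sketch of the naive approach (my own illustration; the function names and hex values are made up for the example): summing per-channel differences of the RGB values gives a single number, but equal numbers need not correspond to equal perceived differences, which is why the more complex perceptual formulas exist.

```python
# Naive "distance" between two hex colors: sum of absolute per-channel
# differences in RGB. Simple, but not a good model of human color perception.

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def naive_distance(c1, c2):
    return sum(abs(a - b) for a, b in zip(hex_to_rgb(c1), hex_to_rgb(c2)))

print(naive_distance("#FF0000", "#EE0000"))  # 17
print(naive_distance("#0000FF", "#0000EE"))  # also 17, though the perceived
                                             # difference need not match the reds
```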

I also recently had a disagreement on LW about what colors mean when I use red to mean whatever my monitor shows me for red/#FF0000, because my monitor might not be correctly calibrated.

You might naively think that the day after September 2 is always September 3. That turns out not to be true. There's also a case where September 14 followed directly after September 2. Some people think that a minute always has 60 seconds, but the official version is that it can also sometimes have 61. It gets worse. You don't know how many leap seconds will be introduced in the next ten years; they get announced only 6 months in advance. That means it's practically impossible to build a clock that tells the time accurately down to a second in ten years. If you look closely at statements, things usually get really messy.

The US Air Force shot down a US helicopter in Iraq partly because they didn't consider helicopters to be aircraft. Most of the time you can get away with making vague statements for practical purposes, but sometimes a change in context changes the truth value of a statement and then you are screwed.

Replies from: gjm, TheAncientGeek
comment by gjm · 2014-09-15T23:49:00.511Z · LW(p) · GW(p)

Multiplication: so this looks like you're again referring to meanings being context-dependent (in this case the meaning of "= 5332114"). So far as I can see, associativity has nothing whatever to do with the point at issue here and I don't understand why you bring it up; what am I missing?

Redness: yeah, again in some contexts "red" might be taken to mean some very specific colour; and yes, colour is a really complicated business, though most of that complexity seems to me to have as little to do with the point at issue as associativity has to do with the question of what 1234x4321 is.

So: It appears to me that what you mean by saying that statements' truth values are context-dependent is that (1) their meanings are context-dependent and (2) people are often less than perfectly precise and their statements apply to cases they hadn't considered. All of which is true, but none of which seems terribly controversial. So, sorry, no upvote for contrarianism from me on this occasion :-).

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T14:23:12.397Z · LW(p) · GW(p)

Redness: yeah, again in some contexts "red" might be taken to mean some very specific colour; and yes, colour is a really complicated business, though most of that complexity seems to me to have as little to do with the point at issue as associativity has to do with the question of what 1234x4321 is.

If you are in the object oriented paradigm 1234.times(4321) is something slightly different than 1234.times(4321)

(2) people are often less than perfectly precise and their statements apply to cases they hadn't considered

In my map of the world I wouldn't formulate that statement, because you rate precision by an objective standard and I don't think that a single standard exists. Statements that people make are precisely the statements they make. My disagreement is about a fundamental issue and not simply about handling one example differently.

Replies from: gjm
comment by gjm · 2014-09-16T16:29:48.523Z · LW(p) · GW(p)

1234.times(4321) is something slightly different than 1234.times(4321)

I guess you wanted one of those to be the other way around. In any case, that's commutativity not associativity, it has nothing to do with object orientation (you could equally say that times(1234,4321) and times(4321,1234) are different), and since it happens that multiplication of numbers is commutative it again seems like a total red herring.

because you rate precision by an objective standard

I do? Please, tell me more about how I rate precision.

My disagreement is about a fundamental issue.

Which I think simply amounts to the fact that the same sentence may denote different propositions in different contexts, because meaning is context-dependent. Which I think is not at all controversial.

I may well be misinterpreting you, but in that case I think it's time for you to be clearer about what you mean. So far, all the examples we've had have been (so it seems to me) either (a) cases where meaning or reference is context-dependent but it's at least arguable that once the meaning is nailed down you have a proposition whose truth value is not context-dependent, or (b) just observations that life is complicated sometimes (September 1752, leap seconds, etc.) without any actual context-dependent proposition in sight.

As I remarked above, I'm not certain how precise one can make the distinction between context-dependent meaning and context-dependent truth value. But since (I think) intelligent thoughtful people are generally entirely unbothered by the idea of context-dependent meaning, any version of context-dependent truth value that can't be clearly distinguished from context-dependent meaning shouldn't be that controversial :-).

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T16:52:11.155Z · LW(p) · GW(p)

I guess you wanted one of those to be the other way around. In any case, that's commutativity not associativity

In Python I can override the times function of one element; that means that different elements can have slightly different times functions. As such it's important to know whether X times Y means that times is a function of the X object or of the Y object.

But you might be right that associativity is not the right word.
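
A small Python sketch of that dispatch point (the Times class here is hypothetical, purely for illustration): the interpreter tries the left operand's __mul__ first, so which object's "times function" runs depends on operand order.

```python
class Times:
    """Hypothetical wrapper whose multiplication reports which operand handled it."""

    def __init__(self, value, label):
        self.value = value
        self.label = label

    def __mul__(self, other):
        # Python tries the left operand's __mul__ first
        return "{}: {}".format(self.label, self.value * other.value)

a = Times(1234, "a's rule")
b = Times(4321, "b's rule")

print(a * b)  # a's rule: 5332114  (a.__mul__ runs)
print(b * a)  # b's rule: 5332114  (b.__mul__ runs)
```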

I do? Please, tell me more about how I rate precision.

When I see the word precision I see it as having a certain meaning that people at university taught me. You might mean something different by the term. What do you mean?

Which I think simply amounts to the fact that the same sentence may denote different propositions in different contexts, because meaning is context-dependent

In the case of color my position led to disagreement with other people on LW because I follow different heuristics about truth than other people do.

But since (I think) intelligent thoughtful people are generally entirely unbothered by the idea of context-dependent meaning, any version of context-dependent truth value that can't be clearly distinguished from context-dependent meaning shouldn't be that controversial :-).

There are many thoughtful people who put a lot of value on searching for something like objective truth. Do you deny that proposition?

Replies from: gjm
comment by gjm · 2014-09-16T19:17:21.305Z · LW(p) · GW(p)

Do you deny that proposition?

Nope. Because so far everything you've said seems perfectly compatible with "something like objective truth".

I appreciate that you consider yourself to be denying that any such thing is possible -- which is why I am interested in finding out exactly what it is you're claiming, and whether your disagreement with the seekers after objective truth is about more than terminology.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T21:00:19.562Z · LW(p) · GW(p)

I appreciate that you consider yourself to be denying that any such thing is possible -- which is why I am interested in finding out exactly what it is you're claiming, and whether your disagreement with the seekers after objective truth is about more than terminology.

I certainly do make real-life decisions based on my views that I wouldn't make if I were seeking objective truth. When making Anki cards, every one of my cards explicitly states the context in which a statement stands. Creating Anki cards forces you to think very hard about statements and how they can be true or false.

Barry Smith's "Against Fantology" might also be worth reading.

A dark force haunts much of what is most admirable in the philosophy of the last one hundred years. It consists, briefly put, in the doctrine to the effect that one can arrive at a correct ontology by paying attention to certain superficial (syntactic) features of first-order predicate logic as conceived by Frege and Russell. More specifically, fantology is a doctrine to the effect that the key to the ontological structure of reality is captured syntactically in the ‘Fa’ (or, in more sophisticated versions, in the ‘Rab’) of first-order logic, where ‘F’ stands for what is general in reality and ‘a’ for what is individual.

Replies from: gjm
comment by gjm · 2014-09-16T21:28:47.794Z · LW(p) · GW(p)

every one of my cards explicitly states the context in which a statement stands.

Again, perfectly consistent with holding that meaning rather than truth is context-dependent.

And, again, I appreciate that you would say (T) "Truth is context-dependent" rather than (M) "Meaning is context-dependent". What I'm trying to grasp is (1) whether you mean by (T) more than someone else might mean by (M) and (2) whether your reasons for holding (T) are good reasons for holding (T) rather than (M).

against Fantology

This doesn't seem to me to have much to do with whether there are objective truths; it's about what sort of things, and relationships between things, and qualities, and so forth, there are. Holding that some truths aren't well expressible in terms of first-order predicate calculus isn't the same thing as holding that there are no truths.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T22:38:00.472Z · LW(p) · GW(p)

Again, perfectly consistent

If you look at most people's Anki cards, they don't contain explicit references to context. There are reasons why that's the case. It comes from the way those people think about the nature of knowledge.

Holding that some truths aren't well expressible in terms of first-order predicate calculus isn't the same thing as holding that there are no truths.

I haven't said that there are no truths, but that truths are context-dependent. One of the issues with first-order predicate calculus is that it doesn't contain information about context.

comment by TheAncientGeek · 2014-09-17T16:18:14.235Z · LW(p) · GW(p)

One of my pet theories is that colour terms are implicitly indexical, so I don't think that what you say departs from the uncontroversial cases.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-17T17:23:17.427Z · LW(p) · GW(p)

Let's say you want to seed an AGI with knowledge.

Some humans get some gene therapy that turns their blood blue. Then the AGI reasons that those entities aren't humans anymore and therefore don't need to be treated according to the protocol for treating humans. Or it gets confused by other issues about 'red'.

The only way around this is to specifically tell the FAI the context in which the statement is true. If you look at Eliezer's post about truth you won't find a mention that context is important and needs to be passed along.

It's just not there. It's not a central part of his idea of truth. Instead he talks about probabilities of things being true.

None of my Anki cards says "this is true with probability 0.994". Instead each has a reference to context. That's because context is more important than probability for most knowledge. In the framework that Eliezer advocates, probability is of core importance.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-17T17:34:23.002Z · LW(p) · GW(p)

That isn't an example where a truth value depends on context, it's an example where making the correct deductions depends on the correct theoretical background.

However I agree that quantifying what you haven't first understood is pointless.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-17T18:22:43.614Z · LW(p) · GW(p)

The thing is that having freshly spilled blood turn out to be red is neither sufficient nor necessary for being human.

For the AGI it's necessary to distinguish sentences that tell it sufficient or necessary conditions from sentences that tell it about observations. Those two kinds of statements are different contexts.

We have plenty of smart people on LW and plenty of people who use Anki but we don't have a good Anki deck that teaches knowledge about rationality. I think that errors in thinking about knowledge and the importance of context prevent people from expressing their knowledge about rationality explicitly via a medium such as Anki cards.

I think that has a lot to do with people searching for knowledge that's objectively true in all contexts instead of focusing on knowledge that's true in one context. If you switch towards your statements just being true in one context that you explicitly define, you can suddenly start to say stuff.

When it comes to learning color terms I did have LW people roughly saying: "You can't do this, objective truth is different. Your monitor doesn't show true colors".

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2014-09-17T18:52:39.962Z · LW(p) · GW(p)

When it comes to learning color terms I did have LW people roughly saying: "You can't do this, objective truth is different. Your monitor doesn't show true colors".

That was probably me. You can define colors objectively, it's not hard. That includes colors in a digital image given a color space. However what you see on your computer monitor may (and does) differ considerably from the reference standard for a given color.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-17T19:06:33.705Z · LW(p) · GW(p)

That was probably me. You can define colors objectively, it's not hard.

You can decide on some definition. But that's not the core of the issue. The core of the issue is that I can write decent Anki cards given my context-based truth which I couldn't write if I didn't think in the context-based frame.

However what you see on your computer monitor may (and does) differ considerably from the reference standard for a given color.

As I said above, deciding on a reference standard isn't a trivial decision. The W3C standard is, for example, pretty stupid in how it defines distance between colors. I also can't easily find a defined reference standard for what #228B22 (forestgreen) is. Could you point me to a standard document that defines "the reference standard"?

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2014-09-18T13:28:59.552Z · LW(p) · GW(p)

The core of the issue is that I can write decent Anki cards given my context-based truth which I couldn't write if I didn't think in the context-based frame.

The core issue is whether your argument amounts to context-based truth or context-based meaning.

comment by Lumifer · 2014-09-17T19:09:08.273Z · LW(p) · GW(p)

Could you point me to a standard document that defines "the reference standard"?

Sure.

comment by TheAncientGeek · 2014-09-20T13:57:44.760Z · LW(p) · GW(p)

If you are crediting an AI with the ability to understand plain English, you are crediting it with a certain amount of ability to detect context in the first place.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-20T15:34:17.173Z · LW(p) · GW(p)

If you are crediting an AI with the ability to understand plain English, you are crediting it with a certain amount of ability to detect context in the first place.

I haven't said anything about plain English. If you look at Cyc's idea of red I don't think you find the notion that red means different things based on context.

comment by blacktrance · 2014-09-15T19:11:44.927Z · LW(p) · GW(p)

The full meaning of a statement depends on the context in which it is made.

comment by Creutzer · 2014-09-15T14:06:23.611Z · LW(p) · GW(p)

How is that a contrarian statement? Obviously natural language is heavily context-dependent. So what exactly do you mean when you say that?

Replies from: ChristianKl
comment by ChristianKl · 2014-09-15T14:34:23.389Z · LW(p) · GW(p)

I'm not saying something focused just on natural language. I think it's true for any statement.

If you look at http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/ there's no mention of context and how a statement can be true in context A but false in context B.

comment by DanielLC · 2014-09-15T23:02:31.941Z · LW(p) · GW(p)

Just to clarify, you mean that there is a context in which "0 = 1" is a true statement, which is not tantamount to redefining "0", "=", or "1"? That is, in some alternate universe, "0 = 1" is consistent with the axioms of Peano arithmetic?

Replies from: ChristianKl
comment by ChristianKl · 2014-09-15T23:28:20.740Z · LW(p) · GW(p)

In most cases the numbers that normal humans use don't strictly follow the Peano axioms. Most of the time the dates of a month follow Peano. Most of the time the day after September 2 is September 3. But not always.

On a computer you can't store every natural number specified by Peano in a common integer type.
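
A small illustration of that point (the numbers are arbitrary, chosen only for the example): fixed-width machine integers wrap around, so "every number has a successor" fails for them; Python's own unbounded ints only push the limit back to available memory.

```python
def add_int32(x, y):
    # simulate the wrap-around of fixed-width 32-bit signed integers
    s = (x + y) & 0xFFFFFFFF
    return s - 0x100000000 if s >= 0x80000000 else s

print(add_int32(2000000000, 2000000000))  # -294967296, not 4000000000
print(2000000000 + 2000000000)            # 4000000000: Python ints don't wrap,
                                          # but they are still bounded by memory
```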

If you start counting apples and accumulate enough of them, you suddenly have a black hole and no apples anymore.

I don't know enough about the philosophy of math to go really deep, but we recently had someone writing posts about constructivist math that also contained the notion that there are no absolute mathematical truths.

Replies from: DanielLC
comment by DanielLC · 2014-09-16T00:07:48.410Z · LW(p) · GW(p)

In that case, it sounds like you're just not a math realist. There are plenty of people who believe that Peano arithmetic somehow exists on its own. Or possibly people who have a different definition of "exist" from me. It's hard to tell the difference. But I don't think disagreeing with that is all that unusual.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-16T00:32:21.487Z · LW(p) · GW(p)

In that case, it sounds like you're just not a math realist.

I'm not good enough at math to confidently answer that question. I'm good enough at math to know that people debate whether or not something like infinitely small numbers exists.

I don't care primarily about math. I see math as a tool. I'm happy that there are some people who build useful math and I'm happy to use it when convenient but it's not central for me.

comment by [deleted] · 2014-09-26T02:06:37.251Z · LW(p) · GW(p)

While I am not pro-wireheading and I expect this to be only a semi-contrarian position here...

Happiness is actually far more important than people give it credit for, as a component of a reflectively coherent human utility function. About two thirds of all statements made of the form, "$HIGHER_ORDER_VALUE is more important than being happy!" are reflectively incoherent and/or pure status-signaling. The basic problem that needs addressing is the distinction between simplistic pleasures and a genuinely happy life full of variety, complexity, and subtlety, but the signaling games keep this otherwise obvious distinction from entering the conversation, simply because happiness of all kinds is signaled to be low-status.

Replies from: DanielLC
comment by DanielLC · 2014-09-27T00:24:34.815Z · LW(p) · GW(p)

What do you mean by "pure signalling"? You evolve to show the signal. Whether the actual mental process that makes the signal is a cost-benefit analysis on lying or actually believing what you say doesn't matter. Does evolving to automatically feel more empathy for children as a way of signalling that you'd be a good parent count as "pure signalling"?

comment by Princess_Stargirl · 2014-09-18T20:25:17.135Z · LW(p) · GW(p)

The United States prison system is a tragedy on par with or exceeding the horror of the Soviet gulags. In my opinion the only legitimate reason for incarcerating people is to prevent crime. The USA currently has 7 times the OECD average number of prisoners and crime rates similar to the OECD average. 6/7 of the US penal system population is a little over 2 million people. If we are unnecessarily incarcerating anywhere close to 2 million people right now, then the USA is a morally hellish country.

Note: less than half of the inmates in the USA are there for drug-related charges. It is very close to 50% federally but less at the state level. Immediately pardoning all drug offenders only gets us down to 3.5 times the OECD average.

Replies from: shminux, Azathoth123
comment by Shmi (shminux) · 2014-09-18T23:24:46.498Z · LW(p) · GW(p)

This seems close to the (liberal) mainstream. Why do you think it is contrarian on LW?

Replies from: Princess_Stargirl
comment by Princess_Stargirl · 2014-09-19T01:28:19.050Z · LW(p) · GW(p)

I do not think most people consider this a problem on a par with the Soviet Gulag. Though possibly I am wrong.

Replies from: Lumifer
comment by Lumifer · 2014-09-19T02:14:08.287Z · LW(p) · GW(p)

The problem with the Soviet Gulag wasn't so much its size, but rather the whole system it was part of and things which got you sent to it.

comment by Azathoth123 · 2014-09-18T23:33:08.527Z · LW(p) · GW(p)

Is your claim that they're in prison for crimes they didn't commit, or that we should let more crimes go unpunished?

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2014-09-19T02:17:47.625Z · LW(p) · GW(p)

I'm not the OP, but I'll throw a quote into this thread:

There's no way to rule innocent men. The only power any government has is the power to crack down on criminals. Well, when there aren't enough criminals, one makes them. One declares so many things to be a crime that it becomes impossible for men to live without breaking laws.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T05:58:35.494Z · LW(p) · GW(p)

So which crimes would you take off the books and what percent of prisoners would that remove?

Replies from: Lumifer
comment by Lumifer · 2014-09-19T14:34:33.010Z · LW(p) · GW(p)

We can start with the drug war, things like civil forfeiture, and go on from there. You might be interested in this book.

The problems with the US criminal justice system go much deeper than just the abundance of laws, of course.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-19T23:20:31.301Z · LW(p) · GW(p)

things like civil forfeiture

Civil forfeiture doesn't fill prisons.

You might be interested in this book.

The problem with having too many felonies is not that prisons get filled with people being punished for silly things; it's that the people who do get punished for silly things tend to correlate with the people actively opposing the current administration.

Replies from: Lumifer
comment by Lumifer · 2014-09-20T00:41:58.128Z · LW(p) · GW(p)

There are a LOT of problems with having too many felonies, but that's a large discussion not quite in the LW bailiwick...

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T01:57:00.307Z · LW(p) · GW(p)

Agreed, but the discussion was about there supposedly being too many people in prison.

comment by TheAncientGeek · 2014-09-20T16:11:03.403Z · LW(p) · GW(p)

False dichotomy. It's about sentence length, e.g. three strikes.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T18:57:06.626Z · LW(p) · GW(p)

So if we reduced sentences, what effect do you think that would have on crime rates? Remember, three strikes was passed in response to crime rates being too high.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-20T21:03:27.725Z · LW(p) · GW(p)

Drastically increasing sentences didn't drastically reduce crime, so...

Comparable countries have lower crime rates and lower prison populations, so they must be doing something right.

You don't have to keep moving the big lever up and down: you can get Smart on Crime.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-20T22:15:57.835Z · LW(p) · GW(p)

Drastically increasing sentences didn't drastically reduce crime, so...

Well, crime did fall. Whether that was due to increased sentences or something else is still being debated.

Comparable countries have lower crime rates and lower prison populations,

They also have fewer people from populations with high predisposition to violence (and yes, I mean blacks).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-09-20T23:02:12.815Z · LW(p) · GW(p)

The last was disappointingly predictable.

comment by VAuroch · 2014-09-17T05:13:52.632Z · LW(p) · GW(p)

Provably-secure computing is undervalued as a mechanism for guaranteeing Friendliness from an AI.

Replies from: somnicule
comment by somnicule · 2014-09-17T11:42:33.425Z · LW(p) · GW(p)

I'm not sure what you mean by provably-secure, care to elaborate?

It sounds like it might possibly be required and is certainly not sufficient.

Replies from: VAuroch
comment by VAuroch · 2014-09-17T19:12:43.337Z · LW(p) · GW(p)

Provably-secure computing is a means by which you have a one-to-one mapping between your code and a proof that the results will not give you bad outcomes relative to a certain specification. The standard method is to implement a very simple language and prove that it works as a formal verifier, use this language to write a more complex formal verifying language, possibly recurse that, and then use the final verifying language to write programs that specify start conditions and guarantee that, given those conditions, outcomes will be confined to a specified set.

It seems to be taken for granted on LW and within MIRI that this does not provide much value, because we cannot trust the proofs to describe the actual effects of the programs, and therefore it's discounted entirely as a useful technique. I think it would substantially reduce the difficulty of the problem that needs to be solved, for a fairly minor cost.
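For flavor, here's the kind of thing I mean at toy scale (a sketch in Lean 4; the names `double` and `double_is_even` are made up for illustration, and this is not anyone's actual FAI methodology): you write a small program and the proof checker mechanically verifies a guarantee about its outputs.

```lean
-- A toy sketch: a tiny "program" plus a machine-checked guarantee about it.
def double (n : Nat) : Nat := n + n

-- Guarantee: every output of `double` lands in the set of even numbers.
theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```

Real verified artifacts such as the seL4 kernel and the CompCert compiler scale this idea up enormously; the open question, as the reply below notes, is whether the specification you prove things against captures what you actually care about.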

Replies from: ChristianKl
comment by ChristianKl · 2014-09-19T21:09:07.943Z · LW(p) · GW(p)

it's discounted entirely as a useful technique

I don't think it's true that it's generally considered not useful. One of MIRI's interviews was with a person engaged in provably-secure computing, and I didn't see any issues in that post. It's just that provably-secure computing is not enough when you don't have a good specification.

comment by Ander · 2014-09-17T19:44:55.912Z · LW(p) · GW(p)

Bitcoin and a few different altcoins can all coexist in the future and each have significant value, each fulfilling different functions based on their technical details.

Replies from: Nornagest
comment by Nornagest · 2014-09-17T20:23:45.814Z · LW(p) · GW(p)

That doesn't seem especially contrarian to me, given the base premise that cryptocurrency has legs in the first place. At the very least, it seems obvious that easy-to-trace and difficult-to-trace transaction systems have different and complementary niches.

Replies from: Ander
comment by Ander · 2014-09-17T20:37:33.441Z · LW(p) · GW(p)

I thought it was contrarian, but perhaps I am wrong? I've seen plenty of 'every altcoin is worthless, don't ever buy any' comments in discussions in the past.

Replies from: lmm
comment by lmm · 2014-09-18T07:03:25.989Z · LW(p) · GW(p)

I think it's a small (but loud and motivated) group of bitcoin fans that think that, with most people taking your position (at least conditional on the statement that any cryptocurrency is useful at all).

comment by pragmatist · 2014-09-15T13:01:53.193Z · LW(p) · GW(p)

A whale is a big fish, not a mammal.

...

(OK, I'll admit that this is just a blatant karma-grab.)

ETA: An unsuccessful karma-grab, apparently. Curses! Oh well, back to the drawing board.

Replies from: shminux
comment by Shmi (shminux) · 2014-09-15T17:49:27.445Z · LW(p) · GW(p)

OP asks for honest opinions, something you actually believe in.

Replies from: pragmatist
comment by pragmatist · 2014-09-15T18:45:26.021Z · LW(p) · GW(p)

Yeah, my comment was an attempt at humor. Tough crowd, I guess.