Open thread, 11-17 August 2014
post by David_Gerard · 2014-08-11T10:12:57.465Z · LW · GW · Legacy · 274 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
274 comments
Comments sorted by top scores.
comment by Filipe · 2014-08-11T20:41:38.415Z · LW(p) · GW(p)
Economist Scott Sumner at Econlog heavily praised Yudkowsky and the quantum physics sequence, and applied its lessons to economics. Excerpts:
I've recently been working my way through a long set of 2008 blog posts by Eliezer Yudkowsky. It starts with an attempt to make quantum mechanics seem "normal," and then branches out into some interesting essays on philosophy and science. I'm nowhere near as smart as Yudkowsky, so I can't offer any opinion on the science he discusses, but when the posts touched on epistemological issues his views hit home.
and
I used to have a prejudice against math/physics geniuses. I thought when they were brilliant at high level math and theory; they were likely to have loony opinions on complex social science issues. Conspiracy theories. Or policy views that the government should wave a magic wand and just ban everything bad. Now that I've read Robin Hanson, Eliezer Yudkowsky and David Deutsch, I realize that I've got it wrong. A substantial number of these geniuses have thought much more deeply about epistemological issues than the average economist. So when Hanson says we put far too little effort into existential risks, or even lesser but still massive threats like solar flares, and Yudkowsky says cryonics is under-appreciated, or when they say AI (or brain ems) is coming faster than we think and will have far more profound effects than we realize, I'm inclined to take them very seriously.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-08-13T07:06:05.151Z · LW(p) · GW(p)
Reading the comments... one commenter objects to MWI in a way which I would summarize as: "MWI provides identical experimental predictions to CI, which makes it useless, and also MWI provides wrong experimental predictions (unlike CI), which makes it wrong".
The author immediately detects the contradiction:
You know more about it than me. But if it's just equations and you can't empirically test which interpretation is true, then why does the Casimir force make MWI less likely?
Another commenter says that MWI has greater conceptual complexity, and that while it is more useful for exploring algorithmic possibilities on quantum computers abstractly, CI wins because it is about the real world.
Then the former commenter says (in reaction to the author) that MWI didn't provide useful predictions, and that the Casimir force can only be explained by quantum equations and not by classical physics.
(Why exactly is that supposed to be an argument against MWI? No idea. Also, if MWI doesn't provide useful predictions, how can it be useful for studying quantum computers? Does it mean that quantum computers are never going to work in, you know, real life?)
Finally, yet another commenter explains things from the MWI point of view, saying that "observers" must follow the same fundamental physics as rocks.
comment by sixes_and_sevens · 2014-08-11T14:57:22.715Z · LW(p) · GW(p)
What sophisticated ideas did you come up with independently before encountering them in a more formal context?
I'm pretty sure that in my youth I independently came up with rudimentary versions of the anthropic principle and the Problem of Evil. Looking over my Livejournal archive, I was clearly not a fearsome philosophical mind in my late teens (or now, frankly), so it seems safe to say that these ideas aren't difficult to stumble across.
While discussing this at the most recent London Less Wrong meetup, another attendee claimed to have independently arrived at Pascal's Wager. I've seen a couple of different people speculate that cultural and ideological artefacts are subject to selection and evolutionary pressures without ever themselves having come across memetics as a concept.
I'm still thinking about ideas we come up with that stand to reason. Rather than prime you all with the hazy ideas I have about the sorts of ideas people converge on while armchair-theorising, I'd like to solicit some more examples. What ideas of this sort did you come up with independently, only to discover they were already "a thing"?
Replies from: Adele_L, Ander, polymathwannabe, Metus, None, Username, James_Miller, sediment, niceguyanon, bramflakes, HopefullyCreative, Unnamed, None, TylerJay, RomeoStevens, wadavis, lmm, moridinamael, 2ZctE, Gvaerg, None, Alicorn, Curiouskid, Dahlen, iarwain1, Luke_A_Somers, ShardPhoenix, sediment, VAuroch, solipsist
↑ comment by Adele_L · 2014-08-11T16:13:36.766Z · LW(p) · GW(p)
When I was a teenager, I imagined that if you had just a tiny infinitesimally small piece of a curve - there would only be one moral way to extend it. Obviously, an extension would have to be connected to it, but also, you would want it to connect without any kinks. And just having straight lines connected to it wouldn't be right, it would have to be curved in the same sort of way - and so on, to higher-and-higher orders. Later I realized that this is essentially what a Taylor series is.
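For reference, the object being described here is the Taylor series of a function f around a point a, which rebuilds the curve from its value and its derivatives of every order at that single point:

f(x) = f(a) + f'(a)(x - a) + f''(a)(x - a)^2/2! + f'''(a)(x - a)^3/3! + ...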
I also had this idea when I was learning category theory that objects were points, morphisms were lines, composition was a triangle, and associativity was a tetrahedron. It's not especially sophisticated, but it turns out this idea is useful for n-categories.
Recently, I have been learning about neural networks. I was working on implementing a fairly basic one, and I had a few ideas for improving neural networks: making them more modular - so neurons in the next layer are only connected to a certain subset of neurons in the previous layer. I read about V1, and together, these led to the idea that you arrange things so they take into account the topology of the inputs - so for image processing, having neurons connected to small, overlapping circles of inputs. Then I realized you would want multiple neurons with the same inputs that were detecting different features, and that you could reuse training data for neurons with different inputs detecting the same feature - saving computation cycles. So for the whole network, you would build up from local to global features as you applied more layers - which suggested that sheaf theory may be useful for studying these. I was planning to work out details, and try implementing as much of this as I could (and still intend to as an exercise), but the next day I found that this was essentially the idea behind convolutional neural networks. I'm rather pleased with myself since CNNs are apparently state-of-the-art for many image recognition tasks (some fun examples). The sheaf theory stuff seems to be original to me though, and I hope to see if applying Goguen's sheaf semantics would be useful/interesting.
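To make the local-receptive-field and weight-sharing idea concrete, here is a minimal sketch (my own illustration, not Adele_L's code; names and sizes are made up) of a single convolutional feature detector slid over an image:

```python
import numpy as np

def conv2d_single_filter(image, kernel):
    """Valid 2D convolution: one shared filter applied to every local patch."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kH, j:j + kW]   # small, overlapping receptive field
            out[i, j] = np.sum(patch * kernel)  # the same weights reused at every location
    return out

image = np.random.rand(8, 8)
filters = [np.random.randn(3, 3) for _ in range(4)]  # several detectors for different features
feature_maps = [conv2d_single_filter(image, k) for k in filters]
```

Stacking such layers builds up from local to global features, which is the progression described above.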
I really wish I was better at actually implementing/working out the details of my ideas. That part is really hard.
Replies from: HopefullyCreative, Luke_A_Somers
↑ comment by HopefullyCreative · 2014-08-13T07:21:50.592Z · LW(p) · GW(p)
I had to laugh at your conclusion. The implementation is the most enjoyable part. "How can I dumb this amazing idea down to the most basic understandable levels so it can be applied?" Sometimes you come up with a solution only to have a feverish fit of maddening genius weeks later finding a BETTER solution.
In my first foray into robotics I needed to write a radio positioning program/system for the little guys, so they would all know where they were, not globally but relative to each other and the work site. I was completely unable to find the math simply spelled out online, and I have to admit that at this point in my life I was a former Marine who was not quite up to college-level math. After banging my head against the table for hours I came up with an initial solution that found a position accounting for three dimensions (allowing for the target object to be in any position relative to the stationary receivers). Eventually I came up with an even better solution, which also suggested new ideas for the robot's antenna design and therefore let me tweak the solution even more.
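For readers curious what such a positioning calculation can look like, here is a minimal sketch of standard linearized trilateration (my own illustration, not the commenter's actual solution; it assumes distances to at least four non-coplanar receivers at known coordinates are already available, e.g. from signal timing):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 3D position from distances to four or more non-coplanar known points."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    # Subtracting the first sphere equation from the others leaves linear equations in x.
    A = 2.0 * (anchors[1:] - p0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
true_pos = np.array([3.0, 4.0, 5.0])
distances = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, distances))  # ~[3. 4. 5.]
```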
That was some of the most fun I have ever had...
↑ comment by Luke_A_Somers · 2014-08-12T15:08:05.167Z · LW(p) · GW(p)
I did the Taylor series thing too, though with s/moral/natural/
↑ comment by Ander · 2014-08-11T19:27:08.211Z · LW(p) · GW(p)
I came up with the idea of a Basic Income by myself, by chaining together some ideas:
Capitalism is the most efficient economic system for fulfilling the needs of people, provided they have money.
The problem is that if lots of people have no money, and no way to get money (or no way to get it without terrible costs to themselves), then the system does not fulfill their needs.
In the future, automation will both increase economic capacity and raise the barrier to having a 'valuable skill' that allows you to get money. Society will have an improved capacity to fulfill the needs of people with money, yet the barrier to having useful skills and being able to get money will increase. This leads to a scenario where society could easily produce the items needed by everyone, yet does not, because many of those people have no money to pay for them.
If X% of the benefits accrued from ownership of the capital were taken and redistributed evenly among all humans, then the problem is averted. Average people still have some source of money with which they can purchase the fulfillment of their needs, which are pretty easy to supply in this advanced future society.
X=100%, as in a strict socialism, is not correct, as then we get the economic failures we saw in the socialist experiments of the past century.
X = 0%, as in a strict libertarianism, is not correct, as then everyone whose skills are automated starves.
At X = some reasonable number, capitalism still functions correctly (that is, it works today with our current tax levels), and hopefully in our economically progressed future society it provides everyone with sufficient money to supply basic needs.
Eventually I found out that my idea was pretty much a Basic Income system.
↑ comment by polymathwannabe · 2014-08-11T18:51:36.106Z · LW(p) · GW(p)
Once a Christian friend asked me why I cared so much about what he believed. Without thinking, I came up with, "What you think determines what you choose. If your idea of the world is inaccurate, your choices will fail."
This was years before I found LW and learned about the connection between epistemic and instrumental rationality.
P.S. My friend deconverted himself some years afterwards.
↑ comment by Metus · 2014-08-11T15:22:54.271Z · LW(p) · GW(p)
This is not a direct answer: every time I come up with an idea in a field I am not very deeply involved in, sooner or later I realise that the phenomenon is either trivial, a misperception, or very well studied. Most recently this happened with pecuniary externalities.
↑ comment by [deleted] · 2014-08-12T05:09:25.009Z · LW(p) · GW(p)
Came up with the RNA-world hypothesis on my own when reading about the structure and function of ribosomes in middle school.
Decided long ago that there was a conflict between the age of the universe and the existence of improvements in space travel, meaning that beings such as us would never be able to reach self-replicating interstellar travel. Never came to the conclusion that it meant extinction at all, and am still quite confused by people who assume it's interstellar metastasis or bust.
↑ comment by Username · 2014-08-11T21:34:48.612Z · LW(p) · GW(p)
Derivatives. I imagined tangent lines traveling along a function curve and thought, 'I wonder what it looks like when we measure that?' And so I would try to visualize the changing slopes of the tangent lines at the same time. I also remember wondering how to reverse it. Obviously didn't get farther than that, but I remember being very surprised when I took calculus and realizing that the mind game I had been playing was hugely important and widespread, and could in fact be calculated.
↑ comment by James_Miller · 2014-08-11T17:40:57.143Z · LW(p) · GW(p)
For as long as I can remember, I had the idea of a computer upgrading its own intelligence and getting powerful enough to make the world a utopia.
↑ comment by sediment · 2014-08-12T11:08:13.531Z · LW(p) · GW(p)
Oh, another thing: I remember thinking that it didn't make sense to favour either the many worlds interpretation or the copenhagen interpretation, because no empirical fact we could collect could point towards one or the other, being as we are stuck in just one universe and unable to observe any others. Whichever one was true, it couldn't possibly impact on one's life in any way, so the question should be discarded as meaningless, even to the extent that it didn't really make sense to talk about which one is true.
This seems like a basically positivist or postpositivist take on the topic, with shades of Occam's Razor. I was perhaps around twelve. (For the record, I haven't read the quantum mechanics sequence and this remains my default position to this day.)
↑ comment by niceguyanon · 2014-08-11T19:59:32.905Z · LW(p) · GW(p)
In 6th or 7th grade I told my class that it was obvious that purchasing expensive sneakers is mostly just a way to show how cool you are or that you can afford something that not everyone else could. Many years later I would read about signalling http://en.wikipedia.org/wiki/Signalling_(economics)
The following are not ideas as much as questions I had while growing up, and I was surprised/relieved/happy to find out that other people much smarter than me spent a lot of time thinking about them, and that each is "a thing". For example I really wanted to know if there was a satisfactory way to figure out if Christianity was the one true religion, and it bothered me very much that I could not answer that question. Also, I was concerned that the future might not be what I want it to be, and I am not sure that I know what I even want. It turns out that this isn't a unique problem and there are many people thinking about it. Also, what the heck is consciousness? Is there one correct moral theory? Well, someone is working on it.
↑ comment by bramflakes · 2014-08-11T15:44:31.528Z · LW(p) · GW(p)
At school my explanation for the existence of bullies was that it was (what I would later discover was called) a Nash equilibrium.
↑ comment by HopefullyCreative · 2014-08-13T07:07:05.378Z · LW(p) · GW(p)
I had drawn up some rather detailed ideas for an atomic-powered future. The idea was to solve two major problems. The first was the inherent risk of an overpressure causing such a power plant to explode. The second problem to solve was the looming water shortage facing many nations.
The idea was a power plant that used internal Stirling technology so as to operate at atmospheric pressure. Reinforcing this idea was basically a design for the reactor to "entomb" itself if it reached temperatures high enough to melt its shell. The top of the Stirling engine would have a salt water reservoir that would be boiled off. The water would then be collected and directed through a piping system to a reservoir. The plant would thus produce both electricity AND fresh water.
Of course, then while researching thorium power technology in school I discovered that the South Korean SMART micro reactor does in fact desalinate water. On one level I was depressed that my idea was not "original"; however, overall I'm excited that I came up with an idea that apparently had enough merit for people to actually go through and make a finished design based upon it. The fact that my idea had merit at all gives me hope for my future as an engineer.
↑ comment by Unnamed · 2014-08-12T06:19:34.806Z · LW(p) · GW(p)
I'm another independent discoverer of something like utilitarianism, I think when I was in elementary school. My earliest written record of it is from when I was 15, when I wrote: "Long ago (when I was 8?), I said that the purpose of life was to enjoy yourself & to help others enjoy themselves - now & in the future."
In high school I did a fair amount of thinking (with relatively little direct outside influence) about Goodhart's law, social dilemmas, and indirect utilitarianism. My journal from then includes versions of ideas like the "one thought too many" argument, decision procedures vs. criteria for good, tradeoffs between following an imperfect system and creating exceptions to do better in a particular case, and expected value reasoning about small probabilities of large effects (e.g. voting).
On religion, I thought of the problem of evil (perhaps with outside influence on that one) and the Euthyphro argument against divine command theory.
16-year-old me also came up with various ideas related to rationality / heuristics & biases, like sunk costs ("Once you’re in a place, it doesn’t matter how you got there (except in mind - BIG exception)"), selection effects ("Reason for coincidence, etc. in stories - interesting stories get told, again & again"), and the importance of epistemic rationality ("Greatest human power - to change ones mind").
↑ comment by [deleted] · 2014-08-11T15:36:15.623Z · LW(p) · GW(p)
I've found that ideas that affect me most fall into two major categories: either they are ideas that hit me completely unprepared or they are ideas that I knew all along but had not formalized. Many-Worlds and timelessness were the former for me. Utilitarianism and luminosity were the latter.
↑ comment by TylerJay · 2014-08-12T17:19:39.722Z · LW(p) · GW(p)
After learning the very basics of natural selection, I started thinking about goal systems and reward circuits and ethics. I thought that all of our adaptations were intended to allow us to meet our survival needs so we could pass on our genes. But what should people do once survival needs are met? What's the next right and proper goal to pursue? The Googling prompted by that line of reasoning led me to Eliezer's Levels of Intelligence paper, which in turn led me to Less Wrong.
Reading through the sequences, I found so many of the questions that I'd thought about in vague philosophical terms explained and analyzed rigorously, like personal identity vs continuity of subjective experience under things like teleportation. Part of the reason LW appealed to me so much back then is, I suspect, that I had already thought about so many of the same questions but just wasn't able to frame them correctly.
↑ comment by RomeoStevens · 2014-08-12T04:02:17.768Z · LW(p) · GW(p)
This made me curious enough to skim through my childhood writing. Convergent and divergent infinite series, quicksort, public choice theory, pulling the rope sideways, normative vs positive statements, curiosity stoppers, the overton window.
My Moloch moment is what led me to seek out Overcomingbias.
↑ comment by wadavis · 2014-08-11T20:36:45.040Z · LW(p) · GW(p)
Tangent thread: What sophisticated idea are you holding on to that you are sure has been formalized somewhere but haven't been able to find?
I'll go first: When called to explain and defend my ethics I explained I believe in "Karma. NO, not that BS mysticism Karma, but plain old actions-have-consequences-in-our-very-connected-world kind of Karma." If you treat people in a manner of honesty and integrity in all things, you will create a community of cooperation. The world is strongly interconnected and strongly adaptable, so the benefits will continue outside your normal community, or if you frequently change communities. The lynchpin assumption of these beliefs is that if I create One Unit of Happiness for others, it will self-propagate, grow and reflect, returning me more than One Unit of Happiness over the course of my lifetime. The same applies for One Unit of Misery.
I've only briefly studied ethics and philosophy; can someone better read point me to the above in a formal context?
Replies from: iarwain1, None, buybuydandavis
↑ comment by iarwain1 · 2014-08-13T15:19:45.448Z · LW(p) · GW(p)
This seems like a good place to ask about something that I'm intensely curious about but haven't yet seen discussed formally. I've wanted to ask about it before, but I figured it's probably an obvious and well-discussed subject that I just haven't gotten to yet. (I only know the very basics of Bayesian thinking, I haven't read more than about 1/5 of the sequences so far, and I don't yet know calculus or advanced math of any type. So there are an awful lot of well-discussed LW-type subjects that I haven't gotten to yet.)
I've long conceived of Bayesian belief statements in the following (somewhat fuzzily conceived) way: Imagine a graph where the x-axis represents our probability estimate for a given statement being true and the y-axis represents our certainty that our probability estimate is correct. So if, for example, we estimate a probability of .6 for a given statement to be true but we're only mildly certain of that estimate, then our belief graph would probably look like a shallow bell curve centered on the .6 mark of the x-axis. If we were much more certain of our estimate then the bell curve would be much steeper.
I usually think of the height of the curve at any given point as representing how likely I think it is that I'll discover evidence that will change my belief. So for a low bell curve centered on .6, I think of that as meaning that I'd currently assign the belief a probability of around .6 but I also consider it likely that I'll discover evidence (if I look for it) that can change my opinion significantly in any direction.
I've found this way of thinking to be quite useful. Is this a well-known concept? What is it called and where can I find out more about it? Or is there something wrong with it?
Replies from: Lumifer, Anders_H
↑ comment by Lumifer · 2014-08-13T15:45:11.909Z · LW(p) · GW(p)
Imagine a graph where the x-axis represents our probability estimate for a given statement being true and the y-axis represents our certainty that our probability estimate is correct. So if, for example, we estimate a probability of .6 for a given statement to be true but we're only mildly certain of that estimate, then our belief graph would probably look like a shallow bell curve
I don't understand where the bell curve is coming from. If you have one probability estimate for a given statement with some certainty about it, you would depict it as a single point on your graph.
The bell curves in this context usually represent probability distributions. The width of that probability distribution reflects your uncertainty. If you're certain, the distribution is narrow and looks like a spike at the estimate value. If you're uncertain, the distribution is flat(ter). Probability distributions have to sum to 1 under the curve, so the smaller the width of the distribution, the higher the spike is.
How likely you are to discover new evidence is neither here nor there. Even if you are very uncertain of your estimate, this does not convert into the probability of finding new evidence.
Replies from: iarwain1
↑ comment by iarwain1 · 2014-08-13T16:17:34.228Z · LW(p) · GW(p)
I think you're referring to the type of statement that can have many values. Something like "how long will it take for AGI to be developed?". My impression (correct me if I'm wrong) is that this is what's normally graphed with a probability distribution. Each possible value is assigned a probability, and the result is usually more or less a bell curve with the width of the curve representing your certainty.
I'm referring to a very basic T/F statement. On a normal probability distribution graph that would indeed be represented as a single point - the probability you'd assign to it being true. But we're often not so confident in our assessment of the probability we've assigned, and that confidence is what I was trying to represent with the y-axis.
An example might be, "will AGI be developed within 30 years"? There's no range of values here, so on a normal probability distribution graph you'd simply assign a probability and that's it. But there's a very big difference between saying "I really have not the slightest clue, but if I really must assign it a probability than I'd give it maybe 50%" vs. "I've researched the subject for years and I'm confident in my assessment that there's a 50% probability".
In my scheme, what I'm really discussing is the probability distribution of probability estimates for a given statement. So for the 30-year AGI question, what's the probability that you'd consider a 10% probability estimate to be reasonable? What about a 90% estimate? The probability that you'd assign to each probability estimate is depicted as a single point on the graph and the result is usually more or less a bell curve.
How likely you are to discover new evidence is neither here nor there. Even if you are very uncertain of your estimate, this does not convert into the probability of finding new evidence.
You're probably correct about this. But I've found the concept of the kind of graph I've been describing to be intuitively useful, and saying that it represents the probability of finding new evidence was just my attempt at understanding what such a graph would actually mean.
Replies from: Lumifer, Azathoth123
↑ comment by Lumifer · 2014-08-13T16:36:13.085Z · LW(p) · GW(p)
In my scheme, what I'm really discussing is the probability distribution of probability estimates for a given statement.
OK, let's rephrase it in the terms of Bayesian hierarchical models. You have a model of event X happening in the future which says that the probability of that event is Y%. Y is a parameter of your model. What you are doing is giving a probability distribution for a parameter of your model (in the general case this distribution can be conditional, which makes it a meta-model, so hierarchical). That's fine, you can do this. In this context the width of the distribution reflects how precise your estimate of the lower-level model parameter is.
The only thing is that for unique events ("will AGI be developed within 30 years") your hierarchical model is not falsifiable. You will get a single realization (the event will either happen or it will not), but you will never get information on the "true" value of your model parameter Y. You will get a single update of your prior to a posterior and that's it.
Is that what you have in mind?
Replies from: iarwain1
↑ comment by iarwain1 · 2014-08-13T17:08:48.701Z · LW(p) · GW(p)
I think that is what I had in mind, but it sounds from the way you're saying it that this hasn't been discussed as a specific technique for visualizing belief probabilities.
That surprises me since I've found it to be very useful, at least for intuitively getting a handle on my confidence in my own beliefs. When dealing with the question of what probability to assign to belief X, I don't just give it a single probability estimate, and I don't even give it a probability estimate with the qualifier that my confidence in that probability is low/moderate/high. Rather I visualize a graph with (usually) a bell curve peaking at the probability estimate I'd assign and whose width represents my certainty in that estimate. To me that's a lot more nuanced than just saying "50% with low confidence". It has also helped me to communicate to others what my views are for a given belief. I'd also suspect that you can do a lot of interesting things by mathematically manipulating and combining such graphs.
Replies from: Lumifer
↑ comment by Lumifer · 2014-08-13T17:19:00.559Z · LW(p) · GW(p)
One problem is that it's turtles all the way down.
What's your confidence in your confidence probability estimate? You can represent that as another probability distribution (or another model, or a set of models). Rinse and repeat.
Another problem is that it's hard to get reasonable estimates for all the curves that you want to mathematically manipulate. Of course you can wave hands and say that a particular curve exactly represents your beliefs and no one can say it ain't so, but fake precision isn't exactly useful.
↑ comment by Azathoth123 · 2014-08-14T03:54:28.115Z · LW(p) · GW(p)
I'm referring to a very basic T/F statement. On a normal probability distribution graph that would indeed be represented as a single point - the probability you'd assign to it being true. But we're often not so confident in our assessment of the probability we've assigned, and that confidence is what I was trying to represent with the y-axis.
Taken literally, the concept of "confidence in a probability" is incoherent. You are probably confusing it with one of several related concepts. Lumifer has described one example of such a concept.
Another concept is how much you think your probability estimate will change as you encounter new evidence. For example, your estimate for whether the outcome of the coin flip for the 2050 Superbowl will be heads is 1/2, and you are unlikely to encounter evidence that changes it (until 2050, that is). On the other hand, your estimate for the probability of AI being developed by 2050 is likely to change a lot as you encounter more evidence.
Replies from: VAuroch, iarwain1
↑ comment by VAuroch · 2014-08-14T07:26:31.002Z · LW(p) · GW(p)
I don't know, I think the existence of the 2050 Superbowl is significantly less than 100% likely.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2014-08-14T10:33:34.056Z · LW(p) · GW(p)
What's your line of thought?
Replies from: VAuroch
↑ comment by VAuroch · 2014-08-14T21:32:11.518Z · LW(p) · GW(p)
It wouldn't be the first time a sport has gone from vastly popular to mostly forgotten within 40 years. Jai alai was the particular example I had in mind; it was once incredibly popular, but quickly descended to the point where it's basically entirely forgotten.
↑ comment by iarwain1 · 2014-08-14T14:10:37.231Z · LW(p) · GW(p)
Taken literally, the concept of "confidence in a probability" is incoherent.
Why? I thought the way Lumifer expressed it in terms of Bayesian hierarchical models was pretty coherent. It might be turtles all the way down as he says, and it might be hard to use it in a rigorous mathematical way, but at least it's coherent. (And useful, in my experience.)
Another concept is how much you think your probability estimate will change as you encounter new evidence.
This is pretty much what I meant in my original post by writing:
I usually think of the height of the curve at any given point as representing how likely I think it is that I'll discover evidence that will change my belief. So for a low bell curve centered on .6, I think of that as meaning that I'd currently assign the belief a probability of around .6 but I also consider it likely that I'll discover evidence (if I look for it) that can change my opinion significantly in any direction.
But expressing it in terms of how likely my beliefs are to change given more evidence is probably better. Or to say it in yet another way: how strong new evidence would need to be for me to change my estimate.
It seems like the scheme I've been proposing here is not a common one. So how do people usually express the obvious difference between a probability estimate of 50% for a coin flip (unlikely to change with more evidence) vs. a probability estimate of 50% for AI being developed by 2050 (very likely to change with more evidence)?
↑ comment by Anders_H · 2014-08-13T16:48:13.251Z · LW(p) · GW(p)
I believe you may be confusing the "map of the map" for the "map".
If I understand correctly, you want to represent your beliefs about a simple yes/no statement. If that is correct, the appropriate distribution for your prior is Bernoulli. For a Bernoulli distribution, the X axis only has two possible values: True or False. The Bernoulli distribution will be your "map". It is fully described by the parameter "p".
If you want to represent your uncertainty about your uncertainty, you can place a hyperprior on p. This is your "map of the map". Generally, people will use a beta distribution for this (rather than a bell-shaped normal distribution). With such a hyperprior, p is on the X-axis and ranges from 0 to 1.
I am slightly confused about this part, but it is not clear to me that we gain much from having a "map of the map" in this situation, because no matter how uncertain you are about your beliefs, the hyperprior will imply a single expected value for p.
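To make the hyperprior idea concrete, here is a minimal sketch (my own illustration, not from the thread) using SciPy: two Beta distributions over the Bernoulli parameter p, both with expected value 0.5, that respond very differently to new evidence. This is one way to cash out the difference between "50% with low confidence" and "50% with high confidence" raised above.

```python
from scipy import stats

weak   = stats.beta(a=1,   b=1)    # wide hyperprior: "no idea, call it 50%"
strong = stats.beta(a=500, b=500)  # narrow hyperprior: "confident it's about 50%"
print(weak.mean(), strong.mean())  # both 0.5

# Beta-Bernoulli updating: after s successes and f failures, Beta(a, b) -> Beta(a+s, b+f).
# Suppose we then observe 8 "successes" in 10 relevant trials:
post_weak   = stats.beta(a=1 + 8,   b=1 + 2)
post_strong = stats.beta(a=500 + 8, b=500 + 2)
print(post_weak.mean())    # ~0.75: the wide prior moves a lot
print(post_strong.mean())  # ~0.503: the narrow prior barely moves
```

As Lumifer notes, for a one-off statement like "AGI by 2050" you never actually get repeated trials, so this is best read as an intuition pump for how strongly a given piece of evidence should move the estimate.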
↑ comment by [deleted] · 2014-08-12T03:47:43.381Z · LW(p) · GW(p)
What sophisticated idea are you holding on to that you are sure has been formalized somewhere but haven't been able to find?
The influence of the British Empire on progressivism.
There was that book that talked about how North Korea got its methods from the Japanese occupation, and as soon as I saw that, I thought, "well, didn't something similar happen here?" A while after that, I started reading Imagined Communities, got to the part where Anderson talks about Macaulay, looked him up, and went, "aha, I knew it!" But as far as I know, no one's looked at it.
Also, I think I stole "culture is an engineering problem" from a Front Porch Republic article, but I haven't been able to find the article, or anyone else writing rigorously about anything closer in ideaspace to that than dynamic geography, except the few people who approach something similar from an HBD or environmental determinism angle.
↑ comment by buybuydandavis · 2014-08-11T21:15:53.926Z · LW(p) · GW(p)
I believe Rational Self Interest types make similar arguments, though I can't recall anyone breaking it down to marginal gains in utility.
↑ comment by moridinamael · 2014-08-15T19:25:38.716Z · LW(p) · GW(p)
Well, this isn't quite what you were asking for, but, as a young teenager a few days after 9/11, I was struck with a clear thought that went something like: "The American people are being whipped into a blood frenzy, and we are going to massively retaliate against somebody, perpetuating the endless cycle of violence that created the environment which enabled this attack to occur in the first place."
But I think it's actually common for young people to be better at realpolitik and we get worse at it as we absorb the mores of our culture.
↑ comment by 2ZctE · 2014-08-15T03:59:17.389Z · LW(p) · GW(p)
In middle school I heard a fan theory that Neo had powers over the real world because it was a second layer of the matrix-- the idea of simulations inside simulations was enough for me to come to Bostrom's simulation argument.
Also during the same years I ended up doing an over the top version of comfort zone expansion by being really silly publicly.
In high school I think I basically argued a crude version of compatibilism before learning the term, although my memory of the conversation is a bit vague
↑ comment by Gvaerg · 2014-08-13T15:20:10.244Z · LW(p) · GW(p)
This happened when I was 12 years old. I was trying to solve a problem at a mathematical contest which involved proving some identity with the nth powers of 5 and 7. I recall thinking vaguely "if you go to n+1, what is added on the left-hand side is also added on the right-hand side", and so I discovered mathematical induction. In ten minutes I had a rigorous proof. Though I didn't find it so convincing, so I ended with an unsure-of-myself comment: "Hence, it is also valid for 3, 4, 5, 6 and so on..."
When I was in high school, creationism seemed unsatisfying in the sense of a Deus Ex Machina narrative (I often wonder how theists reconcile the contradiction between the feeling of religious wonder and the feeling of disappointment when facing Deus Ex Machina endings). The evolution "story" fascinated me with its slow and semi-random progression over billions of years. I guess this was my first taste of reductionism. (This is also an example of how optimizing for interestingness instead of truth has led me to the correct answer.)
↑ comment by [deleted] · 2014-08-12T03:42:22.100Z · LW(p) · GW(p)
Cartesian skepticism and egoism, when I was maybe eleven. I eventually managed to argue myself out of both -- Cartesian skepticism fell immediately, but egoism took a few years.
(In case it isn't obvious from that, I did not have a very good childhood.)
I remember coming close to rediscovering pseudoformalism and the American caste system, but I discovered those concepts before I got all the way there.
↑ comment by Alicorn · 2014-08-11T21:29:39.674Z · LW(p) · GW(p)
I independently conceived of determinism and a vague sort of compatibilism when I was twelveish.
Replies from: ahbwramc, sediment
↑ comment by ahbwramc · 2014-08-11T23:03:32.205Z · LW(p) · GW(p)
I remember being inordinately relieved/happy/satisfied when I first read about determinism around 14 or 15 (in Sophie's World, fwiw). It was like, thank you, that's what I've been trying to articulate all these years!
(although they casually dismissed it as a philosophy in the book, which annoyed 14-or-15-year-old me)
↑ comment by Curiouskid · 2014-08-21T03:22:15.948Z · LW(p) · GW(p)
When I was first learning about neural networks, I came up with the idea of de-convolutional networks: http://www.matthewzeiler.com/
Also, I think this is not totally uncommon. I think this suggests that there is low-hanging fruit in crowd-sourcing ideas from non-experts.
Another related thing that happens is that I'll be reading a book, and I'll have a question/thought that gets talked about later in the book.
↑ comment by Dahlen · 2014-08-14T16:04:37.609Z · LW(p) · GW(p)
I rediscovered most of the more widely agreed upon ontological categories (minus one that I still don't believe adheres to the definition) before I knew they were called that, at about the age of 17. The idea of researching them came to me after reading a question from some stupid personality quiz they gave us in high school, something like "If you were a color, which color would you be?" -- and something about it rubbed me the wrong way, it just felt ontologically wrong, conflating entities with properties like that. (Yes, I did get the intended meaning of the question, I wasn't that much of an Aspie even back then, but I could also see it in the other, more literal way.)
I remember it was in the same afternoon that I also split up the verb "to be" into its constituent meanings, and named them. It seemed related.
↑ comment by Luke_A_Somers · 2014-08-12T15:06:20.512Z · LW(p) · GW(p)
In second or third grade, I noticed that (n+1)·(n+1) = (n·n) + n + (n+1).
↑ comment by ShardPhoenix · 2014-08-12T09:41:21.736Z · LW(p) · GW(p)
I came up with a basic version of Tegmark's level 4 multiverse in high school and wrote an essay about it in English class. By that time though I think I'd already read Permutation City which involves similar ideas.
↑ comment by solipsist · 2014-08-11T19:59:13.253Z · LW(p) · GW(p)
Fun question!
Under 8: my sister and I were raised atheist, but we constructed what amounted to a theology around our stuffed animals. The moral authority whom I disappointed most often, more than my parents, was my teddy bear. I believed in parts of our pantheon and ethics system so deeply, devoutly, and sincerely that, had I been raised in a real religion, I doubt my temperament would have ever let me escape.
Around 8: My mother rinsed out milk bottles twice, each time using a small amount of water. I asked her why she didn't rinse them out once using twice as much water. She explained that doubling the water roughly doubled the cleansing power, but rinsing the bottle twice roughly squared the cleaning power. The most water-efficient way to clean a milk bottle, I figured, would involve a constant stream of water in and out of the bottle. I correctly modeled how the cleaning rate (per unit water) depends on the current milk residue concentration, but I couldn't figure out what to do next or if the idea even made sense.
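Under one simple mixing model (my assumption, not something stated in the thread: the bottle retains a fixed residual volume m after each pour, and a rinse with volume v mixes perfectly, so each rinse multiplies the leftover milk by m/(m+v)), the mother's rule of thumb falls out directly:

```python
def residue_fraction(total_water, n_rinses, m=0.01):
    """Fraction of the original milk left after splitting the water into n equal rinses."""
    v = total_water / n_rinses
    return (m / (m + v)) ** n_rinses

print(residue_fraction(1.0, 1))   # one rinse with all the water -> ~9.9e-3
print(residue_fraction(1.0, 2))   # two half-volume rinses       -> ~3.8e-4 (roughly "squared")
print(residue_fraction(1.0, 10))  # many small rinses            -> ~3.9e-11
```

In the limit of infinitely many tiny rinses this model tends to exp(-total_water/m), which is the "constant stream in and out of the bottle" intuition described above.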
Around 14: Composition is like multiplication, and unions (or options, or choices) are like addition.
University: (1) use Kolmogorov complexity to construct a Bayesian prior over universes, then reason anthropically. When you do this, you will (2) conclude with high probability that you are a very confused wisp of consciousness.
comment by Metus · 2014-08-11T11:24:02.731Z · LW(p) · GW(p)
In the last open thread Lumifer linked to a list by the American Statistical Association with points that need to be understood to be considered statistically literate. In the same open thread, in another comment, sixes_and_sevens asked for statements we know are true but the average layperson gets wrong. In response he mainly got examples from the natural sciences and mathematics. Which makes me wonder: can we make a general test of education in all of these fields of knowledge that can be automatically graded? This test would serve as a benchmark for traditional educational methods and for autodidacts checking themselves.
I imagine having simple calculations for some things and multiple-choice tests for other scenarios where intuition suffices.
Edit: Please don't just upvote, try to point to similar ideas in your respective field or critique the idea.
Replies from: whales, ChristianKl, sixes_and_sevens
↑ comment by whales · 2014-08-11T18:20:48.212Z · LW(p) · GW(p)
There are concept inventories in a lot of fields, but these vary in quality and usefulness. The most well-known of these is the Force Concept Inventory for first semester mechanics, which basically aims to test how Aristotelian/Newtonian a student's thinking is. Any physicist can point out a dozen problems with it, but it seems to very roughly measure what it claims to measure.
Russ Roberts (host of the podcast EconTalk) likes to talk about the "economic way of thinking" and has written and gathered links about ten key ideas like incentives, markets, externalities, etc. But he's relatively libertarian, so the ideas he chose and his exposition will probably not provide a very complete picture. Anyway, EconTalk has started asking discussion questions after each podcast, some of which aim to test basic understanding along these lines.
↑ comment by ChristianKl · 2014-08-11T12:00:22.011Z · LW(p) · GW(p)
It seems to me like something that can be solved by a community driven website where users can vote on questions.
↑ comment by sixes_and_sevens · 2014-08-12T13:52:22.378Z · LW(p) · GW(p)
I've often considered a self-assessment system where the sitter is prompted with a series of terms from the topic at hand, and asked to rate their understanding on a scale of 0-5, with 0 being "I've never heard of this concept", and 5 being "I could build one of these myself from scratch".
The terms are provided in a random order, and include red-herring terms that have nothing to do with the topic at hand, but sound plausible. Whoever provides the dictionary of terms should have some idea of the relative difficulty of each term, but you could refine it further and calibrate it against a sample of known diverse users (novices, high-schoolers, undergrads, etc.).
When someone sits the test, you report their overall score relative to your calibrated sitters ("You scored 76, which puts you at undergrad level"), but you also report something like the Spearman rank coefficient of their answers against the difficulty of the terms. This provides a consistency check for their answers. If they frequently claim greater understanding of advanced concepts than basic ones, their understanding of the topic is almost certainly off-kilter (or they're lying). The presence of red-herring terms (which should all have canonical score of 0) means the rank coefficient consistency check is still meaningful for domain experts or people hitting the same value for every term.
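A minimal sketch of that consistency check (my own illustration with made-up terms and scores; spearmanr is from SciPy). Each term carries a canonical score, highest for basic terms and 0 for red herrings, and the sitter's self-ratings are rank-correlated against it:

```python
from scipy.stats import spearmanr

# (term, canonical score: 5 = most basic, 1 = most advanced, 0 = red herring)
terms = [
    ("mean",                       5),
    ("standard deviation",         4),
    ("p-value",                    3),
    ("sufficient statistic",       2),
    ("martingale",                 1),
    ("inverse quantile smoothing", 0),  # plausible-sounding nonsense
]

self_ratings = [5, 4, 4, 2, 1, 0]  # the sitter's 0-5 "how well do I understand this?"
canonical = [score for _, score in terms]

rho, _ = spearmanr(self_ratings, canonical)
print(rho)  # near +1: understanding falls off with difficulty, as expected;
            # near 0 or negative: claims to understand advanced terms better than basic ones
```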
Actually, this seems like a very good learning-a-new-web-framework dev project. I might give this a go.
Replies from: somnicule, Luke_A_Somers, NancyLebovitz
↑ comment by somnicule · 2014-08-13T23:10:00.061Z · LW(p) · GW(p)
Look up Bayesian Truth Serum, not exactly what you're talking about but a generalized way to elicit subjective data. Not certain on its viability for individual rankings, though.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-08-14T09:12:01.660Z · LW(p) · GW(p)
This is all sorts of useful. Thanks.
↑ comment by Luke_A_Somers · 2014-08-12T14:57:59.999Z · LW(p) · GW(p)
One problem that could crop up if you're not careful is a control term being used in an educational source not considered - a class, say, or a nonstandard textbook. I have a non-Euclidean geometry book that uses names for Euclidean geometry features that I certainly never encountered in geometry class. If those terms had been placed as controls, I would provide a non-zero rating for them.
↑ comment by NancyLebovitz · 2014-08-12T15:25:08.380Z · LW(p) · GW(p)
Who's going to do the rather substantial amount of work needed to put the system together?
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-08-12T16:59:06.154Z · LW(p) · GW(p)
Do you mean to build the system or to populate it with content? The former would be "me, unless I get bored or run out of time and impetus", and the latter is "whichever domain experts I can convince to list and rank terms from their discipline".
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2014-08-12T19:57:46.206Z · LW(p) · GW(p)
I was thinking about the work involved in populating it.
comment by [deleted] · 2014-08-16T01:46:04.194Z · LW(p) · GW(p)
What are some good paths toward good jobs, other than App Academy?
Replies from: beoShaffer, shminux
↑ comment by beoShaffer · 2014-08-18T22:57:17.379Z · LW(p) · GW(p)
See Mr. Money Mustache's 50 Jobs over $50,000 without a degree and SSC's Floor Employment for a number of suggestions.
↑ comment by Shmi (shminux) · 2014-08-16T18:15:40.071Z · LW(p) · GW(p)
I assume you don't consider going to a good school as a good path?
Replies from: None
↑ comment by [deleted] · 2014-08-17T01:09:59.214Z · LW(p) · GW(p)
It's difficult for people who aren't in exactly the right place -- and I think people like that would be less likely to be around here.
Certainly not likely for me; I'm already out of college, and I went to a no-name local school. (Didn't even occur to me to apply up.)
comment by Vaniver · 2014-08-15T20:42:05.977Z · LW(p) · GW(p)
I've just finished the first draft of a series of posts on control theory, the book Behavior: The Control of Perception, and some commentary on its relevance to AI design. I'm looking for people willing to read the second draft next week and provide comments. Send me a PM or an email (I use the same username at gmail) if you're interested.
In particular, I'm looking for:
- People with no engineering background.
- People with tech backgrounds but no experience with control theory.
- People with experience as controls engineers.
(Yes, that is basically a complete grouping of people. But somehow people are more likely to think you're looking for them if you specifically say you're looking for them, and I think I can learn different useful things about the post from people in those groups.)
comment by Gunnar_Zarncke · 2014-08-14T08:58:51.201Z · LW(p) · GW(p)
My son was asked what he'd wish for when he could wish for any one thing whatsoever.
He considered a while and then said: "I have so many small wishes that I'd wish for many wishes."
My ex-wife settled for "I want to be able to conjure magic", reasoning that then she could basically make anything come true.
For me it is obviously "I want a friendly artificial general intelligence" - seems like the safest bet.
Thus basically we all chose similar things.
Replies from: shminux, NancyLebovitz, DanielLC
↑ comment by Shmi (shminux) · 2014-08-14T18:24:38.991Z · LW(p) · GW(p)
Maybe he'll grow up to be a mathematician.
Replies from: Gunnar_Zarncke
↑ comment by Gunnar_Zarncke · 2014-08-15T21:54:41.841Z · LW(p) · GW(p)
Naa, he is too practical. Builds real things. It's more likely that one of his younger brothers will. Like the five-year-old who told me that infinity can be reached only in steps of infinity each (thus one step), not in smaller steps (following some examples of how 1000 can be reached in steps of 1, 100, 1000, 200 and others).
↑ comment by NancyLebovitz · 2014-08-14T10:37:03.896Z · LW(p) · GW(p)
If I only had three wishes, I would still spend one of them on having enough sense to make good wishes. I'd probably do that if I only had two wishes.
I might even use my only wish on having significantly better sense. My current situation isn't desperate-- if I only had one wish and were desperate, the best choice might well be to use the wish on dealing with the desperate circumstance as thoroughly as possible.
↑ comment by DanielLC · 2014-08-14T22:16:05.467Z · LW(p) · GW(p)
For me it is obviously "I want a friendly artificial general intelligence" - seems like the safest bet.
But the AI would still be constrained by the laws of physics. Intelligence can't beat thermodynamics. You need to wish for an omnipotent friendly AI.
comment by Lumifer · 2014-08-12T14:56:06.208Z · LW(p) · GW(p)
The Unicorn Fallacy (warning, relates to politics)
Is there an existing name for that one? It's similar to the nirvana fallacy but looks sufficiently different to me...
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-12T15:58:36.570Z · LW(p) · GW(p)
I am not aware of an existing one, although it is related to Moloch, as described in SSC when applied to the state:
although from a god’s-eye-view everyone knows that eliminating corporate welfare is the best solution, each individual official’s personal incentives push her to maintain it.
What Munger describes as The State, SSC calls Moloch. What your link calls the Munger test may as well be called the Moloch test:
The Munger test:
In debates, I have found that it is useful to describe this problem as the "unicorn problem," precisely because it exposes a fatal weakness in the argument for statism. If you want to advocate the use of unicorns as motors for public transit, it is important that unicorns actually exist, rather than only existing in your imagination. People immediately understand why relying on imaginary creatures would be a problem in practical mass transit. But they may not immediately see why "the State" that they can imagine is a unicorn. So, to help them, I propose what I (immodestly) call "the Munger test."
Go ahead, make your argument for what you want the State to do, and what you want the State to be in charge of. Then, go back and look at your statement. Everywhere you said "the State" delete that phrase and replace it with "politicians I actually know, running in electoral systems with voters and interest groups that actually exist."
If you still believe your statement, then we have something to talk about.
↑ comment by Lumifer · 2014-08-12T16:08:37.403Z · LW(p) · GW(p)
What Munger describes as The State, SSC calls Moloch
I don't know about that. I understand Moloch as a considerably wider and larger system than just a State.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-12T17:19:50.731Z · LW(p) · GW(p)
Probably. I think Moloch is a metaphor for the actual, uncaring and often hostile universe, as contrasted with an imagined should-universe (the unicorn).
Replies from: jaime2000, Nornagest, Lumifer
↑ comment by jaime2000 · 2014-08-14T17:30:05.021Z · LW(p) · GW(p)
I think Moloch is a metaphor for the actual, uncaring and often hostile universe, as contrasted with an imagined should-universe
No, that's Gnon (Nature Or Nature's God). Moloch is the choice between sacrificing a value to remain competitive against others who have also sacrificed that value, or else ceasing to exist because you are not competitive. The name comes from an ancient god people would sacrifice their children to.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-14T18:21:37.761Z · LW(p) · GW(p)
Right, thanks.
↑ comment by Nornagest · 2014-08-12T17:27:27.412Z · LW(p) · GW(p)
I've been thinking of Moloch as the God of the Perverse Incentives, which doesn't quite cover it (it has the right shape, but strictly speaking a perverse incentive needs to be perverse relative to some incentive-setting agent, which the universe lacks) but has the advantage of fitting the meter of a certain Kipling poem.
Replies from: Dagon
↑ comment by Lumifer · 2014-08-12T18:03:17.339Z · LW(p) · GW(p)
I think Moloch is a metaphor for the actual, uncaring and often hostile universe
Well, not THAT wide :-)
My thinking about Moloch is still too fuzzy for good definitions, but I'm inclined to treat it as emergent system behavior which, according to Finagle's Law, is usually not what you want. Often enough it's not what you expect either, even if you designed (or tinkered with) the system.
The unicorn is also narrower than the whole should-universe -- specifically it's some agent or entity with highly unlikely benevolent properties and the proposal under discussion is entirely reliant on these properties in order to work.
Replies from: Azathoth123
↑ comment by Azathoth123 · 2014-08-13T05:15:51.941Z · LW(p) · GW(p)
My thinking about Moloch is still too fuzzy for good definitions
Moloch is based on the neo-reactionaries' Gnon. Notice how Nyan deals with the fuzziness by dividing Gnon into four components, each of which can be analyzed individually. Apparently Yvain's brain went into "basilisk shock" upon exposure to the content, which is why his description is so fuzzy.
Replies from: None, Lumifer
↑ comment by [deleted] · 2014-08-16T01:42:45.518Z · LW(p) · GW(p)
Maybe genealogically, but Moloch and Gnon are two completely different concepts.
Gnon is a personalization of the dictates of reality, as stated in the post defining it. Every city in the world has the death penalty for stepping in front of a bus -- who set that penalty? Gnon did. Civilizations thrive when they adhere to the dictates of Gnon, and collapse when they cease to adhere to them. And so on. The structure is mechanistic/horroristic (same thing, in this case): "Satan is evil, but he still cares about each human soul; while Cthulhu can destroy humanity and never even notice." (in the comments here) Gnon is Cthulhu. Gnon doesn't care what you think about Gnon. Gnon doesn't care about you at all. But if you don't care about Gnon, you can't escape the cost.
There's nothing dualistic about Gnon: there's only the spectrum from adherence to rebellion. Moloch vs. Elua, on the other hand, is totally Manichaean: the 'survive-mode' dictates of Gnon are identified with Moloch, the evil god of multipolar traps and survival-necessitated sacrifices, and Moloch must be defeated by creating a new god to take over the world and enforce one specific morality and one specific set of dictates everywhere.
(Land, Meltdown: "Philosophy has an affinity with despotism, due to its predilection for Platonic-fascist top-down solutions that always screw up viciously.")
Replies from: Emile, Nornagest
↑ comment by Emile · 2014-08-17T17:10:57.512Z · LW(p) · GW(p)
Philosophy has an affinity with despotism, due to its predilection for Platonic-fascist top-down solutions that always screw up viciously.
"Platonic-fascist top-down solutions" that didn't screw up viciously: universal education, the hospital system, unified monetary systems, unified weights and measures, sewers, enforcement of a common code of laws, traffic signals, municipal street cleaning...
Replies from: Azathoth123
↑ comment by Azathoth123 · 2014-08-18T07:29:09.437Z · LW(p) · GW(p)
unified monetary systems
A lot of people would argue that this is in fact in the process of screwing up right now.
enforcement of a common code of laws
This really didn't develop top-down.
↑ comment by Lumifer · 2014-08-13T14:44:05.401Z · LW(p) · GW(p)
Moloch is based on the neo-reactionaries' Gnon
That's not self-evident to me. At the levels of abstraction we're talking about, the idea of an opaque, uncaring, often perverse, and sometimes malevolent system/universe/reality is really a very old and widespread meme.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-08-13T23:07:25.649Z · LW(p) · GW(p)
Personalizing it in quite this way was based on Gnon. Also, at the level of abstraction we (i.e., Yvain) are talking about, it's impossible to say much of anything meaningful, as you yourself noted in the grandparent.
Replies from: kalium
comment by James_Miller · 2014-08-11T17:59:58.660Z · LW(p) · GW(p)
How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion? Or might genetics play a role in our differing moral views? I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them. But yes I do realize that some of my direct ancestors almost certainly did horrible, horrible things by my current moral standards.
Replies from: DanielLC, polymathwannabe, Lumifer, Gunnar_Zarncke, buybuydandavis, NancyLebovitz, bramflakes, Viliam_Bur, niceguyanon, Richard_Kennaway, ChristianKl↑ comment by DanielLC · 2014-08-11T22:04:45.923Z · LW(p) · GW(p)
I find it hard to think of ISIS members as human
That's how the ISIS fighters feel about the Yazidi.
Replies from: James_Miller↑ comment by James_Miller · 2014-08-11T22:51:57.127Z · LW(p) · GW(p)
Yes, an uncomfortable symmetry.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-08-12T06:53:26.305Z · LW(p) · GW(p)
Symmetry? Do you want to behead the children of ISIS fighters?
Replies from: James_Miller, Azathoth123, DanielLC↑ comment by James_Miller · 2014-08-12T15:03:34.056Z · LW(p) · GW(p)
No, so I guess it's not perfect symmetry.
↑ comment by Azathoth123 · 2014-08-13T04:52:26.179Z · LW(p) · GW(p)
What age are we talking about here? ISIS has been recruiting children as young as 9 and 10.
↑ comment by polymathwannabe · 2014-08-11T19:01:29.578Z · LW(p) · GW(p)
I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them.
Beware of refusing to believe undeniable reality just because it's not nice.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-08-12T06:31:15.925Z · LW(p) · GW(p)
Yes. But in this case it might be an inkling that the credibility of the sources may be the cause.
↑ comment by Lumifer · 2014-08-11T18:08:07.866Z · LW(p) · GW(p)
How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion?
A relevant factor which is (intentionally or not) ignored by American media is that, from the point of view of pious Muslims, Yazidis are satanists.
To quote Wikipedia (Taus Melek is basically the chief deity for Yazidis, God the Creator being passive and uninvolved with the world):
As a demiurge figure, Tawûsê Melek is often identified by orthodox Muslims as a Shaitan (Satan), a Muslim term denoting a devil or demon who deceives true believers. The Islamic tradition regarding the fall of "Shaitan" from Grace is in fact very similar to the Yazidi story of Malek Taus – that is, the Jinn who refused to submit to God by bowing to Adam is celebrated as Tawûsê Melek by Yazidis, but the Islamic version of the same story curses the same Jinn who refused to submit as becoming Satan.[38] Thus, the Yazidi have been accused of devil worship.
So, what's Christianity's historical record for attitude towards devil worshippers?
or at least I don't want to belong to the same species as them
Any particular reason you feel this way about the Sunni armed groups, but not about, say, Russian communists, or Mao's Chinese, or Pol Pot's Cambodians, or Rwandans, or... it's a very long list, y'know?
Replies from: Nornagest, buybuydandavis, James_Miller↑ comment by Nornagest · 2014-08-11T18:42:23.808Z · LW(p) · GW(p)
from the point of view of pious Muslims, Yazidis are satanists [...] what's the Christianity's historical record for attitude towards devil worshippers?
The closest parallel might be to Catharism, a Gnostic-influenced sect treating the God of the Old Testament as an entity separate from, and opposed to, the God of the New, and which was denounced as a "religion of Satan" by contemporary Christian authorities. That was bloodily suppressed in the Albigensian Crusade. Manicheanism, among other early Gnostic groups, was similarly accused, but it's much older and less well documented, and reached its greatest popularity (and experienced its greatest persecutions) in areas without Christian majorities.
A few explicitly Satanist groups have popped up since the 18th century, but they've universally been small and insignificant, and don't seem to have experienced much persecution outside of social disapproval. Outside of fundamentalist circles they seem to be treated as immature and insincere more than anything else.
On the other hand, unfounded accusations of Satanism seem to be fertile ground for moral panics -- from the witch trials of the early modern period (which, Wiccan lore notwithstanding, almost certainly didn't target any particular belief system) to the more recent Satanic ritual abuse panics.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-11T18:53:52.126Z · LW(p) · GW(p)
The closest parallel might be to Catharism
I would probably say that the closest parallel is the persecution of witches in medieval Europe (including but not limited to the witch trials).
Replies from: Nornagest↑ comment by Nornagest · 2014-08-11T19:02:22.438Z · LW(p) · GW(p)
The persecution of witches targeted individuals or small groups, not (as far as modern history knows) members of any particular religion; and the charges leveled at alleged witches usually involved sorcerous misbehavior of various kinds (blighting crops, causing storms, bringing pestilence...) rather than purely religious accusations. Indeed, for most of the medieval era the Church denied the existence of witches (though, as we've seen above, it was happy to persecute real heretics): witch trials only gained substantial clerical backing well into the early modern period.
Seems pretty different to me.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-11T19:10:37.203Z · LW(p) · GW(p)
Charges of being in league with the Devil were a necessary part of accusations against the witches because, I think, sorcery was considered to be possible for humans only through the Devil's help. The witches' covens were perceived as actively worshipping the Devil.
I agree that it's not the exact parallel, but do you think a whole community (with towns and everything) of devil worshippers could have survived in Europe or North America for any significant period of time? Compared to Islam, Christianity was just quicker and more efficient about eliminating them.
Replies from: Nornagest↑ comment by Nornagest · 2014-08-11T20:00:43.795Z · LW(p) · GW(p)
I agree that it's not the exact parallel, but do you think a whole community (with towns and everything) of devil worshippers could have survived in Europe or North America for any significant period of time?
That veers more into speculation than I'm really comfortable with. That said, though, I think you're giving this devil-worship thing a bit more weight than it should have; sure, some aspects of Melek Taus are probably cognate to the Islamic Shaitan myth, but Yazidi religion as a whole seems to draw in traditions from several largely independent evolutionary paths. We're not dealing here with the almost certainly innocent targets of witch trials or with overenthusiastic black metal fans, nor even with an organized Islamic heresy, but with a full-blown syncretic religion.
No similar religions of comparable age survive in Christianity's present sphere of influence, though the example of Gnosticism suggests that the early evolution of the Western branch of Abrahamic faith was pretty damn complicated, and that many were wiped out in Christianity's early expansion or in medieval persecutions. There are a lot of younger ones, however, especially in the New World: Santeria comes to mind.
That's only tangentially relevant to the historical parallels I'm trying to outline, though.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-11T20:41:14.308Z · LW(p) · GW(p)
a full-blown syncretic religion
Oh, it certainly is, but the issue is not what we are dealing with -- the issue is how the ISIS fighters perceive it.
The whole Middle-East-to-India region is full of smallish religions which look to be, basically, outcomes of "Throw pieces of several distinct religious traditions together, blend on high for a while, then let sit for a few centuries".
Replies from: Nornagest↑ comment by Nornagest · 2014-08-11T21:58:13.694Z · LW(p) · GW(p)
Oh, it certainly is, but the issue is not what we are dealing with -- the issue is how the ISIS fighters perceive it.
I'm pretty sure their perceptions are closer to an Albigensian Crusader's attitude toward Catharism -- or even your average Chick tract fan's attitude toward Catholicism -- than some shit-kicking medieval peasant's grudge toward the old man down the lane who once scammed him for a folk healing ritual that invoked a couple of barbarous names for shock value. Treating religious opponents as devil-worshippers is pretty much built into the basic structure of (premodern, and some modern) Christianity and Islam, whether or not there's anything to the accusation (though as I note above, the charge is at least as sticky for Catharism as for the Yazidi). The competing presence of a structured religion that's related closely enough to be uncomfortable but not closely enough to be a heresy per se... that's a little more distinctive.
↑ comment by buybuydandavis · 2014-08-11T21:34:16.084Z · LW(p) · GW(p)
A relevant factor which is (intentionally or not) ignored by American media is that, from the point of view of pious Muslims, Yazidis are satanists.
It hasn't been ignored by the American media. I've heard it multiple times. I don't think the term used was Satanist, but "devil worshippers".
↑ comment by James_Miller · 2014-08-11T18:26:33.026Z · LW(p) · GW(p)
Although I'm a libertarian now, in my youth I was very left-wing and can understand the appeal of communism. For many of the others on the long list, yes they do feel very other to me.
Replies from: bbleeker, Azathoth123↑ comment by Sabiola (bbleeker) · 2014-08-12T12:41:39.886Z · LW(p) · GW(p)
I too was very left-wing when I was young, and now I feel communism does belong with the others on that list. It fills the same mental space as a religion, and is believed in much the same way (IME).
↑ comment by Azathoth123 · 2014-08-13T04:55:25.898Z · LW(p) · GW(p)
Take some ISIS propaganda and do s/infidels/capitalist exploiters, s/Allah/the revolution, etc.
↑ comment by Gunnar_Zarncke · 2014-08-11T22:34:04.588Z · LW(p) · GW(p)
First you might want to consider propaganda.
http://www.revleft.com/vb/ten-commandments-war-t52907/index.html?s=8387131b8a98f6ee7e6ba74cce570d8e
http://home.cc.umanitoba.ca/~mkinnear/16_Falsehood_in_wartime.pdf
We do not want war.
The opposite party alone is guilty of war.
The enemy is the face of the devil.
We defend a noble cause, not our own interest.
The enemy systematically commits cruelties; our mishaps are involuntary.
The enemy uses forbidden weapons.
We suffer small losses, those of the enemy are enormous.
Artists and intellectuals back our cause.
Our cause is sacred.
All who doubt our propaganda are traitors.
↑ comment by buybuydandavis · 2014-08-11T21:46:27.913Z · LW(p) · GW(p)
It's a little harder to say about the ISIS guys, but I think personality-wise many of us are a lot like the Al Qaeda leadership. Ideology, and Jihad for it, appeals.
Most people don't take ideas too seriously. We do. And I think it's largely genetic.
I find it hard to think of ISIS members as human
Human, All Too Human.
Historically, massacring The Other is the rule, not the exception. You don't even need to be particularly ideological for that. People who just go with the flow of their community will set The Other on fire in a public square, and have a picnic watching. Bring their kids. Take grandma out for the big show.
Replies from: James_Miller↑ comment by James_Miller · 2014-08-11T22:56:03.923Z · LW(p) · GW(p)
Most people don't take ideas too seriously. We do. And I think it's largely genetic.
Excellent point. I wonder if LW readers and Jihadists would give similar answers to the Trolley problem.
Replies from: buybuydandavis, Nornagest↑ comment by buybuydandavis · 2014-08-12T02:20:30.685Z · LW(p) · GW(p)
I don't think that's the test. It's not that they'd give the same answers to any particular question.
I think the test would be a greater likelihood of being unshakeable by adjustments along the moral modalities that move others who are not so ideological. How "principled" are you? How "extreme" a situation are you willing to assent to, relative to the general population? Largely, how far can you override morality cognitively?
↑ comment by Nornagest · 2014-08-11T23:08:37.855Z · LW(p) · GW(p)
I wonder if LW readers and Jihadists would give similar answers to the Trolley problem.
A hundred bucks says the answer is "no". Religious fundamentalism is not known to encourage consequentialist ethics.
There may be certain parallels -- I've read that engineers and scientists, or students of those disciplines, are disproportionately represented among jihadists -- but they're probably deeper than that.
Replies from: buybuydandavis, Richard_Kennaway, Prismattic↑ comment by buybuydandavis · 2014-08-12T02:36:13.378Z · LW(p) · GW(p)
Also disproportionately represented as the principals in the American Revolution. Inventors, engineers, scientists, architects.
Franklin, Jefferson, Paine, and Washington all had serious inventions. That's pretty much the first string of the revolution.
↑ comment by Richard_Kennaway · 2014-08-12T07:03:08.554Z · LW(p) · GW(p)
A hundred bucks says the answer is "no". Religious fundamentalism is not known to encourage consequential ethics.
That might depend on the consequences.
A runaway trolley is careering down the tracks and will kill a single infidel if it continues. If you pull a lever, it will be switched to a side track and kill five infidels. Do you pull the lever?
The lever is broken, but beside you on the bridge is a very fat man, one of the faithful. Do you push him off the bridge to deflect the trolley and kill five infidels, knowing that he will have his reward for his sacrifice in heaven?
↑ comment by Prismattic · 2014-08-12T02:02:53.942Z · LW(p) · GW(p)
I've read that engineers and scientists, or students of those disciplines, are disproportionately represented among jihadists
I've also read this, but I want to know if it corrects for the fact that the educational systems in many of the countries that produce most jihadis don't encourage study of the humanities and certain social sciences. Is it really engineers in particular, or is it the educated-but-stifled, who happen overwhelmingly to be engineers in these countries?
↑ comment by NancyLebovitz · 2014-08-11T19:01:43.098Z · LW(p) · GW(p)
Part of "us" is our culturally transmitted values.
My impression is that ISIS is mostly a new thing -- it's a matter of relatively new memes taken up by adolescents and adults rather than generational transmission.
I don't think it's practical to see one's enemies, even those who behave vilely and are ideologically committed to continuing to do so, as non-human. To see them as non-human is to commit oneself to framing them as incomprehensible. More exactly, the usual outcome seems to be "all they understand is force" or "there's nothing to do but kill them", which makes it difficult to think of how to deal with them if victory by violence isn't a current option.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-11T19:12:11.889Z · LW(p) · GW(p)
I don't think it's practical to see one's enemies ... as non-human.
On the contrary, that's the attitude specifically trained in modern armies, US included. Otherwise not enough people shoot at the enemy :-/
Replies from: NancyLebovitz, Azathoth123↑ comment by NancyLebovitz · 2014-08-11T19:54:08.788Z · LW(p) · GW(p)
You might not be in an army.
↑ comment by Azathoth123 · 2014-08-13T04:40:28.185Z · LW(p) · GW(p)
On the contrary, that's the attitude specifically trained in modern armies,
I'm not sure about modern armies, but ancient and even medieval armies certainly didn't need this attitude to kill their enemies.
↑ comment by bramflakes · 2014-08-11T19:24:43.268Z · LW(p) · GW(p)
Or might genetics play a role in our differing moral views?
It's possible that more inbred clannish societies have smaller moral circles than Western outbreeders.
I against my brother, my brothers and I against my cousins, then my cousins and I against strangers
- Bedouin proverb
↑ comment by [deleted] · 2014-08-12T03:52:10.946Z · LW(p) · GW(p)
I was talking to someone from Tennessee once, and he said something along the lines of: "When I'm in a bar in western Tennessee, I drink with the guy from western Tennessee and fight the guy from eastern Tennessee. When I'm in a bar in eastern Tennessee, I drink with the guy from Tennessee and fight the guy from Georgia. When I'm in a bar in Georgia, I drink with the guy from the South and fight the guy from New England."
↑ comment by [deleted] · 2014-08-12T05:02:29.123Z · LW(p) · GW(p)
It's possible that more inbred clannish societies have smaller moral circles than Western outbreeders.
The history of the European takeover of the Americas and the damn near genocide of somewhere between tens and hundreds of millions of people in the process, and the history of the resultant societies, should disabuse everyone here of any laughable claims of ethnic superiority in this regard. I also strongly suspect that the European diaspora of the Americas and elsewhere just hasn't had enough time for the massive patchwork of tribalisms to inevitably crystallize out of the liquid wave of disruptive post-genocide settlement that happened over the last few hundred years, and instead we only have a few very large groups in this hemisphere that are coming to hate each other so far. Though sometimes I suspect the small coal mining town my parents escaped from could be induced to have race riots between the Poles and Italians.
Also... Germany. Enough said.
EDIT: Not directed at you, bramflakes, but at the whole thread here... how in all hell am I seeing so much preening smug superiority on display here? Humans are brutal murderous monkeys under the proper conditions. No one here is an exception at all except through accidents of space and time, and even now we all reading this are benefiting from systems which exploit and kill others and are for the most part totally fine with them or have ready justifications for them. This is a human thing.
Replies from: Richard_Kennaway, James_Miller↑ comment by Richard_Kennaway · 2014-08-12T07:16:12.987Z · LW(p) · GW(p)
Humans are brutal murderous monkeys under the proper conditions.
They are also sweetness and light under the proper conditions.
No one here is an exception at all except through accidents of space and time
You seem to be claiming that certain conditions -- those not producing brutal murderous monkeys -- are accidents of space and time, but certain others -- those producing brutal murderous monkeys -- are not. That "brutal murderous monkeys" is our essence and any deviation from that mere accident, in the philosophical sense. That the former is our fundamental nature and the latter mere superficial froth.
There is no actual observation that can be made to distinguish "proper conditions" from "parochial circumstance", "essence" from "accident", "fundamental" from "superficial".
Replies from: MrMind↑ comment by MrMind · 2014-08-12T09:51:40.961Z · LW(p) · GW(p)
Chimpanzee tribes, given enough resources, can pass from an equilibrium based on violence to an equilibrium based on niceness and sharing.
I cannot seem to find, despite extensive search, the relevant experiment, but I remember it vividly.
I guess the same thing can happen to humans too.
Replies from: Richard_Kennaway, army1987↑ comment by Richard_Kennaway · 2014-08-12T10:14:53.913Z · LW(p) · GW(p)
I guess the same thing can happen to humans too.
It visibly does. If you're not sitting in a war zone, just look around you. Are the people around you engaged in brutally murdering each other?
This is not to say that the better parts of the world are perfect, but to look at those parts and moan about our brutally murderous monkey nature is self-indulgent posturing.
↑ comment by A1987dM (army1987) · 2014-08-20T19:08:24.647Z · LW(p) · GW(p)
I cannot seem to find, despite extensive search, the relevant experiment
↑ comment by James_Miller · 2014-08-12T15:38:39.038Z · LW(p) · GW(p)
how in all hell am I seeing so much preening smug superiority on display here?
We have a right to feel morally superior to ISIS, although probably not on genetic grounds.
No one here is an exception at all except through accidents of space and time
But is this true? Do some people have genes which strongly predispose them against killing children? It feels to me like I do, but I recognize my inability to properly determine this.
and even now we all reading this are benefiting from systems which exploit and kill others and are for the most part totally fine with them or have ready justifications for them.
As a free market economist I disagree with this. The U.S. economy does not derive wealth from the killing of others, although as the word "exploit" is hard to define I'm not sure what you mean by that.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2014-08-12T21:15:53.663Z · LW(p) · GW(p)
We have a right to feel morally superior to ISIS, although probably not on genetic grounds.
The Stanford prison experiment suggests that you don't need that much to get people to do immoral things. ISIS evolved over years of hard civil war.
ISIS also partly owes its present power to the US first destabilising Iraq and later allowing funding of Syrian rebels. The US was very free to avoid fighting the Iraq war. ISIS fighters get killed if they don't fight their civil war.
Replies from: fubarobfusco, James_Miller↑ comment by fubarobfusco · 2014-08-13T02:37:22.974Z · LW(p) · GW(p)
The Stanford prison experiment suggests that you don't need that much to get people to do immoral things.
The Stanford prison "experiment" was a LARP session that got out of control because the GM actively encouraged the players to be assholes to each other.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-08-14T01:57:59.882Z · LW(p) · GW(p)
I agree with that interpretation of the experiment but "active encouragement" should count as "not that much."
↑ comment by James_Miller · 2014-08-12T21:25:48.676Z · LW(p) · GW(p)
I am very confident that a college-student version of me taking part in a similar experiment as a guard would not have been cruel to the prisoners, in part because the high-school me (who at the time was very left-wing) decided not to stand up for the Pledge of Allegiance even though everyone else in his high school regularly did, and also refused to participate in a gym game named war-ball because I objected to the name.
Replies from: Nornagest, ChristianKl↑ comment by Nornagest · 2014-08-12T21:44:21.436Z · LW(p) · GW(p)
I didn't stand for the Pledge in school either, but in retrospect I think that had less to do with politics or virtue and more to do with an uncontrollable urge to look contrarian.
I can see myself going either way in the Stanford prison experiment, which probably means I'd have abused the prisoners.
↑ comment by ChristianKl · 2014-08-13T10:22:32.490Z · LW(p) · GW(p)
But you aren't that left-wing anymore, and you go around teaching people to make decisions based on game theory.
Replies from: James_Miller↑ comment by James_Miller · 2014-08-13T15:18:20.106Z · LW(p) · GW(p)
I moved to the right in my 20s.
↑ comment by Lumifer · 2014-08-12T16:01:28.451Z · LW(p) · GW(p)
We have a right to feel morally superior to ISIS
Who is "we"? and are you comparing individuals to an amorphous military-political movement?
Do some people have genes which strongly predispose them against killing children.
Everyone has these genes. It's just that some people can successfully override their biological programming :-/
Killing children is one of the stronger moral taboos, but a lot of kids are deliberately killed all over the world.
By the way, the US drone strikes in Pakistan are estimated to have killed 170-200 children.
Replies from: None↑ comment by [deleted] · 2014-08-14T09:21:54.193Z · LW(p) · GW(p)
Everyone has these genes. It's just that some people can successfully override their biological programming :-/
"Every computer has this code. It's just that some computers can successfully override their programming."
What does this statement mean?
Replies from: Risto_Saarelma, Lumifer↑ comment by Risto_Saarelma · 2014-08-14T10:13:09.579Z · LW(p) · GW(p)
What does this statement mean?
Suppressing bad instincts. Seems to make sense to me and to describe a real thing that's often a big deal in culture and civilization. All it needs to be coherent is that people can have both values and instincts, that the values aren't necessarily that which is gained by acting on instincts, and that people have some capability to reflect on both and not always follow their instincts.
For the software analogy, imagine an optimization algorithm that has built-in heuristics, runtime generated heuristics, optimization goals, and an ability to recognize that a built-in heuristic will work poorly to reach the optimization goal in some domain and a different runtime generated heuristic will work better.
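A minimal, purely illustrative sketch of that analogy (none of these names or numbers come from the thread or any particular library): an optimizer with one built-in and one runtime-generated heuristic that checks, at each step, which of them better serves its goal instead of blindly following the built-in one.

```python
# Illustrative sketch only: built-in vs. runtime-generated heuristics,
# selected by how well each serves the optimization goal.

def built_in_heuristic(x):
    # "Instinct": a fixed rule baked in ahead of time.
    return x + 1

def make_learned_heuristic(step):
    # "Reflection": a heuristic generated at runtime.
    def learned_heuristic(x):
        return x + step
    return learned_heuristic

def goal(x):
    # Optimization goal: get as close to 100 as possible.
    return -abs(100 - x)

def choose_heuristic(x, candidates):
    # Recognize which heuristic works better for the goal from this state,
    # rather than always following the built-in one.
    return max(candidates, key=lambda h: goal(h(x)))

x = 0
candidates = [built_in_heuristic, make_learned_heuristic(10)]
for _ in range(12):
    x = choose_heuristic(x, candidates)(x)
print(x)  # 102: the +10 heuristic is used far from the goal, +1 once +10 would overshoot
```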
↑ comment by Lumifer · 2014-08-14T15:38:16.282Z · LW(p) · GW(p)
The usual. The decisions that you make result from a weighted sum of many forces (reasons, motivations, etc.). Some of these forces/motivations are biologically hardwired -- almost all humans have them and they are mostly invariant among different cultures. The fact that they exist does not mean that they always play the decisive role.
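As a toy illustration of that weighted-sum picture (every name and number below is invented for the example), a hardwired force can be present in the sum and still be outweighed by acquired ones:

```python
# Made-up numbers: a decision modeled as a weighted sum of forces, where a
# hardwired motivation contributes but is not decisive.
forces = {
    "hardwired aversion": -0.9,    # biologically built in, pushes against acting
    "ideological pressure": +0.7,  # culturally acquired, pushes toward acting
    "group conformity": +0.5,
}
weights = {
    "hardwired aversion": 1.0,
    "ideological pressure": 1.5,
    "group conformity": 1.2,
}
score = sum(weights[k] * forces[k] for k in forces)
print(round(score, 2), "act" if score > 0 else "refrain")  # 0.75 act: the hardwired force is outweighed
```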
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-08-20T21:22:21.035Z · LW(p) · GW(p)
Some of these forces/motivations are biologically hardwired -- almost all humans have them and they are mostly invariant among different cultures.
You appear to be implying that all (or nearly all) motivations that are hardwired are universal and vice versa, neither of which seems obvious to me.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-25T19:49:18.277Z · LW(p) · GW(p)
You appear to be implying that all (or nearly all) motivations that are hardwired are universal and vice versa, neither of which seems obvious to me.
Hm. I would think that somewhere between many and most of the universal terminal motivations are hardwired. I am not sure why they would be universal otherwise (a similar environment can produce similar responses, but I don't see why it would produce similar motivations).
And in reverse, all motivations hardwired into Homo sapiens should be universal since humanity is a single species.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-08-26T12:50:44.483Z · LW(p) · GW(p)
Hm. I would think that somewhere between many and most of the universal terminal motivations are hardwired. I am not sure why they would be universal otherwise (a similar environment can produce similar responses, but I don't see why it would produce similar motivations).
Well, about a century ago religion was pretty much universal, and now a sizeable fraction of the population (especially in northern Eurasia) is atheist, even if genetics presumably haven't changed that much. How do we know there aren't more things like that?
And in reverse, all motivations hardwired into Homo sapiens should be universal since the humanity is a single species.
I'm aware of the theoretical arguments to expect that same species -> same hardwired motivations, but I think they have shortcomings (see the comment thread to that article) and the empirical evidence seems to be against (see this or this).
Replies from: Lumifer↑ comment by Lumifer · 2014-08-26T15:10:01.258Z · LW(p) · GW(p)
Well, about a century ago religion was pretty much universal
Was it? Methinks you forgot about places like China, if you go by usual definitions of "religion". Besides, it has been argued that the pull towards spiritual/mysterious/numinous/godhead/etc. is hardwired in some way.
I think they have shortcomings
This is a "to which degree" argument. Your link says "Different human populations are likely for biological reasons to have slightly different minds" and I will certainly agree. The issue is what "slightly" means and how significant it is.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-08-26T16:52:42.109Z · LW(p) · GW(p)
This is a "to which degree" argument. Your link says "Different human populations are likely for biological reasons to have slightly different minds" and I will certainly agree. The issue is what "slightly" means and how significant it is.
Well, that's a different claim from "all motivations hardwired into Homo sapiens should be universal" (emphasis added) in the great-grandparent.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-26T17:25:30.237Z · LW(p) · GW(p)
If you want to split hairs :-) all motivations hardwired into Homo Sapiens should be universal. Motivations hardwired only into certain subsets of the species will not be universal.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-08-26T21:24:15.768Z · LW(p) · GW(p)
If you mean motivations hardwired into all Homo Sapiens, sure, but that's tautological! :-)
↑ comment by Viliam_Bur · 2014-08-13T07:50:38.470Z · LW(p) · GW(p)
How morally different are ISIS fighters from us?
Uhm, taboo "morally different"?
Are their memes repulsive to me? Yes, they are.
Do they have terminal value as humans (ignoring their instrumental value)? Yes, they do.
How about their instrumental value? Uhm, probably negative, since they seem to spend a lot of time killing other humans.
If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion? Or might genetics play a role in our differing moral views?
Probably yes. I think there can be a genetic influence, but there is much more of "monkey see, monkey do" in humans.
↑ comment by niceguyanon · 2014-08-14T19:30:13.492Z · LW(p) · GW(p)
Here is a Vice documentary posted today about ISIS: https://news.vice.com/video/the-islamic-state-full-length
↑ comment by Richard_Kennaway · 2014-08-12T06:51:57.604Z · LW(p) · GW(p)
If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion?
The question is irrelevant. If it is wrong to behead children for having the "wrong" religion, that is not affected by fictional scenarios in which "we" believed differently. (It's not clear what "we" actually means there, but that's a separate philosophical issue.) Truth is not found by first seeing what you believe, and then saying, "I believe this, therefore it is true."
Or might genetics play a role in our differing moral views?
This question is also irrelevant.
I find it hard to think of ISIS members as human
Well, they are. Start from there.
↑ comment by ChristianKl · 2014-08-11T21:28:14.873Z · LW(p) · GW(p)
Focusing on "morally correct" might prevent a lot of understanding of the situation. People in war usually don't do things because they are morally correct.
comment by advancedatheist · 2014-08-11T15:38:35.540Z · LW(p) · GW(p)
I wonder why we don't see more family fortunes in the U.S. in kin groups that have lived here for generations. Estate taxes tend to inhibit the transmission of wealth down the line, but enough families have figured out how to game the system that they have held on to wealth for a century or more, notably including families which supply a disproportionate number of American politicians; they provide proof of concept of the durable family fortune. Otherwise most Americans seem to live in a futile cycle where their lifetime wealth trajectory starts from zero at birth and returns to zero by death.
Steve Sailer noted on his blog a few months back that in the UK, people with Anglo-Norman surnames in our time have held on to more wealth on average than Brits with surnames suggesting manual-laborer origins. For example, Aubrey de Grey has an Anglo-Norman surname, and he reportedly inherited several million British pounds when his mother died a few years ago. I gather that this doesn't generally happen to ordinary Brits. Apparently the warriors who came over from France with William the Conqueror in 1066, and participated in the division of the spoils, started a way of handling wealth which enabled their descendants to hold on to inherited assets down through the centuries. If the Anglo-Normans could do it, and if some American families have figured out how to do it more recently, then what keeps this practice from becoming widespread in American society?
Replies from: NancyLebovitz, buybuydandavis, Nornagest, Illano, Izeinwinter↑ comment by NancyLebovitz · 2014-08-11T18:49:32.588Z · LW(p) · GW(p)
Another possibility is that Americans are more individualistic. Maintaining a family fortune means subordinating yourself enough that it isn't spent down.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-11T19:03:49.393Z · LW(p) · GW(p)
"Lacking self-control" is probably what you mean :-)
Example: the Vanderbilts.
Replies from: wadavis↑ comment by wadavis · 2014-08-11T20:14:12.663Z · LW(p) · GW(p)
Supporting the individualistic argument. The family values trend in my prosperous region of Canada is leaning toward successful businessmen and entrepreneurs valuing empowering their children but not supporting their children past adolescence.
The accepted end goal IS to die as close to net zero as possible; I've not seen strong obligations to leave a large inheritance behind. The only strong obligation is the empowerment of their upper-middle-class children so they can follow the same zero-to-wealth-to-zero cycle.
Where sons stay in the same industry as their fathers, instead of striking out on their own, they work for the father's firm until they have the credit and savings to start taking loans and buying shares of the father's firm. Successful succession planning is when the children can buy 100% of the firm by the time the parents are ready for retirement.
(All based on personal observations of a single province and a group of peers n~20)
Replies from: Lumifer↑ comment by Lumifer · 2014-08-11T20:48:37.034Z · LW(p) · GW(p)
The accepted end goal IS to die as close to net zero as possible
Is there an exception for real estate? I'm thinking both "regular" houses (reverse mortgages are uncommon) and, in particular, things like summer houses and farmland which tend to stay in the family.
I agree that the desire to leave behind a large bank account is... not widespread, but land and houses look sticky to me.
Replies from: wadavis↑ comment by wadavis · 2014-08-11T23:00:40.734Z · LW(p) · GW(p)
Farmland is far closer to a business asset and ends up treated the same as any other economic asset. Of course, in farming there is a higher ratio of dynasty-minded families (a function of this province's immigration history and strong East European cultural backgrounds).
I see what you mean about personal homes and personal land. There may be a mental division between economic assets, which shall not be given, only sold, and personal assets, which are gifted away. This is a gap in my knowledge; it appears I need to spend more time with close-to-retirement, independently wealthy individuals.
↑ comment by buybuydandavis · 2014-08-11T21:20:05.324Z · LW(p) · GW(p)
What I'd like to know is how the Brits are doing it.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2014-08-11T22:17:21.870Z · LW(p) · GW(p)
The part of my brain that generates sardonic responses says "Oxbridge and nepotism". At risk of generating explanations for patterns that don't really exist, class, education and assortative mating seem to make for wealthy dynasties.
↑ comment by Nornagest · 2014-08-11T17:51:22.552Z · LW(p) · GW(p)
I think there's a couple of fairly simple reasons contributing to Americans not having a culture of inheritance: first, that we live a long time by historical standards; and second, that we have a norm of children moving out after maturity. The first means that estates are generally released after children are well into their careers, and sometimes after they're themselves retired. The second means that all but the very wealthiest have to establish their own careers rather than living off the family dime.
This wouldn't directly affect actual inheritance, but it does take a lot of the urgency out of establishing a legacy. That lack of urgency might in turn contribute to reductions in real inheritance, given that you can sink a more or less arbitrary amount of money (by middle-class standards) into things like travel and expensive hobbies.
↑ comment by Illano · 2014-08-11T15:52:07.510Z · LW(p) · GW(p)
In American society in particular, I would assume a large reason that wealth is not passed from generation to generation currently is the enormous costs associated with end-of-life medical care. You've got to be in the top few percent of Americans to be able to have anything left after medical costs (or die early/unexpectedly which also tends to work against estate planning efforts.)
Replies from: shminux, buybuydandavis↑ comment by Shmi (shminux) · 2014-08-11T16:38:00.404Z · LW(p) · GW(p)
enormous costs associated with end-of-life medical care
This only became a thing in the last 50 years or so and would not have been a major expense a century ago. Even now the costs are about $50k to $100k per person, which is in line with what a healthy upper middle-class person spends every year. The wealthy spend a lot more than that, so the palliative care costs are unlikely to make a dent in their fortunes.
Replies from: Illano↑ comment by Illano · 2014-08-11T17:30:49.370Z · LW(p) · GW(p)
Good point about the medical costs being a relatively recent development. However, I still think they are a huge hurdle to overcome if wealth staying in a family is to become widespread. Using the number you supplied of $50k/year, the median American at retirement age could afford about 3 years of care. (Not an expert on this, just used numbers from a Google search link.) This only applies to the middle class, but essentially it means that you can't earn a little bit more than average and pass it on to your kids to build up dynastic wealth, since for the middle classes at least, at end-of-life you pretty much hit a reset button.
Replies from: Lumifer, shminux↑ comment by Lumifer · 2014-08-11T18:02:11.607Z · LW(p) · GW(p)
essentially it means that you can't earn a little bit more than average and pass it on to your kids to build up dynastic wealth
I don't think it ever works like this -- saving a bit and accumulating it generation after generation. The variability in your income/wealth/general social conditions is just too high. "Dynastic wealth" is usually formed by one generation striking it absurdly rich and the following generations being good stewards of it.
↑ comment by Shmi (shminux) · 2014-08-11T20:45:52.743Z · LW(p) · GW(p)
You seem to be grasping here. The OP talked about passing down old family fortunes, not problems building new ones. Whether EOL care expenses are a significant hurdle to the new wealth accumulation is an interesting but unrelated question. My suspicion is that if it is, then there ought to be an insurance one can buy to limit exposure.
↑ comment by buybuydandavis · 2014-08-11T21:19:43.724Z · LW(p) · GW(p)
I don't think those costs are relevant for families with fortunes.
↑ comment by Izeinwinter · 2014-08-12T14:48:23.703Z · LW(p) · GW(p)
It is widespread. In other words, your thesis is not supported by facts. There is just nothing to explain here, except how the illusion of self-made fortunes is perpetuated in the teeth of the facts. The US has appallingly low social mobility, and an ever-rising share of very rich Americans got that way by inheritance. This isn't obvious because flaunting the fact that you got born with a silver spoon for every day of the week in your mouth runs against the American mythos -- members of the British old money crowd are proud of the fact that they personally did nothing to create their wealth and flaunt it. A proper 'murican in the exact same situation is slightly embarrassed by it and might at least show up for board meetings in the family business so that they can maintain some pretense that they work for a living.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-12T15:03:59.875Z · LW(p) · GW(p)
The US has appallingly low social mobility
Citation needed.
This is a useful half-page overview of issues with attaching meaning to economic mobility data.
This argues that the mobility in the US has been stable or rising for decades.
This talks about comparing the US with other countries.
comment by Richard_Kennaway · 2014-08-13T11:52:43.860Z · LW(p) · GW(p)
This is not an attempt at an organised meetup, but the World Science Fiction Convention begins tomorrow in London. I'll be there. Anyone else from LessWrong?
I had intended to be at Nineworlds last weekend as well, but a clash came up with something else and I couldn't go. Was anyone else here there?
comment by Shmi (shminux) · 2014-08-12T17:28:39.581Z · LW(p) · GW(p)
If any LWer is attending the Quantum Foundations of a Classical Universe workshop at the IBM Watson Research Center, feel free to report!
Several relatively famous experts are discussing anthropics, the Born rule, MWI, Subjective Bayesianism, quantum computers and qualia.
Replies from: MrMind↑ comment by MrMind · 2014-08-13T13:05:53.237Z · LW(p) · GW(p)
Here is a list of papers about the talks, if you want to get an idea without attending.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-08-13T16:54:09.838Z · LW(p) · GW(p)
I've read most of those I care to, but there is always something about face-to-face discussions that is lost in print.
comment by Thomas · 2014-08-11T17:03:04.046Z · LW(p) · GW(p)
I am getting the red envelope sign on the right side here, as if I had a message. But then I see it's not for me. This has been happening for a few days now.
Replies from: Richard_Kennaway, Nornagest, drethelin↑ comment by Richard_Kennaway · 2014-08-11T20:43:04.533Z · LW(p) · GW(p)
Have you ever clicked on the grey envelope icon found at the bottom right of every post and comment? If you do, then immediate replies to it show up in your inbox also. Look at the parent of one of these mysterious messages and see if its envelope is green. If it is, you can click it again to turn it off.
Replies from: Thomas
comment by Username · 2014-08-11T13:33:45.282Z · LW(p) · GW(p)
My brain spontaneously generated an argument for why killing all humans might be the best way to satisfy my values. As far as I know it's original; at any rate, I don't recall seeing it before. I don't think it actually works, and I'm not going to post it on the public internet. I'm happy to just never speak of it again, but is there something else I should do?
Replies from: Richard_Kennaway, Username, polymathwannabe, NancyLebovitz, buybuydandavis, solipsist, lmm, zzrafz, None↑ comment by Richard_Kennaway · 2014-08-11T14:27:40.597Z · LW(p) · GW(p)
is there something else I should do?
Find out how your brain went wrong, with a view to not going so wrong again.
Replies from: zzrafz↑ comment by zzrafz · 2014-08-11T16:20:42.412Z · LW(p) · GW(p)
Playing devil's advocate here, the original poster is not that wrong. Ask any other living species on Earth and they will say their life would be better without humans around.
Replies from: Nectanebo, Lumifer, DanielLC↑ comment by Nectanebo · 2014-08-11T17:26:11.141Z · LW(p) · GW(p)
Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be. I'm thinking of veterinary clinics that often perform work on wild animals, pets that don't have to worry about predation, that kind of thing. Also, I think there are probably a lot of species that have done alright for themselves since humans showed up; animals like crows and the equivalents in their niche around the world seem to do quite well in urban environments.
As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility to decrease suffering including animal suffering in the world, especially as we grow in our ability to shape the world in the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims perhaps indefinitely, rather than ours perhaps temporarily.
Replies from: zzrafz↑ comment by DanielLC · 2014-08-11T21:45:09.224Z · LW(p) · GW(p)
Humans are leading to the extinction of many species. Given the sorts of things that happen to them in the wild, this may be an improvement.
This is too distant from the original argument to be an argument for it. I'm just playing devil's advocate recursively.
↑ comment by Username · 2014-08-12T00:24:58.351Z · LW(p) · GW(p)
It seems I was unclear. I have no intention of attempting to kill all humans. I'm not posting the argument publicly because I don't want to run the (admittedly small) risk that someone else will read it and take it seriously. I'm just wondering if there's anything I can do with this argument that will make the world a slightly better place, instead of just not sharing it (which is mildly negative to me and neutral to everyone else - unless I've sparked anyone's curiosity, for which I apologise).
↑ comment by polymathwannabe · 2014-08-11T14:43:03.276Z · LW(p) · GW(p)
What values could possibly lead to such a choice?
Replies from: satt, Gunnar_Zarncke↑ comment by satt · 2014-08-12T00:00:35.093Z · LW(p) · GW(p)
Hardcore negative utilitarianism?
In The Open Society and its Enemies (1945), Karl Popper argued that the principle "maximize pleasure" should be replaced by "minimize pain". He thought "it is not only impossible but very dangerous to attempt to maximize the pleasure or the happiness of the people, since such an attempt must lead to totalitarianism."[67] [...]
The actual term negative utilitarianism was introduced by R.N.Smart as the title to his 1958 reply to Popper[69] in which he argued that the principle would entail seeking the quickest and least painful method of killing the entirety of humanity.
Suppose that a ruler controls a weapon capable of instantly and painlessly destroying the human race. Now it is empirically certain that there would be some suffering before all those alive on any proposed destruction day were to die in the natural course of events. Consequently the use of the weapon is bound to diminish suffering, and would be the ruler's duty on NU grounds.[70]
(Pretty cute wind-up on Smart's part; grab Popper's argument that to avoid totalitarianism we should minimize pain, not maximize happiness, then turn it around on Popper by counterarguing that his argument obliges the obliteration of humanity whenever feasible!)
↑ comment by Gunnar_Zarncke · 2014-08-11T16:47:16.106Z · LW(p) · GW(p)
Values that value animals as highly, or nearly as highly, as humans.
Replies from: Baughn, polymathwannabe↑ comment by Baughn · 2014-08-11T17:58:59.766Z · LW(p) · GW(p)
Not if you account for the typical suffering in nature. Humans remain the animals' best hope of ever escaping that.
Replies from: NancyLebovitz, DanielLC↑ comment by NancyLebovitz · 2014-08-11T18:46:02.154Z · LW(p) · GW(p)
It might not just be about suffering-- there's also the plausible claim that humans lead to less variety in other species.
Replies from: DanielLC, Baughn↑ comment by DanielLC · 2014-08-12T04:24:42.560Z · LW(p) · GW(p)
I feel like that's a value that only works because of scope insensitivity. If the extinction of a species is as bad as killing x individuals, then when the size of the population is not near x, one of those things will dominate. But people still think about it as if they're both significant.
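To make that concrete with a toy calculation (the conversion rate X below is invented purely for the example): whichever of the two terms is bigger swamps the other, except when the population happens to be near X.

```python
# Toy numbers only: value a species' extinction as bad as killing X individuals,
# then compare that term with the individual deaths for different population sizes.
X = 1_000_000  # assumed "extinction = X individual deaths" conversion

for population in (1_000, 1_000_000, 1_000_000_000):
    total = population + X  # everyone dies, plus the one-off extinction penalty
    print(population, round(X / total, 4), round(population / total, 4))
# 1e3: the extinction term is ~99.9% of the harm; 1e9: ~0.1%.
# Only when the population is near X do both terms matter comparably.
```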
↑ comment by Baughn · 2014-08-11T19:06:32.958Z · LW(p) · GW(p)
Why does that, um, matter?
I can see valuing animal experience, but that's all about individual animals. Species don't have moral value, and nature as a whole certainly doesn't.
Replies from: James_Miller, NancyLebovitz↑ comment by James_Miller · 2014-08-11T21:10:36.149Z · LW(p) · GW(p)
Would you say the same about groups of humans? Is genocide worse than killing an equal number of humans but not exterminating any one group?
Replies from: fubarobfusco, Azathoth123↑ comment by fubarobfusco · 2014-08-12T03:47:59.443Z · LW(p) · GW(p)
I suspect that the reason we have stronger prohibitions against genocide than against random mass murder of equivalent size is not that genocide is worse, but that it is more common.
It's easier to form, motivate, and communicate the idea "Kill all the Foos!" (where there are, say, a million identifiable Foos in the country) than it is to form and communicate "Kill a million arbitrary people."
Replies from: Azathoth123, NancyLebovitz↑ comment by Azathoth123 · 2014-08-13T04:19:05.820Z · LW(p) · GW(p)
I suspect that the reason we have stronger prohibitions against genocide than against random mass murder of equivalent size is not that genocide is worse, but that it is more common.
I suspect that's not actually true. The communist governments killed a lot of people in a (mostly) non-genocidal manner.
The reason we have stronger prohibitions against genocide is the same reason we have stronger prohibitions against the swastika than against the hammer and sickle. Namely, the Nazis were defeated and no longer able to defend their actions in debates while the communists had a lot of time to produce propaganda.
Replies from: Vulture↑ comment by Vulture · 2014-08-13T20:45:10.872Z · LW(p) · GW(p)
Wait, what? Did considering genocide more heinous than regular mass murder only start with the end of WWII?
Replies from: NancyLebovitz, Viliam_Bur↑ comment by NancyLebovitz · 2014-08-14T00:08:11.412Z · LW(p) · GW(p)
For what it's worth, the word genocide may have been invented to describe what the Nazis did -- anyone have OED access to check for earlier cites?
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-08-14T04:04:07.744Z · LW(p) · GW(p)
It existed before, but its use really picked up after WWII.
↑ comment by Viliam_Bur · 2014-08-15T08:49:55.736Z · LW(p) · GW(p)
Unfortunately, genocides happen all the time.
But only one of them got big media attention. Which made it the canonical evil.
Cynically speaking: if you want the world to not pay attention to a genocide, (a) don't do it in a first-world country, and (b) don't do it during a war with another side which can make condemning the genocide part of their propaganda, especially if at the end you lose the war.
↑ comment by NancyLebovitz · 2014-08-12T15:27:51.947Z · LW(p) · GW(p)
Alternatively, killing a million people at semi-random (through poverty or war) is less conspicuous than going after a defined group.
↑ comment by Azathoth123 · 2014-08-13T04:14:18.555Z · LW(p) · GW(p)
Is genocide worse than killing an equal number of humans but not exterminating any one group?
I don't see why it should be.
Replies from: Lumifer↑ comment by Lumifer · 2014-08-13T14:38:15.750Z · LW(p) · GW(p)
Do particular cultures or, say, languages, have any value to you?
Replies from: Vulture, Azathoth123↑ comment by Azathoth123 · 2014-08-13T23:03:45.575Z · LW(p) · GW(p)
Do particular computer systems or, say, programming languages, have any value to you?
Compare your attitude to these two questions, what accounts for the difference?
Replies from: Lumifer↑ comment by Lumifer · 2014-08-14T00:47:06.938Z · LW(p) · GW(p)
The fact that I am human.
And..?
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-08-14T04:08:38.952Z · LW(p) · GW(p)
And what? You're a human not a meme, so why are you assigning rights to memes? And why some memes and not others?
Replies from: Lumifer↑ comment by Lumifer · 2014-08-14T04:16:49.281Z · LW(p) · GW(p)
I am not assigning any rights to memes. I am saying that, as a human, I value some memes. I also value the diversity of the meme ecosystem and the potential for me to go and get acquainted with new memes which will be fresh and potentially interesting to me.
Why some memes and not others -- well, that flows out of my value system and personal idiosyncrasies. Some things I find interesting and some I don't -- but how is that relevant?
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-08-15T03:08:11.106Z · LW(p) · GW(p)
Why some memes and not others -- well, that flows out of my value system and personal idiosyncrasies.
So why should anyone else care about your personal favorite set of favored memes?
↑ comment by NancyLebovitz · 2014-08-11T19:52:34.810Z · LW(p) · GW(p)
A fair number of people believe that it's a moral issue if people wipe out a species, though I'm not sure if I can formalize an argument for that point of view. Anyone have some thoughts on the subject?
↑ comment by polymathwannabe · 2014-08-11T18:38:18.049Z · LW(p) · GW(p)
Let's suppose for a moment that's what Username meant. If Username deems other beings to be more valuable than humans, then Username, as a human, will have a hard time convincing hirself to pursue hir own values. So I guess we're safe.
Replies from: Username↑ comment by Username · 2014-08-12T00:55:00.664Z · LW(p) · GW(p)
I'm not going to say what the values are, beyond that I don't think they would be surprising for a LWer to hold. Also, yes, you're safe.
But it seems like you started with disbelief in X, and you were given an example of X, and your reaction should be to now assume that there are more examples of X; and it looks like instead, you're attempting to reason about class X based on features of a particular instance of it.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-08-12T03:06:25.446Z · LW(p) · GW(p)
I thought it was clear that "Username deems other beings to be more valuable than humans" was a particular instance of X, not a description of the entire class.
↑ comment by NancyLebovitz · 2014-08-11T22:14:01.861Z · LW(p) · GW(p)
I'd say not to worry about it unless it's a repetitive thought.
↑ comment by buybuydandavis · 2014-08-11T21:00:42.355Z · LW(p) · GW(p)
You should consider that the problem may not be in the argument, but in your beliefs about the values you think you have.
Replies from: Username↑ comment by solipsist · 2014-08-11T21:11:28.622Z · LW(p) · GW(p)
Why are you asking this question?
If you have larger worries about your mental health or are worried that you might do something Very Bad, you should consider seeking mental assistance. I don't know the best course there (actually, that would be a great page for someone to write up) but I'm sure there are several people here who could point you in a good direction.
If your name is Leó Szilárd and you wish to register an Omega-class Dangerous Idea™ with the Secret Society of Sinister Scheme Suppressors, I do not believe they exist. Anyone claiming to be a society representative is actually a 4chan troll who will post the idea on a 30 meter billboard in downtown Hong Kong just to mock you. An argument simple enough to be generated spontaneously in your brain is probably loose in the wild already and not very dangerous. To play it safe, stay quiet and think.
If you're asking because you've just thought of this neat thing and you want to share it with someone, but are worried you might look a bit bad, I'm sure plenty of people here would be happy to read your argument in a private message.
↑ comment by lmm · 2014-08-11T19:08:23.542Z · LW(p) · GW(p)
Do you care about it? It sounds like you're responding appropriately (though IMO it's better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation). If the generation of that argument, or what it implies about your brain, is causing trouble with your life then it's worth investigating, but if it's not bothering you then such investigation might not be worth the cost.
Replies from: Username↑ comment by Username · 2014-08-12T00:44:13.668Z · LW(p) · GW(p)
though IMO it's better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation
This is the sort of thing I'm thinking about. The argument seems more robust than the obvious-to-me counterargument, so I feel that it's better to just not set people thinking about it. I'm not sure though.
Replies from: solipsist↑ comment by solipsist · 2014-08-12T02:24:30.680Z · LW(p) · GW(p)
If the argument is simple enough for your brain to generate it spontaneously, someone else has probably thought of it before and not released a mind plague upon humanity. There could even be an established literature on the subject in philosophy journals. Have you done a search?
The argument may not have good keywords and be ungooglable. If that's the case, you could (a) discuss it with a friendly neighborhood professional philosopher or (b) pay a philosophy grad student $25 to bounce your idea off them.
I quickly brainstormed 6 (rather bad) reasons why killing everyone in the world would satisfy someone's values. How do these reasons compare in persuasiveness? If your reason isn't much better than these, I don't think you have much to worry about.
↑ comment by zzrafz · 2014-08-11T16:18:21.393Z · LW(p) · GW(p)
Since you won't be able to kill all humans and will eventually get caught and imprisoned, the best move is to abandon your plan, according to utilitarian logic.
Replies from: None↑ comment by [deleted] · 2014-08-12T06:59:53.305Z · LW(p) · GW(p)
I'm not so sure this is obvious. How much damage can one intelligent, rational, and extremely devoted person do? Certainly there are a few people in positions that obviously allow them to wipe out large swaths of humanity. Of course, getting to those positions isn't easy (yet still feasible given an early enough start!). But I've thought about this for maybe two minutes; how many nonobvious ways would there be for someone willing to put in decades?
The usual way to rule them out without actually putting in the decades is by taking the outside view and pointing at all the failures. But nobody even seems to have seriously tried. If they had, we'd have at least seen partial successes.
↑ comment by [deleted] · 2014-08-11T16:57:15.752Z · LW(p) · GW(p)
Reform yourself. Killing all humans is axiomatically evil in my playbook, so either (a) you are reasoning from principles which permit Mark!evil (which makes you Mark!evil, and on my watch-list), or (b) you made a mistake. It's probably the latter.
comment by bramflakes · 2014-08-18T13:01:43.831Z · LW(p) · GW(p)
The ‘six universal’ facial expressions are not universal, cross-cultural study shows
Replies from: DanielLC↑ comment by DanielLC · 2014-08-19T01:23:33.172Z · LW(p) · GW(p)
It doesn't seem to be clear whether that's just people of different cultures grouping faces differently, like how they might group colors differently even though their eyes work the same, or if their face/emotion correspondence is different.
comment by [deleted] · 2014-08-12T14:55:10.416Z · LW(p) · GW(p)
Cryonics question:
For those of you using life insurance to pay your cryonics costs, what sort of policy do you use?
Replies from: James_Miller, Joshua_Blaine↑ comment by James_Miller · 2014-08-13T03:40:02.621Z · LW(p) · GW(p)
Whole life via Rudi Hoffman for Alcor.
↑ comment by Joshua_Blaine · 2014-08-12T18:10:27.731Z · LW(p) · GW(p)
I've not personally finished my own arrangements, but I'll likely be using whole life of some kind. I do know that Rudi Hoffman is an agent well recommended by people who've gone the insurance route, so talking to him will likely get you a much better idea of what choices people make (a small warning: his site is not the prettiest thing). You could also contact the people recommended on Alcor's Insurance Agents page, if you so desire.
comment by David_Gerard · 2014-08-18T16:55:46.920Z · LW(p) · GW(p)
comment by mouseking · 2014-08-15T01:29:28.922Z · LW(p) · GW(p)
I've been noticing a theme of utilitarianism on this site -- can anyone explain this? More specifically: how did you guys rationalize a utilitarian philosophy over an existential, nihilistic, or hedonistic one?
Replies from: Dahlen, Richard_Kennaway, ChristianKl, Ef_Re↑ comment by Dahlen · 2014-08-15T18:31:26.489Z · LW(p) · GW(p)
To put it as simply as I can, LessWrongers like to quantify stuff. A more specific instance of this is that, since this website started off as the brainchild of an AI researcher, the prevalent intellectual trends are those with applicability in AI research. Computers work easily with quantifiable data. As such, if you want to instill human morality into an AI, chances are you'll at least consider conceptualizing morality in utilitarian terms.
↑ comment by Richard_Kennaway · 2014-08-15T13:00:06.292Z · LW(p) · GW(p)
The confluence of a number of ideas.
Cox's theorem shows that degree of belief can be expressed as probabilities.
The VNM theorem shows that preferences (satisfying certain axioms) can be expressed as numbers, usually called utilities, unique up to a positive affine transformation.
Consequentialism, the idea that actions are to be judged by their consequences, is pretty much taken as axiomatic.
Combining these gives the conclusion that the rational action to take in any situation is the one that maximises the resulting expected utility.
Your morality is your utility function: your beliefs about how people should live are preferences about how they should live.
Add the idea of actually being convinced by arguments (except arguments of the form "this conclusion is absurd, therefore there is likely to be something wrong with the argument", which are merely the absurdity heuristic) and you get LessWrong utilitarianism.
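To make the "maximise expected utility" step concrete, here is a minimal sketch in Python; the actions, probabilities, and utilities are invented purely for illustration:

```python
# Minimal sketch of choosing the action with the highest expected utility.
# All numbers below are made up for illustration only.
actions = {
    "take umbrella":  [(0.3, 5), (0.7, 4)],   # list of (probability, utility) outcomes
    "leave umbrella": [(0.3, 0), (0.7, 6)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> take umbrella 4.3 (up to float rounding)
```

The decision-theoretic machinery is just this, scaled up; the utilitarian move is the further claim that the utilities being summed should be everyone's, not just your own.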
Replies from: blacktrance↑ comment by blacktrance · 2014-08-15T23:10:50.385Z · LW(p) · GW(p)
Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility. Rationality, in the economic or decision-theoretic sense, is not synonymous with utilitarianism.
Replies from: Richard_Kennaway, Vulture↑ comment by Richard_Kennaway · 2014-08-16T08:03:04.429Z · LW(p) · GW(p)
That is a good point, but I think one under-appreciated on LessWrong. It seems to go "rationality, therefore OMG dead babies!!" There is discussion about how to define "the world's expected utility", but it has never reached a conclusion.
Replies from: blacktrance↑ comment by blacktrance · 2014-08-16T08:54:53.415Z · LW(p) · GW(p)
In addition to the problem of defining "the world's expected utility", there is also the separate question of whether it (whatever it is) should be maximized.
↑ comment by Vulture · 2014-08-17T17:13:27.982Z · LW(p) · GW(p)
Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility.
I think this is probably literally correct, but misleading. "Maximizing X's utility" is generally taken to mean "maximize your own utility function over X". So in that sense you are quite correct. But if by "maximizing the world's utility" you mean something more like "maximizing the aggregate utility of everyone in the world", then what you say is only true of those who adhere to some kind of preference utilitarianism. Other utilitarians would not necessarily agree.
Replies from: blacktrance↑ comment by blacktrance · 2014-08-17T20:52:21.411Z · LW(p) · GW(p)
Hedonic utilitarians would also say that they want to maximize the aggregate utility of everyone in the world, they would just have a different conception of what that entails. Utilitarianism necessarily means maximizing aggregate utility of everyone in the world, though different utilitarians can disagree about what that means - but they'd agree that maximizing one's own utility is contrary to utilitarianism.
Replies from: Vulture↑ comment by Vulture · 2014-08-18T00:34:58.349Z · LW(p) · GW(p)
Anyone who believes that "maximizing one's own utility is contrary to utilitarianism" is fundamentally confused as to the standard meaning of at least one of those terms. Not knowing which one, however, I'm not sure what I can say to make the matter more clear.
Replies from: blacktrance↑ comment by blacktrance · 2014-08-18T01:09:18.102Z · LW(p) · GW(p)
Maximizing one's own utility is practical rationality. Maximizing the world's aggregate utility is utilitarianism. The two need not be the same, and in fact can conflict. For example, you may prefer to buy a cone of ice cream, but world utility would be better served if you donated that money to charity instead. Buying the ice cream would be the rational own-utility-maximizing thing to do, and donating to charity would be the utilitarian thing to do.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-08-18T06:30:41.095Z · LW(p) · GW(p)
However, if utilitarianism is your ethics, the world's utility is your utility, and the distinction collapses. A utilitarian will never prefer to buy that ice cream.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-08-18T06:39:34.781Z · LW(p) · GW(p)
It's the old System 1 (want ice cream!) vs. System 2 (want world peace!) friction again.
↑ comment by ChristianKl · 2014-08-15T11:40:36.134Z · LW(p) · GW(p)
In general, this site focuses on the friendly AI problem; a nihilistic or a hedonistic AI might not be friendly to humans. The notion of an existentialist AI seems to be largely unexplored, as far as I know.
↑ comment by Ef_Re · 2014-08-15T01:58:48.223Z · LW(p) · GW(p)
To the extent that lesswrong has an official ethical system, that system is definitely not utilitarianism.
Replies from: James_Miller, Vulture, 2ZctE↑ comment by James_Miller · 2014-08-15T02:36:58.747Z · LW(p) · GW(p)
I don't agree. LW takes a microeconomics viewpoint of decision theory and this implicitly involves maximizing some weighted average of everyone's utility function.
↑ comment by 2ZctE · 2014-08-15T17:06:41.228Z · LW(p) · GW(p)
To the extent that lesswrong has an official ethical system, that system is utilitarianism with "the fulfillment of complex human values" as a suggested maximand rather than hedons.
Replies from: Ef_Re↑ comment by Ef_Re · 2014-08-16T18:35:30.580Z · LW(p) · GW(p)
That would normally be referred to as consequentialism, not utilitarianism.
Replies from: 2ZctE↑ comment by 2ZctE · 2014-08-18T03:08:25.901Z · LW(p) · GW(p)
Huh, I'm not sure, actually. I had been thinking of consequentialism as the general class of ethical theories based on caring about the state of the world, and of utilitarianism as what you get when you try to maximize some definition of utility (which could be human value-fulfillment if you tried to reason about it quantitatively). If my usages are unusual, I more or less inherited them from the Consequentialism FAQ, I think.
Replies from: Ef_Re↑ comment by Ef_Re · 2014-08-22T23:46:07.504Z · LW(p) · GW(p)
If you mean Yvain's, while his stuff is in general excellent, I recommend learning about philosophical nomenclature from actual philosophers, not medics.
comment by Error · 2014-08-12T22:30:04.615Z · LW(p) · GW(p)
I posted this in the last open thread but I think it got buried:
I was at Otakon 2014, and there was a panel about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.
Replies from: David_Gerard↑ comment by David_Gerard · 2014-08-13T11:22:41.639Z · LW(p) · GW(p)
The description: "Philosophy in Video Games [F]: A discussion of philosophical themes present in many different video games. Topics will include epistemology, utilitarianism, philosophy of science, ethics, logic, and metaphysics. All topics will be explained upon introduction and no prior knowledge is necessary to participate!"
Did they record all panels?
Replies from: Error
comment by [deleted] · 2014-08-17T23:30:30.925Z · LW(p) · GW(p)
David Collingridge wouldn't have liked Nick Bostrom's "differential technological development" idea.
comment by pianoforte611 · 2014-08-13T23:00:45.560Z · LW(p) · GW(p)
Is it easier for you to tell men or women apart?
Obvious hypothesis: whichever gender you are attracted to, you will find them easier to tell apart.
Replies from: kalium, ChristianKl, arundelo, bramflakes↑ comment by kalium · 2014-08-14T03:03:32.572Z · LW(p) · GW(p)
It's easier for me to tell women apart because their hairstyles have more interpersonal variation. (I distinguish people mainly by hair. It takes a few months before I learn to recognize a face.) I'm pretty much just attracted to men though.
↑ comment by ChristianKl · 2014-08-14T11:23:59.172Z · LW(p) · GW(p)
I don't really know. I'm attracted to women, and if I look back, most cases of confusing one person for another are cases where I danced Salsa with a woman for 10 minutes and then, months later, I see the same woman again.
I also use gait patterns for recognition, and I sometimes have a hard time deciding whether a photo is of a person that I have seen in person if I haven't interacted that much with them.
As far as attraction goes it's also worth noting that I sometimes do feel emotions that come from having interacted with a person beforehand but it takes me some time to puzzle together where I did meet the person before. The emotional part gets handled by different parts of the brain.
Replies from: wadavis↑ comment by bramflakes · 2014-08-14T00:06:03.216Z · LW(p) · GW(p)
What do you mean "tell apart"?
Replies from: pianoforte611↑ comment by pianoforte611 · 2014-08-14T03:15:27.358Z · LW(p) · GW(p)
I mean how likely are you to mistake one for the other?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-08-14T03:22:06.453Z · LW(p) · GW(p)
I am bisexual, leaning toward liking men more, and sometimes women seem to me to look all the same.
However, if I'm introduced to two obviously distinct people, and their names have the same initial, it'll be months before I get who's who right.
comment by polymathwannabe · 2014-08-12T20:48:36.340Z · LW(p) · GW(p)
In a world without leap years, how many people should a company have to be reasonably certain that every day will be someone's birthday?
Replies from: xnn↑ comment by xnn · 2014-08-12T21:23:34.414Z · LW(p) · GW(p)
See Coupon collector's problem, particularly "tail estimates".
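For concreteness, a small Python sketch of one standard way to estimate this: the union-bound tail estimate from the coupon collector's problem, plus a Monte Carlo check. The 95% threshold for "reasonably certain" is my own assumption, not part of the original question.

```python
import random

DAYS = 365  # no leap years, per the question

def people_needed(confidence=0.95):
    """Smallest n such that the union bound DAYS * (1 - 1/DAYS)**n
    guarantees every day is covered with the given confidence."""
    n = 0
    while DAYS * (1 - 1 / DAYS) ** n > 1 - confidence:
        n += 1
    return n

def coverage_probability(n_people, trials=2000):
    """Monte Carlo estimate of P(every day is someone's birthday)."""
    hits = 0
    for _ in range(trials):
        seen = {random.randrange(DAYS) for _ in range(n_people)}
        hits += len(seen) == DAYS
    return hits / trials

print(people_needed(0.95))         # roughly 3,250 people
print(coverage_probability(3250))  # should come out at or above ~0.95
```

The union bound is slightly conservative, so the simulated coverage probability will typically land a bit above the requested confidence level.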
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-08-12T21:34:05.244Z · LW(p) · GW(p)
Thank you.
comment by [deleted] · 2014-08-12T12:59:09.487Z · LW(p) · GW(p)
If I post a response to someone, and someone replies to me, and they get a single silent downvote before I read their response, I find myself reflexively upvoting them just so they won't think I was the one who gave the single silent downvote. It seems plausible to me that if a reply has a single downvote and no responses, the most likely explanation is that the person you replied to downvoted you, and I don't want people to think that.
Except then I seem to have gotten my opinion of the post hopelessly biased before even reading it, because I'd feel bad if I revoked the upvote, let alone actually downvoted them, and I feel like I can't get back to the status quo of their post simply having zero or positive points.
It also doesn't seem like it would have the same effect if someone replied to me and was heavily downvoted, but I don't actually recall that happening.
If I try to assess this more rationally, I get the suggestion "You're worrying far too much about what other people MIGHT be thinking, based on flimsy evidence."
Thoughts?
Replies from: Lumifer, polymathwannabe, Xachariah, lmm, Richard_Kennaway, ChristianKl↑ comment by polymathwannabe · 2014-08-12T14:30:59.667Z · LW(p) · GW(p)
the most likely explanation is that the person you replied to downvoted you
It's easy for users to abandon that supposition by themselves after they have spent enough time at LW.
↑ comment by lmm · 2014-08-14T09:42:45.285Z · LW(p) · GW(p)
I upvote people who reply to me on unpopular threads disproportionately often, because I want to encourage that.
I upvote people who I think have an unfairly low score.
Given this, behaviour much like yours follows. I think that's fine.
I'd recommend always reading before voting though.
↑ comment by Richard_Kennaway · 2014-08-13T07:11:35.147Z · LW(p) · GW(p)
If I try to assess this more rationally, I get the suggestion "You're worrying far too much about what other people MIGHT be thinking, based on flimsy evidence."
Thoughts?
I think the thought you thought of there is right.
↑ comment by ChristianKl · 2014-08-12T16:47:38.377Z · LW(p) · GW(p)
If you want to make it clear that you didn't downvote, just start your post with "(I didn't downvote the above post)".
comment by NancyLebovitz · 2014-08-18T08:26:42.262Z · LW(p) · GW(p)
30-day experiment with homemade soylent -- mostly positive outcome.