Open thread, Aug. 03 - Aug. 09, 2015

post by MrMind · 2015-08-03T07:05:57.365Z · LW · GW · Legacy · 178 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

178 comments

Comments sorted by top scores.

comment by gwern · 2015-08-04T18:02:48.049Z · LW(p) · GW(p)

Emotiv EPOC give-away:

So back in March 2013 or so, another LWer gave me a "Special Limited Edition Emotiv EPOC Neuroheadset"/"Research Edition SDK". The idea was that I could maybe use it for QS purposes like meditation or quantifying the mental effects of nootropics. EEG headsets turn out to be a complicated area with a lot of unfamiliar statistics & terminology, and so I never quite got around to making any use of it, so it's been sitting on my desk gathering dust ever since.

I'm not doing as much QS stuff these days and it's been over two years without a single use, so it's time to admit that it's unlikely I'm going to use it any time soon.

I might as well ship it to another American LWer who might get some use out of it. If you're interested, email me.

EDIT: it's taken

Replies from: Elo
comment by Elo · 2015-08-05T13:08:18.783Z · LW(p) · GW(p)

Upvoted for good cultural standards.

comment by cousin_it · 2015-08-03T07:49:32.380Z · LW(p) · GW(p)

Here are the slides from my talk on logical counterfactuals at the Cambridge/MIRI workshop in May 2015. I'm planning to give a similar talk tomorrow at the Google Tel Aviv office (meetup link). None of the material is really new, but I hope it shows that basic LWish decision theory can be presented in a mathematically rigorous way.

Replies from: ronny-fernandez
comment by Ronny Fernandez (ronny-fernandez) · 2015-08-03T09:10:30.053Z · LW(p) · GW(p)

This is super interesting. Is this based on UDT?

Replies from: cousin_it
comment by cousin_it · 2015-08-03T09:39:06.270Z · LW(p) · GW(p)

Yeah, it's UDT in a logic setting. I've posted about a similar idea on the MIRI research forum here.

comment by Lumifer · 2015-08-07T17:11:33.066Z · LW(p) · GW(p)

Speed matters: Why working quickly is more important than it seems

An interesting blog post which points out additional benefits of doing things quickly. A sampler:

The obvious benefit to working quickly is that you’ll finish more stuff per unit time. But there’s more to it than that. If you work quickly, the cost of doing something new will seem lower in your mind. So you’ll be inclined to do more.

The converse is true, too. If every time you write a blog post it takes you six months, and you’re sitting around your apartment on a Sunday afternoon thinking of stuff to do, you’re probably not going to think of starting a blog post, because it’ll feel too expensive.

What’s worse, because you blog slowly, you’re liable to continue blogging slowly—simply because the only way to learn to do something fast is by doing it lots of times.

This is true of any to-do list that gets worked off too slowly. A malaise creeps into it. You keep adding items that you never cross off. If that happens enough, you might one day stop putting stuff onto the list.

Replies from: Viliam
comment by Viliam · 2015-08-07T20:12:03.939Z · LW(p) · GW(p)

Also, in terms of reinforcement learning, if you work quickly, you will get more rewards per unit of time, and the rewards will be closer in time to the start of the work (a shorter time delay means better reinforcement).

comment by Username · 2015-08-03T14:36:24.300Z · LW(p) · GW(p)

The problem of “me” studies by Joseph Heath

So what I would like to discuss today is just one strand or tendency, that often gets described as political correctness, but that is more precisely known as the problem of “me” studies.

Although described in political terms, biases caused by "me" studies also affect other fields such as philosophy.

Replies from: Vaniver
comment by Vaniver · 2015-08-03T17:31:13.489Z · LW(p) · GW(p)

Very good article. "Me" studies refers to, basically, studying yourself--which gets into topics of identity politics. (Instead of just studying your life, you might, say, study the history of your racial group in your country.) But the core of it is a simple model of how discussions get radicalized when people are studying oppression. The post-script is also fascinating reading, because someone objects to a minor comment in the first post in a way that highlights the underlying dynamics (see the first comment, possibly by the author of the email).

The next post on the same topic, On the problem of normative sociology, is also well worth reading.

(On an unrelated, but still rationalist, topic, see his post on the implications of psychology on consumerism / climate change.)

Replies from: Viliam
comment by Viliam · 2015-08-04T08:46:27.785Z · LW(p) · GW(p)

see his post on the implications of psychology on consumerism / climate change

Yes, this is a frequent failure of rationality, one that is sometimes difficult to explain to people outside of LW.

Essentially, the world is a system of gears. To understand some activity that happens in the world, look at the gears, what they do, and how they interact. Don't search for a mysterious spirit responsible for the activity, if the activity can be fully explained by the gears.

This is a simple application of naturalism to economics (and therefore to politics, because often politics = economics + value judgements). Yeah, but many people fail hard at naturalism, even those who call themselves atheists.

Unfortunately, seeing the world as a system of gears is often considered a "right-wing" position; and the "left-wing" position is calling out the various evil spirits. (I am not saying that this is inherently a left-wing approach; possibly just a recent fashion.) As if people fail to coordinate to solve hard problems merely because evil corporate wizards make them fail using magical brainwashing powers, instead of simply everyone optimizing locally for themselves.

"Me" studies refers to, basically, studying yourself--which gets into topics of identity politics.

Yep; when the topic of your study is yourself, then suddenly every criticism of your "science" becomes invalidation of your experience, which is a bad thing to do. Just take your diary and put "doctoral thesis" on it and you are done. The only condition is that you can't disagree with the political opinion of your advisor, of course; such a diary would not be accepted as "scientific".

And in the spirit of "the world is connected; if you lie once, the truth becomes forever your enemy" we learn that science itself is merely an oppressive tool of the evil spirit of white patriarchy.

Yeah, I know, local taboo on politics, et cetera. But it is so painful to watch how some anti-epistemologies become standard parts of mainstream political opinions, and then when you comment on some elementary rationality topic, you are already walking on the political territory. It is no longer only priests and homeopaths who get offended by definitions of science and evidence. :(

EDIT: To express the core of my frustration, I would say that the people who complain loudest about humanity not solving various prisoners' dilemmas are among those who defect in the prisoners' dilemmas of rationality and civilization. Yes, the problem you complain about is real, but if you want to solve it, start by taking a fucking look in the mirror, because that's where it is.

Replies from: HungryHippo, None, Houshalter
comment by HungryHippo · 2015-08-05T19:39:44.110Z · LW(p) · GW(p)

Essentially, the world is a system of gears. To understand some activity that happens in the world, look at the gears, what they do, and how they interact. Don't search for a mysterious spirit responsible for the activity, if the activity can be fully explained by the gears.

You put your finger on something I've been attempting to articulate. There's a similar idea I've seen here on LessWrong. That idea said, approximately, that it's difficult to define what counts as a religion, because not all religions fulfill the same criteria. But a tool that seems to do the job you want to do is to separate people (and ideas) based on the question "is mind made up of parts or is it ontologically fundamental?". This seems to separate the woo from the non-woo.

My mutation of this idea is that there are fundamentally two ways of explaining things. One is the "animistic" or "intentional stance (Cf. Daniel Dennett)" view of the world, the other is the "clockwork" view of the world.

In the animistic view, you explain events by mental (fundamentally living) phenomena. Your explanations point towards some intention.

God holds his guiding hand over this world and saved the baby from the plane crash because he was innocent, and God smote America because of her homosexuals. I won the lottery because I was good. Thunderclaps are caused by the Lightningbird flapping his wings, and lightning-flashes arise when he directs his gaze towards the earth. Or perhaps Thor is angry again, and is riding across the sky. Maybe if we sacrifice something precious to us, a human life, we might appease the gods and collect fair weather and good fortune.

Cause and effect are connected by mind and intention. There can be no unintended consequences, because all consequences are intended, at least by someone. Whatever happens was meant (read: intended) to happen. If you believe that God is good, this gives comfort even when you are under extreme distress. God took your child away from you because he wanted her by his side in heaven, and he is testing you only because he loves you. If you believe in no God, then bad things happen only because some bad person with bad intentions intended them to happen. If only we could replace them with good people with good intentions, the ills of society would be relieved.

In the clockwork view of the world, every explanation explains away any intention. The world is a set of forever-falling dominoes.

Everything that happens can be explained by some rule that neither loves you nor hates you but simply is. Even love is spoken of in terms of neural correlates, rising and falling levels of hormones. Sensory experiences, like the smell of perfume or excitations of the retina, explain love the same way aerodynamics explain the flying of an airplane. We might repackage the dominoes and name them whatever we like, still everything is made out of dominoes obeying simple rules. But even if the rules are simple, the numerous interacting pieces make the game complicated. Unintended consequences are the norm, and even if your intentions are good, you must first be very cautious that the consequences do not turn out bad.

The animist is more likely to parse "China has bad relations with Japan" in the same way as they parse the sentence "Peter dislikes Paul", while the clockworker is likely to interpret it as "The government apparatuses of both countries are attempting to expand control over overlapping scarce resources."

The animist is more likely to support the notion that "The rule of law, in complex times, Has proved itself deficient. We much prefer the rule of men! It's vastly more efficient.", while the clockworkers are more likely to bind themselves by the law and to insist that a process should be put in place so that even bad actors are incentivized to do good. The animist believes that if only we could get together and overcome our misunderstandings, we would realize that, by nature, we are friends. The clockworker believes that despite being born with, by nature, opposing interests, we might both share the earth and be friendly towards each other.

The animist searches for higher meaning, the clockworker searches for lower meaning.

Replies from: Lumifer
comment by Lumifer · 2015-08-05T20:12:45.274Z · LW(p) · GW(p)

I don't know if it's a good separation as stated. Let me illustrate with a 2x2 table.

Earthquake in California: God punished sin (animist) -- The tectonic plate moved (clockwork)

Alice went for a coffee: Alice wants coffee (animist) -- A complicated neuro-chemical mix reacting to some set of stimuli made Alice go get coffee (clockwork)

The problem is that I want the clockwork description for the earthquake, but I want the animist description for Alice. The clockwork description for Alice sounds entirely unworkable.

The animist believes... that, by nature, we are friends.

The way you set it up, the animist believes that there is no such thing as "by nature" and that God's will decides all, including who will be friends and who will not.

The clockworker believes that ... we might both share the earth and be friendly towards each other.

Don't see that. The clockworker believes we will do whatever the gears will push us to do. Clockworkers are determinists, basically.

Replies from: Viliam, HungryHippo
comment by Viliam · 2015-08-06T07:44:06.920Z · LW(p) · GW(p)

I want the clockwork description for the earthquake, but I want the animist description for Alice

We should use explanations of type "the entity is a human, they think and act like a human" for humans, and for nothing else. (Although in some situations it may be useful to also think about a human as a system.)

The most frequent error in my opinion is modelling a group of humans as a single human. Maybe a useful aid for intuition would be to notice when you are using a grammatical singular for a group of people, and replace it with a plural. E.g. "government" -> "politicians in the government"; "society" -> "individuals in the society"; "educational system" -> "teachers and students", etc.

Replies from: Lumifer
comment by Lumifer · 2015-08-06T14:59:49.969Z · LW(p) · GW(p)

The most frequent error in my opinion is modelling a group of humans as a single human.

I think it's a bit more complicated. I see nothing wrong with modeling a group of humans as a single entity which has, say, particular interests, traditions, incentives, etc. There are big differences between "government" and "politicians in the government" -- an obvious one would be that politicians come and go, but the government (including a very large and very influential class of civil servants) remains.

I am not saying that we should anthropomorphise entities, but treating them just as a group of humans doesn't look right either.

Replies from: Viliam, Dagon
comment by Viliam · 2015-08-06T19:00:12.482Z · LW(p) · GW(p)

I see nothing wrong with modeling a group of humans as a single entity which has, say, particular interests, traditions, incentives, etc.

Such a model ignores e.g. minorities which don't share the interests of the majority, or the internal fighting between people who have the same interests but compete with each other for scarce resources (such as status within the group).

As a result, the group of humans modelled this way will seem like a conspiracy, and -- depending on whether you choose to model all failures of coordination as "this is what the entity really wants" or "this is what the entity doesn't want, but does it anyway" -- either evil or crazy.

Replies from: Lumifer
comment by Lumifer · 2015-08-06T20:26:30.464Z · LW(p) · GW(p)

Well, let's step back a little bit.

How good a model is cannot be determined without specifying the purpose of the model. In particular, there is no universally correct granularity -- some models track a lot of little details and effects, while others do not and aggregate all of them into a few measures or indicators. Both types can be useful depending on the purpose. In particular, a more granular model is not necessarily a better model.

This general principle applies here as well. Sometimes you do want to model a group of humans as a group of distinct humans, and sometimes you want to model a group of humans as a single entity.

comment by Dagon · 2015-08-06T19:54:10.748Z · LW(p) · GW(p)

It's a bit more complicated, but still basically true: a group is not very well modeled as an individual. Heck, I'm not sure individual humans have sufficient consistency over time to be well-modeled as an individual. I suspect that [Arrow's Theorem](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) applies to subpersonal thinking modules as well as it does to whole people.

A single entity which can believe and act simultaneously in contradictory ways is not really a single entity, is it?

Replies from: Lumifer
comment by Lumifer · 2015-08-06T20:27:15.070Z · LW(p) · GW(p)

See my answer to Viliam...

comment by HungryHippo · 2015-08-05T22:42:19.018Z · LW(p) · GW(p)

I don't know if it's a good separation as stated.

As stated, my comment is more a vague suggestion than a watertight deduction from first principles.

What I intend to suggest is that just as humans vary along the dimensions of aggression, empathy, compassion, etc., so too do we vary according to what degree, and when, we give either explanation (animistic/clockwork) primacy over the other. I'm interested in these modes of explanation more from the perspective of it being a psychological tendency rather than it giving rise to a self-consistent world view.

In the mental operations of humans there is a tendency to say "here, and no longer, for we have arrived" when we explain phenomena and solve problems. For some this is when they have arrived at a kind of "spirit", for some it is when they arrive at "gears". For some it is gears six days a week, and spirits on Sunday.

The degree to which you seek spirit-explanations depends on the size and complexity of the physical system (a speck of dust, a virus, a bacterium, a single-celled organism, an ant, a frog, a mouse, a cat, a monkey, a human), and also the field of inquiry (particle physics, ..., sociology). And it probably depends on some personal nature-and-nurture quality. Sometimes explanations are phrased in terms of spirits, sometimes gears.

I'm also not saying that the world-views necessarily contradict each other (in that one denies the existence of phenomena the other asserts), only that each world-view seeks different post-hoc rationalizations. The animist will claim that the tectonic plate moved because God was wrathful and intended it. The clockworker will claim that God's wrath is superfluous in his own model of earthquakes. Whether either world-view adds something the other lacks is beside the point; the point is only that each desires to stop at a different destination, psychologically.

In real life I once heard, from an otherwise well-adjusted member of society, that the Devil was responsible for the financial crisis. I did not pry into what he meant by this, but he seemed quite satisfied by the explanation. Let me emphasize this: when told that the cause of the financial crisis was the Devil, he was quite content with simply nodding his assent. I have not researched the crisis myself, but I am confident I would phrase my explanation in terms of "externalities" or "perverse incentives" or some such. The two explanations would agree on every detail of what actually happened during the financial crisis, but for some psychological reason what he and I find to be a psychologically satisfying explanation differs.

It's perfectly fine to say that "Alice wants coffee". The question is whether a person is more likely to believe that Alice's "wanting something" is what sets in motion her biochemical reactions, or whether it is the biochemical reactions which set in motion her "wanting something". Whether the cart pushes the horse, or vice versa.

I don't think any world-views (as implemented in real humans) are self-consistent (or even deduced from first principles), but a person's gear/spirit tendency can probably predict additional beliefs that the person holds. For example, which person is more likely to believe that ghosts exist? Depending on whether a person tends towards gear- or spirit-explanations, how likely is he to say "when I'm in a room with only my teddy bear, I no longer feel alone"? I'm not necessarily saying that these beliefs are necessary consequences of each world-view, only that certain ideas are associated with each world-view for one reason or another.

As to the "by nature" part of my comment, I mean that those who tend towards spirit-explanations are more likely to believe something approximating "we descended from the gods", while those who tend towards gear-explanations are more likely to believe "we ascended from the beasts". The former tend towards utopianism, the latter tends towards the "tragic vision" of life. (See: A Conflict of Visions ). The utopians/spiritists take it for granted that we should be friends and wonder why we aren't. Those with the tragic vision of life/gearists take it for granted that we are in conflict and wonder why we sometimes aren't.

Replies from: Lumifer
comment by Lumifer · 2015-08-05T23:57:25.891Z · LW(p) · GW(p)

Do you think your distinction maps to the free will vs. determinism dimension?

I think what makes me confused is that religion is heavily mixed into the animistic view. Can the animistic view (in particular with respect to natural phenomena) exist without being based on religion?

Replies from: HungryHippo
comment by HungryHippo · 2015-08-06T02:20:57.995Z · LW(p) · GW(p)

I don't have any hard and fast answers, so I cannot be completely sure.

My guess is that a "spirit" person is more likely to believe in free will, while a "gear" person is more likely to believe in the absence of free will. What free will means precisely, I'm not sure, so it feels forced for me to claim that another person would believe in free will when I myself am unable to make an argument that is as convincing to me as I'm sure their arguments must be to them. I haven't thought much about free will, but the only way I'm personally able to conceive of it is that my mind is somehow determined by brain-states which in turn are defined by configurations of elementary particles (my brain/my body/the universe) with known laws, if unknown (in practice) solutions. So personally I'm in the "it's gears all the way down" camp, at least with the caveat that I haven't thought about it much. But there are people who genuinely claim to believe in free will and I take their word for it, whatever those words mean to them. So my guess at the beginning of the paragraph should be interpreted as: if you ask a "spirit" person he will most likely say "yes, I believe in free will", while a "gear" person will most likely say "no, I do not believe in free will." The factual content of each claim is a separate issue. Whether either world-view can be made self-consistent is a further issue. I think a "spiritist" would accept the will of someone as a sufficient first cause of a phenomenon, with the will being conceived of only as a "law unto itself".

When it comes to determinism, I think "gearists" are more likely to be determinists, since that is what has dominated all of the sciences (except for quantum physics).

"Spiritits" on the other hand, I don't know. If God has a plan for everything and everyone, that sounds pretty deterministic. But if you pray for him to grant you this one wish, then you don't know whether he will change the course of the universe for your benefit or no, so I would call that pretty indeterministic. Even if you don't pray, you can never really know what God has in store for you. If all your Gods are explicitly capricious, then there are no pretensions to determinism. I think a "spiritist" is more likely to believe in indeterminism.

The animistic view with respect to natural phenomena feels very religious to me as well. I use the word "feel" here because I have no precise definition of religion and maybe none exists. (See the very beginning of my comments in this thread.) If you believe that the river is alive, that the wind can be angry, and the waves vengeful, is that (proto?) religious? Or is it simply the pathetic fallacy? What if you believe, with Aristotle, that "nature abhors a vacuum"? That is animistic without being, I think, religious. Or what of Le Chatelier's Principle, in which a chemical reaction "resists" the change you impose on it (e.g. if you impose an increased pressure, the chemicals will react to decrease the pressure again)?

comment by [deleted] · 2015-08-05T01:20:37.649Z · LW(p) · GW(p)

I'm interested in your characterization of left vs. right, as it seems to me both parties make this mistake equally.

What examples were you thinking of when making that characterization?

Replies from: Viliam
comment by Viliam · 2015-08-05T09:03:55.862Z · LW(p) · GW(p)

Different countries have different definitions of left and right. There seems to be some system, but also... well, let me give you an example: In Slovakia, the political party promoting marijuana legalization and homosexual marriage was labeled by its opponents as right-wing, because... well, they also supported the free market, and supporting the free market means opposing communists, and since communists are left-wing, then logically if you oppose them, you must be right-wing. Having exactly the same opinions in the USA would make one left-wing, if I understand it correctly.

This said, the examples I have in mind may be rather atypical for most LW readers. Thinking about my country, I would roughly classify political parties into three groups, listed from most powerful to least powerful.

1) Communists, including some small Nazi-ish parties, because they have a similar ideology (defend the working class, blame evil people for everything bad; the difference is that for Communists the evil people are Americans and entrepreneurs, while for Nazis they are Americans, Hungarians, Jews, and Gypsies; also both are strongly pro-Russia).

2) Liberals/Libertarians, basically anyone who knows Economics 101 and wants to have some free market, and in extreme cases even things like marijuana and gay marriage.

3) Catholics, who only care about more power and money for the Catholic Church, and are willing to support either of the previous two groups if they in return give them what they want (so far they have mostly joined the Liberals, but it has always created a lot of tension within the government).

So for me, "left-wing" usually means (1), and "right-wing" usually means (2) + (3).

In my country, knowing Economics 101 already gets you labeled "right-wing", and if you say things like "if you increase taxes, you will punish the rich, but you will also make stuff more expensive for the poor" or "if you increase the minimum wage, some people will get a higher salary, but other people will get fired or be unable to find a job", this is perceived by many as being mind-killed.

But in other countries it may be completely different.

Replies from: Username, Vaniver, None
comment by Username · 2015-08-07T12:56:54.947Z · LW(p) · GW(p)

Different countries have different definitions of left and right

In many non-western countries the very dichotomy between left and right doesn't make any sense. Westerners make a lot of fatal mistakes when they try to project their limited understanding onto non-Western countries.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-08-08T08:22:56.230Z · LW(p) · GW(p)

On reading that comment on Top Comments Today before having read its ancestors, I thought you were talking about Australian Aboriginal cultures who use compass directions even in everyday situations where Europeans would use relative directions.

comment by Vaniver · 2015-08-05T13:03:45.563Z · LW(p) · GW(p)

Having exactly the same opinions in USA would make one left-wing, if I understand it correctly.

The only American political party like that is the libertarian party, which is consistently considered right-wing. (That is, the combination of marijuana legalization, gay rights, and free-market; you do find people in favor of marijuana legalization, gay rights, and less free market on the left.)

Replies from: Viliam
comment by Viliam · 2015-08-06T07:49:06.241Z · LW(p) · GW(p)

You are right. Well, in Slovakia the libertarian-ish party is the only one that would touch the topics of marijuana and gay rights. We do not have a "marijuana, gay rights, less free market" party, and maybe not even the voters who would vote for such a party. Any kind of freedom is right-wing (although not everything right-wing is pro-freedom).

comment by [deleted] · 2015-08-05T17:42:52.016Z · LW(p) · GW(p)

Communists (in the Marxist sense) definitely take a systems-thinking, gear-like approach, not a magical "evil people do evil things" approach. The entire idea behind Marxism is that there's a systemic problem with capitalism where the rich own the means of production. This will lead to systemic unrest among those who don't own the means of production, which will eventually lead to a revolution.

I would say the problem is not systems thinking in this case, but lack of empiricism. Communism has been proven again and again to lead to corruption, but that fact is ignored by communists because it contradicts their systemic models. That's not just a leftist problem though. For instance, it's been shown again and again that raising the minimum wage doesn't lead to unemployment, but that's been ignored again and again by the right because it contradicts their systemic model.

Replies from: Viliam, Lumifer
comment by Viliam · 2015-08-06T08:10:59.141Z · LW(p) · GW(p)

Communists (in the Marxist sense) definitely take a systems-thinking, gear-like approach, not a magical "evil people do evil things" approach.

True for textbook Communism, but it doesn't work for politicians. What is a Communist politician supposed to say to their voters: "Let's sit here with our hands folded and wait for the inevitable collapse of the capitalism"? They must point fingers. They must point fingers more than their competitors for the same role.

And when the Communists rule the country... they empirically can't deliver what Marx promised. So they must find excuses. Stuff like "Socialism is the initial stage of Communism, still containing some elements of the capitalist system, such as money; just wait a few years longer and you will see the final stage". In reality, you have a pseudo-capitalist system with a dictatorship of the Communist Party, state-owned factories and regulated prices, mandatory employment and press censorship... and you stay there for decades, because... well, again, you must point fingers. American imperialists, internal traitors, everyone is trying to destroy our 'freedom' and happiness.

EDIT: But probably more important than this all is that you have to "sell" Communism to people who are prone to magical thinking. So whatever the original theory was, as soon as it reaches the masses, your average supporter will think magically.

Replies from: None
comment by [deleted] · 2015-08-07T21:29:26.381Z · LW(p) · GW(p)

Yes, I agree with all of this. It's essentially restating my initial point, which is that the communists' problem is not that they don't think in systems - it's that they don't update their systems based on empirical results.

comment by Lumifer · 2015-08-05T17:56:39.893Z · LW(p) · GW(p)

it's been shown again and again that raising the minimum wage doesn't lead to unemployment

That's not true. Or, rather, it's only true if you cherry-pick your economics papers. In fact, there is considerable debate as to the economic consequences (especially beyond short-term) of the minimum wage and the question is far from settled.

Replies from: None
comment by [deleted] · 2015-08-05T18:04:16.115Z · LW(p) · GW(p)

Well, yes.

Which still means that the right-wing stance is, at best, incomplete. Which isn't the stance you see any of the politicians taking, at least not in the US.

Replies from: Lumifer
comment by Lumifer · 2015-08-05T18:07:35.755Z · LW(p) · GW(p)

I haven't seen anyone (well, anyone who isn't drooling or foaming) claim "completeness" :-)

Replies from: None
comment by [deleted] · 2015-08-05T20:14:16.594Z · LW(p) · GW(p)

I think it's implied in the arguments you see them making.

Replies from: Lumifer
comment by Lumifer · 2015-08-06T04:18:24.424Z · LW(p) · GW(p)

I think it's implied in the arguments you see them making.

Oh, like this one?

it's been shown again and again that raising the minimum wage doesn't lead to unemployment

Replies from: None
comment by [deleted] · 2015-08-06T04:33:55.975Z · LW(p) · GW(p)

Yes, as I've admitted in the previous comments, that's not true.

And yes, I've seen the reverse attitude many times.

comment by Houshalter · 2015-08-06T20:39:58.621Z · LW(p) · GW(p)

I don't really agree with that link. Like the picture from a random real estate listing, which he likely cherry-picked. And he just assumed the owner was an environmentalist because of where they live. And assumed they use a lot of gas because they have ATVs. Which makes no sense at all. The mileage on those is actually not terrible, and they are typically not driven very long distances. Those motorcycles are even better than cars on fuel consumption.

But I also disagree with the idea that people can't be for the environment if they don't own an electric car and live in a tiny house, or whatever. I view environmentalist lifestyles as extremely pointless. Your individual sacrifice won't contribute even the tiniest drop in the bucket.

Even if everyone did it, the price of gas and coal would just go down, and other countries would buy it - the total consumption would remain the same. They will keep mining and pumping it out of the ground until it's no longer possible to do so. The only way to solve the problem is to force them to keep it in the ground, or at least heavily tax it as it is taken out.

And that's a much more reasonable stance. You might not want to sacrifice individually and pointlessly. But you would be willing to do so if everyone else has to.

Replies from: Vaniver
comment by Vaniver · 2015-08-06T20:50:53.825Z · LW(p) · GW(p)

Your central point, that it's a collective action problem, is Heath's main point, as I read his article. He points out that people do not live environmentalist lifestyles as evidence that they will not vote for making non-environmentalist lifestyles expensive, and thus Klein's claim that democracy and local politics will help solve this issue is fundamentally mistaken.

And assumed they use a lot of gas because they have ATVs. Which makes no sense at all. The mileage on those is actually not terrible, and they are typically not driven very long distances. Those motorcycles are even better than cars on fuel consumption.

While I agree that he doesn't have sufficient information to conclude the amount of gas they use, it's certainly fair to claim that their lifestyle is fossil fuel intensive, which was his claim. I have friends with guns; the amount of gunpowder consumption they require is better measured by how frequently they go to the range, not the number of guns they own. But it still seems fair to argue that using guns is a gunpowder-intensive hobby.

Even if everyone did it, the price of gas and coal would just go down, and other countries would buy it

...other countries don't count as everyone?

Replies from: Houshalter, Lumifer
comment by Houshalter · 2015-08-06T22:57:26.232Z · LW(p) · GW(p)

He points out that people do not live environmentalist lifestyles as evidence that they will not vote for making non-environmentalist lifestyles expensive, and thus Klein's claim that democracy and local politics will help solve this issue is fundamentally mistaken.

This is possibly true, but not necessarily so. We know nothing about these people's political beliefs or what issues they care about. He's making massive judgements about them from a single picture in a real estate listing.

Even if we are accepting the premise that people only vote in their self interest (they don't), these people are clearly very wealthy. Increases in energy prices will have a lower impact on them.

it's certainly fair to claim that their lifestyle is fossil fuel intensive, which was his claim. I have friends with guns; the amount of gunpowder consumption they require is better measured by how frequently they go to the range, not the number of guns they own. But it still seems fair to argue that using guns is a gunpowder-intensive hobby.

"intensive" is a pretty strong word. Clearly they use gas, but everyone uses gas. Just because they have ATVs and motorcycles doesn't mean much. As I said, they get good mileage, sometimes better than cars, and they aren't going to be driving them long distances. A single person with a long commute is likely going to use far more gas than them.

Replies from: Vaniver
comment by Vaniver · 2015-08-07T00:59:55.805Z · LW(p) · GW(p)

these people are clearly very wealthy.

Would you describe this as "necessarily so"?

A single person with a long commute is likely going to use far more gas than them.

Which is why it might be relevant that this is in a suburb of Toronto--i.e. someone who lives here and works in the city probably has an hour-long commute.

Replies from: Houshalter
comment by Houshalter · 2015-08-08T20:34:16.467Z · LW(p) · GW(p)

Would you describe this as "necessarily so"?

Because they own a giant, expensive home, with a garage filled with expensive toys.

Which is why it might be relevant that this is in a suburb of Toronto--i.e. someone who lives here and works in the city probably has an hour-long commute.

Perhaps, but that is not what the article said at all. He was judging them entirely based on a picture of their garage, and the fact that they owned motorcycles and ATVs. He didn't try to estimate how much gas they use per day. He looked at a photo and noticed it didn't have the aesthetic of environmentalism.

Replies from: Vaniver
comment by Vaniver · 2015-08-09T03:48:15.371Z · LW(p) · GW(p)

Because they own a giant, expensive home, with a garage filled with expensive toys.

Wealth typically refers to one's assets minus one's liabilities; evidence of assets does not suffice to demonstrate wealth. My point was that you are putting forward a reasonable general claim that is not necessarily true--even if this particular home seller is underwater on their mortgage, similar people exist that are not and one would expect the latter group to be more likely--at the same time that you are criticizing Heath for putting forward a reasonable general claim that is not necessarily true--people who own multiple ATVs and motorcycles and live an hour from the city probably consume more gasoline than the average Canadian and are unlikely to be a strong supporter of the environmentalist political coalition.

comment by Lumifer · 2015-08-06T20:58:46.745Z · LW(p) · GW(p)

Your central point, that it's a collective action problem, is Heath's main point, as I read his article.

It seems Heath is talking about what Scott Alexander calls Moloch.

comment by Lumifer · 2015-08-05T17:18:33.455Z · LW(p) · GW(p)

Heh. Index Funds May Work a Little Too Well.

Replies from: None, ChristianKl
comment by [deleted] · 2015-08-05T23:58:34.160Z · LW(p) · GW(p)

Great share

comment by ChristianKl · 2015-08-05T17:42:54.248Z · LW(p) · GW(p)

Even when a law professor thinks that those purchases should be illegal, I find it hard to imagine that the legal system moves against index funds.

comment by [deleted] · 2015-08-05T12:34:30.117Z · LW(p) · GW(p)

One of the most effective ways (if not the most effective way) for me to focus on a particular task is to open Paint (on Windows), write in one word what I'm doing right now, e.g. "complexity" for an online course on complexity I'm taking, and leave it like this on the side of my screen (or on the second screen), so that it's always in my field of view but doesn't interfere with anything.

This creates a really weird effect: whenever I want to get distracted, something automatically and almost completely effortlessly tells my mind to focus on the task instead, and doesn't let me get distracted.

Can anybody check how well it generalises for them?

Replies from: Elo
comment by Elo · 2015-08-05T13:22:46.598Z · LW(p) · GW(p)

Look into something called a kanban board. Consider segments of:

  • "tasks to do"
  • "single task doing now"
  • "single next task"
  • "tasks awaiting external imput"
  • "completed tasks"

Where a task X seems hard, break it down into smaller tasks (the first of which is the task of breaking X down into smaller tasks).

As a bonus: make an estimate of how long each task will take. After completing it, compare your prediction against the actual time and update your time-guessing methods.
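
A rough illustration of that bonus step, as a minimal sketch (the task names and hours below are made-up numbers, not anything from the comment), assuming you log estimated vs. actual time for completed tasks:

```python
# (estimated hours, actual hours) for completed tasks -- made-up numbers
completed = {
    "write draft":   (2.0, 3.5),
    "review PR":     (0.5, 0.5),
    "set up backup": (1.0, 2.5),
}

for task, (est, actual) in completed.items():
    print(f"{task}: estimated {est}h, took {actual}h (x{actual / est:.1f})")

# Average overrun factor -- multiply future estimates by this to correct
# for your current bias.
factor = sum(actual / est for est, actual in completed.values()) / len(completed)
print(f"average overrun factor: x{factor:.2f}")
```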

Replies from: None
comment by [deleted] · 2015-08-05T14:37:13.555Z · LW(p) · GW(p)

I'm not sure how this is relevant to my comment. I do use Complice for what you're describing, but the gist of my comment is the need for the "single task doing right now" reminder to be in my field of view, lest it be forgotten.

Replies from: Elo, None
comment by Elo · 2015-08-05T14:53:50.940Z · LW(p) · GW(p)

I thought you were looking for related task focus strategies. My mistake.

field of view

I have not found the need for the instruction to be in the field of view. I don't imagine very much of an impact if I were to place it in my field of view. I would rather use as much of my field of view as possible for the task at hand, but I will try it and get back to you.

Replies from: None
comment by [deleted] · 2015-08-06T10:20:36.814Z · LW(p) · GW(p)

That'd be awesome!

Replies from: Elo
comment by Elo · 2015-08-18T06:05:36.909Z · LW(p) · GW(p)

Update: it didn't do much for me. I need to try a few more times.

comment by [deleted] · 2015-08-09T09:51:49.102Z · LW(p) · GW(p)

Doesn't Complice have this exact feature?

Replies from: None
comment by [deleted] · 2015-08-09T09:55:22.774Z · LW(p) · GW(p)

Almost. It doesn't have a "tasks awaiting external input" section :)

Replies from: None
comment by [deleted] · 2015-08-09T09:57:58.038Z · LW(p) · GW(p)

I meant, doesn't Complice have the "single task doing right now" function?

Replies from: None, None
comment by [deleted] · 2015-08-12T20:56:43.520Z · LW(p) · GW(p)

So I tried using this function and it's way too cluttered and distracting for me, so I guess the answer is no. The purity of Paint turned out to be pretty important.

comment by [deleted] · 2015-08-09T11:29:24.771Z · LW(p) · GW(p)

Oh.

comment by cousin_it · 2015-08-04T19:28:24.859Z · LW(p) · GW(p)

Wikipedia on Chalmers, consciousness, and zombies:

Chalmers argues that since such zombies are conceivable to us, they must therefore be logically possible. Since they are logically possible, then qualia and sentience are not fully explained by physical properties alone.

That kind of reasoning allows me to prove so many exciting things! I can imagine a world where gravity is Newtonian but orbits aren't elliptical (my math skills are poor but my imagination is top notch), therefore Newtonian gravity cannot explain elliptical orbits. And so on.

Am I being a hubristic idiot for thinking I can disprove a famous philosopher so casually?

Replies from: Elo, Manfred, Kaj_Sotala, Viliam, gjm
comment by Elo · 2015-08-05T13:12:09.185Z · LW(p) · GW(p)

I believe there is a misunderstanding in the word "conceivable", where if you taboo it you might get:

Conceivable1 = imagination

Conceivable2 = describe with a human brain

Conceivable3 = develop all the requirements for it to be feasible and the understanding of how to make it so within one human brain.

where for Conceivable3: if we can conceive3 it, it is logically possible.

Replies from: cousin_it
comment by cousin_it · 2015-08-05T13:29:15.187Z · LW(p) · GW(p)

I used the first meaning. Doesn't Chalmers use it as well?

Replies from: Elo, None
comment by Elo · 2015-08-05T14:26:07.532Z · LW(p) · GW(p)

are conceivable to us, they must therefore be logically possible.

Things that are imaginable are not therefore logically possible. I find it an unreasonable and untrue leap of reasoning.

Does that make sense?

Replies from: fubarobfusco, cousin_it
comment by fubarobfusco · 2015-08-05T18:35:36.650Z · LW(p) · GW(p)

In fact, there are quite a lot of concepts that are imaginable but not logically possible. Any time a mathematician uses a proof by contradiction, they're using such a concept.

We can state very clearly what it would mean to have an algorithm that solves the halting problem. It is only because we can conceive of such an algorithm, and reason from its properties to a contradiction, that we can prove it is impossible.

Or, put another way, yes, we can conceive of halting solvers (or zombies), but it does not follow that our concepts are self-consistent.
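
A minimal sketch of that diagonalization, assuming a hypothetical `halts` oracle (nothing here is from the original comment beyond the standard textbook argument):

```python
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually
    halts. We can state its specification this clearly even though no such
    function can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop forever, so halt immediately

# Feeding paradox to itself forces a contradiction: if halts(paradox, paradox)
# is True, then paradox(paradox) loops forever; if it is False, then
# paradox(paradox) halts. Either way the oracle is wrong, so no correct
# `halts` can exist -- conceivable, but not logically possible.
```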

comment by cousin_it · 2015-08-05T16:10:49.290Z · LW(p) · GW(p)

Yeah, that's probably right. I'm not sure what "logically possible" means to philosophers, so I tried to give a reductio ad absurdum of the argument as a whole, which should work for any meaning of "logically possible".

Replies from: IffThen
comment by IffThen · 2015-08-07T00:58:42.853Z · LW(p) · GW(p)

Logically possible just means that "it works in theory" -- that there is no logical contradiction. It is possible to have an idea that is logically possible but not physically possible; e.g., a physicist might come up with an internally consistent theory of a universe in which the speed of light in a vacuum is 3 mph.

These are in contrast to logically impossible worlds, the classic example being a world that contains both an unstoppable force and an unmovable object; these elements contradict each other, so cannot both occur in the same universe.

Replies from: cousin_it
comment by cousin_it · 2015-08-07T11:41:05.200Z · LW(p) · GW(p)

OK.

Is a world with Newtonian gravity and non-elliptical orbits logically possible?

Is a world where PA proves ¬Con(PA) logically possible?

Is a world with p-zombies logically possible?

Too often, people confuse "I couldn't find a contradiction in 5 minutes" with "there's provably no contradiction, no matter how long you look". The former is what philosophers seem to use routinely, while the latter is a very high standard. For example, our familiar axioms about the natural numbers provably cannot meet that standard, due to the incompleteness theorems. I'd be very surprised if Chalmers had an argument that showed p-zombies are logically possible in the latter sense.

Replies from: IffThen
comment by IffThen · 2015-08-07T13:27:24.703Z · LW(p) · GW(p)

"Chalmers argues that since such zombies are conceivable to us, they must therefore be logically possible. Since they are logically possible, then qualia and sentience are not fully explained by physical properties alone."

This is shorthand for "in the two decades that Chalmers has been working on this problem, he has been defending the argument that..." You might look at his arguments and find them lacking, but he has spent much longer than five minutes on the problem.

comment by [deleted] · 2015-08-06T06:50:48.470Z · LW(p) · GW(p)

I get the impression Chalmers is using something like Conceivable1 for the "zombies are conceivable" part of the argument, then sneakily switching to something more like Conceivable3 for the "conceivable, therefore logically possible" part.

Replies from: IffThen
comment by IffThen · 2015-08-07T01:09:38.315Z · LW(p) · GW(p)

I suspect you already know this, but just in case, in philosophy, a zombie is an object that can pass the Turing test but does not have internal experiences or self-awareness. Traditionally, zombies are also physically indistinguishable from humans.

comment by Manfred · 2015-08-04T20:44:55.011Z · LW(p) · GW(p)

The truth is usually simple, but arguments about it are allowed to be unboundedly complicated :P

Which is to say, I bet Chalmers has heard this argument before and formulated a counterargument, which would in turn spawn a counter-counterargument, and so on. So have you "proven" anything in a publicly final sense? I don't think so.

Doesn't mean you're wrong, though.

Replies from: iarwain1
comment by iarwain1 · 2015-08-04T21:53:25.023Z · LW(p) · GW(p)

The question is, how do I tell (without reading all the literature on the topic) if my argument is naive and the counterarguments that I haven't thought of are successful, or if my argument is valid and the counterarguments are just obfuscating the truth in increasingly complicated ways?

Replies from: None
comment by [deleted] · 2015-08-05T01:23:05.343Z · LW(p) · GW(p)

You either ask an expert, or become an expert.

Although I'd be wary of philosophy experts, as there's not really a tight feedback loop in philosophy.

comment by Kaj_Sotala · 2015-08-06T12:18:38.182Z · LW(p) · GW(p)

My default assumption is that if someone smart says something that sounds obviously false to me, either they're giving their words different meanings than I am, or alternatively the two-sentence version is skipping a lot of inferential steps.

Compare the cautionary tale of talking snakes.

Replies from: Jiro
comment by Jiro · 2015-08-06T15:18:37.244Z · LW(p) · GW(p)

If the tale of talking snakes really showed what it is supposed to show, we'd see lots of nonreligious people refuse to accept evolution on the grounds that evolution is so absurd that it's not worth considering. That hardly ever happens; somehow the "absurdity" is only seen as absurd by people who have separate motivations to reject it. I don't think that apes turning into men is any more absurd than matter being composed of invisible atoms, germs causing disease, or nuclear fusion in stars. Normal people say "yeah, that sounds absurd, but scientists endorse them, I guess they know what they're doing".

comment by Viliam · 2015-08-05T09:08:18.348Z · LW(p) · GW(p)

Am I being a hubristic idiot for thinking I can disprove a famous philosopher so casually?

You certainly are appropriating higher status than you deserve from the academic point of view. :P

comment by gjm · 2015-08-05T12:44:54.649Z · LW(p) · GW(p)

If you are, then so am I, because that was also my immediate reaction on hearing this conceivability argument.

comment by MrMind · 2015-08-03T09:34:20.860Z · LW(p) · GW(p)

This is an important paper regarding the foundations of probability, in particular section 2.5, which lists all the papers that previously dealt with fixing the holes in Cox's theorem.

comment by G0W51 · 2015-08-07T09:09:30.856Z · LW(p) · GW(p)

I have heard (from the book Global Catastrophic Risks) that life extension could increase existential risk by giving oppressive regimes increased stability by decreasing how frequently they would need to select successors. However, I think it may also decrease existential risk by giving people a greater incentive to care about the far future (because they could be in it). What are your thoughts on the net effect of life extension?

Replies from: pcm, Username, None
comment by pcm · 2015-08-07T15:05:27.168Z · LW(p) · GW(p)

One of the stronger factors influencing the frequency of wars is the ratio of young men to older men. Life extension would change that ratio to imply fewer wars. See http://earthops.org/immigration/Mesquida_Wiener99.pdf.

Stable regimes seem to have less need for oppression than unstable ones. So while I see some risk that mild oppression will be more common with life extension, I find it hard to see how that would increase existential risks.

Replies from: G0W51, knb
comment by G0W51 · 2015-08-07T17:33:08.830Z · LW(p) · GW(p)

Oppression could cause an existential catastrophe if the oppressive regime is never ended.

comment by knb · 2015-08-11T10:50:26.370Z · LW(p) · GW(p)

But why do young men cause wars (assuming they do)? If everyone remains biologically 22 forever, are they psychologically more similar to actual 22 year-olds or to whatever their chronological age is? If younger men are more aggressive due to higher testosterone levels (or whatever) agelessness might actually have the opposite effect, increasing the percentage of the male population which is aggressive.

comment by Username · 2015-08-07T12:35:08.598Z · LW(p) · GW(p)

Radical life extension might lead to overpopulation and wars that might escalate to existential risk level danger.

comment by [deleted] · 2015-08-10T03:30:49.932Z · LW(p) · GW(p)

Is there anything that can't somehow be spun into increasing existential risk? The biggest existential risk is being alive at all in the first place.

Replies from: G0W51
comment by G0W51 · 2015-08-10T04:07:47.272Z · LW(p) · GW(p)

Yes, but I'm looking to see if it increases existential risk more than it decreases it, and if the increase is significant.

comment by DataPacRat · 2015-08-04T04:57:23.459Z · LW(p) · GW(p)

Seeking plausible-but-surprising fictional ethics

How badly could a reasonably intelligent follower of the selfish creed, "Maximize my QALYs", be manhandled into some unpleasant parallel to a Pascal's Mugging?

How many rules-of-thumb are there, which provide answers to ethical problems such as Trolley Problems, give answers that allow the user to avoid being lynched by an angry mob, and don't require more than moderate mathematical skill to apply?

Could Maslow's Hierarchy of Needs be used to form the basis of a multi-tiered variant of utilitarianism?

Would trying to look at ethics from an Outside View such as, say, a soft-SF rubber-forehead alien suggest any useful, novel approaches to such problems?

(I'm writing a story, and looking for inspiration to finalize a character's ethical system, and the consequences thereof. I'm trying to stick to the rules of reality, including those of sociology, so am having some trouble coming up with a set of ethics that isn't strictly worse than the ones I know of already, and is reasonably novel to someone who's read the Sequences. Other than this post, my next approach will be to try to work out the economic system being used, and then which virtues would allow a member to profit - somewhat unsatisfying, but probably good enough if nobody here can suggest something better. So: can you suggest something better? :) )

Replies from: Illano, DanielLC
comment by Illano · 2015-08-04T18:03:12.933Z · LW(p) · GW(p)

For story purposes, using a multi-tiered variant of utilitarianism based on social distance could lead to some interesting results. If the character were to calculate his utility function for a given being by something like Calculated Utility = Utility / (Degrees of Separation from me)^2, it would be really easy to calculate, yet come close to what people really use. The interesting part from a fictional standpoint could be if your character rigidly adheres to this function, such that you can manipulate your utility in their eyes by becoming friends with their friends. (E.g. the utility for me to give a random stranger $10 is 0 (assuming infinite degrees of separation), but if they told me they were my sister's friend, it may have a utility of $10/(2)^2, or $2.50.) It could be fun to play around with the hero's mind by manipulating the social web.
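
A minimal sketch of how that rule might be computed (the friendship graph, the names, and the breadth-first-search helper below are illustrative assumptions, not anything specified in the comment):

```python
from collections import deque

def degrees_of_separation(graph, me, other):
    """Breadth-first search over a friendship graph; returns the number of
    hops from `me` to `other`, or None if they are unconnected."""
    if me == other:
        return 0
    seen, frontier = {me}, deque([(me, 0)])
    while frontier:
        person, dist = frontier.popleft()
        for friend in graph.get(person, ()):
            if friend == other:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, dist + 1))
    return None  # no connection at all

def discounted_utility(raw_utility, graph, me, other):
    """Raw utility divided by the square of the social distance."""
    d = degrees_of_separation(graph, me, other)
    if d is None:
        return 0.0           # infinite degrees of separation
    if d == 0:
        return raw_utility   # yourself: no discount
    return raw_utility / d ** 2

# The $10-to-my-sister's-friend example from the comment:
graph = {"me": ["sister"], "sister": ["me", "friend"], "friend": ["sister"]}
print(discounted_utility(10, graph, "me", "friend"))    # 10 / 2^2 = 2.5
print(discounted_utility(10, graph, "me", "stranger"))  # 0.0
```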

Replies from: DataPacRat
comment by DataPacRat · 2015-08-04T22:02:06.721Z · LW(p) · GW(p)

I think I once heard of a variant of this, only using degrees of kinship instead of social connections. E.g., direct offspring and full siblings are discounted to 50%, grandchildren to 25%, and so forth.

Replies from: DataPacRat
comment by DataPacRat · 2015-08-05T19:34:33.666Z · LW(p) · GW(p)

I was just struck by a thought, which could combine the two approaches, by applying some sort of probability measure to one's acquaintances about how likely they are to become a blood relative of one's descendants. The idea probably needs tweaking, but I don't think I've come across a system quite like it before... Well, at least, not formally. It seems plausible that a number of social systems have ended up applying something like such a heuristic through informal social-evolutionary adaptation, which could provide some fodder for contrasting the Bayesian version against the historically-evolved versions.

Anyone have any suggestions on elaborations?

Replies from: Illano
comment by Illano · 2015-08-06T13:01:44.866Z · LW(p) · GW(p)

Sounds somewhat like the 'gay uncle' theory, where having 4 of your siblings' kids pass on their genes is equivalent to having 2 of your own pass on their genes, but with future pairings included, which is interesting.

Stephen Baxter wrote a couple of novels that explored the first theory a bit (the Destiny's Children series), where gur pbybal riraghnyyl ribyirq vagb n uvir, jvgu rirelbar fhccbegvat n tebhc bs dhrraf gung gurl jrer eryngrq gb.

The addition of future contributors to the bloodline as part of your utility function could make this really interesting if set in a society that has arranged marriages and/or engagement contracts, as one arranged marriage could completely change the outcome of some deal. Though I guess this is how a ton of history played out anyway, just not quite as explicitly.

comment by DanielLC · 2015-08-04T17:47:23.357Z · LW(p) · GW(p)

How badly could a reasonably intelligent follower of the selfish creed, "Maximize my QALYs", be manhandled into some unpleasant parallel to a Pascal's Mugging?

They'd be just as subject to it as anyone else. It's just that instead of killing 3^^^3 people, they threaten to torture you for 3^^^3 years. Or offer 3^^^3 years of life or something. It comes from having an unbounded utility function. Not from any particular utility function.

comment by [deleted] · 2015-08-03T16:07:57.301Z · LW(p) · GW(p)

Does anybody else get the sense that in terms of karma, anecdotes seem to be more popular than statistical analysis when rating comments? It seems like a clear and common source of bias to me. Thoughts?

Replies from: Lumifer, Username, None, LessWrong1
comment by Lumifer · 2015-08-03T16:09:52.868Z · LW(p) · GW(p)

Does anybody else get the sense that in terms of karma, anecdotes seem to be more popular than statistical analysis when rating comments?

Are you basing this observation on anecdotes or on statistical analysis? :-P

comment by Username · 2015-08-07T12:31:02.297Z · LW(p) · GW(p)

Bikeshed effect

comment by [deleted] · 2015-08-03T19:23:02.211Z · LW(p) · GW(p)

I get the opposite sense.

Replies from: satt
comment by satt · 2015-08-08T16:30:42.459Z · LW(p) · GW(p)

Same. I'd guess that ceteris paribus, comments based on statistical analysis would get more upvotes than anecdotes; it's just that ceteris ain't paribus.

A big part of a comment's karma is how many (logged-in) people read the comment, and in a given thread early comments tend to get more readers than late comments. Assuming that posting a statistical analysis is more time-consuming than posting an anecdote (and I think on average it is), comments with statistical analysis are systematically disadvantaged because they're posted later.

(This has definitely been my anecdotal experience. People seem to like comments where I dredge up statistics, but because I often post them as a thread winds down, or even after it's gone fallow, they're often less upvoted than their more-poorly-sourced parents.)

comment by Gunslinger (LessWrong1) · 2015-08-03T16:21:25.881Z · LW(p) · GW(p)

Isn't "karma" just a fancy word for "how much I like this post/comment"? I mean at registration I didn't get questions like:

  • (1) How much of your income have you donated to charity? [1]
  • (2) What is more dangerous to humanity? God's rage, or an out-of-control AI? [2]
  • (3) Which of the sequences is your favorite? Name two. [3]
  • [1] You must enter a number above 100%
  • [2] AI means artificial intelligence, but you're supposed to know that.
  • [3] "Favorite" means one, and it asks for two; clearly the write answer is to point out the grammatical error.

So clearly not everyone here is as rational as they should be, or at least LW-rational.

Moreover, that's only by a per-person basis. On comments with more upvotes, it would probably be the community's outlook on the comment/post. Same with downvotes. Just a matter of scale, really.

comment by [deleted] · 2015-08-03T10:04:07.391Z · LW(p) · GW(p)

Did some 5-min research for curiosity.

Are major categories in abnormal psychology actually good labels, statistically?

Big 5 personality traits were discovered through factor analysis.

Terms like depression, anxiety and personality disorder are in, or are entering, the common vernacular, but it's unclear whether they originated from anything statistically principled.

Google searched for (one of: 'construct validity', 'factor analysis') + (one of: depression, anxiety, personality disorder) and selected relevant results on the visible half of the first page (didn't scroll down more than a flick).

Of those pages, I closed tabs about research that was too specific (e.g. a journal-of-sports-psychology paper on college wrestlers, and a curious paper from a guy who uses the term 'nominology', which, based on a Google search, seems to be a neologism used by no one else), or based on populations from a very different culture (a Chinese sample, in one case).

Looks like the popular depression and anxiety tests are pretty valid, but the lack of overlap between test items suggests they're overly broad terms. Personality disorder papers provided insufficient statistical info in their abstracts and used vague terms.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-03T10:20:31.759Z · LW(p) · GW(p)

Looks like the popular depression and anxiety tests are pretty valid but also non-overlap between test items indicates that they're overly broad terms.

The fact that you as a layperson don't understand the overlap doesn't indicate that a test doesn't test for a real thing. On the other hand, the DSM-V categories are likely not the best possible labels. That's even the NIH's position: they declared that they are willing to fund studies that don't use them and instead try to find new categories.

If you want to dig deeper into how to think about such terms, "How to Think Straight About Psychology" by CFAR advisor and professor of psychology Keith Stanovich is a good read.

comment by IffThen · 2015-08-07T00:12:08.454Z · LW(p) · GW(p)

I'd like a quick peer review of some low-hanging fruit in the area of effective altruism.

I see that donating blood is rarely talked about in effective altruism articles; in fact, I've only found one reference to it on Less Wrong.

I am also told by those organizations that want me to donate blood that each donation (one pint) will save "up to three lives". For all I know all sites are parroting information provided by the Red Cross, and of course the Red Cross is highly motivated to exaggerate the benefit of donating blood; "up to three" is probably usually closer to "one" in practice.

But even so, if you can save one life by donating blood, can donate at essentially no cost, and can donate up to 6.5 times per year...

...and if the cost of saving a life via monetary donation is in the thousands of dollars, then giving blood is a great deal.
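A back-of-the-envelope version of that comparison (every number is a rough placeholder taken from the figures above, not a researched estimate):

```python
# Rough comparison: lives "saved" per year by donating blood vs. donating money.
donations_per_year = 6.5             # whole-blood donations allowed per year
lives_per_donation = 1.0             # discounting the "up to three lives" claim
dollars_per_life_via_charity = 3000  # stand-in for "thousands of dollars per life"

lives_from_blood = donations_per_year * lives_per_donation
implied_dollar_value = lives_from_blood * dollars_per_life_via_charity

print("Lives per year from blood:", lives_from_blood)                    # 6.5
print("Implied value vs. money:  $%d per year" % implied_dollar_value)   # $19500
```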

Am I missing anything?

And as a corollary, should I move my charitable giving to bribing people to donate blood whenever there is a shortage?

Replies from: Elo, ike, NancyLebovitz, ChristianKl
comment by Elo · 2015-08-07T05:47:25.686Z · LW(p) · GW(p)

To my knowledge, the line "up to three lives" is quoted because a blood sample can be separated into 3 parts (or 3 samples?), to help with different problems.

What is not mentioned often is the shelf-life for blood products. 3 months on the shelf and that pint is in the medical-waste basket. AKA zero lives saved.

And further, if a surgery goes wrong and they need multiple transfusions to stabilise a person, the lives saved per donation goes into fractional numbers (0.5, 0.33, 0.25...). But those numbers are not pretty.

Further, if someone requires multiple transfusions over their life, you end up saving the same life multiple times...

The point is: there are numbers less than 1 (i.e. 0), there are fractional numbers, and (not actually a mistake made here) real representative figures rarely land neatly on round numbers like 5, 10, 50, 100 or 1000.

Anyway, if you are healthy and able to spare some blood then it's probably a great thing to do.

The article ike linked does start to cover adverse effects of blood donation; I wonder if a study has been made of them.

(http://www.ihn-org.com/wp-content/uploads/2014/04/Side-effects-of-blood-donation-by-apheresis-by-Hans-Vrielink.pdf is cited by Wikipedia as a source on the prevalence of adverse effects. Oh shite, that's a lot more common than I expected.)

The risk I see is that donating blood temporarily disables you by a small amount. I would call it akin to being a little tipsy, a little sleep-deprived, a little drowsy, or a little low in blood pressure (oh wait, yes). Nothing bad happens just from being a little drowsy or a little sleep-deprived; it really depends on your whole situation whether something bad happens. (See the Swiss cheese model: https://en.wikipedia.org/wiki/Swiss_cheese_model )

The important question to ask is: can you take it? If yes, then go right ahead. If you are already under pressure from the complexities of life such that donating blood would be an added burden, your life is worth more (even for the simple reason that you can donate in the future, when you are more up to it).

Replies from: IffThen
comment by IffThen · 2015-08-07T14:04:41.775Z · LW(p) · GW(p)

I'm not sure where you got the 3 month figure from; in America we store the blood for less than that, no more than 6 weeks. It is true that the value of your donation is dependent on your blood type, and you may find that your local organization asks you to change your donation type (platelets, plasma, whole blood) if you have a blood type that is less convenient. I do acknowledge that this question is much more relevant for those of us who are type O-.

Replies from: Elo
comment by Elo · 2015-08-09T21:48:02.624Z · LW(p) · GW(p)

I don't know. The number in my head was that a processed blood sample can last 3 months; it's entirely possible that it doesn't.

" After processing, red cells can be stored for up to 42 days; plasma is frozen and can be stored for up to 12 months;" http://www.donateblood.com.au/faq/about-blood/how-long-until-my-blood-used

comment by ike · 2015-08-07T03:53:10.601Z · LW(p) · GW(p)

http://acesounderglass.com/2015/04/07/is-blood-donation-effective-yes/

Replies from: ChristianKl
comment by ChristianKl · 2015-08-07T07:45:18.285Z · LW(p) · GW(p)

So I’m just going to use the average effectiveness as the marginal effectiveness for now.

Right...

Replies from: ike
comment by ike · 2015-08-07T11:49:46.093Z · LW(p) · GW(p)

How would you usually go about calculating marginal effectiveness?

Replies from: ChristianKl
comment by ChristianKl · 2015-08-07T13:03:09.828Z · LW(p) · GW(p)

In this case it seems like the marginal value of blood donation should be roughly what organizations like the Red Cross are willing to pay to get additional blood donations.

You could look at how often patients get less blood because of supply issues.

Replies from: IffThen, ike
comment by IffThen · 2015-08-09T02:12:01.005Z · LW(p) · GW(p)

From the Freakonomics blog: "FDA prohibits any gifts to blood donors in excess of $25 in cumulative value".

Various articles give different amounts for the price per pint that hospitals pay, but it looks like it's in the range of $125 in most cases.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-09T21:06:40.803Z · LW(p) · GW(p)

Basically that means the FDA thinks that putting that limit on blood donations won't reduce the amount of blood donated in a critical way that results in people dying.

comment by ike · 2015-08-07T13:51:23.760Z · LW(p) · GW(p)

In this case it seems like the marginal value of blood donation should be roughly what the organizations like the red cross are willing to pay to get additional blood donations.

That is briefly mentioned in the post, and in more detail in the comments.

It does depend on certain efficiency assumptions about the Red Cross, though.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-08T08:52:56.681Z · LW(p) · GW(p)

It does depend on certain efficiency assumptions about the Red Cross, though.

If you don't believe that the Red Cross is doing a good job on this, then researching its actual practice and openly criticising it could be high leverage. There's enough money in the medical system to pay a reasonable price for the blood that's needed.

comment by NancyLebovitz · 2015-08-10T02:23:45.046Z · LW(p) · GW(p)

I assume the effectiveness of blood donation is affected by whether someone has a rare blood type.

comment by ChristianKl · 2015-08-07T07:53:03.618Z · LW(p) · GW(p)

A core idea of EA is the marginal value of a donation. The marginal value of an additional person donating blood is certainly less than a life saved.

And as a corollary, should I move my charitable giving to bribing people to donate blood whenever there is a shortage?

Certainly not. Finding funding to have enough blood donations isn't a problem. Our medical system has enough money to pay people in times of shortage.

But it doesn't want to pay people. The average quality of blood from people who have to be bribed is lower than that from people who donate blood to help their fellow citizens.

Replies from: IffThen
comment by IffThen · 2015-08-07T13:54:21.005Z · LW(p) · GW(p)

I think you are often right about the marginal utility of blood. However, it is worth noting that the Red Cross both pesters people to give blood (a lot, even if you directly ask them, multiple times, not to), and offers rewards for blood -- usually a t-shirt or a hat, but recently I've been getting $5 gift cards. Obviously, this is not intended to directly indicate the worth of the blood, but these factors do indicate that bribery and coercion are alive and well.

EDIT: The FDA prohibits any gifts to blood donors in excess of $25 in cumulative value.

It is also worth noting that there is a thriving industry paying for blood plasma, which may indicate that certain types of blood donation are significantly more valuable than others (plasma is limited-use, but can be given regardless of blood type).

comment by Houshalter · 2015-08-06T20:05:09.588Z · LW(p) · GW(p)

I stumbled across this document. I believe it may have influenced a young Eliezer Yudkowsky. He's certainly shown reverence for the author before.

This essay includes everything. A rant against frequentism and for the superiority of Bayes. A rant against modern academic institutions. A rant against mainstream quantum physics. A section about how mainstream AI is too ad hoc and not grounded in perfect Bayesian math. A closing section about sticking to your non-mainstream beliefs and ignoring critics.

I'm not really qualified to speak about most of it. The part about AI, particularly, bothered me. He attacks neural networks, and suggests that bayesian networks are best.

I initially wrote a big rant about how terribly he misunderstands neural networks. But the more I think about it, the more I like the idea of bayesian networks. The idea of ideal, perfect, universal methods appeals to my mind a great deal.

And that's a serious problem for me. I once got very into libertarianism over that. And then crazy AI methods that are totally impractical in reality.

And thinking about it some more; Bayesian networks are cool, but I don't think they could replace all of ML. I mean half of what neural networks do isn't just better inference. Sometimes we have plenty of training data and overfitting isn't much of an issue. It's just getting a model to fit to the data at all.

Bayes' theorem doesn't say anything about optimization. It's terribly expensive to approximate. And Jaynes' rant against non-linear functions doesn't even make any sense outside of boolean functions (and even there it isn't necessarily optimal; you would have to learn a lookup table for each node, which explodes exponentially with the number of inputs). (And if you are going to go full Bayesian, why stop there? Why not go to full Solomonoff Induction, or some approximation of it, at least?)

Replies from: Manfred
comment by Manfred · 2015-08-08T05:31:31.952Z · LW(p) · GW(p)

I don't think he suggests bayesian networks (which, to me, mean the causal networks of Pearl et al). Rather, he is literally suggesting trying to learn by Bayesian inference. His comments about nonlinearity I think are just to the effect that one shouldn't have to introduce nonlinearity with sigmoid activation functions; one should get nonlinearity naturally from Bayesian updates. But yeah, I think it's quite impractical.

E.g. suppose you wanted to build an email spam filter, and wanted P(spam). A (non-naive) Bayesian approach to this classification problem might involve a prior over some large population of email-generating processes. Every time you get a training email, you update your probability that a generic email comes from a particular process, and what their probability was of producing spam. When run on a test email, the spam filter goes through every single hypothesis, evaluates its probability of producing this email, and then takes a weighted average of the spam probabilities of those hypotheses to get its spam / not spam verdict. This seems like too much work.
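A toy version of that procedure, just to show where the work piles up (the hypothesis grid and the single-word "model" are invented for illustration; a real filter would have vastly more hypotheses and features):

```python
# Each "email-generating process" hypothesis fixes P(spam) and P(the word
# "offer" appears | spam / not spam). Training = Bayesian update over hypotheses;
# classification = weighted average over *every* hypothesis.
hypotheses = [(p_spam, pw_spam, pw_ham)
              for p_spam in (0.2, 0.5, 0.8)
              for pw_spam in (0.3, 0.7)
              for pw_ham in (0.05, 0.2)]
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def likelihood(h, has_word, is_spam):
    p_spam, pw_spam, pw_ham = h
    p_label = p_spam if is_spam else 1.0 - p_spam
    p_word = pw_spam if is_spam else pw_ham
    p_word = p_word if has_word else 1.0 - p_word
    return p_label * p_word

def train(has_word, is_spam):
    # Bayesian update of the posterior over hypotheses
    for h in posterior:
        posterior[h] *= likelihood(h, has_word, is_spam)
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

def p_spam(has_word):
    # The expensive part: sum over every hypothesis for each test email
    spam = sum(posterior[h] * likelihood(h, has_word, True) for h in posterior)
    ham = sum(posterior[h] * likelihood(h, has_word, False) for h in posterior)
    return spam / (spam + ham)

train(has_word=True, is_spam=True)    # one spam training email containing the word
train(has_word=False, is_spam=False)  # one ham training email without it
print(p_spam(has_word=True))          # verdict still averages over all 12 hypotheses
```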

Replies from: Houshalter
comment by Houshalter · 2015-08-08T21:57:43.297Z · LW(p) · GW(p)

I don't know, that comment really seemed to suggest Bayesian networks. I guess you could allow for a distribution of possible activation functions, but that doesn't really fit what he said about learning the "exact" nonlinear function for every possible function. That fits more with bayes nets, which use a lookup table for every node.

Your example sounds like a bayesian net. But it doesn't really fit his description of learning optimal nonlinearities for functions.

comment by G0W51 · 2015-08-06T18:18:53.289Z · LW(p) · GW(p)

The book Global Catastrophic Risks states that it does not appear plausible that molecular manufacturing will not come into existence before 2040 or 2050. I am not at all an expert on molecular manufacturing, but this seems hard to believe, given how little work seems to be going into it. I couldn't find any sources discussing when molecular manufacturing will come into existence. Thoughts?

Replies from: None
comment by [deleted] · 2015-08-07T18:19:27.881Z · LW(p) · GW(p)

There are reasons very little work is going into it - the concept makes very little sense compared to manipulating biological systems or making systems that work similar to biological systems. See http://www.sciencemag.org/content/347/6227/1221.short or this previous post of mine: http://lesswrong.com/lw/hs5/for_fai_is_molecular_nanotechnology_putting_our/97rl

comment by sjs31 · 2015-08-06T02:45:07.948Z · LW(p) · GW(p)

I realize there are sites dedicated to career discussions, but I like the advice I've seen lurking here. I'm currently interviewing for a remote-work technical position at a well-known Silicon Valley company. I'd be leaving a stable, somewhat boring, high-paying position that I've had for 10 years, for something much more exciting and intellectually challenging. I'm also old (late 40s). This particular company has a reputation for treating its employees well, but with SV's reputation for rampant ageism and other cultural oddities, what questions should I be asking and what advice would you give for evaluating the move, if an offer comes up?

Replies from: Dagon, Strangeattractor
comment by Dagon · 2015-08-06T19:19:51.132Z · LW(p) · GW(p)

I'm roughly your age, and have been working for 10 years at the same company (in the Seattle area, not SV, but we have offices there). Unlike your position, it's never been boring - I've been able to work on both immediate-impact and far-reaching topics, and there's always more interesting things coming down the pike.

I mostly want to address the age-ism and cultural oddities issue. It definitely exists, and is worse in California than other places. However, it's a topic that can't be analyzed by averages or aggregates over a geographic area. It varies so much by company, by position/role within a company, and by individual interaction with the nearby-team cultures that you really can't decide anything based on the region. This is especially true for remote work - your cultural experience will be far different from someone living there.

So, the questions you should be asking are about the expectations for your specific interaction with the employer and coworkers, rather than about the general HR-approved culture spiel you'll get if you ask generally.

comment by Strangeattractor · 2015-08-15T23:36:54.488Z · LW(p) · GW(p)

Yes, as Dagon says, it is very company-specific. Is there a way that you could talk to people who already work at the company who are not the people involved in the hiring process? If you are on LinkedIn, perhaps you could find out if you have some connections who would talk to you informally over the phone or in person.

Even though you would be working remotely, it may be worth it to go visit the place in person to get a feel for things and observe things that they wouldn't tell you explicitly, before making a decision of this magnitude.

Also, read the company's annual report. There are clues to its culture in there, and numbers that will help make sense of the company and the direction it is likely to take in the near future. Not enough people read the annual report when applying for a position at a company or evaluating an offer.

comment by ZeitPolizei · 2015-08-03T21:50:22.284Z · LW(p) · GW(p)

Using Prediction Book (or other prediction software) for motivation

Does anyone have experience with documenting things you need to do in PredictionBook (or something similar), and the effect it has on motivation/actually doing those things? Basically, is it possible to boost your productivity by making more optimistic predictions? I've been dabbling with PredictionBook and tried it with two (related) things I had to do, which did not work at all.

Thoughts, experiences?

Replies from: btrettel, btrettel, None
comment by btrettel · 2015-08-03T23:29:14.303Z · LW(p) · GW(p)

I've made a fair number of predictions about things I need to do on PredictionBook, and I don't think it has had much of any effect on my motivation. Boosting your productivity by making optimistic predictions might be possible if you are strongly motivated to be well calibrated.

Another possible use of PredictionBook for motivation is getting a more objective view on whether you might complete a task by a certain date. If others think you are overconfident, then you could put in place additional things to ensure you complete the task.

comment by btrettel · 2015-08-06T23:36:26.883Z · LW(p) · GW(p)

Another idea: Self-fulfilling prophecies

This seems to be the general name for the phenomenon of a prediction causing itself to be fulfilled. I don't have time to read the Wikipedia entry right now, but I suspect it'll offer some ideas about how to use predictions to your own advantage. Let me know if you think of anything good. I'll post a reply here if I do.

Replies from: Viliam
comment by Viliam · 2015-08-07T20:03:05.305Z · LW(p) · GW(p)

I am afraid that the perverse incentives would be harmful here. The easy way to achieve perfect accuracy in predicting your own future action is to predict failure, and then fail intentionally.

Even if one does not consciously go so far, it could still be unconsciously tempting to predict slightly smaller probability of success, because you can always adjust the outcome downwards.

To avoid this effect completely, (as a hypothetical utility maximizer) you would have to care about your success infinitely more than about predicting correctly. In which case, why bother predicting?
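A toy illustration of that perverse incentive (all the numbers here are invented; the imagined agent gets task_value for succeeding plus a squared-error calibration reward):

```python
# Agent utility = task_value * success + calib_weight * -(prediction - outcome)^2
def expected_utility(predict, p_success, task_value, calib_weight):
    win = p_success * (task_value - calib_weight * (predict - 1.0) ** 2)
    lose = (1.0 - p_success) * (-calib_weight * (predict - 0.0) ** 2)
    return win + lose

task_value, calib_weight = 1.0, 5.0
honest_try = expected_utility(predict=0.6, p_success=0.6,
                              task_value=task_value, calib_weight=calib_weight)
sandbag = expected_utility(predict=0.0, p_success=0.0,
                           task_value=task_value, calib_weight=calib_weight)
print(honest_try, sandbag)  # -0.6 vs 0.0: caring a lot about calibration rewards sandbagging
```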

comment by [deleted] · 2015-08-05T14:09:50.883Z · LW(p) · GW(p)

When you write the predictions, do you simply add optimism without changing the processes to reach a conclusion, or do you try to map out the "how" of making an outcome match more optimistic outcomes?

Replies from: ZeitPolizei
comment by ZeitPolizei · 2015-08-05T16:08:36.915Z · LW(p) · GW(p)

Good point, when I wrote down the predictions, I just used my usual unrealistically optimistic estimate of: "This is in principle doable in this time and I want to do it.", i.e. my usual "planning" mode, without considering how often I usually fail to execute my "plans". So in this case, I think I adjusted neither my optimism, nor my plans, I only put my estimate for success into actual numbers for the first time (and hoped that would do the trick).

comment by btrettel · 2015-08-03T13:09:50.565Z · LW(p) · GW(p)

Are there any good established systems for keeping track of a large number of hypotheses?

I've been using PredictionBook for this. Unfortunately it's hard to compare competing hypotheses. It would be nice to have all related hypotheses on one page, but there really isn't any mechanism to support that (tagging would be a start). The search is quite limited, as well. And due to the short comments, it's rare and clumsy to detail the evidence for each hypothesis. I guess I could figure out how to add tagging and make a pull request at GitHub, but I don't have time for that. I see someone has already filed a feature request for tags, so I'm not the only one who'd like that.

Ultimately I'd like to see something like analysis of competing hypotheses available, and I'd like to preserve the history, share this with other people, and get notifications when I should know the truth. I suppose putting a text file with the evidence in version control and also using PredictionBook might be the best approach for now. If anyone has any better ideas, I'd be interested in hearing them.

Edit: Searching and tagging would additionally be useful if you want to find previous data from which to construct a new prediction. I'm particularly interested in this for estimating how long it takes to complete tasks, but I'll have a separate database for that (which I intend to link to PredictionBook).

Replies from: btrettel
comment by btrettel · 2015-08-03T13:30:17.822Z · LW(p) · GW(p)

Now, after posting this, I see there has been a brief discussion of analysis of competing hypotheses on LessWrong before, from which you can find open-source software for the methodology (GitHub).

I also see there are other software packages for this methodology, but none of these seem quite like what I want. I'll have to look closer.

Any other systems would be of interest to me. This is a good system for comparing competing hypotheses, but does nothing for the management of non-competing hypotheses (which could still be related).

comment by [deleted] · 2015-08-05T23:56:42.080Z · LW(p) · GW(p)

Recently I've been thinking about dealing with social problems in the physical world vs the psychological world, and in the victim's world vs the perpetrator's world.

Is it more effective to deal with public anxiety over a certain danger, than to deal with the anxiety-provoking stimuli itself? For instance, if gun ownership spreads fear and anxiety among a populace, would it be more effective to address those concerns by education about the threat of increased gun ownership (irrespective of change in actual level of physical danger) or to remove the stimuli (e.g. banning or restricting gun ownership)?

Edit: In the treatment of psychological disorders, OCD and PTSD are treated by exposure, that is, interacting with the stimuli (physical), whereas depression is treated with CBT (psychological). Perhaps problems can be parsed into whether they are about avoidance coping, in which case exposure-style (physical) approaches are preferred, or 'cognitive distortions', in which case psychological approaches are indicated.

Of course, both are psychological, as much as physical. It's just that there isn't terminology to parse them in another differentiating way.

Operationalised: fight fear physically, fight persuasion psychologically.

Looks like a handy heuristic to decide between externalising and internalising.

I suppose a prerequisite to this is Dagon's approach to issues. It sort of echoes Eliezer's 'check consequentialism'.

Break down the problem, and identify your goals in dealing with it/them. Is your problem one or more of: 1) fear is unpleasant and you'd rather not experience it, regardless of any other experienced or behavioral differences? 2) there are consequences to not using an account? 3) there are consequences to trying to use an account when it's not necessary?

Replies from: Jiro, ChristianKl, MrMind
comment by Jiro · 2015-08-06T15:30:22.322Z · LW(p) · GW(p)

Is "which is more effective" even a useful question to ask?

Suppose it was found that the most effective way to deal with people's fears of terrorism is to ban Islam. Should we then ban Islam?

(Also, if you will do X when doing X is most effective, that creates incentives for people who want X to respond unusually strongly to doing X. You end up creating utility monsters.)

Replies from: IffThen
comment by IffThen · 2015-08-07T04:02:48.879Z · LW(p) · GW(p)

It is definitely a necessary question to ask. You need to have a prediction of how effective your solutions will be. You also need predictions of how practical they are, and it may be that something very effective is not practical -- e.g. banning Islam. You could make a list of things you should ask: how efficient, effective, sustainable, scalable, etc. But effective certainly has a place on the list.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-07T07:53:16.060Z · LW(p) · GW(p)

Banning religions is generally not an effective move unless your goal is to radicalise people. Christianity grew in the Roman empire at a time when being a Christian was punishable by death.

Replies from: Jiro
comment by Jiro · 2015-08-07T14:03:02.023Z · LW(p) · GW(p)

I don't see any Arians around.

Beware survivorship bias. If some religion was suppressed effectively, it's less likely that you'd have heard of it and even if you have, less likely that it would come to mind.

At any rate, my point wasn't just about effectiveness. It was that we have ideas about rights and we don't decide to suppress something just because it is effective, if doing the suppression violates someone's rights.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-07T16:14:34.485Z · LW(p) · GW(p)

Beware survivorship bias. If some religion was suppressed effectively, it's less likely that you'd have heard of it and even if you have, less likely that it would come to mind.

Neither Russia nor China moved to forbid Islam even though both have homegrown Muslim terrorists. I don't think their concern was mainly about rights.

Replies from: Jiro
comment by Jiro · 2015-08-07T17:38:18.337Z · LW(p) · GW(p)

That was a hypothetical. The hypothetical was chosen to be something that embodies the same principles but to which most people would find the answer fairly clear. The hypothetical was not chosen to actually be true.

comment by ChristianKl · 2015-08-06T09:10:09.539Z · LW(p) · GW(p)

In general the effectiveness of awareness raising programs intended to shift public perception of a risk is low.

comment by MrMind · 2015-08-06T08:09:55.606Z · LW(p) · GW(p)

I'm afraid that nobody knows, but you will have to dig into sociological studies to find out for sure.

I just want to offer you a different perspective, a parameter that might affect your investigation. It might be possible that cultural and economic influences affect the general level of anxiety in a population, so that even if you just ban a stimulus (say gun ownership), anxiety will just find another object of focus.

comment by G0W51 · 2015-08-04T03:58:20.099Z · LW(p) · GW(p)

Why don't people (outside small groups like LW) advocate the creation of superintelligence much? If it is Friendly, it would have tremendous benefits. If superintelligence's creation isn't being advocated out of fears of it being unFriendly, then why don't more people advocate FAI research? Is it just too long-term for people to really care about? Do people not think managing the risks is tractable?

Replies from: MrMind, IffThen, None
comment by MrMind · 2015-08-04T07:08:51.545Z · LW(p) · GW(p)

One answer could be that people don't really think that a superintelligence is possible. It doesn't even enter in their model of the world.

Replies from: None, G0W51
comment by [deleted] · 2015-08-04T09:03:20.348Z · LW(p) · GW(p)

Like this? https://youtube.com/watch?v=xKk4Cq56d1Y

comment by G0W51 · 2015-08-04T12:20:26.727Z · LW(p) · GW(p)

I think something else is going on. The responses to this question about the feasibility of strong AI mostly stated that it was possible, though selection bias is probably largely at play, as knowledgeable people would be more likely to answer than the ignorant would be.

Replies from: MrMind
comment by MrMind · 2015-08-05T07:11:08.323Z · LW(p) · GW(p)

Surely AI is a concept that's more and more present in Western culture, but only as fiction, as far as I can tell.
No man in the street takes it seriously, as in "it's really starting to happen". Possibly the media are paving the way for a change in that, as the recent surge of AI-related movies seems to suggest, but I would bet it's still an idea very far from their realm of possibilities. Also, once the reality of an AI was established, it would still be a jump to believe in the possibility of an intelligence superior to humans', a leap that for me is tiny but for many I suspect would not be so small (self-importance and all that).

Replies from: G0W51
comment by G0W51 · 2015-08-06T09:18:38.121Z · LW(p) · GW(p)

But other than self-importance, why don't people take it seriously? Is it otherwise just due to the absurdity and availability heuristics?

comment by IffThen · 2015-08-07T03:50:55.352Z · LW(p) · GW(p)

FWIW, I have been a long time reader of SF, have long been a believer of strong AI, am familiar with friendly and unfriendly AIs and the idea of the singularity, but hadn't heard much serious discussion on development of superintelligence. My experience and beliefs are probably not entirely normal, but arose from a context close to normal.

My thought process until I started reading LessWrong and related sites was basically split between "scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks -- playing chess, obscure math problems, managing complicated data traffic" and "intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever with lots of dead ends". Most of what I had read in terms of spontaneous AI was fairly silly SF premises (lost packets on the internet become sentient!) or in the far future, after many decades of work on AI finally resulting in a super-AI.

I also believe that science reporting downplays the AI aspects of computer advances. Siri, self-driving cars, etc. are no longer referred to as AI in the way they would have been when I was growing up; AI is by definition something that is science fiction or well off in the future. Anything that we have now is framed as just an interesting program, not an 'intelligence' of any sort.

comment by [deleted] · 2015-08-04T22:58:57.427Z · LW(p) · GW(p)

If you're not reading about futurism, it's unlikely to come up. There aren't any former presidential candidates giving lectures about it, so most people have never heard of it. Politics isn't about policy as Robin Hanson likes to say.

comment by Gunnar_Zarncke · 2015-08-03T07:53:35.315Z · LW(p) · GW(p)

IQ is said to correlate with life success. If rationality is about 'winning at life' wouldn't it be sensible to define a measure of 'life success'? Like the average increase of some life success metric like income over time.

Replies from: Viliam, ZeitPolizei, Strangeattractor, Lumifer
comment by Viliam · 2015-08-04T08:55:27.992Z · LW(p) · GW(p)

It's complicated, but maybe we could make some approximations. For example: "list ten things you care about, create a metric for each of them", providing a list of what people usually care about.

comment by ZeitPolizei · 2015-08-03T16:27:55.601Z · LW(p) · GW(p)

What purpose would such a measure serve? And do you try to find a universal measure or one that is individual for every person? Because different people have different goals, you could try to measure how well reality aligns with their goals, but then you just select for people who can accurately predict what they can achieve.

I have a definition of success. For me, it's very simple. It's not about wealth and fame and power. It's about how many shining eyes I have around me.

--Benjamin Zander

Replies from: Viliam
comment by Viliam · 2015-08-04T08:57:46.799Z · LW(p) · GW(p)

What purpose would such a measure serve?

A crude check of how much you are lying to yourself, for example if you believe that reading LessWrong improved your life. You could enter some data and get the result that no, your life is approximately the same as it was ten years ago. On the other hand, you could also find an improvement that you didn't realize, because of the hedonic treadmill.

comment by Strangeattractor · 2015-08-15T23:48:31.674Z · LW(p) · GW(p)

Bhutan's Gross National Happiness Index, and various indices inspired by it, attempt to measure this in populations. https://en.wikipedia.org/wiki/Happiness_economics

comment by Lumifer · 2015-08-03T16:33:59.104Z · LW(p) · GW(p)

wouldn't it be sensible to define a measure of 'life success'?

"He who dies with the most toys wins" :-P

Replies from: None
comment by [deleted] · 2015-08-03T19:20:24.689Z · LW(p) · GW(p)

See, that was before they invented chess...

comment by DataPacRat · 2015-08-04T02:21:10.687Z · LW(p) · GW(p)

Cardinal numbers for utilons?

I have a hunch.

Trying to add up utilons or hedons can quickly lead to all sorts of problems, which are probably already familiar to you. However, there are all sorts of wacky and wonderful branches of non-intuitive mathematics, which may prove of more use than elementary addition. I half-remember that regular math can be treated as part of set theory, and there are various branches of set theory which can have some, but not all, of the properties of regular math - for example, being able to say that X < Y, but not necessarily that X+Z > Y. A bit of Wikipedia digging has reminded me of Cardinal numbers, which seem at least a step in the right direction: If the elements of set X have a one-to-one correspondence with the elements of set Y, then they're equal, and if not, then they're not. This seems to be a closer approximation of utilons than the natural numbers, such as if, say, the elements of set X were the reasons that X is good.

But I could be wrong.

I'm already well past the part of math-stuff that I understand well; I'd need to do a good bit of reading just to get my feet back under me. Does anyone here, more mathematically-inclined than I, have a better intuition of why this approach may or may not be helpful?

(I'm asking because I'm considering throwing in someone who tries to follow a cardinal-utilon-based theory of ethics in something I'm writing, as a novel change from the more commonly-presented ethical theories. But to do that, I'd need to know at least a few of the consequences of this approach might end up being. Any help would be greatly appreciated.)

Replies from: Manfred, Toggle, asr, Douglas_Knight, DanielLC, MrMind
comment by Manfred · 2015-08-04T04:14:15.803Z · LW(p) · GW(p)

I think the most mathy (and thus, best :P) way to go about this is to think of the properties that these "utility" objects have, and just define them as objects with those properties.

For starters, you can compare them for size - the relationship is either bigger, smaller, or the same. And you can do an operation to them that is a weighted sum - if you have two utilities that are different, you can do this operation to them and get a utility that's in between them, with a third parameter (the probability of one versus the other) distinguishing between different applications of this operation.

Actually, I think this sort of thing is pretty much what Savage did.

comment by Toggle · 2015-08-04T06:51:39.643Z · LW(p) · GW(p)

Seems to be an established conversation around this point, see: https://en.wikipedia.org/wiki/Ordinal_utility https://en.wikipedia.org/wiki/Cardinal_utility

"The idea of cardinal utility is considered outdated except for specific contexts such as decision making under risk, utilitarian welfare evaluations, and discounted utilities for intertemporal evaluations where it is still applied. Elsewhere, such as in general consumer theory, ordinal utility with its weaker assumptions Is preferred because results that are just as strong can be derived."

Or you could go back to the original Theory of Games proof, which I believe was ordinal - it's going to depend on your axioms. In that document, von Neumann definitely didn't go so far as to treat utility as simply an integer.

Replies from: DataPacRat, roystgnr
comment by DataPacRat · 2015-08-04T22:04:06.779Z · LW(p) · GW(p)

Seems to be an established conversation around this point

Well, I guess coming up with an idea a century-ish old could be considered better than /not/ having come up with something that recent...

Replies from: Toggle
comment by Toggle · 2015-08-05T02:22:22.319Z · LW(p) · GW(p)

When I was a freshman, I invented the electric motor! I think it's something that just happens when you're getting acquainted with a subject, and understand it well- you get a sense of what the good questions are, and start asking them without being told.

comment by roystgnr · 2015-08-06T20:06:35.777Z · LW(p) · GW(p)

That's one of the most amusing phrases on Wikipedia: "specific contexts such as decision making under risk". In general you don't have to make decisions and/or you can predict the future perfectly, I suppose.

comment by asr · 2015-08-04T03:26:10.716Z · LW(p) · GW(p)

It's a tempting thought. But I think it's hard to make the math work that way.

I have a lovely laptop here that I am going to give you. Suppose you assign some utility U to it. Now instead of giving you the laptop, I give you a lottery ticket or the like. With probability P I give you the laptop, and with probability 1 - P you get nothing. (The lottery drawing will happen immediately, so there's no time-preference aspect here.) What utility do you attach to the lottery ticket? The natural answer is P * U, and if you accept some reasonable assumptions about preferences, you are in fact forced to that answer. (This is the basic intuition behind the von Neumann-Morgenstern Expected Utility Theorem.)

Given that probabilities are real numbers, it's hard to avoid utilities being real numbers too.
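Written out (my notation, with U(nothing) normalized to 0), the step is just:

```latex
U(\text{ticket}) = P \cdot U(\text{laptop}) + (1 - P) \cdot U(\text{nothing}) = P \cdot U
```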

Replies from: Lumifer, DataPacRat
comment by Lumifer · 2015-08-04T04:04:01.312Z · LW(p) · GW(p)

it's hard to avoid utilities being real numbers too

If we are going into VNM utility, it is defined as the output of the utility function and the utility function is defined as returning real numbers.

comment by DataPacRat · 2015-08-04T03:46:12.793Z · LW(p) · GW(p)

I could try to rescue the idea by throwing in units, the way multiplying distance units by time units gives you speed units... but I'd just be trying to technobabble my way out of the corner.

I think the most that I can try to rescue from this failed hunch is that some offbeat and unexpected part of mathematics might be able to be used to generate useful, non-obvious conclusions for utilitarian-style reasoning, in parallel with math based on gambling turning out to be useful for measuring confidence-strengths more generally. Anybody have any suggestions for such a subfield which won't make any actual mathematicians wince, should they read my story?

comment by Douglas_Knight · 2015-08-06T21:41:21.966Z · LW(p) · GW(p)

The von Neumann-Morgenstern theorem says that if you are uncertain about the world then you can denominate your utility in probabilities. Since probabilities are real numbers, so are utilities.

comment by DanielLC · 2015-08-04T17:55:00.448Z · LW(p) · GW(p)

There are various ways to get infinite and infinitesimal utility. But they don't matter in practice. Everything but the most infinite potential producer of utility will only matter as a tie breaker, which will occur with probability zero.

Cardinal numbers also wouldn't work well even as infinite numbers go. You can't have a set with half an element, or with a negative number of elements. And is there a difference between a 50% chance of uncountable utilons and a 100% chance?

comment by MrMind · 2015-08-04T07:17:58.977Z · LW(p) · GW(p)

I don't think that non-additivity is the only thing that matters about utilons: sometimes they do add, after all.

Besides that, yes, infinite cardinal numbers can have the property you cite: since for them
X + Z = max(X, Z),
if X < Y and Z < Y, it follows that
X + Z < Y

comment by Thomas · 2015-08-03T12:48:34.403Z · LW(p) · GW(p)

A voice of reason.

Against Musk, Hawking and all other "pacifists".

Replies from: Manfred, ZeitPolizei, roystgnr, MrMind, LessWrong1
comment by Manfred · 2015-08-03T19:22:59.223Z · LW(p) · GW(p)

Meh. The assumption that bans won't work seems to miss most of the subtlety of reality, which can range from the failure of U.S. prohibition of alcohol to Japan's two gun-related homicides per year.

comment by ZeitPolizei · 2015-08-03T17:03:22.011Z · LW(p) · GW(p)

Trying to summarize here:

The open letter says: "If we allow autonomous weapons, a global arms race will make them much cheaper and much more easily available to terrorists, dictators etc. We want to prevent this, so we propose to outlaw autonomous weapons."

The author of the article argues that the technology gets developed either way and will be cheaply available, and then continues to say that autonomous weapons would reduce casualties in war.

I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties. The actual question is, how much (more) damage can someone without qualms about ethics do with autonomous weapons, and can we implement policies to minimize the availability of autonomous weapons to people we don't want to have them.

I think the main problem with this whole discussion was already mentioned elsewhere: Robotics and AI experts aren't experts on politics, and don't know what the actual effects of an autonomous weapon ban would be.

Replies from: Thomas, ChristianKl
comment by Thomas · 2015-08-03T17:08:59.715Z · LW(p) · GW(p)

Robotics and AI experts aren't experts on politics

True. And the experts in politics usually don't want to even consider such childish fantasies as autonomous killing robots.

Until, at least, they are here.

comment by ChristianKl · 2015-08-04T08:02:55.364Z · LW(p) · GW(p)

I suspect that most people agree, that (if used ethically) autonomous weapons reduce casualties.

What does "if used ethically" mean?

This is a bit like the debate around tasers. Tasers seem like a good idea because they allow policemen to use less force. In reality, in nearly every case where a policeman would have used a real gun in the past, they still use a real gun; the taser shots are additional.

The actual question is, how much (more) damage can someone without qualms about ethics do with autonomous weapons, and can we implement policies to minimize the availability of autonomous weapons to people we don't want to have them.

The US is already using its drones in Pakistan in a way that violates many passages of international law, like shooting at people who rescue wounded people. That's not in line with ethical use. They use the weapons whenever they expect that to produce a military advantage.

Robotics and AI experts aren't experts on politics, and don't know what the actual effects of an autonomous weapon ban would be.

Elon Musk does politics in the sense that he has experience in lobbying for laws getting passed. He likely has people with deeper knowledge on staff.

On the other hand I don't see that the author of the article has political experience.

Replies from: ZeitPolizei, Douglas_Knight
comment by ZeitPolizei · 2015-08-04T11:03:03.182Z · LW(p) · GW(p)

What does "if used ethically" mean?

I was thinking mainly along the lines of using it in regular combat vs. indiscriminately killing protesters.
Autonomous weapons should eventually be better than humans at (a) hitting targets, thus reducing combatant casualties on the side that uses them, and (b) differentiating between combatants and non-combatants, thus reducing civilian casualties. This is working under the assumption that something like a guard robot would accompany a patrolling squad. Something like a swarm of small drones that sweep a city to find and subdue all combatants is of course a different matter.

The US is already using it's drones in Pakistan in a way that violates many passages of international law, like shooting at people who rescue wounded people.

I wasn't aware of this, do you have a source on that? Regardless, the number of civilian casualties from drone strikes is definitely too high, from what I know.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-04T21:02:16.649Z · LW(p) · GW(p)

I was thinking mainly along the lines of using it in regular combat

US drones in Pakistan usually don't strike in regular combat but strike a house while people sleep in it.

indiscriminately killing protesters

If you want to kill protesters you don't need drones; you can simply shoot into the crowd. In most cases, however, that doesn't make sense and is not an effective move.

If you want to understand warfare you have to move past the standard spin.

I wasn't aware of this, do you have a source on that?

http://www.theguardian.com/commentisfree/2012/aug/20/us-drones-strikes-target-rescuers-pakistan

Regardless, the number of civilian casualties from drone strikes is definitely too high, from what I know.

The fact that civilian casualties exist doesn't show that a military violates ethical standards. Shooting at rescuers, on the other hand, is a violation of ethical standards.

From a military standpoint there's an advantage to be gained by killing the other side's doctors; from an ethical perspective it's bad, and there's international law against it.

The US tries to maximize military objectives instead of ethical ones.

comment by Douglas_Knight · 2015-08-06T20:46:19.473Z · LW(p) · GW(p)

In reality in nearly every case where a policeman wanted to use a real gun in the past they still use a real gun.

Do you have a source for that?

One method would be to look at the number of police killings and see if it changed the trend. But it's pretty tough to get the number of American police killings, let alone estimate a trend and determine causes.

One could imagine a policy decision to arm people with tasers instead of guns, which is not subject to your complaint. People are rarely disarmed, but new locations could make different choices about how to arm security guards. But I do not know the proportion of guards armed in various ways, let alone the trends.

comment by roystgnr · 2015-08-03T20:07:53.816Z · LW(p) · GW(p)

Where did "pacifists" and the scare quotes around it come from?

Replies from: ChristianKl
comment by ChristianKl · 2015-08-04T07:51:25.630Z · LW(p) · GW(p)

The UFAI debate isn't mainly about military robots.

comment by MrMind · 2015-08-04T07:39:25.011Z · LW(p) · GW(p)

The article's two main points are:

1 - a ban won't work
2 - properly programmed autonomous weapons (AW) could reduce casualties

So, the conclusion goes, we should totally dig AW.

Point n° 2 is the most fragile: they could very well reduce or increase casualties, depending on how they are programmed. It's also true that the availability of cheaper soldiers might make for cheaper (i.e., more affordable) wars. But point n° 1 is also debatable: after all, the ban on chemical and biological weapons has worked, sorta.

comment by Gunslinger (LessWrong1) · 2015-08-03T13:07:06.134Z · LW(p) · GW(p)

The real question that we should be asking is this

This sounds like a straw man, right there at the beginning. Stopped there.

comment by [deleted] · 2015-08-03T07:18:16.652Z · LW(p) · GW(p)

Rhetorical solution: Multi-armed bandit problem

disclaimer: I'm not a computer scientist. I read up on the problem to see what the takeaways might be for decision theory. Since I'm not trained in any formal logic, I don't know how to represent this solution in symbols. I think of the problem in terms of things like: am I spending too much time becoming smarter, versus doing things that are smart?

  • Exploitation dominates exploration because, unless exploration is by definition a subset of exploitation, it would not be optimising expected utility for a given optimisation problem.

  • If exploration is a subset of exploitation, then unless components of exploration have negative utility (and thus wouldn't be included in exploitation anyway), exploitation will have a higher expected utility than exploration (see the toy sketch below).
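For concreteness, here is a toy comparison of a pure-exploitation agent against one that explores a little (made-up arm payoffs; an illustration of the tradeoff, not a proof either way):

```python
import random

random.seed(0)
true_means = [0.3, 0.5, 0.7]   # hidden payoff probability of each arm

def run(epsilon, pulls=10000):
    estimates, counts, total = [0.0] * 3, [0] * 3, 0.0
    for _ in range(pulls):
        if random.random() < epsilon:
            arm = random.randrange(3)                 # explore
        else:
            arm = estimates.index(max(estimates))     # exploit current best guess
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / pulls

print("pure exploitation:", run(epsilon=0.0))   # tends to lock onto whatever it tried first
print("10% exploration:  ", run(epsilon=0.1))   # usually finds the 0.7 arm
```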

Thoughts?