Open thread, July 31 - August 6, 2017

post by Thomas · 2017-07-31T14:41:09.485Z · LW · GW · Legacy · 72 comments
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

72 comments

Comments sorted by top scores.

comment by SodaPopinski · 2017-08-01T17:05:04.072Z · LW(p) · GW(p)

This is a very interesting part of an interview with Freeman Dyson, where he talks about how computation could go on forever even if the universe faces a heat-death scenario. https://www.youtube.com/watch?v=3qo4n2ZYP7Y

Replies from: Thomas
comment by Thomas · 2017-08-01T22:35:23.220Z · LW(p) · GW(p)

Even if a computation goes on forever, it doesn't necessarily perform more than a certain finite amount of computation. And when we are below the Planck temperature scale, further cooling is useless for your heat-driven machines. Life stops.

I believe that there is a lot of computing down there in the coldness around absolute zero. But not an infinite amount.

Replies from: SodaPopinski, alicey
comment by SodaPopinski · 2017-08-02T05:47:54.463Z · LW(p) · GW(p)

I believe Dyson is saying there could indeed be an infinite amount. Here is a Wikipedia article about it: https://en.wikipedia.org/wiki/Dyson%27s_eternal_intelligence and the paper itself: http://www.aleph.se/Trans/Global/Omega/dyson.txt

Replies from: Thomas
comment by Thomas · 2017-08-02T06:21:46.131Z · LW(p) · GW(p)

Yes. But there is another problem. When a super-civilization goes to sleep, to let the Universe cool some more, it has to set up some alarm-clock mechanism to wake it up after some time. Which needs some energy. If they wire their alarm clock to a thermometer, to wake them up when it is cool enough, is that energy free? I don't think so.

Well, I don't believe it's possible to postpone the end of all calculations indefinitely, but I still find this Dyson text fascinating and very relevant.

comment by alicey · 2017-08-02T01:14:10.624Z · LW(p) · GW(p)

blather

Replies from: g_pepper
comment by g_pepper · 2017-08-02T03:07:01.756Z · LW(p) · GW(p)

Thomas's comment seems quite sensible to me.

It seems to me that Dyson's argument was that as temperature falls, so does the energy required for computing. So, the point in time when we run out of available energy to compute diverges. But, Thomas reasonably points out (I think - correct me if I am misrepresenting you Thomas) that as temperature falls and the energy used for computing falls, so does the speed of computation, and so the amount of computation that can be performed converges, even if we were to compute forever.
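
The convergence point can be made concrete with a toy model (my sketch; the numbers are illustrative, not Dyson's):

```python
# Toy model of the disagreement: each epoch lasts twice as long in calendar
# time; work_ratio is how much subjective computation an epoch delivers
# relative to the previous one. Dyson's scenario needs the per-epoch work
# not to shrink; the objection is that cooling pushes the ratio below 1,
# so total subjective time converges.

def totals(epochs, work_ratio):
    calendar = subjective = 0.0
    duration, work = 1.0, 1.0
    for _ in range(epochs):
        calendar += duration
        subjective += work
        duration *= 2.0      # epochs stretch out as the universe cools
        work *= work_ratio   # subjective computation done in this epoch
    return calendar, subjective

for ratio in (1.0, 0.5):
    cal, subj = totals(40, ratio)
    print(f"work_ratio={ratio}: calendar={cal:.2e}, subjective={subj:.2f}")
# work_ratio=1.0: subjective time grows without bound (Dyson's hope)
# work_ratio=0.5: subjective time converges to 2.00, though calendar time diverges
```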

Also, isn't Thomas correct that Planck's constant puts an absolute minimum on the amount of energy required for computation?

These seem like perfectly reasonable responses to Dyson's comments. What am I missing?

Replies from: Thomas, alicey
comment by Thomas · 2017-08-02T06:00:38.350Z · LW(p) · GW(p)

You understand me correctly in every way. If I am right, that's another matter. I think I am.

Dyson opens up another interesting question with this. Is it better to survive forever with a finite subjective time T, or is it better to consume 2*T of experience in a finite amount of calendar time?

Replies from: Baughn
comment by Baughn · 2017-08-06T12:39:58.241Z · LW(p) · GW(p)

Dyson opens up another interesting question with this. Is it better to survive forever with a finite subjective time T, or is it better to consume 2*T of experience in a finite amount of calendar time?

Isn't 2*T obviously better? Maybe I'm missing something here...

Replies from: Thomas
comment by Thomas · 2017-08-06T20:28:59.079Z · LW(p) · GW(p)

Isn't 2*T obviously better?

We are on the same page here. But a lot of people want to survive as long as possible. Not as much as possible, but as long as possible.

Replies from: gjm
comment by gjm · 2017-08-07T13:12:58.258Z · LW(p) · GW(p)

I would guess that most people who want that simply haven't considered the difference between "how much" and "how long", and if convinced of the possibility of decoupling subjective and objective time would prefer longer-subjective to longer-objective when given the choice.

(Of course the experiences one may want to have will typically include interacting with other people, so "compressed" experience may be useful only if lots of other people are similarly compressing theirs.)

Replies from: Thomas
comment by Thomas · 2017-08-07T20:01:34.747Z · LW(p) · GW(p)

Well. If one (Omega or someone like him) asked me to choose between 1000 years compressed into the next hour, or just 100 years uncompressed in real time from now on ... I am not sure what I would say to him.

Replies from: gjm
comment by gjm · 2017-08-07T21:25:17.589Z · LW(p) · GW(p)

Again, the existence of other people complicates this (as it does so many other things). If I'm offered this deal right now and choose to have 1000 years of subjective experience compressed into the next hour and then die, then e.g. I never get to see my daughter grow up, I leave my wife a widow and my child an orphan, I never see any of my friends again, etc. It would be nice to have a thousand years of experiences, but it's far from clear that the benefits outweigh the costs.

This doesn't seem to apply in the case of, e.g., a whole civilization choosing whether or not to go digital, and it would apply differently if this sort of decision were commonplace.

comment by alicey · 2017-08-10T02:56:30.708Z · LW(p) · GW(p)

you are missing the concept of blather

Replies from: g_pepper
comment by g_pepper · 2017-08-10T04:14:33.726Z · LW(p) · GW(p)

The definition of "blather" that I find is:

"talk long-windedly without making very much sense", which does not sound like Thomas's comment.

What definition are you using?

comment by Daniel_Burfoot · 2017-07-31T23:04:18.317Z · LW(p) · GW(p)

Against Phrasal Taxonomy Grammar, an essay about how any approach to grammar theory based on categorizing every phrase in terms of a discrete set of categories is doomed to fail.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2017-08-03T10:02:06.069Z · LW(p) · GW(p)

I'm curious about your "system that doesn’t require a strict taxonomy". Is that written up anywhere? Also, does your work have any relevance to how children should be taught grammar in school?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2017-08-06T22:47:17.655Z · LW(p) · GW(p)

I haven't written it up, though you can see my parser in action here.

One key concept in my system is the Theta Role and the associated rule: a phrase can have only one structure for each role (subject, object, determiner, etc.).

I don't have much to say about teaching methods, but I will say that if you're going to teach English grammar, you should know the correct grammatical concepts that actually determine English grammar. My research is an attempt to find the correct concepts. There are some things that I'm confident about and some areas where the system needs work.

One very important aspect of English grammar is argument structure. Different verbs characteristically can and cannot take various types and combinations of arguments, such as direct objects, indirect objects, infinitive complements, and sentential complements. For example, the word "persuade" takes a sentential (that-) complement, but only when also combined with a direct object ("I will persuade [him] that the world is flat" is incorrect without the direct object). In contrast, the verb "know" can take either a direct object or a that-complement, but not both. To speak English fluently, you need to memorize all these combinations, but before you memorize them, you need to know that the concept exists.
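
To make the argument-structure idea concrete, here is a minimal sketch of how such verb frames could be written down as data (a hypothetical encoding of my own, not Daniel_Burfoot's actual system; the frame names are invented):

```python
# Hypothetical encoding of verb argument frames (illustrative only).
# Each verb maps to the set of argument combinations it licenses.
VERB_FRAMES = {
    "persuade": {("direct_object", "that_complement"),  # "persuade him that ..."
                 ("direct_object", "infinitive")},       # "persuade him to go"
    "know": {("direct_object",),                         # "know the answer"
             ("that_complement",)},                      # "know that ..."
}

def is_licensed(verb, args):
    """True if the verb allows exactly this combination of arguments."""
    frames = {frozenset(f) for f in VERB_FRAMES.get(verb, set())}
    return frozenset(args) in frames

print(is_licensed("persuade", ["direct_object", "that_complement"]))  # True
print(is_licensed("persuade", ["that_complement"]))  # False: needs the object
print(is_licensed("know", ["direct_object", "that_complement"]))      # False: not both
```

A real lexicon would need thousands of such entries, which is exactly the memorization burden described above.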

Replies from: Valerio
comment by Valerio · 2017-08-09T06:00:18.451Z · LW(p) · GW(p)

Daniel, I'm curious too. What do you think about Fluid Construction Grammar? Can it be a good theory of language?

comment by Sandi · 2017-07-31T20:25:23.416Z · LW(p) · GW(p)

What would be the physical/neurological mechanism powering ego depletion, assuming it existed? What stops us from doing hard mental work all the time? Is it even imaginable to, say, study every waking hour for a long period of time, without ever having an evening of YouTube videos to relax? I'm not asking what the psychology of willpower is, but rather whether there's a neurology of willpower.

And beyond ego depletion, there's a very popular model of willpower in which the brain is seen as a battery, drained when hard work is being done and recharged when relaxing. I see this as a deceptive intuition pump, since it's easy to imagine and yet it doesn't explain much. What is this energy that is being used up, physically?

Surely it isn't actual physical energy (in terms of calories), since I recall that the brain's energy consumption isn't significantly increased by studying. In addition, physical energy is abundant nowadays because food is plentiful. If a lack of physical energy were the issue, we could just keep going by eating more sugar.

The reason we can't work out for 12 hours straight is understood physiologically. Admittedly, I don't understand it very well myself, but I'm sure an expert could give reasons involving muscles being strained, energy stores being depleted, and so on. (Perhaps I would understand the mental analogue better if I understood this.) I'm looking for a similar mechanism in the brain.

To better explain what I'm talking about, and what kind of answer would be satisfying, I'll give a couple of fake explanations.

  • Hard mental work sees higher electrical activity in the brain. If this is kept up for too long, neurons would get physically damaged due to their sensitivity. To prevent damage, brains evolved a feeling of tiredness when the brain is overused.
  • There is a resource (e.g. dopamine) that is literally depleted during tasking brain operation and regenerated when resting.
  • There could also be a higher-level explanation. The inspiration for this came from an old text by Yudkowsky. (I didn't seriously consider those explanations as an answer to my problem, because of reasons.) I won't quote the source since I think that post was supposed to be deleted. This excerpt gives a good intuitive picture:

My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once.

Let me speculate on the answer.

1) There is no neurological limitation. The hardware could, theoretically, run demanding operations indefinitely. But theories like ego depletion are deceptive memes that spread through the culture, and so we came to accept a nonexistent limitation. Our belief in the myth is so strong that it might as well be true. The same mechanism as learned helplessness. Needless to say, this could potentially be overcome.

2) There is no neurological limitation, but otherwise useful heuristics stop us from kicking into higher gear. All of the psychological explanations for akrasia, the kind that are discussed all the time here, come into play. For example, YouTube videos provide a tiny but steady and plentiful stimulus to the reward system, unlike programming, which can have a much higher payout, but one that's inconsistent, unreliable and coupled with frustration. And so, due to a faulty decision-making procedure, the brain never gets to the point where it works to its fullest potential. The decision-making procedure is otherwise fast and correct enough, and thus mostly useful, so simply removing it isn't possible. The same mechanism as cognitive biases. It might be similar to how we cannot do arithmetic effortlessly even though the hardware is probably there.

3) There is an in-built neurological limitation because of an evolutionary advantage. Now, defining this evolutionary advantage can lead back to the original problem. For example, it cannot be about minimizing energy consumption, as discussed above. But other explanations don't run into this problem. Laziness can often lead to more efficient solutions, which is beneficial, so we evolved ego depletion to promote it, and now we're stuck with it. Of course, all the pitfalls customary to evolutionary psychology apply, so I won't go into depth about this.

4) There is a neurological limitation deeply tied to the way the brain works. Kind of like how cars can only go so fast, and it's not good for them if you push them to maximum speed all the time. At first glance, the brain propagates charge through neurons in the same way no matter how tiring the task it's performing. But one could imagine non-trivial complexities in how the brain functions that account for this particular limitation. I dare not speculate further, since I know so little about neurology.

Replies from: ChristianKl, IlyaShpitser, Lumifer, gjm
comment by ChristianKl · 2017-07-31T21:20:38.451Z · LW(p) · GW(p)

I don't think any of those explanations is true, but writing out my alternative theory and doing it full justice is a longer project.

I think part of the problem is that "hard mental work" is a category that's very far from a meaningful category on the physical/neurological level. Bad ontology leads to bad problem modeling and understanding.

comment by IlyaShpitser · 2017-07-31T21:37:18.197Z · LW(p) · GW(p)

Imo: legislative gridlock of the congress inside your head (i.e., a software issue). Unclear if it's a problem or not.

comment by Lumifer · 2017-07-31T21:00:15.921Z · LW(p) · GW(p)

The question about willpower depletion is different from the question about mental fatigue, and you tend to conflate the two. Which one do you mean?

comment by gjm · 2017-08-07T14:41:53.784Z · LW(p) · GW(p)

I have a hazy memory that there's some discussion of exactly this in Keith Stanovich's book "What intelligence tests miss".

Unfortunately, my memory is hazy enough that I don't trust it to say accurately (or even semi-accurately) what he said about it :-). So this is useful only to the following extent: if Sandi, or someone else interested in Sandi's question, has a copy of Stanovich's book or was considering reading it anyway, then it might be worth a look.

comment by whpearson · 2017-07-31T19:15:47.512Z · LW(p) · GW(p)

I've decided to create a website/community that will focus on improving autonomy of humans.

https://improvingautonomy.wordpress.com

The first goal is to explore how to do intelligence augmentation of humans in a safe way (using population dynamics, etc.).

I think that this is both a more likely development path than singleton AIs and also a more desirable one if done well.

Still a work in progress. I'm putting it here because, if people have good arguments that this path should not be developed at all, I would like to hear them before I get too embroiled in it.

Replies from: Lumifer
comment by Lumifer · 2017-07-31T19:56:28.596Z · LW(p) · GW(p)

By "humans" you mean "individuals", right?

I expect that one of the arguments against this that you will see (I do not subscribe to it) is that highly capable individuals are just too dangerous. Basically, power can be used not only to separate oneself from a society you don't like, but also to hurt it. Contemporary technological society is fragile, and a competent malcontent can do terrible damage to it.

Replies from: whpearson
comment by whpearson · 2017-07-31T20:04:01.214Z · LW(p) · GW(p)

Individuals, yup. That is the failure mode to guard against.

I want to ask whether it is possible to get a safe advanced society with things like mutual inspection for defection, and technology-sharing groups formed with other pro-social people, such that anti-social people do not get a decisive strategic advantage (or much advantage at all).

Replies from: Lumifer
comment by Lumifer · 2017-07-31T20:48:06.689Z · LW(p) · GW(p)

A few issues immediately come to mind.

  • What's "pro-social" and "anti-social"? In particular, what if you're pro-social, but pro-a-different-social? Consider your standard revolutionaries of different kinds.

  • Pro- and anti-social are not immutable characteristics. People change.

  • If access to technology/power is going to be gated by conformity, the whole autonomy premise goes out of the window right away.

Replies from: whpearson
comment by whpearson · 2017-07-31T21:30:52.091Z · LW(p) · GW(p)

Pro-social is not trying to take over the entire world, or threatening to. It is agreeing to mainly non-violent competition. Anti-social is genocide/pogroms, biocide, mind crimes, bio/nano warfare.

I'd rather have no gating, but some gating might be required at different times.

Replies from: Dagon, Lumifer
comment by Dagon · 2017-07-31T22:57:07.601Z · LW(p) · GW(p)

mainly non-violent competition

Heh. If you think there's any such thing as "non-violent competition", you're not seeing through some levels of abstraction. All resource allocation is violent or has the threat of violence behind it.

Poor competitors fail to reproduce, and that is the ultimate violence.

Replies from: whpearson
comment by whpearson · 2017-08-01T19:58:01.049Z · LW(p) · GW(p)

If the competition stops a person from reproducing, then sure, it is a little violent. If it stops an idea from reproducing, then I am not so sure I care about stopping all violence.

Poor competitors fail to reproduce, and that is the ultimate violence.

Failure to reproduce is not the ultimate violence. Killing someone and killing everyone vaguely related to them (including the bacteria that share a genetic code), destroying their culture and all its traces is far more violent.

Replies from: Dagon
comment by Dagon · 2017-08-01T21:19:05.740Z · LW(p) · GW(p)

If it stops an idea reproducing, then I am not so sure I care about stopping all violence.

Ideas have no agency. Agents competing for control/use of resources contain violence. I probably should back up a step and say "denial of goals is the ultimate violence". If you have a different definition (preferably something more complete than "no hitting"), please share it.

comment by Lumifer · 2017-08-01T15:06:58.197Z · LW(p) · GW(p)

There was a reason I mentioned revolutionaries.

Let's take something old, say the French Revolution. Which side is pro-social? Both? Neither?

Let's take a hypothetical, say there is a group in Iran which calls the current regime "medieval theocracy" and wants to change the society to be considerably more Western-style. Are they pro-social?

comment by Thomas · 2017-07-31T15:06:19.080Z · LW(p) · GW(p)

A summer problem.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2017-07-31T16:57:22.062Z · LW(p) · GW(p)

I guess the important thing to realise is that the size of atoms is irrelevant to the problem. If we considered two atoms joined together to be a new "atom", then they would be twice as heavy, so the forces would be four times as strong, but there would be only half as many atoms, so there would be four times fewer pairs.

So the answer is just the integral, as r and r' range over the interior of the Earth, of G ρ(r) ρ(r')/|r-r'|^2, where ρ(r) is the density. We can assume constant density, but I still can't be bothered to do the integral.

The Earth has mass M = 5.97*10^24 kg and radius R = 6.37*10^6 m, G = 6.674*10^-11 m^3 kg^-1 s^-2, and we want an answer in Newtons = m kg s^-2. So by dimensional analysis, the answer is about G M^2/R^2 = 5.86*10^25 N.
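
That last number is easy to check (a quick sketch in Python):

```python
# Check the dimensional-analysis estimate G*M^2/R^2 from above.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24    # mass of the Earth, kg
R = 6.37e6     # radius of the Earth, m

print(f"{G * M**2 / R**2:.3e} N")  # -> 5.862e+25 N, about one Earth-weight
```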

Replies from: Thomas, cousin_it, Manfred
comment by Thomas · 2017-07-31T17:28:32.766Z · LW(p) · GW(p)

You estimate around 1 Earth weight on Earth's surface.

comment by cousin_it · 2017-08-01T04:41:13.488Z · LW(p) · GW(p)

I guess the important thing to realise is that the size of atoms is irrelevant to the problem.

That doesn't seem right, though? Imagine a one-dimensional version of the problem. If a stick of length 1 is divided into n atoms weighing 1/n each, then each pair of adjacent atoms is a distance 1/n apart, so the force between them (taking G = 1) is (1/n)^2/(1/n)^2 = 1. Since there are n such pairs, the total force grows at least linearly with n. And it gets even worse if some atoms are disproportionately closer to others (as in molecules).
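
A brute-force check of that scaling (my sketch, with G = 1):

```python
# 1D stick of unit length and unit mass, split into n equal atoms (G = 1).
# Summing all pairwise attractions shows the total growing with n; the
# nearest-neighbour pairs alone contribute (1/n^2)/(1/n)^2 = 1 each.

def total_force(n):
    positions = [(i + 0.5) / n for i in range(n)]
    m = 1.0 / n
    return sum(m * m / (positions[j] - positions[i]) ** 2
               for i in range(n) for j in range(i + 1, n))

for n in (10, 100, 1000):
    print(n, total_force(n))
# prints roughly 12.6, 158.3, 1636.5: growing linearly in n, as claimed
```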

comment by Manfred · 2017-07-31T22:24:30.906Z · LW(p) · GW(p)

Cool insight. We'll just pretend constant density of 3M/(4πR^3).

This kind of integral shows up all the time in E and M, so I'll give it a shot to keep in practice.

You simplify it by using the law of cosines, turning the vector difference 1/|r-r'|^2 into 1/(|r|^2+|r'|^2-2|r||r'|cos(θ)). And this looks like you still have to worry about integrating over two positions, but actually you can just take r' to point due north during the integral over r, without loss of generality.

So now we need to integrate 1/(r^2+|r'|^2-2r|r'|cos(θ)) r^2 sin(θ) dr dφ dθ. First take your free 2π from φ. Since sin(θ) dθ = -d(cos(θ)), substitution makes it obvious that the θ integral gives you a log. So now we integrate 2πr (ln(r^2+|r'|^2+2r|r'|) - ln(r^2+|r'|^2-2r|r'|)) / 2|r'| dr from 0 to R. Which Mathematica says is some nasty inverse-tangent-containing thing.

Okay, maybe I don't actually want to do this integral that much :P
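
Rather than fighting the closed form, one can estimate the integral numerically. A Monte Carlo sketch (my addition, assuming uniform density and numpy):

```python
# Monte Carlo estimate of F = G * rho^2 * Integral dV dV' / |r - r'|^2 for
# a uniform ball: sample random point pairs and average 1/d^2. The 1/d^2
# integrand is heavy-tailed, so the estimate is somewhat noisy.
import numpy as np

rng = np.random.default_rng(0)
G, M, R = 6.674e-11, 5.97e24, 6.37e6
V = 4.0 / 3.0 * np.pi * R**3
rho = M / V

def sample_ball(n):
    """n points uniform in the ball of radius R, by rejection from a cube."""
    p = rng.uniform(-R, R, size=(3 * n, 3))  # oversample ~3x, then reject
    p = p[np.sum(p * p, axis=1) < R * R]
    return p[:n]

n = 1_000_000
d2 = np.sum((sample_ball(n) - sample_ball(n)) ** 2, axis=1)
print(f"{G * rho**2 * V**2 * np.mean(1.0 / d2):.2e} N")
# should land around 1.3e26 N, i.e. roughly (9/4) * G * M^2 / R^2
```

If that is right, the continuum integral converges in 3D, unlike the 1D stick above: the mass within distance d of a point grows like d^2, offsetting the 1/d^2 blow-up. That would square with the EDIT in the reply below, which retracts the logarithmic divergence.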

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2017-08-02T19:50:20.974Z · LW(p) · GW(p)

EDIT: On second thoughts most of the following is bullshit. In particular, the answer clearly can't depend logarithmically on R.

I had a long train journey today so I did the integral! And it's more interesting than I expected because it diverges! I got the answer (GM^2/R^2)(9/4)(log(2)-43/12-log(0)). Of course I might have made a numerical mistake somewhere, in particular the number 43/12 looks a bit strange. But the interesting bit is the log(0). The divergence arises because we've modelled matter as a continuum, with parts of it getting arbitrarily close to other parts.

To get an exact answer we would have to look at how atoms are actually arranged in matter, but we can get a rough answer by replacing the 0 in log(0) by r_min/R, where r_min is the average distance between atoms. In most molecules the bond spacing is somewhere around 100 pm. So r_min ~ 10^-10 m, and R = 6.37*10^6 m, so log(r_min/R) ~ -38.7, which is more significant than the log(2)-43/12 = -2.89. So we can say that the total is about 38.7*(9/4)*GM^2/R^2, which is 87 GM^2/R^2 or 5.1*10^27 N.

[But after working this out I suddenly got worried that some atoms get even closer than that. Maybe when a cosmic ray hits the Earth it does so with such energy that it gets really, really close to another nucleus, and then the gravitational force between that pair dominates the rest of the planet put together. Well, the most energetic cosmic ray on record is the Oh-My-God particle, with energy 48 J. So it would have produced a spacing of about h_bar*c/48, which is about 6.6*10^-28 m. But the mass of a proton is about 10^-27 kg, so Gm^2/r^2 is about G, and this isn't as significant as I feared.]

Replies from: Thomas
comment by Thomas · 2017-08-03T06:31:10.290Z · LW(p) · GW(p)

Very nice, and the result is about a hundred Earth weights now. I wonder if every atom inside the Earth feels the gravity of every other atom at every moment. (I think not. Which is a heresy, so please don't pay any attention to that.)

comment by cousin_it · 2017-08-06T10:24:33.740Z · LW(p) · GW(p)

If we want a measure of rationality that's orthogonal to intelligence, maybe we could try testing the ability to overcome motivated reasoning? Set up a conflict between emotion and reason, and see how the person reacts. The marshmallow test is an example of that. Are there other such tests, preferably ones that would work on adults? Which emotions would be easiest?

Replies from: gjm, MrMind, Wei_Dai, Dagon, ChristianKl
comment by gjm · 2017-08-07T13:17:17.057Z · LW(p) · GW(p)

It seems like it would be tricky to distinguish "good at reasoning even in the face of emotional distractions" from "not experiencing strong emotions". The former is clearly good; the latter arguably bad.

I'm not sure how confident I am that the paragraph above makes sense. How does one measure the strength of an emotion, if not via its effects on how the person feeling it acts? But it seems like there's a useful distinction to be made here. Perhaps something like this: say that an emotion is strong if, in the absence of deliberate effort, it has large effects on behaviour; then you want to (1) feel emotions that have a large effect on you if you let them but (2) be able to reduce those effects to almost nothing when you choose to. That is, you want a large dynamic range.

Replies from: cousin_it
comment by cousin_it · 2017-08-07T14:35:10.116Z · LW(p) · GW(p)

Among other things I'd like to test the ability to abandon motivated beliefs, like religion. Yes, it might be due to high intelligence or weak emotions. But if we want a numerical measure that's orthogonal to intelligence, we should probably treat these the same.

Replies from: Lumifer
comment by Lumifer · 2017-08-07T14:55:07.465Z · LW(p) · GW(p)

So you want something like intellectual lability? I have strong doubts it will be uncorrelated with intelligence.

I'm guessing you're aiming at "strongly-supported views held strongly, weakly-supported views held weakly", but stupid people don't do that.

Replies from: cousin_it
comment by cousin_it · 2017-08-07T15:45:49.737Z · LW(p) · GW(p)

The marshmallow and Asch experiments aren't testing anything like intellectual lability. They are testing if you can do the reasonable thing despite emotions and biases. That's a big part of rationality and that's what I'd like to test. Reasoning yourself out of religion is an advanced use of the same skill.

Replies from: Lumifer
comment by Lumifer · 2017-08-07T15:53:57.862Z · LW(p) · GW(p)

The marshmallow experiment tests several things, among them time preference. The Asch test measures, also among other things, how much you value fitting into society. It's not all that simple.

testing the ability to do the reasonable thing despite emotions and biases

May I then suggest calling this ability "vulcanness" and measuring it in millispocks?

And how is that "ability to do the reasonable thing" going to be orthogonal to intelligence?

Replies from: cousin_it
comment by cousin_it · 2017-08-07T16:06:42.529Z · LW(p) · GW(p)

When asked about their preferences verbally, most people wouldn't endorse the extreme time discounting that would justify eating the marshmallow right away, and wouldn't endorse killing a test subject to please the experimenter (that one is Milgram's experiment rather than Asch's). So I don't think these behaviors can be viewed as rational.

Replies from: Lumifer
comment by Lumifer · 2017-08-07T16:17:39.024Z · LW(p) · GW(p)

You are aware of the difference between expressed preferences and revealed preferences, yes? It doesn't seem to me that sticking with expressed preferences has much to do with rationality.

Replies from: cousin_it
comment by cousin_it · 2017-08-07T16:26:49.435Z · LW(p) · GW(p)

I prefer to work under the assumption that some human actions are simply irrational, not just revealed preferences. Mostly because "revealed preferences" feels like a curiosity stopper, while researching specific kinds of irrationality (biases) has been so fruitful in comparison.

Replies from: Lumifer
comment by Lumifer · 2017-08-07T17:45:40.313Z · LW(p) · GW(p)

I prefer to work under the assumption that some human actions are irrational, not just revealed preferences.

Huh? Both expressed and revealed preferences might or might not be rational. There's nothing about revealed preferences which makes them irrational by default.

feels like a curiosity stopper

Nobody's telling you to stop there. Asking, for example, "why does this person have these preferences and is there a reason they are not explicit?" allows you to continue.

comment by MrMind · 2017-08-07T10:31:39.540Z · LW(p) · GW(p)

Which emotions would be easiest?

Sexual attraction...

Replies from: Viliam, cousin_it
comment by Viliam · 2017-08-07T12:52:03.055Z · LW(p) · GW(p)

I am imagining how to set up the experiment...

"Sir, I will leave you alone in this room now, with this naked supermodel. She is willing to do anything you want. However, if you can wait for 20 minutes without touching her -- or yourself! -- I will bring you one more."

Replies from: MrMind
comment by MrMind · 2017-08-07T13:30:19.607Z · LW(p) · GW(p)

I don't know whether sexual satisfaction scales linearly, but from 1 to 2 seems about right.

comment by cousin_it · 2017-08-07T14:07:32.581Z · LW(p) · GW(p)

Yeah. Fear might be even easier. But I'm not sure how to connect it with motivated reasoning.

Now that I think of it, Asch's conformity experiment might be another example of what I want (if conformity is irrational). It seems like a fruitful direction.

Replies from: Lumifer
comment by Lumifer · 2017-08-07T15:01:50.988Z · LW(p) · GW(p)

Fear might be even easier.

The gom jabbar test.

Replies from: cousin_it
comment by cousin_it · 2017-08-07T15:08:17.997Z · LW(p) · GW(p)

Gom jabbar might be more about stubbornness than rationality :-)

Replies from: Lumifer
comment by Lumifer · 2017-08-07T15:24:35.500Z · LW(p) · GW(p)

The point of the test was to distinguish between a human and an animal :-/

REVEREND MOTHER: I hold at your neck the gom jabbar. Don't pull away or you'll feel that poison. A Duke's son must know about many poisons --this one kills only animals.

PAUL: Are you suggesting a Duke's son is an animal?

REVEREND MOTHER: Let us say I suggest you may be human. Your awareness may be powerful enough to control your instincts. Your instincts will be to remove your hand from the box. If you do so you will die.

comment by Wei Dai (Wei_Dai) · 2017-08-07T18:57:31.995Z · LW(p) · GW(p)

Set up a conflict between emotion and reason, and see how the person reacts. The marshmallow test is an example of that.

This article argues that marshmallow tests mostly just measure how much a kid wants to please adults or do what adults expect of them.

It seems likely that any single test of rationality someone could come up with will be very noisy, so to get an idea of how rational someone is we'll need to do lots of tests, or (since that's very costly) settle for something like a questionnaire.
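
The noise point is standard statistics; a quick illustrative sketch (my numbers):

```python
# Why a single noisy test item says little, but many items help: the spread
# of the averaged score shrinks like 1/sqrt(number of items).
import random, statistics

random.seed(0)
TRUE_SCORE = 0.7   # hypothetical underlying "rationality" of one person

def run_test(n_items, noise_sd=1.0):
    return statistics.mean(TRUE_SCORE + random.gauss(0, noise_sd)
                           for _ in range(n_items))

for n in (1, 10, 100):
    results = [run_test(n) for _ in range(2000)]
    print(f"items={n:3d}  sd of test result ~ {statistics.stdev(results):.2f}")
# items=  1  sd ~ 1.0   (a single item is dominated by noise)
# items= 10  sd ~ 0.32
# items=100  sd ~ 0.10
```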

Replies from: cousin_it
comment by cousin_it · 2017-08-07T21:49:43.515Z · LW(p) · GW(p)

Yeah, I guess a rationality test needs to have many questions, like an IQ test. It will be tricky to make each question emotionally involving, but hey, I just started thinking about it.

comment by Dagon · 2017-08-06T21:30:40.472Z · LW(p) · GW(p)

Why do you want a measure of rationality that's orthogonal to (measures of) intelligence? Whatever this reason is will likely lead you to a better phrasing of what aspects of behavior/capability you want to test for.

comment by ChristianKl · 2017-08-06T17:15:49.884Z · LW(p) · GW(p)

Keith Stanovich worked on creating a test for rationality: https://mitpress.mit.edu/books/rationality-quotient

Replies from: cousin_it, cousin_it
comment by cousin_it · 2017-08-06T21:19:54.959Z · LW(p) · GW(p)

His test doesn't involve emotions.

comment by cousin_it · 2017-08-06T21:18:15.847Z · LW(p) · GW(p)

I know!

comment by sone3d · 2017-07-31T19:36:49.471Z · LW(p) · GW(p)

Please recommend me more books along the lines of ‘Metaphors We Live By’ and ‘Surfaces and Essences’.

Replies from: username2
comment by username2 · 2017-08-01T09:34:53.888Z · LW(p) · GW(p)

Wilson's "Six Views of Embodied Cognition" gives a broad overview of embodied cognition in 12 pages and has a few good references. https://people.ucsc.edu/~mlwilson/publications/Embodied_Cog_PBR.pdf

I decided to read Holyoak et al.'s Mental Leaps: Analogy in Creative Thought when Surfaces and Essences started feeling drawn-out.

comment by disconnect_duplicate0.563651414951392 · 2017-08-02T19:29:18.283Z · LW(p) · GW(p)

80,000 Hours recently ranked "Judgement and decision making" as the most employable skill.

I think they've oversimplified and ended up with possibly harmful conclusions. To illustrate one problem with their methodology, imagine that they had looked at medieval England instead. Their methods would have found that kings and nobles had the highest pay and satisfaction, and that judgment was heavily associated with those jobs. The conclusion? "Peasants, practiceth thy judgment!"

What do you think? If there were a twin study where one twin pursued programming and the other judgment, which would end up with higher satisfaction and pay? If you think it's not the programmer, why?

Replies from: Screwtape, ChristianKl
comment by Screwtape · 2017-08-02T20:46:34.632Z · LW(p) · GW(p)

Also germane is that if a high-schooler asked me how to practice judgement and decision making, I'm not entirely sure how I'd suggest learning that. (Maybe play lots of games like poker or Magic? Read the sequences? Be a treasurer in high school clubs?) If someone asked how to practice programming, I can think of lots of ways to practice that and get better.

Confounder: I make my living by programming and by suspending my judgement and decision making.

Replies from: Lumifer
comment by Lumifer · 2017-08-02T23:48:02.795Z · LW(p) · GW(p)

how to practice judgement and decision making

Good judgement comes from experience.

Experience comes from bad judgement.

Replies from: ChristianKl
comment by ChristianKl · 2017-08-03T10:57:12.276Z · LW(p) · GW(p)

Experience alone might not be enough; it's good when the experience has feedback loops.

comment by ChristianKl · 2017-08-02T20:48:42.678Z · LW(p) · GW(p)

I'm not sure what pursuing "judgement and decision making" would look like in practice.

Replies from: disconnect_duplicate0.563651414951392
comment by disconnect_duplicate0.563651414951392 · 2017-08-03T05:48:25.714Z · LW(p) · GW(p)

We can't really practice, or even measure, most of the recommended skills, such as judgment, critical thinking, time management, monitoring performance, complex problem solving, and active learning. This is one of the reasons why I disagree with the article and think its conclusions are not useful.

They're a bit like saying that high intelligence is associated with better pay and job satisfaction.

Replies from: ChristianKl
comment by ChristianKl · 2017-08-03T09:07:29.714Z · LW(p) · GW(p)

I think "can't practice" is a bit strong. CFAR would be a practice that trains a bunch of those skills. The problem is that there's no 3 year CFAR bachelor where the student does that kind of training all the time but CFAR does 4 day workshops.

Replies from: disconnect_duplicate0.563651414951392
comment by disconnect_duplicate0.563651414951392 · 2017-08-03T09:27:54.589Z · LW(p) · GW(p)

I do not mean that it is impossible to practice, just that it's not a well-defined skill you can measurably improve, like programming. I believe it's not a skill you can realistically practice in order to improve your employability.

I have been following CFAR from their beginning. If anything, the existence and current state of CFAR demonstrate how difficult judgment is to practice as a skill, and how difficult it is to measure. There's no evidence of CFAR's effectiveness available on their website (or it is well hidden).