Open thread, Jan. 02 - Jan. 08, 2017
post by MrMind · 2017-01-02T07:50:31.498Z · LW · GW · Legacy · 37 comments
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
37 comments
Comments sorted by top scores.
comment by Viliam · 2017-01-02T15:45:23.003Z · LW(p) · GW(p)
A consequence of availability bias: the less you understand what other people do, the easier "in principle" it seems.
By "in principle" I mean that you wouldn't openly call it easy, because the work obviously requires specialized knowledge you don't have, and cannot quickly acquire. But it seems like for people who already have the specialized knowledge, it should be relatively straightforward.
"It's all just a big black box for me, but come on, it's only one black box, don't act like it's hundreds of boxes."
as opposed to:
"It's a transparent box with hundreds of tiny gadgets. Of course it takes a lot of time to get it right!"
↑ comment by Val · 2017-01-03T20:58:11.003Z · LW(p) · GW(p)
Isn't this very closely related to the Dunning-Kruger effect?
↑ comment by Viliam · 2017-01-07T21:37:36.940Z · LW(p) · GW(p)
Seems quite different to me. The D-K effect is "you overestimate how good you are at something", while what I describe does not even involve a belief that you are good at the specific thing, only that -- despite knowing nothing about it on the object level -- you still have the meta-level ability to estimate how difficult it is "in principle".
An example of what I meant would be a manager in an IT company, who has absolutely no idea what "fooing the bar" means, but feels quite certain that it shouldn't take more than three days, including the analysis and testing.
While an example of D-K would be someone who writes horrible code, but believes himself to be the best programmer ever. (And after looking at other people's code, he keeps the original conviction, because the parts of the code he understood he could obviously write too, and the parts he didn't understand are obviously written wrong.)
↑ comment by Richard Korzekwa (Grothor) · 2017-01-03T23:14:40.191Z · LW(p) · GW(p)
I may be misunderstanding the connection with the availability heuristic, but it seems to me that you're correct, and this is more closely related to the Dunning-Kruger effect.
What Dunning and Kruger observed was that someone who is sufficiently incompetent at a task is unable to distinguish competent work from incompetent work, and is more likely to overestimate the quality of their own work compared to others, even after being presented with the work of others who are more competent. What Viliam is describing is the inability to see what makes a task difficult, due to unfamiliarity with what is necessary to complete that task competently. I can see how this might relate to the availability heuristic; if I ask myself "how hard is it to be a nurse?", I can readily think of encounters I've had with nurses where they did some (seemingly) simple task and moved on. This might give the illusion that the typical day at work for a nurse is a bunch of (seemingly) simple tasks with patients like me.
↑ comment by Sarunas · 2017-01-03T22:32:02.277Z · LW(p) · GW(p)
When we are talking about science, social science, history, or other similar disciplines, the disparity may arise from the fact that most introductory texts present the main ideas, which are already well understood and well articulated, whereas the actual researchers spend the vast majority of their time on poorly understood edge cases of those ideas. (It is almost tautological to say that the harder, less understood part of your work takes up more time, since well understood ideas are often called such precisely because they no longer require a lot of time and effort.)
↑ comment by username2 · 2017-01-14T00:37:13.861Z · LW(p) · GW(p)
See also: The apprenticeship of observation.
↑ comment by WhySpace_duplicate0.9261692129075527 · 2017-01-03T17:00:03.913Z · LW(p) · GW(p)
That looks like a useful way of decreasing this failure mode, which I suspect we LWers are especially susceptible to.
Does anyone know any useful measures (or better yet, heuristics) for how many gears are inside various black boxes? Kolmogorov complexity (as used in Solomonoff induction) is useless here, but I have a vague idea that chaotic systems > weather forecasting > the average physics simulation > simple math problems I can solve exactly by hand.
However, that's not really useful if I want to know how long it would take to do something novel. For example, I’m currently curious how long it would take to design a system for doing useful computation using more abstract functions instead of simple Boolean logic. Is this a weekend of tinkering for someone who knows what they are doing? Or a thousand people working for a thousand years?
I could look at how long it took to design some of the first binary or ternary computers, and then nudge it up by an order of magnitude or two. However, I could also look at how long it takes to write a simple lambda-calculus compiler, and nudge up from that. So, that doesn't narrow it down much.
How should I even go about making a Fermi approximation here? And, by extension, what generalized principles can we apply to estimate the size of such black boxes, without knowing about any specific gears inside?
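A minimal sketch of one such Fermi move, assuming nothing beyond the two anchors in the question above (a weekend of tinkering vs. a thousand people for a thousand years): when all you have are a rough lower and upper anchor, take their geometric mean, since this kind of uncertainty is multiplicative. The numbers below are illustrative assumptions, not data.

```python
import math

# Illustrative anchors taken from the question above (assumptions, not data)
lower_hours = 2 * 8                  # "a weekend of tinkering": roughly two 8-hour days
upper_hours = 1000 * 1000 * 2000     # "a thousand people for a thousand years", ~2000 work-hours per person-year

# Geometric mean of the two anchors; a common choice when uncertainty spans many orders of magnitude
estimate_hours = math.sqrt(lower_hours * upper_hours)
print(f"~{estimate_hours:,.0f} person-hours (~{estimate_hours / 2000:,.0f} person-years)")
```

This doesn't narrow things down much either, but it at least turns "somewhere between a weekend and a civilization-scale effort" into a single defensible midpoint that can be revised as the black box gets decomposed into smaller ones.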
↑ comment by niceguyanon · 2017-01-03T15:06:03.479Z · LW(p) · GW(p)
Could you provide a simple linkage as to why the effect (the less I know, the easier it seems for the specialized person) is a consequence of the availability bias?
One connection I could draw between the effect and the availability bias is how easily the less specialized person can recall the specialized person's successful resolutions. For example, a manager who has numerous recollections of being presented a problem and assigning it to a subordinate for a fix. The manager only sees the problem and the eventual fix, and none of the difficult roadblocks encountered by the workers, and therefore tends to underestimate the difficulty. I'm not sure if this is a connection you would agree with.
↑ comment by Viliam · 2017-01-07T21:47:37.233Z · LW(p) · GW(p)
Indeed, "manager" was the example I had in mind while writing this.
Could you provide a simple linkage as to why the effect (the less I know, the easier it seems for the specialized person) is a consequence of the availability bias?
I am not aware of any research; this is from personal experience. In my experience, it helps when, instead of one big black box, you describe the work to the management as multiple black boxes. For example, instead of "building an artificial intelligence" you split it into "making a user interface for the AI", "designing a database structure for the AI", "testing the AI", etc. Then, if the managers have an intuitive idea of how long an unknown piece of work takes (e.g. three days per black box), they agree that the more black boxes there are, the more days it will take.
(On the other hand, this can also go horribly wrong if the managers -- by virtue of "knowing" what the original black box consists of -- become overconfident in their understanding of the problem, and start giving you specific suggestions, such as leaving out some of the smaller black boxes because their labels don't feel important. Or inviting an external expert to solve one of the smaller black boxes as a thing separate from the rest of the problem, based on the manager's superficial understanding; the expert then produces something irrelevant to your project in exchange for half of your budget, which you now have to incorporate somehow and pretend to be grateful for.)
comment by Elo · 2017-01-04T05:12:48.362Z · LW(p) · GW(p)
From the lesswrong slack, we created a document called "Mi Casa Su Casa", or in English, "My house is your house". We wanted to make it easy for lesswrongers to share their homes with each other. It's not very populated right now and needs some love. I am going to put it on the lesswrong wiki, as well as make a post about it so that it is searchable.
https://docs.google.com/spreadsheets/d/1Xh5DuV3XNqLQ4Vv8ceIc7IDmK9Hvb46-ZMoifaFwgoY/edit?usp=sharing
↑ comment by WalterL · 2017-01-04T14:45:47.694Z · LW(p) · GW(p)
There's a lesswrong slack? I did not know that.
↑ comment by Gunnar_Zarncke · 2017-01-04T23:13:59.759Z · LW(p) · GW(p)
It's cool and active. Message Elo if you want to get in.
comment by Daniel_Burfoot · 2017-01-03T23:42:32.438Z · LW(p) · GW(p)
Rationality principle, learned from strategy board games:
In some games there are special privileged actions you can take just once or twice per game. These actions are usually quite powerful, which is why they are restricted. For example, in Tigris and Euphrates, there is a special action that allows you to permanently destroy a position.
So the principle is: if you get to the end of the game and find you have some of these "power actions" left over, you know (retrospectively) that you were too conservative about using them. This is true even if you won; perhaps if you had used the power actions you would have won sooner.
Generalizing to real life, if you get to the end of some project or challenge, and still have some "power actions" left over, you were too conservative, even if the project went well and/or you succeeded at the challenge.
What are real life power actions? Well, there are a lot of different interpretations, but one is using social capital. You can't ask your rich grand-uncle to fund your startup every six months, but you can probably do it once or twice in your life. And even if you think you can succeed without asking, you still might want to do it, because there's not much point in "conserving" this kind of power action.
↑ comment by Dagon · 2017-01-04T00:00:24.963Z · LW(p) · GW(p)
This isn't true in all games, and doesn't generalize to life. There are lots of "power moves" that just don't apply to all situations, and if you don't find yourself in a situation where it helps, you shouldn't use them just because they're limited.
It doesn't even generalize completely to those games where a power move is always helpful (but varies in how helpful it is). It's perfectly reasonable to wait for a better option, and then be surprised when a better option doesn't occur before the game ends.
See also https://en.wikipedia.org/wiki/Secretary_problem - the optimal strategy for determining the best candidate for a one-use decision ends up with the last random-valued option 1/e of the time. (Edit: actually, it only FINDS the best option 1/e of the time; close to 2/3 of the time it fails.)
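A minimal simulation sketch (not part of the original comment) of the 1/e stopping rule, which should reproduce the roughly-1/e success rate mentioned in the edit:

```python
import math
import random

def secretary_trial(n: int) -> bool:
    """One trial of the 1/e rule: skip the first n/e candidates, then accept the
    first candidate better than all of those. Return True if the accepted
    candidate is actually the best one."""
    candidates = [random.random() for _ in range(n)]
    cutoff = int(n / math.e)
    best_seen = max(candidates[:cutoff]) if cutoff > 0 else float("-inf")
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value == max(candidates)
    return False  # never accepted anyone; stuck with the last candidate, counted as a failure

trials = 100_000
wins = sum(secretary_trial(100) for _ in range(trials))
print(f"found the best candidate in {wins / trials:.3f} of trials")  # ~0.37
```

Roughly 37% of trials pick the single best candidate; the remaining ~63% fail, matching the "close to 2/3 of the time it fails" figure.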
↑ comment by Gunnar_Zarncke · 2017-01-04T23:17:19.297Z · LW(p) · GW(p)
Another example is the question: "When did you last fail?" Because if you never fail, you are not advancing as fast as you could (and you don't learn as much).
On the other hand: running on full energy may drain your power over the long run.
↑ comment by ChristianKl · 2017-01-12T20:01:18.798Z · LW(p) · GW(p)
Relationships also get built by asking for favors and by giving favors. Social capital isn't necessarily used up by asking for favors.
↑ comment by WalterL · 2017-01-04T14:51:19.529Z · LW(p) · GW(p)
Sure. If you never miss a plane then you are getting to the airport too early.
↑ comment by gjm · 2017-01-04T15:19:14.955Z · LW(p) · GW(p)
This seems like one of the less credible Umeshisms.
Suppose that at the margin you can get to the airport one minute later on average at the cost of a probability p of missing your plane. Then you should do this if U(one minute at home instead of at airport) > -p U(missing plane). That's not obviously the case. Is it?
The argument might be that if you never miss a plane then your current plane-missing probability is zero, and therefore you're in the "safe" region, and it's very unlikely that you're right on the boundary of the safe region, so you can afford to move a little in the unsafe direction. But this is rubbish; all that "I never miss a plane" tells you is a rough upper bound on how likely you are to miss a plane, and if you don't fly an awful lot this might be a very crude upper bound; and depending on those U() values even a rather small extra chance of missing the plane might be (anti-)worth a lot more to you than a few extra minutes spent at home.
(A couple of other fiddly details, both of which also argue against arriving later at the airport. First, you may be happier when you leave more margin because you are less worried about the consequences of missing your plane. Second, even when you don't actually miss your plane you may sometimes have to endure lesser unpleasantnesses like having to run to avoid missing it.)
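To make the inequality a few paragraphs up concrete, here is a toy worked example with made-up numbers (mine, purely illustrative): if missing the flight costs the equivalent of 300 minutes of hassle and a minute at home is worth exactly one minute, then leaving one minute later only pays off if the added probability of missing the plane is below 1/300.

```python
# Toy numbers, purely illustrative assumptions
cost_of_missing = 300.0   # disutility of missing the flight, in "minutes of hassle"
value_of_minute = 1.0     # utility of one extra minute at home

# The marginal minute at home pays off only if value_of_minute > p * cost_of_missing
break_even_p = value_of_minute / cost_of_missing
print(f"leave later only if the added miss probability is below {break_even_p:.4f}")  # 0.0033
```

Whether a real traveler's added miss probability per minute is above or below that threshold is exactly the empirical question the Umeshism glosses over.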
What about "power actions" in board games? Here I mostly agree with Daniel, but with a proviso: If there is uncertainty about when the game is going to end, then on any given occasion you may find that it ends before you use a "power action" but that you were still right (in the sense of "executing a good strategy" rather than "doing things that turn out in hindsight to be optimal") not to use it earlier. Because maybe there was an 80% chance that the game would go on longer and you'd have a more effective time to use it later, and only a 20% chance that it would end this soon.
On the real-life application, I think Daniel is probably right that many people are too conservative about spending social capital.
comment by Val · 2017-01-03T21:36:27.872Z · LW(p) · GW(p)
I always thought the talking snakes argument was very weak, but being confronted by a very weird argument from a young-earth creationist provided a great example for it:
If you believe in evolution, why don't you grow wings and fly away?
The point here is not about the appeal to ridicule (although it contains a hefty dose of that too). It's about a gross misrepresentation of a viewpoint. Compare the following flows of reasoning:
- Christianity means that snakes can talk.
- We can experimentally verify that snakes cannot talk.
- Therefore, Christianity is false.
and
- Evolution means people can spontaneously grow wings.
- We can experimentally verify that people cannot spontaneously grow wings.
- Therefore, evolution is false.
The big danger in this reasoning is that one can convince oneself of having used the experimental method, or of being a rationalist. Because hey, we can scientifically verify the claim! All without realizing that the verified claim is very different from the claims the discussed viewpoint actually holds.
I've even seen many self-proclaimed "rationalists" fall into this trap. Just as many religious people are reinforced by a "pat on the back" from their peers if they say something which is liked by the community they are in, so can people feel motivated to claim they are rationalists if that causes a pat on the back from people they interact with the most.
↑ comment by WalterL · 2017-01-04T14:50:23.677Z · LW(p) · GW(p)
I know someone who told me that she hoped President Trump wouldn't successfully legalize rape.
I opined that, perhaps, that might not be on his itinerary.
She responded that, of course it was, and proved it thus: She is against it. She is against him. Therefore he is for it.
I bring this up to sort of angle at 'broadening' the talking snakes point. The way people do arguments is kind of how they do Boggle. Find one thing that is absolutely true, and imply everything in its neighborhood. One foot of truth gives you one mile of argument.
As soon as you have one true thing, then you can retreat to that if anyone questions any part of the argument. Snakes can't talk, after all.
↑ comment by gjm · 2017-01-04T17:16:49.053Z · LW(p) · GW(p)
She responded that, of course it was, and proved it thus: She is against it. She is against him. Therefore he is for it.
That sounds so extremely stupid that I have to ask: Is that literally, in so many words, what she said, or is it possible that she said something (still presumably stupid but) a bit less stupid, and there's a slight element of caricature in your presentation of what she said?
↑ comment by WalterL · 2017-01-04T19:55:14.114Z · LW(p) · GW(p)
I'm distilling, sure. The actual text was something like:
Jane: I'm just so concerned and terrified about that awful man being elected. I shudder to think what he'll do.
Walter: (distracted) nods
Jane: I doubt he'll manage to legalize rape, but...
Walter: (tuning into conversation fully) wat?
Dan: There's a whole government, they'll stop him.
Walter: I don't think Donald Trump is trying to legalize rape.
Jane: You think the best of people, but you have to open your eyes. He is in bed with all kinds of people. He mocked a retarded person on national TV. He is taking money from the Russians.
Walter: Even if that's true, I still don't think he's for legalizing rape.
Dan: Listen, if you were right, then he never would have called all Mexicans rapists, or proposed that we wall them out!
Walter: nods, tuning back out
Conversation continues in a precious bodily fluids direction.
It stuck in my mind as a perfect example of the whole "arguments are soldiers" thing. Lesswrong is always more correct than I imagine.
↑ comment by Fluttershy · 2017-01-05T00:20:00.341Z · LW(p) · GW(p)
It helps that you shared the dialogue. I predict that Jane doesn't System-2-believe that Trump is trying to legalize rape; she's just offering the other conversation participants a chance to connect over how much they don't like Trump. This may sound dishonest to rationalists, but normal people don't frown upon this behavior as often, so I can't tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do in this situation, if she's looking to strengthen bonds with others.
Jane's System 1 is a good bayesian, and knows that Trump supporters are more likely to rebuff her, and that Trump supporters aren't social allies. She's testing the waters, albeit clumsily, to see who her social allies are.
Jane could have put more effort into her thoughts, and chosen a factually correct insult to throw at Trump. You could have said that even if he doesn't try to legalize rape, he'll do some other specific thing that you don't approve of (and you'd have gotten bonus points for proactively thinking of a bad thing to say about him). The implementation of either of these changes would have had a roughly similar effect on the levels of nonviolence and agreeability of the conversation.
This generalizes to most conversations about social support. When looking for support, many people switch effortlessly between making low effort claims they don't believe, and making claims that they System-2-endorse. Agreeing with their sensible claims, and offering supportive alternative claims to their preposterous claims, can mark you as a social ally while letting you gently, nonviolently nudge them away from making preposterous claims.
↑ comment by Viliam · 2017-01-07T22:07:27.143Z · LW(p) · GW(p)
Generally speaking, when a person says "X", they rarely mean X. They usually mean one of the things they associate with X, usually because it is what their social group associates with X.
For example, when someone says "make trains run on time", they mean Hitler. (Even if the quote actually comes from Mussolini. No one cares about Mussolini, but everyone knows Hitler, so for all practical purposes, Mussolini is Hitler, and fascists are nazis. If you don't trust me, just ask a random person whether Hitler's followers were fascists.) Why don't they simply use the word "Hitler" when they want to talk about Hitler? Because "Hitler" actually means: a bad person. So when you want to talk about Hitler specifically, as opposed to talking about a generic bad person, you must say "make trains run on time".
This is how normies talk all the time. If you take how normies talk, and add a lot of penises, you get Freudian psychoanalysis. If instead of penises you use holy texts, you get kabbalah. This explains why both of them are so popular among normies who want to know the deep truths about the universe.
↑ comment by Val · 2017-01-05T21:56:12.634Z · LW(p) · GW(p)
This comment was very insightful, and made me think that the young-earth creationist I talked about had a similar motivation. Despite this outrageous argument, she is a (relatively speaking) smart and educated person. Not academic level, but not grown-up-on-the-streets level either.
comment by Viliam · 2017-01-02T09:37:42.300Z · LW(p) · GW(p)
Two new comments in the old (very old) Welcome Thread. Sorry, I am extremely lazy today, would someone else please make a fresh one?
↑ comment by Richard Korzekwa (Grothor) · 2017-01-16T22:28:26.263Z · LW(p) · GW(p)
I made the thread here:
http://lesswrong.com/r/discussion/lw/ogw/welcome_to_less_wrong_11th_thread_january_2017/
I just copied all the text, added the same tags, changed the date and thread number (it's 11, but someone forgot to add tags on 10), and posted to discussion. If I somehow managed to miss that someone already made the post, then I assume you'll delete it or let me know and I'll delete it.
comment by ignoranceprior · 2017-01-05T02:52:35.410Z · LW(p) · GW(p)
Sorry if this is a petty question, but I was wondering why my karma score for the last thirty days was -2. I haven't made any comments for months and the only one I can find that has any downvotes is from July (and because of that one downvoted comment my total score is only 68% positive?)
comment by Flipnash · 2017-01-04T12:52:01.527Z · LW(p) · GW(p)
Where is the best place to discuss politics in the less wrong diaspora?
↑ comment by btrettel · 2017-01-08T23:06:16.957Z · LW(p) · GW(p)
Omnilibrium was nominally supposed to be a rationalist political discussion website, but it seems to have died.
I have been too busy to participate much, but I did find my brief time there to be valuable, and would be interested in seeing the website become more active again.