Open Thread, Jul. 27 - Aug 02, 2015
post by MrMind · 2015-07-27T07:16:38.332Z · LW · GW · Legacy · 222 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments sorted by top scores.
comment by chaosmage · 2015-07-27T17:05:23.519Z · LW(p) · GW(p)
The excellent intro to AI risk by the Computerphile people (mentioned in the last Open Thread) has an even better continuation: AI Self Improvement. This is quite obviously inspired by the Sequences (down to including the simile of playing Kasparov at Chess), but explained with remarkable brevity and lucidity.
comment by Grit · 2015-07-27T15:20:07.529Z · LW(p) · GW(p)
Published 4 hours ago as of Monday 27 July 2015 20.18 AEST:
Musk, Wozniak and Hawking urge ban on AI and autonomous weapons: Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.
Replies from: James_Miller, Daniel_Burfoot, Thomas, Jiro
↑ comment by James_Miller · 2015-07-27T23:17:47.041Z · LW(p) · GW(p)
I didn't read the letter but did it have a subsection saying something like "and if this creates a future shortage of military manpower we would welcome our children being drafted to fight against the common enemies of mankind such as ISIS"?
↑ comment by Daniel_Burfoot · 2015-07-27T18:18:51.054Z · LW(p) · GW(p)
I like the spirit of the proposal but I fear it will be very hard to draw a reasonable line between military AI and non-military AI. Do computer vision algorithms have military applications? Do planning algorithms? Well, they do if you hook them up to a robot with a gun.
↑ comment by Thomas · 2015-07-27T15:49:43.306Z · LW(p) · GW(p)
Outlaw it, and only outlaws will have it.
Replies from: Vaniver, None↑ comment by Vaniver · 2015-07-27T16:46:08.252Z · LW(p) · GW(p)
See also the website of the (I think) most prominent pressure group in this area: Campaign to Stop Killer Robots.
This came up at the AI Ethics panel at AAAI, and the "outlaws" argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant. International agreements really have reduced to near-zero the usage of chemical warfare and landmines.
The two qualifiers--offensive and autonomous-- are also both material. If we have anti-rocket flechettes on a tank, it's just not possible to have a human in the loop, because you need to launch them immediately after you detect an incoming rocket, so defensive autonomous weapons are in. Similarly, offensive AI is in; your rifle / drone / etc. can identify targets and aim for you, but the ban is arguing that there needs to be a person that verifies the targeting system is correct and presses the button (to allow the weapon to fire; it can probably decide the timing). The phrase they use is "meaningful human control."
The idea, I think, is that everyone is safer if nation-states aren't developing autonomous killbots to fight other nation's autonomous killbots. So long as they're more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
The trouble I had with it was that the underlying principle of "meaningful human control" is an argument I do not buy for livingry, and that makes me reluctant to buy it for weaponry, or to endorse weaponry bans that could then apply the same logic to livingry. It seems to me that they implicitly assume that a principle on 'life and death decisions' only affects weaponry, but that's not the case at all--one of the other AAAI attendees pointed out that in their donor organ allocation software, the fact that there was no human control was seen as a plus, because it implied that there was no opportunity for corruption of the people involved in making the decision, because those people did not exist. (Of course people were involved at a higher meta level, in writing the software and establishing the principles by which the software operates.)
And that's just planning; if we're going to have robot cars or doctors or pilots or so on, we need to accept robots making life and death decisions and relegate 'meaningful human control' to the places where it's helpful. And it seems like we might also want robot police and soldiers.
Replies from: ZankerH, VoiceOfRa, Lumifer, VoiceOfRa↑ comment by ZankerH · 2015-07-27T18:30:21.107Z · LW(p) · GW(p)
International agreements really have reduced to near-zero the usage of chemical warfare and landmines.
And yet the international community has failed to prosecute those responsible for the one recent case of a government using chemical warfare to murder its citizens en masse - Syria. Plenty of governments still maintain extensive stockpiles of chemical weapons. Given that enforcement track record, I'd say governments put in a situation similar to the Syrian government's are more likely to use similar or harsher measures in the future.
If you outlaw something and then fail to enforce the law, it isn't worth the paper it's written on. How do you think the ban on autonomous weapons will be enforced if the USA, China or Russia unilaterally break it? It won't be.
Replies from: satt↑ comment by satt · 2015-07-30T23:44:09.687Z · LW(p) · GW(p)
If you outlaw something and then fail to enforce the law, it isn't worth the paper it's written on.
This strikes me as...not obvious. In my country most rapes are not reported, let alone prosecuted, but that doesn't lead me to conclude that the law against rape "isn't worth the paper it's written on".
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-28T01:44:57.462Z · LW(p) · GW(p)
So long as they're more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
I would argue driverless cars are far more dangerous in that regard, if only because you are likely to have more of them and they are already in major population centers.
Replies from: Vaniver↑ comment by Lumifer · 2015-07-27T17:19:30.962Z · LW(p) · GW(p)
A lot of contemporary weaponry is already fairly autonomous. For example, it would be trivial to program a modern anti-air missile system to shoot at all detected targets (matching specified criteria) without any human input whatsoever -- no AI needed. And, of course, the difference between offensive fire and defensive fire isn't all that clear-cut. Is a counter-artillery barrage offensive or defensive? What about area-denial weapons?
I have a feeling it's a typical feel-good petition ("I Did My Part To Stop Killer Robots -- What About You?") with little relevance to military-political realities.
↑ comment by VoiceOfRa · 2015-07-28T01:43:23.961Z · LW(p) · GW(p)
This came up at the AI Ethics panel at AAAI, and the "outlaws" argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant.
Disagree. It only seems that way because you are looking at too small a time scale. Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it "dishonorable", or whatever the equivalent is. (Look up the papal bulls against crossbows and gunpowder sometime). This lasts a generation at most, generally until the next major war.
Replies from: Vaniver, satt↑ comment by Vaniver · 2015-07-28T02:00:07.322Z · LW(p) · GW(p)
Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it "dishonorable", or whatever the equivalent is.
Consider chemical warfare in WWI vs. chemical warfare in WWII. I'm no military historian, but my impression is that it was used because it was effective, people realized that it was lose-lose relative to not using chemical warfare, and then it wasn't used in WWII, because both sides reasonably expected that if they started using it, then the other side would as well.
One possibility is that this only works for technologies that are helpful but not transformative. An international campaign to halt the use of guns in warfare would not get very far (as you point out), and it is possible that autonomous military AI is closer to guns than it is to chemical warfare.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-28T02:05:45.069Z · LW(p) · GW(p)
Chemical warfare was only effective the first couple times it was used, i.e., before people invented the gas mask.
Replies from: garabik, Pfft↑ comment by garabik · 2015-07-28T17:43:41.805Z · LW(p) · GW(p)
Combat efficiency is much reduced when using a gas mask.
Moreover, while gas masks for horses do (did) exist, good luck persuading your horse to wear one. And horses were rather crucial in WWI and still very important in WWII.
We did not see gas used during WWII mostly because of Hitler's aversion and a (mistaken) German belief that the Allies had stockpiles of nerve agents and would retaliate with them.
↑ comment by Jiro · 2015-07-27T18:42:04.889Z · LW(p) · GW(p)
I wonder if we'll follow up by seeing politicians propose changes to C++ standards.
Even assuming these guys are experts on how dangerous the weapons are, they have no expertise in politics that the man on the street doesn't, and the man on the street doesn't put out press releases. This is no better than all those announcements by various celebrities concerning some matter they have no expertise on, except that they're tech celebrities who are conflating technical knowledge with political knowledge to make themselves sound like experts, when they are experts about the wrong link in the chain.
Replies from: Houshalter↑ comment by Houshalter · 2015-07-28T00:07:21.821Z · LW(p) · GW(p)
I think you missed the "More than 1,000 experts and leading robotics researchers sign open letter warning of military artificial intelligence arms race". Musk and Hawking are just the high profile names the editor decided would grab attention in a headline.
Experts in robotics and AI would be aware of the capabilities of these systems, and how they might develop in the future. Therefore I think they are qualified to have an opinion on whether or not it's a good idea to ban them.
Replies from: Jiro↑ comment by Jiro · 2015-07-28T04:12:00.414Z · LW(p) · GW(p)
No, because whether it's a good idea to ban them on the basis of their dangerousness depends on 1) whether they are dangerous (which they are experts on) and 2) whether it's a good idea to ban things that are dangerous (which is a political question that they are not experts on). And the latter part is where most of the substantial disagreement happens.
We already have things that we know are dangerous, like nuclear weapons, and they aren't banned. A lot of people would like them banned, of course, but we at least understand that that's a political question, and "I know it's dangerous" doesn't make someone an expert on the political question. Just like you or I don't become experts on banning nuclear weapons just because we know nuclear weapons are dangerous, these guys don't become experts on banning AI weapons just because they know AI weapons are dangerous.
Replies from: Houshalter↑ comment by Houshalter · 2015-07-28T08:03:34.711Z · LW(p) · GW(p)
I'm sorry, this is just silly. You are saying no one should have opinions on policy except politicians? Politicians are experts on what policies are good?
I think they are more like people who are experts at getting elected and getting funded. For making actual policy, they usually just resort to listening to experts in the relevant field; economists, businessmen, and in this case robotics experts.
The robotics experts are telling them "hey this shit is getting really scary and could be stopped if you just stop funding it and discourage other countries from doing so." It is of course up to actual politicians to debate it and vote on it, but they are giving their relevant opinion.
Which isn't at all unprecedented: we do the same thing with countless other military technologies, like chemical weapons and nukes. Or even simple things like land mines and hollow-point bullets. It's not like they are asking for totally new policies. They are more like, "hey, this thing you are funding is really similar to these other things you have forbidden."
And nukes are banned btw. We don't make any more of them, we are trying to get rid of most of the ones we have made. We don't let other countries make them. We don't test them or let anyone else test them. And no one is allowed to actually use them.
Replies from: Jiro↑ comment by Jiro · 2015-07-28T14:13:04.524Z · LW(p) · GW(p)
You are saying no one should have opinions on policy except politicians? Politicians are experts on what policies are good?
I'm saying that nobody like that should put out opinions as experts, claiming they know better because they're experts. Their opinions are as good as yours or mine. But you and I don't put out press releases that people pay any attention to, based on our "expertise".
Furthermore, when politicians do it, everyone is aware that they are acting as politicians, and can evaluate them as politicians. The "experts" are pretending that their conclusion comes from their expertise, not from their politics, when in fact all the noteworthy parts of their answer come from their politics.
This is no better than if, say, doctors were to put out a press release that condemns illegal immigration on the grounds that illegal immigrants have a high crime rate, and doctors have to treat crime victims and know how bad crime is.
The robotics experts are telling them "hey this shit is getting really scary and could be stopped if you just stop funding it and discourage other countries from doing so."
Whether that actually works, particularly the "discourage other countries" part, is a question they have no expertise on.
comment by [deleted] · 2015-07-27T23:41:40.543Z · LW(p) · GW(p)
Does anybody know of a way to feed myself data about current time/north? I noticed that I really dislike not knowing the time or which direction I'm facing, but pulling out a phone to learn them is too inconvenient. I know there's North Paw, but it'd be too awkward to actually wear it.
Something with magnets under the skin, maybe?
Replies from: James_Miller, ZeitPolizei, gudamor, Elo, Elo, Lumifer↑ comment by James_Miller · 2015-07-28T05:05:01.658Z · LW(p) · GW(p)
Sometimes I will be talking to a student, and be perfectly happy to talk with her until a minute before my next class starts, but I'm uncertain of the time. If I make any visible effort to look at the time, however, she will take it as a sign that I want to immediately end our conversation, so I could use your described device.
Replies from: jam_brand, Richard_Kennaway, Risto_Saarelma, Lumifer↑ comment by jam_brand · 2015-07-28T08:38:09.159Z · LW(p) · GW(p)
While I'm sure you've thought of setting silent alarms on your phone, a slightly less obvious idea would be to get a watch that has a vibrating alarm capability.
Replies from: James_Miller↑ comment by James_Miller · 2015-07-28T15:09:21.664Z · LW(p) · GW(p)
While I'm sure you've thought of setting silent alarms on your phone
Actually, no. Thanks for the suggestion.
↑ comment by Richard_Kennaway · 2015-07-28T12:21:32.060Z · LW(p) · GW(p)
Why not look at the time and say that you need to keep an eye on the time for your next class?
Replies from: James_Miller↑ comment by James_Miller · 2015-07-28T15:04:00.377Z · LW(p) · GW(p)
I sometimes do this, but the students still get anxious and either leave or ask if they should leave.
↑ comment by Risto_Saarelma · 2015-07-28T11:33:56.077Z · LW(p) · GW(p)
Obviously the solution is a smartwatch which pushes retractable needles in a pattern that tells the current time in binary into the skin of your wrist once every minute.
↑ comment by ZeitPolizei · 2015-07-29T04:39:19.815Z · LW(p) · GW(p)
Do you know about this thing? It actually gets introduced at 11:00. It's originally intended to let deaf people hear again, but later on he shows that you can use any data as input. It's (a) probably overkill and (b) not commercially available, but depending on how much time and resources you want to invest I imagine it shouldn't be all too hard to make one with just 3 pads or so.
↑ comment by gudamor · 2015-08-04T05:33:23.384Z · LW(p) · GW(p)
Instead of real-time directional data, could you improve your sense of direction with training? Something like: estimate North, pull out phone and check, score your estimate, iterate. I imagine this could rapidly be mastered for your typical locations, such that you no longer need to pull out your phone at all.
↑ comment by Elo · 2015-07-27T23:57:23.126Z · LW(p) · GW(p)
How very interesting - I would find north to be unhelpful, as it's not intrinsically relevant to me, compared to, say, which direction my house is from here.
Can you use sun- or shadow-based heuristics? (The sun rises in the east and sets in the west - give or take a correction factor for how far towards the poles you are.) And maybe note the direction of a few star signs for night-time. Cloudy weather is a bit harder to manage.
There are compass devices you can get as a bracelet, for example http://www.ebay.com.au/itm/NEW-Tactical-WRIST-COMPASS-Military-Outdoor-Survival-Watch-Strap-Band-Bracelet-/121644526398?hash=item1c52942b3e - try camping stores for similar devices. There is probably a watch with a built-in compass.
To my knowledge there are no internal-magnet systems that impart a north-knowledge by having them inside your body.
I have a natural sense of direction (I assume nearly everyone does) - can you train yours?
Replies from: solipsist, None↑ comment by solipsist · 2015-07-29T23:39:49.439Z · LW(p) · GW(p)
Question: I have a strong sense of a "dominant" direction (often, but not always, west). This direction is self-apparent in virtually every memory or mental visualization of any location I can think of. So, for example, the captain's chair on the USS Enterprise "obviously" faces "east", and the library on Myst island is obviously on the north side. I'm not going to forget which direction is down, and I'm not going to forget which direction is (usually) west.
Does anyone else here have oriented spatial memories?
Replies from: Gunnar_Zarncke, Elo↑ comment by Gunnar_Zarncke · 2015-07-30T19:58:16.612Z · LW(p) · GW(p)
There are (indigenous) languages/cultures which give directions in absolute (compass) terms instead of our usual left/right/front/back terms. I'd guess that implies that memory gets tagged with that a lot. I'd also guess that some people are more naturally predisposed to deal with that. Like you.
↑ comment by Elo · 2015-07-30T13:29:40.472Z · LW(p) · GW(p)
Neat-o. I don't have any particularly strong sense of directions in my memories but I always know where I am in relation to other places I know. It will occasionally bother me when I walk into a maze-like building and lose my orientation. But I usually re-orientate when I leave such a building.
I wonder if this is a branch of synaesthesia?
Replies from: Dagon↑ comment by Dagon · 2015-07-30T17:29:31.466Z · LW(p) · GW(p)
Neat-o for both of you!
I don't have a very good "native" sense of direction, and there are lots of times when I find I've often gone to two different places from my home or work, think I know where they are, but then get surprised to find they're very near each other.
With cognitive effort, I can usually get directions right, but it's based on landmarks and reasoning rather than any type-1 sense.
Replies from: Elo↑ comment by Elo · 2015-07-31T01:22:16.489Z · LW(p) · GW(p)
I have been building a streetmap in my head for the past 15+ years of my life. At some point (before smart-maps) I realised it would be good to have a sense of location; from then on I started "building" the map in my head of where it "feels" like everything is.
Nowadays when I travel (drive) somewhere I recognise the main arterial roads of my city, and their common traffic conditions. I usually set a smart-map to outsource estimating my time of arrival, but I can also look at a map, recognise the nearness of a new place to a place I have been before, and guide myself via "known routes".
↑ comment by [deleted] · 2015-07-28T03:00:14.857Z · LW(p) · GW(p)
Thanks for such an extensive answer. I can orient myself using the Sun, so outdoors it's not really a problem. I could use a watch, but I find it rather intrusive as well, and it doesn't feed me the data -- I have to look at it to get the information.
To my knowledge there are no internal-magnet systems that impart a north-knowledge by having them inside your body.
This is useful.
Replies from: Elo↑ comment by Elo · 2015-07-28T04:15:55.703Z · LW(p) · GW(p)
I am imagining the creation of such a device; it seems to be a tricky one. Most compasses work on a needle-like object being able to float in the direction of the earth's magnetic field, so making something like that which can be used inside the body and provide feedback about where it is pointing seems difficult.
I have been wearing a magnetic ring for several weeks and plan to write about my experiences, but essentially it would also not do what you want it to do.
North Paw is the only thing I can think of. I can suggest more wearable devices, but they require you to access them. I wonder if you could wear something near your ear that could somehow hint at which way it was facing (possibly with sound), or something in your mouth. Part of the problem is that we don't have a lot of methods of finding magnetic north; it's basically just needles or magnets.
↑ comment by Elo · 2015-07-28T23:02:34.660Z · LW(p) · GW(p)
Question - was that a different request? I took it to read, "current real-time information about north-facing".
If it is a separate request, most wearable fitness devices have a vibrating alarm.
Replies from: None↑ comment by [deleted] · 2015-07-29T03:01:28.799Z · LW(p) · GW(p)
I want current real-time information about north, and current real-time information about time :) (most likely in separate devices)
Replies from: HungryHippo, Elo↑ comment by HungryHippo · 2015-07-31T13:05:10.936Z · LW(p) · GW(p)
Your analog watch can serve as an impromptu compass.
Point the hour hand towards the sun, then true south will be halfway between the hour hand and the 12-o'clock mark. Assuming you're in the northern hemisphere.
E.g. if it's around 2-o'clock, direct the hour hand towards the sun, and south will be in the 1-o'clock direction --- and therefore north towards the 7-o'clock direction.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-31T14:32:58.453Z · LW(p) · GW(p)
Point the hour hand towards the sun, then true south will be halfway between the hour hand and the 12-o'clock mark.
Only if your watch shows solar time which is normally not the case.
Replies from: tut↑ comment by tut · 2015-07-31T16:22:47.839Z · LW(p) · GW(p)
If you don't adjust for DST and for where you are within your timezone, it will be off by something like 30 degrees. If you need better than that, the adjustments are not very hard (the 80/20 fix in most places being to just subtract one hour from what the clock shows, because of DST).
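(A minimal sketch of the arithmetic behind this watch trick, in Python. It assumes the northern hemisphere, mid-latitudes, and that clock time minus a DST offset is close enough to solar time; the function names are just illustrative.)

```python
def sun_azimuth_estimate(clock_hour: float, dst_offset_hours: float = 1.0) -> float:
    """Very rough azimuth of the sun (degrees clockwise from true north).

    Same approximation as the watch trick: the sun moves ~15 degrees per hour
    and sits due south (azimuth 180) at solar noon, in the northern hemisphere.
    """
    solar_hour = clock_hour - dst_offset_hours
    return (180.0 + 15.0 * (solar_hour - 12.0)) % 360.0


def true_north_from_sun(sun_bearing_deg: float, clock_hour: float) -> float:
    """Bearing of true north, given where you see the sun.

    sun_bearing_deg is measured clockwise from whatever direction you are
    facing (facing the sun = 0); the result is in the same frame.
    """
    return (sun_bearing_deg - sun_azimuth_estimate(clock_hour)) % 360.0


# Example: at 14:00 clock time (~13:00 solar), the sun is ~15 degrees west of
# due south; if the sun is straight ahead of you, true north is ~165 degrees
# clockwise from where you are facing, i.e. roughly behind you.
print(sun_azimuth_estimate(14.0))       # ~195.0
print(true_north_from_sun(0.0, 14.0))   # ~165.0
```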
↑ comment by Elo · 2015-07-29T07:35:22.189Z · LW(p) · GW(p)
re: time
smart watches as mentioned.
I would actually like a device that vibrated every 5 minutes (or other settable window of time), to remind me to re-evaluate my progress on current tasks and confirm to myself that I am doing well - essentially as a pattern-interrupt if I am in a bad pattern. It might end up interrupting good patterns as well, but I would still be interested to experiment and see whether it ends up more helpful than unhelpful on balance.
I wonder if anyone knows of an app to make my phone do it.
Replies from: hyporational↑ comment by hyporational · 2015-08-06T10:55:10.429Z · LW(p) · GW(p)
Caynax hourly chime and Mindfulness bell on Android
Replies from: Elo↑ comment by Elo · 2015-08-06T11:06:05.507Z · LW(p) · GW(p)
Thanks. Will install and try them.
Replies from: hyporational↑ comment by hyporational · 2015-08-21T15:57:11.033Z · LW(p) · GW(p)
Did they work? Did you try any other solutions?
Replies from: Elo↑ comment by Elo · 2015-08-22T09:56:57.502Z · LW(p) · GW(p)
Mindfulness Bell seems to have bothered people around me; they are getting used to it. It's not really doing its job of keeping me mindful (I currently have it set to 30 minutes). I would like any suggestions you have for a thought process to go through with the intention of being mindful. I tend to still think, "Is this the highest-value thing I could be doing right now?" and have occasionally closed things I was messing around on and moved on; but being a smart guy I can rationalise "yes this is" far more often than is probably true.
At least it gets me to stop and wonder "what am I doing?" frequently, which is a good thing. I expect a month from now I will have a naturally trained "mindful clock" and won't need the chime.
Also, Mindfulness Bell conflicts with the "narrative" app - when the chime goes off it crashes the other app. That is probably because of bad coding, but I have the bad coding to thank for letting me keep all my other notifications on silent while keeping that one on loud.
Thank you!
P.S. any other apps you would suggest?
comment by James_Miller · 2015-07-27T17:17:55.611Z · LW(p) · GW(p)
Apparently, NASA is testing an EM Drive, a reactionless drive which, to work, would have to violate the law of conservation of momentum. As good Bayesians, we know that we should have a strong prior belief that the law of conservation of momentum is correct, so that even if EM Drive supporters get substantial evidence we should still think they are almost certainly wrong, especially given how common errors and fraud are in science. But my question is: how confident should we be that the law of conservation of momentum is correct? Is it, say, closer to .9999 or to 1-1/10^20?
Replies from: None, IlyaShpitser, Daniel_Burfoot, Vaniver, knb, cousin_it, shminux↑ comment by [deleted] · 2015-07-27T17:38:54.946Z · LW(p) · GW(p)
If it breaks conservation of momentum and also produces a constant thrust, it breaks conservation of energy since kinetic energy goes up quadratically with time while input energy goes up linearly.
If it doesn't break conservation of energy, there will be a privileged reference frame in which it produces maximum thrust per joule, breaking the relativity of reference frames.
Adjust probability estimates accordingly.
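(A quick sketch of the arithmetic behind the first point, taking a constant thrust F on mass m and a constant input power P as given:)

```latex
% Constant thrust F on mass m gives v(t) = (F/m) t, so kinetic energy grows
% quadratically while the energy supplied at constant power P grows linearly:
\[
  E_{\text{kin}}(t) = \tfrac{1}{2}\, m\, v(t)^2 = \frac{F^2 t^2}{2m},
  \qquad
  E_{\text{in}}(t) = P\, t .
\]
% For t > 2mP/F^2 the kinetic energy exceeds the energy put in, so a
% constant-thrust, constant-power reactionless drive would break energy
% conservation (in the frame where this analysis is done).
```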
Replies from: DanielLC, None↑ comment by IlyaShpitser · 2015-07-27T17:46:42.041Z · LW(p) · GW(p)
Conservation laws occasionally turn out to be false. That said, momentum is pretty big, since it corresponds to translation and rotation invariance, and those intuitively seem pretty likely to be true. But then there was
↑ comment by Daniel_Burfoot · 2015-07-27T18:25:53.261Z · LW(p) · GW(p)
I would give at least .00001 probability to the following: momentum per se is not conserved, but instead some related quantity, call it zomentum, is conserved, and momentum is almost exactly equal to zomentum under the vast majority of normal conditions.
In general, since we can only do experiments in the vicinity of Earth, we should always be wondering if our laws of physics are just good linearized approximations, highly accurate in our zone of spacetime, of real physics.
Replies from: Squark↑ comment by Squark · 2015-08-01T18:28:14.220Z · LW(p) · GW(p)
This is not a very meaningful claim since in modern physics momentum is not "mv" or any such simple formula. Momentum is the Noether charge associated with spatial translation symmetry which for field theory typically means the integral over space of some expression involving the fields and their derivatives. In general relativity things are even more complicated. Strictly speaking momentum conservation only holds for spacetime asymptotics which have spatial translation symmetry. There is no good analogue of momentum conservation for e.g. compact space.
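(For concreteness, a sketch of what "Noether charge of spatial translation symmetry" means for a single scalar field; standard field-theory conventions, with signature and boundary details glossed over:)

```latex
% Translation symmetry of a Lagrangian density L(phi, d_mu phi) gives the
% canonical stress-energy tensor
\[
  T^{\mu}{}_{\nu}
    = \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}\, \partial_\nu \phi
      - \delta^{\mu}_{\nu}\, \mathcal{L},
\]
% and the conserved momentum is an integral over a spatial slice,
\[
  P_i = \int \mathrm{d}^3x \; T^{0}{}_{i},
\]
% which only reduces to "m v" in simple limits. Its conservation depends on the
% spacetime (or its asymptotics) actually having spatial translation symmetry,
% which is the point made above.
```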
Nonetheless, the EmDrive drive still shouldn't work (and probably doesn't work).
↑ comment by Vaniver · 2015-07-27T17:36:53.075Z · LW(p) · GW(p)
This seems much more like a "We know he broke some part of the Federal Aviation Act, and as soon as we decide which part it is, some type of charge will be filed" situation. The person who invented it doesn't think it's reactionless, if thrust is generated it's almost certainly not reactionless, but what's going on is unclear.
↑ comment by Shmi (shminux) · 2015-07-28T06:24:23.670Z · LW(p) · GW(p)
See http://www.preposterousuniverse.com/blog/2015/05/26/warp-drives-and-scientific-reasoning/ for a thorough analysis.
comment by [deleted] · 2015-07-27T14:42:56.214Z · LW(p) · GW(p)
There has been far less writing on improving rationality here on LW during the last few years. Has everything important been said about the subject, or have you just given up on trying to improve your rationality? Are there diminishing returns on improving rationality? Is it related to the fact that it's very hard to get rid of most cognitive biases, no matter how hard you try to focus on them? Or have people moved these discussions to different forums, or to real life?
Or, as Yvain said in the 2014 Survey results:
It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.
This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.
Replies from: sixes_and_sevens, D_Malik, Unnamed, None, Viliam, pcm
↑ comment by sixes_and_sevens · 2015-07-27T15:31:07.420Z · LW(p) · GW(p)
LW's strongest, most dedicated writers all seem to have moved on to other projects or venues, as has the better part of its commentariat.
In some ways, this is a good thing. There is now, for example, a wider rationalist blogosphere, including interesting people who were previously put off by idiosyncrasies of Less Wrong. In other ways, it's less good; LW is no longer a focal point for this sort of material. I'm not sure if such a focal point exists any more.
Replies from: Baughn, Lumifer↑ comment by Baughn · 2015-07-28T08:00:05.002Z · LW(p) · GW(p)
Where, exactly? All I've noticed is that there's less interesting material to read, and I don't know where to go for more.
Okay, SSC. That's about it.
Replies from: Vaniver, Username, Sarunas, Benito↑ comment by Vaniver · 2015-07-28T19:18:52.942Z · LW(p) · GW(p)
Here's one discussion. One thing that came out of it is the RationalistDiaspora subreddit.
↑ comment by Sarunas · 2015-07-28T14:19:51.138Z · LW(p) · GW(p)
http://agentfoundations.org/ , https://intelligence.org/ , offline.
↑ comment by Ben Pace (Benito) · 2015-07-28T15:37:55.172Z · LW(p) · GW(p)
Tumblr is the new place.
↑ comment by Lumifer · 2015-07-27T16:17:50.390Z · LW(p) · GW(p)
LW as an incubator?
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-07-27T16:34:23.092Z · LW(p) · GW(p)
Or a host for a beautiful parasitic wasp?
Replies from: Lumifer↑ comment by D_Malik · 2015-07-27T16:43:20.031Z · LW(p) · GW(p)
About that survey... Suppose I ask you to guess the result of a biased coin which comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is "heads" (these are the "easy" or "obvious" questions) and ~20 times the right answer is "tails" (these are the "hard" or "surprising" questions). Then the correct guess, if you aren't told whether a given question is "easy" or "hard", is to guess heads with 80% confidence, for every question. Then you're underconfident on the "easy" questions, because you guessed heads with 80% confidence but heads came up 100% of the time. And you're overconfident on the "hard" questions, because you guessed heads with 80% confidence but got heads 0% of the time.
So you can get apparent under/overconfidence on easy/hard questions respectively, even if you're perfectly calibrated, if you aren't told in advance whether a question is easy or hard. Maybe the effect Yvain is describing does exist, but his post does not demonstrate it.
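(A minimal simulation of this point, with the 80% bias from the example above; the other numbers are arbitrary:)

```python
import random

random.seed(0)
N = 100_000      # many "questions" so the percentages are stable
p_heads = 0.8    # the biased coin; the guesser always says "heads" with 80% confidence

results = [random.random() < p_heads for _ in range(N)]

# Overall the guesser is perfectly calibrated: says 80%, right ~80% of the time.
print(sum(results) / N)                              # ~0.80

# Partition by outcome, as in the easy/hard split:
easy = [r for r in results if r]       # questions where "heads" turned out correct
hard = [r for r in results if not r]   # questions where "heads" turned out wrong

# Conditioned on "easy", the 80%-confident guesses were right 100% of the time
# (apparent underconfidence); conditioned on "hard", 0% (apparent overconfidence).
print(sum(easy) / len(easy), sum(hard) / len(hard))  # 1.0  0.0
```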
Replies from: cousin_it, tim↑ comment by cousin_it · 2015-07-27T18:54:34.877Z · LW(p) · GW(p)
Wow, that's a great point. We can't measure anyone's "true" calibration by asking them a specific set of questions, because we're not drawing questions from the same distribution as nature! That's up there with the obvious-in-retrospect point that the placebo effect gets stronger or weaker depending on the size of the placebo group in the experiment. Good work :-)
↑ comment by tim · 2015-07-28T02:24:27.259Z · LW(p) · GW(p)
I am probably misunderstanding something here, but doesn't this
Then the correct guess, if you don't know whether a given question is "easy" or "hard"...
Basically say, "if you have no calibration whatsoever?" If there are distinct categories of questions (easy and hard) and you can't tell which questions belong to which category, then simply guessing according to your overall base rate will make your calibration look terrible - because it is
Replies from: D_Malik↑ comment by D_Malik · 2015-07-28T16:50:03.759Z · LW(p) · GW(p)
Replace "if you don't know" with "if you aren't told". If you believe 80% of them are easy, then you're perfectly calibrated as to whether or not a question is easy, and the apparent under/overconfidence remains.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-28T17:15:04.492Z · LW(p) · GW(p)
If you believe 80% of them are easy, then you're perfectly calibrated as to whether or not a question is easy, and the apparent under/overconfidence remains.
I am still confused.
You don't measure calibration by asking "Which percentage of this set of questions is easy?". You measure it by offering each question one by one and asking "Is this one easy? What about that one?".
Calibration applies to individual questions, not to aggregates. If, for some reason, you believe that 80% of the questions in the set is easy but you have no idea which ones, you are not perfectly calibrated, in fact your calibration sucks because you cannot distinguish easy and hard.
Replies from: tut↑ comment by tut · 2015-07-28T18:08:24.445Z · LW(p) · GW(p)
Calibration for single questions doesn't make any sense. Calibration applies to individuals, and is about how their subjective probability of being right about questions in some class relates to what proportion of the questions in that class they are right about.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-28T18:30:09.419Z · LW(p) · GW(p)
Well, let's walk through the scenario.
Alice is given 100 calibration questions. She knows that some of them are easy and some are hard. She doesn't know how many are easy and how many are hard.
Alice goes through the 100 questions and at the end -- according to how I understand D_Malik's scenario -- she says "I have no idea whether any particular question is hard or easy, but I think that out of this hundred 80 questions are easy. I just don't know which ones". And, under the assumption that 80 question were indeed easy, this is supposed to represent perfect calibration.
That makes no sense to me at all.
Replies from: Vaniver↑ comment by Vaniver · 2015-07-28T19:11:54.702Z · LW(p) · GW(p)
D_Malik's scenario illustrates that it doesn't make sense to partition the questions based on observed difficulty and then measure calibration, because this will induce a selection effect. The correct procedure is to partition the questions based on expected difficulty and then measure calibration.
For example, I say "heads" every time for the coin, with 80% confidence. That says to you that I think all flips are equally hard to predict prospectively. But if you were to compare my track record for heads and tails separately--that is, look at the situation retrospectively--then you would think that I was simultaneously underconfident and overconfident.
To make it clearer what it should look like normally, suppose there are two coins, red and blue. The red coin lands heads 80% of the time and the blue coin lands heads 70% of the time, and we alternate between flipping the red coin and the blue coin.
If I always answer heads, with 80% when it's red and 70% when it's blue, I will be as calibrated as someone who always answers heads with 75%, but will have more skill. But retrospectively, one will be able to make the claim that we are underconfident and overconfident.
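(A sketch of the red/blue-coin comparison above, using the Brier score as one conventional way to make the extra "skill" visible; the probabilities are the ones given in the comment:)

```python
import random

random.seed(0)
N = 100_000
# Alternate flips of the red coin (P(heads) = 0.8) and the blue coin (P(heads) = 0.7).
flips = []
for i in range(N):
    coin = "red" if i % 2 == 0 else "blue"
    p = 0.8 if coin == "red" else 0.7
    flips.append((coin, random.random() < p))

def brier(preds_and_outcomes):
    """Mean squared error of stated probabilities vs. 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in preds_and_outcomes) / len(preds_and_outcomes)

# Forecaster A distinguishes the coins: 80% on red, 70% on blue.
a = [(0.8 if coin == "red" else 0.7, outcome) for coin, outcome in flips]
# Forecaster B lumps them together: 75% on everything.
b = [(0.75, outcome) for _, outcome in flips]

# Both are (approximately) calibrated -- A's 80% answers come up heads ~80% of the
# time, etc., and B's 75% matches the overall heads rate -- but A has more skill,
# which shows up as a slightly lower Brier score (~0.185 vs ~0.1875).
print("Brier A:", brier(a))
print("Brier B:", brier(b))
```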
Replies from: Lumifer↑ comment by Lumifer · 2015-07-28T19:41:02.976Z · LW(p) · GW(p)
D_Malik's scenario illustrates that it doesn't make sense to partition the questions based on observed difficulty and then measure calibration, because this will induce a selection effect. The correct procedure to partition the questions based on expected difficulty and then measure calibration.
Yes, I agree with that. However it still seems to me that the example with coins is misleading and that the given example of "perfect calibration" is anything but. Let me try to explain.
Since we're talking about calibration, let's not use coin flips but use calibration questions.
Alice gets 100 calibration questions. To each one she provides an answer plus her confidence in her answer expressed as a percentage.
In both yours and D_Malik's example the confidence given is the same for all questions. Let's say it is 80%. That is an important part: Alice gives her confidence for each question as 80%. This means that for her the difficulty of each question is the same -- she cannot distinguish between them on the basis of difficulty.
Let's say the correctness of the answer is binary -- it's either correct or not. It is quite obvious that if we collect all Alice's correct answers in one pile and all her incorrect answers in another pile, she will look to be miscalibrated, both underconfident (for the correct pile) and overconfident (for the incorrect pile).
But now we have the issue that some questions are "easy" and some are "hard". My understanding of these terms is that the test-giver, knowing Alice, can forecast which questions she'll be able to mostly answer correctly (those are the easy ones) and which questions she will not be able to mostly answer correctly (those are the hard ones). If this is so (and assuming the test-giver is right about Alice which is testable by looking at the proportions of easy and hard questions in the correct and incorrect piles), then Alice fails calibration because she cannot distinguish easy and hard questions.
You are suggesting, however, that there is an alternate definition of "easy" and "hard" which is the post-factum assignment of the "easy" label to all questions in the correct pile and of the "hard" label to all questions in the incorrect pile. That makes no sense to me, as it is obviously a stupid thing to do, but it may be that the original post argued exactly against this kind of stupidity.
P.S. And, by the way, the original comment which started this subthread quoted Yvain and then D_Malik pronounced Yvain's conclusions suspicious. But Yvain did not condition on the outcomes (correct/incorrect answers), he conditioned on confidence! It's a perfectly valid exercise to create a subset of questions where someone declared, say, 50% confidence, and then see if the proportion of correct answers is around that 50%.
Replies from: Unnamed, Vaniver↑ comment by Unnamed · 2015-07-28T21:25:30.777Z · LW(p) · GW(p)
Suppose that I am given a calibration question about a racehorse and I guess "Secretariat" (since that's the only horse I remember) and give a 30% probability (since I figure it's a somewhat plausible answer). If it turns out that Secretariat is the correct answer, then I'll look really underconfident.
But that's just a sample size of one. Giving one question to one LWer is a bad method for testing whether LWers are overconfident or underconfident (or appropriately confident). So, what if we give that same question to 1000 LWers?
That actually doesn't help much. "Secretariat" is a really obvious guess - probably lots of people who know only a little about horseracing will make the same guess, with low to middling probability, and wind up getting it right. On that question, LWers will look horrendously underconfident. The problem with this method is that, in a sense, it still has a sample size of only one, since tests of calibration are sampling both from people and from questions.
The LW survey had better survey design than that, with 10 calibration questions. But Yvain's data analysis had exactly this problem - he analyzed the questions one-by-one, leading (unsurprisingly) to the result that LWers looked wildly underconfident on some questions and wildly overconfident on others. That is why I looked at all 10 questions in aggregate. On average (after some data cleanup) LWers gave a probability of 47.9% and got 44.0% correct. Just 3.9 percentage points of overconfidence. For LWers with 1000+ karma, the average estimate was 49.8% and they got 48.3% correct - just a 1.4 percentage point bias towards overconfidence.
Being well-calibrated does not only mean "not overconfident on average, and not underconfident on average". It also means that your probability estimates track the actual frequencies across the whole range from 0 to 1 - when you say "90%" it happens 90% of the time, when you say "80%" it happens 80% of the time, etc. In D_Malik's hypothetical scenario where you always answer "80%", we aren't getting any data on your calibration for the rest of the range of subjective probabilities. But that scenario could be modified to show calibration across the whole range (e.g., several biased coins, with known biases). My analysis of the LW survey in the previous paragraph also only addresses overconfidence on average, but I also did another analysis which looked at slopes across the range of subjective probabilities and found similar results.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-28T23:52:10.100Z · LW(p) · GW(p)
That is why I looked at all 10 questions in aggregate.
Well, you did not look at calibration, you looked at overconfidence which I don't think is a terribly useful metric -- it ignores the actual calibration (the match between the confidence and the answer) and just smushes everything into two averages.
It reminds me of an old joke about a guy who went hunting with his friend the statistician. They found a deer, the hunter aimed, fired -- and missed. The bullet went six feet to the left of the deer. Amazingly, the deer ignored the shot, so the hunter aimed again, fired, and this time the bullet went six feet to the right of the deer. "You got him, you got him!" yelled the statistician...
So, no, I don't think that overconfidence is a useful metric when we're talking about calibration.
but I also did another analysis which looked at slopes across the range of subjective probabilities
Sorry, ordinary least-squares regression is the wrong tool to use when your response variable is binary. Your slopes are not valid. You need to use logistic regression.
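(A sketch of the difference, using statsmodels on made-up data; the point is only that a 0/1 response calls for a logit link rather than a straight line, not that these particular numbers mean anything:)

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Made-up calibration data: stated confidence (0-1) and whether the answer was
# correct, generated from a perfectly calibrated respondent.
confidence = rng.uniform(0.05, 0.95, size=1000)
correct = (rng.random(1000) < confidence).astype(float)

X = sm.add_constant(confidence)

# OLS on a binary response: the fitted slope can be compared to 1, but the error
# assumptions are wrong for 0/1 data, so inference on that slope is shaky.
ols_fit = sm.OLS(correct, X).fit()

# Logistic regression models P(correct) directly and is the standard tool here.
logit_fit = sm.Logit(correct, X).fit(disp=0)

print(ols_fit.params)    # intercept ~0, slope ~1 for a calibrated respondent
print(logit_fit.params)  # coefficients on the log-odds scale
```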
Replies from: Unnamed↑ comment by Unnamed · 2015-07-29T01:11:25.896Z · LW(p) · GW(p)
Overconfidence is the main failure of calibration that people tend to make in the published research. If LWers are barely overconfident, then that is pretty interesting.
I used linear regression because perfect calibration is reflected by a linear relationship between subjective probability and correct answers, with a slope of 1.
If you prefer, here is a graph in the same style that Yvain used.
X-axis shows subjective probability, with responses divided into 11 bins (<5, <15, ..., <95, and 95+). Y-axis shows proportion correct in each bin, blue dots show data from all LWers on all calibration questions (after data cleaning), and the line indicates perfect calibration. Dots below the line indicate overconfidence, dots above the line indicate underconfidence. Sample size for the bins ranges from 461 to 2241.
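(A sketch of how a binned calibration curve like that can be computed; the bin edges mirror the ones described above, and the data here are placeholders:)

```python
import numpy as np

def calibration_curve(confidence, correct, edges=None):
    """Bin stated confidences; return (bin centers, observed proportion correct, counts)."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    if edges is None:
        # 11 bins: <5%, 5-15%, ..., 85-95%, 95%+
        edges = np.array([0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100]) / 100.0
    bin_index = np.digitize(confidence, edges[1:-1])
    centers, proportions, counts = [], [], []
    for b in range(len(edges) - 1):
        mask = bin_index == b
        if mask.any():
            centers.append((edges[b] + edges[b + 1]) / 2)
            proportions.append(correct[mask].mean())
            counts.append(int(mask.sum()))
    return centers, proportions, counts

# Placeholder data; bins falling below the y = x line indicate overconfidence,
# bins above it indicate underconfidence.
conf = np.random.default_rng(0).uniform(0, 1, 500)
corr = (np.random.default_rng(1).random(500) < conf).astype(int)
print(calibration_curve(conf, corr))
```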
↑ comment by Vaniver · 2015-07-28T23:41:53.617Z · LW(p) · GW(p)
My understanding of these terms is that the test-giver, knowing Alice, can forecast which questions she'll be able to mostly answer correctly (those are the easy ones) and which questions she will not be able to mostly answer correctly (those are the hard ones).
I agree that if Yvain had predicted what percentage of survey-takers would get each question correct before the survey was released, that would be useful as a measure of the questions' difficulty and an interesting analysis. That was not done in this case.
That makes no sense to me as being an obviously a stupid thing to do, but it may be that the original post argued exactly against this kind of stupidity.
The labeling is not obviously stupid--what questions the LW community has a high probability of getting right is a fact about the LW community, not about Yvain's impression of the LW community. The usage of that label for analysis of calibration does suffer from the issue D_Malik raised, which is why I think Unnamed's analysis is more insightful than Yvain's and their critiques are valid.
However it still seems to me that the example with coins is misleading and that the given example of "perfect calibration" is anything but.
It is according to what calibration means in the context of probabilities. Like Unnamed points out, if you are unhappy that we are assigning a property of correct mappings ('calibration') to a narrow mapping ("80%"->80%) instead of a broad mapping ("50%"->50%, "60%"->60%, etc.), it's valid to be skeptical that the calibration will generalize--but it doesn't mean the assessment is uncalibrated.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-29T00:09:29.924Z · LW(p) · GW(p)
It is according to what calibration means in the context of probabilities.
Your link actually doesn't provide any information about how to evaluate or estimate someone's calibration which is what we are talking about.
if you are unhappy that we are assigning a property of correct mappings ('calibration') to a narrow mapping
It's not quite that. I'm not happy with this use of averages. I'll need to think more about it, but off the top of my head, I'd look at the average absolute difference between the answer (which is 0 or 1) and the confidence expressed, or maybe the square root of the sum of squares... But don't quote me on that, I'm just thinking aloud here.
Replies from: Vaniver↑ comment by Vaniver · 2015-07-29T01:27:26.269Z · LW(p) · GW(p)
Your link actually doesn't provide any information about how to evaluate or estimate someone's calibration which is what we are talking about.
If we don't agree about what it is, it will be very difficult to agree how to evaluate it!
It's not quite that. I'm not happy with this use of averages.
Surely it makes sense to use averages to determine the probability of being correct for any given confidence level. If I've grouped together 8 predictions and labeled them "80%", and 4 of them are correct and 4 of them are incorrect, it seems sensible to describe my correctness at my "80%" confidence level as 50%.
If one wants to measure my correctness across multiple confidence levels, then what aggregation procedure to use is unclear, which is why many papers on calibration will present the entire graph (along with individualized error bars to make clear how unlikely any particular correctness value is--getting 100% correct at the "80%" level isn't that meaningful if I only used "80%" twice!).
I'll need to think more about it, but off the top of my head, I'd look at the average absolute difference between the answer (which is 0 or 1) and the confidence expressed, or maybe the square root of the sum of squares... But don't quote me on that, I'm just thinking aloud here.
You may find the Wikipedia page on scoring rules interesting. My impression is that it is difficult to distinguish between skill (an expert's ability to correlate their answer with the ground truth) and calibration (an expert's ability to correlate their reported probability with their actual correctness) with a single point estimate,* but something like the slope that Unnamed discusses here is a solid attempt.
*That is, assuming that the expert knows what rule you're using and is incentivized by a high score, you also want the rule to be proper, where the expert maximizes their expected reward by supplying their true estimate of the probability.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-29T02:26:03.206Z · LW(p) · GW(p)
If one wants to measure my correctness across multiple confidence levels, then what aggregation procedure to use is unclear
Yes, that is precisely the issue for me here. Essentially, you have to specify a loss function and then aggregate it. It's unclear what kind will work best here and what that "best" even means.
You may find the Wikipedia page on scoring rules interesting.
Yes, thank you, that's useful.
Notably, Philip Tetlock in his Expert Political Judgement project uses Brier scoring.
↑ comment by Unnamed · 2015-07-27T18:23:59.178Z · LW(p) · GW(p)
I re-analyzed the calibration data, looking at all 10 question averaged together (which I think is a better approach than going question-by-question, for roughly the reasons that D_Malik gives), and found that veterans did better than newbies (and even newbies were pretty well calibrated). I also found similar results for other biases on the 2012 LW survey.
↑ comment by [deleted] · 2015-07-27T21:09:18.785Z · LW(p) · GW(p)
A lot of this has moved to blogs. See malcolmocean.com, mindingourway.com, themindsui.com, agentyduck.blogspot.com, and slatestarcodex.com for more of this discussion.
That being said, I think writing/reading about rationality is very different from becoming good at it. I think someone who did a weekend at CFAR, or the Hubbard Research AIE level 2 workshop, would rank much higher on rationality than someone who spent months reading through all the sequences.
↑ comment by Viliam · 2015-07-28T09:09:25.946Z · LW(p) · GW(p)
1) There are diminishing returns on talking about improving rationality.
2) Becoming more rational could make you spend less time online, including on LessWrong. (The time you would have spent in the past writing beautiful and highly upvoted blog articles is now spent making money or doing science.) Note: This argument is not true if building a stronger rationalist community would generate more good than whatever you are doing alone instead. However, there may be a problem with capturing the generated value. (Eliezer indirectly gets paid for having published on LessWrong. But most of the others don't.)
↑ comment by pcm · 2015-07-27T19:01:13.255Z · LW(p) · GW(p)
Some of the discussion has moved to CFAR, although that involves more focus on how to get better cooperation between System 1 and System 2, and less on avoiding specific biases.
Maybe the most rational people don't find time to take surveys?
comment by chaosmage · 2015-07-28T17:24:30.191Z · LW(p) · GW(p)
"The Games of Entropy", which premiered at the European Less Wrong Community Weekend 2015, chapter two of the science and rationality promoting art project Seven Secular Sermons, is now available on YouTube. The first chapter, "Adrift in Space and Time" is also there, re-recorded with better audio and video quality. Enjoy!
comment by Fluttershy · 2015-07-27T11:23:10.022Z · LW(p) · GW(p)
I've just finished a solid first-draft of a post that I'm planning on submitting to main, and I'm looking for someone analytical to look over a few of my calculations. I'm pretty sensitive, so I'd be embarrassed if I posted something with a huge mistake in it to LW. The post is about the extent to which castration performed at various ages extends life expectancy in men, and was mainly written to inform people interested in life extension about said topic, though it might also be of interest to MtF trans people.
All of my calculations are in an excel spreadsheet, so I'll email you the text of the post, as well as the excel file, if you're interested in looking over my work. I'm mainly focused on big-picture advice right now, so I'm not really looking for someone to, say, look for typos. The only thing I'm really worried about is that perhaps I've done something mathematically unsavory when trying to crudely use mean age-at-death actuarial data from a subset of the population that existed in the past to estimate how long members of that same subset of the population might live today.
Being able to use math to build the backbone of a scientific paper might be a useful skill for any volunteers to have, though I don't suspect that any advanced knowledge of statistics is necessary. Thanks!
Replies from: cousin_it, Vaniver↑ comment by cousin_it · 2015-07-27T18:55:56.482Z · LW(p) · GW(p)
I can take a look, send me a PM if you like.
Replies from: Fluttershy↑ comment by Fluttershy · 2015-07-28T03:08:13.718Z · LW(p) · GW(p)
Thanks for the offer! I've just emailed Vaniver (since I already know him), and I'll re-evaluate how confident I feel about my post after I chat with him, and then send you a note if I think that I'm not quite where I want to be with the post by then.
↑ comment by Vaniver · 2015-07-27T15:30:23.663Z · LW(p) · GW(p)
I can take a look; you know my email.
All of my calculations are in an excel spreadsheet, so I'll email you the text of the post, as well as the excel file, if you're interested in looking over my work.
One of the trends I've seen happening that I'm a fan of is writing posts/papers/etc. in R, so that the analysis can be trivially reproduced or altered. In general, spreadsheets are notoriously prone to calculation errors because the underlying code is hidden and decentralized; it's much easier to look at a python or R script and check its consistency than an Excel table.
(It's better to finish this project as is than to delay this project until you know enough Python or R to reproduce the analysis, but something to think about for future projects / something to do if you already know enough Python or R.)
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-07-27T22:54:59.360Z · LW(p) · GW(p)
Spreadsheets can be reproduced and altered just as any code. I think the purpose of writing a post in code is mainly about keeping the code in sync with the exposition. But this was the purpose of MS Office before R even existed.
I am skeptical of spreadsheets, but is there any evidence that they are worse than any other kind of code? Indeed
These error rates, although troubling, are in line with those in programming and other human cognitive domains.
(I am not sure what that means. If the per-cell error rate is the same as the per-line rate of conventional programming, that definitely counts as spreadsheets being terrible. But I think the claim is 0.5% per-cell error rate and 5% per-line error rate.)
Even if there were evidence that spreadsheets are worse than other codebases, I would be hesitant to blame the spreadsheets, rather than the operators. It is true that there are many classes of errors that they make possible, but they also have the positive effect of encouraging the user to look at intermediate steps in the calculation. I suspect that the biggest problem with spreadsheets is that they are used by amateurs. People see them as safe and easy, while they see conventional code as difficult and dangerous.
Replies from: Vaniver↑ comment by Vaniver · 2015-07-27T23:47:35.589Z · LW(p) · GW(p)
Spreadsheets can be reproduced and altered just as any code.
The key word missing here is inspected, which seems like the core difference to me.
I suspect that the biggest problem with spreadsheets is that they are used by amateurs.
I agree with this.
comment by NancyLebovitz · 2015-08-01T16:54:31.738Z · LW(p) · GW(p)
How to tell if a process is out of control-- that weirdness might be random, but you should check in case it isn't.
Replies from: Elo, Elo, MrMind↑ comment by Elo · 2015-08-10T00:15:48.263Z · LW(p) · GW(p)
I received feedback from some friends suggesting that this is not applicable to large datasets, i.e. big data. I play with my own quantified-self datasets of 100,000+ lines from time to time (think minutised data at 1440 minutes a day, for a year and counting). Can you discuss this more (maybe in the next open thread)?
Replies from: Vaniver, NancyLebovitz↑ comment by Vaniver · 2015-08-10T01:09:54.118Z · LW(p) · GW(p)
It shouldn't be too challenging to apply Nelson rules to 100k lines, but the point of statistical process control is continuous monitoring--if you weigh yourself every day, you would look at the two-week trend every day, for example. Writing a script that checks if any of these rules are violated and emails you the graph if that's true seems simple and potentially useful.
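A minimal sketch of such a script, covering just two of the Nelson rules with their textbook thresholds (the weight series is made up, the plotting/emailing part is left out, and it uses the series' own mean and standard deviation as a crude stand-in for proper control limits):

```python
# Sketch: flag two common Nelson rules on a series of daily measurements.
# Simplification: uses the sample mean/stdev of the series itself as control limits.
from statistics import mean, stdev

def nelson_flags(xs):
    m, s = mean(xs), stdev(xs)
    flags = []
    for i, x in enumerate(xs):
        if abs(x - m) > 3 * s:                    # Rule 1: one point beyond 3 sigma
            flags.append((i, "beyond 3 sigma"))
    for i in range(len(xs) - 8):                  # Rule 2: nine points on one side of the mean
        window = xs[i:i + 9]
        if all(x > m for x in window) or all(x < m for x in window):
            flags.append((i, "9 points on one side of the mean"))
    return flags

daily_weights = [81.2, 81.0, 80.9, 81.1, 80.8, 80.7, 80.9, 80.6, 80.5, 80.4, 80.3, 80.2]
print(nelson_flags(daily_weights))
```

If that returns any flags, email yourself the chart; otherwise stay quiet.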
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-08-14T01:37:38.509Z · LW(p) · GW(p)
I think what Elo's friends mean is that the constants hard-coded into Nelson's rules reflect some assumption on sample size. With a big sample, you'll violate them all the time and it won't mean anything. But they are a good starting point for tuning the thresholds.
Replies from: Vaniver↑ comment by Vaniver · 2015-08-14T02:26:52.980Z · LW(p) · GW(p)
I think what Elo's friends mean is that the constants hard-coded into Nelson's rules reflect some assumption on sample size. With a big sample, you'll violate them all the time and it won't mean anything. But they are a good starting point for tuning the thresholds.
If you have many parallel sensors, then yes, a flag that occurs 5% of the time due to noise will flag on at least one twentieth of your sensors. Elo's point, as I understood it, was that they have a long history--which is not relevant to the applicability of SPC.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-08-14T02:47:18.861Z · LW(p) · GW(p)
The long history is not relevant, but the frequency is. Most of Nelson's rules are 1/1000 events. If you don't expect trends to change more often than once per 1000 measurements, that's too sensitive. I don't know what Elo is measuring every minute, but that probably is too sensitive and most of the hits will be false positives. (Actually, many things will have daily cycles. If Nelson notices them, that's interesting, but after removing such patterns, it will probably be too sensitive.)
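For a rough sense of scale (a back-of-the-envelope sketch using only the 1-per-1000 figure above and Elo's 1440 points per day; it treats points as independent, which the windowed rules aren't, so it is only an approximate expectation):

```python
# Rough expected count of noise-only flags for a 1/1000-per-point rule on minutised data.
points_per_day = 1440
rule_rate = 1 / 1000   # roughly the per-point false-positive rate of a typical Nelson rule

print("flags per day:", points_per_day * rule_rate)          # ~1.4
print("flags per year:", points_per_day * 365 * rule_rate)   # ~526
```

That is a flag roughly every seventeen hours from noise alone, before any real trend change shows up.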
Replies from: Vaniver↑ comment by NancyLebovitz · 2015-08-10T00:43:46.464Z · LW(p) · GW(p)
All I know about it is that the link looked like it was worth mentioning here. If you're interested in further discussion, you should bring it up yourself.
comment by ScottL · 2015-07-28T01:53:03.217Z · LW(p) · GW(p)
Has anyone been working on the basics of rationality or summarizing the Sequences? I think it would be helpful if someone created a sequence in which they cover the Less Wrong core concepts concisely, as well as providing practical advice on how to apply rationality skills related to these concepts at the 5-second level.
A useful format for the posts might be: overview of a concept, an example in which people frequently fail at being rational because they innately don't follow the concept, and then advice on how to apply the concept. Or another format might be: a principle underlying multiple Less Wrong concepts, examples in which people fail at being rational because they don't follow the concepts, and then advice on how to deal with the principle and become more rational.
I think that all these posts should be summed up with, or contain, practical methods on how to improve rationality skills and ways to quantify and measure these improvements. The results of CFAR workshops could probably provide a basis for these methods.
Lots of links to the related Less Wrong posts or wiki pages would also be useful.
Replies from: Vaniver↑ comment by Vaniver · 2015-07-28T02:21:15.145Z · LW(p) · GW(p)
The Rationality eBook is out now, which is an improvement over where things stood four years ago.
The nice thing about summarizing the Sequences / separating the useful concepts from blog posts / writing new explanations for those concepts is that it's a thing that you can do, and partial completion is useful. The wiki is the natural place to host this.
Replies from: ScottL↑ comment by ScottL · 2015-07-28T03:28:49.374Z · LW(p) · GW(p)
I assume you mean Rationality: From AI to Zombies. I have read this. I think that the wiki is brilliant for: concise definitions of the concepts, hosting the links to all of the related posts, and storing reference data like the meanings of acronyms. I guess I am more looking for something that would work as an introduction for Less Wrong newbies, a refresher of the main concepts for Less Wrong veterans, and a guideline or best-practices document which will explain methods that can be used to apply the core Less Wrong concepts. These methods should preferably have been verified to be useful in some way.
I suppose I could write summaries/new explanations, but I have the following problems with this:
- I am sure that there are other people on this site who could do a much better job at this than I can
- I don’t have any practical experience teaching these concepts to others
- I don’t have access to data on what methods have worked in teaching these concepts to others
↑ comment by [deleted] · 2015-07-28T15:22:52.469Z · LW(p) · GW(p)
There is the reading group. I'm also short on experience on what ingrains techniques in one's life, but it seems that consistent and gradual integration is one way.
Noticing when the concept will be relevant is the first step. Making sure you have a way of honing situational noticing is more difficult. Application only gets a chance when these two things are developed to a decent extent.
Brienne has a blog that goes into detail on this. Useful.
But no one has gone through the entirety of the sequences and compacted them entirely. I am starting a practical application system overview for myself, and will begin posting results with the reading group. It will take a long time before the posts are complete and ready to be compiled, however.
comment by ZeitPolizei · 2015-07-28T00:39:55.860Z · LW(p) · GW(p)
Donating now vs. saving up for a high passive income
Is there any sort of consensus on whether it is generally better to (a) directly donate excess money you earn or (b) save and invest money until you have a high enough passive income to be financially independent? And does the question break down to: is the long-term expected return for donated money (e.g. in terms of QALYs) higher than for invested money (donated at a later point)? If it is higher for invested money, there is a general problem of when to start donating, because in theory the longer you wait, the higher the impact of that donated money. If the expected return for invested money is higher at the moment, I expect there will however come a point in time when this will no longer be the case.
If the expected return is higher for immediately donated money, are there additional benefits of having a high passive income that can justify actively saving money? E.g. not needing to worry about job security too much...
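To make the comparison I'm asking about concrete, here is a toy model (both rates are made up, just for illustration): donate now and let the impact compound at a "charity rate", or invest at a market rate and donate the larger sum later.

```python
# Toy give-now vs. give-later comparison; the two rates are illustrative assumptions.
def impact_if_donated_now(amount, charity_rate, years):
    # Treats a donation's impact as compounding at charity_rate per year.
    return amount * (1 + charity_rate) ** years

def impact_if_invested_then_donated(amount, investment_rate, years):
    return amount * (1 + investment_rate) ** years

amount, years = 1000.0, 20
print(impact_if_donated_now(amount, charity_rate=0.10, years=years))              # ~6700
print(impact_if_invested_then_donated(amount, investment_rate=0.05, years=years)) # ~2650
```

Under these made-up numbers giving now wins; flip the two rates and waiting wins, which is exactly the comparison I'm unsure about.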
Replies from: RomeoStevens↑ comment by RomeoStevens · 2015-07-28T06:23:42.499Z · LW(p) · GW(p)
Paul Christiano has written about this subject:
http://rationalaltruist.com/2013/03/12/giving-now-vs-later/
http://rationalaltruist.com/2013/06/10/the-best-reason-to-give-later/
http://rationalaltruist.com/2014/05/14/machine-intelligence-and-capital-accumulation/
comment by Stingray · 2015-07-27T09:16:57.427Z · LW(p) · GW(p)
Is there any scientific backing for ASMR?
Replies from: None, None↑ comment by [deleted] · 2015-07-28T09:01:26.113Z · LW(p) · GW(p)
There's a very recent paper on PeerJ (hooray, open access), perhaps not what one would call "scientific backing" in the strongest sense, but more a study aiming to establish the scope of the phenomenon and relate it to other aspects of perceptual experience through a survey of self-reported ASMR experiencers: Barratt & Davis, 2015
While ASMR appears to be a genuine, relatively prevalent perceptual experience, the exact nature of the phenomenon is still unknown....
Full survey data is also provided as supplemental information (see the link above) in case anyone wants to do some deeper digging.
↑ comment by [deleted] · 2015-07-27T10:26:24.929Z · LW(p) · GW(p)
Could be related to skin orgasms. Some people have linked it to foreplay, cuddling and delousing. Perhaps the closeness of the sounds evokes strong associations with actually being touched, resulting in something like the laughter response to being tickled (which could be thought of as signaling that there is no danger from a spider crawling down your neck). Perhaps the high-frequency content of whispering even relates it to the chills we get from high-pitched noises, which is possibly connected to teeth maintenance. I'm not aware of any research.
comment by Elo · 2015-07-31T04:52:46.592Z · LW(p) · GW(p)
I travelled to a different city for a period of a few days and realised I should actively avoid trying to gather geographical information (beyond a rough sense) to free up my brain space for more important things. Then I realised I should do that near home as well.
Two part question:-
- What do you outsource that is common and uncommon among people that you know?
- What should you be avoiding keeping in your brain that you currently are? (some examples might be birthdays, what day of the week it is, city-map-location, schedules/calendars, task lists, shopping lists)
And while we are at it: What automated systems have you set up?
Replies from: Kaj_Sotala, Lumifer, None, Gunnar_Zarncke↑ comment by Kaj_Sotala · 2015-08-04T12:45:44.562Z · LW(p) · GW(p)
I was under the impression that "brain space" was unlimited for all practical intents and purposes, and that having more stuff in your brain might actually even make extra learning easier - e.g. I've often heard it said that a person loses fluid intelligence when they age, but this is compensated by them having more knowledge that they can connect new things with. Do you know of studies to the contrary?
↑ comment by Lumifer · 2015-07-31T14:46:47.778Z · LW(p) · GW(p)
What do you outsource that is common and uncommon among people that you know?
A lot of little facts (of the kind that people on LW use Anki decks to memorize). I outsource them to Google.
I barely remember any phone numbers nowadays and that seems to be common.
What should you be avoiding keeping in your brain that you currently are?
Schedules / to-do lists. I really should outsource them to some GTD app, but can't bring myself to use one consistently.
↑ comment by [deleted] · 2015-08-03T17:44:26.480Z · LW(p) · GW(p)
Dunno about that; in my case information is either worth knowing (like 'what poplar tree marks the turn left to Epipactis palustris' - ideally outsourced to a map, but I would just get confused, or 'what kind of porridge to cook tonight given the kid rejected x, y and z' - ideally outsourced to a notebook, but there are too many details to bother with it) or worth losing.
When I am ill, though, I outsource my meds list to the fridge door. (Recipes and shopping lists, too, occasionally.)
↑ comment by Gunnar_Zarncke · 2015-08-02T19:34:43.732Z · LW(p) · GW(p)
I'm not sure whether it makes me a more satisfied/happy person if I out-source lots of things to devices. I agree that it is likely more efficient to delegate lots of memory work and planning habits to devices. But it also takes some of your autonomy away. It of course depends on the specific interaction, and probably also on the person (some people may feel it quite natural to delegate tasks to (virtual) persons they trust). But as long as the out-sourced task affects you later in a non-adaptive way (and I judge this to be mostly the case), it might not feel as natural as one might like.
See also my post about when augmentations feel/are natural.
Replies from: Elo↑ comment by Elo · 2015-08-03T03:41:15.650Z · LW(p) · GW(p)
At some point you start outsourcing "enjoying things", which is exactly what I would suggest not doing. Maybe I wasn't clear - but don't outsource things that you don't want to. I.e. I like cooking, so I will probably never outsource my food-making process, because I like doing it myself. However, I don't like shopping, so I could outsource that, and I could outsource cleaning up afterwards.
comment by [deleted] · 2015-07-27T13:24:33.858Z · LW(p) · GW(p)
I'm looking for "simple tricks" for noticing internal cognition, of variable content. I don't have a particularly difficult time now, but if I can find something to expedite my self-training that would be neat.
I have in place a system where every week I focus on a group of techniques I want to adopt, but connecting my thinking to my notes seems like it could be an iffy sort of step. A simple physical/sensory association (like snapping my fingers) is what I'm going to resort to, and I do practice mindfulness, but are there any other staples I am unaware of?
Thanks :)
Replies from: None↑ comment by [deleted] · 2015-07-27T21:21:39.132Z · LW(p) · GW(p)
Brienne's actually written a pretty great article on this, here: http://agentyduck.blogspot.com/2014/12/how-to-train-noticing.html . I can't seem to find it, but she has another article where she talks about using an old-style ticker for positive habits. She clicks it every time she notices, and it's not only immediate feedback, but it gives you a running tally of how many you've done for the day, which is a bit gamified.
If it's a negative habit you're noticing, an old NLP standby is to snap a rubber band on your wrist, just enough that it stings. A more modern version of this is the Pavlok wristband - it shocks you when you do the negative thing.
Another good one is to wear a bracelet, which you switch from wrist to wrist when you notice the habit. I believe this originated with Will Bowen and his no complaint experiment. These remind you throughout the day by having the wrist band, and every time you switch it, you'll be paying more attention because you haven't habituated to the wristband yet.
Finally, there are apps like tagtime/moodrecorder/etc. These are installed on your phone and pop up variably throughout the day, asking you to be present to your internal state and record it. This gives you a more holistic view of what's going on internally throughout the day.
Replies from: None
comment by Elo · 2015-07-27T17:48:45.289Z · LW(p) · GW(p)
I am having a crisis in my life of trying to ask people a particular question and having them try to answer a different question. It's painful. I just want to yell at people; "answer the question I asked! not the one you felt like answering that was similar to the one I asked because you thought that was what I wanted to hear about or ask about!".
This has happened recently for multiple questions in my life that I have tried to ask people about. Do you have suggestions for either: a. dealing with it b. getting people to answer the right question
Assuming there isn't something wrong with the question I originally ask and how I present it.
Replies from: Richard_Kennaway, Dagon, itaibn0, Lumifer, Artaxerxes, MrMind, cousin_it, None↑ comment by Richard_Kennaway · 2015-07-28T08:43:54.537Z · LW(p) · GW(p)
Do you have suggestions for either: a. dealing with it b. getting people to answer the right question
(a) Recognise that getting upset over it does not achieve your purpose.
(b) Have you tried asking for what you want? For example:
Elo: (question)
A.N.Other: (answer not addressing what you wanted)
Elo: That's all very well, but what I really want to know is (restatement of the question)
etc., many variations possible depending on context.
Having answered your question, I shall now say something which is not an answer to your question. What is your experience of the other side of that situation, when someone asks you a question?
As a software developer, I spend a lot of time on both sides of this. When a user reports a problem, I need to elicit information about exactly what they were doing and what happened, information that they may not be well able to give me. There's no point in getting resentful that they aren't telling me exactly what I need to know off the bat. It's my job to steer them towards what I need. And when users ask me questions, I often have to ask myself, what is the real question here? Questions cannot always be answered in the terms in which they were put.
Replies from: Elo↑ comment by Elo · 2015-07-28T16:03:30.407Z · LW(p) · GW(p)
That's all very well, but what I really want to know is (restatement of the question)
I like this idea, but I fear that means my question-asking process has to start including a "wait for the irrelevant answer, then ask the question again" step, which would suck if that's the best way to go about it. My question could include a "this is the most obvious answer, but it won't work, so please answer the question I asked" - which is roughly what I was including with the statement "assuming there isn't something wrong with the question...". But for some reason I still attracted a -notAnswer- even with that caveat in there, so I am not really sure about it.
I expect to spend some time working on (a. as asked in the OP) dealing with it. I can see how the IT industry would be juggling both sides, and at times you may know the answer to their question is actually best found by answering a different question (why can't I print; is your computer turned on?).
I suspect the difference is that in IT you are an expert in the area and are being asked questions by people of lesser expert-status, so your expertise in getting to the answer implicitly gives you permission to attack the problem as presented in a different way. You could probably be more effective by appealing to known problems with known solutions in your ideaspace. In this case (and using my post as a case study for the very question itself) there are no experts; there are no people who "know this problem better". Especially considering I didn't really give enough information to even hint at a similarity in problem-space to any other worldly problems, other than the assumption statement. Perhaps not including the assumption statement would have had everyone answering the question, but I suspect (as said in other responses) I would get 101 responses in the form of "communicate your question better".
Dealing with the lack of success in answering questions doesn't solve the problem of (b in the OP) getting people to answer the right question.
I have asked on a few of the response threads now: is there something wrong with the culture of answering a different question (I find there is)? And what can be done about it?
↑ comment by Dagon · 2015-07-27T19:40:58.719Z · LW(p) · GW(p)
Assuming there isn't something wrong with the question I originally ask and how I present it.
I wouldn't assume this. Most of the time when I notice this in my conversations, it turns out I've made false assumptions about my conversational partner's state (of knowledge, receptiveness, or shared priors). Identifying those mistakes in my communication choices then lets me rephrase or ask different questions more suited to our shared purposes in the discussion.
Replies from: Elo↑ comment by Elo · 2015-07-27T23:48:35.650Z · LW(p) · GW(p)
You literally did the thing that I asked the question about. There is a reason why I quoted that assumption - exactly because I didn't want you to answer that question - I wanted answers to the question that I asked.
I feel like morpheus in this scene https://www.youtube.com/watch?v=5mdy8bFiyzY
I feel like my question was strawmanned and the weakest part of it was attacked to try to win. I want to be clear that this is not a win-state for question-answering. This is a way to lose at answering a question.
I don't mean to attack you, but you have generated the prime example of it. I feel like there is a tendency in Less Wrong culture to do this often; I have noticed I am confused by it in my own life. I realised I was doing this to people and I changed it in myself. Now I want to deliver this understanding to more people.
The question should be steelmanned and the best part of it answered, not the weakest, softest, smallest, useless, irrelevant morsel that was stated as part of the problem.
The most important question: " how do we fix it? "
(closely followed by - does that make sense? as an also important question)
Replies from: Dagon↑ comment by Dagon · 2015-07-28T00:12:58.718Z · LW(p) · GW(p)
Right, but conversation and discussion isn't about what you want. It's what each of us wants. You can ask whatever you like, and I can answer whatever I like. If we're lucky, there's some value in each. If we're aligned in our goals, they'll even match up.
The most important question: " how do we fix it? "
We don't. We accept it and work within it. Most communication is cooperation rather than interrogation, and you need to provide evidence for an assertion rather than just saying "assume unbelievable X".
Replies from: Elo↑ comment by Elo · 2015-07-28T01:17:53.315Z · LW(p) · GW(p)
"assume unbelievable X".
Only this is not an unbelievable X, it's an entirely believable X (I wouldn't have any reason to ask an unbelievable one - nor would anyone asking a question - unless they were actually trying to trick you with a question). In fact, assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in point.
Invalidating someone else's question (by attacking it or trying to defeat the purpose of the question), on the grounds that they couldn't ask the right question or that you want to answer a different question, is not a reasonable way to win a discussion. I am really not sure how to be clearer about it. Discussions are not about winning. One doesn't need to kill a question to beat it; one needs to fill its idea-space with juicy information-y goodness to satisfy it.
Yes it is possible to resolve a question by cutting it up; {real world example - someone asks you for help. You could defeat the question by figuring out how to stop them from asking for help, or by finding out why they want help and making sure they don't in the future, or can help themselves. Or you could actually help them.}
Or you could actually respond in a way that helps. There is an argument about giving a man a fish or teaching him to fish; but that's not applicable because you have to first assume people asking about fishing for sharks already know how to fish for normal fish. Give them the answers - the shark meat, then if that doesn't help - teach them how to fish for sharks! Don't tell them they don't know how to fish for normal fish then try to teach them to fish for normal fish, suggesting they can just eat normal fish.
Assuming there isn't something wrong with the question I originally ask and how I present it.
More importantly - this is a different (sometimes related) problem that can be answered in a different question at a different time if that's what I asked about. AND one I will ask later, but of myself. One irrelevant to the main question.
Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?
conversation and discussion isn't about what you want. It's what each of us wants.
Yes, this is true, but as the entity who started the thread (or the conversation generally) I should have more say about its purpose and what is wanted from it. Of course you can choose not to engage, and you can derail a thread, but that is not something you should do. I am trying to outline that the way you chose to engage was not productive (apart from accidentally providing an example of failing to answer the question).
The original question again -
Replies from: itaibn0, Dagon
Do you have suggestions for either:
a. dealing with it
b. getting people to answer the right question
↑ comment by itaibn0 · 2015-07-28T18:51:57.920Z · LW(p) · GW(p)
"assume unbelievable X".
Only this is not an unbelievable X, it's an entirely believable X (I wouldn't have any reason to ask an unbelievable one - nor would anyone asking a question - unless they were actually trying to trick you with a question). In fact, assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in point.
Are you sure that's how you want to defend your question? If you defend the question by saying that the premise is believable, you are implicitly endorsing the standard that questions should only be answered if they are reasonable. However, accepting this standard runs the risk that your conversational partner will judge your question to be unreasonable even if it isn't and fail to answer your question, in exactly the way you're complaining about. A better standard for the purpose of getting people to answer the questions you ask literally is that people should answer the questions that you ask literally even if they rely on fantastic premises.
Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?
A similar concern is applicable here: Recall that steelmanning means, when encountering an argument that seems easily flawed, not to respond to that argument but to strengthen it in ways that seem reasonable to you and answer that instead. That sounds like the exact opposite of what you want people to do to your questions.
↑ comment by Dagon · 2015-07-28T02:19:56.118Z · LW(p) · GW(p)
A lot of those examples aren't "defeating the question", they're an honest attempt to understand the motivation behind the question and help with the underlying problem. In fact, that was my intent when I first responded.
You sound frustrated that people are misunderstanding you and answering questions different than the ones you want answered. I would like to help with this, by pointing out that communication takes work and that often it takes some effort and back and forth to draw out what kind of help you want and what kind your conversational partner(s) can provide.
You can be a lot less frustrated by asking questions better, and being more receptive to responses that don't magically align with your desires.
Replies from: Elo↑ comment by Elo · 2015-07-28T03:06:03.493Z · LW(p) · GW(p)
understand the motivation behind the question
Does it matter where the question comes from? Why?
Did you misunderstand my original question? It would seem that you understood the question and then chose a path to resolving it other than the one I was aiming for the answer to cover.
Assuming that you now (several posts onwards) understand the question - can you Turing-repeat back to me what you think the question is?
Replies from: Dagon↑ comment by Dagon · 2015-07-28T16:33:34.333Z · LW(p) · GW(p)
I don't think I fully understand the question (or rather, the questions - there are always multiple parts to a query, and multiple followup directions based on the path the discussion takes). I don't think it's possible, actually - language is pretty limiting, and asynchronous low-bandwidth typed discussion even more so. To claim full understanding of your mind-state and desires when you asked the question would be ludicrous.
I think the gist of your query was around feeling frustrated that you often find yourself asking a question and someone answers in a way that doesn't satisfy you. I intended to reassure you that this happens to many of us, and that most of the time, they're just trying to be helpful and you can help them help you by adding further information to their model of you, so they can more closely match their experiences and knowledge to what they think you would benefit from hearing.
And in doing so, I was reminded that this works in reverse, as well - I often find myself trying to help by sharing experiences and information, but in such a way that the connection is not reciprocated or appreciated because my model of my correspondent is insufficient to communicate efficiently. I'll keep refining and trying, though.
↑ comment by itaibn0 · 2015-07-27T21:16:46.981Z · LW(p) · GW(p)
Sometimes what happens is that people don't know the answer to the question you're asking but still want to contribute to the discussion, so they answer a different question which they know the answer to. In this case the solution is to find someone who knows the answer before you start asking.
Replies from: Elo↑ comment by Elo · 2015-07-27T23:59:35.014Z · LW(p) · GW(p)
Sounds like it might be worthwhile accepting the fact that some answers are just rubbish, and ignoring them when I notice they are not answering the relevant question. This helps; but is a bit harder to do if it happens in person than online.
↑ comment by Lumifer · 2015-07-27T17:59:19.468Z · LW(p) · GW(p)
It's a perennial problem. My method which kinda-sorta works is to get very, very specific up to and including describing which varieties you do NOT want. It works only kinda-sorta because it tends to focus people on edge cases and definition gaming.
If you can extend the question into a whole conversation where you can progressively iterate closer to what you want, that can help, too.
Replies from: Elo↑ comment by Artaxerxes · 2015-07-28T00:16:37.247Z · LW(p) · GW(p)
Yeah, this happens.
I just want to yell at people; "answer the question I asked! not the one you felt like answering that was similar to the one I asked because you thought that was what I wanted to hear about or ask about!".
Try this, except instead of yelling, say it nicely.
One thing you could do as an example is some variation of "oh sorry, I must have phrased the question poorly, I meant (the question again, perhaps phrased differently or with more detail or with example answers or whatever)".
Replies from: Elo↑ comment by Elo · 2015-07-28T00:45:32.613Z · LW(p) · GW(p)
I probably wasn't clear about that - I never actually yell at anyone, but it evokes the emotion of wanting to do so. And I notice the same pattern so often these days: questions not getting the answers they ask for.
Edit: also the yelling-idea-thing happens as a response to the different-question being answered, not something people could predict and purposely cause me to do, so should be unrelated.
Case in point - you are an example of not answering the question I asked.
Replies from: Artaxerxes, bbleeker, None↑ comment by Artaxerxes · 2015-07-28T19:29:47.459Z · LW(p) · GW(p)
You said
Do you have suggestions for either: a. dealing with it b. getting people to answer the right question
I said
I just want to yell at people; "answer the question I asked! not the one you felt like answering that was similar to the one I asked because you thought that was what I wanted to hear about or ask about!".
Try this, except instead of yelling, say it nicely.
and I also said
One thing you could do as an example is some variation of "oh sorry, I must have phrased the question poorly, I meant (the question again, perhaps phrased differently or with more detail or with example answers or whatever)".
So I answered the question in detail.
Perhaps you aren't very good at recognizing when someone has answered your question? Obviously this is only one data point so we can't look into it too heavily, but we have at least established that this is something you are capable of doing.
↑ comment by Sabiola (bbleeker) · 2015-07-28T15:16:22.458Z · LW(p) · GW(p)
But he did answer your question. You wrote:
Do you have suggestions for either: a. dealing with it b. getting people to answer the right question
And Artaxerxes wrote:
One thing you could do as an example is some variation of "oh sorry, I must have phrased the question poorly, I meant (the question again, perhaps phrased differently or with more detail or with example answers or whatever)".
Isn't that an answer to your point b?
↑ comment by [deleted] · 2015-07-28T07:59:27.262Z · LW(p) · GW(p)
But if people don't answer the right question, despite your formulating it as plainly and civilly as possible, it means they are either motivated to miss your meaning or you are not being specific enough.
Perhaps you could ask them a question which logically follows from the expected answer to your actual question, and when they call you out on it, explain why you think this version plausible; they might object, but they at least should be constrained by your expectations. Do you think this would work?
Replies from: Elo↑ comment by Elo · 2015-07-28T16:10:23.185Z · LW(p) · GW(p)
I am confused by this:
Perhaps you could ask them a question which logically follows from the expected answer to your actual question
Can you provide a worked example? Or explain it again? Or both?
Replies from: None↑ comment by [deleted] · 2015-07-28T16:38:05.649Z · LW(p) · GW(p)
Perhaps the most famous worked example is Have you stopped beating your wife?, an instance of a rhetorical device called the loaded question; strictly speaking, this is Dark Arts, but since you are presumably willing to immediately take a step back and change your mind about the assumption (as in, 'Oh, you're single' or 'Oh, you've never beaten her' or 'Oh, you only beat other people's wives') it should not be that bad.
↑ comment by MrMind · 2015-07-28T07:51:56.977Z · LW(p) · GW(p)
Assuming there isn't something wrong with the question I originally ask and how I present it.
Aren't you blocking, with this assumption, all the parameters you can intervene on to improve your communication?
As for dealing with it, you can try to see it this way: every time you ask a precise question with very stringent constraints, you are basically asking people to solve a difficult problem for you. You are, in a sense, freeloading on others' brainpower.
As with everything, this is a scarce resource, one that you cannot really expect people to give to you freely.
Learn to accept that we are animals constantly doing cost-benefit analysis, and so if you notice a question not being answered the way you want, it's probably because of this, and you need to supply a more adequate reward.
↑ comment by Elo · 2015-07-28T15:25:06.027Z · LW(p) · GW(p)
Aren't you blocking, with this assumption
Yes. Because I don't want those answers right now; they help to answer a different question, one I am not asking this time.
Explicitly that statement was included because I didn't want 101 answers that look like, "maybe you should find ways to ask clearer". Because that's not the problem or strategy I am trying to use to attack the puzzle right now.
Your model does help.
I have a specific concern about the culture of answering questions in this way and how it is not productive at answering things. I noticed myself doing the thing and managed to untrain it, or train different strategies to answer that are helpful instead; I am looking for a method of sharing the policy of "answer the face-value question, or at least the question that was specifically asked, before trying to answer the question you think they really want answered". Any suggestions?
Replies from: MrMind↑ comment by MrMind · 2015-07-29T07:55:32.371Z · LW(p) · GW(p)
I feel that
Assuming there isn't something wrong with the question I originally ask and how I present it.
and
Do you have suggestions for either: [....] b. getting people to answer the right question
are contradictory requirements.
What am I missing?
↑ comment by cousin_it · 2015-07-27T19:02:13.695Z · LW(p) · GW(p)
Yeah, I get that a lot. Some random suggestions:
- Ask in a more friendly and open-ended way
- Tailor your question to the crowd's interests and biases
- Accept the tangents and try to spin them into other interesting conversations
- Find a different crowd to ask
↑ comment by Elo · 2015-07-28T00:03:54.055Z · LW(p) · GW(p)
Thanks!
I think I will have to expand my asky-circles (4 above)
I am concerned for 2 (above) because part of me has already done that ad infinitum. Most things I ask have gone around my head for days and then gone over in circles multiple times before completing them. I am no longer sure how to best do that.
↑ comment by [deleted] · 2015-07-27T21:27:28.598Z · LW(p) · GW(p)
Could you give two examples of questions you asked and the answers you got? Context matters a lot here.
Replies from: Elo↑ comment by Elo · 2015-07-27T23:50:12.758Z · LW(p) · GW(p)
Can I give you one for now:
- This question in Dagon's answer below.
Happy to share other examples but I believe that should be clear.
Replies from: None↑ comment by [deleted] · 2015-07-28T00:38:01.969Z · LW(p) · GW(p)
It seems like you want people to take your question at literal face value, instead of trying to solve the actual problem that caused you to ask the question.
Is that an accurate summary of your stance?
If yes, why do you want that? If no, what's a better summary of your reasoning for asking questions?
Replies from: Elo↑ comment by Elo · 2015-07-28T01:46:01.917Z · LW(p) · GW(p)
This is a reasonably good interpretation of the question. Yes.
Assuming a problem Px has happened to generate the question Qx: I have already processed from the problem state Px to the question state Qx. I have eliminated possible solutions (Sa, Sb, Sc) that I came up with, knowing why they won't work given the details of my entire situation. Px is a big problem space, and explaining the entire situation would be like writing the universe on paper (pointless and not helpful to anyone).
Or I decided that it is possible to work on a small part of Px with a question Qx. To ask the question Qx is to specify where in the problem space I am trying to work, and to start from there.
If I were to ask the problem question "why don't people understand me?", I alone can generate hundreds of excuses and reasons for it; that is unhelpful. I have already narrowed Px down to a Qx strategy for solving it. I wanted to ask the specific question, "getting people to answer the face-question", the one I chose to ask, not the entire problem set that it comes from.
If I asked the full problem-space at once, no one would read it because it would be too long, and then no one would respond.
At some point the responder should assume that the OP actually knows the Px that they are asking about and is asking Qx for a good reason (possibly a long one not worth explaining in detail). How can I make that point happen precisely when I ask the question and not 5 interactions later? I would have thought it would involve a caveat of "don't answer in this way because I considered Sa, Sb, Sc already" (i.e. my use of the phrase - Assuming X...).
Also important: is the process of "answering the wrong question" (as I am trying to describe it) able to be reasonably defined as responding to a strawman of a question?
Is asking a literal-face-value question a bad idea? If that's the question I want to be answered?
Replies from: None↑ comment by [deleted] · 2015-07-28T02:30:45.604Z · LW(p) · GW(p)
From my view, it's absolutely a great idea to ask literal-face-value questions. I think we approach the problem from different angles - you're looking to fill in specific holes in your knowledge or reasoning, by generating the perfect question to fill in that hole.
I think that's great when it happens, and I also try to remember that I'm dealing with messy, imperfect, biased, socially evolved humans with HUGE inferential gaps to my understanding of the problem. Given that, my model of getting help with a problem is not Ask great question - get great answer. It usually goes more like this:
- Bring up problem I'm having > they bring up solution I've already tried/discarded (or which isn't actually a solution to my specific problem) > I mention that > they mention some more > this goes back and forth for a while > they mention some new argument or data I hadn't considered > continue some more > at some point one of us is getting bored or we've hashed out everything > move on to another topic.
I find that with this approach, given that I'm asking the right people, I have a high probability of getting new approaches to my problems, altering my existing perspective, and coming closer to a solution. Using this way of approaching advice-getting many times gives me a much better understanding of the problem and its potential solutions, and allows me to cover the full problem space without overwhelming people or sounding like a know-it-all.
Coming at it from this angle, I think it's a great idea to start with a specific question, and still understand that I may move much closer to having my problem solved without ever coming close to answering the question as I asked it (although oftentimes, we circle back around to the original question at the end, and I hear a novel answer to it).
With that in mind, there are several things I do when asking advice that I think may be helpful to you (or may not be).
I try not to say "I already thought about that and..." too much, as it ends these conversations before we get to the good stuff. Instead, I ask leading questions that bring people to the same conclusion without me sounding like a know-it-all.
I remain open to the fact that there might be evidence or arguments I'm not aware of in my basic logic, and therefore remain curious even when we're covering territory that I think I've already covered ad nauseam.
I precommit to trying the best specific solution they offer that I haven't tried, even if I think it has a low probability of success.
I keep them updated on trying their suggested solutions, and express gratitude even if the suggestion doesn't work.
Over the long term, as these relationships build up more, the people you get advice from will get a better idea of how you think, and you're more likely to get the "specific answer to specific question" behavior, but even if you don't, you'll still get valuable feedback that can help you solve the problem.
comment by raydora · 2015-07-31T18:12:26.246Z · LW(p) · GW(p)
Has anyone read anything about Applied Information Economics?
Replies from: Manfred
comment by Username · 2015-07-29T20:19:25.502Z · LW(p) · GW(p)
NNAISENSE leverages the 25-year proven track record of one of the leading research teams in AI to build large-scale neural network solutions for superhuman perception and intelligent automation, with the ultimate goal of marketing general-purpose neural network-based Artificial Intelligences.
An AI startup created by Jurgen Schmidhuber.
comment by NancyLebovitz · 2015-08-01T17:54:46.913Z · LW(p) · GW(p)
Video: complete mapping of a tiny bit of mouse brain. The thing that was mentioned as a surprise is that neuron branches can be very close to each other and not connect.
Replies from: None
comment by tetronian2 · 2015-07-29T00:38:19.206Z · LW(p) · GW(p)
Possibly of local interest: Research on moral reasoning in intelligent agents by the Rensselaer AI and Reasoning Lab.
(I come from a machine learning background, and so I am predisposed to look down on the intelligent agents/cognitive modelling folks, but the project description in this press release just seems laughable. And if the goal of the research is to formalize moral reasoning, why the link to robotic/military systems, besides just to snatch up US military grants?)
Replies from: MrMind↑ comment by MrMind · 2015-07-29T07:50:08.741Z · LW(p) · GW(p)
I did not find the project so laughable. It's hopelessly outdated in the sense that logical calculus does not deal with incomplete information, and I suspect that they simply conflate "moral" with "utilitarian" or even just "decision theoretic".
Replies from: tetronian2↑ comment by tetronian2 · 2015-07-30T00:49:30.569Z · LW(p) · GW(p)
It appears they are going with some kind of modal logic, which also does not appear to deal with incomplete information. I also suspect "moral" will be conflated with "utilitarian" or "utilitarian plus a diff". But then there is this bit in the press release:
Bringsjord’s first step in designing ethically logical robots is translating moral theory into the language of logic and mathematics. A robot, or any machine, can only do tasks that can be expressed mathematically. With help from Rensselaer professor Mei Si, an expert in the computational modeling of emotions, the aim is to capture in “Vulcan” logic such emotions as vengefulness.
...which makes it sound like the utility function/moral framework will be even more ad hoc.
Replies from: MrMind
comment by [deleted] · 2015-07-30T10:38:06.253Z · LW(p) · GW(p)
I have a fear that I'll forget I have my Windows Live/Outlook calendar, or forget to use it. Any tips for getting over that? Same with the fact that I have a LW account. I get obsessive over email, calendar, OneDrive, FB and LW!
Replies from: Dagon, Username↑ comment by Dagon · 2015-07-30T17:25:05.230Z · LW(p) · GW(p)
Break down the problem, and identify your goals in dealing with it/them. Is your problem one or more of: 1) fear is unpleasant and you'd rather not experience it, regardless of any other experienced or behavioral differences? 2) there are consequences to not using an account? 3) there are consequences to trying to use an account when it's not necessary?
You might address the fear via therapy, medication, meditation, and/or introspection - though not in that order, please. Introspection should include trying to separate out the components of the fear, in order to decide how much to focus on your perception, and how much to focus on the second thing, behavior.
You might address the actual problems of too many accounts by having fewer - just don't sign up for things that aren't going to be a net positive value. Or consolidating into a PasswordSafe or other place to list the accounts, so you can see when you've last used something. Or simply a checklist of daily and weekly information to look at.
Replies from: None↑ comment by [deleted] · 2015-08-09T00:16:03.693Z · LW(p) · GW(p)
Break down the problem, and identify your goals in dealing with it/them. Is your problem one or more of: 1) fear is unpleasant and you'd rather not experience it, regardless of any other experienced or behavioral differences? 2) there are consequences to not using an account? 3) there are consequences to trying to use an account when it's not necessary?
I haven't forgotten about your comment, I'm actually just stopped by some of these suggested questions and have been mulling over them. Thanks by the way!
comment by [deleted] · 2015-07-30T00:46:20.871Z · LW(p) · GW(p)
Is Project Healthy Children's recommendation by EA orgs other than GiveWell a sign that nutritional interventions are neglected at GiveWell?
I can't find any recent research on the matter, despite their 2012 intent to reassess it as a priority area. I think it would allay my concerns that EA orgs neglect nutrition as a focus area, and potentially allay many lay fears, if more were disclosed about the evidence for nutritional interventions. Much of academic development studies focuses on issues like access to marketplaces and other agriculture-related things, and we may be able to convert people from those interest points by speaking to their existing knowledge.
In fact, there seems to be so much important comment in the EA space that gets mentioned and then not followed up on. It's really confusing. It's unclear what the current open problems are, or the relative priorities in the research agendas.
To illustrate that I'm not just claiming there is a general problem from one example: where are the updates on the CEA's views on GiveWell? I could go on, but it's easier just to go through the relevant blogs of EA orgs and track back through time. I'm not aware of my concern being raised elsewhere.
comment by redding · 2015-07-28T12:20:02.981Z · LW(p) · GW(p)
There are different levels of impossible.
Imagine a universe with an infinite number of identical rooms, each of which contains a single human. Each room is numbered outside: 1, 2, 3, ...
The probability of you being in the first 100 rooms is 0 - if you ever have to make an expected utility calculation, you shouldn't even consider that chance. On the other hand, it is definitely possible in the sense that some people are in those first 100 rooms.
If you consider the probability of you being in room Q, this probability is also 0. However, it (intuitively) feels "more" impossible.
I don't really think this line of thought leads anywhere interesting, but it definitely violated my intuitions.
Replies from: pragmatist, Toggle, Richard_Kennaway, RolfAndreassen, shminux, MrMind↑ comment by pragmatist · 2015-07-29T05:00:07.126Z · LW(p) · GW(p)
There is no such thing as a uniform probability distribution over a countably infinite event space (see Toggle's comment). The distribution you're assuming in your example doesn't exist.
Maybe a better example for your purposes would be picking a random real number between 0 and 1 (this does correspond to a possible distribution, assuming the axiom of choice is true). The probability of the number being rational is 0, the probability of it being greater than 2 is also 0, yet the latter seems "more impossible" than the former.
Of course, this assumes that "probability 0" entails "impossible". I don't think it does. The probability of picking a rational number may be 0, but it doesn't seem impossible. And then there's the issue of whether the experiment itself is possible. You certainly couldn't construct an algorithm to perform it.
Replies from: Sarunas, redding↑ comment by Sarunas · 2015-07-29T13:51:55.937Z · LW(p) · GW(p)
Of course, this assumes that "probability 0" entails "impossible". I don't think it does. The probability of picking a rational number may be 0, but it doesn't seem impossible.
Given an uncountable sample space, P(A)=0 does not necessarily imply that A is impossible. A is impossible iff A is empty, i.e. contains no outcomes of the sample space.
Intuitively speaking, one could say that P(A)=0 means that A resembles "a miracle" in a sense that if we perform n independent experiments, we still cannot increase the probability that A will happen at least once even if we increase n. Whereas if P(B)>0, then by increasing number of independent experiments n we can make probability of B happening at least once approach 1.
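In symbols, just restating the above for independent repetitions:

```latex
P(\text{A at least once in } n \text{ trials}) = 1 - (1 - P(A))^{n} = 1 - (1 - 0)^{n} = 0 \quad \text{for every } n
P(\text{B at least once in } n \text{ trials}) = 1 - (1 - P(B))^{n} \to 1 \quad \text{as } n \to \infty \text{ whenever } P(B) > 0
```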
↑ comment by redding · 2015-07-29T12:52:48.308Z · LW(p) · GW(p)
I (now) understand the problem with using a uniform probability distribution over a countably infinite event space. However, I'm kind of confused when you say that the example doesn't exist. Surely it's not logically impossible for such an infinite universe to exist. Do you mean that probability theory isn't expressive enough to describe it?
Replies from: pragmatist↑ comment by pragmatist · 2015-07-29T15:03:54.189Z · LW(p) · GW(p)
When I say the probability distribution doesn't exist, I'm not talking about the possibility of the world you described. I'm talking about the coherence of the belief state you described. When you say "The probability of you being in the first 100 rooms is 0", it's a claim about a belief state, not about the mind-independent world. The world just has a bunch of rooms with people in them. A probability distribution isn't an additional piece of ontological furniture.
If you buy the Cox/Jaynes argument that your beliefs must adhere to the probability calculus to be rationally coherent, then assigning probability 0 to being in any particular room is not a coherent set of beliefs. I wouldn't say this is a case of probability theory not being "expressive enough". Maybe you want to argue that the particular belief state you described ("Being in any room is equally likely") is clearly rational, in which case you would be rejecting the idea that adherence to the Kolmogorov axioms is a criterion for rationality. But do you think it is clearly rational? On what grounds?
(Incidentally, I actually do think there are issues with the LW orthodoxy that probability theory limns rationality, but that's a discussion for another day.)
Replies from: redding↑ comment by redding · 2015-07-29T21:27:26.669Z · LW(p) · GW(p)
From a decision-theory perspective, I should essentially just ignore the possibility that I'm in the first 100 rooms - right?
Similarly, suppose I'm born in a universe with infinitely many such rooms and someone tells me to guess whether my room is a multiple of 10 or not. If I guess correctly, I get a dollar; otherwise I lose a dollar.
Theoretically there are as many multiples of 10 as not (both being equinumerous to the integers), but if we define rationality as the "art of winning", then shouldn't I guess "not a multiple of 10"? I admit that my intuition may be broken here - maybe it just truly doesn't matter which you guess - after all, it's not like we can sample a bunch of people born into this world without some sampling function. However, doesn't the question still remain: what would a rational being do?
Replies from: pragmatist↑ comment by pragmatist · 2015-07-30T09:56:54.122Z · LW(p) · GW(p)
From a decision-theory perspective, I should essentially just ignore the possibility that I'm in the first 100 rooms - right?
Well, what do you mean by "essentially ignore"? If you're asking whether I should assign no substantial credence to the possibility, then yeah, I'd agree. If you're asking whether I should assign literally zero credence to the possibility, so that there are no possible odds, no matter how ridiculously skewed, at which I would accept a bet that I am in one of those rooms... well, now I'm no longer sure. I don't exactly know how to go about setting my credences in the world you describe, but I'm pretty sure assigning 0 probability to every single room isn't it.
Consider this: Let's say you're born in this universe. A short while after you're born, you discover a note in your room saying, "This is room number 37". Do you believe you should update your belief set to favor the hypothesis that you're in room 37 over any other number? If you do, it implies that your prior for the belief that you're in one of the first 100 rooms could not have been 0.
(But, on the other hand, if you think you should update in favor of being in room x when you encounter a note saying "You are in room x", no matter what the value of x, then you aren't probabilistically coherent. So ultimately, I don't think intuition-mongering is very helpful in these exotic scenarios. Consider my room 37 example as an attempt to deconstruct your initial intuition, rather than as an attempt to replace it with some other intuition.)
Theoretically there are as many multiples of 10 as not (both being equinumerous to the integers), but if we define rationality as the "art of winning", then shouldn't I guess "not in a multiple of 10"?
Perhaps, but reproducing this result doesn't require that we consider every room equally likely. For instance, a distribution that attaches a probability of 2^(-n) to being in room n will also tell you to guess that you're not in a multiple of 10. And it has the added advantage of being a possible distribution. It has the apparent disadvantage of arbitrarily privileging smaller numbered rooms, but in the kind of situation you describe, some such arbitrary privileging is unavoidable if you want your beliefs to respect the Kolmogorov axioms.
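Concretely, under that distribution the "multiple of 10" event only gets

```latex
P(\text{room is a multiple of } 10) = \sum_{k=1}^{\infty} 2^{-10k} = \frac{2^{-10}}{1 - 2^{-10}} = \frac{1}{1023} \approx 0.001,
```

so you would still bet heavily on "not a multiple of 10", but now with a well-defined distribution behind the bet.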
Replies from: redding↑ comment by redding · 2015-07-30T11:53:31.342Z · LW(p) · GW(p)
What I mean by "essentially ignore" is that if you are (for instance) offered the following bet you would probably accept: "If you are in the first 100 rooms, I kill you. Otherwise, I give you a penny."
I see your point regarding the fact that updating using Bayes' theorem implies your prior wasn't 0 to begin with.
I guess my question is now whether there are any extended versions of probability theory. For instance, Kolmogorov probability reverts to Aristotelian logic at the extremes P=1 and P=0. Is there a system of thought that reverts to probability theory for finite worlds but is able to handle infinite worlds without privileging certain (small) numbers?
I will admit that I'm not even sure that guessing "not a multiple of 10" follows the art of winning, as you can't sample from an infinite set of rooms in traditional probability/statistics without some kind of sampling function that biases certain numbers. At best we can say that for whatever finite number of rooms N you choose, the best strategy is to pick "not a multiple of 10". By induction we can prove that guessing "not a multiple of 10" is the better guess for any finite number of rooms, but alas, infinity remains beyond this.
↑ comment by Toggle · 2015-07-28T14:03:51.793Z · LW(p) · GW(p)
Your math has some problems. Note that, if p(X=x) = 0 for all x, then the sum over X is also zero. But if you're in a room, then by definition you have sampled from the set of rooms- the probability of selecting a room is one. Since the probability of selecting 'any room from the set of rooms' is both zero and one, we have established a contradiction, so the problem is ill-posed.
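Spelled out, countable additivity forces the contradiction:

```latex
1 = P\left(\bigcup_{n=1}^{\infty} \{\text{room } n\}\right) = \sum_{n=1}^{\infty} P(\text{room } n) = \sum_{n=1}^{\infty} 0 = 0.
```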
↑ comment by Richard_Kennaway · 2015-07-29T10:19:31.280Z · LW(p) · GW(p)
As others have pointed out, there is no uniform probability distribution on a countable set. There are various generalisations of probability that drop or weaken the axiom of countable additivity, which have their uses, but one statistician's conclusion is that you lose too many useful properties. On the other hand, writing a blog post to describe something as a lost cause suggests that it still has adherents. Googling /"finite additivity" probability/ turns up various attempts to drop countable additivity.
Another way of avoiding the axiom is to reject all infinities. There are then no countable sets to be countably additive over. This throws out almost all of current mathematics, and has attracted few believers.
In some computations involving probabilities, the axiom that the measure over the whole space is 1 plays no role. A notable example is the calculation of posterior probabilities from priors and data by Bayes' Theorem:
Posterior(H|D) = P(D|H) Prior(H) / Sum_H' ( P(D|H') Prior(H') )
(H, H' = hypothesis, D = data.)
The total measure of the prior cancels out of the numerator and denominator. This allows the use of "improper" priors that can have an infinite total measure, such as the one that assigns measure 1 to every integer and infinite measure to the set of all integers.
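Here is a toy sketch of that cancellation (an entirely made-up example: the two-sided geometric noise model and the data are assumptions chosen only to keep it small). The "prior" puts weight 1 on every integer, its infinite total measure never needs to be computed, and the posterior comes out proper because the likelihood decays:

```python
from math import prod

def likelihood(x, mu):
    # Hypothetical noise model: P(x | mu) = (1/3) * (1/2)**|x - mu|
    return (1 / 3) * 0.5 ** abs(x - mu)

data = [37, 39, 36]     # made-up observations
prior_weight = 1.0      # improper "uniform" prior: weight 1 on every integer mu

# The likelihood decays fast enough that the posterior is proper; we truncate
# the infinite sum over mu to a window holding essentially all of the mass.
mus = range(-1000, 1001)
unnormalized = {mu: prior_weight * prod(likelihood(x, mu) for x in data) for mu in mus}
Z = sum(unnormalized.values())
posterior = {mu: w / Z for mu, w in unnormalized.items()}
print(max(posterior, key=posterior.get))   # 37, the posterior mode
```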
There can be a uniform probability distribution over an uncountable set, because there is no requirement for a probability distribution to be uncountably additive. Every sample drawn from the uniform distribution over the unit interval has a probability 0 of being drawn. This is just one of those things that one comes to understand by getting used to it, like square roots of -1, 0.999...=1, non-euclidean geometry, and so on.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-07-30T22:14:50.530Z · LW(p) · GW(p)
As I recall, Teddy Seidenfeld is a fan of finite additivity. He does decision theory work, also.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-07-30T23:01:57.975Z · LW(p) · GW(p)
As I recall, Teddy Seidenfeld is a fan of finite additivity.
Do you know why?
The recent thread on optional stopping and Bayes led me to this paper, co-authored by Seidenfeld, which argues that countable additivity has bad consequences. But these consequences are a result of improper handling of limits, as Jaynes sets forth in his chapter 15. Seidenfeld and his coauthors go to great lengths (also here) exploring the negative consequences of finite additivity for Bayesian reasoning. They see this as a problem for Bayesian reasoning rather than for finite additivity. But I have not seen a statement of their motivation for preferring finite additivity.
If you're going to do probability on infinite spaces at all, finite additivity just seems to me to be an obviously wrong concept.
ETA: Here's another paper by Seidenfeld, whose title does rather suggest that it is going to argue against finite additivity, but whose closing words decline to resolve the matter.
↑ comment by RolfAndreassen · 2015-07-29T02:15:28.979Z · LW(p) · GW(p)
I opine that you are equivocating between "tends to zero as N tends to infinity" and "is zero". This is usually a very bad idea.
↑ comment by Shmi (shminux) · 2015-07-28T14:28:37.319Z · LW(p) · GW(p)
Measure theory is a tricky subject. Also consider https://twitter.com/ZachWeiner/status/625711339520954368 .
Replies from: redding↑ comment by MrMind · 2015-07-29T07:53:13.021Z · LW(p) · GW(p)
This is an old problem in probability theory, and there are different solutions.
Probability theory is developed first for finite models, so it's natural that its extension to infinite models can be done in a few different ways.
Replies from: reddingcomment by Username · 2015-07-27T19:32:40.647Z · LW(p) · GW(p)
One more difference between statistics and [machine learning, data science, etc.]: a blog post about the differences between statistics and data science.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T19:42:16.530Z · LW(p) · GW(p)
I belong to the "data science is just the new cool name for statistics" camp :-) I think the blog post linked in the parent confuses what statistics is with how statistics is typically taught. As a consequence it sets up a very narrow idea of statistics and then easily shows that "data science" is wider than narrow and limited "statistics".
comment by [deleted] · 2015-07-27T18:03:56.807Z · LW(p) · GW(p)
Great article arguing that the singularity is far:
https://timdettmers.wordpress.com/2015/07/27/brain-vs-deep-learning-singularity/
Here is the corresponding thread on /r/machinelearning:
Replies from: jacob_cannell↑ comment by jacob_cannell · 2015-07-27T18:37:08.252Z · LW(p) · GW(p)
Ridiculously terrible article - lots of unsupportable assertions without any evidence. He doesn't seem to have any knowledge of the actual constraint space on circuit design - thermodynamic, area, latency, etc.
See my longer reply comment here.
He uses a 200 Hz firing rate, when neurons actually fire at < 1 Hz on average. He claims the cerebellum has more compute power than the cortex, which is pretty ridiculous - given that the cortex has far more synapses and more volume, and the fact that the brain is reasonably efficient. He doesn't understand that most of the energy usage is in wire dissipation, not switching. His estimates are thus off by many orders of magnitude. The article is not worth reading.
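(For what it's worth, a rough back-of-the-envelope illustration of how much the firing-rate assumption alone moves such estimates; the neuron and synapse counts below are commonly cited ballpark figures, not numbers from the article:)

```python
# Ballpark figures (assumptions, order-of-magnitude only).
neurons = 8.6e10              # ~86 billion neurons
synapses_per_neuron = 1e4     # rough average
synapses = neurons * synapses_per_neuron

for rate_hz in [200, 1]:
    synaptic_events_per_s = synapses * rate_hz
    print(rate_hz, f"{synaptic_events_per_s:.2e} synaptic events/s")
# 200 Hz -> ~1.7e17, 1 Hz -> ~8.6e14: the assumed rate alone shifts the
# estimate by more than two orders of magnitude.
```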
comment by snarles · 2015-07-27T14:49:11.595Z · LW(p) · GW(p)
Disclaimer: I am lazy and could have done more research myself.
I'm looking for work on what I call "realist decision theory." (A loaded term, admittedly.) To explain realist decision theory, contrast with naive decision theory. My explanation is brief since my main objective at this point is fishing for answers rather than presenting my ideas.
Naive Decision Theory
1. Assumes that individuals make decisions individually, without need for group coordination.
2. Assumes individuals are perfect consequentialists: their utility function is only a function of the final outcome.
3. Assumes that individuals have utility functions which do not change with time or experience.
4. Assumes that the experience of learning new information has neutral or positive utility.
Hence a naive decision protocol might be:
1. A person decides whether to take action A or action B.
2. An oracle tells the person the possible scenarios that could result from action A or action B, with probability weightings.
3. The person subconsciously assigns a utility to each scenario. This utility function is fixed. The person chooses the action A or B based on which action maximizes expected utility.
As a consequence of the above assumptions, the person's decision is the same regardless of the order of presentation of the different actions.
Note: we assume physical determinism, so the person's decision is even known in advance to the oracle. But we suppose the oracle can perfectly forecast counterfactuals; to emphasize this point, we might call it a "counterfactual oracle" from now on.
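(A minimal sketch of this protocol, with placeholder numbers and a placeholder utility function, just to pin down what "naive" means here:)

```python
import random

# The oracle returns probability-weighted outcomes per action; a fixed utility
# function scores outcomes; the agent picks the action with the highest
# expected utility. Presentation order never changes the answer.
oracle = {
    "A": [(0.7, "outcome_1"), (0.3, "outcome_2")],   # (probability, outcome)
    "B": [(0.5, "outcome_2"), (0.5, "outcome_3")],
}
utility = {"outcome_1": 10.0, "outcome_2": 2.0, "outcome_3": 5.0}   # fixed

def expected_utility(action):
    return sum(p * utility[outcome] for p, outcome in oracle[action])

def naive_decision(actions):
    return max(actions, key=expected_utility)

print(naive_decision(["A", "B"]))                    # "A" (EU 7.6 vs 3.5)
print(naive_decision(random.sample(["A", "B"], 2)))  # still "A"; order is irrelevant
```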
It should be no surprise that the above model of utility is extremely unrealistic. I am aware of experiments demonstrating non-transitivity of utility, for instance. Realist decision theory contrasts with naive decision theory in several ways.
Realist Decision Theory
1. Acknowledges that decisions are not made individually but jointly with others.
2. Acknowledges that in a group context, actions have a utility in and of themselves (signalling) separate from the utility of the resulting scenarios.
3. Acknowledges that an individual's utility function changes with experience.
4. Acknowledges that learning new information constitutes a form of experience, which may itself have positive or negative utility.
Relaxing any one of the four assumptions radically complicates the decision theory. Consider relaxing only conditions 1 and 2: then game theory becomes required. Consider relaxing only 3 and 4, so that for all purposes only one individual exists in the world: then points 3 and 4 mean that the order in which a counterfactual oracle presents the relevant information to the individual affects the individual's final decision. Furthermore, an ethically implemented decision procedure would allow the individual to choose which pieces of information to learn. Therefore there is no guarantee that the individual will even end up learning all the information relevant to the decision, even if time is not a limitation.
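(To make the order-dependence concrete, here is a toy sketch under a made-up rule in the spirit of relaxed assumption 3: information learned later gets less weight, as if experience dulls the agent's responsiveness. The facts and numbers are placeholders.)

```python
# Each fact bears on action A; B is a do-nothing baseline worth 0.
facts = {
    "A_helps_career": ("A", +3.0),
    "A_harms_health": ("A", -4.0),
}

def decide(order):
    scores = {"A": 0.0, "B": 0.0}
    for k, fact in enumerate(order, start=1):
        action, value = facts[fact]
        scores[action] += value / k   # later information counts for less
    return max(scores, key=scores.get)

print(decide(["A_helps_career", "A_harms_health"]))   # "A"  (3 - 4/2 = +1)
print(decide(["A_harms_health", "A_helps_career"]))   # "B"  (-4 + 3/2 = -2.5)
```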
It would be great to know which papers have considered relaxing the assumptions of a "naive" decision theory in the way I have outlined.
Replies from: Stingray, Dagon↑ comment by Stingray · 2015-07-27T18:57:54.615Z · LW(p) · GW(p)
Acknowledges that in a group context, actions have a utility in and of themselves (signalling) separate from the utility of the resulting scenarios.
Why do people even signal anything? To get something for themselves from others. Why would signaling be outside the scope of consequentialism?
Replies from: snarles↑ comment by snarles · 2015-07-28T13:35:11.355Z · LW(p) · GW(p)
Ordinarily, yes, but you could imagine scenarios where agents have the option to erase their own memories or essentially commit group suicide. (I don't believe these kinds of scenarios are extreme beyond belief--they could come up in transhuman contexts.) In this case nobody even remembers which action you chose, so there is no extrinsic motivation for signalling.
↑ comment by Dagon · 2015-07-27T15:26:33.867Z · LW(p) · GW(p)
Unpack #1 a bit.
Are you looking for information about situations where an individual's decisions should include predicted decisions by others (which will in turn take into account the individual's decisions)? The [Game Theory Sequence](http://lesswrong.com/lw/dbe/introduction_to_game_theory_sequence_guide/) is a good starting point.
Or are you looking for cases where "individual" is literally not the decision-making unit? I don't have any good less-wrong links, but both [Public Choice Theory](http://lesswrong.com/lw/2hv/public_choice_and_the_altruists_burden/) and the idea of sub-personal decision modules come up occasionally.
Both topics fit into the overall framework of classical decision theory (naive or not, you decide) and expected value.
Items 2-4 don't contradict classical decision theory, but fall somewhat outside of it. Decision theory generally looks at instrumental rationality - how best to get what one wants - rather than questions of what to want.
Replies from: snarles↑ comment by snarles · 2015-07-28T13:42:03.016Z · LW(p) · GW(p)
Thanks for the references.
I am interested in answering questions of "what to want." Not only is it important for individual decision-making, but there are also many interesting ethical questions. If a person's utility function can be changed through experience, is it ethical to steer it in a direction that would benefit you? Take the example of religion: suppose you could convince an individual to convert to a religion, and then further convince them to actively reject new information that would endanger their faith. Is this ethical? (My opinion is that it depends on your own motivations. If you actually believed in the religion, then you might be convinced that you are benefiting others by converting them. If you did not actually believe in the religion, then you are being manipulative.)
Replies from: Dagon↑ comment by Dagon · 2015-07-28T16:23:54.222Z · LW(p) · GW(p)
Cool. The [Metaethics Sequence](http://wiki.lesswrong.com/wiki/Metaethics_sequence) is useful for some of those things.
I have to admit that, for myself, I remain unconvinced that there is an objective truth to be had regarding "what should I want". Partly because I'm unconvinced that "I" is a coherent unitary thing at any given timepoint, let alone over time. And partly because I don't see how to distinguish "preferences" from "tendencies" without resorting to unmeasurable guesses about qualia and consciousness.
comment by Gunnar_Zarncke · 2015-08-02T19:22:26.376Z · LW(p) · GW(p)
Is it just me, or has LW participation increased in the last few months?
Replies from: shminux, Username↑ comment by Shmi (shminux) · 2015-08-03T02:02:02.686Z · LW(p) · GW(p)
Alexa says that the ranking has been dropping in the last couple of months, so not very likely.
comment by jacob_cannell · 2015-07-31T20:03:52.234Z · LW(p) · GW(p)
Meta: I'm having trouble figuring out how to get polls to work in posts. I'd like to create a simple thread with a poll concerning some common predictions about the future of AI/AGI.
I've tried the syntax from the wiki, but it only seems to work in comments. Is this intended? Is there a simple way to get a poll into a post itself?
Replies from: Elocomment by [deleted] · 2015-07-31T12:46:09.295Z · LW(p) · GW(p)
International development has a burgeoning prize market. SSC or OB suggests setting prizes, instead of donating, as an incentive. I'm wondering to what extent different members of our community would recommend that local government councilors advocate for prizes as an alternative to grants?
On a semi-related note, markets may be somewhat efficient at the transactional level, but they are inefficient with regard to my demands for others' transactions. Less development in the world means less technological advancement and less chance I'll get some cool future tech. Empirically, market-based solutions in development contexts don't reliably supply according to need. Experimental results reveal that in most cases, user fees neither improved nor worsened targeting among those who obtained the product, but they did reduce the fraction of individuals in need who got the product.
While I'm discussing the importance of empirical evidence and over-reliance on conceptual models: I'm increasingly concerned by the EA foray into politics. EAs are making tremendous assumptions about the way public policy works. For instance, the new public-policy-writing think tank the dothack crew are starting seems very naive and based on a chain of tenuous assumptions about impact. I'll start with an obvious assumption that's wrong. Policy briefs don't actually influence anyone involved in policy, because those people already have opinions. They're useful for influencing uninformed men (women don't care, evidently) who self-rate themselves as influential - so basically, self-aggrandising members of the public. I suspect many of the dotimpact crew are just following GiveWell's lead into public policy. There's a reason GiveWell does things the way they do and not other ways - you can read about it in their blog. Soon, presented with the right opportunity, ideally from some collaborators reading this who are inspired, I'd like to start a coordinated effort to
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-07T13:44:49.893Z · LW(p) · GW(p)
Policy briefs don't actually influence anyone involved in policy, because those people already have opinions
Not everybody involved in policy has their opinions already formed. When a new issue comes up, politicians have to form an opinion about it and often don't have a premade one.
The point of lobbying often isn't to convince a politician but to provide the politician with arguments to back up a position he already holds. It's also to help with writing the actual law. As far as I know, the Global Priorities Project succeeded in getting an amendment that they wrote into actual law.
comment by knb · 2015-07-30T20:54:45.278Z · LW(p) · GW(p)
Scott Sumner describes the Even Greater Stagnation. It's interesting to try to square the reality of very slow growth in developed economies with the widespread notion that we are living in a time of rapid technological change. My intuition is that there really is a lot happening in science and technology, but a combination of supply side and demand side problems are preventing new discoveries and technologies from becoming marketable products. It's also probably true that overall scientific and technological progress is slower than it used to be (hard to measure objectively, I think.)
Replies from: satt↑ comment by satt · 2015-07-31T04:52:52.690Z · LW(p) · GW(p)
It's interesting to try to square the reality of very slow growth in developed economies with the widespread notion that we are living in a time of rapid technological change.
Also interesting is that there's some precedent for this: British GDP growth (table 21 of this) was similarly anaemic during the (first) Industrial Revolution. The average growth rate didn't even hit 2% until around the 1830s!
comment by chaosmage · 2015-07-30T18:02:11.407Z · LW(p) · GW(p)
Is there a subreddit or some other place where I can describe ideas for products or services, explicitly forfeit any rights to them, and, if they are actually as good as I imagine (maybe other users could help rate them, or say how much they'd be worth to them), have a chance that someone with the resources to do so will actually implement one or another?
Replies from: Stingray, Lumifer