Open thread for December 17-23, 2013
post by Paul Crowley (ciphergoth) · 2013-12-17T20:45:00.004Z · LW · GW · Legacy · 306 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
306 comments
comment by calef · 2013-12-19T19:31:11.504Z · LW(p) · GW(p)
A full half (20/40) of the posts currently under discussion are meetup threads.
Can we please segregate these threads to another forum tab (in the vein of the Main/Discussion split)?
Edit: And only 5 or so of them actually have any comments in them.
Replies from: Vaniver, passive_fist, Emile↑ comment by Vaniver · 2013-12-20T19:52:10.535Z · LW(p) · GW(p)
I might as well point out my solution: I've set the date of the Austin meetup to be six years from now, and I edit the date each week. It stays on the map, it stays on the sidebar (so I remember to edit the date; if this were automatic, it could be correct), and it stays out of Discussion.
↑ comment by passive_fist · 2013-12-19T21:36:45.177Z · LW(p) · GW(p)
This issue has been brought up many times, and I agree that it's a major problem. The solution I suggested was to gather all the meetup announcements into a single weekly meetup thread, with all the city names in the title. This could be done either automatically with a little bit of coding, or by having someone do the coordination by hand. I even volunteered to be the one doing the coordinating, but no one seemed interested in actually agreeing to it. I still stand by my suggestion, and I'm still willing to do the coordinating if it's adopted.
Replies from: ThisSpaceAvailable, Douglas_Knight↑ comment by ThisSpaceAvailable · 2013-12-22T08:59:18.004Z · LW(p) · GW(p)
It seems to me that the forum format is ill suited to the subject matter in the first place. Unless the intent is to use the discussion forum to raise awareness of the meetups, a website specifically designed for meetups would make more sense, especially since most people aren't going to have much interest in a meetup more than, say, 100 miles from where they live. If people really want to be notified about meetups without having to go to a separate website, couldn't that be accomplished through an RSS feed or some similar solution? Granted, that would take more effort from individual LWers, but I have doubts about how much clutter should be accepted simply to make becoming aware of every meetup as effortless as possible.
Is there a way on my end to tell my computer to not include any post that includes "Meetup" in the title?
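[In case it's useful: a minimal client-side sketch of that kind of filter, assuming Python 3's standard library only. The feed URL is a placeholder, not the site's real address; substitute the actual discussion-section RSS feed.]

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL: substitute the actual discussion-section RSS feed here.
FEED_URL = "https://example.com/discussion/.rss"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for item in tree.iter("item"):
    title = item.findtext("title") or ""
    if "meetup" in title.lower():
        continue  # drop meetup announcements
    print(title, "-", item.findtext("link"))
```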
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-12-23T08:17:06.437Z · LW(p) · GW(p)
I think it works to have meet-ups on this site rather than in a separate blog, but they shouldn't be separate posts in discussion.
↑ comment by Douglas_Knight · 2013-12-19T21:59:48.634Z · LW(p) · GW(p)
Frank Adamek has done what you suggest for years. That you don't notice it being done is a pretty bad sign about the idea. If you want to contribute, you should be trying to get people to use his system, rather than trying to introduce a new system. Or maybe you should suggest modifications. But the first step is knowing the current system.
Replies from: passive_fist↑ comment by passive_fist · 2013-12-19T22:02:36.888Z · LW(p) · GW(p)
I'm aware of Frank's posts to main. It came up during the last discussion about this idea. What I am suggesting is to remove the individual meetup threads from discussion, to clear up the clutter. In addition, the meetup cities would be right up there in the title (to respond to objections that having a single thread would result in reduced visibility). Instead of everyone submitting to discussion and then someone gathering up everything in main, everyone would simply submit to the person doing the coordinating. The reason I proposed myself as a volunteer was that I didn't know if Frank would be willing to do this, given that it would require daily correspondence with the people organizing the meetups.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-12-20T16:50:33.291Z · LW(p) · GW(p)
I don't know how typical I am, but I check Discussion a lot more often than Main.
Replies from: Baughn, passive_fist↑ comment by Baughn · 2013-12-21T23:10:23.664Z · LW(p) · GW(p)
That's because Discussion has a lot more activity, right?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-12-22T00:20:56.137Z · LW(p) · GW(p)
I'm not sure how much of the difference is more activity. It feels like a higher proportion of the posts there interest me, but that could just be because interesting posts show up more frequently.
↑ comment by passive_fist · 2013-12-20T19:27:34.724Z · LW(p) · GW(p)
Yes, I think the proposed aggregated meetup threads should be in Discussion.
↑ comment by Emile · 2013-12-19T19:59:51.268Z · LW(p) · GW(p)
... and of those five, for two of them the comments consist of me complaining that the meetup location hasn't been included in the title.
That being said, personally I don't mind the meetup posts that much, and I'm not sure that moving them to their own section would be an improvement. I find it pretty likely that nobody would ever look there.
Replies from: Anatoly_Vorobey, RolfAndreassen↑ comment by Anatoly_Vorobey · 2013-12-20T08:04:43.039Z · LW(p) · GW(p)
Next iteration: meetup announcements occupy their own tab, and the top of Discussion starts with an "ad" line about recent announcements, in a bright color or otherwise distinguished: "Recent meetup announcements: Moscow, Tel-Aviv, Boulder, London", where every city is a link.
↑ comment by RolfAndreassen · 2013-12-19T21:10:11.007Z · LW(p) · GW(p)
I find it pretty likely that nobody would ever look there.
If true, what should we infer about the policy of having them cluttering up Discussion?
Replies from: Emile↑ comment by Emile · 2013-12-19T21:45:11.388Z · LW(p) · GW(p)
That policy forces everybody to see the meetup announcements, and thus probably increases meetup attendance (and knowing your announcement will have a wide (forced) audience encourages people to create meetups).
Replies from: Lumifer↑ comment by Lumifer · 2013-12-19T22:02:53.222Z · LW(p) · GW(p)
That policy forces everybody to see the meetup announcements
No, it doesn't. Partially because of the meetup clutter I don't look at the posts page at all and just go straight into comments.
And what is the cost-benefit analysis for forcing everyone to read about meetups all over the globe?
comment by JoshuaZ · 2013-12-19T05:08:24.463Z · LW(p) · GW(p)
Recent work shows that it is possible to use acoustic data to break public-key encryption systems. Essentially, if an attacker can get the target machine to decrypt chosen ciphertexts, the sounds the CPU makes during decryption can reveal information about the key. The attack was successfully demonstrated against 4096-bit RSA encryption. While some versions of the attack require high-quality microphones, some versions apparently succeeded using just mobile phones.
Aside from the general-interest issues, this is one more example of how a supposedly boxed AI might be able to send detailed information to the outside. In particular, acoustic channels can carry surprisingly high bandwidth, even unintentionally.
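[To make the bandwidth point concrete, here is a toy illustration, not the attack from the paper: a minimal sketch, assuming only the Python standard library, of how little code it takes to push bits over an acoustic channel by mapping them to tones and writing the result to a WAV file.]

```python
import math
import struct
import wave

RATE = 44100                # samples per second
BIT_SECONDS = 0.1           # 10 bits per second: a deliberately slow toy channel
FREQS = {0: 1000, 1: 2000}  # tone frequency (Hz) for each bit value

def encode(bits, path="covert.wav"):
    """Encode a bit sequence as alternating audio tones and write a WAV file."""
    frames = bytearray()
    for bit in bits:
        freq = FREQS[bit]
        for n in range(int(RATE * BIT_SECONDS)):
            sample = int(0.5 * 32767 * math.sin(2 * math.pi * freq * n / RATE))
            frames += struct.pack("<h", sample)  # 16-bit little-endian PCM
    w = wave.open(path, "wb")
    w.setnchannels(1)
    w.setsampwidth(2)   # 2 bytes = 16-bit samples
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
    w.close()

# Eight bits take under a second of audio even at this very conservative rate.
encode([1, 0, 1, 1, 0, 0, 1, 0])
```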
Replies from: army1987, RolfAndreassen↑ comment by A1987dM (army1987) · 2013-12-19T07:43:51.353Z · LW(p) · GW(p)
Now that's creepy.
↑ comment by RolfAndreassen · 2013-12-20T03:45:53.833Z · LW(p) · GW(p)
Eh... if an attacker has the level of physical access to the CPU that's required to plant a microphone, you have worse problems than acoustic attacks.
Replies from: Pentashagon, JoshuaZ, ChristianKl↑ comment by Pentashagon · 2013-12-20T04:52:02.987Z · LW(p) · GW(p)
For personal devices the attacker may have access to the microphone inside the device via flash/java/javascript/an app, etc.
Replies from: Lumifer↑ comment by Lumifer · 2013-12-20T05:46:58.777Z · LW(p) · GW(p)
If the attacker can run code on your device, a keylogger is a much simpler solution.
Replies from: Pentashagon, Baughn↑ comment by Pentashagon · 2013-12-20T06:06:19.756Z · LW(p) · GW(p)
I think it is probably simpler to enable the microphone from a web or mobile application than to install a keylogger in the OS. But then if you consider acoustic keyloggers...
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-20T15:21:56.117Z · LW(p) · GW(p)
With an acoustic keylogger you could scoop up my KeePass master password, but not the actual passwords that I use to log into websites.
↑ comment by JoshuaZ · 2013-12-20T04:09:47.636Z · LW(p) · GW(p)
That might have been true a few years ago, but they point out that that's not as true anymore. For example, they suggest one practical application of this technique might be to put your own server in a colocation facility, stick a microphone in it and slurp up as many keys as you can. They also were able to get a version of the technique to work 4 meters away, which is far enough that this becomes somewhat different from having direct physical access. They also point out that laser microphones could also be used with this method.
↑ comment by ChristianKl · 2013-12-20T15:28:47.630Z · LW(p) · GW(p)
In the example they used a mobile phone. Going from having compromised a microphone to actually recovering a key from a nearby computer is a significant step.
Additionally, there are other ways to get audio access. Heating pipes conduct sound waves, if the attacker has a good microphone.
Window glass vibrates in a way that can be detected from a distance.
comment by Anatoly_Vorobey · 2013-12-18T00:10:30.290Z · LW(p) · GW(p)
Yesterday I noticed a mistake in my reasoning that seems to be due to a cognitive bias, and I wonder how widespread or studied it is, or if it has a name - I can't think of an obvious candidate.
I was leaving work, and I entered the parking elevator in the lobby and pressed the button for floor -4. Three people entered after me - call them A, B and C - but because I hadn't yet turned around to face the door, as elevator etiquette requires, I didn't see which one of them pressed which button. As I turned around and the doors started to close, I saw that -2 and -3 were lit in addition to my -4. So, three floors and four people, means two people will come out on one of the floors, and I wondered which one it'll be.
The elevator stopped at floor -2. A and B got out. Well, I thought, so C is headed for -3, and I for -4 alone. As the doors were closing, B rushed back and squeezed through them. I realized she didn't want -2, and went out of the elevator absent-mindedly. I wondered which floor she did want. The elevator went down to -3. The doors opened and B got out... and then something weird happened: C didn't. I was surprised. Something wasn't right in my idle deductions. I figured it out in the few seconds it took for the elevator to descend to my floor and let me out together with C.
Where did I go wrong? When I knew that B left on -2, I deduced, correctly, that C will get out on -3. But then B came back; the fact of her leaving on -2 turned out to be wrong; yet I didn't cancel my deduction about C and didn't return him the "freedom" of leaving either on -3 or on -4. It didn't even occur to me to do that. Why didn't it?
It seems important that the new information was a correction of a known fact, and not just some other fact. If I treat the new information "B does not leave at -2" purely as a fact, the consequence for C is "C may leave either on -3 or on -4", which is already clear as it is and not worth updating. No, it seems "B does not leave at -2" has a special character when it comes to correcting the previously assumed "B left at -2". It comes as a "rollback" of existing information, and I need to "roll back" everything I deduced from that information. And that seems hard to do and easy to forget. So it wasn't just a failure to update that I committed. It was a failure to "roll back".
On reflection, this mistake seems like something we might be doing often, and something to keep an eye out for. Is there a name for this mistake, has it been studied?
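[A toy way to picture the bookkeeping involved, not a claim about how the mind actually does it: a minimal Python sketch of dependency-tracked beliefs, where retracting a premise automatically retracts everything derived from it. All names are made up for illustration.]

```python
class BeliefStore:
    """Toy justification tracking: each derived belief remembers its premises,
    so retracting a premise rolls back its consequences as well."""
    def __init__(self):
        self.premises_of = {}  # belief -> set of premises it was derived from

    def assume(self, belief):
        self.premises_of[belief] = set()

    def derive(self, belief, *premises):
        self.premises_of[belief] = set(premises)

    def retract(self, belief):
        # Remove the belief and everything that (transitively) depended on it.
        doomed = {belief}
        changed = True
        while changed:
            changed = False
            for b, premises in self.premises_of.items():
                if b not in doomed and premises & doomed:
                    doomed.add(b)
                    changed = True
        for b in doomed:
            self.premises_of.pop(b, None)

    def believes(self, belief):
        return belief in self.premises_of


s = BeliefStore()
s.assume("B got out at -2")
s.derive("C is going to -3", "B got out at -2")
s.retract("B got out at -2")           # B squeezes back into the elevator
print(s.believes("C is going to -3"))  # False: the deduction was rolled back
```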
Replies from: Benito, Gurkenglas, niceguyanon, JQuinton↑ comment by Ben Pace (Benito) · 2013-12-18T11:09:29.676Z · LW(p) · GW(p)
Seems related to the studies where people are shown a series of statements in blue or red, having been told that blue means true and red means false. When tested later, they're more likely to misremember a false statement as true than a true one as false - we're more likely to believe things, and don't take on contrary evidence as easily.
↑ comment by Gurkenglas · 2013-12-18T00:16:59.657Z · LW(p) · GW(p)
http://wiki.lesswrong.com/wiki/Cached_thought http://lesswrong.com/lw/k5/cached_thoughts/
Replies from: Anatoly_Vorobey↑ comment by Anatoly_Vorobey · 2013-12-18T00:25:08.872Z · LW(p) · GW(p)
Thanks. Cached thoughts seem applicable, but also too broad for what I'm describing. After all, if I failed to update on A and B exiting on -2, and continued thinking C may get out either on -3 or -4, that could also be described as a cached thought which I retained even when new evidence contradicted it. But I didn't do that, and was in no danger of doing that. I think that it's the necessity to roll back to the previous state, rather than just, in general, update on new evidence and get rid of the cached thought, that seems important here.
↑ comment by niceguyanon · 2013-12-20T09:45:41.423Z · LW(p) · GW(p)
This post is very interesting. It reminds me very much of some variations of the change scam. You seem to be describing something really similar, the rollback of information you speak of is applicable to the counting of change. I also feel like this sort of mistake happens often but I might not notice it. I feel like this deserves a name like rollback deduction failure or something.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-12-21T09:47:48.976Z · LW(p) · GW(p)
Change blindness seems related.
↑ comment by JQuinton · 2013-12-18T18:16:45.140Z · LW(p) · GW(p)
Seems a bit Monty Hall-ish. You updated when B got out on -2 but didn't retract your update when B re-entered. After your update, C -- or maybe you were thinking of "the remaining strangers on this elevator" -- had a near-certain chance of getting out on -3, so when B came back in it looks like you simply folded her back into "the remaining strangers on this elevator".
I have no clue if this phenomenon has a name or not.
comment by lukeprog · 2013-12-19T20:00:11.636Z · LW(p) · GW(p)
Reproduced for convenience...
On G+, John Baez wrote about the MIRI workshop he's currently attending, in particular about Löb's Theorem.
Timothy Gowers asked:
Is it possible to give just the merest of hints about what the theorem might have to do with AI?
Qiaochu Yuan, a past MIRI workshop participant, gave a concise answer:
Suppose you want to design an AI which in turn designs its (smarter) descendants. You'd like to have some guarantee that not only the AI but its descendants will do what you want them to do; call that goal G. As a toy model, suppose the AI works by storing and proving first-order statements about a model of the environment, then performing an action A as soon as it can prove that action A accomplishes goal G. This action criterion should apply to any action the AI takes, including the production of its descendants. So it would be nice if the AI could prove that if its descendants prove that action A leads to goal G, then action A in fact leads to goal G.
The problem is that if the AI and its descendants all believe the same amount of mathematics, say PA, then by Löb's theorem the AI can already prove that action A leads to goal G. So it must already do the cognitive work that it wants its smarter descendants to do, which raises the question of why it needs to build those descendants in the first place. So in this toy model Löb's theorem appears as a barrier to an AI designing descendants which it can't simulate but can provably trust.
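[For readers who want the formal shape of the argument, a compact sketch, assuming PA as the shared base theory and writing the box for provability in it:]

```latex
\textbf{L\"ob's theorem.} For any sentence $\varphi$:
\[
  \text{if } \mathrm{PA} \vdash \Box\varphi \rightarrow \varphi ,
  \quad\text{then}\quad \mathrm{PA} \vdash \varphi .
\]
\textbf{Toy model.} Write $G(A)$ for ``action $A$ leads to goal $G$''. The parent AI
would like to license building a descendant by proving that the descendant's
proofs can be trusted:
\[
  \mathrm{PA} \vdash \Box\, G(A) \rightarrow G(A) \qquad \text{for each action } A .
\]
By L\"ob's theorem, each such statement already yields $\mathrm{PA} \vdash G(A)$:
the parent can prove outright whatever it hoped to delegate to its descendant.
```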
↑ comment by JGWeissman · 2013-12-19T20:22:09.363Z · LW(p) · GW(p)
Qiaochu's answer seems off. The argument that the parent AI can already prove what it wants the successor AI to prove, and therefore isn't really building a more powerful successor, isn't very compelling, because being able to prove things is a different problem from searching for useful things to prove. It also doesn't encompass what I understand to be the Löbian obstacle: that if your own mathematical system can prove that whatever it proves is true, then your system is inconsistent.
Is there more context on this?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-12-19T21:18:19.177Z · LW(p) · GW(p)
It's entirely possible that my understanding is incomplete, but that was my interpretation of an explanation Eliezer gave me once. Two comments: first, this toy model ignores the question of how to go about searching for useful things to prove; you can think of the AI and its descendants as trying to determine whether or not any action leads to goal G. Second, it's true that the AI can't reflectively trust itself and that this is a problem, but the AI's action criterion doesn't require that it reflectively trust itself in order to perform actions. However, it does require that it trust its descendants in order to construct them.
comment by Dorikka · 2013-12-18T02:38:17.716Z · LW(p) · GW(p)
As there was some interest in Soylent some time ago, I'm curious what people who have some knowledge of dietary science think of its safety and efficacy given that the recipe appears to be finalized. I don't know much about this area, so it's difficult for me to sort out the numerous opinions being thrown around concerning the product.
ETA: Bonus points for probabilities or general confidence levels attached to key statements.
Replies from: ChristianKl, ThrustVectoring, ephion, Lumifer, RomeoStevens, Izeinwinter, Dorikka↑ comment by ChristianKl · 2013-12-19T22:25:02.358Z · LW(p) · GW(p)
They included vitamin D2 instead of D3. From what I've read about vitamin D, that seems like a bad decision.
↑ comment by ThrustVectoring · 2013-12-18T20:16:51.593Z · LW(p) · GW(p)
Given that dogfood and catfood work as far as mono-diets go, I'm pretty hopeful that personfood is going to work out as well. I don't know enough about nutrition in general to identify any deficiencies (and you kind of have to wait 10+ years for any long-term effects), but the odds are good that it or something like it will work out in the long run. I'd go with really rough priors and say 65% safe (85% if you're willing to have a minor nutritional deficiency), up to 95% three years from now. These numbers go up with FDA approval.
Replies from: Risto_Saarelma, passive_fist↑ comment by Risto_Saarelma · 2013-12-19T07:27:46.917Z · LW(p) · GW(p)
Given that dogfood and catfood work as far as mono-diets go
They mostly seem to, but if they cause a drop in energy or cognitive capability because of some nutrient balance problems, the animals won't become visibly ill and humans are unlikely to notice. A persistent brain fog from eating a poor diet would be quite bad for humans on the other hand.
Replies from: hyporational↑ comment by hyporational · 2013-12-20T06:20:08.397Z · LW(p) · GW(p)
Most of the selective breeding has been done while these animals were on simple diets, so perhaps some genetic adaptation has happened as well. Besides, aren't carnivore diets quite monotonous in nature anyway?
Replies from: Lumifer, NancyLebovitz↑ comment by Lumifer · 2013-12-20T16:16:16.352Z · LW(p) · GW(p)
Most of the selective breeding has been done while these animals were on simple diets
I am not so sure of that. People have been feeding cats and dogs commercial pet food only for the last 50 years or so and only in wealthy countries. Before that (and in the rest of the world, still) people fed their pets a variety of food that doesn't come from a bag or a can.
aren't carnivore diets quite monotonous in nature anyway?
In terms of what you kill and eat, mostly yes, but in terms of (micro)nutrients prey not only differs, but also each body contains a huge variety (compared to plants).
↑ comment by NancyLebovitz · 2013-12-20T15:13:43.580Z · LW(p) · GW(p)
aren't carnivore diets quite monotonous in nature anyway?
There's probably seasonal variation-- Farley Mowat described wolves eating a lot of mice during the summer when mice are plentiful. Also, I'm pretty sure carnivores eat the stomach contents of their prey-- more seasonal variation. And in temperate-to-cold climates, prey will have the most fat in the fall and the least in the early spring.
It wouldn't surprise me if there's a nutritional variation for dry season/rainy season climates, but I don't know what it would be.
↑ comment by passive_fist · 2013-12-19T09:34:42.637Z · LW(p) · GW(p)
I actually thought this way at first, but after reading up more on nutrition, I'm slightly skeptical that soylent would work as a mono-diet. For instance, fruits have been suggested to contain chemical complexes that assist in absorption of vitamins. These chemical complexes may not exist in soylent. In addition, there hasn't really been any long-term study of the toxic effects of soylent. Almost all the ingredients are the result of nontrivial chemical processing, and you inevitably get some impurities. Even if your ingredient is 99.99% pure, that 0.01% impurity could nevertheless be something with extremely damaging long-term toxicity. For instance, heavy metals, or chemicals that mimic the action of hormones.
Obviously, toxic chemicals exist in ordinary food as well. This is why variety is important. Variety in what you eat is not just important for the sake of chemicals you get, but for the sake of chemicals you don't get. If one of your food sources is tainted, having variety means you aren't exposed to that specific chemical in levels that would be damaging.
I still think it's promising, though, and I think we'll eventually get there. It may take a few years, but I think we'll definitely arrive at a food substitute that has everything the body needs and nothing the body doesn't need. Such a food substitute would be even healthier than 'fresh food'. I just doubt that this first iteration of Soylent has hit that mark.
I'll be watching Soylent with interest.
Replies from: ephion↑ comment by ephion · 2013-12-19T14:40:01.697Z · LW(p) · GW(p)
It seems to me that Soylent is at least as healthy as many protein powders and mass gainers that athletes and bodybuilders have been using for quite some time. That is to say, it depends on quality manufacturing. If Soylent does a poor job picking their suppliers, then it might be actively toxic.
↑ comment by ephion · 2013-12-18T21:37:23.441Z · LW(p) · GW(p)
I'd like to see creatine included, just because most people would see mental and physical benefits from supplementation. The micronutrients otherwise look good. I've read things to the effect that real food is superior to supplementation (example), so I don't think that this is a suitable replacement for a healthy diet. I do think that this will be a significant improvement over the Standard American Diet, and a step up for the majority of people.
The macronutrients also look good -- especially the fish oil! 102g of protein is a solid amount for a non-athlete, and athletes can easily eat more protein if desired. Rice protein is pretty terrible to eat; I hope they get that figured out. I'd probably prefer fewer carbs and more fat for myself, but I think that's just a quirk of my own biology.
↑ comment by Lumifer · 2013-12-18T20:40:31.720Z · LW(p) · GW(p)
Well, my estimates for long-term consequences would probably be:
Soylent is fine to consume occasionally -- 98%
Soylent is fine to be a major (but not sole) part of your diet -- 90%
Soylent is fine to be the sole food you consume -- 10%
↑ comment by chairbender · 2013-12-19T02:43:36.334Z · LW(p) · GW(p)
What are your credentials w.r.t. nutrition?
Replies from: Lumifer↑ comment by Lumifer · 2013-12-19T04:15:22.920Z · LW(p) · GW(p)
My credentials are my posts.
I don't do arguments from authority.
Replies from: Dorikka↑ comment by Dorikka · 2013-12-19T05:07:33.974Z · LW(p) · GW(p)
Given that you didn't mention otherwise, I assumed that you were mostly going off priors in the absence of much domain-specific knowledge, as ThrustVectoring was. I haven't read enough of your posts to accurately gauge how heavily to weight your opinion -- if my assumption is incorrect, I'd appreciate it if you would let me know.
Replies from: Lumifer↑ comment by Lumifer · 2013-12-19T16:10:58.444Z · LW(p) · GW(p)
There is no data about long-term effects of Soylent. Everyone has only priors and nothing but priors. By the way, "domain-specific knowledge" is a prior as well.
I am not sure how you are going to gauge the proper weighting for people's opinions. This is the internet, after all. If I tell you "I'm highly credentialed. Just trust me" :-D will that satisfy you?
On a bit more serious note I prefer arguments that stand on their own, regardless of their source (and its credibility or lack thereof). In fact, nutrition is such a screwed-up field that I would probably downgrade opinions from someone who claims to be a nutritionist...
Replies from: Dorikka↑ comment by Dorikka · 2013-12-19T22:09:17.169Z · LW(p) · GW(p)
This is the internet, after all. If I tell you "I'm highly credentialed. Just trust me" :-D will that satisfy you?
Eh; it would be medium-strength evidence. Even though I have no way to verify what you say, I don't think that you have any real incentive or motive to deceive me (given that simple trolls are unlikely to amass >2K karma). :P
(I think we've exhausted the usefulness of this subthread, so I probably won't respond to any replies -- tapping out.)
↑ comment by ChristianKl · 2013-12-19T14:38:31.478Z · LW(p) · GW(p)
What exactly do you mean by "fine"?
Replies from: Lumifer↑ comment by Lumifer · 2013-12-19T16:17:49.011Z · LW(p) · GW(p)
Um. Probably lack of noticeable health/fitness problems. But yes, it's a vague word. On the other hand, the general level of uncertainty here is high enough to make a precise definition not worthwhile. We are not running clinical trials here.
By the way, the vagueness of "major ... part of ... diet" is a bigger handwave here :-/
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-19T22:15:38.383Z · LW(p) · GW(p)
Probably lack of noticeable health/fitness problems.
The more I read about nutrition the more I come to the conclusion that most diets do have effects. Some advantages and some disadvantages.
I think there's a good chance that a diet without any cholesterol might reduce some hormone levels, and some people who look hard enough might see that as an issue.
↑ comment by A1987dM (army1987) · 2013-12-19T07:32:41.918Z · LW(p) · GW(p)
The first one sounds underconfident (at least if you don't count people allergic or intolerant to one of the ingredients, nor set a very high bar for what to call “fine”).
Replies from: Lumifer↑ comment by RomeoStevens · 2013-12-19T04:50:37.670Z · LW(p) · GW(p)
I'd rank it below existing dietary replacements.
Replies from: Dorikka↑ comment by Dorikka · 2013-12-19T05:03:08.964Z · LW(p) · GW(p)
Thanks for your input. Are there any existing dietary replacements you recommend that are similarly easy to prepare? (Soylent Orange seems to be working well for you as a solution, but I don't think I would actually go to the trouble to put the ingredients together.)
On a related note, do you have any new/more specific criticisms of Soylent, other than those that you presented in this post?
Replies from: RomeoStevens, ChristianKl, hyporational↑ comment by RomeoStevens · 2013-12-19T06:15:31.730Z · LW(p) · GW(p)
None that I would recommend. None of my criticisms are original; Soylent still seems like a very haphazard concoction to me. I do have a bunch of specific issues with Soylent that I haven't discussed in detail, e.g. the lack of cholesterol and saturated fat not being great for hormones. But yeah, I'm not super motivated to get deep into it unless I decide to try to turn the latest variant of Soylent Orange into an actual service. I'm still working on it.
↑ comment by ChristianKl · 2013-12-19T14:37:12.912Z · LW(p) · GW(p)
Thanks for your input. Are there any existing dietary replacements you recommend that are similarly easy to prepare?
Easy as in time requirements or easy as in money? The kind of fluid food replacement that they use in hospitals is probably better than what Soylent produces.
↑ comment by hyporational · 2013-12-19T14:12:41.934Z · LW(p) · GW(p)
Liquid diets are not exactly a new idea, and most of them don't have to be prepared at all but come in portions. Since most of them have been developed for medical use, the price tag is significantly higher. Some of them have been developed for patients who can't swallow normal food at all, so I doubt they lack anything important that Soylent contains and probably have been much more rigorously tested. If anyone knows studies that have been done on these people, I'm all ears.
↑ comment by Izeinwinter · 2013-12-22T21:21:22.831Z · LW(p) · GW(p)
Never mind its safety, I do not like its hedonics at all. Basically: if you currently are eating blandly enough that shifting to a liquid mono-diet for any reason other than dire medical necessity is not a major quality-of-life sacrifice, you need to reprioritize either your time or your money expenditures.
Losing one of the major pleasures of life is not a rational sacrifice. Life is supposed to be enjoyable!
Replies from: fubarobfusco, Kaj_Sotala, VAuroch, passive_fist, Alsadius↑ comment by fubarobfusco · 2013-12-23T03:00:26.737Z · LW(p) · GW(p)
Perhaps eating isn't a major pleasure of life for everyone.
I'm imagining an analogous argument about exercise. Someone formulates (or claims to, anyway) a technique combining drugs and yoga that provides, in a sweatless ten minutes per week, equivalent health benefits to an hour of normal exercise per day. Some folks are horrified by the idea — they enjoy their workout, or their bicycle commute, or swimming laps; and they can't imagine that anyone would want to give up the euphoria of extended physical exertion in exchange for a bland ten-minute session.
To me, that seems like a failure of imagination. People don't all enjoy the same "pleasures of life". Some people like physical exercise; others hate it. Some people like tasty food; others don't care about it. Some people like sex; others simply lack any desire for it; still others experience the urge but find it annoying. And so on.
Replies from: NancyLebovitz, Lumifer↑ comment by NancyLebovitz · 2013-12-23T08:14:38.367Z · LW(p) · GW(p)
Strong agreement-- I've read enough from people who simply don't find food very interesting to believe that they're part of the human range.
More generally, people's sensoriums vary a lot.
↑ comment by Lumifer · 2013-12-23T18:12:09.867Z · LW(p) · GW(p)
I'm imagining an analogous argument about exercise.
It's a weak analogy as humans are biologically hardwired to eat but are not hardwired to exercise.
Some people like tasty food; others don't care about it. Some people like sex; others simply lack any desire for it; still others experience the urge but find it annoying.
True, but two comments. First, let's also look at the prevalence. I'm willing to make a wild approximation that the number of people who truly don't care (and never will care) about food is about the same as the number of true asexuals and that's what, 1-2%?
Second, I suspect that many people don't care about food because of a variety of childhood conditioning and other psychological issues. In such cases you can treat it as a fixable pathology. And, of course, one's attitude towards food changes throughout life (teenagers are notoriously either picky or indifferent, adults tend to develop more discriminating tastes).
↑ comment by Kaj_Sotala · 2013-12-27T12:03:58.886Z · LW(p) · GW(p)
Preparing food is an annoying hassle which tends to interfere with my workflow and distract from doing something more enjoyable. Food does provide some amount of pleasure, but having to spend the time actually making food that's good enough to actually taste good (or having to leave the house to eat out) is enough of an annoyance that my quality of life would be much improved if I could just cease to eat entirely.
↑ comment by VAuroch · 2013-12-24T13:27:05.253Z · LW(p) · GW(p)
Soylent's creator argues that it increases the quality of life benefits of food, since the savings from the Soylent diet meant that when he chooses to eat out, he can afford very good quality food and preparation.
For myself, while I enjoy eating good food, I do not enjoy preparing food (good or otherwise), and in fact I enjoy eating significantly less than I dislike preparing food. So the total event (prepares good food -> eats good food) has negative utility to me, other than the nutritional necessity.
↑ comment by passive_fist · 2013-12-23T00:03:30.950Z · LW(p) · GW(p)
Additionally, if one's schedule is so tight that preparing simple home-made meals (nothing complicated, just stuff that can be prepared with 5 minutes of work) is out of the question, that seems like a fast route to burnout.
↑ comment by Alsadius · 2013-12-23T02:23:21.341Z · LW(p) · GW(p)
Here's the one pro-Soylent friend I have discussing why he likes it (tl;dr, he's bad at eating and figures it'll balance him out):
http://justinsamlal.blogspot.ca/2013/06/soylent-preliminary-stuff.html
comment by DisclosureQuestion · 2013-12-17T20:49:34.706Z · LW(p) · GW(p)
I'm not ready for my current employer to know about this, so I've created a throwaway account to ask about it.
A week ago I interviewed with Google, and I just got the feedback: they're very happy and want to move forward. They've sent me an email asking for various details, including my current salary.
Now it seems to me very much as if I don't want to tell them my current salary - I suspect I'd do much better if they worked out what they felt I was worth to them and offered me that, rather than taking my current salary and adding a bit extra. The Internet is full of advice that you shouldn't tell a prospective employer your current salary when they ask. But I'm suspicious of this advice - it seems like the sort of thing they would say whether it was true or not. What's your guess - in real life, how offputting is it for an employer if a candidate refuses to disclose that kind of detail when you ask for it as part of your process? How likely are Google to be put off by it?
Replies from: Anatoly_Vorobey, solipsist, sdr, Benquo, aubrey, jkaufman, ConvenientlyPrompted, Douglas_Knight, Benito, None↑ comment by Anatoly_Vorobey · 2013-12-17T23:23:56.937Z · LW(p) · GW(p)
I work at Google. When I was interviewing, I was in the exact same position of suspecting I shouldn't tell them my salary (which I knew was below market rate at the time). I read the same advice you did and had the same reservations about it. Here's what happened: I tried to withhold my salary information. The HR person said she had to have it for the process to move forward and asked me not to worry about it. I tried to insist. She said she totally understood where I was coming from, but the system didn't allow her flexibility on this point. I told her my salary, truthfully. I received an offer which was substantially greater than my salary and seemingly uncorrelated with it.
My optimistic reading of the situation is that Google's offer is mostly based on the approximate market salary for the role, adjusted perhaps by how well you did at the interviews, your seniority, etc. (these are my guesses; I don't have any internal info on how offers are calculated by HR). Your current salary is needed for future bookkeeping, statistics, or maybe in case it's higher than what Google is prepared to offer and they want to decide if it's worth it to up the offer a little bit. That's my theory, but keep in mind that it's just a bunch of guesses, and also that it's a big company and policies may be different in different countries and offices.
Replies from: Brillyant↑ comment by Brillyant · 2013-12-19T15:15:19.299Z · LW(p) · GW(p)
I think it is worth mentioning that "the system won't allow for flexibility on this" is just about the oldest negotiation tactic in the book. (Along with, "let me check with my boss on that...")
In reality, there is zero reason Google, or any employer, should need to know your current or past salary information apart from that information's ability to work as a negotiation tactic in their favor.
Google has something you want (a job that pays $) and you have something they want (skill to make them $). Sharing your salary this early in the process tips the negotiation scales (overwhelmingly) in their favor.
That said, Google is negotiating from a place of immense strength. They choose from nearly anyone they want, while there is only one Google...
...so, if Google wants to know your salary, tell them your salary. And enjoy your career at one of the coolest companies around. You win. :)
Replies from: Pentashagon↑ comment by Pentashagon · 2013-12-20T05:52:33.895Z · LW(p) · GW(p)
That said, Google is negotiating from a place of immense strength. They choose from nearly anyone they want, while there is only one Google...
And if salary is what matters use them as resume-points to get a higher salary somewhere else.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-12-20T09:54:39.441Z · LW(p) · GW(p)
And if salary is what matters use them as resume-points to get a higher salary somewhere else.
There are some things you may want to consider when using this strategy. For example, choose the appropriate amount of time you want to spend at Google. Too short may be suspicious, but too long would be a lost purpose if your goal is to make more money somewhere else later.
Optimize for having the most impressive CV when you leave. This means you should have an impressively sounding job description. Think about your CV items "on project X I worked as Y and my responsibilities were Z", and try to manage your career within Google to optimize these.
Have a plausible story about why you decided to work for Google, and why you later decided to work somewhere else. This story can also be made up later, but if you prepare it in advance, you can make it more realistic.
The simplest version of this advice would be: if you choose Google in the hope of having an impressive CV and a higher salary later, don't stay there for the next 10 years in the role of a code monkey working all the time on some completely unknown project that will be cancelled shortly after you leave.
↑ comment by sdr · 2013-12-17T22:54:51.806Z · LW(p) · GW(p)
The rationale behind salary negotiations is best expanded upon in patio11's "Salary Negotiation: Make More Money, Be More Valued" (that article is well worth the read).
In real life, the sort of places where employers take offense at you not disclosing your current salary (or, more generally, at salary negotiations -- that is, they'd hire someone else if he's available more cheaply) are not the places you want to work for: if they're applying selection pressure toward lower salaries, all your future coworkers are going to be, well, cheap.
This is anecdotally not true for Google; they can afford truckloads, if they really want to have you onboard. So this is much more likely to come from standardized processes. Also note in Google's case, that decisions are delegated to a board of stakeholders, so there isn't really one person who can be put off due to salary (and they probably handle the hire/no hire decisions entirely separate to the salary negotiations).
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-12-18T10:39:09.056Z · LW(p) · GW(p)
if they're putting selection pressure for downscaling salaries, all your future coworkers are going to be, well, cheap.
Also, the company will probably be less likely to buy you a decent computer for work, install a new server when your department needs it, or hire new people when there is more work than you can handle. Even if you somehow don't care about money for yourself, you probably do at least care about having decent working conditions. Maybe the just-world hypothesis makes you believe that lower salary will somehow be balanced by better working conditions, but it's probably the other way round.
↑ comment by Benquo · 2013-12-19T00:26:10.678Z · LW(p) · GW(p)
I'm a manager at a financial firm and I've hired people. I'd consider it pretty normal not to want to say. "Everyone" knows that trying to get the other person to name a number first is a common negotiating tactic; no real grownup is going to take it personally or get upset about this.
I don't know how "normal" a company Google is in this way, but I'd guess it's pretty normal.
If you are challenged on this, you can try stating it as a rule: "I'm not prepared to discuss my current salary, I'm here to talk about working for Google." Or, "As a policy I don't disclose my current salary. I'm sure you understand." Or make up some blah about how that's proprietary information for your current employer and you don't feel comfortable disclosing it.
If they absolutely refuse to process your application without this (which is a bad move on their part if they really want you, but some companies are stubborn that way), other options are to fudge your number upwards somehow, though personally I wouldn't try the ones that actually involve telling a literal lie:
Give them a wide range of expectations instead of your current salary. Say that of course it depends on the other details of the offer, any other offers you might get, etc.
Roll in as much stuff as you plausibly can (adding in bonuses or other moneylike benefits, and making an adjustment if the cost of living in Googleland is higher than where you live now). Example: I could add my salary of $80k, my last year's bonus of $5k (or next year's bonus, or my average bonus in percentage terms, whichever is highest), and my $2k transit benefits for a total of $87k.
Round up to the nearest $10k and say it's an approximate figure. So if I make $87k I might say I'm in the ballpark of $90k.
State a range (e.g. if I made $69k I might say I make something in the high 5 figures, or somewhere near the $70-80k range)
Lie outright, but plausibly.
↑ comment by aubrey · 2013-12-21T21:18:58.718Z · LW(p) · GW(p)
My guess is that a mild refusal would be acceptable to Google. They are unlikely to be put off by a change of subject.
A hard refusal might annoy them, if they persist in asking.
I suggest naming a very high figure first, to gain benefit from the anchoring effect; otherwise, mentioning your current salary will make it the anchor for the negotiation. Google has a reputation for paying high salaries.
If you are looking for advice on negotiation, I suggest searching for 'anchoring' as well as 'negotiation', to get more evidence-based advice.
Good luck.
↑ comment by jefftk (jkaufman) · 2013-12-21T04:02:43.417Z · LW(p) · GW(p)
If it's helpful to know what other Google employees make my compensation details are here.
↑ comment by ConvenientlyPrompted · 2013-12-18T00:06:43.977Z · LW(p) · GW(p)
This reminded me to ask about a similar question: I am currently interviewing. Assuming I get an in-person interview, that will involve a long flight. I feel like I shouldn't tell my current employer that I'm interviewing until I have an offer, but in order to hide it I presumably will have to take holiday on fairly short notice, have a plausible reason for why I'm taking it, and generally act like I'm not taking a long flight to an interview. There's a chance that I'll have to do this multiple times. (Though ideally I'd take multiple in-person interviews in the same trip.)
I don't particularly like the idea of doing this. It feels deceitful and stressful. How bad an idea would it be to just let my employer know what's going on?
Replies from: Viliam_Bur, palladias↑ comment by Viliam_Bur · 2013-12-18T13:33:23.274Z · LW(p) · GW(p)
How bad an idea would it be to just let my employer know what's going on?
Extremely bad. People have been fired or denied promotion because of this. Don't even tell any of your colleagues.
I am not discussing the legal aspects of this, but you will probably be perceived as not worth investing in the long term. Imagine that your interview fails and you decide to stay. Your current employer is not going to trust you with anything important anymore, because they will be expecting you to leave soon anyway.
Okay, this may sound irrational, because you are not your employer's slave, and technically you (and anyone else) are free to leave sooner or later. But people still make estimates. It is in your best interest to pretend to be a loyal and motivated employee, until the day you are 100% ready to leave.
It feels deceitful
This is part of human nature; it's what we have evolved to do. Even your dislike for deceit is part of the deceit mechanism. If you unilaterally decide to stop playing the game, it most likely means you lose.
There is probably an article by Robin Hanson about how LinkedIn helps us to get in contact with new job offers while maintaining plausible deniability, which is what makes it so popular, but I can't find the link now.
Replies from: Douglas_Knight, Gunnar_Zarncke, gwern, ConvenientlyPrompted↑ comment by Douglas_Knight · 2013-12-19T06:57:04.114Z · LW(p) · GW(p)
Here is the post by Robin Hanson.
I found it by searching site:overcomingbias.com social network. The key was generalizing from the specific "LinkedIn" to "social network," though I can't say why I thought to do that.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-12-19T09:34:52.614Z · LW(p) · GW(p)
Thank you! This was probably the one I remembered.
↑ comment by Gunnar_Zarncke · 2013-12-18T17:43:23.304Z · LW(p) · GW(p)
It is only deceitful if you haven't made an honest effort to improve your situation in your current company. It is just as deceitful to stay silent and not give your employer a chance to increase your salary or improve your position.
It depends on the type of company, of course. There are those that see you as an exchangeable human resource, where it may be appropriate to see the company as a slave owner from whom the truth of your escape has to be hidden.
But there are companies where honesty about work situations is seen as interest in the company, and critique is used as feedback to improve the environment.
Salary negotiations will always be tough, though. Strictly comparing offers is the only reliable way to sell yourself. Everything else is falling prey to the salary negotiation tricks of the business world.
EDIT: I'm from Germany, so my view may be country-specific.
Replies from: ConvenientlyPrompted, Viliam_Bur↑ comment by ConvenientlyPrompted · 2013-12-18T23:31:15.403Z · LW(p) · GW(p)
In this case, I don't want to leave; it's just that there are things I want more than I want to stay. Not that my situation couldn't be improved, but they probably can't offer anything to change my mind.
↑ comment by Viliam_Bur · 2013-12-18T21:21:33.684Z · LW(p) · GW(p)
But there are companies where honesty about work situations is seen as interest in the company and critique used as feedback to improve the environment.
If you are in such a company, that's great! Try to improve things; provide the feedback.
But don't mention the fact that you are doing interviews with another company.
↑ comment by gwern · 2013-12-19T00:12:38.477Z · LW(p) · GW(p)
There is probably an article by Robin Hanson about how LinkedIn helps us to get in contact with new job offers while maintaining plausible deniability, which is what makes it so popular, but I can't find the link now.
I can't find it either. Nothing on OB, nothing in Google for 'Linkedin "Robin Hanson"' or sharpened to add 'hypocrisy'. Sure it was Linkedin he was talking about?
↑ comment by ConvenientlyPrompted · 2013-12-18T23:22:46.599Z · LW(p) · GW(p)
Even your dislike for deceit is part of the deceit mechanism. If you unilaterally decide to stop playing the game, it most likely means you lose.
I think I ADBOC. It's not like the "disliking to be deceitful" gene evolved to make its bearers lose the game.
Certainly there are risks to being honest, but there are also benefits. Admittedly, the most salient one to me right now is "I don't want to treat my current employer poorly", and I'm not sure that lying about going to interviews is actually significantly worse than merely not telling them about interviews.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-12-19T09:44:06.106Z · LW(p) · GW(p)
It's not like the "disliking to be deceitful" gene evolved to make its bearers lose the game.
For people who properly compartmentalize, it helps to win the game. By signalling a dislike of deceit, they gain other people's trust... and then at the right moment they do something deceitful and (if they are well-calibrated) most likely profit from it.
It's the people in the valley of bad rationality who may lose the game when they realize all the consequences and connections, and try to tune their honesty up to eleven. (For example by telling their boss that they would be willing to quit the company if they had a better offer from somewhere else, and that they actually look at the available information about other companies.)
Replies from: hyporational↑ comment by hyporational · 2013-12-19T14:26:38.220Z · LW(p) · GW(p)
Disliking deception also makes people more cautious and frugal about it, which is probably beneficial too.
↑ comment by palladias · 2013-12-18T23:25:44.522Z · LW(p) · GW(p)
It depends a lot on your company, so I think your inside view will be better than our outside view. I told my employer when I went out to do a tryout with CFAR, and that went well. One reason I told my boss was that, if I were hired, I'd need to scramble to get all my projects annotated well enough to be handed off seamlessly, and I didn't want her to be left in the lurch or to make any plans that hinged on having her quant around for the next month. (Hiring sometimes took a while at my old company.)
My boss really appreciated my being forthright and it saved me a lot of tsuris. I think it also worked better because it was expected that people in my role (Research Associate) wouldn't stick around forever.
Replies from: ConvenientlyPrompted↑ comment by ConvenientlyPrompted · 2013-12-18T23:35:20.631Z · LW(p) · GW(p)
Yes, not leaving my employer in the lurch is important to me, but I do feel like they expect me to be around for a while. I'm glad to hear of your positive experience.
↑ comment by Douglas_Knight · 2013-12-17T23:17:45.716Z · LW(p) · GW(p)
Almost everyone finds an explicit refusal to answer offputting. Don't do it. But that doesn't mean that you should actually answer. Usually a good choice is to answer a different question, such as to make them an offer.
↑ comment by Ben Pace (Benito) · 2013-12-17T22:15:26.961Z · LW(p) · GW(p)
I would advise googling to find average salaries for similar positions, especially at Google.
↑ comment by [deleted] · 2013-12-17T22:08:12.964Z · LW(p) · GW(p)
I don't feel qualified to answer your question, though if I were to make a guess, I wouldn't expect them to be put off by refusal. Assuming Google behaves at least somewhat rationally, they should at this point have an estimate of your value as an employee and it doesn't seem like your current salary would provide much additional information on that.
So, the question is, to what extent Google behaves rationally. This ties to something that I always wonder whenever I read salary negotiation advice. What is the specific mechanism by which disclosing current salary can hurt you? Yes, anchoring, obviously. But who does it? Is the danger that the potential employer isn't behaving rationally after all and will anchor to the current salary, lowering the upper bound on what they're willing to offer? Or is the danger primarily that anchoring will undermine your confidence and willingness to demand more (and if you felt sufficiently entitled, it wouldn't hurt you at all)?
Replies from: Viliam_Bur, gjm↑ comment by Viliam_Bur · 2013-12-18T13:18:20.392Z · LW(p) · GW(p)
Or is the danger primarily that anchoring will undermine your confidence and willingness to demand more (and if you felt sufficiently entitled, it wouldn't hurt you at all)?
I would guess this one. It can make you ask for less, with almost zero effort on the employer's side; they don't even have to read your answer. So the benefit-to-cost ratio of asking this question is huge. And even if it doesn't work on some people, it most likely does on average, so it can save a lot of money.
↑ comment by gjm · 2013-12-18T09:15:05.617Z · LW(p) · GW(p)
Yes, anchoring, obviously.
The mechanism that seems most important to me doesn't really involve any sort of cognitive bias much. It goes like this. You are on (say) $50k/year. You are good enough that you'd be good value at $150k/year, but you'd be willing to move if offered $60k/year, if that were all you could get. You apply for a new job and have to disclose your current salary to every prospective employer. So you get offers in (say) the $60k-80k range, because everyone knows, or at least guesses, that that's enough to tempt you and that no one else will be offering much more. You might get a lot more if you successfully start a bidding war, but otherwise you're going to end up paid way less than you could be.
Note that everyone in this scenario acts rationally, arguably at least. Your prospective employer offers you (say) $75k. This would be irrational if you'd turn that down but accept a higher offer. But actually you'll take it. This would be irrational if you could get more elsewhere. But actually you can't because no one else will offer you much more than your current salary.
(You could try telling them that you have a strict policy of not taking offers that are way below what you think you're worth, in the hope that it'll stop them thinking you'd accept an offer of $75k. But you might not like the inference they'd draw from that and your current salary.)
Obvious note: Of course people care about lots of other things besides money, your value to one employer isn't the same as your value to another, etc. This has only limited effect on the considerations above.
Replies from: Nornagest, None↑ comment by Nornagest · 2013-12-18T18:40:31.179Z · LW(p) · GW(p)
I was recently in a similar position, but I nonetheless managed to negotiate a large salary increase by taking a job in a different city, quoting the salary level that I wanted, and pleading cost-of-living increases when I was asked to justify it. They did negotiate me down by about $5000, and I wouldn't say I'm quite at market rates yet for my level of experience, but it did seem to successfully anchor the negotiations on my asking price rather than my previous salary.
The new city actually did have a higher cost of living than the old one, but I get the impression that the hiring manager didn't care about the actual rate so much as he cared about having a rationale that looked good on paper.
↑ comment by [deleted] · 2013-12-18T18:11:33.608Z · LW(p) · GW(p)
Well, assuming your example numbers, if my work would bring $150k+$x/year and the company didn't hire me because I refused to take $60k/year, instead demanding, say, $120k/year (over twice the current salary, how greedy), then they just let $30k+$something/year walk out the door. Would they really do that (assuming rational behavior blah blah)?
I don't see how they would benefit from playing the game of salary-negotiating chicken to the bitter end. Having a reputation for not offering market salaries for people with unfortunate work history? That actually sounds like it could be harmful.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-19T16:22:53.796Z · LW(p) · GW(p)
The company doesn't really know your true value. If you are really worth $150k, it raises the question of why you can't get your present employer to pay you that wage. Your present employer has a lot more information about your skills than they do.
comment by Bayeslisk · 2013-12-21T08:13:41.486Z · LW(p) · GW(p)
Something's brewing in my brain lately, and I don't know what. I know that it centers around:
-People were probably born during the Crimean War/US Civil War/the Boxer Rebellion who then died of a heart attack in a skyscraper, in a passenger plane crash, or caught up in, say, WWII.
-Accurate descriptions of people from a decade or two ago tend to seem tasteless. (Casual homophobia) Accurate descriptions of people several decades ago seem awful and bizarre. (Hitting your wife, blatant racism) Accurate descriptions of people from centuries ago seem alien in their flat-out implausible awfulness. (Royalty shitting on the floor at Versailles, the Albigensian Crusade, etc...)
-We seem no less shocked now by social changes and technological developments and no less convinced that everything major under the sun has been done and only tweaks and refinements remain than people of past eras did.
I guess what I'm saying is that the Singularity seems a lot more plausible, given this, than it otherwise might have, but we won't realize we're going through it until it's well underway, because our perception of such things will also wind up going faster for most of it.
Replies from: ChristianKl, passive_fist, NancyLebovitz↑ comment by ChristianKl · 2013-12-21T17:25:59.790Z · LW(p) · GW(p)
We seem no less shocked now by social changes and technological developments and no less convinced that everything major under the sun has been done and only tweaks and refinements remain than people of past eras did.
I do expect the future to be different.
I could imagine a future where people see illegalizing LSD as being just as strange as we now see the former illegality of homosexuality.
I can imagine that Google's rent-an-AI-car service will completely remove personal ownership of cars in a few decades. This removes cars as status symbols, which means that they will be built on other design criteria like energy efficiency.
I can imagine a constructed language possibly overtaking English.
There are a lot of other things that are more vague.
Replies from: army1987, Bayeslisk↑ comment by A1987dM (army1987) · 2013-12-22T09:32:28.659Z · LW(p) · GW(p)
illegalizing LSD
Drug prohibition laws introduced in the 20th century are a nice counterexample to reactionaries' claim that Cthulhu only swims left, BTW.
(Edited to replace “always” with “only” -- I misremembered that quote.)
Replies from: None, Douglas_Knight, None↑ comment by [deleted] · 2013-12-23T10:24:21.157Z · LW(p) · GW(p)
Cthulhu always swims left isn't an observation that on every single issue society will settle on the left's preferences, but that the general trend is leftward movement. If you interpreted it as that, the fall of the Soviet Union and the move away from planned economies should be a far more important counterexample.
Before continuing I should define how I'm using left and right. I think they are real in the sense that they are the coalitions that tend to form under current socioeconomic conditions when, due to the adversarial nature of politics, you compress very complicated preferences into as few dimensions (one) as possible. Principal component analysis makes for a nice metaphor for this.
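(A toy sketch of that metaphor, with made-up data and scikit-learn assumed available: generate many-issue preference vectors driven by one underlying disposition and see how much of their variance a single principal component captures.)

```python
# Toy illustration only: hypothetical voters and issues, not real survey data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.uniform(-1, 1, size=(200, 1))       # one underlying disposition per "voter"
loadings = rng.uniform(0.5, 1.0, size=(1, 10))   # how strongly each of 10 "issues" tracks it
prefs = latent @ loadings + 0.2 * rng.normal(size=(200, 10))

pca = PCA(n_components=1)
axis = pca.fit_transform(prefs)                  # each voter's score on the single "left-right" axis
print(pca.explained_variance_ratio_)             # fraction of variance one dimension captures
```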
Back to Cthulhu. As someone whose preferences can be described as right-wing, I would be quite happy with returning to 1950s levels of state intervention, welfare and relative economic equality in exchange for that period's social capital and cultural norms. Controlling for technological progress, obviously. Some on the far right of mainstream conservatives might accept the same trade. This isn't to say I would find it a perfect fit, not by a long shot, but it would be a net improvement. I believe most Western far-right people would accept this trade; most Western far-left people would not. And in America at least, centrists would be uncomfortable both with that level of state intervention and with the social norms of the 1950s.
Now that we have this claim about revealed preferences, let's invoke a very simple heuristic. Imagine two players in a zero-sum game of politics who are offered the chance to move the game back to the position it had 50 moves ago. One player accepts, the other refuses. Ceteris paribus, which one do you think is winning?
Replies from: Viliam_Bur, satt, Douglas_Knight, ygert↑ comment by Viliam_Bur · 2013-12-24T14:25:09.925Z · LW(p) · GW(p)
The "who would prefer to return 50 years back?" argument is interesting, but I think the meaning of "winning" has to be defined more precisely. Imagine that 50 years ago I was (by whatever metric) ten times as powerful as you, but today I am only three times as powerful than you. Would you describe this situation as your victory?
In some sense, yes, it is an improvement of your relative power. In other sense, no, I am still more powerful. You may be hopeful about the future, because the first derivative seems to work for you. On the other hand, maybe the second derivative works for me; and generally, predicting the future is a tricky business.
But it is interesting to think about how the time dimension is related to politics. I was thinking that maybe it's the other way round; that "the right" is the side which self-identifies with the past, so in some sense it is losing by definition -- if your goal is to be "more like yesterday than like today", then tautologically today is worse according to this metric than yesterday was. And there is a strong element of returning to the past in some right-wing movements.
But then I realized that some left-wing movements have this component too. I remember communists emphasising that millennia ago humans lived in perfectly egalitarian hunter-gatherer societies (before the surplus value was taken by the evil slavers / feudal lords / capitalists), so when the true communism comes, this ancient harmony will be restored. Similarly, some feminists (maybe just a small minority of them, I don't know) have stories about how exactly the ancient matriarchal societies were organized, so overthrowing patriarchy would kinda restore this ancient order.
At this moment my working hypothesis is that "returning to the perfect past" is simply a universal human bias, and the main political difference is where exactly your Golden Age is located. It would then seem that the right wing puts the Golden Age in the more recent past, while the left wing prefers prehistoric societies.
That makes it pretty likely that in its heart, the left is about returning towards our hunter-gatherer instincts and abandoning as much as possible of our disappointing civilization, while the right is about insisting on some specific adaptations to scarcity. Something like what Yvain said, with the connotational objection that the category of danger does not include only zombies, but also criminals or dysfunctional bureaucracies, which are everyday reality for some people. Generally, as we improve economically, we can afford to remove some of the adaptations to scarcity; the trade-offs that are no longer necessary. But sometimes while doing so we fuck up things horribly and the scarcity returns; often in a way that university professors don't notice, simply because it does not happen to them.
Replies from: None↑ comment by [deleted] · 2013-12-24T16:37:58.970Z · LW(p) · GW(p)
Imagine that 50 years ago I was (by whatever metric) ten times as powerful as you, but today I am only three times as powerful as you. Would you describe this situation as your victory?
I would describe it as me playing very well for the past 50 years and the game going my way.
↑ comment by satt · 2013-12-24T14:43:30.182Z · LW(p) · GW(p)
Cthulhu always swims left isn't an observation that on every single issue society will settle on the left's preferences, but that the general trend is leftward
Dropping the "always" might lead to less confusion on this point.
Replies from: None↑ comment by [deleted] · 2013-12-24T16:34:33.721Z · LW(p) · GW(p)
I only used "Cthulhu always swims left" because that is how army1987 termed it. Moldbug says "Cthulhu may swim slowly. But he only swims left."
Replies from: satt, army1987↑ comment by satt · 2013-12-24T16:58:37.931Z · LW(p) · GW(p)
That formulation has the same problem. Like "always swims left", "only swims left" suggests that every observed movement is leftwards.
Replies from: Randy_M↑ comment by A1987dM (army1987) · 2013-12-30T18:47:42.725Z · LW(p) · GW(p)
(I've corrected it now.)
↑ comment by Douglas_Knight · 2013-12-24T03:10:22.743Z · LW(p) · GW(p)
Why does politics compress issues into a single linear dimension?
I suppose it makes sense in a two party system, but why do parliamentary systems with several small parties mainly have parties on the extremes, rather than mainly having parties with orthogonal preferences that could ally with either major party? In principle, a Green party ought not to care much about the left-right axis, but in practice it cares very much.
Replies from: None↑ comment by [deleted] · 2013-12-24T07:20:14.925Z · LW(p) · GW(p)
Why does politics compress issues into a single linear dimension?
I don't know; it might be an artifact of representative systems in the West where the all-important thing is getting the majority vote. "Left" vs. "Right" being a strong signal of who your allies tend to be seems to work pretty well descriptively for most people's political identities and preferences.
↑ comment by ygert · 2013-12-23T11:51:08.701Z · LW(p) · GW(p)
A slight nitpick: I wouldn't describe politics as anything remotely near zero-sum. The actions of the players of a country's game of politics have very far-reaching effects on the citizens and residents of that country, and in some cases of the residents of the entire world.
The actions of the players definitely do affect the world at large in ways outside the scope of the game, which makes it about as far from zero-sum as it could possibly be. I'm pretty sure that this changes the outcome of your thought experiment dramatically.
↑ comment by Douglas_Knight · 2013-12-23T04:24:28.658Z · LW(p) · GW(p)
That depends on the definition of "left." What is your definition?
(I am skeptical that "left" is a useful concept.)
Moldbug sometimes seems to define it as Puritan or Protestant more generally. But at other times he seems to say that the two are the same, but not by definition.
In the early 19th century, the Temperance movement comprised the same people who were seeking the abolition of slavery and women's suffrage. Surely this counts as left? A century later it won by having a broader base, but it was still Protestant. Indeed, much of the appeal was as a way to attack Catholic immigrants.
Marijuana and cocaine were banned at about the same time as alcohol. One interpretation was that this was a side effect of the temperance movement. (This is more clear in the case of cocaine, which was a dry run of prohibition; less clear with marijuana, which was banned later.) Another interpretation is that they were race war (just like alcohol). That sounds right-wing, but what is your definition? The prohibition of LSD was much later and seems much more clearly right-wing.
I am talking only of America. I believe that prohibitions elsewhere were imported. If you think America is right-wing, that makes them seem right-wing.
Added: I forgot opium. It was nationally banned with cocaine, but it was banned in SF much earlier, when the Temperance movement was a lot weaker. I think most local Prohibitions were left-wing, but opium in SF might not fit that.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-23T13:09:07.039Z · LW(p) · GW(p)
Marijuana and cocaine were banned at about the same time as alcohol. One interpretation was that this was a side effect of the temperance movement. (This is more clear in the case of cocaine, which was a dry run of prohibition; less clear with marijuana, which was banned later.)
I think the ban on marijuana in 1937 was a win for DuPont business interests. From a right/left perspective of the 19th century that's difficult to parse as either left or right.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-12-24T02:55:41.341Z · LW(p) · GW(p)
Yes, the commercial aspects probably pushed it over the line despite it not being banned earlier, but the fact that lots of things were banned suggests that there is probably a common cause and that the commercial aspects were only secondary. Whether the common cause is that one group opposed everything or that one group opposed alcohol and moved the Overton window is harder to decide.
↑ comment by [deleted] · 2013-12-23T10:08:28.873Z · LW(p) · GW(p)
I would consider legalizing drugs, prostitution and taking away women's votes to be well worth voting for. If I believed in voting that is.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-23T13:07:18.002Z · LW(p) · GW(p)
What's your issue with women's voting rights?
Replies from: ygert, None, None↑ comment by ygert · 2013-12-23T15:26:05.331Z · LW(p) · GW(p)
If I had to guess, I'd say that as Konkvistador is against democracy and voting in general, he wants voting rights to be denied to everyone, and as such, starting with 51% of the population is a good step in that direction.
Am I correct, or is there something more?
Replies from: army1987, Viliam_Bur, Eugine_Nier↑ comment by A1987dM (army1987) · 2013-12-30T18:53:51.248Z · LW(p) · GW(p)
starting with 51% of the population is a good step in that direction
Sure, but the process would likely have hysteresis depending on which group you remove first, and “women” doesn't seem like the best possible choice to me -- even “people without a university degree” would likely be better IMO.
↑ comment by Viliam_Bur · 2013-12-24T14:43:24.064Z · LW(p) · GW(p)
Maybe it is because of our instincts that scream at us that every woman is precious (for long-term survival of the tribe), but the males are expendable. Taking the votes away from the expendable males could perhaps get popular support even today, if done properly. The difficult part in dismantling democracy are the votes of women.
(Disclaimer: I am not advocating dismantling democracy by this comment; just describing the technical problems.)
↑ comment by Eugine_Nier · 2013-12-26T00:26:05.774Z · LW(p) · GW(p)
If you stop thinking of democracy as sacred and start seeing letting various groups vote as a utility calculation, you start looking at questions like how various groups vote, how politicians attempt to appeal to them, and what effect this has on the way the country winds up being governed.
Replies from: fubarobfusco, army1987, pragmatist↑ comment by fubarobfusco · 2013-12-26T14:36:43.155Z · LW(p) · GW(p)
Don't forget to consider what sorts of political expression are available to those who are not allowed the vote.
↑ comment by A1987dM (army1987) · 2014-01-03T13:40:51.214Z · LW(p) · GW(p)
Sure, but I'd guess voting patterns vary much more with age, education, and income than with gender.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-01-04T03:23:12.344Z · LW(p) · GW(p)
It's not just a question of whether they vary, it's whether they vary in a way that systematically correlates with better (or worse) decisions. Also there are Campbell's law considerations.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-01-04T09:50:34.937Z · LW(p) · GW(p)
I think my point still stands.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-01-08T04:49:58.187Z · LW(p) · GW(p)
Well, education is subject to Campbell's law, but I suspect Konkvistador wouldn't object to raising the voting age, or imposing income requirements.
↑ comment by pragmatist · 2013-12-31T11:57:56.575Z · LW(p) · GW(p)
Another strike against utilitarianism! One person's modus ponens is another person's modus tollens.
↑ comment by Bayeslisk · 2013-12-21T21:23:40.044Z · LW(p) · GW(p)
But these are not, seemingly, as different as, say, the discovery of LSD. Or psychotropics. Or the establishment of homosexuality as relatively innate. Or the invention of the car, or the very first creation of a constructed language.
Replies from: ChristianKl, Eugine_Nier↑ comment by ChristianKl · 2013-12-22T15:09:11.342Z · LW(p) · GW(p)
The invention of the car wasn't that big a deal. At the beginning it wasn't clear that cars are all that great. It took time for people to figure out that cars are much more awesome than horse carts.
I think you underrate the effects of legalizing LSD. If you say you legalize all drugs, you have to ask yourself questions such as why pharma companies pay a lot of money for clinical trials when all substances can be legally sold. As a society you have to answer those questions.
As for the establishment that homosexuality is relatively innate, I think you have to keep in mind how vague the term homosexuality happens to be. At the moment homosexuality seems to be an identity label. To me it's not clear that this will be the case in 200 years.
A lot of men who fuck other men in prisons don't see themselves as homosexual. Plenty of people who report that they had pleasureable sex with a person of the same sex don't label themselves as homosexual.
There are also a lot of norms about avoiding physical contact with other people. A therapist is supposed to work on the mind, and that doesn't mean just hugging a person for a minute. I can imagine a society in which casual touches between people are a lot more intimate than they are nowadays and behavior between males that a conservative American would label as homosexual would be default social behavior between friends.
If you run twin studies you find that being overweight has a strong genetic factor. The same goes for height. Yet the average of both changed a lot during the last two hundred years. The notion of something being innate might even be some remnant of what Nietzsche called the God in the grammar. It might not be around in 100 years in the form it exists in nowadays.
Replies from: None↑ comment by [deleted] · 2013-12-23T10:38:53.576Z · LW(p) · GW(p)
There are also a lot of norms about avoiding physical contact with other people. A therapist is supposed to work on the mind, and that doesn't mean just hugging a person for a minute. I can imagine a society in which casual touches between people are a lot more intimate than they are nowadays and behavior between males that a conservative American would label as homosexual would be default social behavior between friends.
This futuristic society of casual male intimacy was known as the 19th century.
In it, in the Russia of the 1950s, and in the modern Middle East you could observe men dancing together, holding hands, cuddling, sleeping together and kissing.
Replies from: army1987, ChristianKl↑ comment by A1987dM (army1987) · 2013-12-29T21:04:18.218Z · LW(p) · GW(p)
More generally, ISTM that displays of affection between heterosexual men correlate negatively with homophobia within each society but positively across societies. (That's because the higher your prior probability for X is, the more evidence I need to provide to convince you that not-X.)
↑ comment by ChristianKl · 2013-12-23T13:56:00.568Z · LW(p) · GW(p)
If I look at that description it seems to me that the current way of seeing homosexuality won't be permanent.
It seems being homosexual became a separate identity to the extent that people focused on not engaging in certain kinds of intimacy to signal that they aren't gay.
If the stigma against homosexuality disappears, homosexuality as identity might disappear the same way.
The word homosexuality is even in decline in google ngrams.
Replies from: VAuroch↑ comment by VAuroch · 2013-12-24T14:26:19.807Z · LW(p) · GW(p)
There's a distinction occasionally drawn between homosexual and gay; homosexual is the sexual preference, gay is the cultural lump/stereotype populated mainly by homosexuals. So the 'metrosexual' thing in the early 00s was a kind of fad for heterosexual men adopting gay culture.
This distinction is mainly drawn to point out that the political right's objection is largely to 'gay' rather than to 'homosexual'.
Replies from: ChristianKl, Eugine_Nier↑ comment by ChristianKl · 2013-12-24T14:37:45.419Z · LW(p) · GW(p)
What does "sexual preference" mean exactly?
Do you mean that the criminals in prisons who rape other criminals are gay but not homosexual?
Are you implying that neither of the terms is actually about whether a man has sex with another man?
Replies from: VAuroch↑ comment by VAuroch · 2013-12-24T14:46:49.327Z · LW(p) · GW(p)
Under this distinction: Men who prefer to have sex with men rather than women are homosexual. Men who prefer to have sex with women rather than men are heterosexual.
Prison sex may be homosexual (that's a matter of fuzzy definitions), but (under this distinction) definitely isn't gay.
↑ comment by Eugine_Nier · 2013-12-26T00:30:54.480Z · LW(p) · GW(p)
This distinction is mainly drawn to point out that the political right's objection is largely to 'gay' rather than to 'homosexual'.
No, the political right's objection is to people engaging in homosexual sex and to popular culture telling people this is a normal and healthy thing to do. The subtler objection is to it telling people that if they find 19th century style male bonding appealing it means that they're "gay" and should thus engage in homosexual sex.
Replies from: VAuroch, army1987↑ comment by VAuroch · 2013-12-26T09:03:31.484Z · LW(p) · GW(p)
I see no reason to believe that is the case; gay culture, by its nature of growing out of highly-liberal communities during the 60s and 70s, is highly hedonistic and permissive, both things the political right objects to already. That they strongly dislike (perceived) core attributes of this culture and the associated homosexuality looks like a strictly simpler hypothesis than that they dislike (perceived) core attributes of this culture, and also homosexuality.
In short: Occam appears to be on my side, so you'll need some evidence for that.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-12-29T05:08:30.761Z · LW(p) · GW(p)
Read what traditionalists actually write for one thing. They're against hedonistic behaviors and that includes homosexual sex (this is not the only reason they're against it). Notice that this was true long before the current cultural concept of what it means to "act gay".
↑ comment by A1987dM (army1987) · 2013-12-29T21:22:30.069Z · LW(p) · GW(p)
normal
Taboo that word. Is being left-handed normal?
ISTM the point of that word is often to sneak connotations in.
The subtler objection is to it telling people that if they find 19th century style male bonding appealing it means that they're "gay" and should thus engage in homosexual sex.
What? ISTM it's right-wingers who say things like that. EDIT: I guess I had misread that (I had read “should” as ‘are likely to’ rather than ‘had better’), in which case... what??? I can't remember anyone ever suggesting anything remotely like that with a straight face, and I know plenty of left-wingers; are you sure you aren't attacking a straw man?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-12-29T21:53:40.708Z · LW(p) · GW(p)
I guess I had misread that (I had read “should” as ‘are likely to’ rather than ‘had better’), in which case... what??? I can't remember anyone ever suggesting anything remotely like that with a straight face,
They tend to phrase it as encouraging people to "find out if they're gay", i.e., encourage people to declare themselves "gay" if what amounts to 19th century style male bonding appeals to them. Furthermore, once someone has been declared "gay" it's considered a horrendous hate crime to discourage him from engaging in homosexual sex.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-12-30T10:36:04.855Z · LW(p) · GW(p)
They tend to phrase it as encouraging people to "find out if they're gay", i.e., encourage people to declare themselves "gay" if what amounts to 19th century style male bonding appeals to them.
Never heard that either.
Furthermore, once someone has been declared "gay" it's considered a horrendous hate crime to discourage him from engaging in homosexual sex.
And once someone has been declared "straight" it's considered a horrendous hate crime to discourage him from engaging in heterosexual sex (except by fundamentalist Christians and the like, but that also applies to gay sex), so what's your point?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-12-30T23:00:20.670Z · LW(p) · GW(p)
And once someone has been declared "straight" it's considered a horrendous hate crime to discourage him from engaging in heterosexual sex
Encouraging "gays" to become "straight" is considered a hate crime, encouraging "straights" to become "gay" is framed as encouraging them to "find out if they're gay" and considered commendable.
Also, at least in the US, encouraging "straights" to hold of until marriage is considered old fashioned but not nearly as bad as attempting to deconvert "gays". The latter has in fact been made illegal in California.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-12-31T00:26:23.630Z · LW(p) · GW(p)
encouraging "straights" to become "gay" is framed as encouraging them to "find out if they're gay" and considered commendable.
What the hell are you talking about? AFAICT nearly all straight people I know would find such an, ahem, encouragement quite annoying at the very best, and most of them would be utterly disgusted by it. “I'm flattered, but I'm straight” said with a poker face is about as positive a reaction as I'd ever anticipate seeing.
Replies from: Moss_Piglet↑ comment by Moss_Piglet · 2013-12-31T00:59:50.408Z · LW(p) · GW(p)
You and Eugine seem to be talking past one another;
He's saying that society tends to see it as (at worst) a bit of a faux pas for a gay man to try to get a straight to switch teams whereas a gay converter is one step off from an SS officer in terms of the hatred they get.
You, on the other hand, seem to be talking about how annoyed straight guys get when being harassed by gays trying to convert them, and presumably vice versa. That people get pissed off, with good reason, when people try to dictate terms to them on whom they desire.
Oddly enough, both of you are right. It is much more acceptable for gay men to be "straight chasers" and try to get straight guys to "come out" than it is for Christians to be "deconverters" and try to get gay guys to "find Jesus," at least everywhere I've lived (admittedly, my favorite cities tend to be pretty deep blue). People confronted with this kind of obnoxious behavior don't appreciate it in either case, but the straight guy has to be a lot more careful not to say anything "offensive" to the guy grabbing him (God forbid throwing a punch) than the gay guy who can tell the pastor to go to hell and walk off with the full force of the law / media behind him.
Replies from: pragmatist, army1987↑ comment by pragmatist · 2013-12-31T11:48:28.657Z · LW(p) · GW(p)
There seems to be a pretty big asymmetry here that you're ignoring. Christian "deconverters" aren't simply saying "Hey, why don't you try straight sex? You might end up enjoying it." They're saying "There is something deeply wrong with your sexual orientation and you will suffer eternally unless you sincerely attempt to change it." I doubt that attempts to convert straight men result in higher rates of depression or suicide among them.
The appropriate analog of the gay "straight chasers" you're talking about would be a straight woman who attempts to "convert" gay guys by, say, trying to convince them to sleep with her, maybe because she likes the challenge. Do you think such a person would also be seen as one step off from an SS officer?
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-01-03T18:02:59.139Z · LW(p) · GW(p)
The appropriate analog of the gay "straight chasers" you're talking about would be a straight woman who attempts to "convert" gay guys by, say, trying to convince them to sleep with her, maybe because she likes the challenge. Do you think such a person would also be seen as one step off from an SS officer?
BTW, IME straight men who manage to convince lesbians to sleep with them usually inspire awe, not disgust. (I can't think of any concrete examples of the gender-reversed situation, which you described.)
↑ comment by A1987dM (army1987) · 2013-12-31T09:36:21.007Z · LW(p) · GW(p)
He's saying that society tends to see it as (at worst) a bit of a faux pas for a gay man to try to get a straight to switch teams
Actually he said it is “considered commendable”, but I see your point.
↑ comment by Eugine_Nier · 2013-12-26T00:17:39.486Z · LW(p) · GW(p)
Or the establishment of homosexuality as relatively innate.
When did this actually happen? All the arguments I've seen boil down to either the "it shows up on brain scans and is thus innate" fallacy, or if you don't agree it's innate you must be an EVIL HOMOPHOBE!!!11!!
Replies from: army1987, army1987↑ comment by A1987dM (army1987) · 2013-12-30T10:33:31.417Z · LW(p) · GW(p)
if you don't agree it's innate you must be an EVIL HOMOPHOBE!
What? I can't see why knowing that genetics (assuming that's what's meant by “innate”) affects how likely people are to commit violent crimes would make me dislike violent criminals any less, nor why knowing that (say) the concentration of lead in the air also affects that would make me dislike them any more.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-12-30T22:55:35.729Z · LW(p) · GW(p)
I can't see why knowing that genetics (assuming that's what's meant by “innate”) affects how likely people are to commit violent crimes would make me dislike violent criminals any less,
Well, there are a lot of people arguing that we should go easy on violent criminals since "it's not their fault". I don't agree with this argument, but a lot of people seem to be convinced by it.
↑ comment by A1987dM (army1987) · 2013-12-29T21:09:43.550Z · LW(p) · GW(p)
Twin studies. (Though by that standard lots of things are relatively innate.)
Replies from: Douglas_Knight, Emile, Eugine_Nier↑ comment by Douglas_Knight · 2013-12-30T00:05:29.923Z · LW(p) · GW(p)
Relative to what? If "lots of things" are "relatively" something, your standards are probably too low.
Yes, twin studies give a simple upper bound to the genetic component of male homosexuality, but it is very low. As an exercise, you might try to name 10 things with a lower genetic contribution. But I think defining "innate" as "genetic" is a serious error, endemic in all discussions of human variety.
Added, months later: Cochran and Ewald suggest as a benchmark leprosy, generally considered an infection, not at all innate. Yet it has (MZ/DZ) twin concordance of 70/20. For something less exotic, TB is 50/20. That's higher than any reputable measure of the concordance of homosexuality. The best studies I know are surveys of twin registries: in Australia, there is a concordance of 40/10 for Kinsey 1+ and 20/0 for Kinsey 2+; in Sweden, 20/10 and 5/0.
↑ comment by Emile · 2013-12-31T11:52:12.196Z · LW(p) · GW(p)
Since everybody in this subthread is talking about the numbers without mentioning them, from Wikipedia:
Biometric modeling revealed that, in men, genetic effects explained .34–.39 of the variance [of sexual orientation], the shared environment .00, and the individual-specific environment .61–.66 of the variance. Corresponding estimates among women were .18–.19 for genetic factors, .16–.17 for shared environmental, and .64–.66 for unique environmental factors.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2014-01-02T23:17:55.016Z · LW(p) · GW(p)
Numbers like ".34–.39" imply great precision. In fact, that is not a confidence interval, but two point estimates based on different definitions. The 95% confidence interval does not exclude 0 genetic contribution. I'm getting this from the paper, table 1, on page 3 (77), but I find implausible the transformation of that raw data into those conclusions.
↑ comment by Eugine_Nier · 2013-12-29T21:46:04.154Z · LW(p) · GW(p)
Ok, taboo "relatively innate". The common analogy used in the 'civil rights' arguments is to things like skin color. By that standard homosexuality is not innate.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-12-30T10:45:47.579Z · LW(p) · GW(p)
Ok, taboo "relatively innate".
I can't speak for Bayeslisk, but I'd say it means that things other than what happens to you after your birth have a non-negligible effect (by which standard your accent is hardly innate). But I agree it's not a terribly important distinction.
The common analogy used in the 'civil rights' arguments is to things like skin color. By that standard homosexuality is not innate.
I probably agree. (But of course it's a continuum, not two separate classes. Skin colour also depends on how long you sunbathe and how much carotene you eat, yadda yadda yadda.)
↑ comment by passive_fist · 2013-12-23T00:20:23.081Z · LW(p) · GW(p)
The problem is that human social mores seem to change on the order of 20-40 years which is consistent with the amount of time it takes a new generation of people to take the helm and for the old generation to die out. I have personally seen extreme societal change within my own country of origin, change that happened in only the span of 30 years. In comparison, Western culture over this same time has seemed almost stagnant (despite the fact that it, too, has undergone massive changes such as acceptance of homosexuality).
However, by some estimates, we are already just 20-40 years away from the singularity (2035-2055). This seems like too short a time for human culture to adapt to the massive level that is required. For instance, consider a simple thing like food. Right now, the idea of eating meat that has been grown in a lab seems unsettling and strange to many people. Now consider what future technology will enable, step-by-step:
- Food produced by nanotech with simple feedstock, with no slow and laborious cell growth required.
- Food produced by nanotech with household waste, including urine and feces (possibly the feces of other people as well), thus creating a self-contained system.
- Changing human biochemistry so that waste is simply recycled inside our bodies, requiring no food at all, and just an energy source plus some occasional supplements.
- Uploading brains. Food becomes an archaic concept.
There is likely to not be a very large span of time between each of these steps.
Replies from: Bayeslisk, TheOtherDave↑ comment by Bayeslisk · 2013-12-23T09:58:47.707Z · LW(p) · GW(p)
Absent mass mind uploading, I doubt that food in some relatively recognizable form will ever die out, or that we will ever find it economically feasible to eat food known to be made from human waste. Sunlight and feedstock are cheap, people get squicked easily, and stuff that's stuck around for a long time is likely to continue sticking around. You may as well say we'll outgrow a need for fire, language, or tools; indeed, I'd believe any of those over the total abandonment of food.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-23T14:39:42.821Z · LW(p) · GW(p)
Absent mass mind uploading, I doubt that food in some relatively recognizable form will ever die out, or that we will ever find it economically feasible to eat food known to be made from human waste.
Fecal implants do seem to have some health benefits.
There are people who do drink their own urine. Spirulina can be grown on urine.
Algae also have the advantage that they are single-cell organisms, which means that it's easy to introduce new genes into them via DIY-bio efforts.
That means you can easily change the way the stuff tastes and let it produce vitamins and other substances. If you want a cheap source of THC you can transplant the relevant genes needed to produce THC into an alga and grow it at home in a way that isn't as easily discovered as growing hemp.
You can trade different algae species and get more interesting compounds than THC.
Replies from: Bayeslisk↑ comment by Bayeslisk · 2013-12-24T00:11:07.098Z · LW(p) · GW(p)
What are fecal implents?
Few people do, and I doubt that it will catch on; spirulina can also be grown on runoff fertilizer, which will probably sound more appealing to most people.
Replies from: Lumifer, ChristianKl↑ comment by Lumifer · 2013-12-24T01:16:15.804Z · LW(p) · GW(p)
What are fecal implents?
I think the parent post means fecal transplants which are a way to reseed the gut biota with something hopefully more suitable.
Replies from: Bayeslisk↑ comment by Bayeslisk · 2013-12-24T07:50:57.434Z · LW(p) · GW(p)
Oh, makes sense. That's not food, though; that's a very easy organ(?) transplant.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-24T13:10:09.141Z · LW(p) · GW(p)
Oh, makes sense. That's not food, though; that's a very easy organ(?) transplant.
You don't transplant the organ but the feces. They get processed in the intestine. Stuff that enters the body to be processed in the intestine is food for some definition of "food".
But once you accept the goal to get feces into the gut, the way is only a detail that's open to change.
Replies from: Lumifer, Bayeslisk↑ comment by Bayeslisk · 2013-12-25T16:29:20.725Z · LW(p) · GW(p)
No, I know that the colon is not transplanted; the flora is. Hence the (?). Also, it hopefully doesn't get processed but rather survives to colonize the gut. Further, an enema would probably be far more effective, given its lack of strong acid and pepsin designed to kill the flora.
↑ comment by ChristianKl · 2013-12-24T02:52:41.555Z · LW(p) · GW(p)
What are fecal implents?
Sorry, typo. Should be fecal implants or stool transplants.
Few people do, and I doubt that it will catch on; spirulina can also be grown on runoff fertilizer, which will probably sound more appealing to most people.
Sounding appealing is a question of marketing. Plenty of people prefer organic food that's grown with the feces of animals over food grown with "chemical" fertilizer. They even pay more money for the product.
I also think you underrate the cost of fertilizer for some poor biohacker in Nairobi who has plenty of access to empty bottles. Human urine should also be pretty cheap to buy in third world megacities.
Access to cheap natural gas and oil is also central to the current way of doing agriculture. Without cheap access to those resources, resource reuse might be a bigger deal.
Replies from: Bayeslisk↑ comment by TheOtherDave · 2013-12-23T06:01:30.828Z · LW(p) · GW(p)
human social mores seem to change on the order of 20-40 years which is consistent with the amount of time it takes a new generation of people to take the helm and for the old generation to die out
If there's a causal link here, then it's possible the biggest problem with social change and technological advances would be due to increased longevity, in which case it might not matter how long the time span is... even if there were decades, it wouldn't be enough.
Replies from: passive_fist↑ comment by passive_fist · 2013-12-23T08:14:36.465Z · LW(p) · GW(p)
In some sci-fi settings they have rules where people above a certain 'age' can't directly enter politics anymore. Although I'm not sure exactly how effective that would be, since they would still hold power and influence, and human nature seems to be that we allow more power and influence to the elderly than to the young.
↑ comment by NancyLebovitz · 2013-12-23T08:30:50.725Z · LW(p) · GW(p)
Vinge said something of the sort-- that the Singularity would be unimaginable from its past, but after the Singularity (he's assuming one which includes humans), the path to the Singularity will be known, and it will seem quite plausible.
Replies from: Bayeslisk↑ comment by Bayeslisk · 2013-12-23T09:56:18.785Z · LW(p) · GW(p)
That's something a little different - I think that's already talked about here. Maybe under the Hindsight Bias? At any rate, I'm not talking about looking back; I'm talking about looking from within. The march of history is almost always too slow to see, and even with a significant speedup it'd still probably seem "normal". Only right at the end would it be clear that a Singularity is occurring.
comment by Michelle_Z · 2013-12-23T03:31:52.974Z · LW(p) · GW(p)
I want my family to be around in the far future, but they aren't interested. Is that selfish? I'm not sure what I should do, or if I should even do anything.
Replies from: Risto_Saarelma, shminux, ChristianKl↑ comment by Risto_Saarelma · 2013-12-24T11:46:55.313Z · LW(p) · GW(p)
I don't think the odds are good. Getting serious about cryonics will break a whole bunch of implicit assumptions about the order of life, and people who haven't signed out from the norms and conventions layer of mainstream society to the degree of your average outcast LessWronger are going to be keenly aware of the unspoken rules that are being broken.
Telling people that there's reason to think cryonics is a valid option and that you support it is good, but trying to get to the bottom of all disagreements beyond that seems like taking it on yourself to make a religious fundamentalist relative accept evolution. It's probably not going to happen, because the surface level argument is tied up to a head full of invisible machinery that won't respond to reasoning about technical feasibility.
↑ comment by Shmi (shminux) · 2013-12-23T20:17:29.856Z · LW(p) · GW(p)
I wonder how you framed it.
Do you think that any resuscitation technology, including defibrillation is a sin (or use their favorite objection against cryonics)?
How about one that enables resuscitation on a longer time frame? How long is still OK? Hours? Days? Years?
Would you take a treatment that makes one feel younger and live longer?
Would you approve of being cooled down for a day or two until a life-saving liver/heart/kidney transplant is available? What if it requires cooling deep enough that your heart stops beating?
Their replies, if any, might give you a hint of their true objections. If they are truly religious in nature, and your family attends church regularly, consider having a talk with your local pastor (or whatever religious authority figure they look up to). To paraphrase Ender's game and HPMoR, children's opinions have zero weight, so try to engage someone actually being listened to.
Replies from: Michelle_Z↑ comment by Michelle_Z · 2013-12-23T20:31:27.651Z · LW(p) · GW(p)
They weren't arguing that it wouldn't work. They think that being revived is selfish, that spending money on having your head frozen is selfish, and my mom says she wants to die. The old death=good cached thought seems to be one of the main driving factors. She also said there'd be no place for her in the future, that the world might be inconceivably different and strange, and that she would be unable to deal with it.
When I explained that some thousand people have done it, and a lot more are signed up, she said that was only "insane rich eccentrics" and when I explained that ordinary people do it, she said some nasty things about those people, along the lines of calling them nuts.
My main question was related towards figuring out if I should keep pursuing it, and try to change their minds, or if I should respect their wishes. I don't know what the right thing to do in this situation is- because saving lives is very important, but respecting others' rights is also pretty important. But the difficulty of this situation is compounded, because I'm angry with her and I don't want to give up because I'm angry.
Replies from: shminux, Lumifer, hyporational↑ comment by Shmi (shminux) · 2013-12-23T21:16:20.589Z · LW(p) · GW(p)
I don't know what the right thing to do in this situation is- because saving lives is very important, but respecting others' rights is also pretty important.
First, there is no objectively right thing to do. At this point you are expending effort on an essentially selfish goal: saving your mother's life against her current wishes. Not that "selfish" is in any sense bad or negative. But if you actually cared about saving lives in general, you would apply your effort where it is more likely to pay off. Your current position is no more defensible than hers: you selfishly want her to have a chance to live in some far future with you, she selfishly disregards your wishes and wants to expire when it's her time. Certainly telling her that her wishes are less valid than yours is not likely to convince her. You can certainly point out that by deciding to forgo cryo she behaves just as selfishly as you do by wanting her to sign for cryo. Maybe then you and her can discuss what "selfish" means to each of you, and maybe have some progress from there. Of course, you should be fully prepared to change your mind and do your best to steelman her arguments. Can you make them better than she does, have her agree and then discuss potential weaknesses in them?
Replies from: Michelle_Z↑ comment by Michelle_Z · 2013-12-23T21:26:13.620Z · LW(p) · GW(p)
But if you actually cared about saving lives in general, you would apply your effort where it is more likely to pay off.
I already am. This is in addition to that.
It is definitely a good idea to talk to her about what selfish means, because my mother and I have differing views on what is selfish and what is not.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-12-23T21:58:41.667Z · LW(p) · GW(p)
It is definitely a good idea to talk to her about what selfish means, because my mother and I have differing views on what is selfish and what is not.
I'm interested to know what comes out of these discussions and if you guys manage to converge. Keep us posted.
↑ comment by Lumifer · 2013-12-23T20:45:06.426Z · LW(p) · GW(p)
My main question was related towards figuring out if I should keep pursuing it, and try to change their minds, or if I should respect their wishes ... the difficulty of this situation is compounded, because I'm angry with her.
As the first step I would recommend to stop being angry with her.
Also keep in mind that for a true-believer Christian cryonics is basically trying to cheat oneself out of heaven -- not a very appealing idea :-/
↑ comment by hyporational · 2013-12-23T21:55:18.402Z · LW(p) · GW(p)
The old death=good cached thought seems to be one of the main driving factors.
Have you read this? Might give you some useful tools to speak against that idea.
I don't know what the right thing to do in this situation is- because saving lives is very important, but respecting others' rights is also pretty important.
Would you rather act on your own preferences, or some lesswrongian's?
I'm angry with her and I don't want to give up because I'm angry.
Anger is temporary, so not a great basis for long term decisions. Also, anger will affect your tone and therefore make you less convincing.
Replies from: Michelle_Z↑ comment by Michelle_Z · 2013-12-24T03:55:16.119Z · LW(p) · GW(p)
I've read it.
I feel my own judgement is suspect on this occasion. I don't know. I want to help her and she's alternating between being incredibly blase and being furious with me. It's not like I can just point her at some books to read, because she and my dad don't like to read. And the things that convinced me, my parents regard as rubbish or nonsense and get-your-head-out-of-space-go-get-married-and-be-normal-goddamnit!
If I continue to pursue this, either the relationship between my parents and me will suffer and they won't choose to freeze themselves, or they'll choose to freeze themselves and our relationship won't suffer. Large risk, large benefit.
My other consideration is to attempt to be subtle, plant the seeds in their heads that give them the sense that maybe the world doesn't work how they think it does (I managed to convince my dad that the earth was old and that dinosaurs did not roam the earth with humans this way, so it has some merit.)
Replies from: satt, hyporational, ChristianKl↑ comment by satt · 2013-12-24T12:58:28.696Z · LW(p) · GW(p)
(I managed to convince my dad that the earth was old and that dinosaurs did not roam the earth with humans this way, so it has some merit.)
This aside's quite important; it sounds like the inferential distance between you and your parents is huge. Trying to bridge it in one fell swoop is quite ambitious, so I'd err towards a slow & subtle approach. (Not that I have much experience with this problem!)
↑ comment by hyporational · 2013-12-24T07:24:40.484Z · LW(p) · GW(p)
I think subtlety usually works the best with stubborn individuals, but might easily backfire now that you've been in their face. If you were to use that strategy, I'd recommend you let the issue settle for a while so that they don't immediately see what you're trying to do. If they realize you're manipulating them, that might make them even less susceptible to your ideas. Planning is the key, unless it's an emergency.
↑ comment by ChristianKl · 2013-12-24T15:03:50.741Z · LW(p) · GW(p)
Don't try to push an idea in a way that costs you something.
When it comes to convincing others it helps to understand the other person. Nobody gets angry if you show genuine interest in how they think the world works. Listen a lot.
It might also help to reduce the number of things that make her furious with you. If those didn't exist, it might be easier to convince her on other questions.
↑ comment by ChristianKl · 2013-12-23T12:43:34.172Z · LW(p) · GW(p)
If someone really believes in going to heaven after they die, then being locked up in some state between being alive and dead is an issue.
Various religious people do believe that proper burials are important for letting the soul pass on.
comment by [deleted] · 2013-12-22T05:03:59.892Z · LW(p) · GW(p)
I hope MIRI is thinking about how to stop Johnny Depp.
http://trailers.apple.com/trailers/wb/transcendence/
Replies from: Kaj_Sotala, JoshuaZ↑ comment by Kaj_Sotala · 2013-12-22T13:39:43.929Z · LW(p) · GW(p)
(YouTube version for people who don't want to download QuickTime.)
Huh, a major Hollywood movie about superintelligence, uploading and the Singularity that seems like its creators might actually even have a mild clue of what they're talking about. Trailers can always be misleading, of course, but I'll have to say that this looks very promising - I expect to enjoy this one a lot.
↑ comment by JoshuaZ · 2013-12-22T07:02:29.420Z · LW(p) · GW(p)
I'm not sure how to react to that. While the trailer does get some points correct (an intelligence explosion is dangerous, and much smarter than you things can likely do stuff that you can't even imagine) it looks like it is essentially from the technological-progress-is-bad-because-hubris end of science fiction, akin to the rebooted Outer Limits. And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Replies from: ChristianKl, Kaj_Sotala↑ comment by ChristianKl · 2013-12-22T18:18:54.494Z · LW(p) · GW(p)
And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Given that the film probably doesn't end up with all of humanity being dead, it probably rather overstates than understates the safety of uploads.
↑ comment by Kaj_Sotala · 2013-12-22T14:00:20.849Z · LW(p) · GW(p)
it looks like it is essentially from the technological-progress-is-bad-because-hubris end of science fiction
I didn't get that vibe: it looked like the terrorists blowing up AI labs were being depicted as being bad (or at least not-good) guys, whereas some of the main characters seemed genuinely conflicted and torn about whether to try to upload their friend in an attempt to save him, and whether to even keep him running after he'd uploaded. If they had been going for the hubris angle, I would have expected a lot more of a gung-ho attitude towards building potential superintelligences.
And maybe I'm reading too much into it, but I get the feeling that this has a lot more of a shades-of-gray morality than is normal for Hollywood: e.g. it's not entirely clear whether the terrorists really are bad guys, nor whether the main character should have been uploaded, etc.
And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Well, there's only as much that you can pack into a two-hour movie while still keeping it broadly accessible. If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that's still a big win. A movie doesn't need to communicate every subtlety of a topic if it regardless gets people to read up on the topic on their own. (Supposedly science fiction has historically inspired a lot of people to pursue scientific careers, particularly related to e.g. space exploration, though I don't know how accurate this common-within-the-scifi-community belief is.)
Replies from: ChristianKl, Kawoomba↑ comment by ChristianKl · 2013-12-22T18:13:16.154Z · LW(p) · GW(p)
Well, there's only as much that you can pack into a two-hour movie while still keeping it broadly accessible.
And you can put even less in a two-and-a-half-minute trailer.
↑ comment by Kawoomba · 2013-12-22T17:39:59.392Z · LW(p) · GW(p)
If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that's still a big win.
If that were it (couple major concepts semi-accurately, the rest entertainment/drama), I'd agree. However, "imagine a machine with a full range of human emotion" (quote from the trailer) and the invariable AI-stopped-using-stupid-gimmicks ending (there's gonna be a happy ending) is more likely to create yet another Terminator-style distortion/caricature to fight. The false concepts that get planted along with the semi-accurate ones can do large net harm by muddling the issue using powerful visual saliency cheats (how can 'boring forum posts' measure up against flashy Hollywood movies).
"Oh, you're into AI safety? Yea, just like Terminator! Oh, not like that? Like Transcendence, then?" anticipatory facepalm
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-12-22T18:19:12.820Z · LW(p) · GW(p)
I expect that any people whose concepts get hopelessly distorted by this movie would be a lost cause anyway. Reasoning correctly about AI risk already requires the ability to accept a number of concepts that initially seem counterintuitive: if you can't manage "this doesn't work the way it does in movies", you probably wouldn't have managed "an AI doesn't work the way all of my experience about minds says a mind should work" either.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-22T18:24:20.738Z · LW(p) · GW(p)
"hopelessly" probably not. But that doesn't mean that the distortion is insignificant.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-12-22T18:31:35.232Z · LW(p) · GW(p)
Granted. Still, the general public is never going to have an accurate understanding of any complex concept, be that concept evolution, climate change, or the Singularity. The understanding of non-specialists in any domain is always going to be more or less distorted. The best we can hope for is that the popularizations that make the biggest splash are even semi-accurate so that the popular understanding won't be too badly distorted: and considering everything that Hollywood could have done with this movie, this looks pretty promising.
comment by JoshuaZ · 2013-12-18T04:39:12.709Z · LW(p) · GW(p)
There's been some prior discussion here about the problem of uncertainty about mathematical statements. Since most standard priors (e.g. Solomonoff) assume that one can do an unbounded amount of arithmetic, issues of assigning confidence to, say, 53 being prime are difficult, as are issues connected to open mathematical problems (e.g. how should one estimate the probability that the Riemann hypothesis is true in ZFC?). The problem of bounded rationality here seems serious.
I've run across something that may be related, and at minimum seems hard to formalize. For a mathematical statement A, let F(A) be "A is provable in ZFC" (you could use some other axiomatic system, but this seems fine for now). Let G(A) be "A will be proven in ZFC by 2050". Then one can give examples of statements A and B where it seems like P(F(A)) is larger than P(F(B)) but the reverse holds for P(G(A)) and P(G(B)).
The example that originally came to mind is technical: let A be the statement "ZPP is contained in P^X where X is an oracle for graph isomorphism" and let B be the statement "ZPP is contained in P^Y where Y is an oracle that answers whether Ackermann(n)+1 has an even or odd number of distinct prime factors." The intuition here is that one expects Ackermann(n)+1 to be essentially random in the parity of its number of distinct prime factors, and a strong source of pseudorandom bits forces collapse of ZPP. However, actually proving that Ackermann(n)+1 acts this way looks completely intractable. In contrast, there's no strong prior reason to think graph isomorphism has anything to do with making ZPP type problems easier (aside from some very minor aspects) but there's a lot of machinery out there that involves graph isomorphism and people thinking about it.
So, is this sort of thing meaningful? And are there other more straightforward, less complicated or less technical examples? I do have an analog involving not math but space exploration. P(Life on Mars) might be lower than P(Life on Europa) even though P(We discover life on Mars in the next 20 years) might be higher than P(We discover life on Europa in the next 20 years) simply because we send so many more probes to Mars. Is this a helpful analog or is it completely different?
Replies from: None, DanielLC, Anatoly_Vorobey↑ comment by [deleted] · 2013-12-19T07:35:51.985Z · LW(p) · GW(p)
How about the statements:
A: "The number of prime factors of 4678946132165798721321 is divisible by 3"
B: "The number of prime factors of 9876216987326578968732678968432126877 8498415465468 5432159878453213659873 1987654164163415874987 3674145748126589681321826878 79216876516857651 64549687962165468765632 132185913574684613213557 is divisible by 2"
P(F(A)) is about 1/3 and P(F(B)) is about 1/2.
But it's far more likely that someone will bother to prove A, just because the number is much smaller.
ETA: To clarify, I don't expect it to be particularly hard to prove or disprove, I just don't think anyone will bother.
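For anyone who wants to poke at the 1/3 and 1/2 figures empirically, here is a minimal sketch (assuming sympy is installed; the sample range, sample size, and the choice to count prime factors with multiplicity are my own assumptions, not Khoth's):

```python
# Rough empirical check of the heuristic behind the "about 1/3" and "about 1/2"
# figures: for a random integer, the number of prime factors (with multiplicity)
# is roughly equidistributed modulo small k. Smaller numbers are sampled here,
# since factoring 100+ digit integers is not feasible this way.
import random
from collections import Counter
from sympy import factorint

random.seed(0)
mod3 = Counter()
mod2 = Counter()
for _ in range(2000):
    n = random.randint(10**9, 10**10)
    omega = sum(factorint(n).values())  # prime factors counted with multiplicity
    mod3[omega % 3] += 1
    mod2[omega % 2] += 1

print("Omega(n) mod 3:", sorted(mod3.items()))  # roughly a third in each class
print("Omega(n) mod 2:", sorted(mod2.items()))  # roughly half in each class
```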
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-19T16:32:38.659Z · LW(p) · GW(p)
Whether someone will bother really depends on why someone wants to know. You can simply type "primefactors of 9876216987326578968732678968432126877" into Wolfram Alpha and get your answer. It's not harder than typing "primefactors of 4678946132165798721321" into Wolfram Alpha.
Replies from: Oscar_Cunningham, None, RolfAndreassen↑ comment by Oscar_Cunningham · 2013-12-19T18:31:52.122Z · LW(p) · GW(p)
I don't know if this was due to an edit, but the second number in Khoth's post is far larger than 9876216987326578968732678968432126877, and indeed Alpha won't factor it.
To be honest I'm sort of surprised that Alpha is happy to factor 4678946132165798721321, I'd have thought that that was already too large.
↑ comment by RolfAndreassen · 2013-12-19T16:45:06.040Z · LW(p) · GW(p)
Technically it is harder, since there are more digits; apart from the additional work involved this also makes more opportunities for mistakes. In addition, of course, the computer at the other end is going to have to do more work.
↑ comment by DanielLC · 2013-12-18T06:10:16.176Z · LW(p) · GW(p)
If there's some new hypothesis it's likely to be proven or disproven quickly. If you look at an old one, like the Riemann hypothesis, that people have tried and failed to prove or disprove, it probably won't be proven or disproven any time soon. Thus, it's not hard to find something more likely to be proven quickly than the Riemann hypothesis, but is still less likely to be true.
↑ comment by Anatoly_Vorobey · 2013-12-18T21:42:49.926Z · LW(p) · GW(p)
Let A = "pi is normal", and B = "pi includes in it as a contiguous block the first 2^128 digits of e". B is more likely to be provable in ZFC, simply because A requires B but not vice versa. A is vastly more likely to be proven by 2050. Is this a valid example, or do you see it as cheating in some way?
I'm not sure if this question is meaningful/interesting. It may be, but I'm not seeing it.
Replies from: JoshuaZ, Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-12-18T22:25:53.661Z · LW(p) · GW(p)
Doesn't the fact that A implies B mean that it's very easy to prove B once you've proved A?
Replies from: Anatoly_Vorobey↑ comment by Anatoly_Vorobey · 2013-12-18T22:46:17.878Z · LW(p) · GW(p)
You're right, I blundered and this example is no good.
comment by [deleted] · 2013-12-18T02:25:49.236Z · LW(p) · GW(p)
Eliezer said in his Intelligence Explosion Microeconomics that Google is perhaps the most likely candidate to start the FOOM scenario.
I've gotten the impression that Google doesn't really take this Friendliness business seriously. But beyond that, what is Google's stance towards it? On the scale of "what useless daydreaming", "an interesting idea but we're not willing to do anything about it", "we may allocate some minor resources to it at some point in the future", or something else?
Replies from: Manfred, ChristianKl↑ comment by Manfred · 2013-12-18T07:08:28.369Z · LW(p) · GW(p)
http://lesswrong.com/lw/4rx/agi_and_friendly_ai_in_the_dominant_ai_textbook/
This book's second author is Peter Norvig, director of research at Google.
Replies from: None↑ comment by ChristianKl · 2013-12-19T14:40:26.445Z · LW(p) · GW(p)
I've gotten the impression that Google doesn't really take this Friendliness business seriously. But beyond that, what is Google's stance towards it?
It's difficult to know from the outside how Google spends its money on undisclosed projects.
comment by Locaha · 2013-12-19T21:21:02.279Z · LW(p) · GW(p)
Are there solid examples of people getting utility from Lesswrong? As opposed to utility they could get from other self-help resources?
Replies from: Ben_LandauTaylor, Michelle_Z, hyporational, niceguyanon, Nornagest, brazil84↑ comment by Ben_LandauTaylor · 2013-12-20T08:21:39.534Z · LW(p) · GW(p)
Are there solid examples of people getting utility from Lesswrong?
The Less Wrong community is responsible for me learning how to relate openly to my own emotions, meeting dozens of amazing friends, building a career that's more fun and fulfilling than I had ever imagined, and learning how to overcome my chronic bouts of depression in a matter of days instead of years.
As opposed to utility they could get from other self-help resources?
Who knows? I'm an experiment with a sample size of one, and there's no control group. In the actual world, other things didn't actually work for me, and this did. But people who aren't me sometimes get similar things from other sources. It's possible that without Less Wrong, I might still have run across the right resources and the right community at the right moment, and something else could have been equally good. Or maybe not, and I'd still be purposeless and alone, not noticing my ennui and confusion because I'd forgotten what it was like to feel anything else.
↑ comment by Michelle_Z · 2013-12-23T03:40:03.018Z · LW(p) · GW(p)
I did self-help before I joined LessWrong, and had almost no results. I'd partially credit LessWrong with changing me in ways such that I switched my major from graphic design to biology, in an effort to help people through research. I've also gotten involved in effective altruism in my community, starting the local THINK club for my college, which is donating money to various (effective) charities. I have a lovely group of friends from the LessWrong study hall who have been tremendously supportive and fun to be around. There are a number of other small things, like learning about melatonin, which fixed my insomnia... etc., but those are more of a result of being around people who are knowledgeable of such things, not necessarily LessWrong people.
In short, yes, it is helpful.
↑ comment by hyporational · 2013-12-20T07:02:35.243Z · LW(p) · GW(p)
What would solid examples look like? Are there solid examples of people getting utility from other self-help sources? Can you think of any?
Less Wrong isn't just a self-help resource. I enjoy the conversational norms and topics here, and that's utility for me, but can you measure it?
Replies from: RomeoStevens↑ comment by RomeoStevens · 2013-12-20T11:19:52.286Z · LW(p) · GW(p)
I can make you cash offers to abandon it until you take one. This is leaky but workable.
Replies from: hyporational↑ comment by hyporational · 2013-12-20T13:16:05.464Z · LW(p) · GW(p)
True. It's surprisingly difficult to think about the hypothetical figures since I'm not short on cash, can't seem to make myself much happier spending more money, and still don't know any viable alternative to LW. It also seems thinking about this in terms of a subscription fee instead of getting a cash offer changes the figures significantly, which I guess tells us something about the diminishing marginal utility of money.
This makes me wonder if there are any threads here discussing how to convert money into experiential happiness. ETA: yes there are.
Replies from: Lumifer↑ comment by Lumifer · 2013-12-20T16:26:53.221Z · LW(p) · GW(p)
how to convert money into experiential happiness. ETA: yes there are.
I am wary of this type of advice because it almost always aims itself at an average person. Someone who is not average might not find such advice useful, and it could turn out to be misleading and harmful.
Also a large part of it comes from psychology papers which are, um, not an unalloyed source of truth.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2013-12-20T18:48:33.553Z · LW(p) · GW(p)
yes, but in the absence of significant countervailing evidence one should not assume that they are so different as to render the advice useless.
Replies from: Lumifer↑ comment by Lumifer · 2013-12-20T19:34:00.690Z · LW(p) · GW(p)
Well, that depends on the person, doesn't it? Some are sufficiently different and some are not.
Generic advice is generic. Only you can prevent wildfires.. err.. decide whether it is appropriate specifically for you or not. My point is really that you shouldn't treat it as "scientifically established" gospel and get unhappy if you are weird enough for it not to apply.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2013-12-22T19:06:35.022Z · LW(p) · GW(p)
Some are sufficiently different and some are not.
Guessing here is a bad idea though, because it is specifically in relation to an area where people are known to be bad at predicting their own responses.
decide whether it is appropriate specifically for you or not.
with a big dose of empiricism.
↑ comment by niceguyanon · 2013-12-20T07:31:04.314Z · LW(p) · GW(p)
Solid as in empirical, no. But I feel like I get a lot out of LW. It's a good source for finding other resources. What do you want help in, if any?
Replies from: Locaha↑ comment by Locaha · 2013-12-20T21:20:26.997Z · LW(p) · GW(p)
What do you want help in, if any?
Understanding if reading lesswrong is more or less a waste of time than other internet stuff I read.
Replies from: ChristianKl, army1987↑ comment by ChristianKl · 2013-12-21T16:27:41.110Z · LW(p) · GW(p)
Understanding if reading lesswrong is more or less a waste of time than other internet stuff I read.
I think that depends a lot on how you interact with it. You can read a post on commitment contracts and adopt the technique, or you can read the post and just accept the new information. The impact on your life will be very different.
↑ comment by A1987dM (army1987) · 2013-12-21T06:41:16.243Z · LW(p) · GW(p)
It prob'ly depends on what the other Internet stuff you read is.
↑ comment by brazil84 · 2013-12-28T11:48:35.276Z · LW(p) · GW(p)
I used TDT to get in the habit of flossing my teeth every night -- it worked beautifully.
I'm not sure if TDT is available elsewhere as I gave up on self-help books many years ago.
Also I'm not sure of the health benefits of flossing, but still.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-12-28T12:50:35.943Z · LW(p) · GW(p)
I'm not sure if TDT is available elsewhere as I gave up on self-help books many years ago.
I don't know about self-help books, but the moral advice to choose as if you are choosing more than the immediate consequences is found in moral philosophy.
"I want to be the kind of agent that chooses X (habitually), therefore I will choose X (now)" reasoning can be found in virtue ethics, although the argument there is based on habit and character development rather than being an algorithm. Aristotle discusses the importance of practicing good decisions in the Nichomachean Ethics: "Similarly we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts." (source)
"I want to live in a world where people choose X, therefore I will choose X" is a line of reasoning I've heard connected to the Jewish moral idea of tikkun olam, though I don't have a source on that.
Replies from: brazil84
comment by [deleted] · 2013-12-18T18:19:10.971Z · LW(p) · GW(p)
Where do I find local Bitcoin discussion here?
Replies from: Emile, Pentashagon, army1987↑ comment by Pentashagon · 2013-12-20T05:48:37.076Z · LW(p) · GW(p)
What did you want to discuss?
↑ comment by A1987dM (army1987) · 2013-12-19T07:40:31.486Z · LW(p) · GW(p)
http://www.google.com/search?q=bitcoin+site:lesswrong.com
(SCNR.)
comment by Ben Pace (Benito) · 2013-12-17T22:25:43.083Z · LW(p) · GW(p)
A while back, I posted in an open thread about my new organisation of LW core posts into an introductory list. One of the commenters mentioned the usefulness of having videos at the start and suggested linking to them somehow from the welcome page.
Can I ask who runs the welcome page, and whether we can discuss here whether this is a good idea, and how perhaps to implement it?
comment by dhoe · 2013-12-18T20:22:18.738Z · LW(p) · GW(p)
What's so great about rationality anyway? I care a lot about life and would find it a pity if it went extinct, but I don't care so much about rationality, and specifically I don't really see why having the human-style half-assed implementation of it around is considered a good idea.
Replies from: CAE_Jones, Lumifer, mwengler, Viliam_Bur, passive_fist↑ comment by CAE_Jones · 2013-12-18T22:48:50.797Z · LW(p) · GW(p)
"Rationality" as used around here indicates "succeeding more often". Or if you prefer, "Rationality is winning".
That's the idea. From the looks of it, most of us either suck at it, or only needed it for minor things in the first place, or are improving slowly enough that it's indistinguishable from "I used more flashcards this month". (Or maybe I just suck at it and fail to notice actually impressive improvements people have made; that's possible, too.)
[Edit: CFAR seems to have a better reputation for teaching instrumental rationality than LessWrong, which seems to make sense. Too bad it's a geographically bound organization with a price tag.]
Replies from: Viliam_Bur, somervta↑ comment by Viliam_Bur · 2013-12-19T11:52:22.524Z · LW(p) · GW(p)
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation. Or at least to measure winning, so we could say whether CFAR lessons contribute to winning.
Sometimes income is used as a proxy for winning. It has some problems. For our purposes I would guess a big problem is that changes in income within a year or two (CFAR has only been providing workshops for about that long) are mostly noise. (Also, for employees this metric could be more easily optimized by preparing them for job interviews, helping them optimize their CVs, and pressuring them into doing as many interviews as possible.)
Replies from: passive_fist, Jayson_Virissimo, Lumifer↑ comment by passive_fist · 2013-12-19T21:50:40.595Z · LW(p) · GW(p)
The biggest issue with using income as a metric for 'winning' is that some people - in fact, most people - do not really have income as their sole goal, or even as their most important one. For most people, things like having social standing, respect, and importance, are far more important.
Replies from: None↑ comment by [deleted] · 2013-12-19T23:12:57.682Z · LW(p) · GW(p)
That, and income being massively externally controlled for the majority of people. The world, contrary to reports, is not a meritocracy.
Replies from: Lumifer, passive_fist↑ comment by Lumifer · 2013-12-20T01:12:28.763Z · LW(p) · GW(p)
income being massively externally controlled for the majority of people
Huh?
If you mean that people don't necessarily get the income they want, well, duh...
The world, contrary to reports, is not a meritocracy.
No, it isn't, but I don't see the relevance to the previous point.
Replies from: ygert↑ comment by ygert · 2013-12-20T09:48:07.134Z · LW(p) · GW(p)
I think the point was government handout programs. This is a massive external control on many people's incomes, and it is part of how the world is not a meritocracy.
(Please note, I ADBOC with CellBioGuy, so don't take my description as anything more than a summary of what I think he is trying to say.)
Replies from: NancyLebovitz, Lumifer↑ comment by NancyLebovitz · 2013-12-20T15:17:05.078Z · LW(p) · GW(p)
He might also be saying that most people don't have an obvious path for marginal increases to their income.
Replies from: None↑ comment by [deleted] · 2013-12-22T16:56:19.354Z · LW(p) · GW(p)
This is closer to what I was getting at. Above someone mentioned government assistance programs, which is also true to a point but not really what I meant (another 'disagree connotatively').
I was mostly going for the fact that circumstances of birth (family and status not genetics), location, and locked-in life history have far more to do with income than a lot of other factors. And those who make it REALLY big are almost without exception extremely lucky rather than extremely good.
↑ comment by Lumifer · 2013-12-20T16:03:53.507Z · LW(p) · GW(p)
I ADBDC with CellBioGuy
You what with CellBioGuy..?
Replies from: arundelo↑ comment by arundelo · 2013-12-20T16:07:19.995Z · LW(p) · GW(p)
Should be "ADBOC" -- "agree denotationally, but object connotatively". (ygert is probably thinking of "disagree" instead of "object".)
Replies from: Lumifer, ygert↑ comment by Lumifer · 2013-12-20T16:30:26.500Z · LW(p) · GW(p)
Ah, thanks. I usually think of such things as "technically correct but misleading" -- that's more or less the same thing, right?
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-12-20T19:51:25.605Z · LW(p) · GW(p)
Yes.
↑ comment by ygert · 2013-12-21T15:38:00.063Z · LW(p) · GW(p)
Yes, my mistake. I was in a rush, and didn't have time to double check what the acronym was. Edited now.
Replies from: arundelo↑ comment by arundelo · 2013-12-21T17:32:28.872Z · LW(p) · GW(p)
I think I could make an argument that "object" has a semantic advantage over "disagree" but one advantage is that "adboc" can be pronounced as a two-syllable word.
↑ comment by passive_fist · 2013-12-20T00:20:14.000Z · LW(p) · GW(p)
Yes, this is true. You cannot meaningfully compare incomes between people that, say, live in developed vs. developing countries.
↑ comment by Jayson_Virissimo · 2013-12-19T18:05:44.600Z · LW(p) · GW(p)
The value of income varies pretty widely across time and place (let alone between different people), so using it as a metric for "winning" is highly problematic. For instance, I was mostly insensitive to my income before getting married (and especially having my first child) beyond being able to afford rent, internet, food, and a few other things. The problem is, I don't know of any other single number that works better.
↑ comment by Lumifer · 2013-12-19T18:16:44.010Z · LW(p) · GW(p)
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation.
Since in the local vernacular rationality is winning, you need no measures: the correlation is 1 by definition :-/
Sometimes income is used as a proxy for winning.
It's a very bad proxy as "winning" is, more or less, "achieving things you care about" and income is a rather poor measure of that. For the LW crowd, anyway.
Replies from: Emile↑ comment by Emile · 2013-12-19T20:16:28.386Z · LW(p) · GW(p)
talk of "rationality as winning" is about instrumental rationality; when Viliam talks about the correlation between rationality and winning, it's not clear whether it's instrumental rationality (taking the best decisions towards your goals) or epistemic rationality (having true beliefs), but the second one is more likely.
But even if it's about instrumental rationality, I wouldn't say that the correlation is 1 by definition: I'd say winning is a combination of luck, resources/power, and instrumental rationality.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-12-20T10:17:25.773Z · LW(p) · GW(p)
winning is a combination of luck, resources/power, and instrumental rationality
Exactly. And the question is how much can we increase this result using the CFAR's rationality improving techniques. Would better rationality on average increase your winning by 1%, 10%, 100%, or 1000%? The values 1% and 10% would probably be lost in the noise of luck.
Also, what is the distribution curve for the gains of rationality among the population? An average gain of 100% could mean that everyone gains 100%, in which case you would have a lot of "proofs that rationality works", but it could also mean that 1 person in 10 gains 1000% and 9 of 10 gain nothing; in which case you would have a lot of "proofs that rationality doesn't work" and a few exceptions that could be explained away (e.g. by saying that they were so talented that they would get the same results also without CFAR).
It would be also interesting to know the curve for increases in winning by increases in rationality. Maybe rationality gives compound interest; becoming +1 rational can give you 10% more winning, but becoming +2 and +3 rational gives you 30% and 100% more winning, because your rationality techniques combine, and because by removing the non-rational parts of your life you gain additional resources. Or maybe it is actually the other way round; becoming +1 rational gives you 100% more winning, and becoming +2 and +3 rational only gives you additional 10% and 1% more winning, because you have already picked all the low-hanging fruit.
The shape of this curve, if known, could be important for CFAR's strategy. If rationality follows the compound interest model, then CFAR should pick some of their brightest students and fully focus on optimizing them. On the other hand, if the low-hanging fruit is more likely, CFAR should focus on some easy-to-replicate elementary lessons and try to get as many volunteers as possible to teach them to everyone in sight.
By the way, for the efficient altruist subset of LW crowd, income (its part donated to effective charity) is a good proxy for winning.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-12-20T15:15:40.301Z · LW(p) · GW(p)
Also, rationality might mostly work by making disaster less common-- it's not so much that the victories are bigger as that fewer of them are lost.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-12-21T12:09:58.493Z · LW(p) · GW(p)
That is a possible and likely model, but it seems to me that we should not stop the analysis here.
Let's assume that rationality works mostly by preventing failures. As a simple mathematical model, we have a biased coin that generates values "success" and "failure". For a typical smart but not rational person, the coin generates 90% "success" and 10% "failure". For an x-rationalist, the coin generates 99% "success" and 1% "failure". If your experiment consists of doing one coin flip and calculating the winners, most winners will not be x-rationalists, simply because of the base rates.
But are these coin flips always taken in isolation, or is it possible to create more complex games? For example, if the goal is to flip the coin 10 times and have 10 "successes", then the players have total chances of 35% vs 90%. That seems like a greater difference, although the base rates would still dwarf this.
My point is, if your magical power is merely preventing some unlikely failures, you should have a visible advantage in situations which are complex in a way that makes hundreds of such failures possible. A person without the magical power would be pretty likely to fail at some point, even if each individual failure would be unlikely.
I just don't know what (if anything) in the real world corresponds to this. Maybe the problem is that preventing hundreds of different unlikely failures would simply take too much time for a single person.
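For anyone who wants to play with the numbers, here is a minimal sketch of the arithmetic in the model above (the 90%/99% per-flip rates and the 10-flip game are from the comment; the 100-flip case is my own extension of the same toy model):

```python
def p_all_success(p_single, n_flips):
    """Probability of getting "success" on every one of n_flips independent flips."""
    return p_single ** n_flips

for p_single in (0.90, 0.99):
    for n_flips in (1, 10, 100):
        print(f"p={p_single}, n={n_flips}: {p_all_success(p_single, n_flips):.3g}")

# For n=10 this reproduces the ~35% vs ~90% figures above; for n=100 the gap
# widens to ~0.003% vs ~37%, which is the "complex game" point taken to an extreme.
```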
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-12-21T16:49:23.772Z · LW(p) · GW(p)
I suspect rationality does a lot to prevent likely failures as well as unlikely failures.
↑ comment by mwengler · 2013-12-19T23:32:50.235Z · LW(p) · GW(p)
Rationality is the process of humans getting provably better at predicting the future. Evidence based medicine is rational. "traditional" and "spiritual" medicine are not rational when their practitioners and customers don't really care whether their impression that they work stands up to any kind of statistical analysis. Physics is rational, its hypotheses are all tested and open to retesting against experiment, against reality.
When it comes to "winning," it needs to be pointed out that rationality when consciously practiced allows humans to meet their consciously perceived and explicitly stated goals more reliably. You need to be rational to notice that this is true, but it isn't a lot more of a leap than "I think therefore i am."
One could analyze things and conclude that rationality does not enhance humanity's prospects for surviving our own sun's supernova, or does not materially enhance your own chances of immortality, both of which I imagine strong cases could be made for. While being rational, I continue to pursue pleasure and happiness and satisfaction in ways that don't always make sense to other rationalists, and to the extent that I find satisfaction and pleasure and happiness, I don't much care that other rationalists do not think what I am doing makes sense. But ultimately, I look at the pieces of my life, and my decisions, through rational lenses whenever I am interested in understanding what is going on, which is not all the time.
Rationality is a great tool. It is something we can get better at, by understanding things like physics, chemistry, engineering, applied math, economics and so on, and by understanding human mind biases and ways to avoid them. It is something that sets humans apart from other life on the planet and something that sets many of us apart from many other humans on the planet, being a strength many of us have over those other humans we compete with for status and mates and so on. Rationality is generally great fun, like learning to drive fast or to fly a plane.
And if you use it right, you can get laid, and then have more data available for determining if that's what you REALLY want.
↑ comment by Viliam_Bur · 2013-12-18T21:17:49.205Z · LW(p) · GW(p)
I care a lot about life and would find it a pity if it went extinct
So far, humans are life's best bet for surviving the day our Sun goes supernova.
why having the human-style half-assed implementation of it around is considered a good idea
Because we don't have better one (yet?).
Replies from: shminux, Nornagest, dhoe↑ comment by Shmi (shminux) · 2013-12-19T02:22:25.322Z · LW(p) · GW(p)
So far, humans are the life's best bet for surviving the day our Sun goes supernova.
Not to detract from your point, but that's pretty unlikely. Unless it becomes a part of a tight binary star several billion years down the road, when it has turned into a white dwarf. Of course, by then Earth will have been destroyed during the Sun's red giant stage.
↑ comment by Nornagest · 2013-12-18T22:14:22.176Z · LW(p) · GW(p)
So far, humans are the life's best bet for surviving the day our Sun goes supernova.
This is a pedantic point in context, but our solar system almost certainly isn't going to develop into a supernova. There's quite a menagerie of described or proposed supernova types, but all result either from core collapse in a very massive star (more than eight or so solar masses) or from accretion of mass (usually from a giant companion) onto a white dwarf star.
A close orbit around a giant star will sterilize Earth almost as well, though, and that is developmentally likely. Though last I heard, Earth's thought to become uninhabitable well before the Sun develops into a giant stage, as it's growing slowly more luminous over time.
↑ comment by dhoe · 2013-12-19T08:31:09.925Z · LW(p) · GW(p)
Bringing life to the stars seems a worthy goal, but if we could achieve it by building an AI that wipes out humanity as step 0 (they're too resource intensive), shouldn't we do that? Say the AI awakes, figures out that the probability of intelligence given life is very high, but that the probability of life staying around given the destructive tendencies of human intelligence is not so good. Call it an ecofascist AI if you want. Wouldn't that be desirable iff the probabilities are as stated?
Replies from: MathiasZaman↑ comment by MathiasZaman · 2013-12-19T13:39:08.159Z · LW(p) · GW(p)
As a human, I find solutions that destroy all humans to be less than ideal. I'd prefer a solution that curbs our "destructive tendencies", instead.
Replies from: dhoe↑ comment by dhoe · 2013-12-19T14:07:56.421Z · LW(p) · GW(p)
But is there a rational argument for that? Because on a gut level, I just don't like humans all that much.
Replies from: Oscar_Cunningham, RolfAndreassen↑ comment by Oscar_Cunningham · 2013-12-19T18:34:16.356Z · LW(p) · GW(p)
I think you're wrong about your own preferences. In particular, can you think of any specific humans that you like? Surely the value of humanity is at least the value of those people.
↑ comment by RolfAndreassen · 2013-12-19T16:43:50.591Z · LW(p) · GW(p)
Then there may, indeed, be no rational argument (or any argument) that will convince you; a fundamental disagreement on values is not a question of rationality. If the disagreement is sufficiently large - the canonical example around here being the paperclip maximiser - then it may be impossible to settle it outside of force. Now, as you are not claiming to be a clippy - what happened to Clippy, anyway? - you are presumably human at least genetically, so you'll forgive me if I suspect a certain amount of signalling in your misanthropic statements. So your real disagreement with LW thoughts may not be so large as to require force. How about if we just set aside a planet for you, and the rest of us spread out into the universe, promising not to bother you in the future?
↑ comment by passive_fist · 2013-12-19T21:45:26.912Z · LW(p) · GW(p)
CAE_Jones answered the first part of your question. As for the second part, the human-style half-assed implementation of it is the best we can do in many circumstances, because bringing to bear the full machinery of mathematical logic would be prohibitively difficult for many things. However, just because it's hard to talk about things in fully logical terms, doesn't mean we should just throw up our hands and just pick random viewpoints. We can take steps to improve our reasoning, even with our mushy illogical biological brains.
comment by Ben Pace (Benito) · 2013-12-17T22:20:42.230Z · LW(p) · GW(p)
Following up on my post in the last open thread, I'm reading Understanding Uncertainty which I think is excellent.
I would like to ask for help with one thing, however.
The book is in lay terms, and tries to be as non-technical as possible, so I've not been able to find an answer to my question online that hasn't assumed my having more knowledge than I do.
Can anyone give me a real-life example of a series of results where the assumption of exchangeability holds and it isn't a Bernoulli series?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-12-17T22:31:55.139Z · LW(p) · GW(p)
Let's say that we have a box of weighted coins. Some are more likely to fall heads; others tails. We pull one out and flip it many times. The flips are identical, so we can switch the order. They are independent conditional on knowing which coin was chosen, but ahead of time they are dependent, the one telling us about the choice of coin and thus about the other. De Finetti's theorem says that all exchangeable sequences take this form.
Added: Actually, de Finetti's theorem only applies to infinite sequences. Here's an example of a finite exchangeable sequence that doesn't fit the theorem: draw balls from a box without replacement. This can only go on until the box is empty. And of course you can combine the two: randomly choose a box with at least n balls and then pull out n balls without replacement.
Added: A crazy model that is exchangeable is Pólya's urn. It is not obvious that it is exchangeable, let alone that the conclusion of de Finetti's theorem applies. Pólya's urn contains balls of two colors, the initial numbers of which are known. Every time you draw one out, you put k of the same color back. If k=1, this is drawing with replacement; if k=0, this is drawing without replacement, both of which are exchangeable. And if k is a larger integer, it is also exchangeable.
Here is an idea of how to see the exchangeability. What if we are somehow confused about the size of the balls, and think that they are r times bigger than they really are? Then each time we remove an actual ball, we're removing 1/r of a confused ball. That's like removing 1 confused ball and putting back 1-1/r balls. Thus k=1-1/r is like drawing without replacement, but with this confusion. This is exchangeable. Thus the model is exchangeable for infinitely many values of k, which verifies some identities for infinitely many values of k, which is probably enough to verify it as an algebraic identity.
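Since the exchangeability of Pólya's urn is "not obvious", here is a minimal sketch (my own; the starting counts, sequence length, and k values are arbitrary choices) that checks it by brute force with exact rational arithmetic, using the parametrization above where k balls of the drawn color go back in:

```python
from fractions import Fraction
from itertools import product

def sequence_probability(seq, a, b, k):
    """Exact probability of drawing the color sequence seq (0s and 1s) from an
    urn starting with a balls of color 0 and b balls of color 1, where each
    drawn ball is replaced by k balls of its color."""
    p = Fraction(1)
    counts = [Fraction(a), Fraction(b)]
    for color in seq:
        p *= counts[color] / sum(counts)
        counts[color] += k - 1  # the drawn ball comes out, k balls go back in
    return p

a, b, n = 3, 2, 4
for k in (0, 1, 2, 5):
    by_composition = {}
    for seq in product((0, 1), repeat=n):
        by_composition.setdefault(tuple(sorted(seq)), set()).add(
            sequence_probability(seq, a, b, k))
    # Exchangeability: every ordering with the same draw counts has the same probability.
    assert all(len(probs) == 1 for probs in by_composition.values())
    print(f"k={k}: all length-{n} sequences are exchangeable")
```

As expected, k=0 (without replacement) and k=1 (with replacement) pass, and so do larger k.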
comment by [deleted] · 2013-12-23T10:45:42.481Z · LW(p) · GW(p)
A lot of things modern "conservatives" consider traditional are recent innovations barely a few decades or a century old. Chesterton's fence doesn't apply to them.
Replies from: bramflakes, Lumifer↑ comment by bramflakes · 2013-12-23T15:32:32.307Z · LW(p) · GW(p)
Examples?
Replies from: ChristianKl↑ comment by ChristianKl · 2013-12-24T14:55:22.927Z · LW(p) · GW(p)
I would guess that this comment came out of the discussion about homosexuality and male to male intimacy between friends further down in the thread.
Drug prohibition is also something that's roughly a century old, and Konkvistador wrote a post saying that he would, under some circumstances, be okay with getting rid of it.
↑ comment by Lumifer · 2013-12-23T18:28:50.534Z · LW(p) · GW(p)
So? For most people "traditional" means "what my grandparents used to do". Very very few people have a sense of history that extends far back.
Replies from: None↑ comment by [deleted] · 2013-12-24T07:24:05.167Z · LW(p) · GW(p)
So?
This was an observation of when the argument of Chesterton's fence applies.
Replies from: Lumifer↑ comment by Lumifer · 2013-12-24T16:08:41.421Z · LW(p) · GW(p)
Why doesn't Chesterton's fence apply to "recent innovations"? It applies to everything that you don't know how it came into being -- time frame doesn't matter much.
Replies from: Randy_M, None↑ comment by [deleted] · 2013-12-24T16:33:07.521Z · LW(p) · GW(p)
A stronger case for Chesterton's fence can be made for older innovations than for recent ones. I guess I should write an essay to explain the arguments for this; I forgot this wasn't widely talked about outside a certain IRC channel.
Replies from: Lumifer↑ comment by Lumifer · 2013-12-24T17:21:34.871Z · LW(p) · GW(p)
A stronger case for Chesterton's fence can be made for older over recent innovations
Hm. I would expect the reverse. The Chesterton's Fence argument is about knowing the purpose of something and being able to understand the consequences of changing it. With older traditions both are harder. Granted, there is the offsetting factor that over the course of years (or centuries) no one was bothered enough to change it -- an evolutionary argument, sort of -- but an appeal to the wisdom of ancestors is not the same thing as the Chesterton's Fence.
Replies from: None↑ comment by [deleted] · 2013-12-24T19:25:07.220Z · LW(p) · GW(p)
The Chesterton's Fence argument is about knowing the purpose of something and being able to understand the consequences of changing it. With older traditions both are harder.
This is turning the argument on its head.
The point isn't that knowing a purpose for something is a reason to keep the thing. If we know the reason for it and judge it good, of course we shall keep it. Banal. If we know a reason for a thing and judge it bad, then the argument isn't an encouragement to keep it either. No, Chesterton's Fence is the argument that us not knowing the reason behind something is a reason to keep it. Applying it to things for which we can easily learn why they are there is pretty much redundant as far as heuristics go.
Let me quote directly from his book The Thing (1929). In the chapter entitled "The Drift from Domesticity" he writes:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
Replies from: Douglas_Knight, Lumifer, fubarobfusco
↑ comment by Douglas_Knight · 2013-12-24T22:23:41.145Z · LW(p) · GW(p)
What you say here is reasonable, but it is completely unrelated to your comment that started this thread. If, as in your original comment, people are mistaken about the age of their traditions, they are ignorant of the origins, and thus Chesterton advice to learn the origin applies.
Replies from: None↑ comment by Lumifer · 2013-12-24T23:53:24.978Z · LW(p) · GW(p)
Chesterton's Fence is the argument that us not knowing the reason behind something is a reason to keep it.
Kinda. I actually read it as an argument for passivity unless you know what you're doing.
Not knowing the reason for something is a "reason to keep it" -- well, it's a reason to not do anything. If that something gets destroyed by, say, a force of nature, would Chesterton's Fence tell you to rebuild it? No, I don't think so.
Chesterton's Fence is primarily a warning against hubris, against pretending to contain all the reasons of the world in your head. It is, basically, an entreaty to consider unknown unknowns, especially if you have evidence of their workings in front of you.
Replies from: None↑ comment by [deleted] · 2013-12-25T08:48:37.471Z · LW(p) · GW(p)
Not knowing the reason for something is a "reason to keep it" -- well, it's a reason to not do anything. If that something gets destroyed by, say, a force of nature, would Chesterton's Fence tell you to rebuild it? No, I don' think so.
Force of nature is misleading in the context where it is likely to be applied. No social norms or institutions subsist without maintenance. But let me keep it and tweak it a bit: if you could easily prevent the force of nature from destroying the fence, would you say the argument encourages you to do so?
↑ comment by fubarobfusco · 2013-12-24T21:05:53.527Z · LW(p) · GW(p)
Chesterton's Fence is the argument that us not knowing the reason behind something is a reason to keep it.
Here's a Bayesian counterargument for cultural practices:
Culture is more likely to have retained the instruction "Do X!" but not retained knowledge of X's original purpose, if that purpose is not relevant any more.
If X's purpose is still relevant, then retaining and teaching about X's original purpose provides greater incentive for learning and teaching X, making X more likely to be retained. But if X's original purpose is not still relevant, then retaining knowledge of the original purpose is a disincentive to learn and teach X itself, making X less likely to be retained. So, given that X is still taught, learning that its original purpose is known is evidence that it is still relevant; whereas learning that it is not known is evidence that it is not still relevant.
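For what it's worth, here is a toy numerical version of that argument; every probability below is a made-up assumption of mine, chosen only to encode the stated asymmetry (knowing a still-relevant purpose encourages teaching X, knowing an obsolete one discourages it):

```python
p_relevant = 0.5                      # prior that X's original purpose is still relevant
p_know = {True: 0.7, False: 0.3}      # P(purpose-knowledge retained | relevant?)
p_taught = {                          # P(X still taught | (purpose known?, relevant?))
    (True, True): 0.9, (False, True): 0.6,
    (True, False): 0.2, (False, False): 0.5,
}

def posterior_relevant(purpose_known):
    """P(relevant | X still taught, purpose known or not), by enumeration."""
    num = den = 0.0
    for relevant in (True, False):
        prior = p_relevant if relevant else 1 - p_relevant
        pk = p_know[relevant] if purpose_known else 1 - p_know[relevant]
        joint = prior * pk * p_taught[(purpose_known, relevant)]
        den += joint
        if relevant:
            num += joint
    return num / den

print(posterior_relevant(True))   # ~0.91: purpose known -> probably still relevant
print(posterior_relevant(False))  # ~0.34: purpose unknown -> probably obsolete
```

The particular numbers don't matter; the direction of the update is the point.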
Replies from: None↑ comment by [deleted] · 2013-12-25T08:42:42.416Z · LW(p) · GW(p)
If X's purpose is still relevant, then retaining and teaching about X's original purpose provides greater incentive for learning and teaching X, making X more likely to be retained. But if X's original purpose is not still relevant, then retaining knowledge of the original purpose is a disincentive to learn and teach X itself, making X less likely to be retained. So, given that X is still taught, learning that its original purpose is known is evidence that it is still relevant; whereas learning that it is not known is evidence that it is not still relevant.
If you are using the model of memetic selection, then useful things X are unlikely to have the true explanations of why they are useful attached to them, but rather the most virulent ones. Sometimes they are the same, but obviously often they aren't. After all, Robin Hanson gets a lot of low-hanging fruit showing us how, for example, school isn't about learning, etc.
Sometimes the most persistent combination would be a behavior or practice without an explicit explanation at all.
comment by [deleted] · 2013-12-18T00:53:07.635Z · LW(p) · GW(p)
"The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics." - G. H. Hardy, A Mathematician's Apology (1941)
Just heard this quoted on The Infinite Monkey Cage.
Replies from: ygert↑ comment by ygert · 2013-12-19T09:59:05.855Z · LW(p) · GW(p)
Isn't the place for this the Rationality Quotes thread?
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-12-19T10:15:33.796Z · LW(p) · GW(p)
It's more to do with mathematics than rationality.
Replies from: ChristianKl, ygert↑ comment by ChristianKl · 2013-12-19T17:17:38.750Z · LW(p) · GW(p)
I think the quotes thread is pretty general and even mathematics quotes fit better at that place than here.
comment by protest_boy · 2013-12-28T04:07:16.086Z · LW(p) · GW(p)
Does anyone have any recommended "didactic fiction"? Here are a couple of examples:
1) Lauren Ipsum (http://www.amazon.com/Lauren-Ipsum-Carlos-Bueno/dp/1461178185)
2) HPMoR
comment by FiftyTwo · 2013-12-22T22:29:52.896Z · LW(p) · GW(p)
There's a thread in the rationalist fiction subreddit for brainstorming rationalist story ideas which might interest people here.
comment by niceguyanon · 2013-12-20T10:29:15.286Z · LW(p) · GW(p)
Is LW the largest and most established online forum for discussion of AI? If yes, then we should be aware that we might be underestimating how widespread LW's, or at least EY's, ideas about AI already are among the people that matter, like AI researchers.
I say this because I come across a lot of comments with the sentiment of lamenting that the world's AI researchers aren't more aware of Friendliness on the level that is discussed here. I might also just be projecting what I think is the sentiment here; in that case, just ignore this comment. Thoughts?
Edit spelling
Replies from: Dorikka