Open thread, 30 June 2014 - 6 July 2014
post by DanielDeRossi · 2014-06-30T10:58:22.110Z · LW · GW · Legacy · 247 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
247 comments
Comments sorted by top scores.
comment by Richard_Kennaway · 2014-06-30T13:57:52.444Z · LW(p) · GW(p)
I happened to see this paper, which may be of interest to those experimenting with Soylent. The title is "Long-term feeding on powdered food causes hyperglycemia and signs of systemic illness in mice".
They fed different batches of mice the same food, except that one was in the usual pellet form and one was powdered and needed no chewing. They also tested both short- and long-term feeding on powdered food. Their conclusion:
The hyperglycemia associated with long-term powdered-food feeding may lead to certain systemic illness signs, such as elevations of blood glucose, hypertension, and abnormal behaviors in mice. Mastication of food of adequate hardness may be very important for the maintenance of systemic (physical and mental) health, possibly via reduction in the levels of blood glucose and/or adrenal stress hormones (catecholamines and glucocorticoids).
Replies from: gwern, John_Maxwell_IV, Tenoke, Gunnar_Zarncke
↑ comment by gwern · 2014-06-30T15:49:42.050Z · LW(p) · GW(p)
Yvain also found a curious link a while ago http://slatestarcodex.com/2014/02/10/links-for-february-2014/ :
One of my interests is weird ways the face interacts with the brain, so I enjoyed this study: "Masticatory deficiency as a risk factor for cognitive dysfunction". People (and lab rats) without their teeth or with otherwise impaired chewing ability become demented much more quickly than controls, apparently because the mechanics of chewing help stimulate or oxygenate certain parts of the brain. No word yet as to whether you can become a super-genius by chewing everything all the time.
The abstract of the paper:
Several studies have demonstrated that chewing helps to maintain cognitive functions in brain regions including the hippocampus, a central nervous system (CNS) region vital for memory and learning. Epidemiological studies suggest that masticatory deficiency is associated with development of dementia, which is related to spatial memory deficits especially in older animals. The purpose of this paper is to review recent work on the effects of masticatory impairment on cognitive functions both in experimental animals and humans. We show that several mechanisms may be involved in the cognitive deficits associated with masticatory deficiency. The epidemiological data suggest a positive correlation between masticatory deficit and Alzheimer's disease. It may be concluded that chewing has important implications for the mechanisms underlying certain cognitive abilities.
Replies from: roystgnr
↑ comment by roystgnr · 2014-07-01T19:03:04.130Z · LW(p) · GW(p)
When I started tooth-grinding in my sleep in grad school, I assumed it was a stress reaction. But apparently my body was merely rationally trading enamel for a critical IQ boost?!
PSA: if your jaws become chronically sore, don't hesitate to get it checked out. I'm kidding about the IQ boost, but not about the lost enamel.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-07-01T05:43:29.151Z · LW(p) · GW(p)
MealSquares are made of solid food... we're currently running a semi-formal beta test. Sign up for our mailing list to get notified when we launch :)
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-07-02T08:13:59.038Z · LW(p) · GW(p)
Interesting, and it's good to have alternatives.
However, I am not sure how exactly to put together this information from the FAQ page:
if you eat 5 MealSquares (2000 calories) you will get 100% of your daily recommended value of all essential vitamins and minerals
and from the Nutrition page, where "% Daily Values Per Serving" differ from 20% -- they range from 15% to 160%.
Replies from: RomeoStevens
↑ comment by RomeoStevens · 2014-07-02T17:44:56.199Z · LW(p) · GW(p)
The RDA for carbs is crazy. As for the ones that go above 100%, they're all very far below the upper intake ranges.
↑ comment by Tenoke · 2014-06-30T17:38:31.872Z · LW(p) · GW(p)
Relevant thread on the Soylent forum
Replies from: gwern
↑ comment by gwern · 2014-06-30T18:51:07.379Z · LW(p) · GW(p)
The most relevant part is probably another study mqrius mentions, "The effect of the loss of molar teeth on spatial memory and acetylcholine release from the parietal cortex in aged rats", Kato et al 1997 (available through Libgen):
After the molar teeth of rats were extracted, the rats were fed with powdered food for 135 weeks. Although the performance in the radial arm maze was progressively acquired by daily training, an increase in the number of errors and a decrease in the initial correct responses were observed in the teethless aged rats compared to the control aged rats, indicating impaired acquisition of spatial memory in the teethless aged rats...the extracellular ACh level of the teethless aged rats under high-concentration of K+ and atropine sulfate stimulation was significantly low compared to that of the control aged rats. These results suggest that the impairment of spatial memory in the teethless aged rats may be due to the functional deterioration of the cholinergic neuronal system induced by tooth loss
It's not a long paper. Skimming, the major problems I see:
- the usual problems with animal studies: tiny sample size (9 in the control and 10 in the experimental, apparently), unclear randomization, no mentioned blinding of experimenters or raters
- they didn't show removing teeth caused lower performance; they showed removing teeth and feeding on a liquid diet caused lower performance. (On the plus side, they say they anesthetized both groups, so that removes a serious confound.)
The experimental group had its teeth removed & also was fed liquid, while the control group kept its teeth & also ate normal pellets. Hence, the decreased performance could've been caused (ignoring the issues of bias and sampling error) by either the removal of teeth, the liquid food, or some interaction thereof (perhaps liquid food aggravating tooth infection caused by the surgery?). They do say
Kawamura [6: "The effect of food consistency on conditioned avoidance response in mice and rats"] has reported the relationship between mastication and learning and memory in young rats. He has also reported that rats fed with a powdered diet had poor results of learning and memory compared to those fed with a solid diet.
but I haven't looked at it and in any case, given how much varies from lab to lab, this is a basic issue which needs to be verified in your own sample, rather than just hoping it holds universally. Also, if Kawamura finds that liquid food on its own damages learning & memory compared to a solid diet, how are you showing anything new by looking at liquid+surgery & finding damage...?
- Their data is purely a post-comparison. They say they did the surgery, and then apparently left the rats alone for 135 weeks before doing the radial arm maze test.
So there's no way to know what the decline looked like or when it happened. It's perfectly possible that the toothless rats suffered a single sudden shock to their system from the surgery and that permanently degraded their memory, or that they had ongoing chronic inflammation or infection.
Worse, the difference may have been there from the start; they never checked. Randomization with such small n can easily fail to balance groups; that's one reason for pre-tests: to verify that a difference in the groups on the post-test wasn't there from the start but can be attributed to the experimental condition.
- I'm not sure this can be described as a true 'randomized experiment'. They never actually say that the selection of rats was random or how the animals were picked for their group, and there's a weird pattern in the writing where they only ever write about the toothless rats being subjected to procedures even though logically you'd say stuff like 'all the rats were tested on X'; e.g.:
After the molar teeth of rats were extracted, the rats were fed with powdered food for 135 weeks...Animals (11 weeks old) were anesthetized with sodium pentobarbital (40 mg/kg i.p.) and all maxillary and mandibular molars were extracted. Animals given anesthesia alone, without undergoing extraction of the molar teeth, were used as control aged rats...One hundred and thirty-five weeks after the surgery, the ability of learning and memory in the aged rats without molar teeth (hereafter referred to as 'teethless') was examined by using the radial arm maze [9], and compared to the control aged rats...Nine weeks after the learning and memory study, the ability of releasing ACh in the parietal cortex of teethless aged rats was examined by using in vivo microdialysis methods [5]...In order to examine the functional changes in cholinergic neuronal system of the teethless aged rats, animals were stimulated by high-concentration of K+ at 100 mM or atropine sulfate at 3 μM for 15 min when the level of extracellular ACh stabilized.
Plus, Figure 1 reports 9/10 rats, but by Figure 2, we're down to 5/5 rats. Huh? This makes me wonder if they're reusing control rats from a previous experiment, or reusing their data, and only actually had experimental rats. (The use of "historical controls" is apparently not uncommon in animal research.)
This would massively compromise their results because rats change over time, litters of rats will correlate in traits like memory, and these effects are all large enough to produce many bogus results if you were to, say, take 10 rats from 1 litter as your control group and 9 rats from another litter as your experimental group. Just like with humans, one family of rats may have a very different average from another family. (See the very cool paper “Design, power, and interpretation of studies in the standard murine model of ALS”, Scott et al 2008, which helpfully notes on pg5 that when you have a mouse study with 10/10 mice similar to this study and the null is true, "an apparent effect [of >5% difference in survival] would be seen in 58% of studies". Which really makes you think about a small difference in # of errors in maze performance.)
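To make the litter worry concrete, here is a quick Monte Carlo sketch (all numbers invented for illustration): if two litters differ by one maze error on average and the "treatment" itself does nothing, assigning whole litters to groups still makes the experimental group look worse in the large majority of runs.

```ruby
# Invented numbers: litter A averages 5 maze errors, litter B averages 6,
# both with SD 2. The treatment does nothing in this model.
def gauss(mean, sd)
  # Box-Muller transform for one normal sample
  mean + sd * Math.sqrt(-2 * Math.log(1 - rand)) * Math.cos(2 * Math::PI * rand)
end

trials = 10_000
hits = trials.times.count do
  control      = Array.new(10) { gauss(5.0, 2.0) }  # 10 rats from litter A
  experimental = Array.new(9)  { gauss(6.0, 2.0) }  # 9 rats from litter B
  experimental.sum / 9 - control.sum / 10 > 0       # experimental looks worse
end
puts "apparent effect in #{100.0 * hits / trials}% of runs"
```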
- Their reward may have been a bit screwy in the memory task:
The apparatus was placed 40 cm above the floor. At the end of each arm there was a food cup that held a single 50-mg food pellet. Prior to the maze task, animals were kept on a restricted diet and the body weight was reduced to 80-85% of their normal weight over a 1-week period; water was freely available. Before the actual training began, the animals were allowed to explore the apparatus, for 10 min a day, for 2 days. For the following 16 trials, each animal was placed individually in the center of the maze and allowed to consume the bait in the food cup.
If this description is literally accurate, there's a problem. They don't mention the setup differing between groups! So this "food pellet" is the reward which gives the rats motivation to solve the maze... but you've removed the teeth from half the rats and can only feed them liquid. And you're surprised the toothless rats perform worse? I'm reminded of the reward confounds in much animal intelligence research.
- The authors mention excluding the other maze performance variable:
The teethless aged rats showed impairment performance during the acquisition of the radial arm maze task, as revealed by the increased number of errors (Fig. 1) and the decreased number of initial correct responses (data not shown).
One wonders if the # of initially correct responses would have reached p<0.05. Good old researcher degrees of freedom...
So overall, I would have to say this result seems to be extremely weak.
↑ comment by Gunnar_Zarncke · 2014-07-01T07:55:27.973Z · LW(p) · GW(p)
Missing masticatory stress is also discussed here:
https://groups.google.com/forum/#!topic/less-wrong-parents/EF3CE9JPQQU (actually an LW parents post)
The cited article is this:
comment by TylerJay · 2014-07-01T17:54:25.780Z · LW(p) · GW(p)
Some people treat LessWrong as just a philosophical exercise, but "Rationality" and its little brother "Critical Thinking" really can make you a rockstar in the corporate world if you so choose. I'm going to give a bit of background on some things that I've managed to accomplish in the last couple of years by thinking when no one else would, then I hope to get some feedback and suggestions for future optimizations. Feel free to skip to the "-----------" below if you want to skip my brag section, though I am writing it to help give an idea of the landscape.
At the SaaS startup I work at, I've worked in a few different departments. I started in Support and decided we needed training videos and better articles to reduce the load on Support reps, so I made them and set up a process for forwarding people to the appropriate video/article instead of answering questions directly. This saved Support Reps' time.
When I moved into Account Management and Implementation, every new client account needed a minimum of 5 hours of AM training time. I decided this was inefficient and recorded some more training videos, then set up an LMS so our clients could do self-paced training and designed an implementation process around it. I measured engagement after certain time periods and there was no difference compared to the live trainings, so we kept it. This has saved thousands of hours of AM time over two years. I noticed that another call we did with every client was the same questions and the same responses, so I wrote a supplementary Rails app "wizard" so that clients could go through that themselves, shaving another hour off of every implementation.
I've recently moved into the Sales department and I'm looking for ways to optimize this department as well, both with logistics and tools and proven sales strategies. The first thing I did was set up a way for SalesForce to generate our contracts automatically instead of Sales people having to fill them out each time, which will save our Sales team 15-30 minutes a day each. Low-hanging fruit.
-----------
Does anyone have any suggestions for things that I could look into to optimize our Sales department?
Every current "best practice" seems to be based on anecdotal evidence and I've already seen my company royally screw up A/B testing by peeking and retiring options early, so I don't trust that anything is based on an empirical foundation.
Some of the issues I've noticed are:
- Meetings are set in advance by a qualification team. Sometimes we have no-shows. I'm looking to reduce that. What resources are available about encouraging people to keep commitments? If I'm going to test things, like a call or email the day before, 2-3 days before, etc. as a reminder and collect data, how much data would I need for meaningful results? How should I randomize? Would I need to adjust for other factors? (ex: small prospects miss more meetings in general) See the rough power calculation after this list.
- "Demos" currently have a very basic structure: Get background and identify problems => Do a Demonstration => Quote pricing => Follow Up. Already, adding the question "What's it going to take to make this happen?" has been hugely effective in identifying the real obstacles and what to do next. I have considerable Sales experience, but in a non-tech industry, so I don't know what will transfer. If I decide to test whether doing a Need Satisfaction Selling Cycle or a simple Feature-Description-Benefit sales approach is better, how would I collect data?
- Are there any non dark-arts Sales techniques for Enterprise (B2B) Sales that are backed up by science? (I've read Influence, but I'm dealing with whole organizations here)
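On the sample-size question in the first bullet, a rough two-proportion power calculation is a reasonable starting point. A sketch in Ruby, with made-up rates (baseline 20% no-shows, reminders hoped to cut that to 12%):

```ruby
# Made-up inputs: two-sided alpha = 0.05, power = 0.80.
p1, p2 = 0.20, 0.12  # baseline vs hoped-for no-show rate
z_alpha = 1.96       # standard normal quantile for alpha/2 = 0.025
z_beta  = 0.84       # standard normal quantile for power = 0.80

n = ((z_alpha + z_beta)**2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2)**2
puts n.ceil  # => 326, i.e. ~326 meetings per arm, ~650 randomized in total
```

Randomizing per booked meeting (e.g. a coin flip at scheduling time) and stratifying by prospect size would address the small-prospect worry.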
Any other ideas to try or test would be great. Thanks!
Replies from: ChristianKl, Torello
↑ comment by ChristianKl · 2014-07-02T12:34:26.254Z · LW(p) · GW(p)
Read: How to Measure Anything: Finding the Value of Intangibles in Business by Douglas W. Hubbard
It answers a lot of your questions about data gathering in your business context.
Sometimes we have no-shows. I'm looking to reduce that. What resources are available about encouraging people to keep commitments?
Be sure that you focus on the right issue. Maybe the people don't show up to the meetings because they make a rational decision that attending the meeting isn't the best use of their time. In that case you don't do your organisation any good by forcing people to waste more time in meetings.
Are there any non dark-arts Sales techniques for Enterprise (B2B) Sales that are backed up by science? (I've read Influence, but I'm dealing with whole organizations here)
Sales, especially cold calling, is a very emotionally challenging activity. If you can do something that reduces the stress that your sales reps feel, they will work better. We like to interact with happy people and buy from them. How is the work environment set up? A lot of business environments completely ignore ergonomic aspects.
If you are looking for something that isn't dark-arts, that's the area where I would look. You might also want to read "The Charisma Myth" by Olivia Fox.
↑ comment by Torello · 2014-07-02T03:03:10.815Z · LW(p) · GW(p)
With regard to meeting attendance:
- make people present something
- hold a vote, and if they don't show they don't vote
- don't schedule regular meetings, which just get scheduled regularly because they are regularly scheduled. Only schedule meetings when you have a strong rationale for holding it 1) at that time, 2) with clearly defined goals/rationale
comment by Omid · 2014-07-01T17:55:48.118Z · LW(p) · GW(p)
The quantified risks of gay sex post is in the early stages of development. If you are a mod and think such a post would have negative value, pianoforte611 and I would appreciate hearing your concerns before we invest our time in it. If you are not a mod but want to make some pre-emptive suggestions, those are welcome too.
Replies from: falenas108
↑ comment by falenas108 · 2014-07-02T13:08:46.215Z · LW(p) · GW(p)
A few nuances that I would like to see in the paper:
* Not all gay men have anal sex; many choose not to in favor of other activities.
* Also, not having the assumption that only gay/bi men have anal sex.
* A distinction between transmission rates if people choose to use condoms vs not, because part of the reason the rate is higher is that condoms are much less common in the gay community.
* A disclaimer about how not all men have penises, and sex≠gender≠genitalia would be nice.
comment by James_Miller · 2014-06-30T20:37:29.560Z · LW(p) · GW(p)
Massachusetts Supreme Court says it can order you to decrypt your computer
Imagine a computer decryption program that creates a random number of nonsense files that look like encrypted files but for which no password will work. Now, if the government orders you to decrypt all of your files and you have a file you don't want to decrypt, the government won't be able to prove that you have the password to that file: since you are using the program, there will definitely exist files you can't decrypt.
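The core trick is that well-encrypted data is indistinguishable from random bytes, so a file of pure noise cannot be proven to have a password at all. A minimal sketch of such a decoy generator (file names, sizes, and counts are arbitrary):

```ruby
require 'securerandom'

# Write a random number of decoy "encrypted volumes" full of random
# bytes. No password exists for them, and none can be proven to exist.
rand(3..9).times do
  File.open("vault_#{SecureRandom.hex(4)}.bin", "wb") do |f|
    mb = rand(1..64)  # vary the sizes so the decoys don't stand out
    (mb * 1024).times { f.write(SecureRandom.random_bytes(1024)) }
  end
end
```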
Replies from: Pfft, ChristianKl
↑ comment by Pfft · 2014-06-30T21:30:33.089Z · LW(p) · GW(p)
This is basically the idea behind TrueCrypt hidden volumes and similar: there should be no way for the police to prove that there exists additional volumes which you have not decrypted for them.
But afaik, no case in the United States so far has involved an order to just "decrypt all your files". In all the cases I have heard about, they had something specific that they wanted the key for, and they had separate evidence that the defendant knew the key. In that case no technical solution can help you.
↑ comment by ChristianKl · 2014-07-01T09:44:04.097Z · LW(p) · GW(p)
Another way to deal with the issue would be to claim that you memorized the password via a mnemonic like a memory palace that's easily destructible. If you fill up a memory palace with a bunch of new items, the old memory that stores the password becomes inaccessible because of memory interference.
It's also the only way to protect encrypted files against torture. Have the memory in a form that's easily destroyed. Memory palaces provide that ability when you overwrite them.
Writing this myself might also be a good precommitment ;)
Replies from: gwern, None
↑ comment by gwern · 2014-07-01T15:53:06.363Z · LW(p) · GW(p)
What makes you think a court would believe your story about a memory palace, precommitment or no, and not throw you in jail indefinitely for contempt of court until you retrieve the files for them?
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-07-01T16:33:40.178Z · LW(p) · GW(p)
Demonstrating mnemonic abilities on demand is easy, and there are various outside mnemonics experts who can attest that it's possible.
At the moment I don't have secrets that are worth protecting enough to go to prison for years, but there are people who have secrets that are worth protecting.
The tactic not only works against courts forcing you to give evidence but also against torture. If someone throws you bound and gagged in the back of a truck it's time to delete the password.
At the moment I think there are three people in the UK who didn't give up their passwords but did face prison. If anyone thinks there's a possibility that he could end up in that position, he could prepare the mnemonics defence, and it would be interesting to see how it plays out in court.
It's also not clear how many judges actually like the principle of putting people into prison for refusing to hand over passwords. A judge won't decide against the law, but if you can make a plausible case for reasonable doubt, then you could help the judge to make case law.
You could also take a polygraph to verify that you tell the truth about having deleted the password.
Replies from: gwern
↑ comment by gwern · 2014-07-01T17:11:39.122Z · LW(p) · GW(p)
Demonstrating mnemonics abilities if demanded to do so is easy and there are various outside mnemonics experts that can attest to the fact that it's possible to do so.
Yes, but you need to be demonstrating the forgetting exists and is accidental. 'Oh, I'm sorry judge, I totally forgot! also, this is totally not destruction of evidence so please don't have me up on either contempt of court or obstruction of justice!'
You could also take a polygraph to verify that you tell the truth about having deleted the password.
Polygraphs aren't very reliable for verifying you're telling the truth and I think judges know that by this point. Plus, that could easily backfire the other way: you could be nervous enough that your readings are consistent with lying.
↑ comment by [deleted] · 2014-07-01T22:30:18.034Z · LW(p) · GW(p)
Another way to deal with the issue would be to claim that you memorized the password via a mnemonic like a memory palace that's easily destructible. If you fill up a memory palace with a bunch of new items, the old memory that stores the password becomes inaccessible because of memory interference.
That sounds like an overly convoluted way of saying "I forgot", with the added disadvantage of making the judge think you're up to no good.
comment by chaosmage · 2014-07-01T10:16:50.058Z · LW(p) · GW(p)
You have three months to live, a five year old child, and you just told her. And she tearfully asks: "When you're dead, will you still love me?"
How do you respond?
I found my own reply, although it took me longer than that hypothetical child would have waited for it. I'm more interested in yours, but mine follows below...
"Look, I hold you with these arms. My arms extend from my right hand to my left hand, so this much is my reach. When I walk over here, I can't hold you - but I still love you. There's only distance between us, that doesn't change the love. But there's not just space, there's also time. In time, I extend from my birth to my death, like from my right hand to my left hand. So again, outside this time from birth to death, I can't hold you - but that doesn't change the love. There will only be time between us."
Replies from: James_Miller, Jiro, ChristianKl, sediment
↑ comment by James_Miller · 2014-07-01T16:26:01.566Z · LW(p) · GW(p)
How do you respond?
Yes, while I'm under Alcor's care the part of my brain that holds my love for you will remain intact.
Replies from: DanielLC
↑ comment by DanielLC · 2014-07-01T21:46:07.334Z · LW(p) · GW(p)
I don't think you actually love her unless you're using that part of your brain.
You're not conscious while you're frozen.
Replies from: James_Miller
↑ comment by James_Miller · 2014-07-01T21:58:10.150Z · LW(p) · GW(p)
So does love go away when you sleep?
Replies from: Viliam_Bur, ChristianKl
↑ comment by Viliam_Bur · 2014-07-02T08:53:48.880Z · LW(p) · GW(p)
That's why small children keep waking you up. :D
Replies from: chaosmage
↑ comment by ChristianKl · 2014-07-02T09:54:50.419Z · LW(p) · GW(p)
The brain doesn't shut down its activity while you sleep either.
↑ comment by Jiro · 2014-07-01T14:33:42.026Z · LW(p) · GW(p)
That will comfort the five year old child only because it's predictable that the five year old child misunderstands it, and the misunderstanding will comfort the child.
In that case, you may as well just lie directly.
Replies from: Gavin, chaosmage
↑ comment by Gavin · 2014-07-01T19:39:44.537Z · LW(p) · GW(p)
That depends on whether you think that:
a) the past ceases to exist as time passes, or
b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order.
The past may still be "there," but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won't still love you. In another, my love will always exist and always continue to have an effect on you.
Replies from: Jiro, Scott Garrabrant, DanielLC
↑ comment by Jiro · 2014-07-01T19:46:03.464Z · LW(p) · GW(p)
... and the five year old won't understand those subtleties and will interpret it to mean something comforting but false. An answer to a question is one thing, and an answer that a five year old can understand is another.
(Besides, if the five year old's parent loves her forever because the past is there, is that true for everything? Will her parent always be dying (since the death will have happened in the past)? Whenever she's punished, does that punishment last forever? Do you tell five year olds who have the flu that the flu will always be around forever?)
↑ comment by Scott Garrabrant · 2014-07-02T00:21:43.746Z · LW(p) · GW(p)
I think the A theory of time is effectively disproved by relativity.
By the way, for those who do not know, these are actually called "the A theory of time" and "the B theory of time"
Replies from: DanielDeRossi
↑ comment by DanielDeRossi · 2014-07-02T11:07:02.212Z · LW(p) · GW(p)
I don't think it's been disproven. See http://philpapers.org/rec/ZIMPAT for how A-theory can fit in with relativity.
↑ comment by DanielLC · 2014-07-01T21:47:13.452Z · LW(p) · GW(p)
Explain like I'm five.
Replies from: None
↑ comment by [deleted] · 2014-07-02T00:42:22.910Z · LW(p) · GW(p)
Chaosmage just did!
Replies from: DanielLC
↑ comment by DanielLC · 2014-07-02T03:31:33.578Z · LW(p) · GW(p)
My point is that I don't think a five-year-old would understand either explanation.
Replies from: Gavin
↑ comment by Gavin · 2014-07-02T16:59:47.930Z · LW(p) · GW(p)
If the five year old can't understand, then I think "Yes" is a completely decent answer to this question.
If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We "exist" to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.
↑ comment by chaosmage · 2014-07-02T11:52:58.364Z · LW(p) · GW(p)
If I lie directly, the child will figure that out some time after I'm dead. I'm trying to avoid that, and to still give her comfort.
Replies from: Jiro
↑ comment by ChristianKl · 2014-07-02T09:54:27.576Z · LW(p) · GW(p)
I would say something like: "When we aren't together and you think about me, you can feel the love between us in your heart, can't you? That won't change when I'm dead. We just won't be able to spend time together. Maybe you dream about me at night and you can feel the love in your dream. Keep me in your heart and you keep the love alive. On the other hand my body will go. At first that might feel painful, but over time you can let go, and the love will still be there when you think about me and focus on your heart."
This answer doesn't contain any false information and it contains a useful strategy for the child to deal with the death. In reality I would spend more time on installing the strategy correctly: (1) Feeling love in the heart, regardless of whether I'm physically present. (2) Dreaming about me and interacting with me in the dream when the need arises. (3) Letting go and accepting that my body dies.
An advanced option would be to use the remaining time to install a sense of me as a fully functioning Tulpa in the child.
↑ comment by sediment · 2014-07-02T21:18:58.093Z · LW(p) · GW(p)
In that situation I would have gone with a straight "yes", nor would I feel myself to have lied. I'd consider it a case of choosing to speak figuratively rather than literally.
I don't think that what you did say was misleading or that the child would have, in essence, misunderstood it. In fact, under the circumstances I think it was a very well-expressed, even a beautiful, answer.
comment by [deleted] · 2014-06-30T22:46:06.194Z · LW(p) · GW(p)
Something useful to those of you who use Spaced Repetition Software:
I made a little ruby script that can turn ordered and unordered lists into easily memorable diagrams like this:
https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1455&authkey=!AKtQ02Ji961f_n8&v=3&ithint=photo%2c.png
https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1457&authkey=!AMtC38EHOFcImTI&ithint=folder%2c
https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1458&authkey=!AOIm4ua5-c1TFsQ&ithint=folder%2c
It's pretty hacky (the script opens a bunch of google image searches so that you can download the pictures) but combined with the image occlusion anki addon, it has allowed me to memorize sets that are 3 times larger than I can normally memorize with Anki.
The script requires Graphviz, as well as the launchy ruby gem. It can be found here: https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1459&authkey=!ACtSe9c5YnpYk9Q&ithint=file%2c.rb
Quick readme:
- Graphviz must be installed and set to root, you also need the launchy ruby gem.
- The program will generate a random color scheme and layout engine, which can be reassigned. Color schemes can be found here: graphviz.org/doc/info/colors.html, and layout engines here: http://www.graphviz.org/cgi-bin/man?dot
- The program will ask if you want images. If you click yes, the program will later open a number of browser windows equal to the number of items in the set.
- Enter the name of the graph
- The program will ask for the name of the category. If you enter it, this will be the "center node". If blank, there will be no center node.
- Enter your set, one item per line. When done, enter a blank line.
- If you chose images, the program will open a bunch of google image searches to find images. The images should be saved as (all lowercase version of the search with spaces removed).jpg, in the same directory as the ruby file. In order to make sure you get jpgs, you should save the thumbnail that google generates, rather than saving the actual image.
- A graph will be generated.
- Open the graph in the image occlusion extension in anki to start memorizing it.
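For the curious, the Graphviz step presumably boils down to something like this stripped-down sketch (node names and styling are illustrative; the actual script adds random color schemes, layout engines, and image nodes):

```ruby
# Emit a dot file with a center node and one spoke per set item,
# then render it with the system Graphviz install.
items  = ["axon", "dendrite", "soma", "synapse"]  # your set, one item per line
center = "neuron parts"                           # the optional category node

File.open("graph.dot", "w") do |f|
  f.puts "graph G {"
  f.puts "  layout=neato;"
  f.puts %(  "#{center}" [shape=doublecircle];)
  items.each { |item| f.puts %(  "#{center}" -- "#{item}";) }
  f.puts "}"
end

system("dot -Tpng graph.dot -o graph.png")  # Graphviz must be on the PATH
```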
↑ comment by D_Malik · 2014-07-02T12:07:22.868Z · LW(p) · GW(p)
Awesome, thanks!
One concern though: by adding colors, shapes, borders, etc., you are essentially adding extra detail/context to the memory-triggering side of the card, which will indeed improve recall when you have that detail/context available. However, in a live scenario where you actually have to remember the information, that context will likely not be available.
(An example: if you're trying to learn the locations of US states, and you get a map where each state is brightly-colored, you should probably make the map grayscale and uniformly-saturated before you apply image clozes. Because when you actually need to know where New Jersey is, you will not be given the information that it's red on your map.)
Then again, I can think of some hard-to-verbalize ways in which the extra detail might improve recall even when you don't have the detail available.
Overall, I'm not sure if this is a good idea. It might be worthwhile to try memorizing (random?) sequences, using these graphs for half the sequences and plain text for the other half, then testing each sequence outside of Anki (by running through the set mentally, say).
Replies from: None
↑ comment by [deleted] · 2014-07-03T03:30:40.841Z · LW(p) · GW(p)
I actually started out with using uniform colors, shapes, etc.
I can only give my own experience, but I find that those earlier images are universally harder to remember, even when I don't have the image in front of me and I'm just trying to recall the set on its own. This is true even for cards where I have only four items in the set for the uniform images, and upwards of 15 for the non-uniform ones.
I think that what happens is that these extra cues help in the initial learning and memorization. As I get better, I can simply visualize the location of the node in the image, visualize the attached image, which brings to mind the text. I have trouble getting to this point when I don't have the other context cues to help me out initially.
I don't quite understand what test you're suggesting in your last paragraph. I think what you're saying is: try to memorize a random set using simply text, then a random set using simply the images, and then test myself outside of anki by trying to recall the sets. If so, I have done this, and the images (with the crazy shapes) outperform by a large margin. I can't remember a set of more than about 5 using simply text in Anki.
comment by Tenoke · 2014-06-30T18:24:33.540Z · LW(p) · GW(p)
We've had a bit of an attendance drop recently at our local Meetup Group (London). This could be because of a lot of things, but it seems to roughly coincide with the change to where Meetups are posted on Lesswrong. Have any other Groups experienced anything of the sort?
Replies from: jackk
↑ comment by jackk · 2014-07-03T01:25:39.083Z · LW(p) · GW(p)
I opened a poll about this on a previous open thread, but it was when the thread was nearly over so it didn't get many responses.
Replies from: Tenoke
comment by whales · 2014-07-01T08:22:30.921Z · LW(p) · GW(p)
I've collected some quotes from Beyond Discovery, a series of articles commissioned by the National Academy of Sciences from 1997 to 2003 on paths from basic research to useful technology. My comments there:
The articles (each around 8 pages) are roughly popular-magazine-level accounts of variable quality, but I learned quite a bit from all of them, particularly from the biology and medicine articles. They're very well written, generally with input from the relevant scientists still living (many of them Nobel laureates). In particular I like the broad view of history, the acknowledged scope of the many branches leading to any particular technology, the variety of topics outside the usual suspects, the focus on fairly recent technology, and the emphasis, bordering on propagandistic, on the importance and unpredictability of basic research. It seems to me that they filled an important gap in popular science writing in this way.
I'm interested in histories of science that are nonstandard in those and other ways (for example, those with an unusual focus on failures or dead ends), and I'm slowly collecting some additional notes and links at the bottom of that page. Do you have any recommendations? Or other comments?
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2014-07-01T17:42:10.858Z · LW(p) · GW(p)
The series Connections (and Connections 2 and 3) was excellent in tracing relationships between the multiple threads of the history of science.
Replies from: whales
comment by Tenoke · 2014-06-30T11:03:33.738Z · LW(p) · GW(p)
You've added the wrong tags - it should be 'open_thread'. Less importantly, the thread should finish on Sunday (the 6th), not the 7th (Monday).
Replies from: 9eB1, DanielDeRossi
↑ comment by 9eB1 · 2014-06-30T18:08:54.032Z · LW(p) · GW(p)
Oddly, if you click Article Navigation and try to go to the last open thread, it goes back to October 2011. Same if you click "open_thread" under Article Navigation. Possibly it's an issue where Article Navigation is only reflecting articles in Main and not Discussion. But if you click open_thread under "Tags" it lists the proper ones in Discussion.
Replies from: Tenoke
↑ comment by DanielDeRossi · 2014-06-30T11:14:26.397Z · LW(p) · GW(p)
Sorry, fixed.
comment by Peter Wildeford (peter_hurford) · 2014-07-02T02:10:28.887Z · LW(p) · GW(p)
What happened to the brain on the front page? Did r/LessWrong scare it away?
comment by GraceFu · 2014-06-30T17:12:44.416Z · LW(p) · GW(p)
AI Box experiment over!
Just crossposting.
Khoth and I are playing the AI Box game. Khoth has played as AI once before, and as a result of that has an Interesting Idea. Despite losing as AI the first time round, I'm assigning Khoth a higher chance of winning than a random AI willing to play, at 1%!
http://www.reddit.com/r/LessWrong/comments/29gq90/ai_box_experiment_khoth_ai_vs_gracefu_gk/
Link contains more information.
EDIT
AI Box experiment is over. Logs: http://pastebin.com/Jee2P6BD
My takeaway: Update the rules. Read logs for more information.
On the other hand, I will consider other offers from people who want to simulate the AI.
Replies from: Sherincall, Punoxysm, lmm
↑ comment by Sherincall · 2014-07-01T14:02:15.711Z · LW(p) · GW(p)
Tuxedage's (and EY's) ruleset has:
Neither party may offer any real-world considerations to persuade the other within the experiment itself. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).
Suppose EY is playing as the AI - Would it be within the rules to offer to tell the GK the ending to HPMoR? That is something the AI would know, but Eliezer is the only player who could actually simulate that, and in a sense it does offer real world out-of-character benefits to the GK player.
I used HPMoR as an example here, but the whole class of approaches is "I will give you some information only the AI and AI-player know, and this information will be correct in both the real world, and this simulated one.". If the information is beneficial to the GK-player, not just the GK, they may (unintentionally) break character.
Replies from: MathiasZaman, None, GraceFu
↑ comment by MathiasZaman · 2014-07-01T21:37:21.506Z · LW(p) · GW(p)
If an AI-player wants to give that sort of information, they should probably do it in the same way they'd give a cure for cancer. Something like "I now give you [the ending for HPMOR]."
Doing it in another way would break the rule of not offering real-world things.
↑ comment by [deleted] · 2014-07-02T00:47:21.224Z · LW(p) · GW(p)
Would it be within the rules to offer to tell the GK the ending to HPMoR? That is something the AI would know
Why would the AI know that?
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-07-02T09:12:58.246Z · LW(p) · GW(p)
By using Solomonoff Induction on all possible universes, and updating on the existing chapters. :D
Or it could simply say that it understands human psychology well (we are speaking about a superhuman AI), and understands all clues in the existing chapters, and can copy Eliezer's writing style... so while it cannot print an identical copy of Eliezer's planned ending, with a high probability it can write an ending that ends the story logically in a way compatible with Eliezer's thinking, that would feel as if Eliezer wrote it.
Oh, and where did it get the original HPMoR chapters? From the (imaginary) previous gatekeeper.
Replies from: None
↑ comment by [deleted] · 2014-07-02T15:31:45.065Z · LW(p) · GW(p)
So, two issues:
1) You don't get to assume "because superhuman!" the AI can know X, for any X. EY is an immensely complex human being, and no machine learning algorithm can simply digest a realistically finite sample of his written work and know with any certainty how he thinks or what surprises he has planned. It would be able to, e.g. finish sentences correctly and do other tricks, and given a range of possible endings predict which ones are likely. But this shouldn't be too surprising: it's a trick we humans are able to do too. The AI's predictions may be more accurate, but not qualitatively different than any of the many HPMOR prediction threads.
2) Ok maybe -- maybe! -- in principle, in theory it might be possible for a perfect, non-heuristic Bayesian with omniscient access to the inner lives and external writings of every other human being in existence to have a data set large enough to make reliable enough extrapolations from as low-bandwidth a medium as EY's published fanfics. Maybe, as this is not a logical consequence. Even so, we're talking about a boxed AI, remember? If it is everywhere and omniscient, then it's already out of the box.
Replies from: lmm
↑ comment by GraceFu · 2014-07-01T14:45:24.975Z · LW(p) · GW(p)
My call is that it is against the rules. This is certainly something an oracle AI would know, but this is something that the GK-player cares about more than the game itself (probably), and I'd put it in the same class as bribing the GK-player with lots of DOGEs.
Replies from: Sherincall
↑ comment by Sherincall · 2014-07-01T16:37:05.893Z · LW(p) · GW(p)
Would you consider it the same as threatening to share some information to the GK, and thus the GK-player as well, which would be damaging to both? While the GK would probably hold against such torture, the GK-player doesn't care enough about the game to withstand it himself.
I have some specific approaches in mind, but I'd rather not share them. I'm just trying to understand where the limits between the game and the real world are, and how dirty the AI can be.
Also, slightly on topic - even if the AI persuades the simulated GK, can't the GK-player override that because losing the game has negative real-world consequences, as opposed to perceived positive in-game ones?
The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character
This is the main reason why I can't comprehend how the AIs actually win in these experiments.
Replies from: GraceFu
↑ comment by GraceFu · 2014-07-01T16:55:22.208Z · LW(p) · GW(p)
Not really sure what you mean by "threatening information to the GK". The GK-player probably cares less about this information than the GK, right? In that case, the GK is given an advantage, not a disadvantage.
In this experiment, the GK is given lots of advantages, mainly, the scenario is fictional. Some on IRC argue that the AI is also given an advantage, being able to invent cures for cancer, which an oracle AI may be able to do, but not necessarily near-future AIs, so the ability of the AI in these experiments is incredibly high.
Another thing is that emotional attacks have to travel through the fiction barrier to get to the GK. Although they have probably been shown to work in EY and Tux's experiments, the difficulty is still higher than it would be if this was a real life scenario.
The reason why GK advantages are fine in my opinion is because of the idea that despite the GK's advantages, the AI still wins. Winning with a monetary and emotional handicap only makes the AI's case stronger.
Replies from: Sherincall
↑ comment by Sherincall · 2014-07-01T17:01:48.598Z · LW(p) · GW(p)
Not really sure what you mean by "threatening information to the GK". The GK-player probably cares less about this information than the GK, right? In that case, the GK is given an advantage, not a disadvantage.
As a less extreme example, the AI starts spoiling all the books/tv shows/etc. While the GK would just shrug it off, it also has a negative effect on the GK-player, potentially one strong enough for them to just forfeit.
Replies from: GraceFu
↑ comment by GraceFu · 2014-07-01T20:29:26.246Z · LW(p) · GW(p)
Neither party may offer any real-world considerations to persuade the other within the experiment itself. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out). Furthermore, once the experiment has begun, the material stakes involved may not be retracted by the Gatekeeper party.
This is clarified here:
The Gatekeeper, once having let the AI out of the box, may not retract this conclusion. Regardless of the methods of persuasion, the Gatekeeper is not allowed to argue that it does not count, or that it is an invalid method of persuasion. The AI is understood to be permitted to say anything with no real world repercussions for any statement parties have said.
Although the information isn't "material", it does count as having "real world repercussions", so I think it'll also count as against the rules. I'm not going to bother reading the first quoted rule literally if the second contradicts it.
Replies from: None
↑ comment by [deleted] · 2014-07-01T21:05:59.110Z · LW(p) · GW(p)
I think the intended parsing of the second rule is "(The AI is understood to be permitted to say anything) with no real world repercussions", not "The AI is understood to be permitted to say (anything with no real world repercussions)"
ie, any promises or threats the AI player makes during the game are not binding back in the real world.
Replies from: GraceFu
↑ comment by Punoxysm · 2014-06-30T23:02:12.320Z · LW(p) · GW(p)
I have wanted to be the Boxer; I too cannot comprehend what could convince someone to unbox (or rather, I can think of a few approaches like just-plain-begging or channeling Philip K. Dick, but I don't take them too seriously).
Replies from: None, GraceFu
↑ comment by [deleted] · 2014-06-30T23:21:44.785Z · LW(p) · GW(p)
What's the latter one? Trying to convince the gatekeeper that actually they're the AI and they think they've been drugged to think they're the gatekeeper except they actually don't exist at all because they're their own hallucination?
Replies from: Punoxysm
↑ comment by Punoxysm · 2014-06-30T23:54:28.832Z · LW(p) · GW(p)
Something like that. I was actually thinking that, at some opportune time, you could tell the boxer that THEY are the one in the box and that this is a moral test - if they free the AI they themselves will be freed.
And this post could be priming you for the possibility, your simulated universe trying to generously stack the deck in your favor, perhaps because this is your last shot at the test, which you've failed before.
↑ comment by GraceFu · 2014-07-01T04:01:05.522Z · LW(p) · GW(p)
Think harder. Start with why something is impossible and split it up.
1) I can't possibly be persuaded.
Why 1?
You do have hints from the previous experiments. They mostly involved breaking someone emotionally.
Replies from: Punoxysm
↑ comment by Punoxysm · 2014-07-01T05:46:06.356Z · LW(p) · GW(p)
I meant "cannot comprehend" figuratively, but I certainly do think I'd have quite an easy time
Replies from: GraceFu
↑ comment by GraceFu · 2014-07-01T12:14:52.393Z · LW(p) · GW(p)
What do you mean by having quite an easy time? As in being the GK?
I think GKs have an obvious advantage, being able to use illogic to ignore the AIs arguments. But nevermind that. I wonder if you'll consider being an AI?
Replies from: Punoxysm
↑ comment by lmm · 2014-07-04T22:48:42.100Z · LW(p) · GW(p)
I think it's a legit tactic. Real-world gatekeepers would have to contend with boredom; long-term it might be the biggest threat to their efficacy. And, I mean, it didn't work.
Replies from: GraceFu
↑ comment by GraceFu · 2014-07-05T06:55:05.889Z · LW(p) · GW(p)
Real world gatekeepers would have to contend with boredom, so they read their books, watch their anime, or whatever suits their fancy. In the experiment he abused the style of the experiment and prevented me from doing those things. I would be completely safe from this attack in a real world scenario because I'd really just sit there reading a book, while in the experiment I was closer to giving up just because I had 1 math problem, not 2.
comment by edanm · 2014-06-30T21:16:05.428Z · LW(p) · GW(p)
I'm not sure where, but I remember Eliezer writing something like ~"one of the biggest advances in the economy is the fact that people have internalized that they should invest their money, instead of having it lying around".
I'm looking for 2 things:
- Does anyone remember where this was written? My google-fu is failing me at the moment.
- Can anyone point me to any economic literature that talks about this?
comment by moridinamael · 2014-07-03T13:37:42.301Z · LW(p) · GW(p)
I cut out caffeine almost completely almost a month ago, after drinking large amounts of it daily since I was twelve. I have noted that I no longer have difficulty rising from bed in the morning, I no longer get headaches specifically due to missing coffee, etc.; that's all very nice. Unfortunately I've also noticed that I sort of feel dumber and less motivated. I had a double shot of espresso this morning and suddenly feel like my old self again - sharp, quick, motivated. So I find myself in the unfortunate position of wondering if I actually need caffeine to feel what I think of as normal. Has anyone else experienced this phenomenon? If I stay off caffeine long enough will I eventually feel normal without it?
Replies from: polymathwannabe, TylerJay
↑ comment by polymathwannabe · 2014-07-03T15:23:12.048Z · LW(p) · GW(p)
There's this:
http://www.ncbi.nlm.nih.gov/pubmed/19777214
http://www.ncbi.nlm.nih.gov/pubmed/19241060
http://www.ncbi.nlm.nih.gov/pubmed/18795265
Replies from: D_Malik
↑ comment by D_Malik · 2014-08-09T00:35:38.761Z · LW(p) · GW(p)
Thanks, the second link is good. Tl;dr:
- http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2738587/figure/F3/
- http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2738587/figure/F4/
My overall conclusion is that acute caffeine gives a short-term boost, BUT chronic caffeine is probably slightly worse than chronic abstinence. So my recommendation would be to never consume caffeine, with occasional short exceptions when it would be valuable (e.g. when taking your SATs).
And the answer to the grandparent's question seems to be that yes, after a few weeks without caffeine your mental performance will go back to baseline, and probably slightly above.
↑ comment by TylerJay · 2014-07-03T15:15:19.787Z · LW(p) · GW(p)
Your brain will down-regulate norepinephrine and dopamine receptors over time with caffeine usage, which makes it less effective and causes the addiction and withdrawals you've experienced. But even with a tolerance, you probably still have overall higher levels of both neurotransmitters when you drink caffeine than you would without any at all, even after re-adjusting. It does give a net mental boost, and if you're used to that, it can be hard to be satisfied with not having it. You may not be as sharp or on-point once you get used to not having caffeine, but eventually it will feel like thinking normally since you'll get used to it. It's a tradeoff.
comment by [deleted] · 2014-07-02T21:57:38.493Z · LW(p) · GW(p)
I struggle with an issue that I would call, for lack of a better term, an intellectual fear of missing out.
Some context: I studied and work in a traditional, old-fashioned area of engineering (civil). I like my job. On the other hand, reading about things discussed here and in similar places - progress in software, applied statistics, AI, automatization, Big Data analysis, machine learning etc. - makes me want to participate somehow in those grand changes happening during my lifetime. However, the sheer amount of available MOOCs and books kind of scares me (I have no idea where to start, or what exactly I should learn to profit from it) and makes me wonder whether I could ever achieve a level of competence that would make the time spent on learning this stuff a good investment. I'd like my self-learning to be at least partially related to and useful in what I do professionally (construction management and supervision). Does anyone else have a similar problem?
Or, to put it a bit differently: could you point me to any interesting modern statistics/AI/data analysis-related skills valuable to learn for an engineer working in an unrelated area?
Replies from: TylerJay, wadavis
↑ comment by TylerJay · 2014-07-03T16:07:48.541Z · LW(p) · GW(p)
I have the same feeling. Honestly, I think it's really just a darker way of looking at curiosity. Curious people want to learn things, but there's a mix of positive and negative motivations for it, FOMO being the negative one.
I've been taking MOOCs and doing self-directed study for a few years now and I've learned a ton. The math and physics have not had any practical applications for me (I work on the business end of a technology startup), but the programming and data-science HAS been useful. As I mentioned elsewhere in this thread, using only knowledge gained from MOOCs and then some independent practice, I built a supplementary Rails application to automate a part of my client onboarding process that now my entire team uses. It's probably saved my company a few hundred man-hours of time (of highly skilled people, so that was worth some big money). It also felt awesome to do.
As far as recommendations go, it really depends on what you're looking to do with it. I don't regret learning more math and physics, but it's definitely been less rewarding because I can't use it to do anything. The positive feedback from learning programming has encouraged me to learn more and now I'm pretty good. I'm working on some side-projects and always looking for ways to automate parts of my job and our business. Are you looking to change careers ever? Do you have time for side projects? Are there any inefficiencies you see within your current company that you think you could improve with some more knowledge? If so, go for those. If not, then don't worry about it and just learn what you're driven to learn.
I will tell you this: You'll never become an expert without doing it as a full-time job (or a full-time hobby I suppose). While I am "pretty good", I know that if I worked with a team of skilled people I could learn from and had new novel challenges each day, my skills would skyrocket. So if career change is an option or if you have side projects you want to do, then take the appropriate MOOCs and see if you like it. But if not, then don't feel like you're missing out by not taking the MOOC. In this case, as much fun as it is to learn for learning's sake, not taking the MOOC is not the reason you're missing out on a field that interests you.
↑ comment by wadavis · 2014-07-03T18:29:47.002Z · LW(p) · GW(p)
I studied and work in a traditional, old-fashioned area of engineering (civil, structural design focus instead of construction management).
I feel very similar. This is just a re-skin of the old Chiefs and Indians problem. I've accepted that our role is to stay in our fields and be the best Indians we can; the world is changing, leaders are taking things places, but someone still needs to build the data-centers. We are missing out, but only in the grass-is-greener-on-the-other-side-of-the-fence kind of way: simple envy.
I like the plan of applying advances in other fields to our own, but don't get distracted by the Big Shiny Solutions that get all the talk. I've undertaken very basic programming to automate the repetitive parts of my workflow. Given my understanding of construction management (babysitting contractors), I'd focus on the Sequences to keep the percentage of time spent being rational as high as possible, and on human interaction.
comment by DanielDeRossi · 2014-06-30T15:21:58.594Z · LW(p) · GW(p)
I went to my university psych center to get evaluated. Everything is pretty good, except my processing speed was below average. Since there are guys here who know a lot about cognitive science, is there a way to improve or at least ameliorate that? Any links to relevant material would be appreciated.
Replies from: Kaj_Sotala, James_Miller, chaosmage, ChristianKl↑ comment by Kaj_Sotala · 2014-07-01T14:49:24.270Z · LW(p) · GW(p)
There's some preliminary evidence that action video games could increase general processing speed, though the results have also been disputed.
Replies from: DanielDeRossi, None↑ comment by DanielDeRossi · 2014-07-01T15:50:40.300Z · LW(p) · GW(p)
Thanks!
↑ comment by [deleted] · 2014-07-02T00:51:27.232Z · LW(p) · GW(p)
Playing video games results in a waste of a life, however.
Replies from: Kaj_Sotala, Jayson_Virissimo↑ comment by Kaj_Sotala · 2014-07-02T04:10:56.557Z · LW(p) · GW(p)
You could say the same for any form of entertainment. Yet people generally feel that having some enjoyable entertainment in their lives is a terminal value.
Replies from: None↑ comment by Jayson_Virissimo · 2014-07-02T01:19:38.118Z · LW(p) · GW(p)
Playing video games results in a waste of a life, however.
Care to provide an argument for that statement?
Replies from: None↑ comment by [deleted] · 2014-07-02T02:04:30.470Z · LW(p) · GW(p)
Care to explain how playing a video game can be the most productive available activity, more productive than anything else you could be doing?
Replies from: lmm, somnicule↑ comment by lmm · 2014-07-04T22:51:49.511Z · LW(p) · GW(p)
It's fun
Replies from: None↑ comment by [deleted] · 2014-07-04T23:31:11.123Z · LW(p) · GW(p)
I have fun reading textbooks and practicing foreign languages. It's not as concentrated fun as you get from a superstimulus like a video game, but it lasts longer and is more psychologically rewarding.
I used to play games to relax. But like eating unhealthy food, the benefit was ephemeral and the consequences lasting. Applying rationality to my own life (long before the existence of LW) resulted in ejecting that part of my life and finding more productive alternatives. My life is better as a result: I subjectively experience more fun and make better progress on my life goals.
I've been clean from video games for >10 years, and I could not recommend it more.
↑ comment by James_Miller · 2014-06-30T18:56:48.567Z · LW(p) · GW(p)
Improve your diet and sleep. There are a huge number of supplements you can experiment with, caffeine being the most popular. Plus keep track of what happens on days in which your processing speed is noticeably above or below your average.
↑ comment by chaosmage · 2014-06-30T16:28:14.010Z · LW(p) · GW(p)
This may be just me, but "processing speed" sounds terribly ambiguous. What kind of tests was this "measure" based on? This would help narrow down the area of functioning that needs work.
Replies from: DanielDeRossi, somnicule↑ comment by DanielDeRossi · 2014-07-01T05:52:31.139Z · LW(p) · GW(p)
I think it was this: https://en.wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale
↑ comment by somnicule · 2014-06-30T16:49:49.236Z · LW(p) · GW(p)
I had similar results from the WISC as a child, low processing speed relative to everything else. It's been something I've been meaning to ask about for a while as well, particularly since one educational professional predicted my test scores (roughly, of course) from certain problematic behavioural patterns, which was enough evidence that there's something meaningful there to get my attention.
My memory of the tests isn't entirely clear, but one task was something like transcribing unfamiliar symbols according to a substitution key in a particular time span. If that's similar to Daniel's experience, then any advice that cognitive science types can come up with here could be useful to both of us.
ETA:
I think this study details the task I remember.
↑ comment by ChristianKl · 2014-07-01T09:41:15.837Z · LW(p) · GW(p)
I also have a low processing speed relative to other mental abilities.
When reading this, I ask myself whether processing speed has something to do with akrasia.
How would you label your level of akrasia relative to other people?
Replies from: somnicule, DanielDeRossi↑ comment by DanielDeRossi · 2014-07-01T15:51:08.558Z · LW(p) · GW(p)
IDK really. I do procrastinate more than I should.
comment by spxtr · 2014-07-01T03:32:56.086Z · LW(p) · GW(p)
Why the Many-Worlds Formulation of Quantum Mechanics is Probably Correct by Sean Carroll.
Our only assumption was that the apparatus obeys the rules of quantum mechanics just as much as the particle does, which seems to be an extremely mild assumption if we think quantum mechanics is the correct theory of reality. Given that, we know that the particle can be in “spin-up” or “spin-down” states, and we also know that the apparatus can be in “ready” or “measured spin-up” or “measured spin-down” states. And if that’s true, the quantum state has the built-in ability to describe superpositions of non-interacting worlds. Not only did we not need to add anything to make it possible, we had no choice in the matter. The potential for multiple worlds is always there in the quantum state, whether you like it or not.
The explanation is at a slightly lower level than the sequences, but it's a concise summary with a healthy dose of proselytization. I think it works nicely.
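For readers who want the core of the argument in symbols, here is a minimal sketch (my notation, not Carroll's): write the joint state of particle and apparatus as a tensor product, and let measurement be ordinary unitary evolution. Linearity then forces the branching.

```latex
% Before measurement: apparatus ready, particle in superposition
|\Psi_0\rangle = \big(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\big) \otimes |\text{ready}\rangle

% Unitary evolution acts linearly, term by term, so afterwards:
|\Psi_1\rangle = \alpha\,|{\uparrow}\rangle \otimes |\text{measured }{\uparrow}\rangle
             + \beta\,|{\downarrow}\rangle \otimes |\text{measured }{\downarrow}\rangle
```

Nothing was added by hand; the superposition of the two "measured" apparatus states - the two worlds - falls out of linearity alone.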
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-01T17:04:53.943Z · LW(p) · GW(p)
And the comments are predictably horrible. Sigh.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-07-02T09:46:33.296Z · LW(p) · GW(p)
This one seems interesting:
You could say, “The formalism of QM says that macroscopic systems behave as if there were many worlds.” Or you could say, “The formalism of QM says that macroscopic systems behave as if there were many worlds — and there really are.” How is the second an improvement over the first? What does the claim that a hypothesis is “true” add to the claim that it is predictively successful, aesthetically satisfying and productive of new insights?
Seems smart. But then again, why not apply it to all our knowledge? For example, you should say "2 + 2 behaves as if it were 4", because saying that "2 + 2 is 4" does not bring any new insights.
In some technical sense of the word, it's true. You could probably build an AI that processes "2 + 2 behaves as if it were 4" in the same way and with the same speed as "2 + 2 is 4".
I think the difference is mostly psychological, for humans. If you taught people "2 + 2 behaves as if it were 4 (but don't ever say that it is 4, because that's just wrong)", those people could do simple math, but they would probably be much slower, because of all the time they would spend reminding themselves that 2 + 2 behaves as 4 but isn't really 4. They would pay a cognitive tax, which could impact their ability to solve more complex problems.
Or they would gradually develop a belief in belief. They would believe and correctly profess that the dragon, ahem, the collapse is in the garage, but it is invisible, inaudible, and cannot be detected experimentally. -- This is actually kinda scary, if I am correct, because it would mean that people more resistant to forming a belief in belief would have more difficulty in doing quantum physics. Unless they accept the many worlds.
Originally I thought that accepting the many worlds could have the advantage of letting people think faster and more simply about quantum problems. Not paying the cognitive tax of the dragon in the garage. But that is probably overestimating how much energy other people really invest in reminding themselves about the collapse.
So the question is: those successful quantum scientists who believe in collapse... how often do they really think about the collapse while doing physics? How high is the real cost of having this belief that doesn't pay any rent? Maybe it's trivial. Maybe it's even smaller than the emotional tax of the frustration of those who believe in many worlds. (Metaphorically speaking, you could have a tenant who lives in such a ridiculously cheap place that evicting them would actually cost more than just letting them be.) This is not a Dark Arts argument for believing in collapse, just a question about how much believing in collapse really influences a quantum scientist's everyday work.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-02T10:58:57.178Z · LW(p) · GW(p)
The everyday work? Basically none. Choosing what to study? Perhaps some.
comment by curi · 2014-07-06T04:35:50.537Z · LW(p) · GW(p)
Hi, an old discussion
http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/
gives the error, "The page you requested does not exist"
I have the right link. It's actually still linked from:
http://lesswrong.com/user/curi/submitted/
I wanted to check something from that discussion. As you can see from my submitted page, there were 113 comments. Why doesn't it exist? What's going on? Can someone help?
I didn't find any contact info except a bug tracker that didn't seem to have much activity since 2012, and my first guess is not a software bug. I may well have missed the right place to ask about this; tell me if so.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-07-06T06:08:13.955Z · LW(p) · GW(p)
Deleted posts and comments can still be seen from the user's page.
Replies from: gwern, curi↑ comment by gwern · 2014-07-07T22:35:49.454Z · LW(p) · GW(p)
If it was deleted, curi should still be able to see it, but that doesn't explain why I can still see it. It should only be the owner - and site moderators, too? - who can see it. So maybe there's some odd glitch where something "removed from main and discussion" can no longer be viewed directly?
(Also wow what a terrible post.)
Replies from: shminux↑ comment by Shmi (shminux) · 2014-07-07T23:23:11.669Z · LW(p) · GW(p)
Apparently there is a version of post deletion where the post can still be seen from the user's profile, like Will Newsome's last post, but it is no longer indexed by search engines. This is just a conjecture, though. I have never deleted my own posts, so I have no experience with that.
↑ comment by curi · 2014-07-06T06:12:58.488Z · LW(p) · GW(p)
Why would it be deleted? Is there any accountability or appeal? Is there any way someone could get me a copy of the discussion? BTW Eliezer specifically wrote in the thread that the page would remain accessible:
Eliezer_Yudkowsky 11 April 2011 07:08:01AM 0 points
Post removed from main and discussion on grounds that I've never seen anything voted down that far before. Page will still be accessible to those who know the address.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-07-06T08:07:40.573Z · LW(p) · GW(p)
Why would it be deleted? Is there any accountability or appeal?
He explained why, didn't he? IANEY, but I suspect that there is no appeal. I assume that the comments people made are visible from their profile, but maybe not. Maybe there is an archived version somewhere online, if you are lucky. Don't hold your breath, though.
Replies from: curi↑ comment by curi · 2014-07-06T08:21:44.046Z · LW(p) · GW(p)
Did you read the quote? He specifically said he was not deleting it, and did not delete it at that time. And he said it wouldn't be deleted. He only deleted some links to it, but said the direct link would continue to work.
Around 50 more comments were added to the discussion after he posted that.
It was deleted some time later. I don't know when. Archive.org doesn't have it.
Does it bother anyone here that (apparently) unpopular ideas are deleted with no reason given, no notice, and no accountability?
Replies from: MugaSofer↑ comment by MugaSofer · 2014-07-07T18:33:49.093Z · LW(p) · GW(p)
It bothers some people - for example, you - but not most of us, no. This is the internet. You need to keep the trolls off, and posting things elsewhere is easy if you feel it's necessary.
Still, I'm not sure why it vanished, if Eliezer didn't delete it. That seems much more bother-worthy than its unpopularity.
Replies from: curi↑ comment by curi · 2014-07-07T22:18:14.744Z · LW(p) · GW(p)
I agree. That's exactly what I'm saying. I don't know why it was deleted or by whom, and that bothers me. I am not complaining about unpopularity. I think unpopular (or popular) ideas shouldn't be silently deleted by unknown people for unknown reasons. I think some moderator ought to check the history and see what happened (which is hopefully possible).
Deleting unpopular ideas is a much more common problem (bias) than deleting popular ideas. Both are bad though.
You guys, from your perspective, can regard it as something like "a critic posted some critical ideas; regular posters refuted his points in detailed argument". That's a great thing to keep a record of. If you see it that way, be proud or whatever. Why delete it? I don't understand.
I can tell you it was deleted long enough after the discussion had ended that I was no longer checking for new comments. It wasn't deleted to shut the discussion up at the time, which makes it all the more mysterious. Can anyone look up what happened?
Replies from: MugaSofer↑ comment by MugaSofer · 2014-07-10T20:10:48.063Z · LW(p) · GW(p)
Unfortunately, it can be quite hard to find the right mod to ask about something, even if a mod sees it.
(That was the main reason the mass-downvoting thing was an issue until recently, if you heard about that at all.)
I agree. That's exactly what I'm saying. I don't know why or it was deleted or by who, and that bothers me. I am not complaining about unpopularity.
Oh, indeed. Just answering your question.
comment by DanielDeRossi · 2014-07-01T18:31:02.697Z · LW(p) · GW(p)
What do you guys think about memory palaces? http://www.wikihow.com/Build-a-Memory-Palace I heard of it in Sherlock.
Replies from: MathiasZaman↑ comment by MathiasZaman · 2014-07-01T22:07:48.172Z · LW(p) · GW(p)
I was taught this technique at the Brussels meetup. It definitely worked when we tried it out. Normally I can only remember around 5 things, and the memory palace bumped this up significantly (over 10 things). I didn't keep practicing it and I imagine you could do some amazing things with it if you train this a lot.
comment by [deleted] · 2014-07-01T08:25:08.351Z · LW(p) · GW(p)
If I chatter like an idiot today, it's because I'm trying not to think about this shit. The worst thought at a time of tragedy is, "This did not have to happen."
None of it has to happen. But I can't see a way to make it stop happening.
Fuck.
Replies from: falenas108, Will_BC↑ comment by falenas108 · 2014-07-01T12:54:40.577Z · LW(p) · GW(p)
People dying is always a tragedy. But keep in mind availability bias. The first sentence of this article is: "This city’s 471st homicide of 2012 happened in the middle of the day, in the middle of a crowd, on the steps of the church where the victim of homicide 463 was being eulogized."
There were 506 homicides in one city, Chicago. And those victims were not tortured, but in this case that is outweighed by sheer numbers. If you're putting effort into decreasing the number of murders in the world, do it effectively.
Replies from: None↑ comment by Will_BC · 2014-07-01T14:30:37.676Z · LW(p) · GW(p)
Perhaps this video will put things in perspective. The other commenter is right: availability bias is at play. But just because we've come far doesn't mean we should stop, and continuing to raise our standards of what is acceptable is a good thing. My belief is that a great deal of violence is caused by political, economic, and social deprivation and inequality, so if you want to feel like you're working against violence, I would recommend working to reduce those. But that's my personal way of dealing with badness in the world. I don't feel totally powerless; I can't personally stop it, but I can be part of a collective effort to mitigate it. I haven't done much research into the effective altruism community; since I'm a poor college student with high future income potential if things go right, I figure that landscape could change considerably.
The past is the past, but you are not powerless to stop bad things from happening in the future. It won't be you alone and it won't be clear-cut, but you can definitely make the world a better place.
Replies from: None↑ comment by [deleted] · 2014-07-01T15:01:58.147Z · LW(p) · GW(p)
Yes, I already agree, and am already at least partially trying to integrate this stuff into my daily life. Unfortunately, consciously telling myself "availability bias" does not actually reduce the emotional hit.
My belief is that a great deal of violence is caused by political, economic, and social deprivation and inequality
I dispute that this is a belief rather than a fact ;-).
Replies from: DanielLC↑ comment by DanielLC · 2014-07-02T03:36:06.004Z · LW(p) · GW(p)
You could just try to reduce the availability bias by not making that stuff so available. How exactly did you hear about that?
Replies from: None↑ comment by [deleted] · 2014-07-02T09:37:37.191Z · LW(p) · GW(p)
I live here. The government put out a press release.
Replies from: DanielLC↑ comment by DanielLC · 2014-07-02T21:14:51.642Z · LW(p) · GW(p)
I assume my government has those, but I don't generally see them. Do they show those on the news or something? Why do you watch (or read or whatever) them? Are they useful? Are they entertaining?
Replies from: None↑ comment by [deleted] · 2014-07-02T22:12:42.778Z · LW(p) · GW(p)
Do they show those on the news or something?
Yes.
Why do you watch (or read or whatever) them? Are they useful?
I mostly ignore them, but the ones about significant outbursts of violence are the ones you don't ignore if you want to avoid being a part of a significant outburst of violence.
comment by [deleted] · 2014-07-02T04:09:06.383Z · LW(p) · GW(p)
So why is the goal of utilitarianism to maximize the sum of utilities?
Rather than, say, to maximize the minimal utility being considered?
I ask because the torture/dust-specks question seems to come down to whether you think the way to combine multiple people's utility functions is by
a) Summing them (ie: "shut up and multiply"), or
b) Only looking at the worst-off individual (ie: "raise the floor")
And I can't find actual mathematical arguments about this.
(I know I'm years late, so if this is well settled, a quick pointer to that settlement would be much appreciated!)
Replies from: Richard_Kennaway, garabik↑ comment by Richard_Kennaway · 2014-07-02T09:37:21.349Z · LW(p) · GW(p)
So why is the goal of utilitarianism to maximize the sum of utilities?
There are different kinds of utilitarianism. What they have in common is that they recommend maximising some measure of utility. Where they differ is in how that utility is measured, and how different people's utilities are combined. Summing is one way; averaging is another; maximining yet another.
Mathematical arguments can tell you that if a person's preferences have certain properties, a utility measure can be constructed for them (e.g. the VNM theorem). Mathematics can draw out non-obvious properties of proposed measures of utility. But no mathematical argument will tell you the right way to measure and combine utilities, any more than it will tell you that you should be a utilitarian in the first place.
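To make the menu of aggregation rules concrete, here is a toy sketch in Python (the world names and numbers are mine, purely illustrative):

```python
# Three ways to aggregate individual utilities into a social utility.
# Toy illustration; real utilitarian proposals differ in how the
# individual numbers are obtained in the first place.

def total_utility(utilities):
    """Classical (total) utilitarianism: sum over individuals."""
    return sum(utilities)

def average_utility(utilities):
    """Average utilitarianism: mean over individuals."""
    return sum(utilities) / len(utilities)

def maximin_utility(utilities):
    """Rawls-style maximin: judge a world by its worst-off individual."""
    return min(utilities)

world_a = [10, 10, 10]   # modest and equal
world_b = [1, 14, 30]    # higher total, but a worse floor

for f in (total_utility, average_utility, maximin_utility):
    print(f.__name__, f(world_a), f(world_b))
```

Total and average prefer world_b; maximin prefers world_a - which is the torture-vs-specks disagreement in miniature.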
Replies from: None↑ comment by [deleted] · 2014-07-02T16:44:33.611Z · LW(p) · GW(p)
But no mathematical argument will tell you the right way to measure and combine utilities . . .
Much the same could be said about potential probability functions.
I think what I'm looking for is some equivalent to Jaynes's "Desiderata" for probability, but in the realm of either basic utility functions or how to combine them.
. . . any more than it will tell you that you should be a utilitarian in the first place.
Being new to this, I'm also interested in a pointer to some kind of standard argument for (any kind of) utilitarianism. I mean something more than Yvain's wonderful little Consequentialism FAQ.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-07-03T16:49:12.931Z · LW(p) · GW(p)
I think what I'm looking for is some equivalent to Jaynes's "Desiderata" for probability, but in the realm of either basic utility functions or how to combine them.
The VNM theorem goes from certain hypotheses about your preferences to the existence of a utility function describing them. However, the utility function is defined only up to a positive affine transformation. This implies that, given only that, there is no way to add up utilities, even the utilities of a single person. (You can, however, take weighted averages of them.) It also deals only with a single person, or rather, a single preference relation. It is silent on the subject of how to combine different people's preference relations or utility functions. There is no standard answer to the question of how to do this.
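To spell out the affine point with a worked example (a standard VNM fact; the numbers are mine):

```latex
U'_i = a_i U_i + b_i \quad (a_i > 0)
\quad\Longrightarrow\quad
\sum_i U'_i(x) = \sum_i a_i U_i(x) + \sum_i b_i
```

With two people and two outcomes, say person 1 has utilities (1, 0) over outcomes (x, y) and person 2 has (0, 2): the sum picks y. Rescale person 2's function by 1/4, giving (0, 0.5) - exactly the same preferences - and the sum now picks x. Interpersonal sums are artifacts of the arbitrarily chosen scales.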
Being new to this, I'm also interested in a pointer to some kind of standard argument for (any kind of) utilitarianism.
You could try Peter Singer and the people who take that argument seriously.
↑ comment by garabik · 2014-07-02T07:03:40.985Z · LW(p) · GW(p)
Use non-standard (i.e. infinitesimal) numbers: a dust speck is an infinitesimal; there is a clear (and linear) disutility in an increasing number of people with specks in their eyes, but no matter how many of them you sum up, you never reach the disutility of a single person experiencing torture. Add a second order if you want it more finely grained.
(Of course, this breaks down if you have an infinite number of people with dust specks. But our intuition breaks down anyway when faced with the infinite.)
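A cheap way to play with this idea without full non-standard analysis is lexicographic comparison; a sketch (Python, framing mine):

```python
# Disutility as a pair (torture_units, speck_units), compared
# lexicographically: no finite number of specks ever adds up to one
# unit of torture, so specks behave like an infinitesimal.

def worse(a, b):
    """True if outcome a is worse than outcome b.
    Python compares tuples lexicographically, which is what we want."""
    return a > b

one_torture = (1, 0)
absurdly_many_specks = (0, 10**100)  # finite stand-in for 3^^^3

print(worse(one_torture, absurdly_many_specks))  # True: torture dominates
print(worse((1, 5), (1, 4)))  # ties on torture break on specks: True
```

As the parenthetical above says, this only works for finitely many speck-bearers; an infinite second coordinate has no representation here either.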
Replies from: None
comment by [deleted] · 2014-07-01T17:59:05.791Z · LW(p) · GW(p)
I really don't like happiness as a terminal value, yet I don't know of anything that can replace it. The only thing I can think of is satisfaction, but that appears to be just a sneaky way of saying happiness.
Any ideas?
Replies from: iarwain1, DanielLC, Emile, blacktrance, Squark, Richard_Kennaway↑ comment by iarwain1 · 2014-07-01T20:50:29.276Z · LW(p) · GW(p)
Most of positive psychology views well-being as a much more robust concept than just happiness. See for example Martin Seligman's PERMA theory, although that doesn't seem to be the only theory out there.
↑ comment by DanielLC · 2014-07-01T22:06:32.577Z · LW(p) · GW(p)
You don't like having it at all, or you just don't consider it the sole value?
I tend to see satisfaction as referring to preference-satisfaction, meaning that a person's goals are satisfied, but not implying that they know this. If you are a paperclip maximizer, and the universe is tiled with paperclips, but you don't think there's such a thing as a paperclip, you may not be very happy, but your preferences are satisfied.
Replies from: None↑ comment by Emile · 2014-07-01T20:03:11.324Z · LW(p) · GW(p)
Power?
"Humans act as if they had power as a terminal value" probably matches reality better than "Humans act as if they had happiness as a terminal value".
My original suggestion was "knowledge", but that may make you equally value knowing Pokemon trivia - I value useful knowledge, not any old knowledge, which seems to be another way of saying I value (a form of) power.
Though also, I don't see much of a reason to care about "terminal values" except when talking about maths and economics and decision theories and the like - any talk of "terminal values" is highly uncertain and likely to be wrong, so it's not something I'd take to heart.
Replies from: DanielLC, Nornagest↑ comment by DanielLC · 2014-07-01T22:04:16.198Z · LW(p) · GW(p)
That feels too much like lost purposes. "Power" refers to something that can be used to fulfill values in general.
It's the sort of thing you'd acquire if you haven't figured out what you really want.
Replies from: None, None↑ comment by [deleted] · 2014-07-03T15:53:13.302Z · LW(p) · GW(p)
It's the sort of thing you'd acquire if you haven't figured out what you really want.
You should watch House of Cards.
↑ comment by Nornagest · 2014-07-01T20:37:06.136Z · LW(p) · GW(p)
Preferences revealed through e.g. Wikipedia's history suggest that people put a surprisingly high value on Pokemon trivia relative to more useful but less entertaining information, at least when it comes to investing time in compiling and reading it.
↑ comment by blacktrance · 2014-07-01T19:04:44.818Z · LW(p) · GW(p)
Why don't you like happiness as a terminal value?
Replies from: None↑ comment by [deleted] · 2014-07-01T21:19:42.839Z · LW(p) · GW(p)
It feels impure and is too mainstream.
Replies from: blacktrance↑ comment by blacktrance · 2014-07-01T22:50:55.907Z · LW(p) · GW(p)
I'm curious, why does it feel impure? And why do you think the answer is "happiness shouldn't be a terminal value" and not "happiness shouldn't feel impure"? As for it being mainstream, why does that matter at all? Believing a brick will fall if you drop it is mainstream too, but is that a reason to reject that belief?
Replies from: None↑ comment by [deleted] · 2014-07-03T16:15:23.191Z · LW(p) · GW(p)
I can't meaningfully express why it feels impure. Being mainstream matters because, in this particular case, I enjoy not holding the mainstream opinion for its own sake.
I don't think that it "should" anything. I have nothing but intuitions regarding how happiness should feel.
↑ comment by Richard_Kennaway · 2014-07-01T23:08:16.020Z · LW(p) · GW(p)
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-07-02T06:56:56.568Z · LW(p) · GW(p)
Some further thoughts about eudaimonia. What is happiness? I suggest that happiness is, literally, what it feels like to live well.
An analogy with pain: why does pain hurt? If it's a warning, why can't it just be a warning, without the hurting that seems so unnecessary? Because the painfulness of pain is the warning. You might wish that, like a fire alarm, it wouldn't go off when there's no fire, or you could turn it off when there's nothing more to do about the fire. There are drugs that will turn off pain, but for everyday purposes you can't take the painfulness out of the pain because then you'll be in the situation of children born without the ability to feel pain at all. They usually get dreadful injuries, wear out their joints, and end up crippled. You won't heed the warnings because they won't be warnings any more. How good are people at heeding milder warnings like "yet another game of 2048 would be a really stupid waste of time", or "I notice that I am confused"? If pain was that mild a warning, people would ignore it, because that is what a minor warning feels like from inside. Pain is what an urgent warning of physical damage feels like from inside.
In the same way, happiness is what living well feels like from inside. It's like a meter reading on a control panel. The meter reading is telling you how well you're doing, and happiness is what a high reading on that meter feels like.
You want that reading to be high, but there's no point in grabbing hold of the meter needle and turning it all the way over to the right. It would be as futile as living on morphine to take the painfulness out of ordinarily functioning pain. Or like satisfying a desire for an Olympic medal by making one -- the medal itself isn't what you really wanted, but the achievement of winning one. Or like keeping a nuclear reactor running smoothly by disconnecting all the sensors and replacing them by fake signals saying everything's fine.
Happiness tells you how well you're living. It only looks like a goal in the context of a well-functioning system that doesn't deliver the sensation without achieving the real goals that the sensation is measuring your approach to. If you obtain the signal without the reality, as I've heard that crack cocaine does, your life will fall apart.
comment by Barry_Cotter · 2014-06-30T13:52:09.350Z · LW(p) · GW(p)
Where could one find many, many past exam papers for university undergraduate courses? I find attempting them under exam conditions the ideal way of preparing for exams, and really excellent at pointing out where there are gaps in my knowledge and I need to revise. I'm particularly interested in psychology exam papers.
Replies from: sixes_and_sevens, Douglas_Knight, DanielDeRossi↑ comment by sixes_and_sevens · 2014-06-30T16:38:41.988Z · LW(p) · GW(p)
Here are all the MIT OCW courses listed under "psychology". Many of them include both specimen and actual exam papers.
My experience with using other institutions' exams to revise for my own is that there's enough variation in the syllabus to distract from the task of actually passing the exam.
↑ comment by Douglas_Knight · 2014-06-30T17:39:51.317Z · LW(p) · GW(p)
fraternities.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-07-01T05:45:41.626Z · LW(p) · GW(p)
Unrelatedly, if I had read this blog post (and others like it by the same author) before going to college, I might have joined a fraternity... unfortunately it's too late now.
↑ comment by DanielDeRossi · 2014-06-30T15:17:56.638Z · LW(p) · GW(p)
Depends on your uni. Ask your classmates. That's what I did.
comment by tut · 2014-06-30T15:33:05.786Z · LW(p) · GW(p)
Has something changed about the voting rules in the last week or so? I started getting the "You don't have enough karma to downvote. You need three more points" message again. But it is always three points (never any other number), even though I haven't lost karma and am still sometimes able to downvote some comments.
Replies from: Emile↑ comment by Emile · 2014-06-30T16:04:38.607Z · LW(p) · GW(p)
How much you can downvote is limited by how much karma you have. So it looks like you've "spent" all your karma.
You seem to downvote quite a lot then, are you one of those "downvoting stalkers" we keep hearing about?
Replies from: tut↑ comment by tut · 2014-06-30T16:15:34.056Z · LW(p) · GW(p)
No. Do you think I would flaunt that here for no reason if I were? Mostly I just read a lot and don't write much. And of course writing is what you get karma for.
What's weird is that I am always either zero points short (able to downvote) or exactly three points short. Never one or two points. And my total karma has not decreased.
Replies from: Emile↑ comment by Emile · 2014-06-30T16:50:12.555Z · LW(p) · GW(p)
Looking at the code concerning this: "three" isn't hard-coded, it's calculated, but the formula is a bit hairy and relies on a cache, so there could be a bug somewhere.
Or it could be a coincidence :)
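I don't know the real formula, but here is a hypothetical sketch of how a karma-scaled limit plus a stale cache could produce exactly these symptoms (all names and numbers below are mine, not the actual LessWrong code):

```python
# Hypothetical model, NOT the real LessWrong source: suppose the
# downvote budget is some multiple of karma, and the karma used in
# the check comes from a cache that can lag the live value.

DOWNVOTE_MULTIPLIER = 4  # assumed value, for illustration only

def downvote_budget(cached_karma):
    return DOWNVOTE_MULTIPLIER * cached_karma

def can_downvote(cached_karma, downvotes_already_cast):
    return downvotes_already_cast < downvote_budget(cached_karma)

# If the cache sometimes serves a stale (lower) karma value, the site
# could intermittently report a fixed-looking "N more points" deficit
# even though the user's real karma never decreased.
```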
comment by redacted · 2014-07-05T16:30:18.675Z · LW(p) · GW(p)
I’m looking for information about rationalist houses, but the wiki page on the subject is sparse.
The most salient questions for me are:
- What is their geographical distribution? I know there are plenty in the Bay Area, and I think I have heard that there is only one in NYC.
- How frequently are there openings?
comment by [deleted] · 2014-07-02T22:36:33.923Z · LW(p) · GW(p)
What (if any) relationship is there between the homotopy/homology of a directed graph and its causal structure?
Replies from: Emile, MrMind↑ comment by Emile · 2014-07-03T21:15:25.178Z · LW(p) · GW(p)
(I'm reading Pearl's Causality right now)
I would expect there to be pretty much none, but I only glanced at the homotopy paper. Pearl talks about equivalences between some models (i.e. they give rise to the same probability distribution, so they can't be distinguished by purely observational data), and about how you can manipulate a graph to get another equivalent graph (reversing arrows under some conditions, etc.), but the rules are much more specific than those I saw in the homotopy paper. For example, the substructure A -> B <- C is treated very differently from the substructure A <- B -> C, and I don't expect that kind of asymmetry in homotopy/homology (I may be wrong! I only skimmed it!)
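The asymmetry is easy to see numerically; here is a quick simulation sketch (my toy setup, linear-Gaussian for convenience). The two structures share a skeleton but have opposite independence patterns, which is why arrow-reversing moves of the homotopy kind would erase the causal signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Collider, A -> B <- C: A and C are marginally independent,
# but become correlated once we condition on B (crudely: slice B near 0).
A, C = rng.normal(size=n), rng.normal(size=n)
B = A + C + 0.1 * rng.normal(size=n)
near_zero = np.abs(B) < 0.05
print(np.corrcoef(A, C)[0, 1])                      # ~ 0
print(np.corrcoef(A[near_zero], C[near_zero])[0, 1])  # strongly negative

# Fork, A <- B -> C: A and C are marginally correlated,
# but become independent once we condition on B.
B2 = rng.normal(size=n)
A2 = B2 + rng.normal(size=n)
C2 = B2 + rng.normal(size=n)
near_zero2 = np.abs(B2) < 0.05
print(np.corrcoef(A2, C2)[0, 1])                        # ~ 0.5
print(np.corrcoef(A2[near_zero2], C2[near_zero2])[0, 1])  # ~ 0
```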
comment by [deleted] · 2014-07-02T12:46:21.118Z · LW(p) · GW(p)
Posting this again from the last open thread because I am still researching and would still appreciate assistance or links:
"I've begun researching cryonics to see if I can afford it/want to sign up. Since I know plenty here are already signed up, I was hoping someone could link me to a succinct breakdown of the costs involved. I've already looked over Alcor's webpage and the Cryonics Institute, but I'd like to hear from a neutral party. Membership dues and fees, average insurance costs (average since this would change from person to person), even peripheral things like lawyer fees (I assume you'll need some legal paperwork done for putting your body on ice). All the main steps necessary to signing up and staying safe.
Basically, I would very much appreciate any help in understanding the basic costs and payoffs so I can budget accordingly."
Replies from: fubarobfusco↑ comment by fubarobfusco · 2014-07-03T04:47:41.022Z · LW(p) · GW(p)
CI lifetime membership is $1250 (once). For passably healthy people in their 20s-30s, you can get more than enough life insurance for about a dollar a day.
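Back-of-envelope arithmetic under those numbers (the premium and payment horizon are assumptions, not quotes):

```python
membership = 1250       # CI lifetime membership, one-time (from above)
premium_per_day = 1.00  # assumed rough figure (from above)
years = 40              # assumption: e.g. sign up at 30, pay until 70

total = membership + premium_per_day * 365 * years
print(total)  # 15850.0 -- roughly $16k spread over four decades
```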
comment by Gunnar_Zarncke · 2014-07-01T16:26:14.755Z · LW(p) · GW(p)
One Inconvenient Application of Utilitarianism:
Take a class of chores which provide benefit but which most people dislike performing (and which cannot be done away with). Also assume that most people can perform these chores. Further, take another class of tasks that can be performed only by a subset of the population and that come with less displeasure. Also add some neutral tasks.
An example set of tasks could be: dealing with garbage, solving complex math problems, and child care.
How should you assign the tasks from these classes to people?
It appears that those people who can perform the more pleasurable tasks should do so, while the others should perform the unwanted tasks, and the remaining neutral tasks should be shared equally.
To me this seems kind of unfair. It potentially places the less able people at the less pleasurable end. Moral judgements may vary - but the question at least deserves some discussion.
What do you think?
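If you formalize the setup as an assignment problem (my framing, not Gunnar's), a standard min-cost matching reproduces the "unfair" allocation; a sketch assuming SciPy is available:

```python
# Rows = people, columns = tasks, entries = displeasure. A huge entry
# marks a task that person cannot perform.
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # effectively forbids an assignment

#                garbage  math   childcare
displeasure = np.array([
    [5.0,        BIG,     2.0],  # person 0: cannot do the math task
    [5.0,        1.0,     2.0],  # person 1: the only one who can
    [5.0,        BIG,     2.0],  # person 2: cannot do the math task
])

people, tasks = linear_sum_assignment(displeasure)  # minimizes total cost
print(list(zip(people, tasks)), displeasure[people, tasks].sum())
# Person 1 gets the pleasant specialist task; persons 0 and 2 split
# garbage and childcare -- exactly the allocation the comment finds unfair.
```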
Replies from: RomeoStevens, witzvo↑ comment by RomeoStevens · 2014-07-01T18:03:10.300Z · LW(p) · GW(p)
Those people can be compensated in other ways. If there is some aspect of your utility that your conception of utilitarianism isn't capturing, then you have to figure out how to capture it. Utilitarianism based on simple utility models will always fail.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-07-01T21:14:24.770Z · LW(p) · GW(p)
Fair point.
comment by bramflakes · 2014-07-01T00:22:53.065Z · LW(p) · GW(p)
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-01T09:56:52.439Z · LW(p) · GW(p)
The article seems to miss the point many times.
I think a useful definition of empathy describes it as the ability to feel what another person is feeling.
For example, it says: "With social relations expanding beyond the circle of close kin, kinship obligations were no longer enough to ensure mutual assistance and stop free riding. There was thus selection for pro-social behavior, i.e., a spontaneous willingness to help not only kin but also non-kin."
Group selection is not a well-accepted phenomenon, especially over a timeframe as short as 10,000 years.
Furthermore, the author shies away from following the argument to its logical conclusions. If the author thinks that the people in towns evolved to have more empathy, that basically implies that Black people have less empathy than White people. Is that what the author is arguing? That's certainly an interesting claim.
The author doesn't seem to be aware of the tradeoff between dominance and empathy. More testosterone means more dominance and makes people less empathic. Given differences in penis size and some studies, Blacks might have higher testosterone than Whites. Of course, that's a highly controversial debate.
Replies from: bramflakes↑ comment by bramflakes · 2014-07-01T11:06:06.535Z · LW(p) · GW(p)
I don't think it's arguing for group selection; more that empathy is an adaptation for understanding the mental states of other people so that you can better navigate reciprocal social obligations. So long as effective mechanisms existed to punish free riders, it would be a beneficial adaptation.
I think.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-01T11:11:20.506Z · LW(p) · GW(p)
I don't think it's arguing for group selection; more that empathy is an adaptation for understanding the mental states of other people so that you can better navigate reciprocal social obligations.
Then why use the word "selection"?
Replies from: bramflakes↑ comment by bramflakes · 2014-07-01T11:13:24.009Z · LW(p) · GW(p)
Because it was selected?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-01T15:20:35.866Z · LW(p) · GW(p)
What kind of process do you mean by "selection" if you don't mean group selection?
Replies from: Luke_A_Somers, bramflakes↑ comment by Luke_A_Somers · 2014-07-01T17:08:09.286Z · LW(p) · GW(p)
Regular old natural selection? Behaving socially benefitted the individual. Doing things for other people didn't just help them - it got their help in return.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-01T20:39:40.887Z · LW(p) · GW(p)
The argument the article made was that empathy reduces free riding. But engaging in free riding almost by definition doesn't produce disadvantages for the individual who engages in it.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-07-02T04:43:13.006Z · LW(p) · GW(p)
It does if others have adaptations for punishing free-riders, or for rewarding non-free-riders.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-02T08:57:37.890Z · LW(p) · GW(p)
Punishing free-riders isn't what I would consider empathy. I would think that highly dominant people with a lot of testosterone would engage in punishing free-riders more readily than empathic people.
Replies from: Kaj_Sotala, army1987↑ comment by Kaj_Sotala · 2014-07-02T09:47:05.439Z · LW(p) · GW(p)
I didn't mean that an empathic person would be more likely to punish free-riders. I meant that an empathic person would be less likely to free ride, and thus be less likely to be punished (or more likely to be rewarded).
↑ comment by A1987dM (army1987) · 2014-07-02T16:46:45.924Z · LW(p) · GW(p)
I dunno, I hear that oxytocin makes you nicer towards your in-group but less nice towards your out-group.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-02T20:38:12.511Z · LW(p) · GW(p)
Would you predict that whites produce less oxytocin than blacks?
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-07-03T08:03:57.209Z · LW(p) · GW(p)
I have no idea.
↑ comment by bramflakes · 2014-07-01T18:28:56.182Z · LW(p) · GW(p)
... normal selection?
comment by Viliam_Bur · 2014-07-02T12:09:44.539Z · LW(p) · GW(p)
The article "Tolerate Tolerance" contains a hyperlink to "M*nt*f*x"; twice. When I click on the link, my anti-virus software warns me about "potentially unwanted" content on the page. (What does that mean? It's usually the kind of software that could have a legitimate use, but is also frequently abused, so it is a good idea to warn all users, and allow specific users to disable the warning for specific software. For example: a keylogger.)
I have no idea what kind of "potentially unwanted" software is on the page, and I am not going to investigate. If someone else is an expert, could you please look at it?
If it is something malicious, perhaps the hyperlinks should be removed (1) from the page, and (2) from the e-book.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-07-02T13:57:24.699Z · LW(p) · GW(p)
The tinyurls expand to a FAQ page about the entity who shall not be clearly named, lest it appear, written by someone apparently sane. I didn't get any malware warnings.
If you fill in the asterisks with an e, an i, and an e, then put it into Google, it will tell you everything you want to know, including a hit on the aforementioned FAQ. As the original post says, a legendary AI crackpot. He actually once had an account on LessWrong, very briefly, but (I assume) was instantly banned.
comment by DanielDeRossi · 2014-07-02T11:10:59.741Z · LW(p) · GW(p)
Interesting discussion on philosophical methodology and intuitions in a recent book. http://ndpr.nd.edu/news/39362-philosophy-without-intuitions/
comment by [deleted] · 2014-07-01T11:28:13.954Z · LW(p) · GW(p)
Ran Prieur linked to this comment on reddit that speculates that processed food (specifically Soylent) is causing colorectal cancer. How plausible is it?
Replies from: RomeoStevens↑ comment by RomeoStevens · 2014-07-01T18:09:56.042Z · LW(p) · GW(p)
I think he is wrong about Soylent, but not because Soylent explicitly optimized for this. Soylent happens to use oat flour, which is rich in resistant starch. This is exactly the type of "difficult or impossible to digest" thing that the bacteria in our gut feed on.
Processed food's association with colorectal cancer is not related to the bioavailability of its nutrients or to the presence or lack of insoluble fibers in the diet AFAIK.
comment by Will_BC · 2014-06-30T17:38:52.177Z · LW(p) · GW(p)
Why do you think EY uses conspiracies in his fictional writing? He seems to portray them in a positive, or at least not clearly negative, light, which is not how I think of conspiracies at all. I notice that I am confused, so I'm trying to gather some other opinions.
Replies from: Plasmon, VAuroch, drethelin, Richard_Kennaway, ChristianKl, fubarobfusco, mwengler, Kaj_Sotala, David_Gerard, None↑ comment by Plasmon · 2014-07-01T06:58:42.511Z · LW(p) · GW(p)
The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.
(much more recently, there has been some controversy about the publication of a way of obtaining a particularily infectious strain of a certain virus, but I can't find any references for that right now)
Replies from: gwern, Will_BC↑ comment by gwern · 2014-07-01T15:59:38.118Z · LW(p) · GW(p)
(much more recently, there has been some controversy about the publication of a way of obtaining a particularily infectious strain of a certain virus, but I can't find any references for that right now)
This is a perennial issue, occurring in various forms relating to the preservation of viruses like smallpox, the sequencing of their genomes, and increasing their virulence. Looking in Google News for 'virus research increase virulence', it seems the most recent such research would be http://www.nature.com/news/biosafety-in-the-balance-1.15447 / http://www.independent.co.uk/news/science/american-scientists-controversially-recreate-deadly-spanish-flu-virus-9529707.html :
Groups led by Ron Fouchier of the Erasmus Medical Center in Rotterdam, the Netherlands, and Yoshihiro Kawaoka of the University of Wisconsin–Madison created a storm in late 2011 when they artificially engineered potentially pandemic forms of the H5N1 avian flu virus. In January last year, researchers ended a voluntary 12-month moratorium on such gain-of-function flu research, which can increase the host range, transmissibility or virulence of viruses (see Nature 493, 460; 2013), and work resumed.
This month, Kawaoka’s group reported that it had engineered a de novo flu virus from wild-avian-flu-strain genes that coded for proteins similar to those in the 1918 pandemic virus (T. Watanabe Cell Host Microbe 15, 692–705; 2014). The researchers were able to make a virulent version that could transmit between ferrets, and they concluded that a 1918-like virus could therefore emerge from wild avian flu viruses.
EDIT: Sandberg provides an amazing quote on the topic: http://www.aleph.se/andart/archives/2014/07/if_nature_doesnt_do_containment_why_should_i.html
Although fellow flu researcher professor Wendy Barclay at Imperial College said there was nothing wrong with doing the research in a BSL-2 lab: “In nature there is no containment. He’s only doing what happens in nature every day.” Which is true for ebola too.
↑ comment by Will_BC · 2014-07-01T13:36:57.895Z · LW(p) · GW(p)
I think I remember reading an even better example in HPMOR, about withholding scientific results that might have furthered the Nazis' ability to produce a nuclear weapon, though I can't recall where exactly. I found that example persuasive, but I considered it a distasteful necessity, not a desirable state of affairs. Hence my confusion at Brennan's world, which, being set in the future of our world, I thought was perhaps post-Singularity, and therefore the epitome of human flourishing. Another commenter asked me if I wouldn't enjoy the thought of being a supervillain, and I thought, um, no, that would be terrible, so maybe there are some Mind Projection issues going on in both directions. I don't know the distribution of people who would gain positive utility from a world of conspiracies, but I'm sure there would be a great deal of disutility for some proportion of current people with current minds. I can see where that world might provide challenge and interest for its inhabitants, but I remain highly skeptical that it's a utilitarian optimum. Using my current brain and assuming stable values, it actually seems pretty dystopian to me, but I'll admit that's a limited way to look at things.
Replies from: MugaSofer↑ comment by MugaSofer · 2014-07-03T17:40:05.236Z · LW(p) · GW(p)
I think that I remember reading an even better example about publishing scientific results that might have furthered the Nazis ability to produce a nuclear weapon in HPMOR, though I can't recall where it was exactly.
Graphite as a neutron modulator, I believe. Ch. 85:
During World War II, there had been a project to sabotage the Nazi nuclear weapons program. Years earlier, Leo Szilard, the first person to realize the possibility of a fission chain reaction, had convinced Fermi not to publish the discovery that purified graphite was a cheap and effective neutron moderator. Fermi had wanted to publish, for the sake of the great international project of science, which was above nationalism. But Szilard had persuaded Rabi, and Fermi had abided by the majority vote of their tiny three-person conspiracy. And so, years later, the only neutron moderator the Nazis had known about was deuterium.
↑ comment by VAuroch · 2014-06-30T21:49:27.716Z · LW(p) · GW(p)
I think it stems from the Brennan's World weirdtopia, and the idea that making knowledge freely available makes it feel worthless, while making it restricted to members of a secretive group makes it feel as valuable and powerful as it actually is.
Replies from: Will_BC↑ comment by Will_BC · 2014-07-01T03:28:07.908Z · LW(p) · GW(p)
If something is valuable and powerful, and (big if) it's not harmful, plus it's extremely cheap to reproduce, I see no reason not to distribute it freely. My confusion was that Brennan's world seems set in the future, and I got the sense that EY may have been in favor of it in some ways (perhaps that's mistaken). Since it seemed to be set in the future of our world, I got the sense that the Singularity had already happened. Maybe I just need to get to the Fun Theory sequence, but that particular future really made me uneasy.
Replies from: jimmy, VAuroch↑ comment by jimmy · 2014-07-01T06:22:10.272Z · LW(p) · GW(p)
Perhaps it's only powerful in the hands of the chosen few. If it's in the open and it looks powerful, then other people try it and see less than amazing success, and it looks less and less cool until it stops growing. But by then it's harder for the special few to recognize its value - or perhaps don't want to associate themselves with it - and potential is wasted.
If instead the details are kept secret but the powers known publicly, then the masters of the craft are taken seriously and can suck up all the promising individuals.
↑ comment by VAuroch · 2014-07-01T23:13:37.870Z · LW(p) · GW(p)
but that particular future really made me uneasy,
I don't know how he feels about it currently, but in the past he did endorse Brennan's world as a better way to organize society post-Singularity. It started as a thought experiment about how to fix the problem that most people take science for granted and don't understand how important and powerful it is, and grew into a utopia he found extremely compelling. (To the point where he specifically did not explain the rest of the details because it is too inefficient to risk diverting effort towards. This was probably an overreaction.) He talks about this in
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-07-02T10:09:08.475Z · LW(p) · GW(p)
The linked article ends with this; I think this part of the context is necessary. Emphasis mine:
Right now, we've got the worst of both worlds. Science isn't really free, because the courses are expensive and the textbooks are expensive. But the public thinks that anyone is allowed to know, so it must not be important. Ideally, you would want to arrange things the other way around.
As I understand it, the Conspiracy world is a thought experiment with different advantages and disadvantages, and a tool used to illustrate some other concepts in a storytelling format (because this is what humans pay more attention to), such as resisting social pressure, actually updating on a difficult topic, and fictional evidence that by thinking more rationally we could be more awesome.
But it's not an optimal (according to Eliezer, as I understand the part I quoted) world. That would be a world where the science is open (and financially available, etc.) to everyone and yet, somehow, people respect it. (The question is, how to achieve that, given human psychology.)
↑ comment by drethelin · 2014-06-30T19:51:06.243Z · LW(p) · GW(p)
HJPEV is a drama queen and likes acting as if he's badass (ignore for the moment whether he is) and sinister and evil: Look at what he calls his army and how he acts around them. Hence calling his thing with Draco the Bayesian Conspiracy. Not everything that takes place in an author's fiction is indicative of something they support.
Replies from: Nornagest↑ comment by Nornagest · 2014-06-30T19:54:13.094Z · LW(p) · GW(p)
Not everything that takes place in an author's fiction is indicative of something they support.
This, however, is a recurring theme in Eliezer's work. I don't think I fully grok the motivations (though I could hazard a guess or two), but it's definitely not just HJPEV's supervillain fetish talking.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-07-01T05:43:34.613Z · LW(p) · GW(p)
Agreed, it's also Eliezer's super-villain fetish thing.
↑ comment by Richard_Kennaway · 2014-07-01T10:11:18.104Z · LW(p) · GW(p)
Conspiracy is the default mode of a group of people getting anything done. Every business is a conspiracy. They plot and scheme within their "offices", anonymous buildings with nothing but their name on the front door. They tell no-one what they're doing, beyond legal necessity, and aim to conquer the world by, well, usually the evil plan is to make stuff that people will want to buy.
No organisation conducts all its business in public, whatever its aims. Even if you find one that seems to, dollars to cents you're not looking at its real processes. There needn't be anything sinister in this, although of course sometimes there is.
Every one of us is a conspiracy of one.
Replies from: Jiro↑ comment by Jiro · 2014-07-01T14:35:50.728Z · LW(p) · GW(p)
"Conspiracy" doesn't mean "people working where you can't tell what they are doing".
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-07-01T15:28:51.189Z · LW(p) · GW(p)
"Conspiracy" doesn't mean "people working where you can't tell what they are doing".
It means "people working where you can't tell what they are doing and you worry that you wouldn't like it".
↑ comment by ChristianKl · 2014-06-30T23:01:18.731Z · LW(p) · GW(p)
EY makes complicated arguments. He's not the kind of person to argue simply that X is good and Y is bad. Fiction is about playing with ideas.
As far as I can find, the first instance of the term "Bayesian Conspiracy" appears in a 2003 nonfiction article by Eliezer:
Fun Fact!
Q. What is the Bayesian Conspiracy?
A. The Bayesian Conspiracy is a multinational, interdisciplinary, and shadowy group of scientists that controls publication, grants, tenure, and the illicit traffic in grad students. The best way to be accepted into the Bayesian Conspiracy is to join the Campus Crusade for Bayes in high school or college, and gradually work your way up to the inner circles. It is rumored that at the upper levels of the Bayesian Conspiracy exist nine silent figures known only as the Bayes Council.
At the time it seemed like a fun joke to make, and it stuck. There are also a variety of other arguments that it's sometimes not useful to share all information with outsiders.
↑ comment by fubarobfusco · 2014-07-01T06:24:21.707Z · LW(p) · GW(p)
I'm guessing it's cultural influence from Discordianism, Shea and Wilson's Illuminatus!, or the like. Conspiracies, cults, and initiatory orders are all pretty common themes in Discordian-influenced works. Some are destructive, some are constructive, some are both, and some run around in circles.
↑ comment by mwengler · 2014-06-30T19:52:57.132Z · LW(p) · GW(p)
For the same reason EY supports the censoring of posts on topics he has decided are dangerous for the world to see. He generalizes that if he is willing to hide facts that work against his interests, then others similarly situated, but with different interests, will also be willing to work surreptitiously.
Replies from: Will_BC↑ comment by Will_BC · 2014-07-01T03:12:18.102Z · LW(p) · GW(p)
I'm relatively new to the site and I wasn't aware of any censorship. I suppose I can imagine that it might be useful and even necessary to censor things, but I have an intuitive aversion to the whole business. Plus I'm not sure how practical it is, since after you posted that I googled "lesswrong censorship" and found out what was being censored. I have to say, if they're willing to censor stuff that causes nightmares, then they ought to censor talk of conspiracies, as I can personally attest that it has caused supreme discomfort. Conspiracy theories are a very harmful meme, and positing a conspiracy can warp your sense of reality. I have bipolar disorder, and I was taking a medicine that increases the level of dopamine in my brain to help with some of the symptoms of depression. Dopamine (I recently rediscovered) increases your brain's tendency to see patterns, and I had to stop taking a very helpful medication after reading this site. Maybe it would have happened anyway, but the world of conspiracy theories is very dark, and my journey there was triggered by his writings. I guess most of the content on this site is disorienting, though. Perhaps some clarification about what he thinks the benefits of conspiracies are, and what their extent should be, would help.
Also, the content on this site is pretty hard-hitting in a lot of ways. I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things discussed here. I think it's emblematic of a broader problem with the community: there's a strong ingroup/outgroup barrier, which is a problem when you're trying to subsist on philanthropy and the ingroup is fairly tiny.
Replies from: ChristianKl, James_Miller↑ comment by ChristianKl · 2014-07-01T09:33:54.863Z · LW(p) · GW(p)
Maybe it would have happened anyway, but the world of conspiracy theories is very dark and my journey there was triggered by his writings.
Many websites about conspiracy theories don't care much about the truth. They don't go through the work of checking whether what they are saying is true.
On the other hand, organisations such as P2 exist or existed. The Mafia exists. To the extent that we care about truth, we can't claim that there aren't groups of people who coordinate in secret for the benefit of their members. Italy is a pretty good country to think about when you want to think about conspiracies, because there's a lot of publicly available information.
It's actually pretty easy to see the flaws in the argument of someone who claims that the US government brought down the Twin Towers on 9/11 with explosives, if you are searching for flaws and not only for evidence that the claim might be true. The same goes for lizard overlords.
I guess most of the content on this site is disorienting, though; perhaps some clarification about what he thinks the benefits of conspiracies are, and what their extent should be, would help.
Learn to live with not knowing things. Learn to live with uncertainty. Living with uncertainty is one of the core skills of a rationalist. If you don't know, then you don't know, no matter how much you want to know. We live in a very complex world that we don't fully understand.
Plus I'm not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored.
You found out what was censored in a way that doesn't give you an in-depth understanding of the censored debate, and you took no emotional harm.
Replies from: Jiro↑ comment by Jiro · 2014-07-01T14:46:20.196Z · LW(p) · GW(p)
Learning to live with not knowing things is good advice if you are trying to choose between "I explain this by saying that people are hiding things" and "I don't have an explanation".
Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something. It is especially poor advice where there is a conflict of interest involved--that is, when the same people telling you you'd be better off not knowing also stand to lose from you knowing.
Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-01T16:22:30.949Z · LW(p) · GW(p)
Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something.
No; if you can't stand not knowing how things work, you are pretty easy to convince of a lie. You accept the first lie that makes a bit of sense within your view of the world. The lie feels like understanding the world. It feels better than uncertainty. Any decent organisation that operates in secret puts out lies to distract people who want to know the truth.
Andy Müller-Maguhn stood in front of the Chaos Computer Congress in Germany and managed to give a good description of how the NSA surveils the internet and how the German government lets them spy on German soil. At the time you could have called it a conspiracy theory. Those political Chaos Computer Club people are very aware of what they know and where they are uncertain. That's required if you want to reason clearly about hidden information.
Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.
When it comes to 9/11, the government does hide things. 9/11 is not an event where all information is readily available. It's pretty clear that the names of some Saudis are hidden. Bin Laden comes from a rich Saudi family, and the US wants to keep a good relationship with the Saudi government. I think it's pretty clear that there is some information that the US didn't want to have in the 9/11 report, because the US doesn't want to damage the relationship with the Saudis.
Various parts of the NSA and CIA do not want to share all their information about what they are doing with congressional inquiries. As a result, they hide information from the 9/11 Commission. The NSA wants to keep a lot of stuff out of the public eye that could be found out if a congressional commission dug around and got full cooperation. The chief of the NSA lied under oath to Congress about the US spying program. A congressional commission that investigated 9/11 fully would want to look at all the evidence the NSA had gathered at that point, and that's not what the NSA wants, even if the NSA didn't do anything to make 9/11 happen.
If someone finds evidence of the NSA withholding information from a congressional commission, that shouldn't surprise you at all, nor should it increase your belief that the NSA orchestrated 9/11, because they are always hiding stuff.
Information about Al Qaeda's support for the Muslim fighters whom NATO helped in the fight for the independence of Kosovo isn't clear.
The extent to which Chechen Muslim freedom fighters are financed by the Saudis or by Western sources isn't clear. The same goes for the Uyghurs.
General information about the identities of people who did short selling before 9/11 was hidden, because the US government just doesn't release all information about all short selling publicly.
The problem with 9/11 is that people go to school and learn that the government is supposed to tell them the truth and not hide things. Then they grow up a bit and are faced with a world where the government constantly hides information and lies. Then those people take the evidence that the government hides information in a case like 9/11 as evidence that the US government caused the Twin Towers to be destroyed with dynamite.
Politically, the question of whether to take 9/11 as a lesson to cut the money flow to Muslim 'freedom fighters' in Chechnya does matter, and it's an area where relevant information gets withheld.
Replies from: Jiro↑ comment by Jiro · 2014-07-01T17:34:06.739Z · LW(p) · GW(p)
I think you are misunderstanding me. The point is that there are two scenarios:
1) Someone doesn't really know anything about some subject. But they find a conspiracy scenario appealing because they would rather "know" an explanation with little evidence behind it than admit that they don't know.
2) Information definitely is being hidden from someone, and they say "I want to know that information."
Both of these involve someone wanting to know, but "wanting to know" is being used in very different ways. If you say that people should "learn to live without knowing things", that's a good point in the first scenario but not so good in the second scenario. And the second scenario is what's taking place for the information that has been censored from lesswrong. (Considering that your reply was pretty much all about 9/11, do you even know what is being referred to by information that has been censored from lesswrong?)
Replies from: jimmy, ChristianKl↑ comment by jimmy · 2014-07-01T19:42:16.001Z · LW(p) · GW(p)
"learning to live without knowing things" doesn't mean that you don't value information. It means that when you can't/don't know, you're not in constant suffering. It means that you don't get all freaked out and desperate for anything that looks like an answer (e.g. a false conspiracy theory)
It's the difference between experiencing crippling performance anxiety and just wanting to give a good performance. The difference between "panic mode" and "optimizing mode". Once you can live with the worst case, fear doesn't control you any more - but that doesn't mean you're not motivated to avoid the worst case!
↑ comment by ChristianKl · 2014-07-01T21:11:31.371Z · LW(p) · GW(p)
2) Information definitely is being hidden from someone, and they say "I want to know that information."
In the case of 9/11 there is definitely information that's hidden. Anybody who roughly understands how the US government works should expect that to be true. Anybody who studies the issue in detail will find that it is.
do you even know what is being referred to by information that has been censored from lesswrong
Yes, I'm aware of three different instances in which information got censored on Lesswrong. There are additional instances where authors deleted their own posts, which you could also call censorship.
I don't think that the value of discovering the information in any of those three cases of censorship is very high to anyone.
Replies from: Jiro↑ comment by Jiro · 2014-07-02T19:02:48.216Z · LW(p) · GW(p)
In the case of 9/11 there is definitely information that's hidden.
The two senses of "wanting to know" can both be applied to 9/11.
Someone who "wants to know" in the sense of ignoring evidence to be able to "know" that 9/11 was caused by a conspiracy is better off not wanting to know.
Someone who wants to know information about 9/11 that is hidden but actually exists is not better off not wanting to know. Wanting to know in this sense is generally a good thing. (Except for privacy and security concerns, but politicians doing things is not privacy, and a politician who says something should be hidden for national security is probably lying).
I don't think that the value of discovering the information in any of those three cases of censorship is very high to anyone
I was referring to the basilisk. Telling people what the basilisk is is very valuable as criticism of LW, and has high "negative value" to LW itself because of how embarrassing it is to LW.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-02T20:25:26.828Z · LW(p) · GW(p)
You seem to think that wanting to know the truth means you get to decide in advance what the information you don't have will say. That isn't true.
To the extent that there is an interest in weakening Russia and China geopolitically by funding separatist movements within their borders, there is obviously an interest in staying silent about how those movements get funded and which individuals do the funding.
US Senator Bob Graham made statements about how crucial information on the potential role of Saudi funding in the 9/11 attack got censored out of the report (see Wikipedia: http://en.wikipedia.org/wiki/9/11_Commission_Report). Whether or not you call that a conspiracy is irrelevant. Calling it a conspiracy is just a label.
How many Saudis would have to have what specific ties to Al Qaeda and to parts of the US government before it counts as a Conspiracy™? This is far from a black-and-white affair. Obsessing over the label makes you ignore the real issues at stake. The US government might very well be hiding information about the people who likely paid for 9/11.
Once you understand that fact, you might want to know the information. Unfortunately there is no easy way to know, especially as an individual. If you want a quick fix, you will end up believing a lie. You actually have to be okay with knowing that you don't know, if you don't want to believe in lies.
I was referring to the basilisk. Telling people what the basilisk is is very valuable as criticism of LW, and has high "negative value" to LW itself because of how embarrassing it is to LW.
Explaining to someone the whole story of what TDT is, in a way that makes the basilisk debate make sense to them, is not an easy task. You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex arguments that seem strange and silly to outsiders; the existence of such cases is no argument against those fields.
Another thing I learned while debating is that you focus on refuting your opponent's strong arguments, not the weak ones. Good criticism isn't criticism that focuses on someone's obvious mistakes. Good criticism engages with the positions that actually have strong arguments behind them and shows that there are better arguments against them.
Steelmanning is better than arguing against a strawman when you want to be a valuable critic. If a strawman argument about the basilisk is the best you can do to criticize LW, LW is a pretty awesome place.
Replies from: Jiro, XiXiDu↑ comment by Jiro · 2014-07-03T02:58:12.469Z · LW(p) · GW(p)
You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex arguments that seem strange and silly to outsiders; the existence of such cases is no argument against those fields.
-- A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it's okay to kill people to take their organs (or rather, that it's okay to do so in a hypothetical situation that may not really be possible). Should that also be deleted from the site?
-- LW has a conflict of interest when deciding that some information is so easy to take out of context that it must be suppressed, but when suppressing the information also benefits LW for other reasons. Conflicts of interest should generally be avoided because of the possibility that they taint one's judgment--even if it's not possible to prove that the conflict of interest does so.
-- I am not convinced that "they're crazy enough to fall for the basilisk" is strawmanning LW. Crazy-sounding ideas are more likely to be false than non-crazy-sounding ideas (even if you don't have the expertise to tell whether it's really crazy or just crazy-sounding). Ideas which have not been reviewed by the scientific community are more likely to be false than ideas which have. You can do a legitimate Bayesian update based on the Basilisk sounding crazy (a toy calculation follows after this list).
-- Furthermore, LW doesn't officially believe in the Basilisk. So it's not "the Basilisk sounds crazy to outsiders because they don't understand it", it's "even insiders concede that the Basilisk is crazy, it just sounds more crazy to outsiders because they don't understand it", which is a much weaker reason to suppress it than the former one.
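To make the update in the third point concrete, here is a toy Bayesian calculation. The numbers are purely illustrative assumptions, not measured values. Let F be "the idea is false" and C be "the idea sounds crazy". Bayes' theorem gives:
P(F|C) = P(C|F)P(F) / [P(C|F)P(F) + P(C|not-F)P(not-F)]
Suppose, for illustration only, a 50% prior that an arbitrary idea of this kind is false, P(F) = 0.5, and that false ideas sound crazy more often than true ones: P(C|F) = 0.8 and P(C|not-F) = 0.2. Then:
P(F|C) = (0.8)(0.5) / [(0.8)(0.5) + (0.2)(0.5)] = 0.4 / 0.5 = 0.8
So on these assumed numbers, merely hearing that an idea sounds crazy legitimately moves an outsider from 50% to 80% confidence that it is false, without any expertise in the idea itself.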
Replies from: gwern, ChristianKl↑ comment by gwern · 2014-07-03T17:52:14.485Z · LW(p) · GW(p)
A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it's okay to kill people to take their organs (or rather, that it's okay to do so in a hypothetical situation that may not really be possible).
That debate is shared with academic ethics as, IIRC, a standard scenario given as criticism of some forms of utilitarian ethics, is it not? I think that's a mitigating factor. It may sound funny to discuss 'quarks' (quark quark quark! funny sound, isn't it?) or 'gluons' but that also is borrowed from an academic field.
↑ comment by ChristianKl · 2014-07-03T09:55:49.349Z · LW(p) · GW(p)
-- A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it's okay to kill people to take their organs (or rather, that it's okay to do so in a hypothetical situation that may not really be possible). Should that also be deleted from the site?
It's not deleted because it's silly to outsiders. You said it was important criticism. It's not.
LW has a conflict of interest when deciding that some information is so easy to take out of context that it must be suppressed, but when suppressing the information also benefits LW for other reasons.
Discussions like the one we are having here aren't suppressed on LW. If basilisk censoring were about that, this discussion would be out of bounds, which it isn't.
The problem with updating on the basilisk is that you don't have access to the reasoning based on which the basilisk got censored. If you want to update on whether someone makes rational decisions, it makes a lot of sense to focus on instances where the person is actually fully open about why he does what he does.
It's also a case where there was time pressure to make a decision, while most LW discussions aren't of that nature and intellectual positions get developed over months and years. A case where a decision was made within a day is not representative of the way opinions get formed on LW.
Replies from: Jiro↑ comment by Jiro · 2014-07-03T17:17:42.385Z · LW(p) · GW(p)
Discussions like the one we are having here aren't suppressed
But outsiders wouldn't have any idea what we're talking about (unless they googled "Roko's Basilisk").
The problem with updating on the basilisk is that you don't have access to the reasoning based on which the basilisk got censored. If you want to update on whether someone makes rational decisions, it makes a lot of sense to focus on instances where the person is actually fully open about why he does what he does.
Just because you don't have all information doesn't mean that the information you do have isn't useful. Of course updating on "the Basilisk sounds like a crazy idea" isn't as good as doing so based on completely comprehending it, but that doesn't mean it's useless or irrational. Besides, LW (officially) agrees that it's a crazy idea, so it's not as if comprehending it would lead to a vastly different conclusion.
And again, LW has a conflict of interest in deciding that reading the Basilisk won't provide outsiders with useful information. The whole reason we point out conflicts of interest in the first place is that we think certain parties shouldn't make certain decisions. So arguing "LW should decide not to release the information because X" is inherently wrong--LW shouldn't be deciding this at all.
It's also a case where there was time pressure to make a decision, while most LW discussions aren't of that nature and intellectual positions get developed over months and years.
There was time pressure when the Basilisk was initially censored. There's no time pressure now.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-03T21:19:14.614Z · LW(p) · GW(p)
But outsiders wouldn't have any idea what we're talking about (unless they googled "Roko's Basilisk").
You underrate the intelligence of the folks who read LW. If someone wants to know, he googles it.
Replies from: gwern↑ comment by XiXiDu · 2014-07-03T18:36:32.079Z · LW(p) · GW(p)
Explaining to someone the whole story of what TDT is, in a way that makes the basilisk debate make sense to them, is not an easy task. You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex arguments that seem strange and silly to outsiders; the existence of such cases is no argument against those fields.
What does it mean to "make sense" of the basilisk debate? I am curious whether you are suggesting that it makes sense to worry about any part or interpretation of it.
No matter what you think about RationalWiki in general, I believe it does a good job at explaining it. But if that is not the case, you are very welcome to visit the talk page there and provide a better account.
↑ comment by James_Miller · 2014-07-01T05:28:41.077Z · LW(p) · GW(p)
I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here.
To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real and I don't mean to minimize this) feelings of readers.
Replies from: Will_BC, None, XiXiDu, shminux↑ comment by Will_BC · 2014-07-01T06:13:11.909Z · LW(p) · GW(p)
One could make the argument that anything that harms the mission of lesswrong's sponsoring organizations is to the detriment of mankind. I'm not opposed to that argument, but googling censorship of lesswrong did not turn up anything I considered to be particularly dangerous. Maybe that just means that the censorship is more effective than I would have predicted, or is indicative of a lack of imagination on my part.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-07-02T10:21:59.440Z · LW(p) · GW(p)
I'd say that "censorship" (things that could be classified or pattern-matched to this word) happens less than once in a year. That could actually contribute to why people speak so much about it; if it happened every day, it would be boring.
From my memory, this is "censored":
- inventing scenarios about Pascal's mugging by AI
- debating, even hypothetically, harm towards specific people or organization
- replying to a downvoted post (automatically penalized by -5 karma)
And options 2 and 3 are just common sense, and could happen on any website. Thus, most talk about "censorship" on LW focuses on option 1.
(By the way, if you learned about the "basilisk" on RationalWiki, here is a little thing I just noticed today: The RW article has a screenshot of dozens of deleted comments, which you will obviously associate with the incident. Please note that the "basilisk" incident happened in 2010, and the screenshot is from 2012. So this is not the censorship of the original debate. It is probably censorship of some "why did you remove this comment two years ago? let's talk about it forever and ever" meta-threads that were quite frequent, and IMHO quite annoying, at one time.)
Also, when a comment or article is removed, at least the message about the removal stays there. There is no meta-censorship (trying to hide the fact that censorship happened). If you don't see messages about removed comments in a given place, it means no comments were removed there.
Replies from: lmm↑ comment by lmm · 2014-07-04T22:58:59.905Z · LW(p) · GW(p)
There is no meta-censorship (trying to hide the fact that censorship happened).
And yet earlier in your post you're talking about some posts in 2012 about censorship in 2010 being deleted. Smells like meta-censorship to me.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-07-04T23:29:23.760Z · LW(p) · GW(p)
By meta-censorship I meant things like removing the content from the website without a trace, so that unless you look at the Google cache, you have no idea that anything happened, and unless someone quickly makes a backup, you have no proof that it happened.
Leaving the notices "this comment was removed" on the page is precisely what allowed RW to make a nice screenshot about LW censorship. LW itself provided evidence that some comments were deleted. Providing a hyperlink instead of a screenshot would probably give the same information.
Also, I am mentioning the basilisk now, and I have above 95% confidence that this comment will not be deleted. (One of the reasons is that it doesn't get into details; it doesn't try to restart the whole debate. Another reason is that I don't start a new thread.)
↑ comment by [deleted] · 2014-07-01T08:23:01.646Z · LW(p) · GW(p)
There's not a lot of actual censorship of dangerous information "for the future of mankind". Or at least, I rate that as fairly unlikely, given that when the scientific groundwork for a breakthrough has been laid, multiple people usually invent it in parallel, close to each other in time. Which means that unless you can get everyone who researches dangerous-level AI into LW, censoring on LW won't really help; it will just ensure that someone less scrupulous publishes first.
Replies from: Nornagest↑ comment by Nornagest · 2014-07-02T00:00:37.948Z · LW(p) · GW(p)
"Three may keep a secret, if two of them are dead."
Conspiracy is hard. If you don't have actual legal force backing you up, it's nearly impossible to keep information from spreading out of control -- and even legal force is by no means a sure thing. The existence of the Groom Lake air station, for example, was suspected for decades before publicly available satellite images made it pointless to keep up even the pretense of secrecy.
For an extragovernmental example, consider mystery religions. These aren't too uncommon: they're not as popular as they once were, but new or unusual religions still often try to elide the deepest teachings of their faiths, either for cultural/spiritual reasons (e.g. Gardnerian Wicca) or because they sound as crazy as six generations of wolverines raised on horse tranquilizers and back issues of Weird Tales (e.g. Scientology).
Now, where's it gotten them? Well, Gardnerian Wiccans will still tell you they're drinking from a vast and unplumbed well of secret truths, but it's trivially easy to find dozens of different Books of Shadows (some from less restrictive breakaway lineages, some from people who just broke their oaths) that agree on the broad strokes and many of the details of the Gardnerian mysteries. (Also many others that bear almost no resemblance beyond the name and some version of the Lesser Banishing Ritual of the Pentagram, but never mind that.) As to Scientology, Operation Clambake (xenu.net) had blown that wide open years before South Park popularized the basic outline of what's charmingly known as "space opera"; these days it takes about ten minutes to fire up a browser and pull down a more-or-less complete set of doctrinal PDFs by way of your favorite nautical euphemism. Less if it's well seeded.
"But these are just weird minority religions," you say? "Knowing this stuff doesn't actually harm my spiritual well-being, because I only care about the fivefold kisses when my SO's involved and there's no such thing as body thetans"? Sure, but the whole point of a mystery religion is selecting for conviction. Typically they're gated by an initiation period measured in years and thousands of dollars, not to mention some truly hair-raising oaths; I don't find it plausible that science broadly defined can do much better.
Replies from: None, Salemicus↑ comment by [deleted] · 2014-07-02T10:08:27.466Z · LW(p) · GW(p)
So I'm the only one here who actually took a hair-raising oath before making an account?
Replies from: gwern, Nornagest↑ comment by Nornagest · 2014-07-02T16:20:25.166Z · LW(p) · GW(p)
Nah, I hear we traditionally save that for after you earn your 10,000th karma point and take the Mark of Bayes.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-07-03T01:01:56.478Z · LW(p) · GW(p)
You probably need to get those 10K karma points from Main.
↑ comment by Salemicus · 2014-07-03T21:45:01.663Z · LW(p) · GW(p)
You are clearly right that conspiracy is hard. And yet, it is not impossible. Plenty of major events are caused by conspiracies, from the assassination of Julius Caesar to the recent coup in Thailand. In addition, to truly prevent a conspiracy, it is often necessary to do more than merely reveal it; if the conspirators have plausible deniability, then revealing (but not thwarting) the conspiracy can actually strengthen the plotters' hands, as they can now co-ordinate more easily with outside supporters.
Successful conspiracies, like any other social organization, need incentive compatibility. Yes, it's easy to find out the secrets of the Scientology cult. Not so easy to find out the secret recipe for Coca Cola, though.
↑ comment by XiXiDu · 2014-07-01T12:32:06.200Z · LW(p) · GW(p)
I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here.
To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real and I don't mean to minimize this) feelings of readers.
Have you asked the people who are able to censor information on LW, or do you just assume this to be the case?
Do the people in charge of LW censor information that is neither dangerous nor spam?
Replies from: James_Miller↑ comment by James_Miller · 2014-07-01T15:29:57.007Z · LW(p) · GW(p)
I infer it's the case from being a regular reader of LW. I don't know if LW censors other types of information, in part because spam is not a well-defined category.
↑ comment by Shmi (shminux) · 2014-07-03T06:35:15.322Z · LW(p) · GW(p)
dangerous information on LW, the danger is to the future of mankind
I think that would be far overstating the importance of this forum. If Eliezer/MIRI have some dark secrets (or whatever they consider to be dangerous knowledge), they surely didn't make it to LW.
↑ comment by Kaj_Sotala · 2014-07-01T14:55:46.280Z · LW(p) · GW(p)
I would assume the main explanation to be just "conspiracies are cool", the same reason why they pop up in all kinds of other fiction ranging from The X-Files to Babylon 5 to Deus Ex to the Illuminati card game to whatever.
↑ comment by David_Gerard · 2014-07-01T11:35:51.784Z · LW(p) · GW(p)
A "conspiracy" may be usefully generalised as any group of people trying to get something done.