Open thread, Jan. 25 - Jan. 31, 2016
post by username2 · 2016-01-25T21:07:02.746Z · LW · GW · Legacy · 170 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
170 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2016-01-25T21:41:17.528Z · LW(p) · GW(p)
If you haven't heard of it yet, I recommend the novel Crystal Society (freely available here; there's also a $5 Kindle version).
You could accurately describe it as "what Inside Out would have been if it looked inside the mind of an AI rather than a human girl, and if the society of mind had been composed of essentially sociopathic subagents that still came across as surprisingly sympathetic and co-operated with each other due to game theoretic and economic reasons, all the while trying to navigate the demands of human scientists building the AI system".
Brienne also had a good review of it here.
Replies from: Dagon↑ comment by Dagon · 2016-01-25T23:41:01.707Z · LW(p) · GW(p)
Hmm. The review scared me a bit, and the home page talking about incredibly nearsighted populist economics is a huge turn-off. Still, I probably need to read it.
Is the kindle version different in any way from the free mobi file? I'll gladly spend $5 for good formatting or easier reading, but would prefer not to pay Amazon if they're not providing value.
Replies from: Kaj_Sotala, iceman↑ comment by Kaj_Sotala · 2016-01-26T14:43:14.413Z · LW(p) · GW(p)
Is the kindle version different in any way from the free mobi file?
Haven't compared the two, but I would assume no. The formatting on the Kindle version was nothing fancy, just standard.
I think I saw the author commenting that he'd have put it up on Kindle for free as well if it was possible, and there's no mention of it on the story's site, so it's probably not intended as a "deluxe edition".
↑ comment by iceman · 2016-01-26T00:52:35.277Z · LW(p) · GW(p)
What's wrong with the economics on the home page? It seems fairly straightforward and likely. Mass technological unemployment seems at least plausible enough to be raised to attention. (Also.)
Replies from: Dagon↑ comment by Dagon · 2016-01-26T01:31:43.001Z · LW(p) · GW(p)
It (and your link) treats "employment" as a good. This is ridiculous - employment is simply an opportunity to provide value for someone. Goods and services becoming cheap doesn't prevent people doing things for each other; it just means different things become important, and a larger set of people (including those who are technically unemployed) get more stuff that's now near-free to create.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2016-01-26T05:42:52.855Z · LW(p) · GW(p)
Goods and services becoming cheaper is basically the economist's definition of progress, so that's all good.
a larger set of people (including those who are technically unemployed) get more stuff that's now near-free to create.
There is no natural law which ensures that everyone has earnings potential greater than cost of living. New tech isn't making food or housing cheaper fast enough, and can't be expected to in the future. AI could suddenly make most of the work force redundant without making housing or food free.
Replies from: TheAncientGeek, Lumifer↑ comment by TheAncientGeek · 2016-01-27T16:24:35.683Z · LW(p) · GW(p)
There is no natural law which ensures that everyone has earnings potential greater than cost of living.
Indeed not, but that correct idea often leads people to the incorrect idea that robotics-induced disemployment, and subsequent impoverishment, are technological inevitabilities. Whether everybody is going to have enough income to eat depends on how the (increased) wealth of such a society is distributed... basically, to get to the worst-case scenario, you need a sharp decline of interest in wealth redistribution, even compared to US norms. It's a matter of public policy, not technological inevitability. So it's not really the robots taking over that people should be afraid of, it's the libertarians taking over.
New tech isn't making food or housing cheaper fast enough,
I am not sure what that is supposed to mean. There is enough food and living space to go round, globally, but it is not going to everyone who needs it, which is, again, a redistribution problem.
↑ comment by Lumifer · 2016-01-26T15:53:59.734Z · LW(p) · GW(p)
New tech isn't making food or housing cheaper fast enough, and can't be expected to in the future
First, what's "fast enough"? Look up statistics on what fraction of its income an average American family spent on food a hundred years ago versus now.
Second, why don't you expect it in the future? Biosynthesizing food doesn't seem to be a huge problem in the context that includes all-powerful AIs...
Replies from: jacob_cannell↑ comment by jacob_cannell · 2016-01-26T17:15:54.224Z · LW(p) · GW(p)
First, what's "fast enough"?
Fast enough would be Moore's law - the price of food falling by 2x every couple of years. Anything less than this could lead to biological humans becoming economically unviable, even as brains in vats.
Look up statistics on what fraction of its income an average American family spent on food a hundred years ago versus now.
Like this?
Second, why don't you expect it in the future? Biosynthesizing food doesn't seem to be a huge problem in the context that includes all-powerful AIs...
Biosynthesized food is an extremely inefficient energy conversion mechanism versus, say, solar power. Even in the ideal case, the human body burns about 100 watts. When AGI becomes more power-efficient than that, even magical 100% efficient solar->food isn't enough for humans to be competitive. When AGI requires less than 10 watts, even human brains in vats become uncompetitive.
A future of all-powerful AIs is the future where digital intelligence becomes more efficient than biological. So the only solutions where humans remain competitive involve uploading.
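(A back-of-envelope sketch of that comparison in Python; the 100 W and 10 W figures are the ones above, everything else is an illustrative assumption rather than a claim about real AGI hardware:)

# Ongoing energy cost of a biological worker vs. a hypothetical AGI
# doing equivalent cognitive work. Illustrative numbers only.
HUMAN_BODY_WATTS = 100.0  # rough metabolic draw of a human body
AGI_WATTS = 10.0          # hypothetical future AGI at the same output

# Even with magical 100% efficient solar->food conversion, the human
# still needs the full 100 W delivered as food energy, so:
print(HUMAN_BODY_WATTS / AGI_WATTS)  # 10.0 - the human costs 10x as much to run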
Replies from: Lumifer↑ comment by Lumifer · 2016-01-26T17:25:14.439Z · LW(p) · GW(p)
price of food falling by 2x every couple of years. Anything less than this could lead to biological humans becoming economically unviable,
Why so? Human populations do not double every couple of years.
When AGI requires less than 10 watts, even human brains in vats become uncompetitive.
Hold on. We're not talking about competition between computers and humans. You said that in the future there will not be enough food for all (biological) humans. That has nothing to do with competitiveness.
Replies from: gjm↑ comment by gjm · 2016-01-26T19:07:02.657Z · LW(p) · GW(p)
We're not talking about competition between computers and humans. You said that in the future there will not be enough food for all (biological) humans.
I think you are misremembering the context. Here's the first thing he said on the subject:
There is no natural law which ensures that everyone has earnings potential greater than cost of living. New tech isn't making food or housing cheaper fast enough, and can't be expected to in the future. AI could suddenly make most of the work force redundant without making housing or food free.
and that is explicitly about the relationship between food cost and earning power in the context of AI.
Replies from: Lumifer↑ comment by Lumifer · 2016-01-26T19:38:19.877Z · LW(p) · GW(p)
I was expressing my reservations about the "New tech isn't making food or housing cheaper fast enough" part.
Of course not everyone has earning potential greater than the cost of living. That has always been so. People in this situation subsist on charity (e.g. of their family) or they die.
As to an AI making work force redundant, the question here is what's happening to the demand part. The situation where an AI says "I don't need humans, only my needs matter" is your classic UFAI scenario -- presumably we're not talking about that here. So if the AI can satisfy everyone's material needs (on some scale from basics to luxuries) all by itself, why would people work? And if it's not going to give (meat) people food and shelter, we're back to the "don't need humans" starting point -- or humans will run a parallel economy.
Replies from: gjm, TheAncientGeek↑ comment by gjm · 2016-01-26T21:18:35.060Z · LW(p) · GW(p)
I take it jacob_cannell has in mind neither a benevolent godlike FAI nor a hostile (or indifferent-but-in-competition) godlike UFAI, in either of which cases all questions of traditional economics are probably off the table, but rather a gradual encroachment of non-godlike AI on what's traditionally been human territory. Imagine, in particular, something like the "em" scenarios Robin Hanson predicts, where there's no superduperintelligent AI but lots of human-level AIs, probably the result of brain emulation or something very like it, who can do pretty much any of the jobs currently done by biological humans.
If the cost of running (or being) an emulated human goes down exponentially according to something like Moore's law, then we soon have -- not the classic UFAI scenario where humans are probably extinct or worse, nor the benevolent-AI scenario where everyone's material needs are satisfied by the AI -- but an economy that works rather like the one we have now except that almost any job that needs a human being to do it can be done quicker and cheaper by a simulated human being than by a biological one.
At that point, maybe some biological humans are owners of emulated humans or the hardware they run on, and maybe they can reap some or all the gains of the ems' fast cheap work. And, if that happens, maybe they will want some other biological humans to do jobs that really do need actual flesh. (Prostitution, perhaps?) Other biological humans are out of luck, though.
Replies from: Lumifer↑ comment by Lumifer · 2016-01-27T00:33:47.697Z · LW(p) · GW(p)
Given that jacob_cannell is talking about food and housing, I don't think he has the ems scenario in mind.
I am not a big fan of ems, anyway -- I think this situation as described by Hanson is not stable.
Replies from: gjm↑ comment by gjm · 2016-01-27T01:15:48.272Z · LW(p) · GW(p)
Given that jacob_cannell is talking about food and housing, I don't think he has the ems scenario in mind.
The scenario I think he has in mind is one in which there are both biological humans and ems; he identifies more with the biological humans, and he worries that the biological humans are going to have trouble surviving because they will be outcompeted by the ems.
(I'm pretty skeptical about Hansonian ems too, for what it's worth.)
Replies from: jacob_cannell↑ comment by jacob_cannell · 2016-01-29T17:56:14.851Z · LW(p) · GW(p)
I think the Hansonian em scenario is probably closer to the truth than the others, but it focuses perhaps too much on generalists. The deep learning explosion will also result in vastly powerful specialists that are still general enough to do complex human jobs, but limited or savant-like in other respects. Yes, there's a huge market for generalists, but that isn't the only niche.
Take this Go AI, for example - critics like to point out that it can't drive a car, but why would you want it to? Car driving is a different niche, which will be handled by networks specifically trained for that niche to a superhuman level. A generalist AGI could 'employ' these various specialists as needed, perhaps on fast timescales.
Specialization in human knowledge has increased over time; AI will accelerate that trend.
↑ comment by TheAncientGeek · 2016-01-28T09:47:27.771Z · LW(p) · GW(p)
So if the AI can satisfy everyone's material needs (on some scale from basics to luxuries) all by itself, why would people work?
If people own the advanced robots or AIs that are responsible for most production, why would they be impoverished by them? More to the point, why would they want the majority of people who don't own automated factories to be impoverished, since that would mean they had no one to sell to? There's no law of economics saying that in a wealthy society most people would starve; rather, to keep an economy going in anything like its present form, you have to have redistribution. In such a future, tycoons would be pushing for basic income -- it's in their own interests.
comment by philh · 2016-01-28T17:49:34.036Z · LW(p) · GW(p)
The CFAR fundraiser has only a few days left, and is at $150k out of $400k. If you're on the fence about donating, this is a good time. If you haven't already, you might want to read "Why CFAR?".
I can't donate from this computer, but I intend to donate £875 (~$1250) before the fundraiser expires, representing four months of tithing upfront.
Replies from: philh↑ comment by philh · 2016-01-29T21:12:21.756Z · LW(p) · GW(p)
Donated, but it only came to $1205.
(CFAR, did you lose the currency selection option on your donate page in the past few days? I appreciated that option. It's easier than guessing an approximate dollar amount and then potentially going back if it converts to a GBP amount sufficiently far from what I intended.)
comment by DataPacRat · 2016-01-28T17:23:01.924Z · LW(p) · GW(p)
Seeking comments
I'm trying a writing experiment, and want to design as much of a story as possible before starting writing it. I want to make sure I'm not forgetting any obvious details, or leaving out important ideas, so I'd appreciate any comments you can add to my draft design document at https://docs.google.com/document/d/1XcgNwELHCU-r7GuYUgDNDDIviThd8Y7Bdto_kMIcmlI/edit . Thank you for your help.
Replies from: MrMind, polymathwannabe↑ comment by MrMind · 2016-01-29T09:20:12.536Z · LW(p) · GW(p)
I've briefly looked it over and it's interesting. You've obviously focused a lot on world-building, so kudos for that.
What I'm not seeing is an interesting development: there are many directions the plot could go, but you need to choose one and explore it.
I don't know much about the literary trope of the last man on Earth, but it's interesting that the protagonist might not be the last man but the last sane man, akin to "I Am Legend".
Will there be any kind of conflict? How do you plan the story to end?
↑ comment by DataPacRat · 2016-01-29T20:10:57.379Z · LW(p) · GW(p)
I've briefly looked it over and it's interesting. You've obviously focused a lot on world-building, so kudos for that.
And I'm still building. I neglected some numbers in the rocket design, and now it looks like I have to rebuild the whole idea from scratch.
What I'm not seeing is an interesting development: there are many directions the plot could go, but you need to choose one and explore it.
I can see that. To me, the most interesting idea so far is that when the risk of gathering new information is that high, the correct decision to be made may be to act to eliminate a potential threat without knowing whether or not that threat really exists. My current intuition is that I'm probably going to build the story around that choice, and the consequences thereof. (Hopefully without falling into the same sorts of story-issues that cropped up in "The Cold Equations".)
Will there be any kind of conflict?
At the moment, my memories of high-school English classes say that the conflicts will primarily be "Man vs Nature" (eg, pulling a 'The Martian' to build infrastructure faster than it can glitch out) and "Man vs Self" ("Space madness!"), with some "Man vs Man" showing up later as diverging copies' goals diverge.
How do you plan the story to end?
I think I'll keep things hopeful, and finish up with a successful launch to/arrival at Tau Ceti.
Replies from: MrMind↑ comment by MrMind · 2016-02-01T09:07:48.437Z · LW(p) · GW(p)
I neglected some numbers in the rocket design, and now it looks like I have to rebuild the whole idea from scratch.
Ehrm... I think you should devote the main chunk of effort to developing the story, and then derive the technical data you need. Very little of that world-building stuff will actually go onto the page (if you want readers, that is).
Replies from: DataPacRat↑ comment by DataPacRat · 2016-02-01T19:09:41.638Z · LW(p) · GW(p)
I've settled reasonably firmly on the questions I want to focus on - how to make decisions when gathering useful information to help make those decisions carries a risk of immense cost, especially when the stakes are high - and I've just finished working out the technical details of what my protagonist will have been doing up to the point where he can't put off making those decisions any longer.
I have a rule-of-thumb for science-fiction, in that knowing what a character /can't/ do is more important than knowing what they /can/, so by having worked out all the technical stuff up to that point, I now have a firm grasp of the limits my protagonist will be under when he has to deal with those choices. If I'm doing things right, the next part of the design process is going to be less focused on the technology, and more on decision theory, AI risk, existential risk, the Fermi Paradox and the Great Filter, and all that good, juicy stuff.
Replies from: MrMind↑ comment by polymathwannabe · 2016-01-28T19:42:10.829Z · LW(p) · GW(p)
I have read it and added several comments.
Replies from: DataPacRat↑ comment by DataPacRat · 2016-01-28T22:42:34.298Z · LW(p) · GW(p)
And I thank you for your input; I'll definitely be using it to improve the design doc.
comment by [deleted] · 2016-01-28T08:40:56.728Z · LW(p) · GW(p)
Are there QALY or DALY reference tables by condition, disease or event?
If not, constructing one would be of unspeakable value to the EA community, not to mention conventional academics and decision makers.
Replies from: gwern↑ comment by gwern · 2016-01-28T16:23:17.078Z · LW(p) · GW(p)
https://research.tufts-nemc.org/cear4/AboutUs/WhatistheCEARegistry.aspx
Also look into the Global Burden of Disease.
Replies from: None↑ comment by [deleted] · 2016-01-28T18:20:46.498Z · LW(p) · GW(p)
- The CEA Registry is interesting, but it doesn't have a standardised database of QALYs or DALYs by condition.
- The Global Burden of Disease doesn't format its content in a fashion suitable for direct comparison of health states, unweighted by prevalence. I have a preference for QALYs over DALYs anyway.
↑ comment by gwern · 2016-01-28T20:39:05.659Z · LW(p) · GW(p)
I'm not sure what you mean. If you search CEA, you get back utilities. If you search "Alzheimer's" in https://research.tufts-nemc.org/cear4/SearchingtheCEARegistry/SearchtheCEARegistry.aspx you get back
Article ID | Health State | Weight
2013-01-14161 | Alzheimer's disease, severe, profound, or terminal disease | 0.34
2013-01-14161 | Alzheimer's disease, mild/moderate | 0.6
2012-01-10088 | Severe Alzheimer's Disease | 0.37
2012-01-10088 | Moderate Alzheimer's Disease | 0.54
2012-01-10088 | Mild Alzheimer's Disease | 0.68
2012-01-10088 | Amnestic mild cognitive impairment (AMCI) | 0.73
2012-01-10076 | Nursing home care with Alzheimer's disease (severe-end stage) | 0.34
2012-01-10076 | Home care with Alzheimer's disease | 0.6
2012-01-08699 | Caregiver of a person with Alzheimer's Disease | 0.9
2012-01-08699 | Patients with Alzheimer's disease | 0.408
A utility weight over a year is the QALY; if you die in a year of Alzheimer's, and the weight of that year is 0.6, then you lost 0.4 QALYs compared to if you had lived that year in perfect health and then died, no?
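(A minimal sketch of that arithmetic in code, using the 0.6 mild/moderate weight from the table above; illustrative only:)

def qalys_lost(utility_weight, years=1.0):
    # QALYs lost relative to spending the same time in perfect health
    # (weight 1.0): (1 - weight) * duration in years.
    return (1.0 - utility_weight) * years

print(qalys_lost(0.6))  # 0.4 QALYs lost over one year at weight 0.6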
comment by Fluttershy · 2016-01-28T10:07:24.955Z · LW(p) · GW(p)
I'm trying to help a dear friend who would like to work on FAI research, to overcome a strong fear that arises when thinking about unfavorable outcomes involving AI. Thinking about either the possibility that he'll die, or the possibility that an x-risk like UFAI will wipe us out, tends to strongly trigger him, leaving him depressed, scared, and sad. Just reading the recent LW article about how a computer beat a professional Go player triggered him quite strongly.
I've suggested trying to desensitize him via gradual exposure; the approach would be similar to the way in which people who are afraid of snakes can lose their fear of snakes by handling rope (which looks like a snake) until handling rope is no longer scary, and then looking at pictures of snakes until such pictures are no longer scary, and then finally handling a snake when they are ready. However, we've been struggling to think of what a sufficiently easy and non-scary first step might be for my friend; everything I've come up with as a first step akin to handling rope has been too scary for him to want to attempt so far.
I don't think that I'll even be able to convince my friend that desensitization training will be worth it at all--he's afraid that the training might trigger him, and leave him in a depression too deep for him to climb out of. At the same time, he's so incredibly nice, and he really wants to help with FAI research, and maybe even work for MIRI in the "unlikely" (according to him) event that he is able to overcome his fears. Are there reasonable alternatives to, say, desensitization therapy? Are there any really easy and non-scary first steps he might be okay with trying if he can be convinced to try desensitization therapy? Is there any other advice that might be helpful to him?
Replies from: Risto_Saarelma, Richard_Kennaway, Bryan-san, ChristianKl↑ comment by Risto_Saarelma · 2016-01-29T19:46:11.829Z · LW(p) · GW(p)
This sounds like someone whose salient feature is math anxiety from high school asking how to be a research director at CERN. It's not just that the salient feature seems at odds with the task; it's that the task isn't exactly something you just walk into, while you sound like you're talking about helping someone overcome a social phobia by taking a part-time job at a supermarket checkout. Is your friend someone who wins International Math Olympiads?
↑ comment by Richard_Kennaway · 2016-01-28T11:27:46.531Z · LW(p) · GW(p)
He sounds like someone with a phobia of fire wanting to be a fireman. Why does he want to work on FAI? Would not going anywhere near the subject work for him instead?
Replies from: Fluttershy↑ comment by Fluttershy · 2016-01-28T21:11:41.521Z · LW(p) · GW(p)
He wants to work on FAI for EA/utilitarian reasons--and also because he already has many of the relevant skills. He's also of the opinion that working on FAI is of much higher value than, say, working on other x-risks or other EA causes.
↑ comment by Bryan-san · 2016-01-28T19:20:34.832Z · LW(p) · GW(p)
If someone has anxiety about a topic, I suggest they go after all the normal anxiety-treating methods. SSC has a post about Things That Sometimes Work If You Have Anxiety, though actually going to see a therapist and getting professional help would likely help more.
If he wants to try exposure therapy, good results have apparently been reported recently from doing that while on propranolol.
↑ comment by ChristianKl · 2016-01-28T10:46:24.998Z · LW(p) · GW(p)
Working on AI research and working on FAI research aren't the same thing. I think it's likely a bad idea to not distinguish between the two when talking with a friend who wants to go into research and fears that UFAI will wipe us out.
More to the core of the issue: desensitization training is a slow way to deal with fears, and I'm not sure that it even works in this context. A good therapist or coach has tools to help people deal with fears.
Replies from: Fluttershy↑ comment by Fluttershy · 2016-01-28T10:55:30.136Z · LW(p) · GW(p)
Oops. I've tried to clarify that he's only interested in FAI research, not AI research on the whole.
Replies from: username2, Richard_Kennaway, Lumifer↑ comment by Richard_Kennaway · 2016-01-28T11:41:56.979Z · LW(p) · GW(p)
FAI is only a problem because of AI. The imminence of the problem depends on where AI is now and how rapidly it is progressing. To know these things, one must know how AI (real, current and past AI, not future, hypothetical AI, still less speculative, magical AI) is done, and to know this in technical terms, not fluff.
I don't know how much your friend knows already, but perhaps a crash course in Russell and Norvig, plus technical papers on developments since then (i.e. Deep Learning) would be appropriate.
↑ comment by Lumifer · 2016-01-28T15:25:59.456Z · LW(p) · GW(p)
he's only interested in FAI research
There's no such thing, any more than there is research into alien flying saucers with nanolasers of doom. There's a lot of fiction and armchair speculation, but that's not research.
Any reason he's not trying to fix his phobia by conventional means?
Replies from: Fluttershy↑ comment by Fluttershy · 2016-01-28T21:18:24.197Z · LW(p) · GW(p)
What I mean is that he'd be interested in working for MIRI, but not, say, OpenAI, or even a startup where he'd be doing lots of deep learning, if he overcomes his phobia.
Replies from: Lumifer
comment by Fluttershy · 2016-01-28T09:16:23.117Z · LW(p) · GW(p)
The latest SSC links post links to this post on getting government grants, which sort of set off my possibly-too-good-to-be-true filter, despite the author's apparent sarcasm in the example he gave about bringing in police officers to talk with schoolchildren. Can anyone more knowledgeable comment on this article? Is it realistic for EAs to go out and expect to find government grants that they could get a reasonable amount of utility out of?
Replies from: Zubon, ChristianKl↑ comment by Zubon · 2016-01-29T20:32:12.347Z · LW(p) · GW(p)
My experience is mostly with formula grants, where the grant is mostly a formality, like the EFT reimbursement. Many grants have expected recipients. Others are desperately seeking new applicants and ideas. From the outside, it is difficult to tell which is which; from the inside, grantor agencies often have trouble telling why random outsiders are applying to their intentionally exclusive grants, while having trouble finding good applicants for the ones where they actually want new folks.
↑ comment by ChristianKl · 2016-01-28T21:22:26.053Z · LW(p) · GW(p)
The ability to write a successful grant is a skill. Some people in the EA community could likely do this successfully if they focus on getting good at it. Other people might not have the relevant skill set.
comment by Daniel_Burfoot · 2016-01-26T03:47:32.843Z · LW(p) · GW(p)
Anyone in Singapore? I'm here for a month, PM me and I'll buy you a beer lah.
comment by cleonid · 2016-01-25T21:58:22.858Z · LW(p) · GW(p)
From Omnilibrium:
Replies from: Fluttershy↑ comment by Fluttershy · 2016-01-26T09:30:17.255Z · LW(p) · GW(p)
The title of the article on charity seems clickbait-y to me. I think that if a charity had negative utility, that would imply that burning a sum of money would be preferable to donating that money to that charity. However, this is not the thesis of the article; instead, the article's thesis is:
when non-profit organizations conduct a successful marketing campaign they do not collect more money for charity. Instead they take it away from other charitable causes.
Replies from: mwengler, None
↑ comment by mwengler · 2016-01-27T14:40:54.594Z · LW(p) · GW(p)
I think that if a charity had negative utility, that would imply that burning a sum of money would be preferable to donating that money to that charity.
If there are two charities, one which feeds a homeless population for $3/day and a second which feeds the same population the same food for $6/day, AND people tend to give some amount of money to one charity or the other, but not both, then it seems pretty reasonable to describe the utility of the more expensive charity as negative. It is not that it would be better to burn my contribution, but rather that I am getting $3 worth of good from a $6 donation. Out-and-out burning money being superior to donating it is not the only way to interpret negative utility.
If you have $6 to give towards feeding the homeless, it would be better to burn $2 and donate $4 to the cheaper provider than to give the entire $6 to the more expensive charity. But only in the same sense that it would be better to burn $3000 and buy a particular car for $10,000 than to burn no money and buy that exact same car for $14,000. Wherever there are better and worse deals, burning less than the full savings can be worked in as part of a superior choice. This doesn't have anything to do with whether these are charities or for-profit businesses.
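(The same point as a quick sketch in code; the dollar figures are the ones from this comment:)

# $3/day at the cheap charity, $6/day at the pricey one, same food.
CHEAP, PRICEY = 3.0, 6.0
budget = 6.0

days_fed_pricey = budget / PRICEY                 # 1.0 day of food
days_fed_burn_and_cheap = (budget - 2.0) / CHEAP  # burn $2, donate $4 -> ~1.33 days
# Burning $2 and donating the remainder to the cheap provider still buys
# more food than giving the entire $6 to the expensive charity.
print(days_fed_pricey, days_fed_burn_and_cheap)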
Replies from: Error↑ comment by Error · 2016-01-27T17:51:36.826Z · LW(p) · GW(p)
I've always thought of negative utility as "cost exceeds benefits"; but it seems to be getting used here as if "opportunity cost exceeds benefits", which is not the same thing.
I'm not sure which is correct. Not that familiar with utilitarianist nuts and bolts.
Replies from: mwengler↑ comment by mwengler · 2016-01-27T18:49:59.043Z · LW(p) · GW(p)
I'm not sure which is correct. Not that familiar with utilitarianist nuts and bolts.
As with so many things, if there is more than one way to interpret something, there is generally not much to be gained by choosing the interpretation that makes it an error when there is an interpretation that makes sense. Clearly if a new charity sets up that takes twice the cost to provide the same benefit, and people switch donations from the cheaper charity to the more expensive one, utility produced has decreased compared to the counterfactual where the new, more expensive charity was not set up.
So whatever terminology you prefer: 1) opportunity cost is a real thing, and is arguably the only good way to compare money to food quantitatively, and 2) the point of the original article is a decrease in utility from adding a charity, which is a sensible idea and well within the bounds of reasonable interpretation of the title in question.
comment by polymathwannabe · 2016-01-29T21:25:49.710Z · LW(p) · GW(p)
More on the replicability problem.
comment by [deleted] · 2016-01-29T16:00:45.124Z · LW(p) · GW(p)
How would you go about teaching 'general science, in particular biology, preferably plants' to a six-year-old who plays Go (and wins)? I used to think she was just a cute kid who listens for much longer than I have any right to expect, and now this.
Replies from: Vaniver, bogus↑ comment by Vaniver · 2016-01-29T16:38:50.905Z · LW(p) · GW(p)
Have you read much Feynman? He has some stories of how his father encouraged him to develop the scientific mindset (like this) that might be helpful. The core thing seems to be focusing on the act of thinking things through, not the memory trick of knowing how things are labeled. I'm not sure how to incorporate that into biology, though.
Replies from: None, None↑ comment by [deleted] · 2016-03-04T14:31:22.837Z · LW(p) · GW(p)
(Braggity brag)
Today, we played a game of identifying common (Ukrainian) foodstuffs. There were 4 plastic bottles with wide mouths (1 transparent, 3 opaque but somewhat see-through when light shines through them), each holding a small amount of stuff, and separately a handful of beads. The transparent bottle contained rice; the other ones, either soya or wheat or buckwheat (I'm thinking mixes for next time, and maybe some other things like pepper...). The task was to learn what was inside without opening and looking.
At first she didn't know how to, then she didn't know how to describe the sounds made by sloshing the stuff around, then she tried to brute-force it by guessing, then spilled the rice, vacating that bottle for the beads (a reference point for larger particles), then described smells I could not detect (but that's neither here nor there), then misjudged a shadow's size, and then we ran out of time just as we came to the conclusion that the last bottle contained either peas or soya or lentils. I had to give her some hints, since she didn't know everything to be found in the kitchen (and soya is certainly far from the commonest of things, but she knew it), and to look for pepper, which I hadn't anticipated. But still. Love my job.
↑ comment by [deleted] · 2016-01-29T20:35:15.670Z · LW(p) · GW(p)
Yes, I agree, but there are... complications: 1) she cannot read, can write only half the alphabet, and can't count beyond 30, because 'nobody at home has the time to teach me', and I keep thinking 'surely teaching her to read is the best thing I can ever do', but she wants to talk about botany, of all things, and I don't really have any formal power to make her do anything else; 2) she doesn't understand the value of observations and records, and I don't want to show her 'tricks' like pigment separation, because inferential distance and so on, so we are stuck with 'simple fast transformations', which are very difficult for me to keep fitting into some kind of system, so I just wing it; 3) for an hour and a half! Torture! We keep veering off into binocular vision, birds and so on, but it's like an All-Biology Test and I hate the lack of structure, but I cannot just tell her to go away; 4) and until today I kind of thought she was a good little polite girl who humored my rants.
I can show her pictures of time series (of some developments), but it's something you prepare without haste and usually with much trouble. I think it would be a good exercise in pattern-matching, but... there are so many things which can go wrong.
Replies from: username2, Viliam↑ comment by username2 · 2016-01-30T11:33:11.781Z · LW(p) · GW(p)
I think that children are very good at learning even the most unexpected things. I think that whatever you do things are unlikely to go wrong, as long as you pay some attention to what she likes and what bores her and don't force her to do things she hates. Children are curious, but their attention benefits from some direction.
↑ comment by Viliam · 2016-01-30T21:27:18.936Z · LW(p) · GW(p)
Does she object against learning the alphabet in principle, or merely because she doesn't consider it the best use of the time when she is with you?
If it's the latter, you could prepare her a tool for learning the alphabet when she is back at home. Just print a paper containing all the letters of the alphabet, and next to each letter put two or three pictures of things that start with that letter. ("А = автомобіль (car), антена (antenna); Б = банан (banana), ..."; free pics.) Or maybe make two-sided cards, so she can practice active recall. Then give her a book on botany as motivation.
Replies from: None↑ comment by [deleted] · 2016-01-31T09:35:50.417Z · LW(p) · GW(p)
A book on botany with great pictures I already gave her, and her relatives read to her from it. I think I'll have a chat with her dad, though. There are many primers she can use which are much better than anything I can print out...
Replies from: ScottL↑ comment by ScottL · 2016-01-31T11:11:15.732Z · LW(p) · GW(p)
The book sounds good. I think ultimately there are two things that are important here: the first is teaching her about botany and the second is to instill and build on her drive to want to learn the material or more broadly a problem solving/curious mindset. In my opinion, the second one is more important.
Two pieces of advice:
- Forget about the structure. Just think about setting up an environment that will let her explore, play and teach herself. The book is a good start. Maybe, a plant for her room would be a good idea.
- Explore with her. The best thing you can do, I reckon, is to take her outside and explore with her. I don't know much about botany, but I think it would be cool, as an example, if you picked up a flower and pointed out to her that most people are born with two arms, and then asked: "So, would that mean that the number of petals on this type of flower will always be the same?" Then, no matter what she says, you can go to a group of the flowers and let her count the petals to see if they're the same. Then you can ask another question: are the buds the same, etc.
comment by k_ebel · 2016-01-26T21:43:59.416Z · LW(p) · GW(p)
Hello!
I'm getting into the Bay area this afternoon for the CFAR workshop starting tomorrow. I'm looking for advice on how to spend the time and also where might be a good place to look for affordable lodging for one evening.
I'd initially thought about crashing at the Piedmont house hostel as it's cheap and close enough that I could visit CFAR before heading over tomorrow, but it appears to be sold out. I figured there are probably folks here who know the area or have visited, so I didn't see any harm in asking for info, or checking to see if anyone was getting up to anything.
:) Kim
Replies from: Gunnar_Zarncke, Manfred↑ comment by Gunnar_Zarncke · 2016-01-26T22:13:33.054Z · LW(p) · GW(p)
Hi k_ebel. I'm not sure whether this is the best place for getting in touch with people on short notice (though I'm happy to be corrected). A more immediate way to get in touch and discuss things is the LW Slack https://wiki.lesswrong.com/wiki/Less_Wrong_Slack or the LW IRC.
comment by RaelwayScot · 2016-01-26T20:12:33.544Z · LW(p) · GW(p)
What are your thoughts on the refugee crisis?
Replies from: fubarobfusco, Viliam, Gunnar_Zarncke, username2↑ comment by fubarobfusco · 2016-01-26T20:52:25.154Z · LW(p) · GW(p)
There's a whole -osphere full of blogs out there, many of them political. Any of those would be better places to talk about it than LW.
Replies from: Kaj_Sotala, RaelwayScot↑ comment by Kaj_Sotala · 2016-01-27T11:22:34.387Z · LW(p) · GW(p)
What's wrong with LW?
Replies from: Elo↑ comment by RaelwayScot · 2016-01-26T21:02:49.208Z · LW(p) · GW(p)
Then which blogs do you agree with on the matter of the refugee crisis? (My intent is just to crowd-source some well-founded opinions because I'm lacking one.)
Replies from: polymathwannabe↑ comment by polymathwannabe · 2016-01-27T14:48:24.636Z · LW(p) · GW(p)
LW avoids discussing politics for the same reason prudent Christmas dinner hosts avoid discussing politics. If you wish to take your crazy uncle to the pub for a more heated chat, there's Omnilibrium.
↑ comment by Viliam · 2016-01-26T21:19:19.019Z · LW(p) · GW(p)
Another sad example of a problem that would be difficult but not impossible to solve rationally in theory, but in real life the outcome will be very far from optimal for many reasons (human stupidity, mindkilling, conflicts of interest, problems with coordination, etc.).
There are many people trying to escape from a horrible situation, and I would really want to help them. There are also many people pretending to be in the same situation in order to benefit from any help offered to the former; that increases the costs of the help. A part of what created the horrible situation is in the human heads, so by accepting the refugees we could import a part of what they are trying to escape from.
As usual, the most vocal people go to two extremes: "we should not give a fuck and just let them die", or trying to censor the debate about all the possible risks (including the things that already happened). Which makes it really difficult to publicly debate solutions that would both help the refugees and try to reduce the risk.
Longer-term consequences: If we let the refugees in, it will motivate even more people to come. If we don't let the refugees in, we are giving them the choice to either join the bad guys or die (so we shouldn't be surprised if many of them choose to join the bad guys).
Supporting Assad as a lesser evil than ISIS is probably the best realistic option, but kinda disappointing. (Also, anything that gives more power to Russia creates more problems in the long term.) It doesn't solve the underlying problem: that the states in the area are each a random mix of religions and ethnicities, ready to kill each other. A long-term solution would be rewriting the map, to split the groups who want to cut each other's throats into different states. No chance of making Turkey agree to having Kurdistan as a neighbor. Etc.
If I were a king of Europe, my solution would be more or less to let the refugees in, but to have them live under Orwellian conditions, which would expire 5 or 10 years after their arrival, assuming they committed no crimes (a trivial crime would merely extend the period; a nontrivial crime would lead to deportation, with biometric data taken so the person doesn't get a second chance). For example, there would be a limit of one refugee family per street, so they cannot create ghettos. Mandatory lessons on how to fit into the culture. Islam heavily controlled, with only the most nonviolent branches allowed.
↑ comment by Gunnar_Zarncke · 2016-01-29T14:02:05.039Z · LW(p) · GW(p)
Tim on the LW Slack gave an impressive illustration of the different levels at which the refugee crisis can be seen. He was referring to Constructive Development Theory, which you might want to look up for further context. I quote verbatim with his permission:
As an example of different thinking at different levels, consider the "boat people" issue here in Australia. Australia is an island, so the only way to get here is by boat or by air. We fine airlines and commercial ships who bring people without valid visas. People get a tourist visa and overstay (thus we make it hard for people from poor countries to get tourist visas, as my sister-in-law can attest) or they come here by boat without a visa. Many of the people who arrive by boat are refugees according to the UN definition, and others are economic migrants.
Initially the government did not know what to do. Then they implemented a solution to turn back the boats. Then this policy was rescinded to great fanfare and the boats resumed. Many people drowned in these leaky, un-seaworthy boats. Then the policy was resumed again. Currently the boats are diverted to various remote islands and the people are resettled in various places that are not first-world countries and are usually not considered desirable places to live, e.g. Papua New Guinea. Before resettlement, which can take years, people are placed in detention. This includes children. Conditions are unpleasant.
This policy is very controversial. Many people regard it as morally indefensible. The detention of children is a particularly hot issue. The Uniting Church around the corner has a sign saying "children do not belong in detention".
I will try to describe how level 2 3 4 and 5 people might approach the issue. Please bear in mind I am not trying to argue a position on the issue but just to illustrate how people might approach it. You will see they often use the same word to mean very different things. Also, you will see that people tend to misinterpret the thinking of people at a higher level, in terms they understand. This usually means they map the higher level thinking into a lower level.
Level 2 (primary school / gangster): These migrants might take my job, or compete for government money or scarce housing. So I don't want them. I might have to pay higher taxes to support them. So I don't want them. I don't like people who look or act different from me. They smell funny and talk funny, so I don't want them. The migrants will boost demand for housing and infrastructure, which will be good for my company, and I will make more money. So I want them.
Level 3 (teenage idealist or person with 'tribal' loyalties): (eg IMHO http://greens.org.au/policies/immigration-refugees) Jesus himself was a refugee. We should be compassionate and let them stay. If people came all this way they must have a good reason so we should let them stay. This policy is cruel and must end. You simply cannot have children in detention. This is not an issue of defense or border security. This problem is our fault because we participated in .
Level 2 people tend to think of Level 3 people as bleeding hearts, out of touch with the real world. They can also get very angry, because the migrants tend to end up in the suburbs where level 2 people live rather than where level 3 people live. See this trenchant satire of singer and social activist Joan Baez (from the 1960s - may offend!): "pull the triggers we're with you all the way - all the way across the bay".
WARNING MAY OFFEND https://www.youtube.com/watch?v=NafrFdBXfrk WARNING MAY OFFEND
Level 4 (full modern adult / systematic thinker): Indeed we should be kind to vulnerable people. Let's see how we can best do that. Perhaps we should increase our refugee quota, which is quite low. People often drown when they come by boat, so we should discourage that. Unfortunately this may involve some people being detained - I feel sad about that. If we can get the message out, people will stop coming by boat, and the drownings and the need for detention will end. We hope this will be a temporary situation. Rather, let's select, as best we know how, from the 40,000,000 refugees around the world, and bring them in safely by air. We need measures to discourage economic migration, as we cannot take 40m refugees, let alone 2b people from poorer countries. Foreign aid is far more cost-effective at improving people's lives than economic migration. We should substantially increase foreign aid. Also, we need to ensure that the people who come in are not extremists or criminals. So we need to assess people before we bring them in.
In my experience people at level 3 tend to interpret level 4 arguments as being at level 2. There is a sub-second delay before accusations of racism etc are leveled. Level 2 people tend to think level 4 people are stupid.
Level 5 (post-modern): What does this debate and how rancorous it is tell us about ourselves? How do we deal with a situation where millions of people live in abject poverty while we live in relative luxury? Given that evidently people are not prepared to share the wealth evenly? Can there be a way to bring people with level 2 / 3 thinking and with very different belief systems into our community in a way that will work? What can we learn from people with very different world views? Can we look at migrant groups who have done well and those who have not and see what we can learn from this - about them and about us? Can we look at root causes for why countries are poor and why there are wars? Can we attack the problems at a higher level? Maybe our thinking about these problems is part of the problem?
People at lower levels tend to think level 5 people are off with the fairies.
Replies from: Lumifer, waveman, waveman
↑ comment by Lumifer · 2016-01-29T16:05:17.009Z · LW(p) · GW(p)
Is there an implication of ranking in the way the levels are numbered? Are Level 5 people "more advanced" than lower levels, and should one strive to move up levels?
Maybe it's just me, but I don't see post-modernists as the ultimate peak of human thinking.
Replies from: OrphanWilde, waveman, Vaniver, Viliam↑ comment by OrphanWilde · 2016-01-29T16:45:44.923Z · LW(p) · GW(p)
In the original, there's an observable pattern to these "levels", alternating between multiple contradictory models, and then a new model in which the various previously-contradictory models are reconciled into a unified framework. Even numbers are a cohesive framework, odd numbers are multiple-competing-model frameworks.
This pattern is conspicuously absent from Tim's reconstruction. The level 3 people don't share or understand the level 2 people's concerns; in truth, they're merely level 2 people of Tim's favored tribe. The Level 4 described is just Tim's level 3 with a hint of understanding of level 2 concerns; in truth, they're level 3 people of Tim's disfavored tribe. Tim's level 5, Postmodernism, is a Level 3, Tim's-favored-tribe understanding of Level 5.
IOW, this framing is just predictable and blatant tribalism, of the form of placing your own way of thinking as "superior" to the opposing tribe's way of thinking.
Replies from: waveman↑ comment by waveman · 2016-01-29T23:27:36.397Z · LW(p) · GW(p)
This was part of a much larger discussion so a lot is omitted here.
In Kegan's books, people at 'higher' levels sometimes lose something that the lower levels have. Level 4 people can lose a sense of intimacy and connection with other people, God, etc. Level 3 people often fail to appreciate level 2 people's mindset. Level 4 people can lack a sense of immediacy that level 2 people have.
The progression in Kegan's book is really about the fact that what you are subject to at one level becomes object at the next level. It does not require that Level X people fully understand people at 'lower' levels.
I guess in one sense I have succeeded because your guess at my favored view is entirely wrong. I was trying not to make an argument about refugee policy but to illustrate various kinds of thinking.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2016-02-01T14:42:30.812Z · LW(p) · GW(p)
The progression in Kegan's book is really about the fact that what you are subject to at one level becomes object at the next level. It does not require that Level X people fully understand people at 'lower' levels.
No, that is not what the progression is "really about". And yes, you have to be able to understand people at "lower levels" in order to be at a higher level. A Level 4 Person might not have a sense of intimacy or connection - but they have to be able to understand that other people have intimacy and connections.
I guess in one sense I have succeeded because your guess at my favored view is entirely wrong. I was trying not to make an argument about refugee policy but to illustrate various kinds of thinking.
So what is your favored view, and how does it meaningfully differ from the Postmodern view you espouse as the Level 5 solution?
↑ comment by waveman · 2016-01-29T23:20:50.993Z · LW(p) · GW(p)
Kegan points out that many who fancy themselves postmodernists are actually trapped in level 3. They have been told that modernism has its flaws, and they therefore reject it and stay at level 3. This fits some young people in college.
A level 5 would be post-modern in the sense that they have mastered modernist ideas but are not trapped within them.
↑ comment by Vaniver · 2016-01-29T16:12:19.993Z · LW(p) · GW(p)
Is there a implication of ranking with the way the levels are numbered? Are Level 5 people "more advanced" than lower levels and should one strive to move up levels?
The linked post gives a brief overview. The higher levels are 'more advanced' in that there is an asymmetry; the level 5 can emulate a level 4 more easily than a level 4 can emulate a level 5. But that doesn't translate to 'more advanced' in all possible meanings. A relevant quote from the link:
Replies from: LumiferKegan likes to make the analogy of comparing drivers who can drive a stick-shift with drivers who only drive an automatic. Can we say that someone is a “better driver” simply because they can drive a stick?
↑ comment by Lumifer · 2016-01-29T16:37:26.962Z · LW(p) · GW(p)
So the implication is that it's a straight IQ ladder, then. My original objection stands.
Replies from: Vaniver↑ comment by Vaniver · 2016-01-29T16:41:06.047Z · LW(p) · GW(p)
My experience is that it's related to, but distinct from, g. High g and more mature age make the higher levels easier but don't create them on their own.
Replies from: Lumifer↑ comment by Lumifer · 2016-01-29T16:51:22.987Z · LW(p) · GW(p)
Why would a high-IQ level 4 person have trouble emulating level 5? See e.g. Sokal, etc.
ETA: I looked through the linked article and I stick by my impression that this is a straightforward IQ ladder modified by "maturity" (appropriate socio-emotional development, I guess?). In particular, I expect that the levels have pretty hard IQ requirements, e.g. a person with an IQ of 80 just won't make it to Level 4.
Replies from: waveman, Vaniver↑ comment by waveman · 2016-01-29T23:39:34.578Z · LW(p) · GW(p)
I think it is partly linked to IQ. I agree that there are probably limits to the levels people with low IQs can achieve.
But there is also a development process that takes time. Few teenagers, no matter how smart, are at level 5. Think by analogy: few 15-year-olds have mastered quantum field theory. No matter how smart you are, it takes time.
Sokal is emulating level 3 people who think they are level 5. These people are anti-modern not post-modern. Most post-modernists are at level 3 as far as I can tell. I have been trawling through their works to assess this.
A level 5 physicist might be someone like, say, Robert Laughlin, a Nobel physicist who wrote a book, "A Different Universe", questioning how fundamental 'fundamental' physics is. He has mastered modernist physics and is now building on it. This is very different from a Deepak Chopra type, who doesn't even get to first base in this enterprise.
↑ comment by Vaniver · 2016-01-29T17:15:05.289Z · LW(p) · GW(p)
I don't think Sokal is an example of systems of systems thinking. (The post-modernist label is not a particularly useful one; here it means the level after the modernist level, and is only partly connected to other things called post-modernist.)
Why would a high-IQ person have trouble emulating someone of the opposite sex? (There doesn't appear to be the same asymmetry--both men and women seem bad at modeling each other--but hopefully this will point out the sort of features that might be relevant.)
↑ comment by Viliam · 2016-01-30T21:43:19.825Z · LW(p) · GW(p)
Some charitable reading is required; the labels are oversimplifications.
I agree that most post-modernists are merely pretending to be at some high level of thinking, and the reason it works for them is that most of their colleagues are in exactly the same situation, so they pass the "peer review". But we can still use them as a pointer towards the real thing. What would be the useful mental skills that these people are pretending to have?
I remember reading somewhere about a similar model, but for the given question, on each level both "pro" and "con" positions were provided. That made it easier for the reader to focus on the difference between the levels.
↑ comment by waveman · 2016-01-29T23:41:41.859Z · LW(p) · GW(p)
Some things to bear in mind in relation to Kegan's work are
Pretty well everyone thinks that they are 1-2 levels higher than they are actually at. This may include you. It certainly included me.
Most people are at level 3 or below.
Very few people under 30 are at level 4.
Hardly anyone is at level 5.
This from Kegan.
↑ comment by waveman · 2016-01-30T03:08:28.500Z · LW(p) · GW(p)
This may also help - a more systematic description of the levels. The right two columns are mine; from memory, the others are by Kegan.
https://drive.google.com/file/d/0B_hpownP1A4PaXN1Tjg2RFd6N0E/view?usp=sharing
comment by Capla · 2016-01-31T21:15:45.322Z · LW(p) · GW(p)
[I'll post on next week's open thread as well.]
If you are interested in AI risk or other existential risks and want to help, even if you don't know how, and you either...
- Live in Chicago
- Attend the University of Chicago
- Are intending to attend the University of Chicago in the next two years.
...please message me.
I'm looking for people to help with some projects.
comment by [deleted] · 2016-01-30T07:15:10.688Z · LW(p) · GW(p)
"Whilst there are some plant-based sources of vitamin B12, such as certain algae and plants exposed to bacterial action or contaminated by soil or insects, humans obtain almost all of their vitamin B12 from animal foods."
Who are these bacterial-actioned plants?
Replies from: Nonecomment by [deleted] · 2016-01-28T16:18:49.920Z · LW(p) · GW(p)
Is your completed university coursework published online? Why/why not? Should I publish my completed university coursework online? It is not outstanding. However, I value transparency and feedback. I reckon it's unlikely that someone will provide unsolicited feedback unless they have a vendetta against me, in which case the work could be used against me. However, I suspect it may give me social 'transparency' points, which are valued amongst our kind of people. Yes?
Other people seem to post their essays and other content online without much fuss. I got through my classes, but I feel ashamed to publish that work because I feel it's inadequate. Maybe this is just impostor syndrome or perfectionism, and publishing would be good to combat that? Or maybe I really am scraping through on the sympathy of professors and the accommodating standards of educational institutions in this day and age :/
I feel differently about my post history here and at my linked Reddit accounts, because I don't habitually tie that content to my name (which is unique enough that googling it finds me and no one else).
Replies from: Huluk↑ comment by Huluk · 2016-01-28T20:59:38.524Z · LW(p) · GW(p)
I have thought about this, too. I am currently not publishing my coursework (mostly programming / lab reports) because the tasks may be used again in the following year. I do not want to force instructors to make new exercises for each course and I don't think I'd get much use out of publishing them. The argument wouldn't apply to essays, of course.
comment by [deleted] · 2016-01-30T09:42:22.639Z · LW(p) · GW(p)
Can I just give a huge shoutout to Brian Tomasik. I've never met the guy, but just take a look at his unbelievably well-documented thinking, like this and this. I feel that conjuring up any words to describe how saintly that man is would be an injustice to his superhuman levels of compassion AND intelligence. Why isn't he one of the 'big names' in the EA space already? Or is he, but just not in the circles I run in?
comment by ChristianKl · 2016-01-26T18:34:16.871Z · LW(p) · GW(p)
If I look at California in Google Maps, there seem to be huge open spaces. What's stopping new cities in California from being built on land outside of the existing cities?
Replies from: None, knb, Pfft, Lumifer, MrMind, _rpd
↑ comment by [deleted] · 2016-01-27T07:34:30.979Z · LW(p) · GW(p)
Cities are where they are because of actual reasons of geography, not just people plopping things down randomly on a map. You need to get stuff into them, stuff out of them, have the requisite power and water infrastructure to get to them (ESPECIALLY in California)... they aren't something you plop down randomly on a whim.
Replies from: Kaj_Sotala, username2
↑ comment by Kaj_Sotala · 2016-01-27T11:24:35.867Z · LW(p) · GW(p)
Also, previous attempts at doing exactly this have only had modest success:
California City had its origins in 1958 when real estate developer and sociology professor Nat Mendelsohn purchased 80,000 acres (320 km2) of Mojave Desert land with the aim of master-planning California's next great city. He designed his model city, which he hoped would one day rival Los Angeles in size, around a Central Park with a 26-acre (11 ha) artificial lake. Growth did not happen anywhere close to what he expected. To this day a vast grid of crumbling paved roads, intended to lay out residential blocks, extends well beyond the developed area of the city.
↑ comment by username2 · 2016-01-28T11:15:12.421Z · LW(p) · GW(p)
There are planned desert cities on the Arabian Peninsula. If land value in California grows because people value geographical proximity to San Francisco that much, at some point it will outweigh the costs of having to build infrastructure in the middle of the desert.
Replies from: Viliam
↑ comment by Viliam · 2016-01-29T09:21:19.578Z · LW(p) · GW(p)
There are multiple problems that need to be solved here. Buying land is one of them, and yes, it seems like a reasonable investment for someone who has tons of money. The other problem is water.
Yet another problem could be the transit from the new city to SF. Geographical proximity may be useless if the traffic jams make commuting impossible.
↑ comment by knb · 2016-01-27T00:06:02.179Z · LW(p) · GW(p)
A lot of Californians like those big open spaces. Others don't want developments that make it easier for poor people to live around them (due to fear of crime, "bad schools" or other unpleasantness.)
From 1969 onward in California, “progressivism” has chiefly been about preserving privilege, especially the privilege of living in an uncrowded bucolic manner in the finest landscapes (typically, the coast in Southern California, the first valley in from the coast in Northern California) by blocking on environmentalist grounds developments that would make these regions more affordable to more people.
San Francisco is now one of the most expensive real estate markets in the world, and the populace wants to keep it that way.
Replies from: SanguineEmpiricist
↑ comment by SanguineEmpiricist · 2016-01-27T20:37:17.612Z · LW(p) · GW(p)
Alright so how do we keep these people away then while lowering prices?
Replies from: username2
↑ comment by username2 · 2016-01-29T13:10:26.095Z · LW(p) · GW(p)
You can implement a hukou system. Obviously, it would lead to other problems.
Replies from: bogus
↑ comment by bogus · 2016-01-29T13:23:31.730Z · LW(p) · GW(p)
You wouldn't even need the hukou; private covenants would be quite enough. However, these covenants are banned as an infringement of civil rights. But the real solution is to decouple education from local real-estate markets, by allowing people to freely choose their preferred schools (public, charter or private, via student-linked vouchers) regardless of their home address or VAT code.
Replies from: username2, Lumifer
↑ comment by Lumifer · 2016-01-29T16:08:55.710Z · LW(p) · GW(p)
I am a bit doubtful that free school choice will solve the "in some places real estate is really expensive" problem.
For example, NYC has a notoriously bad public school system and very expensive real estate.
Replies from: bogus
↑ comment by bogus · 2016-01-29T16:21:42.564Z · LW(p) · GW(p)
The problem is not expensive real estate per se; it's supply restrictions that make real estate more expensive than necessary. Free school choice would remove much of the motive for these restrictions.
Replies from: Lumifer
↑ comment by Lumifer · 2016-01-29T16:30:35.786Z · LW(p) · GW(p)
E.g. in New York City..?
I don't think school is the only or even the main reason for supply restrictions. People like to live with neighbours of approximately the same social standing and will actively oppose hoi polloi moving in, even without schools being involved.
↑ comment by Pfft · 2016-01-27T18:04:24.693Z · LW(p) · GW(p)
I guess because people want to live in the existing cities? It's not like there is nowhere to live in California--looking at some online apartment listings you can rent a 2 bedroom apt in Bakersfield CA for $700/month. But people still prefer to move to San Francisco and pay $5000/month.
↑ comment by Lumifer · 2016-01-26T18:44:47.575Z · LW(p) · GW(p)
Environmental Impact Statements :-D
On a bit more serious note, the usual thing -- you can build a city in the middle of a desert, but why would people want to live there? People want to live in LA or SF, not just in Californian boondocks...
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-01-26T19:38:35.134Z · LW(p) · GW(p)
People might want to live in LA or SF, but on net the high prices cause people to migrate out of California, with 94,000 more people leaving California than joining it in 2011.
It seems like there's open space within an hour's driving distance of SF. Living there at a decent rent might be preferable to leaving California altogether.
↑ comment by _rpd · 2016-01-27T01:05:21.197Z · LW(p) · GW(p)
High quality infrastructure and community services are expensive, but taxpayers are reluctant to relocate to the new community until the infrastructure and services exist. It's a bootstrap problem. Haven't you ever played SimCity?
Replies from: polymathwannabe, ChristianKl
↑ comment by polymathwannabe · 2016-01-27T15:18:18.245Z · LW(p) · GW(p)
Then how are new cities ever founded? How did Belmopan, Brasília, Abuja and Islamabad do it? Look at the dozens of new cities built just in Singapore during the past half century.
The OP's proposal to build a city in the middle of the desert strikes me as similar to the history of Las Vegas. What parts of it can be replicated?
Replies from: _rpd, Lumifer
↑ comment by _rpd · 2016-01-27T19:16:42.750Z · LW(p) · GW(p)
How did Belmopan, Brasília, Abuja and Islamabad do it?
Well, all of these were deliberate decisions to build a national capital. They overcame the bootstrap problem by being funded by a pre-existing national tax base.
dozens of new cities built just in Singapore during the past half century
Again, government funding is used to overcome the bootstrap problem. Singapore is also geographically small, and many of these "cities" would be characterized as neighborhoods if they were in the US.
Las Vegas
Well, Wikipedia says it began life as a water resupply stop for steam trains, and then got lucky by being near a major government project - Hoover Dam. Later it took advantage of regulatory differences. An eccentric billionaire seems to have played a key role.
There seem to be several towns that exist because of regulatory differences, so this seems a factor to consider - at least one eccentric billionaire seems fairly serious about "seasteading" for this reason. Historically, religious and ideological differences have founded cities, if not nations, so this is one way to push through the bootstrap phase - Salt Lake City being a relatively modern example in the US. Masdar City - zero carbon, zero waste - is an interesting example - ironically funded by oil wealth.
↑ comment by Lumifer · 2016-01-27T15:48:55.116Z · LW(p) · GW(p)
similar to the history of Las Vegas. What parts of it can be replicated?
By traditional mythology, the reason Las Vegas exists is because the mob (mafia) wanted to have a playground far far away from the Feds :-)
Replies from: tut
↑ comment by ChristianKl · 2016-01-27T10:55:57.847Z · LW(p) · GW(p)
It's expensive but interest rates are low and the possible profit is huge.
Replies from: _rpd
↑ comment by _rpd · 2016-01-27T11:41:23.746Z · LW(p) · GW(p)
But similar profits are available at lower risk by developing at the edges of existing infrastructure. In particular, incremental development of this kind, along with some modest lobbying, will likely yield taxpayer funded infrastructure and services.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-01-27T12:00:11.878Z · LW(p) · GW(p)
It seems like you can't do incremental development by building more real estate inside the cities, because the cities don't want to grant new building permits that might lower the value of existing real estate.
Replies from: _rpd
↑ comment by _rpd · 2016-01-27T12:26:45.728Z · LW(p) · GW(p)
I think Seattle's South Lake Union development, kickstarted by Paul Allen and Jeff Bezos, is a counter example ...
http://crosscut.com/2015/05/why-everywhere-is-the-next-south-lake-union/
Perhaps gentrification is a more general counter example. But you're right, most developers opt for sprawl.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-01-27T14:14:46.759Z · LW(p) · GW(p)
I think Seattle's South Lake Union development, kickstarted by Paul Allen and Jeff Bezos, is a counter example ...
No, it's not in California. In California, a city like Mountain View blocks a company like Google from building new infrastructure on its edges.
Perhaps gentrification is a more general counter example.
In what sense? Gentrification simply means that rents go up in certain parts of the city. It doesn't directly have anything to do with new investment.
Replies from: Lumifer, _rpd
↑ comment by Lumifer · 2016-01-27T15:43:26.668Z · LW(p) · GW(p)
Gentrification simply means that rents go up in certain parts of the city.
Not at all. Gentrification is the replacement of a social class by a different social class. There are a LOT of consequences to that -- the character of the neighbourhood changes greatly.
↑ comment by _rpd · 2016-01-27T15:13:16.450Z · LW(p) · GW(p)
Gentrification simply means that rents go up in certain parts of the city. It doesn't have directly something to do with new investments.
In my experience gentrification is always associated with renovation and new business investment. The wikipedia article seems to confirm that this is not an uncommon experience.
comment by [deleted] · 2016-01-26T10:51:44.927Z · LW(p) · GW(p)
I love you, LessWrongers. Thank you for being a kind of family to me. If you're reading this and fear that the community is very critical, like I know some IRL attendees do, remember that it's not personal and it's all a useful learning experience if you have the right attitude and sincerely do your best to make useful contributions.
comment by MrMind · 2016-01-28T11:17:21.797Z · LW(p) · GW(p)
What follows are some random spurts of ideas that emerged while thinking about AlphaGo. I make no claim of validity, soundness, or even sanity. But they are random interesting directions that are fun for me to investigate, and they might turn out to be interesting for you too:
- AlphaGo uses two deep neural networks to prune the enormous search tree of a Go position, and it does so unsupervised.
- Information geometry allows us to treat information theory as geometry.
- Neural networks allow us to partition high-dimensional data.
- Pruning a search tree is also strangely similar to dual intuitionistic logic.
- Deep neural networks can thus apply a sort of paraconsistent probabilistic deduction.
- Probabilistic self-reflection is possible.
- Deep neural networks can operate a sort of paraconsistent probabilistic self-reflection?
↑ comment by Gunnar_Zarncke · 2016-01-29T22:18:42.514Z · LW(p) · GW(p)
See the AlphaGo Discussion Post.
↑ comment by bogus · 2016-01-29T22:25:15.197Z · LW(p) · GW(p)
AlphaGo uses two deep neural networks to prune the enormous search tree of a Go position, and it does so unsupervised.
lol no. The pruning ('policy') network is entirely the result of supervised learning from human games. The other network is used to evaluate game states.
Your other ideas are more interesting, but they are not related to AlphaGo specifically, just deep neural networks.
Replies from: MrMind
↑ comment by MrMind · 2016-02-01T09:32:05.874Z · LW(p) · GW(p)
lol no. The pruning ('policy') network is entirely the result of supervised learning from human games.
If I understood correctly, this is only the first stage in the training of the policy network. Then (quoting from Nature):
The second stage of the training pipeline aims at improving the policy network by policy gradient reinforcement learning (RL). The RL policy network pρ is identical in structure to the SL policy network, and its weights ρ are initialised to the same values, ρ = σ. We play games between the current policy network pρ and a randomly selected previous iteration of the policy network.
Replies from: bogus
↑ comment by bogus · 2016-02-01T20:04:53.766Z · LW(p) · GW(p)
The second stage of the training pipeline aims at improving the policy network by policy gradient reinforcement learning (RL).
Except that they don't seem to use the resulting network in actual play; the only use is for deriving their state-evaluation network.
comment by [deleted] · 2016-01-27T12:20:08.526Z · LW(p) · GW(p)
How true is the proverb: 'To break habit you must make a habit'
Replies from: Pfft, Brillyant, Richard_Kennaway
↑ comment by Pfft · 2016-01-27T17:53:38.769Z · LW(p) · GW(p)
In animal training it is said that the best way to get rid of an undesired behaviour is to train the animal in an incompatible behaviour. For example, if you have a problem with your dog chasing cats, train it to sit whenever it sees a cat -- it can't sit and chase at the same time. Googling "incompatible behavior" or "Differential Reinforcement of an Incompatible Behavior" yields lots of discussion.
The book Don't Shoot the Dog talks a lot about this, and suggests that the same should be true for people. (This is a very Less Wrong-style book: half of it is very expert advice on animal training, half of it is animal-training-inspired self-help, which is probably on much less solid ground but presented in a rational, scientific, extremely appealing style.)
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-01-27T21:20:17.960Z · LW(p) · GW(p)
When it comes to training animals, you can only go through behaviorism. On the other hand, when training people you can use CBT and other approaches.
Replies from: username2
↑ comment by username2 · 2016-01-28T15:49:54.430Z · LW(p) · GW(p)
Excuse my ignorance, but isn't CBT based on behaviorism?
Replies from: Viliam, ChristianKl
↑ comment by Viliam · 2016-01-29T09:14:50.697Z · LW(p) · GW(p)
Behaviorism in its original form assumed that thoughts or emotions don't exist, or at least that it is unscientific to talk about them. Later behaviorists took less extreme positions, and allowed "black boxes" in their models corresponding to things that couldn't be measured (before the invention of EEG).
In CBT the "B" stands for behavioral, but the "C" stands for cognitive, which is like the exact opposite of behaviorism. CBT is partially based on behaviorism, but the other essential root is so-called Rational Therapy. (Fun fact for LW readers: Rational Therapy was inspired by Alfred "the map is not the territory" Korzybski. It's a small world.)
↑ comment by ChristianKl · 2016-01-28T16:40:31.536Z · LW(p) · GW(p)
CBT has many parts like the acceptance paradox that have nothing to do with behaviorism.
↑ comment by Brillyant · 2016-01-27T17:48:46.811Z · LW(p) · GW(p)
I think it's certainly true. I suppose it depends on your definition of "habit"...
Isn't much of what we do habitual, whether it benefits us or not? In this way, you have either good habits or bad ones, and the two are reciprocals of one another.
For example, people who refrain are not said to have a "habit of not biting their nails". But that is, I think, what is happening.
Replies from: 4hodmt
↑ comment by 4hodmt · 2016-01-27T18:35:48.387Z · LW(p) · GW(p)
I stopped biting my nails (coating them in a bitter substance to remind myself not to bite them if I tried) and I did not make any replacement habit. I don't have a "habit of not biting my nails" any more than I have a habit of breathing. It happens automatically without conscious effort, so calling "not biting nails" a habit is misusing the word.
Replies from: Brillyant
↑ comment by Brillyant · 2016-01-27T19:21:20.203Z · LW(p) · GW(p)
This is why I mentioned the definition of "habit" in my comment.
I don't have a "habit of not biting my nails"... It happens automatically without conscious effort...
From Wikipedia:
"A habit (or wont) is a routine of behavior that is repeated regularly and tends to occur unconsciously."
↑ comment by Richard_Kennaway · 2016-01-27T13:03:00.830Z · LW(p) · GW(p)
Was that "How true?" or "How true!"?
I think it is true, with the proviso that the habit to make can be the habit of noticing when the old habit is about to happen and not letting it.
Replies from: None
comment by [deleted] · 2016-01-31T07:24:41.944Z · LW(p) · GW(p)
There is a website called Wizardchan. Months back, multiple posts at separate intervals predicted negative interest rates, starting with Japan. Worryingly, further negative consequences were predicted to follow, though they evade my memory. I had never heard of the site and thought it was silly. I returned to it today to watch for gloating. No reference is available; the website operates like 4chan in that content disappears regularly. I don't know what to make of this information. Are we privy to privileged or in any way useful information here, or just noise?
comment by [deleted] · 2016-01-30T10:06:55.913Z · LW(p) · GW(p)
How many of you adopt a false easygoing/go-with-the-flow or selfless/helpful persona to make it seem like you're happy to put other people ahead of yourself?
Replies from: Bryan-san
↑ comment by Bryan-san · 2016-02-01T16:08:33.531Z · LW(p) · GW(p)
I think there is very high value in sincerity, that both of the qualities you've described are heavily attached to sincerity, and that the effective and regular signaling of sincerity is going to be pretty much impossible to maintain without actually being sincere. If you really want to be effective in these areas, you might try to become easygoing and less selfish rather than trying to figure out how to fake those things.
comment by [deleted] · 2016-01-27T06:51:56.958Z · LW(p) · GW(p)
You can volunteer to inspect prisons. What a great opportunity for criminal entrepreneurs to recruit people who've gone through an intensive, immersive crime university and have limited job opportunities, all while maintaining a prosocial image.
comment by [deleted] · 2016-01-27T12:25:45.984Z · LW(p) · GW(p)
Reading this Wikipedia article on the psychology of torture, the doubt that comes to my mind is how valid the constructs are that underlie the extraordinarily counterintuitive thesis of cultural specificity in resilience to torture, and what the empirical evidence or data source is if I am to analyse it independently.
comment by [deleted] · 2016-01-28T16:44:17.391Z · LW(p) · GW(p)
The evolutionary arguments against the plausibility of altruism as a construct in and of itself are probably the greatest existential threat to the effective altruism movement. That is, the implication is that altruism is inherently disingenuous. Based on personal observation and inference, this juxtaposition of the care-harm and sanctity-degradation moral foundations is quite simply a rare mix in the human population.
Replies from: Viliam
↑ comment by Viliam · 2016-01-29T09:26:01.413Z · LW(p) · GW(p)
Those evolutionary arguments seem like a failure to understand the cognitive-evolutionary boundary. If you go with "evolution is about survival and reproduction, so besides sex and murder everything else is hypocrisy", you explain away altruism, but at the same time you explain away almost everything else. The argument seems sane only when it is used selectively.
comment by WhyAsk · 2016-01-30T23:11:26.548Z · LW(p) · GW(p)
Analysis of a mind game.
Any comments as to the internal workings of A and B are welcome.
Note that, in the US, a person has the right to confront his/her accuser. In the exchange below, B has done that but the specific accusation has never been clarified by A. Very tricky. I have definitely learned from this exchange.
B: ". . .women are biologically superior. . ."
A: This says far more about you than you could possibly imagine. I suggest being more cautious going forward.
At this point it is not clear how exactly B can guard against whatever beliefs he holds that are dangerous. In any case A has decided that B is incapable of comprehending the problem. Note that "defining" someone is a form of verbal abuse according to Satir, and possibly others. "Going forward" makes me think that A is a Brit. Since the Brits "invented" English I may be at a disadvantage here. But I'll go forward anyway.
B: It says that I take for fact what people say who study this type of thing. I suggest that your conduct in this post is offensive.
So B challenges A.
If A is a "sniper" (one who hides behind double or ambiguous meanings), the antidote is to smoke them out, just like a real sniper. Apparently that is what B instinctively did.
A: If that is all it says, you have nothing to be offended about.
A still has not said exactly what is wrong with B's thinking and tells B how B should feel. This is now framed as A the parent and B the child.
B: It's not your call. Who are you?
It's not up to A to decide how B should feel. Then B asks for ID, see "smoke them out", above.
TIA for reading. :D
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2016-01-31T19:38:15.274Z · LW(p) · GW(p)
Note to readers: this "hypothetical" scenario was actually this exchange.
You're reading too much into just a few words, and you seem overconfident in your ability to divine other people's intentions. Interpreting the above exchange as a "mind game" is ridiculously paranoid.
Replies from: WhyAsk
↑ comment by WhyAsk · 2016-01-31T23:11:11.405Z · LW(p) · GW(p)
Note to readers: I never said it was hypothetical.
And, the textbooks written about my personality type say I have a sensitivity to other people's issues.
And, I'm not starting from zero; over the years I've had office mates and others who acted in a similar way and so I know what works.
Strangely, some of these people may actually have wanted my approval or recognition. Very few get that, even those who are well-behaved. I think I know what causes this, but that info is classified - sorry.
They may have spotted ways that we two are similar. Of course, the idea that I am similar to these verbal bullies is repugnant to me but it's very likely accurate. In my whole life maybe a half dozen people fit this pattern.
Also, there are books on "Verbal Judo" but they are hard to come by from my local library. I scoop up what I can. As long as all I do is counterpunch, block-then-strike, I feel I have the moral high ground.
But, that aside, if this is a false positive for a mind/head/word game, what do you make of this exchange? Is the literal meaning the only thing going on?
In your whole life, have you ever met someone who "put one over on you", left you with the feeling that you've been "had" and you couldn't even verbalize how? If yes, in retrospect, what really went on? Did you act optimally? What would you change for future encounters of this type?
Thanks for reading. :)
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2016-02-01T01:22:40.302Z · LW(p) · GW(p)
Of course, lots of people have attacked me verbally. But they don't know me closely enough to really know how to insult me. So their worst attacks don't even need a reply: they're hopelessly misfired. I can let them yell as long as they want. The part of me they want to hurt is one they can never reach and will never see.
Even with that protection, in my experience it's emotionally exhausting to be constantly expecting attacks from every interaction. If you stop seeing an aggressive intent behind every comment, your life will be much less stressful. People are essentially good, and most of them are too busy going through their own day to bother ruining other people's.
Replies from: gjm, WhyAsk
↑ comment by gjm · 2016-02-01T19:26:28.429Z · LW(p) · GW(p)
In fairness, the person (OrphanWilde) playing the part of A in that little dialogue (1) gives at least one person other than WhyAsk (namely, me) the impression of playing dark-artsy status games in his comments and (2) has described himself in so many words as a practitioner of the Dark Arts. In that context, it's not so crazy for WhyAsk to suspect something of the sort may have been going on.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2016-02-01T20:07:19.345Z · LW(p) · GW(p)
Perfectly fair. (Given that I get the impression polymathwannabe doesn't like me very much, I appreciate the neutrality of the choice to defend, however, so I'd prefer not to discourage them in the general case.)
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2016-02-01T21:32:11.284Z · LW(p) · GW(p)
I'm sorry that I gave you that impression. I may dislike your political opinions, but I don't dislike you.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2016-02-01T21:56:04.903Z · LW(p) · GW(p)
Oh! Thank you for stating that.
It's not something to apologize over, though, in either case. I think there are perfectly valid reasons to dislike me. I think there are valid reasons to like me, as well. I tend to treat like/dislike as a preference statement, rather than a value statement. (Which isn't universal, but people generally tend to use the word "hatred" with regard to negative valuation.)
↑ comment by WhyAsk · 2016-02-01T13:53:58.428Z · LW(p) · GW(p)
We've gotten derailed.
All we need do is ask O. Wilde what his or her intentions were in those posts.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2016-02-01T14:51:07.680Z · LW(p) · GW(p)
I was telling you not to say those sorts of things to people, because they reveal more about you than you probably expect they do, and reveal things about yourself you don't want to be advertising.
Replies from: WhyAsk
↑ comment by WhyAsk · 2016-02-02T15:01:45.021Z · LW(p) · GW(p)
Welcome back.
Can you be specific, without paraphrasing? And no ad hominem, please.
At this point you might as well let the cat all the way out of the bag, if there is a cat to be let out.
Am I in physical danger? If yes, from whom?
BTW, this is about the strangest thread I've ever participated in. I guess it's an opportunity to learn, which is what I hope I'm doing on this forum.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2016-02-02T15:43:58.988Z · LW(p) · GW(p)
[Edited: Content removed.]
Replies from: WhyAsk
comment by [deleted] · 2016-01-27T11:47:42.679Z · LW(p) · GW(p)
Passive-aggression is tempting.
It’s only by having the courage to speak up, respectfully, that we can all help each other learn. Speaking up respectfully isn’t the same as phrasing everyth..
The first step is to accept that you have a right to feel angry.
The next step is to foster self-awareness about what it is you need, or want to express.
The last step is to have the courage to be clear.
Next time I'll tell the passenger next to me that I feel uncomfortable with his manspreading an inch into my seat territory on the plane.
comment by [deleted] · 2016-01-27T06:55:57.296Z · LW(p) · GW(p)
Studies show that users often associate using a mobile phone with headaches, impaired memory and concentration, fatigue, dizziness and disturbed sleep. These are all symptoms of radiation sickness. There are also concerns that some people may develop electrosensitivity or IEI-EMF from excessive exposure to electromagnetic fields.
...
Some national radiation advisory authorities, including those of Austria, France, Germany, and Sweden, have recommended measures to minimize exposure to their citizens. Examples of the recommendations are:
- Use hands-free to decrease the radiation to the head.
- Keep the mobile phone away from the body.
- Do not use telephone in a car without an external antenna.
thanks Wikipedia
Replies from: polymathwannabe, IlyaShpitser
↑ comment by polymathwannabe · 2016-01-27T14:58:29.226Z · LW(p) · GW(p)
You're solving for a supposed problem that does not exist at all.
Replies from: None
↑ comment by [deleted] · 2016-01-28T02:26:09.415Z · LW(p) · GW(p)
- This isn't a solved problem. It doesn't require a physics explanation, as in the first link, for a hazard to be identified at the health level. Take the Zika virus, for instance: there is an association between it and birth defects, but we don't know why - we know neither the physical cause nor the biological cause. However, it's a hazard, and you would be stupid not to take action on it.
- The RationalWiki articles (the last two) don't even try to construct a compelling argument against the hazard of phone radiation. They simply push the thesis that there is woo and phobia on the matter. But the presence of irrationality around a topic doesn't by itself settle whether the underlying claim is wrong.
- The downvoting on my post is alarming. I paid 5 karma points to reply to a downvoted thread because this is really odd behaviour. I was bringing attention to points made elsewhere. If the points are true, then it is not bullshit. If the points are false, then the authority of the source (Wikipedia) is such that it ought to be discussed. Finally, if it is true but it's an information hazard, that's a worthwhile matter of discussion on LessWrong of all places.
↑ comment by polymathwannabe · 2016-01-28T02:49:28.568Z · LW(p) · GW(p)
This isn't a solved problem.
It isn't a problem at all. There is no solid evidence of risk.
↑ comment by IlyaShpitser · 2016-01-27T15:11:24.091Z · LW(p) · GW(p)
Stop spreading bullshit.