Open thread, Oct. 19 - Oct. 25, 2015
post by MrMind · 2015-10-19T06:59:09.766Z · LW · GW · Legacy · 198 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
198 comments
Comments sorted by top scores.
comment by username2 · 2015-10-19T10:42:24.317Z · LW(p) · GW(p)
Luke quotes from Superforecasting on his site:
"Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources—from the New York Times to obscure blogs—that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives."
He wishes he could get his hands on this program.
Does anyone know of something similar, or who this 'Doug' may be? I wonder if this may be as simple as asking the man directly. The book gives 'Doug Lorch' as his full name. Google gives a Facebook account as the first result, but I have no idea whether it's an actual match.
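The book gives no implementation details, but the core idea seems simple enough to sketch. Everything below (the tag categories, the example sources, the scoring rule) is invented for illustration, not taken from Doug's actual program:

```python
import random
from collections import Counter

# Hypothetical source database: each entry tagged by ideology,
# subject matter, and geographical origin, as the book describes.
SOURCES = [
    {"name": "NYT op-ed", "ideology": "left", "subject": "politics", "region": "US"},
    {"name": "Hayek blog", "ideology": "libertarian", "subject": "economics", "region": "US"},
    {"name": "Al Jazeera", "ideology": "center", "subject": "politics", "region": "Middle East"},
    {"name": "EU policy wonk", "ideology": "center-left", "subject": "economics", "region": "EU"},
]

def diversity_score(source, history):
    """Count how many of the source's tags are absent from recent reading."""
    seen = Counter(tag for s in history
                   for tag in (s["ideology"], s["subject"], s["region"]))
    return sum(1 for tag in (source["ideology"], source["subject"], source["region"])
               if seen[tag] == 0)

def pick_next(history, sources=SOURCES):
    """Pick the source that adds the most unseen tags; break ties randomly."""
    best = max(diversity_score(s, history) for s in sources)
    return random.choice([s for s in sources if diversity_score(s, history) == best])
```

After reading the NYT op-ed, the scorer would steer you toward the source sharing the fewest tags with it; any "read next" criterion that penalizes tag overlap would achieve the same effect.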
↑ comment by ChristianKl · 2015-10-19T13:58:37.762Z · LW(p) · GW(p)
The Facebook account links to a blog: http://newsandold.blogspot.de/ The blog indicates that he's politically knowledgeable. The Facebook account also says that he worked at IBM, which fits the book's description of the superforecaster as a retired computer programmer.
I think he's your man ;) The Facebook account only has 12 friends, so it doesn't seem to be very active. But it's worth a try to contact him.
comment by John_Maxwell (John_Maxwell_IV) · 2015-10-19T07:47:39.840Z · LW(p) · GW(p)
Someone created an /r/controlproblem subreddit.
↑ comment by [deleted] · 2015-10-19T08:19:14.129Z · LW(p) · GW(p)
Actually a very high-quality subreddit. I'm impressed.
↑ comment by Soothsilver · 2015-10-19T19:41:43.080Z · LW(p) · GW(p)
I never realized how many people there are who say "it's a good thing if AI obliterates humanity, it deserves to live more than we do".
↑ comment by OrphanWilde · 2015-10-19T20:21:10.801Z · LW(p) · GW(p)
On some level, the question really comes down to what kind of successors we want to create; they aren't going to be us, either way.
↑ comment by ChristianKl · 2015-10-19T22:53:20.627Z · LW(p) · GW(p)
On some level, the question really comes down to what kind of successors we want to create; they aren't going to be us, either way.
That depends on whether you plan to die.
↑ comment by OrphanWilde · 2015-10-19T22:59:36.989Z · LW(p) · GW(p)
If I didn't, the person I become ten thousand years from now isn't going to be me; I will be at most a distant memory from a time long past.
↑ comment by Soothsilver · 2015-10-20T13:13:23.779Z · LW(p) · GW(p)
It will still be more "me" than paperclips.
↑ comment by OrphanWilde · 2015-10-20T13:29:54.730Z · LW(p) · GW(p)
Than paperclips, yes. Than a paperclip optimizer?
Well... ten thousand years is a very, very long time.
↑ comment by passive_fist · 2015-10-20T02:52:27.974Z · LW(p) · GW(p)
It's a perfectly reasonable position when you consider that humanity is not going to survive long-term anyway. We're either going extinct and leaving nothing behind, evolving into something completely new and alien, or getting destroyed by our intelligent creations. The first possibility is undesirable. The second and third are indistinguishable from the point of view of the present (if you assume that AI will be developed far enough into the future that no current humans will suffer any pain or sudden death because of it).
↑ comment by Soothsilver · 2015-10-20T13:12:54.731Z · LW(p) · GW(p)
You might still want your children to live rather than die.
↑ comment by Gurkenglas · 2015-10-21T21:10:05.856Z · LW(p) · GW(p)
The questions asked there mostly seem basic and answered by some sequence or another. Maybe someone should make a post pointing out the most relevant sequences so those people can be thinking about the unsolved problems on the frontier?
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-10-26T07:56:05.227Z · LW(p) · GW(p)
Great idea. I commission you for the task! (You might also succeed in collecting effective critiques of the sequences.)
comment by Panorama · 2015-10-23T10:26:00.758Z · LW(p) · GW(p)
The scientists encouraging online piracy with a secret codeword
What if you're a scientist looking for the latest published research on a particular subject, but you can't afford to pay for it?
...
Andrea Kuszewski, a cognitive scientist and science writer, invented the tag, which uses a code phrase: "I can haz PDF" - a play on words combining a popular geeky phrase used widely online in a meme involving cat pictures, and a common online file format.
"Basically you tweet out a link to the paper that you need, with the hashtag and then your email address," she told BBC Trending radio. "And someone will respond to your email and send it to you." Who might that "someone" be? Kuszewski says scientists who have access to journals, through subscriptions or the institutions they work at, look out for the tag so they can help out colleagues in need.
↑ comment by ChristianKl · 2015-10-23T10:36:15.155Z · LW(p) · GW(p)
I'm shocked, shocked...
comment by [deleted] · 2015-10-20T09:55:52.954Z · LW(p) · GW(p)
Think you have a finely calibrated and important information diet? Imagine if you had the world's strongest intelligence agency tailor the news for you. Well, you don't have to imagine, because the president's daily briefs have just been declassified. If you're interested, you can collaborate with researchers to get a better handle on it. Enjoy.
↑ comment by satt · 2015-10-20T21:31:45.282Z · LW(p) · GW(p)
For convenience, here's a link to the individual briefs as separate PDF files, for anyone else who doesn't want to download all 34MB at once. (I thought the Flickr page might have a few convenient, face-on snapshots of pages from the briefs, but the CIA reckoned it was more important to take 5 photos of a woman wheeling a trolley of briefs through the CIA lobby. #thanksguys)
I suspect daily presidential briefings from the CIA are finely (as in carefully & deliberately) calibrated but not that well calibrated (as in being accurate, representative and not tendentious). The CIA doubtless has incentives to misrepresent some things to the president — and indeed a president probably has some incentives to allow/encourage being misled about certain things!
↑ comment by ChristianKl · 2015-10-20T10:52:53.060Z · LW(p) · GW(p)
It's an interesting data set, but I don't think it's useful as a primary source. Given that the freshest "news" in the pile is from 1977, I don't think the term "news" is appropriate. If you are interested in what happened 40 years ago, it might be better to read recently written history books than contemporary intelligence analysis.
comment by iarwain1 · 2015-10-19T14:05:42.153Z · LW(p) · GW(p)
What makes a good primary care physician and how do I go about finding one?
↑ comment by Sjcs · 2015-10-20T10:29:35.734Z · LW(p) · GW(p)
Off the top of my head, the most reliable way would be to ask another senior medical professional - senior because they will tend to have been in the same geographic area for a while and know their colleagues, plus have more direct contact with primary care physicians. Also, rather than asking "who should I see as my primary care physician?", you could ask "who would you send your family to see?". This might help prevent them from just recommending a friend/someone with whom they have a financial relationship. I note that this would be relatively hard to do unless you already know a senior medical professional.
Another option would be to ask a medical student (if you happen to know any in your area) which primary care physicians teach at their university and whom they would recommend. Through my medical training I have found teaching at a medical school to be weak-to-moderate evidence of being above average. Asking a medical student adds a filter against some of the less competent ones, strengthening this evidence.
I think lay-people's opinions correlate much more strongly with how approachable and nice their doctor is, as opposed to competence. Doctor rating sites could be used just to select for pleasant ones, if you care about that aspect.
(Caveats: opinion-based; my experience is limited to the country I trained in; I am junior in experience.)
↑ comment by Fluttershy · 2015-10-19T14:46:48.745Z · LW(p) · GW(p)
This is a great question, and I'm glad that you asked, since I am interested in hearing what people think about this as well. I suppose that word of mouth is generally superior to, say, just searching for a primary care doctor through your insurance provider's website, but I don't have any more specific ideas than that.
Personally, I can, and often have, put off going to the doctor due to akrasia, so I put a bit of extra weight on how nice the doctor is-- having a nice doctor lowers the willpower-activation-energy needed for me to make an appointment. I also think that willingness to spend time with patients is important, but I'd be more likely to think this than the average person-- I'm pretty shy, so I'll often tell my doctors that I don't have any more questions (when I actually do) if they seem like they're in a hurry, so as to not bother them.
↑ comment by Tem42 · 2015-10-19T22:26:55.018Z · LW(p) · GW(p)
Ask everyone you know; ask for their recommendations, and ask why they make those recommendations. Most of the answers you get will not be worth much, but look for the good answers; you only need one.
The trick here is that while it is nearly impossible to find the perfect doctor through any method, you are only looking for a good doctor. Any reasonable recommendation followed by a quick Google search (Google allows reviews on doctors, and most established doctors in larger cities will have at least one or two) to weed out the bad apples will do. This is one of those situations where the perfect is the enemy of productivity.
↑ comment by ChristianKl · 2015-10-19T22:50:44.664Z · LW(p) · GW(p)
On what basis do you believe that publicly posted reviews of doctors correlate with doctors' actual medical ability?
↑ comment by Tem42 · 2015-10-20T01:21:35.065Z · LW(p) · GW(p)
I don't assume much of a reliable correlation; but it doesn't require much. Once you have found a likely few doctors, it is worth finding out if a lot of people hate one of them -- particularly if they explain why. It's basically a very cheap way to filter out potential problems. If I felt that there was a strong correlation, I would have recommended starting with the Google reviews -- after all, Googling is much more time expedient than talking to people.
For context, of the few doctors I sampled on Google review, I found none of them to have anything significant posted in their review. The worst I saw was "receptionist was very rude!"
Given two or more okay choices of doctors given by friends and acquaintances, I think that it is fair to apply this sort of filter, even if you have weak evidence that it is effective. The worst that will happen is that you make the other good choice, rather than the good choice you would have made. The best that might happen is that you avoid an unpleasant experience (well, the best is that you lower your chances of dying through physician error). This calculation may change if you have only one doctor under consideration.
↑ comment by ChristianKl · 2015-10-20T08:58:16.747Z · LW(p) · GW(p)
The best that might happen is that you avoid an unpleasant experience
If a doctor tells you the straight truth about what you have to change in your life, that can be unpleasant. I think it can lead to bad reviews. On the other hand, I don't know whether it's useful to avoid those doctors. Defensive medicine doesn't seem to be something to strive for.
↑ comment by Tem42 · 2015-10-20T10:52:52.433Z · LW(p) · GW(p)
Yes, but if you are reading the reviews, you will be able to determine if they are useful to you. Many will not be. You should certainly be applying the same critical thinking skills that you used when hearing recommendations from your friends in the first place.
I am assuming that there are useful negative comments, although I haven't seen any yet. (My interpretation was that this was because I was only looking at good doctors to start with). If you have a useful comment on any doctor you have seen, please do add it -- it could save someone some trouble.
↑ comment by Lumifer · 2015-10-19T16:18:22.902Z · LW(p) · GW(p)
What makes a good primary care physician
First of all, competence and skill.
Just like everyone else, doctors vary in how good they are. Unfortunately, there is a popular meme (actively promulgated by the doctors' guild) that all doctors are sufficiently competent, so that any will do. That's... not true.
Given this, it shouldn't be surprising that finding out a particular doctor's competence ex ante is hard to impossible (unless s/he screwed up so hard that s/he ran into trouble with the law or the medical board). Typically you'll have to rely on proxies (e.g. the reputation of the medical school s/he went to).
Beyond that, things start to depend on what you need a doctor for. If you have a condition to be treated, you probably want a specialist in it (even primary care physicians have specializations). If you want to run a lot of tests on yourself, you want a doctor who's amenable to ordering whatever tests you ask for. Etc., etc.
↑ comment by raydora · 2015-10-20T00:21:01.991Z · LW(p) · GW(p)
I don't have any surefire methods that don't require a very basic working knowledge of medicine, but a general rule of thumb is the physician's opinion of the algorithmic approach to medical decision making. If it is clearly negative, I'd be willing to bet that the physician is bad. Not quite the same as finding a good one, but decent for narrowing your search.
Along with this, look for someone who thinks in terms of possibilities rather than certainties in diagnoses.
All assuming you're looking for a general practitioner, of course. I wouldn't select surgeons based on this rule of thumb, for instance.
If you're looking for someone who simply has a good bedside manner, then reviews and word of mouth do work.
↑ comment by Dorikka · 2015-10-20T02:49:15.045Z · LW(p) · GW(p)
Any particular evidence in favor of this approach, anecdotal or otherwise?
↑ comment by raydora · 2015-11-08T03:29:06.026Z · LW(p) · GW(p)
Late reply, I know!
Standardizing decisions through checklists and decision trees has, in general, been shown to be useful when the principles behind those algorithms are based on a reliable map. In medical practice, that's probably the evidence-based medicine approach to screening, diagnosis, and treatment.
In addition, all this assumes that patient management skills are not a concern, since that's not something I personally consider important (from the point of view of a patient) when choosing a provider of any medical or technical service. If you typically require more from your physician than medical evaluation and treatment (and many people do see physicians as societal pillars and someone to talk to about their non-medical problems), then it is something to keep in mind.
Anecdotally, every medical provider I've encountered who was a vocal opponent of clinical decision support systems had a tendency to jump to dramatic conclusions that were later proven wrong.
This is one of the few studies on the subject that isn't behind a paywall.
comment by [deleted] · 2015-10-19T09:14:30.960Z · LW(p) · GW(p)
Given the absence of a boasting thread recently, here's a little boasting:
Helped monitor and proxy for these publicly/non-internet-user accessible (meaning multiple people can use and post with them through me) Reddit accounts
Identified a potent research interest
Monitored and learned from responses to my LW content as appropriate
comment by Vladimir_Golovin · 2015-10-21T10:41:42.886Z · LW(p) · GW(p)
Just a quick dump of what I've been thinking recently:
A train of thought is a sequence of thoughts about a particular topic that lasts for some time, which may produce results in the form of decisions and updated beliefs.
My work, as a technical co-founder of a software company, essentially consists of riding the right trains of thought and documenting decisions that arise during the ride.
Akrasia, in my case, means that I'm riding the wrong train of thought.
Distraction means an outside stimulus that compels my mind to hop to a train of thought different from the one it is currently riding or should be riding. The stimuli can be anything: people talking to me, a news story, a sexually attractive person across the street, an advertisement, etc.
Some train rides are long: they last for hours, days or even weeks, while some are short and last for seconds or minutes. Historically, I've done my best work on very long rides.
Different trains of thought have different 'ticket costs'. Hopping to a sex-related or a politics-related train of thought is extremely cheap. Caching a big chunk of a problem into my mind requires conscious effort, and thus the ticket is more expensive. In my case, the right trains of thought are usually expensive.
Interruptions set back the distance traveled, or, in some cases, completely reset the distance to the original departure station. Or they may switch me to a different train of thought completely, while, at the same time, depleting the resource (willpower?) that I need for boarding the correct train of thought.
My not-so-recent decision to stop reading the news has greatly reduced the number and severity of unwanted / involuntary train hops.
My "superfocus periods", during which I'm able to ride a single right train of thought for multiple days or weeks, are mostly due to the absence of stimuli that compel my mind to jump to different, cheaper trains of thought. These periods happen when I'm away from work and sometimes from my family, which means I can safely drop my everyday duties such as showing up at the office, doing errands, replying to emails, meeting people, etc.
Keeping a detailed work diary is tremendously helpful for re-boarding the right train of thought after severe interruptions / "cache wipes". I use Workflowy.
I've noticed that I'm reluctant to board long rides when I expect interruptions during the ride. Recent examples include reluctance to read Bostrom's Superintelligence at home, or to 'load' a large piece of a project into my head at work, because my office is full of programmers who ask (completely legitimate) questions about their current tasks.
comment by qmotus · 2015-10-19T08:17:13.500Z · LW(p) · GW(p)
It's often entertained on LessWrong that if we live in some sort of a big world, then conscious observers will necessarily be immortal in a subjective sense. The most familiar form of this idea is quantum immortality in the context of MWI, but arguably a similar sort of what I would call 'big world immortality' is also implied if, for example, we live in another sort of multiverse or in a simulation.
It seems to me that big world scenarios are well accepted here, but that a lot of people don't take big world immortality very seriously. This confuses me, and I wonder if I'm missing something. I suppose that there are good counterarguments that I haven't come across or that haven't actually been presented yet because people haven't spent that much time thinking about stuff like this. The ones I have read are from Max Tegmark, who's stated that he doesn't believe quantum immortality to be true because death is a gradual, not a binary process, and (in Our Mathematical Universe) because he doesn't expect the necessary infinities to actually occur in nature. I'm not sure how credible I find these.
So, should we take big world immortality seriously? I'd appreciate any input, as this has been bothering me quite a bit as of late and has had a rather detrimental effect on my life. Note that I'm not exactly thrilled about this; to me, this kind of involuntary immortality, which nevertheless doesn't guarantee that anyone else will survive from the observer's point of view, sounds pretty horrible. David Lewis presented a very pessimistic scenario in 'How Many Lives Has Schrödinger's Cat?' as well.
↑ comment by Kaj_Sotala · 2015-10-21T17:47:07.292Z · LW(p) · GW(p)
So, should we take big world immortality seriously?
Whether or not we take it seriously doesn't seem to have any effect on how we should behave as far as I can tell, so what would taking it seriously imply?
↑ comment by qmotus · 2015-10-22T13:37:35.333Z · LW(p) · GW(p)
I mostly wanted to hear opinions on whether to believe it or not. But anyways, I'm not so sure that you're correct. I think we should find out whether big world immortality should affect our decisions or not. If it is true then I believe that we should, for instance, worry quite a bit about the measure of comfortable survival scenarios versus uncomfortable scenarios. This might have implications regarding, for example, whether or not to sign up for cryonics (I'm not interested in general, but if it significantly increases the likelihood that big world immortality leads to something comfortable, I might) or worrying about existential risk (from a purely selfish point of view, existential risk is much more threatening if I'm guaranteed to survive no matter what, but from my point of view no one else is, than in the case where it's just as likely to wipe me out as anyone else).
↑ comment by entirelyuseless · 2015-10-22T17:07:08.428Z · LW(p) · GW(p)
If you're going to worry about things like that if big world immortality is true, you can just worry about them anyway, because the only thing that you will ever observe (even if big world immortality is false) is that you always continue to survive, even when other people die, even from things like nuclear war.
Your observations will always be compatible with your personal immortality, no matter what the truth is.
↑ comment by qmotus · 2015-10-28T18:32:40.152Z · LW(p) · GW(p)
Well, sort of, but I still think there is an important difference in that without big world immortality all the survival scenarios may be so unlikely that they aren't worthy of serious consideration, whereas with it one is guaranteed to experience survival, and the likelihood of experiencing certain types of survival becomes important.
Let's suppose you're in a situation where you can sacrifice yourself to save someone you care about, and there's a very, very big chance that if you do so, you die, but a very, very small chance that you end up alive but crippled, but the crippled scenarios form the vast majority of the scenarios in which you survive. Wouldn't your choice depend at least to some degree on whether you expect to experience survival no matter what, or not?
↑ comment by passive_fist · 2015-10-20T02:47:27.653Z · LW(p) · GW(p)
Some tangential food for thought: My grandfather died recently after a slow and gradual eight-year decline in health. He suffered from a kind of neurodegenerative disorder with symptoms including various clots and plaques in his brain that gradually increased in size and number while the functioning proportion of his brain tissue decreased.
During the first year he had simple forgetfulness. In the second year it progressed to wandering and excessive eating. It then slowly progressed to incontinence, lack of ability to speak, and soon, lack of ability to move. During his final three years he was entirely bedridden and rarely made any voluntary motor movements even when he was fully awake. His muscle mass had decreased to virtually nothing. During his last month he could not even perform the necessary motor movements to eat food and had to go on life support. When he finally did die, many in the family said it didn't make any difference because he was already dead. I was amazed that he held out as long as he did; surely his heart should have given out a long time ago.
Was I a witness to his gradual dissolution in a sequence of ever-increasingly-unlikely universes? Maybe in some other thread he had a quick and painless death. Maybe in an even less likely thread, he continued declining in health to an even less likely state of bodily function.
↑ comment by qmotus · 2015-10-20T08:50:07.744Z · LW(p) · GW(p)
Well, that's just sad. But I suppose you should believe that you witnessed a relatively normal course of decline. In more unlikely threads there possibly were quick and painless deaths, continuing declining, and also miraculous recoveries.
I guess the interesting question your example raises, in this context, is this: is there a way to draw a line from your grandfather in a mentally declined state to a state of having miraculously recovered, or is there a fuzzy border somewhere that can only be crossed once?
↑ comment by gjm · 2015-10-20T16:32:43.157Z · LW(p) · GW(p)
It seems to me that a disease that inflicts gross damage to substantial volumes of brain pretty much destroys the relevant information, in which case there probably isn't much more line from "mentally declined grandfather" to "miraculously restored grandfather" than from "mentally declined grandfather" to "grandfather miraculously restored to someone else's state of normal mental functioning" (complete with wrong memories, different personality, etc.).
↑ comment by RowanE · 2015-10-20T05:40:49.147Z · LW(p) · GW(p)
I consciously will myself to believe in big world immortality, as a response to existential crises, although I don't seem to have actual reasons not to believe such besides intuitions about consciousness/the self that I've seen debated enough to distrust.
↑ comment by qmotus · 2015-10-20T08:24:12.886Z · LW(p) · GW(p)
So did I understand correctly, believing in big world immortality doesn't cause you an existential crisis, but not believing in it does?
↑ comment by RowanE · 2015-10-22T22:11:01.599Z · LW(p) · GW(p)
Yes - I mean existential crisis in the sense of dread and terror from letting my mind dwell on my eventual death, convincing myself I'm immortal is a decisive solution to that insofar as I can actually convince myself. I don't mind existence being meaningless, it is that either way, I care much more about whether it ends.
↑ comment by entirelyuseless · 2015-10-19T13:27:26.406Z · LW(p) · GW(p)
I think it should be taken seriously, in the sense that there is a significant chance that it is true. I agree that Less Wrong in general tends to be excessively skeptical of the possibility, probably due to an excessive skepticism-of-weird-things in general, and possibly due to an implicit association with religion.
However:
1) It may just be false, because the big world scenarios may fail to be true.
2) It may be false because the big world scenarios fail to be true in the way required; for example, I don't think anyone really knows which possibilities are actually implied by the MWI interpretation of quantum mechanics.
3) It may be false because "consciousness just doesn't work that way." While you can argue that this isn't possible or meaningful, it is an argument, not an empirical observation, and you may be wrong.
4) If it's true, it is probably true in an uncontrollable way, so that basically you are going to have no say in what happens to you after other observers see you die, or in whether it is good or bad (and an argument can be made that it would probably be bad). This makes the question of whether it is true or not much less relevant to our current lives, since our actions cannot affect it.
5) There might be a principle of caution (being used by Less Wrong people). One is inclined to exaggerate the probability of very bad things, in order to be sure to avoid them. So if final death is very bad, people will be inclined to exaggerate the probability that ordinary death is final.
↑ comment by Kaj_Sotala · 2015-10-21T17:43:39.835Z · LW(p) · GW(p)
I agree that Less Wrong in general tends to be excessively skeptical of the possibility, probably due to an excessive skepticism-of-weird-things in general
Of all the things LW has been accused of, this is the first time I see a skepticism-of-weird-things in general being attributed to the site.
↑ comment by qmotus · 2015-10-20T08:36:36.147Z · LW(p) · GW(p)
Regarding one, two and three: shouldn't we, in any case, be able to make an educated guess? Am I wrong in assuming that based on our current scientific knowledge, it is more likely true than not? (My current feeling is that based on my own understanding, this is what I should believe, but that the idea is so outrageous that there ought to be a flaw somewhere.)
Two is an interesting point, though; I find it a bit baffling that there seems to be no consensus about how infinities actually work in the context of multiverses ("infinite", "practically infinite" and "very big" are routinely used interchangeably, at least in text that is not rigorously scientific).
Regarding four, I'm not so sure. Take cryonics for example. I suppose it does either increase or decrease the likelihood that a person ends up in an uncomfortable world. Which way is it, and how big is the effect? Of course, it's possible that in the really long run (say, trillions of times the lifespan of the universe) it doesn't matter.
Regarding five, I guess so. Then again, one might argue that big world immortality would itself be a 'very bad thing'.
comment by Panorama · 2015-10-23T10:12:43.752Z · LW(p) · GW(p)
UN climate reports are increasingly unreadable
The climate summary findings of the Intergovernmental Panel on Climate Change (IPCC) are becoming increasingly unreadable, a linguistics analysis suggests.
IPCC summaries are intended for non-scientific audiences. Yet their readability has dropped over the past two decades, and reached a low point with the fifth and latest summary published in 2014, according to a study published in Nature Climate Change.
The study used the Flesch Reading Ease test, which assumes that texts with longer sentences and more complex words are harder to read. Reports from the IPCC’s Working Group III, which focuses on what can be done to mitigate climate change by cutting carbon dioxide emissions, received the lowest marks for readability.
Confusion created by the writing style of the summaries could hamper political progress on tackling greenhouse-gas emissions, thinks Ralf Barkemeyer, who led the analysis and works on sustainable business management at the KEDGE Business School in Bordeaux, France. The readability scores “are not just low but exceptionally low”, he says.
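For the curious, the Flesch Reading Ease formula mentioned above is simple enough to sketch. The syllable counter below is a crude vowel-group heuristic rather than the dictionary-based counting a serious implementation would use, so scores will only approximate published ones:

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of consecutive vowels (min 1 per word)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease; higher scores mean easier text.
    FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Since the formula penalizes long sentences and polysyllabic words, prose in the typical IPCC register scores far below short declarative sentences, which matches the study's complaint.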
comment by Houshalter · 2015-10-19T19:36:00.927Z · LW(p) · GW(p)
I've been thinking about some of the issues with CEV. It's come up a few times that humanity might not have a coherent, non-contradictory set of values. And the question of how to come up with some set of values that best represents everyone.
It occurs to me that this might be a problem mathematicians have already solved, or at least given a lot of thought. In the form of voting systems. Voting is a very similar problem. You have a bunch of people you want to represent fairly, and you need to select a leader that best represents their interests.
My favorite alternative voting system is the Condorcet method. Basically, it compares each pair of candidates in a head-to-head election, and selects the candidate that would have won every single one of those elections.
It is possible for there to be no Condorcet winner, if the population has circular preferences: Candidate A > Candidate B > C > A, like a rock-paper-scissors thing.
To solve this, a number of methods have been developed to select the best compromise. My favorite is Minimax. It selects the candidate whose greatest pairwise loss is the least bad. I think that's the most desirable way to pick a winner, and it's also super simple.
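The pairwise-comparison-plus-minimax idea fits in a few lines. A toy sketch (candidate names and ballot counts are made up; this is the margins variant of minimax, not a production tallying method):

```python
def minimax_winner(ballots, candidates):
    """Ballots are rankings, best first. Returns the Condorcet winner
    if one exists; otherwise the candidate whose worst head-to-head
    defeat has the smallest margin."""
    # wins[(a, b)] = number of voters ranking a above b
    wins = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                wins[(a, b)] += 1

    def worst_defeat(c):
        # Margin of c's worst pairwise loss (0 if c never loses,
        # i.e. c is the Condorcet winner).
        return max(max(0, wins[(o, c)] - wins[(c, o)])
                   for o in candidates if o != c)

    return min(candidates, key=worst_defeat)

# A rock-paper-scissors electorate with no Condorcet winner:
# A beats B 6-3, B beats C 7-2, C beats A 5-4.
ballots = ([("A", "B", "C")] * 4
           + [("B", "C", "A")] * 3
           + [("C", "A", "B")] * 2)
print(minimax_winner(ballots, ["A", "B", "C"]))  # → A (loses by only 1)
```

In the cyclic example, A's only defeat (to C) has margin 1, so minimax picks A even though every candidate loses some pairwise contest.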
There are some differences. Instead of a leader, we want the best set of values and policies for the AI to follow. And there might not be a finite set of candidates, but an infinite number of possibilities. And actually voting might be impractical. Instead an AI might have to predict what you would have voted, if you knew all the arguments and had much time to think about it and come to a conclusion. But I think it can still be modeled as a voting problem.
Now this isn't actually something we need to figure out now. If we somehow had an FAI, we could probably just ask it to come up with the most fair way of representing everyone's values. We probably don't need to hardcode these details.
The bigger issue is why the person or group building the FAI would even bother to do this. They could just take their own CEV and ignore everyone else's. And they have every incentive to do this. It might even be significantly simpler than trying to do a full CEV of humanity. So even if we do solve FAI, humanity is probably still screwed.
EDIT: After giving it some more thought, I'm not sure voting systems are actually desirable. The whole point of voting is that people can't be trusted to just specify their utility functions. The perfect voting system would be for each person to give a number to each candidate based on how much utility they'd get from them being elected. But that's extremely susceptible to tactical voting.
However with FAI, it's possible we could come up with some way of keeping people honest, or peering into their brains and getting their true value function. That adds a great deal of complexity though. And it requires trusting the AI to do a complex, arbitrary, and subjective task. Which means you must have already solved FAI.
Replies from: Tem42, Lumifer↑ comment by Tem42 · 2015-10-19T21:34:30.917Z · LW(p) · GW(p)
If I were God of the World, I would model the problem as more of a River Crossing Puzzle. How do you get things moving along when everyone on the boat wants to kill each other? Segregation! Resettling humanity mapped over a giant Venn diagram is trivial once we are all uploaded, but it also runs into ethical problems; just as voting and enacting the will of the majority (or some version thereof) is problematic, so is setting up the world so that the oppressor and the oppressed are never allowed to meet. However, in my experience people are much happier with rules like "you can't go there" and much less happy with rules like "you have to do what that guy wants". This is probably due to our longstanding tradition of private property.
This makes some assumptions as to what the next world will look like, but I think that it is a likely outcome -- it is always much easier to send the kids to their rooms than to hold a family court, and I think a cost/benefit analysis would almost surely show that it is not worth trying to sort out all human problems as one big happy group.
Of course, this assumes that we don't do something crazy like include democracy and unity of the human race as terminal values.
Replies from: gjm↑ comment by gjm · 2015-10-19T23:08:07.729Z · LW(p) · GW(p)
Segregation!
This puts me in mind of Eliezer's "Failed Utopia #4-2".
↑ comment by Lumifer · 2015-10-19T19:50:28.532Z · LW(p) · GW(p)
Voting is a very similar problem.
Not quite.
The local population consists of 80% blue people and 20% orange people. For some reason, the blue people dislike orange people. A blue leader arises who says "We must kill all the orange people and take their stuff!" Well, it's an issue, and how do people properly decide on a policy? By voting, of course. Everyone votes and the policy passes by simple majority. And so the blue people kill all the orange people and take their stuff. The end.
Replies from: None, WalterL, Houshalter↑ comment by [deleted] · 2015-10-20T17:07:38.911Z · LW(p) · GW(p)
This is exactly the type of problems that mathematicians have tried to solve with different voting schemes. One recent example that has the potential to solve this problem is quadratic vote buying, which takes into account strong preferences of minorities.
Replies from: Lumifer↑ comment by Lumifer · 2015-10-20T17:24:05.903Z · LW(p) · GW(p)
This is exactly the type of problems that mathematicians have tried to solve
I am not sure this is a mathematical problem. Generally speaking, giving a minority the veto power trades off minority safety against government ability to do things. In the limit you have decision making by consensus which has obvious problems.
quadratic vote buying
What do you buy votes with? Money? Then it's an easy way for the blue people to first take orange people's stuff and then, once the orange people run out of resources to buy votes with, to kill them anyway.
Replies from: None↑ comment by [deleted] · 2015-10-20T17:37:58.209Z · LW(p) · GW(p)
Generally speaking, giving a minority the veto power trades off minority safety against government ability to do things. In the limit you have decision making by consensus which has obvious problems.
That's precisely why it is a mathematical problem... you need to quantify the tradeoffs, and figure out which voting schemes maximize different value schemes and utility functions. Math can't SOLVE this problem because it's an ought problem, not an is problem.
But you can't answer the ought side of things without first knowing the is side.
In terms of quadratic vote buying, money is only one way to do it, another is to have an artificial or digital currency just for vote buying, for which people get a fixed amount for the year.
I don't think your concept of it really makes sense in the context of modern government with a police force, international oversight, etc. All voting schemes break down when you assume a base state of anarchy - but assuming there's already a rule of law in place, you can maximize how effective those laws are (or the politicians who make them) by changing your voting rules.
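Quadratic vote buying's minority-protection property is easy to see with toy numbers. A sketch using Lumifer's 80/20 blue/orange scenario (the per-voter credit budgets are made up for illustration):

```python
def votes_for(credits):
    # Under quadratic vote buying, casting n votes costs n**2 credits,
    # so n votes are affordable iff credits >= n**2.
    return int(credits ** 0.5)

# 80 lukewarm blue voters each spend 1 credit  -> 1 vote apiece.
# 20 orange voters, for whom the issue is existential, each spend
# 25 credits -> 5 votes apiece.
blue_total = 80 * votes_for(1)     # 80 votes
orange_total = 20 * votes_for(25)  # 100 votes
print(orange_total > blue_total)   # → True: the intense minority prevails
```

The quadratic cost is the whole trick: buying 5 votes costs 25x as much as buying 1, so only voters with genuinely strong preferences find it worthwhile, and a mild majority can be outvoted by an intense minority without giving the minority a blanket veto.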
Replies from: Lumifer↑ comment by Lumifer · 2015-10-20T17:48:13.070Z · LW(p) · GW(p)
That's precisely why it is a mathematical problem... Math can't SOLVE this problem
Ahem.
in the context of modern government with a police force, international oversight, etc.
I would be quite interested to learn who exerts "international oversight" over, say, USA.
Besides, are you really saying a "modern" government can do no wrong??
assuming there's already a rule of law in place, you can maximize how effective those laws are
I'm sorry, I'm not talking about the executive function of the government which merely implements the laws, I'm talking about the legislative function which actually makes the laws. There is no assumption of the base state of anarchy.
Replies from: None↑ comment by [deleted] · 2015-10-20T17:52:50.631Z · LW(p) · GW(p)
Ahem
This isn't helpful. There's nothing for me to respond to.
I would be quite interested to learn who exerts "international oversight" over, say, USA.
The UN (specifically, other very powerful countries that trade with the US).
I'm talking about the legislative function which actually makes the laws. There is no assumption of the base state of anarchy.
Would a historical example of what you're talking about be the legality of slavery?
Replies from: Lumifer↑ comment by Lumifer · 2015-10-20T17:58:57.095Z · LW(p) · GW(p)
There's nothing for me to respond to.
Let me unroll my ahem.
You claimed this is a mathematical problem, but in the next breath said that math can't solve it. Then what was the point of claiming it to be a math problem in the first place? Just because dealing with it involves numbers? That does not make it a math problem.
The UN
LOL. Can we please stick a bit closer to the real world?
Would a historical example of what you're talking about be the legality of slavery?
Actually, the first example that comes to mind is when the US decided that all Americans who happen to be of Japanese descent and have the misfortune to live on the West Coast need to be rounded up and sent to concentration, err.. internment camps.
Replies from: JoshuaZ↑ comment by Houshalter · 2015-10-20T20:07:52.207Z · LW(p) · GW(p)
But maybe that's the correct outcome? If 80% of the population truly believes that some people should die, maybe they should. What higher authority can we appeal to?
I'm not saying I think minorities should die. But I also don't think the majority thinks that either. So it's just an absurd hypothetical. You could say the same thing about CEV in general. "We shouldn't take the utility function of humanity, because what if it's bad?" Bad according to what? What higher utility function are we using to determine badness? Some individual's?
I think Condorcet voting is the best way to compromise between a lot of different people's values. It tends to favor moderates and compromises, especially the Minimax method I mentioned.
I don't think this system is great, I just think it's the best we can possibly do.
Replies from: Jiro, Lumifer↑ comment by Jiro · 2015-10-21T01:06:44.872Z · LW(p) · GW(p)
"We shouldn't take the utility function of humanity, because what if it's bad?" Bad according to what? What higher utility function are we using to determine badness? Some individual's?
You'll have to convince me that taking other people's utility function into account is consistent with my utility function.
Replies from: Houshalter↑ comment by Houshalter · 2015-10-22T17:53:34.648Z · LW(p) · GW(p)
It's not. I literally discussed that in my first comment. If you can become dictator, it's definitely in your interest to do so. Instead of turning power over to a democracy.
But I would much rather live under a democracy than a dictatorship where I'm not dictator.
↑ comment by Lumifer · 2015-10-20T20:24:54.210Z · LW(p) · GW(p)
But maybe that's the correct outcome? If 80% of the population truly believes that some people should die, maybe they should. What higher authority can we appeal to?
Really?
So it's just an absurd hypothetical.
I wonder if you heard the word "genocide" before. Not in the context of hypotheticals, but as a recurring feature of human history.
Replies from: Houshalter↑ comment by Houshalter · 2015-10-22T17:44:18.203Z · LW(p) · GW(p)
Really?
That's not an argument.
If 80% of the population has a certain value, how can you say that value is wrong? Statistically you are far more likely to be in that 80%.
And the alternative isn't "you get to be dictator and have all your values maximized without compromise". It's "some random individual is picked from the population and gets his values maximized over everyone else's." Democracy of values is far preferable.
I wonder if you heard the word "genocide" before. Not in the context of hypotheticals, but as a recurring feature of human history.
By functioning democracies? With a perfectly rational and informed population?
That's the important part of CEV, or at least my interpretation of it. The AI predicts what you would decide, if you knew all the relevant information and had plenty of time to think about it. I'm not suggesting a regular democracy where the voters barely know anything.
Replies from: Lumifer↑ comment by Lumifer · 2015-10-22T18:58:49.732Z · LW(p) · GW(p)
That's not an argument.
Indeed, it is not. The question mark at the end might indicate that it is a question.
If 80% of the population has a certain value, how can you say that value is wrong?
I don't see any problems with this whatsoever. I am not obligated to convert to the values of the majority. What is the issue that you see?
By functioning democracies?
There is a bit of a true Scotsman odor to this question :-) but let me point out my example upthread and ask you whether the Nazi party came to power democratically.
AI predicts what you would decide
At this level you might as well cut to the chase and go straight to "I wish for you to do what I should wish for". No need to try to tell God... err.. AI how to do it.
Replies from: Houshalter↑ comment by Houshalter · 2015-10-23T23:24:43.393Z · LW(p) · GW(p)
I don't see any problems with this whatsoever. I am not obligated to convert to the values of the majority. What is the issue that you see?
And they aren't obligated to convert to your values. Not everyone can have their way! Democratic voting is the fairest way of making a decision when people can't agree.
There is a bit of a true Scotsman odor to this question :-) but let me point out my example upthread and ask you whether the Nazi party came to power democratically.
Yes I know it's No-True-Scotsman-y, but I really believe that a totally informed population would make very different decisions than an angry mob during a war and depression.
And even your examples are not convincing. Internment during wartime wasn't anywhere near the level of genocide. And the Nazi election was far from fair:
...the Nazis "unleashed a campaign of violence and terror that dwarfed anything seen so far." Storm troopers began attacking trade union and Communist Party (KPD) offices and the homes of left-wingers. In the second half of February, the violence was extended to the Social Democrats, with gangs of brownshirts breaking up Social Democrat meetings and beating up their speakers and audiences. Issues of Social Democratic newspapers were banned. Twenty newspapers of the Centre Party, a party of Catholic Germans, were banned in mid-February for criticizing the new government. Government officials known to be Centre Party supporters were dismissed from their offices, and stormtroopers violently attacked party meetings in Westphalia.
Six days before the scheduled election date, the German parliament building was set alight in the Reichstag fire, allegedly by the Dutch Communist Marinus van der Lubbe. This event reduced the popularity of the KPD... This emergency law removed many civil liberties and allowed the arrest of... 4,000 leaders and members of the KPD shortly before the election, suppressing the Communist vote and consolidating the position of the Nazis. The KPD was effectively outlawed...
The resources of big business and the state were thrown behind the Nazis' campaign to achieve saturation coverage all over Germany. Brownshirts and SS patrolled and marched menacingly through the streets of cities and towns. A "combination of terror, repression and propaganda was mobilized in every... community, large and small, across the land." To further ensure the outcome of the vote would be a Nazi majority, Nazi organizations "monitored" the vote process. In Prussia 50,000 members of the SS, SA and Stahlhelm were ordered to monitor the votes as deputy sheriffs by acting Interior Minister Hermann Göring.
At this level you might as well cut to the chase and go straight to "I wish for you to do what I should wish for". No need to try to tell God... err.. AI how to do it.
Well I did mention that in my first comment. This is more of an aesthetic thing to talk about. Once we have an AI we can just ask it how to solve this problem.
But I still think it's somewhat important to think about. Because if we go with your solution, we just get whatever the creator of the AI wants. He becomes supreme dictator of the universe forever, and forces his values on everyone for eternity. I would much rather have CEV or something like it.
Replies from: Lumifer↑ comment by Lumifer · 2015-10-26T14:48:31.222Z · LW(p) · GW(p)
Democratic voting is the fairest way of making a decision when people can't agree.
That sounds like an article of faith.
"Fair" is a very... relative word. Calling something "fair" rarely means more than "I like / approve of it".
This is more of an aesthetic thing to talk about.
Ah. Well, speaking aesthetically, I find the elevation of mob rule to be the ultimate moral principle ugly and repugnant. Y'know, de gustibus 'n'all...
if we go with your solution
I don't believe I proposed any.
Replies from: Houshalter↑ comment by Houshalter · 2015-10-30T05:55:45.292Z · LW(p) · GW(p)
Well see my edit to my first comment. I'll paste it here:
After giving it some more thought, I'm not sure voting systems are actually desirable. The whole point of voting is that people can't be trusted to just specify their utility functions. The perfect voting system would be for each person to give a number to each candidate based on how much utility they'd get from them being elected. But that's extremely susceptible to tactical voting.
However with FAI, it's possible we could come up with some way of keeping people honest, or peering into their brains and getting their true value function. That adds a great deal of complexity though. And it requires trusting the AI to do a complex, arbitrary, and subjective task. Which means you must have already solved FAI.
Do you agree that the fairest system would be to combine everyone's utility functions and maximize them? Of course somehow giving everyone equal weight to avoid utility monsters and other issues. I think these issues can be worked out.
If so, do you agree that voting systems are the best compromise when you can't just read people's utility functions? And need to worry about tactical voting? Because that is basically what I was getting at.
If you don't agree to the above, then I don't understand your objection. CEV is about somehow finding the best compromise of all humans' utility functions. About combining them all. All I'm talking about is more concrete methods of doing that.
Replies from: Jiro, Lumifer↑ comment by Jiro · 2015-11-01T16:44:55.826Z · LW(p) · GW(p)
Do you agree that the fairest system would be to combine everyone's utility functions and maximize them? Of course somehow giving everyone equal weight to avoid utility monsters and other issues. I think these issues can be worked out.
Anything you can do maximizes some combination of people's utility functions. So it is trivially true that the fairest system is a system which uses some combination of people's utility functions. Unless you can first describe how you are going to avoid utility monsters and other perils of utilitarianism, you really haven't said anything useful.
↑ comment by Lumifer · 2015-10-30T14:35:35.172Z · LW(p) · GW(p)
Do you agree that the fairest system would be to combine everyone's utility functions and maximize them?
No, I do not. I do not think that humans have coherent utility functions. I don't think utilities of different people can be meaningfully combined, either.
I think these issues can be worked out.
Ah, yes, the famous business plan of the underpants gnomes...
If so, do you agree that voting systems are the best compromise when you can't just read people's utility functions?
No, I do not. They might be best given some definitions of "best" and given some conditionals, but they are not always best regardless of anything.
CEV is about somehow finding the best compromise of all humans' utility functions. About combining them all.
What makes you think it is possible?
comment by Panorama · 2015-10-23T13:19:29.712Z · LW(p) · GW(p)
Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?
The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.
comment by cousin_it · 2015-10-21T12:56:00.147Z · LW(p) · GW(p)
Thanks to Turing completeness, there might be many possible worlds whose basic physics are much simpler than ours, but that can still support evolution and complex computations. Why aren't we in such a world? Some possible answers:
1) Luck
2) Our world has simple physics, but we haven't figured it out
3) Anthropic probabilities aren't weighted by simplicity
4) Evolution requires complex physics
5) Conscious observers require complex physics
Anything else? Any guesses which one is right?
Replies from: solipsist, IlyaShpitser, lmm, Manfred, Dagon↑ comment by solipsist · 2015-12-05T18:03:58.094Z · LW(p) · GW(p)
Other answers I've considered:
o) Simpler universes are more likely, but complicated universes vastly outnumber simple ones. It's rare to be at the mode, even though the mode is the most common place to be.
p) Beings in simple universes don't ask this question because their universe is simple. We are asking this question, therefore we are not in a simple universe.
2') You don't spend time pondering questions you can quickly answer. If you discover yourself thinking about a philosophy problem, you should expect to be on the stupider end of entities capable of thinking about that problem.
↑ comment by IlyaShpitser · 2015-10-24T22:30:36.383Z · LW(p) · GW(p)
n) The world is optimized for good theatre, not simplicity.
↑ comment by Manfred · 2015-10-21T19:30:05.303Z · LW(p) · GW(p)
I'm of the opinion that there isn't going to be a satisfactory answer. It's true that the complexity of our universe makes it more likely that there's some special explanation, but sometimes things just happen. Why am I the me on October 21, and not the me on some other day? Well, it's a hard job, but someone's got to do it.
Replies from: cousin_it↑ comment by Dagon · 2015-10-21T14:55:36.626Z · LW(p) · GW(p)
How do #1 and #3 differ? I think both are "yes, there are many such worlds - we happen to be in this one".
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-10-21T16:54:01.080Z · LW(p) · GW(p)
It doesn't sound impossible that anthropic probabilities are weighted by simplicity and we're lucky.
Replies from: Dagoncomment by Douglas_Knight · 2015-10-19T14:53:15.558Z · LW(p) · GW(p)
Do people take advantage of instant run-off voting to "not throw away their vote"?
What do they do in Australia? Where else do people have such systems? I suppose I could just look up Australia, but I fear it might be hard to interpret and I’d rather hear from someone with experience of it.
I ask because the recent British Labour leadership election was very different from the last. I suspect that there was a substantial portion of the electorate who preferred, say, Abbott in 2010, but didn't vote for her because she was not viable. The whole complicated system exists to allow people to simply express their preferences and not put in the strategic voting effort of determining who is viable, but maybe it isn't doing much.
(It is definitely doing something. In 2010, 28% of the vote share went to non-viable candidates. A plurality system applied to those first round votes would have chosen David over Ed.)
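For reference, the instant-runoff count itself is simple to sketch. A toy illustration (the candidate names and ballots are made up) of why a sincere first preference for a non-viable candidate isn't "thrown away": it transfers on elimination:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the
    fewest first-preference votes, transferring those ballots to each
    voter's next surviving choice, until someone has a majority."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)  # skip exhausted ballots
        )
        top, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return top
        remaining.remove(min(tallies, key=tallies.get))

# 15 voters sincerely rank a minor candidate first; their ballots
# transfer to Left when Minor is eliminated in round one.
ballots = ([("Minor", "Left")] * 15
           + [("Left",)] * 40
           + [("Right",)] * 45)
print(irv_winner(ballots))  # → Left
```

A plurality count of the same first preferences would have elected Right 45-40-15, which is exactly the difference the question above is probing: whether voters actually use the freedom to rank sincerely, or still vote as if it were first-past-the-post.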
Replies from: Irgy, Good_Burning_Plastic↑ comment by Irgy · 2015-10-20T06:45:50.547Z · LW(p) · GW(p)
As an Australian I can say I'm constantly baffled over the shoddy systems used in other countries. People seem to throw around Arrow's impossibility theorem to justify hanging on to whatever terrible system they have, but there's a big difference between obvious strategic voting problems that affect everyone, and a system where problems occur in only fairly extreme circumstances. The only real reason I can see why the USA system persists is that both major parties benefit from it and the system is so good at preventing third parties from having a say that even as a whole they can't generate the will to fix it.
In more direct answer to your question, personally I vote for the parties in exactly the order I prefer them. My vote is usually partitioned as: [Parties I actually like | Major party I prefer | Parties I'm neutral about | Parties I've literally never heard of | Major party I don't prefer | Parties I actively dislike]
A lot of people vote for their preferred party, as evidenced by more primary votes for minor parties. Just doing a quick comparison, in the last (2012) US presidential election only 1.74% of the vote went to minor candidates, while in the last Australian federal election (2013) an entire 21% of the votes went to minor parties.
Overall it works very well in the lower house.
In the upper house, the whole system is so complicated that no-one understands it, and the ballot papers are so big that the effort required to vote in detail prevents most people from bothering. In the upper house I usually just vote for a single party and let their preference distribution be automatically applied for me. Of course I generally check what that is first, though you have to remember to do it beforehand since it's not available while you're voting. Despite all that, though, it's a good system and I wouldn't want it replaced with anything different.
comment by gjm · 2015-10-24T08:57:02.928Z · LW(p) · GW(p)
More spam: someone called "lucy" is posting identical nonsense about vampires to multiple threads. Less obnoxious than denature123 yesterday, but still certainly spam.
(I've been assuming that, given that we have multiple moderators, it's better to post comments like this than to PM one or more individual moderators. I will be glad of correction from actual moderators if some other approach is better.)
Replies from: CAE_Jones↑ comment by CAE_Jones · 2015-10-24T18:21:53.659Z · LW(p) · GW(p)
I must admit, had lucy managed to only post the vampire ads in threads about interventions to increase longevity / social skills / etc, I might have considered them worth keeping around for entertainment value. At least then we could use them as an excuse to discuss how blood transfusions from healthy donors affect various quality-of-life factors.
(I wonder how long before someone tries to start a business based around selling healthy blood / fecal transplants / etc, and how long before the FDA tells them to stop before they sell someone diseases.)
comment by Panorama · 2015-10-23T10:16:30.806Z · LW(p) · GW(p)
'Zeno effect' verified: Atoms won't move while you watch
One of the oddest predictions of quantum theory – that a system can’t change while you’re watching it – has been confirmed in an experiment by Cornell physicists.
...
Graduate students Yogesh Patil and Srivatsan K. Chakram created and cooled a gas of about a billion Rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures, the atoms can “tunnel” from place to place in the lattice. The famous Heisenberg uncertainty principle says that the position and velocity of a particle interact. Temperature is a measure of a particle’s motion. Under extreme cold velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another.
...
The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can’t see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling reduced dramatically.
comment by mwengler · 2015-10-21T14:03:54.435Z · LW(p) · GW(p)
There is significant progress in genetic modification of humans and in physical modification/augmentation of humans. It is plausible we will have genetically modified and/or physically modified human intelligence before we have artificial intelligence.
FAI is the pursuit of artificial intelligence constrained in a way that it will not be a threat to unmodified humans. Or at least that is what it seems to be to me as an observer of discussions here, is this a reasonable description of FAI?
It occurs to me that natural human intelligence has certainly not developed with any such constraints. Indeed, if humanity can develop UAI, then that is essentially proof that human intelligence is not Friendly in the sense we wish FAI to be.
Presumably we have been more worried with how to constrain AI to be friendly because AI could learn to self-modify and experience exponential growth and thus overwhelm human intelligence. But what of modified human intelligence, genetic or physical? These ARE examples of self-modification. And they both appear to be capable of inducing exponential growth.
Is the threat from unfriendly human intelligence any less or any different, or worthy of consideration as an existential risk? If an intelligence arises from modified human, is it a threat to unmodified human, or an enhancement on it? How do we define natural and artificial when our purpose in defining it is to protect the one from the other?
Replies from: polymathwannabe, g_pepper, Lumifer, Dagon↑ comment by polymathwannabe · 2015-10-21T17:00:03.129Z · LW(p) · GW(p)
Human intelligence has already chosen to maximize the burning of oil with no regard for the viability of our biosphere, so we're already living under an Unfriendly Human Intelligence scenario.
↑ comment by g_pepper · 2015-10-21T15:08:49.497Z · LW(p) · GW(p)
Bostrom discusses this possibility in Superintelligence, both in the form of enhanced biological cognition and in brain/machine interfaces. Ultimately he argues that a super intelligent singleton is more likely to be a machine than an enhanced biological brain. He argues that increases in cognitive ability should be much faster with a machine intelligence than through biological enhancement, and that machine intelligence is more scalable (I believe that he makes the point that, while a human brain the size of a warehouse is not practical, a computer the size of a warehouse is).
↑ comment by Lumifer · 2015-10-21T14:54:02.583Z · LW(p) · GW(p)
human intelligence is not Friendly in the sense we wish FAI to be.
Well, of course it's not. Nobody ever said it is.
capable of inducing exponential growth.
Biologically, on the wetware substrate? I don't think that's possible. And if you mean uploads/ems, the distinction between human and AI becomes somewhat vague at this point...
↑ comment by Dagon · 2015-10-21T14:53:15.829Z · LW(p) · GW(p)
Currently, I'd say the threat from unfriendly natural intelligence is many orders of magnitude higher than that from AI.
There is a valid question of the shape of the improvement curve, and it's at least somewhat believable that technological intelligence outstrips puny humans very rapidly at some point, and shortly thereafter the balance shifts by more than is imaginable.
Personally, I'm with you - we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.
Replies from: Lumifer, ChristianKl↑ comment by Lumifer · 2015-10-21T15:13:13.253Z · LW(p) · GW(p)
we should be looking for ways to engineer friendliness into humans
No. That's a really bad idea.
First, no one even knows what "friendliness" is. Second, I strongly suspect that attempts to genetically engineer "friendly humans" will end up creating genetic slaves.
Replies from: Dagon↑ comment by Dagon · 2015-10-21T19:10:12.773Z · LW(p) · GW(p)
Perhaps. Don't both of those concerns apply to AI as well?
Humans are the bigger threat, are more easily studied, and are (currently) changing slowly enough that we can be more deliberate than we can of a near-foom AI (presuming post-foom is too late).
I don't have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Replies from: Lumifer↑ comment by Lumifer · 2015-10-21T19:57:50.725Z · LW(p) · GW(p)
I don't have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Sure I do. I'm a speciesist :-)
Besides, we're not discussing what to do or not to do with hypothetical future conscious AIs. We're discussing whether "we should be looking for ways to engineer friendliness into humans". Humans are not hypothetical and "ways to engineer into humans" are not hypothetical either. They are usually known by the name of "eugenics" and have a... mixed history. Do you have reasons to believe that future attempts to "engineer humans" will be much better?
Replies from: Tem42, Dagon↑ comment by Tem42 · 2015-10-21T22:31:15.912Z · LW(p) · GW(p)
For the most part, eugenics does not have a mixed history. Eugenics has a bad name because it has historically been performed by eliminating people from the gene pool -- through murder or sterilization. As far as I am aware, no significant eugenics movement has avoided this, and therefore the history would not qualify as mixed.
We should assume that future attempts will be better when those future attempts involve well developed, well understood, well tested, and widely (preferably universally) available changes to humans before they are born -- that is, changes that do not take anyone out of the gene pool.
↑ comment by Dagon · 2015-10-22T13:36:18.832Z · LW(p) · GW(p)
Sure I do. I'm a speciesist :-)
I probably am too, but I don't much like it. I want to be a consciousness-ist.
Most humans are hypothetical, just like all AIs are. They haven't existed yet, and may not exist in the forms we imagine them. Much like MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.
I am merely pointing out that most of what I've read about FAI goals seems to apply to future humans as much or more as to future AIs.
↑ comment by ChristianKl · 2015-10-21T21:04:20.515Z · LW(p) · GW(p)
Personally, I'm with you - we should be looking for ways to engineer friendliness into humans as the fist step toward understanding and engineering it into machines.
As far as I understand, engineering humans to be more friendly is a concern for the Chinese. They also happen to be more likely to do genetic engineering than the West.
comment by [deleted] · 2015-10-21T10:39:37.013Z · LW(p) · GW(p)
I notice I boast about things without even considering whether they're things others will find impressive or shameful - like not attending class. It's a bad habit not to exercise my consideration, empathy and/or theory of mind more. Reckon I've identified the right failure mode here, or am I misattributing?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-21T12:11:11.287Z · LW(p) · GW(p)
I think you get it roughly right.
I recently added a quote from Dennett to the rationality quotes thread that fits here:
We really have to think of reasoning the way we think of romance, it takes two to tango. There has to be a communication.
You don't have to communicate with everybody, but it's very worthwhile to have two-way conversations about your important habits with other people. It's important for mental sanity.
comment by [deleted] · 2015-10-19T08:18:31.878Z · LW(p) · GW(p)
Reintroducing the most meta-concept I'm aware of: integrative complexity!
comment by ike · 2015-10-27T01:10:02.471Z · LW(p) · GW(p)
Munchkining real estate http://www.bloomberg.com/news/articles/2015-10-23/this-startup-tracks-america-s-murder-houses (I'm referring to the resellers mentioned in the article, not the actual startup covered).
Replies from: Curiouskid↑ comment by Curiouskid · 2015-10-27T02:55:20.648Z · LW(p) · GW(p)
Another thing I've heard recently, but not looked into much, is living on a houseboat off the coast of San Francisco and then paddling in on a kayak.
comment by [deleted] · 2015-10-21T23:57:14.920Z · LW(p) · GW(p)
- MIRI's research guide is definitely overkill for interpreting its individual papers.
- So, I have reason to believe it will be overkill for interpreting its technical research agenda.
- Has anyone done a kind of annotation of the thing that is more amateur-friendly?
- Surely it's in MIRI's best interest to make it more accessible, so as to compel potential benefactors to support their research?
comment by [deleted] · 2015-10-21T01:11:46.486Z · LW(p) · GW(p)
I've finally gotten to reading a bunch of MIRI papers. I don't pretend to understand them as they are meant to be understood. Can it be predicted whether a maths problem is solved, solvable, unsolved or unsolvable? I feel really...dismayed and discouraged reading through MIRI's work. I feel as though they are trying to solve questions that cannot be solved. Though many famous maths problems go from unsolved to solved, and I struggled with high school maths, so I certainly would prefer to defer to some impressive reasoning from you, my peers at LessWrong, before I abandon my support for MIRI.
Replies from: IlyaShpitser, MrMind↑ comment by IlyaShpitser · 2015-10-21T01:28:43.084Z · LW(p) · GW(p)
You should worry more about whether MIRI's way of doing problems is a good way of solving hard problems, not how hard the problems are.
Problem difficulty is a constant you cannot affect, social structure is a variable.
Replies from: None, gjm↑ comment by [deleted] · 2015-10-22T08:26:52.304Z · LW(p) · GW(p)
As I read through the Agenda, I can hear Anna Salamon telling me something along the lines of: if you think something is a rational course of action, the antecedents to that course must necessarily be rational or you are wrong. She doesn't explain it like that, and I can't find that particular thread, but whatever...
Now reviewing the research agenda, there are some things which concern me about their way of doing problem solving. I'd appreciate anyone's input, challenges, clarification and additions:
We focus on research that cannot be safely delegated to machines
nice sound bite. No quarrel with this. Just wanted to point it out
No AI problem (including the problem of error-tolerant agent design itself) can be safely delegated to a highly intelligent agent that has incentives to manipulate or deceive its programmers
for the same reason, I won't delegate trust to design friendly AI up to strangers at MIRI alone ;)
It would be risky to delegate a crucial task before attaining a solid theoretical understanding of exactly what task is being delegated.
this is the critical assumption behind MIRI's approach. Is there any reason to believe this is the case?
It may be possible to use our understanding of ideal Bayesian inference to task a highly intelligent system with developing increasingly effective approximations of a Bayesian reasoner, but it would be far more difficult to delegate the task of "finding good ways to revise how confident you are about claims" to an intelligent system before gaining a solid understanding of probability theory. The theoretical understanding is useful to ensure that the right questions are being asked.
shouldn't establishing this be the very first item in the research agenda, before jumping in to problems they assume are solvable? In fact, the absence of evidence for them being solvable should be evidence of absence...no?
When constructing intelligent systems which learn and interact with all the complexities of reality, it is not sufficient to verify that the algorithm behaves well in test settings. Additional work is necessary to verify that the system will continue working as intended in application.
has it been demonstrated anywhere that formalisms are optimal for exception handling?
Because the stakes are so high, testing combined with a gut-level intuition that the system will continue to work outside the test environment is insufficient, even if the testing is extensive.
Is this a legitimate forced choice between pure mathematics and gut level intuition + testing?
MIRI alleges that a formal understanding is necessary for robust AI control, then defines formality as follows:
What constitutes a formal understanding? It seems essential to us to have both (1) an understanding of precisely what problem the system is intended to solve; and (2) an understanding of precisely why this practical system is expected to solve that abstract problem. The latter must wait for the development of practical smarter-than-human systems, but the former is a theoretical research problem that we can already examine.
So first, why aren't they disproving Rice's theorem?
The goal of much of the research outlined in this agenda is to ensure, in the domain of superintelligence alignment, where the stakes are incredibly high, that theoretical understanding comes first.
Okay, show me some data from a very well designed experiment suggesting theory should come first for the safe development of technology.
Honestly, all the MIRI maths and formal logic fetishism got me impressed and awestruck. But I feel like their methodological integrity isn't tight. I reckon they need some quality statisticians and experiment designers to step in. On the other hand, MIRI operates a very, very good ship. They market well, fundraise well, movement-build well, community-build well, they design well, they write okay now (but not in the past!), they even get shit done, and they bring together very, very good abstract reasoners. And they have been instrumental, through LessWrong, in turning my life around.
In good faith, Clarity, still trying to be the in-house red team and failing slightly less at it one post at a time.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-10-24T22:32:43.886Z · LW(p) · GW(p)
maths and formal logic
Lots of this going on in the big wide world. Consider looking in more places to deal with selection bias issues.
Replies from: None↑ comment by gjm · 2015-10-21T15:58:23.337Z · LW(p) · GW(p)
I mostly agree, but: You can affect "problem difficulty" by selecting harder or easier problems. It would still be right not to be discouraged about MIRI's prospects if (1) the hard problems they're attacking are hard problems that absolutely need to be solved or (2) the hardness of the problems they're attacking is a necessary consequence of the hardness (or something) of other problems that absolutely need to be solved. But it might turn out, e.g., that the best road to safe AI takes an entirely different path from the ones MIRI is exploring, in which case it would be a reasonable criticism to say that they're expending a lot of effort attacking intractably hard problems rather than addressing the tractable problems that would actually help.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-10-21T17:39:14.356Z · LW(p) · GW(p)
MIRI would say they don't have the luxury of choosing easier problems. They think they are saving the world from an imminent crisis.
Replies from: gjmcomment by airen · 2015-10-19T09:28:43.267Z · LW(p) · GW(p)
Like any other amateur who reads Eliezer's quantum physics sequence, I got caught up in the "why do we have the Born rule?" mystery. I actually found something that I thought was a bit suspicious (even though lots of people must have thought of this, or experimentally rejected it already). Note that I'm deep in amateur swamp, and I'll gleefully accept any "wow, you are confused" rejections.
Here is my suggestion:
What if the universes that we live in are not located specifically in configuration space, but in the volume stretched out between configuration space and the complex amplitude? So instead of talking about "the probability of winding up here in configuration space is high, because the corresponding amplitude is high", we would say "the probability of winding up here is high, because there are a lot of universes here". And "here" would mean somewhere on the line between a point in configuration space and the complex amplitude for that point. (All these universes would be exactly equal.) And then we completely remove the Born rule. Of course someone thought of this, but responds: "But if we double the amplitude in theory, the line becomes twice as long, and there would be twice as many universes. But this is not what we observe in our experiments; when we double the amplitude, the probability of finding ourselves there multiplies by four!" This is true, if you study a line between the complex amplitude peak and a point in configuration space. But you are never supposed to study a point in configuration space; you are supposed to integrate over a volume in configuration space.
Calculating the volume between the complex amplitude “surface” and the configuration space, is not like taking all the squared amplitudes of all points of the configuration space and summing them up. The reason is that, when we traverse the space in one direction and the complex amplitude changes, the resulting volume “curves”, causing there to be more volume out near the edges (close to the amplitude peak) and less near the configuration space axis.
Take a look at the following image (meant to illustrate an "amplitude volume" for a single physical property): go to http://www.wolframalpha.com and type in: ParametricPlot3D {u Sin[t], u Cos[t], t / 5}, {t, 0, 15}, {u, 0, 1}
Imagine that we’d peer down from above, looking along the property axis. If we completely ignore what happens in the view direction, the volume (the blue areas) would have the shape of circles. If we’d double the amplitude, the volume from this perspective would be quadrupled.
But as it is, what happens along the property axis matters. The stretching out causes the volume to be less than the amplitude squared. It seems that the higher the frequency is, the closer the volume comes to a squared relationship with the amplitude, while as the frequency lowers, the volume approaches a linear relationship with the amplitude. Studying the two extreme cases: with frequency 0 the geometric object would be just a straight plane, with an obvious linear relationship between amplitude and volume, while with an "infinite" frequency the geometric object would become a cylinder, with a squared relationship between volume and amplitude. This means that the overall current amplitude-configuration-space ratio is important, but as far as I know, it is unknown to us.
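The claimed limiting behaviour can be sanity-checked numerically. For the helicoid-like surface (u sin t, u cos t, t/c), the area element works out to sqrt(u^2 + 1/c^2) du dt, so the swept "volume" up to amplitude A is proportional to the integral of sqrt(u^2 + 1/c^2) from 0 to A: for large c (high frequency) this scales like A squared, for small c like A. A minimal sketch of that check (my own construction, not from the original comment; `swept_area` and its parameters are illustrative names):

```python
import math

def swept_area(amplitude, freq, t_max=15.0, n=2000):
    """Area of the surface (u*sin(t), u*cos(t), t/freq) for u in [0, amplitude].

    The area element is sqrt(u^2 + 1/freq^2) du dt, which we integrate
    numerically over u with the trapezoid rule (the t integral is just t_max).
    """
    du = amplitude / (n - 1)
    total = 0.0
    for i in range(n):
        u = i * du
        f = math.sqrt(u * u + 1.0 / freq ** 2)
        total += 0.5 * f if i in (0, n - 1) else f
    return t_max * du * total

# High frequency (near-cylinder): doubling the amplitude ~quadruples the area.
hi = swept_area(2.0, 1000.0) / swept_area(1.0, 1000.0)
# Low frequency (near-plane): doubling the amplitude only ~doubles it.
lo = swept_area(2.0, 0.001) / swept_area(1.0, 0.001)
print(round(hi, 2), round(lo, 2))  # ~4.0 and ~2.0
```

This reproduces the two extremes described above: a squared amplitude-to-volume relationship in the high-frequency limit and a linear one in the low-frequency limit.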
In a laboratory environment, where all frequencies involved are relatively low, we would see systems evolving linearly. But when we observe the outcome of the systems, and entangle them with everything else, what suddenly matters is the volume of our combined wave which has a very very high frequency.
Or does it? At this point I'm beginning to lose track and the questions starts piling up.
What happens when multiple dimensions are mixed in? I’m guessing that high-frequency/high-amplitude still approaches a squared relationship from amplitude to volume, but I’m not at all certain.
What happens over time as the universe branches, does the amplitude constantly decrease while the length and frequencies remain the same? (Causing the relationship to dilute from squared to linear?)
Note that this suggestion also implies that there really exists one single configuration space / wave function that forms our reality.
So, what do you think?
Replies from: Manfred↑ comment by Manfred · 2015-10-19T17:14:27.034Z · LW(p) · GW(p)
At least one of us is confused about this post :P
It seems like what you're doing is strictly more complicated than just doubling the number of dimensions in state-space and using those extra dimensions only so you can say the amount of "stuff" goes as amplitude squared. Which is already very unsatisfying.
I'm really confused where frequency is supposed to come in.
Replies from: airen↑ comment by airen · 2015-10-19T18:21:14.768Z · LW(p) · GW(p)
It's most likely me being confused.
My picture of it right now is that all the dimensions you need in total, are all the dimensions in state-space + 2 dimensions for the complex amplitude. If this assumption is wrong, then we have found the error in my thinking already!
Note that the two complex amplitude dimensions are of course not like the other dimensions. For every position in the state-space, there is a single point in the amplitude dimensions. Or, in my suggestion, a line from the origin out to the calculated complex value.
Don't try to think this through with matrices, there's a very real chance that what I'm after cannot be captured by matrices at all. I think you have to do a complete geometric picture of it.
comment by MrMind · 2015-10-21T07:04:18.523Z · LW(p) · GW(p)
How do you feel about floating posts in the Discussion section?
Like: electing a few threads that stay at top for the month/week they are active, the open threads, the monthly media thread, etc.
Is that even possible with LW code?
↑ comment by username2 · 2015-10-21T11:05:54.866Z · LW(p) · GW(p)
Is the LW code open source? If not, why not? Is it a fork of the reddit code? Can we update the reddit-specific code? (Reddit has allowed sticky posts since 2013, if I'm not mistaken.) Who takes care of the site?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-21T11:59:14.277Z · LW(p) · GW(p)
Yes, the code is open source. Yes, it's a fork of the reddit code. TrikeApps takes care of the website as a volunteer effort to help MIRI. But LW isn't a high priority for either of them.
The code is on Google Code: https://code.google.com/p/lesswrong/
Replies from: username2comment by Gunslinger (LessWrong1) · 2015-10-20T18:25:53.971Z · LW(p) · GW(p)
I have a few questions and think the guys at LW probably can help. I'm not sure LW is the best place to ask this, but I don't really know any other place.
Many people (politicians, famous, or what-have-you) have a website and have a "contact" page. How can I write a message that will have an impact? I'm assuming that:
- They receive a large volume of email and may not respond or even read it;
- The mail may not be delivered to them; maybe they have someone else to take care of it for them.
Those are the things that pop out of my head right now, anything else I should double-check?
Now, if those were the preparations, we have to get the actual cooking done. How can you make an impactful message? Something that will definitely get their attention, something they might just start thinking about in the middle of the day. Something that will make them stare at the screen and seriously think about it. Most important of all, something that gets them to reply, and a good reply that can make the exchange continue.
I'm willing to put significant effort into this, so don't be afraid to recommend a book or two, or three.
Replies from: Lumifer, ChristianKl↑ comment by Lumifer · 2015-10-20T18:36:47.024Z · LW(p) · GW(p)
In the usual way: offer them something they want.
Leaving sex aside, the traditional things are money and power. Impactful letters begin like this: "I { control a large voting block | can direct cash flow from a network of donors } and would like to discuss X with you". Oh, and, of course, impactful letters are NOT sent to the "contact page" address.
↑ comment by ChristianKl · 2015-10-20T19:45:06.512Z · LW(p) · GW(p)
My first impulse is that it's worthwhile to focus on actual substance instead of focusing on trying to engage a politician for the sake of influencing a politician.
The second step is making sure that you don't appear as clueless as the average person who writes to the politician. Actually try to understand the positions of the stakeholders in the debate you want to comment on, and what the issue is about from the view of the politician.
Third would be to have a role in the debate. You can act as a member of an NGO. You can be a blogger. Failing that, you could be a person who edited the politician's Wikipedia page and who tries to understand the politician's policy better.
The standard way lobbyists get a politician's interest is to give campaign donations.
comment by [deleted] · 2015-10-29T21:00:10.726Z · LW(p) · GW(p)
How well can you disambiguate someone's notes to self? I'd like to calibrate my powers of mentalisation!
Here are some hypothetical goals someone may have. For those that are unclear or odd, propose what you think they may really be saying and how you came to that conclusion. Can you even infer which of them mean what they say and which mean something secret?
I'll give you feedback, because I generated and obfuscated the writing myself!
- Complete remaining non E3 non research units
- Fight with the lions after touring Turkey
- Apply for PhD programs in Norway, Germany or UK
- Network with intelligent Africans
- Get married and have kids in the Baltic countries
- Bush tucker tour in south and central Australia
- hair silver grey then blue
- Buy €M
- Investigate Colombian prostitution tolerance zones then Investigate post auc criminal groups (see wiki) networks, organisational structures and psyops & the office of the high counsellor for reintegration.
comment by Gunslinger (LessWrong1) · 2015-10-25T17:36:35.918Z · LW(p) · GW(p)
Not really sure where to ask but is anyone in contact with Dahlen [? · GW]? We've had a cool discussion but it stopped abruptly and they haven't posted anything for a while nor replied to PMs.
comment by Cariyaga · 2015-10-24T00:18:26.119Z · LW(p) · GW(p)
What website would you suggest for looking into medical research, for someone who's not versed in reading medical literature? I'm specifically looking for any developments or studies into the treatment of urethral strictures for my own reference.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-24T07:20:08.465Z · LW(p) · GW(p)
The Mayo Clinic provides good introductory descriptions: http://www.mayoclinic.org/diseases-conditions/urethral-stricture/basics/definition/con-20037057
But even if you are not versed in reading medical literature, if you want to learn about new developments, read the original papers. If there are obstacles, there's the LW help desk.
comment by Panorama · 2015-10-23T10:09:20.862Z · LW(p) · GW(p)
Final Kiss of Two Stars Heading for Catastrophe
Using ESO’s Very Large Telescope, an international team of astronomers have found the hottest and most massive double star with components so close that they touch each other. The two stars in the extreme system VFTS 352 could be heading for a dramatic end, during which the two stars either coalesce to create a single giant star, or form a binary black hole.
comment by [deleted] · 2015-10-22T11:00:03.104Z · LW(p) · GW(p)
I have the Gmail, Google Drive, Google Calendar, Facebook and Facebook Messenger apps on my mobile (iPhone).
Can I streamline (reduce the number of) my apps without losing functionality?
Replies from: lmm, ike, polymathwannabe↑ comment by lmm · 2015-10-24T17:34:14.755Z · LW(p) · GW(p)
This sounds like an XY problem - what are you trying to achieve by reducing the number of apps?
Replies from: None↑ comment by [deleted] · 2015-10-25T00:59:58.716Z · LW(p) · GW(p)
XY? What does that refer to? Female chromosomes?
Trying to reduce decision fatigue and streamline my time management. Spending lots of time looking at apps lately.
Replies from: philh↑ comment by ike · 2015-10-27T01:07:27.664Z · LW(p) · GW(p)
You can probably do most of what the Facebook app lets you do in Safari. You can add a Google calendar to the stock iOS calendar app.
You might be able to text whoever you message with Messenger, or just use the website.
Gmail can likewise be set up with the stock Mail app.
The only app you really need is Google Drive.
↑ comment by polymathwannabe · 2015-10-22T14:00:09.247Z · LW(p) · GW(p)
On my tablet, I use all of those (including the Facebook ones) through Google Chrome. I don't miss the apps at all.
comment by [deleted] · 2015-10-22T08:53:12.072Z · LW(p) · GW(p)
Attention everyone excellent, in one way or another!
What are the determinants of success for an amateur on the path to expertise in an area you are exceptional at, beyond what is already described accurately on LessWrong?
Give a rough estimate of the variance in the success of entrants to the field that can be attributed to each determinant you identify.
No time for modesty now; you're here to teach and learn!
Ask not what LessWrong can do for you, but what you can do for LessWrong!
comment by [deleted] · 2015-10-22T00:37:58.422Z · LW(p) · GW(p)
Is there any work on developing brain implants or similar for pain moderation, for cases of sudden injury where you want to downregulate pain so you can think clearly, get help, and function?
I don't feel safe knowing I have to wait for an ambulance to get access to serious painkillers.
Restricting access is probably for the best, since such drugs are often harmful and liable to abuse, but surely someone is working on solving these issues.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-22T07:33:31.553Z · LW(p) · GW(p)
The human brain is quite capable of shutting down pain without any implants, provided you train that ability.
Replies from: None↑ comment by [deleted] · 2015-10-22T10:59:14.574Z · LW(p) · GW(p)
Can you guide me down this rabbit hole?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-22T11:27:13.587Z · LW(p) · GW(p)
Dave Elman has a well-known process for shutting down pain via hypnosis. I know two people face to face who got their wisdom teeth pulled while shutting off the pain themselves via self-hypnosis.
In CFAR lingo, pain is a very strong signal from System 1, and the fact that System 2 thinks the pain is not useful doesn't mean that System 1 shuts it off. You actually need a very good relationship between System 1 and System 2 for that to happen.
A good start for that is Gendlin's Focusing. Listen to the uncomfortable feelings in your body to release them. As a beginner you likely won't release strong physical pain that way, but lesser issues such as a headache can be released from time to time.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-10-22T13:29:53.840Z · LW(p) · GW(p)
Move your locus of self to the afflicted space (it helps to close your eyes, and visualize moving your mind to the point; to practice this, if it comes difficult to you, close your eyes, and visualize flying around the room you're in); pain vanishes while you hold it there. Returns, slightly diminished, when you relax your focus. Once you get practiced, you can split your locus of self, and direct threads of attention/self onto painful areas, which diminish with the attention.
That's my description. Your internal descriptions may differ, and/or these instructions may not apply to you in any sense - the internal experience of a mind varies wildly from person to person.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-22T15:55:47.978Z · LW(p) · GW(p)
What kind of results do you achieve with that strategy?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-10-22T17:16:42.829Z · LW(p) · GW(p)
Pain in the area of focus fades or vanishes. I'm assuming, by the similar nature implicit between focusing on the pain, and "listening" to the uncomfortable feeling, that there's some kind of similar action taking place there.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-22T20:02:15.628Z · LW(p) · GW(p)
What was the strongest pain to which you successfully applied the technique?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-10-22T20:09:04.532Z · LW(p) · GW(p)
A hand I had accidentally dumped boiling liquid over, although the reduction in pain wasn't complete in that case, and it was difficult to maintain concentration. (I couldn't make my attention... large enough? To encompass the entire hand.)
I don't generally apply the technique, because it's usually counterproductive; the problem with pain is that it is distracting me from what I want to pay attention to, so giving it my full attention is just making the problem worse.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-22T20:20:44.937Z · LW(p) · GW(p)
You mean you have to keep up the mental concentration to keep the pain reduction?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-10-22T20:35:42.226Z · LW(p) · GW(p)
Yes.
comment by [deleted] · 2015-10-21T00:42:14.834Z · LW(p) · GW(p)
The Intelligent Agent Foundations Forum looks very active. I'm glad it has taken off.
Can I solicit any reviews about anyone's experience with it so far?
The content itself is beyond me. I'm curious whether I should refer to it intermittently while still learning MIRI's research syllabus or whether the expectation is to have a command of everything before starting. I suspect the latter given the caliber of posts, but that may simply be the founder effect and unintended.
comment by [deleted] · 2015-10-20T23:12:26.004Z · LW(p) · GW(p)
It just struck me...I have no idea about the room-for-more-funding considerations for MIRI. Googling for it suggests that question hasn't even been seriously analysed before. Surely I'm missing something...
Replies from: Artaxerxes↑ comment by Artaxerxes · 2015-10-21T05:55:04.035Z · LW(p) · GW(p)
This post discusses MIRI, and what they can do with funding of different levels.
What are you looking for, more specifically?
comment by [deleted] · 2015-10-19T14:20:38.609Z · LW(p) · GW(p)
Why do I find nasal female voices so sexy? Even languages that emphasise nasality, like Chinese or French, are sexy to me, whereas related languages without that feature are not (e.g. Spanish in the case of French). Is there anything I can do to downregulate my nasal voice fetish?
Replies from: polymathwannabe, satt↑ comment by polymathwannabe · 2015-10-19T14:47:34.928Z · LW(p) · GW(p)
Why do you want less of something you like?
Replies from: None↑ comment by [deleted] · 2015-10-19T16:08:35.278Z · LW(p) · GW(p)
Presumably "nasal voices" aren't a terminal goal for Clarity, and he'd like them to stop clouding his judgement of other characteristics that are more important in finding someone he enjoys.
Replies from: None↑ comment by [deleted] · 2015-10-20T01:25:07.093Z · LW(p) · GW(p)
Yes that's right
I want more of something I like, but on the precondition that I want to like it. However, there is nothing I like that I have reason to like to the exclusion of all other likes, so if I can like something less, all else constant, it becomes easier for me to satisfy my liking for the remainder of things. Therefore, it is instrumental to my terminal goals to hold my likes in a strict preference ordering and to eliminate them in decreasing order of preference.
comment by SodaPopinski · 2015-10-21T22:57:26.547Z · LW(p) · GW(p)
Do we know whether quantum mechanics could rule out acausal trade between partners outside each other's light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading this wiki article on the 'Free will theorem': https://en.wikipedia.org/wiki/Free_will_theorem
Replies from: lmm↑ comment by lmm · 2015-10-24T17:42:51.472Z · LW(p) · GW(p)
The whole point of acausal trading is that it doesn't require any causal link. I don't think there's any rule that says it's inherently hard to model people a long way away.
Imagine being an AI running on some high-quality silicon hardware that splits itself into two halves, and one half falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.
Replies from: SodaPopinski↑ comment by SodaPopinski · 2015-10-24T22:51:41.247Z · LW(p) · GW(p)
Yes, I understand the point of acausal trading. The point of my question was to speculate on how likely it is that quantum mechanics may prohibit modelling accurate enough to make acausal trading actually work. My intuition is based on the fact that faster-than-light transmission of information is generally prohibited. For example, even though entangled particles update on each other's states when they are outside each other's light cones, it is known that it is not possible to transmit information faster than light using this fact.
Now, does mutually enhancing each other's utility count as transmitting information? I don't think so. But my instinct is that acausal trade protocols will not be possible, due to the level of modelling required and the noise introduced by quantum mechanics.
↑ comment by lmm · 2015-11-04T13:15:18.721Z · LW(p) · GW(p)
I don't understand. Computers are able to provide reliable boolean logic even though they're made of quantum mechanics. And any "uncertainty" introduced by QM has nothing to do with distance. You seem very confused.
Replies from: SodaPopinski↑ comment by SodaPopinski · 2015-11-04T19:30:57.507Z · LW(p) · GW(p)
My question is simply: Do we have any reason to believe that the uncertainty introduced by quantum mechanics will preclude the level of precision in which two agents have to model each other in order to engage in acausal trade?
Replies from: lmm↑ comment by lmm · 2015-11-06T20:01:16.574Z · LW(p) · GW(p)
No. There are any number of predictable systems in our quantum universe, and no reason to believe that an agent need be anything other than e.g. a computer program. In any case "noise" is the wrong way to think about QM; quantum behaviour is precisely predictable, it's just the subjective Born probabilities that apply.
comment by [deleted] · 2015-10-21T00:55:20.499Z · LW(p) · GW(p)
Critical thinking is a responsibility for every intelligent agent, just as benefiting from the critical thought of others ought to be a right for all life capable of suffering. Millions of Holocaust victims were at the mercy of men just following orders. Never again.
Replies from: NancyLebovitz, Viliam↑ comment by NancyLebovitz · 2015-10-21T14:06:59.263Z · LW(p) · GW(p)
I thought you were going to cite this, which shows a much higher level of critical thinking than most people can manage.
comment by [deleted] · 2015-10-20T10:15:53.836Z · LW(p) · GW(p)
Imperialism and a defence of inequality in capitalist republics
Who was it that first articulated the argument that: since money flows to those who can anticipate and predict the needs of others, those who get power are those who can do that, and therefore, if those people are caring, they are the best capable of looking after the rest?
And what strong critiques are available?
comment by [deleted] · 2015-10-22T12:34:52.462Z · LW(p) · GW(p)
I'm troubleshooting my ongoing failures in sustained romantic relationships. My social cognition is impaired, and so too is the social cognition of autistic people. With some googling about feelings of inadequacy and Asperger's, I found a book that documents feelings of inadequacy in Asperger's men as they relate to relationships with women, and that proposes one conditional stimulus (women 'performing' better in relationships than them) to explain this phenomenon:
Men (with Asperger's syndrome)...may suffer feelings of inadequacy if they appear to be performing less well than the women that are around them..
I appreciated this reading since it characterises a conditional that is not usually characterised in similar literature. It amplifies my appreciation for primary qualitative literature in mental health.
Another Asperger-like trait I have is comparable to alexithymia, but is a more general deficit in self-awareness. So, if the author's hypothesis about the conditional stimulus behind some aspies' feelings of inadequacy also explains mine, I have little or no intuitive feeling of whether that is the case. This makes troubleshooting cognitive biases, and by consequence using REBT/CBT techniques, highly inefficient for me, as I have to test every possible logical fallacy I may or may not be making against all possible corrections. The space is narrowed, of course, by knowledge of the kinds of fallacies similar people tend to make and the interventions that tend to work. I wanted to type this out to better wrap my head around my theory about my very slow rate of progress in improving certain aspects of my mental health and social skills. I hope it is useful for anyone else struggling with similar issues, since I know of no one else similar enough to me to use as a general point of reference and mentor for the problems we may share.
My current attitude to relationship strategy given my asperger like relationship issues reflects the position given here that both aspie and (potential) partners can work together to have a successful relationship.
I'm working on other insecurities too, like insecurity around wearing nice clothes
Replies from: ChristianKl↑ comment by ChristianKl · 2015-10-22T16:05:33.162Z · LW(p) · GW(p)
Men (with Asperger's syndrome)...may suffer feelings of inadequacy if they appear to be performing less well than the women that are around them..
If you idealize a woman and seek the perfect woman, you will appear to be performing less well than your image of the person. To avoid that effect it's good to have a relationship with a person where you also see their flaws and you both can be open about your flaws.
Replies from: None
comment by [deleted] · 2015-10-20T11:01:34.751Z · LW(p) · GW(p)
I'm so over getting super fascinated by someone, thinking they're the sun and moon, then talking to them more and realising they're just human like the rest of us... agh. I'm so bad with romance, haha. I don't know how to stop idealising people who seem like perfect matches at the time, when as I talk to them I realise they're just regular people. What can I do about this?
Replies from: None, ChristianKl↑ comment by ChristianKl · 2015-10-20T18:40:49.048Z · LW(p) · GW(p)
Having a healthy relationship is about relating to another human being, not about relating to a mental ideal. If you idealize them at the beginning, that's okay; it's typical human behavior. You also don't have to commit to a relationship for life for it to be a valuable relationship.
comment by [deleted] · 2015-10-20T09:50:30.569Z · LW(p) · GW(p)
I would be interested to see a kind of survey where anyone can rate each active or interested LWer on their proficiency in the content of each sequence. It's somewhat annoying how my interpretation of the sequences would appear to cut down on a good deal of the questions asked and re-asked in discussion comments by some very active users. But perhaps it's me that's ignorant. This could be an enlightening self-awareness exercise for many of us, improve the calibre of posts by shaming people who would otherwise claim proficiency when their peers may be skeptical, and raise the sanity waterline among us LWers.
comment by [deleted] · 2015-10-21T12:35:44.478Z · LW(p) · GW(p)
Based on any experiences here, in real life, would you want to meet, avoid, ignore or be indifferent to me?
Replies from: lmm, Dagon, polymathwannabe↑ comment by lmm · 2015-10-24T17:51:28.780Z · LW(p) · GW(p)
You seem a very enthusiastic participant here, despite a lot of downmodding. I admire that - on here. In real life my fear would be that that translated into clinginess - wanting to come to all my parties, wanting to talk forever, and the like. (And perhaps that it reflects being socially unpopular, and that there might be a reason for that). So I'd lean slightly to avoid.
Replies from: None↑ comment by [deleted] · 2015-10-25T00:57:45.585Z · LW(p) · GW(p)
Haha, thanks for that analysis. How unexpected and insightful. Your premise is mostly correct, but your conclusion ain't. I'm extremely clingy with the few people I have crushes on and idealise at a given time (2 at the moment). It's generally very short-lived (~1 month) and always women, haha. On the other hand, I'm quite popular with friends and acquaintances, and get invited to lots of parties but rarely accept (goal-oriented, ain't got time for that!); these are people I haven't tried to fall in love with or run some cruel social experiment on. Then again, my instinctive drive to respond to this may be telling of some degree of insecurity about my social status...
↑ comment by Dagon · 2015-10-21T19:03:29.862Z · LW(p) · GW(p)
I'm unsure of the difference between "ignore" and "be indifferent". I'll treat both as "not avoid but not seek".
My prior distribution for internet commenters for whom I don't have other social connections is (rounded estimates) around 10% avoid, 90% indifferent and maybe 1% want to meet. LW moves much of the "avoid" into "indifferent", and maybe quadruples "want to meet". The few comments I've noticed from you specifically match my general LW impressions.
So, biggest weight on "indifferent to meeting you". Slightly more interested than avoid-ey if I have to make an effort one way or the other.
↑ comment by polymathwannabe · 2015-10-21T16:52:40.504Z · LW(p) · GW(p)
The information we can get about you through an internet forum may never be enough for us to give an answer that will be useful to you.
comment by [deleted] · 2015-10-20T09:33:06.316Z · LW(p) · GW(p)
Intelligence Intelligence
- If AI is an existential risk, it is a national security risk
- If AI is a national security risk, it is a risk intelligence agencies would be interested in
- If intelligence (in the spook sense) communities are interested in a risk, they are likely to develop a formal or informal research agenda into that risk
- If research agendas in friendly AI exist that are not MIRI's, MIRI may be interested in accessing said research agendas
- Though MIRI's full technical research agenda is secret, it is plausible that they are not currently collaborating with intelligence agencies
- MIRI may stand to benefit from access to AI research agendas from intelligence communities
- If MIRI is unable to achieve collaborations on their own, LW activists may be able to assist them
- Therefore, LW activists may have an interest in 'penetrating' intelligence agencies to extricate their technical research agendas around AI, in pursuit of greater research excellence and collaboration on AI safety and control problems.
- If this is in MIRI's interest, it may be in a given rationalist's interest
- Rationalists with AI subject-matter expertise may be interested in pursuing friendly AI research at the object level instead
- Non-subject-matter experts may be interested in penetration with the intention of gaining general access to an intelligence community's knowledge
- Intelligence services actively disqualify those with open curiosity about intelligence matters:
'Viewing or downloading information from a secure system beyond the clearance subject’s need-to-know' is cause for rejection of a security clearance in Australia
- Therefore, penetrating intelligence communities to create greater transparency in the friendly-AI research arena, without the AI subject-matter expertise that might improve the likelihood of being assigned to AI safety specifically, may be a poor use of one's time.
↑ comment by gjm · 2015-10-20T16:30:04.209Z · LW(p) · GW(p)
Your first three bullet points seem to imply that entities like the NSA should be expected to have research programmes dedicated to things like pandemics and asteroid strikes. That seems unlikely to me; why would the NSA or CIA or whatever be the right venue for such research? The only advantage of doing it in house rather than letting organizations dedicated to health and space handle it would be if somehow there were some nation-specific interests optimized by keeping their research secret. Which seems unlikely, because if human life is wiped out by an asteroid strike or something then the distinction between US interests and PRC interests will be of ... limited importance.
Now, would we expect unfriendly AI research to be any different? I can think of three ways it might be. (1) Maybe an organization like the NSA has more in-house expertise related to AI than related to asteroid strikes. (2) There aren't large-scale (un)friendly AI research efforts out there to delegate to, whereas agencies like NASA and CDC exist. (3) If sufficiently-friendly AI can be made, it could be harnessed by a particular nation, so progress towards that goal might be kept secret. Of these, #1 might be right but I still think it unlikely that intelligence agencies have enough concentration of relevant experts to be good places for (U)FAI research; #2 is probably true but it seems like the way to fix it would be for the nation(s) in question to fund (U)FAI research if their experts say it's worth doing; #3 might be correct, but hold onto that thought for a moment.
LW activists may have an interest in 'penetrating' intelligence agencies
Jiminy. Are you seriously suggesting that an effective way to enhance AI friendliness research would be an attempt to compromise the security of national intelligence agencies? That seems more likely to be an effective way to get killed, exiled, thrown into jail for a long time, etc.
Let me at this point remind you of the conclusion a couple of paragraphs ago: if in fact there is (U)FAI research going on in intelligence agencies, it's probably because AI is seen as a possible advantage one nation can have over another. So your mental picture at this point should not be of someone like Edward Snowden extracting information from the NSA, it should be of someone trying to smuggle secret information out of the Manhattan Project. (Which did in fact happen, so I'm not claiming it's impossible, but it sounds like a really unappealing job even aside from petty concerns about treason etc.)
I notice that your conclusion is that for some people, attempting to breach intelligence agencies' security in order to extract information about (U)FAI research "may be a poor use of one's time". I can't disagree with this, but it seems to me that something much stronger is true: for anyone, attempting to do that is almost certainly a really bad use of one's time.
↑ comment by ChristianKl · 2015-10-20T13:27:30.385Z · LW(p) · GW(p)
I find it unlikely that US services have such programs without a person like Peter Thiel being aware of the existence of those programs.
LW activists may have an interest in 'penetrating' intelligence agencies to extricate their technical research agendas around AI pursuant to greater research excellence and collaboration on AI safety and control problems.
You don't get research collaboration by a strategy of treating other stakeholders in a hostile manner and thinking about penetrating them.
comment by [deleted] · 2015-10-20T01:27:03.021Z · LW(p) · GW(p)
Things to consider when advertising:
- Problem recognition
- Stimulus discrimination
- Necessity or problem recognition, incl. situational influences
- All potential alternatives
- Decision rules: conjunctive, disjunctive, elimination-by-aspects, compensatory
- Reference group influence
- Status differentiation
- Stimulus generalisation
Based on my reading yesterday of the textbook Consumer Behaviour: Implications for Marketing Strategy, fifth edition, by Quester.
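Two of the decision rules in that list, the compensatory (weighted-sum) rule and elimination-by-aspects, can be sketched in a few lines of code. A minimal sketch: the alternatives, attributes, scores, weights and cutoff below are all invented for illustration, and only the rule logic follows the standard textbook definitions.

```python
# Sketch of two consumer decision rules over hypothetical alternatives.
# All names and numbers below are illustrative, not from the textbook.

alternatives = {
    "chair_a": {"comfort": 9, "price": 3, "looks": 6},
    "chair_b": {"comfort": 6, "price": 8, "looks": 7},
    "chair_c": {"comfort": 4, "price": 9, "looks": 9},
}

def compensatory(alts, weights):
    """Weighted sum: strength on one attribute can offset weakness on another."""
    def score(attrs):
        return sum(weights[k] * v for k, v in attrs.items())
    return max(alts, key=lambda name: score(alts[name]))

def elimination_by_aspects(alts, aspects_in_order, cutoff):
    """Screen out alternatives failing each aspect's cutoff, most important aspect first."""
    remaining = dict(alts)
    for aspect in aspects_in_order:
        passed = {n: a for n, a in remaining.items() if a[aspect] >= cutoff}
        if passed:  # if nothing passes this aspect, keep the current set and move on
            remaining = passed
        if len(remaining) == 1:
            break
    return next(iter(remaining))

weights = {"comfort": 0.5, "price": 0.3, "looks": 0.2}
print(compensatory(alternatives, weights))                                     # chair_b
print(elimination_by_aspects(alternatives, ["comfort", "price", "looks"], 7))  # chair_a
```

Note that the two rules pick different chairs from the same data: the compensatory rule lets chair_b's decent all-round scores win, while elimination-by-aspects with a strict comfort cutoff leaves only chair_a. Which rule a consumer applies therefore changes what an advertiser should emphasise.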
comment by [deleted] · 2015-10-21T23:52:38.095Z · LW(p) · GW(p)
- I got off on the thought of a chair with maximum sinking in to it factor
- The Pink sink looks a little too comfy.
- May be the hyperstimulus of the chair universe.
- May not be in my best interests, as it may cut productivity and inhibit exercise.
- On the other hand, it may impress others, reduce environmental stress and associated fatigue, and probably feels really super really good :)