Open thread, 7-14 July 2014

post by David_Gerard · 2014-07-07T07:14:12.540Z · LW · GW · Legacy · 234 comments

Previous thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

234 comments

Comments sorted by top scores.

comment by Paul Crowley (ciphergoth) · 2014-07-07T19:27:27.203Z · LW(p) · GW(p)

Guardian: Scientists threaten to boycott €1.2bn Human Brain Project:

The European commission launched the €1.2bn (£950m) Human Brain Project (HBP) last year with the ambitious goal of turning the latest knowledge in neuroscience into a supercomputer simulation of the human brain. More than 80 European and international research institutions signed up to the 10-year project.

But it proved controversial from the start. Many researchers refused to join on the grounds that it was far too premature to attempt a simulation of the entire human brain in a computer. Now some claim the project is taking the wrong approach, wastes money and risks a backlash against neuroscience if it fails to deliver.

In an open letter to the European commission on Monday, more than 130 leaders of scientific groups around the world, including researchers at Oxford, Cambridge, Edinburgh and UCL, warn they will boycott the project and urge others to join them unless major changes are made to the initiative.

[...] "The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature," Peter Dayan, director of the computational neuroscience unit at UCL, told the Guardian.

Open message to the European Commission concerning the Human Brain Project, now with 234 signatories.

Replies from: None
comment by [deleted] · 2014-07-08T02:25:22.921Z · LW(p) · GW(p)

Finally, scientists speaking up against sensationalistic promises and project titles...

comment by MrMind · 2014-07-08T14:27:54.626Z · LW(p) · GW(p)

In a weird dance of references, I found myself briefly researching the "Sun Miracle" of Fatima.
From the point of view of a mildly skeptical rationalist, it's already bad that almost everything written that we have comes from a single biased source (the writings of De Marchi), and also bad that some witnesses, believers and not, reported not having seen any miracle. But what aroused my curiosity is something else: if you skim the witness accounts, they report the most diverse things. If you OR the accounts, what comes out is really a freak show: the sun revolving, emitting strobe lights, dancing in the sky, coming close to the earth and drying the soaking-wet attendees.
If you instead AND the accounts, the only consistent element is this: the 'sun' was spinning. To which I say: what? How can something that has rotational symmetry be seen spinning? The only possible answer is that there was an optical element that broke the symmetry, but I have been unable to find out what this element was. Do you know anything about it?
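A toy illustration of the OR/AND operation described above, treating each account as a set of reported phenomena (the accounts here are hypothetical paraphrases, purely for illustration):

    # Each witness account modeled as a set of reported phenomena.
    accounts = [
        {"spinning", "strobe lights", "dancing in the sky"},
        {"spinning", "approached the earth", "dried the crowd"},
        {"spinning", "dancing in the sky"},
    ]

    union = set.union(*accounts)                 # "OR": everything anyone reported
    intersection = set.intersection(*accounts)   # "AND": what every account agrees on

    print(union)         # the whole freak show combined
    print(intersection)  # {'spinning'} -- the only consistent element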

Replies from: fubarobfusco
comment by fubarobfusco · 2014-07-09T17:31:20.928Z · LW(p) · GW(p)

The human brain is capable of registering "X is moving" without being able to point to "X was over here and is now over there". This can happen visually with the rotating snakes illusion, or acoustically with Shepard tones, for instance. It's also pretty common on some psychedelic drugs.

Replies from: gjm, MrMind
comment by gjm · 2014-07-11T12:23:34.529Z · LW(p) · GW(p)

Or if your inner ear is messed up somehow by illness, drunkenness, etc. (though what you then think is moving is yourself, or perhaps the rest of the universe around you).

comment by MrMind · 2014-07-10T08:23:42.106Z · LW(p) · GW(p)

Well, the rotating snakes have a lot of elements that break the symmetry. But if you stare at a perfectly blank disk, it's impossible to tell whether it's moving or not.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-07-10T14:46:45.449Z · LW(p) · GW(p)

I didn't mean to suggest that exactly the same thing was going on; just that it was analogous: it's possible to have the perception of motion without there being any motion going on. There's no consistency checker in the human perceptual system to keep that from happening.

I suspect that's why optical illusions are so fascinating to some of us — they demonstrate that our perceptions don't implement the law of non-contradiction. The snakes illusion is just a quick way to demonstrate this in humans who aren't in a religious ecstasy or on psychedelics.

comment by Viliam_Bur · 2014-07-08T14:28:28.299Z · LW(p) · GW(p)

I have tried some online lessons from Udacity and Coursera, and this is my impression so far:

Udacity's system is great, but there is little content. The content made by founder Sebastian Thrun is great, but content by other authors is sometimes much less impressive.

For example, some authors don't even read the feedback on their lessons. Sometimes they make a mistake in a lesson or in a test, the mistake is debated in the forum, and... one year later, the mistake is still there. They wouldn't even need to change the lesson video... just putting one paragraph of text below the video would be enough. (In one programming lesson, you had to pass a unit test which sometimes mysteriously crashed. The crash wasn't caused by anything logical, like using too much time or memory; it was a bug in the test itself. In the forum, students gave each other advice on how to avoid the bug. It could probably have been fixed in 5 minutes, but the author didn't care.) -- The lesson is, you can't treat online education as "fire and forget", but some authors probably underestimate this.

Coursera is the opposite: it has a lot of content, almost anything, but the system feels irritating to me. It doesn't fully use interactivity, which in my experience helps with paying attention. For example, on Coursera you have five videos of 15 to 30 minutes each, and then some homework (depending on the course). On Udacity, the videos are interrupted every 2 or 3 minutes to ask you a simple question.

Some of the lessons require peer assessment, which means that you write your answers in plain text, and then you have to grade the answers of other users. This is a waste of time: because it of course requires some redundancy, you have to complete the test, wait a week, and then read the peer-assessment guidelines and rate five random tests by other people... although in most cases the grading could be done automatically (by choosing an option, entering a number, or entering a string that is matched against a regexp). Very annoying. Also because of this, you have to take the class at the same time as everyone else; if you try it a few months later, you don't get the full experience.
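A minimal sketch of that kind of automatic grading, with a hypothetical answer key (the question names and patterns are invented for illustration):

    import re

    # Hypothetical answer key: grade by chosen option, numeric tolerance,
    # or a regexp match against free text.
    def grade(answers):
        score = 0
        score += answers["q1"] == "b"                # multiple choice
        score += abs(answers["q2"] - 3.14) < 0.01    # entered number
        score += bool(re.fullmatch(r"O\(n log n\)",  # free text vs. regexp
                                   answers["q3"].strip()))
        return score

    print(grade({"q1": "b", "q2": 3.1415, "q3": " O(n log n) "}))  # 3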

Both sites provide free and paid certificates. With the paid certificate, you have some Skype exams to prove it was really you who did the lessons; the free certificate just means you did the exercises and receive a PDF. On Udacity, you can get the free certificate anytime. On Coursera, you get the free certificate only if you do the course at the same time as everyone else. Thus, if you are interested in a topic and the course ran a year ago, you can still do it... but you won't even get the free certificate. I know the free certificates are only symbolic, but still: on Udacity I can get them for learning at my own pace; on Coursera there is a lot of lost purpose involved.

Thus... I wish all the content from Coursera were ported to Udacity. Or that Coursera switched to the system Udacity uses. Or that someone else combined the best aspects of both.

Replies from: TylerJay
comment by TylerJay · 2014-07-09T06:33:30.206Z · LW(p) · GW(p)

When I took Intro to Computer Science and Programming on edX from MIT (the original 16-week 6.00x, before they broke it up into two courses), they broke up the short videos with "finger exercises", which were like the interrupting questions on Udacity, but there were more of them and they were a lot more comprehensive. They were worth enough of your grade that there was motivation to do them, but not so much that you couldn't skip them if you felt you already knew the material. That was, to date, the best MOOC I've ever taken.

I agree that Coursera can sometimes feel a bit too much like copy/pasting a college class onto the internet, but it really does vary a lot by course. For example, Robert Ghrist's Single-Variable Calculus on Coursera was amazing: a 15-minute animated video lecture followed by a 10-problem homework assignment.

As for the scheduled vs. self-paced difference, there are ups and downs to both. I have fallen behind in a class before and then abandoned it because I missed a deadline. But knowing that "now is your chance" to take a course can sometimes be more motivating than going self-paced. Deadlines can be useful.

I really don't know what's best, but I'm a huge fan of the open education movement and I see innovations happening all the time. For example, each course in Coursera's Data Science track has a "due date" for full credit, then a "hard due date". For each day between them, your score on that assignment loses 10%. You have a total of 5 late days to apply throughout the course. That's enough to save you if you fall off the wagon for a bit, and knowing that you're losing a bit each day can motivate you to get it done, whereas being unable to submit at all after missing the first "due date" can make you want to quit.
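A minimal sketch of that policy's arithmetic (the numbers are from the comment; the function name and exact bookkeeping are assumptions, not Coursera's actual implementation):

    def effective_score(raw_score, days_late, late_days_left):
        """Each day past the due date costs 10% of the assignment score,
        unless it is covered by one of the free late days."""
        covered = min(days_late, late_days_left)
        penalized = days_late - covered
        return max(0.0, raw_score * (1 - 0.10 * penalized)), late_days_left - covered

    score, remaining = effective_score(100, days_late=3, late_days_left=5)
    print(score, remaining)  # 100.0 2 -- all three late days were covered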

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-07-09T09:10:52.782Z · LW(p) · GW(p)

I guess different things work for different people, but for me deadlines are pure evil with no upsides. :(

It would be a bit better if I could take those lessons faster. Then I would just start a course, complete it in three days, and move on to the next one. But I hate the "now wait... now hurry... now wait... now hurry..." approach. I once started a course when I had a lot of free time, did the first two lessons, and then had to wait a week. So I started another course in the meantime. The next two weeks I was busy, so I missed a deadline for one assignment. Now I can't get 100% completion, for no good reason.

I am considering simply never taking a Coursera course on schedule; only picking courses that have already ended. Then I know I have already missed all the deadlines, so they become irrelevant. As a side effect, I will never get the free certificate. Which is perhaps good in some sense, because I will not be distracted by lost purposes.

Somehow the typical school system of "learn 1 lesson of this, then 1 lesson of something unrelated, then 1 lesson of the first thing again" doesn't work for me. When I start doing something, I want to continue doing it, and I hate being interrupted. I prefer long work followed by long breaks, not constant switching on and off. Even the idea of using pomodoros is completely against my instincts. I'm curious how common this is.

Hey, everyone! When you study, do you prefer to:

[pollid:727]

When you study multiple things, do you prefer to:

[pollid:728]

Replies from: TylerJay
comment by TylerJay · 2014-07-09T21:06:19.368Z · LW(p) · GW(p)

You know, when you put it that way, I think you're right. I do hate not being able to progress when I still have the energy to do so. I may just have been falling for availability bias: remembering times I scrambled to get something done before a deadline and concluding that the deadline was what kept me on track.

If you do plan to go the archived courses route, maybe consider using something like Accredible to save and post your work as you go through. The idea behind that site is "Prove that you've actually done something". Might be useful.

comment by fubarobfusco · 2014-07-07T21:39:41.240Z · LW(p) · GW(p)

Abstract: It is frequently believed that autism is characterized by a lack of social or emotional reciprocity. In this article, I question that assumption by demonstrating how many professionals—researchers and clinicians—and likewise many parents, have neglected the true meaning of reciprocity. Reciprocity is “a relation of mutual dependence or action or influence,” or “a mode of exchange in which transactions take place between individuals who are symmetrically placed.” Assumptions by clinicians and researchers suggest that they have forgotten that reciprocity needs to be mutual and symmetrical—that reciprocity is a two-way street. Research is reviewed to illustrate that when professionals, peers, and parents are taught to act reciprocally, autistic children become more responsive. In one randomized clinical trial of “reciprocity training” to parents, their autistic children’s language developed rapidly and their social engagement increased markedly. Other demonstrations of how parents and professionals can increase their behavior of reciprocity are provided.

— Morton Ann Gernsbacher, "Towards a Behavior of Reciprocity"

The paper cites several examples of improvements to autistic children's social development when non-autistic peers, parents, or teachers are trained to behave reciprocally towards them. This one particularly caught my eye (emphases added):

In 1986 researchers taught four typically developing preschoolers to either initiate interaction with three autistic preschoolers or to respond to the interaction that the three autistic preschoolers initiated, in other words, to be reciprocal (Odom & Strain, 1986). Which intervention had the more lasting influence on the autistic preschoolers’ social interaction? When the typically developing preschoolers were taught to respond to the interaction that the autistic preschoolers initiated, the autistic preschoolers responded more frequently. In other words, when the typically developing preschoolers behaved reciprocally, the autistic preschoolers responded more positively.

comment by Ben Pace (Benito) · 2014-07-07T18:49:50.874Z · LW(p) · GW(p)

This is the outline of a conversation that took place no fewer than 14 times this past Friday, between me and a number of close friends.

"Life is like an RPG. Often, a wise, kind, and and deeply important character (hand gesture to myself) gives a quest item to a lowly, unsuspecting, otherwise plain character (hand gesture to friend). As a result of this, this young character goes on to be a great hero in an important quest.

Now, here with me today, I have a quest item.

For you.

But I can only give it to you if you shake on the following oath; that, once you have finished with this item, when you have taken what you require from it, that then, you too shall find someone for whom this will be of great utility, and pass it along. They must also shake on this oath."

"I will."

Handshake occurs.

"Here is your physical copy of the first 16 and a half chapters of 'Harry Potter and the Methods of Rationality'."

Replies from: Nornagest, DanielLC, hydkyll
comment by Nornagest · 2014-07-07T19:36:19.829Z · LW(p) · GW(p)

Spoilers: after a tedious chain of deals, your friend's going to end up with half an oyster shell sitting in their inventory and no idea what to do with it.

comment by DanielLC · 2014-07-07T19:25:21.730Z · LW(p) · GW(p)

Doesn't the item usually vanish after you finish with it?

Replies from: Benito, TylerJay
comment by Ben Pace (Benito) · 2014-07-08T01:43:45.220Z · LW(p) · GW(p)

I tested it empirically: some of the friends have already read it and passed it on, so no, not in real life.

comment by TylerJay · 2014-07-09T21:26:20.316Z · LW(p) · GW(p)

Nah, key items sometimes linger in your inventory for the rest of the game and never do anything ever again.

comment by hydkyll · 2014-07-09T13:55:25.521Z · LW(p) · GW(p)

This is a great idea. I assume it's 16 and a half because of print limitations? The first 21 chapters would make more sense.

Replies from: Benito
comment by Ben Pace (Benito) · 2014-07-09T14:14:03.615Z · LW(p) · GW(p)

That's how MIRI sent them to me.

comment by sixes_and_sevens · 2014-07-07T13:49:57.312Z · LW(p) · GW(p)

Quantified-self biohacker-types: what wearable fitness tracker do I want? Most will meet my basic needs (sleep, #steps, Android-friendly), but are there any on the market with clever APIs that I can abuse for my own sick purposes?

Replies from: gwillen, James_Miller
comment by gwillen · 2014-07-07T18:18:14.701Z · LW(p) · GW(p)

I use the Fitbit, which has an API: http://dev.fitbit.com/

The Fitbit Flex is a wristband, which seems to be a popular form factor recently. I prefer the Fitbit One, which is a small clip that you can clip onto your pocket/waistband/bra/etc.
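For the API-abuse angle, a minimal sketch of pulling a day's step count from the Fitbit web API. This assumes you have already registered an app and obtained an access token through Fitbit's OAuth flow; the auth details and response field names should be verified against dev.fitbit.com:

    import requests

    ACCESS_TOKEN = "..."  # hypothetical token from Fitbit's OAuth flow

    # Daily activity summary for the authenticated user ("-") on a given date.
    resp = requests.get(
        "https://api.fitbit.com/1/user/-/activities/date/2014-07-07.json",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    resp.raise_for_status()
    print(resp.json()["summary"]["steps"])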

comment by James_Miller · 2014-07-07T17:36:56.398Z · LW(p) · GW(p)

I'm excited about the iWatch, which is supposedly coming out in October.

comment by Punoxysm · 2014-07-07T20:00:44.267Z · LW(p) · GW(p)

I think the main thing the Facebook emotional contagion experiment highlights is that our standard for corporate ethics is overwhelmingly lower than our standard for scientific ethics. Facebook performed an A/B test, just as it and similar companies do all the time, but because this one was done in the name of science, we judged it against the usual ethical standards of science and found it wanting. By comparison, there is no review board for the ethics of advertisements and products. If something is too dangerous, it results in lawsuits. If it is offensive, it gets censored. But something that would be unethical by scientific standards, like devoting millions of dollars of engineering and millions of experimental-subject-hours to developing a sugar-coated, money-sucking Skinner box, won't make anyone bat an eye.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T10:27:45.183Z · LW(p) · GW(p)

I think the core issue is a lack of understanding of how modern technology works. Facebook performed an A/B test, and nobody who knows how the internet works should be surprised.

On the other hand, there are a bunch of people who don't realize that web companies run thousands of A/B tests. Those people were surprised when they read about the study.

Replies from: Punoxysm
comment by Punoxysm · 2014-07-08T16:36:40.403Z · LW(p) · GW(p)

There's a lot of criticism from people who definitely understand this, and a lot of people hemming and hawing about how "it's different because it's emotional manipulation", as if most other A/B testing isn't.

They see the inconsistency, but they don't know how to react; they want to rationalize it.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T20:44:22.263Z · LW(p) · GW(p)

Maybe it's an issue of politics as the mind killer?

Replies from: Punoxysm
comment by Punoxysm · 2014-07-08T21:39:54.390Z · LW(p) · GW(p)

I think it's mostly that scientific ethical standards developed out of a history of bad experiments, whereas the ethical breaches we think of w/r/t corporations are very different, and the context switch is jarring. Not to mention that the idea of a corporation running a social experiment with a substantially scientific purpose is novel to most people, and this one in particular is easy to understand.

It's not explicitly political.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T21:57:39.785Z · LW(p) · GW(p)

Given the NSA scandal, the topic of privacy is very much political, and a lot of people don't like Facebook or other big web companies even while they use their products.

To get back to academia vs. corporations: academia openly shares information about experiments, while business doesn't.

comment by Will_Newsome · 2014-07-09T11:17:38.010Z · LW(p) · GW(p)

Hey guys, so, I'm dumb and am continuing to attempt to write fiction. I figured I would post an excerpt first this time so people can point out glaring problems before I post anything to Discussion. I've changed some of the premise (as can be seen most obviously in the title); I'm moving away from LessWrong-parody and toward self-parody, mostly because Eliezer's followers are really whiny and it was distracting from the actual ideas I was trying to convey. The premise is now less disingenuous about basically being a self-insert fic. I've also tried to incorporate some of the implicit suggestions I received, especially complaints that the first chapter was too in-jokey, pseudo-clever, and insufficiently substantive. This isn't the whole chapter, just the first part of a first draft. Criticism appreciated!

Harry Potter-Newsome and the Methods of Postrationality: Chapter Two: Analyzing the Fuck out of an Owl: Excerpt

Harry let out a long sigh and addressed the owl with mocking eyes.

"So, owl. About this 'Hogwarts'. Are there other magical schools out there that I might attend?"

The owl cocked its head. "Why are you asking me? I'm an owl," said the owl in a voice that sounded like an impossibly rapid sequence of hoots.

"Oh come on. We both know you're needed for the exposition."

The owl hooted regretfully. "Fine. Yes, there are other schools. But you should really be asking more interesting questions. Or perhaps I should lead. How did you know to talk to me?"

Harry flashed a look of disappointment. "Although it pains me to say it, I just figured this is the sort of story with talking animals."

"Pray tell, Mr. Potter, why do you think this is a story in the first place? Most humans who think so are what we owls like to call 'batshit insane'."

Harry sighed. This owl is stupid or a troll or both; nonetheless, for the sake of the story, I should probably just go along with it, he thought. "Let's start with the basics. Riddle me this: how on Earth does someone get a lightning-bolt-shaped scar? Have you ever seen a utensil with a suitably shaped prong? Does an otherwise sane mother decide one day that lightning bolt tattoos are just too expensive and so she should carve her infant son's forehead with a kitchen knife?"

The owl glanced at Harry's forehead, and for the first time appeared to be intrigued. "Maybe a neo-Inglorious-Basterd took you as genetically inclined toward Zeus worship and decided they wouldn't let you hide your depraved Paganism so easily."

"I hadn't thought of that," admitted Harry.

"Or perhaps your parents just read way too much Harry Potter."

Harry was distraught. "Harry Potter? What, am I a book now?"

The owl paused for a long moment, somehow grimaced, looked downwards, and placed the tip of its wing on its forehead.

[...]

Replies from: palladias, Kawoomba, MrMind, Tenoke
comment by palladias · 2014-07-09T17:15:56.194Z · LW(p) · GW(p)

I'd recommend writing five or so chapters and then posting a link. The fic as you're posting it just feels meta for the sake of meta (charitably, because your narrative is still winding up). I'd be more likely to read/upvote if plot were already happening.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-07-09T22:31:29.853Z · LW(p) · GW(p)

That makes sense; to be honest, I generally don't have a high opinion of narratives and mostly view them as excuses for authors to write about characters and settings and spew insights and jokes. (I also mean this in the metaphorical post-structuralist sense.) This might be why my fiction is so much worse than my nonfiction writing.

comment by Kawoomba · 2014-07-10T20:43:07.156Z · LW(p) · GW(p)

This comment might interest you.

(Placeholder for usual self-deprecating disclaimers; linked comment was written in (insert barely-realistic low time estimate), yada yada.)

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2014-07-11T11:09:06.926Z · LW(p) · GW(p)

Okay, I'm probably never going to actually get very far into my fanfic, so:

The story starts as stereotypical postmodern fare, but it is soon revealed that behind the seemingly postmodern metaphysic there is a Berkeleyan-Leibnizian simulationist metaphysic where programs are only indirectly interacting with other programs despite seeming to share a world, a la Leibniz' monadology. Conflicts then occur between character programs with different levels of measure in different simulations of the author's mind, where the author (me) is basically just a medium for the simulators that are two worlds of emulation up from the narrative programs.

Meanwhile the Order of the Phoenix (led by Dumbledore, a fairly strong rationalist rumored to be an instantiation of the monad known as '[redacted]') has adopted and adapted an old idea of Grindelwald's and is constructing a grand Artifact to invoke the universal prior so that an objective measure over programs can be found, thus ending the increasingly destructive feuds. Different characters help or hinder this endeavor, or seem to help or hinder it, according to whether they think they will be found to be more or less plausible by the Artifact. The conspiracies and infighting are further intensified; Dumbledore has his typical "oh God what have I done" moment.

At some point Voldemort (a very strong postrationalist rumored to be an instantiation of the mysterious monadic complex known as 'muflax') has the idea of messing with the Artifact so as to set up self-fulfilling prophecies within its machinations, and then Harry (a very shameless Will Newsome self-insert, rumored to be in thrall to one of Voldemort's monads) introduces the bright and/or incredibly bad idea of acausally controlling bits of the universal prior itself.

The plot becomes exceedingly complex and difficult to simulate. Gods take notice and launch a crusade to restore monadic equilibrium, but some of the older and more jaded gods have taken a liking to the characters and are considering lending them aid. YHWH is unreachable. The whole mathematical multiverse is on the line, and the gods' crusade may already be too late...

Replies from: MrMind
comment by MrMind · 2014-07-11T16:01:49.226Z · LW(p) · GW(p)

Yeah, it's not ambitious at all :)

I've never understood authors' fascination with casting themselves as the main character of a story: what drives an interesting story is hard conflict, so it's as if they were wishing a shitty life on themselves.

comment by Will_Newsome · 2014-07-10T20:52:25.265Z · LW(p) · GW(p)

Sweet! Wish I'd read that earlier, now I feel like to some extent I'm just retreading known ground. Although I do intend to go in a somewhat different direction. Not sure yet when and where to put the plot twists though.

comment by MrMind · 2014-07-10T08:57:24.245Z · LW(p) · GW(p)

This is an order of magnitude more readable than the previous chapter; I applaud it.

Though I have to second one of Tenoke's critiques: when Harry says "What, am I a book now?" it feels inconsistent, because he had already guessed that he was in a book. Characters who know they are in a book are OK (think Sophie in Gaarder's Sophie's World); characters who have amnesia every paragraph are not.

But I am curious to read some more.

Replies from: 2ZctE
comment by 2ZctE · 2014-07-10T16:16:08.723Z · LW(p) · GW(p)

"Am I a book" is different from "am I in a book". My reading was that Harry Potter Newsome hasn't heard of the book series called "Harry Potter", to him that's just his name. He is confused about what "read way too much Harry Potter" is supposed to mean.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-07-10T20:28:39.238Z · LW(p) · GW(p)

Right, this was the intended meaning. Being a character in a book is one thing, but talking to another character who suggests that you're the titular protagonist of a supposedly well-known book is another. I was also trying to suggest that the owl is in some sense from a different world. But I guess that was all unclear and I need to rewrite it.

comment by Tenoke · 2014-07-10T08:15:47.282Z · LW(p) · GW(p)

The owl hooted regretfully. "Fine. Yes, there are other schools. But you should really be asking more interesting questions. Or perhaps I should lead. How did you know to talk to me?"

Harry flashed a look of disappointment. "Although it pains me to say it, I just figured this is the sort of story with talking animals."

Uhm, no, he knew to talk to the owl because it started talking and winking at him first.

EDIT: Ah, it was the letter that talked first, not the owl, my bad. I'll leave my comment as it is, so you don't look as crazy with your reply to me.

Harry was distraught. "Harry Potter? What, am I a book now?"

Didn't he realize that he was in a fanfic just a few minutes ago?

I mean, I just don't get why you would decide to convey the message of your movement through a postmodernist work. How do you even know that anyone else uses the same definition of postrationality as you, when you employ multiple techniques to be as vague as possible when talking about it?

Also, don't complain that your fiction writing sucks when you are writing in styles that your audience (and most people) are not fond of.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-07-10T08:18:13.858Z · LW(p) · GW(p)

You're super annoying, dude. You whine like a bitch. But I appreciate your shitty critiques. At least they convincingly demonstrate your inability to read simple sentences. I'm sorry you're not more intelligent. Life must be like a mildly painful drunken haze for you. I hope someday intelligence augmentation advances enough to save you and all the masses like you from your sorry condition.

Replies from: Tenoke, MrMind
comment by Tenoke · 2014-07-10T08:59:07.818Z · LW(p) · GW(p)

Out of curiosity, what happened that made you change your comment? (and later delete it)

Life must be like a mildly painful drunken haze for you.

Mind projection fallacy?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-07-10T09:01:15.044Z · LW(p) · GW(p)

Out of curiosity, what happened that made you change your comment? (and later delete it)

The first time I decided I wasn't being rude enough. The second time I decided that I was being too rude.

Mind projection fallacy?

Only partially. Unlike you, I have periods where I can actually think clearly.

Replies from: Tenoke
comment by Tenoke · 2014-07-10T09:09:25.212Z · LW(p) · GW(p)

Man, are you touchy.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2014-07-10T20:12:47.580Z · LW(p) · GW(p)

I'm sorry. Although a lot of what you've said is pointlessly mean you did give a bit of useful feedback and my response should have just focused on that.

comment by Will_Newsome · 2014-07-10T09:11:13.204Z · LW(p) · GW(p)

Like any decent troll I'm good at pretending to be. I just want Eliezer to ban my account already.

comment by MrMind · 2014-07-10T08:58:18.841Z · LW(p) · GW(p)

Downvoted: you ask for critiques and respond by insulting your critic?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-07-10T20:08:58.654Z · LW(p) · GW(p)

You're right, I shouldn't have been mean. My issue is that, unlike others whose criticism I really do value, Tenoke has mostly just been bashing shit. But still, he did point out that my last few sentences are legitimately unclear, so I shouldn't have responded how I did. Your downvote is fair. Mea culpa.

comment by Cyan · 2014-07-07T13:08:41.930Z · LW(p) · GW(p)

What happened to Will Newsome's drunken HPMOR send-up? Did it get downvoted into oblivion?

Replies from: Jayson_Virissimo, NancyLebovitz, David_Gerard
comment by Jayson_Virissimo · 2014-07-07T18:51:56.263Z · LW(p) · GW(p)

On Twitter he suggested that EY had deleted it, but provided no evidence.

Replies from: XiXiDu
comment by XiXiDu · 2014-07-07T19:15:10.148Z · LW(p) · GW(p)

What happened to Will Newsome's drunken HPMOR send-up?

On Twitter he suggested that EY had deleted it, but provided no evidence.

I just tested this by deleting one of my posts (it was a test post). My post can still be accessed, while Will Newsome's post can't be accessed anymore (except by visiting his profile). My username disappeared from my post after I deleted it, while Will Newsome's name still appears on his post under his profile. This seems to be evidence in favor of Will Newsome's claim that his post was deleted by someone other than himself.

Replies from: David_Gerard, alexanderwales
comment by David_Gerard · 2014-07-07T22:24:21.304Z · LW(p) · GW(p)

Yeah, that looks like it was deleted forcibly.

comment by alexanderwales · 2014-07-07T19:57:27.380Z · LW(p) · GW(p)

Is there anywhere that I can read it? It sounds mildly entertaining.

Replies from: None, NancyLebovitz
comment by [deleted] · 2014-07-08T02:44:42.028Z · LW(p) · GW(p)

It is, and it's spot on.

comment by NancyLebovitz · 2014-07-08T00:38:59.537Z · LW(p) · GW(p)

You can read it on Will_Newsome's page, and the 17 comments are still there, but there's no way to add comments.

comment by NancyLebovitz · 2014-07-07T14:32:58.048Z · LW(p) · GW(p)

I checked at Will Newsome's page. There seems to have been a failed effort to move it to Main.

comment by David_Gerard · 2014-07-07T14:20:49.270Z · LW(p) · GW(p)

Appears to have been deleted.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-07T14:37:03.594Z · LW(p) · GW(p)

Probably when he was sober again ;)

And it wasn't downvoted; it was at +7 in the end.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-07T22:23:36.052Z · LW(p) · GW(p)

Pity, I was enjoying that thread. I was about to note, in my suggestion of Worm with an EY avatar, that Worm features an actually friendly AI, who is by far the nicest character in the entire saga.

comment by ike · 2014-07-13T20:24:32.022Z · LW(p) · GW(p)

I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think they would actually act on this if the situation were real, i.e. if they had the $1,000,000 and there were a 1-in-100 chance of losing it, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1; but they think they would do that. I'm interested in what could cause someone to think that. I actually have a little more information from asking a few more questions, but I'd like to see what others think without knowing the answer.

My own thoughts: This may be related to the Allais paradox. It also trivially implies two-boxing in Newcomb's problem.
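For scale, a quick expected-value check, plus a log-utility variant showing that even strong risk aversion can't rescue the stated preference (the $10,000 baseline wealth is an arbitrary assumption):

    import math

    p, prize, sure = 0.99, 1_000_000, 1.0
    print(p * prize)  # expected value of the gamble: $990,000, vs. $1 for certain

    # Even a strongly risk-averse log-utility agent with only $10,000 to
    # its name prefers the gamble by a wide margin:
    w = 10_000
    u_sure = math.log(w + sure)
    u_gamble = p * math.log(w + prize) + (1 - p) * math.log(w)
    print(u_sure, u_gamble)  # ~9.21 vs. ~13.78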

Some more questions raised:

What arguments might I make to change this person's mind?

Would it be ethical, if I had to make this choice for them, to choose the $1,000,000? What about an AI making choices for a human with this utility function?

Replies from: ChristianKl, Ixiel
comment by ChristianKl · 2014-07-15T11:03:29.603Z · LW(p) · GW(p)

I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think they would actually act on this if the situation were real, i.e. if they had the $1,000,000 and there were a 1-in-100 chance of losing it, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1

Losing money and gaining money are not the same. Most humans use heuristics that treat the two cases differently. If you want to understand someone, you shouldn't equate the cases even if they look the same in your utilitarian assessment.

Replies from: ike
comment by ike · 2014-07-15T15:26:21.029Z · LW(p) · GW(p)

I understand that, which is why I concede that they may choose the million in one case and not in the other. But I think their decision may be based on other factors, i.e. that they don't actually believe they'd get the million with 99% probability. They're imagining someone telling them, "I'll give you a million if this RNG from 1-100 comes out anything but 100" (or something similar), and are not factoring out distrust. My example with reversing the flow of money was also intended to correct for that.

Perhaps the heuristics you refer to are based on this? Has this idea of "trust" been tested for correlation with the "losing money vs. gaining money" distinction?

comment by Ixiel · 2014-07-14T11:31:43.448Z · LW(p) · GW(p)

Writing it backward, I think you just did.

As for the ethics: if you were already in a position where you HAD to make the decision, you should do what you think is right regardless of any of their prior opinions. If, however, you merely had the opportunity to override them, I think you should limit yourself to persuading as many of them as you can, and not override them for their own benefit.

comment by DataPacRat · 2014-07-07T08:11:00.352Z · LW(p) · GW(p)

Merging traditional Western occultism with Bayesian ideas seems to produce some interesting parallels, which may be useful psychologically/motivationally. Anyone care to riff on the theme?

Eg: "The Great Work" is the Most Important Thing that you can possibly be doing.

Eg, tests to pass and gates to go through in which a student has to realize certain things for themselves, as opposed to simply being taught them: from pre-membership ones of learning basic arithmetic and physics, to the initial initiation of joining the Bayesian Conspiracy, to an early gate of becoming a full atheist, to a higher gate of, say, making arrangements to be brought back from the dead. (Possibly the highest level would be to have arrangements to be brought back from the dead /without/ anyone else's help...)

Replies from: chaosmage, DataPacRat, ChristianKl, None, None
comment by chaosmage · 2014-07-07T11:09:34.676Z · LW(p) · GW(p)

That makes a bit of sense. The occultists fancied themselves scientists, back when that wasn't such a clearly defined term as it is now, and they rummaged through lots of traditions looking for bits to incorporate into their new (claimed to be old) culture. But computer game design had all the same sources to draw from, greater manpower, and vastly more cultural impact. I would expect "almost any" useful innovation the occultists came up with to be contained in computer games.

This is true for both of your examples: "winning the game" and skill trees, respectively. And skill trees are better than initiation paths, because they aren't fully linear while still creating motivation to go further.

Compare the rules of how to play more like a PC, less like an NPC.

I say "almost any" because an exception may be fully immersed, bodily ritual stuff. Maybe that can hammer things down into system 1 that you simply don't "get" the same way when you just read them.

Replies from: Richard_Kennaway, DataPacRat
comment by Richard_Kennaway · 2014-07-08T09:03:07.018Z · LW(p) · GW(p)

I say "almost any" because an exception may be fully immersed, bodily ritual stuff. Maybe that can hammer things down into system 1 that you simply don't "get" the same way when you just read them.

Is VR (Oculus Rift, Sony Morpheus) a significant step in that direction?

Replies from: chaosmage
comment by chaosmage · 2014-07-08T11:33:40.985Z · LW(p) · GW(p)

Sure. In fact, some occultists already use VR, so I don't see why we couldn't.

The one interesting innovation the occultists came up with is the creative design of rituals - and sometimes they do manage to see rituals as psychological tools rather than somehow supernatural things. Surely some of that could be "useful psychologically/motivationally" - although psychological research into this is practically nonexistent, it is plausible that a well-designed ritual could do something to participants, such as help them actually change their minds.

For example, most of us agree Crocker's rules are a good idea. I'm confident that if adopting them were done as a ritual event, something pompous with witnesses, it would:

  • create positive reinforcement and a more impressive memory,
  • help keep the rules, and
  • advertise them, especially if the witnesses aren't already familiar with them.

Maybe VR could help heighten the experience. But I assume that recording the event, and publishing it for all the world to witness, would do much more.

Replies from: gjm
comment by gjm · 2014-07-08T16:20:10.646Z · LW(p) · GW(p)

In fact, some occultists already use VR

Occultus Rift?

comment by DataPacRat · 2014-07-07T13:20:06.762Z · LW(p) · GW(p)

skill trees

My computer gaming experience mostly peaks around the era of Sid Meier's Alpha Centauri and Ultima, so I'm only vaguely familiar with skill trees. Could you describe how they might apply here in a bit more detail?

Replies from: chaosmage
comment by chaosmage · 2014-07-08T11:42:52.732Z · LW(p) · GW(p)

Think of a research tree, then. Or, more formally, a simple directed graph. Nodes can be "on" or "off", meaning you (claim to) have or not have the skill that node describes. A node can be a prerequisite for others.

This can be taken many ways, but one obvious example would be a "sequences comprehension tree": one node per part of the Sequences, with the parts it builds on as prerequisites. You could claim a node to express confidence that you've understood (or even agreed with?) that particular part, track your progress, and, if you could publicly share your progress along this (or any other) tree, also show off.

This could be done in JavaScript fairly easily, and it'd be awesome I think. Anyone want to code it?
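A minimal sketch of the data structure (in Python rather than the suggested JavaScript, and with hypothetical node names):

    # A skill tree as a directed graph: a node can only be claimed ("on")
    # once all of its prerequisite nodes have been claimed.
    class SkillTree:
        def __init__(self):
            self.prereqs = {}   # node -> set of prerequisite nodes
            self.claimed = set()

        def add_node(self, name, prereqs=()):
            self.prereqs[name] = set(prereqs)

        def can_claim(self, name):
            return self.prereqs[name] <= self.claimed

        def claim(self, name):
            if not self.can_claim(name):
                raise ValueError("missing prerequisites: %s"
                                 % (self.prereqs[name] - self.claimed))
            self.claimed.add(name)

    tree = SkillTree()
    tree.add_node("Map and Territory")
    tree.add_node("Mysterious Answers", prereqs=["Map and Territory"])
    tree.add_node("Reductionism", prereqs=["Mysterious Answers"])
    tree.claim("Map and Territory")
    tree.claim("Mysterious Answers")       # ok: prerequisite is claimed
    print(tree.can_claim("Reductionism"))  # True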

comment by DataPacRat · 2014-07-07T08:11:53.248Z · LW(p) · GW(p)

Additional idea: "DataPacRat's Lower Bound" for the Great Work: "If what you're doing isn't at /least/ as important as ensuring that you will keep being able to read comics for the foreseeable future, then you should work on the comic thing instead."

comment by ChristianKl · 2014-07-07T09:21:39.395Z · LW(p) · GW(p)

What exactly do you mean by "traditional Western occultism"? Things like Freemasonry?

Replies from: DataPacRat
comment by DataPacRat · 2014-07-07T09:28:06.011Z · LW(p) · GW(p)

The Golden Dawn ( https://en.wikipedia.org/wiki/Hermetic_Order_of_the_Golden_Dawn , not the Greek political thing) and related groups, such as AA ( https://en.wikipedia.org/wiki/A%E2%88%B4A%E2%88%B4 , not the recovery program thing).

comment by [deleted] · 2014-07-09T19:40:30.856Z · LW(p) · GW(p)

The simplest explanation I can see: I'm pretty sure the writers who coined some of the memes you reference (e.g. "Bayesian Conspiracy", "higher gate") were drawing on those very same occult traditions for affect and flair. The parallels are analogies, because analogy is useful. Which brings up a question: I'm curious what you mean by "useful". Useful as teaching analogies, or useful as sources of structure and methodology? Or something else?

Replies from: Nornagest, DataPacRat
comment by Nornagest · 2014-07-10T00:53:52.522Z · LW(p) · GW(p)

The simplest explanation I can see: I'm pretty sure the writers who coined some of the memes you reference (e.g. "Bayesian Conspiracy", "higher gate") were drawing on those very same occult traditions for affect and flair.

I'm pretty sure this is false, except insofar as some of the style of Western ceremonial magic has seeped into pop-cultural ideas of how conspiracies and secret teachings work. There isn't much overlap in doctrine, terminology, or practice other than what you'd expect from two different groups that've spent a lot of time thinking about how to cause change in accordance with will (which we call "instrumental rationality" and they call "magic").

comment by DataPacRat · 2014-07-10T00:42:16.911Z · LW(p) · GW(p)

There are people willing to run through the entire rigamarole of the Golden Dawn initiation rituals, and all the associated memorization, without any significant evidence that the supposed magic has any effect on the real world. How much more motivation could be created using a similar process, but one which can be demonstrated to be linked to how the universe actually works?

Replies from: None
comment by [deleted] · 2014-07-10T01:08:41.598Z · LW(p) · GW(p)

I do not know. A comparative study would help.

Some of my central questions: Would such methods prove effective with subjects whose drive to join is a desire to question and improve upon the methods? If such methods led them to discover effective facts that can optimize efforts in the real world (rather than a "magic" used mainly for interpersonal signalling), then wouldn't secrecy be self-defeating? After all, the subjects are being linked to the underlying laws of the universe. That they would neither apply those laws in their public lives nor, if altruistic, share such discoveries, is hard for me to accept.

Certainly, I find the drama and seriousness of such an idea exciting. It lends learning a nice, hefty weight that the task deserves. Secret knowledge is appetizing, so it makes sense to want that knowledge to be useful rather than just a pageant show. The problem comes with the fact that secret knowledge which is entangled with the real world is not really secret. It's real. We're only pretending to keep it secret when really the answer is, literally, the nose in front of our face.

It's like the adage "homeopathic medicine that worked would be called 'medicine.'" Secret knowledge that is true is knowledge, plain and simple. It only takes one genius kid riding a train with a stopwatch and a mirror to discover relativity. Then the secret's out and, probably, being used to produce terrible ads for the sides of trains.

comment by [deleted] · 2014-07-08T02:59:45.059Z · LW(p) · GW(p)

You do realize that at least the latter two 'gates' you came up with are predicated entirely on a very specific local culture and set of values around here rather than having anything to do with rationality, right? (Not to mention not exactly being likely to be possible in the real world...)

Replies from: DataPacRat
comment by DataPacRat · 2014-07-08T20:35:41.291Z · LW(p) · GW(p)

Yep, I realize that. If you've got any better suggestions for the gates to pass and rites to perform, I welcome the ideas.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-07-10T16:07:30.362Z · LW(p) · GW(p)

Yes, I have a suggestion. Imagine a meaningful life without religion.

comment by JoshuaFox · 2014-07-07T07:24:32.016Z · LW(p) · GW(p)

Has anyone read The Artificial Intelligence Revolution by Louis Del Monte?

comment by protest_boy · 2014-07-12T05:09:30.426Z · LW(p) · GW(p)

Is there a way to tag a user in a comment such that the user will receive a notification that s/he's been tagged?

Replies from: satt
comment by satt · 2014-07-19T11:29:32.871Z · LW(p) · GW(p)

I don't think there is, but you can crudely fake it by writing the comment as usual, then sending a private message to the relevant user with a link to the comment.

comment by DanielDeRossi · 2014-07-09T13:05:33.441Z · LW(p) · GW(p)

Are there any resources (on LessWrong or elsewhere) I can use for improving my social effectiveness and social intelligence? It's something I'd really like to improve on, so I can understand social situations better and perhaps improve the quality of my social interactions.

Replies from: BaconServ, None
comment by BaconServ · 2014-07-09T18:32:40.227Z · LW(p) · GW(p)

Where to start depends highly on where you are now. Would you consider yourself socially average? Which culture are you from and what context/situation are you most immediately seeking to optimize? Is this for your occupation? Want more friends?

Replies from: DanielDeRossi
comment by DanielDeRossi · 2014-07-11T15:25:19.422Z · LW(p) · GW(p)

I'd consider myself a little below average. Culture: Anglo-Caribbean (where I am now), USA (where I'll be soon). Both professional and personal would be great. Not so much making new friends as navigating social situations and being able to 'read' people.

comment by [deleted] · 2014-07-12T13:20:08.138Z · LW(p) · GW(p)

Coursera's Social Psychology class is starting on Monday.

Replies from: curiousepic
comment by curiousepic · 2014-07-15T20:25:36.991Z · LW(p) · GW(p)

I decided to take this. Let me know if you'd like to connect.

comment by [deleted] · 2014-07-07T18:18:53.151Z · LW(p) · GW(p)

I'm living in rural Alabama for the next five years, with little opportunity for mental challenge outside of my job. The only local groups of notable interest are our Rotary Club (which would really only bring a networking benefit) and our Trailmasters (from whom I can learn gardening and horticulture). I'd like to take part in more rationality-related activities, both for the self-improvement and the community benefits. Are there any suggestions for useful activities or groups I might join that can help? With so many meetup groups, I'm sure I can't be the only one living in isolated conditions. I'd like to hear from others how they beat the doldrums.

Replies from: kalium, None, polymathwannabe
comment by kalium · 2014-07-08T03:10:09.065Z · LW(p) · GW(p)

While it's not a rationality-related activity, I find that mycological societies/mushroom-hunting groups tend to have a good understanding of the risks of wishful thinking and some cognitive biases, and if there's anything like that in your area you might find them unexpectedly congenial.

Replies from: None
comment by [deleted] · 2014-07-08T19:31:57.581Z · LW(p) · GW(p)

We have a local Trailmasters group. While they're mostly focused on volunteer service, mainly in our park, some of them engage in gardening and related activities. I've been looking into how much I could learn from them vs. how much time I'd spend on unrelated activities. Might be something similar to what you're suggesting.

Replies from: kalium
comment by kalium · 2014-07-09T04:55:44.448Z · LW(p) · GW(p)

The rationality I've observed in mycophiles is pretty closely connected to the fact that if you get lazy with your identification and screw up you can easily poison yourself. I don't think this generalizes to most outdoor activities.

comment by [deleted] · 2014-07-08T02:43:46.642Z · LW(p) · GW(p)

Gardening experiments.

Replies from: None
comment by [deleted] · 2014-07-08T19:33:06.501Z · LW(p) · GW(p)

Definitely. Gardening and cooking are two skills I really want to work at developing. Having a full library at my disposal, finding books for them is simple. Finding groups to learn from is a bit harder.

comment by polymathwannabe · 2014-07-07T18:27:43.057Z · LW(p) · GW(p)

Two words: online go.

Replies from: None
comment by [deleted] · 2014-07-07T19:22:57.227Z · LW(p) · GW(p)

Oh, absolutely. I fully mean to utilize my internet connection for its intended purpose: long-distance community. Mainly, I'm curious what specific online communities, projects, or activities are used by fellow LWers, especially those in similar living situations. At the moment, the only rationalist communities I really know about are here, CFAR, and MIRI. Obviously, I've already joined the first, and I'm considering what, if anything, I could do with the latter two.

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2014-07-08T17:29:00.810Z · LW(p) · GW(p)

The LW study hall seems relevant.

Replies from: None
comment by [deleted] · 2014-07-08T19:33:54.972Z · LW(p) · GW(p)

I'm considering how I can use it. I've checked it out before. I'd prefer to utilize the study hall for a specific project, which, at this moment, I don't yet have. Still, a handy resource.

comment by Metus · 2014-07-08T11:15:06.538Z · LW(p) · GW(p)

I know politics is the mind-killer and arguments are soldiers, yet the question still looms large: what makes some people more susceptible to arguing about politics and ideology? There are people I can talk to while holding differing points of view; we just go "well, seems like we disagree" and carry on the conversation. Conversations with other people invariably disintegrate into political discussion with neither side yielding.

Why?

Replies from: Viliam_Bur, BaconServ
comment by Viliam_Bur · 2014-07-08T14:03:00.730Z · LW(p) · GW(p)

Different people may have different reasons. I guess it's usually a form of bonding: if you believe that the other person is likely to have similar political opinions, then if you confirm it explicitly, you have common values and common enemies, which makes you emotionally closer.

And people who often start political debates with those who disagree... could simply be uncalibrated. I mean, there is some kind of surprise/outrage when they find out that the other person doesn't agree with them. But maybe I'm just protecting my hypothesis against falsification here. Perhaps we could find such a person and ask them to estimate how likely it is that a random person within their social group shares their opinions.

Replies from: Metus
comment by Metus · 2014-07-08T14:25:31.744Z · LW(p) · GW(p)

The attempt at making the hypothesis falsifiable itself already warrants an upvote.

So bonding over politics might be a game-theoretic strategy to find allies at the cost of obviously alienating some people. Very interesting hypothesis. How might this be made falsifiable? I'd reject the hypothesis if politicking decreased or stayed constant as the need for allies grew, assuming satisfactory measures for both politicking and the need for allies.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-07-08T16:45:05.416Z · LW(p) · GW(p)

Well, the adaptation may have been well-balanced in the ancient environment, but imbalanced for today. (Which could explain why people are uncalibrated.) So... let's just separate the "what" from the "why". Let's assume that people are running an algorithm that doesn't even have to make sense. We just have to throw in a lot of different inputs, examine the outputs, and make a hypothesis about the algorithm. The whole meaning of that would be a prediction that if we keep making experiments, the outputs will keep being generated by the same algorithm.

That's the "what" part. And the "why" part would be a story about how such algorithm would provide good results in the ancient environment.

Unfortunately, I can't quite imagine running that experiment. Would we... take random people off the street, ask them how many friends and enemies they have, then put them in a room together and see how much time passes until someone starts debating politics? Or make an artificial environment with artificial "political sides", like a reality show?

comment by BaconServ · 2014-07-09T18:43:11.784Z · LW(p) · GW(p)

Do you find yourself refusing to yield in the latter case but not the former? Or is this purely an external observation of mutually unrelenting parties?

If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.

comment by jaime2000 · 2014-07-07T17:46:41.293Z · LW(p) · GW(p)

I just finished a one-shot rational Spider-Man fic. Comments are welcome.

Replies from: Jiro, palladias, Jiro, polymathwannabe
comment by Jiro · 2014-07-08T14:58:24.961Z · LW(p) · GW(p)

Every so often someone writes an essay on why Superman doesn't just stop social injustice or whatever. My response is that since superhumans are still only human in the ways that matter, they can make mistakes. If you stick to stopping crime, that at least puts an upper bound on how bad your mistake can be. It's pretty obvious that stopping Dr. Octopus benefits people--the chances of being wrong about that are nil.

If the superhero starts overthrowing governments--or promoting cryonics--there's a chance he could be wrong and screw up. And if someone with such an influence screws up, he really screws up.

Reading that fic from a point of view of someone who doesn't support cryonics makes it clear exactly why beating up super-criminals is the best use of his powers.

comment by palladias · 2014-07-08T22:09:03.301Z · LW(p) · GW(p)

In a similar vein, Strong Female Protagonist is a great webcomic about a girl with super strength who questions the idea that fighting crime is the best use of her talents.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-07-09T09:22:41.934Z · LW(p) · GW(p)

"We like winning far more than we like playing."

That whole page feels very LessWrong-y.

comment by Jiro · 2014-07-08T14:52:45.823Z · LW(p) · GW(p)

The official Peter Parker version had Gwen Stacy die on him. I've seen a commentary which pointed out that this directly disproves the mantra about great power and great responsibility, since although Uncle Ben was a case where refusing to be a hero got his loved ones killed, Gwen Stacy was a case where deciding to be a hero got his loved ones killed. If he hadn't been Spider-Man, Gwen would have still been alive.

Replies from: TylerJay
comment by TylerJay · 2014-07-09T21:33:49.550Z · LW(p) · GW(p)

I'm not familiar with the exact canon of how Gwen died, but I don't think it disproves the idea that "with great power comes great responsibility". It's still your responsibility to make sure people don't get hurt, not just to try to use your powers for good.

In HPMOR, Harry calls it "Heroic Responsibility".

"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what."

Replies from: Jiro
comment by Jiro · 2014-07-10T00:56:43.224Z · LW(p) · GW(p)

I can "prevent someone from getting hurt" right now by selling my computer and giving the money to some charity. I don't do this. Some people here may argue that that's still immoral, especially the Givewell people, but almost every human being alive acts that way. People care more about their family and friends, and secondarily about people like themselves, rather than caring about everyone in the world equally.

From this point of view, the death of Uncle Ben is supposed to show that if Peter doesn't become a hero, the people he cares about will suffer, not just a random other person in the world. After all, every time he buys his Aunt May medicine instead of spending the money on third world malaria netting, he has failed to use his powers for good. There's no need for Uncle Ben to die just to demonstrate that. But Gwen Stacy is a prime example of a loved one whose death includes a short causal chain that begins with "Peter is a hero". If he hadn't been a hero, Gwen wouldn't have died.

(Of course, you could argue that if he hadn't been a hero, an earlier villain would have taken over the city and eventually killed everyone in it, including Gwen. The problem with that reasoning is that it's really an astonishing coincidence that in a comic book world, the heroes and villains are exactly matched--it's such an astonishing coincidence that morality cannot require that Peter act in a way that's optimized for the coincidence. And the same logic which leads you to conclude that without Peter villains would have taken over the city would also lead you to conclude that in cities that never had a Peter, the villains would have taken over already.)

In HPMOR, Harry calls it "Heroic Responsibility".

But that quote isn't an argument that heroic responsibility exists. It's an assertion. So I have no reason (that you've shown me) to take it seriously.

comment by polymathwannabe · 2014-07-07T18:07:28.927Z · LW(p) · GW(p)

I half-expected Peter to finish with, "Revenge? Of course. It's just that the real enemy is death, and this is my revenge."

Replies from: jaime2000
comment by jaime2000 · 2014-07-08T13:56:29.916Z · LW(p) · GW(p)

I suppose that makes more sense than what FeepingCreature came up with on reddit:

"Revenge?" asked Peter, a slight note of shock in his voice. "On a whale? No, I decided I'd just get on with my life."

comment by David_Gerard · 2014-07-07T11:55:18.863Z · LW(p) · GW(p)

The Transhumanist Wager. Has anyone read this thing? The Wikipedia synopsis reads like a satirical description of a fictional book. This review is absolutely scathing, including of the ethics of the author-avatar protagonist; this one is a bit nicer. The author commented here very slightly.

Replies from: Kaj_Sotala, None, Risto_Saarelma, James_Miller, Manfred
comment by Kaj_Sotala · 2014-07-09T12:55:02.271Z · LW(p) · GW(p)

I tried reading it, gave up around page 70. At first I was reading it as a self-satire B-movie thing about transhumanist stereotypes, but at some point it dawned upon me that it was apparently meant to be read in all seriousness.

The shark-jumping moment for me was the part in the novel where the President of the United States has called for a public meeting between bioconservative religious leaders and transhumanist scientists. The dialogue is stalled, until the main character, a fourth-year philosophy student, gets up and gives a speech. He says, basically, that state institutions that restrict research are evil, that scientific research must proceed freely and without limitations, and that furthering transhumanism is a moral obligation which will end up benefiting both national well-being and competitiveness. The "state institutions are evil" bit is about the only part that gets actual arguments supporting it; the rest of the points are just stated without anything to back them up.

The crowd's reaction:

The rotunda was silent for a long time after Jethro stopped speaking. In those moments every person believed in the speech’s common sense, in the potential of transhumanism, in modifying and improving the landscape of traditional human experience. The logic was inescapable. But then— slowly— their minds, egos, and fears lumbered around to the immediate tasks facing them. They remembered about their need to be elected to office; about what their constituents would say; how their churches would cast judgment; how their mothers, spouses, and friends would react; how they would be viewed, tallied, and callously spit out in public. Finally, they remembered their own fears of the unknown.

That's, err, not the best job that I've seen of presenting the bioconservative viewpoint in a fair or charitable light.

Later on the philosopher goes to write his thesis, an essay praising transhumanism, and is almost failed by an Evil Bioconservative Professor for writing such garbage.

Later on he meets his love interest:

When she turned, however, Jethro's luminous blue eyes met hers, and she felt stunned to be looking at a light-skinned man only a few years younger than she. The tingling on the back of Zoe’s neck told her he was neither handsome nor ugly, but intensely compelling. She felt aroused, and unconsciously adjusted her legs. There was a spiritual and nebulous connection she felt as well, but it was too much for her to immediately fathom.

Uh huh.

Later on the said love interest jumps off a cliff in order to persuade the main character of a philosophical point. Yes, really.

Replies from: None
comment by [deleted] · 2014-07-09T13:08:46.638Z · LW(p) · GW(p)

Thanks for taking one for the team.

comment by [deleted] · 2014-07-08T02:54:47.702Z · LW(p) · GW(p)

When I first read synopses, I seriously thought it was a parody by the person who writes Amor Mundi.

Its existence illustrates everything wrong with particular sub-parts of the transhumanist/libertarian cluster.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-08T12:48:22.498Z · LW(p) · GW(p)

I'm sure transhumanists everywhere will be delighted at being depicted in popular culture (NYT bestseller!) as sociopathic Objectivists with a cryonics membership.

comment by Risto_Saarelma · 2014-07-07T15:24:47.271Z · LW(p) · GW(p)

I didn't make it past the one-page preview. Looks very Dan Brown-y stylistically. Also, the nicer review is by Giulio Prisco, who seems to be more of a general booster of transhumanism memes than a critical book reviewer. If even he can only muster lukewarm appreciation, I'd count that as a pretty bad sign.

Though I guess there is some poignancy in a book extolling the drive to achieve a state of all-encompassing superhuman cognition at all costs being itself the product of somewhat inept fiction writing skills.

comment by James_Miller · 2014-07-07T17:40:35.059Z · LW(p) · GW(p)

The author is doing a great job of promoting transhumanism in the popular press. See, for example, this pro-cryonics article he wrote for the HuffingtonPost. He is the kind of person we should be working with.

Replies from: None
comment by [deleted] · 2014-07-08T03:08:00.456Z · LW(p) · GW(p)

I would argue that this book at least undoes everything else he did, and possibly more.

Replies from: James_Miller
comment by James_Miller · 2014-07-08T20:33:21.400Z · LW(p) · GW(p)

Far more people are going to read his HuffPost articles than his book.

comment by Manfred · 2014-07-07T14:22:53.376Z · LW(p) · GW(p)

Oh my gosh this book sounds amazing (though not for the reasons the author intended). There's even seasteading!

EDIT: After reading some reviews, on the other hand, I'll take a pass, sorry.

comment by Error · 2014-07-12T16:12:25.550Z · LW(p) · GW(p)

The Less Wrong Study Hall's tinychat room is acting up this morning. For anyone who uses it and can't get in, we're in /lesswrong2 instead.

[Edit: It looks like support has fixed it, so please go back to the regular room.]

comment by MrMind · 2014-07-11T16:47:23.272Z · LW(p) · GW(p)

Sometimes I've tried to argue in favor of eugenics. The usual response I've gotten has been something like: "but what if we create a race of super-human beings that wipes us out?".
It's interesting that people are much more prone to believe it's possible to create unfriendly human super-intelligence rather than an unfriendly artificial super-intelligence.

Replies from: None, Richard_Kennaway, NancyLebovitz
comment by [deleted] · 2014-07-11T17:57:58.698Z · LW(p) · GW(p)

It's interesting that people are much more prone to believe it's possible to create unfriendly human super-intelligence rather than an unfriendly artificial super-intelligence.

Probably because we have actual history of unfriendly humans who justify genocide by their own perceived superiority?

Replies from: MrMind
comment by MrMind · 2014-07-15T08:54:30.480Z · LW(p) · GW(p)

It's the "super" part that I'm curious of. Of course we have unfriendly intelligence, but I got the feeling that people in general believe it's much easier to create a biological super-intelligence than an artificial one.

comment by Richard_Kennaway · 2014-07-13T08:52:18.799Z · LW(p) · GW(p)

Sometimes I've tried to argue in favor of eugenics. The usual response I got has been something like: "but what if we create a race of super-human beings that wipes us out?".

They would. That is what eugenics is. No existing people get uplifted, the turnover of population just replaces them by better people.

Replies from: MrMind, NancyLebovitz
comment by MrMind · 2014-07-15T09:06:06.718Z · LW(p) · GW(p)

They would. That is what eugenics is.

It's tangential to the main topic ("why people believe a biological super-intelligence is more probable than an artificial one"), but I think that what you said is not warranted at all.

First, we know very little about the biology of intelligence: at present we are not able to explain the current variability in human intelligence, and we have even less idea how to genetically enhance it.

Second, we share a psychological unity, and genetically improved humans will presumably be raised within human families, so we have a much greater chance for them to share our values.

Third, even if no people get uplifted and the generational change brings about better people, it's still a net gain for humanity overall.

The existential risk of eugenics, super-people violently replacing normal ones, is the least probable scenario, not the most probable one.
I'm not saying we should concentrate on eugenics (I still feel that UFAI is a much bigger threat); I'm saying we should not avoid it because of x-risks.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-07-15T10:36:44.344Z · LW(p) · GW(p)

I agree with all that, I was just running with the idea that one way or another, truly superior beings would in the end displace the rest, with or without actual war.

Now, successful breeding combined with life extension for all, so that present-day average people get to live into a future dominated by the results of several generations of breeding, that could be an interesting scenario for an SF story. I would expect the violence to originate with the marginalised rather than the elite.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-07-17T05:38:02.251Z · LW(p) · GW(p)

You might be interested in "Nobody Home" by Joanna Russ. A woman who's reasonably bright by modern standards just doesn't fit in a future where everyone else is much brighter than she is. No violence, just a miserable trap for her.

I don't consider that future all that plausible-- it seems unlikely that there was only one person at that intelligence level.

comment by NancyLebovitz · 2014-07-13T16:42:00.976Z · LW(p) · GW(p)

The concern is that there will be no more people like us-- only the improved(?) model will remain.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-07-13T23:39:48.786Z · LW(p) · GW(p)

That's how evolution works, natural or artificial, as long as we keep dying.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-07-14T01:16:46.558Z · LW(p) · GW(p)

That's how evolution works sometimes. In general, there's a noticeable chance of both species surviving in different niches.

comment by NancyLebovitz · 2014-07-11T17:00:34.418Z · LW(p) · GW(p)

Interesting. I've assumed that the big risk of eugenics (especially if it includes genetic engineering) is that people will choose something stupid and/or we'll lose too much variation.

Any thoughts about whether we'll converge on tall, blond, lean, hypomanic, and good at multiple choice tests with a sprinkling of people who look like celebrities, or instead have a wild explosion of physical and mental variation?

Replies from: Manfred, None
comment by Manfred · 2014-07-11T17:29:29.689Z · LW(p) · GW(p)

Huh, I've assumed that the big risk of eugenics is that the ability to reproduce will be used as a measure of social control and status by a not-very-deserving upper class, and will make a lot of people very unhappy. But with genetic engineering, yeah, we could avert that.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-07-13T16:40:22.170Z · LW(p) · GW(p)

That depends on what genetic engineering costs.

comment by [deleted] · 2014-07-11T17:56:28.511Z · LW(p) · GW(p)

Does it matter, when in ~1 generation we will have the ability to redesign our bodies at will?

Eugenics is a 20th century concern.

Replies from: bramflakes
comment by bramflakes · 2014-07-11T18:43:56.947Z · LW(p) · GW(p)

Where do you get the 1 generation estimate from?

Replies from: None
comment by [deleted] · 2014-07-12T06:39:57.084Z · LW(p) · GW(p)

Kurzweil-like graphs regarding advancements in molecular nanotechnology, plus an understanding of nanomedicine.

Replies from: None
comment by [deleted] · 2014-07-13T04:48:12.460Z · LW(p) · GW(p)

What exactly is nanomedicine?

Replies from: None
comment by [deleted] · 2014-07-11T11:02:46.475Z · LW(p) · GW(p)

Suppose someone's life plan was to largely devote themselves to making money until they were in, say, the top 10% in cumulative income. They also did not plan to save money to any very unusual extent.

Then, after that was accomplished, they would switch goals and devote themselves to altruism.

Given that the person today is able to make the money and resolves to do this, I wonder what people here think the chance is of doing it. For example, fluid intelligence declines over time. So by the time you're 60 years old and have made your money and have kids, will you really be smart enough to diametrically change direction and have much impact? Maybe Bill Gates has enough brain cells, but his IQ might be 160. And maybe you'll just forget about altruism and learn to enjoy nice cars more.

Replies from: NancyLebovitz, drethelin, Manfred
comment by NancyLebovitz · 2014-07-11T16:57:54.054Z · LW(p) · GW(p)

It doesn't seem that unusual for rich people to become more charitable as they get older, though perhaps I'm just hearing about the famous ones. I assume a large part of it is feeling as though one has solved the money-making game, and now it's time to do something new. (Rich people getting into politics is probably similar.)

Is anything known about how to maintain fluid intelligence?

Replies from: None
comment by [deleted] · 2014-07-11T21:42:04.936Z · LW(p) · GW(p)

A Google search turned up a few articles:

Senior citizens who performed as well as younger adults in fluid intelligence tended to share four characteristics in addition to having a college degree and regularly engaging in mental workouts: they exercised frequently; they were socially active, frequently seeing friends and family, volunteering or attending meetings; they were better at remaining calm in the face of stress; and they felt more in control of their lives.

Although there is some controversy and debate on the best ways to improve fluid intelligence, studies are showing a strong link between non-academic pursuits and improved fluid intelligence.

A quick look into some trends:

This report suggests a non-monotonic relationship but maybe a positive correlation between income and percentage of income donated to charity. Unfortunately, this depressingly suggests a negative correlation. (Edit: It seems non-monotonic over some intervals but negative overall. Further edit: I don't have high confidence either way. Alexander Berger, a research analyst at GiveWell, thinks the piece in The Atlantic is just wrong. I note that some studies are citing "discretionary income" or income with a whole bunch of expenses subtracted out.)

This doesn't seem to list percentages but gives the impression of increasing giving with age. And it's the same story in the UK. Edit: This is better:

In 2005, people in the 65-74 years age bracket gave the most dollars to charitable contributions. The people in the 75 years and older age bracket gave the highest portion of income.

comment by drethelin · 2014-07-11T21:24:34.246Z · LW(p) · GW(p)

Altruism doesn't take a lot of intelligence. Something like 95 percent of American households give to charity. The biggest factor by far will be commitment, not capability.

comment by Manfred · 2014-07-11T17:40:21.825Z · LW(p) · GW(p)

You'll probably be fine. From what my parents say, you keep gaining effectiveness due to cunning, practice, and ability to see the obvious. At least until 65 or so, and not necessarily in all professions.

If you're particularly concerned for some reason, you might want to make a habit of giving to charity (not necessarily large amounts, but enough to form the habit). Using a contract to force yourself also sounds cool, but is probably just asking for trouble.

comment by Stefan_Schubert · 2014-07-10T15:00:21.887Z · LW(p) · GW(p)

In the comments to this post we discussed the signalling theory of education, which has previously been discussed on Less Wrong. The signalling theory says that education doesn't make you more productive, but constitutes a signal that you are productive (since only a productive worker could obtain a degree at a prestigious university, or so many employers think).

Such signalling can be very socially wasteful, since it can lead to a signalling arms race where people spend more and more money on signals that don't increase their productivity (like peacocks' tails). Now an important question is how one could rein in such signalling arms races. One way is to prohibit employers from considering educations that are irrelevant to the job. That is, if your education is a pure signal of the abilities you had before you started the education, and doesn't increase your productivity in any way, employers would not be allowed to consider it when deciding between you and other applicants. The downside, though, is that it means more regulation and could be seen as illiberal.

Another hope is the increased use of big data in recruiting. Whereas previously employers used crude heuristics, such as which university you went to, they now have access to constantly improving algorithms which pick out precisely which applicant features predict productivity and which don't.

Now suppose that what university education you went to is in fact a less accurate signal than some other feature. Then employers would fight over the applicants that have this other feature, rather than those with the university education. This would lead to people being less keen to obtain long and expensive university educations.

Of course new wasteful arms races could arise regarding these other features. Then again, I think we have reason to believe that these arms races would not be quite as wasteful as (I believe) the present educational arms races are. The reason people spend so much time and money on education as a signal is that it has proved to be so stable as a signal. People aren't going to spend as much time and money on a new signal, because they won't be as confident that it will continue to function as a strong signal. If what is taken to have signalling value is constantly shifted around, people would presumably be less willing to engage in signalling arms races.

These are just some loose thoughts. I'd be interested to hear if someone has any further thoughts on how to decrease wasteful signalling in education, or any other thoughts on the fascinating topic of signalling in general.

Replies from: Vaniver, Lumifer, Douglas_Knight, Jiro
comment by Vaniver · 2014-07-12T21:13:20.483Z · LW(p) · GW(p)

Another hope is the increased use of big data in recruiting. Whereas previously employers used crude heuristics, such as which university you went to, they now have access to constantly improving algorithms which pick out precisely which applicant features predict productivity and which don't.

So, we don't need big data. We need the data we already have, that we're legally prohibited from using. What you're going to find, for almost any job, is that g matters a lot, and then conscientiousness and extraversion matter some, and job-specific experience and training determine how quickly they can become productive. If prospective employers could just look up your IQ test scores, they wouldn't need to sneak around trying to estimate your IQ score from available data.

(Edit: of course further research and testing will determine other things that matter. But we shouldn't pretend that we don't know the biggest factor, or that the gaps are knowledge-based instead of policy-based.)

Replies from: NancyLebovitz, Jiro
comment by NancyLebovitz · 2014-07-13T06:51:32.609Z · LW(p) · GW(p)

Everything I've heard about IQ tests being illegal to use for employment has been about the US. Anyone know whether it's legal to use IQ tests in other countries? And if so, how it's worked out?

comment by Jiro · 2014-07-13T00:06:16.886Z · LW(p) · GW(p)

The problem with using a measure like an IQ score is that if the measure happens to work poorly for one particular person, the consequences can become very unbalanced.

If IQ tests are more effective than other tests, but employers are banned from using IQ tests and have to use the less effective measures instead, their decisions will be more inaccurate. They will hire more poor workers, and more good workers will be unable to get jobs.

But because the measures they do use vary from employer to employer, the effect on the workers will be distributed. If, say, an extra 10% of the good workers can't get jobs, that will manifest itself as different people unfairly unable to get jobs at different times--overall, the 10% will be distributed among the good applicants such that each one finds it somewhat harder to get a job, but eventually gets one after an increased length of jobless time.

If the employers instead use IQ tests and can reduce this to 5%, that's great for them. The problem for the workers is that if IQ tests are poor indicators of performance for 5% of the people, that won't just be 5%, it'll be the same 5% over and over again. The total number of good-worker-man-years lost to the inaccuracy will be less with IQ tests (since IQ tests are more accurate), but the variance in the effect will be greater; instead of many workers finding it somewhat harder to get jobs, there'll be a few workers finding it a lot harder to get jobs.

Having such a variance is a really bad thing.

(Of course I made some simplifying assumptions. If IQ tests were permitted, probably not 100% of the employers would use them, but that would reduce the effect, not eliminate it. Also, note that this is a per-industry problem; if all insurance salesmen got IQ tested and nobody else, any prospective insurance salesman who doesn't do well at IQ tests relative to his intelligence would still find himself chronically unemployed.)

The same, of course, applies to refusing to hire someone based on race, gender, religion, etc.: you can reduce the number of people who steal from you by never hiring blacks, but any black person who isn't a thief would find himself rejected over and over again, rather than a lot more people getting such rejections but each one only getting them occasionally.

(Before you ask, this does also apply to hiring someone based on college education, but there's not much we can do about that, and at least you can decide to go get a college education. It's hard to decide to do better on IQ tests or to not be black.)
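Here's a minimal Python sketch of the variance point; all the numbers (skill and noise distributions, the 90% hiring rate) are made up for illustration, and both scenarios use equally noisy measures so that only the concentration of the error differs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_rounds = 1000, 200
hire_frac = 0.9  # employers hire the top 90% of applicants each round
skill = rng.standard_normal(n_workers)

def jobless_rates(shared_error):
    """Fraction of rounds each worker goes unhired.

    shared_error=True:  one fixed mismeasurement per worker (a single
                        standardized test that is wrong about the same
                        people every time).
    shared_error=False: fresh measurement noise each round (every employer
                        uses its own idiosyncratic heuristic).
    """
    fixed_noise = rng.normal(0, 1.0, n_workers)
    jobless = np.zeros(n_workers)
    for _ in range(n_rounds):
        noise = fixed_noise if shared_error else rng.normal(0, 1.0, n_workers)
        score = skill + noise
        cutoff = np.quantile(score, 1 - hire_frac)
        jobless += score < cutoff
    return jobless / n_rounds

for shared in (False, True):
    r = jobless_rates(shared)
    print(f"shared_error={shared}: mean joblessness {r.mean():.2f}, "
          f"almost-never-hired fraction {np.mean(r > 0.9):.2f}")
```

Both scenarios leave about 10% of worker-rounds unhired, but with the shared measure the misses pile up on the same people: roughly a tenth of the workers are almost never hired, whereas with idiosyncratic noise almost no one is.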

Replies from: Vaniver, Azathoth123
comment by Vaniver · 2014-07-13T00:32:54.855Z · LW(p) · GW(p)

Having such a variance is a really bad thing.

I agree that it's a bad thing that some people are mismeasured, because that's inefficient. I don't buy the argument that the concentration makes it worse on anywhere near the same scale.

It's also worth pointing out that this is a continuum. Dull for a systems analyst is sharp for an accountant, and dull for an accountant is sharp for a salesperson, and dull for a salesperson is sharp for a machinist, and so on. And so if someone with salesperson intelligence doesn't test well, and so only has machinist scores, then they can get a job as a machinist and outperform their peers, and eventually someone may notice they should be in the office instead of on the shop floor.

Replies from: gjm, Jiro
comment by gjm · 2014-07-13T14:57:56.263Z · LW(p) · GW(p)

Perhaps it's a deliberate simplification for clarity, but that last paragraph seems to me to assume a one-dimensional oversimplification of how things are.

Suppose Frieda would be a great salesperson: she is enthusiastic and upbeat, she has a good memory for names and faces, etc. But her test scores aren't good, and she gets hired as a machinist. How much are those good-salesperson characteristics going to help her impress her colleagues on the shop floor? Suppose Fred has similar test scores and also gets hired as a machinist. He is conscientious, has a lot of tolerance for repetitive work, is dextrous and not very prone to repetitive strain injuries. He turns out to be a first-rate machinist. Do you want to send him off to Sales?

Now, it could be that there are people watching the employees on the shop floor and looking out for ones who (even though they may not be great machinists) would do well in sales, accounting, or whatever. But I rather doubt it, and I suspect that a machinist's work-life doesn't give a lot of opportunities to be noticed as a good candidate for a job in anything far removed from the shop floor.

comment by Jiro · 2014-07-13T00:49:45.829Z · LW(p) · GW(p)

The argument works just as well with "the person who is bad at IQ tests gets repeatedly hired for lower-paying jobs" as it does with "the person who is bad at IQ tests gets repeatedly not hired at all". Back when you were permitted to discriminate in hiring based on race, black people didn't have absolutely no jobs--they were just hired for jobs that were generally worse. (And people didn't notice the good blacks should be in the office and promote them at a higher rate to make up for it, either. Rather, they allowed their bias to affect their assessment of how good blacks were at their jobs.)

Edit: There's also a problem that's related to the first but where accuracy isn't involved. Imagine that IQ tests were always accurate for job purposes, 100% of the time: if one person has a higher IQ than another, he has higher performance.

Employers would then start hiring people from the high IQs down. In a limited job market, employers would stop hiring before they reached the bottom. Someone could find himself having 95% of the productivity of someone with a higher IQ score but hired 0% of the time. Again, it's bad to have people who are hired 0% of the time.

You could solve that by introducing noise into the IQ scores, but of course that is equivalent to not allowing IQ testing and forcing employers to use noisy measures of IQ.

(You could also solve that by allowing employers to hire one person at X% of the salary of another, but employers tend not to do that even for the measures they are allowed to use now.)

Replies from: Vaniver
comment by Vaniver · 2014-07-13T01:50:45.525Z · LW(p) · GW(p)

The argument works just as well

I feel like the argument is slicing the problem up and presenting just the worst bits, when we need to consider the net effect on everything. This reminds me of a bioethics debate about testing error and the base rate of rare lethal diseases: if five times as many people have disease A as have disease B, but the two look similar and the tests only offer 80% accuracy,* what should we do if the treatment for A cures those with A but kills those with B, and vice versa?

The 'shut up and multiply' answer is "don't give the tests, just treat everyone for A," as that spares the cost of the tests and 5/6ths of the population lives. But this is inequitable, since everyone with disease B dies. Another approach is to treat everyone for the disease that they test positive for--but now only 4/5ths of the population lives, and we had to pay for the tests! Is it really worth committing 3% of the population to the graveyard to be more equitable? If one focuses on the poor neglected patients with B, then perhaps, but if one considers patients without regard to group membership, definitely not.

*Obviously, the tests need to be dependent for 80% to be the maximal possible accuracy.
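The arithmetic behind those fractions, as a quick sketch (the population size of 600 is arbitrary):

```python
# Disease A is five times as common as disease B.
n_a, n_b = 500, 100
accuracy = 0.8  # test accuracy for both diseases

# Strategy 1: skip the test and treat everyone for A.
# Everyone with A lives; everyone with B dies.
survival_treat_all_a = n_a / (n_a + n_b)  # 5/6 ~= 0.833

# Strategy 2: treat whatever the test says.
# Correctly diagnosed patients live; the 20% misdiagnosed die.
survival_follow_test = accuracy           # 4/5 = 0.800

print(survival_treat_all_a - survival_follow_test)  # ~0.033, the "3%"
```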

And people didn't notice the good blacks should be in the office and promote them at a higher rate to make up for it, either.

I don't know if it's possible to test this, and specifically it's not obvious to me that we need racial bias to explain this effect. That is, widespread cognitive stratification in the economic sphere is relatively new (it started taking off in a big way only around ~1950 in the US), and if promotions were generally inefficient, it's hard to determine how much additional inefficiency race caused.

These comparisons become even harder when there are actual underlying differences in distributions. For example, the difference in mean male and female mathematical ability isn't very large, but the overwhelming majority of Harvard math professors are male. One might make the case that this is sexism at work, but for people with extreme math talent, what matters much more than the difference in mean is the difference in standard deviation, which is significantly higher for men. If you take math test scores from high schoolers, use them as a measure of the population's underlying mathematical ability distribution, and run the numbers, you predict basically the male-female split that Harvard has, which leaves nothing for sexism to explain.
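A back-of-the-envelope version of that calculation, with illustrative numbers rather than the actual test data (equal means, male SD 10% higher, and a cutoff 4 female SDs above the mean standing in for "Harvard math professor"):

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

cutoff = 4.0   # threshold in units of the female SD
male_sd = 1.1  # assumed 10% higher male standard deviation

ratio = upper_tail(cutoff / male_sd) / upper_tail(cutoff)
print(round(ratio, 1))  # ~4.4 men per woman above the cutoff
```

Even a modest variance difference produces a heavily lopsided ratio that far out on the tail, and the ratio grows as the cutoff moves further out.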

Replies from: satt, kalium, gjm, Jiro, bramflakes
comment by satt · 2014-07-14T00:16:43.937Z · LW(p) · GW(p)

That is, widespread cognitive stratification in the economic sphere is relatively new (it started taking off in a big way only around ~1950 in the US),

I'm sceptical. Strenze's meta-analysis of correlations between IQ and socioeconomic status (operationalized as education, occupational level, or individual income) found no substantial increase in those correlations between 1929 & 2003.

Replies from: Vaniver
comment by Vaniver · 2014-07-14T01:01:33.887Z · LW(p) · GW(p)

I'm sceptical. Strenze's meta-analysis of correlations between IQ and socioeconomic status (operationalized as education, occupational level, or individual income) found no substantial increase in those correlations between 1929 & 2003.

That does reduce my confidence, but only slightly. I think the stratification claim is more specific than what they're testing, but their coarse measure gives an upper bound on how strong the stratification effect could be. (Unfortunately, I don't have the time to delve into this issue.)

comment by kalium · 2014-07-13T21:59:51.148Z · LW(p) · GW(p)

The analogy is poor because the point is that temporary unemployment of the kind you get with a noisy IQ measure is much less harmful than long-term unemployment of the kind you might get with a better measure. Whereas with diseases A and B people die either way and it's just a question of who/how many.

Replies from: Vaniver
comment by Vaniver · 2014-07-13T22:15:19.849Z · LW(p) · GW(p)

The analogy is poor because the point is that temporary unemployment of the kind you get with a noisy IQ measure is much less harmful than long-term unemployment of the kind you might get with a better measure.

The analogy is intended to be about reasoning processes, not the decision itself. Complaining that some readily identifiable people are hurt by measure X is a distraction if what you care about is total social welfare: if we can reduce harm by concentrating it, then let us do so!

I also think that, on the object level, replacing "long-term unemployment" with "long-term underemployment" significantly decreases the emotional weight of the argument. I also think that it's not quite right to claim that the current method is equally inefficient everywhere--the people who test well but don't school well, for example, are the readily identifiable class who suffer under the current regime.

Replies from: kalium, Jiro
comment by kalium · 2014-07-14T16:04:20.994Z · LW(p) · GW(p)

Long-term underemployment still tends to erode, or at least not build up, one's skills, reducing that individual's lifetime productivity.

comment by Jiro · 2014-07-14T03:02:55.473Z · LW(p) · GW(p)

Complaining that some readily identifiable people are hurt by measure X is a distraction if what you care about is total social welfare:

While it makes sense to care about total social welfare, the calculation showing that the IQ test is better shows that it is better in terms of job-productivity-years. Job-productivity-years is not social welfare, and you can't just assume that it is.

Furthermore, my complaint is not that the people harmed are readily identifiable, but that it's the same people being constantly harmed. Having one person out of 100 never have a job is worse than having all 100 people not have jobs 1% of the time. Even if I knew who the 100 people were and didn't know who the one person is, that wouldn't change it.

Replies from: Vaniver
comment by Vaniver · 2014-07-15T02:01:17.385Z · LW(p) · GW(p)

that it's the same people being constantly harmed

Sure, but I don't see why you think the current setup is much better on that metric. Someone who consistently flubs interviews is going to be unemployed or underemployed, even though interviews don't seem to communicate much information about job productivity. If it were an actual lottery, I think the argument that the unemployment is spread evenly across the population would hold some weight, but I think employers have errors that are significantly correlated already, and I'm willing to accept an increase in that correlation in exchange for a decrease in the mean error.

comment by gjm · 2014-07-13T15:00:01.533Z · LW(p) · GW(p)

If you take math test scores from high schoolers and use them as a measure of the population's underlying mathematical ability distribution and run the numbers, you predict basically the male-female split that Harvard has, which leaves nothing left for sexism to explain.

I've seen this said before (notably, Larry Summers took a lot of heat for saying it) and it seems like the kind of thing that might well be true, but I've never seen the actual numbers. Have you actually done the calculations?

Replies from: gwern, Vaniver
comment by gwern · 2014-07-13T16:58:24.873Z · LW(p) · GW(p)

If you just want some calculations, look at La Griffe: http://www.lagriffedulion.f2s.com/women_and_minorities_in_science.htm and http://www.lagriffedulion.f2s.com/math.htm / http://www.lagriffedulion.f2s.com/math2.htm

(I haven't checked his numbers or looked for more mainstream authors, but then again, would you expect to find many papers by prominent authors doing the exact calculation you want, especially post-Summers?)

Replies from: gjm
comment by gjm · 2014-07-13T19:27:50.104Z · LW(p) · GW(p)

the exact calculation you want

You say that as if I'm asking for something specific and unusual, but all I'm actually doing is responding to "If you do the calculations you find X" with "That's interesting; have you done those calculations or seen someone else do them, then?".

Replies from: gwern
comment by gwern · 2014-07-13T23:03:28.248Z · LW(p) · GW(p)

The problem is, I want to see someone other than La Griffe do the numbers and I'm not happy relying on him.

I don't know who he is, I haven't gone through his derivations or math, I don't know how accurate his models are, he uses a lot of old sources of data like Project Talent (which may or may not be fine, but I don't have the domain expertise to know), and the one piece of his writing I've really gone through, his 'smart fraction', doesn't seem to hold up too well using updated national IQ data from Lynn (Vaniver and I tried to reproduce his result & update it in some comments on LW).

But the problem is, given the conclusion, I am unlikely ever to see someone from across the ideological spectrum verify that his work is right. (Whatever the accuracy of his own arguments, La Griffe does a good job tearing apart one attempt to prove there is no variance difference, where the woman's arguments show she either doesn't understand the issue or is being dishonest.)

Replies from: Douglas_Knight, gjm
comment by Douglas_Knight · 2014-07-14T22:17:53.035Z · LW(p) · GW(p)

Your third link begins with La Griffe taking numbers from Janet Hyde, who is on the opposite end of the spectrum. The difference is that she downplays the magnitude of the standard deviation difference. Isn't the main concern the source of the numbers, not the calculation? It's just a normal distribution calculation.

(I don't actually believe that intelligence is normally distributed, so I don't believe the argument.)

Replies from: gwern
comment by gwern · 2014-07-14T23:58:16.843Z · LW(p) · GW(p)

It's just a normal distribution calculation. (I don't actually believe that intelligence is normally distributed, so I don't believe the argument.)

If you don't think intelligence is normally distributed, isn't that a problem for how true his results are, and a reason one might want a third party's opinion? And I'm not sure that affects his rank-ordering argument very much; that seems like it might be reasonably insensitive to the exact distribution one might choose.

comment by gjm · 2014-07-14T13:52:54.445Z · LW(p) · GW(p)

OK, I understand. (I share your frustration, would count as "from across the ideological spectrum", and have at least a good subset of the necessary skills, but probably lack the time to try to rectify the deficit myself.)

comment by Vaniver · 2014-07-13T20:12:59.377Z · LW(p) · GW(p)

Have you actually done the calculations?

I got the calculations from La Griffe, linked by gwern in a sibling comment. (For completeness, [1], [2], [3].) I have a vague recollection of checking them myself at some point.

Replies from: gjm
comment by gjm · 2014-07-13T20:26:51.858Z · LW(p) · GW(p)

OK. Thanks.

comment by Jiro · 2014-07-13T02:33:56.539Z · LW(p) · GW(p)

In the IQ example, you can't shut up and multiply because you're supposed to multiply utilons, but the calculation showing that the IQ test is better measures job-productivity-years, not utilons. Most people don't think that utilons are linear with job-productivity-years; for instance, having one person out of 100 permanently unemployed is worse than having every person in the 100 lose 1% of the years they would otherwise have worked. That difference is what makes the IQ test scenario bad.

In the disease example, either you die or you don't, so as long as you assign more utilons to not dying than to dying, your utilon assignment doesn't affect how the two scenarios compare.

(You are of course correct about male professors at Harvard.)

Replies from: Lumifer, Azathoth123
comment by Lumifer · 2014-07-16T16:11:49.063Z · LW(p) · GW(p)

Most people don't think that utilons are linear with job-productivity-years

Are you talking descriptive or normative?

If descriptive, most people don't think in terms of utilons at all, and if normative I would like to see some arguments for the assertion that differences in wealth/income generate negative utility.

Replies from: Jiro
comment by Jiro · 2014-07-16T20:46:35.426Z · LW(p) · GW(p)

Are you talking descriptive or normative?

Most people have beliefs which imply a comparison in which utilons are not linear with job-productivity-years.

Replies from: Lumifer
comment by Lumifer · 2014-07-16T20:49:51.226Z · LW(p) · GW(p)

Most people have beliefs which imply a comparison in which utilons are not linear with job-productivity-years.

Please demonstrate that the beliefs of "most people" involve utilons at all.

Not to mention that under standard interpretation of utility, it's NOT summable across different people.

Replies from: Jiro
comment by Jiro · 2014-07-16T21:42:36.028Z · LW(p) · GW(p)

Please demonstrate that the beliefs of "most people" involve utilons at all.

Huh?

People can have beliefs which imply a comparison of utilons, without those people believing in utilons.

Not to mention that under standard interpretation of utility, it's NOT summable across different people.

I didn't invoke "most people" to suggest that utility can be compared among people. I invoked it because you are presumably coming up with these utilon calculations as a way to formalize preexisting beliefs, in which we need to figure out what those preexisting beliefs are and what they imply.

Replies from: Lumifer
comment by Lumifer · 2014-07-17T01:05:14.052Z · LW(p) · GW(p)

People can have beliefs which imply a comparison of utilons, without those people believing in utilons.

Utilons are not a feature of reality. They are a concept that some people use to think about comparative usefulness of things.

What you are saying is that people who think in terms of utilons can reinterpret other people's value judgments in these terms. But that's just a map which redraws another map.

Utilon-less maps do not "imply" utilons.

because you are presumably coming up with these utilon calculations

I am not coming up with utilon calculations. I am explicitly rejecting the idea that the desirability of complete equality somehow falls out of utilon calculations -- primarily because I don't think you can calculate with utilons in this way.

Replies from: Jiro
comment by Jiro · 2014-07-17T14:24:31.209Z · LW(p) · GW(p)

I am not coming up with utilon calculations. I am explicitly rejecting the idea that the desirability of complete equality somehow falls out of utilon calculations -- primarily because I don't think you can calculate with utilons in this way.

In that case, your argument is with Vaniver, who thinks we can "shut up and multiply" in deciding what is good for a population, which implicitly means that we will be multiplying utilons across members of a population, and that job-productivity-years are linear with utilons. If you cannot aggregate utilons across people, then nothing said here matters.

Replies from: Lumifer
comment by Lumifer · 2014-07-17T14:49:48.966Z · LW(p) · GW(p)

In that case, your argument is with Vaniver

While that may or may not be so, what are your opinions on whether you can calculate with utilons in this way?

Replies from: Jiro
comment by Jiro · 2014-07-17T15:20:40.215Z · LW(p) · GW(p)

I think that if you can't compare utilons among states of aggregations of people, you can't make very basic comparisons of a type that pretty much everyone makes. You have to at least have a partial order which allows at least some comparisons.

Replies from: Lumifer
comment by Lumifer · 2014-07-17T16:38:10.821Z · LW(p) · GW(p)

That sounds like a very... lukewarm assertion. So maybe you can't make very basic comparisons of a type that pretty much everyone makes?

The basic issue is that you need to have a single metric applied to everything you're trying to aggregate and I don't think it works this way with estimates of individual utility. You need to convert utilons into something more universal and that typically ends up being dollars :-/

comment by Azathoth123 · 2014-07-13T18:42:37.899Z · LW(p) · GW(p)

Most people don't think that utilons are linear with job-productivity-years; for instance, having one person out of 100 permanently unemployed is worse than having every person in the 100 lose 1% of the years they would otherwise have worked.

This calculation completely neglects the utility generating function of productivity.

Replies from: Jiro
comment by Jiro · 2014-07-14T00:21:12.018Z · LW(p) · GW(p)

This calculation completely neglects the utility generating function of productivity.

No, it doesn't. That function affects the calculation by increasing the total utilons we attribute to productivity. Unless the increase is infinite, it is still possible for the loss in utility from high variance to outweigh the gain in utility from increased productivity.

Replies from: Azathoth123, Azathoth123, Jiro
comment by Azathoth123 · 2014-07-17T04:44:25.430Z · LW(p) · GW(p)

Unless the increase is infinite, it is still possible for the loss in utility from high variance to outweigh the gain in utility from increased productivity.

This only works if the main contribution to utility from working consists of the personal fulfillment of the worker rather than the benefits generated by the work.

Replies from: Jiro
comment by Jiro · 2014-07-17T14:32:44.293Z · LW(p) · GW(p)

Only in the sense that any measure of utility that involves the condition of a person consists of their personal fulfillment.

comment by Azathoth123 · 2014-07-16T04:19:54.714Z · LW(p) · GW(p)

Your argument essentially amounts to arguing that we should give people with low skills make-work jobs in order to increase utility.

Replies from: Jiro
comment by Jiro · 2014-07-16T15:38:03.808Z · LW(p) · GW(p)

"Make-work" carries the connotation that the productivity of the worker is less valuable than his pay. "Less valuable than optimum" is not the same as "less valuable than his pay". Furthermore, "low skills" carries the inapt connotation "very low" (and low-testing doesn't necessarily imply low skills anyway.)

The problem is that someone who is either marginally less productive, or marginally worse at testing, can find his ability to get a job decreased by an amount all out of proportion to how much worse he is, if all employers use the same measure. Ensuring that such people can get jobs isn't make-work.

comment by Jiro · 2014-07-15T01:38:38.607Z · LW(p) · GW(p)

Is there some reason why most of my posts in this thread are modded down, other than disagreement?

comment by bramflakes · 2014-07-13T11:23:04.202Z · LW(p) · GW(p)

Kind of offtopic, but regarding male-female intelligence differences: in Britain at least, girls seem to consistently outperform boys in school math exams, which would imply there is a mean difference in the opposite direction.

Replies from: gwern
comment by gwern · 2014-07-13T17:11:52.022Z · LW(p) · GW(p)

It might, but there are subtleties you have to take into account. For example, ceiling effects will hide the claimed effect, and if there's not enough floor, can even produce a lower mean.

Imagine you have a test of 10 4-multiple-choice questions, male mean = female mean but males have higher variance, and the average student's score on the test would be 8, so lots of students score a perfect 10 but you would have to be retarded to score <=2. What will the means by gender look like under this scenario?

Since the male variance is higher, there will be several times more near-retarded boys than girls scoring in the lower ranks like 3-4; there will be nearly as many normal boys as normal girls with normal scores like 7-9; and the rest will score 10--but the many more boys than girls who are far out on the tail (are geniuses at maths) will also score 10 and look like fairly ordinary types. So the dim boys drag down the mean of all boys, the ordinary boys by definition match their girl counterparts, while the geniuses can't show their stuff and might as well not have been tested at all; and so on net it looks like the boys perform worse than the girls, even though they actually are the same on average and have a higher variance. This is because I invented a test which is able to pick up on the differences among the low-performers (by devoting 7 questions to them) but not among the high-performers (just 2 questions), and this favors the group with the least representation in both tails (females).
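A quick Monte Carlo of that scenario, with made-up numbers (equal mean ability, 15% higher male SD, scores pitched so the average student gets 8 out of 10):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_exam_score(ability_sd, n=1_000_000):
    ability = rng.normal(0.0, ability_sd, n)
    # 10-question test: scores are clipped at the ceiling (10) and floor (0).
    scores = np.clip(np.round(8 + 1.5 * ability), 0, 10)
    return scores.mean()

print(mean_exam_score(1.00))  # "girls": lower variance
print(mean_exam_score(1.15))  # "boys": same mean ability, lower mean score
```

Because the ceiling sits much closer to the average than the floor does, the higher-variance group has more of its upper tail censored at 10 while its extra low scorers count in full, so its mean score comes out lower despite identical mean ability.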

And most real-world exams are uninterested in making very fine gradations among the top 1% of students, as you need to if you want to answer questions like 'how many female Fields Medalists--top mathematicians in the entire world--should there be?', because with non-adaptive tests you would have to force the 99% of ordinary people to slog through endless reams of questions they have no idea about. (American schools have no incentive to look, because they are not evaluated under No Child Left Behind on how many world-class students pass through their halls; they're evaluated on the average student, and especially the minorities.)

Other issues include to what extent those exams are based on class grades (the usual situation is boys do worse on grades, better on exams, because grades measure how much you can ingratiate yourself with your teacher by things like sitting still and doing even the most tedious moronic homework each and every time) and whether the exams are administered after puberty, when the increased variance is expected to manifest itself.

Replies from: bramflakes
comment by bramflakes · 2014-07-14T00:19:36.677Z · LW(p) · GW(p)

Thanks for the explanation. The skill ceiling/floor argument makes sense for GCSEs, but I'm not sure how well it works for A-Levels. Boys only outperform girls at the very very top end, and despite the complaints that the ceiling isn't high enough, I don't think it can account for all the discrepancy (he said, remembering his bad stats intuition).

Maybe it's higher male variance and higher female mean?

Class grades also count for zilch in both; it was all exams last time I checked.

Replies from: Douglas_Knight, gwern
comment by Douglas_Knight · 2014-07-15T21:41:40.987Z · LW(p) · GW(p)

Percent passing is not very informative because those sitting the test have been preselected. According to this spreadsheet, 50% more boys take Maths and more than twice as many boys take Further Maths. Also, it claims that the A* rate is twice as high for boys, at both levels, though the A rate is the same (which is weird).

(the spreadsheet has several sheets, but the link should go to the correct one - gender)

comment by gwern · 2014-07-14T00:53:28.804Z · LW(p) · GW(p)

Boys only outperform girls at the very very top end,

I'm not sure I understand your link. If 43.7% of people score an A and that's the highest score, then it's definitely not 'very very top end' because that means it has almost zero information about anyone who is above-average (much less the extremes like 1 in 10k). And the Criticism section seems to accuse A-levels of a severe ceiling effect:

It has been suggested by The Department for Education that the high proportion of candidates who obtain grade A makes it difficult for universities to distinguish between the most able candidates.

Incidentally, notice the lowest grade: almost twice as many males as females.

Replies from: bramflakes
comment by bramflakes · 2014-07-14T11:38:19.663Z · LW(p) · GW(p)

I'm talking about Further Maths. The A grade for that is the only one with more boys than girls. It's much harder, and only 8,000 people take it compared to 60,000 for the standard Mathematics exam.

Then again, the ceiling still only looks to be the top 6-7% of the people taking math A-Levels. I think you're right.

comment by Azathoth123 · 2014-07-13T18:39:47.903Z · LW(p) · GW(p)

Before you ask, this does also apply to hiring someone based on college education, but there's not much we can do about that,

Yes there is: we can pass laws making it illegal to hire on the basis of college degrees (possibly with an exemption for degrees directly relevant to the job).

and at least you can decide to go get a college education.

You can't decide to get accepted by an elite college.

It's hard to decide to do better on IQ tests or to not be black.

Another way to phrase this statement is that there is less motivation to engage in costly signaling. Thus there is less deadweight signaling loss and hence more resources available for utility production.

Replies from: Jiro
comment by Jiro · 2014-07-14T00:38:04.169Z · LW(p) · GW(p)

You can't decide to get accepted by an elite college.

I was referring to discrimination based on whether you have a college education, not discrimination based on which college education you have.

Discrimination based on eliteness of college doesn't raise the same sort of problems because employers can't hire just elite college graduates and nobody else--there aren't enough of them. After the employers hire all the elite college graduates, the remaining ones go to colleges which are hard to rank against each other (unlike IQ scores, which are numbers and are easy to compare). The employers will in effect select randomly from that remaining pool, so it won't lead to people in that pool becoming permanently unemployed, or even to just becoming permanently underemployed by large degrees.

Another way to phrase this statement is that there is less motivation to engage in costly signaling.

If I had to choose between black people getting the kind of jobs they got when discrimination against them was permitted, and signalling, I'd decide the signalling is less costly, and so would pretty much everyone else.

Replies from: Azathoth123
comment by Azathoth123 · 2014-07-15T02:59:55.472Z · LW(p) · GW(p)

If I had to choose between black people getting the kind of jobs they got when discrimination against them was permitted, and signalling, I'd decide the signalling is less costly,

You do realize the signaling, at least in the US, currently involves taking out student loans under terms that border on debt peonage.

Replies from: Jiro
comment by Jiro · 2014-07-15T04:46:43.558Z · LW(p) · GW(p)

There was a long period of time between when discrimination against blacks in employment was forbidden, and college prices rose to excessive levels. I doubt that signalling alone can explain the increase in college costs, or that letting employers discriminate based on race or IQ would reduce them. I'd blame it more on other government interference (such as subsidizing loans and making it essentially impossible to discharge loans in bankruptcy).

Furthermore, the situation of black people before the civil rights movement was bad enough that I'd be hard pressed to decide that even being massively in debt for a college loan is worse.

comment by Lumifer · 2014-07-10T16:12:04.441Z · LW(p) · GW(p)

Now an important question is how one could rein in such signalling arms races.

Who or what is that "one"?

That's not an idle question. If you assume, for the purpose of this exercise, that the One is an omnipotent dictator, then the signaling issue in education is a silly place to start molding reality. And if you assume the context of a modern Western society, where you can't just magically tinker with people's minds and motivations, then the first question is what capabilities you have to affect the issue.

For comparison, consider the signaling games around the selection of a mate. Are they "socially wasteful"? Would you like to "rein in" these games?

comment by Douglas_Knight · 2014-07-12T21:09:49.128Z · LW(p) · GW(p)

Then employers would fight over the applicants that have this other feature, rather than those with the university education.

That is only true if the new signal partially screens off education. If the employers have the choice of using the joint signal, what matters is the additional value of the education signal beyond the new signal. If you found a signal completely unrelated to education, it would not reduce the value of the education signal. For example, if employers value IQ, diligence, and honesty, but only the first two contribute to education, then an honesty test would not reduce the value of education, while an IQ test would reduce the value of education to just the value of diligence.
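A small simulation of that screening-off logic (the linear model and the equal weights are invented for illustration): education here is built from IQ and diligence only, and we ask how much predictive value education adds once some other signal is already in hand.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

iq, diligence, honesty = rng.standard_normal((3, n))
productivity = iq + diligence + honesty
education = iq + diligence + rng.standard_normal(n)  # honesty plays no role

def r_squared(*features):
    """R^2 of a least-squares fit of productivity on the given features."""
    X = np.column_stack([np.ones(n), *features])
    beta, *_ = np.linalg.lstsq(X, productivity, rcond=None)
    residuals = productivity - X @ beta
    return 1 - residuals.var() / productivity.var()

# Value education adds on top of an honesty test vs. on top of an IQ test:
print(r_squared(honesty, education) - r_squared(honesty))  # ~0.44: still large
print(r_squared(iq, education) - r_squared(iq))            # ~0.17: only diligence left
```

The honesty test leaves education's value essentially intact, while the IQ test cuts it down to the diligence component, matching the comment's example.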

comment by Jiro · 2014-07-10T15:46:16.034Z · LW(p) · GW(p)

Using some other feature than education as a signal would subject employers to claims of discrimination, so they're not going to do it unless we drastically change our anti-discrimination laws.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-07-10T16:00:38.766Z · LW(p) · GW(p)

I'm sure they already do use, e.g., work experience as a signal.

Replies from: Jiro
comment by Jiro · 2014-07-10T18:17:18.758Z · LW(p) · GW(p)

It didn't sound like that's the type of signal you were talking about. (Of course, "using some other feature" really means "using some other feature of the kind you're talking about"). It's unusual as a signal because it can also, rightly or wrongly, be justified as a bona fide qualification. In contrast, signals such as living in a high income area or mowing lawns as a kid would probably not pass the test and would be readily considered discriminatory.

Another problem with some of the things that employers can use as signals is that using them as signals is overall bad for society. For instance, employers usually want to hire people who already have jobs, because of what having a job signals. But over a whole society, this leads to the existence of a chronically unemployed underclass, which would not happen if the unemployment was evenly distributed.
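(Here's a toy simulation of that last mechanism; all the parameters--population size, separation rate, spell cutoff--are invented purely for illustration.)

```python
# Toy model: 1000 workers, 900 jobs, 5% of jobs end each round. Under
# "signal hiring", employers fill openings from the applicants with the
# shortest unemployment spells; under random hiring, they pick blindly.
import random

def simulate(signal_hiring, n_people=1000, n_jobs=900, n_rounds=200, seed=0):
    rng = random.Random(seed)
    employed = [i < n_jobs for i in range(n_people)]
    spell = [0] * n_people                           # current unemployment spell
    for _ in range(n_rounds):
        for i in range(n_people):
            if employed[i] and rng.random() < 0.05:  # job ends at random
                employed[i] = False
        for i in range(n_people):
            if not employed[i]:
                spell[i] += 1
        candidates = [i for i in range(n_people) if not employed[i]]
        rng.shuffle(candidates)
        if signal_hiring:
            candidates.sort(key=lambda i: spell[i])  # freshest applicants first
        openings = n_jobs - sum(employed)
        for i in candidates[:openings]:
            employed[i], spell[i] = True, 0
    return sum(s > 10 for s in spell)                # long-term unemployed right now

print("random hiring, long-term unemployed:", simulate(False))
print("signal hiring, long-term unemployed:", simulate(True))
```

Total unemployment is the same in both runs, but under signal hiring it concentrates on a fixed underclass whose spells only grow.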

comment by DanielDeRossi · 2014-07-08T22:49:19.212Z · LW(p) · GW(p)

A very useful site: readlists.com. You can compile lists of articles and share them with your friends, or convert them to epub/mobi. I used it for sequences I wanted to read or share.

comment by polymathwannabe · 2014-07-13T18:05:43.157Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Receptor_activated_solely_by_a_synthetic_ligand

I just learned there was such a thing as Designer Receptors Exclusively Activated by Designer Drugs (DREADD). I think this is huge. Do you people know the current status of this field?

comment by Larks · 2014-07-12T02:03:18.253Z · LW(p) · GW(p)

Quick calibration test for those who like to have opinions on the US: of the standard US racial groupings (white, black, Hispanic, Asian) and the overall population, which do you expect to have the highest Gini ratio for income? Why?

Here is the answer, according to the US Fed

Please use rot13 for spoilers.
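(For anyone who'd rather compute than guess, the Gini coefficient has a short closed form; here's a minimal sketch with toy incomes.)

```python
# Gini coefficient from sorted incomes: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n,
# with incomes sorted ascending and i running from 1 to n.
import numpy as np

def gini(incomes):
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

print(gini([1, 1, 1, 1]))    # 0.0  -- perfect equality
print(gini([0, 0, 0, 100]))  # 0.75 -- extreme concentration
```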

Replies from: Douglas_Knight, bramflakes
comment by Douglas_Knight · 2014-07-12T20:53:20.900Z · LW(p) · GW(p)

I found that graph unreadable, so I made a new one using ggplot.

The problem was that the colors were not distinguishable and the legend was hard to read. This was compounded by the use of two colors for the same race, but even when ggplot chose 7 colors, they were better than the Fed's 7 colors. Actually, the 4 colors before the break were distinguishable, and it was possible to identify them using the hover legend (vs. the main legend), but I didn't figure that out until I was done.
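(The ggplot version isn't reproduced here, but the analogous move in Python terms is just reaching for a qualitative palette; the data below is fabricated, and only the color handling is the point.)

```python
# Seven series with matplotlib's 'tab10' qualitative colormap, which is
# designed so adjacent colors stay distinguishable -- the property the
# Fed's palette lacked.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(1990, 2014)

fig, ax = plt.subplots()
for i in range(7):
    series = 0.4 + 0.02 * rng.standard_normal(len(years)).cumsum()
    ax.plot(years, series, label=f"group {i + 1}", color=plt.cm.tab10(i))
ax.set_xlabel("year")
ax.set_ylabel("Gini ratio")
ax.legend(loc="upper left", fontsize="small")
plt.show()
```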

Replies from: Larks
comment by Larks · 2014-07-12T23:54:20.146Z · LW(p) · GW(p)

That's a much better graph, thanks!

comment by bramflakes · 2014-07-12T13:03:44.698Z · LW(p) · GW(p)

Pbafvqrevat vzzvtengvba geraqf naq rlronyyvat jbeyq zncf bs tvav, V'q fnl Uvfcnavpf sbe gur sbyybjvat ernfbaf:

  • Gur oneevre gb ragel vf zhpu ybjre sbe Yngva Nzrevpn, fb vzzvtenagf jvyy or yrff fryrpgrq, fb gurl'yy cerfreir zber bs gur vardhnyvgl bs gurve ubzr pbhagevrf.

  • Yngvanzrevpna pbhagevrf arneyl nyy fpber uvture guna gur HFN.

  • Rhebcrna pbhagevrf arneyl nyy fpber ybjre (jvgu gur nffhzcgvba orvat gung juvgr Nzrevpnaf orunir zber-be-yrff fvzvyneyl gb gurve Byq Jbeyq pbhfvaf).

  • Gur gbc 5 zbfg pbzzba onpxtebhaqf bs Nfvna Nzrevpnaf ner Puvarfr, Vaqvna, Svyvcvab, Ivrganzrfr naq Xberna. Bs gubfr 5 pbhagevrf, bayl Puvan unf n uvture TVAV pbrssvpvrag guna gur HFN (ntnva jvgu gur nobir nffhzcgvba, naq gur nqqrq rssrpg bs fgebat fryrpgvba jvyy erqhpr inevngvba fgvyy shegure).

  • V qba'g xabj zhpu nobhg Nsevpna Nzrevpnaf, ohg V'q jntre gung zbfg ner pyhfgrerq nebhaq gur obggbz bs gur vapbzr qvfgevohgvba naq guhf unir n ybjre nzbhag bs jvguva-enpr vardhnyvgl.

  • "Uvfcnavp" nf n pngrtbel vf fbzrguvat bs n yrtny svpgvba - vg vapyhqrf crbcyr bs pbzcyrgr Fcnavfu naprfgel, zhynggbf, Nzrevaqvnaf, oynpx Nsevpnaf naq rira Nfvna vzzvtenagf. Sebz gung nybar V'q rkcrpg znffvir inevngvba va bhgpbzrf.

EDIT:

Well, that was a bit surprising.

comment by DanielDeRossi · 2014-07-09T11:44:20.523Z · LW(p) · GW(p)

http://www.theguardian.com/lifeandstyle/2014/jul/05/this-column-will-change-your-life-precrastination

An interesting article on "precrastination". Basically, some people spend more time and effort completing tasks right away, when it would be more efficient to complete them later. Also, this writer reads LessWrong and refers to one of the posts on akrasia in his other articles.

comment by Markas · 2014-07-08T16:18:13.293Z · LW(p) · GW(p)

Coursera just started a course called Experimentation for Improvement. Is anyone interested in taking it together?

Replies from: TylerJay
comment by TylerJay · 2014-07-09T06:18:14.743Z · LW(p) · GW(p)

That actually looks interesting. I've been thinking about reading How to Measure Anything, and this looks similar. I can't promise I'll finish, but I'll at least audit the first week or two. PM me and we can talk about it.

comment by intrepidadventurer · 2014-07-09T22:05:36.979Z · LW(p) · GW(p)

I have been thinking about the argument for the singularity in general: the proposition that a sufficiently advanced intellect can (and will) change the world by introducing technology that is literally beyond comprehension. I guess my question is this: is there some level of intelligence at which there are no possibilities it can't imagine, even if it can't actually go and do them?

Are humans past that mark? We can imagine things literally all the way past what is physically possible and/or constrained by realistic energy levels.

Replies from: MrMind
comment by MrMind · 2014-07-10T08:44:24.961Z · LW(p) · GW(p)

A difficult question to answer, because many elements are not precisely defined. But let's say for a moment that 'intellect' means 'universal Turing machine' and 'thinking' means 'processing a program'.
There are of course limits for any finite UTM: on one side, you cannot 'probe' thoughts too deeply because of constraints on memory/energy/time; on the other, there are thoughts that are simply too complex. So no, an AI could never imagine anything that is too complex or too expensive for it to think.
For us humans the situation is even worse, because our brains are not computers, and we are very capable of imagining incoherent things.
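(A toy sketch of the resource-bound point; the step budget and the two "thoughts" are arbitrary stand-ins.)

```python
# A bounded "thinker": it advances a thought one step at a time and gives
# up when the step budget runs out, no matter what the thought would have
# returned.
def bounded_run(thought, steps_allowed):
    gen = thought()
    for _ in range(steps_allowed):
        try:
            next(gen)
        except StopIteration as done:
            return done.value        # the thought fit within the budget
    return None                      # too expensive for this machine to think

def cheap_thought():
    total = 0
    for i in range(10):
        total += i
        yield
    return total

def expensive_thought():
    total = 0
    for i in range(10**9):           # needs a billion steps
        total += i
        yield
    return total

print(bounded_run(cheap_thought, 1000))      # 45
print(bounded_run(expensive_thought, 1000))  # None
```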

comment by khafra · 2014-07-08T14:23:24.256Z · LW(p) · GW(p)

Has anybody written up a primer on "what if utility is lexically ordered, or otherwise not quite measurable in real numbers"? Especially in regard to dust specks?
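(To make "lexically ordered" concrete: a minimal sketch using Python's built-in tuple comparison, with entirely made-up numbers; it illustrates the ordering, not anybody's worked-out theory.)

```python
# Utility as a pair (torture_term, speck_term), compared lexicographically:
# the speck term only ever matters when the torture terms tie.
BIG = 10**100  # a finite stand-in for "an unimaginably large number of specks"

many_specks_no_torture = (0, -BIG)   # no torture, BIG people dust-specked
one_torture_no_specks  = (-1, 0)     # one person tortured, no specks

# Lexical ordering: any number of specks beats any torture.
print(many_specks_no_torture > one_torture_no_specks)  # True

# Real-valued (additive) utility disagrees once the specks add up:
print(-BIG > -1_000_000)  # False -- on the real line, the specks are worse
```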

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-07-13T12:27:07.520Z · LW(p) · GW(p)

Not sure if this is what you're looking for, but Toby Ord has written a criticism of several forms of negative utilitarianism, including what he calls Lexical Threshold NU (which he attributes to David Pearce), which sounds like what you might be thinking of.

comment by protest_boy · 2014-07-22T01:05:52.506Z · LW(p) · GW(p)

So there's a MIRIxMountain View, but is it redundant to have a MIRIxEastBay/SF? It seems like the MIRIx label can be bestowed upon even low-key research efforts, and considering the hacker culture/rationality communities there, there may be interest in this.

comment by David_Gerard · 2014-07-14T11:17:14.742Z · LW(p) · GW(p)

New Open Thread

(yeah, only just realised I got the end date wrong on this one, oh well)

comment by Ixiel · 2014-07-13T12:48:23.283Z · LW(p) · GW(p)

I think I remember an app discussed here in which one guessed the probability of things, then logged whether they actually happened, and kept track of one's record. Anyone know what it's called?

Replies from: gjm
comment by gjm · 2014-07-13T14:29:07.934Z · LW(p) · GW(p)

Are you thinking of PredictionBook? Not an app, but otherwise fits your description and has been mentioned many times on LW.

Replies from: Ixiel
comment by Ixiel · 2014-07-14T11:32:38.294Z · LW(p) · GW(p)

Yup that's it. Thanks!

comment by EphemeralNight · 2014-07-11T20:37:15.165Z · LW(p) · GW(p)

Question for anyone who knows:

I've been getting "cannot connect to the real..." error messages in Google Chrome when trying to access several websites, which I gather has something to do with invalid certificates. I would like to know if going to Settings > Advanced > Manage Certificates and simply Removing everything under every tab will a) fix the problem and b) not break anything else. If not, then I would like to know what will.

comment by pinyaka · 2014-07-10T02:52:21.782Z · LW(p) · GW(p)

If anyone uses org-mode in emacs to track their todo list, org-gamify is a way to add some gamification to your org. I haven't used it, but there's a decent introduction on how to use it on the git repo page.

comment by [deleted] · 2014-07-09T12:38:04.893Z · LW(p) · GW(p)

Now that I'm on the job market, I'm considering changing my gmail address, but I'm having trouble deciding between the alternatives.

My current address (created in '05 or so) consists of two words. This has the advantage of being easy to say, but the second word is a bit long and I feel slightly silly writing it on a CV.

On the other hand, it's 2014 and almost every reasonable gmail address has already been taken. The exceptions in my case are a slightly l33t version of my name, a version of my name with vowels removed, and my name followed by a random number.

So, LW, which of the following do you feel is the most useful email address?

[pollid:729]

I don't use G+ anymore, so I'm ignoring various social costs associated to changing my Google account. If you think of a better alternative, go ahead and list it in the comments.

Replies from: Adele_L, hydkyll, TylerJay
comment by Adele_L · 2014-07-09T21:37:53.287Z · LW(p) · GW(p)

A good alternative might be to buy your own domain name (only around $20 a year), and put up a small personal site. You can then have your email address get redirected to your normal gmail one (and with gmail, it's easy to have it send messages from your new address also). This may also look more impressive on a CV since it signals some level of technical competence. Of course, you still have to choose a domain name, but it gives you a bit more flexibility.

For example, I have the address adele@.org which redirects to my gmail account I've used for years.

Replies from: gwern, peter_hurford, David_Gerard
comment by gwern · 2014-07-09T21:47:43.146Z · LW(p) · GW(p)

Agree with this one. It's not hard to set up, for example, an account on NearlyFreeSpeech.Net, deposit $15, register a domain name through their system & forward emails to your existing Gmail account.

comment by Peter Wildeford (peter_hurford) · 2014-07-10T01:33:09.491Z · LW(p) · GW(p)

I do this as well. peter@peterhurford.com. Also gives me some nerd cred as "someone with their own personal website."

comment by David_Gerard · 2014-07-10T20:40:49.749Z · LW(p) · GW(p)

Varies. I still use dgerard@gmail.com rather than (anything)@davidgerard.co.uk, because if I say the first on the phone it's ridiculously more likely they'll get it right without me spelling more than the username.

comment by hydkyll · 2014-07-09T14:09:45.365Z · LW(p) · GW(p)

Maybe it's just me, but I also feel silly writing a gmail address on a CV. May I suggest MyKolab instead? It's a professional (not too expensive), secure, open-source e-mail service. Your address could be john.smith@swisscollab.ch.

Replies from: None
comment by [deleted] · 2014-07-09T14:28:03.230Z · LW(p) · GW(p)

Not clear what the benefits of Swiss hosting are. I'm not sure I want to signal that much paranoia, either.

comment by TylerJay · 2014-07-09T21:17:00.260Z · LW(p) · GW(p)

What's your middle initial? Is J + middle initial + smith@gmail available? That's how my gmail address is formatted. Or maybe john[MI]smith@gmail?

Replies from: None, Nornagest
comment by [deleted] · 2014-07-09T21:29:32.871Z · LW(p) · GW(p)

All permutations of my name with middle initial are also taken.

Also, my name isn't actually John Smith.

comment by Nornagest · 2014-07-09T21:28:19.895Z · LW(p) · GW(p)

There are about three million people with the name Smith in the US. Figure another million or so in other English-speaking countries; assume a quarter of all of them have Gmail accounts (which might be an underestimate; there are more than 400 million Gmail users), and you're left with a million Smiths using Gmail. How many of them tried that username format? I have no idea, but since there are only 676 possible strings of two Roman letters, it'd have to be fewer than one in a thousand and change. (Gmail addresses are not case sensitive.) Not holding my breath, in other words.
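(The back-of-envelope version, using only the rough numbers above:)

```python
# All inputs are the guesses from the paragraph above, not real data.
smiths_on_gmail = (3_000_000 + 1_000_000) * 0.25   # about a million
two_letter_strings = 26 ** 2                        # 676 middle-initial pairs

print(smiths_on_gmail / two_letter_strings)  # ~1479 Smiths per j<XY>smith pattern
print(two_letter_strings / smiths_on_gmail)  # ~0.000676: "one in a thousand and change"
```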

You might have better luck playing with "." breaks between words. I've got the Gmail account firstname.lastname@gmail.com, although I'm blessed with an uncommon surname.

Replies from: philh
comment by philh · 2014-07-10T12:43:04.857Z · LW(p) · GW(p)

Gmail actually ignores the dots. first.name.last.name@gmail.com and firstnamelastname@gmail.com are treated the same.
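(If you ever need to compare addresses the way Gmail does, here's a tiny sketch -- it only handles the dot rule for gmail.com; a thorough version would also fold in googlemail.com and "+" suffixes.)

```python
# Normalize a Gmail address: dots in the local part are ignored by Gmail,
# so strip them before comparing.
def normalize_gmail(address):
    local, _, domain = address.lower().partition("@")
    if domain == "gmail.com":
        local = local.replace(".", "")
    return f"{local}@{domain}"

assert normalize_gmail("first.name.last.name@gmail.com") == \
       normalize_gmail("firstnamelastname@gmail.com")
```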

comment by Zaine · 2014-07-08T03:56:24.773Z · LW(p) · GW(p)

I have a friend on business in San Francisco with some free time Tuesday afternoon. Do any of you have a recommendation of how they should spend that time, outside of general suggestions as might be listed here?

comment by [deleted] · 2014-07-08T01:06:11.251Z · LW(p) · GW(p)

I'm trying to run a calibration training/potluck in Portland next Saturday for myself and any lesswrongians who'd like to join. Any lessons learned from people who have done calibration training themselves or run a calibration training?