Open Thread, July 1-15, 2013

post by Vaniver · 2013-07-01T17:10:10.892Z · LW · GW · Legacy · 345 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

comment by sixes_and_sevens · 2013-07-02T01:20:17.262Z · LW(p) · GW(p)

I recently remarked that the phrase "that doesn't seem obvious to me" is good at getting people to reassess their stated beliefs without antagonising them into a defensive position, and as such it was on my list of "magic phrases". More recently I've been using "can you give a specific example?" for the same purpose.

What expressions or turns of phrase do you find particularly useful in encouraging others, or yourself, to think to a higher standard?

Replies from: shminux, RomeoStevens, gwillen, Qiaochu_Yuan
comment by shminux · 2013-07-02T08:15:27.784Z · LW(p) · GW(p)

This is not quite what you want, but if you are a grad student giving a talk and a senior person prefaces her question to you with "I am confused about...", you are likely talking nonsense and they are too polite to tell you straight up.

Replies from: David_Gerard, FiftyTwo
comment by David_Gerard · 2013-07-02T14:12:21.791Z · LW(p) · GW(p)

Which reminds me of my born-again Christian mother - evangelicals bend over backwards to avoid dissing each other, so if you call someone "interesting" in a certain tone of voice it means "dangerous lunatic" and people take due warning. (May vary, this is in Perth, Australia.)

comment by FiftyTwo · 2013-07-02T16:19:22.078Z · LW(p) · GW(p)

Alternatively, "I may have misunderstood, but surely..." is a good way to couch an objection.

comment by RomeoStevens · 2013-07-02T22:58:27.339Z · LW(p) · GW(p)

Depersonalizing the argument is something I've had great success with. Steelmanning someone's argument directly is insulting, but steelmanning it by stating that it is similar to the position of high-status person X, who is opposed by the viewpoint of high-status person Y, allows you to discuss otherwise inflammatory ideas dispassionately.

Replies from: satt
comment by satt · 2013-07-04T21:38:38.912Z · LW(p) · GW(p)

I've experimented with repersonalizing arguments: instead of challenging someone else for holding a belief, I direct the challenge at myself by putting their argument in my own mouth and saying what contrary evidence prevents me from believing it.

Someone else: You know that global warming business is a load of rubbish, right? Isn't real.

Me: That's not obvious to me. There are records of global average surface temperatures going back 150 years or so.

Someone else: Well, they can't know what the temperature was like before then.

Me: I'm sometimes inclined to think so, but then I'd have to contend with the variety of records based on tree rings, ice cores, and boreholes which go back centuries or millennia.

comment by gwillen · 2013-07-02T04:09:25.571Z · LW(p) · GW(p)

I frequently use "Hmm, it's not entirely clear to me that [X]...", which seems very directly analogous to yours.

Replies from: mindspillage
comment by mindspillage · 2013-07-04T04:09:08.084Z · LW(p) · GW(p)

I like this, and also "I don't quite understand why [X]", which puts them in the pleasant position of explaining to me from a position of superiority--or sometimes realizing that they can't.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-06T10:55:58.324Z · LW(p) · GW(p)

I guess this only works on people who feel friendly. Making them also feel superior... now they owe you a decent explanation.

A hostile person could find another way to feel superior, without the explanation. For example, they could say: "Just use Google to educate yourself, dummy!"

comment by Qiaochu_Yuan · 2013-07-02T01:32:00.861Z · LW(p) · GW(p)

"Outside view, ..."

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-02T01:57:49.423Z · LW(p) · GW(p)

That one seems much more effective after one has absorbed certain memes. In contrast, the ones given by sixes_and_sevens seem to work in a more general setting.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-02T04:59:58.694Z · LW(p) · GW(p)

Yes, that's one I use on myself and not on others (unless they know what it means).

comment by bentarm · 2013-07-01T18:46:21.569Z · LW(p) · GW(p)

So, everyone agrees that commuting is terrible for the happiness of the commuter. One thing I've struggled to find much evidence about is how much the method of commute matters. If I get to commute to work in a chauffeur driven limo, is that better than driving myself? What if I live a 10 minute drive/45 minute walk from work, am I better off walking? How does public transport compare to driving?

I suspect the majority of these studies are done in US cities, so mostly cover people who drive to work (with maybe a minority who use transit). I've come across a couple of articles which suggest cycling > driving here and conflicting views on whether driving > public transit here but they're just individual studies - I was wondering if there's much more known about this, and figured that if there is, someone here probably knows it. If no one does, I might get round to a more thorough perusal of the literature myself now I've publicly announced that the subject interests me.

Replies from: ChristianKl, niceguyanon, Camaragon, aelephant
comment by ChristianKl · 2013-07-02T13:19:48.267Z · LW(p) · GW(p)

I think it entirely depends on what you do during your commute.

A lot of drivers who drive during rush hour feel stress because they get annoyed at the behavior of other drivers. That's terrible for the happiness of the commuter.

Traveling via public transport also gives you plenty of opportunities to get upset over other people. It provides you the opportunity to get upset if the bus comes a bit late.

If you travel via public transport you can do tasks like reading a book that you can't do while driving a car or cycling.

comment by niceguyanon · 2013-07-02T06:13:59.807Z · LW(p) · GW(p)

Does anyone else experience the phenomenon of perceiving the duration of a commute to be shorter when the distance is shorter? For example, it feels like it takes less time, or is more enjoyable, to walk 3/4 mile in 15 minutes than to travel a few miles by subway in 15 minutes. I think it's because being close in proximity makes me feel like "Hey, I'm basically there already", whereas traveling a few miles makes me think "I'm not even in the same neighborhood yet", even though both take me the same amount of time.

Replies from: Viliam_Bur, tut, army1987
comment by Viliam_Bur · 2013-07-02T07:00:43.437Z · LW(p) · GW(p)

For me an important aspect is the feeling of control. 15 minutes of walking is more pleasant than 10 minutes of waiting for a bus and 5 minutes of travelling by bus.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-02T07:04:39.874Z · LW(p) · GW(p)

Every now and then, I decide that I don't have the patience to wait 10 minutes for a bus that would take me to where I'm going in 10 minutes. So I walk, which takes me an hour.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-12T17:35:20.027Z · LW(p) · GW(p)

I had the opposite effect recently - I thought that I'd save time by waiting for the bus, but it turns out that walking gets me to work from the train about 12 minutes sooner. Coming back, I don't have a ridiculous wait, so I still take the bus.

I could do even better if I got some wheels of some sort involved. Maybe it's time to take up skateboarding. Scooter? Bike seems like it would be too cumbersome, even if I can get one that folds up.

Replies from: spqr0a1
comment by spqr0a1 · 2013-07-13T16:02:40.854Z · LW(p) · GW(p)

If the commute is mostly flat, consider Freeline skates. They take up much less space than any of the mentioned wheels; the technique is different from skateboarding but the learning curve isn't any worse.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-01T02:29:45.525Z · LW(p) · GW(p)

I have discovered that I am so terrible at skateboarding and rollerblading that self-preservation requires me to stop trying.

comment by tut · 2013-07-02T08:29:19.680Z · LW(p) · GW(p)

Not in general, but I recognize your example. Walking is pleasant and active and allows me to think sustained thoughts, so it makes time 'pass' quickly. Whereas riding the subway is passive and stressful and makes me think many scattered thoughts in short time, so it makes time 'pass' slowly, making the ride seem longer. Also, if you walk somewhere in 15 minutes that probably takes about 15 minutes, but if you ride the subway for 15 minutes that probably takes more like half an hour from when you leave home to when you get to your goal.

comment by A1987dM (army1987) · 2013-07-05T10:42:33.067Z · LW(p) · GW(p)

More generally, I've noticed I tend to underestimate how much time passes when I'm directly controlling how fast I'm going (climbing stairs, driving on an open road, reading) and overestimate it when I'm not (using an elevator, driving in congested traffic, watching a video).

Short-distance public transport is an exception: once I'm on the bus, it feels like it takes 5 minutes to get from home to the university, but it actually takes 20.

comment by Camaragon · 2013-07-08T07:47:15.569Z · LW(p) · GW(p)

I download loads of music, audiobooks, and books (though it's more bothersome to read while moving) and listen to them on my commute to work. It takes me around 45 minutes to get to work via the train system, and the same time to get back home. Doing this, I totally don't mind the commute. I look forward to it, even, since it's the only time I get to read or listen to anything.

comment by aelephant · 2013-07-01T23:12:44.207Z · LW(p) · GW(p)

Better in what way?

According to Freakonomics, public transportation may actually be less efficient than driving: http://www.freakonomics.com/2012/11/07/can-mass-transit-save-the-environment-right-wing-or-left-wing-heres-a-post-everybody-can-hate/

Replies from: bentarm, NancyLebovitz
comment by bentarm · 2013-07-02T11:31:33.640Z · LW(p) · GW(p)

Apologies, I should have made this clearer (and will probably edit the original to do so). Commuting is terrible for the happiness of the commuter. The rest of the post should be interpreted in light of this.

As for the Freakonomics research - it seems quite implausible that the marginal commuter has a bigger impact by taking transit rather than a car (I seem to remember listening to an episode of Freakonomics radio about this discussion, and being disappointed by the lack of marginal analysis).

comment by NancyLebovitz · 2013-07-02T07:07:21.502Z · LW(p) · GW(p)

I wonder whether it would help to use smaller buses/shorter trains at off-peak hours.

comment by ESRogs · 2013-07-02T23:16:15.147Z · LW(p) · GW(p)

I've just noticed that the Future of Humanity Institute stopped receiving direct funding from the Oxford Martin School in 2012, while "new donors continue to support its work." http://www.oxfordmartin.ox.ac.uk/institutes/future_humanity

Does that mean it's receiving no funding at all from Oxford University anymore? I'm surprised that there was no mention of that in November here: http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/. Is the FHI significantly worse off funding wise than it was in previous years?

comment by Username · 2013-07-02T18:50:13.039Z · LW(p) · GW(p)

Posting here rather than the 'What are you working on' thread.

Three weeks ago I got two magnets implanted in my fingers. For those who haven't heard of this before, what happens is that moving electromagnetic fields (read: everything AC) cause the magnets in your fingertips to vibrate. Over time, as nerves in the area heal, your brain learns to interpret these vibrations as varying field strengths. Essentially, you gain a sixth sense: the ability to detect magnetic fields and, by extension, electricity. It's a $350 superpower.

The guy who put them in my finger told me it will take about six months before I get full sensitivity. So, what I'm doing at the moment is research into this and quantifying my sensitivity as it develops over time. The methodology I'm using is wrapping a loop of copper wire around my fingers and hooking it up to a headphone jack, which I will then plug into my computer and send randomized voltage levels through. By writing a program so I can do this blind, I should be able to get a fairly accurate picture of where my sensitivity cutoff level is.
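The randomization and scoring side of that protocol can be sketched in a few lines of Python. This is a minimal sketch of the blind-trial bookkeeping only, leaving out the audio-output side entirely; the function names and stimulus levels are illustrative, not taken from the actual program:

```python
import random

def make_blind_trials(levels, repeats, seed=None):
    """Build a shuffled trial list so the subject can't anticipate
    which stimulus level comes next; each level appears `repeats` times."""
    rng = random.Random(seed)
    trials = [level for level in levels for _ in range(repeats)]
    rng.shuffle(trials)
    return trials

def detection_rates(trials, responses):
    """Fraction of 'felt it' responses at each stimulus level.
    `responses` is a parallel list of booleans, one per trial."""
    hits, totals = {}, {}
    for level, felt in zip(trials, responses):
        totals[level] = totals.get(level, 0) + 1
        hits[level] = hits.get(level, 0) + int(felt)
    return {level: hits[level] / totals[level] for level in totals}
```

Including zero-voltage trials gives a false-positive baseline, and the level at which the detection rate climbs well above that baseline is a reasonable estimate of the sensitivity cutoff.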

One thing I'm stuck on is how to calculate the field strength acting on my magnets. Getting the B field for a solenoid is trivial, but with a magnetic core I'm sure it throws everything out of whack. If anyone has any links to the physics of how to approach that, I'd be much obliged.
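For reference, the "trivial" air-core case is the long-solenoid formula B = μ0·(N/L)·I; the quick sketch below adds a relative-permeability factor as an explicit placeholder. Treat `mu_r` as an assumption rather than an answer — the effective permeability of a real magnetic core depends on its geometry and saturation, which is exactly the hard part being asked about:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def solenoid_b_field(turns, length_m, current_a, mu_r=1.0):
    """Ideal long-solenoid field: B = mu_0 * mu_r * (N/L) * I.
    mu_r = 1.0 is the air-core case. For a ferromagnetic core, mu_r
    here is only a rough multiplier; the true field also depends on
    core geometry, saturation, and end effects in a short coil."""
    turns_per_m = turns / length_m
    return MU_0 * mu_r * turns_per_m * current_a
```

For example, 100 turns over 10 cm at 1 A gives about 1.26 mT in air.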

And if you're curious about what it's like so far to have magnets in your fingers, feel free to ask.

Replies from: drethelin, Locaha, wadavis, knb
comment by drethelin · 2013-07-02T19:29:57.817Z · LW(p) · GW(p)

"superpower" is overstating it. Picking up paperclips is neat and being able to feel metal detectors as you walk through them or tell if things are ferrous is also fun but it's more of just a "power" than a superpower. It also has the downside of you needing to be careful around hard-drives and other strong magnets. On net I'm happy I got them but it's not amazing.

Replies from: gwillen, Username
comment by gwillen · 2013-07-02T23:54:10.028Z · LW(p) · GW(p)

FYI, there's no need to be careful around hard drives (except for your own safety, since they're large chunks of metal your magnet will stick to.) The platters of a modern hard drive are too high-coercivity and too well-shielded for even a substantial neodymium magnet (bigger than you can fit in a fingertip) to affect them.

Credit cards, on the other hand.

Replies from: wedrifid, BerryPick6, drethelin
comment by wedrifid · 2013-07-09T10:59:46.507Z · LW(p) · GW(p)

Credit cards, on the other hand.

Great thinking! Once you have fully developed and trained your superpower sensitivity you can read the cards by merely brushing your hands past someone's wallet!

Replies from: CronoDAS
comment by CronoDAS · 2013-07-15T09:08:14.766Z · LW(p) · GW(p)

::deliberately failing to get the joke::

I think the issue is that the magnets will destroy the data on the credit card stripe...

comment by BerryPick6 · 2013-07-03T08:52:49.302Z · LW(p) · GW(p)

Also, aren't MRIs going to be a problem?

comment by drethelin · 2013-07-03T02:47:00.973Z · LW(p) · GW(p)

It's not the being careful about ruining them, it's the giant magnet IN them that can fuck you up.

comment by Username · 2013-07-02T19:38:06.521Z · LW(p) · GW(p)

I'd mostly agree with that. After I finish my current project though I have some more in mind about using them as input methods, so for me they're as much toys I can experiment with as anything else.

comment by Locaha · 2013-07-03T20:17:38.933Z · LW(p) · GW(p)

It's all fun and games until you need to get MRI and your fingers burst into flames.

Then it's just fun.

comment by wadavis · 2013-07-08T20:22:07.630Z · LW(p) · GW(p)

Do you notice the accumulation of ferrous dust fragments, for lack of a better word?

The magnets I have for misc. projects at home quickly pick up a collection of small fragments, but maybe my world is just too closely tied to steel fabrication shops.

Replies from: Username
comment by Username · 2013-07-08T21:30:53.582Z · LW(p) · GW(p)

Not yet, though I haven't done any metalwork since I got the magnets.

This was one of the questions I asked the guy who put them in, since I'll be running into this eventually. He said that this was one of his concerns going into getting his own, as he does a lot of work in a shop, but that he has found that iron and steel filings haven't been a problem.

comment by knb · 2013-07-03T13:57:17.904Z · LW(p) · GW(p)

Is there any practical use for having magnets in your fingers? It seems like a bizarrely bad idea to me.

Replies from: Username, drethelin
comment by Username · 2013-07-03T16:39:09.324Z · LW(p) · GW(p)

Besides telling if a device is live or not, not that I know of. The one major issue is that you can't have an MRI, although if I'm in a situation where I can't tell a doctor that I have them, magnets being ripped out of my fingers is the least of my worries. If need be, I could have a doctor make a small incision and take them out. And I do have to be careful not to hold on to powerful magnets for too long, or it will crush the skin in between the two magnets. Other than that though, there's no real downside. They're off to the side so it doesn't affect my grip, and once my skin finishes healing they'll be unnoticeable.

The upside for me is the qualia of sensing EMFs and having them as toys to play with. I treated the decision like getting a tattoo, where my personal rule is that I have to love a design for a continuous year before getting it. I haven't settled on a design long enough to get a tattoo, but I had planned on getting magnets for about a year and a half, so I went ahead and did it.

comment by drethelin · 2013-07-03T19:46:01.006Z · LW(p) · GW(p)

Having extra senses is pretty cool.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-07-05T18:33:34.615Z · LW(p) · GW(p)

I'm wearing a magnetic ring (a post on LW gave me the idea). It's fun, and I can take it off whenever I want to. Occasionally it comes in useful too; when I need to open up my computer I can put the little screws on my ring, and I can tell whether a pot will work on my induction hot plate.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-12T18:01:37.213Z · LW(p) · GW(p)

This sounds like a much lower-commitment variant, but it doesn't seem like it would have close to the same sensitivity.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-07-13T08:28:05.913Z · LW(p) · GW(p)

Yes, it's not really like having an extra sense.

comment by letter7 · 2013-07-01T20:54:06.110Z · LW(p) · GW(p)

There's something that happens to me with alarming frequency, something I almost never see referenced (or don't remember seeing referenced), and thus I don't know its proper name. I'm talking about that effect where I'm reading a text (any kind of text: textbook, blog, forum post) and suddenly I discover that two minutes have passed and I've advanced six lines in the text, but I have no idea what I read. It's like a time black hole, and now I have to re-read it.

Sometimes it also happens in a less alarming, but still bad, way: for instance, when I'm reading something that is deliberately teaching me an important piece of knowledge (as in, I already know that whatever is in this text IS important), I happen to go through it without questioning anything, just "accepting" it, and a few moments later it suddenly comes down on me when I'm further ahead: "Wait... what, did he just say 2 pages ago that thermal radiation does NOT need matter to propagate?" and I have to go back again and check that I was not crazy.

While I don't know the name of this effect, I have asked some acquaintances of mine about it; while some agreed that they have it, others didn't. I would very much like to eliminate this flaw. Does anybody know what I could do to train myself not to do it, or at least the correct name, so I can research it further?

Replies from: NancyLebovitz, polutropon, David_Gerard, moreati, shminux, Alsadius, aelephant, David_Gerard, Discredited, NancyLebovitz
comment by NancyLebovitz · 2013-07-02T12:35:59.243Z · LW(p) · GW(p)

I give you credit for noticing you're running on automatic in as little as five minutes.

This is a guess, but meditation might help since it's a way of training the ability to focus.

comment by polutropon · 2013-07-06T22:01:20.696Z · LW(p) · GW(p)

Are you sleep deprived? This kind of attention lapse sounds like the calling card of a microsleep.

comment by David_Gerard · 2013-07-02T14:19:32.581Z · LW(p) · GW(p)

I'm talking about that effect where I'm reading a text (any kind of text: textbook, blog, forum post) and suddenly I discover that two minutes have passed and I've advanced six lines in the text, but I have no idea what I read. It's like a time black hole, and now I have to re-read it.

I do this all the time. I have seen it referred to in literature (a character reading a page three times before realising he can't take it in, as a way to show that he's extremely distracted), but that's not quite the same as just zoning out.

comment by moreati · 2013-07-01T22:34:45.010Z · LW(p) · GW(p)

If it's material you want to/are required to learn from, try taking notes as you read it, to force yourself to recall it in your own terms/language.

If it's just recreational/online reading try increasing the font size/spacing or decreasing the browser width, or using a browser extension like readability. Don't scroll with the scroll bar or the mouse wheel - use pg up/pg down to make it easier to keep your position.

Replies from: None, letter7
comment by [deleted] · 2013-07-02T00:48:06.244Z · LW(p) · GW(p)

I don't know if I deliberately developed a habit of highlighting the current paragraph when reading long articles, but it has become extremely useful.

Replies from: tim
comment by tim · 2013-07-02T19:38:41.788Z · LW(p) · GW(p)

In the same vein, I get easily distracted when reading text, and the ability to click around, selecting and deselecting the text I'm reading, helps me stay engaged.

Writing that out, it sounds like it would be super distracting, but it's not (for me). Possibly related to the phenomenon where some people work better with noise in the background rather than in silence. Clicking around might help maintain a minimum level of stimulation while reading.

Replies from: None, Emile
comment by [deleted] · 2013-07-03T02:27:40.976Z · LW(p) · GW(p)

Chewing gum does this for me. It's the perfect level of low-level background stimulation to focus on important things.

Replies from: Alsadius
comment by Alsadius · 2013-07-03T17:56:11.552Z · LW(p) · GW(p)

There were a couple of university classes where I found that playing Sudoku in class actually helped me learn the material, because I gained more in alertness than I lost in distraction.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-07-05T18:02:35.445Z · LW(p) · GW(p)

When I was in school I couldn't take notes. I couldn't write fast enough, and trying to write things down occupied so much of my attention I couldn't follow what the teacher was saying next. I should have learned shorthand; but instead I doodled. Somehow, keeping my hands busy kept my ears open.

comment by Emile · 2013-07-02T21:41:29.088Z · LW(p) · GW(p)

I don't have any stats, but I wouldn't be surprised if the majority of people (sometimes) read on a computer like that, highlighting various bits as they go.

comment by letter7 · 2013-07-01T23:04:54.783Z · LW(p) · GW(p)

I understand the "recall in your own terms" part; that sounds like very practical advice, even more so in my case, since English isn't my mother tongue and thus I could try translating it, which would ensure a deeper understanding. Thanks.

I don't see how the way the information is displayed (font size/spacing and using the scroll bar) could impact the way I'm reading; could you explain that a little more?

comment by shminux · 2013-07-01T22:14:27.989Z · LW(p) · GW(p)

Probably automaticity is what you are looking for. I am not sure how to force one's mind to attend to a repetitive task. One trick for avoiding reading automaticity is to paraphrase and check for potential BS every paragraph or so.

Replies from: letter7
comment by letter7 · 2013-07-01T23:02:39.301Z · LW(p) · GW(p)

Indeed it's something along those lines, however, in the article it's represented in a positive light, where

a skilled reader, multiple tasks are being performed at the same time such as decoding the words, comprehending the information, relating the information to prior knowledge of the subject matter, making inferences, and evaluating the information's usefulness to a report he or she is writing

My problem is that, somehow, I do all that, but without comprehending anything. The article linked to an interesting program in Australia, though: QuickSmart. It's aimed at middle-school students, but I think I could perhaps benefit from it.

comment by Alsadius · 2013-07-03T17:54:53.715Z · LW(p) · GW(p)

I have this happen sometimes - usually it's because I let my mind wander to something unrelated but I kept my eyes moving out of habit.

comment by aelephant · 2013-07-01T23:08:16.914Z · LW(p) · GW(p)

I can't remember where I read it, but I remember hearing that in order to really understand an argument, you have to take a leap of faith & accept all of the propositions & conclusions in that argument. If you don't, you will be automatically & subconsciously strawmanning it. After you've exposed yourself to the whole idea, you can go back & look at it critically. I have no idea if this is BS & wish I could track down where I came across it. Cheers to any help.

comment by David_Gerard · 2013-07-14T14:32:49.982Z · LW(p) · GW(p)

Aha, it happens to Redditors too! Rage comic, thread.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-07-15T08:10:32.284Z · LW(p) · GW(p)

Trying to read Neuromancer when I was 11, after a local computer magazine had written about how it's basically the best book ever, was basically this all the time. I knew very few English cultural idioms back then, and Gibson really likes his cultural idioms, like "You ever the heat?" for "Did you use to be a cop?" I could read Stephen King novels in English fine at that point, but Neuromancer was just pages and pages of me having no idea what's going on, and I eventually gave up about a third in.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-15T17:57:49.098Z · LW(p) · GW(p)

Not quite - this is talking about words you could understand but your attention wanders.

Did you ever come back to it, or try a translation?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-07-15T20:15:27.621Z · LW(p) · GW(p)

I also get the thing where I stop understanding text just from not paying attention, and as far as I remember, the experience of reading that was the same. I don't remember ever being actively aware that I couldn't understand the text, just having the constant weird situation of reading sentences I seemed to be able to read just fine, but still ending with very little idea of what the narrative was.

I picked up the book again a couple of years ago and read it through without problem. That was also when I got a clearer idea of how the book was full of tricky narrative beats I'd have had no hope of understanding properly the first time around.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-15T20:25:33.981Z · LW(p) · GW(p)

I don't remember ever being actively aware that I couldn't understand the text, just having the constant weird situation of reading sentences I seemed to be able to read just fine, but still ending with very little idea of what the narrative was.

To be honest, I got that from Gibson first time through the trilogy and I'm a native speaker ;-) They made more sense on rereading.

comment by Discredited · 2013-07-11T16:53:34.773Z · LW(p) · GW(p)

I'm sorry to drop references without a summary, but this will have to do at the moment: "Lost thoughts: Implicit semantic interference impairs reflective access to currently active information"

comment by NancyLebovitz · 2013-07-03T02:20:43.318Z · LW(p) · GW(p)

Slicereader breaks text into paragraphs that are displayed one per page. To advance to the next paragraph, you press the spacebar.

comment by gjm · 2013-07-01T22:28:09.485Z · LW(p) · GW(p)

Hey komponisto (and others interested in music) -- if you haven't already seen Vi Hart's latest offering, Twelve Tones, you might want to take a look. Even though it's 30 minutes long.

(I don't expect komponisto, or others at his level, will learn anything from it. But it's a lot of fun.)

Replies from: Will_Newsome, NancyLebovitz, David_Gerard
comment by Will_Newsome · 2013-07-01T22:34:35.129Z · LW(p) · GW(p)

I second the recommendation. I found it interesting that I enjoyed it so much despite learning almost nothing at all. Everything in the video was stuff I'd heard or thought about before, but seeing it presented in a unified, artistic, humorous fashion was very entertaining.

Replies from: tim
comment by tim · 2013-07-02T19:27:38.262Z · LW(p) · GW(p)

On the flip-side, I know almost nothing about music, was unable to understand a lot of the video, and still enjoyed it quite a bit.

comment by NancyLebovitz · 2013-07-03T03:01:19.757Z · LW(p) · GW(p)

I had no idea that the purpose of twelve tone was to teach people how to decontextualize musical sounds. Is listening to such music more valuable than meditation?

Replies from: gjm
comment by gjm · 2013-07-03T07:40:19.237Z · LW(p) · GW(p)

I'm pretty sure it wasn't Schoenberg's purpose.

comment by David_Gerard · 2013-07-02T14:18:18.317Z · LW(p) · GW(p)

That is amazing.

comment by elharo · 2013-07-05T13:00:40.730Z · LW(p) · GW(p)

A Big +1 to whoever modified the code to put pink borders around comments that are new since the last time I logged in and looked at an article. Thanks!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-09T20:32:47.254Z · LW(p) · GW(p)

I see them as purple, and I'm neutral for purple vs. the previous green.

comment by mikko (morrel) · 2013-07-04T18:02:55.065Z · LW(p) · GW(p)

Heya. I'm organizing a meetup, but to announce it here I seem to need some karma. Thanks.

comment by Viliam_Bur · 2013-07-01T19:48:54.889Z · LW(p) · GW(p)

I noticed a strategy that many people seem to use; for lack of a better name, I will call it "updating the applause lights". This is how it works:

You have something that you like and it is part of your identity. Let's say that you are a Green. You are proud that Greens are everything good, noble, and true; unlike those stupid evil Blues.

Gradually you discover that the sky is blue. First you deny it, but at some point you can't resist the overwhelming evidence. But at that moment in history, there are many Green beliefs, and the belief that the sky is green is only one of them, although historically the central one. So you downplay it and say: "All Green beliefs are true, but some of them are meant metaphorically, not literally, such as the belief that the sky is green. This means that we are right, and the Blues are wrong; just as we always said."

Someone asks: "But didn't Greens say the sky is green? Because that seems false to me." And you say: "No, that's a strawman! You obviously don't understand Greens; you are full of prejudice. You should be ashamed of yourself." That someone gives an example of a Green who literally believed the sky is green. You say: "Okay, but this person is not a real Green. It's a very extreme person." Or, if you can't deny it, you say: "Yes, even within the Green movement, some people may be confused and misunderstand our beliefs; also, our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green." And in some sense, you are right. (And the Blues are wrong. As it has always been.)

To be specific, I have several examples in my mind; religion is just one of them; probably any political or philosophical opinion that had to be updated significantly and needs to deny its original version.

Replies from: Qiaochu_Yuan, TimS, NancyLebovitz, taelor, Jack, ciphergoth, mstevens, None, ChristianKl
comment by Qiaochu_Yuan · 2013-07-01T21:14:53.558Z · LW(p) · GW(p)

My strategy is to avoid conversations of this form entirely by default. Most Greens do not need to be shown that the belief system they claim to have is flawed, and neither do most Blues. Pay attention to what people do, not what they say. Are they good people? Are they important enough that bad epistemology on their part directly has large negative effects on the world? If the answers to these questions are "yes" and "no" respectively, then who cares what belief system they claim to have?

Replies from: niceguyanon
comment by niceguyanon · 2013-07-02T04:43:11.482Z · LW(p) · GW(p)

If the answers to these questions are "yes" and "no" respectively, then who cares what belief system they claim to have?

I'm really going to try and remind myself of this more often. Most of the time the answers are "yes" and "no" and points are rarely won for pointing out bad epistemology.

comment by TimS · 2013-07-01T20:24:07.165Z · LW(p) · GW(p)

Yes, like moving-the-goalposts, this is an annoying and dishonest rhetorical move.

Yes, even within the Green movement, some people may be confused and misunderstand our beliefs, and our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green.

Suppose some Green says:

Yes, intellectual precursors to the current Green movement stated that the sky was literally Green. And they were less wrong, on the whole, then people who believed that the sky was blue. But the modern intellectual Green rejects that wave of Green-ish thought, and in part identifies the mistake as that wave of Greens being blue-ish in a way. In short, the Green movement of a previous generation made a mistake that the current wave of Greens rejects. Current Greens think we are less wrong than the previous wave of Greens.

Problematic, or reasonable non-mindkiller statement (attacking one's potential allies edition)?

How much of that intuition is driven by the belief that Bluism is correct. If we change the labels to Purple (some Blue) and Orange (no Blue), does the intuition change?

Replies from: DSherron, JoshuaZ
comment by DSherron · 2013-07-01T20:51:00.817Z · LW(p) · GW(p)

If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until you get to this point, there is no point in trying to engage in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.

For people having an otherwise rational debate, they need to at this point drop the Green and Blue labels (any rationalist should be happy to do so, since they're just a shorthand for the full belief system) and start specifying their actual beliefs. The fact that one identifies as a Green or a Blue is a red flag of glaring irrationality, confirmed if they refuse to drop the label to talk about individual beliefs, in which case do the above. Sticking with the labels is a way to make your beliefs feel stronger, via something like a halo effect where every good thing about Green or Greens gets attributed to every one of your beliefs.

comment by JoshuaZ · 2013-07-01T23:26:35.367Z · LW(p) · GW(p)

There's a further complicating factor: often when this happens, both modern Blues and Greens won't exactly correspond to historical Blues and Greens even though both are using the same terms. Worse, when the entire region of acceptable social policy has changed, sometimes an extreme Green or Blue today might be what was seen as someone of the other type decades ago.

Replies from: TimS
comment by TimS · 2013-07-02T01:00:57.296Z · LW(p) · GW(p)

Yes, the first wave of a movement may have many divergent descendants, which end up on different sides of a current political dispute. And the most direct descendant might be on the opposite side of the political divide from the one we would predict the first-wave proponents would adopt. But for that to happen, there needs to be a significant passage of time.

By contrast, if the third wave of a movement cannot point to an immediately prior second wave that actually believed the position criticized (and which the third wave has already rejected), then Villiam_Bur's moving-the-goalposts criticism has serious bite, to the point that an outsider probably should not accept the third wave as genuinely interested in rational discussion or true beliefs.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-02T01:29:35.922Z · LW(p) · GW(p)

And here we were having a very nice discussion without pointing out any potentially controversial/mindkilling examples. Using the phrasing of second and third wave doesn't make it less subtle or less potentially mindkilling.

In the specific case which you are not so obliquely referencing, there's a pretty strong argument that much of third-wave feminism has strands from the first and second waves, while also agreeing on the most basic premises.

It is also worth noting in this context that movements (wherever they are politically) aren't in general aimed at rational discussion or true beliefs but at accomplishing specific goal sets. You will in any diverse movement find some strains that are more or less interested in rational discussion, but criticizing a movement for its failure to embody rationality is not by itself a very useful criticism.

Replies from: Emile
comment by Emile · 2013-07-02T17:50:18.323Z · LW(p) · GW(p)

Um, I had not linked the parent of your comment to any specific movement until you pointed out the possible existence of such a link ...

Replies from: David_Gerard, JoshuaZ, Luke_A_Somers
comment by David_Gerard · 2013-07-04T12:38:57.145Z · LW(p) · GW(p)

It is the big obvious current example where the ideological battle is between "second wave" and "third wave" and the first wave is barely mentioned. I encounter it in relation to the UK social justice Twittersphere, which is tangential to the more Kankri Vantas stretches of Tumblr. (Or, more accurately, the Porrim Maryam stretches.)

Edit: Can anyone think of another field described as having numbered waves where the battle was between second and third?

comment by JoshuaZ · 2013-07-02T18:04:01.482Z · LW(p) · GW(p)

I'm pretty sure that's what TimS was talking about given his use of the phrases "second wave" and "third wave". It is especially clear because if one was going to be talking about a generic example and using the term wave, one would in the same context have likely discussed the first wave v. the second wave. The off-by-one only makes sense in that specific historical context.

comment by Luke_A_Somers · 2013-07-12T18:08:28.454Z · LW(p) · GW(p)

Oppositely, the second and third waves immediately screamed 'feminism' to me, but I couldn't assemble the rest of the analogy. The third wave has plenty of legitimate differences and similarities with both the first and second waves. I'm still not sure what TimS was getting at.

comment by NancyLebovitz · 2013-07-02T12:41:04.202Z · LW(p) · GW(p)

This can be a stage in the process of leaving the Greens -- I've heard stories of deconversion which sounded a lot like that.

comment by taelor · 2013-07-01T22:01:34.490Z · LW(p) · GW(p)

Karl Popper came up with the Falsifiability Principle as a direct response to watching Marxists, Freudians, and others do exactly this.

comment by Jack · 2013-07-03T15:52:43.577Z · LW(p) · GW(p)

Ideologies and theo-philosophical schools are rarely if ever defined precisely enough to exclude true facts about the world or justifications for genuinely good ideas. They're more collections of rules of thumb, methods, technical terms and logics. If mathematically formulated scientific theories are under-determined, then ideologies are so tenfold. The problem of inferential distance when it comes to worldviews isn't really about the sheer decibels of information that need to be communicated. It's that the interlocutors are playing different games and speaking different languages. And I suspect most deconversions are more like picking up a new language and forgetting your old one than they are the product of repeated updates based on the predictive failures of the old ideology/religion. It's a pseudo-rational process, which is why it doesn't reliably occur in just one direction.

Back to your point: since people have egos, memetic complexes usually have self-perpetuating features, and applause lights don't constrain future experience, it makes sense that if anything is held constant it will be Greens being really sure they are right. That's non-optimal and definitely irksome to people like all of us. It's inefficient because we're spending resources on constructing post-hoc justifications for how the real Green answer is the true one, and the corrections to our model may be no more than curve-fitting. That is, whatever beliefs and assumptions led the Greens to be wrong in the first place may still be in place. Plus, it is kind of creepy in a "we've always been at war with Eurasia" kind of way.

But on the other hand it is sort of okay, right? At least they're updating! You can think of academic departments of philosophy, religion, law and humanities as just the cost of doing business to mollify our egos as we change our minds. And changing people's minds this way is almost certainly much easier than making them convert to the doctrine of the hated enemy and engage in extended self-flagellation. It's a line of retreat.

Making the modern Green cop to the literal beliefs of her intellectual ancestor seems like an exercise in scoring points, not genuine persuasion. Who needs credit? The curve fitting is still an issue but you might be better off trying to make room for better beliefs and assumptions within the context of Green thought. Especially since it isn't obvious the opposing movement did anything other than get lucky.

A few out-there scholars think Descartes was an atheist. He almost certainly wasn't. But there is a reason they suspect him even though much of the Meditations is an extended argument for the existence of God. The thing is that the practical upshot of his non-empirical argument for God is that we should completely abandon the Christian-Aristotelian-Scholastic tradition and use our senses to discover what the world is really like. "The sky is certainly Green and this proves that the ideal method for discovering the color of things is visual examination and use of a spectrometer."

Ideological multi-lingualism is a crucial skill; I'd like to hear ideas for cultivating it.

comment by Paul Crowley (ciphergoth) · 2013-07-01T20:16:51.586Z · LW(p) · GW(p)

This sounds very much like religion - I'd be interested in hearing about a solid non-religious example.

Replies from: TimS, David_Gerard
comment by TimS · 2013-07-01T20:26:35.302Z · LW(p) · GW(p)

Let's avoid object level examples until we resolve how to distinguish this dishonest rhetorical move from honest updates on the low validity of prior arguments now abandoned. Otherwise, we get bogged down in mindkiller without any general insight into how to be more rational.

Replies from: ESRogs
comment by ESRogs · 2013-07-02T23:23:34.437Z · LW(p) · GW(p)

But aren't we all agreed the specific examples are super-helpful for understanding a general phenomenon?

comment by David_Gerard · 2013-07-02T14:21:47.515Z · LW(p) · GW(p)

Politics. Social issues. You see a lot of it when circumstances change and a political party or activist organisation has to then reconcile the conflict between consequentialism and deontology, and somehow satisfy both sets of followers.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-07-02T14:33:30.827Z · LW(p) · GW(p)

I discussed this with coffeespoons yesterday; the trouble is that political leaders often speak much less ambiguously than religious ones, so there's a lot less room to say "Well, what Marx really meant was..."

Replies from: David_Gerard
comment by David_Gerard · 2013-07-02T14:52:19.860Z · LW(p) · GW(p)

I dunno. I am reluctant to name present-day political examples on LW, but you doubtless feel a slight urge to throw your computing device against the wall when you see some current eloquent bit of black-has-always-been-white spin from our esteemed leaders here in the UK.

I found myself at our local church a couple of Sundays ago, where the sermon was a really very good polemic conclusively demonstrating that Galatians 2 rules racism as unChristian. I thought it was marvellously reasoned and really quite robust, except for the problem of large chunks of observed Christian history. (The resolution: you can, of course, prove anything and its opposite from a compilation that size.)

comment by mstevens · 2013-07-02T10:32:59.711Z · LW(p) · GW(p)

I think there's a related rhetorical trick that's something like redefining the applause lights, or brand extension.

Greens believe the sky is green. I want them to believe the entire world is green. I will use their commitment to sky greenness and just persuade them it means something slightly different.

Clouds are kind of like the sky so should really be considered green if you're being fair about things. And rain is in the sky, who are you to say it's not green? Rain falls on the ground, which is therefore also part of the sky.

After a while, you can persuade people that, since the sky is green, obviously rocks are green.

This explanation isn't great but more practical examples are somewhat mindkilling.

comment by [deleted] · 2013-07-01T20:38:23.149Z · LW(p) · GW(p)

Some selection effects: I wonder if the perceived solidarity of most identity-heavy groups is due to vague language that easily facilitates mind projection within the group. Surviving communities will have either reduced their exposure to fracturing forces, or drifted towards more underspecified beliefs as a result of such exposure. I think religious strains fall very nicely into these two groups, but I'm not so sure about political groups.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-02T07:31:48.669Z · LW(p) · GW(p)

Being specific is a good rationalist tool and a bad strategy for social relations. The more specific one is, the fewer people agree with them. The best social strategy is to have a few fuzzy applause lights and gather agreement about them.

I'll try to find a less sensitive political example. Some people near me are fans of "direct democracy"; they propose it as a cure for all political problems. I try being more specific and ask whether they imagine that people in the whole country will vote together, or that each region will vote separately on its local policies... but they refuse to discuss this, because they see that it would split their nicely agreeing group into disagreeing subgroups. But for me this distinction seems very important in predicting the consequences of such a system.

comment by ChristianKl · 2013-07-02T12:59:21.919Z · LW(p) · GW(p)

If you talk with a rationalist about making decisions via intuition, he has to grant you that there are domains of problems where intuition is very useful. Rationalism is about winning, so of course a good rationalist will use intuition in those domains.

If you look at medicine, in the last decade Cochrane has finally found that chiropractors can successfully treat back pain, even though their mental model of the human body conflicts with the model of mainstream medicine.

Most big mental systems get updated over time. As long as you do update towards new evidence, you don't have to trash your old beliefs completely.

comment by [deleted] · 2013-07-03T14:38:25.711Z · LW(p) · GW(p)

What if it really was like that?

If you read any amount of history, you will discover that people of various times and places have matter-of-factly believed things that today we find incredible (in the original sense of “not credible”). I have found, however, that one of the most interesting questions one can ask is “What if it really was like that?”

... What I’m encouraging is a variant of the exercise I’ve previously called “killing the Buddha”. Sometimes the consequences of supposing that our ancestors reported their experience of the world faithfully, and that their customs were rational adaptations to that experience, lead us to conclusions we find preposterous or uncomfortable. I think that the more uncomfortable we get, the more important it becomes to ask ourselves “What if it really was like that?”

The true meaning of moral panics

In my experience, moral panics are almost never about what they claim to be about. I am just (barely) old enough to remember the tail end of the period (around 1965) when conservative panic about drugs and rock music was actually rooted in a not-very-thinly-veiled fear of the corrupting influence of non-whites on pure American children. In retrospect it’s easy to understand as a reaction against the gradual breakdown of both legally enforced and de-facto racial segregation in the U.S.

Replies from: Alsadius, Multiheaded
comment by Alsadius · 2013-07-03T18:20:38.298Z · LW(p) · GW(p)

It seems fairly believable that an oppressed underclass that is intentionally deprived of education and opportunity will, on average, be cruder, less intellectually inclined, have less wealth and status, and more prone to failing at life in various ways due to the lack of a support structure. This is true of any group, whatever their intrinsic nature, simply due to the act of discrimination.

I remember once reading an essay about Jews in (IIRC) Rudyard Kipling's works, where they're portrayed in pretty appalling ways, while all sorts of other groups are portrayed positively. The author came to the conclusion that acting in a cowardly and profiteering fashion was a survival tactic created by anti-semitic laws, and that Kipling was probably just conveying the reality of the time. (I'm not enough of an expert to judge the truth of this, but it seemed reasonable.)

Replies from: fubarobfusco, Viliam_Bur
comment by fubarobfusco · 2013-07-03T21:52:41.554Z · LW(p) · GW(p)

Sure. Also see the recent follow-ups to the Stanford marshmallow experiment. It sure looks like some of what was once considered to be innate lack of self-restraint may rather be acquired by living in an environment where others are unreliable, promises are broken, etc.

Replies from: Desrtopa
comment by Desrtopa · 2013-10-08T23:36:24.691Z · LW(p) · GW(p)

Possibly, but the followup only tells us that, at least in the short term, kids will be less likely to delay gratification from specific individuals who have proven to be untrustworthy (and the protocol of that experiment kind of went for overkill on the "demonstrating untrustworthiness" angle.)

It might be that children become less able to delay gratification if raised in environments where they cannot trust promises from their guidance figures, but the same effect could very easily be caused by rational discounting of the value of promises from individuals who have proven unlikely to deliver on them.

comment by Viliam_Bur · 2013-07-06T17:02:26.680Z · LW(p) · GW(p)

Your argument sounds perfectly reasonable to me. Yet I would advise against reversing stupidity. Just because there is a systematic influence that makes things worse for the oppressed people, it does not automatically mean that without that influence all the differences would disappear. Although it is worth testing experimentally.

Replies from: Alsadius
comment by Alsadius · 2013-07-08T03:39:20.619Z · LW(p) · GW(p)

Agreed, of course. I never claimed that there are no intrinsic group differences - FWIW, I believe that there are, they're just vastly smaller than intrinsic individual differences, and thus should be ignored in nearly all non-statistical circumstances. But group cultural differences are obviously very significant, as are group differences in education, opportunity, and support. We can do a fair bit about those, and we ought to.

comment by gjm · 2013-07-09T09:38:35.772Z · LW(p) · GW(p)

{EDITED to clarify, as kinda suggested by wedrifid, some highly relevant context.}

This comment by JoshuaZ was, when I saw it, voted down to -3, despite the fact that it

  • addresses the question it's responding to
  • gives good reasons for making the guess JoshuaZ said he made
  • seems like it's at a pretty well calibrated level of confidence
  • is polite, on topic, and coherent.

A number of JoshuaZ's other recent comments there have received similar treatment. It seems a reasonable conclusion (though maybe there are other explanations?) that multiple LW accounts have, within a short period of time, been downvoting perfectly decent comments by JoshuaZ. As per other discussions in that thread [EDITED to add: see next paragraph for more specifics], this seems to have been provoked by his making some "pro-feminist" remarks in the discussions of that topic brought up by recent events in HPMOR.

{EDITED to add...} Highly relevant context: Elsewhere in the thread JoshuaZ reports that, apparently in response to his comments in that discussion, he has had a large number of comments on other topics downvoted in rapid succession. This, to my mind, greatly raises the probability that what's going on is "playing the man, not the ball": that the cause of the downvotes isn't simply that many LW participants disagree strongly with me about the merits of the individual comments.

It seems to me that this is a kind of abuse that needs to be stopped. To be clear, I don't mean abuse of JoshuaZ, who I bet is perfectly capable of handling it. I mean abuse of LW. Specifically, it appears to be a concerted attempt to shape discussions here not by rational argument, nor even by appeal to emotion, but by intimidation.

(I suppose I should mention an amusing contrary hypothesis to which I attach very low probability. Perhaps the downvotes are from friends of JoshuaZ, who hope to attract sympathy upvotes and will change their own downvotes to upvotes in a week or two when no one's watching any more.)

I would address this to the LW admins by PM if I knew who they are, but the only person I know to be an LW admin is Eliezer and I believe he's very busy at the moment.

{EDITED to add ...} One other remark, just in case of suspicions. I am not JoshuaZ, nor do I have any idea who he is outside LW, nor (so far as I know) have I had any interaction with him outside LW, nor have I had enough in-LW interaction with him to regard him as an ally or a friend or anything of the kind. There is no personal element to any of what I have said.

{Totally irrelevant remark: The squiggly brackets are because [this sort] which I'd normally use for noting what I've edited interacts badly with the Markdown hyperlink syntax.}

Replies from: wedrifid, wedrifid, gjm
comment by wedrifid · 2013-07-09T15:13:24.904Z · LW(p) · GW(p)

{Totally irrelevant remark: The squiggly brackets are because [this sort] which I'd normally use for noting what I've edited interacts badly with the Markdown hyperlink syntax.}

The escape character, which solves this and various other potential problems, is "\".

Replies from: gjm
comment by gjm · 2013-07-09T15:19:22.106Z · LW(p) · GW(p)

Ah yes. Thanks. After a little experimentation, it transpires that what is needed to fix the problem is escaping the opening square bracket of the non-hyperlink text; escaping the closing square bracket is harmless but unnecessary.

Replies from: wedrifid
comment by wedrifid · 2013-07-09T16:06:48.409Z · LW(p) · GW(p)

Ah yes. Thanks. After a little experimentation, it transpires that what is needed to fix the problem is escaping the opening square bracket of the non-hyperlink text; escaping the closing square bracket is harmless but unnecessary.

Yes, Markdown is robust like that. Which is sometimes a nuisance. You can get away with writing underscored_words but a second underscore fucksitup.
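
For anyone following along, the escaping behaviour discussed in this sub-thread can be sketched as follows (a hypothetical illustration of standard Markdown backslash-escaping, not a quote from the thread):

```markdown
[this sort](http://example.com)   <- bracketed text followed by a URL is parsed as a hyperlink
\[this sort] of edit note         <- escaped opening bracket; renders literally as "[this sort] of edit note"
underscored\_words\_here          <- escaped underscores; renders literally, with no italics
```

As the exchange above notes, escaping only the opening square bracket is sufficient to suppress link parsing; escaping the closing bracket is harmless but unnecessary.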

comment by wedrifid · 2013-07-09T10:27:04.215Z · LW(p) · GW(p)

This comment by JoshuaZ was, when I saw it, voted down to -3, despite the fact that it

  • addresses the question it's responding to
  • gives good reasons for making the guess JoshuaZ said he made
  • seems like it's at a pretty well calibrated level of confidence
  • is polite, on topic, and coherent.

I downvoted that comment and encourage others to feel free to downvote any comment of a type they would prefer not to see on Less Wrong. Most (but not quite all) cases of people using social politics to force their preferences onto others are things I wish to see less of. This includes but is not limited to sex politics. Being 'on topic' is no virtue when the topic itself is toxic.

At your prompting I have downvoted the parent of the comment in question. To whatever extent a comment is justified by being an answer to a question, the asking of said question must assume responsibility. For the same reason I have no objection if others choose to downvote my own contributions to that thread for what could be considered "Feeding". Kawoomba has a good point.

Note: This is a different issue to the systematic downvoting of a user on all subjects (sometimes referred to as 'karma assassination'). That is universally considered an abuse of the system. However the example you give only demonstrates your subjective disagreement with the evaluations of some others regarding the desirability of a particular comment.

You have defined your campaign by references to "this sort of abuse" where 'this' refers to comments like the example comment being downvoted. As such I cannot support it. People are allowed to not like stuff and vote it down. If you had instead made your campaign to be against karma-assassination then I would support it. I myself have lost several thousand karma in bursts like that. I suggest revision.

Replies from: gjm
comment by gjm · 2013-07-09T12:04:06.736Z · LW(p) · GW(p)

This is a different issue to [...] 'karma assassination'

It should be mentioned explicitly here -- as it has been in the discussion in the other thread, and as I know you have seen since you replied to it -- that JoshuaZ reports precisely the sort of "karma assassination" behaviour you describe, in connection with the same topic. It's because of that context that I think it likely that the highly negative score of his comment is at least partly the result of punitive downvoting aimed at him rather than at his comment specifically.

I shall amend my comment upthread to mention this context, which I agree is relevant.

Yes, it may be reasonable to decide that a particular topic is toxic and try to discourage all posting on that topic. (Though I think occasional comments arguing for dropping the subject would be a far better way of doing that than flinging downvotes around.) But that is plainly not what was happening, because only comments on one side of the issue were sitting at gratuitously low values relative to, for want of a better term, their topic-agnostic merit. (Your own description of your own actions is some evidence for this: you had downvoted that comment but not its parent.)

I am not sure why you think the word "campaign" is appropriate, though I can see why you might find it rhetorically convenient. I see what looks to me like a systematic attempt to stop LW participants expressing certain sorts of opinion, through intimidation rather than argument, I think that's bad, and I have said so a couple of times and expressed willingness to help technically if others agree with me. That's a "campaign"?

Replies from: wedrifid
comment by wedrifid · 2013-07-09T12:58:24.487Z · LW(p) · GW(p)

It should be mentioned explicitly here -- as it has been in the discussion in the other thread, and as I know you have seen since you replied to it -- that JoshuaZ reports precisely the sort of "karma assassination" behaviour you describe, in connection with the same topic.

I agree and even considered mentioning JoshuaZ's reports as further evidence that not only is there a distinction between the two but that in this case there are probably far more representative targets you could point to which would allow you to champion your (inferred, primary) cause without hindrance due to the actual expressed petition.

But that is plainly not what was happening, because only comments on one side of the issue were sitting at gratuitously low values relative to, for want of a better term, their topic-agnostic merit.

'Topic-agnostic merit' would be misleading. If the topic were to go through a translation device that removed information about the topic but preserved reasoning style and social-political implications, the comments would still be easily distinguishable. For some, the problem with the topic is that it inevitably produces a certain type of thinking. It is not merely the topic that one must be agnostic to. It is better to refer simply to your subjective evaluation of merit, which is at least unarguable.

I am not sure why you think the word "campaign" is appropriate

It seems the best word for it. By all means provide a better word or phrase that expresses the same thing with sufficiently few words. The word is connotatively neutral to me. You have my support in your campaign conditional on it being a campaign against karma-assassination and not a campaign against downvoting the kind of comment that you mentioned. I am declaring my own position against the latter, but pointing at the karma-assassination target as a way for you to achieve your goal without opposition or controversy.

, though I can see why you might find it rhetorically convenient.

No, I'm good and virtuous and you are sinister and I see through you because I am insightful and sophisticated! (i.e. I reject those connotations, but let's move on. We're both being as forthright and straightforward as we can be here, not deviously rhetorical.)

Replies from: gjm, gjm
comment by gjm · 2013-07-09T13:15:04.797Z · LW(p) · GW(p)

The particular comment I linked to was primarily addressing the question: What fraction of HPMOR readers are female? It justified a guess (explicitly stated to be only a guess) that the fraction is on the order of 1/2 by observing (1) that among HPMOR readers of the writer's own acquaintance the fraction is close to that, even though the writer knows substantially more men than women, and (2) that the readership of fanfiction generally skews female.

Even without any translation of that comment -- with nothing other than removing it from the context of an argument with the word "feminism" in it -- what about it would be "easily distinguishable" or exhibit a problematic "certain type of thinking"? What about its social-political implications would be unusual?

I suggest that the answer is: Nothing at all. (Which is one reason why I chose that comment as providing evidence that at least some of JoshuaZ's recent downvoters have been playing the man rather than the ball.)

comment by gjm · 2013-07-09T15:08:12.468Z · LW(p) · GW(p)

(I think the bits about the term "campaign" weren't there when I replied before, hence the separate reply.)

OK, all noted. How about the following? (Which I hope will both clarify my position and get past debates about potentially tendentious terminology.)

  • I have a proposal that LW should have, and actively enforce, strong community norms against would-be intimidatory mass-downvoting that isn't (even in principle) justified by the demerits of the comments being downvoted.
  • You (if I've understood you right) support this proposal.
  • I am not proposing that LW should have or enforce norms against downvoting-comments-gjm-dislikes, or anything of that kind,
  • and of course I would not expect you to support any such proposal.
  • I am also not proposing that LW should have or enforce norms against downvoting comments on the basis of their subject matter (as opposed to their merits given the subject matter)
    • and -- though I failed, at least initially, to make this clear -- I don't think it credible that that alone explains what I've been characterizing as would-be intimidatory downvoting of JoshuaZ's comments.
  • And, again, I would not expect you to support any such norms, and I understand your reasons for wanting to oppose what you took to be such a campaign on my part.
  • Though, as it happens, I dislike many instances of this sort of topic-based downvoting and think it would be a bad thing if there were a concerted effort to prevent LW discussions of sexism, feminism, etc., in connection with HPMOR. I understand that you disagree, and repeat that what I am proposing is not for this sort of topic-based downvoting to be prevented, forbidden, or punished.

[EDITED once shortly after posting, for clarity only.]

Replies from: wedrifid
comment by wedrifid · 2013-07-09T15:50:22.653Z · LW(p) · GW(p)

I have a proposal: that LW should have, and actively enforce, strong community norms against would-be intimidatory mass-downvoting that isn't (even in principle) justified by the demerits of the comments being downvoted.

'Intimidation' is a word like 'manipulation' in as much as it refers to influence-provoking behaviours in a way that is somewhat fuzzy and depends rather a lot on desired connotations. 'Intimidation' is inherent in the purpose of the karma system. Users are granted the (trivial) power to use downvotes against comments so that users are intimidated out of posting things that aren't wanted.

There are instances of intimidation that are undesirable and others that work as intended. We both acknowledge that some downvotes that cause intimidation need preventing. We disagree on some cases. I'd also perhaps focus on the 'punitive' and 'systematic' aspects more so than 'intimidatory'.

and -- though I failed, at least initially, to make this clear -- I don't think it credible that that alone explains what I've been characterizing as would-be intimidatory downvoting of JoshuaZ's comments.

Conditional on JoshuaZ having recently experienced karma-assassination it seems unlikely that this comment would be an exception. Yet there are other obvious influences and confounding factors (topic-based downvoting for instance) that are also at play (as my testimony indicates). That is enough for me to force a clarification. I am happy with the specification you provided in the parent. Best of luck with your efforts!

Though, as it happens, I dislike many instances of this sort of topic-based downvoting and think it would be a bad thing if there were a concerted effort to prevent LW discussions of sexism, feminism, etc., in connection with HPMOR.

I thought similarly once. Observations of many such conversations changed my mind. Fortunately Reddit exists. There are plenty of other places to take the mind killing where it would be far more relevant and suitable.

comment by gjm · 2013-07-09T09:42:08.219Z · LW(p) · GW(p)

If the information needed to take action against this sort of abuse is difficult to do anything with because it requires grovelling through whatever database underlies LW, I hereby volunteer (if told it would be useful by someone with power to use it) to make whatever software enhancements to the LW code are required to make it easy.

(I have no experience with the LW codebase but am an experienced software person. Getting-started pointers would be welcome if anyone takes me up on that offer.)

comment by iceman · 2013-07-13T23:10:47.609Z · LW(p) · GW(p)

There is now fanfic about Eliezer in the Optimalverse. I'm not entirely sure what to make of it.

Replies from: gwern
comment by gwern · 2013-07-13T23:31:47.071Z · LW(p) · GW(p)

For some strange reason, it rather resembled the Hogwarts School of Witchcraft and Wizardry. So whoever did this knew about Hermione Granger and the Burden of Responsibility. That wasn’t much comfort...Someone was mocking him, or at least mocking his self-insert as Godric Gryffindor. The alicorn pony he had become sighed.

:)

comment by Ratcourse · 2013-07-03T14:00:16.497Z · LW(p) · GW(p)

In response to this post: http://www.overcomingbias.com/2013/02/which-biases-matter-most-lets-prioritise-the-worst.html

Robert Wiblin got the following data (treated by a dear friend of mine):

89 Confirmation bias

54 Bandwagon effect

50 Fundamental attribution error

44 Status quo bias

39 Availability heuristic

38 Neglect of probability

37 Bias blind spot

36 Planning fallacy

36 Ingroup bias

35 Hyperbolic discounting

29 Hindsight bias

29 Halo effect

28 Zero-risk bias

28 Illusion of control

28 Clustering illusion

26 Omission bias

25 Outcome bias

25 Neglect of prior base rates effect

25 Just-world phenomenon

25 Anchoring

24 System justification

24 Dunning-Kruger effect

23 Projection bias

23 Mere exposure effect

23 Loss aversion

22 Overconfidence effect

19 Optimism bias

19 Actor-observer bias

18 Self-serving bias

17 Texas sharpshooter fallacy

17 Recency effect

17 Outgroup homogeneity bias

17 Gambler's fallacy

17 Extreme aversion

16 Irrational escalation

15 Illusory correlation

15 Congruence bias

14 Self-fulfilling prophecy

13 Lake Wobegon effect

13 Selective perception

13 Impact bias

13 Choice-supportive bias

13 Attentional bias

12 Observer-expectancy effect

12 False consensus effect

12 Endowment effect

11 Rosy retrospection

11 Information bias

11 Conjunction fallacy

11 Anthropic bias

10 Focusing effect

10 Déformation professionnelle

08 Positive outcome bias

08 Ludic fallacy

08 Egocentric bias

07 Pseudocertainty effect

07 Primacy effect

07 Illusion of transparency

06 Trait ascription bias

06 Hostile media effect

06 Ambiguity effect

04 Unit bias

04 Post-purchase rationalization

04 Notational bias

04 Effect)

04 Contrast effect

03 Subadditivity effect

03 Von Restorff effect

02 Illusion of asymmetric insight

01 Reminiscence bump

Replies from: gwern, curiousepic, Normal_Anomaly
comment by gwern · 2013-08-23T16:06:27.055Z · LW(p) · GW(p)

One could try ranking biases by the size of the correlation between susceptibility-to-the-bias and damaging behavior, for example, using the correlations in http://lesswrong.com/lw/ahz/cashing_out_cognitive_biases_as_behavior/

comment by curiousepic · 2013-08-23T14:51:27.599Z · LW(p) · GW(p)

This is totally worth a discussion post.

comment by Normal_Anomaly · 2013-07-03T23:56:20.825Z · LW(p) · GW(p)

You may not have noticed when you posted this, but the formatting of your post didn't show up like I think you may have wanted, with the result that it's hard to read. (If you're wondering, it takes 2 carriage returns to get a line break out.)

If you intended the comment to look like it does, I apologize for bothering you.

Replies from: Ratcourse
comment by Ratcourse · 2013-07-04T19:44:46.290Z · LW(p) · GW(p)

Corrected, thank you

comment by Ratcourse · 2013-07-02T14:14:57.407Z · LW(p) · GW(p)

How do you correct your mistakes?

For example, I recently found out I did something wrong at a conference. In my bio, in areas of expertise I should have written what I can teach about, and in areas of interest what I want to be taught about. This seems to maximize value for me. How do I keep that mistake from happening in the future? I don't know when the next conference will happen. Do I write it on anki and memorize that as a failure mode?

More generally, when you recognize a failure mode in yourself how do you constrain your future self so that it doesn't repeat this failure mode? How do you proceduralize and install the solution?

Replies from: moridinamael, maia, Douglas_Knight, wadavis
comment by moridinamael · 2013-07-02T17:53:38.106Z · LW(p) · GW(p)

For a while I was in the habit of putting my little life lessons in the form of Anki cards and memorizing them. I would also memorize things like conflict resolution protocols and checklists for depressive thinking. Unfortunately it didn't really work, in the sense that my brain consistently failed to recall the appropriate knowledge in the appropriate context.

I tried using an iOS app called Lift, but I found it difficult to use and not motivating.

I also tried using an iOS app called Alarmed to ping me throughout the day with little reminders like "Posture" and "Smile" and "Notice" to improve my posture, attitude, and level of mindfulness, respectively. This worked better but I eventually got tired of my phone buzzing so often with distracting, non-critical information and turned off the reminders.

My very first post on LessWrong was about proceduralizing rationality lessons; I think it's one of the biggest blank spots in the curriculum.

Replies from: Ratcourse
comment by Ratcourse · 2013-07-03T14:08:25.563Z · LW(p) · GW(p)

Yes, a blank spot and one that makes everything else near-useless. This needs to be figured out.

comment by maia · 2013-07-03T01:54:08.577Z · LW(p) · GW(p)

I'm not sure this applies to your particular situation, but a general solution for proceduralizing behaviors that was discussed at minicamp (and which I'd actually done before) is: Trigger yourself on a particular physical sensation, by visualizing it and thinking very hard about the thing you want yourself to remember. So an example would be if you want to make sure you do the things on your to-do list as soon as you get home, spend a few minutes trying to visualize with as much detail as you can what the front door of your house looks like, and recall what it feels like to be stepping through it, and think about "To do list time!" at the same time. (Or if you have access to your front door at the time you're trying to do this, actually stepping through your front door repeatedly while thinking about this might help too.)

And if there's some way to automate it, then of course that's ideal, though you said you don't know when the next conference will happen so that's more difficult.

Or another kind of automation: maybe you could save the bio you wrote in a Word document, and write a reminder in it to add the edits you want... or just do them now, and save the bio for future use. Then all you have to remember is that you wrote your bio already. Which is another problem, but conceivably a smaller one: I don't know about your hindbrain, but upon being told it had to write a bio, mine would probably be grasping at ways to avoid doing work, and having it done already is an easy out.

Replies from: Ratcourse
comment by Ratcourse · 2013-07-03T14:07:56.971Z · LW(p) · GW(p)

That automation makes sense, thank you. Trying to think of how to generalize it, and how to merge it with the first suggestion.

comment by Douglas_Knight · 2013-07-03T02:15:35.030Z · LW(p) · GW(p)

For a problem like this, remembering for something rare in the indefinite future, the important thing is to remember at that time that you know something. At that point, if you've put it in a reasonable place, you can find it. It seems to me that the key problem is the jump from "have to write a bio" to "how to write a bio," that is, making sure you pause and think about what you know or have written down somewhere. Some people claim success with Anki here, but it doesn't make sense to me.

What most people do with bios is that they reuse them, or at least look at the old one whenever needing a new one. As Maia says, if you write an improved bio now, you can find it next time, when you look for the most recent version. But that doesn't necessarily help remember why it was an improvement. If you have a standard place for bios, you can store lots of variants (lengths, types of conferences, resume, etc), along with instructions on what distinguishes them. But I think what most people do is search their email for the last one they submitted. If you can't learn to look in a more organized place, you could send yourself an email with all the bios and the instructions, so that it comes up when you search email.

Replies from: Ratcourse
comment by Ratcourse · 2013-07-03T14:07:02.883Z · LW(p) · GW(p)

Anki doesn't work for me on this, agreed. The above suggestion seems to dominate this one.

comment by wadavis · 2013-07-08T20:56:48.530Z · LW(p) · GW(p)

Discuss the failure in person, face to face, with a helpful colleague. Admit your failure. Make a conversation out of it. Brainstorm the fixes you've already made to the bio and any others that come up. Let the conversation have an attached emotion, whether it be a feel good problem solving session or a public shaming.

Memories with emotions stick around better.

I found that even the small act of saying "Yep, I did forget to update the new widget part number" to a supervisor / team-member helps me remember to do so in the future.

comment by sixes_and_sevens · 2013-07-02T13:30:25.801Z · LW(p) · GW(p)

I've been thinking about tacit knowledge recently.

A very concrete example of tacit knowledge that I rub up against on a regular basis is a basic understanding of file types. In the past I have needed to explain to educated and ostensibly computer-literate professionals under the age of 40 that a jpeg is an image, and a PDF is a document, and they're different kinds of entities that aren't freely interchangeable. It's difficult for me to imagine how someone could not know this. I don't recall ever having to learn it. It seems intuitively obvious. (Uh-oh!)

So I wonder if there aren't some massive gains to be had from understanding tacit knowledge more than I do. Some applications:

  • Being aware of the things I know which are tacit knowledge, but not common knowledge
  • Building environments that impart tacit knowledge, (eg. through well-designed interfaces and clear conceptual models)
  • Structuring my own environment so I can more readily take on knowledge without apparent effort
  • Imparting useful memes implicitly to the people around me without them noticing

What do you think or know about tacit knowledge, LessWrong? Tell me. It might not be obvious.

Replies from: kpreid, Douglas_Knight, Qiaochu_Yuan, elharo
comment by kpreid · 2013-07-03T03:32:34.172Z · LW(p) · GW(p)

a jpeg is an image, and a PDF is a document

Sir, you are wrong on the internet. A JPEG is a bitmap (formally, pixmap) image. A PDF is a vector image.

The PDF has additional structure which can support such functionality as copying text, hyperlinks, etc, but the primary function of a PDF is to represent a specific image (particularly, the same image whether displayed on screen or on paper).

Certainly a PDF is more "document"-ish than a JPEG, but there are also "document" qualities a PDF is notably lacking, such as being able to edit it and have the text reflow appropriately (which comes from having a structure of higher-level objects like "this text is in two columns with margins like so" and "this is a figure with caption" and an algorithm to do the layout). To say that there is a sharp line and that PDF belongs on the "document" side is, in my opinion, a poor use of words.

(Yes, this isn't the question you asked.)

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2013-07-03T09:51:47.107Z · LW(p) · GW(p)

I'm not sure I want to get into an ontological debate on whether a PDF is a document or not, but I believe the fact that it's got the word "document" in its name and is primarily used for representing de facto documents makes my original statement accurate to several orders of approximation.

comment by Douglas_Knight · 2013-07-02T21:39:56.754Z · LW(p) · GW(p)

That isn't the standard use of "tacit knowledge." At least it doesn't match the definition. Tacit knowledge is supposed to be about things that are hard to communicate. The standard examples are physical activities.

Maybe knowing when to pay attention to file extensions is tacit knowledge, but the list of what they mean is easy to write down, even if it is a very long list. Knowing that it is valuable to know about them is probably the key that these people were missing, or perhaps they failed to accurately assess the detail and correctness of their beliefs about file types.

comment by Qiaochu_Yuan · 2013-07-02T18:48:57.798Z · LW(p) · GW(p)

Unfortunately, everything I know about tacit knowledge is tacit.

Replies from: BerryPick6
comment by BerryPick6 · 2013-07-03T08:50:39.498Z · LW(p) · GW(p)

How do you know that?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-03T08:57:45.127Z · LW(p) · GW(p)

Not everything I know about what I know about tacit knowledge is tacit!

Replies from: BerryPick6
comment by BerryPick6 · 2013-07-03T13:36:26.456Z · LW(p) · GW(p)

This conversation just metacitasized.

It's okay, I'll show myself out.

comment by elharo · 2013-07-05T13:38:39.902Z · LW(p) · GW(p)

Uh-oh indeed. Like most statements involving the word "is", this is probably one of those questions that should be dissolved. Thus I will ask:

What do you mean when you say document? I.e. what are the characteristics that a document has which a JPEG file does not, and which a PDF does have? Why is it wrong for something that is an image to also be a document?

Replies from: Viliam_Bur, sixes_and_sevens
comment by Viliam_Bur · 2013-07-06T11:27:41.704Z · LW(p) · GW(p)

I'll try: You don't need OCR to get the words out of the document. An image is just dots and/or geometric shapes. (Which would make a copy-protected PDF not a document.)

comment by sixes_and_sevens · 2013-07-05T14:31:20.638Z · LW(p) · GW(p)

This seems to be actively running away from the point. Also, see the other response re: my lack of interest in this particular ontological discussion.

In my example, there's also a concrete reason to distinguish between images and documents. The image is going to be embedded on a webpage, where people will simply look at it. Meanwhile, the document is going to be printed off as an actual physical document. Their respective formats are generally optimised for these different purposes.

comment by Kaj_Sotala · 2013-07-09T16:59:39.123Z · LW(p) · GW(p)

I just noticed the Recent Karma Awards link in the sidebar. Has it been there for long?

Replies from: Martin-2
comment by Martin-2 · 2013-07-09T23:03:07.465Z · LW(p) · GW(p)

At least a few months.

comment by Fhyve · 2013-07-03T06:13:52.474Z · LW(p) · GW(p)

How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:

  • incidentally name drop my local rationalist meetup group, (ie. "I am going to a rationalist's meetup on Sunday")

  • link to lesswrong articles whenever relevant (rarely)

  • be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)

  • when asked, motivate rationality by indicating a whole bunch of cognitive biases, and how we don't naturally have principles of correct reasoning, we just do what intuitively seems right

This is quite passive (other than name dropping and article linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight up linking to Lesswrong, because the first thing they go to is The Simple Truth and immediately get turned off by it (The Simple Truth shouldn't be the first post in the first sequence that you are recommended to read on Lesswrong). This has happened a number of times.

Replies from: Risto_Saarelma, elharo, Viliam_Bur, NancyLebovitz
comment by Risto_Saarelma · 2013-07-03T08:00:12.614Z · LW(p) · GW(p)

This sounds like you think of them as mooks you want to show the light of enlightenment to. The sort of clever mathy people you want probably don't like to think of themselves as mooks who need to be shown the light of enlightenment. (This also might be sort of how I feel about the whole rationalism as a thing thing that's going on around here.)

That said, actually being awesome for your target audience's values of awesome is always a good idea to make them more receptive to looking into whatever you are doing. If you can use your rationalism powers to achieve stuff mathy university people appreciate, like top test scores or academic publications while you're still an undergraduate, your soapbox might be a lot bigger all of a sudden.

Then again, it might be that rationalism powers don't actually help enough in achieving this, and you'll just give yourself a mental breakdown while going for them. The math-inclined folk, who would like publication writing superpowers, probably also see this as the expected result, so why should they buy into rationality without some evidence that it seems to be making people win more?

Replies from: Fhyve
comment by Fhyve · 2013-07-03T10:06:45.200Z · LW(p) · GW(p)

To be honest, unless they have exceptional mathematical ability or are already rationalists, I will consider them to be mooks. Of course, I won't make that apparent; it is rather hard to make friends that way. Acknowledging that you are smart is a very negative signal, so I try to be humble, which can be awkward in situations like when only two out of 13 people passed a math course that you are in, and you got an A- and the other guy got a C-.

And by the way, rationality, not rationalism.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-07-03T10:50:45.815Z · LW(p) · GW(p)

Incidentally, what exactly makes a person already be a rationalist in this case?

Replies from: Fhyve
comment by Fhyve · 2013-07-03T19:51:25.810Z · LW(p) · GW(p)

Pretty much someone who has read the Lesswrong sequences. Otherwise, someone who is unusually well read in the right places (cognitive science, especially biases; books like Good and Real and Causality), and demonstrates that they have actually internalized those ideas and their implications.

Replies from: metatroll, elharo
comment by metatroll · 2013-07-04T00:56:07.274Z · LW(p) · GW(p)

Related question: how can I upgrade myself from someone who trolls robo-"rationalists" that think acquaintance with a particular handful of concepts, buzzwords, and habits of thought is a mark of superiority rather than just a mark of difference, to a superbeing faster than a speeding singularity who can separate P from NP in a single bound?

comment by elharo · 2013-07-05T13:13:21.046Z · LW(p) · GW(p)

Rationality is about how you think, not how you got there. There have been many rational people throughout history who have read approximately none of that.

Replies from: Fhyve, Desrtopa
comment by Fhyve · 2013-07-06T06:29:31.420Z · LW(p) · GW(p)

I am mostly talking about epistemic rationality, not instrumental rationality. With that in mind, I wouldn't consider anyone from a hundred years ago or earlier to be up to my epistemic standards because they simply did not have access to the requisite information, ie. cognitive science and Bayesian epistemology. There are people that figured it out in certain domains (like figuring out that the labels in your mind are not the actual things that they represent), but those people are very exceptional and I doubt that I will meet people that are capable of the pioneering, original work that these exceptional people did.

What I want are people who know about cognitive biases, understand why they are very important, and have actively tried to reduce the effects of those biases on themselves. I want people who explicitly understand the map and territory distinction. I want people who are aware of truth-seeking versus status arguments. I want people who don't step on philosophical landmines and don't get mindkilled. I would not expect someone to have all of these without having at least read some of Lesswrong or the above material. They might have collected some of these beliefs and mental algorithms on their own, but it is highly unlikely that they came across all of them.

Is that too much to ask? Are my standards too high? I hope not.

comment by Desrtopa · 2013-07-05T13:18:12.001Z · LW(p) · GW(p)

Eh, without adopting particularly unconventional (for this site) standards, you could reasonably say that there have been very few rational people throughout history (or none.)

There's a reason people on this site use the phrase "I'm an aspiring rationalist."

comment by elharo · 2013-07-05T13:08:11.662Z · LW(p) · GW(p)

Taboo "rationalist". That is, don't make it sound like this is a group or ideology anyone is joining (because, done right, it isn't.)

Discuss, as appropriate, cognitive biases and specific techniques. E.g. planning fallacy, "I notice I am confused", "what do you think you know and why do you think you know it?", confirmation bias, etc.

Tell friends about cool books you've read like HPMoR, Thinking Fast and Slow, Predictably Irrational, Getting Things Done, and so forth. If possible read these books on paper (not ebooks) where your friends can see what you're reading and ask you about them.

comment by Viliam_Bur · 2013-07-03T12:39:44.092Z · LW(p) · GW(p)

The problem with rationality is that unless you are already at some level, you don't feel like you need to become more rational. And I think most people are not there, even the smart ones. Seems to me that smart people often realize they lack some specific knowledge, but they don't go meta and realize that they lack knowledge-gathering and -filtering skills. (And that's the smart people. The stupid ones only realize they lack money or food or something.) How do you sell something to a person who is not interested in buying?

Perhaps we could make a selection of LW articles that can be interesting even for people not interested in rationality. Less meta, less math. The ones that feel like "this website could help me make more money and become more popular". Then people become interested, and perhaps then they become interested more meta -- about a culture that creates this kind of articles.

(I guess that even for math-inclined people the less mathy articles would be better. Because they can find math in a thousand different places; why should they care specifically about LW?)

As a first approximation: The Science of Winning at Life and Living Luminously.

comment by NancyLebovitz · 2013-07-03T15:06:53.380Z · LW(p) · GW(p)

How about bringing up specific bits of rationality when you talk with them? If they talk about plans, ask them how much they know about how long that sort of project is likely to take. If they seem to be floundering with keeping track of what they're thinking, encourage them to write the bits and pieces down.

If any of this sort of thing seems to register, start talking about biases and/or further sources of information.

This is a hypothetical procedure-- thanks for mentioning that The Simple Truth isn't working well as an introduction.

comment by anotherblackhat · 2013-07-11T23:20:57.369Z · LW(p) · GW(p)

There's a scam I've heard of;

Mallet, a notorious swindler, picks 10 stocks and generates all 1024 possible combinations of "stock will go up" vs. "stock will go down" predictions. He then gives one prediction sheet to each of 1024 different investors. One of the investors receives a perfect, 10 out of 10 prediction sheet and is (Mallet hopes) convinced Mallet is a stock-picking genius.

Since it's related to the Texas sharpshooter fallacy, I'm tempted to call this the Texas stock-picking scam, but I was wondering if anyone knew a "proper" name for it, and/or any analysis of the scam.
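
The arithmetic behind the scam is easy to check in code. A minimal sketch (variable names are my own) of why exactly one of the 1024 sheets must come out perfect, whatever the market does:

```python
import itertools
import random

STOCKS = 10  # Mallet picks 10 stocks

# Every possible up/down prediction sheet: 2**10 = 1024 of them.
sheets = list(itertools.product(["up", "down"], repeat=STOCKS))
assert len(sheets) == 1024

# Whatever the market actually does is itself one up/down pattern...
outcome = tuple(random.choice(["up", "down"]) for _ in range(STOCKS))

# ...so exactly one investor's sheet matches it perfectly.
perfect = [s for s in sheets if s == outcome]
print(len(perfect))  # 1
```

Mallet pays nothing for the 10-for-10 hit; the 1023 failed sheets are simply never mentioned to the lucky mark.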

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-12T05:31:22.840Z · LW(p) · GW(p)

Derren Brown demonstrated this scam on TV and called it The System. That might help you track down a name.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-12T23:28:39.611Z · LW(p) · GW(p)

I've got one cite of it as the Perfect Prediction scam, but it doesn't seem to be a standard name.

This scam shows up in an R. A. Lafferty story, with the suggestion that if you're good at picking winners, you'll get a better return by not doing a fifty-fifty split at each stage.

I've checked some pages about types of frauds, and that one doesn't show up. I'm guessing that it's just too much work compared to other sorts of fraud.

comment by lukeprog · 2013-07-13T05:07:43.921Z · LW(p) · GW(p)

Miles Brundage recently pointed me to these quotes from Ed Fredkin, recorded in McCorduck (1979).

On speed of thought:

Say there are two artificial intelligences... When these machines want to talk to each other, my guess is they'll get right next to each other so they can have very wide-band communication. You might recognize them as Sam and George, and you'll walk up and knock on Sam and say, "Hi, Sam. What are you talking about?" What Sam will undoubtedly answer is, "Things in general," because there'll be no way for him to tell you. From the first knock until you finish the "t" in about, Sam probably will have said to George more utterances than have been uttered by all the people who have ever lived in all of their lives. I suspect there will be very little communication between machines and humans, because unless the machines condescend to talk to us about something that interests us, we'll have no communication.

On whether advanced AIs will share our goals:

Eventually, no matter what we do there'll be artificial intelligences with independent goals... There may be a way to postpone it. There may even be a way to avoid it, I don't know. But it's very hard to have a machine that's a million times smarter than you as your slave.

On basement AI:

Today I can buy a machine for five dollars that's better than one costing five million dollars twenty years ago... [One day] a paper boy with his route money will be able to save up in a month and buy such a machine. Thus anybody will have the necessary hardware to do AI pretty soon; it will be like a free commodity.

Now, under those circumstances, it's possible that some mad genius, some Newton-like person, even a kid working by himself, could make tremendous progress. He could develop AI all by himself, relying on what others do, but building it in private rather than at a big institution like MIT. And the application of such a machine would be irresistible. How could you avoid this? You can't license computers; that never was practical... If you made the use of electricity in any way a capital offense, worldwide and suddenly, and you did it immediately... then perhaps you could prevent this from happening. But anything short of that isn't going to do it, because you won't need a laboratory with big government funding very soon - that's only a temporary phase we're passing through. So what Joe Weizenbaum would like to do is impossible - it's bringing time to a halt, and it can't be done. What we can do is make the future more secure for human beings by being reasonable about how you bring AI about, and the only reasonable course is to work on this problem in a way that promises to be best for all of society, and not just for some singular mad genius.

On the risk of bad guys getting AI first:

What's equally frightening is that the world has developed means for destroying itself in a lot of different ways, global ways. There could be a thermonuclear war or a new kind of biological hazard or what-have-you. That we'll come through all this is possible but not probable unless a lot of people are consciously trying to avoid the disaster. McCarthy's solution of asking an artificial intelligence what we should do presumes the good guys have it first. But the good guys might not. And pulling the plug is no way out. A machine that smart could act in ways that would guarantee that the plug doesn't get pulled under any circumstances, regardless of its real motives... I mean, it could toss us a few tidbits, like the cure for this and that.

I think there are ways to minimize all this, but the one thing we can't do is to say well, let's not work on it. Because someone, somewhere, will. The Russians certainly will - they're working on it like crazy, and it's not that they're evil, it's just that they also see that the guy who first develops a machine that can influence the world in a big way may be some mad scientist living in the mountains of Ecuador. And the only way we'd find out about some mad scientist doing artificial intelligence in the mountains of Ecuador is through another artificial intelligence doing the detection. Society as a whole must have the means to protect itself against such problems, and the means are the very same things we're protecting ourselves against.

On trying to raise awareness of AI risks:

I can't persuade anyone else in the field to worry this way... They get annoyed when I mention these things. They have lots of attitudes, of course, but one of them is, "Well yes, you're right, but it would be a great disservice to the world to mention all this."...

...my colleagues only tell me to wait, not to make my pitch until it's more obvious that we'll have artificial intelligences. I think by then it'll be too late. Once artificial intelligences start getting smart, they're going to be very smart very fast. What's taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences. If that happens at Stanford, say, the Stanford AI lab may have immense power all of a sudden. It's not that the United States might take over the world, it's that Stanford AI Lab might.

Replies from: lukeprog
comment by lukeprog · 2013-07-13T05:19:43.846Z · LW(p) · GW(p)

Later in that chapter, McCorduck quotes Marvin Minsky as saying:

...we have people who say we've got to solve problems of poverty and famine and so forth, and we shouldn't be working on things like artificial intelligence... [But I think] we should have a certain number of people worrying about... whether artificial intelligence will be a huge disaster some day or be one of the best events in the universe...

...You might be the only one who can help with the disaster that's going to happen [decades from now], and if you don't prepare yourself, and instead just go off into some social welfare project right now, who will do it then? ...Yes, I feel that there's a great enterprise going on which is making the world of the future all right.

...which sounds eerily like a pitch for MIRI.

Unfortunately, Minsky did not then rush to create the MIT AI Safety Lab.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-13T16:55:36.272Z · LW(p) · GW(p)

I don't think that's a legitimate "Unfortunately". If you're not inspired and an approach doesn't pop into your head, throwing money at the problem until you get some grad students who couldn't get a postdoc elsewhere is not necessarily going to be productive, can indeed be counterproductive, and Minsky would legitimately know that.

Replies from: lukeprog
comment by lukeprog · 2013-07-13T17:03:10.919Z · LW(p) · GW(p)

Okay, then: "Unfortunately, Minsky was not then inspired, by a reasonable approach to the problem, to create the MIT AI Safety Lab."

comment by sixes_and_sevens · 2013-07-02T13:03:26.946Z · LW(p) · GW(p)

Related to "magic phrases", what expressions or turns of phrase work for you, but don't work well for a typical real-world audience?

I tend to use "it's not magic" as shorthand for "it's not some inscrutable black-boxed phenomenon that defies analysis and reasoning". Moreover, I seem to have internalised this as a reaction whenever I hear someone describing something as if it were such a phenomenon. Using the phrase generally doesn't go down well, though.

comment by NancyLebovitz · 2013-07-13T17:48:38.380Z · LW(p) · GW(p)

On why playing hard to get is a bad idea, and why a lot of women do it.

This was something I was meaning to post about in some of the gender discussions, but I wasn't sure that a significant proportion of men were still put off by women who were direct about wanting sex with them-- but apparently, it's still pretty common.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-14T09:58:33.463Z · LW(p) · GW(p)

There is a difference between "I want sex with you specifically (because you attract me)", and "I want sex with anyone (and you are the nearest one)". For me, the former would feel nice, but the latter would feel... creepy.

This may be another situation of not being specific: when women report that "men were put off by them being direct about wanting sex with them", I don't know which of these situations it was. Also, it depends on context: there is a difference between getting a sex offer from a friendly person in a romantic situation and getting one from an unknown, heavily drunk woman at a disco (this happened to me, and yes, I was put off). These details change the situation, but are usually not reported, because of course the goal of the report is to make the other people seem horrible.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-14T14:57:04.099Z · LW(p) · GW(p)

Another possibility is that if a woman makes a courteous and straightforward statement of interest, there's no guarantee that the man is likewise interested, but she might interpret his disinterest as a consequence of her being straightforward, rather than as a sign that he was never going to reciprocate.

From the comments, and I admit there was less than I thought there was going to be:

I remember when I first started dating my fiancé (four years ago now!), he told me that if a woman tells a man she likes him before he makes a move, he'll stop liking her. I thought this was supremely stupid, so I told my dad about it in a can-you-believe-he-said-that-roll-eyes fashion, but my dad actually told me he thinks that's true! I definitely want to break that train of thought in the next generation. It's really stupid, like teaching women that speaking up makes them unattractive. And seriously, if I hadn't been direct about my feelings when I started dating my fiancé, it would've taken forever for our relationship to become more serious, because who wants to risk rejection by asking out someone who has given no indication that they like you?

**

It's fascinating to me because I'm recently divorced and back in the dating game, and I am failing miserably because I refuse to do this. I'm an honest and open person. I don't believe in playing games, and the games confuse me anyway. If I like someone, I tell them. And apparently this is now perceived as "coming on too strong" and is INTIMIDATING? My friends are convinced I push people away by being too forward. My gosh, I'm not saying, "Hello, nice to meet you, let's screw." I'm saying things like, "We've been talking for awhile and I think you're interesting. Let's get drinks sometimes." How is this intimidating? I think you hit the nail on the head.

**

One of the reasons women may not want to say yes immediately is the continuing social stigma attached to confident, self-assured women. I have always rejected the hard-to-get routine, and been clear in my requests for friendship or sex. Some men have been frightened by my assertiveness, but many have been relieved by it.

**

" I felt conflicted about being as blunt and up-front as I am about dating because it was contradictory to all the advice I'd always been given, but to do otherwise feels dishonest. However, everything paid off when I found my boyfriend. He's totally clueless when it comes to "playing the dating game" so he didn't push me, or try to read into what I said. He took my honesty at face value, and every "no" as a "no."

This one might be evidence-- it depends on what she meant by "everything paid off when I found my boyfriend". I'm inclined to think that her honesty didn't work a few times.

**

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-14T16:59:16.300Z · LW(p) · GW(p)

Being the one who approaches has many advantages, but it comes with a cost -- one must learn to deal with rejection. There is a difference between knowing, generally, "my attractiveness is probably average", and being specifically rejected by one specific, sympathetic person who seemed interested just a while ago but probably just wanted to talk.

Interpreting rejection as "these men are afraid of an honest / courageous woman" can help protect the ego. It could also be why the men said it -- to avoid giving offense or starting a confrontation. (Women also say various things that don't make much sense when what they mean is: "I don't consider you attractive.") I mean, if an extremely attractive woman approached those men, a lot of them would probably say yes and consider themselves lucky. (This is an experimentally testable prediction!)

comment by lukeprog · 2013-07-13T04:50:50.818Z · LW(p) · GW(p)

Oh neato. The class notes for a recent class by Minsky link to Intelligence Explosion: Evidence and Import under "Suggested excellent reading."

comment by Paul Crowley (ciphergoth) · 2013-07-11T19:36:54.931Z · LW(p) · GW(p)

Would be good to have a single central place for all CFAR workshop reviews, good and bad. Here's two:

Replies from: None, Qiaochu_Yuan
comment by [deleted] · 2013-07-12T06:01:29.827Z · LW(p) · GW(p)

How about the Wiki?

comment by skeptical_lurker · 2013-07-04T17:08:48.532Z · LW(p) · GW(p)

I've been talking to some friends who have some rather odd spiritual (in the sense of disorganised religion) beliefs. Odd because it's a combination of modern philosophy LW would be familiar with (acausal communication between worlds of Tegmark's level IV multiverse), ancient religion, and general weirdness. I have trouble putting my finger on exactly what is wrong with this reasoning, although I'm fairly sure there is a flaw, in the same way I'm quite sure I'm not a Boltzmann brain but find it hard to articulate why. So, if anyone is interested, here is the reasoning:

1) Dualism is wrong, due to major philosophical problems as well as Occam's razor

2) I think therefore I am, so I know that the 'mental world' exists.

3) Therefore Idealism is true, the mental world exists but the physical is just an illusion

4) In response to 'so why can't you fly?', the answer is a lack of mental discipline: after all, it's hard to control your thoughts

5) If two different people existed in the same universe, there is no reason why they would perceive the same illusions.

6) Therefore, each universe consists of one conscious observer and their illusory reality

7) But Tegmark's level IV multiverse is true, so we can acausally communicate between worlds; in fact all conversations are actually acausal communication between worlds.

8) This also implies there is reincarnation, of a sort - there is no body to die, so you just construct a new illusory reality.

From here on it gets into more standard 'spiritual' realms, although I did find it amusing when my friend told me that there are at least aleph-2 gods.

I should state that these beliefs are largely pointless, in that it's not obvious that they actually influence any decisions the believers make, and that they do seem to make people happy without any major downsides.

I should also make it clear that I don't believe this, because I wouldn't want to lose status as a rationalist by believing in something unpopular!

TL;DR

To a large extent, this boils down to: how do I distinguish between the hypothesis that the universe is lawful, and the hypothesis that the universe is determined by my beliefs and I believe it to be lawful?

Replies from: Viliam_Bur, Alejandro1, metatroll, Richard_Kennaway, wedrifid, shminux, Will_Newsome
comment by Viliam_Bur · 2013-07-06T16:53:17.257Z · LW(p) · GW(p)

In response to 'so why can't you fly?', the answer is a lack of mental discipline: after all, it's hard to control your thoughts

How do you know what you claim to know? (Okay, not you, but whoever said this.) Do you have any reproducible experimental proof of whatever violation of physical laws using mental discipline?

Isn't it suspicious that undisciplined thoughts are enough to create an illusion of physical reality perfectly obeying the physical laws, but are unable to violate the laws? That sounds to me like speaking about an archer who always perfectly hits the middle of the target, but is unable to shoot the arrow outside of the target, supposedly because he is too clumsy. I mean, isn't hitting the center of the target more difficult than missing the target? Wouldn't creating a reality perfectly obeying the laws of physics all the time require more mental discipline than having things happen randomly?

I am sure there can be a dozen ad-hoc explanations; I just wanted to show how it doesn't make sense.

This also implies there is reincarnation, of a sort - there is no body to die, so you just construct a new illusory reality.

So, if you get killed, your mental discipline will improve enough to let you create new reality you can't create now? Interesting...

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-11T00:09:40.305Z · LW(p) · GW(p)

How do you know what you claim to know? (Okay, not you, but whoever said this.) Do you have any reproducible experimental proof of whatever violation of physical laws using mental discipline?

To play devil's advocate ... do you have any reproducible experimental proof of someone believing that an event would happen which would violate the laws of physics, and the laws being upheld anyway?

Isn't it suspicious that undisciplined thoughts are enough to create an illusion of physical reality perfectly obeying the physical laws, but are unable to violate the laws?

Yes, I quite agree. It's also odd that I cannot play the violin, and yet other people can, which would imply that I can imagine people with knowledge that I don't have. If reality were an illusion, I would expect it to be a lot more like Wonderland.

However, we are dealing with priors and intuition here, in that we cannot run experiments: we can't get disembodied consciousnesses to imagine realities and then observe what they imagine. It's difficult to even run thought experiments, given that you would be trying to model something that supposedly works outside of physics.

So: if you have a prior belief that an illusory reality would be undisciplined (and I agree here), and someone else has a prior that this is not a problem, and that reductionism is highly implausible, how can this disagreement be resolved?

Even if both parties were perfect Bayesian reasoners, Aumann's agreement theorem doesn't apply, because there is no experimental evidence to update on. How can we determine which prior is correct? Perhaps we could agree that approximate Kolmogorov complexity provides an objective prior, although I think objections would be raised, but even in that case it doesn't help in practice unless you can actually calculate approximate Kolmogorov complexity.

So, if you get killed, your mental discipline will improve enough to let you create new reality you can't create now? Interesting...

I think the mental discipline is supposed to be needed to control reality, not to create it. Nevertheless, anything that allows one to escape death does make 'motivated cognition' spring to mind.

comment by Alejandro1 · 2013-07-29T00:32:53.222Z · LW(p) · GW(p)

It looks like you and your friends have rediscovered Leibniz's monadology. Leibniz believed that only minds were real, that matter as distinct from minds is an illusion, and that minds do not interact causally but seem to share the same "reality" by virtue of a "pre-established harmony" between their non-causally related experiences. This last part can perhaps be re-expressed in modern terms as acausal communication.

comment by metatroll · 2013-07-09T11:45:26.246Z · LW(p) · GW(p)

I guess the fact that I lack mental discipline is also the reason that I lack mental discipline, and the reason that lacking mental discipline causes me to lack mental discipline, too.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-10T23:44:47.413Z · LW(p) · GW(p)

Sorry, to clarify, are you saying that the reasoning is circular and thus faulty?

The thing about mental discipline is that it is circular, in that there are self-reinforcing cycles. If I have mental discipline, I can discipline myself to practice discipline more.

Replies from: metatroll
comment by metatroll · 2013-07-12T07:28:28.472Z · LW(p) · GW(p)

Lack of mental discipline is also the reason I can't answer your question without breaking character.

comment by Richard_Kennaway · 2013-07-11T12:46:55.680Z · LW(p) · GW(p)

Where do your friends get this stuff? Did they read the Sequences on LSD or something? Do they do anything differently in everyday life on account of it (besides talking about it)?

To a large extent, this boils down to: how do I distinguish between the hypothesis that the universe is lawful, and the hypothesis that the universe is determined by my beliefs and I believe it to be lawful?

How did you get the belief that it is lawful?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-11T22:55:18.813Z · LW(p) · GW(p)

Where do your friends get this stuff? Did they read the Sequences on LSD or something?

I doubt it, for the sequences are very long and I don't think one's attention span would hold while tripping. They might have read David Lewis on LSD.

It comes from many different places. Friend A got here through psychedelics and Schrodinger, friend B through their families' Hindu beliefs dating back thousands of years. Oddly enough, they mostly agree with each other.

Do they do anything differently in everyday life on account of it (besides talking about it)?

Not really. Many of them try to influence reality through positive thinking, but then this probably has psychosomatic benefits anyway. But, if for instance one of them was ill, they would use conventional medicine.

How did you get the belief that it is lawful?

Why do I believe that the universe is lawful? Because it appears lawful, and due to reasons discussed in other replies to my post, and my common sense has marked the alternative as insane.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-12T07:04:59.324Z · LW(p) · GW(p)

Why do I believe that the universe is lawful? Because it appears lawful, and due to reasons discussed in other replies to my post, and my common sense has marked the alternative as insane.

Well then, there's how you:

distinguish between the hypothesis that the universe is lawful, and the hypothesis that the universe is determined by my beliefs and I believe it to be lawful.

You observe lawfulness, not just believe in lawfulness. Whatever the source of that lawfulness, the lawfulness itself is right there in your observations.

Is it lawful independently of you, or is it lawful because you are God but have forgotten yourself? I suppose you could seek out and practice spiritual exercises to remember your true being as God, and only if that fails to produce a smidgen of miracle-working ability, consider that you might not be God after all. But "we are subject to physical law because we have forgotten our divine nature" is already too much like claiming to have an invisible dragon.

comment by wedrifid · 2013-07-09T10:52:14.185Z · LW(p) · GW(p)

7) But Tegmark's level IV multiverse is true, so we can acausaly communicate between worlds, in fact all conversations are actually acausal communication between worlds.

The latter part seems to rely on a misleading use of words. There is a rather distinct difference between acausal conversation and causal conversation.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-10T23:47:30.410Z · LW(p) · GW(p)

I am using words as clearly as possible. To clarify, my friend believes it is impossible for one sentient being to causally influence another.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-11T09:02:07.234Z · LW(p) · GW(p)

my friend believes it is impossible for one sentient being to causally influence another.

And yet your friend bothers to talk. Why?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-11T23:00:35.137Z · LW(p) · GW(p)

Talking I can understand; I mean, I suppose acausal communication could be just as fun. What's perhaps more surprising is altruism and empathy. Would you buy someone a drink if it doesn't cause them to drink it? What if you one-box?

comment by shminux · 2013-07-04T18:24:45.254Z · LW(p) · GW(p)

In what sense does the mental world exist and physical is an illusion? What's the difference between an illusion and reality in this case?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-04T21:13:36.158Z · LW(p) · GW(p)

I suppose that the causality goes mind -> physical in an idealist viewpoint, whereas in a materialist viewpoint physical things cause mental things, and in a monist viewpoint mental and physical are two aspects of the same thing.

At least one of my friends claims to have some weak ability to alter physical reality by thinking (which is possible if physical reality is an illusion), which is interesting because he is otherwise a very intelligent scientist.

Also, believing that physical reality is an illusion is, like, whoa man, it's really deep.

comment by Will_Newsome · 2013-07-27T02:21:10.073Z · LW(p) · GW(p)

areyoufuckingkiddingme.jpg

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-07-27T17:20:32.579Z · LW(p) · GW(p)

Ok, I know this topic is unimportant compared to many other things, such as FAI and HPMOR, but there's no need to be rude.

Replies from: Will_Newsome
comment by Will_Newsome · 2013-07-28T23:45:51.358Z · LW(p) · GW(p)

Kant you have a sense of Hume about it?

comment by [deleted] · 2013-07-08T19:47:18.140Z · LW(p) · GW(p)

Any LWers in Seattle fancy a coffee?

I'm at UW until the end of the month, so would prefer cafes within walking distance of the university.

comment by Paul Crowley (ciphergoth) · 2013-07-11T19:59:10.929Z · LW(p) · GW(p)

Is there a way of making precise, and proving, something like this?

For any noisy dynamic system describable with differential equations, observed through a noisy digitised channel, there exists a program which produces an output stream indistinguishable from the system.

It would be good to add some notion of input too.

There are several issues with making this precise and avoiding certain problems, but I suspect all of this is already solved so it's probably not worth me going into detail here. In the unlikely event this isn't already a solved problem, I could have a go at precisely stating and proving this.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-12T05:33:44.560Z · LW(p) · GW(p)

I don't completely understand what you mean (in particular, I would really like you to be more specific about what you mean by "noisy" and "indistinguishable"), but this looks like it shouldn't be true on cardinality grounds. There should be uncountably many possible distinguishable noisy behaviors of a dynamical system.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-07-12T07:31:17.108Z · LW(p) · GW(p)

By "indistinguishable" I mean some sort of bound on the advantage of any algorithm trying to tell the two apart. I think if I try to pin down "noisy" without knowing more about what you need specified, I won't answer your question - I'm thinking of some sort of continuous analogue of the role that noise plays in a Kalman filter.

The cardinality thing is a big problem - if the "system" is a single uncomputable real number that doesn't change, from which we take multiple noisy readings, then for any program that tries to emulate it, there is a distinguishing program whose advantage approaches 1 as the number of readings goes up.

It still feels like there must be something like this that we can prove!

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-07-12T22:38:32.330Z · LW(p) · GW(p)

Uncountability doesn't seem like a big deal to me. Just give the Turing machine an auxiliary tape containing the real parameter.

A related question is whether a Turing machine can fully simulate the dynamical system, that is, whether it can compute the state at any future time to any precision using only finitely many bits from the starting parameters.

I think the answer to that is that differential equations with initial conditions at a computable real number can evolve to an uncomputable real number. But if the initial number is not arbitrary, but guaranteed random, maybe the future state is computable. (That is, maybe the inputs with computable futures have full measure.)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-07-13T11:22:43.144Z · LW(p) · GW(p)

Uncountability doesn't seem like a big deal to me. Just give the Turing machine an auxiliary tape containing the real parameter.

I can't work out whether this works or not. Here's a really simple example system: each output is 0 or 1, drawn with a fixed probability. If the probability is an uncomputable number, then no algorithm with a finite initial state can generate output with exactly that probability; there has to be a rational number in between the real probability and the simulated one, which means there's an attacker that can distinguish the two given enough outputs.
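The attacker here can be made concrete with a minimal sketch (the function names are mine, not anything from the thread): a frequency-threshold distinguisher that compares the empirical mean of the readings against a rational threshold sitting between the two candidate probabilities. By a Chernoff bound its error probability shrinks exponentially in the number of readings, so its advantage approaches 1.

```python
import random

def distinguish(samples, threshold):
    """Guess which Bernoulli source produced `samples`: "high" if the
    empirical mean exceeds the rational threshold, else "low"."""
    return "high" if sum(samples) / len(samples) > threshold else "low"

rng = random.Random(42)
low_stream = [1 if rng.random() < 0.30 else 0 for _ in range(10000)]   # p = 0.30
high_stream = [1 if rng.random() < 0.35 else 0 for _ in range(10000)]  # p = 0.35

# With 10000 readings, the empirical means concentrate within ~0.005 of
# their true values, so a threshold of 0.325 separates them reliably.
print(distinguish(low_stream, 0.325), distinguish(high_stream, 0.325))
```

The same idea works for any fixed gap between the true and simulated probabilities, which is all the argument above needs.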

If instead the simulator can read the real probability on an infinite tape... obviously it can't read the whole tape before producing an output. So it has to read, then output, then read, then output. It seems intuitive that with this strategy, it can place an absolute limit on the advantage that any attacker can achieve, but I don't have a proof of that yet.

A related question is whether a Turing machine can fully simulate the dynamical system, that is, whether it can compute the state at any future time to any precision using only finitely many bits from the starting parameters.

Obviously if this can be done then what I ask can be done. I had thought this impossible, which is why I wanted to substitute an easier question about distinguishability.

I think the answer to that differential equations with initial conditions a computable real number can evolve to an uncomputable real number. But if the initial random number is not arbitrary, but guaranteed random, maybe it is computable. (That is, maybe the inputs with computable futures have full measure.)

Well that's clearly true, if the dynamical system is that when t = 0 then y = 0 and dy/dt = 1, then y will pass through lots of uncomputable values at uncomputable times. Some kind of computable uncertainty about the initial state may address the cardinality issue, but I'm not sure how to formalise that.

Replies from: pengvado, Douglas_Knight
comment by pengvado · 2013-07-14T00:32:47.217Z · LW(p) · GW(p)

If instead the simulator can read the real probability on an infinite tape... obviously it can't read the whole tape before producing an output. So it has to read, then output, then read, then output. It seems intuitive that with this strategy, it can place an absolute limit on the advantage that any attacker can achieve, but I don't have a proof of that yet.

In this model, a simulator can exactly match the desired probability in O(1) expected time per sample. (The distribution of possible running times extends to arbitrarily large values, but the L1-norm of the distribution is finite. If this were a decision problem rather than a sampling problem, I'd call it ZPP.)

Algorithm:

  1. Start with an empty string S.
  2. Flip a coin and append it to S.
  3. If S is exactly equal to the corresponding-length prefix of your probability tape P, then goto 2.
  4. Output 1 if S < P, else 0.
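A minimal Python sketch of this algorithm (function names mine): the simulator reads bits of P lazily, flipping one fair coin per bit, and stops at the first disagreement. Each iteration halts with probability 1/2, so it reads O(1) bits in expectation while matching Bernoulli(P) exactly.

```python
import random

def sample_bernoulli(p_bit, rng=random):
    """Draw one Bernoulli(p) sample, where p is given only through
    p_bit(i), the i-th binary digit of p (p = sum of p_bit(i) * 2**-(i+1)).
    Builds a uniform random real S bit by bit and returns 1 iff S < p;
    the first differing bit decides the comparison."""
    i = 0
    while True:
        s_bit = rng.randrange(2)  # fair coin flip: the next bit of S
        if s_bit != p_bit(i):
            return 1 if s_bit < p_bit(i) else 0
        i += 1

# Example: p = 1/3 has binary expansion 0.010101..., so its i-th bit is i % 2.
rng = random.Random(0)
hits = sum(sample_bernoulli(lambda i: i % 2, rng) for _ in range(100000))
print(hits / 100000)  # empirical frequency, close to 1/3
```

Nothing here requires p to be computable as a whole: the sampler only ever touches finitely many bits of the tape per sample.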
Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-07-14T08:23:09.704Z · LW(p) · GW(p)

D'oh! Of course - thanks!

comment by Douglas_Knight · 2013-07-13T15:40:09.411Z · LW(p) · GW(p)

Well that's clearly true, if the dynamical system is that when t = 0 then y = 0 and dy/dt = 1, then y will pass through lots of uncomputable values at uncomputable times.

That's silly. You should only ask what the state of the system is at a specified time (yet another auxiliary tape).

comment by JoshuaZ · 2013-07-10T13:04:45.871Z · LW(p) · GW(p)

Article discussing how the cost of copper has gone up over time as we've used more and more of the easily accessible, high percentage ores. This is another example of a resource which may contribute to Great Filter considerations (along with fossil fuels). As pointed out in the article, unlike oil, copper doesn't have many good replacements for a lot of what it is used for.

That said, I suspect that this is not a major aspect of the Filter. If the cost goes up, the main impact would be on consumer goods which would become more expensive. That's unpleasant but not a Filter event. It also isn't relevant from the standpoint of resources necessary to bootstrap us back up to the current tech level in event of a major disaster since there will be all sorts of nearly pure copper that could be scavenged from the remains of civilization.

This may however be a strong argument for either finding new copper replacements (possibly novel alloys), or for the development of asteroid mining which will help out with a lot of different metals.

Thoughts? Does this analysis seem accurate?

comment by [deleted] · 2013-07-05T13:37:30.144Z · LW(p) · GW(p)

I was reading http://slatestarcodex.com/ and I found myself surprised again, by Yvain persuasively steelmanning an argument that he doesn't himself believe in at http://slatestarcodex.com/2013/06/22/social-psychology-is-a-flamethrower/

It's particularly ironic because in that very post, he mentions:

I can’t find the link for this, but negatively phrased information can sometimes reinforce the positive version of that information.

Which seems to be what I am falling for. He outright says:

I think some of the arguments below will be completely correct, others correct only in certain senses and situations, and still others intriguing but wrong. I think that modern pop social psychology probably contains the same three categories in about the same breakdown, so I don’t feel too bad about this.

So to sum up, here is my experience:

1: Yvain: "Here are some arguments. I don't fully believe most of them."

2: I start reading.

3: Michaelos: "Huh. All of these seem to be somewhat well reasoned arguments, there are links, and I can follow the logic on most of them."

4: At some point, I forget the "Yvain doesn't believe this." Tag.

5: I then read his summary which points out that these also have entirely opposite summaries which are also justified.

6: I find myself flabbergasted that I've made the same mistake about Yvain's writing again.

Based on this, I get the feeling I should be doing something differently when I read Yvain's articles, but I'm not even sure what that something is.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-06T10:25:43.416Z · LW(p) · GW(p)

You should probably update towards "being convincing to me is not sufficient evidence of truth." Everything got easier once I stopped believing I was competent to judge claims about X made by people who investigate X professionally. I find it better to investigate their epistemic hygiene rather than their claims. If their epistemic hygiene seems good (this can be domain-specific), I update towards their conclusions on X.

comment by moral_dilemma · 2013-07-02T08:03:21.501Z · LW(p) · GW(p)

A married couple has asked me to donate sperm and to impregnate the wife. They would then raise the child as their own, with no help from me. Would it be ethical or unethical for me to give them sperm? In particular, am I doing a service or a disservice to the child I would create?

[pollid:534]

Replies from: Qiaochu_Yuan, David_Gerard, Douglas_Knight, shminux, Alsadius, DanielLC
comment by Qiaochu_Yuan · 2013-07-02T18:52:06.440Z · LW(p) · GW(p)

Assuming you don't have any particular reason to expect that this couple will be abusive, it's more ethical the better your genes are. If you have high IQ or other desirable heritable traits, great. (It seems plausible to anticipate that high IQ will become even more closely correlated with success in the future than it is now.) If you have mutations that might cause horrible genetic disorders, less great.

comment by David_Gerard · 2013-07-02T14:10:43.514Z · LW(p) · GW(p)

The child is wanted, so if they don't actually neglect it it'll grow up fine.

Note that if you donate sperm without going through the appropriate regulatory hoops as a sperm donor (which vary per country), you will be liable for child support.

comment by Douglas_Knight · 2013-07-02T20:44:09.598Z · LW(p) · GW(p)

I am surprised no one else has brought up the LW party line: consequentialism.

What is the alternative?
What is the consequence of your decision?

Probably the alternative is that someone else donates sperm. Either way, they raise a child that is not the husband's. If creating such a life is terrible (which I don't believe), is it worse that it is your child than someone else's? Consequentialism rejects the idea that you are complicit in one circumstance and not the other.

There are other options, like trying to convince them not to have children, or to get a donation from the husband's relatives, but they are unlikely to work.

If the choice is between your sperm or another's, then, as Qiaochu says, the main difference to the child is genes of the donor. Also, your decision might affect your relationship with the couple.

comment by shminux · 2013-07-02T08:12:01.547Z · LW(p) · GW(p)

What can possibly be unethical about it? You are the only one who is vulnerable, since you might be legally on the hook for child support.

Replies from: moral_dilemma
comment by moral_dilemma · 2013-07-02T08:15:51.309Z · LW(p) · GW(p)

What can possibly be unethical about it?

It creates a child who will not be raised by their biological father.

since you might be legally on the hook for child support.

Unlikely in this context, since they are much wealthier than I. I doubt they would want to share custody with me in exchange for my pittance of a salary.

Replies from: falenas108, ChristianKl
comment by falenas108 · 2013-07-02T12:39:38.208Z · LW(p) · GW(p)

It creates a child who will not be raised by their biological father.

What's the specific problem this would cause?

Replies from: drethelin
comment by drethelin · 2013-07-02T17:07:04.261Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Cinderella_effect

Replies from: falenas108, maia
comment by falenas108 · 2013-07-02T17:20:51.648Z · LW(p) · GW(p)

Questions about the validity of the Cinderella effect aside, the OP knows the couple and can probably make a more informed judgement about this.

Of course, you can't tell this perfectly. But if the OP is anything more than casual acquaintances with the couple, I would say specific evidence probably overpowers the general case.

comment by maia · 2013-07-03T01:56:06.941Z · LW(p) · GW(p)

Has this been demonstrated in adoptive parents, though? Having only adopted children seems as though it might bias things in a different direction.

comment by ChristianKl · 2013-07-02T13:02:38.932Z · LW(p) · GW(p)

Unlikely in this context, since they are much wealthier than I. I doubt they would want to share custody with me in exchange for my pittance of a salary.

They might die, and the child would still have claims against you.

comment by Alsadius · 2013-07-03T18:01:55.779Z · LW(p) · GW(p)

Given that the child won't exist if you say no, it's hard to assert that they'd be worse off if you decline. Just make sure you don't get too clingy.

comment by DanielLC · 2013-07-03T01:30:03.476Z · LW(p) · GW(p)

There are some concerns about overpopulation, but I'd say that developed countries are underpopulated. Minimum wage is significantly above subsistence wage, so people are generating more wealth than they must consume.

There is the problem of factory farming. The child is likely to eat meat, which funds factory farming. Since there is little if any concern for the animals, they are not treated well, and I find it unlikely that their lives are worth living.

Replies from: Dias, Adele_L
comment by Dias · 2013-07-09T17:38:38.265Z · LW(p) · GW(p)

Minimum wage is significantly above subsistence wage,

You want the average wage, not the minimum wage. Germans are worthwhile people, even though their minimum wage is zero. Similarly, raising or lowering the minimum wage (holding employment and output fixed) should not affect our estimation of people's value-add.

Replies from: DanielLC
comment by DanielLC · 2013-07-09T17:42:59.399Z · LW(p) · GW(p)

What you want is the market wage for untrained labor. Taking the value of trained labor and subtracting the cost of the training should also work and get the same answer.

Minimum wage is a legal thing, and doesn't show anything unless the politicians are consistently setting it just below the market rate for untrained labor. I'm pretty sure they are, but I'd still say you are correct. I shouldn't have said "minimum wage".

comment by Adele_L · 2013-07-03T02:49:08.717Z · LW(p) · GW(p)

The factory farming concern can probably be mitigated by instilling awareness of this situation, as well as effective interventions, in the child.

Replies from: DanielLC
comment by DanielLC · 2013-07-03T04:21:04.041Z · LW(p) · GW(p)

He said that they would raise the child with no help from him. It doesn't seem like it would be any easier to get this child to be a vegetarian than any other random person.

Replies from: Adele_L
comment by Adele_L · 2013-07-03T04:27:40.965Z · LW(p) · GW(p)

But if he knows the parents, he can know whether or not they are likely to raise their kid to be a vegetarian or not.

comment by AlexSchell · 2013-07-01T23:59:41.628Z · LW(p) · GW(p)

This is a poll for people who have ever made an attempt at obtaining a career in programming or system administration or something like that. I'm interested in your response if you've made any attempt of this sort, whether you've succeeded, changed your mind, etc.

ETA: Oops, I forgot an "I just want to see the results" option. If you vote randomly to see them, I'd appreciate it if you do not vote anonymously, and leave a comment reply.

At what age did you learn to touch-type? [pollid:530]

How did you come to learn to touch-type? [pollid:531]

How did your career attempt turn out? [pollid:532]

I'm interested in this data to improve on some anecdotal evidence that is of interest to me: V'ir orra gbyq nobhg n cbffvoyr pbaarpgvba orgjrra n angheny vapyvangvba gbjneq gbhpu-glcvat (znavsrfgrq ol yrneavat gb gbhpu-glcr vaqrcraqragyl naq rneyl va yvsr) naq yvxryvubbq bs rawblvat naq fhpprrqvat va pnerref bs gur nsberzragvbarq glcr.)

Replies from: Emile, kpreid, David_Gerard, AlexSchell, itaibn0, gwillen
comment by Emile · 2013-07-02T21:47:36.261Z · LW(p) · GW(p)

I answered as close as I can remember, but I think touch typing is more of something I kind of picked up as time went by, rather than something I specifically learned at one point in time. I remember pushing myself to practice touch typing at various points, but the general recollection I have is that I didn't really practice in a focused, systematic way, and yet now I can type this without needing to look at my keyboard (and in fact, when I look at my keyboard I'll be likely to make more mistakes).

So I probably picked it up in my early twenties with a lot of typing of homework and essays and posts on forums.

comment by kpreid · 2013-07-03T03:16:31.793Z · LW(p) · GW(p)

By “touch-typing” do you mean typing without looking at the keyboard, that and typing using all ten fingers, or that and using the formal start-with-your-fingers-on-the-home-row techniques?

Replies from: AlexSchell
comment by AlexSchell · 2013-07-05T21:59:47.572Z · LW(p) · GW(p)

For consistency with previous results, answer using your best guess as to what I mean. (Jura V nfxrq, V zrnag gur jubyr ubzrebj ohfvarff ohg V qvqa'g ernyvmr gung gbhpu-glcvat vf glcvpnyyl qrsvarq va gur oebnqrfg frafr lbh zragvba.)

comment by David_Gerard · 2013-07-02T14:14:05.123Z · LW(p) · GW(p)

I'm a sysadmin now, but I learnt to type when I was a rock critic.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2013-07-02T17:36:15.563Z · LW(p) · GW(p)

Well, I hope you told the poll that your career attempt succeeded and also put in the age that you learned.

comment by AlexSchell · 2013-07-02T00:23:50.658Z · LW(p) · GW(p)

I replied randomly

comment by itaibn0 · 2013-07-04T18:58:39.964Z · LW(p) · GW(p)

Note: I did not read the first paragraph and mistakenly answered the first two questions nonrandomly. Both of these questions have a spurious N/A answer.

comment by gwillen · 2013-07-02T06:05:23.436Z · LW(p) · GW(p)

I said 'primarily through own efforts', but 1) really I partially learned through a class, and partially independently of it, and it's hard to recall the mixture of influences, and 2) 'efforts' is not really the right word, since the learning was really incidental to the fact that I was typing a hell of a lot; it wasn't a deliberate act.

Not sure how this influences the survey. :-)

comment by BerryPick6 · 2013-07-07T22:28:43.213Z · LW(p) · GW(p)

I need help finding a particular thread on LW, it was a discussion of either utility or ethics, and it utilized the symbols Q and Q* extensively, as well as talking about Lost Purposes. My inability to locate it is causing me brain hurt.

Replies from: gwern
comment by gwern · 2013-07-08T19:54:28.637Z · LW(p) · GW(p)

That's hard, because search engines have been dumbed down to the point where you can't google for a literal 'Q*'... A local search turned up http://lesswrong.com/lw/1zv/the_shabbos_goy/ as having one use of 'Q*' and bringing up 'lost purposes'.

Replies from: BerryPick6
comment by BerryPick6 · 2013-07-09T00:09:39.735Z · LW(p) · GW(p)

Probably made even more difficult because I misremembered the letter. It was G*, and the article was The Importance of Goodhart's Law. It suddenly came back to me in a flash after seeing your reply, so thanks!

comment by Alsadius · 2013-07-03T17:03:27.468Z · LW(p) · GW(p)

I'm looking for good, free, online resources on SQL and/or VBA programming, to aid in the hunt for my next job. Does anyone have any useful links? As well, any suggestions on a good program to use for SQL?

Replies from: luminosity
comment by luminosity · 2013-07-03T23:37:59.372Z · LW(p) · GW(p)

What do you mean by "a good program to use for SQL?" A database engine to run queries in? A command line or GUI client for connecting to such a database? Something else entirely?

For what it's worth, if you're looking for a database engine, my recommendation is Postgres. Free, open source, and a lot stricter than MySQL, even if you make MySQL as strict as you possibly can.

As for learning, I don't know any tutorials that are still around nowadays. I do recommend if you're learning it, to actually build something where you need to use queries.

Toy example: Build a weblog.

  • Start by creating tables for posts and comments.
  • Write an admin interface for creating new posts. Write a form for saving comments on a post.
  • Then simple queries to pull out the latest few posts for display, and comments for display on a post's page.
  • Add a simple tagging and other meta data facility.
  • Write some reports using data available to you (eg, find top ten most commented posts, find most used tag, if you add viewer ratings, or unique view tracking, then grabbing most viewed posts, and counts of times viewed).

This should take you through exercises from very basic and easy statements through to some more advanced topics (grouping etc), and I find using a skill incredibly valuable to learning and internalising that skill. My first computer program was a blog, and while it was a disaster and a mess in many ways, I learned (or internalised) a lot about programming, and a lot about SQL in the process.
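As a rough sketch of the report-style queries this exercise builds toward, here is a toy version using Python's built-in sqlite3 module (not Postgres, but the SQL mostly carries over); the table and column names are made up for illustration:

```python
import sqlite3

# In-memory toy database; schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, "
            "post_id INTEGER REFERENCES posts(id), body TEXT)")
cur.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                [(1, "First post"), (2, "Second post")])
cur.executemany("INSERT INTO comments (post_id, body) VALUES (?, ?)",
                [(1, "nice"), (2, "hm"), (2, "agreed")])

# Report: posts ordered by comment count ("most commented posts").
cur.execute("""
    SELECT p.title, COUNT(c.id) AS n_comments
    FROM posts p
    LEFT JOIN comments c ON c.post_id = p.id
    GROUP BY p.id
    ORDER BY n_comments DESC
""")
rows = cur.fetchall()
print(rows)  # [('Second post', 2), ('First post', 1)]
```

The LEFT JOIN plus GROUP BY is the kind of "more advanced topic" the exercise list ends on; the earlier steps are plain INSERTs and SELECTs.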

Replies from: Alsadius
comment by Alsadius · 2013-07-04T03:40:27.689Z · LW(p) · GW(p)

I've never touched SQL, past a few references to commands in XKCD. A couple of quick Google searches have not produced anything that has seemed usable. I want a software environment that lets me do stuff like your example. Frankly, I'm so unfamiliar with what exists that I don't want to be much more specific than that, aside from saying that I want it to have a good help function if possible (I've used help to self-teach complex Excel functions, so it should be sufficient).

Replies from: luminosity
comment by luminosity · 2013-07-04T05:28:59.754Z · LW(p) · GW(p)

My recommendation is (if not on Linux already) to download and set up a VM with a Linux install on it. For a toy example, it should be as easy as installing a recent Ubuntu, going to the command line, and typing "sudo apt-get install postgresql".

To log in to the database you can then use "sudo -u postgres psql" (Postgres creates a system user called postgres who by default is a superuser with passwordless local authentication.)

From here you can get help by typing \? to list available commands and \h SQL STATEMENT BEGINNING to get some basic help on an sql statement. eg, "\h create table" will show you the valid syntax for a create table statement.

This W3Schools tutorial seems adequate as a very basic level tutorial to give you an idea of what's possible and where you'd want to go from there.

It should be fairly easy to get python, php, ruby, etc to talk to the database, when you want to try to integrate it with an external program.

Alternatively, since you'll probably be doing VBA in a Windows environment, you could install SQL Server Express edition to play around with, but I'm not particularly familiar with it or its environment. Someone else can probably help you out with that better.

Replies from: Alsadius
comment by Alsadius · 2013-07-04T06:23:03.635Z · LW(p) · GW(p)

I've tried Linux before, and seen nothing about it that would make it worth the effort of learning a whole new command interface. Is there any particular advantage to it here, or is this a matter of personal preference?

Replies from: Risto_Saarelma, CAE_Jones, luminosity
comment by Risto_Saarelma · 2013-07-09T18:52:29.926Z · LW(p) · GW(p)

If you want to be a programmer, you know the command line so that when the guy interviewing you for a job asks you how you'd remove all phone numbers from the bodies of 50 000 HTML files, you will say "use sed" instead of starting to sketch a 500 line C program that does directory traversal, input buffering and token lexing.

The command line isn't just a different command interface, it's an actual programming environment in itself, which the modern GUIs never are.
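For concreteness, a sketch of the sed approach (the phone-number pattern and file names are invented for the example, and `sed -i` as used here is the GNU form):

```shell
# Create a sample file, then strip US-style phone numbers from every
# .html file under the current directory, editing in place.
mkdir -p /tmp/seddemo && cd /tmp/seddemo
printf '<p>Call 555-867-5309 today</p>\n' > sample.html
find . -name '*.html' -exec sed -i -E 's/[0-9]{3}-[0-9]{3}-[0-9]{4}//g' {} +
cat sample.html
```

The same one-liner scales to 50,000 files; `find ... -exec ... +` batches the file names so sed is invoked only a handful of times.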

Replies from: Alsadius
comment by Alsadius · 2013-07-10T08:33:31.912Z · LW(p) · GW(p)

I'm not going for comp sci jobs, I'm going for finance. Programming is a secondary skill, more used for Excel macros than general-purpose tasks. And virtually nobody uses Linux environments.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-07-10T08:48:03.915Z · LW(p) · GW(p)

Going for a field where Linux isn't used is a pretty good reason not to bother with it. You can cover the regexp/scripting-fu the linked article is talking about pretty well by just being handy with Perl or Python instead.

comment by CAE_Jones · 2013-07-04T06:44:03.699Z · LW(p) · GW(p)

Linux seems universally seen as the platform best suited for programming in general, from my observations. There aren't the arbitrary restrictions from OS features found in Windows or iOS, the filesystem is much more optimized, etc.

It's never been presented to me as anything but something obvious that one should try if one intends to do anything fancy with computers, and the learning curve has always been presented to me as negligible. I've tried a Linux-style shell via Putty and Cygwin, and things did not turn out usefully. Putty I could not figure out at all without a CS professor walking me through it, and I get the strong impression that Cygwin choked on the fact that my Windows username has a space in it and I had no clue how to get around that. It doesn't help that my main uses were compiling C programs and trying to compile Festvox (for making arbitrary Festival-style text-to-speech engines), the former more trivial than the latter. I haven't found it worth looking into learning a new OS without a significant reason, and it didn't turn out well the two times I had reasons (and said reasons that did not last, so I didn't end up making enough of an effort to learn it).

Again, the internet and every computer science department I've encountered insist that Linux is optimized for anything more complicated than HTML or Windows/iOS-specific code, and is definitely worth learning. I expect circumstances will force me to learn it properly, eventually. My experiences so far do not leave me eager, however.

[edit]: D'oh, I think I just repeated everything you said, only longer. :( [/edit]

Replies from: luminosity
comment by luminosity · 2013-07-04T07:24:30.079Z · LW(p) · GW(p)

Putty is fairly terrible, and a major pain in the arse if you do need to ssh from a windows box.

comment by luminosity · 2013-07-04T07:19:13.302Z · LW(p) · GW(p)

Partly personal preference, partly an inability to answer your question in a Windows programming environment, not being particularly familiar with SQL Server. While you can install things such as postgres, python, etc on Windows, getting a working setup on an Ubuntu system to play with for free is trivial:

e.g.

  • Install OS
  • Open command prompt
  • sudo apt-get install python
  • sudo apt-get install python-pip
  • sudo apt-get install postgresql
  • sudo apt-get install python2.7-dev
  • sudo pip install psycopg2
  • sudo -u postgres psql "--command=CREATE USER test PASSWORD 'testing';"
  • sudo -u postgres psql "--command=CREATE DATABASE testdb OWNER test;"
  • python

  • import psycopg2

  • connection = psycopg2.connect(user='test', password='testing', dbname='testdb')
  • cursor = connection.cursor()
  • cursor.execute('SELECT 1;')
  • print cursor.fetchone()

(Code not tested, don't currenty have an environment to test it on... you get the idea though.)

Again though, this is the environment I do nearly all my coding work in. I'd not be surprised if a Windows developer found getting an SQL Server Express install up and running on their system to be far easier for them. I'm just not able to walk you through that.

Replies from: Alsadius
comment by Alsadius · 2013-07-04T07:42:22.167Z · LW(p) · GW(p)

Understood. I do not expect that learning Linux is a useful way to spend my time, however, so I'll pass. But Postgres does seem to have a Windows version, so I'll give that a shot. It seems to be what I'm looking for.

comment by Martin-2 · 2013-07-15T18:12:25.799Z · LW(p) · GW(p)

Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I'll be ignominiously giving Eliezer's explanation of Bayes' Theorem another read, and in the mean time I invite all you Bayes-warriors to come and leave your comments.

comment by beoShaffer · 2013-07-13T23:51:54.723Z · LW(p) · GW(p)

Meta question. Is it better to correct typos and minor, verifiable factual errors (e.g. a date being a year off) in a post in the post's comment thread or a PM to the author?

Replies from: gwern
comment by gwern · 2013-07-14T00:08:24.788Z · LW(p) · GW(p)

I prefer PMs and do them often for both comments & posts. A minor correction is of no enduring interest and it's better if it didn't take up space publicly. (Can you imagine if every Wikipedia article could only be read as a sequence of diffs? That's what doing minor corrections in public is like.)

comment by Tenoke · 2013-07-10T16:39:53.566Z · LW(p) · GW(p)

Does anyone have any information/links/recommendations regarding how to reduce computer-related eye strain? Specifically any info on computer glasses? I was looking at Gunnar but I can't find enough reliable evidence to justify buying them and I would be surprised if there are no better options.

Fwiw I went to an optician today who deemed my vision good, however I spend large amounts of time in front of my screens and my eyes are tired a large fraction of the time.

comment by CAE_Jones · 2013-07-10T06:06:35.434Z · LW(p) · GW(p)

So I recently released a major update to my commercial game, and announced that I would be letting people have it for donations at half the price for the remainder of July 2013. I suspect I did not make that last part prominent enough in the post to the forum where most of my audience originates, since, of the purchases made afterwards, only one took the half-price option--the rest all paid the normal price. The post included three links: the game's page, my audio games (general) page, and the front page of my website, I believe in that order. (That is also in ascending order of how prominent the sale is on each page.)

From this I conclude that, if I'm trying to encourage people with a sale, I should point that out often. It is not clear to me if this would encourage at least twice as many purchases as making the sale available but having it involve slightly more effort.

comment by ESRogs · 2013-07-08T18:22:46.688Z · LW(p) · GW(p)

I'm trying to find a getting-a-programming-job LW article I remember reading recently for a friend. I thought it was posted within the last few months, but searching through Discussion I didn't find it.

The post detailed one LWer's quest to find a programming job, including how he'd been very thorough preparing for interviews and applying to many positions over a matter of months, finally getting a few offers, playing them off each other, and eventually, I believe, accepting a position at Google.

Anyone know the article I'm talking about?

Replies from: gwern
comment by gwern · 2013-07-08T19:25:32.990Z · LW(p) · GW(p)

programming interview google site:lesswrong.com; first hit: http://lesswrong.com/lw/hd1/maximizing_your_donations_via_a_job/

Replies from: ESRogs
comment by ESRogs · 2013-07-08T20:40:14.521Z · LW(p) · GW(p)

Thanks!

I think I might have seen that link when googling and skipped it because the title didn't sound like what I was expecting. After googling didn't turn up what I was looking for I tried an exhaustive visual search through the last few months of Discussion, but apparently this was in Main. Is there an easy way to browse through the titles of Main articles?

Edit: in fact, my browser history indicates that this was the first link in one of my searches as well. Oops.

comment by [deleted] · 2013-07-08T16:54:18.304Z · LW(p) · GW(p)

I find myself a non-altruist in the sense that while I care about the well-being and happiness of many proximate people, I don't care about the good of people unqualifiedly. What am I getting wrong? If asked to justify unqualified altruism, what would you say?

Replies from: drethelin
comment by drethelin · 2013-07-08T19:39:28.199Z · LW(p) · GW(p)

You're not getting anything wrong. You're not cosmically required to be a perfect altruist to be a good person.

Replies from: None
comment by [deleted] · 2013-07-08T21:05:49.267Z · LW(p) · GW(p)

So would you say that there is no reason to care about people unqualifiedly? If you wouldn't be willing to say this, what reason would you give for unqualified altruism?

Replies from: drethelin
comment by drethelin · 2013-07-08T21:21:34.827Z · LW(p) · GW(p)

I'm not sure exactly what you're getting at. Unqualified altruism is good for other people and bad for yourself. If you care about other people a lot more than you care about yourself then you have reasons for unqualified altruism. Cares are not something we need reasons for, they're emotions we have.

Replies from: None
comment by [deleted] · 2013-07-08T21:31:43.005Z · LW(p) · GW(p)

I guess I got to thinking about this after reading Lukeprog's EA post. There he mentioned that EAists care about people regardless of how 'far away' (in space, time, political association, etc.) they are. And Singer's wonderful pond argument likewise involves the premise that if you ought to help a child in immediate danger right in front of you, you ought to help a child in immediate danger in Africa.

I suppose it struck me that, at least for Singer, the move from local altruism (caring about the drowning child) to unqualified altruism (caring about sapient beings everywhere and when) is a premise in an argument. Should I really conclude that this move is not one that I can make on the basis of reasons?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-10T08:44:48.323Z · LW(p) · GW(p)

It sounds like you would benefit from (re?)reading the metaethics sequence.

I don't understand what you mean by a move being made "on the basis of reasons." Are you familiar with the distinction between instrumental and terminal values? In that language, I would say that the question you seem to be asking is a type error: you're asking something like "why should unqualified altruism be one of my terminal values?" but the definition of "should" just refers to your terminal values (or maybe your extrapolated terminal values, or whatever).

This is all assuming that you're highly confident that unqualified altruism is not in fact one of your terminal values. It's possible that you have some uncertainty about this and that that uncertainty is what you're really trying to resolve.

Replies from: None
comment by [deleted] · 2013-07-10T12:24:45.024Z · LW(p) · GW(p)

I could be asking one of two things, depending on where someone arguing for unqualified altruism (as, say, Singer does) stands. Singer's argument has the form 'If you consider yourself obligated to save the local child, then you should consider yourself obligated to save the non-local one.' He could be arguing that unqualified altruism is in fact my terminal value, given that local altruism is, and I should realize that the restrictions involved in the qualification 'local' are irrational. Or he could be arguing that unqualified altruism is a significant or perhaps necessary instrumental value, given what you can read about my terminal values off my commitment to local altruism.

I'm not sure which he thinks, though I would guess that his argument is intended to be the former one. I realize you might not endorse Singer's argument, but these are two ways to hear my question: 'Is unqualified altruism a terminal value of mine, given that local altruism is a value of mine?' and 'Is unqualified altruism an instrumental value of mine, given what we can know about my terminal values on the assumption that local altruism is also an instrumental value of mine?'

I'm not entirely sure which applies to me. I don't think I have any terminal values so specific as 'unqualified/local altruism', but I may be reflecting badly.

comment by hedges · 2013-07-06T19:45:38.649Z · LW(p) · GW(p)

At what age should you sign up your child for cryonics?

Replies from: Alicorn
comment by Alicorn · 2013-07-07T05:35:45.938Z · LW(p) · GW(p)

Prenatally.

Replies from: wedrifid
comment by wedrifid · 2013-07-09T10:50:03.202Z · LW(p) · GW(p)

At what age should you sign up your child for cryonics?

Prenatally.

Which prompts the additional speculation:

  • At what (negative) age is abortion allowed?
  • If you choose to get an abortion after signing a child up for cryonics should you cryopreserve the aborted fetus?
  • If someone is wealthy and also desires having many descendants in the case of a positive transhuman future is the bulk use of prenatal cryonics a viable and legal option?
comment by Pablo (Pablo_Stafforini) · 2013-07-03T23:36:50.240Z · LW(p) · GW(p)

Some folks at the Effective Altruists Facebook group suggested that it might be useful to have a map of EAs. If you would like to be listed in such a map, please fill this form. The data collected will be used to auto-populate this Google Map. (The map is currently unlisted: it can be seen only by those with access to the corresponding URL.)

comment by lsparrish · 2013-07-03T19:37:48.288Z · LW(p) · GW(p)

Anyone care to speculate as to when/at what point bitcoin price is likely to stop dropping?

comment by beriukay · 2013-07-03T10:44:51.657Z · LW(p) · GW(p)

One of my best friends is a very high suicide risk. Has anybody dealt with this kind of situation; specifically trying to convince the friend to try psychiatry? I'll be happy to talk details, but I'm not sure the Open Thread is the best medium.

Replies from: Viliam_Bur, elharo
comment by Viliam_Bur · 2013-07-03T12:47:29.603Z · LW(p) · GW(p)

Just this: If your friend starts saying that their problems are solved and everything is going to be okay, become more careful. A sudden improvement in mood followed by a later return to the original level is more dangerous than the original situation, because at that moment the person has a new belief: "every improvement is only temporary". Which makes them less likely to act reasonably.

comment by elharo · 2013-07-05T13:04:22.161Z · LW(p) · GW(p)

I've been there, and this is one of those situations that requires professional help, not random advice from the Internet. If you're in the U.S., call the National Suicide Prevention Lifeline at 1-800-273-8255 and explain the situation. They can assist you further. If you're in some other country, just Google for your local equivalent.

Replies from: wedrifid
comment by wedrifid · 2013-07-09T10:54:51.465Z · LW(p) · GW(p)

I've been there, and this is one of those situations that requires professional help, not random advice from the Internet. If you're in the U.S., call the National Suicide Prevention Lifeline at 1-800-273-8255 and explain the situation. They can assist you further. If you're in some other country, just Google for your local equivalent.

This seems to be useful, specific and actionable advice from a random on the internet.

comment by niceguyanon · 2013-07-02T06:45:06.376Z · LW(p) · GW(p)

I was browsing through the West L.A Meetup discussion article and found it really fascinating. It will be about humans generating random number strings and the many applications where this would be useful. It's too bad I can't attend. Off the top of my head, I feel like I can only come up with one digit randomly by looking at my watch, and I'm not sure how I would get more than that. Does anyone have a decent way to generate random numbers on the spot without a computer?

Replies from: ShardPhoenix, maia, NancyLebovitz, army1987, Pentashagon, Armok_GoB, Alsadius, Benito
comment by ShardPhoenix · 2013-07-02T10:47:44.978Z · LW(p) · GW(p)

Read the serial numbers on the paper money in your wallet?

comment by maia · 2013-07-03T02:01:25.166Z · LW(p) · GW(p)

Pick a nearby object. What letter does its name begin with? Convert that from a letter (base 26) to a number and truncate.

Probably has some systematic bias from the names of common everyday objects overwhelming it, but a decent start.

EDIT: Oh wait... that also has the problem of being biased because you're truncating and there are only 26 numbers. Maybe the bias against zxwq will almost cancel it out?

comment by NancyLebovitz · 2013-07-02T12:38:06.746Z · LW(p) · GW(p)

Use the random numbers from your watch in groups to get more digits.

comment by A1987dM (army1987) · 2013-07-08T17:20:42.705Z · LW(p) · GW(p)

A mobile phone with an Internet connection and random.org.

comment by Pentashagon · 2013-07-03T21:44:48.901Z · LW(p) · GW(p)

You can cast a lot of dice into a shoebox and shake the box on edge so that they all end up in a line and then read them off as a base-6 number, or other bases if you have other shapes. This is just from the diceware page. I personally can't think of a more efficient way of consistently generating random numbers.
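Reading the line of dice off as a base-6 number, as described, might look like this (the rolls are made up; faces 1-6 are mapped to digits 0-5):

```python
# Rolls read off the edge of the shoebox, left to right.
rolls = [3, 1, 6, 2]
digits = [r - 1 for r in rolls]  # map faces 1-6 to base-6 digits 0-5

# Accumulate the digits as one base-6 number.
value = 0
for d in digits:
    value = value * 6 + d
print(value)  # 2*216 + 0*36 + 5*6 + 1 = 463
```

Each die contributes log2(6) ≈ 2.58 bits, so a shoebox line of 25 dice gives roughly 64 bits of randomness.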

comment by Armok_GoB · 2013-07-03T18:41:38.219Z · LW(p) · GW(p)

Use something like 10 biased RNGs, like just trying to think of a random-seeming sequence the naive way, then convert them to binary, reverse the order of every other one, and XOR them.
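A quick numerical check of why XOR-ing several biased streams helps (the sources here are simulated with a known 70% bias toward ones; by the piling-up lemma, the residual bias of the XOR of ten such streams is about 2^9 * 0.2^10 ≈ 5e-5):

```python
import random

random.seed(0)  # deterministic for the demo

def biased_bits(p, n):
    """A deliberately bad source: each bit is 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

n = 100_000
mixed = [0] * n
for _ in range(10):  # XOR ten independent 70%-ones streams together
    mixed = [a ^ b for a, b in zip(mixed, biased_bits(0.7, n))]

frac_ones = sum(mixed) / n
print(frac_ones)  # very close to 0.5 despite each source being 70% ones
```

(Reversing alternate streams, as suggested above, matters when the streams are correlated with each other; with independent streams, as simulated here, plain XOR suffices.)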

comment by Alsadius · 2013-07-03T18:00:08.022Z · LW(p) · GW(p)

This isn't a uniform distribution, but looking at a random piece of text and converting the letters into 1-26 should be sufficient for many purposes. If you want additional randomness, add up the letters of the first nontrivial word mod 26 (or mod 10, or whatever).

Replies from: elharo
comment by elharo · 2013-07-05T13:55:03.676Z · LW(p) · GW(p)

No, that won't work, due to Benford's Law. In this case, there will be a lot more 1's and somewhat more 2's than the other 8 digits. I.e., 11 letters map to numbers beginning with 1 (1 and 10-19) and 8 to numbers beginning with 2 (2 and 20-26), but none begin with 0. The non-random distribution of letters in English text will probably also skew your results.

Replies from: Alsadius, Emile
comment by Alsadius · 2013-07-05T17:51:12.051Z · LW(p) · GW(p)

Hence why I said "sufficient for many purposes". If you're trying to choose between 3 places to eat lunch, for example, "the next letter of text mod 3" is a perfectly acceptable method for determining it. If you're trying to encrypt nuclear launch codes, not so much.

comment by Emile · 2013-07-06T12:16:30.673Z · LW(p) · GW(p)

Benford's Law applies to the first digit, whereas Alsadius's use of modulo means taking the last one, which would be much less biased (the bias would be drowned by the bias from common words and letters).
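The disagreement is easy to check empirically; a quick sketch counting first versus last digits of the numbers 1-26 (letter-frequency bias in real text would add further skew, as noted):

```python
from collections import Counter

nums = range(1, 27)  # a-z mapped to 1-26
first_digits = Counter(str(n)[0] for n in nums)
last_digits = Counter(n % 10 for n in nums)

print(first_digits)  # '1' appears 11 times, '2' 8 times -- heavily skewed
print(last_digits)   # every last digit appears 2 or 3 times -- near uniform
```

So taking the value mod 10 (the last digit) is far less biased than reading off the first digit.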

comment by Ben Pace (Benito) · 2013-07-02T12:09:33.915Z · LW(p) · GW(p)

Using memory techniques to memorise a hundred is what I plan to do.

Replies from: None
comment by [deleted] · 2013-07-02T12:56:52.597Z · LW(p) · GW(p)

Your random numbers will be more generally useful if other people can verify the randomness.

comment by Torello · 2013-07-01T18:20:52.670Z · LW(p) · GW(p)

Seeking Educational Advice...

I imagine some LW users have these questions, or can answer them. Sorry if this isn't the right place (but please point me to the right place!).

I’m thinking of returning to university to study evolution/biology, the mind, IT, science-type stuff.

Is there any legitimate way (I mean actually achievable: you have first-hand experience and can point to concrete resources) to attend an adequate university for no or low cost?

How can I measure my aptitude for various fields (for cheap/free)? (I did an undergrad degree in education which was so easy I don't know if I could make the grades in a demanding field).

My first undergrad degree (education) was non-science, so should I go back for another undergrad degree, or try to fill gaps on my own and do a post-grad in something with science?

I've started investigating free online education (lesswrong, edx, coursera, etc) but I have concerns: don't I need credentials? Don't I need classmates/colleagues/collaborators to help teach me, motivate me, and supply me with equipment? How do I know if I really understand the material? How do I address these concerns?

p.s. – I’m all for “munchkin” style answers/solutions to these problems, so long as they are actually feasible

Replies from: RolfAndreassen, Kawoomba, Risto_Saarelma, OrphanWilde, ChristianKl, pragmatist
comment by RolfAndreassen · 2013-07-01T19:05:49.433Z · LW(p) · GW(p)

Do you care about the piece of paper? If not, you can likely attend courses in the literal sense - just show up for the lectures - without paying anything at all. Old textbooks are cheap, if you want problem sets, and you almost certainly do - I strongly opine that you cannot learn anything even remotely math-oriented without doing problems. But no rule says you have to do the same problems the others in the class are doing.

Clearly, this is not the method for you if you need a lot of feedback and guidance, nor if you want the credential in addition to the knowledge.

comment by Kawoomba · 2013-07-01T18:28:22.751Z · LW(p) · GW(p)

Mostly depends on what languages you speak fluently, what countries you can obtain visas for, your willingness to relocate to said countries and your plans on what you'll do with the "science-type stuff". If you want advice, edit your post accordingly. Most of the answers will come out to public colleges in your home state, or Europe. Or plumbing.

Replies from: Torello
comment by Torello · 2013-07-01T18:32:13.898Z · LW(p) · GW(p)

Hey, thanks for your reply.

I do want advice... how would you suggest I edit the post?

I don't know what I plan to do with what I study, I just know that it's very interesting to me. I'm not sure if that's good or bad.

I'm a native English speaker, and I speak passable Spanish (I can read light novels, hold conversations, etc). I never really considered doing an undergrad degree abroad.

Replies from: ygert
comment by ygert · 2013-07-01T19:29:15.703Z · LW(p) · GW(p)

I just realized I probably totally misunderstood what was being asked. Never mind.

comment by Risto_Saarelma · 2013-07-09T18:32:03.897Z · LW(p) · GW(p)

How can I measure my aptitude for various fields (for cheap/free)? (I did an undergrad degree in education which was so easy I don't know if I could make the grades in a demanding field).

Get a textbook of the appropriate level on the subject that has exercises and the correct answers to them, read the book, then do the exercises and see what you come up with? If it's math or physics, you should be able to tell by yourself whether your solutions resemble the example solutions in the text, seem to make sense and come up with the correct answers.

I don't know how well this will work with evolutionary biology or cognitive science. If you want to include philosophy in the "mind" part, it's my understanding that you need to be a trained academic philosopher to reliably tell fancy garbage and acceptable academic philosophy apart, so the approach probably won't work there.

After reading a couple of introductory textbooks, try to find grad students in the field in online chats and ask them about the stuff to gauge how well you've understood it. You can probably find plenty of math and computer science literate people on Lesswrong to bounce stuff off of.

Also, do you actually know you need to attend lectures to learn things, or are you just planning to do this because attending lectures is what people who get educated are supposed to do in the standard narrative? I'm pretty much incapable of following spoken academic lectures myself, and basically learn most everything by reading. If I wanted to get an education, I'd just go for a big stack of textbooks and a good note-taking system and ignore live lectures entirely at least on the undergrad level.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-13T05:22:10.623Z · LW(p) · GW(p)

If you want to include philosophy in the "mind" part, it's my understanding that you need to be a trained academic philosopher to reliably tell fancy garbage and acceptable academic philosophy apart, so the approach probably won't work there.

My understanding is that there is considerable overlap between these two categories.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-07-13T07:08:09.850Z · LW(p) · GW(p)

That's another problem. You might not be able to trust an academic philosopher's judgment on whether a bit of philosophy is actually any good as much as, say, an academic mathematician's judgment on whether a bit of mathematics is any good.

comment by OrphanWilde · 2013-07-02T18:05:52.341Z · LW(p) · GW(p)

Are you wanting a degree, or are you wanting education?

If you're just wanting the information, the university in question may permit low-cost auditing.

comment by ChristianKl · 2013-07-02T13:15:45.030Z · LW(p) · GW(p)

Is there any legitimate way (I mean actually achievable: you have first-hand experience and can point to concrete resources) to attend an adequate university for no or low cost?

Of course, I just go to any university in my city and they don't cost anything.

I've started investigating free online education (lesswrong, edx, coursera, etc) but I have concerns: don't I need credentials?

Whether or not you need credentials depends on your goals. Yudkowsky started SI/MIRI without any credentials.

Don't I need classmates/colleagues/collaborators to help teach me,

When it comes to programming questions that I face as part of my university studies I go to StackOverflow.

motivate me,

Depends on your ability to self motivate.

and supply me with equipment?

Depends on whether you want to do something that needs equipment.

How do I know if I really understand the material?

If you can remember the Anki cards about a topic it's likely that you understand the topic. But more importantly, what's your goal? What do you want to be able to do with your "understanding of the material"?

Replies from: Torello
comment by Torello · 2013-07-02T16:06:45.918Z · LW(p) · GW(p)

Thanks very much for your reply,

Apparently you're from Berlin (I'm sure this is google-able) -- are foreign (US, in my case) students able to enroll in classes without fees, difficult-to-obtain visas, etc.? Are many courses offered in English?

I'm not really sure what I want to do with my "understanding of the material," which is largely why I'm not sure if credentials/access to equipment are important to me.

It's hard to measure, but I think I'm pretty motivated. Unsurprisingly, I don't have the raw intelligence of Yudkowsky, so I have doubts about how well someone with my skill set will be able to make progress without support/credentials.

Again, thanks for the reply.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-02T17:27:51.081Z · LW(p) · GW(p)

Apparently you're from Berlin (I'm sure this is google-able) -- are foreign (US, in my case) students able to enroll in classes without fees, difficult-to-obtain visas, etc.? Are many courses offered in English?

As far as I know there are no additional fees for foreign students. A lot of Master's courses are offered in English. I think it should be easy for US citizens to get a visa.

But I'm a German citizen, so I don't know the details well from the perspective of a US citizen.

I'm not really sure what I want to do with my "understanding of the material,"

How about spending a gap year to think about what you want to do with your life before starting a new degree at university?

Unsurprisingly, I don't have the raw intelligence of Yudkowsky, so I have doubts about how well someone with my skill set will be able to make progress without support/credentials.

I don't think that raw intelligence is the most important thing. The important thing is to be willing to work without a clear path.

Having social skills is also important. If you have marketable skills and network well, credentials aren't important.

If you are truly interested in biology and science I would suggest that you do quantified self style self experiments. Start a blog about them.

comment by pragmatist · 2013-07-02T04:41:20.830Z · LW(p) · GW(p)

I went to a college in the United States where admissions are need-blind (they don't consider how much financial aid you'll need in their decision to admit you) and that offers full-need aid (once admitted, they will meet any financial need you demonstrate). I was an international student, so the aid was not in the form of a loan, but a straight-up grant. I basically ended up paying nothing to go to a college that normally charges $60k+ a year. So if you're not American, this is a possibility. If you are American, I understand that most (all?) of the financial aid is in the form of federal loans, which you may or may not want to incur.

Wikipedia says there are only seven US universities that offer full need-blind aid to international students. There are many more that are need-blind and full-need for US students, although this will probably involve loans. That Wikipedia page also lists four non-US universities that offer need-blind and full-need aid to all applicants. If you are American, applying to one of those may be a better bet, because you might get a grant instead of a loan. I've heard good things about the National University of Singapore.

comment by Rukifellth · 2013-07-13T23:52:56.149Z · LW(p) · GW(p)

I personally regard this entire subject as a memetic hazard, and will rot13 accordingly.

Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:

... gung gurer vf bayl bar crefba va gur havirefr, lbh, naq rirelbar lbh frr nebhaq lbh vf ernyyl whfg lbh.

Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.

V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:

PV pnaabg znantr fngvfsnpgbevyl gur "pbagvahvgl ceboyrz" (jung znxrf lbh gb pbagvahr gb erznva lbh va gvzr). Guvf vf jul va "Ernfba naq Crefbaf", Qrerx Cnesvg cebcbfrq RV nf n fbyhgvba. Va "V Nz Lbh", Qnavry Xbynx cebcbfrq BV, fubjvat gung grpuavpnyyl gurl ner rdhvinyrag. Fb pubbfvat orgjrra RV naq BV frrzf gb or n znggre bs crefbany gnfgr. Znlor gurve qvssreraprf zvtug or erqhprq gb n grezvabybtl ceboyrz. Bgurejvfr, V pbafvqre BV zber fgebat orpnhfr vg pna rkcynva jung V pnyyrq "gur vaqvivqhny rkvfgragvny ceboyrz" [Jung jr zrna jura jr nfx bhefryirf "Pbhyq V unir arire rkvfgrq?"]

Gur rrevrfg cneg nobhg gur Snprobbx tebhc "V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz" vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg'f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba'g rira pner.

Replies from: Thomas, Qiaochu_Yuan, Douglas_Knight, Richard_Kennaway, drethelin
comment by Thomas · 2013-07-16T16:03:44.969Z · LW(p) · GW(p)

I think it is true. Self-awareness is not hardware (wetware, whatever-ware) dependent. Just upload yourself and everything will be just fine. You'll be in two places at the same time, but with no communication between your instances, the old one and the new one.

The same situation holds here, except that you have more than one natural-born upload. Many billions, in fact.

Naturalism leads to this (frightening) conclusion.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T16:26:49.281Z · LW(p) · GW(p)

Doesn't that black box the process of uploading?

Replies from: Thomas
comment by Thomas · 2013-07-16T16:58:36.793Z · LW(p) · GW(p)

I am not sure what you mean by this black-boxing.

But to think that the process of consciousness will work inside a computer, but not inside some other human skull, is naive.

It should work either in both places or nowhere.

People respond to this with "My memories are crucial, they are my unique identifier!" Well, you can forget pretty much everything and you will feel the same way. Besides, at every moment that you are self-aware, you are remembering different little pieces of everything; it doesn't matter what exactly. It might be a memory of a total solar eclipse; millions have almost the same short movie in their heads. Nothing unique here.

Consciousness is a funny algorithm, running everywhere. This is why you should care about the future and behave accordingly in the present.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T17:12:19.267Z · LW(p) · GW(p)

Black boxing is when a complicated process is skipped over in reasoning. You supposed that mind uploading was possible for the sake of argument, to support a conclusion outside of the argument.

Replies from: Thomas
comment by Thomas · 2013-07-16T17:24:29.954Z · LW(p) · GW(p)

I see no reason why uploading would be impossible, just as I see no reason why interstellar travel would be impossible.

I have no idea how to actually do either, but that's another matter.

If the naturalistic view is valid, it is difficult to see a reason why those two would be impossible. But if the Universe is a magical place, then of course it's possible that they are both impossible, due to some witch's spell or something.

Still, I do assign a small probability to the possibility that consciousness is something not entirely computable and therefore not executable on any model of a Turing machine. But then again, I consider that probability quite negligible.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T17:42:05.507Z · LW(p) · GW(p)

Does it matter what consciousness is made out of for mind uploading to be possible?

Replies from: Thomas
comment by Thomas · 2013-07-16T18:47:53.371Z · LW(p) · GW(p)

Of course. If some of us are right, consciousness is an algorithm running on a substrate able to compute it.

Then transplantation to another substrate is surely possible. How difficult this copying actually is, I wonder.

That's all assuming no magic is involved here: no spirituality, no soul, and no other holy crap.

But when we embrace the algorithmic nature of consciousness, intelligence, memories and so on, we lose the unique identifier so dear to most otherwise rational people. Their mantra goes "You only live once!" or "Everyone is a unique and unrepeatable person!" Yes, sure. So when I was born, did a signal travel across the Universe to change it from a place where I could be born to a place where this possibility has now expired for good? May I ask, is this signal faster than light? If it isn't ... well, it isn't good enough.

I am just an algorithm, being computed here and there, before and now.

Replies from: Rukifellth, bogus, Rukifellth
comment by Rukifellth · 2013-07-17T22:44:20.573Z · LW(p) · GW(p)

I forgot to mention this, but I also tried my hand at writing an essay about this sort of thing: finding the physical manifestation of consciousness. If I could vouch for its rigor, I'd have posted it to the Facebook group already, but alas, I can't, though it may be of some use here.

Identifying the physical manifestation of consciousness.

Identifying the final place where physical cause and mental effect meet has been one of neuroscience's top questions, widely known as the "Hard Problem". I'd like to try my hand at making a set of rules for the development of a procedure that would pry out the location of that "final destination". The process is one of elimination: ruling out as many intermediaries between consciousness and cause as possible until no intermediary remains. At that point, it must be concluded that the cause in question is consciousness itself. The principles outlined identify the characteristics of an intermediary, so that they may be cut out. A cause is only an intermediary if it violates any one of these principles:

Instantaneous Change: A change to this physical thing must create an immediate change in mental state. For example, if the heart is our soul, shooting a person in the heart shouldn't leave even a millisecond of perception, or people with heart disease should also develop psychiatric symptoms, not attributable to stress, in the course of their illness.

Predictable Change: If a small change in physical state produces a small change in mental state, then increasing the magnitude of that same change should increase the corresponding mental change without producing any surprises. If increasing that physical change begins to produce the effects of a smaller but different physical change, then there's still an intermediary between physical and mental. For example, SSRIs lift certain kinds of depression, but continued usage can "burn out" serotonin receptors, which means that chemicals like SSRIs cannot possibly be considered "units of consciousness".

Unique Change/Repeatability: A change in the state of this physical thing must create a mental state that is unique to that physical change. In graphing terms, a value x cannot map to more than one value of y. If there's more than one possible y value, or multiple x's can create the same y, then there's still an intermediary between physical and mental. For example, continuing from above, one could start to wonder whether "receptors" are the "units of consciousness" and work from there by asking whether it's possible to reproduce a mental state using something other than neurotransmitter receptors. If that is possible, then the unique-change clause is violated by having multiple x's map onto the same y, which implies that there's an intermediary between neurotransmitter receptors and mental states.

Suppose an LED and its switch are the same thing. To demonstrate this, we put the system through the three principles to see how it behaves. Failing any one of these tests indicates that we need to go deeper.

For the Instantaneous Change principle, we can just grab a hypothetical Planck-time high-speed camera. If the state changes of the light and the switch are perfectly in sync with each other even at the Planck timescale, then they are the same object. This is not the case, as even the femtosecond camera demonstrated at TED Talks could show.

The Predictable Change principle is inapplicable, because there are only two possible states, on/off for the switch, and their two directly correlated states, on/off for the light, so we move on. We can't very well add a third state for the switch and expect any kind of change.

Unique Change can be tested by looking at the switch. It sits between a power source and the LED. The method of Alexander the Great would have us cut the switch out of the circuit and see what happens when we pull the wires together. Do the wires, with their two states (connected/unconnected), correlate directly with the LED's states of on/off? If so, then the switch was not the LED, for the states of the LED are not permanently changed.

comment by bogus · 2013-07-17T20:18:31.651Z · LW(p) · GW(p)

Their mantra goes "You only live once!" ...

Wait, so that's where the whole 'YOLO' thing/meme comes from? I notice that I am confused...

comment by Rukifellth · 2013-07-16T19:40:10.622Z · LW(p) · GW(p)

How does this square with chaos theory, which models behaviour that diverges greatly due to infinitesimal changes at the start?

Replies from: None, Thomas
comment by [deleted] · 2013-07-17T05:12:42.134Z · LW(p) · GW(p)

What has it got to do with chaos theory?

Replies from: Rukifellth
comment by Rukifellth · 2013-07-17T09:17:57.367Z · LW(p) · GW(p)

Suppose you have two similar but extremely complicated systems that put compound pendulums to shame, each with different starting conditions. Would the state of one system ever be identical to any state that has occurred, or will occur, in the other?
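As a toy illustration of that sensitivity (using the chaotic logistic map as a stand-in for an "extremely complicated system"; the parameters are mine):

```python
def logistic_orbit(x, steps=60, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for `steps` steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two starting points differing by one part in a trillion track each other
# for the first few iterations, then end up in quite different states.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-12)
```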

Replies from: None
comment by [deleted] · 2013-07-17T19:24:05.322Z · LW(p) · GW(p)

No, with extremely high probability.

How does that relate to whatever Thomas was saying? For that matter, what is Thomas saying?

Replies from: Plasmon, Rukifellth
comment by Plasmon · 2013-07-17T19:48:17.521Z · LW(p) · GW(p)

No, with extremely high probability.

Are you sure?

Replies from: None
comment by [deleted] · 2013-07-17T19:58:48.100Z · LW(p) · GW(p)

That's a really cool proof, but phase space can be exponentially large, especially for an "extremely complicated" system. It also requires finite bounds on system parameters.

For that to break my "extremely high probability", there would have to be relatively few orbits in the phase space approaching a space-filling set of curves, which is itself extremely unlikely, unless you can think up some pathological example.

It does weaken my statement, though.

comment by Rukifellth · 2013-07-17T19:29:43.291Z · LW(p) · GW(p)

Their mantra goes "You only live once!" or "Everyone is a unique and unrepeatable person!"

He suggested that it was possible for a person to be repeated, mental state and all, given enough time. I thought to conceptualize the minds of people as being like extremely complicated systems with chaotic interactions to ask if his belief could be true.

comment by Thomas · 2013-07-16T19:57:09.873Z · LW(p) · GW(p)

How does the identity of a single person square with it? Wouldn't a tiny change convert me into somebody else?

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T20:13:50.620Z · LW(p) · GW(p)

At no point has one cubic centimeter of air been exactly like another cubic centimeter of air.

Replies from: Thomas
comment by Thomas · 2013-07-17T04:55:51.495Z · LW(p) · GW(p)

At no point are you exactly the same as you were seconds ago.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-17T16:31:23.878Z · LW(p) · GW(p)

Oh, I see what you mean now. You don't become somebody else, which would imply an existing mental state that has existed before -- you become somebody new.

Replies from: Thomas
comment by Thomas · 2013-07-17T17:36:43.348Z · LW(p) · GW(p)

No, not somebody new. The same consciousness algorithm is running, and I am indistinguishable from the consciousness algorithm.

It is not "I am you"; it is "I equal consciousness" and "you equal consciousness". Therefore, *I am you*.

For you can change every part of your body and every piece of your memories. As long as you are self-aware, it's you. Even with a different body, somewhere else.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-17T19:07:47.848Z · LW(p) · GW(p)

Just wondering, does Less Wrong have a procedure for understanding concepts that are incredibly distant from direct experience?

comment by Qiaochu_Yuan · 2013-07-15T05:42:55.014Z · LW(p) · GW(p)

What would you do on the hypothesis that this was true that you wouldn't do on the hypothesis that it was false?

Replies from: Rukifellth, Rukifellth
comment by Rukifellth · 2013-07-15T11:55:53.985Z · LW(p) · GW(p)

Honestly? I'd start taking antidepressants, and then embark on a life-long quest to destroy the Universe via high-energy particle experiments, or perhaps an unfriendly AI.

comment by Rukifellth · 2013-07-15T11:30:20.054Z · LW(p) · GW(p)

Honestly, I'd start taking antidepressants, and then embark on a life-long quest to destroy the Universe via high-energy particle experiments. Still not used to how the commenting works; this comment was not retracted.

comment by Douglas_Knight · 2013-07-15T05:28:20.587Z · LW(p) · GW(p)

I endorse this theory and it all adds up to normality: in the end, the theories that you offer as alternatives are all true. (I have not read anything other than your comment.)

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T01:24:14.631Z · LW(p) · GW(p)

How can they, if they're mutually exclusive?

Whew, Karma. Also, why did this get downvoted so much? I'd appreciate the skepticism a lot more in the form of an argument. (No, seriously, I'd appreciate skeptical argument way more than any abstract philosophical argument should be appreciated)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-07-16T03:05:34.694Z · LW(p) · GW(p)

The belief that they are mutually exclusive is confusion.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T03:19:48.496Z · LW(p) · GW(p)

I don't understand.

comment by Richard_Kennaway · 2013-07-15T08:34:11.154Z · LW(p) · GW(p)

(Partially derot13ing for clarity:)

What does everyone think of Bcra Vaqvivqhnyvfz

Nonsense on stilts. Next!

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T01:19:34.826Z · LW(p) · GW(p)

I like your phrasing, but how is this so?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-16T12:14:11.268Z · LW(p) · GW(p)

I just have a robust memetic immune defence system that at once recognises the absurdity of the suggested viewpoint and sees that, apart from the warm fuzzies it may induce from contemplating the Deep Wisdom that "we are all One!", it has no implications for anticipated experiences.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T13:46:28.828Z · LW(p) · GW(p)

apart from the warm fuzzies it may induce from contemplating the Deep Wisdom that "we are all One!"

I don't understand why everyone thinks this is such a good thing. I wouldn't have rot13'd this post if I thought this was a good thing.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-16T14:04:36.771Z · LW(p) · GW(p)

I don't understand why everyone thinks this is such a good thing.

Well, I don't think warm fuzzies from Deep (i.e. fake) Wisdom are a good thing. Does anyone here? I prefer to get mine from reality, or from fiction, not from the latter passed off as the former.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T14:27:49.436Z · LW(p) · GW(p)

I mean, I don't understand why this would be a source of warm fuzzies. Everyone else is really you? That means none of the people I care about ever existed! I can't imagine people continuing to function with a belief like that, and yet there it is, a Facebook group whose members smile knowingly at each other, each member fully complacent with the idea that none of the others really exist.

Replies from: Alejandro1, Richard_Kennaway
comment by Alejandro1 · 2013-07-16T15:35:52.085Z · LW(p) · GW(p)

Maybe if your life is miserable (e.g., let's say you are estranged from your family, you are unemployed or have a soul-crushing job, and/or you have no close friends and no romantic prospects) you get a thrill out of believing that none of it is real, that those bothersome people you interact with are in fact only aspects of yourself.

Replies from: Thomas
comment by Thomas · 2013-07-16T16:09:02.915Z · LW(p) · GW(p)

This is a kind of META argument. "How miserable you must be, to suggest something like this ..."

Doesn't matter how miserable or not he is. It only matters if he is right or not.

Replies from: Alejandro1
comment by Alejandro1 · 2013-07-16T16:10:44.955Z · LW(p) · GW(p)

I'm just answering Rukifellth's question as to how could someone derive warm fuzzies from such a belief, not making any kind of argument against it.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-16T16:44:55.647Z · LW(p) · GW(p)

I would derive a great number of warm fuzzies from an argument against it.

comment by Richard_Kennaway · 2013-07-16T14:43:47.574Z · LW(p) · GW(p)

"People are crazy, the world is mad." Having boggled at them, I pass by.

comment by drethelin · 2013-07-15T01:42:47.007Z · LW(p) · GW(p)

If there's only one person and everyone else is simulated in their mind, then that simulation is powerful and uncontrollable enough that, for all practical purposes, they can act as if there are other people.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-15T02:07:33.379Z · LW(p) · GW(p)

The concept is unlike traditional solipsism, if that's what you're referring to?

Replies from: drethelin
comment by drethelin · 2013-07-15T02:10:24.766Z · LW(p) · GW(p)

I haven't read past what you posted but it seems identical to me.

Replies from: Rukifellth, Rukifellth
comment by Rukifellth · 2013-07-15T02:21:04.742Z · LW(p) · GW(p)

This concept is unlike your example, because it is still possible for this one person carrying the simulation to create an offspring or clone, and it would in time become two separate people. Open Individualism states that if the one person carrying the simulation were to somehow reproduce themselves, there would still only be one person.

comment by Rukifellth · 2013-07-15T02:14:55.102Z · LW(p) · GW(p)

Past what I posted? Where are you?

Replies from: drethelin
comment by drethelin · 2013-07-15T02:38:59.081Z · LW(p) · GW(p)

In your head.