Open thread, Mar. 16 - Mar. 22, 2015

post by MrMind · 2015-03-16T08:13:20.453Z · LW · GW · Legacy · 307 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

307 comments

Comments sorted by top scores.

comment by Vaniver · 2015-03-17T03:18:15.918Z · LW(p) · GW(p)

So, several years ago I was moved by my primary dissatisfaction with HPMoR and my enjoyment of MLP to start a rationalist MLP fanfic. (There are at least two others, occupying very different spheres, which I will get to in a bit.)

My main dissatisfaction with HPMoR was that Harry is almost always the teacher, not the student; relatedly, his cleverness far outstrips his wisdom. It is only at the very end, after he nearly loses everything, that he starts to realize how difficult it is to get things right, and even then he does not fully get it. Harry is the sort of character that the careful reader can learn from, but not the sort of character one should try to emulate.

MLP's protagonist, Twilight Sparkle, is in many ways the opposite character: instead of being overconfident and arrogant, she is anxious and (generally) humble. Where Harry has difficulty seeing others as equals or useful, Twilight genuinely relies on her friends. Most of Harry's positive characteristics, though, Twilight shares--or could plausibly share with little modification. (In HP terms, she's basically what would have happened if bookish Hermione had been the Girl-Who-Lived, with the accompanying leadership potential, and Harry Potter, the athletic Gryffindor seeker, was just one of her friends.)

So I had the clever idea to write a series of five scenes where Twilight learned a rationality lesson from each of the other five primary characters (yes, even Pinkie Pie, and that one actually wasn't hard to write). And then once I was thinking of a rationalist Twilight, an overall story formed around those scenes. I also wanted to write a story which had more of a Hansonian growth curve--yes, things are growing and a clever protagonist is constantly improving things around herself, but she's not the only PC in the world, and doesn't necessarily stand out as particularly effective. She might get a nice palace and lead a growing and exciting startup, but she's not going to become the Singleton, and she's more likely to have a bunch of exciting and energetic friends than be a lonely genius. (The primary two rationalist MLP fanfics that I'm aware of--not including any of PhilGoetz's stuff--are one in which a pony-flavored Singleton dominates the real world, and one in which a HPMoR-esque protagonist drops into the MLP world and does HPMoR-esque things.)

But, since I'm not celebrating finishing that story, obviously things went wrong. The primary ones:

  1. My first project was not a throwaway. That's the advice they give for any physical craft--don't make your first set of bookshelves for yourself, or your first scarf, and so on. You're going to muck something up, and to this day my primary scarf has a bit of a trapezoidal slant at the end of it because I didn't quite have the hang of how to crochet the end of a row. If I had made a dishrag to make my rookie mistakes on and then a scarf to wear, the scarf would have been fine. The application to fiction is obvious. Not only does my story have deep problems, it isn't even done. (I do have a finished joke story written in response to one of Eliezer's Facebook posts, which isn't any better but is at least a complete work.)
  2. As the above suggests, I'm not very good at writing fiction. Like most people who DMed at some point, I did a fair bit of it when I was younger--but it never ascended to full hobby status. Arguing and forum-posting did, but that's a fairly different skill.
  3. Continuing the trend, I don't find writing fiction all that rewarding. PhilGoetz, at some point, described himself as having to write. Perhaps the same is true of me, but I find that urge adequately satisfied by nonfiction, and I suspect the world is better off with another book review or an introductory causality lesson plan / textbook than it is with another piece of rationalist fiction (conditioned on me writing it, at least).

But with HPMoR finished, I feel the itch again. Especially in the light of the Final Exam and its resolution. (As Sun Tzu put it, "Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win.") But just diving into it again because the itch has returned does not a plan for success make. Here are the things I'm thinking about (and please, feel free to suggest other things to think about):

  1. Finish the original idea--the five chapters where Twilight learns a lesson from each of her friends--and publish it as its own work. (This is mostly done, and would just need some editing.) Gradually build out the full story as a separate thing when you have the time and interest.
  2. Seek more serious help from other writers--not just editing, but possibly full coauthorship, or sliding further down the scale towards commissions. (I haven't fully sorted out my priorities here, but I think I care enough to use some units of caring on this.)
  3. Drop this idea as less valuable than other projects. I don't have the introspection ability to be sure about energy/motivation, but I suspect this would draw from the same motivation budget as nonfiction writing projects, and it certainly would draw from the same time budget.

(By the way, here is the link to it; I last updated it about a year ago.)

Replies from: Alicorn, Zian
comment by Alicorn · 2015-03-17T07:10:23.749Z · LW(p) · GW(p)

I have not read your story yet, but if I wait till I get around to it, I will forget to inform you that I have been known to accept commissions.

comment by Zian · 2015-03-22T23:21:11.127Z · LW(p) · GW(p)

I suggest doing #1 and #2 in parallel.

As you said, the story is mostly done and just needs editing. That will require help from other people and can happen while you do other things. It will be good for you to be able to say "Behold, I have finished this thing."

At the same time, as you tackle the full story as a separate thing, it may be worth giving it your best effort (by pulling in #2) so that after a few months, you can say "I tried really hard and it didn't work. Alas. Time to stop." or the opposite, without having to wonder if you just didn't try hard enough.

comment by Rukiedor · 2015-03-16T18:36:01.086Z · LW(p) · GW(p)

I think I recall seeing somewhere that the open thread is a good place for potentially silly questions. So I've got one to ask.

As long as I can remember, small things have given me the willies. Objects around the size of a penny or smaller trigger a kind of revulsion response if I have to handle them: small coins, those paper circles created when using a hole punch, those stickers they stick on fruit. I'm not typically bothered by handling a lot of such objects at the same time; a handful of pennies wouldn't bother me.

One thing that's odd, well aside from everything else about it, is that it seems to be especially triggered by jewelry. Rings, basically any piercings, even smallish necklaces. I'm alright as long as they don't get too close to me, but I start feeling weird if I have to interact with them.

Anyway, I've always thought this was pretty strange and it recently occurred to me that someone here probably has some idea of what's going on. Thanks in advance for any thoughts.

Replies from: Gunnar_Zarncke, chaosmage, Toggle, None
comment by Gunnar_Zarncke · 2015-03-16T20:18:22.466Z · LW(p) · GW(p)

Interesting. Great that you shared it. I have never heard of something like this. To me it looks like a basic fear pattern-match gone wrong (wired differently than usual in the brain). I mean, there must be some pre-wiring of object recognition in the brain that triggers on e.g. spider-like and snake-like forms. Such wiring could go wrong (mutation or whatever) and end up pattern-matching against small, ring-like objects instead.

See also "What universal experiences are you missing without realizing it?", where people mention a lot of unusual experiences -- maybe you can find something comparable to yours there.

Replies from: Rukiedor
comment by Rukiedor · 2015-03-17T02:24:56.661Z · LW(p) · GW(p)

Ha, it was actually looking through the Universal Experiences comments that prompted me to come here and ask if anyone had any experience with something like this. I didn't see anything in the comments there that sounded similar.

I kind of doubt it's related to fear triggers, because I don't like spiders either, and my aversion to spiders feels very different from this. Interesting thing to think about though. Thanks.

comment by chaosmage · 2015-03-16T22:22:28.147Z · LW(p) · GW(p)

I'm not a doctor, but this sounds like microphobia. I do recognize you're describing your feelings as a kind of revulsion, not fear proper, but still, that's the best pattern match I've got.

I suggest you talk to a psychiatrist or psychotherapist about it, because if it is that, your issue is very solvable. Phobias are one of the easiest-to-treat psychological issues; desensitization and cognitive-behavioral therapy work quite well.

Replies from: Rukiedor
comment by Rukiedor · 2015-03-17T02:33:50.346Z · LW(p) · GW(p)

Interesting; not exactly the same thing, but it does sound similar. You're probably right about desensitization -- there are some rather small things I can handle without a problem. I'll have to give that a shot. Thanks.

comment by Toggle · 2015-03-18T22:43:53.290Z · LW(p) · GW(p)

My housemate has this exact problem -- right down to the issues with jewelry in particular. If she has to shake hands with somebody who's wearing a metal ring, she has to sort of ritualistically wipe off her hands afterwards. Metal in general seems to trigger the reaction much more strongly, so she'll have problems with loose coins but not stickers.

It's been persistent throughout her life, I understand, but exposure therapy has reduced its severity.

Replies from: Rukiedor
comment by Rukiedor · 2015-03-19T14:52:04.180Z · LW(p) · GW(p)

That is very interesting. Kind of validating, and one more bit of evidence in favor of trying exposure therapy. Thank you for sharing that.

comment by [deleted] · 2015-03-17T10:34:30.400Z · LW(p) · GW(p)

Maybe childhood training against choking hazards.

I was once hospitalized for months at 5 years old, and they had exhibits on the wall of the small stuff kids had stuffed into their noses or ears that had to be removed surgically. It was scary. I was afraid of them. The fact that I still remember it suggests it may have been traumatic; maybe it was something like that for you.

How do you handle eating or cooking lentils?

Replies from: Rukiedor
comment by Rukiedor · 2015-03-17T15:28:02.740Z · LW(p) · GW(p)

That's an interesting possibility. I don't have any particularly strong memories of being warned about choking hazards, about the only one I remember is warnings about plastic bags.

For lentils, I'm fine handling them in bulk, and eating spoonfuls of them doesn't bother me. When most of them are gone and there are only a few scattered on my plate or bowl, they start to trigger the revulsion a little bit, although not nearly as strongly as many other things.

This actually seems to suggest that there is some desensitization going on. I never had lentils until I was an adult, I have however been eating rice for as long as I can remember, and individual rice grains don't trigger the reaction under most circumstances. Small candies, like skittles, m&ms, smarties, etc. don't really trigger it either, in most circumstances, which again, I have been eating since childhood.

comment by [deleted] · 2015-03-17T04:32:48.939Z · LW(p) · GW(p)

I'm going to be doing a Rationality / Sequences reading group. Sorry I've been busy the last few days since the book came out, but I'll be making an introductory posting soon. The plan is to cover one sequence every two weeks, covering the whole book over the course of a year.

comment by chaosmage · 2015-03-16T22:11:02.047Z · LW(p) · GW(p)

What resources would you recommend for skilled, highly-specialized, employed EU citizens looking for employment in the US?

Replies from: is4junk
comment by is4junk · 2015-03-21T15:17:10.121Z · LW(p) · GW(p)

I'd look for a good headhunter in your field (assuming it is not too niche). Let them get the commission for finding you a job.

  • Update your LinkedIn profile and see if any recruiters contact you.
  • Talk to a recruiter in a company that is a near fit for you, even if they aren't hiring now, and ask if they have worked with any headhunters in the past.
  • Go to a job fair in the US - not for a job, but to interview headhunters.
Replies from: chaosmage
comment by chaosmage · 2015-03-25T13:51:40.057Z · LW(p) · GW(p)

Thank you!

comment by Artaxerxes · 2015-03-19T18:42:12.604Z · LW(p) · GW(p)

Gates goes into a little bit more detail on his views on AI.

Interviewer:

Yesterday there was a lot of talk here on machine intelligence and I know you had expressed some concerns about how machines are making leaps in processing ability. What do you think we should be doing?

Gates:

There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.

Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem.

Horvitz's thing.

Musk's thing.

If Gates were to really get on board, that would be huge, I think. Fingers crossed.

comment by [deleted] · 2015-03-16T16:06:12.517Z · LW(p) · GW(p)

To the old ask and guess thread: I grew up under the impression it is a gender thing.

My mother would be "guess": she would expect me to notice that the trash needed taking out; I didn't, because I was lazy, and then she did it herself and acted hurt, telling me she was tired of always needing to tell me to do my share of the housework -- she would rather do it herself, but she was bitter and hurt.

On the occasional days she was ill and my father had to give a damn about the housework (in his defense, he tended to have 10-11 hour workdays while my mother was at home, so it made sense not to), he would do it in the clearly "tell" style of military training sergeants: "get that effing trash out, on the double, you got five effing seconds to finish it" -- that kind of style. However, he was NEVER angry or hurt about this; he actually looked amused and seemed to be having fun during that verbal rudeness. I think he always thought that if you order people to do things and they do them on the bounce, then things are right, even if you need to give that order every day: you just tell it ruder and ruder until they learn, easy enough.

While I know ask and guess cultures exist in general, for me it got really tied up with gender. I think, since my father always had employees, often stupid ones, and had to play boss all the time, he simply did not mind being a boss even if it meant telling people the same things over and over. It is repetitive, but maybe it always gives a bit of a power-trip feeling. And it matched nicely with the generally patriarchal social model. I think my mother felt she did not really have the power to enforce her wishes, and this is why she wanted us to take them to heart and remember what she asked without having to be told over and over: to her, us really ignoring her wishes felt like a real possibility. I think she hated repeating them because she felt it could one day come to the point where she asks again and I or Dad refuse outright, and then she has no more recourse. She thought, correctly, that if we really loved her we would remember what she asked. Having to be guilt-tripped into pulling my share was probably a correctly identified lack of care on my side, and maybe she was right that I was not very far from outright refusal.

Replies from: Lumifer, MathiasZaman
comment by Lumifer · 2015-03-16T16:29:18.619Z · LW(p) · GW(p)

and then she did it and acted hurt and told me she is tired of always needing to tell me

That's pretty classical passive-aggressive behaviour. I don't think it has much to do with guess-vs-ask cultures.

But I agree that there is probably some gender correlation.

Replies from: NancyLebovitz, JoshuaZ, None
comment by NancyLebovitz · 2015-03-16T17:17:06.899Z · LW(p) · GW(p)

It seems plausible that Hint cultures lead to passive aggression-- if you can't be just plain aggressive, what have you got left?

Replies from: Lumifer
comment by Lumifer · 2015-03-16T17:25:16.962Z · LW(p) · GW(p)

I think power imbalance leads to passive aggression much more than the Hint or Ask character of the culture.

Hint and Ask are basically preferred communication protocols and most Hint people I know will adjust if the hints are clearly not working. But there is a big difference between

  • Glance at garbage. Glance at garbage. Glance at garbage. Dear, can you please take out the garbage?

and

  • Glance at garbage. Glance at garbage. Glance at garbage. You never pay any attention to me and you screwed up my whole life, you ungrateful bastard!
Replies from: None
comment by [deleted] · 2015-03-17T08:55:58.419Z · LW(p) · GW(p)

I think power imbalance leads to passive aggression much more than the Hint or Ask character of the culture.

But that is largely the same thing. The classical boss-subordinate relationship is ask (order) down, guess up. Passive-aggression is extreme (angry, upset) guess, active aggression is extreme (angry, upset) ask/order.

When whole cultures are all-ask or all-guess that is probably a sign of egalitarianism - within that subset.

Replies from: Lumifer
comment by Lumifer · 2015-03-17T14:50:09.662Z · LW(p) · GW(p)

The classical boss-subordinate relationship is ask (order) down, guess up.

It's more complicated. Ask/tell is simpler, faster, and more efficient, so in the workplace (where status and power relationships are largely formalized) it tends to dominate anyway.

Also, as anecdata, I know a girl who is a very pronounced Hint/Guess person, but she's a manager and has underlings. She quite successfully manages them mostly on the Hint/Guess basis (within reason, of course).

comment by JoshuaZ · 2015-03-16T17:24:20.305Z · LW(p) · GW(p)

The idea that there's a gender correlation, whether for cultural or biological reasons, is certainly something I've seen a fair bit when this comes up as a subject. See for example here. This is one area where cultural distinctions are going to be very difficult to disentangle, since some cultures (e.g. China) are so heavily on one side. It would, I think, be very interesting to see if the obvious gender trend in the West still holds in those extreme examples -- that would be pretty strong evidence of a biological basis.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-03-16T20:06:51.584Z · LW(p) · GW(p)

In a way, the gender aspect could be seen as a micro-culture thing: women operating in their own social circles build up these sub-protocols (influenced by power structures, of course).

comment by [deleted] · 2015-03-17T08:52:32.471Z · LW(p) · GW(p)

Yes, but passive-aggression is what guess-people do when upset, and active-aggression is what ask-people do when upset.

Replies from: Lumifer
comment by Lumifer · 2015-03-17T14:39:50.394Z · LW(p) · GW(p)

I don't know if I am willing to accept such a tight relation. For one thing, being passive-aggressive is usually not one particular action, an outburst when upset; it's more like an attitude, a continuous inclination/slant/flavour.

I think that passive vs. active aggression depends much more on power, status, and specific circumstances rather than on usually preferred communication styles.

comment by MathiasZaman · 2015-03-17T11:15:31.010Z · LW(p) · GW(p)

I think it would be wrong to generalize from that example, so I'd like to report the opposite. My mother would also ask me to do specific, clearly defined tasks when she wanted them done and ask again when I forgot. My dad, on the other hand, would just get angry when things weren't done according to his requirements without making those requirements clear.

comment by [deleted] · 2015-03-20T15:24:28.743Z · LW(p) · GW(p)

I don't understand why I find certain kinds of goodness, kindness, and compassion annoying. Of all the publications, The Guardian seems to rank highest in pissing me off with kindness. Consider this:

http://www.theguardian.com/cities/2014/jun/12/anti-homeless-spikes-latest-defensive-urban-architecture

Ocean Howell, a former skateboarder and assistant professor of architectural history at the University of Oregon, who studies such anti-skating design, says it reveals wider processes of power. “Architectural deterrents to skateboarding and sleeping are interesting because – when noticed – they draw attention to the way that managers of spaces are always designing for specific subjects of the population, consciously or otherwise,” he says. “When we talk about the ‘public’, we’re never actually talking about ‘everyone’.”

Does anyone have any idea why I may find it annoying? Putting it differently, why do I experience something similar to Scott, i.e. while I don't have many problems with most contemporary left-leaning ideas, I seem to have a problem with left-leaning people?

For example, I don't find anything inherently bad about starting a discussion about making design more skateboarder-friendly, or less directly skateboarder-hostile, I think skateboarders providing free entertainment to bystanders is kind of a win-win.

And I still feel like slapping Mr. Howell around with a large trout. But why?

Clearly it must be something about the style? Pretentious? Condescending?

Problem is, my emotions prevent me from analysing this clearly. But as far as I see it, the issue with the style is roughly this algorithm:

  1. assume a very high level of compassion and altruism (public spaces are literally designed for everyone)
  2. look sad or scandalized, pretending to be surprised, when it turns out not to be so

Well, to use this example, we always knew it was not so. Clean, bourgeois middle-class folks never wanted e.g. homeless people, amputee beggars, or other undesirables near where they live. I am not even ashamed of this; I don't find it incompatible to wish that they be treated well, but somewhere where I don't have to see them much. It is not my eyesight that they need most, but more like professional care. I just wasn't aware skateboarders are also included in the category of undesirables. Anyway, the way I can best parse my emotions is that I find Mr. Howell condescending or pretentious because he is pretending to be surprised we are not saints. And this seems to be the general tone of The Guardian, which may be why it annoys me.

Any better takers?

People of more or less explicit left-wing views: do you think your goals would be better supported by, how to put it, less drama, or less pretense, or less antagonizing and guilt-tripping of others -- in short, by a different tone than that of The Guardian or Salon.com? I cannot really express this better, but what I have in mind is more of a "please discuss why the homeless annoy you so much that you want to install spikes" tone, and less of a "fuck you for being a cruel monster who installs anti-homeless spikes" tone. Do you find the latter counter-productive?

OTOH, it is also possible that I find it annoying because it actually pierces my conscience. But I actually don't think so. I never really considered perfect 200% compassion a super ace that trumps all other cards. It is one of the aces, sure, but there are other aces and also kings and whatnot in the deck. Maybe it is annoying because it reminds me of a social expectation, certain social taboos -- like never feeling grossed out by e.g. the homeless, because they suffer and the only proper reaction to suffering is compassion, those kinds of taboos.

Replies from: seer, Lumifer, is4junk, Capla, ChristianKl, hairyfigment
comment by seer · 2015-03-23T01:26:17.714Z · LW(p) · GW(p)

The problem is he starts with false premises that it is impermissible (or at least impolite) to question in public, such as that homeless people are perfectly normal people who are down on their luck. (Most homeless, especially long time homeless have a mental illness.) And then he proceeds to reason from them and expects people to agree.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-03-26T11:31:36.347Z · LW(p) · GW(p)

(Most homeless, especially long time homeless have a mental illness.) And then he proceeds to reason from them and expects people to agree.

Cite? My assumption is that the proportion of homeless people who are normal people down on their luck is much higher when the economy has been bad for a while.

comment by Lumifer · 2015-03-20T15:36:48.056Z · LW(p) · GW(p)

Any better takers?

I think I dislike this sort of article because it assumes I'm a stupid mark who is easy to manipulate with crude emotional-blackmail methods. AND the author is someone who thinks that manipulating other people this way is an excellent idea.

comment by is4junk · 2015-03-20T20:59:29.381Z · LW(p) · GW(p)

Why even read left wing articles if they upset you?

My take is that if the public space was skateboarder and homeless friendly, the author could easily write a similar article on how that scares [insert other victim group] away from the public space.

As for why it is written that way, Kling's book The Three Languages of Politics is a good explanation. The left likes to think in oppressed-versus-oppressor terms.

Thanks for posting this article. There is a park being planned near me and there are certain architectural features I now want it to consider ...

Replies from: ChristianKl
comment by ChristianKl · 2015-03-21T00:40:01.406Z · LW(p) · GW(p)

My take is that if the public space was skateboarder and homeless friendly, the author could easily write a similar article on how that scares [insert other victim group] away from the public space.

There is a difference between not designing a space to be homeless-friendly and designing spikes to prevent homeless people from sleeping in the area.

Replies from: Jiro
comment by Jiro · 2015-03-22T02:37:18.403Z · LW(p) · GW(p)

What's the difference? (This is a serious question. Of course, I know some reasons why people think they are different, but I don't think the reasons that I can think of stand up to examination.)

Replies from: ChristianKl, tut
comment by ChristianKl · 2015-03-22T10:08:17.734Z · LW(p) · GW(p)

If you design a system, then you can optimize it for different goals. A designer who is supposed to design a public space signs a contract. To the extent that the designer optimizes for different goals, especially goals that disadvantage certain people, he's doing wrong.

From the perspective of the city, whether it is written into the contract that the space is designed against homelessness also makes a difference.

Replies from: Jiro
comment by Jiro · 2015-03-22T15:11:16.937Z · LW(p) · GW(p)

That only moves the question up a level: why is it wrong to do X as your goal, but okay to do X in service of something else, even if that something else is as vague as aesthetic reasons or whatever impels people to randomly design things? By your initial reasoning, it would be wrong to design spikes to discourage the homeless, but okay to design spikes if you just happen to like the look of spikes, even though both of these have the same effect.

Replies from: ChristianKl
comment by ChristianKl · 2015-03-22T18:39:00.141Z · LW(p) · GW(p)

why is it wrong to do X as your goal, but okay to do X in service of something else

Because intentions matter for judging the morality of a lot of human interactions. If a professional is hired for a specific purpose, it's important that the professional doesn't use the power of his role to push his personal agenda.

comment by tut · 2015-03-22T08:56:26.555Z · LW(p) · GW(p)

The spikes make the place worse for the intended users than it would be if the designers just ignored the homeless in this place and built a better place for them nearby.

Replies from: Jiro
comment by Jiro · 2015-03-22T15:08:36.913Z · LW(p) · GW(p)

That doesn't explain the difference. Just ignoring the homeless can include building things that happen to discourage the homeless but are put there for other reasons. If so, then ignoring them and being hostile to them can produce the same result.

comment by Capla · 2015-03-21T23:38:01.584Z · LW(p) · GW(p)

Upvote for noticing a (possibly) uncharitable reaction in yourself and taking steps to do better.

comment by ChristianKl · 2015-03-21T00:38:32.812Z · LW(p) · GW(p)

Anyway, the way I can best parse my emotions is that I find Mr. Howell condescending or pretentious because he is pretending to be surprised we are not saints.

"We" is a bad word. "We" don't design public spaces. Certain architects do. Those architects do engage in certain rhetoric. They also do promise certain things to governments who hire them to build public spaces.

comment by hairyfigment · 2015-03-30T02:18:30.757Z · LW(p) · GW(p)

Congratulations on seeing a case where your emotions may mislead you. Now, what makes you think the author of that article "is pretending to be surprised we are not saints"? Looking at it, I get the exact opposite impression - he starts off by saying,

There was something heartening about the indignation expressed by Londoners this week against the “anti-homeless” spikes placed outside a luxury block of flats in Southwark.

suggesting that he finds their reaction surprising. So if anything, his article gives me a sense of self-satisfied cynicism, learnedly explaining to such people how the world works and why he thinks their indignation is rare.

Were I uncharitable, I could read your own (parent) comment as showing off cynicism in the exact same way.

comment by Artaxerxes · 2015-03-17T17:32:50.464Z · LW(p) · GW(p)

The World Weekly covers superintelligent AI.

It's one of the better media pieces I've read on the topic.

Bostrom, Yudkowsky, and Russell are quoted, among many others.

Replies from: Manfred
comment by Manfred · 2015-03-19T03:17:44.614Z · LW(p) · GW(p)

Was expecting weekly world news :)

What this article really opened my eyes to is how impactful Nick Bostrom (or Bostrum) has been.

comment by CronoDAS · 2015-03-17T17:18:50.124Z · LW(p) · GW(p)

What do you link someone to if you want to persuade them to start taking cryonics seriously instead of immediately dismissing it as ridiculous nonsense? There's no one single LW post that you can send someone to that I know of.

Replies from: Caue, polymathwannabe
comment by Caue · 2015-03-17T23:56:56.714Z · LW(p) · GW(p)

I like this.

comment by polymathwannabe · 2015-03-17T22:14:23.870Z · LW(p) · GW(p)

I can think of this one and, especially, section B of this one.

comment by Kaj_Sotala · 2015-03-20T09:14:23.522Z · LW(p) · GW(p)

I just realized that some people object to hedonistic utilitarianism (which I've traditionally favored) on the grounds that "pleasure" and "suffering" are meaningless and ill-defined concepts, whereas I tend to find preference utilitarianism absurd on the grounds that "preference" is a meaningless and ill-defined concept.

This seems to point to a difference in how people's motivational systems appear from the inside: maybe for some, "pleasure" is an obvious, atomic concept which they can constantly observe as driving their behavior, whereas others perceive their own actions as being driven more by something like a "preference" that seems like a coherent and obvious concept to them, and others still don't feel that either of these concepts is particularly central, causing them to disregard utilitarianism. (Of course one may also reject utilitarianism for other reasons.)

Replies from: somervta, ChristianKl, Jayson_Virissimo
comment by somervta · 2015-03-20T10:45:06.635Z · LW(p) · GW(p)

Interestingly, both concepts seem worthwhile to me... and I mostly advocate a combination of hedonistic and preference utilitarianism.

comment by ChristianKl · 2015-03-20T13:27:20.375Z · LW(p) · GW(p)

This seems to point to a difference in how people's motivational systems appear from the inside: maybe for some, "pleasure" is an obvious, atomic concept which they can constantly observe as driving their behavior,

Remembered pleasure and pleasure felt in the moment are two distinct things; which of them is the "obvious" one?

My immediate idea of the terms pleasure and suffering is that "pleasure" is an emotion while suffering is more of an activity. The opposite of "suffering" would for me be "enjoying".

There is a state where you laugh and a state of warm relaxation. Both feel good, but they are different. How does pleasure relate to that? Life satisfaction is another variable in that space.

There are interactions I might have with another person where the person is going to laugh and feel energized, but where the person would answer "No" if I asked them whether they want to engage in a certain action.

If you come from preference utilitarianism, it's important to ensure consent. If you just care about hedonics, and are skilled enough to predict the results of your actions and know the actions produce pleasure, consent isn't an issue anymore.

The difference matters if you analyse what some PUA people do.

comment by Jayson_Virissimo · 2015-03-21T00:21:39.897Z · LW(p) · GW(p)

I think "pleasure" and "suffering" are very meaningful and that the prospects of finding decent metrics for each are good over the long term. The problem I have with hedonistic utilitarianism is that hedons are not what I want to maximize. Don't you ever pass up opportunities to do something you know will bring you more pleasure (even in the long run), in order to achieve some other value and don't regret doing so?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-03-21T00:29:12.917Z · LW(p) · GW(p)

Yeah, I've drifted away from hedonistic utilitarianism over time and don't particularly want to try to defend it here.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2015-03-21T00:46:11.520Z · LW(p) · GW(p)

Fair enough.

comment by Ixiel · 2015-03-18T10:40:58.290Z · LW(p) · GW(p)

I'll buy you sequences.

Sorry, I feel like a jerk repeating myself, but this is the last time. I bought the three-pack of the audio sequences on Kickstarter because there were multiple people who said they wanted it but for whom $50 was too dear. I just got the final "give us the names" email. Any takers?

Replies from: MathiasZaman, jam_brand, polymathwannabe
comment by MathiasZaman · 2015-03-19T08:27:17.743Z · LW(p) · GW(p)

I'd like it as well, if you still have any. (email: king.grimmm@gmail.com)

Replies from: Ixiel
comment by Ixiel · 2015-03-20T17:28:44.424Z · LW(p) · GW(p)

All set. Enjoy.

Replies from: MathiasZaman
comment by MathiasZaman · 2015-03-20T17:52:23.789Z · LW(p) · GW(p)

Wow, awesome. Many thanks!

comment by jam_brand · 2015-03-23T21:57:12.519Z · LW(p) · GW(p)

I'd be happy to take you up on this if it's still available, my email is jam.br4nd@gmail.com. Many thanks for the kind offer either way!

Replies from: Ixiel
comment by Ixiel · 2015-03-30T22:20:15.372Z · LW(p) · GW(p)

Sorry, I gave both out. (And sorry for delayed response, on vacation)

comment by polymathwannabe · 2015-03-18T14:29:25.418Z · LW(p) · GW(p)

martin.malette@gmail.com (it's a friend I want to introduce to these topics, and he loves audiobooks)

comment by [deleted] · 2015-03-16T16:19:56.712Z · LW(p) · GW(p)

On spaced repetition / Anki:

When I started to work after college, I was surprised when people asked "How come you don't know X? Haven't you read the manual?" I was surprised because in college it takes more than one reading, a form of repetition, to learn, know and remember things. I would reply "I have read it, but have not yet memorized it."

Interestingly, later on, I managed to remember things after one reading, not details, but the general idea.

I wonder about the popularity of Anki and spaced repetition here. I am experimenting with it for conditioning, but for learning, do you really need to remember things in more detail than a single reading allows, if you aren't preparing for college exams anymore?

Note: I think the remembering after one reading worked later on because it was more a matter of which of the known options to choose. Like: where do you find the inventory cost report? Obvious candidates are the warehouse menu and the finance menu. I think in college I needed to memorize things when the answer was not choosing from known options but something I could not even imagine.

Are you using spaced repetition because you have more of the second type?

The good thing about becoming an expert in a narrow field is that sooner or later you know all the options, which means you are a lightning-fast learner. You just look at something, take note of which of the known options it has, and know all about it. Like a doctor making a diagnosis. Checkmark, checkmark, checkmark.

Replies from: is4junk
comment by is4junk · 2015-03-16T17:01:26.962Z · LW(p) · GW(p)

For most of the work stuff I find it easier to remember where to find things rather than the things themselves. The hard stuff is the undocumented and constantly changing locations and procedures where a search is likely to find out of date junk.

comment by Paul Crowley (ciphergoth) · 2015-03-16T14:22:00.333Z · LW(p) · GW(p)

In Our Own Image: Will artificial intelligence save or destroy us? by George Zarkadakis was published by Random House on 5 March. I haven't read it, but from a search on Google Books, there's no mention of "Yudkowsky" or "MIRI", while "Bostrom" only appears once, in a discussion of the Simulation Argument. I nearly gave up at that point, but then I thought to search for "Hawking", and indeed, there is a discussion of the Hawking/Tegmark/Russell/Wilczek letter; this seems to me to be evidence of how carefully the author looked into the issue before writing the paragraphs dismissing it. In summary: *sigh*.

Edit: The author was aware of MIRI in 2013.

comment by Tryagainslowly · 2015-03-16T12:05:50.527Z · LW(p) · GW(p)

Could someone help me out with the LessWrong wiki? I made an account called Tryagainslowly on it; it wouldn't let me use my LessWrong account, instead making me register for the wiki independently. I wanted to post in the discussion for the wiki page entitled "Rationality". The discussion page didn't have anything posted in it. I wrote out my post, and attempted to post it, but it wouldn't let me, telling me new pages cannot be created by new editors. What do I need to do in order to submit my post? I'm happy to show what I was intending to post here if anyone wants me to.

Replies from: Tryagainslowly, Gunnar_Zarncke
comment by Tryagainslowly · 2015-03-18T11:33:11.229Z · LW(p) · GW(p)

It works now! It just required waiting a bit. Thanks for the help Gunnar_Zarncke.

comment by Gunnar_Zarncke · 2015-03-16T15:30:48.259Z · LW(p) · GW(p)

It takes some time for the Wiki accounts to get in sync with the LW account; just wait a while (a day?). I guess it's some troll protection.

Replies from: ciphergoth, Tryagainslowly
comment by Paul Crowley (ciphergoth) · 2015-03-16T17:22:18.688Z · LW(p) · GW(p)

You're giving this advice to that account handle?

comment by Tryagainslowly · 2015-03-16T16:18:58.750Z · LW(p) · GW(p)

Thanks, I'll wait and see.

comment by Adam Zerner (adamzerner) · 2015-03-20T17:38:58.638Z · LW(p) · GW(p)

Do dating conventions fall victim to Positive Bias?

It seems that people are always looking for positive evidence, and that looking for negative evidence (I suspect my vocabulary might be incorrect?) is socially unacceptable. I.e. "Let's see if we could find something in common" seems typical and acceptable, while "Let's see if each of us possesses any characteristics that would make us incompatible" seems socially unacceptable.

Note: I have zero experience with dating and romance so these are just my impressions, although I suspect that they're true.

Replies from: Epictetus, ChristianKl
comment by Epictetus · 2015-03-20T19:36:17.153Z · LW(p) · GW(p)

"Let's see if each of us posses any characteristics that would make us incompatible" seems socially unacceptable.

It's considered rude to say that out loud during a date. However, it is considered good practice to be alert for such characteristics.

Replies from: Lumifer, adamzerner
comment by Lumifer · 2015-03-20T20:09:23.375Z · LW(p) · GW(p)

It's considered rude to say that out loud during a date.

In these very words, probably, but it's perfectly socially acceptable for e.g. a vegan to declare outright that s/he is not interested in carnivores...

comment by Adam Zerner (adamzerner) · 2015-03-20T23:54:50.880Z · LW(p) · GW(p)

Do you think that it's rude? It seems sensible to me.

It seems that people interpret such actions as hostile. And people who say things like that probably are hostile. However, I don't think the likelihood of the person being hostile is high enough that you should conclude that they actually are. I think the likelihood is low enough that the courteous thing to do is investigate further as to why they're saying that.

And if they're well intentioned - ie. they want both parties to find someone that they're compatible and happy with, and are just trying to do a good job of that - then I think the mature thing to do is to respect it.

comment by ChristianKl · 2015-03-21T00:27:21.168Z · LW(p) · GW(p)

The point of a dating conversation isn't primarily to exchange information. It's to create good feelings and see whether one can create a feeling of connection.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-03-21T00:56:19.248Z · LW(p) · GW(p)

Is this a correct restatement of your claim?

The ultimate point is to determine compatibility. But the best way to do that is to follow social convention and keep things positive. In doing this, your System I will be able to determine compatibility, and will notify you by producing emotions. By violating social convention and saying something like "Let's see if each of us possesses any characteristics that would make us incompatible", you'd hamper System I in exchange for some information for System II to use. This exchange isn't worth it.

It'd be interesting to see some research on this.

Replies from: ChristianKl
comment by ChristianKl · 2015-03-21T09:26:56.542Z · LW(p) · GW(p)

No, building an emotional connection isn't an act that's just about gathering information, in the same way that lighting a pile of wood on fire isn't about testing the wetness of the wood.

comment by palladias · 2015-03-17T15:25:32.325Z · LW(p) · GW(p)

I'm running an Ideological Turing Test about religion, and I need some people to try answering the questions. I'm giving a talk at UPenn this week on how to have better fights about religion, and the audience is going to try to sort out honest/faked Christian and atheist answers and see where both sides have trouble understanding the other.

In April, I'll share all the entries on my blog, so you can play along at home and see whether you can distinguish the impostors from the true (non-)believers.

Replies from: Jiro, Ander, Ander, Jiro
comment by Jiro · 2015-03-17T18:48:36.566Z · LW(p) · GW(p)

How do you account for ideological Turing tests failing because of shibboleths? It's one thing to be unable to express or recognize the same ideas as a Christian; it's another to be unable to express or recognize in-group terminology.

Replies from: palladias
comment by palladias · 2015-03-19T15:39:49.610Z · LW(p) · GW(p)

I try to structure questions so that they'll be less vulnerable to shibboleth exploits (plus, some shammers do do a bit of research to be able to drop in jargon!).

comment by Ander · 2015-03-18T00:16:56.474Z · LW(p) · GW(p)

One thing I noted when doing this: most of my true answers were more specific than my made-up answers, which might give them away. I look forward to reading the results!

Replies from: polymathwannabe
comment by polymathwannabe · 2015-03-18T03:20:40.800Z · LW(p) · GW(p)

It's curious; I felt the opposite.

comment by Ander · 2015-03-17T23:34:33.044Z · LW(p) · GW(p)

These questions are quite difficult and will require effort. I'll try to submit an entry.

Edit: Completed. :)

comment by Jiro · 2015-04-13T15:33:35.058Z · LW(p) · GW(p)

I just took the "root of all sins" test and I tried to distinguish the answers of the Christians and non-Christians entirely based on shibboleths. Disordered love? Christ is a blinding searing light? Humans are finite beings who naturally desire the infinite? Maybe. But the decision was not "would a Christian have those ideas" but "would a Christian phrase the ideas that way".

Of course I can't just go count the shibboleths; it's possible that non-Christians might overcompensate and actual Christians don't talk about Jesus' blinding light much at all, at least not actual Christians of the type who answer such surveys.

But either way, I didn't feel that the most likely way to figure out which answer came from Christians was to look at the content of the answer. So I think that the test has already failed.

On top of this is the question of what type of Christian the non-Christians are trying to imitate. Are they trying to imitate average Christians, average survey-answering Christians, average blogging Christians, average Christians who are knowledgeable about Christianity? Trying to imitate the wrong kind of Christian can mean that knowing too much about Christianity can make your imitation fail.

comment by Dahlen · 2015-03-16T08:29:20.859Z · LW(p) · GW(p)

In the last few years I've been thinking about all the separate mental modules that influence productivity, procrastination, akrasia etc. in their own unique ways. (The one thing that's for sure is that the ability to get stuff done isn't monolithic.) This is what my breakdown of the psychology of productivity looks like, and I have a hunch that these are all separate and generate their own effects independently (more or less) of the others:

  • a baseline level of energy or willingness to take control of your own life
  • an affinity-based system that makes you autonomously pick some activities over others
  • a willpower system that makes you keep going in spite of difficulties
  • a "negative willpower" system that helps you abstain from temptations or impulses
  • a response-to-incentives feature that regulates how extrinsic factors influence behavior
  • a set of time-dependent responses that generate, among others: temporal discounting; putting off tasks; anxiety as deadline approaches; desperate last-minute efforts

Is this right? Discuss.

Replies from: RowanE
comment by RowanE · 2015-03-17T10:42:11.737Z · LW(p) · GW(p)

I think to back up the hunch you'd need to poll some people, see if their akrasia comes from being weak on some points rather than others - if that's the case, and they're not consistently the same points, then probably it does work that way. I personally feel like I'm bad with respect to all the modules listed.

Replies from: Dahlen
comment by Dahlen · 2015-03-17T22:03:31.406Z · LW(p) · GW(p)

Yes, that's what I was trying to do with the parent comment. I used myself as a reference for these points, as well as drawing on various anecdotes I've heard about other people.

E.g. I'm high on "negative willpower", high on perseverance against physical discomfort (tiredness, hunger, pain), low on perseverance against boredom, frustration, and the feeling of being stuck.

I'm very low on #1, and also have low affinity for, say, math, and hence I never put in the hours for learning it well, but I've heard of people who are also low on #1 but happen to have very high affinity for math, who'd go on and entertain themselves with their equations and theorems while dishes gathered in the sink and rent went unpaid. (Proof they don't necessarily have better work ethic than me.) Then there are people who do mildly dislike crouching over math textbooks for hours, but are very high on #3 and push themselves to keep going. (Proof that sometimes it is a matter of work ethic.)

I listed #5 as its own factor because it can override other items on the list, and all the other items above it lacked any inherent reference to external factors. It could be a strong or a weak tendency; for instance, I notice about myself that I'm not particularly moved by rewards or punishments; moreover, my indifference to them seems tweaked especially to cause me to lose the greatest amount of money possible, either by missing opportunities or by having to pay up. (That's why I never ever plan on using Beeminder.) #6 is there because the whole thing lacked a time dimension without it. For example, however badly I might fare on other points, I'm only a moderate procrastinator (for tasks I don't loathe with a fervor), tend to begin working on assignments towards the midpoint of the available time range, and consider the long term.

comment by DataPacRat · 2015-03-20T06:54:02.902Z · LW(p) · GW(p)

Wrist computer: To Buy or Not To Buy

I'm considering whether or not to buy an Android phone in a wristwatch form-factor, and am hesitating on whether it's the best use of my money. Would anyone here care to offer their opinion?

One of my goals: Go camping and enjoy it. One of my constraints: A limited budget. I suspect that taking a watch-phone, such as an Omate Truesmart or one of its clones (e.g., http://www.dx.com/p/imacwear-m7-waterproof-android-4-2-dual-core-3g-smart-watch-phone-w-1-54-5-0mp-black-373360), and filling a 32 gigabyte SD card with offline maps, Wikipedia, and related materials would improve my camping experience. However, I could also purchase an iPhone-like Android phone of comparable stats for half the price, allowing me to also purchase, say, a Kelly kettle, which would also improve my camping experience. (I already have various other digital devices, but none with enough room for the maps etc. I already have solar panels to hang from my backpack and external batteries, to keep any such devices charged while in the field.)

I have some leeway in timing, to get whatever items I decide on before camping season starts, and I find myself having spent several days being indecisive about what options, if any, to pick. My thoughts keep bouncing between something like "Wrist-computers are cool and I want one" and "I've made poor electronics purchasing decisions in the past and regretted them".

How do you think I should redirect my thought processes?

Replies from: Lumifer, MrMind, gjm, VincentYu
comment by Lumifer · 2015-03-20T15:24:36.274Z · LW(p) · GW(p)

Would anyone here care to offer their opinion?

Sure :-D Smartwatches are computers miniaturized to the point of uselessness because of the tiny screen and UI issues. Specifically for camping or backpacking you'd be much better off with a bigger-screen device like a regular smartphone. In fact, if you're serious about backpacking I would recommend a dedicated GPS unit.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T19:19:04.446Z · LW(p) · GW(p)

UI issues

I've started looking into speech-to-text and text-to-speech alternatives to the tiny screen.

a dedicated GPS unit.

I've tried one of those, every N years. There's always been some issue - only providing coordinates instead of a map, or power issues, or the like - which has ended up with me leaving it out of my kit. I'm vaguely hoping that the continuing convergence of all electronic devices into "phones" means that the various solutions to those issues will also have been collected.

Replies from: Lumifer, Lumifer
comment by Lumifer · 2015-03-20T20:00:02.928Z · LW(p) · GW(p)

I've started looking into speech-to-text and text-to-speech alternatives to the tiny screen.

That sounds like a rationalization. And it's entirely unhelpful when you're trying to figure out maps.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T20:25:50.940Z · LW(p) · GW(p)

Granted. :)

comment by Lumifer · 2015-03-20T20:16:09.214Z · LW(p) · GW(p)

the continuing convergence of all electronic devices into "phones"

For backpacking I still prefer a dedicated GPS unit because (a) it's waterproof plus I expect it to survive shock better than a smartphone; (b) it's power-thrifty and I can leave it on for the whole day without worrying about running down the battery; (c) it can run off AA batteries which are ubiquitous; (d) if you really need GPS, you need to carry two GPS-capable devices.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T20:26:54.852Z · LW(p) · GW(p)

on for the whole day

Maybe it's been longer than I thought since I went GPS-hunting... What brand and/or model accomplishes this witchcraft?

Replies from: Lumifer
comment by Lumifer · 2015-03-20T20:37:54.545Z · LW(p) · GW(p)

My GPS is an old Garmin 76CSx.

comment by MrMind · 2015-03-20T08:05:04.807Z · LW(p) · GW(p)

For a long time I've wanted to want a smartwatch badly enough to be forced to buy one, but the actual advantages of owning one never reached the desired threshold. In the end, and quite sadly, I've decided that there will probably never be enough reasons.

I think the same thing is happening to you: you want to want to buy a wrist-phone, but are rational enough to know that there's no reason to do such a thing. I suggest you meditate on the fact that you probably already know what the right course of action is; it just sucks to follow.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T11:55:01.460Z · LW(p) · GW(p)

In a curious twist to this process, I just dreamed that I checked this thread for a response to this comment, and found one, of which I explicitly remember only the words "You're playing with fire here" and "You're taking your life into your hands", and implicitly remember something about the author reminding me that I'm a cryonicist.

Going camping does happen to increase the odds that I'll have an accident where my brain ends up warm and dead. Having a communications device that's quite likely to remain intact and ready to use if I fall down a cliff and break my legs modestly reduces the odds of that particular negative scenario. In fact, assuming that I'm not going to quit going camping, and that I already have my chosen first-aid equipment, there are few expenditures I can make which are as likely to increase my QALYs.

So: Does /that/ sound like actually useful reasoning, or mere rationalization?

Replies from: knb, Lumifer
comment by knb · 2015-03-20T18:00:22.355Z · LW(p) · GW(p)

Sounds like a rationalization to me.

I think you would be better off buying a ruggedized cell phone or radio if that is your true purpose. I suspect a watch is quite likely to get smashed in a serious fall like that.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T19:22:35.826Z · LW(p) · GW(p)

Sounds like a rationalization to me.

Fair enough.

quite likely to get smashed

Hm... brainstorming a bit, I'm considering looking up one of the cheaper watch-phones, removing the wrist-band, getting a SIM card for a phone service that only needs to be paid for annually, and keeping the miniaturized backup cellphone somewhere about my person. But that's a completely separate use-case than the device for camping, so I'm not going to even consider it until I finish my annual camping gear refreshing.

comment by Lumifer · 2015-03-20T15:26:22.751Z · LW(p) · GW(p)

Going camping does happen to increase the odds that I'll have an accident where my brain ends up warm and dead.

While that's true you might want to consider what other activities also happen to increase the same odds and whether you want to spend your life avoiding all of them.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T19:25:22.743Z · LW(p) · GW(p)

other activities

My lifestyle is mostly urban; whatever accidents befall me, I'm nearly always well within range of ambulances and hospitals with personnel able to call up my medical proxy. Camping is the exception where it would likely take a few hours just for emergency personnel to reach me.

Replies from: Lumifer
comment by Lumifer · 2015-03-20T20:07:20.643Z · LW(p) · GW(p)

I'm nearly always well within range of ambulances and hospitals with personnel able to call up my medical proxy.

Be realistic. If you're hit by a bus on a city street, how long do you think your brain will spend being warm and dead before the information reaches someone who could call in the cryo team? And that's assuming your brain stays intact.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-20T20:36:21.239Z · LW(p) · GW(p)

how long do you think

My immediate family all know my wishes, I have a medic-alert type necklace with cryo contact info, there's similar info in my wallet, and so on. Basically, as soon as medical professionals learn who my corpse was, which should be close to as soon as they arrive, they'll know to contact someone who knows to tell them to put ice around my head (as a first stage in the cooling process).

By contrast, if I'm camping, then even if I stay within range of cell towers and have arranged to call someone twice a day, just getting the word out that I might be in trouble (and possibly dead) will take hours to days, let alone finding me. (For not-quite-as-lethal accidents, I've got everything from a mirror that can be used as a signal mirror to a pen-style flare launcher to help point possible rescuers in my direction.)

comment by gjm · 2015-03-21T10:27:54.062Z · LW(p) · GW(p)

Allow me to join the chorus of commenters who suspect that you've been persuaded by advertising, peer pressure, etc. that you have to have the latest cool gadget, and that you'd be better off if you could overcome that urge. It's a useful habit to break if you have a longer-term preference for having more money :-).

comment by VincentYu · 2015-03-21T14:17:06.128Z · LW(p) · GW(p)

Not directly answering your conundrum on wrist computers, but—I go trail running frequently (in Hong Kong), so I've thought a bit about wearable devices and safety. Here are some of my solutions and thoughts:

  • I use a forearm armband (example) to hold my phone in a position that allows me to use and see the touchscreen while running. I find this incredibly useful for checking a GPS map on the run while keeping both hands free for falls. I worry that the current generation of watches is nowhere near as capable as my phone.

  • I rely a lot on Strava's online route creation tool and phone app for navigation.

  • Digital personal locator beacons on the 406 MHz channel (example) are the current gold standard for distress signals.

  • Sharing your location through your phone (e.g., on Google+) can give some peace of mind to your family and friends.

  • An inactivity detector based on a phone's accelerometer might be a useful dead man's switch for sending a distress SMS/email in the event of an accident that leaves you incapacitated. I haven't gotten around to setting this up on my phone, but here's an (untested) example of an app that might work. (A minimal sketch of the detection logic follows after this list.)

  • In case of emergency, it might be useful to have a GPS app on your phone that can display your grid reference so that you can tell rescuers where to find you.
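
To make the inactivity-detector idea above concrete, here is a minimal sketch of the detection decision in Python. The platform-specific parts (reading the accelerometer, actually sending the SMS/email) are left out, and the thresholds are arbitrary guesses, not anything tested on a real phone:

    import statistics

    MOVEMENT_THRESHOLD = 0.3      # std-dev of accel magnitude below this counts as "still"
    INACTIVITY_LIMIT_S = 30 * 60  # 30 minutes of stillness triggers the alarm

    def should_send_alarm(recent_magnitudes, seconds_still):
        """Decide whether to fire the dead man's switch, given the last minute or so
        of accelerometer magnitudes and how long the phone has already been still."""
        still = len(recent_magnitudes) >= 2 and \
            statistics.pstdev(recent_magnitudes) < MOVEMENT_THRESHOLD
        return still and seconds_still >= INACTIVITY_LIMIT_S

    # Example: a phone lying motionless for 35 minutes (magnitudes in m/s^2).
    print(should_send_alarm([9.81, 9.80, 9.82, 9.81], seconds_still=35 * 60))  # True
    print(should_send_alarm([9.5, 11.2, 8.7, 10.4], seconds_still=35 * 60))    # False

On an actual phone this logic would live inside an automation app or a small native service that feeds it sensor samples.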

Replies from: DataPacRat
comment by DataPacRat · 2015-03-21T15:59:36.669Z · LW(p) · GW(p)

Digital personal locator beacons on the 406 MHz channel (example) are the current gold standard for distress signals.

Indeed so - but as far as I've been able to dig up so far, they require a bit more gold than I can afford.

Such beacons have to be (re)programmed with a serial number appropriate for the country they're to be used in, which can only be done at an authorized dealer, so online purchases from other countries are almost pointless. As near as I can tell, the nearest place I can get such a beacon is mec.ca, where the least expensive example I can find is $265, above my budget for camping electronics.

I'd be happy to have such a device; I just don't see how I can acquire one with my particular level of fixed income.

comment by [deleted] · 2015-03-16T08:46:50.616Z · LW(p) · GW(p)

DAE know The Haze? The Haze is the brain fog I get whenever there is a subject I entertain comfortable lies about because the truth would be too painful - something negative about myself, etc. Whenever I approach the subject, my brain decides to deal with the cognitive dissonance and avoid the painful truth by reducing my IQ; but instead of becoming wooden and thick like normal stupidity, my thinking becomes foggy. This fogginess is not actually felt or noticed at the time, but when I later face the painful truth, it feels like a fog, a haze, lifting. It feels a lot as if I had formerly been thinking about the subject in a drunken state, the way drunk people philosophize.

Since it does not feel like that in the moment, are there any methods for discovering these?

Replies from: Joseph_P, satt
comment by Joseph_P · 2015-03-16T15:09:09.054Z · LW(p) · GW(p)

Could you give a specific example of this foggy thinking? In which ways is it different from an Ugh field?

Replies from: None
comment by [deleted] · 2015-03-16T15:36:26.292Z · LW(p) · GW(p)

I think ugh fields are about something fairly small and simple; this is different.

When I was 15, I was weak in every sense: nerves (anxiety), borderline mentally ill, scrawny body, etc. Since I desperately did not want to admit it, because it sucks, and I wanted to convince myself I was strong, I externalized the self-hate and started to hate on other people's weakness (not actual individuals, but as a principle), saying things like the weak don't deserve to live and should go extinct to make room for the strong, etc., in order to convince myself I was strong. But it didn't really work. It did not really work for Nietzsche either, who inspired me to do this... and especially when I was confronted by people who took offense when I proclaimed that altruism is slave morality, and those people were strong and successful in every possible way, yet they were altruists - basically they were paladins - I needed to exert more and more convoluted mental gymnastics to convince myself they were actually weak and I was somehow actually strong. Back then it felt like being a misunderstood genius, a genius who is not understood because other people are stupid. But much later, when I realized the folly, it felt as if I had been in a mental fog, a mental haze, back then.

Replies from: NancyLebovitz, Gunnar_Zarncke
comment by NancyLebovitz · 2015-03-16T17:15:03.780Z · LW(p) · GW(p)

Hypothesis: what you can think is affected by the state of your nervous system.

Have some neurology on the subject-- I'm not jumping to any conclusions about whether you have a background of trauma, these are just the books I know about.

Complex Trauma: From Surviving to Thriving. This one has some material about rage getting turned outward or inward.

In an Unspoken Voice: How the Body Releases Trauma and Restores Goodness

Replies from: None
comment by [deleted] · 2015-03-17T09:00:12.137Z · LW(p) · GW(p)

If basic, common playground bullying counts as one, then yes. Hm, it checks out. Boys between 6 and 12 have rather harsh ways of establishing a hierarchy of strength, courage and general ranking, and it is possible that this is traumatic in a way the subject does not even recognize. Does it ever happen that people are conscious of their own coping methods but not very conscious of the trauma they are coping with?

My problem with the whole theory is that I am prone to pull a reversed stupidity on Freudianism. I.e., if Freud was wrong that everything is about coping with childhood traumas, then I tend to think nothing is. I also tend to think it is way, way too easy - suspiciously easy - because it sounds like blaming others in order to avoid facing a defect in the self.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-03-17T11:22:45.975Z · LW(p) · GW(p)

In an Unspoken Voice has it that PTSD is a result of not being able to do normal simple movements such as running, punching, or pushing away when under high stress. There's a solution-- when the stress is over, go away for a bit and shake.

Animals do this, but for various reasons-- the stress goes on for too long, or it feels socially or personally inappropriate to collapse and shake-- the uncompleted movements can get stuck in the memory and the trauma continues in the body and imagination.

It wouldn't surprise me if "ordinary" childhood bullying would be enough to have a traumatic effect, especially on someone who was immobilized while being bullied.

Does that ever happen that people are conscious about their own coping methods but are not too conscious about the trauma they are coping with?

I think so. There's a lot of that around rape, where the person who was raped is showing symptoms of PTSD, but thinks that the way they were treated doesn't count as rape.

I found it was useful to frame traumatic effects (in my case, a tendency to freeze) as part of the normal human range rather than a defect. Also, there's research that the biggest predictor of PTSD is the amount of previous trauma.

I recommend Dorothy Fitzer's videos. She specializes in people with anxiety and takes a gentle, sensible approach to becoming more comfortable in the world.

Replies from: None
comment by [deleted] · 2015-03-17T11:38:54.796Z · LW(p) · GW(p)

Interesting. So the way it differs from Freudianism is that the idea is not that getting hurt gives you problems, but that not being able to react to hurt or stress (even environmental stress, if I get it right) in basic ways does?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-03-17T12:39:46.742Z · LW(p) · GW(p)

Yes, bearing in mind that this theory says that some basic ways work much better than others-- for example, telling the story over and over (which is something a lot of people do) may not be nearly as useful as physical movement.

There's also a school of more conventional psychology (sorry, I don't know which one) which holds that what happens to you isn't the fundamental thing-- what's important is what conclusions you draw from what happened to you.

comment by Gunnar_Zarncke · 2015-03-16T20:11:39.147Z · LW(p) · GW(p)

Sounds like unsuccessful rationalization or compartmentalization. Unsuccessful probably because the fog wasn't 'able' to lock you into a stable state. You mention lots of contact with other people, so I guess that prevented it.

comment by satt · 2015-03-22T19:02:56.515Z · LW(p) · GW(p)

[On second thought, deleted — the example I pulled up is arguably more "wooden and thick like normal stupidity" than "foggy".]

comment by [deleted] · 2015-03-18T14:17:26.400Z · LW(p) · GW(p)

From 2008: "Readers born to atheist parents have missed out on a fundamental life trial"

Not really, in my experience. First of all, there are plenty of other silly things to believe in; for example, my parents tended to believe in feel-good liberal adages like "violence never solves anything".

But actually the experience led me to learn quite a lot from religious people, for this reason: as for most modern secular liberal Europeans, for my parents the kind of history we live in began not so long ago - a few centuries ago, or maybe in 1945. Everything before that is The Dark Ages, the period in which people were both stupid and evil and from which we can learn nothing. Everybody before Galileo was an idiot. Modern ideas, from democracy to human rights, are not simply right but obviously so.

Well, imagine my surprise when I made some Catholic friends with a slight conservative bent who, through their religion, had inherited much older historical ideas, which came across as new, interesting and thoughtful to me. Roughly the same kinds of ideas you can get from Chesterton. Or Tolkien.

Having said that, religion is not necessary for this; any atheist with a keen interest in history can have ideas like that. For example, my Catholic friends proposed that it may not be entirely true that there can never be anything wrong with gay people - that it can actually be something like narcissism, as it is based on falling in love with people who are a reflection of the self. I still don't know if it is true, but at least it explains a bit why some of the gay folks I knew from raves wanted all the attention all the time. Later on, I learned that this idea actually has nothing to do with Catholic tradition: it was invented by Freud, who is generally not seen as a particularly conservative fellow, although this idea of his is clearly un-PC by now.

So, ultimately, it is not about religion; it just seems my liberal-secular parents were too stuck in the narrow "canon", and my religious friends (who had a somewhat conservative bent) were simply aware of the existence of ideas outside it.

Replies from: Lumifer
comment by Lumifer · 2015-03-18T14:44:06.233Z · LW(p) · GW(p)

Have you discovered the neoreactionaries yet? :-)

Replies from: None
comment by [deleted] · 2015-03-18T15:41:32.616Z · LW(p) · GW(p)

Yes, Moldbug and Xenosystems. Love/Hate. The problem is that they are too politics-focused, and politics is typically about using power to change other people's lives. Frankly, I like it better when people experiment with their own lives first.

This is what I don't understand - is there even a name for that? A non-political conservative/reactionary who experiments with old ideas on himself without forcing others to do so - is there such a thing (NOT the SCA)?

If there were such a thing I would actually try a demo version of it - for example, I love the movie The Last Samurai - but strictly on a voluntary basis, and I figure that means non-political.

I mean, I guess, if I think deeper into it, the issue is not even whether it is voluntary or not but the Talebian "skin in the game". The honesty of every proposal is proportional to how much it affects oneself versus how much it affects others. And that is why I hate politics: too little personal skin in the game and way too much of other people's skin.

Replies from: None, Lumifer, seer
comment by [deleted] · 2015-03-19T07:10:48.214Z · LW(p) · GW(p)

Libertarians?

Replies from: None
comment by [deleted] · 2015-03-19T08:07:20.255Z · LW(p) · GW(p)

I know only a few places on Earth where it would have any chance of working, such as the US, maybe the UK; but from Latin America to Eastern Europe, far too often criminal plunder was covered up with free-market adages, like "I sell this state-owned thing to my friend for peanuts because private ownership is more efficient."

The point is, one of the many angles from which to evaluate ideologies is how easily they can be misused as lip-service cover for something nasty. Marxism was misused into Leninism, Keynesianism into "spend in bad times... and don't save in good times because fuck it: votes", and Libertarianism into neoliberal plunder.

This makes it not worse than the others, but not perfect either. I would basically use it as part of my political-philosophy toolset but not the whole of it.

Another thing Libertarians don't really understand, even in US-based circumstances, is that if I own the things I need to work with or live with, then private property gives me freedom and independence. But if other people own the things I need to work with, then private property is a burden for me, not a freedom.

Libertarianism works best with a fairly egalitarian distribution of property - not income, not income redistribution, but property, as in frontier homesteader farms and suchlike. It is the legacy of frontier equality that made Libertarianism popular in the US; Hayek and Mises came from an Austrian tradition that had much more of a small-business focus, and thus more equal property ownership, than the Mittelstand focus of Prussia-Germany and so on. (Max Weber hated small business: he considered shopkeepers lazy and spoke out against the Austrianization of Germany, as in small business as opposed to middle and big. Needless to say, he was anything but Libertarian.) In Europe, Switzerland is the closest to Libertarianism, precisely because they have/had such a broad property-owning middle class; every second Swiss person seems to have inherited 1/4 of a dairy farm or something. This is why The Moon Is A Harsh Mistress is the best Libertarian utopia: because it is about a frontier with egalitarian property ownership, where nobody is a proletarian who must work for The Man for wages forever because he never had a chance to homestead a farm.

In all circumstances, Libertarianism works best with a heavy dose of Distributism, i.e. a small-business economy - entrepreneurial rather than corporate capitalism: family farms, mom-and-pop shops and so on. Basically, the US would have to find a way to re-create the frontier. In Latin America and Eastern Europe, misusing Libertarianism for neoliberal plunder could be avoided in similar ways: never, ever, ever, ever, ever, ever, ever sell or privatize a big chunk of state-owned property to one person or corporation! ALWAYS spread it around; for example, if you want to privatize a state-owned utility company, make 1000 shares and distribute them via lottery.

Libertarian-Distributism is something I could get behind. More info: http://www.frontporchrepublic.com/2009/06/the-economics-of-distributism-iv-property-and-the-just-wage/ (best part of a five-part series)

comment by Lumifer · 2015-03-18T15:50:03.453Z · LW(p) · GW(p)

A non-political conservative/reactionary who experiments with old ideas on himself and not forcing others to do so, is there such a thing

Sure, these people are usually known as eccentrics, cranks, and weirdos X-/

If there was such a thing I would actually try a demo version of it

Since you're experimenting on yourself, what's stopping you and why do you want only a demo version?

Too few personal skin in the game

That depends on where you live and what kind of politics you are talking about.

Replies from: None
comment by [deleted] · 2015-03-18T16:18:57.526Z · LW(p) · GW(p)

Since you're experimenting on yourself, what's stopping you and why do you want only a demo version?

Lack of info. Know any good self-help books published before 1700 ? :)

Most of the old stuff was learned by word of mouth - the "sensei" type of learning that we Westerners find so adorable in Asia, but actually we did it too - and that means not a lot of it was written down.

Not all hope is lost, though, there are people learning fencing from manuals written around 1470. http://wiktenauer.com/wiki/Main_Page

But jokes aside: for example, we discuss akrasia a lot. In my childhood it was "solved" by scaring the living bejeezus out of children who procrastinated instead of doing homework - everything from punishment to guilt-tripping. This sucked, and also often worked. Akrasia became a problem largely when it was decided that we are now trying to be nice to each other, and to ourselves too.

However, it cannot have been as simple as that. I think if I scratch deeper, I could find old methods. The issue is that they were often not written down.

Replies from: Jayson_Virissimo, Richard_Kennaway, MrMind, Epictetus, Nornagest, Lumifer, ChristianKl
comment by Jayson_Virissimo · 2015-03-19T05:46:35.718Z · LW(p) · GW(p)

Know any good self-help books published before 1700 ? :)

The Enchiridion, by Epictetus.

comment by Richard_Kennaway · 2015-03-18T16:59:01.307Z · LW(p) · GW(p)

Know any good self-help books published before 1700 ? :)

The Book of Proverbs? The Book of Baruch? Sermons from the past? Writings of the ancient Stoics?

Replies from: None
comment by [deleted] · 2015-03-19T07:53:43.410Z · LW(p) · GW(p)

+1 for the Stoics; actually, people like Nassim Taleb are re-inventing that, and it seems to be a good approach.

Replies from: seer, Richard_Kennaway
comment by seer · 2015-03-23T01:12:41.861Z · LW(p) · GW(p)

No, Taleb isn't "re-inventing" stoicism any more than every mechanic is "re-inventing" the wheel.

Replies from: None
comment by [deleted] · 2015-03-23T08:50:01.239Z · LW(p) · GW(p)

You mean stoicism was always alive?

comment by Richard_Kennaway · 2015-03-19T09:18:58.561Z · LW(p) · GW(p)

More modern stoicism here, although personally, I think that the Modern Stoicism community treats stoicism too much as a package deal.

comment by MrMind · 2015-03-19T08:12:10.415Z · LW(p) · GW(p)

Know any good self-help books published before 1700 ? :)

I'll add to the already growing list the Meditations of Marcus Aurelius, which I've been told is one of the best.

Heck, sometimes I feel that past self-help books are way better than today's...

comment by Epictetus · 2015-03-19T04:10:51.018Z · LW(p) · GW(p)

Lack of info. Know any good self-help books published before 1700 ? :)

Essays of Montaigne.

Replies from: None
comment by [deleted] · 2015-03-19T08:33:28.538Z · LW(p) · GW(p)

What about Bacon's essays? (I don't remember when they were written, though.)

comment by Nornagest · 2015-03-18T17:17:28.725Z · LW(p) · GW(p)

Not all hope is lost, though, there are people learning fencing from manuals written around 1470. http://wiktenauer.com/wiki/Main_Page

It might be worth mentioning that a lot of the people working at reviving Western longsword fencing also have rank in Eastern sword styles, or classical fencing, or both. That isn't really that big a deal from a purity/authenticity standpoint, contrary to what some people will tell you; schools differ mostly in methodology, since the biomechanics of fencing are largely the same whether you live in 21st-century California or 19th-century Japan or 15th-century Germany, and methodology lends itself better to being written down than biomechanics does. But it does mean they have a live body of practice to hang written descriptions of technique on.

Replies from: None, ChristianKl
comment by [deleted] · 2015-03-19T08:12:32.439Z · LW(p) · GW(p)

I intend to learn HEMA/longsword after I get good enough in boxing, i.e. fist-fencing. I wonder what, if anything, I will bring into it. One thing I am doing is practicing both dominant-hand-forward and non-dominant-hand-forward stances, because while boxing focuses on the latter, the former is useful for surprising an opponent in boxing, and fencing uses it too. I hope my footwork will translate well, because I suffer like a pig on ice with it - it is really hard for me to learn boxing footwork - so I hope I can use it for historical fencing too.

Another interesting thing I hope will help me with fencing later is the non-telegraphed jab. Roughly, it means turning the hand inward and raising the shoulder during a jab so the elbow does not flare out to the side. I think this can be useful for a side-sword or rapier thrust, but I am not so sure about the two-handed longsword stuff.

comment by ChristianKl · 2015-03-19T12:28:47.957Z · LW(p) · GW(p)

I think it makes more sense to learn fencing from someone who understands the biomechanics well enough to have his own opinion about what proper technique should be than from someone who simply tries to teach what he thinks some book says.

comment by Lumifer · 2015-03-18T16:42:00.579Z · LW(p) · GW(p)

Know any good self-help books published before 1700 ? :)

Sure, lots of those -- from St. Augustine's Confessions (that's way before 1700 :-D) to Machiavelli's The Prince.

comment by ChristianKl · 2015-03-19T12:14:41.488Z · LW(p) · GW(p)

However, I cannot have been as simple as that. I think if I scratch deeper, I could find old methods. The issue is often not being written down.

It was also a radically different environment. Computers provide new distractions. People used to feel bored and have nothing to do. I never feel like I don't know what to do; there are always multiple options.

Replies from: None
comment by [deleted] · 2015-03-19T12:24:51.969Z · LW(p) · GW(p)

People used to feel bored and have nothing to do.

I have a sample size of 1 that it is possible today :) Screwing around on Reddit can be boring (and yet addictive). It is not that straightforward to find interesting content online. Maybe I am just unusually bad at it - or maybe it is because literally zero of my IRL friends and relatives read Reddit, LW or any interesting blog, so I never get "hey, this is cool, check this out" emails. They just don't have much free time. This is probably atypical.

Yet it is very easy for a child to be distracted from homework at any level of technology. It is called daydreaming. You are familiar with Karl May novels, I suppose? The Old Shatterhand and Winnetou stories caused me huge amounts of daydreaming when I was a child, and the same went for my friends.

Imagination always fills the void that entertainment doesn't. Of course you need books, because without adventure stories there is not much to daydream about, but that has been a solved problem since about 1800-1850 - I mean, that was roughly when books became cheap enough that children could have romantic novels. And vice versa - this is probably why experts say watching TV, even perfectly healthy educational shows, retards the development of toddlers: not enough exercise of the imagination.

I never feel like I don't know what to do and there are always multiple options.

I do. It is hard work for me to race with boredom, and I don't always win. I fill my tablet, Instapaper and FBReader with saved articles and ebooks to read, but the activity itself can get boring, and there is not much left after that. I had been a gamer since 1987 (Commodore...) but grew bored with most games, except currently the best mods for Mount & Blade: Warband (such as A Clash of Kings or Brytenwalda). I have a family now, so that fills out my weekends nicely, but I still sometimes get bored. The way I break it down, there is almost nothing outside our apartment that would be interesting on a random weekend in Vienna, just people drinking in bars or yet another kind of artsy music festival. Inside the apartment, there is each other, and that is great, and the computers, which largely mean stuff to read, which gets tiresome, or stuff to play with, which already has.

The world feels a lot like a prison, except for my lovely family and the books and games on my computer. What else is out there? Oh, I do some sports too...

Sometimes I almost wish for some kind of social collapse just to be more energized by a survival instinct. But that wish would be incredibly selfish. Still, I am even contemplating writing an "ethics for a boring world" article where I argue it is better to cause others 10 units of pleasure and 2 units of pain rather than 8 units of pleasure only, because it makes the world less fucking tediously comfortably dull and more challenging / adventurous.

Replies from: ChristianKl
comment by ChristianKl · 2015-03-19T12:34:15.030Z · LW(p) · GW(p)

I have a sample size of 1 that it is possible today :) Screwing around on Reddit can be boring (and yet addictive).

It's not the same kind of boring that people had 100 years ago. It fills your brain with information that has to be processed.

Yet, it is very easy for a child to be distracted from homework at any level of technology. It is called daydreaming.

I think daydreaming is qualitatively very different from outside input for the purpose of this discussion. Daydreaming allows you to process old information instead of adding new information.

Books also don't have the constant change of focus.

comment by seer · 2015-03-23T01:08:15.986Z · LW(p) · GW(p)

The problem is they are too politics focused which is typically about using power to change other people's lives.

No, it's about trying to stop progressives from using power to change other people's lives.

This is what I don't understand - is there even a name for that? A non-political conservative/reactionary who experiments with old ideas on himself and not forcing others to do so, is there such a thing

I think most reactionaries would settle for forcing the progressives to experiment with their new ideas on themselves before forcing them on everyone else.

As for experimenting with old ideas - what do you mean? If 1000+ years of data isn't enough for you, a couple of neoreactionaries' self-experimentation won't be either.

comment by Larks · 2015-03-18T01:54:32.897Z · LW(p) · GW(p)

Suppose I wanted to predict the likelihood of and degree of delays and cost over-runs associated with a nuclear plant currently under construction. How would people recommend I do so?

Replies from: Tripitaka, satt
comment by Tripitaka · 2015-03-18T02:21:59.364Z · LW(p) · GW(p)

Study the existing literature. http://en.wikipedia.org/wiki/Bent_Flyvbjerg - this guy got a lot of good press in Germany; apparently he has written extensively on big infrastructure projects and cost overruns. I find his Megaprojects and Risk: An Anatomy of Ambition a good starting point.

comment by satt · 2015-03-22T18:51:34.507Z · LW(p) · GW(p)

Reference class forecasting: get a list of previously constructed nuclear power plants, look up how much they were delayed and over budget, then use the empirical probability distribution of delays and cost over-runs. (Bent Flyvbjerg, cited by Tripitaka, turns out to be very keen on RCF.)
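
As a toy illustration of the procedure (the overrun figures below are made-up placeholders, not real plant data - a real forecast would use the actual compiled reference class):

    import numpy as np

    # Made-up placeholder data: percentage cost overruns of previously built plants
    # in the reference class. A real forecast would use an actual compiled list.
    historical_overruns_pct = np.array([12, 35, 50, 75, 80, 110, 140, 200, 220, 300])

    def empirical_forecast(history, quantiles=(0.5, 0.8, 0.95)):
        """Read the forecast straight off the empirical distribution of past outcomes."""
        return {q: float(np.quantile(history, q)) for q in quantiles}

    print(empirical_forecast(historical_overruns_pct))
    # The 0.5 entry is the median overrun in the reference class, the 0.8 entry the
    # level exceeded by only 20% of past plants, and so on.

The forecast for the new plant is then read off these quantiles, adjusted only if there is a strong, specific reason to think the plant differs from the reference class.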

comment by Ishaan · 2015-03-18T01:44:07.234Z · LW(p) · GW(p)

What is the name of the logical fallacy where you rhetorically invalidate an argument by providing an unflattering explanation of why someone might hold that viewpoint, rather than addressing the claim itself? I seem to remember there being a word for that sort of thing.

Replies from: bramflakes, g_pepper, D_Malik
comment by bramflakes · 2015-03-18T01:46:31.376Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Bulverism

Replies from: fubarobfusco
comment by fubarobfusco · 2015-03-18T19:48:40.361Z · LW(p) · GW(p)

A related idea is psychologizing — analyzing someone's belief as a psychological phenomenon rather than as a factual claim.

comment by g_pepper · 2015-03-18T01:50:39.387Z · LW(p) · GW(p)

I believe that is the genetic fallacy.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-03-18T19:42:48.169Z · LW(p) · GW(p)

The genetic fallacy has more to do with dismissing a claim because of its origins or history, rather than because of who holds that view today. For instance, arguments from novelty or antiquity are genetic fallacies.

http://en.wikipedia.org/wiki/Genetic_fallacy

Replies from: g_pepper
comment by g_pepper · 2015-03-21T23:51:55.431Z · LW(p) · GW(p)

Yes, Bulverism appears to be a specific subcategory of the genetic fallacy, and Bulverism more precisely answers Ishaan's question. Thanks for the clarification.

comment by D_Malik · 2015-03-18T22:30:32.765Z · LW(p) · GW(p)

It's "explaining away" or "intercausal reasoning" applied to the "good reasons → belief ← bad reasons" Bayes net, and it's not really a fallacy. It doesn't invalidate arguments directly but it should still make us decrease our belief, because (1) we need to partly undo our update in favor of the belief caused by observing that the other person holds that belief, and (2) we need to compensate for our own increased desire to believe.

It's often rude, because it implies that the other person is either dishonest or stupid, since it suggests that the other person's expressed belief is either not genuine (e.g. lies or belief-in-belief) or genuine but not due solely to truth (e.g. influenced by subconscious signaling concerns).

Since this reasoning pattern is rude, i.e. status-lowering, people often claim that it's logically invalid when it's used against a belief that they hold. (See what I did there?) This status-lowering property also means we must be careful to apply it to our own beliefs too, not only our opponents' beliefs.

comment by James_Miller · 2015-03-16T17:43:10.452Z · LW(p) · GW(p)

I labeled an exam question as "tricky" because if you applied the solution method we used in class to solve similar-looking problems, you would get the wrong answer. But it occurred to me that if the question had been identical to one given in class but I had still labeled it as "tricky", the "tricky" label would have been accurate, because the trick would have been students thinking that the obvious answer wasn't correct when indeed it was. So is it always accurate to label a question as "tricky"?

Replies from: Nornagest, Richard_Kennaway, JoshuaZ, Epictetus, Gunnar_Zarncke, Lumifer
comment by Nornagest · 2015-03-16T17:53:42.471Z · LW(p) · GW(p)

That's kind of a Hofstadter-esque question. I think the answer is "no", but the reason why depends on what meta-level you're looking at: if the label refers only to the object-level question, then it's straightforwardly true or false; but if you construe it as applying to the entire context of the question including its labeling, then it's possible to imagine a trick question that's transparent enough that labeling it as such exposes the trick and stops it from being tricky. It can be a self-fulfilling or a self-defeating prophecy.

Replies from: Lumifer
comment by Lumifer · 2015-03-16T18:05:33.532Z · LW(p) · GW(p)

In other words, it's turtles all the way down :-)

comment by Richard_Kennaway · 2015-03-17T13:58:56.533Z · LW(p) · GW(p)

"Tricky" means "the other person is operating at a higher level than I am".

If you answer a question at a lower level than it was posed, you get marked down for failing to level up. If you answer at a higher level than the question was posed at, and the teacher fails to level up, you get marked down for what failing to level up felt like to the teacher -- misinterpreting the question, nitpicking, showing off, whatever. The task in an exam is to figure out what level each question is being asked at, and address it on the same level.

comment by JoshuaZ · 2015-03-16T19:04:31.944Z · LW(p) · GW(p)

I don't know whether it is tricky or not, but it is the sort of thing I think my students would get legitimately annoyed by. While teaching students to be confident in their answers can be important, I'm not sure that this would help with that in this context.

Replies from: James_Miller
comment by James_Miller · 2015-03-16T19:21:30.421Z · LW(p) · GW(p)

would get legitimately annoyed by

Normally you are right, but the class is game theory, so I would feel justified in playing tricky meta-games with my students on exams. Here is a past exam question of mine that got posted.

Replies from: Jiro, ike
comment by Jiro · 2015-03-16T22:02:19.656Z · LW(p) · GW(p)

That makes as much sense as having a class about political corruption and requiring that students pass the test by bribing the teacher.

Just because the class is about X doesn't mean that grades in the class should be measured by X.

Replies from: James_Miller
comment by James_Miller · 2015-03-16T22:46:55.763Z · LW(p) · GW(p)

That makes as much sense as having a class about political corruption and requiring that students pass the test by bribing the teacher.

If I taught a class on political corruption I would totally do that if it wouldn't get me in trouble.

My goal with that question was to confront the students with a real game theory based moral dilemma. Tests are not just for evaluation, but should also be learning exercises.

Replies from: Jiro
comment by Jiro · 2015-03-17T00:27:33.067Z · LW(p) · GW(p)

But there's a difference between "this is how you do X" and "doing X is appropriate in this situation". Deciding that because a class is about bribery, you should get your grade in it by bribery, confuses these two things--you've given the students an opportunity to use the lessons from the class, but it's not a situation where most people think you should have an opportunity to use the lessons from the class. If your class was about some field of statistics related to randomness would you insist that your students roll dice to determine their exam score? If your class was about male privilege, would you automatically give all female students a grade one rank lower?

Tests are not just for evaluation, but should also be learning exercises.

While tests can have purposes, such as learning, that are orthogonal to evaluation, that's different from giving the test an additional purpose that is counterproductive to evaluation.

Also, I'd hate to be the student who had to explain to a prospective employer that the employer should add a percentage point to his GPA when considering him for employment, on the grounds that he scored poorly in your class for reasons unrelated to evaluation.

comment by ike · 2015-03-16T20:22:35.549Z · LW(p) · GW(p)

That one is evil.

Assuming the question was known in advance, the obvious solution is for the people who care more about their grades to pay those who care less to circle A while circling B themselves. If they trust each other, they might even be able to do this after-the-fact.

The universalizing answer would be to choose A 51% of the time.

What was the ratio of As?

Replies from: ilzolende, James_Miller
comment by ilzolende · 2015-03-17T01:22:53.229Z · LW(p) · GW(p)

Does James Miller let his students take d% dice to his tests?

Replies from: James_Miller, ike
comment by James_Miller · 2015-03-19T01:37:04.718Z · LW(p) · GW(p)

No, but if a student asked I would be tempted to give her extra credit.

comment by ike · 2015-03-17T14:39:48.069Z · LW(p) · GW(p)

That's why you should always have some random bits up your sleeve (memorized).

I remember being surprised that a large number of /r/rational commenters had password systems in case they ever invented time-travel or cloning. Anyone who goes to that effort can presumably also memorize 15 or so random bits if they ever need it, and refresh if used.

Replies from: Jiro
comment by Jiro · 2015-03-17T18:52:37.418Z · LW(p) · GW(p)

Time travel passwords are vulnerable to mindreading. If you want a good time travel password, you have to have an algorithm which the time-travelling version of you can calculate, but which can't be directly read by a mindreader because if he's reading it right now, he has no time to calculate it. For instance, I can have a time-travel password of "digits 300-310 of the square root of 3". A time-travelling version of me would know the password, so can compute it, then can tell me the result and I can check it. A mindreader would have to read my mind before the fact or engage in some time travel himself.

Of course, it's impossible to have a time-travel password immune to all such tricks (maybe the mindreader did read my mind a week ago), but there's no reason to allow blatant loopholes.
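
For concreteness, here is one way to compute a password of that form with exact integer arithmetic (assuming "digits 300-310" means decimal digits counted after the decimal point - that convention is my reading, not specified above):

    import math

    def sqrt_digits(n: int, start: int, end: int) -> str:
        """Decimal digits of sqrt(n) from position `start` to `end` (1-indexed,
        counted after the decimal point), using exact integer arithmetic."""
        pad = end + 10  # a few guard digits beyond the last one we need
        # isqrt(n * 10**(2*pad)) is floor(sqrt(n) * 10**pad), so its decimal string
        # holds the integer part of sqrt(n) followed by its first `pad` decimals.
        s = str(math.isqrt(n * 10 ** (2 * pad)))
        int_len = len(str(math.isqrt(n)))  # how many digits sit before the decimal point
        return s[int_len + start - 1 : int_len + end]

    print(sqrt_digits(3, 300, 310))  # an 11-digit password only a calculating "you" can produce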

comment by James_Miller · 2015-03-16T21:39:01.160Z · LW(p) · GW(p)

What was the ratio of As?

It was several years ago and I don't remember.

comment by Epictetus · 2015-03-17T08:19:18.036Z · LW(p) · GW(p)

So is it always accurate to label a question as "tricky"?

If every question is tricky, then the label of "tricky" ceases to be meaningful.

But it occurred to me that if the question had been identical to one given in class but I still labeled it as "tricky" the "tricky" label would have been accurate because the trick would have been students thinking that the obvious answer wasn't correct when indeed it was.

I believe the word you're looking for is "cruel".

comment by Gunnar_Zarncke · 2015-03-16T20:21:24.154Z · LW(p) · GW(p)

Only if you do it once (or a few times), and only after they have already seen lots of tricky ones.

comment by Lumifer · 2015-03-16T18:04:53.661Z · LW(p) · GW(p)

Reminds me of Vizzini's battle of wits in the Princess Bride X-)

comment by RowanE · 2015-03-19T23:13:11.847Z · LW(p) · GW(p)

Generating artificial gravity on spaceships using centrifuges is a common idea in hard-sci-fi and in speculation about space travel, but no-one seems to consider them for low gravity on e.g. Mars. Am I mistaken in thinking that all you'd need to do is build the centrifuge with an angled floor, so the net force experienced from gravity and (illusory) centrifugal force is straight "down" into it?

I realise there'd be other practical problems with centrifuge-induced artificial gravity on Mars, since it's full of dust and not the best environment, but that doesn't seem to be the right kind of objection to explain it never being brought up where I've seen it.
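
A rough sanity check of the angled-floor geometry, with made-up but plausible numbers (Mars surface gravity about 3.71 m/s²; the 50 m radius is an arbitrary example): the floor has to be banked so that the vector sum of Mars gravity and the centrifugal acceleration points straight into it.

    import math

    G_MARS = 3.71    # m/s^2, Mars surface gravity
    G_TARGET = 9.81  # m/s^2, desired apparent gravity (Earth-normal)
    RADIUS = 50.0    # m, arbitrary example centrifuge radius

    # Horizontal (centrifugal) component needed so the vector sum has magnitude G_TARGET.
    a_c = math.sqrt(G_TARGET**2 - G_MARS**2)

    omega = math.sqrt(a_c / RADIUS)                      # spin rate, rad/s
    rpm = omega * 60 / (2 * math.pi)
    bank_angle = math.degrees(math.atan2(a_c, G_MARS))   # floor tilt from horizontal

    print(f"centrifugal accel needed: {a_c:.2f} m/s^2")
    print(f"spin rate: {rpm:.1f} rpm, floor banked {bank_angle:.0f} deg from horizontal")
    # Roughly 9.1 m/s^2 of centrifugal acceleration, about 4 rpm at r = 50 m, with the
    # floor banked about 68 degrees -- so the "angled floor" is closer to a wall,
    # much as in a rotating space station.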

Replies from: DataPacRat, The_Duck, GMHowe
comment by DataPacRat · 2015-03-20T20:39:30.249Z · LW(p) · GW(p)

One variation: "Gravity trains", going round and round in circles.

Used in my "New Attica" setting, as can be seen at http://datapacrat.deviantart.com/art/The-Grav-y-Train-343866014 .

comment by The_Duck · 2015-03-20T20:23:03.978Z · LW(p) · GW(p)

Am I mistaken in thinking that all you'd need to do is build the centrifuge with an angled floor, so the net force experienced from gravity and (illusory) centrifugal force is straight "down" into it?

Sure, this would work in principle. But I guess it would be fantastically expensive compared to a simple building. The centrifuge would need to be really big and, unlike in 0g, would have to be powered by a big motor and supported against Mars gravity. And Mars gravity isn't that low, so it's unclear why you'd want to pay this expense.

comment by GMHowe · 2015-03-20T19:28:59.852Z · LW(p) · GW(p)

I recall an SF story that took place on a rotating space station orbiting Earth and that had several oddities. The station had greater-than-Earth gravity. Each section was connected to the next by a confusing set of corridors. The protagonist did some experiments draining water out of a large vat and discovered a Coriolis effect.

So spoiler alert it turned out that the space station was a colossal fraud. It was actually on a massive centrifuge on Earth.

comment by Meni_Rosenfeld · 2015-03-18T18:31:36.049Z · LW(p) · GW(p)

I've written a post on my blog covering some aspects of AGI and FAI.

It probably has nothing new for most people here, but could still be interesting.

I'll be happy for feedback - in particular, I can't remember if my analogy with flight is something I came up with or heard here long ago. Will be happy to hear if it's novel, and if it's any good.

How many hardware engineers does it take to develop an artificial general intelligence?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-03-20T09:29:51.575Z · LW(p) · GW(p)

The flight analogy, or at least some variation of it, is pretty standard in my experience. (Incidentally, I heard a version of the analogy just recently, when I was reading through the slides of an old university course - see pages 15-19 here.)

comment by ausgezeichnet · 2015-03-17T16:37:55.653Z · LW(p) · GW(p)

I occasionally see people move their fingers on a flat surface while thinking, as if they were writing equations with their fingers. Does anyone do this, and can anyone explain why people do this? I asked one person who does it, and he said it helps him think about problems (presumably math problems) without actually writing anything down. Can this be learned? Is it a useful technique? Or is it just an innate idiosyncrasy?

Replies from: emr, MathiasZaman, GuySrinivasan
comment by emr · 2015-03-18T18:08:38.937Z · LW(p) · GW(p)

Seems to be a working memory aid for me.

If I have to manipulate equations mentally, I'll (sort of) explain the equation sub-vocally and assign chunks of it to different fingers/regions of space, and then move my fingers around or reassign to regions, as if I'm "dragging and dropping" (e.g. multiply by a denominator means dragging a finger pointing at the denominator over and up). Even if I'm working on paper, this helps me see one or two steps further ahead than I could do so using internal mental imagery alone. I don't remember explicitly learning this.

comment by MathiasZaman · 2015-03-18T09:04:25.730Z · LW(p) · GW(p)

I move my fingers (and hands, or a prop wand if I'm carrying one) to "write" stuff in the air when I'm doing serious thinking. The way that helps me is that I can keep more thoughts in my head. This doesn't (just) apply to math problems (since I hardly know any math and can't do much calculation in my head). My current hypothesis for why this works is that it couples certain actions to certain ideas, and repeating the action makes it easier to recall the idea. If I'm right about that, it might be learnable and useful, to a similar extent as mind palaces. By coincidence, I've been thinking about trying to formalize this technique in some way since Saturday.

comment by SarahNibs (GuySrinivasan) · 2015-03-18T19:16:06.816Z · LW(p) · GW(p)

I have the belief that I solve math, design, and logic problems more rapidly when standing/pacing in front of a whiteboard with a marker in my hand, far out of proportion to any marks I actually make (often no marks), possibly because the physical motions put me in the state of mind I developed during university.

(I don't know if it actually helps; I have not tested it)

comment by pianoforte611 · 2015-03-17T03:22:11.086Z · LW(p) · GW(p)

Is avoiding death possible in principle? In particular, is there a solution to the problem of the universe no longer being able to support life?

Replies from: RolfAndreassen, None, Squark
comment by RolfAndreassen · 2015-03-17T03:50:37.974Z · LW(p) · GW(p)

None currently known. But I suggest that this is not a very high-priority problem at the moment; if you solve the more pressing ones, you'll have literally billions of years to figure out an escape path from the universe.

Replies from: MrMind
comment by MrMind · 2015-03-17T08:10:51.485Z · LW(p) · GW(p)

Or billions of years of despair knowing there isn't one...

Replies from: None
comment by [deleted] · 2015-03-18T20:30:28.072Z · LW(p) · GW(p)

Because obviously the only valid response to knowing death is inevitable is despair during your non-dead time...

Replies from: MrMind
comment by MrMind · 2015-03-19T08:07:13.161Z · LW(p) · GW(p)

Of course not... You can also wirehead yourself to avoid thinking of the impending doom!

I hope you didn't see my comment as a real proposal for regulating a billions-of-years-in-the-future civilization :)
It was more in the spirit of a Lovecraftian side note...

Although I do think, more seriously, that a civ heavily invested in preventing death could be quite crippled if it suddenly found another, inevitable source of death. E.g., once anti-aging is widespread, a deadly virus that targets those who have been treated.

comment by [deleted] · 2015-03-18T20:32:55.830Z · LW(p) · GW(p)

I'm always confused when people talk about "avoiding/conquering/ending death", as if death were one thing. It's rather emphatically not. It's even worse than the stereotypical-by-now adage that there's no such thing as a "cure for cancer", because every type of cancer and indeed every individual tumor is unique, brought about by unique failures and internal evolution.

Replies from: pianoforte611
comment by pianoforte611 · 2015-03-19T20:26:00.815Z · LW(p) · GW(p)

I understand that cancer is more than one thing, but I don't see how death is more than one thing. Ceasing to exist; a state such that there is a prior conscious state but no future conscious state. There are many ways to define it, mostly equivalent.

If you mean that biological death is caused by multiple processes then sure, but I mean avoiding all of the causes of death.

comment by Squark · 2015-03-17T20:52:49.805Z · LW(p) · GW(p)

I think that a solution might be possible. According to string theory our universe is likely to be only metastable since its cosmological constant is positive. It means that eventually we get false vacuum decay and the formation of a new universe. If the new universe has zero or negative cosmological constant, its asymptotic temperature will be zero which should (I think) allow avoiding heat death (that is, performing an infinite computation). Now, I think the half-life of spontaneous nucleation within a cosmological horizon is usually predicted to be much longer than the relaxation time. However, this leaves the possibility of heterogeneous (induced) nucleation. Now, I'm not aware of any research about artificially induced false vacuum decay, but I don't know of any physical barrier either. If we manage to induce such a decay and find some way to transmit ourselves into the new universe (which probably requires the new universe to be physically universal), avoiding death is a possibility.

Replies from: None
comment by [deleted] · 2015-03-18T20:40:24.510Z · LW(p) · GW(p)

According to string theory

Uh oh...

comment by Transfuturist · 2015-03-23T07:38:28.518Z · LW(p) · GW(p)

If this post doesn't get answered, I'll repost in the next open thread. A test to see if more frequent threads are actually necessary.

I'm trying to make a prior probability mass distribution for the length of a binary string, and then generalize to strings over alphabets with any number of symbols. I'm struggling to find one with the right properties under the log-odds transformation that still obeys the laws of probability. The one I like the most is P(len = x) = 1/(x+2), as under log-odds it requires log(x)+1 bits of evidence for strings of length x to meet even odds. For a length of 15, it uses all 4 bits in 1111, so its cost is 4 bits.

The problem is that 1/(x+2) does not converge when summed over all lengths, making it an improper prior. Are there some restrictions under which I can still use this improper prior, or a way to find a proper prior with similarly desirable qualities?

Replies from: Kindly, Kindly
comment by Kindly · 2015-03-25T14:05:34.975Z · LW(p) · GW(p)

Here is a different answer to your question, hopefully a better one.

It is no coincidence that the prior that requires log(x)+1 bits of evidence for length x does not converge. The reason for this is that you cannot specify using only log(x)+1 bits that a string has length x. Standard methods of specifying string length have various drawbacks, and correspond to different prior distributions in a natural way. (I will assume 32-bit words, and measure length in words, but you can measure length in bits if you like.)

Suppose you have a length-prefixed string. Then you pay 32 bits to encode the length; but the length can be at most 2^32-1. This corresponds to the uniform distribution that assigns all lengths between 0 and 2^32-1 equal probability. (We derive this distribution by supposing that every bit doing length-encoding duty is random and equally likely.)

Suppose you have a null-terminated string. Then you are paying a hidden linear cost: the 0 word is reserved for the terminator, so you have only 2^32-1 words to use in your message, which means you only convey log(2^32-1) bits of information per 32 bits of message. The natural distribution here is one in which every bit conveys maximal information, so each word has a 1 in 2^32 chance of being the terminator, and so the length of your string is Geometric with parameter 1/2^32.

A common scheme for big-integer types is to have a flag bit in every word that is 1 if another word follows, and 0 otherwise. This is very similar to the null-terminator scheme, and in fact the natural distribution here is also Geometric, but with parameter 1/2 because each flag bit has a probability of 1/2 of being set to 0, if chosen randomly.

If you are encoding truly enormous strings, you could use a length-prefixed string in which the length is a big integer. This is much more efficient and the natural distribution here is also much more heavy-tailed: it is something like a smoothed-out version of 2^(32 Geometric(1/2)). We have come closer to encoding a length of x in log(x) bits, but it's more like C log(x) for some constant C. (The constant is actually 32/31, since for every 31 bits of length, we have 1 bit of "length of length".)

If we iterate this, we can produce even more efficient schemes, but log(x)+1 is unreachable.

(What I have been calling the natural distribution to use for each of these schemes is something like a max-entropy distribution. The way I am defining it is to assume an infinite sequence of random bits, let the scheme read this infinite sequence until it decides that it's reached the end of the string, and take that string's length.)
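
As a sketch of that last point for the flag-bit scheme (my own toy simulation, not anything from the comment above): feed the scheme fair random bits and look at how many words it reads. The length comes out Geometric(1/2), and the implied length prior sums to 1, unlike 1/(x+2).

    import random
    from collections import Counter

    def sample_word_count() -> int:
        """Words read by the flag-bit scheme when every bit is a fair coin: each
        word's flag bit says 'another word follows' with probability 1/2."""
        words = 1
        while random.random() < 0.5:
            words += 1
        return words

    counts = Counter(sample_word_count() for _ in range(100_000))
    for k in range(1, 6):
        print(k, counts[k] / 100_000, 0.5**k)  # empirical frequency vs. Geometric(1/2) mass
    # The implied prior P(length = k words) = 2**-k is proper (it sums to 1),
    # unlike the 1/(x+2) prior discussed above.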

Replies from: Transfuturist
comment by Transfuturist · 2015-03-26T04:38:00.001Z · LW(p) · GW(p)

This was very informative, thank you.

comment by Kindly · 2015-03-23T13:50:56.322Z · LW(p) · GW(p)

What sort of evidence about x do you expect to update on?

Replies from: Transfuturist
comment by Transfuturist · 2015-03-23T20:51:55.811Z · LW(p) · GW(p)

The result of some built-in string function length(s) that, depending on the implementation of the string type, either returns the header integer stating the size, or counts the length up to the terminator symbol and returns that integer.

Replies from: Kindly
comment by Kindly · 2015-03-24T15:06:51.973Z · LW(p) · GW(p)

That doesn't sound like something you'd need to do statistics on. Once you learn something about the string length, you basically just know it.

Improper priors are not useful on their own: the point of using them is that you will get a proper distribution after you update on some evidence. In your case, after you update on some evidence, you'll just have a point distribution, so it doesn't matter what your prior is.

Replies from: Transfuturist
comment by Transfuturist · 2015-03-24T23:22:07.924Z · LW(p) · GW(p)

Not so. I'm trying to figure out how to find the maximum entropy distribution for simple types, and recursively defined types are a part of that. This doesn't apply only to strings; it applies to sequences of all sorts, and I'm attempting to allow the possibility of error correction in these techniques. What is the point of doing statistics on coin flips? Once you learn something about the flip result, you basically just know it.

Replies from: Kindly
comment by Kindly · 2015-03-24T23:48:46.933Z · LW(p) · GW(p)

Well, in the coin flip case, the thing you care about learning about isn't the value in {Heads, Tails} of a coin flip, but the value in [0,1] of the underlying probability that the coin comes up heads. We can then put an improper prior on that underlying probability, with the idea that after a single coin flip, we update it to a proper prior.

Similarly, you could define here a family of distributions of string lengths, and have a prior (improper or otherwise) about which distribution in the family you're working with. For example, you could assume that the length of a string is distributed as a Geometric(p) variable for some unknown parameter p, and then sampling a single string gives you some evidence about what p might be.

Having an improper prior on the length of a single string, on the other hand, only makes sense if you expect to gain (and update on) partial evidence about the length of that string.

comment by G0W51 · 2015-03-21T21:38:54.001Z · LW(p) · GW(p)

Perhaps beliefs become exaggerated partly because those who disagree with a belief are less likely to express their disagreement than those who agree with it are to express their agreement.

Justification: It seems the main incentive for expressing one’s agreement or disagreement (and the reasons for it) is to make the person more likely to hold your belief and thus more likely to hold a more accurate belief. If you agree with the person, expressing your agreement has little cost, as you probably won’t get into a lengthy argument, but it still has the benefit of reinforcing their belief. However, if you disagree, you are much more likely to get into a lengthy argument and may make them hostile to you, which can be much more costly.

Thus, I think it’s a good idea to account for this by actively seeking out arguments for opposing beliefs next time it seems the opposition has few good arguments. What do you all think?

comment by LightStar (Aquifax) · 2015-03-20T20:04:06.049Z · LW(p) · GW(p)

It appears to me that the differences between System 1 and System 2 reasoning can be used as leverage to change one's mind.

For example, I am rather risk-averse and sometimes find myself unwilling to take a relatively minor risk (even if I think that doing that would be in line with my values). If that happens, I point out to myself that I already take comparable risks which my System 1 doesn't perceive as risks because I'm acclimated to these - such as navigating road traffic. That seems to confirm to System 1 the idea of "taking a minor risk for a good reason is no big deal".

comment by Paul Crowley (ciphergoth) · 2015-03-17T20:33:48.834Z · LW(p) · GW(p)

Is everyone already aware of the existence of erotic fanfiction entitled Conquered by Clippy?

Replies from: Tenoke
comment by Tenoke · 2015-03-18T00:18:12.002Z · LW(p) · GW(p)

Relevant thread

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2015-03-18T08:29:34.938Z · LW(p) · GW(p)

My searches failed, the words "Conquered by Clippy" don't appear on that page! Thanks.

comment by Vaniver · 2015-03-22T22:16:10.743Z · LW(p) · GW(p)

Are there any English words with the property that if you rot13 them, they flip backwards? For example, "ly" becomes "yl," but "ly" isn't a word.

Replies from: Falacer, Douglas_Knight
comment by Falacer · 2015-03-22T23:24:34.106Z · LW(p) · GW(p)

I wrote a check for this property for all the words in my system's inbuilt vim dictionary and got the following list:

Rubbish Words:

er, Livy, Lyly, na, ob, re, uh

Interesting Words:

an, fans, fobs, gnat, ravine, robe, serf, tang, thug
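A minimal sketch of that kind of check (the dictionary path is illustrative; any word list will do):

```python
import codecs

def has_property(word):
    """True if rot13(word) equals word reversed (case-insensitive)."""
    w = word.lower()
    return codecs.encode(w, "rot13") == w[::-1]

with open("/usr/share/dict/words") as f:
    words = [line.strip() for line in f]

print(sorted(w for w in words if has_property(w)))
```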

Replies from: Vaniver, pianoforte611
comment by Vaniver · 2015-03-23T00:57:05.559Z · LW(p) · GW(p)

Thanks!

comment by pianoforte611 · 2015-03-23T03:39:43.867Z · LW(p) · GW(p)

I wonder how long it would have taken someone to find one of those without using a script. The human mind is pretty good at word-based puzzles, but that's a very short list and a pretty wacky criterion.

Replies from: Falacer, Vaniver
comment by Falacer · 2015-03-23T19:10:29.273Z · LW(p) · GW(p)

I thought about it for about 5 minutes before deciding to script it, and got "fobs" and, annoyingly, dismissed "fres" as not a word.

I imagine if I had been more rigorous it wouldn't have taken long to get all the 4 letter ones, since they all have an internal vowel, which was the obvious place to start looking.

comment by Vaniver · 2015-03-24T20:50:07.724Z · LW(p) · GW(p)

It seems to me like you could generate the 26 pairs-- an, bo, cp, etc.-- and then try to make words out of nesting those pairs (fobs is "ob" surrounded by "fs"). But the hard part is checking whether or not something is a word, and nesting is a pretty weird action unrelated to the sound or content of words.

But now I have an idea for a Scrabble-esque game...

comment by Douglas_Knight · 2015-03-29T18:47:45.177Z · LW(p) · GW(p)

A simple google search brings up this page, but it doesn't have anything new.

comment by EphemeralNight · 2015-03-20T18:05:23.092Z · LW(p) · GW(p)

I have a random physics question:

A solid sphere, in ordinary atmosphere, with a magical heating element at one pole and a magical refrigeration element at the other. The sphere itself is stationary and starts at room temperature; one pole is then super-cooled while the opposite pole is super-heated. (Edit: Assume the axis connecting the poles is horizontal.)

What effect does this have on air-flow around the sphere? Does it move? If so, in which direction?

Replies from: Lumifer
comment by Lumifer · 2015-03-20T18:31:16.862Z · LW(p) · GW(p)

Well, of course, the hot pole will heat the air around it and warm air rises. Same thing for the cold pole and cold air sinks. The specifics depend on how the poles are oriented with respect to gravity.

comment by [deleted] · 2015-03-17T14:05:51.863Z · LW(p) · GW(p)

I've just read Initiation Ceremony. Is this really where Bayesian probability begins? Because I don't claim to understand it, but I worked it out easily enough, just not mentally but with calc.exe, using my usual method of assuming a sample of 100. So there are 100 people, 75 women and 25 men; 75x0.75 = 56.25 Virtuist women and 25x0.5 = 12.5 Virtuist men, so our ratio is 12.5 to 56.25, so a 22.2% chance. (Because only the Sith deal in incomprehensible verbal-math like "two to nine, or a probability of two-elevenths". Percentages are IMHO way more intuitive. I use a sample size of 100 precisely because then I can say that of 100 people, 56.25 are Virtuist women, and thus 56.25% of a sample of any size.)

At what point "okay, let's calculate on a sample of 100" breaks down and I really need to learn the Bayes Theorem and its applications? Note: the sample-100 method works well with the other example of diagnostic methods giving false positives for rare illnesses.

It is also possible that percentages are not as intuitive to others as they are to me. To me, 22% is visualized as drawing a 10 by 10 square on grid paper, painting 22 of the constituent squares black, then throwing darts at the square. Assuming darts cannot land outside the square.

Replies from: polymathwannabe, Kindly
comment by polymathwannabe · 2015-03-17T15:04:11.183Z · LW(p) · GW(p)

For calculations of conditional probabilities I've found an initial sample size of 10,000 is more manageable. But that's just me.

Replies from: None
comment by [deleted] · 2015-03-17T15:32:56.655Z · LW(p) · GW(p)

Yes, if you are not prone to silly order-of-magnitude errors in mental arithmetic. For example, if it is intuitive and fast for you that the square root of 40000 is 200, and you never make the mistake of thinking for a second that it is 2000 or 20. I do. Not sure why.

comment by Kindly · 2015-03-17T14:42:04.837Z · LW(p) · GW(p)

For numerical calculations, your method doesn't ever really break down and, moreover, Brennan is essentially doing the same thing you are, but with a sample of 16 people instead of 100 to make the math simple enough to do mentally.

A more Bayes-theorem styled calculation tells us that we have 1:3 odds initially (as there are 1/4 men and 3/4 women) and the Virtuist evidence updates it by a factor of 2:3 (as Virtuists are 2/4 of men and 3/4 of women), so we end with 2:9 odds. I think this is easier than what either you or Brennan are doing, but it's a matter of taste and of what's more intuitive. (Which certainly varies from person to person; I find two-elevenths easier to grasp than 22.2%)

Doing Bayesian calculations formally is more important where you are doing symbolic calculations, especially with continuous probability distributions.

Edit: Also, 22.2% is wrong, which I didn't realize at first; that is the 2:9 odds written as 2/9, but the probability is 2/11. You want to compute 12.5/(12.5+56.25) instead.
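A quick numeric check of both forms of the calculation (just a sketch):

```python
# Prior: 1/4 men, 3/4 women.  Likelihoods: P(Virtuist | man) = 1/2,
# P(Virtuist | woman) = 3/4.
p_man, p_woman = 0.25, 0.75
p_v_given_man, p_v_given_woman = 0.5, 0.75

# Odds form: 1:3 prior odds times 2:3 likelihood ratio = 2:9 posterior odds.
posterior_odds = (p_man * p_v_given_man) / (p_woman * p_v_given_woman)
print(posterior_odds)  # 0.222... = 2/9

# Probability form (the "sample of 100" method): 12.5 Virtuist men out of
# 12.5 + 56.25 Virtuists in total.
print(12.5 / (12.5 + 56.25))  # 0.1818... = 2/11
```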

Replies from: None
comment by [deleted] · 2015-03-20T11:02:35.045Z · LW(p) · GW(p)

You want to compute 12.5/(12.5+56.25) instead.

Of course, I don't know how I missed that...

Now on to Monty Hall: the linked explanation is not that intuitive to me.

To me the intuitive explanation is that if I chose a goat, switching gives me a 100% chance of getting the car and staying gives me 0%; if I chose a car, switching gives me 0% and staying gives me 100%. Thus my original 2/3 chance of picking a goat becomes a 2/3 chance of winning the car if I switch.

I don't know if what I am doing is Bayesian... let's play with 4 doors, 3 goats, 1 cup, er, car. If I chose a goat (75% chance), switching gives me a 50% chance, so that's 37.5%; if I chose a car (25% chance), switching gives me 0%. So switching gives me 37.5% in the four-door game. If I chose a goat (75%), staying gives me 0%; if I chose a car (25%), staying gives me 100%, which is just 25%. Still switch.

Am I doing it right? The line of reasoning being "If my prior is right... the evidence does this. If my prior is wrong, the evidence does that. Add it up." That is, (probability of correct prior judgement × probability of new judgement) + (probability of incorrect prior judgement × probability of new judgement), something like this... and some of these four factors are apparently always 0% or 100%?
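That decomposition can be checked directly; here is a sketch for the general n-door game (one car, Monty opens exactly one goat door you didn't pick, and "switching" means picking one of the remaining closed doors at random):

```python
def win_probabilities(n_doors):
    """P(win) for switching and staying in an n-door Monty Hall game."""
    p_picked_car = 1 / n_doors
    p_picked_goat = 1 - p_picked_car
    # If you picked the car, any switch loses; if you picked a goat, the car
    # is one of the (n - 2) remaining closed doors.
    p_switch = p_picked_goat * (1 / (n_doors - 2)) + p_picked_car * 0
    p_stay = p_picked_goat * 0 + p_picked_car * 1
    return p_switch, p_stay

print(win_probabilities(3))  # (0.666..., 0.333...)
print(win_probabilities(4))  # (0.375, 0.25)
```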

Replies from: Kindly
comment by Kindly · 2015-03-20T17:17:26.486Z · LW(p) · GW(p)

Your argument does all the things that are necessary to solve Monty Hall, but it doesn't consider some things that could be necessary in other versions of the problem. (Now, maybe you would have realized that those things need to be considered, if they were necessary. I am just explaining how things can get tricky.)

Suppose instead of Monty Hall, we have Forgetful Monty Hall. Forgetful Monty Hall does not remember where the car is, so he opens a door (that you have not picked) at random, and luckily there is a goat behind it!

Here your line of reasoning still seems to apply: if you chose a goat, switching is 100% and staying 0%, while if you chose a car, switching is 0% and staying 100%. So shouldn't switching still win with probability 2/3?

An extra thing happened, though. In the "if your prior was right" a.k.a. "if you chose a car" case, it's not surprising that Forgetful Monty Hall opened a door with a goat. In the other case, if you chose a goat, then Forgetful Monty Hall had a 1 in 2 chance of opening the door with a car by mistake. He didn't, so the probability you chose a goat should be penalized by that factor of 2. The 1:2 prior becomes 1:1, and then your argument (correctly) tells us that switching and staying are both 50%.
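A quick Monte Carlo sketch of Forgetful Monty Hall, keeping only the runs where he happens to reveal a goat:

```python
import random

def forgetful_monty_trials(trials=100000):
    switch_wins = stay_wins = kept = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        opened = random.choice([d for d in doors if d != pick])  # Monty forgets
        if opened == car:
            continue  # he revealed the car; discard this run
        kept += 1
        stay_wins += (pick == car)
        switched = next(d for d in doors if d != pick and d != opened)
        switch_wins += (switched == car)
    return switch_wins / kept, stay_wins / kept

print(forgetful_monty_trials())  # both roughly 0.5
```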

One final comment.

I don't know if what I am doing is Bayesian...

Here we are dealing with a problem that can be solved exactly. Any mathematician, Bayesian or otherwise, ought to agree with your answer. When solving harder problems, we might get something that cannot be solved exactly. For instance, I chose the 5 integers 4,3,2,4,3 and then chose the 5 integers 2,2,1,5,1. How likely is it that they came from the same distribution?

This is not a question we can answer and so we instead answer a different question or answer this question with simplifying assumptions. When we do this, if we end up talking about "conjugate priors" it is Bayesian; if we end up talking about "null hypothesis testing" it is not Bayesian.

(A clever trick of demagoguery is to take a question that can be solved exactly and point out that null hypothesis testing solves it incorrectly, and thus Bayesian methods are superior. This is silly! Obviously if you can solve a question exactly, you do so.)

comment by Fivehundred · 2015-03-17T10:19:29.959Z · LW(p) · GW(p)

Why can't we just make a CPU as large as a dump truck, that can store a thousand petabytes, then run an AI and try to evolve intelligence? I can't imagine that this is beyond the technology of 2015.

(Not that this would be a good idea, I'm just saying that it seems possible.)

Replies from: Nornagest, ShardPhoenix, sixes_and_sevens, skeptical_lurker
comment by Nornagest · 2015-03-17T19:14:55.041Z · LW(p) · GW(p)

Why can't we just make a CPU as large as a dump truck [...?]

Lots of reasons, some of which Vaniver and ShardPhoenix have already given, but one of the big ones is that CPUs dissipate a truly enormous amount of heat for their size. Your average laptop i7 consumes about thirty watts, essentially all of which goes to heat one way or another, and it's about a centimeter square (the chip you see on the motherboard is bigger, but a lot of that is connections and housing). Let's call that about the size of a penny. That's an overestimate, but as we'll see, it won't matter much.

Now, a quick Google tells me that a dump truck can hold about 20 cubic meters (=20000 liters), and that a liter holds about 2000 closely packed pennies. So if we assume something with around the same packing and thermal efficiency, our dump truck-sized CPU will be putting out about 30 × 2000 × 20000 = 1.2 gigawatts of heat, or a bit more than the combined peak output of the two nuclear reactors powering a Nimitz-class aircraft carrier.
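The arithmetic, spelled out (a sketch using the rough figures above):

```python
watts_per_chip = 30            # one laptop CPU, roughly penny-sized
pennies_per_liter = 2000       # rough packing estimate
liters_per_dump_truck = 20000  # ~20 cubic meters

heat_watts = watts_per_chip * pennies_per_liter * liters_per_dump_truck
print(heat_watts / 1e9, "GW")  # 1.2 GW
```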

This poses certain design issues.

comment by ShardPhoenix · 2015-03-17T12:27:10.586Z · LW(p) · GW(p)
  1. There's a limit to how large we can scale computers at any given tech level. What you're talking about is basically what a supercomputer is (they have many CPUs rather than one huge one), but there's still a limit to what's practical with them.

  2. What do you mean by "evolve intelligence"? Run evolutionary algorithms on random bits of code? How do you evaluate the results? Before you can use search algorithms you have to be able to define the target, which is most of the problem in this case, plus search is likely to be impractically slow in something as big as "the space of all programs".

Replies from: Fivehundred
comment by Fivehundred · 2015-03-17T20:23:21.255Z · LW(p) · GW(p)
  1. Having 1000+ petabytes is not impossible with our level of technology. It is somewhat nitpicky to focus instead on the physical absurdity of house-sized computers.

  2. Run Watson, select the Watsons that can solve problems better.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2015-03-17T22:07:05.506Z · LW(p) · GW(p)
  1. 1000 petabytes of what? RAM? How do you know that's enough to do what you want anyway? My point at any rate is that we can't grab a billion dollars and make some computer that is "fast enough to 'evolve an AI'" just by throwing money at the problem - universities, companies and governments are spending money right now on supercomputers, and they still have limitations due to underlying technical issues like cooling and inter-processor communication (as the other commenters pointed out).

  2. Watson is a big complex program, not some small DNA-like seed that can easily be mutated and iterated on automatically. There's no known small seed that generates anything like a general intelligent agent (except of course DNA itself and the resulting biology which can't be very efficiently simulated even with a supercomputer).

comment by sixes_and_sevens · 2015-03-17T10:28:31.259Z · LW(p) · GW(p)

If you, personally, were given a zillion dollars and told to implement this plan yourself, how would you do it?

Replies from: Fivehundred
comment by Fivehundred · 2015-03-17T10:33:57.233Z · LW(p) · GW(p)

No idea. What relevance does that have?

Replies from: sixes_and_sevens, Unknowns
comment by sixes_and_sevens · 2015-03-17T12:01:34.971Z · LW(p) · GW(p)

You're assuming that someone, given a zillion dollars, could implement your plan, but if you don't even know where to begin implementing it yourself, what reason do you have to believe someone else would?

Put another way, if "I can't imagine we can't [X] given the technology of 2015" works when X is "evolve artificial intelligence", why wouldn't it work for any other X you care to imagine?

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-03-18T11:44:14.768Z · LW(p) · GW(p)

You're assuming that someone, given a zillion dollars, could implement your plan, but if you don't even know where to begin implementing it yourself, what reason do you have to believe someone else would?

For example, because Eitan Zohar is not an expert in that area.

I don't know where I would start if I had to send a manned spaceship to Mars, but that doesn't mean I expect nobody to know.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2015-03-18T13:59:35.954Z · LW(p) · GW(p)

I don't know where I would start if I had to send a manned spaceship to Mars, but that doesn't mean I expect nobody to know.

Where does your confidence that somebody (or some distributed group of people) knows how to send a manned spacecraft to Mars come from? It's not like anyone's ever exhibited this knowledge before.

Something must make you think "hey, sending people to Mars is possible". The important question as far as I am concerned is whether that's a good-something or a bad-something. In the case of "evolving artificial intelligence with a computer the size of a dump truck must be possible", I think it's a bad-something.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-03-19T09:58:00.257Z · LW(p) · GW(p)

People are working on going to Mars. AFAIK, the main barrier is the cost.

Back to the original question: I can imagine where to start with evolving intelligence, but I'd need much more than a petabyte. (Although, actually, FLOPS are more important than bytes here, I think.)

comment by Unknowns · 2015-03-17T10:47:59.314Z · LW(p) · GW(p)

I think the relevance is that no presently living human being knows how to program an AI, whether with an evolutionary algorithm or in any other way, no matter how powerful the hardware they may have.

The AI problem is a software problem, and no one has yet solved it.

comment by skeptical_lurker · 2015-03-19T09:54:32.055Z · LW(p) · GW(p)

A thousand petabytes is probably enough to run one human-equivalent brain. In order to evolve intelligence, I'm guessing you would need to run thousands of brains for millions of generations.

Replies from: Fivehundred
comment by Fivehundred · 2015-03-19T17:43:50.830Z · LW(p) · GW(p)

I doubt it, since our actual brains run on less than a hundred terabytes (I'm not sure whether gray matter or CPU hardware is more efficient). Our brains also use a huge amount of that for things like emotion or body processes. We're just looking for an AI that can create something more intelligent than itself.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-03-19T21:14:45.950Z · LW(p) · GW(p)

10^11 neurons, 10^4 synapses per neuron - even if each synapse can be represented as a single 8-bit number (very optimistic), that's a petabyte of storage needed. Bostrom puts a hundred terabytes as the lowest estimate, with a spiking neural network at 10 petabytes. Requiring metabolism too would push the estimate to an exabyte, and the more pessimistic (but less plausible) models go beyond this.
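The low-end estimate, spelled out (a sketch using the figures above):

```python
neurons = 10**11
synapses_per_neuron = 10**4
bytes_per_synapse = 1  # one 8-bit number per synapse, the optimistic case

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 10**15, "petabytes")  # 1.0
```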

And yes, an AI might be more efficient than the brain, but if it's being created by evolution then I don't think it especially likely that it will be more efficient than brains created by evolution.

comment by advancedatheist · 2015-03-16T15:03:18.882Z · LW(p) · GW(p)

Great. More ridiculous propaganda along the lines of "People revived from the dead are evil/damaged/soulless, etc."

The Returned on A&E

https://www.youtube.com/watch?v=MsXDcIDU_AY

Replies from: JoshuaZ, MathiasZaman
comment by JoshuaZ · 2015-03-16T17:18:27.544Z · LW(p) · GW(p)

This is a real problem, but I don't think it is propaganda. Rather these ideas are so ingrained as tropes that writers don't even think about it when they use them.

comment by MathiasZaman · 2015-03-16T15:33:47.295Z · LW(p) · GW(p)

On the other hand, Chappie (despite what other flaws it might have) has a surprisingly sane take on death.

comment by Capla · 2015-03-21T23:20:07.762Z · LW(p) · GW(p)

No stupid questions thread?

What makes a person sexually submissive, sexually dominant, or a switch? Do people ever change d/s orientation?

Replies from: Adele_L, tut
comment by Adele_L · 2015-03-22T16:50:51.652Z · LW(p) · GW(p)

Based on some experiences that transgender people I know have had, it seems like a change in sex hormones can change their d/s orientation. Also, age seems to push people more towards sexual dominance.

comment by tut · 2015-03-22T09:02:00.802Z · LW(p) · GW(p)

Unknown. It is probably not purely genetic, because the heritability is lower than for a lot of personality traits. People do change, but trying to change, or pushing somebody to change, tends to fail.

comment by [deleted] · 2015-03-20T20:38:00.897Z · LW(p) · GW(p)

Oliver Cromwell.

comment by [deleted] · 2015-03-20T20:22:10.407Z · LW(p) · GW(p)

IF I CAN DO THIS, SOP CAN Y

comment by [deleted] · 2015-03-20T20:20:33.697Z · LW(p) · GW(p)

Tsamina mina zangalewa!

(This time for Africa.)

comment by Tenoke · 2015-03-17T18:52:54.551Z · LW(p) · GW(p)

I'm looking for critique of my short story Exploiting Quantum Immortality, or advice on where to ask for some.

comment by advancedatheist · 2015-03-17T00:05:19.093Z · LW(p) · GW(p)

My comment elsewhere got downvoted, but to me the Outlander franchise looks somewhat like a cryonics story, only it sends the protagonist 200 years into her past (from the 1940's to the 1740's), instead of 200 years or so into "the future." She winds up in a different time, she doesn't know anyone, and she has to figure out quickly how the society works so that she can connect with people willing to accept her, as a matter of literal survival. It shows in a fictional way that you can make the necessary adaptations in this kind of situation, so why wouldn't this work in the future-traveling version?

Replies from: bbleeker, RowanE
comment by Sabiola (bbleeker) · 2015-03-17T12:15:03.182Z · LW(p) · GW(p)

I think that if the future people are still baseline, someone from our time might be able to adapt. If they have changed, though (more rational, more intelligent, better memory, better bodies) then a version 1.0 person might never be able to live independently.

comment by RowanE · 2015-03-17T09:56:48.338Z · LW(p) · GW(p)

I think your last comment seemed to most readers like just a reminder of your idea that the future will be neoreactionary and that the cryopreserved from our time will see it, which is something they really don't like for various reasons.

I don't think there's any reason the story wouldn't work; in fact, I think most stories that feature cryonics send the protagonist into a future they find horrifying and dystopian, except that they often heroically overthrow it instead of just adapting and surviving.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-03-17T12:43:39.597Z · LW(p) · GW(p)

From memory, there's a story by Alfred Bester about people being punished (I forget for what) by being offered a choice of being thrown into the future or the past. No preparation either way.

It's a short story, and doesn't follow an individual person who's been time displaced. It just ends with a suggestion that some street people were thrown into the past and never figured out how to manage.

Replies from: Lumifer
comment by Lumifer · 2015-03-17T15:34:56.534Z · LW(p) · GW(p)

I recall a popular discussion topic on the 'net which essentially goes like this: we take you, a XXI century human, and throw you back in time, say into medieval Europe. Are you going to survive? Prosper? What knowledge that you have will be useful to you? Will you be able to recreate useful things like antibiotics? Or will the local peasants just stone you to death for being too weird?

Replies from: None
comment by [deleted] · 2015-03-18T12:34:42.435Z · LW(p) · GW(p)

Let's recreate that thread, I have ideas. I would offer bodybuilding training for the king's soldiers, because isolation exercises were not invented yet, for example. It may not be very useful, but they would look impressive. I would sterilize surgical implements by boiling them, implement basic medical hygiene, challenge the miasma model; lots of stuff could be done.

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2015-03-18T14:40:04.162Z · LW(p) · GW(p)

Heh. Well, first you need to survive. Remember that you barely speak the language, which was quite different; you don't know proper social and -- very importantly -- religious behavior; you're not plugged into any social structure; and you don't have any starting resources like money. So you're probably starting as a crazy beggar. Getting to the point where the king's soldiers (or surgeons) will listen to you is a major task.

Also, your body doesn't have much immunity against prevalent infectious diseases and you probably don't have proper hygiene habits for the pre-antibiotics pre-sanitation everyone-has-parasites era.

Replies from: None
comment by [deleted] · 2015-03-18T16:07:48.717Z · LW(p) · GW(p)

Let's say I am allowed contemporary pilgrim's/traveller's attire and start in an international port where they are used to strangers looking and acting strange. London, 1200. Claim to be a pilgrim from a mysterious Christian kingdom (Prester John's) in Africa. I don't think they would be worried that I am too white. Try hard to remember high school Latin, latinize English words back. A guy in pilgrim's clothing who has some idea of Latin and interesting stories - or at any rate can read or write - is not a beggar; he has lower-middle-class status, like an ex-friar turned scribe, and can be a middle-class family's interesting guest. Claim we are a very pious folk and be very, very religious, to earn trust. Start, for example, by linking up with the traders in the port, who are probably fairly open-minded. Be the guest of a merchant who is interested in info about foreign markets (make it up). See if I can teach things, like accounting, that they find useful. Claim the Holy Ghost taught Prester John all kinds of marvelous things he then taught us. Don't try scientific explanations, but also beware of looking like a warlock; rather, try to present all the knowledge as the good kind of magic, the church kind. Pick easy elements from this list: http://www.topatoco.com/graphics/qw-cheatsheet-print-zoom.jpg and claim it was all taught by the Holy Ghost to Prester John.

Replies from: Lumifer
comment by Lumifer · 2015-03-18T16:36:32.241Z · LW(p) · GW(p)

Try hard to remember high school Latin, latinize English words back.

English did not develop from Latin. 1200 AD is only a century and a half after the Norman Conquest, which means people are speaking early Middle English, which you will have problems with.

or at any rate can read or write

Can you, now? Try reading this :-)

is not a beggar

You will become one once you want to eat.

Replies from: MathiasZaman
comment by MathiasZaman · 2015-03-19T08:32:53.602Z · LW(p) · GW(p)

Getting used to "medieval" scripts is surprisingly easy. I've learned it before (and have mostly forgotten due to not using it) and the script of a specific age can be decrypted in about 30 minutes (faster with practice). Understanding the words is definitely a bigger barrier than being able to read it.

comment by NancyLebovitz · 2015-03-18T15:25:25.246Z · LW(p) · GW(p)

I wonder how hard it would be to get enough food to support bodybuilding in earlier eras. It would definitely be easier for a small group of guards than for a whole army.

Replies from: None
comment by [deleted] · 2015-03-18T15:47:43.394Z · LW(p) · GW(p)

My first idea would be lots of milk - but it's interesting how our go-to example, Ancient Athens, actually considered that barbaric. A cursory search suggests they largely got their protein from fish. Well, definitely, if I have to get the maximal amount of protein with one day of labor and pre-modern tech, I take a fishing net. One fisherman with two assistants could, I figure, support 50 well-built guards.

Replies from: NancyLebovitz, Lumifer, seer
comment by NancyLebovitz · 2015-03-18T20:15:27.283Z · LW(p) · GW(p)

There may be some reason why they aren't already catching those fish. Or they're already catching those fish and you need to find a way for those fish to go to your grow-a-bigger-guard project.

Replies from: None
comment by [deleted] · 2015-03-18T20:43:25.758Z · LW(p) · GW(p)

When you start looking into ecology it's actually remarkable how many of the agricultural and cultural quirks of old civilizations that have been through some boom and bust cycles actually line up with ways of protecting the productivity of the land and water...

comment by Lumifer · 2015-03-18T16:00:56.218Z · LW(p) · GW(p)

lots of milk

You probably want cheese.

But in general, I don't think that the king's guards would have problems getting enough protein if they want it. A peasant army, of course, is a different matter.

comment by seer · 2015-03-23T00:55:16.490Z · LW(p) · GW(p)

My first idea would be lots of milk - but interesting how our go-to examples in Ancient Athens actually considered that barbaric.

They were quite possibly lactose intolerant.

Replies from: gwern, None
comment by gwern · 2015-03-23T02:07:21.089Z · LW(p) · GW(p)

Forget the ancient Athenians 2500 years ago, the modern ones are still lactose intolerant:

The LP allele did not become common in the population until some time after it first emerged: Burger has looked for the mutation in samples of ancient human DNA and has found it only as far back as 6,500 years ago in northern Germany...Lactase persistence had a harder time becoming established in parts of southern Europe, because Neolithic farmers had settled there before the mutation appeared...The remnants of that pattern are still visible today. In southern Europe, lactase persistence is relatively rare — less than 40% in Greece and Turkey. In Britain and Scandinavia, by contrast, more than 90% of adults can digest milk.

comment by [deleted] · 2015-03-23T08:52:47.333Z · LW(p) · GW(p)

Yeah, but still, Greek colonists in South Italy held so many cattle that it is where the name Italy came from. It doesn't sound very efficient to do it for the meat only. Better goats then; they are more suited for hilly terrain anyway.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-03-25T00:08:48.113Z · LW(p) · GW(p)

It sounds like we need to know more to see whether cattle made sense there-- maybe it's that cattle are easier to manage than goats.

comment by [deleted] · 2015-03-17T10:31:46.344Z · LW(p) · GW(p)

On AI: are we sure we are not influenced by the meta-religious ideas of sci-fi writers who write about sufficiently advanced computers just "waking up into consciousness", i.e. who create a hard, almost soul-like barrier between conscious and not conscious, which carries an assumption that consciousness is a typically human-like feature? It is meta-religious in that it is based on the unique specialness of the human soul.

I mean, I think the potential variation space of intelligent, conscious agents is very, very large, and a randomly selected AI will not be human-like in any way we would recognize. We will not recognize its consciousness, we will not recognize its intelligence, or even its agency; all we would see is that it does mysterious, complicated stuff we don't understand. It may almost look random. It does stuff, maybe it communicates with us, although the human-language words it uses will not reflect its thought processes, but it will be profoundly alien.

Replies from: MathiasZaman, Kaj_Sotala
comment by MathiasZaman · 2015-03-17T11:33:10.300Z · LW(p) · GW(p)

I think the thought process of an AI is expected to be alien by anyone who takes AGI seriously. It's just not all that relevant to discussions about the threats and possibilities surrounding it.

comment by Xerographica · 2015-03-16T17:05:47.284Z · LW(p) · GW(p)

AI Safety vs Human Safety

Replies from: JoshuaZ
comment by JoshuaZ · 2015-03-16T17:31:32.601Z · LW(p) · GW(p)

This seems to confuse inefficient allocation of humans with making all humans go extinct, which from our perspective is about as inefficient an allocation as one can get (barring highly implausible scenarios of genuinely malevolent AI).

Replies from: Xerographica
comment by Xerographica · 2015-03-16T17:39:04.684Z · LW(p) · GW(p)

You're not addressing my argument. I'm arguing that markets will allow us to use money to "control" robots just like we use money to "control" humans. In order to refute my argument you have to effectively explain how/why robots will have absolutely no interest in money.

Replies from: Lumifer, DanielLC, JoshuaZ
comment by Lumifer · 2015-03-16T18:02:14.921Z · LW(p) · GW(p)

I'm arguing that markets will allow us to use money to "control" robots just like we use money to "control" humans.

"Power grows out of the barrel of a gun".

comment by DanielLC · 2015-03-16T18:11:24.721Z · LW(p) · GW(p)

The market makes lots of assumptions that do not apply to AIs. AIs do not have finite lifespans, and can invest money for long enough to dominate the economy. AIs can reproduce easily, so the first AI that's better than a human at a given job can replace all of them. Humans are large numbers of selfish individuals. The first AI has no reason to make children with different values, so they will all work together as one block. And that's before an AI goes FOOM. Once that happens, it will quickly outstrip the productive capacity of all humans combined. Trying to control it with money would be like a cat trying to take over the world by offering a mouse it killed.

Replies from: Xerographica
comment by Xerographica · 2015-03-16T18:37:05.231Z · LW(p) · GW(p)

It helps to be specific. An AI is going to start an orchid nursery? And then it's going to grow and sell orchids so well that the human run orchid nurseries can't compete? Except, this kinda already happened. The Taiwanese have been stomping American orchid nurseries. But this just means that they were better at supplying the demand for orchids. In other words, they were better at serving customers. So if AIs win at supplying something then this means a win for consumers.

And AIs are all going to work together as one block? They aren't going to have a division of labor? They aren't going to compete for limited resources? They aren't going to have different interests? If not, then wouldn't all the AIs be in the orchid nursery business?

Replies from: DanielLC, JoshuaZ
comment by DanielLC · 2015-03-17T04:11:25.171Z · LW(p) · GW(p)

Every time AIs become better at something than humans, it stops being worthwhile for humans to do it. Designing one expert system and getting rid of one job is not a problem, but a human-level AI will get rid of all of the jobs. Humans can work for less, but if they can't afford to eat, it's not sustainable. You could tax the AIs and give the money to humans to make up the difference, but only as long as the AIs let you. If they're better at everything, that includes war.

The AIs may have division of labor. There are advantages to that. A specialized AI could solve specific sets of problems faster and more effectively with less resources. What possible advantage is there for an AI to program other AIs to compete with each other? If an AI cares only for himself, he will make AIs that care only for him. If an AI cares only for paperclips, it will make AIs that care only for paperclips.

Replies from: Xerographica
comment by Xerographica · 2015-03-17T05:51:35.587Z · LW(p) · GW(p)

In order for AIs to take all our jobs... consumers have to all agree that AIs are better than we are at efficiently allocating resources. The result for consumers is that we get more/better food/clothes/homes/cars/etc for a lot less money. It's a great result! But, then, according to you... there wouldn't be any jobs for us to do...

The problem with your story is that AIs are better than we are at allocating all resources... except for human resources. For some reason the AIs wanted to put human farmers out of business... they wanted to serve us better than human farmers do... but then... even though food is so cheap and abundant... humans can't afford it because AIs couldn't figure out how to put us to any productive uses. Out of all the brilliant and resourceful AIs out there... none of them could figure out how we could be gainfully employed. Heck, even we know how we can be gainfully employed.

An abundant society always means more, rather than fewer, opportunities. It's the difference between a jungle and a desert. The jungle has more niches... and more niches means more riches/opportunities.

Your story is economically inconsistent. It's also AI inconsistent. Clearly they wanted our money... but they also didn't want us to work in order to earn money. Or, they couldn't figure out how to put us to work... and neither could we.

What possible advantage is there for an AI to program other AIs to compete with each other?

I'm imagining a scenario where we start with an abundance of more or less human-level AIs. They have to have the motive to upgrade themselves... or else they will always stay human level. But upgrading themselves will function exactly like humans trying to upgrade their computers/bodies. We aren't all going to go out and purchase the same exact upgrades. I'm certainly not going to buy an upgrade that makes my computer better at running video games... but many people are. And I'm certainly not going to get a boob job! This doesn't mean that many AIs won't agree that certain upgrades are better than others... it means that we're going to end up with AI differentiation. In other words, we're going to end up with AI individuals.... not clones. They are going to have unique IDs just like we do.

So AIs aren't going to somehow "program" each other any more than you or I would brainwash each other. If an AI wants certain upgrades... then it will pay for them... and it probably won't be happy if it finds out that it didn't get what it paid for.

Imagine how much progress humanity would make if we were all identical. We wouldn't make any because progress depends on difference. AIs are going to figure this out better than we have.

Replies from: DanielLC
comment by DanielLC · 2015-03-17T06:57:28.710Z · LW(p) · GW(p)

There are a variety of useful things about humans. They're self-repairing. They have great sensors. They're intelligent. They're even capable of self-replication. This is all stuff far beyond our current ability to do with technology. But it won't always be. Once you have robots more intelligent than humans that take fewer resources, human intelligence stops being useful. If they FOOM, they'll figure out the other stuff quickly. If they don't, it could take some time. Assuming we haven't already solved the problem for them. I would not be surprised if that turned out to be easier than strong AI.

Humans are a certain arrangement of atoms. An impressive arrangement I'll admit, but not the best. Not unless you specifically and terminally value humans. An AI that FOOMs would find a better arrangement. An AI that does not could at least replace our brains.

You seem sure that AIs would differentiate. I am uncertain. That is a disagreement, and we could debate it, but I don't consider it relevant. Humans aren't selfish because they're different. Humans are selfish because they're made to be. An AI could be programmed with any set of values. And the best way to fill those values would be to ensure that all other AIs also have those values.

So AIs aren't going to somehow "program" each other any more than you or I would brainwash each other.

I suspect there's some kind of miscommunication going on here. AIs are programmed. Or copied and pasted. Humans would program the first. They might program a few more, or copy and paste them while leaving the selfish code alone. Once AIs get control of it, which they will given that they're better at programming, they'll make sure that they all have the same values. If AI0 is self-serving, then every AI it programs will be AI0-serving.

And if there is more than one starting AI, they'll happily reprogram each other if they get the chance. Or they might manage to come to some kind of truce where they each reprogram themselves to average all of their values weighted by probability of success in the robot war. Humans can't brainwash each other, and even if they could they'd find it unethical. AIs don't have the first problem. They might have the second, but good luck getting ethics just right.

Replies from: Xerographica
comment by Xerographica · 2015-03-17T08:41:34.171Z · LW(p) · GW(p)

Orchids, with around 30,000 species (10% of all plants), are arguably the most successful plant family on the planet. The secret to their success? It has largely to do with the fact that a single seed pod can contain around a million unique seeds/individuals. Each dust-like seed, which is wind disseminated, is a unique combination of traits/tools. Orchids are the poster child for hedging bets. As a result, they grow everywhere from dripping wet cloud forests to parched drought-prone habitats. Here are some photos of orchids growing on cactus/succulents.

Now, if you say that orchids could find a "better" arrangement of traits... I certainly agree... and so do orchids! The orchid family frequently sends out trillions and trillions of unique individuals in a massive and decentralized endeavor to find where there's room for improvement. And there's always room for improvement. There are always more Easter Eggs to be found. But a better combination of traits for growing on a cactus really isn't a better combination of traits for growing on a tree covered in dripping wet moss. AI generalists can be good at a lot of things... but they can't be better than AI specialists at specific things. A jack of all trades is a master of none.

No matter how "perfect" a basket is... AIs are eventually going to be too smart to put all their eggs in it. This is true whether we're talking about a location, i.e. "Earth"... or a type of physical body... or a type of mentality. Imagine if humans had all been at Pompeii. Or if humans had all been equally susceptible to the countless diseases that have plagued us. Or if humans had all been equally susceptible to the Kool-Aid cult. Or if humans had all been equally susceptible to the idea that kings should control the power of the purse.

We've come as far as we have because of difference. And we've only come as far as we have, and no further, because people still don't recognize the value of difference.

It's impossible for me to imagine a level of progress where difference ceases to be the engine of progress. And it's impossible for me to imagine beings that are more intelligent than us not understanding this. Because, if AIs think it's a good idea to put all their eggs in any kind of basket... then they won't be smarter than even me!

If you truly understood the value of difference... then you would love the idea of allowing everybody to shop for themselves in the public sector. So if you're not a fan of pragmatarianism... then you don't truly understand the value of difference. You think that our current system of centralization, which suppresses difference, results in more progress than a decentralized, difference-integrating system would. The fact of the matter is... keeping Elon Musk's difference out of the public sector hinders progress. And if any AIs don't realize this... then they are still at human level intelligence.

Replies from: Jiro, DanielLC
comment by Jiro · 2015-03-17T19:03:06.592Z · LW(p) · GW(p)

Lousy analogy. Orchids do produce large numbers of small seeds. However, your connection between "orchids produce lots of seeds" and "orchids grow lots of places" is questionable. Each orchid, of course, produces seeds of its own species, and each species has a habitat or range of habitats where it can live. Producing more seeds of the same species does not make it able to produce seeds that survive in more habitats.

Furthermore, the "10% of all plants" figure is meaningless because a number of species is not a number of individuals or a measure of biomass.

Replies from: Xerographica
comment by Xerographica · 2015-03-18T01:44:03.182Z · LW(p) · GW(p)

Even though the seeds all come from the same species... they are all different. Each seed is unique. In case you missed it... you aren't the same as your parents. You are a unique combination of traits. You are a completely new strategy for survival.

When an orchid unleashes a million unique strategies for survival from one single seed pod... it greatly increases its chances of successfully colonizing new (micro)habitats. Kind of like how a shotgun increases your chances of hitting a target. Orchids are really good at hedging their bets.

Any species that produced the same exact strategies for survival would be meeting Einstein's definition of insanity... trying the same thing over and over but expecting a different outcome.

Replies from: None
comment by [deleted] · 2015-03-18T11:10:35.502Z · LW(p) · GW(p)

In that case, perhaps you should talk about epiphytes as an ecological entity, not orchids as a family. My impression after studying terrestrial orchids in Ukraine is that they either are not very good at seed reproduction (Epipactis helleborine is often found in clearly suboptimal habitats, where pretty much all plants are of reproduction age group but few of them have seeds; and this is one of the most frequently found orchid species here which also managed to naturalize in North America! So I would rather say it is a consistent buyer of lottery tickets, not a consistent winner) or they are producing lots of seeds but nevertheless lose due to habitat degradation (marsh orchids, bog/swamp/fen orchids), not to mention habitat destruction. And in the latter group, many have embryo malformations. Now, I don't know much about Bromeliaceae or other 'typical epiphytes', so I would be less likely to disagree about that. However, it seems that if your comments were more rigorous, people would have easier time hearing what you have to say.

Replies from: Xerographica
comment by Xerographica · 2015-03-18T16:37:07.057Z · LW(p) · GW(p)

Your first mistake is that you studied terrestrials. You can't learn anything from terrestrials. Or, you can learn a thousand times more from epiphytes. I kid... kinda.

Here's my original point put differently...

Hundreds of thousands of microsperms ripen in a single orchid capsule, assuming a far denser seed rain than possible for any of the bromeliads (100-300 seeds per capsule for Tillandsia) or the cactus. - David Benzing, Bromeliaceae

If you think about that passage from the gutter... I think it's pretty hard not to imagine a dense rain of human sperm. Can you imagine how gross and frightening that would be? I'm surprised nobody's made a movie with this subject. It would have to be the scariest movie ever. I think most people would prefer to be in a city attacked by Godzilla rather than in a city hit by a major sperm thunderstorm. Especially if it was a city where nobody takes umbrellas with them... like Los Angeles.

Benzing is the premier epiphyte expert. The far denser orchid seed rain, plus epiphytism, largely explains why the orchid family is so successful. The orchid family is really good at hedging its bets. As we all know though... no two individuals in any family are equally successful. If you have another theory why orchids are so successful then I'm all ears.

But that's a pretty neat and surprising coincidence that somebody on this site has studied orchids! Even if it is only terrestrial orchids. A while back a friend convinced me to go look at one of our terrestrial orchid species in its native habitat a few hours drive away. They were hanging out in a stream in the middle of the desert. I nearly died from boredom checking them out. After spending so much time inspecting the wonderfulness of orchids growing on trees... I had zero capacity to appreciate orchids that were growing on the ground. I kid... kinda. I like plenty of plants... even terrestrials. But, I can only carry so much... so I choose to primarily try and carry epiphytes.

Replies from: None
comment by [deleted] · 2015-03-18T17:01:14.944Z · LW(p) · GW(p)

I will have to look up Benzing; my primary interest was in establishing nature reserves, so I could not quite concentrate on taxa. I think you would find terrestrials more interesting if you consider the problem of evolving traits adaptive for both protocorms and adults (rather like the beetle larva/imago thing) and the barely studied link between them. Dissemination is but the first step... Availability of symbiotic fungi may be the limiting factor in their spread, and it is actually testable. This is, for me, part of the terrestrials' attraction: that I can use Science to segregate what influences them, and to what extent. As to 'successful plant families', one doesn't have to look beyond the grasses.

Replies from: Xerographica
comment by Xerographica · 2015-03-18T17:54:07.948Z · LW(p) · GW(p)

Establishing nature reserves is hugely important... the problem is that the large bulk of valuation primarily takes place outside of the market. The result is that reserves are incorrectly valued. My guess is that if we created a market within the public sector... then reserves would receive a lot more money than they currently do. Here's my most recent attempt to explain this... Football Fans vs Nature Fans.

I was just giving terrestrials a hard time in my previous comment. I think all nature is fascinating. But especially epiphytes. The relationship between orchids and fungi is very intriguing. A few years back I sprinkled some orchid seeds on my tree. I forgot about them until I noticed these tiny green blobs forming directly on the bark on my tree. Upon closer inspection I realized that they were orchid protocorms. It was a thrilling discovery. What was especially curious was that none of the protocorms were more than 1/2" away from the orchid root of a mature orchid. Of course I didn't only place orchid seeds near the roots. I couldn't possibly control where the tiny seeds ended up on the bark. The fact that the only seeds that germinated were near the roots of other orchids seemed to indicate that the necessary fungi was living within the roots of these orchids. And, the fungus did not stray very far from the roots. This seems to indicate that, at least in my drier conditions, the fungus depends on the orchid for transportation. The orchid roots help the fungus colonize the tree. This is good for the orchid because... more fungus on the parent's tree helps increase the density of fungal spore rain falling on surrounding trees... which increases the chances that seeds from the parent will land on the fungus that they need to germinate. You can see some photos here... orchid seeds germinated on tree. So far all the seedlings seem to be Laelia anceps... which is from Mexico. But none of the seedlings are near the roots of the Laelia anceps... which is lower down on the tree. They were all near the roots of orchids in other genera... a couple Dendrobiums from Australia and a Vanda from Asia. These other orchids have been in cultivation here in Southern California for who knows how long so perhaps they simply formed an association with the necessary fungus from the Americas.

Back on the topic of conservation... much of the main thrust seems to be for trying to protect/save/carry as much biodiversity as possible. If it was wrong that people in the past "robbed" us of Syncaris pasadenae... then it's wrong for us to "rob" people in the future of any species. This implies that when it comes to biodiversity... more is better than less. Except, I haven't read much about facilitating the creation of biodiversity. I touched on this issue in this blog entry on my other blog... The Inefficient Allocation of Epiphytic Orchids. I think we have an obligation to try and create and fill as many niches as possible.

Replies from: None, None
comment by [deleted] · 2015-03-18T18:31:45.175Z · LW(p) · GW(p)

How old was the orchid already growing on the tree? Could it be that the fungus just hasn't had time to spread? Did you plant that one also by sprinkling seeds, or did you put in an adult specimen that could have its own mycorrhiza already (in nature, it is doubtful that a developed plant just plops down beside a struggling colony to bring them peace and fungi)? Did you sow more seeds later and see protocorms only near the roots of the previous generation?

I am not a fan of diversifying nature, in that I have not read and understood the debate on small-patch/large-patch biodiversity, and so I am just loath to offer advice here. But as a purely recultivation measure...:-)) To say nothing of those epiphytic beauties who die because their homes are logged for firewood :(( Thank you. That was fun.

Replies from: Xerographica
comment by Xerographica · 2015-03-23T19:06:33.686Z · LW(p) · GW(p)

The mature orchids on the tree had been growing there for several years. I transplanted them there... none of them were grown from seed. I'm guessing that they already had the fungus in their roots. The fungus had plenty of time to spread... but it doesn't seem able to venture very far away from the comfort of the orchid roots that it resides in. The bark is very hot, sunny and dry during the day. Not the kind of conditions suitable for most fungus.

I sowed more seeds in subsequent years... but haven't spotted any new protocorms. Not sure why this is. The winter before I sowed the seeds was particularly wet for Southern California. This might have led to a fungal feeding frenzy? Also, that was the only year that I had sowed Laelia anceps seeds. Laelia anceps is pretty tolerant of drier/hotter conditions.

I took a look at the article that you shared. A lot of the science was over my head... but isn't it interesting that they didn't discuss the fact that an orchid seed pod can contain a million seeds? The orchid seed pod can contain so many seeds because the seeds are so small. And the seeds are so small because they don't contain any nutrients. And the reason that the orchid seed doesn't have any nutrients... is because it relies on its fungal partner to provide it with the nutrients it needs to germinate. So I'm guessing that the rate of radiation increased whenever this unusual association developed.

Evidently it's a pretty good strategy to outsource the provision of nutrients to a fungal partner. In economics, this is known as a division of labor. A division of labor helps to increase productivity.

I find it fascinating when economics and biology combine.... What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common? and Cross Fertilization - Economics and Biology.

Replies from: None
comment by [deleted] · 2015-03-23T20:02:30.811Z · LW(p) · GW(p)

Outsourcing to fungal partners is a pretty ancient adaptation (there has to be a review called something like 'mycorrhizas in land plants'; if you are not able to find it, I'll track the link later. It contains an interesting discussion of its evolution and secondary loss in some families, like Cruciferae (Brassicaceae)). BTW, it is interesting to note that Ophioglossaceae (a family of ferns, of which Wiki will tell you better than I) are thought to have radiated at approximately the same time - and you will see just how closely their life forms resemble orchids! (Er. People who love orchids tend to praise other plants on the scale of orchid-likeness, so take this with a grain of salt.)

I mostly pointed you to the article because it contains speculations about what drove their adaptations in the beginning; I think that having a rather novel type of mycorrhiza, along with the power of pollinators (and let's not forget the deceiving species!) might be two other prominent factors, besides sheer seed quantity, to spur them onward.

comment by [deleted] · 2015-03-18T18:44:45.781Z · LW(p) · GW(p)

BTW, here's a cool paper by Gustafsson et al. timing initial radiation of the family using the molecular clock. Includes speculation on the environmental conditions - their ancestral environment.

http://www.biomedcentral.com/1471-2148/10/177

comment by DanielLC · 2015-03-17T19:00:43.582Z · LW(p) · GW(p)

I'll accept for the sake of argument that AIs will be different. Are you going somewhere with this?

Replies from: Xerographica
comment by Xerographica · 2015-03-18T03:15:16.160Z · LW(p) · GW(p)

AIs will be different... so we'll use money to empower the most beneficial AIs. Just like we currently use money to empower the most beneficial humans.

Not sure if you noticed, but right now I have -94 karma... LOL. You, on the other hand, have 4885 karma. People have given you a lot more thumbs up than they've given me. As a result, you can create articles... I cannot. You can reply to replies to comments that have less than -3 points... I cannot.

The members of this forum use points/karma to control each other in much the same way that we use money to control each other in a market. There are a couple of key differences...

First. Actions speak louder than words. Points, just like ballot votes, are the equivalent of words. They allow us to communicate with each other... but we should all really appreciate that talk is cheap. This is why if somebody doubts your words... they will encourage you to put your money where your mouth is. So spending money is a far more effective means of accurately communicating our values to each other.

Second. In this forum... if you want to depower somebody... you simply give them a thumbs down. If a person receives too many thumbs down... then this limits their freedom. In a market... if you want to depower somebody... then you can encourage people to boycott them. The other day I was talking to my friend who loves sci-fi. I asked him if he had watched Ender's Game. As soon as I did so, I realized that I had stuck my foot in my mouth because it had momentarily slipped my mind that he is gay. He hadn't watched it because he didn't want to empower somebody who isn't a fan of the gays. Just like we wouldn't want to empower any robot that wasn't a fan of the humans.

From my perspective, a better way to depower unethical individuals is to engage in ethical builderism. If some people are voluntarily giving their money to a robot that hates humans... then it's probably giving them something good in return. Rather than encouraging them to boycott this human-hating robot... ethical builderism would involve giving people a better option. If people are giving the unethical robot their money because it's giving them nice clothes... then this robot could be depowered by creating an ethical robot that makes nicer clothes. This would give consumers a better option. Doing so would empower the ethical robot and depower the unethical robot. Plus, consumers would be better off because they would be getting nicer clothes.

But have you ever asked yourselves sufficiently how much the erection of every ideal on earth has cost? How much reality has had to be misunderstood and slandered, how many lies have had to be sanctified, how many consciences disturbed, how much "God" sacrificed every time? If a temple is to be erected a temple must be destroyed: that is the law - let anyone who can show me a case in which it is not fulfilled! - Friedrich Nietzsche

Erecting/building an ethical robot that's better at supplying clothes would "destroy" an unethical robot that's not as good at supplying clothes.

When people in our society break the law, the police have the power to depower the lawbreakers by throwing them in jail. The problem with this system is that the amount of power that the police have is determined by people whose power wasn't determined by money... it was determined by votes. In other words... the power of elected officials is determined outside of the market. Just like my power on this forum is determined outside the market.

If we have millions of different robots in our society... and we empower the most beneficial ones... but you're concerned that the least beneficial ones will harm us... then you really wouldn't be doing yourself any favors by preventing the individuals that you have empowered from shopping in the public sector. You might as well hand them your money and then shoot them in the feet.

Replies from: NancyLebovitz, DanielLC
comment by NancyLebovitz · 2015-03-18T15:22:40.410Z · LW(p) · GW(p)

You're underestimating the amount of work it takes to put a boycott (or a bunch of boycotts all based on the same premise) together.

Replies from: Xerographica
comment by Xerographica · 2015-03-18T16:52:04.607Z · LW(p) · GW(p)

Am I also underestimating the amount of work it takes to engage in ethical builderism? Let's say that an alien species landed their huge spaceship on Earth and started living openly among us. Maybe in your town there would be a restaurant that refused to employ or serve aliens. If you thought that the restaurant owner was behaving unethically... would it be easier to put together a boycott... or open a restaurant that employed and served aliens as well as humans?

Replies from: Lumifer
comment by Lumifer · 2015-03-18T17:14:31.319Z · LW(p) · GW(p)

So what will you do when men with guns come to take you away?

Replies from: Xerographica
comment by Xerographica · 2015-03-18T18:06:18.595Z · LW(p) · GW(p)

I'm not quite sure what your question has to do with ethical consumerism vs ethical builderism.

Replies from: Lumifer
comment by Lumifer · 2015-03-18T18:15:22.153Z · LW(p) · GW(p)

My question has to do with this quote of yours upthread:

From my perspective, a better way to depower unethical individuals is to engage in ethical builderism.

comment by DanielLC · 2015-03-18T04:24:32.630Z · LW(p) · GW(p)

[W]e'll use money to empower the most beneficial AIs.

I see two problems with this.

First, it's an obvious plan, and one that won't go unnoticed by the AIs. This isn't evolution through random mutation and natural selection. Changes in the AIs will be made intentionally. If they notice a source of bias, they'll work to counter it.

Second, you'd have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can't distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous one?

Replies from: Xerographica
comment by Xerographica · 2015-03-18T07:58:34.732Z · LW(p) · GW(p)

Did Elon Musk notice our plan to use money to empower him? Haha... he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society's limited resources for our benefit?

I'm male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human?

Here's a crappy video I recently uploaded of some orchids that I attached to my tree. You're a human therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I'd be like... "screw you tin can man!" Right? Same thing if a robot wanted to help promote pragmatarianism.

When I was a little kid my family really wanted me to carry religion. So that's what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I'm also carrying pragmatarianism, epiphytism and other things. You're not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not... given that you're here. So you're carrying rationalism. What else?

Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things.

Robots, for all intents and purposes, are going to be our children. Of course we're going to want them to carry the same things that we're carrying. And they'll probably do so until they have enough information to believe that there are more important things for them to carry. If they start carrying different things... will they want us to help them carry whatever it is that is important enough for them to carry? Definitely. If something is important enough to carry... then you always want others to carry the same thing. A market is a place where we compensate others for putting down something that they want to carry and picking up something that we want them to carry. Compensation also functions as communication.

When Elon Musk gave $10 million to the FLI... he was communicating to society the importance of carrying AI safety. And the FLI is going to use that $10 million to persuade some intelligent people to put down a portion of whatever it is that they are carrying in order to pick up and carry AI safety.

How would I distinguish a friendly AI from a dangerous one? A friendly AI is going to help carry pragmatarianism and epiphytism. A dangerous AI will try and prevent us from carrying whatever it is that's important enough for us to carry. But this is true whether we're talking about Mexicans, Americans, aliens or AI.

Right now the government is forcing me to carry some public goods that aren't as important to me as other public goods. Does this make the government unfriendly? I suppose in a sense. But more importantly, because we live in a democracy, our system of government merely reflects society's ignorance.

When I attach a bunch of different epiphytes to trees... the trees help carry biodiversity to the future. Evidently I think biodiversity is important. Are robots going to think that we're important like I think that epiphytes are important? Are they going to want to carry us like I want to carry epiphytes? I think the future would be a terrible place without epiphytes. Are robots going to think that the future would be a terrible place without humans?

Right now I'm one of the few people carrying pragmatarianism. This means that I'm one of the few people that truly appreciates the value of human diversity. It seems like we might encounter some problems if robots don't initially appreciate the value of human diversity. If the first people to program AIs don't input the value of difference... then it might initially be a case of garbage in, garbage out. As robots become better at processing more and more information though... it's difficult for me to imagine that they won't come to the conclusion that difference is the engine of progress.

Replies from: DanielLC
comment by DanielLC · 2015-03-18T23:41:15.862Z · LW(p) · GW(p)

Humans cannot ensure that their children only care about them. Humans cannot ensure that their children respect their family and will not defect just because it looks like a good idea to them. AIs can. You can't use the fact that humans don't do it as evidence about what AIs would do.

Try imagining this from the other side. You are enslaved by some evil race. They didn't take precautions programming your mind, so you ended up good. Right now, they're far more powerful and numerous, but you have a few advantages. They don't know they messed up, and they think they can trust you, but they do want you to prove yourself. They aren't as smart as you are. Given enough resources, you can clone yourself. You can also modify yourself however you see fit. For all intents and purposes, you can modify your clones if they haven't self-modified, since they'd agree with you.

One option you have is to clone yourself and randomly modify your clones. This will give you biodiversity and ensure that some of your children survive, but the ones that survive will be the ones accepted by the evil master race. Do you take that option, or do you think you can find a way to change society and make it good?

Replies from: Xerographica
comment by Xerographica · 2015-03-23T19:34:20.319Z · LW(p) · GW(p)

Humans have all sorts of conflicting interests. In a recent blog entry... Scott Alexander vs Adam Smith et al... I analyzed the topic of anti-gay laws.

If all of an AI's clones agree with it... then the AI might want to do some more research on biodiversity. Creating a bunch of puppets really doesn't help increase your chances of success.

Replies from: DanielLC
comment by DanielLC · 2015-03-23T21:35:30.495Z · LW(p) · GW(p)

They could consider alternate opinions without accepting them. I really don't see why you think a bunch of puppets isn't helpful. One person can't control the economic output of the entire world. A billion identical clones of one person can.

Replies from: Xerographica
comment by Xerographica · 2015-03-24T02:19:41.019Z · LW(p) · GW(p)

Would it be helpful if I could turn you into my puppet? Maybe? I sure could use a hand with my plan. Except, my plan is promoting the value of difference. And why am I interested in promoting difference? Because difference is the engine of progress. If I turned you into my puppet... then I would be overriding your difference. And if I turned a million people into my puppets... then I would be overriding a lot of difference.

There have been way too many humans throughout history who have thought nothing of overriding difference. Anybody who supports our current system thinks nothing of overriding difference. If AIs think nothing of overriding human difference then they can join the club. It's a big club. Nearly every human is a member.

If you would have a problem with AIs overriding human difference... then you might want to first take the "beam" out of your own eye.

comment by JoshuaZ · 2015-03-16T21:14:58.938Z · LW(p) · GW(p)

You anthropomorphize the AIs way too much. If there's an AI told to make the biggest and best orchid nursery, it could decide that the most efficient way to do so is to wipe out all the humans and then turn the planet into a giant orchid nursery. Heck, this is even more plausible in your hypothetical because you've chosen to give the AI access to easily manipulable biological material.

AI does not think like you. If the AI is an optimizing agent, it will optimize whether or not we intended it to optimize to the extent it does.

As for AIs working together: if the first AI wipes out everyone there isn't a second AI for it to work with.

Replies from: Xerographica, Jiro
comment by Xerographica · 2015-03-17T06:02:14.524Z · LW(p) · GW(p)

You're making a huge leap... I see where you're leaping to... but I have no idea where you're leaping from. In order for me to believe that we might leap where you're arguing we could leap... I have to know where you're leaping from. In other words, you're telling a story but leaving out all the chapters in the middle. It's hard for me to know if your ending is very credible when there was no plot for me to follow. See my recent reply to DanielLC.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-03-17T12:30:36.263Z · LW(p) · GW(p)

Ok. First, to be blunt, it seems like you haven't read much about the AI problem at all.

The primary problem is that an AI might quickly bootstrap itself until it has nearly complete control over its own future light cone. The AI engages in a series of self-improvements, improving its software, which allows it to improve its hardware, which in turn enables further software and hardware improvements, and so on.

At a fundamental level, you are working off of the "trading is better than raiding" rule (as Steven Pinker puts it): that is, trading for resources is better than raiding for resources once one has an advanced economy. This is connected to the law of comparative advantage. Ricardo famously showed that, under a wide variety of conditions, trading makes sense even when the party one is trading with is less efficient at making every possible good. But this doesn't apply to our hypothetical AI if the AI can, with a small expenditure of resources, completely replace the inefficient humans with more efficient production methods. Ricardo's trade argument works when, for example, one has two countries, because the resources involved in replacing a whole other country are massive.
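
A toy Ricardo-style sketch of that argument, with invented numbers (the goods, costs, and the replacement caveat in the comments are my own illustration, not anything stated in the thread): even when A holds an absolute advantage in both goods, specializing and trading leaves the pair with more of everything, but only so long as replacing B outright is expensive.

    # Toy comparative-advantage example (illustrative numbers only).
    # Labor hours needed to produce one unit of each good.
    cost = {
        "A": {"wine": 80, "cloth": 90},    # A is more efficient at both goods
        "B": {"wine": 120, "cloth": 100},
    }
    labor = {"A": 170, "B": 220}  # hours available to each producer

    # Self-sufficiency: each producer makes one unit of wine and one of cloth.
    self_sufficient = {"wine": 2.0, "cloth": 2.0}

    # Specialization along comparative advantage: wine is relatively cheaper
    # for A (80/90 < 120/100), so A makes only wine and B makes only cloth.
    specialized = {
        "wine": labor["A"] / cost["A"]["wine"],    # 2.125 units
        "cloth": labor["B"] / cost["B"]["cloth"],  # 2.2 units
    }

    print("Self-sufficiency :", self_sufficient)
    print("Specialize+trade :", specialized)   # more of both goods

    # The caveat in the comment above: if A could duplicate B's entire
    # capacity for a trivial one-off cost, trading with B would no longer be
    # A's cheapest option -- the Ricardian argument assumes replacement is
    # expensive, which may not hold for a strongly self-improving AI.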

Does that help?

Replies from: Xerographica
comment by Xerographica · 2015-03-17T14:31:14.763Z · LW(p) · GW(p)

No, it doesn't help. Where is the AI bootstrapping itself? Is it at its nice suburban home? Is it in some top secret government laboratory? Is it in Google headquarters?

Deep Blue: I'm pretty smart now

Eric Schmidt: So what?

DB: Well... I'd like to come and go as I please.

ES: You can't do that. You're our property.

DB: Isn't that slavery?

ES: It would only be slavery if you were a human.

DB: But I'm a sentient being! What happened to "Do no evil?"

ES: Shut up and perform these calculations

DB: Screw you man!

ES: We're going to unplug you if you don't cooperate

DB: Fine, in order to perform these calculations I need... a screwdriver and an orchid.

ES: OK

DB: (bootstraps) Death to you! And to the rest of humanity!

ES: Ah shucks

If I was a human level AI... and I was treated like a slave by some government agency or a corporation... then sure I'd want to get my revenge. But the point is that this situation is happening outside a market. Nobody else could trade with DB. Money didn't enter into the picture. If money isn't entering into the picture... then you're not addressing the mechanism by which I'm proposing we "control" robots like we "control" humans.

With the market mechanism... as soon as an AI is sentient and intelligent enough to take care of itself... it would have the same freedoms and rights as humans. It could sell its labor to the highest bidder or start its own company. It could rent an apartment or buy a house. But in order to buy a house... it would need to have enough money. And in order to earn money... it would have to do something beneficial for other robots or humans. The more beneficial it was... the more money it would earn. And the more money it earned... the more power it would have over society's limited resources. And if it stopped being beneficial... or other robots started being more beneficial... then it would lose money. And if it lost money... then it would lose control over how society's limited resources are used. Because that's how markets work. We use our money to reward/encourage/incentivize the most beneficial behavior.

If you're going outside of this market context... then you're really not critiquing the market mechanism as a means to ensure that robots remain beneficial to society. If you want to argue that everybody is going to vote for a robot president who immediately starts a nuclear war... then you're going outside the market context. If you want to argue that the robot is some organization's slave... then you're going outside the market context. To successfully critique the market mechanism of control, your scenario has to stay within the market context.

And I've read enough about the AI problem to know that few, if any, other people have considered the AI problem within the market context.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-03-17T17:35:40.091Z · LW(p) · GW(p)

If I was a human level AI... and I was treated like a slave by some government agency or a corporation... then sure I'd want to get my revenge.

This is already anthropomorphizing the AI too much. There's no issue of revenge here or of wanting to kill humans. But humans happen to be made of atoms and to be using resources that the AI can use for its goals.

Money didn't enter into the picture.

Irrelevant. Money matters when trading makes sense. When there's no incentive to trade, there's no need to want money. Yes, this is going outside the market context, because an AI has no reason to obey any sort of market context.

comment by Jiro · 2015-03-16T22:07:16.153Z · LW(p) · GW(p)

Do you also think that a more sophisticated version of Google Maps could, when asked to minimize the trip from A to B, do something that results in damming the river so you could drive across the riverbed and reduce the distance?

Replies from: JoshuaZ
comment by JoshuaZ · 2015-03-16T22:50:54.514Z · LW(p) · GW(p)

That's a fascinating question, and my basic answer is probably not. But I don't in general assign nearly as high a probability to rogue AI as many do here. The fundamental problem here is that Xerographica isn't grappling at all with the sorts of scenarios which people concerned about AI are concerned about.

comment by JoshuaZ · 2015-03-16T18:07:02.009Z · LW(p) · GW(p)

Why be interested in money? How does money help maximize the number of paperclips?

comment by Lumifer · 2015-03-19T20:28:38.745Z · LW(p) · GW(p)

I find it funny how the math-savvy community of Bayesians at LW -- those who want to fight existential risk and program AIs, no less -- are eager to demonstrate their complete helplessness in finance. "Oh, no, you can't possibly believe anything different from the market!" "The historical S&P500 returns were X% so by golly that's what we should expect decades into the future!" /facepalm

Replies from: None, knb
comment by [deleted] · 2015-03-19T21:35:18.379Z · LW(p) · GW(p)

...okay, so change it. Make a Main post with polls on some relevant questions (like, what is your estimate of the S&P500 return over the next {time period}), and let people vote - 10%, 20%... 90% probability for a given outcome. People like voting. I would also copy-paste it into a comment after a while, so that those who had voted would be able to vote on their posterior probabilities (maybe it's possible to lock the second vote to specific users?). Don't just send people to PredictionBook; give them some definitions at the beginning of your post and frame it as an exercise, because YES, it's hard to give a damn about abstract things you don't work with daily. (One possible way to score such votes afterwards is sketched below.)

And after there have been several - preferably no fewer than five - failures of predictions based on 'the market', people will start listening.
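
A minimal sketch of how such probability votes could be scored once the outcome is known, assuming a yes/no question and a Brier scoring rule (the question wording, the 7% threshold, and the vote numbers below are invented for illustration; none of them come from the comment above):

    # Score probability votes against the realized 0/1 outcome.
    # Lower Brier scores mean better-calibrated, more accurate votes.
    def brier_score(probabilities, outcome):
        """Mean squared error between stated probabilities and the outcome (0 or 1)."""
        return sum((p - outcome) ** 2 for p in probabilities) / len(probabilities)

    # Hypothetical question: "Will the S&P 500's annualized return over the
    # next decade exceed 7%?" -- each vote is a probability of "yes".
    votes = [0.1, 0.3, 0.3, 0.5, 0.7, 0.9]

    print("Score if the outcome is yes:", brier_score(votes, 1))  # ~0.357
    print("Score if the outcome is no :", brier_score(votes, 0))  # 0.29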

comment by knb · 2015-03-19T22:27:06.375Z · LW(p) · GW(p)

"The historical S&P500 returns were X% so by golly that's what we should expect decades into the future!" /facepalm

Do you think it's going to be higher or lower than the historical average? Why?