Open thread, October 2011
post by MarkusRamikin · 2011-10-02T09:05:25.900Z · LW · GW · Legacy · 314 comments
This thread is for discussing anything that doesn't seem to deserve its own post.
If the resulting discussion becomes impractical to continue here, it means the topic is a promising candidate for its own thread.
comment by Kaj_Sotala · 2011-10-03T13:16:24.424Z · LW(p) · GW(p)
Some time ago, I had a simple insight that seems crucial and really important, and has been on my mind a lot. Yet at the same time, I'm unable to really share it, because on the surface it seems so obvious as to not be worth stating, and very few people would probably get much out of me just stating it. I presume that this is an instance of the Burrito Phenomenon:
While working on an article for the Monad.Reader, I’ve had the opportunity to think about how people learn and gain intuition for abstraction, and the implications for pedagogy. The heart of the matter is that people begin with the concrete, and move to the abstract. Humans are very good at pattern recognition, so this is a natural progression. By examining concrete objects in detail, one begins to notice similarities and patterns, until one comes to understand on a more abstract, intuitive level. This is why it’s such good pedagogical practice to demonstrate examples of concepts you are trying to teach. It’s particularly important to note that this process doesn’t change even when one is presented with the abstraction up front! For example, when presented with a mathematical definition for the first time, most people (me included) don’t “get it” immediately: it is only after examining some specific instances of the definition, and working through the implications of the definition in detail, that one begins to appreciate the definition and gain an understanding of what it “really says.”
Unfortunately, there is a whole cottage industry of monad tutorials that get this wrong. To see what I mean, imagine the following scenario: Joe Haskeller is trying to learn about monads. After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.
I'm curious: do others commonly get this feeling of having finally internalized something really crucial, which you at the same time know you can't communicate without spending so much time as to make it not worth the effort? I seem to get one such feeling maybe once or a couple of times a year.
To clarify, I don't mean simply the feeling of having an intuition which you can't explain because of overwhelming inferential distance. That happens all the time. I mean the feeling of something clicking, and then occupying your thoughts a large part of the time, which you can't explain because you can't state it without it seeming entirely obvious.
(And for those curious - what clicked for me this time around was basically the point Eliezer was making in No Universally Compelling Arguments and Created Already in Motion, but as applied to humans, not hypothetical AIs. In other words, if a person's brain is not evaluating beliefs on the basis of their truth-value, then it doesn't matter how good or right or reasonable your argument is - or for that matter, any piece of information that they might receive. And brains can never evaluate a claim on the basis of the claim's truth value, for a claim's truth value is not a simple attribute that could just be extracted directly. This doesn't just mean that people might (consciously or subconsciously) engage in motivated cognition - that, I already knew. It also means that we ourselves can never know for certain whether hearing the argument that should convince us if we were perfect reasoners will in fact convince us, or whether we'll just dismiss it as flawed for basically no good reason. )
Replies from: hamnox↑ comment by hamnox · 2011-10-03T18:19:43.226Z · LW(p) · GW(p)
Yes, I think I know what you mean. I hit that roadblock just about every time I try to explain math concepts to my little brother. It's not so much that he doesn't have enough background knowledge to get what I'm saying, as that I already have a very specific understanding of math built up in my head in which half of algebra is too self-evident to break down any further.
comment by EphemeralNight · 2011-10-04T13:15:16.125Z · LW(p) · GW(p)
For the past year or two I've felt like there are literally no avenues open to me towards social, romantic, or professional advancement, up from my current position of zero. On reflection, it seems highly unlikely that this is actually true, so it follows that I'm rather egregiously missing something. Are there any rationalist techniques designed to make one better at noticing opportunities (ones that come along and ones that have always been there) in general?
Replies from: rwallace, pedanterrific, NancyLebovitz, EphemeralNight, thomblake, Unnamed, Swimmer963↑ comment by rwallace · 2011-10-05T09:11:47.525Z · LW(p) · GW(p)
I was about to explain why nobody has an answer to the question you asked, when it turned out you already figured it out :) As for what you should actually do, here's my suggestion:
1. Explain your actual situation and ask for advice.
2. For each piece of advice given, notice that you have immediately come up with at least one reason why you can't follow it.
3. Your natural reaction will be to post those reasons, thereby getting into an argument with the advice givers. You will win this argument, thereby establishing that there is indeed nothing you can do.
4. This is the important bit: don't do step 3! Instead, work on defeating or bypassing those reasons. If you can't do this by yourself, go ahead and post the reasons, but always in a frame of "I know this reason can be defeated or bypassed, help me figure out how," one that aligns you with the advice givers instead of against them.
5. You are allowed to reject some of the given advice, as long as you don't reject all of it.
↑ comment by EphemeralNight · 2011-10-05T13:14:18.516Z · LW(p) · GW(p)
That's actually exactly what I usually try to do. Unfortunately, most advice-givers in my experience tend to mistake #4 for #3. I point out that they've made an incorrect assumption when formulating their advice, and I immediately get yelled at for making excuses. I do actually have a tendency to seek excuses for non-action, but I've been aware of that tendency in myself for a long time and counter it as vigorously as I am able to.
I suppose it couldn't hurt to explain my actual situation, though. Gooey details incoming.
I live in the southwestern suburbs of Fairfield, California, on a fixed income that's just enough to pay the bills and buy food, with a little left over. (Look the town over in Google Maps to get a sense of what kind of place it is.)
Most critically, I suffer from Non-24, which, in the past, was responsible for deteriorating health and suicidal depression during high school, for forcing me to drop even the just-for-fun classes I was taking at the community college, and for causing me to completely lose touch with my high school acquaintances before I figured out what I had and that there was a pattern to it, rather than just random bouts of hypersomnia and insomnia. It rules out doing anything that involves regularly scheduled activities; I even had to quit my World of Warcraft guild because of it.
Before I lost touch with my high school acquaintances, I did get to experience some normal social gatherings, though to me there was never anything particularly fun about being pelted with straw-wrappers at Denny's or dancing to Nirvana under a strobe-light or watching them play BeerPong. None of those people were ever my friends or even much of a support structure, and I don't actually miss any of them. I've been on several dates through OkCupid and my brief time in college, but they were all failures of emotional connection and in each case I was relieved when the girl told me she didn't want to go out with me anymore. I mention this to show that I'm not just assuming certain generic solutions won't work for me; I've confirmed it by experiment.
So, I'm living without much disposable income, with a sleep disorder that precludes regularly scheduled activities of any kind, in a highway-tumor town, with no friends or contacts of any kind. Oh, and I have a mild photosensitivity condition which means I'm slaved to my sunglasses during the day and even with them can't do anything that involves exposure to direct sunlight for more than a few minutes at a time, just for the sake of thoroughness.
That's the summary of the situation.
My career goals aren't actually precluded by any of this, though becoming a successful graphic artist, or writer, or independent filmmaker or webcomic author or whatever I end up succeeding at, is made more difficult. I only included the professional category because my social goals mostly pertain to my career goals: I'd like to have a useful social network. It'd be nice to have friends just for the sake of having friends, but that's of low value to me. My only high value purely-social goal is meeting and befriending a woman with whom I can have a meaningful and lasting intimate relationship, which dissolves away the romantic category as well.
Replies from: AdeleneDawner, Viliam_Bur, Kingreaper, Swimmer963, rwallace, None↑ comment by AdeleneDawner · 2011-10-05T19:42:42.972Z · LW(p) · GW(p)
Unfortunately, most advice-givers in my experience tend to mistake #4 for #3. I point out that they've made an incorrect assumption when formulating their advice, and I immediately get yelled at for making excuses.
If this conversation is representative, 'making excuses' might not be entirely accurate, though I can see why people would pattern-match to that as the nearest cached thing of relevance. But to be more accurate, it's more like you're asking "what car is best for driving to the moon" and then rejecting any replies that talk about rockets, since that's not an answer to the actual question you asked. It could even be that the advice about building rockets is entirely useless to you, if you're in a situation where you can't go on a rocket for whatever reason, and they need to introduce you to the idea of space elevators or something, but staying focused on cars isn't going to get you what you want and people are likely to get frustrated with that pretty quickly.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-05T23:46:24.476Z · LW(p) · GW(p)
it's more like you're asking "what car is best for driving to the moon" and then rejecting any replies that talk about rockets, since that's not an answer to the actual question you asked. It could even be that the advice about building rockets is entirely useless to you, if you're in a situation where you can't go on a rocket for whatever reason, and they need to introduce you to the idea of space elevators or something,
Wow... that may just be the most apt analogy I've ever heard anyone make about this. I'm having a "whoa" moment here.
'kay. So, my first thought is, how does my actual goal fit into the analogy? If my terminal goal fits as finding the right car then the problem lies in everyone hearing a different question than the one asked. If, on the other hand, my terminal goal fits into the analogy as getting to the moon then the problem is a gap of understanding that causes me to persist with the wrong question. Which sounds like exactly the sort of flaw-in-thinking that I was talking about in the first place!
I am vaguely disturbed that I don't actually know which part of the analogy my terminal goal fits into. It seems like it's something I should know. I would guess it is the latter, though, due to there actually being a cognitive flaw that remains elusive.
Replies from: AdeleneDawner, Kingreaper↑ comment by AdeleneDawner · 2011-10-06T01:56:36.260Z · LW(p) · GW(p)
It could be that you want both. Human values do tend to be complex, after all. (Also, I'd map 'wanting the best possible mind' to 'wanting the best car', and 'wanting to get your life moving in a good direction' as 'wanting to go to the moon', if that was a source of confusion.)
↑ comment by Kingreaper · 2011-10-06T12:15:23.077Z · LW(p) · GW(p)
Getting to the moon (i.e. getting your life moving) is quite clearly one of your terminal goals.
Whether or not you've enshrined the car (i.e. a general solution) as a newer terminal goal, I can't tell you.
A hint, however: the car may not take the form you expect. It may be a taxi, or a bus, where you don't own it but rather ride in it. (I.e. the best general solution for you might actually be "go on the internet and look for a specific solution".)
↑ comment by Viliam_Bur · 2011-10-06T19:46:16.641Z · LW(p) · GW(p)
How does your Non-24 function? Is it completely unpredictable, or would you be able to maintain a regular N-hour cycle for some value of N? Best would be an N like 33.6, 28, 21, or 18.7 hours, because 168 divided by those comes out at (roughly) a whole number of cycles per week, so you could maintain a weekly cycle, which would allow you a part-time job. But any predictable schedule allows you to plan things.
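A minimal sketch of that arithmetic (the candidate cycle lengths and the 0.05-cycle tolerance are just illustrative assumptions, not anything specific to Non-24):

```python
# Which cycle lengths N (in hours) fit a 168-hour week almost evenly?
# Candidate values are the ones suggested above; 24 is included for comparison.
WEEK_HOURS = 168

for n in [33.6, 28, 24, 21, 18.7]:
    cycles_per_week = WEEK_HOURS / n
    fits_week = abs(cycles_per_week - round(cycles_per_week)) < 0.05
    note = "weekly schedule works" if fits_week else "drifts across the week"
    print(f"N = {n:>5} h -> {cycles_per_week:.2f} cycles/week ({note})")
```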
If your sunglasses are not very helpful during the day, could you try some darker sunglasses? You could have a set of sunglasses with different levels of darkness, for inside and outside.
As a general strategy, I would suggest this: if you cannot find one perfect solution, focus on small improvements you can make.
It is generally good to have a "big picture", so your actions are coordinated towards a goal. But even if you don't have it, don't stop. If you do nothing, you receive no information, and that does not make your future planning any better. Even doing random (non-dangerous) things is good, because you gain information.
For example, I don't think that buying darker sunglasses is going to fix all your problems. But still, if darker sunglasses would be an improvement, you should get them. It is better than waiting until you find a perfect strategy for everything.
↑ comment by Kingreaper · 2011-10-06T12:11:34.338Z · LW(p) · GW(p)
I'd say that your statement:
It rules out doing anything that involves regularly scheduled activities
Is inaccurate. It rules out regularly scheduled activities where you have to attend every single one.
The majority of meetups are perfectly happy with someone who attends 1/2 or 1/3 of the meetings, which non-24 shouldn't prevent.
Meetups also have a more structured feel than the social gatherings you mention, and tend to be more useful for networking.
A deeper problem is your location. I'm assuming given your sunlight issue that you can't really drive very far on sunny days?
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-06T12:43:19.858Z · LW(p) · GW(p)
What is this thing called "Meetup" that everyone keeps talking about? Does it have some meaning beyond the obvious that I'm unaware of? Because the way it's used around here makes it seem like it refers to something more specific than the literal definition.
I'm assuming given your sunlight issue that you can't really drive very far on sunny days?
I have a very good pair of sunglasses, which combined with a modern car windshield are enough that I can drive without being too limited by that (though I still prefer to make long trips at night when I can), plus cars have roofs, which means there are a lot of relative positions the sun can be in that don't put the driver in direct sunlight. The bigger limitation is paying for gas. Occasional long trips are no problem; ~weekly long trips would break the bank. (Long > 25 miles.)
Replies from: None, Nisan, Swimmer963, Kingreaper, AdeleneDawner↑ comment by [deleted] · 2011-10-06T13:25:56.462Z · LW(p) · GW(p)
There is also a website, meetup.com, that is used to organize many such events in a variety of areas. It's difficult to say how well any particular one will yield people you click with since the site merely facilitates someone creating a specific group with a specific place-and-time scheduled meet, but it's a good way to keep track of what's going on in your area that might be relevant to your interests.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-06T16:36:27.474Z · LW(p) · GW(p)
Ah.
I was completely unaware of meetup.com existing. (Took one look and am already registered) It seems like kind of a Big Thing; I am somewhat baffled that I'd never heard of it before.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-10-06T18:16:59.879Z · LW(p) · GW(p)
Um.
Useful answers will probably be along the lines of either 'try meetup.com/okcupid/your local LW meetup/etc', or 'here's how you find out about things like meetup.com/okcupid/LW meetups/etc'.
↑ comment by Nisan · 2011-10-16T02:02:20.795Z · LW(p) · GW(p)
This isn't a piece of advice so much as a friendly invitation: The Berkeley Less Wrong meetup meets Wednesdays at 7pm and also monthly on Saturday evenings. It looks like it would take you 90 minutes and two train/bus tickets to get there, and the same going back. You're welcome to join us.
(Mailing list.)
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-06T13:43:53.626Z · LW(p) · GW(p)
Does your town have Greyhound bus service? This could be a cheaper alternative, possibly, if you find bus trips bearable. Also you can sleep on the bus, which would help if the time you needed to make the trip correlated with a 'sleeping' phase of your schedule.
↑ comment by Kingreaper · 2011-10-06T14:52:45.259Z · LW(p) · GW(p)
By a "meetup" I mean a regular, or semi-regular, event whereby a group of people with common interests meet in order to discuss things, including [but not limited to] the common interest.
These meetups come in many forms; some occur in pubs, some in meeting halls, some in coffee shops. Some feature speeches, which tend to be on the issue of the common interest, but most do not.
By attending a meetup two events running, or three events out of six, you'll tend to get to know many of the regulars, and become part of their social network.
One type of meetup that would obviously be relevant to your interests is a lesswrong one, but meetups of skeptic societies, societies associated with your particular sexual kinks/relationship preferences (poly meets, munches, rope meets, furmeets etc.), humanist meetups, etc. would all likely be useful to you.
↑ comment by AdeleneDawner · 2011-10-06T13:19:12.103Z · LW(p) · GW(p)
What is this thing called "Meetup" that everyone keeps talking about? Does it have some meaning beyond the obvious that I'm unaware of?
It's mostly what it sounds like, once you take into account that it's short for "LessWrong meetup" ('meetup for LessWrong users'). The possibly non-obvious bit is that meetups are often recurring things with people who consistently come to most instances of them in a particular area, so they're more about ongoing socialization/skillbuilding/etc than literally meeting people.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-05T13:49:33.904Z · LW(p) · GW(p)
Most critically, I suffer from Non-24
Have you seen doctors about this or tried any treatments? I did a quick Wikipedia search and the 'Treatment' section suggested light therapy or melatonin therapy. It said they don't always work well and may be completely ineffective for some people, and it sounds like a lot of work for not much gain, but if you haven't tested it out, it might be worth a try.
for forcing me to drop even the just-for-fun classes I was taking at the community college.
Are online classes perhaps a better option? I don't know how flexible they are in terms of what time of day you can view the lectures and stuff, and I don't know whether you've already tried that.
Actually, there may be online work opportunities as well. I've never investigated this personally, but it might be worth hunting around or asking some other LWers.
RE: writing, that's something that fits pretty well into an irregular schedule. You can do it at home at whatever time of day. What sort of material are you interested in writing? I've been working on writing fiction for a number of years now, and I would happily do an email exchange and read/edit your work. I can't offer to do the same thing for graphic art, but I wouldn't be surprised if there are other people on LW who can.
though to me there was never anything particularly fun about being pelted with straw-wrappers at Denny's or dancing to Nirvana under a strobe-light or watching them play BeerPong.
I can understand that. Those things are pretty boring. Feeling emotionally connected to the people you're doing them with is what makes it worthwhile, and if you don't, you just don't.
As for your original comment about having some cognitive flaw, it might boil down to the fact that you just aren't interested in the same life experience as, say, your high school acquaintances were. Having a group of acquaintances and doing regular social activities with them is a conventional solution for a lot of people, but if it doesn't work for you, it just doesn't work. And when your reward structure isn't the same as everyone else's, there will be fewer "opportunities to be rewarded" that automatically present themselves.
What will work for you is another question. Finding a job that would self-select for coworkers who had similar interests to yours could help. Also, learning how to steer a conversation from something banal towards something interesting to you is a skill that can help deepen your social connections. (Although the first step is to have enough practice with conversations that you know how to make yourself interesting to the other person. This took me a long time and a lot of conscious effort to acquire.)
Also, depression is its own form of cognitive bias that might make you more likely to see opportunities negatively or as a "waste of time", when otherwise you might think "why not?" If you were depressed for several years, these kinds of thoughts, or more subtle versions of them, might have become habits.
My only high value purely-social goal is meeting and befriending a woman with whom I can have a meaningful and lasting intimate relationship, which dissolves away the romantic category as well.
I wish you the best of luck with this. It does make a huge difference once you can find that person.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-05T15:18:29.364Z · LW(p) · GW(p)
Have you seen doctors about this or tried any treatments?
I've made some inquiries. According to all the information I've seen, success of treatment seems to correlate with undersensitivity to light or outright blindness. Since I'm oversensitive to light, that places me on the extreme end of Untreatable.
Are online classes perhaps a better option?
Not really; I was taking those classes for social reasons, not educational reasons.
Also, learning how to steer a conversation from something banal towards something interesting to you is a skill that can help deepen your social connections.
I'm actually reasonably good at this, but it has usually just accelerated the exposure of lack of common ground with whoever I was talking to.
I think meeting the right people is a much bigger problem for me than interacting successfully with those people.
Replies from: jsalvatier↑ comment by jsalvatier · 2011-10-05T17:27:54.056Z · LW(p) · GW(p)
I've made some inquiries.
If you haven't given several potential treatments serious attempts, I think you should. Improving this issue seems like it would be worth a lot to you, so even smallish probabilities of success are worth investigating.
↑ comment by rwallace · 2011-10-05T17:18:38.378Z · LW(p) · GW(p)
Okay, as Swimmer observes, writing can easily be done from home on a random sleep schedule; so can graphics work, so can creating web comics. There's plenty of relevant educational material for all of these that doesn't require attending scheduled classes. And if you don't bond well with random people, probably the best way to improve your social life is to look for people with whom you have shared interests, which means you might be better off getting the career stuff up and running first; once you do that, it will probably lead to encounters with people with whom you have something in common.
↑ comment by [deleted] · 2011-10-06T13:40:21.328Z · LW(p) · GW(p)
See, that actually suggests that a big part of your problem is related to your situation. The most obvious long-term fix is meeting your career goals, if you think they're otherwise in reach -- but presumably (given you seem to have a solid sense of yourself and appear to be working on things already) you're doing this at a manageable rate. It might be worthwhile to see if you can speed that along, however -- more money is probably going to make the single biggest difference, as it'll give you more freedom of location and disposable income with which to pursue things.
That's kinda stating the obvious, but it also sounds from your explanation that there aren't any obvious "magic bullets" you've been missing. I don't say that to sound discouraging, just, it seems like there's not a lot of low-hanging fruit for improving your situation.
↑ comment by pedanterrific · 2011-10-05T00:22:09.808Z · LW(p) · GW(p)
Okay, I've read through the other responses and I think I understand what you're asking for, but correct me if I'm wrong.
A technique I've found useful for noticing opportunities once I've decided on a goal is thinking and asking about the strategies that other people who have succeeded at the goal used, and seeing if any of them are possible from my situation. This obviously doesn't work so well for goals sufficiently narrow or unique that no one has done them before, but that doesn't seem to be what you're talking about.
Social advancement: how do people who have a lot of friends and are highly respected make friends and instill respect? Romantic advancement: How did the people in stable, committed relationships (or who get all the one-night stands they want, whichever) meet each other and become close? Professional advancement: How did my boss (or mentor) get their position?
Edit: Essentially I'm saying the first step to noticing more opportunities is becoming more familiar with what an opportunity looks like.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-05T11:17:52.577Z · LW(p) · GW(p)
This is useful, actually. I think I've been kind of doing that indirectly, but not with a direct conscious effort. It doesn't do me much good right now, since I'm still completely isolated and don't know of anyone who got out of a situation like mine, but I think it could still be helpful.
Replies from: Kingreaper↑ comment by Kingreaper · 2011-10-05T14:16:08.094Z · LW(p) · GW(p)
Then first, change your situation to NOT completely isolated.
If you're in a town or city, that's easy: just go to a meetup of a society of some sort that sounds vaguely interesting. If you can't find such a society, wander from pub to coffee shop to restaurant, looking for any relevant posters.
Or just go online and look up a meetup website.
Looking for a general solution is all well and good, but you have a very specific problem. And so, rather than spending years working on a general solution while in the wrong environment, perhaps you'd be better off using the specific solution, and working on a general one later?
↑ comment by NancyLebovitz · 2011-10-05T08:07:14.816Z · LW(p) · GW(p)
You might be interested in The Luck Factor -- it's based on research about lucky and unlucky people, and the author says that lucky people are high on extroversion, have a relaxed attitude toward life (so that they're willing to take advantage of opportunities as they appear; in other words, they don't try to force particular outcomes, and they haven't given up on paying attention to what might be available), and are open to new experiences.
The book claims that all these qualities can be cultivated.
↑ comment by EphemeralNight · 2011-10-04T16:49:32.089Z · LW(p) · GW(p)
Alright, since no one seems to be understanding my question here, I'll try to reframe it.
(First, to be clear, I'm not having a problem with motivation. I'm not having a problem with indecision. I'm not having a problem with identifying my terminal goal(s).)
To use an analogy, imagine you're playing a video game, and at some point you come to a room where the door shuts behind you and there's no other way out. There's nothing in the room you can interact with, nothing in your inventory that does anything; you pore over every detail of the room, and find there is no way to progress further; the game has glitched, you are stuck. There is literally no way beyond that room and no way out of it except resetting to an earlier save point.
That is how my life feels from the inside: no available paths. (In the glitched video game, it is plausible that there really is no action that will lead to progression beyond the current situation. In real life, not so much.)
Given that it is highly unlikely that this is an accurate Map of the Territory that is the real world, clearly there is a flaw in how I generate my Map in regards to potential paths of advancement in the Territory. It is that cognitive flaw that I wish to correct.
I am asking only for a way to identify and correct that flaw.
Replies from: None, Swimmer963, Clarica↑ comment by [deleted] · 2011-10-04T17:20:14.473Z · LW(p) · GW(p)
I rather doubt there is a fully-generalizable theory of the sort you seem to be looking for. Some territories are better left than explored in more detail, if leaving is within your power; others can be meaningfully understood and manipulated.
If you are in a dead-end job in a small town where you are socially isolated and clash culturally with the locals, and do not have professional credentials sufficient to make a lateral move plausible (i.e., working retail as opposed to something that requires a degree), the advice will necessarily be different than if you are in a major city and have a career of some kind.
What works in New York may not work so well in Lake Wobegon.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-04T18:12:08.257Z · LW(p) · GW(p)
Some territories are better left than explored in more detail
Forgive me if I'm mistaken, but are you saying that some blank places on our Maps ought to be deliberately kept blank? That seems, well, insane.
In any case, at no point did I ask for advice about my specific situation. I want the algorithm being used to generate that advice, not the advice itself.
Replies from: None, dlthomas↑ comment by [deleted] · 2011-10-04T18:19:53.877Z · LW(p) · GW(p)
Forgive me if I'm mistaken, but are you saying that some blank places on our Maps ought to be deliberately kept blank? That seems, well, insane.
No, I'm saying that not all situations present the same amount of opportunities, and your situation makes a difference whether or not you think it does.
I do not think there is a fully-general piece of advice for you, but you clearly believe there is. I believe this is a mistake on your part, and have said so several times now. Since you are not apparently interested in hearing that, I will not bother to repeat myself further.
Good luck finding what you're after though.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-04T18:37:58.364Z · LW(p) · GW(p)
I'm saying that not all situations present the same amount of opportunities, and your situation makes a difference whether or not you think it does.
Okay, and that's not something I dispute. If I did somehow manage to correct my cognitive flaw, one of the possibilities is that I'd discover that I really don't have any options. But I can't know that until the flaw is solved.
I do not think there is a fully-general piece of advice for you, but you clearly believe there is.
Of course I believe there is a fully general algorithm for identifying avenues of advancement towards a terminal goal. But phrasing it like that just made me realize that anyone who actually knew it would have already built an AGI.
Well, crap.
Replies from: vi21maobk9vp, None↑ comment by vi21maobk9vp · 2011-10-04T21:27:50.421Z · LW(p) · GW(p)
Well, having it described in terms suitable for human improvement, and relying on existing human cognitive abilities, would lower it to merely universally applicable intelligence amplification.
So you did not ask for something AGI-equivalent.
↑ comment by dlthomas · 2011-10-04T18:17:13.109Z · LW(p) · GW(p)
I don't think many here would propose that portions of the map be kept blank for the sake of keeping them blank.
It is easy to see that, with limited resources, it may be preferable to leave some regions blank when you've determined that there are likely to be bigger gains to be had elsewhere for cheaper.
Replies from: None↑ comment by [deleted] · 2011-10-04T18:24:09.699Z · LW(p) · GW(p)
This is in fact what I meant -- one's map is necessarily local to one's territory, and sometimes the gains of going over it in more detail are minimal.
To go with the analogy of maps: if what you want is advice on how to plant a garden, it matters whether or not you're in the middle of the Sahara Desert.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T17:12:00.170Z · LW(p) · GW(p)
I think I understand the feeling you're having now. Still, it seems highly unlikely to me that you can fix this "cognitive flaw" in isolation, before you've found a few concrete avenues of advancement... I find that my habits, including habits of thought, are trainable rather than fixable in the abstract.
Are you in school? If so, would you like to study something different? If not, is there something you do want to study? Are you working or is there somewhere you want to work? These are conventional paths to life-advancement.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-04T18:04:15.135Z · LW(p) · GW(p)
None of that information would constrain the space of possibilities in which the cognitive flaw exists, no matter what my answers happened to be. That's all a level above the actual problem, and irrelevant.
It seems highly unlikely to me that you can fix this "cognitive flaw" in isolation, before you've found a few concrete avenues of advancement.
Well, that seems rather boot-strap-ish, since finding concrete avenues of advancement is exactly what the cognitive flaw is preventing me from doing.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T18:12:41.121Z · LW(p) · GW(p)
Okay, I'm sorry none of my answers were helpful to you. I don't know what to suggest.
↑ comment by Clarica · 2011-10-04T17:30:35.319Z · LW(p) · GW(p)
If the flaw lies in your choices, choose differently. If the flaw lies in your habits, practice better habits. If the flaw lies in your cognitive habits, you must do something higher up on this list in order to be able to develop different cognitive habits.
Your existing habits and choices (and arguably genetics and environment) may not be what created the situation which is becoming intolerable, but they are the easiest thing to work on.
You do not have to worry about making the right change or practicing the right habit; start with whatever seems easiest. And try not to go against your 'better' judgement.
↑ comment by Unnamed · 2011-10-04T15:42:58.547Z · LW(p) · GW(p)
It may be that you need to get your brain to treat it as an active goal, and once that happens your brain will automatically generate ideas and notice opportunities. You could think about the goal and identify it as something that you want to accomplish, perhaps visualizing the outcome that you would like to achieve to set off the processes that PJ Eby describes in his irresistible instant motivation video.
You could also put effort into trying to come up with ways to pursue the goal, focusing on generating ideas and not worrying too much about their quality or feasibility. You could brainstorm ideas for how to pursue the goal, plot out possible paths from where you are to the outcome you want, look at other people who are in a position similar to you to see what they're doing to pursue the goal, look at people who are doing what you aspire to do to see what you could change, or talk with other people to get suggestions or bounce ideas around. These could work directly, or they could help indirectly by keeping the goal active in your mind so that your brain will notice things that are relevant to the goal.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T13:35:21.522Z · LW(p) · GW(p)
Probably. The technique I've had the most success with is "just go out and DO it!" Whether or not it's a job/friend group/ relationship that seems viable or desirable in the long term, you probably benefit more from trying it than not trying it.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-04T13:52:53.008Z · LW(p) · GW(p)
Yeah, that's not really what I was talking about. My problem is with being unable to see that there's anything I should just go out and do, not with actually going out and doing it. I don't have any trouble following a path to my goal once that path has been identified; it's identifying possible path(s) to my goal(s) in the first place that I seem to have a deficiency in. What was unclear about my question that prompted you to answer a different question than the one I asked?
Replies from: Kingreaper, Swimmer963↑ comment by Kingreaper · 2011-10-05T13:55:09.675Z · LW(p) · GW(p)
"Just go out and DO it!" is then the wrong advice.
However "Just go out and DO!" remains good advice.
Next time you see a poster for a meetup; just go to it. Even if it doesn't sound like it'll help, just go to it.
Next time you see a request for volunteers, which you can afford the time to fulfil, just volunteer. Even if it's not something you care much about.
While you're out doing those things you'll come across people, and random events, etc. that may give you new paths to your goals.
Don't worry about achieving your goals, just do things. To use your video-game analogy: you've been looking around for things that look like they'll be useful for you. But you haven't been pressing random buttons, you haven't clicked "use" on the poster in the corner: because why would that help? But of course, sometimes there's a safe behind the poster. Or sometimes, pressing shift and K simultaneously activates the item use menu, etc.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T13:59:20.649Z · LW(p) · GW(p)
You may have to explain some context here, because I'm not sure I understand what you mean by 'not seeing anything that you should go out and do.' Do you find your lack of employment/social/romantic opportunities distressing? If not, then there isn't a problem unless you want there to be a problem. If you do want to change this situation, then I can't point out the opportunities you have because I know nothing about your day-to-day. However, you're right that unless your situation is very unusual, it's unlikely that there are really no opportunities.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2011-10-04T14:04:25.892Z · LW(p) · GW(p)
I don't see how asking for rationalist techniques to make me better at noticing opportunities requires any context. Not that I'm unwilling to give context, I just think it would be irrelevant. I'm asking if there's anything I can do to get better at spotting opportunities. What was unclear about my question that prompted you to assume I was asking for specific opportunities to be identified for me?
Replies from: lessdazed, Swimmer963, Kingreaper↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T14:16:10.023Z · LW(p) · GW(p)
I suggested a general technique that worked well for me in my first comment. I think it's the only technique that has ever worked well for me. When you said I'd misunderstood your problem, and I reread your comment and decided I still didn't understand, I realized that our life-situations were probably different enough that any technique I suggested based on personal experience would inevitably not work for you. This may be a flaw in my thinking, but I have trouble thinking of any "general" rationalist techniques that would work to optimize a particular person's life in a particular context. My brain is now trying to produce more solutions, but I'm not really expecting them to be helpful to you.
Replies from: AdeleneDawner, EphemeralNight↑ comment by AdeleneDawner · 2011-10-04T14:32:55.176Z · LW(p) · GW(p)
If I'm understanding the original question properly, the issue is along the lines of the following situation: EphemeralNight finds emself sitting at home, thinking 'I wish there was something fun I could do tonight. But I don't know of anything. So how might I find something? I have no idea.' It's not that e's running into akrasia on the path to doing X, it's that e doesn't have an X in the first place and doesn't know how to find one.
Useful answers will probably be along the lines of either 'try meetup.com/okcupid/your local LW meetup/etc', or 'here's how you find out about things like meetup.com/okcupid/LW meetups/etc'.
Replies from: Swimmer963, EphemeralNight↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T14:42:28.465Z · LW(p) · GW(p)
That's what I thought, too, but the comment seemed to be asking for a general rather than a specific solution.
↑ comment by EphemeralNight · 2011-10-04T14:42:02.616Z · LW(p) · GW(p)
'here's how you find out about things like meetup.com/okcupid/LW meetups/etc'
This is still one step ahead of the problem I'm actually trying to solve (i.e., it's on the level of answers to "What am I supposed to just go out and do?") but advice on that level could be somewhat useful. However, what I was actually asking about in my original post are cognitive tools that will help me get better at answering that question myself.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T14:57:30.531Z · LW(p) · GW(p)
Yeah, the big thing with specific solutions is that while they may be helpful, they don't teach you a new way to think. (Also, what might be fun for one person could be boring or unpleasant for someone else. I don't know whether you enjoy debating, or sports, or watching plays, etc. But I'm assuming you know what would be fun for you.)
In terms of why no one is getting at the root of the problem... for me at least, I've never thought about it consciously. School happened to me, work happened to me, and the few times I decided to spontaneously start a new activity (i.e. taekwondo) I just googled "taekwondo in Ottawa", found a location, and showed up. If anything, my problem has always been noticing too many opportunities to do fun things and being upset that I couldn't do all of them. So there may well be something that you do differently than I do, but since 'noticing' fun things to do happens below the level of my conscious awareness, trying to figure out the cognitive strategies involved takes a lot of work.
↑ comment by EphemeralNight · 2011-10-04T14:30:11.025Z · LW(p) · GW(p)
You're still attempting to solve the wrong problem.
"Just go out an do it" doesn't even apply to the problem of finding the cognitive flaw in my ability to identify opportunities that is damaging my ability to figure out what I can "just go out and do". You're trying to solve a problem that is two whole steps ahead of the one my post was about. What was it about my original question that was unclear?
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T14:39:36.024Z · LW(p) · GW(p)
Maybe that it's so far removed from any state I've experienced that I'm not even sure what you mean. Hopefully there is someone else on this site who has been in a similar place before and can recognize it. But it does look to me like you're trying to solve a specific problem, not a general problem. I just interpreted the wrong specific problem when I read your comment.
Also, all the answers my brain produces when I ask it to imagine "cognitive biases that would result in not noticing opportunities" come out sounding judgmental, and as a general rule I don't write things down if they sound judgmental or negative.
↑ comment by Kingreaper · 2011-10-05T14:07:14.561Z · LW(p) · GW(p)
Here are two ways to find more opportunities. 1) is to get out and DO!, which exposes you to more opportunities.
2) is to get better at spotting them when they're around.
The only way I can think of to achieve 2, personally, is practise. How do you practise? Well, you do 1), and expose yourself to as many opportunities as possible, and see how many you notice in time, and when you notice one too late you think about how you could have noticed it quicker.
comment by lessdazed · 2011-10-14T01:40:03.506Z · LW(p) · GW(p)
Grocery stores should have a lane where they charge more, such as 5% more per item. It would be like a toll lane for people in a hurry.
Replies from: Jack, pedanterrific, Prismattic, taw, Vaniver↑ comment by Jack · 2011-10-14T01:47:49.021Z · LW(p) · GW(p)
Grocery stores also routinely keep track of how fast each cashier is-- by measuring items per minute. Such lanes could be staffed by the fastest cashiers and have dedicated baggers.
Replies from: pedanterrific↑ comment by pedanterrific · 2011-10-16T23:31:07.747Z · LW(p) · GW(p)
There are already 'express' lanes with maximum item limits, which achieve faster service by making sure the average time to process each customer is reduced. In that case, assigning faster cashiers makes sense, but it seems like the 'toll lane' idea would achieve faster service primarily by being much less crowded than other lanes (that is, if the toll lane has a line the same length as other lanes, there would be no point going to it). So having your best cashier there just ensures they spend more time idle, thereby increasing the average time for all lanes.
↑ comment by pedanterrific · 2011-10-16T23:42:02.737Z · LW(p) · GW(p)
Obvious corollary: bribing the maître d'.
↑ comment by Prismattic · 2011-10-19T01:56:05.481Z · LW(p) · GW(p)
I've actually considered leaving a note in my supermarket's suggestion box, to the effect that there should be a $1 congestion charge to enter the parking lot between 10am and 5pm on Saturday and Sunday. The wait in line doesn't bother me anywhere near as much as fruitlessly driving around the parking lot looking for someplace to park.
($1 is a starting point. The point being trial and error to hit the equilibrium of a moderately full parking lot at peak hours.)
I suspect this supermarket would actually not lose customers on net, because it is so much better than its competition that people previously deterred by the crowds would balance out people deterred by the congestion charge.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-10-19T03:46:27.501Z · LW(p) · GW(p)
I share your sentiment, but you want to be a little bit careful doing this sort of thing by trial and error as there can be hysteresis effects, as with the Israeli day care experiment made famous by Freakonomics. You may find, for example, that offering sales that start after 5pm works better than charging a fee before 5pm, but only if you do it first.
↑ comment by Vaniver · 2011-10-16T23:47:27.078Z · LW(p) · GW(p)
Since grocery prices are pretty competitive and waits are pretty short currently, this seems more likely to exist as a discount for waiting than as a surcharge for being faster. It's not clear to me that offering that option benefits the grocery store.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-17T00:00:54.774Z · LW(p) · GW(p)
this seems more likely to exist as a discount for waiting than as a surcharge for being faster
What does this mean? That it would be better for a store to have a sole 5% discount lane and no other special lanes than a 5% surcharge lane, and that it would be better for a store to have no special lanes than a 5% discount lane?
I am suggesting that a grocery store could a) unilaterally b) bring the situation closer to a Pareto optimum and c) capture much of the benefit.
To the extent consumers are rational, the store pushes off the decision to commit to a faster or cheaper store until the consumer has more information. To the extent they are irrational, it offers them an impulse purchase that is easy to rationalize.
Replies from: Vaniver↑ comment by Vaniver · 2011-10-17T04:43:53.708Z · LW(p) · GW(p)
Suppose the local grocery store offers this option.
Now, whenever* you don't choose the toll lane, you'll be struck by how long the non-toll lines are. You may even wonder if it's a plot to make you pay 5% extra at the register and thus display lower prices on the shelves, or maybe to just waste your time if you don't want to pay the extra 5%.** You don't recall waits at the other grocery store being this torturous, and so you start going there instead.***
*Offer subject to availability and confirmation bias.
**Offer subject to attribution bias.
***Indeed, you're giving other grocery stores a great advertising hook to springboard off of: "our cashiers are fast, friendly, and free!"
The grocery store isn't offering a new, supplemental service; they're charging you for quality service rather than mediocre service. (A more loaded way to put it is that they're ransoming your time back to you.) I suspect customers would resent that quite a bit.
The comment about discounts was because 5% is a lot when it comes to groceries (typical profit margins are around 1%), and charging people just 5 cents per plastic bag causes plastic bag usage rates to drop significantly. It seems like people would go to a different grocery store to save the 5%, and so prices would have to be lowered to compensate- which means that the grocery store is now paying you to wait, which is further from the Pareto optimum.
The new, supplemental service that targets time-starved customers is grocery delivery, which had a number of high-profile failures. The survivors are growing, but it seems likely that perhaps 5-10% of customers are time-starved enough to pay extra. If you only have ~5 cashiers working at a particular time, are you going to put one of them on the toll lane, significantly decreasing throughput?
There's also the question of the time calculation from the customer's point of view. Suppose they save 5 minutes; at $8 an hour that comes out to 67 cents. If the order costs more than $30, that's not worthwhile. (I can't remember the last time I had to wait more than 2 minutes to check out.)
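For concreteness, here is a rough back-of-envelope version of that break-even calculation (a sketch using the $8/hour and 5-minute figures above; the values are illustrative assumptions, not store data):

```python
# Break-even order size for a 5% checkout surcharge, given the assumptions above.
wage_per_hour = 8.00    # assumed value of the customer's time, $/hour
minutes_saved = 5       # assumed time saved by using the toll lane
surcharge_rate = 0.05   # 5% per item, as proposed upthread

value_of_time_saved = wage_per_hour * minutes_saved / 60    # about $0.67
break_even_order = value_of_time_saved / surcharge_rate     # about $13.33

print(f"Five minutes is worth ${value_of_time_saved:.2f}")
print(f"The toll lane pays for itself only on orders under ${break_even_order:.2f}")
```

On those assumptions the surcharge only wins for orders under roughly $13, so an order over $30 is comfortably past the point where it stops being worthwhile.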
Replies from: lessdazed↑ comment by lessdazed · 2011-10-18T04:27:41.295Z · LW(p) · GW(p)
The comment about discounts was because 5% is a lot when it comes to groceries (typical profit margins are around 1%)
I can think of several reasons why typical markup rates would be relevant, but not for why typical profit margins would be. I suspect you looked up "profit" when what was doing the work in your implicit arguments was "markup". 5% then ceases to be thought of as 500% of 1% and becomes thought of as a more reasonable 33% of ~15% or so.
charging people just 5 cents per plastic bag causes plastic bag usage rates to drop significantly.
This is because humans are irrational about free things, rather than about the sum of money involved. See Ariely's Lindt/Hershey's experiment. It is possible they see checking out as free, but also possible they see the price as a surcharge on each item. I don't know.
Also, I think your emphasis on what you see wrong with a 5% toll violates the spirit of least convenient possible world, as I used that as an example of what I thought would approximately achieve the ends I had in mind.
perhaps 5-10% of customers are time-starved enough to pay extra.
Rather than create something entirely new, with different advantages and disadvantages (e.g. someone has to be there to take in the groceries, but one doesn't have to go to the store), I am discussing a small improvement to an existing structure. I don't really buy the analogy because the point of this is that people could decide what to do after going to the store that has choices and looking at the lines. People like to keep their options open. They don't have to decide yet how impatient, hungry, or busy they are.
There's also the question of the time calculation from the customer's point of view. Suppose they save 5 minutes; at $8 an hour that comes out to 67 cents. If the order costs more than $30, that's not worthwhile.
Here you predict people will be rational, while I predict they will be impatient for an immediate reward. I also think orders of less than $30 and people who make more than $8/hour are pretty common.
I can't remember the last time I had to wait more than 2 minutes to check out.
To this I'm going to invoke LCPW again. From 5pm to 7pm where I live, lines are long. The toll lane doesn't have to be active at 10am.
Replies from: Vaniver↑ comment by Vaniver · 2011-10-18T13:01:20.967Z · LW(p) · GW(p)
I can think of several reasons why typical markup rates would be relevant, but not for why typical profit margins would be. I suspect you looked up "profit" when what was doing the work in your implicit arguments was "markup".
Profit margins strike me as a better measure of how competitive prices are; markup rates are necessarily higher because of the costs of running the store. To put it another way, high profit margins are a better sign of low competition than high markups.
The argument I was making was that grocery store customers are not a captive market, and are sensitive to price increases (and probably insults).
Also, I think your emphasis on what you see wrong with a 5% toll violates the spirit of least convenient possible world, as I used that as an example of what I thought would approximately achieve the ends I had in mind.
Since I'm pointing out potential downsides of your suggestion, isn't invoking LCPW for me invoking MCPW for yourself?
I am discussing a small improvement to an existing structure.
You are suggesting a small change to an existing structure. It has both positive and negative effects.
I don't really buy the analogy because the point of this is that people could decide what to do after going to the store that has choices and looking at the lines. People like to keep their options open.
The original examples of extra choices making people worse off come from grocery stores (though choice paralysis is different from the game-theoretic concerns I'm raising).
Replies from: lessdazed↑ comment by lessdazed · 2011-10-19T01:10:20.397Z · LW(p) · GW(p)
high profit margins are a better sign of low competition than high markups.
I see. High competition does not strongly imply high price competition, with other factors such as service fluctuating little; I think there is significant service competition in the current market.
Since I'm pointing out potential downsides of your suggestion, isn't invoking LCPW for me invoking MCPW for yourself?
It depends. "...are you going to put one of them on the toll lane, significantly decreasing throughput?" and "I can't remember the last time I had to wait more than 2 minutes to check out," seem like implausible over-interpretations of my suggestion, as if I meant for it to apply at all times and in all places regardless of store layout and business etc. "5% is a lot when it comes to groceries" is much more fair.
You are suggesting a small change to an existing structure.
This is a good point. I said "improvement" because increasing choice is usually, all else equal, an improvement, though it isn't always. Here it sort of obviously isn't a pure improvement, but the cost to consumers (assuming store prices are constant) that they pay for a chance to check out much faster is a slightly slower checkout if they don't so choose (i.e. the toll line might be half as long as the others, distributing those it would have were it free among many other lanes).
For consumers to think they are genuinely worse off, the free lanes would have to have noticeably longer lines than they would were all lanes free, which isn't plausible - particularly if the store loses customers - so you rightly didn't say they would. Instead you said consumers would irrationally disfavor the system, which is perfectly fair. In particular, you said they would resent being in a longer line compared to those in the toll lane (rather than compared to the line they would have been in were all lanes free, the rational comparison people might not make), and that they would come away with a negative feeling they would associate with the store, despite the lines at other stores being just as long.
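To make the size of that cost checkable rather than purely verbal, here is a crude sketch (Python) that treats each lane as an independent M/M/1 queue; every rate in it is invented and real checkout lines are messier, so treat it only as a way to see how the numbers move:

    def mm1_time_in_system(arrival_rate, service_rate):
        """Mean minutes a customer spends waiting plus being served in an M/M/1 queue."""
        assert arrival_rate < service_rate, "queue must be stable"
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 1.0    # customers per minute each cashier handles (assumed)
    total_arrivals = 4.0  # customers per minute store-wide at rush hour (assumed)
    lanes = 5
    toll_fraction = 0.10  # fraction of customers assumed willing to pay the toll

    # Baseline: all five lanes free, arrivals split evenly.
    baseline = mm1_time_in_system(total_arrivals / lanes, service_rate)

    # Toll scenario: one lane reserved for toll payers, everyone else splits over four lanes.
    toll = mm1_time_in_system(total_arrivals * toll_fraction, service_rate)
    free = mm1_time_in_system(total_arrivals * (1 - toll_fraction) / (lanes - 1), service_rate)

    print(f"all lanes free: {baseline:.1f} min; toll lane: {toll:.1f} min; free lanes with toll: {free:.1f} min")

With these particular invented rates the free-lane wait roughly doubles, while at lower utilization the change is small, so how "slight" the slowdown is turns on how loaded the lanes are and what fraction of shoppers pay.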
extra choices making people worse off
While extra choices can make people worse off, people are still biased towards preserving their options, so I think this factor would still benefit the supermarkets, though this part of the benefit wouldn't simply come from creating value and taking some of it rather than having consumers capture all of it.
I think you are overly caught up in the subject matter here, which disguises how little those examples are relevant. A great analogy furthering some of your points comes from traffic: Braess's Paradox.
One important thing to track is whether either of us is "cheating", in a general way I will try to describe by example. It would be cheating to make certain simultaneous claims, such as that 5% isn't enough that people would mind paying it but that it would also be enough to make the toll lane noticeably shorter, or that free lanes would be longer under the toll system than they would be otherwise and also that many people would avoid the toll store.
There is something very worthwhile to point out, which is that people would favor or disfavor the store (and the store would benefit or lose for other reasons) through three types of influence. The first is people's simply rational natures, the second is their simply irrational natures, and the third is their strategically rational but causally irrational mindsets. That last is not a technical term, so if anyone can tell me what the proper term is, I would appreciate it. See this great post and TDT/UDT.
Consider a store with a toll lane during rush hour (one that at least does not make throughput worse - though for many stores throughput would be improved by having some lines consistently longer than others, and for other stores it would be made worse), and how the toll policy changes things from the status quo:
I) Rational decision making:
Pros:
a) Consumers get to put off deciding whether or not to pay a premium for faster service until after they see how much time it took them to reach the market, shop, and observe the lines.
b) Free lanes are barely longer with one toll lane than they would have been were all lanes free, unless the policy greatly increases store traffic - which would simply be good for the store, which has the power to implement the policy unilaterally (unless people come to this store only when they need just a few things, the cashiers' time is taken up largely by individuals paying, store profit comes from people buying many items at once, and people who need to buy many things go to other stores... or something else I haven't considered).
c) Rich people, whose time is worth the cost, and who buy more expensive things and with fewer coupons, might favor the toll store during rush hour.
Cons:
a) Free lanes are slightly longer, which would particularly not be worth it if store prices stayed unchanged despite the increased profit from the toll lane.
II) Irrational decision making:
Pros:
a) Consumers are biased towards keeping options open, and would choose to put off deciding between spending more time or money.
b) Consumers in line have to resist their impatience every second they are in the longer line, and have many opportunities to pay the store more, even if they told themselves beforehand that they would shop at this store and check out the regular way.
Cons:
a) Consumers resent paying for something they think of as free, such as checking out.
b) Consumers might compare their experience in the more crowded free lane to that of those in the toll lane and have negative feelings about the store, rather than properly comparing that experience to the lines had all lanes in that store been free or the lines in other stores.
III) Game theoretic decision making
Cons:
a) Consumers might punish institutions that raise prices or attempt to influence them by taking advantage of their irrational behavior (i.e. manipulating them).
comment by selylindi · 2011-10-14T05:05:35.388Z · LW(p) · GW(p)
On the Freakonomics blog, Steven Pinker had this to say:
There are many statistical predictors of violence that we choose not to use in our decision-making for moral and political reasons, because the ideal of fairness trumps the ideal of cost-effectiveness. A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher. Thankfully, this rational policy would be seen as a moral abomination.
I've seen a common theme on LW that is more or less "if the consequences are awful, the reasoning probably wasn't rational". Where do you think Pinker's analysis went wrong, if it did go wrong?
One possibility is that the utility function to be optimized in Pinker's example amounts to "convict the guilty and acquit the innocent", whereas we probably want to give weight to another consideration as well, such as "promote the kind of society I'd wish to live in".
Replies from: lessdazed, pedanterrific, lessdazed, JoshuaZ, Jack, Vaniver, Morendil, Emile, taw, TimS, Vladimir_Nesov↑ comment by lessdazed · 2011-10-16T07:26:03.800Z · LW(p) · GW(p)
A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher.
One would compare black defendants with guilty black defendants and white defendants with guilty white defendants. It's far from obvious that (guilty black defendants/black defendants) > (guilty white defendants/white defendants). Differing arrest rates, plea bargaining etc. would be factors.
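As a toy illustration of why that inequality is far from obvious (Python; every number is invented purely to show the structure of the argument, not to describe any real population):

    def guilty_fraction_of_defendants(population, offense_rate, p_charge_if_guilty, p_charge_if_innocent):
        """Fraction of charged defendants who are actually guilty."""
        guilty = population * offense_rate
        innocent = population - guilty
        charged_guilty = guilty * p_charge_if_guilty
        charged_innocent = innocent * p_charge_if_innocent
        return charged_guilty / (charged_guilty + charged_innocent)

    # Group A has double the population offense rate, but its members also get charged on weaker suspicion.
    print(guilty_fraction_of_defendants(100_000, 0.02, 0.5, 0.010))  # ~0.51
    print(guilty_fraction_of_defendants(100_000, 0.01, 0.5, 0.002))  # ~0.72

With these made-up numbers, the group with the higher population offense rate has the lower guilt rate among its defendants; differing arrest and charging practices can swamp the population base rate entirely.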
Where do you think Pinker's analysis went wrong, if it did go wrong?
He began a sentence by characterizing what a member of a group "would say".
Replies from: Jack, APMason, selylindi↑ comment by Jack · 2011-10-17T17:27:42.228Z · LW(p) · GW(p)
One would compare black defendants with guilty black defendants and white defendants with guilty white defendants. It's far from obvious that (guilty black defendants/black defendants) > (guilty white defendants/white defendants). Differing arrest rates, plea bargaining etc. would be factors.
60% of convicts who have been exonerated through DNA testing are black, whereas blacks make up 40% of inmates convicted of violent crimes. Obviously this is affected by the fact that "crimes where DNA evidence is available" does not equal "violent crimes". But the proportion of inmates incarcerated for rape/sexual assault who are black is even smaller: ~33%. There are other confounding factors, like which convicts received DNA testing for their crime. But it looks like a reasonable case can be made that the criminal justice system's false positive rate is higher for blacks than for whites. Of course, the false negative rate could be higher too. If cross-racial eyewitness identification is to blame for wrongful convictions, then uncertain cross-racial eyewitnesses might cause wrongful acquittals.
↑ comment by APMason · 2011-10-16T23:44:45.424Z · LW(p) · GW(p)
Yes. It's important to remember that guilty defendants aren't the same thing as convicted defendants. A rational decision-maker using Bayes' theorem wouldn't necessarily put all that much weight on the decisions of past juries, knowing as we do that they're not using Bayes' theorem at all. And, of course, a Bayesian would need exactly the same amount of evidence to convict a black defendant as a white defendant. The question is whether skin colour counts as evidence.
↑ comment by selylindi · 2011-10-17T05:27:29.965Z · LW(p) · GW(p)
The conviction rate for black defendants is sometimes much higher than the conviction rate for whites, so the solution you've suggested here would intensify the racial disparity.
Replies from: lessdazed↑ comment by pedanterrific · 2011-10-15T23:57:38.808Z · LW(p) · GW(p)
If you instituted a policy to require less evidence to convict black defendants, you would convict more black defendants, which would make the measured "base rates for violence among blacks" go up, which would mean that you could need even less evidence to convict, which...
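A toy illustration of the contamination that drives this loop (Python; the evidence model, thresholds, and group sizes are all invented): holding behavior fixed and merely lowering the conviction standard inflates the measured "base rate" that would then be used to justify lowering it further.

    import random
    random.seed(0)

    POPULATION = 100_000
    TRUE_OFFENDERS = 1_000     # members of the group who actually offended (assumed)
    INNOCENT_SUSPECTS = 1_000  # innocent members who nevertheless get charged (assumed)

    def convictions(threshold):
        """Count convictions under a toy evidence model."""
        n = 0
        for _ in range(TRUE_OFFENDERS):
            if random.gauss(1.0, 1.0) > threshold:  # guilty defendants: stronger evidence on average
                n += 1
        for _ in range(INNOCENT_SUSPECTS):
            if random.gauss(0.0, 1.0) > threshold:  # innocent defendants: weaker evidence
                n += 1
        return n

    for label, threshold in [("uniform standard", 1.5), ("lowered standard", 1.0)]:
        measured_rate = convictions(threshold) / POPULATION
        print(f"{label}: measured 'base rate' = {measured_rate:.2%}")

The underlying behavior is identical in both runs, yet the measured rate roughly doubles when the standard is lowered, and that inflated figure is exactly what the next round of the policy would take as input.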
Replies from: Emile, selylindi↑ comment by Emile · 2011-10-17T07:46:35.660Z · LW(p) · GW(p)
No, you'd just need to keep track of how often demographic considerations influenced the outcome, so that any measure of "base rates for violence among blacks" you use for such decisions is independent of the policy.
(That's not to say that such a policy would be a good idea of course)
↑ comment by selylindi · 2011-10-17T05:18:47.248Z · LW(p) · GW(p)
It's not clear that "the base rates for violence among blacks is higher" is meant to be measured by convictions. I interpreted it to be based on sociological data, for example, and in that case there would be no feedback loop. Pinker didn't cite a source, unfortunately. A very quick stroll past Google Scholar 1 2 shows that a common source is arrest data in the FBI's Uniform Crime Report. Plainly there are also important ways in which arrest data may be biased against blacks, but I'm hesitant to simply dismiss a finding based on that difficulty, as I'm willing to bet that researchers in the field would have attempted to account for it.
Replies from: Jack↑ comment by Jack · 2011-10-17T17:51:36.453Z · LW(p) · GW(p)
The right kind of data would come from things like the National Crime Victimization Survey, which collects data outside the criminal justice system. The base rate for the offender in a violent crime being black is, according to that survey, lower than the probability that a given person arrested for a violent crime is black. So it looks to me like the evidence has long been screened off by the time the case gets to the courtroom.
Replies from: Vaniver↑ comment by Vaniver · 2011-10-17T19:07:57.279Z · LW(p) · GW(p)
Note that the NCVS requires that the victims survive, and does not collect data on crimes like murder, which may cause systematic differences.
Replies from: Jack↑ comment by Jack · 2011-10-17T19:24:12.541Z · LW(p) · GW(p)
Right. I'm comparing particular categories of crime-- reports from robbery victims to arrests for robbery, reports of rape to arrests for rape etc. I'm definitely not comparing total arrests for violent crimes to reports of violent crimes minus homicide.
Replies from: Vaniver↑ comment by Vaniver · 2011-10-17T22:17:02.071Z · LW(p) · GW(p)
I presumed as much, but that problem may still be noticeable: every rape that's also a murder won't get counted. I don't know how frequently the various violent crimes are paired with murder, though, or how large the difference between reported victimization and arrests is.
Replies from: Jack↑ comment by lessdazed · 2011-10-17T14:54:08.216Z · LW(p) · GW(p)
Pinker didn't address evidence screening off other evidence. Race would be rendered zero evidence in many cases, in particular in criminal cases for which there is approximately enough evidence to convict. I'm not exactly sure how often; I don't know how much e.g. poverty, crime, and race coincide.
It is perhaps counterintuitive to think that Bayesian evidence can apparently be ignored, but of course it isn't really being ignored, just carefully not double counted.
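A small illustration of what "carefully not double counted" means (Python; the toy network and all numbers are invented): suppose whatever predictive work race does runs entirely through a factor, here labeled poverty, that is already part of the case-specific evidence. Then race is evidence on its own, yet adds nothing once that factor is conditioned on.

    from itertools import product

    # Toy model (all numbers invented): race -> poverty -> guilt, so guilt depends on race only through poverty.
    p_race = {"A": 0.3, "B": 0.7}
    p_poverty_given_race = {"A": 0.5, "B": 0.2}
    p_guilt_given_poverty = {True: 0.03, False: 0.01}

    def joint(race, poor, guilty):
        p = p_race[race]
        p *= p_poverty_given_race[race] if poor else 1 - p_poverty_given_race[race]
        p *= p_guilt_given_poverty[poor] if guilty else 1 - p_guilt_given_poverty[poor]
        return p

    def p_guilty(**fixed):
        """P(guilty | fixed variables), by brute-force enumeration of the joint distribution."""
        num = den = 0.0
        for race, poor, guilty in product("AB", (True, False), (True, False)):
            assignment = {"race": race, "poor": poor, "guilty": guilty}
            if all(assignment[k] == v for k, v in fixed.items()):
                p = joint(race, poor, guilty)
                den += p
                if guilty:
                    num += p
        return num / den

    print(p_guilty(race="A"), p_guilty(race="B"))                        # 0.020 vs 0.014: race alone shifts the odds
    print(p_guilty(race="A", poor=True), p_guilty(race="B", poor=True))  # 0.030 vs 0.030: screened off

This is by construction, of course; the empirical question is how close real case evidence comes to capturing everything race would otherwise proxy for.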
Replies from: Jack↑ comment by JoshuaZ · 2011-10-16T05:10:38.364Z · LW(p) · GW(p)
The problem isn't using it as evidence. The problem is that it is extremely likely that humans will use such evidence in much greater proportion than is actually statistically justified. If juries were perfect Bayesians this wouldn't be a problem.
Replies from: komponisto↑ comment by komponisto · 2011-10-17T14:13:34.300Z · LW(p) · GW(p)
Yes indeed. And the same goes, by the way, for other kinds of rational evidence that aren't acceptable as legal evidence (hearsay, flawed forensics, coerced confessions, etc).
↑ comment by Jack · 2011-10-17T17:59:15.147Z · LW(p) · GW(p)
The more interesting question isn't for the jury -- for whom the race of a defendant has long been swamped by other evidence -- but for a police officer deciding whether or not a person's suspicious behavior is sufficient reason to stop and question them. Not only does including race in that assessment seem rational, but it is something police officers almost certainly do (if not consciously), which makes it rather more interesting as a policy question.
↑ comment by Vaniver · 2011-10-16T04:35:21.399Z · LW(p) · GW(p)
Where do you think Pinker's analysis went wrong, if it did go wrong?
The word "thankfully."
Replies from: selylindi↑ comment by selylindi · 2011-10-17T05:29:54.800Z · LW(p) · GW(p)
Do you care to elaborate? The interpretation of your response that comes to my mind is that you dissent from the moral viewpoint that Pinker expresses.
Replies from: Vaniver↑ comment by Vaniver · 2011-10-17T13:04:49.937Z · LW(p) · GW(p)
I do not consider it laudable that, when someone makes a rational suggestion, it is seen as a moral abomination. If it's a bad idea, there are rational ways to declare it a bad idea, and "moral abomination" is lazy. If it is a good idea, then "moral abomination" goes from laziness to villainy.
If his argument is "this causes a self-fulfilling prophecy, because we will convict blacks and not convict Asians because blacks are convicted more and Asians convicted less, suggesting that we will over-bias ourselves," then he's right that this policy is problematic. If his argument is "we can't admit that blacks are more likely to commit crimes because that would make us terrible people," then I don't want any part of it. Since he labeled it a moral abomination, that suggests the latter rather than the former.
Replies from: Incorrect, Tyrrell_McAllister↑ comment by Incorrect · 2011-10-17T13:39:34.416Z · LW(p) · GW(p)
The latter is probably not his intended meaning given that he states "these days the base rates for violence among blacks is higher."
I think calling something a "moral abomination" means it directly conflicts with your values, rather than only being a "bad idea." For example, lying may be a bad idea but probably not a moral abomination to a consequentialist, whereas killing the healthiest humans to reduce overpopulation would not only be a bad idea (it would kill off the workforce); it directly conflicts with our value against killing people.
The laziness in calling something a "moral abomination" is failing to specify what value it is conflicting with. Of course, having such a complex, context-dependent, and poorly objectively defined value as "non-discrimination" might be unfashionable to some.
Replies from: Vaniver↑ comment by Vaniver · 2011-10-17T15:51:57.482Z · LW(p) · GW(p)
The latter is probably not his intended meaning given that he states "these days the base rates for violence among blacks is higher."
Those are the words he puts in the mouth of "a rational decision-maker using Bayes Theorem," whose conclusion he identifies as a moral abomination. It is ambiguous whether or not he thinks that belief should pay rent.
I think calling something a "moral abomination" means it directly conflicts with your values, rather than only being a "bad idea."
The purpose of indignation is not to make calculations easier, but to avoid calculations.
↑ comment by Tyrrell_McAllister · 2011-10-18T00:33:48.746Z · LW(p) · GW(p)
I think he's just saying that not all rational evidence should be legal evidence. I don't think that he should be read according to LW conventions when he calls lower evidence standards for blacks a "rational policy". He doesn't mean to say that it would be rational to institute this policy (and yet somehow also morally abominable). He means that institutionalizing Bayesian epistemology in this way would be morally abominable (and hence not rational, as folks around here use the term).
Replies from: Vaniver, lessdazed↑ comment by Vaniver · 2011-10-18T01:08:39.440Z · LW(p) · GW(p)
I think he's just saying that not all rational evidence should be legal evidence.
Sure; in which case calling it a moral abomination is laziness. (The justification for holding legal evidence to a higher standard is very close to the self-fulfilling prophecy argument.)
↑ comment by lessdazed · 2011-10-18T01:38:46.474Z · LW(p) · GW(p)
lower evidence standards for blacks
It's already been pointed out that being a member of a group is evidence, so the evidence standards are identical. This is important because some evidence screens off other evidence.
The problem with our conversation is that Pinker's argument is so wrong, with so many errors sufficient to invalidate it, that we are having trouble inferring which sub-components of it he was right about. I encourage moving on from what he meant to what the right way to think is.
↑ comment by Morendil · 2011-10-17T16:10:18.835Z · LW(p) · GW(p)
One (more) reason to be uncomfortable with such an argument: "black" doesn't carve nature at its joints.
(Whereas, relevantly for such arguments, "poor" does - though I believe that arguing that way leads down the path that has been called "reference class tennis".)
Replies from: Emile, Jack, lessdazed↑ comment by Emile · 2011-10-17T19:20:17.390Z · LW(p) · GW(p)
Doesn't it?
When it comes to US demographics, "black" covers a "natural" cluster of the population / an identifiable blob in thingspace. Sure, there are border cases like mixed-race people and recent African immigrants, just like there are edge cases between bleggs and rubes. "Is person X black or not?" is probably one of the top yes/no questions that would tell you the most about an American (along with "Did he vote for Obama?", "Is he richer or poorer than the median?", or "Does he live north or south of the Mason-Dixon line?").
Sure, when it comes to world demographics, or Brazilian demographics, it doesn't cut reality at its joints as well.
Replies from: Vaniver↑ comment by lessdazed · 2011-10-17T23:22:40.141Z · LW(p) · GW(p)
That's not too important. If I go to my closet and pull out twenty items of clothing at random, and designate those group A, and designate the rest group B, if I know what is in each group I can still make predictions about traits of random members of either group.
↑ comment by Emile · 2011-10-17T08:00:25.964Z · LW(p) · GW(p)
Why single out race? There are other demographic factors that could count too: sex, age, social class... and if a policy of "OK, conviction standards for blacks are lower" would be political suicide, a policy of "OK, conviction standards for working-class people are lower" would be even worse.
Even though such policies may indeed increase the accuracy of convictions, 1) they don't match our intuitions about justice, which I suspect would make people less happy, 2) judges and juries already take such factors into account (implicitly and approximately), so there's a risk of overcorrecting, and 3) energy would be much better spent increasing the accuracy of convictions with less ambiguous things like cameras, DNA tests, etc.
I do believe that such demographic data can be useful to help direct resources for crime prevention, though.
Replies from: wedrifid, pedanterrific↑ comment by pedanterrific · 2011-10-17T08:06:24.875Z · LW(p) · GW(p)
I do believe that such demographic data can be useful to help direct resources for crime prevention, though.
Leaning more towards "increased police presence in the ghetto" or "only frisk the Muslims"?
Replies from: Emile↑ comment by Emile · 2011-10-17T08:35:50.842Z · LW(p) · GW(p)
Whatever works, I don't have any specific policies in mind (I'm far from being an expert in law enforcement).
But to take a specific example, I don't think information about higher crime rates for blacks is enough to tell whether we need "increased police presence in the Ghetto" - for all I know, police presence could already be 10 times the national average there.
There is a tendency I dislike in political punditry/activism/whatever (not that I'm accusing you of it, you just gave me a pretext to get on my soap box) to say "we need more X" or "we need less X" (regulation, police, taxes, immigrants, education, whatever) without any reference to evidence about what the ideal level of X would be, and about whether we are above or below that level - sometimes the same claims are made in countries with wildly different levels of X.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-10-17T14:30:32.064Z · LW(p) · GW(p)
↑ comment by taw · 2011-10-16T23:19:02.189Z · LW(p) · GW(p)
"Thankfully" part is wrong. We don't use any explicit probability thresholds to judge people guilty or not, we rely on judge's gut feeling about the defendant, which is very likely even more biased.
With a serious probability threshold being black would count slightly against you, but it would be very small bias.
↑ comment by TimS · 2011-10-17T16:16:39.780Z · LW(p) · GW(p)
Where do you think Pinker's analysis went wrong, if it did go wrong?
Pinker is not distinguishing between members of the audience and members of the jury.
For the audience, Pinker is basically correct. The prior probability that the defendant committed the acts charged is fairly high, given the evidence available to the audience that the government charged the defendant with a crime.
But the jury is given an instruction by the judge that the defendant is entitled to a presumption of innocence that must be overcome by the government. In Bayesian terms the judge means, "For social reasons not directly related to evidence, set your prior probability that the defendant is guilty as close to zero as is logically possible for you. If the probability of guilt is sufficiently high after hearing all the evidence, then you may vote to convict."
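A minimal sketch of that instruction in odds form (Python; the prior, likelihood ratios, and conviction threshold are hypothetical numbers, not anything a court specifies):

    def posterior(prior, likelihood_ratio):
        """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    prior = 0.01       # "as close to zero as is logically possible for you" (toy value)
    threshold = 0.99   # "sufficiently high" posterior needed to convict (toy value)

    for evidence_lr in (2_000, 20_000):  # strength of the prosecution's explicit case
        p = posterior(prior, evidence_lr)
        print(f"likelihood ratio {evidence_lr}: posterior = {p:.4f}, convict = {p > threshold}")

The deliberately low prior does the social work the instruction intends: only a very strong explicit case clears the bar.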
If a rationalist juror ignores this instruction and acts like a rational audience member, then the juror is doing wrong, as Pinker notes. But ignoring the judge's instruction is wrong, and Pinker is not presenting a great insight by essentially highlighting this point.
Replies from: komponisto↑ comment by komponisto · 2011-10-17T19:51:45.312Z · LW(p) · GW(p)
There shouldn't be any such distinction. The audience (I assume you mean the courtroom audience) should reason the exact same way the jury does.
The prosecution is required to make an explicit presentation of the evidence for guilt, so that the mere fact that charges were brought is screened off. As a consequence, failure to present a convincing explicit case is strong evidence of innocence; prosecutors have no incentive to hide evidence of guilt! Hence any juror or audience member who reasons "the prosecution's case as presented is weak, but the defendant has a high likelihood of guilt just because they suspect him" is almost certainly committing a Bayesian error. (Indeed, this is how information cascades are formed.)
See Argument Screens Off Authority: once in the courtroom, prosecutors have to present their arguments, which renders their "authority" worthless.
Replies from: TimS↑ comment by TimS · 2011-10-17T21:26:34.303Z · LW(p) · GW(p)
I wrote this long post defending my point, and about halfway through, I realized it was mostly wrong. I think the screening-off point is probably a better description of what's wrong with Pinker's analysis. Specifically, the higher rate of crime among blacks should be screened off from consideration by the fact that this particular black defendant was charged.
To elaborate on my earlier point, the presumption of innocence also serves to remind the juror that the propensity of the population to commit crimes is screened off by the fact that this particular person went through the arrest and prosecution screening processes in order to arrive in the position of defendant. In other words, a rationalist should not use less evidence to convict black defendants than required for white defendants because this is double counting the crime rate of blacks.
↑ comment by Vladimir_Nesov · 2011-10-17T20:08:27.834Z · LW(p) · GW(p)
I've seen a common theme on LW that is more or less "if the consequences are awful, the reasoning probably wasn't rational".
Could you elaborate? Consequences of what (related to reasoning how)?
comment by lessdazed · 2011-10-14T01:53:03.099Z · LW(p) · GW(p)
People are bothered by some words and phrases.
Recently, I learned that the original meaning of "tl;dr" has stuck in people's minds such that they don't consider it a polite response. That's good to know.
Some things that bother me are:
- Referring to life extension as "immortality".
- Referring to AIs that don't want to kill humans as "friendly".
- Referring to AIs that want to kill humans as simply "unfriendly".
- Expressing disagreement as false lack of understanding, e.g. "I don't know how you could possibly think that."
- Referring an "individual's CEV".
- Referring to "the singularity" instead of "a singularity".
I'm not going to pretend that referring to women as "girls" inherently bothers me, but it bothers other people, so it by extension bothers me, and I wouldn't want it excluded from this discussion.
Some say to say not "complexity" or "emergence".
Replies from: Nisan, Normal_Anomaly, wedrifid, wedrifid↑ comment by Nisan · 2011-10-16T01:05:09.873Z · LW(p) · GW(p)
Expressing disagreement as false lack of understanding, e.g. "I don't know how you could possibly think that."
This, more than the others, is a sign of a pernicious pattern of thought. By affirming that someone's point of view is alien, we fail to use our curiosity, we set up barriers to communication, and we can make any opposing viewpoint seem less reasonable.
↑ comment by Normal_Anomaly · 2011-10-16T20:27:59.384Z · LW(p) · GW(p)
Referring to AIs that don't want to kill humans as "friendly". Referring to AIs that want to kill humans as simply "unfriendly".
"Friendly" as I've seen it used on here means "an AI that creates a world we won't regret having created," or something like that. It might be good to link to an explanation every time the term is used for the benefit of new readers, but I don't think it's necessary. "Unfriendly" means "any AI that doesn't meet the definition of Friendly," or "an AI that we would regret creating (usually because it destroys the world)." I think these are good, consistent ways of using the term.
Most possible AIs have no particular desire either to kill humans or to avoid doing so. They are generally called "Unfriendly" because creating one would be A Bad Thing. Many possible AIs that want to avoid killing humans are also Unfriendly because they have no problem doing other things we don't want. The important thing, when classifying potential AIs, is whether it would be a very good idea or a very bad idea to create one. That's what the Friendly/Unfriendly distinction should mean.
Expressing disagreement as false lack of understanding, e.g. "I don't know how you could possibly think that."
I've found that saying, "I don't think I understand what you mean by that" or "I don't see why you're saying so" is a useful tactic when somebody says something apparently nonsensical. The other person usually clarifies their position without being much offended, and one of two things happens. Either they were saying something true which I misinterpreted, or they really did mean something I disagree with, at which point I can say so.
Referring [to] an "individual's CEV".
I think this is a good idea, because humans aren't expected utility maximizers. We have different desires at different times, we don't always want what we like, etc. An individual's CEV would be the coherent combination of all that person's inconsistent drives: what that person is like at reflective equilibrium.
Referring to "the singularity" instead of "a singularity".
referring to women as"girls"
These ones bother me too, and I support not doing them.
Replies from: pedanterrific↑ comment by pedanterrific · 2011-10-16T20:37:17.732Z · LW(p) · GW(p)
Expressing disagreement as a false lack of understanding
I've found that saying, "I don't think I understand what you mean by that" or "I don't see why you're saying so" is a useful tactic when somebody says something apparently nonsensical.
Yes, when you actually don't understand, saying that you don't understand is rarely a bad idea. It's when you understand but disagree that proclaiming an inability to comprehend the other's viewpoint is ill-advised.
Referring [to] an "individual's CEV".
I think this is a good idea, because humans aren't expected utility maximizers.
I could be wrong, but this may be a terminology issue.
Coherence: Strong agreement between many extrapolated individual volitions ...
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2011-10-16T21:58:25.481Z · LW(p) · GW(p)
Coherence: Strong agreement between many extrapolated individual volitions ...
It would indeed appear that EY originally defined coherence that way. I think it's legitimate to extend the meaning of the term to "strong agreement among the different utility functions an individual maximizes in different situations." You don't necessarily agree, and that's fine, because this is partly a subjective issue. What, if anything, would you suggest instead of "CEV" to refer to a person's utility function at reflective equilibrium? Just "eir EV" could work, and I think I've seen that around here before.
Replies from: pedanterrific↑ comment by pedanterrific · 2011-10-16T23:22:45.146Z · LW(p) · GW(p)
I think it's legitimate to extend the meaning of the term to "strong agreement among the different utility functions an individual maximizes in different situations."
Me too. I consider the difference in coherency issues between CEV(humanity) and CEV(pedanterrific) to be one of degree, not kind. I just thought that might be what lessdazed was objecting to, that's all.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-10-17T10:14:35.488Z · LW(p) · GW(p)
Okay, so I'm not the only one. Lessdazed: is that your objection to "individual CEV" or were you talking about something else?
↑ comment by wedrifid · 2011-10-16T05:12:53.490Z · LW(p) · GW(p)
Referring to AIs that don't want to kill humans as "friendly".
Necessary, within an infinitesimal subset of mindspace around friendliness, but not quite sufficient. Examples of cases where this is a problem include when people go around saying "a friendly AI may torture ". Because that is by definition not friendly. Any other example of "what if a friendly AI thing did " is also a misuse of the idea.
Referring to AIs that want to kill humans as simply "unfriendly".
That seems entirely legitimate. uFAI is a rather useful and well-established name for an overwhelmingly important concept. I honestly think you just need to learn more about how the concept of Unfriendly AI is used, because this is not a problem term. AIs that want to kill humans (i.e. most of them) are unfriendly.
Referring to life extension as "immortality".
Do people even do that? I haven't seen it. People attempting immortality (living indefinitely) will obviously use whatever life extension practices they think will help achieve that end. Yet if anyone ever said, for example, "I'm having daily resveratrol for immortality" then I suggest they were being tongue-in-cheek.
Replies from: lessdazed, pedanterrific↑ comment by lessdazed · 2011-10-16T07:09:16.675Z · LW(p) · GW(p)
want
AIs that want to kill humans (ie. most of them) are unfriendly.
Just as "want" does not unambiguously exclude instrumental values in English, "unfriendly" does not unambiguously include instrumental values in English. As for the composite technical term "Unfriendly Artificial Intelligence"...
If you write "Unfriendly Artificial Intelligence" alone, regardless of other context, you are technically correct. If you want to be correct again, type it again, in wingdings if the mood strikes you, you will still be technically correct, though with even less of a chance at communicating. In the context of entire papers, there is other supporting context, so it's not a problem. In the context of secondary discussions, consider those liable to be confused or you can consider them confused.
We might disagree about the extent of confusion around here, we might disagree as to how important that is, we might disagree as to how much of that is caused by unclear forum discussions, and we might disagree about the cost of various solutions.
Regarding the first point, those confident enough to post their thoughts on the issue make mistakes. Regarding the fourth point, assume I'm not advocating an inane extreme solution such as requiring you to define words every comment you make, but rather thoughtfulness.
Examples of cases where this is a problem include when people go around saying "a friendly AI may torture ". Because that is by definition not friendly. Any other example of "what if a friendly AI thing did " is also a misuse of the idea.
No torture? You're guessing as to what you want, what people want, what you value, what there is to know...etc. Guessing reasonably, but it's still just conjecture and not a necessary ingredient in the definition (as I gather it's usually used).
Or, you're using "friendly" in the colloquial rather than strictly technical sense, which is the opposite of how you criticized how I said not to speak about unfriendly AI! My main point is that care to should be taken to explain what is meant when navigating among differing conceptions within and between colloquial and technical senses.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-16T09:26:28.762Z · LW(p) · GW(p)
Or, you're using "friendly" in the colloquial rather than strictly technical sense
No, you're wrong about the dichotomy there. The words were used legitimately with respect to a subjectively objective concept. But never mind that.
Of all the terms in "Unfriendly Artificial Intelligence" I'd say the 'unfriendly' is the most straightforward. I encourage folks to go ahead and use it. Elaborate further on what specifically they are referring to as the context makes necessary.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-16T20:06:01.081Z · LW(p) · GW(p)
I encourage folks to go ahead and use it. Elaborate further on what specifically they are referring to as the context makes necessary.
This implies I'm discouraging use of the term, which I'm not; I raised the issue to point out that for this subject, specificity is often not supplied by context alone and needs to be made explicit.
What is confusing is when people describe a scenario in which it is central that an AI has human suffering as a positive terminal value, and they use "unfriendly" alone as a label to discuss it. The vast majority of possible minds are the ones most overlooked: the indifferent ones. If something applies to malicious minds but not indifferent or benevolent ones, one can do better than describing the malicious minds as "either indifferent or malicious", i.e. "unfriendly".
I would also discourage calling blenders "not-apples" when specifically referring to machines that make apple sauce. Obviously, calling a blender a "not-apple" will never be wrong. There's nothing wrong with talking about non-apples in general, nor talking about distinguishing them from apples, nor with saying that a blender is an example of a non-apple, nor with saying that a blender is a special kind of non-apple that, unlike other non-apples, is an anti-apple.
But when someone describes a blender and just calls it a "non-apple", and someone else starts talking about how almost nothing is a non-apple because most things don't pulverize apples, and every few times the subject is raised someone assumes a "non-apple" is something that pulverizes apples, it's time for the first person to implement low-cost clarifications to his or her communication in certain contexts.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-17T04:01:03.716Z · LW(p) · GW(p)
What is confusing is when people describe a scenario in which it is central that an AI has human suffering as a positive terminal value, and they use "unfriendly" alone as a label to discuss it. The vast majority of possible minds are the ones most overlooked: the indifferent ones. If something applies to malicious minds but not indifferent or benevolent ones, one can do better than describing the malicious minds as "either indifferent or malicious", i.e. "unfriendly".
I would use malicious in that context. A specific kind of uFAI requires a more specific word if you expect people to distinguish it from all other uFAIs.
↑ comment by pedanterrific · 2011-10-16T05:29:32.126Z · LW(p) · GW(p)
Referring to AIs that want to kill humans as simply "unfriendly".
That seems entirely legitimate. ... AIs that want to kill humans (ie. most of them) are unfriendly.
I interpreted lessdazed as objecting to the lack of emotional impact - calling it "unfriendly artificial intelligence" has a certain blasé quality, particularly when compared to "all-powerful abomination bent on our destruction", say.
Edit: Apparently I was incorrect.
comment by Shmi (shminux) · 2011-10-03T01:29:16.014Z · LW(p) · GW(p)
Why does the argument "I've used math to justify my views, so it must have some validity" tend to override "Garbage In - Garbage Out"? It can be seen in this thread:
I estimate, that a currently working and growing superintelligence has a probability in a range of 1/million to 1/1000. I am at least 50% confident that it is so.
or it can be the subprime mortgage default risk.
What is the name for this cognitive bias of trusting the conclusions more (or sometimes less) when math is involved?
Replies from: DanielLC, dspeyer↑ comment by dspeyer · 2011-10-03T01:38:51.325Z · LW(p) · GW(p)
Sounds like a special case of "judging an argument by its appearance" (maybe somebody can make that snappier). It's fairly similar to "it's in latin, therefore it must be profound", "it's 500 pages, therefore it must be carefully thought-out", and "it's in helvetica, therefore it's from a trustworthy source".
Note that this is entirely separate from judging by the arguer's appearance.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-06T19:04:45.851Z · LW(p) · GW(p)
It's fairly similar to "it's in latin, therefore it must be profound"
Or to sound more profound, quidquid Latine dictum sit altum videtur. With that in mind, the fallacy of dressing up in mathematical clothing might be dubbed the Quidquid Mathematice fallacy, or Quidquid Per Numeros ("whatever (is said) with numbers").
comment by jaimeastorga2000 · 2011-10-09T04:19:01.352Z · LW(p) · GW(p)
I'm having trouble finding a piece which I am fairly confident was either written on LW or linked to from here. It dealt with a stone which had the power to render all the actions of the person who held it morally right. So a guy goes on a quest to get the stone, crossing the ocean and defeating the fearful guardian, and finds it and returns home. At some point he kills another guy, and gets sentenced by a judge, and it is pointed out that the stone protects him from committing morally wrong actions, not from the human institution of law. Then the guy notices that he is feeling like crap because he is a murderer, and it is pointed out that the stone isn't supposed to protect him from his feelings of guilt. And so on, with the stone proving to be useless because the "morality" wasn't attached to anything real.
If somebody knows what I'm talking about, could they be so kind as to point me towards it?
Replies from: Alicorn↑ comment by Alicorn · 2011-10-09T04:47:45.068Z · LW(p) · GW(p)
The Heartstone in Yvain's Consequentialism FAQ. Except it's a cat the guy kills.
Replies from: jaimeastorga2000↑ comment by jaimeastorga2000 · 2011-10-09T20:25:02.772Z · LW(p) · GW(p)
Yes, that's what I was looking for! Thank you very much for the link.
comment by lessdazed · 2011-10-04T19:54:06.451Z · LW(p) · GW(p)
I propose a discussion thread in which people can submit requests for pdfs of scholarly articles. I have found promising things for debiasing racism but I've been figuring out the contents of important articles indirectly - through their abstract and descriptions of them in free articles.
Replies from: pedanterrific↑ comment by pedanterrific · 2011-10-05T00:42:25.205Z · LW(p) · GW(p)
You mean, like this?
comment by ahartell · 2011-10-15T18:04:03.520Z · LW(p) · GW(p)
In his talk on Optimism (roughly minute 30 to roughly minute 35), David Deutsch said that the idea that the world may be inexplicable from a human perspective is wrong and is only an invitation to superstitious thinking. He even mentions an argument by Richard Dawkins stating that evolution would have no reason to produce a brain capable of comprehending everything in our universe. It reminds me of something I heard about the inability to teach algebra or whatever to dogs. He writes this argument off for reasons evolution didn't prepare me for, so I was wondering if anyone could clarify this for me. To me it seems very possible that Dawkins was right, and that without enhancement some problems are just too hard for humans.
If you can't watch the video, the one line of his that I'm having trouble with is: "If we live inside a little bubble of explicability in a great inexplicable universe, then the inside couldn't be really explicable either because the outside is needed in our explanation of the inside." This seems wrong to me. In a hypothetical universe where humans were too stupid to go beyond Newtonian mechanics, we would be in a bubble that suitably explained the movement of large objects. We wouldn't need knowledge of the quantum things that would be beyond our grasp to understand why apples fall.
Am I missing something or am I misunderstanding him or is he wrong?
Replies from: selylindi, ahartell, JoshuaZ↑ comment by selylindi · 2011-10-15T21:34:16.809Z · LW(p) · GW(p)
without enhancement some problems are just too hard for humans.
Without the enhancement of a computer or at least external memory like pen and paper, can you compute the n-th roots of pi to arbitrary decimal places? I can't, so it seems plain that Dawkins was correct. But it's a mighty big jump from there to "and there are processes in the universe which no constructible tools could ever let us explain, even in principle".
Humans with our enhancements haven't yet found any aspect of the universe which we have good reason to believe will always continue to escape explanation. That lack of evidence is weak evidence in favor of nothing remaining permanently and necessarily mysterious.
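(As an aside, the "with external tools" half of that is easy to demonstrate; a couple of lines of Python using the third-party mpmath library, assuming it is installed, compute n-th roots of pi to whatever precision you ask for:)

    from mpmath import mp, pi, root

    mp.dps = 50         # work to 50 decimal places
    print(root(pi, 7))  # the 7th root of pi, far beyond unaided mental arithmetic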
Replies from: ahartell, Desrtopa↑ comment by Desrtopa · 2011-10-17T05:16:55.043Z · LW(p) · GW(p)
Humans with our enhancements haven't yet found any aspect of the universe which we have good reason to believe will always continue to escape explanation.
What would you say would actually constitute evidence for such a thing existing?
Replies from: selylindi, wedrifid↑ comment by selylindi · 2011-10-17T05:36:31.823Z · LW(p) · GW(p)
I can imagine encountering a living organism composed of "subtle matter" not reducible to molecular machinery, or a fundamental particle that spontaneously and stochastically changed its velocity, or an Oracle that announced the solution to the halting problem for any given piece of code.
↑ comment by wedrifid · 2011-10-17T07:21:42.965Z · LW(p) · GW(p)
What would you say would actually constitute evidence for such a thing existing?
That's an easy one.
- Finding something that you can't explain.
- Finding that other smart people can't explain something.
- Finding other things are easy to explain.
- Becoming smarter and still being unable to explain something.
As for what would constitute strong evidence...
↑ comment by JoshuaZ · 2011-10-15T21:54:00.150Z · LW(p) · GW(p)
Deutsch essentially thinks that humans are what I think he called at one point "universal knowledge generators". I confess that I don't fully understand his argument for this claim. It seemed to be something like the idea that we can in principle run a universal Turing machine. He does apparently discuss this idea more in his book The Beginning of Infinity, but I haven't read it yet.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-15T22:12:42.585Z · LW(p) · GW(p)
I haven't read it yet.
What would you think of a loose convention to not say one hasn't learned about a specific thing yet?
Saying that I haven't read something yet makes me more likely to think others think I am more likely to read it than if I hadn't said "yet". But that prematurely gives me some of the prestige that makes me want to read it in the first place, making it less likely I will.
Replies from: JoshuaZ, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-10-16T09:54:25.898Z · LW(p) · GW(p)
(You might be thinking of http://lesswrong.com/lw/z8/image_vs_impact_can_public_commitment_be/ )
comment by NihilCredo · 2011-10-05T17:09:45.563Z · LW(p) · GW(p)
Is there a term for the following fallacy (related to the false dilemma)?
- While discussing the pros and cons of various items in the same category, people switch to 'competition mode thinking' - even if they hold no particular attachment towards any item, and nobody is in need of making a choice among the items - and they begin to care exclusively about the relative ranking of the items, rather than considering each one on its own merits. Afterwards, people will have a favourable opinion of the overall winner even if all items were shown to be very bad, and vice-versa they will have an unfavourable opinion of the overall loser even if all items were shown to be very good.
↑ comment by fubarobfusco · 2011-10-09T21:48:35.551Z · LW(p) · GW(p)
This isn't quite the same, but I wrote an essay for Wikipedia a few years ago (2005!) on why encyclopedia articles shouldn't contain pro-and-con lists. Even though I didn't know much about cognitive biases at the time, and was thinking about the specific domain of Wikipedia articles rather than argumentation or truth-seeking in general, it may be relevant.
One of the things that occurred to me at the time was that pro-and-con lists invite Wikipedia readers who already support one "side" to think of more items to add to "their side" of the list, and add them. In LW-speak, they inspire motivated cognition for people whose bottom line is already written.
comment by [deleted] · 2011-10-04T22:28:03.865Z · LW(p) · GW(p)
.
comment by philosophytorres · 2015-01-22T17:54:34.495Z · LW(p) · GW(p)
I'd love to know what the community here thinks of some critiques of Nick Bostrom's conception of existential risks, and his more general typology of risks. I'm new to the community, so a bit unsure whether I should completely dive in with a new article, or approach the subject some other way. Thoughts?
Replies from: Vaniver↑ comment by Vaniver · 2015-01-22T18:38:52.126Z · LW(p) · GW(p)
Welcome! The LW wiki page on Existential Risk is a good place to start. People here typically take the idea of existential risk seriously, but I don't see much discussion of his specific typology.
AmandaEHouse made a visualization of Bostrom's Superintelligence here, and Katja Grace runs a reading group for Superintelligence, which is probably one of the best places to start looking for discussion.
The open threads are also weekly now, and the current one is here. (You can find a link to the most recent one on the right sidebar when you're in the discussion section.)
Replies from: philosophytorres↑ comment by philosophytorres · 2015-01-22T22:31:05.385Z · LW(p) · GW(p)
Great, thanks. Very helpful. Where do I, er, post on the LW wiki? Should I click on the "discussion" page? (Again, apologies for my callowness!)
Replies from: Vaniver↑ comment by Vaniver · 2015-01-23T00:34:04.016Z · LW(p) · GW(p)
The wiki is for stable reference content; you'll notice much of it is definitions of terms, lists of posts for easy reference, and summaries of concepts. The discussion tab on the wiki is used mostly for editors to talk with each other, which very rarely happens.
If you want to have a conversation, posting in the current open thread is probably the best thing to do. (The next step, if you have something long enough to talk about, is posting a post in discussion, which you do by hitting "create new article" to the right of your name and karma display.)
comment by MarkusRamikin · 2011-10-07T08:49:55.813Z · LW(p) · GW(p)
Can someone explain to me the point of sequence reruns?
I honestly don't get it. Sequences are well organised and easily findable; what benefit is there from duplicating the content? It seems to me like it just spreads the relevant discussion into multiple places, adds noise to google results, and bloats the site.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-07T13:48:00.965Z · LW(p) · GW(p)
Many people find blogs easier to read than books. Reacting to prompts with bite size chunks of information requires far less executive control and motivation than working through a mass of text unprompted.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-08T10:28:56.685Z · LW(p) · GW(p)
Hm. Yeah, that makes sense I suppose, though that's a rather alien way of thinking for me. I like the organisation and permanence of the Sequences, the fact that I can read them at my own pace and in what order they interest me. Especially since we don't seem to suffer from the "don't necro old threads" disease as much as most other forums, and arguments from 2007 still get developed today. Which is good IMO, I never liked how most of the Internet has a 1-2 day attention span, and if something isn't freshly posted then it's not worth reading or replying to. But I digress.
To each their own, I suppose, it's not like this stops me from reading the original Sequences the way I like to.
comment by antigonus · 2011-10-06T03:46:58.967Z · LW(p) · GW(p)
Where can I find arguments that intelligence self-amplification is not likely to quickly yield rapidly diminishing returns? I know Chalmers has a brief discussion of it in his singularity analysis article, but I'd like to see some lengthier expositions.
Replies from: kilobug↑ comment by kilobug · 2011-10-07T09:08:27.694Z · LW(p) · GW(p)
I asked the same question not so long ago, and I was pointed to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate which did contain interesting arguments on the topic. Hope it'll help you as it helped me.
comment by [deleted] · 2011-10-19T17:14:07.233Z · LW(p) · GW(p)
Last night, I had an idea for a post to write for LW. My idea was something along the following:
For many good reasons, LessWrong-ers have gone to great lengths to explain how to use Bayes' theorem. While understanding Bayes' theorem is essential to rationality, has anyone written an explanation (targeted toward traditional rationalists) about why Bayes' theorem is so essential in the first place? If such a post hasn't been written, maybe I could write that post...
Here are some of the main points I'm thinking of addressing
- A very brief history of Bayes' theorem and its past usefulness.
- The advantages of Bayesian rationality over traditional rationality. (For example, the ability to directly update your beliefs in proportion to the evidence.)
- How traditional logic is just a special case of Bayesian rationality
- A list of more resources for further study
I suppose my overarching goal is this: if someone mentions Bayes' theorem in a discussion and they get a blank look from their interlocutor, they can first point them toward my post to give them the basic idea of what Bayes theorem is and why it is useful. Then, if they're enticed, then they can go learn the rigorous mathematics about the subject.
Has such a post already been written? If it hasn't, do you think such a post would be helpful? Would it be appropriate for LW?
All constructive criticism, advice, and questions are welcome.
Replies from: None↑ comment by [deleted] · 2011-10-19T19:03:02.079Z · LW(p) · GW(p)
I'd love to see the kind of post you're describing, regardless of any overlap with previous posts. If you're aiming for a basic introduction targeted at non-LWers, I think you have roughly the right amount of subject matter to make it work. If you're aiming for a more in-depth analysis, you might want to split the topics up into separate posts.
Luke wrote this post on the history of Bayes' Theorem, which incorporates some of these ideas.
Replies from: None↑ comment by [deleted] · 2011-10-19T20:22:29.492Z · LW(p) · GW(p)
Thanks for the support, Tetronian!
I think you said it well. I am "aiming for a basic introduction targeted at non-LWers." This will be my first post on the main site, so I want to write something important without it being overwhelming for myself and the reader. So I want it to be short, non-technical, and without LW-vernacular.
I've read a lot of Luke's work, including the post you linked. His work is fantastic and he's a very big inspiration to me. I think his work is what (in major part) helped lead me to this idea.
comment by [deleted] · 2011-10-16T20:33:57.305Z · LW(p) · GW(p)
I'm working to improve my knowledge of epistemology. Can anyone recommend a good reference/text book on the subject? I'm especially looking to better understand LW's approach to epistemology (or an analogous approach) in a rigorous, scholarly way.
Until recently, I was a traditional rationalist. Epistemologically speaking, I was a foundationalist with a belief in a priori knowledge. Through recent study of Bayes, Quine, etc., these beliefs have been strongly challenged, and I have been left with a feeling of cognitive dissonance.
I'd really appreciate if my thinking on epistemology can be set straight. While I think I understand the basics, I feel like a lack of coherent epistemology is really infecting the rest of my thinking.
[I'm having trouble explaining exactly what I'm looking for while still being concise. If you have any questions about my request, please feel free to ask.]
Edit: I've recently picked up a copy of Probability Theory: The Logic of Science by E. T. Jaynes. I've heard his name referenced a lot on LW, and the reviews of the book are glowing. I'm going to read it to see if I can get a better understanding of probability theory, as it seems essential to the LW approach to epistemology.
Suggestions of other books are still very much appreciated.
Replies from: None
comment by lessdazed · 2011-10-03T09:24:40.564Z · LW(p) · GW(p)
I propose a thread in which ideas commonly discussed on LW can be discussed with a different dynamic - that of the relatively respectable minority position being granted a slight preponderance in number and size of comments.
This might include feminism in which one is offended by "manipulation", deontology, arguments for charities other than X-risk ones, and the like.
Nothing would be censored or off limits, those used to being in the majority would merely have to wait to comment if most of the comments already supported "their" "side" (both words used loosely).
comment by lessdazed · 2011-10-03T08:18:28.929Z · LW(p) · GW(p)
Was it possible for the ancient Greeks to discover that cold is the absence of heat?
Replies from: Jack, Kingreaper, Emile, JoshuaZ, Oscar_Cunningham↑ comment by Jack · 2011-10-06T02:20:18.826Z · LW(p) · GW(p)
I'm not confident they didn't know that. Cite?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-10-06T02:39:05.204Z · LW(p) · GW(p)
Not exactly a great citation, but the Wikipedia article on the history of heat suggests that their understanding was not very good.
For what it is worth, marginally related ideas seemed to be going around at least by the end of the sixth century since I seem to recall an argument in the Talmud about whether or not transfer of heat by itself from a non-kosher thing to a kosher thing could make the kosher thing non-kosher. But it is possible that I was reading that with a too modern perspective. I'll try to track down the section and see what it says.
Replies from: Jack↑ comment by Jack · 2011-10-06T02:45:22.408Z · LW(p) · GW(p)
Yeah, I read the wikipedia page looking for something. And certainly no one would say they had a good understanding of thermodynamics. Usually different ancient philosophers had different opinions about these sorts of things, so it would not surprise me if a few prominent figures had the basic idea down. They definitely associated fire with heat, and fire in most accounts was an element. It seems plausible that they might have believed cold simply involved the absence of fire.
↑ comment by Kingreaper · 2011-10-07T11:24:44.782Z · LW(p) · GW(p)
Yes. The relevant experiment would be a study of how gases expand when heated, leading to the ideal gas law, which has a special case at absolute 0.
The special case distinguishes between cold being a real entity (and heat being neg-cold) and heat being a real entity (and cold being neg-heat); because it proves that heat has a minimum, and cold a maximum, rather than the other way around.
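(A minimal sketch, mine rather than Kingreaper's, of the extrapolation being described; the measurements below are illustrative modern-style numbers, not anything the Greeks recorded. Fitting gas volume against temperature and extrapolating to zero volume gives a floor near -273°C, but no corresponding ceiling, which is the asymmetry in question.)

```python
# Measure how a fixed amount of gas expands with temperature, fit a straight line,
# and extrapolate to zero volume. The intercept is a *minimum* temperature, which
# is what distinguishes "heat is a real quantity with a floor" from the reverse.
# Data points are illustrative (roughly one mole of ideal gas at 1 atm).
temps_c = [0, 25, 50, 75, 100]            # temperature in degrees Celsius
volumes = [22.4, 24.5, 26.5, 28.6, 30.6]  # litres

# Least-squares fit V = slope*T + intercept, done by hand (no libraries needed).
n = len(temps_c)
mean_t = sum(temps_c) / n
mean_v = sum(volumes) / n
slope = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, volumes)) / \
        sum((t - mean_t) ** 2 for t in temps_c)
intercept = mean_v - slope * mean_t

# Volume hits zero at T = -intercept/slope: the extrapolated "absolute zero".
print("Extrapolated absolute zero: %.0f degrees C" % (-intercept / slope))  # ~ -273
```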
↑ comment by Emile · 2011-10-03T09:29:24.317Z · LW(p) · GW(p)
Probably, by considering how there are several ways to "create" heat (burning, rubbing things together, as Oscar says), but no ways of "creating" cold. That makes more sense in a model where heat is a substance that can be transmitted from object to object, and cold is merely the absence of such a substance.
Replies from: lessdazed, Desrtopa, JoshuaZ↑ comment by lessdazed · 2011-10-03T09:56:48.276Z · LW(p) · GW(p)
What if they built a building or found a cave where wind ran over a bucket or pool of water, cooling the air?
http://en.wikipedia.org/wiki/Evaporative_cooler#Physical_principles
http://en.wikipedia.org/wiki/Windcatcher
Replies from: Jack↑ comment by Jack · 2011-10-06T02:48:33.581Z · LW(p) · GW(p)
"Water produces cold" is a plausible hypothesis for someone using Earth/Air/Water/Fire chemistry.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-06T03:28:15.238Z · LW(p) · GW(p)
They did well enough to figure out or intuit or guess that a simpler explanation was better: You're not giving them enough credit, as some went beyond that chemistry.
All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods.
Aristotle speaking about Thales:
"For it is necessary that there be some nature (φύσις), either one or more than one, from which become the other things of the object being saved... Thales the founder of this type of philosophy says that it is water."
See also here.
So granted that they could narrow it down to one "element", was it possible for them to do better than to guess as to the nature of thermodynamics? To guess which is the absence of the other?
Replies from: Jack↑ comment by Jack · 2011-10-06T03:42:05.124Z · LW(p) · GW(p)
As my reply to your original comment indicates I give them plenty of credit -- I'm not sure they didn't guess that cold was the absence of heat.
You have the pre-socratics a bit mixed up. Heraclitus and Thales are before the five element system of Aristotle. Heraclitus only had three elements in his cosmology and fire was the most important. Some ancient cosmologies made one element central... I'm not sure what that has to do with the question?
But certainly it is possible some of them surmised that cold was the absence of fire or something like that.
↑ comment by Desrtopa · 2011-10-06T03:16:10.667Z · LW(p) · GW(p)
A number of substances have large heats of solution, and appear to "create" cold when added to water. Some, like calcium chloride, would likely have been known in Classical Greece.
Edit: My mistake. Dissolution of calcium chloride is actually exothermic. I'm not sure if any salts with strongly endothermic dissolution occur in a naturally pure state.
↑ comment by JoshuaZ · 2011-10-06T02:54:50.661Z · LW(p) · GW(p)
How would this belief pay rent if one doesn't have a lot of chemistry to start off with and hasn't done a very large set of experiments? Would it look any different than cold is the absence of heat? The flowing behavior can be easily explained either way.
Also, one thing that's very clear from a lot of history is how much the ancients could have learned if they had just been a bit more willing to do direct experiments. They did them, but only on rare occasions. There's no reason that the scientific method could not have shown up in, say, 200 BCE.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-06T03:32:57.094Z · LW(p) · GW(p)
A description of an experiment they could have done would be a fine answer to my question, even though they weren't inclined to do them.
Perhaps the experiment is obvious to you? I don't know what it would be.
How would this belief pay rent
This is a good question and I'm not confident it could or couldn't. It's more a thought experiment to rethink how much medium-hanging fruit there is today. If they could have figured it out even though it couldn't have paid rent, so much the better.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-10-06T03:43:25.730Z · LW(p) · GW(p)
Perhaps the experiment is obvious to you? I don't know what it would be.
That's my problem. I can see sets of experiments that form long chains that eventually get this result, but I don't see it as an easy result by itself.
I suppose that if they had kept experimenting with Hero's version of the steam engine they might have started to develop the right stuff. That might be the most natural way that it could have gone.
Part of the problem is that in order to get decent chemistry you need to do enough experiments to understand conservation of mass. And when gases are released that requires very careful measurements. And in order to even start thinking in that vein you probably want enough physics to understand that mass matters a lot. (There's good reason that conservation of mass becomes a discussed and experimented-with issue after Newton and Galileo and all those guys had developed basic physics.) Then, if one also has steam engines and a notion of work, one can start doing careful experiments and see how things function in terms of specific heat (how different substances take different amounts of heat to reach the same temperature). If one combines that with how gases behave, and with the idea that gases are composed of tiny particles, then a pseudokinetic theory of heat results.
I know however that the caloric theory of heat didn't require a correct theory of gases. This makes me wonder if there's an easier experimental pathway, whether this is a lucky guess, or whether it is just that there are a lot more examples in nature of things that seem to produce heat than things that seem to produce cold. (As has already been noted in this subthread, there are definitely things that would seem to produce cold.)
↑ comment by Oscar_Cunningham · 2011-10-03T08:55:03.069Z · LW(p) · GW(p)
"Heat" isn't a thing either. It's all just molecules bopping around. A very smart person might have been able to deduce this (or at least raise it as a possibility) by thinking about the fact that rubbing two sticks together makes heat.
Replies from: MinibearRex↑ comment by MinibearRex · 2011-10-03T15:20:25.465Z · LW(p) · GW(p)
The original theory of thermodynamics grew out of things like this, except the possibility that they considered had to do with a "caloric fluid". All objects had this particular fluid stored inside of them, and by rubbing the sticks together or something like that, you could simply release the fluid while gradually breaking down the object. Which is a reasonable conclusion, assuming you don't have all the historical background that we do in atomic physics.
comment by DanArmak · 2016-10-12T14:02:14.254Z · LW(p) · GW(p)
I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.
When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?
[pollid:1165]
Replies from: TheOtherDave, username2, CCC, TheAncientGeek↑ comment by TheOtherDave · 2016-10-13T04:05:12.443Z · LW(p) · GW(p)
When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?
Depends on context.
When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.
I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.
↑ comment by CCC · 2016-10-13T13:49:46.940Z · LW(p) · GW(p)
"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.
Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.
AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.
↑ comment by TheAncientGeek · 2016-10-13T13:32:08.612Z · LW(p) · GW(p)
I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One of the implications of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.
Replies from: DanArmak, hairyfigment↑ comment by DanArmak · 2016-10-14T18:51:54.792Z · LW(p) · GW(p)
Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. "make 1000 paperclips", not just "make paperclips"), a rational agent pursuing that value may consume infinite resources making more paperclips just in case it's somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed.
On this view, all values need to be able to trade off one another (which implies a common quantitative utility measurement). Even if it seems obvious that the chance you're wrong about having made 1000 paperclips is very small, and you shouldn't invest more resources in that instead of working on your next value, this needs to be explicit and quantified.
In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-10-17T13:49:08.419Z · LW(p) · GW(p)
Ingenious. However, I can easily get round it by adding the rider that morality is concerned with conflicts between individuals. As stated, that is glib, but it can be motivated. Conflicts between individuals (in the absence of rules about how to distribute resources) are destructive, leading to waste of resources. (Yes, I can predict the importance of various kinds of "fairness" to morality.) Conflicts within individuals are much less so. Conflicts aren't a problem because they are conflicts; they are a problem because of their possible consequences.
Replies from: DanArmak↑ comment by DanArmak · 2016-10-17T18:33:21.681Z · LW(p) · GW(p)
I'm not sure what you mean by conflict between individuals.
If you mean actual conflict like arguing or fighting, then choosing between donating to save five hungry people in Africa vs. two hungry people in South America isn't a moral choice if nobody can observe your online purchases (let alone counterfactual ones) and develop a conflict with you. Someone who secretly invents a cure for cancer doesn't have moral reasons to cure others, because they don't know he can and are not in conflict with him.
If you mean conflict between individuals' own values, where each hungry person wants you to save them, then every single decision is moral because there are always people who'd prefer you give them your money instead of doing anything else with it, and there are probably people who want you dead as a member of a nationality, ethnicity or religion. Apart from the unpleasant implications of this variant of utilitarianism, you didn't want to label all decisions as moral.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-10-18T13:04:49.990Z · LW(p) · GW(p)
I am not taking charity to be a central example of ethics.
Charity, societal improvement, etc., are not centrally ethical, because the dimension of obligation is missing. It is obligatory to refrain from murder, but supererogatory to give to charity. Charity is not completely divorced from ethics, because gaining better outcomes is the obvious flipside of avoiding worse outcomes, but it does not have every component of that which is centrally ethical.
Not all value is morally relevant. Some preferences can be satisfied without impacting anybody else, preferences for flavours of ice cream being the classic example, and these are morally irrelevant. On the other hand, my preference for loud music is likely to impinge on my neighbour's preference for a good night's sleep: those preferences have a potential for conflict.
Charity and altruism are part of ethics, but not central to ethics. A peaceful and prosperous society is in a position to consider how best to allocate its spare resources (and utilitarianism is helpful here, without being a full theory of ethics), but peace and prosperity are themselves the outcome of a functioning ethics, not things that can be taken for granted. Someone who treats charity as the outstanding issue in ethics is, as it were, looking at the visible 10% of the iceberg while ignoring the 90% that supports it.
If you mean conflict between individuals' own values,
I mean destructive conflict.
Consider two stone age tribes. When a hunter of tribe A returns with a deer, everyone falls on it, trying to grab as much as possible, and they end up fighting and killing each other. When the same thing happens in tribe B, they apportion the kill in an orderly fashion according to a predefined rule. All other things being equal, tribe B will do better than tribe A: they are in possession of a useful piece of social technology.
Replies from: DanArmak↑ comment by hairyfigment · 2016-10-13T20:02:28.694Z · LW(p) · GW(p)
Were the Babyeaters immoral before meeting humans?
If not, what would you like to call the thing we actually care about?
Replies from: TheAncientGeek, CCC↑ comment by TheAncientGeek · 2016-10-13T21:03:47.901Z · LW(p) · GW(p)
If I don't use "moral" as a rubber stamp for all and any human value, you don't run into CCCs problem of labeling theft and murder as moral because some people value them. That's the upside. Whats the downside?
↑ comment by CCC · 2016-10-14T10:30:22.010Z · LW(p) · GW(p)
What they did was clearly wrong... but, at the same time, they did not know it, and that has relevance.
Consider; you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.
The above paragraph holds even if the device also causes lightning to strike a different person in China every time you press the button. (Although, in this case, creating the device was presumably an immoral act).
So, back to the babyeaters; some of their actions were immoral, but they themselves were not immoral, due to their ignorance.
Replies from: hairyfigment↑ comment by hairyfigment · 2016-10-14T10:37:41.105Z · LW(p) · GW(p)
Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters IIRC suggested this information might change their minds. Because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced this.
Replies from: CCC↑ comment by CCC · 2016-10-28T09:51:45.765Z · LW(p) · GW(p)
Hmmm. I had to go back and re-read the story.
...I notice that, while they were not ignorant that they were causing pain and emotional distress, they did honestly believe that they were doing the best thing and, indeed, even made a genuine attempt to persuade humanity, from first principles, that this was the right and good thing to do.
So they were doing, at all times, the action which they believed to be most moral, and were apparently willing to at least hear out contrary arguments. I still maintain, therefore, that their actions were immoral but they themselves were not; they made a genuine attempt to be moral to the best of their ability.
comment by lessdazed · 2011-10-13T02:29:43.540Z · LW(p) · GW(p)
If asked to guess a number that a human chose between zero and what they say is "infinity", how would one go about assigning probabilities so as to both a) assign higher numbers lower probabilities on average than lower numbers, and b) assign low-complexity numbers higher probabilities than high-complexity ones?
For example, 3^^^3 is more likely than 3^^^3 - 42.
Is a) necessary so that the total probability adds up to 1? Generally, what other things than a) and b) would be needed when guessing most humans' "random" number?
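(A minimal sketch of one way to satisfy both desiderata, not a worked-out answer: weight each number by 2 raised to minus its description length, where description length is crudely approximated by the shortest representation we happen to know for it - decimal digits or a named shorthand. The shorthand table and candidate list below are hypothetical, and the normalization is only over that finite candidate set.)

```python
# Crude complexity-weighted prior: shorter descriptions get higher probability.
# This makes most large numbers unlikely (property a) while still favouring
# compactly describable large numbers, e.g. 3^^^3 over 3^^^3 - 42 (property b).

def description_length(n, known_shorthands):
    """Stand-in for Kolmogorov complexity: length of the shortest string we
    know that denotes n (its decimal expansion or a named shorthand)."""
    candidates = [str(n)] + [s for s, v in known_shorthands.items() if v == n]
    return min(len(c) for c in candidates)

shorthands = {"10^6": 10**6, "2^32": 2**32}   # hypothetical table of compact names
candidates = [7, 42, 1000, 123457, 10**6, 2**32]

weights = {n: 2.0 ** -description_length(n, shorthands) for n in candidates}
total = sum(weights.values())
probabilities = {n: w / total for n, w in weights.items()}

for n, p in sorted(probabilities.items()):
    print(n, round(p, 4))
```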
Replies from: NihilCredo↑ comment by NihilCredo · 2011-10-13T10:56:11.929Z · LW(p) · GW(p)
I think (a) is a special case of (b).
Replies from: endoself
comment by lessdazed · 2011-10-08T22:00:10.319Z · LW(p) · GW(p)
How common are game theory concepts that are not expressed in nature?
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2011-10-09T17:57:37.137Z · LW(p) · GW(p)
I don't understand you, could you give an example?
Replies from: lessdazed, Normal_Anomaly↑ comment by lessdazed · 2011-10-14T02:34:58.308Z · LW(p) · GW(p)
I was surprised to find out about stotting being cooperation between cheetahs and gazelles. I was amused by the "rock-paper-scissors" common side-blotched lizard.
In nature, is there a Rubinstein bargaining model or a game without a value, for example?
↑ comment by Normal_Anomaly · 2011-10-13T01:32:07.714Z · LW(p) · GW(p)
Well, the superiority of Tit-for-Tat to most other Iterated PD strategies was discovered by evolutionary sims, and evidence has been found of its being used in nature. For instance, the behavior of WWI soldiers who stopped killing each other in the trenches by mutually choosing to only fire their artillery when fired upon first, and several instances in animals. I'm too lazy to look up the latter, but I'm pretty confident they're in The Selfish Gene. I think lessdazed is asking if there are any other important game theory findings that don't have that kind of real world support.
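For concreteness, here is a minimal sketch (mine, not from the sims mentioned above) of Tit-for-Tat in an iterated Prisoner's Dilemma with the standard textbook payoffs; it shows TFT sustaining mutual cooperation while losing only the first round against a pure defector.

```python
# Payoffs as (my score, their score) for moves (mine, theirs): the standard
# temptation/reward/punishment/sucker values (5, 3, 1, 0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): TFT loses only the first round
```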
Replies from: NancyLebovitz, lessdazed↑ comment by NancyLebovitz · 2011-10-17T11:24:13.000Z · LW(p) · GW(p)
From memory of The Evolution of Cooperation-- the soldiers didn't refuse to fire their artillery. They aimed to miss.
Artillery truces drove the generals crazy, and they tried various solutions that I don't remember. None of the solutions worked until they discovered by accident that frequently rotating the artillery crews meant that histories of trust couldn't be developed.
Perhaps the generals could be viewed as building cooperation at their own level to maintain the killing.
Replies from: wedrifid, lessdazed↑ comment by wedrifid · 2011-10-17T12:32:25.393Z · LW(p) · GW(p)
Perhaps the generals could be viewed as building cooperation at their own level to maintain the killing.
And as defectors against the soldiers. That sounds about right. If only soldiers were better at coordinating against their commanding officers!
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-10-17T16:08:04.562Z · LW(p) · GW(p)
The standard prisoner's dilemma assumes a symmetrical grid, with both prisoners getting the same punishments under the same circumstances. I don't know whether unequal power (access to rewards, risk and severity of punishment) has been explored.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-18T09:29:48.224Z · LW(p) · GW(p)
The standard prisoner's dilemma
This isn't a prisoner's dilemma. But the term 'defection' is not specific to the prisoner's dilemma.
I don't know whether unequal power (access to rewards, risk and severity of punishment) has been explored.
They have. Both with actual Prisoner's Dilemmas with non-equivalent payoffs and in various other games which take similar forms. In the former case the guy with the lower payoff tends to defect more to try to get 'equality'. But that is in a situation in which the subjects consider themselves of equal status (probably volunteer undergrads).
↑ comment by lessdazed · 2011-10-17T16:35:14.097Z · LW(p) · GW(p)
Perhaps the generals could be viewed as building cooperation at their own level to maintain the killing.
What would they be thinking? What would their goals be, and why?
Replies from: gwern, NancyLebovitz↑ comment by gwern · 2011-10-17T16:59:54.130Z · LW(p) · GW(p)
Promotion and social esteem? The paths of glory lead but to the grave - for thems as do the actual fighting.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-17T23:18:23.857Z · LW(p) · GW(p)
When two executives are each trying to get their organizations to defect first in a standard prisoner's dilemma when the organizations both have impulses towards cooperation, the identical strategies among the executives do not constitute "cooperation" between them.
I agree each has motivations to see that their orders are followed regardless of what is good for the organizations, but this is not cooperation because both would have their organizations defect regardless of whether the other organization was inclined to cooperate or defect.
Cooperation between generals on two sides might look something like the Battle of Tannenberg, where each Russian general may have wanted the Germans to beat the other general before beating the Germans himself. That would be manifested as lack of conflict, such as the Russian First Army waiting for the German Eighth Army to beat the Russian Second Army before joining the battle.
Cooperation to induce fighting might be each HQ broadcasting vulnerable locations for the other artillerists to shoot at, or similar. I don't know of any such cases, but then again, I wouldn't necessarily know of them.
↑ comment by NancyLebovitz · 2011-10-17T17:12:26.401Z · LW(p) · GW(p)
They want their side to win?
They're caught up in being generals, so they think there's supposed to be fighting?
They want their orders to be followed?
Replies from: lessdazed↑ comment by lessdazed · 2011-10-17T23:06:17.780Z · LW(p) · GW(p)
They want their side to win?
A lull in artillery fire would generally support one side or the other. I don't think that generals' interests systematically differed from those of their sides in such a way as to make it in both sides' generals' interests for there to be artillery fire rather than no artillery fire.
I think it is more likely that each was mistakenly overconfident that they could win with their strategy and tactics. WWI is replete with examples of generals' overconfidence and I think that is a much better explanation of why each would prefer his orders followed than wanting to "maintain the killing"/"supposed to be fighting", which seems like suggesting they are innately evil.
There are many examples of officers thinking the war would be best conducted under their offensive strategy when all were wrong and a defensive strategy would have been better.
They want their orders to be followed?
This is a very good point.
I still don't see any sign of cooperation among generals across sides. I think it is more likely that each correctly thought the truces bad for morale, and the Central Powers were concerned their troops would cease to have the morale for offensive operations while the Entente was concerned that their troops would cease to have the morale necessary to fight at all. This would leave some wrong about who the truces favored.
↑ comment by lessdazed · 2011-10-13T02:14:49.898Z · LW(p) · GW(p)
mutually choosing to only shoot when shot at
I am under the anecdotal impression that this applied far more to explosives, particularly trench mortars, than it did to bullets, having read many more primary than secondary sources for the First World War.
If I recall correctly, German snipers were largely assigned to sections of front, while British and French snipers were assigned to regular units that rotated in and out of the front depending on casualties, strategic considerations, and the like (so "the Germans" wouldn't be one entity to negotiate with). If true, this might partially explain why shooting truces were less common than mortar truces. This is in addition to the usual rotation of regular units on both sides that would prevent them from becoming too familiar with the enemy.
Another factor is that it is almost always plausible to refrain from firing artillery at targets due to supply concerns. This seems like it would make an artillery truce easier to de-escalate and maintain.
Do you have sources for (non-holiday, non-corpse collection) shooting truces?
Your point stands, obviously, regardless of the weapon types.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-10-13T10:12:40.474Z · LW(p) · GW(p)
I meant artillery truces, and I've fixed my last comment to be more clear. Sorry for the lack of precision.
comment by lessdazed · 2011-10-03T07:16:51.556Z · LW(p) · GW(p)
Assuming infinite matter were available, is there a limit to the possible consciousnesses that could be made out of it?
Replies from: wedrifid↑ comment by wedrifid · 2011-10-03T08:18:27.713Z · LW(p) · GW(p)
Assuming infinite matter were available, is there a limit to the possible consciousnesses that could be made out of it?
No limit, unless you construct an arbitrary definition of 'consciousness' that for some reason decrees that vast sets of different consciousnesses must be lumped in together as one.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-03T08:32:59.289Z · LW(p) · GW(p)
Assuming a speed limit of communication such as light speed, why couldn't sufficiently large minds always either be made from less matter or merely be larger versions of smaller, identical patterns?
Replies from: pengvado↑ comment by pengvado · 2011-10-03T10:14:22.690Z · LW(p) · GW(p)
If you're talking about possibility rather than efficiency, then what does a speedlimit have to do with anything? A big algorithm (mind or otherwise) that requires too much nonlocal communication will just run slowly.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-03T10:18:48.250Z · LW(p) · GW(p)
With no speed limit, a designer of a bigger mind could easily take advantage of its size to form new, unique mind patterns by linking distant parts.
With the speed limit, many big minds are in exactly the same pattern as smaller ones, only slower.
If a mind is big enough, it may dwarf its components such that it is consciously the same as a smaller mind in a similar pattern.
comment by Dr_Manhattan · 2011-10-03T00:47:40.307Z · LW(p) · GW(p)
http://becominggaia.wordpress.com/2011/03/15/why-do-you-hate-the-siailesswrong/#entry I'll reserve my opinion about this clown, but honestly I do not get how he gets invited to AGI conferences, having neither work nor even serious educational credentials.
Replies from: ArisKatsaris, wedrifid, None, Solvent, vi21maobk9vp↑ comment by ArisKatsaris · 2011-10-04T21:59:04.894Z · LW(p) · GW(p)
I'll reserve my opinion about this clown
Downvoted. Unless "clown" is his actual profession, you didn't reserve your opinion.
↑ comment by wedrifid · 2011-10-03T07:23:55.948Z · LW(p) · GW(p)
Wow, I loved the essay. I hadn't realized I was part of such a united, powerful organisation and that I was so impressively intelligent, rhetorically powerful and ruthlessly self interested. I seriously felt flattered.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2011-10-03T08:03:45.665Z · LW(p) · GW(p)
You are in a Chinese room, according to his argument. No one of us is as cruel as all of us.
↑ comment by [deleted] · 2011-10-03T16:47:46.679Z · LW(p) · GW(p)
Not to call attention to the elephant in the room, but what exactly are Eliezer Yudkowsky's work and educational credentials re: AGI? I see a lot of philosophy relevant to AI as a discipline, but nothing that suggests any kind of hands-on-experience...
Replies from: Dr_Manhattan↑ comment by Dr_Manhattan · 2011-10-03T18:32:27.871Z · LW(p) · GW(p)
This for one http://singinst.org/upload/LOGI//LOGI.pdf is in the ballpark of AGI work. Plus FAI work, while not being on AGI per se, is relevant and interesting to a rare conference in the area. Waser is pure drivel.
↑ comment by Solvent · 2011-10-03T06:22:43.641Z · LW(p) · GW(p)
He didn't actually make any arguments in that essay. That frustrates me.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-03T07:34:00.699Z · LW(p) · GW(p)
They...build a high wall around themselves rather than building roads to their neighbors. I can understand self-protection and short-sighted conservatism but extremes aren’t healthy for anyone...repetitively screaming their fear rather than listening to rational advice. Worse, they’re kicking rocks down on us.
If it weren’t for their fear-mongering...AND their arguing for unwise, dangerous actions (because they can’t see the even larger dangers that they are causing), I would ignore them like harmless individuals...rather than [like] junkies who need to do anti-societal/immoral things to support their habits...fear-mongering and manipulating others...
...very good at rhetorical rationalization and who are selfishly, unwilling to honestly interact and cooperate with others. Their fearful, conservative selfishness extends far beyond their “necessary” enslavement of the non-human and dangerous...raising strawmen, reducing to sound bites and other misdirections. They dismiss anyone and anything they don’t like with pejoratives like clueless and confused. Rather than honest engagement they attempt to shut down anyone who doesn’t see the world as they do. And they are very active in trying to proselytize their bad ideas...
In a sense, they are very like out-of-control children. They are bright, well-meaning and without a clue of the likely results of their actions. You certainly can’t hate individuals like that — but you also don’t let them run rampant...
What do you mean no arguments? Just read the above excerpts...what do you think those are, ad hominems and applause lights?
Replies from: Solvent, endoself↑ comment by Solvent · 2011-10-03T07:56:48.510Z · LW(p) · GW(p)
...I think that that was one of those occasional comments you make which are sarcastic, and which no-one gets, and which always get downvoted.
But I could be wrong. Please clarify if you were kidding or not, for this slow uncertain person.
Replies from: lessdazed↑ comment by vi21maobk9vp · 2011-10-03T06:55:23.299Z · LW(p) · GW(p)
Maybe he submits papers and the conference program committee finds them relevant and interesting enough?
After all, Yudkowsky has no credentials to speak of, either - what is SIAI? Weird charity?
I read his paper. Well, the points he raises against the FAI concept and for rational cooperation are quite convincing-looking. So are the pro-FAI points. It is hard to tell which are more convincing, with both sides being relatively vague.
Replies from: lessdazed, wedrifid, Solvent↑ comment by lessdazed · 2011-10-03T07:48:15.642Z · LW(p) · GW(p)
Based on the abstract, it's not worth my time to read it.
Abstract. Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) meets these criteria on four separate counts by expecting a good result after: 1) it not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and finally 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB) is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?
Points 2), 3), and 4) are simply inane.
Replies from: None↑ comment by wedrifid · 2011-10-03T08:22:37.911Z · LW(p) · GW(p)
Maybe he submits papers and conference program comittee find them relevant and interesting enough?
Which invites the question of why clearly incompetent people make up the program committee. His papers look like utter drivel mixed with superstition.
Replies from: Vladimir_Nesov, vi21maobk9vp↑ comment by Vladimir_Nesov · 2011-10-03T10:34:06.289Z · LW(p) · GW(p)
Interestingly, back in 2007, when I was naive and stupid, I thought Mark Waser one of the most competent participants of agi and sl4 mailing lists. Must be something appealing to an unprepared mind in the way he talks. Can't simulate that impression now, so it's not clear what that is, but probably mostly general contrarian attitude without too many spelling errors.
↑ comment by vi21maobk9vp · 2011-10-03T08:31:03.942Z · LW(p) · GW(p)
If you are right, it is good that the public AGI field is composed of stupid people (LessWrong is prominent enough to attract - at least once - the attention of anyone whom LW could possibly convince). If you are wrong, it is good that his viewpoint is published, too, so that people can try to find a balanced solution. Now, in what situation should we not promote that status quo?
Replies from: wedrifid, lessdazed↑ comment by wedrifid · 2011-10-03T09:57:12.152Z · LW(p) · GW(p)
Now, in what situation should we not promote that status quo?
Bad thinking happens without me helping to promote it. If there ever came a time when human thinking in general prematurely converged due to a limitation of reasonably sound (by human standards) thought then I would perhaps advocate adding random noise to the thoughts of some of the population in a hope that one of the stupid people got lucky and arrived at a new insight. But as of right now there is no need to pay more respect to silly substandard drivel than what the work itself merits.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-03T08:51:17.832Z · LW(p) · GW(p)
That's a fully general counterargument comprised of the middle ground fallacy and the fallacy of false choice.
We should not promote that status quo if his ideas - such as they are amid clumsily delivered, wince-inducing rhetorical bombast - are plainly stupid and a waste of everyone's time.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2011-10-03T09:21:17.907Z · LW(p) · GW(p)
It is not a fully general counterargument, because only if the FAI approach is right is it a good idea to suppress open dissemination of some AGI information.
Replies from: wedrifid, lessdazed↑ comment by wedrifid · 2011-10-03T10:01:01.649Z · LW(p) · GW(p)
It is not a fully general counterargument, because only if the FAI approach is right is it a good idea to suppress open dissemination of some AGI information.
That isn't true. It would be a good idea to suppress some AGI information if the FAI approach is futile and any creation of AGI would turn out to be terrible.
↑ comment by lessdazed · 2011-10-03T09:42:17.471Z · LW(p) · GW(p)
information
It's a general argument to avoid considering whether or not something even is information in a relevant sense.
I'm willing to accept "If you are wrong, it is good that papers showing how you are wrong are published," but not "If you are right, there is no harm done by any arguments against your position," nor "If you are wrong, there is benefit to any argument about AI so long as it differs from yours."
Replies from: wedrifid, vi21maobk9vp↑ comment by vi21maobk9vp · 2011-10-03T13:38:40.291Z · LW(p) · GW(p)
Well, I mean more specific case. FAI approach, among other things, presupposes that building FAI is very hard and in the meantime it is better to divert random people from AGI to specialized problem-solving CS fields. Or into game theory / decision theory.
Superficially, he references some things that are reasonable; he also implies some other things that are considered too hard to estimate (and so unreliable) on LessWrong.
If someone tries to make sense of it, she either builds a sensible decision theory out of these references (not entirely excluded), follows the references to find both FAI and game-theoretical results that may be useful, or fails to make any sense (the suppression case I mentioned) and decides that AGI is a freak field.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-10-03T14:03:34.392Z · LW(p) · GW(p)
FAI approach
Talk of "approaches" in AI has a similar insidious effect to that of "-ism"s of philosophy, compartmentalizing (motivation for) projects from the rest of the field.
Replies from: jsalvatier↑ comment by jsalvatier · 2011-10-03T14:45:27.727Z · LW(p) · GW(p)
That's an interesting idea. Would you share some evidence for that? (anecdotes or whatever). I sometimes think in terms of a 'bayesian approach to statistics'.
Replies from: lessdazed↑ comment by Solvent · 2011-10-03T07:15:42.189Z · LW(p) · GW(p)
Which paper of his did you read? He has quite a few.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2011-10-03T07:34:46.338Z · LW(p) · GW(p)
AGI-2011 one.
comment by MarkusRamikin · 2011-12-05T13:41:26.944Z · LW(p) · GW(p)
http://www.questionablecontent.net/view.php?comic=2070
Let's get FAI right. It would be the ultimate insult if we were ever turned into paperclips by something naming itself Gary.
comment by lessdazed · 2011-10-22T13:03:04.798Z · LW(p) · GW(p)
I face a trivial inconvenience. If I want to send someone a message through LW, I am held up by trying to think of a Subject line.
What is a good convention or social norm that would enable people to not have to think about what to put there and how that affects the message?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-10-22T15:17:46.808Z · LW(p) · GW(p)
This isn't quite an answer to your question, but my approach is to write the message first. Usually by then the subject is obvious.
But for where it isn't, I generally go with something like "A question" or "A thought" or "Some possibilities" or something equally neutral like that.
comment by lessdazed · 2011-10-13T14:54:23.135Z · LW(p) · GW(p)
If there is a safe and effective way to induce short term amnesia, wouldn't that be useful for police lineups?
People are good at picking the person who most resembles who they saw, but not at determining if someone was who they saw. Amnesia would allow people to pick among different lineups without remembering who they chose in the first lineup and whether or not that is someone in a later lineup.
People would be given a drug or machine interfering with their memory and pick someone out of a lineup of a suspect and similar looking people. Then, the person they identified as who they saw would be removed and they would be asked to pick again. One could also have a new lineup with the person chosen from the first one and new extras, but I think the greatest benefit would be if the selected person is removed. This would allow one to see if the witness actually recognizes the person or chose the best fit in the first lineup and is subsequently remembering that person as the culprit.
comment by Dr_Manhattan · 2011-10-06T19:33:30.877Z · LW(p) · GW(p)
Interesting - http://www.takeonit.com/question/332.aspx (Is living forever worthwhile?)
comment by lessdazed · 2011-10-02T19:58:39.232Z · LW(p) · GW(p)
Negative utility: how does it differ from positive utility, and what is the relationship between the two?
Useful analogies might include the relationship of positive numbers to negative ones, the relationship of hot to cold, or other.
Replies from: saturn, DanielLC, vi21maobk9vp, quinsie↑ comment by saturn · 2011-10-03T01:34:12.034Z · LW(p) · GW(p)
Mathematically, all that matters is the ratio of the differences in the utilities of the possible alternatives, so it's not really important whether utilities are positive or negative. Informally, negative utility generally means something less desirable than the status quo.
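A small sketch illustrating the point, with a made-up umbrella example: applying any positive affine transformation to the utilities (here making them all positive) leaves the agent's choice unchanged.

```python
# The agent picks the action with the highest expected utility. Shifting or
# positively rescaling the utility function cannot change which action wins,
# so the sign of a utility value carries no intrinsic meaning.

lotteries = {
    "take umbrella":  [(0.7, "dry"), (0.3, "dry")],   # stays dry either way
    "leave umbrella": [(0.7, "dry"), (0.3, "wet")],
}
utility = {"dry": 2.0, "wet": -5.0}   # hypothetical outcome utilities

def best_action(u):
    def expected(action):
        return sum(p * u[outcome] for p, outcome in lotteries[action])
    return max(lotteries, key=expected)

rescaled = {k: 10.0 * v + 100.0 for k, v in utility.items()}  # now all positive

print(best_action(utility))    # -> "take umbrella"
print(best_action(rescaled))   # -> "take umbrella" (same choice)
```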
↑ comment by vi21maobk9vp · 2011-10-02T20:35:42.142Z · LW(p) · GW(p)
Well, in the simplest case (when we are not talking about being vs. non-being), the utility function is something that you can shift and even multiply by a constant. The only thing that matters for a selfish rational agent, which either does not consider ceasing to be or ascribes some utility to it, is the ratio of utility differences. You usually maximize expected utility; and you do not care about the absolute value, but only about the actions you are going to take. Shifts and multiplication by positive constants do not change any inequality between expectations of utility. And shifts can make negative become positive and vice versa.
Now, if we consider moral questions with a variable count of agents, we can find ourselves in a situation where we want to compare being to non-being - and some people implicitly ascribe non-being utility zero. Also, we can try to find a common scale for the wish intensities different people have. Buddhism, with its stopping of reincarnation, seems to ascribe negative utility to any form of being before transcending into nirvana. Whether it is better not to be born or to be born into the modern world in Africa is a question that can get different answers in Western Europe; and we can expect that as accurate a description as possible of Western Europe could cause a pharaoh of Egypt to say that it is better not to be born than to be born into such a scary world.
↑ comment by quinsie · 2011-10-02T20:26:13.018Z · LW(p) · GW(p)
A thing has negative utility equal to the positive utility that would be gained from that thing's removal. Or, more formally, for any state X such that the utility of X is Y, the utility of the state ~X is -Y.
Replies from: quinsie, printing-spoon, vi21maobk9vp, lessdazed↑ comment by quinsie · 2011-10-02T21:06:05.619Z · LW(p) · GW(p)
Yep, definitely needs some clarification there.
Humans don't distinguish between the utility for different microscopic states of the world. Nobody cares if air molecule 12445 is shifted 3 microns to the right, since that doesn't have any noticeable effects on our experiences. As such, a state (at least for the purposes of that definition of utility) is a macroscopic state.
"~X" means, as in logic, "not X". Since we're interested in the negative utility of the floor being clear, in the above case X is "the airplane's floor being clear" and ~X is "the airplane's floor being opaque but otherwise identical to a human observer".
In reality, you probably aren't going to get a material that is exactly the same structurally as the clear floor, but that shouldn't stop you from applying the idea in principle. After all, you could probably get reasonably close by spray painting the floor.
To steal from Hofstadter, we're interested in the positive utility derived from whatever substrate level changes would result in an inversion of our mind's symbol level understanding of the property or object in question.
↑ comment by printing-spoon · 2011-10-02T20:35:52.414Z · LW(p) · GW(p)
I think you need to be more precise about what states and ~ are.
↑ comment by vi21maobk9vp · 2011-10-02T21:48:23.221Z · LW(p) · GW(p)
In that formulation, addition of the thing has utility equal to minus the utility of removal of the same thing. And that holds only if addition/removal can be defined.
About states - there are too many of them, and some are macroscopically different but irrelevant to human utility (I am next to sure it is possible to shift some distant galaxies in a way that many sapient beings would be able to see a different sky and none would care).
The meaningful thing is the ratio of utility differentials between some states.
↑ comment by lessdazed · 2011-10-02T20:41:34.279Z · LW(p) · GW(p)
that thing's removal
How is this defined? If an airplane has a clear floor such that its passengers vomit whenever they look down, removing the floor would put them in an even worse position. We want to remove only the property of transparency, which would involve replacing the clear material with an entirely different opaque material that had all other properties identical.
What's troubling to me about the counterfactual is that it doesn't seem to have an objective baseline, a single thing that is ~X, so we are left comparing Y(X) with Y(Z), the utility of thing X instead of thing Z. I'm not sure how valid it is to talk about simply removing properties because the set of higher level properties depends on the arrangement of atoms. It seems like properties are their own thing that can be individually mixed and matched separate from material but they really can't be.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-10-02T21:06:09.087Z · LW(p) · GW(p)
If we're using 'the object in question doesn't exist' as the baseline for comparison, I'd say that the clear floor actually has positive utility. That's just counter-intuitive because we have such a strong tendency to think of the case that's currently normal as the baseline, rather than the 'doesn't exist' case.
I do agree that neither of those baselines is objectively correct in any sense (though the 'doesn't exist' one seems a bit more coherent and stable if we find a need to choose one), and that remembering that properties don't have independent existence is generally useful when considering possible cases.
comment by Douglas_Knight · 2011-11-29T01:13:18.385Z · LW(p) · GW(p)
Has the comment deletion behavior changed back?
No.
comment by ata · 2011-10-23T03:39:11.736Z · LW(p) · GW(p)
Does LW have any system in place for detecting and dealing with abuses of the karma system? It looks like someone went through around two pages of my comments and downvoted all of them between yesterday and today; not that this particular incident is a big deal, I'm only down 16 points, but I'd be concerned if it continues, and I know this sort of thing has happened before.
Replies from: KPier, None↑ comment by KPier · 2011-10-23T04:34:16.933Z · LW(p) · GW(p)
Discussed here. Short answer: no. Longer answer: Voting directly from user pages was taken away. People have also suggested limits on the amount of karma you can add/subtract from a user in a given amount of time, but if one is implemented it will likely be bigger than 16 (I'd like the ability to downvote two posts by the same user in the same day.)
But you have 5000 karma. I really wouldn't worry over 16, or 160.
Replies from: Prismattic, ata↑ comment by Prismattic · 2011-10-23T06:02:27.369Z · LW(p) · GW(p)
Why not just disallow the casting of more than one downvote by the same person within a five-minute period? How many spite-voters are going to be dedicated enough to wait around for an hour just to blitz someone's karma?
Replies from: Nominull↑ comment by Nominull · 2011-10-23T06:20:50.853Z · LW(p) · GW(p)
Because sometimes I read multiple comments in the space of five minutes, and it's not unthinkable I might want to downvote more than one of them. Any rate-limiter would have to be carefully considered not to impinge on ordinary non-pathological users. This is perhaps more important than fighting spite-voters.
This is essentially the same trade-off DRM faces. I would say that spite-voting isn't a large enough problem to need a technical solution unless somebody's being hugely egregious, and if someone's being hugely egregious, there are admins that can step in, right?
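For what it's worth, a purely hypothetical sketch (not LW's actual code; the window and cap are made-up parameters) of a rate limit aimed at the pathological case - many downvotes by one voter against one author - rather than at ordinary reading:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # assumed window
MAX_DOWNVOTES = 5       # assumed per-author cap within the window

recent = defaultdict(deque)   # (voter, author) -> timestamps of recent downvotes

def allow_downvote(voter, author, now=None):
    """Allow a downvote unless this voter has already hit the cap on this author
    within the window; ordinary voting across many authors is unaffected."""
    now = time.time() if now is None else now
    q = recent[(voter, author)]
    while q and now - q[0] > WINDOW_SECONDS:   # drop stale entries
        q.popleft()
    if len(q) >= MAX_DOWNVOTES:
        return False
    q.append(now)
    return True
```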
Replies from: ata↑ comment by ata · 2011-10-23T07:08:24.400Z · LW(p) · GW(p)
I would say that spite-voting isn't a large enough problem to need a technical solution unless somebody's being hugely egregious, and if someone's being hugely egregious, there are admins that can step in, right?
There are admins that can step in, but I'm not sure if they have in past egregious cases. Aside from Will Newsome, I think there have been other significant instances of mass downvoting (at least PJ Eby, maybe others), and (correct me if I'm mistaken) I don't recall anything being directly done about either in the end, except the removal of voting buttons from userpages after Will brought it up. That was an improvement, but it's still clearly possible, and if someone were sufficiently motivated, it would be pretty easily scriptable.
↑ comment by ata · 2011-10-23T05:20:35.520Z · LW(p) · GW(p)
Yeah, I wouldn't have proposed hard limits, I was thinking more of an automatic (i.e. not involving manually poking around in the database) means of allowing the administrators to check on large-scale suspicious voting and reverse it if necessary. (And, as I said, I'm by no means worried about my 16 precious votes (though I'd be starting to get concerned by 160), but this incident reminded me of the general problem and I wanted to check if I had missed any changes to how such things are handled.)
I might support just making all votes public; since on LW they (are supposed to) mean "more/less of this" rather than "I like/don't like you" or "I agree/disagree", I'm not sure I see any reason why that information should not be associated with the people whose opinions they represent, since that is relevant information as well. (Though of course some people prefer going in the other direction to make things consistent, hence the anti-kibitzer. But if the anti-kibitzer can be opt-in, perhaps so should not seeing other people's votes.)
But then, I vaguely remember that having been discussed before, so I'll see if I can locate said discussion(s) before attempting to start another one.
↑ comment by [deleted] · 2011-10-23T07:20:40.303Z · LW(p) · GW(p)
Karma is a mostly pointless number that doesn't really provide you with any real information.
EDIT: To clarify, I was talking about total karma. I can't really follow the details of the ensuing discussion because either one or both of the participants seem to have deleted parts of their comments -- there are quotes that have no antecedent, for instance -- and so I don't know really what to do with it.
There are two main arguments I could formulate against using karma as a measure of relative contribution. The first is based on common experience here on LW. For instance, everything proximally close to a comment by Yudkowsky seems to receive far higher karma than similar comments in other places, an effect I termed "karmic wake" in one place. No, I don't have data, but it seems that way nonetheless.
The second argument is that the more seriously people take karma, the more worthwhile it would be to exploit the system, c.f. Goodhart's Law.
If I assign any meaning to the number of downvotes received by a particular comment, it would be something along the lines of, "Of the X people (essentially unknown) who viewed this comment and read it and the surrounding context, Y more people out of some fraction of X were in a good enough mood to upvote (or a bad enough mood to downvote)." That's a far cry from "Y more people who voted believed this comment was good for LW," and it's coherent with karma systems I've interacted with in other communities.
For example, consider this somewhat recent Reddit post.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-23T08:06:03.330Z · LW(p) · GW(p)
EDIT: Re-framing campaign successful. I no longer endorse the new meaning which has been attributed to this comment.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-23T08:13:15.909Z · LW(p) · GW(p)
Funny. But I should hope upvoting and downvoting can mean more than "agree/disagree", on Less Wrong of all places.
Replies from: SilasBarta, wedrifid↑ comment by SilasBarta · 2011-10-23T21:17:09.625Z · LW(p) · GW(p)
A long-requested feature has been to have separate buttons for "agree/disagree" and "good/bad" so as to have a way to mark comments that are either:
a) widely agreed to be correct, but stated poorly or rudely, or
b) widely agreed to be wrong, but make the point well and show familiarity with the topic and site conventions.
↑ comment by wedrifid · 2011-10-23T08:29:29.488Z · LW(p) · GW(p)
Funny. But I should hope upvoting and downvoting can mean more than "agree/disagree", on Less Wrong of all places.
They do. And of the two meanings I mentioned "you are wrong" and "STFU" it is the latter that is the most significant.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-23T08:40:25.084Z · LW(p) · GW(p)
What happened to "I might agree, but you're not helpful/on topic?" Or, "I might agree, but your tone/quality of argument is below LW standards"?
Or did we turn into Youtube when I wasn't looking.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-23T09:19:43.812Z · LW(p) · GW(p)
What happened to "I might agree, but you're not helpful/on topic?" Or, "I might agree, but your tone/quality of argument is below LW standards"?
They don't get included in every non-exhaustive list. You will be pleased to note that I just employed the latter criterion.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-23T09:26:08.260Z · LW(p) · GW(p)
Uh.
Since you (presumably) are incapable of extracting information from the negative karma of the parent allow me to translate it for you: "Dude, you're wrong! STFU."
You took it upon yourself to translate the downvotes paper-machine is getting into the most rude interpretation available... you're being quite generous to yourself when defending that as a "non-exhaustive list". Not sure if you think mass-downvoting me will make this look better.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-23T10:37:48.896Z · LW(p) · GW(p)
You took it upon yourself to translate the downvotes paper-machine is getting into the most rude interpretation available... you're being quite generous to yourself when defending that as a "non-exhaustive list". Not sure if you think mass-downvoting me will make this look better.
My default interpretation of this blatant misrepresentation of the context is that you are being as disingenuous as you think you can get away with in order to make the person you are arguing with look bad. But it is probably better to dismiss that as paranoia and assume the conversation really did go completely over your head.
Either way the point is that comment karma really does convey useful information and that by denying that information with respect to negative votes paper-machine does himself a particular disservice. Not only does he lose out on understanding how people consider the comment, it necessitates people communicating with him overtly. Whether it be via body language or via 'karma', nonverbal communication allows us to avoid being overt and blunt when giving feedback - more pleasant and all round neater for everyone!
Not sure if you think mass-downvoting me will make this look better.
Two downvotes must be a record for the smallest "mass downvote" spree ever! It's almost like they were two independent votes for comments which combined condescension with muddled thinking in a way I really would prefer not to see.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-23T11:38:27.143Z · LW(p) · GW(p)
Either way the point is that comment karma really does convey useful information and that by denying that information with respect to negative votes paper-machine does himself a particular disservice. Not only does he lose out on understanding how people consider the comment, it necessitates people communicating with him overtly. Whether it be via body language or via 'karma', nonverbal communication allows us to avoid being overt and blunt when giving feedback - more pleasant and all round neater for everyone!
I understood this from the start, and for the most part I agree. However, did you literally think that paper-machine was blind to the disapproval his comment generated, as expressed through the negative karma it received? That without your post, he would stare at the negative number next to his post and have no idea what it meant? It seems more like you were just being "clever" in pointing out the irony. This was especially apparent while your comment only had that first paragraph.
In the process, you may have attacked a straw man, since it's likely that paper-machine was talking about total karma, not single comment karma. Now, that other point would have been open to argument too, but this way you got to make him look like he walked into a punch...
More importantly to me, you certainly misrepresented the range of meanings that downvotes can have around here. This is what I objected to.
Why? Because it is valuable to me that LW comment votes often represent more sophisticated evaluations than "agree/disagree+STFU". It's an advantage over most other places on the Internet, something worth defending against the "bad money drives out good" tendency. And I feel that making it sound like a downvote basically means "shut up, I disagree with you" gives a wrong impression to future posters, both givers and receivers of comment votes.
EDIT: Okay, I see where we drove away in different directions. My whole objection is about your first response to paper-machine. If that "They do" in the next one was there from the start, I missed it, and misunderstood what the "non-exhaustive list" referred to. Mea maxima culpa.
Replies from: wedrifid, lessdazed, wedrifid↑ comment by lessdazed · 2011-10-23T12:00:26.006Z · LW(p) · GW(p)
it's likely that paper-machine was talking about total karma, not single comment karma.
I think so too. Why didn't you say so in the first place?
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-23T12:15:51.800Z · LW(p) · GW(p)
At the time it didn't seem relevant to what I was thinking about. I downvoted paper's comment because, whether it referred to single-comment or total karma, I felt it was unhelpful as a response to ata's question.
↑ comment by wedrifid · 2011-10-23T11:54:07.118Z · LW(p) · GW(p)
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-23T12:04:13.968Z · LW(p) · GW(p)
I. Am. Talking. About. Your. First. Post.
Since you (presumably) are incapable of extracting information from the negative karma of the parent allow me to translate it for you: "Dude, you're wrong! STFU."
One of the negative karma points was from me. It did not stand for "you're wrong" or "STFU". So it was misrepresented.
comment by Dr_Manhattan · 2011-10-07T12:54:24.654Z · LW(p) · GW(p)
ArsTechnica article "Rise of the Machines". A bit confused, but an interesting instance of the meme. http://arstechnica.com/tech-policy/news/2011/10/rise-of-the-machines-why-we-still-read-hg-wells.ars
comment by MichaelHoward · 2011-10-06T14:15:44.694Z · LW(p) · GW(p)
There are several tribute videos on the BBC web site today, many of them very moving and inspirational. This is the most popular.
"The single best invention of life."
I can't even put it into words.
comment by Thomas · 2011-10-02T12:46:05.455Z · LW(p) · GW(p)
I estimate that a currently working and growing superintelligence has a probability in the range of 1/million to 1/1000. I am at least 50% confident that it is so.
Not a big probability, but given the immense importance of such an object, it is already a significant possibility to consider. The very near-term birth of a superintelligence is something to think about. It wouldn't be just another Sputnik, launched by people you thought were unable to build it but who turned out to be perfectly capable. We know that story well; nor would it be just a minor blow to pride, as Sputnik was for some and a triumph for those who conceived and launched it.
No, it could be checkmate on the first move.
Nonetheless, people are dismissive of any short-term success in the field. I am not, and I want to say so in an open thread.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-02T13:15:03.046Z · LW(p) · GW(p)
I estimate that a currently working and growing superintelligence has a probability in the range of 1/million to 1/1000. I am at least 50% confident that it is so.
The probability is already just an expression of your own uncertainty. Giving a confidence interval over the probability does not make sense.
Replies from: kilobug, Vaniver, gwern, bentarm, Jonathan_Graehl↑ comment by kilobug · 2011-10-02T14:48:00.663Z · LW(p) · GW(p)
Well, the probability is computed by an algorithm that is itself imperfect. "I'm 50% confident that the probability is 1/1000" means something like "My computation gives a probability of 1/1000, but I'm only 50% confident that I did it right." For example, given a complex maths problem about the probability of drawing some card pattern from a deck with twisted rules of drawing and shuffling, you can do the maths, end up with a probability of 1/5 that you'll get the pattern, and still not be confident that you didn't make a mistake in applying the laws of probability, so you'll only give 50% confidence to that answer.
And there is also the difference between belief and belief in belief. I can say something like "I believe the probability to be 1/1000, but I'm only 50% confident that this is my real belief, and not just a belief in belief."
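A crude way to collapse the first kind of uncertainty into a single number (a minimal sketch only; the fallback prior of 1/2 is an arbitrary assumption for illustration, not anything stated above) is to average the computed answer with whatever prior you would fall back on if the computation turned out to be wrong:

```python
# Model-averaging sketch: "my derivation says 1/5, but I'm only 50% sure the
# derivation is right, and if it's wrong I know nothing and fall back to 1/2."
p_if_derivation_correct = 1 / 5   # what the (possibly flawed) calculation gave
p_if_derivation_wrong = 1 / 2     # arbitrary fallback prior -- an assumption
confidence_in_derivation = 0.5

p_overall = (confidence_in_derivation * p_if_derivation_correct
             + (1 - confidence_in_derivation) * p_if_derivation_wrong)
print(p_overall)  # 0.35 -- the single probability that actually feeds a decision
```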
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-10-01T00:06:43.401Z · LW(p) · GW(p)
Maybe when you say it, that's what you mean (do you say it?), but that's pretty weak evidence about what Thomas means. Added: oops, I didn't mean to revive something two years old.
↑ comment by Vaniver · 2011-10-02T14:56:35.061Z · LW(p) · GW(p)
It sounds to me like he is describing his distribution over probabilities, and estimates at least 50% of the mass of his distribution is between 1/1,000 and 1/1,000,000. Is that a convenient way to store or deal with probabilities? Not really, no, but I can see why someone would pick it.
Replies from: bentarm↑ comment by bentarm · 2011-10-04T13:40:09.758Z · LW(p) · GW(p)
The problem with this interpretation is that it renders the initial statement pretty meaningless. Assuming he's decided to give us a centered 50% confidence interval, which is the only one that really makes sense, that means that 25% of his distribution over probabilities lies above 1/1000, and this part of the probability mass is going to dominate the rest.
For example, if you think there's a 25% chance that the "actual probability" (whatever that means) is 0.01, then your best estimate of the "actual probability" has to be at least 0.0025, which is significantly more than 1/1000, and even a 1% chance of it being 0.1 would already be enough to move your best estimate above 0.001. So it's not just that I'm not sure the concept makes sense; it's that the statement gives us basically no information in the only interpretation in which it does make sense.
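To see the tail dominance concretely, here is a toy distribution over the "actual probability" consistent with a centered 50% interval between 1/1,000,000 and 1/1,000 (the three point masses are purely illustrative assumptions, not anything Thomas stated):

```python
# Toy pdf over the "actual probability": all mass at three points for simplicity.
pdf = {
    1e-6: 0.25,  # 25% of the mass at (or below) the bottom of the stated interval
    1e-4: 0.50,  # 50% of the mass inside the interval
    1e-2: 0.25,  # 25% of the mass above the interval
}
best_estimate = sum(p * mass for p, mass in pdf.items())
print(best_estimate)  # ~0.0026: almost all of it comes from the 25% tail above 1/1000
```

The upper tail contributes 0.0025 of the roughly 0.0026 total, so the stated interval by itself tells us very little about the estimate that would actually drive decisions.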
Replies from: Vaniver↑ comment by Vaniver · 2011-10-04T16:08:40.484Z · LW(p) · GW(p)
Suppose you wanted to make a decision that is sensible for P values above X, and not sensible for P values below X. Then knowing that a chunk of the pdf is below or above X is valuable. (If you only care about whether or not the probability is greater than 1e-3, he's suggested there's a less than 50% chance that's the case.)
To elaborate a little more: he's answered one of the first questions you would ask to determine someone's pdf for a variable. One isn't enough; we need two (or hopefully more) answers. But it's still a good place to start.
↑ comment by bentarm · 2011-10-04T13:33:41.075Z · LW(p) · GW(p)
I basically agree that the part of the original comment that you quote doesn't make any sense at all, and am not attempting to come to the defence of confidence intervals over probabilities, but it does feel like there should be some way of giving statements of probability and indicating how sure one is about the statement at the same time. I think, in some sense, I want to be able to say how likely I think it is that I will get new information that will cause me to update away from my current estimate, or give a second-derivative of my uncertainty, if you will.
Let's say we have two bags: one contains 1 million normal coins, the other contains 500,000 two-headed coins and 500,000 two-tailed coins. Now, I draw a coin from the first bag and toss it - I have a 50% chance of getting a head. I draw a coin from the second bag and toss it - I also have a 50% chance of getting a head, but it does feel like there's some meaningful difference between the two situations. I will admit, though, that I have basically no idea how to formalise this - I assume somebody, somewhere, does.
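One minimal way to make the difference concrete (a sketch only, under the assumption that what matters is how much a single observation moves the estimate; the function name is just for illustration): ask what the chance of heads on a second toss is, given that the first toss came up heads. For the normal-coin bag the answer is still 50%, while for the two-headed/two-tailed bag one observation settles the question completely:

```python
from fractions import Fraction

def p_second_head_given_first_head(coins):
    """coins: list of (prior weight, p(heads)) pairs describing a bag.
    Returns P(second toss is heads | first toss was heads)."""
    p_h1 = sum(w * p for w, p in coins)        # P(first toss heads)
    p_h1h2 = sum(w * p * p for w, p in coins)  # P(first AND second toss heads)
    return p_h1h2 / p_h1

half = Fraction(1, 2)
bag1 = [(Fraction(1), half)]                        # every coin is fair
bag2 = [(half, Fraction(1)), (half, Fraction(0))]   # half two-headed, half two-tailed

print(p_second_head_given_first_head(bag1))  # 1/2 -- the estimate doesn't budge
print(p_second_head_given_first_head(bag2))  # 1   -- one toss resolves the uncertainty
```

Both bags give 50% for the first toss, but the second bag's estimate is far less resilient to new evidence, which seems to be exactly the second layer of uncertainty being gestured at.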
↑ comment by Jonathan_Graehl · 2011-10-04T23:51:18.354Z · LW(p) · GW(p)
I agree. Perhaps he means to say that his opinion is based on very little evidence and is "just a hunch".
I do think that in fitting a model to data, you can give meaningful confidence intervals for parameters of those models which correspond to probabilities (e.g. p(heads) for a particular coin flipping device). But that's not relevant here.
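For what it's worth, a minimal sketch of that last point (the flip counts and the normal approximation are assumptions for illustration): given observed flips from a particular device, you can put an interval on the parameter p(heads) itself.

```python
from statistics import NormalDist

heads, flips = 7, 20            # made-up data from one coin-flipping device
p_hat = heads / flips
# Normal-approximation 95% confidence interval for the binomial parameter p(heads).
z = NormalDist().inv_cdf(0.975)
half_width = z * (p_hat * (1 - p_hat) / flips) ** 0.5
print(p_hat - half_width, p_hat + half_width)   # roughly (0.14, 0.56)
```

Here the interval is over a parameter of a model fit to data, which is the setting where "a confidence interval for a probability" is well defined - unlike a confidence interval over one's own subjective probability.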
comment by Solvent · 2011-10-03T06:21:26.554Z · LW(p) · GW(p)
My friend suggested a point about Pascal's Mugging. There is a non-zero chance that if you tried to Pascal's-mug someone, they'd turn out to be a genie who would be pissed off by your presumptuousness, and would punish you severely enough that you should have decided not to try it in the first place. I know the argument isn't watertight, but it is entertaining.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-03T07:07:06.149Z · LW(p) · GW(p)
My friend suggested a point about Pascal's Mugging. There is a non-zero chance that if you tried to Pascal's-mug someone, they'd turn out to be a genie who would be pissed off by your presumptuousness, and would punish you severely enough that you should have decided not to try it in the first place. I know the argument isn't watertight, but it is entertaining.
What is it a non-watertight argument for exactly? That it's dangerous to try to mug people? Got it. But that's the other guy's problem, not mine. The problem is what to do when faced with a Pascal's Mugger, not whether or not it is a good idea to try to do it to others.
Replies from: Solvent↑ comment by Solvent · 2011-10-03T07:21:18.769Z · LW(p) · GW(p)
He's arguing that it's dangerous to Pascal's Mug people. This surely affects your probability that the guy who's pulling a Pascal's mugging on you is really faking.
Replies from: AdeleneDawner, wedrifid, JoshuaZ↑ comment by AdeleneDawner · 2011-10-03T09:14:39.446Z · LW(p) · GW(p)
The whole point of Pascal's Mugging is that an arbitrarily small probability of something happening is automatically swamped if there's an infinite (or merely huge enough) utility or disutility at stake, according to all the usual ways of calculating. Making the arbitrarily small probability smaller doesn't fix this.
↑ comment by wedrifid · 2011-10-03T10:10:14.565Z · LW(p) · GW(p)
What Adelene said. Add to that the fact that the small probability that the threat is real actually increases by an irrelevant amount rather than decreasing by an irrelevant amount due to the consideration in question.
Since we are already conditioning on knowing that we are subject to a mugging attempt, considering that mugging attempts are dangerous should make us slightly increase our estimate that the mugger is powerful enough to carry out the threat: the more powerful the potential mugger is, the fewer potential anti-mugging genies he has to fear. So apart from being too small an effect to be relevant, it is also an effect in the wrong direction.
↑ comment by JoshuaZ · 2011-10-06T03:51:20.019Z · LW(p) · GW(p)
This surely affects your probability that the guy who's pulling a Pascal's mugging on you is really faking.
Adelene's response is important, but I'd like to point out that if anything the danger in engaging in such muggings makes the people who do attempt them even less likely to be faking. So this isn't a solution to Pascal's Mugging; it just means that the situation is even worse. Now the mugger, instead of saying 3^^^^3 people, can just say (3^^^^3)/2 or something like that.
comment by EphemeralNight · 2011-10-14T04:03:17.848Z · LW(p) · GW(p)
So, a little while ago I watched Thor for the first time. Was I the only one who immediately thought "it's a GSV!" the first time Asgard appeared on screen?