Comments

Comment by xSciFix on Feeling Moral · 2019-07-08T18:57:34.696Z · LW · GW

Nice post! Utilitarianism definitely has its points. The trick of course is assigning values to such things as hiccups and shark attacks...

Assuming this is a one-off again:

If I care about an individual in the group of 500, say myself or my wife or whatever, I'd want you to pick 2 in either case. Option 1 gives the individual a 20% chance to die (a 1-in-5 chance), while option 2 gives the individual only a 10% chance to die (the 10% chance that everyone dies).

This is a bit more complicated than the simple math suggests though - a lot of factors come into play. Let me tweak it slightly: you're in a space colony of 500 and you have to decide what to do about a problem. You have two choices for how to handle it, with the same odds. Choice 1: 100 colonists die. Choice 2: 90% odds everyone is saved, but 10% odds the colony is wiped out.

From the perspective of someone interested in maintaining the longevity of the colony, shouldn't I take choice 1 in either case? Yes, it is the choice with 50 fewer expected lives saved, but the 10% chance of total destruction down choice 2 is an *unacceptable* fail-state. The colony can recover from a 20% population hit, but not from being entirely destroyed.
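For what it's worth, here is the back-of-the-envelope arithmetic I'm leaning on (a quick Python sketch; the numbers are just the 500-colonist framing above, nothing more):

```python
# Back-of-the-envelope numbers for the 500-person colony dilemma above.
# Choice 1: 100 colonists die for certain.
# Choice 2: 90% chance everyone is saved, 10% chance everyone dies.

population = 500

# Expected survivors
ev_choice_1 = population - 100     # 400
ev_choice_2 = 0.9 * population     # 450

# Chance that any given individual dies
p_die_choice_1 = 100 / population  # 0.20
p_die_choice_2 = 0.10              # only in the branch where everyone dies

# Chance the colony is wiped out entirely
p_wipeout_choice_1 = 0.0
p_wipeout_choice_2 = 0.10

print(ev_choice_1, ev_choice_2)                # 400 450.0 -> choice 2 wins on expected lives
print(p_die_choice_1, p_die_choice_2)          # 0.2 0.1   -> choice 2 wins for any one individual
print(p_wipeout_choice_1, p_wipeout_choice_2)  # 0.0 0.1   -> choice 1 wins on colony survival
```

So the disagreement isn't about the arithmetic; it's about whether that last line, the wipeout probability, should dominate the decision.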

Or to put it even more simply: would you sacrifice a definite 20% of the human population to remove a 10% chance of total extinction of the species?

Comment by xSciFix on Zut Allais! · 2019-07-08T18:21:28.836Z · LW · GW

I should have read this post before replying to the last one, I suppose! Things are a little clearer.

Hmm... well I had more written but for brevity's sake: I suppose my preference system looks more like 1A>1B, 2A=2B. I don't really have a strong preference for an extra 1% vs an extra $3k either way.

The pump really only functions over repeated plays; however, in that case I'd take 1B instead of 1A.

Comment by xSciFix on The Allais Paradox · 2019-07-08T17:59:59.646Z · LW · GW

Assuming this is a one-off and not a repeated iteration:

I'd take 1A because I'd be *really* upset if I walked away with nothing due to being greedy for the $27k instead of taking the sure $24k. That 1/34 is a small risk but to me it isn't worth taking - the $24k is too important for me to lose out on.

I'd take 2B instead of 2A because the difference in odds is basically negligible, so why not go for the extra $3k? I have a roughly 2/3 chance to walk away with nothing either way.
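Just to lay out the numbers I have in mind (a quick Python sketch; the gamble definitions are as I read them in the post - 1A: a certain $24k, 1B: a 33/34 chance of $27k, 2A: a 34% chance of $24k, 2B: a 33% chance of $27k):

```python
# Expected dollar value of each gamble (definitions as I read them in the post).
ev_1a = 24_000              # certain $24k
ev_1b = (33 / 34) * 27_000  # ~ $26,206
ev_2a = 0.34 * 24_000       # $8,160
ev_2b = 0.33 * 27_000       # $8,910

print(ev_1a, round(ev_1b), ev_2a, ev_2b)
# Game 2 is game 1 with every winning probability scaled down by 0.34,
# which is why picking 1A in game 1 but 2B in game 2 is the "paradoxical" pattern.
```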

I don't really see the paradox there. The point is to win, yes? If I play game 1 and pick B and hit that 1/34 chance of loss and walk away with nothing I'll be feeling pretty stupid.

> Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

But why would I pay to switch it back to A when I've already won given the conditions of B? And as Doug_S. mentions, you can take my pennies if I'm getting paid out tens of thousands of dollars.

I do see the point in it being difficult to program this type of decision making, though.

Comment by xSciFix on Detached Lever Fallacy · 2019-06-26T19:36:48.303Z · LW · GW

> Anyone knows the exact reference, do leave a comment.

Well, 11 years later but as I don't see anyone else answering... that sounds pretty much like Star Trek TNG, Season 7 Episode 12. The "lever" being the phased cloaking device letting the ships pass through asteroids.

Comment by xSciFix on Zombies: The Movie · 2019-06-13T21:14:54.107Z · LW · GW

Yeah. This is basically a great summation of the philosophical zombie question: what does it even matter? It's as you mentioned in the 'dissolving the question' posts: what do the pro-zombie people think a world in which they are correct looks like? What do we learn from this thought experiment, which is basically just another flavor of solipsism?

I'd say "inevitably generates the worst sort of Mysterious Answer to a Mysterious Question" is pretty spot on. The Zombie thing doesn't really tell us anything new or let us predict anything. Just a bunch of sophistry really.


Comment by xSciFix on Ghosts in the Machine · 2019-06-05T18:38:13.185Z · LW · GW

My intro to programming instructor did a pretty good exercise: he had us pair up, and we'd each write pseudo-code for the other person, instructing them on how to make a peanut butter & jelly sandwich, step by step from a certain starting position (walk forward 5 steps, move hand out X inches, grasp jar, twist lid, etc.). The person acting out the "code" had to do it exactly as written, without making logical leaps (as refereed by the rest of the class), in order to simulate a computer.

Needless to say not a lot of sandwiches got completed. The point was well made though, I think.
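Something like this, as a toy sketch in actual code (the world state and step names are all invented for illustration; the point is only that a literal executor won't fill in a missing step):

```python
# A toy version of the classroom exercise: the "computer" does exactly what
# each instruction says, with no common-sense gap-filling.

world = {"jar_lid_on": True, "knife_in_hand": False, "pb_on_knife": False}

def grasp_knife():
    world["knife_in_hand"] = True

def scoop_peanut_butter():
    # The literal executor refuses to improvise: if the lid is still on,
    # this step fails instead of "obviously" unscrewing it first.
    if world["jar_lid_on"]:
        raise RuntimeError("knife hit the lid - no instruction said to remove it")
    world["pb_on_knife"] = True

program = [grasp_knife, scoop_peanut_butter]  # the student forgot "twist lid"

for step in program:
    step()  # fails at scoop_peanut_butter; the sandwich never gets made
```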

Comment by xSciFix on Do We Believe Everything We're Told? · 2019-05-01T16:31:18.580Z · LW · GW

Well I suppose I'm not going to be idly reading random tabloid headlines while waiting in the checkout line anymore for starters.

So is it possible to train one's brain such that it reflexively employs the Descartes method, as it were?

Comment by xSciFix on Lawful Uncertainty · 2019-04-16T16:55:33.327Z · LW · GW

A lot of comments are saying various forms of "well, for some situations it *is* best to be random." Fine, maybe so; but the decision to 'act randomly' is arrived at after a careful analysis of the situation. It is the most rational thing to do *in that instance.* That doesn't mean that decision theory is thus refuted at all. Reaching the conclusion that you're playing a minimax stochastic game in which the best way forward is to play at random is not at all the same as "might as well just be random all the time in the face of something that seems irrational."

Acting randomly *all the time* because hey the world is random is in fact useless. Yes, sometimes you'll hit the right answer (30% of the cards were red after all) but if you're not engaging in 'random' behavior as part of a larger strategy or goal then you're just throwing everything at the wall and seeing what sticks (granted sometimes brute-forcing an answer is also the best way forward).
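To put rough numbers on that (a quick simulation; I'm assuming the 70% blue / 30% red card split from the experiment the post describes, and simplifying everything else):

```python
import random

random.seed(0)
TRIALS = 100_000
P_BLUE = 0.7  # 70% blue / 30% red, as in the card experiment

always_blue_hits = 0
probability_matching_hits = 0

for _ in range(TRIALS):
    card = "blue" if random.random() < P_BLUE else "red"

    # Strategy 1: always guess the majority color.
    if card == "blue":
        always_blue_hits += 1

    # Strategy 2: "be random" in proportion to the observed frequencies.
    guess = "blue" if random.random() < P_BLUE else "red"
    if guess == card:
        probability_matching_hits += 1

print(always_blue_hits / TRIALS)           # ~0.70
print(probability_matching_hits / TRIALS)  # ~0.58 (0.7*0.7 + 0.3*0.3)
```

Guessing randomly does hit the right answer sometimes, but it predictably loses to the boring deterministic strategy.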

Arguing about 'well in *this one instance* it is best to be random' is entirely beside the point. The point is how do you reach that conclusion and by what thought processes?

'If faced with irrationality, throwing your own reason away won't help you' is exactly correct. Conversely, when faced with rationality then acting irrationally won't help you either. Unlike the popular media trope, in real life you're not really going to baffle and thus defeat the computer opponent by just playing at random. You're not really going to beat a chess master in the park by just playing randomly in order to confuse them.

Comment by xSciFix on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-04-15T20:19:04.414Z · LW · GW

Interesting. Thanks!

Comment by xSciFix on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-04-12T20:36:52.517Z · LW · GW

Yeah I think it was a terrible addition. The best way to do it was to simply write in the five-paragraph pattern that was expected. Even so, it was subject to wildly differing results - scores were demonstrably affected by simple things like reviewers being irritated or tired that day.

Comment by xSciFix on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-04-12T20:34:35.159Z · LW · GW

> Certainly we'd expect that the KKK would eventually get its hands on such software and create a revisionist, racist Civil War history. But the reading public will receive it in the context of other histories published by "reputable AI publishing firms" and human experts. I don't see why this situation is all that different than the one we have today, just with different means of production.

Yeah, they already do this so what would change really?

Comment by xSciFix on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-04-12T20:32:46.305Z · LW · GW

> I think a few minutes of conversation with anyone almost always gives me significant information about their intelligence in an IQ sense.

Out of curiosity, what do you base this on? Is there anything specific you're looking for? Particular patterns of thought/logic or something more superficial? Not trying to be disparaging, just interested.

I often greatly moderate the way I speak depending on circumstance. I'm looking for the best means of communication, not to impress anyone with vocabulary. Sometimes sounding like the smart one in the room is detrimental, or sounds like condescension. In practice this means I'm often speaking in a way that someone might categorize as 'not high intelligence.'

I also think that since language and communication are a product of one's environment, they aren't necessarily good indicators of intelligence. Simple example: I often see people think that immigrants are not smart because they can't speak English well - never mind that the person might speak 2-3 other languages fluently and have an engineering degree. People often assume those who use a lot of slang are not smart, but that doesn't really mean anything other than they are using the best mode of communication within their community/area.

Personally I also like to throw in profanity to keep people on their toes. I don't want people to get an accurate read on me; but that's probably also just me being a paranoiac. So then I guess also: how do you know people aren't giving you false data on purpose?

Comment by xSciFix on Applause Lights · 2019-04-12T20:19:27.984Z · LW · GW

> I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I'm reminded of your Tom Riddle a bit heh.

I think "a speech that consists of nothing but applause lights" pretty much applies to 99% of political discourse these days, and that instead of being amused at how long it takes the audience to start laughing, you'd be embittered at how seriously everyone took the whole exercise. Maybe I have some bias to sort out, but I think the actual content of what is being said often matters very little to most people, as long as you hit the right buzzwords and look convincing/confident.

Comment by xSciFix on Professing and Cheering · 2019-04-12T20:00:16.256Z · LW · GW

> Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game. A banner saying Go Blues isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.

Tying back to my comment a couple posts ago - yes I think this is exactly right. She probably doesn't believe what she is saying. She knows full well it is crap. She has no interest in a good faith argument. She's just there to cheer on paganism. It's 'Science VS Paganism,' the 'new ways' vs the 'old ways.' Rah rah rah. I wonder if while she was speaking there wasn't someone in the back handing out pamphlets, and while 90-95% of the audience reacted with "what a load of crock" a few did think "yeah these scientists aren't as smart as they think they are..." This is one way in which propaganda functions - just to signal, attract followers, and throw up a smoke screen so most of the room doesn't even realize what is happening and just has a laugh. Maybe while you went home confused she ended up selling a few dozen books or whatever to similarly minded "rebels."

Are 'science' and 'religion' compatible? Define the terms I suppose but sure. Why not? 'Religion' just explains the unknown. I *believe* that one day science will be able to eliminate every last notion similar to "lightning exists because Zeus throws it" but until then I think there's nothing fundamentally incompatible with holding beliefs such as "god metaphorically snapped their fingers and *that's* what set off the Big Bang." Mind you all the organized religions I am aware of are ruled out... I'm just saying there can be a space for 'belief' in the areas where science is currently unable to investigate. Personally I think it's better to just say 'I/we don't know, yet' but humans will be humans.

(is anyone reading these anymore? Oh well it's more to help me process my own thoughts anyway I suppose!)

Comment by xSciFix on Belief in Belief · 2019-04-12T19:34:36.574Z · LW · GW

Went back to re-read some Lacan and Zizek after this, with regards to Dennett's 'belief in belief.' Very similar to the 'displaced belief' they talk about. The common example they give is Santa Claus: children probably don't believe it but they say they do for the presents, because they understand that the adults expect them to believe, etc. The parents don't believe it but they continue the ruse for the benefit of the children, other people's children, or whatever they tell themselves. Thus people often *do* admit to themselves that they don't believe but they say "but nonetheless other people believe." They displace the belief onto someone else, and they continue going through the motions - and the 'belief' functions anyway. Even if nobody actually believes, they believe by proxy by trusting the apparent belief of those around them. Emperor's New Clothes comes to mind also.

" No, this invisibility business is a symptom of something much worse. "

Indeed. If only it *were* as simple as all that... There often is some fundamental Thing preventing people from realizing the truth and then acting in accordance with it, though. Oftentimes their entire worldview would be shattered, and they just Can't Have That - it is ideological, in other words. Others know something is charlatanism, but they are the charlatan benefiting, so they'll keep making up reasons for why there really is a dragon in their garage (maybe they are selling magical dragon breath for $100/jar). Others use false beliefs merely as a way to signal propaganda and attract followers - they know it's fakery, but they don't care about debating in good faith to begin with.

Anyway I'm slowly making my way through these after a re-read of your HP fanfic. Just wanted to say that even if "what do I know and how do I know it" is the only thing my brain can hold on to, it's already been well worth it (although I did pound Bayes' Theorem in there, too). Thanks!