Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T10:17:16.806Z · LW · GW

Good point. It's easy to imagine a lot of biologically good designs going unexpressed because the first move is less optimal.

Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T10:12:54.195Z · LW · GW

Hmm, I agree, except for the last part. Blindly trying (which is what genetic mixing and mutating do) is like poorly guided forecasting. (Good simulation engineers or chess players somehow "see" the space of likely moves; bad ones just try a lot.) And the species doesn't select; the environment does.

I need to go read "evolve to extinction."


Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T02:25:14.880Z · LW · GW

The world we find ourselves in would never expect the doctor to cut the guy up. Few people are doing that consequentialist math. Well, maybe a few long thinkers on this site. So the supposed long view as a reason for not doing it is baloney. On that basis alone, I think the thought experiment fails to recommend the conventional behavior it's trying to rationalize.

Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T02:21:47.553Z · LW · GW

Well, they could EVOLVE that reticence for perfectly good reasons. I'll dare, in this context, to suggest that evolution IS intelligence. Have you heard of thought described as an act of simulating action and forecasting the results? Isn't that what evolution does, only the simulations are real, and the best chess moves "selected"?

A species thereby exhibits meta-intelligence, no?

Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T02:17:02.911Z · LW · GW

"philosophy tries... to agree with our ...intuition..."? Bravo! See, I think that's crazy. Or if it's right, it means we're stipulating the intuition in the first place. Surely that's wrong? Or at least, we can look back in time to see "obvious" moral postulates we no longer agree with. In science we come up with a theory and then test it in the wind tunnel or something. In philosophy, is our reference standard kilogram just an intuition? That's unsatisfying!

Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T02:14:25.431Z · LW · GW

I had fun with friends recently considering the trolley problem from the perspective of INaction. When it was an act of volition, even (say) just a warning shout, they (we) felt less compelled to let the fat man live. (He was already on the track and would have to be warned off, get it?) It seems we are responsible for what we do, not so much for what we elect NOT to do. Since the consequences are the same, it seems wrong that there is a perceived difference.

This highlights, I suppose, the author's presumed contention (consequentialism generally): that the correct ethical choice is obviously one of carefully (perhaps expensively!) calculated long-term outcomes, and equal to what feels right only coincidentally. In the limit, I think we would (consequentialists all) just walk into the hospital and ask for vivisection, since we'd save 5 lives. The reason I don't isn't JUST altruism, because I wouldn't ask you to either; instead it's a step closer to Kant's absolutism: as humans we're worth something more than ants (who I submit are all consequentialists?) and have individual value. I need to work on expressing this better...

Comment by markrkrebs on Consequentialism Need Not Be Nearsighted · 2011-09-27T01:37:13.481Z · LW · GW

Your doctor with 5 organs strikes me as Vizzini's Princess Bride dilemma: "I am not a great fool, so I can clearly not choose the wine in front of you."

So it goes, calculating I-know-you-know-I-know unto silliness. Consequentialists I've recently heard lecturing went to great lengths, as you did, to rationalize what they "knew" to be right. Can you deny it? The GOAL of the example was to show that "right thinking" consequentialists would come up with the same thing all our reptile brains are telling us to do.

When you throw a ball, your cerebral cortex doesn't do sums to figure out where it will land. Primitive analog calculation does it fast and with reasonable accuracy. As we all know, doctors across the nation don't do your TDL sums either. Nor do I think they've internalized the results unconsciously. They have an explicit moral code which, in its simple statements, would disagree.

The thing I find interesting, the challenge I'd like to suggest, is whether consequentialism is somewhat bankrupt in that it is bending over backwards to "prove" things we all seem to know, instead of daring to prove something less obvious (or perhaps unknown / controversial). If you can make a NEW moral statement, and argue to make it stick, well that's like finding a new particle of matter or something: quite valuable.

Comment by markrkrebs on Belief in Belief · 2011-09-22T20:25:01.040Z · LW · GW

Surprised not to find Pascal's wager linked to this discussion, since he faced the same crisis of belief. It's well known he chose to believe because of the enormous (infinite?) rewards if that turned out to be right, so he was arguably hedging his bets.

It's less well known that he understood it (coerced belief for expediency's sake) to be something that would be obvious to an omniscient God, so it wasn't enough to choose to believe; rather, he actually Had To. To this end he hoped that practice would make perfect, and I think he died worrying about it. This is described in the Wikipedia article in an evasive third person, but a philosophy podcast I heard attributed the dilemma of insincere belief to Pascal directly.
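The wager's arithmetic can be sketched as a toy expected-value table (the probability and payoff numbers here are my own illustrative assumptions, not anything Pascal wrote):

```python
# Toy expected-value sketch of Pascal's wager. All numbers are
# illustrative assumptions; the point is only that any nonzero
# credence times a huge enough payoff swamps finite costs.

def expected_value(p_god, payoff_if_god, payoff_if_not):
    """Weighted average of the two outcomes."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

p = 1e-6  # even a tiny credence that God exists...
ev_believe = expected_value(p, 1e12, -1.0)     # vast reward vs. small earthly cost
ev_disbelieve = expected_value(p, -1e12, 0.0)  # vast penalty vs. nothing at all

# With these made-up payoffs, believing dominates for any p > 0.
```

Which is exactly where the insincerity problem bites: the table says "choose belief," but belief isn't an action you can simply choose.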

Fun stuff.

Comment by markrkrebs on Selfishness Signals Status · 2010-03-07T13:13:20.983Z · LW · GW

For reasons I perhaps don't fully understand, this and threads like it are unsettling to me. Doesn't high status confer the ability (and possibly duty, in some contexts) to treat others better, to carry their pack, so to speak? Further, acting high status isn't necessary at all if you actually have it (it being the underlying competence that status supposedly, ideally, signifies). I am a high status athlete (in a tiny, circumscribed world) and in social situations try to signal humility, so others won't feel bad. They can't keep up, and if made to feel so, will not want to come again. Maybe in this forum we just want to drop anyone who can't keep the pace.

If I see someone acting supercilious or indifferent, signaling status on all frequencies, I will infer they have something to hide, or strong feelings of incompetence that need to be stroked. Now we can play the game of you-know-I-know high status signallers may be compensating, but it's a silly game, because faking status, if that's what you want, is only a temporary fiction. Any close relationship will soon scrape through that whitewash. Unfortunately, I think, poseurs do manage to get by quite well in the world, by exactly the techniques being discussed here. Maybe everybody should get a tattoo with their VO2max and IQ right on their forehead?

Comment by markrkrebs on The Graviton as Aether · 2010-03-07T00:13:46.999Z · LW · GW

Most excellent. Now, glasshoppah, you are ready to lift the bowl of very hot red coals. Try this

Comment by markrkrebs on A Fable of Science and Politics · 2010-03-06T13:53:56.498Z · LW · GW

Nice example of Bliks in action. Literature is powered by such dramas, where people's individual mindset shifts the spectrum of every photon right or left of the reader, or the other protagonists, and the tragedy is that too few rays of light fall true, through a clear eye.

Ferris, I suppose, has seceded, too advanced to bother with the various foolish repercussions she knows will ring through the world under her feet from this new data. That's fine; she's too far ahead to go back anyway.

I worry that we (denizens of this website) are too confident that OUR vision is so sure. I'm a noob, of course, and not sure I feel at home here yet, but I suggest caution. All those fools who see blue or green... they're sure they're right, too. Hubris is the danger.

Comment by markrkrebs on The Graviton as Aether · 2010-03-06T10:21:11.968Z · LW · GW

You correctly decry popularity as a non-rational measure of veracity, but to the extent that it expresses a sort of straw poll, it may be a good indicator anyway. The idea of expert futures markets comes to mind.

My point is related: is it not also a fallacy to assert it's GOT to be simple? That's awfully close to demanding (even believing?) something's true because it ought to be, because we want it so badly. Occam's razor has worked like a champ all these years, but inference is risky, and maybe now we find ourselves confronted with some hard digging. I too hope some crystalline simplification will make everything make sense, but I don't think we have a right to expect that, or should. What you and I want doesn't matter.

Comment by markrkrebs on Open Thread: March 2010 · 2010-03-03T15:20:33.981Z · LW · GW

I suggest you pay me $50 for each week you don't get and hold a job. Else, avoid paying me by getting one, and save yourself 6 mo × 4 wk/mo × $50 − $100 = $1,100! Wooo! What a deal for us both, eh?

Comment by markrkrebs on Great Product. Lousy Marketing. · 2010-03-02T01:13:06.653Z · LW · GW

Saying there are white arts as well as dark ones is conceding the point, isn't it? One should be allowed to be persuasive as well as right, and sometimes just being right isn't enough, especially if the audience is judging the surface appeal of an argument (and maybe even accepting it or not!) prior to digging into its meat. In such situations, attractive wrapping isn't just pretty, it's a prerequisite. So, I love your idea of inventing a protocol for DAtDA.

Comment by markrkrebs on Open Thread: March 2010 · 2010-03-02T01:05:49.628Z · LW · GW

The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don't personally think we'll find anything different, or ineffable, or more, when we finally understand intelligence, than just layers of control systems.
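The flavor of "layers" I mean can be sketched in a few lines (a hypothetical toy, not any real controller I've worked on): an inner proportional loop doggedly chases a setpoint, while an outer layer watches its progress and retunes it.

```python
# Toy two-layer controller. The inner layer is plain proportional
# feedback (the "iron filing" goal seeking); the outer layer is a
# crude adaptive tuner that raises the gain whenever the error is
# shrinking too slowly. All constants are arbitrary illustrations.

def run(target=10.0, steps=200):
    x = 0.0        # plant state
    gain = 0.05    # inner-loop proportional gain
    prev_err = abs(target - x)
    for _ in range(steps):
        err = target - x
        x += gain * err                        # inner layer: chase the setpoint
        if abs(err) > 0.9 * prev_err and gain < 1.0:
            gain *= 1.5                        # outer layer: adapt the inner layer
        prev_err = abs(err)
    return x, gain

final_x, final_gain = run()
```

Stack a few more such layers (a planner retuning the tuner, a model predicting the plant) and the behavior starts to look eerily purposeful.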

Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.

I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.

Comment by markrkrebs on Great Product. Lousy Marketing. · 2010-03-02T00:48:57.609Z · LW · GW

Jonathan, I'll try again, with less flair...
My comments were to the original post, which asks if "dark arts" are justified and I say simply, yes. I think lots of otherwise very smart people who like to communicate with bare logic and none of the cultural niceties of linguistic foreplay can actually alienate the people they hope to persuade. You may have just done that, assuming you were trying to persuade me of something.

Re: losing the audience that demands respect: firstly, I was trying to be illustrative in a funny, not disrespectful, way, and more importantly I was not talking about you at all. I am talking about arguing with people who are unswayed by logical content. "If the glove does not fit, you must acquit!" What? That's a freaking POEM: rhyming doesn't make it true! ...and yet history teaches us that Johnnie Cochran was a genius: OJ walked. That's the world you and I live in, unfortunately. How shall we persuade it? Logic isn't enough.

I'd presumed (and I suggest my tack is actually quite respectful of THIS readership) that the very part of the audience you're cautioning me not to lose doesn't need to be convinced, 'cause they can "do the math" already. The facts will work just fine for them. No, I am hunting smaller game.

At the risk of another metaphor, I'll reach for resonance. Different antennae resonate to the beat of different wavelengths. A high power signal of surpassing precision will pass unnoticed through an antenna not sized to receive it. It is possible to give one's opponents too much credit.

Comment by markrkrebs on Great Product. Lousy Marketing. · 2010-03-01T09:48:57.416Z · LW · GW

I guess I'm for persuasion; I think the ends justify the means in this case. Otherwise you're all bringing knives to a gunfight and crying foul when you get shot down. Could there be a degree of "sour grapes" here, resulting from finding oneself inexplicably unpersuasive next to a velvet-tongued dummy? Are we sure we eschew the tactics of rhetoric because they're wrong? Is it even fair to describe the "dark arts" as wrong?

I say rather that speech is meant to be persuasive. Better speech would then be more persuasive. As such, persuasion backed by truth should be sturdier than mere naked truth, which is only palatable to connoisseurs who like their facts without sauce. Have I mixed enough metaphors for you there?

I think thought, hence communication, is highly model based, so you can be compelling with a metaphor that "fits" and one that's well chosen would fit very well indeed. Then the topic under discussion gets a piggyback ride from the metaphor and the stipulated conclusion from the model is transferred to the new topic. (there must be an established name for that sort of trick?) But what if it's in defense of the truth? I say it's ok. To go back to the first metaphor: you gotta fight fire with fire; facts burn too easily.

Comment by markrkrebs on Welcome to Less Wrong! · 2010-02-26T14:07:57.447Z · LW · GW

Hi! Vectored here by Robin, who's thankfully trolling for new chumps and recommending initial items to read. I note the Wiki would be an awesome place for some help, and I may attempt to put up a page there: NoobDiscoveringLessWrongLeavesBreadcrumbs, or something like that.

My immediate interest is argument: how can we disagree? 1+1=2. Can't that be extrapolated to many things? I have been so happy to see a non-cocky (if prideful) attitude in the first several posts that I have great hopes for what I may learn here. We have to remember ignorance is an implacable enemy, that being insulting won't defeat it, and that we may be subject to it ourselves. I've noticed I am.

First post from me is coming shortly.

  • mark krebs
Comment by markrkrebs on Rational Me or We? · 2010-02-26T13:37:39.106Z · LW · GW

I love your thesis and metaphor: that the goal is for us all jointly to become rational, seek, and find truth. But I do not "respect the opinions of enough others." I have political/scientific disagreements so deep and frequent that I frequently just hide them and worry. I resonated best with your penultimate sentence: "humanity's vast stunning cluelessness" does seem to be the problem. Has someone written on the consequences of taking over the world? The human genome, presumptively adapted to forward its best interests in a competitive world, may have only limited rationality, inadequate to the tasks of altruism, global thinking, and numerical analysis. By this last phrase I refer to our overreaction to a burning skyscraper, when an equal number of deaths monthly on freeways, by being less spectacular or poignant, motivates a disproportionately low response. Surely the difference there is a "gut" reaction, not a cogent one. We need to change what we care about, but we're hardwired to worry about spectacle, perhaps?

Comment by markrkrebs on Reason as memetic immune disorder · 2010-02-26T09:42:54.164Z · LW · GW

"The conservatism of a religion - its orthodoxy - is the inert coagulum of a once highly reactive sap." -Eric Hoffer, The True Believer

Love your post: religion as virulent nam-shub. See also Snow Crash by Stephenson.

Comment by markrkrebs on The Crackpot Offer · 2010-02-26T09:27:21.819Z · LW · GW

I love this site. Found it when looking at a piece of crackpot science on the internet and, wondering, typed "crackpot" into Google. I am trying to argue with someone who's my nemesis in most every way, and I'm trying to do it honestly. I feel his vested interest in the preferred answer vastly biases his judgment, and I wonder what biases I have, and how they got there. You seem to address a key one I liken to tree roots, growing in deep and steadfast wherever you first happen to fall, whether it's good ground or not.

Not unlike that analogy, I landed here first, on your post, and found it very good ground indeed.