Posts

Comments

Comment by marshall on Dissenting Views · 2009-06-07T19:33:13.834Z · score: -5 (7 votes) · LW · GW

Mission accomplished!

Comment by marshall on Dissenting Views · 2009-06-07T12:15:59.871Z · score: -8 (8 votes) · LW · GW

i) A lotta apes are writing on a lotta typewriters

ii) Not much dissent in a post reserved for dissension.

iii) The presumption of being Less Wrong leads to the arrogance of being More Right.

iv) Being More Right leads to the necessity of Violence.

v) The ramblings of the young are worth listening to when you get old.

Comment by marshall on Less Wrong: Progress Report · 2009-04-25T16:22:52.000Z · score: 1 (1 votes) · LW · GW

Yes Tim, I deleted my account. Eliezer explained to me that I was not "ready" to comment on LW but was welcome to continue reading. It is thus a little insulting when Eliezer now says I was "karma farming". I was contributing as best I could and was consequently down-voted. I asked several times why I was down-voted, and Eliezer himself answered by asking everyone to down-vote me - just deserts because of "vagueness". Several of the articles I posted were never commented on. This does not sound like farming to me, and why on earth would an adult man wish to collect pixel points? Eliezer's answer here reveals a moral weakness. And that is bad karma.

Comment by marshall on Less Wrong: Progress Report · 2009-04-25T07:22:38.000Z · score: 1 (1 votes) · LW · GW

My first comment is a month old (as is Eliezer's original and now edited post). A month is a long time, and in that time I was "hounded" out of LW. I think I was the first to experience Eliezer's idea of policing the walled garden against vandalism and entropy. To an outsider, however, this looks more like a recipe for political correctness. In other words, the uniformity of thought on LW is rather high and the place does seem rather boring. Motivated rationalists searching for a raison d'être - be it nursery rhymes, losing weight, procrastination or missionary behaviour with daring programmes for eradicating others' irrationality. In a sense it all rather seems like one big Multi-Player Game in Language: scoring points and distributing hits with rather transparent strategies for how to proceed. In my worst moments I sometimes think that Eliezer has opened "the box" and the "thing" needs an army of obedient servants. LW would be the first place to start recruiting.

Comment by marshall on Less Wrong: Progress Report · 2009-03-21T07:00:03.000Z · score: 1 (1 votes) · LW · GW

I definitely agree that LW's structure encourages participation (I rarely contributed to OB), and the to and fro of comments gives valuable information about who the people in this "rationalist community" are and where you stand.

However, "the first comment you encounter is going to be something highly intelligent" is sales-talk and highly ridiculous. The first comment you encounter is just as likely to be runaway conformity. I would suggest that the pressure to conform is high and that much of the intelligence is being used to signal logical dexterity on things of very little practical benefit.

It is my impression that LW is a tight community with little tolerance for what falls outside Eliezer's definition of rationality and of how rational people express themselves. I do not think this description will be accepted by Eliezer or the other contributors (and it would never be one of the first comments you met on a thread), but maybe they are Just Wrong.

Comment by marshall on Failed Utopia #4-2 · 2009-01-21T13:29:55.000Z · score: -12 (16 votes) · LW · GW

The desire for "the other" is so deep that it can never be fulfilled. The real woman/man disappoints in their stubborn imperfection and refuted longing. The Catboy/girl disappoints in all their perfection and absence of reality. Game over - no win. Desire refutes itself. This is the wisdom of ageing.

Comment by marshall on Emotional Involvement · 2009-01-07T19:10:47.000Z · score: 0 (0 votes) · LW · GW

Tim:- "would anyone else like to share what they think their utility function is?" I seem to have missed this question the first time around - and it looks like a good question. My timid answer is thus: to maximise the quality of (my) time. This is no trivial task and requires a balance between achieving things in the world and acquiring new information (thanks, OB), peppered with a little bit of good biological feelings. Repeat.

Comment by marshall on Devil's Offers · 2008-12-26T11:07:33.000Z · score: 0 (0 votes) · LW · GW

Michael Vassar:- maybe you chose to work in an area where you had to lie to survive. Perhaps Eli works in an area where the discovery of lying has a higher price (in destroyed reputation) than sticking to the inconvenient truth. But unfortunately I think it is easier to discount a truth-sayer (he is, after all, an alien) than a randomised liar (he is one of us). In other words, it is easier to buy the mix of truth-and-untruth than the truth and nothing but the truth. But the social result seems to be the same - untruth wins.

Comment by marshall on Chaotic Inversion · 2008-11-30T07:32:44.000Z · score: -1 (5 votes) · LW · GW

I think you are very productive, Eliezer. Human rationality is surely not tortured wheels squeakily running every second of the day - producing, producing, producing.

Human rationality should not and cannot be made into an assembly line.

Not Getting Things Done, in balance with GTD, is important. Productivity is one of the big American lies.

Comment by marshall on BHTV: Jaron Lanier and Yudkowsky · 2008-11-02T11:38:54.000Z · score: 0 (2 votes) · LW · GW

I actually thought, Eliezer, that you did rather poorly in this dialogue. You and your logic reached their limits. The tools you drew on were from too narrow a scope and didn't match Lanier's breadth. I am surprised (and worried) that all the other comments "take your side". I think this "event" requires some updating by you, Eliezer. Person to person - you lost. And I think the phrase "I was a little too reluctant to interrupt" is an example of cognitive dissonance and not the truth of the matter at all.

Comment by marshall on Traditional Capitalist Values · 2008-10-17T18:16:56.000Z · score: 0 (2 votes) · LW · GW

@Caledonian

How can you ask the question "What is actual evil?"? You must know. Perhaps not definitively, but surely well enough that we others would recognize it and agree.

What is the function of this false barrier that is no barrier?

May I suggest that the asking of this question is an example of the answer (whilst I accept that evil done does not have to be intended)?

Comment by marshall on The Level Above Mine · 2008-09-26T18:00:38.000Z · score: 0 (2 votes) · LW · GW

Vassar - your English is encrypted - more an assumption of intelligence than a sign of it.

EY - I admire your work. Along with Robin this is the best Show in Town and I will miss it, when it stops.

I actually doubt whether you are accomplishing anything - but this does not seem so important to me, because the effort itself is worthwhile. And we are educated along the way.

This is a youthful blog with youthful worries. From the vantage point of age, worrying about intelligence seems like a waste of time, and unanswerable to boot.

But those are the stones in your shoes.

Comment by marshall on The Truly Iterated Prisoner's Dilemma · 2008-09-06T08:41:00.000Z · score: 0 (0 votes) · LW · GW

Marshall I think that's a bit of a cop-out.

Why wouldn't a PM cheat? Why would it ever remain inside the frame of the game?

Would two so radically different agents even recognize the same pay-off frame?

"The different one" will have different pay-offs - and I will never know them and am unlikely to benefit from any of them.

In my world a PM is chaotic, just as I am chaotic in his. Thus we are each other's enemies and must hide from one another.

No interaction because otherwise the number of crying mothers and car dealerships will always be higher.

Comment by marshall on The Truly Iterated Prisoner's Dilemma · 2008-09-06T07:18:00.000Z · score: -2 (2 votes) · LW · GW

I think you guys are calculating too much and talking too much.

Regardless of the "intelligence" of a PM, in my world that is a pretty stupid thing to do. I would expect such a "stupid" agent to do chaotic things, indeed evil things - things I could not predict and things I could not understand.

In an interaction with a PM I would not expect to win, regardless of how clever and intelligent I am. Maybe they only want to make paperclips (and play with puppies), but such an agent would destroy my world.

I have worked with such PMs.

I would never voluntarily choose to interact with them.

Comment by marshall on You Provably Can't Trust Yourself · 2008-08-20T19:18:25.000Z · score: 0 (0 votes) · LW · GW

I think this is your most important post.

Comment by marshall on Sorting Pebbles Into Correct Heaps · 2008-08-11T17:37:35.000Z · score: 0 (0 votes) · LW · GW

I wonder - did we all understand this parable in the same way? I doubt it!

Comment by marshall on Could Anything Be Right? · 2008-07-18T10:49:20.000Z · score: 2 (2 votes) · LW · GW

I think Eliezer is saying: we know, on average, what's right and what's wrong. It is part of being human. There are different versions of being human, and thus our rights and wrongs are embedded in time and place. It is in the "thickness" of living with others that we know what to do and how to do it. Mostly it is easy, because morality is human. Stopping to think about all this gives what Michael Vassar calls "Aack!!! Too... many... just so stories... bad evolutionary psychology... comment moderation... failing."

Comment by marshall on Whither Moral Progress? · 2008-07-16T11:41:45.000Z · score: 0 (0 votes) · LW · GW

Marshall, how is your "usefulness" not isomorphic to the word "good"? Useful for what?

I suppose I just want to avoid the preachiness of the word "good". It is unfortunately coherent to die for goodness. It is not very useful to die for usefulness.

Useful for what? This doesn't seem like a useful question. Usefulness is obvious, and thus there is no need to ask.

I do not wish to lose my way or be carried away by the bigness of the nominalisation "morality". Occam's Razor should also be applied here - in a pleasant and gentle way.

Comment by marshall on Whither Moral Progress? · 2008-07-16T05:59:26.000Z · score: -5 (9 votes) · LW · GW

Why keep on about "morality"? Isn't this just a kind of con-word used by ministers of religion, teachers and politicians to impress on us the need to be good and improve (in ways which they decide)? Can't we just abolish this word and tune it out? We all drive on the correct side of the road because it is useful and is seen to be useful by all. This is morality - small useful rules for getting along. There is no mystery about where they come from: we find ways to avoid bumping into things. We are brought up with this implicit usefulness and maintain it. In the luxury of our affluence it is no longer useful to boil the cat, and we need our dogs to understand our language. Thus our circles of "utility" expand.

Comment by marshall on Possibility and Could-ness · 2008-06-14T08:54:51.000Z · score: 0 (0 votes) · LW · GW

Hopefully: my suggestion is that it is the use of the metaphor "illusion" which is unfortunate. In the process Search - Find - (have a beer) - Execute there is no room for illusion. Just as David Copperfield works very hard while making illusions but is himself under no illusion. In other words, "illusion" is an out-of-process perspective. It is in the "I" of the beholder. You hope(fully) expect that the search process can be improved by the intervention of "I". Why should that be? Would the search process be improved by a coin flip? In other words, dissolving "free will" dissolves the "user-illusion" of the "Eye". And this plays havoc with your persistence odds. You want your illusion to persist?

Comment by marshall on Timeless Control · 2008-06-07T08:15:11.000Z · score: -2 (2 votes) · LW · GW

Clearly, clear thinking is opaquely difficult. The future is not random, Roland; the future is just unknown. The future will have about as much structure as the now has, but we do not yet know all the details. This can possibly be construed as "randomness" in our thinking, but it is not randomness in the world. Another POV would call this "randomness in our thinking" uncertainty. Roland, "don't fall into this trap" means don't let determinism determine you - or worse, let Roland determine what you are determined to do. Maybe the Don't Panic button would be better, and thusly your future was.

Comment by marshall on Thou Art Physics · 2008-06-06T21:03:40.000Z · score: 0 (0 votes) · LW · GW

How can I think a thought? The river that flows without a drop.

Am I thinking the next thought? Chemicals, doing what they ought.

With time an illusion. The I that says it's me, is a figment too.

I struggle to choose to do what must be done.

Don't ask who I am. But observe what was done.