Posts

"Announcing" the "Longevity for All" Short Movie Prize 2015-09-11T13:44:31.985Z
Maybe theism is wrong 2009-04-11T16:53:29.690Z
A corpus of our community's knowledge 2009-03-18T20:18:43.240Z

Comments

Comment by infotropism on Free research help, editing and article downloads for LessWrong · 2012-03-26T09:23:27.636Z · LW · GW

Hi, could anyone help me obtain

"Limits of Scientific Inquiry" by G. Holton, R. S. Morison ( 1978 )

and

"What is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable." Brockman, John (2007)

Thanks in advance

Comment by infotropism on Dreams with Damaged Priors · 2009-08-10T20:57:33.404Z · LW · GW

So yes, you'd likely lose the fun of normal dreaming: experiencing weird stuff, letting the insane flow of your dreams carry you like a leaf on a mad wind without even feeling confused by it, feeling instead like it was plainly normal and made total sense, and having lots of warm fuzzy feelings and partway-formed thoughts about your experiences in that dream.

Yet you might on the other hand gain the fun of being able to, for instance, capitalize on your dreaming time to learn and do some thinking. Not to mention the pleasure and sense of security derived from knowing your rational mind can work even under (some) adverse conditions.

Comment by infotropism on Are You Anosognosic? · 2009-07-19T05:59:39.411Z · LW · GW

Judging from the popularity of the "Strangest thing an AI could tell you" post, and anosognosia tidbits in general, this topic seems to fascinate many people here. I for one would find it freakishly interesting to discover that I had such an impairment. In other words, I'd have motivation to at least genuinely investigate the idea, and even to accept it.

How I'd come to accept it would probably involve a method other than just "knowing it intuitively", like how I intuitively know the face of a relative to be that of a relative, or how I know with utter, gut-level certainty that I have three arms. Considering that we are, well, rationalists, couldn't we be expected to be able to use methods other than our senses and intuitions to discover truth? Even if the truth is about ourselves, and contradicts our personal feelings?

After all, it's not like people in the early 20th century had observed tiny pictures of atoms; they deduced their existence from relatively nonintuitive clues glued together into a sound theoretical framework. Observing nature and deducing its laws has often been akin to being blind, yet managing to find your way around by indirect means.

If I had to guess, I'd still not be certain that being a rationalist who uses scientific methods and all those tools that help straighten chains of inference, and who finds anosognosia more of a treat than a pain to be rationalized away, would make it a sure bet that I didn't retain a blind spot.

Maybe the prospect of some missing things would be too horrid to behold, no matter how abstractly; perhaps beholding them would require me to think in a way that's just too complicated, abstract and alien for me to ever notice them as something salient, let alone comprehensible.

Still, that's really not what my intuition would lead me to believe, what with truth being entangled and so forth. And such a feeling, such an intuition, may be exactly part of the problem of why and how I'd fail to pay attention to such an impairment. Perhaps I just don't want to know the truth, and willingly look away each time I could see it. Then again, if we're talking rationalization and lying to oneself, that has a particular feeling, and that is something one could be able to notice.

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-17T02:15:42.016Z · LW · GW

This applies more generally than to anosognosia alone, and was very illuminating, thank you!

So, provided that as we grow some parts of our brain and mind change, this upsets the balance of our mind as a whole.

Let's say someone relied on his intuition for years, and consistently observed that it correlated well with reality. That person would have had a very good reason to rely more and more on that intuition, and to use its output unquestioningly and automatically to fuel other parts of his mind.

In such a person's mind, one of the central gears would be that intuition. The whole machine would eventually depend upon it, and removing the intuition would mean, at best, that years of training and fine-tuning of that rational machine would be lost, and a new way of thinking would have to be found and trained again; most people wouldn't even realize that, let alone be bold enough to admit it and start over from scratch.

And so, some years later, the black-boxed process of intuition starts to deviate from correctly predicting reality for that person. And the whole rational machine carries on using it, because that gear has become too well established, and the whole machine has lost its fluidity as it specialized in exploiting that easily available mental resource.

Substitute emotions or drives for intuition, and that may work the same way too. And so, from being a well-calibrated rationalist, you start deviating, slowly losing your mind, getting it wrong more and more often when you get an idea, or try to predict an action, or decide what would be to your best advantage, never realizing that one of the once-dependable gears in your mind has slowly been worn away.

Comment by infotropism on Absolute denial for atheists · 2009-07-16T20:36:55.084Z · LW · GW

There's no such thing as an absolute denial macro. And I sure hope this triggers yours.

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-16T07:01:07.065Z · LW · GW

What?

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-16T00:34:23.577Z · LW · GW

Yes, I would. Why the acute interest?

Is it because, by admitting to being able to believe that, one would admit to having no strong enough internal experience of morality?

Experience of morality, that is, in a way that would make one say "no, that's so totally wrong, and I know because I have experienced both genuine guilt and shame, AND the embarrassment of being caught falsely signaling, AND I know how they are different things". I have a tendency to always dig deep enough to find how it was selfish for me to do or feel something in particular. And yet I can't always help feeling guilt or shame whose deep roots exist apart from my conscious rationalizations of how what I do benefits myself. Oh, and sometimes, it also benefits other people too.

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-16T00:08:49.761Z · LW · GW

Now we have a lot higher GDP

Yes indeed. Do you expect that to remain true after a nuclear war too? More basically, I suppose I could summarize my idea as follows: you can poke a hole in a country's infrastructure or economy, and the hole will heal with time because the rest is still healthy enough to help with that - just as a hole poked into a life form can heal, provided that the hole isn't big enough to kill the thing, or send it into a downward spiral of degeneration.

But yes, society isn't quite an organism in the same sense. There you could probably have full-scale cataplasia and still see something survive someplace, and perhaps even, from there, start again from scratch (or better, or worse, than scratch).

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-15T23:51:27.646Z · LW · GW

Agranarian is the new vegetarian.

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-15T23:39:41.457Z · LW · GW

Well, kidding aside, your argument, taken from Pearl, seems elegant. I'll have to read the book, however, before I feel entitled to an opinion on that one, as I haven't grokked the idea, merely a faint impression of it and of how healthy it sounds.

So at this point, I only have some of my own ideas and intuitions about the problem, and haven't searched for the answers yet.

Some considerations, though:

Our idea of causality is based upon a human intuition. Could it be just as wrong as vitalism, time, little billiard balls bumping around, or the still-confused problem of consciousness? That's what would bug me if I had no good technical explanation, one provably unbiased by my prior intuitive belief about causality (otherwise there's always the risk that I've just been rationalizing my intuition).

Every time we observe "causality", we really only observe correlations, and then deduce that there is something more behind them. But is that a simple explanation? Could we devise a simpler consistent explanation to account for our observation of correlations? As in, totally doing away with causality? Or, at the very least, redefining causality as something that doesn't quite correspond to our folk definition of it?

Roughly, my intuition, when I hear the word causality, is something along the lines of:

" Take event A and event B, where those events are very small, such that they aren't made of interconnected parts themselves - they are the parts, building blocks that can be used in bigger, complex systems. Place event A anywhere within the universe and time, then provided the rules of physics are the same each time we do that, and nothing interferes in, event B will always occur, with probability 1, independantly of my observing it or not." Ok, so could (and should ?) we say that causality is when a prior event implies a probability of one for a certain posterior event to occur ? Or else, is it then not probability 1, just an arbitrarily very high probability ?

In the latter case, with probability less than 1, that really violates my folk notion of causality, and I don't see what's causal about a thing that can capriciously choose to happen or not, even when the conditions are the same.

In the former case, I can see how that would be a very new thing. I mean, probability 1 for one event implying that another will occur? What better, firmer foundation to build a universe upon? It feels really very comfortable and convenient; all too comfortable, in fact.

Basically, neither of those possibilities strikes me as obviously right, for those reasons and then some; the idea I have of causality is confused at best. And yet I'd say it is not too unsophisticated or unconsidered as it stands. Which makes me wonder how people who have put less thought into it (probably a lot of people) can deservedly feel any more comfortable saying it exists without an afterthought (almost everyone), even though they lack any good explanation for it (which is a rare thing), such as perhaps the one given by Pearl.
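
For reference, the distinction Pearl draws seems to be between conditioning on an event and intervening to bring it about; here is a sketch in his do-notation, as far as I currently grasp it (so take it as an impression of the idea, not a summary of the book):

```latex
% Observation (correlation): seeing A happen changes our beliefs about B.
P(B \mid A) \neq P(B)
% Intervention (causation, in Pearl's sense): forcing A to happen, by an
% action from outside the system, changes what occurs with B.
P(B \mid \mathrm{do}(A)) \neq P(B)
% The two can come apart: an unobserved common cause C of A and B can
% produce the first inequality without the second. Note that neither
% definition requires any probability to equal 1.
```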

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-15T09:41:12.111Z · LW · GW

What should be realized here, however, is that Hiroshima could become a relatively OK place because it could receive a huge amount of help, being part of a country with such a high GDP.

Hiroshima didn't magically get better. A large-scale nuclear war would destroy our economy, and thus our capability to respond and patch the damage that way. For that matter, I'm not even sure our undisturbed response systems could deal with more than a few nuked cities. Also, please consider that Hiroshima was hit by an 18 kt bomb, which is nothing like the average 400-500 kt warheads we have now.

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-15T08:18:45.248Z · LW · GW

1) That human beings are all individual instances of the exact same mind. You're really the same person as any random other one, and vice versa. And of course that single mind had to be someone blind enough never to chance upon that fact, regardless of how numerous he was.

2) That there are only 16 real people, of whom you are one, and that this is all but a VR game. This subsequently results in all the players simultaneously remaining unable to become conscious of that fact, AND asking that you and the AI be removed from the game. (Inspiration: the misunderstanding on pages 55-56 of Iain M. Banks's Look to Windward.)

3) That we are in the second age of the universe: time has been running backwards for a few billion years. Our minds are actually the result of the original minds of previous people being rewound, their whole lives to be undone and finally negated into oblivion. All our thought processes are of course horribly distorted, insane mirror versions of the originals, and make no sense whatsoever (in the original timeframe, which is the valid one).

4)

5) That our true childhood lasts from age 0 to ~50-90 (with a few exceptional individuals reaching maturity sooner or later). If you thought the 'adult conspiracy' already lied a lot, and well, to 'children', prepare yourself for a shock in a few decades.

6) That the AI just deduced that the laws of physics can only be consistent with us being eternally trapped in a time loop. The extent of the time loop: thirty-two seconds, spread evenly around now. Nothing in particular can be done about it. Enjoy your remaining 10 seconds.

7) That causality doesn't exist. Not only is the universe timeless, but causality is an epiphenomenon, which we only believe in because of a confusion of our ideas. Who ever observed a "causation"? Did you, like, expect causation particles jumping between atoms or something? Only correlation exists.

8) That we actually exist in a simulation. The twist is: somewhere out there, some people really crossed the line with the ruling AI. We're slightly modified versions of those people, modified in such a way as to experience the maximum amount of their zuul feeling, which is the very worst nirdy you could imagine.

9) That the universe actually has 5 spatial macro-dimensions, of which we perceive only 3. Considering what we look like if you take the other 2 into account, this obliviousness may actually not be too surprising.

10) That any single human being actually has a 22% probability of being unable to become conscious of one or more of the 9 statements above.

Comment by infotropism on The Strangest Thing An AI Could Tell You · 2009-07-15T07:39:09.262Z · LW · GW

Funnily enough, you realize this is quite similar to what you'd need to make Chalmers right, and p-zombies possible, right?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T22:15:49.816Z · LW · GW

Under those assumptions your estimates are sound, really. However, should we only count the direct deaths incurred as a consequence of a direct nuclear strike? Or should we also take into account the nuclear fallout, radiation, nuclear winter, ecosystems crashing down, massive economic and infrastructure disruption, etc.? How much worse does it get if we take such considerations into account?

Aside from those considerations, I really agree with your idea of getting our priorities right, based on numbers. That's exactly why I'd advocate anti-agathic research above a lot of other causes, which actually kill fewer people, and cause less suffering, than aging itself does; but not everyone seems to agree with that.

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T22:04:26.073Z · LW · GW

I see your point: sometimes we may have already written the bottom line, and all that comes afterward is an attempt to justify it.

However, if an existential risk is conceivable, how much would you be ready to pay, or do, to investigate it? Your answer could plausibly range from nothing to everything you have. There ought to be a healthy middle ground there.

I could certainly understand how someone would arrive at saying that the problem isn't worth investigating further, because that person has a definite explanation of why other people care about that particular question: namely, that their reasons are biased.

I'd think of religion, for instance, as an example of that. I wouldn't read the Bible and centuries of apologetics and debates to decide whether God does or doesn't exist. I'd just check whether, at first, people started to justify the existence of a god for reasons other than its actually existing. That's certainly a much more efficient way of looking at the problem.

Is there no sum of money, no amount of effort, however trivial, that could nevertheless be expended on such an investigation, considering its possible repercussions, however unlikely those seem to be?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T21:39:23.577Z · LW · GW

A fair point. So what you're telling me is that we should desire a future civilization that is descended from our own; probably one that will have some points in common with current humanity, like some of our values and desires (or values and desires that would have grown from our own)?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T21:33:54.518Z · LW · GW

How many deaths, directly or indirectly derived from the pope's prohibition, would be enough for his influence to be considered negative in this case?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T21:09:46.188Z · LW · GW

Technological progress seems to be necessary, but not sufficient, to ensure our civilization's long-term survival.

Correct me if I'm wrong, but you seem quite adamant in arguing against the idea that our current civilization is in danger of extinction, when so many other people argue the other way around. This seems like it has the potential to degenerate into a fruitless debate, or even a flame war.

Yet you probably have some good points to make; why not think it over and write a post about it, if your opinion is so different, and substantiated by facts and good reasoning, as I am sure it must be?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T21:00:24.504Z · LW · GW

I sometime wonder why people think this outcome is bad.

Mind if I ask: as opposed to considering it good?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T20:57:03.079Z · LW · GW

hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans

You'd think that's actually pretty much what most of us humans care about.

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-14T19:08:41.984Z · LW · GW

Whether those catastrophes could destroy present humanity wasn't the point; the point was whether or not near misses in potential extinction events have ever occurred in our past.

Consider it this way: under your assumption that our world is more robust nowadays, what would count as a near miss today would certainly have wiped out the frailer humanity of the past; conversely, what counted as a near miss back then would not be nearly as bad nowadays. Constraining the definition of a "near miss" in that way basically makes it impossible to show any such near miss in our history. That is at best one step away from saying we're actually safe and shouldn't worry all that much about existential risks.

Speaking of which, arguing over the definition of an existential risk, and from that concluding that catastrophes such as a nuclear war aren't existential risks, blurs the point. Let us rephrase the question: how much would you want to avoid a nuclear war, or a supereruption, or an asteroid strike? How much effort, time and money should we put into the cause of avoiding such catastrophes?

While it is true that a catastrophe that doesn't wipe out humanity forever isn't as bad as one that does, such an event can still be awfully bad, and deserving of our attention and efforts to prevent it. We're talking billions of human lives lost or spent in awful conditions for decades, centuries, millennia, etc. If that is no cause for serious worry, pray tell what is?

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-12T16:38:54.199Z · LW · GW

Though that doesn't immediately make it non-fictional evidence, dysgenic pressure (as well as the Flynn effect and the possibility of genetic engineering as possible counters) is also briefly mentioned in Nick Bostrom's fundamental paper Existential Risks, section 5.3.

Comment by infotropism on Our society lacks good self-preservation mechanisms · 2009-07-12T11:13:04.555Z · LW · GW

Well, there was possibly the Toba supereruption, which would fit the description of a near miss.

Arguably, we also came very close during the Cold War, and several times over - not to total extinction, but a nuclear war would've left us very crippled.

Comment by infotropism on The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It · 2009-06-15T17:21:06.300Z · LW · GW

Minor quibble, interesting info:

"like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description"

The particular example of the planets' orbits is actually one where such a simple rule exists: see the Titius-Bode law.
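
For reference, the rule (a numerical curiosity rather than a law: it fits the planets out to Uranus reasonably well, but fails badly for Neptune) gives each orbit's semi-major axis in astronomical units as:

```latex
% Titius-Bode rule: predicted orbital distance in AU, with
% n = -infinity for Mercury, then n = 0, 1, 2, ... for Venus,
% Earth, Mars, Ceres, Jupiter, Saturn, Uranus.
a = 0.4 + 0.3 \times 2^{n}
% e.g. Earth:   n = 1 gives 0.4 + 0.6 = 1.0 AU
%      Jupiter: n = 4 gives 0.4 + 4.8 = 5.2 AU
```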

Comment by infotropism on Theism, Wednesday, and Not Being Adopted · 2009-04-27T17:25:41.753Z · LW · GW

In maybe 15 years' time, Wednesday comes to this place, or to what this place has become by then. She is still a Mormon, and is welcomed. She is interested in participating, because she is open-minded enough and educated, and the community is tolerant and helpful. So she gets to learn about rationality, and is taken into the process of becoming a rationalist herself, and a productive, healthy member of the rationalist community.

My question: after a few months or years of that, does she still remain a Mormon, or a believer in the supernatural?

If yes, how does she reconcile that with the fact that a few of the priors behind religions are wrong? That religion requires self-deception to work, at some level? Will there be some projects in which she won't be able to participate, simply because they are at odds with those beliefs?

If not, then how does she reconcile that with her past life? How will it impact her already established relationships? How easy or difficult will it be for her to change her mind?

Comment by infotropism on Excuse me, would you like to take a survey? · 2009-04-27T13:45:55.625Z · LW · GW

I agree with Vladimir too; you can't always pin people down like that.

I'd say I'm uncommitted too. By that I mean to convey the general idea that I agree with a lot of the ideas that come from, for instance, libertarianism, and at the same time with a lot of the ideas behind communism. Since I've never heard of a good synthesis between the two, I stand uncommitted.

Comment by infotropism on "Self-pretending" is not as useful as we think · 2009-04-26T00:15:56.771Z · LW · GW

Self-fulfilling prophecies are only epistemically wrong when you fail to act upon them. Whether you fail out of cynicism, sophistication or simply being too clever, rationalizing them away, the result will be the same.

There's a potential barrier there. You can tunnel through it, or not. Tunneling can sound magical and counterintuitive. It's not. There are definite reasons why it can work.

Sometimes, however, you don't know those reasons, but can observe that it appears to work for other people anyway. Then you may want to find a way to bootstrap the process, like self-pretending, or trying to copy someone else.

I can say these words but not the rule that generates them, or the rule behind the rule; one can only hope that by using the ideas, perhaps, similar machinery might be born inside you.

For instance, the party example. This might not apply to everyone, but I think the issue starts with even trying to find that optimal way to attract a partner. Do you expect to find a way to be attractive enough to score on your first try? Perhaps, to really minimize the risk of being rejected?

Different people have different tastes, and even a single person may react differently to the same stimulus, depending on the conditions in which they find themselves.

Some people don't have an issue with that. They are ready to try as many times, and as many different people, as it takes to find a receptive one, moving on as soon as it appears it won't work with this person in particular.

Not only will they eventually find someone - though some will indeed have to search longer - but they will also act with more confidence, knowing they will succeed at some point.

I know that's how it works for me, anyway. I can't take failure, nor rejection. So I try my very best to avoid them, and this involves pushing the effort I invest in a single encounter to its limits, which isn't as efficient as trying as many different people as needed to succeed.

Comment by infotropism on This Didn't Have To Happen · 2009-04-24T11:11:55.560Z · LW · GW

Do you also, simply, desire to live?

Or do you mean to say that if your life didn't possess those useful qualities, then it would be better, for you, to forfeit cryonics and have your organs donated, for instance?

And I'm actually asking that question of other people here as well, those who have altruistic arguments against cryonics. Is there a utility, a value, your life has to have, such as being able to contribute to something useful, in order to be cryopreserved? Because then that would be the greatest good for the greatest number of people?

A value below which your life would be best not cryopreserved, and your body used for organ donations, or something equally destructive to you but equally beneficial to other people (and certainly more beneficial than whatever value you could create yourself if you were alive)?

Comment by infotropism on Go Forth and Create the Art! · 2009-04-23T16:06:59.036Z · LW · GW

where rationality is easily assessed it is already well understood; it is in extending the art to hard-to-assess areas that the material here is most valuable.

My question is: as well understood as it is, how much of it does any single individual here know, understand, and use on a recurring basis?

We'll want to develop more than what exists, but we'll build that upon a firm basis, once we have it. So I wonder: how much knowledge and practice of those well-understood parts of rationality does it require of the would-be builders of the next tier? Otherwise, we run the risk of being so eager as to hurriedly build sky-high ivory towers on sand, with untrained hands.

Comment by infotropism on Go Forth and Create the Art! · 2009-04-23T15:48:08.391Z · LW · GW

This article is definitely relevant. I hadn't seen anyone dare to be honest about how most of the philosophers' thoughts of old are not to be blindly revered, and are indeed highly flawed. They aren't right; they aren't even wrong. Thanks for the link.

Comment by infotropism on Open Thread: April 2009 · 2009-04-21T22:15:41.550Z · LW · GW

Don't you find it more aesthetically appealing that way ? Also, I'm French :-)

Comment by infotropism on Open Thread: April 2009 · 2009-04-21T22:00:20.514Z · LW · GW

Be that as it may be, what is a captial ? I understand the need for proper grammar and orthography in our dear garden, but there's something intriguing going on there :-)

Comment by infotropism on Open Thread: April 2009 · 2009-04-21T21:47:25.123Z · LW · GW

So a lack of captials deserves a downvote ?

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T21:44:16.553Z · LW · GW

I get error 403 when trying to access it. But I suppose you meant this: remember santa

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T18:04:06.374Z · LW · GW

I don't place any confidence in my intuition as a general, indiscriminately good-for-everything guide. I try to only have confidence on a case-by-case basis. I try to pay attention to all the potential biases that could skew my opinion, like anchoring, and I try not to pay attention to who wrote what I'm voting upon. Then I have to have a counterargument. Even if I don't elaborate it, even if I don't lay it down, I have to know that if I had the time or motivation, I could reply and say what was wrong or right in that post.

My decisions and arguments could, or could not, be more informed than those of the average voter. But if I add my own to the pool of votes, then we have a new average, which will only be slightly worse, or slightly better. Could we try to adapt something from decision markets here? The way they're supposed to self-correct, under the right conditions, makes me wonder if we could dig a solution out of them.

And maybe someone could create an article collecting all the stuff that could help people make more informed votes on LW; that'd help too. Like the biases they'd have to take into account, tools like the anti-kibitzer, or links to articles such as the one about Aumann voting, or this very one.

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T17:24:14.391Z · LW · GW

If I were in your shoes, I'd be fairly scared of posting about this again if I expected to be shot down. But please don't be afraid. I think such a post would really be interesting.

If it is shot down, that's a fact about the ideas, or maybe about how they were laid down, not about you, after all. In that case, it's up to the people who disagree to explain how they think you're wrong, or why they disagree.

If you hold the ideas you're exposing as dear, or as part of your identity, it may even hurt a bit more than simply being rebuked; but even then, really, I think putting them on the mat and seeing where it leads will only help the community, and you, to move forward.

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T17:00:44.070Z · LW · GW

That was my first idea. But I am not the only player here. I know I overcompensate for my uncertainty, and so I tend never to downvote anything. Other people may not have the same attitude towards down- and upvoting. Who are they? Are their opinions more educated than mine? If we are all too scrupulous to vote when our opinion is in fact precious, then our occasional votes may end up drowned in a sea of poorly decided, hastily cast ones.

Besides, I am still only going to downvote if I can think of a good reason to do so. For sometimes I have a good reason to downvote, but still no good reason, or even no time, to reply to all the ideas I think need a fix, or to those which are simply irrelevant to the current debate.

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T16:03:52.746Z · LW · GW

Obeying, even though I had some strong reasons to upvote. Edit: you're going for a record there - the most downvoted comment on LW :-)

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T15:36:36.003Z · LW · GW

Don't overcompensate? Reversed neutrality isn't intelligent censorship, and downvoting people more than usual, just to obey the idea that now you should downvote, won't work well, I think. Take a step back, and some time to see the issue from an outside view.

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T15:32:27.141Z · LW · GW

And the interesting question is: given decentralized censorship, or even no censorship at all, what sort of community can emerge from that?

My impression is that 4chan is resilient against becoming a failed community because it has no particular goal, except maybe everyone doing what pleases them on a personal basis, provided it doesn't bother everyone else.

Any single individual will, pretty naturally and unwittingly, act as a moderator out of personal interest. 4chan is like a chemical reaction that has moved towards equilibrium. It won't shift easily one way or the other now, and so it'll remain as it is: 4chan. But just what it is, and what sort of spontaneous equilibrium can happen to a community, remains to be seen.

Comment by infotropism on Well-Kept Gardens Die By Pacifism · 2009-04-21T15:15:52.728Z · LW · GW

The karma system isn't enough for the purpose of learning; I fully agree with that. And as to the point of this article, I usually don't downvote people; rather, I try to correct them if I see something wrong. That, if anything, seems more appropriate to me. If I see an issue somewhere, it isn't enough to point it out; I must be able to explain why it is an issue, and propose a way to solve it.

But Eliezer has swayed me on that one. Now I'll downvote, even though I am indeed very uncertain of my own ability to correctly judge whether a post deserves to be downvoted or not. For that matter, I am very uncertain about the quality of my own contributions as well, so there too I can relate to your experience. Sometimes I feel like I'm just digging myself deeper and deeper, and that I am not up to the quality required to post here.

Now, if I were told what in my writing correlates with high karma, and what with low karma, I think I might be tempted to optimize my posts for karma-gathering, rather than for making high-quality, useful contributions.

That's a potential issue. Karma is correlated with quality and usefulness, but ultimately things other than quality alone can come into play, and we don't want to elicit people's optimizing for those for their own sake (like persuasiveness, rhetoric, seductive arguments, well-written, soul-sucking texts, etc.).

We really need to get beyond the karma system. But apparently none of the ways proposed so far would be workable, for lack of programming resources. We'll need to be vigilant till then.

Comment by infotropism on The ideas you're not ready to post · 2009-04-20T10:02:07.247Z · LW · GW

Not the mathematical proof.

But the idea that, if you don't yet have data bound to observation, you decide the probability of a prior by looking at its complexity.

Complexity, defined by looking up, for each possible Turing machine, the smallest compressed bitstring program that can be said to generate this prior as the output of being run on that machine (and that is the reason why it's intractable unless you have infinite computational resources, yes?).

The longer the bitstring, the less likely the prior (and this has to do with the idea that you can make more permutations on larger bitstrings: a one-bit string can be in 2 states, a two-bit one in 4 states, a 3-bit one in 2^3 = 8 states, and so on).

Then do you somehow average the probabilities for all pairs of (Turing machine + program) into one overall probability?

(I'd love to understand that formally)
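
As far as I can tell (this is a sketch of my current understanding, not a worked-out treatment), the formal object is Solomonoff's universal prior: you fix a single universal Turing machine rather than averaging over machines, and the "averaging" over programs is just a sum of 2^(-length) weights:

```latex
% Solomonoff's universal prior (sketch): fix a universal prefix
% Turing machine U, and sum over every program p whose output is x.
M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
% Each program of length |p| contributes weight 2^{-|p|}, so the
% shortest programs dominate the sum: that is where "longer
% bitstring => less likely" comes from. The choice of U only shifts
% program lengths by an additive constant (the invariance theorem).
% The sum is incomputable because deciding whether U(p) ever
% outputs x runs into the halting problem, hence the need for
% unbounded computational resources (or computable approximations).
```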

Comment by infotropism on The ideas you're not ready to post · 2009-04-20T02:23:09.624Z · LW · GW

So maybe, to rephrase the idea, we want to strive to achieve something as close as we can get to perfection; optimality?

If we do, we may then start laying the foundations, as well as collecting practical advice and general methods on how to do that. Though not a step-by-step absolute guide to perfection; rather, the first draft of one idea that would be helpful in aiming towards optimality.

Edit: also, that's a Saint-Exupéry quote that illustrates the idea; I wouldn't mean it that literally, nor as more than a general guideline.

Comment by infotropism on The ideas you're not ready to post · 2009-04-19T23:44:54.718Z · LW · GW

I have an idea I need to build up about simplicity: how to build your mind and beliefs up incrementally, layer by layer; how perfection is achieved not when there's nothing left to add, but when there's nothing left to remove; how simple-minded people are sometimes the ones to declare simple, true ideas that others have lost sight of - others who are too clever and sophisticated, whose knowledge is like a house of cards, or a bag of knots; genius, learning, growing up, creativity correlated with age, zen. But I really need to do a lot more searching before I can put something together.

Edit: and if I post that here, it's because if someone else wants to dig into that idea and work on it with me, that'd be a pleasure.

Comment by infotropism on Rationality Quotes - April 2009 · 2009-04-19T00:23:05.151Z · LW · GW

"I know thy works, that thou art neither cold nor hot: I would thou wert cold or hot. So then because thou art lukewarm, and neither cold nor hot, I will spue thee out of my mouth."

God (presumably), Revelation 3:15-16

Comment by infotropism on Rationality Quotes - April 2009 · 2009-04-18T21:43:52.364Z · LW · GW

"On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

Charles Babbage

Comment by infotropism on Rationality Quotes - April 2009 · 2009-04-18T20:15:12.365Z · LW · GW

And a few interesting reversed proverbs and quotes that ring truer than the original ones can be found there (for those who can read French).

Like:

When in doubt, abstain!

When in doubt, search further!

Comment by infotropism on Rationality Quotes - April 2009 · 2009-04-18T20:02:02.244Z · LW · GW

There will always be a large difference between those who ask themselves "why won't things work as they are meant to?" and those who ask themselves "how could I get them to work?". For the time being, the human world belongs to those who ask "why". But the future belongs, necessarily, to those who ask "how".

Bernard Werber

Comment by infotropism on Rationality Quotes - April 2009 · 2009-04-18T15:50:11.444Z · LW · GW

Philosophy easily triumphs over past and future evils. But present evils prevail over it.

Maxim 22, François de La Rochefoucauld

Comment by infotropism on My Way · 2009-04-17T09:27:12.911Z · LW · GW

Among the only differences I could think of is that noticing the difference between black and white has almost only negative connotations today, while noticing it between males and females is a more mixed bag. What if it were possible to attach positive affective reactions, in excess of negative ones, to that color difference? Would it still be good to abolish people's noticing it? Though color of skin isn't a category in the same sense sex is; it doesn't correlate with as much potential difference.

This also leads to the other reason why you'd think it important to care about the difference between sexes but not between skin colors: the first has practical consequences, for instance on your relationships, while the other does not.

But while this is true, I can't shake the feeling that there is a bias there, which ticks me off. Some people may not feel like gender makes such a difference in how they relate to others. This doesn't support the idea that erasing differences is desirable, but it probably lessens the extent to which the fact that humanity has two sexes adds to life's interest - at least, Eliezer should stick to saying he finds it desirable on a personal level, and be more careful about making it a universal.