Oh, I understood that. Except that your explanation of what happened at the end of Permutation City made sense whereas how that story actually ended did not. Hence I prefer your explanation of the ending of Permutation City to the one provided in the book.
I really enjoyed the story, and I have to say that I prefer your ending to Permutation City over the one that Egan wrote.
I agree, which is why I tend to shy away from performing a moral analysis of Fantasy stories in the first place. That way lies a bottomless morass.
Interesting. It's hard to reconstruct my reasoning exactly, but I think that I assumed that things I didn't know were simply things I didn't know, and based my answer on the range of possibilities -- good and bad.
I would say that the likelihood is overwhelming that BOTH choices will lead to bad ends. The only question is which is worse. That's why I was saying it was between two evils.
Besides, it's hard to reconcile the concept of 'Good' with a single flawed individual deciding the fate of the world, possibly for an infinite duration. The entire situation is inherently evil.
My first impression of this story was very positive, but as it asks us to ask moral questions about the situation, I find myself doing so and having serious doubts about the moral choices offered.
First of all, it appears to be a choice between two evils, not evil and good. On one hand is a repressive, king-based, classist society that is undeniably built on socially evil underpinnings. On the other hand we have an absolute, unquestionable tyranny that plans to do good. Does no one else have trouble deciding which is the lesser problem?
Secondly, we know for a fact that, in our world, kingdoms and repressive regimes sometimes give way to more enlightened states, and we don't know enough about the world to even know how many different kingdoms there are or what states of enlightenment exist elsewhere. For all we know things are on the verge of a (natural) revolution. We can't say much about rule by an infinite power, having no examples to hand, but there is the statement that "power corrupts". Now, I'm not going to say that this is inevitable, but I have at least to wonder if an integration over total sentient happiness going forward is higher in the old regime and its successors, or in the Infinite Doom regime.
Finally, the hero is big into democracy. Where in either of these choices does the will of the peasants fit in anywhere?
EDIT: One more point I wanted to add: since it's clearly not a Choice Between Good and Evil as the prophecy states, why assume there is a choice at all, or that there are only two options? Would not a truly moral person look for a third alternative?
I rather enjoy the taste of a Brown Cow, which is Crème de Cacao in milk. Then again, I'm sure I'd prefer a proper milkshake. Generally, if I drink an alcoholic beverage it's for the side effects.
Granted, the title was probably too flip, but I think yours is a little wordy. I'm not sure I can do better at the moment other than maybe something like "Self-Publication as a Truth Filter".
Reading this, I suddenly had an A-Ha! moment and checked my post from last month that had, mysteriously, never garnered a single comment or vote, and discovered that it was in the drafts area. I could swear that I double-checked it at the time to make sure it had been published, but in any case, I've now made sure it's published. Thanks!
To echo scientists who say that something is "Not Even Wrong" if it's untestable and/or non-scientific to the point of being incomprehensible, my position on the whole religion question is one that I tend to call Ignosticism: religions' definitions of God are so self-contradictory that I don't even know what they mean by God.
Generally, when someone asks if I believe in God, I tell them to define it. When they ask me why, I ask them if they believe in Frub. If so, why? If not, why not? Without my giving them a definition, how can they possibly give a rational answer?
The above is a great list. Here are a couple more to add:
Vision can also be divided into a modelling sense (what's out there) and a targeting sense (where is something). There are known cases of someone losing one of these without the other (e.g., a totally 'blind' man being able to perfectly track a moving target with his pointing finger by 'guessing').
As well, we have something called the 'General Chemical Sense' that alerts us to damage to mucous membranes, and is the thing that is complaining when you have a burning sensation during excretion after you've had a spicy meal.
I think this post made some very good points and I've voted it up, but I want to pick a nit with the mention of "your five senses". That's Aristotelian mythology. We have many more than five, so could you please edit this to just read "your senses"?
(Actually, since I'm posting this, I should mention I don't believe in qualia either, but that is a debate of an entirely different order).
I think it will be very necessary to carefully frame what it would be that we might wish to accomplish as a group, and what not. I say this because I'm one of those who thinks that humanity has less than a 50% chance of surviving the next 100 years, but I have no interest in trying to avert this. I am very much in favour of humanity evolving into something a lot more rational than what it is now, and I don't really see how one can justify saying that such a race would still be 'humanity'. On the other hand, if the worry is the extinction of all rational thought, or the extinction of certain, carefully chosen, memes, I might very well wish to help out.
The main problem, as I see it, is in being clear on what we want to have happen (and what not) and what we can do to make the preferred outcomes more likely. The more I examine the entire issue, the harder it appears to define how to distinguish between the good and the bad outcomes.
Okay, then I shall attempt to come up with a post that doesn't re-cover too much of what yours says. I shall have to rethink my approach somewhat to do that though.
I find it interesting that some folks have mental imagery and others don't, because this possibility had never occurred to me despite having varying ability with this at different times. My mental imagery is far more vivid and detailed when I'm asleep than when I'm awake, which I've often wondered about.
This post completely takes the wind out of the sails of a post I was planning to make on 'Self-Induced Biases', where one mistakes the environment one has chosen for oneself as being, in some sense, 'typical' and then derives lots of bad mental statistics from it. Thus, chess fanatics will tend to think that chess is much more popular than it is, since all their friends like chess, disregarding the fact that they chose those friends (at least partly) based on a commonality of interests.
A worse case is when the police start to think that everyone is a criminal because that's all they ever seem to meet.
But what does that have to do with the adjectives of 'near' and 'far'?
Lurkers and Involvement.
I've been thinking that one might want to make a post, or post a survey, that attempts to determine how much folks engage with the content on Less Wrong.
I'm going to assume that there are far more lurkers than commenters, and far more commenters than posters, but I'm curious as to how many minutes, per day, folks spend on this site.
For myself, I'd estimate no more than 10 or 15 minutes, but it might be much less than that. I generally only read the posts from the RSS feed, and only bother to check the comments on one in five. Even then, if there are a lot of comments, I don't bother reading most of them.
One of the reasons I don't post is that I often find it takes me 20-30 minutes to put my words into a shape that I feel is up to the rather high standard of posting quality here, and I'm generally not willing to commit that much of my time to this site.
I think the question of how much time an average person thinks a site is worth is an important metric, and one we may wish to try to measure with an eye to increasing the average for this site.
Heck, that might even get me posting more often.
I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:
Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box', and one good technique I found for that was to just blurt out the first technological idea that occurred to me when presented with a technological problem, but to do so in a non-serious tone of voice. Often enough the idea is one that nobody else has thought of, or has automatically dismissed for what, in retrospect, were insufficient reasons. Other times it's so obviously stupid an idea that everyone thinks I'm making a joke. It doesn't hurt that I often do deliberately joke.
I don't know if this is a technique others should adopt or not, but I've found it has made me far less afraid of appearing stupid when presenting ideas.
I have to admit, I've never understood Hanson's Near-Far distinction either. As described it just doesn't seem to mesh at all with how I think about thinking. I keep hoping someone else will post their interpretation of it from a sufficiently different viewpoint that I can at least understand it well enough to know if I agree with it or not.
A friend of mine has offered to lend me the Kushiel series on a number of occasions. I'm starting to think I should take her up on that.
Well, as an additional data point on how folks find less wrong, I found it through Overcoming Bias. I found that site via a link from some extropian or transhumanist blog, although I'm not sure which.
And I found the current set of my extropian and/or transhumanist blogs by actively looking for articles on cutting-edge science, which turn out to often be referenced by transhumanist blogs.
If we assume I'm rational, then I'm not going to assume anything about Omega. I'll base my decisions on the given evidence. So far, that appears to be described as being no more and no less than what Omega cares to tell us.
I never knew I had an inbox. Thanks for telling us about that, but I wonder if we might not want to redesign the home page to make some things like that a bit more obvious.
This touches on something that I've been thinking about, but am not sure how to put into words. My wife is the most rational woman that I know, and it's one of the things that I love about her. She's been reading Overcoming Bias, but I've never been completely sure if it's due to the material, or because she's a fan of Eliezer. It's probably a combination of the two. In either case, she's shown no interest in this particular group, and I'm not sure why.
I also have a friend who is the smartest person and the best thinker that I've ever met. He's a practicing rationalist, but of the sort who uses it as a means to an end. In his case, it's the design of computer systems of all kinds. Now, I haven't even bothered to point out the Overcoming Bias and Less Wrong communities to him, as I can't imagine he'd have any interest in them, although I'm sure he'd provide useful insights if one could get him interested.
So, of the three most likely candidates to participate in this group that I know of, only one does. This may well be partly due to my own biases in which groups of people I select to tell about which blogs I read, but I think some of it has got to be due to this site somehow appealing to a narrower segment of the population than those to whom it might be most valuable.
I have no proposed solution. This is simply an observation.
That, of course, is your opinion and you're welcome to it. But I thought that I was (perhaps too verbosely to be clear) pointing out that the original article was yet another post on Less Wrong that seemed to be saying:
"Do X. It's the rational thing to do. If you don't do X, you aren't rational."
I was trying to point out that there may be many rational reasons for not doing X.
Ah, interesting. That was not considered important enough to get into the RSS feed, so I never saw it.
I find it 'interesting' that we've both had our posts voted down to zero. Could it be that someone objects to pointing out that the game is a money sink and therefore one might have perfectly rational reasons to avoid it?
I have a Magic deck, but I don't often play. That's because Magic is not only an interesting game, it's been carefully designed to continually suck more money out of your pocket.
Ever since it was first introduced (I happen to own a first-generation deck), the game has been slowly increasing the power levels of the cards, so that older cards are less and less valuable and one needs to buy ever newer cards just to stay competitive.
Add to this the fact that they regularly bring out new types of cards that radically shift the power balances in the game, and one finds that it becomes a very expensive hobby to keep up with if you want to play with a random assortment of your friends.
So, as with Warhammer 40K (another game known for being designed as a money sink), I've deliberately stayed away from playing competitively. Oh, I have a few decks from back when the game was launched and was recently gifted another few by a friend who wanted to play, and I really do enjoy playing, but I'm not going to let myself get sucked in.
My first thought was to assume it was part of the whole alpha-male dominance thing. Any male that wants to achieve the status of alpha-male starts out in a position of being an underdog and facing an entrenched opposition with all of the advantages of resources.
But, of course, alpha-males outperform when it comes to breeding success, and so most genes are descended from males that have confronted this situation, striven against "impossible" odds, and ultimately won.
Of course, if this is the explanation, then one would expect there to be a strong difference in how males and females react to the appearance of an underdog.
Well, that's just me. I've never been afraid of leaping feet-first into a paradox and seeing where that takes me. Which reminds me, maybe there's a post in that.
These are both good points. Frankly I wasn't trying to rock the boat with my post, I was trying to find out if there was a group of disgruntled rationalists who hadn't liked the community posts and had kept silent. Had that been the case, this post would (I'm assuming) have helped to draw them out.
As for what I WOULD like to see, that's a tricky problem in that I am interested in Rationality topics that I know little to nothing about. The trouble is, right now I don't know what it is that I don't know.
Comments vs. Upvoting.
I've been wondering if the number of comments that a post (or comment) gets should have an effect on its karma score. I say this because there are some 1-point comments that have many replies attached to them. Clearly folks thought the comment had some value, or they wouldn't have replied to it. Maybe we need to have each comment count as a vote, with the commenter having to explicitly choose +, -, or neutral in order to post?
I'm only now replying to this, since I've only just figured out what it was that I was groping for in the above.
The important thing is not compression, but integration of new knowledge so that it affects future cognition and future behaviour. The ability to change one's methodologies and approaches based on new knowledge would seem to be key to rationality. The more subtle the influence (e.g., a new bit of math changes how you approach buying meat at the supermarket), the better the evidence for deep integration of new knowledge.
You are stating that. But as far as I can tell, Omega is telling me it's a capricious omnipotent being. If there is a distinction, I'm not seeing it. Let me break it down for you:
1) Capricious -> I am completely unable to predict its actions. Yes.
2) Omnipotent -> Can do the seemingly impossible. Yes.
So, what's the difference?
When I look at my question there, the only answer that seems appropriate is 'Introspection' as that's at least a step towards an answer.
And if Omega comes up to me and says "I was going to kill you if you gave me $100. But since I've worked out that you won't, I'll leave you alone." then I'll be damn glad I wouldn't agree.
This really does seem like pointless speculation.
Of course, I live in a world where there is no being like Omega that I know of. If I knew otherwise, and knew something of their properties, I might govern myself differently.
I think my answer would be "I would have agreed, had you asked me when the coin chances were .5 and .5. Now that they're 1 and 0, I have no reason to agree."
Seriously, why stick with an agreement you never made? Besides, if Omega can predict me this well, he knows how the coin will come up and how I'll react. Why, then, should I try to act otherwise? Somehow, I think I just don't get it.
I don't have much of a vested interest in being or remaining human. I've often shocked friends and acquaintances by saying that if there were a large number of intelligent life forms in the universe and I had my choice, I doubt I'd choose to be human.
This has been voted into the negatives, but I'm not sure it's such a bad idea. If we can set up a system where all of the students, teachers, and any other staff are in continuous rationality competitions with each other, then this would quickly hone everyone's skills.
For example, maybe the teacher of a class is chosen from within a class and has to fight (metaphorically) to maintain that position. Maybe the choice of whether you are teacher, student, principal, cafeteria cook, or janitor depends on the outcomes of numerous rationality contests between members.
And note that I don't necessarily mean that cafeteria cook or janitor would be positions that go to the losers...
Well, there's always the idea of using fMRI scans to determine if someone is thinking in 'rational' patterns. You stick them under the machine and give them a test. You ignore the results of the test, but score the student on what parts of their brains light up.
You'd have to define 'cheated on'. A fair number of the most rational folks I know live in non-traditional marriage arrangements.
I agree. The only solutions to this that I can see is to either not let students know when they are being tested, or to have a system of continual testing.
A friend of mine, the most consistently rational person I know of, once told me that his major criteria for whether a piece of information is useful is if it can allow him to forget multiple other pieces of information, because they are now derivable from his corpus of information, given this new fact.
I have a vague feeling that there should be a useful test of rationality based on this. Some sort of information-modelling test whereby one is given a complex set of interrelated but random data, and a randomly-generated data-expression language. Scoring is based on how close to optimal one gets at writing a generator for the given data in the given language.
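As a rough sketch of the scoring idea only (the "run_program" interpreter for the generated data-expression language is hypothetical, and using zlib's compressed size as a stand-in for the 'optimal' generator length is my assumption, not part of the idea as described):

```python
import zlib

def score_generator(student_program: str, data: bytes, run_program) -> float:
    """Hypothetical scorer: the student's program must regenerate the data
    exactly; shorter programs score higher, using zlib's compressed size as
    a crude stand-in for the 'optimal' generator length."""
    if run_program(student_program) != data:
        return 0.0                                # wrong output: no credit
    baseline = len(zlib.compress(data))
    return min(1.0, baseline / len(student_program.encode()))

# Toy usage with a dummy interpreter that just evaluates a Python expression.
data = bytes(range(10)) * 50
program = "bytes(range(10)) * 50"
print(score_generator(program, data, lambda p: eval(p)))
```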
Unfortunately, I think this is something one could explicitly train for, and someone with knowledge of data compression theory would probably be at an advantage.
Well, you asked for DUMB ideas, so here's mine. It has the advantage that I'm sure no one else will suggest it. It's based on an accidental discovery (so far as I know, unpublished) that one can compare two arbitrary documents for similarity (even if they are in different word-processor formats) by running them both through a recognizer built out of a random state machine and comparing bit masks of all the states traversed. The more the documents have in common, the more states will be traversed in both.
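To make that concrete, here is a minimal sketch of how I imagine it working; the state count, seed, and use of Jaccard overlap of the visited-state sets are my guesses, since the original discovery is unpublished:

```python
import random

def make_random_dfa(num_states=4096, alphabet_size=256, seed=42):
    """Build a random transition table: state x byte -> next state.
    (Hypothetical parameters; the original 'random state machine' is unpublished.)"""
    rng = random.Random(seed)
    return [[rng.randrange(num_states) for _ in range(alphabet_size)]
            for _ in range(num_states)]

def visited_states(dfa, data: bytes) -> set:
    """Run the document's bytes through the DFA and record every state traversed."""
    state = 0
    seen = {state}
    for byte in data:
        state = dfa[state][byte]
        seen.add(state)
    return seen

def similarity(dfa, doc_a: bytes, doc_b: bytes) -> float:
    """Jaccard overlap of the two visited-state sets: more shared states,
    more similar documents (one plausible reading of 'comparing bit masks')."""
    a, b = visited_states(dfa, doc_a), visited_states(dfa, doc_b)
    return len(a & b) / len(a | b)

dfa = make_random_dfa()
print(similarity(dfa, b"the quick brown fox", b"the quick brown cat"))
print(similarity(dfa, b"the quick brown fox", b"completely unrelated text"))
```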
So, let's assume we have a panel of highly rational individuals who are our control group. We generate a random multiple-choice questionnaire consisting of nonsensical questions and answers. Things like:
1) How Green is the Smell of Bacon?
a) 7.5
b) Neon
c) Introspection
d) Larger
You then do a correlation over how your panel of experts chose their answers and see if there is a common pattern. You then score students who take the test based on how similar to the common pattern they are.
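A minimal sketch of that scoring step, using the panel's modal answer per question as the 'common pattern' (a simplification of the correlation I have in mind; all data here is made up):

```python
from collections import Counter

def panel_pattern(panel_answers):
    """For each nonsense question, take the panel's modal answer as the
    'common pattern' (a simple stand-in for the correlation described above)."""
    return [Counter(q_answers).most_common(1)[0][0]
            for q_answers in zip(*panel_answers)]

def score_student(student_answers, pattern):
    """Score = fraction of questions where the student matches the panel's pattern."""
    matches = sum(s == p for s, p in zip(student_answers, pattern))
    return matches / len(pattern)

# Hypothetical data: answers from 3 panelists and 1 student on 4 questions.
panel = [["a", "b", "b", "d"],
         ["a", "b", "c", "d"],
         ["a", "b", "b", "a"]]
pattern = panel_pattern(panel)                        # ['a', 'b', 'b', 'd']
print(score_student(["a", "c", "b", "d"], pattern))   # 0.75
```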
Assuming this idea works at all, the advantage of this is that it would be extremely difficult to game. The disadvantage would be that it would penalize those who are significantly more rational than the 'norm'. It would probably also require the panel to be similar to each other in cognition. There is also the general problem of not knowing if you're really testing for what you think you're testing.
Frankly, I don't know if I'd be more happy if this was tested and shown to be workable, or if it turned out to be a really stupid idea.
You decided to try achieving that "non-rational" goal, so it must be to your benefit (at least, you must believe so).
Yes, exactly. The fact that you think it's to your benefit, but it isn't, is the very essence of what I mean by a non-rational goal.
Actually, I find I have the exact opposite problem. I almost never vote. Partly that's because I read Less Wrong through an RSS feed that doesn't even show the vote totals. I only ever vote if, like now, I've gone to the actual site in order to comment.
Even then, I find that I am comparing the quality of Less Wrong posts and comments against the entire corpus of what I read on a daily basis, some of which is great, and some of which is dreck.
So, I tend to only vote when the quality of what is written is extremely good -- enough so that I want to 'reward' it -- or extremely bad, so that I want to punish. The vast majority is in the middle and so I don't bother to vote.
I am replying to my own post here, because I've been fascinated by how the score on this post keeps changing. It was at +1 immediately after I posted it, then dropped to -2 within seconds. The next time I checked it was at +1, and I voted it down to -1. Now it's back up at +1. There may well have been intermediate ups and downs I missed. Too bad I can't see a history of the voting.