Posts

Comments

Comment by Glen on Rationality Quotes July 2016 · 2016-07-26T18:42:59.543Z · LW · GW

So, in context this is someone trying to defuse a dangerous situation with placating lies. How is this rationality?

Comment by Glen on Ultimate List of Irrational Nonsense · 2016-04-01T14:57:41.847Z · LW · GW

That is true. However, at some point you are trying to fit too much into a single image or chart. I think what you're describing here could work if you keep it focused on a smaller range of ideas, rather than this many. It would also allow people to think individually about each claim, which larger sets don't really do.

I think your proposed chart would work best as an introduction or header to a more in-depth analysis. Show the shape of the arguments and faults, then discuss each one thoroughly beneath the image.

Comment by Glen on Lesswrong 2016 Survey · 2016-03-30T22:13:52.466Z · LW · GW

I have taken the survey

Comment by Glen on Ultimate List of Irrational Nonsense · 2016-03-30T21:57:57.836Z · LW · GW

I believe the problem people have with this is that it isn't actually helpful at all. It's just a list of outgroups for people to laugh at, without any sort of analysis of why they believe these things or what can be done to avoid falling into the same traps. Obviously a simple chart can't really encompass that level of explanation, so its actual value or meaningful content is limited.

EDIT: Also, looking over your list it seems that you have marked most philosophies and alternate governments as "Immoral", along with literally everything as "Pointless and Counterproductive". Anarchism, Authoritarianism, Bushido, Collectivism, Cultural Relativism, Cynicism, Defeatism, Ecocentrism, Egocentrism, Error Theory, Ethical Egoism, Fascism, Gothicismus, Harmonious Society & Scientific Outlook on Development, Hedonism, Illegalism, Libertarianism, Machiavellianism, Medievalism, Misanthropy, Misology, Moral Relativism, Moral Skepticism, Moral Subjectivism, Nihilism, Non-Atomic Eudaiominism, Opportunism, Pacifism, Sensualism, Ubuntu(!), Value-Pluralism, Virtue Ethics, and Voluntaryism are all marked as "Immoral" and nothing else. I have a lot of issues with your list, but the one that jumps out the most is Ubuntu. How is UBUNTU of all things Immoral, Pointless and Counterproductive?

Comment by Glen on Rationality Quotes Thread March 2016 · 2016-03-16T16:09:04.847Z · LW · GW

By he I meant Vox. I read the linked post, and it makes all these mistakes. I wouldn't expect a quote to include a full argument or evidence base, but the source ideally should.

Comment by Glen on Rationality Quotes Thread March 2016 · 2016-03-16T15:18:52.073Z · LW · GW

He lists a single "parasitic" non-profit, and then declares the entire field of non-profits to be corrupt thieves on the scale of the financial sector. This post is explicitly about his disgust with the "non-profit world", and he pretty clearly believes that this sort of thing is common, despite providing no strong evidence in support of that belief. That is his mistake: generalizing from a single example with no additional evidence provided or even discussed.

Comment by Glen on Rationality Quotes Thread March 2016 · 2016-03-16T14:30:25.268Z · LW · GW

The closest thing to rationality content I can pull from this is "just because a thing looks good, doesn't mean it is good". However, the source page lists a grand total of one corrupt non-profit. You can find one bad version of anything, no matter how good or bad the whole group is. You could probably even find a hundred such examples, just from population size and base rates alone. Vox doesn't attempt to check whether he is right; he doesn't even list a few examples. He just lists a single instance of a probably corrupt non-profit and, pleased with his own cynicism and insight, declares that he has found a pattern. This is a good example of what not to do, and an important failure mode to watch out for, but you are presenting it as though it were rational rather than a cautionary tale.

Comment by Glen on Rationality Quotes Thread February 2016 · 2016-02-17T21:59:46.690Z · LW · GW

This is an interesting historical note, but I am having a hard time seeing why it is a rationality quote. Perhaps as a record of people acting irrationally? Would you mind explaining a bit?

Comment by Glen on Rationality Quotes Thread February 2016 · 2016-02-08T20:37:30.288Z · LW · GW

Even if "do what makes you happy" were the best rationality advice, the big problem is figuring out what actually makes you happy, how to achieve it, and how to maintain or improve it. Getting drunk is pretty bad advice from a rationality standpoint, because it sacrifices long-term gain for short-term pleasure, which is basically the opposite of what you should do. The man drinking at a bar all day is happier right now than the one working extra hours or studying, but in a few years their positions will probably be reversed, as the latter's investment pays off and the former is still just drinking (only with more health problems).

Comment by Glen on Rationality Quotes Thread February 2016 · 2016-02-08T20:31:38.625Z · LW · GW

That's not even true, though. If you are kidnapped and then tortured, you are not remotely in control of your own happiness, just to take the most obvious extreme example. Even in more mundane situations, people can be trapped in terrible circumstances where cruel people have power over them. If you are working minimum-wage jobs with bills seemingly coming due every day, which you can only cover by working 18-hour days before collapsing exhausted and doing it all again in the morning, there is very little you can do about it. Now if one of the supervisors at one of your jobs is a petty tyrant who makes you miserable, what choice do you have that would increase your happiness?

I see what the quote is trying to say, as a call to action to change your own life, but it simply isn't true. It also fails the false wisdom reversal test, in that a quote saying "You have no true control over your own happiness; therefore, you must accept your lot in life with all the grace you can muster" sounds just as deep and helpful.

Comment by Glen on Rationality Quotes Thread January 2016 · 2016-01-25T19:54:08.860Z · LW · GW

I find myself agreeing with your general statement, that it is important to not treat the outspoken members of a group as indicative whether good or bad, while being somewhat worried that you have fallen into the same pattern in the process of trying to explain it.

Your examples of feminist and men's rights activist generalizations seem to be examples of the sort of one-sided generalizations you warn about in the very next paragraph. Men's rights activists are generalized in a positive fashion - they are victims of circumstance, trying to avenge the wrongs done to them - while feminists are portrayed in a negative fashion - one-dimensional bigots building a career on hating men. I think it would have served your point better if you had attempted positive generalizations for both. As it stands, it just seems to undermine your general point. In fact, you should probably avoid contemporary political groups when giving examples, to avoid this sort of thing altogether.

It is possible that you deliberately chose those generalizations in order to demonstrate the trap many people fall into. If that is the case, I think you need to make it more clear. Examples of failed rationality are useful, but should be clearly labeled.

Additionally, I don't see how learning the opinions of the silent majority is reversed stupidity. We already know the opinions of the vocal minority, wouldn't learning the opinions of the silent majority give us a clear picture of the whole group's opinions? I suppose there could be a third group left out by this, some sort of Mumbling Moderates, but it should be easy enough to pick them up in well designed polls as well.

Comment by Glen on Rationality Quotes Thread January 2016 · 2016-01-06T21:00:31.999Z · LW · GW

That depends on the situation and record, doesn't it? If 90% of changes that you have undergone in the past were negative, then wouldn't it be reasonable to resist change in the future? Obviously you shouldn't just outright refuse all change, but if you have a chance to slow it down long enough to better judge what the effects will be, isn't that good? I guess the real solution is to judge possible actions by analyzing the cost/benefit to the best of your ability in cases where this is practical.

Comment by Glen on Rationality Quotes Thread December 2015 · 2015-12-10T15:54:34.385Z · LW · GW

(To make it clear: I have never seen the movie in question, so this is not a comment on the specifics of what happened) Just because it turned out poorly doesn't make it a bad rule. It could have had a 99% chance to work out great, but the killer is only seeing the 1% where it didn't. If you're killing people, then you can't really judge their rules, since it's basically a given that you're only going to talk to them when the rules fail. Everything is going to look like a bad rule if you only count the instances where it didn't work. Without knowing how many similar encounters the victim avoided with their rule, I don't see how you can make a strong case that it's a bad (or good) rule.

Comment by Glen on Rationality Quotes Thread December 2015 · 2015-12-04T19:40:59.907Z · LW · GW

Are there any other systems for judging medicine that more accurately reflect reality? I know very little about medicine in general, but it would be interesting to hear about any alternate methods that get good results.

Comment by Glen on Rationality Quotes Thread November 2015 · 2015-11-25T22:06:54.012Z · LW · GW

Why is it not ridiculous? From skimming the source, he seems to be using a long discredited biological idea and applying it to intelligence because there's a vague resemblance if you squint at it. There's no clear reason to believe that vitalism would be any more possible, let alone plausible, with regards to intelligence as opposed to organic compounds.

Comment by Glen on Rationality Quotes Thread November 2015 · 2015-11-16T20:24:33.156Z · LW · GW

While it has some amusing jokes in it, this isn't a rationality quote. It won't help anyone think better, doesn't clarify beliefs, and doesn't offer insight into anything. It's only a way of laughing at the out-group, which is counterproductive even when they are wrong.

Comment by Glen on Rationality Quotes Thread November 2015 · 2015-11-13T16:33:26.045Z · LW · GW

I don't follow what this has to do with rationality. Could you explain further?

Comment by Glen on Rationality Quotes Thread October 2015 · 2015-10-20T15:10:13.081Z · LW · GW

There's no way that this is actually true, though. Before anybody has met you, they have 0 interest in you. After they have met you, their interest may change based on what you say/do etc. (People's first impressions are important, but do not literally set a limit for how interested they will ever be) It is therefore entirely possible that a given person would have some combination of things you can say and do to increase how much they are interested in you, and indeed one of the major points of dating is to see if that will happen. While some people will just never be interested in you no matter what you say or do, it's ridiculous to just say it's impossible to specifically target any given person.

Comment by Glen on Rationality Quotes Thread February 2015 · 2015-02-02T18:31:50.758Z · LW · GW

We can't go back, Mat. The Wheel has turned, for better or worse. And it will keep on turning, as lights die and forests dim, storms call and skies break. Turn it will. The Wheel is not hope, and the Wheel does not care, the Wheel simply is. But so long as it turns, folk may hope, folk may care. For with light that fades, another will eventually grow, and each storm that rages must eventually die.

-Thom Merrilin, The Gathering Storm by Robert Jordan and Brandon Sanderson

(For those unfamiliar with the series, the Wheel is basically reality/the universe)

Comment by Glen on Rationality Quotes August 2013 · 2013-08-05T17:26:58.546Z · LW · GW

This is similar to how I've interpreted it. The character comes from a pre-enlightenment society, and is considered one of the greatest intelligence agents largely due to his ability to get results where nobody else can. He privately attributes this success to a rational mind and extensive [chess] skill that trains him to approach things as though they can be solved. While "stop and think about problems like they were games to be won instead of chores to be blamed on someone else" may seem obvious to people used to thinking like that, it's a major shift for most people.

Comment by Glen on Rationality Quotes August 2013 · 2013-08-02T22:52:18.151Z · LW · GW

Everything can be reduced to an abstraction, a puzzle, and then solved

-Ledaal Kes (Exalted Aspect Book: Air)

Comment by Glen on Welcome to Less Wrong! (6th thread, July 2013) · 2013-07-25T22:34:55.194Z · LW · GW

http://lesswrong.com/lw/r5/the_quantum_physics_sequence/

This is the root level of the sequence, and it links to all of the posts, I believe.

Comment by Glen on Welcome to Less Wrong! (6th thread, July 2013) · 2013-07-25T20:25:59.259Z · LW · GW

The most interesting stories come from a power in Exalted called "Wise Choice". Basically, you give it a situation and a finite list of actions you could take, and it tells you the one that will have the best outcome for you within the next month. It also requires a moderate expenditure of mana, so it can't be used over and over without cost. When I read what the charm did, I thought of Harry's time-experiment with prime numbers. It was immediately obvious that Wise Choice could factorize any number easily, although perhaps not cheaply if it has a large number of factors.

From there, it also expanded to finding literally anything in the world, either with one big question (if low on mana) or a quick series of smaller ones (if low on time), by dividing the world into a grid and either listing every square or doing a basic binary search via asking the power "Given that I'm going to keep dividing the world in half and asking a similar question to this one, which half of the world should I focus on to get within 10 feet of Item/Person X's location at exactly 7PM tomorrow evening?"

I also figured out that you can beat the one-month time limit by pre-committing to asking the same question in 27 days, and having someone else promise to give you a reward if you state the same thing each month, with the caveat that you have to give it all back if you're proven wrong in the end or change your answer. This can be shown to work (assuming I haven't made a mistake) by taking a simple case of two boxes, one containing ten million dollars and the other empty. By choosing a box now, it will be opened in six months and you will be given what is inside. Without the trick, Wise Choice looks forward one month, sees no difference, and tells you "it doesn't matter". With the trick, Wise Choice looks forward a month and tells you to say what it sees future you saying, even though it doesn't "understand" why. However, future you can see an additional month forward, and uses it to see future you+2, etc. Therefore, the first instance gives you the true box, even though it can't see to when the box opens.
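To make the binary-search version concrete, here is a minimal sketch in Python. Everything in it is hypothetical - `locate` and `toy_oracle` are just stand-ins for phrasing the "which half should I focus on" question over and over - and a real search over a 2-D grid would simply alternate between axes, roughly doubling the question count:

```python
# Minimal sketch of the grid-halving idea (hypothetical names throughout):
# each "which half should I focus on?" question cuts the search region in
# half, so narrowing one axis of the Earth down to ~10 feet (~3 m) only
# takes a couple dozen questions.

def locate(oracle, lo, hi, precision=3.0):
    """Narrow the 1-D interval [lo, hi] down to `precision` (in meters)
    using an oracle that names the half containing the target."""
    questions = 0
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        # oracle(left, right) stands in for one use of the charm; it
        # answers "left" or "right".
        if oracle((lo, mid), (mid, hi)) == "left":
            hi = mid
        else:
            lo = mid
        questions += 1
    return (lo, hi), questions


if __name__ == "__main__":
    target = 1234567.0  # pretend this is the unknown coordinate along one axis

    def toy_oracle(left, right):
        # Toy stand-in: it simply "knows" the target, like the charm would.
        return "left" if left[0] <= target < left[1] else "right"

    # ~40,000,000 m is roughly the Earth's circumference.
    interval, asked = locate(toy_oracle, 0.0, 40_000_000.0)
    print(f"Narrowed to {interval} after {asked} questions")
```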

Of course, it's possible that I've missed a case that makes those tricks invalid. I don't have access to an actual infinite-knowledge superpower to check my work, but I figure telling other people about it so they can see things I missed is almost as good.

Comment by Glen on Welcome to Less Wrong! (6th thread, July 2013) · 2013-07-25T17:22:16.864Z · LW · GW

Hello all, my name is Glen and I am a fairly long-time lurker here. I first found this site through the Sword of Good short story, filed it in my "List of things I want to read but will never actually get around to", and largely forgot about it until I recognized the name while reading HPMOR. I've read most, but not all, of the sequences and am currently going through Quantum Mechanics. I'm Chicago based and work as a programmer for an advertising company. I consider myself a low-mid level rationalist and am working at getting better.

I run or play in a wide range of tabletop games, where I'm known as being a GM-Friendly Munchkin. That is to say, I like finding exploits and unusual combinations, but then I talk to the person running the game about them and usually explain why I shouldn't be allowed to do that. It lets me have fun breaking the system without actually making the game less fun. I've also used basic information theory to great effect, unless the GM tells me to knock it off. Currently in love with Exalted. Been burned by Shadowrun in the past, but I just can't stay mad at her.