Comments
I actually like this post and agree with most of the points you make. And I'm not talking about the meta points about steelmanning and rhetorical tricks.
The obvious, clearly stated bias led me to better insights than most articles that claim a true understanding of their subject.
I'm not sure whether this is due to increased attention to weak arguments, or to a greater freedom to ignore weak arguments since they are probably not serious anyway.
Can it be both? Was that effect intentional?
I would read a "Steelmanning counterintuitive claim X" series.
I know it as Liquid Democracy or http://en.wikipedia.org/wiki/Delegative_democracy
I like your solution to Pascal's mugging, but as some people mentioned, it breaks down with superexponential numbers. This is caused by the extreme difficulty of doing meaningful calculations once such a number is present (similar to infinity or division by zero).
I propose the following modification:
- Given a problem that contains huge payoffs or penalties, try common_law's solution.
- Should any number above a googol appear in the calculation, refuse to calculate!
- Try to reformulate the problem in a way that doesn't contain such a big number.
- Should this fail, do nothing.
I would go so far as to treat any claim with such numbers in it as fictional.
Another LW classic containing such numbers is the Dust Specks vs. Torture paradox. I think that even trying to calculate in the presence of such numbers is a fallacy. Has someone already formulated a Number-Too-Big Fallacy?
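A minimal sketch of the modified procedure in Python (the function, its interface, and the exact cutoff test are my own illustration, not a worked-out decision theory):

```python
GOOGOL = 10 ** 100

def guarded_expected_value(outcomes):
    """outcomes: a list of (payoff, probability) pairs.

    Step 1 (common_law's solution) is not reproduced here; this sketch
    only implements the refusal rule for numbers above a googol.
    """
    if any(abs(payoff) > GOOGOL for payoff, _ in outcomes):
        # Refuse to calculate: the caller should reformulate the
        # problem without such numbers, or do nothing.
        raise ValueError("number too big, refusing to calculate")
    return sum(payoff * prob for payoff, prob in outcomes)
```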
- Select one and only one cause to join that you really care about.
- Activism is useful for networking, as already mentioned. Treat it as a tool, not as an achievement.
- Read to find out what really needs to change. What are the root causes? What keeps the movement from being effective?
- Again, select just one of these according to your abilities.
- Edit: Oh, and please just do it. Don't get lost in "I will be more effective by earning money and paying someone else to do it" mind games. You can't pay people to actually care; they will do a lousy job. Find something you can do and grow with the challenge!
I've been in that spot for a long time, and my excuse was always that vegetarianism would be too inconvenient.
Around the end of last year it finally clicked. The inconvenience excuse is plainly wrong in many cases, AND being a vegetarian in just those cases is still a good thing!
I resolved to eat vegetarian whenever it is not inconvenient. This turned out to be almost always. Restaurants and ordering food are especially easy. When in a supermarket I never buy meat, which automatically sets me up for lots of vegetarian meals.
I'm currently eating vegetarian for ~95% of my meals. As a bonus, I don't have a bad conscience in the few cases where I do eat meat.
Some LessWrong links that may be of interest:
http://lesswrong.com/lw/1qq/debate_tools_an_experience_report/
Here are two projects that try to remove subvocalization. They're fun to try, at least. http://www.spreeder.com/ http://learn2spritz.com/
I find the qualitative reflections most enlightening, especially that you said: "But never in the course of this experiment did I count something that turned out to be unimportant."
Your under-confidence on that point may be very common, leading to thoughts like: "Yeah, noticing confusion is all nice, but I usually do that already. I'm fairly certain that I'm only missing some irrelevant confusion." Your experience suggests that there is no such thing as irrelevant confusion. The art is to notice as many confusions as humanly possible, instead of just some.
I have never read a better motivation to go and actively try to notice confusion than this sentence. Thanks.
Lying is saying something false when you know better. Not lying doesn't imply only saying true things, or knowing all the implications of what you say.
The added burden should be minimal, as between friends most people already assume they are not being lied to, without making it an explicit rule.
Wait, wait, has the game already started?
The start of the game may be undefined, and whether a lie counts as inside the game depends a lot on the players.
For everyone who thinks he can't change the voice: Picture
I actually read the article because of your post, and it was interesting. I agree with your point; I just didn't like the style, and I could have been more diplomatic about saying so.
Keep posting. :-)
I don't think prevention is very likely, as EY's comment suggests that moderator intervention will be very hard or even impossible, so disincentivizing is probably the way. I hope my suggestions would remove a motivation for mass downvoting by making it impossible to attack someone's karma.
This decreases your work in commenting by increasing the work for some readers. It would be globally more useful to spend one minute on a better comment, like the one Viliam_Bur posted, than to have an unknown number of people read the linked article just to understand your point.
Your utility function and opinion may differ, though; perhaps your intention was not primarily to get a point across, but to make people read the article?
A less extreme modification of the karma system would be to keep the downvotes but change how karma is calculated for the users.
Karma could be defined as the sum of all votes on posts with positive total score. An alternative change would be to count only the upvotes and ignore downvotes completely in the karma calculation.
In both cases the general correlation between users who post great content and high karma would stay intact, but mass downvoting would no longer feel as threatening. All the signaling benefits you mentioned would still work in this modified system.
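For concreteness, here is a sketch of both variants (the data layout and function names are hypothetical; this is not the actual LW karma code):

```python
def karma_positive_posts_only(posts):
    # Variant 1: a post's votes count toward user karma only if the
    # post's total score is positive.
    return sum(p["up"] - p["down"] for p in posts if p["up"] > p["down"])

def karma_upvotes_only(posts):
    # Variant 2: downvotes still lower a post's displayed score,
    # but never the user's karma.
    return sum(p["up"] for p in posts)

posts = [{"up": 10, "down": 2}, {"up": 1, "down": 30}]  # second post mass-downvoted
print(karma_positive_posts_only(posts))  # 8: the mass-downvoted post is simply ignored
print(karma_upvotes_only(posts))         # 11
```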
Do you think these are acceptable changes to the karma system?
Yes, I replied too quickly to your comment. Already fixed.
"Society" doesn't make decisions, groups of people make decisions.
The way society forms mass opinions and decides important issues (e.g. by voting) is not easily split into groups of people making decisions.
Still, I accept your mechanism, because group decisions are a large part of society, and improving them will improve society.
About the group project: if we could get everyone to be "genuinely rational" instead of just a bit more rational, we would certainly live in a very different world. I don't expect that anytime soon, though.
You're right. "Has read a majority of the sequences so that there is a high probability that this specific sequence is among them" would have been more precise.
While it was an exaggeration, "extreme distortion" seems like a harsh judgement.
Edit: Oh, sorry. I didn't mean to imply that all the sequences are necessary for understanding. I'll fix the sentence.
A group project is far removed from society as a whole, where discussion and explanation between all members is impossible due to scale.
Your project could benefit from increased obedience, as you could just lead rationally and the others would follow. Disagreements between rational people can take longer to resolve, etc.
I still agree with all your examples. More anecdotes will not be helpful, as I already agree that increased rationality will improve society (and group projects and institutions, for that matter).
What I'm missing is a clear mechanism that actually produces a more rational society just from increasing the rationality of people. Please explain the mechanism.
I disagree about having this problem solved by moderators. Changing the karma system would be preferable, e.g. by removing the downvotes, or by having downvotes affect only the individual post but not the total karma of the user.
What do you mean by "only obvious in extreme cases"?
Just that there is no obvious mechanism that produces a more rational society from more rational people.
Again, I agree about the positive effects of rationality and do believe that more rationality will improve society. But many people say the same about religion, obedience, or other things that I don't view as positive.
So a society is rational if its institutions are rational ... and an institution is rational if its outputs seem rationally designed ... which is judged by a rational individual ... which is still hard to define.
I see your point and agree that there is room for improvement. Instead of "more rational" I would propose "less insane", which seems to fit the evidence as well as the other description.
Will one of these more insane societies become less insane by making sure everybody on the streets is less insane? The connection doesn't seem obvious, except in extreme cases.
The connection between rational individuals and a rational society is implied by the use of the same word, and it is only obvious in extreme cases.
I think you have a good point, but it would be easier to see if you had posted a short sentence explaining what your point is. Please don't assume that every reader has read all the sequences, or has the time to do so (edit: to read this one), just to understand your comment.
Assuming that rationality can be taught at school to everyone, is there even a connection between more rational individuals and a more rational society?
The problem I see here is that rationality is already very weakly defined for individuals, and I know of no definition in the context of a society. A society can't even think (or can it?), so how can it be rational?
Many decision processes of society are not based on rationality at all, and I see no reason why the tried ways of winning (e.g. corruption) should be replaced by others if the only change is slightly more rational agents. Elections produce an average of opinions, and this average may not change at all given more rational voters.
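A toy simulation of that last point (entirely my own model, assuming that individual rationality reduces each voter's idiosyncratic noise but not a bias shared by everyone):

```python
import random

random.seed(0)

def election_mean(individual_noise, shared_bias=1.0, n_voters=100_000):
    # Each expressed opinion = shared bias + individual noise; "more
    # rational" voters are modeled as having less individual noise.
    opinions = [shared_bias + random.gauss(0, individual_noise)
                for _ in range(n_voters)]
    return sum(opinions) / len(opinions)

print(round(election_mean(individual_noise=2.0), 2))  # ~1.0 with noisy voters
print(round(election_mean(individual_noise=0.5), 2))  # ~1.0: the average barely moves
```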
You will have to cover a lot of inference steps just to show that society as a whole will become more rational. Rationality isn't the only attribute of a "good" society, and there might be ugly trade-offs. Whether a more rational society will have any "huge benefits" is just the last question in a chain that will surely be too much for a single article or a few sentences.
The positive effects would trickle down into many aspects of our society.
I think the opposite direction is more probable. We first need a better culture of debate in society. Only if debate is more accepted and expected by the general population can this change trickle up to the politicians and the mass media. It could be pushed back down by the powerful if they feel threatened.
Mass debate is very difficult though.
This gave me an idea to make things even more complicated: let's assume a scientist manages to create a simulated civilization of the same size as his own. It turns out that to keep the civilization running, he will have to sacrifice a lot. All members of the simulated civilization prefer to continue existing, while the "mother civilization" prefers to sacrifice as little as possible.
How much should be sacrificed to keep the simulation running as long as possible? Should the simulated civilization create simulations itself to increase the total preference for continued existence?
Bonus questions: Does a simulated civilization get to prefer anything? What are the moral implications of creating new beings that may hold preferences (including having children in real life)? What if the scientist can manipulate the preferences of the simulated civilization; should he? And to what end? What about education and other preference-changing techniques in real life?
I have to say it's fun to find the most extreme scenario to doom our civilization by a critical mass of preferences. Can you find a more extreme or more realistic one than my civilization-simulating supercomputer, or the aliens mentioned in the original post?