Comments
It's not meant to be "serious philosophy". He's not presenting the ideas in the book as literally true; he's just provoking the reader to look at the issues in the book in a different light. Forcing the reader to consider alternative hypotheses, if you will.
In case you haven't realized it, you're being downvoted because your post reads like this is the first thing you've read on this site. Just FYI.
"Universally Preferable Behavior" by Stefan Molyneux, "Argumentation Ethics" by Hans Hermann Hoppe, and of course Objectivism, to name the most famous ones. Generally the ones I'm referring to all try to deduce some sort of Objective Ethics and (surprise) it turns out that property rights are an inherent property of the universe and capitalism is a moral imperative.
Forgive me if you're thinking of some other libertarians who don't have crazy ethical theories. I didn't mean to make gross generalizations. I've just observed that libertarian philosophers who consciously promote their theories of ethics tend to be of this flavor.
Why is the discrimination problem "unfair"? It seems like in any situation where decision theories are actually put into practice, that type of reasoning is likely to be popular. In fact I thought the whole point of advanced decision theories was to deal with that sort of self-referencing reasoning. Am I misunderstanding something?
Maybe "progress" doesn't refer to equality, but autonomy. It does seem like the progression of social organization generally leads to individual autonomy and equality of opportunity. Egalitarianism is a nice talking point for politicians, but when we say "progress" we really mean individual autonomy.
Austrian-minded people definitely have some pretty crazy methods, but their economic conclusions seem pretty sound to me. The problem arises when they apply those crazy methods to areas other than economics (see any libertarian theory of ethics: crazy stuff).
I think the correct comparison would be, "since no one can agree on the nature of Earth/Earth's existence, Earth must not exist," but this is ridiculous, since everyone agrees on at least one fact about Earth: we live on it. The original argument still stands. Denying the existence of god(s) doesn't lead to any ridiculous contradictions of universally experienced observations. Denying Earth's geometry does.
You are merely objecting to Eliezer's choice of scale. The distances between "intelligences" are pretty arbitrary. Plus he's using a linear scale, so there's no room for intelligence curves.
I think the DRH quote is taken out of context, and Eliezer's commentary on it is unfair. DRH has a deeply personal respect for human intelligence. He doesn't look forward to the singularity because he (correctly) points out that it will be the end of humanity. Most SI/LessWrong people accept that and look forward to it anyway, but for Hofstadter the current view of the singularity is an extremely pessimistic view of the future. Note that this is simply a result of his personal beliefs. He never claims that people are wrong to look forward to superintelligence, brain emulation, and things like that, just that he doesn't. See this interview for his thoughts on the subject.
Congratulations, you have just discovered the difference between art and design. If Azkaban had been designed as a commentary on muggle prisons, the connection would have had to be made explicit within the text. The fact that Eliezer pointed out the connection does not mean he consciously tried to make it explicit in the text. Since the connection is implicit rather than explicit, the commentary is an artistic interpretation of the text. You don't need to feel justified in an artistic interpretation.
You should collect data on time spent using the app and success. Do Science and stuff.
By the way, I spent a good amount of time using it yesterday and I just finished an entire Hershey's bar. Apparently it's not working for me.
In any decision involving an Omega-like entity that can run perfect simulations of you, there is no way to tell whether you are inside the simulation or in the real universe. Therefore, in situations where the outcome depends on the results of the simulation, you should act as though you are in the simulation. For example, in counterfactual mugging, you should pay the lesser amount, because if you're in Omega's simulation you guarantee your real-life counterpart the larger sum.
Of course this only applies if the entity you're dealing with happens to be able to run perfect simulations of reality.
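To make the reasoning concrete, here is a minimal sketch of the expected-value arithmetic, assuming the payoffs usually quoted for counterfactual mugging ($100 requested on heads, $10,000 paid on tails only if Omega predicts you would have paid). The figures and the expected_value helper are purely illustrative, not anything from the original discussion.

```python
# Minimal sketch: expected winnings under counterfactual mugging, assuming
# commonly quoted payoffs. The policy must be fixed before the coin flip,
# i.e. before you could know whether you are the simulated or the "real" copy.

def expected_value(pays_when_asked: bool,
                   cost: float = 100.0,       # amount Omega asks for on heads
                   reward: float = 10_000.0,  # amount Omega pays on tails
                   p_heads: float = 0.5) -> float:
    heads_outcome = -cost if pays_when_asked else 0.0   # you are asked to pay
    tails_outcome = reward if pays_when_asked else 0.0  # paid only if Omega
                                                        # predicts you'd pay
    return p_heads * heads_outcome + (1 - p_heads) * tails_outcome

print(expected_value(True))   # 4950.0 -- the paying policy wins on average
print(expected_value(False))  # 0.0
```

The point of fixing the policy up front is that, averaged over both copies (simulated and real), the agent that pays comes out ahead.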
What makes "science vs. bayes" a dichotomy? The scientific method is just a special case of Bayesian reasoning. I mean, I understand the point of the article, but it seems like it's way less of a dilemma in practice.
I know this is an old post, I just wanted to write down my answers to the "morality as preference" questions.
Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?
Do the statements "I liked that movie" and "That movie was good" sound different? The latter is phrased as a statement of fact, while the former is obviously a statement of preference. Unless the latter is said by a movie critic or film professor, no one takes it as a real statement of fact. It's just a quirk of the English language that we don't always indicate why we believe the words we say. In English, it's always optional to state whether a claim is a self-evident fact, the words of a trusted expert, or merely a statement of opinion.
When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?
"Moral progress" doesn't really refer to individuals. The entities we refer to making "moral progress" tend to be community level, like societies, so I don't really get the first and last questions. As for the concept of moral progress, it refers to the amount of people who have their moral preferences met. The reason democracy is a "more ethical" society than totalitarianism is because more people have a chance to express their preferences and have them met. If I think a particular war is immoral, I can vote for the candidate or law that will end that war. If I think a law is immoral I can vote to change it. I think this theory lines up pretty well with the concept of moral progress.
Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?
Usually people who do something they "know is wrong" are just doing something that most other people don't like. The only reason it feels wrong to steal is that society has developed, culturally and evolutionarily, in such a way that most people think stealing is wrong. That's really all it is. There's nothing in physics that encodes what belongs to whom. Most people just want stuff to belong to them because of various psychological factors.
This is an awesome article. But I've always been bothered by people's expectations when it comes to arriving on time for things. In my experience, people are less annoyed at the person who leaves early than at the person who arrives late, even if they miss the same amount of the meeting. The usual reasons people give for avoiding being late (missing content, disrupting the meeting) apply just as much to leaving early. Why the double standard? Also, for some reason people are generally more understanding if you have to miss something entirely than if you are an hour late.
This is all completely anecdotal, obviously.
I suppose you're right, although it's pretty easy for me to imagine something that is "conscious" but isn't an "observer", i.e., a mind without sensory capabilities. I guess I was just wondering whether our common (non-rigorous) definitions of the two concepts are independent.
It occurred to me that I have no idea what people mean by the word "observer". Rather, I don't know if a solid reductionist definition for observation exists. The best I can come up with is "an optimization process that models its environment". This is vague enough to include everything we associate with the word, but it would also include non-conscious systems. Is that okay? I don't really know.
I had no idea. That is really interesting. What are some artificial languages that have evidential grammar? I knew lojban had evidentials, but I think they're optional.
I understand the concept of Tegmarkian multiverses, but could you explain how they "reduce to themselves"?
Behavior is very different from thought. It's easier to think of animals as machines because we have never experienced an animal's thoughts. To us, animals look exactly as you described, like behavior-outputting machines, because we have no access to their thought processes.
Isn't this true about any conceivable hypothesis?
Not sure how much this post has to do with the economic fact of scarcity. Seems like it would be very easy to mistake actual rationality based on economic knowledge for this bias.
It seems to me that in at least some of these examples you are confusing the map with the territory. Take genetics:
Genes don't proliferate by being good for the species; they win by being good for themselves.
Failing to be "good for the species" is not a fact about evolution, or genes. Thinking that evolution was supposed to be "good for the species" was just a heuristic humans used when trying to understand evolution. The "selfish gene" does not say anything meaningful about the phenomenon of evolution, it just shows that we have refined our understanding of evolution.
Now take politics:
Why do governments inevitably end up run by career lawyers and politicians instead of scientists and economists?
What does the phenomenon of government actually look like, in reality? Well, it looks like a system of human hierarchical organization in which career lawyers and politicians have a natural propensity to be on top. Thinking that the phenomenon of government has anything to do with understanding nuanced social issues is confusing the map with the territory.
To my mind, the people asking the question frequently neglect the second-order effects of regularly talking about politics on the sort of people who will join LW and what their primary goals are.
Could you clarify this point a little? I thought the primary goal of LW was refining and promoting human rationality, and I see no reason why that goal would not apply to politics, especially since irrational political theories can have a directly negative effect on the quality of life of many people.
I have seen this problem afflict other intellectually driven communities, and believe me, it is a very hard problem to shake. Be grateful we aren't getting media attention. The adage "all press is good press" has definitely been proven wrong.
Hello, I am Nicholas, an undergraduate studying music at Portland State University. Even though my primary (at least academic) area of study is the arts, the philosophy of rationality and science has always been a large part of my intellectual pursuits. I found this site about a year ago and read many articles, but only recently decided to try to participate. Even before I was a rationalist, my education was entirely self-driven by a desire to seek the truth, even when the truth conflicted with what was widely believed by those around me (teachers, parents, etc.). My idea of what "the truth" means has changed significantly over time, especially after learning about rationality theory, Bayes' theorem, and many of the concepts on this site, but the core emotional drive for knowledge has never wavered.
I have read Politics is the Mind-Killer and understand the desire to avoid political discussions, but I feel that my conception of a "good" political discussion is significantly different from that of most users of this site. I care nothing for US-style partisan politics. Far from exclusively arguing for "my home team", my political ideas have changed dramatically over the years, and they are always based on actual existing phenomena rather than on words like "socialism", "capitalism", "Republican", or "Democrat". I would be interested to know what led to this ban on political thought. Is it a widely held view of the community that political discussion is inherently devoid of rationality, or was it a decision made out of historical necessity, perhaps because of an observed trend in the quality of political discussions? In either case, I would like to gain a better understanding of the arguments and attempt to refute them.
What effect could misplacing the electrodes have besides stimulating a different part of the brain? I'm honestly asking, I have no idea about any of this.
Am I correct in (roughly) summarizing your conclusion in the following quote?
Yes, there really is morality, and we can locate it in reality — either as a set of facts about the well-being of conscious creatures, or as a set of facts about what an ideally rational and perfectly informed agent would prefer, or as some other set of natural facts.
If so, what is the logical difference between your theory and moral relativism? What if a person's set of natural facts for morality is "those acts which the culture I was born into deems to be moral"?
I view intellectual property as the logical conclusion of the "unhealthiness" Eliezer is describing. I laugh when I look at all the ridiculous patents and copyrights that exist, but then I get scared when I remember that someone can use legal force against me for discovering those ideas simply because they discovered them first.
You mean "libertarian" in the literal sense right? You're not implying that the subject of "free will" has anything to do with politics are you?