Posts

Yet Another "Rational Approach To Morality & Friendly AI Sequence" 2010-11-06T16:30:25.074Z
An apology 2010-11-03T19:20:08.179Z
Waser's 3 Goals of Morality 2010-11-02T19:12:49.132Z
Intelligence vs. Wisdom 2010-11-01T20:06:06.987Z
Irrational Upvotes 2010-11-01T12:10:38.277Z
I clearly don't understand karma 2010-10-30T22:10:08.355Z

Comments

Comment by mwaser on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-17T14:56:14.704Z · LW · GW

Dogmas of analytic philosophy, part 1/2 and part 2/2 by Massimo Pigliucci in his Rationally Speaking blog.

Comment by mwaser on Thoughts on the Singularity Institute (SI) · 2012-05-10T22:07:04.179Z · LW · GW

If it is true (i.e. if a proof can be found) that "Any sufficiently advanced tool is indistinguishable from agent", then any RPOP will automatically become indistinguishable from an agent once it has self-improved past our comprehension point.

This would seem to argue against Yudkowsky's contention that the term RPOP is more accurate than "Artificial Intelligence" or "superintelligence".

Comment by mwaser on Babies and Bunnies: A Caution About Evo-Psych · 2011-03-14T10:57:24.125Z · LW · GW

Actually, eating a baby bunny is a really bad idea when viewed from a long-term perspective. Sure, it's a tender tasty little morsel -- but the operative word is little. Far better from a long-term view to let it grow up, reproduce, and then eat it. And large competent bunnies aren't nearly as cute as baby bunnies, are they? So maybe evo-psych does have it correct . . . . and maybe the short-sighted "rationality" of tearing apart a whole field by implication, because you don't understand how something works, isn't as brilliant as it seems.

Comment by mwaser on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) · 2011-02-16T13:03:28.420Z · LW · GW

MY "objection" to CEV is exactly the opposite of what you're expecting and asking for. CEV as described is not descriptive enough to allow the hypothesis "CEV is an acceptably good solution" to be falsified. Since it is "our wish if we knew more", etc., any failure scenrio that we could possibly put forth can immediately be answered by altering the potential "CEV space" to answer the objection.

I have radically different ideas about where CEV is going to converge to than most people here. Yet, the lack of distinctions in the description of CEV causes my ideas to be included under any argument for CEV because CEV potentially is . . . ANYTHING! There are no concrete distinctions that clearly state that something is NOT part of the ultimate CEV.

Arguing against CEV is like arguing against science. Can you argue a concrete failure scenario of science? Now -- keeping Hume in mind, what does science tell the AI to do? It's precisely the same argument, except that CEV as a "computational procedure" is much less well-defined than the scientific method.

Don't get me wrong. I love the concept of CEV. It's a brilliant goal statement. But it's brilliant because it doesn't clearly exclude anything that we want -- and human biases lead us to believe that it will include everything we truly want and exclude everything we truly don't want.

My concept of CEV disallows AI slavery. Your answer to that is "If that is truly what a grown-up humanity wants/needs, then that is what CEV will be". CEV is the ultimate desire -- ever-changing and never real enough to be pinned down.

Comment by mwaser on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) · 2011-02-16T12:05:48.146Z · LW · GW

I know the individuals involved. They are not biased against non-academics and would welcome a well-thought-out contribution from anyone. You could easily have a suitable abstract ready by March 1st (two weeks early) if you believed that it was important enough -- and I would strongly urge you to do so.

Comment by mwaser on "Nahh, that wouldn't work" · 2010-11-29T15:47:14.735Z · LW · GW

Threats are certainly a data point that I factor in when making a decision. I, too, have been known to apply altruistic punishment to people making unwarranted threats. But I also consider whether the person feels so threatened that the threat may actually be just a sign of their insecurity. And there are always times when going along with the threat is simply easier than bothering to fight that particular issue.

Do you really always buck threats? Even when justified -- such as "threatened consequences" for stupid actions on your part? Even from, say, police officers?

Comment by mwaser on "Nahh, that wouldn't work" · 2010-11-29T15:15:18.337Z · LW · GW

I much prefer the word "consequence" -- as in, that action will have the following consequences . . . .

I don't threaten, I point out what consequences their actions will cause.

Comment by mwaser on Imperfect Levers · 2010-11-17T23:06:02.799Z · LW · GW

For-profit corporations, as a matter of law, have the goal of making money, and their boards are subject to all sorts of legal consequences and other unpleasantness if they don't optimize that goal as a primary objective (unless some other goal is explicitly written into the corporate bylaws as being more important than making a profit -- and even then there are profit requirements that must be fulfilled to avoid corporate dissolution or conversion to a non-profit -- and very few corporations have such provisions).

Translation

Corporations = powerful, intelligent entities with the primary goal of accumulating power (in the form of money).

Comment by mwaser on An Xtranormal Intelligence Explosion · 2010-11-09T13:07:12.352Z · LW · GW

As you get closer to the core of friendliness, you get all sorts of weird AGIs that want to do something that twistedly resembles something good, but is somehow missing something or is somehow altered so that the end result is not at all what you wanted.

Is this true or is this a useful assumption to protect us from doing something stupid?

Is it true that Friendliness is not an attractor or is it that we cannot count on such a property unless it is absolutely proven to be the case?

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-09T03:52:10.241Z · LW · GW

I meant that lurking is slow, that it is inefficient, and that it has a higher probability of getting worse results for the newbie. I'm not sure which objective is being referred to in that clause. I retract those evaluations as flawed.

Yeah, I made the same mistake twice in a row. First, I didn't get that I didn't get it. Then I "got it" and figured out some obvious stuff -- and didn't even consider that there probably was even more below that which I still didn't get and that I should start looking for (and was an ass about it to boot). What a concept -- I don't know what I don't know.

The playground option was an idiot idea. I actually figured out that I don't want to go there and stagnate before your comment. I've got this horrible mental image of me being that guy that whines in boot camp. Let me take a few days and come up with a good answer to one of your questions (once I've worked this through a bit more).

I'd say thank you and sorry for being an ass but I'm not sure of its appropriateness right now. (Yeah, that tag is still really messing with me ;-)

ETA: Still re-calibrating. Realizing I'm way too spoiled about obtaining positive feedback . . . . ;-) EDIT: Make that addicted to obtaining positive feedback and less accepting of negative feedback that I don't immediately understand than I prefer to realize (and actually commenting on the first part seems to immediately recurse into hilarity)

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-08T14:41:38.446Z · LW · GW

Umph! I am really not used to interacting with people mentally skilled enough that I have a really bad case of not knowing what I don't know. I need to fix that.

Good one with the tags. I'm still recalibrating from it/working through all its implications.

I'm going off to work on one of the questions now.

Comment by mwaser on An Xtranormal Intelligence Explosion · 2010-11-08T12:44:17.871Z · LW · GW

AI correctly predicts that programmer will not approve of its plan. AI fully aware of programmer-held fallacies that cause lack of approval. AI wishes to lead programmer through thought process to eliminate said fallacies. AI determines that the most effective way to initiate this process is to say "I recognize that even with all of my intelligence I’m still fallible so if you object to my plans I will rethink them." Said statement is even logically true because the statement "I will rethink them" is always true.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-08T02:50:02.608Z · LW · GW

Yes. If you have actually learned something then your comments will reflect this and earn karma. You'll be into the positives before you know it.

OK. Got it.

If this is so, you have been doing it very badly.

I've already acknowledged that. But I've clearly been doing better, with the "What I missed" explanation being +5 and this post only garnering -2 over two days as opposed to -6 in a few hours, so I must have learned something.

I've also learned that we've reached the point where some people are tired enough of this thread that they will go through it, karma-down any comment by me, and karma-up any comment not agreeing with me. (I should go visit draq's posts and disagree with him ;-)

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-08T02:37:03.716Z · LW · GW

Okay. So the comment is unclear and incomplete but not unwelcome (with a +5 karma). Clearly, I need to slow down and expand, expand, expand. I'm willing to keep fighting with it and do that and learn. Where is an appropriate place to do so?

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-08T02:26:44.070Z · LW · GW

Is "effective" a cousin? I suspect so, since the easiest way to rewrite it would be to simply replace "rational" with "effective". If not, assume that my rewrite simply does that. If so, can I get a motivation for the request? I'm not sure where you're going or why "cousins" are disallowed.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T22:31:35.034Z · LW · GW

If you want to be safe, you lurk until you truly get what's going on around you. People can in fact learn things that way.

I never said I wanted to be safe. Please reread what I said.

Lurking until you truly get what's going on around you is not the most effective (rational) way to learn. I can provide you a boatload of references supporting that if you wish.

Do you really want subpar newbies who will accept such irrationality just to maintain your peace and quiet? Particularly when a playground option is suggested? You could even get volunteers and never deal with the hassle.

Premise: It's more rational, for your goals, to just ignore a good rational proposal from an erring, annoying newbie who is trying to provide access to new resources for you (both newbies and structures for their care and feeding).

I just don't get that.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T22:10:17.930Z · LW · GW

claiming repeatedly to have learned some unspecified thing which makes you above disapproval.

Could you point to an example please so I can try to evaluate how I implied something so thoroughly against my intent? I certainly don't believe myself above disapproval.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T22:00:08.245Z · LW · GW

However, it doesn't say anywhere what it is that you claim to have suddenly understood

here

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T21:47:01.805Z · LW · GW

Could someone give me a hint as to why this particular comment which was specifically in answer to a question is being downvoted? I don't get it.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T21:36:00.797Z · LW · GW

Thank you. As I said below, I didn't clearly understand the need for the explicit inclusion of motivation before. I now see that I need to massively overhaul the question and include motivation (as well as make a lot of other recommended changes).

The post has a ton of errors but I don't understand why you think it was in error. Given that your premise about my intentions is correct, doesn't your argument mean that posting was correct? Or, are you saying that it was in error due to the frequency of posting?

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T21:19:13.025Z · LW · GW

Ah. Now I see your point.

The actions of a nation are those which were caused by its governance structure, just as your actions are those caused by your brain. A fever or your stomach growling is not your action, in the same sense that actions by lower-level officials and large companies are not the actions of a nation -- particularly when those officials and companies are subsequently censured or there is some later attempt to rein them in. Actions of the duly recognized head of state acting in a national capacity are actions of the nation unless they are subsequently over-ruled by the rest of the governance structure -- which is pretty much the equivalent of your having an accident or making a mistake.

A nation has explicit goals when it declares those goals through its governance structure.

A nation has implicit goals when its governance structure appears to be acting in a fashion that would be rational if it held those goals and there is no alternative explanation.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T21:06:34.217Z · LW · GW

And this is still too abstract. Depending on detail of the situation, either decision might be right. For example, I might like to remain where I am, thank you very much.

So I take it that you are heavily supporting the initial post's "Premise: The only rational answer given the current information is the last one."

Worse, so far I've seen no motivation for the questions of this post, and what discussion happened around it was fueled by making equally unmotivated arbitrary implicit assumptions not following from the problem statement in the post.

Thank you. I didn't clearly understand the need for the explicit inclusion of motivation before.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T20:47:20.514Z · LW · GW

Seconding Carl changes your argument to "this is the first substantive posting I've made in four days". Now it's one in five days.

Other than not posting on a new given topic (while you have no active or live posts), what would you suggest? Personally, I would suggest a separate area (a playpen, if you will) where newbies are allowed to post and learn. You can't truly learn anything of value just by watching. Insisting that a first attempt be done correctly on the first try under safe circumstances is counter-productive.

My last substantive post before this one was a total admitted disaster (make that my last two substantive posts). This one is hanging in there. Apparently I've learned something. If I, like draq, am being heavily downvoted simply for who I am -- this post would be positive coming from anyone else.

Continuing the admitted disasters would have been an exercise in throwing good time after bad. I'm trying to wring all the knowledge (or functionality) I can out of each top-level post, but those were done. Do you really want to say that regardless of what I've learned, you "would appreciate it if you would cease making top-level posts entirely" until I've paid for my previous errors through certain very limited activities?

I get why your original comment has such high karma. I have always been trying to calculate the expected value of my posts' content for your readers. I argue that not giving credit for intent and some slack to newbies (especially those showing progress) is counter-productive to any goal of outreach.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T17:29:16.671Z · LW · GW

Acquiring citizenship is joining a nation. People who are not only allowed to acquire citizenship but encouraged to do so are "invited to join". To choose whether to do so or not is to file the necessary papers and perform the necessary acts. I think that these answers should be obvious.

A nation has a top-most goal if none of its other goals conflict with that goal. This is more specific than a top-level goal.

A nation is rational to the extent that its actions promote its goals. Did you really have to ask this?

How does a nation identify the goals of its members? My immediate reaction is the quip "Not very well". A better answer is "that is what government is supposed to be for". I have no interest in and no intention of getting into politics. The problem with my providing a specific example, particularly one that falls short, in the rationality department, of what was stated in the premise, is that people tend to latch on to the properties of the example in order to argue rather than considering the premise. Current "developed nations" are a very poor, imperfect, irrational echo of the model I had in mind, but they are the closest existing (and therefore easily/clearly cited) example I could think of.

In fact, let me change my example to a theoretical nation where Eliezer has led a group of the best and brightest LessWrong individuals to create a new manmade-island-based nation with a unique new form of government. Would you join if invited?

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T17:09:59.703Z · LW · GW

Upvote from me! Yes, you are understanding me correctly.

One could indeed come up with my list of options without having done any prior investigation. But would one share it with others? My pointing at that particular post is meant to be a signal that I grok that it is not rational to share it with others until I believe that I have strong evidence that it is a strong hypothesis and have pretty much run out of experiments that I can conduct by myself that could possibly disprove the hypothesis.

Skepticism is desired as long as it doesn't interfere with the analysis of the hypothesis. If mistrust leads someone to walk away from a hypothesis that would, if true, be of great interest to them, without fairly analyzing it, that's a problem.

Yes, I realize that I still am lacking some of the skills necessary to present and frame a discussion here. I should have presented an example as Vladimir pointed out. I'm under the impression that evidence isn't necessarily appropriate at this point. If people would leap in to correct me if that is incorrect, it would be appreciated.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T16:54:19.104Z · LW · GW

The question "which options are long-term rational answers?" corresponds immediately to the hypothesis "among the options are some long-term rational answers" and can be investigated in the same way.

Incorrect. Prove that one option is a long-term rational answer and you have proved the hypothesis "among the options are some long-term rational answers". That is nowhere near completely answering the question "which options are long-term rational answers".

My hypothesis was much, much more limited than "among the options are some long-term rational answers". It specified which of the options was a long-term rational answer. It further specified that all of the other options were not long-term rational answers. It is much, much easier to disprove my hypothesis than the broader hypothesis "among the options are some long-term rational answers" which gives it correspondingly more power.

If you really think that people here need to be educated as to what a hypothesis is, then a) it'd be better to link to a wikipedia definition and b) why are you bothering to post here?

Fully grokking Eliezer's post that I linked would have given you all of the above reply. The wikipedia definition is less clear than Eliezer's post. I post here because this community is more than capable of helping/forcing me to clarify my logic and rationality.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T14:32:15.723Z · LW · GW

Post hoc analysis is subject to all sorts of fallacies.

Huh. I just realized that there is nothing that I recognize as a clear Science/Scientific Method sequence (though there is a ton either assumed or sprinkled throughout the sequences) for me to reference.

Reread Harry Potter and the Methods of Rationality.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T13:23:15.571Z · LW · GW

Thank you very much.

Premise: Most developed nations are such a community although the goal is certainly not explicit.

Do you believe that premise is flawed?

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T13:13:41.270Z · LW · GW

"Option 2 is the only long-term rational answer" is a clear hypothesis. It is disproved if any of the other options is also a long-term rational answer. "Which options are long-term rational answers?" is a question, not a hypothesis.

Reread Einstein's Arrogance

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T03:47:51.968Z · LW · GW

Too ambiguous. It's not clear which elements aren't clear to you, so it's not possible to fix the problem.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T03:44:46.494Z · LW · GW

The original statement said nothing about how much work each step was. In fact, the original statement was refuting a statement that was even more simplistic and strongly implied the process was limited to just data and conclusions.

I agree with your second sentence.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T03:31:35.460Z · LW · GW

Semi-correct, due to further information that you should either infer from reality or would have gotten explicitly by choosing the correct answer of asking for more information. A community has certain expectations when you join it and is likely to take punitive action if you violate those expectations.

As a hint, how is 5 different from 2? Is 2 not rational or not in your best interest?

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T03:23:04.809Z · LW · GW

To be blunt, you are violating community norms by posting large quantities of material despite general disinterest or disapproval from other community members.

My last top level post currently has a karma of +15. The net of the rest of my comments (i.e. not including that post) over the same time period is +12.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-07T03:18:04.471Z · LW · GW

Merely knowing that a group is rational and utilitarian (or at least, that it claims to be) doesn't narrow down what it is very much.

Interesting. Those were sidelights for me as well. What was definitive for me was the statement of the community's top-most goal.

What you've stated instead is that you're trying to prove your hypothesis, which is, I hope, wrong--rather, you're investigating whether your hypothesis is true, without the specific goal of proving or disproving it.

Agreed.

Comment by mwaser on An apology · 2010-11-06T18:40:12.660Z · LW · GW

It is a definition, not an explanation. I misunderstood his post to be questioning what the quoted word "structures" meant so I provided a definition.

I am editing it to provide examples. It was certainly not intended as a curiosity-stopper.

As a definition, it had meaning -- but none that was new to you.

Comment by mwaser on Yet Another "Rational Approach To Morality & Friendly AI Sequence" · 2010-11-06T18:28:39.156Z · LW · GW

Your "first question" is excellent. Your question for me is even better.

What I'm trying to eventually prove is called a hypothesis. If I can disprove it, that is equally valuable to me.

Hypothesis first. Experimental design second. Then data and conclusions.

BTW, I regard "How do you go about trying to fulfill the community's metagoal?" and "What does it mean to be a member of this community?" most simply as better phrasings of what I meant by question 5 (though the answers would probably answer 4).

I would also give 1, 2, and 3 higher priority since they are likely to have shorter answers, and those answers may well permit me to join while I perform further investigation. Your questions can take a lifetime to answer and are probably answered differently by each member of the community.

Comment by mwaser on POLL: Realism and reductionism · 2010-11-06T16:48:15.836Z · LW · GW

1A

2A - if your morality is rationally defined by your goals OR 2C - if you insist on all the confusing noise of human morality

3A or 3B - I don't know how to test to determine which

4 is confused

Comment by mwaser on An apology · 2010-11-06T14:45:37.961Z · LW · GW

By "structures" I mean "interlocking sets of values, virtues, norms, practices, identities, technologies, and psychological mechanisms that work together to fulfill the goal of stabilization (of something)".

Examples: The "terms of art" like "confused" (different from common use in that it can imply sins of omission as well as commission), the use of karma, the nearly uniform behavior when performing certain tasks, the nearly uniform reactions to certain things, etc. are all part of the "structures" supporting the community.

Comment by mwaser on indexical uncertainty and the Axiom of Independence · 2010-11-05T21:49:31.723Z · LW · GW

Yes. I agree fully with the above post.

Comment by mwaser on indexical uncertainty and the Axiom of Independence · 2010-11-05T15:56:31.428Z · LW · GW

We seem to have differing assumptions:

My default is to assume that B utility cannot be produced in a different world UNLESS it is of utility in B's world to produce the utility in another world. One method by which this is possible is trade between the two worlds (which was the source of my initial response).

Your assumption seems to be that B utility will always have value in a different world.

My default assumption is explicitly overridden for the case where I feel good (have utility in the world where I am present) when I care about the world where I am not present.

Your (assumed) blanket assumption has the counterexample that while I feel good when someone has sex with me in the world where I am present (alive), I do not feel good (I feel nothing -- and am currently repulsed by the thought = NEGATIVE utility) when someone has sex with me in the world where I am dead (not present).

ACK. Wait a minute. I'm clearly confusing the action that produced B utility with B utility itself. Your problem formulation did explicitly include your assumption (which thereby makes it a premise).

OK. I think I now accept your argument so far. I have a vague feeling that you've carried the argument to places where the premise/assumption isn't valid but that's obviously the subject for another post.

(Interesting karma question. I've made a mistake. How interesting is that mistake to the community? In this case, I think that it was a non-obvious mistake (certainly for me without working it through ;-) that others have a reasonable probability of making on an interesting subject so it should be of interest. We'll see whether the karma results validate my understanding.)

Comment by mwaser on indexical uncertainty and the Axiom of Independence · 2010-11-05T01:42:47.426Z · LW · GW

Fair enough. I'm willing to rephrase my argument as "A can't produce B utility because there is no B present in the world."

Yes, I do want to pre-commit to a counter-factual trade in the mugging because that is the cost of obtaining access to an offer of high expected utility (see my real-world rephrasing here for a more intuitive example case).

In the current world-splitting case, I see no utility for me since the opposing fork cannot produce it so there is no point to me pre-committing.

Comment by mwaser on indexical uncertainty and the Axiom of Independence · 2010-11-04T21:44:56.967Z · LW · GW

Strategy one has U1/2 in both A-utility and B-utility with the additional property that the utility is in the correct fork where it can be used (i.e. it truly exists).

Strategy two has U2/2 in both A-utility and B-utility but with the additional property that the utility produced is not going to be usable in the fork where it is produced (i.e. the actual utility is really U0/2 unless the utility can be traded for the opposite utility which is actually usable in the same fork).
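To spell out the arithmetic I have in mind (my own shorthand, not from the original exchange: $U_1$ and $U_2$ are the raw payoffs of strategies one and two, each fork carries weight $\tfrac{1}{2}$, and I read "U0/2" as zero usable utility absent trade), the expected usable A-utility works out to

$$\mathbb{E}_A[\text{usable} \mid S_1] = \tfrac{1}{2}\,U_1, \qquad \mathbb{E}_A[\text{usable} \mid S_2,\ \text{no trade}] = \tfrac{1}{2}\cdot 0 = 0,$$

and symmetrically for the B-utility.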

Assuming that there is no possibility of trade (since you describe no method by which it is possible):

I don't see a requirement for trade existing in the counterfactual mugging problem so I accept it.

Since the above deal requires the possibility of trade to actually gain USABLE utility (arguably the only nonzero kind assuming that [PersonalUse OR Trade = Usability]) and I don't see the possibility for trade, I am justified in rejecting the above deal despite accepting the counterfactual deal.

Comment by mwaser on An apology · 2010-11-04T20:25:49.896Z · LW · GW

Wow! Evil. Effective. Not to mention a great demonstration of the criticality of context.

Definitely deserves a link or mention in a newbie's guide.

Comment by mwaser on An apology · 2010-11-04T19:59:28.951Z · LW · GW

What I didn't get?

Some of it was mistaken assumptions about karma. Much more of it was the lack of recognition of the presence of a huge amount of underlying structure which is necessary to explain what looks like seemingly irrational behavior (to someone who doesn't have that structure). I also didn't recognize most of the offered help because I didn't understand it. (Even just saying to a newbie, "I know that you don't recognize this as help because you don't get it yet but could you please trust me that it is intended as help" would probably convince many more people to just look again rather than bailing).

Some of the epiphany was figuring out the various parts that make up karma and truly recognizing its accuracy and efficiency. A lot more of it was just figuring out that there had to be structures present to explain the seemingly irrational behavior. Yeah, that's duh! obvious in hindsight but it's difficult to figure out by yourself (until you catch the underlying regularities and make the right assumptions).

One of the largest problems for newbies is that the culture has evolved a great many "terms of art" that are not recognizable as such to the newbie. Getting "hammered" for questioning the upvote of a comment apparently without substance was a shock for me. Fortunately, the underlying consistency of the "irrationality" was also becoming apparent at the same time.

Just reading and even fully understanding the sequences does not fully prepare one for contributing here. This fact is NOT evident to new contributors. Smacking a new contributor on the nose (with karma) while pointing at a sequence that they are rather sure that they comprehended and nothing else is not going to make sense to them until they have the necessary understandings.

One must understand the expected process and expectations of contribution and understand the "terms of art" that are invariably used in the evaluative comments. "Clear" and "confused" have very specific meanings here that do not unpack correctly unless you have the underlying structure/understanding. I was also very shocked by the number of perceived strawmen and the community's acceptance of them -- contrary to virtually every other "rational" website.

I know that I still don't have all of it but most of the behavior that totally baffled me before and appeared irrational now makes total sense. The rules are totally different here from what I expected/assumed and the unnoticed phase change caused my "rational" behavior to be deemed "irrational" (only because it was ;-) and "irrational" behavior to be widely accepted (not what you expect on a site devoted to rationality ;-).

Most of what I think I have in mind is just to point out where and explain why the rules are very different from what is likely to be assumed by an outsider. In particular, it's very hard to accept that you're confused and wrong when your bayesian priors give that a low probability -- and a near-zero probability when the people informing you aren't making sense and acting irrationally (except when they're all doing it -- and doing it consistently).

The real epiphany was when I said "F it. These people are managing to be consistent. There has to be some set of rules that allow them to do that. Now . . . . what the F are they?" And, for me, that was pretty rapidly followed by the "Ohhhhh. WOW! Damn. Now I feel bad." of my apology.

If I could figure out some way to be helpful to steer people towards that epiphany without actually giving it to them, it would be ideal. Some work is necessary to fully integrate something like this. On the other hand, if it's too hard and confusing, I think that a lot of people will (and do) bail out with a very bad taste in their mouths (which I still believe is very contrary to the stated goals of the community).

I'm also looking for any interested individuals who would like to help.

Comment by mwaser on Harry Potter and the Methods of Rationality discussion thread, part 4 · 2010-11-04T17:00:49.998Z · LW · GW

Using "because" on evolution is tricky -- particularly when co-evolution is involved --and society and humans are definitely co-evolving. Which evolved first -- the chicken or the chicken egg (i.e. dinosaur-egg-type arguments explicitly excluded).

Comment by mwaser on Rationality Quotes: November 2010 · 2010-11-04T12:18:39.387Z · LW · GW

I can think of several reasons:

  1. Your post appears to be a dominance game. Your bible will obliterate their bible.
  2. While beauty is in the eye of the beholder, I would guess that the initial quote probably strikes many here as elegant poetry that is well worth sharing (and upvotes effectively equal sharing).
  3. Your post isn't particularly interesting, so I would guess that it wouldn't attract any upvotes, and point 1 means that it is nearly certain to attract at least two or three downvotes.

Comment by mwaser on An apology · 2010-11-04T12:04:11.795Z · LW · GW

If you were the first person to see such a post (where Yvain made such a stupid comment that you believed that it deserved to attract 26527 downvotes), would you, personally, downvote it for stupidity or would you upvote it for interestingness?

EDIT: I'd be interested in answers from others as well.

Comment by mwaser on Irrational Upvotes · 2010-11-03T21:49:54.450Z · LW · GW

From Wikipedia

A straw man argument is an informal fallacy based on misrepresentation of an opponent's position.[1] To "attack a straw man" is to create the illusion of having refuted a proposition by substituting it with a superficially similar yet unequivalent proposition (the "straw man"), and refuting it, without ever having actually refuted the original position.[1][2]

My position never included any claims about the value of the statement as an argument. To imply that my position was that it was a "bad" argument is to misrepresent my position. My position was exactly the two sentences that I wrote:

This is a statement that can be made about any premise. It is backed by no supporting evidence.

Did he disagree with either of these two sentences? Or did he strongly imply that I said that the upvoted comment was a bad argument and attack that?

Comment by mwaser on Waser's 3 Goals of Morality · 2010-11-03T20:34:25.926Z · LW · GW

Now that I've got it, this is clear, concise, and helpful. Thank you.

I also owe you (personally) an apology for previous behavior.

Comment by mwaser on An apology · 2010-11-03T19:29:27.005Z · LW · GW

Yes, that is precisely what I wish to do -- but, as I said, that is also going to take some patience and help from others and I have certainly, if unintentionally, abused my welcome.

There is also (obviously) still a lot that I don't understand -- for example, this post quickly acquired a downvote in addition to your comment and I don't get why.