Posts

Learn Three Things Every Day 2015-01-16T09:36:33.983Z

Comments

Comment by helltank on White Lies · 2015-11-02T12:39:32.665Z · LW · GW

I don't think people have a right to lie to other people. I also can't understand why you would regret breaking up with someone so truth-averse and horrible.

Comment by helltank on Rationality Quotes Thread October 2015 · 2015-10-11T04:10:44.805Z · LW · GW

How does this help me become more rational?

Comment by helltank on Torture vs. Dust Specks · 2015-03-26T10:56:01.090Z · LW · GW

That's ridiculous. So mild pains don't count if they're done to many different people?

Let's give a more obvious example. It's better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.

Scaling down, we can say that it's better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.

Keep repeating this in your head (see how consistent it feels, how it makes sense).

Now just extrapolate to the case at hand: it's better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn't good enough, because (number of people on Earth) × (pain of a hair rip) < (population of New York) × (pain of being nuked). The math doesn't add up in your straw-man example, unlike in the actual example given.
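A back-of-the-envelope check of that inequality; all the numbers here (pain units and rough population figures) are my own illustrative assumptions, not anything from the thread:

```python
# Rough, made-up pain units; only the comparison matters.
hair_rip_pain = 1            # assumed pain of one hair rip
nuke_pain = 10**6            # assumed pain of one person being nuked
earth_population = 7 * 10**9
nyc_population = 8 * 10**6

total_hair_rip = earth_population * hair_rip_pain   # ~7e9 pain units
total_nuke = nyc_population * nuke_pain             # ~8e12 pain units

print(total_hair_rip < total_nuke)  # True: the straw man's totals don't line up
```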

As a side note, you are also appealing to consequences.

Comment by helltank on That Alien Message · 2015-03-18T10:34:48.263Z · LW · GW

The point is that to an AI, we are but massive, stupid beings attempting to teach it minor symbols with a massive overuse of resources (those few lines of code to define "rock" could be used by a sufficiently powerful UFAI to, say, manufacture nukes).

Comment by helltank on Meetup : Singapore Meetup Group · 2015-03-01T06:28:18.204Z · LW · GW

I'll be there with one other person. Check the LessWrong Singapore Google group for any future updates. https://groups.google.com/forum/m/#!topic/lesswrong-singapore/cXtHTMQO4xw

Comment by helltank on Quotes Repository · 2015-02-24T13:30:56.039Z · LW · GW

On religion:

Faith is corrosive to the human mind. -Susan Blackmore

I never really thought about just how damaging blind faith was to my thought processes until I read this quote. It strikes a chord with me.

Comment by helltank on Meetup : Singapore Meetup Group · 2015-02-24T09:18:58.660Z · LW · GW

Finally, a Singapore meetup! Will definitely be there.

Comment by helltank on Respond to what they probably meant · 2015-01-19T09:48:13.117Z · LW · GW

For me, the problem with this is that if I'm speaking to an autistic person (and a very large number of LWers identify themselves as on the autistic spectrum), they tend to use literal meanings very often. In fact, some of them (including me) get offended or confused when they say something literal and it is interpreted as sarcasm or subtext.

Suppose I am speaking to an autistic person, and he says, "I am 87% confident that X is true." The issue with this statement is that a lot of people use it in a metaphorical sense (i.e., they pull the number out of their butt to make it oddly specific and get a cheap laugh), but an autistic or rationality-trained person may literally mean that they are 87% sure it is true, especially if they are good at measuring their own confidence levels. In this case the usual situation, the number being picked randomly, does not apply.

There are also, however, a large number of statements that are almost always meant sarcastically or in a non-literal way. The statement "I, for one, welcome our new alien overlords" is almost always sarcastic because (1) it invokes a well-known meme which is intended to be used in this manner, and (2) it is extremely unlikely that the person I am speaking to actually wants aliens to take over the world. These statements are, for want of a better word, "cached non-literal statements" (as in, it is an automatic thought that these statements are not literal), or CNLSes for short.

It might be useful to append this guideline to your thesis: "All statements have a probability of being literal that is worth considering, except in the case of CNLSes. This probability is adjusted up if the person you are speaking to is known for being extremely literal, and down if they are known for using figurative speech (that last part should be fairly obvious, but I throw it in for the sake of completeness)."

This actually got me thinking about whether there is a methodical, objective and accurate way to find out if someone's statement is literal or not, perhaps by measuring their posture or tone of voice. The only difficulty is extracting quantifiable data from context. If it can be done, it would be a great resource for people everywhere who have trouble understanding the non-literal meanings of statements.

Comment by helltank on What topics are appropriate for LessWrong? · 2015-01-16T12:00:16.376Z · LW · GW

I have to go to bed soon, so I will not write up a long post, but I'll leave you with this short statement:

Yes, there is such a point in our rationality training. You underestimate the amount of work needed to get there. I do not think that I can reach that point within the next 30 years; and everyone on LW would have to reach it to argue effectively. It only takes a few outraged posters to turn a thread into a shitstorm (see the comments and replies above).

It is indeed a word of caution, just like "do not play with electricity" is a word of caution. Grown adults should theoretically be able to handle electricity without getting electrocuted, but doing so (unless they're electricians) won't give them many benefits, and there will always be that risk.

I believe that he suggested (he is not a moderator, remember, but a random poster making suggestions) that jokes, humor and art not be posted here, because this is not a website for jokes, humor and art, unless they somehow have to do with rationality. There are plenty of sites for such things if you really have a pressing need to discuss your love of the Mona Lisa or knock-knock jokes with people on the internet.

If you want my opinion, it's that a debate about Obama's healthcare reforms is less likely to improve rationality than a debate about the Sequences or some other "traditional" topic. If you really want to apply your rationality skills in a real-world context:

It's right there. Just switch off your computer, go outside and strike up a debate with someone in meatspace.

Comment by helltank on Learn Three Things Every Day · 2015-01-16T10:06:55.992Z · LW · GW

No problem, and I hope this post taught you how to work better and learn better. If you have problems with procrastination, you can try programs like Beeminder, or simply have a friend act as a watcher to ensure you get your work (or your three new things) done for the day, week or month.

Comment by helltank on What topics are appropriate for LessWrong? · 2015-01-16T09:58:49.951Z · LW · GW

If you are offended by any of gjm's statements, I suggest you walk away now, because what I'm going to say is going to be just as offensive to you as anything that gjm has posted.

Right, I take issue with your statement that autistic people are irrational, but I think that point has already been made for me. What I am taking issue with now is:

then I think that's a sad state of affairs.

You believe it is a sad state of affairs that people on LessWrong are discouraged from discussing topics that will harm people more than benefit them? Am I correct, therefore, in saying that you believe it is a sad state of affairs that people on LessWrong are discouraged from doing stupid and irrational things? Because if so, that doesn't seem like a sad thing at all.

Consider the case where political commentary is viewed as just as acceptable a topic of debate as any other. Yes, it would be ideal for everyone here to be so rational that they could discuss politics freely, without risking harm to their rationality. Yet it is a fact that Politics is the Mind-Killer, and this is not going to go away, and it is not going to change just because you believe in freedom of speech. And I don't think this is a particularly sad state of affairs: the very fact that people avoid things that make them irrational is a promising sign that they value their lack of bias.

But you seem to think that the freedom to say silly things like "autistic people are less rational than others", or to bring up disruptive topics, outweighs that consideration.

At this point, I would like to recommend that you close the window right now, turn away from the computer and think hard about whether complete freedom of speech is one of those things that, in the minds of some people, automatically equals a win. I can't recall the technical term for it, but I do recall quite strongly that it will kill your mind.

Comment by helltank on 2014 Less Wrong Census/Survey · 2014-11-20T02:54:59.313Z · LW · GW

I did the survey.

Comment by helltank on Agency and Life Domains · 2014-11-17T16:29:01.051Z · LW · GW

I'll just point out that I was already actively cutting off relationships with people of no value before I read this. Therefore, your claim that non-cultists don't cut off relations with zero-value people is incorrect in at least one case, and possibly more; and since that claim is the core of your argument, the argument itself fails in at least one case and possibly more.

Comment by helltank on Agency and Life Domains · 2014-11-16T02:31:43.202Z · LW · GW

Okay, thanks for the update. Of course the idea of measuring agentness, while being careful not to apply the halo effect to it, is fundamentally sound. I would propose treating a person's perceived agentness as a belief, so that it can be updated quickly with well-known rationalist techniques when the domain under consideration shifts.

Let us take the example given in your post of a person who is very agenty in managing relationships but bad at time management. In this case, I would observe that this person displays a high level of agentness in managing relationships. That does not equate to high agentness in other fields, yet it may indicate an overall trend of agentness in his life. Therefore, if his relationship agentness is 10, I might estimate a prior for his agentness in any random domain of, say, 6.

Now, suppose I observe him scheduling his tasks with a supposed agentness of 6 and he screws it up completely, because of an inherent weakness which I didn't know about in that domain. After the first few times he was late I could lower my belief probability that his agentness in that domain (time management) is actually 6, and increase the probability of the belief that it is 3, for instance, plus a slight increase in the numbers adjacent (2 and 4).
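A minimal sketch of that update as a discrete Bayesian update over agentness levels 0-10; the prior shape and the lateness likelihoods are my own illustrative assumptions, not anything from the post:

```python
# Discrete Bayesian update over agentness levels 0..10 (illustrative).

def normalize(dist):
    total = sum(dist.values())
    return {level: p / total for level, p in dist.items()}

# Prior peaked near 6, reflecting his observed relationship agentness.
prior = normalize({level: 1.0 / (1 + abs(level - 6)) for level in range(11)})

# Assumed likelihood of being late at each agentness level:
# more agenty people are late less often.
def p_late(level):
    return 0.9 - 0.08 * level   # 0.9 at level 0 down to 0.1 at level 10

# Observe him turn up late three times and update after each observation.
posterior = prior
for _ in range(3):
    posterior = normalize({level: p * p_late(level) for level, p in posterior.items()})

print(max(posterior, key=posterior.get))  # the mode drops from 6 to the low end
```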

However, cached thoughts do interest me. We have seen clearly that cached thoughts can act against agentness; but in my opinion the correct move is to build cached thoughts for agentness. Say you discover that in situation X, given Y and Z, A is almost always (or with a sufficiently high probability) the most agenty option. Then you can use your System 2 to train your System 1 to store this pattern, and in future situations you will reflexively perform A, slowing down to deliberate only in proportion to the probability that the agenty option is not A after all, times the disutility if it isn't.

I would say that cached thoughts are very interesting phenomena, being able to control the first actions of a human being (and the actions that we, being impulsive creatures, normally take first), and that with proper training it might even be possible to use them for good.

Comment by helltank on Agency and Life Domains · 2014-11-16T02:00:31.951Z · LW · GW

I will probably read this post in more detail when the font isn't hurting my sleep-deprived eyes. Please fix!

Comment by helltank on Belief Chains · 2014-11-15T23:12:34.745Z · LW · GW

27chaos, that is a very interesting paper and I thank you for the find. It's actually quite a happy coincidence, as neural networks (prompted by the blegg sequence) were next on my to-study list. Glad to be able to add this paper to my queue.

Comment by helltank on Belief Chains · 2014-11-15T23:06:38.584Z · LW · GW

Very useful and instructive post. I would like to comment that one of the biggest tests (or so it seems to me) for checking whether a belief chain is valid is the test of resistance to arbitrary changes.

You write that systems like [I was abused] <-> [people are meanies] <-> [life is horrible] are stable, and that this is why people believe them: they seem to hold up under their own reasoning. But they are inherently not stable, because they are not connected to the unshakable foundation of the source of truth (reality)!

Suppose you apply my test and /arbitrarily change one of the beliefs/. Let's say I change the belief [I was abused] to [I was not abused] (an entirely plausible viewpoint to hold, unless you think that everyone is abused). In that case the whole chain falls apart: if you were not abused, then nothing shows that people are meanies, which in turn allows for a possibly non-terrible world. So the system is only stable on the surface. A house is not called solid because it can stand up; it is called solid if it can stand rough weather (arbitrary changes) without falling.

Now let's look at the truthful chain [Laws of physics exist] <-> [Gravity exists] <-> [If I jump, I will not float]. Here we can arbitrarily change the value of ANY belief and still have the chain repair itself. Say I declare that the LAWS OF PHYSICS ARE FALSE. I would merely reply, "Gravity, supported by the observation that jumping people fall, proves or at least very strongly evidences the existence of a system of rules that govern our universe," and from there work out the laws of physics from basic principles at caveman level. It might take a long time, but in principle it works.

Now, if I say that gravity does not exist, a few experiments along the laws-of-physics -> gravity link will prove me wrong. And if I claim that when I jump I will not fall, gravity, supported by the laws of physics, thinks otherwise (and enforces its opinion quite sharply).

The obvious objection is that in the second example there is a third person saying "these things are false," as opposed to a god actually making a change in the first. But the key point is that genuinely stable (true) belief chains cannot logically absorb such a random change without auto-repairing themselves. It is impossible to imagine the laws of physics existing as they are and yet gravity being arbitrarily different for some reason. The truth of the belief chain holds all the way from the laws of physics down to quantum mechanics, the finest-grained description of reality we have found so far.

It seems clear to me that the ability to withstand and repair an arbitrary change is what differentiates true chains from bad ones.
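A toy sketch of that test, treating a belief chain as a list of nodes where some are pinned to direct observation. The representation and the repair rule are entirely my own illustration, not anything from the post:

```python
# Toy model: a chain survives an arbitrary flip iff some belief in it is
# pinned to observation, so the true value can propagate back and repair it.

def survives_arbitrary_flips(beliefs, observed):
    """beliefs: list of bools; observed: indices pinned by direct observation."""
    n = len(beliefs)
    for k in range(n):
        state = list(beliefs)
        state[k] = not state[k]   # the arbitrary change
        if observed:
            # Observation restores the pinned nodes, and mutual support
            # propagates their values back along the chain.
            nearest = lambda j: min(observed, key=lambda i: abs(i - j))
            state = [beliefs[nearest(j)] for j in range(n)]
        if state != beliefs:
            return False          # the flip stuck: the chain fell apart
    return True

abuse_chain = [True, True, True]    # [I was abused, people are meanies, life is horrible]
physics_chain = [True, True, True]  # [laws of physics, gravity, I won't float]

print(survives_arbitrary_flips(abuse_chain, observed=set()))        # False: no grounding
print(survives_arbitrary_flips(physics_chain, observed={0, 1, 2}))  # True: auto-repairs
```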

Comment by helltank on November 2014 Monthly Bragging Thread · 2014-11-13T13:46:02.896Z · LW · GW

Thanks a lot. I really appreciated that comment.

Comment by helltank on The Truth and Instrumental Rationality · 2014-11-10T02:58:55.980Z · LW · GW

A psychopath would have no problem with this, by the way; he'd just step on the heads of people and be on his merry way, calm as ever.

Comment by helltank on November 2014 Monthly Bragging Thread · 2014-11-10T02:41:43.592Z · LW · GW

I went through an entire evening outing and did not drop the ball once socially: in every event, I successfully carried out all the steps of social interaction, from mimicking empathy perfectly (or so I'd like to think) to adopting the correct facial expressions and words. I'd like to think that this is a huge step forward in my social training. One of the people I went on the outing with even commented that he thought my social skills were improving greatly.

Comment by helltank on Baysian conundrum · 2014-10-16T09:08:49.716Z · LW · GW

I'm really having a lot of trouble understanding why the answer isn't just:

A 1000/1001 chance that I'm about to be transported to a tropical island, and a 0 chance given that I didn't make the oath.

This assumes that the uploaded you memory-blocks his own uploading when running simulations.
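A minimal sketch of where 1000/1001 comes from, assuming (as the comment seems to) that making the oath results in 1000 simulated copies of this moment plus the one original; the copy count is an assumption on my part, not something stated here:

```python
# Self-locating probability under the assumed setup: 1000 simulated
# copies of this moment plus 1 original, all subjectively identical.
simulated_copies = 1000
originals = 1

p_island = simulated_copies / (simulated_copies + originals)
print(p_island)             # 0.999..., i.e. 1000/1001

# And if the oath was never made, no copies exist:
print(0 / (0 + originals))  # 0.0
```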

Comment by helltank on Causal decision theory is unsatisfactory · 2014-09-14T12:51:06.661Z · LW · GW

Maybe I was unclear.

I'm arguing that the button will never, ever be pushed. If you are NOT a psychopath, you won't push; end of story.

If you ARE a psychopath, you can choose to push or not to push.

If you push, that's evidence you are a psychopath, and a psychopath should not push. Therefore, you will always end up regretting the decision to push.

If you don't push, nothing happens.

In all three cases the correct decision is not to push; therefore, you should not push.
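A quick enumeration of those cases; this is just my restatement of the reasoning above, with "regret" standing in for the bad outcome:

```python
# Enumerate (psychopath?, push?) and apply the comment's reasoning:
# pushing is always regretted, because pushing is itself evidence of
# psychopathy, and a psychopath who pushes gets the bad outcome.

def outcome(is_psychopath, pushes):
    if pushes:
        return "regret"    # pushing marks you as a psychopath either way
    return "fine"          # not pushing: nothing happens

for is_psychopath in (True, False):
    for pushes in (True, False):
        print(f"psychopath={is_psychopath}, pushes={pushes} -> {outcome(is_psychopath, pushes)}")
# "Don't push" is never worse in any case, so the button never gets pushed.
```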

Comment by helltank on Rationality Quotes September 2014 · 2014-09-14T00:03:47.105Z · LW · GW

Most people would die before they think. Most do.

-AC Grayling

Comment by helltank on Talking to yourself: A useful thinking tool that seems understudied and underdiscussed · 2014-09-13T23:55:16.934Z · LW · GW

What about talking to your rational self? It seems like this accomplishes the benefits of talking to yourself and improves upon some of them.

Comment by helltank on How realistic would AI-engineered chatbots be? · 2014-09-13T23:46:08.528Z · LW · GW

The thing is, if you get suspicious you don't immediately leap to the conclusion of chatbots. Nobody glances around, realizes everyone is bland and stupid, and thinks, "I've been fooled! An AI has taken over the world and replaced human beings with chatbots!" unless they suffer from paranoia.

Your question, "How much more sophisticated would they need to be?", is answered by "it depends". If you live as a hermit in a cave up in the Himalayas, living off water from a mountain stream and eating nothing but what you hunt or gather with your bare hands, the AI will not need to use chatbots at all. If you're a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle (let's be frank: if we were introduced to a chatbot imitating Eliezer Yudkowsky, the difference would be fairly obvious).

If you've interacted a lot with your friends and family and have never once been suspicious that they are chatbots, then with our current level of AI technology it is unlikely (but not impossible) that they actually are.

[Please note that if everyone said exactly what you expected them to say, that would make it fairly obvious that something is up, unless you happen to be a very, very good predictor of human behavior.]

Comment by helltank on Causal decision theory is unsatisfactory · 2014-09-13T23:22:10.097Z · LW · GW

Wouldn't the fact that you're even considering pushing the button (because if only a psychopath would push the button, it follows that a non-psychopath would never push it) indicate that you are a psychopath, and therefore that you should not push the button?

Another way to put it is:

If you are a psychopath and you push the button, you die. If you are not a psychopath and you push the button, pushing it makes you a psychopath (since only a psychopath would push), and therefore you die.

Comment by helltank on Simulate and Defer To More Rational Selves · 2014-09-11T09:52:18.881Z · LW · GW

What I'm interested in is whether this method is applicable to social situations as well. I am not a naturally social person, but have studied how people interact and general social behaviors well enough that I can create a simulation of a "socially acceptable helltank".

I already have mental triggers (what I like to call "scripts") in place for a simulation of my rational mind, or rather for a portion of my rational mind kept isolated from bias and metaphorically disconnected from the rest, which can override the "main" portion of my mind if it becomes irrational at some point, much like a backup system overriding a corrupted main system.

Until today, however, I had not thought of using them to simulate social skills. I suppose I might eventually spread out a set of simulations, what eli_sennesh called a Parliament of different aspects of your personality in his cancelled post, to guide my decision-making in certain situations, with a "master aspect" (the aforementioned rational simulation) controlling when to give an aspect override privileges.

Still a very good post. Thank you for it.