Posts

Comments

Comment by vinayak on [deleted post] 2012-06-21T11:53:13.474Z

Haha! Very curious to know how this turns out!

Comment by vinayak on Making Beliefs Pay Rent (in Anticipated Experiences) · 2012-05-15T04:26:41.358Z · LW · GW

I have read this post before and agreed with it. But I read it again just now and have some new doubts.

I still agree that beliefs should pay rent in anticipated experiences. But I am not sure any more that the examples stated here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether a tree makes a sound or not is to place an air vibration detector in the vicinity of the tree and check it later. If it did detect some vibration, the answer is yes. For the second person, the test is to monitor every person living on earth and see if their brains did the kind of auditory processing that the falling tree would make them do. Since the first person's test has turned out to be positive and the second person's test has turned out to be negative, they say "yes" and "no" respectively as answers to the question, "Did the tree make any sound?"

So the problem here doesn't seem to be an absence of rent in anticipated experiences. There is still a problem, true, because there is no single anticipated experience where the two people anticipate opposite outcomes even though one says the tree makes a sound and the other says it doesn't. But that seems to be for a different reason.

Say person A has a set of observations X, Y, and Z that he thinks are crucial for deciding whether the tree made a sound. For example, if X is positive, he concludes that the tree did make a sound, and otherwise that it didn't; if Y is negative, he concludes it did not make a sound; and so on. Here X could be "caused air vibrations", for example. For all other kinds of observations, A has a don't-care protocol, i.e., those observations say nothing about the sound. Similarly, person B has a set X', Y', Z' of crucial observations, and all other observations lie in his set of don't-cares. The problem here is just that X, Y, Z are completely disjoint from X', Y', Z'. Thus even though A and B differ in their opinions about whether the tree made a sound, there is no single aspect where they would anticipate opposite experiences.

Comment by vinayak on SotW: Be Specific · 2012-04-03T19:36:12.954Z · LW · GW

How about this:

People are divided into pairs. Say A and B are in one pair. A gets a map of something that's fairly complex but not too complex, for example an apartment with a sufficiently large number of rooms. A's task is to describe this to B. Once A and B are both satisfied with the description, B is asked questions about the place the map represents. Here are examples of questions that could be asked:

How many left turns do you need to make to go from the master bedroom to the kitchen?

Which washroom is nearest to the game room?

You are sitting in room1 and want to go to room2. You have some guests sitting in room3 and you want to avoid them. Can you still reach room2 without passing through room3?

You can also just re-create the story about Y Combinator and Paul Graham: show a new web service to person A, ask him to describe it to person B, and finally ask B questions about the web service.

In both cases, the accuracy with which B answers the questions should track the quality of A's description.

I think two variants can be tried. In the first, A does not know what questions B will be asked. In the second, he does, but he is prohibited from directly including the answers in his description.

Comment by vinayak on Meetup : First(New?) Waterloo Meetup · 2011-11-12T03:38:35.807Z · LW · GW

I will come too.

Comment by vinayak on Starting a LW meet-up is easy. · 2011-06-12T11:51:57.013Z · LW · GW

Hey, I live in Waterloo too. I will join. (Perhaps not this one, but any subsequent ones after the 24th this month that are organized in Waterloo.) Please keep me posted and let me know if you need any help in organizing this.

Comment by vinayak on Applying Behavioral Psychology on Myself · 2010-06-24T18:08:28.602Z · LW · GW

Pretty neat. Thanks!

Comment by vinayak on Applying Behavioral Psychology on Myself · 2010-06-22T12:11:59.271Z · LW · GW

If you have many things to do and you are wasting time, number those things from 1 to n, assign n+1 to wasting time, and then use http://random.org to generate a random number between 1 and n+1 (both included) to decide what you should do. This adds some excitement and often works.
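A minimal sketch of this scheme in Python (the task names here are made up, and the standard random module stands in for random.org):

```python
import random

# Hypothetical to-do list; the last entry, number n+1, is "waste time".
tasks = [
    "write thesis chapter",  # 1
    "answer emails",         # 2
    "do the dishes",         # 3  (n = 3)
    "waste time",            # n+1 = 4
]

# Pick a number uniformly from 1 to n+1 (both included) and do that task.
choice = random.randint(1, len(tasks))
print(f"Task {choice}: {tasks[choice - 1]}")
```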

Comment by vinayak on Less Wrong Book Club and Study Group · 2010-06-10T15:12:58.233Z · LW · GW

I live in Waterloo, Ontario (Canada). Does anyone live nearby?

Comment by vinayak on Less Wrong Book Club and Study Group · 2010-06-10T15:08:49.892Z · LW · GW

I'm in too.

Comment by vinayak on Open Thread: May 2010 · 2010-05-02T05:20:38.511Z · LW · GW

Consulting a dataset and counting the number of times the event occurred and so on would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say, "Oh, I do not have the dataset with me right now; can I get back to you later?"

What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would identify some facts that could be used as evidence in favour or against the hypothesis. And then one would do the necessary Bayesian updates.
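As a toy illustration of that procedure (all numbers below are invented; this only shows the mechanics of odds-form updating, not an actual estimate):

```python
# Sketch of the procedure: start from prior odds, multiply in a
# likelihood ratio for each piece of evidence, convert back to a probability.

prior_odds = 1 / 200.0  # hypothetical prior odds for "nuked before 2020"

# Each entry is P(evidence | hypothesis) / P(evidence | not hypothesis);
# the evidence items and their ratios are made up for illustration.
likelihood_ratios = [
    3.0,  # e.g. some worrying geopolitical development
    0.5,  # e.g. a long track record of non-use
]

odds = prior_odds
for lr in likelihood_ratios:
    odds *= lr

probability = odds / (1.0 + odds)
print(f"Posterior probability: {probability:.4f}")
```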

I know how to do this for the simple cases of balls in a bin etc. But I get confused when it comes to forming beliefs about statements that are about the real world.

Comment by vinayak on Open Thread: May 2010 · 2010-05-01T23:39:26.204Z · LW · GW

So 200:1 is your prior? Then where's the rest of the calculation? Also, how exactly did you come up with the prior? How did you decide that 200:1 is the right place to stop? In other words, can you claim that if a completely rational agent had the same information you have right now, that agent would also come up with a prior of 200:1? What you have described is just a way of measuring how much you believe in something. But what I am asking is how you decide how strong your belief should be.
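For reference, the "rest of the calculation" being asked for would presumably be repeated applications of Bayes' theorem in odds form, once per piece of evidence $E$:

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)},$$

so the question is both where the 200:1 prior odds come from and what likelihood ratios the available evidence supplies.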

Comment by vinayak on Open Thread: May 2010 · 2010-05-01T11:23:11.311Z · LW · GW

I want to understand Bayesian reasoning in detail, in the sense that I want to take up a statement that is relevant to our daily life and then try to find exactly how much I should believe in it based on the beliefs that I already have. I think this might be a good exercise for the LW community? If yes, then let's take up a statement, for example, "The whole world is going to be nuked before 2020." Now, based on whatever you know right now, you should form some percentage of belief in this statement. Can someone please show me exactly how to do that?

Comment by vinayak on Attention Lurkers: Please say hi · 2010-04-19T03:51:26.828Z · LW · GW

Hello.

Now can I get some Karma score please?

Thanks.

Comment by vinayak on The Importance of Goodhart's Law · 2010-03-14T03:54:22.562Z · LW · GW

The fact that students who are motivated to get good exam scores very often do better than students who are genuinely interested in the subject is probably also an instance of Goodhart's Law?

Comment by vinayak on Open Thread: March 2010 · 2010-03-02T06:07:35.225Z · LW · GW

Yes, I should be more specific about 2.

So let's say the following are the first three questions you ask and their answers -

Q1. Do you think A is true? A. Yes.

Q2. Do you think A=>B is true? A. Yes.

Q3. Do you think B is true? A. No.

At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question?

Q4. Do you believe in Modus Ponens?

or in other words,

Q4. Do you think that if A and A=>B are both true then B should also be true?

If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.

Q5. Do you think that if you believe in Modus Ponens, and you also think that A and A=>B are true, then you should believe that B is true as well?

And I can go on and on...

So the point is, if you think asking all these questions is necessary to decide whether the person is rational, then in effect any given person can hold any arbitrary set of beliefs and still claim to be rational, simply by adding to his belief system a few extra beliefs saying that the n-th-level version of Modus Ponens is wrong, for some suitably chosen n.

Comment by vinayak on Open Thread: March 2010 · 2010-03-02T05:49:23.138Z · LW · GW

I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events -

  1. Strigli wins the game.
  2. It rains immediately after the match is over.
  3. Strigli wins the game AND it rains immediately after the match is over.

I am not sure if your scheme ensures that this does not happen.
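The axiom being appealed to here is the conjunction rule: a conjunction can never be more probable than either of its conjuncts,

$$P(A \wedge B) \;\le\; \min\bigl(P(A),\, P(B)\bigr),$$

so assigning all three events the same probability is consistent only in the degenerate case where Strigli winning and the rain (almost) always occur together; generically, event 3 must get strictly less probability than events 1 and 2.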

Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don't understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?

One way I can imagine is to assign each of them a prior probability that decreases with its Kolmogorov complexity. The good thing is that a prior built this way can be made to satisfy the axioms of probability. But I have only seen Kolmogorov complexity defined for strings and such; I don't know how to define it for complicated things like hypotheses. Also, even if there is a way to define it, I can't completely convince myself that it gives a correct prior probability.
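For what it's worth, the standard way to formalize this intuition (as in Solomonoff induction) is to make the prior decay exponentially in the complexity, roughly

$$P(h) \;\propto\; 2^{-K(h)},$$

where $K(h)$ is the length of the shortest program that outputs a description of $h$; normalizing over all hypotheses is what lets the resulting prior respect the probability axioms.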

Comment by vinayak on Open Thread: March 2010 · 2010-03-01T20:27:14.784Z · LW · GW

I have two basic questions that I am confused about. This is probably a good place to ask them.

  1. What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

  2. Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to 'yes' and a probability to 'no'. What's the smallest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?

Comment by vinayak on Fundamentally Flawed, or Fast and Frugal? · 2010-01-20T14:42:32.950Z · LW · GW

I think one thing that evolution could easily have done with our existing hardware is to at least allow us to use rational algorithms whenever it's not intractable to do so. This would have eliminated things such as akrasia, where our rational thoughts do give a solution but our instincts do not let us act on it.

Comment by vinayak on Efficient prestige hypothesis · 2009-12-15T08:52:11.380Z · LW · GW

There seem to exist certain measures of quality that are second-level, in the sense that they measure quality indirectly, mostly because the indirect way seems to be easier. One example is sex appeal. The "quality" of a potential mate should be measured just by the number of healthy offspring they can produce. However, that's difficult to find out, and hence evolution has programmed us to track sex appeal instead, that is, the number of people who find the person in question attractive. The only problem with such second-level measures is that they can be feigned: an impotent person can still look attractive.

Similarly, imagine a car manufacturer that produces two different models which differ only in looks and cost. Assuming that neither model's looks are clearly better than the other's, people who use prestige as the measure of quality will always buy the more expensive car.

Comment by vinayak on When Willpower Attacks · 2009-10-20T23:27:57.848Z · LW · GW

I think there's a fundamental flaw in this post.

You're assuming that if we have unlimited willpower, we will actually use all of it. Willpower is the ability to do what you think is the correct thing to do. If what you think is the correct thing to do is actually the correct thing to do, then doing it will, by the definition of correctness, be good. So if you do some "high-level reasoning" and conclude that not sleeping for a week is the best thing for you to do, and you then use your willpower to do it, it will be the best thing to do, precisely because you've done the correct analysis and taken all costs into consideration (including the cost of bad health from sleep deprivation).

It's always good to be able to do the thing that's best for you. What's bad is not being able to decide what's best for you. So we shouldn't blame willpower; we should blame the inability to make correct decisions.

Comment by vinayak on Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09! · 2009-10-19T22:04:01.560Z · LW · GW

We realized that one of the very important things rationalists need is a put-down artist community, as opposed to the pick-up artist community, which already exists but isn't of much use. This is because of the very large number of rationalists who get into relationships but then aren't able to figure out how to get out of them.

Comment by vinayak on Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09! · 2009-10-18T18:11:04.238Z · LW · GW

So we have three people now. I hope this happens.

Comment by vinayak on Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09! · 2009-10-18T17:51:12.694Z · LW · GW

Oops, I guess I'm late - I just saw this post. In any case, I will come too.

Comment by vinayak on The Nature of Offense · 2009-07-25T08:18:22.306Z · LW · GW

It would be nice to come up with a more precise definition of 'lowering the status'. For example, if some person treats me like a non-person, all he is doing is expressing his opinion that I am a non-person. This, being the opinion of just one person, should not affect my status in the whole society, and yet I feel offended. So the first question is whether this should be called a lowering of my status.

Also, let us assume that one person treating me like a non-person does lower my status in some way. Even then, shouting back at him, informing him that he is offending me, and requesting him to take care not to offend me in the future is obviously not a good way of regaining the lost status. And yet this is what comes naturally to most people's minds on being offended. Why is that so?

Comment by vinayak on When Truth Isn't Enough · 2009-07-22T09:40:01.159Z · LW · GW

Something related happens to me every once in a while when someone makes a statement of the form A -> B and I say 'yes' or 'ok' in response. By saying 'ok' all I am doing is acknowledging the truth of the statement A -> B; however, in most cases the person assumes that I am agreeing that A is true and hence ends up concluding that B is true as well.

One example is this -

I go to a stationery shop and ask for an envelope. The storekeeper hands me one and I start inspecting it. The storekeeper observes this and remarks, "If you want a bigger envelope, I can give you one." I say, "Alright." He hands me a bigger envelope.