Comments

Comment by kvas_duplicate0.1636121129676118 on 2017 LessWrong Survey · 2017-09-21T12:35:04.917Z · LW · GW

I took the survey. It was long but fun. Thanks for the work you've put into designing it and processing the results.

Comment by kvas_duplicate0.1636121129676118 on A Decision Problem · 2017-08-27T18:26:19.822Z · LW · GW

What can I say, your prior does make sense in the real world. Mine was based on the other problems featuring Omega (Newcomb's Problem and Counterfactual Mugging), where, apart from messing with your intuitions, Omega was not playing any dirty tricks.

Comment by kvas_duplicate0.1636121129676118 on A Decision Problem · 2017-08-27T09:52:17.448Z · LW · GW

There's no good reason for assigning 50% probability to game A, but neither is there a good reason to assign any other probability. I guess I can say that I'm using something like a "fair Omega prior" that assumes Omega is not trying to trick me.

You and Gurkenglas seem to assume that Omega would try to minimize your reward. What is the reason for that?

Comment by kvas_duplicate0.1636121129676118 on A Decision Problem · 2017-08-27T00:39:03.679Z · LW · GW

You could also make a version where you don't know what X is. In that case the always-reject strategy doesn't work, since you would reject k*X in real life after the simulation rejected X. It seems like if you must precommit to one choice, you have to accept (and get (X + X/k)/2 on average), but if you have a source of randomness, you can try to reject your cake and eat it too. If you accept with probability p and reject with probability 1 - p, your expected utility is (p*X + (1-p)*p*k*X + p*p*X/k)/2. If you know the value of k, you can calculate the best p and see whether the random strategy beats always-accept. I'm still not sure where this is going, though.
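
Here's a minimal sketch in Python that takes the expected-utility expression above at face value and scans for the best p (k = 10 for concreteness, utilities in units of X):

```python
def expected_utility(p, k, x=1.0):
    # The expression above: (p*X + (1-p)*p*k*X + p*p*X/k) / 2
    return (p * x + (1 - p) * p * k * x + p * p * x / k) / 2

k = 10.0

# Brute-force scan over p in [0, 1] in steps of 1e-4.
best_p = max((i / 10000 for i in range(10001)),
             key=lambda p: expected_utility(p, k))

print(f"best p:        {best_p:.4f}")                       # ~0.5556
print(f"value there:   {expected_utility(best_p, k):.4f}")  # ~1.5278
print(f"always-accept: {expected_utility(1.0, k):.4f}")     # 0.5500
```

Maximizing the quadratic analytically gives p* = k / (2*(k - 1)) for k >= 2 (so that p* stays within [0, 1]), which matches the scan. So, assuming the expression above correctly models the game, the random strategy beats always-accept by quite a margin.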

Comment by kvas_duplicate0.1636121129676118 on A Decision Problem · 2017-08-27T00:13:26.717Z · LW · GW

I also agree with Dagon's first paragraph. Then, since I don't know which game Omega is playing, except that either is possible, I will assign 0.5 probability to each game, calculate the expected utilities (reject -> $5000, accept -> $550), and reject.

For the general form, I will reject if k > 1/k + 1, which is the same as k*k - k - 1 > 0, or k > (1 + sqrt(5))/2 (the golden ratio). Otherwise I will accept.
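
A quick sanity check of these numbers (a sketch; the per-branch payoffs 0.5*k*X for rejecting and 0.5*(X + X/k) for accepting are inferred from the $5000 and $550 figures above):

```python
from math import sqrt

def reject_value(k, x=1000.0):
    # 50% chance of the game where rejecting pays k*X, else nothing.
    return 0.5 * k * x

def accept_value(k, x=1000.0):
    # 50% chance of getting X, 50% chance of getting X/k.
    return 0.5 * (x + x / k)

print(reject_value(10), accept_value(10))   # 5000.0 550.0

threshold = (1 + sqrt(5)) / 2               # the golden ratio, ~1.618
for k in (1.5, 1.7):
    choice = "reject" if reject_value(k) > accept_value(k) else "accept"
    print(f"k = {k}: {choice}")             # accept below the threshold, reject above
```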

It seems like I'm missing something, though, because it's not clear why you chose these payoffs and not the ones that give some kind of nice answer.

Comment by kvas_duplicate0.1636121129676118 on [Humor] A Fearsome Rationality Technique · 2017-08-15T16:11:15.938Z · LW · GW

Thank you, this is awesome! I've just convinced my wife to pay more attention to the LW discussion forum.

Comment by kvas_duplicate0.1636121129676118 on People don't have beliefs any more than they have goals: Beliefs As Body Language · 2017-08-13T22:39:09.307Z · LW · GW

And then they judge what some high-status members of their group would say about the particular Quantum Mechanics conundrum. Then, they side with him about that. Almost nobody actually ponders what the Hell is really going on with the Schrodinger's poor cat. Almost nobody.

I find it harder to reason about the question "what would high-status people in group X say about Schrodinger's cat?" than about the question "based on what I understand about QM, what would happen to Schrodinger's cat?". I admit that I suck at modelling other people, but how many people are actually good at it?

Not to say that belief signalling doesn't happen. After all, in many cases you just know what the high-status people say, since they, well, said it.

Comment by kvas_duplicate0.1636121129676118 on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-08-02T14:57:15.101Z · LW · GW

Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)

After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I've come to the opinion that the "disagreement on priorities", as I originally called it, is more significant than I first acknowledged.

To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM doesn't work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to explore different parts of the solution space and potentially find different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.

... the genesis of the meta-rationalist epistemology is that the map is part of the territory, and thus the map is constrained by the territory and not by an external desire for correspondence or anything else.

Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.

So my current takeaways are:

  • As rationalists who agree with meta-rationalists on (meta-)epistemological foundations, we should consider updating our epistemological priorities in the direction they are advocating.

  • If we can figure out how to formulate meta-rationalist ideas in a less inscrutable way, with less nebulosity, we should do so; it will benefit everyone.

  • We should look into what meta-rationalists have to say about creativity and hypothesis generation; perhaps it will help with formulating a general high-level theory of creative thinking (and if we do it in a way that's precise enough to be programmed into computers, that would be pretty significant).

Comment by kvas_duplicate0.1636121129676118 on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-29T20:22:25.800Z · LW · GW

You are steelmanning the rationalist position

That could very well be. I had the impression that meta-rationalists were arguing against a strawman, but that would just mean we disagree about the definition of "rationalist position".

I agree that one-true-map rationalism is rather naive and that many people hold this position, but I haven't seen much of it on LW. Actually, LW contains the clearest description of the map/territory relationship that I've seen, with no nebulosity or any of that stuff.

Comment by kvas_duplicate0.1636121129676118 on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-29T20:14:32.511Z · LW · GW

OK, I think I get it. So basically, pissing contests aside, meta-rationalists should probably just concede that LW-style rationalists are also meta-rational and have a constructive discussion about better ways of thinking (I've actually seen a bit of this, for example in the comments to this post).

Judging from the tone of your comment, I gather that that's the opposite of what many of them are doing. Well, that doesn't really surprise me, but it's kind of sad.

Comment by kvas_duplicate0.1636121129676118 on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-29T11:37:03.789Z · LW · GW

Thank you, this is a pretty clear explanation. I did read a bit more from meaningness.com yesterday and what I gathered was also pointing in the direction of this sort of meta-epistemological relativism.

However, I still don't really see a significant disagreement. The map/territory distinction, which I see as one of the key ideas of rationalism, seems to be exactly about this. So I see rationalism as saying "the map is not the territory and you never have unmediated access to the territory but you can make maps that are more or less useful in different contexts for different purposes; here are some tools for mapmaking and updating, and btw, the maps produced by science are great for most purposes, so we mostly use those and build new ones on top of them".

So with what I learned so far, if I try to formulate the disagreement, it would probably be something like this:

  • Rationalists: sure, no map is objectively true, but the science maps work really well for most purposes and a lot of people are working on improving them; everyone else would be better off investing their time building on top of the science maps; also, Bayesian updating is the best method for deciding how to update your map.

  • Meta-rationalists: yeah, science maps are pretty awesome, but non-scientific maps work rather well for some people too, so we should pay attention to those as well; Bayesian updating is great, but that's the easy part; the hard part is formulating the hypotheses.

I'm not sure if I'm capturing most of the disagreement here, but at least this part seems to be more about different priorities than about fundamentally different world views. So there's no "quantum leap" of the kind meta-rationalists promise, or am I missing something?

Comment by kvas_duplicate0.1636121129676118 on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-28T14:30:32.400Z · LW · GW

I've read the article and then also A first lesson in meta-rationality, but I must confess I still have no idea what he's talking about. The accusations of inscrutability seem to be spot on.

Perhaps I should read more about meta-rationality to get it, but just to keep me motivated, can anyone explain in simple terms what the deal is, or perhaps give me an example of a meta-rationalist belief that rationalists don't share?

Comment by kvas_duplicate0.1636121129676118 on MILA gets a grant for AI safety research · 2017-07-23T20:04:45.097Z · LW · GW

I know that David Krueger is one of the people working with 80,000 Hours on helping people get into the AI safety field. He also organized a related Google group.