## Posts

Meetup : Mumbai Meetup 2013-11-22T10:14:00.678Z · score: 3 (4 votes)

Comment by ronak on Rationality is about pattern recognition, not reasoning · 2015-06-07T20:10:33.349Z · score: 0 (0 votes) · LW · GW

Oh this is nice. I've also come to realise this over time, in different words, and my mind is extremely tickled by how your formulation puts it on an equal footing with other non-explicit-rationality avenues of thought.

I would love to help you. I am very interested in a passion project right now. And we seem to be classifying similar things as hard-won realisations, though we have very different timelines for different things; talking to you might be all-round interesting for me.

Comment by ronak on The Hostile Arguer · 2014-11-29T09:18:57.206Z · score: 1 (1 votes) · LW · GW

My steelman is this (without having read anything downstairs, so I apologise if there's a better one extant): the world is a complicated place, and we all form beliefs based on the things we think are important in the world; and since we are all horrible reasoners, it's impossible to believe about some things that they are important movers of the world without seeing it actually happen and viscerally feeling it change things.

Cognitive biases in yourself are like this, methinks. Your thought processes really need to be broken down repeatedly for you to be able to start seeing the subtle shifts happening inside you - and anticipating that they happen even when you don't see them (generalising from many examples here, but not nearly enough).

Another difficult tripping point for me was intuitive reasoning. Till I saw people who couldn't make any sense do significantly better than me, I could not possibly believe it, even while fighting against people who told me I over-analysed and spoke too much.

I'm slowly coming around on dishonest rhetorical stances, because of the amount of time I've spent trying to convince hostile arguers. Let me soothe the raised hackles of your inner LW-cat by saying that I can't endorse anything like this without finding a Schelling fence,* and am willing to consider anyone who takes such a stance on LW (or in LW-related contexts) evil.

*In fact, based on the world being as it is, I strongly suspect there isn't one.

Comment by ronak on 2014 Less Wrong Census/Survey · 2014-10-24T08:50:09.082Z · score: 55 (55 votes) · LW · GW

I took it. If it's anything like last year, officially 2/5 of my karma will be from surveys.

Comment by ronak on What are your contrarian views? · 2014-09-21T00:02:38.209Z · score: 1 (1 votes) · LW · GW

So, Romila Thapar is not a Dalit activist, just a historian (I'm guessing this is a source of confusion; I could be wrong).

I'm saying they should have read up before starting their project.

I can't find the study for some reason, so I'll try and reconstruct it from memory. They randomly picked, from a city, Dalits (Dalit is a catch-all term coined by B R Ambedkar for people of the lowest castes, and people outside the caste system, all of whom were treated horribly) and people from the merchant castes, to look for genetic differences. Which is all fine and dandy - but for the fact that neither 'Dalit' nor 'merchant-caste' is an actual caste; there are many castes which come into those two categories. So, assuming a simple no-inter-caste-marriage model of caste, a merchant family from village A has about as much (or, considering marginal things like babies born out of rape, even less) genetic material in common with a merchant family from village B thousands of kilometres away as with a Dalit family from its own village - unless there's a common genetic ancestor to all merchant families. And that's where reading the historical literature comes in - the history of caste is much more complicated, involving for example periods when it was barely enforced, and shuffling, and all sorts of stuff. So, they will find differences in their study, but those won't reflect actual caste differences.

Comment by ronak on What are your contrarian views? · 2014-09-20T20:30:22.218Z · score: 0 (0 votes) · LW · GW

Actually, with the caveat that I don't have any object-level research, I doubt it; they assign a rigidity to the whole thing that seems hard to institute. My point was that 'do there exist genetic differences' is not the issue here.

Comment by ronak on What are your contrarian views? · 2014-09-20T13:25:16.993Z · score: 0 (0 votes) · LW · GW

a) Actually Thapar's point wasn't that there were no genetic differences (in fact, the theory of caste promulgated by Dalit activists is that it's created by the prohibition of inter-caste marriage and therefore pretty much predicts genetic differences) - but that the groupings done by the researchers weren't the correct ones.

b) I should actually check that what I surmised is what she said. Thanks for alerting me to the possibility.

Comment by ronak on What are your contrarian views? · 2014-09-19T02:21:38.618Z · score: 2 (2 votes) · LW · GW

When I said humanities I didn't mean social sciences; in fact, I thought social sciences explicitly followed the scientific method. Maybe the word points to something different in your head, or you slipped up. Either way, when I say humanities, I actually mean fields like philosophy and literature and sociology which go around talking about things by taking the human mind as a primitive.

The whole point of the humanities is that it's a way of doing things that isn't the scientific method. The disgraceful thing is the refusal to interface properly with scientists and scientific things - but there's no shortage of scientists who refuse to interface with the humanities either, when you come down to it. My head's canonical example is Indian geneticists who go around trying to find genetic caste differences; Romila Thapar once gave an entertaining rant about how, whatever they found, they'd be reading noise as signal, because the history of caste was nothing like these people imagined.

And, on the other hand, we have many Rortys and Bostroms and Thapars in the humanities who do interface.

Comment by ronak on What are your contrarian views? · 2014-09-19T02:10:34.540Z · score: 0 (0 votes) · LW · GW

In the abstract. Though, undoubtedly, many of the people can do wonders too.

Comment by ronak on What are your contrarian views? · 2014-09-18T21:50:32.975Z · score: 1 (1 votes) · LW · GW

Why? This looks as if you're taking a hammer to Ockham's razor.

Comment by ronak on What are your contrarian views? · 2014-09-18T21:42:29.171Z · score: 7 (9 votes) · LW · GW

Humanities is not only a useful method of knowing about the world - but, properly interfaced, ought to be able to significantly speed up science.

(I have a large interval for how controversial this is, so pardon me if you think it's not.)

Comment by ronak on What are your contrarian views? · 2014-09-17T02:36:52.576Z · score: 10 (12 votes) · LW · GW

Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because hot debated topic etc I'd appreciate it if you could PM; I won't take you down the 'let's argue feminism' rabbit-hole.)

(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)

Comment by ronak on Open thread, September 8-14, 2014 · 2014-09-17T02:29:22.070Z · score: 0 (0 votes) · LW · GW

I think this is a good argument. Thanks.

After some thought on why your argument sounded unsatisfactory to me, I decided that I have a much more abstract, much less precise argument, to do with things like the beginning of epistemology.

In the logical beginning, I know nothing about the territory. However, I notice that I have 'experiences.' I have no reason for believing that these experiences are 'real' in any useful sense. So, I decide to base my idea of truth on usefulness in helping me predict further experiences. 'The sun rises every morning,' in this view, is actually 'it will seem to me that every time there's this morning-thing I'll see the sun rise.' All hypotheses (like maya and Boltzmann brains) that say that these experiences are not 'real,' as long as I have no reason to doubt 'reality,' form part of this inscrutable probability noise in my probability assignments. Therefore, even if I was randomly brought into existence a second ago, it's still rational to carry on as usual and say 'I have no issues with being a Boltzmann brain - however, it's just part of my probability noise.'

I haven't fleshed out precisely the connection between this reasoning and not worrying about Carroll's argument - it seems as if I'm viewing myself as an implementation-independent process trying to reason about its implementation, and asking what reasoning holds up in that view.

Comment by ronak on Bayesianism for humans: "probable enough" · 2014-09-10T06:47:54.600Z · score: 3 (3 votes) · LW · GW

The 'how to think of planning fallacy' I grokked was 'people while planning don't simulate the scenario in enough detail and don't see potential difficulties,'* so this is new to me. Or rather, what you say is in some sense part of the way I thought, except I didn't simulate it in enough detail to realise that I should understand it in a probabilistic sense as well, so it's new to me when it shouldn't be.

*In fact, right now I'm procrastinating going and telling my prof that an expansion I told him I'd do is infinite.

Comment by ronak on Open thread, September 8-14, 2014 · 2014-09-09T13:58:25.875Z · score: 1 (1 votes) · LW · GW

I'm interested in your calling it 'paying in sanity.' Are you referring to the insanity of believing in Bengali babus, or the fact that they're preserving their own sanity in some way by not going to a real doctor for things they know they can't afford?

Comment by ronak on Open thread, September 8-14, 2014 · 2014-09-09T13:56:32.234Z · score: 8 (8 votes) · LW · GW

Indeed, 'being poor is expensive' is related to how they frame this fact. From the end of the same chapter:

Comment by ronak on Open thread, September 8-14, 2014 · 2014-09-09T13:54:40.608Z · score: 3 (3 votes) · LW · GW

My issue is much 'earlier' in terms of logic. When I started reading that post, the Boltzmann brain problem seemed like a non-problem; an inevitable conclusion that people were unwilling to accept for reasons of personal bias - analogous to how most LWers would view someone who insists on metaphysical free will.

Even if certain facts about the universe didn't solve the issue, it seems to me that Carroll would still want to find reasons that we weren't Boltzmann brains. Now, from my own interest in entropy and heat death, I had long ago concluded that I might, right now, be part of a random fluctuation that is gone the next moment; in fact, I had concluded that every moment of my existence turns up somewhere during heat death. That's not an issue, as far as I can see - whatever fact we see about the universe, that would just be part of this fluctuation (I don't know about this acceleration thing - my technical understanding of the issue is not good enough, but I'm willing to take your and Carroll's words for it). At this level, 'we're part of a random fluctuation' is one of those uninteresting hypotheses like maya that could very well be true but are unverifiable. (Continued adherence to ordered laws can't really be considered evidence, since we may have just popped into existence a second ago with memories as we have. It truly can predict everything.)

But then, Carroll argues that believing you're a Boltzmann brain is inconsistent, since you can't trust your own brain which is a product of a random fluctuation. Of course, I don't believe I'm a Boltzmann brain, I just note that no experience (modulo expanding universe) contradicts the hypothesis and therefore I should reason without giving a shit about it. However, Carroll's argument gives me pause, and I can't really see whether I should consider it seriously.

Comment by ronak on Open thread, September 8-14, 2014 · 2014-09-09T02:59:59.467Z · score: 12 (12 votes) · LW · GW

From Poor Economics by Esther Duflo and Abhijit Banerjee

There is potentially another reason the poor may hold on to beliefs that might seem indefensible: When there is little else they can do, hope becomes essential. One of the Bengali doctors we spoke to explained the role he plays in the lives of the poor as follows: “The poor cannot really afford to get treated for anything major, because that involves expensive things like tests and hospitalization, which is why they come to me with their minor ailments, and I give them some little medicines which make them feel better.” In other words, it is important to keep doing something about your health, even if you know that you are not doing anything about the big problem. In fact, the poor are much less likely to go to the doctor for potentially life-threatening conditions like chest pains and blood in their urine than with fevers and diarrhea. The poor in Delhi spend as much on short-duration ailments as the rich, but the rich spend much more on chronic diseases.34 So it may well be that the reason chest pains are a natural candidate for being a bhopa disease (an older woman once explained to us the dual concepts of bhopa diseases and doctor diseases—bhopa diseases are caused by ghosts, she insisted, and need to be treated by traditional healers), as are strokes, is precisely that most people cannot afford to get them treated by doctors.

Comment by ronak on Open thread, September 8-14, 2014 · 2014-09-09T02:29:57.259Z · score: 6 (8 votes) · LW · GW

A room full of monkeys, hitting keys randomly on a typewriter, will eventually bang out a perfect copy of Hamlet. Assuming, of course, that their typing is perfectly random, and that it keeps up for a long time. An extremely long time indeed, much longer than the current age of the universe. So this is an amusing thought experiment, not a viable proposal for creating new works of literature (or old ones).

There’s an interesting feature of what these thought-experiment monkeys end up producing. Let’s say you find a monkey who has just typed Act I of Hamlet with perfect fidelity. You might think “aha, here’s when it happens,” and expect Act II to come next. But by the conditions of the experiment, the next thing the monkey types should be perfectly random (by which we mean, chosen from a uniform distribution among all allowed typographical characters), and therefore independent of what has come before. The chances that you will actually get Act II next, just because you got Act I, are extraordinarily tiny. For every one time that your monkeys type Hamlet correctly, they will type it incorrectly an enormous number of times — small errors, large errors, all of the words but in random order, the entire text backwards, some scenes but not others, all of the lines but with different characters assigned to them, and so forth. Given that any one passage matches the original text, it is still overwhelmingly likely that the passages before and after are random nonsense.

That’s the Boltzmann Brain problem in a nutshell. Replace your typing monkeys with a box of atoms at some temperature, and let the atoms randomly bump into each other for an indefinite period of time. Almost all the time they will be in a disordered, high-entropy, equilibrium state. Eventually, just by chance, they will take the form of a smiley face, or Michelangelo’s David, or absolutely any configuration that is compatible with what’s inside the box. If you wait long enough, and your box is sufficiently large, you will get a person, a planet, a galaxy, the whole universe as we now know it. But given that some of the atoms fall into a familiar-looking arrangement, we still expect the rest of the atoms to be completely random. Just because you find a copy of the Mona Lisa, in other words, doesn’t mean that it was actually painted by Leonardo or anyone else; with overwhelming probability it simply coalesced gradually out of random motions. Just because you see what looks like a photograph, there’s no reason to believe it was preceded by an actual event that the photo purports to represent. If the random motions of the atoms create a person with firm memories of the past, all of those memories are overwhelmingly likely to be false.

This thought experiment was originally relevant because Boltzmann himself (and before him Lucretius, Hume, etc.) suggested that our world might be exactly this: a big box of gas, evolving for all eternity, out of which our current low-entropy state emerged as a random fluctuation. As was pointed out by Eddington, Feynman, and others, this idea doesn’t work, for the reasons just stated; given any one bit of universe that you might want to make (a person, a solar system, a galaxy, an exact duplicate of your current self), the rest of the world should still be in a maximum-entropy state, and it clearly is not. This is called the “Boltzmann Brain problem,” because one way of thinking about it is that the vast majority of intelligent observers in the universe should be disembodied brains that have randomly fluctuated out of the surrounding chaos, rather than evolving conventionally from a low-entropy past. That’s not really the point, though; the real problem is that such a fluctuation scenario is cognitively unstable — you can’t simultaneously believe it’s true, and have good reason for believing it’s true, because it predicts that all the “reasons” you think are so good have just randomly fluctuated into your head!

So, before reading the last sentence quoted I had no issue with the idea that I turned up as a random fluctuation, but that last sentence gives me pause - and my brain refuses to cross it and give useful thoughts.

Anyone have any useful comments? Thanks.

Comment by ronak on Luck II: Expecting White Swans · 2013-12-19T14:22:07.053Z · score: 0 (0 votes) · LW · GW

I should have been clearer, sorry. Facebook is less inconvenient on two non-trivial counts: there are other reasons to open it (whereas a birthday diary will only have information related to birthdays and similar stuff), and it records the birthdays without any effort on your part.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-17T07:31:36.812Z · score: 0 (0 votes) · LW · GW

Quite well, I thought. We were talking for four or so hours.

Comment by ronak on Luck II: Expecting White Swans · 2013-12-17T07:29:23.041Z · score: 0 (0 votes) · LW · GW

But they made those diary entries. And looked into the diary regularly to make sure they remembered.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-15T09:50:50.836Z · score: 0 (0 votes) · LW · GW

We're moving it to cafe royal. It's in the area.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-15T09:41:53.155Z · score: 0 (0 votes) · LW · GW

Where it used to be. Look outside regal cinema. Close to a shop called sixth street yogurt.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-15T09:40:23.592Z · score: 0 (0 votes) · LW · GW

This is next to a shop called sixth street yogurt.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-15T09:36:06.209Z · score: 0 (0 votes) · LW · GW

So Gloria jeans coffee is closed. I'm standing outside with a sign if anyone's around. Sorry for the mix-up.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-15T05:38:17.378Z · score: 0 (0 votes) · LW · GW

Yeah, conversation most probably. Backup games and things in case too many of us are bad at social stuff.

Comment by ronak on Meetup : Mumbai Meetup · 2013-12-15T05:36:52.951Z · score: 0 (0 votes) · LW · GW

Awesome. Look forward to meeting you.

Comment by ronak on Meetup : Mumbai Meetup · 2013-11-22T19:41:05.534Z · score: 1 (1 votes) · LW · GW

Totally. Moving to fifteenth.

Comment by ronak on 2013 Less Wrong Census/Survey · 2013-11-22T10:05:01.861Z · score: 35 (35 votes) · LW · GW

I took the survey - extra credit and everything!

Comment by ronak on Why one-box? · 2013-07-03T20:47:29.411Z · score: 1 (1 votes) · LW · GW

When I said 'A and B are the same,' I meant that it is not possible for one of A and B to have a different truth-value from the other. Two-boxing entails you are a two-boxer, but being a two-boxer also entails that you'll two-box. But let me try and convince you based on your second question, treating the two as at least conceptually distinct.

Imagine a hypothetical time when people spoke about statistics in terms of causation rather than correlation (and suppose no one had done Pearl's work). As you can imagine, the paradoxes would write themselves. At one point, someone would throw up his/her arms and tell everyone to stop talking about causation. And then the causalists would rebel, because causality is a sacred idea. The correlators would reply probably by constructing a situation where a third, unmeasured C caused both A and B.
Newcomb's is that problem for decision theory. CDT is in a sense right when it says one-boxing doesn't cause there to be a million dollars in the box, and that what does cause the money to be there is being a one-boxer. But it ignores the fact that the same thing that caused there to be the million dollars also causes you to one-box - so, while there may not be a causal link, there very definitely is a correlation.
'C causing both A and B' is an instance of the simplest and most intuitive way in which correlation can be not causation, and CDT fails. EDT is looking at correlations between decisions and consequences and using that to decide.
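This common-cause pattern can be sketched in a toy simulation (my own illustration, with made-up accuracy numbers, not something from the comment): a hidden C drives noisy copies A and B, so observing A tells you a lot about B even though intervening on A would do nothing to B.

```python
import random

# Toy sketch (assumed numbers, not the comment's example): a hidden common
# cause C produces two independent noisy readouts A and B. Conditioning on
# A strongly shifts the probability of B, with no causal link A -> B.
random.seed(0)

def trial():
    c = random.random() < 0.5                   # hidden common cause
    a = c if random.random() < 0.9 else not c   # noisy readout of C
    b = c if random.random() < 0.9 else not c   # independent noisy readout
    return a, b

n = 100_000
n_a = n_b_and_a = n_not_a = n_b_and_not_a = 0
for _ in range(n):
    a, b = trial()
    if a:
        n_a += 1
        n_b_and_a += b          # bools count as 0/1
    else:
        n_not_a += 1
        n_b_and_not_a += b

p_b_given_a = n_b_and_a / n_a               # analytically 0.82
p_b_given_not_a = n_b_and_not_a / n_not_a   # analytically 0.18
```

Reading off P(B|A) here is exactly what EDT does; a naive causal reading of the same numbers is the mistake the correlators' construction is meant to expose.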

Aside: You're right, though, that the LW idea of a decision is somewhat different from the CDT idea. You define it as "a proposition that the agent can make true or false at will." That definition has this really enormous black box called will - and if Omega has an arbitrarily high predictive accuracy, then it must be the case that that black box is a causal link going from Omega's raw material for prediction (brain state) to decision. CDT, when it says that you ought to only look at causal arrows that begin at the decision, assumes that there can be no causal arrow that points to the decision (because the moment you admit that there can be a causal arrow that begins somewhere and ends at your decision, you have to admit that there can exist C that causes both your decision and a consequence without your decision actually causing the consequence).
In short, the new idea of what a decision is itself causes the requirement for a new decision theory.

Comment by ronak on Why one-box? · 2013-06-30T19:12:47.490Z · score: 2 (2 votes) · LW · GW

[Saying same thing as everyone else, just different words. Might work better, might not.]

Suppose once Omega explains everything to you, you think 'now either the million dollars are there or aren't and my decision doesn't affect shit.' True, your decision now doesn't affect it - but your 'source code' (neural wiring) contains the information 'will in this situation think thoughts that support two-boxing and accept them.' So, choosing to one-box is the same as being the type of agent who'll one-box.
The distinction between agent type and decision is artificial. If your decision is to two-box, you are the agent-type who will two-box. There's no two ways about it. (As others have pointed out, this has been formalised by Anna Salamon.)

The only way you can get out of this is if you believe in free will as something that exists in some metaphysical sense. Then to you, Omega being this accurate is beyond the realm of possibility and therefore the question is unfair.

Comment by ronak on Start Under the Streetlight, then Push into the Shadows · 2013-06-28T18:37:19.063Z · score: 1 (1 votes) · LW · GW

What's the hairy green sphere? My search engine gives this page as first result.

Comment by ronak on How to Write Deep Characters · 2013-06-24T17:27:55.602Z · score: 1 (1 votes) · LW · GW

It sounds unlikely to be a cause - with a different reward system different teaching will be deemed right.

Comment by ronak on How would not having free will feel to you? · 2013-06-21T16:02:07.202Z · score: 1 (1 votes) · LW · GW

Epistemic:
-> finding out that I can't, even given an exponentially bigger universe to compute in, be predicted.
It would also potentially destroy my sense of identity. Even if I can be predicted, I can do anything I want: it's just that what I'll want is constrained. However, if the converse is true, any want I feel has nothing to do with me (and my intuitive sense of identity is similar to 'something that generates the wants and thoughts that I have') and I'm not sure I'll feel particularly obliged to satisfy them.
(Warning: writing it out made it sound to me like post-hoc rationalisation. But far as I can make out, I believe it.)

-> finding out that I'm being controlled by direct neurological tinkering or very thorough manipulation.
Manipulation to some degree is very common, and the line between influence and manipulation is not very clean, but there is definitely an amount of manipulation that will make me go, 'no free will.' Basically, if you can make it clear that my wants are being constrained by intentions.

Psychological and physical: I find it hard to come up with anything that won't look normal. If my wants are being constrained, it'll just feel like stronger wants acting contrariwise. However, I can imagine something ineffable keeping me away from certain thoughts and wants... but that happens anyway too - Yudkowsky's written shitloads about that feeling (also, in a very different context, me).
Physically, involuntary motions happen all the time, and often I can't move quite the way I want to. Physical constraints, I dismiss them as.

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-18T17:55:23.828Z · score: 3 (3 votes) · LW · GW

Because the solution to the problem is worthless except to the extent that it establishes your position in an issue it's constructed to illuminate.

Comment by ronak on How to Write Deep Characters · 2013-06-18T03:12:29.548Z · score: 2 (2 votes) · LW · GW

*I don't know what you understand and don't.

*I can tell you that he's talking about rich people's concerns and how they've taken over litfic and how there's a very narrow understanding of character building, but there's lots more intricacy to it and that's why I'd do better at explaining bits than the whole thing.

Comment by ronak on How to Write Deep Characters · 2013-06-17T18:42:43.290Z · score: 0 (0 votes) · LW · GW

No. I wouldn't mind that, but those two are hardly the only things novels can do; and I can't provide an exhaustive list of what literature does and how it does it - if I could do such things I'd have written something worth reading by now.

I'm sorry, but I have no idea how to explain Mieville's statements to you. Lit people are often vague, and often because they aren't able to be clearer. Maybe if you had specific points of confusion I could help.
It might help to know that the litfic audience is a lot more like an academy than a fanbase, and that Mieville is a Marxist so he's using language from there.

Comment by ronak on How to Write Deep Characters · 2013-06-17T16:23:09.043Z · score: 1 (1 votes) · LW · GW

They're hard to pin down, and different people I know have different explanations.

The one in my head is basically that they pay too much attention to theme and perspective; while in many cases litfic is directly about perspectives (themes), lots of people tend to be reductios ad absurdum of this, focusing on these things in rather simplistic ways that sometimes ignore how the world works or the basic potentially interesting things in the setting*. This is made worse because, by the very nature of what's being tackled, the difference between Nabokov and McEwan is less obvious to the unpractised eye than that between Arthur C Clarke and a generic bad SF writer; and by the fact that the average litfic writer has been through a professional course in writing and therefore sounds very polished.

Here's China Mieville's explanation, since you shouldn't be limited by my account found in the Guardian (it's not a coincidence that he chose Saturday too, it's partly that it's too good an example and partly that he put it in my head back when I read this piece):
"Literary fiction of that ilk – insular, socially and psychologically hermetic, neurotically backslapping and self-congratulatory about a certain milieu, disaggregated from any estrangement or rubbing of aesthetics against the grain – is in poor shape." Miéville identifies Ian McEwan's Saturday, set around the 2003 demonstration against the Iraq war, as a "paradigmatic moment in the social crisis of litfic".
"In the early 2000s there was this incredible efflorescence of anger and excitement . . . It seemed to me that Saturday quite bolshily said, 'OK, you accuse us of a neurotic obsession with insularity and a certain milieu. I'm going to take the most extraordinary political event that has happened in Britain for however many years and I am going to doggedly interiorise it and depoliticise it with a certain type of limpid prose . . . It was a combative novel that met that sense of there being a crisis and de-crisised it through its absolute fidelity to a set of generic tropes."

*Another particularly appropriate example: Cormac McCarthy's The Road is a post-apocalyptic novel involving, among other things, cannibals and an earth that can't grow food that, at page 300, suddenly reveals that it's about A Father's Concerns About Setting His Child Free and nothing else*. I'm sympathetic to the theme, but not when it funges on everything else potentially interesting about the story.

Edit: I consider China Mieville more able to answer this question properly than Eliezer because he has read a lot of litfic and incorporates techniques from that side into his writing.
Also, I just realised that this whole thread must have been a bit frustrating to you because of my laziness. Sorry about that.

Comment by ronak on How to Write Deep Characters · 2013-06-16T17:03:57.344Z · score: 0 (0 votes) · LW · GW

It'd take me a while to explain it fully, but basically that the worst trends in litfic writing are manifested in his work.

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-16T16:51:59.469Z · score: 0 (0 votes) · LW · GW

In response to this, I want to roll back to saying that while you may not actually be simulated, having the programming to one-box is what causes there to be a million dollars in there. But, I guess that's the basic intuition behind one-boxing/the nature of prediction anyway so nothing non-trivial is left (except the increased ability to explain it to non-LW people).

Also, the calculation here is wrong.

Comment by ronak on How to Write Deep Characters · 2013-06-16T16:34:05.285Z · score: 1 (1 votes) · LW · GW

Saturday.

To be clear, I liked the book, though I otherwise don't like the guy's writing.

Comment by ronak on How to Write Deep Characters · 2013-06-16T12:00:47.265Z · score: 2 (2 votes) · LW · GW

Genre people and litfic people love flinging shit at each other, and it rarely makes much sense to a person actually familiar with the writing. Far as I can make out, it's because of generalising from a little evidence - a lot of the insults make more sense when you look at the more-likely-to-be-recommended stuff (for example, Ian McEwan wrote a whole book which can be very easily strawmanned into "these poor people are really badly off; but you shouldn't give in to the temptation to therefore dismiss all rich people").

Even positive reviews that cross the divide are horribly condescending.

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-14T19:11:58.331Z · score: 0 (0 votes) · LW · GW

I usually deal with people who don't have strong opinions either way, so I try to convince them. Given total non-compatibilists, what you do makes sense.

Also, it struck me today that this gives a way of one-boxing within CDT. If you naively blackbox prediction, you would get an expected utility table {{1000,0},{1e6+1e3,1e6}} where two-boxing always gives you 1000 dollars more.

But once you realise that you might be the simulated version, the expected utility of one-boxing is 1e6, while that of two-boxing is now 5e5+1e3. So, one-box.
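The arithmetic above can be sketched as a quick calculation (a hedged illustration; the 50/50 chance of being the simulation is the assumption driving the numbers):

```python
# Expected utilities once you grant that you might be the Predictor's
# simulation. Payoffs in dollars; p_sim is the assumed probability that
# the deliberating agent is the simulated copy.

def one_box_eu(p_sim=0.5):
    # If you are the simulation, your choice fills the box (1e6);
    # if you are flesh-and-blood, the simulation chose likewise, so 1e6.
    return p_sim * 1e6 + (1 - p_sim) * 1e6

def two_box_eu(p_sim=0.5):
    # If you are flesh-and-blood, you walk off with both boxes (1e6 + 1e3);
    # if you are the simulation, two-boxing empties the big box and the
    # real you collects only the 1e3.
    return (1 - p_sim) * (1e6 + 1e3) + p_sim * 1e3

print(one_box_eu())  # 1000000.0
print(two_box_eu())  # 501000.0, i.e. 5e5 + 1e3
```

With these assumptions one-boxing comes out ahead, which is the point of the comment.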

A similar analysis applies to the counterfactual mugging.

Further, this argument actually creates immunity to the response 'I'll just find a qubit arbitrarily far back in time and use the measurement result to decide.' I think a self-respecting TDT would also have this immunity, but there's a lot to be said for finding out where theories fail - and Newcomb's problem (if you assume the argument about you-completeness) seems not to be such a place for CDT.

Disclaimer: My formal knowledge of CDT is from Wikipedia and can be summarised as 'choose the A that maximises $D(A) = \Sigma_i P(A \rightarrow O_i) D(O_i)$, where D is the desirability function and P the probability function.'
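That expected-desirability formula, applied to the naive blackboxed Newcomb table from a few comments up, can be sketched as follows (the 0.5 prior on the box being full is an illustrative assumption; the dominance argument goes through for any prior):

```python
# Naive CDT: D(A) = sum_i P(A -> O_i) * D(O_i), with the act treated as
# causally independent of the Predictor's earlier prediction.

def cdt_desirability(payoff_full, payoff_empty, p_full=0.5):
    # Expected desirability over the two possible box states.
    return p_full * payoff_full + (1 - p_full) * payoff_empty

two_box = cdt_desirability(1e6 + 1e3, 1e3)  # take both boxes
one_box = cdt_desirability(1e6, 0)          # take only the opaque box

print(two_box - one_box)  # 1000.0: two-boxing comes out exactly $1000 ahead
```

This reproduces the "two-boxing always gives you 1000 dollars more" line of the naive table; the difference is 1e3 regardless of `p_full`, which is why the correction via "you might be the simulation" is needed to recover one-boxing within CDT.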

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-13T19:28:32.088Z · score: 2 (2 votes) · LW · GW

Care to elaborate? Because otherwise I can say "it totally is!", and we leave it at that.

Basically, signals take time to travel. If it is ~.1 s, then predicting it that much earlier is just the statement that your computer has faster wiring.

However, if it is a minute earlier, we are forced to consider the possibility - even if we don't want to - that something contradicting classical ideas of free will is at work (though we can't throw out travel and processing time either).

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-13T19:21:25.622Z · score: 0 (0 votes) · LW · GW

If you want less than 85 pages: his main argument is in sections 3 and 4, ~20 pages.

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-13T19:18:44.980Z · score: 3 (3 votes) · LW · GW

No, his thesis is that it's possible that even a maximal upload wouldn't be human in the same way. His main argument goes like this:

a) There is no way to find out the universe's initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.

b) So we have to talk about uncertainty about wavefunctions - something he calls Knightian uncertainty (roughly, a probability distribution over probability distributions).

c) It is conceivable that particles in which the Knightian uncertainties linger (i.e., they have never interacted with anything macroscopic enough for decoherence to happen) mess around with us, and it is likely that our brain, and only our brain, is sensitive enough to a single photon for that to change how it would otherwise behave (he proposes Na-ion pathways).

d) We define "non-free" as something that can be predicted by a superintelligence without destroying the system (i.e., you can mess around with everything else if you want, though within reasonable bounds, the interior of which we can probe extensively).

e) Because of Knightian uncertainty it is impossible to predict people, if such an account is true.

My disagreements (well, not quite - more, why I'm still compatibilist after reading this):

a) Predictability is different from determinism - his argument never contradicts determinism (modulo probability distributions, but we never gave a shit about that anyway) unless we consider Knightian uncertainties ontological rather than epistemic (and I should warn you that physics has a history of things making a sudden jump from one to the other). And if it's not deterministic, then according to my interpretation of the word, we wouldn't have free will any more.

b) This freedom is still basically random. It has more to do with your identification of personality than anything Penrose ever said, because these freebits only hit you rarely and only at one place in your brain - but when they do affect it, they affect it randomly among the considered possibilities.

I'd say I rather benefited from reading it, because it is a stellar example of steelmanning a seemingly (and really, I can say now that I'm done) incoherent position (well, or of being the steel man of said position). Here's a bit of his conclusion that seems relevant here:

To any “mystical” readers, who want human beings to be as free as possible from the mechanistic chains of cause and effect, I say: this picture represents the absolute maximum that I can see how to offer you, if I confine myself to speculations that I can imagine making contact with our current scientific understanding of the world. Perhaps it’s less than you want; on the other hand, it does seem like more than the usual compatibilist account offers! To any “rationalist” readers, who cheer when consciousness, free will, or similarly woolly notions get steamrolled by the advance of science, I say: you can feel vindicated, if you like, that despite searching (almost literally) to the ends of the universe, I wasn’t able to offer the “mystics” anything more than I was! And even what I do offer might be ruled out by future discoveries.

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-13T18:49:06.788Z · score: 0 (0 votes) · LW · GW

Yes, I agree with you - but when you tell some people this, the question arises of what is in the big-money box after Omega leaves... and the answer is "if you're considering this, nothing."

A lot of others (non-LW people) I tell this to say it doesn't sound right. The quoted bit just shows you that the seeming closed loop is not actually a closed loop, in a very simple and intuitive way** (oh, and it actually agrees with 'there is no free will'), and it also made me think of the whole thing in a new light (maybe other things that look like closed loops can be shown not to be in similar ways).

** Anna Salamon's cutting argument is very good too, but a) it doesn't make the closed-loop-seeming thing any less closed-loop-seeming, and b) it's hard for most people to understand, and I'm guessing it will look like garbage to people who don't default to compatibilism.

Comment by ronak on [link] Scott Aaronson on free will · 2013-06-13T12:55:42.608Z · score: -1 (1 votes) · LW · GW

I like his causal answer to Newcomb's problem:

In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters. However, this suggests that the problem of predicting whether you will one-box or two-box is “you-complete.” In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you (as discussed previously). But in that case, to whatever extent we want to think about Newcomb’s paradox in terms of a freely-willed decision at all, we need to imagine two entities separated in space and time—the “flesh-and-blood you,” and the simulated version being run by the Predictor—that are nevertheless “tethered together” and share common interests. If we think this way, then we can easily explain why one-boxing can be rational, even without backwards-in-time causation. Namely, as you contemplate whether to open one box or two, who’s to say that you’re not “actually” the simulation? If you are, then of course your decision can affect what the Predictor does in an ordinary, causal way.

Comment by ronak on Rationality Quotes June 2013 · 2013-06-10T17:40:18.319Z · score: 1 (1 votes) · LW · GW

While reading books. Always particular voices for every character. So much so that I can barely sit through adaptations of books I've read. And my opinion of a writer always drops a little bit when I meet him/her, as the voice in my head just makes more sense for that style.

Comment by ronak on Social intelligence, education, & the workplace · 2013-05-04T09:16:02.282Z · score: 0 (0 votes) · LW · GW

I'd wager people who do well on tests are apt to be the same ones who get high marks on Cognos reports - i.e., the same prejudices affect what's deemed valuable in both.

Well, fair enough.