Comments

Comment by jollybard on Do you like bullet points? · 2019-04-10T01:43:56.040Z · score: 5 (3 votes) · LW · GW

I like reading. I like reading prose, as if I were listening to someone talking.

I also read very fast and I'm very good at skimming prose.

That being said, I strongly dislike bullet points, in large part because they're not fun to read... but I also find them harder to skim. They are usually much denser in information, with much less redundancy, so that every word counts; in other words, no skimming allowed.

I don't understand why skimming natural text should be any more difficult.

>It's easier to skim, and build up a high level understanding of a post's structure

I mean, this is why we have paragraphs and leading sentences. There are also the reasons listed by gjm: prose carries more logical and contextual information, which makes it easier to skim.

In fact, I would argue that if we're going to encourage bullet points for easier reading, we could just as well encourage learning to write well...

Comment by jollybard on What is abstraction? · 2018-12-16T06:44:37.637Z · score: 1 (1 votes) · LW · GW

Just a quick, pedantic note.

>But there seems to be something very different about each of the two situations. In the first, we would say that the "brush your teeth" abstraction is composed of the subtasks, but we wouldn't say that "animal" is composed of humans, dogs and cats in the second.

Actually, from an extensional point of view, that is exactly how you would define "animal": as the set of all things that are animals. So it is in fact composed of humans, dogs and cats -- but only partly, as there are lots of other things that are animals.

This is just a pedantic point, since it doesn't cut to the heart of the problem. As johnswentworth noted, man-made categories are fuzzy: they are not associated with true or false but with probabilities, so "animal" is more like a test, i.e. a function from some set of possible things to [0,1]. So "animals" isn't a set; the sets would be "things that are animals with probability p" for every p in [0,1].
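
To make this concrete, here's a minimal sketch (my own illustration in Python, with made-up membership values) of the crisp extensional set versus the graded test, including the level sets just described:

```python
# Crisp, extensional view: "animal" is literally the set of all animals.
animals = {"human", "dog", "cat", "sparrow"}

def is_animal_crisp(thing):
    return thing in animals  # True or False, nothing in between

# Fuzzy view: "animal" is a test assigning each thing a value in [0, 1].
# (Membership values here are invented for illustration.)
animal_membership = {
    "dog": 1.0,
    "sponge": 0.7,   # edge case: biologically an animal, intuitively less so
    "virus": 0.1,
    "rock": 0.0,
}

def is_animal_fuzzy(thing):
    return animal_membership.get(thing, 0.0)

# The "sets" are then the level sets of this function: for each p in [0, 1],
# the set of things that pass the animal test with membership at least p.
def level_set(p):
    return {t for t, q in animal_membership.items() if q >= p}

print(level_set(0.5))  # {'dog', 'sponge'}
```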

Comment by jollybard on Making intentions concrete - Trigger-Action Planning · 2017-10-18T01:44:50.400Z · score: 0 (0 votes) · LW · GW

This post has been very helpful for me, as I kept hearing about TAPs in rationalist circles without ever knowing what they were. Even knowing what the acronym stood for didn't help at all (is it usually sufficient for people?).

This post, however, for all its faults (it gets to examples too quickly, without first convincing me that I should care), serves as a good reference, if only for the fact that I never knew the concept already existed in mainstream science and was called "implementation intentions". I remember once searching for something of the sort and finding only biology material about animal instincts.

I'm aphantasic. Visualizing things is completely alien to me. So, of course, implementation intentions do not seem obviously good to me. But I see now that they are useful and that I regularly use something similar, and I now believe that most people should use them.

Comment by jollybard on Planning 101: Debiasing and Research · 2017-02-16T05:18:22.815Z · score: 1 (1 votes) · LW · GW

One thing I've never really seen mentioned in discussion of the planning fallacy is that there is something of a self-defeating prophecy at play.

Let's say I have a report to write, and I need to fit it in my schedule. Now, according to my plans, things should go fine if I take an hour to write it. Great! So, knowing this, I work hard at first, then become bored and dick around for a while, then realise that my self-imposed deadline is approaching, and -- whoosh, I miss it by 30 minutes.

Now, say I go back in time and redo the report, but now I assume it'll take me an hour and a half. Again, I work properly at first, then dick around, and -- whoa, only half an hour left! Quick, let's finish thi--- whoosh.

The point I'm trying to make here is that sometimes the actual length of a task depends directly on your estimate of that task's length, in which case avoiding the planning fallacy simply by giving yourself a larger margin won't work.
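
To illustrate (a toy model of my own, not anything from the post): assume the dicking around absorbs a fixed fraction of whatever estimate you set, on top of the real work. Then padding the estimate only helps when that fraction is below 1:

```python
def actual_time(estimate, real_work=30.0, slack_fraction=0.8):
    """Toy model: total time = real work + slack absorbed in proportion
    to the estimate (Parkinson's-law style). Units are minutes."""
    return real_work + slack_fraction * estimate

def smallest_sufficient_estimate(real_work, slack_fraction):
    """Solve actual_time(e) <= e for e. Only solvable when slack_fraction < 1."""
    if slack_fraction >= 1:
        return None  # work expands at least as fast as the estimate: padding never helps
    return real_work / (1 - slack_fraction)

for e in (60, 90, 150, 200):
    print(e, actual_time(e))  # 60 -> 78.0, 90 -> 102.0, 150 -> 150.0, 200 -> 190.0

print(smallest_sufficient_estimate(30.0, 0.8))  # 150.0
```

When slack_fraction is 1 or more, every extra minute of margin gets eaten and then some, so no finite estimate ever suffices -- which is the whoosh above. When it's below 1, a large enough margin does eventually work, which is roughly where the standard argument below comes in.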

But I suppose the standard argument against this is that to properly counteract this kind of planning fallacy, one mustn't just take out a longer span of time, but find what it is that makes one miss the deadline and correct it.

Comment by jollybard on Double Crux — A Strategy for Resolving Disagreement · 2016-12-01T03:23:50.715Z · score: 6 (6 votes) · LW · GW

Personally, I am still eagerly waiting for CFAR to release more of their methods and techniques. A lot of them seem to be already part of the rationalist diaspora's vocabulary -- however, I've been unable to find descriptions of them.

For example, you mention "TAPs" and the "Inner Simulator" at the beginning of this article, yet I haven't had any success googling those terms, and you offer no explanation of them. I would be very interested in what they are!

I suppose the crux of my criticism isn't that there are techniques you haven't released yet, nor that rationalists are talking about them, but that you mention them as though they were common knowledge. This, sadly, gives the impression that LWers are expected to know about them, and reinforces the idea that LW has become a kind of elitist clique. I'm worried that you are using this to make aspiring rationalists, who very much want to belong, come to CFAR events just to be in the know.

Comment by jollybard on Open Thread May 23 - May 29, 2016 · 2016-05-25T03:07:52.173Z · score: 1 (3 votes) · LW · GW

This looks great and I can see that it should work, but I can't seem to find a formal proof. Can you explain a bit?

Comment by jollybard on JFK was not assassinated: prior probability zero events · 2016-04-28T14:09:48.628Z · score: 0 (0 votes) · LW · GW

That wasn't really my point, but I see what you mean. The point was that it is possible to have a situation where the zero prior does have specific consequences, not that it's likely. But you're right that my example was a bit off, since obviously the person being interrogated should just lie about it.

Comment by jollybard on JFK was not assassinated: prior probability zero events · 2016-04-28T01:45:39.337Z · score: 0 (0 votes) · LW · GW

I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh-omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and that if they get it wrong, they are killed/tortured/dust-specked into oblivion/whatever. (Let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK.)

However, let me steelman this a bit by somewhat moving the goalposts: if we allow a single random belief to have P=0, then it seems very unlikely that it will have a serious effect. The above scenario requires that we know the person has P=0 about something (or that Omega exists), and if we agree that such a belief has little empirical effect, that is almost impossible to know. So that's also unlikely.
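
For concreteness, here is the arithmetic behind why a P=0 belief is special (a minimal sketch of the standard Bayesian point, nothing specific to this thread): the posterior is proportional to the prior, so a prior of exactly 0 never moves, no matter how strong the evidence.

```python
def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = likelihood_given_h * prior + likelihood_given_not_h * (1 - prior)
    return likelihood_given_h * prior / p_e

# Overwhelming evidence for H (a million-to-one likelihood ratio):
print(posterior(0.01, 0.999999, 0.000001))  # ~0.9999 -- even a small prior updates fine
print(posterior(0.0,  0.999999, 0.000001))  # 0.0 -- a zero prior never moves
```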

Comment by jollybard on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? · 2016-03-27T02:24:56.510Z · score: 1 (3 votes) · LW · GW

Oh, yes, good old potential UFAI #261: let the AI learn proper human values from the internet.

The point here being: it seems obvious to me that the vast majority of possible intelligent agents are unfriendly, and that it doesn't really matter what we might learn from specific error cases. In other words, we need to deliberately look into what makes an AI friendly, not what makes it unfriendly.

Comment by jollybard on The map of quantum (big world) immortality · 2016-02-15T20:06:32.010Z · score: 0 (0 votes) · LW · GW

My point was that QM is probabilistic only at the smallest level, as in the Schrödinger's cat thought experiment. I don't think surviving a plane crash is ontologically probabilistic, unless of course the crash depends on some sort of radioactive decay or something! You can't make it so that you survive the plane crash without completely changing the prior causal networks... up until the beginning of the universe. Maybe there could be a way to very slightly change one of the universal constants so that nothing changes except that you survive, but I seriously doubt it.

Comment by jollybard on The map of quantum (big world) immortality · 2016-02-14T16:29:23.453Z · score: 0 (0 votes) · LW · GW

There might also be situations where surviving is not just ridiculously unlikely, but simply mathematically impossible. That is, I assume that not everything is possible through quantum effects? I'm not a physicist. I mean, what quantum effects would it take to have your body live forever? Are they really possible?

And I have serious doubts that surviving a plane crash or not could be due to quantum effects, but I suppose it could simply be incredibly unlikely. I fear that people might be confusing "possible worlds" in the subjective Bayesian sense with "possible worlds" in the quantum many-worlds sense.