Posts

Gödel Incompleteness: For Dummies 2020-07-12T09:13:47.182Z
Let There be Sound: A Fristonian Meditation on Creativity 2020-07-04T03:33:39.026Z
Why Rationalists Shouldn't be Interested in Topos Theory 2020-05-25T05:35:03.193Z
On decision-prediction fixed points 2019-12-04T20:49:36.464Z

Comments

Comment by jollybard on We need a theory of anthropic measure binding · 2022-01-05T05:12:20.167Z · LW · GW

Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don't know how to formalize it.

It is for example clear that I can relate to a penguin, even though I am not a penguin. Meaning that the penguin and I probably share some similar subsystems, and therefore if I care about the anthropic measure of my subsystems then I should care about penguins, too.

Comment by jollybard on We need a theory of anthropic measure binding · 2022-01-05T05:07:32.676Z · LW · GW

Well, ask the question: should the bigger brain receive a million dollars, or do you not care?

Comment by jollybard on We need a theory of anthropic measure binding · 2021-12-30T08:21:03.856Z · LW · GW

I've always maintained that in order to solve this issue we must first solve the question of, what does it even mean to say that a physical system is implementing a particular algorithm? Does it make sense to say that an algorithm is only approximately implemented? What if the algorithm is something very chaotic such as prime-checking, where approximation is not possible? 

An algorithm should be a box that you can feed any input into, but in the real, causal world there is no such choice: any impression that you "could" input anything into your pocket calculator is due to the counterfactuals your brain can consider, purely because it has some uncertainty about the world. (An omniscient being could not make any choice at all! -- assuming complete omniscience is possible, which I don't think it is, but let us imagine the universe as an omniscient being or something.)

This leads me to believe that "anthropic binding" cannot be some kind of metaphysical primitive, since for it to be well-defined it needs to be considered by an embedded agent! Indeed, I claimed that recognizing algorithms "in the wild" requires the use of counterfactuals, and omniscient beings (such as "the universe") cannot use counterfactuals. Therefore I do not see how there could be a "correct" answer to the problem of anthropic binding.
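
As a toy illustration of the "approximately implemented" question (everything here -- the noisy device, the error rate, the scoring rule -- is invented purely for the sake of the sketch): treat the physical system as a black box, feed it a batch of counterfactual inputs, and score how often it agrees with the abstract algorithm.

```python
import random

def is_prime(n: int) -> bool:
    """The abstract algorithm: exact primality check by trial division."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def noisy_device(n: int, error_rate: float = 0.05) -> bool:
    """Stand-in for a physical system that mostly behaves like is_prime."""
    answer = is_prime(n)
    return (not answer) if random.random() < error_rate else answer

def implementation_score(system, algorithm, inputs) -> float:
    """Fraction of counterfactual inputs on which the system agrees with the algorithm."""
    return sum(system(n) == algorithm(n) for n in inputs) / len(inputs)

if __name__ == "__main__":
    random.seed(0)
    test_inputs = random.sample(range(2, 10_000), 500)
    print(implementation_score(noisy_device, is_prime, test_inputs))  # roughly 1 - error_rate
```

Of course, this already smuggles in the counterfactual inputs, which is exactly the part I'm claiming an embedded agent needs uncertainty to supply.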

Comment by jollybard on Additive Operations on Cartesian Frames · 2020-10-27T09:33:21.179Z · LW · GW

Fantastic work!

How do we express the way that the world might be carved up into different agent-environment frames while still remaining "the same world"? The dual functor certainly works, but how about other ways to carve up the world? Suppose I notice a subagent of the environment; can I switch perspective to it?

Also, I am guessing that an "embedded" Cartesian frame might be one where W = A × E, i.e. where the world is just the agent along with the environment. Or something. Then, since we can iterate the choice function, it could represent time steps. Though we might in fact need sequences of agents and environments. Anyway, I can't wait to see what you came up with.
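
To be concrete about the kind of re-carving I mean, here is a minimal sketch (my own toy encoding, not the notation of the sequence) of a frame as agent options, environment options, and an evaluation map into worlds, with the dual just swapping the two:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Generic, TypeVar

A = TypeVar("A")  # the agent's possible choices
E = TypeVar("E")  # the environment's possible choices
W = TypeVar("W")  # possible worlds

@dataclass(frozen=True)
class Frame(Generic[A, E, W]):
    agent: FrozenSet[A]
    env: FrozenSet[E]
    world_of: Callable[[A, E], W]  # which world results from each (agent choice, environment) pair

    def dual(self) -> "Frame[E, A, W]":
        """Swap the roles of agent and environment; the reachable worlds stay the same."""
        return Frame(self.env, self.agent, lambda e, a: self.world_of(a, e))

# Toy example: the agent picks "stay"/"go", the environment picks "rain"/"sun".
frame = Frame(frozenset({"stay", "go"}), frozenset({"rain", "sun"}), lambda a, e: f"{a}-{e}")
print(frame.dual().world_of("rain", "go"))  # "go-rain" -- same world, carved up the other way
```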

Comment by jollybard on Gödel Incompleteness: For Dummies · 2020-07-12T17:36:12.816Z · LW · GW

There are two theorems. You're correct that the first theorem (that there is an unprovable truth) is generally proved by constructing a sort of liar's paradox, and then the second is proved by repeating the proof of the first internally.

However, I chose to take the reverse route for a more epistemological flavour.
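
For reference, the textbook statements of the two theorems (for a consistent, recursively axiomatizable theory T containing enough arithmetic):

```latex
% First theorem: the Gödel sentence G_T is unprovable in T, yet true:
T \nvdash G_T \qquad (\text{and } \mathbb{N} \models G_T)
% Second theorem, obtained by formalizing the proof of the first inside T:
T \nvdash \mathrm{Con}(T)
```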

Comment by jollybard on Gödel Incompleteness: For Dummies · 2020-07-12T17:09:38.317Z · LW · GW

We can totally prove it to be consistent from the outside, though. Its sanity isn't necessarily suspect, only its own claim of sanity.

If someone tells you something, you don't take it at face value; you first verify that the thought process used to generate it was reliable.

Comment by jollybard on Gödel Incompleteness: For Dummies · 2020-07-12T16:31:14.055Z · LW · GW

You are correct. Maybe I should have made that clearer.

My interpretation of the impossibility is that the formal system is self-aware enough to recognize that no one would believe it anyway (it can make a model of itself, and recognizes that it wouldn't even believe it if it claimed to be consistent).

Comment by jollybard on Let There be Sound: A Fristonian Meditation on Creativity · 2020-07-04T22:19:10.296Z · LW · GW

It's essentially my jumping off point, though I'm more interested in the human-specific parts than he is.

Comment by jollybard on The Parable of Predict-O-Matic · 2020-07-02T04:16:14.111Z · LW · GW

The relevance that I'm seeing is that of self-fulfilling prophecies.

My understanding of FEP/predictive processing is that you're looking at brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that both ways are available to minimize prediction error: you can update your beliefs, or you can change the world to fit your beliefs. That means that there might not be much difference at all between belief, decision and action. If you want to do something, you just, by some act of will, believe really hard that it should happen, and let thermodynamics run its course.

More simply put, changing your mind changes the state of the world by changing your brain, so it really is some kind of action. In the case of the Predict-O-Matic, its predictions literally influence the world, since people are following its prophecies, and yet it still has to make accurate predictions; so in order to have accurate beliefs it actually has to choose one of many possible prediction-outcome fixed points.

Now, FEP says that, for living systems, all choices are like this. The only choice we have is which fixed point to believe in.
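
As a toy sketch of that last point (the response curve and the numbers are entirely made up): the world reacts to the prediction, and only predictions that make themselves come true count as accurate.

```python
def outcome(prediction: float) -> float:
    """Toy self-referential world: how much of the predicted event actually happens,
    given that people hear the prediction and act on it."""
    return prediction ** 2 * (3 - 2 * prediction)  # an S-shaped response with fixed points at 0, 1/2, 1

# Brute-force search for self-fulfilling prophecies: predictions that exactly come true.
grid = [p / 1000 for p in range(1001)]
fixed_points = [p for p in grid if abs(outcome(p) - p) < 1e-9]
print(fixed_points)  # [0.0, 0.5, 1.0] -- three candidate prophecies; accuracy alone doesn't pick one
```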

I find the basic ideas of FEP pretty compelling, especially because there are lots of similar theories in other fields (e.g. good regulators in cybernetics, internal models in control systems, and in my opinion Löb's theorem as a degenerate case). I haven't looked into the formalism yet. I would definitely not be surprised to see errors in the math, given that it's very applied math-flavored and yet very theoretical.

Comment by jollybard on The Parable of Predict-O-Matic · 2020-06-30T04:22:07.582Z · LW · GW

Excellent post, it echoes much of my current thoughts.

I just wanted to point out that this is very reminiscent of Karl Friston's free energy principle.

> The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. [...] After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better.

Comment by jollybard on Why Rationalists Shouldn't be Interested in Topos Theory · 2020-06-20T04:48:08.438Z · LW · GW

This is the logical induction I was thinking of.

Comment by jollybard on Why We Age, Part 2: Non-adaptive theories · 2020-05-29T07:47:13.884Z · LW · GW

> Mammals and birds tend to grow, reach maturity, and stop growing. Conversely, many reptile and fish species keep growing throughout their lives. As you get bigger, you can not only defend yourself better (reducing your extrinsic mortality), but also lay more eggs.

So, clearly, we must have the same for humans. If we became progressively larger, women could carry twins and n-tuplets more easily. Plus, our brains would get larger, too, which could allow for a gradual increase in intelligence during our whole lifetimes.

Ha ha, just kidding: presumably intelligence is proportional to brain size/body size, which would remain constant, or might even decrease...

Comment by jollybard on Why Rationalists Shouldn't be Interested in Topos Theory · 2020-05-25T19:49:01.406Z · LW · GW

> I'm not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether.

My feeling is that the arguments I give above are pretty decent reasons to think that they're not truth values! As I wrote: "The thesis of this post is that probabilities aren't (intuitionistic) truth values."

Comment by jollybard on Why Rationalists Shouldn't be Interested in Topos Theory · 2020-05-25T08:47:19.083Z · LW · GW

Indeed, and ∞-categories can provide semantics for homotopy type theory. But ∞-categories are ultimately based on sets. At some point though maybe we'll use HoTT to "provide semantics" to set theories, who knows.

In general, there's a close syntax-semantics relationship between category theory and type theory. I was expecting to touch on that in my next post, though!

EDIT: Just to be clear, type theory is a good alternate foundation, and type theory is the internal language of categories.

Comment by jollybard on Why Rationalists Shouldn't be Interested in Topos Theory · 2020-05-25T06:37:16.854Z · LW · GW

Yes, I have! Girard is very... opinionated; he is fun to read for that reason. That is, Jean-Yves has some spicy takes:

> Quantum logic is indeed a sort of punishment inflicted on nature, guilty of not yielding to the prejudices of logicians… just like Xerxes had the Hellespont – which had destroyed a boat bridge – whipped.

I enjoyed his book "Proofs and Types" as an introduction to type theory and the Curry-Howard correspondence. I've looked through "The Blind Spot" a bit and it also seemed like a fun read. Of course, you can't avoid his name if you're interested in linear logic (as I currently am), since the guy invented it.

Comment by jollybard on The Towel Census: A Methodology for Identifying Orphaned Objects in Your Home · 2019-12-23T07:07:09.036Z · LW · GW

That all makes more sense now :)

In our case the towel rack was right in front of the toilet, so it didn't have to be an ambient thing haha

Comment by jollybard on The Towel Census: A Methodology for Identifying Orphaned Objects in Your Home · 2019-12-23T05:09:01.102Z · LW · GW

I just want to point out that you should probably change your towel at least every week (preferably every three uses), especially if you leave it in a high humidity environment like a shared bathroom.

I can't even imagine the smell... Actually, yes I can, because I've had the same scenario happen to me at another rationalist sharehouse.

So, um, maybe every two months is a little bit too long.

A few obvious alternatives:

1. Everyone leaves their towels in their room.
2. Guests leave their towels in their rooms. The common towels are put into a hamper every week, and the hamper goes to the laundry when it's full.
3. Have fewer towels. Not the best solution since that doesn't solve the problem of not having any towels while they're being washed, but it could create more incentive to change them more often.

This is definitely the sort of coordination problem that happens when you have a lot of people living together, but I also have a feeling that this should not happen at all, somehow. Like, in general, if this is like a hostel, then guests should behave as guests in a hostel, and the hostel itself should have people responsible for regular cleaning (this could be the permanent housemates). There is definitely a privacy and autonomy tradeoff at hostels.

Comment by jollybard on Is Rationalist Self-Improvement Real? · 2019-12-10T03:09:31.632Z · LW · GW

I've said it elsewhere, but wringing your hands and crying "it's because of my akrasia!" is definitely not rational behavior; if anything, rationalists should be better at dealing with akrasia. What good is a plan if you can't execute it? It is like a program without a compiler.

Your brain is part of the world. Failing to navigate around akrasia is epistemic failure.

Comment by jollybard on On decision-prediction fixed points · 2019-12-05T21:23:41.443Z · LW · GW

Maybe I ought to give a slightly more practical description.

Your akrasia is part of the world and failing to navigate around it is epistemic failure.

Comment by jollybard on On decision-prediction fixed points · 2019-12-05T21:20:39.042Z · LW · GW

I see what you mean, but

> if I know exactly what a tic tac toe or chess program would do,

if you were this logically omniscient, then supposing that the program did something else would imply that your system is inconsistent, which means everything is provable.

There needs to be boundedness somewhere, either in the number of deductions you can make, or in the certainty of your logical beliefs. This is what I mean by uncertainty being necessary for logical counterfactuals.
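
(For reference, the "everything is provable" step is just the principle of explosion:)

```latex
% From P, infer P \lor Q (disjunction introduction);
% from P \lor Q and \lnot P, infer Q (disjunctive syllogism).
P,\ \lnot P \;\vdash\; Q
```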

Comment by jollybard on On decision-prediction fixed points · 2019-12-05T14:26:41.215Z · LW · GW

Right, so that's not a decision-prediction fixed point; a correct LDT algorithm would, by its very definition, choose the optimal decision, so predicting its behavior would lead to the optimal decision.

Comment by jollybard on On decision-prediction fixed points · 2019-12-05T14:23:26.851Z · LW · GW

I don't think that's right. If you know exactly what you are going to do, that leaves no room for counterfactuals, not if you're an LDT agent. Physically, there is no such thing as a counterfactual, especially not a logical one; so if your beliefs match the physical world perfectly, then the world looks deterministic, including your own behavior. I don't think counterfactual reasoning makes sense without uncertainty.

Comment by jollybard on On decision-prediction fixed points · 2019-12-04T23:55:20.912Z · LW · GW

Perhaps, but that's not quite how I see it. I'm saying akrasia is a failure to predict yourself, that is, when there's a disconnect between your predictions and your actions.

Comment by jollybard on cousin_it's Shortform · 2019-12-04T20:17:24.493Z · LW · GW

Could convolution work?

EDIT: confused why I am downvoted. Don't we want to encourage giving obvious (and obviously wrong) solutions to short form posts?

Comment by jollybard on The Jordan Peterson Mask · 2019-12-03T06:16:15.609Z · LW · GW

Metaphysical truth here describes the self-fulfilling truths described by Abram Demski, whose existence is guaranteed by e.g. Löb's theorem. In other words, metaphysical truths are truths, and rationalists should be aware of them.
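
For reference, Löb's theorem, with □P standing for "P is provable in PA":

```latex
% If PA proves "provability of P implies P", then PA already proves P:
\mathrm{PA} \vdash \Box P \rightarrow P \;\Longrightarrow\; \mathrm{PA} \vdash P
% Internalized form:
\mathrm{PA} \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P
```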

Comment by jollybard on World State is the Wrong Abstraction for Impact · 2019-12-03T03:22:34.827Z · LW · GW

AIXI is relevant because it shows that world state is not the dominant view in AI research.

But world state is still well-defined even with ontological changes because there is no ontological change without a translation.

Perhaps I would say that "impact" isn't very important, then, except if you define it as a utility delta.

Comment by jollybard on World State is the Wrong Abstraction for Impact · 2019-11-30T22:46:29.295Z · LW · GW

This is a misreading of traditional utility theory and of ontology.

When you change your ontology, concepts like "cat" or "vase" don't become meaningless, they just get translated.

Also, you know that AIXI's reward function is defined on its percepts and not on world states, right? It seems a bit tautological to say that its utility is local, then.

Comment by jollybard on Do you like bullet points? · 2019-04-10T01:43:56.040Z · LW · GW

I like reading. I like reading prose, as if I were listening to someone talking.

I also read very fast and I'm very good at skimming prose.

That being said, I strongly dislike bullet points, in large part because they're not fun to read... But I also find them harder to skim. Indeed, they are usually much denser in terms of information, with much less redundancy, such that every word counts; in other words, no skimming allowed.

I don't understand why skimming natural text should be any more difficult.

> It's easier to skim, and build up a high level understanding of a post's structure

I mean. This is why we have paragraphs and leading sentences. Some of the reasons listed by gjm also apply: there is more logical and contextual information in prose, which makes it easier to skim.

In fact, I would argue that if we're going to encourage bullet points for easier reading, we could just as well encourage learning to write well...

Comment by jollybard on What is abstraction? · 2018-12-16T06:44:37.637Z · LW · GW

Just a quick, pedantic note.

> But there seems to be something very different about each of the two situations. In the first, we would say that the "brush your teeth" abstraction is composed of the subtasks, but we wouldn't say that "animal" is composed of humans, dogs and cats in the second.

Actually, from an extensional point of view, that is exactly how you would define "animal": as the set of all things that are animals. So it is in fact composed of humans, dogs and cats -- but only partly, as there are lots of other things that are animals.

This is just a pedantic point since it doesn't cut to the heart of the problem. As johnswentworth noted, man-made categories are fuzzy; they are not associated with true or false but with probabilities, so "animal" is more like a test, i.e. a function or association between some set of possible things and [0,1]. So "animals" isn't a set; the sets would be "things that are animals with probability at least p" for every p.
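
A minimal sketch of that "category as a test" picture, with made-up membership degrees:

```python
from typing import Callable, Iterable, Set

FuzzyCategory = Callable[[str], float]  # maps a thing to a degree of membership in [0, 1]

# "animal" as a test rather than a set:
animal: FuzzyCategory = lambda x: {"dog": 0.99, "sponge": 0.7, "virus": 0.2, "rock": 0.0}.get(x, 0.5)

def at_least(category: FuzzyCategory, p: float, things: Iterable[str]) -> Set[str]:
    """The crisp set 'things that are in the category with probability at least p'."""
    return {x for x in things if category(x) >= p}

things = ["dog", "sponge", "virus", "rock"]
print(at_least(animal, 0.5, things))  # {'dog', 'sponge'}
print(at_least(animal, 0.1, things))  # {'dog', 'sponge', 'virus'}
```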

Comment by jollybard on Making intentions concrete - Trigger-Action Planning · 2017-10-18T01:44:50.400Z · LW · GW

This post has been very helpful for me, as I kept hearing about TAPs in rationalist circles without ever knowing what it meant. Even knowing what the acronym was didn't help at all (is it usually sufficient for people?).

This post, however, for all its faults (it gets to the examples too quickly, without first convincing me that I should care), serves as a good reference, if only for the fact that I never knew the concept already existed in mainstream science and was called "implementation intentions". I remember once searching for something of the sort and only finding biology stuff about animal instincts.

I'm aphantasic. Visualizing things is completely alien to me. So, of course, implementation intentions do not seem obviously good to me. But I see now that they are useful and that I regularly use something similar, and I now believe that most people should use them.

Comment by jollybard on Planning 101: Debiasing and Research · 2017-02-16T05:18:22.815Z · LW · GW

One thing I've never really seen mentioned in discussion of the planning fallacy is that there is something of a self-defeating prophecy at play.

Let's say I have a report to write, and I need to fit it in my schedule. Now, according to my plans, things should go fine if I take an hour to write it. Great! So, knowing this, I work hard at first, then become bored and dick around for a while, then realise that my self-imposed deadline is approaching, and -- whoosh, I miss it by 30 minutes.

Now, say I go back in time and redo the report, but now I assume it'll take me an hour and a half. Again, I work properly at first, then dick around, and -- whoa, only half an hour left! Quick, let's finish thi--- whoosh.

The point I'm trying to make here is that sometimes the actual length of a task depends directly on your estimate of that task's length, in which case avoiding the planning fallacy simply by giving yourself a larger margin won't work.
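
A deliberately silly toy model of that dynamic (all the numbers are invented): if you relax until the time left merely looks sufficient, and your sense of "sufficient" is off by a fixed amount, then padding the estimate just moves the moment you miss.

```python
def finish_time(estimate_h: float, real_work_needed_h: float = 1.0,
                perceived_work_needed_h: float = 0.5) -> float:
    """You dick around until the time left equals what you *think* you need,
    then work flat out; but the task really takes real_work_needed_h."""
    panic_starts_at = estimate_h - perceived_work_needed_h
    return panic_starts_at + real_work_needed_h

for estimate in (1.0, 1.5, 3.0):
    print(estimate, finish_time(estimate))  # always estimate + 0.5 -- the extra margin doesn't help
```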

But I suppose the standard argument against this is that to properly counteract this kind of planning fallacy, one mustn't just take out a longer span of time, but find what it is that makes one miss the deadline and correct it.

Comment by jollybard on Double Crux — A Strategy for Mutual Understanding · 2016-12-01T03:23:50.715Z · LW · GW

Personally, I am still eagerly waiting for CFAR to release more of their methods and techniques. A lot of them seem to be already part of the rationalist diaspora's vocabulary -- however, I've been unable to find descriptions of them.

For example, you mention "TAP"s and the "Inner Simulator" at the beginning of this article, yet I haven't had any success googling those terms, and you offer no explanation of them. I would be very interested in what they are!

I suppose the crux of my criticism isn't that there are techniques you haven't released yet, nor that rationalists are talking about them, but that you mention them as though they were common knowledge. This, sadly, gives the impression that LWers are expected to know about them, and reinforces the idea that LW has become a kind of elitist clique. I'm worried that you are using this in order to make aspiring rationalists, who very much want to belong, come to CFAR events just to be in the know.

Comment by jollybard on Open Thread May 23 - May 29, 2016 · 2016-05-25T03:07:52.173Z · LW · GW

This looks great and I can see that it should work, but I can't seem to find a formal proof. Can you explain a bit?

Comment by jollybard on JFK was not assassinated: prior probability zero events · 2016-04-28T14:09:48.628Z · LW · GW

That wasn't really my point, but I see what you mean. The point was that it is possible to have a situation where the 0 prior does have specific consequences, not that it's likely. But you're right that my example was a bit off, since obviously the person getting interrogated should just lie about it.

Comment by jollybard on JFK was not assassinated: prior probability zero events · 2016-04-28T01:45:39.337Z · LW · GW

I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and that if they get it wrong, they are killed/tortured/dust-specked into oblivion/whatever. (Let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK.)

However, let me steelman this a bit by somewhat moving the goalposts: if we allow a single random belief to have P=0, then it seems very unlikely that it will have a serious effect. I guess that the above scenario would require that we know the person has P=0 about something (or that Omega exists), which, if we agree that such a belief will not have much empirical effect, is almost impossible to know. So that's also unlikely.
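
For what it's worth, the arithmetic of why a P=0 belief can never be updated away is just Bayes' rule on a binary hypothesis (the numbers are arbitrary):

```python
def posterior(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Bayes' rule for a single binary hypothesis."""
    numerator = prior * lik_if_true
    denominator = numerator + (1 - prior) * lik_if_false
    return numerator / denominator if denominator else 0.0

print(posterior(0.0, 1.0, 1e-6))   # 0.0 -- a million-to-one likelihood ratio can't budge P=0
print(posterior(1e-6, 1.0, 1e-6))  # ~0.5 -- the same evidence moves a tiny-but-nonzero prior a lot
```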

Comment by jollybard on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? · 2016-03-27T02:24:56.510Z · LW · GW

Oh, yes, good old potential UFAI #261: let the AI learn proper human values from the internet.

The point here being, it seems obvious to me that the vast majority of possible intelligent agents are unfriendly, and that it doesn't really matter what we might learn from specific error cases. In other words, we need to deliberately look into what makes an AI friendly, not what makes it unfriendly.

Comment by jollybard on The map of quantum (big world) immortality · 2016-02-15T20:06:32.010Z · LW · GW

My point was that QM is probabilistic only at the smallest level, for example in the Schrödinger's cat thought experiment. I don't think surviving a plane crash is ontologically probabilistic, unless of course the crash depends on some sort of radioactive decay or something! You can't make it so that you survive the plane crash without completely changing the prior causal networks... up until the beginning of your universe. Maybe there could be a way to very slightly change one of the universal constants so that nothing changes except that you survive, but I seriously doubt it.

Comment by jollybard on The map of quantum (big world) immortality · 2016-02-14T16:29:23.453Z · LW · GW

There might also be situations where surviving is not just ridiculously unlikely, but simply mathematically impossible. That is, I assume that not everything is possible through quantum effects? I'm not a physicist. I mean, what quantum effects would it take to have your body live forever? Are they really possible?

And I have serious doubts that surviving a plane crash or not could be due to quantum effects, but I suppose it could simply be incredibly unlikely. I fear that people might be confusing "possible worlds" in the subjective Bayesian sense and in the quantum many-worlds sense.