Posts

Monk Treehouse: some problems defining simulation 2018-07-11T07:35:12.058Z

Comments

Comment by dranorter on Self-Referential Probabilistic Logic Admits the Payor's Lemma · 2023-12-10T22:14:49.339Z · LW · GW

I'm interested in what happens if individual agents A, B, C merely have a probability of cooperating given that their threshold is satisfied. So, consider the following assumptions.

The last assumption being simply that  is low enough. Given these assumptions, we have  via the same proof as in the post.

So for example if  are all greater than two thirds, there can be some nonzero  such that the agents will cooperate with probability . In a sense this is not a great outcome, since viable  might be quite small; but it's surprising to get any cooperation in this circumstance.

Comment by dranorter on Some Hacky ELK Ideas · 2022-02-15T20:25:13.988Z · LW · GW

It doesn't seem quite right to say that the sensor readings are identical when the thief has full knowledge of the diamond. The sensor readings after tampering can be identical. But some sensor readings have caused the predictor to believe that the sensors would be tampered with by the thief. The problem is just that the predictor knows what signs to look for, and humans do not.

Comment by dranorter on Radical Probabilism · 2020-08-19T20:26:15.837Z · LW · GW

It's worth noting that in the case of logical induction, there's a more fleshed-out story: the LI eventually has self-trust, and it can also come to believe probabilities produced by other LI processes, or to trust the outputs of other processes more generally. For LI, a "virtuous process" is basically one that satisfies the LI criterion, though of course it wouldn't switch to the new set of beliefs unless they were known products of a longer amount of thought, or had proven themselves superior in some other way.

Comment by dranorter on No Safe Defense, Not Even Science · 2020-03-10T20:24:10.111Z · LW · GW

It's easy to list flaws. For example, the first paragraph admits a major flaw; and technically, if trust itself is a big part of what you value, then it could be crucially important to learn to "trust and think at the same time".

Is either of those the flaw he found?

What we have to go on are "fairly inexcusable" and "affects one of the conclusions". I'm not sure how to filter the claims into a set of more than one conclusion, since they circle around an idea which is supposed to be hard to put into words. Here's an attempt.

  • Tentative observation: the impressive (actively growing) rationalists have early experiences which fall into a cluster.
  • The core of the cluster may be a breaking of "core emotional trust".
  • We can spell out a vivid model where "core emotional trust" is blocking some people from advancing, and "core emotional trust" prevents a skill/activity called "lonely dissent", and "lonely dissent" is crucial.
  • We can have (harmful, limiting) "core emotional trust" in science (and this example enriches our picture of it, and our picture of how much pretty obvious good "lonely dissent" can do).
  • There is no (known) human or mathematical system which is good (excusable, okay, safe) to put "core emotional trust" in.
  • "Core Emotional Trust" can only really be eliminated when we make our best synthesis of available external advice, then faithfully follow that synthesis, and then finally fail; face the failure squarely and recognize its source; and then continue trying by making our own methods.

More proposed flaws I thought of while spelling out the above:

  • An Orthodox Jewish background "seems to have had the same effect", but then the later narrative attributes the effect to a break with Science. Similarly, the beginning of the post talks about childhood experiences, but the rest talks about Science and Bayescraft. In some ways this seems like a justifiable extrapolation, trying to use an observation to take the craft further. However, it is an extrapolation.
  • The post uses details to make possibilities seem more real. "Core emotional trust" is a complex model which is probably wrong somewhere. But, that doesn't mean it's entirely useless, and I don't feel that's the flaw.
  • The argument that Bayesianism can't receive our core trust is slightly complex. Its points are good so far as they go, but the jump from there to "So you cannot trust", full stop, is a bit abrupt.
  • It occurs to me that the entire post presupposes something like epistemic monism. Someone who is open to criticism, has a rich pool of critique, a rich pool of critique-generating habits, and constant motivation to examine such critiques and improve, could potentially have deep trust in Science or Bayescraft and still improve. Deep trust of the social web is a bit different - it prevents "lonely dissent".
  • "Core emotional trust" can possibly be eliminated via other methods than the single, vividly described one at the end of the article. Following the initial example, seeing through a cult can be brought on when other members of the cult make huge errors, rather than onesself.

I suppose that's given me plenty to think about, and I won't try to guess the "real" flaw for now. I agree with, and have violated, the addendum: I had a scattered cloud of critical thoughts in order to feel more critical. (Also: I didn't read all the existing comments first.)

Comment by dranorter on Track-Back Meditation · 2018-11-01T15:06:57.724Z · LW · GW

I think there’s some looseness in the Mind Illuminated ontology around this point, but I would say: thinking involves attention on an abstract concept. When attention and/or awareness are on a thought, that’s metacognitive attention and/or awareness. For example, if I’m trying to work on an intellectual task but start thinking about food, my attention has moved from the task to food. Specifically my attention might be on a specific possibility for dinner, or on a set of possibilities. If I have no metacognitive awareness, then I’m lost in the thought; my attention is not on the thought, it’s on the food.

Comment by dranorter on In Logical Time, All Games are Iterated Games · 2018-09-26T07:27:24.347Z · LW · GW

The definition may not be principled, but there's something that feels a little bit right about it in context. There are various ways to "stay in the logical past" which seem similar in spirit to migueltorrescosta's remark, like calculating your opponent's exact behavior but refusing to look at certain aspects of it. The proposal, it seems, is to iterate already-iterated games by passing more limited information of some sort between the possibly-infinite sessions. (Both your and the opponent's memory gets limited.) But if we admit that Miguel's "iterated play without memory" counts as iterated play, then memory could be imperfect in varied ways at every step, giving us a huge mess instead of well-defined games and sessions. At least that mess looks more like logical time.

Not having read the linked paper yet, I take the motivation for using iterated or meta-iterated play to be basically to obtain a set of counterfactuals which will be relevant during real play. Depending on the game, it makes sense that this might be best accomplished by occasionally resetting the opponent's memory.
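As a toy illustration of that last point (nothing here comes from the linked paper; the grudger opponent and every parameter are made up), here is a sketch of exploratory play against an opponent whose memory gets wiped every few rounds, so that we get to observe how it would respond to many different histories:

```python
# A minimal sketch of iterated play with periodic memory resets, in a
# prisoner's-dilemma-like setting. Everything here is invented for
# illustration; it is not taken from the paper or the post.
import random

COOPERATE, DEFECT = "C", "D"

class Grudger:
    """Cooperates until it remembers being defected against."""
    def __init__(self):
        self.memory = []

    def move(self):
        return DEFECT if DEFECT in self.memory else COOPERATE

    def observe(self, opponent_move):
        self.memory.append(opponent_move)

    def reset(self):
        self.memory = []

def explore(rounds=20, reset_every=5, defect_prob=0.3, seed=0):
    """Play randomized exploratory moves, resetting the opponent's memory
    every `reset_every` rounds; the log then contains responses to many
    different (partly counterfactual) histories."""
    random.seed(seed)
    opponent = Grudger()
    log = []
    for t in range(rounds):
        if t % reset_every == 0:
            opponent.reset()  # start a fresh "session" in the logical past
        my_move = DEFECT if random.random() < defect_prob else COOPERATE
        log.append((my_move, opponent.move()))
        opponent.observe(my_move)
    return log

print(explore())
```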

Comment by dranorter on In Logical Time, All Games are Iterated Games · 2018-09-26T06:16:36.495Z · LW · GW

I think it's worth mentioning that part of the original appeal of the term (which made us initially wary) was the way it matches intuitively with the experience of signaling behavior. Here's the original motivating example. Imagine that you are in the Parfit's Hitchhiker scenario and Paul Ekman has already noticed that you're lying. What do you do? You try to get a second chance. But it won't be enough to simply re-state that you'll pay him. Even if he doesn't detect the lie this time around, you're the same person who had to lie only a moment ago. What changed? Well, you want to signal that what's changed is that some logical time has passed. A logically earlier version of you got a ride from a logically earlier Ekman but didn't pay. But then Ekman put effort into remembering the logical past and learning from it. A logically more recent version of you wasn't expecting this, and perished in the desert. Given that both you and Ekman know these things, what you need to do in order to survive is to show that you are in the logical future of those events, and learned from them. Not only that, but you also want to show that you won't change your mind during the ride back to civilization. There will be time to think during the car ride, and thinking can be a way of getting into the logical future. You want to demonstrate that you're fully in the logical future of the (chronologically yet-to-be-made) decision to pay.

This might be an easy problem if the hitchhiker and Ekman shared the same concept of logical time (and knew it). Then it would be similar to proving you remembered the literal past; you could describe a trivial detail or an agreed-upon signal. However, agents are not necessarily incentivized (or able) to use a shared imaginary iterated version of whatever game they're playing. To me it seems like one of the real questions the logical time terminology brings up is when, and to what extent, agents will be incentivized to use compatible notions of logical time.

Comment by dranorter on Probability is Real, and Value is Complex · 2018-07-20T07:16:19.573Z · LW · GW

What does it look like to rotate and then renormalize?

There seem to be two answers. The first answer is that the highest probability event is the one farthest to the right. This event must be the entire space Ω. All we do to renormalize is scale until this event is probability 1.

If we rotate until some probabilities are negative, and then renormalize in this way, the negative probabilities stay negative, but rescale.

The second way to renormalize is to choose a separating line, and use its normal vector as probability. This keeps probability positive. Then we find the highest probability event as before, and call this probability 1.

Trying to picture this, an obvious question is: can the highest probability event change when we rotate?
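A quick numerical check of the first renormalization scheme suggests that yes, it can. (Representing each event as a (probability, probability-weighted utility) vector follows the post, but the specific events and numbers below are made up for illustration.)

```python
# Sketch: rotate 2D event vectors (probability, probability-weighted utility),
# then renormalize by scaling so the rightmost event has x-coordinate 1.
# The events and numbers are invented purely for illustration.
import numpy as np

def rotate(events, theta):
    """Rotate every event vector by theta, mixing the probability and value axes."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return {name: R @ v for name, v in events.items()}

def renormalize(events):
    """First scheme: scale so the rightmost event (largest x) has x = 1.
    Negative x-coordinates stay negative; they are only rescaled."""
    rightmost = max(events.values(), key=lambda v: v[0])
    return {name: v / rightmost[0] for name, v in events.items()}

# Two atoms with opposite utilities, plus the whole space Omega = A + B.
events = {
    "A":     np.array([0.5,  0.8]),
    "B":     np.array([0.5, -0.8]),
    "Omega": np.array([1.0,  0.0]),
}

for theta in (0.0, 0.4, 0.7):
    rotated = renormalize(rotate(events, theta))
    top = max(rotated, key=lambda name: rotated[name][0])
    print(f"theta={theta}: rightmost event is {top}")

# With these made-up numbers, Omega is rightmost at theta = 0.0 and 0.4, but B
# overtakes it at theta = 0.7, so under the first scheme a rotation can indeed
# change which event gets called "probability 1".
```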

Comment by dranorter on Too Much Effort | Too Little Evidence · 2017-01-26T00:17:48.114Z · LW · GW

> Your assessment makes the assumption that the knowledge that we are missing is "not that important".

Better to call it a rational estimate than an assumption.

It is perfectly rational to say to oneself, "but if I refuse to look into anything which takes a lot of effort to get any evidence for, then I will probably miss out." We can put math to that sentiment and use it to help decide how much time to spend investigating unlikely claims. Solutions along these lines are sometimes called "taking the outside view".
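For instance, a crude version of that math might look like the following (every number is an assumption invented purely for illustration):

```python
# Back-of-the-envelope check: is it worth seriously testing hard-to-verify
# claims (like lucid dreaming) at all? All numbers below are made up.

hours_per_test = 30      # assumed effort to seriously test one such claim
fraction_true = 0.1      # assumed fraction of such claims that actually pan out
value_if_true = 500      # assumed payoff of a true claim, in hours saved or enjoyed

expected_value_per_test = fraction_true * value_if_true
print(f"expected value per test: {expected_value_per_test} hours")
print(f"cost per test:           {hours_per_test} hours")

if expected_value_per_test > hours_per_test:
    print("Under these assumptions, refusing to test any such claim leaves value on the table.")
else:
    print("Under these assumptions, testing such claims is not worth the effort.")
```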

> To my eyes your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs.

For the sake of engaging with your points 1 through 5, ProofOfLogic, Kindly, et al. are supposing the existence of a class of claims for which there exists roughly the same amount of evidence pro and con as exists for lucid dreaming. This includes how much we trust the person making the claim, how well the claim itself fits with our existing beliefs, how simple the claim is (i.e., Occam's Razor), how many other people make similar claims, and any other information we might get our hands on. So the assumption for the sake of argument is that these claims look just about equally plausible once everything we know or even suspect is taken into account.

It seems very reasonable to conclude that the best one can do in such a case is choose randomly, if one does in fact want to test out some claim within the class.

But suggestions as to what else might be counted as evidence are certainly welcome.