Comments

Comment by Liliet B (liliet-b) on To listen well, get curious · 2020-12-17T14:47:37.379Z · LW · GW

Mirrors are useful even though you don't expect to see another person in them.

Sometimes you need a person to be a mirror to your thoughts.

Comment by Liliet B (liliet-b) on To listen well, get curious · 2020-12-17T14:46:14.703Z · LW · GW

"why they haven't been able to solve it yet?"

That's the magic part.

Bad / insufficiently curiosed-through advice is often infuriating because the person giving it seems to be assuming you're an idiot / have come to them as soon as you noticed the problem. Which is very rarely true! Generally, between spotting the problem and talking to another person about it, there's a pretty fucking long solution-seeking stage. Where "pretty fucking long" can be anything between ten minutes ("I lost my pencil and can't find it )=") (where common-sense suggestions actually MIGHT be helpful - you might not have gone through the whole checklist yet) and THE PERSON'S ENTIRE LIFETIME (anything relating to a disability, for example).

An advice-giver who doesn't understand why you still have the problem is going to have a lot more advice to give, and they're also often going to sound SO patronizing, idiocy-assuming and invalidating.

As opposed to the person who gets to the point of "ok, yeah, that does sound like a problem" first, before moving on to "hmm, but what to do though" along with you.

(You might well be ahead of them anyway, but at least they've listened first!)

Comment by Liliet B (liliet-b) on How to Beat Procrastination · 2020-09-07T16:00:17.686Z · LW · GW

As an ADHD person for whom "reduce impulsiveness" is about as practical a goal as "learn telekinesis", reducing delay is actually super easy. Did you know people feel good about completing tasks and achieving goals? All you have to do to have a REALLY short delay between starting the task and an expected reward is to explicitly, in your own mind, define a sufficiently small sub-task as A Goal. Then the next one; you don't even need breaks in between if it goes well - even if what you're doing is as inherently meaningless as, I dunno, filling in an Excel table from a printed one, you can still mentally reward yourself for each page or whatever.

The first salesman guy could set himself a task of "make three cold calls" regardless of success, and then feel good about having done them. The third guy could make a checklist at the start where tasks are listed in order, and enjoy an uninterrupted row of checkmarks whenever he's not behind on anything. The student could feel really proud of finishing the front page, then the next part, etc.
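For reference, the motivation equation the article is built around, plus a rough illustration of the sub-goal trick with made-up numbers:

$$\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}$$

If Expectancy, Value and Impulsiveness stay fixed, cutting the Delay from roughly two weeks ("finish the whole report") to roughly twenty minutes ("fill in one page of the table") multiplies the motivation by about a factor of a thousand.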

Comment by Liliet B (liliet-b) on Science Isn't Strict Enough · 2020-01-18T19:59:28.682Z · LW · GW

"Prior probabilities with no experience in a domain at all" is an incoherent notion, since it implies you don't know what the words you're using even refer to. Priors include all prior knowledge, including knowledge about the general class of problems like the one you're trying to eyeball a prior for.

If you're asked to perform experiments on finding out what tapirs eat - and you don't know what tapirs even are, except that they apparently eat something, judging by the formulation of the problem - you're already going to assign a prior of ~0 to 'they eat candy wrappers and rocks and are poisoned by everything and anything else, including non-candy-wrapper plastics and objects made of stone', because you have prior information on what 'eating' refers to and how it tends to work. You're probably going to assign a high prior probability to the guess that tapirs are animals, and on the basis of that assign a high prior probability to them being either herbivores, omnivores or carnivores - or insectivores, unless you include that as carnivores - since that's what you know most animals are.

Priors are all prior information. It would be thoroughly irrational of you to give the tapirs candy wrappers and then when they didn't eat them, assume it was the wrong brand and start trying different ones.

For additional clarification on what priors mean, imagine that if you didn't manage to give the tapirs something they actually are willing to eat within 24 hours, your family is going to be executed.

In that situation, what's the rational thing to do? Are you going to start with metal sheets, car tires and ceramic pots, or are you going to start trying different kinds of animal food?
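A toy sketch of that choice in code (all numbers invented), just to make "priors are prior information" concrete: with an informed prior and a 24-hour deadline, the rational move is to try the highest-prior candidates first.

```python
# Hypothetical priors over "what tapirs eat", informed by background knowledge
# about animals and about what "eating" means in general.
priors = {
    "grass and leaves":   0.45,
    "fruit and roots":    0.30,
    "insects":            0.15,
    "meat":               0.09,
    "candy wrappers":     0.005,
    "rocks and ceramics": 0.005,
}

# With limited time, try candidates in order of prior probability.
search_order = sorted(priors, key=priors.get, reverse=True)
print(search_order)  # plant foods first, candy wrappers and rocks last

# Chance of success if there's only time for the first three trials
# (assuming exactly one hypothesis is right and each trial is conclusive):
print(sum(priors[h] for h in search_order[:3]))  # ~0.90
```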

Comment by Liliet B (liliet-b) on Decoherence is Simple · 2020-01-12T22:35:17.218Z · LW · GW

Ordinary language includes mathematics.

"One, two, three, four" is ordinary language. "The thing turned right" is ordinary language (it's also multiplication by -i).

Feynman was right, he just neglected to specify that the ordinary language needed to explain physics would necessarily include the math subset of it.
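Spelled out, assuming the usual convention that the counterclockwise quarter-turn is multiplication by i:

$$-i\,(x + iy) = y - ix, \qquad \text{i.e. the point } (x, y) \mapsto (y, -x),$$

which is exactly "the thing turned right" by a quarter-turn.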

Comment by Liliet B (liliet-b) on Collapse Postulates · 2020-01-12T22:18:48.679Z · LW · GW

"Many worlds can be seen as a kind of non-local theory, as the nature of the theory assumes a specific time line of "simultaneity" along which the universe can "split" at an instant."

As I understand it, no it doesn't. The universe split is also local: if a difference at point A leaves the particles at point B unchanged, then at point B we still have only one universe (while at point A we have multiple). The configurations merge together. It's more like vibration than splitting into paths that go off in different directions. Macroscopic physics is inherently predictable, meaning that all the multiple worlds ultimately end up doing roughly the same thing!

Except for that one hypothetical universe where I saw a glass of boiling water spontaneously freeze into an ice block.

I'm going to guess that the fact that I'm not in that universe, and as far as we know no one ever has been, has something to do with the Born probabilities.

As far as ethical implications go, the vibration visualization helps me sort it out. The other existing me's are no more ethically distinct from each other than 'me a second ago' is ethically distinct from 'me a second later'. They are literally the same person, me. Any other me would do the same thing this me is doing, because there's no reason for it to be otherwise (if quantum phenomena had random effects on a macroscopic scale, the world would be a lot more random and a lot less predictable on the everyday level), so we're still overlapping. All the uncountable other me's are sitting in the same chair I am (also smeared/vibrating), typing the same words I am, and making typos and quickly backspacing to erase them on the same smeared/vibrating keyboard.

All of the smearing has absolutely no effect a lightyear away from me, because the year it would take for any effect from my vibration over here to get to there hasn't passed yet. It has its own vibration, and I'm not affected by that one either.

"Many worlds" but same universe.

Comment by Liliet B (liliet-b) on The Second Law of Thermodynamics, and Engines of Cognition · 2020-01-06T10:42:22.750Z · LW · GW

When I worldbuild with magic, this is somehow automatically intuitive - so I always end up assuming (if not necessarily specifying explicitly) a 'magic field' or something similar that does the thermodynamic work and that the bits of entropy get shuffled over to. Kind of like how looking something up on the internet is 'magic' from an outside observer's POV if people only have access nodes inside their heads and cannot actually show them to observers, or like how extracting power from the electricity grid into devices is 'magic' under the same conditions.

Only people didn't explicitly invent and build the involved internet and the electricity grid first. So more like how speech is basically telepathy, as Eliezer specified elsewhere~

Comment by Liliet B (liliet-b) on Terminal Values and Instrumental Values · 2019-12-24T19:45:54.479Z · LW · GW

I would propose an approximation of the system where each node has a terminal value of its own (which could be 0 for completely neutral nodes - but actually no, it can't: the reinforcement mechanisms of our brain inevitably give something like +0.0001 because I heard someone say it was cool once, or -0.002 because it reminds me of a sad event in my childhood).

As a simple example, consider eating food when hungry. You get a terminal value on eating food - the immediate satisfaction the brain releases in the form of chemicals as a response to recognition of the event, thanks to evolution - and an instrumental value on eating food, which is that you get to not starve for a while longer.

Now let's say that while you are a sentient optimization process that can reason over long projections of time, you are also a really simple one, and your network actually doesn't have any other terminal values than eating food, it's genuinely the only thing you care about. So when you calculate the instrumental value of eating food, you get only the sum of getting to eat more food in the future.

Let's say your confidence in getting to eat food next time after this one decreases according to a steady rule. For example, p(i+1) = p(i) * 0.5. If your confidence that you are eating food right now is 1, then your confidence that you'll get to eat again is 0.5, your confidence that you'll get to eat the time after that is 0.25, and so on.

So the total instrumental value of eating food right now is limit of Sum(p(i) * T(food)) where i starts from 0 and approaches infinity (no I don't remember enough math to write this in symbols).

So the total total value of eating food is T(food) + Sum (p(i)*T(food)). It's always positive, because T(food) is positive and p(i) is positive and that's that. You'll never choose not to eat food you see in front of you, because there are no possible reasons for that in your value network.
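In symbols (just transcribing the setup above, with p(i) = 0.5^i and i running from 0):

$$\sum_{i=0}^{\infty} p_i \, T(\text{food}) = \sum_{i=0}^{\infty} 0.5^i \, T(\text{food}) = 2\,T(\text{food}),$$

so the total comes out to $T(\text{food}) + 2\,T(\text{food}) = 3\,T(\text{food})$, always positive, as claimed.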

Then let's add the concept of 'gross food', and for simplicity's sake ignore evolution and suggest that it exists as a totally arbitrary concept that is not actually connected to your expectation of survival after eating it. It's just kinda free-floating - you like broccoli but don't like carrots, because your programmer was an asshole and entered those values into the system. Also for simplicity's sake, you're a pretty stupid reasoning process that doesn't actually anticipate seeing gross food in the future. In your calculation of instrumental value there's only T(food), which is positive, and T(this_food), which can be positive or negative depending on the specific food you're looking at, appears ONLY while you're actually looking at it. If it's negative, you're surprised every time (but you don't update your values, because you're a really stupid sentient entity and don't have that function).

So now the value of eating the food you see right now is T(this_food) + Sum(p(i)*T(food)). If T(this_food) is negative enough, you might choose not to eat the food. Of course this assumes we're comparing to zero, i.e. you assume that if you don't eat right now you'll die immediately, and also that that's perfectly neutral and you don't have opinions on it (you only have opinions on eating food). If you don't eat the food you're looking at right now, you'll NEVER EAT AGAIN, but it might be that it's gross enough that it's worth it! More logically, you're comparing T(this_food) + Sum(p(i)*T(food)) to Sum(p(i)*T(food)) * p(not starving immediately). The outcome depends on how gross the food is and how high you evaluate p(not starving immediately) to be.

(If the food's even a little positive, or even just neutral, eating it wins every time: p(not starving immediately) is < 1, and the side without that factor wins automatically.)
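A minimal sketch of this idiot agent in code (hypothetical names and numbers, just to make the comparison above concrete):

```python
# Toy agent whose only terminal value is eating food. All numbers are invented.
T_FOOD = 1.0     # terminal value of eating (any) food
P_DECAY = 0.5    # confidence in meal i+1 is half the confidence in meal i
P_SURVIVE = 0.8  # p(not starving immediately) if this particular food is skipped

def future_food_value(horizon=100):
    """Sum(p(i) * T(food)): instrumental value of the expected stream of future meals."""
    return sum((P_DECAY ** i) * T_FOOD for i in range(horizon))

def should_eat(t_this_food):
    """Eat iff T(this_food) + Sum(p(i)*T(food)) > p(not starving) * Sum(p(i)*T(food))."""
    stream = future_food_value()
    return t_this_food + stream > P_SURVIVE * stream

print(should_eat(0.0))   # True: neutral food always wins
print(should_eat(-0.3))  # True: mildly gross, still worth it
print(should_eat(-0.5))  # False: gross enough to "choose" never eating again
```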

Note that the grossness of the food and the probability of starving already don't combine linearly in how they influence the outcome. And that's just for the idiot AI that knows nothing except tasty food and gross food! And if we allow it to compute T(average_food) based on how much of what food we've given it, it might choose to starve rather than eat the gross things it expects to eat in the future! Look, I've simulated willful suicide in all three simplifications so far! No wonder evolution didn't produce all that many organisms that could compute instrumental values.

Anyway, it gets more horrifically complex when you consider bigger goals. So our brain doesn't compute the whole Sum( Sum(p(i)*T(outcome(j)))) every time. It gets computed once and then stored as a quasi-terminal value instead. QT(outcome) = T(outcome) + Sum( Sum(p(i)*T(outcome(j)))), and it might get recomputed sometimes, but most of the time it doesn't. And recomputing it is what updating our beliefs must involve. For ALL outcomes linked to the update.

...Yeah, that tends to take a while.
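A minimal sketch of that caching idea (invented names, assuming the outcome graph has no cycles): QT values are computed once, memoized, and thrown out when a belief linked to them changes.

```python
# Toy sketch of cached quasi-terminal values. Purely illustrative.
class ValueNetwork:
    def __init__(self, terminal):
        self.terminal = terminal  # T(outcome) for each outcome
        self.links = {}           # outcome -> [(p, downstream_outcome), ...]
        self.qt_cache = {}        # memoized QT(outcome)

    def qt(self, outcome):
        """QT(outcome) = T(outcome) + sum of p * QT(downstream) over linked outcomes."""
        if outcome not in self.qt_cache:
            downstream = self.links.get(outcome, [])
            self.qt_cache[outcome] = self.terminal[outcome] + sum(
                p * self.qt(o) for p, o in downstream
            )
        return self.qt_cache[outcome]

    def update_belief(self, outcome, new_links):
        """Updating a belief means invalidating every cached QT it touches."""
        self.links[outcome] = new_links
        self.qt_cache.clear()  # crude: recompute everything on the next query

net = ValueNetwork({"eat": 1.0, "full": 0.5})
net.update_belief("eat", [(0.9, "full")])
print(net.qt("eat"))  # 1.0 + 0.9 * 0.5 = 1.45
```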

Comment by Liliet B (liliet-b) on Doublethink (Choosing to be Biased) · 2019-12-07T13:51:51.174Z · LW · GW

It does affect your point.

Comment by Liliet B (liliet-b) on Dark Side Epistemology · 2019-12-07T13:29:28.509Z · LW · GW

The ultimate prior is maximum entropy, aka "idk", aka "50/50: either happens or not". We never actually have it, because we start gathering evidence about how the world is before our brains even form enough to make any links out of it.

Comment by Liliet B (liliet-b) on One Argument Against An Army · 2019-12-07T12:13:58.793Z · LW · GW

As noted above, rehearsing all the evidence against your position alongside your own should be a counter. As in the article's example, the math should not be "1 vs 3 every time", but it should not be "1 vs 3 the first time, 1 vs 0 the second and subsequent times" either. It should be "1 vs 3, then 2 vs 3, then..."
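In log-odds form (a standard way of writing it, not quoting the article): for conditionally independent pieces of evidence,

$$\log O(H \mid e_1, \dots, e_n) = \log O(H) + \sum_{k=1}^{n} \log \frac{P(e_k \mid H)}{P(e_k \mid \neg H)},$$

where every piece of evidence, for or against, appears exactly once. Rehearsing your one favourite argument again doesn't get to add its term a second time.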

In actual debate practice, it might confuse the other person that you're listing their points for them, but I've found it a helpful practice anyway.

Comment by Liliet B (liliet-b) on One Argument Against An Army · 2019-12-07T12:08:10.339Z · LW · GW

I'd go with this. Gather all the evidence in one place as you're attempting to update... Otherwise you might miss that shiny new counterevidence actually screens off some old counterevidence you'd already updated on, or is screened off by it and you don't need to update at all.

Comment by Liliet B (liliet-b) on Rationality: An Introduction · 2019-12-04T09:27:58.649Z · LW · GW

Not if you consider that the 1:5 figure builds in the constraint that ONLY one person among the six has a crush on you. If you learn for a fact that one does, you'll also immediately know the others all don't. Which is not true for a random selection of students - you could randomly pick six who all have a crush on you. Bob belongs to a group in which you know for a fact that five people DON'T have a crush on you. So you have evidence lowering Bob's odds relative to a random winker.

Either that, or it doesn't matter how many actually have a crush on you, because you're looking for the specific one you have definite evidence about. For this interpretation, a random winker is not qualified to enter the comparison at all - if they're not one of the six, they're not the person you're looking for. So Bob might have a crush on you AND not be the person you're looking for, although his odds are higher than those of the other five you don't have any evidence about.

Those are the interpretations that make the math not wrong, anyway. If you only know that "at least one of them has a crush on me" and more than one could potentially satisfy your search criteria, the 1:5 figure is not the right odds.
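A quick numerical check of the two readings (toy code, not from the post): under "exactly one of the six", Bob having a crush forces the other five to zero; under "each student independently with probability 1/6", it doesn't.

```python
STUDENTS = 6
p = 1 / 6  # per-student crush probability in the "independent" reading

# Independent reading: any combination of crushes is possible.
print(p ** STUDENTS)  # ~2.1e-05: you *can* randomly pick six who all have a crush on you

# "Exactly one of the six" reading: enumerate the 6 equally likely worlds.
worlds = [tuple(1 if i == j else 0 for i in range(STUDENTS)) for j in range(STUDENTS)]
bob_worlds = [w for w in worlds if w[0] == 1]           # worlds where Bob (index 0) is the one
print(len(bob_worlds) / len(worlds))                    # 1/6, i.e. odds of 1:5 for Bob
print(sum(w[1] for w in bob_worlds) / len(bob_worlds))  # 0.0: given Bob does, student 1 doesn't
```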

Comment by Liliet B (liliet-b) on Conservation of Expected Evidence · 2019-12-02T18:17:07.701Z · LW · GW

Before you have actually done A, since it might fail because of ~P (which is what the thing you said actually means), your confidence is still the same as before you came up with the plan. We're still at t=0. Information about your plan succeeding or not hasn't arrived yet.

Now if over the course of planning you realize that the very ability you have to make the plan shifts probability estimate of P, then we've already got the new evidence. We're at t=1, and the probability has shifted rightfully without violating the law. The evidence is no longer expected, it's already here!

Before you started planning, you didn't know that you would succeed and get this information. Not for certain. Or if you did, your estimate of the probability of P was clearly wrong, but you hadn't noticed it yet, where the "yet" is the time factor that distinguishes between t=0 and t=1 again...

Can't cheat your way out of this at t=0, I'm afraid.
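For reference, the law being invoked here, in its standard form:

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),$$

so at t=0 the probability-weighted average of your possible post-evidence estimates must equal your current estimate; any anticipated shift in one direction has to be balanced by an anticipated shift in the other.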

Comment by Liliet B (liliet-b) on Conservation of Expected Evidence · 2019-12-02T14:52:15.998Z · LW · GW

Let's say you are organising a polar expedition. It will succeed (A) or fail (~A). There is a postulate that there are no man-eating polar Cthulhu in the area (P). If there are some (~P), the expedition will fail (~A), thus entangling A with P.

You can do your best to prepare the expedition so that it will not fail for non-Cthulhu reasons, strengthening the entanglement - ~A becomes stronger evidence for ~P. You can also do your best to prepare the expedition to survive even the man-eating polar Cthulhu, weakening the entanglement - by introducing a higher probability of A&~P, we're making A weaker evidence for P.

Do any of these preparations, in themselves, actually influence the number of man-eating polar Cthulhu in the area?
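A tiny numerical sketch (all numbers invented) of how the two preparations change the evidential entanglement without touching the Cthulhu themselves:

```python
# Toy numbers, purely illustrative. P = "no Cthulhu", A = "expedition succeeds".
PRIOR_CTHULHU = 0.1  # P(~P): a fact about the territory, fixed once and for all

def posterior_cthulhu_given_failure(p_fail_if_cthulhu, p_fail_if_clear):
    """P(~P | ~A) by Bayes' rule."""
    num = p_fail_if_cthulhu * PRIOR_CTHULHU
    return num / (num + p_fail_if_clear * (1 - PRIOR_CTHULHU))

# Baseline: a sloppy expedition fails half the time even with no Cthulhu around.
print(posterior_cthulhu_given_failure(0.9, 0.50))  # ~0.17: failure is weak evidence

# Preparation 1: remove the non-Cthulhu failure modes (p_fail_if_clear drops).
print(posterior_cthulhu_given_failure(0.9, 0.05))  # ~0.67: ~A now points hard at ~P

# Preparation 2: Cthulhu-proof the gear as well (p_fail_if_cthulhu drops too).
print(posterior_cthulhu_given_failure(0.2, 0.05))  # ~0.31: the entanglement weakened again

# In every case only the map changed: PRIOR_CTHULHU, the amount of Cthulhu
# actually out there, was never moved by any of the preparations.
```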