Comments

Comment by Ian Televan on There's No Fire Alarm for Artificial General Intelligence · 2021-08-02T21:38:03.843Z · LW · GW

from random import randint

# Estimate the expected number of d20 rolls until the first 1 comes up.
runs = 100000
total_rolls = 0
for _ in range(runs):
    rolls = 1                      # count the final, successful roll
    while randint(1, 20) != 1:
        rolls += 1
    total_rolls += rolls
print(total_rolls / runs)

>>> 20.05751
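The simulated value lines up with the analytic answer: the number of d20 rolls until the first 1 is geometrically distributed with p = 1/20, so the expectation is 1/p = 20. A quick sanity check of that sum (my own addition, truncating the infinite series):

```python
# E[rolls] for a geometric distribution: sum over k >= 1 of k * p * (1-p)^(k-1) = 1/p.
p = 1 / 20
expectation = sum(k * p * (1 - p) ** (k - 1) for k in range(1, 10_000))
assert abs(expectation - 20) < 1e-6  # matches the ~20.06 simulation estimate
```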

Comment by Ian Televan on How to teach things well · 2021-07-31T19:56:00.130Z · LW · GW

In my experience teachers tend to give only examples of typical members of a category. I wish they'd also give examples along the category border, both positive and negative. Something like: "this seems to have nothing to do with quadratic equations, but it actually does, and here's why" and "this problem looks like it can be solved using quadratic equations, but that's misleading because XYZ". This is obvious in subjects like geography (when you want to describe where China is, don't give a bunch of points around Beijing as examples; draw the border and maybe mention the ongoing territorial conflicts), but for some reason it's less obvious in concept-heavy subjects like mathematics.

Another point on my wishlist: create sufficient room for ambition. Give bonus points for optional but hard exercises. Mention some problems that even the world's top experts don't know how to solve.

Comment by Ian Televan on How to learn from conversations · 2021-07-30T19:24:18.467Z · LW · GW

Thank you very much for posting this! I've been thinking about this topic for a while and feel it is criminally overlooked. There are so many resources on how to teach other people effectively, but virtually none on how to learn effectively from other people (rather than from textbooks). Yet we are often surrounded by people who know something we currently don't, and who might not know much about teaching or how to explain things well. Knowing what questions to ask, and how to ask them, turns these people into great teachers - while you reap the benefits! - which feels like a superpower.

Comment by Ian Televan on Decision Theory · 2021-07-15T20:22:54.830Z · LW · GW

While I agree that the algorithm might output 5, I don't share the intuition that it's something that wasn't 'supposed' to happen, so I'm not sure what problem it was meant to demonstrate. I thought of a few ways to interpret it, but I'm not sure which one, if any, was the intended interpretation:

a) The algorithm is defined to compute argmax, but it doesn't output argmax because of false antecedents. 

- but I would say that it's not actually defined to compute argmax, therefore the fact that it doesn't output argmax is not a problem.

b) Regardless of the output, the algorithm uses reasoning from false antecedents, which seems nonsensical from the perspective of someone who uses intuitive conditionals, which impedes its reasoning.

- it may indeed seem nonsensical, but if 'seeming nonsensical' doesn't actually impede its ability to select the action with highest utility (when it is actually defined to compute argmax), then I would say that it's also not a problem. Furthermore, wouldn't MUDT be perfectly satisfied with the tuple ? It also uses the 'nonsensical' reasoning 'A()=5 => U()=0' but still outputs the action with highest utility.

c) Even when the use of false antecedents doesn't impede its reasoning, the way it arrives at its conclusions is counterintuitive to humans, which means that we're more likely to make a catastrophic mistake when reasoning about how the agent reasons.

- Maybe? I don't have access to other people's intuitions, but when I read the example, I didn't have any intuitive feeling of what the algorithm would do, so instead I just calculated all assignments , eliminated all inconsistent ones and proceeded from there. And this issue wouldn't be unique to false antecedents: there are other perfectly valid pieces of logic that might nonetheless seem counterintuitive to humans, for example the puzzle with the islanders and blue eyes.


Yet, we reason informally from false antecedents all the time, EG thinking about what would happen if

When I try to examine my own reasoning, I find that I'm just selectively blind to certain details and so don't notice any problems. For example: suppose the environment calculates "U=10 if action = A; U=0 if action = B" and I, being a utility maximizer, am deciding between actions A and B. Then I might imagine something like "I chose A and got 10 utils" and "I chose B and got 0 utils" - ergo, I should choose A.

But actually, if I had thought deeper about the second case, I would also think "hm, because I'm determined to choose the action with highest reward I would not choose B. And yet I chose B. This is logically impossible! OH NO THIS TIMELINE IS INCONSISTENT!" - so I couldn't actually coherently reason about what could happen if I chose B. And yet, I would still be left with the only consistent timeline where I choose A, which I would promptly follow, and get my maximum of 10 utils. 
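The elimination step can be sketched mechanically (this is my own toy formalization of the reasoning in this comment, not anything from the original post):

```python
# Environment: U = 10 for action A, U = 0 for action B.
def utility(action):
    return 10 if action == "A" else 0

# The agent is defined to take the utility-maximizing action, so any
# "timeline" in which it takes a different action contradicts the agent's
# own definition and gets eliminated as inconsistent.
actions = ["A", "B"]
best = max(actions, key=utility)
consistent_timelines = [a for a in actions if a == best]

assert consistent_timelines == ["A"]  # only the argmax timeline survives
```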

The problem is also "solved" if the agent thinks only about the environment, ignoring its knowledge about its own source code.

The idea with reversing the outputs and taking the assignment that is valid for both versions of the algorithm seemed to me to be closer to the notion "but what would actually happen if you actually acted differently", i.e. avoiding seemingly nonsensical reasoning while preserving self-reflection. But I'm not sure when, if ever, this principle can be generalized. 

Comment by Ian Televan on Decision Theory · 2021-07-08T23:34:19.065Z · LW · GW

I don't quite follow why the 5/10 example presents a problem.

Conditionals with false antecedents seem nonsensical from the perspective of natural language, but why is this a problem for the formal agent? Since the algorithm as presented doesn't actually try to maximize utility, everything seems to be alright. In particular, there are 4 valid assignments: 

The algorithm doesn't try to select the assignment with the largest , but rather just outputs  if there's a valid assignment with , and  otherwise. Only  fulfills the condition, so it outputs . The assignments  and  also seem nonsensical because of false antecedents, but with attached utility  - would that be a problem too?

For this particular problem, you could get rid of assignments with nonsensical values by also considering an algorithm with reversed outputs and then taking the intersection of valid assignments, since only  satisfies both algorithms. 

Comment by Ian Televan on A Semitechnical Introductory Dialogue on Solomonoff Induction · 2021-05-07T22:44:43.631Z · LW · GW

Could someone explain why this doesn't degenerate into an entirely circular concept when we postulate a stronger compiler; or why it doesn't become entirely dependent on the choice of the compiler?

  1. There are many programs that output identical sequences. That's a waste. Make it so that no two different programs have the same output.
  2. There are many sequences that when fed into the compiler don't result in valid programs. That's a waste. Make it so that every binary sequence represents a valid program.

Now we have a set of sequences that we'd like to encode: S = {ε, 0, 1, 00, 01, ... } (writing ε for the empty sequence), a set of sequences that are interpreted by the compiler as programs: P = {ε, 0, 1, 00, 01, ... }, and the compiler, which is a bijection from P to S. It had better not turn out to be the identity function... And that's with the best possible compiler. If we postulate a reasonable but much weaker compiler, then the programs that encode the sequences become on average longer than the sequences themselves!

The only way out of this that I see is to weight the elements of S by their frequencies in our universe and/or by how much we care about them, and then let the compiler be a function that minimizes this frequency-importance score. In fact, this compiler starts looking more and more like an encoder (?!). The difficult part then seems to be the choice of the optimal encoder, not the Solomonoff induction itself.
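That frequency-weighted encoder can be made concrete. Here is a minimal sketch using Huffman coding as a stand-in (the choice of Huffman and the toy frequencies are my own assumptions, not anything from the original dialogue): more frequent sequences get shorter "programs", roughly -log2(p) bits each.

```python
import heapq

def huffman_lengths(freqs):
    """Return {sequence: codeword length} for a Huffman code over freqs."""
    # Heap entries: (total frequency, tiebreak counter, {sequence: depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every leaf one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical frequency-importance scores for a few sequences.
freqs = {"0": 0.5, "1": 0.25, "00": 0.125, "01": 0.125}
lengths = huffman_lengths(freqs)
assert lengths["0"] < lengths["00"]  # frequent sequence gets a shorter "program"
```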

Edit: Of course, when there's a 1-to-1 mapping, selecting the shortest program is trivial. So in a way, if we make Solomonoff induction trivial, then the only thing that's left is the choice of the compiler. But why isn't this still a problem with weaker, traditional compilers?

Comment by Ian Televan on Rationality: Appreciating Cognitive Algorithms · 2021-04-22T22:32:36.412Z · LW · GW

I thought of a slightly different exception for the use of "rational": when we talk about conclusions that someone else would draw from their experiences, which are different from ours. "It's rational for Truman Burbank to believe that he has a normal life." 

Or if I had an extraordinary experience which I couldn't communicate to you with enough fidelity, then it might be rational for you not to believe me. Conversely, if you had the experience and tried to tell me, I might answer with "Based only on the information that I received from you, which is possibly different from what you meant to communicate, it's rational for me not to believe the conclusion." There I might want to highlight the issue of fidelity of communication as a possible explanation for the discrepancy (the alternative being, for example, that the conclusion is unwarranted even if the account of the event is true and complete).

Comment by Ian Televan on Double Illusion of Transparency · 2021-04-22T19:02:11.624Z · LW · GW

Richard Feynman once said that if you really understand something in physics you should be able to explain it to your grandmother.  I believed him.

Curiously enough, there is a recorded interview in which he argues almost exactly the opposite: namely, that he can't explain something in sufficient detail to laypeople because of the long inferential distance.

Comment by Ian Televan on The Allais Paradox · 2021-04-09T23:00:20.388Z · LW · GW

It seems that the mistake people commit is imagining that the second scenario is a choice between 0.34*24000 = 8160 and 0.33*27000 = 8910. Yes, if that were the case, then you could imagine a utility function that is approximately linear in the region 8160 to 8910, but sufficiently concave in the region 24000 to 27000, s.t. the difference between 8160 and 8910 feels greater than that between 24000 and 27000... But that's not the actual scenario we are presented with. We never actually see 8160 or 8910. The slopes of the utility function in the first and second scenarios are identical.

"Oh, these silly economists are back at it again, asserting that my utility function ought to be linear, lest I'm irrational. Ugh, how annoying! I have to explain again, for the n-th time, that my function actually changes slope in such a way that my intuitions make sense. So there!" <- No, that's not what they're saying! If you actually think this through carefully, you'll realize that there is no monotonically increasing utility function, no matter its shape, that justifies 1A > 1B and 2A < 2B simultaneously.
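To make this concrete, here is a small brute-force check (my own sketch, using the payoffs and probabilities from the post, with U(0) normalized to 0): preferring 1A over 1B requires U(24000) > (33/34)*U(27000), while preferring 2B over 2A requires 0.33*U(27000) > 0.34*U(24000), i.e. exactly the reverse inequality, so no monotone utility function can satisfy both.

```python
import random

random.seed(0)  # seeded only for reproducibility
for _ in range(10_000):
    # Any monotone utility with U(0) = 0: u27 > u24 > 0, shape otherwise arbitrary.
    u24 = random.uniform(0.01, 1.0)
    u27 = random.uniform(u24, u24 + 1.0)
    prefers_1A = u24 > (33 / 34) * u27      # certain 24000 over a 33/34 shot at 27000
    prefers_2B = 0.33 * u27 > 0.34 * u24    # 33% of 27000 over 34% of 24000
    assert not (prefers_1A and prefers_2B)  # the two preferences never co-occur
```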

Comment by Ian Televan on Where Recursive Justification Hits Bottom · 2021-04-08T21:01:02.853Z · LW · GW

But is Occam's Razor really circular? The hypothesis "there is no pattern" is strictly simpler than "there is this particular pattern", for any value of 'this particular'. Occam's Razor may expect simplicity in the world, but it is not the simplest strategy itself.

Edit: I'm talking about the hypothesis itself, as a logical sequence of some kind, not about what the hypothesis asserts. It asserts maximum entropy - the most complex world.

Comment by Ian Televan on Excluding the Supernatural · 2021-04-07T14:29:26.358Z · LW · GW

Originally I thought of an exception where the thing we don't know is a constructive question, e.g. given more or less complete knowledge of materials science, how do we construct a decent bridge? But that's an obvious limitation; no self-proclaimed reductionist would actually try to apply reductionism in such a situation.

It seems to me that you're describing the reverse scenario: suppose we have an already constructed object and want to figure out how it works - can reductionism still be used? I'd still say yes.

Take an airplane, for example. Knowing the relevant laws of physics and looking at just the airplane, you can't actually predict whether it's going to fly to New York or Chicago. You need to incorporate the pilot into the model. And the pilot is influenced by human psychology, economics, etc. So on one hand you have the airplane as a concrete physical object, and on the other hand you have the role that airplanes of that type play in human society. BUT! By looking at just the physical properties, you can still infer a great deal about how it's used.

This applies to money too. Physical manifestations are not actually completely arbitrary: they are either valuable in themselves (hides, grain, salt, etc.) or they have properties that make them suitable as value tokens - relatively durable and difficult to counterfeit, whether through scarcity of raw materials or through difficulty of manufacturing. There is not as much to say about the physical properties of money compared to airplanes, but the difference is quantitative, not qualitative.

So we're left with questions about human society. How do humans actually use these objects? Well, it's often impractical to apply reductionism here, but it's still possible in principle. We just don't know enough yet, or it would be computationally intractable, or it would be unethical, etc. And of course, a lot has already been learned through the application of reductionism to human psychology.

Comment by Ian Televan on A Technical Explanation of Technical Explanation · 2021-04-07T00:32:59.024Z · LW · GW

Something felt off about this example and I think I can put my finger on it now. 

My model of the world gives the event with the blue tentacle probability ~0. So when you ask me to imagine it, and I do so, it feels to me like I'm coming up with a new model to explain it, one which gives a higher probability to that outcome than my current model does. This seems to be the root of the apparent contradiction: it appears that I'm violating the invariant. But I don't think that's what is actually happening. Consider this fictional exchange:

EY: Imagine that you have this particular gaussian model. Now suppose that you find yourself in a situation that is 50 SD's away from the median. How do you explain it?

Me: Well, my hypothesis is that...

EY: Wrong! That scenario is too unlikely; if the model has something to say about it, then the model must be wrong and irrational.

Me: No! You asked me to suppose this incredibly unlikely scenario, which is exactly what I did. I didn't conclude "EY is asking me to consider something that's too unlikely - ah, he's trying to trick me, therefore I am not going to imagine the scenario on the grounds that it's impossible!" because that is an impossible conclusion from inside the model.

I have limited resources, so I just don't bother pre-computing the details of my model that are too unlikely to matter. But if this scenario actually came up in real life, I would be able to fill in the missing details retroactively. That doesn't mean my model assumes more than 100% total probability, because I'm already reserving a bit of probability mass for unknown unknowns. And I needn't worry about such scenarios now, because they're too unlikely, and there are too many similarly unlikely scenarios. I just can't be meaningfully concerned about them all.

Comment by Ian Televan on Excluding the Supernatural · 2021-04-05T12:21:18.816Z · LW · GW

Care to elaborate? Also, that's not really an exception but a boundary - it's exactly what you would expect if there are finitely many layers of composition, i.e. if the world is not like an infinite fractal.

Comment by Ian Televan on Excluding the Supernatural · 2021-04-05T00:20:20.592Z · LW · GW

Of course it doesn't work for problems where the objects in question are already fundamental and cannot be reduced any further. But that's what I meant in the original post - reductionist frameworks would fail to produce any new insights if we were already at the fundamental level.

Comment by Ian Televan on Excluding the Supernatural · 2021-04-01T13:15:22.102Z · LW · GW

If reductionism were wrong, then I would expect reductionist approaches to be ineffective. Every attempt at gaining knowledge using a reductionist framework would fail to discover anything new, except by accident on very rare occasions. Or experiments would fail to replicate because the conservation of energy was routinely violated in unpredictable ways.

Comment by Ian Televan on Belief in the Implied Invisible · 2021-04-01T01:31:32.567Z · LW · GW

Conservation laws or not, you ought to believe in the existence of the photon because you continue to have evidence of its existence - your memory of having fired the photon! Your memory is entangled with the state of the universe; not perfectly, but it's still Bayesian evidence. And if your memory got erased, then indeed, you'd better stop believing that the photon exists.

Comment by Ian Televan on Dissolving the Question · 2021-03-28T09:16:41.315Z · LW · GW

That seems unlikely. There is already a certain difficulty in showing that the illusion of free will is itself an illusion. "It seems like you have free will, but actually it doesn't seem that way." - The seeming is self-evident, so what sense does it make to say that something doesn't actually seem so if it feels like it seems so? As far as I understand it, it's not that it doesn't really seem so but you're mistaken and think that it does, and then mindfulness meditation clears up that mistake so that you stop thinking it seems you have free will. Instead, you observe the seeming itself just disappear. It stops seeming that you have free will.

So now we come to your suggestion: "It seems (level 2) like the seeming (level 1) disappears, but actually it doesn't seem (level 2) like the seeming (level 1) disappears." - But once again, the seeming (level 2) is self-evident. So you'd need to come up with some extraordinary circumstances, associated with even more mental clarity, to show that the seeming (level 2) also disappears. But this is unlikely, because the concept of free will is already incoherent, so more mental clarity shouldn't point you towards it.

Comment by Ian Televan on Dissolving the Question · 2021-03-28T08:57:59.205Z · LW · GW

An illusion of X is indeed an illusion.. by definition :)

Comment by Ian Televan on Dissolving the Question · 2021-03-22T18:40:35.026Z · LW · GW

As Sam Harris points out, the illusion of free will is itself an illusion. It doesn't actually feel like you have free will if you look closely enough. So then why are we mistaken about things when we don't examine them closely enough? That seems like too open-ended a question.

Comment by Ian Televan on Beautiful Probability · 2021-03-21T21:45:53.855Z · LW · GW

Update: a) is just wrong, and b) is right but unsatisfying, because it doesn't address the underlying intuition that says the stopping criterion ought to matter. I'm very glad that I decided to investigate this issue in full detail and run my own simulations instead of just accepting some general principle from either side.

MacKay presents it as a conflict between frequentism and Bayesianism and argues why frequentism is wrong. But I started out with a Bayesian model and still felt that motivated stopping should have some influence. I'm going to try to articulate the best argument for why the stopping criterion must matter, and then explain why it fails.

First of all, the scenario doesn't describe exactly what the stopping criterion was, so I made one up: the (second) researcher treats patients and gets the results one at a time. He has some particular threshold for the probability that the treatment is >60% effective, and he is going to stop and report the results the moment that probability reaches the threshold. He derives this probability by calculating a beta distribution for the data and integrating it from 0.6 to 1. (For those unfamiliar with the beta distribution, I recommend this excellent video by 3Blue1Brown.) In this case, the likelihood of seeing the data given underlying probability p is proportional to p^70 * (1-p)^30, and the probability that the treatment is >60% effective is the integral of the normalized beta density from 0.6 to 1.

Now the argument: motivated stopping ensures that we don't just get 70 successes and 30 failures. We have an additional constraint: after each of the first 99 outcomes the probability is strictly below the threshold, and only after the 100th patient does it reach the threshold. Surely then, we must modify the likelihood to reflect this constraint. And if the true probability really were >60%, then surely there are many Everett branches where the probability reaches the threshold before we ever get to the 100th patient. If it really took this long, then it must be because it's actually less likely that the true probability is >60%.

And indeed, the likelihood of seeing 70 successes and 30 failures under such a stopping criterion is less than the unconstrained likelihood. BUT! The constraint is independent of the probability p! It is purely about the order in which the outcomes appear. In other words, it changes the constant factor, which originally counted the total number of different ways to order 70 positive and 30 negative instances. And this constant reduces the likelihood for every value of p equally! It doesn't reduce it more in universes where p is below 0.6 than in universes where p is above 0.6. This means that the shape of the original distribution stays the same; only the amplitude changes. But because we condition on seeing 70 successes and 30 failures anyway, the area under the curve must equal 1. So we have to re-normalize, and the posterior comes out the same as before!

Another way to think about it is that the stopping criterion is not entangled with the actual underlying probability in a given universe: there is zero mutual information between the stopping criterion and p. And yes, if this were not the case - if, for example, the researcher had decided that he would treat one more patient after reaching the threshold and only publish the results if this patient recovered (but not mention them in the report) - then it would absolutely affect the results, because a positive outcome for that patient is more likely in universes where p is higher. But then it also wouldn't be purely about his state of mind; we would have an additional data point.
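The renormalization argument is easy to check numerically. This is my own sketch (uniform prior over a grid; the constant 0.125 stands in for whatever p-independent factor the ordering constraint contributes):

```python
N = 20_000
ps = [(i + 0.5) / N for i in range(N)]  # grid over (0, 1), uniform prior

def posterior(constant):
    # Likelihood of 70 successes and 30 failures, times a p-independent constant.
    weights = [constant * p**70 * (1 - p)**30 for p in ps]
    total = sum(weights)
    return [w / total for w in weights]

plain = posterior(1.0)          # ignoring the ordering constraint
constrained = posterior(0.125)  # with a p-independent penalty for the ordering

# P(true effectiveness > 0.6) is unchanged by the constant:
p_gt_plain = sum(w for p, w in zip(ps, plain) if p > 0.6)
p_gt_constrained = sum(w for p, w in zip(ps, constrained) if p > 0.6)
assert abs(p_gt_plain - p_gt_constrained) < 1e-12
```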

Comment by Ian Televan on Beautiful Probability · 2021-03-18T12:31:56.457Z · LW · GW

Fixing my predictions now, before investigating this issue further (I have MacKay's book within arm's reach and would also like to run some Monte Carlo simulations to check the results; going to post the resolution later):

a) It seems that we ought to treat the results differently, because the second researcher in effect admits to p-hacking his results. b) On the other hand, what if we modify the scenario slightly: suppose we get the results from both researchers one patient at a time. Surely we ought to update the priors by the same amount each time? And so by the time we get the 100th individual result from each researcher, the priors should be the same, even if we then find out that they had different stopping criteria.

My prediction is that argument a) turns out to be right and argument b) contains some subtle mistake. 

Comment by Ian Televan on Mutual Information, and Density in Thingspace · 2021-03-15T19:36:38.069Z · LW · GW

Fascinating subject indeed!

  1. I wonder how one would need to modify this principle to take risk-benefit analysis into account. What if quickly identifying wiggins meant incurring a great benefit or avoiding great harm? Then you would still need a nice short word for them. This much seems obvious; the question is only how much shorter the word would need to be.
  2. Labels that are both short and phonetically consistent with a given language are in short supply, therefore we would predict that sometimes even unrelated things share labels - provided they occupy sufficiently different contexts s.t. there is no risk of confusing them. This is what we see in the case of professional jargon, for example. I also wonder whether one could actually quantify such a prediction.
  3. If labels that are both short and phonetically consistent with a given language really are in such short supply, why aren't they all already occupied? Why were you able to come up with a word like 'wiggin', which seems consistent with English phonetics yet doesn't already mean something? -- This introduces the concept of phonetic redundancy in languages. It would actually be impractical to occupy all of the shortest syllable combinations, because that would make it impossible, or require too much effort, to correct errors. People in radio communications recognized this phenomenon and devised a number of spelling alphabets, the most commonly known being the NATO phonetic alphabet.

Comment by Ian Televan on Extensions and Intensions · 2021-03-13T17:20:03.404Z · LW · GW

This seems to be a Quality vs Quantity issue. Yes, it may be a matter of fact that "I can define a word any way I like" is not true. But that is not the same as "I can give words almost arbitrarily different definitions, compared to how they're usually used, although this is rarely a good idea and I should be extra cautious when doing so, carefully weighing pros and cons". In many cases clarifying definitions, nudging them slightly (or not so slightly), leads to real practical progress.

Comment by Ian Televan on The Parable of the Dagger · 2021-03-13T15:19:02.617Z · LW · GW

I tried to reason through the riddles before reading the rest, and I made the same mistake as the jester did. It is really obvious in hindsight; I had thought about this concept earlier and really believed I had understood it. I did not expect to make this mistake at all, damn.

I even invented some examples of my own: in the programming language Python, a statement like print("Hello, World!") is an instruction to print "Hello, World!" on the screen, but "print(\"Hello, World!\")" is merely a string that represents the first statement; it's completely inert. (In an interactive environment it would display print("Hello, World!") on the screen, but still not "Hello, World!".)
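That example can be run directly; interpreting the string once (here via exec, my own choice of mechanism) recovers the statement it represents:

```python
import io
from contextlib import redirect_stdout

buf_statement, buf_interpreted = io.StringIO(), io.StringIO()

with redirect_stdout(buf_statement):
    print("Hello, World!")           # the statement itself: actually prints

code = 'print("Hello, World!")'      # a string merely *representing* the statement
with redirect_stdout(buf_interpreted):
    exec(code)                       # interpreting the representation once

assert buf_statement.getvalue() == buf_interpreted.getvalue() == "Hello, World!\n"
assert code != "Hello, World!"       # the inert string is not the output
```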

Edit: I think I understand what went wrong with my reasoning. Usually, distinguishing a statement from a representation of a statement is not difficult. To get a statement from a representation of a statement, you must interpret the representation once. And this is rather easy; for example, when I'm reading these essays, I am well aware that the universe doesn't just place these true statements into my mind - instead, I'm reading what Eliezer wrote down, and I must interpret it. It is always "Eliezer writes 'X'", and not just "X".

But in this example, there were 2 different levels of representation. To get to the jester and the king, I need to interpret the words once. But to get to the inscriptions, I must interpret the words twice. This is what went wrong. If I have correctly understood the root of my mistake, then, had I been in the jester's shoes, I wouldn't have made it. Therefore, I think, my mistake is not quite the same as the jester's. Simultaneous interpretation of different levels of representation is something to be vigilant about.

"Ceci n'est pas une pipe." This is not a picture of a pipe either; this is a picture of a picture of a pipe. Or is it a piece of text saying "this is a picture of a picture of a pipe"? Or is it a piece of text saying "This is a piece of text, saying \"this is a picture...\""... :-)

Comment by Ian Televan on An Especially Elegant Evpsych Experiment · 2021-03-11T19:39:46.808Z · LW · GW

(Of course I don't know how the authors actually came up with the hypothesis, and I could be wrong, and the conclusions seem very plausible anyway, but...) The study seems to be susceptible to stopping bias.

If the correlation was very strong right away, they could've said "Parental grief directly correlates with reproductive potential, Q.E.D!"

It wasn't, but they found a group resembling early hunter-gatherers, with the conclusion "Parental grief directly correlates with reproductive potential from back then, Q.E.D!"

If this hadn't worked out either, and the correlation had peaked for some values in the middle, they could've said "Parental grief correlates with reproductive potential from back then, and it is also influenced by the specifics of the current society, Q.E.D!"

Comment by Ian Televan on Original Seeing · 2021-03-09T12:35:16.956Z · LW · GW

I'm not sure whether the explanation at the end is right, but this is a very powerful technique nonetheless. I have observed a similar problem many times but couldn't quite put my finger on it.

Comment by Ian Televan on Dark Side Epistemology · 2021-03-08T20:45:57.402Z · LW · GW

Arguing against consistency itself: "I was trying to be consistent when I was younger, but now I'm wiser than that."

Comment by Ian Televan on Truly Part Of You · 2021-03-06T18:54:26.180Z · LW · GW

This feels very important.

Suppose that something *was* deleted. What was it? What am I failing to notice? 

Maybe learning to 'regenerate' the knowledge that I currently possess is going to help me 'regenerate' the knowledge that 'was deleted'.