Posts

Are We Right about How Effective Mockery Is? 2020-08-27T10:48:18.288Z
Excusing a Failure to Adjust 2020-08-26T13:51:54.132Z
Empathetic vs Intrinsic Oof 2020-08-24T00:12:07.737Z
What do you make of AGI:unaligned::spaceships:not enough food? 2020-02-22T14:14:14.687Z
Comment on Coherence arguments do not imply goal directed behavior 2019-12-06T09:30:25.882Z
Testing the Efficacy of Disagreement Resolution Techniques (and a Proposal for Testing Double Crux) 2019-10-21T22:57:37.327Z
Off the Cuff Brangus Stuff 2019-08-02T11:40:24.628Z
Is the sum individual informativeness of two independent variables no more than their joint informativeness? 2019-07-08T02:51:28.221Z
The Principle of Predicted Improvement 2019-04-23T21:21:41.018Z
Asking for help teaching a critical thinking class. 2019-03-07T02:15:38.208Z
Does Evidence Have To Be Certain? 2016-03-30T10:32:39.632Z
Computable Universal Prior 2015-12-11T09:54:24.935Z
Does Probability Theory Require Deductive or Merely Boolean Omniscience? 2015-08-03T06:54:31.012Z
Meetup : Umbc meetup 2015-02-25T04:00:34.284Z
Humans Shouldn't make Themselves Smarter? 2011-12-11T12:00:55.418Z
LW Philosophers versus Analytics 2011-11-28T15:40:15.138Z
(Subjective Bayesianism vs. Frequentism) VS. Formalism 2011-11-26T05:05:41.138Z
Bayes Slays Goodman's Grue 2011-11-17T10:45:37.460Z
Naming the Highest Virtue of Epistemic Rationality 2011-10-24T23:00:37.924Z
Can't Pursue the Art for its Own Sake? Really? 2011-09-20T02:09:49.470Z
Towards a New Decision Theory for Parallel Agents 2011-08-07T23:39:58.376Z
MSF Theory: Another Explanation of Subjectively Objective Probability 2011-07-30T19:46:56.701Z
Induction, Deduction, and the Collatz Conjecture: the Decidedly Undecidable Propositions. 2011-06-15T15:21:00.080Z

Comments

Comment by Ronny (potato) on “PR” is corrosive; “reputation” is not. · 2021-02-14T15:39:46.693Z · LW · GW

I came here to say something pretty similar to what Duncan said, but I had a different focus in mind. 

It seems like it's easier for organizations to coordinate around PR than it is for them to coordinate around honor.  People can have really deep intractable, or maybe even fundamental and faultless, disagreements about what is honorable, because what is honorable is a function of what normative principles you endorse. It's much easier to resolve disagreements about what counts as good PR. You could probably settle most disagreements about what counts as good PR using polls. 

Maybe for this reason we should expect being into PR to be a relatively stable property of organizations, while being into honor is a fragile and precious thing for an organization. 

Comment by Ronny (potato) on What are examples of Rationalist fable-like stories? · 2020-09-28T18:09:07.879Z · LW · GW

https://www.lesswrong.com/posts/4tke3ibK9zfnvh9sE/the-bayesian-tyrant

Comment by Ronny (potato) on What does it mean to apply decision theory? · 2020-07-14T02:15:02.411Z · LW · GW

This might be sort of missing the point, but here is an ideal and maybe not very useful not-yet-theory of rationality improvements I just came up with.

There are a few black boxes in the theory. The first takes you and returns your true utility function, whatever that is. Maybe it's just the utility function you endorse, and that's up to you. The other black box is the space of programs that you could be. Maybe it's limited by memory, maybe it's limited by run time; maybe it's any finite state machine with fewer than 10^20 states, or Python programs less than 5,000 characters long: some limited set of programs that takes your sensory data and motor output history as input and returns a motor output. The limitations could be whatever; they don't have to be like this.

Then you take one of these ideal rational agents with your true utility function and the right prior, and you give them the decision problem of designing your policy, but they can only use policies that are in the limited space of bounded programs you could be. Their expected utility assignments over that space of programs are then our measure of the rationality of a bounded agent. You could also give the ideal agent access to your data and see how that changes their ranking, if it does. If you can change yourself such that the program you become is assigned higher expected utility by the agent, then that is an improvement.
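Here is a toy sketch of the ranking idea in Python. Everything in it is made up for illustration: two observations, two actions, and a handful of lookup-table policies standing in for the bounded program space, with the "ideal agent" reduced to exact expected-utility evaluation under an assumed true prior and utility function.

```python
import itertools

# Hypothetical setup: two possible observations, two possible actions.
OBSERVATIONS = ["o1", "o2"]
ACTIONS = ["a1", "a2"]

# The ideal agent's prior over which environment is real (assumed for illustration).
PRIOR = {"env_A": 0.7, "env_B": 0.3}

# Probability of each observation in each environment (assumed for illustration).
OBS_PROB = {"env_A": {"o1": 0.5, "o2": 0.5}, "env_B": {"o1": 0.5, "o2": 0.5}}

# The "true" utility of taking an action after an observation in each environment.
UTILITY = {
    ("env_A", "o1", "a1"): 1.0, ("env_A", "o1", "a2"): 0.0,
    ("env_A", "o2", "a1"): 0.2, ("env_A", "o2", "a2"): 0.8,
    ("env_B", "o1", "a1"): 0.0, ("env_B", "o1", "a2"): 1.0,
    ("env_B", "o2", "a1"): 0.9, ("env_B", "o2", "a2"): 0.1,
}

def bounded_program_space():
    # The limited space of programs you could be: every deterministic
    # lookup table from observations to actions.
    for acts in itertools.product(ACTIONS, repeat=len(OBSERVATIONS)):
        yield dict(zip(OBSERVATIONS, acts))

def rationality_score(policy):
    # The ideal agent's expected utility for handing control to this bounded policy;
    # this number is the proposed rationality measure for the agent that runs it.
    return sum(
        PRIOR[env] * OBS_PROB[env][obs] * UTILITY[(env, obs, policy[obs])]
        for env in PRIOR for obs in OBSERVATIONS
    )

# "Improvement" = changing yourself into a policy that ranks higher here.
for policy in sorted(bounded_program_space(), key=rationality_score, reverse=True):
    print(policy, round(rationality_score(policy), 3))
```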

Comment by Ronny (potato) on An Orthodox Case Against Utility Functions · 2020-04-20T20:02:03.184Z · LW · GW

I don't think we should be surprised that any reasonable utility function is uncomputable. Consider a set of worlds with utopias that last only as long as a Turing machine in the world does not halt and are otherwise identical. There is one such world for each Turing machine. All of these worlds are possible. No computable utility function can assign higher utility to every world with a never halting Turing machine.
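Here is a rough sketch of that argument as a reduction to the halting problem, assuming for illustration that the separation comes with a known computable threshold c (every never-halting-utopia world strictly above c, every halting world at or below it):

```python
def halts(U, c, machine_index):
    # Purely illustrative: U is the assumed-computable utility function over the
    # worlds described above (indexed by Turing machine), and c the assumed threshold.
    # If such a U existed, this one-liner would decide the halting problem,
    # which is impossible; so no such computable U exists.
    return U(machine_index) <= c  # True -> the machine halts, False -> it never halts
```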

Comment by Ronny (potato) on Comment on Coherence arguments do not imply goal directed behavior · 2019-12-19T21:31:38.302Z · LW · GW
I do think this is an important concept to explain our conception of goal-directedness, but I don't think it can be used as an argument for AI risk, because it proves too much. For example, for many people without technical expertise, the best model they have for a laptop is that it is pursuing some goal (at least, many of my relatives frequently anthropomorphize their laptops).

This definition is supposed to also explain why a mouse has agentic behavior, and I would consider it a failure of the definition if it implied that mice are dangerous. I think a system becomes more dangerous as your best model of that system as an optimizer increases in optimization power.

Comment by Ronny (potato) on Off the Cuff Brangus Stuff · 2019-10-10T10:00:08.612Z · LW · GW

Here is an idea for a disagreement resolution technique. I think this will work best:

*with one other partner you disagree with.

*when the beliefs you disagree about are clearly about what the world is like.

*when the beliefs you disagree about are mutually exclusive.

*when everybody genuinely wants to figure out what is going on.

Probably doesn't really require all of those though.

The first step is that you both write out your beliefs on a shared work space. This can be a notebook or a whiteboard or anything like that. Then you each write down your credences next to each of the statements on the work space.

Now, when you want to make a new argument or present a new piece of evidence, you should ask your partner if they have heard it before after you present it. Maybe you should ask them questions about it beforehand to verify that they have not. If they have not heard it before, or had not considered it, you give it a name and write it down between the two propositions. Now you ask your partner how much they changed their credence as a result of the new argument. They write down their new credences below the ones they previously wrote down, and write down the changes next to the argument that just got added to the board.

When your partner presents a new argument or piece of evidence, be honest about whether you have heard it before. If you have not, it should change your credence some. How much do you think? Write down your new credence. I don't think you should worry too much about being a consistent Bayesian here or anything like that. Just move your credence a bit for each argument or piece of evidence you have not heard or considered, and move it more for better arguments or stronger evidence. You don't have to commit to the last credence you write down, but you should think at least that the relative sizes of all of the changes were about right.

I think this is the core of the technique. I would love to try this. I think it would be interesting because it would focus the conversation and give players a record of how much their minds changed, and why. I also think this might make it harder to just forget the conversation and move back to your previous credence by default afterwards.

You could also iterate it. If you do not think that your partner changed their mind enough as a result of a new argument, get a new workspace and write down how much you think they should have changed their credence. They do the same. Now you can both make arguments relevant to that, and incrementally change your estimate of how much they should have changed their mind, and you both have a record of the changes.
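Here is a minimal sketch of the shared workspace as a data structure, just to make the bookkeeping concrete; all of the names are illustrative.

```python
class SharedWorkspace:
    def __init__(self, propositions):
        self.propositions = propositions
        self.history = {}     # (player, proposition) -> list of credences over time
        self.arguments = []   # (argument name, presenter, {(player, proposition): delta})

    def write_initial_credences(self, player, credences):
        for prop, p in credences.items():
            self.history[(player, prop)] = [p]

    def add_argument(self, name, presenter, updated_credences):
        # updated_credences holds the new credences of whoever moved after hearing it.
        deltas = {}
        for (player, prop), p in updated_credences.items():
            old = self.history[(player, prop)][-1]
            self.history[(player, prop)].append(p)
            deltas[(player, prop)] = round(p - old, 3)
        self.arguments.append((name, presenter, deltas))

ws = SharedWorkspace(["X", "not-X"])
ws.write_initial_credences("alice", {"X": 0.8, "not-X": 0.2})
ws.write_initial_credences("bob", {"X": 0.3, "not-X": 0.7})
ws.add_argument("new evidence E", "alice", {("bob", "X"): 0.4, ("bob", "not-X"): 0.6})
print(ws.arguments)  # a record of which arguments moved whose credences, and by how much
```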

Comment by Ronny (potato) on Off the Cuff Brangus Stuff · 2019-08-12T23:04:53.606Z · LW · GW

Ping.


Comment by Ronny (potato) on Off the Cuff Brangus Stuff · 2019-08-02T22:09:17.295Z · LW · GW

If you come up with a test or set of tests that it would be impossible to actually run in practice, but that we could do in principle if money and ethics were no object, I would still be interested in hearing those. After talking to one of my friends who is enthusiastic about chakras for just a little bit, I would not be surprised if we in fact make fairly similar predictions about the results of such tests.

Comment by Ronny (potato) on Off the Cuff Brangus Stuff · 2019-08-02T19:17:54.189Z · LW · GW

Sometimes I sort of feel like a grumpy old man that read the sequences back in the good old fashioned year of 2010. When I am in that mood I will sometimes look around at how memes spread throughout the community and say things like "this is not the rationality I grew up with". I really do not want to stir things up with this post, but I guess I do want to be empathetic to this part of me and I want to see what others think about the perspective.

One relatively small reason I feel this way is that a lot of really smart rationalists, who are my friends or who I deeply respect or both, seem to have gotten really into chakras, and maybe some other woo stuff. I want to better understand these folks. I'll admit now that I have weird biased attitudes towards woo stuff in general, but I am going to use chakras as a specific example here.

One of the sacred values of rationality that I care a lot about is that one should not discount hypotheses/perspectives because they are low status, woo, or otherwise weird.

Another is that one's beliefs should pay rent.

To be clear, I am worried that we might be failing on the second sacred value. I am not saying that we should abandon the first one as I think some people may have suggested in the past. I actually think that rationalists getting into chakras is strong evidence that we are doing great on the first sacred value.

Maybe we are not failing on the second sacred value. I want to know whether we are or not, so I want to ask rationalists who think a lot or talk enthusiastically about chakras a question:

Do chakras exist?

If you answer "yes", how do you know they exist?

I've thought a bit about how someone might answer the second question if they answer "yes" to the first question without violating the second sacred value. I've thought of basically two ways that seem possible, but there are probably others.

One way might be that you just think that chakras literally exist in the same ways that planes literally exist, or in the way that waves literally exist. Chakras are just some phenomena that are made out of some stuff like everything else. If that is the case, then it seems like we should be able to at least in principle point to some sort of test that we could run to convince me that they do exist, or you that they do not. I would definitely be interested in hearing proposals for such tests!

Another way might be that you think chakras do not literally exist like planes do, but you can make a predictive profit by pretending that they do exist. This is sort of like how I do not expect that if I could read and understand the source code for a human mind, that there would be some parts of the code that I could point to and call the utility and probability functions. Nonetheless, I think it makes sense to model humans as optimization processes with some utility function and some probability function, because modeling them that way allows me to compress my predictions about their future behavior. Of course, I would get better predictions if I could model them as mechanical objects, but doing so is just too computationally expensive for me. Maybe modeling people as having chakras, including yourself, works sort of the same way. You use some of your evidence to infer the state of their chakras, and then use that model to make testable predictions about their future behavior. In other words, you might think that chakras are real patterns. Again it seems to me that in this case we should at least in principle be able to come up with tests that would convince me that chakras exist, or you that they do not, and I would love to hear any such proposals.

Maybe you think they exist in some other sense, and then I would definitely like to hear about that.

Maybe you do not think they exist in any way, or make any predictions of any kind, and in that case, I guess I am not sure how continuing to be enthusiastic about thinking about chakras or talking about chakras is supposed to jibe with the sacred principle that one's beliefs should pay rent.

I guess it's worth mentioning that I do not feel as averse to Duncan's color wheel thing, maybe because it's not coded as "woo" to my mind. But I still think it would be fair to ask about that taxonomy exactly how we think that it cuts the universe at its joints. Asking that question still seems to me like it should reduce to figuring out what sorts of predictions to make if it in fact does, and then figuring out ways to test them.

I would really love to have several cooperative conversations about this with people who are excited about chakras, or other similar woo things, either within this framework of finding out what sorts of tests we could run to get rid of our uncertainty, or questioning the framework I propose altogether.

Comment by Ronny (potato) on Off the Cuff Brangus Stuff · 2019-08-02T11:40:24.820Z · LW · GW

Here is an idea I just thought of in an Uber ride for how to narrow down the space of languages it would be reasonable to use for universal induction. To express the k-complexity of an object x relative to a programming language L, I will write: K_L(x).

Suppose we have two programming languages. The first is Python. The second is Qython, which is a lot like Python, except that it interprets the string "A" as a program that outputs some particular algorithmically random-looking character string s, with K_Python(s) roughly equal to some very large N. I claim that intuitively, Python is a better language to use for measuring the complexity of a hypothesis than Qython. That's the notion that I just thought of a way to formally express.

There is a well known theorem that if you are using L1 to measure the complexity of objects, and I am using L2 to measure the complexity of objects, then there is a constant c such that for any object x:

K_L1(x) ≤ K_L2(x) + c

In words, this means that you might think that some objects are less complicated than I do, and you might think that some objects are more complicated than I do, but you won't think that any object is more than c complexity units more complicated than I do. Intuitively, c is just the length of the shortest program in L1 that is a compiler for L2. So worst case scenario, the shortest program in L1 that outputs x will be a compiler for L2 written in L1 (which is c characters long) plus giving that compiler the shortest program in L2 that outputs x (which would be K_L2(x) characters long).

I am going to define the k-complexity of a function relative to a programming language as the length of the shortest program in that language such that when it is given x as an input, it returns f(x). This is probably already defined that way, but jic. So say we have a function from programs in L2 to their outputs and we call that function Compile_L2, then:

c = K_L1(Compile_L2)

There is also another constant:

K_L2(Compile_L1)

The first is the length of the shortest compiler for L2 written in L1, and the second is the length of the shortest compiler for L1 written in L2. Notice that these do not need to be equal. For instance, I claim that the compiler for Qython written in Python is roughly N characters long, since we have to write the program that outputs s in Python, which by hypothesis was about N characters long, and then a bit more to get it to run that program when it reads "A", and to get that functionality to play nicely with the rest of Qython however that works out. By contrast, to write a compiler for Python in Qython it shouldn't take very long. Since Qython basically is Python, it might not take any characters, but if there are weird rules in Qython for how the string "A" is interpreted when it appears in an otherwise Python-like program, then it still shouldn't take any more characters than it takes to write a Python interpreter in regular Python.

So this is my proposed method for determining which of two programming languages it would be better to use for universal induction. Say again that we are choosing between L1 and L2. We find the pair of constants K_L1(Compile_L2) and K_L2(Compile_L1), and then compare their sizes. If K_L2(Compile_L1) is less than K_L1(Compile_L2), this means that it is easier to write a compiler for L1 in L2 than vice versa, and so there is more hidden complexity in L2's encodings than in L1's, and so we should use L1 instead of L2 for assessing the complexity of hypotheses.

Let's say that if K_L2(Compile_L1) < K_L1(Compile_L2), then L2 hides more complexity than L1.
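Here is a purely illustrative sketch of the Qython example in Python. Real Kolmogorov complexities are uncomputable, so big_string is just a stand-in for the string s, and N for its hypothesized shortest-program length; the point is only where the bulk of the code has to live.

```python
def big_string():
    # Stand-in for the algorithmically random-looking string s; by hypothesis the
    # shortest Python program printing it is about N characters long.
    return "".join(chr(37 * i % 94 + 33) for i in range(10_000))

def run_qython(source: str) -> str:
    """An interpreter for Qython written in Python.

    It has to contain (something equivalent to) big_string, so its length is at
    least roughly N. An interpreter for Python written in Qython, by contrast, is
    tiny, since Qython already behaves like Python on every program other than "A".
    That asymmetry is the sense in which Qython hides more complexity than Python.
    """
    if source.strip() == "A":
        return big_string()
    return str(eval(source))  # treat every other source string as ordinary Python

print(run_qython("A")[:20])
print(run_qython("1 + 1"))
```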

A few complications:

It is probably not always decidable whether the smallest compiler for L1 written in L2 is smaller than the smallest compiler for L2 written in L1, but this at least in principle gives us some way to specify what we mean by one language hiding more complexity than another, and it seems like at least in the case of Python vs. Qython, we can make a pretty good argument that the smallest compiler for Python written in Qython is smaller than the smallest compiler for Qython written in Python.

It is possible (I'd say probable) that if we started with some group of candidate languages and looked for languages that hide less complexity, we might run into a circle. Like the smallest compiler for L1 in L2 might be the same size as the smallest compiler for L2 in L1, but there might still be an infinite set of objects such that:

K_L1(x) ≠ K_L2(x) for every object x in that set.
In this case, the two languages would disagree about the complexity of an infinite set of objects, but at least they would disagree about it by no more than the same fixed constant in both directions. Idk, seems like probably we could do something clever there, like take the average or something, idk. If we introduce an L3 and the smallest compiler for L3 written in L1 is larger than the smallest compiler for L3 written in L2, then it seems like we should pick L1.

If there is an infinite set of languages that all stand in this relationship to each other, ie, all of the languages in an infinite set disagree about the complexity of an infinite set of objects and hide less complexity than any language not in the set, then idk, seems pretty damning for this approach, but at least we narrowed down the search space a bit?

Even if it turns out that we end up in a situation where we have an infinite set of languages that disagree about an infinite set of objects by exactly the same constant, it might be nice to have some upper bound on what that constant is.

In any case, this seems like something somebody would have thought of, and then proved the relevant theorems addressing all of the complications I raised. Ever seen something like this before? I think a friend might have suggested a paper that tried some similar method, and concluded that it wasn't a feasible strategy, but I don't remember exactly, and it might have been a totally different thing.

Watcha think?

Comment by Ronny (potato) on Measuring Optimization Power · 2019-07-27T01:46:31.280Z · LW · GW

When I started writing this comment I was confused. Then I got myself fairly less confused I think. I am going to say a bunch of things to explain my confusion, how I tried to get less confused, and then I will ask a couple questions. This comment got really long, and I may decide that it should be a post instead.

Take a system S with 8 possible states. Imagine S is like a simplified Rubik's cube type puzzle. (Thinking about mechanical Rubik's cube solvers is how I originally got confused, but using actual Rubik's cubes to explain would make the math harder.) Suppose I want to measure the optimization power of two different optimizers that optimize S and share the following preference ordering over its states: s1 ≻ s2 ≻ ... ≻ s8 (s1 most preferred).

When I let optimizer1 operate on S, optimizer1 always leaves S in its most preferred state s1. So on the first time I give optimizer1 S I get:

log2(8/1) = 3 bits

If I give S to optimizer1 a second time I get:

3 + 3 = 6 bits

This seems a bit weird to me. If we are imagining a mechanical robot with a camera that solves a Rubik's cube like puzzle, it seems weird to say that the solver gets stronger if I let it operate on the puzzle twice. I guess this would make sense for a measure of optimization pressure exerted instead of a measure of the power of the system, but that doesn't seem to be what the post was going for exactly. I guess we could fix this by dividing by the number of times we give optimizer1 S, and then we would get 3 no matter how many times we let optimizer1 operate on S. This would avoid the weird result that a mechanical puzzle solver gets more powerful the more times we let it operate on the puzzle.

Say that when I let optimizer2 operate on S, it leaves S in s1 with probability p, and leaves S in s2 with probability 1 - p, but I do not know p. If I let optimizer2 operate on S one time, and I observe s1, I get:

log2(8/1) = 3 bits

If I let optimizer2 operate on S three times, and I observe s1, s2, s2, then I get:

3 + 2 + 2 = 7 bits

Now we could use the same trick we used before and divide by the number of instances on which optimizer2 was allowed to exert optimization pressure, and this would give us 7/3. The thing is though that we do not know p, and it seems like p is relevant to how strong optimizer2 is. We can estimate p to be 2/5 using Laplace's rule, but it might be that the long run frequency of times that optimizer2 leaves S in s1 is actually .9999 and we just got unlucky. (I'm not a frequentist, long run frequency just seemed like the closest concept. Feel free to replace "long run frequency" with the prob a Solomonoff bot using the correct language assigns at the limit, or anything else reasonable.) If the long run frequency is in fact that large, then it seems like we are underestimating the power of optimizer2 just because we got a bad sample of its performance. The higher p is, the more we are underestimating optimizer2 when we measure its power from these observations.

So it seems then like there is another thing that we need to know besides the preference ordering of an optimizer, the measure over the target system in the absence of optimization, and the observed state of the target system, in order to perfectly measure the optimization power of an optimizer. In this case, it seems like we need to know p. This is a pretty easy fix: we can just take the expectation of the optimization power as originally defined with respect to the probability of observing that state when the optimizer is present, but it is a bit more complicated, and it is different.

With x being the observed outcome, U being the utility function of the optimization process, and P being the distribution over outcomes in the absence of optimization, I took the definition in the original post to be:

OP(x) = log2(1 / P(U(X) ≥ U(x)))

The definition I am proposing instead is:

E_Q[OP] = Σ_x Q(x) * log2(1 / P(U(X) ≥ U(x)))

where Q is the distribution over outcomes in the presence of optimization.

That is, you take the expectation of the original measure with respect to the distribution over outcomes you expect to observe in the presence of optimization. We could then call the original measure "optimization pressure exerted", and the second measure optimization power. For systems that are only allowed to optimize once, like humans, these values are very similar; for systems that might exert their full optimization power on several occasions depending on circumstance, like Rubik's cube solvers, these values will be different insofar as the system is allowed to optimize several times. We can think of the first measure as measuring the actual amount of optimization pressure that was exerted on the target system on a particular instance, and we can think of the second measure as the expected amount of optimization pressure that the optimizer exerts on the target system.
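Here is a small numeric sketch of the two measures side by side, using my reconstruction of the 8-state example above (the states and probabilities are illustrative):

```python
import math

STATES = list(range(1, 9))            # s1 (most preferred) ... s8, uniform absent optimization
P_NO_OPT = {s: 1 / 8 for s in STATES}

def op_exerted(observed_state):
    # Eliezer's measure: -log2 of the probability, absent optimization, of landing
    # in a state at least as preferred as the one observed.
    p_at_least_as_good = sum(P_NO_OPT[s] for s in STATES if s <= observed_state)
    return math.log2(1 / p_at_least_as_good)

print(op_exerted(1))                  # observing s1: 3.0 bits per run
print(op_exerted(2))                  # observing s2: 2.0 bits per run

def expected_op(q_with_opt):
    # The proposed measure: expectation of op_exerted under the distribution over
    # outcomes when the optimizer is present.
    return sum(q * op_exerted(s) for s, q in q_with_opt.items())

# optimizer2 leaves s1 with probability p and s2 with probability 1 - p.
p = 0.9999
print(expected_op({1: p, 2: 1 - p}))  # ~3 bits, however unlucky a particular sample of runs was
```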

To hammer the point home, there is the amount of optimization pressure that I in fact exerted on the universe this time around. Say it was a trillion bits. Then there is the expected amount of optimization pressure that I exert on the universe in a given life. Maybe I just got lucky (or unlucky) on this go around. It could be that if you reran the universe from the point at which I was born several times while varying some things that seem irrelevant, I would on average only increase the negentropy of variables I care about by a million bits. If that were the case, then using the amount of optimization pressure that I exerted on this go around as an estimate of my optimization power in general would be a huge underestimate.

Ok, so what's up here? This seems like an easy thing to notice, and I'm sure Eliezer noticed it.

Eliezer talks about how from the perspective of Deep Blue, it is exerting optimization pressure every time it plays a game, but from the perspective of the programmers, creating Deep Blue was a one time optimization cost. Is that a different way to cash out the same thing? It still seems weird to me to say that the more times Deep Blue plays chess, the higher its optimization power is. It does not seem weird to me to say that the more times a human plays chess, the higher their optimization power is. Each chess game is a subsystem of the target system of that human, e.g., the environment over time. Whereas it does seem weird to me to say that if you uploaded my brain and let my brain operate on the same universe 100 times, that the optimization power of my uploaded brain would be 100 times greater than if you only did this once.

This is a consequence of one of the nice properties of Eliezer's measure: OP sums for independent systems. It makes sense that if I think an optimizer is optimizing two independent systems, then when I measure their OP with respect to the first system and add it to their OP with respect to the second, I should get the same answer I would if I were treating the two systems jointly as one system. The Rubik's cube the first time I give it to a mechanical Rubik's cube solver, and the Rubik's cube the second time I give it to the solver, are in fact two such independent systems. So are the first time you simulate the universe after my birth and the second time. It makes sense to me that my optimization power for independent parts of the universe in a particular go-around should sum to my optimization power with respect to the two systems taken jointly as one, but it doesn't make sense to me that you should just add the optimization pressure I exert on each go to get my total optimization power. Does the measure I propose here actually sum nicely with respect to independent systems? It seems like it might, but I'm not sure.

Is this just the same as Eliezer's proposal for measuring optimization power for mixed outcomes? Seems pretty different, but maybe it isn't. Maybe this is another way to extend optimization power to mixed outcomes? It does take into account that the agent might not take an action that guarantees an outcome with certainty.

Is there some way that I am confused or missing something in the original post that it seems like I am not aware of?



Comment by Ronny (potato) on Measuring Optimization Power · 2019-07-26T21:41:40.071Z · LW · GW

Is there a particular formula for negentropy that OP has in mind? I am not seeing how the log of the inverse of the probability of observing an outcome as good or better than the one observed can be interpreted as the negentropy of a system with respect to that preference ordering.

Edit: Actually, I think I figured it out, but I would still be interested in hearing what other people think.

Comment by Ronny (potato) on Functional Decision Theory vs Causal Decision Theory: Expanding on Newcomb's Problem · 2019-05-02T23:50:15.852Z · LW · GW

Something about your proposed decision problem seems cheaty in a way that the standard Newcomb problem doesn't. I'm not sure exactly what it is, but I will try to articulate it, and maybe you can help me figure it out.

It reminds me of two different decision problems. Actually, the first one isn't really a decision problem.

Omega has decided to give all those who two box on the standard Newcomb problem 1,000,000 usd, and all those who do not 1,000 usd.

Now that's not really a decision problem, but that's not the issue with using it to decide between decision theories. I'm not sure exactly what the issue is, but it seems like it is not the decisions of the agent that make the world go one way or the other. Omega could also go around rewarding all CDT agents and punishing all FDT agents, but that wouldn't be a good reason to prefer CDT. It seems like in your problem it is not the decision of the agent that determines what their payout is, whereas in the standard Newcomb problem it is. Your problem seems more like a scenario where Omega goes around punishing agents with a particular decision theory than one where an agent's decisions determine their payout.

Now there's another decision problem this reminds me of.

Omega flips a coin and tells you: "I flipped a coin, and I would have paid you 1,000,000 usd if it came up heads only if I predicted that you would have paid me 1,000 usd if it came up tails after having this explained to you. The coin did in fact come up tails. Will you pay me?"

In this decision problem your payout also depends on what you would have done in a different hypothetical scenario, but it does not seem cheaty to me in the same way your proposed decision problem does. Maybe that is because it depends on what you would have done in this same problem had a different part of it gone differently.

I'm honestly not sure what I am tracking when I judge whether a decision problem is cheaty or not (where cheaty just means "should not be used to decide between decision theories"), but I am sure that your problem seems cheaty to me right now. Do you have any similar intuitions or hunches about what I am tracking?

Comment by Ronny (potato) on The Principle of Predicted Improvement · 2019-05-02T19:59:18.717Z · LW · GW

I had already proved it for two values of H before I contacted Sellke. How easily does this proof generalize to multiple values of H?

Comment by Ronny (potato) on The Principle of Predicted Improvement · 2019-04-25T18:37:22.683Z · LW · GW

I see. I think you could also use PPI to prove Good's theorem though. Presumably the reason it pays to get new evidence is that you should expect to assign more probability to the truth after observing new evidence?

Comment by Ronny (potato) on The Principle of Predicted Improvement · 2019-04-25T06:57:21.388Z · LW · GW

I honestly could not think of a better way to write it. I had the same problem when my friend first showed me this notation. I thought about using a different notation, but that seemed more confusing and less standard? I believe this is how they write things in information theory, but those equations usually have logs in them.

Comment by Ronny (potato) on The Principle of Predicted Improvement · 2019-04-24T15:42:09.056Z · LW · GW

I didn't take the time to check whether it did or didn't. If you would walk me through how it does, I would appreciate it.

Comment by Ronny (potato) on Asking for help teaching a critical thinking class. · 2019-03-07T17:39:42.947Z · LW · GW

Luckily, I don't know much about genetics. I totally forgot that, I'll edit the question to reflect it.

To be sure though, did what I mean about the different kinds of cognition come across? I do not actually plan on teaching any genetics.

Comment by Ronny (potato) on Does Evidence Have To Be Certain? · 2016-03-30T12:49:45.256Z · LW · GW

Yeah, the problem I have with that though is that I'm left asking: why did I change my probability in that? Is it because I updated on something else? Was I certain of that something else? If not, then why did I change my probability of that something else, and on we go down the rabbit hole of an infinite regress.

Comment by Ronny (potato) on Bayes Slays Goodman's Grue · 2015-11-08T21:24:10.946Z · LW · GW

Wait, actually, I'd like to come back to this. What programming language are we using? If it's one where either grue is primitive, or one where there are primitives that make grue easier to write than green, then grue seems simpler than green. How do we pick which language we use?

Comment by Ronny (potato) on Causal Universes · 2015-10-09T17:13:23.586Z · LW · GW

Here's my problem. I thought we were looking for a way to categorize meaningful statements. I thought we had agreed that a meaningful statement must be interpretable as or consistent with at least one DAG. But now it seems that there are ways the world can be which cannot be interpreted as even one DAG because they require a directed cycle. So have we now decided that a meaningful sentence must be interpretable as a directed, cyclic or acyclic, graph?

In general, if I say all and only statements that satisfy P are meaningful, then any statement that doesn't satisfy P must be meaningless, and all meaningless statements should be unobservable, and therefore a statement like "all and only statements that satisfy P are meaningful" should be unfalsifiable.

Comment by Ronny (potato) on Causality: a chapter by chapter review · 2015-10-07T05:34:13.223Z · LW · GW

What is Markov relative?

Comment by Ronny (potato) on The Fabric of Real Things · 2015-10-07T05:25:16.415Z · LW · GW

Does EY give his own answer to this elsewhere?

Comment by Ronny (potato) on Godel's Completeness and Incompleteness Theorems · 2015-10-07T03:40:04.295Z · LW · GW

Wait... this will seem stupid, but can't I just say: "there does not exist x where sx = 0"

nevermind

Comment by Ronny (potato) on Tell Culture · 2015-08-04T00:21:11.455Z · LW · GW

Here's a new strategy.

Use Guess culture as a default. Use Guess tricks to figure out whether the other communicator speaks Ask. Use Ask tricks to figure out whether they speak Tell.

Comment by Ronny (potato) on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T23:50:44.466Z · LW · GW

Let's forget about the oracle. What about the program that outputs X only if 1 + 1 = 2, and else prints 0? Let's call it A(1,1). The formalism requires that P(X|A(1,1)) = 1, and it requires that P(A(1,1)) = 2^-K(A(1,1)), but does it need to know that "1 + 1 = 2" is somehow proven by A(1,1) printing X?

In either case, you've shown me something that I explicitly doubted before: one can prove any provable theorem if they have access to a Solomonoff agent's distribution, and they know how to make a program that prints X iff theorem S is provable. All they have to do is check the probability the agent assigns to X conditional on that program.
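Here is a sketch of the kind of program I have in mind, with is_valid_proof standing in for a real proof checker in some fixed formal system (everything here is illustrative):

```python
from itertools import count

def prints_X_iff_provable(sentence, is_valid_proof, alphabet="01"):
    # Enumerate candidate proof strings in order of length; halt and output "X" as
    # soon as one is a valid proof of `sentence`. If `sentence` is not provable,
    # this loops forever and never outputs anything.
    for length in count(1):
        for i in range(len(alphabet) ** length):
            candidate = "".join(
                alphabet[(i // len(alphabet) ** j) % len(alphabet)] for j in range(length)
            )
            if is_valid_proof(candidate, sentence):
                return "X"

# Checking the probability a Solomonoff agent assigns to X conditional on such a
# program is the sense in which its distribution tells you whether the theorem is provable.
```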

Comment by Ronny (potato) on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T22:53:04.679Z · LW · GW

Awesome. I'm pretty sure you're right; that's the most convincing counterexample I've come across.

I have a weak doubt, but I think you can get rid of it:

let's name the program FTL()

I'm just not sure this means that the theorem itself is assigned a probability. Yes, I have an oracle, but it doesn't assign a probability to a program halting; it tells me whether it halts or not. What the Solomonoff formalism requires is that "if (halts(FTL()) == true) then P(X|FTL()) = 1" and "if (halts(FTL()) == false) then P(X|FTL()) = 0" and "P(FTL()) = 2^-K(FTL())". Where in all this is the probability of Fermat's last theorem? Having an oracle may imply knowing whether or not FTL is a theorem, but it does not imply that we must assign that theorem a probability of 1. (Or maybe, it does and I'm not seeing it.)

Edit: Come to think of it... I'm not sure there's a relevant difference between knowing whether a program that outputs True iff theorem S is provable will end up halting, and assigning probability 1 to theorem S. It does seem that I must assign 1 to statements of the form "A or ~A" or else it won't work; whereas if the theorem S is not in the domain of our probability function, nothing seems to go wrong.

In either case, this probably isn't the standard reason for believing in, or thinking about logical omniscience because the concept of logical omniscience is probably older than Solomonoff induction. (I am of course only realizing that in hindsight; now that I've seen a powerful counter example to my argument.)

Comment by Ronny (potato) on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T14:51:14.962Z · LW · GW

Upvoted for cracking me up.

Comment by Ronny (potato) on Transhumanism and the denotation-connotation gap · 2015-08-03T09:44:58.438Z · LW · GW

Terminology quibble:

I get where you get this notion of connotation from, but there's a more formal one that Quine used, which is at least related. It's the difference between an extension and a meaning. So the extensions of "vertebrate" and "things with tails" could have been identical, but that would not mean that the two predicates have the same meanings. To check if the extensions of two terms are identical, you check the world; it seems like to check whether two meanings are identical, you have to check your own mind.

Edit: Whoops, somebody already mentioned this.

Comment by Ronny (potato) on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T09:16:05.996Z · LW · GW

I agree. I am saying that we need not assign it a probability at all. Your solution assumes that there is a way to express "two" in the language. Also, the proposition you made is more like "one elephant and another elephant makes two elephants" not "1 + 1 = 2".

I think we'd be better off trying to find a way to express 1 + 1 = 2 as a boolean function on programs.

Comment by Ronny (potato) on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-03T09:10:30.053Z · LW · GW

This is super interesting. Is this based on UDT?

Comment by Ronny (potato) on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T08:51:11.074Z · LW · GW

How do you express, Fermat's last theorem for instance, as a boolean combination of the language I gave, or as a boolean combination of programs? Boolean algebra is not strong enough to derive, or even express all of math.

edit: Let's start simple. How do you express 1 + 1 = 2 in the language I gave, or as a boolean combination of programs?

Comment by Ronny (potato) on How An Algorithm Feels From Inside · 2013-09-18T19:30:15.239Z · LW · GW

Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a "rube" instead? You're going to put it in the rube bin—why not call it a "rube"?

But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

So if you find a blue egg-shaped object that contains palladium, and you ask "Is it a blegg?", the answer depends on what you have to do with the answer: If you ask "Which bin does the object go in?", then you choose as if the object is a rube. But if you ask "If I turn off the light, will it glow?", you predict as if the object is a blegg. In one case, the question "Is it a blegg?" stands in for the disguised query, "Which bin does it go in?". In the other case, the question "Is it a blegg?" stands in for the disguised query, "Will it glow in the dark?"

This is amazing, but too fast. It's too important and counterintuitive to do that fast, and we absolutely devastatingly painfully need it in philosophy departments. Please help us. This is an S.O.S.; our ship is sinking. Write this again longer, so that I can show it to people and change their minds. People who are not LessWrong literate. It's too important to go over that fast, anyway. I also ask that you, or anyone for that matter, find a simple real world example which has roughly analogous parameters to the ones you specified, and use that as the example instead. Somebody do it please, I'm too busy arguing with philosophy professors about it, and there are better writers on this site that could take up the endeavor. It would be useful and well liked anyway, chances are, and I'll give what rewards I can.

Comment by Ronny (potato) on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-09T01:52:27.467Z · LW · GW

Here's a question: if we had the ability to input a sensory event with a likelihood ratio of 3^^^^3:1, would this whole problem be solved?

Comment by Ronny (potato) on The Fabric of Real Things · 2012-10-14T23:04:26.115Z · LW · GW

Hmm, it depends on whether or not you can give finite complete descriptions of those algorithms; if so, I don't see the problem with just tagging them on. If you can give finite descriptions of the algorithm, then its Kolmogorov complexity will be finite, and the prior 2^-K(h) will still give nonzero probabilities to hypercomputation environments.

If there are no such finite complete descriptions, then I gotta go back to the drawing board, cause the universe could totally allow hyper computations.

On a side note, where should I go to read more about hyper-computation?

Comment by Ronny (potato) on The Fabric of Real Things · 2012-10-14T09:08:06.371Z · LW · GW

At first thought, it seems that if it could be falsified, then it would fail the criterion of containing all and only those hypotheses which could in principle be falsified. Kind of like a meta-reference problem: if it does constrain experience, then there are falsifiable hypotheses which are not interpretable as causal graphs (no matter how unlikely). This is so because the sentence says "all and only those hypotheses that can be interpreted as causal graphs are falsifiable", and for it to be falsified means verifying that there is at least one falsifiable hypothesis which cannot be interpreted as a causal graph. Short answer: not if we got it right this time.

(term clarification) All and only hypotheses that constrain experience are falsifiable and verifiable, for there exists a portion of experience space which if observed falsifies them, and the rest verifies them (probabilistically).

Comment by Ronny (potato) on The Fabric of Real Things · 2012-10-14T08:54:09.342Z · LW · GW

I have to ask, how does this metaphysics (cause that's what it is) account for mathematical truths? What causal models do those represent?

My bad:

Someone already asked this more cleverly than I did.

Comment by Ronny (potato) on The Fabric of Real Things · 2012-10-14T08:46:56.158Z · LW · GW

I have a plausibly equivalent (or at least implies Ey's) candidate for the fabric of real things, i.e., the space of hypotheses which could in principle be true, i.e., the space of beliefs which have sense:

A hypothesis has nonzero probability iff it's computable or semi-computable.

It's rather obviously inspired by Solomonoff abduction, and is a sound principle for any being attempting to approximate the universal prior.

Comment by Ronny (potato) on The Fabric of Real Things · 2012-10-13T19:10:30.576Z · LW · GW

It seems to me that this is the primary thing that we should be working on. If probability is subjective, and causality reduces to probability, then isn't causality subjective, i.e., a function of background knowledge?

Comment by Ronny (potato) on Causality: a chapter by chapter review · 2012-10-11T11:02:43.609Z · LW · GW

Looking it over, I could have been much clearer (sorry). Specifically, I want to know the following. Given a DAG of the form:

A -> C <- B

Is it true that (in all prior joint distributions where A is independent of B, but A is evidence of C, and B is evidence of C) A is non-independent of B, given C is held constant?

I proved that this is so when A & B is evidence against C, and also when A & B is independent of C; the only case I am missing is when A & B is evidence for C.

It's clear enough to me that when you have one non-colliding path between any two variables, they must not be independent; and that if we were to hold any of the variables along that path constant, those variables would be independent. This can all be shown given standard probability theory and correlation alone. It can also be shown that if there are only colliding paths between two variables, those two variables are independent. If I have understood the theory of d-separation correctly, if we hold the collision variable (assuming there is only one) on one of these paths constant, the two variables should become non-independent (either evidence for or against one another). I have proven that this is so in two of the (at least) three cases that fit the given DAG using standard probability theory.

Those are the proofs I gave above.

Comment by Ronny (potato) on Causality: a chapter by chapter review · 2012-10-02T04:43:50.688Z · LW · GW

I have a question: is d-separation implied by the Kolmogorov axioms?

I've proven that it is in some cases:

Premises:

1) A = A|B :. A|BC ≤ A|C
2) C < C|A
3) C < C|B
4) C|AB < C

proof starts:
1) B|C > B {via premise 3}
2) A|BC = A * B * C|AB / (C * B|C) {via premise 1}
3) A|BC * C = A * B * C|AB / B|C
4) A|BC * C / A = B * C|AB / B|C
5) B * C|AB / B|C < C|AB {via line 1}
6) B * C|AB / B|C < C {via line 5 and premise 4}
7) A|BC * C / A < C {via lines 6 and 4}
8) A|C = A * C|A / C
9) A|C * C = A * C|A
10) A|C * C / A = C|A
11) C < A|C * C / A {via line 10 and premise 2}
12) A|BC * C / A < A|C * C / A {via lines 11 and 7}
13) A|BC < A|C
Q.E.D.

Premises:

1) A = A|B :. A|BC ≤ A|C
2) C < C|A
3) C < C|B
4) C|AB = C

proof starts:

1) A|C = A * C|A / C
2) A|BC = A * B * C / (B * C|B) {via premises 1 and 4}
3) A|BC = A * C / C|B
4) A * C < A * C|A {via premise 2}
5) A * C / C|B < A * C|A / C {via line 4 and premise 3}
6) A|BC < A|C {via lines 1, 3, and 5}
Q.E.D.

If it is implied by classical probability theory, could someone please refer me to a proof?
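Here is a quick numeric sanity check (not a proof) of the two cases I proved: sample random joint distributions over binary A, B, C with A independent of B by construction, keep those satisfying C < C|A, C < C|B, and C|AB ≤ C, and confirm A|BC ≤ A|C.

```python
import random

def random_joint():
    # A -> C <- B, with A and B independent by construction.
    pa, pb = random.uniform(0.05, 0.95), random.uniform(0.05, 0.95)
    pc_given = {(a, b): random.uniform(0.05, 0.95) for a in (0, 1) for b in (0, 1)}
    joint = {}
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p = (pa if a else 1 - pa) * (pb if b else 1 - pb)
                p *= pc_given[(a, b)] if c else 1 - pc_given[(a, b)]
                joint[(a, b, c)] = p
    return joint

def prob(joint, **fixed):
    idx = {"a": 0, "b": 1, "c": 2}
    return sum(p for outcome, p in joint.items()
               if all(outcome[idx[k]] == v for k, v in fixed.items()))

checked = violations = 0
for _ in range(50000):
    j = random_joint()
    c = prob(j, c=1)
    c_a = prob(j, a=1, c=1) / prob(j, a=1)
    c_b = prob(j, b=1, c=1) / prob(j, b=1)
    c_ab = prob(j, a=1, b=1, c=1) / prob(j, a=1, b=1)
    if not (c_a > c and c_b > c and c_ab <= c):
        continue  # premises 2-4 not satisfied (premise 1 holds by construction)
    a_c = prob(j, a=1, c=1) / prob(j, c=1)
    a_bc = prob(j, a=1, b=1, c=1) / prob(j, b=1, c=1)
    checked += 1
    violations += a_bc > a_c + 1e-12
print(checked, "distributions checked,", violations, "violations")  # expect 0 violations
```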

Comment by Ronny (potato) on Terminal Values and Instrumental Values · 2012-09-16T07:50:30.774Z · LW · GW

A real deadlock I have with using your algorithmic meta-ethics to think about object level ethics is that I don't know whose volition, or "should" label, I should extrapolate from. It allows me to figure out what's right for me, and what's right for any group given certain shared extrapolated terminal values, but it doesn't tell me what to do when I am dealing with a population with non-converging extrapolations, or with someone that has different extrapolated values from me (hypothetically).

These individuals are rare, but they likely exist.

Comment by Ronny (potato) on Math is Subjunctively Objective · 2012-09-16T07:27:11.184Z · LW · GW

You've misunderstood me. It's really not at all conspicuous to allow a non-empty "set" into your ontology, but if you'd prefer we can talk about heaps; they serve for my purposes here (of course, by "heap", I mean any random pile of stuff). Every heap has parts: you're a heap of cells, decks are heaps of cards, masses are heaps of atoms, etc. Now if you apply a level filter to the parts of a heap, you can count them. For instance, I can count the organs in your body, count the organ cells in your body, and end up with two different values, though I counted the same object. The same object can constitute many heaps, as long as there are several ways of dividing the object into parts. So what we can do is just talk about the laws of heap combination, rather than the laws of numbers. We don't require any further generality in our mathematics to do all our counting, and yet the only objects I've had to adopt into my ontology are heaps (rather inconspicuous material fellows, IMHO).

I should mention that this is not my real suggestion for a foundation of mathematics, but when it comes to the challenge of interpreting the theory of natural numbers without adopting any ghostly quantities, heaps work just fine.

(edit): I should mention that heaps, requiring only that you accept a whole with parts and a level test on any given part, are much more ontologically inconspicuous than pure sets. Where exactly is the null set? Where is any pure set? I've never seen any of them. Of course, I see heaps all over the place.

Comment by Ronny (potato) on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2012-09-11T03:57:41.934Z · LW · GW

"

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?"

IMHO, all human psychologies have a hard time updating to believe they're poorly built. We are by nature arrogant. Do not forget that common folk often "choose" what to believe after they think about how it feels to believe it.

(Brilliant article btw)

(edit): "Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?"

IMHO I think there are plenty of cognitive biases that can explain that sort of behavior in healthy patients. Confirmation bias and the affect heuristic are the first to come to mind.

Comment by Ronny (potato) on [SEQ RERUN] Math is Subjunctively Objective · 2012-08-13T15:22:37.746Z · LW · GW

Why not call the set of all sets of actual objects with cardinality 3, "three", the set of all sets of physical objects with cardinality 2, "two", and the set of all sets of physical objects with cardinality 5, "five"? Then when I said that 2+3=5, all I would mean is that for any x in two and any y in three, the union of x and y is in five. If you allow sets of physical objects, and sets of sets of physical objects, into your ontology, then you got this; 2+3=5 no matter what anyone thinks, and two and three are real objects existing out there.

Comment by Ronny (potato) on Math is Subjunctively Objective · 2012-08-13T15:20:07.846Z · LW · GW

Why not call the set of all sets of actual objects with cardinality 3, "three", the set of all sets of physical objects with cardinality 2, "two", and the set of all sets of physical objects with cardinality 5, "five"? Then when I said that 2+3=5, all I would mean is that for any x in two and any y in three, the union of x and y is in five. If you allow sets of physical objects, and sets of sets of physical objects, into your ontology, then you got this; 2+3=5 no matter what anyone thinks, and two and three are real objects existing out there.
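Here is a tiny concrete check of that claim in Python; note that the union of an x in two and a y in three lands in five only when x and y are disjoint, which the counting picture implicitly assumes.

```python
from itertools import combinations

objects = ["rock", "tree", "cat", "cup", "star", "boat", "lamp"]
two = [set(s) for s in combinations(objects, 2)]
three = [set(s) for s in combinations(objects, 3)]
five = [set(s) for s in combinations(objects, 5)]

assert all(x | y in five
           for x in two for y in three
           if not (x & y))  # only disjoint pairs
print("2 + 3 = 5 holds for every disjoint pair drawn from", len(objects), "objects")
```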

Comment by Ronny (potato) on Rationality Quotes August 2012 · 2012-08-13T14:42:09.673Z · LW · GW

It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.

Comment by Ronny (potato) on [SEQ RERUN] The Meaning of Right · 2012-08-13T14:37:12.071Z · LW · GW

There's no purpose to purpose, but there's still plenty of purpose in the object level.

Comment by Ronny (potato) on [SEQ RERUN] The Meaning of Right · 2012-08-13T14:34:09.921Z · LW · GW

Attempt at a four sentence summary for practicing ethical agents:

You decide just how right some event is by approximating an ideal computation. This is why if you think about it longer, sometimes you change your mind about how right an event was. This solves the problem of metaethics. However, most of the work for object level ethicologists remains open, e.g., specifying the ideal computation we approximate when we decide how right some event is.

Comment by Ronny (potato) on Torture vs. Dust Specks · 2012-08-10T22:06:00.541Z · LW · GW

Here's a suggestion: if someone going through a fate A, is incapable of noticing whether or not they're going through fate B, then fate A is infinitely worse than fate B.