Off the Cuff Brangus Stuff
post by Ronny Fernandez (ronny-fernandez) · 2019-08-02T11:40:24.628Z · LW · GW · 15 comments
Comments sorted by top scores.
comment by Ronny Fernandez (ronny-fernandez) · 2019-08-02T19:17:54.189Z · LW(p) · GW(p)
Sometimes I sort of feel like a grumpy old man that read the sequences back in the good old fashioned year of 2010. When I am in that mood I will sometimes look around at how memes spread throughout the community and say things like "this is not the rationality I grew up with". I really do not want to stir things up with this post, but I guess I do want to be empathetic to this part of me and I want to see what others think about the perspective.
One relatively small reason I feel this way is that a lot of really smart rationalists, who are my friends or who I deeply respect or both, seem to have gotten really into chakras, and maybe some other woo stuff. I want to better understand these folks. I'll admit now that I have weird biased attitudes towards woo stuff in general, but I am going to use chakras as a specific example here.
One of the sacred values of rationality that I care a lot about is that one should not discount hypotheses/perspectives because they are low status, woo, or otherwise weird. [LW · GW]
Another is that one's beliefs should pay rent [LW · GW].
To be clear, I am worried that we might be failing on the second sacred value. I am not saying that we should abandon the first one as I think some people may have suggested in the past. I actually think that rationalists getting into chakras is strong evidence that we are doing great on the first sacred value.
Maybe we are not failing on the second sacred value. I want to know whether we are or not, so I want to ask rationalists who think a lot or talk enthusiastically about chakras a question:
Do chakras exist?
If you answer "yes", how do you know they exist?
I've thought a bit about how someone might answer the second question if they answer "yes" to the first question without violating the second sacred value [? · GW]. I've thought of basically two ways that seem possible, but there are probably others.
One way might be that you just think that chakras literally exist in the same ways that planes literally exist, or in the way that waves literally exist. Chakras are just some phenomena that are made out of some stuff like everything else. If that is the case, then it seems like we should be able to at least in principle point to some sort of test that we could run to convince me that they do exist, or you that they do not. I would definitely be interested in hearing proposals for such tests!
Another way might be that you think chakras do not literally exist like planes do, but you can make a predictive profit [LW · GW] by pretending that they do exist. This is sort of like how I do not expect that if I could read and understand the source code for a human mind, that there would be some parts of the code that I could point to and call the utility and probability functions. Nonetheless, I think it makes sense to model humans as optimization processes with some utility function and some probability function, because modeling them that way allows me to compress my predictions about their future behavior. Of course, I would get better predictions if I could model them as mechanical objects, but doing so is just too computationally expensive for me. Maybe modeling people as having chakras, including yourself, works sort of the same way. You use some of your evidence to infer the state of their chakras, and then use that model to make testable predictions about their future behavior. In other words, you might think that chakras are real patterns. Again it seems to me that in this case we should at least in principle be able to come up with tests that would convince me that chakras exist, or you that they do not, and I would love to hear any such proposals.
Maybe you think they exist in some other sense, and then I would definitely like to hear about that.
Maybe you do not think they exist in any way, or make any predictions of any kind, and in that case, I guess I am not sure how continuing to be enthusiastic about thinking about chakras or talking about chakras is supposed to jibe with the sacred principle that one's beliefs should pay rent.
I guess it's worth mentioning that I do not feel as averse to Duncan's color wheel thing, maybe because it's not coded as "woo" to my mind. But I still think it would be fair to ask about that taxonomy exactly how we think that it cuts the universe at its joints. Asking that question still seems to me like it should reduce to figuring out what sorts of predictions to make if it in fact does, and then figuring out ways to test them.
I would really love to have several cooperative conversations about this with people who are excited about chakras, or other similar woo things, either within this framework of finding out what sorts of tests we could run to get rid of our uncertainty, or questioning the framework I propose altogether.
Replies from: Stag, habryka4, Hazard, ronny-fernandez
↑ comment by Stag · 2019-08-09T09:08:25.812Z · LW(p) · GW(p)
I am not one of the Old Guard, but I have an uneasy feeling about something related to the Chakra phenomenon.
It feels like there's a lot of hidden value clustered around wooy topics like Chakras and Tulpas, and the right orientation towards these topics seems fairly straightforward: if it calls out to you, investigate and, if you please, report. What feels less clear to me is how I as an individual or as a member of some broader rat community should respond when, according to me, people do not pass certain forms of bullshit tests.
This comes from someone with little interest in or knowledge of the former, but after accidentally stumbling into some Tulpa-related territory and bumbling around in it for a while, it turns out that the Internal Family Systems model captures a large part of what I was grasping towards, this time with testable predictions and the whole deal.
I haven't given the individual-as-part-of-community thing that much thought, but my intuition is that I would make a poor judge for when to say "nope, your thing is BS" and I'm not sure what metric we might use to figure out who would make for a better judge besides overall faith in reasoning capability.
↑ comment by habryka (habryka4) · 2019-08-02T20:15:04.744Z · LW(p) · GW(p)
I have some thoughts about this (as someone who isn't really into the chakra stuff, but feels like it's relatively straightforward to answer the meta-questions that you are asking here). Feel free to ping me in a week if I haven't written a response to this.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2019-08-12T23:04:53.606Z · LW(p) · GW(p)
Ping.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2019-08-13T00:13:47.751Z · LW(p) · GW(p)
Ok, let me give it a try. I am trying to not spend too much time on this, so I prefer to start with a rough draft and see whether there is anything interesting here before I write a massive essay.
You say the following:
Do chakras exist?
In some sense I might be missing the point, since the answer to this is basically just "no". Though obviously I still think they form a meaningful category of something: in my model they form a meaningful category of "mental experiences" and "mental procedures", and definitely not a meaningful category of real atom-like things in the external world.
Another way might be that you think chakras do not literally exist like planes do, but you can make a predictive profit [LW · GW] by pretending that they do exist
I don't think the epistemically healthy thing is to pretend that they exist as some external force. Here is an analogy that I think kind of explains the idea of "auras", which are a broader set than just chakras:
Imagine you are talking to a chessmaster who has played 20000 hours of chess. You show him a position and he responds with "Oh, black is really open on the right". You ask "what do you mean by 'open on the right'?". He says: "Black's defense on the right is really weak, I could push through that immediately if I wanted to", while making the motion of picking up a piece with his right hand and pushing it through the right side of black's board.
As you poke him more, his sense of "openness" will probably correspond to lots of proprioceptive experiences like "weak", "fragile", "strong", "forceful", "smashing", "soft", etc.
Now, I think it would be accurate to describe (in buddhist/spiritual terms) the experience of the chessmaster as reading an "aura" off the chessboard. It's useful to describe it as such because a lot of its mental representation is cashed out in the same attributes that people and physical objects in general have, even though its referent is the state of some chess game, which obviously doesn't have those attributes straightforwardly.
My read of what the deal with "chakras" is, is that it's basically trying to talk about the proprioceptive subsets of many mental representations. So in thinking about something like a chessboard, you can better understand your own mental models of it, by getting a sense of what the natural clusters of proprioceptive experiences are that tend to correlate with certain attributes of models (like how feeling vulnerable around your stomach corresponds to a concept of openness in a chess position).
You can also apply them to other people, and try to understand what other people are experiencing by trying to read their body-language, which gives you evidence about the proprioceptive experiences that their current thoughts are causing (which tend to feed back into body-language), which allows you to make better inferences about their mental state.
I haven't actually looked much into whether the usual set of chakras tend to be particularly good categories for the relationship between proprioceptive experiences and model attributes, so I can't speak much about that. But it seems clear that there are likely some natural categories here, and referring to them as "chakras" seems fine to me.
↑ comment by Hazard · 2019-08-03T18:57:45.567Z · LW(p) · GW(p)
lol on the grumpy old man part, I feel that sometimes :)
I'm not really familiar with what chakras are supposed to be about, but I'm decently familiar with yoga (200h level training several years ago). For the first 2/3 of the training we just focused on movement and anatomy, and the last 1/3 was teaching and theory. My teacher told me that there was this stuff called prana that flowed through living beings, and that breath work was all about getting the right prana flow.
I thought that was a bit weird, but the breathing techniques we actually did also had lovely and noticeable effects on my mood/body.
My frame: some woo frameworks came about through X years of experimentation and finding lots of little tweaks that work, and then the woo framework co-evolved, or came afterwards, as a way to tie together all these disjointed bits of accumulated knowledge. So when I go to evaluate something like chakras, I treat the actual theory as secondary to the actual pointers, "how chakras tell me to live my life".
Now, any given woo framework may or may not have that many useful accumulated tidbits; that's where we have to try it for ourselves and see if it works. I've done enough yoga to be incredibly confident that though prana may not carve reality at the joints or be real, I'm happy to ask a master yogi how to handle my body better.
Hmmmmm, so I guess the thing I wanted to say to you was, when having this chakra discussion with whomever, make sure to ask them, "What are the concrete things chakras tell me to do with my body/mind?" and then see if those things have any effect.
↑ comment by Ronny Fernandez (ronny-fernandez) · 2019-08-02T22:09:17.295Z · LW(p) · GW(p)
If you come up with a test or set of tests that it would be impossible to actually run in practice, but that we could do in principle if money and ethics were no object, I would still be interested in hearing those. After talking to one of my friends who is enthusiastic about chakras for just a little bit, I would not be surprised if we in fact make fairly similar predictions about the results of such tests.
comment by Ronny Fernandez (ronny-fernandez) · 2019-08-02T11:40:24.820Z · LW(p) · GW(p)
Here is an idea I just thought of in an Uber ride for how to narrow down the space of languages it would be reasonable to use for universal induction. To express the k-complexity of an object $x$ relative to a programming language $L$, I will write $K_L(x)$.
Suppose we have two programming languages. The first is Python. The second is Qython, which is a lot like Python, except that it interprets the string "A" as a program that outputs some particular algorithmically random looking character string $s$ with $K_{Python}(s)$ very large. I claim that intuitively, Python is a better language to use for measuring the complexity of a hypothesis than Qython. That's the notion that I just thought of a way to formally express.
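For concreteness, here is a minimal sketch of what a Qython interpreter might look like, written in Python. The specific string, seed, and helper names are my own stand-ins for illustration (a short seeded PRNG string is of course not actually algorithmically random; it just plays that role in the example):

```python
import random

# Stand-in for the particular long, random-looking string that the Qython spec
# hard-codes. In the thought experiment this string has very high Kolmogorov
# complexity relative to Python; a seeded PRNG string is only a cheap stand-in.
_rng = random.Random(0)
BIG_RANDOM_LOOKING_STRING = "".join(_rng.choice("01") for _ in range(10_000))

def run_qython(source: str) -> None:
    """Run a Qython program. Qython is exactly Python, except that the
    one-character program "A" prints the big hard-coded string."""
    if source == "A":
        print(BIG_RANDOM_LOOKING_STRING)
    else:
        exec(source)  # every other Qython program is just a Python program

# The one-character Qython program "A" does the work of ~10,000 characters of Python:
run_qython("A")
run_qython('print("an ordinary program, same in Python and Qython")')
```

The point of the example shows up directly in the code: any Qython interpreter has to carry the whole string around with it, which is exactly why the shortest compiler for Qython written in Python is so long.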
There is a well known theorem that if you are using $L_1$ to measure the complexity of objects, and I am using $L_2$ to measure the complexity of objects, then there is a constant $c$ such that for any object $x$: $K_{L_1}(x) \le K_{L_2}(x) + c$.
In words, this means that you might think that some objects are less complicated than I do, and you might think that some objects are more complicated than I do, but you won't think that any object is more than $c$ complexity units more complicated than I do. Intuitively, $c$ is just the length of the shortest program in $L_1$ that is a compiler for $L_2$. So worst case scenario, the shortest program in $L_1$ that outputs $x$ will be a compiler for $L_2$ written in $L_1$ (which is $c$ characters long) plus giving that compiler the shortest program in $L_2$ that outputs $x$ (which would be $K_{L_2}(x)$ characters long).
I am going to define the k-complexity of a function relative to a programming language as the length of the shortest program in that language such that when it is given $x$ as an input, it returns $f(x)$. This is probably already defined that way, but jic. So say we have the function from programs in $L_2$ to their outputs and we call that function $L_2(\cdot)$; then the constant above is $K_{L_1}(L_2)$.
There is also another constant: $K_{L_2}(L_1)$.
The first is the length of the shortest compiler for $L_2$ written in $L_1$, and the second is the length of the shortest compiler for $L_1$ written in $L_2$. Notice that these do not need to be equal. For instance, I claim that the compiler for Qython written in Python is roughly $K_{Python}(s)$ characters long, since we have to write the program that outputs $s$ in Python, which by hypothesis was about $K_{Python}(s)$ characters long, and then a bit more to get it to run that program when it reads "A", and to get that functionality to play nicely with the rest of Qython however that works out. By contrast, to write a compiler for Python in Qython it shouldn't take very long. Since Qython basically is Python, it might not take any characters, but if there are weird rules in Qython for how the string "A" is interpreted when it appears in an otherwise Python-like program, then it still shouldn't take any more characters than it takes to write a Python interpreter in regular Python.
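Putting the two directions side by side (my reconstruction of the formulas, using the $K_L(\cdot)$ notation from above):

$$\forall x:\quad K_{L_1}(x) \;\le\; K_{L_2}(x) + K_{L_1}(L_2) \qquad \text{and} \qquad K_{L_2}(x) \;\le\; K_{L_1}(x) + K_{L_2}(L_1).$$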
So this is my proposed method for determining which of two programming languages it would be better to use for universal induction. Say again that we are choosing between $L_1$ and $L_2$. We find the pair of constants $K_{L_1}(L_2)$ and $K_{L_2}(L_1)$, and then compare their sizes. If $K_{L_1}(L_2)$ is less than $K_{L_2}(L_1)$, this means that it is easier to write a compiler for $L_2$ in $L_1$ than vice versa, and so there is more hidden complexity in $L_1$'s encodings than in $L_2$'s, and so we should use $L_2$ instead of $L_1$ for assessing the complexity of hypotheses.
Let's say that if $K_{L_1}(L_2) < K_{L_2}(L_1)$, then $L_1$ hides more complexity than $L_2$.
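As a purely conceptual sketch of the proposed rule: Kolmogorov complexity is uncomputable, so the `shortest_compiler_length` function below is a hypothetical oracle, not something we could actually implement; the point is only to pin down which comparison is being proposed.

```python
def shortest_compiler_length(target: str, host: str) -> int:
    """Hypothetical oracle for K_host(target): the length of the shortest
    program in `host` that interprets programs of `target`.
    Uncomputable in general; a stand-in for the definition above."""
    raise NotImplementedError("uncomputable; illustration only")

def preferred_language_for_induction(l1: str, l2: str) -> str:
    """Return the language that hides *less* complexity, i.e. the one we
    should use to measure the complexity of hypotheses."""
    k_l1_of_l2 = shortest_compiler_length(target=l2, host=l1)  # K_{L1}(L2)
    k_l2_of_l1 = shortest_compiler_length(target=l1, host=l2)  # K_{L2}(L1)
    if k_l1_of_l2 < k_l2_of_l1:
        # Easier to write a compiler for l2 inside l1 than vice versa,
        # so l1 is hiding complexity in its encodings; prefer l2.
        return l2
    elif k_l2_of_l1 < k_l1_of_l2:
        return l1
    else:
        return "tie"  # the circular case discussed in the complications below

# On the Qython example we would expect
#   shortest_compiler_length(target="Python", host="Qython") to be tiny, and
#   shortest_compiler_length(target="Qython", host="Python") to be huge,
# so preferred_language_for_induction("Qython", "Python") == "Python".
```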
A few complications:
It is probably not always decidable whether the smallest compiler for $L_1$ written in $L_2$ is smaller than the smallest compiler for $L_2$ written in $L_1$, but this at least in principle gives us some way to specify what we mean by one language hiding more complexity than another, and it seems like at least in the case of Python vs. Qython, we can make a pretty good argument that the smallest compiler for Python written in Qython is smaller than the smallest compiler for Qython written in Python.
It is possible (I'd say probable) that if we started with some group of candidate languages and looked for languages that hide less complexity, we might run into a circle. Like the smallest compiler for $L_1$ written in $L_2$ might be the same size as the smallest compiler for $L_2$ written in $L_1$, but there might still be an infinite set of objects $x$ such that $K_{L_1}(x) \neq K_{L_2}(x)$.
In this case, the two languages would disagree about the complexity of an infinite set of objects, but at least they would disagree about it by no more than the same fixed constant in both directions. Idk, seems like probably we could do something clever there, like take the average or something, idk. If we introduce an $L_3$, and the smallest compiler for $L_3$ written in $L_1$ is larger than it is written in $L_2$, then it seems like we should pick $L_2$.
If there is an infinite set of languages that all stand in this relationship to each other, ie, all of the languages in an infinite set disagree about the complexity of an infinite set of objects and hide less complexity than any language not in the set, then idk, seems pretty damning for this approach, but at least we narrowed down the search space a bit?
Even if it turns out that we end up in a situation where we have an infinite set of languages that disagree about an infinite set of objects by exactly the same constant, it might be nice to have some upper bound on what that constant is.
In any case, this seems like something somebody would have thought of, and then proved the relevant theorems addressing all of the complications I raised. Ever seen something like this before? I think a friend might have suggested a paper that tried some similar method, and concluded that it wasn't a feasible strategy, but I don't remember exactly, and it might have been a totally different thing.
Watcha think?
comment by Ronny Fernandez (ronny-fernandez) · 2019-10-10T10:00:08.612Z · LW(p) · GW(p)
Here is an idea for a disagreement resolution technique. I think this will work best:
* with one other partner you disagree with.
* when the beliefs you disagree about are clearly about what the world is like.
* when the beliefs you disagree about are mutually exclusive.
* when everybody genuinely wants to figure out what is going on.
Probably doesn't really require all of those though.
The first step is that you both write out your beliefs on a shared work space. This can be a notebook or a whiteboard or anything like that. Then you each write down your credences next to each of the statements on the work space.
Now, when you want to make a new argument or present a new piece of evidence, you should ask your partner if they have heard it before after you present it. Maybe you should ask them questions about it beforehand to verify that they have not. If they have not heard it before, or had not considered it, you give it a name and write it down between the two propositions. Now you ask your partner how much they changed their credence as a result of the new argument. They write down their new credences below the ones they previously wrote down, and write down the changes next to the argument that just got added to the board.
When your partner presents a new argument or piece of evidence, be honest about whether you have heard it before. If you have not, it should change your credence some. How much do you think? Write down your new credence. I don't think you should worry too much about being a consistent Bayesian here or anything like that. Just move your credence a bit for each argument or piece of evidence you have not heard or considered, and move it more for better arguments or stronger evidence. You don't have to commit to the last credence you write down, but you should think at least that the relative sizes of all of the changes were about right.
I think this is the core of the technique. I would love to try this. I think it would be interesting because it would focus the conversation and give players a record of how much their minds changed, and why. I also think this might make it harder to just forget the conversation and move back to your previous credence by default afterwards.
You could also iterate it. If you do not think that your partner changed their mind enough as a result of a new argument, get a new workspace and write down how much you think they should have changed their credence. They do the same. Now you can both make arguments relevant to that, and incrementally change your estimate of how much they should have changed their mind, and you both have a record of the changes.
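If it helps to see the bookkeeping concretely, here is a minimal sketch of the shared workspace as I read the proposal; the class and method names are my own invention for illustration, not part of the technique itself:

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    propositions: list[str]
    # credences[person][proposition] -> current probability
    credences: dict[str, dict[str, float]]
    # record of (argument name, listener, proposition, old credence, new credence)
    log: list[tuple[str, str, str, float, float]] = field(default_factory=list)

    def record_argument(self, name: str, listener: str,
                        proposition: str, new_credence: float) -> None:
        """The listener says they had not heard this argument before and
        reports an updated credence; we write down the change next to it."""
        old = self.credences[listener][proposition]
        self.credences[listener][proposition] = new_credence
        self.log.append((name, listener, proposition, old, new_credence))

# Usage: two partners disagree about mutually exclusive propositions P and not-P.
ws = Workspace(
    propositions=["P", "not-P"],
    credences={"alice": {"P": 0.8, "not-P": 0.2}, "bob": {"P": 0.3, "not-P": 0.7}},
)
ws.record_argument("new evidence E", listener="bob", proposition="P", new_credence=0.45)
for entry in ws.log:
    print(entry)  # the running record of how much each argument moved whom
```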
comment by Ronny Fernandez (ronny-fernandez) · 2021-12-10T22:07:22.628Z · LW(p) · GW(p)
A lot of folks seem to think that general intelligences are algorithmically simple. Paul Christiano seems to think this when he says that the universal distribution is dominated by simple consequentialists.
But the only formalism I know for general intelligences is uncomputable, which is as algorithmically complicated as you can get.
The computable approximations are plausibly simple, but are the tractable approximations simple? The only example I have of a physically realized agi seems to be very much not algorithmically simple.
Thoughts?
Replies from: davidad
↑ comment by davidad · 2021-12-10T22:30:54.559Z · LW(p) · GW(p)
Just want to point out that the minimum-description-length prior and the speed prior are two very different notions of simplicity. The universal distribution Paul is referring to is probably the former. Simple in the description-length sense is not incompatible with uncomputability. In my experience of conversations at AIRCS workshops, it's perennially considered an open question whether or not the speed prior is also full of consequentialists.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2021-12-10T23:01:52.874Z · LW(p) · GW(p)
Simple in the description length sense is incompatible with uncomputability. Uncomputability means there is no finite way to point to the function. That’s what I currently think, but I’m confused about you understanding all those words and disagreeing.
Replies from: davidad
↑ comment by davidad · 2021-12-11T21:46:17.467Z · LW(p) · GW(p)
There's a level-slip here, which is more common in discussions of algorithmic probability than I realized when I wrote as if it was an obvious nitpick.
There are two ways to point at the Solomonoff prior: the operational way (Solomonoff's original idea) and the denotational way (called a "universal mixture"). These are shown to be equivalent by Hutter et al. (2011). I'll try to explain the confusion in both ways.
Operationally, we're going to feed an infinite stream of fair coin-flips into a universal monotone Turing machine, and then measure the distribution of the output. What we do not do is take the random noise that we're about to feed into the Turing machine and apply a halting-oracle to it to make sure it encodes a computable function before running it. We just run it. Each such "hypothesis" may halt eventually, or it may produce output forever, or it may infinite-loop after producing some output. As a Solomonoff inductor, we're going to have infinite patience and consider all the output it produces before either halting or infinite-looping. This is the sense in which a hypothesis doesn't have to be "computable" to be admissible, it just has to be "a program".
Denotationally, we're going to mix over a computable enumeration of lower semicomputable semimeasures. (This is the perspective I prefer, I think, but it's equivalent.) So in a really precise way, the hypotheses have to be semicomputable, which is a strictly larger set than being computable, and, in fact, the universal mixture itself is semicomputable (even though it is uncomputable).
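For reference, here is how I would write the two presentations (roughly following the standard Li–Vitányi/Hutter notation; treat the details as my gloss rather than a quote). Operationally, for a universal monotone machine $U$ fed fair coin flips,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where the sum ranges over minimal programs $p$ on which $U$'s output begins with $x$. Denotationally, for an enumeration $\nu_1, \nu_2, \ldots$ of the lower semicomputable semimeasures,

$$\xi(x) \;=\; \sum_i w_i \,\nu_i(x), \qquad w_i > 0,\; \sum_i w_i \le 1,$$

e.g. with weights $w_i = 2^{-K(\nu_i)}$. The equivalence result is that $M$ and $\xi$ agree up to multiplicative constants.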
This is why it's not inconsistent to imagine Solomonoff inductors showing up as hypotheses inside Solomonoff induction.
Replies from: ronny-fernandez, ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2021-12-24T14:49:20.988Z · LW(p) · GW(p)
Just realized that’s not UAI. Been looking for this source everywhere, thanks.
↑ comment by Ronny Fernandez (ronny-fernandez) · 2021-12-24T14:45:17.470Z · LW(p) · GW(p)
Ok I understand that although I never did find a proof that they are equivalent in UAI. If you know where it is, please point it out to me.
I still think that Solomonoff induction assigns 0 to uncomputable bit strings, and I don't see why you don't think so.
Like the outputs of programs that never halt are still computable right? I thought we were just using a “prints something eventually oracle” not a halting oracle.