Halifax SSC Meetup -- FEB 8 2020-02-08T00:45:37.738Z
HALIFAX SSC MEETUP -- FEB. 1 2020-01-31T03:59:05.110Z
SSC Halifax Meetup -- January 25 2020-01-25T01:15:13.090Z
Clarifying The Malignity of the Universal Prior: The Lexical Update 2020-01-15T00:00:36.682Z
Halifax SSC Meetup -- Saturday 11/1/20 2020-01-10T03:35:48.772Z
Recent Progress in the Theory of Neural Networks 2019-12-04T23:11:32.178Z
Halifax Meetup -- Board Games 2019-04-15T04:00:02.799Z
Predictors as Agents 2019-01-08T20:50:49.599Z
Formal Models of Complexity and Evolution 2017-12-31T20:17:46.513Z
A Candidate Complexity Measure 2017-12-31T20:15:39.629Z
Please Help: How to make a big improvement in the alignment of political parties’ incentives with the public interest? 2017-01-18T00:51:56.355Z


Comment by interstice on UDT might not pay a Counterfactual Mugger · 2020-11-26T00:05:29.823Z · LW · GW

I don’t think Nomega has to simulate you interacting with Omega in order to know how you would react should you encounter it

By 'simulating' I just mean that it's reasoning in some way about your behavior in another universe; it doesn't have to be a literal simulation. But the point remains -- of all the ways that Nomega could choose to act, for some reason it has chosen to simulate/reason about your behavior in a universe containing Omega, and then give away its resources depending on how it predicts you'll act.

What this means is that, from a Kolmogorov complexity perspective, Nomega is strictly more complex than Omega, since the definition of Nomega includes simulating/reasoning about Omega. Worlds containing Nomega will be discounted by a factor exponential in this additional complexity. Say it takes 100 extra bits to specify Nomega. Then worlds containing Nomega have 2^100 times less measure under the Solomonoff prior than worlds with Omega, meaning that UDT cares much less about them.

(My comment above was reasoning as if Nomega could choose to simulate/reason about many different possible universes, not just the ones with Omega. Then, perhaps, its baseline complexity might be comparable to Omega's. Either way, the result is that the worlds where Nomega exists and you have influence don't have very high measure.)

This argument would also suggest that by the time you see tails, you know you live in the tails world and thus should not pay up.

What I meant by "Nomega world" in that paragraph was a world where Nomega exists but does not simulate/reason about your behavior in the Omega world. The analogous situation to the tails/heads worlds here is the "Omega"/"Nomega simulating Omega" pair of worlds. I acknowledge that you would have counterfactual influence over this world. The difference is that the heads/tails worlds have equal measure, whereas the "Nomega simulates Omega" world has much less measure than the Omega world (under a 'reasonable' measure such as Solomonoff).

Comment by interstice on UDT might not pay a Counterfactual Mugger · 2020-11-25T19:36:21.916Z · LW · GW

UDT's behavior here is totally determined by its prior. The question is which prior is more reasonable. 'Closeness to Solomonoff induction' is a good proxy for reasonableness here.

I think a prior putting greater weight on Omega, given that one has seen Omega, is much more reasonable. Here's the reasoning. Let's say that the description complexity of both Omega and Nomega is 1000 bits. Before UDT has seen either of them, it assigns a likelihood of roughly 2^-1000 to worlds where either of them exists. So it might seem that it should weight them equally, even having seen Omega.

However, the question then becomes -- why is Nomega choosing to simulate the world containing Omega? Nomega could choose to simulate any world. In fact, a complete description of Nomega's behavior must include a specification of which world it is simulating. This means that, while it takes 1000 bits to specify Nomega, specifying that Nomega exists and is simulating the world containing Omega actually takes 2000 bits.[1]

So UDT's full prior ends up looking like:

  • 999/1000: Normal world

  • ~2^-1000: Omega exists

  • ~2^-1000: Nomega exists

  • ~2^-2000: Nomega exists and is simulating the world containing Omega

Thus, in a situation where UDT has seen Omega, it has influence over the Omega world and Nomega/Omega world, but no influence over the normal world and Nomega world. Since the Omega world has so much more weight than the Omega/Nomega world, UDT will effectively act as if it's in the Omega world.
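As a toy check on these relative weights (a sketch: the function name is hypothetical, and the 1000/2000-bit figures are just the illustrative numbers from above):

```python
from fractions import Fraction

def prior_weight(bits):
    """Solomonoff-style prior: a hypothesis needing k bits to specify
    gets weight proportional to 2^-k."""
    return Fraction(1, 2 ** bits)

w_omega = prior_weight(1000)       # Omega exists
w_nomega = prior_weight(1000)      # Nomega exists
w_nomega_sim = prior_weight(2000)  # Nomega exists and simulates the Omega world

# The only Nomega-world UDT can influence by paying has vastly less
# measure than the Omega world:
ratio = w_nomega_sim / w_omega
print(ratio == Fraction(1, 2 ** 1000))  # True
```

Exact rationals (`Fraction`) are used because weights like 2^-1000 underflow ordinary floats.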

  1. You might object that Nomega is defined by its property of messing with Omega, so it will naturally simulate worlds with Omega. In that case, it's still strictly more complex to specify than Omega, probably by several hundred bits, due to the complexity of specifying 'messing with'. ↩︎

Comment by interstice on UDT might not pay a Counterfactual Mugger · 2020-11-25T01:27:03.730Z · LW · GW

Although UDT is formally updateless, the 'mathematical intuition module' which it uses to determine the effects of its actions can make it effectively act as though it's updating.

Here's a simple example. Say UDT's prior over worlds is the following:

  • 75% chance: you will see a green and red button, and a sign saying "press the red button for $5"

  • 25% chance: same buttons, but the sign says "press the green button for $5"

Now, imagine the next thing UDT sees is the sign saying that it should press the green button. Of course, what it should do is press the green button (assuming the signs are truthful), even though in expectation the best thing to do would have been pressing the red button. So why does it do this? UDT doesn't update -- it still considers the worlds where it sees the red-button sign to be 3X more important -- however, what does change is that, once it sees the green-button sign, it no longer has any influence over the worlds where it sees the red-button sign. Thus it acts as though it has effectively updated on seeing the green-button sign, even though its distribution over worlds remains unchanged.
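A minimal sketch of this example in code (the names and payoff structure are just illustrative): enumerating all policies shows that following the sign is optimal under the fixed 75/25 prior, with no updating anywhere.

```python
from itertools import product

# Worlds: (prior probability, sign shown, button that pays $5)
worlds = [(0.75, "red_sign", "red"), (0.25, "green_sign", "green")]

def expected_value(policy):
    """A policy maps the observed sign to the button pressed."""
    return sum(p * (5 if policy[sign] == button else 0)
               for p, sign, button in worlds)

policies = [dict(zip(["red_sign", "green_sign"], choice))
            for choice in product(["red", "green"], repeat=2)]
best = max(policies, key=expected_value)
print(best)  # {'red_sign': 'red', 'green_sign': 'green'}
```

The prior weights never change; what the policy does after the green sign simply has no effect on the red-sign worlds' payoff, which is the "effective updating" described above.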

By analogy, in your scenario, even though Omega and Nomega might be equally likely a priori, UDT's influence over Omega's actions is far greater given that it has actually seen Omega. Or to be more precise -- in the situation where UDT has both seen Omega and the coin has come up heads, it has a lot of predictable influence over Omega's behavior in an (equally valuable by its prior) world where Omega is real and the coin comes up tails. It has no such predictable influence over worlds where Nomega exists.

Comment by interstice on UDT might not pay a Counterfactual Mugger · 2020-11-24T00:53:23.441Z · LW · GW

That’s exactly what others are saying about priors

It's not the same thing. Other people are correctly pointing out that UDT's behavior here depends on the prior. I'm arguing that a prior similar to the one we use in our day-to-day lives would assign greater probability to Omega than Nomega, given that one has seen Omega. The OP can be seen as implicitly about both issues.

Comment by interstice on UDT might not pay a Counterfactual Mugger · 2020-11-22T18:34:55.784Z · LW · GW

I think there's a big asymmetry between Omega and Nomega here, namely that Omega actually appears before you, while Nomega does not. This means there's much better reason to think that Omega will actually reward you in an alternate universe than Nomega.

Put another way, the thing you could pre-commit to could be a broad policy of acausally cooperating with beings you have good reason to think exist, in your universe or a closely adjacent one (adjacent in the sense that your actions here actually have a chance of affecting things there). Once you learn that a being such as Omega exists, you should act as though you had pre-committed to cooperating with them all along.

Comment by interstice on UDT might not pay a Counterfactual Mugger · 2020-11-22T04:27:37.465Z · LW · GW

However, I have no real way to formalizing why Omega might be more real then Nomega

But in this scenario, Omega is supposed to actually appear before you, right? So at least in this hypothetical, there would be a very good reason to suppose that Omega is real while Nomega is not.

I still feel like this idea makes the case for UDT telling us to pay a lot more shaky

You could make the case for doing anything shaky, if you assume that there's a counterfactual demon who will punish us for acting correctly.

Comment by interstice on Misalignment and misuse: whose values are manifest? · 2020-11-13T21:10:49.044Z · LW · GW

The strange new values that were satisfied were those of the AI systems, but the entire outcome only happened because people like Bob chose it knowingly (let’s say). Bob liked it more than the long glorious human future where his business was less good

I think a relevant consideration here is that Bob doesn't actually have the ability to choose between these two futures -- rather, his choice is between a world where his business succeeds but AI takes over later, and a world where his business fails but AI takes over anyway (because other people will use AI even if he doesn't). Bob might actually prefer to sign a contract forbidding the use of AI if he knew that everybody else would be in on it. I suspect that this would be the position of most people who actually thought AI would eventually take over, and that most people who would oppose such a contract do not think AI takeover is likely (perhaps via self-deception due to their local incentives, which in some ways is similar to just not valuing the future).

Comment by interstice on Ethics in Many Worlds · 2020-11-09T15:22:14.283Z · LW · GW

I think you're misusing the word 'real' here. We only think QM is 'real' in the first place because it predicts our experimental results, so it seems backwards to say that those (classical, probabilistic) results are actually not real, while QM is real. What happens if we experimentally discover a deeper layer of physics beneath QM, will you then say "I thought QM was real, but it was actually fake the whole time"? But then, why would you change your notion of what 'real' is in response to something you don't consider real?

Comment by interstice on Ethics in Many Worlds · 2020-11-09T02:24:10.843Z · LW · GW

I think some notion of prediction/observation has to be included for a theory to qualify as physics. By your definition, studying the results of e.g. particle accelerator experiments wouldn't be part of quantum mechanics, since you need the Born rule to make predictions about them.

Comment by interstice on Ethics in Many Worlds · 2020-11-08T17:49:34.407Z · LW · GW

I mean that it correctly predicts the results of experiments and our observations -- which, yes, would be different if we were sampled from a different measure. That's the point. I'm taking for granted that we have some pre-theoretical observations to explain here, and saying that the Hilbert measure is needed to explain them.

Comment by interstice on Ethics in Many Worlds · 2020-11-08T01:06:14.264Z · LW · GW

Why should sampling weight (you're more likely to find yourself as a real vs Boltzmann brain, or 'thick' vs 'arbitrary' computation) imply ethical weight (the experiences of Boltzmann brains matter far less than real brains)?

I think the weights for prediction and moral value should be the same, or at least related. Consider: if we're trying to act selfishly, we should make choices that lead to the best futures according to the sampling weight (conditioned on our experience so far), since the sampling weight is basically defined as our prior on future sense experiences. But then it seems strange to weigh other people's experiences differently than our own.

So in order to think that minds matter in proportion to the measure of the world they're in, while recognizing they 'feel' precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally

I think of the measure as being a generalization of what it means to 'count' experiences, not a property of the experiences themselves. So this is more like how, in utilitarianism, the value of an experience has to be multiplied by the number of people having it to get the total moral value. Here we're just multiplying by the measure instead.

My understanding was that MWI is something like what you get when you don't add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.

People like to claim that, but fundamentally you need to add some sort of axiom that describes how the wave function cashes out in terms of observations. The best you can get is an argument like "any other way of weighting the branches would be silly/mathematically inelegant". Maybe, but you're still going to have to put it in if you want to actually predict anything. If you want to think of it in terms of writing a computer program, it simply won't return predictions without adding the Born rule (what I'm calling the 'Hilbert measure' here).
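A minimal sketch of the point: given a list of branch amplitudes, the bare formalism outputs no prediction until you add the |amplitude|² rule. The amplitudes below are arbitrary illustrative values.

```python
import math

# Two branches with arbitrary (normalized) complex amplitudes.
amplitudes = [1 / math.sqrt(3), 1j * math.sqrt(2 / 3)]

# The Born rule -- the extra axiom -- turns amplitudes into predictions:
probs = [abs(a) ** 2 for a in amplitudes]

assert abs(sum(probs) - 1.0) < 1e-9  # a valid probability distribution
```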

Comment by interstice on Multiple Worlds, One Universal Wave Function · 2020-11-07T01:25:41.264Z · LW · GW

There is some reason to think we will never see effects that depend on the other Everett branches, because we could say that a branching event has occurred precisely when the differences between the two components are no longer effectively reversible.

Comment by interstice on Ethics in Many Worlds · 2020-11-07T00:48:31.343Z · LW · GW

I tend to agree with the 'dilution' response, which considers branches with less Hilbert-space measure to be 'less real'. Some justification: if you're going to just count all the ways minds can be embedded in the wave function, why stop at "normal-looking" embeddings? What's stopping you from finding an extremely convoluted mapping such that the coffee in your mug actually instantiates 10^23 different conscious experiences? But then it's impossible to make any ethical comparisons at all, since every state of the universe contains infinitely many copies of all possible experiences. Using a continuous measure on experiences, a la UDASSA, lets you consider arbitrary computations to be conscious without giving all the ethical weight to Boltzmann brains.

Another reason for preferring the Hilbert measure: you could consider weighting using the Hilbert measure to be part of the definition of MWI, since it's only using such a weighting that it's possible to make correct predictions about the real world.

Comment by interstice on Sleeping Julia: Empirical support for thirder argument in the Sleeping Beauty Problem · 2020-11-03T18:36:42.740Z · LW · GW

Sleeping Beauty can give me a probability distribution over what day it is and just call it ordinary belief

But the whole question is about how Beauty should decide on her probabilities before seeing any evidence, right? What I'm saying is that she should do that with reference to her intended goals (or just decide probabilities aren't useful in this context).

I'm taking a behaviorist/decision-theoretic view on probability here -- I'm saying that we can define an agent's probability distribution over worlds in terms of its decision function and utility function. An agent definitionally believes an event will occur with probability p if it will sacrifice a resource worth <p utilons to get a certificate paying out 1 utilon if the event comes to pass.

I’d rather correctly guess whether I’m in a simulation and then take good actions anyhow.

But what does 'correctly' actually mean here? It can't mean that we'll eventually see clear signs of a simulation, as we're specifically positing there's no observable differences. Does it mean 'the Solomonoff prior puts most of the weight for our experiences inside a simulation'? But we would only say this means 'correctly' because S.I. seems like a good abstraction of our normal sense of reality. But 'UDT, with a utility function weighted by the complexity of the world' seems like just as good of an abstraction, so it's not clear why we should prefer one or the other. (Note the 'effective probability' derived from UDT is not the same as the complexity weighting)

I actually think there is an interesting duality here -- within this framework, as moral actors agents are supposed to use UDT, but as moral patients they are weighted by Solomonoff probabilities. I suspect there's an alternative theory of rationality that can better integrate these two aspects, but for now I feel like UDT is the more useful of the two, at least for answering anthropic/decision problems.

Comment by interstice on Sleeping Julia: Empirical support for thirder argument in the Sleeping Beauty Problem · 2020-11-03T13:15:29.845Z · LW · GW

I'm not so sure -- what about cases where your sense experiences are perfectly compatible with two different underlying embeddings? You might say "you will eventually find some evidence disambiguating them, just use your distribution over that", but that's begging the question -- different distributions over the future copies will report seeing different things. Also, you might need to make decisions before getting more evidence.

As a particularly clear example, take the simulation argument. I think that any approach to epistemology based on counting -- even Solomonoff induction -- would probably conclude from our experiences that we are in a future simulation of some sort. But it's still correct for us to make decisions as if we are in the 'base' universe, because that's where we can have the most influence.

ETA: if you're arguing that we will end up having some sort of distribution over our future experiences by Cox, that might be true -- I'm not sure. (What would you say about Beauty having 'effective probabilities' for the coin flip that change before and after going to sleep?) What I'm saying is that, even if that's the case, our considerations in creating this effective distribution will mainly be about the consequences of our actions, not epistemics.

Comment by interstice on Sleeping Julia: Empirical support for thirder argument in the Sleeping Beauty Problem · 2020-11-03T05:59:56.056Z · LW · GW

I haven't really followed what the consensus in academia is, I've mostly picked up my views by osmosis from LessWrong/thinking on my own.

My view is that probabilities are ultimately used to make decisions. In particular, we can define an agent's 'effective probability' that an event has occurred by the price at which it would buy a coupon paying out $1 if the event occurs. If Beauty is trying to maximize her expected income, her policy should be to buy a coupon for Tails at <$0.50 before going to sleep, and at <$0.66 after waking (because her decision will be duplicated in the Tails world). You can also get different 'effective probabilities' if you are a total/average utilitarian towards copies of yourself (as explained in the paper I linked).
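These coupon prices can be checked with a short expected-value calculation (a sketch; the function name is hypothetical, and the "$0.66" above is the break-even price 2/3):

```python
def expected_profit(price, when):
    """Expected income from a policy of buying a $1 Tails coupon at `price`.
    Before sleeping, Beauty buys once. After waking, the same decision is
    executed at every waking: once if Heads, twice if Tails."""
    if when == "before_sleep":
        return 0.5 * (1 - price) + 0.5 * (0 - price)
    else:  # "after_waking"
        return 0.5 * 1 * (0 - price) + 0.5 * 2 * (1 - price)

# Break-even price is 1/2 before sleeping but 2/3 after waking:
assert abs(expected_profit(1 / 2, "before_sleep")) < 1e-9
assert abs(expected_profit(2 / 3, "after_waking")) < 1e-9
```

Nothing about the coin changes between the two calculations; only the number of times the decision gets executed does.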

Once you've accepted that one's 'effective probabilities' can change like this, you can go on to throw away the notion of objective probability altogether (as an ontological primitive), and instead just figure out the best policy to execute for a given utility function/world. See UDT for a quasi-mathematical formalization of this, as applied to a level IV multiverse.

But there is a real, objective probability that can be proven, and it has nothing to do with SB’s subjective, anthropic probability

But why? In this view, there doesn't have to be an 'objective probability', rather there are only different decision problems and the best algorithms to solve them.

Comment by interstice on Sleeping Julia: Empirical support for thirder argument in the Sleeping Beauty Problem · 2020-11-03T00:52:10.255Z · LW · GW

"Empirical" evidence of thirder/halfer positions doesn't seem to prove much, since it's inevitably based around a choice of decision problem/utility function with no deeper justification than the positions themselves.

I think a more useful attitude towards problems like this is that anthropic 'probability' is meaningless except in the context of a specific decision problem; see e.g. the paper linked in my other comment. Tl;dr: agents maximizing different types of utility function will behave as if they have different 'probabilities', which explains the conflicting answers on this problem and some others.

Comment by interstice on Desperately looking for the right person to discuss an alignment related idea with. (and some general thoughts for others with similar problems) · 2020-10-24T03:50:00.893Z · LW · GW

This might seem unlikely, but remember that often the root of a new insight does not require competence or knowledge, it merely requires novelty and thinking in a different way and out of the box compared to everyone else, and this kind of thing correlates with mental illness.

I think the history of most intellectual disciplines doesn't really reflect this; most good new ideas are produced by people with lots of familiarity with their chosen area, and build extensively on ideas that already exist. Usually, 'ability to think novel thoughts' is not the limiting factor.

That said, if you want someone to talk about your idea with in private, maybe you should try direct messaging someone who you consider knowledgeable, but doesn't seem likely to be super busy. e.g. someone with insightful LW posts, but maybe doesn't work for MIRI. I'd be willing to.

Comment by interstice on Matt Goldenberg's Short Form Feed · 2020-10-17T02:01:56.352Z · LW · GW

I think we're talking about different things. I'm talking about how you would locate minds in an arbitrary computational structure (and how to count them); you're talking about determining what's valuable about a mind once we've found it.

Comment by interstice on Matt Goldenberg's Short Form Feed · 2020-10-16T23:04:31.051Z · LW · GW

What hypothesis would you be "testing"? What I'm proposing is an idealized version of a sampling procedure that could be used to run tests, namely, sampling mind-like things according to their description complexity.

If you mean that we should check if the minds we usually see in the world have low complexity, I think that already seems to be the case, in that we're the end-result of a low-complexity process starting from simple conditions, and can be pinpointed in the world relatively simply.

Comment by interstice on Matt Goldenberg's Short Form Feed · 2020-10-16T20:43:41.870Z · LW · GW

But the question then becomes how you sample these minds you are talking to. Do you just go around literally speaking to them? Clearly this will miss a lot of minds. But you can't use completely arbitrary ways of accessing them either, because then you might end up packing most of the 'mind' into your way of interfacing with them. Weighting by complexity is meant to provide a good way of sampling minds that includes all computable patterns without attributing mind-fulness to noise.

(Just to clarify a bit, 'complexity' here is referring to the complexity of selecting a mind given the world, not the complexity of the mind itself. It's meant to be a generalization of 'number of copies' and 'exists/does not exist', not a property inherent to the mind)

Comment by interstice on Matt Goldenberg's Short Form Feed · 2020-10-16T16:25:34.906Z · LW · GW

Is it an empirical question? It seems more like a philosophical question (what evidence could we see that would change our minds?)

Here's a (not particularly rigorous) philosophical argument in favour. The substrate on which a mind is running shouldn't affect its moral status. So we should consider all computable mappings from the world to a mind as being 'real'. On the other hand, we want the total "number" of observer-moments in a given world to be finite (otherwise we can't compare the values of different worlds). This suggests that we should assign a 'weight' to different experiences, which must decrease exponentially in program length for the sum to converge.
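The convergence point can be sketched with a toy calculation (the particular weight function is just an illustrative choice): with a binary alphabet there are 2^L mappings of length L, so any per-program weight shrinking faster than 2^-L keeps the total measure finite.

```python
def total_measure(weight, max_len):
    """Sum weight(L) over all 2^L binary programs of each length L."""
    return sum((2 ** L) * weight(L) for L in range(1, max_len + 1))

# Weight 4^-L per program: each length contributes 2^L * 4^-L = 2^-L,
# so the total converges (to 1 in the limit) instead of blowing up.
total = total_measure(lambda L: 4.0 ** -L, 200)
assert abs(total - 1.0) < 1e-9
```

By contrast, a weight of exactly 2^-L per program would add 1 for every length and diverge.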

Comment by interstice on Matt Goldenberg's Short Form Feed · 2020-10-16T02:55:49.477Z · LW · GW

Description complexity is the natural generalization of "speed" and "number of observer moments" to infinite universes/arbitrary embeddings of minds in those universes. It manages to scale as (the log of) the density of copies of an entity, while avoiding giving all the measure to Boltzmann brains.

Comment by interstice on Matt Goldenberg's Short Form Feed · 2020-10-15T17:37:03.377Z · LW · GW

That's why you need to use some sort of complexity-weighting for theories like this, so that minds that are very hard to specify (given some fixed encoding of 'the world') are considered 'less real' than easy-to-specify ones.

Comment by interstice on Has Eliezer ever retracted his statements about weight loss? · 2020-10-14T21:22:48.390Z · LW · GW

He recently did lose a lot of weight using some form of keto diet, although I think he still maintains it was much harder for him than others. Check his twitter account.

Comment by interstice on “Unsupervised” translation as an (intent) alignment problem · 2020-09-30T18:16:32.313Z · LW · GW

Not all intent alignment problems involve existential risk.

Comment by interstice on Puzzle Games · 2020-09-28T18:13:53.040Z · LW · GW

I would put Game Title at tier 4 and Game Title: Lost Levels at Tier 3.

Comment by interstice on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-15T02:54:33.676Z · LW · GW

If you're a scientist, your job is ostensibly to uncover the truth about your field of study, so I think being uninterested in the truth of the papers you cite is at least a little bit malicious.

Comment by interstice on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-08T16:31:30.025Z · LW · GW

Uhh, I don't follow this. Could you explain or link to an explanation please?

Intuitive explanation: Say it takes X bits to specify a human, and that the human knows how to correctly predict whatever sequence we're applying SI to. SI has to find the human among the other 2^X programs of length X. Say SI is trying to predict the next bit. There will be some fraction of those 2^X programs predicting the bit is 0, and some fraction predicting 1; these fractions define SI's probabilities for what the next bit will be. Imagine the next bit will be 0. Then SI is predicting badly if greater than half of those programs predict a 1. But in that case, all those programs will be eliminated in the update phase. Clearly, this can happen at most X times before most of the weight of SI is on the human hypothesis (or a hypothesis that's just as good at predicting the sequence in question).
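The elimination argument can be sketched as a toy "halving" simulation (an illustration of the idea, not real SI): one always-correct predictor hidden among 2^X programs, with the master predicting by majority vote over the survivors.

```python
import random

def count_master_errors(experts, sequence):
    """Predict each bit by majority vote over surviving experts,
    eliminating any expert that errs. Each master error wipes out at
    least half the survivors, so with one perfect expert among N the
    master errs at most log2(N) times."""
    alive = list(range(len(experts)))
    errors = 0
    for t, bit in enumerate(sequence):
        votes = [experts[i](t) for i in alive]
        guess = 1 if 2 * sum(votes) > len(votes) else 0
        if guess != bit:
            errors += 1
        alive = [i for i in alive if experts[i](t) == bit]
    return errors

random.seed(0)
sequence = [random.randint(0, 1) for _ in range(64)]
# X = 4: the "human" (expert 0) predicts perfectly; the other
# 2^4 - 1 experts follow fixed bit patterns.
experts = [lambda t: sequence[t]] + [
    (lambda k: lambda t: (t >> (k % 6)) & 1)(k) for k in range(15)
]
errors = count_master_errors(experts, sequence)
assert errors <= 4  # at most X errors before the human dominates
```

Real SI weights programs by length and updates multiplicatively rather than eliminating outright, but the bounded-error logic is the same.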

The above is a sketch, not quite how SI really works. Rigorous bounds can be found here, in particular the bottom of page 979 ("we observe that Theorem 2 implies the number of errors of the universal predictor is finite if the number of errors of the informed prior is finite..."). In the case where the number of errors is not finite, the universal and informed priors still have the same asymptotic rate of error growth (the error of the universal prior is in the big-O class of the error of the informed prior).

I don't think this is true. I do agree some conclusions would be converged on by both systems (SI and humans), but I don't think simplicity needs to be one of them.

When I say the 'sense of simplicity of SI', I use 'simple program' to mean the programs that SI gives the highest weight to in its predictions (these will by definition be the shortest programs that haven't been ruled out by the data). The above results imply that, if humans use their own sense of simplicity to predict things, and their predictions do well at a given task, SI will be able to learn their sense of simplicity within a bounded number of errors.

How would you ask multiple questions? Practically, you'd save the state and load that state in a new SI machine (or whatever). This means the data is part of the program.

I think you can input multiple questions by just feeding in a sequence of question/answer pairs. Actually getting SI to act like a question-answering oracle would involve various implementation details. The above arguments are just meant to establish that SI won't do much worse than humans at sequence prediction (of any type) -- so, to the extent that we use simplicity to predict things, SI will "learn" that sense of simplicity after at most a finite number of mistakes (in particular, it won't do any *worse* than 'human-SI', hypotheses ranked by the shortness of their English description and then fed to a human predictor).

Comment by interstice on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-06T03:28:19.835Z · LW · GW
I read it as: "why would stuff the simplicity an idea had in one form (code) necessarily correspond to simplicity when it is in another form (english)? or more generally: why would the complexity of an idea stay roughly the same when the idea is expressed through different abstraction layers?"

I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality. You're right that we have no guarantee that the explanation that looks simplest to a human will also look the simplest to a newly-initialized SI, because the 'constant factor' needed to specify that human could be very large.

I do think it's meaningful that there is at most a constant difference between different versions of Solomonoff induction(including "human-SI"). This is because of what happens as the two versions update on incoming data: they will necessarily converge in their predictions, differing at most on a constant number of predictions.

So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world. If an emulation of a human takes X bits to specify, it means a human can beat SI at binary predictions at most X times (roughly) on a given task before SI wises up. For domains with lots of data, such as sensory prediction, this means you should expect SI to converge to giving answers as good as humans' relatively quickly, even if the overhead is quite large*.

Our estimates for the data requirements to store a mind are like 10^20 bits

The quantity that matters is how many bits it takes to specify the mind, not store it (storage is free for SI, just like computation time). For the human brain this shouldn't be too much more than the length of the human genome, about 3.3 GB. Of course, getting your human brain to understand English and have common sense could take a lot more than that.

*Although, those relatively few times when the predictions differ could cause problems. This is an ongoing area of research.

Comment by interstice on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T17:37:25.501Z · LW · GW
Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.

Solomonoff induction is fine with inputs taking unboundedly long to run. There might be cases where the human doesn't converge to a stable answer even after an indefinite amount of time. But if a "simple" hypothesis can have people debating indefinitely about what it actually predicts, I'm okay with saying that it's not actually simple(or that it's too vague to count as a hypothesis), so it's okay if SI doesn't return an answer in those cases.

You should include all levels of abstraction in your reasoning, like raw bytecode. It's both low level and can be written by humans. It's not necessarily fun but it's possible. What about things people design at a transistor level?

Why do you need to include those things? Solomonoff induction can use any Turing-complete programming language for its definition of simplicity, there's nothing special about low-level languages.

I use Haskell and have no idea what you're talking about.

I mean that you can pass functions as arguments to other functions and perform operations on them.

Regarding dictionary/list-of-tuples, the point is that you only have to write the abstraction layer *once*. So if you had one programming language with dictionaries built-in and another without, the one with dictionaries gets at most a constant advantage in code-length. In general, two different universal programming languages will have at most a constant difference, as johnswentworth mentioned. This means that SI is relatively insensitive to the choice of programming language: as you see more data, the predictions of two versions of Solomonoff induction with different programming languages will converge.
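
To illustrate the 'write the abstraction layer once' point with a toy example (mine, not from the comment): a hypothetical language without built-in dictionaries can recover them with a constant-length shim, sketched here in Python.

```python
# A fixed-size "abstraction layer": dictionaries implemented once as
# lists of (key, value) tuples. Any program written against dicts can
# run in the dict-free language by prepending this constant-length shim,
# so the code-length penalty is a constant, independent of program size.

def assoc_set(table, key, value):
    """Return a new table with `key` bound to `value` (old binding dropped)."""
    return [(k, v) for (k, v) in table if k != key] + [(key, value)]

def assoc_get(table, key, default=None):
    """Look up `key` by scanning the list of tuples."""
    for (k, v) in table:
        if k == key:
            return v
    return default

# Usage mirroring d = {}; d["a"] = 1; d["b"] = 2; d["a"] = 3:
t = []
t = assoc_set(t, "a", 1)
t = assoc_set(t, "b", 2)
t = assoc_set(t, "a", 3)  # overwrite
```

However heavily the rest of the program leans on the dictionary abstraction, the overhead stays fixed at the length of the shim, which is the invariance-theorem point in miniature.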

Comment by interstice on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T01:11:39.414Z · LW · GW

One somewhat silly reason: for any simple English hypothesis, we can convert it to code by running a simulation of a human, giving them the hypothesis as input, and asking them to predict what will happen next. Therefore the English and code complexities can differ by at most a constant.

This gives a very loose bound, since it will probably take a lot of bits to specify a human mind. In practice I think the two complexities will usually not differ by too much, because coding languages were designed to be understandable by humans and have syntax similar to human languages. There are some difficult cases like descriptions of visual objects, but even here neural networks should be able to bridge the gap to an extent (in the limit this just becomes the 'encode a human brain' strategy).

Regarding 'levels of abstraction', I'm not sure if this is a big obstacle, as most programming languages have built-in mechanisms for changing levels of abstraction. e.g. functional programming languages allow you to treat functions as objects.

Comment by interstice on Many-worlds versus discrete knowledge · 2020-08-16T18:46:01.400Z · LW · GW

You might be interested in the work of Jess Riedel, whose research agenda is centered around finding a formal definition of wavefunction branches, e.g.

Comment by interstice on Down with Solomonoff Induction, up with the Presumptuous Philosopher · 2020-06-12T16:32:49.818Z · LW · GW

This example seems a little unfair on Solomonoff Induction, which after all is only supposed to predict future sensory input, not answer decision theory problems. To get it to behave as in the post, you need to make some unstated assumptions about the utility functions of the agents in question (e.g. why do they care about other copies and universes? AIXI, the most natural agent defined in terms of Solomonoff induction, wouldn't behave like that).

It seems that in general, anthropic reasoning and decision theory end up becoming unavoidably intertwined (e.g.), and we still don't have a great solution.

I favor Solomonoff induction as the solution to (epistemic) anthropic problems because it seems like any other approach ends up believing crazy things in mathematical(or infinite) universes. It also solves other problems like the Born rule 'for free', and of course induction from sense data generally. This doesn't mean it's infallible, but it inclines me to update towards S.I.'s answer on questions I'm unsure about, since it gets so much other stuff right while being very simple to express mathematically.

Comment by interstice on The Presumptuous Philosopher, self-locating information, and Solomonoff induction · 2020-06-02T01:12:57.396Z · LW · GW

Another thing: I don't think Solomonoff Induction would give an advantage of log(n) to theories with n observers. In the post you mention taking the discrete integral of 2^{-C(n)} to get log scaling, but this seems to be based on the plain Kolmogorov complexity C(n), for which log(n) is approximately an upper bound, so that 2^{-C(n)} is roughly 1/n. Solomonoff induction uses prefix complexity K(n), and the discrete integral of 2^{-K(n)} converges to a constant. This means having more copies in the universe can give you at most a constant advantage.
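
To spell out the difference (these are standard facts about plain vs. prefix complexity): plain complexity satisfies C(n) ≤ log₂(n) + O(1), while prefix-free codes obey the Kraft inequality, so

```latex
\sum_{n \le N} 2^{-C(n)} \;\gtrsim\; \sum_{n \le N} \frac{1}{n} \;\approx\; \ln N
\qquad \text{(unbounded: log scaling)}

\sum_{n=1}^{\infty} 2^{-K(n)} \;\le\; 1
\qquad \text{(Kraft inequality: bounded by a constant)}
```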

(Based on reading some other comments it sounds like you might already know this. In any case, it means S.I. is even more anti-PP than implied in the post)

Comment by interstice on The Presumptuous Philosopher, self-locating information, and Solomonoff induction · 2020-06-01T20:25:54.217Z · LW · GW

It seems to me that there are (at least) two ways of specifying observers given a physical world-model, and two corresponding ways this would affect anthropics in Solomonoff induction:

  • You could specify their location in space-time. In this case, what matters isn't the number of copies, but rather their density in space, because observers being more sparse in the universe means more bits are needed to pin-point their location.

  • You could specify what this type of observer looks like, run a search for things in the universe matching that description, then pick one off the list. In this case, again what matters is the density of us (observers with the sequence of observations we are trying to predict) among all observers of the same type.
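
A rough complexity-accounting sketch of the two methods (my own schematic notation, not from the original comment; 'us' means observers sharing our exact observation sequence):

```latex
K_{\text{location}} \;\approx\; K(\text{world}) \;+\; \log_2\!\frac{\#\{\text{candidate spacetime locations}\}}{\#\{\text{observers}\}}

K_{\text{search}} \;\approx\; K(\text{world}) \;+\; K(\text{observer type}) \;+\; \log_2\!\frac{\#\{\text{type-matching observers}\}}{\#\{\text{us}\}}
```

In both cases the cost is set by a ratio (a density) rather than an absolute count, which is why raw observer-counting arguments don't go through.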

Which of the two methods ends up being the leading contributor to the Solomonoff prior depends on the details of the universe and the type of observer. But either way, I think the Presumptuous Philosopher's argument ends up being rejected: in the 'searching' case, it seems like different physical theories shouldn't affect the frequency of different people in the universe, and in the 'location' case, it seems that any physical theory compatible with local observations shouldn't be able to affect the density much, because we would perceive any copies that were close enough.

Comment by interstice on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T04:55:04.423Z · LW · GW

The post mentions problems that encourage people to hide reality from themselves. I think that constructing a 'meaningful life narrative' is a pretty ubiquitous such problem. For the majority of people, constructing a narrative where their life has intrinsic importance is going to involve a certain amount of self-deception.

Some of the problems that come from the interaction between these sorts of narratives and learning about x-risks have already been mentioned. To me, however, it looks like some of the AI x-risk memes themselves are partially the result of reality-masking optimization with the goal of increasing the perceived meaningfulness of the lives of people working on AI x-risk. As an example, consider the ongoing debate about whether we should expect the field of AI to mostly solve x-risk on its own. Clearly, if the field can't be counted upon to avoid the destruction of humanity, this greatly increases the importance of outside researchers trying to help them. So to satisfy their emotional need to feel that their actions have meaning, outside researchers have a bias towards thinking that the field is more incompetent than it is, and to come up with and propagate memes justifying that conclusion. People who are already in insider institutions have the opposite bias, so it makes sense that this debate divides to some extent along these lines.

From this perspective, it's no coincidence that internalizing some x-risk memes leads people to feel that their actions are meaningless. Since the memes are partially optimized to increase the perceived meaningfulness of the actions of a small group of people, by necessity they will decrease the perceived meaningfulness of everyone else's actions.

(Just to be clear, I'm not saying that these ideas have no value, that this is being done consciously, or that the originators of said memes are 'bad'; this is a pretty universal human behavior. Nor would I endorse bringing up these motives in an object-level conversation about the issues. However, since this post is about reality-masking problems it seems remiss not to mention.)

Comment by interstice on Clarifying The Malignity of the Universal Prior: The Lexical Update · 2020-01-16T20:44:36.073Z · LW · GW

Thanks, that makes sense. Here is my rephrasing of the argument:

Let the 'importance function' Q take as inputs machines U and U', and output all places where U' is being used as a universal prior, weighted by their effect on U-short programs. Suppose for the sake of argument that there is some short program computing Q; this is probably the most 'natural' program of this form that we could hope for.

Even given such a program, we'll still lose to the aliens: in U', directly specifying our important decisions on Earth using Q will require both U and U' to be fed into Q, costing roughly |U| + |U'| bits, then additional bits to specify us. For the aliens, getting them to be motivated to control U-short programs costs roughly |U| bits, but then they can skip directly to specifying us given Q, so they save roughly |U'| bits over the direct explanation. So the lexical update works.

(I went wrong in thinking that the aliens would need to both update their notion of importance to match ours *and* locate our world; but if we assume the 'importance function' exists then the aliens can just pick out our world using our notion of importance)

Comment by interstice on Clarifying The Malignity of the Universal Prior: The Lexical Update · 2020-01-16T01:04:31.901Z · LW · GW

What you say seems right, given that we are specifying a world, then important predictions, then U'. I was assuming a different method for specifying where we are, as follows:

For the sake of establishing whether or not the lexical update was a thing, I assume that there exists some short program Q which, given as input a description of a language X, outputs a distribution over U-important places where X is being used to make predictions. Given that Q exists, and (importantly) has a short description in U', I think the shortest way of picking out "our world" in U' would be feeding a description of U' into Q, then sampling worlds from the distribution Q(U'). This would then imply that U'(our world) is equal to U'(our world, someone making predictions, U'), because the shortest way of describing our world involves specifying Q and U' anyway. The aliens can't make the 'lexical update' here because this is about their preferences, not beliefs (that is, since they know U', they could find our world knowing only Q; but this wouldn't affect their motives to do so, because we're assuming their motives are tied to simplicity in U', where our world still requires finding an up-front specification of U' within U').

That said, it seems like maybe I am cheating by assuming Q has a short description in U'; a more natural assumption might be a program Q(X, Y) which outputs X-important places where Y is being used to make predictions. I will think about this more.

Comment by interstice on [deleted post] 2020-01-05T06:12:34.177Z

Today's neural networks definitely have problems solving more 'structured' problems, but I don't think that 'neural nets can't learn long time-series data' is a good way of framing this. To go through your examples:

This shouldn’t have been a major issue, except that with each switch it discarded past observations. Had the car maintained this history it would have seen that some sort of large object was progressing across the street on a collision course, and had plenty of time to stop.

From a brief reading of the report, it sounds like the control logic is part of the system surrounding the neural network, not the network itself.

One network predicts the odds of winning and another network figures out which move to perform. This turns a time-series problem (what strategy to perform) into a two separate stateless[1] problems.

I don't see how you think this is 'stateless'. AlphaStar's architecture contains an LSTM ('Core') which is then fed into the value and move networks, similar to most time-series applications of neural networks.

Most conspicuously, human beings know how to build walls with buildings. This requires a sequence of steps that don’t generate a useful result until the last of them are completed. A wall is useless until the last building is put into place. AlphaStar (the red player in the image below) does not know how to build walls.

But the network does learn how to build its economy, which also doesn't pay off for a very long time. I think the issue here is more about a lack of 'reasoning' skills than time-scales: the network can't think conceptually, and so doesn't know that a wall needs to completely block off an area to be useful. It just learns a set of associations.

ML can generate classical music just fine but can’t figure out the chorus/verse system used in rock & roll.

MuseNet was trained from scratch on MIDI data, but it's still able to generate music with lots of structure on both short and long time scales. GPT2 does the same for text. I'm not sure if MuseNet is able to generate chorus/verse structures in particular, but again this seems more like an issue of lacking logic/concepts than of time scales (that is, MuseNet can make pieces that 'sound right' but has no conceptual understanding of their structure).

I'll note that AlphaStar, GPT2, and MuseNet all use the Transformer architecture, which seems quite effective for structured time-series data. I think this is because its attentional mechanism lets it zoom in on the relevant parts of past experiences.
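
The attention mechanism can be sketched in a few lines of NumPy (generic scaled dot-product attention, not the actual AlphaStar code): every output position takes a weighted average over all positions, with weights set by query-key similarity, so a relevant event hundreds of steps back is as directly accessible as the previous step.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: output i is a weighted mix of the
    value vectors V[j], weighted by similarity of query i to key j."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n_queries, n_keys) similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
n, d = 8, 4                      # sequence length, model dimension
X = rng.standard_normal((n, d))  # token representations
out = attention(X, X, X)         # self-attention over the whole sequence
```

Because the weights are computed afresh for every query, the model can "zoom in" on whichever past positions are relevant, regardless of how far back they are.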

I also don't see how connectome-specific waves are supposed to help. I think(?) your suggestion is to store slow-changing data in the largest eigenvectors of the Laplacian -- but why would this be an improvement? It's already the case (by the nature of the matrix) that the eigenvectors of e.g. an RNN's transition matrix with the largest eigenvalues will tend to store data for longer time periods.
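
A minimal linear-RNN illustration of that last point (a toy example of mine, nothing to do with connectome-specific waves): under repeated application of the transition matrix, state components lying along eigenvectors with large eigenvalues decay slowest, so they hold information longest.

```python
import numpy as np

# Toy linear "RNN": h_{t+1} = W h_t. Components of the state along
# eigenvectors with larger |eigenvalue| decay more slowly, so they
# naturally store information over longer time horizons.
W = np.array([[0.9, 0.0],
              [0.0, 0.1]])   # eigenvalues 0.9 (slow mode) and 0.1 (fast mode)

h = np.array([1.0, 1.0])     # equal signal in both eigendirections
for _ in range(10):
    h = W @ h

# After 10 steps the slow component 0.9**10 ~ 0.35 survives,
# while the fast component 0.1**10 ~ 1e-10 has effectively vanished.
```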

Comment by interstice on romeostevensit's Shortform · 2020-01-01T23:35:02.105Z · LW · GW

Steroids do fuck a bunch of things up, like fertility, so they make evolutionary sense. This suggests we should look to potentially dangerous or harmful alterations to get real IQ boosts. Greg Cochran has a post suggesting gout might be like this.

Comment by interstice on Understanding Machine Learning (I) · 2019-12-22T00:16:39.115Z · LW · GW

This seems much too strong; lots of interesting unsolved problems can be cast as i.i.d. Video classification, for example, can be cast as i.i.d. where the distribution is over different videos, not individual frames.

Comment by interstice on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-21T23:47:56.411Z · LW · GW

In the analogy, it's only possible to build a calculator that outputs the right answer on non-13 numbers because you already understand the true nature of addition. It might be more difficult if you were confused about addition, and were trying to come up with a general theory by extrapolating from known cases -- then, thinking 6 + 7 = 15 could easily send you down the wrong path. In the real world, we're similarly confused about human preferences, mind architecture, the nature of politics, etc., but some of the information we might want to use to build a general theory is taboo. I think that some of these questions are directly relevant to AI -- e.g. the nature of human preferences is relevant to building an AI to satisfy those preferences, the nature of politics could be relevant to reasoning about what the lead-up to AGI will look like, etc.

Comment by interstice on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-16T03:54:40.161Z · LW · GW

Fair point, but I note that the cooperative ability only increases fitness here because it boosts the individuals' status, i.e. they are in a situation where status-jockeying and cooperative behavior are aligned. Of course it's true that they _are_ often so aligned.

Comment by interstice on Many Turing Machines · 2019-12-15T22:39:37.941Z · LW · GW

I agree it's hard to get the exact details of the MUH right, but pretty much any version seems better to me than 'only observable things exist', for the reasons I explained in my comment. And pretty much any version endorses many-worlds (of course you can believe many-worlds without believing the MUH). Really this is just a debate about the meaning of the word 'exist'.

Comment by interstice on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-12T07:11:29.702Z · LW · GW

Ability to cooperate is important, but I think that status-jockeying is a more 'fundamental' advantage because it gives an advantage to individuals, not just groups. Any adaptation that aids groups must first be useful enough to individuals to reach fixation (or near-fixation) in some groups.

Comment by interstice on Many Turing Machines · 2019-12-10T20:53:45.197Z · LW · GW

You've essentially re-invented the Mathematical Universe Hypothesis, which many people around here do in fact believe.

For some intuition as to why people would think that things that can't ever affect our future experiences are 'real', imagine living in the distant past and watching your relatives travel to a distant land, and assume that long-distance communication such as writing is impossible. You would probably still care about them and think they are 'real', even though by your definition they no longer exist to you. Or if you want to quibble about the slight chance of seeing them again, imagine being in the future and watching them get on a spaceship which will travel beyond your observable horizon. Again, it seems like you would still care about them and consider them 'real'.

Comment by interstice on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-09T06:39:56.107Z · LW · GW

Do you agree that signalling intelligence is the main explanation for the evolution of language? To me, it seems like coalition-building is a more fundamental driving force (after all, being attracted to intelligence only makes sense if intelligence is already valuable in some contexts, and coalition politics seems like an especially important domain). Miller has also argued that sexual signalling is a main explanation of art and music, which Will Buckingham has a good critique of here.

Comment by interstice on Q&A with Shane Legg on risks from AI · 2019-12-09T06:11:06.489Z · LW · GW

As far as I know no one's tried to build a unified system with all of those capacities, but we do seem to have rudimentary learned versions of each of the capacities on their own.

Comment by interstice on Recent Progress in the Theory of Neural Networks · 2019-12-06T21:18:53.425Z · LW · GW

That's an interesting link. It sounds like the results can only be applied to strictly Bayesian methods, though, so they couldn't be applied to neural networks as they exist now.