Posts

Training Regime Day 5: TAPs 2020-02-19T18:11:05.649Z · score: 10 (3 votes)
Training Regime Day 4: Murphyjitsu 2020-02-18T17:33:12.523Z · score: 8 (4 votes)
Training Regime Day 3: Tips and Tricks 2020-02-17T18:53:24.808Z · score: 18 (8 votes)
Training Regime Day 2: Searching for bugs 2020-02-16T17:16:32.606Z · score: 16 (9 votes)
Training Regime Day 1: What is applied rationality? 2020-02-15T21:03:32.685Z · score: 22 (9 votes)
Training Regime Day 0: Introduction 2020-02-14T08:22:19.851Z · score: 15 (11 votes)

Comments

Comment by mark-xu on Training Regime Day 2: Searching for bugs · 2020-02-19T18:14:32.613Z · score: 1 (1 votes) · LW · GW

In my view, there is no "right level" for bugs. Some bugs are simpler and thus more suited to practicing, but the goal is to get to the point where you can solve even your largest bugs. I'll provide more prompts for finding larger bugs later on in the sequence.

Thanks for participating!

Comment by mark-xu on Training Regime Day 3: Tips and Tricks · 2020-02-17T22:16:27.631Z · score: 1 (1 votes) · LW · GW

Thanks! I mentioned on day 0 that this was one of my training regimes for rationality, but it's also training for writing and for posting my thinking in public places, among other things. I'm glad that people are able to get something out of it.

Comment by mark-xu on How to Lurk Less (and benefit others while benefiting yourself) · 2020-02-17T21:18:33.193Z · score: 6 (4 votes) · LW · GW

There is a phenomenon among students of mathematics where things go from "difficult" to "trivial" as soon as the concepts are grasped. The main reason I don't comment many of my thoughts is that I figure that since I can think them, they must not be very hard to think, so commenting them is kind of useless. Thinking that my thoughts aren't very novel/insightful/good explains nearly all of the times I don't comment - if I have a thought that seems non-trivial to think, or I have access to information that I think most people do not, I will likely comment it (this happens extremely rarely).

However, I agree that people should say more obvious things on the margin.

(I also think that, on the margin, people should compliment other people more. I liked this post and think it describes an important problem to try to solve.)

Comment by mark-xu on how has this forum changed your life? · 2020-02-05T05:42:51.013Z · score: 3 (2 votes) · LW · GW

Seconding career choices, cryonics, and donating money. I became vegan after my exposure to LW, but I'm not sure how strong the effect was. Exposure to LessWrong has also given me a better working model of how to do the thinking thing. In particular, I am now much, much better at noticing confusion.

Comment by mark-xu on Definitions of Causal Abstraction: Reviewing Beckers & Halpern · 2020-01-21T04:40:08.281Z · score: 3 (2 votes) · LW · GW

I don't know if you've seen this, but https://arxiv.org/abs/1906.11583 is a follow-up that generalizes the Beckers and Halpern paper to a notion of approximate abstraction, measuring the non-commutativity of the diagram with a distance function and taking expectations. I think the most useful notion the paper introduces is a probability distribution over the set of allowed interventions. Intuitively, you don't need your abstraction of temperature to behave nicely w.r.t. freezing half the room and burning the other half so that the average kinetic energy balances out. Thus you can measure the "approximate commutativity" of the diagram by fixing a high-level intervention and taking an expectation over the low-level interventions that are likely to map to that high-level intervention.
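As a rough sketch (with notation I'm improvising rather than taking from the paper): if $M_L$ and $M_H$ are the low- and high-level models, $\tau$ maps low-level states to high-level states, $d$ is the chosen distance, and $\mathcal{D}(i_H)$ is the distribution over low-level interventions that map to a fixed high-level intervention $i_H$, then the approximation error looks something like

$$\varepsilon(i_H) \;=\; \mathbb{E}_{\,i_L \sim \mathcal{D}(i_H)}\!\left[\, d\big(\tau(M_L(i_L)),\; M_H(i_H)\big) \,\right],$$

i.e., the expected mismatch between "intervene at the low level, then abstract" and "intervene at the high level directly."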

Also, if you are willing to write up your counterexample to the conjecture that Beckers and Halpern make, I am currently researching under Eberhardt and he (and I) would be extremely interested in seeing it. I also initially thought that the conjecture was obviously false, but when I tried to actually construct counterexamples, all of them ended up being either not strong abstractions or not recursive (acyclic) causal models.

Comment by mark-xu on Risks from Learned Optimization: Introduction · 2020-01-16T19:08:42.540Z · score: 5 (3 votes) · LW · GW

I'm confused about why the inner alignment problem is conceptually different from the outer alignment problem. From a general perspective, we can think of the task of building any AI system as humans trying to optimize their values by searching over some solution space. In this scenario, the programmer becomes the base optimizer and the AI system becomes the mesa-optimizer. The outer alignment problem thus seems like a particular manifestation of the inner alignment problem where the base optimizer is a human.

In particular, if there exists a robust solution to the outer alignment problem, then presumably there's some property $P$ that we want the AI system to have, and some process $V$ that convinces us that the AI system has property $P$. I don't see why we can't just give the AI system the ability to enact $V$ to ensure that any optimizers it creates have property $P$ (modulo the problem of ensuring that the system correctly enacts $V$ with some $V_2$, ensuring that it correctly enacts $V_2$ with $V_3$, etc.). I guess you can have a solution to the outer alignment problem by having $P$ and $V$ without the recursive tower needed to solve the inner alignment problem, but that doesn't seem like the issue being brought up. (something something Löbian Obstacle)

In particular,

We will call the problem of eliminating the base-mesa objective gap the inner alignment problem, which we will contrast with the outer alignment problem of eliminating the gap between the base objective and the intended goal of the programmers. This terminology is motivated by the fact that the inner alignment problem is an alignment problem entirely internal to the machine learning system, whereas the outer alignment problem is an alignment problem between the system and the humans outside of it (specifically between the base objective and the programmer’s intentions). In the context of machine learning, outer alignment refers to aligning the specified loss function with the intended goal, whereas inner alignment refers to aligning the mesa-objective of a mesa-optimizer with the specified loss function.

My view says that if $M$ is the machine learning system and $H$ are the programmers, we can view $H$ as the "machine learning system" and $M$ as a mesa-optimizer. The task of aligning the mesa-objective of $M$ with the specified loss seems like the same type of problem as aligning the loss function of $M$ with the programmers' values.

Maybe the important thing is that loss functions are functions and values are not, so the point is that even if we have a function that represents our values, things can still go wrong. That is, people previously thought that the main problem was finding a function that does what we want when it gets optimized, but mesa-optimizer pseudo-alignment shows that even if we have such a function, we can't just optimize it.
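As a toy illustration of that last point (entirely my own sketch, not from the paper; the environments, the candidate objectives, and the tie-breaking are all made up): a base optimizer can select a model that itself searches over actions, and the internal objective the selected model searches with can come apart from the base objective once you leave the training distribution.

```python
# Toy "environments": each has a goal position and a green-marker position.
# In training these coincide; in deployment they come apart (distribution shift).
train_envs  = [{"goal": g, "green": g} for g in range(10)]
deploy_envs = [{"goal": g, "green": (g + 3) % 10} for g in range(10)]

# Base objective: did the chosen position hit the goal?
def base_objective(env, position):
    return 1.0 if position == env["goal"] else 0.0

# Candidate mesa-objectives: internal objectives the learned model might optimize.
# "seek_green" and "seek_goal" are indistinguishable on the training distribution,
# so the base optimizer has no way to prefer one over the other; here the tie
# happens to go to the proxy.
mesa_objectives = {
    "seek_green": lambda env, pos: -abs(pos - env["green"]),
    "seek_goal":  lambda env, pos: -abs(pos - env["goal"]),
    "stay_put":   lambda env, pos: -abs(pos),
}

# A "mesa-optimizer": pick the position that maximizes its internal objective.
def act(mesa_objective, env):
    return max(range(10), key=lambda pos: mesa_objective(env, pos))

# The "base optimizer": select the mesa-objective that scores best on the
# base objective across the training environments (ties broken by order).
def train():
    def train_score(name):
        obj = mesa_objectives[name]
        return sum(base_objective(env, act(obj, env)) for env in train_envs)
    return max(mesa_objectives, key=train_score)

selected = train()
deploy_score = sum(base_objective(env, act(mesa_objectives[selected], env))
                   for env in deploy_envs) / len(deploy_envs)
print("selected mesa-objective:", selected)                  # seek_green
print("base-objective score in deployment:", deploy_score)   # 0.0
```

On the training environments the proxy-seeking objective scores exactly as well as the intended one, so the base optimizer happily selects it; off-distribution the base-objective score collapses, even though the base objective itself was exactly "what we wanted."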

An implication is that all the reasons why mesa-optimizers can cause problems are also reasons why strategies for turning human values into a function can go wrong. For example, value learning strategies seem vulnerable to the same pseudo-alignment problems. Admittedly, I do not have a good understanding of current approaches to value learning, so I am not sure if this is a real concern. (Assuming that the authors of this post are adequate, if a similar concern existed in value learning, I think they would have mentioned it. This suggests either that I am wrong about this being a problem or that no one has given it serious thought. My priors are on the former, but I want to know why I'm wrong.)

I suspect that I've failed to understand something fundamental, because a lot of people who know a lot of stuff seem to think this is really important. In general, I think this paper is well written and extremely accessible to someone like me who has only recently started reading about AI safety.