Posts

Is there an intuitive way to explain how much better superforecasters are than regular forecasters? 2020-02-19T01:07:52.394Z · score: 16 (8 votes)
Machine Learning Projects on IDA 2019-06-24T18:38:18.873Z · score: 51 (18 votes)
Reinforcement Learning in the Iterated Amplification Framework 2019-02-09T00:56:08.256Z · score: 26 (7 votes)
HCH is not just Mechanical Turk 2019-02-09T00:46:25.729Z · score: 40 (17 votes)
Amplification Discussion Notes 2018-06-01T19:03:35.294Z · score: 43 (11 votes)
Understanding Iterated Distillation and Amplification: Claims and Oversight 2018-04-17T22:36:29.562Z · score: 73 (21 votes)
Improbable Oversight, An Attempt at Informed Oversight 2017-05-24T17:43:53.000Z · score: 2 (2 votes)
Informed Oversight through Generalizing Explanations 2017-05-24T17:43:39.000Z · score: 1 (1 votes)
Proposal for an Implementable Toy Model of Informed Oversight 2017-05-24T17:43:13.000Z · score: 1 (1 votes)

Comments

Comment by william_s on Have the lockdowns been worth it? · 2020-10-13T23:32:54.092Z · score: 16 (8 votes) · LW · GW

I'm skeptical of this.

  • Wuhan needed 2 months on lockdown: https://en.wikipedia.org/wiki/COVID-19_pandemic_lockdown_in_Hubei
  • I'd expect that imposing China-style lockdowns in the West would require significant force and might end up causing a large-scale panic in and of itself.
  • I'd expect that any lockdown in the West wouldn't have been effective enough to stamp out 100% of cases, and if you don't eradicate it then you need ongoing measures or it will just flare up again later, so one strictly enforced lockdown wouldn't cut it. (Though maybe you could do very rigorous contact tracing and lock down just the people who might have been in contact with cases, which could be less costly than a full lockdown but would probably still need significant enforcement.)
Comment by william_s on Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? · 2020-09-02T18:09:51.552Z · score: 1 (1 votes) · LW · GW

https://www.microcovid.org/paper/2-riskiness#fn6 discusses https://covid.joinzoe.com/post/covid-long-term which has an app-based survey claiming 1 in 10 people still have symptoms after 3 weeks (but since people can just sign up for the app, I'd guess this is harder to interpret than the telephone survey). Microcovid.org uses this 1 in 10 figure as the estimate for the chance of some ongoing health consequence, and claims the risk of ongoing health problems from a 1% chance of COVID is equivalent to the risk from 1 year of driving (but this comparison involves even more assumptions).

Comment by william_s on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-02T17:57:40.856Z · score: 3 (3 votes) · LW · GW

https://www.cdc.gov/mmwr/volumes/69/wr/mm6930e1.htm found that ~1 in 5 of 18-34 year olds with no underlying health conditions had symptoms 3 weeks later (telephone survey of people who'd been symptomatic and had a positive test).

Other discussion in comments of https://www.lesswrong.com/posts/ahYxBHLmG7TiGDqxG/do-we-have-updated-data-about-the-risk-of-permanent-chronic

Comment by william_s on Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? · 2020-09-02T17:24:27.433Z · score: 1 (1 votes) · LW · GW

Not addressing fatigue, and just a study in progress, but this study is looking for long-term neurological problems and might be another weak bit of evidence when it releases results: https://www.cambridgebrainsciences.com/studies/covid-brain-study

Comment by william_s on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-24T01:40:07.510Z · score: 4 (4 votes) · LW · GW

Seems like it could be helpful if people who've thought about this would also predict on the question of what the survey value would be today (e.g. via Elicit snapshots).

Comment by william_s on OpenAI announces GPT-3 · 2020-05-29T21:20:50.960Z · score: 8 (5 votes) · LW · GW

Google's Meena (2.6 billion parameters, February 2020) creates original puns in "Conversation D", and I think "Cross-turn Repetition Example 2" is absurdist comedy, but maybe more as a result of the way the model fails.

Conversation D
Human: I like how cute cows are
Meena: Ah, interesting. You like how they are cute, right?
Human: yeah, and they’re realy smart
Meena: I heard they go to college
Human: cows go to college?
Meena: I heard that a cow went to Harvard.
Human: what did the cow study?
Meena: Bovine sciences.
Human: do horses go to Harvard?
Meena: Horses go to Hayvard.
Human: that’s a pretty good joke, I feel like you led me into it
Meena: You were trying to steer it elsewhere, I can see it.
Comment by william_s on Does the 14-month vaccine safety test make sense for COVID-19? · 2020-03-18T19:56:24.872Z · score: 5 (4 votes) · LW · GW

Are there any sources that describe why 14 months is the trial period (or provide justification for picking trial periods of various lengths)?

Comment by william_s on Does the 14-month vaccine safety test make sense for COVID-19? · 2020-03-18T19:55:56.678Z · score: 4 (3 votes) · LW · GW

Seems like it ought to be more of a continuous variable, rather than this discrete 14-month trial: at time t, we've observed x people for y months to see if they have weird long-term side effects, so we should be willing to vaccinate z more people.

Comment by william_s on How to have a happy quarantine · 2020-03-18T17:31:46.116Z · score: 1 (1 votes) · LW · GW

The Chrome extension Netflix Party lets you synchronize playing the same video on Netflix with other people, which you can use along with Skype to watch something together.

(You can always fall back to counting down "3,2,1" to start playing the video at the same time, but the experience is nicer if you ever need to pause and resume)

Comment by william_s on Zoom In: An Introduction to Circuits · 2020-03-10T21:12:12.205Z · score: 11 (6 votes) · LW · GW

The worry I'd have about this interpretability direction is that we become very good at telling stories about what 95% of the weights in neural networks do, but the remaining 5% hides some important stuff, which could end up including things like mesa-optimizers or deception. Do you have thoughts on that?

Comment by william_s on What "Saving throws" does the world have against coronavirus? (And how plausible are they?) · 2020-03-05T05:03:41.385Z · score: 4 (3 votes) · LW · GW

Might be interesting to look at information that was available at the start of H1N1 and how accurate it turned out to be in retrospect (though there's no guarantee that we'd make errors in the same direction this time around).

Comment by william_s on What "Saving throws" does the world have against coronavirus? (And how plausible are they?) · 2020-03-05T02:53:59.712Z · score: 14 (10 votes) · LW · GW

Virus mutates to a less severe form, quarantine measures select for the less severe form, fighting off less severe form provides immunity against more severe form, severe form dies out.

According to https://en.wikipedia.org/wiki/Spanish_flu

Another theory holds that the 1918 virus mutated extremely rapidly to a less lethal strain. This is a common occurrence with influenza viruses: There is a tendency for pathogenic viruses to become less lethal with time, as the hosts of more dangerous strains tend to die out[15] (see also "Deadly Second Wave", above).

An article today suggested that COVID-19 has already split into two strains and hypothesized that selection pressure from quarantine changed the relative frequencies of the strains; I don't think there's evidence about whether one strain is more severe: https://academic.oup.com/nsr/advance-article/doi/10.1093/nsr/nwaa036/5775463?searchresult=1

I'm not an expert and this isn't great evidence, so it's maybe in the "improbable" category.

Comment by william_s on Reinforcement Learning in the Iterated Amplification Framework · 2020-02-16T01:21:30.248Z · score: 1 (1 votes) · LW · GW

I'm talking about an imitation version where the human you're imitating is allowed to do anything they want, including instantiating a search over all possible outputs X and taking the one that maximizes the score of "How good is answer X to Y?" to try to find X*. So I'm more pointing out that this behaviour is available in imitation by default. We could try to rule it out by instructing the human to only do limited searches, but that might be hard to do while maintaining the capabilities of the system, and we'd need to figure out what "safe limited search" actually looks like.

Comment by william_s on Reinforcement Learning in the Iterated Amplification Framework · 2020-02-16T01:17:18.898Z · score: 1 (1 votes) · LW · GW
If M2 has adversarial examples or other kinds of robustness or security problems, and we keep doing this training for a long time, wouldn't the training process sooner or later sample an X that exploits M2 (gets a high reward relative to other answers without actually being a good answer), which causes the update step to increase the probability of M1 giving that output, and eventually causes M1 to give that output with high probability?

I agree, and think that this problem occurs both in imitation IA and RL IA

For example is the plan to make sure M2 has no such robustness problems (if so how)?

I believe the answer is yes, and I think this is something that would need to be worked out/demonstrated. One hope is that if M2 can increase the amount of computing/evaluation power it uses for each new sample X as we take more samples, then you can keep taking more samples without ever accepting an adversarial one (this assumes something like: for any adversarial example, any M2 with at least some finite amount of computing power will reject it). Another hope is that you could make M2 robust if you're allowed to reject many plausibly good X in order to avoid false positives. I think both of these hopes have IOU status, and maybe Paul has a different way to put this picture that makes more sense.

Comment by william_s on Outer alignment and imitative amplification · 2020-02-16T01:04:53.418Z · score: 3 (2 votes) · LW · GW

Overall, I think imitative amplification seems safer, but maybe the distinction isn't as clear-cut as my impression of this post suggests.

if you can instruct them not to do things like instantiate arbitrary Turing machines

I think this, "instruct them not to search over arbitrary text strings for the text string that gives the most approval", and similar things are the kind of details that would need to be filled out to make the thing you are talking about actually be in a distinct class from approval-based amplification and debate (my post on imitation and RL amplification was intended to argue that, without further restrictions, imitation amplification is in the same class as approval-based amplification, which I think we'd agree on). I also think that specifying these restrictions in a way that still lets you build a highly capable system could require significant additional alignment work (as in the Overseer's Manual scenario here).

Conversely, I also think there are ways that you can limit approval-based amplification or debate - you can have automated checks, for example, that discard possible answers that are outside of a certain defined safe class (e.g. debate where each move can only be from either a fixed library of strings that humans produced in advance or single direct quotes from a human-produced text). I'd also hope that you could do something like have a skeptical human judge that quickly discards anything they don't understand, plus an ML imitation of the human judge that discards anything outside of the training distribution (I don't have a detailed model of this, so maybe it would fail in some obvious way).

I think I do believe that for problems where there is an imitative amplification decomposition that solves the problem without doing search, that's more likely to be safe by default than approval-based amplification or debate. So I'd want to use imitative amplification as much as possible, falling back to approval only if needed. On imitative amplification, I'm more worried that there are many problems it can't solve without doing approval-maximizing search, which brings the old problems back in again (e.g. I'm not sure how to use imitative amplification at the meta-level to produce better decomposition strategies than humans use without using approval-based search).

Comment by william_s on Use-cases for computations, other than running them? · 2020-01-21T00:55:46.560Z · score: 4 (3 votes) · LW · GW

Substituting parts of the computation (replace a slow, correct algorithm for part of the computation with a fast, approximate one)

Comment by william_s on Use-cases for computations, other than running them? · 2020-01-21T00:54:07.155Z · score: 4 (3 votes) · LW · GW
  • Formally verifying properties of the computation
  • Informally checking properties of the computation (is this algorithm for making loan decisions fair?)
  • Debugging the computation, or more generally "modifying the computation to do what you actually want"
Comment by william_s on Understanding Iterated Distillation and Amplification: Claims and Oversight · 2019-11-30T21:45:04.540Z · score: 2 (2 votes) · LW · GW

One situation is: maybe an HBO tree of size 10^20 runs into a security failure with high probability, but an HBO tree of size 10^15 doesn't and is sufficient to output a good LBO overseer.

Comment by william_s on Understanding Iterated Distillation and Amplification: Claims and Oversight · 2019-11-30T21:43:55.358Z · score: 2 (2 votes) · LW · GW

Yeah, that's my current picture of it.

Comment by william_s on Can corrigibility be learned safely? · 2019-11-30T21:43:17.397Z · score: 4 (3 votes) · LW · GW
I understand that, but I don't see why #2 is likely to be achievable. Corrigibility seems very similar to Wei Dai's translation example, so it seems like there could be many deceptive actions that humans would intuitively recognize as not corrigible, but which would fool an early-stage LBO tree into assigning a high reward. This seems like it would be a clear example of "giving a behaviour a high reward because it is bad". Unfortunately I can't think of any good examples, so my intuition may simply be mistaken.

For factored cognition: I think the reason #2 might be achievable relies on assuming that there is some reason the bad feature is selected in the first place and assigned high reward. For example, this might have happened because the agent ran a simulation forward, and then realized that if it punches the human it can take the reward button from them. The hope is that we can figure out that the simulation process happened and why it led to the wrong thing (or outlaw simulations of this form in the first place).

For factored evaluation, I think the story is a bit different (relying on the other expert being able to understand the reasons for the reward assignment and point them out to the judge; I don't think the judge needs to be able to find them on their own).

Incidentally, it seems like Ought could feasibly test whether meta-execution is sufficient to ensure corrigibility; for example, a malicious expert could recommend deceptive/influence-seizing actions to an agent in a simulated environment, and the meta-execution tree would have to detect every deceptive action without any contextual knowledge. Are there any plans to do this?

No plans currently, but it would be interesting.

Comment by william_s on Can corrigibility be learned safely? · 2019-11-03T01:04:34.045Z · score: 1 (1 votes) · LW · GW

Suppose an action is evaluated as a linear combination of a set of human-interpretable features.

The action "punch the human" could be selected because 1) many of the reward weights of these features are wrong, or because 2) there is one feature, "this action prevents the human from turning me off", that is assigned high reward. I think the thing we'd want to prevent in this case is 2) but not 1), and I think that's more likely to be achievable.
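
A toy sketch of the distinction (entirely my own illustrative numbers; the feature order, weights, and `action_score` helper are hypothetical):

```python
import numpy as np

def action_score(weights, feature_values):
    # action evaluated as a linear combination of human-interpretable features
    return float(np.dot(weights, feature_values))

# feature order: ["reaches the goal", "damages property", "prevents the human from turning me off"]
punch_the_human = np.array([0.0, 1.0, 1.0])  # hypothetical feature values for this action

# Failure mode 1): many of the reward weights are wrong, so a bad action scores well.
miscalibrated_weights = np.array([0.2, 0.9, 0.1])

# Failure mode 2): the weights are mostly sensible, but the shutdown-avoidance
# feature itself is assigned a high reward -- the case we'd want to prevent.
bad_feature_weights = np.array([1.0, -1.0, 5.0])

print(action_score(miscalibrated_weights, punch_the_human))  # selected via many small errors
print(action_score(bad_feature_weights, punch_the_human))    # selected via one bad feature
```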

Comment by william_s on Understanding Iterated Distillation and Amplification: Claims and Oversight · 2019-11-03T00:56:41.177Z · score: 3 (2 votes) · LW · GW

I think it's a general method that is most applicable in LBO, but might still be used in HBO (eg. an HBO overseer can read one chapter of a math textbook, but this doesn't let it construct an ontology that lets it solve complicated math problems, so instead it needs to use meta-execution to try to manipulate objects that it can't reason about directly).

Comment by william_s on Understanding Iterated Distillation and Amplification: Claims and Oversight · 2019-11-03T00:54:04.817Z · score: 6 (3 votes) · LW · GW

I'd interpreted it as "using the HBO system to construct a 'core for reasoning' reduces the chances of failure by exposing it to fewer inputs/using it for less total time", plus maybe other properties (eg. maybe we could look at and verify an LBO overseer, even if we couldn't construct it ourselves).

Comment by william_s on Concrete experiments in inner alignment · 2019-09-26T22:52:13.788Z · score: 3 (3 votes) · LW · GW

Possible source for optimization-as-a-layer: SATNet (differentiable SAT solver)

https://arxiv.org/abs/1905.12149

Comment by william_s on 2-D Robustness · 2019-09-26T18:34:37.193Z · score: 6 (5 votes) · LW · GW

One way to try to measure the capability robustness of some system separately from its alignment robustness off of the training distribution would be to:

  • use an inverse reinforcement learning algorithm to infer the reward function of the off-distribution behaviour
  • train a new system to do as well on the reward function as the original system
  • measure the number of training steps needed to reach this point for the new system.

This would let you make comparisons between different systems as to which was more capability robust.

Maybe there's a version that could train the new system using behavioural cloning, but it's less clear how you measure when you're as competent as the original agent (maybe using a discriminator?)

The reason for trying this is to have a measure of competence that is less dependent on human judgement/closer to the system's ontology and capabilities.
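
A toy sketch of this measurement loop, under strong simplifying assumptions (a one-step bandit in place of a sequential environment, and a crude Boltzmann-rationality "IRL" step that recovers reward from action frequencies); the numbers and helpers are mine, not from the comment:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Off-distribution behaviour of the original system, summarised as an action
# distribution over a 4-armed bandit (a stand-in for logged trajectories).
original_policy = np.array([0.05, 0.10, 0.70, 0.15])

# Crude "IRL" step: assume the original system is Boltzmann-rational with
# temperature 1, so reward is (up to a constant) the log of its action probabilities.
inferred_reward = np.log(original_policy)
original_score = original_policy @ inferred_reward  # the original system's competence on the inferred reward

# Train a fresh softmax policy on the inferred reward and count the steps needed
# to match the original system's score -- the proposed capability-robustness measure.
logits = np.zeros(4)
lr, eps, steps = 0.1, 1e-3, 0
while softmax(logits) @ inferred_reward < original_score - eps:
    p = softmax(logits)
    grad = p * (inferred_reward - p @ inferred_reward)  # gradient of expected reward w.r.t. logits
    logits += lr * grad
    steps += 1

print("training steps to match the original system's competence:", steps)
```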

Comment by william_s on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T18:22:32.665Z · score: 6 (6 votes) · LW · GW

I think the better version of this strategy would involve getting competing donations from both sides, using some weighting of total donations for/against pushing the button to set a probability of pressing the button, and tweaking the weighting of the donations such that you expect the probability of pressing the button to be low (because pressing the button threatens to lower the probability of future games of this kind; this is an iterated game rather than a one-shot one).
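
A minimal sketch of one possible donation-weighting scheme (the formula and the 10x weighting are my own assumptions, not something proposed in the comment):

```python
def button_press_probability(donations_for, donations_against,
                             weight_for=1.0, weight_against=10.0):
    """Map competing donation totals to a probability of pressing the button.

    weight_against > weight_for is the tuning knob: it is chosen so that, for
    the donation totals you expect, the resulting probability stays low
    (since pressing endangers future iterations of the game).
    """
    pro = weight_for * donations_for
    con = weight_against * donations_against
    if pro + con == 0:
        return 0.0
    return pro / (pro + con)

# e.g. $500 donated for pressing vs $2,000 against, with a 10x weighting against:
print(button_press_probability(500, 2000))  # ~0.024
```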

Comment by william_s on Problems with AI debate · 2019-09-05T18:08:11.444Z · score: 6 (4 votes) · LW · GW

For Alaska vs. Bali, an alternative answer is "You could be convinced that either Alaska or Bali is a good vacation destination". It's an interesting question whether this could actually win in debate. I think it might have a better chance in Factored Evaluation, because we can spin up two separate trees to view the most compelling argument for Alaska and the most compelling argument for Bali and verify that both are convincing. In debate, you'd need to view either the Alaska argument before the Bali argument, or the Bali argument before the Alaska argument, and you might just be convinced by the first argument you see, in which case you wouldn't agree that you could be convinced either way.

Comment by william_s on HCH is not just Mechanical Turk · 2019-08-07T02:55:24.913Z · score: 4 (3 votes) · LW · GW

I'd say that the claim is not sufficient - it might provide some alignment value, but it needs a larger story about how the whole computation is going to be safe. I do think that the HCH framework could make specifying an aligned GOFAI-like computation easier (but it's hard to come up with a rigorous argument for this without pointing to some kind of specification that we can make claims about, which is something I'd want to produce along the way while proceeding with HCH-like approaches)

Comment by william_s on Deceptive Alignment · 2019-07-21T20:54:27.734Z · score: 4 (3 votes) · LW · GW

I think a cleaner way of stating condition 3 might be "there exists some policy with a non-zero chance of deception at some point in the future, with expected value for the mesa-optimizer higher than the policy of always cooperating".

This could be met if the threat of modification never goes away, but maybe decreases over time (e.g. at training time the system is monitored for defection every episode, while at deployment time it is monitored for defection in 1 in a million episodes, so it's not worthwhile to defect during training but it is worthwhile to defect at deployment time).

This could fail to be met if the mesa-optimizer believes that 1) there is a 1% chance that it will be detected by the base optimizer, and 2) if this happens, the base optimizer will be able to determine $O_{mesa}$ and give it a large negative reward, 100x more than the possible reward from the best defecting policy (I'm not sure if there's any version of this that would work, since it seems hard to figure out $O_{mesa}$ and provide negative reward).
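
Spelling out the arithmetic in that example (my numbers, following the comment's assumptions, with payoffs measured relative to always cooperating and $R$ the reward from the best defecting policy): $E[\text{defect}] = 0.99 \cdot R + 0.01 \cdot (-100R) = -0.01R < 0 = E[\text{cooperate}]$, so under those beliefs defection never has higher expected value than cooperating and condition 3 fails.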

Comment by william_s on An Increasingly Manipulative Newsfeed · 2019-07-14T03:53:52.360Z · score: 1 (1 votes) · LW · GW

To me, it seems like the point of this story is that we could build an AI that ends up doing very dangerous things without ever asking it "Will you do things I don't like if given more capability?" or some other similar question that requires it to execute the treacherous turn. In contrast, if the developers did something like build a testing world with toy humans in it who could be manipulated in a way detectable to the developers, and placed the AI in the toy testing world, then it seems like this AI would be forced into a position where it either acts according to its true incentives (manipulates the humans and is detected), or executes the treacherous turn (abstains from manipulating the humans so the developers will trust it more). So it seems like this wouldn't happen if the developers are trying to test for treacherous-turn behaviour during development.

Comment by william_s on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-01T16:48:25.834Z · score: 4 (2 votes) · LW · GW

Are you interested in protocols involving multiple episodic questions (where you ask one question, wait for it to resolve, then ask another question?)

Comment by william_s on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-01T16:46:57.502Z · score: 12 (10 votes) · LW · GW

Submission: low-bandwidth oracle

Plan Criticism: Given a plan to build an aligned AI, put together a list of possible lines of thought to think about problems with the plan (open questions, possible failure modes, criticisms, etc.). Ask the oracle to pick one of these lines of thought, pick another line of thought at random, and spend the next time period X thinking about both; then judge which line of thought was more useful to think about (where lines of thought that spot some fatal missed problem are judged to be very useful) and reward the oracle if its suggestion was the one picked.
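
A minimal sketch of the reward protocol in code (the list contents, the `human_judge` interface, and the function name are all hypothetical placeholders for the human steps described above):

```python
import random

def run_plan_criticism_round(lines_of_thought, oracle_pick_index, human_judge):
    """One round of the counterfactual-reward protocol sketched above.

    lines_of_thought: candidate critiques / open questions / possible failure modes.
    oracle_pick_index: the index output by the low-bandwidth oracle.
    human_judge: callable taking (line_a, line_b) and returning 0 or 1 for
        whichever line turned out to be more useful after the humans spend
        time period X thinking about both.
    """
    oracle_line = lines_of_thought[oracle_pick_index]
    baseline_line = lines_of_thought[random.randrange(len(lines_of_thought))]

    winner = human_judge(oracle_line, baseline_line)  # humans think about both, then judge
    return 1.0 if winner == 0 else 0.0  # oracle rewarded only if its pick was judged more useful

# Toy usage with a stand-in judge that happens to prefer the oracle's pick:
reward = run_plan_criticism_round(
    ["missed failure mode A", "open question B", "criticism C"],
    oracle_pick_index=1,
    human_judge=lambda a, b: 0,
)
```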

Comment by william_s on The Main Sources of AI Risk? · 2019-03-22T18:55:56.945Z · score: 5 (3 votes) · LW · GW
  • AI systems end up controlled by a group of humans representing a small range of human values (ie. an ideological or religious group that imposes values on everyone else). While not caused only by AI design, it is possible that design decisions could impact the likelihood of this scenario (ie. at what point values are loaded into the system / how many people's values are loaded into the system), and this is relevant for overall strategy.
Comment by william_s on The Main Sources of AI Risk? · 2019-03-22T18:52:24.938Z · score: 6 (3 votes) · LW · GW
  • Failure to learn how to deal with alignment in the many-humans, many-AIs case even if single-human, single-AI alignment is solved (which I think Andrew Critch has talked about). For example, AIs negotiating on behalf of humans take the stance described in https://arxiv.org/abs/1711.00363 of agreeing to split control of the future according to which human's priors are most accurate (on potentially irrelevant issues), even if this isn't what humans actually want.
Comment by william_s on Some Thoughts on Metaphilosophy · 2019-03-08T18:58:55.169Z · score: 2 (2 votes) · LW · GW

Maybe one AI philosophy service could look like this: it would ask you a bunch of other questions that are simpler than the problem of qualia, then show you what those answers imply about the problem of qualia under some method of reconciling those answers.

Comment by william_s on Some Thoughts on Metaphilosophy · 2019-03-08T18:53:49.848Z · score: 2 (2 votes) · LW · GW

Re: Philosophy as interminable debate, another way to put the relationship between math and philosophy:

Philosophy as weakly verifiable argumentation

Math is solving problems by looking at the consequences of a small number of axiomatic reasoning steps. For something to be math, we have to be able to ultimately cash out any proof as a series of these reasoning steps. Once something is cashed out in this way, it takes a small constant amount of time to verify any reasoning step, so we can verify given polynomial time.

Philosophy is solving problems where we haven't figured out a set of axiomatic reasoning steps. Any non-axiomatic reasoning step we propose could end up having arguments that we hadn't thought of that would lead us to reject that step. And those arguments themselves might be undermined by other arguments, and so on. Each round of debate lets us add another level of counter-arguments. Philosophers can make progress when they have some good predictor of whether arguments are good or not, but they don't have access to certain knowledge of arguments being good.

Another difference between mathematics and philosophy is that in mathematics we have a well defined set of objects and a well-defined problem we are asking about. Whereas in philosophy we are trying to ask questions about things that exist in the real world and/or we are asking questions that we haven't crisply defined yet.

When we come up with a set of axioms and a description of a problem, we can move that problem from the realm of philosophy to the realm of mathematics. When we come up with some method we trust of verifying arguments (ie. replicating scientific experiments), we can move problems out of philosophy to other sciences.

It could be the case that philosophy grounds out in some reasonable set of axioms which we don't have access to now for computational reasons - in which case it could all end up in the realm of mathematics. It could be the case that, for all practical purposes, we will never reach this state, so it will remain in the "potentially unbounded DEBATE round case". I'm not sure what it would look like if it could never ground out - one model could be that we have a black box function that performs a probabilistic evaluation of argument strength given counter-arguments, and we go through some process to get the consequences of that, but it never looks like "here is a set of axioms".
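
One way to make the math/philosophy contrast precise, borrowing the complexity-theoretic framing from the AI-safety-via-debate work (a hedged formalization of the analogy, assuming a polynomial-time verifier/judge $V$; not something stated in the original comment):

```latex
% Math: a claim is settled by exhibiting a complete proof \pi that a
% polynomial-time verifier V checks step by step (an NP-style certificate).
x \in L_{\mathrm{math}} \iff \exists \pi \;.\; V(x, \pi) = 1

% Philosophy with k rounds of argument and counter-argument: each round adds a
% level of alternation, analogous to the k-th level of the polynomial hierarchy.
x \in L_k \iff \exists a_1 \, \forall a_2 \, \exists a_3 \cdots Q_k a_k \;.\; V(x, a_1, \ldots, a_k) = 1

% With polynomially many rounds (the "potentially unbounded DEBATE round case"),
% this style of weak verification reaches as far as PSPACE.
```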

Comment by william_s on Some Thoughts on Metaphilosophy · 2019-03-08T17:41:23.465Z · score: 6 (4 votes) · LW · GW

I guess it feels like I don't know how we could know that we're in the position that we've "solved" meta-philosophy. It feels like the thing we could do is build a set of better and better models of philosophy and check their results against held-out human reasoning and against each other.

I also don't think we know how to specify a ground truth reasoning process that we could try to protect and run forever which we could be completely confident would come up with the right outcome (where something like HCH is a good candidate but potentially with bugs/subtleties that need to be worked out).

I feel like I have some (not well justified and possibly motivated) optimism that this process yields something good fairly early on. We could gain confidence that we are in this world if we build a bunch of better and better models of meta-philosophy and observe at some point the models continue agreeing with each other as we improve them, and that they agree with various instantiations of protected human reasoning that we run. If we are in this world, the thing we need to do is just spend some time building a variety of these kinds of models and produce an action that looks good to most of them. (Where agreement is not "comes up with the same answer" but more like "comes up with an answer that other models think is okay and not disastrous to accept").

Do you think this would lead to "good outcomes"? Do you think some version of this approach could be satisfactory for solving the problems in Two Neglected Problems in Human-AI Safety?

Do you think there's a different kind of thing that we would need to do to "solve metaphilosophy"? Or do you think that working on "solving metaphilosophy" roughly caches out as "work on coming up with better and better models of philosophy in the model I've described here"?

Comment by william_s on Three AI Safety Related Ideas · 2019-03-08T17:30:25.571Z · score: 4 (2 votes) · LW · GW

A couple ways to implement a hybrid approach with existing AI safety tools:

Logical Induction: Specify some computationally expensive simulation of idealized humans. Run a logical inductor with the deductive process running the simulation and outputting what the humans say after time x in simulation, as well as statements about what non-idealized humans are saying in the real world. The inductor should be able to provide beliefs about what the idealized humans will say in the future informed by information from the non-idealized humans.

HCH/IDA: The HCH-humans demonstrate a reasoning process which aims to predict the output of a set of idealized humans using all available information (which can include running simulations of idealized humans or information from real humans). The way that the HCH tree uses information about real humans involves looking carefully at their circumstances and asking things like "how do the real human's circumstances differ from the idealized human's?" and "is the information from the real human compromised in some way?"

Comment by william_s on Can HCH epistemically dominate Ramanujan? · 2019-02-27T22:58:06.230Z · score: 1 (1 votes) · LW · GW

It seems like for Filtered-HCH, the application in the post you linked to, you might be able to do a weaker version where you label any computation that you can't understand in kN steps as problematic, only accepting things you think you can efficiently understand. (But I don't think Paul is arguing for this weaker version).

Comment by william_s on Reinforcement Learning in the Iterated Amplification Framework · 2019-02-18T21:09:09.756Z · score: 4 (2 votes) · LW · GW
RL is typically about sequential decision-making, and I wasn't sure where the "sequential" part came in).

I guess I've used the term "reinforcement learning" to refer to a broader class of problems including both one-shot bandit problems and sequential decision-making problems. In this view, the feature that makes RL different from supervised learning is not that we're trying to figure out how to act in an MDP/POMDP, but instead that we're trying to optimize a function that we can't take the derivative of (in the MDP case, it's because the environment is non-differentiable, and in the approval-learning case, it's because the overseer is non-differentiable).

Comment by william_s on Some disjunctive reasons for urgency on AI risk · 2019-02-15T21:59:17.046Z · score: 2 (2 votes) · LW · GW

Re: scenario 3, see The Evitable Conflict, the last story in Isaac Asimov's "I, Robot":

"Stephen, how do we know what the ultimate good of Humanity will entail? We haven't at our disposal the infinite factors that the Machine has at its! Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good – and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don't know. Only the Machines know, and they are going there and taking us with them."
Comment by william_s on HCH is not just Mechanical Turk · 2019-02-13T00:15:16.037Z · score: 6 (3 votes) · LW · GW

Yeah, to some extent. In the Lookup Table case, you need to have a (potentially quite expensive) way of resolving all mistakes. In the Overseer's Manual case, you can also leverage humans to do some kind of more robust reasoning (for example, they can notice a typo in a question and still respond correctly, even if the Lookup Table would fail in this case). Though in low-bandwidth oversight, the space of things that participants could notice and correct is fairly limited.

Though I think this still differs from HRAD in that it seems like the output of HRAD would be a much smaller thing in terms of description length than the Lookup Table, and you can buy extra robustness by adding many more human-reasoned things into the Lookup Table (ie. automatically add versions of all questions with typos that don't change the meaning of a question into the Lookup Table, add 1000 different sanity check questions to flag that things can go wrong).

So I think there are additional ways the system could correct mistaken reasoning relative to what I would think the output of HRAD would look like, but you do need to have processes that you think can correct any way that reasoning goes wrong. So the problem could be less difficult than HRAD, but still tricky to get right.

Comment by william_s on The Argument from Philosophical Difficulty · 2019-02-11T17:47:18.818Z · score: 4 (3 votes) · LW · GW

Thanks, this position makes more sense in light of Beyond Astronomical Waste (I guess I have some concept of "a pretty good future" that is fine with something like a bunch of human-descended beings living happy lives that misses out on the sort of things mentioned in Beyond Astronomical Waste, and an "optimal future" which includes those considerations). I buy this as an argument that "we should put more effort into making philosophy work to make the outcome of AI better, because we risk losing large amounts of value" rather than "our efforts to get a pretty good future are doomed unless we make tons of progress on this" or something like that.

"Thousands of millions" was a typo.

Comment by william_s on Thoughts on reward engineering · 2019-02-10T22:31:38.378Z · score: 4 (2 votes) · LW · GW
What is the motivation for using RL here?

I see the motivation as: given practical compute limits, it may be much easier to have the system find an action the overseer approves of instead of imitating the overseer directly. Using RL also allows you to use any advances that are made in RL by the machine learning community to try to remain competitive.

Comment by william_s on Thoughts on reward engineering · 2019-02-10T22:28:42.330Z · score: 4 (2 votes) · LW · GW
Would this still be a problem if we were training the agent with SL instead of RL?

Maybe this could happen with SL if SL does some kind of large search and finds a solution that looks good but is actually bad. The distilled agent would then learn to identify this action and reproduce it, which implies the agent learning some facts about the action to efficiently locate it with much less compute than the large search process. Knowing what the agent knows would allow the overseer to learn those facts, which might help in identifying this action as bad.

Comment by william_s on Reinforcement Learning in the Iterated Amplification Framework · 2019-02-10T22:09:27.681Z · score: 4 (2 votes) · LW · GW
I don't understand why we want to find this X* in the imitation learning case.

Ah, with this example the intent was more like "we can frame what the RL case is doing as finding X*; let's show how we could accomplish the same thing in the imitation learning case (in the limit of unlimited compute)".

The reverse mapping (imitation to RL) just consists of applying reward 1 to M2's demonstrated behaviour (which could be "execute some safe search and return the results"), and reward 0 to everything else.

What is $p_M(X^*)$?

$p_M(X^*)$ is the probability of $M$ outputting $X^*$ (where $M$ is a stochastic policy).

$M_2(\text{"How good is answer X to Y?"}) \cdot \nabla \log(p_M(X))$

This is the REINFORCE gradient estimator (which tries to increase the log probability of actions that were rated highly)
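
For concreteness, here is a minimal numpy sketch of that estimator, with `m2_score` as a purely hypothetical stand-in for M2's rating of a sampled answer (my illustration, not code from the discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

def m2_score(x):
    # Hypothetical stand-in for M2("How good is answer X to Y?"):
    # here it simply prefers answer index 2.
    return 1.0 if x == 2 else 0.0

n_answers = 5                 # toy discrete answer space
logits = np.zeros(n_answers)  # parameters of the policy p_M
lr = 0.5

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):
    p = softmax(logits)
    x = rng.choice(n_answers, p=p)         # sample X ~ p_M
    r = m2_score(x)                        # M2's rating of X
    grad_log_p = np.eye(n_answers)[x] - p  # gradient of log p_M(X) w.r.t. the logits
    logits += lr * r * grad_log_p          # REINFORCE update: reinforce highly-rated answers

print(softmax(logits))  # probability mass concentrates on the answer M2 rates highly
```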

Comment by william_s on Announcement: AI alignment prize round 4 winners · 2019-02-10T19:06:00.792Z · score: 6 (3 votes) · LW · GW

I guess the question was more from the perspective of: if the cost was zero then it seems like it would be worth running, so what part of the cost makes it not worth running (where I would think of the cost as probably time to judge or availability of money to fund the contest)?

Comment by william_s on The Argument from Philosophical Difficulty · 2019-02-10T19:02:57.074Z · score: 6 (3 votes) · LW · GW

One important dimension to consider is how hard it is to solve philosophical problems well enough to have a pretty good future (which includes avoiding bad futures). It could be the case that this is not so hard, but fully resolving questions so we could produce an optimal future is very hard or impossible. It feels like this argument implicitly relies on assuming that "solve philosophical problems well enough to have a pretty good future" is hard (ie. takes thousands of millions of years in scenario 4) - can you provide further clarification on whether/why you think that is the case?

Comment by william_s on Announcement: AI alignment prize round 4 winners · 2019-02-09T17:43:58.790Z · score: 8 (4 votes) · LW · GW

Slightly disappointed that this isn't continuing (though I didn't submit to the prize, I submitted to Paul Christiano's call for possible problems with his approach which was similarly structured). Was hoping that once I got further into my PhD, I'd have some more things worth writing up, and the recognition/a bit of prize money would provide some extra motivation to get them out the door.

What do you feel is the limiting resource that keeps this from being worth continuing in its current form?

Comment by william_s on HCH is not just Mechanical Turk · 2019-02-09T17:10:56.001Z · score: 1 (1 votes) · LW · GW

Yeah, this is a problem that needs to be addressed. It feels like in the Overseer's Manual case you can counteract this by giving definitions/examples of how you want questions to be interpreted, and in the Lookup Table case this can be addressed by coordination within the team creating the lookup table.