Posts

How can Interpretability help Alignment? 2020-05-23T16:16:44.394Z · score: 31 (12 votes)

Comments

Comment by flodorner on AI Unsafety via Non-Zero-Sum Debate · 2020-07-14T12:39:18.889Z · score: 1 (1 votes) · LW · GW

"My intuition is that there will be a class of questions where debate is definitely safe, a class where it is unsafe, and a class where some questions are safe, some unsafe, and we don’t really know which are which."

Interesting. Do you have some examples of types of questions you expect to be safe, or potential features of safe questions? Is it mostly about the downstream consequences that answers would have, or more about instrumental goals that the questions induce for debaters?

Comment by flodorner on Tradeoff between desirable properties for baseline choices in impact measures · 2020-07-09T09:52:49.372Z · score: 3 (2 votes) · LW · GW

I like the insight that offsetting is not always bad and the idea of dealing with the bad cases using the task reward. State-based reward functions that capture whether or not the task is currently done also intuitively seem like the correct way of specifying rewards in cases where achieving the task does not end the episode.

I am a bit confused about the section on the Markov property: I was imagining that the reason you want the property is to make applying standard RL techniques more straightforward (or to avoid making already existing partial observability more complicated). However, if I understand correctly, the second modification has the (expectation of the) penalty as a function of the complete agent policy, and I don't really see how that would help. Is there another reason to want the Markov property, or am I missing some way in which the modification would simplify applying RL methods?

Comment by flodorner on Good and bad ways to think about downside risks · 2020-06-11T17:48:58.104Z · score: 6 (2 votes) · LW · GW

Nice post!

I would like to highlight that a naive application of the expected-value perspective could lead to problems like the unilateralist's curse, and I think the post would be even more useful for readers who are new to these kinds of considerations if it discussed that more explicitly (or prominently linked to relevant other posts).

Comment by flodorner on My prediction for Covid-19 · 2020-06-01T08:46:15.643Z · score: 2 (2 votes) · LW · GW

"If, at some point in the future, we have the same number of contagious people, and are not at an appreciable fraction of group immunity, it will at that point again be a solid decision to go into quarantine (or to extend it). "

I think that for many people the number of infections at which this becomes a good idea has increased, as we now have more accurate information about the CFR and about how quickly realistic countermeasures can slow down an outbreak in a given area, which should decrease credence in some of the worst-case scenarios many were worried about a few months ago.

Comment by flodorner on The case for C19 being widespread · 2020-04-13T14:18:27.701Z · score: 3 (2 votes) · LW · GW

"Czech Researchers claim that Chinese do not work well "

This seems to be missing a word ;)

Comment by flodorner on Conflict vs. mistake in non-zero-sum games · 2020-04-07T22:19:52.053Z · score: 20 (14 votes) · LW · GW

Nitpick: I am pretty sure non-zero-sum does not imply a convex Pareto front.

Instead of the lens of negotiation position, one could argue that mistake theorists believe that the Pareto boundary is convex (which implies that usually maximizing surplus is more important than deciding allocation), while conflict theorists see it as concave (which implies that allocation is the more important factor).
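
To make that concrete with a toy illustration of my own (not from the post): compare feasible payoff sets bounded by $u_1^2 + u_2^2 = 1$ (a convex set) and by $\sqrt{u_1} + \sqrt{u_2} = 1$ (a concave frontier), with $u_1, u_2 \ge 0$. On the first frontier, total surplus $u_1 + u_2$ ranges from $1$ at either extreme to $\sqrt{2}$ at the equal split, so cooperating to move towards the middle creates real value. On the second, the frontier points are $(t^2, (1-t)^2)$, so total surplus is $1$ at either extreme and only $1/2$ at the equal split, and almost everything rides on who gets the larger share. The second case is non-zero-sum, yet (absent randomization or transfers) its Pareto front is not convex, which is the nitpick above.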

Comment by flodorner on March 14/15th: Daily Coronavirus link updates · 2020-03-17T12:23:17.551Z · score: 1 (1 votes) · LW · GW

"Twitter: CV kills via cardiac failure, not pulmonary" links to the aggregate spreadsheet, not the Twitter source.

Comment by flodorner on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T13:19:34.852Z · score: 9 (3 votes) · LW · GW

Even if the claim were usually true on longer time scales, I doubt that pointing out an organisation's mistakes and not-entirely-truthful statements usually increases trust in it on the short time scales that might be most important here. Reforming organizations and rebuilding trust usually takes time.

Comment by flodorner on Subagents and impact measures, full and fully illustrated · 2020-03-05T19:43:20.722Z · score: 1 (1 votes) · LW · GW

How do

"One of the problems here is that the impact penalty only looks at the value of VAR one turn ahead. In the DeepMind paper, they addressed similar issues by doing “inaction rollouts”. I'll look at the more general situations of rollouts: rollouts for any policy "

and

"That's the counterfactual situation, that zeroes out the impact penalty. What about the actual situation? Well, as we said before, A will be just doing ∅; so, as soon as would produce anything different from ∅, the A becomes completely unrestrained again."

fit together? In the special case where is the inaction policy, I don't understand how the trick would work.

Comment by flodorner on Attainable Utility Preservation: Scaling to Superhuman · 2020-02-27T22:06:56.228Z · score: 1 (1 votes) · LW · GW

For all auxiliary rewards. Edited the original comment.

I agree that it is likely to go wrong somewhere, but it might still be useful to figure out why. If the agent is able to predict the randomness reliably in some cases, the random baseline does not seem to help with the subagent problem.

Edit: Randomization does not seem to help, as long as the actionset is large (as the agent can then arrange for most actions to make the subagent optimize the main reward).

Comment by flodorner on Attainable Utility Preservation: Scaling to Superhuman · 2020-02-27T20:44:50.349Z · score: 3 (2 votes) · LW · GW

I wonder what happens to the subagent problem with a random action as baseline: In the current sense, building a subagent roughly works by reaching a state where

for all auxiliary rewards , where is the optimal policy according to the main reward; while making sure that there exists an action such that

for every . So while building a subagent in that way is still feasible, the agent would be forced to either receive a large penalty or give the subagent random orders at .

Probably, there is a way to circumvent this again, though? Also, I am unsure about the other properties of randomized baselines.

Comment by flodorner on How Low Should Fruit Hang Before We Pick It? · 2020-02-27T17:53:12.442Z · score: 1 (1 votes) · LW · GW

Where does

come from?

Also, the equation seems to imply

Edit: I focused too much on what I suppose is a typo. Clearly you can just rewrite the first and last equality as equality of an affine linear function

at two points, which gives you equality everywhere.
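
Spelling out that step in generic notation (mine, not the post's): if $ax + b = cx + d$ holds at two distinct points $x_1 \neq x_2$, subtracting the two instances gives $a(x_1 - x_2) = c(x_1 - x_2)$, hence $a = c$ and then $b = d$, so the two affine functions agree everywhere.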

Comment by flodorner on How Low Should Fruit Hang Before We Pick It? · 2020-02-27T17:01:50.956Z · score: 3 (2 votes) · LW · GW

I do not understand your proof for proposition 2.

Comment by flodorner on On characterizing heavy-tailedness · 2020-02-16T22:43:06.421Z · score: 5 (2 votes) · LW · GW

Do you maybe have another example for action relevance? Nonfinite variance and finite support do not go well together.
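
(To spell out the tension: any random variable $X$ with support contained in a bounded interval $[-M, M]$ satisfies $\mathrm{Var}(X) \le \mathbb{E}[X^2] \le M^2 < \infty$, so infinite variance already forces unbounded support.)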

Comment by flodorner on In theory: does building the subagent have an "impact"? · 2020-02-14T08:00:46.590Z · score: 1 (1 votes) · LW · GW

So the general problem is that large changes in ∅) are not penalized?

Comment by flodorner on Appendix: how a subagent could get powerful · 2020-02-12T15:05:40.514Z · score: 4 (3 votes) · LW · GW

"Not quite... " are you saying that the example is wrong, or that it is not general enough? I used a more specific example, as I found it easier to understand that way.

I am not sure I understand: In my mind, "commitments to balance out the original agent's attainable utility" essentially refers to the second agent being penalized by the first agent's penalty (although I agree that my statement is stronger). Regarding your text, my statement refers to "SA will just precommit to undermine or help A, depending on the circumstances, just sufficiently to keep the expected rewards the same."

My confusion is about why the second agent is only mildly constrained by this commitment. For example, weakening the first agent would come with a big penalty (or, more precisely, building another agent that is going to weaken it gives a large penalty to the original agent), unless it's reversible, right?

The bit about multiple subagents does not assume that more than one of them is actually built. It rather presents a scenario where building intelligent subagents is automatically penalized. (Edit: under the assumption that building a lot of subagents is infeasible or takes a lot of time).

Comment by flodorner on Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1) · 2020-01-21T14:16:24.271Z · score: 1 (1 votes) · LW · GW

I found it a bit confusing that you first referred to selection and control as types of optimizers and then (seemingly?) replaced selection by optimization in the rest of the text.

Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2020-01-13T13:39:52.494Z · score: 1 (1 votes) · LW · GW

I was thinking about normalisation as linearly rescaling every reward to when I wrote the comment. Then, one can always look at , which might make it easier to graphically think about how different beliefs lead to different policies. Different scales can then be translated to a certain reweighting of the beliefs (at least from the perspective of the optimal policy), as maximizing is the same as maximizing
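
In symbols (my notation): if reward $R_i$ is rescaled by a positive factor $c_i$ and believed with probability $p_i$, then maximizing $\sum_i p_i c_i R_i$ is the same as maximizing $\sum_i \tilde p_i R_i$ with $\tilde p_i = p_i c_i / \sum_j p_j c_j$, since dividing the objective by the positive constant $\sum_j p_j c_j$ does not change the optimal policy.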

Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2020-01-12T11:26:23.819Z · score: 10 (2 votes) · LW · GW

After looking at the update, my model is:

(Strictly) convex Pareto boundary: Extreme policies require strong beliefs. (Modulo some normalization of the rewards)

Concave (including linear) Pareto boundary: Extreme policies are favoured, even for moderate beliefs. (In this case, normalization only affects the "tipping point" in beliefs, where the opposite extreme policy is suddenly favoured).

In reality, we will often have concave and convex regions. The concave regions then cause more extreme policies for some beliefs, but the convex regions usually prevent the policy from completely focusing on a single objective.

From this lens, 1) maximum likelihood pushes us to one of the ends of the Pareto boundary, 2) an unlikely true reward pushes us close to the "bad" end, 3) difficult optimization messes with normalisation (I am still somewhat confused about the exact role of normalization), and 4) not accounting for diminishing returns bends the Pareto boundary to become more concave.
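
A minimal worked version of the first two cases (my own example, with rewards restricted to the boundary): given beliefs $(p, 1-p)$ and the strictly convex boundary $R_1^2 + R_2^2 = 1$, maximizing $p R_1 + (1-p) R_2$ picks the point with $R_1 / R_2 = p / (1-p)$, which moves continuously as the belief shifts, so an extreme policy requires an extreme belief. With the linear boundary $R_1 + R_2 = 1$, the objective becomes $(2p - 1) R_1 + (1 - p)$, which is linear in $R_1$, so the optimum jumps from $R_1 = 0$ to $R_1 = 1$ as $p$ crosses the tipping point $1/2$.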

Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2020-01-09T21:03:52.334Z · score: 1 (1 votes) · LW · GW

But no matter how I take the default outcome, your second example is always "more positive sum" than the first, because 0.5 + 0.7 + 2x < 1.5 - 0.1 + 2x.

Granted, you could construct examples where the inequality is reversed and Goodhart bad corresponds to "more negative sum", but this still seems to point to the sum-condition not being the central concept here. To me, it seems like "negative min" compared to the default outcome would be closer to the actual problem. This distinction matters, because negative min is a lot weaker than negative sum.

Or am I completely misunderstanding your examples or your point?


Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2020-01-06T08:07:15.483Z · score: 1 (1 votes) · LW · GW

To clear up some more confusion: The sum-condition is not what actually matters here, is it? In the first example of 5), the sum of utilities is lower than in the second one. The problem in the second example seems rather to be that the best states for one of the (Edit: the expected) rewards are bad for the other?

That again seems like it would often follow from resource constraints.

Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2019-12-31T10:04:40.341Z · score: 3 (2 votes) · LW · GW

Right. I think my intuition about negative-sum interactions under resource constraints combined the zero-sum nature of resource spending with the (perceived) negative-sum nature of competition for resources. But for a unified agent there is no competition for resources, so the argument for resource constraints leading to negative-sum interactions is gone.

Thank you for alleviating my confusion.

Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2019-12-30T17:42:38.845Z · score: 1 (1 votes) · LW · GW

My model goes something like this: If increasing values requires using some resource, gaining access to more of the resource can be positive sum, while spending it is negative sum due to opportunity costs. In this model, the economy can be positive sum because it helps with alleviating resource constraints.

But maybe it does not really matter if most interactions are positive-sum until some kind of resource limit is reached and negative-sum only after?

Comment by flodorner on When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors · 2019-12-20T22:00:37.167Z · score: 1 (1 votes) · LW · GW

"If our ideal reward functions have diminishing returns, this fact is explicitly included in the learning process."

It seems like the exact shape of the diminishing returns might be quite hard to infer, while wrong "rates" of diminishing returns can lead to (slightly less severe versions of) the same problems as not modelling diminishing returns at all.

We probably at least need to incorporate our uncertainty about how returns diminish in some way. I am a bit confused about how to do this, as slowly diminishing functions will probably dominate if we just take an expectation over all candidates?
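
As a toy version of that worry (my own): if the candidate utilities are $U_\alpha(x) = x^\alpha$ for finitely many exponents $\alpha \in (0, 1]$ with weights $w_\alpha > 0$, then for large resource levels $x$ the mixture $\sum_\alpha w_\alpha x^\alpha$ is dominated by the term with the largest $\alpha$, so the most slowly diminishing candidate ends up driving the optimization almost regardless of how little weight it is given.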

Comment by flodorner on Sections 5 & 6: Contemporary Architectures, Humans in the Loop · 2019-12-20T20:53:23.071Z · score: 3 (3 votes) · LW · GW

It seems like replacing two agents A and B by a single agent that optimizes for their welfare function would avoid the issue of punishment. I guess that doing this might be feasible in some cases for artificial agents (as a single agent optimizing for the welfare function is a simpler object than the two-agent dynamics including punishment) and potentially understudied, as the solution seems harder to implement for humans (even though human solutions to collective action problems at least resemble the approach). One key problem might be finding a welfare function that both agents agree on, especially if there is information asymmetry.

Any thought on this?

Edit: The approach seems to be most trivial when both agents share their world model and optimize for explicit utilities over this world model. More generally, two principals with similar amounts of compute and similarly easily optimizable utility functions are most likely better off building an agent that optimizes for their welfare instead of two agents that need to learn to compete and cooperate. Optimizing for the welfare function applied to the agent's value functions can be done by a somewhat straightforward modification of Q-learning or (in the case of differentiable welfare) policy gradient methods, as in the sketch below.
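
A minimal sketch of the Q-learning variant I have in mind, assuming a tabular environment with a hypothetical env.reset() / env.step(a) interface that returns one reward per principal; the welfare choice (here the product of the two value estimates, assumed nonnegative) is just a placeholder, not a worked-out proposal:

```python
import numpy as np

def welfare(q1, q2):
    # Placeholder welfare function: product (Nash-style) of the two
    # principals' value estimates, assumed to be nonnegative.
    return q1 * q2

def welfare_q_learning(env, n_states, n_actions, episodes=1000,
                       alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning for one agent serving two principals.

    The agent keeps one Q-table per principal reward and acts greedily
    with respect to the welfare of the two value estimates. The env
    interface (reset/step returning a state, a reward pair, and a done
    flag) is assumed for this sketch, not taken from any library.
    """
    Q1 = np.zeros((n_states, n_actions))
    Q2 = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection on the welfare of the value estimates.
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(welfare(Q1[s], Q2[s])))
            s_next, (r1, r2), done = env.step(a)
            # Both Q-tables bootstrap from the welfare-greedy action in the
            # next state, so each tracks the value of the welfare-greedy
            # policy for its principal's reward.
            a_next = int(np.argmax(welfare(Q1[s_next], Q2[s_next])))
            Q1[s, a] += alpha * (r1 + gamma * Q1[s_next, a_next] * (not done) - Q1[s, a])
            Q2[s, a] += alpha * (r2 + gamma * Q2[s_next, a_next] * (not done) - Q2[s, a])
            s = s_next
    return Q1, Q2
```

For a differentiable welfare function, the analogous policy-gradient variant would estimate one return per principal and ascend the gradient of the welfare of those return estimates.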

Comment by flodorner on Sections 1 & 2: Introduction, Strategy and Governance · 2019-12-18T16:35:30.489Z · score: 1 (1 votes) · LW · GW

My reasoning relies more on the divisibility of stakes (without having to resort to randomization). If there is a deterministic settlement that is preferable to conflict, then nobody has an incentive to break the settlement.

However, my main point was that I read the paragraph I quoted as "we don't need the divisibility of stakes if we have credibility and complete information, therefore credibility and complete information are more important than divisibility of stakes". I do not really find this line of argument convincing, as I am not convinced that you could not make the same argument with the roles of credibility and divisible stakes reversed. Did I maybe misread what you are saying there?

Still, your conclusion seems plausible, and I suspect that you have other arguments for focusing on credibility. I would like to hear those.

Comment by flodorner on Sections 1 & 2: Introduction, Strategy and Governance · 2019-12-15T19:07:30.967Z · score: 1 (1 votes) · LW · GW

"If players could commit to the terms of peaceful settlements and truthfully disclose private information necessary for the construction of a settlement (for instance, information pertaining to the outcome probability p in Example 1.1.1), the allocation of indivisible stakes could often be accomplished. Thus, the most plausible of Fearon’s rationalist explanations for war seem to be (1) the difficulty of credible commitment and (2) incomplete information (and incentives to misrepresent that information). "

It seems plausible that if players could truthfully disclose private information and divide stakes, the ability to credibly commit would often not be needed. Would that in turn reduce the plausibility of explanation (1)?

I am curious whether there are some further arguments for the second sentence in the quote that were omitted to save space.

Comment by flodorner on Tabooing 'Agent' for Prosaic Alignment · 2019-08-23T17:51:27.034Z · score: 3 (3 votes) · LW · GW

In light of this exchange, it seems like it would be interesting to analyze to what extent arguments for problematic properties of superintelligent utility-maximizing agents (like instrumental convergence) actually generalize to more general well-generalizing systems.

Comment by flodorner on Let's talk about "Convergent Rationality" · 2019-06-14T20:05:08.136Z · score: 1 (1 votes) · LW · GW

"If the learner were a consequentialist with accuracy as its utility function, it would prefer to modify the test distribution in this way in order to increase its utility. Yet, even when given the opportunity to do so, typical gradient-based supervised learning algorithms do not seem to pursue such solutions (at least in my personal experience as an ML researcher)."

Can you give an example for such an opportunity being given but not taken?