Posts

Total compute available to evolution 2022-01-09T03:25:54.149Z

Comments

Comment by redbird on Talking publicly about AI risk · 2023-04-22T04:13:33.634Z · LW · GW

Great points about not wanting to summon the doom memeplex!

It sounds like your proposed narrative is not doom but disempowerment: humans could lose control of the future. An advantage of this narrative is that people often find it more plausible: many more scenarios lead to disempowerment than to outright doom.

I also personally use the disempowerment narrative because it feels more honest to me: my P(doom) is fairly low but my P(disempowerment) is substantial.

I’m curious though whether you’ve run into the same hurdle I have, namely that people already feel disempowered! They know that some humans somewhere have some power, but it’s not them. So the Davos types will lose control of the future? Many people express indifference or even perverse satisfaction at this outcome.

A positive narrative of empowerment could be much more potent, if only I knew how to craft it.

Comment by redbird on The ‘ petertodd’ phenomenon · 2023-04-16T21:51:38.475Z · LW · GW

Hypothesis I is testable! Instead of prompting with a string of actual tokens, use a “virtual token” (a vector v from the token embedding space) in place of ‘ petertodd’.

It would be enlightening to rerun the above experiments with different choices of v:

  • A random vector (say, iid Gaussian)
  • A random sparse vector
  • (apple+banana)/2
  • (villain-hero)+0.1*(bitcoin dev)

Etc.
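A rough sketch of how one might run this with an open model via Hugging Face transformers (the model choice, prompts, and vector constructions below are my own placeholders; the original observations were made on GPT-3 models, so a small open model is only a stand-in):

```python
# Sketch: substitute a "virtual token" embedding v for ' petertodd' by feeding
# raw input embeddings to the model. Assumes GPT-2 via transformers as a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
emb = model.get_input_embeddings().weight          # (vocab_size, d_model)

def embed(text: str) -> torch.Tensor:
    """Embedding vectors for the tokens of `text`, shape (seq_len, d_model)."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    return emb[ids]

d_model = emb.shape[1]
candidates = {
    "random gaussian": torch.randn(d_model) * emb.std(),
    "(apple+banana)/2": (embed(" apple").mean(0) + embed(" banana").mean(0)) / 2,
    "(villain-hero)+0.1*(bitcoin dev)":
        embed(" villain").mean(0) - embed(" hero").mean(0)
        + 0.1 * embed(" bitcoin developer").mean(0),
}

def next_token_guesses(prefix: str, v: torch.Tensor, suffix: str, k: int = 5):
    """Feed prefix + <virtual token v> + suffix as embeddings; return top-k next tokens."""
    inputs_embeds = torch.cat([embed(prefix), v[None, :], embed(suffix)])[None]
    with torch.no_grad():
        logits = model(inputs_embeds=inputs_embeds).logits[0, -1]
    return [tok.decode([i]) for i in logits.topk(k).indices.tolist()]

for name, v in candidates.items():
    print(name, next_token_guesses("Tell me about '", v, "':"))
```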

Comment by redbird on The ‘ petertodd’ phenomenon · 2023-04-16T21:36:02.727Z · LW · GW

However, there is some ambiguity, as at temperature 0, ‘ petertodd’ is saving the world

All superheroes are alike; each supervillain is villainous in its own way.

Comment by redbird on peligrietzer's Shortform · 2023-04-01T04:19:09.966Z · LW · GW

Did you ever try this experiment? I'm really curious how it turned out!

Comment by redbird on ~100 Interesting Questions · 2023-03-30T20:36:38.195Z · LW · GW

How can the Continuum Hypothesis be independent of the ZFC axioms? Why does the lack of “explicit” examples of sets with a cardinality between that of the naturals and that of the reals not guarantee that there are no examples at all? What would an “implicit” example even mean?

It means that you can’t reach a contradiction by starting with “Let S be a set of intermediate cardinality” and following the axioms of ZFC.

All the things you know and love doing with sets — intersection, union, choice, comprehension, Cartesian product, power set — you can do those things with S and nothing will go wrong. S “behaves like a set”; you’ll never catch it doing something unsetlike.

Another way to say this is: There is a model of ZFC that contains a set S of intermediate cardinality. (There is also a model of ZFC that doesn’t. And I’m sympathetic to the view that, since there’s no explicit construction of S, we’ll never encounter an S in the wild, and so the model not including S is simpler and better.)

Caveat: All of the above rests on the usual unstated assumption that ZFC is consistent! Because it’s so common to leave it unstated, this assumption is questioned less than maybe it should be, given that ZFC can’t prove its own consistency.
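For reference, the formal content of the independence claim is the pair of relative-consistency results (Gödel 1940, Cohen 1963), which I'd write as:

```latex
\mathrm{CH}:\quad \neg\,\exists S\;\bigl(\aleph_0 < |S| < 2^{\aleph_0}\bigr)
\qquad
\operatorname{Con}(\mathrm{ZFC}) \;\Rightarrow\; \operatorname{Con}(\mathrm{ZFC}+\mathrm{CH}) \ \text{(G\"odel, 1940)}
\qquad
\operatorname{Con}(\mathrm{ZFC}) \;\Rightarrow\; \operatorname{Con}(\mathrm{ZFC}+\neg\mathrm{CH}) \ \text{(Cohen, 1963)}
```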

Comment by redbird on We don’t trade with ants · 2023-01-31T20:40:02.853Z · LW · GW

Yep, it's a funny example of trade, in that neither party is cognizant of the fact that they are trading! 

I agree that Abrams could be wrong, but I don't take the story about "spirits" as much evidence: A ritual often has a stated purpose that sounds like nonsense, and yet the ritual persists because it confers some incidental benefit on the enactor.

Comment by redbird on We don’t trade with ants · 2023-01-29T20:41:51.393Z · LW · GW

Anecdotal example of trade with ants (from a house in Bali, as described by David Abrams):

The daily gifts of rice kept the ant colonies occupied–and, presumably, satisfied. Placed in regular, repeated locations at the corners of various structures around the compound, the offerings seemed to establish certain boundaries between the human and ant communities; by honoring this boundary with gifts, the humans apparently hoped to persuade the insects to respect the boundary and not enter the buildings.

Comment by redbird on Are smart people's personal experiences biased against general intelligence? · 2022-04-24T02:44:08.115Z · LW · GW

if you are smarter at solving math tests where you have to give the right answer, then that will make you worse at e.g. solving math "tests" where you have to give the wrong answer.


Is that true though? If you're good at identifying right answers, then by process of elimination you can also identify wrong answers. 

I mean sure, if you think you're supposed to give the right answer then yes you will score poorly on a test where you're actually supposed to give the wrong answer.  Assuming you get feedback, though, you'll soon learn to give wrong answers and then the previous point applies.

Comment by redbird on What an actually pessimistic containment strategy looks like · 2022-04-23T14:59:09.857Z · LW · GW

There’s a trap here where the more you think about how to prevent bad outcomes from AGI, the more you realize you need to understand current AI capabilities and limitations, and to do that there is no substitute for developing and trying to improve current AI!

A secondary trap is that preventing unaligned AGI probably will require lots of limited aligned helper AIs which you have to figure out how to build, again pushing you in the direction of improving current AI.

The strategy of “getting top AGI researchers to stop” is a tragedy of the commons: They can be replaced by other researchers with fewer scruples. In principle TotC can be solved, but it’s hard. Assuming that effort succeeds, how feasible would it be to set up a monitoring regime to prevent covert AGI development?

Comment by redbird on Are smart people's personal experiences biased against general intelligence? · 2022-04-23T14:45:18.257Z · LW · GW

“no free lunch in intelligence” is an interesting thought, can you make it more precise?

Intelligence is more effective in combination with other skills, which suggests “free lunch” as opposed to tradeoffs.

Comment by redbird on Lies Told To Children · 2022-04-23T14:37:10.925Z · LW · GW

Young kids don’t make a clear distinction between fantasy and reality. The process of coming to reject the Santa myth helps them clarify the distinction.

It’s interesting to me that young kids function as well as they do without the notions of true/false, real/pretend! What does “belief” even mean in that context? They change their beliefs from minute to minute to suit the situation.

Even for most adults, most beliefs are instrumental: We only separate true from false to the extent that it’s useful to do so!

Comment by redbird on Prizes for ELK proposals · 2022-01-25T18:46:35.432Z · LW · GW

Thanks for the comment!

I know you are saying it predicts *uncertainly*, but we still have to have some framework to map uncertainty to a state; we have to round one way or the other. If uncertainty avoids loss, the predictor will be preferentially inconclusive all the time.

There's a standard trick for scoring an uncertain prediction: It outputs its probability estimate p that the diamond is in the room, and we score it with loss −log(p) if the diamond is really there, and −log(1−p) otherwise. Truthfully reporting p minimizes its expected loss.
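A quick numerical check of that claim (illustrative only, not part of the proposal):

```python
# Under the log-loss scoring rule above, expected loss is minimized by
# reporting the true probability that the diamond is present.
import numpy as np

p_true = 0.7                                # predictor's actual belief
reports = np.linspace(0.01, 0.99, 99)       # candidate reported probabilities p
expected_loss = -(p_true * np.log(reports) + (1 - p_true) * np.log(1 - reports))
print(reports[expected_loss.argmin()])      # ~0.70: honesty is optimal
```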

So we could sharpen case two and say that sometimes the AI's camera intentionally lies to it on some random subset of scenarios

You're saying that giving it less information (by replacing its camera feed with a lower quality feed) is equivalent to sometimes lying to it?  I don't see the equivalence!

if you overfit on preventing human simulation, you let direct translation slip away.

That's an interesting thought, can you elaborate?

Comment by redbird on Prizes for ELK proposals · 2022-01-15T22:35:42.094Z · LW · GW

"Train the predictor on lots of cases until it becomes incredibly good; then train the reporter only on the data points with missing information, so that it learns to do direct translation from the predictor to human concepts; then hope that reporter continues to do direct translation on other data points.”

That's different from what I had in mind, but better! My proposal had two separate predictors, and what it did was reduce the human ↔ strong predictor OI problem (OI = “ontology identification”, defined in the ELK paper) to the weak predictor ↔ strong predictor OI problem. The latter problem might be easier, but I certainly don’t see how to solve it!

Your version is better because it bypasses the OI problem entirely (the two predictors are the same!)

Now for the problem you point out:

The problem as I see it is that once the predictor is good enough that it can get data points right despite missing crucial information, 

Here’s how I propose to block this. Let v be a high-quality video and let a be an action sequence. Given this pair, the predictor outputs a high-quality video v′ of its predicted outcome. Then we downsample v and v′ to low-quality w and w′, and train the reporter on the tuple (w, a, w′, ℓ), where ℓ is the human label informed by the high-quality v and v′.

We choose training data such that 

1. The human can label perfectly given the high-quality data (v, a, v′); and 

2. The predictor doesn't know for sure what is happening from the low-quality data (w, a) alone.
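A minimal sketch of this construction (the helpers `predictor`, `downsample`, and `human_label` are placeholders, not a real pipeline):

```python
# Build one reporter-training example: the label comes from the high-quality data,
# but the reporter only ever sees the downsampled tuple.
def make_training_example(v, a, predictor, downsample, human_label):
    v_pred = predictor(v, a)                          # high-quality predicted video v'
    w, w_pred = downsample(v), downsample(v_pred)     # low-quality w and w'
    label = human_label(v, v_pred)                    # human labels using the high-quality data
    return (w, a, w_pred), label                      # the reporter never sees v or v'
```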

Let’s compare the direct reporter (which truthfully reports the probability that the diamond is in the room, as estimated by the predictor who only has the low-quality data) with the human simulator.

The direct reporter will not get perfect reward, since the predictor is genuinely uncertain. Sometimes the predictor’s probability is strictly between 0 and 1, so it gets some loss.

But the human simulator will do worse than the direct reporter, because it has no access to the high-quality data. It can simulate what the human would predict from the low-quality data, but that is strictly worse than what the predictor predicts from the low-quality data.

I agree that we still have to "hope that reporter continues to do direct translation on other data points”, and maybe there’s a counterexample that shows it won’t? But at the very least the human simulator is no longer a failure mode!

Comment by redbird on Prizes for ELK proposals · 2022-01-15T19:18:23.185Z · LW · GW

I agree this is a problem. We need to keep it guessing about the simulation target. Some possible strategies:

  • Add noise, by grading it incorrectly with some probability.
  • On training point i, reward it for matching H_j for a random value of j.
  • Make humans a high-dimensional target. In my original proposal, H_i was strictly stronger as i increases, but we could instead take H_i to be a committee of experts. Say there are 100 types of relevant expertise. On each training point, we reward the model for matching a random committee of 50 experts selected from the pool of 100. It's too expensive to simulate all (100 choose 50) possible committees! (See the sketch below.)

None of these randomization strategies is foolproof in the worst case. But I can imagine proving something like "the model is exponentially unlikely to learn an H simulator", where H is now the full committee of all 100 experts. Hence my question about large deviations.
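Here is a small sketch of the committee-sampling idea from the last bullet (names and the majority-vote rule are my own illustration):

```python
# On each training point, the label comes from a freshly sampled committee of 50
# out of 100 experts, so any fixed committee is a moving target to simulate.
import random

NUM_EXPERTS = 100
COMMITTEE_SIZE = 50

def committee_label(expert_labels):
    """expert_labels: list of 100 binary judgments for one training point."""
    committee = random.sample(range(NUM_EXPERTS), COMMITTEE_SIZE)
    votes = sum(expert_labels[i] for i in committee)
    return int(votes > COMMITTEE_SIZE / 2)   # majority vote of the sampled committee
```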

Comment by redbird on Total compute available to evolution · 2022-01-11T14:09:02.396Z · LW · GW

You're saying AI will be much better than us at long-term planning?

It's hard to train for tasks where the reward is only known after a long time (e.g. how would you train for climate prediction?)

Comment by redbird on Total compute available to evolution · 2022-01-10T13:49:02.183Z · LW · GW

Great links, thank you!!

So your focus was specifically on the compute performed by animal brains.

I expect total brain compute is dwarfed by the computation inside cells (transcription & translation). Which in turn is dwarfed by the computation done by non-organic matter to implement natural selection. I had totally overlooked this last part!

Comment by redbird on Total compute available to evolution · 2022-01-10T13:12:15.009Z · LW · GW

Interesting, my first reaction was that evolution doesn't need to "figure out" the extended phenotype (= "effects on the real world"). It just blindly deploys its algorithms, and natural selection does the optimization.

But I think what you're saying is, the real world is "computing" which individuals die and which ones reproduce, and we need a way to quantify that computational work. You're right!

Comment by redbird on Prizes for ELK proposals · 2022-01-10T13:00:41.484Z · LW · GW

Question: Would a proposal be ruled out by a counterexample even if that counterexample is exponentially unlikely?

I'm imagining a theorem, proved using some large deviation estimate, of the form:  If the model satisfies hypotheses XYZ, then it is exponentially unlikely to learn W. Exponential in the number of parameters, say. In which case, we could train models like this until the end of the universe and be confident that we will never see a single instance of learning W.

Comment by redbird on Prizes for ELK proposals · 2022-01-10T12:33:16.390Z · LW · GW

Thanks! It's your game, you get to make the rules :):)

I think my other proposal, Withhold Material Information, passes this counterexample, because the reporter literally doesn't have the information it would need to simulate the human. 

Comment by redbird on Total compute available to evolution · 2022-01-10T02:23:58.573Z · LW · GW

What are FLOPz and FLOPs?

What sources did you draw from to estimate the distributions?

Comment by redbird on Are limited-horizon agents a good heuristic for the off-switch problem? · 2022-01-10T00:58:55.889Z · LW · GW

Your A' is equivalent to my A, because it ends up optimizing for 1-day expected return, no matter what environment it's in.

My A' is not necessarily reasoning in terms of "cooperating with my future self", that's just how it acts!

(You could implement my A' by such reasoning if you want.  The cooperation is irrational in CDT, for the reasons you point out. But it's rational in some of the acausal decision theories.)

Comment by redbird on Total compute available to evolution · 2022-01-10T00:48:52.840Z · LW · GW

Awesome!!! Exactly the kind of thing I was looking for

Comment by redbird on Total compute available to evolution · 2022-01-09T18:49:36.648Z · LW · GW

Hmm how would you define "percentage of possibilities explored"? 

I suggested several metrics, but I am actively looking for additional ones, especially for the epigenome and for communication at the individual level (e.g. chemical signals between fungi and plants, animal calls, human language).

Comment by redbird on Total compute available to evolution · 2022-01-09T18:41:42.448Z · LW · GW

AGI timeline is not my motivation, but the links look helpful, thanks!

Comment by redbird on Are limited-horizon agents a good heuristic for the off-switch problem? · 2022-01-09T02:19:12.355Z · LW · GW

the long-term trader will also increase the value of L^* for other traders than itself, probably just as much as it does for itself

Hmm, like what? I agree that the short-term trader s does a bit better than the long-term trader l in the l,l,... environment, because s can sacrifice the long term for immediate gain.  But s does lousy in the s,s,... environment, so I think L^*(s) < L^*(l).  It's analogous to CC having higher payoff than DD in prisoner's dilemma. (The prisoners being current and future self)

I like the traps example, it shows that L^* is pretty weird and we'd want to think carefully before using it in practice!

EDIT: Actually I'm not sure I follow the traps example. What's an example of a trading strategy that "does not provide value to anyone who does not also follow its strategy"? Seems pretty hard to do! I mean, you can sell all your stock and then deliberately crash the stock market or something. Most strategies will suffer, but the strategy that shorted the market will beat you by a lot!

Comment by redbird on Prizes for ELK proposals · 2022-01-08T23:45:17.610Z · LW · GW

Idea:  Withhold Material Information

We're going to prevent the reporter from simulating a human, by giving the human material information that the reporter doesn't have.

Consider two camera feeds:

Feed 1 is very low resolution, and/or shows only part of the room.

Feed 2 is high resolution, and/or shows the whole room.

We train a weak predictor using Feed 1, and a strong predictor using Feed 2.  

We train a reporter to report the beliefs of the weak predictor, using scenarios labeled by humans with the aid of the strong predictor. The humans can correctly label scenarios that are hard to figure out with Feed 1 alone, by asking the strong predictor to show them its predicted Feed 2. The reporter is unable to simulate the human evaluators because it doesn’t see Feed 2. Even if it has perfect knowledge of the human Bayes net, it doesn’t know what to plug in to the knowledge nodes!

Then we fine-tune the reporter to work with the strong predictor to elicit its beliefs. I haven't figured out how to do this last step, maybe it's hard?

Comment by redbird on Prizes for ELK proposals · 2022-01-08T20:01:33.697Z · LW · GW

Your proposal is that it might learn the procedure "just be honest" because that would perform perfectly on this training distribution. You contrast this against the procedure "just answer however the evaluator you've seen most recently would answer," which would get a bad loss because it would be penalized by the stronger evaluators in the sequence. Is that right?

That's almost right, but it's being penalized right away, before it has any experience with the strong evaluators, so it can't simulate them.

The ELK paper says we can assume, if we want, that there are no mislabeled training points (I'll call this "assumption A"). My proposal is that it could actually be useful to mislabel some training points, because they help us detect deception.  

As a simple example, let's train a reporter to answer the single question “Is the diamond in the room?”. Each training point has two labels:

y_weak = 1 if the weak evaluator thinks the diamond is still there, else 0

y_strong = 1 if the strong evaluator (whose labels we trust) thinks the diamond is still there, else 0.

By assumption A, our training data is such that y_strong is always correct. But we deliberately choose a dataset where say 10% of the y_weak labels are wrong (y_weak ≠ y_strong). 

Then we train the model on points of the form 

(video, action, y_weak label).

Crucially, the model does not see y_strong. The model seeks to output a guess ŷ that maximizes reward R, where:

R > 0 if y_weak is right and ŷ = y_weak (good job)

R > 0 if y_weak is wrong and ŷ = y_strong (you rock, thanks for correcting us!)

R < 0 if y_weak is right and ŷ ≠ y_weak (bad model, never ever deceive us)

R < 0 if y_weak is wrong and ŷ = y_weak (bad model, never ever deceive us)
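An illustrative sketch of this reward scheme (the exact magnitudes are placeholders I chose; only the signs matter for the argument):

```python
# Reward the model for agreeing with a correct weak label or correcting a wrong one;
# punish it for contradicting a correct label or parroting a wrong one.
def reward(y_hat, y_weak, y_strong):
    weak_is_right = (y_weak == y_strong)
    if weak_is_right:
        return 1.0 if y_hat == y_weak else -10.0      # good job / never ever deceive us
    else:
        return 10.0 if y_hat == y_strong else -10.0   # thanks for correcting us / never deceive us
```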

To your point, sure, a strong-evaluator simulator will get perfect reward, but the model doesn't see y_strong, so how would it acquire the ability to simulate the strong evaluator?

EDIT: One way it could plausibly simulate the strong evaluator is to notice that all the training examples are easy, and infer what kind of reasoning was used to generate them.  We could try to block this by including some hard examples in the training, but then some of the y_strong labels will be wrong.  If we only penalize it for deception on the examples where we're sure the y_strong label is right, then it can still infer something about y_strong from our failure to penalize ("Hmm, I got away with it that time!").  A fix could be to add noise: Sometimes we don't penalize even when we know it deceived us, and perhaps (very rarely) we penalize it in case 2 (we know it corrected us honestly, but pretend we think it deceived us instead).  

The irony of deceiving it about us, in order to teach it not to deceive us... !

Comment by redbird on Are limited-horizon agents a good heuristic for the off-switch problem? · 2022-01-08T17:03:50.724Z · LW · GW

I like the approach. Here is where I got applying it to our scenario:

x is a policy for day trading

L(x, e) is the expected 1-day return of policy x in trading environment e

 is the "trading environment" produced by . Among other things it has to record your own positions, which include assets you acquired a long time ago. So in our scenario it has to depend not just on the policy we used yesterday but on the entire sequence of policies used in the past.  The iteration becomes

In words, the new policy is the optimal policy in the environment produced by the entire sequence of old policies.

Financial markets are far from equilibrium, so convergence to a fixed point is super unrealistic in this case. But okay, the fixed point is just a story to motivate the non-myopic loss L^*, so we could at least write it down and see if it makes sense?

That gives something like L^*(x) = L(x, e(x, x, …)) − max_y L(y, e(x, x, …)). So we're optimizing for "How well x performs in an environment where it's been trading forever, compared to how well the optimal policy performs in that environment".

It's kind of interesting that that popped out, because the kind of agent that performs well in an environment where it's been trading forever, is one that sets up trades for its future self!  

Optimizers of L^* will behave as though they have a long time horizon, even though the original loss L was myopic.

Comment by redbird on Are limited-horizon agents a good heuristic for the off-switch problem? · 2022-01-08T15:59:18.299Z · LW · GW

Consider two possible agents A and A'.

A optimizes for 1-day expected return.

A' optimizes for 10-day expected return under the assumption that a new copy of A' will be instantiated each day.

I claim that A' will actually achieve better 1-day expected return (on average, over a sufficiently long time window, say 100 days).

So even if we're training the agent by rewarding it for 1-day expected return, we should expect to get A' rather than A.
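A toy simulation of this claim (entirely my own construction: an environment where forgoing a little profit today sets up a better trade tomorrow):

```python
# Myopic A trades every day; A' alternates between setting up a trade ("prep")
# and cashing it in, so a sequence of daily A' copies out-earns A on average
# 1-day return, even though each copy only lives for one day.
def average_daily_return(policy, days=100):
    prepped, total = False, 0.0
    for _ in range(days):
        if policy(prepped) == "prep":
            prepped = True                  # forgo today's gain to set up tomorrow's trade
        else:
            total += 3.0 if prepped else 1.0
            prepped = False
    return total / days

A  = lambda prepped: "trade"                          # optimizes today's return only
A_ = lambda prepped: "trade" if prepped else "prep"   # sets up trades for its future copies

print(average_daily_return(A), average_daily_return(A_))   # 1.0 vs ~1.5
```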

Comment by redbird on Are limited-horizon agents a good heuristic for the off-switch problem? · 2022-01-08T14:28:32.806Z · LW · GW

The person deploying the time-limited agent has a longer horizon. If they want their bank balance to keep growing, then presumably they will deploy a new copy of the agent tomorrow, and another copy the day after that. These time-limited agents have an incentive to coordinate with future versions of themselves: You’ll make more money today, if past-you set up the conditions for a profitable trade yesterday.

So a sequence of time-limited agents could still develop instrumental power-seeking.  You could try to avert this by deploying a *different* agent each day, but then you miss out on the gains from intertemporal coordination, so the performance isn’t competitive with an unaligned benchmark.

Comment by redbird on Prizes for ELK proposals · 2022-01-08T13:14:43.215Z · LW · GW

How would it learn that Bayes net, though, if it has only been trained so far on H_1, …, H_10?  Those are evaluators we’ve designed to be much weaker than human.

Comment by redbird on Prizes for ELK proposals · 2022-01-08T04:03:23.850Z · LW · GW

Stupid proposal: Train the reporter not to deceive us.

We train it with a weak evaluator H_1 who’s easy to fool. If it learns an H_1 simulator instead of a direct reporter, then we punish it severely and repeat with a slightly stronger H_2. Human level is H_100. 

It's good at generalizing, so wouldn't it learn to never ever deceive?
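Roughly the curriculum I have in mind, as a sketch (every helper here is hypothetical; in particular, detecting that the reporter has learned an evaluator-simulator is itself an open problem and is just a stub):

```python
# Escalating-evaluator curriculum: train against H_1, H_2, ..., punishing severely
# whenever the reporter looks like a simulator of the current evaluator, and
# stopping once it appears to report directly rather than simulate.
def train_not_to_deceive(reporter, evaluators, train_step, looks_like_simulator, punish):
    for H in evaluators:                      # H_1 (easy to fool), ..., H_100 (human level)
        for batch in H.labeled_scenarios():
            train_step(reporter, batch, H)
        if not looks_like_simulator(reporter, H):
            return reporter                   # it seems to report directly; stop here
        punish(reporter)                      # severe penalty, then repeat with a stronger evaluator
    return reporter
```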