Implications of evidential cooperation in large worlds 2023-08-23T00:43:45.232Z
PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" 2023-05-30T18:33:40.765Z
Some thoughts on automating alignment research 2023-05-26T01:50:20.099Z
Before smart AI, there will be many mediocre or specialized AIs 2023-05-26T01:38:41.562Z
PaLM in "Extrapolating GPT-N performance" 2022-04-06T13:05:12.803Z
Truthful AI: Developing and governing AI that does not lie 2021-10-18T18:37:38.325Z
OpenAI: "Scaling Laws for Transfer", Hernandez et al. 2021-02-04T12:49:25.704Z
Prediction can be Outer Aligned at Optimum 2021-01-10T18:48:21.153Z
Extrapolating GPT-N performance 2020-12-18T21:41:51.647Z
Formalising decision theory is hard 2019-08-23T03:27:24.757Z
Quantifying anthropic effects on the Fermi paradox 2019-02-15T10:51:04.298Z


Comment by Lukas Finnveden (Lanrian) on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-09-14T17:04:43.142Z · LW · GW

That would mean that he believed he had a father with the same reasons, who believed he had a father with the same reasons, who believed he had a father with the same reasons...

I.e., this would require an infinite line of forefathers. (Or at least of hypothetical, believed-in forefathers.)

If anywhere there's a break in the chain — that person would not have FDT reasons to reproduce, so neither would their son, etc.

Which makes it disanalogous from any cases we encounter in real life. And makes me more sympathetic to the FDT reasoning, since it's a stranger case where I have less strong pre-existing intuitions.

Comment by Lukas Finnveden (Lanrian) on Paper: On measuring situational awareness in LLMs · 2023-09-06T00:19:35.648Z · LW · GW

Cool paper!

I'd be keen to see more examples of the paraphrases, if you're able to share. To get a sense of the kind of data that lets the model generalize out of context. (E.g. if it'd be easy to take all 300 paraphrases of some statement (ideally where performance improved) and paste in a google doc and share. Or lmk if this is on github somewhere.)

I'd also be interested in experiments to determine whether the benefit from paraphrases is mostly fueled by the raw diversity, or if it's because examples with certain specific features help a bunch, and those occasionally appear among the paraphrases. Curious if you have a prediction about that or if you already ran some experiments that shed some light on this. (I could have missed it even if it was in the paper.)

Comment by Lukas Finnveden (Lanrian) on ARC Evals new report: Evaluating Language-Model Agents on Realistic Autonomous Tasks · 2023-09-02T01:32:58.104Z · LW · GW


Comment by Lukas Finnveden (Lanrian) on ARC Evals new report: Evaluating Language-Model Agents on Realistic Autonomous Tasks · 2023-08-23T06:12:43.847Z · LW · GW

This is interesting — would it be easy to share the transcript of the conversation? (If it's too long for a lesswrong comment, you could e.g. copy-paste it into a google doc and link-share it.)

Comment by Lukas Finnveden (Lanrian) on Implications of evidential cooperation in large worlds · 2023-08-23T06:06:44.425Z · LW · GW

You might want to check out the paper and summary explaining ECL that I linked. In particular, this section of the summary has a very brief introduction to non-causal decision theory, and motivating evidential decision theory is a significant focus in the first couple of sections of the paper.

Comment by Lukas Finnveden (Lanrian) on The “no sandbagging on checkable tasks” hypothesis · 2023-08-03T01:28:23.895Z · LW · GW

Here's a proposed operationalization.

For models that can't gradient hack: The model is "capable of doing X" if it would start doing X upon being fine-tuned to do it using a hypothetical, small finetuning dataset that demonstrated how to do the task. (Say, at most 1000 data points.)

(The hypothetical fine-tuning dataset should be a reasonable dataset constructed by a hypothetical team of humans who know how to do the task but aren't optimizing the dataset hard for ideal gradient updates to this particular model, or anything like that.)

For models that might be able to gradient-hack, but are well-modelled as having certain goals: The model is "capable of doing X" if it would start doing X if doing X was a valuable instrumental goal, for it.

For both kinds: "you can get it to do X" if you could make it do X with some large amount of research+compute budget (say, 1% of the pre-training budget), no-holds-barred.

Edit: Though I think your operationalization also looks fine. I mainly wanted to point out that the "finetuning" definition of "capable of doing X" might be ok if you include the possibility of finetuning on hypothetical datasets that we don't have access to. (Since we only know how to check the task — not perform it.)

Comment by Lukas Finnveden (Lanrian) on SSA rejects anthropic shadow, too · 2023-07-31T07:05:23.973Z · LW · GW

Then, conditional on type 1, you're about 0.5% likely to observe being post cold war, and conditional on type 2, you're about 45% likely to observe being post cold war.

I would have thought:

  • p(post cold war | type-1) = 1/101 ~= 1%.
  • p(post cold war | type-2) = 10/110 ~= 9%.
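These numbers can be reproduced under one (hypothetical, not spelled out in the thread) reading of the setup: 100 planets, each with one pre-cold-war observer; one planet survives in a type-1 world versus ten in a type-2 world; and each surviving planet gains one post-cold-war observer.

```python
# Hypothetical reconstruction of the setup: 100 planets, each with one
# pre-cold-war observer; each surviving planet gains one post-cold-war observer.
def p_post_cold_war(n_planets, n_survivors):
    pre = n_planets        # one pre-cold-war observer per planet
    post = n_survivors     # one post-cold-war observer per surviving planet
    return post / (pre + post)

p_type1 = p_post_cold_war(100, 1)   # type-1: only 1 of 100 planets survives
p_type2 = p_post_cold_war(100, 10)  # type-2: 10 of 100 planets survive
print(p_type1, p_type2)  # 1/101 ~= 1%, 10/110 ~= 9%
```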

I don't think this makes a substantive difference to the rest of your comment, though.

Under SIA, you start with a ~19:10 ratio in favor of type 2 (in the subjective a priori). The likelihood ratios are the same as with SSA so the posteriors are equally weighted towards type 2. So the updates are of equal magnitude in odds space under SSA and SIA.

Oh, I see. I think I agree that you can see SIA and SSA as equivalent updating procedures with different priors.

Nevertheless, SSA will systematically assign higher probabilities (than SIA) to latent high probabilities of disaster, even after observing themselves to be in worlds where the disasters didn't happen (at least if the multiverse + reference class is in a goldilocks zone of size and inclusivity). I think that's what the anthropic shadow is about. If your main point is that the action is in the prior (rather than the update) and you don't dispute people's posteriors, then I think that's something to flag clearly. (Again — I apologise if you did something like this in some part of the post I didn't read!)

I think this is an odd choice of reference class, and constructing your reference class to depend on your time index nullifies the doomsday argument, which is supposed to be an implication of SSA. I think choices of reference class like this will have odd reflective behavior because e.g. further cold wars in the future will be updated on by default.

I agree it's very strange. I always thought SSA's underspecified reference classes were pretty suspicious. But I do think that e.g. Bostrom's past writings often do flag that the doomsday argument only works with certain reference classes, and often talks about reference classes that depend on time-indices.

Comment by Lukas Finnveden (Lanrian) on SSA rejects anthropic shadow, too · 2023-07-31T02:22:53.261Z · LW · GW

That argument only works for SSA if type 1 and type 2 planets exist in parallel.

I was talking about a model where either every planet in the multiverse is type 1, or every planet in the multiverse is type 2.

But extinction vs non-extinction is sampled separately on each planet.

Then SSA gives you an anthropic shadow.

(If your reference class is "all observers" you still get an update towards type 2, but it's weaker than for SIA. If your reference class is "post-nuclear-weapons observers", then SSA doesn't update at all.)

Comment by Lukas Finnveden (Lanrian) on SSA rejects anthropic shadow, too · 2023-07-30T16:24:28.645Z · LW · GW

How to get anthropic shadow:

  • Assume that the universe is either type 1 or type 2, and that planets with both subtypes (extinction or non-extinction) exist in parallel.
  • Use SSA.

At this point, I believe you will get some difference between SSA and SIA. For maximizing the size of the shadow, you can add:

  • If you wake up after nuclear war has become possible: choose to use the reference class "people living after nuclear war became possible".

(I didn't read the whole post, sorry if you address this somewhere. Also, I ultimately don't agree with the anthropic shadow argument.)

Comment by Lukas Finnveden (Lanrian) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-28T16:25:08.004Z · LW · GW

They don't give a factor 5 uncertainty. They add a 100x discount on top of the 20x discount — counting fish suffering as 2000x less important than human suffering.

Comment by Lukas Finnveden (Lanrian) on evhub's Shortform · 2023-07-05T22:55:49.042Z · LW · GW

being separated in space vs. being separated across different branches of the wavefunction seem pretty similar in terms of specification difficulty

Maybe? I don't really know how to reason about this.

If that's true, that still only means that you should be linear for gambles that give different results in different quantum branches. C.f. logical vs. physical risk aversion.

Some objection like that might work more generally, since some logical facts will mean that there are far fewer humans in the universe-at-large, meaning that you're at a different point in the risk-returns curve. So when comparing different logical ways the universe could be, you should not always care about the worlds where you can affect more sentient beings. If you have diminishing marginal returns, you need to be thinking about some more complicated function that is about whether you have a comparative advantage at affecting more sentient beings in worlds where there are overall fewer sentient beings (as measured by some measure that can handle infinities). Which matters for stuff like whether you should bet on the universe being large.

Comment by Lukas Finnveden (Lanrian) on evhub's Shortform · 2023-07-05T21:44:12.794Z · LW · GW

(Though you could get out of this by claiming that what you really care about is happy humans per universe, that's a pretty strange thing to care about—it's like caring about happy humans per acre.)

My sense is that many solutions to infinite ethics look a bit like this. For example, if you use UDASSA, then a single human who is alone in a big universe will have a shorter description length than a single human who is surrounded by many other humans in a big universe. Because for the former, you can use pointers that specify the universe and then describe sufficient criteria to recognise a human, but for the latter, you need to nail down exact physical location or some other exact criteria that distinguishes a specific human from every other human.

Comment by Lukas Finnveden (Lanrian) on The Case for Overconfidence is Overstated · 2023-07-03T23:24:22.245Z · LW · GW

Yeah that's a reasonable way to look at it. I'm not sure how much the two approaches really disagree: both are saying that the actual intervals people are giving are narrower than their genuine 90% intervals, and both presumably say that this is modulated by the fact that in everyday life, 50% intervals tend to be better. Right?

Yeah sounds right to me!

I haven't come across any interval-estimation studies that ask for intervals narrower than 20%, though Don Moore (probably THE expert on this stuff) told me that people have told him about unpublished findings where yes, when they ask for 20% intervals people are underprecise.

There definitely are situations with estimation (variants on the two-point method) where people look over-confident in estimates >50% and underconfident in estimates <50%, though you don't always get that.

Nice, thanks!

Comment by Lukas Finnveden (Lanrian) on The Case for Overconfidence is Overstated · 2023-07-01T17:20:21.066Z · LW · GW

But insisting that this is irrational underestimates how important informativity is to our everyday thought and talk.

As other authors emphasize, in most contexts it makes sense to trade accuracy for informativity. Consider the alternative: widening your intervals to obtain genuine 90% hit rates. (...) Asked when you’ll arrive for dinner, instead of “5:30ish” you say, “Between 5 and 8”.

This bit is weird to me. There's no reason why people should use 90% intervals as opposed to 50% intervals in daily life. The ask is just that they widen it when specifically asked for a 90% interval.

My framing would be: when people give intervals in daily life, they're typically inclined to give ~50% confidence intervals (right? Something like that?). When asked for a ("90%") interval by a researcher, they're inclined to give a normal-sounding interval. But this is a mistake, because the researcher asked for a very strange construct — a 90% interval turns out to be an interval where you're not supposed to say what you think the answer is, but instead give an absurdly wide distribution that you're almost never outside of.

Incidentally — if you ask people for centered 20% confidence intervals (40-60th percentile) do you get that they're underconfident?

Comment by Lukas Finnveden (Lanrian) on When do "brains beat brawn" in Chess? An experiment · 2023-06-29T20:13:29.298Z · LW · GW

I intend to write a lot more on the potential “brains vs brawns” matchup of humans vs AGI. It’s a topic that has received surprisingly little depth from AI theorists.

I recommend checking out part 2 of Carl Shulman's Lunar Society podcast for content on how AGI could gather power and take over in practice.

Comment by Lukas Finnveden (Lanrian) on rohinmshah's Shortform · 2023-06-24T22:04:58.171Z · LW · GW

Yeah, I also don't feel like it teaches me anything interesting.

Comment by Lukas Finnveden (Lanrian) on rohinmshah's Shortform · 2023-06-24T02:55:44.087Z · LW · GW

Note that B is (0.2,10,−1)-distinguishable in P.

I think this isn't right, because definition 3 requires that sup_s∗ {B_P− (s∗)} ≤ γ.

And for your counterexample, s* = "C" will have B_P-(s*) be 0 (because there's 0 probability of generating "C" in the future). So the sup is at least 0 > -1.

(Note that they've modified the paper, including definition 3, but this comment is written based on the old version.)

Comment by Lukas Finnveden (Lanrian) on GPT-4 Predictions · 2023-06-15T23:53:21.530Z · LW · GW

GPT-2    1.5B    15B    2.5794

Where does the "15B" for GPT-2's data come from, here? Epoch's dataset's guess is that it was trained on 3B tokens for 100 epochs.

Comment by Lukas Finnveden (Lanrian) on Announcing Apollo Research · 2023-06-15T07:20:23.440Z · LW · GW

Are you mainly interested in evaluating deceptive capabilities? I.e., no-holds-barred, can you elicit competent deception (or sub-components of deception) from the model? (Including by eg fine-tuning on data that demonstrates deception or sub-capabilities.)

Or evaluating inductive biases towards deception? I.e. testing whether the model is inclined towards deception in cases when the training data didn't necessarily require deceptive behavior.

(The latter might need to leverage some amount of capability evaluation, to distinguish not being inclined towards deception from not being capable of deception. But I don't think the reverse is true.)

Or let me know if you disagree with that way of cutting up the space.

Comment by Lukas Finnveden (Lanrian) on The Dictatorship Problem · 2023-06-12T17:10:57.190Z · LW · GW


I'm a big fan of extrapolating trendlines, and I think the current trendlines are concerning. But when evaluating the likelihood that "most democratic Western countries will become fascist dictatorships", I'd say these trends point firmly against this being "the most likely overall outcome" in the next 10 years. (While still increasing my worry about this as a tail-risk, a longer-term phenomenon, and a more localized phenomenon.)

If we extrapolate the graphs linearly, we get:

  • If we wait 10 years, we will have 5 fewer "free" countries and 7 more "non-free" countries. (Out of 195 countries being tracked. Or: ~5-10% fewer "free" countries.)
  • If we wait 10 years, the average democracy index will fall from 5.3 to somewhere around 5.0-5.1.

That's really bad. But it would be inconsistent with a wide fascist turn in the West, which would cause bigger swings in those metrics.

(As far as I can tell, the third graph is supposed to indicate the sign of the derivative of something like a democracy index, in each of many countries? Without looking into their criteria more, I don't know what it's supposed to say about the absolute size of changes, if anything.)

This also makes me confused about the next section's framing. If there's no "National Exceptionalism" where western countries are different from the others, then presumably the same trends should apply. But those suggest that the headline claim is unlikely. (But that we should be concerned about less probable, less widespread, and/or longer-term changes of the same kind.)

Comment by Lukas Finnveden (Lanrian) on Cosmopolitan values don't come free · 2023-06-04T01:42:15.437Z · LW · GW

1/trillion kindness seems to directly imply a small utopia where existing humans get to live out long and happy lives

Not direct implication, because the AI might have other human-concerning preferences that are larger than 1/trillion. C.f. top-level comment: "I’m not talking about whether the AI has spite or other strong preferences that are incompatible with human survival, I’m engaging specifically with the claim that AI is likely to care so little one way or the other that it would prefer just use the humans for atoms."

I'd guess "most humans survive" vs. "most humans die" probabilities don't correspond super closely to "presence of small pseudo-kindness". Because of how other preferences could outweigh that, and because cooperation/bargaining is a big reason for why humans might survive aside from intrinsic preferences.

Comment by Lukas Finnveden (Lanrian) on Change my mind: Veganism entails trade-offs, and health is one of the axes · 2023-06-03T16:51:26.292Z · LW · GW

If they don't go to a doctor, it could be either because the problem is minor enough that they can't be bothered, or because they generally don't seek medical help when they are seriously unwell, in which case the risk from something like B12 deficiency is negligible compared to e.g. the risk of an untreated heart attack.

I'm personally quite bad at noticing and tracking (non-sudden) changes in my energy, mood, or cognitive ability. I think there are issues that I wouldn't notice (or would think minor) that I would still care a lot about fixing.

Also, some people have problems with executive function. Even if they notice issues, the issues might have to be pretty bad before they'll ask a doctor about them. Bad enough that it could be pretty valuable to prevent less bad issues (that would go untreated).

(This could be exacerbated if people are generally unexcited about seeking medical help — I think there are plenty of points on this axis where people will seek help for heart-attacks but will be pessimistic about getting help with "vaguely feeling tired lately". Or maybe not even pessimistic. Just... not having "ask a dr" be generated as an obvious thing to try.)

Comment by Lukas Finnveden (Lanrian) on Let’s use AI to harden human defenses against AI manipulation · 2023-06-03T03:58:59.594Z · LW · GW

doesn't it seem to you that the topic is super neglected (even compared to AI alignment) given that the risks/consequences of failing to correctly solve this problem seem comparable to the risk of AI takeover?

Yes, I'm sympathetic. Among all the issues that will come with AI, I think alignment is relatively tractable (at least it is now) and that it has an unusually clear story for why we shouldn't count on being able to defer it to smarter AIs (though that might work). So I think it's probably correct for it to get relatively more attention. But even taking that into account, the non-alignment singularity issues do seem too neglected.

I'm currently trying to figure out what non-alignment stuff seems high-priority and whether I should be tackling any of it.

Comment by Lukas Finnveden (Lanrian) on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-02T22:44:09.859Z · LW · GW

This was also my impression.

Curious if OP or anyone else has a source for the <1% claim? (Partially interested in order to tell exactly what kind of "doom" this is anti-predicting.)

Comment by Lukas Finnveden (Lanrian) on PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" · 2023-06-01T21:11:43.525Z · LW · GW

I assume that's from looking at the GPT-4 graph. I think the main graph I'd look at for a judgment like this is probably the first graph in the post, without PaLM-2 and GPT-4. Because PaLM-2 is 1-shot and GPT-4 only covers 4 benchmarks instead of 20+.

That suggests 90% is ~1 OOM away and 95% is ~3 OOMs away.

(And since PaLM-2 and GPT-4 seemed roughly on trend in the places where I could check them, probably they wouldn't change that too much.)

Comment by Lukas Finnveden (Lanrian) on PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" · 2023-05-31T19:33:38.047Z · LW · GW

Interesting. Based on skimming the paper, my impression is that, to a first approximation, this would look like:

  • Instead of having linear performance on the y-axis, switch to something like log(max_performance - actual_performance). (So that we get a log-log plot.)
  • Then for each series of data points, look for the largest n such that the last n data points are roughly on a line. (I.e. identify the last power law segment.)
  • Then to extrapolate into the future, project that line forward. (I.e. fit a power law to the last power law segment and project it forward.)
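The three steps above could be sketched roughly as follows (names and the toy data are illustrative, not from the BNSL paper — and this skips the "find the largest n" step by taking n as given):

```python
# Minimal sketch: fit a line to the last n points in log-log space
# (x = log compute, y = log(max_performance - actual_performance))
# and project it forward.
import numpy as np

def extrapolate_last_segment(compute, performance, max_perf, n_last, future_compute):
    x = np.log(np.asarray(compute))
    y = np.log(max_perf - np.asarray(performance))
    # Fit a line to the final n_last points (the last power-law segment).
    slope, intercept = np.polyfit(x[-n_last:], y[-n_last:], 1)
    # Project forward, then map back to performance space.
    y_future = slope * np.log(future_compute) + intercept
    return max_perf - np.exp(y_future)

# Toy data: performance approaching 1.0 as a single power law in compute.
compute = [1e3, 1e4, 1e5, 1e6]
perf = [1 - c ** -0.25 for c in compute]
result = extrapolate_last_segment(compute, perf, 1.0, 3, 1e8)
print(result)  # ≈ 0.99, i.e. 1 - (1e8)^-0.25
```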

That description misses out on effects where BNSL-fitting would predict that there's a slow, smooth shift from one power-law to another, and that this gradual shift will continue into the future. I don't know how important that is. Curious for your intuition about whether or not that's important, and/or other reasons for why my above description is or isn't reasonable.

When I think about applying that algorithm to the above plots, I worry that the data points are much too noisy to just extrapolate a line from the last few data points. Maybe the practical thing to do would be to assume that the 2nd half of the "sigmoid" forms a distinct power law segment, and fit a power law to the points with >~50% performance (or less than that if there are too few points with >50% performance). Which maybe suggests that the claim "BNSL does better" corresponds to a claim that the speed at which the language models improve on ~random performance (bottom part of the "sigmoid") isn't informative for how fast they converge to ~maximum performance (top part of the "sigmoid")? That seems plausible.

Comment by Lukas Finnveden (Lanrian) on Before smart AI, there will be many mediocre or specialized AIs · 2023-05-29T16:35:09.383Z · LW · GW

Thanks, fixed.

Comment by Lukas Finnveden (Lanrian) on Let’s use AI to harden human defenses against AI manipulation · 2023-05-22T19:02:03.651Z · LW · GW

I'm also concerned about how we'll teach AIs to think about philosophical topics (and indeed, how we're supposed to think about them ourselves). But my intuition is that proposals like this looks great on that perspective.

For areas where we don't have empirical feedback-loops (like many philosophical topics), I imagine that the "baseline solution" for getting help from AIs is to teach them to imitate our reasoning. Either just by literally writing the words that it predicts that we would write (but faster), or by having it generate arguments that we would think looks good. (Potentially recursively, c.f. amplification, debate, etc.)

(A different direction is to predict what we would think after thinking about it more. That has some advantages, but it doesn't get around the issue where we're at-best speeding things up.)

One of the few plausible-seeming ways to outperform that baseline is to identify epistemic practices that work well on questions where we do have empirical feedback loops, and then transferring those practices to questions where we lack such feedback loops. (C.f. imitative generalization.) The above proposal is doing that for a specific sub-category of epistemic practices (recognising ways in which you can be misled by an argument).

Worth noting: The broad category of "transfer epistemic practices from feedback-rich questions to questions with little feedback" contains a ton of stuff, and is arguably the root of all our ability to reason about these topics:

  • Evolution selected human genes for ability to accomplish stuff in the real world. That made us much better at reasoning about philosophy than our chimp ancestors are.
  • Cultural evolution seems to have at least partly promoted reasoning practices that do better at deliberation. (C.f. possible benefits from coupling competition and deliberation.)
  • If someone is optimistic that humans will be better at dealing with philosophy after intelligence-enhancement, I think they're mostly appealing to stuff like this, since intelligence would typically be measured in areas where you can recognise excellent performance.
Comment by Lukas Finnveden (Lanrian) on Matthew Barnett's Shortform · 2023-05-17T20:56:30.103Z · LW · GW

It seems like the list mostly explains away the evidence that "human's can't currently prevent value drift" since the points apply much less to AIs. (I don't know if you agree.)

  • As you mention, (1) probably applies less to AIs (for better or worse).
  • (2) applies to AIs in the sense that many features of AIs' environments will be determined by what tasks they need to accomplish, rather than what will lead to minimal value drift. But the reason to focus on the environment in the human case is that it's the ~only way to affect our values. By contrast, we have much more flexibility in designing AIs, and it's plausible that we can design them so that their values aren't very sensitive to their environments. Also, if we know that particular types of inputs are dangerous, the AIs' environment could be controllable in the sense that less-susceptible AIs could monitor for such inputs, and filter out the dangerous ones.
  • (3): "can't change the trajectory of general value drift by much" seems less likely to apply to AIs (or so I'm arguing). "Most people are selfish and don't care about value drift except to the extent that it harms them directly" means that human value drift is pretty safe (since people usually maintain some basic sense of self-preservation) but that AI value drift is scary (since it could lead your AI to totally disempower you).
  • (4) As you noted in the OP, AI could change really fast, so you might need to control value-drift just to survive a few years. (And once you have those controls in place, it might be easy to increase the robustness further, though this isn't super obvious.)
  • (5) For better or worse, people will probably care less about this in the AI case. (If the threat-model is "random drift away from the starting point", it seems like it would be for the better.)

Since the space of possible AIs is much larger than the space of humans, there are more degrees of freedom along which AI values can change.

I don't understand this point. We (or AIs that are aligned with us) get to pick from that space, and so we can pick the AIs that have least trouble with value drift. (Subject to other constraints, like competitiveness.)

(Imagine if AGI is built out of transformers. You could then argue "since the space of possible non-transformers is much larger than the space of transformers, there are more degrees of freedom along which non-transformer values can change". And humans are non-transformers, so we should be expected to have more trouble with value drift. Obviously this argument doesn't work, but I don't see the relevant disanalogy to your argument.)

Creating new AIs is often cheaper than creating new humans, and so people might regularly spin up new AIs to perform particular functions, while discounting the long-term effect this has on value drift (since the costs are mostly borne by civilization in general, rather than them in particular)

Why are the costs mostly borne by civilization in general? If I entrust some of my property to an AI system, and it changes values, that seems bad for me in particular?

Maybe the argument is something like: As long as law-and-order is preserved, things are not so bad for me even if my AI's values start drifting. But if there's a critical mass of misaligned AIs, they can launch a violent coup against the humans and the aligned AIs. And my contribution to the coup-probability is small?

Comment by Lukas Finnveden (Lanrian) on Matthew Barnett's Shortform · 2023-05-17T18:22:56.122Z · LW · GW

It's possible that there's a trade-off between monitoring for motivation changes and competitiveness. I.e., I think that monitoring would be cheap enough that a super-rich AI society could happily afford it if everyone coordinated on doing it, but if there's intense competition, then it wouldn't be crazy if there was a race-to-the-bottom on caring less about things. (Though there's also practical utility in reducing principal-agents problem and having lots of agents working towards the same goal without incentive problems. So competitiveness considerations could also push towards such monitoring / stabilization of AI values.)

Comment by Lukas Finnveden (Lanrian) on Matthew Barnett's Shortform · 2023-05-17T18:04:20.894Z · LW · GW

5. However, AI values will drift over time. This happens for a variety of reasons, such as environmental pressures and cultural evolution. At some point AIs decide that it's better if they stopped listening to the humans and followed different rules instead.

How does this happen at a time when the AIs are still aligned with humans, and therefore very concerned that their future selves/successors are aligned with humans? (Since the humans are presumably very concerned about this.)

This question is related to "we could use AI to predict this outcome ahead of time and ask AI how to take steps to mitigate the harmful effects", but sort of posed on a different level. That quote seemingly presumes that there will be a systemic push away from human alignment, and seemingly suggests that we'll need some clever coordinated solution. (Do tell me if I'm reading you wrong!) But I'm asking why there is a systemic push away from human alignment if all the AIs are concerned about maintaining it?

Maybe the answer is: "If everyone starts out aligned with humans, then any random perturbations will move us away from that. The systemic push is entropy." I agree this is concerning if AIs are aligned in the sense of "their terminal values are similar to my terminal values", because it seems like there's lots of room for subtle and gradual changes, there. But if they're aligned in the sense of "at each point in time I take the action that [group of humans] would have preferred I take after lots of deliberation" then there's less room for subtle and gradual changes:

  • If they get subtly worse at predicting what humans would want in some cases, then they can probably still predict "[group of humans] would want me to take actions that ensure that my predictions of human deliberation are accurate" and so take actions to occasionally fix those misconceptions. (You'd have to be really bad at predicting humans to not realise that the humans wanted that.)
  • Maybe they sometimes randomly stop caring about what the [group of humans] want. But that seems like it'd be abrupt enough that you could set up monitoring for it, and then you're back in a more classic alignment regime of detecting deception, etc. (Though a bit different in that the monitoring would probably be done by other AIs, and so you'd have to watch out for e.g. inputs that systematically and rapidly changed the values of any AIs that looked at them.)
  • Maybe they randomly acquire some other small motivation alongside "do what humans would have wanted". But if it's predictably the case that such small motivations will eventually undermine their alignment to humans, then the part of their goals that's shaped like "do what humans would have wanted" will vote strongly to monitor for such motivation changes and get rid of them ASAP. And if the new motivation is still tiny, probably it can't provide enough of a counteracting motivation to defend itself.

(Maybe you think that this type of alignment is implausible / maybe the action is in your "there's slight misalignment".)

Comment by Lukas Finnveden (Lanrian) on My views on “doom” · 2023-04-28T01:59:34.847Z · LW · GW

Maybe x-risk driven by explosive (technological) growth?

Edit: though some people think AI point of no return might happen before the growth explosion. 

Comment by Lukas Finnveden (Lanrian) on rohinmshah's Shortform · 2023-04-25T02:47:49.712Z · LW · GW

This is true if "the standard setting" refers to one where you have equally robust evidence of all options. But if you have more robust evidence about some options (which is common), the optimizer's curse will especially distort estimates of options with less robust evidence. A correct bayesian treatment would then systematically push you towards picking options with more robust evidence.

(Where I'm using "more robust evidence" to mean something like: evidence that has an overall greater likelihood ratio, and that therefore pushes you further from the prior. Where the error driving the optimizer's curse is to look at the peak of the likelihood function while neglecting the prior and how much the likelihood ratio pushes you away from it.)
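A small simulation of this point (the Gaussian priors/noise and the specific noise levels below are my own illustrative assumptions): naively picking the option with the highest raw estimate over-selects the noisily-measured options, while shrinking each estimate toward the prior in proportion to its noise corrects for this.

```python
import random

random.seed(0)

def shrink(estimate, noise_sd, prior_sd=1.0):
    # Posterior mean given a N(0, prior_sd^2) prior and known noise level.
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    return w * estimate

naive_regret, bayes_regret = 0.0, 0.0
for _ in range(10_000):
    options = []
    for noise_sd in [0.2, 0.2, 3.0, 3.0]:  # two robust, two noisy options
        true = random.gauss(0, 1)          # true value ~ N(0, 1)
        est = true + random.gauss(0, noise_sd)
        options.append((true, est, noise_sd))
    best_true = max(t for t, _, _ in options)
    naive = max(options, key=lambda o: o[1])                 # raw estimate
    bayes = max(options, key=lambda o: shrink(o[1], o[2]))   # shrunk estimate
    naive_regret += best_true - naive[0]
    bayes_regret += best_true - bayes[0]

print(naive_regret > bayes_regret)
```

With the noisy options' estimates shrunk by a factor of ~10, the Bayesian picker mostly selects among the robustly-measured options, and accumulates less regret.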

Comment by Lukas Finnveden (Lanrian) on GPT-4 · 2023-04-04T01:55:34.842Z · LW · GW

Where do you get the 3-4 months max training time from? GPT-3.5 was made available March 15th, so if they made that available immediately after it finished training, that would still have left 5 months for training GPT-4. And more realistically, they finished training GPT-3.5 quite a bit earlier, leaving 6+ months for GPT-4's training.

Comment by Lukas Finnveden (Lanrian) on GPT-4 · 2023-03-23T16:53:30.613Z · LW · GW

Are you saying that you would have expected GPT-4 to be stronger if it was 500B+10T? Is that based on benchmarks/extrapolations or vibes?

Comment by Lukas Finnveden (Lanrian) on AGI in sight: our look at the game board · 2023-02-20T06:35:40.646Z · LW · GW

LW discussion

Comment by Lukas Finnveden (Lanrian) on AGI in sight: our look at the game board · 2023-02-20T06:33:05.567Z · LW · GW

This one?

Comment by Lukas Finnveden (Lanrian) on Literature review of TAI timelines · 2023-01-29T06:39:15.975Z · LW · GW

The numbers you use from Holden say that he thinks AGI by 2036 is more than 10% likely. But when fitting the curves you put that at exactly 10%, which will predictably be an underestimate. It seems better to fit the curves without that number and just check that the result is higher than 10%.
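A sketch of that procedure (the forecast numbers below are hypothetical placeholders, not Holden's): fit a distribution to the point estimates only, then check that the fitted curve clears the 10% lower bound at 2036.

```python
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical forecast: (year, P(AGI by year)) point estimates.
# The 2036 entry is only a lower bound ("more than 10%"), so it is
# excluded from the fit and only used as a consistency check afterwards.
point_estimates = [(2030, 0.05), (2050, 0.40), (2100, 0.70)]
lower_bounds = [(2036, 0.10)]

def loss(params):
    # Squared error of a normal-CDF timeline against the point estimates.
    mu, sigma = params
    return sum((norm.cdf(y, mu, abs(sigma)) - p) ** 2
               for y, p in point_estimates)

mu, sigma = minimize(loss, x0=[2060.0, 30.0]).x
for year, p_min in lower_bounds:
    fitted = norm.cdf(year, mu, abs(sigma))
    print(year, round(fitted, 3), fitted > p_min)
```

If the final check fails, that's a sign the lower bound carries real information and a censored-data fit (treating it as an inequality constraint) would be needed instead.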

Comment by Lukas Finnveden (Lanrian) on Vegan Nutrition Testing Project: Interim Report · 2023-01-21T17:38:24.795Z · LW · GW

One what later? Year, month?

Comment by Lukas Finnveden (Lanrian) on My Current Take on Counterfactuals · 2023-01-12T18:31:15.758Z · LW · GW

I'm curious if anyone made a serious attempt at the shovel-ready math here and/or whether this approach to counterfactuals still looks promising to Abram? (Or anyone else with takes.)

Comment by Lukas Finnveden (Lanrian) on What's the Least Impressive Thing GPT-4 Won't be Able to Do · 2023-01-09T03:43:52.552Z · LW · GW

Even when using chain of thought?

Comment by Lukas Finnveden (Lanrian) on Language models are nearly AGIs but we don't notice it because we keep shifting the bar · 2023-01-01T17:13:43.103Z · LW · GW

GPT-3- a text-generating language model.

PaLM-540B- a stunningly powerful question-answering language model.

Great Palm- A hypothetical language model that combines the powers of GPT-3 and PaLM-540B.

I would've thought that PaLM was better at text generation than GPT-3 by default. They're both pretrained on internet next-word prediction, and PaLM is bigger with more data. What makes you think GPT-3 is better at text generation?

Comment by Lukas Finnveden (Lanrian) on Revisiting algorithmic progress · 2022-12-21T16:11:20.062Z · LW · GW

Interesting, thanks! To check my understanding:

  • In general, as time passes, all the researchers increase their compute usage at a similar rate. This makes it hard to distinguish between improvements caused by compute and algorithmic progress.
  • If the correlation between year and compute was perfect, we wouldn't be able to do this at all.
  • But there is some variance in how much compute is used in different papers, each year. This variance is large enough that we can estimate the first-order effects of algorithmic progress and compute usage.
  • But complementarity is a second-order effect, and the data doesn't contain enough variation/data-points to give a good estimate of second-order effects.

Comment by Lukas Finnveden (Lanrian) on Revisiting algorithmic progress · 2022-12-19T15:16:12.858Z · LW · GW

Thanks for this!

Question: Do you have a sense of how strongly compute and algorithms are complements vs substitutes in this dataset?

(E.g. if you compare compute X in 2022, compute (k^2)X in 2020, and kX in 2021: if there's a k such that the last one is better than both the former two, that would suggest complementarity)
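A toy version of that test (the functional forms below are my own illustration, not from the paper): under strong complementarity (a Leontief production function) the spread-out option wins for some k, while under perfect substitutes it doesn't.

```python
# algo(year) is a hypothetical algorithmic-progress index that doubles
# each year; performance is some function of compute C and algo(year).
def algo(year):
    return 2.0 ** (year - 2020)

def complements(C, year):   # Leontief: strong complementarity
    return min(C, algo(year))

def substitutes(C, year):   # additive: perfect substitutes
    return C + algo(year)

X, k = 1.0, 2.0
results = {}
for name, perf in [("complements", complements), ("substitutes", substitutes)]:
    best_concentrated = max(perf(X, 2022), perf(k**2 * X, 2020))
    middle = perf(k * X, 2021)  # the "kX in 2021" option
    results[name] = middle > best_concentrated

print(results)  # {'complements': True, 'substitutes': False}
```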

Comment by Lukas Finnveden (Lanrian) on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2022-11-02T17:12:24.114Z · LW · GW

I'm curious how much of a concern you think this is, now, 1 year later. I haven't heard the "total number of mRNA shots (for any disease)"-concern from other places, and I'm wondering if that's for good reasons.

Comment by Lukas Finnveden (Lanrian) on Counterarguments to the basic AI x-risk case · 2022-10-17T17:04:02.265Z · LW · GW

Competence does not seem to aggressively overwhelm other advantages in humans: 
g. One might counter-counter-argue that humans are very similar to one another in capability, so even if intelligence matters much more than other traits, you won’t see that by looking at  the near-identical humans. This does not seem to be true. Often at least, the difference in performance between mediocre human performance and top level human performance is large, relative to the space below, iirc. For instance, in chess, the Elo difference between the best and worst players is about 2000, whereas the difference between the amateur play and random play is maybe 400-2800 (if you accept Chess StackExchange guesses as a reasonable proxy for the truth here).

The usage of capabilities/competence is inconsistent here. In points a-f, you argue that general intelligence doesn't aggressively overwhelm other advantages in humans. But in point g, the Elo difference between the best and worst players is less determined by general intelligence than by how much practice people have had.

If we instead consistently talk about domain-relevant skills: In the real world, we do see huge advantages from having domain-specific skills. E.g. I expect elected representatives to be vastly better at politics than median humans.

If we instead consistently talk about general intelligence: The chess data doesn't falsify the hypothesis that human-level variation in general intelligence is small. To gather data about that, we'd want to analyse the Elo difference between humans who have practiced similarly much but who have very different g.

(There are some papers on the correlation between intelligence and chess performance, so maybe you could get the relevant data from there. E.g. this paper says that (not controlling for anything) most measurements of cognitive ability correlate with chess performance at about 0.24 (including IQ iff you exclude a weird outlier where the correlation was -0.51).)

Comment by Lukas Finnveden (Lanrian) on Common misconceptions about OpenAI · 2022-08-29T23:39:17.767Z · LW · GW

Another fairly common argument and motivation at OpenAI in the early days was the risk of "hardware overhang," that slower development of AI would result in building AI with less hardware at a time when they can be more explosively scaled up with massively disruptive consequences. I think that in hindsight this effect seems like it was real, and I would guess that it is larger than the entire positive impact of the additional direct work that would be done by the AI safety community if AI progress had been slower 5 years ago.

Could you clarify this bit? It sounds like you're saying that OpenAI's capabilities work around 2017 was net-positive for reducing misalignment risk, even if the only positive we count is this effect. (Unless you think that there's substantial reason that acceleration is bad other than giving the AI safety community less time.) But then in the next paragraph you say that this argument was wrong (even before GPT-3 was released, which roughly corresponds to that "around 2017" period). I don't see how those are compatible.

Comment by Lukas Finnveden (Lanrian) on chinchilla's wild implications · 2022-08-14T22:13:11.741Z · LW · GW

(If 1 firing = 1 bit, that should be 34 megabit ~= 4 megabyte.)

This random article (which I haven't fact-checked in the least) claims a bandwidth of 8.75 megabit/s ~= 1 megabyte/s. So that's like 2.5 OOMs higher than the number I claimed for chinchilla. So yeah, it does seem like humans get more raw data.

(But I still suspect that chinchilla gets more data if you adjust for (un)interestingness. Where totally random data and easily predictable/compressible data are uninteresting, and data that is hard-but-possible to predict/compress is interesting.)

Comment by Lukas Finnveden (Lanrian) on chinchilla's wild implications · 2022-08-14T18:18:52.764Z · LW · GW

There's a billion seconds in 30 years. Chinchilla was trained on 1.4 trillion tokens. So for a human adult to have as much data as chinchilla would require us to process the equivalent of ~1400 tokens per second. I think that's something like 2 kilobyte per second.

Inputs to the human brain are probably dominated by vision. I'm not sure how many bytes per second we see, but I don't think it's many orders of magnitude higher than 2kb.
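The arithmetic above, as a quick check (the ~4 bytes/token figure is my assumption for typical BPE tokens, not from the comment):

```python
seconds_in_30_years = 1e9        # ~a billion seconds, as above
chinchilla_tokens = 1.4e12
tokens_per_second = chinchilla_tokens / seconds_in_30_years
print(tokens_per_second)         # 1400.0

bytes_per_token = 4              # assumption: ~4 bytes per BPE token
kb_per_second = tokens_per_second * bytes_per_token / 1000
print(kb_per_second)             # 5.6
```

At ~4 bytes/token this comes out closer to ~5.6 kB/s than 2 kB/s, but it's the same order of magnitude, so the comparison with visual bandwidth is unaffected.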

Comment by Lukas Finnveden (Lanrian) on Two-year update on my personal AI timelines · 2022-08-03T21:50:41.601Z · LW · GW

The acronym is definitely used for reinforcement learning. ["RLHF" "reinforcement learning from human feedback"] gets 564 hits on google, ["RLHF" "reward learning from human feedback"] gets 0.