capybaralet's Shortform

post by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-08-27T21:38:18.144Z · LW · GW · 50 comments

50 comments

Comments sorted by top scores.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T07:28:11.208Z · LW(p) · GW(p)

Wow this is a lot better than my FB/Twitter feed :P

:D :D :D

Let's do this guys! This is the new FB :P

Replies from: habryka4
comment by habryka (habryka4) · 2020-09-16T00:38:56.225Z · LW(p) · GW(p)

:D Glad to hear that! 

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T06:32:52.032Z · LW(p) · GW(p)

I intend to convert a number of draft LW blog posts into shortforms.

Then I will write a LW post linking to all of them and asking people to request that I elaborate on any that they are particularly interested in.

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T06:33:12.306Z · LW(p) · GW(p)

I've been building up drafts for a looooong time......

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-18T23:17:22.177Z · LW(p) · GW(p)

It seems like a lot of people are still thinking of alignment as too binary, which leads to critical errors in thinking like: "there will be sufficient economic incentives to solve alignment", and "once alignment is a bottleneck, nobody will want to deploy unaligned systems, since such a system won't actually do what they want".

It seems clear to me that:

1) These statements are true for a certain level of alignment, which I've called "approximate value learning" in the past (https://www.lesswrong.com/posts/rLTv9Sx3A79ijoonQ/risks-from-approximate-value-learning). I think I might have also referred to it as "pretty good alignment" or "good enough alignment" at various times.

2) This level of alignment is suboptimal from the point of view of x-safety, since the downside risk of extinction borne by the actors deploying the system is much smaller than the downside risk of extinction summed over all humans (a toy illustration follows this list).

3) We will develop techniques for "good enough" alignment before we develop techniques that are acceptable from the standpoint of x-safety.

4) Therefore, the expected outcome is: once "good enough alignment" is developed, a lot of actors deploy systems that are aligned well enough to benefit those actors, but that still carry an unacceptably high level of x-risk.

5) Thus if we don't improve alignment techniques quickly enough after developing "good enough alignment", its development will likely lead to a period of increased x-risk (under the "alignment bottleneck" model).
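A toy illustration of point (2), with made-up numbers and notation of my own: suppose deploying a "good enough" system gives the deploying actor a benefit it values at B, while carrying probability p of causing extinction. The actor's expected downside is only p * L_actor, its own loss, so deployment looks worthwhile to that actor whenever B > p * L_actor. But the expected downside for the world is p * (sum over all humans i of L_i), which is vastly larger. Deployment can thus be individually rational while being catastrophically negative in aggregate.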

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-22T00:36:51.522Z · LW(p) · GW(p)

Treacherous turns don't necessarily happen all at once. An AI system can start covertly recruiting resources outside its intended purview in preparation for a more overt power grab.

This can happen during training, without a deliberate "deployment" event. Once the AI has started recruiting resources, it can outperform AI systems that haven't done so on-distribution, with resources left over that it can devote to pursuing its true objective or instrumental goals.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-08-27T21:38:18.538Z · LW(p) · GW(p)

My pet "(AI) policy" idea for a while has been "direct recourse", which is the idea that you can hedge against one party precipitating an irreversible events by giving other parties the ability to disrupt their operations at will.
For instance, I could shut down my competitors' AI project if I think it's an X-risk.
The idea is that I would have to compensate you if I was later deemed to have done this for an illegitimate reason.
If shutting down your AI project is not irreversible, then this system increases our ability to prevent irreversible events, since I might stop some existential catastrophe, and if I shut down your project when I shouldn't, then I just compensate you and we're all good.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-09-01T17:32:09.525Z · LW(p) · GW(p)

Suggestion for authors here: don't use conclusive titles for posts that make speculative arguments.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T07:01:24.055Z · LW(p) · GW(p)

"No Free Lunch" (NFL) results in machine learning (ML) basically say that success all comes down to having a good prior.

So we know that we need a sufficiently good prior in order to succeed.

But we don't know what "sufficiently good" means.

e.g. I've heard speculation that maybe we can use a 2^-MDL prior over programs in any widely used Turing-complete programming language (e.g. Python) (sketched below), and that this will give enough information about our particular physics for something AIXI-like to become superintelligent, e.g. within our lifetime.

Or maybe we can't get anywhere without a much better prior.

DOES ANYONE KNOW of any work/(intelligent thoughts) on this?
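(For reference, the prior being speculated about above is, roughly, a description-length prior: P(h) ∝ 2^(-ℓ(h)), where ℓ(h) is the length in bits of the shortest program implementing hypothesis h in the chosen language, e.g. Python; the notation ℓ(h) is mine, added for illustration. This is a Solomonoff-style prior, and whether it is "sufficiently good" in the above sense is exactly the open question.)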

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T07:09:35.909Z · LW(p) · GW(p)

Although it's not framed this way, I think much of the disagreement about timelines/scaling-hypothesis/deep-learning in the ML community basically comes down to this...

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-17T19:39:33.985Z · LW(p) · GW(p)

I'm frustrated with the meme that "mesa-optimization/pseudo-alignment is a robustness (i.e. OOD) problem". IIUC, this is definitionally true in the mesa-optimization paper, but I think this misses the point.

In particular, this seems to exclude an important (maybe the most important) threat model: the AI understands how to appear aligned, and does so, while covertly pursuing its own objective on-distribution, during training.

This is exactly how I imagine a treacherous turn from a boxed superintelligent AI agent occurring, for instance. It secretly begins breaking out of the box (e.g. via manipulating humans) and we don't notice until it's too late.

Replies from: evhub
comment by evhub · 2020-09-17T19:58:13.514Z · LW(p) · GW(p)

> the AI understands how to appear aligned, and does so, while covertly pursuing its own objective on-distribution, during training.

Sure, but the fact that it defects in deployment and not in training is a consequence of distributional shift, specifically the shift from a situation where it can't break out of the box to a situation where it can.

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-18T23:06:15.180Z · LW(p) · GW(p)

No, I'm talking about it breaking out during training. The only "shifts" here are:

1) the AI gets smarter

2) (perhaps) the AI covertly influences its external environment (i.e. breaks out of the box a bit).

We can imagine scenarios where it's only (1) and not (2). I find them a bit more far-fetched, but this is the classic vision of the treacherous turn... the AI makes a plan, and then suddenly executes it to attain DSA (decisive strategic advantage). Once it starts to execute, ofc there is distributional shift, but:

A) it is auto-induced distributional shift

B) the developers never decided to deploy

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-16T09:22:07.097Z · LW(p) · GW(p)

As alignment techniques improve, they'll get good enough to solve new tasks before they get good enough to do so safely. This is a source of x-risk.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-13T21:23:34.354Z · LW(p) · GW(p)

Regarding the "Safety/Alignment vs. Capabilities" meme: it seems like people are sometimes using "capabilities" to use 2 different things:

1) "intelligence" or "optimization power"... i.e. the ability to optimize some objective function

2) "usefulness": the ability to do economically valuable tasks or things that people consider useful

In the meme, I think "capabilities" is meant to refer to (1).

Alignment is likely to be a bottleneck for (2).

For a given task, we can expect 3 stages of progress:

i) sufficient capabilities(1) to perform the task

ii) sufficient alignment to perform the task (albeit unsafely)

iii) sufficient alignment to perform the task safely

Between (i) and (ii) we can expect a "capabilities(1) overhang". When we go from (i) to (ii) we will see unsafe AI systems deployed and a potentially discontinuous jump in their ability to do the task.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-08-31T02:01:15.776Z · LW(p) · GW(p)

LessWrong and the Alignment Forum are great and all, but... if you are interested in technical AI safety, you should also learn about AI from other sources, e.g. by looking at workshops and conference proceedings, looking into different research groups in academia, etc.


comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-09-05T15:40:55.629Z · LW(p) · GW(p)

I find the argument that 'predicting data generated by agents (e.g. language modeling) will lead a model to learn / become an agent' much weaker than I used to.

This is because I think it only goes through cleanly if the task uses the same input and output as the agent.  This is emphatically not the case for (e.g.) GPT-3.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-23T21:27:00.273Z · LW(p) · GW(p)

For all of the hubbub about trying to elaborate better arguments for AI x-risk, it seems like a lot of people are describing the arguments in Superintelligence as relying on FOOM, agenty AI systems, etc. without actually justifying that description via references to the text.

It's been a while since I read Superintelligence, but my memory was that it anticipated a lot of counter-arguments quite well.  I'm not convinced that it requires such strong premises to make a compelling case.  So maybe someone interested in this project of clarifying the arguments should start by establishing that the arguments in Superintelligence really have the weaknesses they are claimed to have?

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-23T06:10:12.367Z · LW(p) · GW(p)

Moloch is not about coordination failures.  Moloch is about the triumph of instrumental goals.  Maybe we can defeat Moloch with sufficiently good coordination.  It's worth a shot at least.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-18T23:30:30.114Z · LW(p) · GW(p)

A lot of the discussion of mesa-optimization seems confused.

One thing that might be relevant towards clearing up the confusion is just to remember that "learning" and "inference" should not be thought of as cleanly separated in the first place; see, e.g., AIXI...

So when we ask "is it learning? Or just solving the task without learning?", this seems like a confused framing to me. Suppose your ML system learned an excellent prior, and then just did Bayesian inference at test time. Is that learning? Sure, why not. It might not use a traditional search/optimization algorithm, but it probably has to do *something* like that for computational reasons if it wants to do efficient approximate Bayesian inference over a large hypothesis space, so...
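For concreteness, here is a minimal toy sketch of "an excellent prior plus Bayesian inference at test time" (my own illustrative example; the coin-flip setup, hypothesis space, and numbers are all made up):

```python
import numpy as np

# A fixed hypothesis space and a prior over it, standing in for what the
# system "learned" during training.
hypotheses = np.array([0.2, 0.5, 0.8])   # candidate coin biases
prior = np.array([0.3, 0.4, 0.3])        # prior over hypotheses, fixed after training

def posterior(prior, data):
    """Bayes update: P(h | data) is proportional to P(h) * P(data | h)."""
    heads = sum(data)
    tails = len(data) - heads
    likelihood = hypotheses**heads * (1 - hypotheses)**tails
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

test_observations = [1, 1, 0, 1]          # data seen only at test time
print(posterior(prior, test_observations))
# The system's "beliefs" change in response to test-time data, with no
# parameter updates -- is that learning, or just inference? The distinction
# is largely a matter of framing, which is the point above.
```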

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-16T23:03:02.965Z · LW(p) · GW(p)

I like "tell culture" and find myself leaning towards it more often these days, but e.g. as I'm composing an email, I'll find myself worrying that the recipient will just interpret a statement like: "I'm curious about X" as a somewhat passive request for information about X (which it sort of is, but also I really don't want it to come across that way...)

Anyone have thoughts/suggestions?

Replies from: Raemon, AllAmericanBreakfast, capybaralet
comment by Raemon · 2020-09-17T00:16:59.719Z · LW(p) · GW(p)

Cultures depend on shared assumptions of trust, and indeed, if they don't share your assumptions, you can't just unilaterally declare a culture. (I think the short answer is "unless you want to onboard someone else into your culture, you probably can't just do the sort of thing you want to do.")

I recommend checking out Reveal Culture, which tackles some of this.

Replies from: Raemon
comment by Raemon · 2020-09-17T00:45:06.153Z · LW(p) · GW(p)

(You can manually specify "I'm curious about X [I don't mean to be asking you about it, just mentioning that I'm curious about it, no pressure if you don't want to go into it.]". But, that is indeed a clunkier statement, and probably defeats the point of you being able to casually mention it in the first place.)

I am somewhat curious what you're hoping to get out of being able to say things like "I'm curious about X" if it's not intended as a passive request. I think the answers here about how to communicate across cultures will depend a lot on what specific thing you're trying to communicate and why and how (and then covering that with a variety of patches, which are specific to the topic in question).

Replies from: Pongo, capybaralet, mr-hire
comment by Pongo · 2020-09-17T17:30:56.185Z · LW(p) · GW(p)

> But, that is indeed a clunkier statement

I once heard someone say, "I'm curious about X, but only want to ask you about it if you want to talk about it" and thought that seemed very skillful.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-17T06:11:28.481Z · LW(p) · GW(p)

It might be a passive request, I'm not actually sure... I'd think of it more like an invitation, which you are free to decline. Although OFC, declining an invitation does send a message whether you like it or not *shrug.

comment by Matt Goldenberg (mr-hire) · 2020-09-17T01:37:09.484Z · LW(p) · GW(p)

> But, that is indeed a clunkier statement, and probably defeats the point of you being able to casually mention it in the first place.)

Also like, if you're in something like guess culture, and someone tells you "I'm just telling you this with no expectation," they will still be trying to guess what you may want from that.

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-09-17T17:09:28.607Z · LW(p) · GW(p)

Be brave. Get clear on your own intentions. Feel out their comfort level with talking about X first. 

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-16T23:04:19.358Z · LW(p) · GW(p)

I guess one problem here is that how someone responds to such a statement carries information about how much they respect you...

If someone you are honored to even get the time of day from writes that, you will almost certainly craft a strong response about X...

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2023-05-22T22:01:13.371Z · LW(p) · GW(p)

Organizations that are looking for ML talent (e.g. to mentor more junior people, or get feedback on policy) should offer PhD students high-paying contractor/part-time work.

ML PhD students working on safety-relevant projects should be able to augment their meager stipends this way.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-09-01T17:45:52.929Z · LW(p) · GW(p)

I'm most active on Twitter these days; please follow me there! 
https://twitter.com/DavidSKrueger

I also have a website now: https://davidscottkrueger.com/

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-09-02T14:15:06.609Z · LW(p) · GW(p)

As an academic, I typically find LW/AF posts to be too "pedagogic" and not skimmable enough.  This limits how much I read them.  Academic papers are, on average, much easier to extract a TL;DR from.

Being pedagogic has advantages, but it can be annoying if you are already familiar with much of the background and just want to skip to the (purportedly) novel bits.

Replies from: mikkel-wilson, alexander-gietelink-oldenziel
comment by MikkW (mikkel-wilson) · 2021-09-02T15:49:55.849Z · LW(p) · GW(p)

Pedagogic posts are more accessible, and a large portion of the point of publishing on LW is to present technical ideas to a wide audience. While the audience here is intelligent, they also come from a wide variety of domains, so accessibility is key to successfully writing a good LW post (with some exceptions).

Do you have a proposal for how to increase skimmability without sacrificing accessibility?

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-09-02T20:24:32.256Z · LW(p) · GW(p)

Maybe some, but I think that's a bit beside the point... 
I agree there's a genuine trade-off, but my post was mostly about AF.
I'm mostly in LW/AF for AI Alignment content, and I think these posts should strive to be a bit closer to academic style.

A few quick thoughts:
- include abstracts
- say whether a post is meant to be pedagogic or not
- say "you can skip this section if"
- follow something more like the format of an academic paper
- include a figure near the top that summarizes the idea for someone with sufficient background, with a caption like "a summary of [idea]: description / explanation"

Replies from: adamShimi, mikkel-wilson
comment by adamShimi · 2021-09-03T16:17:24.262Z · LW(p) · GW(p)

Sounds like a fair point. I'll try to add that to my posts in the future. ;)

comment by MikkW (mikkel-wilson) · 2021-09-02T20:42:42.665Z · LW(p) · GW(p)

I agree that AI alignment posts don't need to aim for accessibility to the same degree as the typical LW post (this was what I was mainly referring to when I edited in "with some exceptions"), but you did name-check LW in your top-level post, and I don't think it's beside the point for the typical LW post.

I think your suggestions are good and reasonable suggestions.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-10-03T23:44:23.049Z · LW(p) · GW(p)

We learned about RICE (rest, ice, compression, elevation) as a treatment for injuries (e.g. sprains) in middle school, and it's since struck me as odd that you would want to inhibit the body's natural healing response.

It seems like RICE is being questioned by medical professionals, as well, but consensus is far off.

Anyone have thoughts/knowledge about this?

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-10-03T23:45:37.222Z · LW(p) · GW(p)

https://thischangedmypractice.com/move-an-injury-not-rice/
https://www.stoneclinic.com/blog/why-rice-not-always-nice-and-some-thoughts-about-swelling

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-04-14T18:36:00.014Z · LW(p) · GW(p)

I'll be at EAG this weekend.

Replies from: Benito
comment by Ben Pace (Benito) · 2022-04-14T22:02:22.493Z · LW(p) · GW(p)

Same here.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-11-15T22:11:52.583Z · LW(p) · GW(p)

Are there people in the AI alignment / x-safety community who are still major "Deep Learning skeptics" (in terms of capabilities)?  I know Stuart Russell is... who else?

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-11-14T00:31:36.341Z · LW(p) · GW(p)

IMO, the outer alignment problem is still the biggest problem in (technical) AI Alignment.  We don't know how to write down -- or learn -- good specifications, and people making strong AIs that optimize for proxies is still what's most likely to get us all killed.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-10-04T02:19:21.593Z · LW(p) · GW(p)

Some possible implications of more powerful AI/technology for privacy:

1) It's as if all of your logged data gets pored over by a team of super-detectives to make informed guesses about every aspect of your life, even those that seem completely unrelated to those kinds of data.

2) Even data that you try to hide can be read from things like reverse engineering what you type based on the sounds of you typing, etc.

3) Powerful actors will deploy advanced systems to model, predict, and influence your behavior, and extreme privacy precautions starting now may be warranted.

4) On the other hand, if you DON'T have a significant digital footprint, you may be seen as significantly less trustworthy.  If AI systems don't know what to make of you, you may be the first up against the wall (compare with seeking credit without having a credit history).
 
5) On the other other hand ("on the foot"?), if you trust that future societies will be more enlightened, then you may be retroactively rewarded for being more enlightened today.

Anything important I left out?

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-30T08:04:44.814Z · LW(p) · GW(p)

Whelp... that's scary:

Chip Huyen (@chipro):

> 4. You won’t need to update your models as much
>
> One mindboggling fact about DevOps: Etsy deploys 50 times/day. Netflix 1000s times/day. AWS every 11.7 seconds. MLOps isn’t an exemption. For online ML systems, you want to update them as fast as humanly possible. (5/6)

https://twitter.com/chipro/status/1310952553459462146

Replies from: Dagon
comment by Dagon · 2020-09-30T16:56:30.553Z · LW(p) · GW(p)

What part is scary?  I think they're missing out on the sheer variety of model usage - probably as variable as software deployments.  But I don't think there's anything particularly scary about any given point on the curve.

Some really do get built, validated, and deployed twice a year.  Some have CI pipelines that re-train with new data and re-validate every few minutes.  Some are self-updating, and re-sync to a clean state periodically.  Some are running continuous a/b tests of many candidate models, picking the best-performer for a customer segment every few minutes, and adding/removing models from the pool many times per day.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T23:53:17.695Z · LW(p) · GW(p)

What's our backup plan if the internet *really* goes to shit?

E.g. Google search seems to have suddenly gotten way worse for searching for machine learning papers for me in the last month or so. I'd gotten used to it being great, and don't have a good backup.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T23:50:56.763Z · LW(p) · GW(p)

A friend asked me what EAs think of https://en.wikipedia.org/wiki/Chuck_Feeney.

Here's my response (based on ~1 minute of Googling):

He seems to have what I call a "moral purity" attitude towards morality.
By this I mean thinking of ethics as less consequentialist and more about "being a good person".

I think such an attitude is natural, very typical, and not very EA. So, e.g., living frugally might or might not be EA, but it definitely makes sense if you believe we have strong charitable obligations and have a moral purity attitude towards morality.

Moving away from moral purity and giving consequentialist arguments against it is maybe one of the main insights of EA vs. Peter Singer.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T06:31:27.020Z · LW(p) · GW(p)

Moloch is not about coordination failures.

Moloch is about the triumph of instrumental goals.

Coordination *might* save us from that. Or not. "it is too soon to say"

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2020-12-12T20:45:42.509Z · LW(p) · GW(p)

Working a lot is an instrumental goal. If you start tracking your time, and optimizing that metric, you might end up working more than optimal. That seems like a triumph of instrumental goals that isn't a coordination failure. I wouldn't assign this failure to Moloch. Thoughts?

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-12-16T20:37:26.273Z · LW(p) · GW(p)

I basically agree, but I do assign it to Moloch. *shrug