capybaralet's Shortform

post by capybaralet · 2020-08-27T21:38:18.144Z · score: 5 (1 votes) · LW · GW · 33 comments


Comments sorted by top scores.

comment by capybaralet · 2020-09-15T07:28:11.208Z · score: 13 (5 votes) · LW(p) · GW(p)

Wow this is a lot better than my FB/Twitter feed :P

:D :D :D

Let's do this guys! This is the new FB :P

comment by habryka (habryka4) · 2020-09-16T00:38:56.225Z · score: 2 (1 votes) · LW(p) · GW(p)

:D Glad to hear that! 

comment by capybaralet · 2020-09-15T06:32:52.032Z · score: 12 (3 votes) · LW(p) · GW(p)

I intend to convert a number of draft LW blog posts into shortforms.

Then I will write a LW post linking to all of them and asking people to request that I elaborate on any that they are particularly interested in.

comment by capybaralet · 2020-09-15T06:33:12.306Z · score: 5 (3 votes) · LW(p) · GW(p)

I've been building up drafts for a looooong time......

comment by capybaralet · 2020-09-18T23:17:22.177Z · score: 11 (7 votes) · LW(p) · GW(p)

It seems like a lot of people are still thinking of alignment as too binary, which leads to critical errors in thinking like: "there will be sufficient economic incentives to solve alignment", and "once alignment is a bottleneck, nobody will want to deploy unaligned systems, since such a system won't actually do what they want".

It seems clear to me that:

1) These statements are true for a certain level of alignment, which I've called "approximate value learning" in the past (https://www.lesswrong.com/posts/rLTv9Sx3A79ijoonQ/risks-from-approximate-value-learning). I think I might have also referred to it as "pretty good alignment" or "good enough alignment" at various times.

2) This level of alignment is suboptimal from the point of view of x-safety, since the downside risk of extinction for the actors deploying the system is less than the downside risk of extinction summed over all humans, so those actors will tolerate more extinction risk than is acceptable for humanity as a whole.

3) We will develop techniques for "good enough" alignment before we develop techniques that are acceptable from the standpoint of x-safety.

4) Therefore, the expected outcome is: once "good enough alignment" is developed, a lot of actors deploy systems that are aligned enough for them to benefit from them, but still carry an unacceptably high level of x-risk.

5) Thus, if we don't improve alignment techniques quickly enough after developing "good enough alignment", its development will likely lead to a period of increased x-risk (under the "alignment bottleneck" model).

comment by capybaralet · 2020-09-22T00:36:51.522Z · score: 9 (5 votes) · LW(p) · GW(p)

Treacherous turns don't necessarily happen all at once. An AI system can start covertly recruiting resources outside its intended purview in preparation for a more overt power grab.

This can happen during training, without a deliberate "deployment" event. Once the AI has started recruiting resources, it can match or outperform AI systems that haven't done so on-distribution, with resources left over to devote to pursuing its true objective or instrumental goals.

comment by capybaralet · 2020-08-27T21:38:18.538Z · score: 9 (4 votes) · LW(p) · GW(p)

My pet "(AI) policy" idea for a while has been "direct recourse", which is the idea that you can hedge against one party precipitating irreversible events by giving other parties the ability to disrupt their operations at will.
For instance, I could shut down my competitors' AI project if I think it's an X-risk.
The idea is that I would have to compensate you if I was later deemed to have done this for an illegitimate reason.
If shutting down your AI project is not irreversible, then this system increases our ability to prevent irreversible events, since I might stop some existential catastrophe, and if I shut down your project when I shouldn't, then I just compensate you and we're all good.

comment by capybaralet · 2020-09-15T07:01:24.055Z · score: 6 (4 votes) · LW(p) · GW(p)

"No Free Lunch" (NFL) results in machine learning (ML) basically say that success all comes down to having a good prior.

So we know that we need a sufficiently good prior in order to succeed.

But we don't know what "sufficiently good" means.

e.g. I've heard speculation that maybe we can use 2^-MDL (with description length measured in any widely used Turing-complete programming language, e.g. Python) as our prior, and that this would give enough information about our particular physics for something AIXI-like to become superintelligent, e.g. within our lifetime.
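As a concrete (and very rough) sketch of what a prior like that could look like, here's a toy construction just for illustration (the program strings and function names are made up): use the length of the zlib-compressed Python source as a crude stand-in for MDL, and weight each candidate program by 2^-length. Real Solomonoff/MDL-style priors are uncomputable, so this only gestures at the idea:

```python
import zlib

# Toy "2^-MDL" prior over a tiny, finite set of candidate programs.
# Compressed source length is only a crude proxy for description length.
candidate_programs = [
    "def f(x): return x + 1",
    "def f(x): return sum(i * i for i in range(x)) % 7 + x",
]

def description_length_bits(source: str) -> int:
    """Crude MDL proxy: number of bits in the zlib-compressed source."""
    return 8 * len(zlib.compress(source.encode("utf-8")))

# Unnormalized 2^-MDL weights, then normalized over this finite set.
# (For realistically long programs you'd work in log space to avoid underflow.)
weights = [2.0 ** -description_length_bits(p) for p in candidate_programs]
total = sum(weights)
prior = [w / total for w in weights]
print(prior)  # the shorter program gets (much) more prior mass
```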

Or maybe we can't get anywhere without a much better prior.

DOES ANYONE KNOW of any work/(intelligent thoughts) on this?

comment by capybaralet · 2020-09-15T07:09:35.909Z · score: 2 (2 votes) · LW(p) · GW(p)

Although it's not framed this way, I think much of the disagreement about timelines/scaling-hypothesis/deep-learning in the ML community basically comes down to this...

comment by capybaralet · 2020-09-13T21:23:34.354Z · score: 6 (4 votes) · LW(p) · GW(p)

Regarding the "Safety/Alignment vs. Capabilities" meme: it seems like people are sometimes using "capabilities" to mean 2 different things:

1) "intelligence" or "optimization power"... i.e. the ability to optimize some objective function

2) "usefulness": the ability to do economically valuable tasks or things that people consider useful

I think that, in the meme, "capabilities" is meant to refer to (1).

Alignment is likely to be a bottleneck for (2).

For a given task, we can expect 3 stages of progress:

i) sufficient capabilities(1) to perform the task

ii) sufficient alignment to perform the task unsafely

iii) sufficient alignment to perform the task safely

Between (i) and (ii) we can expect a "capabilities(1) overhang". When we go from (i) to (ii) we will see unsafe AI systems deployed and a potentially discontinuous jump in their ability to do the task.

comment by capybaralet · 2020-09-17T19:39:33.985Z · score: 5 (3 votes) · LW(p) · GW(p)

I'm frustrated with the meme that "mesa-optimization/pseudo-alignment is a robustness (i.e. OOD) problem". IIUC, this is definitionally true in the mesa-optimization paper, but I think this misses the point.

In particular, this seems to exclude an important (maybe the most important) threat model: the AI understands how to appear aligned, and does so, while covertly pursuing its own objective on-distribution, during training.

This is exactly how I imagine a treacherous turn from a boxed superintelligent AI agent would occur, for instance. It secretly begins breaking out of the box (e.g. via manipulating humans) and we don't notice until it's too late.

comment by evhub · 2020-09-17T19:58:13.514Z · score: 6 (3 votes) · LW(p) · GW(p)

> the AI understands how to appear aligned, and does so, while covertly pursuing its own objective on-distribution, during training.

Sure, but the fact that it defects in deployment and not in training is a consequence of distributional shift, specifically the shift from a situation where it can't break out of the box to a situation where it can.

comment by capybaralet · 2020-09-18T23:06:15.180Z · score: 3 (2 votes) · LW(p) · GW(p)

No, I'm talking about it breaking out during training. The only "shifts" here are:

1) the AI gets smarter

2) (perhaps) the AI covertly influences its external environment (i.e. breaks out of the box a bit).

We can imagine scenarios where it's only (1) and not (2). I find them a bit more far-fetched, but this is the classic vision of the treacherous turn... the AI makes a plan, and then suddenly executes it to attain a DSA (decisive strategic advantage). Once it starts to execute, ofc there is distributional shift, but:

A) it is auto-induced distributional shift

B) the developers never decided to deploy

comment by capybaralet · 2020-09-16T09:22:07.097Z · score: 5 (3 votes) · LW(p) · GW(p)

As alignment techniques improve, they'll get good enough to solve new tasks before they get good enough to do so safely. This is a source of x-risk.

comment by capybaralet · 2020-09-23T21:27:00.273Z · score: 3 (2 votes) · LW(p) · GW(p)

For all of the hubbub about trying to elaborate better arguments for AI x-risk, it seems like a lot of people are describing the arguments in Superintelligence as relying on FOOM, agenty AI systems, etc. without actually justifying that description via references to the text.

It's been a while since I read Superintelligence, but my memory was that it anticipated a lot of counter-arguments quite well.  I'm not convinced that it requires such strong premises to make a compelling case.  So maybe someone interested in this project of clarifying the arguments should start by establishing that the arguments in Superintelligence really have the weaknesses they are claimed to?

comment by capybaralet · 2020-09-23T06:10:12.367Z · score: 3 (2 votes) · LW(p) · GW(p)

Moloch is not about coordination failures.  Moloch is about the triumph of instrumental goals.  Maybe we can defeat Moloch with sufficiently good coordination.  It's worth a shot at least.

comment by capybaralet · 2020-09-18T23:30:30.114Z · score: 3 (2 votes) · LW(p) · GW(p)

A lot of the discussion of mesa-optimization seems confused.

One thing that might be relevant to clearing up the confusion is just to remember that "learning" and "inference" should not be thought of as cleanly separated in the first place; see, e.g., AIXI...

So when we ask "is it learning? Or just solving the task without learning?", this seems like a confused framing to me. Suppose your ML system learned an excellent prior, and then just did Bayesian inference at test time. Is that learning? Sure, why not. It might not use a traditional search/optimization algorithm, but it probably has to do *something* like that for computational reasons if it wants to do efficient approximate Bayesian inference over a large hypothesis space, so...
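As a toy illustration of that "learned prior + test-time inference" picture (just a sketch of my own, with made-up numbers): the parameters below are frozen, yet pure Bayesian updating at test time still makes the predictions adapt to the data stream, so whether to call it "learning" is mostly a matter of framing.

```python
import numpy as np

hypotheses = np.array([0.1, 0.5, 0.9])        # possible coin biases P(heads)
log_post = np.log(np.array([0.2, 0.6, 0.2]))  # the frozen, already-"learned" prior

def update(log_post, flip):
    """One Bayesian update on a single coin flip (1 = heads, 0 = tails)."""
    log_post = log_post + np.log(hypotheses if flip == 1 else 1.0 - hypotheses)
    return log_post - np.log(np.exp(log_post).sum())  # renormalize

def predict(log_post):
    """Posterior-predictive probability that the next flip is heads."""
    return float(np.exp(log_post) @ hypotheses)

print(predict(log_post))          # prediction under the prior alone: 0.5
for flip in [1, 1, 0, 1, 1, 1]:   # "test-time" data; no parameter updates anywhere
    log_post = update(log_post, flip)
print(predict(log_post))          # prediction has shifted toward a heads-biased coin
```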

comment by capybaralet · 2020-09-16T23:03:02.965Z · score: 3 (2 votes) · LW(p) · GW(p)

I like "tell culture" and find myself leaning towards it more often these days, but e.g. as I'm composing an email, I'll find myself worrying that the recipient will just interpret a statement like: "I'm curious about X" as a somewhat passive request for information about X (which it sort of is, but also I really don't want it to come across that way...)

Anyone have thoughts/suggestions?

comment by Raemon · 2020-09-17T00:16:59.719Z · score: 6 (3 votes) · LW(p) · GW(p)

Cultures depend on shared assumptions of trust, and indeed, if they don't share your assumptions, you can't just unilaterally declare a culture. (I think the short answer is "unless you want to onboard someone else into your culture, you probably can't just do the sort of thing you want to do.")

I recommend checking out Reveal Culture, which tackles some of this.

comment by Raemon · 2020-09-17T00:45:06.153Z · score: 4 (2 votes) · LW(p) · GW(p)

(You can manually specify "I'm curious about X [I don't mean to be asking you about it, just mentioning that I'm curious about it, no pressure if you don't want to go into it.]". But, that is indeed a clunkier statement, and probably defeats the point of you being able to casually mention it in the first place.)

I am somewhat curious what you're hoping to get out of being able to say things like "I'm curious about X" if it's not intended as a passive request. I think the answers here about how to communicate across cultures will depend a lot on what specific thing you're trying to communicate and why and how (and then covering that with a variety of patches, which are specific to the topic in question).

comment by Pongo · 2020-09-17T17:30:56.185Z · score: 3 (2 votes) · LW(p) · GW(p)

> But, that is indeed a clunkier statement

I once heard someone say, "I'm curious about X, but only want to ask you about it if you want to talk about it" and thought that seemed very skillful.

comment by capybaralet · 2020-09-17T06:11:28.481Z · score: 3 (2 votes) · LW(p) · GW(p)

It might be a passive request, I'm not actually sure... I'd think of it more like an invitation, which you are free to decline. Although OFC, declining an invitation does send a message whether you like it or not *shrug.

comment by mr-hire · 2020-09-17T01:37:09.484Z · score: 3 (2 votes) · LW(p) · GW(p)

> But, that is indeed a clunkier statement, and probably defeats the point of you being able to casually mention it in the first place.)

Also like, if you're in something like guess culture, and someone tells you "I'm just telling you this with no expectation," they will still be trying to guess what you may want from that.

comment by AllAmericanBreakfast · 2020-09-17T17:09:28.607Z · score: 2 (1 votes) · LW(p) · GW(p)

Be brave. Get clear on your own intentions. Feel out their comfort level with talking about X first. 

comment by capybaralet · 2020-09-16T23:04:19.358Z · score: 1 (1 votes) · LW(p) · GW(p)

I guess one problem here is that how someone responds to such a statement carries information about how much they respect you...

If someone you are honored to even get the time of day from writes that, you will almost certainly craft a strong response about X...

comment by capybaralet · 2020-10-03T23:44:23.049Z · score: 2 (2 votes) · LW(p) · GW(p)

We learned about RICE as a treatment for injuries (e.g. sprains) in middle school, and it's since struck me as odd that you would want to inhibit the body's natural healing response.

It seems like RICE is being questioned by medical professionals, as well, but consensus is far off.

Anyone have thoughts/knowledge about this?

comment by capybaralet · 2020-10-03T23:45:37.222Z · score: 2 (2 votes) · LW(p) · GW(p)

https://thischangedmypractice.com/move-an-injury-not-rice/
https://www.stoneclinic.com/blog/why-rice-not-always-nice-and-some-thoughts-about-swelling

comment by capybaralet · 2020-10-04T02:19:21.593Z · score: 1 (1 votes) · LW(p) · GW(p)

Some possible implications of more powerful AI/technology for privacy:

1) It's as if all of your logged data gets pored over by a team of super-detectives to make informed guesses about every aspect of your life, even those that seem completely unrelated to those kinds of data.

2) Even data that you try to hide can be inferred, e.g. by reverse-engineering what you type from the sound of your typing, etc.

3) Powerful actors will deploy advanced systems to model, predict, and influence your behavior, and extreme privacy precautions starting now may be warranted.

4) On the other hand, if you DON'T have a significant digital footprint, you may be treated as significantly less trustworthy.  If AI systems don't know what to make of you, you may be the first up against the wall (compare with seeking credit without having a credit history).
 
5) On the other other hand ("on the foot"?), if you trust that future societies will be more enlightened, then you may be retroactively rewarded for being more enlightened today.

Anything important I left out?

comment by capybaralet · 2020-09-30T08:04:44.814Z · score: 1 (1 votes) · LW(p) · GW(p)

Whelp... that's scary:

> Chip Huyen (@chipro): 4. You won’t need to update your models as much. One mindboggling fact about DevOps: Etsy deploys 50 times/day. Netflix 1000s times/day. AWS every 11.7 seconds. MLOps isn’t an exemption. For online ML systems, you want to update them as fast as humanly possible. (5/6)

https://twitter.com/chipro/status/1310952553459462146

comment by Dagon · 2020-09-30T16:56:30.553Z · score: 4 (2 votes) · LW(p) · GW(p)

What part is scary?  I think they're missing out on the sheer variety of model usage - probably as variable as software deployments.  But I don't think there's anything particularly scary about any given point on the curve.

Some really do get built, validated, and deployed twice a year.  Some have CI pipelines that re-train with new data and re-validate every few minutes.  Some are self-updating, and re-sync to a clean state periodically.  Some are running continuous a/b tests of many candidate models, picking the best-performer for a customer segment every few minutes, and adding/removing models from the pool many times per day.

comment by capybaralet · 2020-09-15T23:53:17.695Z · score: 1 (1 votes) · LW(p) · GW(p)

What's our backup plan if the internet *really* goes to shit?

E.g. Google search seems to have suddenly gotten way worse for searching for machine learning papers for me in the last month or so. I'd gotten used to it being great, and don't have a good backup.

comment by capybaralet · 2020-09-15T23:50:56.763Z · score: 1 (1 votes) · LW(p) · GW(p)

A friend asked me what EAs think of https://en.wikipedia.org/wiki/Chuck_Feeney.

Here's my response (based on ~1 minute of Googling):

He seems to have what I call a "moral purity" attitude towards morality.
By this I mean, thinking of ethics as less consequentialist and more about "being a good person".

I think such an attitude is natural, very typical, and not very EA. So, e.g., living frugally might or might not be EA, but definitely makes sense if you believe we have strong charitable obligations and have a moral purity attitude towards morality.

Moving away from moral purity and giving consequentialist arguments against it is maybe one of the main insights of EA vs. Peter Singer.

comment by capybaralet · 2020-09-15T06:31:27.020Z · score: 1 (3 votes) · LW(p) · GW(p)

Moloch is not about coordination failures.

Moloch is about the triumph of instrumental goals.

Coordination *might* save us from that. Or not. "it is too soon to say"