Posts

Berkeley group house, spots open 2022-09-22T17:13:59.842Z
[Linkpost] Can lab-grown brains become conscious? 2022-08-28T17:45:52.937Z
When is positive self-talk not worth it due to self-delusion? 2022-04-21T03:11:06.915Z
Game that might improve research productivity 2022-03-26T07:00:11.599Z
Have you noticed costs of being anticipatory? 2022-02-14T19:15:57.061Z
Ideas for avoiding optimizing the wrong things day-to-day? 2022-01-26T10:46:02.496Z
Jack R's Shortform 2022-01-24T05:32:02.388Z
What time in your life were you the most productive at learning and/or thinking and why? 2021-12-22T22:56:20.379Z
Redwood Research is hiring for several roles 2021-11-29T00:16:32.650Z
Some of the best rationality essays 2021-10-19T22:57:48.843Z
Do you think you are a Boltzmann brain? If not, why not? 2021-10-15T06:02:30.083Z
Referential Information 2021-08-01T21:07:11.371Z
My Productivity Tips and Systems 2021-07-25T12:52:34.932Z
Stanford EA Confusion Dinner 2021-06-15T10:49:57.353Z

Comments

Comment by Jack R (Jack Ryan) on Prize idea: Transmit MIRI and Eliezer's worldviews · 2022-09-20T03:47:43.681Z · LW · GW

Aren’t turned off by perceived arrogance

One hypothesis I've had is that people with more MIRI-like views tend to be more arrogant themselves. A possible mechanism is that the idea that the world is going to end and that they are the only ones who can save it is appealing in a way that shifts their views on certain questions and changes the way they think about AI (e.g. they need less convincing that they are some of the most important people ever, so they spend less time considering why AI might go well by default).

[ETA: In case it wasn't clear, I am positing subconscious patterns correlated with arrogance that lead to MIRI-like views]

Comment by Jack R (Jack Ryan) on How can we ensure that a Friendly AI team will be sane enough? · 2022-09-12T18:51:51.875Z · LW · GW

How'd this go? Just searched LW for "neurofeedback" since I recently learned about it

Comment by Jack R (Jack Ryan) on Discussion on utilizing AI for alignment · 2022-09-06T00:37:13.107Z · LW · GW

That argument makes sense, thanks

Comment by Jack R (Jack Ryan) on Discussion on utilizing AI for alignment · 2022-09-04T22:05:34.880Z · LW · GW

We are very likely not going to miss out on alignment by a 2x productivity boost, that’s not how things end up in the real world. We’ll either solve alignment or miss by a factor of >10x.

Why is this true?

Comment by Jack R (Jack Ryan) on The shard theory of human values · 2022-09-04T19:52:21.639Z · LW · GW

the genome can’t directly make us afraid of death

It's not necessarily direct, but in case you aren't aware of it, prepared learning is a relevant phenomenon, since apparently the genome does predispose us to certain fears.

Comment by Jack R (Jack Ryan) on Announcing Encultured AI: Building a Video Game · 2022-08-30T05:13:09.129Z · LW · GW

Seems like this guy has already started trying to use GPT-3 in a videogame: GPT3 AI Game Prototype

Comment by Jack R (Jack Ryan) on AGI Timelines Are Mostly Not Strategically Relevant To Alignment · 2022-08-23T22:41:29.028Z · LW · GW

Not sure if it was clear, but the reason I asked is that if you think the fraction changes significantly before AGI, then the claim that Thane quotes in the top-level comment wouldn't be true.

Comment by Jack R (Jack Ryan) on AGI Timelines Are Mostly Not Strategically Relevant To Alignment · 2022-08-23T22:24:38.842Z · LW · GW

Don't timelines change your views on takeoff speeds? If not, what's an example piece of evidence that updates your timelines but not your takeoff speeds?

Comment by Jack R (Jack Ryan) on AGI Timelines Are Mostly Not Strategically Relevant To Alignment · 2022-08-23T22:23:58.488Z · LW · GW

Same - also interested if John was assuming that the fraction of deployment labor that is automated changes negligibly over time pre-AGI.

Comment by Jack R (Jack Ryan) on Broad Picture of Human Values · 2022-08-21T04:20:10.118Z · LW · GW

Humans can change their action patterns on a dime, inspired by philosophical arguments, convinced by logic, indoctrinated by political or religious rhetoric, or plainly because they're forced to.

I'd add that action patterns can change for reasons other than logical/deliberative ones. For example, adapting to a new culture means you might adopt, and have new reactions to, objects, gestures, etc. that are considered symbolic in that culture.

Comment by Jack R (Jack Ryan) on Discovering Agents · 2022-08-18T22:11:08.309Z · LW · GW

so the edge  is terminal

Earlier you said that the blue edges were terminal edges.

Comment by Jack R (Jack Ryan) on Announcing Encultured AI: Building a Video Game · 2022-08-18T06:11:33.467Z · LW · GW

What are some of the "various things" you have in mind here? It seems possible to me that something like "AI alignment testing" is straightforwardly upstream of what players want, but maybe you were thinking of something else

Comment by Jack R (Jack Ryan) on Pendulums, Policy-Level Decisionmaking, Saving State · 2022-08-11T19:21:10.584Z · LW · GW

"Go with your gut” [...] [is] insensitive to circumstance.

People's guts seem very sensitive to circumstance, especially compared to commitments.

Comment by Jack R (Jack Ryan) on The alignment problem from a deep learning perspective · 2022-08-11T07:26:48.406Z · LW · GW

But the capabilities of neural networks are currently advancing much faster than our ability to understand how they work or interpret their cognition;

Naively, you might think that as opacity increases, trust in systems decreases, and hence something like "willingness to deploy" decreases. 

How good of an argument does this seem to you against the hypothesis that "capabilities will grow faster than alignment"? I'm viewing the quoted sentence as an argument for the hypothesis.

Some initial thoughts:

  • A highly capable system doesn't necessarily need to be deployed by humans to disempower humans, meaning "deployment" is not necessarily a good concept to use here
  • On the other hand, deployability of systems increases investment in AI (how much?), meaning that increasing opacity might in some sense decrease future capabilities compared to counterfactuals where the AI was less opaque
  • I don't know how much willingness to deploy really decreases from increased opacity, if at all
  • Opacity can be thought of as the inability to predict behavior in a given new environment. As models have scaled, the number of benchmarks we test them on also seems to have scaled, which does help us understand their behavior. So perhaps the measure that's actually important is the "difference between tested behavior and deployed behavior" and it's unclear to me what this metric looks like over time. [ETA: it feels obvious that our understanding of AI's deployed behavior has worsened, but I want to be more specific and sure about that]

Comment by Jack R (Jack Ryan) on Will working here advance AGI? Help us not destroy the world! · 2022-07-28T20:59:15.101Z · LW · GW

I was thinking of the possibility of affecting decision-making, either directly by rising through the ranks (not very likely) or indirectly by being an advocate for safety at an important time and pushing things into the Overton window within an organization.

I imagine Habryka would say that a significant possibility here is that joining an AGI lab will wrongly turn you into an AGI enthusiast. I think biasing effects like that are real, though I also think it's hard to tell in cases like that how much you are biased vs. updating correctly on new information, and one could make similar bias claims about the AI x-risk community (e.g. there is social pressure to be doomy; being exposed only to heuristic arguments for doom and few heuristic arguments for optimism will bias you toward being doomier than you would be given more information).

Comment by Jack R (Jack Ryan) on Will working here advance AGI? Help us not destroy the world! · 2022-07-27T04:51:10.209Z · LW · GW

It seems like you are confident that the delta in capabilities would outweigh any delta in general alignment sympathy. Is this what you think?

Comment by Jack R (Jack Ryan) on A central AI alignment problem: capabilities generalization, and the sharp left turn · 2022-07-26T08:43:17.300Z · LW · GW

Attempting to manually specify the nature of goodness is a doomed endeavor, of course, but that's fine, because we can instead specify processes for figuring out (the coherent extrapolation of) what humans value. […] So today's alignment problems are a few steps removed from tricky moral questions, on my models.

I'm not convinced that choosing those processes is significantly non-moral. I might be misunderstanding what you are pointing at, but the fact that being able to choose the voting system gives you power over the vote's outcome feels like evidence of this sort of thing: meta-level decisions are still importantly tied to object-level decisions.

Comment by Jack R (Jack Ryan) on Criticism of EA Criticism Contest · 2022-07-14T22:34:28.186Z · LW · GW

I think there should be a word for your parsing, maybe "VNM utilitarianism," but I think most people mean roughly what's on the wiki page for utilitarianism:

Utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all affected individuals

Comment by Jack R (Jack Ryan) on Where I agree and disagree with Eliezer · 2022-07-09T23:07:19.377Z · LW · GW

It's not obvious to me that the class of counter-examples "expertise, in most fields, is not easier to verify than to generate" are actually counter-examples. For example for "if you're not a hacker, you can't tell who the good hackers are," it still seems like it would be easier to verify whether a particular hack will work than to come up with it yourself, starting off without any hacking expertise.

Comment by Jack R (Jack Ryan) on Human values & biases are inaccessible to the genome · 2022-07-08T04:42:27.033Z · LW · GW

Could you clarify a bit more what you mean when you say "X is inaccessible to the human genome?"

Comment by Jack R (Jack Ryan) on Information Loss --> Basin flatness · 2022-05-22T01:00:25.995Z · LW · GW

Ah okay -- based on that description I have updated positively on the usefulness, and I have also updated positively on the hypothesis "I am missing a lot of important information that contextualizes this project," though I'm still confused.

Would be interested to know the causal chain from understanding circuit simplicity to the future being better, but maybe I should just stay posted (or maybe there is a different post I should read that you can link me to; or maybe the impact is diffuse and talking about any particular path doesn't make that much sense [though even in this case my guess is that it is still helpful to have at least one possible impact story]).

Also, I just want to make clear that I made my original comment because I figured sharing my user experience would be helpful (e.g. by prompting a sentence about the theory of change), and hopefully not with the effect of being discouraging / being a downer.

Comment by Jack R (Jack Ryan) on Information Loss --> Basin flatness · 2022-05-21T23:40:21.541Z · LW · GW

Comment by Jack R (Jack Ryan) on Information Loss --> Basin flatness · 2022-05-21T23:38:18.263Z · LW · GW

I didn't finish reading this, but if it were the case that:

  • There were clear and important implications of this result for making the world better (via aligning AGI)
  • These implications were stated in the summary at the beginning

then I very plausibly would have finished reading the post or saved it for later.

ETA: For what it's worth, I still upvoted and liked the post, since I think deconfusing ourselves about stuff like this is plausibly very good and at the very least interesting. I just didn't like it enough to finish reading it or save it, because from my perspective its expected usefulness wasn't high enough given the information I had.

Comment by Jack R (Jack Ryan) on Is AI Progress Impossible To Predict? · 2022-05-16T01:43:02.596Z · LW · GW

I wonder if there are any measurable dimensions along which tasks can vary, and whether that could help with predicting task progress at all.  A simple example is the average input size for the benchmark.

Comment by Jack R (Jack Ryan) on Starting too many projects, finishing none · 2022-05-06T02:06:27.908Z · LW · GW

I'm glad you posted this — this may be happening to me, and now I've counterfactually read about sunk cost faith

Comment by Jack Ryan on [deleted post] 2022-04-28T08:16:38.358Z

I don't know how good of a fit you would be, but have you considered applying to Redwood Research?

Comment by Jack Ryan on [deleted post] 2022-04-23T08:48:06.830Z

Ah I see, and just to make sure I'm not going crazy, you've edited the post now to reflect this?

Comment by Jack Ryan on [deleted post] 2022-04-23T05:24:20.479Z

W is a function, right? If so, what’s its type signature?

Comment by Jack R (Jack Ryan) on When is positive self-talk not worth it due to self-delusion? · 2022-04-21T23:25:50.809Z · LW · GW

I agree, though I want a good enough understanding of the gears that I can determine whether something like "telling yourself you are awesome every day" will have counterfactually better outcomes than not. I guess the studies suggest the answer in this case is "yes," inasmuch as the negative externalities of self-delusion are captured by the metrics that the studies in the TED talk use. [ETA: and I feel like I have now nearly answered the question for myself, so thanks for the prompt!]

Comment by Jack R (Jack Ryan) on When is positive self-talk not worth it due to self-delusion? · 2022-04-21T10:07:11.611Z · LW · GW

What’s a motivation stack? Could you give an example?

Comment by Jack R (Jack Ryan) on When is positive self-talk not worth it due to self-delusion? · 2022-04-21T03:12:03.401Z · LW · GW

A partial answer: 

  • Your emotions are more negative than warranted if, for instance, it's often the case that your anxiety is strong enough that it feels like you might die and you don't in fact die.
  • Your emotions are more positive than warranted if it's often the case that, for instance, you are excited about getting job offers "more than" you tend to actually get job offers.

These answers still have ambiguity though, in "more than" and in how many Bayes points your anxiety as a predictor of death actually gets.

Comment by Jack R (Jack Ryan) on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-14T23:25:41.434Z · LW · GW

I'll add that when I asked John Wentworth why he was IDA-bearish, he mentioned the inefficiency of bureaucracies and told me to read the following post to learn why interfaces and coordination are hard: Interfaces as a Scarce Resource.

Comment by Jack R (Jack Ryan) on Takeoff speeds have a huge effect on what it means to work on AI x-risk · 2022-04-14T08:20:41.681Z · LW · GW

while in the slow takeoff world your choices about research projects are closely related to your sociological predictions about what things will be obvious to whom when.

Example?

Comment by Jack R (Jack Ryan) on [Link] A minimal viable product for alignment · 2022-04-13T22:26:35.269Z · LW · GW

I found this comment pretty convincing. Alignment has been compared to philosophy, which seems to be at the opposite end of "the fuzziness spectrum" from math and physics. And it does seem like concept fuzziness would make evaluation harder.

I'll note though that ARC's approach to alignment seems more math-problem-flavored than yours, which might be a source of disagreement between you two (since maybe you conceptualize what it means to work on alignment differently).

Comment by Jack R (Jack Ryan) on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-13T09:24:12.519Z · LW · GW

MIRI doesn't have good reasons to support the claim of almost certain doom


I recently asked Eliezer why he didn't suspect ELK to be helpful, and it seemed that one of his major reasons was that Paul was "wrongly" excited about IDA. It seems that at this point in time, neither Paul nor Eliezer is excited about IDA, but Eliezer got to that conclusion first. The IDA-bearishness may be for fundamentally different reasons, though -- I haven't tried to figure that out yet.

Have you been taking this into account re: your ELK bullishness? Obviously, this sort of point should be ignored in favor of object-level arguments about ELK, but to be honest, ELK is taking me a while to digest, so for me that has to wait.

Comment by Jack R (Jack Ryan) on A broad basin of attraction around human values? · 2022-04-13T01:29:28.085Z · LW · GW

I think Nate Soares has beliefs about question 1.  A few weeks ago, we were discussing a question that seems analogous to me -- "does moral deliberation converge, for different ways of doing moral deliberation? E.g. is there a unique human CEV?" -- and he said he believes the answer is "yes." I didn't get the chance to ask him why, though.

Thinking about it myself for a few minutes, it does feel like all of your examples for how the overseer could have distorted values have a true "wrongness" about them that can be verified against reality -- this makes me feel optimistic that there is a basin of human values, and that "interacting with reality" broadly construed is what draws you in.

Comment by Jack R (Jack Ryan) on Worse than an unaligned AGI · 2022-04-11T01:53:11.322Z · LW · GW

An example is an AI making the world as awful as possible, e.g. by creating dolorium. There is a separate question about how likely this is, hopefully very unlikely.

Comment by Jack R (Jack Ryan) on What an actually pessimistic containment strategy looks like · 2022-04-10T23:11:10.246Z · LW · GW

I mean to argue against your meta-strategy, which relies on obtaining relevant understanding of deception or alignment as we get larger models and see how they work. I agree that we will obtain some understanding, but it seems like we shouldn't expect that understanding to be very close to sufficient for making AI go well (see my previous argument), and hence this doesn't seem like a very promising meta-strategy.

Comment by Jack R (Jack Ryan) on What an actually pessimistic containment strategy looks like · 2022-04-10T20:26:26.232Z · LW · GW

[ETA: I'm not that sure of the below argument]

Thanks for the example, but it still seems to me that this sort of thing won't work for advanced AI. If you are familiar with the ELK report, you should be able to see why. [Spoiler below]

Even if you manage to learn the properties of what looks like deception to humans, and instill those properties into a loss function, then it seems like you are still more likely to get a system that tells you what humans think the truth is, avoiding what humans would be able to notice as deception, rather than telling you what the truth actually seems to be (given what it knows). The reason is that, as AI develops, programs that are capable of the former thing have constant complexity, but programs that are capable of the latter thing have complexity that grows with the complexity of the AI's models of the world, and so you should expect that the former is favored by SGD. See this part of the ELK document for a more detailed description of this failure mode.


Comment by Jack R (Jack Ryan) on Worse than an unaligned AGI · 2022-04-10T05:14:06.403Z · LW · GW

Isn’t the worst case one in which the AI optimizes exactly against human values?

Comment by Jack R (Jack Ryan) on What an actually pessimistic containment strategy looks like · 2022-04-10T03:56:44.269Z · LW · GW

Maybe Carl meant to link this one

Comment by Jack R (Jack Ryan) on What an actually pessimistic containment strategy looks like · 2022-04-10T03:52:23.725Z · LW · GW

it could be that the lack of alignment understanding is an inevitable consequence of our capabilities understanding not being there yet.

Could you say more about this hypothesis? To me, it feels likely that you can get crazy capabilities from a black box that you don't understand and so whose behavior/properties you can't verify to be acceptable. It's not like once we build a deceptive model we will know what deceptive computation looks like and how to disincentivize it (which is one way your nuclear analogy could translate).

It's possible, also, that this is about takeoff speeds, and that you think it's plausible that e.g. we can disincentivize deception by punishing the negative consequences it entails (if FOOM, we can't, since we'd be dead).

Comment by Jack R (Jack Ryan) on We Are Conjecture, A New Alignment Research Startup · 2022-04-08T17:27:04.754Z · LW · GW

One thing is that it seems like they are trying to build some of the world’s largest language models (“state of the art models”)

Comment by Jack R (Jack Ryan) on Don't die with dignity; instead play to your outs · 2022-04-07T03:48:40.980Z · LW · GW

Hah! Thanks

Comment by Jack R (Jack Ryan) on Don't die with dignity; instead play to your outs · 2022-04-07T02:52:55.621Z · LW · GW

It seems to me that it would be better to view the question as "is this frame the best one for person X?" rather than "is this frame the best one?"

Though, I haven't fully read either of your posts, so excuse any mistakes/confusion.

Comment by Jack R (Jack Ryan) on You get one story detail · 2022-04-05T07:15:49.544Z · LW · GW

Do you have an example of a set of 1-detail stories you now might tell (composed with “AND”)?

Comment by Jack R (Jack Ryan) on Do a cost-benefit analysis of your technology usage · 2022-04-04T06:52:03.354Z · LW · GW

Ah — sorry if I missed that in the post, only skimmed

Comment by Jack R (Jack Ryan) on Do a cost-benefit analysis of your technology usage · 2022-03-29T08:39:34.073Z · LW · GW

Random tip: If you want to restrict apps etc. on your iPhone without knowing the Screen Time PIN yourself, I recommend the following simple system, which allows you to not know the password but still unlock restrictions easily when needed:

  1. Ask a friend to write a 4-digit PIN in a small notebook (which is dedicated only to this PIN)
  2. Ask them to punch the PIN into your phone when setting the Screen Time passcode
  3. Keep the notebook in your backpack and never look inside it, ever
  4. If you ever need your phone unlocked, you can walk up to someone, even a stranger, show them the notebook, and ask them to punch the PIN into your phone

The system works because having a dedicated physical object that you commit to never looking inside is surprisingly doable, for some reason.

Comment by Jack Ryan on [deleted post] 2022-03-29T03:20:11.101Z

Thanks for this list!

Though the list still doesn't strike me as very novel -- it feels like most of these conditions are ones we've been shooting for anyway.

E.g. conditions 1 and 2 are about selecting for behavior we approve of, and condition 5 is just inspection with interpretability tools.

If you feel you have traction on conditions 3 and 4, that does seem novel (side note: condition 4 seems to be a subset of condition 3). I feel skeptical, however, since value extrapolation seems about as hard a problem as understanding machine generalization in general, and since "the way a thing behaves in a large class of cases" seems like such a complicated concept that you won't be able to have confident beliefs about it or understand it. I don't have a concrete argument for this, though.

Anyways, thanks for responding, and if you have any thoughts about the tractability of conditions 3/4, I'm pretty curious.

Comment by Jack R (Jack Ryan) on What are the top 1-10 posts / sequences / articles / etc. that you've found most useful for yourself for becoming "less wrong"? · 2022-03-28T10:14:37.905Z · LW · GW

I (with some help) compiled some of the best rationality essays here.