Posts

MDPs and the Bellman Equation, Intuitively Explained 2022-12-27T05:50:23.633Z
Jack O'Brien's Shortform 2022-12-01T08:58:32.177Z

Comments

Comment by Jack O'Brien (jack-o-brien) on Jack O'Brien's Shortform · 2024-09-15T05:47:41.112Z · LW · GW

Ummmmm, yeah, what have I done so far? I didn't really get any solid work done this week either. I've decided to extend the project by another two weeks with the other two people involved - we have all been pretty preoccupied with life. Last Sunday night I didn't manage a solid hour of work, but I did summarise the concept of selection theorems and think about the agent type signature - a concept I'll be referring to throughout the post, and a super fundamental one. Tonight I will hopefully actually meet with my group. I want to do about half an hour of work before, and a little bit after too. This week I want to summarise the good regulator theorem as well as Turner's post on power-seeking.

Comment by Jack O'Brien (jack-o-brien) on Jack O'Brien's Shortform · 2024-09-08T09:17:42.595Z · LW · GW

Well, I haven't got much done in the last two weeks. Life has gotten in the way, and at the times when I thought I actually had the time and headspace to work on the project, things happened: my shoulder got injured playing sport, and my laptop mysteriously died.

But I have managed to create a github repo, and read the original posts on selection theorems. My list of selection theorems to summarize has grown. Check out the github page: https://github.com/jack-obrien/selection-theorems-review

Tonight I will try to do at least an hour of solid work on it. I want to summarize the idea of selection theorems, summarize the good regulator theorem, and start reading the next post (probably Turner's post on power-seeking).

Comment by Jack O'Brien (jack-o-brien) on Jack O'Brien's Shortform · 2024-08-25T03:01:15.019Z · LW · GW

**Progress Report: AI Safety Fundamentals Project**

This is a public space for me to post updates on my AI Safety Fundamentals project. The project will take 4 weeks. My goal is to stay lean and limit my scope so I can actually finish on time. I aim to update this post at least once per week, maybe more often.

Overall, I want to work on agent foundations and the theory behind AI alignment agendas. One stepping stone for this is selection theorems: a research program to find justifications that a given training process will result in a given agent property.

My plan for the AGISF project: a literature review on selection theorems. Take a whole load of concepts / blog posts, read them, and riff on them if I feel like it. At the least, write a one-paragraph summary of each post I'm interested in. List of posts:

  • John's original posts on selection theorems, and Adam Khoja's distillation of them.
  • Scott Garrabrant's stuff on geometric rationality.
  • Coherence theorems for utility theory.
  • A shallow dive into evolutionary biology, with an explanation of the Price and Fisher equations.
  • Maybe some stuff by Thane Ruthenis.
  • Some content from Jaynes' Probability Theory on Bayesian vs. frequentist probability.
  • Power-seeking is instrumentally convergent in MDPs.
  • ??? More examples to come once I read John's original posts.

TODO:

  • Make an initial lesswrong progress report.
  • Make a list of things to read.
  • Make a git repo on my PC with markdown and MathJax support. In the initial document, populate it with the list of things to read. For each thing I read, remove it from the TODO list and put its summary in the main body of the blog post. When I'm done, any posts still left on the TODO list will be formatted and added as an 'additional reading' section.
Comment by Jack O'Brien (jack-o-brien) on Lucie Philippon's Shortform · 2023-07-09T06:43:06.520Z · LW · GW

I think this is a good thing to do! I recommend looking up things like "reflections on my LTFF upskilling grant" for similar pieces from lesser-known researchers and aspiring researchers.

Comment by Jack O'Brien (jack-o-brien) on Ten Levels of AI Alignment Difficulty · 2023-07-04T06:22:54.093Z · LW · GW

Hey, thanks for writing this up! I thought you communicated the key details excellently - in particular, the three camps of varying alignment difficulty worlds, and the variation within those camps. I also think you included just enough caveats and extra detail to give readers more to think about, without washing out the key ideas of the post.

Just wanted to say thanks, this post makes a great reference for me to link to.

Comment by Jack O'Brien (jack-o-brien) on MDPs and the Bellman Equation, Intuitively Explained · 2023-06-23T08:39:52.814Z · LW · GW

Yep that's right, fixed :)

Comment by Jack O'Brien (jack-o-brien) on Introduction to Cartesian Frames · 2023-06-09T02:32:56.034Z · LW · GW

Definition: .


A useful alternate definition of this is:

Where  refers to . Proof:

Comment by Jack O'Brien (jack-o-brien) on You are probably not a good alignment researcher, and other blatant lies · 2023-02-03T14:59:58.370Z · LW · GW

This felt great to read. Thanks for that :)

Comment by Jack O'Brien (jack-o-brien) on Accurate Models of AI Risk Are Hyperexistential Exfohazards · 2022-12-26T05:38:21.199Z · LW · GW

Yep, fair point. In my original comment I seemed to forget about the problem of AIs goodharting our long reflection. I probably agree now that doing a pivotal act into a long reflection is approximately as difficult as solving alignment.

(Side-note about how my brain works: I notice that when I think through all the argumentative steps deliberately, I do believe this statement: "Making an AI which helps humans clarify their values is approximately as hard as making an AI care about any simple, specific thing." However, it does not come to mind automatically when I'm reasoning about alignment. Two possible fixes:

  1. Think more concretely about Retargeting the Search when I think about solving alignment. This makes the problems seem similar in difficulty.
  2. Meditate on just how hard it is to target an AI at something. Sometimes I forget how Goodhartable any objective is.)
Comment by Jack O'Brien (jack-o-brien) on Accurate Models of AI Risk Are Hyperexistential Exfohazards · 2022-12-26T02:34:28.078Z · LW · GW

This post was incredibly interesting and useful to me. I would strong-upvote it, but I don't think this post should be promoted to more people. I've been thinking about the question of "who are we aligning AI to" for the past two months.

I really liked your criticism of the Long Reflection because it is refreshingly different from e.g. MacAskill and Ord's writing on the long reflection. I'm still not convinced that we can't avoid all of the hellish things you mentioned, like synthetic superstimuli cults and sub-AGI drones. Why can't we just have a simple process of open dialogue with values of truth, individual agency during the reflection, and some clearly defined contract at the end of the long reflection to, like, take power away from the AGI drones?

Comment by Jack O'Brien (jack-o-brien) on Ulisse Mini's Shortform · 2022-12-02T03:11:35.950Z · LW · GW

3 is my main reason for wanting to learn more pure math, but I use 1 and 2 to help motivate me.

Comment by Jack O'Brien (jack-o-brien) on Ulisse Mini's Shortform · 2022-12-01T09:02:42.551Z · LW · GW

Which of these books are you most excited about, and why? I also want to do more fun math reading.

Comment by Jack O'Brien (jack-o-brien) on Jack O'Brien's Shortform · 2022-12-01T08:58:32.415Z · LW · GW

Let's be optimistic and prove that an agentic AI will be beneficial for the long-term future of humanity. We probably need to prove these 3 premises:

Premise 1: Training story X will create an AI model which approximates agent formalism A.
Premise 2: Agent formalism A is computable and has a set of alignment properties P.
Premise 3: An AI with the set of alignment properties P will be beneficial for the long-term future.

Aaand so far I'm not happy with our answers to any of these.
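For my own clarity, the skeleton of this argument can be sketched with each premise as an opaque placeholder proposition (all the names here are hypothetical stand-ins, not real formalisms); note the argument only goes through given an explicit bridging step that chains the premises together:

```lean
-- Each premise as an opaque proposition (placeholder names):
-- TrainsToA: training story X yields a model approximating formalism A.
-- AHasP: formalism A is computable and has alignment properties P.
-- PImpliesBeneficial: an AI with properties P is beneficial long-term.
variable (TrainsToA AHasP PImpliesBeneficial Beneficial : Prop)

-- Given the three premises and a bridging step linking them,
-- the conclusion follows immediately.
example
    (p1 : TrainsToA) (p2 : AHasP) (p3 : PImpliesBeneficial)
    (bridge : TrainsToA ∧ AHasP ∧ PImpliesBeneficial → Beneficial) :
    Beneficial :=
  bridge ⟨p1, p2, p3⟩
```

The hard part, of course, is that all the real content lives inside the premises and the bridge, which is exactly where I'm not happy with our answers.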

Comment by jack-o-brien on [deleted post] 2022-10-05T02:31:41.085Z

Fantastic! Here's my summary:

Premises:

  1. A recursively self improving singleton is the most likely AI scenario.
  2. To mitigate AI risk, building a fully aligned singleton on the first try is the easiest solution. This is easier than other approaches which require solving coordination.
  3. By default, AI will become misaligned when it generalises away from human capabilities. We must apply a security mindset and be doubtful of most claims that an AI will be aligned when it generalises.
  4. We should prioritise research which solves the hard part of the alignment problem rather than handwavy, tractable research.
  5. Many current tractable alignment solutions handwave away the hard parts of the problem (and possibly make the problem worse in expectation).
  6. More formal / rigorous research can produce more robust guarantees about AI safety than other research.

Conclusion: We should prioritize working out what we even want from an aligned AI, formalize these ideas into concrete desiderata, then build AI systems which meet those desiderata.

Comment by Jack O'Brien (jack-o-brien) on The Big Picture Of Alignment (Talk Part 1) · 2022-08-23T02:15:43.816Z · LW · GW

Did you get around to writing a longer answer to the question, "How do humans do anything in practice if the search space is vast?" I'd be curious to see your thoughts.

My answer to this question is that: 
(a) Most day-to-day problems can be solved from far away using a low-dimensional space containing natural abstractions. For example, a manager at a company can give their team verbal instructions without describing the detailed sequence of muscle movements needed.
(b) For unsolved problems in science, we get many tries at the problem. So, we can use the scientific method to design many experiments which give us enough bits to locate the solution. For example, a drug discovery team can try thousands of compounds in their search for a new drug. The drug discovery team gets to test each compound on the condition they're trying to treat - so, they can get many bits about which compounds could be effective.
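To make (b) concrete, here's a toy calculation (the numbers are illustrative, my own, not from the talk): if each experiment reliably yields about one bit of evidence, locating one effective compound among N candidates takes roughly log2(N) experiments, not N.

```python
import math

def experiments_needed(n_candidates: int, bits_per_experiment: float = 1.0) -> int:
    """Experiments needed to single out one candidate, if each experiment
    yields `bits_per_experiment` bits of evidence on average."""
    return math.ceil(math.log2(n_candidates) / bits_per_experiment)

# Searching 1000 compounds with reliable yes/no assays:
print(experiments_needed(1000))  # 10, since 2**10 = 1024 >= 1000
```

So a drug discovery team facing a thousand candidates needs on the order of ten good binary experiments, which is why iterated experimentation beats one-shot search through a vast space.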

Comment by Jack O'Brien (jack-o-brien) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T18:30:40.982Z · LW · GW

I have a few questions about corrigibility. First, I will tentatively define corrigibility as the property of an agent that is willing to let humans shut it off or change its goals, without manipulating humans. I have seen claims that corrigibility can lead to VNM-incoherence (i.e., an agent that can be Dutch-booked / money-pumped). Has this result been proven in general?
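A minimal sketch of what "money-pumped" means here (my own toy example, not any particular formal result): an agent with cyclic preferences A ≺ B ≺ C ≺ A will pay a small fee for each "upgrade" and can be traded around the cycle indefinitely, losing money while ending up where it started.

```python
# Toy money-pump: an agent with an intransitive preference cycle
# A < B < C < A pays `fee` for each "upgrade" trade.
def simulate_money_pump(cycles: int, fee: float = 1.0):
    """Run `cycles` full trips around the cycle; return final item and wealth."""
    prefers = {"A": "B", "B": "C", "C": "A"}  # cyclic preferences
    holding, wealth = "A", 0.0
    for _ in range(3 * cycles):        # three trades per full cycle
        holding = prefers[holding]     # agent "upgrades" to the preferred item
        wealth -= fee                  # ...and pays for the privilege
    return holding, wealth

item, wealth = simulate_money_pump(cycles=10)
print(item, wealth)  # back to "A", 30.0 poorer
```

The VNM-incoherence worry is that a corrigible agent's preferences (e.g. indifference to shutdown) may be forced into this kind of exploitable shape.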

Also, what is the current state of corrigibility research? If the above incoherence result turns out to be correct and corrigibility leads to incoherence, are there any other tractable theoretical directions we could take towards corrigibility? 

Are any people trying to create corrigible agents in practice? (I suspect it is unwise to try this, as any poorly understood corrigibility we manage to implement in practice is liable to be wiped away if a sharp left turn occurs).

Comment by Jack O'Brien (jack-o-brien) on Distilled - AGI Safety from First Principles · 2022-07-04T05:18:04.721Z · LW · GW

Excellent summary, Harrison! I especially enjoyed your use of pillar diagrams to break up streams of text. In general, I found your post very approachable and readable.

As for Pillar 2: I find the description of goals as "generalised concepts" still pretty confusing after reading your summary. I don't think this example of a generalised concept counts as a goal: "things that are perfectly round are objects called spheres; 6-sided boxes are objects called cubes". This statement is a fact, but a goal is a normative preference about the world (cf. the is-ought distinction).
Also, I think the 'coherence' trait could do with slightly more deconfusion - you could phrase it as "goals are internally consistent and stable over time".

I think the most tenuous pillar in Ngo's argument is Pillar 2: that AI will be agentic with large-scale goals. It's plausible that the economic incentives for developing a CEO-style AI with advanced planning capabilities will not be as strong as stated. I agree that there is a strong economic incentive for CEO-style AI which can improve business decision-making. However, I'm not convinced that creating an agentic AI with large-scale goals is the best way to do this. We don't have enough information about which kinds of AI are most cost-effective for doing business decision-making. For example, the AI field may develop viable models that don't display these pesky agentic tendencies. 
(On the other hand, it does seem plausible that an agentic AI with large-scale goals is a very parsimonious/natural/easily-found-by-SGD model for such business decision-making tasks.)

Comment by Jack O'Brien (jack-o-brien) on An Inside View of AI Alignment · 2022-05-11T14:01:08.020Z · LW · GW

I'm excited to read your work! I would also like to post my inside view on LessWrong later, once it is more developed.

Comment by Jack O'Brien (jack-o-brien) on More Is Different for AI · 2022-02-16T12:38:12.922Z · LW · GW

I really like this post, you explained your purpose in writing the sequence very clearly. Thanks also for writing about how your beliefs updated over the process of writing this.