Advice for new alignment people: Info Max

post by Jonas Hallgren · 2023-05-30T15:42:20.142Z · 4 comments

Contents

  Info Maxing
  Personal experience of info maxing
  Summary
4 comments

Epistemic status: This text is meant for people in a similar position to mine who are deciding how they can help ensure that AI doesn't kill us all.

I hope this perspective gives you some new ways of thinking about your role in alignment, though it should be treated as just that: another perspective on the problem.

As a young math and information nerd, I have a couple of ways I could change the world for the better. I believe that AI Safety is the most important thing I could be working on, and the question then inevitably becomes what I should do within AI Safety. The most obvious answer is technical research, but that might not be the best way to have a high impact.

So I did the natural thing and asked some cool people what they thought. 80k's advice is to work on a problem area and develop aptitudes, but that is very unspecific. The standard advice is true, but it can be amplified like a *cough* motherf*cker.

When asking Connor Leahy from Conjecture what the hell I should do with my life, I got the answer that I should go to the Bay and create a tech company. This might not sound like something that would come out of the mouth of someone with short timelines and a high probability of doom, yet it did.

Connor told me that the idea behind this was that there will be pivotal points in the development of AI where we will need competent and trustworthy people to lead projects. People who are both trustworthy and competent are a very valuable resource in this world, yet becoming one might seem very distant to someone just starting their career.

For me, becoming trustworthy became the new goal because it made sense when I thought about fat-tailed impact. However, with my short timelines, I've realised that I can't just take an ordinary 10-year route and do some cool research to become respected and trustworthy, as I just won't have the time.

So what am I doing instead? My current strategy is asking myself how I can become competent and respected in the quickest way possible. In my head, I refer to it as "info maxing."

Info Maxing

You might now wonder, "How do you define info maxing?" Info maxing, in my head, is the strategic acquisition of a wide array of skills and experiences to enhance one's overall competency or, in short, to maximise for valuable mental models. But to what end?

In my opinion, based on Connor's advice, the ultimate objective of info maxing, especially in the context of AI safety, is to position oneself at the helm during the crucial junctures of AI development, the times when competent and trustworthy leadership will matter most. You're info maxing to become the 'captain' when the ship is navigating the stormiest seas.

But let's delve deeper into what this might mean in AI Safety. 

The general idea is that we're going to need trustworthy and competent people to help out with projects when the going gets rough. The specifics might differ depending on your model of the world, but if you want to position yourself so that you can be the "captain" of a ship, the following might be useful:

Info maxing for Technical Proficiency: One can't navigate the tumultuous waters of AI without a robust technical understanding. It's crucial to familiarise yourself with the ins and outs of AI and machine learning. This doesn't mean you need to be the one developing groundbreaking new algorithms, but you do need to understand the underlying technology well enough to make informed decisions and to communicate effectively with those who do.

Info maxing for Leadership: A leader is more than just a decision-maker. A great leader inspires trust and confidence in their team. In the AI safety world, such trust is particularly critical: you'll be guiding the team through potentially high-stakes situations, and you'll also have to be trusted by external parties to fulfil the duties placed on you. This is where strategic thinking, communication, and emotional intelligence become indispensable.

Info maxing for Vision: Finally, it's important to develop a clear vision of what 'good' looks like in the context of AI safety. This vision will guide your actions and decisions and give you the clarity and resolve you'll need when facing difficult choices.

This list isn't sufficient to capture all of the skills you'll need, but the important takeaway is that pure technical research may not be the best way for you to make an impact on the world. Vision and leadership might be just as important.

Personal experience of info maxing

(after 9 months of having it in the back of my head)

It probably makes sense to show how someone is putting this into practice; I hope you can take some wisdom from it.

How I do this in practice is that I work within an agile framework where I start projects and iterate on why things fail. The things I'm currently doing as part of my info maxing are the following:

Starting organisations & projects: I've tried to be very proactive about starting new things, as it seems like one of the best ways to get real-world feedback loops. The average entrepreneur starts around 10 projects before one goes net-positive, and I'm expecting something similar for my own projects. I'm still in the exploration phase of starting things up, but once I find something that works well, I will double down on it.

Trying to work out a research agenda: I've been working on understanding Active Inference, Boundaries, Natural Abstractions & Garrabrant's work to come up with a nice definition of Dynamic Membranes (Dynamic Boundaries) to predict agency within neural networks. It may or may not lead somewhere, but I feel that I've become a lot more experienced on the research front since I started.

A side note: it will probably be better for most people to work on ML engineering if they don't find themselves super excited about theoretical stuff. I can become as giddy as a kid when I hear about some cool new frame on agent foundations, so I felt I should go down this route more.

Testing personal boundaries & working conditions: I've been trying different ways of working and living. I've realised I can't have more than two major projects running simultaneously whilst staying productive and mentally sane. I also need a strict schedule that separates deep work from shallow work.

I've looked into a lot of psychological hacking and biohacking, and I can happily work twice the amount I could a year ago just from optimising my workflow. This was mostly done by increasing my energy levels through good sleep, energy-level-based scheduling, a switch to low carbs until 4 pm, caffeine scheduling, yoga nidra, exercise, and project minimalism (implementing Huberman with some multimodal agents of mind on the side).

I believe that the compounding effects of this experimentation will most likely be worth quite a lot, as I now know where my work boundaries are and I've doubled the amount of deep work I can do in a week.

Reading & implementing book knowledge: Whilst doing the above, I've been reading non-fiction books (now at 3x the speed; hats off to Nonlinear for the advice) related to the work and self-improvement I've been doing. I've been trying to implement the authors' knowledge in my daily life so that I get short feedback loops between reading and doing.

Iterate & steal: In short, I've tried to throw myself out into the wide-open world of feedback loops, and I've tried to steal as many applicable models from other people as possible. Regarding projects, my current thinking is: the more personal responsibility I need to take for a project, the more I grow used to responsibility. As John Wentworth says somewhere, it's helpful to practice working on a hard problem for a while if you want to work on alignment. And as I say, it is even more useful to just work on alignment.

If you want to become the captain of a ship, build a boat and sail it. 

With some caveats, of course: you don't want to go into water that's too deep, because then you might die (burnout), and it also makes sense to learn from experienced captains (read books, join boot camps & courses), etc. The important thing is that you do these things whilst also doing the main activity.

Sail, make sure that you sail.

Summary

In short, think about building the skills that will let you fill the bottleneck that will be needed the most. If your fit allows it, try to become someone who can shoulder responsibility, for people like that will be bloody needed in the future.

The way to become that person is most likely info maxing, which is done by gathering useful models of how to act in the world. The real world provides the best feedback loops, so if you want to become a captain, make sure that you practice sailing a boat.

4 comments


comment by metachirality · 2023-05-30T23:36:25.417Z

What other advice/readings do you have for optimizing your life/winning/whatever?

comment by Jonas Hallgren · 2023-05-31T06:22:26.978Z

Since you asked so nicely, I can give you two other models. 

1. Meditation is like slow recursive self-improvement and reprogramming of your mind. It gives focus & mental health benefits that are hard to get elsewhere. If you want to accelerate your growth, I think it's really good. Reading A Mechanistic Model of Meditation and then doing the stages in the book The Mind Illuminated will give you this (at least, this is how it has been for me).
2. Try to be weird and useful. If you have a weird background, you will catch ideas that might fall through the cracks for other people. Yet to make those ideas worth something, you have to be able to actually act on them, which means you need to know how to, for example, communicate. So try to find the Pareto optimum between weird and useful by following & doing stuff you find interesting, but also valuable and challenging.

(Read a fuckton of non-fiction books as well, if that isn't obvious. Just look up 30 different top-10 lists and you will have a good range to choose from.)

comment by metachirality · 2023-06-01T06:38:52.983Z

I have tried meditation a little bit although not very seriously. Everything I've heard about it makes me think it would be a good idea to do it more seriously.

Not sure how to be weird without being unuseful. What does a weird but useful background look like?

Also I've already been trying to read a lot but still somewhat dissatisfied with my pace. You mentioned you could read at 3x your previous speed. How did you do that?

comment by Jonas Hallgren · 2023-06-01T08:28:43.991Z

I actually read fewer books than I used to; the 3x thing was that I listen to audiobooks at 3x speed, so I read less non-fiction but at a faster pace.

Also, weird but useful in my head is, for example, looking into population dynamics to understand alignment failures. When does ecology predict that mode collapse will happen inside of large language models? Understanding these areas and writing about them is weird, but it could also be a useful bet for at least someone to take.

However, this also depends on how saturated the normal stuff is. I would recommend trying to understand the problems and current approaches really well and then coming up with ways of tackling them. To get the bits of information on how to tackle them, you might want to check out weirder fields, since those bits aren't already in the common pool of "alignment information", if that makes sense.