Work with me on agent foundations: independent fellowship

post by Alex_Altair · 2024-09-21T13:59:16.706Z · LW · GW · 5 comments

Contents

  What the role might be like
  The research problems
  Application process

Summary: I am an independent researcher in agent foundations, and I've recently received an LTFF grant to fund someone to do research with me. This is a rolling application; I'll close it whenever I'm no longer interested in taking on another person. Edit: this application is now closed! Thanks to everyone who helped or passed the word along. I'm really excited about the applications that I received.

If you're not familiar with agent foundations, you can read about my views in this post [LW · GW].

What the role might be like

This role is extremely flexible. Depending on who you are, it could end up resembling an internship, a research assistant position, a postdoc, or even you acting as a mentor/advisor to me. Below, I've listed out the parameters of the fellowship that I am using as a baseline for what it could be. All of these parameters are negotiable!

What this role ends up looking like mostly depends on your experience level relative to mine. Though I now do research, I haven't gone through the typical academic path. I'm in my mid-thirties and have a proportional amount of life and career experience, but in terms of mathematics, I consider myself the equivalent of a second-year grad student. So I'm comfortable leading this project and am confident in my research taste, but you might know more math than me.

The research problems

Like all researchers in agent foundations, I find it quite difficult to concisely communicate what my research is about. Probably the best way to tell if you will be interested in my research problems is to read [LW · GW] other things [LW · GW] I've written [LW · GW], and then have a conversation with me about it.

All my research is purely mathematical,[2] rather than experimental or empirical. None of it involves machine learning per se, but the theorems should apply to ML systems.

The domains of math that I've been focusing on include: probability theory, stochastic processes, measure theory, dynamical systems, ergodic theory, information theory, algorithmic information theory. Things that I'm interested in but not competent in include: category theory, computational mechanics, abstract algebra, reinforcement learning theory.

Here are some more concrete examples of projects you could work on.

Application process

If you're interested, fill out this application form! You're also welcome to message me with any questions. After that, the rest of the application steps are:

After this, we should have a pretty good sense of whether we would work well together, and I'll make a decision about whether to offer you the 3-month fellowship (or whatever else we may have negotiated).

  1. ^

    Why not 40h/week? Partly because I want to use the grant money well. I also think that marginal productivity on a big abstract problem starts to drop around 20h/week. (I get around this by having multiple projects at a time, so that may be an option for you too.) Happy to negotiate on this as well.

  2. ^

    More specifically, the desired results are mathematical. The ideas are almost all "pre-mathematical", in that the first part of the work will be to translate them into the appropriate formalisms.

  3. ^

    A. A. Brudno, "Entropy and the complexity of the trajectories of a dynamical system" (1983)
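
    Roughly, the result in question (my informal gloss, not the paper's exact statement): for an ergodic dynamical system, the per-symbol Kolmogorov complexity of almost every trajectory equals the metric (Kolmogorov–Sinai) entropy,

    $$\lim_{n \to \infty} \frac{K(x_1 x_2 \cdots x_n)}{n} = h_\mu(T) \quad \text{for } \mu\text{-almost every } x,$$

    where $x_1 x_2 \cdots$ is the symbolic trajectory of $x$ under a generating partition, $K$ is Kolmogorov complexity, and $h_\mu(T)$ is the entropy of the ergodic system $(X, T, \mu)$.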

  4. ^

    The canonical reference is Wonham, W. M., "Towards an Abstract Internal Model Principle" (1976), but a more pedagogical presentation appears in section 1.5 of Cai & Wonham, Supervisory Control of Discrete-Event Systems (2019).

5 comments


comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-09-21T16:55:03.737Z · LW(p) · GW(p)

Not my area of research, but I would like to make an endorsement of Alex from knowing him socially.

Some researchers can be rather socially prickly and harsh, but Alex is not. He's an affable fellow. So if you are someone who needs non-prickly colleagues to be comfortable, you will likely enjoy working with Alex.

Example of what I mean by prickly: https://www.lesswrong.com/posts/BGLu3iCGjjcSaeeBG/related-discussion-from-thomas-kwa-s-miri-research [LW · GW]

Replies from: Alex_Altair
comment by Alex_Altair · 2024-09-21T17:18:00.210Z · LW(p) · GW(p)

<3!

comment by Cole Wyeth (Amyr) · 2024-09-21T14:43:59.575Z · LW(p) · GW(p)

I have been thinking about extending the AIXI framework from reward to more general utility functions, and working out some of the math; I'd be happy to chat if that's something you're interested in. I am already supported by the LTFF (for work on embedded agency), so I can't apply to the job offer currently. But maybe I can suggest some independent researchers who might be interested.
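
To give a rough sense of the kind of change I mean (an illustrative sketch, not the exact formulation I've been working out): standard AIXI picks actions by maximizing a Solomonoff-mixture expectation of summed rewards,

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_t + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)},$$

where $U$ is a universal monotone Turing machine and $\ell(q)$ is the length of program $q$. The generalization would replace the reward sum $r_t + \cdots + r_m$ with a utility function $u(a_1 o_1 \ldots a_m o_m)$ over the whole interaction history, so observations no longer need to carry a built-in reward signal.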

Replies from: Alex_Altair
comment by Alex_Altair · 2024-09-27T22:46:16.457Z · LW(p) · GW(p)

Nice! Yeah I'd be happy to chat about that, and also happy to get referrals of any other researchers who might be interested in receiving this funding to work on it.

Replies from: Amyr
comment by Cole Wyeth (Amyr) · 2024-09-28T01:15:53.016Z · LW(p) · GW(p)

Cool, I'll DM you.