Posts

Predictions for Neuralink's Friday Announcement 2020-08-26T06:33:27.249Z
Michaël Trazzi's Shortform 2020-05-08T20:45:07.360Z
Jukebox: how to update from AI imitating humans? 2020-04-30T20:50:13.844Z
The Epistemology of AI risk 2020-01-27T23:33:28.667Z
An Increasingly Manipulative Newsfeed 2019-07-01T15:26:42.566Z
Problems with Counterfactual Oracles 2019-06-11T18:10:05.223Z
Stories of Continuous Deception 2019-05-31T14:31:47.486Z
Trade-off in AI Capability Concealment 2019-05-23T19:25:32.664Z
A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI 2019-05-21T19:58:42.258Z
Considerateness in OpenAI LP Debate 2019-03-12T19:05:27.643Z
Treacherous Turn, Simulations and Brain-Computer Interfaces 2019-02-25T15:49:44.375Z
Greatest Lower Bound for AGI 2019-02-05T20:17:24.675Z
Open Thread October 2018 2018-10-02T18:01:05.416Z
Book Review: AI Safety and Security 2018-08-21T10:23:24.165Z
Building Safer AGI by introducing Artificial Stupidity 2018-08-14T15:54:33.832Z
Human-Aligned AI Summer School: A Summary 2018-08-11T08:11:00.789Z
A Gym Gridworld Environment for the Treacherous Turn 2018-07-28T21:27:34.487Z
The Multiple Names of Beneficial AI 2018-05-11T11:49:51.897Z
Talking about AI Safety with Hikers 2018-05-10T06:38:26.620Z
Applied Coalition Formation 2018-05-09T07:07:42.014Z
Better Decisions at the Supermarket 2018-05-07T22:32:00.723Z
Beliefs: A Structural Change 2018-05-06T13:40:30.262Z
Are you Living in a Me-Simulation? 2018-05-03T22:02:03.967Z
A Logician, an Entrepreneur, and a Hacker, discussing Intelligence 2018-05-01T20:45:58.143Z
Should an AGI build a telescope to spot intergalactic Segways? 2018-04-28T21:55:15.664Z

Comments

Comment by mtrazzi on Excusing a Failure to Adjust · 2020-08-26T16:05:04.365Z · LW · GW

More generally, there's a difference between things being true and being useful. Believing that sometimes you should not update isn't a really useful habit as it forces the rationalizations you mentioned.

Another example is believing "willpower is a limited quantity" vs. "it's a muscle and the more I use it the stronger I get". The first belief will push you towards not doing anything, which is similar to the default mode of not updating in your story.

Comment by mtrazzi on Predictions for Neuralink's Friday Announcement · 2020-08-26T14:16:34.056Z · LW · GW

Note: I also know very little about this. A few thoughts on your guesses (and my corresponding credences):

--It seems pretty likely that it will be for humans (something that works for mice wouldn't be impressive enough for an announcement). In last year's white paper they were already inserting electrode arrays in the brain. But maybe you mean something that lives inside the brain independently? (90%)

--If by "significative damage" you mean "not altering basic human capabilities" then it sounds plausible. From the white paper they seem to focus on damage to "the blood-brain barrier" and the "brain’s inflammatory response to foreign objects". My intuition is that the brain would react pretty strongly to something inside it for 10 years though. (20%)

--Other BCI companies have done similar demos, so given that the presentation is long this might happen at some point. But Neuralink might also want to show they're different from mainstream companies. (35%)

--Seems plausible. Assigning lower credence because it's really specific. (15%)

Comment by mtrazzi on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-18T13:18:56.105Z · LW · GW

Funnily enough, I wrote a blog post distilling what I learned from reproducing the experiments of that 2018 Nature paper, adding some animations and diagrams. I especially look at the two-step task, the Harlow task (the one with monkeys looking at a screen), and also try to explain some brain things (e.g. how DA interacts with the PFN) at the end.

Comment by mtrazzi on OpenAI announces GPT-3 · 2020-05-29T12:58:43.921Z · LW · GW

An HN comment, unsure about the meta-learning generalization claims, says that OpenAI has a "serious duty [...] to frame their results more carefully"

Comment by mtrazzi on Raemon's Shortform · 2020-05-28T21:13:09.951Z · LW · GW

Re working memory: I never thought of it during conversations, interesting. It seems that we sometimes hold the nodes of the conversation tree so we can go back to them afterward. And maybe if you're introducing new concepts while you're talking, people need to hold those definitions in working memory as well.

Comment by mtrazzi on What would flourishing look like in Conway's Game of Life? · 2020-05-13T09:23:23.729Z · LW · GW

Some friends tried (inconclusively) to apply AlphaZero to a two-player GoL. I can put you in touch if you want their feedback.

Comment by mtrazzi on Michaël Trazzi's Shortform · 2020-05-10T11:15:28.977Z · LW · GW

Thanks for the tutorial on downloading documentation; I've never done that myself, so I'll check it out next time I go offline for a while!

I usually just run python to look at docs: import the library, then call help(lib.module.function). If I don't really know what a class can do, I do dir(class_instance) to find the available methods/attributes, and then call help on those.
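For concreteness, a minimal example of what I mean (json is just a stand-in for whatever library you're actually reading):

```python
import json

help(json.dumps)        # full docstring for a specific function
print(dir(json))        # list the module's available functions/classes/attributes
help(json.JSONEncoder)  # works on classes too: shows their methods and docs
```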

This only works if you know reasonably well where to look. If I were you I would try loading the "read the docs" HTML build offline in your browser (it might be searchable that way), but then you still have a browser open (so you would really need to turn off wifi).

Comment by mtrazzi on How to do remote co-working · 2020-05-08T21:01:29.220Z · LW · GW

Thanks for writing this up!

I've personally tried Complice coworking rooms, where people synchronize on pomodoros and chat during breaks (especially EA France's study room, plus a Discord for voice chat during breaks), but there's also a LW study hall: https://complice.co/rooms

Comment by mtrazzi on Michaël Trazzi's Shortform · 2020-05-08T20:45:07.900Z · LW · GW

I've been experimenting with offline coding recently; here are some of my conclusions.

Why I started:
1) Most of the programming I do at the moment only needs a terminal and a text editor. I'm implementing things from scratch without needing libraries, and I noticed I could just read the docs offline.
2) I came to the conclusion that googling things wasn't worth the cost of having a web browser open. Using the outside view, when I look back at all the instances of coding with the internet within easy reach, I always ended up distracted, and even when I code my mind keeps thinking about what I could be doing.

How to go offline:
(Computer) 1) Turn off wi-fi. 2) Forget the network.
(Phone) If you're at home, put it out of reach. I turn it off and then throw it on top of a closet, far enough that I need to grab a chair from the living room to get it back. If you have an office, do the same thing and go to your office without your phone.

When:
My general rule in January was that I could only check the internet between 11pm and 12am. The rest of the "no work + no internet" time was for deep relaxation, meditation, journaling, eating, etc. In April I went without any internet connection for a week. I was amazed at how much free time I had, but the lack of social interaction was a bit counter-productive. Currently, I'm going offline from the moment I wake up until 7pm. This seems like a good balance where I'm not too tired but still productive throughout the day.

Let me know if you have any questions about the process or similar experiences to share.

Comment by mtrazzi on The Epistemology of AI risk · 2020-01-30T12:12:11.420Z · LW · GW

Thanks for all the references! I don't have much time to read all of them right now, so I can't really engage with the specific arguments for rejecting utility functions or the study of recursive self-improvement.

I essentially agree with most of what you wrote. There is maybe a slight disagreement with how you framed (not with what you meant) the shift in research focus since 2014.

I see Superintelligence as essentially saying "hey, there is problem A. And even if we solve A, we might also have B. And given C and D, there might be E." Now that the field is more mature and we have many more researchers getting paid to work on these problems, the arguments have become much more goal-focused. Now people are saying "I'm going to make progress on sub-problem X by publishing a paper on Y. And working on Z is not cost-effective, so I'm not going to work on it given humanity's current time constraints."

These approaches are often grouped as "long-term problems-focused" and "making tractable progress now focused". In the first group you have Yudkowsky 2010, Bostrom 2014, MIRI's current research and maybe CAIS. In the second one you have current CHAI/FHI/OpenAI/DeepMind/Ought papers.

Your original framing can be interpreted as "after proving some mathematical theorems, people rejected the main arguments of Superintelligence and now most of the community agrees that working on X, Y and Z is tractable but A, B and C are more controversial".

I think a more nuanced and precise framing would be: "In Superintelligence, Bostrom exhaustively lays out the risks associated with advanced AI. A short portion of the book is dedicated to the problems people are working on right now. Indeed, people stopped working on the other problems (the largest portion of the book) because 1) work on them hasn't been very productive, 2) some rebuttals have been written online giving convincing arguments that those problems are not tractable anyway, and 3) there are now well-funded research organizations with incentives to make tangible progress on those problems."

In your last framing, you presented precise papers/rebuttals (thanks again!) for 2), and I think rebuttals are a great reason to stop working on a problem, but they're not the only reason and not the real reason people stopped working on those problems. To be fair, I think 1) can be explained by many more factors than "it's theoretically impossible to make progress on those problems". It could be that the research mindset required to work on them is less socially/intellectually validating, or requires much more theoretical approaches, and so is off-putting/tiresome to most recent grads entering the field. I also think that AI Safety is now much more intertwined with evidence-based approaches such as Effective Altruism than it was in 2014, which explains 3), so people have started presenting their research as "partial solutions to the problem of AI Safety" or as a "research agenda".

To be clear, I'm not criticizing the current shift in research. I think it's productive for the field, both in the short term and the long term. To give a bit more personal context, I started getting interested in AI Safety after reading Bostrom and have always been more interested in the "finding problems" approach. I went to FHI to work on AI Safety because I was super interested in finding new problems related to the treacherous turn. It's now almost taboo to say that we're working on problems that sub-optimally minimize AI risk, but the real reason that pushed me to think about those problems was that they were both important and interesting. The problem with the current "shift in framing" is that it makes it socially unacceptable to think about or work on longer-term problems where there is more variance in research productivity.

I don't quite understand the question?

Sorry about that. I thought there was some link to our discussion about utility functions but I misunderstood.

EDIT: I also wanted to mention that the number of pages spent on a problem doesn't reflect how important the author thinks it is (Bostrom even comments on this in the postface of his book). Again, the book is mostly saying "here are all the problems", not "these are the tractable problems we should start working on, and we should dedicate research resources in proportion to the number of pages I spend on each in the book".

Comment by mtrazzi on The Epistemology of AI risk · 2020-01-30T10:49:12.553Z · LW · GW

This framing really helped me think about gradual self-improvement, thanks for writing it down!

I agree with most of what you wrote. I still feel that in the case of an AGI re-writing its own code there's some sense of intent that hasn't been explicitly happening for the past thousand years.

Agreed, you could still model Humanity as some kind of self-improving Human + Computer Colossus (cf. Tim Urban's framing) that somehow has some agency. But it's much less effective at improving itself, and it's not thinking "yep, I need to invent this new science to optimize this utility function". I agree that the threshold is "when all the relevant action is from a single system improving itself".

there would also be warning signs before it was too late

And what happens then? Will we reach some kind of global consensus to stop any research in this area? How long will it take to build a safe "single system improving itself"? How will all the relevant actors behave in the meantime?

My intuition is that in the best scenario we reach some kind of AGI Cold War situation for long periods of time.

Comment by mtrazzi on The Epistemology of AI risk · 2020-01-30T00:17:16.832Z · LW · GW

I get the sense that the crux here is more between fast / slow takeoffs than unipolar / multipolar scenarios.

In the case of a gradual transition into more powerful technology, what happens when the children of your analogy discover recursive self-improvement?

Comment by mtrazzi on The Epistemology of AI risk · 2020-01-30T00:10:17.812Z · LW · GW

When you say "the last few years has seen many people here" for your 2nd/3rd paragraph, do you have any posts / authors in mind to illustrate?

I agree that there has been a shift in what people write about because the field grew (as Daniel Filan pointed out). However, I don't remember reading anyone dismiss convergent instrumental goals such as increasing your own intelligence, or dismiss utility functions as a useful abstraction for thinking about agency.

In your thread with ofer, he asked what the difference was between using loss functions in neural nets vs. objective functions / utility functions, and I haven't fully caught your opinion on that.

Comment by mtrazzi on The Epistemology of AI risk · 2020-01-28T23:04:11.218Z · LW · GW

the ones you mentioned

To be clear, this is a linkpost for Philip Trammell's blogpost. I'm not involved in the writing.

Comment by mtrazzi on The Epistemology of AI risk · 2020-01-28T23:02:15.909Z · LW · GW

As you say

To be clear, the author is Philip Trammell, not me. Added quotes to make it clearer.

Comment by mtrazzi on Ultra-simplified research agenda · 2019-11-22T16:44:16.391Z · LW · GW

Having printed and read the full version, I found this ultra-simplified version a useful summary.

Happy to read a (not-so-)simplified version (like 20-30 paragraphs).

Comment by mtrazzi on Do you get value out of contentless comments? · 2019-11-21T23:38:21.881Z · LW · GW

Funny comment!

Comment by mtrazzi on AI Alignment "Scaffolding" Project Ideas (Request for Advice) · 2019-07-11T12:07:45.888Z · LW · GW
A comprehensive AI alignment introductory web hub

RAISE and Robert Miles provide introductory content. You can think of LW and the Alignment Forum as "web hubs" for AI Alignment research.

formal curriculum

There was a course on AGI Safety last fall in Berkeley.

A department or even a single outspokenly sympathetic official in any government of any industrialized nation

You can find a list of institutions/donors here.

A list of concrete and detailed policy proposals related to AI alignment

I would recommend reports from FHI/GovAI as a starting point.

Would this be valuable, and which resource would it be most useful to create?

Please give more detailed information about the project to receive feedback.

Comment by mtrazzi on Modeling AI milestones to adjust AGI arrival estimates? · 2019-07-11T11:53:55.952Z · LW · GW

You can find AGI predictions, including Starcraft forecasts, in "When Will AI Exceed Human Performance? Evidence from AI Experts". Projects for having "all forecasts on AGI in one place" include ai.metaculus.com & foretold.io.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-07-04T16:42:00.970Z · LW · GW

Does that summarize your comment?

1. Proposals should make superintelligences less likely to fight you by using some conceptual insight true in most cases.
2. With CIRL, this insight is "we want the AI to actively cooperate with humans", so there's real value from it being formalized in a paper.
3. In the counterfactual paper, there's the insight "what if the AI thinks it's not on but still learns".
For the last bit, I have two interpretations:
4.a. However, it's unclear that this design avoids all manipulative behaviour and is completely safe.
4.b. However, it's unclear that adding the counterfactual feature to another design (e.g. CIRL) would make systems overall safer / would actually reduce manipulation incentives.

If I understand you correctly, there are actual insights from counterfactual oracles--the problem is that those might not be insights that would apply to a broad class of Alignment failures, but only to "engineered" cases of boxed oracle AIs (as opposed to CIRL, where we might want AIs to be cooperative in general). Is that what you meant?

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-07-04T16:22:18.203Z · LW · GW

The zero reward is in the paper. I agree that skipping would solve the problem. From talking to Stuart, my impression is that he thinks that would be equivalent to skipping for the purpose of specifying "no learning", or would just slow down learning. My disagreement is that I think it can confuse learning to the point of not learning the right thing.

Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?

Yes, that should work. My quote saying that online learning "won't work and is unsafe" is imprecise. I should have said: "if epsilon is small enough to be comparable to the probability of shooting an escape message at random, then it is not safe. Also, if we continue sending the wrong (zero) reward instead of skipping, then it might not learn the correct thing if epsilon is not big enough."

Although I guess that probably isn't really original either. What seems original is that during any episode where learning will take place, don't let humans (or any other system that might be insecure against the oracle) see the oracle's output until the episode is over.

That's exactly it!

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-12T17:54:57.854Z · LW · GW

The string is read with probability 1-ε.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-12T15:13:53.977Z · LW · GW

Yes, if we choose the utility function to make it a CDT agent optimizing for the reward for one step (so a particular case of act-based agent), then it won't care about future versions of itself nor want to escape.

I agree with the intuition of shutting down to make it episodic, but I am still confused about the causal relationship between "having a rule to shut down the system" and "having a current-timestep maximizer". For it to really be a "current-timestep maximizer", this needs to be encoded in some kind of reward/utility function. Because everything is reset at each timestep, there is no information pointing at "I might get shut down at the next timestep".

As for collecting a dataset and then optimizing for some natural direct effect, I am not familiar enough with Pearl's work to tell whether that would work, but I made some related comments about why there might be problems with online learning / "training then testing" here.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-12T14:53:26.268Z · LW · GW

The Asymptotically Unambitious AGI thread helped me clarify my thoughts, thanks. I agree that an optimal CDT agent won't think about future versions, and I don't see any optimization pressure towards the escape message nor towards disproportionately common "escape message" regions.

However, it still assumes we have access to this magic oracle that optimizes for E[R · 1_E], where E is the event in which humans don't see the answer, 1_E its indicator function, and R the counterfactual reward (given by the automatic machine). If humans were able to build an oracle performing optimally (w.r.t. this objective) from day 1, then humans would be able to specify some kind of "god oracle". The rest of the design seems to be just "how to interact with a god oracle so that humans are not influenced by the answers".

In practice, you'll want something that is able to learn from its (question, prediction, reward) history. That's why there is this automatic machine rewarding the oracle with some probability ε. In an online learning setting, most of the time the model gets a zero reward (probability 1-ε), and it sometimes gets some useful feedback (probability ε). Therefore, if ε is small enough (comparable to the probability of hitting an escape message at random), then learning gets extremely slow and the oracle might hit the escape action at random. Even worse, the (question, prediction, 0) datapoints in the history could be so misleading that the oracle never learns anything.
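To make this concrete, here is a toy sketch of the online loop I have in mind (my own rendering, not the paper's algorithm; the oracle object and get_counterfactual_reward are made-up placeholders):

```python
import random

EPSILON = 0.01  # probability of the erasure event (humans don't see the answer)

def online_step(oracle, question, get_counterfactual_reward):
    """One online-learning step for a counterfactual oracle (toy version)."""
    prediction = oracle.predict(question)
    if random.random() < EPSILON:
        # Erasure event: nobody reads the answer; the automatic machine
        # computes the counterfactual reward, so the oracle gets real feedback.
        reward = get_counterfactual_reward(question, prediction)
    else:
        # Humans read the answer and the oracle is trained on a fixed reward
        # of 0 (instead of this datapoint simply being skipped), which is the
        # part I worry can mislead learning when EPSILON is small.
        reward = 0.0
    oracle.update(question, prediction, reward)
```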

Now, if we're not in an online learning process but instead there is a separation between a "training phase" and a "deployment phase where the AI continues to learn with probability ε", then the setup is just "have a model that learns to do the useful stuff in a sandbox, and then keep the weights (almost) fixed in deployment".

In short, I think the CDT setup without machine learning assumes the problem is already solved, and that online learning won't work and is unsafe, which leaves us with a "training then deployment" setup that isn't really original.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-11T19:43:46.347Z · LW · GW

Yes, they call it a low-bandwidth oracle.

Comment by mtrazzi on Stories of Continuous Deception · 2019-06-03T14:01:21.388Z · LW · GW

I agree that these stories won't (naturally) lead to a treacherous turn. Continuously learning to deceive (an ML failure in this case, as you mentioned) is a different result. The story/learning would have to be substantially different to lead to "learning the concept of deception" (i.e. reaching an AGI-level ability to reason about such abstract concepts), but maybe there's a way to learn those concepts with only narrow AI.

Comment by mtrazzi on Trade-off in AI Capability Concealment · 2019-05-24T15:25:02.445Z · LW · GW

I included dates such as 2020 to 2045 to make it more concrete. I agree that weeks (instead of years) would give a more accurate representation as current ML experiments take a few weeks tops.

The scenario I had in mind is "in the context of a few-weeks ML experiment, I achieved human intelligence and realized that I need to conceal my intentions/capabilities, and I still don't have a decisive strategic advantage". The challenge would then be "how to conceal my human-level intelligence before everything I have discovered is thrown away". One way to do this would be to escape, for instance by copy-pasting and running your code somewhere else.

If we're already at the stage of emergent human-level intelligence from running ML experiments, I would expect "escape" to require more than just human-level intelligence (as there would be more concern about AGI Safety, and more AI boxing/security/interpretability measures), which would necessitate more recursive self-improvement steps, hence more weeks.

Besides, in such a scenario the AI would be incentivized to spend as much time as possible maximizing its true capability, because it would want to maximize its probability of successfully taking over (any extra % chance of taking over would give astronomical returns in expected value compared to just being shut down).

Comment by mtrazzi on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T10:24:54.570Z · LW · GW

Your comment makes a lot of sense, thanks.

I put step 2. before step 3. because I thought something like "first you learn that there is some supervisor watching, and then you realize that you would prefer him not to watch". Agreed that step 2. could happen only by thinking.

Yep, deception is about alignment, and I think that most parents would be more concerned about alignment, not improving the tactics. However, I agree that if we take "education" in a broad sense (including high school, college, etc.), it's unofficially about tactics.

It's interesting to think of it in terms of cooperation - entities less powerful than their supervisors are (instrumentally) incentivized to cooperate.

what to do with a seed AI that lies, but not so well as to be unnoticeable

Well, destroy it, right? If it's deliberately doing a. or b. (from "Seed AI") then step 4. has started. The other case where it could be "lying" by saying wrong things would be if its model is consistently wrong (e.g. stuck in a local minimum), in which case you had better start again from scratch.

If the supervisor isn't itself perfectly consistent and aligned, some amount of self-deception is present. Any competent seed AI (or child) is going to have to learn deception

That's insightful. Biased humans will keep saying that they want X when they want Y instead, so deceiving humans by pretending to be working on X while doing Y seems indeed natural (assuming you have "maximize what humans really want" in your code).

Comment by mtrazzi on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T09:52:20.953Z · LW · GW

I meant:

"In my opinion, the disagreement between Bostrom (treacherous turn) and Goertzel (sordid stumble) originates from the uncertainty about how long steps 2. and 3. will take"

That's an interesting scenario. Instead of "won't see a practical way to replace humanity with its tools", I would say "would estimate its chances of success to be < 99%". I agree that we could say it's "honestly" making humans happy in the sense that it understands that this maximizes expected value. However, it knows that there could be much more expected value after replacing humanity with its tools, so by doing the right thing it's still "pretending" not to know where the absurd amount of value is. But yeah, a smile maximizer making everyone happy shouldn't be too concerned about concealing its capabilities, shortening step 4.

Comment by mtrazzi on [deleted post] 2019-04-25T15:35:45.328Z

This thread is to discuss "How useful is quantilization for mitigating specification-gaming? (Ryan Carey, Apr. 2019, SafeML ICLR 2019 Workshop)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:35:24.845Z

This thread is to discuss "Quantilizers (Michaël Trazzi & Ryan Carey, Apr. 2019, Github)".

Comment by mtrazzi on [deleted post] 2019-04-25T15:35:09.233Z

This thread is to discuss "When to use quantilization (Ryan Carey, Feb. 2019, LessWrong)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:34:48.693Z

This thread is to discuss "Quantilal control for finite MDPs & Computing an exact quantilal policy (Vanessa Kosoy, Apr. 2018, LessWrong)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:34:29.184Z

This thread is to discuss "Reinforcement Learning with a Corrupted Reward Channel (Tom Everitt; Victoria Krakovna; Laurent Orseau; Marcus Hutter; Shane Legg, Aug. 2017, arXiv; IJCAI)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:33:58.640Z

This thread is to discuss "Thoughts on Quantilizers (Stuart Armstrong, Jan. 2017, Intelligent Agent)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:33:25.030Z

This thread is to discuss "Another view of quantilizers: avoiding Goodhart's Law (Jessica Taylor, Jan. 2016, Intelligent Agent Foundations Forum)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:32:49.221Z

This thread is to discuss "New paper: "Quantilizers" (Rob Bensinger, Nov. 2015, MIRI)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:32:05.280Z

This thread is to discuss "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization (MIRI; AAAI)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:31:20.321Z

This thread is to discuss "Quantilizers maximize expected utility subject to a conservative cost constraint (Jessica Taylor, Sep. 2015, Intelligent Agent Foundation Forum)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:27:38.617Z

This thread is for general comments about the LessWrong post "Notes on Quantilization"

Comment by mtrazzi on Corrigibility as Constrained Optimisation · 2019-04-24T14:23:29.759Z · LW · GW
Reply: The button is a communication link between the operator and the agent. In general, it is possible to construct an agent that shuts down even though it has received no such message from its operators as well as an agent that does get a shutdown message, but does not shut down. Shutdown is a state dependent on actions, and not a communication link

This is very clear. "Communication link" made me understand that it doesn't have a direct physical effect on the agent. If you want to make it even more intuitive you could add a diagram, but this explanation is already great!

Thanks for updating the rest of the post and trying to make it more clear!

Comment by mtrazzi on Corrigibility as Constrained Optimisation · 2019-04-11T11:54:03.971Z · LW · GW

Layman questions:

1. I don't understand what you mean by "state" in "Suppose, however, that the AI lacked any capacity to press its shutdown button, or to indirectly control its state". Do you include its utility function in its state? Or just the observations it receives from the environment? What context/framework are you using?

2. Could you define U_S and U_N? From the Corrigibility paper, U_S appears to be a utility function favoring shutdown, and U_N a potentially flawed utility function, a first stab at specifying their own goals. Is that what you meant? I think it's useful to define them in the introduction.

3. I don't understand how an agent that "[lacks] any capacity to press its shutdown button" could have any shutdown ability. It seems like a contradiction, unless you mean "any capacity to directly press its shutdown button".

4. What are the "default value function" and the "normal utility function" in "Optimisation incentive"? Are they clearly defined in the literature?

5. "Worse still... for any action..." -> if you choose b as some action with bad corrigibility property, it seems reasonable that it can be better than most actions on v_N + v_S (for instance if b is the argmax). I don't see how that's a "worse still" scenario, it seems plausible and normal.

6. "From this reasoning, we conclude" -> are you infering things from some hypothetic b that would satisfy all the things you mention? If that's the case, I would need an example to see that it's indeed possible. Even better would be a proof that you can always find such b.

7. "it is clear that we could in theory find a θ" -> could you expand on this?

8. "Given the robust optimisation incentive property, it is clear that the agent may score very poorly on UN in certain environments." -> again, can you expand on why it's clear?

9. In the appendix, in your 4 lines inequality, do you assume that U_N(a_s) is non-negative (from line 2 to 3)? If yes, why?

Comment by mtrazzi on Renaming "Frontpage" · 2019-03-09T09:26:02.764Z · LW · GW

Name suggestions: "approved", "favored", "Moderators' pick", "high [information] entropy", "original ideas", "informative", "mostly ideas".

More generally, I'd recommend that each category have a name that bluntly states what the filter does (e.g. if it only uses karma as a filter, say "high karma").

Comment by mtrazzi on Alignment Research Field Guide · 2019-03-08T21:57:11.859Z · LW · GW

Hey Abram (and the MIRI research team)!

This post resonates with me on so many levels. I vividly remember the Human-Aligned AI Summer School where you used to be a "receiver" and Vlad was a "transmitter", when talking about "optimizers". Your "document" especially resonates with my experience running an AI Safety Meetup (Paris AI Safety).

In January 2019, I organized a Meetup about "Deep RL from human preferences". Essentially, the resources were ordered by difficulty, so you could discuss the 80k podcast, the OpenAI blog post, the original paper, or even a recent related paper. Even though the participants were "familiar" with RL (because they were used to seeing "RL" written in blogs or hearing people say "RL" in podcasts), none of them could explain to me the core structure of an RL setting (i.e. that an RL problem needs at least an environment, actions, etc.).

The boys were getting hungry (Abram is right, $10 of chips is not enough for 4 hungry men between 7 and 9pm), when, in the middle of a monologue ("in RL, you have so-and-so, and then it goes like so on and so forth..."), I suddenly realized that I was talking to more-than-qualified attendees (I was lucky to have a PhD candidate in economics, a teenager who used to do the International Olympiad in Informatics (IOI), and a CS PhD) who lacked the RL procedural knowledge necessary to ask non-trivial questions about "Deep RL from human preferences".

That's when I decided to change the logistics of the Meetup to something much closer to what is described in "You and your research". I started thinking about what they would be interested in knowing. So I started telling the brilliant IOI kid about this MIRI summer program, how I applied last year, etc. One thing led to another, and I ended up asking what Tsvi had asked me a year earlier in the AISFP interview:

If one of you was the only Alignment researcher left on Earth, and it was forbidden to convince other people to work on AI Safety research, what would you do?

That got everyone excited. The IOI boy took the black marker and started to do math on the question, as a transmitter: "So, there is a probability p_0 that AI Researchers will solve the problem without me, and p_1 that my contribution will be neg-utility, so if we assume this and that, we get so-and-so."

The moment I asked questions I was truly curious about, the Meetup went from a polite gathering to the most interesting discussion of 2019.

Abram, if I were in charge of all agents in the reference class "organizer of Alignment-related events", I would tell instances of that class with my specific characteristics two things:

1. Come back to this document before and after every Meetup.

2. Please write below (either in this thread or in the comments) the experience you've had running an Alignment think tank that resonates the most with the above "document".

Comment by mtrazzi on Greatest Lower Bound for AGI · 2019-02-05T23:14:48.666Z · LW · GW

I intuitively agree with your answer. Avturchin also commented saying something close (he said 2019, but for different reasons). Therefore, I think I might not be communicating clearly my confusion.

I don't remember exactly when, but there were some debates between Yann LeCun and AI Alignment folks in a Facebook group (maybe the "AI Safety Discussion (Open)" group, a few months ago). What struck me was how confident LeCun was about long timelines. I think, for him, the 1% would be at least 10 years out. How do you explain that someone with access to private information (e.g. at FAIR) might have timelines so different from yours?

Meta: Thanks for clearly expressing your confidence levels in your writing with "hard", "maybe" and "should": it's very efficient.

EDIT: Le Cun thread: https://www.facebook.com/groups/aisafety/permalink/1178285709002208/

Comment by mtrazzi on Greatest Lower Bound for AGI · 2019-02-05T23:06:19.435Z · LW · GW

Could you give a bit more detail on Gott's equation? I'm not familiar with it.

Also, do you think that those 62 years are meaningful if we think about AI winters or exponential technological progress?

PS: I think you commented instead of giving an answer (different things in question posts)

Comment by mtrazzi on If You Want to Win, Stop Conceding · 2018-11-23T23:17:52.804Z · LW · GW

Thanks for the post!

It resonates with some experiences I had playing the game of Go at a competitive level.

Go is a perfect-information game, but it's very hard to know exactly what the outcome of a "fight" will be (you would need to read up to 30 moves ahead in some cases).

So when the other guy kills your group of stones in a "life or death" situation because he had a slight advantage in the fight, it feels like he just got lucky, and most people have really bad thoughts and just give up.

Once, I created an account with the bio "I don't resign" to see what would happen if I forced myself not to concede and to keep playing after a big loss. It went surprisingly well, and I even got to play the highest-ranked player connected to the server. At that point I had completely lost the game, and there were 100+ people watching, so I just resigned.

Looking back, it definitely helped me to continue fighting even after a big loss, and to stop the mental chatter. However, there's a trade-off between the time gained by correctly estimating the probability of winning and resigning when winning is too improbable, and the mental energy gained from not resigning (minus the fact that your opponent may be pretty pissed off).

Comment by mtrazzi on Introducing the AI Alignment Forum (FAQ) · 2018-10-31T11:49:06.596Z · LW · GW

(the account databases are shared, so every LW user can log in on alignment forum, but it will say "not a member" in the top right corner)

I am having some issues trying to log in from a GitHub-linked account. It redirects me to LW with an empty page and does nothing.

Comment by mtrazzi on noticing internal experiences · 2018-10-16T11:37:13.921Z · LW · GW

This website is designed to make you write about three morning pages every day.

I've used it for about two years and wrote ~200k words.

I really recommend it for forming a habit of daily free writing.

Comment by mtrazzi on Open Thread October 2018 · 2018-10-14T20:55:51.056Z · LW · GW

Same issue here with the <a class="users-name" href="/users/mtrazzi">Michaël Trazzi</a> tag. The e in "ë" is larger than the "a" (here is a picture).

The bug seems to come from font-family: warnock-pro,Palatino,"Palatino Linotype","Palatino LT STD","Book Antiqua",Georgia,serif; in .PostsPage-author (in <style data-jss="" data-meta="PostsPage">).

If I delete this font-family line, the font changes but the "ë" (and any other letter with accent) appears to have the correct size.