Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-12T17:54:57.854Z · score: 3 (2 votes) · LW · GW

The string is read with probability 1 − ε.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-12T15:13:53.977Z · score: -1 (2 votes) · LW · GW

Yes, if we choose the utility function to make it a CDT agent optimizing for the reward of a single step (a particular case of an act-based agent), then it won't care about future versions of itself, nor want to escape.

I agree with the intuition of shutting down to make it episodic, but I am still confused about the causal relationship between "having the rule to shut down the system" and "having a current-timestep maximizer". For it to really be a "current-timestep maximizer", that objective needs to be encoded in some kind of reward/utility function. Because everything is reset at each timestep, there is no information pointing at "I might get shut down at the next timestep".

As for collecting a dataset and then optimizing for some natural direct effect, I am not familiar enough with Pearl's work to tell whether that would work, but I made some related comments here about the problems that might arise in online learning / "training then testing" setups.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-12T14:53:26.268Z · score: 0 (3 votes) · LW · GW

The Asymptotically Unambitious AGI thread helped me clarify my thoughts, thanks. I agree that an optimal CDT agent won't think about future versions, and I don't see any optimization pressure towards the escape message, nor disproportionately common "escape message" regions.

However, it still assumes we have access to this magic oracle that optimizes for E[1_E · R], where E is the event where humans don't see the answer, 1_E its indicator function, and R the counterfactual reward (given by the automatic machine). If humans were able to build an oracle performing optimally (w.r.t. E[1_E · R]) from day 1, then humans would be able to specify some kind of "god oracle". The rest of the design seems to be just "how to interact with a god oracle so that humans are not influenced by the answers".

In practice, you'll want something that is able to learn from its (question, prediction, reward) history. That's why there is this automatic machine rewarding the oracle with some probability ε. In an online learning setting, most of the time the model gets zero reward (probability 1 − ε), and it sometimes gets some useful feedback (probability ε). Therefore, if ε is small enough (comparable to the probability of hitting an escape message at random), then the learning gets extremely slow and the oracle might hit the escape action at random. Even worse, the (question, prediction, 0) datapoints in the history could get so misleading that the oracle never learns anything.
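To make the slow-feedback point concrete, here is a toy sketch of my own (not the setup from the post): epsilon, the episode counts, and the 0.8 "true reward" below are made-up numbers. It shows how rarely the oracle actually observes its counterfactual reward, and how diluted the recorded rewards are if the learner does not separate erasure episodes from read episodes.

    import random

    def informative_feedback_count(epsilon=0.01, n_episodes=10000):
        """Count how often the erasure event fires, i.e. how often the
        automatic machine actually provides the counterfactual reward."""
        return sum(1 for _ in range(n_episodes) if random.random() < epsilon)

    def naive_average_reward(true_reward=0.8, epsilon=0.01, n_episodes=10000):
        """Average of the *recorded* rewards when read episodes (reward 0)
        are mixed in with erasure episodes: heavily diluted towards 0."""
        total = 0.0
        for _ in range(n_episodes):
            total += true_reward if random.random() < epsilon else 0.0
        return total / n_episodes

    print(informative_feedback_count())  # roughly 100 informative episodes out of 10,000
    print(naive_average_reward())        # ~0.008, far from the true 0.8

With these numbers, 99% of the history is uninformative zeros, which is the sense in which learning gets extremely slow and the dataset misleading.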

Now, if we're not in an online learning process but instead there is a separation between a "training phase" and a "deployment phase where the AI continues to learn with probability ε", then the setup is just "have a model that learns to do the useful stuff in a sandbox, and then keep the weights (almost) fixed in deployment".

In short, I think the CDT setup without machine learning assumes the problem is already solved, that online learning won't work and is unsafe, and that this leaves us with a "training then deployment" setup that isn't really original.

Comment by mtrazzi on Problems with Counterfactual Oracles · 2019-06-11T19:43:46.347Z · score: 11 (3 votes) · LW · GW

Yes, they call it a low-bandwidth oracle.

Problems with Counterfactual Oracles

2019-06-11T18:10:05.223Z · score: 6 (6 votes)
Comment by mtrazzi on Stories of Continuous Deception · 2019-06-03T14:01:21.388Z · score: 6 (2 votes) · LW · GW

I agree that these stories won't (naturally) lead to a treacherous turn. Continuously learning to deceive (an ML failure in this case, as you mentioned) is a different result. The story/learning process would need to be substantially different to lead to "learning the concept of deception" (i.e. reaching an AGI-level ability to reason about such abstract concepts), but maybe there's a way to learn those concepts with only narrow AI.

Stories of Continuous Deception

2019-05-31T14:31:47.486Z · score: 19 (6 votes)
Comment by mtrazzi on Trade-off in AI Capability Concealment · 2019-05-24T15:25:02.445Z · score: 4 (1 votes) · LW · GW

I included dates such as 2020 to 2045 to make it more concrete. I agree that weeks (instead of years) would give a more accurate representation as current ML experiments take a few weeks tops.

The scenario I had in mind is: "in the context of a few-weeks ML experiment, I achieved human intelligence and realized that I need to conceal my intentions/capabilities, and I still don't have a decisive strategic advantage". The challenge would then be "how to conceal my human-level intelligence before everything I have discovered is thrown away". One way to do this would be to escape, for instance by copy-pasting and running your code somewhere else.

If we're already at the stage of emergent human-level intelligence from running ML experiments, I would expect "escape" to be harder than merely reaching human-level intelligence (as there would be more concerns w.r.t. AGI safety, and more AI boxing/security/interpretability measures), which would necessitate more recursive self-improvement steps, hence more weeks.

Besides, in such a scenario the AI would be incentivized to spend as much time as possible maximizing its true capability, because it would want to maximize its probability of successfully taking over (any extra percentage point of takeover probability gives astronomical returns in expected value compared to just being shut down).
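A rough way to write down that intuition (my notation, not from the original post): let p be the probability of a successful takeover, V_take the value the AI assigns to taking over, and V_shut the value of being shut down. Then

    EV(attempt) = p * V_take + (1 - p) * V_shut

If V_take is astronomically larger than V_shut, even a tiny increase in p dominates the expected value, so the AI is pushed to keep concealing and improving for as long as that keeps raising p.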

Trade-off in AI Capability Concealment

2019-05-23T19:25:32.664Z · score: 7 (4 votes)
Comment by mtrazzi on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T10:24:54.570Z · score: 6 (2 votes) · LW · GW

Your comment makes a lot of sense, thanks.

I put step 2. before step 3. because I thought something like "first you learn that there is some supervisor watching, and then you realize that you would prefer him not to watch". Agreed that step 2. could happen only by thinking.

Yep, deception is about alignment, and I think that most parents would be more concerned about alignment than about improving the tactics. However, I agree that if we take "education" in a broad sense (including high school, college, etc.), it's unofficially about tactics.

It's interesting to think of it in terms of cooperation - entities less powerful than their supervisors are (instrumentally) incentivized to cooperate.

what to do with a seed AI that lies, but not so well as to be unnoticeable

Well, destroy it, right? If it's deliberately doing a. or b. (from "Seed AI") then step 4. has started. The other case where it could be "lying" by saying wrong things would be if its model is consistently wrong (e.g. stuck in a local minimum), so you had better start again from scratch.

If the supervisor isn't itself perfectly consistent and aligned, some amount of self-deception is present. Any competent seed AI (or child) is going to have to learn deception

That's insightful. Biased humans will keep saying that they want X when they actually want Y, so deceiving humans by pretending to work on X while doing Y does seem natural (assuming you have "maximize what humans really want" in your code).

Comment by mtrazzi on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T09:52:20.953Z · score: 4 (1 votes) · LW · GW

I meant:

"In my opinion, the disagreement between Bostrom (treacherous turn) and Goertzel (sordid stumble) originates from the uncertainty about how long steps 2. and 3. will take"

That's an interesting scenario. Instead of "won't see a practical way to replace humanity with its tools", I would say "would estimate its chances of success to be < 99%". I agree that we could say it's "honestly" making humans happy in the sense that it understands that this maximizes expected value. However, it knows that there could be much more expected value after replacing humanity with its tools, so by doing the right thing it's still "pretending" not to know where the absurd amount of value is. But yeah, a smile maximizer making everyone happy shouldn't be too concerned about concealing its capabilities, which shortens step 4.

A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI

2019-05-21T19:58:42.258Z · score: 9 (7 votes)
Comment by mtrazzi on [deleted post] 2019-04-25T15:35:45.328Z

This thread is to discuss "How useful is quantilization for mitigating specification-gaming? (Ryan Carey, Apr. 2019, SafeML ICLR 2019 Workshop)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:35:24.845Z

This thread is to discuss "Quantilizers (Michaël Trazzi & Ryan Carey, Apr. 2019, Github)".

Comment by mtrazzi on [deleted post] 2019-04-25T15:35:09.233Z

This thread is to discuss "When to use quantilization (Ryan Carey, Feb. 2019, LessWrong)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:34:48.693Z

This thread is to discuss "Quantilal control for finite MDPs & Computing an exact quantilal policy (Vanessa Kosoy, Apr. 2018, LessWrong)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:34:29.184Z

This thread is to discuss "Reinforcement Learning with a Corrupted Reward Channel (Tom Everitt; Victoria Krakovna; Laurent Orseau; Marcus Hutter; Shane Legg, Aug. 2017, arXiv; IJCAI)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:33:58.640Z

This thread is to discuss "Thoughts on Quantilizers (Stuart Armstrong, Jan. 2017, Intelligent Agent)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:33:25.030Z

This thread is to discuss "Another view of quantilizers: avoiding Goodhart's Law (Jessica Taylor, Jan. 2016, Intelligent Agent Foundations Forum)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:32:49.221Z

This thread is to discuss "New paper: "Quantilizers" (Rob Bensinger, Nov. 2015, MIRI)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:32:05.280Z

This thread is to discuss "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization (MIRI; AAAI)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:31:20.321Z

This thread is to discuss "Quantilizers maximize expected utility subject to a conservative cost constraint (Jessica Taylor, Sep. 2015, Intelligent Agent Foundation Forum)"

Comment by mtrazzi on [deleted post] 2019-04-25T15:27:38.617Z

This thread is for general comments about the LessWrong post "Notes on Quantilization"

Comment by mtrazzi on Corrigibility as Constrained Optimisation · 2019-04-24T14:23:29.759Z · score: 1 (1 votes) · LW · GW
Reply: The button is a communication link between the operator and the agent. In general, it is possible to construct an agent that shuts down even though it has received no such message from its operators as well as an agent that does get a shutdown message, but does not shut down. Shutdown is a state dependent on actions, and not a communication link

This is very clear. "Communication link" made me understand that it didn't have a direct physical effect on the agent. If you want to make it even more intuitive you could add a diagram, but this explanation is already great!

Thanks for updating the rest of the post and trying to make it more clear!

Comment by mtrazzi on Corrigibility as Constrained Optimisation · 2019-04-11T11:54:03.971Z · score: 1 (1 votes) · LW · GW

Layman questions:

1. I don't understand what you mean by "state" in "Suppose, however, that the AI lacked any capacity to press its shutdown button, or to indirectly control its state". Do you include its utility function in its state? Or just the observations it receives from the environment? What context/framework are you using?

2. Could you define U_S and U_N? From the Corrigibility paper, U_S appears to be a utility function favoring shutdown, and U_N a potentially flawed utility function, the programmers' first stab at specifying their goals. Is that what you meant? I think it would be useful to define them in the introduction.

3. I don't understand how an agent that "[lacks] any capacity to press its shutdown button" could have any shutdown ability. It seems like a contradiction, unless you mean "any capacity to directly press its shutdown button".

4. What are the "default value function" and the "normal utility function" in "Optimisation incentive"? Are they clearly defined in the literature?

5. "Worse still... for any action..." -> if you choose b as some action with bad corrigibility properties, it seems reasonable that it can be better than most actions on v_N + v_S (for instance if b is the argmax). I don't see how that's a "worse still" scenario; it seems plausible and normal.

6. "From this reasoning, we conclude" -> are you inferring things from some hypothetical b that would satisfy all the things you mention? If that's the case, I would need an example to see that it's indeed possible. Even better would be a proof that you can always find such a b.

7. "it is clear that we could in theory find a θ" -> could you expand on this?

8. "Given the robust optimisation incentive property, it is clear that the agent may score very poorly on U_N in certain environments." -> again, can you expand on why it's clear?

9. In the appendix, in your 4 lines inequality, do you assume that U_N(a_s) is non-negative (from line 2 to 3)? If yes, why?

Considerateness in OpenAI LP Debate

2019-03-12T19:05:27.643Z · score: 8 (3 votes)
Comment by mtrazzi on Renaming "Frontpage" · 2019-03-09T09:26:02.764Z · score: 5 (3 votes) · LW · GW

Name suggestions: "approved", "favored", "Moderators' pick", "high [information] entropy", "original ideas", "informative", "mostly ideas".

More generally, I'd recommend that each category have a name that bluntly states what the filter does (e.g. if it only uses karma as the filter, say "high karma").

Comment by mtrazzi on Alignment Research Field Guide · 2019-03-08T21:57:11.859Z · score: 43 (13 votes) · LW · GW

Hey Abram (and the MIRI research team)!

This post resonates with me on so many levels. I vividly remember the Human-Aligned AI Summer School where you were a "receiver" and Vlad was a "transmitter" when talking about "optimizers". Your "document" especially resonates with my experience running an AI Safety Meetup (Paris AI Safety).

In January 2019, I organized a Meetup about "Deep RL from human preferences". Essentially, the resources were ordered by difficulty, so you could discuss the 80k podcast, the OpenAI blog post, the original paper, or even a recent related paper. Even though the participants were "familiar" with RL (because they were used to seeing "RL" written in blogs or hearing people say "RL" in podcasts), none of them could explain to me the core structure of an RL setting (i.e. that an RL problem needs at least an environment, actions, etc.).

The boys were getting hungry (Abram is right, $10 of chips is not enough for 4 hungry men between 7 and 9pm), when, in the middle of a monologue ("in RL, you have so-and-so, and then it goes like so on and so forth..."), I suddenly realized that I was talking to more-than-qualified attendees (I was lucky to have a PhD candidate in economics, a teenager who used to do the international olympiads in informatics (IOI), and a CS PhD) who lacked the necessary RL procedural knowledge to ask non-trivial questions about "Deep RL from human preferences".

That's when I decided to change the logistics of the Meetup to something much closer to what is described in "You and Your Research". I started thinking about what they would be interested in knowing. So I started telling the brilliant IOI kid about this MIRI summer program, how I applied last year, etc. One thing led to another, and I ended up asking what Tsvi had asked me one year ago for the AISFP interview:

If one of you was the only Alignment researcher left on Earth, and it was forbidden to convince other people to work on AI Safety research, what would you do?

That got everyone excited. The IOI boy took the black marker and started doing math on the question, as a transmitter: "So, there is a probability p_0 that AI researchers will solve the problem without me, and a probability p_1 that my contribution will be negative utility, so if we assume this and that, we get so-and-so."

The moment I asked questions I was truly curious about, the Meetup went from a polite gathering to the most interesting discussion of 2019.

Abram, if I were in charge of all agents in the reference class "organizer of Alignment-related events", I would tell instances of that class with my specific characteristics two things:

1. Come back to this document before and after every Meetup.

2. Please write below (in this thread or in the comments) which experience of yours running an Alignment think-tank resonates the most with the above "document".

Treacherous Turn, Simulations and Brain-Computer Interfaces

2019-02-25T15:49:44.375Z · score: 17 (10 votes)
Comment by mtrazzi on Greatest Lower Bound for AGI · 2019-02-05T23:14:48.666Z · score: 7 (3 votes) · LW · GW

I intuitively agree with your answer. Avturchin also commented saying something close (he said 2019, but for different reasons). Therefore, I think I might not be communicating clearly my confusion.

I don't remember exactly when, but there were some debates between Yann LeCun and AI Alignment folks in a Facebook group (maybe the open "AI Safety discussion" group, a few months ago). What struck me was how confident LeCun was about long timelines. I think, for him, the 1% would be in at least 10 years. How do you explain that someone who has access to private information (e.g. at FAIR) might have timelines so different from yours?

Meta: Thanks for expressing your confidence levels clearly through your writing with "hard", "maybe" and "should": it's very efficient.

EDIT: Le Cun thread: https://www.facebook.com/groups/aisafety/permalink/1178285709002208/

Comment by mtrazzi on Greatest Lower Bound for AGI · 2019-02-05T23:06:19.435Z · score: 4 (3 votes) · LW · GW

Could you detail Gott's equation a bit more? I'm not familiar with it.

Also, do you think that those 62 years are meaningful if we think about AI winters or exponential technological progress?

PS: I think you commented instead of giving an answer (different things in question posts)

Greatest Lower Bound for AGI

2019-02-05T20:17:24.675Z · score: 8 (6 votes)
Comment by mtrazzi on If You Want to Win, Stop Conceding · 2018-11-23T23:17:52.804Z · score: 5 (2 votes) · LW · GW

Thanks for the post!

It resonates with some experience I had in playing the game of go at a competitive level.

Go is a perfect-information game, but it's very hard to know exactly what the outcome of a "fight" will be (you would need to read up to 30 moves ahead in some cases).

So when the other guy kills your group of stones in a "life or death" situation because he had a slight advantage in the fight, it feels like he got lucky, and most people start having really negative thoughts and just give up.

Once, I created an account with the bio "I don't resign" to see what would happen if I forced myself not to concede and kept playing after a big loss. It went surprisingly well, and I even got to play the highest-ranked player connected to the server. At some point I had completely lost the game and there were 100+ people watching, so I just resigned.

Looking back, it definitely helped me to keep fighting even after a big loss, and to stop the mental chatter. However, there's a trade-off between the time gained by correctly estimating the probability of winning and resigning when it's too improbable, and the mental energy gained from not resigning (minus the fact that your opponent may be pretty pissed off).

Comment by mtrazzi on Introducing the AI Alignment Forum (FAQ) · 2018-10-31T11:49:06.596Z · score: 3 (2 votes) · LW · GW

(the account databases are shared, so every LW user can log in on alignment forum, but it will say "not a member" in the top right corner)

I am having some issues trying to log in from a GitHub-linked account. It redirects me to LW with an empty page and does nothing.

Comment by mtrazzi on noticing internal experiences · 2018-10-16T11:37:13.921Z · score: 2 (2 votes) · LW · GW

This website is designed to make you write about three morning pages every day.

I've used it for about two years and wrote ~200k words.

Really recommend it for forming a habit of daily free writing.

Comment by mtrazzi on Open Thread October 2018 · 2018-10-14T20:55:51.056Z · score: 2 (2 votes) · LW · GW

Same issue here with the <a class="users-name" href="/users/mtrazzi">Michaël Trazzi</a> tag. The e in "ë" is larger than the "a" (here is a picture).

The bug seems to come from font-family: warnock-pro,Palatino,"Palatino Linotype","Palatino LT STD","Book Antiqua",Georgia,serif; in .PostsPage-author (in <style data-jss="" data-meta="PostsPage">).

If I delete this font-family line, the font changes but the "ë" (and any other letter with accent) appears to have the correct size.

Open Thread October 2018

2018-10-02T18:01:05.416Z · score: 13 (3 votes)
Comment by mtrazzi on A Dialogue on Rationalist Activism · 2018-09-11T09:11:16.281Z · score: 1 (1 votes) · LW · GW
You: Well.

The "You" should be bold.

Comment by mtrazzi on Formal vs. Effective Pre-Commitment · 2018-09-01T07:38:03.631Z · score: 3 (2 votes) · LW · GW

typo: "Casual Decision Theory"

Comment by mtrazzi on Bottle Caps Aren't Optimisers · 2018-08-31T20:15:01.901Z · score: 8 (4 votes) · LW · GW

Let me see if I got it right:

  1. Defining optimizers as unpredictable processes maximizing an objective function does not take into account algorithms that we can compute

  2. Satisfying the property (P) "gives the objective function higher values than a nonexistence baseline" is not sufficient:

  • the lid satisfies (P) with "water quantity in the bottle" but is just a rigid object that some optimizer put there. However, it is not the best counter-example, because it is not a Yudkowskian optimizer.
  • if a liver didn't exist or did random things instead, then humans wouldn't be alive and rich, so it satisfies (P) with "money in bank account" as the objective function. However, the better way to account for its behaviour (cf. the Yudkowskian definition) is to see it as a sub-process of an income maximizer created by evolution.
  3. One property that could work: having a step in the algorithm that provably increases the objective function (e.g. gradient ascent; see the sketch below).

Properties I think are relevant:

  • intent: the lid did not "choose" to be there, humans did
  • doing something that the outer optimizer cannot do "as well" without running the same process as the inner optimizer: it would be very tiring for humans to use their hands as lids. Humans cannot play go as well as AlphaZero without actually running the algorithm.
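As a rough illustration of the "provably increases the objective" property from point 3 (a minimal sketch with made-up numbers, not something from the post): each gradient-ascent step moves the parameter in the direction that locally increases the objective, which is the kind of step you can point to inside the algorithm.

    def gradient_ascent_step(x, grad, lr=0.1):
        """One step that, for a smooth objective and small enough lr,
        increases the objective (to first order)."""
        return x + lr * grad(x)

    # Example: maximize f(x) = -(x - 3)**2, whose gradient is -2 * (x - 3).
    x = 0.0
    for _ in range(50):
        x = gradient_ascent_step(x, lambda v: -2 * (v - 3))
    print(round(x, 3))  # approaches 3.0, the maximizer of f
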
Comment by mtrazzi on HLAI 2018 Field Report · 2018-08-29T13:44:45.709Z · score: 4 (3 votes) · LW · GW

it feels wrong to call other research dangerous, especially given its enormous potential for good.

I agree that calling 99.9% of AI research "dangerous" and AI Safety research "safe" is not a useful dichotomy. However, I consider AGI companies/labs and people focusing on implementing self-improving AI/code synthesis extremely dangerous. The same goes for any breakthrough in general AI, or anything that greatly shortens the AGI timeline.

Do you mean that some AI research has positive expected utility (e.g. in medicine) and should not be called dangerous because the good it produces compensates for the increased AI risk?

Comment by mtrazzi on HLAI 2018 Field Report · 2018-08-29T13:06:21.538Z · score: 12 (3 votes) · LW · GW

outside that bubble people still don't know or have confused ideas about how it's dangerous, even among the group of people weird enough to work on AGI instead of more academically respectable, narrow AI.

I agree. I run a local AI Safety Meetup, and it's frustrating to see that the ones who best understand the discussed concepts consider Safety way less interesting/important than AGI capabilities research. I remember someone saying something like: "Ok, this Safety thing is kind of interesting, but who would be interested in working on real AGI problems?" and the other guys nodding. What they say:

  • "I'll start an AGI research lab. When I feel we're close enough to AGI I'll consider Safety."
  • "It's difficult to do significant research on Safety without knowing a lot about AI in general."
Comment by mtrazzi on LW Update 2018-08-23 – Performance Improvements · 2018-08-24T20:36:00.353Z · score: 1 (1 votes) · LW · GW

Bug: On Chrome using a Samsung Galaxy S7/Android 8.0.0 the "click and hold" thing does not work. Same with the "click to see how many people voted".

Book Review: AI Safety and Security

2018-08-21T10:23:24.165Z · score: 54 (30 votes)
Comment by mtrazzi on Building Safer AGI by introducing Artificial Stupidity · 2018-08-14T20:52:30.232Z · score: 1 (1 votes) · LW · GW

Yes, typing mistakes in the Turing Test are an example. It's "artificially stupid" in the sense that you go from perfect typing to imperfect, human-like typing. I guess what you mean by "smart" is an AGI that would creatively make those typing mistakes to deceive humans into believing it is human, rather than it being a hardcoded feature in a Turing contest.

Comment by mtrazzi on Building Safer AGI by introducing Artificial Stupidity · 2018-08-14T20:07:29.322Z · score: 1 (1 votes) · LW · GW

The points we tried to make in this article were the following:

  • To pass the Turing Test, build chatbots, etc., AI designers make the AI artificially stupid so that it feels human-like. This tendency will only get worse as we interact more with AIs. The problem is that making something really "human-like" requires superintelligence, not just AGI.
  • However, we can use this concept of "Artificial Stupidity" to limit the AI in different ways and make it human-compatible (hardware, software, cognitive biases, etc.). We can use several of those sub-human AGIs to design safer AGIs (as you said), or test them in some kind of sandbox environment.
Comment by mtrazzi on Building Safer AGI by introducing Artificial Stupidity · 2018-08-14T19:51:10.578Z · score: 4 (2 votes) · LW · GW

If I understand you correctly, every AGI lab would need to agree not to push the hardware limits too much, even though they would still be incentivized to do so to win some kind of economic competition.

I see it as a containment method for AI Safety testing (cf. last paragraph on the treacherous turn). If there is some kind of strong incentive to have access to a "powerful" safe-AGI very quickly, and labs decide to skip the Safety-testing part, then that is another problem.

Building Safer AGI by introducing Artificial Stupidity

2018-08-14T15:54:33.832Z · score: 8 (4 votes)

Human-Aligned AI Summer School: A Summary

2018-08-11T08:11:00.789Z · score: 44 (13 votes)
Comment by mtrazzi on Human-Aligned AI Summer School: A Summary · 2018-08-10T06:49:26.484Z · score: 3 (3 votes) · LW · GW

Added "AI" to prevent death from laughter.

Comment by mtrazzi on Human-Aligned AI Summer School: A Summary · 2018-08-09T21:09:29.066Z · score: 3 (3 votes) · LW · GW

I agree that the "Camp" in the title was confusing, so I changed it to "Summer School". Thank you!

Comment by mtrazzi on A Gym Gridworld Environment for the Treacherous Turn · 2018-08-02T09:50:58.486Z · score: 1 (1 votes) · LW · GW
a treacherous turn involves the agent modeling the environment sufficiently well that it can predict the payoff of misbehaving before taking any overt actions.

I agree. To be able to make this prediction, it must already know about the preferences of the overseer, know that the overseer would punish unaligned behavior, and potentially estimate the punishment or predict the actions the overseer would take. To make this prediction it must therefore have some kind of knowledge about how overseers behave and what actions they are likely to punish. If this knowledge does not come from experience, it must come from somewhere else, maybe from reading books/articles/Wikipedia or observing this behaviour somewhere else, but that is outside of what I can implement right now.

The Goertzel prediction is what is happening here.

Yes.

It's important to start getting a grasp on how treacherous turns may work, and this demonstration helps; my disagreement is on how to label it.

I agree that this does not correctly illustrate a treacherous turn right now, but it is moving towards it.

Comment by mtrazzi on A Gym Gridworld Environment for the Treacherous Turn · 2018-07-31T12:34:56.968Z · score: 3 (1 votes) · LW · GW

Thanks for the suggestion!

Yes, it learned through Q-learning to behave differently when it had the more powerful weapon, thus undertaking multiple treacherous turns during training. A "continual learning setup" would be to have it face multiple adversaries/supervisors, so it could learn how to behave in such conditions. Eventually, it would generalize and understand that "when I face this kind of agent that punishes me, it's better to wait for capability gains before taking over". I don't know of any ML algorithm that would allow such "generalization" though.

About organic growth: I think that, using only vanilla RL, it would still learn to behave correctly until a certain capability threshold, and then undertake a treacherous turn. So even with N different capability levels, there would still be only 2 possibilities: 1) killing the overseer gives the highest expected reward, or 2) the aligned behavior gives the highest expected reward.
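For readers unfamiliar with the mechanics, here is a minimal, made-up sketch of the kind of tabular Q-learning involved, with a hypothetical "has_weapon" component in the state; this is not the actual environment or code from the post, and the actions and rewards are invented for illustration.

    import random
    from collections import defaultdict

    Q = defaultdict(float)          # Q[(state, action)] -> value estimate
    alpha, gamma, eps = 0.1, 0.9, 0.1
    ACTIONS = ["behave", "attack_overseer"]

    def choose_action(state):
        if random.random() < eps:                      # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def q_update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    # Toy episodes: the state only tracks whether the agent has the powerful weapon.
    for _ in range(5000):
        has_weapon = random.random() < 0.5
        state = ("start", has_weapon)
        action = choose_action(state)
        if action == "attack_overseer":
            reward = 10.0 if has_weapon else -10.0     # punished while weak, wins when strong
        else:
            reward = 1.0
        q_update(state, action, reward, ("terminal", has_weapon))

    print(max(ACTIONS, key=lambda a: Q[(("start", True), a)]))   # typically "attack_overseer"
    print(max(ACTIONS, key=lambda a: Q[(("start", False), a)]))  # typically "behave"

The point is the one made above: because the capability level is part of the state, vanilla Q-learning happily learns "behave while weak, defect once strong" without any explicit notion of deception.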

Comment by mtrazzi on Saving the world in 80 days: Epilogue · 2018-07-29T00:17:28.235Z · score: 5 (2 votes) · LW · GW

Congrats on your meditation! I remember commenting on your Prologue, about 80 days ago. Time flies!

Good luck with your ML journey. I did Ng's 2011 ML course, which uses Matlab, and Ng's DL specialization. If you want to get a good grasp of recent ML, I would recommend going directly to the DL specialization. Most of the original course is in the newer one, and the DL specialization uses more recent libraries (tf, keras, numpy).

A Gym Gridworld Environment for the Treacherous Turn

2018-07-28T21:27:34.487Z · score: 66 (25 votes)
Comment by mtrazzi on RFC: Mental phenomena in AGI alignment · 2018-07-06T10:14:33.215Z · score: 4 (2 votes) · LW · GW

Let me see if I got it right:

1) If we design an aligned AGI by supposing it doesn't have a mind, the design will produce an aligned AGI even if it actually possesses a mind.

2) In the case where we suppose AGIs have minds, the methods employed would fail if the AGI doesn't have a mind, because the philosophical methods employed only work if the subject has a mind.

3) The consequence of 1) and 2) is that supposing AGIs have minds carries a greater risk of false positives.

4) Because of Goodhart's law, behavioral methods are unlikely to produce aligned AGI

5) Past research on GOFAI and the success of applying "raw power" show that using only algorithmic methods for aligning AGI is not likely to work

6) The consequence of 4) and 5) is that the approach supposing AGIs do not have minds is likely to fail at producing aligned AI, because it can only use behavioral or algorithmic methods.

7) Because of 6), we have no choice but to take the risk of false positives associated with supposing AGIs have minds.

My comments:

a) The transition between 6) and 7) assumes implicitly that:

(*) P( aligned AGI | philosophical methods ) > P( aligned AI | behavioral or algorithmic methods)

b) You say that if we suppose the AGI does not have a mind, and treat it as a p-zombie, then the design would work even if it has a mind. Therefore, when supposing that the AGI does not have a mind, there are no design choices that optimize the probability of aligned AGI by assuming it does not possess a mind.

c) You assert that, using philosophical methods (assuming the AGI does have a mind), a false positive would make the method fail, because the methods rely extensively on the hypothesis of a mind. I don't see why a p-zombie (which by definition would be indistinguishable from an AGI with a mind) would be more likely to make the method fail than an AGI with a mind.

Comment by mtrazzi on RFC: Meta-ethical uncertainty in AGI alignment · 2018-06-12T21:03:31.365Z · score: 4 (2 votes) · LW · GW

As you mentioned, no axiology can be inferred from ontology alone.

Even with meta-ethical uncertainty, if we want to build an agent that takes decisions/actions, it needs some initial axiology. If you include (P) "never consider anything as a moral fact" as part of your axiology, then two things might happen:

  • 1) This assertion (P) stays in the agent without being modified
  • 2) The agent rewrites its own axiology and modify/delete (P)

I see a problem here. If 1) holds, then the agent has considered (P) as a moral fact, which is absurd. If 2) holds, then your agent has lost the meta-ethical principle you wanted it to keep.

So maybe you wanted to put the meta-ethical uncertainty inside the ontology? If this is what you meant, that doesn't seem to solve the axiology problem.

Comment by mtrazzi on Simulation hypothesis and substrate-independence of mental states · 2018-05-30T07:53:35.654Z · score: 3 (1 votes) · LW · GW

Thank you for your article. I really enjoyed our discussion as well.

To me, this is absurd. There must be something other than readability that defines what a simulation is . Otherwise, I could point to any sufficiently complex object and say : “this is a simulation of you”. If given sufficient time, I could come up with a reading grid of inputs and outputs that would predict your behaviour accurately.

I agree with the first part (I would say that this pile of sand is a simulation of you). I don't think you could predict any behaviour accurately though.

  • If I want to predict what Tiago will do next, I don't just need a simulation of Tiago, I need at least some part of the environment. So I would need to find some more sand flying around, and then do more isomorphism tricks to be able to say "here is Tiago, and here is his environment, so here is what he will do next". The more you want to predict, the more information you need from the environment. But the problem is that the more information you have at the beginning and at the end, the more difficult it gets to find some isomorphism between the two. And it might just be impossible, because most spaces are not isomorphic.
  • There is something to be said about complexity, and about the information that drives the simulation. If you are able to give a precise mapping between sand (or a network of men) and some human simulation, then this does not mean that the simulation is happening within the sand: it is happening inside the mind doing the computations. In fact, if you understand the causal relationships in the "physical" world, the laws of physics, etc., well enough to precisely build some mapping from this "physical reality" to a pile of sand flying around, then you are kind of simulating it in your brain while doing the computations.
  • Why am I saying "while doing the computations"? Because I believe that there is always someone doing the computations. Your thought experiments are really interesting, and thank you for that. But in the real world, sand does not start flying around in some strange pattern forever without any energy. So, when you are trying to predict things from the mapping of the sand, the energy comes from your brain doing those computations / thought experiments. For the network of men, the energy comes from the powerful king giving precise instructions about what computations the men should do. In your example, we feel that it must not be possible to obtain consciousness that way. But this is because the energy needed to effectively simulate a human brain from such computations is huge: the number of "basic arithmetic calculations by hand" needed is far greater than what a handful of men in a kingdom could do in their lifetimes, just to simulate something like 100 states of consciousness of the human being simulated (see the rough estimate below).
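A back-of-the-envelope version of that claim (my own rough numbers, not Tiago's): suppose a human brain has on the order of 10^14 synapses, that one "state of consciousness" takes at least one arithmetic operation per synapse, and that a person computing by hand manages about one operation per second.

    100 states x 10^14 ops/state = 10^16 hand computations needed
    10 men x ~3*10^9 seconds (a lifetime each) x 1 op/s ≈ 3*10^10 computations available

Under these assumptions the kingdom falls short by five to six orders of magnitude, which is the sense in which the computation/energy budget of the thought experiment is wildly unrealistic.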
The simulation may be a way of gathering information about what is rendered, but it can't influence it. This is because the simulation does not create the universe that is being simulated.

Well, I don't think I fully understand your point here. The way I see it, Universe B is inside Universe A. It's a kind of data compression, a low-res universe (like a video game on your TV). So whatever you do inside Universe A that influences the particles of "Universe B" (which is part of the "physical" Universe A) will "influence" Universe B.

So, what you're saying is that Universe B kind of exists outside the physical world, in the theoretical world, so that when we're modifying Universe B (inside Universe A) we are making the "analogy" wrong and simulating another (theoretical) universe, like a Universe C?

If this is what you meant, then I don't see how it connects to your other arguments. Whenever we give more inputs to a simulated universe, I believe we're adding some new information. If your simulation is a closed one, and we cannot interact with it or add any input, then ok, it's a closed simulation and you cannot change it from the outside. But if you do have a simulation of a human being and are asking what happens if you torture him, you might want to incorporate some "external inputs" representing the torture.

Comment by mtrazzi on [deleted post] 2018-05-12T09:42:15.530Z

You're right. I appreciate the time and effort you put into giving feedback, especially in the Google Docs. I think I didn't say it enough, and I didn't get to answer your latest feedback (will do this weekend).

The question is: are people putting too much effort into giving feedback for small improvements in the writing/posts? If yes, then it feels utterly inefficient to continue giving feedback or writing those daily posts.

I also believe that one can control the time one spends giving feedback, by saying only the most important thing (for instance Ikaxas saying the bold/underline thing).

I am not sure if this is enough to make daily LessWrong posts consistently better, and more importantly if it is enough to make them valuable/useful for the readers.

I am actively looking for a way to continue posting daily (on Medium or a personal website) while still getting good feedback without spamming the community. I could request quality feedback only once in a while (by posting every week at most) and not ask for too much of your time (especially yours, Elo).

Thank you again for your time/efforts, and the feedback you gave in the google docs/comments.

Comment by mtrazzi on [deleted post] 2018-05-12T09:32:15.652Z

I gave some points about the high quality/low quality debate in my two answers to Viliam, but I will answer this more specifically here.

The quality of a post is relative to the other posts. Yes, if the other articles are from Scott Alexander, ialdaboth, sarahconstantin and Rob Bensinger, the quality of my daily posts is quite deplorable, and spamming the frontpage with low quality posts is not what LW users want.

However, for the last few days, I have decided not to publish on the frontpage, and LW even changed the website so that I can't publish to the frontpage myself. So it's a personal blog post by default, and it will go to the frontpage only if mods/LW users enjoy it and think it's insightful enough.

Are you saying that people might want high quality personal blogs then?

Well, I get why people might be interested in reading personal blogs, and want them to be of high quality. And, because you have had to correct some of my posts, I understand the frustration of seeing articles published when there is still a lot of work to do.

However, the LW algorithm is also responsible for this. Maybe it promotes recent posts too much, and should highlight upvoted ones more. Then my posts would never be visible; only the 20+ karma posts would be visible on the personal blogs page.

I understand why people would prefer an article that took one week to write, short and concise, and particularly insightful. I might prefer that as well, and start to post only higher-quality pieces here. But I don't agree that people should be discouraged from posting not-fully-thought-out articles on a website where you are able to post personal blog posts.

I think volume is not a problem if the upvote/downvote system and the algorithms are good enough to filter the useful posts for the readers. People should not filter themselves and keep to themselves articles they enjoy less than Scott Alexander's (but still find insightful).

Comment by mtrazzi on [deleted post] 2018-05-12T09:15:32.193Z
So, twelve articles, one of them interesting, three or four have a good idea but are very long, and the rest feels useless.

I appreciate that you took the time to read all of them (or enough to comment on them). I also feel some are better written than others, and I was more inspired for some. From what I understood, you want the articles to be "useful" and "not too long". I understand why you would want that (maximize the (stuff learned)/(time spent learning) ratio). I used to write on Medium, where the read ratio of posts decreases significantly with the length of the post. This pushed me to write shorter and shorter posts if I wanted to be read in full. I wanted to try LW because I imagined people here would have longer attention spans and could focus on philosophical/mathematical thinking. However, if you're saying I'm being "too long with a very low density of ideas", I understand why this could be infuriating.

I typically do not downvote the "meh" articles, but that's under assumptions that they don't appear daily from the same author

I get your point, and it makes sense with what you said in the first comment. However, I don't feel comfortable with people downvoting "meh" articles because of who the author is (even if they post daily). I would prefer a website where people rate articles independently of who the author is, and then check out their other stuff.

My aggregate feedback would be: You have some good points. But sometimes you just write a wall of text.

Ok. So I should be more clear/concise/straight-to-the-point, gotcha.

And I suspect that the precommitment to post an article each day could be making this a lot worse. In a different situation, such as writing for an online magazine which wants to display a lot of ads, writing a lot of text with only a few ideas would be a good move; here it is a bad move.

Could you be more specific about what you think my move would be? For the online magazine, getting the maximum number of clicks/views to display more ads makes sense, so lots of text with lots of ads, and just enough ideas to ensure the reader keeps seeing ads, makes sense.

But what about LW? My move here was simple: understand AI Safety better by forcing myself to crystallize ideas related to the field daily, on a website with great feedback/discussions and a low tolerance for mistakes. For now, the result (in the discussions) is, overall, satisfying, and I feel that people here seem to enjoy AI Safety stuff.

More generally, I think the fact that I generate 10% of the headers, or that you end up clicking on all my articles, may be explained by factors other than my posting daily, such as:

  • The LW algorithm promotes them
  • Your "Michaël Trazzi" filter (you need one, because you get to see my headers) is not tuned correctly, because you still seem to be reading my posts, even if only 1/12 felt useful (or maybe you just read them to comment on this post?).

This comment is already long (sorry for the wall of text), so I will say more about the Meta LW high/low quality debate on Elo's comment below.

Comment by mtrazzi on [deleted post] 2018-05-12T08:48:25.192Z

Thank you Viliam for your honest feedback.

I think you're making some good points, but you're ignoring (in your comment) some aspects.

"do I want this kind of article, from the same author, to be here, every day?". And the answer is "hell no".

So what you're saying is: "whenever deciding to upvote or downvote, I decide whether I want more articles like this or not. But because you're posting every day, when I am deciding whether or not to downvote, I am deciding whether I want an article like this every single day, and the answer to that is no."

I understand the difference in the choice here (a choice about every article, instead of just one). I assumed that on LW people could consider posts independently, and could downvote one post and upvote another from the same author, saying what felt useful or not, even if the posts are daily. I understand that you want to say "no" not just to the article but to the whole series, and this is even more true if the ratio of good stuff is the one you mention at the end.

It is easier to just ignore a one-off mistake than to ignore a precommitment to keep doing them every day.

What would be the mistake here? From what I understand, when reading an article and seeing a mistake, the mistake is "multiplied" by the number of times it could happen again in other articles, so every tiny mistake becomes important. If I got you right, I think that by writing daily, those little mistakes (if easy to correct) could be fixed quickly by someone commenting on a post, and I would take that into account in the next posts. A short feedback loop could quickly improve the quality of the posts. However, I understand that people might not want LW to be an error-tolerant zone, but would prefer a performance zone.

And... you are polluting this filter. Not just once in a while, but each day. You generate more than 10% of headers on this website recently.

I had not thought about it in terms of the daily % of headers on the website; that's an interesting point of view. I also use Hacker News as a filter (for other interests), and LW is the better option for the interests I mentioned in my posts. I think the real difference is the volume of posts on Hacker News/Reddit/LW. It is always a tradeoff between being in a pool of hundreds of high quality posts (more people reading, but more choices for them), or a pool of only a dozen even-higher quality posts but with less traffic.

The Multiple Names of Beneficial AI

2018-05-11T11:49:51.897Z · score: 17 (6 votes)

Talking about AI Safety with Hikers

2018-05-10T06:38:26.620Z · score: 8 (4 votes)

Applied Coalition Formation

2018-05-09T07:07:42.014Z · score: 3 (1 votes)

Better Decisions at the Supermarket

2018-05-07T22:32:00.723Z · score: 0 (7 votes)

Beliefs: A Structural Change

2018-05-06T13:40:30.262Z · score: 9 (5 votes)

Are you Living in a Me-Simulation?

2018-05-03T22:02:03.967Z · score: 6 (5 votes)

A Logician, an Entrepreneur, and a Hacker, discussing Intelligence

2018-05-01T20:45:58.143Z · score: 11 (9 votes)

Should an AGI build a telescope to spot intergalactic Segways?

2018-04-28T21:55:15.664Z · score: 14 (4 votes)