Comments

Comment by Victor Levoso on Cognitive Emulation: A Naive AI Safety Proposal · 2023-02-26T03:01:57.541Z · LW · GW

This post doesn't make me actually optimistic about Conjecture pulling this off, because for that I would have to see details, but it does at least look like you understand why this is hard and why the easy versions, like just telling GPT-5 to imitate a nice human, won't work. And I like that this actually looks like a plan. Maybe it will turn out not to be a good plan, but at least it's better than OpenAI's plan of
"we'll figure out from trial and error how to make the Magic safe somehow".

Comment by Victor Levoso on DragonGod's Shortform · 2023-02-22T17:08:51.609Z · LW · GW

I think DG is making a more nitpicky point, and just claiming that that specific definition isn't feasible rather than using this as a claim that foom isn't feasible, at least in this post. He also claims that elsewhere, but with a different argument about humans being able to make narrow AI for things like strategy (which I think is also wrong). At least that's what I've understood from our previous discussions.

Comment by Victor Levoso on AI Safety Camp, Virtual Edition 2023 · 2023-02-18T21:56:38.182Z · LW · GW

So it seems that a lot of people applied to the Understanding Search in Transformers project to do mechanistic interpretability research, and probably a lot of them won't get in.
I think there are a lot of similar projects and potential low-hanging fruit people could work on, and we could probably organize more teams working on similar things.
I'm willing to organize at least one such project myself (specifically, working on trying to figure out how algorithm distillation works: https://arxiv.org/pdf/2210.14215.pdf). I'll talk with Linda about it in two weeks and write a longer post with more details, but I thought it would be better to write a comment here first to see how many people are interested in that kind of thing.

Comment by Victor Levoso on Decision Transformer Interpretability · 2023-02-10T02:34:04.291Z · LW · GW

About the sampling thing: I think a better way to do it, which would work for other kinds of models, would be to train a few different models that do better or worse on the task and use different policies, and then make a dataset out of trajectories sampled from several of them. That should be cleaner, in terms of knowing what is going on in the training set, than collecting the data as the model trains (which, on the other hand, is actually better for doing AD).

That also has the benefit of letting you study how the choice of agents used to generate the training data affects the model. For example, if you have two agents that get similar rewards using different policies, does the DT learn to output a mix of the two policies, or what exactly? The agents don't even need to be neural nets; they could be random or handcrafted policies.

For example, I tried training a DT-like model (though without the timestep encoding) on a mixture of a DQN that played the dumb toy env I was using (the FrozenLake env from gym, which is basically solved by memorizing the correct path) and random actions, and it apparently learned to output the actions the DQN took when conditioned on RTG 1, and a uniform distribution over the action tokens on RTG 0.
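To make that concrete, here is a minimal sketch of the kind of mixed-policy dataset I mean. It's not the code I actually used: the "expert" is a hard-coded path standing in for the DQN, the gymnasium API is assumed, and trajectories are stored as (return-to-go, state, action) triples in the usual DT-style format.

```python
# Sketch of building a mixed-policy dataset for a decision transformer on
# non-slippery 4x4 FrozenLake. Assumes the gymnasium API
# (env.reset() -> (obs, info), env.step() -> 5-tuple).
import random
import gymnasium as gym

# Hard-coded optimal action sequence for the default non-slippery 4x4 map
# (0=left, 1=down, 2=right, 3=up). This stands in for a trained DQN policy.
EXPERT_ACTIONS = [2, 2, 1, 1, 1, 2]

def collect_trajectory(env, policy):
    """Roll out one episode and return a list of (state, action, reward)."""
    obs, _ = env.reset()
    traj, done, t = [], False, 0
    while not done:
        action = policy(obs, t)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        traj.append((obs, action, reward))
        obs, done, t = next_obs, terminated or truncated, t + 1
    return traj

def to_rtg_tokens(traj):
    """Convert a trajectory into (return-to-go, state, action) triples."""
    rtg = sum(r for _, _, r in traj)
    tokens = []
    for state, action, reward in traj:
        tokens.append((rtg, state, action))
        rtg -= reward
    return tokens

def expert_policy(obs, t):
    return EXPERT_ACTIONS[t] if t < len(EXPERT_ACTIONS) else 0

def random_policy(obs, t):
    return random.randrange(4)

env = gym.make("FrozenLake-v1", is_slippery=False)
dataset = []
for _ in range(500):
    policy = expert_policy if random.random() < 0.5 else random_policy
    dataset.append(to_rtg_tokens(collect_trajectory(env, policy)))
# Successful episodes get rtg=1 everywhere, failed ones rtg=0, so a DT trained
# on `dataset` can be conditioned on rtg at sampling time.
```

Conditioning the trained DT on RTG 1 vs RTG 0 at sampling time should then reproduce the split described above (DQN-like actions vs roughly uniform actions).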

Comment by Victor Levoso on Decision Transformer Interpretability · 2023-02-08T01:47:18.922Z · LW · GW

Oh nice, I was interested in doing mechanistic interpretability on decision transformers myself and had gotten started during SERI MATS, but then got more interested in looking into algorithm distillation and the decision transformer stuff fell by the wayside (plus I haven't been very productive during the last few weeks, unfortunately). It's too late to read the post in detail today, but I'll probably read it carefully and look at the repo tomorrow. I'm interested in helping with this, and I'm likely going to be working on some related research in the near future anyway. Also, btw, I think that once we get to the point of understanding what's going on in the setup from the original DT paper, it would be interesting to look into this: https://arxiv.org/abs/2201.12122

Also, the DT paper finds that their model generalizes to bigger RTGs than in the training set on the Seaquest env, and it would be interesting to get a mechanistic explanation of why that happens (though that's an Atari task, and I think you're right that it will probably have to come later, since CNNs are probably harder to work with).

Another thing to note is that OpenAI's VPT, while technically not a decision transformer (because it doesn't predict rewards, if I remember correctly), is a similar kind of thing, in that it's offline RL as sequence prediction, and it's probably one of the biggest publicly available pretrained models of this kind. There are also multiple open-source implementations of Gato that could be interesting to do interpretability on: https://github.com/Shanghai-Digital-Brain-Laboratory/BDM-DB1

Also, training decision transformers on MineRL (or on EleutherAI's future Minetest environment) seems like what might come next after Atari (the tasks Gato is trained on are mostly Atari games and Google stuff that isn't publicly available, if I remember correctly).

(Sorry if this is too rambly; I'm half asleep and got excited, because I think work on DTs is a potentially very promising area for alignment and I was procrastinating on writing a post trying to convince more people to work on it, so I'm pleasantly surprised other people had the same idea.)

Comment by Victor Levoso on Two-year update on my personal AI timelines · 2022-08-05T13:16:23.852Z · LW · GW

Another possible update is towards shorter timelines, if you think that humans might not be trained with the optimal amount of data (since we can't, for example, just read the entire internet), so it might be possible to get better performance with fewer parameters, if you assume the brain has similar scaling laws.
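To spell out the scaling-law intuition, here is a rough sketch using the Chinchilla functional form; whether the brain sits on a similar curve is the speculative assumption here.

```latex
% Chinchilla-style parametric loss (Hoffmann et al. 2022) in parameters N and
% training tokens D, with training compute C \approx 6ND:
\[
  L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]
% At fixed compute, the loss-minimizing allocation splits roughly evenly:
\[
  N_{\mathrm{opt}} \propto C^{0.5}, \qquad D_{\mathrm{opt}} \propto C^{0.5}
\]
```

A data-starved learner (small D) has to compensate with a larger N to reach a given loss, so a data-optimally trained model could match human-level performance with fewer parameters than a naive brain-size anchor suggests, which pushes in the shorter-timelines direction.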

Comment by Victor Levoso on AGI Ruin: A List of Lethalities · 2022-06-08T05:12:28.105Z · LW · GW

Not a response to your actual point, but I think that hypothetical example probably doesn't make sense (in that making the AI not "care" doesn't prevent it from including mindhacks in its plan). If you have a plan that is "superintelligently optimized" for some misaligned goal, then that plan will have to take into account the effect of outputting the plan itself, and will by default contain deception or mindhacks, even if the AI doesn't in some sense "care" about executing plans (or, if you set up some complicated scheme with counterfactuals so that the model ignores the effects of the plan on humans, that will make your plans less useful or more inscrutable).

The plan that produces the most paperclips is going to be one that deceives or mindhacks humans, rather than one that humans wouldn't accept in the first place. Maybe it's possible to use some kind of scheme that avoids the model taking into account the consequences of outputting the plan itself, but the model pretty much has to be modeling humans reading its plan in order to write a realistic plan that humans will understand, accept, and be able to put into practice, and the plan might only work in the fake counterfactual universe with no plan that it was written for.

So I doubt it's actually feasible to have any such scheme that avoids mindhacks and still produces useful plans.

Comment by Victor Levoso on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T15:33:38.646Z · LW · GW

So, first, it's really unclear what you would actually get from GPT-6 in this situation.
(As an aside, I tried this with GPT-J and it outputted an index with some chapter names.)
You might just get the rest of your own comment or something similar...
Or maybe you'd get some article about Eliezer's book, some joke book written now, the actual book but containing subtle errors Eliezer might make, a fake article written by an AGI that GPT-6 predicts would likely have taken over the world by then... etc.

Since, in general, GPT-6 would be optimized to predict (on the training distribution) what followed that kind of text, which is not the same as helpfully responding to prompts (for a current example, Codex outputs bad code when prompted with bad code).
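To illustrate the Codex point, the snippet below is a hypothetical prompt (not an actual Codex transcript) showing what "prompted with bad code" means in practice.

```python
# Hypothetical prompt given to a code model. The prompt itself contains an
# off-by-one bug:
def average(xs):
    total = 0
    for i in range(len(xs) - 1):  # bug: skips the last element
        total += xs[i]
    return total / len(xs)

# A model trained purely on next-token prediction tends to continue in the
# same style as the prompt, bugs included, because that is what plausibly
# follows in its training data -- it is predicting, not trying to be helpful.
```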

It seems to me like the result depends on unknown things about what really big transformer models do internally which seem really hard to predict.

But for you to get something like what you want from this, GPT-6 needs to be modeling future Eliezer in great detail, complete with lots of thought and interactions.
And while GPT-6 could have been optimized into having a very specific human-modeling algorithm that happens to do that, it seems more likely that, before the optimization process finds the complicated algorithm necessary, it gets something simpler and more consequentialist that does some more general thinking process to achieve some goal which happens to output the right completions on the training distribution.
Which is really dangerous.

And if you instead trained it with human feedback to ensure you get helpful responses (which sounds like exactly the kind of thing people would do if they wanted to actually use GPT-6 to do things like answer questions), it would be even worse, because you are directly optimizing it for human feedback, and it seems clearer there that you are running a search for strategies that make the human-feedback number higher.

Comment by Victor Levoso on Thoughts on the Alignment Implications of Scaling Language Models · 2021-06-06T05:01:27.350Z · LW · GW

Well, if Mary does learn something new (how it feels "from the inside" to see red, or whatever), she would notice, and her brain state would reflect that, plus whatever information she learned. Otherwise it doesn't make sense to say she learned anything.

And just the fact that she learned something, and might have thought something like "neat, so that's what red looks like", would be relevant to predictions of her behavior, even ignoring the possible information content of qualia.

So it seems distinguishable to me.

Comment by Victor Levoso on Luna Lovegood and the Chamber of Secrets - Part 6 · 2020-12-12T05:24:39.133Z · LW · GW

Not sure what you mean.

If some action is a risk to the world but Harry doesn't know it, the vow doesn't prevent him from doing it.

If, after taking some action, Harry realizes it risked the world, nothing happens, except maybe him being unable to repeat the decision if it comes up again.

If not taking some action (for example, defeating someone about to Obliviate him) would cause him to forget about a risk to the world, the vow doesn't actually force him to take it.

And if Harry is forced to decide between ignorance and a risk to the world, he will choose whichever he thinks is least likely to destroy the world.

The thing about ignorance also seems to apply to abandoning intelligence buffs.