Posts

Referential Information 2021-08-01T21:07:11.371Z
My Productivity Tips and Systems 2021-07-25T12:52:34.932Z
Stanford EA Confusion Dinner 2021-06-15T10:49:57.353Z

Comments

Comment by Jack R (Jack Ryan) on Three reasons to expect long AI timelines · 2021-04-24T04:42:47.377Z · LW · GW

Won't we have AGI that is slightly less able to jump into existing human roles before we have AGI that can jump into existing human roles? (Borrowing intuitions from Christiano's Takeoff Speeds) [Edited to remove typo]

Comment by Jack R (Jack Ryan) on The mathematical universe: the map that is the territory · 2021-04-24T03:53:24.352Z · LW · GW

Obviously, we wouldn’t notice the slowness from the inside, any more than the characters in a movie would notice that your DVD player is being choppy.

Do you have a causal understanding of why this is the case? I am a bit confused by it.

Comment by Jack R (Jack Ryan) on Three reasons to expect long AI timelines · 2021-04-24T03:34:39.040Z · LW · GW

Re: 1, I think it may be important to note that adoption has gotten quicker (e.g. as visualized in Figure 1 here; linking this instead of the original source since you might find other parts of the article interesting). Does this update you, or were you already taking this into account? 

Comment by Jack R (Jack Ryan) on Does the lottery ticket hypothesis suggest the scaling hypothesis? · 2021-04-22T00:47:06.870Z · LW · GW

When the network is randomly initialized, there is a sub-network that is already decent at the task.

From what I can tell, the paper doesn't demonstrate this. That is, I don't think they ever test the performance of a subnetwork with random weights; rather, they test the performance of a subnetwork after training only the subnetwork. Though maybe this isn't what you meant, in which case you can ignore me :)
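For concreteness, here's a minimal sketch of the two measurements being distinguished (assuming PyTorch; the one-layer model, random mask, and data are all hypothetical stand-ins, not the paper's setup): (a) scoring a masked subnetwork at its random initialization, versus (b) scoring it after training only the subnetwork's surviving weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: a single linear layer as the "network" and a random pruning mask.
model = nn.Linear(10, 2)
mask = (torch.rand_like(model.weight) > 0.5).float()

x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()

def subnetwork_loss():
    # Apply the mask at forward time so pruned weights never contribute.
    return loss_fn(F.linear(x, model.weight * mask, model.bias), y)

# (a) The untested claim: subnetwork performance at random initialization.
print("loss at init:", subnetwork_loss().item())

# (b) What the paper actually measures: performance after training only
#     the subnetwork (gradients for pruned weights are zeroed each step).
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    subnetwork_loss().backward()
    model.weight.grad *= mask  # keep pruned weights frozen at their init values
    opt.step()
print("loss after training subnetwork:", subnetwork_loss().item())
```

(In the actual lottery-ticket procedure the mask comes from magnitude-pruning a trained network and the surviving weights are rewound to their initial values rather than masked randomly, but the distinction between (a) and (b) is the same.)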

Comment by Jack R (Jack Ryan) on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-04-11T06:19:34.238Z · LW · GW

Thanks a lot for this; I'm doing a lit review for an interpretability project and this is definitely coming in handy :)

Random note: the paper "Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction" is listed twice in the master list of summarized papers.

Comment by Jack R (Jack Ryan) on Why GPT wants to mesa-optimize & how we might change this · 2021-03-05T02:58:38.966Z · LW · GW

I agree, and thanks for the reply. And I agree that even a small chance of catastrophe is not robust. I asked because I still care about the probability of things going badly, even if I think that probability is worryingly high. But I see now (thanks to you!) that in this case our prior that SGD will find look-ahead is relatively high, and that belief won't change much with further thought, since it's sensitive to complicated details we can't easily know.

Comment by Jack R (Jack Ryan) on Why GPT wants to mesa-optimize & how we might change this · 2021-03-03T22:42:44.094Z · LW · GW

Anyway, the question here isn't whether lookahead will be perfectly accurate, but whether the post-lookahead distribution of next words will allow for improvement over the pre-lookahead distribution.

Can you say a bit more about why it's enough for look-ahead merely to improve performance? SGD favors better improvements over worse ones; it feels like I could think of many programs that are improvements but that won't be found by SGD. Maybe you would say there don't seem to be any other improvements that are this good and this seemingly easy for SGD to find?

Comment by Jack R (Jack Ryan) on 2020 LessWrong Demographics Survey · 2020-06-12T05:10:55.159Z · LW · GW

For the risk question, is it asking about positive and negative risk, or just negative risk?