Comments

Comment by Erik Istre (eistre91) on Soon: a weekly AI Safety prerequisites module on LessWrong · 2018-05-03T06:56:41.326Z · LW · GW

I should clarify that we only intend to design pathways that complement, augment, or supplement existing resources. As you rightly point out, safety-oriented ML is already well covered by much better material from experts. I don't plan to replace that, only to provide a pathway through it some day. I see my role, at least, as providing tools for someone to verify their knowledge, to challenge it, and to never be lost about where to go next.

I have in mind the analogy of a mountain range and different approaches to guiding people through it. Right now, the state of learning about agent foundations feels something like "hey, here's a map of the whole range. Go get to the top of those mountains. Good luck." I would like it to be something like "here are the trails of least resistance, given your skill set and background, to get to the top of those mountains."

Comment by Erik Istre (eistre91) on Soon: a weekly AI Safety prerequisites module on LessWrong · 2018-05-02T22:29:59.245Z · LW · GW

What's currently present is mostly a sketch of one part of what we intend to do. We do eventually plan to extend into machine learning as well.

The limitation at the time was that my academic background was purely in foundations of mathematics research, so the MIRI approach was a more natural starting point. I am working on remedying these gaps in my knowledge :)