Comments

Comment by Daermonn on [SEQ RERUN] Changing the Definition of Science · 2014-04-23T03:37:33.629Z

I don't understand why you're getting downvoted. Those were great links, and indeed relevant. I appreciated them.

Comment by Daermonn on Meetup : Princeton NJ Meetup · 2014-02-23T02:32:02.534Z

I'm in the area and I would have loved to attend, but I just saw this posting. How many people showed up? Is another meetup being planned?

Comment by Daermonn on Co-Working Collaboration to Combat Akrasia · 2013-03-11T01:59:54.028Z

This is something I've been thinking about a lot lately, myself. I totally struggle with akrasia and executive functioning, and I find I have more willpower when I do things socially. I've been going to the gym with friends for the past few weeks. I've actually been rethinking my relationship with leadership because of it: I used to hate being the leader (preferring to just be left alone), but now I'm thinking that I need to lead in order to do the things I want to do.

Comment by Daermonn on Negative and Positive Selection · 2012-07-05T18:10:16.818Z

I agree. I stumbled across this one a week or so ago - without knowing the author was associated with LW - loved it, and have been thinking about it off and on since. I'm glad to see it again. I feel like I should probably start reading your blog regularly.

Comment by Daermonn on Reply to Holden on 'Tool AI' · 2012-06-12T07:49:35.179Z

This really gets at the heart of what intuitively struck me wrong (read: "confused me") in Eliezer's reply. Both Eliezer and Holden engage with the example "Google Maps AGI"; I'm not sure what the difference is - if any - between "Google Maps AGI" and the sort of search/decision-support algorithms that Google Maps and other GPS systems currently use. The algorithm Holden describes and the neat A* algorithm Eliezer presents seem to do exactly what the GPS on my phone already does. If the Tool AI we're discussing is different from current GPS systems, then what is the difference? As near as I understand it, AGI is intelligent across different domains in the same way a human is, while Tool AI (= narrow AI?) is the sort of single-domain search algorithm we see in GPS. Am I missing something here?
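
For concreteness, here's a minimal sketch of the kind of single-domain A* route search I have in mind (the road network, costs, and heuristic are all invented for illustration - a toy example, not code from either post):

```python
import heapq

def a_star(graph, heuristic, start, goal):
    # graph: node -> list of (neighbor, edge_cost)
    # heuristic: node -> estimated remaining cost to goal (admissible)
    # Standard best-first search ordered by cost-so-far + heuristic.
    frontier = [(heuristic[start], 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                g = cost + edge_cost
                heapq.heappush(frontier, (g + heuristic[neighbor], g,
                                          neighbor, path + [neighbor]))
    return None  # no route exists

# Toy road network; the "utility function" is just fixed edge costs.
roads = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
estimates = {"A": 2, "B": 2, "C": 1, "D": 0}
print(a_star(roads, estimates, "A", "D"))  # (['A', 'B', 'C', 'D'], 3)
```

Everything the search "wants" is baked into the edge costs and the goal node; nothing in it models or acts on the world outside the graph, which is what makes it feel categorically different from an agent.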

But if what Holden means by Tool AI is just this sort of simple(r), non-reflective search algorithm, then I understand why he thinks it's significantly less risky; GPS-style Tool AI only gets me lost when it screws up, instead of killing the whole human species. Sure, the tool is imperfect: sometimes it doesn't match my utility function, and returns a route that leads me into traffic, or takes too long, or whatever; sometimes it doesn't correctly model what's actually going on, and thinks I'm on the wrong street. Even so, gradually building increasingly agentful Tool AIs - ones that take more of the optimization process away from the human user - seems like it would be much safer than just swinging for the fences right away.

So I think that Vaniver is right when he says that the heart of Holden's Tool AI point is "Well, if AGI is such a tough problem, why even do it?"

This being said, I still think that Eliezer's reply succeeds. I think his most important point is the one about specialization: evaluating safety arguments about both AGI and Tool AI demands domain expertise, and the best way to cultivate that expertise is with an organization that specializes in training FAI-grade programmers. The analogy with the sort of optimal-charity work Holden specializes in was particularly weighty.

I see Eliezer's response to Holden's challenge - "why do AGI at all?" - as: "Because you need FAI-grade skills to know if you need to do AGI or not." If AGI is an existential threat, and you need FAI-grade skills to know how to deal with that threat, then you need FAI-grade programmers.

(Though, I don't know if "The world needs FAI-grade programmers, even if we just want to do Tool AI right now" carries through to "Invest in SIAI as a charity," which is what Holden is ultimately interested in.)

Comment by Daermonn on Rationality Quotes June 2012 · 2012-06-04T06:16:55.566Z

This speech was really something special. Thanks for posting it. My favorite sections:

"If it takes years to articulate great questions, what do you do now, at sixteen? Work toward finding one. Great questions don't appear suddenly. They gradually congeal in your head. And what makes them congeal is experience. So the way to find great questions is not to search for them-- not to wander about thinking, what great discovery shall I make? You can't answer that; if you could, you'd have made it.

The way to get a big idea to appear in your head is not to hunt for big ideas, but to put in a lot of time on work that interests you, and in the process keep your mind open enough that a big idea can take roost. Einstein, Ford, and Beckenbauer all used this recipe. They all knew their work like a piano player knows the keys. So when something seemed amiss to them, they had the confidence to notice it."

And:

"Rebellion is almost as stupid as obedience. In either case you let yourself be defined by what they tell you to do. The best plan, I think, is to step onto an orthogonal vector. Don't just do what they tell you, and don't just refuse to. Instead treat school as a day job. As day jobs go, it's pretty sweet. You're done at 3 o'clock, and you can even work on your own stuff while you're there."

Great stuff.

Comment by Daermonn on [SEQ RERUN] Changing the Definition of Science · 2012-05-11T23:07:02.047Z

This is a good one. I definitely sympathize with Eliezer's point that Bayesian probability theory is only part of the solution. For example, in philosophy of science, the deductive-nomological (D-N) account of scientific explanation is being displaced by a mechanistic view of explanation. In this context, a mechanism is an organization of parts that is responsible for some phenomenon. This change is driven by the inapplicability of D-N to certain areas of science, especially the biomedical sciences, where matters are more complex and we can't really deduce conclusions from universal laws; instead, people are treating law-like regularities as phenomena to be explained by appeal to the organized interactions of underlying parts.

For example, instead of explaining, "You display symptoms Y; all people with symptoms Y have disease X; therefore, you have disease X," mechanists explain by positing a mechanism whose functioning constitutes the phenomenon to be explained. This seems to me intimately related to Eliezer's "reduce-to-algorithm" stance, and reducing abstract beliefs to physical mechanisms seems like a pretty good way to generalize that stance here. In addition, certain mechanistic philosophers have done work connecting mechanisms and mechanistic explanation with Bayesian probability, and with Pearl's work on Bayesian networks and causality. Jon Williamson at Kent has my favorite account: he uses Recursive Bayesian Networks to model this sort of mechanistic thinking quantitatively.
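
To make the contrast concrete, here's a toy two-node network in the spirit of Pearl - the numbers are invented, and this is my own gloss on the idea, not Williamson's actual Recursive Bayesian Network formalism:

```python
# Toy mechanism: disease X -> symptoms Y. Instead of deducing
# "symptoms Y, therefore disease X", we update a prior on X through
# the mechanism's conditional structure. All numbers are made up.
p_x = 0.01              # prior: P(disease X)
p_y_given_x = 0.90      # P(symptoms Y | X): how the mechanism produces Y
p_y_given_not_x = 0.05  # P(symptoms Y | not X): other sources of Y

p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)
p_x_given_y = p_y_given_x * p_x / p_y  # Bayes' rule
print(round(p_x_given_y, 3))  # ~0.154: an update, not a deduction
```

The structure of the mechanism (how X produces Y) does the explanatory work, and the output is a posterior probability rather than a deduced conclusion - which is exactly where the D-N picture breaks down in messy domains like medicine.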

Comment by Daermonn on Rationality Quotes February 2012 · 2012-02-10T05:45:35.189Z

From Shattered Perception (Discard all the cards in your hand, then draw that many cards.):

"You must shatter the fetters of the past. Only then can you truly act."

I think this one takes the cake, in terms of rationality.