Comments

Comment by reup on I think I've found the source of what's been bugging me about "Friendly AI" · 2012-06-10T21:30:56.555Z · LW · GW

I think part of the issue is that, while Eliezer's conception of these issues has continued to evolve, we keep pointing (and being pointed) back to posts that he only partially agrees with. We might chart a more accurate position by winding through a thousand comments, but that's a difficult thing to do.

To pick one example from a recent thread, here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that would have no idea from reading older articles.

It seems like our local SI representatives recognize the need for an up-to-date summary document to point people to. Until then, our current refrain of "read the sequences" will grow increasingly misleading as more and more updates and revisions are spread across years of comments (that said, I still think people should read the sequences :) ).

Comment by reup on Building toward a Friendly AI team · 2012-06-08T09:39:19.443Z · LW · GW

I agree, but as I understand it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?

I honestly worry that this could kill funding for the organization, which doesn't seem optimal in any scenario.

Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?

SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.

PD: Hm, so what about the project lead?

SI: Well, he's done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.

PD: Huh. So, how has the work gone so far?

SI: That's the best part, we're keeping it all secret so that our advances don't fall into the wrong hands. You wouldn't want that, would you?

PD: [backing away slowly] No, of course not... Well, I need to do a little more reading about your organization, but this sounds, um, good...

Comment by reup on Peter Thiel's AGI discussion in his startups class @ Stanford [link] · 2012-06-08T06:41:46.962Z · LW · GW

I remember reading and enjoying that article (this one, I think).

I would think that the same argument applies regardless of the scale of the donations, assuming there are no fixed transaction costs (which might not hold). My read is that it comes down to the question of risk versus uncertainty. Under genuine uncertainty, investing widely can make sense if you believe those investments will provide useful information to clarify the actual problem structure, so that you can accurately target future giving.

Comment by reup on Building toward a Friendly AI team · 2012-06-08T03:31:43.159Z · LW · GW

And if they're relying on perfect secrecy/commitment over a group of even a half-dozen researchers as the key to their safety strategy, then by their own standards they should not be trying to build an FAI.

Comment by reup on Peter Thiel's AGI discussion in his startups class @ Stanford [link] · 2012-06-08T03:27:33.042Z · LW · GW

Remember, he's playing an iterated game. So if we assume that right now he has very little information about which area is most important to invest in, or which areas are most likely to produce the best return, then playing a wider distribution early in order to gain information that maximizes the utility of later rounds of donations/investments seems rational.
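As a toy sketch of that explore/exploit logic (all cause names and payoff numbers below are invented for illustration, not a model of anyone's actual giving), an epsilon-greedy strategy donates widely at first and concentrates on the best-looking target as its estimates firm up:

```typescript
// Toy epsilon-greedy "bandit" over donation targets. Cause names and
// payoff numbers are made up purely to illustrate explore vs. exploit.
type Cause = { name: string; pulls: number; estPayoff: number };

const causes: Cause[] = [
  { name: "cause A", pulls: 0, estPayoff: 0 },
  { name: "cause B", pulls: 0, estPayoff: 0 },
  { name: "cause C", pulls: 0, estPayoff: 0 },
];

// The "true" value of each cause is unknown to the donor and is only
// revealed, noisily, by actually donating to it.
const TRUE_PAYOFFS = [0.9, 0.5, 0.3];
const observe = (i: number) => TRUE_PAYOFFS[i] + (Math.random() - 0.5) * 0.4;

let epsilon = 0.5; // start out exploring widely...

for (let round = 0; round < 200; round++) {
  const explore = Math.random() < epsilon;
  const i = explore
    ? Math.floor(Math.random() * causes.length)
    : causes.reduce(
        (best, c, j) => (c.estPayoff > causes[best].estPayoff ? j : best),
        0
      );

  const reward = observe(i);
  causes[i].pulls += 1;
  causes[i].estPayoff += (reward - causes[i].estPayoff) / causes[i].pulls; // running mean

  epsilon *= 0.99; // ...and shift toward exploiting the best estimate later
}

console.log(causes); // estimates converge toward the (hidden) true payoffs
```

The point is just the shape of the strategy: rounds "wasted" on low-value causes early aren't wasted at all if they sharpen the estimates that the larger, later rounds depend on.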

Comment by reup on List of Problems That Motivated UDT · 2012-06-08T01:13:23.353Z · LW · GW

Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.

Comment by reup on What are you working on? June 2012 · 2012-06-07T23:40:33.045Z · LW · GW

On the HTML side, grab a free template (quite a few sites out there offer nice ones). I find it's easier to keep working when my project at least looks decent. Also, at least for me, I feel more comfortable showing it to friends for advice when there's some superficial polish.

Also, when you see something (a button, control, or effect) on a site, open the source. A decent percentage of the time you'll find it's actually open source already (there are lots of JS frameworks out there) and you can just copy it directly. If not, you'll still learn how it's done.
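For a sense of how simple most of it is once you look (the element ids and class name below are made up for this sketch), the effect behind a typical button often boils down to something like:

```typescript
// Hypothetical example of the kind of thing you find by viewing source:
// a button that toggles a CSS class on a panel, which the template's
// stylesheet then animates. The ids and class name here are invented.
const button = document.querySelector<HTMLButtonElement>("#show-details");
const panel = document.querySelector<HTMLElement>("#details-panel");

button?.addEventListener("click", () => {
  // Most "fancy" effects reduce to toggling a class like this.
  panel?.classList.toggle("is-open");
});
```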

Good luck!

Comment by reup on Building toward a Friendly AI team · 2012-06-07T23:28:30.716Z · LW · GW

"Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools."

There seems to be far more commitment to a particular approach than is justified by the evidence (at least by what they've publicly revealed).

Comment by reup on Building toward a Friendly AI team · 2012-06-07T23:12:38.242Z · LW · GW

I think we can safely stipulate that there is no universal route to contest success, or to Luke's other example of an 800 math SAT.

But I can answer your question: yes, I'm sure that at least some of the students are receiving supplemental tutoring. Not necessarily contest-focused, but still.

Anecdotally: the two friends I had from undergrad who were IMO medalists (about 10 years ago) had both gone through early math tutoring programs (and both had a parent who was a math professor). All of my undergrad friends who had an 800 math SAT had either received tutoring or had parents who bought them study materials (most of them did not look back fondly on the experience).

Remember, for any of these tests, there's a point where even a small amount of training to the test overwhelms a good deal of talent. Familiarity with problem types, patterns, etc. can vastly improve performance.

I have no way to evaluate the scope of your restrictions on doing "super-well" or the particular requirement that the tutoring start at an "early age" (although at least one of the anecdotal IMO cases did a Kumon-type program that started in preschool).

Are there some people who don't follow that route? Certainly. However, I do think that it's important to be aware of other factors that may be present.

Comment by reup on Peter Thiel's AGI discussion in his startups class @ Stanford [link] · 2012-06-07T21:32:29.547Z · LW · GW

I think it could be consistent if you treat his efforts as designed to gather information.

Comment by reup on Help please! · 2012-06-07T21:25:13.336Z · LW · GW

Another version of this is to offer to go talk with a priest/pastor yourself. One thing this does is buy you time while your mom adjusts. If you find a decent one to talk with (if your church has one, youth pastors are sometimes a bit more open), the conversation won't be too unpleasant (don't view it as convincing them; just lay out your reasoning).

Your mom may be pleased that someone "higher up" is dealing with you. Also, when they fail to convince you, it helps her to let go of the idea that there was something more she could have done.

Comment by reup on Strategic research on AI risk · 2012-06-07T21:18:01.317Z · LW · GW

This. It comes off as amateurish, not knowing which details are important to include. But hopefully these semi-informal discussions help with refining the pitch and presentation before they're standing in front of potential donors.

Comment by reup on Building toward a Friendly AI team · 2012-06-07T20:56:34.816Z · LW · GW

"Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do."

I question this assumption. I think that building an FAI team may damage your overall goal of AI risk reduction for several reasons:

  1. By setting yourself up as a competitor to other AGI research efforts, you strongly decrease the chance that they will listen to you. It will be far easier for them to write off your calls for consideration of friendliness issues as self-serving.

  2. You risk undermining your credibility on risk reduction by tarring yourselves as crackpots. In particular, looking for good mathematicians to work out your theories comes off as "we already know the truth, now we just need people to prove it."

  3. You're a small organization. Splitting your focus is not a recipe for greater effectiveness.

Comment by reup on Building toward a Friendly AI team · 2012-06-07T20:40:36.550Z · LW · GW

The fact that you're looking for "raw" math ability seems questionable. If their most recent achievements are the IMO or the SAT, you're looking at high schoolers or early undergrads (Putnam winners have their tickets punched at top grad schools and will be very hard to recruit). Given that, you'll have at least a 5-10 year lag while they continue learning enough to do basic research.

Comment by reup on Building toward a Friendly AI team · 2012-06-07T20:32:15.471Z · LW · GW

One somewhat close quote that popped to mind (from lukeprog's article on philosophy):

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality.

Comment by reup on Building toward a Friendly AI team · 2012-06-07T20:13:24.729Z · LW · GW

One other issue is that a near-precondition for IMO-type recognition is coming from at least a middle-class family and having either an immediate family member or an early teacher able to recognize and direct that talent. Worse, as these competitions have increased in stature, an increasing number of the students are pushed by parents and given regular tutoring and preparation. Those sorts of hothouse personalities would seem to be among the riskier to put on an FAI team.