Posts

Rationality Boot Camp 2011-03-22T08:37:46.395Z
South Bay Less Wrong Meetup Saturday Feb 26th 2011-02-23T22:45:00.578Z
Singularity Institute Party Feb 22nd 2011-02-22T06:45:37.255Z
Bye Bye Benton: November Less Wrong Meetup 2010-11-09T05:35:05.641Z
Call for Volunteers: Rationalists with Non-Traditional Skills 2010-10-28T21:38:26.737Z
September Less Wrong Meetup aka Eliezer's Bayesian Birthday Bash 2010-09-08T04:51:31.686Z
Friendly AI at the NYC Future Salon 2010-02-16T02:57:47.014Z

Comments

Comment by Jasen on A Critique of Leverage Research's Connection Theory · 2012-09-22T01:47:03.192Z · LW · GW

Thanks for the info PJ!

PCT looks very interesting and your EPIC goal framework strikes me as intuitively plausible. The current list of IGs that we reference is not so much part of CT as an empirical finding from our limited experience building CT charts. Neither Geoff nor I believe that all of them are actually intrinsic. It is entirely possible that we and our subjects are simply insufficiently experienced to penetrate below them. It looks like I've got a lot of reading to do :-)

Comment by Jasen on A Critique of Leverage Research's Connection Theory · 2012-09-20T20:42:49.928Z · LW · GW

Hey Peter,

Thanks for writing this.

I’m the primary researcher working on Connection Theory at Leverage. I don’t have time to give an in-depth argument for why I consider CT to be worth investigating at the moment, but I will briefly respond to your post:

Objections I & II:

I think that your skeptical position is reasonable given your current state of knowledge. I agree that the existing CT documents do not make a persuasive case.

The CT research program has not yet begun. The evidence presented in the CT documents is from preliminary investigations carried out shortly after the theory’s creation when Geoff was working on his own.

My current plan is as follows: Come to understand CT and how to apply it well enough to design (and be able to carry out) a testing methodology that will provide high-quality evidence. Perform some preliminary experiments. If the results are promising, create training material and programs that produce researchers who reliably create the same charts, predictions and recommendations from the same data. Recruit many aspiring researchers. Train many researchers. Begin large-scale testing.

Objection III:

I agree that a casual reading of CT suggests that it conflicts with existing science. I thought so as well and initially dismissed the theory for just that reason. Several extended conversations with Geoff and the experience of having my CT chart created convinced me otherwise. Very briefly:

The brain is complicated and the relationship between brain processes and our everyday experience of acting and updating is poorly understood. Since CT is trying to be a maximally elegant theory of just these things, CT does not attempt to say anything one way or the other about the brain and so strictly speaking does not predict that beliefs can be changed by modifying the brain. That said, it is easy to specify a theory, which we might call CT’, that is identical in every respect except that it allows beliefs to be modified directly by altering the brain.

“Elegant updating” is imprecisely defined in the current version of CT. This is definitely a problem with the theory. That said, I don’t think the concept is hopelessly imprecise. For one, elegant updating as defined by CT does not mean ideal Bayesian updating. One of the criteria of elegance is that the update involve the fewest changes from the previous set of beliefs. This means that a less globally-elegant theory may be favored over a more globally-elegant one due to path-dependence. This introduces another source of less-than-optimally-rational beliefs. If we imagine a newly formed CT-compliant mind with a very minimal belief system updating in accordance with this conception of elegance and the constraints from its intrinsic goods (IGs), I think we should actually expect its beliefs to be totally insane, even more so than the H&B literature would suggest. Of course, we will need to do research in developmental psychology to confirm this suspicion.

It is surprisingly easy to explain many common biases within the CT framework. The first bias you mentioned, scope insensitivity, is an excellent example. Studies have shown that the amount people are willing to donate to save 2,000 birds is about the same as what they are willing to donate to save 200,000 birds. Why might that be?

According to CT, people only care about something if it is part of a path to one of their IGs. The IGs we’ve observed so far are mostly about particular relationships with other people, group membership, social acceptance, pleasure and sometimes ideal states of the world (world-scale IGs or WSIGs) such as world peace, universal harmony or universal human flourishing. Whether or not many birds (or even humans!) die in the short term is likely to be totally irrelevant to whether or not a person’s IGs are eventually fulfilled. Even WSIGs are unlikely to compel donation unless the person believes their donation to be a necessary part of a strategy in which a very large number of people donate (and thus produce the desired state). It just isn’t very plausible that your individual attempt to save a small number of lives through donation will be critical to the eventual achievement of universal flourishing (for example). That leaves social acceptance as the next most likely explanation for donation. Since the number of social points people get from donating tends not to scale very well, there is no reason to expect the amount that they donate to scale. This is not the only possible CT-compliant explanation for scope insensitivity, but my guess is that it is the most commonly applicable.

I’ll close by saying that, like Geoff, I do not believe that CT is literally true. My current belief is that it is worthy of serious investigation and that the approach to psychology it has inspired (mapping out individual beliefs and actions in a detailed and systematic manner) will be of great value even if the theory itself turns out not to be.

Comment by Jasen on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far · 2012-01-01T02:59:34.125Z · LW · GW

I'll chime in to agree with both lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.

Comment by Jasen on The benefits of madness: A positive account of arationality · 2011-04-26T03:25:23.234Z · LW · GW

Awesome, I'm very interested in sharing notes, particularly since you've been practicing meditation a lot longer than I have.

I'd love to chat with you on Skype if you have the time. Feel free to send me an email at jasen@intelligence.org if you'd like to schedule a time.

Comment by Jasen on The benefits of madness: A positive account of arationality · 2011-04-23T03:43:55.470Z · LW · GW

First of all, thank you so much for posting this. I've been contemplating composing a similar post for a while now but haven't because I did not feel like my experience was sufficiently extensive or my understanding was sufficiently deep. I eagerly anticipate future posts.

That said, I'm a bit puzzled by your framing of this domain as "arational." Rationality, at least as LW has been using the word, refers to the art of obtaining true beliefs and making good decisions, not following any particular method. Your attitude and behavior with regard to your "mystical" experiences seem far more rational than both the hasty enthusiasm and reflexive dismissal that are more common. Most of what my brain does might as well be magic to me. The suggestion that ideas spoken to you by glowing spirit animals should be evaluated in much the same way as ideas that arise in less fantastic (though often no less mysterious) ways seems quite plausible and worthy of investigation. You seem to have done a good job at keeping your eye on the ball by focusing on the usefulness of these experiences without accepting poorly-thought-out explanations of their origins.

It may be the case that we have the normative, mathematical description of what rationality looks like down really well, but that doesn't mean we have a good handle on how best to approximate this using a human brain. My guess is that we've only scratched the surface. Peak or "mystical" experiences, much like AI and meta-ethics, seem to be a domain in which human reasoning fails more reliably than average. Applying the techniques of X-Rationality to this domain with the assumption that all of reality can be understood and integrated into a coherent model seems like a fun and potentially lucrative endeavor.

So now, in traditional LW style, I shall begin my own contribution with a quibble and then share some related thoughts:

Many of them come from spiritual, religious or occult sources, and it can be a little tricky to tease apart the techniques from the metaphysical beliefs (the best case, perhaps, is the Buddhist system, which holds (roughly) that the unenlightened mind can't truly understand reality anyway, so you'd best just shut up and meditate).

As far as I understand it, the Buddhist claim is that the unenlightened mind fails to understand the nature of one particular aspect of reality: its own experience of the world and relationship to it. One important goal of what is typically called "insight meditation" seems to be to cause people to really grok that the map is not the territory when it comes to the category of "self." What follows is my own, very tentative, model of "enlightenment":

By striving to dissect your momentary experience in greater and greater detail, the process by which certain experiences are labeled "self" and others "not-self" becomes apparent. It also becomes apparent that the creation of this sense of a separate self is at least partially responsible for the rejection of or "flinching away" from certain aspects of your sensory experience and that this is one of the primary causes of suffering (which seems to be something like "mental conflict"). My understanding of "enlightenment" is as the final elimination (rather than just suppression) of this tendency to "shoot the messenger." This possibility is extremely intriguing to me because it seems like it should eliminate not only suffering but what might be the single most important source of "wireheading" behaviors in humans. People who claim to have achieved it say it's about as difficult as getting an M.D. Seems worthy of investigation.

Rather than go on and on here, I think it's about time I organized my experience and research into a top-level post.

Comment by Jasen on Rationality Boot Camp · 2011-04-06T21:26:30.639Z · LW · GW

Attention: Anyone still interested in attending the course must get their application in by midnight on Friday the 8th of April. I would like to make the final decision about who to accept by mid-April and need to finish interviewing applicants before then.

Comment by Jasen on Rationality, Singularity, Method, and the Mainstream · 2011-03-23T01:55:02.939Z · LW · GW

But "produc[ing] formidable rationalists" sounds like it's meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera.

Precisely. The Singularity Institute was founded due to Eliezer's belief that trying to build FAI was the best strategy for making the world a better place. That is the goal. FAI is just a sub-goal. There is still consensus that FAI is the most promising route, but it does not seem wise to put all of our eggs in one basket. We can't do all of the work that needs to be done within one organization and we don't plan to try.

Through programs like Rationality Boot Camp, we expect to identify people who really care about improving the world and radically increase their chances of coming to correct conclusions about what needs to be done and then actually doing so. Not only will more highly-motivated, rational people improve the world at a much faster rate, but they will also serve as checks on our sanity. I don't expect that we are sufficiently sane at the moment to reliably solve the world's problems and we're really going to need to step up our game if we hope to solve FAI. This program is just the beginning. The initial investment is relatively small and, if we can actually do what we think we can, the program should pay for itself in the future. We'd have to be crazy not to try this. It may well be too confusing from a PR perspective to run future versions of the program within SingInst, but if so we can just turn it into its own organization.

If you have concrete proposals for valuable projects that you think we're neglecting and would like to help out with I would be happy to have a Skype chat and then put you in contact with Michael Vassar.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T23:44:39.760Z · LW · GW

Good question. I haven't quite figured this out yet, but one solution is to present everyone we are seriously considering with as much concrete information about the activities as we can and then give each of them a fixed number of "outs," each of which can be used to get out of one activity.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:36:52.753Z · LW · GW

Definitely all-consuming.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:36:11.053Z · LW · GW

Definitely apply, but please note your availability in your answer to the "why are you interested in the program?" question.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:32:12.750Z · LW · GW

It will definitely cost us money but, due to its experimental nature, will be free for all participants for this iteration at least. If we continue offering it in the future, we will probably charge money and offer scholarships.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:28:04.889Z · LW · GW

Edited post.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:26:02.196Z · LW · GW

Congrats indeed!

We'll definitely be writing up a detailed curriculum and postmortem for internal purposes and I expect we'll want to make most if not all of it publicly available.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:21:46.628Z · LW · GW

Probably, though I'm not sure when.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:18:28.709Z · LW · GW

Whoops, thank you. Post edited.

Comment by Jasen on Rationality Boot Camp · 2011-03-22T21:17:45.581Z · LW · GW

That book was part of what gave me the idea. I expect most of the exercises will come from it.

Comment by Jasen on Less Wrong NYC: Case Study of a Successful Rationalist Chapter · 2011-03-17T21:06:24.449Z · LW · GW

Preach it, brother!

;-)

Comment by Jasen on Less Wrong at Burning Man 2011 · 2011-03-09T01:21:46.852Z · LW · GW

I'll be there.

Comment by Jasen on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-03T20:57:56.445Z · LW · GW

I've been able to implement something like this to great effect. Every time I notice that I've been behaving in a very silly way, I smile broadly, laugh out loud and say "Ha ha! Gotcha!" or something to that effect. I only allow myself to do this in cases where I've actually gained new information: Noticed a new flaw, noticed an old flaw come up in a new situation, realized that an old behavior is in fact undesirable, etc. This positively reinforces noticing my flaws without doing so to the undesirable behavior itself.

This is even more effective when implemented in response to someone else pointing out one of my flaws. It's a little more difficult to carry out because I have to suppress a reflex to retaliate/defend myself that doesn't come up as much when I'm my own critic, but when I succeed it almost completely eliminates the social awkwardness that normally comes with someone critiquing me in public.

Comment by Jasen on Berkeley LW Meet-up Friday December 10 · 2010-12-06T21:55:38.540Z · LW · GW

I'll be there.

Comment by Jasen on Diplomacy as a Game Theory Laboratory · 2010-11-16T03:07:28.608Z · LW · GW

Jasen himself explained it as a desire to prove that SIAI people were especially cooperative and especially good at game theory, which I suppose worked.

Close, I was more trying to prove that I could get the Visiting Fellows to be especially cooperative than trying to prove that they were normally especially cooperative. I viewed it more as a personal challenge. I was also thinking about the long-term, real-world consequences of the game's outcome. It was far more important to me that SIAI be capable of effective cooperation and coordination than that I win a board game, and I thought rallying the team to stick together would be a good team-building exercise. Also, if I actually imagine myself in the real-world situation the game is kinda based off of, I would hugely prefer splitting the world with 5 of my friends to risking everything to get the whole thing. If I delve into my psychology a bit more, I must admit that I tend to dislike losing a lot more than I like getting first place. Emotionally, ties tend to feel almost as good as flat-out wins to me.

Finally, an amusing thing to note about that game is that, before it started, without telling anyone, I intentionally became sufficiently intoxicated that I could barely understand the rules (most people can't seem to tell unless I tell them first, which I find hilarious). This meant that my only hope of not losing was to forge a powerful alliance.

Comment by Jasen on Call for Volunteers: Rationalists with Non-Traditional Skills · 2010-10-28T22:55:48.495Z · LW · GW

On a related note, a friend of ours named John Ku has negotiated a donation to SIAI of 20% of the stock of his company, MetaSpring. MetaSpring is a digital marketing consultancy that mostly sells a service of rating the effectiveness of advertising campaigns, and they are currently hiring. They are looking for experience with:

Ruby on Rails
MySQL / SQL
web design / user interface
JavaScript
WordPress
PHP
web programming in general
sales
client communication
Unix system administration
Photoshop / slicing
HTML & CSS
Drupal

If you're interested, contact John Ku at ku@johnsku.com

Comment by Jasen on Transparency and Accountability · 2010-08-21T21:00:48.320Z · LW · GW

Jonah,

Thanks for expressing an interest in donating to SIAI.

(a) SIAI has secured a 2 star donation from GiveWell for donors who are interested in existential risk.

I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we're trying to figure out the best way to proceed.

If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don't know of any way to tell what percentage of worlds that branch off from this one go on to flourish and how many go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?

We think that UFAI is the largest known existential risk and that the most complete solution - FAI - addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don't mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books and personal communication) and make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We've met and worked with several promising candidates in the past few months. We'll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.

(b) You publically apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at very least need to be qualified in that way.

It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu's summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to "help save the human race" is to contribute to FAI being built before UFAI. If we thought there was another higher-value project then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and singularitarian second.

Comment by Jasen on Report from Humanity+ UK 2010 · 2010-04-26T01:47:38.286Z · LW · GW

I was the main organizer for the NYC LW/OB group until I moved out to the Bay Area a few weeks ago. From my experience, if you want to encourage people to get together with some frequency you need to make doing so require as little effort and coordination as possible. The way I did it was as follows:

We started a google group that everyone interested in the meetups signed up for, so that we could contact each other easily.

I picked an easily accessible location and two times per month (second Saturdays at 11am and fourth Tuesdays at 6pm) on which meetups would always occur. I promised to show up at both times every month for at least two hours regardless of whether or not anyone else showed up. I figured the worst that could happen was that I'd have two hours of peace and quiet to read or get some work done, and that if at least one person showed up we'd almost certainly have a great time.

We've been doing that for about 9 months and I've never been left alone. In fact, we found that twice a month wasn't enough and started meeting every week a few months ago.

At the moment, only one meetup per month is announced to the "public" through the meetup.com group (so that we don't have to explain all of the basics to new people every meeting), one is for general unfocused discussion and two are rationality-themed game nights (such as poker training).

You should probably set up the google/meetup.com group first, poll people on what times work best for them and what kinds of activities they are most interested in, and then take it from there.

I wish you the best of luck, and I'd be happy to answer any other questions you might have.

Comment by Jasen on Friendly AI at the NYC Future Salon · 2010-02-20T00:45:22.855Z · LW · GW

Yeah, it will be recorded. I'll add a link to the post when the video is up.

Comment by Jasen on Regular NYC Meetups · 2009-10-01T16:03:49.537Z · LW · GW

Thank you for posting this!

Now I feel bad about not spreading the word sooner...but better late than never I suppose.

So far, attendance has ranged from 4 to 8 people per meetup. If enough people are interested in meeting regularly we might have to switch venues. I had a good experience at the Moonstruck Diner during another meetup, so that would probably be my second choice:

http://nymag.com/listings/restaurant/moonstruck-diner/

It was quiet, cheap, had a large empty dining area at the back, and they left us alone to talk for over four hours. If anyone has any other suggestions, feel free to post them here or on the google group.

I hope to see more of you all in the future!

Comment by Jasen on NY-area OB/LW meetup Saturday 10/3 7 PM · 2009-10-01T10:30:03.988Z · LW · GW

This is an excellent opportunity to announce that I recently organized an OB/LW discussion group that meets in NYC twice a month. We had been meeting sporadically ever since Robin's visit back in April. The regular meetings only started about a month ago and have been great fun. Here is the google group we've been using to organize them:

http://groups.google.com/group/overcomingbiasnyc

We meet every 2nd Saturday at 11:00am and every 4th Tuesday at 6:00pm at Georgia's Bake Shop (on the corner of 89th street and Broadway). The deal is that I show up every time and stay for at least two hours regardless of whether or not anyone else comes.

I've been meaning to post this for a while but I don't have enough Karma...

Comment by Jasen on Rationalists lose when others choose · 2009-06-16T19:32:25.667Z · LW · GW

There are instances where nature penalizes the rational. For instance, revenge is irrational, but being thought of as someone who would take revenge gives advantages.

I would generally avoid calling a behavior irrational without providing specific context. Revenge is no more irrational than a peacock's tail. They are both costly signals that can result in a significant boost to your reputation in the right social context...if you are good enough to pull them off.