Steam Greenlight 2012-07-10T05:11:43.065Z
Consider a robot vacuum. 2012-06-05T08:08:45.546Z
SIAI - An Examination 2011-05-02T07:08:30.768Z
Seattle, WA - Less Wrong Meetup - Sunday February 20th, 2:00 PM 2011-02-08T00:02:33.499Z


Comment by BrandonReinhart on MIRI's 2015 Winter Fundraiser! · 2015-12-14T23:58:00.361Z · LW · GW

Donation sent.

I've been very impressed with MIRI's output this year, to the extent that I am able to judge. I don't have the domain-specific ability to evaluate the papers, but material is being produced at a sustained pace. I've also read much of the thinking around VAT, related open problems, definitions of concepts like foreseen difficulties... the language and framework for carving up the AI safety problem has really moved forward.

Comment by BrandonReinhart on LessWrong 2.0 · 2015-12-10T17:02:41.975Z · LW · GW

Well, I totally missed the diaspora. I read Slate Star Codex (but not the comments) and had no idea people were posting things in other places. It surprises me that it even has a name, "rationalist diaspora." It seemed to me that people ran out of things to say or the booster rocket thing had played itself out. This is probably because I don't read Discussion, only Main, and as Main received fewer posts I stopped coming to Less Wrong. As "meet up in area X" took over the stream of content I unsubscribed from my RSS reader. Over the past few years the feeling of a community completely evaporated for me. Good to hear that there is something going on somewhere, but it still isn't clear where that is. So archiving LW and embracing the diaspora to me means so long and thanks for all the fish.

Comment by BrandonReinhart on Why startup founders have mood swings (and why they may have uses) · 2015-12-10T15:48:07.601Z · LW · GW

When you’re “up,” your current strategy is often weirdly entangled with your overall sense of resolve and commitment—we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice.

Aside 1: I run into many developers who aren't able to separate their idea from their identity. It tends to make them worse at customer- and product-oriented thinking. In a high-bandwidth collaborative environment, it leads to an assortment of problems. They might not suggest an idea, because they think the group will shoot it down and they will be perceived as a generator of poor ideas. Or they might not relinquish an idea that the group wants to modify, or work on an alternative to, because they feel that, too, is failure. Or they might not critically evaluate their own idea to the standard they would apply to any idea that didn't come from their own mind. Over time it can lead to selective sidelining of that person in a way that requires a deliberate effort to undo.

The most effective collaborators are able to generate many ideas with varying degrees of initial quality and then work with the group to refine those ideas or reject the ones that are problematic. They are able to do this without taking collateral damage to their egos. These collaborators see the ideas they generate as products separate from themselves, products meant to be improved by iteration by the group.

I've seen many cases where this entanglement of ego with idea generation gets fixed (through involvement of someone who identifies the problem and works with that person) and some cases where it doesn't get fixed (after several attempts, with bad outcomes).

I know this isn't directly related to the post, but it occurred to me when I read the quoted part above.

Aside 2: I have similar mood swings when I think about the rationalist community. "Less Wrong seems dead, there is no one to talk to." then "Oh look, Anna has a new post, the world is great for rationalists." I think it's different from the work related swings, but also brought to mind by the post.

Comment by BrandonReinhart on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-12-09T00:02:17.329Z · LW · GW

I've always thought that "if I were to give, I should maximize the effectiveness of that giving" but I did not give much nor consider myself an EA. I had a slight tinge of "not sure if EA is a thing I should advocate or adopt." I had the impression that my set of beliefs probably didn't cross over with EAs and I needed to learn more about where those gaps were and why they existed.

Recently, through Robert Wiblin's Facebook, I have encountered more interesting arguments and content in EA. I had no concrete beliefs about EA, only vague impressions (not having had much time to research it in depth in the past). I had developed an impression that EA was about people maximizing giving to a self-sacrificial degree that I found uncomfortable. I also have repeatedly bounced off the animal activism -- I have a hard time separating my pleasure in eating meat from my understanding of the ethical arguments. (So I figured I would be considered a lawful evil person by the average EA.)

However, now having read a few more things even just today, I feel like these are misplaced perceptions of the movement. Reading the 2014 summary, posted in a comment here by Tog, makes me think that:

  • EAs give in a pattern similar to what I would give. However, I personally weight the x-risk and rationality-teaching stuff probably a bit higher than the mean.

  • EAs give about as much as I'd be willing to give before I run into egoist problems (where it becomes painful in a stupid way I need to work to correct). So 10% seems very reasonable to me. For whatever reason, I had thought that "EA" meant "works to give away most of what they earn and lives a spartan life." I think this comes from not knowing any EAs and instead reading 80,000 Hours and other resources without completely processing the message. Probably some selective reading going on, and I need to review how that happened.

  • The "donate to one charity" argument is so much easier for me to plan around.

Overall, I should have read the 2014 results much sooner; they helped me realize that my perspective is probably a lot closer to the average LWer's than I had thought. This makes me feel like taking further steps to learn more about EA and making concrete plans to give some specific amount from an EA perspective is a thing I should do. Which is weird, because I could have done all of that anyway, but I was letting myself bounce off of the unpleasurable conclusions of giving up meat eating or giving a large portion of my income away. Neither of which I have to do in the short term to give effectively or participate in the EA community. Derp.

Comment by BrandonReinhart on Whole Brain Emulation: Looking At Progress On C. elgans · 2015-10-22T03:08:01.327Z · LW · GW

I'm curious about the same thing as [deleted].

Comment by BrandonReinhart on Course recommendations for Friendliness researchers · 2013-01-11T03:49:49.642Z · LW · GW

Furthermore, a hard-to-use text may be significantly less hard to use in a classroom, where you have peers, teachers, and other forms of guidance to help digest the material. Recommendations for specialists working at home or outside a classroom might not be the same as the recommendations you would give to someone taking a particular class at Berkeley or in some other environment where those resources are available.

A flat-out bad textbook might even seem really good when it is actually something else, such as the teacher, the method, or the support, that makes the book work.

Comment by BrandonReinhart on Firewalling the Optimal from the Rational · 2012-10-08T23:44:48.356Z · LW · GW

"A directed search of the space of diet configurations" just doesn't have the same ring to it.

Comment by BrandonReinhart on Problematic Problems for TDT · 2012-06-04T02:11:56.505Z · LW · GW

Thanks for this. I hadn't seen someone pseudocode this out before. This helps illustrate that interesting problems lie in the scope above (callers to tdt_utility(), etc.) and below (implementation of tdt(), etc.).

I wonder if there is a rationality exercise in 'write pseudocode for problem descriptions, explore the callers and implementations'.
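As a toy illustration of that exercise (all names and structure below are my own invention, not taken from the post or from any actual TDT implementation), Newcomb's problem might be pseudocoded like this:

```python
# A toy sketch of Newcomb's problem. The names here are hypothetical
# illustrations of the "pseudocode the problem description" exercise.

def predictor(agent_policy):
    """The predictor runs (a model of) the agent's policy and fills
    the opaque box iff it predicts one-boxing."""
    return agent_policy() == "one-box"

def payoff(agent_policy):
    opaque_box_full = predictor(agent_policy)  # prediction happens first
    choice = agent_policy()                    # then the agent chooses
    if choice == "one-box":
        return 1_000_000 if opaque_box_full else 0
    else:  # two-box: transparent box always adds $1,000
        return (1_000_000 if opaque_box_full else 0) + 1_000

# "Exploring the callers": what does each fixed policy earn?
one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
```

Writing even this much surfaces the interesting questions hiding above and below: how does `predictor` get access to the agent's policy, and what happens if the policy tries to inspect the predictor in turn?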

Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T07:52:17.199Z · LW · GW

Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.

Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T07:45:13.417Z · LW · GW

Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like 'characteristic of a metaphysical belief system.' The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn't consistent with my more fundamental, empirical beliefs.

So in my mind I have 'WARNING!' tags (intentionally) attached to mysticism. So when I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick. Or to save time or avoid the risk of being eaten I often don't approach at all.

If I find that I have a metaphysical belief or if I detect that a fact/idea may be metaphysical, then I attach the mystical tag to it and go find my stick.

If something in my mind has the mysticism tag attached to it inappropriately, then I want to reclassify that thing -- slightly reduce the size of the tag or create a branch through more specific concept definition and separation.

So I don't really see value in attaching the mysticism tag to things that don't directly warrant it. What you call a mystical litany I'd call a mnemonic technique for reminding yourself of a useful process or dangerous bias. Religions have litanies, but litanies are not inherently religious concepts.

So no, I won't consider mysticism itself as a useful brain hack. Mysticism is allocated the purpose of 'warning sign'. It's not the only warning sign, but it's a useful one.

Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T07:15:55.792Z · LW · GW

As an aside, what are IFS and NVC?

Edit: Ah, found links.



Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T05:41:14.076Z · LW · GW

I had a dim view of meditation because my only exposure to meditation prior was in mystic contexts. Here I saw people talk about it separate from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out if it was bullshit or not. It doesn't seem unreasonable to me that folks interested could explore it and see what turns up.

Would I choose to do so? No. I have plenty of other low hanging fruit and the amount of non-mystic guidance around meditation seems really minimal, so I'd be paying opportunity cost to cover unknown territory with unknown payoffs.

I don't feel oddly attached to any beliefs here. Maybe I'll go search for some research. Right now I feel if I found some good papers providing evidence for or against meditation I would shift appropriately.

I don't see myself updating my beliefs about meditation (which are weak) unduly because of an argument from authority. They changed because the arguments were reasoned from principles or with process I accept as sound. Reasoning like "fairly credible sources like Feynman claim they can learn to shift the perception of the center of self-awareness to the left. (Feynman was also a bullshitter, but let's take this as an example...) What do we think he meant? Is what we think he meant possible? What is possible? Is that reproducible? Would it be useful to be able to do that? Should we spend time trying to figure out if we can do that?" This would be what I consider to be a discussion in the space of meditation-like stuff that is non-mystical and enjoyable. It isn't going to turn me into a mystic any more than Curzi's anecdotes about his buddy's nootropics overdoses will turn me into a juicer.

I didn't take away the message 'meditation is super-useful.' I took away the message 'meditation is something some people are messing with to see what works.' I'm less worried about that than if someone said 'eating McDonalds every day for every meal is something some people are messing with to see what works.' because my priors tell me that is really harmful whereas my priors tell me meditating every day is probably just a waste of time. A possibly non-mystical waste of time.

Now I'm worried comment-readers will think I'm a blind supporter of meditation. It is more accurate to say I went from immediate dismissal of meditation to a position of seeing the act of meditating as separable from a mystic context.

Now my wife is telling me I should actually be MORE curious about meditation and go do some research.

Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T02:22:34.154Z · LW · GW

To address your second point first, the -attendees- were not a group who strongly shared common beliefs. Some attended due to lots of prior exposure to LW, a very small number were strong x-risk types, several were there only because of recent exposure to things like Harry Potter and were curious, many were strongly skeptical of x-risks. There were no discussions that struck me as cheering for the team -- and I was actively looking for them!

Some counter evidence, though: there was definitely a higher occurrence of cryonicists and people interested in cryonics than you'd find in any random sample of 30 people. I.e.: some amount >2 vs some amount close to 0. So we weren't a wildly heterogeneous group.

As for the instructors - Anna and Luke were both very open about the fact that the rationality-education process is in its infancy and that among the various SIAI members there is discussion about how to proceed. I could be wrong, but I interpreted Eliezer as being somewhat skeptical of the minicamp process. When he visited, he said he had almost no involvement with the minicamp. I believe he said he was mainly a sounding board for some of the ideas. I'm interpreting his involvement in this thread now and in related threads/topics as a belief shift on his part toward the minicamp being valuable.

I think your order of magnitude increases well describes a bad conceivable scenario, but poorly describes the scenario I actually witnessed.

Now, for cost, I don't know. I'm attending a guitar camp in August that will be 7 days and cost me $2000. I would put the value of minicamp a fair amount above the value of the guitar camp, but I wouldn't necessarily pay $3000 to attend minicamp. To answer the price question I would ask:

1) What else do I plan to spend the $1500 on? What plans or goals suffer setbacks? What would I otherwise buy?

2) What do I value the information from attending at? I can see how it would be easier to measure the value of information from a guitar camp than from one about something that feels more abstract. So maybe the first step is to find the concrete value you've already gotten out of LW. If you've read the sequences and you think there are useful tools there, you might start with 'What would be the estimated value of being able to clarify the things I'm unsure about?' So you take some measurement of the value you've already gotten from LW and do some back-of-the-napkin math with that.

3) Consider your level of risk aversion versus the value of minicamp now vs later. If these new minicamps are successful, more people will post about them. Attendees will validate or negate past attendee experiences. It may be that if $1500 is too much for you when measured against your estimation of the pay-off discounted by risks, that you simply wait. Either the camps will be shown to be valuable or they will be shown to be low value.

4) Consider some of the broad possible future worlds that follow from attending minicamp. In A you attend and things go great, you come out with new rationality tools. In B you attend and your reaction is neutral and you don't gain anything useful. In C you attend and have poor experiences or worse suffer some kind of self-damage (ex: your beliefs shift in measurably harmful ways that your prior self would have not agreed to submit to ahead of time). Most attendees are suggesting you'll find yourself in worlds like A. We could be lying because we all exist in worlds like C or we're in B but feel an obligation to justify attending the camp or whatever. Weigh your estimate of our veracity with your risk aversion. Update the connected values.
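A back-of-the-napkin version of steps 2 through 4 can be made concrete as an expected-value sketch. Every number below is an invented placeholder, not an estimate from this thread:

```python
# Toy expected-value sketch for the attend/don't-attend decision.
# All probabilities and dollar values are made-up placeholders.

p_great, v_great = 0.6, 5000      # world A: clear gains from new tools
p_neutral, v_neutral = 0.3, 0     # world B: nothing useful gained
p_harm, v_harm = 0.1, -2000       # world C: some self-damage
cost = 1500                       # attendance fee

ev = (p_great * v_great
      + p_neutral * v_neutral
      + p_harm * v_harm
      - cost)
# Attend iff ev beats the value of the next-best use of the $1,500,
# after discounting for your own risk aversion.
```

Shifting the world-C probability up (to model distrust of attendee reports) is exactly the "weigh your estimate of our veracity" step in point 4.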

I would suggest it is unlikely that SIAI is so skilled at manipulation that it has succeeded in subverting an entire group of people from diverse backgrounds and with some predisposition to be skeptical. Look for evidence that some people exist in B or C (probably from direct posts stating as much -- people would probably want to prevent other people from being harmed).

There are other things to put into a set of considerations around whether to spend the money, but these are some.

Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T01:40:34.413Z · LW · GW

I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due to a combination of effects:

1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use.

2) I was not well mentally organized -- my rationality concepts were all individual floating bits not well sewn together -- so I reached a point where new concepts didn't fit into my map very easily.

I agree things got more disorganized, in fact, I remember on a couple occasions seeing the 'this isn't the outcome I expected' look on Anna's face and the attempt to update and try a different approach or go with the flow and see where things were leading. I marked this responsiveness as a good thing.

As for your "ugly," it's important to note that it was a casual discussion among attendees. I suppose this highlights the risk that new ideas gain credibility merely by close temporal association with other ideas you're giving credibility to? Example: I talked to a lot of curious people that week about how Valve's internal structure works, but no one should necessarily run off and establish a Valve-like company without understanding Valve's initial conditions, goals, employee make-up, and other institutions, and comparing them with their own initial conditions, goals, employees, institutions, etc.

Comment by BrandonReinhart on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T01:07:38.786Z · LW · GW

I attended the 2011 minicamp.

It's been almost a year since I attended. The minicamp has greatly improved me along several dimensions.

  1. I now dress better and have used techniques provided at minicamp to become more relaxed in social situations. I'm more aware of how I'm expressing my body language. It's not perfect control and I've not magically become an extrovert, but I'm better able to interact in random social situations successfully. Concretely: I'm able to sit and stand around people I don't know and feel and present myself as relaxed. I dress better and people have noticed and I've received multiple comments to that effect. I've chosen particular ways to present myself and now I get comments like 'you must play the guitar' (this has happened five times since minicamp haha). This is good since it loads the initial assumptions I want the person to load.

  2. I've intentionally hacked my affectation towards various things to better reach my goals. For years I never wanted to have children. My wife said (earlier this year, after minicamp) that she wanted to have kids. I was surprised and realized that given various beliefs (love for wife, more kids good for society, etc) I needed to bring my emotions and affectations in line with those goals. I did this by maximizing positive exposure to kids and focusing on the good experiences...and it worked. I'm sure nature helped, but I came to a change of emotional reaction that feels very stable. TMI: I had my vasectomy reversed and am actively working on building kid version 1.0

  3. Minicamp helped me develop a better mental language for reasoning around rationalist principles. I've got tools for establishing mental breakpoints (recognizing states of surprise, rationalization, etc) and a sense for how to improve on weak areas in my reasoning. I have a LOT of things I still need to improve. Many of my actions still don't match my beliefs. The up side is that I'm aware of many of the gaps and can make progress toward solving them. There seems to be only so much I can change at once, so I've been prioritizing everything out.

  4. I've used the more concise, direct reasoning around rationality at my job at Valve Software. I use it to help make better decisions. Concretely: when making decisions around features to add to DOTA 2, I've worked particularly hard at quickly relinquishing failed ideas that I generated. I have developed litanies like 'my ideas are a product, not a component of my identity.' Before I enter into interactions I pause and think 'what is my goal for this interaction?' The reasoning tools from minicamp have helped me better teach and interpret the values of my company (which are very similar). I helped write a new employee guide that captures Valve values, but uses tools such as Anna Salamon's "Litany for Simplified Bayes" to cut straight to the core concepts. "If X is true, what would the world look like?" "If X is not true, what would the world look like?" "What does the world look like?" I've been influential in instituting predictions meetings before we launch new features.

  5. I've been better able to manage my time, because I'm more aware of the biases and pitfalls that lie before me. I think more about what 'BrandonReinhart2020' wants than what the current me wants. (Or at least my best guess at what I think he would want: not being dead, being a badass guitar shredder, etc.) This has manifested itself concretely in my self-education around the guitar. When I went to minicamp I had only just started learning guitar. Since then I've practiced 415 hours (I work full time, so this is all in my spare time) and have developed entirely new skills. I can improv, write songs, etc. Minicamp provided some inspiration, yes, but there were also real tools that I've employed. A big one was coming home and doing research on human learning and practice. This helped me realize that my goals were achievable. Luke gave sessions on how to do efficient research. Critch gave a session on hacking your affectations. I used this to make practice something I really, really like doing (I listened to music I liked before practicing, I would put objects like role-playing books or miniatures that I liked around my practice area -- nerdy yes, but it worked for me -- and I would drink a frosty beer after practicing three hours in a row. Okay, so that last one shows that my health beliefs and goals may not be entirely in line, but it served an objective here). Now I can easily practice for 3 hours and enjoy every moment of it. (This is important; before, I would use that time for World of Warcraft and other pursuits that just wasted time and didn't improve me.)
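The three "Simplified Bayes" questions in item 4 map directly onto an odds-ratio update. A minimal sketch, with invented numbers standing in for real estimates:

```python
# Worked odds-ratio form of the three litany questions, invented numbers.
# "If X is true, what would the world look like?"     -> P(evidence | X)
# "If X is not true, what would the world look like?" -> P(evidence | not X)
# "What does the world look like?"                    -> the evidence we saw.

prior_odds = 1 / 3        # corresponds to P(X) = 0.25
p_e_given_x = 0.8         # the observed evidence is likely if X is true
p_e_given_not_x = 0.2     # ...and unlikely otherwise

posterior_odds = prior_odds * (p_e_given_x / p_e_given_not_x)
posterior_prob = posterior_odds / (1 + posterior_odds)
```

The point of the litany is the ratio: evidence only moves you insofar as the world-if-X looks different from the world-if-not-X.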

I've been in the Less Wrong orbit for a long time and have had the goal of improving my rationality for a long time. I've read Yudkowsky's writing since the old SL4 days. I followed Overcoming Bias from the beginning. I can't say that I had a really good grasp on which concepts were the most important until after minicamp. There's huge value in being able to ask questions, debate a point, and just clarify your confusion quickly.

I have also been an SIAI skeptic. John Salvatier and I both thought that SIAI might be a little religion-like. Our mistake. The minicamp was a meeting of really smart people who wanted to help each other win more. The minicamp was genuinely about mental and social development and the mastery of concepts that seem to lead to a better ability to navigate complex decision trees toward desired outcomes.

While we did talk about existential risk, the SIAI never went deep into high-shock-level concepts that might alienate attendees. It wasn't an SIAI funding press. It wasn't an AGI press. In fact, I thought they almost went too light on this subject (but I came to modern rationality from trans/posthumanism, and most people in the future will probably get to trans/posthumanism from modern rationality, so discussion about AGI and such feels normal to me). Point being, if you have concerns about this, you'll feel a lot better as you attend.

I would say the thing that most discomforted me during the event was the attitude toward meditation. I realized, though, that this was an indicator of my preconceptions about meditation and not necessarily due to facts about meditation. After talking to several people about it, I learned that there wasn't any funky mysticism inherent to meditation, just closely associated with it. Some people are trying to figure out if it can be used as a tool and are trying to figure out ways to experiment around it, etc. I updated away from 'meditation is a scary religious thing' toward 'meditation might be another trick for the bag.' I decided to let other people bear the burden/risk of doing the research there, though. :)

Some other belief shifts related to minicamp: I have greatly updated toward the Less Wrong style rationality process as being legitimate tools for making better decisions. I have updated a great deal toward the SIAI being a net good for humanity. I have updated a great deal toward the SIAI being led by the right group of people (after personal interactions with Luke, Anna, and Eliezer).

Comparing minicamp to a religious retreat seems odd to me. There is something exciting about spending time with a bunch of very smart people, but it's more like the kind of experience you'd have at a domain-specific research summit. The experience isn't designed to manipulate through repeated and intense appeals to emotion, guilt, etc. (I was a Wesleyan Christian when I was younger and went to retreats like Emmaus, and I still remember them pressing a nail sharply into my palm as I went to the altar to pray for forgiveness). It's more accurate to think of minicamp as a rationality summit, with the instructors presenting findings, sharing techniques for the replication of those findings, and an ongoing open discussion of the findings and the process used to generate them. And like any good summit, there are parties.

If you're still in doubt, go anyway. I put the probability of self-damage due to attending minicamp at extremely low, compared to self-damage from attending your standard college level economics lecture or a managerial business skills improvement workshop. It doesn't even blip on a radar calibrated to the kind of self-damage you could do speculatively attending religious retreats.

If you're a game developer, you would probably improve your ability to make good decisions around products more by attending SIAI Minicamp than you would by attending GDC (of course, GDC is still valuable for building a social network within the industry).

Comment by BrandonReinhart on Best shot at immortality? · 2012-03-23T06:52:57.730Z · LW · GW

What we know about cosmic eschatology makes true immortality seem unlikely, but there's plenty of time (as it were) to develop new theories, make new discoveries, or find possible new solutions. See:

Cirkovic "Forecast for the Next Eon: Applied Cosmology and the Long-Term Fate of Intelligent Beings"

Adams "Long-term astrophysical processes"

for an excellent overview of the current best estimates of how long a human-complexity mind might hope to survive.

Just about everything Cirkovic writes on the subject is really engaging.

Comment by BrandonReinhart on Best shot at immortality? · 2012-03-23T06:46:27.504Z · LW · GW

More importantly, cryonics is useful for preserving information. (Specifically, the information stored by your brain.) Not all of the information that your body contains is critical, so just storing your spinal cord + brain is quite a bit better than nothing. (And cheaper.) Storing your arms, legs, and other extremities may not be necessary.

(This is one place where the practical reasoning around cryonics hits ugh fields...)

Small-tissue cryonics has been more advanced than whole-body cryonics. This may no longer be the case, but it certainly was, say, four years ago. So storing your brain alone gave you a better bet at good information retention than storing the whole body. I believe that whole-body methods have improved somewhat in the past few years, but they still have a ways to go. Part of the problem lies in efficiently perfusing cryoprotectants through the body.

If you place credence on the possibility of ems, then you might consider investing in neuro-preservation. In that case, you wouldn't need revival, only good scanning and emulation tech.

Edit: Also, I highly recommend the Alcor site. The resources there span the gamut from high level to detailed and there's good coverage of the small tissue and cryoprotectant problems among other topics.

Comment by BrandonReinhart on I'm starting a game company and looking for a co-founder. · 2012-03-18T00:21:26.431Z · LW · GW

Your company plan sounds very much like how Valve is structured. You may find it challenging to maintain your desired organizational structure, given that you also plan to be dependent on external investment. Also, starting a company with the express goal of selling it as quickly as possible conflicts with several ways you might operate your company to achieve a high degree of success. Many of the recent small studios that have gone on to generate large amounts of revenue relative to their size (Terraria, Minecraft, etc.) are independently owned and build 'service-based' software that seeks to keep the community engaged.

Alexei, I would suggest an alternative and encourage you to apply to Valve. 1) It wouldn't take much of your time to try. 2) It may help you reach your goals more quickly. 3) You don't have to invest in building a rational organization (which is costly and hard), since one already exists.

It would be a career-oriented decision (few people leave Valve once they start there), and I know you are interested in applying yourself to existential risk as completely as you can, but you should consider the path of getting really good at satisfying the needs of customers and then, through that, directing resources towards the existential risk problem. It may feel like you are less engaged, because you aren't there solving the hard problems yourself -- and if you have a high degree of confidence that you are the one to solve those problems then maybe you should pursue a direct approach -- but it is a path you should give serious thought to.

I wouldn't advise you to go work at any random company. Most game companies -- particularly large ones -- are structured in a way that doesn't mean you'd have a good chance of individual success (versus working anywhere else or doing something else).

Valve has one of the highest profits per employee in the world and is wholly owned by its employees. The company compensates very well. So my advice is specific to considering an application to Valve.

Comment by BrandonReinhart on What are YOU doing against risks from AI? · 2012-03-18T00:08:12.150Z · LW · GW

Carl Shulman has convinced me that I should do nothing directly (in terms of labor) on the problem of AI risks, and should instead become successful elsewhere and then direct resources as I am able toward the problem.

However, I believe I should 1) continue to educate myself on the topic, 2) try to become a better rationalist so that when I do have resources I can direct them effectively, 3) work toward being someone who can gain access to more resources, and 4) find ways to better optimize my lifestyle.

At one point I seriously considered running off to San Francisco to be in the thick of things, but I now believe that would have been a strictly worse choice. Sometimes the best thing you can do is what you already do well, and then direct the proceeds toward helping people. Even when it feels like this is selfish, disengaged, or remote.

Comment by BrandonReinhart on Cult impressions of Less Wrong/Singularity Institute · 2012-03-17T23:41:11.536Z · LW · GW

I will say that I feel 95% confident that SIAI is not a cult because I spent time there (mjcurzi was there also), learned from their members, observed their processes of teaching rationality, hung out for fun, met other people who were interested, etc. Everyone involved seemed well-meaning, curious, critical, etc. No one was blindly following orders. In the realm of teaching rationality, there was much agreement that it should be taught, some agreement on how, and total openness to failure and to finding alternate methods. I went to the minicamp wondering (along with John Salvatier) whether the SIAI was a cult and obtained lots of evidence to push me far away from that position.

I wonder if the cult accusation comes in part from the fact that it seems too good to be true, so we feel a need for defensive suspicion. Rationality is very much about changing one's mind, and thinking about this makes us suspicious that the goals of SIAI are to change our minds in a particular way. Then we discover that the SIAI's goals are in fact (in part) to change our minds in a particular way, so we think our suspicions are justified.

My model tells me that stepping into a church is several orders of magnitude more psychologically dangerous than stepping into a Less Wrong meetup or the SIAI headquarters.

(The other 5% goes to things like "they are a cult and totally duped me and I don't know it", "they are a cult and I was too distant from their secret inner cabals to discover it", "they are a cult and I don't know what to look for", "they aren't a cult but they want to be one and are screwing it up", etc. I should probably feel more confident about this than 95%, but my own inclination to be suspicious of people who want to change how I think means I'm being generous with my error. I have a hard time giving these alternate stories credit.)

Comment by BrandonReinhart on Anti-akrasia tool: like for data nerds · 2011-10-12T08:45:09.861Z · LW · GW

Thanks for taking the time to respond.

I rebuilt my guitar thing and added today's datapoint and now it seems to be predicting my path properly. Makes more sense now. I think I was confused at first because I had made a custom graph instead of using the "Do More" prefab.

Neat software!

Comment by BrandonReinhart on 1001 PredictionBook Nights · 2011-10-12T05:38:17.827Z · LW · GW

An exercise we ran at minicamp -- which seemed valuable, but requires a partner -- is to take a position and argue for it for some time. Then, at some interval, you switch and argue against the position (while your partner defends it). I used this once at work, but haven't had a chance since. The suggestion to swap sides mid-argument surprised the two participants, but it did lead to a more effective discussion.

The exercise sometimes felt forced if the topic was artificial and veered too far off course, or if one side was simply convinced and felt that further artificial defense was unproductive.

Still, it's a riff on this theme.

Comment by BrandonReinhart on Anti-akrasia tool: like for data nerds · 2011-10-12T01:23:50.019Z · LW · GW
  • It's a little strange that I have to set up the first data point when I register the goal. I'd rather set up the goal, then do the first day's work. I suppose this is splitting hairs.

I created two goals:

Both goals have perfectly tight roads. Is this correct? I would like to give myself some variance, since I'll probably never do exactly 180 minutes in a day. To start, I fudged the first day's value to the goal value.

Based on how you describe the system, it looks like I should expect to pay $5 if I practice 179 minutes.
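A minimal sketch of my reading of the rule, with a perfectly tight road. The `lane_width` parameter and the 180-minute goal are my own illustrative names and numbers, not Beeminder's actual API:

```python
def derails(minutes_done, goal=180, lane_width=0):
    """With a perfectly tight road (lane_width=0), any shortfall derails.

    goal and lane_width are hypothetical, for illustration only.
    """
    return minutes_done < goal - lane_width

# 179 minutes against a 180-minute tight road: a derailment (and the $5 pledge).
print(derails(179))
# Some allowed variance would absorb the one-minute shortfall.
print(derails(179, lane_width=5))
```

If the system supports some notion of lane width, setting it above zero is what I mean by "giving myself some variance."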

  • How do I delete a goal if I screw it up in some way?

  • Is the goal value a median or is it a target?

  • I would like the ability to expressly exclude days at a certain rate. Like "I will practice ear training approximately X minutes per day, 5 out of 7 days a week."

  • Is there a 'vacation' feature? If I'm on a holiday, I might not be able to maintain certain goals. I would expect vacations to have to be declared in advance, though, to prevent someone from using it as a method of worming out of an impending failure.

  • I really like that you are iterating on your concept publicly. This is the way to go. I hope you are able to move towards success. Are you tracking your software development goals in the same software?

  • An exponential punishment curve seems harsh. Is the concern that a linear rate of punishment might lead to basically buying indulgences? I would think that even linear curves at good rates would create an incentive.

  • The data tracking features are interesting to me and one reason I might try this. Is there a way to export the data? If I did use this, then it would be cool to import the data into a practice log. AHA. Found the export button!

  • Some goals might contain periodic sub-goals. For example, a musical practice goal might include X days of "spend some percentage of this time on speed improvement." This is an idea for a future feature. These could spin off to become their own goal graphs if the user wanted; otherwise they would simply be children of the main goal.
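On the indulgences point above, a toy comparison (my numbers, not Beeminder's actual fee schedule) of a flat pledge versus a doubling one shows why a linear rate caps the price of slipping while an exponential one quickly doesn't:

```python
def linear_pledge(derailments, step=5):
    """Flat schedule: every derailment costs the same $5, so akrasia can be
    budgeted at a known price per slip -- the 'buying indulgences' worry."""
    return step if derailments > 0 else 0

def exponential_pledge(derailments, base=5):
    """Doubling schedule: $5, $10, $20, ... -- repeated slipping
    rapidly becomes unaffordable."""
    return base * 2 ** (derailments - 1) if derailments > 0 else 0

# Cost of the 5th derailment under each schedule.
print(linear_pledge(5), exponential_pledge(5))  # prints: 5 80
```

Under the flat schedule the fifth slip still costs $5; under doubling it costs $80, which is the argument for the harsher curve.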

Comment by BrandonReinhart on Case Study: Reading Edge's financial filings · 2011-09-21T07:11:39.977Z · LW · GW

It would actually be worthwhile to post a small analysis of Lifeboat. How they meet the crank checklist, etc. Do they do anything other than name drop on their website, etc?

Comment by BrandonReinhart on Help Fund Lukeprog at SIAI · 2011-08-25T04:05:18.568Z · LW · GW

Hiring Luke full time would be an excellent choice for the SIAI. I spent time with Luke at mini-camp and can provide some insight.

  • Luke is an excellent communicator and an agent for the efficient transmission of ideas. More importantly, he has the ability to teach these skills to others. Luke has shown this skill publicly on Less Wrong and also on his blog, with his distilled analysis of Eliezer's writing, "Reading Yudkowsky."

  • Luke is a genuine modern-day renaissance man, a true polymath. However, Luke is very aware of his limitations and has devoted significant work to finding ways of removing or mitigating them. For example, any person with a broad range of academic interests could fall prey to never acquiring useful skills in any of those areas. Luke sees this as a serious problem and wants to maximize the efficiency of searching the academic space of ideas. Again, for Luke this is a teachable skill. His session "Productivity and Scholarship" at minicamp outlined techniques for efficient research and for reducing akrasia. None of that material would be particularly surprising to a regular reader of Less Wrong -- because Luke wrote the critical posts on these subjects. Luke's suggestions were all implementable and process-focused, such as using review articles and Wikipedia to rapidly familiarize oneself with the jargon of a new discipline before doing deep research.

  • Luke is an excellent listener and is highly effective in human interaction. He is someone you enjoy speaking to, who seems interested in your views, and who is then able to tell you why you are wrong in a way that makes you feel smarter. (Compare with Eliezer, who will simply turn away when you are wrong. This is fine for Eliezer, but not ideal for SIAI as an organization.) Again, Luke understands how to teach this skill set. It seems likely that Luke would raise the social effectiveness of SIAI as an organization while also generating goodwill toward it in his dealings with others.

Luke would have a positive influence on the culture of the SIAI, the research of the SIAI, and the public face of the SIAI. Any organization would love to find someone who excels in any one of those dimensions, much less someone who excels in all of them.

Mini-camp was an exhausting challenge for all of the instructors. Luke never once showed that exhaustion, let it dampen his enthusiasm, or let his annoyance show (except, perhaps, as a tactical tool to move along a stalled or irrelevant conversation). In many ways he presented the best face of "mini-camp as a consumable product." That trait (call it customer focus or product awareness) is a critical skill the SIAI has been lacking.

An example of how Luke has changed me: I was only vaguely aware of the concepts of efficient learning and study. Of course, I knew about study habits and putting in time at practice in a certain sense, but those usually emphasize practice and time investment (which is important) while underemphasizing the value of finding the right things to spend time on.

It was only when I read Luke's posts, spoke to him, and participated in his sessions at mini-camp that I acquired a language for thinking about and introspecting on efficient learning. Specifically, I've applied his standards and process to my study of guitar and classical music, and I now feel I've effectively solved the question of where to spend my time; I am solely in the realm of doing the actual practice, composition, and research. I've advanced more in the past few months of music study than I did in the prior year and a half I played guitar.

In the past month I have actively applied his skill of skimming review material (review books on classical composers) and then using Wikipedia to rapidly drill down on confusing component subjects. In the past month I have applied his skill of thinking vicariously about someone else's victory -- one that represents goals of my own -- to make a hard road seem less like a barrier and more like negotiable terrain. In the past month I have applied his skill of weighing the merits of multiple competing areas of interest, determining the one with the most impact, and pursuing it (knowing I could later scoop up the missing pieces more quickly).

I did all of that with the awareness that Luke was the source of the skills and language that let me do those things.

I am more awesome because of Luke.

Comment by BrandonReinhart on The Blue-Minimizing Robot · 2011-07-24T03:57:23.460Z · LW · GW

Is "transform function" a technical term from some discipline I'm unfamiliar with? I interpret your use of the phrase as "an operation on some input that results in corresponding output." I'm having trouble finding meaning in your post that isn't redefinition.

Comment by BrandonReinhart on Scholarship: How to Do It Efficiently · 2011-05-10T02:48:27.017Z · LW · GW

Here is another question, regarding the basic methodology of study. When you are reading a scholarly work and encounter an unfamiliar concept, do you stop to identify the concept, or do you continue and add the concept to a list to be pursued later? In other words, do you queue the concept for later inspection or 'step into' it for immediate inspection?

I expect the answer to be conditional, but knowing what conditions is useful. I find myself sometimes falling down the rabbit hole of chasing chained concepts. Wikipedia makes this mistake easy.

Comment by BrandonReinhart on Scholarship: How to Do It Efficiently · 2011-05-10T02:25:08.633Z · LW · GW

Here's a question: does learning to read faster provide a net marginal benefit to the pursuit of scholarship? Are there narrow, focused, and confirmed methods of learning to read faster that yield positive results? This would be beneficial to everyone, but perhaps more so to those of us whose full-time jobs are not scholarship.

Comment by BrandonReinhart on The 5-Second Level · 2011-05-08T06:13:20.913Z · LW · GW

Grunching. (Responding to the exercise/challenge without reading other people's responses first.)

Letting go is important. A failure in letting go is to cling to the admission of belief in a thing which you have come not to believe, because the admission involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but defend it to protect my identity as an authority on the subject and to avoid embarrassment.

An example of success: I stop myself, admit that I have changed my mind, that the idea was in error, and then relinquish the belief.

A 5-second-level description:

  • I notice that my actual belief state and my professed belief state do not match. This is a trigger that signals that further conscious analysis is needed. What I believe (the suggestion will have undesirable side effects) and what I desire to profess (the suggestion is good) are in conflict.

  • I notice that I feel impending embarrassment or similar types of social pain. This is also a trigger. The feeling that a particular action may be painful is going to influence me to act in a way to avoid the pain. I may continue to defend a bad idea if I'm worried about pain from retreat.

  • Noticing these states triggers a feeling of caution or revulsion: I may act in a way opposed to what I believe merely to defend my ego and identity.

  • I take a moment to evaluate my internal belief state and what I desire to profess. I actively override my subconscious desire to evade pain with statements that follow from my actual internal belief. I say "I'm sorry. I appear to be wrong."

An exercise to cause these sub-5-second events:

I proposed a scenario to my wife in which she was leading an important scientific project. She was known among her team as an intelligent leader, and her team members looked up to her with admiration. A problem on the project was presented: without a solution, the project could not move forward. I told my wife that she had had a customary flash of insight and began detailing the solution: a plan to resolve the problem and move the project forward.

Then, I told her that a young member of her team revealed new data about the problem. Her solution wouldn't work. Even worse, the young team member looked smug about the fact she had outsmarted the project lead. Then I asked "what do you do?"

My wife said she would admit her solution was wrong and then praise the young team member for finding a flaw. Then she said this was obviously the right thing to do and asked me what the point of posing the scenario was.

I'm not sure my scenario/exercise is very good. The conversation that followed the scenario was more informative for us than the scenario itself.

Comment by BrandonReinhart on Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011) · 2011-05-08T02:49:35.321Z · LW · GW

I donated $275 to the SIAI via the Facebook page. Given the flight prices on Orbitz, this should cover somebody. Maybe not an east coaster or someone overseas.

Pledge fulfilled!

Also: I will be attending mini-camp and have also gotten my own ticket.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-05T04:34:12.759Z · LW · GW

Yes. 40 per week.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-04T23:37:19.671Z · LW · GW

I would be willing to do this work, but I need some "me" time first. The SIAI post took a bunch of spare time and I'm behind on my guitar practice. So let me relax a bit and then I'll see what I can find. I'm a member of Alcor and John is a member of CI and we've already noted some differences so maybe we can split up that work.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-04T23:34:50.141Z · LW · GW

He is full time. According to the filings he reports 40 hours of work for the SIAI. (Form 990 2009, Part VII, Section A -- Page 7).

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-04T16:48:50.620Z · LW · GW

"Michael Vassar's Persistent Problems Group idea does need funding, though it may or may not operate under the SIAI umbrella."

It sounds like they have a similar concern.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-03T20:05:11.950Z · LW · GW

I agree, this doesn't deserve to be downvoted.

It should be possible for the SIAI to build security measures while also providing some transparency into the nature of that security in a way that doesn't also compromise it. I would bet that Eliezer has thought about this, or at least thought about the fact that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI plans.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T21:26:17.416Z · LW · GW

At this point an admin should undelete the original SIAI Fundraising discussion post. I can't seem to do it myself. I can update it with a pointer to this post.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T21:24:57.256Z · LW · GW

Thanks, I added a note to the text regarding this.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T16:45:34.594Z · LW · GW

Yeah, I'll update it when the 2010 documents become available.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T16:43:30.060Z · LW · GW

Added to the overview section.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T16:03:53.888Z · LW · GW

I didn't know about that! I will update the post to use it as soon as I can. Thanks! Most of my work on this post was done by editing the HTML directly instead of using the WYSIWYG editor.

EDIT: All of the images are now hosted on lesswrong.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T15:27:06.484Z · LW · GW

The older draft contains some misinformation. Much is corrected in the new version. I would prefer people use the new version.

Comment by BrandonReinhart on SIAI - An Examination · 2011-05-02T15:24:06.296Z · LW · GW

Typo fixed.

Comment by BrandonReinhart on Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011) · 2011-04-28T03:51:13.219Z · LW · GW

I will donate the amount without earmarking it. It will cover the cost of sending someone to the event.

I don't see a lot of value in earmarking funds for the SIAI. I'm working on a document about SIAI finances, and from reading the Form 990s I believe they use their funds efficiently. Given my low knowledge of their internal workings and their immediate and medium-term goals, I would bet that they are better at figuring out the best use of the money than I am. Earmarking would increase the chance the money is used inefficiently, not decrease it.

Comment by BrandonReinhart on SIAI Fundraising · 2011-04-27T17:39:36.152Z · LW · GW

Can everyone see all of the images? I received a report that some appeared broken.

Comment by BrandonReinhart on SIAI Fundraising · 2011-04-27T07:26:37.212Z · LW · GW

Once I finish the todo at the top and get independent checking on a few things I'm not clear on, I can post it to the main section. I don't think there's value in pushing it to a wider audience before it's ready.

Comment by BrandonReinhart on SIAI Fundraising · 2011-04-27T02:18:13.373Z · LW · GW

Zvi Mowshowitz! Wow, color me surprised. Zvi is a retired professional Magic player. I used to read his articles and follow his play. Small world.

Comment by BrandonReinhart on SIAI Fundraising · 2011-04-26T16:46:31.634Z · LW · GW

I'm also going to see if I can get a copy of the 2010 filing.

Edit: The 2002 and on data is now largely incorporated. Still working on a few bits. Don't have the 2010 data, but the SIAI hasn't necessarily filed it yet.

Comment by BrandonReinhart on SIAI Fundraising · 2011-04-26T16:45:14.621Z · LW · GW


The section that led me to my error was 2009 III 4c. The amount listed as expenses is $83,934 where your salary is listed in 2009 VII Ad as $95,550. The text in III 4c says:

"This year Eliezer Yudkowsky finished his posting sequences on Less Wrong [...] Now Yudkowsky is putting together his blog posts into a book on rationality. [...]"

This is listed next to two other service accomplishments (the Summit and Visiting Fellows).

If I had totaled the program accomplishments section I would have seen that I was counting some money twice (and also noticed that the total in this field doesn't feed back into the main sheet's results).

Please accept my apology for the confusion.

Comment by BrandonReinhart on Are Girl Scout Cookies Deliciously Evil? A Case Study in Evaluating Charities by Yourself · 2011-04-26T03:06:16.546Z · LW · GW

I -- thoughtlessly -- hadn't considered donating to the SIAI as a matter of course until recently (I helped run a fundraiser for something else through my company, and this made me think about it). Now reading the documentation on GuideStar has me thinking about it more...

Looking at the SIAI filings, I'd be interested in knowing more about the ~$118k that was misappropriated by a contractor (reported in 2009). I hadn't heard of that before. For an organization that raises less than or close to half a million a year, that's a painful blow.

Peter Thiel's contributions make up a significant part of the SIAI's income. I notice that in 2009 the organization raised less than $100k from non-'excess' contributions. In other words, the organization is largely funded by a small group of big donors. I wonder how this compares to other organizations of a similar size? Is there a life cycle to bootstrapping organizations in which they transition from small pools of big donors to more stable funding from a broad contributor base?

Naive Googling says that grant writing is the traditional way for an organization to get funds. I think there is low-hanging fruit here. In 2009, SIAI received no grants. Could the Less Wrong community create a task force with the purpose of learning about grant writing and then executing on a few?

What about enumerating non-standard approaches? For example, the Less Wrong community includes a large number of software developers. Would it be possible to create a task force to build a software product with the purpose of donating its revenue to the SIAI? (Various corporate non-competes might be a barrier, but maybe companies would grant dispensation for charity work?)

Right now SIAI appears to be primarily dependent on a few key contributors and conference income. Expenditures are close to revenue, so they don't have much in the way of savings. Diversifying income and building up savings are probably goals we can help the organization achieve.

(Removed my post to clean it up, get verification on a bunch of stuff and ensure I'm not doing any unexpected damage.)

Comment by BrandonReinhart on Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011) · 2011-04-25T09:21:51.355Z · LW · GW

I applied to mini-camp. However, I may not be selected because of my personal situation (older, not college educated). I believe the mini-camp program is worth supporting and should be helped to succeed. I am willing to back this belief with my wallet... and in public, so you all can hold me to it.

Whether or not I am selected, I pledge to pay for the flight of one individual who is (and who isn't me). This person must live in the continental United States.

If the easiest way to fulfill this pledge is to donate to the SIAI, earmarked for this purpose*, I can do that. Otherwise, I can just buy the flight directly when the time comes.

*See comment below. This was posted before I did some investigation and put more thought into things.