Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28
post by AnnaSalamon · 2012-03-29T20:48:48.227Z · LW · GW · Legacy · 238 comments
“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”
--Michael Curzi, summer 2011 minicamp participant
Who: You and a class full of other aspiring rationalists and world-optimizers, from around the world.
What: Two 3-day weekend minicamps and one 8-day minicamp, filled with hands-on activities for applying rationality to your life, your goals, and the making of a better world. (See details in the FAQ.)
When and where: We're running three camps, so that we can do this for three sets of participants: May 11-13 and June 22-24 for the 3-day camps, and July 21-28 for the eight-day camp, all in the San Francisco Bay Area.
Why: Because you’re a social primate, and the best way to jump into a new way of thinking, make friends, and accomplish your goals is often to spend time with other primates who are doing just that.
- Hang out and explore the Bay Area with two dozen other people like you who are smart, interesting, and passionate about rationality
- Attend bonus sessions about style, body language, and confidence-building.
- Get help charting out career paths; and, entirely optionally for those interested, connect with folks at the Singularity Institute about optimal philanthropy.
Instructors:
Eliezer Yudkowsky | Anna Salamon | Julia Galef |
Andrew Critch | Luke Muehlhauser | Michael Smith |
Cost: $650 for the three-day programs; $1500 for the week-long program. This includes lodging[1], meals, and tuition.
(Note that this *still* isn't quite enough to make running minicamps sustainable in the long run: lodging + meals at retreat centers start at around $90 per person per night, the "three-day camps" include four nights, and each workshop takes a staff of about 5 full-time people, most of us at $3k/month, for over a month beforehand, counting curriculum development time (plus miscellaneous expenses). We are trying to strike a compromise between "charge enough that we can run more camps" and staying affordable, especially for our start-up phase; costs will probably go up in following years.)
Three days (or a week) isn’t long enough to learn rationality, but it's long enough to learn how to learn rationality, and to get some momentum toward doing so.
Come meet us, and see what you can do.
1. I’m older. Should I still apply?
Yes! We're aiming for a more diverse crowd and would love to add your wider set of experiences and skills.
2. I’d like to come, but I’m not sure you’ll accept me. Should I still apply?
Absolutely! You can fill out our form in as little as 10 minutes. What’s the harm?[2]
3. I’d like to come, but I can’t afford it. Should I still apply?
Yes, you should definitely apply. A limited number of scholarships will probably be available this time, and more may be available later.
(There's also an option on the application form if you want to apply but can't make any of the times - this just says that you want to be part of future minicamps and makes sure we have your application details.)
4. What will we do, exactly?
We're still working out the details. In our current model:
- Daily schedule: Every day, you'll have five hours of core workshop sessions (mostly exercises, divided into morning and evening sessions), meals shared with other participants, and shared activities such as soccer, poker, karaoke, and trips to Bay Area sites.
- Rationality: You'll practice many specific techniques (e.g. Fermi calculations, applying Bayes' theorem and cognitive biases to daily life, seeing how using fungibility can boost your goal achievement); develop a map of your rationality strengths and gaps; and learn how to continue learning rationality after the program.
- Social effectiveness: Reading and using body language; developing a fashion sense; improving social courage; and understanding why social reality is important.
- Individual meetings: You'll be able to schedule one-on-one appointments to discuss career paths you may want to take (we can help with statistics on earnings in different professions, and strategy for getting in); how to start a LW meet-up or similar community; and, optionally for those interested, how to get involved in existential risks-reducing research and action.
5. I’m new to all this. Will it make sense?
If you’ve read at least fifteen posts from the core sequences, yes it will. If you haven’t: why not read them now?
We’ll also aim for an atmosphere in which everyone is free to make mistakes and to try things, and in which people are receptive to a wide range of skill levels.
6. I’ve already read the Sequences seventeen times, and also I’m a self-made billionaire with three PhDs. Will I learn anything new?[3]
We hope so. We’re covering a good range of material, with much more of a focus on practice and exercise than in the Sequences, incorporating new lessons learned since the LW material was written, and with some instructors who've developed their own takes on rationality.
7. What evidence is there that I'll be glad I went?
After last year's minicamp, participants completed an anonymous exit survey. (With the instructions: "We're asking you these questions to learn how to run camps; please be honest; it'll help us more if you're accurate than if you're positive.") Here are their answers to the most relevant questions:
- In answer to “Zero to ten, are you glad you came?”, the median answer was 10 (mean was 9.3).
- In answer to “Zero to ten, will your life go significantly differently because you came to mini-camp?”, the median answer was 7.5 (the mean was 6.9). [This was the response that was most positively surprising to me.]
- In answer to “Zero to ten, has your epistemic rationality improved?”, the median answer was 7 (mean 6.9).
- In answer to “Zero to ten, are you more motivated to learn epistemic rationality, than you were when you came?”, the median answer was 8.5 (mean 8.1).
- In answer to “Zero to ten, have you become more skilled at modifying your emotions and dispositions?”, the median answer was 7 (mean 6.3).
- In answer to “Zero to ten, are you more motivated to modify your emotions and dispositions, than you were when you came?”, the median answer was 9 (mean 8.3).
- In answer to “Zero to ten, have you gained social skills since coming?”, the median answer was 7.5 (mean 7.2).
- In answer to "Zero to ten, did you like spending time with the other participants?", the median answer was 9 (mean 8.8).
We also asked participants for testimonials -- statements designed to be shown to others, in case they wanted to recommend such camps. They wrote:
“This was an intensely positive experience. This was easily the most powerful self-modification I've ever made, in all of the social, intellectual, and emotional spheres. I'm now a more powerful person than I was a week ago -- and I can explain exactly how and why this is true.
At mini-camp, I've learned techniques for effective self-modification -- that is, I have a much deeper understanding of how to change my desires, gather my willpower, channel my time and cognitive resources, and model and handle previously confusing situations. What's more, I have a fairly clear map of how to build these skills henceforth, and how to inculcate them in others. And all this was presented in such a way that any sufficiently analytical folk -- anyone who has understood a few of the LW sequences, say -- can gain in extreme measures.”
--Matt Elder / Fiddlemath
“I expected a week of interesting things and some useful tools to take away. What I got was 8 days of constant, deep learning, challenges to my limits that helped me grow. I finally grokked that I can and should optimize myself on every dimension I care about, that practice and reinforcement can make me a better thinker, and that I can change very quickly when I'm not constrained by artificial barriers or stress.
I would not recommend doing something like this right before another super-busy week, because I was learning at 100% of capacity and will need a lot of time to unpack all the things I learned and apply them to my life, but I came away with a clear plan for becoming better. It is now a normal and easy thing for me to try things out, test my beliefs, and self-improve. And I'm likely to be much more effective at making the world a better place as well, by prioritizing without fear.
The material was all soundly-researched and effectively taught, with extremely helpful supplemental exercises and activities. The instructors were very helpful in and out of session. The other participants were excited, engaged, challenging, and supportive.
I look forward to sharing what I've learned with my local Lesswrong meetup and others in the area. If that's even 1/4 as awesome as my time at the Mini-Camp, it will make our lives much better.”
--Ben Hoffman / Benquo
“I really can't recommend this camp enough! This workshop broke down a complex and intertwined set of skills labelled in my brain as "common sense" and distinguished each part so that I could work on them separately. Sessions on motivation, cognition, and what habits to build to not fool yourself were particularly helpful. This camp was also the first example that I've seen of people taking current cognitive science and other research, decoding it, and showing people what's been documented to work so that they can use it too. It feels to me now as though the coolest parts of the sequences have been given specific exercises and habits to build off of. This camp, and the people in it, have changed my path for the better.”
--David Jones / TheDave
You can also read the full testimonials from everyone who chose to give one.
(You can totally fill out the application in just 10 minutes, so you might want to fill in the blanks right now -- we'd like to announce the first acceptances (for May) in the next week.)
[1] More exactly, we provide a bed in a shared room at a house or retreat center rented by SIAI.
[2] Sometimes people say they’re “afraid of wasting our time” by sending in an application. In a word, no. If you’re interested in us, we’re interested in you. It takes just seconds to read someone’s form, and our experience shows that many of our highest-value people have been the ones who hesitated to apply.
[3] Okay, fine, this isn’t really a frequently asked question. But seriously, we’ll be covering a lot that isn’t in the sequences -- and the flesh-and-blood experience of meeting other aspiring rationalists is hard to duplicate.
ETA: CMR is still looking for good teachers and curriculum designers. If you're interested, please especially consider coming to a minicamp; we're hoping to find some good hires there.
ETA2: We will probably have answers to all applicants within about two weeks (i.e., by April 16 or so), with answers to the May folks probably earlier than the others. If for some reason you need your application processed *faster* than this, please shoot me an email: annasalamon at gmail.
238 comments
Comments sorted by top scores.
comment by orthonormal · 2012-03-29T22:07:01.343Z · LW(p) · GW(p)
I have a high opinion of the minicamp (after observing fiddlemath before and after the camp, anecdotally I'd say he "leveled up" in notable ways that would be worthwhile for me), and I'll probably apply. That being said:
This post gives off bad vibes to (my mental model of) outsiders -- I wouldn't be comfortable showing it to a non-LessWrong person and saying "This is what I'll be doing". I'm holding the post to a pretty high standard, because signaling matters a lot for an event where you're asking money from people and putting them through an intensive program (it pattern-matches things that people are wary of, from "multilevel marketing seminar" to "Christian retreat").
Some suggestions:
- Providing an estimated cost breakdown (from last year) rather than a vague statement ("most of it is meals and lodging") would go a long way toward showing that whatever this is, it's not an SIAI fundraiser.
- A specific example of an exercise from last summer's minicamps would be much better than a description of how awesome the exercises are in general, both for reassuring people that there's content to it and making people excited (as I was when I heard some of the things you did).
- A (partial) "program of proposed topics" would make it look substantially more serious: which instructors have which particular focuses?
Like I said, I'm already interested, and I already know that this info exists; but explicitly showing it will vastly improve the signaling value, and remove the inconvenience of having to convince one's friends and family that this isn't an obviously cultish thing.
Replies from: Bugmaster, Eliezer_Yudkowsky, Bugmaster, AnnaSalamon
↑ comment by Bugmaster · 2012-03-29T23:24:41.221Z · LW(p) · GW(p)
Here's another random idea:
When I read product or movie reviews, I tend to look for the negatives as much as (if not more than) the positives; I also pay attention to the rating distribution (especially if it's bimodal). If I can't find any negatives, I tend to assume that the product has been astroturfed, and move on.
So, did the SIAI ever receive any negative comments about the rationality minicamp? If so, where can I read them?
Replies from: AnnaSalamon, AnnaSalamon
↑ comment by AnnaSalamon · 2012-04-01T03:00:20.341Z · LW(p) · GW(p)
I posted earlier that the surveys were confidential, but actually, I just reread them, and there doesn't seem to be anything personal in the "Which parts of the camp didn't work particularly well for you, and what do you think we could do to improve?" column, which was basically the "what negative comments do you have?" column. So I pasted those answers into a new spreadsheet and posted them to the web; you can read participants' collected complaints here.
Replies from: ciphergoth, Bugmaster
↑ comment by Paul Crowley (ciphergoth) · 2012-04-01T09:44:55.966Z · LW(p) · GW(p)
This is very useful, thanks! These don't look too challenging to address - would be good to know more about what you've changed in response to some of the common themes there.
↑ comment by AnnaSalamon · 2012-03-30T01:28:05.554Z · LW(p) · GW(p)
If you or anyone wants to do a survey, I can give you the email addresses of the minicampers, and you can email them and see what you get and post it online (assuming you write in your email that you are looking for publishable comments, etc.). Let me know if you or anyone else seriously wishes to do this.
Many of the minicampers are on LW, also; the folks with testimonials above have linked LW accounts; but there is of course selection bias there.
Replies from: thejash
↑ comment by thejash · 2012-03-31T17:18:22.539Z · LW(p) · GW(p)
Someone else above asked for the negatives as well. Didn't we all submit suggestions for improvement and criticisms last year? Are those publishable? If you don't have permission, you could just email people for permission to publish their criticisms. You can definitely publish any of my comments.
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-03-31T23:45:21.055Z · LW(p) · GW(p)
The survey was anonymous, so it is hard to ask permission to share individual comments, since I don't know who wrote them (and folks were assured that their submissions were confidential). You (since you're on the minicamps google group) could email that google group and collect criticisms, and publish them.
Someone else could ask me for folks' email addresses and then do the same.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T22:22:55.080Z · LW(p) · GW(p)
Anna says we're still looking at locations but it's looking like around $115/person/night just for lodging + meals, and that the 3-day camps actually include 4 nights the way everyone counts things and we have to purchase it. Anna also notes that she and Julia and Michael get $3k/month and this takes way more of their time than just the actual days. So definitely not a Singinst fundraiser. That data is available very easily so I'm posting it right now.
A specific example of an exercise from last year's minicamp that a lot of people liked was "Value of Information" which included the technical details of how to calculate VoI and exercises in being sensitive to particular forms of scope (how much does it cost, how long does it last, how often does it happen).
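For readers unfamiliar with the technique, here's a minimal sketch of the core calculation, with toy numbers of my own rather than the camp's actual exercise:

```python
# Value of (perfect) information for a toy decision:
# two actions, two equally likely states, payoffs in dollars.
payoffs = {("act", "good"): 100, ("act", "bad"): -50,
           ("pass", "good"): 0, ("pass", "bad"): 0}
p_good = 0.5

# Best expected payoff if we must decide now, without more information:
ev_act = p_good * payoffs[("act", "good")] + (1 - p_good) * payoffs[("act", "bad")]
ev_pass = p_good * payoffs[("pass", "good")] + (1 - p_good) * payoffs[("pass", "bad")]
ev_now = max(ev_act, ev_pass)  # act: 25, pass: 0 -> 25

# Expected payoff if we could learn the state first, then choose:
ev_informed = (p_good * max(payoffs[("act", "good")], payoffs[("pass", "good")])
               + (1 - p_good) * max(payoffs[("act", "bad")], payoffs[("pass", "bad")]))  # -> 50

print(ev_informed - ev_now)  # value of perfect information: 25
```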
We're still working out the program which is why it's not posted even tentatively (we were just in the middle of some agonizing about priorities).
Replies from: None, lessdazed
↑ comment by [deleted] · 2012-03-29T22:59:19.528Z · LW(p) · GW(p)
$115/person/night
Wait, what? Can I just stay in a hostel and eat from the grocery store?
Replies from: Viliam_Bur, arundelo
↑ comment by Viliam_Bur · 2012-03-30T10:41:11.496Z · LW(p) · GW(p)
Can I just stay in a hostel and eat from the grocery store?
To make a rational decision, more information is necessary, such as: How much does the hostel cost? Does it have decent beds? How far is it from the workshop venue? What does local food cost? Etc. (Don't forget facts like: if you sleep in a different place than the majority, you deprive yourself of the opportunity for informal morning and evening chat.)
Generally: how much would I really save by "hostel & grocery store" and how much would it reduce my gains from the workshop?
Speaking for myself, I would like to cut some costs (together with the cost of flying, it comes to two months of my salary), but there is always a risk. Once I slept in a hostel with beds so bad that I really did not sleep much that night. Now if I imagine 9 such nights plus jet lag, and the resulting effect on my concentration and memory, I would get much less benefit per dollar spent.
Replies from: Bugmaster
↑ comment by lessdazed · 2012-03-29T22:29:39.771Z · LW(p) · GW(p)
I have friends and relatives who live in the area. How central to the camp is the communal living aspect? What would you charge to commute to it, if that is possible?
Replies from: AnnaSalamon, Kevin
↑ comment by AnnaSalamon · 2012-03-29T22:34:08.481Z · LW(p) · GW(p)
I guess we'd charge about 1/2 of the total (noting that you'd still be having meals with the rest of us)... but I suspect commuting is harder than you think, given how intensively scheduled it is. Err on the side of applying, and we can discuss.
Also, if anyone's unable to afford camp for whatever reason, apply anyhow and check the "needs scholarship" box and we can see what can be worked out.
↑ comment by Kevin · 2012-03-30T04:38:06.196Z · LW(p) · GW(p)
The Bay Area is rather sprawling. It can take 1.5 hours to go from Berkeley to San Jose during rush hour. If they don't live near where the camp is held, I expect you would regret the commute and find the experience more taxing and less relaxing than the participants staying on site.
Replies from: taryneast
↑ comment by taryneast · 2012-03-30T05:16:50.323Z · LW(p) · GW(p)
Agree, but... if I knew where in the Bay Area it's being held, I could tell whether it's just around the corner or a 1.5 hour commute.
Replies from: Kevin
↑ comment by Kevin · 2012-04-10T04:55:42.337Z · LW(p) · GW(p)
It's still being nailed down and it looks like it will be different locations for different camps, but for now it looks like the one week long one is at a retreat center in the deep East Bay between Walnut Creek and Danville, with one weekend camp in Berkeley and one in the South Bay. Still subject to change until we officially announce.
↑ comment by Bugmaster · 2012-03-29T22:29:42.105Z · LW(p) · GW(p)
This post gives off bad vibes to (my mental model of) outsiders.
I had the same impression; the post makes the minicamp sound like your average, run-of-the-mill, self-help seminar scam -- complete with testimonials and everything.
Replies from: thomblake, AnnaSalamon, RomeoStevens
↑ comment by AnnaSalamon · 2012-03-29T22:37:03.123Z · LW(p) · GW(p)
Good to know.
I mean, it kind of is a standard workshop (like ones on public speaking, or Italian cooking, or, yes, self-help)... except that the content is about Bayesian math, and microeconomics, and the cognitive science of standard human error patterns and so on. And the people you get to network with are other LW-ers who are interested in actually applying this content to practical problems, and coming to embed Bayesian patterns into the details of one's day-to-day thoughts instead of just into the way you answer pen-and-paper questions.
But, yes, similar workshop format, different content. Maybe we should make the ad different too in some way. I wonder if my inclusion of the testimonials and survey data, in particular, may have been misleading -- I was trying to say "look, past participants (who were smart LW-ers a lot like you) liked it, so maybe you will too", but it may have come across as a stronger claim. I'd say come check it out if you're interested, or else wait for a subsequent year if you want to have seen "proof that this will definitively change your life" first or something (which we may or may not ever manage, though we're certainly working on it). Meanwhile, whether you come or not, do keep contributing on LW, trying exercises yourself in your own life, and generally helping to figure out what rationality can be.
↑ comment by RomeoStevens · 2012-03-29T22:41:23.831Z · LW(p) · GW(p)
The difficulty stems from how much good stuff is mixed in with all the scams, making outside evaluation much harder. Most self-help programs include a lot of the same basic items by necessity (fixing the big problems first). We also seem to understand implicitly that people's self-judgement of the effects of these types of things is terrible, especially when those judgements are very close time-wise to the event itself (rationalization of expense and effort, unwillingness to signal disloyalty to a new ingroup, even internally).
↑ comment by AnnaSalamon · 2012-03-29T23:14:14.581Z · LW(p) · GW(p)
Thanks; this is helpful.
comment by BrandonReinhart · 2012-03-30T01:07:38.786Z · LW(p) · GW(p)
I attended the 2011 minicamp.
It's been almost a year since I attended. The minicamp has greatly improved me along several dimensions.
I now dress better and have used techniques provided at minicamp to become more relaxed in social situations. I'm more aware of how I'm expressing my body language. It's not perfect control and I've not magically become an extrovert, but I'm better able to interact in random social situations successfully. Concretely: I'm able to sit and stand around people I don't know and feel and present myself as relaxed. I dress better and people have noticed and I've received multiple comments to that effect. I've chosen particular ways to present myself and now I get comments like 'you must play the guitar' (this has happened five times since minicamp haha). This is good since it loads the initial assumptions I want the person to load.
I've intentionally hacked my affectation towards various things to better reach my goals. For years I never wanted to have children. My wife said (earlier this year, after minicamp) that she wanted to have kids. I was surprised and realized that given various beliefs (love for wife, more kids good for society, etc) I needed to bring my emotions and affectations in line with those goals. I did this by maximizing positive exposure to kids and focusing on the good experiences...and it worked. I'm sure nature helped, but I came to a change of emotional reaction that feels very stable. TMI: I had my vasectomy reversed and am actively working on building kid version 1.0
Minicamp helped me develop a better mental language for reasoning around rationalist principles. I've got tools for establishing mental breakpoints (recognizing states of surprise, rationalization, etc) and a sense for how to improve on weak areas in my reasoning. I have a LOT of things I still need to improve. Many of my actions still don't match my beliefs. The up side is that I'm aware of many of the gaps and can make progress toward solving them. There seems to be only so much I can change at once, so I've been prioritizing everything out.
I've used the more concise, direct reasoning around rationality at my job at Valve Software. I use it to help make better decisions, concretely: when making decisions around features to add to DOTA 2 I've worked particularly hard at quickly relinquishing failed ideas that I generated. I have developed litanies like 'my ideas are a product, not a component of my identity.' Before I enter into interactions I pause and think 'what is my goal for this interaction?' The reasoning tools from minicamp have helped me better teach and interpret the values of my company (which are very similar). I helped write a new employee guide that captures Valve values, but uses tools such as Anna Salamon's "Litany for Simplified Bayes" to cut straight to the core concepts. "If X is true, what would the world look like?" "If X is not true, what would the world look like?" "What does the world look like?" I've been influential in instituting predictions meetings before we launch new features.
I've been better able to manage my time, because I'm more aware of the biases and pitfalls that lie before me. I think more about what 'BrandonReinhart2020' wants than what the current me wants. (Or at least, my best guess as to what I think he would want...like not being dead, and being a bad ass guitar shredder, etc). This has manifested itself concretely in my self-education around the guitar. When I went to minicamp I had only just started learning guitar. Since then I've practiced 415 hours (I work full time, so this is all in my spare time) and have developed entirely new skills. I can improv, write songs, etc. Minicamp provided some inspiration, yes, but there were also real tools that I've employed. A big one was coming home and doing research on human learning and practice. This helped me realize that my goals were achievable. Luke gave sessions on how to do efficient research. Critch gave a session on hacking your affectations. I used this to make practice something I really, really like doing (I listened to music I liked before practicing, I would put objects like role-playing books or miniatures that I liked around my practice area -- nerdy yes, but it worked for me -- and I would drink a frosty beer after practicing three hours in a row. Okay so that last one shows that my health beliefs and goals may not be entirely in line, but it served an objective here). Now I can easily practice for 3 hours and enjoy every moment of it. (This is important, before I would use that time for World of Warcraft and other pursuits that just wasted time and didn't improve me.)
I've been in the Less Wrong orbit for a long time and have had the goal of improving my rationality for a long time. I've read Yudkowsky's writing since the old SL4 days. I followed Overcoming Bias from the beginning. I can't say that I had a really good grasp on which concepts were the most important until after minicamp. There's huge value in being able to ask questions, debate a point, and just clarify your confusion quickly.
I have also been an SIAI skeptic. Both myself and John Salvatier thought that SIAI might be a little religion-like. Our mistake. The minicamp was a meeting of really smart people who wanted to help each other win more. The minicamp was genuinely about mental and social development and the mastery of concepts that seem to lead to a better ability to navigate complex decision trees toward desired outcomes.
While we did talk about existential risk, the SIAI never went deep into high shock level concepts that might alienate attendees. It wasn't an SIAI funding press. It wasn't an AGI press. In fact, I thought they almost went too light on this subject (but I came to modern rationality from trans/posthumanism and most people in the future will probably get to trans/posthumanism from modern rationality, so discussions about AGI and such feel normal to me). Point being, if you have concerns about this you'll feel a lot better as you attend.
I would say the thing that most discomforted me during the event was the attitude toward meditation. I realized, though, that this was an indicator about my preconceptions about meditation and not necessarily due to facts about meditation. After talking to several people about meditation, I learned that there wasn't any funky mysticism inherent to meditation, just closely associated to meditation. Some people are trying to figure out if it can be used as a tool and are trying to figure out ways to experiment around it, etc. I updated away from 'meditation is a scary religious thing' toward 'meditation might be another trick to the bag.' I decided to let other people bear the burden/risk of doing the research there, though. :)
Some other belief shifts related to minicamp: I have greatly updated toward the Less Wrong style rationality process as being legitimate tools for making better decisions. I have updated a great deal toward the SIAI being a net good for humanity. I have updated a great deal toward the SIAI being led by the right group of people (after personal interactions with Luke, Anna, and Eliezer).
Comparing minicamp to a religious retreat seems odd to me. There is something exciting about spending time with a bunch of very smart people, but it's more like the kind of experience you'd have at a domain-specific research summit. The experience isn't about manipulating you through repeated and intense appeals to emotion, guilt, etc. (I was a Wesleyan Christian when I was younger and went to retreats like Emmaus, and I still remember them pressing a nail sharply into my palm as I went to the altar to pray for forgiveness). It's more accurate to think of minicamp as a rationality summit, with the instructors presenting findings, sharing techniques for the replication of those findings, and there being an ongoing open discussion of the findings and the process used to generate findings. And like any good summit there are parties.
If you're still in doubt, go anyway. I put the probability of self-damage due to attending minicamp at extremely low, compared to self-damage from attending your standard college level economics lecture or a managerial business skills improvement workshop. It doesn't even blip on a radar calibrated to the kind of self-damage you could do speculatively attending religious retreats.
If you're a game developer, you would probably improve your ability to make good decisions around products more by attending SIAI Minicamp than you would by attending GDC (of course, GDC is still valuable for building a social network within the industry).
Replies from: None, wallowinmaya, Goobahman
↑ comment by [deleted] · 2012-03-30T01:52:38.775Z · LW(p) · GW(p)
If you're still in doubt, go anyway. I put the probability of self-damage due to attending minicamp at extremely low, compared to self-damage from attending your standard college level economics lecture or a managerial business skills improvement workshop. It doesn't even blip on a radar calibrated to the kind of self-damage you could do speculatively attending religious retreats.
What about the cost? I would not call spending $1500 in a week insignificant. And as a baseline, I believe that being surrounded for a week by a group of people who believe strongly in some collection of ideas is a risk at least an order of magnitude higher than an economics lecture. I certainly expect that it would have a much stronger effect on me (as it seems it has had on you) than the lecture would, and I would most certainly not take a risk of this magnitude if I have any non-negligible doubts.
Replies from: BrandonReinhart
↑ comment by BrandonReinhart · 2012-03-30T02:22:34.154Z · LW(p) · GW(p)
To address your second point first, the *attendees* were not a group who strongly shared common beliefs. Some attended due to lots of prior exposure to LW, a very small number were strong x-risk types, several were there only because of recent exposure to things like Harry Potter and were curious, many were strongly skeptical of x-risks. There were no discussions that struck me as cheering for the team -- and I was actively looking for them!
Some counter evidence, though: there was definitely a higher occurrence of cryonicists and people interested in cryonics than you'd find in any random sample of 30 people. I.e.: some amount >2 vs some amount close to 0. So we weren't a wildly heterogeneous group.
As for the instructors - Anna and Luke were both very open about the fact that the rationality-education process is in its infancy and among the various SIAI members there is discussion about how to proceed. I could be wrong, but I interpreted Eliezer as being somewhat skeptical of the minicamp process. When he visited, he said he had almost no involvement related to the minicamp. I believe he said he was mainly a sounding board for some of the ideas. I'm interpreting his involvement in this thread now and related threads/topics as a belief shift on his part toward the minicamp being valuable.
I think your order-of-magnitude increase describes a conceivable bad scenario well, but poorly describes the scenario I actually witnessed.
Now, for cost, I don't know. I'm attending a guitar camp in August that will be 7 days and cost me $2000. I would put the value of minicamp a fair amount above the value of the guitar camp, but I wouldn't necessarily pay $3000 to attend minicamp. To answer the price question I would ask:
1) What else do I plan to spend the $1500 on? What plans or goals suffer setbacks? What would I otherwise buy?
2) What do I value the information from attending at? I can see how it would be easier to measure the value of information from a guitar camp versus one about something that feels more abstract. So maybe the first step is to find the concrete value you've already gotten out of LW. If you've read the sequences and you think there are useful tools there, you might start with 'what would be the estimated value from being able to clarify the things I'm unsure about.' So you take some measurement of value you've already gotten from LW and do some back-of-the-napkin math with that.
3) Consider your level of risk aversion versus the value of minicamp now vs later. If these new minicamps are successful, more people will post about them. Attendees will validate or negate past attendee experiences. It may be that if $1500 is too much for you when measured against your estimation of the pay-off discounted by risks, that you simply wait. Either the camps will be shown to be valuable or they will be shown to be low value.
4) Consider some of the broad possible future worlds that follow from attending minicamp. In A you attend and things go great, you come out with new rationality tools. In B you attend and your reaction is neutral and you don't gain anything useful. In C you attend and have poor experiences or, worse, suffer some kind of self-damage (ex: your beliefs shift in measurably harmful ways that your prior self would not have agreed to submit to ahead of time). Most attendees are suggesting you'll find yourself in worlds like A. We could be lying because we all exist in worlds like C, or we're in B but feel an obligation to justify attending the camp, or whatever. Weigh your estimate of our veracity with your risk aversion. Update the connected values.
I would suggest it is unlikely that the SIAI is so skilled at manipulation that they've succeeded in subverting an entire group of people from diverse backgrounds and with some predisposition to be skeptical. Look for evidence that some people exist in B or C (probably from direct posts stating as much -- people would probably want to prevent other people from being harmed).
There are other things to put into a set of considerations around whether to spend the money, but these are some.
Replies from: curiousepic, ciphergoth
↑ comment by curiousepic · 2012-03-31T13:55:34.913Z · LW(p) · GW(p)
I just wanted to say this (esp. the second part) is actually one of the most cogent posts about anything that I've read in quite some time, and as such, a self-referential example of the value of the camp. It should probably be more visible, and I recommend making it a discussion post about deciding whether/when to attend.
↑ comment by Paul Crowley (ciphergoth) · 2012-03-30T07:29:56.061Z · LW(p) · GW(p)
Nitpick - cRYonics. Thanks!
Replies from: BrandonReinhart
↑ comment by BrandonReinhart · 2012-03-30T07:52:17.199Z · LW(p) · GW(p)
Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-03-30T12:06:17.644Z · LW(p) · GW(p)
You're not alone - it's a common mistyping!
↑ comment by David Althaus (wallowinmaya) · 2012-04-02T09:36:50.458Z · LW(p) · GW(p)
After talking to several people about meditation, I learned that there wasn't any funky mysticism inherent to meditation, just closely associated to meditation. Some people are trying to figure out if it can be used as a tool and are trying to figure out ways to experiment around it, etc.
Rather off-topic, but I'm very interested in rational meditation-advice: Did they suggest specific techniques of meditation like e.g. vipassana or did they recommend some particular books on meditation?
Replies from: jpulgarin
↑ comment by Goobahman · 2012-04-02T14:11:19.860Z · LW(p) · GW(p)
Thanks for that. It's fascinating to get a glimpse of what rationality looks like in the real world rather than just online interchanges.
As a side note, I'm a big fan of your work. It reassures me to know rationalists are on the team for Dota 2.
comment by [deleted] · 2012-03-29T22:18:41.995Z · LW(p) · GW(p)
Applied. Looks good. Might decide it's not worth it, but you make a good case.
One thing. 0 to 10 ratings are utterly useless. The median is almost always around 7, for almost anything. Please give us calibrated statistics, not subjective pseudo-quantities where most of the contribution is from noise and offset.
Reminds me of business planning types ranking alternatives 1..n and then treating the indexes as utilities. ick. TYPE ERROR.
Replies from: Eliezer_Yudkowsky, thomblake, lessdazed
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T22:39:13.048Z · LW(p) · GW(p)
We've actually noticed in our weekly sessions that our nice official-looking yes-we're-gathering-data rate-from-1-to-5 feedback forms don't seem to correlate with how much people seem to visibly enjoy the session - mostly the ratings seem pretty constant. (We're still collecting useful data off the verbal comments.) If anyone knows a standard fix for this then PLEASE LET US KNOW.
Replies from: AShepard, daenerys, TheOtherDave, orthonormal, AShepard, John_Maxwell_IV, Alicorn, None
↑ comment by AShepard · 2012-03-30T02:00:05.209Z · LW(p) · GW(p)
I'd suggest measuring the Net Promoter Score (NPS). It's used in business as a better measure of customer satisfaction than more traditional measures. See here for evidence, sorry for the not-free link.
- "On a scale of 0-10, how likely would you be to recommend the minicamp to a friend or colleague?"
- "What is the most important reason for your recommendation?
To interpret, split the responses into 3 groups:
- 9-10: Promoter - people who will be active advocates.
- 7-8: Passive - people who are generally positive, but aren't going to do anything about it.
- 0-6: Detractor - people who are lukewarm (which will turn others off) or will actively advocate against you
NPS = [% who are Promoters] - [% who are Detractors]. Good vs. bad NPS varies by context, but +20-30% is generally very good. The followup question is a good way to identify key strengths and high priority areas to improve.
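A sketch of that computation in code (the scores below are made up for illustration, not from the survey):

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings, as a percentage."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical example: 25 participants' 0-10 answers.
scores = [10, 9, 9, 8, 10, 7, 9, 6, 10, 8, 9, 10, 7,
          9, 8, 10, 5, 9, 9, 8, 10, 9, 7, 9, 10]
print(nps(scores))  # 16 promoters - 2 detractors out of 25 -> 56.0
```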
Replies from: Vaniver, jsalvatier, thomblake, AShepard
↑ comment by Vaniver · 2012-03-30T03:21:25.150Z · LW(p) · GW(p)
NPS is a really valuable concept. Means and medians are pretty worthless compared to identifying the percentage in each class, and it's sobering to realize that a 6 is a detractor score.
(Personal anecdote: I went to a movie theater, watched a movie, and near the end, during an intense confrontation between the hero and villain, the film broke. I was patient, but when they sent me an email later asking me the NPS question, I gave it a 6. I mean, it wasn't that bad. Then two free movie tickets came in the mail, with a plea to try them out again.
I hadn't realized it, but I had already put that theater in my "never go again" file, since why give them another chance? I then read The Ultimate Question for unrelated reasons, and had that experience in my mind the whole time.)
Replies from: tgb
↑ comment by jsalvatier · 2012-03-30T18:25:00.230Z · LW(p) · GW(p)
Here is the evidence paper.
↑ comment by AShepard · 2012-03-31T21:10:51.756Z · LW(p) · GW(p)
Another thing you could do is measure in a more granular way - ask for NPS about particular sessions. You could do this after each session or at the end of each day. This would help you narrow down what sessions are and are not working, and why.
You do have to be careful not to overburden people by asking them for too much detailed feedback too frequently, otherwise they'll get survey fatigue and the quality of responses will markedly decline. Hence, I would resist the temptation to ask more than 1-2 questions about any particular session. If there are any that are markedly well/poorly received, you can follow up on those later.
↑ comment by daenerys · 2012-03-29T22:58:41.663Z · LW(p) · GW(p)
One idea (which you might be doing already) is making the people collecting the data DIFFERENT from the people organizing/running the sessions.
For example, if Bob organizes and runs a session, and everyone likes Bob, but thinks that the session was so-so, they may be less willing to write negative things down if they know Bob is the one collecting and analyzing data.
If Bob runs the sessions, then SALLY should come in at the end and say something like "Well we want to make these better, so I'M gathering information of ways to improve, etc"
Even if Bob eventually gets the negative information, I think people might be more likely to provide it to Sally (one step removed) than to Bob directly.
(Even better: Nameless Guy organizes a session. Bob teaches the session, making sure everyone knows this is NAMELESS' session and Bob is just the mouthpiece.)
Replies from: daenerys
↑ comment by TheOtherDave · 2012-03-29T22:46:32.134Z · LW(p) · GW(p)
Back when I did training for a living, my experience was that those forms were primarily useful for keeping my boss happy. The one question that was sometimes useful was asking people what they enjoyed most and least about the class, and what they would change about it. Even more useful was asking that question of people to their faces. Most useful was testing to determine what they had actually learned, if anything.
↑ comment by orthonormal · 2012-03-29T22:43:34.882Z · LW(p) · GW(p)
I've seen "rate from 1 to 5, with 3 excluded", which should be equivalent to "rate from 1 to 4" but feels substantially different. But there are probably better ones.
Replies from: Kaj_Sotala, Eliezer_Yudkowsky
↑ comment by Kaj_Sotala · 2012-03-30T06:31:40.916Z · LW(p) · GW(p)
In this category of tricks, somebody (I forget who) used a rating scale where you assigned a score of 1, 3, or 9. Which should be equivalent to "rate from 1 to 3", but...
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T22:44:08.679Z · LW(p) · GW(p)
We weren't getting a lot of threes, but maybe that works anyway.
Replies from: orthonormal
↑ comment by orthonormal · 2012-03-29T22:57:15.681Z · LW(p) · GW(p)
Then maybe "1 to 4, excluding 3" or "1 to 5, excluding 4", to rule out the lazy answer "everything's basically fine". That might force people to find an explanation whenever they feel the thing is good but not perfect.
If you start getting 5s too frequently, then it's probably not a good trick.
Replies from: tgb
↑ comment by tgb · 2012-03-30T03:45:57.899Z · LW(p) · GW(p)
Why not go all the way and just use a plus-minus-zero system like LW ratings (and much of the rest of the internet)? YouTube had an interesting chart, before they switched from 5-star ratings to the like-dislike system, showing how useless the star ratings were. But that's non-mandatory so it's very different.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-30T22:52:34.393Z · LW(p) · GW(p)
You could have a rubric without any numbers, just 10 sentences or so where participants could circle those that apply. E.g. "I learned techniques in this session that I will apply at least once a week in my everyday life", "Some aspects of this session were kind of boring", "This session was better presented than a typical college lecture", etc.
↑ comment by Alicorn · 2012-03-30T00:40:55.134Z · LW(p) · GW(p)
You could try a variant of this (give someone a d10 and a d6, hide roll from surveyor, if the d6 comes up 1 they give you a 1-10 rating based on the d10 and are otherwise honest) but this may not be useful in cases where people aren't deliberately lying to you, and is probably only worth it if you have enough sample size to wipe out random anomalies and can afford to throw out a sixth of your data.
Or weight the die.
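A quick simulation of the dice scheme and one way to debias the result (a sketch assuming the d10 answer is uniform on 1-10; instead of throwing out a sixth of the data, this subtracts its expected contribution):

```python
import random

def reported(true_answer):
    """One response under the dice scheme: 1/6 of the time,
    report a uniform d10 roll; otherwise answer honestly."""
    if random.randint(1, 6) == 1:
        return random.randint(1, 10)
    return true_answer

def debiased_mean(reports):
    # E[reported] = (1/6) * 5.5 + (5/6) * E[true], so invert:
    m = sum(reports) / len(reports)
    return (m - 5.5 / 6) * 6 / 5

random.seed(0)
true_answers = [7] * 6000  # hypothetical: everyone honestly feels "7"
reports = [reported(a) for a in true_answers]
print(debiased_mean(reports))  # ≈ 7, recovered without knowing who rolled a 1
```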
↑ comment by [deleted] · 2012-03-29T23:14:58.792Z · LW(p) · GW(p)
I'm not a pro, but you probably want to turn the data into a z-score (this class is ranked 3 standard deviations above the ranking for other self-help classes). If you can't turn it into a z-score, the data is probably meaningless.
Also, maybe use some other ranking system. I imagine that people have a mindless cached procedure for doing these rankings that you might want to interrupt, to force actually evaluating it (rank is a random variable with mean = 7 and stddev = 1).
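A sketch of the z-score idea, with the baseline distribution made up for illustration:

```python
# Hypothetical baseline: 0-10 ratings of comparable workshops
# cluster around 7 with spread 1 (the "median is always 7" effect).
baseline_mean, baseline_sd = 7.0, 1.0

def z_score(rating):
    """How many standard deviations above typical-workshop ratings?"""
    return (rating - baseline_mean) / baseline_sd

print(z_score(9.3))  # e.g. a 9.3 mean would be 2.3 SDs above baseline
```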
↑ comment by thomblake · 2012-03-29T22:35:02.628Z · LW(p) · GW(p)
The median is almost always around 7, for almost anything.
An anecdote on a related note...
There was once a long-term online survey about patterns of usage of a particular sort of product (specifics intentionally obscured to protect the guilty). One screen asks something like "Which of these have you used in the past year", and it shows 4 products of different brands in random order and "None of the above", and respondents can select multiple brands. Different respondents answer every week, but the results are pretty consistent from one week to the next. Most respondents select one brand.
One week, they took away one of the brands. If it were tracking real usage, you'd expect all of the responses for that brand to have shifted over to "None of the above". Instead, all of a sudden people had used the other 3 brands about 4/3 as often as the previous week. It was exactly the result one would expect if practically everyone were answering randomly. That pattern kept up for a few weeks. Then the question was changed back, and the usage of all 4 brands went back to 'normal'.
Replies from: orthonormal
↑ comment by orthonormal · 2012-03-29T22:41:13.789Z · LW(p) · GW(p)
Some of the effect could be accounted for by a substitution principle; instead of asking oneself for each option whether one's used it in the last year, it's easier to ask which of them one recalls using most recently (or just which of them seems most salient to memory), check that, and move on. If people do actually switch between products often enough, this would create that dynamic.
↑ comment by lessdazed · 2012-03-29T22:23:36.618Z · LW(p) · GW(p)
The median is almost always around 7, for almost anything.
I tried to take that into account when reading.
treating the indexes as utilities
Please explain.
Replies from: None
↑ comment by [deleted] · 2012-03-29T22:55:18.711Z · LW(p) · GW(p)
I tried to take that into account when reading.
I know, I did too, but that is really the sort of calculation that should be done by a large-scale study that documents a control distribution for 0-10 ratings that such ratings can be calibrated against.
treating the indexes as utilities
Please explain.
In my engineering school, we had some project planning classes where we would attempt to calculate what was the best design based on the strength of our preference for performance in a variety of criteria (aesthetics, weight, strength, cost, etc). Looking back I recognize what we were doing as coming up with a utility function to compute the utilities of the different designs.
Unfortunately, none of us (including the people who had designed the procedure) knew anything about utility functions or decision theory, so they would do things like rank the different criteria, and the strength of each design in each criterion, and then use those directly as utility weights and partial utilities.
(So for example strength might be most important (10), then cost (9), then weight (8), and so on; and then maybe design A would be best (10) in weight, worst (1) in strength, etc.)
I didn't know any decision theory or anything, but I have a strong sense for noticing errors in mathematical models, and this thing set off alarm bells like crazy. We should have been giving a lot of thought to calibration of our weights and utilities to make sure arbitrariness of rankings can't sneak through and change the answer, but no one gave a shit. I raised a fuss and tried to rederive the whole thing from first principles. I don't think I got anything, though; it was only one assignment so I might have given up because of low value (it's a hard problem). Don't remember.
Moral:
With this sort of thing, or anything really, you either use bulletproof mathematical models derived from first principles (or empirically) with calibrated real quantities, or you wing it intuitively using your built-in hardware. You do not use "math" on uncalibrated pseudo-quantities; that just tricks you into overriding your intuition for something with no correct basis.
This is why you never use explicit probabilities that aren't either empirically determined or calculated theoretically.
Replies from: Nick_Tarleton, Vaniver, Blueberry, Grognor
↑ comment by Nick_Tarleton · 2012-03-30T18:46:11.552Z · LW(p) · GW(p)
With this sort of thing, or anything really, you either use bulletproof mathematical models derived from first principles (or empirically) with calibrated real quantities, or you wing it intuitively using your built-in hardware. You do not use "math" on uncalibrated pseudo-quantities; that just tricks you into overriding your intuition for something with no correct basis.
Despite anti-arbitrariness intuitions, there is empirical evidence that this is wrong.
The Robust Beauty of Improper Linear Models
Proper linear models are those in which predictor variables are given weights in such a way that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in Paul Meehl's book on clinical versus statistical prediction—and a plethora of research stimulated in part by that book—all indicates that when a numerical criterion variable (e.g., graduate grade point average) is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method; for example, they may be obtained on the basis of intuition, derived from simulating a clinical judge's predictions, or set to be equal. This article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit (i.e., equal) weighting is quite robust for making such predictions. The article discusses, in some detail, the application of unit weights to decide what bullet the Denver Police Department should use. Finally, the article considers commonly raised technical, psychological, and ethical resistances to using linear models to make important social decisions and presents arguments that could weaken these resistances.
(this is about something somewhat less arbitrary than using ranks as scores, but it seems like evidence in favor of that approach as well)
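A toy illustration of the unit-weighting result on simulated data (a sketch of my own, not Dawes's actual analyses):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
w_true = np.array([3.0, 2.0, 1.5, 1.0, 0.5])

def make_data(n):
    X = rng.normal(size=(n, k))                     # standardized predictors
    y = X @ w_true + rng.normal(scale=4.0, size=n)  # noisy criterion
    return X, y

X_tr, y_tr = make_data(30)    # small training sample, as in clinical settings
X_te, y_te = make_data(5000)

# "Proper" linear model: least-squares weights fit to the training data.
w_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# "Improper" linear model: equal (unit) weights on every predictor.
r_ols = np.corrcoef(X_te @ w_ols, y_te)[0, 1]
r_unit = np.corrcoef(X_te.sum(axis=1), y_te)[0, 1]
print(r_ols, r_unit)  # the unit-weight composite is typically nearly as good
```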
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2012-03-30T23:55:58.698Z · LW(p) · GW(p)
Dawes is not a reliable researcher; I have very little confidence in his studies. Check it.
(ETA: I also have other reasons to mistrust Dawes, but shouldn't go into those here. In general you just shouldn't trust heuristics and biases results any more than you should trust parapsychology results. (Actually, parapsychology results tend to be significantly better supported.) Almost all psychology is diseased science; the hypotheses are often interesting, the statistical evidence given for them is often anti-informative.)
↑ comment by Vaniver · 2012-03-30T04:53:31.324Z · LW(p) · GW(p)
Multicriteria objective functions are really hard to get right. Weighting features from 10 to 1 is actually a decent first approach- it should separate good solutions from bad solutions- but if you're down to narrow differences of the weighted objective function, it's typically time to hand off to a human decision-maker, or spend a lot of time considering tradeoffs to elicit the weights. (Thankfully, a first pass should show you what features you need to value carefully and which features you can ignore.)
Replies from: gwern
↑ comment by gwern · 2019-12-13T14:48:29.922Z · LW(p) · GW(p)
If you have relatively few choices and properties are correlated (as of course they are), I'm not sure how much it matters. I did a simulation of this for embryo selection with n=10, and partially randomizing the utility weights made little difference.
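A toy reconstruction of that kind of robustness check (my sketch, not gwern's actual simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_options, k = 10, 4
# Correlated properties: a shared quality factor plus idiosyncratic noise.
quality = rng.normal(size=(n_options, 1))
X = quality + 0.5 * rng.normal(size=(n_options, k))

base_w = np.array([4.0, 3.0, 2.0, 1.0])
picks = [int(np.argmax(X @ (base_w * rng.uniform(0.5, 1.5, size=k))))
         for _ in range(1000)]
print(np.bincount(picks, minlength=n_options))  # one option usually dominates
```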
↑ comment by Blueberry · 2012-03-29T23:09:42.056Z · LW(p) · GW(p)
You do not use "math" on uncalibrated pseudo-quantities; that just tricks you into overriding your intuition for something with no correct basis.
I'm not sure I understand what you mean by pseudo-quantities.
strength might be most important (10), then cost (9) then wieght (8) and so on.
So the problem is that these attributes were given rankings from 10 down to 1, rather than their weights that corresponded to their actual importance?
Replies from: orthonormal, None
↑ comment by orthonormal · 2012-03-29T23:20:00.461Z · LW(p) · GW(p)
So the problem is that these attributes were given rankings from 10 down to 1, rather than their weights that corresponded to their actual importance?
Right- that can cause this problem. (Not quite the same dynamic, but you get the idea.)
↑ comment by [deleted] · 2012-03-29T23:29:47.323Z · LW(p) · GW(p)
"pseudo-quantity" is a term I just made up for things that look like quantities (they may even have units), but are fake in some way. Unlike real quantities, for which correct math is always valid, you cannot use math on pseudo-quantities without calibration (which is not always possible).
Example: uncalibrated probability ratings (I'm 95% sure) are not probabilities, and you cannot use them in probability calculations, even though they seem to be numbers with the right units. You can turn them into real probabilities by doing calibration (assuming they correlate well enough).
So the problem is that these attributes were given rankings from 10 down to 1, rather than their weights that corresponded to their actual importance?
More or less. Other ranking systems could be calibrated to get actual utility coefficients, but rank indexes lose information and cannot even be calibrated.
Replies from: FeepingCreature, Blueberry↑ comment by FeepingCreature · 2019-12-13T10:15:17.527Z · LW(p) · GW(p)
Probabilities can be empirically wrong, sure, but I find it weird to say that they're "not probabilities" until they're calibrated. If you imagine 20 scenarios in this class, and your brain says "I expect to be wrong in one of those", that just is a probability straight up.
(This may come down to frequency vs belief interpretations of probability, but I think saying that beliefs aren't probabilistic at all needs defending separately.)
↑ comment by Blueberry · 2012-03-29T23:37:04.579Z · LW(p) · GW(p)
So the pseudo-quantities in your example are strength ratings on a 1-10 scale?
I actually think that's acceptable, assuming the ratings on the scale are equally spaced and the weights correspond to the spacing. For instance, space strengths out from 1 to 10 evenly, and space weights out from 1 to 10 evenly (where 10 is the best, i.e., lightest), where each interval corresponds to roughly the same level of improvement in the prototype. Then assign weights according to how important an improvement along one axis is compared to another. For instance, if improving strength by one point on the scale is twice as valuable as improving weight, we can give strength a weight of 2, and computations like:
- Option A, strength 3, weight 6, total score 2(3) + 6 = 12
- Option B, strength 5, weight 3, total score 2(5) + 3 = 13
make sense.
Replies from: None↑ comment by [deleted] · 2012-03-29T23:52:11.379Z · LW(p) · GW(p)
You still have one degree of freedom. What if you rated from 10 to 20, or from -5 to 5? As a limiting case, consider ratings from 100 to 110: the thing with the highest preference (strength) would totally swamp the calculation, becoming the only concern.
Once you have scale and offset correctly calibrated, you still need to worry about nonlinearity. In this case (using rank indices), the problem is even worse. Like I said, rank indices lose information. What if the options are all about the same weight but one is drastically lighter? The rankings are identical no matter how much difference there is. That's not right. Using something approximating a real-valued rating (rate from 1 to 10) instead of rank indices reduces the problem to mere nonlinearity.
This is not as hard as FAI, but it's harder than pulling random numbers out of your butt, multiplying them, and calling it a decision procedure.
Replies from: Blueberry↑ comment by Blueberry · 2012-03-30T00:09:57.406Z · LW(p) · GW(p)
I agree that ranking the weights from 1 to N is idiotic because it doesn't respect the relative importance of each characteristic. However, changing the ratings to 101-110 on every scale will just add a constant to each option's value:
- Option A, strength 103, mass 106, total score 2(103) + 106 = 312
- Option B, strength 105, mass 103, total score 2(105) + 103 = 313
(I changed 'weight' to 'mass' to avoid confusion with the other meaning of 'weight'.)
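A quick mechanical check of that cancellation (a throwaway sketch using the numbers from the example above): a shared offset adds the sum of the weights times the offset to every option, so the ranking never changes.

```python
weights = {"strength": 2, "mass": 1}
options = {
    "A": {"strength": 3, "mass": 6},
    "B": {"strength": 5, "mass": 3},
}

def score(attrs, offset=0):
    # Apply the same offset to every rating before weighting.
    return sum(w * (attrs[k] + offset) for k, w in weights.items())

for offset in (0, 100, -5):
    scores = {name: score(attrs, offset) for name, attrs in options.items()}
    print(offset, scores)  # B beats A by exactly 1 at every offset
```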
Using something approximating a real-valued rating (rate from 1 to 10) instead of rank indices reduces the problem to mere nonlinearity.
I assume you mean using values for the weights that correspond to importance, which isn't necessarily 1-10. For instance, if strength is 100 times more important than mass, we'd need to have weights of 100 and 1.
You're right that this assumes that the final quality is a linear function of the component attributes: we could have a situation where strength becomes less important when mass passes a certain threshold, for instance. But using a linear approximation is often a good first step at the very least.
Replies from: Vaniver, None↑ comment by [deleted] · 2012-03-30T00:22:45.126Z · LW(p) · GW(p)
- Option A, strength 103, mass 106, total score 2(103) + 106 = 312
- Option B, strength 105, mass 103, total score 2(105) + 103 = 313
Oops, I might have to look at that more closely. I think you are right. The shared offset cancels out.
I assume you mean using values for the weights that correspond to importance, which isn't necessarily 1-10. For instance, if strength is 100 times more important than mass, we'd need to have weights of 100 and 1.
Using 100 and 1 for something that is 100 times more important is correct (assuming you are able to estimate the weights (100x is awfully suspicious)). The idiotic procedure was using rank indices, not real-valued weights.
But using a linear approximation is often a good first step at the very least.
Agreed, linearity is a valid assumption.
The error is using uncalibrated ratings from 0-10, or worse, rank indices. A linear-valued rating from 0-10 has the potential to carry the information properly, but that does not mean people can produce calibrated estimates there.
↑ comment by Grognor · 2012-03-30T04:44:44.686Z · LW(p) · GW(p)
With this sort of thing, or anything really, you either use bulletproof mathematical models derived from first principles (or empirically) with calibrated real quantities, or you wing it intuitively using your built-in hardware. You do not use "math" on uncalibrated pseudo-quantities; that just tricks you into overriding your intuition for something with no correct basis.
This is why you never use explicit probabilities that aren't either empirically determined or calculated theoretically.
This is a very good general point, one that I natively seem to grasp, but even so I'd appreciate it if you wrote a top-level post about it.
comment by Academian · 2012-03-29T23:49:42.438Z · LW(p) · GW(p)
Since a couple of people want before/after information, here's some:
Before minicamp: I was able to work around 5 hours per day on research.
After: 10 hours/day, sustainable for months.
After: Less afraid to try new professional directions than ever before, by a margin much wider than this trait has ever changed for me.
After: Secured $24,000 of grant money from DARPA to work on applications of algebraic geometry to machine learning, my first time trying out applied math. Loving it.
After: Difference in productivity was so noticeable that I'm volunteering my time as an instructor at the next few camps (I taught some at the last camp, too) because I expect it to have further positive, lasting effects on my professional / personal life.
After: Got a new dissertation advisor; many people around me seemed to think that was impossible or risky, but it has gone very well and been very refreshing, given my interests. (Before the camp I was more afraid to make what felt like a "sudden" change, which was actually something I had been thinking about for a year and was not sudden at all.)
Note: My experience at the camp may not have been typical, because I did teach a few sessions at the beginning... but those were not the ideas that stuck with me most and motivated me professionally; they were Anna's and Luke's sessions.
Since I'm volunteering to teach for the next few camps, I won't be able to give participant-side data after the next camp, so let this be my public testimonial: minicamp had a SERIOUS before/after effect on my life, resulting in more exploration, faster decision making (changed my thesis advisor, to great benefit and the surprise of many), and increased productivity. Its benefits are the cause of my volunteering to teach for it, and this comment.
Replies from: Hul-Gil↑ comment by Hul-Gil · 2012-03-30T19:43:06.572Z · LW(p) · GW(p)
This is interesting to me, since we seem to be in about the same position academically (though you're a bit ahead of me). What was responsible for such a huge increase in productivity, or can that not be summarized? I need to research more myself, but I do not think I will be able to afford or attend the minicamp, so anything you'd be able to share would be appreciated.
Replies from: AnnaSalamon, Academian↑ comment by AnnaSalamon · 2012-03-30T21:53:23.451Z · LW(p) · GW(p)
but I do not think I will be able to afford or attend the minicamp
If you want to attend but can't afford the fees, please do apply anyhow, and check the "need scholarship" box. Even if it turns out that we can't admit you this year, we'll at least know there are people out there who want to attend but can't afford it, and we can possibly take this information to potential donors as the Center for Modern Rationality gets on its own non-profit feet.
↑ comment by Academian · 2012-03-30T21:07:21.316Z · LW(p) · GW(p)
The particular changes I've made (like changing my advisor) have been very personalized for me, by me... but they have been fueled by a few root adjustments:
1) More curiosity about my life choices. Caused in part by being surrounded by a group of smart similar people doing very different things with their lives.
2) More willingness and desire to make up my mind more quickly and effectively about Big Life Decisions. Caused in part by Anna Salamon generating, on the spot, a steady stream of helpful questions for me that I could ask and answer to myself about career choices. I never came to any conclusions that she suggested (which I consider a good sign; I wouldn't expect someone else to know what I should do with my life from a few conversations), but she gave me a sense that more is possible in terms of how quickly a person can generate important, answerable questions.
3) More curiosity / motivation to experiment with productivity hacks, until I found some that work for me (Getting Things Done system + Pomodoro technique). Caused by being surrounded by productivity-obsessed people for a week with lots of cool ideas that helped me internalize a belief in the existence of popular productivity hacks that would work for me.
4) More desire to Be Successful (which I'd had very little of throughout most of my life), caused by feeling like I was part of a community that I cared about and who might benefit in some small way from my success.
comment by SarahNibs (GuySrinivasan) · 2012-03-30T01:03:16.607Z · LW(p) · GW(p)
Here are some points, as I think of them.
The Good
- It was fun. On par with the best one-week vacations I've had, but less fun than Hawaii.
- I look significantly better, directly caused by the fashion workshops. My sister was briefly jealous because while I usually won in the raw-g department, she had always handily won in the looking-good department, and this is no longer true.
- I took to heart Try Things, caused (I hypothesize) directly by in-person admonitions from high-status instructors. Previously I had focused far too much on exploitation over exploration. Concrete example: I went to a code retreat with Arlo Belshee last weekend and my roommate did not, while any normal reference class would have said he was more likely to go, and it was super-useful.
- I actually applied (and am applying) Gendlin and Tarski to the scary object of my inner mental life. I recommend Internal Family Systems as very useful though I have no direct comparisons I can make. If it turns out that it's actively harmful or even significantly worse than average mainstream psychotherapy I will update strongly towards standard-retreat-rather-than-awesome.
- Directly after the minicamp, I made several time-management changes to my lifestyle that have persisted until now, giving me many effective extra hours per week. Concrete example: noticing that the marginal return on playing Dominion online was negative past about the first 10% of my time spent, and actually cutting way back.
- Noticing when I'm being too abstract that I should try to provide or ask for a concrete example. ;) This was very much caused by admonitions during minicamp and some supporting reminders afterwards by John.
The Bad
- The minicamp became more and more disorganized toward the end. IIRC this was because the instructors were updating as they went, and discovered that they could do better in the second half than originally planned, but the fact of disorganization stands.
- Several of the exercises we did felt artificial and did not feel like they hit their intended target. Concrete example: when we played the game of arguing for an answer instead of reasoning towards an answer, to feel what rationalization felt like, I just felt like I was playing a game, nothing like when I actually notice rationalization.
- I felt like there was too much stuff to learn and not enough practicing stuff. By a wide enough margin that I'd list this here.
The Ugly
- I came unsettlingly close to not looking for counterevidence like this when presented with investment advice. Not actually too close, but it was still unsettling. Constant vigilance!
Edit: formatting, also the investment advice did not come from SI, but from one of the attendees.
Replies from: BrandonReinhart, GuySrinivasan, Kevin, Hul-Gil↑ comment by BrandonReinhart · 2012-03-30T01:40:34.413Z · LW(p) · GW(p)
I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due a combination of effects:
1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use. 2) I was not well mentally organized -- my rationality concepts were all individual floating bits not well sewn together -- so I reached a point where new concepts didn't fit into my map very easily.
I agree things got more disorganized, in fact, I remember on a couple occasions seeing the 'this isn't the outcome I expected' look on Anna's face and the attempt to update and try a different approach or go with the flow and see where things were leading. I marked this responsiveness as a good thing.
As for your Ugly, it's important to note that it was a casual discussion among attendees. I suppose this highlights risks from a general increase in credibility-giving by close temporal association with other new ideas you're giving credibility to? Example: I talked to a lot of curious people that week about how Valve's internal structure works, but no one should necessarily run off and establish a Valve-like company without understanding Valve's initial conditions, goals, employee make-up, other institutions, and comparing them with their own initial conditions, goals, employees, institutions, etc.
Replies from: GuySrinivasan, AspiringRationalist↑ comment by SarahNibs (GuySrinivasan) · 2012-03-30T04:58:16.357Z · LW(p) · GW(p)
I suppose this highlights risks from a general increase in credibility-giving by close temporal association with other new ideas you're giving credibility to?
Yes, this.
Usually this risk is low, but here it was actually quite high. This particular instance was an Ugly example, because the category - ideas with close temporal association - was false. But there were many scary examples based on good categories. The most outlandish was meditation. Remember that other people's brains are part of evidence, now witness quite a few people who have just spent the last few days on activities that convinced you they are pretty decent (compared to baseline, damn good) at doing their research, discarding bullshit, not strongly espousing ideas they don't strongly hold, examining the ideas they do hold, etc. etc... witness them say with a straight face that meditation, which you (I) assumed was a crock of mystic religion that just took a different turn than the Western religions you're familiar with... witness them say that meditation is super-useful. Then watch your brain say "Bull! Wait, they're good at things. Maybe not bull? Hey, argument from authority, bull after all! Wait, argument from authority is evidence... :S I... have to take this seriously..."
IFS, NVC, nootropics? Guess I have to take them seriously too.
(I exaggerate slightly, but my feelings were stronger than I think they should have been, so that story is in line with how I felt, if not precisely what my beliefs were)
Replies from: BrandonReinhart, BrandonReinhart↑ comment by BrandonReinhart · 2012-03-30T05:41:14.076Z · LW(p) · GW(p)
I had a dim view of meditation because my only exposure to meditation prior was in mystic contexts. Here I saw people talk about it separate from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out if it was bullshit or not. It doesn't seem unreasonable to me that folks interested could explore it and see what turns up.
Would I choose to do so? No. I have plenty of other low hanging fruit and the amount of non-mystic guidance around meditation seems really minimal, so I'd be paying opportunity cost to cover unknown territory with unknown payoffs.
I don't feel oddly attached to any beliefs here. Maybe I'll go search for some research. Right now I feel if I found some good papers providing evidence for or against meditation I would shift appropriately.
I don't see myself updating my beliefs about meditation (which are weak) unduly because of an argument from authority. They changed because the arguments were reasoned from principles or with process I accept as sound. Reasoning like "fairly credible sources like Feynman claim they can learn to shift the perception of the center of self-awareness to the left. (Feynman was also a bullshitter, but let's take this as an example...) What do we think he meant? Is what we think he meant possible? What is possible? Is that reproducible? Would it be useful to be able to do that? Should we spend time trying to figure out if we can do that?" This would be what I consider to be a discussion in the space of meditation-like stuff that is non-mystical and enjoyable. It isn't going to turn me into a mystic any more than Curzi's anecdotes about his buddy's nootropics overdoses will turn me into a juicer.
I didn't take away the message 'meditation is super-useful.' I took away the message 'meditation is something some people are messing with to see what works.' I'm less worried about that than if someone said 'eating McDonalds every day for every meal is something some people are messing with to see what works.' because my priors tell me that is really harmful whereas my priors tell me meditating every day is probably just a waste of time. A possibly non-mystical waste of time.
Now I'm worried comment-readers will think I'm a blind supporter of meditation. It is more accurate to say I went from immediate dismissal of meditation to a position of seeing the act of meditating as separable from a mystic context.
Now my wife is telling me I should actually be MORE curious about meditation and go do some research.
Replies from: Hul-Gil, GuySrinivasan, bogus↑ comment by Hul-Gil · 2012-03-30T19:19:19.534Z · LW(p) · GW(p)
Right now I feel if I found some good papers providing evidence for or against meditation I would shift appropriately.
Are you familiar with the study (studies) about meditation and brain health? I've seen one or two crop up, but I've not read the actual studies themselves - just summaries. IIRC, it appears to reduce the effects of aging.
The other reason I consider meditation possibly worth pursuing is that it appears to be an effective "mindhack" in at least one respect: it can be used to reduce or eliminate unpleasant physical and mental sensations. For example, I believe it's been shown to be effective in reducing stress and anxiety, and - more impressively - chronic pain, or even sensations like "chilly". How useful this is is more debatable: while I'm waiting in line, shivering, I probably won't be able to meditate effectively, or have the time to.
↑ comment by SarahNibs (GuySrinivasan) · 2012-03-30T05:51:19.259Z · LW(p) · GW(p)
Hm, super-useful was a bad term. The actual impressions I got were "obviously coherent and not bs, and with high enough mean+variance that the value of investigation is very high". Not necessarily the value of any one specific person investigating, but the value of it being investigated.
So I went a bit further than your
Now I'm worried comment-readers will think I'm a supporter of meditation (an out-group belief?). It is more accurate to say I went from immediate dismissal of meditation to a position of seeing the act of meditating as separable from a mystic context.
to believe the top of the curve was a) grossly useful and b) of non-negligible likelihood.
↑ comment by bogus · 2012-03-30T07:13:04.109Z · LW(p) · GW(p)
I had a dim view of meditation because my only exposure to meditation prior was in mystic contexts.
It strikes me that you may want to take a step further and consider mysticism itself as a functionally useful brain-hack much like meditation. It's very possible that mystical texts could be used to bring out a mental stance conducive to rationality. The Litanies of Tarski and Gendlin are fairly obvious examples, and I'd even argue that HP:MoR seems to be fulfilling that role as a kind of shared mythology tapping into well-understood tropes, at least for the subset of rationalists who like Harry Potter fanfiction.
Replies from: BrandonReinhart↑ comment by BrandonReinhart · 2012-03-30T07:45:13.417Z · LW(p) · GW(p)
Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like 'characteristic of a metaphysical belief system.' The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn't consistent with my more fundamental, empirical beliefs.
So in my mind I have 'WARNING!' tags (intentionally) attached to mysticism. So when I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick. Or to save time or avoid the risk of being eaten I often don't approach at all.
If I find that I have a metaphysical belief or if I detect that a fact/idea may be metaphysical, then I attach the mystical tag to it and go find my stick.
If something in my mind has the mysticism tag attached to it inappropriately, then I want to reclassify that thing -- slightly reduce the size of the tag or create a branch through more specific concept definition and separation.
So I don't really see value in attaching the mysticism tag to things that don't directly warrant it. What you call a mystical litany I'd call a mnemonic technique for reminding yourself of a useful process or dangerous bias. Religions have litanies, but litanies are not inherently religious concepts.
So no, I won't consider mysticism itself as a useful brain hack. Mysticism is allocated the purpose of 'warning sign' . It's not the only warning sign, but it's a useful one.
Replies from: bogus↑ comment by bogus · 2012-03-30T08:22:57.012Z · LW(p) · GW(p)
I can see why you would consider what you call "mysticism", or metaphysical belief systems, a warning sign. However, the use of mystical text forms, which is what I was referring to in my comment, is quite unrelated to this kind of metaphysical and cosmological rigidity. Compare, say, Christian fundamentalists versus Quakers or Unitarian Universalists, or Islamic Wahabis and Qutbis versus Sufis: the most doctrinal and memetically dangerous groups make only sparing use of mystical practices, or forbid them outright.
Atheists and agnostics are obviously a more challenging case, but it appears that at least some neopagans comfortably identify as such, using their supposed metaphysical beliefs as functionally useful aliefs, to be invoked through a ritual whenever the psychical effects of such rituals are desired. There is in fact an account of just such a ritual practice on LW itself involving the Winter Solstice, which is often celebrated as a festival by neopagan groups. It's hard to describe that account as anything other than a mystical ritual aiming to influence the participants in very specific ways and induce a desirable stance of mind among them. In fact, that particular practice may be regarded as extremely foolish and memetically dangerous (because it involves a fairly blatant kind of happy-death-spiral) in a way that other mystical practices are not. I now see that post as a cautionary tale about the dangers of self-mindhacking, but that does not justify its wholesale rejection, particularly in an instructional context where long-term change is in fact desired.
Replies from: David_Gerard, Hul-Gil, lessdazed↑ comment by David_Gerard · 2012-03-30T10:48:46.026Z · LW(p) · GW(p)
This does sound plausible:
- that the people who decompartmentalise crazy and do crazy stuff - fundies, cultists, fundie cultists - have a strong aversion to ambiguity, subtlety, irony;
- that groups with weird ideas who are not averse to ambiguity, subtlety or irony are less likely to do crazy stuff.
The first I think is obvious, the second as a positive result would be somewhat surprising and worthy of investigation.
I also suspect that a lot of romantic objection to rationality and science comes from people who see science as an example of group 1: holding that anything that can't be measured doesn't exist, and throwing away important detail.
I wonder how we would meaningfully gather numbers on such things.
↑ comment by Hul-Gil · 2012-03-30T19:09:50.087Z · LW(p) · GW(p)
I think mysticism is inherently irrational, and thus seriously participating in "mysticism itself" is counter-productive if you wish you become more rational. But I say "seriously participating", because as you say, perhaps mystical aliefs can be used to produce useful mental states - as long as it is recognized that that's what you're doing, and you don't ascribe any special significance to the mystical aspects (i.e., you recognize that the same effect can probably be achieved without any such relics; it's just a matter of preference).
Like those neopagans you mention, I am both an atheist and a Wodanist. I use Wodan as a symbol of various ideals, and the devotions, rituals, symbols, etc. involved to remind myself of these. My actual beliefs are entirely atheistic and materialistic, but I enjoy the trappings and history behind Germanic paganism of this sort; thus, the main reason behind my Wodanism is simply enjoyment. Useful? Yes, as a reminder or way to encourage yourself (e.g., "though I am tempted to waste my money, I will be self-disciplined like my patron god") - but that's entirely apart from any mystical aspects.
Replies from: bogus↑ comment by bogus · 2012-03-31T22:17:19.427Z · LW(p) · GW(p)
Useful? Yes, as a reminder or way to encourage yourself (e.g., "though I am tempted to waste my money, I will be self-disciplined like my patron god") - but that's entirely apart from any mystical aspects.
I agree with this as far as rational belief is concerned, and on a denotational level. But I'm not sure whether one can achieve the very tangible benefits of enacting rituals involving such "gods" as Pan, Wodan or Hermes/Thoth without alieving that the gods are really there at some level--if only as archetypes of one's unconscious psychology--so that one can relate to them on their own terms.
As long as the "gods" are not literally considered as supernatural entities (whatever that might mean) believing in them needs not be any more irrational than believing in any other features of our psychology. But successfully channeling a god might require us to connote that belief in ways that will seem quite foreign to a rationalistic, logically-oriented mental stance.
↑ comment by lessdazed · 2012-03-30T08:54:18.381Z · LW(p) · GW(p)
the most...memetically dangerous groups
What are your criteria for this?
Replies from: bogus↑ comment by bogus · 2012-03-30T09:28:22.993Z · LW(p) · GW(p)
What are your criteria for this?
Well, that gets rather complicated. Think of it as the extent to which the religion appeals and encourages irrationality, and this causes its followers to be instrumentally irrational in verifiable ways. I'm not talking about self-identified moral or ethical systems here, but rather obviously crazy beliefs like "Our god will reward you with a heavenly garden and 42 virgins if you become a martyr" or "You need to purify yourself from the tiny spiritual beings which were brought to Earth by an all-powerful alien millions of years ago". Stuff like that will appeal to human utility/reward functions in fairly obvious ways, assuming that it is truly, fervently believed.
↑ comment by BrandonReinhart · 2012-03-30T07:15:55.792Z · LW(p) · GW(p)
As an aside, what are IFS and NVC?
Edit: Ah, found links.
IFS: http://en.wikipedia.org/wiki/Internal_Family_Systems_Model
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2012-03-30T20:58:33.709Z · LW(p) · GW(p)
I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due a combination of effects:
1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use. 2) I was not well mentally organized -- my rationality concepts were all individual floating bits not well sewn together -- so I reached a point where new concepts didn't fit into my map very easily.
Did you attend the 3-day version or the week-long version? I would be curious to know after what length of time you saw significantly diminishing returns.
Relatedly, I wonder what minimum consecutive length of time you need to get a lot out of this. How would the returns from three spaced-apart day-long workshops compare to those from a single three-day workshop? (This would of course work better with a group of people who don't need to travel a significant distance.) Is the New York meetup group what happens if you take this sort of thing, break it into small chunks and spread it out over time?
People who attended minicamp can probably provide more informed speculation on these matters than I can.
↑ comment by SarahNibs (GuySrinivasan) · 2012-03-30T17:15:03.389Z · LW(p) · GW(p)
A couple of notes:
Directly after the minicamp, I made several time-management changes to my lifestyle that have persisted until now, giving me many effective extra hours per week.
a) Two of the changes I made account for most of the gains: cutting the tail of my gaming (not just Dominion) and buying a car. There were other changes but they were all an order of magnitude smaller. b) The process I used does not require minicamp (but was caused by minicamp). You can do it now, in a couple hours. Write down everything you do in the 24*7 hours in a week. Look at the biggest chunks of time. There are two basic types: things you kinda have to do, and things you do because you want to. For those you have to do, ask (be curious!) how you can spend less time doing them, and then see if any of your methods are net positive. For those you want to do, ask how much better spending all of this time is than spending half of this time. Wherever the value starts to drop off sharply, just cut back to that amount.
If it turns out that [IFS is] actively harmful or even significantly worse than average mainstream psychotherapy I will update strongly towards standard-retreat-rather-than-awesome.
This is one of those examples of trusting that something is well worth investigating because people you recently came to trust say it's well worth investigating. Finding out that it wasn't would cause me to take a step back and wonder again "have I been brainwashed? are my defenses truly up like I feel they are? was the minicamp actually awesome or just the standard glow of decently-run retreats, 'cause if it wasn't actually awesome then the halo effect is an invalid bias, not a useful heuristic".
↑ comment by Hul-Gil · 2012-03-30T18:46:35.297Z · LW(p) · GW(p)
Thanks for this; it's detailed and doesn't shy from pointing out the Bad and the Ugly (though it seems like there isn't much of those!). One thing that made me curious, however:
the marginal return on playing Dominion online was negative past about the first 10% of my time spent
How did you determine this?
Edit: Oh, I see you explain this below.
comment by Dr_Manhattan · 2012-03-29T19:16:13.236Z · LW(p) · GW(p)
Request/advice: please consider taping the sessions. This will be useful to:
- improve them in the future
- package them as courseware, possibly for sale
↑ comment by Dustin · 2012-03-29T20:06:17.680Z · LW(p) · GW(p)
I agree with this.
I would rather see them for free on YouTube or something. It would help me and others decide if it was something we'd want to try out ourselves.
Without having attended one, and as someone who has been reading OB/LW ever since Eliezer started posting at OB, it seems like the largest benefit I would get out of such a thing is the social networking. If I'm right, and if I'm typical, you wouldn't be removing motivation for most potential camp attendees, because they wouldn't be getting the biggest benefit ...person-to-person networking and friendship-building.
I'd say it's likely that those whose motivation to attend was removed by feeling they'd already gotten everything out of the camps by watching the videos would be more than counterbalanced by the interest the videos raise in the camps.
Unless the videos make the camp seem boring or not worthwhile, of course!
↑ comment by AnnaSalamon · 2012-03-29T21:08:53.681Z · LW(p) · GW(p)
We'll / I'll totally consider this. Though note that most of the session-minutes will be composed of practice and exercises, not of watching a lecture; and so the value of watching on YouTube would be circumscribed.
Replies from: Dr_Manhattan, thomblake↑ comment by Dr_Manhattan · 2012-03-29T23:24:01.594Z · LW(p) · GW(p)
I realize these will not be very useful out of the box, but considering how a number of Stanford classes were successfully ported to long-distance format (with interactive exercises, quizzes, etc), this might be a good first step in the refinement process.
I think analyzing your performance via video is underrated outside of sports.
Replies from: XFrequentist↑ comment by XFrequentist · 2012-03-30T17:28:31.981Z · LW(p) · GW(p)
I think analyzing your performance via video is underrated outside of sports.
I recently started drafting a post around this exact premise! Any interest in collaborating?
Replies from: curiousepic, Dr_Manhattan↑ comment by curiousepic · 2012-03-31T13:50:55.099Z · LW(p) · GW(p)
This is an interesting concept - I look forward to the post. Some quick notes: in my experience, people instinctively balk at being recorded, or at listening to/watching a recording of themselves. I think there's something unnerving about it, which in some cases probably indicates low self-confidence. Perhaps something to do with mirror-neurons as well?
Replies from: army1987, AnnaSalamon↑ comment by A1987dM (army1987) · 2012-03-31T18:15:23.862Z · LW(p) · GW(p)
I'm not bothered by being recorded (provided I know who is going to see the video), but I feel somewhat uncomfortable watching the video afterwards.
↑ comment by AnnaSalamon · 2012-03-31T15:51:11.781Z · LW(p) · GW(p)
If it matters, we've been filming our Saturday test sessions for our own use (watching the videotapes to learn how to teach, after setting a webcam up across the room), but that's quite different from making usable video for YouTube.
↑ comment by Dr_Manhattan · 2012-03-30T17:45:44.373Z · LW(p) · GW(p)
I can't contribute much other than the raw observation :(. I've seen this done by a guy from Dale Carnegie who was teaching presentation skills, and noticed some benefit from watching a couple of presentations I recorded myself. I imagine the benefit would be multiplied if I was going to give this presentation again and again, like someone who is planning a curriculum ^^^.
Looking forward to your post!
↑ comment by thomblake · 2012-03-29T21:10:48.828Z · LW(p) · GW(p)
Well, it's potentially one vector for folks to learn how to do the practice and exercises.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2012-03-29T21:12:31.538Z · LW(p) · GW(p)
In the long run, we are working anyhow to port the exercises into a form that will work well at local LW meet-ups.
Replies from: thomblake
comment by thejash · 2012-03-30T03:53:06.845Z · LW(p) · GW(p)
I attended minicamp last year, and I followed up with almost all of the attendees since then. I have had periodic Skype chats to see how it impacted their lives, so I can pretty confidently say that the minicamp:
- Had a dramatic positive impact on some people
- Had a significant (noticeable) positive impact on almost everyone
- Had no noticeable negative effects on anyone
It definitely had a positive impact on me, but I represent more of a median result than an outlier. Since minicamp, I:
- Sold my company, and am on track to make more money in the time since minicamp than I've made in the past few years. The decisions that led to this were a direct result of the systematic decision and planning process I implemented because of minicamp.
- Turned down a job offer at Google to work at an even more awesome company. I both learned about--and got an interview with--this company directly because of a contact from minicamp.
- Improved in a hundred small ways (again, directly attributable to notes I made at minicamp), from fashion (I now regularly get compliments where I got none before) to health (I use less time to exercise and eat, yet feel much better).
There were definitely parts of the minicamp that could have used improvement, but these were mostly a variety of details and logistical mistakes that will go away with practice on the part of the organizers.
If you're even thinking about this at all, you should apply. The cost is HUGELY outweighed by the benefits. I've probably gotten a 10x ROI (assuming I had paid the amount listed here and including the value of the time), and it hasn't even been a year....
I'm happy to answer any questions about my experience (or my observations of others' experiences).
Replies from: None↑ comment by [deleted] · 2012-03-30T05:55:43.989Z · LW(p) · GW(p)
The decisions that led to this were a direct result of the systematic decision and planning process I implemented because of minicamp
I'm curious to hear more about that point. Do you mean to say that you explicitly implemented a system that designated how to make those kinds of decisions?
Replies from: thejash↑ comment by thejash · 2012-03-30T13:25:16.170Z · LW(p) · GW(p)
Sort of. I meant to say that, as a direct result of minicamp, I decided to make explicit long-term, medium-term, and short-term goals, to regularly check their progress, and to estimate their difficulty and likelihood, and that I came away with a better sense of the space of other opportunities (there was a session or two on goals, sessions on estimation and prediction calibration, and in general while there I realized that I sucked at seeing opportunity costs).
After I did all those things, it effectively resulted in a systematic decision and planning process, since I had a much better sense about what tasks had the highest expected payoffs for my goals, and I simply work on those first.
comment by Shmi (shminux) · 2012-03-30T00:24:20.263Z · LW(p) · GW(p)
In answer to “Zero to ten, has your epistemic rationality improved?”, the median answer was 7 (mean 6.9).
That's not something to ask people; that's something you ought to actually measure before and after, otherwise what kind of rationalists are you?
Replies from: lukeprog↑ comment by lukeprog · 2012-03-30T04:15:02.238Z · LW(p) · GW(p)
Would you like to help us develop our rationality metrics? It's a fairly difficult problem. We can't just give people the CRT before and after a camp.
Replies from: lessdazed, shminux↑ comment by lessdazed · 2012-03-30T19:19:44.399Z · LW(p) · GW(p)
The main problem is that a test tests ability to take the test, independently of what its makers intended. The more similar tests are to each other, the more taking the first is training for the second, and the easier it is to teach directly to the test rather than to the skill that inspired the test. The less similar the before and after tests are, the less comparable they are.
Rationality training is particularly tricky because one is to learn formal models of both straight and twisted thinking, recognize when real-life situations resemble those patterns, and then decide how much formal treatment to give the situation, as well as how much weight to give to one's formal model as against one's feelings, reflexive thoughts, and so on.
Traditional classroom tests, even if one solved the problems inherent in testing, are best set up to test the first bit: knowledge of the formal models. Even to the extent one can ask people how they ought to react in the field, e.g. when to use which sort of calculation, that is still a question with a correct answer according to a formal model, and one is still not testing the ability to apply it!
These problems resemble those the military has faced in its training and testing. They use indoctrination, simulations, and field tests. Decision making is tested under uncomfortable conditions, ensuring probable good decision making under most circumstances. In general, knowing what they do is likely to be helpful.
The problems with tests are not intractable. One can limit the gain on the second test from having taken the first test by saturating the test taker with knowledge of the test before it is taken the first time, though few would be motivated. One can try to make a test similar to the skill tested, so ability at the test is well correlated with the skill one intends to test. One can try to devise very different sorts of tests that measure the same thing (I doubt that will work here).
One component of a useful classroom test might resemble the classic research on correspondence bias. In it, people judge individuals' support for positions based on an essay they supposedly wrote. Some subjects are told that the writer chose the thesis, others that the writer had it assigned. (The theses were either pro- or anti-Castro.) People inferred that the essay's author significantly agreed with the thesis even when they were told it was assigned. The quality of an essay a person produces is some evidence of what they believe, as is their willingness to write it at all, etc., but in general people overly infer others' dispositions from actions taken under social constraint, even when they know of the constraint.
Here is how the framework could translate into a useful rationality test: the test would give people some evidence for something they are biased to overly believe, and the quantity and quality of legitimate evidence in the test would vary widely. One would not be able to pass the test by simply detecting the bias and declaring oneself unmoved in that wrong direction, as one might be able to do for, say, sunk costs. Instead, the valid evidence and the invalid inclination would be along the same vector, such that one would have to distinguish the bias from the rest of the evidence in the environment.
This solves the problem of having a classroom test be an easy exercise of spotting the biased thought pattern and quashing it. Videos or essays of various people with known beliefs arguing for or against those beliefs could be used to train and test people in this. It's actually probably a skill one could learn without any idea of how one was doing it.
Expressed abstractly, the idea is to test for ability to quantify wrong thinking by mixing it with legitimate evidence, all of which increases confidence in a particular conclusion. This is hard to game because the hard part isn't recognizing the bias. The material's being media from real life prevents testers from imposing an unrealistic model that ignores actual evidence (e.g., a strongly pro-Castro person really might refuse to write an anti-Castro essay).
↑ comment by Shmi (shminux) · 2012-03-30T05:20:07.586Z · LW(p) · GW(p)
Ah, yes, that is indeed the first thing one should work on, otherwise the MW (Must Win) interpretation of Rationality is little better than the MW (Many Worlds) interpretation of Quantum Mechanics. I didn't realize that, after all this time, there are still no objective metrics to measure the success of the course. I wish I had good ideas as to how to experimentally measure rationality, but alas. Hopefully other forum regulars do. Or maybe EY can spend some time thinking about it.
I guess an obvious way to start is to score a particular behavior based on some objective criteria, like the pass/fail on those sunk cost situations Anna (?) linked here some time ago. It's not nearly as good as actually putting people into the circumstances where they have to apply their newly learned skills (such as detecting confusion, recognizing cognitive dissonance, what have you), but it's a start.
As a next step, my guess is that if you look through the standard psychological experiments (maybe something less drastic and notorious than the Stanford prison experiment), you will find quite a number of them that can be cheaply replicated in a controlled setting like a mini-camp. I'm sure that gwern can dig up a whole whack of them in no time flat. Or maybe you are already doing this, for all I know. The important thing is that the participants should be inside the situations, not outside of them, and hopefully unaware that they are being tested. I guess it is sort of similar to giving two sets of CRTs, before and after.
comment by A1987dM (army1987) · 2012-03-30T00:11:16.811Z · LW(p) · GW(p)
In answer to “Zero to ten, will your life go significantly differently because you came to mini-camp?” the median answer was 7.5 (the mean was 6.9) [This was the response that was most positively surprising to me.].
How long after the camp ended did you ask that question? If not very long, the answers don't surprise me at all. Asking such a question a year after the camp would be more interesting.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2012-03-30T17:23:50.125Z · LW(p) · GW(p)
The question was asked on the last day of minicamp. We'll include something similar in the upcoming re-survey.
comment by Blueberry · 2012-03-29T23:04:28.547Z · LW(p) · GW(p)
This sounds great. A couple questions:
Why do you ask for my LW username? Will I be judged for poorly thought out comments or misfired jokes?
What is the difference between the 3 day and the week long? How do I decide?
↑ comment by AnnaSalamon · 2012-03-31T15:57:52.083Z · LW(p) · GW(p)
I'm not sure what to say about 3-day vs. week-long. We know the week-long format worked last year; 3-day seems worth trying because, if it works well, it'll allow the material to be accessible to a much larger set of people. We'll give you the best three days of stuff, but there'll be less total stuff and less chance to bond. I guess I'd come for the week if you have time for it, and come to the 3-day if you can fit that but not the other?
In any case, don't delay your application while considering. You can always email in later with a changed timing preference.
↑ comment by b1shop · 2012-03-31T11:06:43.022Z · LW(p) · GW(p)
I'd like to second the second question. Should I be worried about the 3 day camp attempting to cram in too many useful techniques or the week long camp having filler?
Replies from: fiddlemath, wedrifid↑ comment by fiddlemath · 2012-03-31T14:57:15.523Z · LW(p) · GW(p)
The week-long camp will not have filler. At the minicamp, there were exercises that I suspected weren't useful -- the "rationalizing game" stands out in my mind -- but probably a bigger problem was trying to pack so many things into a week's span. I definitely had the feeling that I needed to be taking good notes, because I couldn't possibly absorb everything in the various sessions during those sessions.
↑ comment by wedrifid · 2012-03-31T11:23:17.725Z · LW(p) · GW(p)
I'd like to second the second question. Should I be worried about the 3 day camp attempting to cram in too many useful techniques or the week long camp having filler?
They ran our camp for 10 weeks or so. Sure, some of that was filler. But they have enough there for at least a week.
Replies from: b1shop
comment by malderi · 2012-03-29T21:26:48.757Z · LW(p) · GW(p)
Feedback: I'm interested in this but will not attend.
Why: I'm a professional in another city with limited vacation time. I could potentially go to the Bay Area for this, but it'd be expensive in money, time, and vacation not spent elsewhere. I believe it might still be worth doing it, but am not convinced.
However, I AM convinced that if one were held in my city (in this case, Seattle) for a similar price, I would be very interested. The cost could be offset because the instructors' lodging/travel would be paid for instead of the attendees'. If the workshops were something like Thursday/Friday evening and all weekend, so much the better.
Suggestion for the future: Check interest for doing these locally in other major cities and run the numbers to see if it's worth it. It might not make sense, but if it did, count me in!
Replies from: Eliezer_Yudkowsky, Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T21:37:10.957Z · LW(p) · GW(p)
This may happen in the future, depending on how this year goes; we're pretty sure it's not happening this year, so it's not a good thing to wait on.
Replies from: malderi↑ comment by malderi · 2012-03-29T21:42:51.253Z · LW(p) · GW(p)
Thanks. I doubt I will go this year for the reasons I listed above. Next year when I have more vacation time built up I'd consider doing it.
Although if you'd like to include "read advance chapters of HPMOR" into the benefits, I'm in.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T21:46:02.908Z · LW(p) · GW(p)
...I don't know if I'll have the next chapter by May 13, and if there are other advance chapters after that they'll probably be horrible cliffhangers, but I'll consider it.
Replies from: malderi↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T22:03:08.751Z · LW(p) · GW(p)
Actually, quick check - was it clear from the text that there are two 3-day weekend camps and one 1-week camp? Hopefully a 3-day camp wouldn't be that expensive in terms of vacation time not spent elsewhere. Go ahead and ignore this comment if it was already clear, but if it wasn't clear from the text let me know and I can try to emphasize it more strongly.
Replies from: thomblake, malderi, taryneast↑ comment by thomblake · 2012-03-29T22:19:49.596Z · LW(p) · GW(p)
FWIW, it was not clear from a skim.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T22:29:10.294Z · LW(p) · GW(p)
Okay, went in and fixed.
↑ comment by malderi · 2012-03-29T22:32:05.898Z · LW(p) · GW(p)
Yes, it was clear to me.
I would prefer the weeklong one but am considering the weekends. The cost of airfare is the same.
Another question: Is there any particular reason why you are including the hotel costs in the fee? I can see the marketing value of a single number, but for those already in the Bay Area (or with friends/family there), reducing the cost a bit and staying elsewhere would be helpful.
If activities/teambuilding are going late, that makes sense, but that was not clear on a single read through (I could read through again to find out, but figure the feedback on this not being clear might be helpful).
Replies from: malderi↑ comment by taryneast · 2012-03-30T05:31:31.549Z · LW(p) · GW(p)
It is clear now.
However I'd like to mildly dispute "Hopefully a 3-day camp wouldn't be that expensive in terms of vacation time not spent elsewhere".
...some of us come from another country (quite a few actually) - and the travel time alone is equal to the 3 days we'd spend on the actual camp. Plus a day or so adjusting to jetlag enough that we could even stay awake for the first day.... I'd estimate that for me to attend even the three day camp would be, at minimum, a seven day vacation.
Factor in the significant cost of the airfares, and we're starting to talk a lot of vacation-opportunity-cost.
Obviously, attending the longer camp would give a better signal-to-noise ratio... but it would still be a serious undertaking.
PS: that's not to say, of course, that's it's not still worth the effort. Personally I'm looking into my options to see if I can pull it together to get to the week-long one this time...
The main point is just not to forget that a significant number of LW readers are in a quite different situation to your own and thus the difficulty of attending should not be inadvertently trivialised.
comment by marc · 2012-04-02T13:00:58.632Z · LW(p) · GW(p)
I attended the minicamp last summer, at more personal expense than most participants, since I flew in from Europe (I did have other things to do in California, so the cost wasn't entirely for minicamp).
If you want an analogy with minicamp, think of an academic summer school. At the most important level, I think the only thing that really separates minicamp (or an academic summer school) from Christian camps is that the things they teach at minicamp (and summer schools) are mostly correct.
I go to summer schools to learn from people who have thought about things that I care about, in greater depth than I have. If you don't believe that will be true, don't go. You should be able to make a reasonable guess about whether you have things to learn by looking at the instructors' posts on Less Wrong.
I definitely agree with many things that the other participants said. I found that minicamp gave me a sense that things that normal people consider insoluble are often not, and a well thought out series of actions can lead you to places that most people would not believe. I also found it inspiring to be around a group of people that really care about improving themselves - something that I have found relatively rarely.
I have one genuine criticism of minicamp. There are reasons to be tactically 'irrational' in the real world. As a cartoon example: if disagreeing repeatedly with your boss will get you fired from your well-paid job, and you're giving significant amounts of money to the efficient charity, then stay quiet.
Now, Eliezer is too smart to fall for this - it's reading his writing that let me clearly understand the difference between faux-rational (Spock-like dedication to the truth, and getting fired) and truly rational (shutting up). Indeed, the complexities of this are beautifully explored in Harry Potter and the Methods of Rationality. However, at minicamp, I felt like the less inspiring aspects of being rational were under-emphasised. That is totally understandable, since talking about bending to social domination, lying, etc., is low-status. Also, the instructors at minicamp have, quite deliberately, created a community where they are somewhat isolated from having to deal with irrational people, so they probably don't viscerally experience its importance on quite such a regular basis.
I felt that, at the end of minicamp, there should have been a session pointing out a few aspects of living rationally in an irrational world. I think we needed a lecture from Professor Quirrell, so that we don't create rationalists that can spot every bias known to psychology (and a few more) but aren't actually having positive impact on the world, because they don't know how to get things done.
I'll end by pointing out that I've just asked Anna whether I can go back this year, maybe as a participant, maybe as a volunteer. Hopefully that should let you estimate how I truthfully rate the overall experience.
comment by SilasBarta · 2012-03-29T20:16:09.593Z · LW(p) · GW(p)
7b) Is there any evidence I'll be glad I went that a Christian retreat could not produce just as easily?
Edit: Okay, 15 seconds to this being downvoted was a little hasty.
Replies from: fiddlemath, None, TheDave, ciphergoth, lessdazed, Mercurial, Academian, orbenn, Eliezer_Yudkowsky↑ comment by fiddlemath · 2012-03-29T22:57:35.420Z · LW(p) · GW(p)
I know that this is mere anecdote; and that after doesn't strictly imply because of. But, since the mini-camp, people who know me would probably agree that:
- I am more likely to try new things; in particular, I now have the habit of trying new habits to see what works and what doesn't. This has helped in a handful of little ways:
- I've stopped biting my nails.
- I've stopped drinking soda.
- I maintain a journal to get better information about myself.
- I use Anki to memorize facts, instead of just thinking it's a good idea. This has made my work rather more efficient.
- I have more time and energy for both my academic work and other activities I enjoy.
- I meet people more easily, and have more friends.
To emphasize the last point, uncomfortably personally: I am no longer cripplingly unable to examine my own sexuality, ask women out, or engage in relationships. (I'm still inexperienced for my age, though this improves over time.) These changes are due to techniques I learned at mini-camp: not lessons of the form "how to pick up women", but "how to be right about yourself".
Also, I suspect my writing has improved.
There are also internal, mental changes; and I suspect that the rate at which my agency improves has increased. But you'd get the same report in different words from someone after a Christian brainwashing retreat, so I suppose these are pretty weak evidence for you.
Replies from: Nisan
↑ comment by [deleted] · 2012-03-30T00:31:53.317Z · LW(p) · GW(p)
Finding people who could converse at a high level about the most important topics in the world was more fulfilling than I could have imagined. You can get some of this at a meetup - and I've been to meetups in Chicago, St. Louis, and the Bay - but the level of fulfillment I got at the mini-camp was the greatest by far.
Again, forgetting all the rationality training - there were moments at mini-camp when everyone was hanging out and I would literally have trouble deciding where to stand in a room because every conversation going around me was so ridiculously interesting that I couldn't stand choosing where to place myself. I felt like a wealth of knowledge was being spilt around me, and if I didn't scramble to consume as much as possible I'd miss some life-changing insight and regret it forever. It was so beautiful it hurt.
Replies from: Blueberry, Wei_Dai
↑ comment by Blueberry · 2012-03-30T00:41:04.227Z · LW(p) · GW(p)
Again, forgetting all the rationality training - there were moments at mini-camp when everyone was hanging out and I would literally have trouble deciding where to stand in a room because every conversation going around me was so ridiculously interesting that I couldn't stand choosing where to place myself. I felt like a wealth of knowledge was being spilt around me, and if I didn't scramble to consume as much as possible I'd miss some life-changing insight and regret it forever. It was so beautiful it hurt.
Wow. That's like the opposite of most parties.
↑ comment by Wei Dai (Wei_Dai) · 2012-04-25T00:44:31.264Z · LW(p) · GW(p)
Can you describe the difference between a typical conversation at the mini-camp, and a typical conversation on LW? (Would it be accurate to say that you're more impressed with the former than the latter? I'm curious to find out why if that's the case.)
Replies from: None
↑ comment by [deleted] · 2012-04-25T05:03:01.744Z · LW(p) · GW(p)
It would be accurate to say I'm more impressed with the former than the latter. I think the majority of this effect is caused by a) the conversations being in person, which is a better format than this nested Reddit thing, and b) the fact that we were together so long.
That said, the conversations were also more enjoyable and interesting than conversations I've had at meetups (which have often been fantastic). I'm not exactly sure why - perhaps experiencing the somewhat rigorous mini-camp generated a sense of camaraderie, and thus friendship?
After trying to adjust for the above effects, it also does seem to me that any residual difference in quality could have to do with the group that was selected. Luke did mention to me that they tried to choose a relatively extroverted set of people for the first mini-camp. Also, the level of professional success at the mini-camp was higher than most other groups I've been in, including meetups. (I also think the median age of the mini-camp must have been higher than the median ages of the meetups I've attended. At 21, I was one of the youngest there.)
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2012-04-25T23:15:45.072Z · LW(p) · GW(p)
So it's more about the form of the conversations, and less about the content?
A problem I have with in-person group conversations is that I occasionally find that whoever is speaking is rambling or just not being as interesting as I'd hoped, and I wish there were some way to politely signal the person to make their point quickly and give someone else a turn. And then when I get a chance to speak, I fear that I'm not being as interesting as I had expected to be when I decided to speak up, and that other people are thinking I should stop talking.
I'm curious if other people have had this problem and how they dealt with it.
Replies from: TheOtherDave, None
↑ comment by TheOtherDave · 2012-04-29T15:20:28.382Z · LW(p) · GW(p)
An experiment I tried once, when I was helping mediate a 60-person round-robin discussion group (1), was to give everyone in the room four colored index cards: red, blue, green, and white, and assign them meanings by convention:
red = "I disagree with what the speaker is saying"
green = "I agree with what the speaker is saying"
blue = "I have a question about what the speaker is saying"
white = "I do not care about what the speaker is saying"
My theory was that by establishing a communication channel that supported multiple simultaneous inputs, I could get the flow control to be a lot more efficient.
The experiment mostly failed, in that people didn't use the cards, so I can't really speak to results. It still seems plausible to me, and I haven't seen it done elsewhere.
===
1 - Don't try this at home.
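Purely to illustrate the "multiple simultaneous inputs" idea above, here is a minimal sketch in Python of how a moderator might aggregate raised cards into a single flow-control hint. The summarize rule and its thresholds are invented for illustration, not anything that was used in the actual experiment:

```python
from collections import Counter

# The four-card convention, as assigned above.
CARD_MEANINGS = {
    "red": "I disagree with what the speaker is saying",
    "green": "I agree with what the speaker is saying",
    "blue": "I have a question about what the speaker is saying",
    "white": "I do not care about what the speaker is saying",
}

def summarize(raised_cards):
    """Collapse many simultaneous signals into one flow-control hint.

    The thresholds here are arbitrary illustrations, not tested values.
    """
    counts = Counter(raised_cards)
    if raised_cards and counts["white"] > len(raised_cards) / 2:
        return "Wrap up: most of the room has stopped caring."
    if counts["blue"] >= 5:
        return "Pause for questions."
    return f"Keep going ({counts['green']} agree, {counts['red']} disagree)."

# Example: 40 of 60 people are currently holding up a card.
print(summarize(["white"] * 25 + ["blue"] * 6 + ["green"] * 9))
```

The design point is only that many listeners can signal at once without anyone taking a speaking turn; whether people will actually raise the cards is, as reported above, the hard part.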
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2012-05-04T01:19:39.574Z · LW(p) · GW(p)
My theory was that by establishing a communication channel that supported multiple simultaneous inputs, I could get the flow control to be a lot more efficient.
I think people already do something like this, using facial expressions and body language. Using your cards probably felt redundant, condescending (implying the speaker can't read the standard signals), weird, or too explicit (e.g., when you want to signal disagreement/disinterest but also want plausible deniability).
So I guess I was hoping for some tips on how to read/send the usual signals, and what to do when someone rambles on despite sending the usual signals. Another idea I just thought of is to have a smartphone app that allows one to send a covert anonymous signal to the speaker (but it would probably take too much work to get everyone to set it up and use it).
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-04T01:48:48.671Z · LW(p) · GW(p)
I think people already do something like this, using facial expressions and body language.
Certainly. Those mechanisms weren't working terribly reliably in a conversation that involved 60 people, which is precisely why I'd been looking for ways to augment the normal mechanisms.
↑ comment by [deleted] · 2012-04-29T04:37:04.618Z · LW(p) · GW(p)
So it's more about the form of the conversations, and less about the content?
Basically. But I think the form of the conversations leads to much better content, and more depth of exploration, and clearer / faster communication.
wish there was some way to politely signal the person to make their point quickly and give someone else a turn.
I honestly find that this is difficult. I think it's easier to learn how to politely interrupt, or just be careful about the groups one hangs out in, or speak in smaller groups.
And then when I get a chance to speak, I'd fear that I'm not being as interesting as I had expected to be when I decided to speak up, and other people are thinking that I should stop talking.
That is interesting. I try to keep my points short, when possible. I think short points also facilitate better communication; shorter back-and-forth periods enable people to ask for the specific information they need, and close inferential gaps.
↑ comment by TheDave · 2012-03-30T00:37:03.927Z · LW(p) · GW(p)
As an attendee, my personal data might be relevant:
I have gained practice deliberately acquiring new habits and soliciting useful feedback. Before camp I had no specific plans for self-improvement other than "work harder", and now I actually keep track of what works and what doesn't. For instance, I am deliberately improving my public speaking skills by giving talks on Minicamp material once a week to a limited audience. I would place a bet that the "alternate universe me" who instead attended Inspirational Retreat X (IRX) would not have had lasting effects nearly a year later.
I am deliberately extending my network of contacts. Speaking to new people was a skill I didn't have pre-Minicamp. On this point, "alternate universe me" could reasonably have acquired similar skills from IRX, but I have relatively strong reason to believe those skills would be much more of a black box than they are now. I usually leave workshops inspired, but I can tell a workshop was poor when I try to apply the skills I learned and discover that it's not as easy as it seemed in the instructor's examples. There is a difference between "explaining something so that it sounds good" and "explaining something so someone else can do it". I attend swing dancing workshops about once a month, and Minicamp never once felt inapplicable, unlike several of the lessons I've taken over the years. More personal data: I talked a local CEO into letting me give a presentation on rationality to the class he teaches on the side at Penn State, which is something I would never even have thought about doing before Minicamp.
This comment has already gone on too long, but I hope that gives you some useful information.
Summary: Minicamp's general perspective on teaching skills is more effective than the vast majority of small workshops I attend because the instructors taught skills rather than inspiration. Inspiration came from trying their skills and discovering that they worked, which is surprisingly rare.
↑ comment by Paul Crowley (ciphergoth) · 2012-03-30T07:42:27.754Z · LW(p) · GW(p)
People reporting back from a Christian retreat are likely to report effects that Christians approve of - that they're asking Jesus to help them decide in their daily life, that they feel a more full and whole relationship with God, etc. But those things (where they don't require the existence of a God) are likely to be true - they really are doing those things.
↑ comment by lessdazed · 2012-03-29T21:38:20.416Z · LW(p) · GW(p)
7b) Is there any evidence I'll be glad I went that a Christian brainwashing retreat could not produce just as easily?
If you went to a Jehovah's Witness retreat, and were in an accident, and you were conscious enough to refuse a blood transfusion, you'd be glad for having learned what you did at the retreat, even if you knew the refusal would be fatal.
In general, anything that is compelling and affects your decisions will make you glad for it, and its being compelling is probably not inversely related to its being true. So I'm not too concerned that my tentative answer to this question is "no."
Replies from: SilasBarta, orthonormal
↑ comment by SilasBarta · 2012-03-29T21:44:25.677Z · LW(p) · GW(p)
I'm concerned, however, that the camp can't produce evidence of the kind, "Before the minicamp, Mary Sue was in rehab for crack. A year later, she's clean and has a successful web consultancy." (Exaggerating the expected magnitude of change, of course.) Religious retreats don't produce this, and tend to produce results more like, "Immediately after the retreat I felt really good, and a year later I do awesome on unobservable metrics!"
Replies from: WrongBot
↑ comment by WrongBot · 2012-03-29T23:07:43.743Z · LW(p) · GW(p)
Before the bootcamp, I'd just barely managed to graduate college and didn't have the greatest prospects for finding a job. (Though to be fair, I was moving to SF and it was a CS degree.)
At the bootcamp, I founded (and then folded) a startup with other bootcampers, which was profoundly educational and cost a couple months of time and <$100.
Now, <1 year after the bootcamp, I'm doing programming and design work on the new SimCity, which is as close to a dream job for me as could reasonably be expected to exist.
I can't attribute all my recent success to the bootcamp, because I was pretty awesome beforehand, but it really did dramatically improve my effectiveness in a number of domains (my girlfriend is grateful for the fashion tips I picked up, for example). Other specific things I've found useful include meditation, value of information calculations, and rejection therapy.
↑ comment by orthonormal · 2012-03-29T21:43:21.372Z · LW(p) · GW(p)
Replace "glad I went" with a better criterion- that question deserves a good response.
Replies from: lessdazed
↑ comment by lessdazed · 2012-03-29T22:18:40.201Z · LW(p) · GW(p)
"Is there evidence this will be worthwhile according to my values now, independently of how it might change my values?"
"Is there evidence that this is instrumentally useful for more than warm fuzzies?"
"Is there evidence that for the probable benefit of this event the costs are substantially optimized for it? I.e., if the benefit is substantially social, even if this would be worth flying around the world for, a program could actually be optimized for social benefits, and/or I could attend a closer/cheaper/shorter program with similar benefits to me."
"Regardless of anyone's intent, what is this program optimized for?"
"How's the food?"
↑ comment by Mercurial · 2012-03-29T21:16:56.337Z · LW(p) · GW(p)
Replies from: SilasBarta, None
↑ comment by SilasBarta · 2012-03-29T21:20:39.359Z · LW(p) · GW(p)
The cooperation has actually been happening; it's just that it was achieved by ostracizing the guy who asked if you were adhering to the principles expected of that kind.
Replies from: orthonormal, Blueberry
↑ comment by orthonormal · 2012-03-29T21:40:14.741Z · LW(p) · GW(p)
Note that your original comment has positive and rising karma at this point. I have a high estimation of the minicamps (partially because I'm friends with fiddlemath, who really has appeared to level up since last summer in noticeable ways), but I'm glad that you're stubborn about making SIAI/CMR show you some good evidence.
↑ comment by Blueberry · 2012-03-29T21:51:30.998Z · LW(p) · GW(p)
There are ways of making that point without saying it sounds like a "Christian brainwashing retreat."
Replies from: SilasBarta
↑ comment by SilasBarta · 2012-03-29T22:23:54.424Z · LW(p) · GW(p)
Sorry, "Christian retreat" didn't convey the idea, and in any case I gave a link to a better explanation of the part of conceptspace I was trying to refer to. I'll take it out since the link should suffice.
Replies from: Blueberry
↑ comment by Academian · 2012-03-29T23:58:43.766Z · LW(p) · GW(p)
For the purpose of causal inference / intervention evaluation, you must ask if a Christian retreat would have had this effect on those participants. Perhaps Christians feel closer after a Christian event, but I find Christian events somewhat alienating because I'm not Christian. I don't find aspiring rationalist events alienating, in part because I'm an aspiring rationalist. It's fun to hang out with people who have common interests, and depending on who you are, that group is a different group... for me, it's rationalists. Part of the point of the camp is that it has a similar bonding effect that any coming together of people with a deep common interest or aspiration can have, and in this case, the common aspiration is rationality.
Plus, at the camp, I did internalize skills and attitudes that have helped me a lot over the past year (i.e., I've improved much more over the past year than I have in previous years): for example, looking more vigilantly for fungibility between my time and money, and looking more at the reasons I do things and finding more effective ways to pursue those reasons...
Those particular effects I wouldn't expect from a Christian camp, just as the particular effect of feeling close to Jesus is not an effect I'd expect from a rationality camp. I just happen to prefer the "rationality" effects, and these camps are for people with similar such preferences.
Seriously, it's fun :)
↑ comment by orbenn · 2012-03-29T20:58:57.751Z · LW(p) · GW(p)
If the primary motivation for attending is the emotional rewards of meeting others with an interest in rationality and feeling that you've learned how to be more rational, then yes, a Christian brainwashing retreat would make you glad you attended it in the same way - if and only if you are/became Christian (since non-Christians likely wouldn't enjoy a Christian brainwashing retreat).
That said, as many of us have little/no data on changes in rationality (if any) of attendees, attending is the only real option you have to test whether it does. Confirmation bias would make a positive result weak evidence, but it'd be relatively important given the lack of other evidence. Luckily, even if the retreat doesn't have benefits to your objective level of rationality, it sounds worthwhile on the undisputed emotional merits.
I think what SilasBarta is trying to ask is: do we have any objective measurements yet from the previous minicamp that add weight to the hypothesis that this camp does in fact improve rationality or life achievement over either the short or long term?
If not then I'm still curious, are there any plans to attempt to study rationality of attendees and non-attendees to establish such evidence?
Replies from: thomblake, SilasBarta
↑ comment by thomblake · 2012-03-29T21:08:14.377Z · LW(p) · GW(p)
If not then I'm still curious, are there any plans to attempt to study rationality of attendees and non-attendees to establish such evidence?
Yes, that's an oft-repeated goal, and as Eliezer mentions in a sibling, there's a one-year follow-up planned but it has not yet been a year.
↑ comment by SilasBarta · 2012-03-29T21:01:18.780Z · LW(p) · GW(p)
Right, it's been nearly a year since the last one. The long-term evidence is out there. How are attendees doing in their lives now vs how they were doing before?
I'm pretty sure there's been enough time to find this information out by now.
Replies from: bentarm
↑ comment by bentarm · 2012-03-29T21:12:27.759Z · LW(p) · GW(p)
It's hard to get objective evidence on this, because the participants were all pretty exceptional people to start off with, and there were so few of them, but there is an effort underway to collect what data we can from those that attended the longer Boot Camp - hopefully we'll be able to report back within a month.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T20:48:21.449Z · LW(p) · GW(p)
You'll be able to explain the math behind what you do.
Replies from: lessdazed, thomblake
↑ comment by lessdazed · 2012-03-29T21:38:29.762Z · LW(p) · GW(p)
It's easy to imagine a Christian brainwashing retreat run by someone similar to Luke that would also have that property.
Replies from: Academian
↑ comment by Academian · 2012-03-30T18:49:37.255Z · LW(p) · GW(p)
Do you think a religious event would have the same effect on the same people? That is, these mostly atheist people who were all very interested in science and rationality? Or do you just think that there exist people on whom a religious event would have a similar effect?
This is an important distinction for someone deciding whether to attend, because such a person knows whether she is religious or not.
↑ comment by thomblake · 2012-03-29T20:53:05.072Z · LW(p) · GW(p)
I'm not sure you answered the question. I think SilasBarta is looking for evidence, that someone can provide for him right now, that he will be glad he went.
ETA: and for purposes of this discussion, I think a bald assertion does not fall in the cluster "evidence".
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T20:59:45.932Z · LW(p) · GW(p)
There's lots of statistical data already in the post about evidence that you will be glad you went. That wasn't what Silas Barta asked, and frankly I'm not sure this thread is going to be productive given the way the opening question was framed.
Replies from: SilasBarta, thomblake, thomblake
↑ comment by SilasBarta · 2012-03-29T21:03:15.184Z · LW(p) · GW(p)
What would be a better framing?
How long would it take Anna to email the attendees and ask them to reply back about their current life status as compared to a year ago, so as to avoid proceeding with misleading evidence?
Edit in reply to the unmarked update to EY's comment:
Like thomblake noted, the evidence being cited is not of the kind I asked for, which was why I framed my question with a link to Yvain's lucid explanation of the problems of sorting out good retreats from bad. The exact evidence produced can be found just the same from (less useful) Christian brainwashing retreats, which is why I wasn't impressed the last time around.
I do appreciate your efforts to apply the methods of rationality to your own endeavors.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T21:12:50.057Z · LW(p) · GW(p)
Apparently the one-year followup is currently underway - a Minicamp attendee volunteered to do it.
This is pretty strong evidence of itself - people almost never volunteer for things and do them.
EDIT: OOPS! Anna said that an RBC attendee volunteered to do the RBC followup. RBC as you know was less successful than Minicamp, and we do not have someone doing the Minicamp followup yet.
I will remark that it's more time-intensive than you seem to think - this is something that gets done after successfully hiring an executive assistant; we have a candidate but the hiring hasn't yet occurred.
Replies from: SilasBarta, Jayson_Virissimo, bentarm, SilasBarta
↑ comment by SilasBarta · 2012-03-29T21:15:19.786Z · LW(p) · GW(p)
This is pretty strong evidence of itself - people almost never volunteer for things and do them.
It would be strong evidence if the volunteer had completed the "do them" part, certainly.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T21:16:11.778Z · LW(p) · GW(p)
Fair enough. One cannot update on evidence one has not yet received.
↑ comment by Jayson_Virissimo · 2012-03-30T13:15:51.769Z · LW(p) · GW(p)
SilasBarta:
7b) Is there any evidence I'll be glad I went that a Christian retreat could not produce just as easily?
Eliezer_Yudkowsky:
Apparently the one-year followup is currently underway - a Minicamp attendee volunteered to do it. This is pretty strong evidence of itself - people almost never volunteer for things and do them.
Yes, people usually don't do that. On the other hand, it isn't implausible for someone who just returned from a "Christian retreat" and who is "on fire for God" to "volunteer for things and do them". SilasBarta isn't merely asking for evidence that the camp provides benefits; he is asking for a reason to think it has benefits that exceed those that can be obtained at other kinds of events (specifically, a "Christian retreat").
Replies from: TheOtherDave, Eliezer_Yudkowsky
↑ comment by TheOtherDave · 2012-03-30T14:42:45.335Z · LW(p) · GW(p)
he is asking for a reason to think it has benefits that exceed those that can be obtained at other kinds of events
Or, rather, that exceeds that which can be so obtained. That is, SB's 7b relates to the relative quality of the reason for belief, not the relative quality of the benefits.
But you're right that (for example) Christian retreats routinely get people to volunteer to do things and do them, so the simple fact of a Minicamp attendee doing so is not by itself strong evidence of a difference between the two events.
OTOH, there may well be sufficient differences between the two communities that the similarity of results is such evidence. That is, if event X1 gets result Y1 from a member of community Z1, while X2 gets Y2 from a member of Z2, the similarity of Y1 and Y2 given significant relevant differences between Z1 and Z2 suggests equally significant differences between X1 and X2. If Z2 is consistently more inclined to cooperate than Z1, and Y1/Y2 demonstrate willing cooperation, I conclude that X1 is more effective at inducing cooperation than X2.
(OTOOH, a lot depends on why Z2 cooperates more reliably. If it turns out that cooperation is primarily caused by the quality of Z2's events, then that's evidence against there being a significant difference between X1 and X2.)
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-31T06:46:54.638Z · LW(p) · GW(p)
Yes, after I said "After Minicamp you will be able to explain the math behind what you do", thus answering the original question, whereupon I was directed to answer other questions instead.
↑ comment by bentarm · 2012-03-29T23:11:48.819Z · LW(p) · GW(p)
RBC as you know was less successful than Minicamp
Assuming this is true, do you have a good model for why this was the case?
It is certainly the case that those at RBC were exposed to more of the "rationality teaching" material than those at Minicamp, so if this is true then it should probably be worrying.
Replies from: lincolnquirk, Eliezer_Yudkowsky
↑ comment by lincolnquirk · 2012-03-30T02:45:28.902Z · LW(p) · GW(p)
I think it was a selection effect: at Mini-Camp, the people who went were chosen from a broad pool of anyone who could take a week off at the beginning of summer. But the only people who went to Mega-Camp were the types of people who could afford to take a whole summer off. So the Mega-Camp attendees were younger, more likely to be students, less likely to have other things going on in their lives.
(I was a Mega-Camp attendee.)
Other potential reasons: it started to suck living & eating together in a tight, crowded space. It's tolerable (and even fun!) for a week, but after a few weeks, privacy & space become an issue.
Replies from: bentarm
↑ comment by bentarm · 2012-03-30T10:41:58.334Z · LW(p) · GW(p)
These are all good reasons why RBC would seem less awesome than Mini-Camp, but they aren't actually good reasons why it should have been less effective at teaching people rationality. If anything, surely one would expect people who were younger and had less going on in their lives to benefit more from rationality training.
Basically, I agree with you that these are the reasons that Eliezer describes RBC as "less of a success", but this just means that Silas is right, and the measure of "success" being used is "how awesome did everyone think it was", not "how much rationality did we manage to teach".
Replies from: lincolnquirk
↑ comment by lincolnquirk · 2012-03-30T20:06:41.890Z · LW(p) · GW(p)
Agreed.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-31T06:47:56.235Z · LW(p) · GW(p)
Yes, we have a good model for why this is the case, and it involves specific managers performing worse than others, so the social cost of explaining our model is not zero.
We tried two experiments. The first one worked better, so we're repeating that one instead of the second one.
↑ comment by SilasBarta · 2012-03-29T21:23:18.494Z · LW(p) · GW(p)
I will remark that it's more time-intensive than you seem to think - this is something that gets done after successfully hiring an executive assistant; we have a candidate but the hiring hasn't yet occurred.
I'm as open to the planning fallacy as anyone, but really, how long does it take to email everyone for their current life status?
Replies from: taryneast
↑ comment by thomblake · 2012-03-29T21:02:16.355Z · LW(p) · GW(p)
I am not sure this thread is helpful, particularly given the way the opening question was framed.
I agree. But Silas is just doing his due diligence to ask that sort of question every time one of these things is mentioned, and surely that's valuable to have around.
↑ comment by thomblake · 2012-03-29T21:04:50.960Z · LW(p) · GW(p)
There's lots of statistical data already in the post about evidence that you will be glad you went. That wasn't what Silas Barta asked
I left out the clause "that a Christian brainwashing retreat could not produce just as easily" in my retelling, since I was just noting an additional constraint. I don't think the sort of evidence in the post above actually satisfies that criterion.
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-16T20:52:03.868Z · LW(p) · GW(p)
It's April 16th today... Any update on when the acceptances will be sent out?
Replies from: jslocum
comment by Benquo · 2012-04-01T04:39:33.878Z · LW(p) · GW(p)
Since several people have asked for negative reviews in addition to positive ones, I'll share my mixed review, starting with the bad:
The first minicamp was divided into two types of sessions: epistemic rationality, and social effectiveness.
The epistemic rationality sessions always seemed "right", but what we did didn't translate directly into making improvements in my day-to-day rationality. The exercises were fun and seemed like they were "on the right track," but I am still putting time and energy into figuring out how to turn "knowing" about rationality into actually making good decisions. (Other minicamp participants reported spectacular gains, so perhaps I'm just a slow learner in that respect.)
On the other hand, the instructors did and do seem quite serious about making things better along that axis, so I would expect this coming minicamp to be superior in some ways.
The social effectiveness training was much more concrete and I was able to apply it immediately. I've gotten measurable results - compliments, and most shockingly to me, strangers smile at me on the street. On the other hand, I don't think that should be Rationality Org's comparative advantage, and if that's all you think you want out of the program you may be better off elsewhere.
By far the best thing about Minicamp was bonding socially with a like-minded group of people. Having access to a group of people who were willing to travel and take two weeks to work on nothing but rationality was an experience I don't think I could have had any other way.
A subset of us have been trying to have every-other-week Skype calls with each other on a rotating basis, and this has worked better than any drill or exercise to keep me focused on improving, raise my baseline expectations of my own rationality, and remind me that if I am experiencing distress or not getting what I want, that means I should be doing something different and I need to figure out what that is.
Minicamp also increased the mental availability of a bunch of rationality techniques I'd already known about in theory. I am much better about noticing when I'm rationalizing, and now at least have a fighting chance at diagnosing problems while they're happening, rather than long afterwards.
comment by NoSignalNoNoise (AspiringRationalist) · 2012-03-30T15:18:48.737Z · LW(p) · GW(p)
There are a lot of glowing recommendations from past participants here. In fact, I have not noticed a single criticism from a past participant. This reeks of self-censorship, because even if rationality mini-camp was overall an awesome experience, it is very unlikely that it was perfect.
In order to do my small part to counteract the social pressure not to criticize those with high status, I hereby pre-commit to upvoting any comment from a past participant that contains criticism of the mini-camp. This includes comments that are generally positive but include criticism of specific aspects.
Replies from: JGWeissman, thejash
↑ comment by JGWeissman · 2012-03-30T16:44:15.174Z · LW(p) · GW(p)
In this comment, GuySrinivasan reports The Good, The Bad, and The Ugly.
↑ comment by thejash · 2012-03-31T17:13:22.879Z · LW(p) · GW(p)
I can definitely understand your perspective. I pretty much ONLY read the negative parts of reviews--if there is NOTHING bad, that is a bad sign in itself.
I also commented positively below, but since you asked, here are my complaints about the last minicamp:
- A little disorganized. Apparently knowing about the planning fallacy does not make you immune to it ;) I suspect this will be fixed for this year.
- Large number of college students (graduate and undergraduate). I would have liked to see a wider range of attendees. Again, this was probably partly due to the short notice for last year.
- Some sessions were not valuable to me. However, most of those were valuable to others, so I think this is due more to the fact that brains are different than that the sessions were poorly done.
Actually, I'm pretty sure we all gave detailed feedback afterward (including lots of suggestions for improvements). Could Anna or someone post links to those too? Perhaps seeing the minor details that were negative will help people get a better sense for how useful it was overall.
comment by Humbug · 2012-03-29T19:20:06.646Z · LW(p) · GW(p)
“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”
Sing karaoke...
Now I can't get this image out of my head of Eliezer singing 'I am the very model of a singularitarian'...
Replies from: Helloses
comment by Giles · 2012-03-30T00:44:41.742Z · LW(p) · GW(p)
A question for Eliezer, Anna or Luke: under which circumstances would you prefer people came to minicamp, and under which circumstances would you prefer people just gave that money (inc. travel) to the SI?
Not that I'm necessarily going to follow this advice, just curious which factors you think are relevant.
Vaguely related question: I'm sure I remember there being a minicamp in New York in the past (EDIT: I was wrong). What's the expected length of time before the next east coast camp?
Replies from: lukeprog, AnnaSalamon
↑ comment by lukeprog · 2012-03-30T01:33:53.322Z · LW(p) · GW(p)
People seem to be getting the impression that these minicamps may be primarily fundraising opportunities for the Singularity Institute. So, by way of explanation:
- At my request, our Board of Directors put Anna in charge of developing Rationality Group and gave her a separate budget for it.
- Rationality Group has been developing lessons, testing them, iterating the lessons in response to feedback, researching how to run a successful non-profit in this space, networking, and hiring founding members for a couple months now.
- Rationality Group hopes to learn how to run these kinds of camps without losing money. (Minicamp paid for itself as a result of future donations from minicamp participants who plausibly wouldn't have donated nearly as much had they not had this in-person encounter with us, but we'd like to learn how to run these kinds of minicamps without losing money while not counting future donations.)
- The lessons are now more tested and practiced than what we put together for minicamp, but we always appreciate opportunities to get feedback on them from additional participants.
- Who knows? Maybe we'll want to hire some of the participants to be part of our team.
- Above all, we want to start making a greater push to raise the sanity waterline, help aspiring rationalists along their journey, and help create more heroes who do cool things like ethical careers, efficient charity, and x-risk reduction.
↑ comment by Giles · 2012-03-30T02:14:55.523Z · LW(p) · GW(p)
Oops. I wasn't thinking along the lines of "the people we most want to come to minicamp are the people who are most easily brainwashed into giving us money". Sorry if I gave that impression.
I was more thinking along the lines of "when should I buy utilons, and when should I buy self-improvement in order that I can acquire more resources and/or more effectively turn resources into utilons?"
Minor point: can you clarify whether Rationality Group is the same thing as the Center for Modern Rationality?
Replies from: lukeprog
↑ comment by lukeprog · 2012-03-30T04:09:19.276Z · LW(p) · GW(p)
can you clarify whether Rationality Group is the same thing as the Center for Modern Rationality?
They are the same. We're still testing names. I've talked with several marketing firms, and right now my assistant is tracking down additional companies who focus on market-testing organization names.
↑ comment by AnnaSalamon · 2012-03-30T00:57:54.836Z · LW(p) · GW(p)
If in doubt, apply. You can always decide to not come after being accepted, gaining info, etc. (Applying is a 10-15 minute process.)
As to money vs. minicamp: come to minicamp. Especially if you've had little to no in-person contact with this community before. You'll learn more, acquire more money-making skills, make friends, and get into a better position for long-term influence. At least, that's my impression.
We haven't run any on the east coast; at some point we'd like to do this, probably after refining our curriculum, growing our instructor-set, etc., but probably not in the next 12 months.
comment by jsalvatier · 2012-03-30T19:43:46.775Z · LW(p) · GW(p)
I would definitely go to one of these again. Are you considering repeats? If so, how much do you expect repeat attendees to get out of it?
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-03-30T21:41:35.573Z · LW(p) · GW(p)
We're considering it, though I'm not sure yet how much overlap there will be. We're also considering letting some past folk volunteer at the upcoming sessions. Contact me (via email or the application form, either way) with your interests.
comment by Paul Crowley (ciphergoth) · 2012-03-30T07:39:24.272Z · LW(p) · GW(p)
My primary partner is curious about this sort of thing, since unsurprisingly I talk about it all the time. Should I be thinking about going on my own or both of us going? We live in London UK.
Replies from: AnnaSalamon, Academian
↑ comment by AnnaSalamon · 2012-03-30T21:19:12.592Z · LW(p) · GW(p)
Depending on your partner, I'd consider coming together. If you come with a close associate of any sort, you'll have someone in your life afterward who can help you remember to practice the techniques in daily life.
Do mark on your application if you / your partner much prefer to come with a particular other person, so that we can evaluate your applications jointly.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-03-31T10:43:52.351Z · LW(p) · GW(p)
Thanks! She's reminded me of some practical reasons she can't go this time. Hopefully we'll be bringing this to London before long.
When do you hope to answer applications? The sooner I can book flights the better :)
comment by scaphandre · 2012-03-30T02:42:58.965Z · LW(p) · GW(p)
Thanks for putting this together - I am intrigued by a mini-camp.
One of the questions in the application form is a tickbox for "Interested in potentially working for CMR".
Could someone give some more detail on that question?
A google of "site:lesswrong.com CMR" didn't give me anything useful.
Replies from: scaphandre
↑ comment by scaphandre · 2012-03-30T03:01:51.451Z · LW(p) · GW(p)
Ah - I found some details for Eliezer's potential Center for Modern Rationality: http://hpmor.com/modern-rationality/
comment by curiousepic · 2012-03-31T13:37:55.297Z · LW(p) · GW(p)
Out of curiosity, what does the menu look like? Is it based around a specific, dare I say "rational" diet? Paleo, Bulletproof, Shangri-la?
Replies from: jpulgarin, thejash
↑ comment by jpulgarin · 2012-04-10T00:04:58.441Z · LW(p) · GW(p)
The food at RBC was awesome, mostly due to a small number of attendees being very good cooks. Many of us also had similar fitness goals with regards to gaining muscle, so we had a 25 pound tub of protein, and cooked high-protein meals.
My diet has been nowhere near as good since leaving RBC.
↑ comment by thejash · 2012-03-31T17:03:52.466Z · LW(p) · GW(p)
I helped make some of the food last time. I would call that menu "college random" ;) It was basically left as a problem for us to solve.
I assume that this time they will have it straightened out (and is probably part of the higher price), but I am also curious.
Replies from: Jolly
comment by DanPeverley · 2012-03-31T05:02:21.367Z · LW(p) · GW(p)
I understand that you are expected to have read at least part of the sequences, but what sort of general education is necessary, if any? What kind of math should a participant be able to do in order to get optimal utility out of the event? I am seriously considering flying out to attend, and would like to know if I need to review anything :)
comment by lessdazed · 2012-03-30T03:59:07.957Z · LW(p) · GW(p)
Consider giving an example of the sort of decision making procedure that is taught in camp, with the subject of the example whether one should attend the camp.
E.g.:
- Write down all the reasons you think you are considering on a sheet of paper, in pro and con columns.
- Circle those that do not refer to consequences of going or not going to camp.
- Shut your eyes for two minutes and think of at least five alternatives that you are likely to do instead of camp.
- Make pro and con lists for the most likely three of these, then circle non-consequences again.
- Generate consequences you should be considering but aren't by imagining what is likely to happen if you go to camp. Be sure not to think that compelling stories with many features are most likely; give greater consideration to self-generated stories with fewer contingent parts. Generate at least four seemingly likely stories of what will happen.
- Put a star next to each alternative for which the time and/or money is spent acquiring an experience rather than material goods, as the science of happiness consistently shows that such acquisitions are more uplifting... etc.
Alternatively, a sample VOI (value of information) calculation on how much time people should spend considering it would do.
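For concreteness, here is a minimal sketch of such a VOI calculation in Python. Every number is invented purely for illustration, and the computation is just the standard expected-value-of-perfect-information upper bound - not anything the camp actually teaches:

```python
# Toy value-of-information calculation for "how much time should I spend
# deciding whether to attend?". All numbers below are hypothetical.

p_worth_it = 0.6          # current credence that attending is net-positive
value_if_worth_it = 3000  # subjective payoff (in $) if the camp pays off
cost_of_attending = 1200  # fees + travel + time, converted to $

# Expected value of committing right now, under current beliefs:
ev_attend = p_worth_it * value_if_worth_it - cost_of_attending
ev_now = max(ev_attend, 0)  # 0 = just don't go

# Upper bound: suppose a few hours of research (reading reviews, emailing
# past attendees) would settle the question perfectly.
ev_perfect_info = p_worth_it * (value_if_worth_it - cost_of_attending)
# (If the research says it isn't worth it, we stay home and get 0.)

voi = ev_perfect_info - ev_now
print(f"EV(decide now) = {ev_now:.0f}")
print(f"EV(with perfect info) = {ev_perfect_info:.0f}")
print(f"Upper bound on value of researching further = {voi:.0f}")
```

If that upper bound comfortably exceeds what the research hours are worth to you, do the research; otherwise, just decide.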
↑ comment by Bill_McGrath · 2012-05-19T12:35:00.761Z · LW(p) · GW(p)
Is there somewhere on LW to report this kind of thing? A spam or admin notification thread?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-05-19T13:14:27.995Z · LW(p) · GW(p)
See this ticket. Currently, one of the moderators has to notice personally, though you could for example send a PM to me.
Replies from: Bill_McGrath
↑ comment by Bill_McGrath · 2012-05-19T13:19:39.172Z · LW(p) · GW(p)
I see now that there's a "report" button in my inbox - I knew I'd seen it somewhere on this site.
Cheers, I'll PM in future.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-05-19T13:50:23.738Z · LW(p) · GW(p)
In practice, the report button doesn't seem to do much. There's a list of reported things, but I don't think any mods look at it.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-05-19T14:20:10.192Z · LW(p) · GW(p)
Since "report" buttons currently don't appear in most places, the lists of reported items go without update for months, so there's not much incentive to look at them. The last reported item in Main is of 12 June 2011. But I've got the pages bookmarked, so if the feature is resurrected it'll serve its purpose.
comment by Yuu · 2012-04-05T16:09:47.055Z · LW(p) · GW(p)
Could you specify the location of the minicamp? Or suggest several possible locations? I just want to calculate trip time from San Francisco International Airport, and I also hope it will be useful information for all applicants from outside the USA, like me.
Replies from: jpulgarin
comment by Unhelpful · 2012-03-30T05:20:35.058Z · LW(p) · GW(p)
As somebody about as far from the Bay Area as a USian can get, and who has been hearing great things from a friend in the area about the CMR workshops… when will there be online workshops and/or camps for the northeast or the east coast in general?
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-03-30T21:27:56.647Z · LW(p) · GW(p)
I'm not sure. Maybe a year from now for the east coast, and some longer period of time for online (since online exercises need more development before they work well stand-alone)? I probably wouldn't wait. Lots of people flew in last time from Europe.
Replies from: Unhelpful
↑ comment by Unhelpful · 2012-03-31T02:44:03.005Z · LW(p) · GW(p)
Thanks for responding. I'm trying not to underestimate the value offered here, but the degree of non-monetary cost of going varies fairly widely. For those of us who would have to disrupt things fairly severely to go, would getting meetup groups, etc. trained on the curriculum be a means to train more people, sooner, even if removing the intensive camp setting means that the participants take more time overall?
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-03-31T15:41:58.308Z · LW(p) · GW(p)
We are working on that. The trouble is that it is easier to design exercises that work in a local setting, with skilled and knowledgeable instructors, than to design exercises that can be ported. We are giving some preference in minicamp admissions to folks who run or plan to run meet-ups, and we're interested in hiring people who will be more able to design exercises and port them to meet-ups; if you have friends in these categories, send them our way.
comment by cata · 2012-03-30T01:10:15.537Z · LW(p) · GW(p)
Thanks for this post, I may be interested in attending.
I live in the Bay Area, however, and I would be happy to drive back and forth, so I would be more inclined to pay less and not receive lodging. Is there any chance that the option of doing so would be available to me?
EDIT: Oops, I just now read this comment which brings up the same question. OK.
comment by DanPeverley · 2012-04-01T23:29:52.167Z · LW(p) · GW(p)
Another question I realized is probably more relevant: what has been the median age for attendees of these events? Are they demographically young college-age students, or what?
Replies from: jpulgarin
↑ comment by jpulgarin · 2012-04-10T00:00:11.154Z · LW(p) · GW(p)
RBC was mostly young college-age students (the oldest participant was 29 years old, if I recall correctly; I believe I was the first or second youngest at 21). The median age for the minicamps was probably a bit higher, but not substantially so.
comment by Robert Miles (robert-miles) · 2012-04-28T23:20:18.864Z · LW(p) · GW(p)
The possibility of flying out for this only became apparent to me very late on. I submitted an application as soon as I knew it was an option for me, which was the 22nd of April. Since it seems like applicants are already getting answers, I'm resigned to the possibility that my application was submitted too late to be considered.
Is that in fact the case? If so, it's probably worth modifying or removing the "Apply Now" links. If not though, I'll adjust my holiday plans accordingly.
comment by Bruno_Coelho · 2012-04-03T01:16:53.283Z · LW(p) · GW(p)
In the LW community, apparently most people don't know how to dress well. I read some testimonials, and part of them are in some sense about achieving goals like being more sociable and not being an anti-social nerd.
There are skeptics too; however, for me the effectiveness of (decision theory + psychological tests) is clear, even from just the few comments I read. Maybe they are overly optimistic, but the feeling of improvement and the evidence from their professions indicate that the cost/benefit is superior to that of most camps/groups out there.
comment by sixes_and_sevens · 2012-03-30T11:57:41.360Z · LW(p) · GW(p)
I applied, but accidentally submitted it with an obviously incomplete current/past employment field. What would be a sensible way to remedy this?
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-03-30T21:22:49.514Z · LW(p) · GW(p)
Email our new executive assistant, stephen p cole at gmail, with your details, and ask him to fix it for you.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2012-03-30T22:55:07.960Z · LW(p) · GW(p)
Thank you.