Help Fund Lukeprog at SIAI
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-24T07:16:45.967Z · LW · GW · Legacy · 278 comments
Singularity Institute desperately needs someone who is not me who can write cognitive-science-based material: someone smart, energetic, able to speak to popular audiences, and with an excellent command of the science. If you’ve been reading Less Wrong for the last few months, you probably just thought the same thing I did: “SIAI should hire Lukeprog!” To support Luke Muehlhauser becoming a full-time Singularity Institute employee, please donate and mention Luke (e.g. “Yay for Luke!”) in the check memo or the comment field of your donation - or, if you donate by a method that doesn’t allow you to leave a comment, tell Louie Helm (louie@intelligence.org) your donation was to help fund Luke.
Note that the Summer Challenge that doubles all donations will run until August 31st. (We're currently at $31,000 of $125,000.)
During his stint as a Singularity Institute Visiting Fellow, Luke has already:
- Co-organized and taught sessions for a well-received one-week Rationality Minicamp, and taught sessions for the nine-week Rationality Boot Camp.
- Written many helpful and well-researched articles for Less Wrong on metaethics, rationality theory, and rationality practice, including the 20-page tutorial A Crash Course in the Neuroscience of Human Motivation.
- Written a new Singularity FAQ.
- Published an intelligence explosion website for academics.
- ...and completed many smaller projects.
As a full-time Singularity Institute employee, Luke could:
- Author and co-author research papers and outreach papers, including
- A chapter already accepted to Springer’s The Singularity Hypothesis volume (co-authored with Louie Helm).
- A paper on existential risk and optimal philanthropy, co-authored with a Columbia University researcher.
- Continue to write articles for Less Wrong on the theory and practice of rationality.
- Write a report that summarizes unsolved problems related to Friendly AI.
- Continue to develop his metaethics sequence, the conclusion of which will be a sort of Polymath Project for collaboratively solving open problems in metaethics relevant to FAI development.
- Teach courses on rationality and social effectiveness, as he has been doing for the Singularity Institute’s Rationality Minicamp and Rationality Boot Camp.
- Produce introductory materials to help bridge inferential gaps, as he did with the Singularity FAQ.
- Raise awareness of AI risk and the uses of rationality by giving talks at universities and technology companies, as he recently did at Halcyon Molecular.
If you’d like to help us fund Luke Muehlhauser to do all that and probably more, please donate now and include the word “Luke” in the comment field. And if you donate before August 31st, your donation will be doubled as part of the 2011 Summer Singularity Challenge.
278 comments
Comments sorted by top scores.
comment by pengvado · 2011-08-25T15:53:10.560Z · LW(p) · GW(p)
I donated $10,000.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-26T07:42:41.571Z · LW(p) · GW(p)
By check? Can you PM or email me with the name? The reason I ask is so that I can figure out how close HPMOR is to the 4-day update threshold, add it into my calculations in advance, and make sure it doesn't get double-counted when the actual check arrives. (BTW, do you want credit with my thousands of fanatic readers for bringing the threshold closer?)
comment by Rain · 2011-08-26T13:57:52.640Z · LW(p) · GW(p)
I just put in a pledge of $1,000 per month.
Replies from: Vladimir_Nesov, Eliezer_Yudkowsky↑ comment by Vladimir_Nesov · 2011-08-26T14:38:27.443Z · LW(p) · GW(p)
Feels counterintuitive, but if just 50 people establish arrangements like this one, SingInst gets a reliable supply of funding at the current spending level, independent of funding rallies or big-sum donors.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-08-26T14:42:21.194Z · LW(p) · GW(p)
50 people is a lot. Certainly a large number of people here simply cannot afford that sort of commitment to any charity. There are a lot of grad students here, for example, some of whom are getting less than that as their monthly salaries. In fact, when I saw Rain's comment my first thought was "how the heck does Rain have that kind of money?"
Replies from: Rain, Vladimir_Nesov↑ comment by Rain · 2011-08-27T17:04:18.019Z · LW(p) · GW(p)
my first thought was "how the heck does Rain have that kind of money?"
Low cost of living and a good job. I've always wondered the opposite, "how the heck does nobody else have any money?" I have so much left over every month, I wondered what to do with it for a long time before deciding on making a better future in the best way I know how.
Replies from: MixedNuts↑ comment by MixedNuts · 2012-12-09T09:48:40.884Z · LW(p) · GW(p)
How do you go about having a low cost of living? (I think I know how one goes about having a good job.) My best attempts at being a total cheapskate still have me spending my whole 8000 SEK monthly income. Okay, sure, I eat at cafeterias rather than packing lunch, and I buy fresh vegetables rather than eat lentils every day, but you're a freaking fashion plate!
Replies from: None, Rain, army1987↑ comment by [deleted] · 2012-12-13T02:52:32.867Z · LW(p) · GW(p)
I don't know if you'll be able to translate to SEK, but here's my Canadian-dollar budget:
3000/month income after tax
-100/month food
-400/month housing
-300/month personal spending
The rest (2200) is for savings and SI (not that I've organized a monthly $1k yet or anything).
$100 for food: people are consistently amazed at this one. Oatmeal + milk + granola for breakfast. Eggs + english muffins + cheese + mayonnaise + celery + peanut butter + carrots + leftovers for lunch. Cheap meat and veggies and rice and such for dinner. I shop at the local grocer for meat and veggies, and Real Canadian Superstore for everything else.
The trick is to be strict about it. Put your money in a box at the beginning of the month, and eat fucking beans and rice for a week if you blow the budget. You learn quick this way. Only problem is cooking. Eats up like 4 hours a week.
$400 for housing: live with roommates, and rent.
$300/mo personal: that's actually a lot of money, but you do have to be careful; you can't be buying a new jacket every month, or you won't be able to buy anything else. Again, strict budgeting.
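As a quick sanity check, the figures above can be totaled in a few lines of Python (the numbers are taken from the comment; the category names are just illustrative labels):

```python
# Monthly budget from the comment above, in CAD (after tax).
income = 3000
expenses = {"food": 100, "housing": 400, "personal": 300}

# Whatever isn't budgeted is available for savings and donations.
leftover = income - sum(expenses.values())
print(leftover)  # prints 2200
```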
I hope this helps people become more effective altruists!
Replies from: Apteris↑ comment by Apteris · 2012-12-23T14:47:22.574Z · LW(p) · GW(p)
Only problem is cooking. Eats up like 4 hours a week.
This article by Roger Ebert on cooking is, I suspect, highly relevant to your interests. Mine too, as a matter of fact.
↑ comment by Rain · 2012-12-09T13:50:05.577Z · LW(p) · GW(p)
I think the largest component of it is spending all of my free time online rather than going out for adventures. Such excursions often end up costing quite a bit, and so long as my interests are occupied by free internet, I don't get any urge to buy "stuff". Also, my taxes are lower due to the large amount of my donations.
I allow myself to splurge when my money builds up, typically on Projects that keep me deeply interested in a whole new facet of society or personal development for some time, but with a budget typically limited to $3000 or less.
↑ comment by A1987dM (army1987) · 2012-12-09T11:57:19.098Z · LW(p) · GW(p)
It also depends, among other things, on where you live and whether you have children.
Replies from: MixedNuts↑ comment by MixedNuts · 2012-12-09T12:09:47.513Z · LW(p) · GW(p)
I rent a small room in a shitty neighborhood (3200 SEK a month) and have no kids. Stockholm is kinda expensive to live in but I expect the education and subsequent job opportunities to make up for it.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-12-09T13:36:22.783Z · LW(p) · GW(p)
[Looks up SEK exchange rate] Well, 8000 SEK doesn't sound like that much, after all.
For comparison, Julia Wise and Jeff Kaufman spend $22K/year (i.e. 12000 SEK/month, 6000 each), and I guess it's cheaper to be a couple than two single individuals. (FWIW, I spend about as much as each of them, but I live in Italy -- it would have been very hard for me to live on that little in Ireland.)
↑ comment by Vladimir_Nesov · 2011-08-26T14:56:01.612Z · LW(p) · GW(p)
50 people is still a number that fits into human imagination and feels ordinary, unlike a sum of $600,000 or a single donation of $200,000.
Lots of people in the US/UK make several thousand dollars a month, and anything on the order of 10% of income is usually expendable. Of course, depending on income, a lower pledge works out proportionally.
Replies from: lessdazed↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-27T02:16:37.353Z · LW(p) · GW(p)
Do you want credit with HPMOR's grateful readers, and if so should it be under Rain or your true name?
Replies from: Rain
comment by BrandonReinhart · 2011-08-25T04:05:18.568Z · LW(p) · GW(p)
Hiring Luke full time would be an excellent choice for the SIAI. I spent time with Luke at mini-camp and can provide some insight.
Luke is an excellent communicator and agent for the efficient transmission of ideas. More importantly, he has the ability to teach these skills to others. Luke has shown this skill publicly on Less Wrong and also on his blog, with his distilled analysis of Eliezer's writing, "Reading Yudkowsky."
Luke is a genuine modern-day renaissance man, a true polymath. However, Luke is keenly aware of his limitations and has devoted significant work to finding ways of removing or mitigating them. For example, any person with a broad range of academic interests could fall prey to never acquiring useful skills in any of those areas. Luke sees this as a serious concern and wants to maximize the efficiency of searching the academic space of ideas. Again, for Luke this is a teachable skill. His session "Productivity and Scholarship" at minicamp outlined techniques for efficient research and for reducing akrasia. None of that material would be particularly surprising to a regular reader of Less Wrong -- because Luke wrote the key posts on these subjects. Luke's suggestions were all implementable and process-focused, such as using review articles and Wikipedia to rapidly familiarize oneself with the jargon of a new discipline before doing deep research.
Luke is an excellent listener and has a high degree of effectiveness in human interaction. This manifests as someone you enjoy speaking to, who seems interested in your views, and who is then able to tell you why you are wrong in a way that makes you feel smarter. (Compare with Eliezer, who will simply turn away when you are wrong. This is fine for Eliezer, but not ideal for SIAI as an organization.) Again, Luke understands how to teach this skill set. It seems likely that Luke would raise the social effectiveness of SIAI as an organization and also generate goodwill toward the organization in his dealings with others.
Luke would have a positive influence on the culture of the SIAI, the research of the SIAI, and the public face of the SIAI. Any organization would love to find someone who excels in any one of those dimensions, much less someone who excels in all of them.
Mini-camp was an exhausting challenge for all of the instructors. Luke never once showed that exhaustion, let it dampen his enthusiasm, or showed annoyance (except, perhaps, as a tactical tool to move along a stalled or irrelevant conversation). In many ways he presented the best face of "mini-camp as a consumable product." That trait (we could call it customer focus or product awareness) is a critical skill the SIAI has been lacking.
An example of how Luke has changed me: I was only vaguely aware of the concepts of efficient learning and study. Of course, I knew about study habits and putting in time at practice in a certain sense, but such advice usually emphasizes practice and time investment (which is important) while underemphasizing the value of finding the right things to spend time on.
It was only when I read Luke's posts, spoke to him, and participated in his sessions at mini-camp that I received a language for thinking about and conducting introspection on the subject of efficient learning. Specifically, I've applied his standards and process to my study of guitar and classical music, and I now feel I've effectively solved the question of where to spend my time; I am solely in the realm of doing the actual practice, composition, and research. I've advanced more in the past few months of music study than I did in the prior year and a half of playing guitar.
In the past month I have actively applied his skill of skimming review material (review books on classical composers), then using Wikipedia to rapidly drill down on confusing component subjects. I have applied his skill of thinking vicariously about someone else's victory that represents my own goals, making a hard road seem less like a barrier and more like negotiable terrain. And I have applied his skill of weighing the merits of multiple competing areas of interest, determining the one with the most impact, and pursuing it (knowing I could scoop up the missing pieces more quickly later).
I did all of that with the awareness that Luke was the source of the skills and language that let me do those things.
I am more awesome because of Luke.
Replies from: taryneast
comment by JGWeissman · 2011-08-25T04:28:19.616Z · LW(p) · GW(p)
I just donated an additional $2000. Yay for Luke!
comment by Benquo · 2011-08-25T04:13:47.832Z · LW(p) · GW(p)
Was planning on waiting 'til the last day to decide with maximum info (in particular, whether the maximum match amount was met). If enough other people think like me, SIAI should see a rush of cash in the last few days of the contest.
But Eliezer forced my hand with this from MoR:
Thus this fic will next update at 7pm Pacific Time, on August 30th 2011, unless the Summer Challenge reaches $50,000 or more before then, in which case the fic will update sooner (but still at 7pm, because I'm not cruel).
Also we need more Luke.
So that's another $1000 for SIAI.
Replies from: gjm↑ comment by gjm · 2011-08-26T10:53:55.528Z · LW(p) · GW(p)
By doing that, you gain "maximum info" for yourself while denying it to others; in particular, if there's a last-minute rush then anyone donating before the end of that rush may well be misled about whether the limit is likely to be reached.
It's not necessarily wrong to favour yourself over others. But it seems a bit weird to do so in the context of a charitable donation...
Replies from: Benquo
comment by SilasBarta · 2011-08-24T14:03:22.827Z · LW(p) · GW(p)
So, no plans for providing any substantiation of the mini-camp's purported success? (Some want to know.) Or of people who have increased their level of life success as a result of the winning at life guides?
Replies from: AnnaSalamon, lukeprog, jsalvatier, WrongBot, XFrequentist, GuySrinivasan, AnnaSalamon↑ comment by AnnaSalamon · 2011-08-25T02:08:08.743Z · LW(p) · GW(p)
Among the ultimate criteria for the minicamps is their impact on long-term life success. To assess this, both minicamp participants and a control group completed a long, anonymous survey containing many indicators of life success (income, self-reported happiness and anxiety levels, many questions about degree of social connectedness and satisfaction with relationships, etc.); we plan to give it again to both groups a year after mini-camp, to see whether minicampers improved more than controls. I’m eager to see and update from those results, but we’re only a couple months into the year’s waiting period. (The reason we decided ahead of time to wait a year is that minicamp aimed to give participants tools for personal change; and, for example, it takes time for improved social skills, strategicness, and career plans to translate into income.)
Meanwhile, we’re working with self-report measures because they are what we have. But they are more positive than I anticipated, and that can’t be a bad sign. I was also positively surprised by the number of rationality, productivity, and social effectiveness habits that participants reported using regularly, in response to my email asking, two months out. To quote a significant fraction of the numerical data from the exit survey (from the last day of minicamp), for those who haven’t seen participants’ ratings:
- In answer to “Zero to ten, are you glad you came?”, the median answer was 10 (mean was 9.3).
- In answer to “Zero to ten, will your life go significantly differently because you came to mini-camp?” the median answer was 7.5 (the mean was 6.9). [This was the response that was most positively surprising to me.]
- In answer to “Zero to ten, has your epistemic rationality improved?”, the median answer was 7 (mean 6.9)
- In answer to “Zero to ten, are you more motivated to learn epistemic rationality, than you were when you came?”, the median answer was 8.5 (mean 8.1)
- In answer to “Zero to ten, have you become more skilled at modifying your emotions and dispositions?”, the median answer was 7 (mean 6.3).
- In answer to “Zero to ten, are you more motivated to modify your emotions and dispositions, than you were when you came?”, the median answer was 9 (mean 8.3).
- In answer to “Zero to ten, are you more motivated to gain social skills than you were when you came?”, the median answer was 8 (mean 7.7).
- In answer to “Zero to ten, have you gained social skills since coming?”, the median answer was 7.5 (mean 7.2).
- In answer to “zero to ten, did you like Luke’s sessions?”, the median answer was 9 (mean answer 8.7).
Some excerpts from the survey, about Luke’s sessions in particular:
- “Luke is an excellent presenter. These sessions exceeded my expectations: I am convinced I have under-valued social interaction and techniques and that I can accelerate my success curve by aggressively adopting them. ”
- “I really liked Luke's sessions. They were fun and interactive and well put together. There is an effect of being a bit more personally interested in the material.”
- “Very useful content. Great presentation of it. Very good at handling the practical camp-issues and also useful fashion tips.”
- “Luke’s sessions were concise, and well structured. Good PPT templates!”
- “The social effectiveness and fashion sessions were very useful for me. ”
- “Some parts of some sessions i felt went too slowly... but mostly extremely valuable information. wish we could have more social skills sessions - i would take another camp just for these super low-hanging fruit.”
- “Luke gave concrete examples and advice. It was very helpful.”
- “Luke was great as a session leader. His sessions were very clearly, cleanly organized, and discussions in his sessions were handled very well. Luke has, by far, the presence to lead a discussion among 16 people. :)”
- “Luke was great. His sessions hit the relevant points in an effective manner.”
- “Luke was very helpful and knowledgeable. The pace of his sessions was really good, and there was a lot of room for discussion. Luke also gave some helpful and specific fashion advice. ”
- “Pretty much everything with Luke was phenomenal... Luke really made this whole camp worthwhile. I know this is more praise than constructive feedback, but I legitimately can't think of anything!”
I worked on mini-camp with Luke, and I can honestly say that it’s only because of Luke that we were able to hold minicamp at all, and also that he was a phenomenal work partner in organizing the camp, getting all the logistics together, and generally making it a positive and, for many, life-changing experience.
More generally: In minicamp and other SingInst projects, Luke combines energy, reliable ability to carry projects to completion, and strategicness as to which projects make sense and which aspects of those projects are most worth the extra effort; if you’re looking to reduce existential risk, making it possible for SingInst to stably hire Luke seems to me to offer unusually good bang for your buck.
Replies from: wedrifid, None, lukeprog, Alexandros↑ comment by wedrifid · 2011-08-25T03:55:17.605Z · LW(p) · GW(p)
both minicamp participants and a control group
How was the control group selected? Did you select a pool of candidates larger than you could accept then randomly take a subset of these as a control? If not then calling it a 'control group' is borderline at best.
The prior expectation of the influence of one week of training on personal success over a year is far lower than that of various personal and environmental qualities of the individual. This being the case, it is more reasonable to attribute differences in progress between the groups to the higher growth potential of the chosen minicampers. That primarily reflects well on the ability of the SingInst rationality trainers to identify indicators of future success - a rather important skill in its own right!
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2011-08-25T04:14:34.005Z · LW(p) · GW(p)
A good point. The control group was of folks who made it through the initial screening but not the final screening, so, yes, there are differences. We explicitly discussed the possibility of randomizing admissions, but, for our first go, elected to admit the 25 people we most wanted, and to try randomizing some future events if the first worked well enough to warrant follow-ups (which it did). It is a bit of a hit to the data-gathering, but it wasn't growth potential as such that we were selecting for -- for example, younger applicants were less likely to have cool accomplishments and therefore less likely to get in, although they probably have more growth potential -- so there should still be evidence in the results.
Also, we marked down which not-admitted applicants were closest to the cut-off line (there were a number of close calls; I really wished we had more than 25 spaces), so we can gain a bit of data by seeing if they were similar to the minicamp group or to the rest of the controls.
↑ comment by [deleted] · 2011-08-25T22:47:27.819Z · LW(p) · GW(p)
I have a real hard time deciding how seriously I should take this survey.
The halo effect of doing anything around awesome people, like those found in a selected group of LessWrongians, is probably pretty strong. I fear at least some of the participants may have mixed up being with awesome people with becoming awesome. Don't get me wrong: being with awesome people will in and of itself work ... for a while, until you leave that group.
I'm not that sceptical of the claims, but from the outside it's hard to tell the difference between this scenario and the rationality camps working as intended.
Replies from: None, ciphergoth, shokwave↑ comment by [deleted] · 2011-08-27T22:28:19.535Z · LW(p) · GW(p)
You're right to suspect that this could have happened. That said: I was a mini-camp participant, and I actually became more awesome as a result. Since mini-camp, I've:
- used Fermi calculations (something we practiced) to decide to graduate from school early.
- started making more money than I had before.
- started negotiating for things, which saved me over $1000 this summer.
- begun the incredibly fucking useful practice of rejection therapy, which multiplied my confidence and caused the above two points.
- rapidly improved my social abilities, including the easily measurable 'success with women' factor. This was mostly caused by a session about physical contact by Will Ryan, and from two major improvements in wardrobe caused by the great and eminent lukeprog (in whose name I just donated). I wasn't bad at social stuff before - this was a step from good to great.
- resolved my feelings about a bad relationship, mostly as a result of boosted confidence from increased social success.
I stuck around in California for the summer, and gained a lot from long conversations with other SIAI-related people. The vigor and insight of the community was a major factor in showing me how much more was possible and helping me stick to plans I initiated.
But, that said - the points listed above appear to be a direct result of the specific things I learned at mini-camp.
↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T07:38:38.600Z · LW(p) · GW(p)
I suspect that it's precisely because of concerns like these that they didn't present these numbers until now. It's hard to see what other evidence they could have for the efficacy of the "minicamp" at this stage.
(Edited to replace "bootcamp" with "minicamp" as per wedrifid's correction)
↑ comment by shokwave · 2011-08-26T00:24:24.506Z · LW(p) · GW(p)
Don't get me wrong: being with awesome people will in and of itself work ... for a while, until you leave that group.
I'm not that sceptical of the claims, but from the outside it's hard to tell the difference between this scenario and the rationality camps working as intended.
Indeed. SIAI is conducting a year-later follow up which should provide the information needed to differentiate. Answering that question now is probably not possible to the degree of certainty required.
Replies from: None↑ comment by [deleted] · 2011-08-26T00:28:37.112Z · LW(p) · GW(p)
Answering that question now is probably not possible to the degree of certainty required.
That's exactly the complaint though -- many people have described it as a success, before the data is available.
Replies from: jsalvatier, shokwave↑ comment by jsalvatier · 2011-08-27T15:16:04.293Z · LW(p) · GW(p)
I think people are seeing drastically different things in the word 'success'.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2011-08-27T16:07:42.889Z · LW(p) · GW(p)
Yes; what I meant by "success" was more like a successful party or conference; Luke pulled off an event that nearly all the attendees were extremely glad they came to, gave presentations that held interest and influenced behavior for at least the upcoming weeks, etc. It was successful enough that, when combined with Luke's other accomplishments, I know we want Luke, for his project-completion, social effectiveness, strategicness, fast learning curves, and ability to fit all these qualities into SingInst in a manner that boosts our overall effectiveness. I don't mean "Minicamp definitely successfully created new uber-rationalists"; that would be a weird call from this data, given priors.
↑ comment by shokwave · 2011-08-26T14:51:36.158Z · LW(p) · GW(p)
Sure, but Konkvistador's post is about how the survey might be contaminated by awesome-people-halo-effect, not that we shouldn't be calling it a success. That's a separate concern addressed elsewhere. My post was addressing how we would tell the difference between "working" and "near awesome people".
↑ comment by Alexandros · 2011-10-26T07:11:47.178Z · LW(p) · GW(p)
Since you're using self-reporting anyway, it would have been good if you had a 'how invested do you feel in minicamp's success?' question. Of course I say that having seen the results already.
↑ comment by lukeprog · 2011-08-24T16:20:07.709Z · LW(p) · GW(p)
SilasBarta,
We collected lots of data before and during minicamp. We are waiting for some time to pass before collecting followup data, because it takes time for people's lives to change, if they're going to change. Minicamp was only a couple months ago.
Minicampers are generally still in contact, and indeed we are still gathering data. For example, several minicampers sent me before and after photos concerning their fashion (which was part of the social effectiveness section of the minicamp) and I'm going to show them to people on the street and ask them to choose which look they prefer (without indicating which is 'before' and which one is 'after').
So yes: by all qualitative measures, minicamp seems to have been a success. The early quantitative measures have been taken, but before-and-after results will have to wait a while.
As for future rationality training, we are taking the data gathered from minicamp and boot camp and also from some market research we did and trying to build a solid curriculum. To my knowledge, four people are seriously working on this project, and Eliezer is one of them.
Cheers,
Luke
Replies from: SilasBarta, jsalvatier↑ comment by SilasBarta · 2011-08-24T16:54:19.889Z · LW(p) · GW(p)
So it's early enough to call it an unqualified success, but too soon for evidence to exist that it was a success? If I have to be patient for the evidence to come back, shouldn't you be a little more patient about judging it a success?
Edit: I gave a list of information you could post. The fashion part isn't surprising enough to count as strong evidence, and was a relatively small part of the course that, in any case, you previously claimed could be accomplished by looking at a few fashion magazines.
Replies from: lukeprog↑ comment by lukeprog · 2011-08-24T18:38:35.461Z · LW(p) · GW(p)
I mean 'evidence' in the Bayesian sense, not the scientific sense. I have significant Bayesian evidence that minicamp was a success on several measures, but I can't know more until we collect more data.
Thanks for providing a list of information we could post. One reason for not posting more information is that doing so requires lots of staff hours, and we don't have enough of those available. We're also trying to, for example, develop a rationality curriculum and write a document of open problems in FAI theory.
If you're anxious to learn more about the rationality camps before SI has time to publish about that data, you're welcome to contact the people who attended; many of them have identified themselves on Less Wrong.
I'm fairly confident that campers got more out of my fashion sessions than what they can learn only from looking at a few fashion magazines.
Cheers,
Luke
Replies from: handoflixue, SilasBarta, timtyler↑ comment by handoflixue · 2011-08-25T00:31:50.372Z · LW(p) · GW(p)
This comes off very strongly as the typical bureaucratic protectiveness - a business doesn't want to share raw data, because raw data is a valuable resource. If you came out and said this was the reason, I'd be more understanding, but it would still feel like a major violation of community norms to be so secretive.
If simple secrecy is indeed the case, I would urge you, please, be honest about this motive and say so explicitly! At least then we are having an honest discussion, and the rest of this comment can be disregarded.
We collected lots of data before and during minicamp
In short, what is the reason you can't share this RAW data, which you state you collected, and which you've presumably found sufficient for your own preliminary conclusions? I don't think Silas is asking for or expecting an elegant power-point presentation or a concise statistical analysis - I know I would personally love to simply see raw data.
Is there truly not a single spreadsheet or writeup that you could drop up for us to study while you collect the rest of the data?
Replies from: Mass_Driver, lukeprog↑ comment by Mass_Driver · 2011-08-25T07:29:17.592Z · LW(p) · GW(p)
Good grief, people. There are conspiracies that need ferreting out, but they do not revolve around generating fake data about the effectiveness of an alpha version of a rationality training camp that was offered for free to a grateful public.
I went to the minicamp, I had a great time, I learned a lot, and I saw shedloads of anecdotal evidence that the teachers are striving to become as effective as possible. I'm sure they will publish their data if and when they have something to say.
Meanwhile, consider re-directing your laudable passion for transparency toward a publicly traded company or a medium-sized city or a research university. Fighting conspiracies is an inherently high-risk activity, both because you might be wrong about the conspiracies' existence, and because even if the conspiracy exists, you might be defeated by its shadowy and awful powers. Try to make sure the risks you run are justified by an even bigger payoff at the end of the tunnel.
Replies from: Kaj_Sotala, handoflixue, SilasBarta↑ comment by Kaj_Sotala · 2011-08-25T09:34:58.195Z · LW(p) · GW(p)
There are conspiracies that need ferreting out, but they do not revolve around generating fake data about the effectiveness of an alpha version of a rationality training camp that was offered for free to a grateful public.
I don't think anybody is accusing the minicamp folks of anything of the kind. But public criticism and analysis of conclusions is the only reliable way to defend against overconfidence and wishful thinking.
When I ended my term as an SIAI Visiting Fellow, I too felt like the experience would really change my life. In reality, most of the effects faded away within some months, though a number of factors combined to permanently increase my average long-term happiness level.
Back then the rationality exercises were still being worked out and Luke wasn't around, so it's very plausible that the minicamp is a lot more effective than the Visiting Fellow program was for me. But the prior for any given self-help program having a permanent effect is small, even if participants give glowing self-reports at first, so deep skepticism is warranted. No conspiracies are necessary, just standard wishful thinking biases.
Though I think this was the third time that Silas raised the question before finally getting a reply, despite his comment being highly upvoted each time. If some people are harboring suspicions of SIAI covering up information, well, I can't really say I'd blame them after that.
Replies from: lukeprog, katydee↑ comment by lukeprog · 2011-08-25T11:02:11.856Z · LW(p) · GW(p)
For the record, I for one don't recall reading any of SilasBarta's earlier comments on this topic.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-29T16:55:57.873Z · LW(p) · GW(p)
I find that unlikely. That would mean you never followed up after trumpeting your success -- you just posted the topic, and never bothered to come back and see what people had to say. And that you didn't see the top comment on the 125k fundraiser thread. Then again, this is consistent with what komponisto said about your "Mt Olympus" mentality: just say stuff ex cathedra and expect everyone to fall in line or otherwise swoon.
Replies from: lukeprog↑ comment by lukeprog · 2011-08-29T17:10:00.301Z · LW(p) · GW(p)
I don't understand this bit about my 'Mt Olympus' mentality. Until very recently I wasn't on SI's full-time staff. And as far as I can tell, I've spent vastly more time substantiating what I say by citing the relevant scientific literature (rather than relying on whatever personal authority I'm supposed to have, which I don't think is much at all) than anyone else on Less Wrong.
And no, I don't expect everyone to "fall in line or otherwise swoon." It's just that I don't have time to write up a 20-citation research article supporting every sentence I write. If the reasons that led me to write a certain sentence aren't available to you, as is usually the case, then you should only be updating your beliefs as much as you should given the evidence of my testimony, which in many cases should be very little.
As for you not believing me when I say that I don't recall reading your earlier comments calling for evidence about minicamp's success, well... the only evidence I have for you besides my testimony is that I hadn't replied to any of your earlier comments on the matter. If you don't believe me, well, so be it: that's all I've got.
Replies from: komponisto, SilasBarta↑ comment by komponisto · 2011-08-29T19:05:44.894Z · LW(p) · GW(p)
I don't understand this bit about my 'Mt Olympus' mentality. Until very recently I wasn't on SI's full-time staff. And as far as I can tell, I've spent vastly more time substantiating what I say by citing the relevant scientific literature (rather than relying on whatever personal authority I'm supposed to have, which I don't think is much at all) than anyone else on Less Wrong.
Indeed you have, and you've been well rewarded, with 24,000+ karma points and a full-time position at SI (with EY himself begging for money to pay you in a promoted LW post -- I'll bet that felt good!). What you haven't earned, however, is the right to ignore people without their being offended. (The only person who might conceivably have that level of status is EY, and I think even that is debatable.)
The impression I think you give is one of writing all this great LW material, but then being "too busy" with your high-status SI work to read people's comments on it. Surely you can see how that comes across.
Replies from: lukeprog, BigManMachismoMaster↑ comment by lukeprog · 2011-08-29T19:09:58.379Z · LW(p) · GW(p)
I reply to many comments but certainly not all. I can't respond to all my critics, and it's probably unwise to do so. I'm also sympathetic to thomblake's comment on this discussion:
I appreciate your efforts to decode this 'Olympus mentality' nonsense, and in general to make sure you're not making communication errors. But at this point I believe you're just wasting your time. You've documented your research methods better than I've ever seen someone do, and they certainly don't need defending here.
On behalf of those who believe your work can positively impact the future of humanity and your time can be better spent elsewhere, I humbly request that you please file what you've been responding to under 'trolling' and move on.
Replies from: komponisto
↑ comment by komponisto · 2011-08-29T20:11:54.731Z · LW(p) · GW(p)
To be clear, my comment pertained to reading others' comments, not necessarily replying to them.
I would also like to stress that, while I am not sympathetic to thomblake's comment (I could hardly be expected to be, since he effectively labeled a comment of mine "nonsense"), I have not voiced any complaint about the underlying subject (proof of minicamp effectiveness), and should not be confused with Silas. (This isn't to say that I don't think he has a point also; but I emphatically do not consider myself to have participated in any "trolling".)
I entered the discussion because of the "Olympus" issue, which I had noticed in other contexts and had considered bringing up before. (Evidently I am not the only one, because my comment on the matter has -- against all expectations of mine -- been voted up into the 20s.)
↑ comment by BigManMachismoMaster · 2011-08-29T21:57:12.151Z · LW(p) · GW(p)
"The impression I think you give is one of writing all this great LW material, but then being "too busy" with your high-status SI work to read people's comments on it. Surely you can see how that comes across."
LOL. If one does form that impression from Luke's posts I would suggest that you:
1.) Don't be so sensitive and get a life beyond obsessing over blog comments. 2.) Realize that people who actually get things done usually don't make responding to blog posts a high or even medium priority in life. Seriously, most people with productive lives don't have time to obsess over blogs.
Appreciate the good content, criticize the points worth criticizing and try not to cry about the minor points and the fact that someone doesn't respond.
Replies from: wedrifid↑ comment by SilasBarta · 2011-08-29T17:20:38.394Z · LW(p) · GW(p)
As for you not believing me when I say that I don't recall reading your earlier comments calling for evidence about minicamp's success, well... the only evidence I have for you besides my testimony is that I hadn't replied to any of your earlier comments on the matter.
My point was that either a) you didn't do what most people do when they start topics -- look at them a second time -- or b) you're lying. Neither is likely, but a) is at least consistent with the Mt Olympus mentality -- why else would you never follow up on a thread with a major announcement? What other mentality would lead to that behavior?
My request for evidence was also (for most of the time it was up) the top comment on the 125k fundraising thread, which does give someone reason to be skeptical you weren't aware anyone had asked for any substantiation.
It's just that I don't have time to write up a 20-citation research article supporting every sentence I write.
Sure, that should be resolved for the more improbable claims.
And in the case where your justifications come from the literature, I thought you could just google the term, read the abstract, and copy-paste the citation?
Replies from: lukeprog↑ comment by lukeprog · 2011-08-29T17:25:56.715Z · LW(p) · GW(p)
What other mentality would lead to that behavior?
Being busy with research and writing? Seriously, I don't spend all day re-checking the comment threads on old LW posts. If you want to reach me, please contact me directly or reply to one of my comments, so it shows up in my LW inbox.
Replies from: BigManMachismoMaster, SilasBarta↑ comment by BigManMachismoMaster · 2011-08-29T22:03:12.078Z · LW(p) · GW(p)
"Being busy with research and writing? Seriously, I don't spend all day re-checking the comment threads on old LW posts."
Amen!
Research & Writing > responding to some disgruntled blogger.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-30T01:18:17.148Z · LW(p) · GW(p)
You were aware he was replying to a strawman and that I never expected that he should "spend all day rechecking ...", right? (Guess not, and ditto for those who voted you up.)
Replies from: wedrifid↑ comment by wedrifid · 2011-08-30T01:21:04.698Z · LW(p) · GW(p)
I can see the general point you were making with your criticisms, and so could the other people who voted you up on your early comments on this issue. That being said, I suggest you quit while you are (or were) ahead. It is too easy to let others reclaim the moral high ground if you stay on the same topic for too long and let them put you on the defensive.
This would allow you to maintain context-specific credibility for use the next time the same or similar claims are made about successes that you don't believe are justified.
Replies from: Jack, SilasBarta↑ comment by Jack · 2011-08-30T01:48:37.045Z · LW(p) · GW(p)
Would I be unreasonable or unrealistic if I expressed a desire to not see any of this SIAI inside baseball on Less Wrong... ever? Insofar as these rationality minicamps are something we think people who want to be more rational should take part in, obviously data on their effectiveness is very important. But insofar as the 'success' of the minicamp figured into SIAI's decision to hire lukeprog (and that seems to be the issue for the moment) I couldn't care less. I realize of course Less Wrong and SIAI are intimately connected and I'm personally at the low end on a spectrum of interest and involvement in SIAI. And I'm fine with seeing the occasional fund-raising post or strategy discussions in Discussion... after all I'm not paying for the pleasure of posting and reading here. But an ongoing flame war about a random criterion in an SIAI hiring decision dominates the recent comments section and is of no interest to me (and one assumes, others in my position). In the interest of keeping SIAI and Less Wrong somewhat separate, shouldn't SIAI have some other avenue donors can use to voice concerns and criticisms so that it doesn't interfere with the interesting content here?
(And if Silas isn't a donor... Thomas %&$@! Bayes why does anyone care?!)
Replies from: AnnaSalamon, wedrifid, Eliezer_Yudkowsky↑ comment by AnnaSalamon · 2011-08-30T03:36:14.320Z · LW(p) · GW(p)
For the record, Silas is a donor -- listed on our donor list as having donated $2,000.
↑ comment by wedrifid · 2011-08-30T02:27:54.582Z · LW(p) · GW(p)
Would I be unreasonable or unrealistic if I expressed a desire to not see any of this SIAI inside baseball on Less Wrong... ever?
I don't care for flame wars either. But what I do care about is that if it is permitted to make a declaration of fact on lesswrong, it is permitted to refute it. The details of what you suggest in the parent violate this. You're advocating the lesswrong equivalent of true evil!
Replies from: Jack↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-30T03:33:43.129Z · LW(p) · GW(p)
(And if Silas isn't a donor... Thomas %&$@! Bayes why does anyone care?!)
I don't, but if I don't say that out loud, other people go on loudly caring, and if I do say it out loud, I get downvoted. (Shrug.)
Replies from: Randaly↑ comment by Randaly · 2011-08-30T04:18:24.160Z · LW(p) · GW(p)
Errr, do you really care that much about being downvoted?
Replies from: Desrtopa, MatthewBaker↑ comment by Desrtopa · 2011-08-30T05:12:20.072Z · LW(p) · GW(p)
Downvotes imply net disapproval, particularly for someone whose comments get read as much as Eliezer's. If you think of it as simply losing points, it seems trivial, but if you take it as a sign that "people seem not to like it when I do that," it's a meaningful consideration.
↑ comment by MatthewBaker · 2011-08-30T04:25:42.913Z · LW(p) · GW(p)
Even Eliezer needs Karma; if you ignore the sequences, Alicorn and Yvain can beat him out ;)
↑ comment by SilasBarta · 2011-08-30T01:25:26.149Z · LW(p) · GW(p)
Very well.
This is a pretty sad day for LW, to learn that you can just lie and strawman your way out of criticism, because whoever calls you on it is just "staying on the same topic for too long" :-/
Replies from: wedrifid, KPier, None↑ comment by wedrifid · 2011-08-30T02:03:33.332Z · LW(p) · GW(p)
(I hope you'll pardon the digression into crude discussion of rational strategies for influence. This is a far more interesting topic than what serves for the object level at this point!)
This is a pretty sad day for LW, to learn that you can just lie and strawman your way out of criticism, because whoever calls you on it is just "staying on the same topic for too long" :-/
Speak more strategically. Don't let the verbal signals you utter be tools you use to salve and release your own feelings. Not because it is virtuous, purely because that doesn't get you what you want. Also note that not only am I someone who consistently voted you up and those insulting you down, my comment provided stronger support for your position than your most recent comments managed. Allow me to translate what I said into 'fun' rather than vaguely polite:
"You're so right man. They're full of shit. I mean WTF is with this claiming stuff with no evidence then ragging on you instead of answering you. That's fucking pathetic. Ape Status Bullshit 101 - If someone asks critical questions don't answer them, beat them with a fucking stick so nobody else thinks it's ok to dissent. That's what people with status and power do and people always let them get away with it. But here's the thing: You're making it easy for them by being a whiny little bitch. How's that working out for you? Not working, huh? Yeah, no shit. What did you expect? Now stop crying 'cos the world isn't fair to you and start saying shit that works. Also, accept that you cannot change other people and instead work out what influence you can have and make it. In this case it would be 'make all unjustified claims a net PR loss for Luke/SIAI by calling them on it effectively and moderately.' And I'd have called that a real success. Heck, it even worked. Look at Anna's replies from a few days ago. Now man the fuck up and stop being a pussy. Because I often agree with the complaints that you have about stuff but don't want to look bad by association."
Now, consider the difference between the above wording and what I actually said. Notice that it positions me as somewhat of an ally, assumes the criticisms you make of Luke are valid but at the same time doesn't try to alienate me from the tribe. See why I chose to use the wording I did and, more importantly, which conceptual territory I chose to stake out and claim. A good rule of thumb is that if you are acting less savvy, constrained and strategic than the wedrifid persona then you are doing something wrong. Because I'm rather flippant and cavalier myself.
Replies from: Rain, Morendil↑ comment by KPier · 2011-08-30T02:20:04.431Z · LW(p) · GW(p)
I really don't think that's the problem here.
You had a good point, you made it, and you pointed out the problems with the responses to it. All of these comments were upvoted, many to double digits. But then your comments turned into personal attacks on Luke (suggesting he doesn't understand the material he posts, suggesting he is lying about not seeing an earlier comment of yours). At that point, I felt (as, I'm guessing, did others), that your comments were actively counterproductive in trying to learn more about the minicamp, as well as promoting a community norm of insulting each other and assuming bad faith.
I also tend to get annoyed by, and downvote, comments to the effect of "The fact that I was downvoted reflects badly on all of you, who obviously downvoted me for [reason]" since I usually didn't downvote for the reason mentioned and I don't see them as a sincere attempt to understand the source of disagreement.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-08-30T11:09:41.365Z · LW(p) · GW(p)
as well as promoting a community norm of insulting each other and assuming bad faith.
(The community norm should be to assume bad faith as much as is suggested by evidence. The extent to which bad faith is assumed shouldn't be a product of a community norm. Insulting is rarely useful, of course.)
Replies from: KPier↑ comment by KPier · 2011-08-31T00:14:18.806Z · LW(p) · GW(p)
Given the human tendency to get emotionally involved in an argument, I think a rule of "assume bad faith as much as is suggested by evidence" quickly devolves into "assume bad faith". If you want to argue for a community norm of "assume bad faith as much as suggested by the evidence even after updating on all the evidence that people are really bad at evaluating other people's motives", I wouldn't necessarily disagree, but in practice, I think that looks a lot like "assume good faith".
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-08-31T00:23:57.535Z · LW(p) · GW(p)
If there are known flaws in a method of inference, taking them into account should be part of what's done when performing an inference, or what's meant when suggesting to perform it. There should be no distinction between suggesting to look for a fact and suggesting to take into account possible flaws in the method of looking for that fact. This is a simple exercise of human power, something to encourage, not work around.
Replies from: KPier↑ comment by KPier · 2011-08-31T00:29:03.828Z · LW(p) · GW(p)
But, for instance, we know that flaws in our way of thinking about politics are so pervasive that we've decided to avoid it as much as possible. I would argue that flaws in our way of assessing whether other people in an online argument are arguing in good faith are nearly as pervasive, to the extent that assuming good faith is a better heuristic than assuming bad faith as much as is suggested by evidence.
And people who use an "assume good faith" model still change their mind once the evidence starts to accumulate; it's about what your default assumption is, not whether it's ever appropriate to say "You are arguing in bad faith."
↑ comment by [deleted] · 2011-08-30T01:38:32.893Z · LW(p) · GW(p)
For what it's worth, I agree that you're doing the right thing.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-30T01:46:55.950Z · LW(p) · GW(p)
Thank you. For your karma's sake, though, you might want to keep that to PM.
Replies from: None↑ comment by SilasBarta · 2011-08-29T17:36:28.314Z · LW(p) · GW(p)
Strawman much? Returning once to a topic you started != "spending all day re-checking the comment threads on old LW posts".
Replies from: lukeprog↑ comment by lukeprog · 2011-08-29T17:50:57.171Z · LW(p) · GW(p)
In order to see your earlier comments I would have had to return to that topic or specifically to the $125k post that I didn't write, and I'd need to have done that sometime after your comments appeared on one of those posts, and I would have had to have comments sorted so as to see your comment before I gave up scrolling through all the comments there, and something about the first bit of your comments would need to have grabbed my attention so that I would have read it rather than continuing to scroll down.
My LW comment-reading behavior doesn't 'zoom in' on comments from SilasBarta about minicamp. Either I would need to have gotten lucky or I would have to be doing that kind of thing with a broad selection of LW posts (and the comments made to them), and that's just not the case, because I don't "spend all day re-checking the comment threads on old LW posts."
Replies from: JoeW, SilasBarta↑ comment by JoeW · 2011-08-31T00:26:31.115Z · LW(p) · GW(p)
Asking generally - is there a compelling technical reason we don't have an option to "subscribe" to a thread, or be emailed notifications of direct replies to our comments? Or even that there was a reply, if not the reply itself?
I am mildly irked that I have to go check my LW inbox for this; working to a pull model rather than a push reinforces my mild tendencies toward online OCD and ADD.
(Or if there is one, please enlighten me, as I had thought I had searched sufficiently to find one if it existed.)
↑ comment by SilasBarta · 2011-08-29T17:58:29.503Z · LW(p) · GW(p)
Even now, there are only 3 threads on your mini-camp result announcement topic, and mine is at the top, 16 comments in total. 90% of the discussion is about the request for evidence. No need to "zoom in" on anything, nor re-check frequently. Please, stop trying to come up with stories to account for not having seen it; it's obvious you just never came back.
And really, it's not some kind of mortal sin or anything -- I don't see why you're going to such lengths to justify it.
Replies from: JackEmpty, lukeprog↑ comment by JackEmpty · 2011-08-29T18:26:28.658Z · LW(p) · GW(p)
If I was a Confessor I would have tazed you by now.
I am alright with your original questions on this, but now you're stretching. You seem to be going to unnecessary extremes to find fault with anything and everything that Luke has said on this. I judge this a violation of sanity.
Replies from: None, wedrifid, SilasBarta↑ comment by [deleted] · 2011-08-30T01:49:58.108Z · LW(p) · GW(p)
If I were a Confessor I would have tazed you by now.
That's probably why Confessors don't exist. We're not ready for them; we haven't grown up enough to cope with even a single, tiny dissenting voice without resorting to threats of counterfactual violence.
Replies from: JackEmpty↑ comment by JackEmpty · 2011-08-30T02:01:36.985Z · LW(p) · GW(p)
If I were a Confessor I would have tazed you by now.
Thanks for pointing that out. Typing quickly on the go does not afford much spell/grammar checking.
And yes, by all means, I only meant that from reading (most of) the comments and discussion on this topic that I in my current state would have tazed him, had I the job description of a Confessor. I didn't mean to imply that I was exceptionally good at judging sanity violations in any way, just a reference and a pithy statement of my view.
↑ comment by wedrifid · 2011-08-30T02:20:15.730Z · LW(p) · GW(p)
If I was a Confessor I would have tazed you by now.
I would have tazed you in turn. Not because you tazed Silas - I'd have done that too for his own sake. Rather, I'd have tazed you for the reasons you gave. You are observing two people bitching at each other each with their own (vastly different) kind of insanity and siding with the one with the most status and whose insane bitching is the most skilled (and socially typical). You are tazing the unsophisticated, lower status insane bitcher.
The evidence given suggests you are well suited to be a player in the social environment but not a confessor. In the future, when it matters, you can be expected to act as a social enforcer and not as a last bastion of sanity.
Replies from: JackEmpty↑ comment by JackEmpty · 2011-08-30T02:42:22.526Z · LW(p) · GW(p)
In the interest of full disclosure, I read the majority of this exchange in unordered chunks from the Recent Comments and mostly-backwards by going up context levels and trying to figure it out. And like I said to paper-machine I don't mean to say I'm exceptionally good at judging sanity violations, just being pithy.
I'll probably later on read them in some more-ordered fashion and see if I would taze luke too (even taking into account your claim you would).
Glad to know you'd be there to taze me if I started to go insane. It is appreciated. Not that I'm evaluating you as a fully superpowerful Confessor at the moment or anything. Here's a question though... who would you have tazed first?
Replies from: wedrifid↑ comment by wedrifid · 2011-08-30T02:52:49.240Z · LW(p) · GW(p)
In the interest of full disclosure, I read the majority of this exchange in unordered chunks from the Recent Comments and mostly-backwards by going up context levels and trying to figure it out.
I can see why reading in that order/style would leave you just shooting Silas. :)
Here's a question though... who would you have tazed first?
Chronologically, Luke. He was insane way back when the claims were first made and not defended, and Silas hadn't gone insane yet. If I were to enter the room now after observing from outside, I would shoot Silas first, pointedly shoot Luke as well, and give everyone else in the room a stern look. Then I'd confiscate your tazer and turn in my confessor's hood myself. Because I don't want that kind of responsibility.
I'd keep the tazers. Because I have yet to meet anyone who I would trust to confessor at me, even though there are those whose advice I value. I would always take care to position myself with my back to the wall such that I could see the movements of any confessors and rely on my reflexes and laser tag prowess to protect me from any nosy interventionists. If necessary I'd take them all out in a massive confessor tazing spree.
Replies from: JackEmpty↑ comment by JackEmpty · 2011-08-30T02:57:59.121Z · LW(p) · GW(p)
Can I take back what I said about being cool with you tazing me?
I think I'm just going to go read this thing in order and ignore any responses to my comments for a bit...
Replies from: wedrifid↑ comment by wedrifid · 2011-08-30T03:02:08.501Z · LW(p) · GW(p)
I think I'm just going to go read this thing in order and ignore any responses to my comments for a bit...
That sounds like an inefficient use of your time (also note that the conversation spans several posts).
This isn't even an interesting thread relative to other flame wars we've had!
Replies from: JackEmpty↑ comment by JackEmpty · 2011-08-30T03:13:27.585Z · LW(p) · GW(p)
Yes, but I've got the complicated issue of taking your interjection entirely truthfully. I don't strongly believe you have any motivation to lie to me, but I may want to go through a few just to verify.
In any case, I'm not going to do it now, just when I have some spare time and am not browsing other comments.
This isn't even an interesting thread relative to other flame wars we've had!
I only really started posting comments in March of this year, began reading the comments a month or so before then, and have been reading LW itself for a little over a year. I may still be a little green for any of the more interesting flame wars.
And yet crap, I'm already doing what I said I wouldn't. Shucks.
↑ comment by SilasBarta · 2011-08-29T18:31:40.784Z · LW(p) · GW(p)
"It's not the crime, it's the coverup."
If Luke wants to say he just ignored comments when he trumpeted the success (like a good Olympian) -- fine.
If he wants to invent stories about how seeing my request would have required "spending all day re-checking the comment threads on old LW posts" or how it would have been difficult see my comment in the massive thicket of 3 threads with mine at the top, etc etc etc -- then he's making things up, which is not fine.
Yes, I can think of someone who warranted a Confessor zapping.
Replies from: JackEmpty↑ comment by lukeprog · 2011-08-29T18:05:48.525Z · LW(p) · GW(p)
It's obvious you just never came back
I think I remember coming back once soon after posting, but YES! This is what I've been trying to say! I never saw your earlier comments concerning the evidence for minicamp's success, but you said you thought I was lying about that.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-29T18:14:31.602Z · LW(p) · GW(p)
Lying or never following up because of the Olympus mentality, yes. Neither reflects well on you. In any case, I did contact you via other means and got no response that way either. (And other people have been made aware of this issue, who probably contacted you as well.)
Make sure to thank the folks who have modded you up for your strawman comment and your elaborate excuses for how you could have missed the entire discussion on a thread you started.
So are you going to settle on the "too busy" or "never saw it" position?
Replies from: lukeprog↑ comment by lukeprog · 2011-08-29T18:51:44.910Z · LW(p) · GW(p)
When did you contact me via other means? Was it by email? Who else do you think contacted me about the issue? As I've said, I don't recall seeing your earlier comments on the matter. I've also said I don't recall returning to the minicamp announcement discussion more than once after I originally posted it, but I don't think this was because I have a 'Mount Olympus' mentality - more likely, it's because I was busy doing other things. After all, it was an announcement post, not a 'let's discuss topic X' post or a post that asked questions and expected replies.
I am curious what's giving you the impression that I have an 'Olympus mentality', though, and whether others have gotten the same impression. The feeling 'on the ground' is quite different. I feel I (justly) have no authority at all because (1) I learned about the intelligence explosion less than a year ago and discuss it every day with people who have thought for much longer about the subject, (2) I have completed no degrees and published no papers (yet) on the subject, and (3) I am surrounded by math and programming geniuses who inadvertently cause me to feel insecure about my relative lack of training in those fields.
Moreover, I try to speak less "from personal authority" than everyone else, via bothering to cite the scientific papers supporting many of the claims I make - and even if all I did was track down the right papers, read the abstracts, and cite them, this would still be more work than other LWers usually do to ground their claims in the scientific literature. (Of course this isn't always the case; I'm talking mostly about claims made in my articles about psychology and neuroscience.)
(Also, I don't just start with personal claims and then do a Google scholar search for supporting evidence. I start with a question and then read textbook and review article excerpts to figure out which researchers are studying the topic, and then I read or skim their articles on the topic to figure out what we know, how we know it, and what we don't know. And then I post my claims and cite the studies I found that guided me to make those claims.)
Back to your requests for evidence of minicamp's success, and my impact upon it....
- Anna took the time to write up some quantitative results from our exit survey. I haven't seen you either thank her for answering your request or give a different reply yet.
- She also included testimonials as to my own effectiveness during minicamp, and other minicampers have given their own (positive) accounts. You haven't replied to any of those.
- In a reply to you, jsalvatier linked to additional earlier positive testimonials, to which you also did not reply.
- I listed the preliminary evidence that led me to call minicamp a success, and you didn't reply to that subthread yet.
- You wanted to see the testimonials, and Anna posted them, and you didn't say thanks or reply to that yet either.
- I explained that further data measuring the effects of minicamp on its participants was still being gathered, but that this takes time and SI lacks available staff hours. Four other people have contacted me so far so they can free up my time by completing volunteer-doable tasks. At first you said you would volunteer, but then you apparently withdrew the offer.
As others have said, your objections have been addressed and it's hard to see why you're still unsatisfied. Could you explain? Are you mostly just wanting to see additional quantitative evidence of minicamp's effects on its participants' lives? If so, I explained long ago that this data was still in the process of being collected and parsed.
Also keep in mind jsalvatier's comment:
I think a lot of the hubbub in this thread is due to different interpretations of SIAI related folks saying that the minicamp was 'successful'. I think many people here have interpreted 'success' as meaning something like "definitely improved the rationality of the attendees lastingly" and I think SIAI folks intended to say something like "was competently executed and gives us something promising to experiment with in the future".
↑ comment by thomblake · 2011-08-29T19:06:19.899Z · LW(p) · GW(p)
Luke,
I appreciate your efforts to decode this 'Olympus mentality' nonsense, and in general to make sure you're not making communication errors. But at this point I believe you're just wasting your time. You've documented your research methods better than I've ever seen someone do, and they certainly don't need defending here.
On behalf of those who believe your work can positively impact the future of humanity and your time can be better spent elsewhere, I humbly request that you please file what you've been responding to under 'trolling' and move on.
↑ comment by SilasBarta · 2011-08-30T01:28:06.240Z · LW(p) · GW(p)
To me, your comments here look like trolling, but I guess YMMV.
↑ comment by SilasBarta · 2011-08-29T19:36:09.346Z · LW(p) · GW(p)
Calling someone on phony excuses is "trolling"? (That's the only reason the recent thread has stretched out so far.)
Next time, I guess I should shut up when Luke makes an(other) implausible claim?
↑ comment by JackEmpty · 2011-08-29T19:41:18.649Z · LW(p) · GW(p)
What about the part where you ignored the things you were asking for, and kept pressing on slightly-modified issues?
I'd call that trolling, along with the tone of some of your comments. Silas, frankly, this could have been executed much more diplomatically.
↑ comment by SilasBarta · 2011-08-29T19:49:15.106Z · LW(p) · GW(p)
What about the part where you ignored the things you were asking for, and kept pressing on slightly-modified issues?
Different topic. I'm being called a troll because Luke made implausible excuses, I'm calling him on it, he's digging himself deeper, and yet people are voting him up instead of me. (And this wasn't his first implausible one: remember the "we don't have time"/"we can't give that out" flip-flop?)
In any case, I'm not "pressing on slightly-modified issues", nor ignoring anything (unless someone else replied as I would have). I listed my criteria way back when; that's not answered, and there's every reason to believe they have that information.
Silas, frankly, this could have been executed much more diplomatically.
Sure, if the voting pattern is to be believed, I should just make up implausible stories, and then call people trolls if they ever call me on it. What exactly should I have done differently? Write out what my comments should have looked like.
↑ comment by NancyLebovitz · 2011-08-30T11:38:07.288Z · LW(p) · GW(p)
I don't think what you're doing is trolling, but I have a fairly tight definition for trolling-- I think of it as posting driven by abstract malice, a desire to cause pain which is divorced from the topic at hand. That clearly isn't you.
On the other hand, I think you're engaged in a bad emotional habit-- attributing a negative motivation to someone else on very little evidence, and getting stuck on the idea of that motivation.
↑ comment by SilasBarta · 2011-08-30T01:23:17.174Z · LW(p) · GW(p)
After all, it was an announcement post, not a 'let's discuss topic X' post or a post that asked questions and expected replies.
By the way, I'd like you to start labeling your topics in the future to avoid such a misunderstanding. Specifically, on all those which you deem to be "an announcement", which we're not supposed to discuss, or argue about, or criticize, or question, or whine about -- like the mini-camp results topic -- please indicate as much. Thanks.
Cheers,
Silas
↑ comment by SilasBarta · 2011-08-29T19:51:43.010Z · LW(p) · GW(p)
When did you contact me via other means? Was it by email?
Yes, around the time I posted it. I don't care if you find this implausible; I didn't find your claims of never having seen my requests plausible either. I don't have time to substantiate this claim either; I'm too busy.
After all, it was an announcement post, not a 'let's discuss topic X' post or a post that asked questions and expected replies.
People usually expect comments on topics they start; I don't know why you would expect otherwise just because it's "an announcement post". It's the Olympus mentality that says, "I talk, you listen, replies not wanted." It's the mentality that replies defensively to any cross-examination of evidence you've presented.
Anna took the time to write up some quantitative results from our exit survey. I haven't seen you either thank her for answering your request or give a different reply yet. She also included testimonials as to my own effectiveness during minicamp, and other minicampers have given their own (positive) accounts. You haven't replied to any of those. I listed the preliminary evidence that led me to call minicamp a success, and you didn't reply to that subthread yet. ... You wanted to see the testimonials, and Anna posted them, and you didn't say thanks or reply to that yet either.
I didn't assign those high priority for a reply because others already said what I would have said in reply, and it was becoming clear that I was not alone in being suspicious of this evidence.
I also didn't ask to see testimonials because I didn't consider those strong evidence for what you're claiming -- like others noted, they don't really distinguish you from any other self-help camp.
I explained that further data measuring the effects of minicamp on its participants was still being gathered, but that this takes time and SI lacks available staff hours.
And I explained that you don't get to play "evidence takes time!" while saying, "yep, I already have enough evidence to call this a success and ignore any questioning of this claim".
In a reply to you, jsalvatier linked to additional earlier positive testimonials, to which you also did not reply.
What was there to reply to? "Oh, well, since you personally feel you had a good time, I guess I shouldn't be suspicious that it improved anyone's rationality"?
At first you said you would volunteer, but then you apparently withdrew the offer.
I said I would pay the ransom you're demanding for your evidence. That was the only reason I was willing to help you. I never withdrew any offer; rather, you tried to change the topic to helping with your research, which I don't want to do, never did, and never offered to.
As others have said, your objections have been addressed and it's hard to see why you're still unsatisfied. Could you explain?
Those are regarding a different thread and different claim; there's nothing to explain here, and I'd appreciate if you didn't misreport evidence.
Also keep in mind jsalvatier's comment:
Sure thing -- and you can keep it in mind too, when using increasingly strong superlatives to describe our success.
↑ comment by lukeprog · 2011-08-30T03:44:52.570Z · LW(p) · GW(p)
Despite multiple requests to drop this discussion, I'd like to put a little more effort toward mutual understanding. Perhaps I'm irrationally optimistic for reconciliation and convergence.
Others have addressed the unproductive 'attack-mode' nature of your comments; I won't address that here. Suffice it to say that I have plenty to learn myself about communicating diplomatically.
I also won't say much more on the issue of my not having seen your earlier calls for evidence of minicamp's success. I can only repeat: If you want to be sure I'll read a particular comment, make sure you contact me directly or reply to one of my comments so that your comment shows up in the LW inbox. I do not have time to keep revisiting old posts and reading all comments made on them, and I kinda doubt anyone thinks that is the best use of my limited time when I could instead be doing research and academic outreach related to rationality and FAI theory. You may insist on attributing this to my 'Olympus Mentality', though I'll try to dissuade you from this interpretation below.
As for your definite accusation that I lied when I said I hadn't seen your earlier comments on the topic, it remains the case that I never replied to them, and they seem like comments I would have replied to given my well-documented defensiveness on LW. Just notice how tenaciously I've defended myself in this discussion, despite a continuous slew of character attacks.
As for your accusation that I strawmanned you, I tried to explain that unless I had a policy of checking tons of old posts for new comments it's not clear I would have seen your original comment, but you seem to simply disagree, so I don't think there's much more to say about that.
Finally, you seem to have suggested that I said I made announcement posts "we're not supposed to discuss, or argue about, or criticize, or question, or whine about -- like the mini-camp results topic", but that's just not true. You're welcome to discuss, argue, criticize, question, or whine about anything I post on Less Wrong. All I said was that I don't go back and check every post for new comments, and that if you want to make sure I read something you should contact me directly or be sure to reply directly to one of my comments so that I see it in my LW inbox.
A 'Successful' Minicamp
jsalvatier has repeatedly suggested that we may have different ideas of what I meant when I wrote that Rationality Minicamp was a success.
As KPier wrote in response to what seems to be your original comment on this topic, "The article pretty clearly states that the claims about the effects of the camp were based on exit surveys, and that the impact of the camp is demonstrated by the projects the camp grads are now working on. You could debate whether those are good measures, but we don't exactly have better ones." Later, Anna and I gave that specific evidence in more detail.
You might be willing to concede that the evidence from exit surveys and testimonials provide about as much evidence of minicamp 'success' as such measures are capable of providing, though that may not be much. Is that true?
But of course, you've been asking for stronger evidence. You'd like to see measures of rationality improvement or life success or something like that. I addressed this exact request directly in my very first comment on the topic:
We collected lots of data before and during minicamp. We are waiting for some time to pass before collecting followup data, because it takes time for people's lives to change, if they're going to change...
...we are still gathering data... before-and-after results will have to wait a while...
You replied that if these stronger forms of evidence don't yet exist, then I shouldn't claim that minicamp was a success. But again, I must repeat what KPier originally told you: My original blog post on minicamp being a success made it clear that such 'success' was assessed based on exit surveys and participant testimonials:
Our exit survey shows that the camp was a smashing success...
[Participants] continue to share the minicamp’s impact on themselves, and the impact they are having on others as a result, via an online mailing list and regular Skype video chats.
You seem to have interpreted 'success' in a different way than it was used in that blog post, perhaps to mean something like "Rationality minicamp successfully improved the rationality and life success of its participants, as demonstrated by several quantitative measures."
But as the original blog post shows, that's not what was meant to be claimed by calling the rationality minicamp a 'success'.
Now, I'll be happy to make this clearer by editing the original blog post, and by asking Eliezer to edit his post above. We could call it a 'highly praised' or 'well-reviewed' minicamp where brevity is needed, and where we have more space we could say something like "The minicamp was well-received by participants, who rated it highly in our anonymous exit survey and have given glowing reviews and reports of their resulting self-improvement. Further evidence concerning the minicamp's effect on participants' rationality and life success is pending."
As for the fact that this data is still being gathered because it takes time for people's lives to change and it takes time to parse collected data, you appear to have called this an "implausible excuse," though I still don't know what's implausible about it.
Or perhaps what you meant to call an "implausible excuse" is my point about how the raw exit survey data is anonymous and private, and that's why we can't publish it. But I'm not sure what's implausible about that, either. You can ask the minicamp participants themselves: We asked them to fill out one form that would be anonymous and private (the exit survey form), and another that would be identifiable and public (the testimonials form).
You also said I flip-flopped between these two "excuses", but that's not true. I maintain both claims. It takes time to collect and parse data on life changes and we can't publish the private and anonymous exit form data.
Olympus mindset
You keep finding things that you choose to interpret as demonstrating my 'Olympus mindset' without addressing disconfirming evidence like what I gave above:
I feel I (justly) have no authority at all because (1) I learned about the intelligence explosion less than a year ago and discuss it every day with people who have thought for much longer about the subject, (2) I have completed no degrees and published no papers (yet) on the subject, and (3) I am surrounded by math and programming geniuses who inadvertently cause me to feel insecure about my relative lack of training in those fields.
Moreover, I try to speak less "from personal authority" than everyone else, via bothering to cite the scientific papers supporting many of the claims I make - and even if all I did was track down the right papers, read the abstracts, and cite them, this would still be more work than other LWers usually do to ground their claims in the scientific literature.
I'd also be curious to hear from others who think I display an 'Olympus mindset', and what triggers give them that impression. I don't want to be giving off an inaccurate impression of myself in that way. I still practice facial expressions in the mirror because my face sometimes doesn't clearly communicate my mindset, and obviously I still need to practice my online communication because my typed words don't always clearly communicate my mindset, either.
EDIT: This has become an unproductive flame war, with no small thanks to my own behavior, and I will now bow out.
↑ comment by jsalvatier · 2011-08-31T15:12:02.230Z · LW(p) · GW(p)
Now, I'll be happy to make this clearer by editing the original blog post, and by asking Eliezer to edit his post above. We could call it a 'highly praised' or 'well-reviewed' minicamp where brevity is needed, and where we have more space we could say something like "The minicamp was well-received by participants, who rated it highly in our anonymous exit survey and have given glowing reviews and reports of their resulting self-improvement. Further evidence concerning the minicamp's effect on participants' rationality and life success is pending."
Support!
↑ comment by lukeprog · 2011-08-31T16:17:53.090Z · LW(p) · GW(p)
Okay, I've done both these things.
↑ comment by SilasBarta · 2011-08-31T16:28:07.770Z · LW(p) · GW(p)
Thanks, I appreciate it.
↑ comment by lessdazed · 2011-08-30T08:44:06.988Z · LW(p) · GW(p)
As for the fact that this data is still being gathered because it takes time for people's lives to change
I disagree with this. My intuition, supplemented by experience in somewhat analogous religious retreats, is that change happens easily in the camp environment and the question is how much of that will be inculcated enough to survive once the return to life happens.
I'd say it takes time to be sure people's lives have changed permanently, but not too much time for them to change.
↑ comment by wedrifid · 2011-08-30T04:21:55.307Z · LW(p) · GW(p)
EDIT: This has become an unproductive flame war, with no small thanks to my own behavior, and I will now bow out.
Luke wins the flame war! Huzzah!
↑ comment by MatthiasMiner · 2011-08-30T16:56:26.380Z · LW(p) · GW(p)
Yes he did! The smarter and tougher man won.
↑ comment by jsalvatier · 2011-08-29T21:51:07.574Z · LW(p) · GW(p)
So if minicamp related posts had instead of 'success' said 'we were very pleased with the execution of the camp and it gives us a promising direction to explore' would you have felt similarly to how you feel now?
If so, why are you still arguing about whether SIAI has prematurely judged the minicamp instead of asking them to make their judgment less ambiguous?
↑ comment by SilasBarta · 2011-08-30T01:21:20.126Z · LW(p) · GW(p)
They don't seem to agree that they should have done anything differently, so I don't know why I would be at the point of asking them to change something. But yes, I would appreciate characterizations of the mini-camp that aren't extremely misleading; please pass that on through someone they'll listen to.
↑ comment by katydee · 2011-08-25T10:28:03.466Z · LW(p) · GW(p)
It seems rather unlikely to me that being a mini-camp participant would have more of an effect on someone's life than being a Visiting Fellow, new techniques or not-- and if I am wrong, I would very much want to encounter these new techniques!
↑ comment by Kaj_Sotala · 2011-08-25T12:54:31.589Z · LW(p) · GW(p)
I wouldn't be that surprised. Explicit rationality exercises were only starting to be developed during the last month of my stay, and at that point they mostly fell into the category of "entertaining, but probably not hugely useful". The main rationality boost came from being around others with a strong commitment to rationality, but as situationist psychology would have it, the effect faded once I was out of that environment.
↑ comment by handoflixue · 2011-08-25T08:08:38.314Z · LW(p) · GW(p)
The positive endorphin rush from you and lukeprog sends signals that look just like the enthusiastic gushing I see from any week-long "how to fix your life in five easy steps!" seminar. Smart people get caught up in biased thinking all the time. I had a good friend quit AI research to sell a self-help book, so I may be particularly sensitive to this :)
Objective data means I can upgrade this from "oh bunnies, another self-help meme" to "oooh, fascinating and awesome thing that I want to steal for myself." As long as it signals like a self-help meme, I'm going to shoot it down just like I'd shoot down any similar meme that tried to sell itself here on LessWrong.
Replies from: Mass_Driver↑ comment by Mass_Driver · 2011-08-25T08:38:13.823Z · LW(p) · GW(p)
All right, but there's a fine line between shooting down self-help memes and unnecessarily discouraging project-builders from getting excited about their work. It's not fun or helpful for a pioneer to have his or her every first step be met with boundless skepticism. Your concerns sound real enough to me, but even an honest concern can be rude, and even a rationalist can validly trade off a tiny little bit of honesty for a whole lot of politeness and sympathy.
Why do I say "a tiny little bit of honesty?" Well, if the minicamp were being billed as "finished," "polished," "complete," "famous," "proven," or "demonstrably successful," as many self-help programs are, then it would make sense to demand data supporting those claims.
Instead, the PR blurb says that "Starting on May 28th, the Singularity Institute ran a one-week Rationality Training Camp. Our exit survey shows that the camp was a smashing success, surpassing the expectations of the organizers and the participants."
Leaving aside the colorful language that can and should characterize most press releases, this is a pretty weak claim: the camp beat expectations. Do you really need to see data to back that up?
↑ comment by handoflixue · 2011-08-25T08:49:31.180Z · LW(p) · GW(p)
this is a pretty weak claim: the camp beat expectations
"Please give us money" and "Co-organized and taught sessions for a highly successful one-week Rationality Minicamp" are stronger claims.
↑ comment by SilasBarta · 2011-08-29T17:03:37.661Z · LW(p) · GW(p)
For me, this isn't about making SIAI transparent; it does quite enough in that regard. It's about stopping an information cascade genie that's already out of the bottle.
Let me put it this way: right now the ratio of "relying on the assumption of mini-camp's success for decision making" to "available evidence for its success" is about 20-to-1. As I warned before, it's quickly becoming something "everyone knows" despite the lack of evidence (and major suspicions of many people that it wouldn't succeed going in). And that belief will keep feeding on itself unless someone traces it back to its original evidence.
It doesn't reassure me that I'm told I have to keep waiting before anything's conclusive, yet they can declare it a success now.
I just want the reliable evidence they claim to have, rather than just dime-a-dozen self-help testimonials. They collected hard data, and I gave them a list of things they could provide that are easy to gather and don't compromise privacy, and are much more likely to be present if the success were real than if it were not. Even after AnnaSalamon's circling of the wagons I don't see that.
↑ comment by jsalvatier · 2011-08-29T20:15:47.909Z · LW(p) · GW(p)
I think this is largely a case of people reading different things into 'success'.
↑ comment by lukeprog · 2011-08-25T00:49:44.118Z · LW(p) · GW(p)
Oh, sure. The reason is easy to communicate. We explicitly told minicampers that their feedback on the exit survey would be private and anonymous, for maximal incentive to be direct and honest. We are not going to violate that agreement. The testimonials were given via a separate form with permission granted to publish THAT data publicly.
↑ comment by handoflixue · 2011-08-25T00:57:28.059Z · LW(p) · GW(p)
Oh, sure. The reason is easy to communicate. We explicitly told minicampers that their feedback on the exit survey would be private and anonymous, for maximal incentive to be direct and honest.
I'm unclear, then, why you are citing a lack of staff hours if the data cannot be published at all.
The testimonials were given via a separate form with permission granted to publish THAT data publicly.
Has the raw testimonial data been published?
I'm assuming you have data beyond exit surveys and testimonials...?
↑ comment by lukeprog · 2011-08-25T01:16:34.819Z · LW(p) · GW(p)
A summary of the data can be published, for example median scores for measured values. But the data can't be published in raw form.
Not sure if raw testimonial data has been published yet. We do have data beyond testimonials and exit surveys, but that, too, requires precious staff hours to compile and write up, and it is still in the process of being collected.
Typing this stuff from a phone, pardon the brevity...
↑ comment by SilasBarta · 2011-08-24T19:05:40.567Z · LW(p) · GW(p)
I mean 'evidence' in the Bayesian sense, not the scientific sense.
Great, so did I! Now communicate that evidence. If it can't be communicated, I don't think you should be so confident in it.
One reason for not posting more information is that doing so requires lots of staff hours
I find that hard to believe. It may take time for the participants to report back, but not for you to tabulate the results.
We're also trying to, for example, develop a rationality curriculum and write a document of open problems in FAI theory.
I'm sorry, but this just sounds like excuse-making. Do you want your audience to be people who just take your word on something like this? I've asked several times for some very simple checks. This claim that you're too busy just doesn't fly.
I'm fairly confident that campers got more out of my fashion sessions than what they can learn only from looking at a few fashion magazines.
Then why don't you mention this in your "how to be happy" post, which is also being used as evidence of your productivity? Do you know a single person who has improved fashion to an acceptable level as a result of those magazines?
Cheers,
Luke
Not necessary.
↑ comment by Vladimir_Nesov · 2011-08-24T20:10:54.711Z · LW(p) · GW(p)
(I disapprove of downvoting the parent (which I just found at '-2'). It continues the same conversation as Silas's previous posts, pointing out what does look like rationalization. If raising a possibility of interlocutor's rationalizing in defense of their position is considered too rude to tolerate, we'll never fix such problems.)
↑ comment by Spurlock · 2011-08-24T21:00:33.377Z · LW(p) · GW(p)
I suspect that most of the downvotes came from the very last sentence, which struck me as more than a little snarky. "Cheers" might not be necessary, but it is a gesture of politeness and was probably added in an attempt to convey a positive tone (which is important but somewhat tricky in text). I wouldn't say "not necessary" if someone held the door for me, even if it is obviously true.
Agree with you that the actual substance of the post was in no way downvote-worthy.
↑ comment by komponisto · 2011-08-24T21:59:43.440Z · LW(p) · GW(p)
Because "signing" comments is not customary here, doing so signals a certain aloofness or distance from the community, and thus can easily be interpreted as a passive-aggressive assertion of high status. (Especially coming from Luke, who I find emits such signals rather often -- he may want to be aware of this in case it's not his intention.)
I interpret Silas's "Not necessary" as roughly "Excuse me, but you're not on Mount Olympus writing an epistle to the unwashed masses on LW down below".
↑ comment by Miller · 2011-08-25T04:13:24.634Z · LW(p) · GW(p)
Because "signing" comments is not customary here, doing so signals a certain aloofness or distance from the community
No. I am very confident the intention was to signal that Luke was not being emotionally affected by the intense criticism for the purpose of appearing to be leader type material, which is substantially not aloofness from the community.
It's not a convincing signal, primarily because its idiosyncrasy highlights it for analysis, but I still think the above holds.
↑ comment by komponisto · 2011-08-25T14:40:15.474Z · LW(p) · GW(p)
I am very confident the intention was to signal that Luke was not being emotionally affected by the intense criticism for the purpose of appearing to be leader type material
Or, in other words, signaling high status -- just like I said.
which is substantially not aloofness from the community.
It may not be aloofness, but it certainly is distance (I used two words for a reason); a leader is, necessarily, separated in some way from those who are led.
↑ comment by handoflixue · 2011-08-25T00:24:19.057Z · LW(p) · GW(p)
Second this.
↑ comment by NancyLebovitz · 2011-08-26T11:23:18.155Z · LW(p) · GW(p)
I saw lukeprog's signing messages as minor noise and possibly a finger macro developed long ago, so I stopped seeing the signature.
I'd be dubious about assuming one can be certain (where's the Bayesianism?) about what someone else is intending to signal, especially considering that it's doctrine here that one can't be certain of even one's own motivations. How much less certain should one be about other people's?
I would add some further uncertainty if one feels very sure about the motivation driving a behavior that's annoying.
↑ comment by saturn · 2011-08-26T16:20:52.385Z · LW(p) · GW(p)
I just looked through several pages of lukeprog's most recent comments, and the only ones that were signed were direct replies to SilasBarta.
↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T16:42:30.012Z · LW(p) · GW(p)
Which could just mean that he feels the need to counter hostility with extra friendliness.
↑ comment by NancyLebovitz · 2011-08-26T16:59:57.002Z · LW(p) · GW(p)
That's interesting. Thanks for checking. I still have no idea what lukeprog intended, but my finger macro theory is clearly wrong.
↑ comment by lessdazed · 2011-08-26T11:34:18.633Z · LW(p) · GW(p)
doing so signals...a passive-aggressive assertion... emits such signals rather often -- he may want to be aware of this in case it's not his intention.
vs.
I'd be dubious about assuming one can be certain (where's the Bayesianism?) about what someone else is intending to signal
I didn't read komponisto as necessarily or primarily talking about consciously intended signals.
↑ comment by SilasBarta · 2011-08-24T21:45:46.164Z · LW(p) · GW(p)
I get annoyed by people who "sign" posts in the text like that, especially when they do it specifically on replies to me. It really isn't necessary. I'm interested in substance, not pleasantries, as I was three months ago when I asked how the mini-camp was a success.
↑ comment by AuthorityFigure · 2011-08-25T15:18:29.538Z · LW(p) · GW(p)
" I'm interested in substance, not pleasantries"
You are so right. Fuck being nice as it is merely a tool for the irrational.
↑ comment by SilasBarta · 2011-08-24T20:13:44.528Z · LW(p) · GW(p)
Thanks, Vladimir.
↑ comment by lukeprog · 2011-08-25T00:38:42.881Z · LW(p) · GW(p)
Great, so did I! Now communicate that evidence. If it can't be communicated, I don't think you should be so confident in it.
Here:
- Surprisingly positive reviews in both qualitative and quantitative forms on our exit surveys.
- Follow-ups with many individual minicampers who report that several of the things we taught have stuck with them and improved their lives.
- People telling me to my face during minicamp that they were getting lots of value out of it.
- Enthusiastic testimonials.
It may take time for the participants to report back, but not for you to tabulate the results.
No, it definitely takes time to tabulate the results and write a presentable post about the results. I've personally spent 3 hours on it already but the project is unfinished.
Do you want your audience to be people who just take your word on something like this?
Ah. I may not have communicated this clearly: I think your skepticism concerning the success of the minicamp is warranted because almost no evidence is available to you. You're welcome to not take my word for it. When I have another 5-10 hours to finish putting together the results and write a post with more details about minicamp, I will, but I'm mostly waiting to invest that time until I can do it most profitably, for example when we've gathered more 'after minicamp' data.
Then why don't you mention this in your "how to be happy" post, which is also being used as evidence of your productivity?
I don't understand. The 'How to Be Happy' post was written before I helped run minicamp. And, there are tons of things not mentioned in that post. That post barely scratches the surface of my thoughts on happiness, let alone research on happiness in general.
Do you know a single person who has improved fashion to an acceptable level as a result of those magazines?
I doubt magazines are ever the sole input into someone's fashion sense, but yes, I know people who have improved their fashion as a result of following magazines (or fashion blogs; same thing basically). Ask Peter Scheyer about this, for example.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-08-25T20:46:33.672Z · LW(p) · GW(p)
Given the large amount of effort it took to get to the minicamps, all four of these could be easily explained by cognitive dissonance.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T07:39:58.334Z · LW(p) · GW(p)
What evidence would you expect them to have if the "minicamp" was a genuine success? (Edited - thanks for the correction, wedrifid!)
Replies from: wedrifid, JoshuaZ↑ comment by wedrifid · 2011-08-26T08:02:01.640Z · LW(p) · GW(p)
What evidence would you expect them to have if the "bootcamp" was a genuine success?
Bootcamp? I found the wording Eliezer used fascinating:
Co-organized and taught sessions for a highly successful one-week Rationality Minicamp, and taught sessions for the nine-week Rationality Boot Camp.
Have they actually claimed anywhere here that the bootcamp was successful?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T11:54:00.849Z · LW(p) · GW(p)
Oops, fixed - thanks!
↑ comment by JoshuaZ · 2011-08-26T12:56:09.179Z · LW(p) · GW(p)
Honestly, I'm not sure. Having a randomized control group and then looking at actual success would be nice. Even without a good control, one obvious thing to do would have been to do before and after tests of similar questions that test for rational behavior (e.g. whether they can recognize they are engaging in the sunk cost fallacy and things like that). It may be that given the circumstances the best evidence we have is self-reporting like this. If so, it is evidence for the success of the minicamps. But, it is not very strong evidence precisely because it is consistent with a variety of other not implausible hypotheses. This thread has made me more inclined to believe that the minicamps were successful, but has not strongly increased my confidence.
↑ comment by timtyler · 2011-08-24T22:14:03.891Z · LW(p) · GW(p)
I mean 'evidence' in the Bayesian sense, not the scientific sense.
Shouldn't these be the same? Bayesian evidence is surely scientific evidence - and vice versa. I don't see much point in multiplying definitions of "evidence". Let's just have one notion of "evidence", please. Promoting multiple "evidence" concepts seems to be undesirable terminology - unless there's a really good reason for doing so.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-08-24T22:21:57.545Z · LW(p) · GW(p)
There is a good reason. A lot of things people know can't contribute to forming reliable public knowledge for all sorts of practical reasons. And you know the reference for the arguments about this question: Scientific Evidence, Legal Evidence, Rational Evidence.
Replies from: timtyler↑ comment by timtyler · 2011-08-24T23:43:11.656Z · LW(p) · GW(p)
Actually, that helps. As a teenager, I noticed that most of the scientific method, including the key concept of experimentation, extended to personal knowledge and understanding. So, I did what seemed to be the obvious thing: I expanded my conception of science to include that domain. The public-only conception of science wasn't really much of a natural kind - since eventually technology would gain access to people's minds.
That explains why I don't get very much out of the Science vs Bayes material on this site. To me it just looks as though the true nature of science has not been properly grokked.
I must say, I still like my way: expanding the definition of science a teeny bit has a number of virtues over trying to stage a rationality revolution.
↑ comment by jsalvatier · 2011-08-24T16:34:54.167Z · LW(p) · GW(p)
I'm curious, why you guys didn't post the testimonials or surveys you gathered at the end of the camp? Obviously these should be accompanied with appropriate caveats, but I think this would help explain to people why you are pleased with the results and think 'we're on to something'.
Replies from: lukeprog↑ comment by lukeprog · 2011-08-24T18:41:12.713Z · LW(p) · GW(p)
Largely, lack of available staff hours.
Replies from: AnnaSalamon, jsalvatier, handoflixue, SilasBarta↑ comment by AnnaSalamon · 2011-08-25T04:08:01.174Z · LW(p) · GW(p)
Here are excerpts from the Minicamp testimonials (which were written to be shown to the public), with a link to the full list at the end:
“The week I spent in minicamp had by far the highest density of fun and learning I have ever experienced. It's like taking two years of college and condensing it to a week: you learn just as much and you have just as much fun. The skills I've learned will help me set and achieve my own life goal, and the friends I've made will help me get there.” --Alexei
“This was an intensely positive experience. This was easily the most powerful self-modification I've ever made, in all of the social, intellectual, and emotional spheres. I'm now a more powerful person than I was a week ago -- and I can explain exactly how and why this is true.
At mini-camp, I've learned techniques for effective self-modification -- that is, I have a much deeper understanding of how to change my desires, gather my willpower, channel my time and cognitive resources, and model and handle previously confusing situations. What's more, I have a fairly clear map of how to build these skills henceforth, and how to inculcate them in others. And all this was presented in such a way that any sufficiently analytical folk -- anyone who has understood a few of the LW sequences, say -- can gain in extreme measures.” --Matt Elder / Fiddlemath
“I expected a week of interesting things and some useful tools to take away. What I got was 8 days of constant, deep learning, challenges to my limits that helped me grow. I finally grokked that I can and should optimize myself on every dimension I care about, that practice and reinforcement can make me a better thinker, and that I can change very quickly when I'm not constrained by artificial barriers or stress.
I would not recommend doing something like this right before another super-busy week, because I was learning at 100% of capacity and will need a lot of time to unpack all the things I learned and apply them to my life, but I came away with a clear plan for becoming better. It is now a normal and easy thing for me to try things out, test my beliefs, and self-improve. And I'm likely to be much more effective at making the world a better place as well, by prioritizing without fear.
The material was all soundly-researched and effectively taught, with extremely helpful supplemental exercises and activities. The instructors were very helpful in and out of session. The other participants were excited, engaged, challenging, and supportive.
I look forward to sharing what I've learned with my local Lesswrong meetup and others in the area. If that's even 1/4 as awesome as my time at the Mini-Camp, it will make our lives much better.” --Ben Hoffman / Benquo
“I really can't recommend this camp enough! This workshop broke down a complex and intertwined set of skills labelled in my brain as "common sense" and distinguished each part so that I could work on them separately. Sessions on motivation, cognition, and what habits to build to not fool yourself were particularly helpful. This camp was also the first example that I've seen of people taking current cognitive science and other research, decoding it, and showing people what's been documented to work so that they can use it too. It feels to me now as though the coolest parts of the sequences have been given specific exercises and habits to build off of. This camp, and the people in it, have changed my path for the better.” --David Jones / TheDave
You can now also read the full testimonials, from everyone who chose to give one.
↑ comment by jsalvatier · 2011-08-24T19:47:48.722Z · LW(p) · GW(p)
Is this something one of the minicampers might be willing and able to do?
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-24T20:10:25.649Z · LW(p) · GW(p)
No, they don't have the time to save time.
↑ comment by handoflixue · 2011-08-25T00:54:08.257Z · LW(p) · GW(p)
I'm genuinely interested in seeing this data published, because I think it's something that a lot of people can build off of. If the only obstacle is really hours, I am happy to contribute.
I would be happy to show up in person while I'm in the area, pick up any paper notes you have available, transcribe them, and mail the originals back once finished. I have professional experience with data entry (including specifically product surveys) and market research in general. I'll be in San Francisco the afternoon of Monday, September 5th, hopefully around noon. I leave early Tuesday morning.
Replies from: lukeprog↑ comment by lukeprog · 2011-08-25T01:20:20.585Z · LW(p) · GW(p)
Great! Much of the minicamp data is private and anonymous, so I can't share that with you, but I definitely have tasks for volunteers to do that will free up time for me to write up a minicamp report - some of those tasks are even directly relevant to minicamp. Please email me at lukeprog at gmail if you'd like to help.
↑ comment by SilasBarta · 2011-08-24T19:07:44.471Z · LW(p) · GW(p)
Is there a post requesting volunteer help with this administrative task?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-24T21:23:15.108Z · LW(p) · GW(p)
Administrating volunteers also requires staff hours. Sometimes more than the original task. Why, are you volunteering to administrate them?
Replies from: handoflixue, SilasBarta↑ comment by handoflixue · 2011-08-25T00:54:46.502Z · LW(p) · GW(p)
http://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/4oxp I can administer myself when it comes to basic data transcription.
↑ comment by SilasBarta · 2011-08-24T21:43:18.294Z · LW(p) · GW(p)
Sure, I'd love to! (I thought I didn't qualify to volunteer for SIAI?) Hand over whatever you have and I'll make sure they do it right! (I thought this administration has to be done by someone in the loop on this, but whatever.)
Oh, you were just hoping I'd drop it, and the issue of actual substantiation of the mini-camp's success (which lies at the top of your reasons for wanting to fund Luke) would die off? Can't help there.
Replies from: lukeprog, AuthorityFigure↑ comment by lukeprog · 2011-08-25T00:40:41.624Z · LW(p) · GW(p)
SilasBarta,
I need to write up the results myself because I personally ran the minicamp with Anna Salamon and Andrew Critch. But I have tons of stuff I could have you do that would free up more of my time to get around to writing up results of minicamp data. If you're interested in helping, that would be awesome. You can contact me at lukeprog [at] gmail.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-25T23:05:48.911Z · LW(p) · GW(p)
I actually have no interest in supporting your research. Every time I ask a clarifying question of any of your claims, you get extremely defensive and fail to answer it, which suggests a poor understanding of what you're trying to present results on. Also, every piece of advice I've followed falls woefully short of what you claim it does, and I don't seem to be alone here (on either point). I think your contributions are overrated.
(This is a large part of why I put such a low prior on claims of the minicamp's phenomenal success and was so skeptical of the report.)
I don't claim to have contributed more research, to LW, of course, but when I do present research I make sure to understand it.
In fairness, your more recent work doesn't seem to be subject to any of this, so I could very well change my opinion on this.
Replies from: ciphergoth, KPier, lessdazed, orthonormal, None, OptimalFAI↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T07:35:40.496Z · LW(p) · GW(p)
I wish you'd be a bit more Columbo about this. You know, unfailingly polite, generous, but dogged to the end.
↑ comment by KPier · 2011-08-26T01:21:50.974Z · LW(p) · GW(p)
The comment you linked to doesn't seem like a clarifying question at all. I think that conversation might be another instance where your belligerent tone hurt communication even though your point was a valid one. Luke's answer didn't seem particularly defensive, either (although I have seen other conversations where his tone was defensive, so I won't challenge that point.)
I actually agree with you on this point (and upvoted all your questions about it), but the longer you argue the less sympathetic I'm getting. You asked when results would be published, and the answer was that it requires a lot of processing time. You asked if volunteers can help, and Luke answered that while volunteers can't help on that specifically, they can contribute to other things which will speed up the process. To which you answered:
I actually have no interest in supporting your research....I think your contributions are overrated.
Which just doesn't show a whole lot of interest in actually resolving the problem you're concerned about.
↑ comment by lessdazed · 2011-08-25T23:46:38.057Z · LW(p) · GW(p)
Every time I ask a clarifying question of any of your claims, you get extremely defensive and fail to answer it, which suggests a poor understanding of what you're trying to present results on.
Regarding the comment thread you linked to, I agree that the initial response you received was defensive and uninformative. I am not surprised to see it sitting at zero upvotes.
When you prodded further, you got a good response, so while I think you didn't come out badly in that exchange at all, I am surprised that you are citing it as evidence of lukeprog understanding poorly. It instead suggests that he responds defensively even when he does understand and has a cogent answer, and that defensiveness from him implies shallow understanding far less than it would from others.
I think your contributions are overrated.
I have little problem with bluntly telling people that they suck, and by extension don't mind less offending forthright communication, but I am leery of discussing people's work by evaluating how people's work compares to the popular perception of their work. It introduces an unnecessary factual dispute - how people are perceived.
E.g. "Loui Eriksson is the most underrated player in the NHL, just ask anyone! Wait a second...if everyone agrees, then..."
Replies from: lessdazed, SilasBarta↑ comment by lessdazed · 2012-01-12T23:03:35.278Z · LW(p) · GW(p)
"Loui Eriksson is the most underrated player in the NHL, just ask anyone! Wait a second...if everyone agrees, then..."
A new player poll asking who the most underrated NHL player is just came out, and guess who got more than twice as many votes as the second most voted for player? Hint: he was named to the All-Star roster last year...yes, it's Loui Eriksson, again. This makes little sense. How many years in a row and in how many polls can a single guy be perceived by so many as "most underrated?"
New Year's resolution: avoid discussing whether or not something is overrated or underrated and simply evaluate its actual worth.
↑ comment by SilasBarta · 2011-08-29T16:51:41.827Z · LW(p) · GW(p)
When you prodded further, you got a good response, so while I think you didn't come out badly in that exchange at all, I am surprised that you are citing it as evidence of lukeprog understanding poorly.
I disagree. The final response was just as unhelpful; it's just that I didn't bother pushing the point. Luke tried to imply that the research he cited showed how to "dress fashionably based on magazines" yet not be consumerist, which is completely false.
I have little problem with bluntly telling people that they suck, and by extension don't mind less offending forthright communication, but I am leery of discussing people's work by evaluating how people's work compares to the popular perception of their work.
Well, there is a tendency among forums for people to automatically vote up anything that looks well researched, so it's important to know when that facade isn't holding up. And considering the number of times Luke gets corrected on his use of a source or otherwise crumples on any follow-up question, I'm worried this is one of those cases, and so I can't avoid implicating people for hasty upvoting.
But again, some of his more recent work looks to be more careful.
↑ comment by orthonormal · 2011-08-27T16:52:52.229Z · LW(p) · GW(p)
It may be OK in poker to try calling someone's bluff with a bluff of your own, but it's pretty rude in real life.
↑ comment by OptimalFAI · 2011-08-26T01:07:33.453Z · LW(p) · GW(p)
"I actually have no interest in supporting your research. "
Eliezer might have to defend the Golden Boy here. This takes issue with Eliezer's promotion of Luke.
"I think your contributions are overrated."
Ouch! Silas is bringing the bitch slap.
↑ comment by AuthorityFigure · 2011-08-25T15:29:24.329Z · LW(p) · GW(p)
"Sure, I'd love to! (I thought I didn't qualify to volunteer for SIAI?)"
LOL. Way to play up the role of the passive-aggressive outsider.
Replies from: OptimalFAI↑ comment by OptimalFAI · 2011-08-25T18:59:45.010Z · LW(p) · GW(p)
Very true.
In the interest of optimizing our rationality I think that we need to continue to call out instances of "community distancing" such as the one exhibited by Silas above.
The reason for doing so? It lets the dissenters know that a community can tolerate and appreciate criticism but not the creation of a lone wolf character. Lone wolves do not contribute to a community and instead impede our advances in rationality by drawing conversations back to their status. As such, their status seeking should be pointed out and skepticism should be attached to their future postings.
Passive-aggressive comments in particular are troublesome because these types eventually find ways to disrupt substantive threads by reminding others of their loner status and their unacknowledged genius. Their resentment then leads to them mocking key figures in a community (note Silas' comments to both Eliezer and Luke).
Perhaps LW needs a mini-sequence on acceptable and non-acceptable signaling within a rational community.
Replies from: wedrifid, lessdazed, None↑ comment by wedrifid · 2011-08-26T06:51:24.989Z · LW(p) · GW(p)
As such, their status seeking should be pointed out and skepticism should be attached to their future postings.
I object. "Lone wolves", and Silas in particular, are not more status seeking than average. Luke's contributions are far more status seeking than Silas's are. Luke is good at status seeking while Silas's biggest weakness is that he fails to status seek when it would clearly be in his interests to do so.
↑ comment by lessdazed · 2011-08-25T20:31:22.620Z · LW(p) · GW(p)
Lone wolves do not contribute to a community
You think they don't contribute at all?
Replies from: jsalvatier, FederalVArm↑ comment by jsalvatier · 2011-08-25T21:44:37.230Z · LW(p) · GW(p)
I think all 3 of these accounts are spoofs: they have odd names, and no other activity.
↑ comment by FederalVArm · 2011-08-25T21:16:27.844Z · LW(p) · GW(p)
About my experience...with LW's is that silence is many times golden. There are those who are rather old fashioned and follow Marine Law. In therapy you isolate the problem and separate what doesn't belong like a sculptor does when observing the stone it is about to chisel out.
If a drifter would come into any mention I always heard that many dislike drifters because you don't know too much about those, and if you do learn something desperately trying to find out where they went suddenly they have vanished leaving behind some interesting questions. So to say, a lone wolf has nothing to contribute I would say conservatively one ought to be careful to discern a wolf; one could have an Angel or a Devil ready to gobble you up hehehe
↑ comment by [deleted] · 2011-08-25T19:16:53.123Z · LW(p) · GW(p)
It lets the dissenters know that a community can tolerate and appreciate criticism but not the creation of a lone wolf character. Lone wolves do not contribute to a community and instead impede our advances in rationality by drawing conversations back to their status.
Let's taboo "lone wolf" and see what you actually mean by it, because I don't see Silas as a lone wolf figure in this debacle. For example, most of his comments have positive karma -- what I would consider a lone wolf wouldn't have such support.
↑ comment by jsalvatier · 2011-08-24T15:36:48.533Z · LW(p) · GW(p)
I agree with 'more testing and evidence, please', but you often come across as adversarial and I think that generally makes it harder for you to convince the people you want to convince.
As an aside, remember that the minicamp was a relatively unplanned event; it came about because SIAI had extra time and space. I will be more concerned if the megacamp has a similar lack of testing.
Replies from: SilasBarta, bentarm, None↑ comment by SilasBarta · 2011-08-24T16:05:24.191Z · LW(p) · GW(p)
I agree with 'more testing and evidence, please', but you often come across as adversarial and I think that generally makes it harder for you to convince the people you want to convince.
Well, I hope they're not relying on "Silas is a meanie" as their intellectual "covering fire" for not substantiating this claim. And it's not that I want more testing and evidence, I just want to see what they think proves its success.
As an aside, remember that the minicamp was a relatively unplanned event; it came about because SIAI had extra time and space.
True, but I wouldn't be asking for any of this if leaders didn't try to paint it afterwards as a major success. If they want to take a risky venture, fine. If they want to play, "I meant to do that", let's see what it accomplished.
Replies from: jsalvatier↑ comment by jsalvatier · 2011-08-24T16:25:32.981Z · LW(p) · GW(p)
I'm not talking about 'covering fire'. If your goal is to win an argument or appear righteous, then your strategy is alright. If your goal is to actually get SIAI to change their behavior, then your language is hurting your cause. You want to make it as easy as possible for them to change their behavior, and it's psychologically much easier to do something because an ally asks than because an adversary asks.
You have seen evidence: both Guy (link) and I (link) posted 'lessons learned' for the minicamp. You are right to say this is not especially strong evidence, but it is evidence. I think it would have been good to video tape some of the sessions and post them and post the exit surveys (they took testimonials too).
Replies from: wedrifid, None↑ comment by wedrifid · 2011-08-26T06:59:51.865Z · LW(p) · GW(p)
If your goal is to win an argument or appear righteous, then your strategy is alright.
No, it clearly isn't. He left himself wide open to this sort of attack.
More fundamentally he justified and explained himself. He did it reasonably well and a good justification can work but it is almost never the optimal strategy.
↑ comment by [deleted] · 2011-08-24T21:04:02.390Z · LW(p) · GW(p)
Data is not the plural of anecdote.
Replies from: shokwave↑ comment by shokwave · 2011-08-25T03:36:37.396Z · LW(p) · GW(p)
(That quote is commonly used by Science and is technically inaccurate under Bayes, in case you were wondering.)
Replies from: Miller, None↑ comment by Miller · 2011-08-25T03:45:29.838Z · LW(p) · GW(p)
Can we be forgiving and assume that multiple anecdotes fail because they have a consistent bias related to how they are obtained?
Replies from: shokwave↑ comment by shokwave · 2011-08-25T03:54:18.219Z · LW(p) · GW(p)
Sure, and I regularly do ("Well, if situation X seems like it would produce anecdote Y, then all anecdote Y shows us is that situation X happened, not that contention Z is necessarily true - only if situation X shows us that contention Z is true").
I would surmise that not all commenters are willing to be that forgiving.
↑ comment by [deleted] · 2011-08-25T04:36:05.988Z · LW(p) · GW(p)
And how else should I update after reading two self-selected, subjective assessments? This has a perfectly reasonable Bayesian interpretation.
EDIT: Also note that the grandparent was posted before AnnaSalamon actually fixed the problem at hand.
EDIT x2: And while I'm endlessly editing this comment, let me note that most of this drama could have been averted if someone had just posted the damn data instead of coming up with multiple, bad excuses. Lots of guilty parties, only a couple heroes (in my book, at least).
Replies from: shokwave↑ comment by shokwave · 2011-08-25T07:15:32.722Z · LW(p) · GW(p)
And how else should I update after reading two self-selected, subjective assessments?
Very little. I was explaining why your comment was downvoted so much. I said "technically inaccurate" as opposed to "wrong" because I am sympathetic to your point of view; it is almost no data. But it is a little bit of data.
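To make the "a little bit of data" point concrete, here is a toy Bayesian update. All of the probabilities below are invented purely for illustration; the point is only that self-selected testimonials shift the posterior a little, not a lot, because glowing testimonials are nearly as likely when a camp didn't work as when it did.

```python
# Toy Bayesian update: how much should two glowing, self-selected
# testimonials move us toward "the minicamp worked"?
# All numbers are hypothetical, chosen only to illustrate the shape
# of the update.

prior_worked = 0.5  # prior probability the camp genuinely worked

# Likelihood of observing enthusiastic testimonials under each hypothesis.
# The key point: enthusiasm is common even when a camp didn't work.
p_testimonials_if_worked = 0.9
p_testimonials_if_failed = 0.7

# Bayes' theorem: P(worked | testimonials)
numerator = prior_worked * p_testimonials_if_worked
denominator = numerator + (1 - prior_worked) * p_testimonials_if_failed
posterior_worked = numerator / denominator

# Posterior is about 0.56 versus a prior of 0.5: a real but modest update.
print(f"{posterior_worked:.2f}")
```

Because the two likelihoods are close, the likelihood ratio (9/7) is near 1 and the evidence is weak; it would only become strong if testimonials were much less likely under the "camp failed" hypothesis.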
↑ comment by bentarm · 2011-08-25T01:13:14.748Z · LW(p) · GW(p)
remember that the minicamp was a relatively unplanned event; it came about because SIAI had extra time and space. I will be more concerned if the megacamp has a similar lack of testing.
As far as I can tell, the mega-camp actually had even less testing than the mini-camp. I did leave before the last week though, so I can't be sure quite what was done then. We had a discussion at the beginning about how we would decide if the camp had been a success, but I don't think we came to any very satisfactory conclusions.
↑ comment by [deleted] · 2011-08-24T15:53:55.678Z · LW(p) · GW(p)
It's been almost a month, and nothing. I think a bit of contrarianism is warranted.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2011-08-24T16:20:48.358Z · LW(p) · GW(p)
I don't think that number is correct.
Replies from: None↑ comment by [deleted] · 2011-08-24T20:50:04.820Z · LW(p) · GW(p)
Silas' original comment was August 1, and it's nearly the end of August.
Perhaps I want payday to come too dearly. Eh.
Replies from: None↑ comment by [deleted] · 2011-08-24T22:00:46.312Z · LW(p) · GW(p)
Did paper-machine get downvoted for admitting he said something hastily, and correcting it to be accurate and precise? People are weird.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-08-24T22:06:23.698Z · LW(p) · GW(p)
Head honcho promotes SIAI insider. Annoying guy asks for evidence of insider's successes. Imagine how some people are going to vote on that.
Replies from: None↑ comment by [deleted] · 2011-08-24T22:08:54.466Z · LW(p) · GW(p)
I really wish I had my PGP key here. You've earned yourself some papermachine points.
EDIT:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Today (25/8/11) User:SilasBarta
earned five paper-machine points
by successfully arguing for the release of mini-camp data
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
iQEcBAEBAgAGBQJOVmChAAoJEDjQ6lJtxNBEEHIIALmWPoAYHBYFh1+PFxxnLA41
CM90FZxEVQEbXVSCEGtU4P0PIfJn0sQ1aweXIoQE10imRsNF8RMUwT5C+ImrJx5O
SL8KrtGqYARoMho+H017TwXyX3tg3wId2ZS6j2wxmkYiX/SOWK5rIityoF2d2pV5
SrVma3My24oQlFYiXOPloIGDWMUn+DzBTJH646qrZHqTIGbu+hXfV9zTDK9uGHBb
u5a3BLfQNXe+2LvXcuLiAJ+nGfYDTztZ98OehFq4BdChPug3GnXpKWKfaZ9zlcOb
zyP6MKxLfqtyVJeUBVyJYQUeTFWw8ROG0z8pzK5mey9u6eZuaglDqG4qv16pC3Q=
=TdF7
-----END PGP SIGNATURE-----
↑ comment by WrongBot · 2011-08-25T07:47:16.725Z · LW(p) · GW(p)
One of the many things I updated on as a result of the 9-week bootcamp is the importance of tone. I'm sympathetic to your data-crusade, but the way in which you're prosecuting it is leading me to dislike you.
You've made a number of posts indicating that you place a high priority on finding and joining a rationalist community. That will be more difficult if you are generally perceived by rationalists as a hostile conversationalist; you should be more strategic about achieving your goals.
↑ comment by XFrequentist · 2011-08-24T15:02:52.401Z · LW(p) · GW(p)
Upvoted, and props for sticking to your guns on this.
I appreciate that substantiating this sort of thing is non-trivial, but I would like to see at least an effort at some sort of evaluation.
Replies from: RobertLumley↑ comment by RobertLumley · 2011-08-24T15:19:12.133Z · LW(p) · GW(p)
Agreed. And to clarify, my repeated upvotes of Silas's comments on this matter are meant as a seconding of his request...
↑ comment by SarahSrinivasan (GuySrinivasan) · 2011-08-24T15:09:20.084Z · LW(p) · GW(p)
At the very least I would like to see some kind of plan for evaluating future ventures... it may be too late now for anything but post-hoc qualitative for the mini-camp (or mega-camp?). Except that we did take a survey just before the camp, whose general idea I remember but not the specific questions, and I think there are plans to send it out again 10 months from now. Unfortunately that survey is available online IIRC, but I won't go look at it.
↑ comment by AnnaSalamon · 2011-08-25T02:32:18.555Z · LW(p) · GW(p)
comment by Nisan · 2011-08-26T00:25:30.289Z · LW(p) · GW(p)
I'm not sure yet how much I want to donate, so I just donated 100 USD. Yay Luke!
Replies from: gwern↑ comment by gwern · 2011-08-26T00:45:15.476Z · LW(p) · GW(p)
I take it $100 was the lower bound on the various donations you were considering?
Replies from: Nisan↑ comment by Nisan · 2011-08-26T09:45:19.818Z · LW(p) · GW(p)
Yes.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-26T10:53:46.142Z · LW(p) · GW(p)
Upvoted for correctly subverting the standard madness of inaction.
comment by Kaj_Sotala · 2011-08-27T16:13:28.502Z · LW(p) · GW(p)
I've been low on cash recently, so I can't donate as much as I used to, but I resumed the 10 EUR/month regular donation I previously had going on and which got cancelled for some reason which I forget. (Hopefully, I should be keeping that going indefinitely.)
I felt a remarkable resistance to donating such a small amount, feeling that it's even more embarrassing than not donating at all. But then I came to my senses and figured I'd post this comment to encourage others to also make a small donation rather than no donation. If you're considering giving a sum which seems too small to be worth it, there's no shame in that! I did it too!
Also, go Luke! You're awesome, and have the kind of amazing energy to work on these issues that I can only dream of having. Just be careful not to burn yourself out.
comment by Rain · 2011-08-24T23:34:34.647Z · LW(p) · GW(p)
Also mentioned in the email that went out to the SIAI mailing list, a pledge of monthly donations for the next 12 months counts at the full year's value for purposes of matching.
Replies from: Nick_Tarleton, VNKKET, Leonhart↑ comment by Nick_Tarleton · 2011-08-24T23:37:01.938Z · LW(p) · GW(p)
Thanks, this is good to know and really should have been stated on the donations page.
↑ comment by Leonhart · 2011-08-29T20:33:50.090Z · LW(p) · GW(p)
Had already donated; but learning this prompted me to add a monthly donation as well. What makes it qualify as a 12-month-pledge though, as opposed to an indefinite-period-direct-debit? I didn't see any language to that effect. Is there a super-secret form I should have used?
Replies from: Rain↑ comment by Rain · 2011-08-29T20:50:23.352Z · LW(p) · GW(p)
When I called the contact number, they said I should use the monthly donation method, and it would be counted as such. I asked if I should put it in the comment field, and they said that would help clarify things. My comment read, "I pledge to donate $X per month for 12 months."
The only place I ever saw it mentioned was a small sentence in an email that went out only to those on the SingInst mailing list. I don't know if they planned on doing it that way from the start, especially since they didn't seem to have thought through the logistical problems around 'making a pledge' or monitoring follow-through (what happens if someone cancels after one month?), etc.
comment by jsalvatier · 2011-08-24T14:49:48.612Z · LW(p) · GW(p)
Thanks for starting to make the case for SIAI's marginal need for funding.
comment by wmorgan · 2011-08-24T14:25:51.025Z · LW(p) · GW(p)
I lurk on this site every day, and this is the first I've heard about the Summer Challenge. Close call! I just added singinst.org/blog to my feed reader, and sent $1000 Luke's way.
I love funds-matching opportunities! And yet I didn't get an email or anything?
Replies from: lessdazed, MichaelAnissimov
↑ comment by lessdazed · 2011-08-24T16:32:59.093Z · LW(p) · GW(p)
You missed my favorite comment of all time, by dripgrind. I think a reason it wasn't more widely publicized is expressed by Plasmon.
↑ comment by MichaelAnissimov · 2011-08-24T22:39:31.574Z · LW(p) · GW(p)
Hi wmorgan, you should sign up for the SIAI email list for future announcements. It's on the singinst.org homepage where it says "Email sign up". Thanks for your donation!
comment by Jonathan_Graehl · 2011-08-24T07:32:41.624Z · LW(p) · GW(p)
This is brilliant. How many cents a day will it take to feed him? :)
Replies from: HughRistik, Nisan
comment by jsalvatier · 2011-08-25T21:55:56.154Z · LW(p) · GW(p)
I think a lot of the hubbub in this thread is due to different interpretations of SIAI-related folks saying that the minicamp was 'successful'. I think many people here have interpreted 'success' as meaning something like "definitely improved the rationality of the attendees lastingly" and I think SIAI folks intended to say something like "was competently executed and gives us something promising to experiment with in the future".
comment by MichaelAnissimov · 2011-08-25T01:10:20.337Z · LW(p) · GW(p)
I just updated the Challenge Grant total to $39,695, or about 32% of our total. THANK YOU to everyone who donated in the last couple days, many of you did!
Replies from: Tripitaka
↑ comment by Tripitaka · 2011-08-25T16:08:04.426Z · LW(p) · GW(p)
May I suggest that, with regard to Eliezer's HP/MOR challenge, the page gets updated at least once every 24 hours, possibly even more often?
Replies from: komponisto, MichaelAnissimov
↑ comment by komponisto · 2011-08-27T01:02:51.526Z · LW(p) · GW(p)
Eliezer's HP/MOR challenge
What does this refer to?
Replies from: MinibearRex
↑ comment by MinibearRex · 2011-08-27T01:07:56.106Z · LW(p) · GW(p)
The Singularity Institute for Artificial Intelligence, the nonprofit I work at, is currently running a Summer Challenge to tide us over until the Singularity Summit in October (Oct 15-16 in New York, ticket prices go up by $100 after September starts). The Summer Challenge grant will double up to $125,000 in donations, ends at the end of August, and is currently up to only $39,000 which is somewhat worrying. I hadn't meant to do anything like this, but:
I will release completed chapters at a pace of one every 6 days, or one every 5 days after the SIAI's Summer Challenge reaches $50,000, or one every 4 days after the Summer Challenge reaches $75,000, or one every 3 days if the Summer Challenge is completed. Remember, the Summer Challenge has until the end of August, after that the pace will be set. (Just some slight encouragement for donors reading this fic to get around to donating sooner rather than later.) A link to the Challenge and the Summit can be found in the profile page, or Google "summer singularity challenge" and "Singularity Summit" respectively.
The original description is here.
↑ comment by MichaelAnissimov · 2011-08-26T23:09:07.566Z · LW(p) · GW(p)
I am updating it every 24 hours. I usually update it at around 7-8PM PST.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-27T00:06:34.457Z · LW(p) · GW(p)
Does the current page take into account http://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/4p9a and http://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/4p1x in the total?
Replies from: MichaelAnissimov
↑ comment by MichaelAnissimov · 2011-08-27T00:07:42.102Z · LW(p) · GW(p)
No it doesn't. I'll update it now. Give me an hour or two, there are a lot of entries.
Replies from: MichaelAnissimov
↑ comment by MichaelAnissimov · 2011-08-27T02:14:04.877Z · LW(p) · GW(p)
No one person donated $10,000. One person did create a $1,000/mo. subscription which counts as a $12,000 pledge. Only one person so far has contributed more than $5,000.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-27T02:15:56.400Z · LW(p) · GW(p)
By which you mean, we haven't gotten the check yet, and the current total doesn't take into account pengvado's donation?
Replies from: MichaelAnissimov
↑ comment by MichaelAnissimov · 2011-08-27T02:22:59.744Z · LW(p) · GW(p)
Yep, I meant that the current record of online donations shows that. Should I add the check to the total even if we haven't received it yet, being confident that we will?
comment by AlexMennen · 2011-08-24T21:34:00.999Z · LW(p) · GW(p)
Donated $285, unrestricted, although funding Luke sounds like a fine thing to do with it. Also, will Bitcoin donations be matched by their dollar exchange rate?
Replies from: AlexMennen
↑ comment by AlexMennen · 2011-08-25T03:52:49.151Z · LW(p) · GW(p)
Also, will Bitcoin donations be matched by their dollar exchange rate?
I came up with a better idea than waiting for an answer to this: I just donated 2.74 bitcoin to SIAI. According to MtGox, the last trade was at a price of $10.8 per bitcoin, so that works out to about $30. If you are reading this, I encourage you to take the initiative and match this contribution yourself.
comment by Alexei · 2011-08-24T14:02:53.473Z · LW(p) · GW(p)
Sounds like a fantastic plan. I am going to the donate page right now.
Replies from: MichaelAnissimov
↑ comment by MichaelAnissimov · 2011-08-25T00:50:52.134Z · LW(p) · GW(p)
Thanks for donating, and for taking advantage of the monthly subscription counting for a full year of pledged funds.
comment by RobertLumley · 2011-08-24T14:55:30.350Z · LW(p) · GW(p)
Should this really be under main, and promoted at that? My impression was that main posts, and especially promoted ones were supposed to be reserved for posts discussing rationality and its applications, meant to be held up as examples of our best work. I have nothing against lukeprog, the SIAI, or this effort, but I don't think this really qualifies.
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-08-24T17:03:24.798Z · LW(p) · GW(p)
Should this really be under main, and promoted at that?
Yes, absolutely. Calls for action in support of causes associated with this site are material for promoted front page articles. It is valuable to send the message that participating, actually donating money rather than just thinking "That's nice, SIAI is hiring another research fellow", is important by giving prominence to the announcement.
Replies from: RobertLumley
↑ comment by RobertLumley · 2011-08-24T17:23:21.733Z · LW(p) · GW(p)
Calls for action in support of causes associated with this site are material for promoted front page articles.
But there aren't causes associated with the site. If the site were about promoting the SIAI, I would agree with you. But LessWrong is about rationality, not promoting SIAI, even though those two sometimes coincide.
It is valuable to send the message that participating, actually donating money rather than just thinking "That's nice, SIAI is hiring another research fellow", is important by giving prominence to the announcement.
I can't disagree more. That's just telling people a bottom line. We need to be promoting articles that say why the SIAI is doing good work, and discussing the rationality behind supporting it. Not just telling people that, "Hey, there's this organization we think is great and you should donate to it."
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-08-24T17:39:35.121Z · LW(p) · GW(p)
But there aren't causes associated with the site.
That is simply false. LW was created by SIAI with the purpose of generating rationalists interested in reducing existential risks, and accepting and even encouraging that it might produce rationalists interested in other causes.
In the early days, we specifically avoided talking about SIAI, FAI, and existential risks because we didn't want shiny discussions about those topics to overwhelm our work on rationality. Now that we are more established, we no longer do that. From the beginning, that policy was meant to be temporary.
We need to be promoting articles that say why the SIAI is doing good work, and discussing the rationality behind supporting it.
False dichotomy. We have had lots of discussions about the effectiveness of SIAI. We can also have announcements of when they have a project that needs funding. There are people here who are already convinced SIAI is an effective charity worth supporting, but need some encouragement to actually support it. That is why this kind of announcement is important.
Replies from: RobertLumley
↑ comment by RobertLumley · 2011-08-24T17:44:12.404Z · LW(p) · GW(p)
That is simply false.
You're right. That was a horribly crafted sentence, in many ways. They are clearly associated. But the site is about rationality, not the SIAI. That was my point. (The statement is also patently false if you take "rationality" as a cause, which is entirely reasonable.)
False dichotomy. We have had lots of discussions about the effectiveness of SIAI. We can also have announcements of when they have a project that needs funding. There are people here who are already convinced SIAI is an effective charity worth supporting, but need some encouragement to actually support it. That is why this kind of announcement is important.
Sure. But that doesn't mean it needs to be in Main and promoted...
From About Less Wrong
Once you have 20 or more karma points, you're allowed to make posts to the main community blog. (Click 'Create New Article' and change 'Post to' to 'Less Wrong'.) This section is intended for posts about rationality theory or practice that display well-edited writing, careful argument, and new material.
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-08-24T18:01:13.340Z · LW(p) · GW(p)
Sure. But that doesn't mean it needs to be in Main and promoted...
From About Less Wrong
Once you have 20 or more karma points, you're allowed to make posts to the main community blog. (Click 'Create New Article' and change 'Post to' to 'Less Wrong'.) This section is intended for posts about rationality theory or practice that display well-edited writing, careful argument, and new material.
That is in the context of telling people new to the site what sort of article they should write if they want to publish in main, and it describes the primary usage, but it is not comprehensive. The actual use of the Main section does include this sort of announcement. It is normal, generally accepted by the community, and has been going on since LW split into Main and Discussion sections.
In general you will find that the actual use of many things in life does not match up with original intentions or simplified descriptions.
Replies from: RobertLumley
↑ comment by RobertLumley · 2011-08-24T18:25:58.808Z · LW(p) · GW(p)
It is normal, generally accepted by the community
Judging by the upvotes of my original comment, there is not as much unity on that point as you seem to believe.
And frankly, the context doesn't matter. If posts like these are acceptable, then that statement is patently false, and should be changed. If it is not false, then posts like this are inappropriate on main. But given how clear that statement is, there is no room for consistency between it and a post like this being in main.
In general you will find that the actual use of many things in life does not match up with original intentions or simplified descriptions.
That's an incredibly patronizing tone to take, and I don't appreciate it.
But putting that aside, this is a largely irrelevant statement (and its irrelevance only serves to highlight the insult). I don't disagree. But what bearing does that have on what should be? Should we attempt to describe what types of posts are acceptable in main accurately, or should we not? There may be arguments on both sides. But that's not one of them.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-08-24T20:47:35.896Z · LW(p) · GW(p)
If posts like these are acceptable, than that statement is patently false, and should be changed. If it is not false, then posts like this are inappropriate on main.
What does the description "is acceptable" refer to? Acceptable by what criterion? The real question is whether things like this should be encouraged or discouraged, using whatever methods are at our disposal, including establishing a policy for moving "off-topic" posts out of Main. Instead, you seem to be appealing to an existing social attitude, which shouldn't be a major factor, as it too can be influenced by good arguments and other means.
Replies from: RobertLumley
↑ comment by RobertLumley · 2011-08-24T22:23:53.193Z · LW(p) · GW(p)
Fair point. I had already expressed that I thought they should be separate, though, and was looking for other places that similar opinions had been expressed.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-08-24T22:26:44.079Z · LW(p) · GW(p)
Whether these opinions are right vs. whether they are popular.
comment by MichaelAnissimov · 2011-08-29T00:03:53.401Z · LW(p) · GW(p)
The current total as of 5PM PST August 28th is $103,235, or 82.58% of our goal.
Replies from: KPier
↑ comment by KPier · 2011-08-29T00:06:37.798Z · LW(p) · GW(p)
Does that now include pengvado's $10,000?
Replies from: MichaelAnissimov
↑ comment by MichaelAnissimov · 2011-08-29T04:47:34.083Z · LW(p) · GW(p)
Yes.
comment by aletheilia · 2011-08-26T21:42:23.951Z · LW(p) · GW(p)
I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of "let's see, if I donate $100, that may buy a few meals in the States, especially CA, but on the other hand, if I keep it, I can live ~2/3 of a month on that, and since I also (aspire to) work on FAI-related issues, isn't this a better way to spend the little money I have?"
But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and losing $5 isn't going to kill me, I've just made this tiny donation...
comment by MatthewBaker · 2011-08-24T09:02:15.594Z · LW(p) · GW(p)
Your direct pleas for money often work best against my akrasia, Eliezer. Maybe some new lack of fallacies in my thinking has convinced me to give you money for ideas that some consider foolish, but here is a Benjamin for the cause.
comment by SarahSrinivasan (GuySrinivasan) · 2011-08-24T15:32:50.147Z · LW(p) · GW(p)
Continue to develop his metaethics sequence, the conclusion of which will be a sort of Polymath Project for collaboratively solving open problems in metaethics relevant to FAI development.
This in itself, well-run, is worth a salary in expectation. Luke, have you talked to Michael Nielsen or Tim Gowers or Terence Tao or Gil Kalai about things that worked or didn't in the polymath projects they've run? I'm certain at least Nielsen will be very interested.
Edit: it feels like it's worth that. I didn't do any math.
comment by Armok_GoB · 2011-08-24T17:16:54.677Z · LW(p) · GW(p)
I'm confused. Why are donations for this separate from other kinds of SIAI donations? Some kind of psychological trick to increase total donations? Or will his wage be directly proportional to how much money comes with such comments?
Replies from: DSimon, Eliezer_Yudkowsky
↑ comment by DSimon · 2011-08-24T17:58:48.065Z · LW(p) · GW(p)
From the article, my semi-guess is that the SIAI has a number of things they'd like to fund, one of which is hiring Luke. If you think that hiring Luke is more important than other things the SIAI could be doing with immediate income right now, then you can specify that in your donation and make it happen.
Which brings up a question: what are the other things the SIAI would be spending new money on other than hiring Luke? Nothing against Luke, who is clearly awesome, but it's hard to build an ordered list of preferences if every choice but one is labelled "???".
Replies from: Kaj_Sotala, RobertLumley
↑ comment by Kaj_Sotala · 2011-08-24T20:30:51.138Z · LW(p) · GW(p)
There are a number of ways by which I could imagine SIAI choosing between the things they'd like to fund. A popular vote where only one of the options is specified isn't one of them.
I'd guess this is an attempt to get more donations, plus an experiment on the effectiveness of such an approach.
↑ comment by RobertLumley · 2011-08-24T19:00:17.285Z · LW(p) · GW(p)
I was going to make the exact point you mention in your second paragraph after reading your first, and then you beat me to it...
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-24T21:26:37.760Z · LW(p) · GW(p)
#include "dripgrind.h"
Replies from: lessdazed, Armok_GoB
comment by thomblake · 2011-08-24T13:37:19.779Z · LW(p) · GW(p)
Seconded. People should definitely do this.
Replies from: SilasBarta
comment by hairyfigment · 2011-08-29T05:28:43.304Z · LW(p) · GW(p)
I announced my $500 donation here. But I didn't mean to suggest that I oppose hiring Luke (just that I would have donated without this thread).
comment by AndrewClough · 2011-08-28T16:19:06.644Z · LW(p) · GW(p)
I had been planning on donating right before New Year's, when I do all my other charitable donations, but now that I'm aware of the matching I'm moving things up.
comment by [deleted] · 2011-08-24T22:05:46.304Z · LW(p) · GW(p)
This really should have been done as a Kickstarter project. If SIAI suddenly decides it doesn't have enough money to fund lukeprog, what is going to happen to the people donating "unrestricted" but with the intent to fund lukeprog? Why should SIAI waste resources administrating the fundraiser while a perfectly good third-party product exists?
Replies from: Davorak
comment by Tripitaka · 2011-08-27T14:57:11.411Z · LW(p) · GW(p)
All of this is surprisingly effective in overcoming my akrasia, esp. Nisan's way, so on top of my donation I wanted to subscribe monthly - unfortunately it seems that a credit card is necessary for that. Any ideas how to circumvent this? I do not want to get a (regular) credit card.
Replies from: Kaj_Sotala, Rain
↑ comment by Kaj_Sotala · 2011-08-27T17:36:34.445Z · LW(p) · GW(p)
On the off chance that you happen to have a Visa Electron, PayPal will accept it as if it were a credit card.
↑ comment by Rain · 2011-08-27T16:53:07.008Z · LW(p) · GW(p)
Paypal monthly donation can do it using a registered bank account (direct debit), or any other source accepted by Paypal.
Replies from: Tripitaka
↑ comment by Tripitaka · 2011-08-27T17:00:40.087Z · LW(p) · GW(p)
Well, no, PayPal does not want to: "You have got to register a credit card; direct debit is not possible."
Replies from: Rain
↑ comment by Rain · 2011-08-27T17:08:40.893Z · LW(p) · GW(p)
Weird. They tried really hard to force me to use my bank account, and previously limited my credit card donations because I didn't have a bank account attached.
I guess they want as many sources of your money as possible before they let you do much with them.
Replies from: Tripitaka
comment by Thrasymachus · 2011-08-26T21:14:49.486Z · LW(p) · GW(p)
Saying "the rationality minicamp was highly successful" before you have analyzed the data you have gathered to assess the success of the rationality minicamp is irrational.
If success at the minicamp is important - as suggested by its being listed first in Eliezer's recommendation - why not wait until you CAN analyze the data, to see whether it really was successful, before you recommend hiring Luke? Doing so means a) you can make a more persuasive case to donors, and b) if the minicamp WASN'T successful, then one can reconsider the hire.
The fact that this plug happened before the analysis signals that Eliezer is committed to recommending Luke's hire regardless of whether the analysis shows the minicamp was successful or not. And if HE doesn't think the minicamp's success is relevant to the merits of hiring Luke, why is he using it to persuade us?
Disclaimer: I think Luke has added lots of value to this site, and I would be unsurprised if later transparent analysis showed the minicamp to be highly successful. But the OP (as well as comments switching between various reasons/excuses for failing to present data, etc.) is suggestive of irrational salesmanship. Perhaps a salutary lesson that rationality experts still succumb to bias?
Replies from: shokwave, GuySrinivasan, Academian
↑ comment by shokwave · 2011-08-27T03:19:00.114Z · LW(p) · GW(p)
Saying "the rationality minicamp was highly successful" before you have analyzed the data you have gathered to assess the success of the rationality minicamp is irrational.
I am in the mind of Einstein's Arrogance here. The people involved in the camps received a lot of evidence that (obviously) isn't available to us, because we weren't there. They might have enough evidence to be convinced that it worked already - but of course, they also set up data-gathering mechanisms so that they could have enough evidence to convince people who necessarily don't have access to the physical experience of the camp that it worked.
I expect that this is the case, and Eliezer sees absolutely no problem with citing this as something in favour of Luke's hire.
↑ comment by SarahSrinivasan (GuySrinivasan) · 2011-08-26T21:56:03.550Z · LW(p) · GW(p)
I would be unsurprised if later transparent analysis showed the minicamp to be highly successful
What do you mean by unsurprised? Some words don't communicate as well as we feel they do.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2011-08-27T16:41:17.883Z · LW(p) · GW(p)
My sentence was trash, sorry!
What I should have said:
"I have a 0.5ish estimate of the minicamps success, with big error bars. The suspicious behaviour is no more than minute confirmation for it being a failure."
Or something like that.
↑ comment by Academian · 2011-09-02T08:16:33.204Z · LW(p) · GW(p)
I was at the camp. It was spectacularly awesome in my judgement, too, and Luke was a big part of that. \end{soft.bayesian.evidence}
Specifically, the camp is tied for the title of the most life-altering workshop-like event of my life, and I've been to many such events, inside and outside academia (~3 per year for the past 10 years). The tie is with the workshop that got me onto my PhD topic (graphical causal modelling), so that's saying something.