Comment by alyssavance on CFAR’s new focus, and AI Safety · 2016-12-03T02:02:03.752Z · score: 29 (29 votes) · LW · GW

This is just a guess, but I think CFAR and the CFAR-sphere would be more effective if they focused more on hypothesis generation (or "imagination", although that term is very broad). Eg., a year or so ago, a friend of mine in the Thiel-sphere proposed starting a new country by hauling nuclear power plants to Antarctica, and then just putting heaters on the ground to melt all the ice. As it happens, I think this is a stupid idea (hot air rises, so the newly heated air would just blow away, pulling in more cold air from the surroundings). But it is an idea, and the same person came up with (and implemented) a profitable business plan six months or so later. I can imagine HPJEV coming up with that idea, or Elon Musk, or von Neumann, or Google X; I don't think most people in the CFAR-sphere would, because it's just not the kind of thing I think they've focused on practicing.

Comment by alyssavance on On the importance of Less Wrong, or another single conversational locus · 2016-11-30T01:12:31.584Z · score: 2 (2 votes) · LW · GW

Seconding Anna and Satvik

Comment by alyssavance on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:32:02.905Z · score: 16 (16 votes) · LW · GW

Was including tech support under "admin/moderation" - obviously, ability to eg. IP ban people is important (along with access to the code and the database generally). Sorry for any confusion.

Comment by alyssavance on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T17:37:29.503Z · score: 2 (2 votes) · LW · GW

If the money is there, why not just pay a freelancer via Gigster or Toptal?

Comment by alyssavance on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T10:39:26.176Z · score: 32 (32 votes) · LW · GW

I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:

"A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating." (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)

That's obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there's no one person with both the technical and moral authority to:

  • set the rules that all participants have to abide by, and enforce them
  • decide principles for what's on-topic and what's off-topic
  • receive reports of trolls, and warn or ban them
  • respond to complaints about the site not working well
  • decide what the site features should be, and implement the high-priority ones

Pretty much any successful subreddit, even smallish ones, will have a team of admins who handle this stuff, and who can be trusted to look at things that pop up within a day or so (at least collectively). The highest intellectual-quality subreddit I know of, /r/AskHistorians, has extremely active and rigorous moderation, to the extent that a majority of comments are often deleted. Since we aren't on Reddit itself, I don't think we need to go quite that far, but there has to be something in place.

Comment by alyssavance on Why CFAR's Mission? · 2015-12-31T13:11:21.240Z · score: 11 (11 votes) · LW · GW

I mostly agree with the post, but I think it'd be very helpful to add specific examples of epistemic problems that CFAR students have solved, both "practice" problems and "real" problems. Eg., we know that math skills are trainable. If Bob learns to do math, along the way he'll solve lots of specific math problems, like "x^2 + 3x - 2 = 0, solve for x". When he's built up some skill, he'll start helping professors solve real math problems, ones where the answers aren't known yet. Eventually, if he's dedicated enough, Bob might solve really important problems and become a math professor himself.

Training epistemic skills (or "world-modeling skills", "reaching true beliefs skills", "sanity skills", etc.) should go the same way. At the beginning, a student solves practice epistemic problems, like the ones Tetlock uses in the Good Judgement Project. When they get skilled enough, they can start trying to solve real epistemic problems. Eventually, after enough practice, they might have big new insights about the global economy, and make billions at a global macro fund (or some such, lots of possibilities of course).
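To make "practice epistemic problems" concrete: Tetlock-style forecasting questions are typically scored with the Brier score, the mean squared error between the stated probability and the 0/1 outcome. Below is a minimal sketch of that scoring; the questions, probabilities, and outcomes are made up purely for illustration.

```python
# Minimal sketch: scoring practice forecasts with the Brier score, i.e. the
# mean squared error between the stated probability and the 0/1 outcome.
# The questions, probabilities, and outcomes below are made up for illustration.

def brier_score(forecasts):
    """Average of (p - outcome)^2 over all (probability, outcome) pairs; lower is better."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Each entry: (probability the student assigned, what actually happened: 1 = yes, 0 = no).
student_forecasts = [
    (0.7, 1),  # e.g. "Will country X hold elections this year?" -- said 70%, it happened
    (0.2, 0),  # e.g. "Will commodity Y rise 10% by June?" -- said 20%, it didn't
    (0.9, 0),  # an overconfident miss
]

# 0.0 is a perfect score; always answering 50% scores 0.25.
print(f"Brier score: {brier_score(student_forecasts):.3f}")
```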

To use another analogy, suppose Carol teaches people how to build bridges. Carol knows a lot about why bridges are important, what the parts of a bridge are, why iron bridges are stronger than wood bridges, and so on. But we'd also expect that Carol's students have built models of bridges with sticks and stuff, and (ideally) that some students became civil engineers and built real bridges. Similarly, if one teaches how to model the world and find truth, it's very good to have examples of specific models built and truths found - both "practice" ones (that are already known, or not that important) and ideally "real" ones (important and haven't been discovered before).

Comment by alyssavance on Why CFAR? The view from 2015 · 2015-12-18T03:19:17.768Z · score: 29 (26 votes) · LW · GW

Hey! Thanks for writing all of this up. A few questions, in no particular order:

  • The CFAR fundraiser page says that CFAR "search[es] through hundreds of hours of potential curricula, and test[s] them on smart, caring, motivated individuals to find the techniques that people actually end up finding useful in the weeks, months and years after our workshops." Could you give a few examples of curricula that worked well, and curricula that worked less well? What kind of testing methodology was used to evaluate the results, and in what ways is that methodology better (or worse) than methods used by academic psychologists?

  • One can imagine a scale for the effectiveness of training programs. Say, 0 points is a program where you play Minesweeper all day, and 100 points is a program that could take randomly chosen people, and make them as skilled as Einstein, Bismarck, or von Neumann. Where would CFAR rank its workshops on this scale, and how much improvement does CFAR feel like there has been from year to year? Where on this scale would CFAR place other training programs, such as MIT grad school, Landmark Forum, or popular self-help/productivity books like Getting Things Done or How to Win Friends and Influence People? (One could also choose different scale endpoints, if mine are suboptimal.)

  • While discussing goals for 2015, you note that "We created a metric for strategic usefulness, solidly hitting the first goal; we started tracking that metric, solidly hitting the second goal." What does the metric for strategic usefulness look like, and how has CFAR's score on the metric changed from 2012 through now? What would a failure scenario (ie. where CFAR did not achieve this goal) have looked like, and how likely do you think that failure scenario was?

  • CFAR places a lot of emphasis on "epistemic rationality", or the process of discovering truth. What important truths have been discovered by CFAR staff or alumni, which would probably not have been discovered without CFAR, and which were not previously known by any of the staff/alumni (or by popular media outlets)? (If the truths discovered are sensitive, I can post a GPG public key, although I think it would be better to openly publish them if that's practical.)

  • You say that "As our understanding of the art grew, it became clear to us that “figure out true things”, “be effective”, and “do-gooding” weren’t separate things per se, but aspects of a core thing." Could you be more specific about what this cashes out to in concrete terms; ie. what the world would look like if this were true, and what the world would look like if this were false? How strong is the empirical evidence that we live in the first world, and not the second? Historically, adjusted for things we probably can't change (like eg. IQ and genetics), how strong have the correlations been between truth-seeking people like Einstein, effective people like Deng Xiaoping, and do-gooding people like Norman Borlaug?

  • How many CFAR alumni have been accepted into Y Combinator, either as part of a for-profit or a non-profit team, after attending a CFAR workshop?

Comment by alyssavance on A Proposed Adjustment to the Astronomical Waste Argument · 2013-05-27T20:45:49.350Z · score: 9 (11 votes) · LW · GW

"Religions partially involve values and I think values are a plausible area for path-dependence."

Please explain the influence that, eg., the theological writings of Peter Abelard, described as "the keenest thinker and boldest theologian of the 12th Century", had on modern-day values that might reasonably have been predictable in advance during his time. And that was only eight hundred years ago, only ten human lifetimes. We're talking about timescales of thousands or millions or billions of current human lifetimes.

"Conceivably, the genetic code, base ten math, ASCII, English language and units, Java, or the Windows operating system might last for trillions of years."

This claim is prima facie preposterous, and Robin presents no arguments for it. Indeed, it is so farcically absurd that it substantially lowers my prior on the accuracy of all his statements, and it lowers my prior on your statements, given that you would present it with no evidence except a blunt appeal to authority. To see why, consider, eg., this set of claims about standards lasting two thousand years (a tiny fraction of a comparative eyeblink), and why even that is highly questionable. Or this essay about programming languages a mere hundred years from now, assuming no x-risk and no strong-AI and no nanotech.

"For specific examples of changes that I believe could have very broad impact and lead to small, unpredictable positive trajectory changes, I would offer political advocacy of various kinds (immigration liberalization seems promising to me right now), spreading effective altruism, and supporting meta-research."

Do you have any numbers on those? Bostrom's calculations obviously aren't exact, but we can usually get key numbers (eg. # of lives that can be saved with X amount of human/social capital, dedicated to Y x-risk reduction strategy) pinned down to within an order of magnitude or two. You haven't specified any numbers at all for the size of "small, unpredictable positive trajectory changes" in comparison to x-risk, or the cost-effectiveness of different strategies for pursuing them. Indeed, it is unclear how one could come up with such numbers even in theory, since the mechanisms behind such changes causing long-run improved outcomes remain unspecified. Making today's society a nicer place to live is likely worthwhile for all kinds of reasons, but expecting it to have direct influence on the future of a billion years seems absurd. Ancient Minoans from merely 3,500 years ago apparently lived very nicely, by the standards of their day. What predictable impacts did this have on us?
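As a rough illustration of the kind of calculation in question, here is a minimal sketch of a Bostrom-style expected-value estimate; every number is a placeholder chosen purely for illustration, not an estimate anyone is endorsing.

```python
# Minimal sketch of a Bostrom-style expected-value estimate for x-risk reduction.
# Every number below is a placeholder for illustration only; the point is just that
# each input can be argued about to within an order of magnitude or two.

future_lives = 1e16                 # potential future lives if no existential catastrophe (placeholder)
risk_reduction_per_program = 1e-4   # assumed absolute drop in extinction probability from one program (placeholder)
program_cost_dollars = 100e6        # assumed cost of that program (placeholder)

expected_lives_saved = future_lives * risk_reduction_per_program
cost_per_expected_life = program_cost_dollars / expected_lives_saved

print(f"Expected lives saved: {expected_lives_saved:.1e}")
print(f"Cost per expected life: ${cost_per_expected_life:.6f}")
```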

Furthermore, pointing to "political advocacy" as the first thing on the to-do list seems highly suspicious as a signal of bad reasoning somewhere, sorta like learning that your new business partner has offices only in Nigeria. Humans are biased to make everything seem like it's about modern-day politics, even when it's obviously irrelevant, and Cthulhu knows it would be difficult to find any predictable effects of eg. Old Kingdom Egypt dynastic struggles on life now. Political advocacy is also very unlikely to be a low-hanging-fruit area, as huge amounts of human and social capital already go into it, and so the effect of a marginal contribution by any of us is tiny.

Comment by alyssavance on A Proposed Adjustment to the Astronomical Waste Argument · 2013-05-27T19:11:48.645Z · score: 13 (13 votes) · LW · GW

The main reason to focus on existential risk generally, and human extinction in particular, is that anything else about posthuman society can be modified by the posthumans (who will be far smarter and more knowledgeable than us) if desired, while extinction can obviously never be undone. For example, any modification to the English language, the American political system, the New York Subway or the Islamic religion will almost certainly be moot in five thousand years, just as changes to Old Kingdom Egypt are moot to us now.

The only exception would be if the changes to post-human society are self-reinforcing, like a tyrannical constitution which is enforced by unbeatable strong nanotech for eternity. However, by Bostrom's definition, such a self-reinforcing black hole would be an existential risk.

Are there any examples of changes to post-human society which a) cannot ever be altered by that society, even when alteration is a good idea, b) represent a significant utility loss, even compared to total extinction, c) are not themselves total or near-total extinction (and are thus not existential risks), and d) we have an ability to predictably effect at least on par with our ability to predictably prevent x-risk? I can't think of any, and this post doesn't provide any examples.

Comment by alyssavance on Why AI may not foom · 2013-03-23T23:58:57.816Z · score: 1 (23 votes) · LW · GW

A handful of the many, many problems here:

  • It would be trivial for even a Watson-level AI, specialized to the task, to hack into pretty much every existing computer system; almost all software is full of holes and is routinely hacked by bacterium-complexity viruses

  • "The world's AI researchers" aren't remotely close to a single entity working towards a single goal; a human (appropriately trained) is much more like that than Apple, which is much more like that than the US government, which is much more like that than a nebulous cluster of people who sometimes kinda know each other

  • Human abilities and AI abilities are not "equivalent"; even if their medians are the same, AIs will be much stronger in some areas (eg. arithmetic, to pick an obvious one); AIs have no particular need for our level of visual modeling or face recognition, but will have other strengths, both obvious and not

  • There is already a huge body of literature, formal and informal, on when humans use System 1 vs. System 2 reasoning

  • A huge amount of progress has been made in compilers, in terms of designing languages that implement powerful features in reasonable amounts of computing time; just try taking any modern Python or Ruby or C++ program and porting it to Altair BASIC

  • Large sections of the economy are already being monopolized by AI (Google is the most obvious example)

I'm not going to bother going farther, as in previous conversations you haven't updated your position at all (http://lesswrong.com/lw/i9/the_importance_of_saying_oops/) regardless of how much evidence I've given you.

Comment by alyssavance on MetaMed: Evidence-Based Healthcare · 2013-03-05T20:59:33.547Z · score: 33 (33 votes) · LW · GW

Clients are free to publish whatever they like, but we are very strict about patient confidentiality, and do not release any patient information without express written consent.

What Is Optimal Philanthropy?

2012-07-12T00:17:00.016Z · score: 24 (33 votes)

Advice On Getting A Software Job

2012-07-09T18:52:42.271Z · score: 23 (28 votes)

Negative and Positive Selection

2012-07-06T01:34:25.968Z · score: 77 (82 votes)

New York Less Wrong: Expansion Plans

2012-07-01T01:20:09.534Z · score: 13 (14 votes)

Meetup : A Game of Nomic

2012-06-29T00:49:38.820Z · score: 2 (3 votes)

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T02:04:04.715Z · score: -1 (5 votes) · LW · GW

I would agree if I were going to spend a lot of hours on this, but I unfortunately don't have that kind of time.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T02:02:27.664Z · score: 1 (3 votes) · LW · GW

What would you propose as an alternative? LW (to my knowledge) doesn't support polls natively, and using an external site would hugely cut response rate.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:39:46.256Z · score: 39 (43 votes) · LW · GW

Vote up this comment if you would be most likely to read a post on Less Wrong or another friendly blog.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:39:27.088Z · score: 2 (6 votes) · LW · GW

Vote up this comment if you would be most likely to read a book chapter, available both on Kindle and in physical book form.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:39:18.642Z · score: 0 (2 votes) · LW · GW

Vote up this comment if you would be most likely to read a mailing list post, made available through a public archive.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:39:11.115Z · score: 15 (17 votes) · LW · GW

Vote up this comment if you would be most likely to read an academic paper, downloadable over the Internet as a PDF.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:39:03.151Z · score: 6 (8 votes) · LW · GW

Vote up this comment if you would be most likely to read a static HTML page on the Singularity Institute's website.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:38:44.759Z · score: 0 (4 votes) · LW · GW

Vote up this comment if you would be most likely to read a page on a Singularity Institute or Less Wrong wiki.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:38:36.692Z · score: 3 (7 votes) · LW · GW

Vote up this comment if you would be most likely to read a speech, downloadable as an audio file.

Comment by alyssavance on What Would You Like To Read? A Quick Poll · 2012-06-21T00:38:28.275Z · score: -5 (7 votes) · LW · GW

Vote up this comment if you would be most likely to read a PowerPoint, or other presentation format.

What Would You Like To Read? A Quick Poll

2012-06-21T00:38:09.212Z · score: 0 (15 votes)

Comment by alyssavance on Why Academic Papers Are A Terrible Discussion Forum · 2012-06-20T20:03:00.839Z · score: 3 (5 votes) · LW · GW

Re-replying:

  • For people who "are extremely busy, and they use 'Did they bother to pass peer review?' as a filter for what they choose to read", which specific examples are you thinking of, and how many of them have become nontrivial members of our community, or helped us out in nontrivial ways?

  • I'm sure there are people who a) are very smart, b) look impressive on paper, who c) we've contacted about FAI research, and d) have said "I'm not going to pay attention, since this isn't peer reviewed" (or some equivalent). However, I think that for most of those people, that isn't their true rejection (http://lesswrong.com/lw/wj/is_that_your_true_rejection/), and they aren't going to take us seriously anyway. But I could be wrong - what evidence do you have in mind?

  • A lot of your points are criticisms of blog posts, like "a lot of them don't have citations", or "a lot of them are poorly organized". These are true in many cases. However, if SIAI is considering whether to publish some given idea in paper or blog post form, they could simply spend the (fairly small) effort to write a blog post which was well organized and had citations, thereby making these problems moot.

  • Journal editors obviously aren't perfectly analogous to mob bosses. However, I've heard many stories from academics of authors spending huge amounts of time and effort trying to get stuff published. In the most recent case, which I discussed with a grad student just a few hours ago, it took hundreds of hours, over a full year. If it's usually easy to get around that sort of thing, by just publishing in a different journal, why don't more academics do so?

Comment by alyssavance on Why Academic Papers Are A Terrible Discussion Forum · 2012-06-20T19:11:35.522Z · score: 2 (2 votes) · LW · GW

Not in the mathematical sense, but it's a difference of degree.

Comment by alyssavance on Why Academic Papers Are A Terrible Discussion Forum · 2012-06-20T19:10:06.372Z · score: 4 (4 votes) · LW · GW

Hi Luke! Thanks for replying. Quick counterpoints:

  • Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?

  • If in-person conversations (despite all their limitations) are still the much preferred way to discuss things, instead of papers, that's evidence in favor of papers being bad. (It's also evidence of SIAI being effective, which is great, but that isn't the point under discussion.) If papers were a good discussion forum, there'd be fewer conversations and more papers.

  • If, as you say, the main audience for papers written by SIAI is through SIAI's website and not through the journals themselves, why spend the time and expense and hassle to write them up in journal form? Why not just publish them directly on the site, in (probably) a much more readable format?

  • The problem with conformity in academia isn't that it's impossible to find someplace to publish. You can always find somewhere, given enough effort. The problem is that a) it restricts the sorts of things you can say, b) restricts you, in many cases, to an awkward way of wording things (which I believe you've written about at http://lesswrong.com/lw/4r1/how_siai_could_publish_in_mainstream_cognitive/), and c) it makes academia a less fertile ground for recruiting people. Those are probably in addition to other problems.

  • I agree that we care more about prestige within academia than we do about prestige in almost all similarly sized groups. However, it seems fairly clear that we aren't going to have that much prestige in academia anyway, given that the main prestige mechanism is elite university affiliations, and most of us don't have those.

  • Which people have come through Eliezer and Bostrom's papers? (That isn't a rhetorical question; given how large our community is compared to Dunbar's number, it's likely there is someone and it's also likely I've missed them, and they might be really cool people to know.)

  • Using my own personal experiences is generalizing from a single dataset, and that's indeed biased in some ways. However, it's very far from generalizing from a single example; it's generalizing from the many thousands of arguments that I've read and accepted at some point in the past. It's still obviously better to use multiple datasets, if you can get them... but in this case they're difficult to get, because it's hard to know where your friends got all their beliefs.

  • Sure, it's easier to get people to read a single paper than all of the Sequences. But that's a totally unfair comparison: the Sequences are much, much longer, and it's always easier to read something shorter than something longer. How hard would it be to get someone to read a paper, vs. a single Sequence post of equal length, or a bunch of Sequence posts that sum to an equal length?

  • If all new areas of research are developed through in-person conversations and mailing lists, that doesn't imply that papers are a good way to do FAI research; it implies that papers are a bad way to do all those other kinds of research. If what you say is true, then my argument equally well applies to those fields too.

  • Of course, there are some instances of academic moderation being net good rather than net bad. However, to quote one of your earlier arguments, "don't generalize from one example". I'm sure that there are some well-moderated journals, just as I'm sure there are Mafia bosses who are really nice helpful guys. However, that doesn't imply that hanging out with Mafia bosses is a good idea.

Why Academic Papers Are A Terrible Discussion Forum

2012-06-20T18:15:57.377Z · score: 29 (42 votes)

Ideas for rationalist meetup topics

2012-01-12T05:04:03.356Z · score: 15 (18 votes)

Quantified Health Prize Deadline Extended

2012-01-05T09:28:09.184Z · score: 3 (10 votes)

Comment by alyssavance on New Haven / Southern Connecticut Meetup, Wednesday Apr. 27th 6 PM · 2011-04-25T17:05:16.495Z · score: 0 (0 votes) · LW · GW

Cool. This is a sub-optimal alternative compared to driving, but there is frequent Greyhound service between New Haven and Hartford.

New Haven / Southern Connecticut Meetup, Wednesday Apr. 27th 6 PM

2011-04-25T04:00:07.525Z · score: 5 (6 votes)

Levels of Action

2011-04-14T00:18:46.695Z · score: 116 (113 votes)

Vassar talk in New Haven, Sunday 2/27

2011-02-27T02:49:28.919Z · score: 2 (5 votes)

Vassar talk in New Haven, Sunday 2/27

2011-02-26T20:57:27.652Z · score: 3 (4 votes)

Comment by alyssavance on Science: Do It Yourself · 2011-02-13T16:39:44.890Z · score: 3 (5 votes) · LW · GW

I think Hitler was more intelligent than average, and a great deal more instrumentally rational. He just didn't have more accurate beliefs about the world.

Comment by alyssavance on Science: Do It Yourself · 2011-02-13T15:43:51.101Z · score: 5 (7 votes) · LW · GW

"At this point, I have to object... the one thing that guarantees lasting fame to a scientist is to successfully overthrow a widely believed theory."

Yes, but essentially no one wants to be famous in fifty years at the expense of having their funding pulled tomorrow.

Comment by alyssavance on Science: Do It Yourself · 2011-02-13T15:42:28.275Z · score: 3 (5 votes) · LW · GW

What's your evidence? Nazi Germany's government was tremendously dysfunctional, and the Nazis believed many things considered insane even by the average Joe's lowly standards, like "mass-murder is a good thing". Hitler himself was sufficiently dysfunctional that he pretty much failed at everything before going into politics.

Comment by alyssavance on Science: Do It Yourself · 2011-02-13T15:39:00.179Z · score: 7 (9 votes) · LW · GW

"DIY science seems to ignore prior work."

Yes, because most of the 'prior work' that floats around on the Internet and in books is terrible, and it's a lot more difficult to figure out which parts are good than to just do simple empiricism yourself.

"You claim using google is a medieval way of doing science."

The important thing isn't the act of using Google (a tool), but where you're getting your information from. If you simply Google X and click on the first result, this is basically equivalent to just asking the person who wrote the web page what they think about X. The distinction is:

medievalism: go to someone who seems like they're an expert, and ask them about X

rationalism: look at the data, see what the data says about X

This also applies if you're reading books or whatever.

"Nowadays scientists make progress by building on the progress of others,"

You don't provide evidence that this actually works well. In physics this seems to genuinely be the case, and in a few other sciences to varying degrees, but for the sorts of questions I'm considering here the "progress of others" is largely gibberish.

Science: Do It Yourself

2011-02-13T04:47:46.252Z · score: 64 (69 votes)

Rally to Restore Rationality

2010-10-18T18:41:33.876Z · score: 5 (7 votes)

Call for Volunteers

2010-10-02T17:43:04.690Z · score: 3 (4 votes)

Reminder: Weekly LW meetings in NYC

2010-09-12T20:44:50.435Z · score: 6 (7 votes)

Comment by alyssavance on MIRI Call for Volunteers · 2010-07-15T04:46:37.120Z · score: 4 (4 votes) · LW · GW

I did have marketing in mind, yes, but the first paragraph also serves an obviously useful purpose: it declares what audience we are trying to address. People who are not interested in existential risks or the Visiting Fellows program probably won't be as interested in volunteering, and it saves everyone a lot of time to state that up front.

Comment by alyssavance on MIRI Call for Volunteers · 2010-07-13T17:39:45.785Z · score: 0 (0 votes) · LW · GW

Possibly. Contact Aruna Vassar (aruna.vassar@intelligence.org) for more info.

Comment by alyssavance on MIRI Call for Volunteers · 2010-07-13T17:39:18.016Z · score: 0 (0 votes) · LW · GW

Did you also downvote http://lesswrong.com/lw/29c/be_a_visiting_fellow_at_the_singularity_institute/ ?

Comment by alyssavance on MIRI Call for Volunteers · 2010-07-13T05:39:28.337Z · score: 1 (1 votes) · LW · GW

Thanks!

People who already have something specific that they think they should work on should contact me (Thomas McCabe) at tom.mccabe@intelligence.org.

MIRI Call for Volunteers

2010-07-12T23:49:57.918Z · score: 12 (15 votes)

Comment by alyssavance on How Irrationality Can Win: The Power of Group Cohesion · 2010-07-09T06:48:05.288Z · score: 1 (1 votes) · LW · GW

Being able to make collective choices at all seems to be an obvious benefit, even given pure selfishness. To consider a simpler example, imagine a group of twelve purely selfish soldiers. Would these soldiers agree to appoint a lieutenant whose orders they would obey? Well, if they do appoint a lieutenant, there's a chance that the lieutenant will order them to do something dangerous. But if they don't appoint a lieutenant, they won't be able to fight effectively and will all be killed by the enemy anyway. The selfish choice is clearly to appoint the lieutenant.
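A minimal sketch of that expected-survival comparison, with made-up probabilities purely for illustration:

```python
# Toy expected-survival comparison for the twelve purely selfish soldiers.
# All probabilities are made up for illustration; only the structure matters:
# coordination has a cost (risk of being given dangerous orders) but a larger
# benefit (not losing the fight outright).

p_survive_without_lieutenant = 0.2      # uncoordinated squad, likely overrun (placeholder)
p_survive_battle_with_lieutenant = 0.7  # coordinated squad fares much better (placeholder)
p_given_dangerous_order = 0.3           # chance the lieutenant picks *you* for a risky task (placeholder)
p_survive_dangerous_order = 0.5         # your survival odds on that task (placeholder)

ev_no_lieutenant = p_survive_without_lieutenant
ev_lieutenant = p_survive_battle_with_lieutenant * (
    p_given_dangerous_order * p_survive_dangerous_order + (1 - p_given_dangerous_order)
)

print(f"Expected survival without a lieutenant: {ev_no_lieutenant:.2f}")
print(f"Expected survival with a lieutenant:    {ev_lieutenant:.2f}")  # higher with these numbers
```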

Comment by alyssavance on Singularity Summit 2010 on Aug. 14-15 in San Francisco · 2010-06-02T17:55:54.437Z · score: 8 (8 votes) · LW · GW

"People seem to make a leap from "This is 'bounded'" to "The bound must be a reasonable-looking quantity on the scale I'm used to." The power output of a supernova is 'bounded', but I wouldn't advise trying to shield yourself from one with a flame-retardant Nomex jumpsuit." - http://lesswrong.com/lw/qk/that_alien_message/

Comment by alyssavance on Singularity Summit 2010 on Aug. 14-15 in San Francisco · 2010-06-02T16:47:21.519Z · score: 4 (4 votes) · LW · GW

See http://yudkowsky.net/singularity/power .

Singularity Summit 2010 on Aug. 14-15 in San Francisco

2010-06-02T06:01:26.276Z · score: 7 (8 votes)

Comment by alyssavance on Abnormal Cryonics · 2010-05-26T17:58:55.485Z · score: 2 (2 votes) · LW · GW

"You can't assign a life insurance policy to a non-profit organization?"

You can, but it probably won't pay out until relatively far into the future, and because of SIAI's high discount rate, money in the far future isn't worth much.

"Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics?"

Yes. The Cryonics Institute has been in operation since 1976 (35 years) and is very financially stable.

"Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post?"

Probably not, he just wasn't being precise. SIAI's financial data for 2008 is available here (guidestar.org) for anyone who doesn't believe me.

Comment by alyssavance on Abnormal Cryonics · 2010-05-26T17:39:04.016Z · score: 6 (8 votes) · LW · GW

Cryonics Institute is a factor of 5 cheaper than that, the SIAI budget is larger than that, and SIAI cannot be funded through life insurance while cryonics can. And most people who read this aren't actually substantial SIAI donors.

Comment by alyssavance on Abnormal Cryonics · 2010-05-26T17:37:26.764Z · score: 6 (10 votes) · LW · GW

I object to many of your points, though I express slight agreement with your main thesis (that cryonics is not rational all of the time).

"Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are."

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death". This seems to me absurd. For more, read eg. http://yudkowsky.net/other/yehuda .

"If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying:"

If you assume the median date for Singularity is 2050, Wolfram Alpha says I have a 13% chance of dying before then (cite: http://www.wolframalpha.com/input/?i=life+expectancy+18yo+male), and I'm only eighteen.
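For anyone curious where a number like that comes from, here is a minimal sketch that reproduces the same kind of estimate from a Gompertz-Makeham mortality model; the parameters are illustrative round values, not the actuarial data Wolfram Alpha actually uses, so it only lands in the same ballpark as the 13% figure.

```python
import math

# Rough sanity check of a "chance of dying before 2050" figure using a
# Gompertz-Makeham mortality model: hazard(t) = lam + a * exp(b * t).
# The parameters are illustrative round values, not the actuarial data
# Wolfram Alpha uses, so expect only a ballpark match with the 13% figure.

def prob_death_between(age_now, age_then, lam=1.3e-3, a=4e-5, b=0.09):
    """P(die before age_then | alive at age_now) under Gompertz-Makeham."""
    # Cumulative hazard: integral of lam + a*exp(b*t) from age_now to age_then.
    cum_hazard = lam * (age_then - age_now) + (a / b) * (
        math.exp(b * age_then) - math.exp(b * age_now)
    )
    return 1.0 - math.exp(-cum_hazard)

# Written in 2010 at age 18; by 2050 that person would be 58.
print(f"{prob_death_between(18, 58):.0%}")  # roughly 12% with these illustrative parameters
```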

"A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI3 than by spending that money on pursuing a small chance of eternal life."

If you already donate more than 5% of your income or time to one of these organizations, I'll buy that. Otherwise (and this "otherwise" will apply to the vast majority of LW commenters), it's invalid. You can't say "alternative X would be better than Y, therefore we shouldn't do Y" if you're not actually doing X.

"Calling non-cryonauts irrational is not productive nor conducive to fostering a good epistemic atmosphere"

Why? Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn't signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.

"People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious "

This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a "fringe subject" seriously are two entirely separate and to some extent mutually exclusive goals. Maintaining high epistemic standards internally requires that you call people on it if you think they are being stupid. Becoming friends with a person who sees you as a kook requires not telling them about every time they're being stupid.

"Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone."

If people are having kids who they can't afford (cryonics is extremely cheap; someone who can't afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it's a stupid decision is a separate question, but it probably is.

"One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways."

99% of the world's population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can't see a credible argument for why it wouldn't be the case a large percentage of the time.

Comment by alyssavance on The Tragedy of the Social Epistemology Commons · 2010-05-21T20:34:20.368Z · score: 24 (24 votes) · LW · GW

Although I largely agree, there is little actual experimental support for Maslow's theory. He mostly just made it up. See http://lesswrong.com/lw/2j/schools_proliferating_without_evidence/ . See also eg.:

"The uncritical acceptance of Maslow's need hierarchy theory despite the lack of empirical evidence is discussed and the need for a review of recent empirical evidence is emphasized. A review of ten factor-analytic and three ranking studies testing Maslow's theory showed only partial support for the concept of need hierarchy. A large number of cross-sectional studies showed no clear evidence for Maslow's deprivation/domination proposition except with regard to self-actualization. Longitudinal studies testing Maslow's gratification/activation proposition showed no support, and the limited support received from cross-sectional studies is questionable due to numerous measurement problems."

from "Maslow reconsidered: A review of research on the need hierarchy theory", by Wahba and Bridwell.

Comment by alyssavance on What is bunk? · 2010-05-08T21:20:16.721Z · score: 6 (6 votes) · LW · GW

David Chalmers, by the way, has come out pretty strongly in support of us. See The Singularity: A Philosophical Analysis (http://consc.net/papers/singularity.pdf).

Comment by alyssavance on What is bunk? · 2010-05-08T21:15:53.390Z · score: 5 (5 votes) · LW · GW

"How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?"

This is not what's actually going on. To quote Eliezer:

"With regard to academia 'showing little interest' in my work - you have a rather idealized view of academia if you think that they descend on every new idea in existence to approve or disapprove it. It takes a tremendous amount of work to get academia to notice something at all - you have to publish article after article, write commentaries on other people's work from within your reference frame so they notice you, go to conferences and promote your idea, et cetera. Saying that academia has 'shown little interest' implies that I put in that work, and they weren't interested. This is not so. I haven't yet taken my case to academia. And they have not said anything about it, or even noticed I exist, one way or the other. A few academics such as Nick Bostrom and Ben Goertzel have quoted me in their papers and invited me to contribute book chapters - that's about it."

(http://en.wikipedia.org/wiki/Talk:Eliezer_Yudkowsky)

Comment by alyssavance on MathOverflow as an example for LessWrong · 2010-04-27T20:31:30.153Z · score: 2 (2 votes) · LW · GW

The Sequences, of course.

Comment by alyssavance on Attention Less Wrong: We need an FAQ · 2010-04-27T15:38:34.278Z · score: 2 (2 votes) · LW · GW

Great idea, Kevin. I would also suggest adding the FAQ to the About page here: http://lesswrong.com/lw/1/about_less_wrong/, to allow new users to find it more easily.

Comment by alyssavance on Proposed New Features for Less Wrong · 2010-04-27T03:41:16.645Z · score: 0 (0 votes) · LW · GW

Great. Please vote in the polls at http://www.misterpoll.com/polls/482996/ and http://lesswrong.com/lw/265/proposed_new_features_for_less_wrong/1xde so your opinion can be counted.

Comment by alyssavance on Proposed New Features for Less Wrong · 2010-04-27T02:57:02.234Z · score: 2 (2 votes) · LW · GW

Voting based on what other people have commented gives unfairly high weight to the opinions of those who saw the post first.

Comment by alyssavance on Proposed New Features for Less Wrong · 2010-04-27T02:48:02.787Z · score: 0 (2 votes) · LW · GW

Upvote this post if you support keeping it at 20.

Comment by alyssavance on Proposed New Features for Less Wrong · 2010-04-27T02:47:47.831Z · score: 16 (16 votes) · LW · GW

Upvote this post if you support moving it back to 50.

Comment by alyssavance on Proposed New Features for Less Wrong · 2010-04-27T02:47:32.182Z · score: 0 (0 votes) · LW · GW

It was actually raised to 50, then changed back. I'll do a quick poll:

Comment by alyssavance on Proposed New Features for Less Wrong · 2010-04-27T02:21:41.541Z · score: 4 (4 votes) · LW · GW

It's supposed to allow for discussion of things other than rationality (like meetups, existential risks, transhumanism, self-experimentation, etc.), but still stuff with intellectual content, IE not lolcats or football or whatever.

Proposed New Features for Less Wrong

2010-04-27T01:10:39.138Z · score: 7 (10 votes)

Announcing the Less Wrong Sub-Reddit

2010-04-02T01:17:44.603Z · score: 9 (18 votes)

The Graviton as Aether

2010-03-04T22:13:59.800Z · score: 13 (40 votes)

On the Power of Intelligence and Rationality

2009-12-23T10:49:44.496Z · score: 15 (27 votes)

Test Your Calibration!

2009-11-11T22:03:38.439Z · score: 21 (21 votes)

Arrow's Theorem is a Lie

2009-10-24T20:46:07.942Z · score: 27 (44 votes)