Posts

Grass Valley – ACX Meetups Everywhere Spring 2024 2024-03-30T11:24:19.569Z
Grass Valley, CA, ACX Meetup 2023-08-26T00:11:37.273Z
Grass Valley, California, USA – ACX Meetups Everywhere Fall 2023 2023-08-25T23:40:13.125Z
Grass Valley, California, USA – ACX Meetups Everywhere Spring 2023 2023-04-10T22:10:48.767Z
Grass Valley, CA – ACX Meetups Everywhere 2022 2022-08-24T22:58:10.706Z
Prediction Market News/Ticker? 2022-03-08T21:44:20.013Z
Improving on the Karma System 2021-11-14T18:01:30.049Z
Grass Valley, CA – ACX Meetups Everywhere 2021 2021-08-23T08:50:54.835Z
Vegetarianism Ideological Turing Test Results 2015-10-14T00:34:33.898Z
Vegetarian/Omnivore Ideological Turing Test Judging Round! 2015-08-20T01:53:29.016Z
Vegetarianism Ideological Turing Test! 2015-08-09T14:39:48.951Z
Ideological Turing Test Domains 2015-08-02T13:45:46.844Z
Meetup : Cleveland Ohio Meetup 2013-07-23T18:51:00.380Z
Meetup : Less Wrong: Cleveland 2012-12-06T03:27:57.314Z

Comments

Comment by Raelifin on Grass Valley – ACX Meetups Everywhere Spring 2024 · 2024-04-06T14:43:24.627Z · LW · GW
Comment by Raelifin on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-04T23:47:14.766Z · LW · GW

Is there a minimal thing that Claude could do which would change your mind about whether it’s conscious?

Edit: My question was originally aimed at Richard, but I like Mikhail’s answer.

Comment by Raelifin on Prediction Market News/Ticker? · 2022-03-09T05:27:48.419Z · LW · GW

Thanks! The creators also apparently have a substack: https://forecasting.substack.com/

Comment by Raelifin on Long covid: probably worth avoiding—some considerations · 2022-01-17T05:34:43.469Z · LW · GW

Value of information

Comment by Raelifin on Improving on the Karma System · 2021-11-16T19:46:27.298Z · LW · GW

If you have multiple quality metrics then you need a way to aggregate them (barring more radical proposals). Let’s say you sum them (the specifics of how they combine are irrelevant here). What you’ve created is essentially a 25-star system with a more explicit breakdown. This is essentially what I was suggesting: rate each post on 5 dimensions from 0 to 2, add the values together, and divide by two (min 0.5), and you have my proposed system. Perhaps you think the interface should clarify the distinct dimensions of quality, but I think UI simplicity is pretty important, and I’m wary of requiring 5+ clicks to rate a post.
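To make the equivalence concrete, here's a minimal sketch of that aggregation (the dimension count and scores are purely illustrative, not a worked-out design):

```python
# Illustrative only: five quality dimensions scored 0-2 each, summed and
# halved to land on a familiar 0.5-5 star scale.

def aggregate_rating(dimension_scores):
    """Combine five 0-2 dimension scores into one star rating."""
    assert len(dimension_scores) == 5
    assert all(0 <= s <= 2 for s in dimension_scores)
    return max(sum(dimension_scores) / 2, 0.5)  # floor at half a star

# Example: a post scored 2, 1, 2, 1, 1 across the dimensions -> 3.5 stars
print(aggregate_rating([2, 1, 2, 1, 1]))  # 3.5
```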

I addressed the issue of overcompensating in an edit: if the weighting is a median then users are incentivized to select their true rating. Good thought. ☺️

Thanks for your support and feedback!

Comment by Raelifin on Improving on the Karma System · 2021-11-16T19:34:03.744Z · LW · GW

I agree that there are benefits to hiding karma, but it seems like there are two major costs. The first is reduced transparency; I claim that people like knowing why something is selected for them, and if karma becomes invisible that information is hidden in a way people won’t like. (One could argue it should be hidden despite people’s desires, but that seems less obvious.) The other major reason is one cited by Habryka: creating common knowledge. Visible karma scores help people gain a shared understanding of what’s valued across the site. Rankings aren’t sufficient for this, because they can’t distinguish relative quality from absolute quality (e.g., I’m much more likely to read a post with 200 karma than one with 50, even if the former is ranked lower due to staleness).

Comment by Raelifin on Improving on the Karma System · 2021-11-14T23:00:56.222Z · LW · GW

I suggested the 5-star interface because it's the most common way of giving things scores on a fixed scale. From my perspective, we could just as easily use a slider or a number between 0 and 100. I think we want to err towards intuitive/easy interfaces even if it means porting over some bad intuitions from Amazon or whatever, but I'm not confident on this point.

I toyed with the idea of having a strong-bet option, which lets a user put down a stronger QJR bet than normal, and thus influence the community rating more than they would by default (albeit exposing them to higher risk). I mainly avoided it in the above post because it seemed like unnecessary complexity, although I appreciate the point about people overcompensating in order to have more influence.

One idea that I just had is that instead of having the community rating set by the weighted mean, perhaps it should be the weighted median. The effect would be that voting 5 stars on a 2-star post has exactly the same amount of sway as voting 3.5 stars, right up until the 3.5 line is crossed. I really like this idea, and will edit the post body to mention it. Thanks!
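As a toy illustration of the difference (the ratings and weights below are made up): with a weighted median, an extreme vote pulls the community rating no harder than a moderate above-median vote does.

```python
# Made-up numbers: each vote is (rating, weight); the community rating is the
# weighted median, so an extreme vote has no more pull than a moderate one.

def weighted_median(votes):
    """votes: list of (rating, weight) pairs; returns the weighted median rating."""
    votes = sorted(votes)
    total = sum(w for _, w in votes)
    running = 0.0
    for rating, weight in votes:
        running += weight
        if running >= total / 2:
            return rating

community = [(2.0, 1.0)] * 10  # ten weight-1 votes at 2 stars
print(weighted_median(community + [(5.0, 3.0)]))  # strong 5-star vote -> still 2.0
print(weighted_median(community + [(3.5, 3.0)]))  # strong 3.5-star vote -> still 2.0
```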

Comment by Raelifin on Improving on the Karma System · 2021-11-14T22:50:31.774Z · LW · GW

I agree with the expectation that many posts/comments would be nearly indistinguishable on a five-star scale. I'm not sure there's a way around this while keeping most of the desirable properties of having a range of options, though perhaps increasing it from 10 options (half-stars) to 14 or 18 options would help.

My basic thought is that if I can see a bunch of 4.5-star posts, I don't really need the signal as to whether one is 4.3 stars vs 4.7 stars, even if 4.7 is much harder to achieve. I, as a reader, mostly just want a filter for bad/mediocre posts, and the high end of the scale is just "stuff I want to read". If I really want to measure the difference, I can still see which posts are more uncontroversially good, and which have attracted more gratitude.

I'm not sure how a power-law system would work. It seems like if there's still a fixed scale, you're marking down a number of zeroes instead of a number of stars. ...Unless you're just suggesting linear voting (ie karma)?

Comment by Raelifin on Improving on the Karma System · 2021-11-14T22:38:12.524Z · LW · GW

Ah! This looks good! I'm excited to try it out.

Comment by Raelifin on Improving on the Karma System · 2021-11-14T22:36:19.672Z · LW · GW

Yep. I'm aware of that. Our karma system is better in that regard, and I should have mentioned that.

Comment by Raelifin on Speaking of Stag Hunts · 2021-11-08T02:52:29.232Z · LW · GW

Nice. Thank you. How would you feel about me writing a top-level post reconsidering alternative systems and brainstorming/discussing solutions to the problems you raised?

Comment by Raelifin on Speaking of Stag Hunts · 2021-11-07T23:38:05.906Z · LW · GW

I also want to note that this proposal isn't mutually exclusive with other ideas, including other karma systems. It seems fine to have an additional indicator of popularity that is distinct from quality. More to my liking would be a button that simply marks that you found a post interesting and/or expresses gratitude towards the writer, without making a statement about how bulletproof the reasoning was. (This might help capture the essence of Rule Thinkers In, Not Out and reward newbies for posting.)

Comment by Raelifin on Speaking of Stag Hunts · 2021-11-07T23:37:50.840Z · LW · GW

One obvious flaw with this proposal is that the quality-indicator would only be a measure of the expected rating by a moderator. But who says that our moderators are the best judges of quality? Like, the scheme is ripe for corruption, and simply pushes the popularity contest one level up to a small group of elites.

One answer is that if you don't like the mods, you can go somewhere else. Vote with your feet, etc.

A more turtles-all-the-way-down answer is that the stakeholders of LW (the users, and possibly influential community members/investors?) agree on an aggregate set of metrics for how well the moderators are collectively capturing quality. Then, for each unit of time (eg year) and each potential moderator, set up a conditional prediction market with real dollars on whether that person being a moderator causes the metrics to go up/down compared to the previous time unit. Hire the ones that people predict will be best for the site.

Comment by Raelifin on Speaking of Stag Hunts · 2021-11-07T23:37:30.543Z · LW · GW

To my mind the primary features of this system that bear on Duncan's top-level post are:

  • High-reputation judges can confidently set the quality signal for a piece of writing, even if they're in the minority. The truth is not a popularity contest, even when it comes to quality.
  • The emphasis on betting means that people who "upvote" low-quality posts or "downvote" high-quality ones are punished, making "this made me feel things, and so I'm going to bandwagon" a dangerous mental move. And people who make this sort of move would be efficiently sidelined.

In concert, I expect that it would be much easier to bring concentrated force down on low-quality bits of writing. That would, in turn, I think, make the quality price/signal a much more meaningful piece of information than the current karma score, which, as others have noted, is overloaded as a measure.

Comment by Raelifin on Speaking of Stag Hunts · 2021-11-07T23:37:01.978Z · LW · GW

First of all, thank you, Duncan, for this post. I feel like it captures important perspectives that I've had, and problems that I can see and puts them together in a pretty good way. (I also share your perspective that the post Could Be Better in several ways, but I respect you not letting the perfect be the enemy of the good.)

I find myself irritated right now (bothered, not angry) that our community's primary method of highlighting quality writing is by karma-voting. It's a similar kind of feeling to living in a democracy--yes, there are lots of systems that are worse, but really? Is this really the best we can do? (No particular shade on Ruby or the Lightcone team--making things is hard and I'm certainly glad LW exists and is as good as it is.)

Like, I think I have an idea that might make things substantially better that's not terrible: make the standard signal for quality be a high price on a quality-arbitrated betting market. This is essentially applying the concept of Futarchy to internet forums (h/t ACX and Hanson). (If this is familiar to you, dear reader, feel free to skip to responses to this comment, where I talk about features of this proposal and other ideas.) Here's how I could see it working:

When a user makes a post or comment or whatever, they also name a number between 0 and 100. This number is essentially a self-assessment of quality, where 0 means "I know this is flagrant trolling" and 100 means "This is obviously something that any interested party should read". As an example, let's say that I assign this comment an 80.

Now let’s say that you are reading and you see my comment and think “An 80? Bah! More like a 60!” You can then “downvote” the comment, which nudges the number down, or enter your own (numeric) estimate, which dramatically shifts the value towards your estimate (similar to a “strong” vote). Behind the scenes, the site tracks the disagreement. Each user is essentially making a bet around the true value of the post’s quality. (The downvote is a bet that it’s “less than 80”.) What are they betting? Reputation as judges! New users start with 0 judge-of-quality reputation, unless they get existing users to vouch for them and donate a bit of reputation. (We can call this “karma,” but I think it is very important to distinguish good-judge karma from high-quality-writing karma!) When voting/betting on a post/comment, they stake some of that reputation (maybe 10%, up to a cap of 50? Just making up numbers here for the sake of clarity; I’d suggest actually running experiments).

Then, you have the site randomly sample pieces of writing, weighting the sampling towards those that are most controversial (ie have the most reputation on the line). Have the site assign these pieces of writing to moderators whose sole job is to study that piece of writing and the surrounding context and to score its quality. (Perhaps you want multiple moderators. Perhaps there should be appeals, in the form of people betting against the value set by the moderator. Etc. More implementation details are needed.) That judgment then resolves all the bets, and results in users gaining/losing reputation.

Users who run out of reputation can’t actually bet, and so lose the ability to influence the quality-indicator. However, all people who place bets (or try to place bets when at zero/negative reputation) are subsidized a small amount of reputation just for participating. (This inflation is a feature, encouraging participation in the site.) Thus, even a new user without any vouch can build up the ability to influence the signal by participating and consistently being right.
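To make the moving parts concrete, here is a toy sketch of the bet/resolution loop. Everything in it (the 10% stake, the payout rule, the participation subsidy) is a hypothetical placeholder, not a worked-out design:

```python
# Toy sketch only: stakes, payouts, and subsidies are placeholder numbers.
from dataclasses import dataclass, field

@dataclass
class Post:
    claimed_quality: float                    # author's 0-100 self-assessment
    bets: list = field(default_factory=list)  # (user, predicted_quality, stake)

@dataclass
class User:
    name: str
    judge_reputation: float = 10.0

def place_bet(user, post, predicted_quality):
    """Stake 10% of reputation (capped at 50) on a quality estimate."""
    stake = min(0.10 * user.judge_reputation, 50.0)
    user.judge_reputation -= stake
    post.bets.append((user, predicted_quality, stake))

def resolve(post, moderator_score):
    """A sampled moderator judgment resolves all bets on the post: bettors
    whose estimate was closer to the judged score than the author's claim
    win back double their stake; the rest lose theirs. Everyone who bet
    also gets a small participation subsidy (the deliberate inflation above)."""
    for user, predicted, stake in post.bets:
        if abs(predicted - moderator_score) < abs(post.claimed_quality - moderator_score):
            user.judge_reputation += 2 * stake
        user.judge_reputation += 0.5

post = Post(claimed_quality=80)
alice = User("alice")
place_bet(alice, post, predicted_quality=60)
resolve(post, moderator_score=55)
print(round(alice.judge_reputation, 2))  # 11.5: alice's bet paid off
```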

Comment by Raelifin on Grass Valley, CA – ACX Meetups Everywhere 2021 · 2021-09-11T20:41:38.795Z · LW · GW

Update: I decided that I like the grass south of the baseball diamond better. Let's meet there.

Comment by Raelifin on Grass Valley, CA – ACX Meetups Everywhere 2021 · 2021-08-24T02:15:31.915Z · LW · GW

Hey all, Max here. I was bad/busy on the weekend when I was supposed to provide a more specific location, so I've updated the what3words to a picnic table near the dog/skate park. I reserve the right to continue to adjust the meetup location in the coming weeks if I find even better places, so be sure to check on the 18th for specifics.

I'm an AI safety researcher and author of Crystal Society. I did a bunch of community leading/organizing in Ohio, including running a rationality dojo. I moved out to the Bay Area in 2016, and to Grass Valley in June. If you feel like introducing yourself in the comments here, please do! (But also no pressure.)

Do people want food? I'll probably make it happen, so if you have preferences, let me know ahead of time by email or by comment here. (No need to request vegetarian options; that's a given.)

Comment by Raelifin on Feedback on LW 2.0 · 2017-10-05T00:10:06.459Z · LW · GW

Issue 2 is about to be fixed: https://github.com/Discordius/Lesswrong2/pull/188

Comment by Raelifin on Marketing Rationality · 2015-11-24T22:02:03.907Z · LW · GW

I picked 7 Habits because it's pretty clearly rationality in my eyes, but is distinctly not LW-style Rationality. Perhaps I should have picked something worse to make my point more clear.

Comment by Raelifin on Marketing Rationality · 2015-11-23T17:39:41.812Z · LW · GW

Ah, perhaps I misunderstood the negative perception. It sounds like you see him as incompetent, and since he's working with a subject that you care about, that registers as disgusting?

I can understand cringing at the content. Some of it registers that way to me, too. I think Gleb's admitted that he's still working to improve. I won't bother copy-pasting the argument that's been made elsewhere on the thread that the target audience has different tastes. It may be the case that InIn's content is garbage.

I guess I just wanted to step in and second jsteinhardt's comment that Gleb is a very growth-oriented and positive person, regardless of whether his writing is good enough.

Comment by Raelifin on Marketing Rationality · 2015-11-23T17:27:49.543Z · LW · GW

I agree! Having good intentions does not imply the action has net benefit. I tried to communicate in my post that I see this as a situation where failure isn't likely to cause harm. Given that it isn't likely to hurt, and it might help, I think it makes sense to support in general.

(To be clear: Just because something is a net positive (in expectation) clearly doesn't imply one ought to invest resources in supporting it. Marginal utility is a thing, and I personally think there are other projects which have higher total expected-utility.)

Comment by Raelifin on Marketing Rationality · 2015-11-23T15:28:27.334Z · LW · GW

Okay, well, it seems like I'm a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus, Ohio and am one of the organizers of the local LW meetup. I've been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I'm trying to be as fair as possible here, you may want to adjust my opinion in light of the obvious factors.

So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it "the purity argument". Should "rationality" try to stay pure by avoiding things like listicles or the phrase "science shows"? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that's nearest them is only marginally better than swill? (<-- That's me trying not to be biased. I don't like everything we've made, but when I'm not trying to counteract my likely biases I do think a lot of it is pretty good.)

Here's my take on it: I don't know. Like query, I don't pretend to be confident one way or the other. I'm not as scared of "horrific long-term negative impact", however. Probably the biggest reason why is that rationality is already tainted! If we back off of the sacred word, I think we can see that the act of improving-how-we-think exists in academia more broadly, self-help, and religion. LessWrong is but a single school (so to speak) of a practice which is at least as old as philosophy.

Now, I think that LW-style rationality is superior to other attempts at flailing at rationality. I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I'd say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist. As Gleb has mentioned elsewhere on the thread, InIn doesn't even use the "rationality" label. I'd argue that the worst thing InIn does to pollute the LW meme-pool is that there are links and references to LW (and plenty of other sources, too).

In other words, I think at worst* InIn is basically just another lame self-help thing that tells people what they want to hear and doesn't actually improve their cognition (a.k.a. the majority of self-help). At best, InIn will out-compete similar things and serve as a funnel which pulls people along the path of rationality, ultimately making the world a nicer, more sane place. Most of my work with InIn has been for personal gain; I'm not a strong believer that it will succeed. What I do think, though, is that there's enough space in the world for the attempt, the goal of raising the sanity waterline is a good one, and rationalists should support the attempt, even if they aren't confident in success, instead of getting swept up in the typical-mind fallacy and ingroup/outgroup and purity biases.

* - Okay, it's not the worst-case scenario. The worst-case scenario is that the presence of InIn aggravates the lords of the matrix into torturing infinite copies of all possible minds for eternity outside of time. :P

(EDIT: If you want more evidence that rationality is already a polluted activity, consider the way in which so many people pattern-match LW as a phyg.)

Comment by Raelifin on Marketing Rationality · 2015-11-23T14:35:40.299Z · LW · GW

I just wanted to interject a comment here as someone who is friends with Gleb in meatspace (we're both organizers of the local meetup). In my experience Gleb is kinda spooky in the way he actually updates his behavior and thoughts in response to information. Like, if he is genuinely convinced that the person who is criticizing him is doing so out of a desire to help make the world a more-sane place (a desire he shares) then he'll treat them like a friend instead of a foe. If he thinks that writing at a lower-level than most rationality content is currently written will help make the world a better place, he'll actually go and do it, even if it feels weird or unpleasant to him.

I'm probably biased in that he's my friend. He certainly struggles with it sometimes, and fails too. Critical scrutiny is important, and I'm really glad that Viliam made this thread, but it kinda breaks my heart that this spirit of actually taking ideas seriously has led to Gleb getting as much hate as it has. If he'd done the status-quo thing and stuck to approved-activities it would've been emotionally easier.

(And yes, Gleb, I know that we're not optimizing for warm-fuzzies. It still sucks sometimes.)

Anyway, I guess I just wanted to put in my two (biased) cents that Gleb's a really cool guy, and any appearance of a status-hungry manipulator is just because he's being agent-y towards good ends and willing to get his hands dirty along the way.

Comment by Raelifin on Vegetarianism Ideological Turing Test Results · 2015-10-16T13:15:05.229Z · LW · GW

Impostor entries were generally more convincing than genuine responses. I chalk this up to impostors trying harder to convince judges.

But who knows? Maybe you were a vegetarian in a past life! ;)

Comment by Raelifin on Vegetarianism Ideological Turing Test Results · 2015-10-15T17:44:37.198Z · LW · GW

You're right, but I'm pretty confident that the difference isn't significant. We should probably see it as evidence that rationalist omnivores are about as capable as rationalist vegetarians.

If we look at average percent of positive predictions (predictions that earn more than 0 points):

Omnivores: 51%

Vegetarians: 46%

If we look at non-negative predictions (counting 50% predictions):

Omnivores: 52%

Vegetarians: 49%

Comment by Raelifin on Vegetarianism Ideological Turing Test Results · 2015-10-14T20:24:23.824Z · LW · GW

As Douglas_Knight points out, it's only 10/12, a probability of ~0.016. In a sample of ~50 we should see about one person at that level of accuracy or inaccuracy, which is exactly what we see. I'm no more inclined to give #14 a medal than I am to call #43 a dunce. See the histogram I stuck on to the end of the post for more intuition about why I see these extreme results as normal.
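For anyone who wants to check the arithmetic, here is a rough sanity check. It assumes a judge making 12 independent guesses at chance; the exact expected count depends on how many guesses each judge actually made:

```python
# Rough sanity check, assuming 12 independent guesses at p = 0.5.
from math import comb

p_exactly_10 = comb(12, 10) / 2**12
print(round(p_exactly_10, 3))  # ~0.016: chance of getting exactly 10 of 12 right

# Expected number of judges, out of roughly 50, who land at least that far
# from chance in either direction (>= 10 right or <= 2 right):
p_extreme = 2 * sum(comb(12, k) for k in range(10, 13)) / 2**12
print(round(50 * p_extreme, 1))  # on the order of one or two people
```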

I absolutely will fess up to exaggerating in that sentence for the sake of dramatic effect. Some judges, such as yourself, were MUCH less wrong. I hope you don't mind me outing you as one of the people who got a positive score; that's a reflection of you being better calibrated. That said, if you say "I'm 70% confident" four times and only get it right twice, that's evidence that you were still (slightly) overconfident when you thought you were "decently able to discern genuine writing from fakery".

Comment by Raelifin on Vegetarianism Ideological Turing Test Results · 2015-10-14T20:14:11.220Z · LW · GW

In retrospect I ought to have included options closer to 50%. I didn't expect that they'd be so necessary! You are absolutely right, though.

A big part of LessWrong, I think, is learning to overcome our mental failings. Perhaps we can use this as a lesson that the best judge writes down their credence before seeing the options, then picks the option that is the best match to what they wrote. I know that I, personally, try (and often fail) to use this technique when doing multiple-choice tests.

Comment by Raelifin on Vegetarianism Ideological Turing Test Results · 2015-10-14T20:04:48.706Z · LW · GW

Every judge being close to 50% would be bizarre. If I flip 13 coins 53 times I would expect that many of those sets of 13 will stray from the 6.5/13 expected ratio. The big question is whether anyone scored high enough or low enough that we can say "this wasn't just pure chance".

Comment by Raelifin on Vegetarianism Ideological Turing Test Results · 2015-10-14T19:59:33.296Z · LW · GW

This is a very good point, and I ought to have mentioned it in the post. The point remains about overconfidence, however. Those who did decide to try (even given that it was hard) didn't have the mental red-flag that perhaps their best try should be saying "I don't know" with or without walking away.

Comment by Raelifin on Personal story about benefits of Rationality Dojo and shutting up and multiplying · 2015-08-26T17:04:49.587Z · LW · GW

Great job, you two! Don't forget to give your elephant and rider some time to "discuss" the findings internally before making the final judgment. I find that my elephant will slowly come around unless there's something important I've overlooked, which is a major risk when doing explicit calculations. For instance, I notice there's no representation of location, which tends to be a very important factor in deciding where to live.

Comment by Raelifin on Vegetarian/Omnivore Ideological Turing Test Judging Round! · 2015-08-21T02:21:42.791Z · LW · GW

I pretty much agree with you. I think it'll be interesting to get the data out of this and see how competent the judges are compared to Leah's Christianity tests. A few people in my local group thought this would be a good topic.

Comment by Raelifin on Vegetarian/Omnivore Ideological Turing Test Judging Round! · 2015-08-21T02:18:42.043Z · LW · GW

Yes. I'll be providing the answer key in the stats post.

Comment by Raelifin on Vegetarianism Ideological Turing Test! · 2015-08-11T13:00:19.125Z · LW · GW

They already did. I encourage you to make your prediction, however (the full judging round will start on Monday or Tuesday depending on my schedule).

Comment by Raelifin on Vegetarianism Ideological Turing Test! · 2015-08-10T19:20:24.753Z · LW · GW

I really like this entry. Don't forget to PM me your actual opinion so I can give feedback to the judges and see how you do. ^_^

Comment by Raelifin on Vegetarianism Ideological Turing Test! · 2015-08-09T23:34:57.765Z · LW · GW

Yikes. If all responses are this good, I'm sure the judges will have a rough time! Thanks so much for your words. At some point you'll need to PM me with a description of your actual beliefs so I can give feedback to the judges and see how you do.

Comment by Raelifin on Ideological Turing Test Domains · 2015-08-04T12:20:20.744Z · LW · GW

These are great suggestions! (As are others, suggested in other comments.) Thank you!

When I gave my presentation last night I made sure that people knew that it was called the ITT by others and that was what to search for (I also pointed them to UnEY). I'm still on the fence about pushing the name (ITT is really hard to say) but I'll keep your reservations in mind.

I'll keep you informed of the details moving forward. :-)

Comment by Raelifin on 2014 Less Wrong Census/Survey · 2014-10-27T18:14:20.979Z · LW · GW

Survey completed! Making a note here: Huge success!

Comment by Raelifin on Weekly LW Meetups: Cleveland, Moscow · 2012-12-22T14:24:15.861Z · LW · GW

The Cleveland meetup was canceled due to people being busy and sick.

Comment by Raelifin on Meetup : Cleveland Ohio Meetup · 2012-12-22T14:23:37.430Z · LW · GW

This meetup was canceled due to people being sick and busy.

Comment by Raelifin on Short introductory materials for a rationality meetup · 2012-11-13T15:42:21.470Z · LW · GW

(Primary author, here.)

This is a good point, and obviously there's a lot of tension between phyggish meme-sharing/codewords and a desire to be more inclusive and not so scary. An earlier draft actually made it an explicit point to talk about the perception of phyg, as I think it's one of the biggest PR issues we have.

The pamphlet was written to try and help people not feel so overwhelmed by coming into a space so loaded down with jargon, but you're right that it perpetuates the problem. I encourage people to copy and edit this, perhaps tailoring it to the level of jargon and the specific goals of your group.

Here's a link to a non-pdf version.

Comment by Raelifin on Does My Vote Matter? · 2012-11-05T18:12:09.878Z · LW · GW

And as a followup, even if you're correct about the probabilities (which I'm not sure you are), it's not intrinsically optimal to vote, even if you care about the outcome. One must always weigh the opportunity cost of an action, and the opportunity cost depends on the person.

If a superintelligent AI is being built, and the time it would take Yudkowsky to vote could instead decrease the extinction probability by the same amount that voting would increase candidate X's election probability, then it's clearly not optimal for Yudkowsky to vote, because the neg-utility of extinction far outweighs the neg-utility of an unfortunate election.

Comment by Raelifin on Does My Vote Matter? · 2012-11-05T18:05:43.060Z · LW · GW

voting is rational if you...

This is a minor objection, but voting can't be rational and it can't be irrational. Voting isn't a system of thinking. You may want to rephrase your argument as "voting is optimal if you..."

Comment by Raelifin on Meetup : First Meetup- Cleveland and Akron, Ohio · 2012-10-30T03:49:41.289Z · LW · GW

Hello northern Ohio! My name is Max, and I recently moved up here from Columbus. I've attended a few LW meetups before, and would enjoy getting a regular thing happening in Cleveland. ^_^

Comment by Raelifin on My Algorithm for Beating Procrastination · 2012-02-10T03:28:57.316Z · LW · GW

Be aware that having tried and failed at something does not mean it does not work. That's generalizing from a single example. Remember: “The apprentice laments 'My art has failed me', while the master says 'I have failed my art'”. This is not to say you're necessarily wrong, just that we need to take a data-based approach, rather than rely on anecdotes.

Comment by Raelifin on Elevator pitches/responses for rationality / AI · 2012-02-03T00:38:17.153Z · LW · GW

The elevator pitch that got me most excited about rationality is from Raising the Sanity Waterline. It only deals with epistemic rationality, which is a limitation, and it admittedly fits best with people who belong to a sanity-focused minority, like atheism or something political. It was originally phrased with regard to religion, so I'll keep it that way here, but it can easily be tailored.

"What is rationality?"

Imagine you're teaching a class to deluded religious people, and you want to get them to change their mind and become atheists, but you absolutely cannot talk about religion in any way. What would you do? You'd have to go deeper than talking about religion itself. You'd have to teach your students how to think clearly and actually reevaluate their beliefs. That's (epistemic) rationality.

"Why is rationality important? Shouldn't we focus on religion first?"

By focusing on rationality itself, you can not only approach religion in a non-threatening way, but also align yourself with other sane people who may care about economics or politics or medicine. By working together you can get their support, even though they may not care about atheism per se.