Comments

Comment by Ratheka on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-08T16:33:29.148Z

Agreed - talking things out, making everything as common knowledge as possible, and having people who strongly value the harder path (and have resources) commit some of those resources to fence off the worst cases of failure all seem to be necessary prerequisites to stag hunting.

Comment by Ratheka on The Amish, and Strategic Norms around Technology · 2019-06-04T14:02:50.879Z

I have definitely gained from having a set bedtime, even during the time I was recovering from another run at polyphasic sleep. Having a time (01:00) when I am *in bed*, every day, makes life easier to plan and the sleep itself better. (I'm fairly sure I can support this with studies.) It's been a good policy to adopt.

Comment by Ratheka on The Amish, and Strategic Norms around Technology · 2019-06-04T13:59:43.742Z

I've definitely benefited from having control over how much friction something has: adding a cost to visiting Reddit made it less likely I'd waste time there, out of all proportion to how large the cost actually was.

Comment by Ratheka on POSITION: Design and Write Rationality Curriculum · 2012-01-25T11:31:26.218Z

What do you think of the idea of an RPG whose quests are designed to trigger biases in people, and that requires clear thinking to win? I'd be a big fan of a game that required you to read quests and think about them, and that moved away from the 'track arrow, kill everything en route' model that many have today. Of course, it still needs to be fun to entice people to play it. Functional edutainment seems to be a difficult balance to strike.

Comment by Ratheka on Welcome to Less Wrong! · 2012-01-21T10:51:57.744Z

Can I vote Discordianism? Knowing how silly it all is is a property of the text; isn't that helpful?

Comment by Ratheka on My Wild and Reckless Youth · 2012-01-21T08:07:19.522Z

Well, I'm not personally capable of building AIs, and I'm not as deeply versed as I'm sure many people here are. But I see an implementation of Bayes' theorem as a tool for finding truth, in the mind of a human or an AI or whatever sort of person you care to conceive of, whereas the mind behind it is an agent with a quality we might call directedness, or intentionality, or simply an interest in going out and poking the universe with a stick where it doesn't make sense. Bayes is in itself already math, easy to put into code, but we don't understand internally directed behavior well enough to model it yet.

Comment by Ratheka on My Wild and Reckless Youth · 2012-01-21T03:32:16.971Z

I think even a perfect implementation of Bayes would not in and of itself be an AI. By itself, the math has nothing to work on and no direction in which to work. Agency is hard to build, I think.

As always, of course, I could be wrong.

Comment by Ratheka on Initiation Ceremony · 2012-01-18T22:40:25.377Z

Perhaps I missed yours? Rationality certainly requires the ability to resist social pressure. Are you questioning whether this procedure can distinguish rationalists from nonrationalists? If so, I don't claim that on its own it would; just that it would probably be one member of a larger set of tests.

Comment by Ratheka on Initiation Ceremony · 2012-01-18T21:46:38.078Z

Nobody else seems to have added this response, so I will. We don't know that this moment, in the ritual room, is the only test they undergo. Perhaps the ability to take a written exam is part of the public procedures. Perhaps a great open exam that anyone who wants to can sit, running near continuously, is the first stage, and Brennan has spent months in a cloisterlike environment in the public face of the conspiracy, among the people who can study the sciences but not generate new true science.