Comment by minibearrex on Rationality Quotes August 2013 · 2013-08-05T05:23:38.967Z · score: 25 (27 votes) · LW · GW

He wasn't certain what he expected to find, which, in his experience, was generally a good enough reason to investigate something.

Harry Potter and the Confirmed Critical, Chapter 6

Comment by minibearrex on Rationality Quotes August 2013 · 2013-08-04T06:07:56.644Z · score: 43 (43 votes) · LW · GW

I've got to start listening to those quiet, nagging doubts.

Calvin

Comment by minibearrex on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-08T00:47:13.922Z · score: 1 (1 votes) · LW · GW

BTW, the post says that spoilers from the original canon don't need to be in rot13.

Comment by minibearrex on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-08T00:36:17.633Z · score: 0 (0 votes) · LW · GW

Their hearts stop beating, and they stop needing to breathe during the turning process.

[SEQ RERUN] Final Words

2013-05-13T03:49:52.368Z · score: 2 (3 votes)

[SEQ RERUN] Practical Advice Backed By Deep Theories

2013-05-11T03:38:46.104Z · score: 1 (2 votes)
Comment by minibearrex on [SEQ RERUN] Go Forth and Create the Art · 2013-05-11T03:36:02.862Z · score: 0 (0 votes) · LW · GW

I plan to keep doing reruns through "Final Words", which will be posted two days from now. After that, I have no particular plans to keep going. I had intended to put up a post prompting discussion of future plans, but I don't expect to personally do another rerun.

[SEQ RERUN] Go Forth and Create the Art

2013-05-10T05:04:04.503Z · score: 3 (4 votes)

[SEQ RERUN] Well-Kept Gardens Die By Pacifism

2013-05-09T05:46:24.707Z · score: 1 (4 votes)

[SEQ RERUN] The Sin of Underconfidence

2013-05-08T04:51:43.947Z · score: 2 (3 votes)
Comment by minibearrex on Rationality Quotes - April 2009 · 2013-05-08T04:32:50.170Z · score: 2 (2 votes) · LW · GW

To try to be happy is to try to build a machine with no other specification than that it shall run noiselessly. -Robert Oppenheimer, 1929

[SEQ RERUN] My Way

2013-05-07T05:35:31.400Z · score: 4 (5 votes)

[SEQ RERUN] Of Gender and Rationality

2013-05-06T03:49:02.329Z · score: 2 (3 votes)

[SEQ RERUN] Bayesians vs. Barbarians

2013-05-03T05:39:42.024Z · score: 2 (3 votes)

[SEQ RERUN] Collective Apathy and the Internet

2013-05-01T04:33:10.392Z · score: 2 (3 votes)

[SEQ RERUN] Bystander Apathy

2013-04-30T05:00:52.033Z · score: 3 (4 votes)

[SEQ RERUN] Akrasia and Shangri-La

2013-04-24T05:37:03.768Z · score: 1 (4 votes)

[SEQ RERUN] The Unfinished Mystery of the Shangri-La Diet

2013-04-23T04:56:26.643Z · score: 1 (4 votes)

[SEQ RERUN] Beware of Other-Optimizing

2013-04-16T06:30:20.605Z · score: 4 (5 votes)

[SEQ RERUN] Mandatory Secret Identities

2013-04-15T07:08:06.432Z · score: 2 (3 votes)

[SEQ RERUN] Whining-Based Communities

2013-04-14T03:15:31.187Z · score: 4 (5 votes)

[SEQ RERUN] Extenuating Circumstances

2013-04-13T04:27:04.146Z · score: 2 (3 votes)

[SEQ RERUN] Real-Life Anthropic Weirdness

2013-04-12T05:33:14.994Z · score: 1 (2 votes)

[SEQ RERUN] Incremental Progress and the Valley

2013-04-11T05:58:55.393Z · score: 1 (2 votes)

[SEQ RERUN] Rationality is Systematized Winning

2013-04-10T04:20:56.258Z · score: 2 (3 votes)

[SEQ RERUN] Selecting Rationalist Groups

2013-04-09T05:43:49.871Z · score: 1 (2 votes)

[SEQ RERUN] Purchase Fuzzies and Utilons Separately

2013-04-08T05:01:03.797Z · score: 3 (4 votes)

[SEQ RERUN] Helpless Individuals

2013-04-07T06:47:17.470Z · score: 0 (1 votes)

[SEQ RERUN] Rationality: Common Interest of Many Causes

2013-04-06T04:24:24.498Z · score: 1 (2 votes)

[SEQ RERUN] Church vs. Taskforce

2013-04-05T04:43:13.572Z · score: 1 (2 votes)

[SEQ RERUN] Can Humanism Match Religion's Output?

2013-04-04T05:40:08.789Z · score: 2 (3 votes)
Comment by minibearrex on [SEQ RERUN] Your Price for Joining · 2013-04-04T05:37:35.265Z · score: 2 (2 votes) · LW · GW

I don't think EY actually suggests that people are doing those calculations. He's saying that we're just executing an adaptation that functioned well in groups of a hundred or so, but doesn't work nearly as well anymore.

[SEQ RERUN] Your Price for Joining

2013-04-03T02:45:39.608Z · score: 4 (5 votes)

[SEQ RERUN] The Sacred Mundane

2013-04-02T05:37:35.867Z · score: 1 (2 votes)

[SEQ RERUN] On Things that are Awesome

2013-04-01T05:23:19.747Z · score: 1 (2 votes)

[SEQ RERUN] You're Calling *Who* A Cult Leader?

2013-03-31T04:18:40.926Z · score: 2 (3 votes)

[SEQ RERUN] Tolerate Tolerance

2013-03-30T20:14:20.498Z · score: 1 (2 votes)

[SEQ RERUN] Why Our Kind Can't Cooperate

2013-03-30T06:03:22.218Z · score: 1 (2 votes)

[SEQ RERUN] Rationalist Fiction

2013-03-29T04:45:28.140Z · score: 1 (2 votes)
Comment by minibearrex on [SEQ RERUN] What Do We Mean By "Rationality"? · 2013-03-29T04:38:10.109Z · score: 1 (1 votes) · LW · GW

The trouble is that there is nothing in epistemic rationality that corresponds to "motivations" or "goals" or anything like that. Epistemic rationality can tell you that pushing a button will lead to puppies not being tortured, and that not pushing it will lead to puppies being tortured; but unless you have an additional system that incorporates the desire for puppies not to be tortured, as well as a system for acting on that desire, telling you such things is all epistemic rationality can do.

Comment by minibearrex on [SEQ RERUN] The Pascal's Wager Fallacy Fallacy · 2013-03-29T04:35:21.382Z · score: 1 (1 votes) · LW · GW

I think you're confusing Pascal's Wager with Pascal's Mugging. The problem with Pascal's Mugging is that arbitrarily large payoffs are allowed to swamp arbitrarily small probabilities. The problem with Pascal's Wager is that it fails to consider any hypotheses other than "there is the Christian god" and "there is no god".

[SEQ RERUN] The Pascal's Wager Fallacy Fallacy

2013-03-28T04:55:48.361Z · score: 2 (3 votes)

[SEQ RERUN] What Do We Mean By "Rationality"?

2013-03-27T04:33:23.491Z · score: 1 (2 votes)

[SEQ RERUN] 3 Levels of Rationality Verification

2013-03-26T02:31:46.908Z · score: 4 (5 votes)

[SEQ RERUN] Schools Proliferating Without Evidence

2013-03-25T11:54:22.153Z · score: 3 (4 votes)

[SEQ RERUN] Epistemic Viciousness

2013-03-24T04:44:35.134Z · score: 1 (2 votes)

[SEQ RERUN] A Sense That More Is Possible

2013-03-22T05:01:52.957Z · score: 2 (3 votes)

[SEQ RERUN] Raising the Sanity Waterline

2013-03-21T04:47:21.520Z · score: 1 (4 votes)

[SEQ RERUN] Striving to Accept

2013-03-20T05:53:37.684Z · score: 0 (3 votes)

[SEQ RERUN] Don't Believe You'll Self-Deceive

2013-03-19T03:57:02.933Z · score: 2 (3 votes)

[SEQ RERUN] Moore's Paradox

2013-03-18T04:21:38.309Z · score: 1 (2 votes)

[SEQ RERUN] Belief in Self-Deception

2013-03-16T05:58:43.062Z · score: 3 (4 votes)

[SEQ RERUN] No, Really, I've Deceived Myself

2013-03-15T07:04:40.521Z · score: 2 (3 votes)

[SEQ RERUN] Teaching the Unteachable

2013-03-14T22:55:33.933Z · score: 0 (1 votes)

[SEQ RERUN] Unteachable Excellence

2013-03-12T09:03:53.982Z · score: 1 (2 votes)

[SEQ RERUN] Markets are Anti-Inductive

2013-03-11T04:13:14.012Z · score: 3 (4 votes)

[SEQ RERUN] Formative Youth

2013-03-10T05:42:18.393Z · score: 2 (3 votes)

[SEQ RERUN] On Not Having an Advance Abyssal Plan

2013-03-09T01:37:18.563Z · score: 2 (3 votes)

[SEQ RERUN] Fairness vs. Goodness

2013-03-08T05:48:29.463Z · score: 2 (3 votes)
Comment by minibearrex on Rationality Quotes March 2013 · 2013-03-05T05:51:10.096Z · score: 2 (4 votes) · LW · GW

I'm not really sure that counts as faith. Faith usually implies something like "believing something without concern for evidence". And in fact, the evidence I have fairly strongly indicates that when I step into an airplane, I'm not going to die.

Comment by minibearrex on [SEQ RERUN] Cynical About Cynicism · 2013-03-04T05:04:44.668Z · score: 0 (0 votes) · LW · GW

Including that one?

Comment by minibearrex on [SEQ RERUN] Cynicism in Ev-Psych (and Econ) · 2013-02-25T06:21:23.169Z · score: 0 (0 votes) · LW · GW

Hanson's reply may be worth reading.

Comment by minibearrex on [SEQ RERUN] Emotional Involvement · 2013-01-29T05:57:09.612Z · score: 0 (0 votes) · LW · GW

Probably because very few people propose playing solitaire and Settlers of Catan forever as their version of a Utopia. Spending eternity playing games on the holodeck, however, is frequently mentioned.

Comment by minibearrex on [SEQ RERUN] Changing Emotions · 2013-01-22T08:00:46.751Z · score: 2 (2 votes) · LW · GW

By the way, there may be some interruptions to posting sequence reruns over the course of the next week. Unfortunately, I'm going to be traveling and working on an odd schedule that may not let me reliably spend some time daily posting these things. I'll try to get to it as much as possible, but I apologize in advance if I miss a few days.

Comment by minibearrex on Morality is Awesome · 2013-01-18T21:58:29.386Z · score: 1 (1 votes) · LW · GW

I tend to use the word fun.

Comment by minibearrex on Case Study: the Death Note Script and Bayes · 2013-01-07T04:14:37.039Z · score: 0 (0 votes) · LW · GW

We finish with high confidence in the script's authenticity

If you're already familiar with this particular leaked 2009 live-action script, please write down your current best guess as to how likely it is to be authentic.

Unless someone already tried to come up with an explicit probability, this ordering will bias the results. Ask people for their guesses before you tell them what you have already written on the subject.

Comment by minibearrex on Just One Sentence · 2013-01-05T04:51:52.400Z · score: 2 (2 votes) · LW · GW

Your competition story qualifies you for an upvote, for munchkinry.

It's a pretty good idea for a sentence, too.

Comment by minibearrex on How to update P(x this week), upon hearing P(x next month) = 99.5%? · 2013-01-05T04:45:58.020Z · score: 1 (1 votes) · LW · GW

I will note that this seems as though it ought to be a problem we can gather data on. We don't have to theorize if we can find a good sample of cases in which a minister said they would resign, and then look at when they actually resigned.

Additionally, this post is mostly about a particular question involving anticipating political change, but the post title sounds like a more abstract issue in probability theory (how we should react if we learn that we will believe something at some later point).

Comment by minibearrex on [SEQ RERUN] You Only Live Twice · 2013-01-02T06:03:06.832Z · score: 4 (4 votes) · LW · GW

And with this post, we have reached the last post in the 2008 Hanson-Yudkowsky AI Foom Debate. Starting tomorrow, we return to the regularly scheduled sequence reruns, and start moving into the Fun Theory Sequence.

Comment by minibearrex on Intelligence explosion in organizations, or why I'm not worried about the singularity · 2012-12-27T05:56:39.000Z · score: 5 (5 votes) · LW · GW

I would advise putting a little bit more effort into formatting. Some of the font jumps are somewhat jarring, and prevent your post from having as much of an impact as you might hope.

Comment by minibearrex on [SEQ RERUN] True Sources of Disagreement · 2012-12-19T00:38:56.708Z · score: 2 (2 votes) · LW · GW

The Wikipedia page on the blind spot contains a good description, as well as a diagram of vertebrate eyes alongside the eye of an octopus, which does not have the same feature.

Comment by minibearrex on Lifeism in the midst of death · 2012-12-09T22:18:54.445Z · score: 4 (4 votes) · LW · GW

I'm sorry you had to go through this. I've been to three Catholic funerals over the past two years, and found them all to be particularly painful. I actually refused requests to perform readings, and thought about doing a eulogy like this. I didn't, and I'm impressed that you had the courage to do so.

Comment by minibearrex on [SEQ RERUN] Whither Manufacturing? · 2012-12-08T01:08:58.255Z · score: 0 (0 votes) · LW · GW

In discussions about a month or so ago, people expressed interest in running posts by Hanson, as well as a few others (Carl Shulman and James Miller), as part of the AI FOOM Debate. This is the 12th post in that debate by someone other than Yudkowsky. There are, after today, 18 more posts in the debate left, of which 9 are by Hanson. After that, we will return to the usual practice of just rerunning Yudkowsky's sequences.

Comment by minibearrex on How to incentivize LW wiki edits? · 2012-12-06T01:58:12.339Z · score: 1 (1 votes) · LW · GW

Every now and then, the Wiki seems to decide that my IP address is spamming the Wiki, and autoblock it. Sometimes it goes away in a day or so, and sometimes it doesn't. In the event that it doesn't, making a new username seems to resolve the issue, for some reason. I'm currently on account number 4, named "Wellthisisaninconvenience". Which is different from my previous account, "Thisisinconvenient".

Comment by minibearrex on Rationality Quotes December 2012 · 2012-12-05T05:12:24.365Z · score: 4 (4 votes) · LW · GW

Perhaps there is nothing in Nature more pleasing than the study of the human mind, even in its imperfections or depravities; for, although it may be more pleasing to a good mind to contemplate and investigate the application of its powers to good purposes, yet as depravity is an operation of the same mind, it becomes at least equally necessary to investigate, that we may be able to prevent it.

-John Hunter

Comment by minibearrex on Rationality Quotes December 2012 · 2012-12-05T04:04:21.942Z · score: 3 (5 votes) · LW · GW

Don't think, try the experiment.

-John Hunter

Comment by minibearrex on [Link] Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show · 2012-12-03T03:36:33.130Z · score: 1 (1 votes) · LW · GW

I think nigerweiss is asserting that "The experiment requires that you continue" activates System 1 but not System 2.

Comment by minibearrex on Open Thread, December 1-15, 2012 · 2012-12-02T02:16:10.866Z · score: 4 (4 votes) · LW · GW

Prior probabilities seem to me to be the key idea. Essentially, young earth creationists want P(evidence|hypothesis) = ~1. The problem is that to do this, you have to make P(hypothesis) very small. Essentially, they're overfitting the data. P(no god) and P(deceitful god) may have identical likelihood functions, but the second one is a conjunction of a lot of statements (god exists, god created the world, god created the world 4000 years ago, god wants people to believe he created the world 4000 years ago, god wants people to believe he created the world 4000 years ago despite evidence to the contrary, etc). All of these statements are an additional decrease in probability for the prior probability in the Bayesian update.
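The conjunction penalty described above can be sketched numerically. The probabilities below are illustrative placeholders, not real estimates of any of the claims involved:

```python
from functools import reduce

# A conjunctive hypothesis is the product of its component claims
# (assuming, for illustration, independence and equal probabilities).
components = [0.5, 0.5, 0.5, 0.5, 0.5]  # five sub-claims of the "deceitful god" story
prior_conjunction = reduce(lambda a, b: a * b, components)

prior_simple = 0.5  # a single-claim alternative hypothesis, e.g. "no god"

# Even when both hypotheses have identical likelihoods
# P(evidence|hypothesis), the posterior odds are driven by the priors:
likelihood = 1.0  # both hypotheses "explain" the evidence equally well
odds = (prior_simple * likelihood) / (prior_conjunction * likelihood)

print(prior_conjunction)  # 0.03125
print(odds)               # 16.0 -- the simpler hypothesis wins on priors alone
```

The point is the same one the comment makes: each added conjunct multiplies the prior down, so hypotheses with identical likelihood functions can still end up with very different posteriors.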

Comment by minibearrex on What do you think of my reading list? · 2012-12-02T00:13:08.874Z · score: 1 (1 votes) · LW · GW

I thought the explanations were just poorly written. But given that Luke and others seem to have reviewed it positively, I'd guess that it is substantially better than the others.

Comment by minibearrex on Breakdown of existential risks · 2012-12-01T01:33:26.410Z · score: 1 (1 votes) · LW · GW

Why does the table indicate that we haven't observed pandemics the same way we've observed wars, famines, and earth impactors?

Comment by minibearrex on What do you think of my reading list? · 2012-11-30T20:32:01.969Z · score: 3 (3 votes) · LW · GW

For what it's worth, I haven't found any of the Cambridge Introduction to Philosophy series to be particularly good. The general sense I have is that they're better used as a reference, if you can't remember exactly how the professor explained something, than as a source from which to actually learn the topic independently. That being said, I haven't read the decision theory one, so take this with a grain of salt.

Comment by minibearrex on [META] Retributive downvoting: Why? · 2012-11-30T20:10:30.251Z · score: 2 (2 votes) · LW · GW

I think any message of this sort is likely to lead to some unpleasantness. "Hey, I just downvoted a whole bunch of your old posts, but it's ok because I actually did think that all of those posts were bad." Downvote things that deserve to get downvoted, but don't make a scene out of it that's just going to poison the discussion.

Comment by minibearrex on Holding your own LW Solstice · 2012-11-30T20:05:09.044Z · score: 0 (0 votes) · LW · GW

Are you planning to do anything like the ritual sequence again this year?

Comment by minibearrex on [SEQ RERUN] Billion Dollar Bots · 2012-11-15T04:59:10.765Z · score: 2 (2 votes) · LW · GW

This post is by James Miller, who posted about a year ago that he was writing a book. It's apparently out now, and seems to have received some endorsements from some recognizable figures. If there's anyone here who's read it, how worthwhile of a read would it be for someone already familiar with the idea of the singularity?

Comment by minibearrex on [SEQ RERUN] Observing Optimization · 2012-11-11T06:54:26.203Z · score: 3 (3 votes) · LW · GW

So if I were talking about the effect of e.g. sex as a meta-level innovation, then I would expect e.g. an increase in the total biochemical and morphological complexity that could be maintained - the lifting of a previous upper bound, followed by an accretion of information. And I might expect a change in the velocity of new adaptations replacing old adaptations.

But to get from there, to something that shows up in the fossil record - that's not a trivial step.

I recall reading, somewhere or other, about an ev-bio controversy that ensued when one party spoke of the "sudden burst of creativity" represented by the Cambrian explosion, and wondered why evolution was proceeding so much more slowly nowadays. And another party responded that the Cambrian differentiation was mainly visible post hoc - that the groups of animals we have now, first differentiated from one another then, but that at the time the differences were not as large as they loom nowadays. That is, the actual velocity of adaptational change wasn't remarkable by comparison to modern times, and only hindsight causes us to see those changes as "staking out" the ancestry of the major animal groups.

I'd be surprised to learn that sex had no effect on the velocity of evolution. It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation. But to go from there, to just looking at the fossil record and seeing faster progress - it's not just me who thinks that this jump to phenomenology is tentative, difficult, and controversial.

Should you expect more speciation after the invention of sex, or less? The first impulse is to say "more", because sex seems like it should increase the optimization velocity and speed up time. But sex also creates mutually reproducing populations, that share genes among themselves, as opposed to asexual lineages - so might that act as a centripetal force?

The idea that the development of sex didn't speed up the process of speciation would, if true, be important for a certain problem I'm currently working on. Could anyone point me towards some sort of academic discussion on the subject?

Comment by minibearrex on [SEQ RERUN] AI Go Foom · 2012-11-09T18:51:57.481Z · score: 0 (0 votes) · LW · GW

As the problems get more difficult, or require more optimization, the AI has more optimization power available. That might or might not be enough to compensate for the increase in difficulty.

Comment by minibearrex on [POLL] AI-FOOM Debate in Sequence Reruns? · 2012-11-02T05:32:57.297Z · score: 1 (1 votes) · LW · GW

Thanks. I appreciate that.

Comment by minibearrex on [POLL] AI-FOOM Debate in Sequence Reruns? · 2012-11-02T05:31:06.229Z · score: 0 (0 votes) · LW · GW

What do people think of this idea? I'm personally interested in reading all of the debate, and I think I will, no matter what I wind up posting, so nobody else needs to feel lonely if they want to see all of it.

Comment by minibearrex on [POLL] AI-FOOM Debate in Sequence Reruns? · 2012-11-02T05:24:37.864Z · score: 0 (0 votes) · LW · GW

I think so, but truth be told I've actually never read through all of it myself. All of the bits of it I've seen seem to indicate that they hold similar positions in those debates to their positions in the original argument.

Comment by minibearrex on [SEQ RERUN] Worse Than Random · 2012-10-22T21:46:51.635Z · score: 3 (3 votes) · LW · GW

From the next post in the sequences:

There does exist a rare class of occasions where we want a source of "true" randomness, such as a quantum measurement device. For example, you are playing rock-paper-scissors against an opponent who is smarter than you are, and who knows exactly how you will be making your choices. In this condition it is wise to choose randomly, because any method your opponent can predict will do worse-than-average.
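The quoted point can be sketched as two player strategies. The function names and the "repeat your last move" rule are illustrative assumptions, not anything from the original post:

```python
import random

moves = ["rock", "paper", "scissors"]

def deterministic_player(history):
    # Any fixed rule -- here, "repeat your last move" -- can be
    # perfectly predicted and therefore exploited by a smarter opponent.
    return history[-1] if history else "rock"

def random_player(history):
    # A uniformly random mixed strategy is unpredictable: even a perfect
    # predictor can do no better than break even against it in expectation.
    return random.choice(moves)
```

This is the rare case the quote describes: randomness buys nothing against nature, but against an adversarial predictor it is exactly what protects you from doing worse than average.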

Comment by minibearrex on [SEQ RERUN] Lawful Uncertainty · 2012-10-21T01:42:52.656Z · score: 0 (0 votes) · LW · GW

Fixed. Good catch.

Comment by minibearrex on [SEQ RERUN] Aiming at the Target · 2012-10-16T04:24:41.490Z · score: 0 (0 votes) · LW · GW

Because there's a simpler hypothesis (gravity) that not only explains the behavior of water, but also the behavior of other objects, motions of the planets, etc. There is still some tiny amount of probability allocated to the optimization hypothesis, but it loses out to the sheer simplicity and explanatory power of competing hypotheses.

Comment by minibearrex on [SEQ RERUN] Aiming at the Target · 2012-10-12T04:52:43.308Z · score: 0 (0 votes) · LW · GW

Optimization is a hypothesis, and a complex one. You get evidence in favor of the hypothesis that water is an optimization process when you see it avoiding local minima and steering itself to the lowest possible place on Earth.

Comment by minibearrex on Thoughts and problems with Eliezer's measure of optimization power · 2012-10-08T23:06:58.789Z · score: 0 (0 votes) · LW · GW

Similarly, OP measures the system's ability to achieve its very top goals, not how hard these goals are. A system that wants to compose a brilliant sonnet has more OP than exactly the same system that wants to compose a brilliant sonnet while embodied in the Andromeda galaxy. Even though the second is plausibly more dangerous. So OP is a very imperfect measure of how powerful a system is.

I'm confused. A system that has to compose a brilliant sonnet and make sure that it exists in the Andromeda galaxy has to hit a smaller target of possible worlds than a system that wants to compose a brilliant sonnet, and doesn't care where it ends up. Achieving more complex goals require more optimization power, in Eliezer's sense, than achieving simple goals.
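In Eliezer's information-theoretic sense, this can be sketched as counting bits: optimization power is the log of how small a target you hit relative to the space of possible outcomes. The state counts below are toy assumptions, not anything from either post:

```python
import math

def optimization_power_bits(target_states: int, total_states: int) -> float:
    """Optimization power in bits: log2(total / target).

    The smaller the target region relative to the outcome space,
    the more bits of optimization are needed to hit it.
    """
    return math.log2(total_states / target_states)

total = 2 ** 20  # toy outcome space

# "Compose a brilliant sonnet": some set of worlds count as hits.
sonnet_only = optimization_power_bits(target_states=2 ** 10, total_states=total)

# Add the conjunct "...while embodied in Andromeda": fewer worlds qualify.
sonnet_in_andromeda = optimization_power_bits(target_states=2 ** 4, total_states=total)

print(sonnet_only)          # 10.0 bits
print(sonnet_in_andromeda)  # 16.0 bits -- the conjunctive goal is a smaller target
```

Under this measure, adding a conjunct to the goal shrinks the target and raises the bits required, which is the comment's point.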

Comment by minibearrex on Rationality Quotes October 2012 · 2012-10-07T03:38:20.571Z · score: 9 (9 votes) · LW · GW

I happen to agree with the quote; I just don't think it's particularly a quote about rationality. Just because a quote is correct doesn't mean that it's a quote about how to go about acquiring correct beliefs, or (in general) accomplish your goals. The fact that HIV is a retrovirus that employs an enzyme called reverse transcriptase to copy its genetic code into the host cell is useful information for a biologist or a biochemist, because it helps them to accomplish their goals. But it is rather unhelpful for someone looking for a way to accomplish goals in general.

Comment by minibearrex on Rationality Quotes October 2012 · 2012-10-05T05:40:44.560Z · score: 4 (8 votes) · LW · GW

Libertarian quote, or rationality quote?

Comment by minibearrex on [SEQ RERUN] Inner Goodness · 2012-10-04T04:23:15.606Z · score: 1 (1 votes) · LW · GW

A recent conversation with Michael Vassar touched on - or to be more accurate, he patiently explained to me - the psychology of at least three (3) different types of people known to him, who are evil and think of themselves as "evil". In ascending order of frequency:

...

The second type was a whole 'nother story, so I'm skipping it for now.

Does anyone know what the second type was?

Comment by minibearrex on [SEQ RERUN] Prices or Bindings? · 2012-10-01T04:22:02.234Z · score: 0 (0 votes) · LW · GW

I tried to solve it on my own, but haven't been able to so far. I haven't been able to figure out what sort of function someone who knows that I'm using UDT will use to predict my actions, and how my own decisions affect that. If someone knows that I'm using UDT, and I think that they think that I will cooperate with anyone who knows I'm using UDT, then I should break my word. But if they know that...

In general, I'm rather suspicious of the "trust yourself" argument. The Lake Wobegon effect would seem to indicate that humans don't do it well.

Comment by minibearrex on [SEQ RERUN] Prices or Bindings? · 2012-09-30T21:13:02.249Z · score: 1 (1 votes) · LW · GW

If you try to add to that category people who know that, but think that they are smart enough, then it gets tricky. How do I know whether I actually am smart enough, or whether I just think I'm smart enough?

Comment by minibearrex on [SEQ RERUN] Protected From Myself · 2012-09-28T05:54:17.430Z · score: 0 (0 votes) · LW · GW

I agree with Decius. Do you have a wiki account, so you can post your own edit under your own name?