On Need-Sets 2020-08-15T23:25:50.814Z
Marcello's Shortform 2020-08-07T15:41:36.139Z
LessWrong v2.0 Anti-Kibitzer (hides comment authors and vote counts) 2020-05-23T23:44:15.457Z
Positive-affect-day-Schelling-point-mas Meetup 2009-12-23T19:41:02.761Z
LessWrong anti-kibitzer (hides comment authors and vote counts) 2009-03-09T19:18:44.923Z


Comment by Marcello on On Need-Sets · 2020-08-18T22:29:53.358Z · LW · GW

I broadly agree. Though I would add that those things could still be (positive motivation) wants afterwards, which one pursues without needing them. I'm not advocating for asceticism.

Also, while I agree that you get more happiness by having fewer negative motives, being run by positive motives is not 100% happiness. One can still experience disappointment if one wants access to Netflix, and it's down for maintenance one day. However, disappointment is still both more hedonic than fear and promotes a more measured reaction to the situation.

Comment by Marcello on On Need-Sets · 2020-08-17T08:04:01.890Z · LW · GW
Are you trying to say that it should work similarly to desensitization therapy? But then, there might exist a reversed mode, where you get even more attached to things as you meditate on why they're good to have. Which of these modes dominates is not clear to me.

I think you make a good point. I feel I was gesturing at something real when I wrote down the comparison notion, but didn't express it quite right. Here's how I would express it now:

The key thing I failed to point out in the post is that just visualizing a good thing you have or what's nice about it is not the same as being grateful for it. Gratitude includes an acknowledgement. When you thank an acquaintance for, say, having given you helpful advice, you're acknowledging that they didn't necessarily have to go out of their way to do that. Even if you're grateful for something a specific person didn't give you, and you don't believe in a god, the same feeling of acknowledgment is present. I suspect this acknowledgement is what pushes things out of the need-set.

And indeed, as you point out, just meditating on why something is good to have might increase attachment (or it might not, the model doesn't make a claim about which effect would be stronger).

Comment by Marcello on On Need-Sets · 2020-08-17T07:32:05.706Z · LW · GW
I don't think I get this. Doesn't this apply to any positive thing in life? (e.g. why single out the gratitude practise?)

I expect most positive things would indeed help somewhat, but that gratitude practice would help more. If someone lost a pet, giving them some ice cream may help. However, as long as their mind is still making the comparison to the world where their pet is still alive, the help may be limited. That said, to the extent that they manage to feel grateful for the ice cream, it seems to me as though their internal focus has shifted in a meaningful way, away from grasping at the world where their pet is still alive and towards the real world.

Comment by Marcello on On Need-Sets · 2020-08-17T07:14:44.685Z · LW · GW

1. Yes, I agree with the synopsis (though expanded need-sets are not the only reason people are more anxious in the modern world).

2. Ah. Perhaps my language in the post wasn't as clear as it could have been. When I said:

More specifically, your need-set is the collection of things that have to seem true for you to feel either OK or better.

I was thinking of the needs as already being about what seems true about future states of the world, not just present states. For example, your need for drinking water is about being able to get water when thirsty at a whole bunch of future times.

If this is true then a larger need-set would lead to more negative motivation due to there being more ways for something we think we need to be taken away from us.

Yes, exactly.

Comment by Marcello on Eli's shortform feed · 2020-08-07T16:23:07.415Z · LW · GW

Your seemingly target-less skill-building motive isn't necessarily irrational or non-awesome. My steel-man is that you're in a hibernation period, in which you're waiting for the best opportunity of some sort (romantic, or business, or career, or other) to show up so you can execute on it. Picking a goal to focus on really hard now might well be the wrong thing to do; you might miss a golden opportunity if your nose is at the grindstone. In such a situation a good strategy would, in fact, be to spend some time cultivating skills, and some time in existential confusion (which is what I think not knowing which broad opportunities you want to pursue feels like from the inside).

The other point I'd like to make is that I expect building specific skills actually is a way to increase general problem solving ability; they're not at odds. It's not that super specific skills are extremely likely to be useful directly, but that the act of constructing a skill is itself trainable and a significant part of general problem solving ability for sufficiently large problems. Also, there's lots of cross-fertilization of analogies between skills; skills aren't quite as discrete as you're thinking.

Comment by Marcello on Marcello's Shortform · 2020-08-07T15:41:36.518Z · LW · GW

"Aspiring Rationalist" Considered Harmful

The "aspiring" in "aspiring rationalist" seems like superfluous humility at best. Calling yourself a "rationalist" never implied perfection in the first place. It's just like how calling yourself a "guitarist" doesn't mean you think you're Jimi Hendrix. I think this analogy is a good one, because rationality is a human art, just like playing the guitar.

I suppose one might object that the word "rational" denotes a perfect standard, unlike playing the guitar. However, we don't hesitate to call someone an "idealist" or a "perfectionist" when they're putting in a serious effort to conform to an ideal or strive towards perfection, so I think this objection is weak. The "-ist" suffix already means that you're a person trying to do the thing, with all the shortcomings that entails.

Furthermore, adding the "aspiring" appears actively harmful: it dilutes the term. Think of what it would mean for a group of people to call themselves "aspiring guitarists". The trouble is that the label also covers the sort of person who daydreams about the adulation of playing for large audiences but never gets around to practicing, whereas to honestly call yourself a "guitarist", you would have to actually, y'know, play the guitar once in a while.

While I acknowledge I'm writing this many years too late, please consider dropping the phrase "aspiring rationalist" from your lexicon.

Comment by Marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2020-05-09T01:45:34.343Z · LW · GW

Well, I made another UserScript for the current site:

Comment by Marcello on Decision theory and zero-sum game theory, NP and PSPACE · 2018-05-31T14:52:52.777Z · LW · GW

I agree with some of the other commenters that the term "decision theory" ought to be reserved for the overarching problem of which decision algorithm to use, and that the distinction you're referring to ought to be called something like "adversarial" vs "non-adversarial" or "rival" vs "non-rival". Nonetheless, I think this is an interesting handle for thinking about human psychology.

If we view these as two separate modes in humans, and presume that there's some kind of subsystem that decides which mode to use, then false positives of that subsystem look like potential explanations for things like The Imp of the Perverse, or paranoia.

Comment by Marcello on Ms. Blue, meet Mr. Green · 2018-03-01T19:53:16.203Z · LW · GW

In the link post you're referring to, what Scott actually says is:

I suspect this is true the way it’s commonly practiced and studied (“if you’re feeling down, listen to this mindfulness tape for five minutes a day!”), less true for more becoming-a-Buddhist-monk-level stuff.
Comment by Marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T21:14:18.103Z · LW · GW

Yes. Google docs does contain a lame version of the thing I'm pointing at. The right version is that the screen is split into N columns. Each column displays the children of the selection from the previous column (the selection could either be an entire post/comment or a span within the post/comment that the children are replies to.)

This is both a solution to inline comments and a tree-browser that lets you see just the ancestry of a comment at a glance with out having to manually collapse everything else.
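The layout logic of that column view can be sketched as a pure function from a comment tree plus a selection path to the list of columns to display. (All names here are hypothetical, just to illustrate the idea; this isn't the site's actual data model.)

```python
# Miller-column (Finder-style) comment browser: column 0 shows the post's
# top-level comments; column k+1 shows the children of whichever node is
# selected in column k. A tree node is {"id": ..., "children": [nodes...]}.

def columns_for(tree, selection_path):
    columns = [tree["children"]]
    for selected_id in selection_path:
        node = next(c for c in columns[-1] if c["id"] == selected_id)
        columns.append(node["children"])
    return columns

post = {"id": "post", "children": [
    {"id": "c1", "children": [{"id": "c1a", "children": []}]},
    {"id": "c2", "children": []},
]}

# Selecting c1 in the first column reveals its replies in a second column.
cols = columns_for(post, ["c1"])
```

Viewing a single comment's ancestry then corresponds to calling this with the selection path set to that comment's chain of ancestor ids.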

Also: you replied to my comment and I didn't see any notifications. I found your reply by scrolling around. That's probably a bug.

Comment by Marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T21:00:56.681Z · LW · GW

If you want to encourage engagement, don't hide the new comment box all the way down at the bottom of the page! Put another one right after the post (or give the post a reply button of the same sort the comments have.)

Comment by Marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T20:58:26.189Z · LW · GW

One UI for this I could imagine (for non-mobile wide-screen use) is to have the post and the comments appear in two columns with the post on the left and the comments on the right (Similar to the Mac OS X Finder's column view.) Then when the user clicks on a comment the appropriate bit of the post would get highlighted.

In fact, I could see doing a similar thing for the individual comments themselves to create a view that would show the *ancestry* of a single comment, stretching left back to the post the conversation was originally about. This could save a fair amount of scrolling.

Comment by Marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T20:51:43.451Z · LW · GW

Agreed. Also, at some point Eigenkarma is going to need to include a recency bias so that the system can react quickly to a commenter going sour.

Comment by Marcello on Welcome to Lesswrong 2.0 · 2017-06-20T20:45:21.386Z · LW · GW

I agree with the spirit of this. That said, if the goal is to calculate a karma score which fails to be fooled by a user posting a large amount of low-quality content, it might be better to do something roughly like: sum((P*x if x < 0 else max(0, x-T)) for x in post_and_comment_scores). Only comments that hit a certain bar should count at all. Here P is the penalty multiplier for creating bad content, and T is the threshold a comment score needs to meet to begin counting as good content. Of course, I also agree that it's probably worth weighting upvotes and downvotes separately and normalizing by reads to calculate these per-(comment or post) scores.
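That formula can be written out as a function. (The particular values of P and T here are arbitrary placeholders, just to make the behavior concrete.)

```python
def karma(post_and_comment_scores, P=3.0, T=2.0):
    """Sum item scores so that bad content is penalized (multiplier P)
    and only content scoring above threshold T counts as good content,
    per the formula in the comment above."""
    return sum((P * x if x < 0 else max(0, x - T))
               for x in post_and_comment_scores)

# A user who posts lots of mediocre (score-1) items gains nothing,
# while a strong post partly offset by one bad comment still nets positive:
mediocre = karma([1, 1, 1, 1, 1])   # every item below threshold
mixed = karma([10, -2])             # (10 - 2.0) + 3.0 * (-2)
```

The point of the threshold T is that a flood of barely-upvoted content sums to exactly zero instead of slowly accumulating karma.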

Comment by Marcello on Formal Open Problem in Decision Theory · 2017-04-05T00:39:07.000Z · LW · GW

From discussions I had with Sam, Scott, and Jack:

To solve the problem, it would suffice to find a reflexive domain D with a retract onto [0, 1].

This is because if you have a reflexive domain D, that is, a D with a continuous surjective map f : D → D^D, and [0, 1] is a retract of D, then there's also a continuous surjective map D → [0, 1]^D.

Proof: If [0, 1] is a retract of D then we have a retraction r : D → [0, 1] and a section s : [0, 1] → D with r ∘ s = id. Construct g : D → [0, 1]^D by g(d) = r ∘ f(d). To show that g is a surjection, consider an arbitrary h ∈ [0, 1]^D. Then s ∘ h ∈ D^D. Since f is a surjection there must be some d with f(d) = s ∘ h. It follows that g(d) = r ∘ s ∘ h = h. Since h was arbitrary, g is also a surjection.
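In symbols (writing D for the reflexive domain, f for its surjection, and r, s for the retraction pair):

```latex
% Given a reflexive domain $D$ with $f : D \twoheadrightarrow D^D$,
% and a retraction pair $r : D \to [0,1]$, $s : [0,1] \to D$, $r \circ s = \mathrm{id}$:
\[
  g := (r \circ {-}) \circ f : D \to [0,1]^D, \qquad g(d) = r \circ f(d).
\]
\[
  \text{For any } h \in [0,1]^D:\quad
  s \circ h \in D^D,\quad
  f(d) = s \circ h \;\implies\; g(d) = r \circ s \circ h = h,
\]
% so $g$ is surjective.
```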

Comment by Marcello on Formal Open Problem in Decision Theory · 2017-04-05T00:26:00.000Z · LW · GW

"Self-Reference and Fixed Points: A Discussion and an Extension of Lawvere's Theorem" by Jorge Soto-Andrade and Francisco J. Varela seems like a potentially relevant result. In particular, they prove a converse Lawvere result in the category of posets (though they mention doing this for topological spaces as an unsolved problem). I'm currently reading through this and related papers with an eye to adapting their construction to topological spaces (I think you can't just use it straightforwardly because even though you can build a reflexive domain with a retract onto an arbitrary poset, the paper uses a different notion of continuity for posets.)

Comment by Marcello on Applied Rationality Workshops: Jan 25-28 and March 1-4 · 2013-01-23T23:19:25.543Z · LW · GW

A bit of an aside, but for me the reference to "If" is a turn-off. I read it as promoting a fairly arbitrary code of stoicism rather than effectiveness. The main message I get is: keep cool, don't complain, don't show that you're affected by the world, and now you've achieved your goal.

I agree that the poem is about stoicism, but I have a very different take on what stoicism is. Real stoicism is about training the elephant to be less afraid and more stable, and thereby accomplish more. For example, the standard stoic meditation technique of thinking about the worst and scariest possible outcomes you could face will gradually chip away at instinctive fear responses and allow one to think in a more level-headed way. Similarly, taking cold showers deconditions the flinch response (which to some extent also allows one not to flinch away from thoughts).

Of course, all of these real stoic training techniques are challengingly unpleasant. It's much easier to be a poser-stoic who explicitly optimizes for how stoic-looking a face they put forward, by keeping cool, not complaining, and not emoting, rather than putting in all the hard work required to train the elephant and become a real stoic. This is, as you say, a recipe for disaster if pushed too hard. Most people out there who call themselves stoics are poser-stoics, just as Sturgeon's Law would demand. After reading the article you linked to, I now have the same opinion of the kind of stoicism the Victorian school system demanded.

Comment by Marcello on Who Wants To Start An Important Startup? · 2012-08-18T04:31:05.409Z · LW · GW

Short version: Make an Eckman-style micro-expression reader in a wearable computer.

Fleshed out version: You have a wearable computer (perhaps something like Google Glass) which sends video from its camera (or perhaps two cameras if one camera is not enough) over to a high-powered CPU which processes the images, locates the faces, and then identifies micro-expressions by matching and comparing the current image (or 3D model) to previous frames to infer which bits of the face have moved in which directions. If a strong enough micro-expression happens, the user is informed by a tone or other notification. Alternatively, one could go the more pedagogical route by showing them a still frame of the person making the micro-expression some milliseconds prior, with the relevant bits of the face highlighted.

Feasibility: We can already make computers that are good at finding faces in images and creating 3D models from multiple camera perspectives. I'm pretty sure small cameras are good enough by now. We need the beefy CPU and/or GPU as a separate device for now because it's going to be a while before wearables are good enough to do this kind of heavy-duty processing on their own, but wifi is good enough to transmit very high resolution video. The foggiest bit in my model would be whether current image processing techniques are up to the challenge. Would anyone with expertise in machine vision care to comment on this?
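The core detection step, comparing tracked facial landmarks across consecutive frames, can be sketched as a toy function. (Landmark names, coordinates, and the threshold here are all made up for illustration; a real system would sit on top of an actual face tracker and work with far denser data.)

```python
# Toy sketch of the frame-comparison step described above: a micro-expression
# shows up as a brief, small movement of part of the face. A "frame" here is
# a dict mapping landmark names to (x, y) positions; we flag any landmark
# whose displacement since the previous frame exceeds a threshold.

def moved_landmarks(prev_frame, curr_frame, threshold=2.0):
    flagged = []
    for name, (x0, y0) in prev_frame.items():
        x1, y1 = curr_frame[name]
        displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if displacement > threshold:
            flagged.append(name)
    return flagged

prev = {"brow_left": (10.0, 20.0), "lip_corner_right": (40.0, 50.0)}
curr = {"brow_left": (10.2, 20.1), "lip_corner_right": (40.0, 53.0)}

# Only the lip corner moved more than the threshold between these frames.
flagged = moved_landmarks(prev, curr)
```

A real pipeline would also need to check that the movement is brief (reverting within a fraction of a second), since that duration constraint is what distinguishes a micro-expression from an ordinary one.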

Possible positive consequences: Group collaboration easily succumbs to politics and scheming unless a certain (large) level of trust and empathy has been established. (For example, I've seen plenty of hacker news comments confirm that having a strong friendship with one's startup cofounder is important.) A technology such as this would allow for much more rapid (and justified) trust-building between potential collaborators. This might also allow for the creation of larger groups of smart people who all trust each other. (Which would be invaluable for any project which produces information which shouldn't be leaked because it would allow such projects to be larger.) Relatedly, this might also allow one to train really excellent therapist-empaths.

Possible negative consequence: Police states where the police are now better at reading people's minds.

Comment by Marcello on Quixey - startup applying LW-style rationality - hiring engineers · 2011-09-29T22:02:22.695Z · LW · GW

I didn't leave due to burn-out.

Comment by Marcello on Quixey - startup applying LW-style rationality - hiring engineers · 2011-09-28T21:54:17.829Z · LW · GW

Quixey is a great place to work, and I learned a lot working there. My main reason for leaving was that I wanted to be able to devote more time and mental energy to some of my own thoughts and projects.

Comment by Marcello on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-02T08:42:49.397Z · LW · GW

Offhand, I'm guessing the very first response ought to be "Huzzah! I caught myself procrastinating!" in order to get the reverse version of the effect I mentioned. Then go on to "what would I like to do?"

Comment by Marcello on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-02T08:01:07.594Z · LW · GW

Here's a theory about one of the things that causes procrastination to be so hard to beat. I'm curious what people think of it.

  1. Hypothesis: Many parts of the mind are influenced by something like reinforcement learning, where the emotional valances of our thoughts function as a gross reward signal that conditions their behaviors.

  2. Reinforcement learning seems to have a far more powerful effect when feedback is instant.

  3. We think of procrastinating as a bad thing, and tend to internally punish ourselves when we catch ourselves doing it.

  4. Therefore, the negative feedback signal might end up exerting a much more powerful training effect on the "catcher" system (aka. whatever is activating frontal override) rather than on whatever it is that triggered the procrastination in the first place.

  5. This results in a simple counter-intuitive piece of advice: when you catch yourself procrastinating, it might be a very bad idea to internally berate yourself about it; Thoughts of the form "%#&%! I'm procrastinating again! I really shouldn't do that!" might actually cause more procrastinating in the long run. If I had to guess, things like meditation would be helpful for building up the skill required to catch the procrastination-berating subsystem in the act and get it to do something else.

TL;DR: It would probably be hugely helpful to try to train oneself to make the "flinch" less unpleasant.

Comment by Marcello on Positive-affect-day-Schelling-point-mas Meetup · 2009-12-23T19:45:33.936Z · LW · GW

I am going to be there.

Comment by Marcello on Outlawing Anthropics: An Updateless Dilemma · 2009-09-09T13:32:42.355Z · LW · GW

Why do I think anthropic reasoning and consciousness are related?

In a nutshell, I think subjective anticipation requires subjectivity. We humans feel dissatisfied with a description like "well, one system running a continuation of the computation in your brain ends up in a red room and two such systems end up in green rooms" because we feel that there's this extra "me" thing, whose future we need to account for. We bother to ask how the "me" gets split up, what "I" should anticipate, because we feel that there's "something it's like to be me", and that (unless we die) there will be in future "something it will be like to be me". I suspect that the things I said in the previous sentence are at best confused and at worst nonsense. But the question of why people intuit crazy things like that is the philosophical question we label "consciousness".

However, the feeling that there will be in future "something it will be like to be me", and in particular that there will be one "something it will be like to be me", if taken seriously, forces us to have subjective anticipation, that is, to write a probability distribution, summing to one, over which copy we end up as. Once you do that, if you wake up in a green room in Eliezer's example, you are forced to update to 90% probability that the coin came up heads (provided you distributed your subjective anticipation evenly between all twenty copies in both the heads and tails scenarios, which really seems like the only sane thing to do.)
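The 90% figure is just a Bayes update, assuming (as in Eliezer's setup) that 18 of the 20 copies wake in green rooms if the coin came up heads, and 2 of the 20 if it came up tails:

```python
# P(heads | I woke in a green room), with subjective anticipation spread
# evenly over the 20 copies in each scenario.
p_green_given_heads = 18 / 20
p_green_given_tails = 2 / 20
prior_heads = 0.5

posterior_heads = (prior_heads * p_green_given_heads) / (
    prior_heads * p_green_given_heads + prior_heads * p_green_given_tails)
# posterior_heads comes out to 0.9
```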

Or, at least, the same amount of "something it is like to be me"-ness as we started with, in some ill-defined sense.

On the other hand, if you do not feel that there is any fact of the matter as to which copy you become, then you just want all your copies to execute whatever strategy is most likely to get all of them the most money from your initial perspective of ignorance of the coinflip.

Incidentally, the optimal strategy looks like a policy selected by updateless decision theory, and not like any probability of the coin having been heads or tails. PlaidX beat me to the counter-example for p=50%. Counter-examples like PlaidX's will work for any p<90%, and counter-examples like Eliezer's will work for any p>50%, so that pretty much covers it. So, unless we want to include ugly hacks like responsibility, or unless we let the copies reason Goldenly (using Eliezer's original TDT) about each other's actions as transposed versions of their own actions (which does correctly handle PlaidX's counter-example, but might break in more complicated cases where no isomorphism is apparent), there simply isn't a probability-of-heads that represents the right thing for the copies to do no matter the deal offered to them.

Comment by Marcello on AndrewH's observation and opportunity costs · 2009-07-24T02:57:50.933Z · LW · GW

The most effective version of this would probably be an iPhone (or similar mobile device) application that gives a dollar to charity when you push a button. If it's going to work reliably it has to be something that can be used when the beggar/cause invocation is in sight: for most people, I'm guessing that akrasia would probably prevent a physical box or paper ledger from working properly.

Comment by Marcello on AndrewH's observation and opportunity costs · 2009-07-23T18:10:28.674Z · LW · GW

I recently visited Los Angeles with a friend. Whenever we got lost wandering around the city, he would find the nearest homeless person, ask them for directions and pay them a dollar. (Homeless people tend to know the street layout and bus routes of their city like the backs of their hands.)

Comment by Marcello on The Strangest Thing An AI Could Tell You · 2009-07-16T04:31:18.155Z · LW · GW

Yes, we have a name for this: Religion

Agreed, but the fact that religion exists makes the prospect of similar things whose existence we are not aware of all the scarier. Imagine, for example, if there were something like a religion one of whose tenets is that you have to fool yourself into thinking that the religion doesn't exist most of the time.

Comment by Marcello on The Strangest Thing An AI Could Tell You · 2009-07-15T16:29:57.602Z · LW · GW
  • We actually live in hyperspace: our universe really has four spatial dimensions. However, our bodies are fully four-dimensional; we are not wafer-thin slices a la Flatland. We don't perceive there to be four dimensions because our visual cortexes have a defect somewhat like that of people who can't notice anything on the right side of their visual field.
  • Not only do we have an absolute denial macro, but it is a programmable absolute denial macro and there are things much like computer viruses which use it and spread through human population. That is, if you modulated your voice in a certain way at someone, it would cause them (and you) to acquire a brand new self deception, and start transmitting it to others.
  • Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
  • There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them because they rely heavily on visuospatial reasoning to construct real analysis proofs.
Comment by Marcello on Harnessing Your Biases · 2009-07-03T00:56:54.957Z · LW · GW

I'm not sure the cost of privately held false beliefs is as low as you think it is. The universe is heavily Causally Entangled. Now even if in your example, the shape of the earth isn't causally entangled with anything our mechanic cares about, that doesn't get you off the hook. A false belief can shoot you in the foot in at least two ways. First, you might explicitly use it to reason about the value of some other variable in your causal graph. Second, your intuition might draw on it as an analogy when you are reasoning about something else.

If our car mechanic thinks his planet is a disc supported atop an infinite pile of turtles, when this is in fact not the case, then isn't he more likely to conclude that other things which he may actually come into more interaction with (such as a complex device embedded inside a car which could be understood by our mechanic, if he took it apart and then took the pieces apart about five times) might also be "turtles all the way down"? If I actually lived on a disc on top of infinitely many turtles, then I would be nowhere near as reluctant to conclude that I had a genuine fractal device on my hands. If I actually lived in a world which was turtles all the way down, I would also be much more disturbed by paradoxes involving backward supertasks.

To sum up: False beliefs don't contaminate your belief pool via the real links in the causal network in reality; they contaminate your belief pool via the associations in your mind.

Comment by Marcello on Rationality Quotes - July 2009 · 2009-07-02T23:14:31.996Z · LW · GW

Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either.

-- Albert Einstein

Comment by Marcello on Rationality Quotes - July 2009 · 2009-07-02T22:16:22.531Z · LW · GW

Those who can make you believe absurdities can make you commit atrocities.

-- Voltaire

Comment by Marcello on Raising the Sanity Waterline · 2009-03-12T07:43:27.099Z · LW · GW

Incidentally, I agree that using the term "spirituality" is not necessarily bad. Though, I'm careful to try to use it to refer to the general emotion of awe/wonder/curiosity about the universe. To me the word means something quite opposed to religion. I mean the emotion I felt years ago when I watched Carl Sagan's "Cosmos".... To me religion looks like what happens when spirituality is snuffed out by an answer which isn't as wonderfully strange and satisfyingly true as it could have been.

It's a word with positive connotations, and we might want to steal it. It would certainly help counteract the Vulcan stereotype.

Comment by Marcello on Raising the Sanity Waterline · 2009-03-12T07:10:30.430Z · LW · GW

Michael Vassar said:

Naive realism is a supernatural belief system anyway

What exactly do you mean by "supernatural" in this context? Naive realism doesn't seem to be anthropomorphizing any ontologically fundamental things, which is what I mean when I say "supernatural".

Now of course naive realism does make the assumption that certain assumptions about reality which are encoded in our brains from the get go are right, or at least probably right, in short, that we have an epistemic gift. However, that can't be what you meant by "supernatural", because any theory that doesn't make that assumption gives us no way to deduce anything at all about reality.

Now, granted, some interpretations of naive realism may wrongly posit some portion of the gift to be true, when in fact, by means of evidence plus other parts of the gift, we end up pretty sure that it's wrong. But I don't think this sort of wrongness makes an idea supernatural. Believing that Newtonian physics is absolutely true, regardless of how fast objects move is a wrong belief, but I wouldn't call it a supernatural belief.

So, what exactly did you mean?

Comment by Marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-11T19:06:27.609Z · LW · GW

Your version is now the official version 0.3. However, the one thing you changed (possibly unintentionally) was to make Kibitzing default to on. I changed it back to defaulting to off, because it's easy to click the button if you're curious, but impossible to un-see who wrote all the comments, if you didn't want to look.

Comment by Marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-09T21:46:37.438Z · LW · GW

That particular hack looks like a bad idea. What if somebody actually put a bold-face link into a post or comment? However, your original suggestion wasn't as bad. All non-relative links to user pages get blocked by the anti-kibitzer. (Links in "Top contributors" and stuff in comments seem to be turned into relative links if they point inside LW.) It's gross, but it works.

Version 0.2 is now up. It hides everything except the point-counts on the recent posts (there was no tag around those.) (Incidentally, I don't have regular expressions because by the time my script gets its hands on the data, it's not a string at all, but a DOM-tree. So, you'd have to specify it in XPath.)

I think trying to do any more at this point would be pointless. Most of the effort involved in getting something like this to be perfect would be gruesome reverse engineering, which would all break the minute the site maintainers change something. The right thing to do(TM) would be to get the people at Tricycle to implement the feature (I hereby put the code I wrote into the public domain, yada yada.) Then we don't have to worry about having to detect which part of the page something belongs to because the server actually knows.

Comment by Marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-09T21:04:16.129Z · LW · GW

I've upgraded the LW anti-kibitzer so that it hides the taglines in the recent comments sidebar as well. (Which is imperfect, because it also hides which post the comment was about, but better will have to wait until the server starts enclosing all the kibitzing pieces of information in nice tags.) No such hack was possible for the recent posts sidebar.

Comment by Marcello on Don't Believe You'll Self-Deceive · 2009-03-09T18:45:11.272Z · LW · GW

A phrase like "trying to see my way clear to" should be a giant red flag. If you're trying to accept something then you must have some sort of motivation. If you have the motivation to accept something because you actually believe it is true, then you've already accepted it. If you have that motivation for some other reason, then you're deceiving yourself.

Comment by Marcello on Slow down a little... maybe? · 2009-03-07T17:22:46.398Z · LW · GW

It strikes me that it's not necessarily a bad thing if people are, right now, posting articles faster than they could sustainably produce in the long term. One thing you could do is not necessarily promote things immediately after they're written. Stuff on LW should still be relevant a week after it's written.

If there's a buffer of good posts waiting to be promoted, then we could make the front page a consistent stream of good articles, as opposed to having to promote slightly lower quality posts on bad days, and missing out on a few excellent posts on fast days.

EDIT: Another reason to wait before promoting things is that the goodness of some kinds of posts might really depend on the quality of the discussion that starts to form around them.

Comment by Marcello on Negative photon numbers observed · 2009-03-06T07:29:10.655Z · LW · GW

The Wikipedia link is broken.

Comment by Marcello on Issues, Bugs, and Requested Features · 2009-03-06T01:35:50.380Z · LW · GW

Is there a way to make strike-through text? I'd like to be able to make revisions like this one without deleting the record of what I originally said.

Comment by Marcello on Belief in Self-Deception · 2009-03-06T00:47:39.927Z · LW · GW

Well, exactly: if the person were thinking rationally enough to contemplate that argument, they really wouldn't need it.

My working model of this person was that they have rehearsed emotional and argumentative defenses to protect their belief, or belief in belief, but retain the ability to be reasonably rational in other domains where they aren't trying to be irrational. It therefore seemed to me that one strategy (while still dicey) for un-convincing such a person would be to come up with an argument which is both:

  • Solid. (Fooling or manipulating them into believing the truth would be bad cognitive citizenship, and wouldn't work anyway, because their defenses would find the weakness in the argument.)

  • Not the same shape as the argument their defenses are expecting.

Roko: How is your working model of the person different from mine?

Comment by Marcello on Belief in Self-Deception · 2009-03-06T00:16:49.797Z · LW · GW

I stand corrected. I hereby strike the first two sentences.

Comment by Marcello on Belief in Self-Deception · 2009-03-05T18:23:36.571Z · LW · GW

If I had been talking to the person you were talking to, I might have said something like this:

Why are you deceiving yourself into believing Orthodox Judaism as opposed to something else? If you are, in fact, deriving a benefit from deceiving yourself, while at the same time being aware that you are deceiving yourself, then why haven't you optimized your deceptions into something other than an off-the-shelf religion by now? Have you ever really asked yourself the question: "What is the set of things that I would derive the most benefit from falsely believing?" If you really think you can make your life better by deceiving yourself, and you haven't thought carefully about exactly what you would be better off deceiving yourself about, then it seems unlikely that you've actually got the optimal set of self-deceptions in your brain. In particular, this means it's probably a bad idea to deceive yourself into thinking that your present set of self-deceptions is optimal, so please don't do that.

OK, now do you agree that finding the optimal set of self-deceptions is a good idea? Good, but I have to give you one very important warning. If you actually want the optimal set of self-deceptions, you'd better not deceive yourself at all while you are constructing this set, or you'll probably get it wrong. If, for example, you are currently sub-optimally deceiving yourself into believing that it is good to believe X, then you may end up deceiving yourself into actually believing X, even if that's a bad idea. So don't self-deceive while you're trying to figure out what to deceive yourself about.

Therefore, to the extent that you are in control of your self-deceptions (which you do seem to be), the first step toward getting the best set of self-deceptions is to disable them all and begin a process of sincere inquiry as to which beliefs it is a good idea to have.

And hopefully, at the end of that process of sincere inquiry, they discover that the best set of self-deceptions happens to be empty. And if they don't, if they actually thought it through with the highest epistemic standards, and even considered epistemic arguments such as honesty being one's last defense, slashed tires, and all that... Well, I'd be pretty surprised, but if I were actually shown such an argument, and it really did conform to the highest epistemic standards, then maybe, provided it's more likely that the argument was actually that good than that I was simply being deceived, I'd even concede.

Disclaimer: I don't actually expect this to work with high confidence, because this sort of person might not actually be able to do a sincere inquiry. Regardless, if this sort of thought got stuck in their head, it could at least increase their cognitive dissonance, which might be a step on the road to recovery.

Comment by Marcello on Issues, Bugs, and Requested Features · 2009-03-02T21:18:15.255Z · LW · GW

Ah, I didn't know you could embed images because it wasn't in the help. Would it be a good idea to put a link to a Markdown tutorial at the bottom of the table that pops up when I click the help link?

Comment by Marcello on The Most Frequently Useful Thing · 2009-03-01T20:11:07.423Z · LW · GW

The idea that you shouldn't internally argue for or against things or propose solutions too soon is probably the most frequently useful thing. I sometimes catch myself arguing for or against something, and then I think "No, I should really just ask the question."

Comment by Marcello on Issues, Bugs, and Requested Features · 2009-02-27T19:34:53.389Z · LW · GW

Seconded. However, as an interim solution, we can do things like this: the golden ratio is (1+root(5))/2.
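In the proper notation that the thread is asking the site to support, the workaround above stands in for:

```latex
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618
```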

Comment by Marcello on Tell Your Rationalist Origin Story · 2009-02-27T05:46:06.789Z · LW · GW

I think I began as a rationalist when I read this story. (This was before I had run across anything Eliezer wrote.) I had rationalist tendencies before that, but I wasn't really trying very hard to be rational. Back then my "pet causes" (as I call them now) included things like trying to make all the software transparent and free. These were pet causes simply because I was interested in computers. But here, I had found something that was sufficiently terrible and sufficiently potentially preventable that it utterly dwarfed my pet causes.

I learned a simple lesson: If you really want the things you really want, then you need to think carefully about what those things are and how to accomplish them.

Comment by Marcello on War and/or Peace (2/8) · 2009-02-01T05:21:35.000Z · LW · GW

Vladimir says: "I assumed you agree that increasing the babyeating problem tenfold isn't something you'd expect to be reciprocated"

Aye, not necessarily. But perhaps the gesture of good will might be large enough to get the babyeaters to, say, take a medicine which melts the brains of their children right after they're eaten. They might be against such a medicine, but since they didn't evolve knowing that their babies were being slow-tortured for a month, they might not have desires against the medicine stronger than the desires in favor of having ten times as many kids. (And because the humans have tech. superiority, they could actually enforce the deal if that's necessary.)

It's a tricky ethical question knowing whether the humans are better off with that deal. And it's a tricky question of baby-crunch-crunch whether the baby-eaters are more-baby-eaten with that deal. But maybe there are better deals than the one I was able to think of in ten minutes.

Comment by Marcello on War and/or Peace (2/8) · 2009-02-01T01:52:04.000Z · LW · GW

Vladimir says: "Every decision to give a gift on your side corresponds to a decision to abstain from accepting your gift on the other side. Thus, decisions to give must be made on case-to-case basis, cooperation in true prisoner's dilemma doesn't mean unconditional charity."

Agreed. Obviously (for example) the human ship shouldn't self-destruct. But I wasn't talking about all gifts, I was talking about the specific class of gifts called "helpful advice." And I did specify: "provided that, on the whole, situations in which helpful advice is given freely are better."

I was comparing the two strategies "don't give away any helpful advice of the level the other party is likely to be able to reciprocate" and "give away all helpful advice of the level the other party is likely to be able to reciprocate," and pointing out that maybe they form another prisoner's dilemma. Of course, there may be more fine-grained strategies that work even better, strategies that actually take into account the relative amount of good and bad each piece of advice brings to the two parties. But remember that you must also consider how your strategy is going to be chronophoned over to the babyeaters. If we make the first gift, what exchange rate of babyeater utilons for human utilons do we tolerate? (If the gifts are made of information, it may be impossible for trades to be authenticated without the possibility of the other party simply taking the gift and using it, though of course the equilibrium might include an honor system.) It looks like it gets really complicated. Worth thinking about? Yes, but right now I'm busy.
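The two coarse strategies above can be illustrated as a standard prisoner's dilemma with made-up payoffs. The numbers are pure assumptions, chosen only to satisfy the usual T > R > P > S ordering, not anything derived from the story:

```typescript
type Strategy = "give" | "withhold";

// payoff[mine][theirs] = my utility from one round of advice exchange.
// Withholding while the other side gives is the tempting defection (T = 5);
// mutual giving (R = 3) beats mutual withholding (P = 1), which beats
// giving to a silent partner (S = 0).
const payoff: Record<Strategy, Record<Strategy, number>> = {
  give:     { give: 3, withhold: 0 },
  withhold: { give: 5, withhold: 1 },
};
```

Under this ordering each side prefers to withhold whatever the other does, yet mutual giving beats mutual withholding, which is what makes "give away all reciprocable advice" a cooperate move rather than a dominant one.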