Comment by marcello on Decision theory and zero-sum game theory, NP and PSPACE · 2018-05-31T14:52:52.777Z · score: 37 (8 votes) · LW · GW

I agree with some of the other commenters that the term "decision theory" ought to be reserved for the overarching problem of which decision algorithm to use, and that the distinction you're referring to ought to be called something like "adversarial" vs "non-adversarial" or "rival" vs "non-rival". Nonetheless, I think this is an interesting handle for thinking about human psychology.

If we view these as two separate modes in humans, and presume that there's some kind of subsystem that decides which mode to use, then false positives of that subsystem look like potential explanations for things like The Imp of the Perverse, or paranoia.

Comment by marcello on Ms. Blue, meet Mr. Green · 2018-03-01T19:53:16.203Z · score: 25 (6 votes) · LW · GW

In the link post you're referring to, what Scott actually says is:

I suspect this is true the way it’s commonly practiced and studied (“if you’re feeling down, listen to this mindfulness tape for five minutes a day!”), less true for more becoming-a-Buddhist-monk-level stuff.
Comment by marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T21:14:18.103Z · score: 1 (3 votes) · LW · GW

Yes. Google Docs does contain a lame version of the thing I'm pointing at. The right version is that the screen is split into N columns. Each column displays the children of the selection from the previous column (the selection could either be an entire post/comment or a span within the post/comment that the children are replies to.)

This is both a solution to inline comments and a tree-browser that lets you see just the ancestry of a comment at a glance without having to manually collapse everything else.

Also: you replied to my comment and I didn't see any notifications. I found your reply by scrolling around. That's probably a bug.

Comment by marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T21:00:56.681Z · score: 3 (7 votes) · LW · GW

If you want to encourage engagement, don't hide the new comment box all the way down at the bottom of the page! Put another one right after the post (or give the post a reply button of the same sort the comments have.)

Comment by marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T20:58:26.189Z · score: 1 (1 votes) · LW · GW

One UI for this I could imagine (for non-mobile wide-screen use) is to have the post and the comments appear in two columns with the post on the left and the comments on the right (Similar to the Mac OS X Finder's column view.) Then when the user clicks on a comment the appropriate bit of the post would get highlighted.

In fact, I could see doing a similar thing for the individual comments themselves to create a view that would show the *ancestry* of a single comment, stretching left back to the post the conversation was originally about. This could save a fair amount of scrolling.

Comment by marcello on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-20T20:51:43.451Z · score: 1 (1 votes) · LW · GW

Agreed. Also, at some point Eigenkarma is going to need to include a recency bias so that the system can react quickly to a commenter going sour.

Comment by marcello on Welcome to Lesswrong 2.0 · 2017-06-20T20:45:21.386Z · score: 3 (3 votes) · LW · GW

I agree with the spirit of this. That said, if the goal is to calculate a Karma score which isn't fooled by a user posting a large amount of low-quality content, it might be better to do something roughly like: sum((P*x if x < 0 else max(0, x-T)) for x in post_and_comment_scores). Only comments that hit a certain bar should count at all. Here P is the penalty multiplier for creating bad content, and T is the threshold a comment score needs to meet to begin counting as good content. Of course, I also agree that it's probably worth weighting upvotes and downvotes separately and normalizing by reads to calculate these per-(comment or post) scores.
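
A minimal sketch of that scoring rule (the function name and the example values of P and T below are mine, purely for illustration):

```python
def karma(post_and_comment_scores, P=3.0, T=2.0):
    """Aggregate per-item scores into a user's karma.

    Negative items are amplified by the penalty multiplier P, and positive
    items only count for the amount by which they clear the threshold T,
    so a flood of barely-upvoted content adds nothing.
    """
    return sum(P * x if x < 0 else max(0.0, x - T)
               for x in post_and_comment_scores)

# Example: two good posts, four mediocre comments, and one bad comment.
print(karma([10, 7, 1, 1, 1, 1, -4]))  # (10-2) + (7-2) + 0 + 3*(-4) = 1.0
```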

Comment by marcello on Formal Open Problem in Decision Theory · 2017-04-05T00:39:07.000Z · score: 3 (3 votes) · LW · GW

From discussions I had with Sam, Scott, and Jack:

To solve the problem, it would suffice to find a reflexive domain $D$ with a retract onto $[0,1]$.

This is because if you have a reflexive domain $D$, that is, a $D$ with a continuous surjective map $\epsilon : D \to D^D$, and $[0,1]$ is a retract of $D$, then there's also a continuous surjective map $D \to [0,1]^D$.

Proof: If $[0,1]$ is a retract of $D$ then we have a retraction $r : D \to [0,1]$ and a section $s : [0,1] \to D$ with $r \circ s = \mathrm{id}_{[0,1]}$. Construct $g : D \to [0,1]^D$ by $g(x) = r \circ \epsilon(x)$. To show that $g$ is a surjection, consider an arbitrary $q \in [0,1]^D$. Thus, $s \circ q \in D^D$. Since $\epsilon$ is a surjection there must be some $x$ with $\epsilon(x) = s \circ q$. It follows that $g(x) = r \circ \epsilon(x) = r \circ s \circ q = q$. Since $q$ was arbitrary, $g$ is also a surjection.
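
In display form, writing $D$ for the reflexive domain, $\epsilon$ for its surjection, $r$ and $s$ for the retraction and section, and $g$ for the constructed map (the letters are just labels for the objects described above), the whole construction is:

```latex
\begin{align*}
  &\epsilon : D \twoheadrightarrow D^{D} \text{ continuous}, \qquad
    r : D \to [0,1], \quad s : [0,1] \to D, \quad r \circ s = \mathrm{id}_{[0,1]}, \\
  &g : D \to [0,1]^{D}, \qquad g(x) := r \circ \epsilon(x), \\
  &\text{for } q \in [0,1]^{D} \text{ choose } x \text{ with } \epsilon(x) = s \circ q:
    \quad g(x) = r \circ \epsilon(x) = r \circ s \circ q = q .
\end{align*}
```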

Comment by marcello on Formal Open Problem in Decision Theory · 2017-04-05T00:26:00.000Z · score: 1 (1 votes) · LW · GW

"Self-Reference and Fixed Points: A Discussion and an Extension of Lawvere's Theorem" by Jorge Soto-Andrade and Francisco J. Varela seems like a potentially relevant result. In particular, they prove a converse Lawvere result in the category of posets (though they mention doing this for in an unsolved problem.) I'm currently reading through this and related papers with an eye to adapting their construction to (I think you can't just use it straight-forwardly because even though you can build a reflexive domain with a retract to an arbitrary poset, the paper uses a different notion of continuity for posets.)

Comment by marcello on Applied Rationality Workshops: Jan 25-28 and March 1-4 · 2013-01-23T23:19:25.543Z · score: 7 (7 votes) · LW · GW

A bit of an aside, but for me the reference to "If" is a turn-off. I read it as promoting a fairly arbitrary code of stoicism rather than effectiveness. The main message I get is: keep cool, don't complain, don't show that you're affected by the world, and now you've achieved your goal.

I agree that the poem is about stoicism, but have a very different take on what stoicism is. Real stoicism is about training the elephant to be less afraid and more stable, and thereby accomplish more. For example, the standard stoic meditation technique of thinking about the worst and scariest possible outcomes you could face will gradually chip away at instinctive fear responses and allow one to think in a more level-headed way. Similarly, taking cold showers deconditions the flinch response (which to some extent also allows one not to flinch away from thoughts).

Of course, all of these real stoic training techniques are challengingly unpleasant. It's much easier to be a poser-stoic who explicitly optimizes for how stoic-looking a face they put forward, by keeping cool, not complaining, and not emoting, rather than putting in all the hard work required to train the elephant and become a real stoic. This is, as you say, a recipe for disaster if pushed too hard. Most people out there who call themselves stoics are poser-stoics, just as Sturgeon's Law would demand. After reading the article you linked to, I now have the same opinion of the kind of stoicism the Victorian school system demanded.

Comment by marcello on Who Wants To Start An Important Startup? · 2012-08-18T04:31:05.409Z · score: 5 (5 votes) · LW · GW

Short version: Make an Ekman-style micro-expression reader in a wearable computer.

Fleshed out version: You have a wearable computer (perhaps something like Google Glass) which sends video from its camera (or perhaps two cameras, if one camera is not enough) over to a high-powered CPU which processes the images, locates the faces, and then identifies micro-expressions by matching and comparing the current image (or 3D model) to previous frames to infer which bits of the face have moved in which directions. If a strong enough micro-expression happens, the user is informed by a tone or other notification. Alternatively, one could go the more pedagogical route by showing them a still frame of the person doing the micro-expression some milliseconds prior, with the relevant bits of the face highlighted.
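
As a very rough illustration of the frame-to-frame comparison step, here is a hypothetical sketch (not a design): it assumes an off-the-shelf OpenCV Haar face detector and just flags sudden pixel change inside the detected face box, which is far cruder than real micro-expression recognition but shows where the pieces would go.

```python
import cv2

# Hypothetical sketch: find the largest face, then flag frames where the
# face region changes sharply relative to the previous frame (a crude
# stand-in for actual micro-expression detection).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def motion_alert(prev_gray, gray, threshold=12.0):
    """Return True if the largest detected face changed a lot between frames."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
    diff = cv2.absdiff(prev_gray[y:y+h, x:x+w], gray[y:y+h, x:x+w])
    return diff.mean() > threshold  # mean per-pixel change inside the box

cap = cv2.VideoCapture(0)  # stand-in for the wearable's camera feed
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if motion_alert(prev_gray, gray):
        print("\a possible micro-expression")  # the tone/notification step
    prev_gray = gray
```

A real system would need something like facial-action-unit tracking rather than raw pixel differences, but the plumbing (camera, face localization, frame comparison, notification) would look broadly similar.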

Feasibility: We can already make computers that are good at finding faces in images and creating 3D models from multiple camera perspectives. I'm pretty sure small cameras are good enough by now. We need the beefy CPU and/or GPU as a separate device for now, because it's going to be a while before wearables are good enough to do this kind of heavy-duty processing on their own, but wifi is good enough to transmit very high resolution video. The foggiest bit in my model is whether current image processing techniques are up to the challenge. Would anyone with expertise in machine vision care to comment on this?

Possible positive consequences: Group collaboration easily succumbs to politics and scheming unless a certain (large) level of trust and empathy has been established. (For example, I've seen plenty of Hacker News comments confirming that having a strong friendship with one's startup cofounder is important.) A technology such as this would allow for much more rapid (and justified) trust-building between potential collaborators. This might also allow for the creation of larger groups of smart people who all trust each other. (Which would be invaluable for any project which produces information that shouldn't be leaked, because it would allow such projects to be larger.) Relatedly, this might also allow one to train really excellent therapist-empaths.

Possible negative consequence: Police states where the police are now better at reading people's minds.

Comment by marcello on Quixey - startup applying LW-style rationality - hiring engineers · 2011-09-29T22:02:22.695Z · score: 3 (3 votes) · LW · GW

I didn't leave due to burn-out.

Comment by marcello on Quixey - startup applying LW-style rationality - hiring engineers · 2011-09-28T21:54:17.829Z · score: 3 (3 votes) · LW · GW

Quixey is a great place to work, and I learned a lot working there. My main reason for leaving was that I wanted to be able to devote more time and mental energy to some of my own thoughts and projects.

Comment by marcello on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-02T08:42:49.397Z · score: 17 (18 votes) · LW · GW

Offhand, I'm guessing the very first response ought to be "Huzzah! I caught myself procrastinating!" in order to get the reverse version of the effect I mentioned. Then go on to "what would I like to do?"

Comment by marcello on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-02T08:01:07.594Z · score: 32 (32 votes) · LW · GW

Here's a theory about one of the things that causes procrastination to be so hard to beat. I'm curious what people think of it.

  1. Hypothesis: Many parts of the mind are influenced by something like reinforcement learning, where the emotional valences of our thoughts function as a gross reward signal that conditions their behaviors.

  2. Reinforcement learning seems to have a far more powerful effect when feedback is instant.

  3. We think of procrastinating as a bad thing, and tend to internally punish ourselves when we catch ourselves doing it.

  4. Therefore, the negative feedback signal might end up exerting a much more powerful training effect on the "catcher" system (i.e., whatever is activating the frontal override) than on whatever it is that triggered the procrastination in the first place. (A toy sketch of this credit-assignment effect appears below.)

  5. This results in a simple counter-intuitive piece of advice: when you catch yourself procrastinating, it might be a very bad idea to internally berate yourself about it. Thoughts of the form "%#&%! I'm procrastinating again! I really shouldn't do that!" might actually cause more procrastinating in the long run. If I had to guess, things like meditation would be helpful for building up the skill required to catch the procrastination-berating subsystem in the act and get it to do something else.

TL;DR: It would probably be hugely helpful to try to train oneself to make the "flinch" less unpleasant.
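
Here's a toy sketch of the credit-assignment effect in point 4, using an exponentially decaying eligibility trace (the action names and numbers are made up purely for illustration): the punishment that arrives right after the catch lands mostly on the "catch" step rather than on the earlier slide into procrastination.

```python
# Toy recency-weighted credit assignment. A negative reward delivered right
# after "catch_procrastinating" sticks mostly to that step, because older
# actions have decayed eligibility.
decay = 0.5
credit = {}   # accumulated blame/credit per action
trace = {}    # eligibility of each action, decaying over time

def step(action, reward=0.0):
    for a in trace:
        trace[a] *= decay          # older actions fade
    trace[action] = 1.0            # the most recent action is most eligible
    for a, e in trace.items():
        credit[a] = credit.get(a, 0.0) + reward * e

step("drift_into_procrastination")
step("keep_procrastinating")
step("catch_procrastinating", reward=-1.0)  # self-berating right after the catch

print(credit)
# {'drift_into_procrastination': -0.25, 'keep_procrastinating': -0.5,
#  'catch_procrastinating': -1.0}
```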

Comment by marcello on Positive-affect-day-Schelling-point-mas Meetup · 2009-12-23T19:45:33.936Z · score: 1 (1 votes) · LW · GW

I am going to be there.

Positive-affect-day-Schelling-point-mas Meetup

2009-12-23T19:41:02.761Z · score: 4 (5 votes)
Comment by marcello on Outlawing Anthropics: An Updateless Dilemma · 2009-09-09T13:32:42.355Z · score: 4 (6 votes) · LW · GW

Why do I think anthropic reasoning and consciousness are related?

In a nutshell, I think subjective anticipation requires subjectivity. We humans feel dissatisfied with a description like "well, one system running a continuation of the computation in your brain ends up in a red room and two such systems end up in green rooms" because we feel that there's this extra "me" thing, whose future we need to account for. We bother to ask how the "me" gets split up, what "I" should anticipate, because we feel that there's "something it's like to be me", and that (unless we die) there will be in future "something it will be like to be me". I suspect that the things I said in the previous sentence are at best confused and at worst nonsense. But the question of why people intuit crazy things like that is the philosophical question we label "consciousness".

However, the feeling that there will be in future "something it will be like to be me", and in particular that there will be one "something it will be like to be me", if taken seriously forces us to have subjective anticipation, that is, to write a probability distribution summing to one over which copy we end up as. Once you do that, if you wake up in a green room in Eliezer's example, you are forced to update to 90% probability that the coin came up heads (provided you distributed your subjective anticipation evenly between all twenty copies in both the heads and tails scenarios, which really seems like the only sane thing to do.)

Or, at least, the same amount of "something it is like to be me"-ness as we started with, in some ill-defined sense.
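
Spelling out that 90% figure: with twenty copies and anticipation spread evenly over them, the setup has to put 18 copies in green rooms if the coin came up heads and 2 if it came up tails, so waking in a green room gives

```latex
P(\text{heads} \mid \text{green})
  = \frac{P(\text{green} \mid \text{heads}) \, P(\text{heads})}
         {P(\text{green} \mid \text{heads}) \, P(\text{heads})
          + P(\text{green} \mid \text{tails}) \, P(\text{tails})}
  = \frac{\tfrac{18}{20} \cdot \tfrac{1}{2}}
         {\tfrac{18}{20} \cdot \tfrac{1}{2} + \tfrac{2}{20} \cdot \tfrac{1}{2}}
  = 0.9 .
```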

On the other hand, if you do not feel that there is any fact of the matter as to which copy you become, then you just want all your copies to execute whatever strategy is most likely to get all of them the most money from your initial perspective of ignorance of the coinflip.

Incidentally, the optimal strategy looks like a policy selected by updateless decision theory, and not like any probability of the coin having been heads or tails. PlaidX beat me to the counter-example for p=50%. Counter-examples like PlaidX's will work for any p<90%, and counter-examples like Eliezer's will work for any p>50%, so that pretty much covers it. So, unless we want to include ugly hacks like responsibility, or unless we let the copies reason Goldenly (using Eliezer's original TDT) about each other's actions as transposed versions of their own actions (which does correctly handle PlaidX's counter-example, but might break in more complicated cases where no isomorphism is apparent), there simply isn't a probability-of-heads that represents the right thing for the copies to do no matter the deal offered to them.

Comment by marcello on AndrewH's observation and opportunity costs · 2009-07-24T02:57:50.933Z · score: 7 (7 votes) · LW · GW

The most effective version of this would probably be an iPhone (or similar mobile device) application that gives a dollar to charity when you push a button. If it's going to work reliably it has to be something that can be used when the beggar/cause invocation is in sight: for most people, I'm guessing that akrasia would probably prevent a physical box or paper ledger from working properly.

Comment by marcello on AndrewH's observation and opportunity costs · 2009-07-23T18:10:28.674Z · score: 7 (7 votes) · LW · GW

I recently visited Los Angeles with a friend. Whenever we got lost wandering around the city, he would find the nearest homeless person, ask them for directions and pay them a dollar. (Homeless people tend to know the street layout and bus routes of their city like the backs of their hands.)

Comment by marcello on The Strangest Thing An AI Could Tell You · 2009-07-16T04:31:18.155Z · score: 6 (6 votes) · LW · GW

"Yes, we have a name from this, Religion"

Agreed, but the fact that religion exists makes the prospect of similar things whose existence we are not aware of all the scarier. Imagine, for example, if there were something like a religion one of whose tenets is that you have to fool yourself into thinking that the religion doesn't exist most of the time.

Comment by marcello on The Strangest Thing An AI Could Tell You · 2009-07-15T16:29:57.602Z · score: 39 (41 votes) · LW · GW
  • We actually live in hyperspace: our universe really has four spatial dimensions. However, our bodies are fully four-dimensional; we are not wafer-thin slices a la Flatland. We don't perceive there to be four dimensions because our visual cortexes have a defect somewhat like that of people who can't notice anything on the right side of their visual field.
  • Not only do we have an absolute denial macro, but it is a programmable absolute denial macro and there are things much like computer viruses which use it and spread through human population. That is, if you modulated your voice in a certain way at someone, it would cause them (and you) to acquire a brand new self deception, and start transmitting it to others.
  • Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
  • There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them because they rely heavily on visuospatial reasoning to construct real analysis proofs.
Comment by marcello on Harnessing Your Biases · 2009-07-03T00:56:54.957Z · score: 4 (4 votes) · LW · GW

I'm not sure the cost of privately held false beliefs is as low as you think it is. The universe is heavily Causally Entangled. Now, even if in your example the shape of the earth isn't causally entangled with anything our mechanic cares about, that doesn't get you off the hook. A false belief can shoot you in the foot in at least two ways. First, you might explicitly use it to reason about the value of some other variable in your causal graph. Second, your intuition might draw on it as an analogy when you are reasoning about something else.

If our car mechanic thinks his planet is a disc supported atop an infinite pile of turtles, when this is in fact not the case, then isn't he more likely to conclude that other things which he may actually come into more interaction with (such as a complex device embedded inside a car which could be understood by our mechanic, if he took it apart and then took the pieces apart about five times) might also be "turtles all the way down"? If I actually lived on a disc on top of infinitely many turtles, then I would be nowhere near as reluctant to conclude that I had a genuine fractal device on my hands. If I actually lived in a world which was turtles all the way down, I would also be much more disturbed by paradoxes involving backward supertasks.

To sum up: False beliefs don't contaminate your belief pool via the real links in the causal network in reality; they contaminate your belief pool via the associations in your mind.

Comment by marcello on Rationality Quotes - July 2009 · 2009-07-02T23:14:31.996Z · score: 13 (15 votes) · LW · GW

Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either.

-- Albert Einstein

Comment by marcello on Rationality Quotes - July 2009 · 2009-07-02T22:16:22.531Z · score: 24 (22 votes) · LW · GW

Those who can make you believe absurdities can make you commit atrocities.

-- Voltaire

Comment by marcello on Raising the Sanity Waterline · 2009-03-12T07:43:27.099Z · score: 8 (10 votes) · LW · GW

Incidentally, I agree that using the term "spirituality" is not necessarily bad. Though I'm careful to try to use it to refer to the general emotion of awe/wonder/curiosity about the universe. To me the word means something quite opposed to religion. I mean the emotion I felt years ago when I watched Carl Sagan's "Cosmos".... To me, religion looks like what happens when spirituality is snuffed out by an answer which isn't as wonderfully strange and satisfyingly true as it could have been.

It's a word with positive connotations, and we might want to steal it. It would certainly help counteract the Vulcan stereotype.

Comment by marcello on Raising the Sanity Waterline · 2009-03-12T07:10:30.430Z · score: 9 (9 votes) · LW · GW

Michael Vassar said:

Naive realism is a supernatural belief system anyway

What exactly do you mean by "supernatural" in this context? Naive realism doesn't seem to be anthropomorphizing any ontologically fundamental things, which is what I mean when I say "supernatural".

Now of course naive realism does make the assumption that certain assumptions about reality which are encoded in our brains from the get go are right, or at least probably right, in short, that we have an epistemic gift. However, that can't be what you meant by "supernatural", because any theory that doesn't make that assumption gives us no way to deduce anything at all about reality.

Now, granted, some interpretations of naive realism may wrongly posit some portion of the gift to be true, when in fact, by means of evidence plus other parts of the gift, we end up pretty sure that it's wrong. But I don't think this sort of wrongness makes an idea supernatural. Believing that Newtonian physics is absolutely true, regardless of how fast objects move is a wrong belief, but I wouldn't call it a supernatural belief.

So, what exactly did you mean?

Comment by marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-11T19:06:27.609Z · score: 1 (1 votes) · LW · GW

Your version is now the official version 0.3. However, the one thing you changed (possibly unintentionally) was to make Kibitzing default to on. I changed it back to defaulting to off, because it's easy to click the button if you're curious, but impossible to un-see who wrote all the comments, if you didn't want to look.

Comment by marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-09T21:46:37.438Z · score: 3 (3 votes) · LW · GW

That particular hack looks like a bad idea. What if somebody actually put a bold-face link into a post or comment? However, your original suggestion wasn't as bad. All non-relative links to user pages get blocked by the anti-kibitzer. (Links in "Top contributors" and stuff in comments seem to be turned into relative links if they point inside LW.) It's gross, but it works.

Version 0.2 is now up. It hides everything except the point-counts on the recent posts (there was no HTML tag around those.) (Incidentally, I don't have regular expressions because by the time my script gets its hands on the data, it's not a string at all, but a DOM tree. So, you'd have to specify it in XPath.)

I think trying to do any more at this point would be pointless. Most of the effort involved in getting something like this to be perfect would be gruesome reverse engineering, which would all break the minute the site maintainers change something. The right thing to do(TM) would be to get the people at Tricycle to implement the feature (I hereby put the code I wrote into the public domain, yada yada.) Then we don't have to worry about having to detect which part of the page something belongs to because the server actually knows.

Comment by marcello on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-09T21:04:16.129Z · score: 2 (2 votes) · LW · GW

I've upgraded the LW anti-kibitzer so that it hides the taglines in the recent comments sidebar as well. (Which is imperfect, because it also hides which post the comment was about, but doing better will have to wait until the server starts enclosing all the kibitzing pieces of information in nice tags.) No such hack was possible for the recent posts sidebar.

LessWrong anti-kibitzer (hides comment authors and vote counts)

2009-03-09T19:18:44.923Z · score: 62 (66 votes)
Comment by marcello on Don't Believe You'll Self-Deceive · 2009-03-09T18:45:11.272Z · score: 3 (3 votes) · LW · GW

A phrase like "trying to see my way clear to" should be a giant red flag. If you're trying to accept something then you must have some sort of motivation. If you have the motivation to accept something because you actually believe it is true, then you've already accepted it. If you have that motivation for some other reason, then you're deceiving yourself.

Comment by marcello on Slow down a little... maybe? · 2009-03-07T17:22:46.398Z · score: 14 (14 votes) · LW · GW

It strikes me that it's not necessarily a bad thing if people are, right now, posting articles faster than they could sustainably produce in the long term. One thing you could do is not necessarily promote things immediately after they're written. Stuff on LW should still be relevant a week after it's written.

If there's a buffer of good posts waiting to be promoted, then we could make the front page a consistent stream of good articles, as opposed to having to promote slightly lower quality posts on bad days, and missing out on a few excellent posts on fast days.

EDIT: Another reason to wait before promoting things is that the goodness of some kinds of posts might really depend on the quality of the discussion that starts to form around them.

Comment by marcello on Negative photon numbers observed · 2009-03-06T07:29:10.655Z · score: 1 (3 votes) · LW · GW

The Wikipedia link is broken.

Comment by marcello on Issues, Bugs, and Requested Features · 2009-03-06T01:35:50.380Z · score: 8 (8 votes) · LW · GW

Is there a way to make strike-through text? I'd like to be able to make revisions like this one without deleting the record of what I originally said.

Comment by marcello on Belief in Self-Deception · 2009-03-06T00:47:39.927Z · score: 4 (4 votes) · LW · GW

Well, exactly... If the person were thinking rationally enough to contemplate that argument, they really wouldn't need it.

My working model of this person was that they had rehearsed emotional and argumentative defenses to protect their belief, or belief in belief, and that they had the ability to be reasonably rational in other domains where they weren't trying to be irrational. It therefore seemed to me that one strategy (while still dicey) to attempt to unconvince such a person would be to come up with an argument which is both:

  • Solid (Fooling/manipulating them into thinking the truth is bad cognitive citizenship, and won't work anyway because their defenses will find the weakness in the argument.)

  • Not the same shape as the argument their defenses are expecting.

Roko: How is your working model of the person different from mine?

Comment by marcello on Belief in Self-Deception · 2009-03-06T00:16:49.797Z · score: 1 (1 votes) · LW · GW

I stand corrected. I hereby strike the first two sentences.

Comment by marcello on Belief in Self-Deception · 2009-03-05T18:23:36.571Z · score: 12 (14 votes) · LW · GW

If I had been talking to the person you were talking to, I might have said something like this:

Why are you deceiving yourself into believing Orthodox Judaism as opposed to something else? If you, in fact, are deriving a benefit from deceiving yourself, while at the same time being aware that you are deceiving yourself, then why haven't you optimized your deceptions into something other than an off-the-shelf religion by now? Have you ever really asked yourself the question: "What is the set of things that I would derive the most benefit from falsely believing?" Now if you really think you can make your life better by deceiving yourself, and you haven't really thought carefully about what the exact set of things about which you would be better off deceiving yourself is, then it would seem unlikely that you've actually got the optimal set of self-deceptions in your brain. In particular, this means that it's probably a bad idea to deceive yourself into thinking that your present set of self deceptions is optimal, so please don't do that.

OK, now do you agree that finding the optimal set of self deceptions is a good idea? OK, good, but I have to give you one very important warning. If you actually want to have the optimal set of self deceptions, you'd better not deceive yourself at all while you are constructing this set of self deceptions, or you'll probably get it wrong, because if, for example, you are currently sub-optimally deceiving yourself into believing that it is good to believe X, then you may end up deceiving yourself into actually believing X, even if that's a bad idea. So don't self deceive while you're trying to figure out what to deceive yourself of.

Therefore, to the extent that you are in control of your self deceptions (which you do seem to be), the first step toward getting the best set of self deceptions is to disable them all and begin a process of sincere inquiry as to what beliefs it is a good idea to have.


And hopefully, at the end of the process of sincere inquiry, they discover the best set of self deceptions happens to be empty. And if they don't, if they actually thought it through with the highest epistemic standards, and even considered epistemic arguments such as honesty being one's last defence, slashed tires, and all that.... Well, I'd be pretty surprised, but if I were actually shown that argument, and it actually did conform to the highest epistemic standards.... Maybe, provided it's more likely that the argument was actually that good, as opposed to my just being deceived, I'd even concede.

Disclaimer: I don't actually expect this to work with high confidence, because this sort of person might not actually be able to do a sincere inquiry. Regardless, if this sort of thought got stuck in their head, it could at least increase their cognitive dissonance, which might be a step on the road to recovery.

Comment by marcello on Issues, Bugs, and Requested Features · 2009-03-02T21:18:15.255Z · score: 3 (3 votes) · LW · GW

Ah, I didn't know you could embed images because it wasn't in the help. Would it be a good idea to put a link to a Markdown tutorial at the bottom of the table that pops up when I click the help link?

Comment by marcello on The Most Frequently Useful Thing · 2009-03-01T20:11:07.423Z · score: 8 (8 votes) · LW · GW

The idea that you shouldn't internally argue for or against things or propose solutions too soon is probably the most frequently useful thing. I sometimes catch myself arguing for or against something and then I think "No, I should really just ask the question."

Comment by marcello on Issues, Bugs, and Requested Features · 2009-02-27T19:34:53.389Z · score: 5 (5 votes) · LW · GW

Seconded. However, as an interim solution, we can do things like this: the Golden ratio is (1+root(5))/2.
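
(For comparison, the same expression written out in LaTeX, as it would render with real math support:)

```latex
\varphi = \frac{1 + \sqrt{5}}{2}
```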

Comment by marcello on Tell Your Rationalist Origin Story · 2009-02-27T05:46:06.789Z · score: 11 (13 votes) · LW · GW

I think I began as a rationalist when I read this story. (This was before I had run across anything Eliezer wrote.) I had rationalist tendencies before that, but I wasn't really trying very hard to be rational. Back then my "pet causes" (as I call them now) included things like trying to make all the software transparent and free. These were pet causes simply because I was interested in computers. But here, I had found something that was sufficiently terrible and sufficiently potentially preventable that it utterly dwarfed my pet causes.

I learned a simple lesson: If you really want the things you really want, then you need to think carefully about what those things are and how to accomplish them.

Comment by marcello on War and/or Peace (2/8) · 2009-02-01T05:16:21.000Z · score: 1 (1 votes) · LW · GW

Vladimir says: "I assumed you agree that increasing the babyeating problem tenfold isn't something you'd expect to be reciprocated"

Aye, not necessarily. But perhaps the gesture of good will might be large enough to get the babyeaters to, say, take a medicine which melts the brains of their children right after they're eaten. They might be against such a medicine, but since they didn't evolve knowing that their babies were being slow-tortured for a month, they might not have desires against the medicine stronger than the desires in favor of having ten times as many kids. (And because the humans have tech. superiority, they could actually enforce the deal if that's necessary.)

It's a tricky ethical question knowing whether the humans are better off with that deal. And it's a tricky question of baby-crunch-crunch whether the baby-eaters are more-baby-eaten with that deal. But maybe there are better deals than the one I was able to think of in ten minutes.

Comment by marcello on War and/or Peace (2/8) · 2009-02-01T01:52:04.000Z · score: 0 (0 votes) · LW · GW

Vladimir says: """Every decision to give a gift on your side corresponds to a decision to abstain from accepting your gift on the other side. Thus, decisions to give must be made on case-to-case basis, cooperation in true prisoner's dilemma doesn't mean unconditional charity."""

Agreed. Obviously (for example) the human ship shouldn't self-destruct. But I wasn't talking about all gifts, I was talking about the specific class of gifts called "helpful advice." And I did specify: "provided that, on the whole, situations in which helpful advice is given freely are better."

I was comparing the two strategies "Don't give away any helpful advice of the level the other party is likely to be able to reciprocate" and "give away all helpful advice of the level the other party is likely to be able to reciprocate" and pointing out that maybe they form another prisoner's dilemma. Of course, there may be more fine-grained strategies that work even better, strategies that actually take into account the relative amount of good and bad each piece of advice brings to the two parties. But remember that you must also consider how your strategy is going to be chronophoned over to the baby eaters. If we make the first gift, what exchange rate of baby-eater utilons for human utilons do we tolerate? (If the gifts are made of information, it may be impossible for trades to be authenticated without the possibility of the other party taking the gift and using it (though of course it might be that the equilibrium has an honor system....)) It looks like it gets really complicated. Worth thinking about? Yes, but right now I'm busy.

Comment by marcello on War and/or Peace (2/8) · 2009-01-31T20:18:19.000Z · score: 0 (0 votes) · LW · GW

""" "Out of curiosity," said the Lord Pilot, "have they ever tried to produce even more babies - say, thousands instead of hundreds - so they could speed up their evolution even more?"

"It ought to be easily within their current capabilities of bioengineering," said the Xenopsychologist, "and yet they haven't done it. Still, I don't think we should make the suggestion.""

"Agreed," said Akon. """

That's not the least bit obvious. Do we really want the Babyeaters to hold back corresponding suggestions that might make our society better from our perspective and worse from theirs?

If, in this situation, we ought to bite the prisoner's-dilemma bullet to the degree of not invading the Babyeater planet because peaceful situations are, on average, better than war-torn situations, doesn't the same argument mean that we shouldn't hold back helpful advice, provided that, on the whole, situations in which helpful advice is given freely are better?

Now maybe it's the case that if we swapped that particular kind of helpful advice with the baby eaters, the degree to which the Babyeater planet got worse by our standards is more than the degree to which our planet would get better by our standards, and vice versa. But in that case it would be better for both sides to draw up a treaty....

Comment by marcello on Continuous Improvement · 2009-01-12T05:50:23.000Z · score: 5 (5 votes) · LW · GW

ShardPhoenix says "I'd say that if they're willing to believe something just because it sounds nice rather than because it's true, they've already given up on rationality."

Humanity isn't neatly divided into people who have "given up on rationality" and tireless rationalists. There are just people who try to be rational (i.e. to win) and succeed to varying extents, depending on a large complicated set of considerations, including how the person is feeling and how smart they are. Even Newton was a religious fundamentalist, and even someone who is trying his mightiest to be rational can flinch away from a sufficiently unpleasant truth.

ShardPhoenix then says "Is the goal to be rational and spread the truth, or to recruit people to the cause with wildly speculative optimism?"

Because we aren't perfectly rational creatures, because we try harder to win when motivated, it makes perfect sense to pursue lines of speculation which can motivate us, so long as we keep careful track of which things we actually know and which things we don't so that it doesn't slash our tires. If you think that in his "wildly speculative optimism" Eliezer has, despite all the question marks in his recent writing, claimed to know something which he shouldn't, or to suspect something more strongly than he should, then by all means point it out. If he hasn't, then the phrase "wildly speculative optimism" might not be a terribly good description of the recent series of posts.

Comment by marcello on Living By Your Own Strength · 2008-12-22T21:19:10.000Z · score: 10 (3 votes) · LW · GW

The first place I encountered the concept that strength must be earned was eight or nine years ago in a passage from, of all things, Jurassic Park, which stuck in my memory long after the other moments of the book faded.

The long version: http://www.stjohns-chs.org/english/Seventeenth/jur.html

The short version:

""" "I’ll make it simple," Malcolm said. "A karate master does not kill people with his bare hands. He does not lose his temper and kill his wife. The person who kills is the person who has no discipline, no restraint, and who has purchased his power in the form of a Saturday night special. And that is the kind of power that science fosters, and permits. And that is why you think that to build a place like this is simple."

"It was simple," Hammond insisted.

"Then why did it go wrong?" """

Comment by marcello on High Challenge · 2008-12-19T16:32:44.000Z · score: 8 (8 votes) · LW · GW

"Though, since you never designed your own leg muscles, you are racing using strength that isn't yours. A race between robot cars is a purer contest of their designers."

Eliezer: While people don't design their muscles, they presently don't design their brains either, so a robot car-designing contest seems like just as impure a contest. Even if people did repeatedly redesign their brains, wouldn't this either result in convergence, in which case the contestants would be identical and the contest wouldn't be interesting, or else leave the arbitrary initial advantages and disadvantages to be passed on in modified and perhaps even amplified form, so that the contest stays as impure as ever? Even if you try to measure the amount of effort the contestants put in, that's no good either, because different people are born with unfairly different amounts of willpower.

So what on earth do you mean by "purer contest"?

Comment by marcello on Visualizing Eutopia · 2008-12-16T20:35:40.000Z · score: 0 (2 votes) · LW · GW

Phil: Really? I think the way the universe looks in the long run is the sum total of the way that people's lives (and other things that might matter) look in the short run at many different times. I think you're reasoning non-extensionally here.

Comment by marcello on Visualizing Eutopia · 2008-12-16T19:52:07.000Z · score: 9 (9 votes) · LW · GW

Robin: I think Eliezer's question is worth thinking about now. If you do investigate what you would wish from a genie, isn't it possible that one of your wishes might be easy enough for you to grant without the genie? You do say you haven't thought about the question yet, so you really have no way of knowing whether your wishes would actually be that difficult to grant.

Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while. All I have to say on the matter is that that situation is definitely worth avoiding. I still don't expect my present set of answers to be right. I think they're marginally more right than they were three years ago.

You don't have a genie, but you do have a human brain, which is a rather powerful optimization process and despite not being a genie it is still very capable of shooting its owner in the foot. You should check what you think you want in the limiting case of absolute power, because if that's not what you want, then you got it wrong. If you think the meaning of life is to move westward, but then you think about the actual lay of the land hundreds of miles west of where you are and then discover you wouldn't like that, then it's worth trying to more carefully formulate why it was you wanted to go west in the first place, and once you know the reason, maybe going north is even better. If you don't want to waste time moving in the wrong direction then it's important to know what you want as clearly as possible.

Comment by marcello on Dark Side Epistemology · 2008-10-18T16:56:02.000Z · score: 1 (1 votes) · LW · GW

Douglas says: """ And then there are a few like "too complicated." You call those "negative affect words"? Surely it is better to say "that is too complicated to be true" than to say simply "that is not true"? """

Well, yes, but that's only when whatever you mean by "complicated" has something to do with being true. Some people, though, use the phrase "too complicated" just so they can avoid thinking about an idea, and in that context it really is an empty negative-affect phrase.

Of course, it is better for a scientist to say "that's too complicated to be true" rather than just "that's not true." You're not done by any means once you've made a claim about whether something is true or false; the claim still needs to be backed up. The point was simply that any characterization of an idea is bad unless that characterization really does have something to do with whether the idea is true.