Comments

Comment by loqi on How accurate is the quantum physics sequence? · 2013-06-16T20:56:59.692Z · LW · GW

I'd be interested in reading more about your top ten cool possibilities. They sound cool.

Comment by loqi on Great Explanations · 2011-11-04T08:11:37.032Z · LW · GW

My takeaway: Sometimes people don't behave in aggregate the way we think they should. By replacing their money with money·k and convincing them it's still just money, we can manipulate their behavior by jiggling k.

And it apparently goes without saying that the coupon-issuer has a good way to distinguish "legitimate" reasons to cut back on going out. E.g., flu outbreak, new compelling indoor family activity, all the other stuff no one's even thought of yet, etc.

The Keynesian "key to enlightenment" is that we can cram a knob onto the economy and jack with it?

Comment by loqi on Great Explanations · 2011-11-04T06:25:35.079Z · LW · GW

...as long as you don't mind listening to Sagan drone out "millions, and billions, and millions" for millions, and billions, and millions... basically number-novocaine delivered verbally.

And surely aliens are everywhere; we just haven't noticed them yet.

I tried watching Cosmos about a year ago, and quickly stopped. Is there a case to be made that it's worth soldiering through the awfulness?

Comment by loqi on Amanda Knox: post mortem · 2011-10-24T17:45:07.388Z · LW · GW

Please quote me where I accused you of having faith that you're more reliable than those people.

Right here:

Thanks!

I also won't engage with people who refuse to answer reasonable questions to let me understand their position.

Thanks!

Comment by loqi on Amanda Knox: post mortem · 2011-10-23T23:22:11.150Z · LW · GW

Please quote me where I accused you of having faith that you're more reliable than those people.

Comment by loqi on Amanda Knox: post mortem · 2011-10-21T17:17:35.065Z · LW · GW

Do you agree that the tone of your post is a bit nasty?

Yes. It's a combination of having little respect for the feelings of typically-wrong pseudonymous internet posters and faith in my own ability to look at incomplete justifications for sloppy reasoning and draw snarky conclusions.

Comment by loqi on Amanda Knox: post mortem · 2011-10-21T06:09:07.084Z · LW · GW

So, to summarize why you didn't update:

  • You didn't know the names of the people commenting.
  • You have faith that you're more reliable than those people.
  • You would lose your job if you weren't so great at seeing through bullshit.
  • You have often failed to see through bullshit.

Boy was Upton Sinclair ever right.

Comment by loqi on On the Openness personality trait & 'rationality' · 2011-10-14T07:23:33.250Z · LW · GW

My inner Hanson asks me

So you've got a case of the Inner Hanson, eh? My estimation of your psychological fortitude is hereby incremented.

Comment by loqi on Beautiful Probability · 2011-10-12T05:24:05.174Z · LW · GW

Good point, there is some ordering information leaked. This is consistent with identical likelihoods for both setups - learning which permutation of arguments we're feeding into a commutative operator (multiplication of likelihood ratios) doesn't tell us anything about its result.
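A minimal sketch of that point, with made-up likelihood ratios: however we permute the order in which independent pieces of evidence arrive, multiplying their likelihood ratios gives the same posterior odds, so the ordering information carries no weight on its own.

    # Hypothetical numbers, purely for illustration: the ordering of
    # independent evidence doesn't change the posterior odds, because
    # multiplication of likelihood ratios is commutative.
    from itertools import permutations
    from math import prod

    prior_odds = 1.0                     # 1:1 odds before any evidence
    likelihood_ratios = [3.0, 0.5, 2.0]  # invented P(E_i|H) / P(E_i|~H)

    posteriors = {
        round(prior_odds * prod(order), 10)
        for order in permutations(likelihood_ratios)
    }
    print(posteriors)  # one value; permuting the evidence changes nothing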

Comment by loqi on Rationality Drugs · 2011-10-03T23:10:11.318Z · LW · GW

If you don't mind sharing, how do you plan to do this? Is it as simple as "this controlled substance makes my life better, will you prescribe it for me?" Or are you "fortunate" enough to have a condition that warrants its prescription?

I ask because I've had similar experiences with Modafinil (my nickname for it is "executive lubricant"), and it is terribly frustrating to be stuck without a banned goods store.

Comment by loqi on Knox and Sollecito freed · 2011-10-03T21:28:41.556Z · LW · GW

Hooray!

Comment by loqi on The Apparent Reality of Physics · 2011-09-25T01:05:15.744Z · LW · GW

Thanks for following up on Almond. Your statements align well with my intuition, but I admit heavy confusion on the topic.

Comment by loqi on The Apparent Reality of Physics · 2011-09-25T01:02:04.524Z · LW · GW

Thanks, that's a concise and satisfying reply. I look forward to seeing where you take this.

Comment by loqi on Particles break light-speed limit? · 2011-09-23T22:14:14.609Z · LW · GW

And what, if I may ask, are your plans for your grandmother?

Comment by loqi on The Apparent Reality of Physics · 2011-09-23T22:07:55.389Z · LW · GW

All I see here is Tegmark re-hashed and some assertions concerning the proper definitions of words like "real" and "existence". Taboo those, are you still saying anything?

Have you read any of Paul Almond's thoughts on the subject? Your position might be more understandable if contrasted with his.

Comment by loqi on Rationality Quotes: June 2011 · 2011-06-10T01:47:35.874Z · LW · GW

Intuition is extremely powerful when correctly trained. Just because you want to have powerful intuitions about something doesn't mean it's possible to correctly train them.

Comment by loqi on Rationality Quotes: June 2011 · 2011-06-01T18:54:15.180Z · LW · GW

If you can't think intuitively, you may be able to verify specific factual claims, but you certainly can't think about history.

Well, maybe we can't think about history. Intuition is unreliable. Just because you want to think intelligently about something doesn't mean it's possible to do so.

Jewish Atheist, in reply to Mencius Moldbug

Comment by loqi on Overcoming suffering: Emotional acceptance · 2011-05-30T22:50:13.768Z · LW · GW

Ceteris paribus, I would prefer not to be sad when my friends are sad. But this is incompatible with empathy - I use my sadness to model theirs. I can't imagine "loving" someone while trying not to understand them.

Comment by loqi on Grigori Perelman refused prize because he knows "how to control the universe" · 2011-05-20T17:54:59.300Z · LW · GW

Same here.

Comment by loqi on Should I be afraid of GMOs? · 2011-05-19T20:16:05.855Z · LW · GW

The assumption that we can better determine toxicity with our current understanding of human biology than thousands of years of natural selection seems questionable, but peanuts are certainly a good lower bound on selection's ability.

I also don't have much confidence that the parties responsible for safety testing are particularly reliable, but that's a loose belief.

Comment by loqi on Should I be afraid of GMOs? · 2011-05-19T19:45:15.651Z · LW · GW

That's technically true, but in practice the results of selective breeding have undergone "staged deployment" - populations/farmers with harmful variants would have been selected against. Modern GMOs can reach a global population much more quickly, so harmful variants have the potential to cause more widespread harm.

Comment by loqi on Should I be afraid of GMOs? · 2011-05-19T18:38:40.649Z · LW · GW

Less selected for human non-toxicity?

Comment by loqi on Living Forever is Hard, or, The Gompertz Curve · 2011-05-19T17:48:46.787Z · LW · GW

Vitamin D is really important. There is an established causal link between vitamin D and immune function. It doesn't just enhance your immune response - it's a prerequisite for an immune response.

Anecdote: Prior to vitamin D supplementation, I caught something like 4 colds per year on average. I'm pretty sure I never did better than 2. I started taking daily D supplements about a year and a half ago, and caught my first cold a few days ago. It's worth taking purely as a preventative cold medicine.

Comment by loqi on How some algorithms feel from inside · 2011-05-17T17:51:08.499Z · LW · GW

"I" is how feeling stuff from the inside feels from the inside.

Comment by loqi on Rationalists don't care about the future · 2011-05-16T17:55:00.556Z · LW · GW

Agreed. I don't see significant fungibility here.

Comment by loqi on Rationalists don't care about the future · 2011-05-16T01:05:37.074Z · LW · GW

Enjoying life and securing the future are not mutually exclusive.

Comment by loqi on People who want to save the world · 2011-05-15T23:19:32.364Z · LW · GW

I hereby extend my praise for:

  • Your right action.
  • Its contextual awesomeness.
  • Setting up a utility gradient that basically forces me to reply to your comment, itself a novel experience.

Comment by loqi on The 5-Second Level · 2011-05-14T15:58:23.249Z · LW · GW

I really like this breakdown. I do think the first item can be generalized:

usually automatically activated bias has a feeling attached to it

since positive-affect feelings like righteousness are also useful hooks.

Comment by loqi on Grigori Perelman refused prize because he knows "how to control the universe" · 2011-05-14T15:41:42.537Z · LW · GW

Googling schizophrenia+creativity leads me to suspect that it's more than a cultural expectation. Though I should disclaim the likely bias induced by my personal experience with several creative schizophrenics.

Comment by loqi on Grigori Perelman refused prize because he knows "how to control the universe" · 2011-05-14T14:48:10.749Z · LW · GW

I'd actually be a bit surprised if this were true. My guess is that intelligent madmen are more interesting, so we just pay more attention to them. Now I'm tempted to go looking for statistics.

Not doubting the correlation between madness and mathematics, though.

Comment by loqi on The 5-Second Level · 2011-05-10T19:01:37.674Z · LW · GW

What constitutes a "choice" in this context is pretty subjective. It may be less confusing to tell someone they could have a choice instead of asserting that they do have a choice. The latter connotes a conscious decision gone awry, and in doing so contradicts the subject's experience that no decision-making was involved.

Comment by loqi on The 5-Second Level · 2011-05-10T18:34:42.727Z · LW · GW

Downvoted for spending more words explaining your non-response than it would have taken to just give Nesov the benefit of the doubt and be explicit.

Everyone is capable of misunderstanding trivial things, so the notion "should not need to explain" looks suspicious to me (specifically, it looks like posturing rather than honest communication). Can you explain it, or does it self-apply?

Comment by loqi on Thomas C. Schelling's "Strategy of Conflict" · 2011-04-28T22:22:48.073Z · LW · GW

This is a distinction without a difference. If H bombs D, H has lost

This assumption determines (or at least greatly alters) the debate, and you need to make a better case for it. If H really "loses" by bombing D (meaning H considers this outcome less preferable than proliferation), then H's threat is not credible, and the strategy breaks down, no exotic decision theory necessary. Looks like a crucial difference to me.

That depends on who precommits "first". [...]

This entire paragraph depends on the above assumption. If I grant you that assumption and (artificially) hold constant H's intent to precommit, then we've entered the realm of bluffing, and yes, the game tree gets pathological.

loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.

My mention of Everett branches was an indirect (and counter-productive) way of accusing you of hindsight bias.

Your talk of "convincing you" is distractingly binary. Do you admit that the severity and number of close calls in the Cold War is relevant to this discussion, and that these are positively correlated with the underlying justification for Wei Dai's strategy? (Not necessarily its feasibility!)

I look around at the world since WWII and fail to see this horror. I look at Wei Dai's strategy and see the horror.

Let's set aside scale and comparisons for a moment, because your position looks suspiciously one-sided. You fail to see the horror of nuclear proliferation? If I may ask, what is your estimate for the probability that a nuclear weapon will be deployed in the next 100 years? Did you even ask yourself this question, or are you just selectively attending to the low-probability horrors of Wei Dai's strategy?

Then also, in what you dismiss as "messy real-world noise"

Emphasis mine. You are compromised. Please take a deep breath (really!) and re-read my comment. I was not dismissing your point in the slightest, I was in fact stating my belief that it exemplified a class of particularly effective counter-arguments in this context.

Comment by loqi on Thomas C. Schelling's "Strategy of Conflict" · 2011-04-28T04:09:01.121Z · LW · GW

But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens.

Using the word "intemperate" in this way is a remarkable dodge. Wei Dai's comment was entirely within the scope of the (admittedly extreme) hypothetical under discussion. Your comment contained a paragraph composed solely of vile personal insult and slanted misrepresentation of Wei Dai's statements. The tone of my response was deliberate and quite restrained relative to how I felt.

This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.

Huh? You're "not excusing" the extremity of your interpersonal behavior on the grounds that the topic was fictional, and fiction is more extreme than reality? And then go on to explain that you don't behave similarly toward Eliezer with respect to his position on TORTURE vs SPECKS because that topic is even more fictional?

Is this rationality, or the politics of two-year-olds with nukes?

Is this a constructive point, or just more gesturing?

As for the rest of your comment: Thank you! This is the discussion I wanted to be reading all along. Aside from a general feeling that you're still not really trying to be fair, my remaining points are mercifully non-meta. To dampen political distractions, I'll refer to the nuke-holding country as H, and a nuke-developing country as D.

You're very focused on Wei Dai's statement about backward induction, but I think you're missing a key point: His strategy does not depend on D reasoning the way he expects them to; it's just heavily optimized for this outcome. I believe he's right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.

Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats.

Don't see how this follows. If both countries precommit, D gets bombed until it halts or otherwise cannot continue development. While this is not H's preferred outcome, H's entire strategy is predicated on weighing irreversible nuclear proliferation and its consequences more heavily than the millions of lives lost in the event of a suicidal failure to comply. In other words, D doesn't wield sufficient power in this scenario to affect H's decision, while H holds sufficient power to skew local incentives toward mutually beneficial outcomes.

Speaking of nuclear proliferation and its consequences, you've been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai's strategy. Talking about "murdering millions" without at least framing it alongside the horror of proliferation is not productive.

How are you going to launch those nukes, anyway?

Practical considerations like this strike me as by far the best arguments against extreme, theory-heavy strategies. Messy real-world noise can easily make a high-stakes gambit more trouble than it's worth.

Comment by loqi on Sorting Pebbles Into Correct Heaps · 2011-04-27T01:45:46.105Z · LW · GW

Ack, you're entirely right. "Mark" is somewhat ambiguous to me without context, I think I had imbued it with some measure of goalness from the GP's use.

I have a bad habit of uncritically imitating peoples' word choices within the scope of a conversation. In this case, it bit me by echoing the GP's is-ought confusion... yikes!

Comment by loqi on Offense versus harm minimization · 2011-04-26T20:05:07.903Z · LW · GW

Cat overpopulation is an actual problem; gobs of cats are put down by the Humane Society every day. I don't know what they do with their dead cats, but I find wasting perfectly usable meat and tissue more offensive than the proposed barbecue.

FWIW, I am both a cat owner and a vegetarian.

Comment by loqi on Thomas C. Schelling's "Strategy of Conflict" · 2011-04-26T06:53:40.189Z · LW · GW

I was commenting on what he said, not guessing at his beliefs.

I don't think you've made a good case (any case) for your assertion concerning who is and is not to be included in our race. And it's not at all obvious to me that Wei Dai is wrong. I do hope that my lack of conviction on this point doesn't render me unfit for existence.

Anyone willing to deploy a nuclear weapon has a "bland willingness to slaughter". Anyone employing MAD has a "bland willingness to destroy the entire human race".

I suspect that you have no compelling proof that Wei Dai's hypothetical nuclear strategy is in fact wrong, let alone one compelling enough to justify the type of personal attack leveled by RichardKennaway. Would you also accuse Eliezer of displaying a "bland willingness to torture someone for 50 years" and sentence him to exclusion from humanity?

Comment by loqi on Thomas C. Schelling's "Strategy of Conflict" · 2011-04-26T01:34:43.610Z · LW · GW

So says the man from his comfy perch in an Everett branch that survived the cold war.

What I'm really getting at here is that [a comment you made on LW] unfits you for inclusion in the human race.

Downvoted for being one of the most awful statements I have ever seen on this site, far and away the most awful to receive so many upvotes. What the fuck, people.

Comment by loqi on Human errors, human values · 2011-04-26T01:16:01.582Z · LW · GW

When you say that pain is "fundamentally different" than discomfort, do you mean to imply that it's a strictly more important consideration? If so, your theory is similar to Asimov's One Law of Robotics, and you should stop wasting your time thinking about "discomfort", since it's infinitely less important than pain.

Stratified utility functions don't work.

Comment by loqi on Sorting Pebbles Into Correct Heaps · 2011-04-26T00:49:05.082Z · LW · GW

Isn't the true mark of rationality the ability to reach a correct conclusion even if you don't like the answer?

Winning is a truer mark of rationality.

Comment by loqi on Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing · 2011-04-26T00:29:48.776Z · LW · GW

Your point seems to be roughly that "highly conjunctive arguments are disproportionately convincing". I hate to pick on what may just be a minor language issue, but I really grind to a halt trying to unify this with the phrase "convincing arguments aren't necessarily correct". I don't see much difference between it and "beliefs aren't necessarily correct". The latter is true, but I'm still going to act as if my beliefs are correct. The former is true, but I'm still going to be convinced by the arguments I find most convincing.

Using the word "convincing" as a 1-place predicate distracts from the actual problem, which is simply that you found a weak argument convincing.
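To make the "highly conjunctive" part concrete, here is a toy sketch with invented numbers: a chain of steps that each look quite plausible on their own still has a low probability of being right end to end, which is exactly why such arguments can be more convincing than they deserve.

    # Invented step probabilities, purely for illustration: ten steps
    # that each look "90% likely" leave the whole conjunction at ~35%.
    step_probabilities = [0.9] * 10
    joint = 1.0
    for p in step_probabilities:
        joint *= p
    print(f"each step: 0.9, whole argument: {joint:.2f}")  # ~0.35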

Comment by loqi on Attempts to work around Goedel's theorem by using randomness · 2011-04-25T19:42:18.409Z · LW · GW

Indeed. For me, cryptographic hashing is the most salient example of this. Software like git builds entire castles on the probabilistic certainty that SHA-1 hash collisions never happen.
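A toy sketch of that idea (not git's actual object format): a content-addressed store that simply assumes distinct inputs never share a SHA-1 digest.

    # Toy content-addressed store in the spirit of git; it silently
    # assumes SHA-1 collisions never happen in practice.
    import hashlib

    store = {}

    def put(data: bytes) -> str:
        key = hashlib.sha1(data).hexdigest()
        store[key] = data  # a collision here would overwrite older content
        return key

    key = put(b"hello, world")
    print(key, store[key])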

Comment by loqi on Being Wrong about Your Own Subjective Experience · 2011-04-25T09:31:15.677Z · LW · GW

Hume's (and others') point is that we cannot be wrong about things like, "I am seeing blue right now." If you doubt things like that, you must apply at least that same level of doubt to everything else, such as whether you are really reading a LessWrong comment instead of being chased by hungry sharks right now.

Utterly ridiculous comparison. Ever looked at the stars?

Comment by loqi on Epistle to the New York Less Wrongians · 2011-04-25T08:22:32.587Z · LW · GW

Roughly speaking you are often best off choosing what the rational course of action is and then picking the opposite.

I consider this a symptom of poor scenario design - the availability of unpredictably optimal actions is the key technical difference (there are of course social differences) between open-ended and computer-mediated games. If the setting is incompatible with the characters' motivations, it's impossible to maintain the fiction that they're even really trying, and either the setting's incentives or the characters' motivations (or both in tandem) need revision.

Running a good open-ended game in the presence of imaginative and intelligent players is hard. You either leave lots of material unused, or rob the game of its key strength by over-constraining the set of possible actions.

Comment by loqi on What To Do: Environmentalism vs Friendly AI (John Baez) · 2011-04-25T05:41:57.920Z · LW · GW

Agreed, thanks for bringing this up - I threw away what I had on the subject because I was having trouble expressing it clearly. Strangely, Egan occasionally depicts civilizations rendered inaccessible by sheer difference of computing speed, so he's clearly aware of how much room is available at the bottom.

Comment by loqi on What To Do: Environmentalism vs Friendly AI (John Baez) · 2011-04-25T04:43:11.932Z · LW · GW

Speaking as someone whose introduction to transhumanist ideas was the mind-altering idea shotgun titled Permutation City, I've been pretty disappointed with his take on AI and the existential risks crowd.

A recurring theme in Egan's fiction is that "all minds face the same fundamental computing bottlenecks", serving to establish the non-existence of large-scale intrinsic cognitive disparities. I always figured this was the sort of assumption that was introduced for the sake of telling a certain class of story - the kind that need only be plausible (e.g., "an asteroid is on course to hit us"), and didn't think much more about it.

But from what I recall of Egan's public comments on the issue of foom (I lack links, sorry) he appears to have a firm intuition that it's impossible, grounded by handwaving "halting problem is unsolvable"-style arguments. Which in turn seemingly forms the basis of his estimation of uFAI scenarios as "immensely unlikely". With no defense on offer for his initial "cognitive universality" assumption, he takes the only remaining course of argumentation...

but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable

...derision.

This spring...

Egan, musing: Some people apparently find this irresistible

Greg Egan is...

Egan, screaming: The probabilities are approaching epsilon!!!

Above the Argument.

Egan, grimly: Yudkowsky is off the air.

Comment by loqi on Learned Blankness · 2011-04-21T07:40:25.008Z · LW · GW

It's funny you say that, I once figured out a problem for someone by diagnosing an error message with C++ templates. Wizardry! However, the "base" of the error message looked roughly like

error: unknown type "boost::python::specify_a_return_value_policy_to_wrap_functions_returning<Foo>"

Cryptic, right? It turns out he needed to specify a return value policy in order to wrap a function returning Foo. All I did for him was scan past junk visually looking for anything readable or the word "error".

Comment by loqi on Learned Blankness · 2011-04-21T07:23:38.496Z · LW · GW

My intuition is mostly the opposite, specifically that "bad with computers" people often treat applications like some gigantic, arbitrary natural system with lots of rules to memorize, instead of artifacts created by people who are often trying to communicate function and purpose through every orifice in the interface.

It only makes sense to ask what the words in the menus actually mean if you assume they are the product of some person who is using them as a communication channel.

Comment by loqi on Learned Blankness · 2011-04-21T07:03:32.195Z · LW · GW

I suspect that when examined closely enough, your motivations are also likely to be hard to understand from your point of view.

Comment by loqi on The Modesty Argument · 2011-04-21T06:21:36.651Z · LW · GW

In fact, non-lucid dreams feel extremely real, because I try to change what's happening the way I would in a lucid dream, and nothing changes - convincing me that it's real.

This has been my experience. And on several occasions I've become highly suspicious that I was dreaming, but unable to wake myself. The pinch is a lie, but it still hurts in the dream.