Posts

Comments

Comment by complexmeme on On Not Pulling The Ladder Up Behind You · 2024-05-06T14:52:59.559Z · LW · GW

Chief Bob's hearings might well be public[...] I don't think I've ever been present for an actual court case, just seen them on TV.

This seems to me like an odd example given that you're contrasting with American government, where court hearings are almost entirely public, written opinions are generally freely available, and court transcripts are generally public (though not always accessible for free). I guess the steelman version is that the contrast is a matter of geography or scale? Chief Bob's hearings are in your neighborhood and involve your neighbors, whereas your local court might be across town during the business day and involve disputes between people you don't know. But the American judicial system is a lot more accessible than it plausibly could be while still fulfilling its core function.

Comment by complexmeme on Phallocentricity in GPT-J's bizarre stratified ontology · 2024-02-24T19:56:59.052Z · LW · GW

I'd guess that it's related specifically to "thing" being a euphemism for penis, as opposed to some broader generalization about euphemisms.

Comment by complexmeme on Can you control the past? · 2021-09-02T01:50:44.131Z · LW · GW

In the "software twins" thought exercise, you have a "perfect, deterministic copy". But if it's a perfect copy and deterministic, then you're also deterministic. As you say, compatibilism is central to making this not incoherent; presumably no decision theory is relevant if there are no decisions to be made.

I think a key idea in compatibilism is that decisions are not made at a particular instant in time. If a decision is made on the spot, disconnected from the past, it's not compatibilism. If a decision is a process that takes place over time, the only way Omega's oracular powers can work is if the part of the process that causes you to look like a one-boxer can't be followed by the part of the decision process where you turn on a dime and open both boxes. But the earlier part of the process causes the later, not the other way around.

Comment by complexmeme on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-04T13:43:08.386Z · LW · GW

Yeah, I've seen some posts trying to make similar "lockdown goes too far" arguments (including this one on the SSC Tumblr) that seem to be comparing life with COVID-19 mitigation to normal 2019 life or to that plus some chance of getting sick. Aside from understating the potential for long-term consequences, I think there's a trend in those dollar-cost estimates towards significantly underestimating the negative effects of unmitigated pandemic spread beyond the effect on one's personal health.

(Not that I expect that you disagree with this, but it stands out to me that "let it happen modulo the most vulnerable" is already begging the question. I'd expect if that were driving public policy that the "modulo the most vulnerable" part largely wouldn't happen. It's hard to protect any particular group from infectious disease when it's widespread in the general population.)

Comment by complexmeme on What was your reasoning for deciding whether to raise children? · 2020-05-15T16:14:39.930Z · LW · GW

The book "Selfish Reasons to Have More Kids" was a bit of an influence. The quick summary: People often overestimate the downsides of having children, people often underestimate the upsides of having children, people overestimate the marginal benefit of more labor-intensive methods of parenting, therefore maybe you are underestimating how many children you should have (including underestimating the benefit of tradeoffs where you have more children but use a less-intensive parenting style).

I think choosing to raise a child rather than not will probably make me happier when I'm older, even though it's not very pleasant a lot of the time currently, and there is the constant additional exposure to the risk of terrible tragedy. It gives me a reliable source of significant responsibility, which overall I value. I like that I'm playing a small part in creating the next generation of humans (and thus in creating the whole set of future humans), I think that's cool, though having children is not the only way to do that.

I think that human beings are very psychologically flexible, and I haven't been persuaded by arguments that it's not the case that the vast majority of human beings have lives worth living. I also am not persuaded by arguments that favor autonomy to the extreme that it's bad to bring someone into existence because they had no choice in the matter. While I don't think this amounts to a moral imperative, I think having children is a good thing, if the quality of parenting is even minimally acceptable. Overall, I think having and raising children is good for parents but primarily it's good for the children (and, indirectly, their descendants).

Comment by complexmeme on The Puzzling Linearity of COVID-19 · 2020-04-24T15:05:49.139Z · LW · GW
Infections start among people at the river’s mouth, and expand exponentially amongst them, until most of them are infected. It also spreads up the river, but only by local contagion, so the number of deaths (and cases) grows linearly according to how far up-river it has spread. This scenario, however, seems nothing like what we would expect in almost all countries.

That doesn't seem implausible to me, if the epidemic spreads fastest (and therefore first) in densely-connected areas of the social network graph of in-person contacts and mitigation affects those areas of the graph fastest/most. That plus lag from the implementation of mitigation to the results showing up in the case numbers might make growth look approximately linear for a while. Especially when plotted on a linear plot scaled to previous much-faster growth.

Comment by complexmeme on An alarm bell for the next pandemic · 2020-04-06T14:52:59.585Z · LW · GW
There's no clear reason why mortality and transmissibility of a virus should be inversely correlated.

More quickly fatal diseases leave less time for the immune system to respond and less time for transmission to occur. You're right that this doesn't mean we can't end up with diseases that are both more contagious and more deadly than COVID-19 (we definitely could), but that's not the direction the correlation goes.

Comment by complexmeme on [April Fools] User GPT2 is Banned · 2019-04-03T20:00:46.501Z · LW · GW
In addition, we have decided to apply the death penalty

Less Wrong moderation policy: Harsh but fair.

Comment by complexmeme on Front Row Center · 2018-06-11T14:48:17.202Z · LW · GW

Not only do theaters want to sell the extra seats, they also want people to arrive early, since they're selling concessions and playing ads.

Comment by complexmeme on Sad! · 2018-04-23T17:18:53.271Z · LW · GW

People go through a grieving process when their image of a loved one changes in a way that they perceive as negative or shocking. That process can be very long. It's possible that your grandparents won't be able to get through enough of that process in time to attend their daughter's wedding, or even at all. And if they don't have it together enough to avoid negative emotional outbursts at the event, it may not be for the best if they attend.

If they made this decision in only an hour, however, I think it would definitely be worth encouraging them to sleep on it. The engagement probably is a shock, even if it should be unsurprising; they may have been holding some rationalizations that underplayed the significance of their daughter's relationship.

Even assuming their views on homosexuality never change (they probably assume that, so assume it for the sake of argument), they may eventually regret missing a significant family event. At some point, if they want to have a good relationship with their daughter, they're going to need to make peace with persistent disagreements. If your aunt is considering raising children, maintaining a good relationship with her (and her partner!) is a prerequisite to having a good relationship with those grandchildren. Given that, your grandparents may want to put some work into getting to a place emotionally where they can be happy attending their daughter's wedding.

(Their views on homosexuality may eventually change, too. But trying to persuade them on ideological grounds is more likely to get them to dig in their heels. The most effective persuasion on those grounds is often passive and long-term. Sometimes emphasizing emotions (e.g. people will be sad and disappointed if they don't attend) can be effective, but that may just remind them of their own negative emotions. Focusing on relationship goals is often a good idea when trying to mediate this sort of conflict.)

Comment by complexmeme on 2016 LessWrong Diaspora Survey Results · 2016-05-17T15:55:20.224Z · LW · GW

"Amount of EA money sent to top four GiveWell charities" might be low because GiveWell itself is not included in that list. (I ended up putting my donation to GiveWell under "other", which while technically accurate, wasn't ideal.) In addition to GiveWell specifically, it would have been worth having an option for Effective Altruism's sort of giving (charities directed at obvious, cost-effective ways of saving the lives of / improving the quality of life for the world's poorest), but not to organizations specifically recommended by GiveWell.

Comment by complexmeme on You have a set amount of "weirdness points". Spend them wisely. · 2014-11-29T05:25:52.512Z · LW · GW

after a bit of searching I can't find a definitive post describing the concept

The idiom used to describe that concept in social psychology is "idiosyncrasy credits", so searching for that phrase produces more relevant material (though as far as I can tell nothing on Less Wrong specifically).

Comment by complexmeme on The Robots, AI, and Unemployment Anti-FAQ · 2013-08-17T03:00:58.356Z · LW · GW

I can see why you think I was making that implicit claim, though that wasn't quite the point I was trying to make.

I don't know to what extent the regulation mentioned in the Wikipedia article I linked to was influenced by industry lobbying versus concern about other sorts of risks to infrastructure or public safety. I'm not sure whether the precise cause of the passage of such regulation is that relevant to the regulation's durability in the face of potential benefits from adoption of new technology. Maybe it is, but the precise example of "limit[ing cars] to the same speed as horses" in the original post seems to imply that was something that didn't happen, not just something that did happen for different reasons.

Comment by complexmeme on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T18:14:42.097Z · LW · GW

The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.

I wouldn't be surprised if this was the case, and I'd be very surprised if the end of cheap (at least, much cheaper) petroleum has nothing to do with that.

Comment by complexmeme on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T17:48:15.919Z · LW · GW

If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.

That's not a new thing; that sort of regulation actually happened!

Comment by complexmeme on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T16:18:48.870Z · LW · GW

They see an overall trend of reduction in employment and wages since at least 2000.

And also wage stagnation in contrast to continuing productivity gains since the 1970s.

Comment by complexmeme on Normal Ending: Last Tears (6/8) · 2012-12-26T16:27:57.307Z · LW · GW

that I will be changed again, also against my will, the next time

The next time, it presumably wouldn't be against your will, due to the first set of changes.

Comment by complexmeme on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2012-08-14T18:01:14.011Z · LW · GW

"You have brain damage" is also a theory with perfect explanatory adequacy.... Why not?

This led me to think of two alternate hypotheses:

One is that the same problem underlying the second factor ("abnormal belief evaluation") is at fault, that self-evaluation for abnormal beliefs involves the same sort of self-modelling needed for a theory like "I have brain damage" to seem explanatory (or even coherent). The other is that there are separate systems for self-evaluation and belief-probability-evaluation that are both damaged in the case of such delusions.

One might take the Capgras delusion and similar as evidence that those systems at least overlap, but there's some visibility bias involved, since people who hold beliefs that seem (to them) to be both probable and crazy are likely to conceal those beliefs (see someonewrongonthenet's comment).

Comment by complexmeme on Game Theory As A Dark Art · 2012-07-24T18:56:48.293Z · LW · GW

Agreed. Pretty sure even if the other board members didn't see the exact nature of the trap, they'd still find it obvious that it is a trap, especially considering the source.

Comment by complexmeme on Interlude for Behavioral Economics · 2012-07-12T15:34:28.555Z · LW · GW

Given the context, I was assuming the scenario being discussed was one where the two players' decisions are independent, and where no one expects they may be playing against themselves.

You're right that the game changes if a player thinks that their choice influences (or, arguably, predicts) their opponent's choice.

Comment by complexmeme on Interlude for Behavioral Economics · 2012-07-09T19:35:50.004Z · LW · GW

That last "if you know the other person cooperated" is unnecessary; in a True Prisoner's Dilemma each player prefers defecting in any circumstance.
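The dominance claim can be checked directly against a standard payoff matrix (the specific numbers below are illustrative; any payoffs satisfying the usual Prisoner's Dilemma ordering give the same result):

```python
# Row player's payoffs in a standard Prisoner's Dilemma. Illustrative
# numbers satisfying the usual ordering:
# temptation > reward > punishment > sucker's payoff.
payoff = {
    ("D", "C"): 5,  # temptation: defect against a cooperator
    ("C", "C"): 3,  # reward: mutual cooperation
    ("D", "D"): 1,  # punishment: mutual defection
    ("C", "D"): 0,  # sucker's payoff: cooperate against a defector
}

# Defection strictly dominates: it pays more whatever the opponent does,
# so no "if you know the other person cooperated" qualifier is needed.
for opponent in ("C", "D"):
    assert payoff[("D", opponent)] > payoff[("C", opponent)]
```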

Comment by complexmeme on Living Metaphorically · 2011-11-28T21:25:01.815Z · LW · GW

That's the solution to the Achilles and the Turtle Paradox (also Zeno's), but the Arrow Paradox (in the comment you replied to) is different.

The Arrow Paradox is simply linguistic confusion, I think. Motion is a relation in space relative to different points of time. Zeno's statement that the (moving) arrow is at rest at any given instant is simply false (considered in relation to instants epsilon before or after that instant) or nonsensical (considered in enforced isolation with no information about any other instant).

I never found the Arrow Paradox particularly compelling. For the Achilles and the Turtle Paradox I can at least see why someone might have found that confusing.

Comment by complexmeme on Rhetoric for the Good · 2011-10-25T18:21:05.716Z · LW · GW

That "Engfish" essay is strange. It's right that textbooks and so on encourage students to write in a way that's impersonal and overly verbose. But it doesn't recognize the advantages of academic English. It doesn't even seem to recognize the role (or existence!) of dialects in general. Instead, it takes bad examples of academic English (the writing textbook) and suggests they should be more like bad examples of informal English (the third-grader).

Comment by complexmeme on Blindsight and Consciousness · 2011-09-22T21:35:59.783Z · LW · GW

This implies that some parts of your brain lead to you being conscious, while others don't.

It at least implies that some processes lead to you being conscious, while others don't. The same brain region could be involved in both conscious and unconscious processes.

Comment by complexmeme on Three consistent positions for computationalists · 2011-05-16T16:36:44.847Z · LW · GW

(Didn't realize this site doesn't email reply notifications, thus the delayed response.)

What I'm saying is that someone who answers "algorithms" is clearly not taking that view of substrate-independence, but they could hypothesize that only some side-effects matter. A MOSFET-brain-simulation and a desert-rocks-brain-simulation could share computational properties beyond input-output, even though the side-effects are clearly not identical.

(Not saying that I endorse that hypothesis, just that it's not the same as the "side effects don't matter" version.)

Comment by complexmeme on Three consistent positions for computationalists · 2011-04-15T18:32:06.749Z · LW · GW

the Kolmogorov complexity of a definition of an equivalence relation which tells us that an AND gate implemented in a MOSFET is equivalent to an AND gate implemented in a neuron is equivalent to an AND gate implemented in desert rocks

Isn't that only a problem for those who answer "functions" to question 5? Desert-rocks-AND-gate and MOSFET-AND-gate are functionally-equivalent by definition, but if you're not excluding side-effects it's obvious that they're not computationally equivalent.

Edit: zaph answered algorithms, so your counter-argument doesn't really target him well.

Comment by complexmeme on Updateless anthropics · 2011-02-21T23:34:14.772Z · LW · GW

A few thoughts on cousin_it's problem:

  1. When you calculate the expected outcome for the "deciders say nay" strategy and the "deciders say yea" strategy, you already know that the deciders will be deciders. So "you are a decider" is not new information (relative to that strategy), so don't change your answer. (It may be new information relative to other strategies, where the one making the decision is an individual that wasn't necessarily going to be told "you are the decider" for the original problem. If you're told "you are the decider", you should still conclude with 90% probability that the coin is tails.)

  2. (Possibly a rephrasing of 1.) If the deciders in the tails universe come to the same conclusion as the deciders in the heads universe about the probability of which universe they're in, one might conclude that they didn't actually get useful information about which universe they're in.

  3. (Also a rephrasing of 1.) The deciders do a pretty good job of predicting what universe they're in individually, but the situation is contrived to give the one wrong decider nine times the decision-making power. (Edit: And since you know about that trap in advance, you shouldn't fall into it.)

  4. (Isomorphic?) Perhaps "there's a 90% probability that I'm in the 'tails' universe" is the wrong probability to look at. The relevant probability is, "if nine hypothetical individuals are told 'you're a decider', there's only a 10% probability that they're all in the tails universe".
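The tension described in the points above comes down to a few lines of arithmetic. With illustrative payoffs (these particular dollar amounts are my assumption, not necessarily the ones in the original post): "yea" pays $1000 if the coin was tails and $100 if heads, while "nay" pays $700 regardless, and nine of ten people are deciders given tails versus one of ten given heads.

```python
# Expected values for the two strategies, evaluated (a) ex ante, before
# anyone learns their role, and (b) after naively updating on "you are a
# decider". Payoff amounts are illustrative assumptions.
p_tails = 0.5
yea_tails, yea_heads, nay = 1000, 100, 700

# Ex ante, "nay" is the better strategy.
ev_yea_ex_ante = p_tails * yea_tails + (1 - p_tails) * yea_heads  # 550
ev_nay_ex_ante = nay                                              # 700

# A decider's posterior: 9 of 10 people are deciders if tails, 1 of 10
# if heads, so P(tails | "you are a decider") = 0.9.
p_tails_given_decider = (0.5 * 0.9) / (0.5 * 0.9 + 0.5 * 0.1)

# The naive post-update calculation makes "yea" look better, which is
# the apparent paradox.
ev_yea_post = (p_tails_given_decider * yea_tails
               + (1 - p_tails_given_decider) * yea_heads)  # ~910
```

Point 1 above amounts to saying that the second calculation shouldn't override the first: the strategy was chosen with full knowledge that the deciders would be deciders, so learning "you are a decider" gives no reason to abandon it.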

Comment by complexmeme on Cryonics Questions · 2010-09-01T06:27:34.028Z · LW · GW

Some of your analogies strike me as quite strained:

(1) I wouldn't call the probability of being revived post near-future cryogenic freezing "non-trivial but far from certain", I would call it "vanishingly small, if not zero". If sick and dying and offered a surgery as likely to work as I think cryonics is, I might well reject it in favor of more conventional death-related activities.

(3) My past self has the same relation to me as a far-future simulation of my mind reconstructed from scans of my brain-sicle? Could be, but that's far from intuitive. Also, there's no reason to use "fear" to characterize the opposing view when "think" would work just as well.

(6) What Yvain said.

Comment by complexmeme on Existential Risk and Public Relations · 2010-08-19T02:43:21.026Z · LW · GW

Huh, interesting. I wrote something very similar on my blog a while ago. (That was on cryonics, not existential risk reduction, and it goes on about cryonics specifically. But the point about rhetoric is much the same.)

Anyways, I agree. At the very least, some statements made by smart people (including Yudkowsky) have had the effect of increasing my blanket skepticism in some areas. On the other hand, such statements have me thinking more about the topics in question than I might have otherwise, so maybe that balances out. Then again, I'm more willing to wrestle with my skepticism than most, and I'm still probably a "mediocre rationalist" (to put it in Eliezer's terms).

Comment by complexmeme on Normal Cryonics · 2010-06-02T18:36:56.821Z · LW · GW

Do you think that if someone frozen in the near future is revived, that's likely to happen after a friendly-AI singularity has occurred? If so, what's your reasoning for that assumption?

Comment by complexmeme on Normal Cryonics · 2010-06-02T18:33:31.876Z · LW · GW

But that property is not limited to outcomes of good quality, correct?

Comment by complexmeme on Normal Cryonics · 2010-06-02T18:27:13.186Z · LW · GW

Sure, I'm talking about heuristics. Don't think that's a mistake, though, in an instance with so many unknowns. I agree that my comment above is not a counter-argument, per se, just explaining why your statement goes over my head.

Since you prefer specificity: Why on Earth do you anticipate that?

Comment by complexmeme on Normal Cryonics · 2010-05-31T03:39:31.824Z · LW · GW

I can't argue that cryonics would strike me as an excellent deal if I believed that, but that seems wildly optimistic.