Posts

A Telepathic Exam about AI and Consequentialism 2023-02-22T21:00:21.994Z
You can tell a drawing from a painting 2022-03-07T11:45:33.034Z
Simulacrum Levels of Trust 2021-06-30T10:14:24.512Z

Comments

Comment by alkexr on [deleted post] 2023-07-12T01:12:32.033Z

I'm not an expert either, and I won't try to end the F-35 debate in a few sentences. I maintain my position that the original argument was sloppy. "The F-35 isn't the best for specific wars X, Y and Z, therefore it wasn't a competent military decision" is a non sequitur. "Experts X, Y and Z believe that the F-35 wasn't a competent decision" would be better in this case, because that seems to be the real reason why you believe what you believe.

Comment by alkexr on [deleted post] 2023-07-11T17:36:18.134Z

F-35 aren't the crucial component to winning the kind of wars in Iraq or Afghanistan. They also aren't the kind of weapon that are important to defend Taiwan. They are just what the airforce culture wants instead of being a choice made by a hypercompetent military. 

I mostly agree with your perception of state (or something) competence, but this seems to me like a sloppy argument? True, the US does have to prepare for the most likely wars, but it also has to be prepared for all the wars that don't happen precisely because it was prepared, a.k.a. deterrence. The F-35 may not be the most efficient asset when it comes to e.g. Taiwan, but it's useful in a wide range of scenarios, and it's difficult to predict what exactly one will need, since these platforms have to be planned decades in advance.

Not sure how to put this in a way that isn't overly combative, but since the only point you made where I have domain-specific understanding seems to be sloppy, it makes me wonder how much I should trust the rest. At a glance it doesn't look like airtight reasoning.

EDIT: As a side note, what the air force culture wants is in itself a military consideration. It's often better to have the gear that works well with established doctrine than some other technology that outperforms it on paper.

Comment by alkexr on [deleted post] 2023-07-11T17:05:15.133Z
  • You believe that there is a strong evolutionary pressure to create powerful networks of individuals that are very good at protecting their interests and surviving in competition with other similar networks
  • You believe that these networks utilize information warfare to such an extent that they have adapted by cutting themselves off from most information channels, and are extremely skeptical of what anyone else believes
  • You believe that this policy is a better adaptation to this environment than what anyone else could come up with
  • These networks have adapted by being so extremely secretive that it's virtually impossible to know anything about them
  • You happen to know that these networks have certain (self-perceived) interests related to AI
  • You happen to believe that these networks are dangerous forces and it makes sense to be scared
  • This image that you have of these networks leads to anxiety
  • Anxiety leads to you choosing and promoting a strategy of self-deterrence
  • Self-deterrence leads to these networks having their (self-perceived) interests protected at no cost to themselves

Given the above premises (which, for the record, I don't share), you have to conclude that there's a reasonable chance that your own theory is an active information battleground.

Comment by alkexr on [deleted post] 2023-07-10T10:33:02.365Z
  1. You're saying that these hypothetical elites are hypercompetent to such a Hollywood-esque degree that the normal human constraints that apply to everyone else don't apply to them, because of "out of distribution" reasons. It seems to me that "out of distribution" here is standing in as a synonym for magic.
  2. You're saying that these hypothetical elites are controlling the world thanks to their hypercompetence, but are completely oblivious to the idea that they themselves could lose control to an AI that they know to be hypercompetent relative to them.
  3. It seems to me that lie detection technology makes the scenario you're worried about even less likely? It would be enough for just a single individual from the hypothetical hypercompetent elites to testify under lie detection that they indeed honestly believe that AI poses a risk to them.
  4. It's worth pointing out, I suppose, that the military-industrial complex is still strongly interested in the world not being destroyed, no matter how cartoonishly evil you may believe they are otherwise, unless they are a Cthulhu cult or something. They could still stumble into such a terrible outcome via arms race dynamics, but if hypercompetence means anything, it's something that makes such accidents less, not more, likely.

My arguments end here. From this point on, I just want to talk about... smell. Because I smell anxiety.

Your framing isn't "here is what I think is most likely the truth". Your framing is "here is something potentially very dangerous that we don't know and can't possibly ever really know".

Also, you explicitly, "secretly" ask for downvotes. Why? Is something terrible going to happen if people read this? It's just a blogpost. No, it's not going to accidentally push all of history off course down into a chasm.

Asking for downvotes also happens to be a good preemptive explanation of negative reception. Just to be clear, I downvoted not because I was asked to. I downvoted because of poor epistemic standards.

Do note that I'm aware that very limited information is available to me. I don't know anything about you. I'm just trying to make sense of the little I see, and the little I see strongly pattern matches with anxiety. This is not any sort of an argument, of course, and there isn't necessarily anything wrong with that, but I feel it's still worth bringing up.

Comment by alkexr on Here's the exit. · 2022-11-22T15:06:41.697Z · LW · GW

I immediately recognize the pattern that's playing out in this post and in the comments. I've seen it so many times, in so many forms.

Some people know the "game" and the "not-game", because they learned the lesson the hard way. They nod along, because to them it's obvious.

Some people only know the "game". They think the argument is about "game" vs "game-but-with-some-quirks", and object because those quirks don't seem important.

Some people only know the "not-game". They think the argument is about "not-game" vs "not-game-but-with-some-quirks", and object because those quirks don't seem important.

And these latter two groups find each other, and the "gamers" assume that everyone is a "gamer", the "non-gamers" assume that everyone is a "non-gamer", and they mostly agree in their objections to the original argument, even though in reality they are completely talking past each other. Worse, they don't even know what the original argument is about.

Other. People. Are. Different.

Modeling them as mostly-you-but-with-a-few-quirks is going to lead you to wrong conclusions.

Comment by alkexr on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-01T03:57:33.427Z · LW · GW

(Meta: writing this in separate comment to enable voting / agreement / discussion separately)

If you want to make the case for tactical nuclear deployment not happening (which I hope is the likely outcome), I want to see a model of how you see things developing differently

I'll list a few possible timelines. I don't think any of these is particularly likely, but they are plausible, and together with many other similar courses of events they account for significant chunks of probability mass.

  1. Discontinuity in power in Russia.
  2. Internal turmoil or collapse in Russia (e.g. regions start declaring independence). It becomes clear that nuclear weapons won't save Russia.
  3. Abrupt cut in western support to Ukraine, including ammunition (e.g. due to another big war). Putin thinks he can win without nuclear weapons.
  4. Russian army starts being competent. Putin thinks he can win without nuclear weapons.
  5. Conflict freezes over winter, then turns into boiling-the-frog: events happening too slowly to trigger nuclear response. Over the years defeat slowly becomes an accepted fact in Russia.

Comment by alkexr on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-01T02:58:43.766Z · LW · GW

On Nord Stream sabotage: 

  1. Looks like sabotage. Accidents very rarely look like this. (Very high confidence.)
  2. Probably by state actors. It seems like a task that requires significant resources and planning. Also, there was plenty of military presence in the area; it's hard to believe that non-state actors could have pulled something like this off unnoticed. (Medium confidence.)
  3. It wasn't an ally of Germany. There is always a chance that you get caught / leave evidence, and after attacking the critical infrastructure of an ally no one will have a reason to trust you. None of the allies of Germany stand to gain anything that is anywhere near comparable to that risk (that I can think of). (High confidence.)
  4. Given geographic reach, the countries that could perform something like this are probably those around the Baltic Sea, plus the USA, UK, and France. (Medium confidence.)

That leaves us with Russia and Germany. I don't see what Germany could gain from this. I don't see what Russia could gain from this either, but then Russia has developed a habit of doing things despite having nothing to gain from them. Also, I see some reasons why Russia could think this is a good idea (implicitly threatening the West by demonstrating willingness to use grey-zone warfare against their critical infrastructure, to try to get them to back down).

So possibly Russia. (Low confidence.)

Epistemic status: proof by lack of imagination.

Comment by alkexr on Before Colour TV, People Dreamed in Black and White · 2022-02-02T16:54:58.759Z · LW · GW

Thus I claim we don't know whether people see dreams.

That's a pretty bold claim just a few sentences after claiming to have aphantasia.

Some of my dreams have no visuals at all, just a vague awareness of the setting and plot points. Others are as vivid and detailed as waking experience (or even more, honestly), at least as far as vision is concerned. Dreams can fall anywhere on a spectrum between these extremes, and sometimes they can even be a mixture (e.g. a visual experience of the place and an awareness of characters in that place that don't appear visually).

Yes, people do see dreams. I'm fairly certain I can tell the difference.

Comment by alkexr on Why do we need a NEW philosophy of progress? · 2022-01-27T18:24:02.234Z · LW · GW

Yes, I'm aware of all that, and I agree with your premises, but your argument doesn't prove what you think it does. Let's try to reductio it ad absurdum, and turn the same argument against the possibility of fast technological or scientific feedback cycles. 

If you live in a technologically backwards society (think bronze age), you can't become more advanced technologically yourself, because you'll starve spending your time trying to do science. The technology of society (including agriculture, communication, tools, etc.) needs to progress as a whole. If you live in a scientifically backwards society, you can't have more accurate beliefs, because you'll be burned at the stake by all the people believing in nonsense. Therefore, science and technology can only progress as fast as the majority can adopt it.

And all of the above is true, actually, up to a certain point in history. But once the scientific understanding of society advances to the point where it understands that science is a thing and has a basic understanding of how science works, it can basically create a mesa-feedback-loop. Similarly, once you have technologies like writing and free market capitalism, suddenly it's possible to set up a tech company, sell something worthwhile and in exchange not starve.

And that's the frame for my original comment. I didn't mean to imply that a fast moral feedback loop would involve a single person going on some meditation retreat that is somehow a clever feedback loop in disguise and then come back more moral or whatnot. I think it is possible that there is some innovation, moral or social or otherwise (e.g. a common understanding of common knowledge), that would enable the creation of fast moral and social feedback loops.

So the question, again: what are the necessary conditions for such a feedback loop? Are they present? What would it look like? How would you recognize it if it was happening right in front of you?

(EDIT: spelling)

Comment by alkexr on Why do we need a NEW philosophy of progress? · 2022-01-26T12:33:09.701Z · LW · GW

It seems pretty likely that moral and social progress are just inherently harder problems, given that you can't [...] have fast feedback cycles from reality (like you do when trying to make scientific, technological and industrial progress).

We can't? Have we tried? Have you tried? Is there some law of physics I'm missing? What would a real, genuine attempt to do just that even look like? Would you recognize it if it was done right in front of you?

Comment by alkexr on Why do we need a NEW philosophy of progress? · 2022-01-26T12:16:46.975Z · LW · GW

There are multiple meanings of "progress" afoot here. Tabooing the word, my reading of your point is "moving toward any specific imagined future state of the world we all agree is good is good, therefore moving forward is good".

Comment by alkexr on Calibration proverbs · 2022-01-11T21:19:07.049Z · LW · GW

(Another non-native having a go at it...)

When your advice both ways seems fine,
Calibrate, then make it rhyme.

Comment by alkexr on Is "gears-level" just a synonym for "mechanistic"? · 2021-12-13T21:19:32.402Z · LW · GW

more transparent to outsiders

There is the danger of it being more transparency-illuding instead. (Yeah, I just invented that term, but what did I mean by it?)

Comment by alkexr on Improving on the Karma System · 2021-11-15T19:35:57.236Z · LW · GW

My gut feeling is that attracting more attention to a metric, no matter how good, will inevitably Goodhart it.

That is a good gut feeling to have, and Goodhart certainly does need to be invoked in the discussion. But the proposal is about using a different metric with a (perhaps) higher level of attention directed towards it, not just directing more attention to the same metric. Different metrics create different incentive landscapes to optimizers (LessWrongers, in this case), and not all incentive landscapes are equal relative to the goal of a Good LessWrong Community (whatever that means).

I am not sure what problem you are trying to solve, and whether your cure will not be worse than the disease.

This last sentence comes across as particularly low-effort, given that the post lists 10 dimensions along which it claims karma has problems, and then evaluates the proposed system relative to karma along those same dimensions.

Comment by alkexr on The value of low-conscientiousness people on teams · 2021-06-15T20:16:05.457Z · LW · GW

The way this topic appears to me is that there are different tasks or considerations that require different levels of conscientiousness for the optimal solution. In this frame, one should just always apply the appropriate level of conscientiousness in every context, and the trait conscientiousness is just a bias people have in one direction or the other that one should try to eliminate.

This frame is useful, because it opens up the possibility to do things like "assess required conscientiousness for task", "become aware of bias", "reduce bias", etc. But it may also be wrong in an important way. It's somewhere between difficult and impossible to tell how much conscientiousness is required in any particular case; what's more, even what constitutes an optimal solution may not be obvious. In this frame, trait conscientiousness is not bias, but diversity that nature threw against the problem to do selection with.

I have trouble understanding why, in this case, everyone would need to have a consistently high or consistently low level of it across a wide range of contexts, and why, for example, one can't just try a range of approaches of varying conscientiousness levels and learn to optimize from the experience. It isn't necessary in any of the examples above to have a person involved with consistently low levels of it, just a person who in that particular case takes the low-conscientiousness approach. This way we could still fall back on the interpretation as a bias and blame nature for just being inefficient, doing the evolution biologically instead of memetically.

Comment by alkexr on The reverse Goodhart problem · 2021-06-09T13:04:25.263Z · LW · GW

I think it's an empirical observation.

The world doesn't just happen to behave in a certain way. The probability that all examples point in a single direction without some actual mechanism causing it is negligible.

Comment by alkexr on The reverse Goodhart problem · 2021-06-09T12:53:20.665Z · LW · GW

I ended up using mathematical language because I found it really difficult to articulate my intuitions. My intuition told me that something like this had to be true mathematically, but the fact that you don't seem to know about it makes me consider this significantly less likely.

If we have a collection of variables , and , then  is positively correlated in practice with most  expressed simply in terms of the variables.

Yes, but  also happens to be very strongly correlated with most  that are equal to . That's where you do the cheating. Goodhart's law, as I understand it, isn't a claim about any single proxy-goal pair. That would be equivalent to claiming that "there are no statistical regularities, period". Rather, it's a claim about the nature of the set of all potential proxies.

In a Bayesian language, Goodhart's law sets the prior probability of any seemingly good proxy being a good proxy, which is virtually 0. If you have additional evidence, like knowing that your proxy can be expressed in a simple way using your goal, then obviously the probabilities are going to shift.

And that's how your  and  are different. In the case of , the selection of  is arbitrary. In the case of , the selection of  isn't arbitrary, because it was already fixed when you selected . But again, if you select a seemingly good proxy  at random, it won't be an actually good proxy.

Comment by alkexr on The reverse Goodhart problem · 2021-06-08T21:42:33.225Z · LW · GW

You have a true goal, . Then you take the set of all potential proxies that have an observed correlation with , let's call this . By Goodhart's law, this set has the property that any  will with probability 1 be uncorrelated with  outside the observed domain.

Then you can take the set . This set will have the property that any  will with probability 1 be uncorrelated with  outside the observed domain. This is Goodhart's law, and it still applies.

Your claim is that there is one element,  in particular, which will be (positively) correlated with . But such proxies still have probability 0. So how is that anti-Goodhart?

Pairing up  and  to show equivalence of cardinality seems to be irrelevant, and it's also weird.  is an element of , and this depends on .

Comment by alkexr on The reverse Goodhart problem · 2021-06-08T17:30:17.570Z · LW · GW

Your  is correlated with , and that's cheating for all practical purposes. The premise of Goodhart's law is that you can't measure your true goal well. That's why you need a proxy in the first place.

If you select a proxy at random with the only condition that it's correlated with your true goal in the domain of your past experiences, Goodhart's law claims that it will almost certainly not be correlated near the optimum. Emphasis on "only condition". If you specify further conditions, like, say, that your proxy is your true goal, then, well, you will get a different probability distribution.
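
To make the "random proxy" claim concrete, here is a minimal simulation sketch. The whole setup (a concave true goal, random linear proxies, the 0.3 correlation cutoff, stepping 50 units past the data) is my own illustration rather than anything from the post or the thread: proxies that look fine on the observed distribution routinely drive the true goal far below anything ever observed once they are optimized hard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup, purely for illustration:
# a true goal V with a finite optimum, and random linear proxies U that are
# accepted only if they correlate with V on the "observed" distribution.
d = 20
w_true = rng.normal(size=d)
V = lambda x: x @ w_true - 0.05 * np.sum(x**2, axis=-1)

observed = rng.normal(size=(1000, d))  # states from past experience
v_obs = V(observed)
print(f"best V ever observed: {v_obs.max():.1f}")

# Keep only proxies that *look* good where we have data.
seemingly_good = []
while len(seemingly_good) < 5:
    w = rng.normal(size=d)
    if np.corrcoef(observed @ w, v_obs)[0, 1] > 0.3:
        seemingly_good.append(w)

# Optimize each accepted proxy far beyond the observed domain and evaluate V there.
for w in seemingly_good:
    x_opt = 50 * w / np.linalg.norm(w)
    corr = np.corrcoef(observed @ w, v_obs)[0, 1]
    print(f"in-sample corr(U, V) = {corr:+.2f}, V at U's optimum = {V(x_opt):.1f}")
# Typically V at the proxy's optimum is far below anything in the observed data,
# even though every proxy was positively correlated with V in-sample.
```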

Comment by alkexr on Networks of Trust vs Markets · 2021-06-03T18:42:05.748Z · LW · GW

Closely related: Leaky Delegation: You are not a Commodity (by Darmani).

Comment by alkexr on Why don't long running conversations happen on LessWrong? · 2021-05-31T20:18:31.062Z · LW · GW

Some frames worth considering:

  • Strong Prune, weak Babble among LessWrongers
  • Conversation failing to evolve past the low-hanging fruit
  • People being reluctant to express thoughts that might make their account look stupid in a way that's visible to the entire internet
  • Everyone can participate, and as the number of people involved in a conversation increases it becomes more and more difficult to track all positions
  • Even lurkers like me can attempt to participate, and it's costly in terms of conversational effort to figure out what background knowledge someone is missing
  • Most topics that appear on LessWrong are suited for mental masturbation only, they offer no obvious course of action through which people can decide to care about said topics
  • There is way, way too much content (heck, I've only skimmed through the comments under this post)
  • Long-running conversations don't tend to happen; therefore there is little incentive in delving deep into one topic, so people (well, me at least) end up engaging with more topics, but in a shallow manner; which in turn creates conditions where long-running conversations are less likely
  • Due to the way the platform is designed, the only real way to maintain a long-running conversation between persons A and B is if the pattern of response is ABABABAB..., so either person losing confidence at any point is a single point of failure

I also have a suggestion. After the discussion here inevitably fades, you could write another post in which you summarize the main takeaways and steelman a position or two that look valuable at that time. That might generate further discussion. Repeat. This way you could attempt to keep the flame of the conversation alive. But if you end up doing this, make sure to give the process a name, so that people realize that this is a Thing and that they are able to participate in this-Thing-specific ways.

Comment by alkexr on What's your visual experience when reading technical material? · 2021-05-29T00:20:29.199Z · LW · GW

The first layer of internal visual experience I have when reading is a degree of synesthesia (letters have colors). Most of the time I'm not aware that this is happening. It does make recalling written text easier (I sometimes deduce missing letters, words or numbers from the color).

Then there is the "internal blackboard", which I use for equations or formulas. I use conscious effort to make the equation appear as a visual experience (in its written form). I can then manipulate this image as if the individual symbols or symbol groups were physical objects that can move and react with each other. This apparently allows me to solve more complex equations in my head than most mathematicians can. (I believe this is a learnable skill.)

Finally, there are the visual experiences that I use to understand concepts. I'm not sure how to describe these, because they certainly aren't images of anything that could actually exist. More like structures of shapes, spatial relations and other "sub-visual" experiences. It's not like I can actually visualize an n-dimensional subspace, but it isn't simply a lower-dimensional analogue either. It looks thin, but with a vast inside, in a way that would be contradictory in "normal" visual experience.

Whenever I read about a concept that seems interesting (e.g. Moloch), I pause. Then I take the verbal experience of what I've read, and use it as a guide for some internal thought process to follow. The nature of this process is the creation and manipulation of impossible visual experiences of this kind.

These days my visualization is a lot fainter than it used to be, so faint in fact that sometimes I barely see anything at all, in spite of knowing what I'm (not) seeing. This includes my dreams, and maybe even waking experience (how would I tell?), and I believe this is unnatural. This only seems to have a negative effect on the "internal blackboard", but not on any of the other mechanisms I mentioned.

Comment by alkexr on Zvi's Law of No Evidence · 2021-05-15T14:24:21.155Z · LW · GW

Absence of evidence of X is evidence of absence of X.

A claim about the absence of evidence of X is evidence of:

  • the speaker's belief of the listeners' belief in X,
  • absence of evidence of NOT X,
  • the speaker's intention of changing the listeners' belief in X.

No paradox to resolve here.
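
A minimal numeric sketch of the first line, with made-up numbers: if some evidence E is more likely when X holds than when it doesn't, Bayes' rule forces the absence of E to lower the probability of X.

```python
# Made-up numbers, purely illustrative: evidence E is more likely under X than under not-X.
p_x = 0.5              # prior P(X)
p_e_given_x = 0.8      # P(E | X)
p_e_given_not_x = 0.3  # P(E | not X)

# Bayes: P(X | not E) = P(not E | X) * P(X) / P(not E)
p_not_e = (1 - p_e_given_x) * p_x + (1 - p_e_given_not_x) * (1 - p_x)
p_x_given_not_e = (1 - p_e_given_x) * p_x / p_not_e

print(round(p_x_given_not_e, 3))  # ~0.222 < 0.5: absence of evidence is evidence of absence
```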

Comment by alkexr on Don't Sell Your Soul · 2021-04-07T19:24:39.061Z · LW · GW

Non sequitur. Buying isn't the inverse operation of selling. Both cost positive amounts of time and both have risks you may not have thought of. But it probably is a good idea to go back in time and unsell your soul. Except that going back in time is probably a bad idea too. Never mind. It's probably a good investment to turn your attention to somewhere other than the soul market.

Comment by alkexr on Bureaucracy is a world of magic · 2021-03-31T23:32:29.311Z · LW · GW

These rituals are inefficient in cases where there is mutual trust between all participants. But sticking to formality is a great Schelling fence against those trying to gain an advantage by exploiting unwitting bureaucrats.

Comment by alkexr on What is going on in the world? · 2021-01-21T20:11:45.871Z · LW · GW

The basis of the original post isn't existential threats, but narratives - ways of organizing the exponential complexity of all the events in the world into a comparatively simple story-like structure.

Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take

Memetic tribes are only tangentially relevant here. I didn't really intend to present any argument, just a set of narratives present in some other communities you probably haven't encountered.

Comment by alkexr on What is going on in the world? · 2021-01-18T13:02:24.692Z · LW · GW

The above narratives seem to be extremely concentrated in a tiny part of narrative-space, and that's actually a fairly good representation of what makes LessWrong a memetic tribe. I will try to give some examples of narratives that are... fundamentally different, from the outside view; or weird and stupid, from the inside view. (I'll also try to do some translation between conceptual frameworks.) Some of these narratives you already know - just look around the political spectrum, and notice what narratives people live in. There are also some narratives I find better than useless:

  1. Karma. Terrible parents will likely have children who can't reach their full potential and can't help the world, and who will themselves go on to become terrible parents. Those who were abused by the powerful will go on abusing their power wherever and whenever they have any. Etc. Your role is to "neutralize the karma", to break the part of the cycle that operates through you: don't become a terrible parent yourself, don't abuse your power, etc., even though you were on the receiving end.
  2. The world is on the verge of collapse because the power of humanity through technology has risen faster than our wisdom to handle it. You have to seek wisdom, not more power.
  3. The world is run by institutions that are run by unconscious people (i.e. people who aren't fully aware of how their contribution as a cog to a complex machine affects the world). Most problems in the world are caused by the ignorant operation of these institutions. You have to elevate people's consciousness to solve this problem.
  4. Humans and humanity are evolving through stages of development (according to something like integral theory). Your role is to reach the higher stages of development in your life, and help your environment do likewise.
  5. History is just life unfolding. Your job isn't to plan the whole process, just as the job of a single neuron isn't to do the whole computation. The best thing you can do is just to live in alignment with your true self, and let life unfold as it has to, whatever the consequences (just as a neuron doing anything other than firing according to its "programming" is simply adding noise to the system).
  6. Profit (Moloch) has overtaken culture (i.e. the way people's minds are programmed). The purpose of profit (i.e. the utility function of Moloch that can be reconstructed from its actions) isn't human well-being or survival of civilization, so the actions of people (which is a manifestation of the culture) won't move the world toward these goals. Your role is to raise awareness, and to help reclaim culture from the hands of profit, and put a human in the driver's seat again (i.e. realign the optimization process by which culture is generated so that the resulting culture is going to be aligned with human values).
  7. Western civilization is at the end of its lifecycle. This civilization has to collapse, to make way for a new one that relates to this civilization in the same way the western civilization relates to the fallen Rome. Your role isn't to prevent the collapse, but to start creating the first building blocks which will form the basis for the new civilization.
  8. The world is on the brink of a context switch (i.e. the world will move to a formerly inaccessible region of phase space - or has already done so). Your models of the world are optimized for the current context, and therefore they are going to be useless in the new context (no training data in that region of the phase space). So you can't comprehend the future by trying to think in terms of models; instead you have to reconnect with the process that generated those models. Your role is to be prepared for the context switch so as to mess things up as little as possible, though some of that is inevitable.
  9. Reality (i.e. the "linear mapping" you use to project the world's phase space to a lower dimensional conceptual space through your perception and sensemaking) is an illusion (i.e. has in its Kernel everything that actually matters). Your role is to realize that (and after that your role will be clear to you).
  10. The world is too complex for any individual to understand. Your role is to be part of a collective sensemaking through openness and dialog that has the potential to collectively understand the world and provide actionable information. (In other words, there is no narrative simple enough for you to understand but complex enough to tackle the world's challenges.)
  11. The grand narrative you have to live your life by changes depending on your circumstances, just like it depends on where you are whether you have to turn left or right. Your role is to learn to understand and evaluate the utility of narratives, and to increase your capacity to do so.

This list is by no means comprehensive, but this is taking way too much time, so I'll stop now, lest it should become a babble challenge.

Comment by alkexr on Covid 1/14: To Launch a Thousand Shipments · 2021-01-16T13:57:54.190Z · LW · GW

You'd also have to consider the long-term effects on the incentive landscape of e.g. establishing the precedent of companies getting $4B deals in case of a pandemic regardless of whether their vaccine works or not. In general, doing things the reasonable way has the downside of incentivizing bad actors to extract any free energy you put into the system by being reasonable until you're potentially no better off than the way Delenda Est Club is handling the situation right now. In any case, I don't see any long-term systemic effects even being considered here, so I'd be surprised if the suggestions didn't have some significant fallout further down the line.

Comment by alkexr on Have the lockdowns been worth it? · 2020-10-14T22:58:03.580Z · LW · GW

The lockdown incentivized politicians to establish positions on lockdowns, which has led to people having strong opinions about them. Even assuming no damage from further polarization, we have a roughly 50% chance of having an anti-lockdown government when the next pandemic hits, with a 10% chance of this new incentive being the deciding factor in not enacting a lockdown (or failing to implement it). Even if we assume that only 10% of the effects of this polarization are the result of the lockdown actually happening, with a 1% yearly chance of a pandemic more dangerous in expectation than the current one (~100M dead), we have ~1M QALYs lost, extrapolated worldwide over the next 10 years (while this effect is most pronounced).
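
One way the numbers above could compose into the ~1M figure (my reading; the ~20 QALYs lost per death is an assumption I'm adding to make the units work, not something stated in the comment):

```python
# Back-of-envelope reconstruction of the estimate above; qalys_per_death is assumed.
p_anti_lockdown_gov   = 0.5        # government opposed to lockdowns when it matters
p_deciding_factor     = 0.1        # this new incentive actually swings the decision
share_due_to_lockdown = 0.1        # fraction of the polarization caused by the lockdown itself
p_pandemic_10y        = 0.01 * 10  # 1% yearly chance over the next 10 years
expected_deaths       = 100e6      # expected deaths from the more dangerous pandemic
qalys_per_death       = 20         # assumed conversion factor

qalys_lost = (p_anti_lockdown_gov * p_deciding_factor * share_due_to_lockdown
              * p_pandemic_10y * expected_deaths * qalys_per_death)
print(f"{qalys_lost:,.0f}")  # ~1,000,000
```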

Note: This is just a quick check to see that the effect is at least plausibly of an order of magnitude worth taking into consideration. I'm only somewhat confident that the effect isn't in the opposite direction. I'm only commenting (as opposed to answering) because I primarily expect weak points in my general process of speculation to be pointed out, not because I believe this to be well-informed enough to be useful.

Comment by alkexr on Why do you (not) use a pseudonym on LessWrong? · 2020-05-07T22:03:29.018Z · LW · GW

I live in a social environment where expressing opinions or otherwise giving information about myself could have negative consequences, ranging from mild inconvenience to serious discrimination. I have no intention to hide my real identity from those who know the account, but I do want to hide my account from those who know my real identity (and aren't close friends). I use this name for most online activity.

Comment by alkexr on My experience with the "rationalist uncanny valley" · 2020-04-23T22:00:52.635Z · LW · GW

I've been aware for a while now that having enough awareness to notice being trapped is not enough to step outside the pattern, but I can't step outside this pattern. I also believe that admitting that there is no substitute for practice isn't going to be causally linked to me actually practicing (due to a special case of the same trap), so I'll just go on staying trapped for now I guess.

Comment by alkexr on What will happen to supply chains in the era of COVID-19? · 2020-04-08T01:29:40.471Z · LW · GW

Being self-sufficient and robust as a national economy is accepting a competitive disadvantage relative to a global just-in-time supply chain in times of prosperity in exchange for a competitive advantage during a crisis. Selection pressures will push economies accepting this tradeoff towards being actively interested in a world with more crises.

Comment by alkexr on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-15T10:02:33.784Z · LW · GW

Question: how do postrationality and instrumental rationality relate to each other? To me it appears that you are simply arguing for instrumental rationality over epistemic rationality, or am I missing something?

Comment by alkexr on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-15T10:00:31.797Z · LW · GW

However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.

It feels like calling someone's philosophy poisonous and harmful doesn't advance the conversation, regardless of its truth value, and this proves the point of the main post well.

Comment by alkexr on Tradition is Smarter Than You Are · 2018-09-19T23:07:07.413Z · LW · GW

Being able to speak is probably more important than being as smart as a human. Cultural / memetic evolution is orders of magnitude faster than biological evolution, but its ability to function depends on having a memory better than mortal minds. Speech gives some limited non-mortal memory, as do writing, the printing press, or the internet. These inventions enable more efficient evolution. AI will ramp up evolution to even higher speeds, since external memory will be replaced with internal memory that is both 1) lossless and 2) intelligent. As such I am unconvinced that this would mean slower takeoff speeds. (You just explained that the most important factor in doing well as humans is something humans are not overly good at, instead of the special magic that only humans possess.)

Comment by alkexr on Physics has laws, the Universe might not · 2018-06-10T17:02:27.792Z · LW · GW

People not being able to come up with any idea other than that diseases are a curse of the gods is strong evidence not for diseases being a curse of the gods, but for the ignorance of those people. The most likely answer to that question is either something no one will think of for centuries to come, or simply that the model of separating objects into "sorts of things" is not useful for deciphering the mysteries of the universe, despite being an evolutionary advantage on the ancestral savanna.

Comment by alkexr on Of Two Minds · 2018-05-26T00:45:46.857Z · LW · GW

You might have gone too far with speculation: your theory can be tested. If your model were true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems. It is not immediately obvious how to run such an experiment, though.