Posts

Why don't we vaccinate people against smallpox any more? 2021-04-21T00:08:31.593Z
AI Winter Is Coming - How to profit from it? 2020-12-05T20:23:51.309Z
Implications of the Doomsday Argument for x-risk reduction 2020-04-02T21:42:42.810Z
How to Write a News article on the Dangers of Artificial General Intelligence 2020-02-28T02:14:48.419Z
What will quantum computers be used for? 2020-01-01T19:33:16.838Z
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z

Comments

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-10T23:08:02.785Z · LW · GW

The absolute travel time matters less for disease spread in this case. It doesn't matter how long it would theoretically take to travel to North Sentinel Island if nobody actually goes there for years on end. Disease won't spread to such places naturally.

And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlement on Earth (a difficult task in itself, as such pockets are obscure almost by definition) and plant the virus there, it will most certainly have no trouble bringing it to Mars either.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-10T16:56:01.384Z · LW · GW

I strongly believe that nuclear war and climate change are not existential risks, by a large margin.

For engineered pandemics, I don't see why Mars would be more helpful than any other isolated pockets on Earth - do you expect there to be less exchange of people and goods between Earth and Mars than, say, North Sentinel Island?

Curiously enough, the last scenario you pointed out - dystopias - might just become my new top candidate for an x-risk that Mars colonization could actually help with. I need to think more about it, though.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-09T21:03:29.428Z · LW · GW

Moving to another planet does not save you from misaligned superintelligence.

Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.

Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea.

The only way I can see Musk's position making sense is if it's actually a 4D chess move to crack the brain's algorithm and use it to beat everyone else to AGI, rather than the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-09T20:48:46.452Z · LW · GW

I would love to hear some longevity-related biotech investment advice from rationalists; I (and presumably many others here) expect longevity to be the second-biggest deal in big-picture futurism.

The only investment idea I can come up with myself is for-profit spin-off companies from SENS Research Foundation, but that's just the obvious option for someone without expertise in the field who trusts the most vocal experts.

Although some growth potential has already been lost because the pandemic brought a lot of attention to this field, I think we're still early enough to capture some of the returns.

Comment by maximkazhenkov on How counting neutrons explains nuclear waste · 2021-06-02T12:26:53.807Z · LW · GW

If you want to learn more about ongoing research into superheavy elements:

To me the most exciting prospect of this research is the potential discovery of not just an island, but an entire continent of stability that could open up endless engineering potential in the realm of nuclear chemistry.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:57:27.035Z · LW · GW

No, that's not what I meant; these two issues divide different tribes, but the level of toxicity and fanaticism is similar. Heated debates around US-China war scenarios are very common in overseas Taiwanese/Chinese communities.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:45:08.693Z · LW · GW

I also have a personal interest in trying to keep Lesswrong politics-free, because for me fighting down the urge to engage in political discussions is a burden - like an ex-junkie constantly tempted with easily available drugs. Old habits die hard, so I immediately committed to not participating in any object-level discussions upon seeing the title of this post. I'm not sure whether this applies to anyone else.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:44:49.739Z · LW · GW

I do have a sense that it's less likely to explode in bad ways, and less likely to attract bad people to the site.

I agree with the first part of the sentence but disagree with the second. In my view, Lesswrong's best defense thus far has been a frontpage full of content that appears bland to anyone arriving with a combative attitude from other, more toxic social media environments. Posts like this one, though, stick out like a sore thumb and signal to onlookers that discussions about politics and geopolitics are now an integral part of Lesswrong, even when the discussions themselves have so far been respectful and benign. If my hypothesis is correct, an early sign of deterioration would be an accumulation of newly registered accounts that solely leave comments on one or two politics-related posts.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-27T19:10:29.824Z · LW · GW

Politics is politics. US vs. China is about as divisive and tribal as it gets, on the same level as pro- vs. anti-Trump. Would you encourage political discussions of the latter type on Lesswrong, too?

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-21T16:12:09.707Z · LW · GW

Why couldn't land-based delivery vehicles become autonomous, though? That would also cut out the human in the loop.

One reason might be that autonomous flying drones are easier to realize. It is true that air is an easier environment to navigate than the ground, but landing and taking off at the destination could involve very diverse and unpredictable situations. You might run into the same long-tail problem as self-driving cars, especially since a drone that can lift several kilos has dangerously powerful propellers.

Another problem is that flying vehicles in general are energy-inefficient because they must constantly fight gravity, and even more so at long distances (a milder cousin of the rocket equation's tyranny: extra range needs extra battery, which itself must be lifted). Of course you could use drones just for the last mile, but that's an even smaller pool to squeeze value out of.
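
For a rough sense of the energy numbers, here's a hover-power sketch from actuator-disk theory (drone mass, rotor geometry, and trip profile are all made-up but plausible values):

```python
import math

# Ideal induced hover power: P = T^1.5 / sqrt(2 * rho * A)
# (actuator-disk theory; real rotors need roughly 1.5-2x this).
g, rho = 9.81, 1.225              # gravity (m/s^2), air density (kg/m^3)
mass = 10.0                       # kg: drone plus a few kg of payload (assumed)
radius, n_rotors = 0.3, 4         # assumed quadcopter geometry, rotor radius in m

thrust = mass * g                                          # N
area = n_rotors * math.pi * radius ** 2                    # total disk area, m^2
p_hover = 1.7 * thrust ** 1.5 / math.sqrt(2 * rho * area)  # W, with losses

trip_s = 10_000 / 15              # 10 km at 15 m/s (treating forward flight ~ hover)
kwh = p_hover * trip_s / 3.6e6
print(f"{p_hover:.0f} W, {kwh:.2f} kWh per package")  # ~1 kW, ~0.2 kWh
# A single electric van at a few tenths of a kWh per km amortizes its
# energy over hundreds of packages per route.
```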

In general, delivery drones seem ill-suited to densely populated urban environments, where landing spots are hard to come by and a few individual trips can serve an entire apartment building. And that's where most of the world will live anyway.

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-21T01:22:38.108Z · LW · GW

Lawnmowers are also very loud, yet they are widely tolerated (more or less). Plus, delivery drones only need to drop off the package and fly away; the noise pollution will last only a few seconds. I also don't see why it would necessarily be unpredictable; drones don't get stuck in traffic. Maybe a dedicated delivery window each day becomes an industry standard.

But the real trouble I see with delivery drones is: what's the actual point? What problem is being solved here? Current delivery logistics work very well; I don't see much value being squeezed out of even faster or more predictable delivery. Looks like another solution in search of a problem to me.

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-19T02:27:34.042Z · LW · GW

I share this sentiment. Shockingly little has happened in the last 20 years, good or bad, in the grand scheme of things. Our age might become a blank spot in the memory of future people looking back at history: the time when nothing much happened.

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-19T01:56:49.806Z · LW · GW

Is there any provision that allows members to be kicked out of NATO?

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-08T22:38:46.099Z · LW · GW

It's always an emergency, lives are always at stake. That's just the nature of the pharmaceutical business. 

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-08T22:37:32.017Z · LW · GW

It's the perception that matters.

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-07T22:27:06.616Z · LW · GW

I think it's mostly the setting of a precedent of stripping away intellectual property rights for political expediency that is worrisome. It's a small step in undermining the rule of law, but a step nonetheless. The symbolic gesture is the problem; it signals to the public that such moves are now not only acceptable, but applaudable.

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-07T18:25:51.776Z · LW · GW

The stock market disagrees.

Comment by maximkazhenkov on The Fall of Rome, III: Progress Did Not Exist · 2021-04-25T14:18:38.733Z · LW · GW

I wasn't trying to argue anything in particular, I'm just using comments as a notebook to keep track of my own thoughts. I'm sorry if it sounded like I was trying to start an argument.

Comment by maximkazhenkov on The Fall of Rome, III: Progress Did Not Exist · 2021-04-25T12:35:45.682Z · LW · GW

The term "unavoidable innovation" really irks me. It has become this teacher's password for all the world's uncomfortable questions. Why was Malthus wrong? Innovation! How do we prevent civilizational collapse? Innovation! How do we solve competition and conflicts for limited resources? Innovation! How can we raise the standard of living without compromising the environment? Innovation!

As if life were fair and nature's challenges were all calibrated to our abilities, such that every time we run into population limits, the innovation fairy appears and offers us a way out of the crisis. As if real disaster could only ever result from corruption, greed, power struggles and, y'know, things that generally fit our moral aesthetics about how things ought to go wrong; things that would make a good Game of Thrones episode.

Certainly not mundane causes like mere exponential population increase. Because that would imply that Malthus was (at least sometimes) right, that life was a ruthless war of all against all, a rapacious hardscrapple frontier. An implication too horrible to ever be true.

I'm not arguing that the Malthusian trap explains all the civilizational collapses in history, or even Rome in particular. But it is the default failure mode, because exponential growth is fast and unbounded; to avoid it, a civilization has to A) prevent population growth altogether, B) consistently outpace population growth with innovation, or C) collapse well before population pressure becomes a problem.
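
To illustrate why B) is a treadmill rather than a one-time fix, here's a toy sketch (all numbers are arbitrary assumptions; only the growth shapes matter):

```python
# Toy Malthusian race: population compounds exponentially while
# "innovation" adds a fixed increment of carrying capacity per year.
pop, capacity = 1.0, 10.0
GROWTH_RATE = 0.02       # 2% population growth per year (assumed)
INNOVATION_GAIN = 0.5    # capacity added per year (assumed, generous)

year = 0
while pop < capacity:
    pop *= 1 + GROWTH_RATE
    capacity += INNOVATION_GAIN
    year += 1
print(year)  # crosses after ~250 years; any linear (or polynomial)
             # innovation term only delays the crossing, never prevents it
```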

Comment by maximkazhenkov on Thiel on secrets and indefiniteness · 2021-04-22T22:40:11.645Z · LW · GW

Biotech startups are an extreme example of indefinite thinking. Researchers experiment with things that just might work instead of refining definite theories about how the body’s systems operate.

Comment by maximkazhenkov on Thiel on secrets and indefiniteness · 2021-04-21T11:21:57.972Z · LW · GW

I find Thiel's writing too narrative-driven. Persuasive, but hardly succinct. Somehow geographical discoveries, scientific progress and ideas of social justice all fit under the umbrella term "secrets", and... there is some common pattern underlying our failure in each of these areas? Or is one the cause of the others? What am I supposed to learn from these paragraphs? Thiel himself seems very "indefinite" in his critique.

Incrementalism is bad, but biotech startups should nonetheless "refine definite theories" instead of experimenting at random? Isn't "refining definite theories" a prime example of incrementalism, and a strategy you would expect more from established institutions anyway? It seems biotech companies can do no right. You could just as easily argue that "refining definite theories" is itself indefinite thinking: instead of focusing on developing a concrete product, you're keeping your options open by doing general theory that might come in handy.

In general, this writing feels more like literary critique than a concrete thesis. I can agree with the underlying sentiment, but I don't feel like I'm walking away with a clearer understanding of the problem after reading.

Comment by maximkazhenkov on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-04-17T11:43:51.064Z · LW · GW

Our careers span decades. Maybe being sleep deprived for a few years can work out, but this is unsustainable in the long run. Steve Jobs died young. Nikola Tesla wrote love letters to his pigeon. Elon Musk’s tweets suggest that he may not be thinking clearly. Meanwhile, Jeff Bezos gets a full 8 hours.

This is motivated reasoning. Taking Elon Musk vs. Jeff Bezos as the example: if their sleep patterns were reversed, you could just as easily have argued "See, that's why Bezos's rocket company isn't as successful as Musk's."

Comment by maximkazhenkov on What will GPT-4 be incapable of? · 2021-04-06T23:49:00.017Z · LW · GW

The irony is strong with this one

Comment by maximkazhenkov on TAI? · 2021-03-31T09:50:39.247Z · LW · GW

This is the 3D printing hype all over again. Remember how every object in sight was going to be made on a 3D printer? How we'd never need to go to a store again because we'd just download the blueprint for every product from the internet and make it ourselves? How we were going to print our clothes, furniture, toys and appliances at home for pennies of raw materials and electricity? Yeah, right.

So let me throw down the exact opposite predictions for social implications, as if there were absolutely 0 innovation in AI:

  • AI continues to try to shoehorn itself into every product imaginable and mostly fails, because it's a solution desperately looking for a problem
  • Almost no labor (big exception: self-driving) has been replaced by robots. The robots that do exist are not ML-based
  • Universal Basic Income doesn't see widespread adoption, and it has nothing to do with AI one way or another
  • <1% of YouTube views are of AI-generated content
  • Space is literally the worst place to apply AI - the stakes couldn't be higher, the training data couldn't be sparser, and the tasks are so varied and complex that they stretch even the generalization capability of human intelligence; it's the pinnacle of AI hubris to think AI will "revolutionize" every single field

(I use ML and AI interchangeably because AI in the broad sense just means software at this point)

In fact, since I don't believe in slow takeoff, I'll go one better: these are my predictions for what will actually happen right up until FOOM.

It's time for a reality check, not only for AI but for digital technologies in general (AR/MR, folding phones, 5G, IoT). We wanted flying cars; instead we got AI-recommended 140 characters.

Comment by maximkazhenkov on Comments on "The Singularity is Nowhere Near" · 2021-03-18T02:52:11.094Z · LW · GW

If you swapped out "AGI" for "Whole Brain Emulation" then Tim Dettmers' analysis becomes a lot more reasonable.

Comment by maximkazhenkov on Dark Matters · 2021-03-17T03:50:24.092Z · LW · GW

And with enough epicycles you can fit the motion of the planets with geocentrism. If MOND supporters can dismiss the Bullet Cluster, they'll dismiss any future evidence, too.

Comment by maximkazhenkov on The average North Korean mathematician · 2021-03-08T14:13:35.982Z · LW · GW

Also the note about incentives being larger in North Korea also applies to much of eastern Europa to a lesser degree, where qualifying for imo is seemingly enough to get access to any university

I think that's the case anywhere; qualifying for IMO is a pretty big deal.

Comment by maximkazhenkov on Thoughts On Computronium · 2021-03-05T19:22:53.752Z · LW · GW

According to this post, computers today are only 3 orders of magnitude away from the Landauer limit. So it ought to be literally impossible for the human brain to be six orders of magnitude more efficient. Also, how the hell is the brain supposed to carry out 20 petaFLOPS with only 100 billion neurons and a firing rate of a few dozen Hertz? The estimate seems way off to me.
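
As a sanity check on where such a figure could even come from, here's a back-of-the-envelope sketch (neuron and synapse counts are textbook ballpark values, not numbers from the linked post):

```python
# Two readings of "brain operations per second" (order-of-magnitude
# ballpark assumptions throughout).
NEURONS = 1e11               # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4    # ~10,000 synapses per neuron (ballpark)
FIRING_RATE_HZ = 30          # "a few dozen Hertz"

# Reading 1: one op per neuron spike
print(NEURONS * FIRING_RATE_HZ)                        # ~3e12 ops/s
# Reading 2: one op per synaptic event
print(NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ)  # ~3e16 ops/s
```

Counting per-neuron spikes falls four orders of magnitude short of 20 petaFLOPS; the figure only works out if every synaptic event is billed as a full floating point operation, which may be where the estimates diverge.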

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-04T00:37:44.391Z · LW · GW

See, that's why I asked what the incentive is to switch to proof of stake, not why it's better. As with climate change, this is a coordination problem.

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-03T07:25:44.153Z · LW · GW

Sorry, that's what I meant to ask.

Comment by maximkazhenkov on How to end stagnation? · 2021-03-02T03:24:49.458Z · LW · GW

At this point good faith has broken down in this argument; we should stop.

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-02T03:22:50.742Z · LW · GW

You're just delegating the problem to an observer-reputation system that has the same problem one level deeper: who actually has an incentive to align observers' reputations with what actually happened?

Comment by maximkazhenkov on How to end stagnation? · 2021-03-02T03:11:43.597Z · LW · GW

I can't see any structured reasoning steps in your argument.

Comment by maximkazhenkov on How to end stagnation? · 2021-03-02T03:02:37.137Z · LW · GW

The damage is irreversible; once the bureaucracy takes on a life of its own, the incentives are aligned to drive us down this spiral of madness. Without some drastic event like the creation of AGI or World War 3, the only way I see humanity coming out of this age of stagnation is starting over on a different planet. Sure, colonizing Mars is hard, but dismantling a bureaucratic nightmare without violence is impossible (I'd love to learn about any historical examples to the contrary). That's what I believe Elon Musk really means when he says we must back up our civilization.

Comment by maximkazhenkov on How to end stagnation? · 2021-03-02T02:15:40.649Z · LW · GW

I think globalization is actually detrimental to progress. In a globalized world, technological innovations spread so quickly that whoever fronts the initial investment of capital and effort ends up the sucker left standing in the rain. China's entire success story is based on this.

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-02T01:55:12.888Z · LW · GW

What incentive is there for a broad switch to proof of stake?

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-02T01:46:59.629Z · LW · GW

The Bitcoin rally from Tesla's investment didn't last long; instead, TSLA dropped about 15% over the last 3 weeks. As an investor, I personally was not thrilled with this move from Tesla.

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-02T01:37:24.536Z · LW · GW

But can you find the parts for a specific model of machinegun?  A rocket or even guided missile launcher?

Neither of which is a meaningful challenge to the state's monopoly on violence. You can even legally own tanks, but these weapons are very powerful only in limited contexts. Sure, you could blow up a few people before being subdued, but an assault rifle or even a truck would do the job just as well. You'd have to go to WMDs before the situation becomes problematic.

The more fundamental problem with this argument is that once important state secrets/WMDs have been stolen, the damage is done; the fact that someone is trying to make an extra buck off of them afterwards is rather trivial.

A hitman who isn't definitely a cop?  (I've never actually heard of the dark web successfully being used for this, but combined with Smart Contracts for escrow it's at least possible ).

The problem with hiring a hitman on the web is first and foremost that there is no incentive for the hitman to follow through. There is obviously no legal recourse for the buyer, and using cryptocurrency disincentivizes the hitman even further: if you're untraceable, you're also reputationless. I don't see what problem smart contracts solve here; at the end of the day, you have to interact with the real world to enforce your contracts.

Someone who accepts crypto for their business can easily set up the system in a way that it is optional which transactions they report.  Sort of how cash only businesses - or at a lower level, people paid cash tips - probably nearly all cheat to some degree on their taxes.  Some of the cash can be spent on personal expenses without any records kept.  Crypto makes this way easier - no longer is there large sums of cash around, it's harder for the IRS to audit, and the same 'benefits' apply - someone can accept crypto for a transaction to a randomized new account they control, and sometimes they import that transaction into the books they show the government, and sometimes they don't.

You could do the same thing with gold, yet no one bothers to. That's basically my knock-down argument against crypto being revolutionary in general: there is no important respect in which crypto differs from gold. I guess we're not really disagreeing here; I think governments will just slap some regulations on coin exchanges and that will be the end of the story - no need for more drastic measures.

Comment by maximkazhenkov on How might cryptocurrencies affect AGI timelines? · 2021-03-01T09:06:15.566Z · LW · GW

The most obvious use case is as a store of value - "digital gold", as Peter Thiel likes to call it. Bitcoin is limited in supply and has enough network effect behind it to succeed in the long term; other cryptocurrencies much less so, regardless of any technical advantages. I don't see crypto getting banned, because A) there is too much institutional investment in it and B) it's no more a threat to Western governments than gold is.

Comment by maximkazhenkov on Useless knowledge; why people resist education improvement · 2021-02-26T18:52:37.454Z · LW · GW

Downvoted for bringing political content to Lesswrong. Even though I'm inclined to agree with many of the points in the post, it's more important to nip bad trends in the bud. There is a dangerous tendency for social media platforms to degrade into cesspools of political battlegrounds; I've witnessed it before and would prefer not to witness it again here.

Comment by maximkazhenkov on Exponential growth is the baseline · 2021-02-23T00:15:39.355Z · LW · GW

If declining population growth is a cause of stagnation, how do we solve it?

Solve it? I see declining population growth as God's greatest gift to humanity this century - more so than penicillin and the Haber process combined - one that has at least temporarily staved off a return to a Malthusian world. But if you insist on solving the issue, well, Moloch will take care of that soon enough.

With continued progress, we are not limited by land area and fossil fuels. We are not even limited to planet Earth. We are limited only by the speed of light, the Hubble expansion constant, and the heat death of the universe. If we hit those limits, I’d say humanity had a pretty good run.

That's... not very reassuring? Anything beyond the solar system is completely irrelevant as far as exponential growth is concerned, since travel times are so long that expanding to the stars won't relieve local population pressure at all. You seem to be analyzing everything in the exponential context except when it comes to resource limits.
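
To make the arithmetic concrete, a toy comparison (doubling time and stellar density are arbitrary assumptions; the conclusion is insensitive to them):

```python
import math

# Exponential population growth vs. cubic resource growth from
# expanding outward at lightspeed (toy numbers; only shapes matter).
DOUBLING_TIME = 100        # years per population doubling (assumed)
STAR_DENSITY = 0.004       # stars per cubic light-year (rough local value)

def population(t_years):
    return 1e10 * 2 ** (t_years / DOUBLING_TIME)

def reachable_stars(t_years):          # sphere of radius t light-years
    return STAR_DENSITY * (4 / 3) * math.pi * t_years ** 3

for t in (1_000, 2_000, 5_000):
    print(t, population(t) / reachable_stars(t))
# People per reachable star: ~6e5, ~8e7, ~5e15 -- an exponential beats
# any t^3 resource growth, so expansion can't relieve the pressure.
```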

Comment by maximkazhenkov on Current cryonics impressions · 2021-02-07T03:33:00.720Z · LW · GW

One argument I can think of for signing up for cryonics sooner rather than later is to create social proof for your extended family. The idea of evading death by freezing yourself and awaiting future technology for a cure might be too outlandish for older people to seriously consider, and leading by example might ease that process. And while you can perhaps afford to wait for better procedures or better evidence about the efficacy of cryonics, your parents/grandparents probably can't.

Comment by maximkazhenkov on Intuitions about utilities · 2021-02-07T02:29:49.446Z · LW · GW

I think your example fails to accurately represent your actual values, even worse than the original thought experiment does. Nothing in the world can be worth 1000x someone you truly care about, not even all the other people you care about combined; human brains just can't represent that sort of emotional weight. It would have been more useful to weigh your sister against a friend of yours.

But honestly, there is no point in substituting money for "true utility" in Newcomb's problem at all, because unlike the Prisoner's Dilemma, there are no altruism/virtue-signaling considerations to interfere with your decision. You genuinely want the money when no ethical conundrum is involved. So maybe just put $500,000 in each box?

What really confuses me about Newcomb's problem is why rationalists think it is important. When is the future so reliably predictable that acausal trades become important, in a universe where probability is engrained in its very fabric AND where chaotic systems are abundant?

I've since played at least one other Prisoner's Dilemma game – this one for team points rather than candy – and I cooperated that time as well. In that situation, we were very explicitly groomed to feel empathy for our partner (by doing the classic '36 questions to fall in love' and then staring into each other's eyes for five minutes), which I think majorly interferes with the utility calculations.

Sounds like the exercise was more about team building than about demonstrating the Prisoner's Dilemma.

Comment by maximkazhenkov on Technological stagnation: Why I came around · 2021-01-24T13:59:19.557Z · LW · GW

I agree with your arguments but disagree with your value judgment - why shouldn't digital entertainment be considered progress? What's the point of "physical progress" once people's basic needs are satisfied (which we haven't achieved yet)? If humanity ever becomes a Kardashev III civilization, what would we do with all that matter and energy besides creating digital Disney parks for septillions of immortal souls? What's your vision for humanity's future in the best case?

Comment by maximkazhenkov on Matt Levine on "Fraud is no fun without friends." · 2021-01-20T00:20:53.828Z · LW · GW

And that might be great for society. We don't want people working a job primarily because it's fun and they like their coworkers. We want them working a job because they're providing valuable goods and services that meet pre-existing demand.

Who's "we" and who's "society"?

Comment by maximkazhenkov on AR Glasses: Much more than you wanted to know · 2021-01-16T06:49:04.511Z · LW · GW

I’d be very interested in hearing arguments why this actually wouldn’t be that big of a deal.

It won't be a big deal because smartphones were not a big deal. People still wake up, go to work, eat, sleep, and sit for hours in front of a screen - TV, smartphone, AR, who cares. No offense to Steve Jobs, but if the greatest technological achievement of your age is the popularization of the smartphone, "exciting" is about the last adjective I'd describe it with.

Speaking of Black Mirror, I find the show to be a pretty accurate representation of the current intellectual Zeitgeist (not the future it depicts; I mean the show itself) - pretentious, hollow, lame, desperate to signal profundity through social commentary.

Comment by maximkazhenkov on How long till Inverse AlphaFold? · 2020-12-22T15:21:55.851Z · LW · GW

By "bias" I didn't mean biases in the learned model, I meant "the class of proteins whose structures can be predicted by ML algorithms at all is biased towards biomolecules". What you're suggesting is still within the local search paradigm, which might not be sufficient for the protein folding problem in general, any more than it is sufficient for 3-SAT in general. No sampling is dense enough if large swaths of the problem space is discontinuous.

Comment by maximkazhenkov on Ideal Chess - drop chess perfected · 2020-12-19T11:29:15.004Z · LW · GW

Thank you for the response, I will definitely check out these variants. I'm trying to understand what sort of simple rules let deeply strategic games emerge out of them, and how inventors of such games come up with these ideas.

Comment by maximkazhenkov on The next AI winter will be due to energy costs · 2020-12-19T08:09:21.233Z · LW · GW

6 orders of magnitude from FLOPs to bit erasure conversion

Does it take a million bit erasures to conduct a single floating point operation? That seems a bit excessive to me.
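
For scale, a rough calculation (the hardware figures are my own ballpark assumptions, not numbers from the post):

```python
import math

# Landauer limit: minimum energy per irreversible bit erasure
k_B, T = 1.380649e-23, 300.0            # Boltzmann constant (J/K), room temp (K)
E_BIT = k_B * T * math.log(2)           # ~2.9e-21 J per bit

# Assumed current hardware: ~1e14 FLOP/s at ~300 W (ballpark GPU figures)
E_FLOP = 300.0 / 1e14                   # ~3e-12 J per FLOP

print(E_FLOP / E_BIT)          # ~1e9: headroom if 1 FLOP = 1 bit erasure
print(E_FLOP / (1e6 * E_BIT))  # ~1e3: headroom under the 10^6 conversion
```

So the "3 orders of magnitude from Landauer" framing apparently already bakes in the million-erasures-per-FLOP conversion; whether a floating point operation really has to irreversibly erase ~10^6 bits, rather than something closer to the thousands of gate switchings in a hardware multiply-add, is exactly the question.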

Comment by maximkazhenkov on Ideal Chess - drop chess perfected · 2020-12-19T06:17:29.380Z · LW · GW

Excellent post!

I would greatly appreciate a follow-up, perhaps a compilation of variants for other popular games such as Go or Poker?