Posts

Why don't we vaccinate people against smallpox any more? 2021-04-21T00:08:31.593Z
AI Winter Is Coming - How to profit from it? 2020-12-05T20:23:51.309Z
Implications of the Doomsday Argument for x-risk reduction 2020-04-02T21:42:42.810Z
How to Write a News article on the Dangers of Artificial General Intelligence 2020-02-28T02:14:48.419Z
What will quantum computers be used for? 2020-01-01T19:33:16.838Z
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z

Comments

Comment by maximkazhenkov on How much should we value life? · 2021-09-11T23:35:15.383Z · LW · GW

The ultimate question is the one of temporal discounting, and that question depends on how much we do/should value those post-singularity life years. If values can't shift, then there isn't really anything to talk about; you just ask yourself how much you value those years, and then move on. But if they can shift, and you acknowledge that they can, then we can discuss some thought experiments and stuff.

I think we're getting closer to agreement as I'm starting to see what you're getting at. My comment here would be that yes, your values can shift, and they have shifted after thinking hard about what post-Singularity life will be like and getting all excited. But the shift it has caused is a larger multiplier in front of the temporally discounted integral, not the disabling of temporal discounting altogether.

Actually, I wonder what you think of this. Are you someone who sees death as a wildly terrible thing (I am)?

Yes, but I don't think there is any layer of reasoning beneath that preference. Evading death is just something that is very much hard-coded into us by evolution.

In the pizza example, I think the value shift would moreso be along the lines of "I was prioritizing my current self too much relative to my future selves". Presumably, post-dinner-values would be incorporating pre-dinner-self.

I don't think that's true. Crucially, there is no knowledge being gained over the course of dinner, only value shift. It's not like you didn't know beforehand that pizza was unhealthy, or that you would regret your decision. And if your post-dinner self does not take explicit steps to manipulate future values, the situation will repeat itself the next day, and the day after, and so on, hundreds of times over.

Comment by maximkazhenkov on How much should we value life? · 2021-09-11T14:24:59.736Z · LW · GW

I think they can inspire you to change your values.

Taken at face value, this statement doesn't make much sense, because it immediately raises the question: change according to what? And in what sense isn't that change already part of your values? My guess here is that your mental model says something like "there's a set of primal drives inside my head, like eating pizza, that I call 'values', and then there are my 'true' values, like a healthy lifestyle, which my conscious, rational mind posits, and I should change my primal drives to match my 'true' values" (pardon me for straw-manning your position, but I need it to make my point).

A much better model, in my opinion, would be that all these values belong to the exact same category. These "values" or "drives" then duke it out among themselves, and your conscious mind merely observes and makes up a plausible-sounding, socially acceptable story about your motivations (this is, after all, the evolutionary function of human intelligence in the first place, as far as I know), like a press secretary sitting silently in the corner while the generals have a heated debate.

At best, your conscious mind might act as a mediator between these generals, coming up with clever ideas that push the Pareto frontier of these competing values so that they can all be satisfied to a greater degree at the same time. Things like "let's try e-cigarettes instead of regular tobacco - maybe that satisfies both our craving for nicotine and our long-term health!".

Even highfalutin values like altruism or long-term health are induced by basic drives like empathy and social status. They are no different from, say, food cravings, not even in terms of inferential distance. Compare, for instance, "I ate pizza, it was tasty and I felt good" with "I was chastised for eating unhealthily, it felt bad". Is there really any important difference here?

You could of course deny this categorization and insist that only a part of this value set represents your true values. The danger here isn't that you'll end up optimizing for the wrong set of values, since who's to tell you what "wrong" is; it's that you'll be perpetually confused about why you keep failing to act upon your declared "true" values - why your revealed preferences through behavior keep diverging from your stated preferences - and you'll end up making bad decisions. Decisions that are suboptimal even when judged only against your "true" values, because you have not been leveraging your conscious, rational mind properly, instead feeding it bad epistemics.

As an example, consider an immature teenager who doesn't care at all about his future self and just wants to have fun right now. Would you say, "Well, he values what he values."?

Haha, unfortunately you posed the question to the one guy out of 100 who would gladly answer "Absolutely", followed by "What's wrong with being an immature teenager?"

On a more serious note, it is true that our values often shift over time, but it's unclear to me why that makes regret minimization the correct heuristic. Regret can occur in two ways: One is that we have better information later in life, along the lines of "Oh I should have picked these numbers in last week's lottery instead of the numbers I actually picked". But this is just hindsight and useless to your current self because you don't have access to that knowledge. 

The other is through value shift, along the lines of "I just ate a whole pizza and now that my food-craving brain-subassembly has shut up my value function consists mostly of concerns for my long-term health". Even setting temporal discounting aside, I fail to see why your post-dinner-values should take precedence over your pre-dinner-values, or for that matter why deathbed-values should take precedence over teenage-values. They are both equally real moments of conscious experience.

But, since we only ever live and make decisions in the present moment, if you happen to have just finished a pizza, you now have the opportunity to manipulate your future values to match your current values by taking actions that make the salad option more available the next time pizza-craving comes around, e.g. by shopping for ingredients. In AI lingo, you've just made yourself subagent-stable.

My personal anecdote is that as a teenager I did listen to the "mature adults" to study more and spend less time having fun. It was a bad decision according to both my current values and teenage-values, made out of ignorance about how the world operates.

As a final thought, I would give the meta-advice of not trying to think too deeply about normative ethics. Take AlphaGo as a cautionary tale: after 2000 years of pondering, the deepest truths of Go are revealed to be just a linear combination of a bunch of feature vectors. Quite poetic, if you ask me.

Comment by maximkazhenkov on How much should we value life? · 2021-09-10T17:48:01.272Z · LW · GW

To be sure, I don't actually think whether Accelerationism is right has any effect on the validity of your points. Indeed, there is no telling whether the AI experts from the surveys even believe in Accelerationism. A fast-takeoff model where the world experiences zero growth from now until the Singularity, followed by an explosion of productivity, would yield essentially the same conclusions as long as the date is the same, and so would any model in between. But I'd still like to take apart the arguments from Wait But Why just for fun:

First, exponential curves are continuous; they don't produce singularities. This is what always confused me about Ray Kurzweil, as he likes to point to the smooth exponential improvement in computing yet in the next breath predicts the Singularity in 2029. You only get discontinuities when your model predicts superexponential growth, and Moore's law is no evidence for that.

Second, while temporary deviations from the curve can be explained by noise for exponential growth, the same can't be said so easily for superexponential growth. Here, doubling time scales with the countdown to Singularity, and what can be considered "temporary" is highly dependent on how long we have left to go. If we were in 10,000 BC, a slowing growth rate over half a century could indeed be seen as noise. But if we posit the Singularity at 2060, then we have less than 40 years left. As per Scott Alexander, world GDP doubling time has been increasing since 1960. However you look at it, the trend has been deviating from the ideal curve for far, far too long to be a mere fluke.
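
To put some math behind both points - a minimal sketch of my own, with r, k and t* as placeholder constants:

\[
\dot{x} = r x \;\Rightarrow\; x(t) = x_0 e^{rt}, \qquad T_{\text{double}} = \frac{\ln 2}{r} \quad \text{(constant; no finite-time blow-up)}
\]
\[
\dot{x} = k x^{2} \;\Rightarrow\; x(t) = \frac{1}{k\,(t^{*} - t)}, \qquad T_{\text{double}} \approx \tfrac{1}{2}\,(t^{*} - t) \quad \text{(diverges at } t = t^{*}\text{)}
\]

Only the second kind of curve produces a singularity at a finite date, and its doubling time shrinks in proportion to the remaining countdown - which is why a decades-long slowdown this close to a supposed 2060 date can't be brushed off as noise.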

The most prominent example of many small S-curves adding up to an overall exponential trend line is, again, Moore's law. From the inside view, proponents argue that doomsayers are short-sighted because they only see the limits of current techniques, but such limits have appeared many times before since the dawn of computing, and each time they were overcome by the introduction of a new technique. For instance, most recently, chip manufacturers have been using increasingly complex photolithography masks to print ever smaller features onto microchips using the same wavelengths of UV light, which isn't sustainable. Then came the crucial breakthrough last year with the introduction of EUV, a novel technique that uses shorter wavelengths and allows the printing of even smaller features with simpler masks, and the refining process can start all over again.

But from the outside view, Moore's law has buckled (notice the past tense). One by one, the trend lines have flattened out, starting with processor frequency in 2006, and most recently with transistors per dollar (Kurzweil's favorite metric) in 2018. Proponents of Moore's law's continued validity have had to keep switching metrics for a decade and a half, and they have a few left - transistor density, for instance, or TOP500 performance. But the noose is tightening, and some truly fundamental limitations, such as the Landauer limit, are on the horizon. As I often like to say, when straight lines run into physical limitations, physics wins.
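
For reference, the Landauer limit sets a lower bound on the energy dissipated per bit of information erased; at room temperature it comes out to roughly

\[
E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J}.
\]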

Keep in mind that as far as Moore's law goes, this is what death looks like. A trend line never halts abruptly, it's always going to peter out gradually at the end.

By the way, the reason I keep heckling Moore's law is because Moore's law itself is the last remnant of the age of accelerating technological progress. Outside the computing industry, things are looking much more dire.

Here are my thoughts. Descriptively, I see that temporal discounting is something that people do. But prescriptively, I don't see why it's something that we should do. Maybe I am just different, but when I think about, say, 100 year old me vs current 28 year old me, I don't feel like I should prioritize that version less. Like everyone else, there is a big part of me that thinks "Ugh, let me just eat the pizza instead of the salad, forget about future me". But when I think about what I should do, and how I should prioritize future me vs present me, I don't really feel like there should be discounting.

I'm not sure the prescriptive context is meaningful with regard to values. It's like having a preference over preferences. You want whatever you want, and what you should want doesn't matter because you don't actually want that, wherever that "should" came from. A useful framework for thinking about this problem is to model your future selves as other people and reduce it to the classic egoism-altruism balance. Would you say perfect altruism is the correct position to adopt? Are you therefore a perfect altruist?

You could make up philosophical thought experiments and such to discover how much you actually care about others, but I bet you can't just decide to become a perfect altruist no matter how loudly a philosophy professor might scream at you. Similarly, whether you believe temporal discounting to be the right call or not in the abstract, you can't actually stop doing it; you're not a perfect altruist with respect to your future selves and to dismiss it would only lead to confusion in my opinion.

Comment by maximkazhenkov on How much should we value life? · 2021-09-10T11:06:15.668Z · LW · GW

I think so. By symmetry, imperfect anti-alignment will destroy almost all the disvalue the same way imperfect alignment will destroy almost all the value. Thus, the overwhelming majority of alignment problems are solved by default with regard to hyperexistential risks. 

More intuitively, problems become much easier when there isn't a powerful optimization process to push against. E.g. computer security is hard because there are intelligent agents out there trying to break your system, not because cosmic rays will randomly flip some bits in your memory.

Comment by maximkazhenkov on How much should we value life? · 2021-09-09T17:56:13.813Z · LW · GW

Thank you for the post, it was quite a nostalgia trip back to 2015 for me because of all the Wait But Why references. However, my impression is that the Kurzweilian Accelerationism school of thought has largely fallen out of favor in transhumanist circles since that time, with prominent figures like Peter Thiel and Scott Alexander arguing that not only are we not accelerating, we can barely even keep up with 19th century humanity in terms of growth rate. Life expectancy in the US has actually gone down in recent years for the first time.

An important consideration that was left out is temporal discounting. Since you assumed linear scaling of value with post-Singularity QALYs, your result is extremely sensitive to your choice of post-Singularity life expectancy. I felt like it was moot to go into such detailed analysis of the other factors when this one alone could easily vary by ten orders of magnitude. By choosing a sufficiently large yet physically plausible number (such as 100 trillion years), you could justify almost any measure to reduce your risk of dying before Singularity and unambiguously resolve e.g. the question of driving risk. 

But I doubt that's a good representation of your actual values. I think you're much more likely to do exponential discounting of future value, such that the integral of value over time remains finite even in the limit of infinite time. This should lead to much more stable results.
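
To make the contrast concrete (a minimal sketch, with V as a constant rate of value per year and r a discount rate, both just placeholder symbols):

\[
\int_{0}^{T} V \, dt = V\,T \qquad \text{(undiscounted: grows without bound as } T \to \infty\text{)}
\]
\[
\int_{0}^{\infty} V e^{-rt} \, dt = \frac{V}{r} \qquad \text{(exponentially discounted: finite, and insensitive to the exact choice of } T\text{)}
\]

With the first formula, moving T from 10^4 to 10^14 years shifts the answer by ten orders of magnitude; with the second, it barely moves at all.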

I predict that a lot of people will interpret the claim of "you should expect to live for 10k years" as wacky, and not take it seriously.

Really? This is LessWrong after all^^

Comment by maximkazhenkov on All Possible Views About Humanity's Future Are Wild · 2021-09-09T12:14:46.088Z · LW · GW

Always beware of the spectre of anthropic reasoning though.

Comment by maximkazhenkov on How much should we value life? · 2021-09-08T15:26:37.741Z · LW · GW

I think it's fairly unlikely that suicide becomes impossible in AI catastrophes. The AI would have to be anti-aligned, which means creating such an AI would require precise targeting in AI design space, the same way a Friendly AI does. However, given the extreme disvalue a hyperexistential catastrophe produces, such scenarios are perhaps still worth considering, especially for negative utilitarians.

Comment by maximkazhenkov on Is top-down veganism unethical? · 2021-08-26T23:31:27.803Z · LW · GW

Are you sure you're replying to the right comment?

Comment by maximkazhenkov on Is top-down veganism unethical? · 2021-08-22T23:24:43.430Z · LW · GW

I think the next big step will be legal rather than technical. Imo the Impossible Burger is already good enough that if it sneaked its way into existing standard fast-food products like Big Macs, most people would neither notice nor care. So in the end it will be a similar issue to GMO foods; its widespread adoption will depend on whether businesses have to explicitly label plant-based alternatives as alternatives. Defaults really matter.

Comment by maximkazhenkov on Analysis of World Records in Speedrunning [LINKPOST] · 2021-08-07T00:03:31.823Z · LW · GW

Doesn't seem particularly relevant for the purpose of understanding trends; the underlying dynamics aren't changed by slowing down time.

Comment by maximkazhenkov on Analysis of World Records in Speedrunning [LINKPOST] · 2021-08-05T01:06:00.999Z · LW · GW

I strongly suggest looking at world records in TrackMania; it should be an absolute treasure trove of data for this purpose. 15+ years of history over dozens of tracks, with loads of incremental improvements and breakthrough exploits alike.

Here's an example of one such incredible history: 

Comment by maximkazhenkov on The shoot-the-moon strategy · 2021-07-22T00:49:13.048Z · LW · GW

I think it's called signal jamming? An alarm that sounds all the time is just as useless as an alarm that never goes off.

Comment by maximkazhenkov on [Link] Musk's non-missing mood · 2021-07-14T21:18:03.229Z · LW · GW

To an individual human, death by AI (or by climate catastrophe) is worse than old age "natural" death only to the extent that it comes sooner, and perhaps in being more violent. 

I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet like Yudkowsky suggested.

To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.

Like I said above, I expect the human species to be doomed by default due to lots of other existential threats, so in the long term superintelligent AI has only upsides.

Comment by maximkazhenkov on The Bullwhip Effect · 2021-07-14T20:57:10.822Z · LW · GW

(e.g., the increased wavelength of the whip in the figure below)

Don't you mean increased amplitude?

Comment by maximkazhenkov on [Link] Musk's non-missing mood · 2021-07-14T12:21:39.552Z · LW · GW

But it is surprising that life could only appear on our planet, since it doesn't seem to have unique features.

What does "could appear" mean here? 1 in 10? 1 in a trillion? 1 in 10^50?

Remember we live in a tiny universe with only ~10^23 stars.

Comment by maximkazhenkov on [Link] Musk's non-missing mood · 2021-07-14T11:38:16.779Z · LW · GW

It was a rhetorical question, there is nothing strange about not observing aliens. I'm an avid critic of the Fermi paradox. You simply update towards their nonexistence and, to a lesser extent, whatever other hypothesis fits that observation. You don't start out with the romantic idea that aliens ought to be out there, living their parallel lives, and then call the lack of evidence thereof a "paradox".

The probability that all sentient life in the observable universe just so happens to invariably reside in the limbo state between nonexistence and total dominance is vanishingly small, to a comical degree. Even on our own Earth, sentient life only occupies a small fragment of our evolutionary history, and intelligent life even more so. Either we're alone, or we're in a zoo/simulation. 

Either way, Clippy doesn't kill more than us.

Comment by maximkazhenkov on [Link] Musk's non-missing mood · 2021-07-13T21:39:43.036Z · LW · GW

How strange for us to achieve superintelligence where every other life in the universe has failed, don't you think?

Comment by maximkazhenkov on [Link] Musk's non-missing mood · 2021-07-13T21:32:50.541Z · LW · GW

Moloch is to the world what senescence is to a person. It, too, dies by default.

Comment by maximkazhenkov on [Link] Musk's non-missing mood · 2021-07-13T17:09:10.111Z · LW · GW

Is death by AI really any more dire than the default outcome, i.e. the slow and agonizing decay of the body until cancer/Alzheimer's delivers the final blow?

Comment by maximkazhenkov on Winston Churchill, futurist and EA · 2021-07-12T11:15:41.126Z · LW · GW

The second definitely doesn't work because it's actually an endothermic reaction (reverse neutron decay), but Churchill couldn't have known that in 1931 before neutron mass was measured accurately.
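
Concretely, using modern mass values (which, again, weren't available in 1931):

\[
p + e^{-} \rightarrow n + \nu_{e}, \qquad m_n c^2 \approx 939.57\ \mathrm{MeV} \;>\; m_p c^2 + m_e c^2 \approx 938.27 + 0.51 = 938.78\ \mathrm{MeV},
\]

so roughly 0.78 MeV has to be supplied per reaction rather than released.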

Comment by maximkazhenkov on Winston Churchill, futurist and EA · 2021-07-12T11:09:47.822Z · LW · GW

My takeaway from this:

  • Beware of laundry lists of future predictions
  • Update against currently promising ("hot") technologies turning out to be impactful
  • Update against the idea that government institutions are becoming less competent
  • Update against wisdom and coordination as useful tools for defusing x-risks

Comment by maximkazhenkov on Are coincidences clues about missed disasters? It depends on your answer to the Sleeping Beauty Problem. · 2021-07-06T04:47:12.900Z · LW · GW

A more paranoid man than myself would start musing about anthropic shadows and selection effects.

Why paranoid? I don't quite get the argument here; doesn't anthropic shadow imply we have nothing to worry about (except for maybe hyperexistential risks) since we're guaranteed to be living in a timeline where humanity survives in the end?

  • A pandemic happened that hurt the economy and increased demand for consumer electronics, driving up the cost of computer chips
  • Intel announced that it was having major manufacturing issues
  • Bitcoin, Ethereum, and other coins reached an all-time high, driving up the price of GPUs

I don't see much of a coincidence here. The pandemic and crypto boom are highly correlated events; it's hardly surprising that deflationary stores of value do well in times of crisis - gold also hit an all-time high during the same period. Besides, the last crypto boom in 2017 didn't seem to slow down investment in deep learning. Intel has never been a big player in the GPU market, CPU prices are reasonable right now, and CPUs aren't that relevant for deep learning anyway. And the "AI and Compute" trend line broke down pretty much as soon as the OpenAI article was released, a solid 1.5-2 years before the Covid-19 crisis hit. That's a long time in ML world.

Unless you're a fanatic reverend of the God of Straight Lines, there isn't anything here to be explained. When straight lines run into physical limitations, physics wins. Hardware progress clearly can't keep up with the 10x per year growth rate of AI compute, and the only way to make up for it was to increase monetary investment into this field, which is becoming harder to justify given the lack of returns so far.
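
A rough back-of-the-envelope, taking the ~10x per year compute growth at face value and assuming a Moore's-law-style doubling of hardware cost-efficiency every two years:

\[
\frac{\text{compute demand growth}}{\text{cost-efficiency growth}} \approx \frac{10}{\sqrt{2}} \approx 7\times \text{ per year in required spending, i.e. roughly } 2{,}500\times \text{ over four years.}
\]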

But, if you disagree and believe that the Straight Line is going to resume any day now, go ahead and buy more Nvidia stocks and win.

Comment by maximkazhenkov on The homework assignment incentives, and why it's now so extreme · 2021-06-27T01:26:34.662Z · LW · GW

I don't mean to sound overdramatic here, but equating honesty with obedience to authority is quite a sinister sleight of hand. Skipping excessive homework is not only advantageous, it is also righteous.

Comment by maximkazhenkov on The homework assignment incentives, and why it's now so extreme · 2021-06-22T08:53:53.912Z · LW · GW

but if the dean had also been unsympathetic we would have had no recourse

I beg to differ. Maybe your school was particularly strict, but usually there are plenty of ways around homework assignments in high school: copy homework from other classmates at the last minute, turn in fake homework, read the summary instead of the whole book, share the workload with your friends, get solutions from older students, call in sick strategically on days with especially large workloads, etc.

And the advance of technology isn't all bad; it also provides students with new options: GPT-3 for essays, Wolfram Alpha for math problems, mechanical-turk-like services to outsource homework, handwriting robots and, soon, test-taking AIs. If you've got hacking skills, well, let's just say you'd be surprised what sort of stuff teachers leave on the school server. And for online classes, keep in mind that it also becomes harder for the teacher to verify the authenticity of your homework. Be creative, think positive.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-10T23:08:02.785Z · LW · GW

The absolute travel time matters less for disease spread in this case. It doesn't matter how long it would theoretically take to travel to North Sentinel Island if nobody actually goes there for years on end. Disease won't spread to those places naturally.

And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlements on Earth (a difficult task in itself as they're obscure almost by definition) and plant the virus there, they'll most certainly have no trouble bringing it to Mars either.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-10T16:56:01.384Z · LW · GW

I strongly believe that nuclear war and climate change are not existential risks, by a large margin.

For engineered pandemics, I don't see why Mars would be more helpful than any other isolated pockets on Earth - do you expect there to be less exchange of people and goods between Earth and Mars than, say, North Sentinel Island?

Curiously enough, the last scenario you pointed out - dystopias - might just become my new top candidate for an x-risk that Mars colonization could actually mitigate. Need to think more about it though.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-09T21:03:29.428Z · LW · GW

Moving to another planet does not save you from misaligned superintelligence.

Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.

Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea

The only way I can see Musk's position making sense is that it's actually a 4D chess move to crack the brain's algorithms and use them to beat everyone else to AGI, which is not the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.

Comment by maximkazhenkov on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-09T20:48:46.452Z · LW · GW

I would love to hear some longevity-related biotech investment advice from rationalists; I (and presumably many others here) predict longevity to be the second-biggest deal in big-picture futurism.

The only investment idea I can come up with myself is for-profit spin-off companies from SENS Research Foundation, but that's just the obvious option for someone without expertise in the field who trusts the most vocal experts.

Although some growth potential has already been lost due to the pandemic bringing a lot of attention towards this field, I think we're still early enough to capture some of the returns.

Comment by maximkazhenkov on How counting neutrons explains nuclear waste · 2021-06-02T12:26:53.807Z · LW · GW

If you want to learn more about ongoing research into superheavy elements:

To me the most exciting prospect of this research is the potential discovery of not just an island, but an entire continent of stability that could open up endless engineering potential in the realm of nuclear chemistry.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:57:27.035Z · LW · GW

No that's not what I meant; these two issues divide different tribes but the level of toxicity and fanaticism is similar. Heated debates around US-China war scenarios are very common in Taiwanese/Chinese overseas communities.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:45:08.693Z · LW · GW

I also have a personal interest in trying to keep Lesswrong politics-free because for me fighting down the urge to engage in political discussions is a burden, like an ex-junkie constantly tempted with easily available drugs. Old habits die hard, so I immediately committed to not participate in any object-level discussions upon seeing the title of this post. I'm not sure whether this applies to anyone else.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:44:49.739Z · LW · GW

I do have a sense that it's less likely to explode in bad ways, and less likely to attract bad people to the site.

I agree with the first part of the sentence but disagree with the second part. In my view, Lesswrong's best defense thus far has been a frontpage filled with content that appears bland to anyone with a combative attitude coming from other, more toxic social media environments. Posts like this one though stick out like a sore thumb and signal to onlookers that discussions about politics and geopolitics are now an integral part of Lesswrong, even when the discussions themselves are respectful and benign so far. If my hypothesis is correct, an early sign of deterioration would be an accumulation of newly registered accounts that solely leave comments on one or two politics-related posts.

Comment by maximkazhenkov on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-27T19:10:29.824Z · LW · GW

Politics is politics. US vs China is about as divisive and tribal as you can go, on the same level as pro- vs anti-Trump. Would you encourage political discussions of the latter type on Lesswrong, too?

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-21T16:12:09.707Z · LW · GW

Why couldn't land-based delivery vehicles become autonomous though? That would also cut out the human in the loop.

One reason might be that autonomous flying drones are easier to realize. It is true that air is an easier environment to navigate than the ground, but landing and taking off at the destination could involve very diverse and unpredictable situations. You might run into the same long-tail problem as self-driving cars, especially since a drone that can lift several kilos has dangerously powerful propellers.

Another problem is that flying vehicles in general are energy inefficient due to having to overcome gravity, and even more so at long distances (tyranny of the rocket equation). Of course you could use drones just for the last mile, but that's an even smaller pool to squeeze value out of.

In general, delivery drones seem less well-suited for densely populated urban environments where landing spots are hard to come by and you only need a few individual trips to serve an entire apartment building. And that's where most of the world will live anyway.

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-21T01:22:38.108Z · LW · GW

Lawnmowers are also very loud yet are widely tolerated (more or less). Plus, delivery drones only need to drop off the package and fly away; the noise pollution will only last a few seconds. I also don't see why it would necessarily be unpredictable; drones don't get stuck in traffic. Maybe a dedicated time window each day becomes an industry standard.

But the real trouble I see with delivery drones is: what's the actual point? What problem is being solved here? Current delivery logistics work very well, I don't see much value being squeezed out of even faster/more predictable delivery. Looks like another solution in search of a problem to me.

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-19T02:27:34.042Z · LW · GW

I share this sentiment. Shockingly little has happened in the last 20 years, good or bad, in the grand scheme of things. Our age might become a blank spot in the memory of future people looking back at history; the time where nothing much happened.

Comment by maximkazhenkov on What will 2040 probably look like assuming no singularity? · 2021-05-19T01:56:49.806Z · LW · GW

Is there any provision that allows members to be kicked out of NATO?

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-08T22:38:46.099Z · LW · GW

It's always an emergency, lives are always at stake. That's just the nature of the pharmaceutical business. 

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-08T22:37:32.017Z · LW · GW

It's the perception that matters.

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-07T22:27:06.616Z · LW · GW

I think it's mostly the setting of a precedent of stripping away intellectual property rights for political expediency that is worrisome. It's a small step in undermining the rule of law, but a step nonetheless. The symbolic gesture is the problem; it signals to the public that such moves are now not only acceptable, but applaudable.

Comment by maximkazhenkov on Covid 5/6: Vaccine Patent Suspension · 2021-05-07T18:25:51.776Z · LW · GW

The stock market disagrees.

Comment by maximkazhenkov on The Fall of Rome, III: Progress Did Not Exist · 2021-04-25T14:18:38.733Z · LW · GW

I wasn't trying to argue anything in particular, I'm just using comments as a notebook to keep track of my own thoughts. I'm sorry if it sounded like I was trying to start an argument.

Comment by maximkazhenkov on The Fall of Rome, III: Progress Did Not Exist · 2021-04-25T12:35:45.682Z · LW · GW

The term "unavoidable innovation" really irks me. It has become this teacher's password for all the world's uncomfortable questions. Why was Malthus wrong? Innovation! How do we prevent civilizational collapse? Innovation! How do we solve competition and conflicts for limited resources? Innovation! How can we raise the standard of living without compromising the environment? Innovation!

As if life were fair and nature's challenges were all calibrated to our abilities, such that every time we run into population limits, the innovation fairy appears and offers us a way out of the crisis. As if real disaster could only ever result from corruption, greed, power struggles and, y'know, things that generally fit our moral aesthetics about how things ought to go wrong; things that would make a good Game of Thrones episode.

Certainly not mundane causes like mere exponential population increase. Because that would imply that Malthus was (at least sometimes) right, that life was a ruthless war of all against all, a rapacious hardscrapple frontier. An implication too horrible to ever be true.

I'm not arguing that the Malthusian trap explains all the civilizational collapses in history, or even Rome in particular. But it is the default failure mode because exponential growth is fast and unbounded, so to avoid it your civilization has to A) prevent population growth altogether, B) outpace population growth with innovation consistently, or C) collapse way before population pressure becomes a problem.

Comment by maximkazhenkov on Thiel on secrets and indefiniteness · 2021-04-22T22:40:11.645Z · LW · GW

Biotech startups are an extreme example of indefinite thinking. Researchers experiment with things that just might work instead of refining definite theories about how the body’s systems operate.

Comment by maximkazhenkov on Thiel on secrets and indefiniteness · 2021-04-21T11:21:57.972Z · LW · GW

I find Thiel's writings too narrative-driven. Persuasive, but hardly succinct. Somehow, geographical discoveries, scientific progress and ideas of social justice all fit under the umbrella term "secrets" and... there is some common pattern underlying our failure in each of these aspects? Or is one the cause of the other? What am I supposed to learn from these paragraphs? Thiel himself seems very "indefinite" with his critique.

Incrementalism is bad, but biotech start-ups should nonetheless "refine definite theories" instead of random experimentation? Isn't "refining definite theories" a prime example of incrementalism, and a strategy you would expect more out of established institutions anyway? Seems like biotech companies can only do wrong. You could also easily argue "refining definite theories" is an example of indefinite thinking because instead of focusing on developing a concrete product, you're just trying to keep the options open by doing general theory that might come in handy.

In general this writing feels more like a literary critique than a concrete thesis. I can agree with the underlying sentiment but I don't feel like I'm walking away with a clearer understanding of the problem after reading.

Comment by maximkazhenkov on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-04-17T11:43:51.064Z · LW · GW

Our careers span decades. Maybe being sleep deprived for a few years can work out, but this is unsustainable in the long run. Steve Jobs died young. Nikola Tesla wrote love letters to his pigeon. Elon Musk’s tweets suggest that he may not be thinking clearly. Meanwhile, Jeff Bezos gets a full 8 hours.

This is motivated reasoning. Taking Elon Musk vs. Jeff Bezos as an example, if their sleep patterns were reversed you could have just as easily argued "See, that's why Bezos's rocket company isn't as successful as Musk's".

Comment by maximkazhenkov on What will GPT-4 be incapable of? · 2021-04-06T23:49:00.017Z · LW · GW

The irony is strong with this one

Comment by maximkazhenkov on TAI? · 2021-03-31T09:50:39.247Z · LW · GW

This is the 3D printing hype all over again. Remember how every object in sight was going to be made on a 3D printer? How we'd never need to go to a store again because we'd be able to just download the blueprint for every product from the internet and make it ourselves? How we were going to print our clothes, furniture, toys and appliances at home, and it was only going to cost pennies of raw materials and electricity? Yeah, right.

So let me throw down the exact opposite predictions for the social implications, if there were absolutely zero innovation in AI:

  • AI continues to try to shoehorn itself into every product imaginable and mostly fails because it's a solution looking desperately for a problem
  • Almost no labor (big exception: self-driving) has been replaced by robots. The robots that do exist are not ML-based
  • Universal Basic Income doesn't see widespread adoption, and it has nothing to do with AI, one way or another
  • <1% of YouTube views are produced by AI-generated content
  • Space is literally the worst place to apply AI - the stakes couldn't be higher, the training data couldn't be sparser, and the tasks are so varied and complex that they stretch even the generalization capability of human intelligence; it's the pinnacle of AI hubris to think AI will "revolutionize" every single field

(I use ML and AI interchangeably because AI in the broad sense just means software at this point)

In fact, since I don't believe in slow take-off, I'll do one better: these are my predictions for what will actually happen right up until FOOM.

It's time for a reality check, not only for AI but for digital technologies in general (AR/MR, folding phones, 5G, IoT). We wanted flying cars; instead we got AI-recommended 140 characters.

Comment by maximkazhenkov on Comments on "The Singularity is Nowhere Near" · 2021-03-18T02:52:11.094Z · LW · GW

If you swapped out "AGI" for "Whole Brain Emulation" then Tim Dettmers' analysis becomes a lot more reasonable.

Comment by maximkazhenkov on Dark Matters · 2021-03-17T03:50:24.092Z · LW · GW

And with enough epicycles you can fit the motion of planets with geocentricism. If MOND supporters can dismiss Bullet Cluster they'll dismiss any future evidence, too.