Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z · score: -6 (5 votes)
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z · score: 12 (8 votes)
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z · score: 9 (8 votes)


Comment by maximkazhenkov on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T20:37:56.566Z · score: 1 (1 votes) · LW · GW

There are plenty (maybe even a majority) of people who would pay a premium for avoiding social interaction with strangers. In fact, early adoption of these automated technologies might be driven by exactly this reason. I think this satire puts it pretty concisely.

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-11-04T01:13:27.078Z · score: 3 (3 votes) · LW · GW

I was going to bring up Red Ice TV as a counter-example but just found out they got banned from Youtube 2 weeks ago. Troubling indeed.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T15:41:39.721Z · score: 16 (5 votes) · LW · GW

I think the AI just isn't sure what to do with the information received from scouting. Unlike AlphaZero, AlphaStar doesn't learn via self-play from scratch. It has to learn builds from human players, hinting at its inability to come up with good builds on its own, so it seems likely that AlphaStar also doesn't know how to alter its build depending on scouted enemy buildings.

One thing I have noticed from observing these games, though, is that AlphaStar likes to over-produce probes/drones, as if preempting early-game raids from the enemy. It seems to work out quite well for AlphaStar, since it can keep mining at full capacity afterwards. Is there a good reason why pro gamers don't do this?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T11:54:16.203Z · score: 1 (1 votes) · LW · GW

I see. In that case, I don't think it makes much sense to model scientific institutions or human civilization as a single agent. You can't hope to achieve unanimity in a world as big as ours.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T18:09:09.194Z · score: 3 (2 votes) · LW · GW

So basically game tree search was the "reasoning" part of AlphaZero?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T18:07:17.248Z · score: 1 (1 votes) · LW · GW

Well, it's overdetermined. Action space, tree depth, incomplete information; any one of these is enough to make Monte Carlo tree search impossible.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:20:17.677Z · score: 22 (11 votes) · LW · GW

I think DeepMind should be applauded for addressing the criticisms about AlphaStar's mechanical advantage in the show games against TLO/Mana. While not as dominant in its performance, the constraints on the new version basically match human limitations in all respects; the games seemed very fair.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:12:36.808Z · score: 2 (2 votes) · LW · GW

In other words, there is a non-trivial chance we could get to AGI literally this year?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:03:51.927Z · score: 0 (3 votes) · LW · GW

I think one ought to be careful with the wording here. What is the proportion of existing AI progress? We could be 90% there on the time axis and only one last key insight is left to be discovered, but still virtually useless compared to humans on the capability axis. It would be a precarious situation. Is the inability of our algorithms to reason the problem, or our only saving grace?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T12:42:32.207Z · score: 3 (2 votes) · LW · GW

Could you elaborate why it's "extremely bad" news? In what sense is it "better" for DeepMind to be more straightforward with their reporting?

Comment by maximkazhenkov on The Technique Taboo · 2019-10-31T09:18:46.868Z · score: 2 (2 votes) · LW · GW

I've heard the alternate explanation that having to stare at the keyboard is bad for your neck/spine because of the downward angle of your head, and touch typing allows you to avoid that. This is apparently especially important for programmers, because they work in front of a computer screen all the time.

Comment by maximkazhenkov on The Technique Taboo · 2019-10-30T20:44:05.235Z · score: 4 (3 votes) · LW · GW

Why is fast typing considered a necessary skill for programmers in the first place? For secretaries or writers it seems to make a difference, but how much would a programmer really be slowed down if they could only type, say, 30 words per minute? It seems to me that if you have to pause to think, typing speed was never the bottleneck anyway.
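For intuition, here's a back-of-envelope sketch; all the numbers are my own rough assumptions, not data:

```python
# Rough assumptions: a productive day yields ~100 lines of code,
# each line ~8 "words" worth of typing, typed at the slow 30 wpm in question.
LOC_PER_DAY = 100
WORDS_PER_LINE = 8
WPM = 30

typing_minutes = LOC_PER_DAY * WORDS_PER_LINE / WPM
print(f"{typing_minutes:.0f} minutes of typing per day")  # 27 minutes of typing per day
```

Even at a "slow" 30 wpm, raw typing consumes well under an hour of an eight-hour day, so thinking, reading, and debugging dominate.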

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-30T18:49:34.725Z · score: 3 (2 votes) · LW · GW
Look at Japan you can go over there and mention the bombs and their war crimes and most won't care.

I really don't think picking out the most conservative and conformist country on the planet supports your point very well. Of course they don't care; denying their past war crimes is the official position. Meanwhile in the US, the evils of Western imperialism (including recent ones) are standard textbook material. Whether you agree with those textbooks or not, the phrase "history is written by the victors" usually doesn't imply self-critical writing.

The ideas are public but most people are not willing to state them and risk their social lives.

Or perhaps people are not willing to state them because they don't agree with those ideas? If people are protected by legal rights to free speech and anonymity on the web, yet some ideas still can't gain any traction in the marketplace of ideas, you should start considering the possibility that those ideas aren't even secretly popular.

Comment by maximkazhenkov on The Missing Piece · 2019-10-30T17:03:27.727Z · score: 1 (1 votes) · LW · GW

A very insightful explanation. It leads me to wonder what this implies for the replication of nanobots:

If all nanodevices produced are precise molecular copies, and moreover, any mistakes on the assembly line are not heritable because the offspring got a digital copy of the original encrypted instructions for use in making grandchildren, then your nanodevices ain't gonna be doin' much evolving.
You'd still have to worry about prions—self-replicating assembly errors apart from the encrypted instructions, where a robot arm fails to grab a carbon atom that is used in assembling a homologue of itself, and this causes the offspring's robot arm to likewise fail to grab a carbon atom, etc., even with all the encrypted instructions remaining constant.

So is prion evolution just sliding from fixed point to fixed point? If so, how likely is it to happen, and how would one go about suppressing the process? How would one reduce the density of fixed points?

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-30T16:39:21.446Z · score: 1 (1 votes) · LW · GW
You seem to be following my every comment.

No, I haven't; I just click through comments on posts that interest me.

What I'm confused about is what you mean by "they wouldn't dare express it in public". There are entire communities and subcultures built around conspiracy theories on the web, whether it's 9/11, Holocaust denial, moon landing or flat earth. How much more public can it get?

Comment by maximkazhenkov on What's your big idea? · 2019-10-29T23:34:42.515Z · score: 1 (1 votes) · LW · GW

This is like saying we need the government to mandate apple production, because without apples we might become malnourished which is bad. Why can't the market solve the problem more efficiently? Where's the coordination failure?

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-29T15:20:34.456Z · score: 1 (1 votes) · LW · GW

They can be stated, nobody is contesting that. They can also be downvoted to hell, which is what I'm arguing for.

Comment by maximkazhenkov on What's your big idea? · 2019-10-29T15:12:44.933Z · score: 1 (1 votes) · LW · GW

And thus, knowing geography becomes a comparative advantage to those who choose to study it. Why should the rest of us care?

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T18:53:39.787Z · score: 2 (2 votes) · LW · GW

I disagree on multiple dimensions:

First, let's get disagreements about values out of the way: I hate the term "brainwashing" since it's virtually indistinguishable from "teaching", the only difference being the intent of the speaker (we're teaching our kids liberal democratic values while the other tribe is brainwashing their kids with Marxism). But to the extent "brainwashing" has a useful definition at all, creating "a population who will perpetuate the state" would be it. In my view, if our civilization can't survive without tormenting children with years upon years of conditioning, it probably shouldn't.

Second, I'm very skeptical of this model of a self-perpetuating society. So "they" teach us literature and history? Who are "they"? Group selectionism doesn't work; there is no reason to assume that memes good at perpetuating themselves would also be good at perpetuating the civilization they find themselves in. I think it's likely that the people in charge of framing the school curriculum are biased towards holding in high regard the subjects they were taught in school themselves (sunk cost fallacy, prestige signaling), thus becoming vehicles for meme spread. I don't see any incentive for an education board member to stop, think, and analyze what will actually perpetuate the government they're a part of.

I also very much doubt the efficacy of such education/brainwashing at manipulating citizens into perpetuating the state. In my experience, reverse psychology and tribalism are much better methods for this purpose than straightforward indoctrination, particularly with people in their rebellious youth. The classroom, frequently associated with boredom and monotony, is among the worst environments to apply these methods. There is no faster way to create an atheist out of a child than sending him through mandatory Bible study classes; and no faster way to create a libertarian than to make him memorize Das Kapital.

Lastly, the bulk of today's actual school curriculum is neutral with respect to perpetuating our society - maths, physics, chemistry, biology, foreign languages, even most classical literature are apolitical. So even setting the issue of "civilizational propagation" aside, there is still enormous potential for optimization.

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T17:44:22.394Z · score: 1 (1 votes) · LW · GW

It's easy to prepare kids to become anything. Just teach what's universally useful.

It's impossible to prepare kids to become everything. Polymaths stopped being viable two centuries ago.

There is a huge difference between union and intersection of sets.

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T17:35:01.165Z · score: 1 (1 votes) · LW · GW

This is basically the long-term goal of Neuralink as stated by Elon Musk. I am however very skeptical because of two reasons:

  • Natural selection did not design brains to be end-user modifiable. Even if you could accurately monitor every single neuron in a brain in real-time, how would you interpret your observations and interface with it? You'd have to build a translator by correlating these neuron firing patterns with observed behaviors, which seems extremely intractable
  • In what way would such a brain-augmenting external memory be superior to pen and paper? Pen and paper already allows me to accomplish working-memory limited tasks such as multiplication of large numbers, and I'm neither constrained by storage space (I will run out of patience before I run out of paper) nor by bandwidth of the interface (most time is spent on computing what to write down, not writing itself)

It seems there is an extreme disproportionality between the difficulty of the problem and the value of solving it.

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T16:59:29.953Z · score: 1 (1 votes) · LW · GW
Still there are people that I think I want in my life that all falling prey to this beast and I want to save them.

Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into the people you'd like them to be, not who they themselves want to be.

How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques

Ethics aside, this seems to be a tall order. You're basically trying to hack into someone else's mind through very limited input channels (speech/text). In my experience it's never a lack of knowledge that's hindering people from overcoming akrasia (also the reason I'm skeptical towards the efficacy of self-help books).

Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.

That's a very good point. In ML courses, a lot of time is spent introducing different network types and the technical details of calculus/linear algebra, without explaining why one should pick neural networks out of idea space in the first place, beyond hand-waving that they're "biologically inspired".

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T09:53:37.431Z · score: 1 (1 votes) · LW · GW

It's a question about value, not fact. "Bias" is not even a criticism here; value is nothing but bias.

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T09:47:09.286Z · score: 3 (2 votes) · LW · GW

I think people are confused about how to evaluate answers here. Should we upvote opinions we agree with on the object level as usual, or should we upvote opinions based on usefulness for the kind of research the OP is trying to conduct (i.e. not mainstream, but not too obscure/random/quirky either, like 2+2=5)?

It seems like most have defaulted to the former interpretation while the most-upvoted comments are advocating the latter. Clear instructions are warranted here; the signal is all mixed up.

A close analogy would be a CNN segment bashing Trump posted on an alt-right site: the audience there might be confused as to whether they should dogpile on the post as a representative of the opposing tribe or upvote it as a heroic act of exposing the ugly nature of the opposing tribe (usually this is resolved by an allegiance-declaring intro segment from the OP, but that isn't always the case).

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T09:25:24.737Z · score: 1 (1 votes) · LW · GW

Why is it a dangerous opinion to hold? I don't know about others, but to me at least valuing freedom of expression has nothing to do with valuing the ideas being expressed.

Comment by maximkazhenkov on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T08:52:51.187Z · score: -3 (3 votes) · LW · GW

Yet you dare express it on LessWrong? What counts as "in public", then? A soapbox on the street?

Comment by maximkazhenkov on What economic gains are there in life extension treatments? · 2019-10-24T10:12:47.170Z · score: 1 (1 votes) · LW · GW
inter-society competition will not particularly favor anti-aging investment
  • While life-extension may very well be outcompeted (in particular, birth rate changes the exponential base while life-extension only adds a constant factor), this particular failure mode seems wrong because group selectionism doesn't work
  • Don't even worry about it: unless the future is ruled by a Singleton, Moloch will win anyway, by default, in more ways than we can imagine

Comment by maximkazhenkov on What's your big idea? · 2019-10-21T03:35:19.230Z · score: 14 (6 votes) · LW · GW

Everyone has his pet subject which he thinks everybody in society ought to know and thus ought to be added to the school curriculum. Here on LessWrong, it tends to be rationality, Bayesian statistics and economics, elsewhere it might be coding, maths, the scientific method, classic literature, history, foreign languages, philosophy, you name it.

And you can always imagine a scenario where one of these things could come in handy. But in terms of what's universally useful, I can hardly think of anything beyond reading/writing and elementary-school maths; that's it. It makes no economic sense to drill so much knowledge into people's heads; specialization is basically the whole point of civilization.

It's also morally wrong to put people through needless suffering. School is a waste, or rather a theft, of youthful time. I wish I had played more video games and hung out with friends more. I wish I had scored lower on all the exams. If your country's children speak 4 languages and rank top 5 in PISA tests, that's nothing to boast about. I waited for the day when all the misery would make sense; that day never came. The same is happening to your kids.

Education is like code - the less the better; strip down to the bare essentials and discard the rest.

Edit: Sorry for the emotion-laden language, the comment turned into a rant half-way through. Just something that has affected me personally.

Comment by maximkazhenkov on Anti-counterfeiting Ink - an alternative way of combating oil theft? · 2019-10-20T22:42:55.485Z · score: 1 (1 votes) · LW · GW

I don't have the relevant expertise, unfortunately. I'm thinking protein sequence or some other biomolecule.

Comment by maximkazhenkov on What's your big idea? · 2019-10-20T22:00:11.096Z · score: 7 (5 votes) · LW · GW

A lot of these are quite controversial:

  • AI alignment has failed once before, we are the product
  • Technical obstacles in the way of AGI are our most valuable resource right now, and we're rapidly depleting it
  • A future without superintelligent AI is also dystopian by default (after reading that last one, being turned into paperclips doesn't sound so bad to me after all)
  • AI or Moloch, the world will eventually be taken over by something because there is a world to be taken over
  • We were just lucky nuclear weapons didn't turn out to be an existential threat; we might not be so lucky in the future

  • The (observable) universe is tiny on the logarithmic scale
  • Exploration of outer space turned out way less interesting than I imagined
  • Exploration of cyberspace turned out way more interesting than I imagined
  • Some god-like powers are easier to achieve than flying cars
  • The term "nanotechnology" indicates how primitive the field really is; we don't call our every other technology "centitechnology"

  • Human-level intelligence is the lower bound for a technological species
  • Modern humans are surprisingly altruistic given our population size; ours is the age of disequilibrium
  • Technological progress never repeats itself, and so neither does history
  • Every social progress is just technological progress in disguise
  • The effect of the bloodiest conflicts of the 20th century on world population is... none whatsoever

  • Schools teach too much, not too little
  • The education system is actually a selection system
  • Innovation, like oil, is a very limited resource; some processes just can't be parallelized
  • The deafening silence around death by aging

Comment by maximkazhenkov on What's your big idea? · 2019-10-20T16:59:55.307Z · score: 2 (2 votes) · LW · GW

Do you think it would make a big difference, though? Isn't it likely that a bunch of John von Neumanns are already running around, given the world's population? Aren't we just running out of low-hanging fruit for the von Neumanns to pick?

Comment by maximkazhenkov on Anti-counterfeiting Ink - an alternative way of combating oil theft? · 2019-10-20T16:51:34.019Z · score: 1 (1 votes) · LW · GW
The fundamental problem with this proposal is that it relies on "security through obscurity"

I don't think that description is accurate. It relies on asymmetry between verification and duplication.

Criminals synthesizing the chemical compounds is virtually guaranteed because it's very difficult to distribute a chemical to every gas station in the country and keep it secret.

Distributing the substances to every gas station is unnecessary. Transportation pipelines are expensive infrastructure projects with very high throughput, so there are only a few of them to guard. The real difficulty is that they're very, very long and criminals can tap them at any point, so it's infeasible to police the entire stretch 24/7; it would be great if we could reduce the policing to just the few end points. The "last mile" is much less problematic, since it's close to population centers where witnesses are plentiful and police reaction time is short (a tapping operation takes ~40 minutes).

Also, we're not keeping the usage of chemicals a secret any more than we'd keep the existence of passwords a secret.

At the same time, criminals are good at synthesizing weird substances.

That's because those weird substances serve different purposes, e.g. narcotics. Here, we are optimizing for nothing but the asymmetry between verification and duplication, because a signature is all the substance is. A closer analogy would be banknotes; they're also specifically optimized for difficulty of duplication. And banknote printers have thus far been very successful at this; otherwise cartels would not bother smuggling drugs when they could just print money, and a paper-currency-based economy would be impossible.

Comment by maximkazhenkov on Reflections on Premium Poker Tools: Part 4 - Smaller things that I've learned · 2019-10-13T00:10:38.687Z · score: 1 (1 votes) · LW · GW
Expected value I wouldn't expect most people to know, but I certainly would expect a professional poker player to know, especially when you are also charging people money to coach them.

I would agree if we were talking about poker AI or poker software developers here, but I don't see why a professional poker player would need to know about expected value any more than a Go player needs to know the minimax algorithm - humans can't do these calculations in their heads and have to rely on gut feeling anyway (or am I wrong here? Do poker players actually calculate probabilities? I thought that was just a cliché from Casino Royale).
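For concreteness, here is the kind of expected-value calculation being discussed, as a minimal sketch; the function name and all the numbers are hypothetical:

```python
def call_ev(pot, call_amount, win_prob):
    """EV of calling a bet: win the current pot with probability win_prob,
    otherwise lose the amount of the call."""
    return win_prob * pot - (1 - win_prob) * call_amount

# Facing a 50-chip call into a 200-chip pot with an estimated 30% chance to win:
print(round(call_ev(pot=200, call_amount=50, win_prob=0.30), 2))  # 25.0
```

A positive EV says the call is profitable on average - exactly the kind of arithmetic that is easy on paper but arguably done "by feel" at the table.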

Comment by maximkazhenkov on Reflections on Premium Poker Tools: Part 4 - Smaller things that I've learned · 2019-10-12T21:22:00.647Z · score: 1 (1 votes) · LW · GW
People think of a mobile app when you say you're building an "app"
Even when I clarify and try to explain that it's a web app, most people are still confused. So sometimes I call it a website, which I hate because that sort of implies that it's static. Sometimes I describe it as poker software. I still haven't found a good solution to this. I think "website" is probably best.

This is really a blessing in disguise, because words like "app" and "software" sound like potential users would have to download and install something and potentially fumble around with settings/permissions before they can get a first glimpse, and also having to delete/uninstall afterwards. You mentioned people are lazy, but you might still be underestimating just how lazy people are. Patience is measured in milliseconds on the web.

Ghosting is normal
This is a huge pet peeve of mine. I hate it. But apparently it's just a thing that many people do. At least in the business world. Let me give you some examples.

I think ghosting is so ubiquitous in every facet of life that at this point, we'd all be better off to just accept it as a neutral fact. In particular:

  • If someone doesn't respond after the second request, assume he is ghosting, with no way of proving innocence: let's be honest, technical problems with communication are rare these days, and even if one did occur, a third request is almost certainly going to run into the same problem
  • If at some later point contact is reestablished, pretend like nothing happened and don't push the other person into coming up with an excuse if you deem the interaction still worthwhile

Paying for people's meals doesn't seem to induce much reciprocation
A lot of times I meet with people and will pay for their meals, in hopes that they'll reciprocate and spend more effort trying to help me out. But I've found it to be incredibly ineffective.

I'd be surprised if it was effective. From the perspective of the receiver, what you're signaling isn't "I'm nice and forthcoming", it's "You'd better be worth my money", and if you're not demanding immediate reciprocation, you're making them indebted in a gift economy they have no interest in participating in. The only people you're likely to attract with this strategy are those unscrupulous enough to take advantage of you.

Long inferential distances is the realest thing in the world
One example is that I was talking to a professional poker player and coach. He didn't know how to read a 2d graph with x-y coordinates. I said "x-axis". He said, "what?".
This made me decide to change the text on my app to say "horizontal axis" instead of "x-axis".
He also struggled to understand that a point on the graph refers to a pair of data points. And to understand what the slope means. And how to calculate expected value.
He wasn't the only one. I have plenty of other examples of stuff like this.
I don't want to come across as being mean though. Just sayin'. I certainly have my own share of incompetencies.

But how could one reasonably expect people in general to have such obscure technical knowledge? I mean, this sounds weird to say on LessWrong, sarcastic even, but that's just due to this forum being a bubble within a bubble within a bubble. Even though stuff like expected value is not very hard to grasp per se, people outside STEM fields don't just bump into such topics by casually browsing the internet. To classify it as incompetence seems misguided; I'd be much more worried about a society where people broadly understood these topics, because then you'd have to wonder what other not-universally-useful stuff they're wasting everybody's time on.

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-21T21:00:04.382Z · score: 5 (3 votes) · LW · GW

I think there are two perspectives to view the mechanical constraints put on AlphaStar:

One is the "fairness" perspective, which is that the constraints should perfectly mirror that of a human player, be it effective APM, reaction time, camera control, clicking accuracy etc. This is the perspective held mostly by the gaming community, but it is difficult to implement in practice as shown by this post, requiring enormous analysis and calibration effort.

The other is what I call the "aesthetics" perspective, which is that the constraints should be used to force the AI into a rich strategy space where its moves are gratifying to watch and interesting to analyze. The constraints can be very asymmetrical with respect to human constraints.

In retrospect, I think the second one is what they should have gone with, because a single constraint could have achieved it: signal delay.

Think about it: what good would arbitrarily high APM and clicking accuracy be if the ping were 400-500 ms?

  • It would naturally introduce uncertainty through imperfect predictions and bias the agent towards longer-term thinking, anywhere on the timescale from seconds to minutes
  • It would naturally move the agent into the complex strategy space that was purposefully designed into the game but got circumvented by exploiting certain edge cases like ungodly blink stalker micro
  • It avoids painstaking analysis of the multi-dimensional constraint-space by reducing it down to a single variable
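A sketch of what such a constraint might look like in code - a hypothetical wrapper (none of these names come from DeepMind's actual API) that buffers an agent's actions for a fixed number of timesteps:

```python
from collections import deque

class DelayedAgent:
    """Wrap a policy so each action takes effect only delay_steps later,
    simulating signal delay; the buffer starts filled with no-ops."""
    def __init__(self, policy, delay_steps, noop=None):
        self.policy = policy
        self.buffer = deque([noop] * delay_steps)

    def step(self, observation):
        # Decide on the current observation, but emit the action chosen
        # delay_steps ago - the world has moved on in the meantime.
        self.buffer.append(self.policy(observation))
        return self.buffer.popleft()

# Toy usage: with a 2-step delay, the echo policy's response to input 1
# only comes out on the third step.
echo = DelayedAgent(lambda obs: obs, delay_steps=2, noop=0)
print([echo.step(x) for x in (1, 2, 3, 4)])  # [0, 0, 1, 2]
```

Because every action is stale by the time it executes, superhuman micro stops paying off and the agent is pushed towards plans that survive half a second of uncertainty.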

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-21T19:59:25.842Z · score: 3 (3 votes) · LW · GW
You could argue that it's not showcasing the skills we're interested in, as it doesn't need to put the same emphasis on long-term planning and outsmarting its opponent, that equal human players have to. But that will also be the case if you put me against someone who's never played the game.

Interesting point. Would it be fair to say that, in a tournament match, a human pro player is behaving much more like a reinforcement learning agent than a general intelligence using System 2? In other words, the human player is also just executing reflexes he has gained through experience, and not coming up with ingenious novel strategies in the middle of a game.

I guess it was unreasonable to complain about the lack of inductive reasoning and game-theoretic thinking in AlphaStar from the beginning, since DeepMind is an RL company and RL agents just don't do that sort of stuff. But I think it's fair to say that AlphaStar's victory was much less satisfying than AlphaZero's: it was not only unable to generalize across multiple RTS games, but also unable to explore the strategy space of a single game (hence the incentivizing of certain units during training). I think we all expected to see perfect game sense and situation-dependent strategy choice, but instead blink stalkers are apparently the one build to rule them all.

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-21T18:27:51.770Z · score: 3 (3 votes) · LW · GW

No, but Starcraft is an imperfect-information game like poker, and involves computing mixed strategies.

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-20T17:18:03.901Z · score: 2 (2 votes) · LW · GW

I estimate the current AI to be ~100 times less efficient with simulation time than humans are, and that humans are another ~100 times less efficient than an ideal AI (practically speaking, not the Solomonoff-induction-kind). Humans Who Are Not Concentrating Are Not General Intelligences, and from observation I think it's clear that human players spend a tiny fraction of their time thinking about strategies and testing them compared to practicing pure mechanical skills.

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-20T17:15:45.277Z · score: 2 (2 votes) · LW · GW
If so, unlike Chess and Go, there may not be some deep strategic insights Alphastar can uncover to give it the edge

I think that's where the central issue lies with games like Starcraft or Dota: their strategy space is perhaps not as rich and complex as we initially expected. Which might be a good reason to update towards believing that the real world is less exploitable (i.e. technonormality?) as well? I don't know.

However, I think it would be a mistake for the AI community to write off these RTS games as "solved" the same way chess/Go are and move on to other problem domains. AlphaStar/OpenAI5 require hundreds of years of training time to reach the level of top human professionals, and I don't think it's an "efficiency" problem at all.

Additionally, in both cases there is implicit domain knowledge integrated into the training process: in the case of AlphaStar, the AI was first trained on human game data and, as the post mentions, competing agents are subdivided into strategy spaces defined by human experts:

Hundreds of versions of the AI play against each other, and the ones that perform best are selected to play against human players. Each one has its own set of units that it is incentivized to use via reinforcement learning, so that they each play with different strategies.

In the case of OpenAI5, the AI is still constrained to a small pool of heroes, the item choices are hard-coded by human experts, and it would have never discovered relatively straightforward strategies (defeating Roshan to receive a power-up, if you're familiar with the game) were it not for the programmers' incentivizing in the training process. It also received the same skepticism in the gaming community (in fact, I'd say the mechanical advantage of OpenAI5 was even more straightforward to see than with AlphaStar).

This is not to belittle the achievements of the researchers, it's just that I believe these games still provide fantastic testing grounds for future AI research, including paradigms outside deep reinforcement learning. In Dota, for example, one could change the game mode to single draft to force the AI out of a narrow strategy-space that might have been optimal in the normal game.

In fact, I believe (~75% confidence) the combinatorial space of heroes in a single-draft Dota game (and the corresponding optimal-strategy-space) is so large that, without a paradigm shift at least as significant as the deep learning revolution, RL agents will never beat top professional humans within 2 orders of magnitude of the compute used by current research projects.
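To get a feel for the size of that combinatorial space, here is a rough lower bound with hypothetical numbers (a pool of ~115 heroes, 10 on the field per match, split 5v5 — the live hero count changes with patches, and the random three-hero offers in single draft shape the true space, so this is only an order-of-magnitude sketch):

```python
from math import comb

HERO_POOL = 115   # hypothetical pool size; changes with patches
PICKS = 10        # heroes on the field per match (5 per team)

# Distinct sets of 10 heroes that could appear in one game
hero_sets = comb(HERO_POOL, PICKS)

# Each such set can be split into two teams of 5
team_splits = comb(PICKS, PICKS // 2)

print(f"hero sets: {hero_sets:.2e}")                        # ~7e13
print(f"hero sets x team splits: {hero_sets * team_splits:.2e}")
```

Even this crude count lands above 10^13 distinct line-ups before any in-game strategy is considered.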

I'm not as familiar with Starcraft II but I'm sure there are simple constraints one can put on the game to make it rich in strategy space for AIs as well.

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-20T16:07:51.489Z · score: 1 (1 votes) · LW · GW

I don't think this would be a worthwhile endeavor, because we already know that deep reinforcement learning can deal with these sorts of interface constraints as shown by Deepmind's older work. I would expect the agent behavior to converge towards that of the current AI, but requiring more compute.

Comment by maximkazhenkov on Don't depend on others to ask for explanations · 2019-09-19T01:56:40.897Z · score: 4 (2 votes) · LW · GW

I think there is a trade-off as far as reader retention is concerned:

On the one hand, thorough explanations that the reader is already aware of can nevertheless give confidence to finish reading by signaling that the post is within grasp of the reader.

On the other hand, it is possible to overdo it and extend the length of the post beyond the attention span of many readers.

Speaking from personal experience, the high level of redundancy in many of Eliezer's sequences was actually very helpful for me to grasp and more importantly retain the insights conveyed.

Comment by maximkazhenkov on Request for stories of when quantitative reasoning was practically useful for you. · 2019-09-13T16:44:45.500Z · score: 1 (1 votes) · LW · GW

Sounds interesting...could you elaborate a bit as to why one should worry less? Do we overestimate the importance of buying a house for retirement?

Comment by maximkazhenkov on So You Want to Colonize The Universe Part 5: The Actual Design · 2019-08-23T15:27:20.978Z · score: 1 (1 votes) · LW · GW

Assuming uniform acceleration over a 40 light year distance (because why not; we have a variable power source), the ship would experience a constant acceleration of ~0.3 m/s² (from relativistic hyperbolic motion, a = c²(γ−1)/d, with γ ≈ 2.29 at 0.9 c and d = 40 light years).
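A quick numerical check of that figure, assuming a final speed of 0.9 c (the cruise speed used elsewhere in this sequence):

```python
import math

c = 299_792_458.0     # speed of light, m/s
LY = 9.4607e15        # one light year in meters

v = 0.9 * c           # assumed cruise speed
d = 40 * LY           # acceleration distance

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # ~2.294

# Hyperbolic motion: coordinate distance d = (c^2 / a) * (gamma - 1),
# solved for the constant proper acceleration a
a = c**2 * (gamma - 1.0) / d

print(f"proper acceleration: {a:.3f} m/s^2")  # ~0.31 m/s^2
```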

If we wanted the same peak deceleration using only a lightsail and a sun-like star, we'd get a delta-v of 83 km/s per star (back-of-envelope calculation analogizing photon pressure to a reversed gravitational well), so we'll need 72 stars in total.

That is quite reasonable considering the star density in the galactic core. ~~The only problem here of course is that your lightsail might be so small that gravitation dominates, in which case you have to look for stars with higher photon-pressure-to-mass ratio~~, which are less densely populated. It's a trade-off between peak acceleration, destination constraint and sail size. Our sun, for example, would be among the worst targets for decelerating an incoming intergalactic spaceship.

also in the meantime I realized that neutron damage over those sorts of timescales are going to be *really* bad

Is it though? Radiation in general tends to attenuate exponentially in matter, so a merely linear increase in shielding should solve the problem completely.
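A minimal sketch of why linear shielding beats exponential attenuation, with a hypothetical attenuation length (the real e-folding length depends on neutron energy and shield material):

```python
import math

ATTEN_LENGTH = 0.10   # meters; hypothetical e-folding length for fast neutrons

def thickness_for(reduction_factor: float) -> float:
    """Shield thickness needed to cut flux by the given factor,
    assuming simple exponential attenuation I(x) = I0 * exp(-x / L)."""
    return ATTEN_LENGTH * math.log(reduction_factor)

# Each extra factor of a million in flux reduction costs the same ~1.4 m:
for factor in (1e6, 1e12, 1e18):
    print(f"{factor:.0e} -> {thickness_for(factor):.2f} m")
```

The point is just that required thickness grows with the logarithm of the flux reduction, so even absurd attenuation factors stay affordable in mass.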

Btw this sequence has been a very enjoyable read; I'm glad I'm not the only one speculating about Clarketech-level space travel in my free time.

Comment by maximkazhenkov on Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? · 2019-08-23T13:25:56.795Z · score: 2 (2 votes) · LW · GW

Just read up on the Dust Theory, and I think you can take it a notch further: no need for a vast universe; a rock is sufficient to represent any mind, since there exists some mapping from the interactions of its constituent atoms to the brain activity of anyone. In fact, why not discard physical reality entirely and rest in the thought of everything existing in abstract math space?

does it make any sense to say 'I am in _this_ one'? You're in all of them, so long as those contexts can be said to 'exist'. And what is stopping them from 'existing'?

Well, why not jump from a bridge for fun then? You will continue to exist no matter what you do. Not saying that you won't, but it seems once one gets to this point anthropics stops having any implications for actions in the real world and is forever relegated to the realm of abstract philosophical thought experiments.

Why not both?

My thought was that the Boltzmann brain was proposed as a counter-argument to the idea of the Big Bang as the result of a quantum fluctuation in an eternal universe. Since Boltzmann brains are much less massive than the whole observable universe, it is vastly more likely that the observer is just a random-fluctuation-generated Boltzmann brain hallucinating its observations than an observer (simulated or not) in an actual Big-Bang universe.

Comment by maximkazhenkov on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-22T08:53:46.097Z · score: 1 (1 votes) · LW · GW

A counter-argument to this would be the classical s-risk example of a cosmic ray particle flipping the sign on the utility function of an otherwise Friendly AI, causing it to maximize suffering that would dwarf any accidental suffering caused by a paperclip maximizer.

Comment by maximkazhenkov on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-22T08:27:23.741Z · score: 5 (3 votes) · LW · GW

I'll add one more:

  • Doomsday Argument as overwhelming evidence against futures with large number of minds

Also works against any other x-risk related effort and condones a carpe-diem sort of attitude on the civilizational level.

Comment by maximkazhenkov on Negative "eeny meeny miny moe" · 2019-08-21T08:44:42.613Z · score: 2 (2 votes) · LW · GW
Perfectly cheating in a round of the negative version improves your chances of winning by 1/(k(k-1)), where k is the number of people in to start the round. Stopping you from doing so improves each other person's chances by the same amount.

I think 1/(k(k-1)) is the improvement in each other person's chance of getting into the next round, not the improvement in chance of winning the whole thing. The point still holds though since the absolute numbers are so much smaller in any single round.

Comment by maximkazhenkov on So You Want To Colonize The Universe Part 3: Dust · 2019-08-13T17:34:45.543Z · score: 5 (2 votes) · LW · GW

A bit late to the party here, but here's an alternative proposal to shielding:

I disagree with the statement

it's going to take the form of a super-narrow pinprick of kinetic energy directed on a single point

0.9 c is not ultra-relativistic; the Lorentz factor at this speed is a mere 2.3, so the kinetic energy of an incoming particle is on the same order of magnitude as its rest mass energy. This means that relativistic beaming is going to be fairly weak.

Since the kinetic energy far outstrips the chemical binding energy of the dust grain, we can expect it to instantly disintegrate into a particle shower upon first contact with the shield. So why not place the shield hundreds of kilometers ahead of the ship and let the inverse-square law do the rest? Most of the particle shower will simply miss the ship. Multiple, thinner shields would be even better at diffusing the particle shower while taking less damage themselves.
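A rough geometric sketch of how much of the shower the ship would intercept. All the numbers here are hypothetical (ship radius, standoff distance), and the shower half-angle is taken as ~1/γ ≈ 0.43 rad, since beaming is weak at γ ≈ 2.3:

```python
import math

SHIP_RADIUS = 50.0     # meters; hypothetical
STANDOFF = 100_000.0   # meters; shield placed 100 km ahead of the ship
HALF_ANGLE = 1 / 2.3   # radians; ~1/gamma, weak relativistic beaming

# The shower spreads into a cone; the fraction hitting the ship is roughly
# the area ratio of the ship's cross-section to the cone's footprint.
footprint_radius = STANDOFF * math.tan(HALF_ANGLE)
fraction = (SHIP_RADIUS / footprint_radius) ** 2

print(f"fraction of shower intercepted: {fraction:.1e}")
```

Under these assumptions only about one part in a million of the shower reaches the ship, which is what makes the standoff shield attractive.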

I imagine there would only be minor damage to the shield, since most of the energy remains in the particle shower if the shield is thin enough, and repairs wouldn't even need advanced nanotechnology: just fill the hole in the shield with ANY material. And since we're moving at 0.9 c, the shield would need to be at most a few hundred meters wide to prevent dust from sneaking in between the shield and the ship.
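The "few hundred meters" also checks out geometrically: at 0.9 c the ship covers the standoff distance so quickly that dust barely drifts sideways in the meantime. A sketch with hypothetical numbers (100 km standoff, interstellar dust at ~50 km/s transverse velocity):

```python
C = 299_792_458.0      # speed of light, m/s

ship_speed = 0.9 * C
STANDOFF = 100_000.0   # meters; shield 100 km ahead of the ship (hypothetical)
DUST_SPEED = 50_000.0  # m/s; hypothetical transverse dust velocity

# Time for the ship to reach a dust grain the shield has just passed,
# and how far sideways that grain can drift in that time
dt = STANDOFF / ship_speed
lateral_drift = DUST_SPEED * dt

print(f"lateral dust drift: {lateral_drift:.1f} m")  # ~20 m
```

So the shield only needs to overhang the ship's cross-section by tens of meters, comfortably within the "few hundred meters" bound.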

Furthermore, if the ship itself already uses a magnetic field to protect itself against charged particle radiation, a soap bubble would suffice as a shield as it would be enough to vaporize and ionize the dust grain. Either way, I think there is enormous mass savings possible here.

Comment by maximkazhenkov on So You Want to Colonize The Universe Part 5: The Actual Design · 2019-08-13T05:54:45.099Z · score: 1 (1 votes) · LW · GW

Phase 3 seems a bit wasteful to me. 0.1 c change in velocity at 100,000 light years yields a margin of 100 light years across; that amounts to thousands of stars even in the galactic suburb. Why be so picky about target stars and not simply send more probes instead? If you really need to make course corrections that big, you should do it in stages starting millions of light years out.

In phase 5, can't we just use the lightsail again? 0.2 c seems very doable. Wolf-Rayet stars are top candidates here since they have a very high photon pressure to gravity ratio. On second thought, this probably won't work for galaxies on the edge of our cosmic horizon since by the time of our arrival only red dwarfs will be left, but the Virgo Supercluster is fine.

Finally, in phase 6, I don't see why we shouldn't use our fission/fusion/antimatter engine from phase 3 again. Maneuvering within a solar system with low-thrust ion engines is one of the very few technologies on this intergalactic journey that we have already mastered.