Posts

Implications of the Doomsday Argument for x-risk reduction 2020-04-02T21:42:42.810Z · score: 5 (2 votes)
How to Write a News article on the Dangers of Artificial General Intelligence 2020-02-28T02:14:48.419Z · score: 9 (4 votes)
What will quantum computers be used for? 2020-01-01T19:33:16.838Z · score: 11 (4 votes)
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z · score: -4 (6 votes)
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z · score: 12 (8 votes)
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z · score: 9 (8 votes)

Comments

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-08-01T01:39:06.097Z · score: 2 (2 votes) · LW · GW

Fighting the Taliban also fulfills the purpose of funneling money to friends and supporters.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T21:25:27.920Z · score: 5 (3 votes) · LW · GW
One of the major problems that Western nations have run into in the past half century is that we're in wars where (a) we don't just want to kill everyone, and (b) there is no strong central control of the opposition (or at least none we want to preserve), so we're effectively forced into the last scenario above.

This argument only supports your main point ("command and control is by far the most important") insofar as future wars will also be exclusively asymmetric. That assumption, though, is problematic even today. The US isn't spending billions of dollars on stealth fighters and bombers to fight the Taliban.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T21:16:11.442Z · score: 1 (1 votes) · LW · GW

How can an AI that is 10 times as smart and innovative as Elon Musk not be godlike? xD

But seriously, if an AI is really capable of making such great headway in weapons technology, it is then surely capable of bootstrapping itself to superintelligence.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T21:07:09.067Z · score: 1 (1 votes) · LW · GW

In the limit of large swarms of cheap, small drones, the attacker always has an intrinsic advantage. The attacking drones are trying to hit large, relatively slow-moving targets, while the defender is trying to "hit a bullet with another bullet". The only scalable countermeasure I can think of is directed energy weapons; you can't get faster or smaller than elementary particles. If a laser is fast and accurate enough to shoot mosquitoes out of the air, it can probably shoot down drones, too.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T20:50:44.457Z · score: 1 (1 votes) · LW · GW

The US has gained a lot of experience in asymmetric warfare in the last few decades, but due to the Long Peace no one can be sure of which military technologies actually work well in the context of a symmetric war between major powers; none of it has really been validated. So the "lead" the US has over the rest is somewhat theoretical.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T20:45:12.088Z · score: 1 (1 votes) · LW · GW

If you buy into the Great Stagnation theory, then a 20-year lead today should matter less than it would have in 1900.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T20:42:10.750Z · score: 1 (1 votes) · LW · GW

Drones, yes, Terminators less so. It depends on whether AI technology can thread the needle of being powerful enough to navigate a very complex environment but not general enough to be a superintelligence. I kinda doubt that such a gap even exists.

Comment by maximkazhenkov on Are we in an AI overhang? · 2020-07-27T19:00:22.827Z · score: 4 (3 votes) · LW · GW

If you extrapolated those straight lines further, doesn't it mean that even small businesses will be able to afford training their own quadrillion-parameter models just a few years after Google?

Comment by maximkazhenkov on Are we in an AI overhang? · 2020-07-27T18:56:28.010Z · score: 3 (2 votes) · LW · GW

Is density even relevant when your computations can be run in parallel? I feel like price-performance will be the only relevant measure, even if that means slower clock cycles.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-22T10:32:09.439Z · score: 3 (2 votes) · LW · GW

You can listen to his thoughts on AGI in this video.

I find that he has an exceptionally sharp intuition about why deep learning works, from the original AlexNet to Deep Double Descent. You can see him predicting the progress in NLP here.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-22T08:44:23.591Z · score: 1 (1 votes) · LW · GW
"Why isn't it an AGI?" here can be read as "why hasn't it done the things I'd expect from an AGI?" or "why doesn't it have the characteristics of general intelligence?", and there's a subtle shade of difference here that requires two different answers.
For the first, GPT-3 isn't capable of goal-driven behaviour.

Why would goal-driven behavior be necessary for passing a Turing test? It just needs to predict human behavior in a limited context, which was what GPT-3 was trained to do. It's not an RL setting.

and by saying that GPT-3 definitely isn't a general intelligence (for whatever reason), you're assuming what you set out to prove.

I would like to dispute that by drawing an analogy to the definition of fire before modern chemistry. We didn't know exactly what fire was, but it was a "you know it when you see it" kind of deal. It's not helpful to pre-commit to a certain benchmark, like we did with chess - at one point we were sure beating the world champion at chess would be a definitive sign of intelligence, but Deep Blue came and went, and we now agree that chess AIs aren't generally intelligent. I know this sounds like moving the goalposts, but then again, the point of contention here isn't whether OpenAI deserves some brownie points or not.

"Passing the Turing test with competent judges" is an evasion, not an answer to the question – a very sensible one, though.

It seems like you think I made that suggestion in bad faith, but I was being genuine with that idea. The "competent judges" part was so that the judges, you know, are actually asking adversarial questions, which is the point of the test. Cases like Eugene Goostman should get filtered out. I would grant that the AI be allowed to train on a corpus of adversarial queries from past Turing tests (though I don't expect this to help), but the judges should also have access to this corpus so they can try to come up with questions orthogonal to it.

I think the point at which our intuitions depart is: I expect there to be a sharp distinction between general and narrow intelligence, and I expect the difference to resolve very unambiguously in any reasonably well designed test, which is why I don't care too much about precise benchmarks. Since you don't share this intuition, I can see why you feel so strongly about precisely defining these benchmarks.

I could offer some alternative ideas in an RL setting though:

  • An AI that solves Snake perfectly on any map (maps should be randomly generated and separated between training and test set), or
  • An AI that solves unseen Chronotron levels at test time within a reasonable amount of game time (say <10x human average) while being trained on a separate set of levels

I hope you find these tests fair and precise enough, or at least get a sense of what I'm looking for in an agent with "reasoning ability". To me these tasks demonstrate why reasoning is powerful and why we should care about it in the first place. Feel free to disagree though.
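To make the Snake version more concrete, here is a minimal sketch of the train/test separation I have in mind; the map representation and parameters are purely illustrative, not from any existing benchmark:

```python
import random

def random_snake_map(seed, width=20, height=20, wall_prob=0.1):
    """Generate a Snake map as a set of wall cells, deterministic in the seed."""
    rng = random.Random(seed)
    return {(x, y)
            for x in range(width)
            for y in range(height)
            if rng.random() < wall_prob}

# Train and test on disjoint seed pools so the agent can't just memorize layouts;
# "solving Snake perfectly" then has to mean generalizing to unseen maps.
train_maps = [random_snake_map(s) for s in range(10_000)]
test_maps = [random_snake_map(s) for s in range(10_000, 11_000)]
```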

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T20:08:54.442Z · score: 4 (3 votes) · LW · GW

Yeah the terms are always a bit vague; as far as existence proof for AGI goes there's already humans and evolution, so my definition of a harbinger would be something like 'A prototype that clearly shows no more conceptual breakthroughs are needed for AGI'.

I still think we're at least one breakthrough away from that point, though that belief is dampened by Ilya Sutskever's position on this; his opinion is one I greatly respect. But either way, GPT-3 in particular just doesn't stand out to me from other DL achievements over the years, from AlexNet to AlphaGo to OpenAI5.

And yes, I believe there will be fast takeoff.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T19:17:59.780Z · score: 5 (4 votes) · LW · GW

I don't think GPT-3 is a harbinger. I'm not sure if there ever will be a harbinger (at least to the public); leaning towards no. An AI system that passes the Turing test wouldn't be a harbinger, it's the real deal.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T16:08:46.802Z · score: 2 (4 votes) · LW · GW
See, it does break down in that it thinks moving >5 degrees to the right is also bad. What's going on with the "car locks", or the "algorithm"? I agree that's weird. But the concept is still understood, and, AFAICT, is not "just associating" (in the way you mean it).

That's the exact opposite impression I got from this new segment. In what world is confusing "right" and "left" a demonstration of reasoning over mere association? How much more wrong could GPT-3 have gotten the answer? "Turning forward"? No, that wouldn't appear in the corpus. What's the concept that's being understood here?

And why wouldn't it be amazing for some (if not all) of its rolls to exhibit impressive-for-an-AI reasoning?

Because GPT-3 isn't using reasoning to arrive at those answers? Associating gravity with falling doesn't require reasoning; determining whether something would fall in a specific circumstance does. But that leaves only a small space of answers, so guessing right a few times and wrong at other times (like GPT-3 is doing) isn't evidence of reasoning. The reasoning doesn't have to do any work of locating the hypothesis, because you're accepting vague answers and frequent wrong answers.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T15:50:31.537Z · score: 3 (3 votes) · LW · GW

I didn't mean to imply we should wait for AI to pass the Turing test before doing alignment work. Perhaps the disagreement comes down to you thinking "We should take GPT-3 as a fire-alarm for AGI and must push forward AI alignment work" whereas I'm thinking "There is and will be no fire-alarm, and we must push forward AI alignment work".

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T14:20:38.259Z · score: 3 (3 votes) · LW · GW
So a good exercise becomes: what minimally-complex problem could you give to GPT-3 that would differentiate between pattern-matching and predicting?

Passing the Turing test with competent judges. If you feel like that's too harsh yet insist on GPT-3 being capable of reasoning, then ask yourself: what's still missing? It's capable of both pattern recognition and reasoning, so why isn't it an AGI yet?

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T14:10:30.264Z · score: 3 (5 votes) · LW · GW
GPT-3 inferred that not being able to turn left would make driving difficult. Amazing.

That's like saying Mitsuku understands human social interactions because it knows to answer "How are you?" with "I'm doing fine thanks how are you?". Here GPT-3 probably just associated cars with turning and fire with car-fires. Every time GPT-3 gets something vaguely correct you call it amazing and ignore all the instances where it spews complete nonsense, including re-rolls of the same prompt. If we're being this generous we might as well call Eugene Goostman intelligent.

Consistency, precision and transparency are important. They're what set reasoning apart from pattern matching and why we care about reasoning in the first place. They're what grant us the power to detonate a nuke or send a satellite into space on the first try.

Comment by maximkazhenkov on Science eats its young · 2020-07-14T16:16:01.241Z · score: 1 (1 votes) · LW · GW
In a world where 90% of scientists just assume that science works like a religion, a 96%-4% consensus is not a good indicator for implementing policy, it's an indicator that the few real scientists are almost evenly split on the correct solution.

Why would that cause a 96%-4% split and not a 60%-40% split?


In an Aristotelian framework, dropping 3 very heavy and well-lacquered balls towards Earth and seeing they fall with a constant speed barring any wind is enough to say
FG = G * m1 * m2 / r^2
is a true scientific theory.

You mean increasing speed?

Comment by maximkazhenkov on What should we do about network-effect monopolies? · 2020-07-07T09:41:13.382Z · score: 1 (1 votes) · LW · GW
Even on the margin, anything that costs Facebook users also makes it less valuable for its remaining users—it’s a negative feedback loop.

I think you meant to say "positive feedback loop". "Negative" refers to self-stabilizing, not bad/undesirable or the sign of the change.

Comment by maximkazhenkov on A game designed to beat AI? · 2020-06-23T21:53:32.696Z · score: 2 (2 votes) · LW · GW

I think I have found an example for my third design element:

Patterns that require abstract reasoning to discern

The old Nokia game Snake isn't technically a board game, but it's close enough if you take out the reaction-time element. The optimal strategy here is to follow a Hamiltonian cycle; this way you'll never run into a wall or yourself until the snake literally covers the entire playing field. But a reinforcement learning algorithm wouldn't be able to make this abstraction; you would never run into the optimal strategy just by chance. Unfortunately, as I suggested in my answer, the pattern is too rigid, which allows a hard-coded AI to solve the game.
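For what it's worth, here is a rough sketch of the kind of hard-coded solution I mean, assuming a plain rectangular field with an even number of rows (the construction is one simple choice among many):

```python
def hamiltonian_cycle(width, height):
    """Return a Hamiltonian cycle over a width x height grid as a list of (x, y) cells.

    Snake through columns 1..width-1 row by row, then return up column 0.
    This simple construction assumes an even number of rows and width >= 2."""
    assert height % 2 == 0 and width >= 2
    cycle = []
    for y in range(height):
        xs = range(1, width) if y % 2 == 0 else range(width - 1, 0, -1)
        cycle.extend((x, y) for x in xs)
    cycle.extend((0, y) for y in range(height - 1, -1, -1))  # return lane
    return cycle

def next_move(cycle, head):
    """Follow the cycle blindly: the snake never collides until it fills the board."""
    i = cycle.index(head)
    return cycle[(i + 1) % len(cycle)]
```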

Comment by maximkazhenkov on Book Review: Narconomics · 2020-05-23T02:24:43.364Z · score: 3 (2 votes) · LW · GW
So why is it that people who would never dream of sending their friend who tried coke to prison, or even the friend who sold that friend some of his stash... how do we end up with draconian drug laws?

There's really nothing left to explain here. They would never dream of sending their friend who tried coke to prison because they're friends. The same doesn't hold for strangers. Similarly, you'd probably let a friend of yours who just lost his home spend the night at your place, but not any random homeless person.

Comment by maximkazhenkov on What are your greatest one-shot life improvements? · 2020-05-17T13:45:41.310Z · score: 4 (4 votes) · LW · GW

Installing the Hide YouTube Comments Chrome extension stopped my habit of reading and participating in the toxic comment section of YouTube. Absolutely essential for mental hygiene if you suffer from the same habit but at the same time don't want to miss out on the great video content there.

Comment by maximkazhenkov on Machine Learning Can't Handle Long-Term Time-Series Data · 2020-05-14T13:43:59.760Z · score: 1 (1 votes) · LW · GW
ML can generate classical music just fine but can't figure out the chorus/verse system used in rock & roll.

This statement seems outdated: openai.com/blog/jukebox/

To me this development came as a surprise and correspondingly an update towards "all we need for AGI is scale".

Comment by maximkazhenkov on A game designed to beat AI? · 2020-05-13T13:44:01.237Z · score: 3 (2 votes) · LW · GW
I don't really know SC2 but played Civ4, so by 'scouting' did you mean fogbusting? And the cost is to spend a unit to do it? Is fogbusting even possible in a real life board game?

Yes. There has to be some cost associated with it, so that deciding whether, when and where to scout becomes an essential part of the game. The most advanced game-playing AIs to date, AlphaStar and OpenAI5, have both demonstrated tremendous weakness in this respect.

What does it have to do with Markov property?

The Markov property refers to the idea that the future depends only on the current state, so the history can be safely ignored. This is true for e.g. chess or Go; AlphaGo Zero could play a game of Go starting from any board configuration without knowing how it got there. It's not easily applicable to Starcraft because of the fog of war: what you scouted inside your opponent's base a minute ago, but can't see right now, still provides valuable information about the right action to take. Storing the entire history as part of the "game state" would add huge complexity (tens of thousands of static game states).
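As a toy illustration of the difference, in code (all names here are made up for the example, not from any RL library):

```python
from typing import Any, List

def policy(state: Any) -> str:
    # Stand-in for whatever decision rule the agent has learned.
    return f"action for {state!r}"

class MarkovAgent:
    """Go-style agent: the current board position alone determines the move."""
    def act(self, observation: Any) -> str:
        return policy(observation)  # history can be safely ignored

class FogOfWarAgent:
    """Starcraft-style agent: stale scouting information still matters, so the
    effective state must include remembered observations, not just the current frame."""
    def __init__(self) -> None:
        self.memory: List[Any] = []

    def act(self, observation: Any) -> str:
        self.memory.append(observation)  # e.g. what was scouted a minute ago
        return policy((observation, tuple(self.memory)))
```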

Is fogbusting even possible in a real life board game?

Yes, see Magic the Gathering for instance (it's technically a card game, but plenty of board games have card elements integrated into them). Or, replace chess pieces with small coin-like tokens with information about their identity written on the down-facing side (this wouldn't work for chess in particular because you can tell the identity by the way pieces move, but perhaps some other game with moving pieces).

BTW, what is RL?

RL stands for reinforcement learning. Basically all recent advances in game-playing AI have come from this field, which is why it's so hard to come up with a board game that would be hard for AI to solve (you could always reconfigure the Turing test or some other AGI-complete task into a "board game", but that's cheating). I'd even guess it's impossible to design such a board game because there is just too much brute-force compute now.

Comment by maximkazhenkov on Book Review: Narconomics · 2020-05-03T03:28:52.315Z · score: 6 (4 votes) · LW · GW

Great post. The last part is a major update for my model of how drug legalization opponents think about the issue. Perhaps, just like the climate change debate, it's all value disagreements masked as factual disagreements.

Comment by maximkazhenkov on A game designed to beat AI? · 2020-04-29T05:25:01.618Z · score: 6 (4 votes) · LW · GW

Excellent question! Once again, late to the party, but here are my thoughts:

It's very hard to come up with any board game where humans would beat computers, let alone an interesting one. Board games, by their nature, are discretized and usually have perfect information. This type of game is not only solved by AI, but solved by essentially a single algorithm. Card games with mixed-strategy equilibria like Poker do a little better; Poker has been solved too, although the algorithm doesn't generalize to other card games without significant feature engineering.

If I were to design a board game to stump AIs, I would use these elements:

  • Incomplete information, with the possibility of information gathering ("scouting") at a cost (like in Starcraft 2) to invalidate the Markov property
  • Lengthy gameplay (number of moves) to make the credit assignment problem as bad as possible for RL agents
  • Patterns that require abstract reasoning to discern (e.g. the pigeonhole principle lets you conclude immediately that 1001 pigeons don't fit in 1000 pigeonholes; an insight that can't practically be learned through random exploration of permutations)

The last element in particular is a subtle art and must be used with caution, because it trades off intractability for RL against intractability for traditional AI: if the pattern is too rigid, the programmer could just hard-code it into a database.
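To put a number on why the pigeonhole-style pattern in the last bullet can't be found by blind exploration, here is a rough back-of-the-envelope sketch (purely illustrative):

```python
pigeons, holes = 1001, 1000

# Exploring assignments one by one: each pigeon independently picks a hole,
# so there are holes**pigeons configurations to stumble through (about 10^3003).
search_space = holes ** pigeons
print(f"~10^{len(str(search_space)) - 1} configurations")

# The abstract argument needs no search at all:
assert pigeons > holes  # therefore some hole must contain at least 2 pigeons
```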

If we considered video games instead, the task becomes much easier. DOTA 2 and Starcraft 2 AIs still can't beat human professionals at the full game despite the news hype, although they probably can beat the average human player. Some games, such as Chronotron or Ultimate Chicken Horse, might be impossible for current AI techniques to even achieve average human level performance on.

Comment by maximkazhenkov on The Best Virtual Worlds for "Hanging Out" · 2020-04-29T03:37:56.351Z · score: 3 (2 votes) · LW · GW

Planetside 2 is fascinating to me. It's one of a kind, not just in the sense of being an MMO shooter, but also in giving the player a sense of being part of something big and magnificent, collaborating not only with your small circle of friends but also with hundreds of other people towards a common goal. This sort of exciting experience is otherwise only found in real-world projects (EVE Online and browser games notwithstanding; those are more spreadsheets than games to me), and I'm really starting to think this is a hugely neglected opportunity for the gaming industry. Who knows, maybe it will be the next big trend after Battle Royale? Although shooter games, with their chaotic and computationally expensive nature, are not the best fit for it - perhaps turn-based strategy games instead?

Comment by maximkazhenkov on The Samurai and the Daimyo: A Useful Dynamic? · 2020-04-29T03:25:01.014Z · score: 7 (3 votes) · LW · GW
3) Am I, as a person, actually capable of making a positive difference in general or is my presence generally going to prove useless or detrimental?

To be blunt, I don't think you are making much of a positive difference in terms of changing the exploitative nature of the world, which you seem to be passionate about in your writing. I know it sounds terribly rude, but I couldn't find another way to put it lest I treat it as a rhetorical question.

I'm not saying you should stop doing what you're doing or that your work isn't valuable in general, any more than I'm saying athletes and theoretical physicists are morons because it's difficult to become a millionaire that way. It's just that in a world overflowing with competing memes, playing politics (in the broader sense of recruiting more people for your tribe) is not a low-hanging fruit in general. I would say the rationalist community isn't so much an army of generals with no soldiers to command, as it is an army of recruiters with no jobs to offer (that is if you conceive rationality as a project rather than just an interest).

Is this something I can improve and if so, how?

Again, I'm not saying you should prioritize changing the world (over doing what you like and enjoy), but in case you want to, I'd say pick an EA cause (you probably know the details better than me) and make an actionable plan. For example, if your preferred cause is AI alignment, enroll in a MOOC on AI. Less meta-level pondering, more object-level work.

Comment by maximkazhenkov on Fast Takeoff in Biological Intelligence · 2020-04-26T14:57:26.626Z · score: 1 (1 votes) · LW · GW

Einstein and von Neumann were also nowhere near superintelligent; they are far better representatives of regular humans than of superintelligences. I think the problem goes deeper. As you apply more and more optimization pressure, statistical guarantees begin to fall apart. You don't get sub-agent alignment for free, whether the agent is made of carbon or silicon. Case in point: human values have drifted over time relative to the original goal of inclusive genetic fitness.

Comment by maximkazhenkov on Fast Takeoff in Biological Intelligence · 2020-04-26T02:54:52.447Z · score: 1 (1 votes) · LW · GW
Dogs and livestock have been artificially selected to emphasize unnatural traits to the point that they might not appear in a trillion wolves or boars

I think you're overestimating biology. Living things are not flexible enough to accommodate GHz clock speeds or lightspeed signal transmission, despite evolution having tinkered with them for billions of years. One in a trillion is just 40 bits, not all that impressive, not to mention dogs and livestock took millennia of selective breeding; that's not fast in our modern context.

Comment by maximkazhenkov on Fast Takeoff in Biological Intelligence · 2020-04-26T02:30:27.339Z · score: 2 (2 votes) · LW · GW
Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won't be any less ethical than normal children).

Why? Seems unlikely to me that there exists a genetic intelligence-dial that just happens to leave all other parameters alone.

Comment by maximkazhenkov on Forbidden Technology · 2020-04-26T02:04:32.554Z · score: 2 (2 votes) · LW · GW

For me, the exact same list, but backwards.

Comment by maximkazhenkov on Life as metaphor for everything else. · 2020-04-05T22:10:14.670Z · score: 1 (1 votes) · LW · GW

Another useful metaphor in this context: Fire

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-04T01:05:29.696Z · score: 5 (3 votes) · LW · GW

Thank you for the offer; I am, however, currently reluctant to interact in this way with people I've met on the internet. But know that your openness and forthcomingness are greatly appreciated :)

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T23:13:48.737Z · score: 1 (1 votes) · LW · GW

Nitpick: I was arguing that the Doomsday Argument would actually discourage x-risk-related work because "we're doomed anyway".

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T19:27:36.590Z · score: 5 (3 votes) · LW · GW

I agree that LessWrong is probably the place where crazy philosophical ideas are given the most serious consideration; elsewhere they're usually just mentioned as mind-blowing trivia at dinner parties, if at all. I think there are two reasons why these ideas are so troubling:

  • They are big. Failing to take account of even one of them will derail one's worldview completely
  • Being humble and not taking an explicit position is still effectively just taking the default position

But alas, I guess that's just the epistemological reality we live in. We'll just have to make working assumptions and carry on.

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T14:38:56.060Z · score: 3 (2 votes) · LW · GW
Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.

In this case, it isn't so much that "stakes are high and chances are low so they might cancel out"; rather, there is an exact inverse proportionality between the stakes and the chances, because the Doomsday Argument operates directly through the number of observers.
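Spelling out the proportionality I mean (a rough sketch under the self-sampling assumption, glossing over reference-class issues):

    P(N total observers | my birth rank)    ∝  Prior(N) / N      (the Doomsday update)
    Value of averting doom in a world of N  ∝  N
    => their product ∝ Prior(N); the 1/N and the N cancel exactly.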

If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.

I feel like being in a simulation is just as terrible a predicament as doom soon; given all the horrible things that happen in our world, the simulators are clearly Unfriendly, and they could easily turn off the simulation or thwart our efforts at creating an AI. Basically, we're already living in a post-Singularity dystopia, so it's too late to work on it.

I have a much harder time accepting the Simulation Hypothesis, though, because there are so many alternative philosophical considerations that could be pursued. Maybe we are (I am) Boltzmann brains. Maybe we live in an inflationary universe that expands 10^37-fold every second. Maybe minds do not need instantiation, or anything like a rock could count as an instantiation. Etc.

Going one meta level up, I can't help but feel like a hypocrite for lamenting the lack of attention given to intelligence explosion and x-risks by the general public while failing to seriously consider all these other big, weird philosophical ideas. Are we (the rationalist community) doing the same as people outside it, just with a slightly shifted Overton window? When is it OK to sweep ideas under the rug and throw our hands up in the air?

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T13:44:27.610Z · score: 1 (1 votes) · LW · GW

But isn't the point of the Doomsday Argument that we'll need very very VERY strong evidence to the contrary to have any confidence that we're not doomed? Perhaps we should focus on drastically controlling future population growth to better our chances of prolonged survival?

Comment by maximkazhenkov on How special are human brains among animal brains? · 2020-04-02T16:54:55.682Z · score: 1 (1 votes) · LW · GW

I think what the author meant was that the anthropic principle removes the lower bound on how likely it is for any particular species to evolve language; similar to how the anthropic principle removes the lower bound on how likely it is for life to arise on any particular planet.

So our language capability constitutes zero evidence for "evolving language is easy" (and thus dissolving any need to explain why language arose; it could just be a freak 1 in 10^50 accident); similar to our existence constituting zero evidence for "life is abundant in the universe" (and thus dissolving the Fermi paradox).

Comment by maximkazhenkov on When to assume neural networks can solve a problem · 2020-03-29T23:50:54.923Z · score: 1 (1 votes) · LW · GW
You are aware chatbots have been "beating" the original Turing test since 2014, right?

Yes, I was in fact. Seeing where this internet argument is going, I think it's best to leave it here.

Comment by maximkazhenkov on When to assume neural networks can solve a problem · 2020-03-29T19:35:50.664Z · score: -2 (2 votes) · LW · GW
ML playing any possible game better than humans assuming a team actually works on that specific game (maybe even if one doesn't), with human-like inputs and human-like limitations in terms of granularity of taking inputs and giving outputs.

I disagree with this point in particular. I'm assuming you're basing this prediction on the recent successes of AlphaStar and OpenAI5, but there are obvious cracks upon closer inspection.

The "any possible game" part, though, is the final nail in the coffin to me since you can conceive plenty of games that are equivalent or similar to the Turing test, which is to say AGI-complete.

(Although I guess AGI-completeness is a much smaller deal to you)

Comment by maximkazhenkov on AGI in a vulnerable world · 2020-03-29T17:20:44.592Z · score: 1 (1 votes) · LW · GW

It makes no difference if the marginal distributed harm to all of society is so overwhelmingly large that your share of it is still death.

Comment by maximkazhenkov on AGI in a vulnerable world · 2020-03-29T17:13:21.300Z · score: 1 (1 votes) · LW · GW
Also, I suspect this coordination might extend further, to AGIs with different architectures also.

Why would you suppose that? The design space of AI is incredibly large and humans are clear counter-examples, so the question one ought to ask is: Is there any fundamental reason an AGI that refuses to coordinate will inevitably fall off the AI risk landscape?

Comment by maximkazhenkov on AGI in a vulnerable world · 2020-03-29T17:00:35.586Z · score: 1 (1 votes) · LW · GW

The actual bootstrapping takes months, years or even decades, but it might only take 1 second for the fate of the universe to be locked in.

Comment by maximkazhenkov on Seeing the Smoke · 2020-03-02T06:55:48.390Z · score: 0 (3 votes) · LW · GW

I'm totally against the coronavirus if that's what you're wondering about; didn't think I'd need to signal that.

Comment by maximkazhenkov on Seeing the Smoke · 2020-02-28T22:27:35.443Z · score: 2 (6 votes) · LW · GW

Millions of deaths worldwide would be kind of "meh" to be honest. In emerging economies like China and India it's the price paid yearly (lung diseases due to air pollution) in return for faster economic growth. In first world countries it's a bit more unusual, but even in the most endangered age group the risk is at worst (i.e. complete loss of containment like flu) still only on the same order of magnitude as cancer and cardiovascular diseases.

Overall, not great not terrible. I certainly wouldn't classify it as a "major global disaster".

Comment by maximkazhenkov on Making Sense of Coronavirus Stats · 2020-02-28T02:25:48.190Z · score: 1 (1 votes) · LW · GW
What I believe is that if other countries do not take similar measures to China, this thing is going to rapidly spread.

Would you say the measures taken in Italy and South Korea (particularly the lockdown of towns in Northern Italy) are sufficiently similar to China's?

The rosiest outcome I can imagine is warm weather halts the spread of the disease, and then we get a vaccine ready by the time fall rolls around.

I find that rather unlikely considering the virus' spread in warm regions like Singapore.

Comment by maximkazhenkov on Response to Oren Etzioni's "How to know if artificial intelligence is about to destroy civilization" · 2020-02-28T00:56:09.951Z · score: 4 (3 votes) · LW · GW

I find that Winograd schemas are more useful as a guideline for adversarial queries to stump AIs than as an actual test. An AI reaching human-level accuracy on Winograd schemas would be much less impressive to me than an AI passing the traditional Turing test conducted by an expert who is aware of Winograd schemas and experienced in adversarial queries in general. The former is more susceptible to Goodhart's law due to the stringent format and limited problem space.

Comment by maximkazhenkov on Quarantine Preparations · 2020-02-27T14:45:54.442Z · score: 1 (1 votes) · LW · GW

Thanks for the reference

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-27T02:02:50.809Z · score: 1 (1 votes) · LW · GW
From the sound of it, they stayed in the lane mainly via some hand-coded image processing which looked for a yellow/white strip surrounded by darker color.

That is what I heard about other research groups, but it's a bit surprising coming from Tesla. I'd imagine things have changed dramatically since then, considering that this video, albeit insufficient as any sort of safety validation, still demonstrates they're way beyond just following lane markings. According to Musk, they're pushing hard for end-to-end ML solutions. It would make sense given the custom hardware they've developed and the data leverage they have with their massive fleet, combined with over-the-air updates.