Posts

Comments

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The uniquely awful example of theism · 2021-09-13T16:31:56.128Z · LW · GW

While admitting to being ignorant of most of the current evidence, I have to note my priors are currently strongly in favor of criminalization (at least for dealers and for recreational use; use in hospitals and in research may or may not be different). Marijuana use, from what I know, lowers IQ by several integer points for some time; causing this (by dealing, or as in the next sentence) is a crime in itself (arguably worse than, say, breaking someone's toe, which is clearly criminal). Decriminalization would cause a temporary spike in use, and for that see above. De-decriminalization is likely to cause turmoil because people are opposed to change. And the possibly-strawman argument against criminalization that I just adapted from somewhere in my head (that you can plant "trading quantities" of a drug to frame someone) does not work: you can likewise plant any substance that is forbidden to keep freely, like, say, an explosive.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The Most Important Century: Sequence Introduction · 2021-09-12T13:37:35.715Z · LW · GW

"The Duplicator (not yet posted on LW)" - now posted, n'est-ce pas?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2021-09-11T21:45:10.842Z · LW · GW

Unfortunately, it is quite difficult to taboo a term when discussing how (mis)interpretation of said term influenced a survey.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2021-09-03T19:04:43.613Z · LW · GW

Moreover, even if my understanding is ultimately not what the survey-makers had in mind, the responding researchers having the same understanding as me would be enough to get the results in the OP.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2021-09-03T19:03:17.464Z · LW · GW

I would say that, in an ideal world, the relevant skill/task is "given the analysis already at hand, write a paper that conveys it well" (and it is alarming that this skill becomes much more valuable than the analysis itself, so people get credit for others' analyses even when they clearly state that they are merely retelling them). And I fully believe that both the task of scientific analysis (outputting the results of the analysis, not its procedure, because that's what's needed for non-meta purposes!) and the task outlined above will be achieved earlier than an AI that can actually combine them to write a paper from scratch. AND that each new simple task in the line to the occupation pushes their combination further away even after the simple task itself is achieved.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Grasping Slippery Things · 2021-09-03T18:17:46.483Z · LW · GW

"In your mind's eye, it seems, you can see before you the many could-worlds that follow from one real world." Isn't that exactly what the many-worlds interpretation does to QM (to keep it deterministic, yada-yada-yada; to be fair, Brandon specifically stated he is not considering the QM sense, but I am not sure the sense he suggested himself is distinct)? There are worlds that are (with not-infinitesimally-low probability-mass) located in the future of the world we are in now (and they are multiple), and there are worlds that are not. The former are "realizable", and they "follow" - and whether they are reachable depends on how good the "forward search process that labels certain options as reachable before judging them and maximizing" is. My intuition says that "could" can mean the former, rather than "whatever my mind generated in the search as options" (and, moreover, that the latter is a heuristic of the mind for the former). (Unless, of course, the real bomb under this definition is in "probability-mass" hiding the same "could-ness"; but if you are going to tell me that QM probability-mass is likewise reducible to labeling by a search process and this is the "correct answer", I will find this... well, only mildly surprising, because QM never ceases to amaze me, which influences my further evaluations - but at least I don't see how this obviously follows from the QM sequence.)

Moreover, this quotation from Possibility and Could-ness seems to hint at a similar (yet distinct, because probability is in the mind) problem.
> But you would have to be very careful to use a definition like that one consistently.  "Could" has another closely related meaning in which it refers to the provision of at least a small amount of probability. 

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2021-08-31T22:55:18.087Z · LW · GW

Well, that's not quite true. Let's go back to the initial example: you need to write a linguistics paper. For this, you need at least two things: to perform the linguistic analysis of some data and to actually put it into words. Yet the latter needs the internal structure of the former, not just the end result (as most currently-practical applications of a machine that does linguistic analysis would). The logic behind trees, for instance, not just a tree-parsed syntactic corpus. A neural network (an RNN or something) making better and quicker tree-parsed syntactic corpora than me would just shrug (metaphorically) if asked for the procedure of tree-making. I am near-certain other sciences would show the same pattern for their papers.

More generally, the managing AI would also have to manually handle information flow between the other AIs, which is kinda "automatic" for human minds (though with some important exceptions, leading to the whole idea of mental modules a la Fodor).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Book Review: Age of Em · 2021-08-31T22:44:53.585Z · LW · GW

Well, it was specifically the B1 mass-production droids that were made incredibly cheap and so with, let's say, not the best AI ever. A rare model like HK-47 was superior to usual (neither Force-amplified nor decades-of-training-behind-Mandalore) humans; and the latter case could also be a difference in available weaponry (if your weapon cannot penetrate amplified beskar armor and you only find this out at the moment of attack, you'd need to be very smart to immediately find a way to win or retreat before the Battle Reflexes guy shuts you off).

As for FTL - I wouldn't be so sure; the history of research sometimes makes strange jumps. The Romans were this close to going all steampunk, and a naive modern observer could say "having steam engines without gunpowder seems unlikely". Currently we don't know what, if anything, could provide FTL, and the solution could jump on us unexpectedly, unrelated to AI development.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2021-08-28T01:23:00.524Z · LW · GW

I am afraid I cannot agree. For one, this would require a further, 21st AI, the "managing AI", that does the combining. Moreover, the data exchange between these narrow AIs may be slower and/or worse (especially considering that many of the strong domain-specific AIs don't really have extractable internal data of any use).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Book Review: Age of Em · 2021-08-28T01:11:06.983Z · LW · GW

It can and usually does. Note that we do see some scenes where a pilot leaves the ship and it, seemingly by itself, flies away to park or something (for instance, R4 does it with Obi-Wan's ship in Episode III, IIRC). It might actually be a funny story of each side using organic pilots because the other side uses human pilots and astrodroids are not that good at predicting organics' behavior, so it is just a Pareto equilibrium.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Extreme Rationality: It's Not That Great · 2021-07-16T18:42:55.271Z · LW · GW

I would replace "introduced" with "sold" or "made interesting" here. It's not enough to introduce a group of people to something - unless their values are already in sync with said something's _appearance_ (and the appearance, aka elevator pitch, aka hook, is really important here), you would need to apply some marketing/Dark Arts/rhetoric/whatever-you-call-it to persuade them it's worth it. And, for all the claims of "Rationalists should win", Yudkowsky2008 was too much of a rhetoric-hater (really, not noticing his own pattern of the good teachers of Defence Against the Dark Arts at Hogwarts themselves practicing Dark Arts (or, in the case of Lupin, *being* Dark Arts)?) to perform that marketing, and thus the blog went on to attract people who already shared the values - nerdy booksmarts. (Note that a) to the best of my knowledge, HPMoR postdates the Sequences; b) Harry Potter isn't exactly a booksmart-choosing fandom, as is shown by many factors, including the gross proportion of "watched-the-films-never-read-the-books" fans against readers, AND people who imagine Draco Malfoy to be a refined aristocrat whose behavior is, though not nice, perfectly calibrated, instead of the petty bully we see in both books and films, AND - I should stop here before I go on a tangent; so I am not certain how relevant "Harry Potter fans" is.)

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Pluralistic Moral Reductionism · 2021-06-25T12:39:47.047Z · LW · GW

> "He's an unmarried man, but is he a bachelor?" This is a 'closed' question. The answer is obviously "Yes."

This is a false claim, unfortunately. Bachelor is not merely an "unmarried man", it is an "unmarried man who could've been married in his society" (as all the long-discussed things like "#My 5-year-old son is a bachelor" and "#The Pope is a bachelor" show). ETA: the part beginning with "who" is probably a presupposition rather than assertion ("The Pope is not a bachelor" is only felicitous if used as metalinguistic "The Pope cannot be described by the word 'bachelor'", not if used in the literal sense "The Pope is married although it is not allowed").

> Austere Metaethicist: Your definition doesn't connect to reality. It's like talking about atom-for-atom 'indexical identity' even though the world is made of configurations and amplitudes instead of Newtonian billiard balls. Gods don't exist.

This one is also not obviously true. We can ask what Sherlock Holmes would approve of despite the fact that he never existed (and I can imagine a morality that says "good is what Sherlock approves of" - a strange morality though it would be). Why can't we take "an essentially just and loving God" as a similar literature character?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The Best Textbooks on Every Subject · 2021-06-24T16:47:00.342Z · LW · GW

Subject: phonological theories
Recommendation: Routledge's Handbook on Phonological Theory
Strengths: each chapter on an approach is written by a specialist in that approach, clearly explaining what the ideas are.
Alternatives: the relevant section of Kodzasov & Krivnova's "Obshchaya fonetika" (short, obscuring some very important points, and leaving out some approaches); Philip Carr's "Phonology" (somewhat outdated - 1993 - and it plays that unpleasant trick of "this is our current theory... now let's look at how it's wrong and adopt a better theory... and again... and again" - while this is akin to how scientific thought goes, it doesn't necessarily do justice to the theories in question).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Burdens · 2021-05-25T10:41:54.517Z · LW · GW

Sorry, but I do not think that this is a terminological disagreement, nor that your "strong-manning" is faithful to my comment. I believe that Scott's idea is somewhat inconsistent because he pits an individual example against a general idea; a specific structure against a set of psychologies.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Gears vs Behavior · 2020-04-25T02:07:01.719Z · LW · GW

Sounds like a (much better than original) explanation of Igor Mel'čuk's "structural model" vs. "functional model". An old topic in linguistics and, arguably, other cognitive sciences.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Adaptive Immune System Aging · 2020-04-25T01:50:04.755Z · LW · GW

Infertility does not entail ceasing to produce hormones (the most obvious examples being vasectomy in males and tubal ligation, the operation on the tubes, in females). It is pretty unlikely that COVID-19 actually castrates its victims; this is testable, though, by measuring levels of testosterone and estrogens.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Conflict vs. mistake in non-zero-sum games · 2020-04-25T01:36:30.101Z · LW · GW

suppoesd - should read supposed

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on How to learn soft skills · 2020-03-26T15:38:16.786Z · LW · GW

I wish it came with an explanation of what _exactly_ the Impatience and Hubris virtues entail (given that both are generally described as non-virtues, but I do seem to have the feeling that they can be good; the same goes for Laziness, but there I believe I already have a better understanding).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Capability amplification · 2020-03-24T16:29:06.939Z · LW · GW

"the agent would lack a nuanced understanding of what we consider terrible" - isn't that the whole narrative of Eliezer's genie tales? While having #2 as a separate request is good, failure to follow #1 can still be catastrophic enough, because computers think faster, so our formal "staying in control" may not matter enough.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-24T00:11:28.185Z · LW · GW

Oh, then sorry about the RNN attack ;)

Well, no. In particular, if you feed the same sound input to the linguistic module (PF) and to the module of (say, initially visual) perception, the very intuition behind Fodorian modules is that they will *not* do the same thing - PF will try to find linguistic expressions similar to the input, whereas the perception module will try to, well, tell where the sound comes from, how loud it is, and things like that.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Factored Cognition · 2020-03-23T14:19:03.333Z · LW · GW

This memoizing seems similar to "dynamic programming" (which is, semi-predictably, neither quite dynamic nor stricto sensu programming). Have you considered that angle?
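Indeed, the two are near-duals: top-down memoizing caches subproblem answers as the recursion discovers them, while bottom-up dynamic programming tabulates the same subproblems in a fixed order. A minimal sketch of the connection (my own illustration, not from the thread), using the Fibonacci recurrence:

```python
from functools import lru_cache

# Top-down: plain recursion plus a cache of already-answered subquestions
# (the "memoizing" sense).
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: classic "dynamic programming" tabulation of the same recurrence.
def fib_dp(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(30) == fib_dp(30) == 832040
```

Both versions answer each subquestion exactly once; the only difference is who decides the order - the recursion or the loop.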

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-23T14:07:59.474Z · LW · GW

1. "My understanding is that we can do things like remember a word by putting it on loop using speech motor control circuits" - this is called the phonological loop in psycholinguistics (psychology) and is NOT THE SAME as working memory - in fact, tests for working memory usually include reading something aloud precisely to occupy those circuits and not let the test subject take advantage of their phonological loop. What I mean by working memory is the number of things one can hold in their mind simultaneously, as captured by the "7±2" work and Daneman's tests - whatever the explanation is.

2. Fodorian modules are, by definition, barely compatible with the CCA. And the Zeitgeist of theoretical linguistics leads me to think that when you use an RNN to explain something, you're cheating your way to performance instead of explaining what goes on (i.e., to think that the brain ISN'T an RNN or a combination thereof - at least not in an obvious sense). Thus we don't quite share neurological assumptions - though bridging to a common point may well be possible.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Approval-directed agents · 2020-03-22T14:53:12.872Z · LW · GW

Allowing it to specify another overseer? Not to generalize from fiction, but have you even seen Spider-Man: Far From Home? The new overseer may well turn out to be a manipulator who convinced Hugh to turn over the reins - and this is much more likely than a manipulator who can influence every decision of Hugh. Thus the AI should probably have a big sparkling warning sign of NOT CHANGING THE OVERSEER, maybe unless an "external observer" party approves - though this is somewhat reminiscent of "turtles all the way down", manipulating several observers is trivially more difficult.

Also, a SIMPLE case of natural language? The fact that current NLP works on strings and neural nets and other most likely wrong assumptions about language kinda suggests that it is not simple.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The human side of interaction · 2020-03-22T14:29:21.236Z · LW · GW

On the latter: yes, this is part of the question but not the whole question. See addendum.

On the former: technically not true. If we take "human values" to mean "values averaged between different humans" (not necessarily by arithmetic mean, of course), they may be vastly different from "is this good from my viewpoint?".

On the bracketed part: yeah, that too. And our current morals may not be that good judging by our metamorals.

Again, I want to underscore that I mention this as a theoretical possibility not so improbable as to make it not worth considering - not as an unavoidable fact.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-22T14:01:15.802Z · LW · GW

I would think that the former are the _mechanism_ of the latter - though, as they say, "don't quote me on that".

There is an interesting question of whether, if many things are modules, there is also a non-module part, the "general intelligence" part, which does not share those properties. Perhaps unsurprisingly, there is no consensus (though my intuitions say there is a GI part).

Also, it seems that different modules might use the same (common) working memory - though this is not set in stone (and depends, in particular, on your analysis of language - if late Chomsky is right, only phonology (PF) and perhaps semantics (LF) are modular, whereas syntax uses our general recursive ability, and this is why it uses general working memory).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The human side of interaction · 2020-03-21T13:23:16.366Z · LW · GW

This led me to think... why do we even believe that human values are good? Perhaps typical human behaviour amplified by the possibilities of a superintelligence would actually destroy the universe. I don't personally find this very likely (that's why I never posted it before), but, given that almost all AI safety is built around "how to check that the AI's values are convergent with human values" one way or another, perhaps something else should be approached - like remodeling history (actual, human history) from a given starting point (say, the Roman Principate or 1945) with actors assigned values different from human values (but in similar relationships to each other, if applicable) and finding what leads to better results (and, in particular, to us not being destroyed by 2020). All with the usual sandbox precautions, of course.

(Addendum: Of course, pace "fragility of value". We should have some inheritance from metamorals. But we don't actually know how well our morals (and systems in "reliable inheritance" from them) are compatible with our metamorals, especially in an extreme environment such as superintelligence.)

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-21T11:58:13.529Z · LW · GW

On the second point - I had misunderstood you; now I see what you're talking about. If the Fodorian-modules view is right, the neocortex one(s) still isn't (aren't) "conscious". The received wisdom I have says that modules are:

1) Automatic (one cannot consciously change how they work - except by cutting off their input) - hence susceptible to illusions/wrong analyses/...;

2) Autonomous (consciousness only "sees" outputs; a module is a black box for its owner; these two properties are related but distinct - yet something that has both can barely be called "conscious");

3) Inherited, with a critical period of fine-tuning (that's basically what you called the time window).

There were some more points but I (obviously) forgot them. And that brings me to your first point: I can't point to a textbook right away but that was part of several courses I was taught (Psychology of cognitive processes at Moscow State University (Fundamental and Applied Linguistics program); Language, Music, and Cognition in NYI 2016 - nyi.spb.ru).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-20T14:27:33.795Z · LW · GW

For me, your examples of why visual perception needs the same things as language, including a time window, are a standard, textbook-level (and often used!) proof that they're both (widely understood) Fodorian modules, fine-tuned but not fully designed during the time window - not that they are, vice versa, both handled by a general algorithm like a snowflake. (In the case of visual processing, two distinct modules indeed, though the labels "conscious" and "subconscious" are strange; I'm used to calling those the "What-path" and the "Where-path".)

Now, I understand that Fodorian modules (even when you throw away the old requirement of there being a strictly limited part of the cortex responsible for each) are not that widely held nowadays. However, when I look at people, I cannot help seeing them. From prosopagnosia to specific language impairments, aka aphasias (only two of the six commonly discussed aphasias are really language-based, but the name stuck), to memory disruptions, we see individual modules breaking - including in-born breaking, before fine-tuning! - and just as well we see people whose general intelligence is reasonably low with unusually good performance from some of their modules.

Addendum: "visual" in "visual processing" is, of course, a red herring. It would be better to speak of two perception modules with variable input (blind-born people fine-tune them to other things, for example - whereas those blinded in adulthood, AFAIK, do not).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on No One Can Exempt You From Rationality's Laws · 2020-03-20T13:28:35.459Z · LW · GW

I would say that you're rather strawmanning the author of HPMoR, where some reasons to distrust democracy are nicely illustrated - by (spoiler, now rot13ed) gur znva ureb thvyg-gevccrq vagb gnxvat n yvgreny gevc gb Nmxnona naq uvf orfg sevraq nyzbfg trggvat nabgure bar va funpxyrf.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Evaluability (And Cheap Holiday Shopping) · 2020-03-19T19:16:18.099Z · LW · GW

One should factor in the odds of similar games occurring multiple times throughout one's life (unless one is a frequent visitor of casinos). I claim that *these* are too low for the situation to "add up to normality".

Answering the question asked... I could start *considering* the second choice at a 25% chance of 15 (probably properly 16, but my gut feeling has, of course, rounded it) and preferring it at... well, maybe never?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Why Truth? · 2020-02-15T18:14:27.916Z · LW · GW

I mean, there are minimal pairs (mostly in cases where possessive apostrophes are for some reason not used, like its - it's, who's - whose). But overall it just helps readability (speaking as a non-native).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Transhumanists Don't Need Special Dispositions · 2019-12-17T01:04:59.926Z · LW · GW

There is a tale in Greek myth of a woman who tricked Apollo into granting her a wish: she grabbed a pile of sand and asked to live as many ages as there were individual grains in the pile - yet she forgot to ask for eternal youth, so a thousand years later she was a dribbling and unhappy old hag desperately wishing she had died ages ago. I suppose something like this is what is pictured by non-transhumanists when they speak of "too old" - and there is no guarantee that your methods to preserve life will also preserve good bodily condition (not to mention Alzheimer-like thingies - what if the body does live but the mind deteriorates?).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Against Shooting Yourself in the Foot · 2019-10-23T00:37:55.362Z · LW · GW

It seems to be hard in practice to draw the line between "only picking certain bets" and "doing things I'm best at" (though the theoretical difference is obvious - maximizing P(win) by choice of events versus maximizing the usefulness of one's skills for winning). The latter seems to be a good practice - yet your attack on the former seems to indirectly hamper it.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Burdens · 2019-09-04T13:22:18.869Z · LW · GW

But modern society is an individual example of the general idea of society, whereas human psychology as optimized is a general idea (realized in every specific individual to a certain degree).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Time and Effort Discounting · 2019-09-01T02:24:13.823Z · LW · GW

Isn't the fact that we perceive all quantities on a logarithmic scale far wider than these specific effects? We find it again and again and again, from sound perception to explicit amount perception to this. (Note that this is an even wider claim than mattnewport's.) So the distance between 8 and 9 years is not the same as the distance between 1 month and 1 year + 1 month, because the difference of the logarithms... erm... differs; and the suggestion to give something "now" is then just as infinite as probability 1.
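A quick numeric sketch of the log-scale point (my own illustration; the delays are the ones from the comment):

```python
import math

MONTH = 1 / 12  # in years

# On a logarithmic (Weber-Fechner-style) scale, the perceived "distance"
# between two delays depends on their ratio, not their absolute difference.
def log_distance(t1: float, t2: float) -> float:
    return abs(math.log(t2) - math.log(t1))

near_future = log_distance(8, 9)          # ratio 9/8:  log(9/8) ~ 0.118
now_ish = log_distance(MONTH, 1 + MONTH)  # ratio 13:   log(13)  ~ 2.565

assert now_ish > 20 * near_future

# And "now" itself is off the scale: log_distance(0, anything) diverges,
# mirroring the analogy with probability 1.
```

The 8-vs-9-years gap is tiny on this scale, the 1-month-vs-13-months gap is over twenty times larger, and a delay of exactly zero diverges entirely.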

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Nobody Is Perfect, Everything Is Commensurable · 2019-08-31T15:48:32.706Z · LW · GW

"But I also benefit from the asteroid that killed the dinosaurs, in that I live on land that was formerly dinosaur-occupied" - you... do know that the asteroid as the cause is unlikely, and something like volcanic activity is a much closer fit (not that it makes much difference to the argument)?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Burdens · 2019-08-31T15:22:50.610Z · LW · GW

"And that is: humans don’t owe society anything. We were here first" - not true. Society as an idea was clearly established before the human race, before language, before thought; it is an all-ape thing, maybe even wider. And (almost) every individual society is older than its individual members. And you apply an illegitimate operation of comparing an individual society with the human as an idea. Maybe your idea in general is good, but you're using a wrong argument - incorrigibly wrong, as far as I can tell.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Archipelago and Atomic Communitarianism · 2019-08-30T23:50:56.232Z · LW · GW

"And they should be allowed to form completely voluntary communities of icky people that enforce icky cultural norms and an insular society promoting ickiness, just like everyone else" - the child problem again. If a non-icky child is born into such a community, there may be little possibility of escaping it. Say, conservative parents are only going to subsidise their child's law school if it is one they condone - one with those rules - and the child is gay (or generally disagrees with their agenda, but I believe the gay situation is more obvious because that seems to be inborn).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Don’t Fear The Filter · 2019-08-30T23:33:31.469Z · LW · GW

Despite science fiction, I see little plausibility in hive rationality. So - and I may be putting my neck under the axe here - I claim that no hive race could rise to anything near "contemporary technology". Also, most of the contemporary technology usable for colonizing is already costly and/or faulty enough that someone who is "paranoid enough" (and some Prof. Moody tells us there is no such thing - but still) would be unlikely to ever leave their own planet.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The Ideology Is Not The Movement · 2019-08-30T23:03:00.926Z · LW · GW

"As well try to predict the warlike or peaceful nature of the United Kingdom by looking at a topographical map of Great Britain" - such things are _done_ and _super-done_. Mountains (and difficult-to-cultivate steppes? Is this maybe something about pastoralism vs. agriculture, essentially?) predict average warlikeness fairly well. Who were the most peaceful Ancient Greeks? The Thessalians. Why? Thessaly is, like, the only place in Greece vaguely resembling a grassland, whereas all the rest of Greece is covered in hills. The Caucasus has been one major battleground for centuries. Sco'land? You bet. The East seems to be generally more relaxed, but still less so in Tibet than in Eastern China. Early-Rome-era Italy? Relatively peaceful Greek colonies on the southern coasts; warlike Etrusci and Gauls and, certainly not least, Romans near the Apennines and Alps. (Note that Rome is relatively far from the sea, compared to the usual Greek colonies.)

"It also brings together a group of people with some pre-existing common characteristics: male, nerdy, often abrasive, not very successful, interested in speculation, high-systematizing" - can "male" be an artifact of the correlation of nerdy, abrasive, and high-systematizing with gender? After all, as you note, when a girl accepts all the other things, she is usually easily accepted, and gender is only a proxy for predicting the qualities.

"I think America has better values than Pakistan does, but that doesn’t mean I want us invading them, let alone razing their culture to the ground and replacing it with our own" - why not? No, seriously. America invaded several Muslim (fundamentalist Muslim, not we-kinda-like-Quran-stop-accusing-us-of-ISIS Muslim) countries already anyway. Why not raze the fundamentalist culture to the ground and replace it with universal?

"except that “race” is a much more complicated concept than ethnicity" - W. H. A. T. Arm people with five to eight differential features, and you get a good proxy for all six main races AND predictions for unclear cases. I'd like to see you try doing that with an ethnicity (without the language cheat, which is known to backfire).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Guided By The Beauty Of Our Weapons · 2019-08-30T22:18:19.655Z · LW · GW

"If you genuinely believe that facts and logic don’t work on people, you shouldn’t be writing articles with potential solutions" - I have seen people claiming things like "science/logic is only natural for people with schizoid tendencies and utterly unnatural for others". Given that "people with schizoid tendencies" (or whatever the right denominator is) may be as much of a tribe, split off dark-matter-like from the rest of the world, as your Red/Blue/Grey tribes, you may get an illusion that people are generally convincible, whereas in reality it may well be just a feature of your own tribe.

And then the articles about people unconvincible by logic are written for the tribe's members as a guide to deal with the outsiders.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on In Favor of Niceness, Community, and Civilization · 2019-08-30T21:18:03.347Z · LW · GW

"Or “And that is why the Geneva Convention was so obviously impossible that no one even bothered to attend the conference”" - the Geneva Convention prohibited mercenaries. And no one ever used them afterwards... except they did.

"Civilization didn’t conquer the world by forbidding you to murder your enemies unless they are actually unrighteous in which case go ahead and kill them all" - that's... what usually justifies most wars including many civilizational wars.

"In particular, the history of the past few hundred years in the United States has been a history of decreasing censorship and increasing tolerance" - EXCUSE YOU? When was the last time the US followed the original (modulo the first ten amendments) version of the Constitution in its anti-censorship? And, moreover, what about silent censorship (the kind Chomsky describes - effective censorship without any legislation for it)?

"And it kind of was. Just not the way most people expected" - yeah, by creating the Chalcedonian (Orthodox before the Orthodox/Catholic split) church which more or less abandoned all that and either consumed (like Dominicans) or destroyed (like Albigensians). What you may think of as a continuing tradition is Luther-Calvin rebellion against that, there is no way original Christians could have won over Roman Empire, even in its decay. The Sun religion was more powerful when Constantine decided to get some more followers by tricking Christians into obeying them.


Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Book Review: Age of Em · 2019-08-30T01:03:46.926Z · LW · GW

"In Star Wars, the Rebellion had all of these beautiful hyperspace-capable starfighters that could shoot laser beams and explore galaxies – and they still had human pilots" - except that, unless the pilot is a Jedi or Bith or something like, most of the work is done by astrodroids like R2 and pilots are something of a general controller/safety measure.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Don’t Fear The Filter · 2019-08-30T00:36:54.687Z · LW · GW

You describe the "x-risk" as if it were only one. As far as I understand, the general idea of Great Filter as self-destruction is "every civilization found _one way or another_ to destroy or irreparably re-barbarize itself". Not the same way. Not "EwayAcivilizations" but "AcivilizationsEway". And this is a much weaker claim.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2019-08-30T00:24:35.636Z · LW · GW

"As the authors point out, these two questions are basically the same" - strongly not true if we speak of domain-specific AI. Creating an AI that does linguistic analysis of a given dataset better than me is easier than creating an AI that is a better linguist than me because it actually requires additional tasks such as writing academic papers. So the task/occupation difference makes sense.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Should AI Be Open? · 2019-08-30T00:15:14.383Z · LW · GW

1)"Wells believed the coming atomic bombs would be so deadly that we would inevitably create a utopian one-world government to prevent them from ever being used. Sorry, Wells. It was a nice thought" - I don't know about utopian but, save for North Korea, this is more or less what we have since USSR lost the Cold War. USA single-handedly controls most of the world, and no nuclear weapon is sighted in active use.

2)"Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren" - unless they all die out from genetic diseases. This is the Ashkenazi evolutionary experiment (the one you link too) up to eleven.

3)"and that one reason we’re not smarter may be that it’s too hard to squeeze a bigger brain through the birth canal" - should be pretty much obliterated by modern Caesarian, but do we see burst of intelligence in last decades?

4)"If an alien species showed up in their UFOs, said that they’d created us but made a mistake and actually we were supposed to eat our children, and asked us to line up so they could insert the functioning child-eating gene in us, we would probably go all Independence Day on them; computers with more goal-directed architecture would if anything be even more willing to fight such changes" - phew, THAT metaphor finally explains why "just don't let it self-modify" is not enough. Yudkowsky's arguments, I'm afraid, did not come through to me on the necessary level... stupid, stupid brain.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on How The West Was Won · 2019-08-29T22:47:29.183Z · LW · GW

As a huge antipostcolonialist (thus colonialist) I really incline towards the "go universal culture" idea. However, there is also a more specific thing. As Nikonov notes, we should distinguish immigrants from colonists. Immigrants are largely subsumed by the country they arrive in, even if they keep some peculiar details about themselves. Colonists do... well, what 16th-18th century colonists did; they are certainly not subsumed (how much of Aboriginal Australian or Native American culture is seen in typical Australian/US settlers?). And the usual (partly justified, IMO) fear of Muslim "immigration" is that they do not, in fact, immigrate - they colonize. Or, at least, try to. They want to get European riches but keep their culture - just as American colonists were certainly not going to acquire "Indian" culture while gaining resources and land in the Americas.

"Heroin use is something every society would select if given the opportunity" - erm, consistently not true. Even Friedman in "Machinery of Freedom" concedes that the governmentless world he pictures could still ban heroin.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Considerations On Cost Disease · 2019-08-29T19:38:51.039Z · LW · GW

"Cowen seems to use it indiscriminately to refer to increasing costs in general – which I guess is fine, goodness knows we need a word for that" - the word _is_ inflation. Maybe averaging is not the best way to get the real inflation, but wages are calculated as if it were?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Against Tulip Subsidies · 2019-08-29T18:30:59.321Z · LW · GW

This could have been rigorously applied to schools as well in the past, and we would have ended up with people not knowing basic math unless they needed it. As someone from Russia, where there are still many remnants of Soviet free education (and commercial education is notoriously worse), I really do not see most of your point. I would argue that providing free higher education does not require paying that much - prices for education, as for everything, are regulated (I don't mean direct governmental regulation) and will thus probably get lower if it becomes a universally accessible thing.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Living Metaphorically · 2019-08-29T18:04:23.363Z · LW · GW

Very cool argument; note though that:

1)L&J directly reject analytic philosophy;

2)Frege's system ended up in a contradiction - namely, Russell's paradox.
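For reference, the contradiction in question: Frege's naive comprehension lets one form the set of all sets that do not contain themselves, and asking whether that set contains itself gives

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

which is inconsistent either way, the point Russell made in his 1902 letter to Frege.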