Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Burdens · 2021-05-25T10:41:54.517Z · LW · GW

Sorry, but I do not think that this is a terminological disagreement, or that your "strong-manning" is faithful to my comment. I believe that Scott's idea is somewhat inconsistent because he pits an individual example against a general idea; a specific structure against a set of psychologies.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Gears vs Behavior · 2020-04-25T02:07:01.719Z · LW · GW

Sounds like a (much better than original) explanation of Igor Mel'čuk's "structural model" vs. "functional model". An old topic in linguistics and, arguably, other cognitive sciences.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Adaptive Immune System Aging · 2020-04-25T01:50:04.755Z · LW · GW

Infertility does not entail the non-production of hormones (the most obvious examples being vasectomy in males and tubal ligation in females). It is pretty unlikely that COVID-19 actually castrates its victims; this is testable, though, by measuring levels of testosterone and estrogen.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Conflict vs. mistake in non-zero-sum games · 2020-04-25T01:36:30.101Z · LW · GW

suppoesd - should read supposed

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on How to learn soft skills · 2020-03-26T15:38:16.786Z · LW · GW

I wish it came with an explanation of what _exactly_ the Impatience and Hubris virtues entail (given that both are generally described as non-virtues, though I do seem to have the feeling that they can be good; the same goes for Laziness, but there I believe I already have a better understanding).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Capability amplification · 2020-03-24T16:29:06.939Z · LW · GW

"the agent would lack a nuanced understanding of what we consider terrible" - isn't this the whole narrative of Eliezer's genie tales? While having #2 as a separate request is good, failure to follow #1 can still be catastrophic enough, because computers think faster, so our formal "staying in control" may not matter much.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-24T00:11:28.185Z · LW · GW

Oh, then sorry about the RNN attack ;)

Well, no. In particular, if you feed the same sound input to linguistic module (PF) and to the module of (say, initially visual) perception, the very intuition behind Fodorian modules is that they will *not* do the same - PF will try to find linguistic expressions similar to the input whereas the perception module will try to, well, tell where the sound comes from, how loud it is and things like that.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Factored Cognition · 2020-03-23T14:19:03.333Z · LW · GW

This memoizing seems similar to "dynamic programming" (which is, semi-predictably, neither quite dynamic nor stricto sensu programming). Have you considered that angle?
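For what it's worth, the parallel is easy to make concrete with the textbook Fibonacci example (a generic sketch, not code from the post): top-down memoizing caches each subresult the first time it is computed, while bottom-up dynamic programming fills in the very same subresults in order.

```python
from functools import lru_cache

# Top-down "memoizing": each fib_memo(k) is computed once, then cached.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up dynamic programming: iterate through the same subproblems in order.
def fib_dp(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both build the same table of subresults; memoization just discovers the table lazily instead of filling it eagerly.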

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-23T14:07:59.474Z · LW · GW

1. "My understanding is that we can do things like remember a word by putting it on loop using speech motor control circuits" - this is called the phonological loop in psycholinguistics (psychology) and is NOT THE SAME as working memory; in fact, tests for working memory usually include reading something aloud precisely to occupy those circuits and keep the test subject from taking advantage of their phonological loop. What I mean by working memory is the number of things one can hold in mind simultaneously, as captured by the "7±2" work and Daneman's tests - whatever the explanation is.

2. Fodorian modules are, by definition, barely compatible with CCA. And the Zeitgeist of theoretical linguistics leads me to think that when you use RNN to explain something you're cheating your way to performance instead of explaining what goes on (i.e. to think that brain ISN'T an RNN or a combination thereof - at least not in an obvious sense). Thus we don't quite share neurological assumptions - though bridging to a common point may well be possible.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Approval-directed agents · 2020-03-22T14:53:12.872Z · LW · GW

Allowing it to specify another overseer? Not to generalize from fiction, but have you even seen Spider-Man: Far From Home? The new overseer may well turn out to be a manipulator who convinced Hugh to turn over the reins - and this is much more likely than a manipulator who can influence every decision of Hugh. Thus the AI should probably have a big sparkling warning sign of NOT CHANGING THE OVERSEER, maybe unless an "external observer" party approves - and though this is somewhat reminiscent of "turtles all the way down", manipulating several observers is trivially more difficult.

Also, a SIMPLE case of natural language? The fact that current NLP works on strings and neural nets and other most likely wrong assumptions about language kinda suggests that it is not simple.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The human side of interaction · 2020-03-22T14:29:21.236Z · LW · GW

On the latter: yes, this is part of the question but not the whole question. See addendum.

On the former: technically not true. If we take "human values" as "values averaged across different humans" (not necessarily by arithmetic mean, of course), they may be vastly different from "is this good from my viewpoint?".

On the bracketed part: yeah, that too. And our current morals may not be that good judging by our metamorals.

Again, I want to underscore that I mention this as a theoretical possibility not so improbable as to make it not worth considering - not as an unavoidable fact.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-22T14:01:15.802Z · LW · GW

I would think that the former are the _mechanism_ of the latter - though, as they say, "don't quote me on that".

There is an interesting question of whether, if many things are modules, there is also non-module part, the "general intelligence" part which does not share those properties. Perhaps unsurprisingly, there is no consensus (though my intuitions say there is the GI part).

Also, it seems that different modules might use the same (common) working memory - though this is not set in stone (and depends, in particular, on your analysis of language - if late Chomsky is right, only phonology (PF) and perhaps semantics (LF) are modular, whereas syntax uses our general recursive ability, and this is why it uses general working memory).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The human side of interaction · 2020-03-21T13:23:16.366Z · LW · GW

This led me to think: why do we even believe that human values are good? Perhaps typical human behaviour, amplified by the capabilities of a superintelligence, would actually destroy the universe. I don't personally find this very likely (which is why I never posted it before). But given that almost all AI safety is built around "how to check that an AI's values converge with human values" one way or another, perhaps something else should be tried - like remodeling history (actual, human history) from a given starting point (say, the Roman Principate, or 1945) with actors assigned values different from human values (but in similar relationships to each other, if applicable), and finding what leads to better results (and, in particular, to us not being destroyed by 2020). All with the usual sandbox precautions, of course.

(Addendum: Of course, pace "fragility of value". We should have some inheritance from metamorals. But we don't actually know how well our morals (and systems in "reliable inheritance" from them) are compatible with our metamorals, especially in an extreme environment such as superintelligence.)

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-21T11:58:13.529Z · LW · GW

On the second point - I have misunderstood you, now I see what you're talking about. If Fodorian modules' view is right, the neocortex one(s) still isn't (aren't) "conscious". The received wisdom I have says that modules are:

1) Automatic (one cannot consciously change how they work - except by cutting off their input) - hence susceptible to illusions/wrong analyses/...;

2) Autonomous (consciousness only "sees" their outputs; a module is a black box for its owner; these two properties are related but distinct - yet something that has both can barely be called "conscious");

3) Inherited, with a critical period of fine-tuning (that's basically what you called the time window).

There were some more points but I (obviously) forgot them. And that brings me to your first point: I can't point to a textbook right away, but this was part of several courses I was taught (Psychology of Cognitive Processes at Moscow State University, Fundamental and Applied Linguistics program; Language, Music, and Cognition at NYI 2016).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-03-20T14:27:33.795Z · LW · GW

For me, your examples of why visual perception needs the same things as language, including the time window, are a standard, textbook-level (and often used!) proof that both are (widely understood) Fodorian modules (in the case of visual processing, two distinct modules indeed, though the labels "conscious" and "subconscious" are strange; I'm used to calling those the "what-path" and the "where-path"), fine-tuned but not fully designed during the time window - not that they are, vice versa, both handled by a general algorithm like a snowflake.

Now, I understand that Fodorian modules (even when you throw away the old requirement that a strictly limited part of the cortex be responsible for each) are not that widely held nowadays. However, when I look at people, I cannot help seeing them. From prosopagnosia to specific language impairments, aka aphasias (only two of the six commonly discussed aphasias are really language-based, but the name stuck), to memory disruptions, we see individual modules breaking - including in-born breaking, before fine-tuning! - and just as well we see people whose general intelligence is reasonably low but whose performance on some modules is unusually good.

Addendum: "visual" in "visual processing" is, of course, a red herring. It would be better to speak of two perception modules with variable input (people born blind fine-tune them to other inputs, for example - whereas those blinded in adulthood, AFAIK, do not).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on No One Can Exempt You From Rationality's Laws · 2020-03-20T13:28:35.459Z · LW · GW

I would say that you're rather strawmanning the author of HPMoR where some reasons to distrust democracy are nicely illustrated - by (spoiler, now rot13ed) gur znva ureb thvyg-gevccrq vagb gnxvat n yvgreny gevc gb Nmxnona naq uvf orfg sevraq nyzbfg trggvat nabgure bar va funpxyrf.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Evaluability (And Cheap Holiday Shopping) · 2020-03-19T19:16:18.099Z · LW · GW

One should factor in the odds of similar games occurring multiple times throughout one's life (unless one is a frequent visitor of casinos). I claim that *these* are too low for the situation to "add up to normality".

Answering the question asked... I could start *considering* the second choice at 25% chance of 15 (probably properly 16 but my gut feeling has, of course, rounded it) and preferring it at... well, maybe never?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Why Truth? · 2020-02-15T18:14:27.916Z · LW · GW

I mean, there are minimal pairs (mostly in cases where possessive apostrophes are for some reason not used, like its - it's, who's - whose). But overall it just helps readability (speaking as a non-native).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Transhumanists Don't Need Special Dispositions · 2019-12-17T01:04:59.926Z · LW · GW

There is a tale in Greek myth of a woman who tricked Apollo into granting her a wish: she grabbed a pile of sand and asked to live as many ages as there were grains in the pile - yet she forgot to ask for eternal youth, so a thousand years later she was a dribbling and unhappy old hag desperately wishing she had died ages ago. I suppose something like this is what non-transhumanists picture when they speak of "too old" - and there is no guarantee that your methods to preserve life will also preserve good bodily condition (not to mention Alzheimer-like thingies - what if the body does live but the mind deteriorates?).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Against Shooting Yourself in the Foot · 2019-10-23T00:37:55.362Z · LW · GW

It seems hard in practice to draw the line between "only picking certain bets" and "doing things I'm best at" (though the theoretical difference is obvious: the former maximizes P(win) by choosing events, the latter by applying one's strongest skills). The latter seems to be a good practice - yet your attack on the former seems to indirectly hamper it.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Burdens · 2019-09-04T13:22:18.869Z · LW · GW

But modern society is an individual example of the general idea of society, whereas optimized human psychology is a general idea (realized in every specific individual to a certain degree).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Time and Effort Discounting · 2019-09-01T02:24:13.823Z · LW · GW

Isn't the fact that we perceive all quantities on a logarithmic scale far more general than these specific effects? We find it again and again and again, from sound perception to explicit amount perception to this. (Note that this is an even wider claim than mattnewport's.) So the distance between 8 and 9 years is not the same as the distance between 1 month and 1 year + 1 month, because the difference of the logarithms... erm... differs; and the suggestion to give something "now" is then just as infinite as probability 1.
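The arithmetic behind this can be spelled out (a generic illustration, not anything from the original post): on a log scale the 8-vs-9-years gap is tiny, the 1-vs-13-months gap is large, and "now" (t = 0) has no finite logarithm at all.

```python
import math

# Log-scale "distance" between two delays; the time unit cancels out.
def log_distance(t1: float, t2: float) -> float:
    return abs(math.log(t2) - math.log(t1))

years_gap = log_distance(8, 9)    # ~0.118: barely distinguishable delays
months_gap = log_distance(1, 13)  # ~2.565: a huge subjective gap
# t = 0 ("now") has no logarithm at all (math.log(0) raises ValueError) -
# the analogue of probability 1 being infinite certainty.
```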

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Nobody Is Perfect, Everything Is Commensurable · 2019-08-31T15:48:32.706Z · LW · GW

"But I also benefit from the asteroid that killed the dinosaurs, in that I live on land that was formerly dinosaur-occupied" - you... do know that asteroid as a cause is unlikely, and something like volcano activity is much closer (not that it makes much difference to the argument)?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Burdens · 2019-08-31T15:22:50.610Z · LW · GW

"And that is: humans don’t owe society anything. We were here first" - not true. Society as an idea was clearly established before the human race, before language, before thought; it is an all-ape thing, maybe even wider. And (almost) every individual society is older than its individual members. And you apply an illegitimate operation of comparing an individual society with the human as an idea. Maybe your idea in general is good, but you're using a wrong argument - incorrigibly wrong, as far as I can tell.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Archipelago and Atomic Communitarianism · 2019-08-30T23:50:56.232Z · LW · GW

"And they should be allowed to form completely voluntary communities of icky people that enforce icky cultural norms and an insular society promoting ickiness, just like everyone else" - the child problem again. If a non-icky child is born into such a community, there may be little chance of escaping it. Say, conservative parents are only going to subsidise their child's law school if it is one they condone - one with those rules - and the child is gay (or generally disagrees with their agenda, but I believe the gay case is more obvious, because that seems to be inborn).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Don’t Fear The Filter · 2019-08-30T23:33:31.469Z · LW · GW

Science fiction notwithstanding, I see little plausibility in hive rationality. So - and I may be putting my neck under the axe here - I claim that no hive race could rise to anything near "contemporary technology". Also, most of the contemporary technology usable for colonizing is already costly and/or faulty enough that someone "paranoid enough" (and some Prof. Moody tells us there is no such thing - but still) would be unlikely to ever leave their own planet.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The Ideology Is Not The Movement · 2019-08-30T23:03:00.926Z · LW · GW

"As well try to predict the warlike or peaceful nature of the United Kingdom by looking at a topographical map of Great Britain" - such things are _done_ and _super-done_. Mountains (and difficult-to-cultivate steppes? Is this perhaps really about pastoralism vs. agriculture?) predict average warlikeness fairly well. Who were the most peaceful Ancient Greeks? The Thessalians. Why? Thessaly is about the only place in Greece vaguely resembling a grassland, where all the rest of Greece is covered in hills. The Caucasus has been a major battleground for centuries. Sco'land? You bet. The East seems generally more relaxed, but still less so in Tibet than in Eastern China. Early-Rome-era Italy? Relatively peaceful Greek colonies on the southern coasts; warlike Etruscans and Gauls and, certainly not least, Romans near the Apennines and Alps. (Note that Rome is relatively far from the sea, compared to the usual Greek colonies.)

"It also brings together a group of people with some pre-existing common characteristics: male, nerdy, often abrasive, not very successful, interested in speculation, high-systematizing" - could "male" be an artifact of nerdiness, abrasiveness, and high-systematizing correlating with gender? After all, as you note, when a girl accepts all the other things she is usually easily accepted, and gender is only a proxy for predicting the qualities.

"I think America has better values than Pakistan does, but that doesn’t mean I want us invading them, let alone razing their culture to the ground and replacing it with our own" - why not? No, seriously. America invaded several Muslim (fundamentalist Muslim, not we-kinda-like-Quran-stop-accusing-us-of-ISIS Muslim) countries already anyway. Why not raze the fundamentalist culture to the ground and replace it with universal?

"except that “race” is a much more complicated concept than ethnicity" - W. H. A. T. Arm people with five to eight differential features, and you get a good proxy for all six main races AND predictions for unclear cases. I'd like to see you try doing that for an ethnicity (without the language cheat, which is known to backfire).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Guided By The Beauty Of Our Weapons · 2019-08-30T22:18:19.655Z · LW · GW

"If you genuinely believe that facts and logic don’t work on people, you shouldn’t be writing articles with potential solutions" - I have seen people claim things like "science/logic is only natural for people with schizoid tendencies and utterly unnatural for others". Given that "people with schizoid tendencies" (or whatever the right label is) may be as much of a tribe, dark-matter-like, split off from the rest of the world as your Red/Blue/Grey tribes, you may get the illusion that people are generally convincible, whereas in reality it may well be just a feature of your own tribe.

And then the articles about people unconvincible by logic are written for the tribe's members as a guide to deal with the outsiders.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on In Favor of Niceness, Community, and Civilization · 2019-08-30T21:18:03.347Z · LW · GW

"Or “And that is why the Geneva Convention was so obviously impossible that no one even bothered to attend the conference” - Geneva Convention prohibited mercenaries. And no one ever used them afterwards... except they did.

"Civilization didn’t conquer the world by forbidding you to murder your enemies unless they are actually unrighteous in which case go ahead and kill them all" - that's... what usually justifies most wars including many civilizational wars.

"In particular, the history of the past few hundred years in the United States has been a history of decreasing censorship and increasing tolerance" - EXCUSE YOU? When was the last time the US followed the original (modulo the first ten amendments) version of the Constitution in its anti-censorship? And, moreover, what about silent censorship (the kind Chomsky describes - effective censorship without any legislation behind it)?

"And it kind of was. Just not the way most people expected" - yeah, by creating the Chalcedonian church (Orthodox before the Orthodox/Catholic split), which more or less abandoned all that and either absorbed (like the Dominicans) or destroyed (like the Albigensians) the rest. What you may think of as a continuing tradition is the Luther-Calvin rebellion against that; there is no way the original Christians could have won over the Roman Empire, even in its decay. The Sun cult was more powerful when Constantine decided to get some more followers by tricking Christians into obeying him.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Book Review: Age of Em · 2019-08-30T01:03:46.926Z · LW · GW

"In Star Wars, the Rebellion had all of these beautiful hyperspace-capable starfighters that could shoot laser beams and explore galaxies – and they still had human pilots" - except that, unless the pilot is a Jedi or Bith or something like, most of the work is done by astrodroids like R2 and pilots are something of a general controller/safety measure.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Don’t Fear The Filter · 2019-08-30T00:36:54.687Z · LW · GW

You describe the "x-risk" as if it were only one. As far as I understand, the general idea of the Great Filter as self-destruction is that every civilization found _one way or another_ to destroy or irreparably re-barbarize itself - not the same way. Not ∃way ∀civilizations, but ∀civilizations ∃way. And this is a much weaker claim.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on SSC Journal Club: AI Timelines · 2019-08-30T00:24:35.636Z · LW · GW

"As the authors point out, these two questions are basically the same" - strongly not true if we speak of domain-specific AI. Creating an AI that does linguistic analysis of a given dataset better than me is easier than creating an AI that is a better linguist than me, because the latter requires additional tasks such as writing academic papers. So the task/occupation distinction makes sense.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Should AI Be Open? · 2019-08-30T00:15:14.383Z · LW · GW

1) "Wells believed the coming atomic bombs would be so deadly that we would inevitably create a utopian one-world government to prevent them from ever being used. Sorry, Wells. It was a nice thought" - I don't know about utopian, but save for North Korea this is more or less what we have had since the USSR lost the Cold War. The USA single-handedly controls most of the world, and no nuclear weapon has been seen in active use.

2) "Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren" - unless they all die out from genetic diseases. This is the Ashkenazi evolutionary experiment (the one you link to) turned up to eleven.

3) "and that one reason we’re not smarter may be that it’s too hard to squeeze a bigger brain through the birth canal" - this should be pretty much obliterated by the modern Caesarean section, but do we see a burst of intelligence in the last decades?

4) "If an alien species showed up in their UFOs, said that they’d created us but made a mistake and actually we were supposed to eat our children, and asked us to line up so they could insert the functioning child-eating gene in us, we would probably go all Independence Day on them; computers with more goal-directed architecture would if anything be even more willing to fight such changes" - phew, THAT metaphor finally explains why "just don't let it self-modify" is not enough. Yudkowsky's arguments, I'm afraid, did not come through to me on the necessary level... stupid, stupid brain.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on How The West Was Won · 2019-08-29T22:47:29.183Z · LW · GW

As a huge anti-postcolonialist (thus a colonialist) I really incline towards the "go universal culture" idea. However, there is also a more specific point. As Nikonov notes, we should distinguish immigrants from colonists. Immigrants get largely subsumed by the country they arrive in, even if they keep some peculiar details about themselves. Colonists do... well, what 16th-18th century colonists did; they are certainly not subsumed (how much of Aboriginal Australians or Native Americans is seen in the usual Australian/US settlers?). And the usual (partly justified, IMO) fear of Muslim "immigration" is that they do not, in fact, immigrate - they colonize. Or at least try to. They want to get European riches but keep their culture - just as American colonists were certainly not going to acquire "Indian" culture while gaining resources and land in the Americas.

"Heroin use is something every society would select if given the opportunity" - erm, consistently not true. Even Friedman in "Machinery of Freedom" concedes that the governmentless world he pictures could still ban heroin.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Considerations On Cost Disease · 2019-08-29T19:38:51.039Z · LW · GW

"Cowen seems to use it indiscriminately to refer to increasing costs in general – which I guess is fine, goodness knows we need a word for that" - the word _is_ inflation. Maybe averaging is not the best way to get the real inflation, but wages are calculated as if it were?

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Against Tulip Subsidies · 2019-08-29T18:30:59.321Z · LW · GW

This could have been applied just as rigorously to schools in the past, and we would have ended up with people not knowing basic math unless they needed to. As someone from Russia, where there are still many remnants of Soviet free education (and commercial education is notoriously worse), I really do not see most of your point. I would argue that providing free higher education does not require paying that much: prices for education, like everything else, are regulated (I don't mean direct governmental regulation) and thus will probably drop if it becomes a universally accessible thing.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Living Metaphorically · 2019-08-29T18:04:23.363Z · LW · GW

Very cool argument; note though that:

1) L&J directly reject analytic philosophy;

2) Frege ended up in a contradiction - namely, Russell's paradox.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Living Metaphorically · 2019-08-29T18:00:41.110Z · LW · GW

I mean, I have always disliked L&J's work. Perhaps because of a tendency toward overstatement overall, perhaps because it often gets dragged into grammatical categories as well, where the basis is much weaker (while lexical Time is certainly often metaphorized as both Money and Space, grammatical tense is never Money - and its similarity to spatial relationships, when it persists, is due to the concept of an axis relevant to both).

(Now, as linguists rarely agree on anything, there are certainly linguists (Croft 2001 "Radical Construction Grammar", for one) who claim that lexical/grammatical distinction is non-existent. I believe this is a no-go, but you might believe otherwise.)

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on How Common Are Science Failures? · 2019-08-27T19:35:27.740Z · LW · GW

Very cool article, but... Fifty scientific fields? A major overkill, imho. I doubt there are twenty.

Also, linguistics... well, linguists rarely agree on anything, but most of us do agree that Bloomfield-style descriptivism was wrong (though I was recently startled to find a French linguist using almost precisely their arguments, but that is an outlier). Of course, one may say that this is counting evidence twice, as some kind of link to behaviourism is obvious, but their going down in flames in linguistics (thanks, Chomsky! And... thanks, weird guys like Langacker and Givón, I gue-ess?) kinda predated their failure in psychology.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The Categories Were Made For Man, Not Man For The Categories · 2019-08-26T22:49:56.346Z · LW · GW

"I could point out that trans-Napoleonism seems to be mysteriously less common than transgender" - well, because Napoleon is a far more specific thing than a (wo)man; nothing mysterious about that (even ignoring the fact that you are squaring off one category (trans-Napoleon) against two (MtF and FtM)). A better example would perhaps be something like "an Asian person feeling themselves to be Caucasian" - a broad category drawn from a really small set (even if there are more than two genders, I doubt many people would claim there are more than ten).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Philosophy: A Diseased Discipline · 2019-08-26T21:48:49.850Z · LW · GW

As a scientist, not a philosopher, I still don't see much virtue in writing "simply". This is a particularly Anglo-Saxon tradition, whereas I (and most of the German-Russian tradition, AFAIK) have always felt that when you try writing simply, you lose at least the speed of the train of thought and quite likely some of your arguments' power. "No math - no science" is a specific example, but not the only one.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on But There's Still A Chance, Right? · 2019-08-26T21:04:19.154Z · LW · GW

Time for nitpicking... "Consider his example if you ever find yourself thinking, “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?" - ...consider your own argument of certainty being an infinity. In the theory of cardinal numbers, the highest infinity (be it aleph-zero or the continuum or 2^continuum or whatever) trumps any lower numbers (you can throw out all the rational numbers, whose cardinality is aleph-zero, and [0;1] will still have the cardinality of the continuum), including all natural numbers, and only an infinity of the same size or larger can compete. And I believe the usual single-infinity models do the same. If we _could_ have infinite certainty, it would be the end of the story, allowing no possibility to "put the weight down - yes, down". The problem is, we can't.
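The bracketed set-theory example can be written out explicitly (standard cardinal arithmetic, not from the original post):

$$|[0,1] \setminus \mathbb{Q}| + |\mathbb{Q} \cap [0,1]| = |[0,1]| = 2^{\aleph_0},$$

and since $\kappa + \lambda = \max(\kappa, \lambda)$ for infinite cardinals, $|[0,1] \setminus \mathbb{Q}| = 2^{\aleph_0}$: removing a strictly smaller infinity changes nothing.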

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Human Evil and Muddled Thinking · 2019-08-23T21:06:59.045Z · LW · GW

Woes and leaps forward can be intertwined. Peter the Great deliberately sent many workers (not to mention soldiers) to certain deaths to build St. Petersburg and fleet to win over Charles XII of Sweden. Nor was he ever hypocritical about it - he was strict enough to witness executions and not care. But in the end Peter's actions do seem like a leap forward. No hypocrisy, no stupidity, but huge woes.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Feeling Rational · 2019-08-21T11:55:32.412Z · LW · GW

What. Female misogyny seems to be at least as powerful as male, however contradicting it may seem. Women do not generally accept womanhood, it takes a certain subtype of feminists to do so (first wave did _not_, second wave is arguable).

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Feeling Rational · 2019-08-21T11:52:42.844Z · LW · GW

That invites a rather optimistic view of the mind. Given a mind deprived of emotions but similar to ours in other respects, we should expect it to fare better than ours on average. Not because emotion is somehow _underlyingly_ irrational, but because it tends to intensify our biases (and to be the main motivation for some of them - affective death spirals come to mind first).

You could respond that curiosity and having something to protect are both based on emotions - but that's the human motivation for rationality, not a guarantee of its efficiency, and both, unless supported by a good model, can also be fulfilled by religion. Truth as an instrument could suffice for an emotionless brain as well.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Biases: An Introduction · 2019-08-21T11:27:00.298Z · LW · GW

"Most people answer “librarian.” Which is a mistake: shy salespeople are much more common than shy librarians, because salespeople in general are much more common than librarians—seventy-five times as common, in the United States" - ...this completely ignores the fact that jobs have personality requirements. Salespeople have to actually, y'know, talk to many people. I would not deem it impossible that less than half a percent of salespeople and more than half of librarians are shy.
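The point can be checked with Bayes' rule using the comment's own hypothetical numbers (both shyness rates are illustrative assumptions, not data):

```python
# Relative base rates from the quoted passage: 75 salespeople per librarian.
salespeople, librarians = 75.0, 1.0

# Hypothetical shyness rates from the comment (assumptions for illustration):
p_shy_sales = 0.005  # "less than half a percent of salespeople"
p_shy_lib = 0.5      # "more than half of librarians"

shy_sales = salespeople * p_shy_sales  # 0.375 shy salespeople per librarian
shy_libs = librarians * p_shy_lib      # 0.5 shy librarians per librarian

# Posterior probability of "librarian" given "shy", under these assumptions:
posterior_librarian = shy_libs / (shy_libs + shy_sales)  # ~0.571
```

So under these (made-up) rates, "librarian" would indeed be the better answer despite the 75:1 base rate; the base-rate argument only goes through if shyness rates are comparable across the two jobs.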

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Variable Question Fallacies · 2019-08-20T18:55:08.764Z · LW · GW

Variable truth-value (VTV) of a sentence is a technical thing in formal semantics - it means that the truth-value of this sentence depends on the little thingy called variable assignment. While the term might seem misleading, it is useful for explaining why we still claim "He walked in" has a truth-value - it first has the VTV, and then we find some "discourse" assignment that converts VTV to truth-value. Also, variable assignment can be manipulated from within the sentence (anaphora, movement, you name it).
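In the standard notation (a textbook illustration, not from the post), with $g$ the assignment function:

$$[\![\text{he}_1 \text{ walked in}]\!]^g = 1 \iff g(1) \text{ walked in},$$

so before discourse fixes $g$, the sentence denotes a function from assignments to truth-values rather than a plain truth-value, and a binder higher in the sentence can shift $g$ itself.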

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Words as Mental Paintbrush Handles · 2019-08-20T18:49:15.368Z · LW · GW

"Believe it or not, for some decades, there was a serious debate about whether people really had mental images in their mind—an actual picture of a chair somewhere—or if people just naively thought they had mental images (having been misled by "introspection", a very bad forbidden activity), while actually just having a little "chair" label, like a LISP token, active in their brain" - AFAIK, you misrepresent the debate. It was rather about what is primary and what is secondary. Sure, your brain paints a chair - but does it first paint the chair and then search for its properties, or is the mental image merely one of the properties of a pre-found concept (the correspondence to the options you represent is _in the order given_)? Not _that_ silly, is it? (It still has a right answer, but it happens to be the second one not the first one.)

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on Where to Draw the Boundary? · 2019-08-20T18:28:45.848Z · LW · GW

What is wrong (if you don't try to match the real biological taxonomy of Pisces) with a definition like "vertebrate constantly living in water" for "fish"? While true fish and dolphins have some differences, they share the very common points that led to the unification in the first place.

EDIT: Had already expanded upon by Scott Alexander.

Comment by Дмитрий Зеленский (dmitrii-zelenskii) on The Argument from Common Usage · 2019-08-20T17:51:50.321Z · LW · GW

There is descriptive linguistics and prescriptive linguistics (and this applies, in particular, to lexicography); but to make sense - to create rules people will not immediately and fully ignore (merely somewhat, in some relatively rare cases, as the language changes) - prescriptive linguistics feeds on descriptive linguistics, prescribing something not too different (which does not mean "the same"). Thus, to create a dictionary that will unify common usage, you need to describe common usage first - so as not to stray too far.

Unfortunately, in the English tradition this is also blurred by there being no customary distinction between prescriptive grammars (and lexicons) and style guides.