Comments

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Failures in Kindness · 2024-07-19T22:58:45.299Z · LW · GW

The post raises an important problem. Though I have to admit my gut reaction is "neurotypicals are being weird again" :)

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on D&D.Sci Evaluation and Ruleset · 2024-05-01T19:08:52.667Z · LW · GW

Damn. And I tried the strategy "what if I try to predict it only off the text, without looking at the csv" :D

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on D&D.Sci Evaluation and Ruleset · 2024-04-27T19:36:39.400Z · LW · GW

Why DEX, though? Conceptually it's absolutely unpredictable - this is one of the most useful scores in most TTRPGs.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Using axis lines for good or evil · 2024-03-19T12:39:44.605Z · LW · GW

Yeah, there seems to be a lot of personal preference involved. Removing cell borders is obnoxious and inconvenient; the table below hurts to look at. The table above has borders a tad too thick, but removing them is a cure that's, personally, worse than the disease.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Biology-Inspired AGI Timelines: The Trick That Never Works · 2023-12-03T12:38:23.187Z · LW · GW

In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted

Half-joking - unless the futurist in question is H. G. Wells. I think there was a quote showing that he effectively predicted the pixelization of early images, along with many similar small-scale details of the early 21st century (although, of course, survivorship bias in which details get retold probably influences my memory and the retelling I rely on).

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Don't leave your fingerprints on the future · 2023-11-19T15:46:22.462Z · LW · GW

Independently,

(in principle it could be figured out by human neuroscientists working without AI, but it's a bit late for that now)

What? Why? There is no AI as of now; LLMs definitely do not count. I think it is still quite possible that neuroscience will make its breakthrough on its own, without the help of any non-human mind (again, dressing up the final article doesn't count - we're talking about the general insights and analysis here).

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Don't leave your fingerprints on the future · 2023-11-19T15:43:59.345Z · LW · GW

To begin with, there is a level of abstraction at which the minds of all four of you are the same, yet different from various nonhuman minds.
 

I am actually not even sure about that. Your "identify the standard cognitive architecture of this entity's species" presupposes that such a thing exists - in a sufficiently specified way to then build its utopia on, and to derive that identification correctly in all four cases.

But, more importantly, I would say that this algorithm does not derive my CEV in any useful sense.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Superintelligent AI is necessary for an amazing future, but far from sufficient · 2023-11-09T20:50:12.333Z · LW · GW

I like this text, but I find your take on the Fermi paradox wholly unrealistic.

Let's even assume, for the sake of the argument, that both P(life) and P(sapience|life) are bigger than 1/googol (though why?), so your hunch on how many planets originally evolve sapient aliens is broadly correct. A very substantial part of the alternative histories of the last century (I wanted to say "most", but most, of course, consists of uninteresting differences such as whether a random human puts the right or the left shoe on first) results in humanity dead or thrown into possibly-irrecoverable barbarism.

The default outcome for aliens that have evolved is to fail their version of the Berlin crisis, or the Cuban Missile Crisis, or whatever other near-total-destruction situation we've had even without AI (not necessarily with nuclear weapons, mind you - say, what if instead of the pretty-harmless-in-comparison COVID we got a sterilizing virus on the loose, one that attacks genitalia instead of olfactory nerves? Since its method of proliferation does not depend on the host's ability to procreate, you could imagine it sterilizing the population of the planet). And then you tack on the fact that you also predict a very high chance of AGI ruin; so most of the hypothetical aliens that survived the kinds of hurdles humanity somehow survived (again, with possibly totally different specifics) are replaced by misaligned AGI, throwing a huge hurdle in the way of the cosmopolitan result you predict - meeting a paperclip-maximiser built by ant-people is more likely than meeting the ant-people themselves, given your background beliefs.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Warning Shots Probably Wouldn't Change The Picture Much · 2023-11-09T19:59:44.386Z · LW · GW

Banning gain-of-function research would be a mistake. What would be recklessly foolish is incentivising governments to decide which avenues of research are recklessly foolish. The fact that governments haven't prohibited it in a bout of panic (not even China, which otherwise did a lot of panicky things) is a good testament to their abilities, not to an inability to react to warning shots.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Warning Shots Probably Wouldn't Change The Picture Much · 2023-11-09T19:56:58.969Z · LW · GW

The expected value of that is infinitesimal, both in general and for x-risk reduction in particular. People who prefer political reasoning (so, the supermajority) will not trust it; people who don't think COVID was an important thing except in how people reacted to it (like me) won't care; and most people who both find COVID important (or a sign of anything important) and actually prefer logical reasoning have already given it a lot of thought and found out that the bottleneck is data that China will not release any time soon.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Don't leave your fingerprints on the future · 2023-11-09T19:49:17.978Z · LW · GW

I think the concept that all peoples throughout history would come into near agreement about what is good if they just reflected on it long enough is unrealistic.

Yes. Exactly. You don't even need to go through time; place and culture on modern-day Earth are sufficient. While I cannot know my CEV (for if I knew it, I would be there already), I predict with high confidence that my CEV, my wife's CEV, Biden's CEV and Putin's CEV are four quite different CEVs, even if they all include as a consequence "the planet existing as long as the CEV's bearer, and the beings the CEV's bearer cares about, are on it".

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on AI as a science, and three obstacles to alignment strategies · 2023-11-09T19:05:06.684Z · LW · GW

And in part because it’s socially hard to believe, as a regulator, that you should keep telling everyone “no”, or that almost everything on offer is radically insufficient, when you yourself don’t concretely know what insights and theoretical understanding we’re missing.

That's not true. We can end up with a regulator that defaults to "prohibit everything". See the IRBs in America, for instance: getting medical experiments approved has been made plainly insurmountable.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Everyday Lessons from High-Dimensional Optimization · 2023-11-07T23:29:43.978Z · LW · GW

I think at this point we should just ask @johnswentworth which one of us understood him correctly. As far as I can see, we measure the distance between vectors, not between individual parameters, and that's why this thing fails.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Everyday Lessons from High-Dimensional Optimization · 2023-11-07T22:10:52.885Z · LW · GW

Erm, I think you're getting mixed up between comparing parameters and comparing the results of applying some function to parameters. These are not the same, and it's the latter that become incomparable.

(Also, would your algorithm derive that ln(4+3i) = ln(5), since |4+3i| = |5| = 5? I really don't expect the "since we measure distances" trick to work, but if it does work, it should also work on this example.)
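For what it's worth, here is a quick numerical check of that point (a minimal Python sketch, purely illustrative; the values are just the ones from the example above):

```python
import cmath
import math

z = 4 + 3j
print(abs(z))        # 5.0 -- same magnitude as the real number 5
print(cmath.log(z))  # (1.6094...+0.6435...j)
print(math.log(5))   # 1.6094...

# Equal magnitudes do not give equal logarithms: the complex log keeps the
# phase, atan2(3, 4) ~ 0.6435 rad, as its imaginary part.
```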

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Everyday Lessons from High-Dimensional Optimization · 2023-11-02T08:53:38.740Z · LW · GW

If you allow complex numbers, comparison "greater than/less than" breaks down.
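(A minimal illustration, assuming Python is a fair stand-in for the math here: the language simply refuses to order complex values.)

```python
# Complex numbers have no natural "greater than/less than" ordering,
# and Python reflects that by raising an error on the comparison.
try:
    print((1 + 2j) < (3 + 0j))
except TypeError as err:
    print("no ordering on complex numbers:", err)
```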

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Gears vs Behavior · 2023-11-02T08:44:23.536Z · LW · GW

Huh. In linguistics that's known as a "functional model" vs. a "structural model" (Mel'čuk's terminology): whether you treat linguistic ability as a black box or try to model how it works in the brain (Mel'čuk opts for the former as a response to Chomsky's precommitment to the latter). This neatly explains why structural models are preferable.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2023-11-02T08:29:33.465Z · LW · GW

Correlation is not causation. And I am rather certain that there are few people who believe a genuine causation is present here (even though it is rather likely to be present in my opinion).

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on The Majority Is Always Wrong · 2023-09-12T16:49:33.086Z · LW · GW

One, where have you seen a foot-long shoe? That would be, what, European size 48 or 49? This naming was always curious to me: the foot as a unit is just… noticeably longer than actual feet.
Two, the metric system's main advantage is easy scalability. Switching from liter to deciliter to centiliter to milliliter is far easier than jumping between gallons, pints and whatever else is there, as the sketch below illustrates. That's the main point, not any particular constant to multiply by (i.e. a system with inch, dekainch, and so on would be about as good).
Three, I really see no problem in saying things like "36 centimeters" to describe an object's length. I know that my hand is ~17 centimeters, and I use it as a measurement tool in emergencies, but I always convert back to do any kind of reasonable thinking; I never actually count in "two hands and a phalanx".
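A minimal sketch of that scalability point (the quantities are arbitrary; US customary conversion factors assumed):

```python
# Metric steps are uniform powers of ten; customary volume units use
# assorted conversion factors (US definitions assumed here).
liters = 2.5
print(liters, "L =", liters * 10, "dL =", liters * 100, "cL =", liters * 1000, "mL")

gallons = 2.5
print(gallons, "gal =", gallons * 4, "qt =", gallons * 8, "pt =", gallons * 128, "fl oz")
```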

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on In defense of the MBTI · 2023-02-21T15:31:09.573Z · LW · GW

However, just in case, you only covered my first suggestion, not both.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on In defense of the MBTI · 2023-02-21T15:30:27.078Z · LW · GW

Well, that at least is an experiment one could set up. Reaction time should probably be a reasonably appropriate measure for "harder" (perhaps error rate too, but on many tasks the error rate is trivially low). But this requires determining how "using a function" is detected; you'd need, at the very least, "clear cases" for each function.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on In defense of the MBTI · 2023-02-14T14:15:41.315Z · LW · GW

Oh, there are many. One, MBTI supposes the functions are antagonistic in very specific ways, so the null hypothesis is the absence of those antagonistic pairings even if the functions themselves are as it says. Two, each carving-out of a function is a subhypothesis about clustering the thingspace (in this case, cognitionspace), and the null hypothesis is that it doesn't cut at reality's joints.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on In defense of the MBTI · 2023-02-14T14:12:57.572Z · LW · GW

Are you aiming to convince or to actually check whether it holds?

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on In defense of the MBTI · 2023-02-07T01:05:31.625Z · LW · GW

Offer an alternative hypothesis. "A fair fight", as HPMoR puts it. To understand whether it's valid, you need to be able to imagine both a world in which it is and a world in which it isn't, and to outline what the differences would be.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on In defense of the MBTI · 2023-02-06T18:29:04.477Z · LW · GW

It's a great post in that it seemingly tries to engage with the question in good faith. That said…

We don't ask people how they arrive at their datapoints because we can't trust their answers. That kind of introspection is deeply unreliable in most people; they (we) aren't, in this respect, enough of a lens that can see its own flaws. That's why the Big Five questions skipped it - not by careless omission, as your post seems to imply. The MBTI-type cognitive-function gears would be "big if true", but most big-if-true models are wrong, and not just in the technical sense of "all models are wrong, some models are useful" but in failing to properly connect to reality by providing wrong compressions; the post provides literally no arguments for why these are useful gears.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on The Road to Mazedom · 2023-02-04T02:59:15.343Z · LW · GW

That [organizational] culture can and does change.

You asked to be notified of things the previous texts failed to lay the groundwork for, so here it is. The previous discussion largely reads as if the culture is static and self-supporting, aside from a couple of examples of how organizations turned into mazes as they grew. I feel like this point is partially related to - but distinct from - that, and getting your own perspective on when the culture can vs. can't change (not counting heroic efforts where you basically uproot everything, because I presume that's not what you meant by this) could be useful.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Basics of Rationalist Discourse · 2023-02-03T00:43:10.941Z · LW · GW

That's an ingenious solution! I still feel like there's some catch here but can't formulate it. Maybe because it's way past midnight here and I should just go to sleep.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Basics of Rationalist Discourse · 2023-02-03T00:29:09.514Z · LW · GW

"Can you try passing my ITT, so that I can see where I've miscommunicated?"

...is a very difficult task even by the standards of "good discourse requires energy". Presenting anything but a strawman in such a case may require more time than the general discussion - not necessarily because your model actually is a strawman, but because you'd need to "dot many i's and cross many t's" - I think that's the wording.

(ETA: It seems to me like it is directly related to obeying your tenth guideline.)

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Why has nuclear power been a flop? · 2023-01-14T21:17:29.676Z · LW · GW

I love the description, sounds compelling, should actually read the book :D

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Archimedes's Chronophone · 2023-01-13T22:45:50.997Z · LW · GW

Extremely late to the party but that's the whole idea of ecological validity (which sucks: https://omer.lingsite.org/blogpost-ecological-validity/).

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on [Simulators seminar sequence] #2 Semiotic physics - revamped · 2023-01-09T15:40:17.516Z · LW · GW

Agreed. As a linguist, I looked at Proposition 2 and immediately thought "sketchy, shouldn't hold in a good model of a language model".

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on I Can Tolerate Anything Except Factual Inaccuracies · 2022-10-06T17:31:02.270Z · LW · GW

Personally, I care about the former but not the latter in your Julia Galef quote... "Due process" seems largely a way to abuse loopholes and, in doing so, to give the upper hand to the more professional lawyer rather than to the correct side. Due process makes the argument less valid, in a way.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Were atoms real? · 2022-09-15T14:32:29.646Z · LW · GW

EXTREMELY late to the party, but as a linguist I have to warn a potential lurker against Lakoff's book. His stories are extreme just-so stories - or even just-not-so stories.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Closet survey #1 · 2022-09-15T12:55:39.618Z · LW · GW

So, you believe that "it's dangerous to be half a rationalist". Literally part of the Sequences by now. A good thought, but probably one shared by many here by now :)

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Marijuana: Much More Than You Wanted To Know · 2022-07-28T15:27:26.574Z · LW · GW

so the total burden of the ~6000ish marijuana imprisonments each year is 3 * ~6000 * 0.5 = 10 kiloQALYs.

Clearly "1/3 * 6000 * 0.5 = 10" is meant?

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on This Can't Go On · 2022-06-30T14:35:36.418Z · LW · GW

Hey... the post links to the tenth footnote instead of this one. (Also, no, the Sun seems to be at the somewhat low end of brightness?)

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Whence the sexes? · 2022-06-24T18:13:59.853Z · LW · GW

It's not. It's really, really not. Having a switching ovo-testis is, while not cheap, far cheaper than a set of parallel organs exerting literally contradictory effects on your body simultaneously. Like, if we take humans, gestagen is used for chemical castration for a reason.

Comment by Дмитрий Зеленский (dmitrii-zelenskii-1) on Whence the sexes? · 2022-06-24T18:08:27.389Z · LW · GW

Evolution sucks at the long term, though - that's essentially your point 4: they would be selected out before they became frequent enough.

However, the fact that hermaphroditism is only found widely where, for one reason or another, finding a partner and finding them to be the wrong sex can be really fatal (sessile plants, slow snails) suggests it must be very costly indeed, and I'd bet on a "simple" metabolic explanation: these adapted organs directly compete in terms of their influence on the body.