Comments

Comment by Polkovnik_TrueForm (yurii-burak) on The Dictatorship Problem · 2024-08-01T12:29:32.455Z · LW · GW

As a Stalinist, I'd say the USA is too experienced at killing us for us to ever hold power and cause any of your problems. Fascists aren't "provoked into getting stronger"; they simply grow more influential for quite a few natural reasons. Also, your "far-left" forgave the student debt... Partially?.. AHAHA BLY-

Comment by Polkovnik_TrueForm (yurii-burak) on 2023 Unofficial LessWrong Census/Survey · 2023-12-30T15:03:58.539Z · LW · GW

Most of the probabilities I gave are epsilon and/or 100% minus epsilon, with larger or smaller epsilons (to mark the difference in probability between "unsure about known psychology" and "unsure about known logic").

Maybe I will do it next year, because I have a lot to change.

Comment by Polkovnik_TrueForm (yurii-burak) on Probability is in the Mind · 2023-12-20T15:41:30.417Z · LW · GW

I rushed to post an angry comment about how all of this is wrong, but a few seconds after posting it (oops) I understood. I've known a great example since school genetics: when two heterozygotes are crossed (Aa with Aa), the frequency of homozygotes among the descendants showing the dominant trait is 1/3. The four equally likely genotypes are AA, Aa, aA, aa (aa may never survive to adulthood; or AA may not survive; or both survive, but we aren't interested in them).

There may be something that skews the 1:2:1 proportion (perhaps only in one direction?), but that's a "you flip a loaded coin; what's your bet on it landing heads?" case.
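A minimal sketch of that conditional probability in Python (my own illustration, not from the original comment): enumerating the four equally likely offspring of an Aa × Aa cross and conditioning on the dominant phenotype gives exactly 1/3.

```python
from fractions import Fraction
from itertools import product

# Aa x Aa cross: each parent passes on A or a with equal probability,
# so the four offspring genotypes AA, Aa, aA, aa are equally likely.
offspring = ["".join(g) for g in product("Aa", repeat=2)]

# Dominant phenotype: at least one dominant allele A (i.e. anything but aa).
dominant = [g for g in offspring if "A" in g]

# Homozygotes among the dominant-phenotype descendants.
homozygous = [g for g in dominant if g == "AA"]

print(Fraction(len(homozygous), len(dominant)))  # prints 1/3
```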

Comment by Polkovnik_TrueForm (yurii-burak) on Leaky Generalizations · 2023-12-14T10:34:00.447Z · LW · GW

I've seen quite a few people who said that Jews (and only Jews) carried out the Holocaust. The implied reasons range from "in order to blame everyone else and profit" to "it was a magical sacrifice to gain power". (The fact that many other peoples were on the list for annihilation, usually including the claim author's own, is hard to plant in their minds. As if that were the biggest problem with such a claim...)

Comment by Polkovnik_TrueForm (yurii-burak) on Clarifying "AI Alignment" · 2023-12-10T09:59:11.900Z · LW · GW

I do agree that an AI that is underdeveloped in terms of its goals, if allowed to exist, is all too likely to become an ethical and/or existential catastrophe, but I have a few questions.

  1. If neurosurgery and psychology develop sufficiently, would it be ethically okay to align humans (or newborns) to other, more primitive life forms to the extent that we want to align AI to humanity? (I didn't say "in the same way", because the human brain seems to be organized differently from programmable computers; I mean a practically equivalent change of behaviour and/or goals.)
  2. Does anyone who mentions that AI would become more intelligent than the whole of human civilization think that AI would therefore be more valuable than humanity? Shouldn't AI goals be set with that in mind? If not, isn't the answer to 1) "yes"?

Comment by Polkovnik_TrueForm (yurii-burak) on Dark Side Epistemology · 2023-12-07T15:25:11.980Z · LW · GW

Your link is broken.

Well, cultural relativity is a fact: there is no universal morality, and people either justify any of their actions via tradition, or simply follow it when they don't want to think. Universal life rights would be great (no less than human rights, at least. I'm a legalist in one personality and an ecocentrist in another, and the ecocentrist wants sentience to remain in order to save the biosphere from the geological and astronomical events that are coming sooner than a new Homo sapiens could emerge through evolution if the current one goes extinct before making AGI). Everything else, I upvote.