Posts

Is it time to talk about AI doomsday prepping yet? 2023-03-05T21:17:54.270Z
How much should we care about non-human animals? 2022-11-04T21:36:57.836Z

Comments

Comment by bokov (bokov-1) on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2023-09-20T11:24:48.614Z · LW · GW

Definition, please: VNM?

Comment by bokov (bokov-1) on What works for ADHD and/or related things? · 2023-08-03T13:28:28.734Z · LW · GW

The first step is to see a psychiatrist and take the medication they recommend. For me it was an immediate night-and-day difference. I don't know why the hell I wasted so much of my life before I finally went and got treatment. Don't repeat my mistake.

Comment by bokov (bokov-1) on Contrary to List of Lethality's point 22, alignment's door number 2 · 2023-05-15T15:53:31.249Z · LW · GW

Yes, OP

Comment by bokov (bokov-1) on Contrary to List of Lethality's point 22, alignment's door number 2 · 2023-04-11T19:04:51.970Z · LW · GW

I actually tried running your essay through ChatGPT to make it more readable, but it's way too long. Can you at least break it into non-redundant sections of no more than 3,000 words each? Then we can do the rest.

Comment by bokov (bokov-1) on Contrary to List of Lethality's point 22, alignment's door number 2 · 2023-04-11T18:57:48.864Z · LW · GW

I second that. I actually tried to read your other posts because I was curious to find out why you are getting downvoted -- maybe I can learn something outside the LW party line from you.

But unfortunately, you don't explain your position in clear, easy-to-understand terms, so I'm going to have to put off sorting through your stuff until I have more time.

Comment by bokov (bokov-1) on Is it time to talk about AI doomsday prepping yet? · 2023-03-05T22:31:29.930Z · LW · GW

I meant prepping metaphorically, in the sense of being willing to delve into the specifics of a scenario most other people would dismiss as unwinnable. The reason I posted this is that, though it's obvious that the bunker approach isn't really the right one, I'm drawing a blank for what the right approach would even look like.

That being said, I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI. Are you saying scenarios where many but not all people die due to political/economic/environmental consequences of AI emergence are unlikely enough to disregard?

So let's talk about dystopias/weirdtopias. Do you see any categories into which these can be grouped? The question then becomes: who will lose the most and who will lose the least under various types of scenarios?

Comment by bokov-1 on [deleted post] 2022-11-23T19:20:45.464Z

It's ironic that you're so excited about autonomous weapons, but the first video you posted is a dramatic depiction created by a YouTube account called "Stop Autonomous Weapons".

I think the idea of this video was to scare the public with how powerful, precise, and possibly opaque these weapons are.

But I agree with you-- ethical or not, groups that limit their use of these weapons will be at a disadvantage against groups that do not. That's a microcosm of the whole AI regulatory problem right there.

Comment by bokov (bokov-1) on How much should we care about non-human animals? · 2022-11-10T09:30:22.678Z · LW · GW

I'm sad to see him go. I don't know enough about LW's history and have too little experience with forum moderation to agree or disagree with your decision. Though LW has been around for a very long time without imploding, so that's evidence you guys know what you're doing.

Please don't take down his post, though. I believe somewhere in there is a good-faith opinion at odds with my own. I want to read and understand it. I'm just not ready for this much reading tonight.

I wish I could write so prolifically! Or maybe it's a curse rather than a blessing because then it becomes an obstacle to people understanding your point of view.

Comment by bokov (bokov-1) on Open Letter Against Reckless Nuclear Escalation and Use · 2022-11-04T21:47:47.515Z · LW · GW

Are there any links we can read about non-appeasing de-escalation strategies?

Either theoretical ones or ones that have been tried in the past are fine.

Comment by bokov (bokov-1) on Open Letter Against Reckless Nuclear Escalation and Use · 2022-11-03T20:22:48.783Z · LW · GW

There have been "Nuclear first-use and threats or advocacy thereof", and those are easy to condemn. But as far as I know they are coming unilaterally from the Russian side and are already being widely condemned by those not on the Russian side. But it sounds like you are looking for some broader consensus to condemn escalation on both sides.

Unfortunately, neither this post nor the open letter you linked gives any specifics about what other behaviours you are asking us to condemn. I'm reluctant to risk endorsing a false-equivalence argument by signing a blank check.

Is blowing up the Kerch bridge escalatory? Is Arestovich trolling the occupiers to sap their morale and bolster the morale of the defenders escalatory? I'm not qualified to determine whether the tactical or psychological benefit justifies the escalatory risk of these sorts of actions, and in the Kerch example we don't even know whether it was done by the Ukrainian government, provocateurs, or sympathizers acting independently.

I agree that it's not a binary choice between appeasement and escalation, and I am very curious about the non-appeasing de-escalation strategies you allude to. That's what we should be brainstorming and what you should lead with in your letter for it to be convincing.

Comment by bokov (bokov-1) on The harms you don't see · 2022-10-28T22:14:55.916Z · LW · GW

The EU approach to getting Ukraine to protect the rights of minorities seems more... sustainable... than Russia's approach, so I propose a different compromise:

How about Russia withdraws all its troops back to the 2014 borders, and we all give the slow, non-violent path a chance to work?

Comment by bokov (bokov-1) on The harms you don't see · 2022-10-28T22:06:58.980Z · LW · GW

I'm not equating the West and Anti-West in terms of power. I agree that the Anti-West is much weaker. That doesn't mean it's incapable of becoming a threat in the future. 

Comment by bokov (bokov-1) on Ukraine and the Crimea Question · 2022-10-28T22:00:48.122Z · LW · GW

Furthermore, it's up to the Ukrainian people to confront their dark past. Not Russians to do it for them. 

Just like it's up to Americans to confront and atone for America's history of slavery. Not some neighbouring country to roll in with tanks and turn our historical/cultural/political problem into a military one.

Comment by bokov (bokov-1) on Ukraine and the Crimea Question · 2022-10-28T21:48:35.575Z · LW · GW

This is basically a false-equivalence argument, of the "there are good/bad people on both sides" variety.

If some other country sent troops inside Russia's borders and held a referendum for whether or not the regions they occupied want to be annexed, I would consider Russia to be the victim no matter how screwed up its internal politics are. Furthermore, such a referendum would not be legitimate no matter how honestly executed it is because the presence of foreign troops and displacement of civilians already hopelessly biases the outcome. 

For the same reason, until there are no more Russian soldiers inside of Ukraine's pre-2014 borders, I see no reason to treat these referenda and complicated stories about some Ukrainians someplace being Nazis as anything other than Russian propaganda -- though you deserve praise for well-crafted propaganda delivered in a civil manner.

Comment by bokov (bokov-1) on The harms you don't see · 2022-10-18T18:54:23.021Z · LW · GW

A decisively defeated Russia will have fewer resources with which to coerce him. And if he's smart and keeps his powder dry like he has, he will have more resources with which to resist.

And if he gets overthrown in a color revolution, the Belarusians have not yet gotten so much blood on their hands as to preclude support from the West.

Comment by bokov (bokov-1) on The harms you don't see · 2022-10-18T18:51:13.245Z · LW · GW

So I support a ceasefire and I oppose sponsorship of insurgency in Russia. But my opinions don't count. 

Your opinions count, though most of us disagree with you. Thus, the replies.

Let's suppose that supporting Ukraine does further empower 'our globe-spanning military-industrial complex'. But failing to support Ukraine empowers the rival globe-spanning military-industrial complex, which, in addition to Russia, includes Iran, Syria, and China.

A ceasefire that results in Russia keeping more Ukrainian land than it started with will empower this rival military-industrial complex and set a precedent for rewarding aggression, while weakening Ukraine militarily and strategically. Even letting Russia keep the Donbas and Crimea will leave Ukraine vulnerable to future invasions.

So, which globe-spanning military-industrial complex do you oppose more?

Comment by bokov (bokov-1) on What sorts of preparations ought I do in case of further escalation in Ukraine? · 2022-10-10T18:41:21.688Z · LW · GW

I wonder what the feasibility would be of a group of LWers somehow putting a charter flight to NZ on retainer.

Comment by bokov (bokov-1) on Russia will do a nuclear test · 2022-10-04T18:13:59.999Z · LW · GW

How would a nuclear test demonstrate that Putin is not bluffing?

It only demonstrates that he has nukes, which we already know.

Comment by bokov (bokov-1) on A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads · 2022-08-03T19:59:59.177Z · LW · GW

I'm also biting the bullet and saying that this is probably what we should aim for, barring pivotal acts because I see AGI development as mostly inevitable, and there are far worse outcomes than this.

Dead is dead, whether due to AGI or due to a sufficient percentage of smart people convincing themselves that destructive uploading is good enough and continuity is a philosophical question that doesn't matter.

Comment by bokov (bokov-1) on A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads · 2022-08-03T19:55:06.744Z · LW · GW

Now, if synchronizing minds were possible, it would address this problem.

But I don't see nearly as much attention being put into that as into uploading. Why?

Comment by bokov (bokov-1) on A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads · 2022-08-03T19:49:58.906Z · LW · GW

A copy of you ceases to exist and then another copy comes into existence with the exact same sense of memories/continuity of self etc. That's like going to sleep and waking up.

Even when it becomes possible to do this at sufficient resolution, I see no reason it won't be like going to sleep and never waking up.

It's not as if there is a soul to transfer or share between the two instances. No way to sync the experiences of the two instances.

So I don't see a fundamental difference between "You go to sleep and an uploaded you wakes up" vs "You go to sleep and an uploaded somebody else wakes up". In either case it will be a life in which I am not a participant and experiences I will not be able to access.

Non-destructive uploads could be benign, provided they are not used as an excuse for not improving the lives of the original instances.

Comment by bokov (bokov-1) on A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads · 2022-08-03T17:52:32.942Z · LW · GW

What I like about this story is that it makes more accessible the (to me) obvious fact that, in the absence of technology to synchronize/reintegrate memories from parallel instances, uploading does not solve any problems for you-- it at best spawns a new instance of you that doesn't have those problems, but you still do.

Yet uploading is so much easier than fixing death/illness/scarcity in the physical world that people want to believe it's the holy grail. And may resist evidence to the contrary.

Destructive uploads are murder and/or suicide.

Comment by bokov (bokov-1) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-16T15:33:47.958Z · LW · GW

Are there any specific examples of anybody working on AI tools that autonomously look for new domains to optimize over?

  • If no, then doesn't the path to doom still amount to a human choosing to apply their software to some new and unexpectedly lethal domain or giving the software real-world capabilities with unexpected lethal consequences? So then, shouldn't that be a priority for AI safety efforts?
  • If yes, then maybe we should have a conversation about which of these projects is most likely to bootstrap itself, and the likely paths it will take?

Comment by bokov (bokov-1) on AGI Ruin: A List of Lethalities · 2022-06-16T15:05:06.670Z · LW · GW

Now we know more than nothing about the real-world operational details of AI risks. Albeit mostly about banal, everyday AI that we can't imagine harming us at scale. So maybe that's what we should try harder to imagine and prevent.

Maybe these solutions will not generalize out of this real-world already-observed AI risk distribution. But even if not, which of these is more dignified? 

  • Being wiped out in a heartbeat by some nano-Cthulhu in pursuit of some inscrutable goal that nobody genuinely saw coming
  • Being killed even before that by whatever is the most lethal thing you can imagine evolving from existing ad-click maximizers, bitcoin maximizers, upvote maximizers, etc. (oh, and military drones, those are kind of lethal), because they seemed like too mundane a threat