Posts

Focus on existential risk is a distraction from the real issues. A false fallacy 2023-10-30T23:42:02.066Z
Do you work at an AI lab? Please quit 2023-05-05T23:41:31.560Z
Campaign for AI Safety: Please join me 2023-04-01T09:32:11.907Z

Comments

Comment by Nik Samoylov (nik-samoylov) on Focus on existential risk is a distraction from the real issues. A false fallacy · 2023-11-02T08:05:07.639Z · LW · GW

So I am ready for this comment to be downvoted 😀

I realise that what I wrote did not resonate with the readers.

But I am not an inexperienced writer. I would not rate the above piece as below average in substance, precision, or style. Perhaps the main addressees were not clear (even though they are named in the last paragraph).

I am exploring a tension in this post, and I feel this very tension has burst out into the comments and votes. This tension wants to be explored further, and I will take time to write about it better.

Comment by Nik Samoylov (nik-samoylov) on Focus on existential risk is a distraction from the real issues. A false fallacy · 2023-10-31T02:11:53.680Z · LW · GW

Is it a satire of the 'AI ethics' position?

No, it is not actually.

 

What is confusing? :-) 

Comment by Nik Samoylov (nik-samoylov) on Do you work at an AI lab? Please quit · 2023-05-06T04:46:34.960Z · LW · GW

Great to see some support for these ideas. Well, if nothing else, a union would be a good distraction for the management and a drain on finances that would otherwise be spent on compute.

I do not know how I can help personally with this, but here is a link for anyone who reads this and happens to work at an AI lab: https://aflcio.org/formaunion/4-steps-form-union

Demand an immediate indefinite pause. Demand that all other work be dropped and that you work only on alignment until it is solved. Demand that humanity live and not die.

Comment by Nik Samoylov (nik-samoylov) on Do you work at an AI lab? Please quit · 2023-05-06T01:18:58.678Z · LW · GW

I am using a moral appeal to elicit a practical outcome.

Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something.

Two objections:

  1. I think it will not get you fired now. If you are an expensive AI researcher (or, better, a group of AI researchers), your act will create a small media storm. Firing you will not be an acceptable option, for optics reasons. (Just don't say you believe AI is conscious.)
  2. A year or two might be a little late for that.

One recommendation: Unionise.

You should consider the marginal impact of the action of a few workers on the likely outcome with AI risk.

Great marginal impact, precisely because of the media effect: "AI researchers strike against the machines, demanding AI lab pause."

Comment by Nik Samoylov (nik-samoylov) on Do you work at an AI lab? Please quit · 2023-05-06T01:05:12.997Z · LW · GW

The time for a pause is now. Advancing AI capabilities now is immoral and undemocratic.

OK, then, here is another suggestion I have for the concerned people at AI labs: Go on strike and demand that capabilities research be dropped in favour of alignment research.

Comment by Nik Samoylov (nik-samoylov) on Do you work at an AI lab? Please quit · 2023-05-06T00:33:11.603Z · LW · GW

It would be great to hear the objections from the downvoters.

Comment by Nik Samoylov (nik-samoylov) on Do you work at an AI lab? Please quit · 2023-05-06T00:03:44.359Z · LW · GW

And some practical advice on quitting. 

Regardless of what your contract says, you can quit at any time. No contract can lawfully force you to work for any specific period. You do not owe your company any specific amount of time, because slavery was abolished a little while ago already.

Even contracts that say "your company paid you a sign-on bonus that you have to repay if you leave within a year" are questionable. See https://qr.ae/pyIrpO.

You do not owe it to your family and friends to stay there. In a few months, when the uproar is massive and public, you will be seen as an early hero. (See the case of Hinton.)

If you work there, it means you are smart enough to easily get a job elsewhere (maybe for less pay, I admit). I will personally help you find a job if you have any trouble.

How to quit: It is very easy. Send an email to your direct manager OR HR:

Subject: Resigning effective immediately

Body:

Dear (name of your manager or HR person),

I hereby resign from my position effective immediately. 

I wish you all the very best and remain your loyal friend.

Best regards,
(Your name)

Comment by Nik Samoylov (nik-samoylov) on Campaign for AI Safety: Please join me · 2023-04-02T08:49:40.272Z · LW · GW

Thank you for your words of caution, @the gears to ascension, @Ruby, @Chris_Leong.

Indeed, I have only recently updated on AI. I lived happily believing AGI was just nonsense, after seeing gimmick after gimmick and slow progress on anything general. This all came as a rude shock a couple of weeks ago.

I will heed your advice on consulting with others.

I am, however, of the firm opinion that AI alignment is not going to be solved any time soon. The best thing is just to shut progress on new capabilities down indefinitely. I do not see that being done without the force of law, and politics will inevitably be at play.

Comment by Nik Samoylov (nik-samoylov) on Pessimism about AI Safety · 2023-04-02T08:24:42.713Z · LW · GW

Many words. But fundamentally, this is the first thing I have seen that makes sense on the topic. If you make a God, prepare to be killed by him.

If Sutskever, Altman et al. want that, I wish there were a way to send them off to a parallel universe to run their experiments. I have a family and a normal life to attend to.

There is no such thing as safe AGI. It must be indefinitely delayed.

Comment by Nik Samoylov (nik-samoylov) on Nobody’s on the ball on AGI alignment · 2023-03-30T06:11:06.743Z · LW · GW

I generally agree with your commentary about the dire lack of research in this area right now, and I want to be hopeful about the solvability of alignment.

I want to propose that AI alignment is not only a problem for ML professionals. It is a problem for the whole of society, and we need to get as many people involved as possible, soon: from lawyers and lawmakers to teachers and cooks. This is so for many reasons:

  1. They may have wonderful ideas that people with an ML background might not. (These may translate into technical solutions or into societal solutions.)
  2. It affects everyone, so everyone should be invited to address the problem.
  3. We need millions of people working on this problem right now.

I want to share what we are doing at my company: https://conjointly.com/blog/ai-alignment-research-grant/. The aim is to make social science PhDs aware of the alignment problem and get them involved in whatever way they can. Is it the right way to do it? I do not know.

I, for one, am not an LLM specialist, so I intend to make noise everywhere I can with the resources I have. This weekend I will be writing to every member of the Australian parliament. Next weekend, I will be writing to every university in the country.