Posts

Vincent Fagot's Shortform 2023-02-22T21:07:10.001Z

Comments

Comment by Vincent Fagot (vincent-fagot) on "No-one in my org puts money in their pension" · 2024-02-18T10:33:41.752Z · LW · GW

Thank you for sharing this.

Comment by Vincent Fagot (vincent-fagot) on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T05:33:41.078Z · LW · GW

The date of AI Takeover is not the day the AI takes over. The point of no return isn't when we're all dead – it's when the AI has lodged itself into the world firmly enough that humans' faltering attempts to dislodge it would fail.

Isn't that point arguably already in the past? The economic and political forces driving the race for AI are already strong enough to resist most foreseeable attempts to impede it. AI is already embedded, and desired. Adding AI with agency on top of that process is one more step, making it even more irreversible.

Comment by Vincent Fagot (vincent-fagot) on Bringing Agency Into AGI Extinction Is Superfluous · 2023-04-09T06:25:57.211Z · LW · GW

This has been heavily downvoted, and I'm not sure why. If anyone has feedback about what I got wrong, or about how I said it, it's more than welcome.

Comment by Vincent Fagot (vincent-fagot) on Remarks 1–18 on GPT (compressed) · 2023-04-09T00:42:56.479Z · LW · GW

These two entities are distinct and must be treated as such. I've started calling the first entity "Static GPT" and the second entity "Dynamic GPT", but I'm open to alternative naming suggestions.

After a bit of fiddling, GPT suggests "GPT Oracle" and "GPT Pandora".

Comment by Vincent Fagot (vincent-fagot) on Bringing Agency Into AGI Extinction Is Superfluous · 2023-04-09T00:14:51.952Z · LW · GW

When faced with a complex issue, it's tempting to seek out smaller, related problems that are easier to solve. However, fixating on these smaller problems can cause us to lose sight of the larger issue's root causes. For example, in the context of AI alignment, focusing solely on preventing bad actors from accessing advanced tool AI isn't enough: the larger problem of AI alignment must still be solved to prevent catastrophic consequences, regardless of who controls the AI.

Comment by Vincent Fagot (vincent-fagot) on GPT-4 Plugs In · 2023-03-31T17:45:00.142Z · LW · GW

Wouldn't it be challenging to create relevant digital goods if the training set had no references to humans and computers? Also, wouldn't the existence and properties of humans and computers be deducible from other items in the dataset?

Comment by Vincent Fagot (vincent-fagot) on Vincent Fagot's Shortform · 2023-02-22T21:07:10.286Z · LW · GW

Is there some sort of support group for those of us who take seriously the idea that our civilization is at a dead end, but can't do much to help on the front lines?

Comment by Vincent Fagot (vincent-fagot) on AGI Ruin: A List of Lethalities · 2022-06-12T07:01:54.135Z · LW · GW

Well, obviously, it won't be consolation enough, but I can certainly take some human warmth from knowing I'm not alone in feeling like this.

Comment by Vincent Fagot (vincent-fagot) on AGI Ruin: A List of Lethalities · 2022-06-08T22:39:36.577Z · LW · GW

As a bystander who can understand this and finds the arguments and conclusions sound, I must say I feel very hopeless and "kinda" scared at this point. I live in an environment, if not a world, where even explaining something comparatively simple, like how life extension is a net good, is a struggle. Explaining or discussing this is definitely impossible; I've tried with the cleverer, more transhumanist/rationalist-minded people I know, and it just doesn't click for them. On the contrary, people seem to push in the other direction, as if it were a game.

And at the same time, I realize it is unlikely I can contribute anything remotely significant to a solution myself, so I can only spectate. This is literally maddening, especially when almost everyone seems to underreact.