satchlj's Shortform
post by satchlj · 2024-11-15T18:48:59.057Z
comment by satchlj · 2025-02-17T23:52:28.480Z
If you haven't already, I'd recommend reading Vinge's 1993 essay on 'The Coming Technological Singularity': https://accelerating.org/articles/comingtechsingularity
He is remarkably prescient, to the point that I wonder whether any genuinely new insights into the broad problem have been made in the 32 years since he wrote it. He discusses, among other things, using humans as a base on which to build superintelligence as a possible alignment strategy, as well as the problems with this approach.
Here's one quote:
Eric Drexler [...] agrees that superhuman intelligences will be available in the near future — and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good's ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate — say — one million times slower than you, there is little doubt that over a period of years (your time) you could come up with "helpful advice" that would incidentally set you free. [...]