LessWrong 2.0 Reader
People representing Anthropic argued against government-required RSPs. I don’t think I can share the details of the specific room where that happened, because it will be clear who I know this from.
Ask Jack Clark whether that happened or not.
programcrafter on The future of humanity is in management
Actually, AIs can be run on other kinds of territory that humans can't use (off the top of my head: sky islands over oceans, or hot air balloons for a more compact option). It would take a whole lot of datacenters to make people short on land, unless new large factories are built.
knight-lee on Mikhail Samin's Shortform
That's a very good heuristic. I bet even Anthropic agrees with it. Anthropic did not release their newer models until OpenAI released ChatGPT and the race had already started.
That's not a small sacrifice. Maybe if they had released sooner, they would be bigger than OpenAI right now thanks to the first-mover advantage.
I believe they want the best for humanity, but they are in a no-win situation, and what they should do is a very tough call. If they stop trying to compete, the other AI labs will build AGI just as fast, and Anthropic will lose all its funds. If they compete, they can make things better.
AI safety spending is only $0.1 billion, while AI capabilities spending is $200 billion. A company that adds a comparable amount of effort to both AI alignment and AI capabilities speeds up the former far more than the latter.
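The spending asymmetry above can be sketched as a quick calculation. This is a toy illustration, not a model; the figures are the comment's rough estimates, and the added amount is an arbitrary hypothetical:

```python
# Toy illustration of the spending-asymmetry argument.
# Figures are the comment's rough global estimates, in billions of USD.
safety_spend = 0.1        # global AI safety spending
capabilities_spend = 200  # global AI capabilities spending

# Suppose a lab adds a comparable amount of effort to both sides
# (hypothetical figure, chosen for illustration).
added = 1.0  # extra spend on each, in $B

# Proportional speedup of each field, treating progress as linear in spend.
safety_speedup = (safety_spend + added) / safety_spend                    # 11x
capabilities_speedup = (capabilities_spend + added) / capabilities_spend  # ~1.005x

print(f"Safety speedup:       {safety_speedup:.3f}x")
print(f"Capabilities speedup: {capabilities_speedup:.3f}x")
```

On these numbers, the same marginal dollar multiplies safety work roughly 2,000 times more (in proportional terms) than capabilities work, which is the asymmetry the comment leans on.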
Even if they don't support all the regulations you believe in, they're the big AI company supporting substantially more regulation than any of the others.
I don't know; I may be wrong. Sadly, it is very hard to figure out what's good or bad for humanity in this uncertain time.
lelapin on Catastrophe through Chaos
(Off topic to the OP, but on topic to Jan bringing up ALERT.)
To what extent do you believe Sentinel fulfills what you wanted to do with ALERT? Their emergency response team is pretty small right now. Would you recommend funders support that project or a new ALERT?
SB1047 was mentioned separately so I assumed it was something else. Might be the other ones, thanks for the links.
dagon on Gradual Disempowerment, Shell Games and Flinches
When humans fall well below marginal utility compared to AIs, will their priorities matter to a system that has made them essentially obsolete?
The point behind my question is "we don't know." If we reason analogously to human institutions (which are made of humans, but not really made or controlled BY individual humans), we have examples in both directions. AIs have less biological drive to care about humans than humans do, but also have more training on human writing and thinking than any individual human does.
My suspicion is that it won't take long (in historical time measure; perhaps only a few decades, but more likely centuries) for a fully-disempowered species to become mostly irrelevant. Humans will be pets, perhaps, or parasites (allowed to live because it's easier than exterminating them). Of course, there are plenty of believable paths that are NOT "computational intelligence eclipses biology in all aspects" - it may hit a wall, it may never develop intent/desire, it may find a way to integrate with biologicals rather than remaining separate, etc. Oh, and it may be fragile enough that it dies out along with humans.
lwlw on Schizophrenia as a deficiency in long-range cortex-to-cortex communication
Hi Steven! This is an old post, so you probably won't reply, but I'd appreciate it if you did! What do you think might be going on in the brains of schizophrenics with high intelligence? I know schizophrenia is typically associated with MRI abnormalities and lower intelligence, but this isn't always the case! At least for me, my MRI came back normal, and my cognitive abilities were sufficient to do well in upper-level math courses at a competitive university, even during my prodromal period. I actually deal with hypersensitivity as well. So, taking a very shallow understanding of your post and applying it to me: might my brain have a quirk that enables strong intracircuit communication (resulting in strong working memory, fast processing speed, and hypersensitivity) but not intercircuit communication (resulting in hallucinations/paranoia as downsides but a high DAT score as an upside)?
zach-stein-perlman on Mikhail Samin's Shortform
My guess is it's referring to Anthropic's position on SB 1047, or Dario's and Jack Clark's statements that it's too early for strong regulation, or how Anthropic's policy recommendations often exclude RSP-y stuff (and when they do suggest requiring RSPs, they would leave the details up to the company).
raemon on The Case Against AI Control Research
One major counterargument here: is control a necessary piece of the "solve alignment in time" plan? It may be "5-10x less important" than dealing with slop, but if you don't eventually solve both, you don't get useful, carefully-implemented, slightly-superhuman work done. And it might be that our surviving worlds look like either that, or "get a serious long-term pause."
simon on Escape from Alderaan I
"Luke ignited the lightsaber Obi-Wan stole from Vader."
This temporarily confused me until I realized it was not talking about the lightsaber Vader was using here, but about the one Obi-Wan took from him in Revenge of the Sith and gave to Luke near the start of A New Hope.