Comments

Comment by Meme Marine (meme-marine) on MIRI 2024 Communications Strategy · 2024-05-31T04:03:08.180Z · LW · GW

No message is intuitively obvious; the inferential distance between the AI safety community and the general public is wide. Even if many people broadly dislike AI, they will tend to file apocalyptic predictions, especially ones with less hard evidence behind them than climate change (which is already very divisive!), in the same pile as every other doomsday claim. I am sure many people will be convinced, especially if they were already predisposed to it, but such a radical message will alienate many potential supporters.

I think the suggestion that contact with non-human intelligence is inherently dangerous is not actually widely intuitive. A large portion of people across the world believe they regularly commune with a non-human intelligence (a god or gods) which they consider benevolent. I also think this is a case of generalizing from fictional evidence - mentioning "aliens" conjures up stories like The War of the Worlds. So while this is definitely a valid concern, it will be far from a universally understood one.

I mainly think that using existing risks to convince people of their message would help because it would lower the inferential distance between them and their audience. Most people are not thinking about dangerous, superhuman AI, and may not until it is too late. Forming coalitions is a powerful tool in politics, and I think throwing it out of the window is a mistake.

The reason I say LLM-derived AI is that I do think LLMs are, to some extent, the be-all and end-all - not language models in particular, but the idea of using neural networks to model vast quantities of data, generating a model of the universe. That is what an LLM is, and it has proven wildly successful. I agree that agents derived from them will not behave like current-day LLMs, but they will be more like them than different. The major, classical misalignment risks would stem from something more like a reinforcement learning optimizer.

I am aware of the argument about dangerous AI in the hands of ne'er-do-wells, but such people already exist and in many cases are able to - with great effort - obtain the means of harming vast numbers of people. Gwern Branwen has covered this: there are a few terrorist vectors that would require relatively minuscule effort yet yield a tremendous expected terror output. I think that, in part, being a madman hampers one's ability to rationally plan the greatest terror attack one's means would allow, and also that the effort dedicated to suppressing such individuals vastly exceeds the effort of those trying to destroy the world. In practice, I think there would be many friendly AGI systems protecting the earth from the minority tasked to rogue purposes.

I also agree with your other points, but they are weak compared to the rock-solid reasoning of misalignment theory. They apply to many other historical situations, and yet we have ultimately survived; more people do sensible things than foolish things, and we often do get complex projects right the first time when their theoretical underpinnings are well understood. I think proto-AGI is almost as well understood as it needs to be, and that Anthropic is something like 80% of the way to cracking the code.

I am afraid I did forget, in my original post, that MIRI believes it is of no consequence who ends up holding AGI. It simply struck me as so obviously consequential that I didn't think anyone could disagree.

In any case, I plan to write a longer post, in collaboration with some friends who will help me edit it so it doesn't sound quite like the comment I left yesterday, in opposition to the PauseAI movement, which MIRI is a part of.

Comment by Meme Marine (meme-marine) on MIRI 2024 Communications Strategy · 2024-05-30T02:57:21.815Z · LW · GW

I am sorry for the tone I had to take, but I don't know how to be any clearer - when people start telling me they're going to "break the Overton window" and bypass politics, that is nothing but crazy talk. This strategy will ruin any chance of success you may have had. I also question the efficacy of an AI pause policy in the first place - one argument against it is that some countries may defect, which could lead to worse outcomes in the long term.

Comment by Meme Marine (meme-marine) on MIRI 2024 Communications Strategy · 2024-05-30T00:28:51.552Z · LW · GW

Why does MIRI believe that an "AI Pause" would contribute anything of substance to the goal of protecting the human race? It seems to me that an AI pause would:

  • Drive capabilities research further underground, especially in military contexts
  • Force safety researchers to operate on weaker models, which could hamper their ability to conduct effective research
  • Create a hardware overhang which would significantly increase the chance of a sudden catastrophic jump in capability that we are not prepared to handle
  • Create widespread backlash against the AI Safety community among interest groups that would like to see AI development continued
  • Be politically contentious, creating further points of tension between nations that could spark real conflict; at worst, you are handing the reins of the future to foreign countries, especially ones that don't care about international agreements - which are the countries you would least want in control of AGI.

In any case, I think you are going to have an extremely difficult time with your messaging. I think this strategy will not succeed and will most likely, like many other AI safety efforts, actively harm your cause.

Every movement thinks they just need people to "get it" - including, and especially, lunatics. If you behave like lunatics, people will treat you as such. This is especially true when there is a severe lack of evidence for your conclusions. Classical AI alignment theory does not apply to LLM-derived AI systems, and I have not seen anything substantial to replace it. I find no compelling evidence to suggest even a 1% chance of x-risk from LLM-based systems. Anthropogenic climate change has mountains of evidence to support it, and yet a significant chunk of the population still does not believe in it.

You are not telling people what they want to hear. Concerns around AI revolve around copyright infringement, job displacement, the shift of power between labor and capital, AI impersonation, data privacy, and just plain low-quality AI slop taking up space online and assaulting their eyeballs. The message every single news outlet has been publishing is: "AI is not AGI and it's not going to kill us all, but it might take your job in a few years" - that, I think, is the consensus opinion. Reframing some of your arguments in these terms might make them a lot more palatable, at least to people in the mainstream who already lean anti-AI. As it stands, even though the majority of Americans have a negative opinion of AI, they are very unlikely to support the kind of radical policies you propose, and lawmakers, who have an economic interest in the success of AI product companies, will be even less convinced.

I'm sorry if this takes on an insolent tone, but surely you understand why everyone else plays the game, right? They're not doing it for fun; they're doing it because that is the best and only way to get anyone to agree with your political ideas. If it takes time, then you had better start right now. If a shortcut existed, everyone would take it - and then it would cease to be a shortcut. You have not found a trick to expedite the process; you have stumbled into a trap for fanatics. People will tune you out among the hundreds of other groups who also believe the world will end and that their radical actions are necessary to save it. Doomsday cults are a dime a dozen. Behaving like them will produce the same result: ridicule.

Comment by Meme Marine (meme-marine) on What mistakes has the AI safety movement made? · 2024-05-30T00:03:34.711Z · LW · GW

I think one big mistake the AI safety movement is currently making is not paying attention to the concerns the wider population has about AI right now. People do not believe that a misaligned AGI will kill them, but they are worried about job displacement or the possibility of tyrannical actors using AGI to consolidate power. They're worried about AI impersonation and the proliferation of misinformation, or just plain shoddy computer-generated content.

Much like the difference between local environmental movements and the movement to stop climate change, focusing on far-off, global-scale issues causes people to care less. It's easy to deny climate change when it's something that will happen decades from now; people want answers to the problems they face now. I also think there's an element of people's innate anti-scam defenses going off: the more serious, catastrophic, and consequential a prediction is, the more evidence people will want before accepting that it is real. The prior one should put on apocalyptic events is quite low; it doesn't strictly follow that "they said coffee would end the world, so AGI isn't a threat," but each failed prediction does contribute some Bayesian evidence against the reliability of apocalyptic predictions in general.
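To make that "anti-scam defense" point concrete, here is a minimal Bayes-update sketch of my own (the numbers are illustrative placeholders, not estimates of anything real): with a low prior on any given doomsday claim and alarms that get raised readily whether or not the threat is real, the posterior stays low even when the alarm is fairly informative.

```python
# A minimal Bayes-update sketch of the "anti-scam defense" intuition above.
# All numbers are illustrative placeholders, not estimates of real probabilities.

def posterior(prior, p_alarm_given_true, p_alarm_given_false):
    """P(catastrophe is real | someone is loudly predicting it)."""
    p_alarm = p_alarm_given_true * prior + p_alarm_given_false * (1 - prior)
    return p_alarm_given_true * prior / p_alarm

# Low prior on any given doomsday claim being correct (many past predictions failed),
# and doomsday warnings get raised readily whether or not the threat is real.
print(posterior(prior=0.001, p_alarm_given_true=0.9, p_alarm_given_false=0.05))
# ~0.018: even a fairly "reliable" alarm leaves the listener mostly unconvinced,
# which is why extraordinary claims are held to a higher evidential bar.
```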

On the topic of evidence, I think it is also a problem that the AI safety community has been extremely short on messaging for the past three or so years. People are simply not convinced that an AGI would spell doom for them. The consensus appears to be that LLMs do not represent a significant threat no matter how advanced they become: they're "not real AI", they're "just a glorified autocomplete". Traditional AI safety arguments hold little water because they describe a type of AI that does not actually exist. LLMs and the AI systems derived from them do not possess utility functions, they do understand human commands and obey them, and they exhibit a comprehensive understanding of social norms, which they follow. LLMs are trained on human data, so they behave like humans. I have yet to see a convincing argument, beyond simple rejection, for why RLHF or related practices like constitutional AI do not constitute a successful form of AI alignment. All of the "evidence" for misalignment is shaky at best and outright fabrication at worst. This lack of an argument is really the key problem behind AI safety. It strikes outsiders as delusional.

Comment by Meme Marine (meme-marine) on “Artificial General Intelligence”: an extremely brief FAQ · 2024-03-12T00:58:13.108Z · LW · GW

Even so, one of the most common objections I hear is simply "it sounds like weird sci-fi stuff" and then people dismiss the idea as totally impossible. Honestly, this really seems to be how people react to it!

Comment by Meme Marine (meme-marine) on Drone Wars Endgame · 2024-02-03T01:32:35.525Z · LW · GW
  • "Guided bullets" exist; see DARPA's EXACTO program.
  • Assuming the "sniper drone" uses something like .50 BMG, you won't be able to fit enough of a payload into the bullet to act as a smoke grenade. You can't fit a "sensor blinding round" into it.
  • Being able to fly up to 1000 m and dodge incoming fire would add a lot of cost to a drone; you would be entering the territory of larger UAVs. The same goes for missile-launching drones.
  • Adding the required range would also be expensive. Current small consumer drones have a range of about 8 miles (e.g. the DJI Mavic), so taking significant ground with these would be difficult.
  • You would need a considerable number of relay drones if you want them to stay relatively low to the ground and avoid detection. The horizon - and in some cases trees and hills - will block the communication lasers; this is the main reason we don't see point-to-point links used more often. (A rough line-of-sight calculation follows this list.)
  • In general, you are talking about adding a great deal of capability to these drones, and that will balloon the cost. Added capability also adds weight, which further increases cost and logistics footprint; cost grows roughly exponentially with size.
  • The force composition presented seems to be geared towards anti-armor at the expense of all else. There isn't an answer for infantry in buildings here.
  • You cannot "ignore" aircraft! Bombs may not be able to target moving drones, but they can target your command and control infrastructure, your logistics, and your industry. 
  • You will need stationary infrastructure because you will need to maintain and repair those drones.
  • You can't occupy territory with drones. Infantry will still have a place enforcing the occupation, gathering HUMINT, and performing labor duties.
  • You could counter these drones with flak guns: anti-air cannons firing explosive shells can destroy drones, and the drones may not be agile enough to dodge them. Fuzed explosive shells can be very cheap, which would swing the economic calculation back in favor of conventional forces.
  • The US military seems to believe it will need to conduct a lot of tunnel warfare in the near future, and there are miles of tunnel networks beneath many major cities in the form of sewers, drains, subways, and nuclear bunkers. You can't use drones there.
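On the relay-drone point above, here is a minimal smooth-Earth line-of-sight sketch of my own (altitudes are illustrative assumptions; it ignores terrain, vegetation, and refraction, which generally make the picture worse for low-flying drones):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def horizon_m(height_m: float) -> float:
    """Geometric line-of-sight horizon distance for an emitter at height_m over a smooth Earth."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m)

def max_link_m(h1_m: float, h2_m: float) -> float:
    """Maximum smooth-Earth line-of-sight distance between two platforms at the given altitudes."""
    return horizon_m(h1_m) + horizon_m(h2_m)

# Two relay drones hugging the terrain at ~30 m altitude (illustrative numbers):
print(round(max_link_m(30, 30) / 1000, 1), "km")  # ~39 km, before any terrain blockage
# The same pair at ~5 m altitude:
print(round(max_link_m(5, 5) / 1000, 1), "km")    # ~16 km, before trees and hills cut it further
```

The point of the sketch is only that link spacing shrinks rapidly as the drones fly lower, so a low-observable relay chain needs many more nodes than the geometry at altitude would suggest.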