Comments sorted by top scores.
comment by RHollerith (rhollerith_dot_com) · 2023-06-30T16:15:48.178Z
> On the other hand, open sourcing only the frontend part of the interface allows other engineers to look at its internals and add extra safety guards or testing.
I have been unable to imagine a situation in which the frontend requires safety guards.
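The closest thing I can picture is a client-side filter on the model's completions before they reach the user, secret-scanning for example, and that reads to me as a quality feature rather than a safety guard. A minimal sketch of such a filter (every name here is invented for illustration; this is not real Copilot code):

```typescript
// Hypothetical sketch of a "safety guard" in the frontend of a
// Copilot-like IDE extension: filter the model's completions before
// surfacing them to the user. All identifiers are invented.

// Patterns the guard refuses to surface (e.g. leaked credentials).
const BLOCKED_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                    // AWS access key IDs
  /-----BEGIN (RSA )?PRIVATE KEY-----/,  // embedded private keys
];

// Keep only the completions that pass the guard.
function filterCompletions(completions: string[]): string[] {
  return completions.filter(
    (c) => !BLOCKED_PATTERNS.some((p) => p.test(c)),
  );
}
```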
All tools that help organizations develop complicated software artifacts make the danger we are in worse, unless there is some practical way (and there probably is not) to prevent AI labs from using the tool. That seems like the dominant consideration here, but maybe I just do not know enough about Copilot-like software artifacts.
The current version of Copilot is based on GPT-3 (via the Codex model), which has been put in enough risky situations (given access to the internet, to compute resources to run any code it writes, and to gullible humans) that if it were capable of doing us harm, it would have done so already. Is the danger you worry about that the core foundational model will be replaced by a more potent one, and that this time (miraculously) the AI research community won't put the model in risky situations, but the AI will then gain access to compute resources or gullible humans through the IDE extension?