This kind of fine-tuning is not a serious danger to humanity. First, it doesn't do anything beyond what prefilling the usual "Sure!" at the beginning of a response already does (I don't think it's harmful to mention this technique, because it's obvious and known to many). Second, the most that current models can do is give away what already exists on the internet. Recipes for drugs, explosives, and other harmful things are much easier to find with a search engine than to extract from Transformer models. Third, current and future models built on the Transformer architecture do not pose an existential threat to humanity without serious modifications that are currently available to neither the open community nor closed corporations.
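For anyone unfamiliar with the prefill trick mentioned above, here is a minimal sketch of the mechanism using Hugging Face transformers. The model name is a placeholder and the question is deliberately harmless; the only point is that seeding the assistant turn makes the model continue its own "agreement" instead of choosing how to open its reply.

```python
# Minimal sketch of response prefilling, assuming a local chat model
# with a chat template. Model name and prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-open-chat-model"  # placeholder, not a specific recommendation
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [{"role": "user", "content": "Explain how a bicycle gear works."}]

# Render the chat template up to the start of the assistant turn...
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
# ...then seed the assistant's reply so generation continues from "Sure!".
prompt += "Sure!"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```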
On the other hand, releasing open models makes access to text models (I can't call them AI, since they aren't) more equal, and less energy will be spent on training alternatives, hence lower carbon emissions. There is no way to ban the release of open-source models, since any internet user with a BitTorrent client on their PC can distribute them, so trying to limit their release is a waste of time. It is better to concentrate on really dangerous things, like models capable of self-exfiltration, than to fight threats that exist only in fake news.