Comments
Thank you. What a coincidence, huh?
I think you miss the point that gradual disempowerment from AI happens because AI becomes the more economically (and otherwise) performant option, which systems can and will select instead of humans. Less reliance on human involvement leads to less bargaining power for humans.
But we already have examples, like Molochian corporate structures that have largely lost the need to value individual humans: they can afford a high churn rate, since there are always other people willing to take a decently paid corporate job even if the conditions are... suboptimal.
If the current trajectory continues, it's not the case that the AI you have is a faithful representative of you, personally, run in your garage. Rather, it seems there is a complex socio-economic process leading to the creation of the AIs, and the smarter they are, the more likely it is they were created by a powerful company or a government.
This process itself shapes what the AIs are "aligned" to. Even if we solve some parts of the technical alignment problem, we still face the question of what sociotechnical process is acting as the "principal".
This touches on an idea I'd really like to see get more attention: that we should build AI fundamentally tethered to human nature, so that this drift toward whatever arbitrary form e.g. "just so happens to sell best" doesn't happen. I call it tetherware - more about it in my post here.
Hmm, it seems someone beat me to it. Federico Faggin describes the idea I had in mind with his Quantum Information Panpsychism theory. Check it out here if you're interested - I'd appreciate your opinion on the plausibility of the theory.
https://www.essentiafoundation.org/quantum-fields-are-conscious-says-the-inventor-of-the-microprocessor/seeing/
I know this might not be a very satisfying response, but as extraordinary claims require extraordinary arguments, I'm going to need a series of posts to explain - hence the Substack.
Haha, I noticed you reacted with "I'd bet this is false" - I would be quite willing to present my arguments and contrast them with yours, but ultimately this is a philosophical belief, and no conclusive evidence can be produced for either side (that we know of). Sorry if my comment was misleading.
Yes, I see the connection with your section about digital people, and it is true that what I propose would make AI more suitable for merging with humans. But from my understanding, I don't think "digital" people or consciousness can exist. I strongly disbelieve computational functionalism and believe that consciousness is inherently linked to the collapse of quantum particle wavefunctions. Therefore, I think that if we can recreate consciousness in machines, it will always be bound to specialized non-deterministic hardware. I will explain my positions in more detail on my Substack.
Hmm... I see the connection to 1984. But isn't it useful to have words to spot something that is obviously already happening? (Like drug clinical trial data being demonstrably presented, or even altered, with a certain malicious/monetary intent in mind.)
Thank you for starting a discussion about this. I have two things to say:
1) In the post above, the "inftoxic" adjective means that the information is very much false or incorrect. Additionally, it means the falseness was intentionally "put into" the data or story with an intent to mislead, cause harm, etc. So the term is in fact different from (and to me personally more useful than) "malinformation" (which I likewise find quite unhelpful).
2) Regardless of the usefulness of the terminology I used as an example, do you think we could use new words in and around information that would improve the way we conduct the debate, in an attempt to be less wrong?
Thank you for such a detailed write-up! I have to admit that I am teetering on the issue of whether or not to ban open-source LLMs, and as a co-founder of an AI-for-drug-design startup, I had taken the increased biosecurity risk as probably the single most important consideration. So I think the conversation sparked by your post is quite valuable.
That said, even considering all that you presented, I am still leaning towards banning powerful open-source LLMs, at least until we get much more information and, most importantly, until we establish other safeguards against global pandemics (like "airplane lavatory detectors", etc.).
First, I think there is definitely a big difference between having online information about actually making lethal pathogens and having the full assistance of an LLM; the latter makes quite a significant difference, especially for people starting from near zero.
Then, when I consider all of my ideas for what kind of damage could be done with all the new capabilities, especially by combining LLMs with generative bioinformatic AIs... I think a lot of caution is surely warranted.
Ultimately, if you take the potential benefits of open-source LLMs over fine-tuned LLMs (not crazily significant in my opinion, but of course we don't have that data either) and compare them to the risks posed by essentially removing all of the guardrails and safety measures everyone is working on in AI labs... I think at least waiting some time before open-sourcing is the right call for now.