I think that most people imagine themselves as gatekeepers who control the world, when in reality the world is controlled by systems we don't fully understand. For example, I don't think people really know what "money" means in the real world. You can imagine yourself as a person who knows it, but can you verify that your understanding matches reality?
If we want to have a chance of controlling the future to some degree, then humans need to take over the world. It is not sufficient to stop AGI from taking over the world; we also need to ensure that the future of the world is aligned with human values. For now, it seems that stuff like "money" is running the show.
Obviously, the problems are not mutually exclusive! I find it easier to imagine a civilization that has survived for a long time and made significant technological progress: how would such a civilization approach ASI? I think they would analyze the problem to death, use automated theorem proving as much as possible, and have a culture where only a tiny fraction of ideas ever gets implemented, even if most of the ideas they never implement would seem very good to us. In short: higher standards for safety.
One challenge with the "people will use it for bad stuff" scenarios is that a sufficiently aligned AGI needs to be confidently non-trusting towards the minds of people who, in general, want to change the underlying physical processes of life as it evolved on Earth. This also holds for more bizarre and somewhat safe goals such as "make human babies have pointy ears". That is not an X-risk, but we still don't want that kind of stuff to happen. However, how to engineer AGI systems such that they refuse to cooperate with such people is enormously difficult and beyond my level of intelligence.
It is also worth putting this in the context of people saying "no, obviously, humans would not let it out of the box". Their confident arguments persuaded smart people into thinking that this was not a problem.
You also have the camp of "no, the problem will not be people telling the AI to do bad stuff, but this hard theoretical problem we have to spend years researching in order to save humanity" versus the camp of "we worry that people will use it for bad things". In hindsight, the second problem is the one that occurred first, while alignment research either comes too late or becomes relevant only once many other problems have already happened.
However, in the long run, alignment research might be like building the lighthouse in advance of ship traffic on the ocean. If you have never seen the ocean before, a lighthouse factory seems mysterious, as it is on land and seemingly has no purpose that is easy to relate to. Yet, such infrastructure might be the engine of civilizations that reach the next Kardashev scale.
The error margin LeCun used assumes independent probabilities, while in an LLM the paths that the output takes become strongly dependent on each other in order to stay consistent within a path. Once an LLM masters grammar and use of language, it stays consistent within the path. You get the same problem again, but repeated at a larger scale.
Think of it as tiling the plane using a building block which you use to create larger building blocks. The larger building block has similar properties at a larger scale, so errors still propagate, but more slowly.
If you use independent probabilities, then it is easy to make it look as if LLMs diverge quickly, but they do not do that in practice. Also, if there is a 99% chance of selecting one token as output, that probability is not what the LLM observes when predicting the next token. It "observes" its previous token as 100%. There is a chance that the LLM produced the wrong token, but in contexts where the path is predictable, it will learn to output tokens consistently enough to produce coherence.
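As a rough numerical sketch of the difference (assumptions: a per-token error rate of 1% and, hypothetically, only about sqrt(n) tokens acting as real branching points; neither number comes from any actual model), the independence assumption makes the chance of staying on an acceptable path decay like (1 - e)^n, while concentrating errors at a few branching points decays much more slowly:

```rust
fn main() {
    let e: f64 = 0.01; // assumed per-token error rate
    for &n in &[10u32, 100, 1000] {
        // Independence assumption: every token is an independent chance to fall off the path.
        let independent = (1.0 - e).powi(n as i32);
        // Hypothetical dependent case: only ~sqrt(n) tokens are genuine branching points.
        let k = (n as f64).sqrt().round() as i32;
        let few_branches = (1.0 - e).powi(k);
        println!("n = {:4}: independent {:.3}, few branching points {:.3}", n, independent, few_branches);
    }
}
```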
Humans often think about probabilities as inherently uncertain, because we have not evolved the intuition needed to understand probabilities at a deeper level. When an AI outputs actions, probabilities might be thought of as an "interpretation" of the action over possible worlds, while the action itself is a certain outcome in the actual world.
The error margin is larger than it appears when estimated from tokens, due to combinatorial complexity. There are many paths the output can take and still produce an acceptable answer, but an LLM needs to stay consistent with whichever path is chosen. The next level is to combine paths consistently, such that the dependence of one path on another is correctly learned. This means you get a latent space where the LLM learns a representation of the world using paths and paths between paths. In other words, it learns the topology of the world and is able to map this topology onto different spaces, which appear as conversations with humans.
I up-voted your post because I think this is a useful discussion to have, although I am not inclined to use the same argument and my position is more conditional. I learned this lesson from the time I played with GPT-3, which seemed to me like a safe pathway toward AGI, but I failed to anticipate how all the guardrails to scale back deployment were overrun by other concerns, such as profits. It is like taking a safe pathway and incrementally making it more dangerous over time. In the future, I expect something similar to happen to GPT-4, e.g. people developing hardware to put it directly on a box/device and selling it in stores: not just as a service, but as a tool where the hardware is patented/marketed. For now, it looks like training is the bottleneck for deployment, but I don't expect this to last, as there are many incentives to bring training costs down. Also, I think one should be careful about using flaws of the architecture as an argument against the path toward self-improvement. There could be a corresponding architecture design that provides a cheaper work-around. The basic problem is that we only see a limited number of options, while the world processes in parallel many more options than are available to a single person.
I saw somebody on twitter suggesting "LessBad" to match "LessWrong". https://twitter.com/Kat__Woods/status/1642291750621581312
If ChatGPT is asked questions like "should we put in safeguards in the development of self-improving AIs", then it is likely to answer "yes". Now, if ChatGPT were given political power, that answer would become a policy that world leaders need to implement. Do we need constraints on GPU computing clusters? Maybe ChatGPT answers "STOP", because it thinks the question is too complex to answer directly. It is always more difficult to decide which actions to take in order to implement general policies than to agree on the overall policy. However, if we can align our overall policy decisions, then we might have a better chance of dealing with threats from smarter AIs. I don't think this will work perfectly, but it might yield some sort of improvement over the current state.
Maybe you can talk to Eric Weiser, who kindly provided me a proof in Lean 3: https://github.com/advancedresearch/path_semantics/blob/master/papers-wip/semiconjugates-as-satisfied-models-of-total-normal-paths.pdf
The experts in dependent types I know think that Path Semantics might help provide a better foundation or understanding in the future, or perhaps languages with some new features. We don't know yet, because it takes a lot of work to get there. I don't have the impression that they are thinking much about Path Semantics, since there is already a lot to do in dependent types.
The reason I worked with Kent Palmer was that, unlike with dependent types, it is easier to see the connection between Path Semantics and Continental Philosophy. Currently, there is a divide between Analytic Philosophy and Continental Philosophy, and Kent Palmer is interested in bridging the two.
The paper about the counter-example to Leibniz's First Principle in Path Semantics has been released: https://github.com/advancedresearch/path_semantics/blob/master/papers-wip2/counter-example-to-leibniz-first-principle.pdf
OK, that thread is from last year, before HOOO EP was completed. HOOO EP is now completed.
You should not be confused by the term "Path Semantics", as this is the name of my project. The foundational level of Path Semantics is a set of logical languages consisting of IPL, HOOO EP, PSQ and PSI.
Of course HOOO EP is a made-up term; I am the person who invented it. However, the Prop library contains material about IPL, which is not something I invented. You don't have to understand HOOO EP to trust that IPL is previous work, as it is a well-established field. Consider HOOO EP a contribution I have made to Type Theory. The same goes for PSQ and PSI: PSQ is fully formalized and PSI is almost completed.
It takes time to publish research. The foundational work in Path Semantics is not yet completed, but it is getting closer now, as it can model functional programming using the "types as propositions" approach known from Category Theory. There is a link between Path Semantics and Homotopy Type Theory, but this is not yet formalized. I will visit the Henri Poincaré Institute this year, as they have the best collection in the world of work in topology.
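For readers unfamiliar with the "types as propositions" reading, here is a standard illustration in ordinary Lean 4 (not Path Semantics notation): a program inhabiting a type doubles as a proof of the corresponding proposition.

```lean
-- Read as a proposition: "A implies (A and A)".
-- Read as a program: a function taking an a : A and returning the pair (a, a).
example (A : Prop) : A → A ∧ A :=
  fun a => And.intro a a
```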
Let me break down Kent Palmer's paper for you: Zizek saved Lacan's sexuation formulas from the dustbin of history and connected them to the work of Derrida. There is an ongoing program in Continental Philosophy where these ideas are being developed. I am not part of this development. However, I helped Kent Palmer formalize his Schema Theory in propositional logic and developed a grammar of "maximal mathematical languages", which is a generalization of Gödel's Incompleteness Theorems extended with results from philosophy of the past century. Kent Palmer is a veteran in this field and is perhaps one of the few people in the world who know the work of 6 modern philosophers in great detail. He hosts reading groups in philosophy and Applied Category Theory. "Nilsen theories" is his name for the Inside/Outside distinction I used to introduce an underlying structure in the formalization of his Schema Theory. The name "Schema" goes back to Immanuel Kant, hence the reference to the Antinomies of Kant.
The name "Seshatism" comes from the Anti-Thoth argument of Daniel Fischer based on Plato's writings. Thoth was an ancient Egyptian god credited with invention of writing. His female consort, Seshat, was also credited invention of writing by some believers, but this was forgotten by history at the time of Plato. The Anti-Thoth argument is about language bias introduced by the core axiom in Path Semantics, which can be thought of as a kind of Platonic bias. The corresponding anti-bias is dual-Platonic, but I did not want to negate Platonism as there was a whole philosophy and mythology in ancient Egypt that was dual to the later Platonism. Therefore, I named it "Seshatism" after Seshat.
If you are reading a math paper from a researcher at the top of their field, then it is very likely that you will need to spend a lot of time absorbing the ideas and learning new concepts. This is just natural. I estimate that Path Semantics is more than twice as hard to learn as, e.g., IPL. I am trying to flatten the learning curve, but there is only so much I can do, and I need to complete the foundation first.
You can also take a look at Avalog, which is an implementation of Avatar Logic. I made it from scratch, using my own experience building automated theorem provers. I took Barry Jay's work on Closure Calculus and formalized it in Avalog. I also formalized an example from MIRI's research into Cartesian Frames, which are Chu spaces. This was just a test run to check whether Avalog made it easier to formalize modern research in mathematics.
So, this is the level of expertise that I work at on a daily basis. When somebody like you claims this is generated by GPT, you can't expect me to take you seriously. I don't know anything about your background, so if I try to explain my own research to you, it's like shooting in the dark. If somebody doesn't understand what IPL is, so what? I have research to do and it's not wrong of me to share things about it. There is a reason the organization is called AdvancedResearch!
I am not good at writing papers, but the papers uploaded are working documents. They are used internally in the organization. There have been people like you trying to figure out whether I am a crackpot, but that's just reddit drama. Good luck with that.
Path semantics is built upon previous works, e.g. Homotopy Type Theory: https://homotopytypetheory.org/
This kind of previous work is cited all over the place. I have no idea why you think there is no preexisting discussion going on.
Yes, all you needed was simply to ask: https://www.academia.edu/45073917/Asymmetries_of_Complementary_Nilsen_Theories?email_work_card=title
Here is a theory based on the work of 3 people in collaboration: https://github.com/advancedresearch/joker_calculus
I can give you more examples.
Let's say you don't trust the Pocket-Prover library. Yet, this library is a brute force theorem prover for which no one has reported bugs so far. I understand you won't trust it without further investigation, but there are no other people who have reasonable doubt about it. Where does this line of reasoning come from? The program is not trusted because... I was the one who wrote it? Isn't the point of logic to be reproducible independent of who writes it?
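To make the reproducibility point concrete, here is a from-scratch sketch in Rust of what a brute force propositional prover does (an illustration, not the Pocket-Prover API): enumerate every truth assignment and check the formula, so anyone can re-derive the result independently of the author.

```rust
// Check whether a two-variable propositional formula holds under every truth assignment.
fn is_tautology(f: impl Fn(bool, bool) -> bool) -> bool {
    [false, true]
        .iter()
        .all(|&a| [false, true].iter().all(|&b| f(a, b)))
}

fn main() {
    // (a AND b) => (a OR b) holds under every assignment.
    println!("{}", is_tautology(|a, b| !(a && b) || (a || b))); // true
    // a => b fails for a = true, b = false.
    println!("{}", is_tautology(|a, b| !a || b)); // false
}
```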
Gaslighting arguments are always used by people who are interested in a different kind of discussion than the person who submitted the original content. Let's say I made an error in some argument somewhere. That is natural. However, to claim that the entire thing is generated by GPT is unrealistic, and you know it, yet you joke about it as if you expect other people to feel the same way you do.
This is about logic, not about your emotions! I can't help it if my project rubs you the wrong way. I try to provide more angles for people, to make the learning curve a little easier to handle. Most people involved don't understand all of it, but they understand some parts to some degree, such that the big picture seems a little more plausible than entirely wrong.
If you were willing to let me answer questions you might have, then there would be a dialogue. Blocking me from that discussion is gaslighting. For example, I don't know yet which part you are confused about. Can you point to something concrete that you did not understand? Maybe I can help you find the external sources or get you in touch with people who might help you, if you are interested.
I can tell that you are not interested and not looking for a constructive dialogue. That was obvious from the first word in the first sentence in your first comment.
You can assume for yourself that what I write is nonsense, but that does not prove that I am writing nonsense. Claiming that my work is generated by GPT is simply false.
That's not how logic works. If no evidence I provide can convince you otherwise, then your condition for changing your mind is predicated on the evidence coming from someone other than me. That is not acceptable to me, because if I give you a link to somebody who uses my work, or to a computer program that uses my theory, then you will not accept it as a reason to investigate further.
You didn't make the claim that some of my work is wrong; you made the claim that the ENTIRE thing is wrong, which is a gaslighting argument.
I'll give a short introduction here to some of the background in constructive logic required to provide constructive criticism, as I saw one person here using a gaslighting argument:
Constructive logic is a weaker logic than classical logic, in which a statement A => B is modeled as a lambda expression in simple type theory. Propositional equality A == B consists of two maps, A => B and B => A. The reason A == B is technically an assumed position in logic is that lambda expressions can capture variables from their environment. This means A == B is a statement in any possible world, whether one starts with a non-empty or an empty context. Tautological equality requires an empty context. In Provability Logic, this is captured by the sequent |- A == B implying □(A == B). However, unlike First Order Logic theories, which build on predicates, Path Semantics is grounded in Propositional Logic with avatars (newtypes), where ¬(false^true) is trivial to prove and ^ is like a function pointer, unable to capture variables; as a consequence, Löb's theorem and Provability Logic are inconsistent with the meta-theory HOOO EP. The qubit operator ~ is an extension of normal logic using symbolic distinction and has tautological congruence (necessary-worlds semantics), unlike predicates in First Order Logic, which have normal congruence (possible-worlds semantics). This is how the counter-example to Leibniz's first principle is constructed. The entire theory of Path Semantics builds upon the qubit operator ~, so path semanticists (those who study this field) consider this operator central.
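For the propositional-equality part above, here is a minimal sketch in ordinary Lean 4 (plain type theory, not the Path Semantics operators), reading A == B as a pair of maps:

```lean
-- Building an equivalence from two maps.
example (A B : Prop) (f : A → B) (g : B → A) : A ↔ B :=
  Iff.intro f g

-- Recovering the two maps from an equivalence.
example (A B : Prop) (h : A ↔ B) : (A → B) ∧ (B → A) :=
  And.intro h.mp h.mpr
```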
In other words, if you think about this from the perspective of Set Theory using First Order Logic, then you are looking at it through the wrong mathematical bias. This makes some people believe that Path Semantics is bullshit, but I assure you that the research is formal and rigorous. I don't want to teach people things here, e.g. about First Order Logic, that they ought to know in order to provide constructive criticism of Path Semantics. I am not a teacher of First Order Logic; that's not my job. I try to communicate my own research in the best way I can, but I can't make people learn things about First Order Logic when they think their language (which I suppose they use on a daily basis) can do something it simply can't. The usual trap is that people do not know how equality properly behaves. You are supposed to know the limitations of your tools, and I feel embarrassed if I have to point them out to you. I am very well aware of these limitations, and my research is not about that; it is about making progress on something else entirely.
Author here. This is based on research using automated theorem proving. See the links in the post.
Thank you! Post updated to include the definition.
The use of Chu spaces is very interesting. This is also a great introduction to Chu spaces.
I was able to formalize the example in the research automated theorem prover Avalog: https://github.com/advancedresearch/avalog/blob/master/source/chu_space.txt
It is still very basic, but shows potential. Perhaps Avalog might be used to check some proofs about Cartesian frames.
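As a rough illustration of the structure being formalized (a hypothetical toy example, not the Avalog encoding linked above): a Cartesian frame can be viewed as a Chu space whose rows are agent options, whose columns are environment options, and whose entries are the resulting worlds.

```rust
fn main() {
    // Rows: agent options. Columns: environment options. Entries: resulting worlds.
    let agent = ["stay", "go"];
    let env = ["rain", "sun"];
    // Evaluation map of the Chu space: (agent choice, environment choice) -> world.
    let world = |a: &str, e: &str| format!("{}_{}", a, e);
    for &a in &agent {
        for &e in &env {
            println!("({}, {}) -> {}", a, e, world(a, e));
        }
    }
}
```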