I am not expecting any worldwide regulation on AI that prohibits people from using or training unaligned systems (I am just expecting the usual level of regulation). I am mainly hoping that spectral techniques develop to the point where AI groups want to use these spectral techniques (or some other method) more and more, until they are competitive with neural networks at general tasks or at least complement the deficiencies of neural networks. I also hope that these spectral techniques will remain interpretable and aligned.
Right now, there are several kinds of tasks for which I would rather use spectral techniques than neural networks. I have been evaluating the cryptographic security of block ciphers with small message size and very small key size (for cryptocurrency research), and it seems like the spectral techniques that I have developed give consistent measures of security for such block ciphers (I am not done with the training yet) and are better for cryptanalysis than neural networks. I have been able to use these spectral techniques for other problems, such as finding the largest clique in a graph (this is not something that I would have expected before I did it), and right now these spectral techniques are the only way I know of to transform a non-commutative polynomial into something that other machine learning models can work with more easily.
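The comment above does not spell out the clique method, so the following is only a generic spectral heuristic for finding a large clique, not the technique being referred to: rank the vertices by the leading eigenvector of the adjacency matrix, then greedily keep vertices that remain mutually adjacent.

```python
import numpy as np

def spectral_clique(adj: np.ndarray) -> list[int]:
    _, eigvecs = np.linalg.eigh(adj)                # adjacency matrix is symmetric
    score = np.abs(eigvecs[:, -1])                  # leading-eigenvector centrality
    clique: list[int] = []
    for v in np.argsort(-score):                    # visit vertices by decreasing score
        if all(adj[v, u] for u in clique):          # keep v if adjacent to all kept so far
            clique.append(int(v))
    return clique

# toy graph: an 8-clique planted in a sparse random graph on 40 vertices
rng = np.random.default_rng(0)
adj = (rng.random((40, 40)) < 0.1).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T
adj[:8, :8] = 1
np.fill_diagonal(adj, 0)
print(sorted(spectral_clique(adj)))                 # should recover most of the planted clique
```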
Right now, I do not know how to use spectral techniques to replace deep neural networks. I do not know how to use spectral techniques as universal function approximators, and I do not know how to use spectral techniques to make machine learning models with many layers. I hope to be able to solve these problems with spectral techniques, but I agree that there will be a tradeoff between performance and interpretability. The goal is to make this tradeoff favor interpretable, aligned, and safe machine learning models.
I agree that interpretability research is risky, and that one should carefully consider whether it is worth it to perform this interpretability research. I propose that a safer alternative would be to develop machine learning models that are
- quite unrelated to the highest performing machine learning models (this is so that capability gains in these safer models do not translate very well to capability gains for the high performing models),
- as inherently interpretable as one can make these models,
- more limited in functionality than the highest performing machine learning models, but
- still functional enough to be useful in practice.
These characteristics are correlated with each other, so it is likely that one would get all of these characteristics at once. For example, one will most likely not get interpretability for free but instead trade interpretability for performance. Being unrelated to the highest performing machine learning models will also likely reduce the performance of the machine learning models.
It seems like spectral methods such as PageRank or the eigenvectors of the Laplacian of a graph satisfy some of these characteristics, but it would be good if there were more research into developing spectral methods further to increase their functionality.
Perhaps interpretability research has yielded so few results because we need to first develop inherently interpretable models.
I do not see any evidence that large language models are equipped to understand the structure behind prime numbers. But transformers along with other machine learning tools should be well-equipped to investigate other mathematical structures. In particular, I am thinking about the mathematical structures called Laver-like algebras that I have been researching on and off since about 2015.
I have developed an algorithm that is capable of producing new Laver-like algebras from old ones. From every Laver-like algebra, one can generate a sequence of non-commutative polynomials. From these non-commutative polynomials, one can recover the entire Laver-like algebra up to critical equivalence. Therefore, to use deep learning to investigate the structure of Laver-like algebras, we need a few things.
- We need a way of transforming the non-commutative polynomials into something that further layers of AI can work with. I suggest using my notion of an LSRDR, which I developed for cryptographic purposes, to transform these non-commutative polynomials into collections of matrices. These matrices are often (but not always) unique up to a constant scalar factor and orthogonal/unitary equivalence. But this is simply what I have come up with using my own methods. There are likely other ways to transform non-commutative polynomials into vectors that I have simply not thought of.
- We need a way of solving a machine learning problem that takes a sequence of vectors or matrices as input. This is where we can use transformers (a rough sketch follows this list).
- We need a problem or problems to solve along with an algorithm for deciding which Laver-like algebras to go through next. One such problem would be to estimate the number of Laver-like algebras that satisfy certain conditions. Another such problem would be to determine whether an algebra arises from the rank-into-rank embeddings or not. Since transformers are good at working with a variety of problems, we may be able to train transformers to do a variety of tasks related to Laver-like algebras.
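Here is a rough, entirely hypothetical sketch of the second step: each Laver-like algebra is represented by a handful of matrices (for example, the output of an LSRDR-style reduction of its non-commutative polynomials), each matrix is flattened into one token, and a small transformer encoder predicts a property of the algebra. All sizes, names, and the task itself are placeholders.

```python
import torch
import torch.nn as nn

class MatrixSequenceClassifier(nn.Module):
    def __init__(self, d: int = 8, width: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(d * d, width)           # one token per flattened matrix
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(width, 2)                # e.g. "arises from rank-into-rank" or not

    def forward(self, mats):                           # mats: (batch, r, d, d)
        tokens = self.embed(mats.flatten(2))           # (batch, r, width)
        encoded = self.encoder(tokens)                 # (batch, r, width)
        return self.head(encoded.mean(dim=1))          # pool over the matrix sequence

model = MatrixSequenceClassifier()
fake_batch = torch.randn(16, 5, 8, 8)                  # 16 algebras, 5 matrices each
logits = model(fake_batch)                             # (16, 2)
```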
Let me know if you want more detailed information about this proposal.
While I believe that this is a sensible proposal, I do not believe there will be much of a market for it in the near future, for societal reasons. Our current society unfortunately does not have a sufficient appreciation for mathematics and mathematical theorems for this to have much of a market cap. To see why this is the case, we can compare this proposal to the proposal for cryptocurrency mining algorithms that are designed to advance science. I contend that a cryptocurrency with a scientific mining algorithm should attract a much larger market capitalization than a theorem-proof market, but since scientific cryptocurrencies do not have much of a market, one should not expect shares of theorems to attract much of a market either.
A market for mathematical conjectures requires participants to not only actively understand the mathematics behind the conjectures, but to also understand these theorems in their complete formalism. Perhaps AI systems may be developed to translate the formalism into English, but such a system may be flawed and may be subject to hacking in the sense that the statement in English may not be exactly the same as the formal statement, and it may be really easy to prove the formal statement. Since such a market requires a substantial amount of effort from the users, it is unlikely that there will be many users who are willing to put in the effort into understanding the mathematics along with the formalism.
The market is also likely to favor simple propositions that are easy to understand, such as the bounded gaps between primes conjecture. For example, Beal's conjecture states that there are no positive integers $A,B,C,x,y,z$ with $A^{x}+B^{y}=C^{z}$ and $x,y,z>2$ where $A,B,C$ have no common factor. This conjecture was formulated by the billionaire Andrew Beal, who attached a million dollar prize to it. This conjecture is a generalization of Fermat's last theorem. Beal's conjecture is a reasonably interesting conjecture, but it seems like Beal's conjecture is the very best mathematics problem that amateur mathematicians can come up with and fund with their own money. This means that the market for mathematical theorems will be small and will mainly be for theorems that anyone can understand, such as the Twin Prime conjecture. It would be more satisfying if the market for mathematical theorems better represented the type of problems and questions that your typical mathematician is interested in.
Cryptocurrency mining algorithms can be used to solve scientific problems, but the users of the cryptocurrency do not even have to care very much about the underlying scientific problem. This is because a cryptocurrency is much more than its mining algorithm, and the details of the mining algorithm are relatively unimportant to the functionality of the cryptocurrency. If chosen correctly, the underlying scientific/mathematical problem does not harm or interfere very much with the cryptocurrency, so there is really no excuse for the cryptocurrency community to waste all those mining resources on non-scientific problems, because the cryptocurrency community does not have to. The cryptocurrency community needs a desire not to waste energy, along with the philosophy that solving the most important scientific problem is profitable (this would be a self-fulfilling prophecy). The cryptocurrency community should also have a basic awareness of mining algorithms that are designed to solve a scientific problem (they do not need to know the specific details). This basic awareness should be part of the due diligence that every cryptocurrency enthusiast undertakes in order to maximize profit, so this understanding should be natural for the cryptocurrency sector. Since there is still only one dominant minable cryptocurrency (namely Bitcoin), one only needs one scientific mining algorithm, so the cryptocurrency community does not even need to understand multiple scientific mining algorithms. They just need to know one. It is easier to know about one cryptocurrency mining algorithm than it is to know the market value of many theorems.
Right now Bitcoin has a market capitalization of about 500 billion dollars, and if Bitcoin had been set up correctly, it could have had a mining algorithm that was designed to solve a very important scientific problem without the cryptocurrency community even thinking about this. Unfortunately, Bitcoin's mining algorithm was never designed to advance science, and at this point, the cryptocurrency community is not very interested (due to anti-intellectualism) in using cryptocurrency technologies to solve any scientific problem. The cryptocurrency community to a great extent has no concept or understanding of Bitcoin or of what it means for a cryptocurrency mining algorithm to solve a scientific problem. And those who think they understand what it means for a cryptocurrency mining algorithm to be designed to solve a scientific problem are quite antagonistic to cryptocurrency mining being used for any scientific purpose. Since the cryptocurrency community hates using cryptocurrency mining for a scientific purpose so much, one should expect any market for proofs of theorems to be very small and undeveloped.
Nobody should expect there to be a sizable theorem/proof market when that would require ~100 times as much effort from users as a market for cryptocurrencies with scientific mining algorithms, and when there is no sizable scientific mining algorithm market.
With all that being said, we do make a tradeoff with cryptocurrencies with scientific mining algorithms. Cryptocurrency mining algorithms such as SHA-256 must satisfy stringent conditions, so there are limited options when designing a cryptocurrency with a mining algorithm that advances science, and it is hard to fork a cryptocurrency to replace its scientific problem.
I would rather see people work on making scientific cryptocurrency mining more of a reality than imagining a market that is unlikely to exist any time soon because scientific cryptocurrencies are more feasible with the current state of the world.
You claim that Terry Tao says that Benford's law lacks a satisfactory explanation. That is not correct. Terry Tao actually gave an explanation for Benford's law. And even if he hadn't, an explanation for Benford's law would be obvious, or at least a standard undergraduate exercise, to anyone moderately familiar with mathematics.
Let be a random variable that is uniformly distributed on the interval for some . Let denote the first digit of . Then show that .
For all , let be a uniformly distributed random variable on the interval . Let denote the first digit of . Suppose that . Prove that .
Terry Tao in that blog post lists the characteristics that imply Zipf's law here:
"These laws govern the asymptotic distribution of many statistics which
- (i) take values as positive numbers;
- (ii) range over many different orders of magnitude;
- (iii) arise from a complicated combination of largely independent factors (with different samples of $X$ arising from different independent factors); and
- (iv) have not been artificially rounded, truncated, or otherwise constrained in size."
Terry Tao merely stated that Benford's law cannot be proven the same way mathematical theorems are proven; "Being empirically observed phenomena rather than abstract mathematical facts, Benford’s law, Zipf’s law, and the Pareto distribution cannot be “proved” the same way a mathematical theorem can be proved. However, one can still support these laws mathematically in a number of ways, for instance showing how these laws are compatible with each other, and with other plausible hypotheses on the source of the data."
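As a quick empirical illustration (my own check, not something from the blog post), the leading digits of the powers of 2, which range over many orders of magnitude, match $\log_{10}(1+1/d)$ closely:

```python
import math
from collections import Counter

N = 10000
counts = Counter(int(str(2 ** n)[0]) for n in range(1, N + 1))   # leading digits of 2**n
for d in range(1, 10):
    # observed frequency vs. Benford's prediction log10(1 + 1/d)
    print(d, round(counts[d] / N, 4), round(math.log10(1 + 1 / d), 4))
```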
I made the LaTeX compile by adding a space. Let me know if there are any problems.
Is the LaTeX compiling here?
I originally developed LSRDRs to investigate the cryptographic security of the mining algorithm for the cryptocurrency Circcash and to compare Circcash mining with similar mining algorithms. I launched the cryptocurrency Circcash so that Circcash mining accelerates the development of reversible computing hardware. But to do this, I needed to construct my own cryptographic algorithm. Unfortunately, I was unable to thoroughly investigate the cryptographic security of mining algorithms before launching Circcash. I decided that it was best to develop some of my own techniques for evaluating the cryptographic security of Circcash, including LSRDRs and normal forms among others, but I still have not completely investigated Circcash using LSRDRs since I need more computational power to reduce the dimension of collections of 2^32-1 by 2^32-1 matrices.
But it looks like LSRDRs and similar algorithms have other uses (such as producing word embeddings, graph/hypergraph embeddings, etc), and I expect to expand the capabilities of algorithms like LSRDRs.
Here is a post that I have made about how we still need to calculate LSRDRs of cryptographic functions in order to evaluate their cryptographic security:
https://github.com/jvanname/circcash/issues/10
Since Circcash mining will accelerate the development of more advanced AI, it is a good idea to use the knowledge that I have produced by investigating the cryptographic security of Circcash to try to get reversible computing to be used for good. Here is a post about how Circcash needs to be used to advance AI safety instead of just raw computational power.
https://github.com/jvanname/circcash/issues/13
Yes. I am looking for investors in Circcash, and I am willing to consult on the use of LSRDRs and similar algorithms in machine learning.
Massively downvoting mathematics without commenting at all just shows that the people on this site are very low quality specimens who do not care about rationality at all but who just want to pretend to be smart.
I will certainly make future posts about spectral methods in AI since spectral methods are already really important in ML, and it appears that new and innovative spectral methods will help improve AI and especially AI interpretability (but I do not see them replacing deep neural networks though). I am sure that LSRDRs can be used for QA for ML products and for interpretability (but it is too early to say much about the best practices for LSRDRs for interpreting ML). I don't think I have too much to say at the moment about other cryptanalytic tools being applied to ML though (except for general statistical tests, but other mathematicians have just as much to say about these other tests).
Added 8/4/2023: On second thought, people on this site really seem to dislike mathematics (they seem to love discrediting themselves). I should probably go elsewhere where I can have higher quality discussions with better people.
Hello. I am Joseph Van Name. I have been on this site substantially this year, but I never introduced myself. I have a Ph.D. in Mathematics, and I am a cryptocurrency creator. I am currently interested in using LSRDRs and related constructions to interpret ML models and to produce more interpretable ML models.
Let $A_{1},\dots,A_{r}$ be $n\times n$-matrices, and let $1\leq d<n$. Define a fitness function $L$ on tuples of $d\times d$-matrices by letting $L(X_{1},\dots,X_{r})=\frac{\rho(A_{1}\otimes X_{1}+\dots+A_{r}\otimes X_{r})}{\rho(X_{1}\otimes X_{1}+\dots+X_{r}\otimes X_{r})^{1/2}}$ ($\rho(A)$ denotes the spectral radius of $A$ while $A\otimes B$ is the tensor product of $A$ and $B$). If $(X_{1},\dots,X_{r})$ locally maximizes $L$, then we say that $(X_{1},\dots,X_{r})$ is an $L_{2,d}$-spectral radius dimensionality reduction (LSRDR) of $(A_{1},\dots,A_{r})$ (and we can generalize the notion of an LSRDR to a complex and quaternionic setting).
I originally developed the notion of an LSRDR in order to evaluate the cryptographic security of block ciphers such as the AES block cipher or small block ciphers, but it seems like LSRDRs have much more potential in machine learning and interpretability. I hope that LSRDRs and similar constructions can make our AI safer (LSRDRs do not seem directly dangerous at the moment since they currently cannot replace neural networks).
The equation king-man+woman=queen suggests that vector addition is not necessarily a fundamental operation in the space of embedded words, but instead we want a weaker operation that is kind of like addition but where we cannot pinpoint a zero vector.
In mathematics, we sometimes want to define a new axiomatization for an algebraic structure where we do not have any presence of a zero element. The best way to do this is to replace our binary fundamental operation with a ternary operation. For example, the notion of a median algebra is an adaptation of the notion of a distributive lattice where we do not have any notion of which way is up and which way is down but if we have two points and we specify which one we want to be on top and which one we want to be on the bottom, then the interval between those two points will be a distributive lattice. As another example, the notion of a heap is what you would get if you took the notion of a group but you removed the identity element.
A heap is an algebraic structure $(X,t)$ where $X$ is a set and $t:X^{3}\rightarrow X$ is a ternary operation that satisfies the identities $t(t(a,b,c),d,e)=t(a,b,t(c,d,e))$, $t(a,a,b)=b$, and $t(a,b,b)=a$.
If $(G,\cdot)$ is a group, then we can define a ternary heap operation $t$ on $G$ by setting $t(a,b,c)=ab^{-1}c$ (written additively, $t(a,b,c)=a-b+c$). For example, t(king,man,woman)=queen.
Similarly, if $t$ is a heap operation on a set $X$ and $e\in X$ ($e$ can be called a basepoint), then we can define a group operation $*$ on $X$ by setting $x*y=t(x,e,y)$, and $e$ is the identity of this group. The category of groups is isomorphic to the category of heaps with a basepoint.
We can adapt other classes of mathematical structures to the case where we remove a basepoint, including the classes of rings, vector spaces, and even inner product spaces if we really wanted to, and an affine space is the thing we obtain when we take a vector space and remove the origin but keep everything else intact in such a way that we can recover everything from the affine space and the origin. In an affine space $V$ over a field $K$ (the field has an origin, but the space does not), we cannot define scalar multiplication since that requires a basepoint, but we can declare a point $b$ in $V$ to be the origin for the purposes of performing scalar multiplication. We therefore define a scalar multiplication operation by setting $\lambda\cdot_{b}x=b+\lambda\cdot(x-b)=(1-\lambda)b+\lambda x$, which is scalar multiplication where $b$ is declared to be the origin. From this scalar multiplication, we can easily define the mean of several elements and weighted means (except in some finite fields). So if we are given data in an affine space $V$, we can still average the data and then declare the average to be the origin of $V$, and then we have obtained a vector space instead of just an affine space.
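As a concrete sanity check (my own illustration, with random vectors standing in for word embeddings), the operation $t(a,b,c)=a-b+c$ really does satisfy the heap identities, and choosing a basepoint turns the heap back into a group:

```python
import numpy as np

rng = np.random.default_rng(0)
t = lambda a, b, c: a - b + c          # heap operation on an additive group

a, b, c, d, e_vec = rng.standard_normal((5, 300))   # stand-ins for embedded words

# para-associativity and the two cancellation identities
assert np.allclose(t(t(a, b, c), d, e_vec), t(a, b, t(c, d, e_vec)))
assert np.allclose(t(a, a, b), b) and np.allclose(t(a, b, b), a)

# choosing a basepoint turns the heap back into a group (vector addition
# re-centred at `base`), mirroring the "declare the average to be the origin" idea
base = (a + b + c + d) / 4
star = lambda x, y: t(x, base, y)      # group operation with identity `base`
assert np.allclose(star(a, base), a) and np.allclose(star(base, a), a)

# the analogy king - man + woman ~ queen is t(king, man, woman) in this language
```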
Suppose that are finite dimensional vector spaces over a field , and let be a vector space isomorphism. Let denote the heap operations on these vector spaces. Let be a permutation, and let be the function defined by . If , then define by setting . Observe that .
We can define (in a model theoretic sense) the heap operations on and from , but we are not necessarily able to define the group operations on and from .
If , then .
Given , there is a unique where since precisely when which happens precisely when . We can therefore define the heap operation on from .
Given , there are where . We can therefore define from by setting We can define the heap operations on from , but we cannot necessarily define the zero element from .
For example, these could be vector spaces over a finite field, and the mapping could be the round function in a block cipher (we observe that substitution-permutation encryption functions like the AES encryption are simply recurrent neural networks over finite fields; AES uses its S-box, the patched multiplicative inverse $x\mapsto x^{254}$ on $GF(2^{8})$ followed by an affine map, as its activation function). More generally, these could be real vector spaces, and the mapping could be an RNN layer.
Let us not forget that anyone who has studied a little bit of physics knows that there is a lot of physical intuition for affine spaces without origins. We cannot pinpoint a precise location and declare that location to be the center of the universe. We cannot say that an object is at rest; we can only say that an object is at rest relative to another object.
The Radon-Nikodym theorem allows us to define the Kullback-Leibler divergence in a more general setting. Even though this setting is more abstract, in this abstract setting it is clear how the KL-divergence is a way to define relative entropy. The abstract setting limits our options to only the correct option.
Suppose that $f$ is the probability density function of a random variable supported on $[0,1]$. Then we define the continuous entropy of $f$ as $H(f)=-\int_{0}^{1}f(x)\log f(x)\,dx$.
Suppose that we are given two probability measures $P,\lambda$ on the Borel subsets of $[0,1]$ and we know that $\lambda$ is the Lebesgue probability measure on $[0,1]$. Then is there any way to abstractly define the continuous entropy of $P$ that only refers to the abstract measures $P,\lambda$ without referring to $[0,1]$ or by recovering $[0,1]$ from the abstract properties of $P,\lambda$? Yes there is. The Radon-Nikodym theorem allows us to recover the density of $P$ from the measure-theoretic properties of $P,\lambda$, and it therefore allows us to recover the continuous entropy.
Suppose that $\mathcal{M}$ is a $\sigma$-algebra on a set $X$. Let $\mu,\nu$ be two measures on $\mathcal{M}$. We say that $\nu$ is absolutely continuous with respect to $\mu$ if $\mu(A)=0$ implies $\nu(A)=0$, and we write $\nu\ll\mu$ in this case. Recall that a measure $\mu$ is $\sigma$-finite if there are $A_{n}\in\mathcal{M}$ for each natural number $n$ such that $\mu(A_{n})<\infty$ for each $n$ and $X=\bigcup_{n}A_{n}$.
Theorem: (Radon-Nikodym theorem) Suppose that $\mu$ and $\nu$ are both $\sigma$-finite measures and $\nu\ll\mu$. Then there exists a non-negative measurable function $f$ where $\nu(A)=\int_{A}f\,d\mu$ for each $A\in\mathcal{M}$.
The function $f$ is called the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ and is denoted by $\frac{d\nu}{d\mu}$. The Radon-Nikodym derivative is unique up to a set of $\mu$-measure zero.
We therefore define the Kullback-Leibler divergence as $D_{KL}(P\|\lambda)=\int\log\left(\frac{dP}{d\lambda}\right)dP$, which coincides with the continuous entropy up to a factor of $-1$.
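A quick discrete illustration of this relationship (my own, not from the comment above): for a distribution $p$ on $n$ points, the divergence to the uniform reference measure equals $\log n$ minus the Shannon entropy.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(16); p /= p.sum()          # an arbitrary probability vector
u = np.full(16, 1 / 16)                   # uniform ("Lebesgue-like") reference

density = p / u                           # discrete Radon-Nikodym derivative dP/dU
kl = np.sum(p * np.log(density))          # integral of log(dP/dU) dP
entropy = -np.sum(p * np.log(p))          # Shannon entropy of p
assert np.isclose(kl, np.log(16) - entropy)
```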
Now, the assumption that the reference measure space is isomorphic to the Lebesgue probability measure on $[0,1]$ is a surprisingly general assumption that works for many probability measures.
A Boolean algebra $B$ is said to be $\sigma$-complete if whenever $a_{n}\in B$ for each natural number $n$, the set $\{a_{n}:n\in\mathbb{N}\}$ has a least upper bound, which we denote by $\bigvee_{n}a_{n}$.
A measure algebra is a pair $(B,\mu)$ where $B$ is a $\sigma$-complete Boolean algebra and $\mu:B\rightarrow[0,\infty]$ is a function with $\mu(a)=0$ iff $a=0$ and $\mu(\bigvee_{n}a_{n})=\sum_{n}\mu(a_{n})$ whenever $a_{m}\wedge a_{n}=0$ for $m\neq n$. If $(B,\mu)$ is a measure algebra, then define a metric (and hence a topology and uniformity and whatnot) on $B$ by setting $d(a,b)=\mu(a\triangle b)$. Recall that a topological space is separable if it has a countable dense subset.
For example, if $\mathcal{L}$ is the collection of Lebesgue measurable subsets of $[0,1]$, and $\mathcal{N}$ is the collection of measure zero subsets of $[0,1]$, then the quotient Boolean algebra $\mathcal{L}/\mathcal{N}$ is complete, and if $\mu$ is the Lebesgue measure on $[0,1]$, then $(\mathcal{L}/\mathcal{N},\mu)$ is a separable atomless probability measure algebra, and this measure algebra is unique.
Theorem: (Caratheodory) Any two separable atomless probability measure algebras are isomorphic.
We have an even stronger uniqueness theorem for this measure algebra.
Theorem: Every Borel probability measure on a complete separable metric space without any atoms is isomorphic to the Lebesgue measure on the interval $[0,1]$.
Suppose now that are Borel probability measures on a complete separable metric space that possibly have atoms. Then there is some measure where is isomorphic to the Lebesgue probability measure. In this case, for almost all , so
In this case, we can pretend that the reference measure is the Lebesgue measure on $[0,1]$ and that the divergence just measures the continuous entropy with respect to the Lebesgue probability measure on $[0,1]$.
The KL-divergence also generalizes greatly. Given a function $f:(0,\infty)\rightarrow\mathbb{R}$ ($f$ is usually convex or concave), one can define another measure of similarity between probability measures, namely $D_{f}(P\|Q)=\int f\left(\frac{dP}{dQ}\right)dQ$, known as the $f$-divergence. One can also define a measure on $\mathbb{R}$ from the Radon-Nikodym derivative $\frac{dP}{dQ}$ by taking the pushforward $\left(\frac{dP}{dQ}\right)_{*}Q$; here, if $g:X\rightarrow Y$ and $\mu$ is a measure on $X$, then $g_{*}\mu$ is the measure on $Y$ defined by $g_{*}\mu(A)=\mu(g^{-1}(A))$. The random variable $\frac{dP}{dQ}$ gives information about the similarity between the probability measures, and from its distribution one can always recover the $f$-divergence. In this case, $D_{f}(P\|Q)=\int f\,d\left(\left(\frac{dP}{dQ}\right)_{*}Q\right)$.
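Here is a small discrete sketch (my own illustration) of the $f$-divergence recipe: the same density-ratio construction specializes to the KL divergence with $f(t)=t\log t$ and to total variation with $f(t)=\frac{1}{2}|t-1|$.

```python
import numpy as np

def f_divergence(p, q, f):
    ratio = p / q                         # discrete Radon-Nikodym derivative dP/dQ
    return np.sum(f(ratio) * q)           # integral of f(dP/dQ) dQ

rng = np.random.default_rng(1)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()

kl = f_divergence(p, q, lambda t: t * np.log(t))
tv = f_divergence(p, q, lambda t: 0.5 * np.abs(t - 1))
assert np.isclose(kl, np.sum(p * np.log(p / q)))
assert np.isclose(tv, 0.5 * np.sum(np.abs(p - q)))
```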
If the singular value decompositions of various matrices in neural networks are interpretable, then I suspect that various other spectral techniques that take more non-linear interactions into consideration would be interpretable as well. For research into cryptocurrency technologies, I have been investigating spectral techniques for evaluating block ciphers including the AES, but it seems like these spectral techniques may also be used to interpret matrices in neural networks (though I still need to do experiments to figure out how well this actually works in practice).
The singular value decomposition is designed to treat the weight matrix as a linear transformation between inner product spaces while ignoring all additional structure on the inner product spaces, so it seems like we may be able to use better spectral techniques for analyzing matrices in neural networks or other properties of neural networks, since these other spectral techniques do not necessarily ignore the structure that neural networks contain. The SVD has other disadvantages, including the numerical instability of the orthogonal matrices in the presence of nearby singular values, the lack of generalization to higher order tensors (higher order SVDs lose most of the interesting properties of the traditional SVD), and the inability of the SVD to find clusters of related dimensions (the top singular vectors may not have anything to do with each other, so the SVD is not a clustering algorithm, and one cannot use PCA to find a cluster of dimensions). The singular value decomposition has been around since the 1800's, so it is not exactly a cutting edge technique.
Suppose that we encode data and parameters as a collection of $n\times n$ square matrices $A_{1},\dots,A_{r}$ for some $n$. Then we reduce the dimensionality of these square matrices (I call this dimensionality reduction the $L_{2,d}$-spectral radius dimensionality reduction and abbreviate it as LSRDR) by finding $d\times d$ matrices $X_{1},\dots,X_{r}$ with $d<n$ where we maximize $\frac{\rho(A_{1}\otimes X_{1}+\dots+A_{r}\otimes X_{r})}{\rho(X_{1}\otimes X_{1}+\dots+X_{r}\otimes X_{r})^{1/2}}$ using gradient ascent, where $\rho$ denotes the spectral radius and $\otimes$ refers to the tensor product (the gradient process is quicker than you think even though we are using tensor products and the spectral radius). The matrices $X_{1},\dots,X_{r}$ are the matrices of reduced dimensionality. Let $P$ be the associated $n\times n$ matrix (in the well-behaved case, $P=SR$ where $X_{j}=RA_{j}S$ for all $j$ and $RS=1_{d}$). Then $P^{2}=P$, so $P$ will be a projection matrix but is not necessarily an orthogonal projection, and the vector spaces $\operatorname{im}(P)$ and $\ker(P)$ will be clusters of dimensions in $\mathbb{R}^{n}$. Like the singular value decomposition, the matrix $P$ is often (but not always) unique, and this is good because when $P$ is unique, you know that your LSRDR is well-behaved and that LSRDRs are probably a good tool for whatever you are using them for, but the non-uniqueness of $P$ indicates that the LSRDRs may not be the best tool for whatever one is using them for (and there are other indicators of whether LSRDRs are doing anything meaningful). One can perform the same dimensionality reduction technique for any completely positive superoperator. We therefore need to find ways of representing parts of neural networks or parts of the data coming into neural networks as 3rd and 4th order tensors or as collections of matrices or completely positive superoperators (but the higher order tensors need to live in a tensor product of inner product spaces of the right form in order for the LSRDR to function).
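For concreteness, here is a rough numerical sketch of such a reduction. The fitness $\rho(A_{1}\otimes X_{1}+\dots+A_{r}\otimes X_{r})/\rho(X_{1}\otimes X_{1}+\dots+X_{r}\otimes X_{r})^{1/2}$ is my reading of the recipe above rather than a formula taken from any existing code, and scipy's quasi-Newton optimizer with finite-difference gradients stands in for the gradient ascent described:

```python
import numpy as np
from scipy.optimize import minimize

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def neg_fitness(flat, As, d):
    Xs = flat.reshape(len(As), d, d)
    num = spectral_radius(sum(np.kron(A, X) for A, X in zip(As, Xs)))
    den = np.sqrt(spectral_radius(sum(np.kron(X, X) for X in Xs)))
    return -num / den                                     # minimize the negated fitness

rng = np.random.default_rng(0)
n, d, r = 8, 2, 3
As = [rng.standard_normal((n, n)) for _ in range(r)]      # the matrices to reduce
res = minimize(neg_fitness, rng.standard_normal(r * d * d), args=(As, d))
Xs = res.x.reshape(r, d, d)                               # the reduced d x d matrices
print("fitness reached:", -res.fun)
```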
There are probably several ways of using LSRDRs to interpret neural networks, but I still need to run experiments applying LSRDRs to neural networks to see how well they work and establish best practices for LSRDRs used in interpreting neural networks.
Sadly, I am still the only entity who is publicly talking about how BSL-4 labs need to post cryptographic timestamps of all of their records (I searched the internet for posts about this issue made by others, but I was only able to find my own posts), and the only way I see this really changing is if the next pandemic, possibly coming from a lab, causes a few people to wake up and realize that BSL-4 labs currently have insufficient security standards. Since I do not expect BSL-4 labs to post cryptographic timestamps of all their records on blockchains, nor do I expect anyone except myself to make a significant effort in making sure that BSL-4 labs post these cryptographic timestamps, I have adapted my views about the security of BSL-4 labs.
I am now only asking for BSL-4 labs to publicly post cryptographic timestamps of all their records instead of worrying about making sure that these cryptographic timestamps make their way to public blockchains. For example, if a BSL-4 lab publicly posts a cryptographic hash or a Merkle root of all its new data every 10 minutes (on a website) without paying the transaction fee to put this cryptographic hash or Merkle root on a public blockchain, then that would be fine with me. In this case, BSL-4 labs do not need to pay cryptocurrency transaction fees, and people at BSL-4 labs can be completely oblivious to the usefulness of cryptocurrency technologies for bio-safety, and that would be fine with me. But in this case, someone else needs to periodically take the cryptographic timestamps from the BSL-4 labs and post them on public blockchains. Perhaps a web spider can go to the websites of the BSL-4 labs and gather their cryptographic hashes to make sure that they end up on blockchains. Of course, in this case, for security, there should probably be multiple organizations that make sure that the cryptographic timestamps make their way to blockchains, but at least this will make it a little easier for BSL-4 labs to do what they need to do.
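As a minimal sketch of how little is being asked for (my own illustration, not an official tool), publishing a single SHA-256 Merkle root commits a lab to every record in the batch, and anyone can later copy that root onto a public blockchain:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """Compute a Merkle root; publishing only this hash commits to every record."""
    level = [sha256(r) for r in records] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"incident log 2023-08-04 10:00", b"freezer inventory", b"entry badge log"]
print(merkle_root(records).hex())   # the only thing the lab needs to make public
```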
Someone on this site referred me to a paper with a similar proposal by Aleksandr V. Kudriavtsev, Anna Vakhrusheva, and Alexander Shneider (authors with medical and biological knowledge but less expertise in information security and cryptography) which proposes using private blockchains (it looks like this is just a shared database where deleting information is not allowed) to track the origin and dissemination of scientific data. Their idea may have some merit (though such an idea would require more data to be shared between more people, and this could be a security hazard, so maybe it is best if just the timestamps are made public), but at this point, we need to implement the simplest and most inexpensive solution, which is for the BSL-4 labs to post those cryptographic timestamps publicly.
The singular value decomposition is great and it even works well for complex and quaternionic matrices (and we can even generalize decompositions like the spectral, polar, and singular value decomposition and apply them to bounded linear operators between infinite dimensional Hilbert spaces), but to get a better appreciation of the singular value decomposition, we ought to analyze its deficiencies as well. I am currently researching other linear dimensionality reduction techniques (namely LSRDRs) that work well in the cases when the singular value decomposition is not the best technique to use for a linear dimensionality reduction. These cases include the following:
- While the SVD approximates a matrix with a low rank matrix, it does not generalize that well to higher order SVDs that decompose tensors in $V_{1}\otimes\dots\otimes V_{k}$ where $V_{1},\dots,V_{k}$ are vector spaces.
- The SVD applies to linear mappings between inner product spaces, but the SVD does not take into account any additional structure that the linear mappings or inner product spaces have. For example, if we had a tuple of vectors $(v_{1},\dots,v_{r})$, then we may want to use a linear dimensionality reduction that does not just consider this tuple as a matrix. For example, it is more meaningful to consider a weight matrix in a neural network as a tuple of row vectors or a tuple of column vectors than as just a matrix without additional structure.
- If we apply the principal component analysis to a collection of vectors (with mean 0 for simplicity), then the $k$-dimensional subspace that best approximates those vectors may fail to cluster together. For example, suppose that we sample independent mean-zero normally distributed random vectors from two groups, where the covariance matrix for one group is the orthogonal projection onto a subspace $U$ and the covariance matrix for the other group is the orthogonal projection onto a complementary subspace $V$. If we take a random sample from both groups and then perform a principal component analysis to find a $k$-dimensional subspace, then the principal component analysis will not tell you anything meaningful. We would ideally want to use something similar to the principal component analysis but which instead returns subspaces that are near $U$ or near $V$ (see the sketch after this list). The principal component analysis returns the top $k$-dimensional subspace of the vector space in magnitude, but the principal component analysis does not care whether the canonical basis for these $k$ dimensions forms a cluster in any way.
- Every real, complex, or quaternionic matrix has an SVD, and the SVD is unique (except in the case where we have repeated singular values). While mathematicians tend to like it when something exists and is unique, and computer scientists may find the existence and uniqueness of the singular value decomposition to be useful, the existence and uniqueness of the SVD does have its weaknesses (and existence and uniqueness theorems imply a sort of simplicity that is not the best indicator of good mathematics; good mathematics is often more complicated than what you would get from a simple existence and uniqueness result). One should think of the SVD as a computer program, programming language, or piece of code that always returns an output, regardless of whether the output makes sense, without ever producing an error message. This makes it more difficult to diagnose a problem or determine whether one is using the correct tools in the first place, and this applies to the singular value decomposition as well. The poor behavior of an algorithm could also provide some useful information. For example, suppose that one is analyzing a block cipher round function using an algorithm. If the algorithm produces errors for complicated block cipher round functions but does not produce these errors for simple block cipher round functions, then the presence of one or more errors indicates that the block cipher round function is secure.
- If $A$ is a real matrix, but we take the complex or quaternionic SVD of $A$ to get a factorization $A=U\Sigma V^{*}$, then the matrices $U,V$ will be real orthogonal matrices instead of properly complex or quaternionic matrices. This means that the SVD of a matrix is always well-behaved, which is again a problem since this well-behavedness does not necessarily mean that the singular value decomposition is useful for whatever circumstances we are using it for, and the poor behavior of a process may provide useful information.
- The SVD is not exactly new or cutting edge, so it will give one only a limited amount of information about matrices or other objects.
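Here is a toy demonstration of the PCA point above (my own, under one specific reading of that example, with the two groups living in the first three and last three coordinates of $\mathbb{R}^{6}$): which block each top principal direction lands in is essentially a coin flip, so the top-3 principal subspace usually straddles both clusters of dimensions.

```python
import numpy as np

for seed in range(5):
    rng = np.random.default_rng(seed)
    n = 2000
    U_samples = np.hstack([rng.standard_normal((n, 3)), np.zeros((n, 3))])  # lives in U
    V_samples = np.hstack([np.zeros((n, 3)), rng.standard_normal((n, 3))])  # lives in V
    data = np.vstack([U_samples, V_samples])

    cov = np.cov(data, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    top3 = vecs[:, -3:]                                # top-3 principal directions
    weight_in_U = np.sum(top3[:3, :] ** 2)             # between 0 (all in V) and 3 (all in U)
    print(f"seed {seed}: weight of top-3 subspace inside U = {weight_in_U:.2f}")
```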
Let $K$ denote either the field of real numbers, the field of complex numbers, or the division ring of quaternions. Suppose that $A_{1},\dots,A_{r}$ are $n\times n$-matrices with entries in $K$. If $X_{1},\dots,X_{r}$ are $d\times d$-matrices over $K$, then define a superoperator $\Gamma(A_{1},\dots,A_{r};X_{1},\dots,X_{r})$ by letting $\Gamma(A_{1},\dots,A_{r};X_{1},\dots,X_{r})(X)=A_{1}XX_{1}^{*}+\dots+A_{r}XX_{r}^{*}$ whenever $X$ is an $n\times d$-matrix over $K$. Define a partial (but nearly total) function by letting
$\|(A_{1},\dots,A_{r})\simeq(X_{1},\dots,X_{r})\|=\frac{\rho(\Gamma(A_{1},\dots,A_{r};X_{1},\dots,X_{r}))}{\rho(\Gamma(A_{1},\dots,A_{r};A_{1},\dots,A_{r}))^{1/2}\rho(\Gamma(X_{1},\dots,X_{r};X_{1},\dots,X_{r}))^{1/2}}$. Here, let $\rho$ denote the spectral radius of a linear operator. We say that $(X_{1},\dots,X_{r})$ is an $L_{2,d}$-spectral radius dimensionality reduction (LSRDR) of type $K$ of $(A_{1},\dots,A_{r})$ if the quantity $\|(A_{1},\dots,A_{r})\simeq(X_{1},\dots,X_{r})\|$ is locally maximized.
One can compute LSRDRs using a flavor of gradient ascent. Don't worry. Taking an approximate gradient of this fitness is less computationally costly than it sounds, and the gradient ascent should converge to an LSRDR. If the gradient ascent process fails to quickly converge to an LSRDR, then LSRDRs may not be the best tool to use.
We say that $(X_{1},\dots,X_{r})$ and $(Y_{1},\dots,Y_{r})$ are projectively similar and write $(X_{1},\dots,X_{r})\sim(Y_{1},\dots,Y_{r})$ if there is some non-zero $\lambda\in Z(K)$ ($Z(K)$ denotes the center of $K$) and some invertible matrix $C$ such that $Y_{j}=\lambda CX_{j}C^{-1}$ for $1\leq j\leq r$. Let $[(X_{1},\dots,X_{r})]$ denote the equivalence class containing $(X_{1},\dots,X_{r})$.
The equivalence class of an LSRDR of type $K$ of $(A_{1},\dots,A_{r})$ is often unique. At the very least, one should only be able to find a few equivalence classes of LSRDRs of type $K$ of $(A_{1},\dots,A_{r})$, and the equivalence class of LSRDRs with highest fitness should also be the easiest to find. But if the equivalence class is far from being unique, then this should be an indicator that the notion of taking an LSRDR may not be the best tool to use for analyzing $(A_{1},\dots,A_{r})$, so one should try something else in this case.
If $A_{1},\dots,A_{r}$ are all real matrices but $K=\mathbb{C}$, then the equivalence class of the LSRDR should contain a tuple in which each matrix is a real matrix. One can quickly test whether one should be able to find such a tuple, given an LSRDR, by computing an associated quantity; if that quantity is a real number (up to a rounding error), then that means that the LSRDR is well-behaved and perhaps an appropriate tool to use, but otherwise the LSRDR may not be the best tool to use.
If we find our LSRDR of type of , then if everything works out well, there should be some matrices where for and where and where for some (not necessarily orthogonal) projection matrix and constant . If , then we say that is constant factor normalized; if is constant factor normalized, then , so let us assume that is constant factor normalized to make everything simpler. Let be the dominant eigenvector of , and let be the dominant eigenvector of . Then there are positive semidefinite matrices and non-zero constants where. The projection matrix should be recovered from the positive semidefinite matrices since , and the positive semidefinite matrices (up-to a constant real factor) should be uniquely determined. The positive semidefinite matrices should be considered to be the dominant clusters of dimensions for .
Order 2 tensors: Suppose that for some finite dimensional real inner product space . Then set for . Then , so the positive semidefinite matrix is our desired dimensionality reduction of . For example, if is a weight matrix in a neural network, then we can make the columns of , or we can make the transposes of the rows of . Since we apply activation functions before and after we apply , it makes sense to separate into rows and columns this way. And yes, I have performed computer experiments that indicate that for , the matrices do represent a cluster of dimensions (at least sometimes) rather than simply the top dimensions. I have done the experiment where and in this experiment, the matrices (up to a constant factor for ) are all approximately the projection matrix that projects onto the subspace .
Order 3 tensors: Suppose that are finite dimensional real or complex inner product spaces and is a linear mapping. Observe that is canonically isomorphic to . Now give an orthonormal basis , and set for . Then one can apply an LSRDR to to obtain the positive semidefinite matrices . The positive semidefinite matrices do not depend on the orthonormal basis that we choose. For example, suppose that are open subsets of Euclidean spaces of possibly different dimensions and is a -function where there are where for each . Then let for where denotes the Hessian of . Then the matrices of an LSRDR of represent a cluster of dimensions in the tangent space at the point .
Order 4 tensors: Given a vector space , let denote the collection of linear maps from to . Let be a finite dimensional complex inner product space. Then there are various ways to put into a canonical one-to-one correspondence with . Furthermore, the Choi representation gives a one-to-one correspondence between the completely positive operators in and the positive semidefinite operators in . An operator is completely positive if and only if there are where for all . Therefore, whenever is completely positive, we compute a complex LSRDR of , and we should get matrices , and give us our desired dimensionality reduction. Of course, given an order 4 tensor, one has to ask whether it is appropriate to use LSRDRs for this order 4 tensor, and one should ask about the best way to use these order 4 tensors to produce an LSRDR.
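As a small numerical illustration of the Choi correspondence mentioned above (my own check, not tied to any LSRDR code): a map $\Phi(X)=B_{1}XB_{1}^{*}+\dots+B_{r}XB_{r}^{*}$ has a positive semidefinite Choi matrix, and Kraus matrices giving the same map can be read off from its eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
Bs = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(2)]
phi = lambda X: sum(B @ X @ B.conj().T for B in Bs)     # a completely positive map

# Choi matrix C = sum_{j,k} E_jk (x) phi(E_jk); positive semidefinite iff phi is CP
C = np.zeros((d * d, d * d), dtype=complex)
for j in range(d):
    for k in range(d):
        E = np.zeros((d, d)); E[j, k] = 1.0
        C += np.kron(E, phi(E))

eigvals, eigvecs = np.linalg.eigh(C)
assert eigvals.min() > -1e-9                             # PSD up to numerical noise

# read Kraus matrices K_i off the eigenvectors and check phi(X) = sum_i K_i X K_i^*
Ks = [np.sqrt(w) * v.reshape(d, d).T for w, v in zip(eigvals, eigvecs.T) if w > 1e-9]
X = rng.standard_normal((d, d))
assert np.allclose(phi(X), sum(K @ X @ K.conj().T for K in Ks))
```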
If this comment were not long enough already, I would give an explanation for why I believe LSRDRs often behave well, but this post is really about the SVDs so I will save my math for another time.
Yes. The LW community is very silly. They do not even know what I have been doing. They just hate because they are anti-intellectuals. They hate me because I have a Ph.D. in Mathematics and they don't. Not that that matters, because universities are extremely unprofessional and still refuse to apologize for promoting violence against me, and I am going to mention this every time I have the opportunity to do so until they apologize. But I would like to be enlightened by one of the downvoters if they have anything to say (unlike Gene Smith). If $A_{1},\dots,A_{r}$ are $n\times n$-complex matrices and $d\geq 1$, then define the $L_{2,d}$-spectral radius of $(A_{1},\dots,A_{r})$ to be
$\rho_{2,d}(A_{1},\dots,A_{r})=\max\left\{\frac{\rho(A_{1}\otimes\overline{X_{1}}+\dots+A_{r}\otimes\overline{X_{r}})}{\rho(X_{1}\otimes\overline{X_{1}}+\dots+X_{r}\otimes\overline{X_{r}})^{1/2}}:X_{1},\dots,X_{r}\text{ are }d\times d\text{ complex matrices}\right\}.$
Can you at least tell me why we have a MAX there and not a SUP? And once you tell me that, can you tell me why I should not have spent any time at all using the $L_{2,d}$-spectral radius to solve problems in cryptography and AI? Because it seems like the people here are just anti-intellectual and hate AI safety for some reason.
Added later: I see that people are downvoting this because they are anti-intellectual and hate mathematics. WHAT A JOKE!
There are reasons why I said what I did, but I am unwilling to assume that you are ready or willing to have a decent discussion. It is best if we ceased talking about this.
3. "Labs record and respond to every incident and plan for emergencies."-But do they do far enough with these procedures? Do BSL-4 labs publicly post cryptographic timestamps (cryptographic hashes and Merkle roots) of all their records?
Once the cryptographic timestamps are made public, people can publicly post these cryptographic timestamps on public blockchains like the Bitcoin blockchain.
I searched for these cryptographic timestamps, but I did not find any at any BSL-4 lab. This means that if there is a lab incident that they do not want to make public, then the lab (with the help of the government) can falsify records, but they cannot do this once the timestamps are posted on the Bitcoin blockchain.
I am still the only entity advocating for this safety measure (Aleksandr V. Kudriavtsev, Anna Vakhrusheva, and Alexander Shneider proposed in a scientific paper creating an entirely new blockchain which is much more complicated than simply asking for the BSL-4 labs to periodically tell us the hashes of all of the records; I am only expecting the BSL-4 labs to implement the most basic solutions).
And yes, cryptographic timestamps can be used for AI safety as well in many different ways. For example, when training LLMs, if the training data came with cryptographic timestamps, then the LLM has an upper bound on the time when that data was produced (there may be other ways to do this, but since cryptographic timestamps are so easy to produce and verify, I see no reason why one should not use the added security of cryptographic timestamps). For example, if the data has a timestamp from before the year 2017, when the notion of a transformer was introduced, the LLM will know for sure that the data was not produced by another LLM. If LLMs are trained with data produced by other LLMs, and the old LLMs have the problem of making stuff up and declaring it as fact, then the future LLMs will recursively improve at making stuff up, and it will be harder for LLMs to output true statements, but LLMs will become better at making the statements believable.
The transitivity of correlation (or of the inner product on the sphere) is a reformulation of the triangle inequality for spheres with the Riemannian metric (and also for the surface of the unit ball in a real Hilbert space).
Let $d$ be the Riemannian metric on the sphere $S^{n-1}$. The value $d(x,y)$ is just the length of the shortest path $p$ on the sphere with $p(0)=x$ and $p(1)=y$. If $x\neq y$, then $d(x,y)>\|x-y\|$ since the path cannot be a straight line segment in $\mathbb{R}^{n}$. The metric can also be defined by $d(x,y)=\arccos(\langle x,y\rangle)$. From the triangle inequality, we know that $d(x,z)\leq d(x,y)+d(y,z)$. Therefore, by applying the cosine function (which is order reversing on $[0,\pi]$) to this inequality, we have
$\langle x,z\rangle\geq\cos(d(x,y)+d(y,z))=\langle x,y\rangle\langle y,z\rangle-\sqrt{(1-\langle x,y\rangle^{2})(1-\langle y,z\rangle^{2})}.$
For the reverse inequality, we apply the cosine function to the inequality $d(x,z)\geq|d(x,y)-d(y,z)|$ to obtain
$\langle x,z\rangle\leq\cos(d(x,y)-d(y,z))=\langle x,y\rangle\langle y,z\rangle+\sqrt{(1-\langle x,y\rangle^{2})(1-\langle y,z\rangle^{2})}.$
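A quick numerical check of these two bounds on random unit vectors (my own):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    x, y, z = rng.standard_normal((3, 5))
    x, y, z = x / np.linalg.norm(x), y / np.linalg.norm(y), z / np.linalg.norm(z)
    a, b, c = x @ y, y @ z, x @ z                       # the three inner products
    slack = np.sqrt((1 - a**2) * (1 - b**2))
    assert a * b - slack <= c + 1e-12 and c <= a * b + slack + 1e-12
```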
As a cryptocurrency creator, I can assure you that there is something seriously wrong with nearly everyone who owns and is a fan of cryptocurrency technologies. You can't trust those kinds of people. And most crypto-chlurmcks have not even read the Bitcoin whitepaper since they are incapable of understanding it despite how easy the Bitcoin whitepaper is to read. And nearly all crypto-chlurmcks cannot seem to understand that Bitcoin has a mining algorithm that was never even designed to advance science while simultaneously establishing consensus. And unlike GPT, the crypto-chlurmcks typically do not know how to form intelligible complete sentences. They hate mathematics. And they are all-around really dysfunctional people. The cryptocurrency sector has its fair share of scandals because the people who are attracted to cryptocurrencies are not the best people.
The intersection of machine learning and generalizations of Laver tables seems to have a lot of potential, so your question about this is extremely interesting to me.
Machine learning models from generalized Laver tables?
I have not been able to find any machine learning models that are based on generalized Laver tables and that are used for solving problems unrelated to Laver tables. I currently do not directly see any new kinds of deep learning models that one can produce from generalizations of Laver tables, but I would not be surprised if there were a non-standard machine learning model built from Laver-like algebras that we could use for something like NLP.
But I have to be skeptical about applying machine learning to generalized Laver tables. Generalizations of Laver tables have intricate structure and complexity, but this complexity is ordered. Furthermore, there is a very large supply of Laver-like algebras (even with 2 generators) for potentially any situation. Laver-like algebras are also very easy to compute. While backtracking may potentially take an exponential amount of time, when I have computed Laver-like algebras, the backtracking algorithm seems to always terminate quickly unless it is producing a large quantity of Laver-like algebras or larger Laver-like algebras. But with all this being said, Laver-like algebras seem to lack the steerability that we need to apply these algebras to solving real-world problems. For example, try interpreting (in a model theoretic sense) modular arithmetic modulo 13 in a Laver-like algebra. That is quite hard to do because Laver-like algebras are not built for that sort of thing.
Here are a couple of avenues that I see as most promising for producing new machine learning algorithms from Laver-like algebras and other structures.
Let $X$ be a set, and let $*$ be a binary operation on $X$ that satisfies the self-distributivity identity $x*(y*z)=(x*y)*(x*z)$. Define the right powers for inductively by setting and . We say that is a reduced nilpotent self-distributive algebra if satisfies the identities for and if for all there is an with . A reduced Laver-like algebra is a reduced nilpotent self-distributive algebra where if for each , then for some . Here we make the convention to put the implied parentheses on the left so that .
Reduced nilpotent self-distributive algebras have most of the things that one would expect. Reduced nilpotent self-distributive algebras are often equipped with a composition operation. Reduced nilpotent self-distributive algebras have a notion of a critical point, and if our reduced nilpotent self-distributive algebra is endowed with a composition operation, the set of all critical points in the algebra forms a Heyting algebra. If is a self-distributive algebra, then we can define and in the discrete topology (this limit always exists for nilpotent self-distributive algebras). We define precisely when and precisely when . We can define to be the equivalence class of with respect to and . The operations on are and whenever we have our composition operation . One can use computer calculations to add another critical point to a reduced nilpotent self-distributive algebra and obtain a new reduced nilpotent self-distributive algebra with the same number of generators but with another critical point on top (and this new self-distributive algebra will be sub-directly irreducible). Therefore, by taking subdirect products and adding new critical points, we have an endless supply of reduced nilpotent self-distributive algebras with only 2 generators. I also know how to expand reduced nilpotent self-distributive algebras horizontally in the sense that given a finite reduced nilpotent self-distributive algebra that is not a Laver-like algebra, we can obtain another finite reduced nilpotent self-distributive algebra where are both generated by the same number of elements and these algebras both have the same implication algebra of critical points but but there is a surjective homomorphism . The point is that we have techniques for producing new nilpotent self-distributive algebras from old ones, and going deep into those new nilpotent self-distributive algebras.
Since reduced nilpotent self-distributive algebras are closed under finite subdirect products, subalgebras, and quotients, and since we have techniques for producing many nilpotent self-distributive algebras, perhaps one can make a ML model from these algebraic structures. On the other hand, there seems to be less of a connection between reduced nilpotent self-distributive algebras and large cardinals and non-commutative polynomials than with Laver-like algebras, so perhaps it is sufficient to stick to Laver-like algebras as a platform for constructing ML models.
From Laver-like algebras, we can also produce non-commutative polynomials. If is a Laver-like algebra, and is a generating set for , then for each non-maximal critical point , we can define a non-commutative polynomial to be the sum of all non-commutative monomials of the form where and but where for . These non-commutative polynomials capture all the information behind the Laver-like algebras since one can reconstruct the entire Laver-like algebra up to critical equivalence from these non-commutative polynomials. These non-commutative polynomials don't work as well for nilpotent self-distributive algebras; for nilpotent self-distributive algebras, these non-commutative polynomials will instead be rational expressions and one will not be able to recover the entire nilpotent self-distributive algebra from these rational expressions.
I have used these non-commutative polynomials to construct the infinite product formula
where the polynomials are obtained from different non-trivial rank-into-rank embeddings . This means that the non-commutative ring operations in the rings of non-commutative polynomials are meaningful for Laver-like algebras.
If I wanted to interpret and make sense of a Laver-like algebra, I would use these non-commutative polynomials. Since non-commutative polynomials are closely related to strings (for NLP) and since the variables in these non-commutative polynomials can be substituted with matrices, I would not be surprised if one can turn Laver-like algebras into a full ML model using these non-commutative polynomials in order to solve a problem unrelated to Laver-like algebras.
Perhaps one can also use strong large cardinal hypotheses and logic to produce ML models from Laver-like algebras. Since the existence of a non-trivial rank-into-rank embedding is among the strongest of all large cardinal hypotheses, and since one can produce useful theorems about natural looking finite algebraic structures from these large cardinal hypotheses, perhaps the interplay between the logic using large cardinal hypotheses and finite algebraic structures may be able to produce ML models? For example, one can automate the process of using elementarity to prove the existence of finite algebraic structures. After one has proven the existence of these structures using large cardinal hypotheses, one can do a search using backtracking in order to actually find candidates for these algebraic structures (and if the backtracking algorithm is exhaustive and turns up nothing, then we have a contradiction in the large cardinal hierarchy or a programming error; mathematicians would need to expend a lot of effort to find an inconsistency in the large cardinal hierarchy this way, because I am confident that they won't find such an inconsistency and will instead find plenty of near misses). We can automate the process of producing examples of Laver-like algebras that satisfy conditions that were first established using large cardinal hypotheses, and we can perhaps completely describe these Laver-like algebras (up to critical equivalence) using our exhaustive backtracking process. This approach can be used to produce a lot of data about rank-into-rank embeddings, but I do not see a clear path that allows us to apply this data outside of set theory and Laver-like algebras.
Using deep learning to investigate generalized Laver tables
We can certainly apply existing deep learning and machine learning techniques to classical and multigenic Laver tables.
The $n$-th classical Laver table is the unique self-distributive algebraic structure $A_{n}=(\{1,\dots,2^{n}\},*_{n})$ where $x*_{n}1=x+1\bmod 2^{n}$ whenever $1\leq x\leq 2^{n}$. The classical Laver tables are up-to-isomorphism the only nilpotent self-distributive algebras generated by a single element.
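For readers who want to experiment, here is a small sketch (my own, using the defining recursion directly; the algorithm discussed below scales much further) that computes $A_{n}$: the rule $x*_{n}1=x+1$ together with self-distributivity forces $x*_{n}(y+1)=(x*_{n}y)*_{n}(x+1)$, so the rows can be filled in from $x=2^{n}$ downward.

```python
def laver_table(n: int):
    size = 2 ** n
    op = [[0] * (size + 1) for _ in range(size + 1)]   # 1-indexed: op[x][y] = x * y
    for y in range(1, size + 1):
        op[size][y] = y                                # 2**n * y = y (the identity row)
    for x in range(size - 1, 0, -1):                   # fill rows from the bottom up
        op[x][1] = x + 1                               # x * 1 = x + 1
        for y in range(1, size):
            op[x][y + 1] = op[op[x][y]][x + 1]         # x * (y+1) = (x * y) * (x + 1)
    return op

A2 = laver_table(2)
print([A2[1][y] for y in range(1, 5)])   # row of 1 in A_2: [2, 4, 2, 4]
print([A2[2][y] for y in range(1, 5)])   # row of 2 in A_2: [3, 4, 3, 4]
```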
Problem: Compute the $n$-th classical Laver table $A_{n}$ for $n$ as large as we can achieve.
Solution: Randall Dougherty in his 1995 paper Critical points in an algebra of elementary embeddings II gave an algorithm for computing the 48-th classical Laver table and with machine learning, I have personally gotten up to (I think I have a couple of errors, but those are not too significant), and I have some of the computation of .
Suppose now that . Then Dougherty has shown that one can easily recover the entire function from the restriction where for all . Suppose now that . Then let denote the least natural number such that . Then there is a number called the threshold of at such that and and such that for every with , we have whenever and for . The classical Laver table can be easily recovered from the table and the threshold function . To go beyond Dougherty's calculation of to and beyond, we exploit a couple of observations about the threshold function :
i: in most cases, , and
ii: in most cases, for some .
Let , and let where is the least natural number where and . Then one can easily compute from and the data by using Dougherty's algorithm. Now the only thing that we need to do is compute in the first place. It turns out that computing is quite easy except when is of the form or