Joseph Van Name's Shortform
post by Joseph Van Name (joseph-van-name) · 2023-10-29T23:33:58.117Z · LW · GW · 16 comments
Comments sorted by top scores.
comment by Joseph Van Name (joseph-van-name) · 2023-12-11T22:36:13.249Z · LW(p) · GW(p)
Every entry in a matrix counts for the $L_2$-spectral radius similarity. Suppose that $A_1,\dots,A_r$ are real $n\times n$-matrices. Set $\rho_2(A_1,\dots,A_r)=\rho(A_1\otimes A_1+\dots+A_r\otimes A_r)^{1/2}$. Define the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and another tuple $(B_1,\dots,B_r)$ of square matrices to be the number
$$\frac{\rho(A_1\otimes B_1+\dots+A_r\otimes B_r)}{\rho_2(A_1,\dots,A_r)\cdot\rho_2(B_1,\dots,B_r)}.$$
Then the $L_2$-spectral radius similarity is always a real number in the interval $[0,1]$, so one can think of the $L_2$-spectral radius similarity as a generalization of the value $\frac{|\langle u,v\rangle|}{\|u\|\cdot\|v\|}$ where $u,v$ are real or complex vectors. It turns out experimentally that if $A_1,\dots,A_r$ are random real matrices, and each $B_i$ is obtained from $A_i$ by replacing each entry in $A_i$ with $0$ with probability $1-p$, then the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ will be about $\sqrt{p}$. If $p=1$, then $B_i=A_i$ for each $i$, and observe that $\sqrt{p}=1$ as well.
Suppose now that $A_1,\dots,A_r$ are random $n\times n$ real matrices and $B_1,\dots,B_r$ are the $d\times d$ submatrices of $A_1,\dots,A_r$ respectively obtained by only looking at the first $d$ rows and columns of $A_1,\dots,A_r$. Then the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ will be about $\sqrt{d/n}$. We can therefore conclude that in some sense $(B_1,\dots,B_r)$ is a simplified version of $(A_1,\dots,A_r)$ that captures the behavior of $(A_1,\dots,A_r)$ more efficiently (per retained entry) than the entry-sparsified matrices of the previous paragraph do.
If $A_1,\dots,A_r,B_1,\dots,B_r$ are independent random matrices with standard Gaussian entries, then the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ will be about $1/\sqrt{r}$ with small variance. If $u,v$ are random Gaussian vectors of length $n$, then $\frac{|\langle u,v\rangle|}{\|u\|\cdot\|v\|}$ will on average be about $c/\sqrt{n}$ for some constant $c$, but it will have a high variance.
These are some simple observations that I have made about the spectral radius during my research for evaluating cryptographic functions for cryptocurrency technologies.
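Here is a minimal numpy sketch (a quick illustration, not the code from my actual experiments) of how one might compute the $L_2$-spectral radius similarity and reproduce the two experiments above; the helper names and matrix sizes are arbitrary choices.

```python
import numpy as np

def rho(M):
    """Spectral radius: largest magnitude of an eigenvalue."""
    return np.max(np.abs(np.linalg.eigvals(M)))

def rho2(As):
    """L_2-spectral radius of a tuple of square matrices."""
    return np.sqrt(rho(sum(np.kron(A, A.conj()) for A in As)))

def similarity(As, Bs):
    """L_2-spectral radius similarity between two tuples of (possibly different sized) square matrices."""
    num = rho(sum(np.kron(A, B.conj()) for A, B in zip(As, Bs)))
    return num / (rho2(As) * rho2(Bs))

rng = np.random.default_rng(0)
n, d, r, p = 20, 10, 5, 0.7

As = [rng.standard_normal((n, n)) for _ in range(r)]

# Replace each entry with 0 with probability 1 - p; the similarity should be near sqrt(p).
Bs = [A * (rng.random((n, n)) < p) for A in As]
print(similarity(As, Bs), np.sqrt(p))

# Keep only the top-left d x d corner; the similarity should be near sqrt(d / n).
Cs = [A[:d, :d] for A in As]
print(similarity(As, Cs), np.sqrt(d / n))
```

The agreement with $\sqrt{p}$ and $\sqrt{d/n}$ is only rough at this size and sharpens as $n$ grows.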
comment by Algon · 2023-12-11T22:54:44.054Z · LW(p) · GW(p)
Your notation is confusing me. If r is the size of the list of matrices, then how can you have a probability of 1-r for r>=2? Maybe you mean 1-1/r and sqrt{1/r} instead of 1-r and sqrt{r} respectively?
comment by Joseph Van Name (joseph-van-name) · 2023-12-12T22:49:13.498Z · LW(p) · GW(p)
Thanks for pointing that out. I have corrected the typo. I simply used the same symbol for two different quantities, but now the probability is denoted by the symbol $p$.
comment by Joseph Van Name (joseph-van-name) · 2024-10-29T17:52:13.924Z · LW(p) · GW(p)
I have developed a machine learning notion which I call an LSRDR ($L_{2,d}$-spectral radius dimensionality reduction). LSRDRs (and similar machine learning models) behave mathematically, and they have a high level of interpretability, which should be good for AI safety. Here, I am giving an example of how LSRDRs behave mathematically and how one can get the most out of interpreting an LSRDR.
Suppose that $n$ is a natural number. Let $E$ denote the quantum channel that takes an $n$-qubit quantum state, selects one of those qubits at random, and sends that qubit through the completely depolarizing channel (the completely depolarizing channel takes a state as input and returns the completely mixed state as output).
If $A_1,\dots,A_r,B_1,\dots,B_r$ are complex matrices, then define superoperators $\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)$ and $\Phi(A_1,\dots,A_r)$ by setting
$\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)(X)=A_1XB_1^*+\dots+A_rXB_r^*$ and $\Phi(A_1,\dots,A_r)=\Gamma(A_1,\dots,A_r;A_1,\dots,A_r)$ for all $X$.
Given tuples of matrices $(A_1,\dots,A_r),(B_1,\dots,B_r)$, define the $L_2$-spectral radius similarity between these tuples of matrices by setting
$$\frac{\rho(\Gamma(A_1,\dots,A_r;B_1,\dots,B_r))}{\rho(\Phi(A_1,\dots,A_r))^{1/2}\cdot\rho(\Phi(B_1,\dots,B_r))^{1/2}}.$$
Suppose now that $A_1,\dots,A_r$ are $2^n\times 2^n$ complex matrices where $E(X)=A_1XA_1^*+\dots+A_rXA_r^*$ is a Kraus representation of the channel $E$. Let $1\le d\le 2^n$. We say that a tuple $(X_1,\dots,X_r)$ of complex $d\times d$ matrices is an LSRDR of $(A_1,\dots,A_r)$ if the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(X_1,\dots,X_r)$ is locally maximized.
Suppose now that $A_1,\dots,A_r$ are complex $2^n\times 2^n$-matrices giving such a Kraus representation of $E$ and $(X_1,\dots,X_r)$ is an LSRDR of $(A_1,\dots,A_r)$. Then my computer experiments indicate that there will be some constant $\lambda$ for which the operator $\Gamma(A_1,\dots,A_r;X_1,\dots,X_r)$ is similar to a positive semidefinite operator whose eigenvalues are of the form $\lambda\cdot k$ and where the eigenvalue $\lambda\cdot k$ has multiplicity $\binom{n}{k}$, where $\binom{n}{k}$ denotes the binomial coefficient. I have not had a chance to try to mathematically prove this. Hooray. We have interpreted the LSRDR of $E$, and I have plenty of other examples of interpreted LSRDRs.
We also have a similar pattern for the spectrum of $\Phi(X_1,\dots,X_r)$. My computer experiments indicate that there is some constant $\mu$ for which $\Phi(X_1,\dots,X_r)$ has a spectrum consisting of eigenvalues of the form $\mu\cdot k$, where the eigenvalue $\mu\cdot k$ again has multiplicity given by a binomial coefficient.
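As a quick illustrative sketch (not my actual experimental code), here is one way to write down a Kraus representation of this channel $E$ in numpy and check that it is trace preserving; the Pauli-based Kraus operators $P_i/(2\sqrt{n})$ are one convenient choice rather than something forced by the discussion above.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kraus_ops(n):
    """Kraus operators A_{i,P} = P_i / (2 sqrt(n)) acting on qubit i, with P in {I, X, Y, Z}."""
    ops = []
    for i in range(n):
        for P in (I2, X, Y, Z):
            factors = [I2] * n
            factors[i] = P
            ops.append(reduce(np.kron, factors) / (2 * np.sqrt(n)))
    return ops

def apply_channel(ops, rho_in):
    return sum(A @ rho_in @ A.conj().T for A in ops)

n = 3
ops = kraus_ops(n)
dim = 2 ** n

# Trace preservation: sum_k A_k^* A_k should be the identity.
print(np.allclose(sum(A.conj().T @ A for A in ops), np.eye(dim)))

# Applying the channel many times pushes any state toward the completely mixed state.
rho_state = np.zeros((dim, dim), dtype=complex); rho_state[0, 0] = 1.0
for _ in range(20 * n):
    rho_state = apply_channel(ops, rho_state)
print(np.allclose(rho_state, np.eye(dim) / dim, atol=1e-3))
```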
comment by Joseph Van Name (joseph-van-name) · 2024-08-12T19:28:38.631Z · LW(p) · GW(p)
Linear regression is an interesting machine learning algorithm in the sense that it can be studied mathematically, but it is a quite limited machine learning algorithm as most relations are non-linear. In particular, linear regression does not give us any notion of the uncertainty in the output.
One way to extend the notion of the linear regression to encapsulate uncertainty in the outputs is to regress a function not to a linear transformation mapping vectors to vectors, but to regress the function to a transformation that maps vectors to mixed states. And the notion of a quantum channel is an appropriate transformation that maps vectors to mixed states. One can optimize this quantum channel using gradient ascent.
For this post, I will only go through some basic facts about quantum information theory. The reader is referred to the book The Theory of Quantum Information by John Watrous for all the missing details.
Let $V$ be a complex Euclidean space. Let $L(V)$ denote the vector space of linear operators from $V$ to $V$. Given complex Euclidean spaces $V,W$, we say that a linear operator $\Phi$ from $L(V)$ to $L(W)$ is trace preserving if
$\operatorname{tr}(\Phi(X))=\operatorname{tr}(X)$ for all $X\in L(V)$, and we say that $\Phi$ is completely positive if there are linear transformations $A_1,\dots,A_r:V\rightarrow W$ where $\Phi(X)=A_1XA_1^*+\dots+A_rXA_r^*$ for all $X\in L(V)$; the minimum possible value of $r$ here is known as the Choi rank of $\Phi$. A completely positive trace preserving operator is known as a quantum channel.
The collection of quantum channels from $L(V)$ to $L(W)$ is compact and convex.
If $V$ is a complex Euclidean space, then the collection of pure states in $V$ can be defined either as the set of unit vectors in $V$ modulo linear dependence, or as the collection of positive semidefinite rank-$1$ operators on $V$ with trace $1$.
Given complex Euclidean spaces $U,V$, a (multi)set of distinct ordered pairs of unit vectors $S=\{(u_1,v_1),\dots,(u_N,v_N)\}\subseteq U\times V$, and a quantum channel
$\Phi:L(U)\rightarrow L(V)$, we define the fitness level $F(\Phi)=\sum_{k=1}^{N}\log(\langle v_k,\Phi(u_ku_k^*)v_k\rangle)$ and the loss level $-F(\Phi)$.
We may locally optimize $\Phi$ to minimize its loss level using gradient descent, but there is a slight problem. The set of quantum channels from $L(U)$ to $L(V)$ spans a space whose dimension is on the order of $\dim(U)^2\cdot\dim(V)^2$. Due to the large dimension, any locally optimal $\Phi$ will contain many parameters, and this is a large quantity of parameters for what is supposed to be just a glorified version of a linear regression. Fortunately, instead of taking all quantum channels into consideration, we can limit the scope to quantum channels of limited Choi rank.
Empirical Observation: Suppose that $U,V$ are complex Euclidean spaces, $S$ is finite, and $r$ is a positive integer. Then computer experiments indicate that there is typically only one quantum channel $\Phi$ of Choi rank at most $r$ where the loss level of $\Phi$ is locally minimized. More formally, if we run the experiment twice and produce two quantum channels $\Phi_1,\Phi_2$ where the loss level of $\Phi_j$ is locally minimized for $j\in\{1,2\}$, then we would typically have $\Phi_1=\Phi_2$. We therefore say that when the loss level of $\Phi$ is minimized, $\Phi$ is the best Choi rank $r$ quantum channel approximation to the data set $S$.
Suppose now that $S=\{(u_1,v_1),\dots,(u_N,v_N)\}$ is a multiset where $v_k=f(u_k)$ for some function $f$ that we would like to model. Then we would ideally like to approximate the function $f$ better by alternating between the best Choi rank $r$ quantum channel approximation and a non-linear mapping. An ideal choice of a non-linear but partial mapping is the function $\Pi$ that maps a positive semidefinite matrix to its (equivalence class of) unit dominant eigenvector.
Empirical observation: If $\Phi$ is the best Choi rank $r$ quantum channel approximation to $S$, then let $w_k=\Pi(\Phi(u_ku_k^*))$ for all $k$, and define $S'=\{(w_1,v_1),\dots,(w_N,v_N)\}$. Let $\mathcal{N}$ be a small open neighborhood of the identity channel on $L(V)$. Let $\Psi$ be a quantum channel in $\mathcal{N}$ of Choi rank at most $r$ that locally minimizes the loss level on $S'$. Then we typically have $\Psi=\operatorname{id}$. More generally, the best Choi rank $r$ quantum channel approximation to $S'$ is typically the identity function.
From the above observation, we see that the vector $w_k$ is an approximation of $v_k$ that cannot be improved upon. The mapping $u\mapsto\Pi(\Phi(uu^*))$ is therefore a trainable approximation to the mapping $f$, and since the collections of pure states are not even linear spaces (these are complex projective spaces with non-trivial homology groups), the mapping $u\mapsto\Pi(\Phi(uu^*))$ is a non-linear model for the function $f$ from the pure states of $U$ to the pure states of $V$.
I have been investigating machine learning models similar to the one just described for cryptocurrency research and development, as these sorts of machine learning models seem to be useful for evaluating the cryptographic security of some proof-of-work problems and other cryptographic functions like block ciphers and hash functions. I have seen other machine learning models that behave about as mathematically as this one.
I admit that machine learning models like this one are currently far from being as powerful as deep neural networks, but since this model behaves mathematically, it should be considered a safer and more interpretable AI model. The goal is therefore to develop models that are as mathematical as this one but which can perform more and more machine learning tasks.
(Edited 8/14/2024)
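As a rough sketch of the setup above (not my actual experimental code, and using the log-based fitness level as written above), here is how one might evaluate the fitness and loss of a Choi rank $r$ channel on a data set of pairs $(u_k,v_k)$; the Kraus-rescaling trick is just one convenient way to stay on the set of channels.

```python
import numpy as np

def normalize_kraus(Bs):
    """Turn arbitrary matrices B_1,...,B_r into Kraus operators A_k = B_k M^{-1/2}
    so that sum_k A_k^* A_k = I (one convenient parameterization of channels)."""
    M = sum(B.conj().T @ B for B in Bs)
    w, Q = np.linalg.eigh(M)
    M_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.conj().T
    return [B @ M_inv_sqrt for B in Bs]

def fitness(As, pairs):
    """F(Phi) = sum_k log <v_k, Phi(u_k u_k^*) v_k> for the channel Phi with Kraus operators As."""
    total = 0.0
    for u, v in pairs:
        out = sum(A @ np.outer(u, u.conj()) @ A.conj().T for A in As)
        total += np.log(np.real(v.conj() @ out @ v))
    return total

rng = np.random.default_rng(1)
dim_in, dim_out, r, N = 4, 3, 2, 50

def unit(x):
    return x / np.linalg.norm(x)

pairs = [(unit(rng.standard_normal(dim_in) + 1j * rng.standard_normal(dim_in)),
          unit(rng.standard_normal(dim_out) + 1j * rng.standard_normal(dim_out)))
         for _ in range(N)]

Bs = [rng.standard_normal((dim_out, dim_in)) + 1j * rng.standard_normal((dim_out, dim_in))
      for _ in range(r)]
As = normalize_kraus(Bs)
print("fitness:", fitness(As, pairs), " loss:", -fitness(As, pairs))
```

One would then feed the loss into a gradient-based optimizer over the parameters $B_1,\dots,B_r$ to find the best Choi rank $r$ approximation.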
comment by Joseph Van Name (joseph-van-name) · 2024-08-14T10:51:03.238Z · LW(p) · GW(p)
Here is an example of what might happen. Suppose that $u_1,\dots,u_N$ are unit vectors in $U$ and that for each $k$, we select an orthonormal basis $v_{k,1},\dots,v_{k,m}$ of unit vectors for $V$, where $m=\dim(V)$. Let $S=\{(u_k,v_{k,j}):1\le k\le N,\,1\le j\le m\}$. Then for each quantum channel $\Phi$, by the concavity of the logarithm function (which is the arithmetic-geometric mean inequality), we have
$$F(\Phi)=\sum_{k=1}^{N}\sum_{j=1}^{m}\log(\langle v_{k,j},\Phi(u_ku_k^*)v_{k,j}\rangle)\le\sum_{k=1}^{N}m\log\Big(\frac{1}{m}\sum_{j=1}^{m}\langle v_{k,j},\Phi(u_ku_k^*)v_{k,j}\rangle\Big)=\sum_{k=1}^{N}m\log\Big(\frac{\operatorname{tr}(\Phi(u_ku_k^*))}{m}\Big)=-Nm\log(m).$$
Here, equality is reached if and only if $\langle v_{k,j},\Phi(u_ku_k^*)v_{k,j}\rangle=\frac{1}{m}$ for each $k,j$, but this equality can be achieved by the channel
$\Phi:L(U)\rightarrow L(V)$ defined by $\Phi(X)=\frac{\operatorname{tr}(X)}{m}\cdot 1_V$, which is known as the completely depolarizing channel. This is the channel that always takes a quantum state and returns the completely mixed state. On the other hand, the channel $\Phi$ has maximum Choi rank since the Choi representation of $\Phi$ is just the identity matrix divided by $m$. This example is not unexpected since for each input $u_k$ the possible outputs span the entire space evenly, so one does not have any information about the output from any particular input except that we know that the output could be anything. This example shows that the channels that locally minimize the loss function are the channels that give us a sort of linear regression of the data, but where this linear regression takes into consideration uncertainty in the output, so the regression output for a state is a mixed state rather than a pure state.
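A quick numerical check of this example (an illustrative sketch, not my actual experimental code): on a data set where each input is paired with every vector of an orthonormal output basis, the completely depolarizing channel attains the bound $-Nm\log(m)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 4, 3, 6   # dim(U), dim(V), number of inputs

def unit(x):
    return x / np.linalg.norm(x)

def random_onb(dim):
    """A random orthonormal basis from the QR decomposition of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim)))
    return [Q[:, j] for j in range(dim)]

inputs = [unit(rng.standard_normal(n) + 1j * rng.standard_normal(n)) for _ in range(N)]
pairs = [(u, v) for u in inputs for v in random_onb(m)]

def depolarizing(Xmat):
    """Completely depolarizing channel: X -> tr(X) * I / m."""
    return np.trace(Xmat) * np.eye(m) / m

fitness = sum(np.log(np.real(v.conj() @ depolarizing(np.outer(u, u.conj())) @ v))
              for u, v in pairs)
print(fitness, -N * m * np.log(m))
```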
comment by Joseph Van Name (joseph-van-name) · 2024-01-10T18:14:35.928Z · LW(p) · GW(p)
We can use the spectral radius similarity to measure more complicated similarities between data sets.
Suppose that $A_1,\dots,A_r$ are $n\times n$-real matrices and $B_1,\dots,B_r$ are $m\times m$-real matrices. Let $\rho(A)$ denote the spectral radius of $A$ and let $A\otimes B$ denote the tensor product of $A$ with $B$. Define the $L_2$-spectral radius by setting $\rho_2(A_1,\dots,A_r)=\rho(A_1\otimes A_1+\dots+A_r\otimes A_r)^{1/2}$. Define the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ as
$$\frac{\rho(A_1\otimes B_1+\dots+A_r\otimes B_r)}{\rho_2(A_1,\dots,A_r)\cdot\rho_2(B_1,\dots,B_r)}.$$
We observe that if $C$ is invertible and $\lambda$ is a non-zero constant, then the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(\lambda CA_1C^{-1},\dots,\lambda CA_rC^{-1})$ is $1$.
Therefore, the $L_2$-spectral radius similarity is able to detect and measure symmetry that is normally hidden.
Example: Suppose that $a_1,\dots,a_N$ and $b_1,\dots,b_N$ are vectors of possibly different dimensions. Suppose that we would like to determine how close we are to obtaining an affine transformation $T$ with $T(a_i)\approx b_i$ for all $i$ (or a slightly different notion of similarity). We first of all should normalize these vectors to obtain vectors with mean zero and where the covariance matrix is the identity matrix (we may not need to do this depending on our notion of similarity). Then the $L_2$-spectral radius similarity between the tuples of rank-one matrices $(a_1a_1^T,\dots,a_Na_N^T)$ and $(b_1b_1^T,\dots,b_Nb_N^T)$ is a measure of how close we are to obtaining such an affine transformation $T$. We may be able to apply this notion to determining the distance between machine learning models. For example, suppose that $f,g$ are both the first few layers in a (typically different) neural network. Suppose that $x_1,\dots,x_N$ is a set of data points. Then if $A_i=f(x_i)f(x_i)^T$ and $B_i=g(x_i)g(x_i)^T$, then the $L_2$-spectral radius similarity between $(A_1,\dots,A_N)$ and $(B_1,\dots,B_N)$ is a measure of the similarity between $f$ and $g$.
I have actually used this example to see if there is any similarity between two different neural networks trained on the same data set. For my experiment, I chose a random collection of ordered pairs $(x_i,y_i)$, and I trained the two neural networks to minimize the expected losses between their outputs on the $x_i$ and the targets $y_i$. In my experiment, each $x_i$ was a random vector of length 32 whose entries were 0's and 1's. In my experiment, the similarity between the two trained networks was worse than if $f(x_1),\dots,f(x_N),g(x_1),\dots,g(x_N)$ were just random vectors.
This simple experiment suggests that trained neural networks retain too much random or pseudorandom data and are far too messy for anyone to develop a good understanding or interpretation of these networks. In my personal opinion, neural networks should be avoided in favor of other AI systems, but we need to develop these alternative AI systems so that they eventually outperform neural networks. I have personally used the $L_2$-spectral radius similarity to develop such non-messy AI systems, including LSRDRs, but these non-neural, non-messy AI systems currently do not perform as well as neural networks for most tasks. For example, I currently cannot train LSRDR-like structures to do any more NLP than just a word embedding, but I can train LSRDRs to do tasks that I have not seen neural networks perform (such as a tensor dimensionality reduction).
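Here is an illustrative sketch (not my actual experimental code) of the construction in the example above, using the rank-one matrices $f(x_i)f(x_i)^T$ and $g(x_i)g(x_i)^T$ and, for brevity, untrained random-feature maps standing in for the first few layers of two networks.

```python
import numpy as np

def rho(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def rho2(As):
    return np.sqrt(rho(sum(np.kron(A, A.conj()) for A in As)))

def similarity(As, Bs):
    return rho(sum(np.kron(A, B.conj()) for A, B in zip(As, Bs))) / (rho2(As) * rho2(Bs))

def whiten(V):
    """Mean-zero, identity-covariance version of the rows of V."""
    V = V - V.mean(axis=0)
    cov = V.T @ V / len(V)
    w, Q = np.linalg.eigh(cov)
    return V @ Q @ np.diag(w ** -0.5) @ Q.T

rng = np.random.default_rng(3)
N, d_in, d_f, d_g = 100, 16, 6, 6

# Two stand-in "first few layers" (random affine map followed by a ReLU); in practice
# these would be the early layers of the two trained networks.
Wf, bf = rng.standard_normal((d_in, d_f)), rng.standard_normal(d_f)
Wg, bg = rng.standard_normal((d_in, d_g)), rng.standard_normal(d_g)
f = lambda X: np.maximum(X @ Wf + bf, 0)
g = lambda X: np.maximum(X @ Wg + bg, 0)

X = rng.standard_normal((N, d_in))
F, G = whiten(f(X)), whiten(g(X))

As = [np.outer(a, a) for a in F]
Bs = [np.outer(b, b) for b in G]
print("similarity(f, g):", similarity(As, Bs))
print("similarity(f, f):", similarity(As, As))   # should be 1
```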
comment by Joseph Van Name (joseph-van-name) · 2023-12-14T23:23:19.448Z · LW(p) · GW(p)
So in my research into machine learning algorithms that I can use to evaluate small block ciphers for cryptocurrency technologies, I have just stumbled upon a dimensionality reduction for tensors in tensor products of inner product spaces that according to my computer experiments exists, is unique, and which reduces a real tensor to another real tensor even when the underlying field is the field of complex numbers. I would not be too surprised if someone else came up with this tensor dimensionality reduction before since it has a rather simple description and it is in a sense a canonical tensor dimensionality reduction when we consider tensors as homogeneous non-commutative polynomials. But even if this tensor dimensionality reduction is not new, this dimensionality reduction algorithm belongs to a broader class of new algorithms that I have been studying recently such as LSRDRs.
Suppose that $K$ is either the field of real numbers or the field of complex numbers. Let $V_1,\dots,V_s$ be finite dimensional inner product spaces over $K$ with dimensions $n_1,\dots,n_s$ respectively. Suppose that $V_i$ has orthonormal basis $e_{i,1},\dots,e_{i,n_i}$. Given $T\in V_1\otimes\dots\otimes V_s$, we would sometimes want to approximate the tensor $T$ with a tensor that has fewer parameters. Suppose that $(d_1,\dots,d_{s+1})$ is a sequence of natural numbers with $d_1=d_{s+1}$. Suppose that $A_{i,j}$ is a $d_i\times d_{i+1}$-matrix over the field $K$ for $1\le i\le s$ and $1\le j\le n_i$. From the system of matrices $(A_{i,j})_{i,j}$, we obtain a tensor $T((A_{i,j})_{i,j})=\sum_{j_1,\dots,j_s}\operatorname{tr}(A_{1,j_1}\cdots A_{s,j_s})\,e_{1,j_1}\otimes\dots\otimes e_{s,j_s}$. If the system of matrices $(A_{i,j})_{i,j}$ locally minimizes the distance $\|T-T((A_{i,j})_{i,j})\|$, then the tensor $T((A_{i,j})_{i,j})$ is a dimensionality reduction of $T$.
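As an illustrative sketch (not my actual experimental code), here is this construction in numpy together with a local minimization of the distance; scipy is assumed to be available, and the shapes and the use of L-BFGS-B are arbitrary choices.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

ns = (3, 3, 4)          # shape (n_1, ..., n_s) of the tensor
ds = (2, 3, 2)          # type (d_1, ..., d_s); cyclically, d_{s+1} = d_1
s = len(ns)

def shapes():
    return [(ns[i], ds[i], ds[(i + 1) % s]) for i in range(s)]

def unpack(theta):
    """Turn a flat parameter vector into the table of matrices A_{i,j}."""
    tables, pos = [], 0
    for n_i, d_i, d_next in shapes():
        size = n_i * d_i * d_next
        tables.append(theta[pos:pos + size].reshape(n_i, d_i, d_next))
        pos += size
    return tables

def tensor_from_tables(tables):
    """Entries T[j_1,...,j_s] = tr(A_{1,j_1} ... A_{s,j_s})."""
    T = np.zeros(ns)
    for idx in product(*[range(n) for n in ns]):
        M = np.eye(ds[0])
        for i, j in enumerate(idx):
            M = M @ tables[i][j]
        T[idx] = np.trace(M)
    return T

rng = np.random.default_rng(4)
target = rng.standard_normal(ns)

def loss(theta):
    return np.sum((tensor_from_tables(unpack(theta)) - target) ** 2)

theta0 = rng.standard_normal(sum(n * d1 * d2 for n, d1, d2 in shapes()))
res = minimize(loss, theta0, method="L-BFGS-B")
print("squared distance to the reduced-dimension tensor:", res.fun)
```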
Intuition: One can associate the tensor product $V_1\otimes\dots\otimes V_s$ with the set of all degree $s$ homogeneous non-commutative polynomials that consist of linear combinations of the monomials of the form $x_{1,j_1}\cdots x_{s,j_s}$. Given our matrices $(A_{i,j})_{i,j}$, we can define a linear functional $\phi$ on this space by setting $\phi(x_{1,j_1}\cdots x_{s,j_s})=\operatorname{tr}(A_{1,j_1}\cdots A_{s,j_s})$. But by the Riesz representation theorem, the linear functional $\phi$ is dual to some tensor in $V_1\otimes\dots\otimes V_s$. More specifically, $\phi$ is dual to $T((A_{i,j})_{i,j})$. The tensors of the form $T((A_{i,j})_{i,j})$ are therefore exactly the tensors dual to the linear functionals that arise from such systems of matrices.
Advantages:
- In my computer experiments, the reduced dimension tensor is often (but not always) unique in the sense that if we calculate the tensor $T((A_{i,j})_{i,j})$ twice, then we will get the same tensor. At the very least, the distribution of reduced dimension tensors will have low Rényi entropy. I personally consider the partial uniqueness of the reduced dimension tensor to be advantageous over total uniqueness since this partial uniqueness signals whether one should use this tensor dimensionality reduction in the first place. If the reduced tensor is far from being unique, then one should not use this tensor dimensionality reduction algorithm. If the reduced tensor is unique or at least has low Rényi entropy, then this dimensionality reduction works well for the tensor $T$.
- This dimensionality reduction does not depend on the choice of orthonormal bases $e_{i,1},\dots,e_{i,n_i}$. If we choose a different orthonormal basis for each $V_i$, then the resulting tensor of reduced dimensionality will remain the same (the proof is given below).
- If $K$ is the field of complex numbers, but all the entries in the tensor $T$ happen to be real numbers, then all the entries in the reduced dimension tensor $T((A_{i,j})_{i,j})$ will also be real numbers.
- This dimensionality reduction algorithm is intuitive when tensors are considered as homogeneous non-commutative polynomials.
Disadvantages:
- This dimensionality reduction depends on a canonical cyclic ordering of the inner product spaces $V_1,\dots,V_s$.
- Other notions of dimensionality reduction for tensors such as the CP tensor dimensionality reduction and the Tucker decompositions are more well-established, and they are obviously attempted generalizations of the singular value decomposition to higher dimensions, so they may be more intuitive to some.
- The tensors of reduced dimensionality have a more complicated description than the tensors in the CP tensor dimensionality reduction.
Proposition: The set of tensors of the form $T((A_{i,j})_{i,j})$ does not depend on the choice of orthonormal bases $e_{i,1},\dots,e_{i,n_i}$.
Proof: For each $i$, let $f_{i,1},\dots,f_{i,n_i}$ be an alternative orthonormal basis for $V_i$. Then suppose that $e_{i,j}=\sum_k u_{i,j,k}f_{i,k}$ for each $i,j$. Then
$$T((A_{i,j})_{i,j})=\sum_{j_1,\dots,j_s}\operatorname{tr}(A_{1,j_1}\cdots A_{s,j_s})\,e_{1,j_1}\otimes\dots\otimes e_{s,j_s}=\sum_{k_1,\dots,k_s}\operatorname{tr}\Big(\Big(\sum_{j_1}u_{1,j_1,k_1}A_{1,j_1}\Big)\cdots\Big(\sum_{j_s}u_{s,j_s,k_s}A_{s,j_s}\Big)\Big)\,f_{1,k_1}\otimes\dots\otimes f_{s,k_s}=T((B_{i,k})_{i,k}),$$
where $B_{i,k}=\sum_j u_{i,j,k}A_{i,j}$ is again a $d_i\times d_{i+1}$-matrix. Since the change of basis is invertible, every system of matrices for the bases $(f_{i,k})$ arises in this way from a system for the bases $(e_{i,j})$ and vice versa, so the set of tensors of the form $T((A_{i,j})_{i,j})$ is the same for both choices of bases. Q.E.D.
A failed generalization: An astute reader may have observed that if we drop the requirement that $d_1=d_{s+1}$, then the product $A_{1,j_1}\cdots A_{s,j_s}$ is a $d_1\times d_{s+1}$ matrix rather than a square matrix, so we can no longer take its trace; we can still obtain a linear functional by applying some other fixed linear functional to this product. This is indeed a linear functional, and we can try to approximate $T$ using the tensor dual to it, but this approach does not work as well.
comment by Joseph Van Name (joseph-van-name) · 2024-10-28T23:02:24.892Z · LW(p) · GW(p)
In this note, I will continue to demonstrate not only the ways in which LSRDRs ($L_{2,d}$-spectral radius dimensionality reductions) are mathematical but also how one can get the most out of interpreting LSRDRs. LSRDRs are one of the types of machine learning model that I have been working on, and LSRDRs have characteristics which tell us that LSRDRs are often inherently interpretable, which should be good for AI safety.
Suppose that $E$ is the quantum channel that maps a $6$-qubit quantum state to a $6$-qubit quantum state by selecting one of the 6 qubits at random and sending it through the completely depolarizing channel (the completely depolarizing channel takes a state as an input and returns the completely mixed state as an output). Suppose that $A_1,\dots,A_r$ are $64\times 64$ complex matrices where $E$ has the Kraus representation $E(X)=A_1XA_1^*+\dots+A_rXA_r^*$.
The objective is to locally maximize the fitness level $\frac{\rho(x_1A_1+\dots+x_rA_r)}{\|(x_1,\dots,x_r)\|}$, where the norm in question is the Euclidean norm and where $\rho$ denotes the spectral radius. This is a 1-dimensional case of an LSRDR of the channel $E$.
Let $P=x_1A_1+\dots+x_rA_r$ where $(x_1,\dots,x_r)$ is selected to locally maximize the fitness level. Then my empirical calculations show that there is some constant $\lambda$ where $\lambda P$ is positive semidefinite with eigenvalues $0,1,\dots,6$ and where the eigenvalue $k$ has multiplicity $\binom{6}{k}$, which is the binomial coefficient. But these are empirical calculations for select numbers of qubits; I have not been able to mathematically prove that this is always the case for all local maxima for the fitness level (I have not tried to come up with a proof).
Here, we have obtained a complete characterization of $P$ up to unitary equivalence due to the spectral theorem, so we are quite close to completely interpreting the local maximum for our fitness function.
I made a few YouTube videos showcasing the process of maximizing the fitness level here.
Spectra of 1 dimensional LSRDRs of 6 qubit noise channel during training
Spectra of 1 dimensional LSRDRs of 7 qubit noise channel during training
Spectra of 1 dimensional LSRDRs of 8 qubit noise channel during training
I will make another post soon about more LSRDRs, of a higher dimension, of the same channel $E$.
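Here is an illustrative sketch (not my actual experimental code) of this $1$-dimensional LSRDR fitness for the same family of channels; scipy is assumed to be available, and a smaller qubit count is used to keep the run fast.

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kraus_ops(n):
    """Kraus operators P_i / (2 sqrt(n)), P in {I, X, Y, Z}, for the channel that
    depolarizes a uniformly random one of the n qubits."""
    ops = []
    for i in range(n):
        for P in (I2, X, Y, Z):
            factors = [I2] * n
            factors[i] = P
            ops.append(reduce(np.kron, factors) / (2 * np.sqrt(n)))
    return ops

n = 4                      # fewer qubits than in the post, to keep the optimization fast
ops = kraus_ops(n)

def neg_fitness(x):
    """Negative of rho(x_1 A_1 + ... + x_r A_r) / ||x||, to be minimized."""
    M = sum(c * A for c, A in zip(x, ops))
    return -np.max(np.abs(np.linalg.eigvals(M))) / np.linalg.norm(x)

rng = np.random.default_rng(5)
best = min((minimize(neg_fitness, rng.standard_normal(len(ops))) for _ in range(5)),
           key=lambda res: res.fun)
print("locally maximal fitness level:", -best.fun)

P = sum(c * A for c, A in zip(best.x, ops))
print("spectrum of the optimal combination:", np.round(np.sort_complex(np.linalg.eigvals(P)), 3))
```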
comment by Joseph Van Name (joseph-van-name) · 2024-09-18T23:37:05.802Z · LW(p) · GW(p)
I personally like my machine learning algorithms to behave mathematically especially when I give them mathematical data. For example, a fitness function with apparently one local maximum value is a mathematical fitness function. It is even more mathematical if one can prove mathematical theorems about such a fitness function or if one can completely describe the local maxima of such a fitness function. It seems like fitness functions that satisfy these mathematical properties are more interpretable than the fitness functions which do not, so people should investigate such functions for AI safety purposes.
My notion of an LSRDR is a notion that satisfies these mathematical properties. To demonstrate the mathematical behavior of LSRDRs, let's see what happens when we take an LSRDR of the octonions.
Let $K$ denote either the field of real numbers or the field of complex numbers ($K$ could also be the division ring of quaternions, but for simplicity, let's not go there). If $A_1,\dots,A_r$ are $n\times n$-matrices over $K$, then an LSRDR ($L_{2,d}$-spectral radius dimensionality reduction) of $(A_1,\dots,A_r)$ is a collection $(X_1,\dots,X_r)$ of $d\times d$-matrices over $K$ that locally maximizes the fitness level
$$\frac{\rho(A_1\otimes\overline{X_1}+\dots+A_r\otimes\overline{X_r})}{\rho(X_1\otimes\overline{X_1}+\dots+X_r\otimes\overline{X_r})^{1/2}}.$$
Here $\rho$ denotes the spectral radius function, $\otimes$ denotes the tensor product, and $\overline{X}$ denotes the matrix obtained from $X$ by replacing each entry with its complex conjugate. We shall call the maximum fitness level the $L_{2,d}$-spectral radius of $(A_1,\dots,A_r)$ over the field $K$, and we shall write $\rho_{2,d}(A_1,\dots,A_r)$ for this spectral radius.
Define the linear superoperator $\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)$ by setting
$\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)(X)=A_1XB_1^*+\dots+A_rXB_r^*$ and set $\Phi(A_1,\dots,A_r)=\Gamma(A_1,\dots,A_r;A_1,\dots,A_r)$. Then the fitness level of $(X_1,\dots,X_r)$ is $\frac{\rho(\Gamma(A_1,\dots,A_r;X_1,\dots,X_r))}{\rho(\Phi(X_1,\dots,X_r))^{1/2}}$.
Suppose that $V$ is an $8$-dimensional real inner product space. Then the octonionic multiplication operation is the unique up-to-isomorphism bilinear binary operation $*$ on $V$ together with a unit $1$ such that $1*x=x*1=x$ and $\|x*y\|=\|x\|\cdot\|y\|$ for all $x,y\in V$. If we drop the condition that the octonions have a unit, then we do not quite have this uniqueness result.
We say that an octonion-like algebra is an $8$-dimensional real inner product space $V$ together with a bilinear operation $*$ such that $\|x*y\|=\|x\|\cdot\|y\|$ for all $x,y\in V$.
Let $(V,*)$ be a specific octonion-like algebra.
Suppose now that $e_1,\dots,e_8$ is an orthonormal basis for $V$ (this does not need to be the standard basis). Then for each $j$, let $A_j$ be the linear operator from $V$ to $V$ defined by setting $A_j(x)=e_j*x$ for all vectors $x$. All non-zero linear combinations of $A_1,\dots,A_8$ are conformal mappings (this means that they preserve angles). Now that we have turned the octonion-like algebra into matrices, we can take an LSRDR of the octonion-like algebra, but when taking the LSRDR of octonion-like algebras, we should not worry about the choice of orthonormal basis since I could formulate everything in a coordinate-free manner.
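As an illustrative sketch (not my actual experimental code), one can build such an algebra by Cayley-Dickson doubling of the quaternions, form the eight left-multiplication matrices $A_j$, and check that nonzero linear combinations of them are conformal (scalar multiples of orthogonal matrices).

```python
import numpy as np

def quat_mul(a, b):
    """Quaternion product; a, b are length-4 arrays in the basis (1, i, j, k)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def quat_conj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def oct_mul(x, y):
    """Cayley-Dickson doubling: (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c))."""
    a, b = x[:4], x[4:]
    c, d = y[:4], y[4:]
    return np.concatenate([quat_mul(a, c) - quat_mul(quat_conj(d), b),
                           quat_mul(d, a) + quat_mul(b, quat_conj(c))])

# Left-multiplication matrices A_j(x) = e_j * x in the standard basis.
E = np.eye(8)
A = [np.column_stack([oct_mul(E[j], E[k]) for k in range(8)]) for j in range(8)]
print(np.allclose(A[0], np.eye(8)))   # e_1 = (1, 0, ..., 0) acts as the unit

# ||x * y|| = ||x|| ||y||: a nonzero linear combination of the A_j is conformal.
rng = np.random.default_rng(6)
c = rng.standard_normal(8)
M = sum(cj * Aj for cj, Aj in zip(c, A))
print(np.allclose(M.T @ M, np.linalg.norm(c) ** 2 * np.eye(8)))
```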
Empirical Observation from computer calculations: Suppose that $1\le d\le 8$ and $K$ is the field of real numbers. Then the following are equivalent.
- The matrices $X_1,\dots,X_8$ are an LSRDR of $(A_1,\dots,A_8)$ over $K$ for which $\Gamma(A_1,\dots,A_8;X_1,\dots,X_8)$ has a unique real dominant eigenvalue.
- There exist matrices $R,S$ where $X_j=RA_jS$ for all $j$ and where $SR$ is an orthonormal projection matrix.
In this case, the maximum fitness level is the real $L_{2,d}$-spectral radius of $(A_1,\dots,A_8)$, and this fitness level is reached by the matrices in the above equivalent statements. Observe that the superoperator $\Gamma(A_1,\dots,A_8;X_1,\dots,X_8)$ is then similar to a direct sum of $\Phi(X_1,\dots,X_8)$ and a zero matrix. But the projection matrix $SR$ is a dominant eigenvector of $\Phi(A_1,\dots,A_8)$ and of $\Phi(A_1,\dots,A_8)^*$ as well.
I have no mathematical proof of the above fact though.
Now suppose that $K=\mathbb{C}$. Then my computer calculations yield the complex $L_{2,d}$-spectral radii of $(A_1,\dots,A_8)$ for each $d$, and each time that I have trained a complex LSRDR of $(A_1,\dots,A_8)$, I was able to find a fitness level that is not just a local optimum but also a global optimum.
In the case of the real LSRDRs, I have a complete description of the LSRDRs of $(A_1,\dots,A_8)$. This demonstrates that the octonion-like algebras are elegant mathematical structures and that LSRDRs behave mathematically in a manner that is compatible with the structure of the octonion-like algebras.
I have made a few YouTube videos that animate the process of gradient ascent to maximize the fitness level.
Edit: I have made some corrections to this post on 9/22/2024.
Fitness levels of complex LSRDRs of the octonions (youtube.com)
comment by Joseph Van Name (joseph-van-name) · 2024-03-03T14:30:16.055Z · LW(p) · GW(p)
There are some cases where we have a complete description for the local optima for an optimization problem. This is a case of such an optimization problem.
Such optimization problems are useful for AI safety since a loss/fitness function where we have a complete description of all local or global optima is a highly interpretable loss/fitness function, and so one should consider using these loss/fitness functions to construct AI algorithms.
Theorem: Suppose that $A$ is a real, complex, or quaternionic $n\times n$-matrix that minimizes the quantity $\|A\|_2^2+\|A^{-1}\|_2^2$ (where $\|\cdot\|_2$ denotes the Schatten $2$-norm, i.e. the Frobenius norm). Then $A$ is unitary.
Proof: The real case is a special case of the complex case, and by representing each $n\times n$-quaternionic matrix as a complex $2n\times 2n$-matrix, we may assume that $A$ is a complex matrix.
By the Schur decomposition, we know that $A=UTU^*$ where $U$ is a unitary matrix and $T$ is upper triangular. But we know that $\|A\|_2=\|T\|_2$ and $\|A^{-1}\|_2=\|T^{-1}\|_2$. Furthermore, $T^{-1}$ is also upper triangular, so its diagonal entries are the reciprocals of the diagonal entries of $T$. Let $D$ denote the diagonal matrix whose diagonal entries are the same as $T$. Then $\|D\|_2\le\|T\|_2$ and $\|D^{-1}\|_2\le\|T^{-1}\|_2$. Furthermore, $\|D\|_2=\|T\|_2$ iff $T$ is diagonal and $\|D^{-1}\|_2=\|T^{-1}\|_2$ iff $T^{-1}$ is diagonal. Therefore, since $\|A\|_2^2+\|A^{-1}\|_2^2=\|T\|_2^2+\|T^{-1}\|_2^2\ge\|D\|_2^2+\|D^{-1}\|_2^2$ and $\|A\|_2^2+\|A^{-1}\|_2^2$ is minimized, we can conclude that $T=D$, so $T$ is a diagonal matrix. Suppose that $T$ has diagonal entries $\lambda_1,\dots,\lambda_n$. By the arithmetic-geometric mean inequality and the Cauchy-Schwarz inequality, we know that
$$\|T\|_2^2+\|T^{-1}\|_2^2=\sum_{i=1}^{n}|\lambda_i|^2+\sum_{i=1}^{n}|\lambda_i|^{-2}\ge 2\sqrt{\Big(\sum_{i=1}^{n}|\lambda_i|^2\Big)\Big(\sum_{i=1}^{n}|\lambda_i|^{-2}\Big)}\ge 2\sum_{i=1}^{n}|\lambda_i|\cdot|\lambda_i|^{-1}=2n.$$
Here, the equalities hold if and only if $|\lambda_i|=1$ for all $i$, but this implies that $A$ is unitary. Q.E.D.
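As a quick numerical check of this theorem (an illustrative sketch, not a proof; scipy is assumed to be available), one can minimize $\|A\|_2^2+\|A^{-1}\|_2^2$ over complex matrices and inspect the singular values of the minimizer, which should all be close to $1$.

```python
import numpy as np
from scipy.optimize import minimize

n = 4

def objective(theta):
    """||A||_2^2 + ||A^{-1}||_2^2 for the complex matrix A encoded by theta."""
    A = theta[:n*n].reshape(n, n) + 1j * theta[n*n:].reshape(n, n)
    return np.linalg.norm(A, 'fro') ** 2 + np.linalg.norm(np.linalg.inv(A), 'fro') ** 2

rng = np.random.default_rng(7)
res = minimize(objective, rng.standard_normal(2 * n * n), method="L-BFGS-B")
A = res.x[:n*n].reshape(n, n) + 1j * res.x[n*n:].reshape(n, n)

print("minimum value found:", res.fun, "(compare with 2n =", 2 * n, ")")
print("singular values of the minimizer:", np.linalg.svd(A, compute_uv=False))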
comment by Joseph Van Name (joseph-van-name) · 2024-01-08T12:39:54.914Z · LW(p) · GW(p)
The $L_2$-spectral radius similarity is not transitive. Suppose that $A_1,\dots,A_r$ are real $n\times n$-matrices and $B_1,\dots,B_r$ are real $m\times m$-matrices. Then define $\rho_2(A_1,\dots,A_r)=\rho(A_1\otimes A_1+\dots+A_r\otimes A_r)^{1/2}$. Then the generalized Cauchy-Schwarz inequality is satisfied:
$$\rho(A_1\otimes B_1+\dots+A_r\otimes B_r)\le\rho_2(A_1,\dots,A_r)\cdot\rho_2(B_1,\dots,B_r).$$
We therefore define the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ as $\frac{\rho(A_1\otimes B_1+\dots+A_r\otimes B_r)}{\rho_2(A_1,\dots,A_r)\cdot\rho_2(B_1,\dots,B_r)}$. One should think of the $L_2$-spectral radius similarity as a generalization of the cosine similarity $\frac{|\langle u,v\rangle|}{\|u\|\cdot\|v\|}$ between vectors $u,v$. I have been using the $L_2$-spectral radius similarity to develop AI systems that seem to be very interpretable. The $L_2$-spectral radius similarity is not transitive.
For example, if $\rho_2(A_1,\dots,A_r)=\rho_2(B_1,\dots,B_r)$, then the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(A_1\oplus B_1,\dots,A_r\oplus B_r)$ is $1$ and
the $L_2$-spectral radius similarity between $(B_1,\dots,B_r)$ and $(A_1\oplus B_1,\dots,A_r\oplus B_r)$ is $1$, but the $L_2$-spectral radius similarity between $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ can take any value in the interval $[0,1]$.
We should therefore think of the $L_2$-spectral radius similarity as a sort of least upper bound of $[0,1]$-valued equivalence relations rather than as a $[0,1]$-valued equivalence relation itself. We need to consider this as a least upper bound because matrices have multiple dimensions.
Notation: $\rho(A)$ is the spectral radius of the matrix $A$, namely the largest magnitude of an eigenvalue of $A$; equivalently, $\rho(A)=\lim_{k\rightarrow\infty}\|A^k\|^{1/k}$, and here the choice of norm does not matter because we are taking the limit. $A\oplus B$ is the direct sum of matrices while $A\otimes B$ denotes the Kronecker product of matrices.
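Here is an illustrative sketch (not my actual experimental code) of this failure of transitivity using direct sums: once their $L_2$-spectral radii are matched, $(A_1,\dots,A_r)$ and $(B_1,\dots,B_r)$ are each perfectly similar to $(A_1\oplus B_1,\dots,A_r\oplus B_r)$, while they are far from similar to each other.

```python
import numpy as np
from scipy.linalg import block_diag

def rho(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def rho2(As):
    return np.sqrt(rho(sum(np.kron(A, A.conj()) for A in As)))

def similarity(As, Bs):
    return rho(sum(np.kron(A, B.conj()) for A, B in zip(As, Bs))) / (rho2(As) * rho2(Bs))

rng = np.random.default_rng(8)
n, r = 6, 3
As = [rng.standard_normal((n, n)) for _ in range(r)]
Bs = [rng.standard_normal((n, n)) for _ in range(r)]
scale = rho2(As) / rho2(Bs)
Bs = [scale * B for B in Bs]                     # match the L_2-spectral radii of the two tuples
Cs = [block_diag(A, B) for A, B in zip(As, Bs)]  # C_i = A_i (+) B_i

print("similarity(A, A+B):", similarity(As, Cs))  # 1 up to numerical error
print("similarity(B, A+B):", similarity(Bs, Cs))  # 1 up to numerical error
print("similarity(A, B):  ", similarity(As, Bs))  # generally well below 1
```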
comment by Joseph Van Name (joseph-van-name) · 2023-12-18T23:34:06.324Z · LW(p) · GW(p)
Let's compute some inner products and gradients.
Set up: Let $K$ denote either the field of real numbers or the field of complex numbers. Suppose that $n_1,\dots,n_s$ are positive integers. Let $(d_1,\dots,d_{s+1})$ be a sequence of positive integers with $d_1=d_{s+1}$. Suppose that $A_{i,j}$ is a $d_i\times d_{i+1}$-matrix whenever $1\le i\le s$ and $1\le j\le n_i$. Then from the matrices $(A_{i,j})_{i,j}$, we can define an $n_1\times\dots\times n_s$-tensor $T((A_{i,j})_{i,j})$ with entries $T((A_{i,j})_{i,j})_{j_1,\dots,j_s}=\operatorname{tr}(A_{1,j_1}\cdots A_{s,j_s})$. I have been doing computer experiments where I use this tensor to approximate other tensors by minimizing the $L_2$-distance. I have not seen this tensor approximation algorithm elsewhere, but perhaps someone else has produced this tensor approximation construction before. In previous shortform posts on this site, I have given evidence that the tensor dimensionality reduction behaves well, and in this post, we will focus on ways to compute with the tensors $T((A_{i,j})_{i,j})$, namely the inner product of such tensors and the gradient of the inner product with respect to the matrices $A_{i,j}$.
Notation: If $A_1,\dots,A_r,B_1,\dots,B_r$ are matrices, then let $\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)$ denote the superoperator defined by letting $\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)(X)=A_1XB_1^*+\dots+A_rXB_r^*$. Let $\Phi(A_1,\dots,A_r)=\Gamma(A_1,\dots,A_r;A_1,\dots,A_r)$.
Inner product: Here is the computation of the inner product of our tensors. If $(A_{i,j})_{i,j}$ and $(B_{i,j})_{i,j}$ are two such systems of matrices, then
$$\langle T((A_{i,j})_{i,j}),T((B_{i,j})_{i,j})\rangle=\operatorname{tr}\big(\Gamma(A_{1,1},\dots,A_{1,n_1};B_{1,1},\dots,B_{1,n_1})\circ\dots\circ\Gamma(A_{s,1},\dots,A_{s,n_s};B_{s,1},\dots,B_{s,n_s})\big).$$
In particular, $\|T((A_{i,j})_{i,j})\|^2=\operatorname{tr}\big(\Phi(A_{1,1},\dots,A_{1,n_1})\circ\dots\circ\Phi(A_{s,1},\dots,A_{s,n_s})\big)$.
Gradient: Observe that $\operatorname{tr}(XY)=\operatorname{tr}(YX)$. We will see shortly that the cyclicity of the trace is useful for calculating the gradient. And here is my manual calculation of the gradient of the inner product of our tensors: for all $i,j$ and all matrix entries $(a,b)$,
$$\frac{\partial}{\partial (A_{i,j})_{a,b}}\langle T((A_{i,j})_{i,j}),T((B_{i,j})_{i,j})\rangle=\sum_{\substack{j_1,\dots,j_s\\ j_i=j}}\big(A_{i+1,j_{i+1}}\cdots A_{s,j_s}A_{1,j_1}\cdots A_{i-1,j_{i-1}}\big)_{b,a}\cdot\overline{\operatorname{tr}(B_{1,j_1}\cdots B_{s,j_s})}$$
(indices are taken cyclically; here we use the cyclicity of the trace to rotate the factor $A_{i,j_i}$ to the front before differentiating, and in the complex case this is the derivative with respect to the holomorphic coordinates).
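Here is a quick numerical check of the inner product formula above (an illustrative sketch, not my actual experimental code): the entrywise inner product of the two tensors agrees with the trace of the product of the matrices $\sum_j A_{i,j}\otimes\overline{B_{i,j}}$, which represent the superoperators $\Gamma$.

```python
import numpy as np
from itertools import product
from functools import reduce

rng = np.random.default_rng(9)
ns = (2, 3, 2)           # tensor shape (n_1, ..., n_s)
ds = (2, 3, 4)           # type (d_1, ..., d_s), used cyclically
s = len(ns)

def random_table():
    return [rng.standard_normal((ns[i], ds[i], ds[(i + 1) % s]))
            + 1j * rng.standard_normal((ns[i], ds[i], ds[(i + 1) % s])) for i in range(s)]

def tensor_from_tables(tables):
    """Entries T[j_1,...,j_s] = tr(A_{1,j_1} ... A_{s,j_s})."""
    T = np.zeros(ns, dtype=complex)
    for idx in product(*[range(n) for n in ns]):
        T[idx] = np.trace(reduce(np.matmul, [tables[i][j] for i, j in enumerate(idx)]))
    return T

A, B = random_table(), random_table()

lhs = np.sum(tensor_from_tables(A) * tensor_from_tables(B).conj())
Ms = [sum(np.kron(A[i][j], B[i][j].conj()) for j in range(ns[i])) for i in range(s)]
rhs = np.trace(reduce(np.matmul, Ms))
print(np.allclose(lhs, rhs))
```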
comment by Joseph Van Name (joseph-van-name) · 2023-12-17T21:45:12.197Z · LW(p) · GW(p)
So in my research into machine learning algorithms, I have stumbled upon a dimensionality reduction algorithm for tensors, and my computer experiments have so far yielded interesting results. I am not sure that this dimensionality reduction is new, but I plan on generalizing this dimensionality reduction to more complicated constructions that I am pretty sure are new and am confident would work well.
Suppose that $K$ is either the field of real numbers or the field of complex numbers. Suppose that $n_1,\dots,n_s$ are positive integers and $(d_1,\dots,d_{s+1})$ is a sequence of positive integers with $d_1=d_{s+1}$. Suppose that $A_{i,j}$ is a $d_i\times d_{i+1}$-matrix whenever $1\le i\le s$ and $1\le j\le n_i$. Then define a tensor $T((A_{i,j})_{i,j})$ of shape $n_1\times\dots\times n_s$ by setting $T((A_{i,j})_{i,j})_{j_1,\dots,j_s}=\operatorname{tr}(A_{1,j_1}\cdots A_{s,j_s})$.
If $T$ is a tensor of shape $n_1\times\dots\times n_s$, and $(A_{i,j})_{i,j}$ is a system of matrices that locally minimizes the value $\|T-T((A_{i,j})_{i,j})\|$, then $T((A_{i,j})_{i,j})$ is a dimensionality reduction of $T$, which we shall call a matrix table to tensor dimensionality reduction of $T$ of type $(d_1,\dots,d_{s+1})$.
Observation 1: (Sparsity) If $T$ is sparse in the sense that most entries in the tensor $T$ are zero, then the tensor $T((A_{i,j})_{i,j})$ will tend to have plenty of zero entries, but as expected, $T((A_{i,j})_{i,j})$ will be less sparse than $T$.
Observation 2: (Repeated entries) If $T$ is sparse and the set of non-zero entries of $T$ has small cardinality, then the tensor $T((A_{i,j})_{i,j})$ will contain plenty of repeated non-zero entries.
Observation 3: (Tensor decomposition) Let $T$ be a tensor. Then we can often find a matrix table to tensor dimensionality reduction of $T$ of some type so that the reduced tensor is its own matrix table to tensor dimensionality reduction.
Observation 4: (Rational reduction) Suppose that $T$ is sparse and the entries in $T$ are all integers. Then the squared distance $\|T-T((A_{i,j})_{i,j})\|^2$ is often a positive integer, both in the case when $T((A_{i,j})_{i,j})$ has only integer entries and in the case when $T((A_{i,j})_{i,j})$ has non-integer entries.
Observation 5: (Multiple lines) Let $q$ be a fixed positive even number. Suppose that $T$ is sparse and the entries in $T$ are all of the form $k\cdot\zeta$ for some integer $k$ and $q$-th root of unity $\zeta$. Then the entries in $T((A_{i,j})_{i,j})$ are often exclusively of this form as well.
Observation 6: (Rational reductions) I have observed a sparse tensor $T$ all of whose entries are integers along with matrix table to tensor dimensionality reductions of $T$ whose entries were all rational numbers.
This is not an exhaustive list of all the observations that I have made about the matrix table to tensor dimensionality reduction.
From these observations, one should conclude that the matrix table to tensor dimensionality reduction is a well-behaved machine learning algorithm. I hope and expect this machine learning algorithm and many similar ones to be used to both interpret the AI models that we have and will have and also to construct more interpretable and safer AI models in the future.
comment by Joseph Van Name (joseph-van-name) · 2023-10-29T23:33:58.309Z · LW(p) · GW(p)
Suppose that are natural numbers. Let . Let be a complex number whenever . Let be the fitness function defined by letting . Here, denotes the spectral radius of a matrix while denotes the Schatten -norm of .
Now suppose that is a tuple that maximizes . Let be the fitness function defined by letting . Then suppose that is a tuple that maximizes . Then we will likely be able to find an and a non-zero complex number where .
In this case, the complex numbers represent the training data while the matrices are our learned machine learning model. In this case, we are able to recover some original data values from the learned machine learning model without any distortion to the data values.
I have just made this observation, so I am still exploring its implications. But this is an example of how mathematical spectral machine learning algorithms can behave, and more mathematical machine learning models are more likely to be interpretable and to have a robust mathematical/empirical theory behind them.
comment by Joseph Van Name (joseph-van-name) · 2023-10-30T23:34:09.771Z · LW(p) · GW(p)
I think that all that happened here was that the matrices just ended up being diagonal matrices. This means that this is probably an uninteresting observation in this case, but I need to do more tests before commenting any further.