Posts

SONN : What's Next ? 2022-01-09T08:15:32.386Z
Self-Organised Neural Networks: A simple, natural and efficient way to intelligence 2022-01-01T23:24:56.856Z

Comments

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-18T20:23:06.348Z · LW · GW

Welcome aboard this IT ship, to boldly go where no one has gone before!

Indeed, I just wrote 'when it spikes' and, further on, 'the low threshold', and no more. I work in complete isolation, and some things are so obvious inside my brain that I do not realise they are not obvious to others.

It is part of the 'when' aspect of learning, but it uses an internal state of the neuron instead of external information from the quantilisers.

If there is little reaction to a sample in a neuron (spiking happens slowly, or not at all), the sample is meaningless for it and you should ignore it. If it comes too fast, it is already 'in' the system and there is no point in adding to it. You are right to say the first rule is more important than the second.

Originally, there was only one threshold instead of 3. When learning, the update would only take place if the threshold was reached after a minimum of two cycles (or 3, but then it converges unbearably slowly), and only for the connections that had been active at least twice. I 'compacted' it for use within one cycle (to make it look simpler), so the minimum became 50% of the threshold. I then adjusted (might as well) that value by scanning around, and added the upper threshold, more to limit the number of updates than to improve the accuracy (although it contributes a small bit). The best result is with 30% and 120%, whatever the size or the other parameters.
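To make that gate concrete, here is a stripped-down sketch in C (not the actual code; the names are mine, only the 30%/120% figures come from the explanation above): a neuron's weights are only candidates for an update when its accumulated potential for the sample lies between those two fractions of the spiking threshold.

    /* Sketch of the update gate described above (hypothetical names). */
    #include <stdio.h>

    static int should_update(int potential, int threshold)
    {
        if (potential < 0.3 * threshold) return 0;   /* too little reaction: ignore  */
        if (potential > 1.2 * threshold) return 0;   /* already 'in': nothing to add */
        return 1;
    }

    int main(void)
    {
        int threshold = 100000;                      /* arbitrary example value */
        int samples[3] = { 20000, 60000, 130000 };
        for (int i = 0; i < 3; i++)
            printf("potential %6d -> update? %d\n", samples[i],
                   should_update(samples[i], threshold));
        return 0;
    }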

Before writing this, I quickly checked on PI-F-MNIST. It is still ongoing, but it seems to hold true even on that dataset (BTW: use quantUpP = 3.4 and quantUpN = 40.8 to get to 90.2% with 792 neurons and 90.5% with 7920).

As you seem interested, feel free to contact me through private message. There is plenty more in my bag than can fit in a post or comment. I can provide you with more complete code (this one is triple-distilled).

Thank you very much for your interest.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-16T11:19:28.940Z · LW · GW

I am going to answer this comment because it is the first to address the analysis section. Thank you.

I close the paragraph saying that there are no functions anywhere and that it will aggrieve some. The shift I am trying to suggest is for those who want to analyse the system using mathematics, and could be dismayed by the absence of functions to work with.

Distributions can be a place to start. The quantilisers are a place to restart mathematical analysis. I gave some links to an existing field of mathematical research that is working along those lines.

Check this out: they are looking for a multi-dimensional extension to the concept. Here it is, I suggest.

Comment by D𝜋 on SONN : What's Next ? · 2022-01-09T15:11:34.815Z · LW · GW

This introduces a new paradigm. Read T.Kuhn. You cannot compare different paradigms.

Everything that matters is in the post. Read it; really.

What is needed next is engineering, ingenuity and suitable ICs, not maths. The IT revolution came from IT (coders) and ICs, not CS.

As for your recommendation, I have tried so many things over the past four years… I posted here first to get to the source of one of the pieces of evidence; to no avail.

Goodbye, everyone.

I am available through private messages

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-07T15:43:52.824Z · LW · GW

BP is Back-Propagation.

We are completely missing the plot here. 

I had to use a dataset for my explorations and MNIST was simple; I used PI-MNIST to show an 'impressive' result so that people would have to look at it. I expected the 'PI' to be understood, but it is not. Note that I could readily answer the 'F-MNIST challenge'.

If I had just expressed an opinion on how to go about AI, the way I did in the roadmap, it would have been, rightly, ignored. The point was to show it is not 'ridiculous' and that the system fits with that roadmap.

I see that your last post is about complexity science. This is an example of it. The domain of application is nature. Nature is complex, and mathematics has difficulties with complexity. The field of chaos theory petered out in the 80s for that reason. If you want to know more about it, start with Turing's work on morphogenesis (read the conclusion), then Prigogine. In NN, there is Kohonen.

Some things are theoretically correct, but practically useless. Everybody knows how to win the lottery, but nobody does it. Better something simple that works and can be reasoned about, even without a mathematical theory. AI is not quantum physics.

Maybe it could be said that intelligence is cutting through all the details and then reasoning with what is left, but the devil is in those details.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-07T07:28:16.947Z · LW · GW

Also,

No regularisation. I wrote about that in the analysis.

Without max-norm (or maxout, ladder, VAT: all forms of regularisation), BP/SGD only achieves 98.75% (from the 2014 dropout paper).

Regularisation must come from outside the system (SO can be seen that way) or through local interactions (neighbours). Many papers clearly suggest that this should improve the result.

That is still to be done.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-06T13:58:35.939Z · LW · GW

... and it is in this description:

"The spiking network can adjust the weights of the active connections"

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-06T13:54:58.161Z · LW · GW

It is not a toolbox you will be using tomorrow.

I applied it to F-MNIST, in a couple of hours after being challenged, to show that it is not just MNIST. I will not do it again; that is not the point.

It is a completely different approach to AGI, one that sounds so ridiculous that I had to demonstrate that it is not, by getting near SOTA on one widely used dataset (hence PI-MNIST) and finding relevant mathematical evidence.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-06T13:42:56.129Z · LW · GW

I am going after pure BP/SGD, so neural networks (no SVM), no convolution,...

No pre-processing either. That is changing the dataset.

It is just a POC, to make a point: you do not need mathematics for AGI. Our brain does not.

I will publish a follow-up post soon.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-06T08:09:42.664Z · LW · GW

I doubt that this would be the best an MLP can achieve on F-MNIST.

I will put it this way: SONNs and MLPs do the same thing, in a different way. Therefore they should achieve the same accuracy. If this SONN can get near 90%, so should MLPs. 

It is likely that nobody has bothered to try 'without convolutions' because it is so old-fashioned.

Convolutions are for repeated locally aggregated correlations.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-05T12:20:43.652Z · LW · GW

Spot on.

I hope your explanation will be better understood than mine. Thank you.

It 'so happens' that MNIST (but not PI) can also be used for basic geometry. That is why I selected it for my exploration (easy switch between the two modes).

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-05T10:50:17.150Z · LW · GW

no convolution.

You are comparing apples and oranges.

I have shared the base because it has real scientific (and philosophical) value.

Geometry and the rest are separate, and of lesser scientific value; they are more technology.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-05T09:51:03.425Z · LW · GW

That is correct.

I am referring to that paper as a vindication of the concept, but I do not use the quantiliser algorithm provided.

The one I use, I devised on my own, a long time ago, with the thought experiment described; it has since been mathematically studied. Actually, when I first searched for it and found it, it was in a much simpler version, but I cannot find that one again now...

I have not gone through every detail of lsusr's rewrite yet, just the main corrections to the description of the mechanism. I had to do F-MNIST first.

Side note: The thought experiment describes a mechanical system. Why should it be called an algorithm when implemented in code? Because that makes it un-patentable? I am not sure a super-intelligent AI could understand human politics.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-05T09:32:38.788Z · LW · GW

Please do, and thank you for trying.

That is exactly what I am trying to elicit.

If you have any question, I am available to help (through private messages).

I do not know Python (I am extremely comfortable with C, I get full speed with it, and I do not have the time or the need), but it seems the ML community uses it.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-05T09:19:02.298Z · LW · GW

Update: 3 runs (2 random), 10 million steps. All three over 88.33% (average over steps 9.5-10.5 million on the 3 runs: 88.43%). New SOTA? Please check and update.

Update 2: 89.85% at step 50 million with quantUpP = 3.2 and quantUpN = 39. It does perform very well. I will leave it at that. As said in my post, those are the two important parameters (no, it is not a universal super-intelligence in 600 lines of code). Be rational, and think about what the fact that this mechanism works so well means (I am talking to everybody there).

I looked at it, in an informed way.

It gets over 88% with very limited effort.

As I pointed out, the two datasets are similar in their technical description, but the data is 'reversed'.

MNIST is black dots on white background. F-MNIST is white dots on black background. The histograms are very different.

I tried to make it work despite that, just with parameter changes, and it does.

Here are the changes to the code:

on line 555: quantUpP = 1.9 ;

on line 556: quantUpN = 24.7 ;

with rand(1000), as it is in the code, you already clear 86% at step 300,000 and 87% at step 600,000 and 88% at 3 Million.

I had made another small and irrelevant change in my full tests, so I am running the full tests again without it (the values/steps above are from that new series). It seems to be even better without it... maybe a new SOTA (update: touched 88.33% at step 4,800,000, and 88.5% at 6.8 million!). MLPs perform poorly when applied to data even slightly more complicated than MNIST.

I do not understand all the hype around MNIST. Once again, this is PI-MNIST and that makes it very different (to put it simply: no geometry, so no convolution).

I would like anybody to give me a reference to some 'other method that worked on MNIST but did not make it further', that uses PI-MNIST and gets more than 98.4% on it.

And if anybody tries it on yet another dataset, could they please notify me so I can look at it before they make potentially damaging statements.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-04T17:06:44.722Z · LW · GW

See my comment on reversing the shades on F-MNIST. I will check it later, but I see it gets up to 48% in the 'wrong' order, and that is surprisingly good. I worked on CIFAR, but that is another story. As-is, it gives bad results and you have to add other 'things'.

As you guessed, I belong to the neuro-inspired branch, and most of my 'giants' belong there. When I started my investigations, I strongly expected to use some of the works that I knew and appreciated along the lines you are mentioning, and I investigated some of them early on.

To my surprise, I did not need them to get to this result, so they are absent.

The two-neuronal-layer form of the neocortex is where they will be useful. This is only one layer.

Another (bad) reason is that they add to the GPU hell that has limited my investigations. It is an identified source of potential improvements.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-04T16:09:55.495Z · LW · GW

That actually brings us to the core of it.

The way I phrased that was, deliberately, ambiguous.

Since 1958, the question the field has been trying to answer is how to transfer the information we get when a sample is presented to the weights, so that next time it will perform better.

BP computes the difference between what would be expected and what is measured, and then propagates it to all intermediary weights according to a set of mathematically derived rules (the generalised delta rule). A lot of work has gone into figuring out the best way to do that. This is what I called 'how' to learn.

In this system, the method used is just the simplest and most intuitive one possible: INC and DEC of the weights depending on whether or not it is the correct answer.

The quantiliser, then, tells the system to only apply that simple rule under certain conditions (the Δ⊤ and Δ⊥ limits). That is 'when'.

You can use the delta rule instead of our basic update rule if you want (I tried). The result is not better and it is less stable, so you have to use small gradients. The problem, as I see it, is that the conditions under which Jessica Taylor's theorem is valid are no longer met and you have to 'fix' that. I did not investigate that extensively.
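For clarity, here is what that 'simplest possible' update looks like as a C fragment (a sketch with names of my choosing, not the published code): once the quantiliser has decided the sample is worth learning from, the active connections of the relevant group are simply incremented or decremented by a fixed amount.

    /* Sketch of the INC/DEC rule described above (hypothetical names). */
    static void update_active_weights(int *weights, const int *active, int n,
                                      int is_correct_group, int step)
    {
        int delta = is_correct_group ? step : -step;   /* INC for IS, DEC for ISNOT   */
        for (int i = 0; i < n; i++)
            if (active[i])                             /* only the active connections */
                weights[i] += delta;
    }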
 

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-04T15:40:29.492Z · LW · GW

'Those seen before' are values of Δ⊥i across all samples seen before, not within a sample.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-04T15:31:17.315Z · LW · GW

It is not, per se, Hebb's rule. Hebb's rule is very general. I personally see this as belonging to it, that's all. I give attributions where I think they are deserved.

Comment by D𝜋 on D𝜋's Spiking Network · 2022-01-04T15:26:41.862Z · LW · GW

I just discovered the 'pingback' feature on LessWrong...

I gave your description a first read. Most of it is correct. I will check it in more detail.

I used the terms 'total' and 'groups' to make things simpler, but yours are better.

Four corrections:

1.

The potential of a neuron can be negative. It is the pure sum of all weights, positive and negative. There is no 'negative spiking' (it is one of the huge number of things I tried that did not bring any benefit). I think I remember trying to set a bottom limit at 0 (no negative potential) and that, as always, it did not make any real difference...

2.

'Our system thus has four receptors per pixel. Exactly one receptor activates per pixel per image' is incorrect.

The MNIST pixels are grey shades 0-255.

It is reduced down to 4: 0, 1-63, 64-127, 128-191, >=192 (only keeping the two top bits). That is enough for MNIST. Many papers have noted that the depth can be reduced, and it is true.

Images are presented over 4 'cycles', filtered by those 4 limits. In the first cycle, only pixels with a value over 191 are presented; in the second, those over 127; in the third, over 63; and in the last, over 0 (not null).

In the code, the array 'wd' contains the 4 limits, and at each cycle the pixel values are tested for being greater than or equal to those limits.

Connections are established with matrix pixels. Over the 4 cycles, they are presented with 4 successive 0s or 1s.

If a connection is on a pixel that shows value 112, on the first cycle it will not be active (<192), on the second it will not be active (<128), but on the third and fourth it will be (>63 and >0).

That is what allows the 'model averaging across cycles'.
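As a small illustration, here is the cycle test in isolation (a sketch: the wd values follow the description above, and the real code is organised differently):

    /* Sketch of the 4-cycle presentation of one pixel. */
    #include <stdio.h>

    int main(void)
    {
        int wd[4] = { 192, 128, 64, 1 };   /* cycle limits, brightest first */
        int pixel = 112;                   /* example grey value            */

        for (int cycle = 0; cycle < 4; cycle++) {
            int active = (pixel >= wd[cycle]);   /* tested >= the limit */
            printf("cycle %d: limit %3d -> %s\n", cycle, wd[cycle],
                   active ? "active" : "inactive");
        }
        return 0;
    }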

From there, you can understand a first, fatal, reason why F-MNIST cannot be processed as-is: the grey shades are reversed. In MNIST, the background is white, in F-MNIST, it is black. So the cycle limits would have to be reversed.

I will have a look at it.

3.

The ⊥ computation includes the ⊤ column.

Note that the 'highest' ⊥ selection can be easily implemented using population coding with inhibition of the ⊤ column.

I do not know if the options I used are only valid for this dataset or if they have broader validity across datasets, as I have only used that one. Maybe they do, and you won't have to figure it out each time.

4.

When a new connection is established, the initial weight is always the same. It is given as a fraction of the threshold through the variable 'divNew', which is the divider. You can use random values instead; it does not make a difference. You can change it to another value. But the divider has to be small enough that the number of connections of a neuron, multiplied by the number of cycles and by the initial weight, can exceed the threshold, or the system will never 'boot', as no neuron would ever spike. So I use 1/10 of the threshold with 10 connections and 4 cycles, and it is fine.
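Here is that 'boot' condition written out (a sketch; the variable names mirror the description, and the values are just the example ones above):

    /* The summed initial weights over all connections and cycles must be
       able to exceed the threshold, or no neuron ever spikes. */
    #include <stdio.h>

    int main(void)
    {
        int threshold   = 100000;              /* arbitrary example value        */
        int divNew      = 10;                  /* initial weight = threshold/10  */
        int connections = 10;                  /* connections per neuron         */
        int cycles      = 4;                   /* presentation cycles per sample */

        int  init_weight = threshold / divNew;
        long max_drive   = (long)connections * cycles * init_weight;

        printf("max drive %ld vs threshold %d -> %s\n", max_drive, threshold,
               max_drive > threshold ? "can boot" : "will never spike");
        return 0;
    }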

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-04T11:38:29.252Z · LW · GW

Thank you for the congrats, it helps.

Note that I only claim to reach SOTA, not to beat it.

It would be preposterous to expect to convince anybody with this limited evidence. The goal is to raise interest so that some will spend time looking deeper into it. Most will not, of course, for many reasons, and yours is a valid one.

The advantage of this one is its simplicity. At this point any coder can take it up and build on it. This has to be turned into a new type of construction set. I would like it to give the 15-year-olds of today the pleasure my first computer (machine language) gave me, and Legos before that.

You got the last bit correctly. That is what self-organisation provides: ad-hoc selection.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-04T09:34:47.340Z · LW · GW

See my answer to mlem_mlem_mlem for the second part of your comment.

You are bringing another interesting point: scaling up and tuning.

As I indicated in the roadmap, nature has chosen the way of width over that of depth.

The cortical sheet is described as a 6-layer structure, but only 3 of the layers are neuronal, and 2 of those are pyramidal. That is not deep. Then we see columns, functional 'zones', 'regions'... There is an organisation, but it is not very deep. The number of columns in each 'zone' is very large. Also note that the neuron is deemed 'stochastic', so precision is not possible. Lastly, note (sad but true) that those who got the prize worked on technical simplification for practical use.

There are two options at this stage:

We consider, as the symbolic school has since 1969, that the underlying substrate is unimportant and, if we can find mathematical ways to describe it, we will be able to reproduce it, or...

We consider that nature has done the work (we are here to attest to that), properly, and we should look at how it did it.

1986 was an acceptable compromise, for a time. 2026 will mark one century since the 5th Solvay conference.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-04T08:54:18.368Z · LW · GW

You are comparing step and ladder (I had to seize on it!).

If you look at Table 2 in your last reference, you will see that they carefully show results improving as steps are added. Ladder is just another step (an optimisation one). There is a reason why researchers use PI-MNIST: it is to reduce the size of the ladder to make comparisons clearer.

What I am trying to bring here is a new first step.

I could have tried a 784-25-10 BP/SGD network (784*25 = 19600 parameters) to compare with this system at 196 neurons and 10 connections. I have managed to get 98% with that. How much would BP/SGD get with the same?

The current paradigm has been building up since 1986, and was itself based on the perceptron from 1958.

Here, I take the simplest form of the perceptron (single layer), only adjoin a very basic quantiliser to drive it, and already get near SOTA. I also point out that this quantiliser is just another form of neuron.

I am trying to show it might be an interesting step to take.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T17:57:32.648Z · LW · GW

I wrote a comment on that, but this is a better place for it.

I changed the update value from 1000 to 500 for that network size (in the code).

1000 is for the large network (98.9%). At size 792 (for 98.6%) it is too much, and the accuracy goes down after reaching the top. I did not take the time to check properly before publishing. My fault.

If you check it out now, it will get to >98.6% and stay there (tested up to 10 million steps, three times, with random seeds).

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T15:08:07.238Z · LW · GW

Update:

I changed the adjustment values for the 98.65% version to 500/-500 (following lsusr's comments).

1000/-1000 is good for the larger network (98.9%), but too much for the smaller ones. It makes the accuracy decrease after it has reached its peak value.

I published too fast and did not go through all the required verifications. My fault.

I am running a series of tests to confirm. The first two are in spec and stable at 10 million steps.

Larger values speed up the convergence, and I was trying to make it as fast as possible so as not to waste the time of those who would spend it verifying. Sorry about that.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T14:53:25.591Z · LW · GW

They belong to the same, forgotten, family.

T.Kuhn said it all.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T14:14:16.149Z · LW · GW

No, there isn't, but it is interesting.

I gave it a quick look. It seems to be closer to this (which is closer to the point).

I was heavily influenced, back in the 70s, by the works of Mandelbrot and by the chaos theory that developed at the time, and that has since gone nowhere.

The concept of self-organisation has been around for a long time but it is hard to study from the mathematical point of view, and, probably for that reason, it has never 'picked up'.

So, of course, there are similarities, and, please, go back to all of those old papers and re-think it all. 

You will benefit from a hands-on approach rather than a theoretical one. First you experiment, then you find, then you analyse and, finally, you theorise. This is not quantum physics, and we have the tools (computers) to easily conduct experiments.

This is just another example, one that could prove very useful. That's it.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T11:30:43.984Z · LW · GW

The only function that is implemented in CUDA is the test one (test_gpu).

It is also implemented for CPU (test_mt_one), identically.

What matters is all clearly (I hope) explained in the text. It is simple enough that its reach is not limited to ML researchers; it is clearly within that of a lot of coders. The IT revolution started when amateurs got PCs.

In this version of the code, I had to make a tradeoff between completeness, usability and practicality. Write your own code, it does not matter. It is the concept that does.

The (upcoming) website will give separate, readable, versions. I am waiting to get a proper idea of what is demanded before I do that, so thank you for that input.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T10:34:23.209Z · LW · GW

I have, deliberately, taken away everything relating to geometry from this presentation. 

It took 12 years (1986-1998), and who knows how much research effort, to go from BP/SGD to convolutions.

This is a one-man effort, on my own personal time (20,000 hours over the past 6 years), that I am giving away for the community to freely take over. I am really sorry if it is not enough. Their choice.

It is not an add-on to something that exists but a complete restart. One thing at a time.

As for CUDA, if you have a lot of threads it is bearable, and you can use old, cheap GPUs with very little loss (they have recently been optimised for FP multiply/add, at the expense of integer ADD).

FYI, I got >99.3% by only adding a fixed layer of simple preset filters (no augmentation), and the idea behind it can be readily extended. You can also train convolutions, unsupervised.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T09:59:23.686Z · LW · GW

That is a question of low philosophical value, but of the highest practical importance.

At line 3,000,000 with the 98.9% setup, in the full log, there are these two pieces of information:

'sp: 1640  734  548', and 'nbupd: 7940.54  6894.20' (and the test accuracy is exactly 98.9%)

It means that the average spike count per sample is 1640 for the IS-group, 734 for the highest ISNOT-group and 548 for the other ones. The number of weight updates is 7940.54 per IS-learn and 6894.20 per ISNOT-learn. With the coding scheme used, the average number of inputs per sample over the four cycles is 2162 (total number of activated pixels in the input matrix) for 784 pixels. There are 7920 neurons per group with 10 connections each (so 10/784th of the pixel matrix), for a total of 79200 neurons.

From those numbers:

The average number of integer additions done for all neurons when a sample is presented is: 79200 * 2162 * 10/784 = 2,184,061 integer additions in total.

And for the spike counts:  1640 + 734 + 8*548 =  6758  INCrements (counting the spikes).

When learning, there are 7940 weight updates for each IS-learn and 6894 for each ISNOT-learn. Those are integer additions. So an average of (7940 + 9*6894) / 10 ≈ 7000 integer additions.

That is to be compared with 3 fully connected layers of, say, 800 units each (to make up for the difference between 98.90% and 98.94%).

That would be at least 800*800*2 + 800*784 = 1,907,200 floating-point multiplications, plus whatever is used for max-norm, ReLU,... which I am not qualified to evaluate, but it might roughly double that?

And the same for each update (low estimate).

Even with recent works on sparse updates that reduce that by 93%, it is still more than 133,000 floating-point multiplications (against 7000 integer additions).
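For anyone who wants to check the arithmetic, here is the whole comparison in a few lines of C (all the numbers are the ones quoted above; nothing new is measured):

    /* Back-of-the-envelope check of the figures above. */
    #include <stdio.h>

    int main(void)
    {
        /* SONN side: integer additions per sample */
        long   sonn_adds  = 79200L * 2162 * 10 / 784;      /* ~2,184,061 */
        long   spike_incs = 1640 + 734 + 8 * 548;          /* 6,758      */
        double upd_adds   = (7940.0 + 9 * 6894.0) / 10.0;  /* ~7,000     */

        /* BP/SGD side: floating-point multiplications per sample */
        long   mlp_mults  = 800L * 800 * 2 + 800 * 784;    /* 1,907,200  */
        double sparse     = mlp_mults * (1.0 - 0.93);      /* ~133,500   */

        printf("SONN: %ld additions, %ld spike INCs, %.0f update additions\n",
               sonn_adds, spike_incs, upd_adds);
        printf("MLP : %ld multiplications, ~%.0f with 93%% sparse updates\n",
               mlp_mults, sparse);
        return 0;
    }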

I have managed to get over 98.5% with 2000 neurons (20,000 connections). I would like to know if BP/SGD can perform at that level with such a small number of parameters (that would be one fully connected layer of 25 units)? And, as I said in the roadmap, that is what will matter for full, real systems.

That is the basic building block, the 1x1x1 Lego brick. A 1.5/1.1 = 36% improvement with 40 times the resources is useless in practice.

And that is missing the real point laid out in the Roadmap: this system CAN and MUST be implemented in analog (hybrid until we get practical memristors), whereas BP/SGD CAN NOT.

There is, at least, another order of magnitude in efficiency to be gained there.

There is a lot of effort invested, at this time, in the industry to implement AI at IC-level. Now is the time.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-02T22:29:48.225Z · LW · GW

I think I have expressed my views on the matter of responsibility quite clearly in the conclusion.

I just checked Yudkowsky on Google. He founded this website, so good.

Here is not the place to argue my views on super-intelligence, but I clearly side with Russell and Norvig. Life is just too complex; luckily.

As for safety, the title of Jessica Taylor's article is:

"Quantilizers: A Safer Alternative to Maximizers for Limited Optimization".

I will just be glad to have proved that alternative to be effective.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-02T20:12:33.850Z · LW · GW

I am not sure I understand your question (sorry, I do not know what Yudkowsky's DMs are).

I basically disclosed, to all, that the way we all think we think does work.

What kind of responsibility could that bear ?

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-02T16:46:45.084Z · LW · GW

We never stop learning.

To kill the program, shoot 'Ctrl+C'.

Seriously, this system works 'online'. I gave the example of the kids and the Dutch to illustrate that, in nature, things change around us and we adjust what we know to the new conditions. A learning process should not have a stopping criterion.

The system converges, on PI-MNIST, at 3-5 million steps. To compare, recent research papers stop at 1 million, but keep in mind that we only update about 2 out of 10 groups each time, so it is equivalent.

So you can use  "for( ; b<5000000 ; b++ )" instead of "while( 1 == 1 )" in the batch() function.

After convergence, it stays within a 0.1% margin forever after. You can design a stop test around that if you want, or around the fact that weights stabilise, or anything of that kind.
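If you really want one, a stop test of that kind can be a few lines (a sketch, not part of the published code; the accuracy readings here are made up for illustration):

    /* Stop once the test accuracy has stayed within a 0.1% band for
       'patience' consecutive checks. */
    #include <stdio.h>

    static int should_stop(double acc, double margin, int patience)
    {
        static double best = 0.0;
        static int stable = 0;

        if (acc > best + margin) { best = acc; stable = 0; }  /* still improving */
        else                     { stable++; }                /* within the band */
        return stable >= patience;
    }

    int main(void)
    {
        double accs[8] = { 97.9, 98.4, 98.6, 98.65, 98.62, 98.66, 98.63, 98.64 };
        for (int i = 0; i < 8; i++)
            if (should_stop(accs[i], 0.1, 3)) {
                printf("stop at check %d (accuracy %.2f%%)\n", i, accs[i]);
                break;
            }
        return 0;
    }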

If you were to use a specific selection of the dataset, wait until it stabilises and, then, use the whole set, the system would 'start learning again' and adjust to that change. Forever. 

It is a feature, not a bug.

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-02T13:15:04.399Z · LW · GW

It is PI-MNIST.

Permutation Invariant. To keep it simple, you cannot use convolutions. It is all explained in the text.

Real SOTA on that version is 99.04% (Maxout), but that is with 65+ million parameters. I do not have the hardware (or time).

I stopped at 98.9% with 750,000 connections (integers and additions), and this is close to what BP/SGD (table 2) gets with 3 hidden layers of 1024 units each, for a total of >3,000,000 parameters (floating-point, with multiplications), with max-norm and ReLU.

For a similar accuracy, the number of 'parameters' is almost an order of magnitude lower with this system, and the efficiency gain is even greater.

Remember, it is not supposed to work at all, and it is not optimised.

Comment by D𝜋 on Another view of quantilizers: avoiding Goodhart's Law · 2022-01-02T10:05:20.090Z · LW · GW

Happy new year.

I have just posted on LessWrong the result of my work on AI.

Your work on quantilisers is the core mathematical evidence of what I propose (the code is another).

I would really appreciate your opinion on it.

Kind regards

Comment by D𝜋 on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-02T08:07:55.399Z · LW · GW

The link I typed in, and which appears when hovering over it, is indeed 'http://yann.lecun.com/exdb/mnist/', and it works on my machine... Thanks for the additional link.