Hi, not to spend a lot of time here, but someone called my attention to the fact that I was mentioned in the comments. Just a few things:
-
In the comment of mine where I was quoted, I was talking about conventional architectures wherein one erases bits by simply discarding the signal energy, and where the signals themselves have enough associated energy (e.g. an energy difference between 0 and 1 states, or an energy barrier between the states) to be reliably distinguished despite thermal noise. Yes, theoretically one can do somewhat better than this (i.e., closer to the kT ln 2 minimum) with more complicated erasure protocols, but these generally come at a cost in time. Also, most treatments of these protocols ignore the energy requirements for operating the control mechanisms, which, depending on their nature, can themselves be substantial. For realistic engineering purposes, one should really analyze a very concrete exemplar mechanism in much more detail; otherwise valid questions can always be raised about whether the analysis is actually realistic.
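For a rough sense of the numbers involved, here is a minimal back-of-the-envelope sketch in Python; the 300 K temperature and the 1e-20 thermal-error target are illustrative assumptions, not figures from the comment above. It compares the kT ln 2 minimum with the signal energy a discard-the-signal scheme needs just to keep the Boltzmann-factor error rate that low.

```python
import math

# Back-of-the-envelope sketch (illustrative assumptions: T = 300 K,
# target thermal error rate 1e-20). Compares the Landauer minimum
# kT*ln(2) with the signal energy a conventional "discard the signal"
# erasure scheme needs for reliable 0/1 distinguishability.

k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 300.0          # assumed operating temperature, K

landauer_min = k_B * T * math.log(2)          # ~2.9e-21 J per bit erased

# If the error probability scales like exp(-E_signal / kT), pick a
# target error rate and solve for the required signal/barrier energy.
target_error  = 1e-20
signal_energy = k_B * T * math.log(1.0 / target_error)

print(f"Landauer minimum per bit      : {landauer_min:.3e} J  (= kT ln 2)")
print(f"Signal energy for err<{target_error:g}: {signal_energy:.3e} J "
      f"(~{signal_energy / (k_B * T):.0f} kT)")
print(f"Ratio (discarded / minimum)   : {signal_energy / landauer_min:.0f}x")
```

Under those assumed figures the discarded signal energy works out to roughly 46 kT per bit, i.e. several tens of times the Landauer minimum, which is the kind of gap the more elaborate erasure protocols try to close.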
-
That being said, regarding the details, analysis, and optimization of alternative erasure protocols, there is a lot of existing published and preprint literature on this topic, some of it quite recent, so I would encourage anyone interested to begin by spending some time surveying what’s already been done before spending a ton of time reinventing the wheel. Start by spending a few minutes doing some relevant keyword searches on Google Scholar, then follow citations forward and backward, etc. Nowadays you can use AIs to help you quickly absorb the gist of papers you find, so “doing your homework” in terms of background research is easier than ever.
-
All this aside, from my POV, optimizing bit erasure is less interesting than reversible computing, since RC can in theory do even better by avoiding erasure entirely, or greatly reducing the number of bit erasures needed. Of course, reversible computing has its own overheads, and my above comments about needing to analyze concrete mechanisms in detail in order to be more relevant for engineering purposes also apply to it. Lots of work still needs to be done to prove any of these ideas truly practical. I’d certainly encourage anyone who’s interested to get involved, since we’re never going to see these things happen until a lot more people start seriously working on them. And sadly AI is still a long way from being able to do serious engineering innovation all on its own, but I think humans who understand how to engage AI effectively on challenging problems could make great strides.
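To make the “avoiding erasure” point concrete, the short Python sketch below (purely illustrative, not part of the original comment) checks that a Toffoli (CCNOT) gate is a bijection on its 3-bit input space, so in principle it can compute AND without discarding any information, whereas an ordinary AND gate maps two input bits to one output bit and is not invertible.

```python
from itertools import product

# Sketch of why reversible logic can, in principle, avoid bit erasure:
# a reversible gate is a bijection on its input space, so no information
# (and hence no Landauer cost) has to be thrown away.

def toffoli(a, b, c):
    """CCNOT: flips c iff a and b are both 1. Output width equals input width."""
    return (a, b, c ^ (a & b))

def and_gate(a, b):
    """Ordinary AND: two input bits collapse to one output bit."""
    return (a & b,)

def is_injective(gate, n_inputs):
    outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
    return len(set(outputs)) == len(outputs)

print("Toffoli injective (reversible)?", is_injective(toffoli, 3))   # True
print("AND injective (reversible)?   ", is_injective(and_gate, 2))   # False
# With c = 0, the Toffoli gate computes AND of a and b in its third output
# bit while keeping a and b around, so the mapping stays invertible.
```

The price, of course, is carrying the extra retained bits through the computation, which is one example of the overheads alluded to above.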
Cheers… ~Mike Frank