That writes in one session can affect another violates my expectations, at least, of the boundaries that would be set.
Nit: 0.36 bits/letter seems way off. I suspect you only counted the contribution of the letter E from the above table (-p log2 p for E's frequency comes to 0.355).
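Spelling out the arithmetic behind the nit (the 0.355 is the table's value for E; the entropy definition is the standard one):

$$H = -\sum_i p_i \log_2 p_i$$

and $-p_E \log_2 p_E \approx 0.355$ is only the single term for E; summing over all 26 letters gives a bit over 4 bits/letter.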
Agreed. I had [this recent paper](https://ieeexplore.ieee.org/abstract/document/9325353) in mind when I raised the question.
The Landauer limit constrains irreversible computing, not computing in general.
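For reference (standard numbers, not from the post): the limit is $kT \ln 2$ per bit erased, roughly $3 \times 10^{-21}$ J at room temperature, and a logically reversible computation needn't pay even that.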
Here's the argument I'd give for this kind of bottleneck. I haven't studied evolutionary genetics; maybe I'm thinking about it all wrong.
In the steady state, an average individual has n children in their life, and just one of those n makes it to the next generation. (Crediting a child 1/2 to each parent.) This gives log2(n) bits of error-correcting signal to prune deleterious mutations. If the genome length times the functional bits per base pair times the mutation rate is greater than that log2(n), then you're losing functionality with every generation.
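In symbols (the notation is mine, not anything standard): with genome length $L$ in base pairs, $b$ functional bits per base pair, per-base mutation rate $\mu$ per generation, and $n$ children per individual, the claim is that functionality erodes whenever

$$L \, b \, \mu > \log_2 n.$$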
One way for a beneficial new mutation to get out of this bind is by reducing the mutation rate. Another is refactoring the same functionality into fewer bits, freeing up bits for something new. But generically a fitness advantage doesn't seem to affect the argument that the signal from purifying selection gets shared by the whole genome.
An allegedly effective manual spaced-repetition system: flashcards in a shoebox with dividers. You take cards from the divider at one end and redistribute them by how well you recall. I haven't tried this, but maybe I will since notecards have some advantages over a computer at a desk or a phone.
(It turns out I was trying to remember the Leitner system, which is slightly different.)
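A minimal sketch of that redistribution rule, just to make it concrete (the box count and review intervals are my own invention, not part of any canonical description):

```
;; Each card sits in a numbered box; correct recall promotes it one box,
;; a miss demotes it all the way back to box 1. Higher boxes get reviewed
;; less often. (Box count and intervals are made up for illustration.)
(define num-boxes 5)

(define (make-card front back) (list front back 1))
(define (card-box card) (caddr card))

(define (review card correct?)
  (list (car card)
        (cadr card)
        (if correct?
            (min num-boxes (+ (card-box card) 1))
            1)))

;; e.g. review a card in box k every 2^(k-1) days:
(define (days-until-review card)
  (expt 2 (- (card-box card) 1)))
```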
Radical Abundance is worth reading. It says that current work is going on under other names like biomolecular engineering, that the biggest holdup is a lack of systems engineering focused on achieving strategic capabilities (like better molecular machines for molecular manufacturing), and that we ought to be preparing for those developments. It's in a much less exciting style than his first book.
Small correction: Law's Order is by David Friedman, the middle generation. It's an excellent book.
I had a similar reaction to the sequences. Some books that influenced me the most as a teen in the 80s: the Feynman Lectures and Drexler's Engines of Creation. Feynman modeled scientific rationality, thinking for yourself, clarity about what you don't know or aren't explaining, being willing to tackle problems, ... it resists a summary. Drexler had many of the same virtues, plus thinking carefully and boldly about future technology and what we might need to do in advance to steer to an acceptable outcome. (I guess it's worth adding that seemingly a lot of people misread it as gung-ho promotion of the wonders of Tomorrowland that we could all look forward to by now, more like Kurzweil. For one sad consequence, Drexler seems to have become a much more guarded writer.)
Hofstadter influenced me too, and Egan and Szabo.
I'm not a physicist, but if I wanted to fuse metallic hydrogen I'd think about a really direct approach: shooting two deuterium/tritium bullets at each other at 1.5% of c (for a Coulomb barrier of 0.1 MeV according to Wikipedia). The most questionable part I can see is that a nucleus from one bullet could be expected to miss thousands of nuclei from the other, before it hit one, and I would worry about losing too much energy to bremsstrahlung in those encounters.
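A quick nonrelativistic sanity check on the energy scale (my numbers: deuteron rest energy about 1876 MeV):

$$E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2} (1876\ \text{MeV}) (0.015)^2 \approx 0.21\ \text{MeV},$$

so each nucleus alone already carries a couple of times the quoted 0.1 MeV barrier, before counting the head-on collision.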
s/From their/From there
I also reviewed some of his prototype code for a combinatorial prediction market around 10 years ago. I agree that these are promising ideas and I liked this post a lot.
Robin Hanson proposed much the same over 20 years ago in "Buy Health, Not Health Care".
IIRC Doug Orleans once made an ifMUD bot for a version of Zendo where a rule was a regular expression. This would give the user a way to express their guess of the rule instead of you having to test them on examples (regex equality is decidable).
Also I made a version over s-expressions and Lisp predicates -- it was single-player and never released. It would time-out long evaluations and treat them as failure. I wonder if I can dig up the code...
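Reconstructed from memory rather than the actual code (all names below are made up): a rule was just a predicate over s-expressions, a guess was checked by comparing verdicts against the secret rule on the koans so far, and any evaluation that ran too long was scored as a failure (the timeout machinery is elided here).

```
(define (all? pred lst)
  (or (null? lst)
      (and (pred (car lst)) (all? pred (cdr lst)))))

;; A rule maps a koan (an s-expression) to a verdict.
(define (classify rule koan) (if (rule koan) 'white 'black))

;; A guessed rule is accepted if it agrees with the secret rule on every koan.
(define (agrees-on? secret guess koans)
  (all? (lambda (k) (eq? (classify secret k) (classify guess k))) koans))

;; Example secret rule: "contains the symbol water somewhere".
(define (has-water? koan)
  (cond ((eq? koan 'water) #t)
        ((pair? koan) (or (has-water? (car koan)) (has-water? (cdr koan))))
        (else #f)))
```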
Here's what's helped for me. I had strong headaches that would persist for weeks, with some auras, which my doctor called migraines. (They don't seem to be as bad as what people usually mean by the word.) A flaxseed oil supplement keeps them away. When I don't take enough, they come back; it needs to be at least 15g/day or so (many times more than the 2-3 gelcaps/day that supplement bottles direct you to take). I've taken fish oil occasionally instead.
I found this by (non-blinded) experimenting with different allegedly anti-inflammatory supplements. I'm not a doctor, etc.
Computing: The Pattern On The Stone by Daniel Hillis. It's shorter and seemingly more focused on principles than the Petzold book Code, which I can't compare further because I stopped reading early (low information density).
> it's also notable that he successfully predicted the rise of the internet
Quibble: there was plenty of internet in 1986. He predicted a global hypertext publishing network, and its scale of impact, and starting when (mid-90s). (He didn't give any such timeframe for nanotechnology, I guess it's worth mentioning.)
Radical Abundance, which came out this past month.
Added: The most relevant things in the book for this post (which I've only skimmed):
There's been lots of progress in molecular-scale engineering and science that isn't called nanotechnology. This progress has been pretty much along the lines Drexler sketched in his 1981 paper and in the how-can-we-get-there sections of Nanosystems, though. This matches what I saw sitting in on Caltech courses in biomolecular engineering last year. Drexler believes the biggest remaining holdup on the engineering work is how it's organized: when diverse scientists study nature their work adds up because nature is a whole, but when they work on bits and pieces of technology infrastructure in the same way, their work can't be expected to coalesce on its own into useful systems.
He gives his latest refinement of the arguments at a lay level.
Yes -- in my version of this you do get passed your own source code as a convenience.
If you'd rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket's zillion features would get in the way of interesting source-code analyses. Maybe they'll make the game more interesting in other ways?
There's a Javascript library by Andrew Plotkin for this sort of thing that handles 'a/an' and capitalization and leaves your code less repetitive, etc.
In Einstein's first years in the patent office he was working on his PhD thesis, which when completed in 1905 was still one of his first publications. I've read Pais's biography and it left me with the impression that his career up to that point was unusually independent, with some trouble jumping through the hoops of his day, but not extraordinarily so. They didn't have the NSF back then funding all the science grad students.
I agree that all the people we're discussing were brought into the system (the others less so than Einstein) and that Einstein had to overcome negative selection even while some professors thought he showed promise of doing great things. (Becoming an insider back then wasn't guaranteed -- in the previous century there was Hermann Grassmann trying to get out of teaching high school all his life.)
Heaviside and Ramanujan accomplished less than Einstein, but they started way further outside.
Better examples of outsider-scientists from around then include Oliver Heaviside and Ramanujan. I'm having trouble thinking of anyone recent; the closest to come to mind are some computer scientists who didn't get PhD's until relatively late. (Did Oleg Kiselyov ever get one?)
Yes, that's where I got the figure (the printed book). The opening chapter lists a bunch of other figures of merit for other applications (strength of materials, power density, etc.).
Figure 16.8. (I happened to have the book right next to me.)
Ah -- 0.1nm is also about the C-H bond length (C-C is a bit longer, ~0.15nm), which comes to mind more naturally to me thinking about the scale of an organic molecule -- enough to make me wonder where the 0.24 was coming from. E.g. a (much bigger) sulfur atom can have bonds that long.
Oh, you're right, thanks.
Isn't an H atom more like 0.1nm in diameter? Of course it's fuzzy.
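(For reference: twice the Bohr radius is $2 \times 0.0529\ \text{nm} \approx 0.106\ \text{nm}$, so 0.1nm is about right.)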
I agree with steven0461's criticisms. Drexler outlines a computer design giving a lower bound of 10^16 instructions/second/watt.
Should there be a ref to http://e-drexler.com/d/07/00/1204TechnologyRoadmap.html ?
Quibbling about words: "atom by atom" seems to have caused some confusion with some people (taking it literally as defining how you build things when the important criterion is atomic precision). Also "nanobots" was coined in a ST:TNG episode, IIRC, and I'm not sure if people in the field use it.
You could grind seeds in a coffee grinder, as BillyOblivion suggests. (I don't because the extra stuff in seeds disagrees with another body issue of mine.) Sometimes I take around 5 gelcaps a day while traveling, which isn't as effective but makes most of the difference for the headaches.
What I do is put on a swimmer's nose clip, drink the oil by alternately taking in a mouthful of water and floating a swallow of oil down on top of that; follow up with a banana or something because I've found taking it on an empty stomach to disagree with me; have a bit more water; then take off the noseclip. The clip is mainly to help with Shangri-La appetite control, which I consider just a bonus.
The first time I took this it gave me heartburn -- starting with a smaller amount the first couple of times might be smart.
My headaches mostly went away with daily flaxseed oil or fish oil. I have no particular reason to expect you'd see the same, but it's easy to try. I take 1 or 2 tablespoons of flaxseed oil per day.
Thanks! Yes, I figure one-shot and iterated PDs might both hold interest, and the one-shot came first since it's simpler. That's a neat idea about probing ahead.
I'll return to the code in a few days.
On message passing as described, that'd be a bug if you could do it here. The agents are confined. (There is a side channel from resource consumption, but other agents within the system can't see it, since they run deterministically.)
I hadn't considered doing that -- really I just threw this together because Eliezer's idea sounded interesting and not too hard.
I'll at least refine the code and docs and write a few more agents, and if you have ideas I'd be happy to offer advice on implementing your variant.
I followed Eliezer's proposal above (both players score 0) -- that's if you die at "top level". If a player is simulating you and still has fuel after, then it's told of your sub-death.
You could change this in play.scm.
When you call RUN, one of two things happens: it produces a result or you die from exhaustion. If you die, you can't act. If you get a result, you now know something about how much fuel there was before, at the cost of having used it up. The remaining fuel might be any amount in your prior, minus the amount used.
At the Scheme prompt:
(run 10000 '(equal? 'exhausted (cadr (run 1000 '((lambda (f) (f f)) (lambda (f) (f f))) (global-environment)))) global-environment)
; result: (8985 #t) ; The subrun completed and we find #t for yes, it ran to exhaustion.
(run 100 '(equal? 'exhausted (cadr (run 1000 '((lambda (f) (f f)) (lambda (f) (f f))) (global-environment)))) global-environment)
; result: (0 exhausted) ; Oops, we never got back to our EQUAL? test.
The only way to check your fuel is to run out -- unless I goofed.
You could call that message passing, though conventionally that names a kind of overt influence of one running agent on another, all kinds of which are supposed to be excluded.
It shouldn't be hard to do variations where you can only run the other player and not look at their source code.
I just hacked up something like variant 3; haven't tried to do anything interesting with it yet.
I second the rec for Feynman volume 1: it was my favorite text as a freshman, though the class I took used another one. Since that was in the last millennium and I haven't kept up, I won't comment on other books. Volumes 2 and 3 won't be accessible to beginners.
Yes, tentatively. I've read the textbook, more like given it a first pass, and it's excellent. This should help me stick to a more systematic study. If the video lectures have no transcripts, that'd suck, though (I'm hard of hearing).
O shame to men! Devil with devil damned / Firm concord holds; men only disagree / Of creatures rational
-- Milton, Paradise Lost: not on Aumann agreement, alas
A related example that I, personally, considered science fiction back in the 80s: Jerry Pournelle's prediction that by the year 2000 you'd be able to ask a computer any question, and if there was a humanly-known answer, get it back. Google arrived with a couple years to spare. To me that had sounded like an AI-complete problem even were all the info online.
You bring up cryonics and AI. 25 years ago Engines of Creation had a chapter on each, plus another on... a global hypertext publishing network like the Web. The latter seemed less absurd back then than the first two, but it was still pretty far out there:
> One of the things I did was travel around the country trying to evangelize the idea of hypertext. People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?” or “why would anyone want to put documents online?” Alas, many things really must be experienced to be understood.
I believed Drexler's prediction that this technology would be developed by the mid-90s but I didn't expect it to be taking over the world by then. Probably to most people even in computers it was science fiction.
As far as computers in general, their hardware reliability's the least intuitive aspect to me. Billions of operations per second, OK, but all in sequence, each depending on the last, without a single error? While I know how that's possible, it's still kind of shocking.
I know someone who was on dialysis while waiting for a transplant. It was really hard on them, and for a while it looked like they might not pull through. I don't know how common such an experience is.
A doctor faces a patient whose problem has resisted decision-tree diagnosis -- decision trees augmented by intangibles of experience and judgement, sure. The patient wants some creative debugging, which might at least fail differently. Will they get their wish? Not likely: what's in it for the doctor? The patient has some power of exit, not much help against a cartel. To this patient, to first order, Phil Goetz is right, and your points partly elaborate why he's right and partly list higher-order corrections.
(I did my best to put it dispassionately, but I'm rather angry about this.)
I've wondered lately while reading The Laws of Thought whether BDDs (binary decision diagrams) might help human reasoning too -- the kind of reasoning that gets formalized as boolean logic, of course.
This article reminded me of your post elsewhere about lazy partial evaluation / explanation-based learning and how both humans and machines use it.
The slowest phase in a nonoptimizing compiler is lexical scanning. (An optimizer can usefully absorb arbitrary amounts of effort, but most compiles don't strictly need it.) For most languages, scanning can be done in a few cycles/byte. Scanning with finite automata can also be done in parallel in O(log(n)) time, though I don't know of any compilers that do that. So, a system built for fast turnaround, using methods we know now (like good old Turbo Pascal), ought to be able to compile several lines/second given 1 kcycle/sec. Therefore you still want to recompile only small chunks and make linking cheap -- in the limit there's the old 8-bit Basics that essentially treated each line of the program as a compilation unit. See P. J. Brown's old book, or Chuck Moore's Color Forth.
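The rough arithmetic, with illustrative numbers of my own choosing (say 5 cycles per byte of scanning and 40 bytes per source line):

$$\frac{1000\ \text{cycles/sec}}{5\ \text{cycles/byte} \times 40\ \text{bytes/line}} = 5\ \text{lines/sec}.$$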
I can't make it. Anyone going through Burbank would be welcome to stop by my place for a chat, though -- it's quiet here. Email withal@gmail.com for the address.
An idealized free market is one of selfish rational agents competing (with a few extra conditions I'm skipping). I'm moderately confident this could work pretty well in the absence of "general" (if such a thing exists) or perhaps human "intelligence", but I'm not familiar enough with simulations of markets to be certain.
Eric Baum's papers, among others, show this kind of thing applied to AI. There doesn't seem to have been much followup.
Comparative Ecology: A Computational Perspective compares this idea to the human economy and biological evolution and says the idealized computer version ought to be, well, more ideal as an optimization process.
Doug Orleans told me once of a version like this he made to be played with an IRC or MUD bot (I forget which). A rule was a regular expression. (This came up when I mentioned doing it with Lisp s-expressions for the koans instead.)
About this article's tags: you want dark_arts, judging by the tags in the sidebar. The 'arts' tag links to posts about fiction, etc.
ObDarkArts101: Here's a course that could actually have been titled that:
> Writing Persuasion (Spring 2011): A course in persuasive techniques that do not rely on overt arguments. It would not be entirely inaccurate to call this a course in the theory, practice, and critique of sophistry. We will explore how putatively neutral narratives may be inflected to advance a (sometimes unstated) position; how writing can exploit readers' cognitive biases; how a writer's persona on the page -- what Aristotle might call her ethos -- may be constructed to influence her readers.
There might be more agreement here than meets the eye. Drexler often posts informatively and approvingly about progress in DNA nanotechnology and other bio-related tech at http://metamodern.com ; this is less surprising when you remember that his very first nanotech paper outlined protein engineering as the development path. Nanosystems is mainly about establishing the feasibility of a range of advanced capabilities which biology doesn't already provide, and which it's not obvious biology could. Biology and its environment being complicated and all, as Jones says.
Freitas in Nanomedicine addresses applying a Nanosystems technology base to our bio problems, or at least purports to -- I haven't been able to get into it because it's really long-winded and set in tiny type. Nanosystems was more inviting.