Dragon Ball's Hyperbolic Time Chamber
post by gwern · 2012-09-02T23:49:50.925Z · LW · GW · Legacy · 65 comments
A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses, and none that would change the world, due to the severe penalties humans would incur while using it, while basic constraints like Amdahl's law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except that most of the time dilation disadvantages do not apply or can be ameliorated, and hence any speedups could be exploited quite effectively. I suggest that skeptics of the idea that speedups give advantages are implicitly working off the crippled time dilation tool and not making allowance for the disanalogies.
Master version on gwern.net
65 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2012-09-03T00:37:41.134Z · LW(p) · GW(p)
A year in a day is not very impressive, but 365 years (of research) in a year is. Depending on how many people can be placed in these boxes at any given time, this may amount to three centuries' worth of progress (at least in software and mathematics).
Replies from: Will_Newsome, None↑ comment by Will_Newsome · 2012-09-05T12:46:25.748Z · LW(p) · GW(p)
yo dawg we heard you like hyperbolic time chambers
Replies from: CronoDAS, katydee↑ comment by CronoDAS · 2012-09-07T10:34:19.924Z · LW(p) · GW(p)
I think the characters in Primer may have done something like this, getting around a limitation of their time machine by putting a second time machine inside of the first. Then again, the movie isn't always clear as to what's happening, so it's hard to tell...
↑ comment by katydee · 2012-09-07T10:14:05.011Z · LW(p) · GW(p)
Note: the above is actually a highly insightful comment if you stop and think about it for a second.
Replies from: thomblake, CronoDAS↑ comment by thomblake · 2012-09-12T13:38:15.623Z · LW(p) · GW(p)
the above is actually a highly insightful comment if you stop and think about it for a second.
Any comment can seem insightful if you're allowed to supply details until it makes sense.
Replies from: katydee, chaosmosis↑ comment by chaosmosis · 2012-09-12T13:44:15.696Z · LW(p) · GW(p)
afsd;ljkurjzvn,x
Replies from: chaosmosis↑ comment by chaosmosis · 2012-09-12T13:48:01.384Z · LW(p) · GW(p)
The above comment would be insightful if it was a counterexample. This means it is not a counterexample. That means that it is not insightful. That means it is a counterexample. It's like the least interesting number paradox, but for nonsense strings of letters.
Regardless, I might recognize the technical accuracy of your point, but your point is only superficially useful. I liked the original comment and thought that it was both funny and insightful. Yes, some of that insight is mine as well (rocks can't sing or dance or use logic), but that doesn't mean that the initial comment isn't also interesting.
Replies from: thomblake↑ comment by thomblake · 2012-09-12T14:50:16.595Z · LW(p) · GW(p)
The above comment would be insightful if it was a counterexample. This means it is not a counterexample. That means that it is not insightful.
This does not follow. You're treating the first premise like a double implication, but it's certainly not true that the comment would be insightful if and only if it was a counterexample.
Clearly the comment "afsd;ljkurjzvn,x" was just a typo for "afsd:ljkurjzvn.x", which I read as agreement with my point, making clever reference to complexity theory and Aaronson's refutation of the waterfall argument.
Replies from: Rukifellth↑ comment by Rukifellth · 2013-04-14T22:43:53.492Z · LW(p) · GW(p)
So it isn't rot13?
↑ comment by CronoDAS · 2012-09-07T10:39:17.632Z · LW(p) · GW(p)
Indeed. Put a hyperbolic time chamber inside another hyperbolic time chamber, and you get a speedup factor of 365 squared.
I think the characters in Primer may have done something like this, getting around a limitation of their time machine by putting a second time machine inside of the first. Then again, the movie isn't always clear as to what's happening, so it's hard to tell...
Replies from: gwern↑ comment by gwern · 2012-09-07T17:25:20.642Z · LW(p) · GW(p)
Yeah, it would be interesting, but it's not doable in either the original DBZ scenario or in upload scenarios: you can't emulate an emulator and get a speedup like that - the buck-passing doesn't work; the computations still have to be done somewhere.
(Any optimization you could apply to emulating an emulation, like some sort of Futamura projection collapsing the emulated program and the emulated hardware, could be done at the original emulation level, so all it leaves you with is possible programming convenience and constant factors of inefficiency and indirection.)
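A toy sketch of the point, assuming a constant interpretive overhead per layer (the OVERHEAD factor is illustrative, not a measurement of any real emulator):

```python
# Toy model of nested emulation (illustrative only): each emulation layer
# charges a constant overhead factor per emulated step, so nesting
# multiplies cost -- the work still has to be done somewhere.

OVERHEAD = 10  # assumed host-steps per emulated step

def cost(steps: int, depth: int) -> int:
    """Host steps needed to run a `steps`-long program under `depth` nested emulators."""
    return steps * OVERHEAD ** depth

print(cost(1_000, 0))  # 1,000    -- run directly
print(cost(1_000, 1))  # 10,000   -- one emulator
print(cost(1_000, 2))  # 100,000  -- emulator inside an emulator: slower, never faster
```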
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-09-07T20:05:53.493Z · LW(p) · GW(p)
You can have an arbitrarily deep sequence of speeding-up optimized nested emulations, with each subsequent emulation running faster by its container's clock than the otherwise identical container would run by itself (by its own clock).
(The catch is that n obk gung'f ehaavat na rzhyngvba znl or fybjre ol vgf pbagnvare'f pybpx guna vs vg vfa'g.)
↑ comment by [deleted] · 2012-09-06T18:01:38.222Z · LW(p) · GW(p)
I think you'd have to get a pretty large team in there to see any substantial results. One person, working alone without feedback or contact with other scientists or any chance to do experiments, won't do very much more in 365 years than they would in one.
comment by NancyLebovitz · 2012-09-03T01:51:45.079Z · LW(p) · GW(p)
The consequences if it could be tuned for smaller time chunks would be interesting-- for example, people who'd like to have a time pocket for work and/or sleep, freeing up their out-in-the-world time for family, friends, or hobbies that can't be done in a time box.
Replies from: gwern
comment by arundelo · 2012-09-03T01:46:56.714Z · LW(p) · GW(p)
I really liked William Sleator's YA novel Singularity, which is about teenage identical twins who find a time anomaly with a similar speedup factor.
Replies from: Nisan, gwern↑ comment by Nisan · 2012-09-04T13:04:16.082Z · LW(p) · GW(p)
I enjoyed that book when I was a teenager as well. It's kind of absurd because it starts out in the kids-discover-a-mysterious-artifact/phenomenon genre, and then suddenly turns into a irefvba bs Zl Fvqr Bs Gur Zbhagnva jurer gur xvq whfg fvgf va n furq sbe n lrne. V'z yvxr, "Ubj qvq gur nhgube xabj V'q rawbl ernqvat gung?"
↑ comment by gwern · 2012-09-03T02:07:59.465Z · LW(p) · GW(p)
I see that it's on libgen.info, so I'll give it a look.
Replies from: arundelo↑ comment by arundelo · 2012-09-03T03:55:59.163Z · LW(p) · GW(p)
It's not a super deep book*, but it is very gripping, and more character-oriented than you might expect given the premise. The viewpoint character is a convincing 16-year-old. For me, the book is one of the most memorable fictional depictions of grit I've seen, right up there with Gattaca and The Shawshank Redemption**. (Disclaimer: I've read the book several times, but the most recent time was five or ten years ago.)
* But much deeper than Dragon Ball Z from what I've seen. :-)
Edit: Here's Orson Scott Card giving a glowing review to Singularity and some other Sleator books. This contains a spoiler for Singularity! -- although vg'f n cybg cbvag lbh pbhyq cebonoyl thrff tvira gung jr'ir nyernql gnyxrq nobhg gur gjva cnenqbk.
** Edit 2: The Count of Monte Cristo deserves a place on this list too.
Replies from: gwern, Elec0↑ comment by gwern · 2012-09-03T04:16:13.739Z · LW(p) · GW(p)
I just finished it. The Count of Monte Cristo came to my mind too during the 'prison' sequence, which was fairly good.
As far as the HTC scenario goes, it illustrates both the upside and downside: the ability to focus on something for a long interval, but also the massive reduction in quality of life. (It also mentions in passing the aging problem: the uncle is 40 but 'looks 60' and dies early, in the middle of his research - which he might have been able to finish if he had spent more time in realtime so he could await future replies from researchers & new textbooks or results.)
Replies from: arundelo↑ comment by arundelo · 2012-09-03T04:43:46.530Z · LW(p) · GW(p)
02:07:59AM: I see that it's on libgen.info, so I'll give it a look.
04:16:13AM: I just finished it.
I guess it is a short book, but man! You don't kid around.
As shown in the book, the aging thing can in very narrow circumstances be a feature not a bug, but when I daydream about using a secret HTC to amaze everyone with my productivity and learning speed, the daydream includes some means of not aging faster than everyone else. (This makes me think of some sort of SF Dorian Gray.)
Replies from: None
comment by ShardPhoenix · 2012-09-03T03:25:12.452Z · LW(p) · GW(p)
One important aspect of uploads is that they can (presumably) be easily copied - this is enough for them to have a huge economic impact even if they only run at normal human speed. If you have one upload that is willing and able to do a job, suddenly you have as many as you need, and can displace all human workers in that job (at least once the hardware gets cheap enough).
Replies from: None↑ comment by [deleted] · 2012-09-03T19:57:57.247Z · LW(p) · GW(p)
This assumes that uploaded people would agree to being copied, or that the world has turned so dystopian that no one would ask them for permission. I, for one, wouldn't want several instances of me running around.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-05T16:48:00.064Z · LW(p) · GW(p)
Can you say more about why not?
Not that you're obligated to; preferences are preferences. But this particular preference is sufficiently alien to me that I'd like to understand it better.
Replies from: None↑ comment by [deleted] · 2012-09-05T20:11:26.660Z · LW(p) · GW(p)
I like my personal identity and creating several causally interacting copies of me would feel like diluting it. I could anticipate a future in which there would be several instances of me with different life experiences since the moment of splitting. All of those experiences would be 'mine' in a way, yet 'I' wouldn't own most of them.
This would be less of a problem if there was a way to merge back into a single person after doing whatever needed doing but that's not a given. Copying is straightforward in an uploading scenario while merging requires progress in conceptual understanding of the mind. And if we knew how to merge back together, we might also know how to make it an ongoing process, so instead of splitting into separate personalities we could launch several communicating threads of attention that would still match my intuition of being a single person, which I would find preferable.
(And yes, I know about many worlds. That's different because the world splits with me.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-05T21:49:58.017Z · LW(p) · GW(p)
Cool; thanks.
Agreed that merging different-but-similar minds is a vastly different (and more complicated) problem from creating copies. And agreed that ongoing synchronization among minds is yet a third problem, and that both of those would be awesome.
comment by V_V · 2012-09-03T10:12:08.761Z · LW(p) · GW(p)
But I’m not sure how many real-world problems there are of high economic value which can soak up a year of serial processing.
Think of any real-world problem that takes one day on modern hardware. When computers were 365 times slower (that is, 12 - 13 years ago), it would have taken one year to solve such problems.
Don't you think there is any real-world problem that would benefit from 365x faster hardware, even if you can interact with it only once a day?
(Even if individual tasks take less than one year to complete, you can pool several of them and run them serially on the 365x computer)
Replies from: RomanDavis, Lightwave, gwern↑ comment by RomanDavis · 2012-09-03T22:44:42.117Z · LW(p) · GW(p)
The first thing that comes to mind is evolutionary algorithms, which eat up tons of RAM; keeping them running at a decent pace while tracking dozens of variables is an enormous engineering challenge.
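For concreteness, a minimal genetic-algorithm sketch (toy problem and toy parameters; real workloads use far larger populations and genomes, which is where the RAM and bookkeeping pressure comes from):

```python
import random

# Minimal genetic algorithm maximizing the count of 1-bits (toy problem).
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 64, 100, 200, 0.01

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [1 - bit if random.random() < MUT_RATE else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP_SIZE // 2]                        # truncation selection
    pop = elite + [mutate(random.choice(elite)) for _ in elite]

print(max(fitness(g) for g in pop))  # best fitness found
```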
↑ comment by Lightwave · 2012-09-04T15:10:12.404Z · LW(p) · GW(p)
Doesn't circuit design (and therefore computer processor design) require fairly large computational resources (for mathematical modelling)? Thus faster hardware now can be used to create even faster hardware... faster.
Replies from: gwern, V_V↑ comment by gwern · 2012-09-04T15:40:20.558Z · LW(p) · GW(p)
Yes, but how much of the work that goes into the next generation is just layout? It doesn't solve all of your chemical or quantum mechanical issues, or fix your photomasks for the next shrunken generation, etc. If layout were a major factor, we should expect to hear of 'layout farms' or supercomputers or datacenters devoted to the task. I, at least, haven't. (I'm sure Intel has a datacenter or two, but so do many >billion tech multinationals.)
And if layout is just a fraction of the effort, like 10%, then Amdahl's law especially applies.
Replies from: bcoburn↑ comment by bcoburn · 2012-09-05T00:07:22.241Z · LW(p) · GW(p)
It doesn't give many actual current details, but http://en.wikipedia.org/wiki/Computational_lithography implies that as of 2006, designing the photomask for a given chip required ~100 CPU-years of processing, and presumably that has only gone up.
Etching a 22nm line with 193nm light is a hard problem, and a lot of the techniques used certainly appear to require huge amounts of processing. It's close to impossible to say how much of a bottleneck this particular step is, but given how much simulation it takes to really understand what is going on in even simple mechanical design, I would actually expect every step in chip design to have similar simulation requirements.
↑ comment by V_V · 2012-09-04T16:19:31.575Z · LW(p) · GW(p)
There is some positive feedback in circuit design (although sublinear, I think), but hardware serial speed is essentially limited by the size of the surface features on the IC, which is in turn limited by the manufacturing process and ultimately by the physical limits of CMOS technology.
↑ comment by gwern · 2012-11-21T02:50:20.190Z · LW(p) · GW(p)
Most of the examples people would come up with of extremely compute-intensive tasks are parallel algorithms, and those would be cheaper to run in the real world on server farms, which do not need self-contained power sources fitting in an HTC, or similar special setups, or attendants paid handsomely to be willing to spend a year isolated in the prison of the HTC. There's simply no reason to take a highly parallel task and run it in an HTC when you can get the same result at less cost and less latency by running it on a perfectly normal server farm or cloud computing platform.
(The genetic algorithm will spit out similar answers whether you run it on 1 CPU in an HTC at >$1/day for a subjective year (>$365 total) or on 365 CPUs on Amazon EC2 at $1/day each for 1 day, for the same $365.)
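Spelling out that arithmetic (the $1/CPU-day price is the hypothetical figure from the example above):

```python
# Hypothetical pricing from the example above: $1 per CPU-day either way.
serial_cpu_days = 365                 # total compute the job needs

htc_cost,   htc_wallclock   = serial_cpu_days * 1, 1   # 1 CPU x 365 subjective days, 1 real day
cloud_cost, cloud_wallclock = serial_cpu_days * 1, 1   # 365 CPUs x 1 day on EC2

# Same $365, same 1-day latency -- so for a parallelizable job the HTC
# buys nothing; it only matters when the work cannot be split across CPUs.
print(htc_cost, htc_wallclock, cloud_cost, cloud_wallclock)
```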
Remember, electricity is already the dominant cost to running a server these days!
Are there serial tasks which people do not run because they would take a fraction of a year, but which would justify a year's worth of premium power bills if they could be run overnight? I'm sure there are some, and the introduction of an HTC would conjure up uses which no one had bothered to work out because they were obviously pointless, but I can't help but think that the absence of obvious candidates means the candidates wouldn't be fantastically useful.
Replies from: V_V↑ comment by V_V · 2012-11-25T21:31:28.601Z · LW(p) · GW(p)
Even the so-called embarrassingly parallel problems, those whose theoretical performance scales almost linearly with the number of CPUs, in practice scale sublinearly in the amount of work done per dollar: massive parallelization comes with all kinds of overheads, from synchronization to cache contention to network communication costs to distributed storage issues. More trivially, large data centers have significant heat dissipation issues: they all need active cooling, and many are also housed in high-tech buildings specifically designed to address this issue. Many companies even place data centers in northern countries to take advantage of the colder climate, instead of putting them in, say, China, India, or Brazil, where labor costs much less.
Problems that are not embarrassingly parallel are limited by Amdahl's law: as you increase the number of CPUs, the performance quickly reaches an asymptote where the sequential parts of the algorithm dominate.
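In formula form (a standard statement of Amdahl's law; the 95% parallel fraction below is just an example value):

```python
# Amdahl's law: with parallelizable fraction p, N CPUs give speedup
#   S(N) = 1 / ((1 - p) + p / N),  which caps at 1 / (1 - p) as N -> infinity.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 10, 100, 1_000, 1_000_000):
    print(f"{n:>9} CPUs: {amdahl_speedup(0.95, n):6.2f}x")
# Even with 95% of the work parallelizable, a million CPUs top out near 20x,
# while a 365x-faster serial machine speeds up the sequential part too.
```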
I can't help but think that there being no obvious candidates means the candidates wouldn't be fantastically useful.
Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems for model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.
Replies from: gwern↑ comment by gwern · 2012-11-25T23:45:39.449Z · LW(p) · GW(p)
An HTC would come with serious overhead costs too; the cooling is just the flip side of the electricity - an HTC isn't in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject at each opening).
Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems for model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.
I'm not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel's report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote. How much of an advantage is running for a year rather than the otherwise available days/weeks? Is it large enough to pay for a year of premium HTC computing power?
Replies from: V_V↑ comment by V_V · 2012-11-26T15:14:36.930Z · LW(p) · GW(p)
a HTC isn't in Iceland and the obvious interpretation of a HTC as a very small pocket universe means that you have serious cooling issues as well (a years' worth of heat production to eject each opening).
Of course given that the HTC is a fictional device you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer that had 365x the serial speed of present day computers, and considering whether there would be economically interesting batch (~1 day long) computations to run on it.
I'm not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel's report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote.
These problems have polynomial time complexity; they don't asymptote. Linear programming, for instance, has quadratic worst-case time complexity in the size of the problem instance (and O(n^3.5) time complexity in the number of variables). For problems related to model checking (circuit value problem, Horn-satisfiability, type inference), approximate solutions don't seem particularly useful.
Replies from: gwern↑ comment by gwern · 2012-11-28T01:28:57.449Z · LW(p) · GW(p)
Of course given that the HTC is a fictional device you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer that had 365x the serial speed of present day computers, and considering whether there would be economically interesting batch (~1 day long) computations to run on it.
Hm, I wasn't, except in the shift to the upload scenario, where the speedup is not from executing regular algorithms (presumably anything capable of executing emulated brains at 365x realtime will have much better serial performance than current CPUs). As an ordinary computer, there are still heat considerations - how is it taking care of putting out 365x a regular computer's heat, even if it's doing 365x the work? And as a pocket universe as specified, heat is an issue - in fact, now that I think about it, Stephen Baxter invented a space-faring alien race in his Ring hard-SF universe which lives inside tiny pocket universes as the ultimate in heat insulation.
These problems have polynomial time complexity, they don't asymptote. Linear programming, for instance has quadratic worst-case time complexity in the size of problem instance (and O(n^3.5) time complexity in the number of variables).
I was referring to the quality of the solution produced by the approximating algorithms.
For problems related to model checking (circuit value problem, Horn-satisfiability, type inference), approximate solutions don't seem particularly useful.
Quickly googling, there seems to be plenty of work on approximate solutions and approaches in model checking; for example http://cs5824.userapi.com/u11728334/docs/77a8b8880f48/Bernhard_Steffen_Verification_Model_Checking_an.pdf includes a paper:
In this paper, we propose an approximation method to verify quantitative properties on discrete Markov chains. We give a randomized algorithm to approximate the probability that a property expressed by some positive LTL formula is satisfied with high confidence by a probabilistic system. Our randomized algorithm requires only a succinct representation of the system and is based on an execution sampling method. We also present an implementation and a few classical examples to demonstrate the effectiveness of our approach.
I'll admit I don't know much about the model checking field, though.
comment by Emile · 2012-09-03T07:48:15.439Z · LW(p) · GW(p)
The reason I expect uploads to have a big impact is not particularly accelerated time - it's the ability to copy, observe and modify the uploads.
If I was an upload with access to my own source code (or access to my data and the source of the software on which I'm running, etc.), I might want to try to run modified versions of myself to see what changes, to see if I can have better short-term memory, or if I can "outsource" any maths calculation to more optimized software, or have introspective access to my emotional reactions or the reasons I believe things, or have better/different senses (directly perceive and modify code and data?), or decrease my learning time, etc.
Replies from: GeraldMonroe↑ comment by GeraldMonroe · 2012-09-03T17:59:59.641Z · LW(p) · GW(p)
What stops you from making a change that is addictive or self-amplifying? For example, suppose a subtle tweak makes you less averse to making another subtle tweak in the same direction. A few thousand iterations later and your network is trashed. http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
It seems to me that the only safe way to do this would be to only permit other uploaded entities to make the edits, working in teams, with careful observation and testing of results. Older versions of yourself might be team members.
Also, the hardware design would need to be extremely well thought out, so that it is not possible for someone to Blue Pill attack you without your knowledge, or directly overwrite your neural structures with someone else's patterns. The hardware would have to be designed with security permissions inherently baked in; here's a blog post where Drexler discusses this:
http://metamodern.com/2011/08/03/quiz-question-what-is-wrong-with-this-model-of-computation/
comment by Risto_Saarelma · 2012-09-03T08:37:35.027Z · LW(p) · GW(p)
Greg Egan just posted an article about his new books, which seem to deal with a somewhat similar setup: The Orthogonal Universe (spoilers for the basic worldbuilding of Egan's Orthogonal trilogy)
Replies from: Baughn, gwern↑ comment by Baughn · 2012-09-03T10:25:36.380Z · LW(p) · GW(p)
There's also a similar concept in Accel World (dealing with a VR world with a speedup of 10,000) and the later novels of Sword Art Online (Alicization, specifically), which is set earlier on the same timeline.
It's not entirely obvious why there are no apparent consequences to society as a whole, but it definitely affects the characters who get trapped there, who end up variously between a decade and several hundred years older than their apparent age.
That universe also makes uploading much easier than ours. They may appear human, but their brains don't work on quite the same principles as ours.
↑ comment by gwern · 2012-11-21T02:40:46.363Z · LW(p) · GW(p)
That's true; The Clockwork Rocket is a clear example of 'exploiting time acceleration for practical use', but note that it does this by either biting bullets or working around them: everyone on the rocket expects to die of aging or worse, and the research is only plausible because a good chunk of the world's researchers are apparently on board.
comment by Desrtopa · 2012-09-06T16:47:02.976Z · LW(p) · GW(p)
Could we use it for regular martial arts training? The DBZ article for the HTC mentions no non-emergency use, and a little thought leads us to conclude that, probably not: inside the HTC, time passes as normal, which means that you don’t save any time. All the HTC is doing is rearranging relative time between groups. If you step in, you still age a full year before stepping out, and you will now die a year early by the realtime calendar. So what’s the point?
Plenty of people would benefit from an opportunity to adjust their life's timeline a bit relative to the rest of the world. To stick with the training premise, you might take a year to recover from an injury and get back up to form in time for the Olympics.
Replies from: flash↑ comment by flash · 2013-03-06T19:00:57.128Z · LW(p) · GW(p)
I recall an episode of Star Trek TNG in which the crew, when entering a time anomaly, affixed a device to their arms in order to anchor them to their own time. I suppose that with a similar anchor to the time outside of the HTC, a person inside the chamber wouldn't age.
Replies from: Desrtopa
comment by mstevens · 2012-09-04T13:49:31.923Z · LW(p) · GW(p)
Reminds me a little of Anathem - it's like a really fast Math.
Replies from: gwern↑ comment by gwern · 2012-09-04T22:06:27.305Z · LW(p) · GW(p)
I found Anathem really unbelievable with the deepest Maths like the Millennials - human social structures & network effects do not work that way! Make 100 Millennials, and if you're lucky, they'll all just be utterly stagnant, having gotten lost in one cul-de-sac. (If you're unlucky, they'll all be dead due to cold or bad social dynamics or something. If you're really unlucky, they'll turn into a warrior cult that slaughters everyone near them.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-04T23:05:07.326Z · LW(p) · GW(p)
I'd gotten about a third of the way through Anathem when the person who lent it to me asked me how I was liking it. I replied that it was fun to read, but I was having serious trouble accepting his world as actually populated by human beings, even for long enough to suspend my disbelief... humans just don't work that way.
He smiled and said "keep reading".
Later, I was amused.
comment by GeraldMonroe · 2012-09-03T17:49:58.656Z · LW(p) · GW(p)
Some specific comments about uploads: a. First, a more reasonable estimate of the speedup is on the order of 10^6 to 10^8. This depends on a few factors: if we assume 5GHz switching speeds and we use dedicated discrete circuits for every single simulated neuron, that would in theory be a speedup of 25 million times versus human switching speeds on the order of 200 Hz. Some neurons are faster than this, however, and you need enough discrete timesteps to account for small differences in arrival times, which are a major factor in signaling. The upper end of the range is likely possible if you use much faster nano-scale components and, again, dedicated hardware circuits for every component. These kinds of speeds are also only practical if the neuroscience understanding is good enough that fine cellular details need not be simulated, merely neural network nodes with a large number of parameters.
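The headline ratio, spelled out (both rates are the assumptions stated in the comment above):

```python
# Assumed rates from the comment above.
transistor_switch_hz = 5e9   # 5 GHz dedicated hardware switching
neuron_firing_hz     = 200   # ~200 Hz typical neural firing

print(transistor_switch_hz / neuron_firing_hz)  # 25,000,000 -- the ~10^7 figure;
# finer simulation timesteps or cellular-level detail would eat into this.
```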
If general purpose CPUs inside a supercomputer are used instead, achieving sim speeds close to realtime is expected to be very difficult to accomplish.
b. Working from the assumption that the speed advantage is 10^6, predicting superhuman capabilities from the uploads seems reasonable. A single uploaded entity wouldn't get just 12 PhDs - they could get every PhD. Presumably, they'd enroll in every PhD program in the world simultaneously, and send some kind of robotic avatar to class. By rapidly switching between many robot avatars, queuing up commands for each one, they could presumably do many slow tasks at once.
c. These kinds of speedups allow levels of micromanagement that do not exist today. For example, suppose that the uploaded entity has a factory that produces some kind of assembly robot. As the factory runs, the entity observes every last motion by the machines in the factory and can adjust settings and motions for optimal performance. So, now the assembly robots are coming off the production line and going to work. The entity might notice a design flaw in the first batch and make dozens of changes to correct the flaw, so 5 minutes later version 1.1 of the bot is the one coming off the line. But 1.1 has other drawbacks, so 5 minutes after that, it's version 1.2.
Or, even more detailed: the entity understands the current task in such fine-grained detail that EVERY robot coming off of the assembly line is slightly customized so that it uses available resources as efficiently as possible.
I do acknowledge the author's basic point. An entity that can think 1 million times faster would not be able to advance technology at 1 million times current speed. R&D cannot be done purely in simulations: physical experiments must be performed, physical prototypes must be built and tested against the real world. However, the speedups would still be large enough that a world with high-speed uploaded entities would soon look very different from a world without them.
comment by Will_Newsome · 2012-09-05T12:45:34.205Z · LW(p) · GW(p)
yo dawg we heard you like hyperbolic time chambers
comment by A1987dM (army1987) · 2012-09-04T22:39:11.102Z · LW(p) · GW(p)
Deadlines? Imagine you have an important exam/audition in two days, but haven't had the time to study/rehearse properly. Then, tomorrow you lock yourself in the chamber with your books/instrument and leave it when you're totally awesome. OTOH, since (IIRC) you are only allowed into the chamber twice in your life (and even if this weren't the case, you don't want to be 35 subjective years old 25 calendar years after the date on your birth certificate), you only do that for things you really care about.
Replies from: gwern, TheOtherDave↑ comment by gwern · 2012-09-05T00:52:50.933Z · LW(p) · GW(p)
Deadlines like that are zero-sum games, and so the impact is limited - it shifts around who wins the exams/auditions at a substantial cost (whatever it takes to use the HTC over just living in the real world). On the macro scale, I'm not sure how much it matters at the margin: if someone needs a HTC just to study for an exam...
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-09-05T04:42:46.800Z · LW(p) · GW(p)
Deadlines like that are zero-sum games
So is poker, but it doesn't mean there's no point in playing it. (Also, they are only zero-sum if the number of candidates who will pass is fixed in advance.)
Replies from: gwern↑ comment by gwern · 2012-09-05T13:35:23.841Z · LW(p) · GW(p)
Well, I think a lot of people play poker who shouldn't...
As for not being zero-sum - that may be true, and I assume your argument is that the additional return justifies the use of the HTC. But if you're not using the speedup aspect but the pocket-universe/precommitment aspect, why not just run cheaper facilities in realtime which approximate prisons for students? They need a week's practice, they enter the prison a week before the audition...
This has the tremendous advantage that we could do it already, right now, in the real world. Yet I've never heard of such a thing.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-09-05T22:22:13.497Z · LW(p) · GW(p)
I think you may have misunderstood my idea. I was thinking that candidates would choose to go into the HTC, not that they would be required to. (And when I said “things you really care about” by “you” I meant candidates, not examiners.) And I wasn't assuming it would only work if you cannot leave the room before the 24 clock hours/12 subjective months -- indeed, it would work better if you could decide to stay as little or as long as you want.
↑ comment by TheOtherDave · 2012-09-04T23:00:09.477Z · LW(p) · GW(p)
I suspect that if I am sufficiently unmotivated to study/rehearse something that I haven't used the available time to do so, I will probably end up not using the time in the chamber terribly efficiently, either.
Replies from: magfrump
comment by [deleted] · 2012-09-03T01:03:51.551Z · LW(p) · GW(p)
If humans don't age, it would be a great way to get some research done.