For a while I thought I had delayed sleep phase syndrome (which is more easily treated with light therapy), and that it's just so severe that the morning sunlight late in my day tends to make it go crazy. It's not quite regular enough for non-24. Or it could be completely irregular.
In any case, light therapy doesn't seem to help at all. I tried it for a month or two and saw no effect. Also, it's a /huge/ inconvenience.
What I'm wondering about a Markov process is whether it could be extended to include other potentially relevant variables. From five minutes reading Wikipedia, it seems like I'd get a combinatorial explosion of states, and the more states, the more data needed to train the model.
So I'd have something like 48 states, one for each half-hour of the day, times 3-4 for sleep lasting 8-11 hours? Would it work to have ordered pairs where the first item is measured in time since my last awakening?
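For concreteness, here's a minimal sketch (with entirely made-up toy data) of the kind of first-order model this would be: states are (half-hour bin, asleep?) pairs, and transition probabilities are just estimated by counting.

```python
from collections import defaultdict

def build_transition_matrix(observations):
    """Estimate first-order Markov transition probabilities by counting.

    observations: one state per half-hour, e.g. (bin 0-47, asleep?).
    Returns {state: {next_state: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(observations, observations[1:]):
        counts[cur][nxt] += 1
    return {
        state: {s: c / sum(nexts.values()) for s, c in nexts.items()}
        for state, nexts in counts.items()
    }

# Toy data (hypothetical): awake for bins 0-2, asleep for bins 3-5.
obs = [(0, False), (1, False), (2, False), (3, True), (4, True), (5, True)]
model = build_transition_matrix(obs)
```

Adding variables like caffeine intake multiplies the state space, which is exactly the combinatorial-explosion problem mentioned above.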
I have karma display turned off (Greasemonkey script). It stresses me out. I think your comment could certainly expand on points 3/4. Really, what I was looking for as a response to the post is a good pointer on what sort of algorithms or tools could potentially give me good results on this problem, to direct my studying, and perhaps what textbooks or introductions I should be reading.
But point 1 is good. I hadn't thought to do that. I was just going to go on common sense, and a kitchen sink approach.
Vyvanse has insomnia listed as a side effect.
Well, Vyvanse is modified amphetamine, so yeah. I also have serious focus problems. I was only on it for a month or so, and found it ineffective for the same reasons as other stimulants. I think in the sleep log I had just taken an isolated pill I had left.
But your advice is good. Going through the options very thoroughly might turn up something.
I have six months of past sleep data, though nothing current, with sleep and wake times. I could easily augment that with other potentially relevant variables, like daily caffeine intake or whatnot.
I use Supermemo daily, and have read everything Wozniak has written about sleep. I've talked to him a couple times about other things (1-2 month response time). I may ask him about this.
It was replaced shortly after, and my back problems promptly dissipated. I had only been sleeping on that mattress for a few weeks at the time, having just thrown away another.
I have tried sedatives, melatonin, melatonin-inducing sleeping aids, traditional sleeping aids, and Ambien (whatever that is). Some have no effect, some put me to sleep but leave me unrested, and some put me to sleep and leave me unrested and incredibly groggy for the rest of the day. Generally speaking, trying to shift your sleep schedule by more than 1-2 hours using sleep aids doesn't work. If your circadian rhythm keeps advancing anyway, the results are just like a normal person trying to go to bed at noon using sleep aids.
a lot of different ways to use them
Can you expand on this?
I suppose I could shop around for a doctor willing to prescribe modafinil for my sort of sleep problems. I have thought of trying it in the past, but that's pretty far off-label.
"Everything" includes having read all current medical literature, which all says that severe circadian rhythm disorders are basically untreatable, and having one sleep doctor basically give up. I could also try more sleep doctors, I suppose.
More like, "here's the times I went to sleep and woke up in the previous month. What can I expect today?" Hopefully including the effects of caffeine, delayed sleep, early awakening, etc. My sleep may sort of follow a cycle, but it's not regular enough that knowing the cycle would be that useful.
Here's the raw data for 6 months or so last year: Data.
EDIT: I was unemployed during this period, and not using an alarm regularly, so I was sleeping exactly when I felt like it. If I was working it would look much different.
I wouldn't exactly call it a median. It trends forward every day, eventually wraps around, but it doesn't spend much time at all around 2-8 AM, due to sunlight keeping me awake when I'd otherwise go to sleep in late morning or afternoon.
Besides, having a tool that could forecast my sleep patterns given different variables would allow me to understand the interactions of those variables and ultimately would allow me to take control of my sleep patterns.
These don't work for me. The details are boring.
"I find it impossible to wake up at a consistent time every day (+/- 8 hours), despite years of trying"
In other words, I've tried everything else.
What about the PocketPro II? It draws 240 mA, so a 1 Ah external battery gets you about 4 extra hours.
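That figure checks out as a back-of-envelope calculation:

```python
# Sanity-checking the runtime figure: hours = capacity (mAh) / draw (mA).
capacity_mah = 1000  # a 1 Ah external battery
draw_ma = 240        # the quoted PocketPro II draw
hours = capacity_mah / draw_ma
print(round(hours, 2))  # → 4.17
```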
I've been doing audio-only with a $40 dictaphone from Wal-Mart that fits in my pocket. It averages 150-200 MB a day. I generate hashes of each file and timestamp them so they're more likely to be useful if I ever need them for proof of something.
The thing that prompted me to start doing this was frequent arguments with close ones that often got down to "you said this", "no I didn't" type of stuff. It's oddly very assuring to have this recording. (FTR, I used it for that purpose more or less once. Although I find it useful for recording therapy sessions too.)
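In case anyone wants to replicate the hashing setup, here's a rough sketch of one way to do it (the manifest filename is made up). Note that a purely local timestamp only proves ordering to yourself; for third-party proof you'd want to publish the hashes somewhere public.

```python
import hashlib
import os
from datetime import datetime, timezone

def hash_file(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large recordings fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def append_manifest(paths, manifest="hashes.log"):
    """Append 'timestamp  sha256  filename' lines to a running manifest."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(manifest, "a") as out:
        for p in paths:
            out.write(f"{stamp}  {hash_file(p)}  {os.path.basename(p)}\n")
```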
I remember, when first reading this article, that it was really convincing and compelling. I looked it up again because I wanted to be able to make the argument myself, and now I find that I don't understand how you can get from "if the staid conventional normal boring understanding of physics and the brain is correct" to "there's no way in principle that a human being can concretely envision, and derive testable experimental predictions about, an alternate universe in which things are irreducibly mental." That seems like too large a jump for me. Any help?
I thought a lot about creating such a system and how it would look a number of years ago, but never made any good progress on it. The point where I got stuck was taking a particular blog post with lots of debate in the comments and trying to dissect it in different ways to see what ended up being most useful. I found I didn't have the focus to do so.
Anyway, there's Truth Mapping, which I think sucks for quite a number of reasons.
I came across a few cites supporting the "quite a bit" answer in the "Cold War" article at Alcor (linked elsewhere on this thread).
It is interesting and more than a little ironic to note that fifteen years prior to the time that Persidsky wrote the words above, a large and growing body of evidence was already present in the scientific literature to discredit the "suicide-bag concept" of lysosomal rupture resulting in destruction of cells shortly after so-called death. I cite below papers debunking this notion:
Trump, B.F., P.J. Goldblatt, and R.E. Stowell, "Studies of necrosis in vitro of mouse hepatic parenchymal cells; ultrastructural and cytochemical alterations of cytosomes, cytosegresomes, multivesicular bodies, and microbodies and their relation to the lysosome concept," Lab. Invest., 14, 1946 (1965).
Ericsson, J.L.E., P. Biberfeld, and R. Seljelid, "Electron microscopic and cytochemical studies of acid phosphatase and aryl sulfatase during autolysis," Acta Patho Microbio Scand, 70, 215 (1967).
Trump, B.F. and R.E. Bulger, "Studies of cellular injury in isolated flounder tubules. IV. Electron microscopic observations of changes during the phase of altered hemostasis in tubules treated with cyanide," Lab Invest, 18, 731 (1968).
Eight years before Persidsky pronounced the situation hopeless due to lysosome rupture after death, an excellent and exhaustive paper appeared, entitled "Lysosome and phagosome stability in lethal cell injury" (Hawkins, H.K., et al., Amer. Jour Path., 68, 255 (1972)). The authors subjected human liver cells in tissue culture to lethal insults such as cyanide poisoning and then evaluated them for lysosomal rupture. They state: "In conclusion, the findings do not indicate that the suicide bag mechanism of lysosomal rupture prior to cell death was operative in the two systems studied. On the contrary, the lysosomes appeared to be relatively stable organelles which burst only in the post-mortem phase of cellular necrosis." And when does this "post-mortem phase of cellular necrosis" occur? Again, to quote from the Hawkins paper: "As late as four hours after potassium cyanide and iodoacetic acid poisoning, where irreversible structural changes were uniformly seen, it was clear that the great majority of lysosomes continued to retain the ferritin marker within a morphologically intact membrane . . ." To translate: even four hours after poisoning with drugs that mimic complete ischemia, the cells had stable lysosomes.
There's more at the link.
I forget who brought this up--maybe zero_call? jhrandom?--but I think a good question is "How quickly does brain information decay (e.g. due to autolysis) after the heart stops and before preservative measures are taken?" If the answer is "very quickly" then cryonics in non-terminal-illness cases becomes much less effective.
Here's another one. When reading Wikipedia on Chaitin's constant, I came across an article by Chaitin from 1956 (EDIT: oops, it's 2006) about the consequences of the constant (and its uncomputability) for the philosophy of math, which seems to me to be completely wrongheaded, but for reasons I can't put my finger on. It strikes the same chords in me that a lot of inflated talk about Gödel's second incompleteness theorem strikes. (And indeed, as is obligatory, he mentions that too.) I searched on the title but didn't find any refutations. I wonder if anyone here has any comments on it.
I may be stretching the openness of the thread a little here, but I have an interesting mechanical engineering hobbyist project, and I have no mechanical aptitude. I figure some people around here might, and this might be interesting to them.
The Avacore CoreControl is a neat little device, based on very simple mechanical principles, that lets you exercise longer and harder than you otherwise could by cooling down your blood directly. It pulls a slight vacuum on your hand and directly applies ice to the palm. The vacuum counteracts the vasoconstriction effect of cold and makes the ice effective.
I'm mainly interested in building one because I play a lot of DDR, but anyone who gets annoyed with how quickly they get hot during exercise could use one.
I called the company, and they sell the device for $3000 (and they were very rude to me when I suggested making hobbyist plans available), but given the simplicity of the principles, it should be easy to build one using stuff from a hardware store for under $200. I have a post about it on my blog here.
As it was mocking bgrah's assertion, and bgrah used "unrational", and in my estimation his meaning was closer to "irrational" than "arational", I used the former. Perhaps using "unrational" would have been better, though.
Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow.
I don't think any such agreement could be legally binding under current law, which is relevant since we're talking about rights.
Disliking Pollock is irrational. As is disliking Cage. Or Joyce. Or PEZ.
Hyper operators. You can represent even bigger numbers with Conway chained arrow notation. Eliezer's 3^^^^3 is a form of hyper operator notation, where ^ is exponentiation, ^^ is tetration, ^^^ is pentation, etc.
If you've ever looked into really big numbers, you'll find info about Ackermann's function, which is trivially convertible to hyper notation. There are also Busy Beaver numbers, which grow faster than any computable function.
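For small arguments the recursion is easy to write down. Here's a sketch of Knuth's up-arrow notation (equivalent to the hyper operators, modulo indexing conventions); anything much bigger than these examples won't terminate in any reasonable time, which is rather the point.

```python
def arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: n=1 is exponentiation,
    n=2 is tetration, n=3 is pentation, and so on."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # a ^^...^ 0 = 1 by convention
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))  # 3^3 = 27
print(arrow(2, 2, 3))  # 2^^3 = 2^(2^2) = 16
print(arrow(2, 3, 3))  # 2^^^3 = 2^^(2^^^2) = 2^^4 = 65536
```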
Umm, that's not what I meant by "faithful reproductions", and I have a hard time understanding how you could have misunderstood me. Say you took a photograph using the exact visual input over some 70 square degrees of your visual field, and then compared the photograph to that same view, trying to control for all the relevant variables*. You seem to be saying that the photograph would show the shadows as darker, but I don't see how that's possible. I am familiar with the phenomenon, but I'm not sure where I go wrong in my thought experiment.
* photo correctly lit, held so that it subtends 70 square degrees of your visual field, with your head in the same place as the camera was, etc.
Along the same lines, this is why cameras often show objects in shadows as blacked out -- because that's the actual image it's getting, and the image your own retinas get! It's just that your brain has cleverly subtracted out the impact of the shadow before presenting it to you
That doesn't explain why faithful reproductions of images with shadows don't prompt the same reinterpretation by your brain.
I am fairly sure, though I haven't been able to refind a link, that there's some solid evidence that autolysis isn't nearly that quick or severe.
Hmm. I can with the necker cube, but not at all with this one.
For people wanting different recordings of the garbled/non-garbled: it's right on the page right above the one Morendil linked to.
On the next sample, I only caught the last few words on the first play (of the garbled version only), and after five plays still got a word wrong. On the third, I only got two words the first time, and additional replays made no difference. On the fourth, I got half after one play, and most after two. On the fifth, I got the entire thing on the first play. (I'm not feeling as clear-headed today as I was the other day, but it didn't feel like a learning effect.) On some of them, I don't believe that even with a lot of practice I could ever get it all right, since some garbled words sound more like other plausible words than they do the originals.
Thinking about it more, it's a bit surprising that I did well. I generally have trouble making out speech in situations where other people don't have quite as much trouble. I'll often turn on subtitles in movies, even in my first language/dialect (American English). (In fact, I hate movies where the speech is occasionally muffled and there are no subtitles--two things that tend to go hand in hand with smaller production budgets.) OTOH, I have a good ear in general. I've had a lot of musical training, and I've worked with sound editing quite a bit.
Well, that was the big controversy over the AI Box experiments, so no need to rehash all that here.
This isn't actually a case of pareidolia, as the squiggly noises (they call it "sine wave speech") are in fact derived from the middle recording, using an effect that sounds, to me, most like an extremely low bitrate mp3 encoding. Reading up on how they produce the effect, it is in fact a very similar process to mp3 encoding. (Perhaps inspired by it? I believe most general audio codecs work on very similar basic principles.)
My problem with CEV is that who you would be if you were smarter and better-informed is extremely path-dependent. Intelligence isn't a single number, so one can increase different parts of it in different orders. The order people learn things in, and how fully they integrate that knowledge, and what incidental declarative/affective associations they form with the knowledge, can all send the extrapolated person off in different directions. Assuming a CEV-executor would be taking all that into account, and summing over all possible orders (and assuming that this could be somehow made computationally tractable) the extrapolation would get almost nowhere before fanning out uselessly.
OTOH, I suppose that there would be a few well-defined areas of agreement. At the very least, the AI could see current areas of agreement between people. And if implemented correctly, it at least wouldn't do any harm.
Hmm. I got the meaning of the first section of the clip the first time I heard it. OTOH, that was probably because I looked at the URL first, and so I was primed to look at the content that way.
Here's an algorithm that I've heard is either really hard to derandomize, or has been proven impossible to derandomize. (I couldn't find a reference for the latter claim.) Find an arbitrary prime between two large numbers, say 10^500 and 10^501. The problem with searching sequentially is that there are arbitrarily long stretches of composites among the naturals, and if you start somewhere in one of those you'll spend a lot more time before you get to the end of the stretch.
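A sketch of the randomized version: sample candidates uniformly and test each with Miller-Rabin, rather than scanning sequentially. (The demo uses a range around 10^50 rather than 10^500 just so it runs quickly; the same code works for the larger range.)

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime_in_range(lo, hi):
    """Sample random candidates instead of scanning sequentially,
    sidestepping the long composite stretches described above."""
    while True:
        n = random.randrange(lo, hi) | 1  # force odd
        if is_probable_prime(n):
            return n

p = random_prime_in_range(10**50, 10**51)
```

By the prime number theorem, primes near 10^500 have density about 1/ln(10^500) ≈ 1/1151, so the expected number of samples stays modest even at that size.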
I agree that this argument depends a lot on how you look at the idea of "evidence". But it's not just in the court-room evidence-set that the cryonics argument wouldn't pass.
Yes, that's very true. You persuasively argue that there is little scientific evidence that current cryonics will make revival possible.
But you are still conflating Bayesian evidence with scientific evidence. I wonder if you could provide a critique that says we shouldn't be using Bayesian evidence to make decisions (or at least decisions about cryonics), but rather scientific evidence. The consensus around here is that Bayesian evidence is much more effective on an individual level, even though with current humans science is still very much necessary for overall progress in knowledge.
Of course, every perfect-information deterministic game is "a somewhat more complex tic-tac-toe variant" from the perspective of sufficient computing power.
Yeah, sure. And I have a program that gives constant time random access to all primes less than 3^^^^3 from the perspective of sufficient computing power.
So you know how to divide the pie? There is no interpersonal "best way" to resolve directly conflicting values. (This is further than Eliezer went.) Sure, "divide equally" makes a big dent in the problem, but I find it much more likely any given AI will be a Zaire than a Yancy. As a simple case, say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal 1. I mean, there are plenty of cases where there's more overlap and orthogonal values, but this kind of conflict is unavoidable between any reasonably complex utility functions.
I don't have a problem with that usage. 0% or 100% can be used as a figure of speech when the proper probability is 0+x or 1-x for some x < 10^-n, with n suitably large (4 or so). If others are correct that probabilities that small or large don't really have much human meaning, getting x closer to 0 in casual conversation is pretty much pointless.
Of course, a "~0%" would be slightly better, if only to avoid the inevitable snarky rejoinder.
Third, re senile dementia, there is the possibility of committing suicide and undergoing cryonics.
Eh. At least when you're alive, you can see nasty political things coming. At least from a couple meters off, if not kilometers. Things can change a lot more when you're vitrified in a canister for 75-300 years than they can while you're asleep. I prefer Technologos' reply, plus that economic considerations make it likely that reviving someone would be a pretty altruistic act.
I automatically think of 8-year-olds if it's not very clear who's being referred to.
Right. "Girl" really has at least two distinct senses, one for children and one for peers/juniors of many ages. "Guy" isn't used in the first sense, and the second sense of "boy" is more restricted. The first sense of "boy"/"girl" is the most salient one, and thus the default absent further context. I don't think the first sense needs to poison the second one. But its use in the parent comment of this discussion wasn't all that innocent. (I've been attacked before, by a rather extreme feminist, for using it innocently.)
"Child" is probably never OK for people older than 12-13, but "girl", "guy", and occasionally "boy" are usually used by teens, and often by 20-somethings to describe themselves or each other. ("Boy" usually by females, used with a sexual connotation.)
I would really like someone to expand upon this:
Understanding and complying with ownership and beneficiary requirements of cryonics vendors is often confusing to insurance companies, and most insurance companies will consequently not allow the protocols required by cryonics vendors. Understanding and complying with your cryonics organization requirements is confusing and often simply will not be done by most insurance companies.
I only call green numbers probabilities.
I find the likelihood of someone eventually doing this successfully to be very scary. And more generally, the likelihood of natural selection continuing post-AGI, leading to more Hansonian/Malthusian futures.
Well, yes, I assumed that was the motivation. On the other hand, Thomas Donaldson. They actually went to court with him against California to support his "suicide". (They ended up losing. The court said it was a matter for the legislature.) And what I'm asking only amounts to figuring out the best way to avoid autopsy.
EDIT: Actually, Alcor probably wasn't involved directly in the case. I forget where I read that they were; I probably didn't read it. But anyway, the overall publicity from the case was positive for Alcor.
I would be very surprised if uploading was easier than AI
Do you mean "easier than AGI"? Why? With enough computing power, the hardest thing to do would probably be to supply the sensory inputs and do something useful with the motor outputs. With destructive uploading you don't even need nanotech. It doesn't seem like it requires any incredible new insights into the brain or intelligence in general.