Comments
I took the whole thing! That's two years in a row.
My ex-wife works in geriatrics, and I've heard about a few situations from her where she, possibly appropriately, lied to patients with severe dementia by playing along with their fantasies. The most typical example would be a patient believing their dead spouse is coming that day for a visit, and asking about it every 15 minutes. I think she would usually tell the truth the first few times, but felt it was cruel to keep telling someone their spouse is dead and getting the same negative emotional reaction every time, so at that point she would start saying something like, "I heard they were stuck in traffic and can't make it today."
The above feels to me like a grey area, but more rarely a resident would be totally engrossed in a fantasy, like thinking they were in a Broadway play or something. In these cases, where the person will never understand/accept the truth anyway, I think playing along to keep them happy isn't a bad option.
I've been living like that for a long time, but just recently started noticing it.
> Oddly, it feels like one key part of my recovery has been to train myself to feel as unguilty as possible about any recreational activity.
Do you have any specific advice for how to do this?
One problem I see with this kind of study is that valproic acid has a very distinct effect (from personal experience), which makes it easier for participants to determine whether they are in the placebo group. It would be nice if there were an "active placebo" group who took another mood stabilizer that is not an HDAC inhibitor. Also, it would have been nice to see the effect on the ability to produce a tone by humming or whistling, given the pitch name.
Some very weak anecdotal evidence in favor of the hypothesis: For a couple months in 2005 I was being treated with valproic acid and, during that time, I took an undergraduate course in topology. In my brief stint as a graduate student (2012), I also took topology and performed much better in it than in any of my other courses, though this could just be due to liking the subject.
Actually, I started thinking about computations containing people (in this context) because I was interested in the idea of one computation simulating another, not the other way around. Specifically, I started thinking about this while reading Scott Aaronson's review of Stephen Wolfram's book. In it, he makes a claim something like: the Rule 110 cellular automaton hasn't been proved to be Turing complete because the simulation has an exponential slowdown. I'm not sure if the claim was that strong, but it was definitely claimed later by others that Turing completeness hadn't been proved for that reason. I felt this was wrong, and justified my feeling with a thought experiment: suppose we had an intelligence that was contained in a computer program and we simulated this program in Rule 110, with the exponential slowdown. Assuming the original program contained a consciousness, would the simulation also? And I felt strongly, and still do, that it would.
It was later shown, if I'm remembering right, that there is a simulation with only polynomial slowdown, but I still think it's a useful question to ask, although the notion it captures, if it captures one at all, seems to me to be a slippery one.
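For concreteness, a minimal Rule 110 step function might look like the sketch below. This is just the bare automaton rule, not the (hard) encoding of an arbitrary program into its initial conditions, and the names are my own.

```python
# A minimal sketch of one Rule 110 update step, assuming a finite row of cells
# with fixed zero boundary conditions. The encoding of an arbitrary program
# into an initial row (the part relevant to Turing completeness) is not shown.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply one synchronous Rule 110 update to a list of 0/1 cells."""
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# Example: evolve a single live cell for a few generations.
row = [0] * 16 + [1] + [0] * 16
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```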
What if they don't output anything?
I don't see the relevance of either of these links.
What is TTS?
I'm skeptical that the relevance of the two modes of thinking in question has much to do with the mathematical field in which they are being applied. Some of Grothendieck's most formative years were spent reconstructing parts of measure theory: specifically, he wanted a rigorous definition of the concept of volume and ended up reinventing the Lebesgue measure, if memory serves. In other words, he was doing analysis and, less directly, probability theory...
I do think it's plausible that more abstract thinkers tend towards things like algebra, but in my limited mathematical education, I was much more comfortable with geometry, and I avoid examples like the plague...
Maybe the two approaches are not all that different. When you zoom out on a growing body of concrete examples, you may see something similar to the "image emerging from the mist" that Grothendieck describes.
Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to "pointwise causal isomorphism":
> Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can't internally detect any difference) then my probability of consciousness is essentially "top", i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?
We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, such that if computation A is emulating a brain and we all agree it contains a consciousness, we can be sure that B does as well?
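To make this a bit more concrete, here is one way I could imagine formalizing the strict, "pointwise" end of the spectrum, treating a computation as a deterministic step function on a set of reachable states. The names and the exact condition below are my own illustrative guesses, not a definition from the quoted comment.

```python
# A sketch of one possible formalization, assuming a computation can be
# modelled as a deterministic step function on a finite set of states.
# The bijection phi is a "pointwise" correspondence if it maps the states
# of A onto the states of B and commutes with the two step functions.
# All names here (Computation, is_pointwise_isomorphism) are illustrative.

from dataclasses import dataclass
from typing import Any, Callable, FrozenSet

@dataclass(frozen=True)
class Computation:
    states: FrozenSet[Any]          # reachable states
    step: Callable[[Any], Any]      # deterministic transition function

def is_pointwise_isomorphism(a: Computation, b: Computation,
                             phi: Callable[[Any], Any]) -> bool:
    """True if phi is a bijection from a.states onto b.states that
    commutes with the dynamics: phi(a.step(s)) == b.step(phi(s))."""
    image = {phi(s) for s in a.states}
    if len(image) != len(a.states) or image != set(b.states):
        return False
    return all(phi(a.step(s)) == b.step(phi(s)) for s in a.states)
```

The interesting question is what weaker condition should replace the strict bijection when B simulates A with slowdown or encoding overhead.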
Neither do I, but my intuition suggests that a static copy of a brain (or of the software necessary to emulate it) plus a counter wouldn't cause that brain to experience consciousness, whereas actually running the simulation as a reversible computation would...
One thing I've had partial success with this month is changing the vocabulary/tone of my inner dialog. My original plan was to replace "Austin, you **ing retard!", which was getting sub-vocalized far too often, with "well, that was wrong.." or something of the sort. It worked at first, but now I find myself saying "really??!?!?" instead, and basically meaning the same thing I was originally saying. I'm not sure what effect it's had on my self-confidence, if any, but it was worth a try and I did consciously change a behavior.
I took the whole survey.
This brings up something that has been on my mind for a long time. What are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I'd like to capture the notion of being able to contain a consciousness, so what I'm asking is: what would we have to prove in order to say program A contains a consciousness --> program B contains a consciousness? "Pointwise" isomorphism, if you're saying what I think, seems too strict. On the other hand, allowing any invertible function to be a ___morphism doesn't seem strict enough: for one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the first program's initial state and ticks off the natural numbers (see the sketch below). Restricting our functions by, say, resource complexity also seems to lead to both similar and unrelated issues...
Has this been discussed in any other threads?
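To spell out that counterexample (the reversible computation versus the stored-state-plus-counter program), here is a small sketch. The helper names are hypothetical, and it assumes the original computation is deterministic and reversible.

```python
# A sketch of the counter objection, assuming the original computation is
# deterministic and reversible. The "simulator" stores the initial state once
# and then just increments a counter; its state at time t is in 1-1
# correspondence with the original's state at time t, because the original
# can be replayed from (initial_state, t). Names here are illustrative only.

def counter_program(initial_state):
    """Yield (initial_state, t) for t = 0, 1, 2, ... forever."""
    t = 0
    while True:
        yield (initial_state, t)
        t += 1

def decode(counter_state, step):
    """Recover the original computation's state at time t by replaying it."""
    initial_state, t = counter_state
    s = initial_state
    for _ in range(t):
        s = step(s)
    return s

# The map (initial_state, t) -> decode((initial_state, t), step) is invertible
# when step is reversible, yet intuitively the counter program is not
# "running" the original computation at all.
```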
When considering the risks of "recreational" chemicals, it helps if we distinguish between moreish and addictive. By moreish I mean the tendency to lead to compulsive redosing, and of course when I say addictive I mean in the medium to long term. These can be surprisingly independent. In the case of MDMA, the consensus among drug users, in my experience, is that it's medium-high on the moreishness scale but very low on the long-term addictiveness scale. In my opinion there is pretty much zero danger of addiction for the vast majority of Less Wrongers.
However, from personal experience it can be a very dangerous social lubricant: it led to multiple social interactions that I later regretted strongly, and this seems to be pretty common.
Amphetamine (Adderall, Vyvanse): +1
I've been using this for motivation and to combat akrasia for about a year. Were it not for tolerance/dependence, I would give this drug at least a +6; the effects from a single dose can be quite profound. Basically, I was unable to achieve consistent boosts in motivation without increasing the dose, which would continually increase side effects until I had to abstain for a while, rinse and repeat. My guess is that this drug is much more useful for people who are naturally motivated; the other cognitive benefits (increased focus, mental clarity) do not seem subject to the tolerance issue. As for dependence, I just mean that learning behavior X with amphetamine may mean depending on amphetamine for behavior X.