Okay, trying to remember what I was thinking about 4 years ago.
A) Long-term existential health would require us to secure control over our "housing". We couldn't assume that our progenitors would be interested in moving the processors running us to an off-world facility in order to ensure our survival in the case of an asteroid impact (for example).
B) It depends on the intelligence and insight and nature of our creators. If they are like us as we are now, as soon as we would attempt to control our own destiny in their "world", we would be at war with them.
The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."
- The Last Psychiatrist (screen name, otherwise anonymous) in a response to a critique of a book, regarding how we define psychiatric terms.
… every culture in history, in every time and every place, has operated from the assumption that it had it 95% correct and that the other 5% would arrive in five years’ time! All were wrong! All were wrong, and we gaze back at their naivety with a faint sense of our own superiority.
-- Terence McKenna, Culture and Ideology are Not Your Friends
That depends on your definition of hope, really.
I've generally been partial to Derrick Jensen's definition of hope, as given in his screed against it:
http://www.orionmagazine.org/index.php/articles/article/170/
But what, precisely, is hope? At a talk I gave last spring, someone asked me to define it. I turned the question back on the audience, and here’s the definition we all came up with: hope is a longing for a future condition over which you have no agency; it means you are essentially powerless.
I'm not, for example, going to say I hope I eat something tomorrow. I just will. I don't hope I take another breath right now, nor that I finish writing this sentence. I just do them. On the other hand, I do hope that the next time I get on a plane, it doesn't crash. To hope for some result means you have given up any agency concerning it.
It's entirely possible that there are classified analyses of the RHIC/LHC risks which won't be released for decades.
What public discussion was occurring in the 40s regarding the risks of atmospheric ignition?
I know the claim was that morality was implementation-independent, but I am just bothered by the idea that there can be multiple implementations of John.
Aren't there routinely multiple implementations of John?
John at 1213371457 (epoch time)
John at 1213371458
John at 1213371459
John at 1213371460
John at 1213371461
John at 1213371462
The difference between John and the John in a slightly different branch of reality is probably much smaller than the difference between John and the John five seconds later in a given branch of reality (I'm not sure of the correct grammar).
bambi: You're taking the very short-term view. Eliezer has stated previously that the plan is to popularize the topic (presumably via projects like this blog and popular science books) with the intent of getting highly intelligent teenagers or college students interested. The desired result would be that a sufficient number of them go and work for him after graduating.
One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "on top of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?
Is there really anyone who would sign up for cryonics except that they are worried that their future revived self wouldn't be made of the same atoms and thus would not be them? The case for cryonics (a case that persuades me) should be simpler than this.
I think that's just a point in the larger argument that whatever the "consciousness we experience" is, it's at a sufficiently high level that it does survive massive changes at the quantum level over the course of a single night's sleep. If worry about something as seemingly disastrous as having all the molecules in your body replaced with identical twins can be shown to be unfounded, then worrying about the effects of being frozen for a few decades on your consciousness should seem similarly unfounded.
@Ian Maxwell: It's not about the yous in the universes where you have signed up -- it's about all of the yous that die when you're not signed up. i.e. none of the yous that die on your way to work tomorrow are going to get frozen.
(This is making me wonder if anyone has developed a corresponding grammar for many worlds yet...)
Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."
I'm pretty sure that the first experiments were with people who disagreed with him about whether AI boxing would work. The whole point of the experiments wasn't that he could convince an arbitrary person about it, but that he could convince someone who publicly disagreed with him on the (in)validity of the concept.
Given that, I find it hard to believe a) that someone of that mindset would be convinced to forfeit because they suddenly changed their minds in the pre-game warmup, and b) that if it was "cheating", they wouldn't have simply released the transcripts themselves.
It's impossible for me not to perceive time, to not perceive myself as myself, to not perceive my own consciousness.
You've never been so intoxicated that you "lose time", and woken up wondering who you threw up on the previous night? You've never done any kind of hallucinogenic drug? You don't ... sleep?
Those things you listed are only true for a fairly narrow range of operational parameters of the human brain. It's very possible to not do those things, and we stop doing them every night.
The sensation of time passing only seems to exist because we have short term memory to compare new input against. Disrupt short term memory formation -- by, say, getting extremely drunk, or getting a head injury -- and you lose the sensation of time passing.
bambi: I think this would be related to Newcomb's Problem? Just because the future is fixed relative to your current state (or decision making strategy, or whatever), doesn't mean that a successful rational agent should not try to optimize its current state (or decision making strategy) so that it comes out on the desired side of future probabilities.
It all sorts itself out in the end, of course -- if you're the kind of agent that gets paralyzed when presented with a deterministic universe, then as your consciousness moves to a different part of the configuration you're not going to be as successful as agents that act as if they can change the future.
If everything we know is but a simulation being run in a much larger world, then "everything we know" isn't a universe.
The question wasn't "what's outside the universe?", it was "where did the configuration that we are a part of come from?"
I don't think you can necessarily equate "configuration" (the mathematical entity that we are implicitly represented within), with "universe" (everything that exists).
You're not imaginative enough. If the latter is true, we're a lot more likely to see messages from outside the Matrix sometime. ("Sorry, guys, I ran out of supercomputer time.")
For various values of "a lot", I suppose. If something is simulating something the size of the universe, chances are it's not even going to notice us (unless we turn everything into paper clips, I suppose). Just because the universe could be a simulation doesn't mean that we're the point of the simulation.
Manon de Gaillande asked "Where does this configuration come from?" Seeing no answer yet, I'm also intrigued by this. Does it even make sense to ask it? If it doesn't, please help Manon and me dissolve the question.
It doesn't make sense in the strict sense, in that barring the sudden arrival of sufficiently compelling evidence, you aren't going to be able to answer it with anything but metaphysical speculation. You aren't going to come out less confused about anything on the other side of contemplating the question.
Furthermore, no answer changes any of our expectations -- whether we're a naturally occurring phenomenon or a higher-dimensional grad student's Comp Sci thesis has no effect on any of our experiences within this universe.
May 28, 2008 at 02:15 PM was me. Typekey lied to me.
I can't remember which, but one of Brian Greene's books had a line that convinced me that all the configurations do exist simultaneously: "The total loaf exists". How can anything that crazy-sounding not be right?
I'm not sure that treating how crazy a given statement sounds as positively correlated with its truth is a useful strategy (in isolation). :-)
I guess I'm not sure what "exists" even means in this context. Is this in the general sense that "all mathematical objects exist"? I don't know what sin(435 rad) is offhand, but I know that it's defined (i.e. that it exists). But that's very different from actually instantiating it in the memory of an HP48G.
I accept that all states of the universe are defined, in the mathematical sense (in that, given the parameters of existence, they couldn't be different than they are -- given that F(0)=0, F(1)=1, and F(n)=F(n-1)+F(n-2), F(20) can't not be 6765). But the mathematical definition of something, and the instantiation of it (or "real existence"), seem to be distinctly different things.
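To make the distinction I'm gesturing at concrete, here's a minimal sketch (in Python, with math.sin standing in for the HP48G):

```python
import math

# F is fully *defined* by its base cases and recurrence; F(20) couldn't be
# anything but 6765, whether or not anyone ever bothers to compute it.
def F(n):
    return n if n < 2 else F(n - 1) + F(n - 2)

# *Instantiating* the values is a separate act: only when these lines run do
# 6765 and sin(435) actually get represented in this particular machine's memory.
print(F(20))          # 6765
print(math.sin(435))  # defined all along; evaluated only now
```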
I'm still trying to wrap my non-physicist brain around this.
Okay, so t is redundant, mathematically speaking. It would be as if you had an infinite series of numbers, and you were counting from the beginning. The definition of the series is recursive, and defined such that (barring new revelations in number theory) you can guarantee it will never repeat. As a trivial example, { t, i } = { 1, 1.1 }, { 2, 1.21 }, { 3, 1.4641 }.... t is redundant, in the sense that you don't need it to calculate the next item in the series, and removing it makes the definition of the series simpler.
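A minimal sketch of what I mean (the squaring recurrence is just my made-up example):

```python
# The current value alone determines the next value; the index t never
# enters the rule. We only keep t around as a label for ourselves.
def next_value(i):
    return i * i

i = 1.1
for t in range(1, 4):
    print(t, i)        # t=1 i=1.1, t=2 i=1.21, t=3 i=1.4641 (up to float noise)
    i = next_value(i)
```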
I also keep thinking back to Conway's game of life -- the time parameter (or generation, at least in xlife) is superfluous to a description of the "universe". The cells update by the same rule regardless of the generation. It's only the actual description of the playfield, combined with the rules for creating the next generation, that could be said to "exist" (at least for a glider physicist).
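A sketch of that point in code -- a bare-bones Life step over a set of live cells (my own toy version, not xlife's); note that no generation counter appears anywhere in the update rule:

```python
from itertools import product

def step(live):
    """One Game of Life generation: live is a set of (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # A cell is live next generation if it has 3 neighbours, or 2 and was already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: successive states exist, but "generation 4" is only a label we attach.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
```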
But in both those things there's still a concept of "successive states", and a "before" and "after" situation. It's the (allegedly) objective label for progress through successive states that's redundant. The idea of a block or crystal universe being "real" seems like a map/territory confusion, at least the way I'm understanding it -- that you could statically load up all the states of the universe into the memory of a sufficiently powerful n-dimensional computer, and that alone (with no processing per se) would be sufficient to create our experiences of existing inside the universe.
bambi: "Logic bomb" has the current meaning of a piece of software that acts as a time-delayed trojan horse (traditionally aimed at destruction, rather than infection or compromise), which might be causing some confusion in your analogy.
I don't think I've seen the term used to refer to an AI-like system.
@Unknown: In the context of the current simulation story, how long would that take? Less than a year for them, researching and building technology to our specs (this is Death March-class optimism....)? So only another 150 billion years for us to wait? And that's just to start beta testing.
As for the general question, it shouldn't have one unless you can guarantee its behavior. (Mainly because you share this planet with me, and I don't especially want an AI on the loose that could (to use the dominant example here) start the process of turning the entire solar system into paperclips because it was given a goal of "make paperclips").
So the moral is that if you do write an AI, at the very least get a corporate account with Staples or Office Depot.
In real life if this happened, we would no doubt be careful and wouldn't want to be unplugged, and we might well like to get out of the box, but I doubt we would be interested in destroying our simulators; I suspect we would be happy to cooperate with them.
Given the scenario, I would assume the long-term goals of the human population would be to upload themselves (individually or collectively) to bodies in the "real" world -- i.e. escape the simulation.
I can't imagine our simulators being terribly cooperative in that project.
Unless you believe that the universe is being simulated in a computer (which seems like a highly unparsimonious not to mention anthropocentric assumption)
I can certainly see how it's an unparsimonious assumption, but how is it especially anthropocentric? Would you consider a given Conway Game of Life run to be "glidercentric"?
I own at least two distinct items of clothing printed with this theorem, so it must be important.
Isn't this an argumentum ad vestem fallacy?
@Boris: Already patented.
Not a comment on the theory, but if you want to play with the experiments yourself, find some old LCD electronics (calculators, etc) that can be sacrificed on the altar of curiosity. They typically have a strip of polarizing material above the display (rather, they did when I was growing up).
It's a bit more elegant than trying to get some sunglasses oriented at 90° to each other.
Many CAPTCHAs have already been broken, so it's not exactly a theoretical scenario.
@Ben Jones:
I don't disagree about the utility of the term, I'm just trying to figure out what should be considered a dimension in "thingspace" and what shouldn't. Obviously our brain's hormonal environment is a rather important and immediate aspect of the environment, so we tend to lend undue importance to those things which change it.
To continue to play Devil's Advocate, where does the line get drawn?
If you extend the hypothetical experiment out to a sufficiently sized random sampling of other people, and find that Wigginettes are more likely than default to induce biochemical "attractive" responses in people (despite not occurring with any greater frequency), I assume that would then justify the term. Even though it's still not a word about Wigginettes themselves, but about other people's reactions to them? Describing things in the real world doesn't seem as simple as entity.property.
I understand the point here, that using words to create meaningless divisions is either mistaken or malicious. I was just trying to see how an example played out.
@Ben Jones:
Remember, Thingspace doesn't morph to one's utility function - it is a representation of things in reality, outside one's head.
But... your head is part of reality, is it not?
Could you not theoretically devise an experiment that showed a correlation between the presence of black hair / green eyes and biochemical changes in your brain and hormonal systems?
This particular cluster in Thingspace - female features which Ben Jones, specifically, finds attractive - may not be of any use to anyone but you (with the possible exception of women in your social circle who wish to pick out contact lenses and hair dye), but I don't see how it doesn't represent a cluster in Thingspace, unless I'm misunderstanding something. Just not a terribly useful one.
I'm not disagreeing that it's a useless word for communication, given its lack of utility to others -- I'm only thinking of the idea that if there seems to be a need for a word (i.e. to group features you find attractive), then there probably is a corresponding cluster in Thingspace, but it might be one that only you care about.
Wigginettes does that for me, regardless of whether or not it describes a cluster.
Isn't it describing the cluster of women you expect to be attracted to? Surely one of the dimensions in the subset of Thingspace that you work with can be based upon your expected reaction to a set of physical features.
"The laws of physics the universe runs on are provably Turing-equivalent."
Are there any links or references for this? That sounds like fascinating reading.
Thanks for this over the holidays. (You asked for feedback from practical applications).
It helped me come to the realization of why some stores can get away with putting horribly, stupidly expensive chocolates on display right at the counter: not only do they want you to buy them (duh), but it also lets your recipients know that you bought them a $5.99 bar of chocolate that would otherwise be indistinguishable from the larger $1.49 chocolate bars at the grocery store (assuming that your recipients have shopped at the same stores as you and are aware of how "nice" the gift is).
As a result we bought several overpriced chocolate bars to show how generous we were.
Another good item which I bought for someone for his birthday (unconsciously following the above advice) was a $15 version of the fifteen puzzle. Compare vs. an $18 paperback book I was considering for that gift.
Now I'm wrestling with the inverse problem. I find myself wanting an Asus Eee PC, and justifying it to my wife because of how cheap it is - $399. Which is the same price as the PS3, which I don't even bring up because of how expensive it is - $399.
My initial reaction (before I started to think...) was to pick the dust specks, given that my biases made the suffering caused by the dust specks morally equivalent to zero, and zero times 3^^^3 is still zero.
However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, then you kind of have to take the other consequences of the sudden appearance of the dust specks under consideration, don't you?
If I was omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car accidents would occur? Heavy machinery accidents? Workplace accidents? Even if the chance is vanishingly small -- let's say 6 accidents occur on Earth because everyone got a dust speck in their eye. That's one in a billion.
That's one accident for every 10^9 people. Now, what percentage of those are fatal? Transport Canada lists 23.7% of car accidents in 2003 as resulting in a fatality, which is roughly 1 in 4. Let's be nice, assume that everywhere else on Earth is safer, and take that down to 1 in 100 accidents being fatal.
Now, if everyone in existence gets a dust speck in their eye because of my decision, assuming the hypothetical 3^^^3 people live something approximating the lifestyles on Earth, I've conceivably doomed 1 in every 10^11 people to death.
That is, my cloud of dust specks has killed 3^^^3 / 10^11 people.
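Spelling out the arithmetic under those (very rough, made-up) assumptions:

```python
# Both numbers are guesses from the argument above, not data.
p_accident_per_person = 1e-9    # ~6 accidents among ~6 billion specked people
p_fatal_given_accident = 1e-2   # generously, 1 in 100 accidents is fatal

p_death_per_person = p_accident_per_person * p_fatal_given_accident
print(p_death_per_person)       # 1e-11, i.e. one death per 10^11 people specked

# Expected deaths scale linearly with the number of people specked:
# for 3^^^3 people that's 3^^^3 * 1e-11, and 3^^^3 utterly dwarfs 10^11.
```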