Conversations seem to occur on several levels simultaneously. There's a level of literal truth. There are also multiple dimensions of politics. (What I call "micro" and "macro," by analogy with how the terms are used in economics.) There's even a meta-level that consists of just trying to overwhelm people with verbiage.
Well, I note in a comment somewhere that it would have to be a version of Amelia who was rather ditzy about time.
It doesn't preclude scenario B. It just makes it unlikely.
I have a "Many Worlds/QM" style interpretation of time turner mechanics. Basically, all of the possible interpretations of the information+metainformation you have transmitted via time turner "exists" or is in a kind of superposition, until receiving information precludes them. Making Scenario B overwhelmingly unlikely is precluding it.
It's very possible to distinguish the two situations. The same probabilistic mechanism that determines the arrow of time precludes scenario B. Also, it's not really that Dumbledore is actually doing the distinguishing; it's more a question of whether he could do it.
No, because if she was able to provide that much information as a conscious communication, she would have provided enough information to fix her departure at a specific time.
In any case, there's probably some reason that would make it impossible for her to convey that much information inside 6 hours, anyhow.
I am going to have to accuse you of committing a grave Mind Projection Fallacy.
Apparently black holes preserve information. There are other connections between physics and information theory, such as the theoretical computers that can use ever smaller quantities of energy, so long as all of their operations are reversible. Given that, it doesn't seem unreasonable that there would be an information-theoretic component to the rules of magic. My formulation doesn't require a human mind. If I talk about minds or arbiters, or use language suggesting that, then that's just lazy writing on my part.
I only saw the 91-92 thread and didn't think it fit there. Other threads that I found were marked as superseded.
All information is probabilistic, Bayesian.
Is there a rigorous argument for this, or is this just a very powerful way of modeling the world?
The problem here is that even if Scenario A and Scenario B are indistinguishable, Amelia's words still constitute Bayesian evidence on which Dumbledore can update his beliefs.
In my formulation, that's "side information." Really, my gedankenexperiment doesn't work unless Amelia Bones happens to be very ditzy concerning time.
I'm inclined to believe that whatever intelligence is behind capital-T Time is enforcing an intuitive definition of information, in the same way that brooms work off of Aristotelian mechanics.
So then, this is a limitation in the "interface" that the Atlantean engine is following. I think my hypothesis is testable.
I don't think the path of a single neutrino could do it. Answer this: from the informational POV of Dumbledore's location in space-time, is path P of that neutrino any less consistent with Scenario A than with Scenario B?
This is precisely what I meant when I mentioned the empirical side information detector. The "informational point of view of Dumbledore" is "whatever-it-is that keeps histories consistent," and the indistinguishability only has to come into play in the local context of whenever Dumbledore uses the time turner. In the way I've envisioned it working, Dumbledore can only use your algorithm to detect leaked information, or side information that was available to him but that he might not be aware of.
Your formulation of "indistinguishable" was already invalidated on reddit.com/r/hpmor by a different objection to my hypothesis. When you lie, you leak information. That information just puts the situation into the 6-hour rule. This cuts off the rest of your reasoning below. It also shows how hard the 6-hour rule is to "fool," which in turn explains why it hasn't been figured out yet.
EDIT: Rewrote one sentence to put the normal 6-hour rule back.
EDIT: Basically, if all of the information Dumbledore can receive from Amelia Bones could logically come from her departing anywhere between time X and time Y, then the metadata available to Dumbledore is effectively "Amelia Bones came from anywhere between time X and time Y."
In short, the rule is that you cannot convey information more than 6 hours into the information's relative past, but that does not necessarily mean that you cannot go to a forbidden part of the past after learning it. It merely means that you cannot change your mind about doing so after learning it. Worth noting: if you plan on going to the past, and then receive some information from 6 hours in the future that changes your mind, you have conveyed information to the past. I'm not sure how that is handled, other than that the laws of the universe are structured so as to never allow it to happen.
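To make the "anywhere between time X and time Y" formulation concrete, here is a minimal toy sketch (entirely my own illustration; the reports and time windows are made up, and Python is just a convenient notation): treat each piece of information Dumbledore receives as the window of departure times logically consistent with it. The "metadata" he effectively gets is the intersection of those windows, so a vague report pins down very little, while a specific detail narrows the window sharply.

```python
# Toy illustration only: the reports and time ranges below are invented
# for the example, not drawn from the story or from canon mechanics.

def consistent_departure_window(constraints):
    """Given (earliest, latest) departure times consistent with each piece of
    received information, return the window they jointly allow."""
    earliest = max(lo for lo, hi in constraints)
    latest = min(hi for lo, hi in constraints)
    return earliest, latest

# Times are hours on a 24-hour clock.
vague_report = [(12.0, 18.0)]                    # "sometime this afternoon"
specific_report = [(12.0, 18.0), (14.0, 14.5)]   # "...right after the two o'clock meeting"

print(consistent_departure_window(vague_report))     # (12.0, 18.0)
print(consistent_departure_window(specific_report))  # (14.0, 14.5)
```

On this picture, the 6-hour rule would only bite once the jointly consistent window forces the departure more than six hours into the relative past of the information being conveyed.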
I suspect my actual formulation (not your slight misread of it) and yours come out to much the same.
From Chapter 6:
Harry was examining the wizarding equivalent of a first-aid kit, the Emergency Healing Pack Plus. There were two self-tightening tourniquets. A Stabilisation Potion, which would slow blood loss and prevent shock. A syringe of what looked like liquid fire, which was supposed to drastically slow circulation in a treated area while maintaining oxygenation of the blood for up to three minutes, if you needed to prevent a poison from spreading through the body. White cloth that could be wrapped over a part of the body to temporarily numb pain. Plus any number of other items that Harry totally failed to comprehend, like the "Dementor Exposure Treatment", which looked and smelled like ordinary chocolate. Or the "Bafflesnaffle Counter", which looked like a small quivering egg and carried a placard showing how to jam it up someone's nostril.
From Chapter 89:
"Fuego!" / "Incendio!" Harry heard, but he wasn't looking, he was reaching for the syringe of glowing orange liquid that was the oxygenating potion, pushing it into Hermione's neck at what Harry hoped was the carotid artery, to keep her brain alive even if her lungs or heart stopped, so long as her brain stayed intact everything else could be fixed, it had to be possible for magic to fix it, it had to be possible for magic to fix it, it had to be possible for magic to fix it, and Harry pushed the plunger of the syringe all the way down, creating a faint glow beneath the pale skin of her neck. Harry then pushed down on her chest, where her heart should be, hard compressions that he hoped was moving the oxygenated blood around to where it could reach her brain, even if her heart might have stopped beating, he hadn't actually thought to check her pulse.
The oxygenation potion also slows circulation. Did Harry accidentally kill Hermione? Would the potion have unintentionally prevented blood flow to her brain by retarding flow in her carotid artery, while unhelpfully oxygenating her neck? It makes sense that a potion designed to prevent the spread of poison would prevent movement of the blood. It's also stated that it works on "a treated area." If it's primarily meant to slow the spread of poisons from bites, the potion's "treated area" might be defined as the volume of flesh within a certain distance of the injection site.
Also, giving CPR to someone when their heart is still beating is definitely not good for them.
Yes, but instead of the mechanism making the beliefs more radical in the context of the whole society, it acts to make beliefs more mainstream. Though, one could argue that a more jingoistic China would be more radical in the analogous larger context.
What the hell is green tech? Is it just more efficient tech? Or does it have less to do with the technology and more to do with economic agents acknowledging externalities, consciously choosing to internalize some of that cost?
I'll take that as an analogy for what it means to be a moral person. (It's another way of talking about Kant's Categorical Imperative.)
A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.
What does it mean when people hold difficult-to-understand moral opinions?
You're telling us that everyone should party with the million dollars for three days, and then die.
[Citation Needed] Ahem.
No, I'm not saying that. I'm painting the other position in a light that makes it understandable. Your analogy is incomplete. What if they could also donate that million dollars to other research that could increase the life expectancy of 1000 people by 1 year with 90% certainty?
Science is much worse at figuring out what is right, because its method of determining what is right is "Of all the possible hypotheses, we'll eliminate the wrong ones and choose the most probable of what exists."
Someone should write a Sherlock script, where someone uses Sherlock's principle: "when you have eliminated the impossible, whatever remains, however improbable, must be the truth," against him, so that he decisively takes the wrong action.
"Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital".
There was a time when expecting mothers did the rational thing by not going to the maternity ward. http://www.ehso.com/ehshome/washing_hands.htm#History
Resources to be devoted to cryonics and a future lifespan could also be devoted to the lifespan you are fairly sure you have right now. The situation would be more like getting into an ambulance when there have been no known successful ambulance trips and many known failures.
It is important to be rational about charity for the same reason it is important to be rational about Arctic exploration: it requires the same awareness of opportunity costs and the same hard-headed commitment to investigating efficient use of resources
In his Mars Direct talks, Robert Zubrin cited the shoestring-budget Amundsen expedition through the Northwest Passage in comparison to around 30 contemporary government-funded expeditions with state-of-the-art steam frigates and huge logistics trains. The Amundsen expedition traveled in a cheap little sealing boat, and its crew fed themselves largely with the rifles and ammunition they brought with them.
http://www.youtube.com/watch?feature=player_detailpage&v=Mm34Muv6Lsg#t=102s
So the real threat to humanity is the machines that humanity will become. (Is in the process of becoming.)
There are massive intractable problems with human society on Earth at the moment which lack easy solutions (poverty, AIDS, overpopulation, climate change, social order).
Poverty - has always been with us. Many, many people are better off.
AIDS - We will solve this.
Overpopulation - Population will stabilize at 10 billion. See 2nd link.
Climate change - see below.
Social order - so long as we don't extinguish ourselves, this will work itself out.
http://www.gapminder.org/videos/hans-rosling-ted-2006-debunking-myths-about-the-third-world/
http://www.ted.com/talks/hans_rosling_on_global_population_growth.html
We might be stuck in the solar system for the next century, but we're certainly not stuck on Earth.
http://www.wired.com/autopia/2012/03/elon-musk-says-ticket-to-mars-will-cost-500000/
For the longer term, it is hugely beyond our technological abilities
We could start colonizing Mars using nuclear rockets in 20 years, if we wanted to. Heck, if we wanted to badly enough, we could start it in 20 years with chemical rockets.
whatever determines our survival as a species for the next millennium will be decided on Earth. And we are struggling with that right now.
Certain things will be decided in the next century. We could colonize Mars, with agriculture but without terraforming, well inside that window. When it comes to an issue like "species survival," I think the expense and redundancy are justified. Whether or not Western civilization decides to colonize Mars will be one of those deciding factors. The colonization of Mars would be a turning point in human history as significant as the European colonization of North America, with political and economic consequences as large and as far-ranging. Perhaps it would be better if Western civilization did not choose to colonize Mars. I'm fairly certain Chinese civilization will do so, and having both powers vying for new territory could well result in war.
How about large stations with artificial gravity and zero-G? For many, many years we launched 747-sized hulls 97% of the way to orbit once or twice a year, only to dispose of them. (The Shuttle's main tank.) Large trampoline-sided spaces would result in really cool new sports and forms of art.
The problem with this (and related theories) is that soul believers believe that the soul itself can live and think without the body. Much of thinking is mediated by language. I don't think a believer in the soul would accept that their soul after death will be incapable of thought until God provides it a substitute pineal gland.
Actually, the concept of soul without language makes more sense on its own and fits more religious traditions (especially if you abandon literal translations) than souls that have language.
So, a little background- I've just come out as an atheist to my dad, a Christian pastor, who's convinced he can "fix" my thinking and is bombarding me with a number of flimsy arguments that I'm having trouble articulating a response to
Being articulate has nothing to do with the truth. If your dad isn't willing to explore where he's wrong, then you shouldn't be talking about your worldview with him. If you can't establish your worldview without him, then you're not ready to establish it at all.
I'd advise not worrying about "the big questions" so much as what kind of person you are in the relationships that mean the most to you. I suggest creating value in the world. What kind of person you are "in the small" is actually more complex and more rewarding to explore.
You're assuming that there's always an answer for the more intelligent actor. Only happens that way in the movies. Sometimes you get the bear, and sometimes the bear gets you.
Sometimes one can pin one's hopes on the laws of physics in the face of a more intelligent foe.
There's lots of scope for great adventure stories in dystopian futures.
The approx 2% figure is interesting to me. This seems to be about the right frequency to be related to the small minority of jerks who will haze strangers out of sexist and/or racist motivations.
http://news.ycombinator.com/item?id=3736037
This might be related to the differences in the perception of the prevalence of racism between minorities and mainstream members of society. If one stands out in a crowd, then one can be more easily "marked" by individuals seeking to victimize someone vulnerable. This is something that I seem to have observed over the years, though I have not taken the time to gather hard data.
Basically, if one has a noticeable and salient difference, one will tend to attract more than one's share of attention from "jerks." Though such events are uncommon, they will happen often enough that the possibility always lurks in the back of one's mind. This results in a noticeable cognitive difference between minorities and mainstream persons.
It's easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don't have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness. ... I could as easily reply that AI that wanted to kill fleeing humans could do so by powerful enough directed lasers, which will overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically.
A summary of your points: while conceivable, there's no reason to think it's at all likely. OK. How about, "Because it's fun to think about?"
Actually, lasers might not be practical against maneuverable targets because of the diffraction limit and the lightspeed limit. In order to focus a laser at very great distances, one would need very large lenses. (Perhaps planet-sized, depending on distance and frequency.) Targets could respond by moving out of the beam, and the lightspeed limit would preclude immediate retargeting. Compensating for this by making the beam wider would be very expensive.
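For a rough sense of scale (my own back-of-the-envelope illustration; the wavelength and aperture are assumptions, not anything from the parent comment): the diffraction-limited spot diameter grows roughly as 2.44 x wavelength x distance / aperture, while a maneuvering target drifts sideways during the one-way light lag. Past about a light-minute, even a modest acceleration outruns a 10 m aperture's beam width.

```python
def spot_diameter_m(wavelength_m, distance_m, aperture_m):
    """Approximate diffraction-limited (Airy disk) spot diameter at a given range."""
    return 2.44 * wavelength_m * distance_m / aperture_m

LIGHT_SECOND_M = 2.998e8  # metres light travels in one second

wavelength = 1e-6  # 1 micron, near-infrared (assumed)
aperture = 10.0    # 10 m focusing mirror (assumed)

for label, seconds in [("1 light-second", 1), ("1 light-minute", 60), ("1 light-hour", 3600)]:
    distance = seconds * LIGHT_SECOND_M
    spot = spot_diameter_m(wavelength, distance, aperture)
    # Sideways displacement of a target holding 1 g during the one-way light lag.
    dodge = 0.5 * 9.81 * seconds ** 2
    print(f"{label}: spot ≈ {spot:,.0f} m, 1 g sideways dodge during light lag ≈ {dodge:,.0f} m")
```

At one light-second the beam (tens of metres wide) still comfortably covers the dodge, but at one light-minute the target can displace itself several times the beam width before the light even arrives, which is the retargeting problem described above.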
To write a culture that isn't just like your own culture, you have to be able to see your own culture as a special case - not as a norm which all other cultures must take as their point of departure.
Most North Americans that fall into the rather arbitrary "white" category do not see their culture as a special case. "White" North Americans tend to see themselves as the "plain vanilla" universal human. Everyone else is a "flavor." In truth, vanilla is also a flavor, of course.
How do I know this? Because I'm of Korean extraction, and I've been playing Irish traditional music for the past 23 years. For some reason, the fact that I play such music is more notable than when "white" people of Hungarian, German, English, Polish, and French extraction whom I've met play it -- but only in the cases where such persons do not have "funny" accents. In this context, a "funny" accent that isn't from the British Isles is just as good as a different skin tone and facial features.
There's more to this that I could say. I've also been walking around as a well-educated, middle-class, native-born member of this culture, while wearing different facial features. I grew up in isolation from my "own" ethnic community. In this, I have a certain advantage concerning awareness of my own culture. (And even so, I became aware of how unaware I usually am of it when travelling abroad.)
If within our own lifetime we undergo such alien thought changes, alien thoughts in actual aliens will be alien indeed.
Indeed. However, I am beginning to think that by emphasizing the magnitude of the alienness of alien thought, we intend to avoid complacency but are also creating another kind of "woo."
Reason: cockroaches and the behavior of humans. We can and do kill individuals and specific groups of individuals. We can't kill all of them, however. If humans can get into space, the lightspeed barrier might let far-flung tribes of "human fundamentalists," to borrow a term from Charles Stross, survive, though individuals would often be killed and would never stand a chance in a direct conflict with a super AI.
What if the AI are as advanced over us as we are over cockroaches, and the superintelligent AI find us just as annoying, disgusting, and hard to kill?
I wonder if a DDR version of Dual-N-Back could be devised?
Sounds silly, and it's not very hip, but Fly Lady has worked very well for my girlfriend. Basically, they send you messages giving you mostly short (like 3-minute) tidying and cleaning missions. Your place gets messy a minute at a time, so they keep you cleaning for short intervals to counteract that.
When my girlfriend is participating, the difference is dramatic, and it stays that way for weeks at a time.
Which god? If by "God" you mean "something essentially perfect and infallible," then yes.
That one. Big man in sky invented by shepherds doesn't interest me much. Just because I'm a better optimizer of resources in certain contexts than an amoeba doesn't make me perfect and infallible. Just because X is orders of magnitude a better optimizer than Y doesn't make X perfect and infallible. Just because X can rapidly optimize itself doesn't make it infallible either. Yet when people talk about the post-singularity super-optimizers, they seem to be talking about some sort of Sci-Fi God.
In a practical sense, I think this means you want to put yourself in situations where success is the default, expected result.
This is a little like "burning the boats."
http://techcrunch.com/2010/03/06/andreessen-media-burn-boats/
Isn't it almost certain that super-optimizing AI will result in unintended consequences? I think it's almost certain that super-optimizing AI will have to deal with their own unintended consequences. Isn't the expectation of encountering an intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?
Simply switch to using it as a punishment on the days that you have little appetite. :)
See if I can free up more time and energy.
The admin of the group I was working with told me something that started my habit of brushing and flossing: "It's simple. You only have to floss between the teeth you want to keep." This evokes lots of images for me.
That was 15 years ago, and my habit is still strong to this day.
Giving up porn for an entire month.
True story. Some years back, I was having trouble sleeping and decided I was getting too much light in the mornings. So I measured my bedroom windows, which were all different, odd widths, and went to Lowe's, where they sell nicely opaque vinyl blinds. So I pick out the blinds I want, and go to the cutting machine and press the button to summon store help. The cutting machine turned the blinds, which were cut by a blade that screw-clamps to a metal bar marked off like a ruler. There were no detents or slots, so any width could be cut by simply moving the blade to the right measurement. Well, a young woman comes along wearing one of the store vests, and I tell her I need blinds cut and show her my measurements. She looks at them, looks me straight in the eyes, and tells me, "The machine doesn't do fractions."
I almost fell over.
http://www.crinfo.org/articlesummary/10594/
Bushman society is fairly egalitarian, with power being evenly and widely dispersed. This makes coercive bilateral power-plays (such as war) less likely to be effective, and so less appealing. A common unilateral power play is to simply walk away from a dispute which resists resolution. Travel among groups and extended visits to distant relatives are common. As Ury explains, Bushmen have a good unilateral BATNA (Best Alternative to a Negotiated Agreement). It is difficult to wage war on someone who can simply walk away. Trilateral power plays draw on the power of the community to force a settlement. The emphasis on consensual conflict resolution and egalitarian ethos means that Bushmen communities will not force a solution on disputing parties. However the community will employ social pressure, by for instance ostracizing an offender, to encourage dispute resolution.
Please explain to me how Bushmen picked up the above from industrialized society. It strikes me as highly unlikely that this pattern of behavior didn't predate the industrial era.
Did you consider precisely what you were objecting to, or was this a knee-jerk reaction to a general category?
Computation market prices can and do go down. But since society can grow almost infinitely quickly (by copying ems), from an em's POV it's more relevant to say that everything else's price goes up.
A society of super-optimizers had better have a darn good reason for allowing resource use to outstrip N^3. (And no doubt, they often will.)
A society of super-optimizers that regulates itself in a way resulting in mass death either isn't so much super-optimized, or has a rather (to me) unsavory set of values.
Otherwise we might as well talk about a society of <10 planet-sized Jupiter brains, each owning its physical computing substrate and so immortal short of violent death.
Past a certain point of optimization power, all deaths become either violent or voluntary.
Instead of the deletion or killing of uploads that want to live but can't cut it economically, why not slow them down? (Perhaps to the point where they are only as "quick" and "clever" as an average human being is today.) Given that the cost of computation keeps decreasing, this should impose a minimal burden on society going forward. This could also be an inducement to find better employment, especially if employers can temporarily grant increased computation resources for the purposes of the job.
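As a rough illustration of why the ongoing burden should be small and shrinking (the dollar figure, slowdown factor, and cost-halving period below are purely hypothetical assumptions of mine): running an em at a small fraction of full speed uses a proportionally small slice of hardware, and if the cost per operation keeps falling, the upkeep dwindles toward a rounding error.

```python
# Purely illustrative arithmetic; every parameter here is an assumption
# chosen for the example, not a claim about real em economics.

full_speed_cost_per_year = 100_000.0  # hypothetical upkeep for one full-speed em today
slowdown_factor = 1 / 1000            # throttle the em to 1/1000 of full speed
halving_period_years = 2.0            # assume cost per operation halves every 2 years

for year in range(0, 21, 5):
    hardware_cost_factor = 0.5 ** (year / halving_period_years)
    upkeep = full_speed_cost_per_year * slowdown_factor * hardware_cost_factor
    print(f"year {year:2d}: upkeep ≈ ${upkeep:,.2f} per wall-clock year")
```

The flip side is that a 1/1000-speed em experiences the outside economy running a thousand times faster, which is part of why temporarily restored speed could work as the employment inducement mentioned above.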
From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, or possibly captured and enslaved.
Perhaps this varies because of local environmental/economic conditions. From my undergraduate studies, I seem to remember that !Kung Bushmen would sometimes walk away from conflicts into another group.
In my experience, Pandora simply doesn't tend to give me music that I like even when I put in an artist that I like.
Yes, Pandora does give me music with qualities in common with the music I like. It's just that those aren't the qualities that make me really like the music. Instead, I just get ho-hum doppelgangers of bands that I like.
Perhaps we should view our moral intuitions as yet another evolved mechanism, in that they are imperfect and arbitrary, though they work well enough for hunter-gatherers.
When we lived as hunter-gatherers, an individual could find a group with compatible moral intuitions or walk away from a group with incompatible ones. The chance that an unpleasant individual's moral intuitions would affect you from one valley over was minimal.
One should note, though, that studies of murder rates amongst hunter-gatherer groups found that they were on the high side compared to industrialized societies.