While Earth would be easier to terraform, since resources are at hand and global conditions are already closer to habitable, it would not be safer: mistakes in the terraforming process are not going to be as catastrophic when you try to terraform a backup, uninhabited planet.
Toying with complex, poorly understood processes, at a time when we would lack even our current resources and manpower, on a ravaged Earth whose environment might be just one wrong step from becoming much worse, could destroy most of what remains of humanity, the economy, and valuable resources, making it impossible for us to ever recover.
(I am, however, assuming we are talking about global terraforming of the whole planet, not minute changes to local spots.)
But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.
You're assuming that there exists something like our universe, with at least one full human being like you having beliefs causally entwined with Obama existing. What if there is none, and there are only Boltzmann brains or something equivalent?
In a Boltzmann brain scenario, how can you even assume that the universe in which they appear is ruled by the same laws of physics as those we seemingly observe? After all, the observations and beliefs of a Boltzmann brain aren't necessarily causally linked to the universe that generated it.
You could well be a single "brain" lost in a universe whose laws make it impossible for something like our own Hubble volume to exist, where all your beliefs about physics, including beliefs about Boltzmann brains, are just part of the unique, particular beliefs of that one brain.
Wait, would an equivalent way to put it be evidential as in "as viewed by an outside observer" as opposed to "from the inside" (the perspective of a Boltzmann brain)?
Most of this seems unrelated to what the OP says. Are you sure you posted this in the right place?
http://en.wikipedia.org/wiki/The_End_of_History_and_the_Last_Man
I would second that. On the other hand, how would you decide what weight to give to someone's vote? Newcomers vs older members? Low vs high karma? I'm not sure a function of both these variables would be sufficient to determine meaningful voting weights (that is, I'm not sure such a simple mechanism would be able to intelligently steer more karma towards good quality posts even if they were hidden, obscure or too subtle).
Would it be difficult (and useful) to change the voting system inherited from reddit and implement one where casting a vote would rate something on a scale from minus ten to ten, and then average all votes together?
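To make the proposal concrete, here is a minimal sketch of such a scheme: each vote is a score in [-10, 10], optionally weighted by some function of the voter's karma and account age. The weight function below is purely illustrative, an assumption for the sake of the example, not a claim about what the right weights would be.

```python
def vote_weight(karma, days_since_signup):
    """Hypothetical weight: grows slowly with karma (capped),
    and very new accounts count for less."""
    tenure_factor = min(days_since_signup / 30.0, 1.0)
    karma_factor = 1.0 + min(karma, 1000) / 1000.0  # between 1 and 2
    return tenure_factor * karma_factor

def post_score(votes):
    """votes: list of (score, karma, days_since_signup), score in [-10, 10].
    Returns the weighted average of all scores."""
    if not votes:
        return 0.0
    total_weight = sum(vote_weight(k, d) for _, k, d in votes)
    weighted_sum = sum(s * vote_weight(k, d) for s, k, d in votes)
    return weighted_sum / total_weight

# Example: a high-karma old account downvoting pulls the average down
# more than a brand-new account upvoting pulls it up.
votes = [(8, 500, 400), (10, 50, 10), (-3, 2000, 900)]
print(round(post_score(votes), 2))
```

Note that because it is a weighted average, the result always stays within the range of the cast scores, unlike additive karma.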
How well do they, though? I've seen a few academics around me with enough command of English to get by, but they might still miss some of the subtler points. They just can't reason as well in English as they do in their mother tongue.
Labeling a death as "heroic" can be a similar sort of rationalization.
Homer, about 2800 years ago:
It is entirely seemly for a young man killed in battle to lie mangled by the bronze spear. In his death all things appear fair.
Strategies would be different for an individual as opposed to a society. Both would, as a first approximation, only be as cautious as they need to be in order to preserve themselves. That's where the difference between local and global disasters comes into play.
A disaster that can kill an individual won't usually kill a society. The road to progress for society has been paved by countless individual failures, some of which took a heavy toll, but in the end they never destroyed everything. It may be a gamble for an individual to take a risk that could destroy them, and risk-averse people will avoid it. But for society as a whole, non-risk-averse individuals will sometimes strike the motherlode, especially as the risk to society (the loss of one or a few individuals out of the adventurous group at a time) is negligible. Such individuals could therefore conceivably be an asset. They'd explore avenues past certain local optima, for instance. This would also benefit those few individuals who'd be incredibly successful from time to time, even if most people like them are destined to remain in the shadows.
Of course, nowadays even one person could fail hard enough to take everything down with them. That may be why you get the impression that rational people are perhaps too cautious and could hamper progress. The rules of the game have changed; you can't just be careless anymore.
Aumann agreement! Dissension among rational people!
This one's easy; I'm guessing this is about "rational" people (lesswrongers for instance) disagreeing. "Rational" in the above sentence isn't the same as rational as defined in Aumann's paper.
Specifically, we're human beings: two of us don't necessarily share the same priors, or have common knowledge of each other's posteriors for every possible event A. So we're bound to disagree sometimes.
I'm not sure I understand your question.
I guess part of my point is that a laptop processor is a very general purpose tool, while the human brain is a collection of specialized modules. Also, the more general a tool is, the less efficient it will be on average for any task.
The human brain might be seen as a generalist, but not in the same way a laptop computer processor is.
Besides, even a laptop processor has certain specializations, and advantages over the human brain in certain narrow domains, such as number crunching and fast arithmetic operations.
Some questions we would need to answer, then:
1) What is the effective level of visual precision computed by those processors for Avatar, versus the level of detail that's processed in the human visual cortex?
2) Is the synapse the equivalent of a transistor, if we are to estimate the respective computing power of a brain and a computer chip? (I.e., is there more hidden computation going on at other levels? Since synapses use different neurotransmitters, does that add computational capability? Are there processes within neurons that do computational work too? Are other cells, such as glial cells, performing computationally relevant operations as well?)
The switching rate in a processor is faster than the firing rate of neurons.
All else being equal, a computer should be faster than an aggregate of neurons. But all isn't equal, even when comparing different processors. Comparing transistors in a modern processor to synapses in a human brain yields many more synapses than transistors. Furthermore, the brain is massively parallel, and has a specialized architecture. For what it does, it's well optimized, at least compared to how optimized our software and hardware are for similar tasks at this point.
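A very rough, order-of-magnitude version of this comparison can be sketched with commonly cited ballpark figures: on the order of 10^14 synapses firing at up to ~10^2 Hz, versus ~10^9 transistors switching at ~10^9 Hz for a laptop-class chip. Treating one synaptic event as one "operation" (and one transistor switch as another) is itself a contested modeling assumption, which is exactly the point of question 2 above.

```python
# Brain: many slow units in parallel.
synapses = 1e14
firing_rate_hz = 1e2               # upper-end average firing rate
brain_events_per_s = synapses * firing_rate_hz

# Chip: few fast units.
transistors = 1e9
clock_hz = 1e9                     # ~1 GHz laptop-class processor
chip_switches_per_s = transistors * clock_hz

# Raw totals are not directly comparable: a single useful chip
# operation consumes many transistor switches, while a synaptic
# event may itself encode a nontrivial computation.
print(f"brain: ~{brain_events_per_s:.0e} synaptic events/s")
print(f"chip:  ~{chip_switches_per_s:.0e} transistor switches/s")
```

The sketch shows the trade-off in the text: the chip wins on unit speed by a factor of ~10^7, the brain wins on unit count by a factor of ~10^5, and the interesting disagreements are all hidden in what counts as one operation.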
For instance, laptop processors are general-purpose: able to do many different tasks, they aren't really fast or good at any single one. Some specific tasks may use custom-made processors which, even with a slower clock rate or fewer transistors, will still vastly outperform a general-purpose processor on the task they were custom-built for.