How do we respond?
Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some items are taxed and have warnings on them instead of an outright ban.
And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera. Let's suppose we do open this banned goods shop. Are we going to make each and every customer fill out an essay question detailing exactly how they understand these items to be dangerous? I don't mean check a box or sign a paper, because that's like clicking "I Agree" on a EULA or a security warning, and we've all seen how well that's worked out for casual users in the computer realm, even though we constantly bombard them with messages not to do exactly the things that get them in trouble.
Is it paternalist arrogance when the system administrator makes it impossible to download and open .exe attachments in Microsoft Outlook? Clearly, there are cases where system administrators are paternalist and arrogant; on the other hand, there are a great many cases where users trash their machines. The system administrator knows far more about operating the computer safely; the user knows more about what work needs to get done. These things are issues of balance, but I'm not ready to throw out top-down bans on dangerous-to-self products.
No amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place.
All people are idiots at least some of the time. I don't accept the use of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion USD with which to compensate the innocent pedestrians killed by their speeding car. I'll accept the risk of the occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.
Unfortunately we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring back a victim party from the dead.
I, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip-up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death.
With that in mind, I don't think we should deliberately plant more minefields than reality already has. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.
Yeah, I thought the post was largely well-reasoned, but that that statement was reckless (largely because it seems ungrounded and plays to a positive self-image for this group).
While I very much enjoy programming (look at my creations come to life!) and have been known to conduct experiments in video games to discover their rules, I am almost entirely uninterested in puzzles for their own sake.
I'm a programmer, though, not a scientist. Still, if puzzles largely free of any context in which solving them accomplishes some goal were a large part of science curricula, I'd be concerned about possible side effects.
Not that I think there's no merit to be mined here.
Forgive me if I'm just being oblivious, but did anything end up happening on this?
Where can I find rationality exercises?
I just think it's a related but different field. Actually, these are problems I'd like to apply some AI to - more accurate models of human behavior would allow massive batch testing of different forms of organization under outside pressures, to discover likely failure modes and approaches for dealing with them - but that's a different conversation.
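As a toy sketch of the kind of batch testing I mean (every name and number here is invented for illustration, not a claim about real organizations):

```python
import random

# Toy "batch test": pass a message up a reporting chain of forgetful agents
# and measure how often it arrives intact. Chain length stands in for the
# organization's form; the forget rate stands in for outside pressure.
# All numbers are invented for illustration.

def survival_rate(chain_length, p_forget, trials=10_000):
    """Fraction of trials in which the message survives every hop."""
    survived = sum(
        all(random.random() > p_forget for _ in range(chain_length))
        for _ in range(trials)
    )
    return survived / trials

if __name__ == "__main__":
    for chain_length in (2, 5, 10):
        rate = survival_rate(chain_length, p_forget=0.05)
        print(f"chain of {chain_length}: message intact {rate:.0%} of the time")
```

Even a 5% per-hop failure rate compounds to roughly a 40% loss over ten hops. Real batch testing would sweep far richer organizational structures and behavior models, but this is the shape of the failure modes it would surface.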
Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to one another? Does RAM lie to the processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.
All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn't mean anyone will follow them. If you specify a great deal of process, the failure may not even be intentional - people may simply forget. With a computer, that kind of failure would be caused by an error, and errors are a controllable process. With a human? People can't just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.
So: on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or with the organization itself. One requires extremely strict instructions; the other is capable of interpretation and judgment from context without having an algorithm specified in great detail. There are similarities between the two, but there are also great practical differences.
Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave very differently: they don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.
This is a very important field of study with some relation to it, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly the more difficult problem, precisely because of the "out of humans" constraint.
I read that as meaning something along the lines of, "if Nature is truly so wonderful, why did dogs leave it (to become domesticated)?"
Your stretching pulls the word over so large an area as to render it almost meaningless. I feel as though the stretching exists to further some other goal.
The last time I heard art defined, it was as "something which has additional layers of meaning beyond the plain interpretation", or something like that. I'm not sure even that's accurate.
However, if you're going to insist on calling a spec-ops team in action "art", then the word has been stretched far enough that designing a diesel locomotive would qualify too, along with any number of other purely practical exercises not performed for their aesthetic value. A "found object", a Jackson Pollock painting, or what-have-you is created primarily for aesthetic value and/or to communicate additional layers of meaning.
If you're a transhumanist, you should give Ghost in the Shell: Stand Alone Complex a try. It's excellent postcyberpunk in general.
This is basically the primary issue. It is possible for a hostile or simply incompetent drug company to flood people's information sources with false or misleading information, drowning out the truth. The vast majority of humans in our society aren't experts in drugs, and becoming an expert in drugs is very expensive, so they rely on others to evaluate drugs for them. The public bureaucrats at least have a strong incentive against letting nasty drugs out into the wild.
Furthermore, it can take some time to realize a drug isn't working, and the placebo effect will be in full force to make that even harder. By the time you realize you were sold snake oil, you may already be dead. "Reputation" may not be of much use here: fake drugs are much cheaper to develop than real ones, so the cost of throwing an old trademark or company shell under the bus every few years is minimal, especially compared to what it costs individuals to discover the fraud.
Consider also the time in man-hours that must be spent hunting for information and evaluating safety - not just of the drugs themselves, but also of the reputations of the private verification firms - by every individual who needs drugs. The FDA is cheaper.
Edit: I should say that "in my estimation, the FDA is cheaper." It's only back-of-the-napkin math.
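The napkin, roughly - every number below is an assumption I'm making up for illustration, except that the FDA's annual budget really is on the order of a few billion dollars:

```python
# Back-of-the-napkin comparison: distributed private drug verification
# vs. the FDA. Every figure is an assumption for illustration, not data,
# except the order of magnitude of the FDA's budget.

consumers = 100_000_000      # assumed: people who buy drugs in a given year
hours_each = 5               # assumed: hours/year each spends vetting drugs
                             # and the verification firms' reputations
value_per_hour = 20          # assumed: dollar value of an hour of that time

private_cost = consumers * hours_each * value_per_hour
fda_budget = 3_000_000_000   # order-of-magnitude figure

print(f"distributed verification: ${private_cost:,}")   # $10,000,000,000
print(f"FDA budget (approx.):     ${fda_budget:,}")     # $3,000,000,000
```

Change the assumptions and the gap moves, but the distributed cost scales with the number of consumers while the FDA's doesn't, which is the point.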
I generally take the position that we should protect people from themselves to the degree that it is reasonably practical to do so. We have all failed due to ignorance, irrationality, or inattention at some point. Of course, when someone tries to break open your high-voltage power line to steal the copper inside, well...
One thing I desperately want to devise is some method, even a partial one, of incentivizing bureaucrats (public or private) to act in the most useful manner. This is, by its very nature, a difficult challenge with lots of thorny sub-problems. However, I think it's something LWers have been thinking about, even if not always explicitly.
What if you bid $1, explain the risk that a bidding war has a probable outcome of zero or net-negative dollars, and then offer to split your winnings with everyone else who doesn't bid?
What occurred to me when I read it was: "Why is this guy allowed to propose a motion that changes its actions based on how many people voted for or against it?" While the company's bylaws likely don't specifically prohibit it, I'm not sure what a lawyer would make of it, and even if it worked, I don't think this sort of meta-motion would remain viable for long. I suspect the other members of the board would either sign a contract with each other (gaining their own certainty of precommitment) or refuse to acknowledge it on the grounds that it isn't serious.