At the heart of blackmail law lies what some call the blackmail paradox: Blackmail — which I’ll define here as threatening to reveal an accurate embarrassing fact about a person unless he does what you demand — generally involves (a) threatening to do something that you have every legal right (even a constitutional right) but no legal obligation to do, in order to (b) get someone to do what he has every legal right to do.
Nor can we resolve this by saying that coercive threats, even threats to do something legal, are generally criminal. “Pay me $10,000 or I’ll stop doing business with you” is perfectly legal (assuming that the threat comes from a sole proprietor, rather than someone lining his own pockets at the expense of his employer). “Pay me $10,000, neighbor, or I’ll sell my house, which is next to yours, to someone you dislike” is perfectly legal, too. Much legitimate hardball negotiation involves threats aimed at getting someone to do something, including threats of financial ruin. It’s just when the threat is to reveal embarrassing information that it becomes blackmail (or, as some statutes label it, coercion or extortion).
Of course, there are lots of possible theoretical and pragmatic responses to this objection; and the law does punish blackmail, though the definition varies from state to state. But the theoretical paradox, and specifically the fact that so much legal and commonplace behavior is very similar to blackmail, causes practical problems. An overly literal reading of such a law would even make it a crime to say, “Pay back the money you took from me, or I’ll sue you to get it back.”
I created a 1dollarscan subscription for "100 sets" (each set is 100 pages, so I paid $99 for the ability to scan up to 100 sets × 100 pages/set = 10,000 pages), but I'm not going to use all of the sets, so if you have dead-tree books that you'd like to destroy/convert to PDFs, PM me. My subscription ends on July 15th, and you'd have to mail in the books so that they arrive before then.
I've decided to work on a book while I also work on the computer architecture. It pulls together a bunch of threads of thinking I've had around the subject of autonomy. Below is the TL;DR. If lots of people are interested, I can try to blogify it; if not many are, I might seek your opinions on drafts.
We are entering an age where questions of autonomy become paramount. We have created computers with a certain amount of autonomy and are exploring how to give more autonomy to them. We simultaneously think that autonomous computers are overhyped and that autonomous computers (AI) could one day take over the earth.
The disconnect in views is due to a choice made early in computing's history: a programmer or administrator is expected to look after each computer, directly installing programs and stopping and removing bad ones. The people who are worried about AI are worried that computers will become more autonomous and no longer need an administrator. People embedded in computing cannot see how this would happen, because computers, as they stand, still require someone to control the administrative function, and we are not moving towards administrative autonomy.
Can we build computer systems that are administratively autonomous? Administration can be seen as a resource allocation problem, with an explicit administrator serving the same role as a dictator in a command economy. An alternative computer architecture is presented that relies on a market-based allocation of resources to programs, based on human feedback (a rough sketch follows below). This architecture, if realized, would allow programs to experiment with new programs in the machine and would lead to a more efficient, adaptive computer that didn't need an explicit administrator. Instead it would be trained by a human.
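To make the idea concrete, here is a minimal Haskell sketch of a proportional-share market, assuming a made-up Program type and a single divisible resource; this is my illustration of the general mechanism, not the book's actual design.

-- Each program holds "credit" earned from human feedback and receives a share
-- of the resource in proportion to that credit.
data Program = Program { name :: String, credit :: Double } deriving Show

-- Divide a resource budget (say, CPU milliseconds per tick) among programs
-- in proportion to the credit each has earned.
allocate :: Double -> [Program] -> [(String, Double)]
allocate budget ps = [ (name p, budget * credit p / total) | p <- ps ]
  where total = sum (map credit ps)

-- Human feedback adds credit to a program, shifting future allocations
-- without an explicit administrator stepping in.
reward :: String -> Double -> [Program] -> [Program]
reward who amount = map bump
  where
    bump p
      | name p == who = p { credit = credit p + amount }
      | otherwise     = p

main :: IO ()
main = do
  let ps = [Program "editor" 5, Program "spambot" 1]
  print (allocate 100 ps)                      -- before feedback
  print (allocate 100 (reward "editor" 3 ps))  -- after feedback the editor's share grows

Under this toy scheme the "training" is just the stream of reward calls: programs that people find useful accumulate credit and crowd out the rest, which is the market analogue of an administrator uninstalling bad programs.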
However, making computers more autonomous could either lead to more autonomy for each of us, by helping us, or to computers being completely autonomous and us at their mercy. Ensuring the correct level of autonomy in the relationship between computers and people should be a top priority.
The question of more autonomy for humans is also a tricky one. On the one hand, it would allow us to explore the stars and safeguard us from corrupt powers. On the other hand, more autonomy for humans might lead to more wars and existential risks, due to the increase in the destructive power of individuals and the decrease in interdependence.
Autonomy is currently ill-defined. It is not an all-or-nothing affair. During this discussion, what we mean by autonomy will be broken down, so that we can have a better way of discussing it and charting our path to the future.
import Data.List (tails)

data T = N T T | L deriving Show
ts :: Int -> [T]
ts 1 = [L]
ts k | (n, 1) <- divMod k 2 = [N x y | i <- [1..n],   x <- ts i, y <- ts (k-i)]
ts k | (n, 0) <- divMod k 2 = [N x y | i <- [1..n-1], x <- ts i, y <- ts (k-i)]
                              ++ [N x y | ys@(x:_) <- tails (ts n), y <- ys]
<Gurkenglas> > ts 4
<lambdabot> [N L (N L (N L L)),N (N L L) (N L L)]
(Source here)
Not really, the sequence grows quickly enough to outstrip the recursive overhead. To calculate the overhead, replace the * in f(i)*f(2n+1-i) with a +. Memoizing is of course trivial anyway, using memoFix.
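For concreteness, here is the counting analogue of ts as I read it (a sketch of mine, not code from the thread), assuming ts k enumerates the unordered binary trees with k leaves; the even case counts the unordered pairs produced by the tails trick.

-- f k should equal length (ts k) without building the trees.
f :: Int -> Integer
f 1 = 1
f k | (n, 1) <- divMod k 2 = sum [f i * f (k - i) | i <- [1 .. n]]
f k | (n, 0) <- divMod k 2 = sum [f i * f (k - i) | i <- [1 .. n - 1]]
                             + f n * (f n + 1) `div` 2  -- pairs with x no later than y

Replacing the * with + in this recurrence is how the comment above suggests estimating the recursive overhead; f itself starts 1, 1, 1, 2, 3, 6, 11, 23, ... and quickly dwarfs it.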
On 2, I'd say not really: fuzzy logic is a logic which has a continuum of truth values. Logical uncertainty works by imposing, on classical logic, a probability assignment that is as "nice" as possible.
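A toy contrast (my example, using the common min/max connectives for the fuzzy side): fuzzy logic is truth-functional, so a compound formula's value is computed from its parts' values, while logical uncertainty spreads probability over classical 0/1 valuations and therefore respects classical theorems:

v(A \wedge \neg A) = \min(v(A),\, 1 - v(A)) = 0.5 \quad \text{when } v(A) = 0.5 \text{ (fuzzy)},
\qquad
P(A \wedge \neg A) = 0 \quad \text{for any probability over classical valuations.}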
The thermodynamic arrow of time says that we tend to end up in macrostates (states of knowledge) that contain many microstates, which is completely compatible with time-symmetric evolution of microstates. Basically, physics is like a random walk, which is time-symmetric, but you tend to end up in bigger countries. (Bigger countries correspond to macrostates near equilibrium, because there are more ways to arrange two molecules with velocity 10 than one with velocity 0 and another with velocity 20. The difference is exponential in the number of molecules, so the second law of thermodynamics is an iron law indeed.)
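A toy version of that exponential claim (my numbers, not the original comment's): let each of N molecules take velocity +1 or −1, like steps of a random walk, and count the microstates compatible with a given total velocity V:

\Omega(V) = \binom{N}{(N+V)/2},
\qquad
\frac{\Omega(0)}{\Omega(N)} = \binom{N}{N/2} \approx \frac{2^N}{\sqrt{\pi N / 2}},

so the equilibrium macrostate (V = 0) outweighs the extreme one (V = N, every molecule moving the same way) by a factor exponential in the number of molecules.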
The usual problem with that story is Loschmidt's paradox: if we have a glass of hot water with some ice cubes floating in it, the most probable future of that system is a glass of uniformly warm water, but then so is its most probable past, according to the exact same Bayesian reasoning. Taking that to the extreme, you should conclude that every person you see was a decomposing (recomposing?) corpse a minute ago. That seems weird!
The usual resolution to that paradox is the Past Hypothesis: for predicting the most probable past of a system, we need to condition not just on the present, but also on a very low-entropy distant past. For example, a uniform distribution of matter in the early universe would do the job, because it would be very far from gravitational equilibrium. See this write-up by Huw Price for a simple explanation.
The trouble is that the Past Hypothesis isn't completely satisfying. Leaving aside the question of how we can infer the distant past except by looking at the present, in the overall soup of all past and future states it's still much more likely that any particular low-entropy state (like ours) came from a higher-entropy one by pure dumb chance, if only because the future universe will be in equilibrium for a long time, long enough for many fluctuations to arise. So you must assume that you're the smallest possible fluctuation compatible with your experience, which is known as a Boltzmann brain. Basically your whole vision will turn into TV static in the next second. That's even worse than recomposing corpses!
So what do we make of this? I've toyed with the idea that K-complexity might determine which laws of physics we're likely to see. If you have a bunch of bits describing a world that looks lawful like ours, without recomposing corpses or vision turning into static, then the most likely (K-simplest) future evolution of these bits will follow the same laws, whatever they are. That still leaves the question of figuring out the laws, but at least gives a hint why we aren't Boltzmann brains, and also why the early universe was simple. That sounds promising! On the other hand, K-complexity feels like a shiny new hammer that can lead to all sorts of paradoxes as well, so we should use it carefully.
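One way to make "the K-simplest continuation dominates" precise is Solomonoff's universal prior (my gloss; the comment may have a different formalism in mind): weight every program p for a universal machine U by 2^{-|p|} and predict continuations by ratios of total weight,

M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|},
\qquad
P(y \mid x) = \frac{M(xy)}{M(x)},

so continuations that keep obeying the same short rules inherit almost all of the weight, while a switch to TV static has to be specified bit by bit and is penalized accordingly.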