Open thread, Jul. 03 - Jul. 09, 2017

post by MrMind · 2017-07-03T07:20:07.113Z · LW · GW · Legacy · 24 comments

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

24 comments

Comments sorted by top scores.

comment by [deleted] · 2017-07-06T23:07:25.390Z · LW(p) · GW(p)

Postdoctoral position acquired. May be doing some work off a NASA astrobiology grant, eventually.

comment by Viliam · 2017-07-09T13:17:54.784Z · LW(p) · GW(p)

I think we had a debate about the exact definition of blackmail, so here is an interesting legal opinion:

Blackmail is surprisingly hard to define

at the heart of blackmail law lies what some call the blackmail paradox: Blackmail — which I’ll define here as threatening to reveal an accurate embarrassing fact about a person unless he does what you demand — generally involves (a) threatening to do something that you have every legal right (even a constitutional right) but no legal obligation to do, in order to (b) get someone to do what he has every legal right to do.

Nor can we resolve this by saying that coercive threats, even threats to do something legal, are generally criminal. “Pay me $10,000 or I’ll stop doing business with you” is perfectly legal (assuming that the threat comes from a sole proprietor, rather than someone lining his own pockets at the expense of his employer). “Pay me $10,000, neighbor, or I’ll sell my house, which is next to yours, to someone you dislike” is perfectly legal, too. Much legitimate hardball negotiation involves threats aimed at getting someone to do something, including threats of financial ruin. It’s just when the threat is to reveal embarrassing information that it becomes blackmail (or, as some statutes label it, coercion or extortion).

Of course, there are lots of possible theoretical and pragmatic responses to this objection; and the law does punish blackmail, though the definition varies from state to state. But the theoretical paradox, and specifically the fact that so much legal and commonplace behavior is very similar to blackmail, causes practical problems. [A too-literal interpretation of the law] would even make it a crime to say, “Pay back the money you took from me, or I’ll sue you to get it back.”

comment by Daniel_Burfoot · 2017-07-04T03:05:26.669Z · LW(p) · GW(p)

I am working on a software tool that allows programmers to automatically extract FSM-like sequence diagrams from their programs (if they use the convention required by the tool).

Here is a diagram expressing the Merge Sort algorithm.

Here is the underlying source code.

I believe this kind of tool could be very useful for code documentation purposes. Suggestions or improvements welcome.

Replies from: ChristianKl, jackk
comment by ChristianKl · 2017-07-04T17:14:38.503Z · LW(p) · GW(p)

Most code documentation lives in text files. Maybe it's worth drawing the diagram in ASCII or Unicode characters?
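For instance, a minimal ASCII rendering of one call sequence might look something like this (an invented example, not actual output from the tool):

caller            mergeSort
  |   sort(xs)        |
  |------------------>|
  |                   |--- split, recurse, merge ---.
  |                   |<----------------------------'
  |   sorted xs       |
  |<------------------|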

comment by jackk · 2017-07-06T02:30:21.827Z · LW(p) · GW(p)

You might be interested in Conal Elliott's work on Compiling to Categories, which enables automatic diagram extraction (among a bunch of other things) for Haskell.

comment by cousin_it · 2017-07-09T08:16:01.033Z · LW(p) · GW(p)

Realistic AI risk scenario similar to The Matrix: ad tech eats the world and keeps humans around for clicks. Clickbots won't do, because clickbot detection evolves as part of ad tech.

comment by whpearson · 2017-07-03T20:51:12.195Z · LW(p) · GW(p)

I've decided to work on a book while I also work on the computer architecture. It pulls together a bunch of threads of thinking I've had around the subject of autonomy. Below is the TL;DR. If lots of people are interested, I can try to blogify it. If not many are, I might seek your opinions on drafts.


We are entering an age where questions of autonomy become paramount. We have created computers with a certain amount of autonomy and are exploring how to give more autonomy to them. We simultaneously think that autonomous computers are overhyped and that autonomous computers (AI) could one day take over the earth.

The disconnect in views is due to a choice made early in computing's history that requires a programmer or administrator to look after a computer by directly installing programs and stopping and removing bad ones. The people who are worried about AI are worried that computers will become more autonomous and no longer need an administrator. People embedded in computing cannot see how this would happen: computers, as they stand, still require someone to perform the administrative function, and we are not moving towards administrative autonomy.

Can we build computer systems that are administratively autonomous? Administration can be seen as a resource allocation problem, with an explicit administrator serving the same role as a dictator in a command economy. An alternative computer architecture is presented that relies on a market-based allocation of resources to programs, driven by human feedback. This architecture, if realized, would allow programs to experiment with new programs in the machine and would lead to a more efficient, adaptive computer that didn't need an explicit administrator. Instead it would be trained by a human. (A toy sketch of such an allocation loop appears after this summary.)

However, making computers more autonomous cuts both ways: it could lead to more autonomy for each of us, by helping us, or it could leave computers completely autonomous and us at their mercy. Ensuring the correct level of autonomy in the relationship between computers and people should be a top priority.

The question of more autonomy for humans is also a tricky one. On the one hand it would allow us to explore the stars and safeguard us from corrupt powers. On the other hand, more autonomy for humans might lead to more wars and existential risks, due to the increase in the destructive power of individuals and the decrease in interdependence.

Autonomy is currently ill-defined. It is not an all-or-nothing affair. During this discussion, what we mean by autonomy will be broken down, so that we can have a better way of discussing it and charting our path to the future.
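To make the market-based administration idea concrete, here is a toy sketch in Haskell (my illustration; the names and mechanism are invented for the example, not whpearson's actual design): programs hold budgets and bid for each time slice, and human feedback replenishes or drains budgets instead of an administrator issuing commands.

import Data.List (maximumBy)
import Data.Ord (comparing)

data Program = Program { name :: String, budget :: Double, bid :: Double }

-- Give one time slice to the highest bidder that can cover its bid.
allocate :: [Program] -> Maybe (String, [Program])
allocate ps =
  case filter (\p -> budget p >= bid p) ps of
    []    -> Nothing
    valid ->
      let winner = maximumBy (comparing bid) valid
          charge p
            | name p == name winner = p { budget = budget p - bid p }
            | otherwise             = p
      in Just (name winner, map charge ps)

-- Human feedback replenishes (or drains) a program's budget.
feedback :: String -> Double -> [Program] -> [Program]
feedback n delta =
  map (\p -> if name p == n then p { budget = budget p + delta } else p)

main :: IO ()
main = do
  let ps = [Program "indexer" 10 2, Program "popup-spammer" 5 4]
  print (fst <$> allocate ps)  -- Just "popup-spammer": it bids highest
  print (fst <$> allocate (feedback "popup-spammer" (-5) ps))
    -- Just "indexer": human disapproval drained the spammer's budget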

comment by Thomas · 2017-07-03T08:38:51.607Z · LW(p) · GW(p)

Enjoy this problem!

Replies from: Gurkenglas
comment by Gurkenglas · 2017-07-03T13:09:35.071Z · LW(p) · GW(p)

Your Huffman codes with essential indifference are full binary trees (each node has 0 or 2 children), counted up to isomorphism.

Let f(n) be the number of trees with n leaves.

f(1)=1
f(2n+1)=sum from i=1 to n of f(i)*f(2n+1-i)
f(2n)=f(n)*(f(n)+1)/2 + sum from i=1 to n-1 of f(i)*f(2n-i)

Here are the first 25 numbers of such trees:

[1,1,1,2,3,6,11,23,46,98,207,451,983,2179,4850,10905, 24631,56011,127912,293547,676157,1563372,3626149,8436379,19680277]
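For anyone who wants to reproduce the numbers, a direct Haskell transcription of the recurrence (my sketch; unmemoized, so slow for large n, but fine up to 25):

f :: Int -> Integer
f 1 = 1
f k
  | odd k     = sum [f i * f (k - i) | i <- [1 .. n]]
  | otherwise = f n * (f n + 1) `div` 2
                + sum [f i * f (k - i) | i <- [1 .. n - 1]]
  where
    n = k `div` 2

main :: IO ()
main = print (map f [1 .. 25])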

Replies from: Thomas
comment by Thomas · 2017-07-03T15:26:36.756Z · LW(p) · GW(p)

It's something. But what are the codes? An algorithm to create them would suffice. A faster one is better, of course.

Replies from: Gurkenglas
comment by Gurkenglas · 2017-07-03T20:54:15.010Z · LW(p) · GW(p)

The same control flow generates them. In Haskell:

import Data.List (tails)

data T = N T T | L deriving Show

ts :: Int -> [T]
ts 1 = [L]
-- odd k = 2n+1: the first subtree is strictly smaller, so nothing is counted twice
ts k | (n, 1) <- divMod k 2 = [N x y | i <- [1..n],   x <- ts i, y <- ts (k-i)]
-- even k = 2n: unequal splits, plus unordered pairs of n-leaf subtrees via tails
ts k | (n, 0) <- divMod k 2 = [N x y | i <- [1..n-1], x <- ts i, y <- ts (k-i)]
                              ++ [N x y | ys@(x:_) <- tails (ts n), y <- ys]

<Gurkenglas> > ts 4
<lambdabot>  [N L (N L (N L L)),N (N L L) (N L L)]

(Source here)

Edit: See also OEIS, where you enter an integer sequence and it tells you where people have seen it.

Replies from: Thomas
comment by Thomas · 2017-07-05T07:37:32.203Z · LW(p) · GW(p)

Very well, congratulations again!

Perhaps a nonrecursive function would be faster.

Replies from: Gurkenglas
comment by Gurkenglas · 2017-07-05T14:36:31.694Z · LW(p) · GW(p)

Not really; the sequence grows quickly enough to outstrip the recursive overhead. To calculate the overhead, replace the * in f(i)*f(2n+1-i) with a +. Memoizing is of course trivial anyway, using memoFix.
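For reference, a memoized sketch of the counting function (assuming the memoize package's Data.Function.Memoize; any memo combinator would do):

import Data.Function.Memoize (memoFix)

-- Memoized version of the counting recurrence above.
f :: Int -> Integer
f = memoFix $ \f' k ->
  let n = k `div` 2
  in if k == 1
       then 1
       else if odd k
         then sum [f' i * f' (k - i) | i <- [1 .. n]]
         else f' n * (f' n + 1) `div` 2
              + sum [f' i * f' (k - i) | i <- [1 .. n - 1]]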

comment by CurtisSerVaas · 2017-07-04T19:04:44.855Z · LW(p) · GW(p)

I created a 1dollarscan subscription for "100 sets" (each set is 100 pages, so I paid $99 for the ability to scan up to 100sets*100pages/set = 10,000 pages), but I'm not going to use all of the sets, so if you have dead tree books that you'd like to destroy/convert to PDFs, PM me. My subscription ends on July 15th, and you'd have to mail in the books so that they arrive before then.

comment by madhatter · 2017-07-08T15:40:26.518Z · LW(p) · GW(p)

Where does the term at the top of page three of this paper, right after "a team's chance of winning increases by", come from?

https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

comment by madhatter · 2017-07-07T06:13:18.655Z · LW(p) · GW(p)

Two random questions.

1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?

2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?

Replies from: ChristianKl, MrMind, cousin_it
comment by ChristianKl · 2017-07-08T18:48:59.332Z · LW(p) · GW(p)

The kind of money that projects like DeepMind or OpenAI cost seems to be within the budget of a Russian billionaire who strongly cares about the issue.

But there seem to be many countries that are stronger than Russia: https://futurism.com/china-has-overtaken-the-u-s-in-ai-research/

comment by MrMind · 2017-07-07T15:57:38.159Z · LW(p) · GW(p)

On 2, I'd say not really: fuzzy logic is a logic which has a continuum of truth values. Logical uncertainty works by imposing, on classical logic, a probability assignment that is as "nice" as possible.
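One way to make the contrast concrete (standard textbook facts, my example): fuzzy logic with the usual min/max connectives assigns each sentence a value v(A) in [0,1], so v(A and not-A) = min(v(A), 1 - v(A)) can be as large as 0.5. A logical-uncertainty assignment keeps sentences classically true or false and instead asks for a coherent probability P over them, with P(A) + P(not-A) = 1 and P(A) = 1 whenever A is provable; the uncertainty lives in the assignment, not in the truth values.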

comment by cousin_it · 2017-07-07T07:54:20.754Z · LW(p) · GW(p)

1) low chance 2) no connection

comment by WalterL · 2017-07-06T13:24:36.873Z · LW(p) · GW(p)

This might be better saved for a 'dumb questions' thread, but whatever.

So...I've had a similar experience a couple of times. You go to the till, make a purchase, something gets messed up and you need to void out. The cashier has to call a manager.

This one time I had a cashier who couldn't find her manager, so she put the transaction through, then put a refund through. Neither of these required a manager.

Why is it that you need a manager code to void a transaction, while the cashier is presumed competent to handle sales and refunds?

Replies from: drethelin
comment by drethelin · 2017-07-07T18:34:02.163Z · LW(p) · GW(p)

Voiding a transaction deletes it (I'm pretty sure), which removes the information trail. Putting through a sale and then a refund records both transactions, so if they turn out to be fraudulent, the cashier in question can be caught.

Replies from: WalterL
comment by WalterL · 2017-07-07T18:37:29.205Z · LW(p) · GW(p)

That sounds right, thanks.

comment by turchin · 2017-07-04T10:20:44.725Z · LW(p) · GW(p)

Do we have any non-science-fiction reference for a global risk in which a narrow-AI virus infects robotic hardware, like self-driving cars or home robots, and they start to attack humans?

comment by cousin_it · 2017-07-03T09:28:53.767Z · LW(p) · GW(p)

The story so far:

The thermodynamic arrow of time says that we tend to end up in macrostates (states of knowledge) that contain many microstates, which is completely compatible with time-symmetric evolution of microstates. Basically, physics is like a random walk, which is time-symmetric but you tend to end up in bigger countries. (Bigger countries correspond to macrostates near equilibrium, because there are more ways to arrange two molecules with velocity 10 than one with velocity 0 and another with velocity 20. The difference is exponential in the number of molecules, so the second law of thermodynamics is an iron law indeed.)
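A toy version of that counting, with coins instead of molecules (my numbers, just to illustrate the exponential gap): among N fair coins, the "all heads" macrostate contains exactly one microstate, while the "half heads" macrostate contains C(N, N/2) ≈ 2^N · sqrt(2/(πN)) of them. For N = 100 that ratio is already about 10^29 to 1.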

The usual problem with that story is Loschmidt's paradox: if we have a glass of hot water with some ice cubes floating in it, the most probable future of that system is a glass of uniformly warm water, but then so is its most probable past, according to the exact same Bayesian reasoning. Taking that to the extreme, you should conclude that every person you see was a decomposing (recomposing?) corpse a minute ago. That seems weird!

The usual resolution to that paradox is the Past Hypothesis: for predicting the most probable past of a system, we need to condition not just on the present, but also on a very low-entropy distant past. For example, a uniform distribution of matter in the early universe would do the job, because it would be very far from gravitational equilibrium. See this write-up by Huw Price for a simple explanation.

The trouble is that the Past Hypothesis isn't completely satisfying. Leaving aside the question of how we can infer the distant past except by looking at the present, in the overall soup of all past and future states it's still much more likely that any particular low-entropy state (like ours) came from a higher-entropy one by pure dumb chance, if only because the future universe will be in equilibrium for a long time, long enough for many fluctuations to arise. So you must assume that you're the smallest possible fluctuation compatible with your experience, which is known as a Boltzmann brain. Basically your whole vision will turn into TV static in the next second. That's even worse than recomposing corpses!

So what do we make of this? I've toyed with the idea that K-complexity might determine which laws of physics we're likely to see. If you have a bunch of bits describing a world that looks lawful like ours, without recomposing corpses or vision turning into static, then the most likely (K-simplest) future evolution of these bits will follow the same laws, whatever they are. That still leaves the question of figuring out the laws, but at least gives a hint why we aren't Boltzmann brains, and also why the early universe was simple. That sounds promising! On the other hand, K-complexity feels like a shiny new hammer that can lead to all sorts of paradoxes as well, so we should use it carefully.

What do you think?