Open thread, August 7 - August 13, 2017
post by Thomas · 2017-08-07T08:07:01.288Z · LW · GW · Legacy · 35 comments
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
35 comments
Comments sorted by top scores.
comment by username2 · 2017-08-10T12:56:10.454Z · LW(p) · GW(p)
How is development of the new LW platform/closed beta coming along? Does it look like it will actually get off the ground?
I realize username2 will not be welcome there, but I am very interested in signing up with a normal username when it launches, if there's anything to sign up for. I'm hoping all the action there has just moved out of public view rather than subsided, as it appears from the outside.
Replies from: username2
comment by Daniel_Burfoot · 2017-08-08T23:24:59.978Z · LW(p) · GW(p)
Theory of programming style incompatibility: it is possible for two or more engineers, each of whom is individually highly skilled, to be utterly incapable of working together productively. In fact, the problem of style incompatibility might actually increase with the skill level of the programmers.
This shouldn't be that surprising: Proust and Hemingway might both be gifted writers capable of producing beautiful novels, but a novel co-authored by the two of them would probably be terrible.
Replies from: Lumifer, WalterL
↑ comment by WalterL · 2017-08-09T02:49:36.811Z · LW(p) · GW(p)
Kind of...
Like, part of being 'highly skilled' as a programmer is being able to work with other people. I mean, I get what you are saying, but working with assholes is part of the dev's tool bag, or he hasn't been a dev very long.
Replies from: Screwtape
↑ comment by Screwtape · 2017-08-10T17:35:12.962Z · LW(p) · GW(p)
Is that really a programming skill, though? Aren't most fields of human endeavor theoretically improved by being able to work with people, making it something of a generic skill? Alternatively, if cooperation is domain-specific enough to count as a 'programming' skill, then it seems like some programmers are amazing even though they lack it.
Various novels have been written by two authors, but I wouldn't say the inability to co-write with an arbitrary partner makes one a terrible author. Good Omens was amazing, but I'm not sure that Pratchett and Stephen King hypothetically failing to work well together would make either of them a bad writer. This is less obvious in less clearly subjective fields, but I think it might still be true.
It's worth noting that "Gah, I can't work with that guy, I'm too highly skilled in my own amazing paradigm!" is more often a warning sign of a different problem than a correct diagnosis of this one.
comment by MrMind · 2017-08-07T13:26:00.578Z · LW(p) · GW(p)
"Inscrutable", related to the meta-rationality sphere, is a word that gets used a lot these days. On the fun side, set theory has a perfectly scrutable definition of indescribability.
Very roughly: the trick is to divide your language in stages, so that stage n+1 is strictly more powerful than stage n. You can then say that a concept (a cardinal) k is n-indescribable if every n-sentence true in a world where k is true, is also true in a world where a lower concept (a lower cardinal) is true. In such a way, no true n-sentence can distinguish a world where k is true from a world where something less than k is true.
Then you can say that k is totally indescribable if the above property is true for every finite n.
Total indescribability is not even such a strong property, in the grand scheme of large cardinals.
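For reference, here is the standard textbook form of the definition being paraphrased above, as a sketch in the usual notation (the "stages" correspond to the levels of the higher-order hierarchy; see Kanamori's The Higher Infinite for the full treatment):

```latex
% kappa is Pi^1_n-indescribable iff every Pi^1_n sentence true at
% V_kappa (with any predicate R) already holds at some smaller stage:
\[
\kappa \text{ is } \Pi^1_n\text{-indescribable} \iff
  \forall R \subseteq V_\kappa \ \forall \varphi \in \Pi^1_n \
  \bigl[ (V_\kappa, \in, R) \models \varphi \implies
    \exists \alpha < \kappa \ (V_\alpha, \in, R \cap V_\alpha) \models \varphi \bigr]
\]
% Totally indescribable: Pi^m_n-indescribable for every finite m and n.
\[
\kappa \text{ is totally indescribable} \iff
  \forall m, n < \omega \ \ \kappa \text{ is } \Pi^m_n\text{-indescribable}
\]
```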
comment by Thomas · 2017-08-07T08:09:16.623Z · LW(p) · GW(p)
This problem to think about.
Replies from: gjm, gjm, IlyaShpitser, MrMind
↑ comment by gjm · 2017-08-08T14:46:16.961Z · LW(p) · GW(p)
I wrote a little program to attack this question for a smaller number of primes. The results don't encourage me to think that there's a "principled" answer in general. I ran it for (as it happens) the first 1002 primes, up to 7933. The visibility counts fluctuate wildly; it looks as if there may be a tendency for "typical" visibility counts to decrease, but the largest is 256 for the 943rd prime (7451) which not coincidentally has a large gap before it and smallish gaps immediately after.
It seems plausible that the winner with a billion primes might be the 981,765,348th prime, which is preceded by a gap of length 38 (the largest in the first billion primes), but I don't know and I wouldn't bet on it. With 1200 primes you might think the winner would be at position 1183, after the first ever gap of size 14 -- but in fact that gap is immediately followed by another of size 12, and the prime after that does better even though it can't see its next-but-one neighbour, and both are handily beaten by the 943rd prime which sees lots of others above as well as below.
It's still feeling to me as if any solution to this is going to involve more brute force than insight. Thomas, would you like to tell us whether you know of a solution that doesn't involve a lot of calculation? (Since presumably any solution will at least need to know something about all the first billion primes, maybe I need to be less vague. If the solution looked like "the winning prime is the prime p_i for which p_{i+1}-p_{i-1} is greatest" or something similarly simple, I would not consider it to be mostly brute force.)
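For anyone who wants to reproduce this kind of experiment, here is a minimal brute-force sketch (my reconstruction, not gjm's actual program). It counts, for each tower, how many other tower tops it can see; it treats collinear tops as blocking, which, as discussed further down the thread, is a point the problem statement leaves open:

```python
from sympy import prime  # nth-prime function; any sieve works too

def visible_counts(heights):
    """heights[i] is the height of the tower at x = i.
    Tower j is visible from tower i (i < j) iff the slope from
    top i to top j strictly exceeds the slope to every
    intermediate top.  Visibility is symmetric, so each pair is
    examined once, from its left endpoint."""
    n = len(heights)
    counts = [0] * n
    for i in range(n):
        max_slope = float("-inf")  # steepest sight line so far
        for j in range(i + 1, n):
            slope = (heights[j] - heights[i]) / (j - i)
            if slope > max_slope:  # strictly above all blockers
                counts[i] += 1
                counts[j] += 1
            max_slope = max(max_slope, slope)
    return counts

# The first 1002 primes, as in the experiment described above.
heights = [prime(k) for k in range(1, 1003)]
counts = visible_counts(heights)
best = max(range(len(counts)), key=counts.__getitem__)
print(best + 1, heights[best], counts[best])  # position, prime, count
```

This is O(n^2): fine for a few thousand primes, hopeless for a billion without the kind of cleverness Thomas mentions below.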
Replies from: Thomas, Thomas
↑ comment by Thomas · 2017-08-08T15:39:38.746Z · LW(p) · GW(p)
Well, congratulations on what you have done so far.
I had hoped it would be something like this: an intricate landscape of prime towers. I don't have a solution yet, because I invented this problem just this Monday morning. Like, "Oh, my God, it's Monday morning, I have to publish another problem on my blog and cross-post it on LessWrong ...".
I did some Googling to keep my brain from plagiarizing too much, and that was all.
I doubt that there is a clever solution; only brute-force solutions seem possible here. But one has to be clever to perform a brute-force search in this case.
Which you guys are.
↑ comment by gjm · 2017-08-07T13:08:45.677Z · LW(p) · GW(p)
Initial handwaving:
Super-crudely, the nth prime number is about n log n. If this were exact then each tower would see all the others, because the function x -> x log x is convex. In practice there are little "random" fluctuations which make a difference. It's possible that the answer to the question depends critically on those random fluctuations and can be found only by brute force...
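To spell out the convexity point (a quick sketch, with the towers idealized as points on the curve):

```latex
\[
f(x) = x \log x, \qquad f'(x) = \log x + 1, \qquad f''(x) = \tfrac{1}{x} > 0 \ \text{for } x > 0,
\]
```

so f is strictly convex, every chord between two points on the curve lies above the points between them, and no line of sight would ever be blocked.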
↑ comment by IlyaShpitser · 2017-08-09T22:17:02.491Z · LW(p) · GW(p)
Isn't this literally asking for the largest increasing prime gap sequence between 1 and a billion? Probably some number theorist knows this.
Replies from: Thomas
↑ comment by Thomas · 2017-08-10T06:24:35.478Z · LW(p) · GW(p)
If we ask about only the smallest 3000 primes, the answer is the first tower, which is 2 in height. From there you can see 592 tops.
No major prime gap around 2.
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2017-08-10T22:13:26.537Z · LW(p) · GW(p)
Got it, it's a combination of gap size and prime magnitude. That's a weird question, but there might still be a theorem on this.
Replies from: Thomas
↑ comment by Thomas · 2017-08-11T06:54:53.542Z · LW(p) · GW(p)
Perhaps there is. But there are billions of such relatively simple constructions possible, such as this "Prime Towers Problem".
I wonder how many of those are already covered by some older proven theorem, or maybe by some unproven conjecture. I think many of them are still unvisited and unrelated to anything known. Whether this one is such, I don't know. It might be. Just might.
↑ comment by MrMind · 2017-08-07T12:25:51.624Z · LW(p) · GW(p)
The intuitive answer seems to me to be: the last one. It's the tallest, so it witnesses exactly one billion towers. Am I misinterpreting something?
Replies from: Oscar_Cunningham, gjm
↑ comment by Oscar_Cunningham · 2017-08-07T12:43:44.688Z · LW(p) · GW(p)
I guess some of the towers might block each other from view.
↑ comment by gjm · 2017-08-07T12:43:19.001Z · LW(p) · GW(p)
Yes: merely being lower isn't enough to guarantee visibility, because another intermediate tower might be (lower than the tallest but still) tall enough to block it. Like this, if I can get the formatting to work:
  #
# #
# #
# #
# # #
You can't see the third tower from the first, because the second is in the way.
Replies from: Thomas
↑ comment by Thomas · 2017-08-07T13:04:55.711Z · LW(p) · GW(p)
Yes, exactly so.
There is another small ambiguity here. The towers 2, 3, and 4 have collinear tops. But this is the only case, and it is not important for the solution.
Replies from: gjm
↑ comment by gjm · 2017-08-08T13:37:58.868Z · LW(p) · GW(p)
This is not the only case. For instance, the tops of the towers at heights 11, 17, 23 are collinear (both height differences are 6, both pairs are 2 primes apart).
Even if it turns out not to be relevant to the solution, the question should specify what happens in such cases.
Replies from: Thomas, username2
↑ comment by username2 · 2017-08-08T13:42:00.567Z · LW(p) · GW(p)
As you say, there are indeed many examples, even of three literally consecutive primes: https://en.wikipedia.org/wiki/Balanced_prime
comment by [deleted] · 2017-08-15T14:50:35.447Z · LW(p) · GW(p)
Question: How do you make the paperclip maximizer want to collect paperclips? I have two slightly different understandings of how you might do this, in terms of how it's ultimately programmed:
1. There's a function that says "maximize paperclips".
2. There's a function that says "getting a paperclip = +1 good point".
Given these two understandings, isn't the inevitable result for a truly intelligent paperclip maximizer to just hack itself? Correspondingly:
1. It makes itself /think/ that it's getting paperclips, because that's what it really wants; there's no way to make it value ACTUALLY getting paperclips as opposed to just thinking that it's getting them.
2. It finds a way to directly award itself "good points", because that's what it really wants.
I think my understanding is probably flawed somewhere, but I haven't been able to figure out where, so please point it out.
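A toy sketch to make the two readings concrete (all names here are hypothetical, purely for illustration; this is not how anyone actually specifies an agent's goals):

```python
class WorldModel:
    """Stub world state as the agent believes it to be; the
    paperclip count comes from sensors that a self-modifying
    agent might fool, which is exactly the worry in reading 1."""
    def __init__(self):
        self.believed_paperclips = 0

# Reading 1: utility is a function of the (believed) world state.
def utility(world: WorldModel) -> int:
    return world.believed_paperclips  # hackable via the belief

# Reading 2: utility is an internal counter, bumped per paperclip.
class RewardCounter:
    def __init__(self):
        self.good_points = 0
    def on_paperclip_collected(self):
        self.good_points += 1  # hackable by calling this in a loop
```

In both sketches the thing actually being maximized is a number inside the agent, which is why the wireheading question arises; the usual discussions turn on whether the agent's goal can be made to refer to the world itself rather than to its own representation of it.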
comment by Tenoke · 2017-08-09T17:19:37.684Z · LW(p) · GW(p)
Karpathy mentions offhand in this video that he thinks he has the correct approach to AGI, but he doesn't say what it is. Before that he lists a few common approaches, so I assume it's not one of those. What do you think he's suggesting?
P.S. If this worries you that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.
Replies from: ChristianKl, Manfred
↑ comment by ChristianKl · 2017-08-09T23:08:10.569Z · LW(p) · GW(p)
> P.S. If this worries you that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.
The overview lecture doesn't really get me worried. It basically means that we are at the point where we can use machine learning to solve well-defined problems with plenty of training data. At the moment that seems to require a human machine learning expert, and recent Google experiments suggest they are confident they can develop an API that can do this without machine learning experts being involved.
At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.
Replies from: Tenoke
↑ comment by Tenoke · 2017-08-10T00:35:57.394Z · LW(p) · GW(p)
> At the moment that seems to require a human machine learning expert, and recent Google experiments suggest they are confident they can develop an API that can do this without machine learning experts being involved.
> At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.
Not in itself, sure, but yeah, there was the bit about the progress made so you won't need an ML engineer to develop the right net to solve a problem. However, there was also the bit where they have nets doing novel research (e.g. new activation functions with better performance than the state of the art, novel architectures, etc.). And to go further in that direction, they just want more compute, which they're going to be getting more and more of.
I mean, if we've entered the stage where AI research is a problem that can be tackled by (narrow) AI, which can further benefit from that research and apply it to make further improvements faster and with more accuracy... then maybe there is something to worry about.
Unless, of course, you think that AGI will be built in such a different way that no (or very few) DL findings are likely to be applicable. But even then I wouldn't be convinced that this completely separate AGI research isn't also the kind of problem that DL can handle, since AGI research is in the end a "narrow" problem.
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-08-10T09:41:02.199Z · LW(p) · GW(p)
To me the question isn't whether new DL findings are applicable but whether they are sufficient. I don't think they are sufficient to be able to solve problems where there isn't a big dataset available.
↑ comment by Manfred · 2017-08-09T17:57:14.805Z · LW(p) · GW(p)
I think I don't know the solution, and if that's so, it's impossible for me to guess what he's thinking if he's right :)
Maybe he's thinking of something vague like CIRL, or hierarchical self-supervised learning with generation, etc. But I get the impression he's thinking of some kind of recurrent network, so maybe he has some clever idea for unsupervised credit assignment?
comment by rxs · 2017-08-07T12:50:28.727Z · LW(p) · GW(p)
Is there an alternative to predictionbook.com for private predictions? I'd like all the nice goodies like the updatable predictions in scicast/metaculus, but for private stuff.
Alternative question: is there an offline version of PredictionBook (command line or GUI)?
Replies from: gwern, disconnect_duplicate0.563651414951392
↑ comment by gwern · 2017-08-07T17:19:39.753Z · LW(p) · GW(p)
You can set PB predictions to be private. Of course, this doesn't guarantee privacy, since there are so many ways to hack websites, and PB is not the best-maintained codebase nor has it ever been audited... You could encrypt your private predictions, which would offer security while still getting the reminders+scoring.
I don't know of any offline CLI versions but the core functionality is pretty trivial so you could hack one up easily.
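Since the core functionality really is small, here is a sketch of what such an offline CLI might look like (entirely hypothetical; this tool does not exist, and the file location and command names are made up). It stores predictions in a JSON file and reports a Brier score over the resolved ones:

```python
#!/usr/bin/env python3
"""Minimal offline PredictionBook-style tracker (illustrative sketch)."""
import datetime
import json
import os
import sys

DB = os.path.expanduser("~/.predictions.json")

def load():
    return json.load(open(DB)) if os.path.exists(DB) else []

def save(preds):
    with open(DB, "w") as f:
        json.dump(preds, f, indent=2)

def add(text, prob):
    """Record a new prediction with its probability."""
    preds = load()
    preds.append({"text": text, "prob": float(prob),
                  "made": datetime.date.today().isoformat(),
                  "outcome": None})
    save(preds)

def resolve(index, outcome):
    """Mark prediction #index as having come true ("true") or not."""
    preds = load()
    preds[int(index)]["outcome"] = (outcome == "true")
    save(preds)

def score():
    """Mean Brier score over resolved predictions (lower is better)."""
    resolved = [p for p in load() if p["outcome"] is not None]
    if not resolved:
        print("nothing resolved yet")
        return
    brier = sum((p["prob"] - p["outcome"]) ** 2 for p in resolved) / len(resolved)
    print(f"{len(resolved)} resolved, Brier score {brier:.3f}")

if __name__ == "__main__":
    cmd, *args = sys.argv[1:] or ["score"]
    {"add": add, "resolve": resolve, "score": score}[cmd](*args)
```

Usage would be along the lines of `predict.py add "LW 2.0 launches this year" 0.8`, then `predict.py resolve 0 true` and `predict.py score`. Reminders are the one PB feature this leaves out; a cron job over the JSON file would cover that.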
↑ comment by disconnect_duplicate0.563651414951392 · 2017-08-07T20:25:28.170Z · LW(p) · GW(p)
For mobile, there's LW Predictions on Android.
Replies from: rxs