Open thread, Jul. 17 - Jul. 23, 2017
post by MrMind · 2017-07-17T08:15:02.502Z · LW · GW · Legacy · 70 comments
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
70 comments
Comments sorted by top scores.
comment by Daniel_Burfoot · 2017-07-18T20:55:42.248Z · LW(p) · GW(p)
Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?
Replies from: Lumifer↑ comment by Lumifer · 2017-07-18T21:04:38.280Z · LW(p) · GW(p)
I expect them to be nice places to work (because they are not subject to the vulgar and demeaning necessity of turning a profit), but I also don't expect them to make much progress in the near future.
Replies from: None↑ comment by [deleted] · 2017-07-18T23:41:41.676Z · LW(p) · GW(p)
I have spoken to someone who has talked with some of the scientific higher-ups at Calico, and they are certainly excited about the longer-term funding models for biomedical research they think they can get there.
I have also seen a scientific talk about a project that Calico took up, given by a researcher who visited my university. Honestly, I'm not sure how much detail I can go into before I check how much of what I saw has been published (I haven't thought about it in a while), but I saw very preliminary data from mice on the effects of a small molecule, found in a broad screen, in slowing the progression of neurodegenerative disease and traumatic brain injury.
Having no new information on the subject for ~2 years, but having seen what I saw there and knowing what I know about cell biology, I find myself suspecting that it probably will actually slow these diseases, probably does not affect lifespan much, especially for the healthy, and in my estimation has a good chance of increasing the rate of cancer progression (this hasn't been demonstrated and needs more research). Which would totally be worth it for the diseases involved.
EDIT: Alright, found press releases. https://www.calicolabs.com/news/2014/09/11/
http://www.cell.com/cell/abstract/S0092-8674(14)00990-8
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4163014/
Replies from: username2↑ comment by username2 · 2017-07-19T07:46:07.642Z · LW(p) · GW(p)
That assessment, "likely leads to more cancers," is actually quite common for approaches to radical longevity.
I am encouraged about the long-term prospects of SENS in particular because the "regular maintenance" approach doesn't necessarily require mucking around with internal cellular processes. At least not as much as the more radical approaches do.
comment by cousin_it · 2017-07-20T12:15:38.339Z · LW(p) · GW(p)
I just came up with a funny argument for thirdism in the Sleeping Beauty problem.
Let's say I'm Sleeping Beauty, right? The experimenter flips a coin, wakes me up once in case of heads or twice in case of tails, then tells me the truth about the coin, and I go home.
What do I do when I get home? In case of tails, nothing. But in case of heads, I put on some scented candles, record a short message to myself on the answering machine, inject myself with an amnesia drug from my own supply, and go to sleep.
...The next morning, I wake up not knowing whether I'm still in the experiment or not. Then I play back the message on the answering machine and learn that the experiment is over, the coin came up heads, and I'm safely home. I've forgotten some information and then remembered it; a trivial operation.
But that massively simplifies the problem! Now I always wake up with amnesia twice, so the anthropic difference between heads and tails is gone. In case of heads, I find a message on my answering machine with probability 1/2, and in case of tails I don't. So failing to find the message becomes ordinary Bayesian evidence in favor of tails. Therefore while I'm in the original experiment, I should update on failing to find the message and conclude that tails are 2/3 likely, so thirdism is right. Woohoo!
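(A quick Monte Carlo sketch of the modified protocol, added here as an illustration rather than part of the original comment. It counts awakening-events with equal weight, which is exactly the step a committed halfer would still dispute, but it confirms the arithmetic of the argument.)

```python
import random

# Sketch of the modified protocol: every branch now has two amnesia awakenings.
# Heads: one awakening in the lab (no message) plus one at home (message found).
# Tails: two awakenings in the lab (no message either time).
def simulate(trials=200_000):
    tails_given_no_message = 0
    no_message_total = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:
            awakenings = [("no_message", "heads"), ("message", "heads")]
        else:
            awakenings = [("no_message", "tails"), ("no_message", "tails")]
        for finding, coin in awakenings:
            if finding == "no_message":
                no_message_total += 1
                if coin == "tails":
                    tails_given_no_message += 1
    return tails_given_no_message / no_message_total

print(simulate())  # ~0.667: P(tails | awake with amnesia and no message found)
```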
Replies from: Thomas, Xianda_GAO_duplicate0.5321505782395719, entirelyuseless↑ comment by Thomas · 2017-07-21T11:51:06.618Z · LW(p) · GW(p)
You have changed the initial conditions. The initial conditions don't speak about some external memory.
Replies from: cousin_it↑ comment by cousin_it · 2017-07-21T11:56:31.514Z · LW(p) · GW(p)
I'm not using any external memory during the experiment. Only later, at home. What I do at home is my business.
Replies from: Thomas↑ comment by Thomas · 2017-07-21T13:33:00.227Z · LW(p) · GW(p)
Then, it's not the experiment's business.
Replies from: cousin_it↑ comment by Xianda_GAO_duplicate0.5321505782395719 · 2017-07-22T17:13:55.250Z · LW(p) · GW(p)
This argument is the same as Cian Dorr's version with a weaker amnesia drug. In that experiment, a weaker amnesia drug is used on Beauty in the case of Heads, one which only delays the recollection of memory for a few minutes, just as in your case the memory is delayed until the message is checked.
That argument was published in 2002, before the majority of the literature on the topic. Suffice to say it has not convinced halfers. Even supporters like Terry Horgan admit the argument is only suggestive and could run a serious risk of being a slippery slope.
Replies from: cousin_it↑ comment by cousin_it · 2017-07-23T00:20:55.150Z · LW(p) · GW(p)
Thank you for the reference! Indeed it's very similar; the only difference is that my version relies on Beauty's precommitment instead of the experimenter's, but that probably doesn't matter. Shame on me for not reading enough.
Replies from: Xianda_GAO_duplicate0.5321505782395719↑ comment by Xianda_GAO_duplicate0.5321505782395719 · 2017-07-25T00:15:28.849Z · LW(p) · GW(p)
Nothing shameful about that. Similar arguments, which Jacob Ross categorized as "hypothetical priors" arguments (adding another waking in the case of Heads), have not been a main focus of discussion in the literature in recent years. I would imagine most people haven't read them.
In fact, you should take it as a compliment. Some academic who probably spent a lot of time on it came up with the same argument as you did.
↑ comment by entirelyuseless · 2017-07-21T14:27:21.394Z · LW(p) · GW(p)
I agree with Thomas -- even if this proved that thirdism is right when you are planning to do this, it would not prove that it is right if you are not planning to do this. In fact it suggests the opposite: since the update is necessary, thirdism is false without the update.
Replies from: cousin_it↑ comment by cousin_it · 2017-07-21T15:18:40.953Z · LW(p) · GW(p)
The following principle seems plausible to me: creating any weird situation X outside the experiment shouldn't affect my beliefs, if I can verify that I'm in the experiment and not in situation X. Disagreeing with that principle seems like a big bullet to bite, but maybe that's just because I haven't found any X that would lead to anything except thirdism (and I've tried). It's certainly fair to scrutinize the idea because it's new, and I'd love to learn about any strange implications.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-22T01:30:23.850Z · LW(p) · GW(p)
"The next morning, I wake up not knowing whether I'm still in the experiment or not. "
By creating a situation outside the experiment which is originally indistinct from being in the experiment, you affect how the experiment should be evaluated. The same thing is true, for example, if the whole experiment is done multiple times rather than only once.
Replies from: cousin_it↑ comment by cousin_it · 2017-07-22T05:56:53.251Z · LW(p) · GW(p)
Yeah, if the whole experiment is done twice, and you're truthfully told "this is the first experiment" or "this is the second experiment" at the beginning of each day (a minute after waking up), then I think your reasoning in the first experiment (an hour after waking up) should be the same as though the second experiment didn't exist. Having had a minute of confusion in your past should be irrelevant.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-22T14:36:30.669Z · LW(p) · GW(p)
I disagree. I have presented arguments on LW in the past that if the experiment is run once in the history of the universe, you should reason as a halfer, but if the experiment is run many times, you will assign a probability in between 1/2 and 1/3, approaching one third as the number of times approaches infinity. I think that this applies even if you know the numerical identity of your particular run.
Replies from: cousin_it↑ comment by cousin_it · 2017-07-22T14:40:57.836Z · LW(p) · GW(p)
Interesting! I was away from LW for a long time and probably missed it. Can you give a link, or sketch the argument here?
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-22T15:32:41.648Z · LW(p) · GW(p)
Actually, I was probably mistaken. I think I was thinking of this post and in particular this thread and this one. (I was previously using the username "Unknowns".)
I think I confused this with Sleeping Beauty because of the similarity of Incubator situations with Sleeping Beauty. I'll have to think about it but I suspect there will be similar results.
comment by ImmortalRationalist · 2017-07-20T10:42:38.810Z · LW(p) · GW(p)
For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?
Replies from: Turgurth↑ comment by Turgurth · 2017-07-22T17:09:51.964Z · LW(p) · GW(p)
I saw this same query in the last open thread. I suspect you aren't getting any responses because the answer is long and involved. I don't have time to give you the answer in full either, so I'll give you the quick version:
I am in the process of signing up with Alcor, because after ten years of both observing cryonics organizations myself and reading what other people say about them, Alcor has given a series of cues that they are the more professional cryonics organization.
So, the standard advice is: if you are young, healthy with a long life expectancy, and are not wealthy, choose C.I., because they are less expensive. If those criteria do not apply to you, choose Alcor, as they appear to be the more serious, professional organization.
In other words: choose C.I. as the type of death insurance you want to have, but probably won't use, or choose Alcor as the type of death insurance you probably will use.
Replies from: ImmortalRationalist↑ comment by ImmortalRationalist · 2017-07-24T14:10:58.553Z · LW(p) · GW(p)
If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?
comment by Lumifer · 2017-07-27T19:46:15.299Z · LW(p) · GW(p)
Up LW's alley: A Pari-mutuel like Mechanism for Information Aggregation: A Field Test Inside Intel
Abstract:
A new information aggregation mechanism (IAM), developed via laboratory experimental methods, is implemented inside Intel Corporation in a long-running field test. The IAM, incorporating features of pari-mutuel betting, is uniquely designed to collect and quantize as probability distributions dispersed, subjectively held information. IAM participants’ incentives support timely information revelation and the emergence of consensus beliefs over future outcomes. Empirical tests demonstrate the robustness of experimental results and the IAM’s practical usefulness in addressing real-world problems. The IAM’s predictive distributions forecasting sales are very accurate, especially for short horizons and direct sales channels, often proving more accurate than Intel’s internal forecast.
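(For readers unfamiliar with the betting format, here is a minimal generic pari-mutuel sketch, added as an illustration; it is not the IAM from the paper, which has its own incentive design. Participants stake on outcome bins, and the pooled stakes are read off as an implied probability distribution.)

```python
from collections import defaultdict

# Generic pari-mutuel aggregation (illustrative only, not Intel's IAM):
# each participant stakes tickets on an outcome bin, and the share of the
# total pool in each bin is read as the crowd's implied probability.
def implied_distribution(bets):
    """bets: iterable of (participant, bin_label, stake)."""
    pool = defaultdict(float)
    for _participant, bin_label, stake in bets:
        pool[bin_label] += stake
    total = sum(pool.values())
    return {bin_label: stake / total for bin_label, stake in pool.items()}

bets = [
    ("alice", "sales 90-100 units", 30.0),
    ("bob",   "sales 100-110 units", 50.0),
    ("carol", "sales 90-100 units", 20.0),
]
print(implied_distribution(bets))
# {'sales 90-100 units': 0.5, 'sales 100-110 units': 0.5}
```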
comment by [deleted] · 2017-07-21T04:13:42.270Z · LW(p) · GW(p)
Update on Instrumental Rationality sequence: about 40% done with a Habits 101 post. Turns out habits are denser than planning and have more intricacies. Plus, the techniques for creating / breaking habits are less well-defined and not as strong, so I'm still trying to "technique-ify" some of the more conceptual pieces.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2017-07-25T10:16:35.852Z · LW(p) · GW(p)
You might already be aware of them / their contents, but I found these two papers useful in creating a habit workshop:
- Wood & Rünger (2016) Psychology of Habit
- Wood & Neal (in press) Habit-Based Behavior Change Interventions
comment by MrMind · 2017-07-18T08:36:24.352Z · LW(p) · GW(p)
From Gwern's newsletter: did you know that algorithms already can obtain legal personhood?
Not scary at all.
↑ comment by turchin · 2017-07-18T20:22:17.063Z · LW(p) · GW(p)
What worries me is that if a ransomware virus could own money, it could pay some humans to install it on other people's computers, and also pay programmers to find new exploits and even to improve the virus itself.
But such a development doesn't require legal personhood, only an illegal one.
Replies from: Kaj_Sotala, lmn, username2↑ comment by Kaj_Sotala · 2017-07-24T19:12:45.889Z · LW(p) · GW(p)
Malware developers already invest in R&D and in buying exploits; I'm not sure what key difference it makes whether the malware or its owners do the investing.
Replies from: turchin↑ comment by turchin · 2017-07-24T21:17:50.542Z · LW(p) · GW(p)
It makes me worry because it looks like a Seed AI with clearly non-human goals.
The difference from current malware developers may be subtle, but it may grow once such a narrow-AI virus passes some threshold. You can't turn off the virus, but malware creators are localised and can be caught. They also share most human values with other humans and will stop at some level of possible destruction.
↑ comment by lmn · 2017-07-20T00:52:48.129Z · LW(p) · GW(p)
even for the improvement of the virus.
I don't think this would work. This requires some way for it to keep the human it has entrusted with editing its programming from modifying it to simply send him all the money it acquires.
Replies from: turchin↑ comment by turchin · 2017-07-20T10:12:18.798Z · LW(p) · GW(p)
The human has to leave part of the money with the virus, as the virus needs to pay for installing its ransomware and for other services. If the human takes all the money, the virus will be ineffective and will not replicate as quickly. Thus some form of natural selection will favor viruses that give programmers only part of their money (and future revenues) in exchange for modification.
↑ comment by username2 · 2017-07-19T07:48:08.539Z · LW(p) · GW(p)
With bitcoin botnet mining this was briefly possible. Also see "google eats itself."
Replies from: lmn, turchin↑ comment by lmn · 2017-07-20T00:58:43.557Z · LW(p) · GW(p)
I don't think this could work. Where would the virus keep its private key?
Replies from: username2↑ comment by turchin · 2017-07-19T10:16:04.328Z · LW(p) · GW(p)
why "was briefly possible"? - Was the botnet closed?
Replies from: philh↑ comment by philh · 2017-07-19T10:49:01.462Z · LW(p) · GW(p)
They may be referring to the fact that bitcoin mining is unprofitable on most people's computers.
Replies from: Lumifer↑ comment by Lumifer · 2017-07-19T14:48:19.779Z · LW(p) · GW(p)
It is profitable for a botnet -- that is, if someone else pays for electricity.
Replies from: username2, None↑ comment by username2 · 2017-07-22T03:47:11.123Z · LW(p) · GW(p)
You need to earn a minimum amount before you can receive a payout share from a pool or, worse, solo-mine a block. Given the asymmetric advantage of optimized hardware, your expected time to find enough shares to earn a payout with CPU mining is in the centuries-to-millennia range. And this is without considering rising fees, which raise the bar even higher.
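(A back-of-envelope check of that claim; every figure below is a rough 2017-era assumption used only for illustration.)

```python
# Rough 2017-era figures; all of these numbers are assumptions for illustration.
cpu_hashrate   = 20e6      # H/s for one CPU doing double SHA-256 (assumed)
network        = 6e18      # H/s total Bitcoin network hashrate (assumed)
block_time     = 600       # seconds per block
block_reward   = 12.5      # BTC per block in 2017
payout_minimum = 0.001     # BTC pool payout threshold (assumed)

seconds_per_year = 3.15e7
years_per_block  = (network / cpu_hashrate) * block_time / seconds_per_year
years_to_payout  = years_per_block * (payout_minimum / block_reward)

print(f"Expected time to solo-mine a block: ~{years_per_block:,.0f} years")   # millions of years
print(f"Expected time to reach a pool payout: ~{years_to_payout:,.0f} years") # centuries
```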
↑ comment by [deleted] · 2017-07-19T18:05:25.951Z · LW(p) · GW(p)
What with the way the ASIC mining chips keep upping the difficulty, can a CPU botnet even pay for the developer's time to code the worm that spreads it any more?
Replies from: turchin, Lumifer
comment by Thomas · 2017-07-17T11:08:11.853Z · LW(p) · GW(p)
Try this
Replies from: Oscar_Cunningham, Manfred, Gurkenglas↑ comment by Oscar_Cunningham · 2017-07-18T14:44:44.047Z · LW(p) · GW(p)
A different question in the same vein:
Two Newtonian point particles, A and B, each with mass 1 kg, are at rest separated by a distance of 1 m. They are influenced only by each other's gravitational attraction. Describe their future motion. In particular, do they ever return to their original positions, and after how long?
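(For what it's worth, this one has a closed-form answer via the radial Kepler problem; the quick calculation below is an editorial illustration with the stated numbers, not part of the original thread.)

```python
import math

# Two 1 kg point masses start at rest 1 m apart and fall together under
# mutual gravity. Reduce to one body falling toward the total mass M:
# time to collision from rest at separation r0 is (pi/2) * sqrt(r0^3 / (2*G*M)).
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M  = 1.0 + 1.0   # total mass, kg
r0 = 1.0         # initial separation, m

t_fall = (math.pi / 2) * math.sqrt(r0**3 / (2 * G * M))
print(f"time to collision: {t_fall / 3600:.1f} hours")              # ~26.7 hours

# If the collision is regularized as an elastic bounce, the motion is periodic
# and the particles are back at their starting points after 2 * t_fall.
# What happens *at* the collision itself is the genuinely ambiguous part.
print(f"return to start (if they bounce): {2 * t_fall / 3600:.1f} hours")
```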
Replies from: Thomas
↑ comment by Thomas · 2017-07-18T19:05:18.745Z · LW(p) · GW(p)
The collision of two "infinitely small" points is quite another problem. It has some similarities, though.
For two points on a colliding path, the action and reaction forces are present, of equal size and opposite directions.
My example can have finite-size balls or zero-size mass points, but there is no reaction force to be seen. At least, I don't see one.
↑ comment by Manfred · 2017-07-17T20:05:50.972Z · LW(p) · GW(p)
Note that your force grows unboundedly in N, so close to zero you have bodies that are arbitrarily heavy compared to their distance. So what this paradox is really about is alternating series whose terms grow with N, and whether we can say that they add up to zero.
If we call the force between the first two bodies f12, then the series of internal forces on this system of bodies (using negative to denote vector component towards zero) looks like -f12+f12-f23+f23-f13+f13-f34..., where, again, each new term is bigger than the last.
If you split this sum up by interactions, it's (-f12+f12)+(-f23+f23)+(-f13+f13)..., so "obviously" it adds up to zero. But if you split this sum up by bodies, each term is negative (and growing!) so the sum must be negative infinity.
The typical physicist solution is to say that open sets aren't physical, and to get the best answer we should take the limit of compact sets.
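(A tiny numerical illustration of the bracketing issue, added editorially; the forces f_k = k are hypothetical stand-ins, since the real magnitudes depend on the puzzle's mass arrangement.)

```python
# An alternating series -f1 + f1 - f2 + f2 - ... whose terms grow
# (f_k = k is a hypothetical stand-in for the pairwise forces).
# Bracketed by interactions the partial sums are all exactly zero;
# stopping between the members of a pair sends them to minus infinity.
def partial_sums(n_pairs):
    terms = []
    for k in range(1, n_pairs + 1):
        terms += [-float(k), float(k)]
    sums, running = [], 0.0
    for t in terms:
        running += t
        sums.append(running)
    return sums

s = partial_sums(5)
print(s[1::2])  # after each complete pair: [0.0, 0.0, 0.0, 0.0, 0.0]
print(s[0::2])  # mid-pair partial sums:    [-1.0, -2.0, -3.0, -4.0, -5.0]
```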
↑ comment by Gurkenglas · 2017-07-17T19:43:26.552Z · LW(p) · GW(p)
The same can be said of unit masses at every whole negative number.
The arrow that points to the right is at the same place that the additional guest in Hilbert's Hotel goes. Such unintuitiveness is life when infinities and singularities, such as the diverging forces acting on your points, are involved.
Replies from: Lumifer
comment by ImmortalRationalist · 2017-07-20T10:39:25.744Z · LW(p) · GW(p)
Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal that is well-ordered exists. Are there yet any ways to justify belief in either of these two things that do not require faith?
Replies from: drethelin, hairyfigment↑ comment by drethelin · 2017-07-20T17:13:12.933Z · LW(p) · GW(p)
You can justify a belief in "Induction works" by induction over your own life.
Replies from: ImmortalRationalist, g_pepper↑ comment by ImmortalRationalist · 2017-07-21T03:32:30.752Z · LW(p) · GW(p)
Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need either to show that the prior probability is sufficiently large, or to come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction, given a sufficiently large prior probability that memory works, but that has the same problems mentioned previously.
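(To make the first problem concrete, here is a toy log-odds calculation, added editorially with made-up numbers: each independent observation contributes a bounded number of bits, so a prior of 2^-n needs roughly n bits of evidence to overcome, and a super-exponentially small prior swamps any feasible amount of observation.)

```python
# Toy Bayes bookkeeping in log-odds: posterior log-odds = prior log-odds + evidence bits.
# All numbers are made up purely for illustration.
def posterior_log2_odds(prior_log2_odds, evidence_bits):
    return prior_log2_odds + evidence_bits

evidence_bits = 1_000_000   # say, a long lifetime of observations worth ~1 bit each

# Modest prior: odds of 2^-100 against "Induction works" are easily overcome.
print(posterior_log2_odds(-100, evidence_bits))       # +999900 bits: near-certainty

# Super-exponentially small prior: 2^-(10^9) against. No feasible evidence digs out of it.
print(posterior_log2_odds(-10**9, evidence_bits))     # about -999000000 bits: still near-impossible
```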
↑ comment by g_pepper · 2017-07-20T17:27:05.612Z · LW(p) · GW(p)
Wouldn't that be question begging?
↑ comment by hairyfigment · 2017-08-19T22:41:39.829Z · LW(p) · GW(p)
Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality.
Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'
comment by Fivehundred · 2017-07-21T16:44:02.134Z · LW(p) · GW(p)
How do I contact a mod or site administrator on Lesswrong?
Replies from: Elo
comment by AlexMennen · 2017-07-20T00:14:20.807Z · LW(p) · GW(p)
Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?
Replies from: ImmortalRationalist↑ comment by ImmortalRationalist · 2017-07-21T03:34:36.463Z · LW(p) · GW(p)
Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.
comment by madhatter · 2017-07-17T22:52:48.933Z · LW(p) · GW(p)
never mind this was stupid
Replies from: WalterL, turchin↑ comment by WalterL · 2017-07-17T23:16:30.803Z · LW(p) · GW(p)
The reliable verification methods are a dream, of course, but the 'forbidden from sharing this information with non-members' is even more fanciful.
Replies from: madhatter↑ comment by turchin · 2017-07-17T23:30:43.586Z · LW(p) · GW(p)
In your case, a force is needed to actually push most organisations to participate in such a project, and the worst ones - those which want to build AI first in order to take over the world - will not participate. The IAEA is an example of such an organisation, but it was not able to stop North Korea from creating its nukes.
Because of the above, you need a powerful enforcement agency above your AI agency. It could use conventional weapons, mostly nukes, or some form of narrow AI to predict where strong AI is being created - or both. Basically, it means the creation of a world government designed especially to contain AI.
This is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is some very spectacular AI accident, like narrow AI with some machine-learning capabilities hacking 1000 airplanes and crashing them into 1000 nuclear plants. In that case, a global ban on AI seems possible.