Open thread, Sep. 19 - Sep. 25, 2016

post by DataPacRat · 2016-09-19T18:34:12.589Z · LW · GW · Legacy · 92 comments


If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

92 comments

Comments sorted by top scores.

comment by DataPacRat · 2016-09-19T18:35:24.934Z · LW(p) · GW(p)

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Replies from: turchin, DataPacRat, moridinamael, ChristianKl, turchin, scarcegreengrass, pcm, siIver
comment by turchin · 2016-09-19T23:23:14.503Z · LW(p) · GW(p)

These two lines seem contradictory to me. It is not clear whether I should upload you or preserve your brain.

  • I don't understand how the cells of the brain produce qualia and consciousness, and have a certain concern that an attempt at uploading my mind into digital form may lose important parts of my self. If you haven't solved those fundamental problems of how brains produce minds, I would prefer to be revived as a biological, living being, rather than have my mind uploaded into software form.

  • I understand that all choices contain risk. However, I believe that the "information" theory of identity is a more useful guide than theories of identity which tie selfhood to a physical brain. I also suspect that there will be certain advantages to being one of the first minds turned into software, and certain disadvantages. In order to try to gain those advantages, and minimize those disadvantages, I am willing to volunteer to let my cryonically-preserved brain be used for experimental mind-uploading procedures, provided that certain preconditions are met, including:

Replies from: DataPacRat
comment by DataPacRat · 2016-09-19T23:55:42.761Z · LW(p) · GW(p)

The intended meaning, which it seems I will need to rephrase to clarify: "If you are experimenting with uploading, and can meet these minimal common-sense standards, then I'm willing to volunteer ahead of time to be your guinea pig. If you can't meet them, then I'd rather stay frozen a little longer. Just FYI."

Replies from: WhySpace_duplicate0.9261692129075527
comment by WhySpace_duplicate0.9261692129075527 · 2016-09-21T04:32:05.422Z · LW(p) · GW(p)

This is potentially quite important.

MIRI, OpenAI, FHI, etc. are focusing largely on artificial paths to superintelligence, since that leads to the value loading problem. While this is likely the biggest concern in terms of expected utility, neuron-level simulations of minds may provide another route. This might actually be where the bulk of the probability of superintelligence resides, even if the bulk of the expected utility lies in preventing things like paperclip maximizers.

Robin Hanson has some persuasive arguments that uploading may actually occur years before artificial intelligence becomes possible. (See Age of Em.) If this is the case, then it may be highly valuable to have the first uploads be very familiar with the risks of the alignment problem. This could prevent two paths to misaligned AI:

  1. Uploads running at faster subjective speeds greatly accelerating the advent of true AI, by developing it themselves. Imagine a thousand copies of the smartest AI researcher running at 1000x human speed, collaborating with him or herself on the first AI.

  2. The uploads themselves are likely to be significantly modifiable. Since it would always be possible to reset to a backup, it becomes much easier to experiment with someone's mind. Even if we start out only knowing how neurons are connected, but not much about how they function, we may quickly develop the ability to massively modify our own minds. If we mess with our utility functions, whether intentionally or unintentionally, this starts to raise concerns like AI alignment and value drift.

The obvious solution is to hand Bostrom's Superintelligence out like candy to cryonicists. Maybe even get Alcor to try and revive FAI researchers first. However, given a first-in-last-out policy, this may not be as important for us as for future generations. We obviously have a lot of time to sort this out, so this is likely a low priority this decade/century.

comment by DataPacRat · 2016-09-20T17:28:36.267Z · LW(p) · GW(p)

New version of the draft text: https://www.datapacrat.com/temp/Cryo Revival Preferences - draft 0.1.2.txt

Replies from: DataPacRat
comment by DataPacRat · 2016-09-21T16:10:03.070Z · LW(p) · GW(p)

Today's version: https://www.datapacrat.com/temp/Cryo Revival Preferences - draft 0.1.3.txt

The change: Added new paragraph:

There is no such thing as being able to have 100% certainty that a piece of software is without flaws or errors. One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection. Without that strategy, not only are bugs much more likely to remain, but when someone does manage to find a bug, it is likely to remain secret and uncorrected. Such uncorrected bugs can be used by unscrupulous people to do just about anything to any data stored on a computer. This is bad enough when that data is merely personal email, or even a bank's financial records; when the data is a sapient mind, the possibilities are horrifying. Given the possible downsides, I find it difficult to trust the motives of anyone who wishes to run an uploaded mind on a computer that uses closed-source software. Therefore, if there is a choice between uploading my mind using uninspectable, closed-source software, and not being revived, I would choose not to be uploaded in that fashion, even if doing so increases the risk of never being revived at all. If, however, the choice is between not being revived and uploading my mind using closed-source software that the uploaded mind can inspect - including all the documentation necessary for the uploaded mind to learn how to understand that software - then I would reluctantly agree to the uploading procedure as being preferable to risking never being revived at all.

Replies from: Lumifer
comment by Lumifer · 2016-09-21T16:38:04.163Z · LW(p) · GW(p)

One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection.

That's a claim often made ("With enough eyes all bugs are shallow") but it's not so clear-cut in practice. In real life a lot of open-source projects are very buggy and remain very buggy (and open to 'sploits) for a very long time. At the same time there is closed-source software which is considerably more bug-free (but very expensive) -- e.g. the code in fly-by-wire airplanes.

Besides, physical control, generally speaking, trumps all. If your mind is running on top of, say, open-source Ubuntu 179.5 Zooming Zazzle but I have access to your computing substrate, that is, the physical machine which runs the code, the fact that the machine runs an open-source OS is quite irrelevant. You're looking for impossible guarantees.

And remember that you are not making choices, but requests. You can't "trust the motives" or not -- if someone revives you with malicious intent, he can ignore your requests easily enough.

Replies from: DataPacRat
comment by DataPacRat · 2016-09-21T16:46:44.518Z · LW(p) · GW(p)

a lot of open-source projects are very buggy and remain very buggy

Yep.

there is closed-source software which is considerably more bug-free

Yep.

You're looking for impossible guarantees.

I'm not looking for guarantees at all. (Put another way, I'm well aware that 0 and 1 are not probabilities.) What I am doing is trying to gauge the odds; and given my own real-world experience, open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software, to the extent that I'm willing to make an important choice based on whether or not a piece of software is open-source.

And remember that you are not making choices, but requests.

True, as far as it goes. However, this document I'm writing is also something of a letter to anyone who is considering reviving me, and given how history goes, they are very likely going to have to take into account factors that I currently can't even conceive of. Thus, I'm writing this doc in a fashion that not only lists my specific requests in regards to particular items, but also describes the reasoning behind the requests, so that the prospective reviver has a better chance of being able to extrapolate what my preferences about the unknown factors would likely be.

if someone revives you with malicious intent

If someone revives me with malicious intent, then all bets are off, and this document will nigh-certainly do me no good at all. So I'm focusing my attention on scenarios involving at least some measure of non-malicious intent.

Replies from: Lumifer
comment by Lumifer · 2016-09-21T20:47:26.803Z · LW(p) · GW(p)

open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software

On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer.

Not to mention that you're assuming that "open-source" and "closed-source" concepts will still make sense in that high-tech future. As an example, let's say I give you a trained neural net. It's entirely open source, you can examine all the nodes, all the weights, all the code, everything. But I won't tell you how I trained that NN. Are you going to trust it?

Replies from: DataPacRat
comment by DataPacRat · 2016-09-21T23:20:51.306Z · LW(p) · GW(p)

On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer.

That's true. But given the various reasonably-possible scenarios I can think of, making this extreme of a request seems to be the only way to express the strength of my concern. I'll admit it's not a common worry; of course, this isn't a common sort of document.

(If you want to know more about what leads me to this conclusion, you could do worse than to Google one of Cory Doctorow's talks or essays on 'the war on general-purpose computation'.)

As an example

You provide insufficient data about your scenario for me to make a decent reply. Which is why I included the general reasoning process leading to my requests about open- and closed-source - and, in the latest version of the doc, I have mentioned that part of the reason for going into that detail is to let revivalists have some data with which to extrapolate what my choices would be in unknown scenarios. (In this particular case, the whole point of differentiating between open- and closed-source software is the factor of /trust/ - and in your scenario, you don't give any information on how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted.)

Replies from: Lumifer
comment by Lumifer · 2016-09-22T14:44:40.904Z · LW(p) · GW(p)

I am well aware of the war on general computation, but I fail to see how it's relevant here. If you are saying you don't want to be alive in a world where this war has been lost, that's... a rather strong statement.

To make an analogy, we're slowly losing the ability to fix, modify, and, ultimately, control our own cars. I think that is highly unfortunate, but I'm unlikely to declare a full boycott of cars and go back to horses and buggy whips.

Since you're basically talking about security, you might find it useful to start by specifying a threat model.

how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted

What do you mean by "such NNs"? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly -- it's too general for a meaningful answer.

In any case, the point is that the preference for open-source relies on it being useful, that is, the ability to gain helpful information from examining the code, and the ability to modify it to change its behaviour. You can examine a sufficiently complex trained NN all you want, but the information you'll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.

Replies from: DataPacRat
comment by DataPacRat · 2016-09-22T18:37:52.842Z · LW(p) · GW(p)

Since you're basically talking about security, you might find it useful to start by specifying a threat model.

I thought I had; it's the part around the word 'horrifying'.

What do you mean by "such NNs"? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly -- it's too general for a meaningful answer.

We actually already have a lot of the fundamental software required to run an "emulate brain X" program - stuff that accesses hardware, shuffles swap space around, arranges memory addresses, connects to networking, models a virtual landscape and avatars within, and so on. Some scientists have done extremely primitive emulations of neurons or neural clusters, so we've got at least an idea of what software is likely to need to be scaled up to run a full-blown human mind. None of this software has any particular need for neural-nets. I don't know how such NNs as you propose would be necessary to emulate a brain; I don't know what service they would add, how fundamental they would be, what sort of training data would be used, and so on.

Put another way, as best as I can interpret your question, it's like saying "And what if future cars required an algae system?", without even saying whether the algae tubing is connected to the fuel, or the exhaust, or the radiator, or the air conditioner. You're right that NNs are general-purpose; that is, in fact, the issue I was trying to raise.

You can examine a sufficiently complex trained NN all you want, but the information you'll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.

Alright. In this model - in which the training data is unavailable, the existing NN can't be retrained or otherwise modified, and there is no mention of being able to train up a replacement NN with different behaviours - the NN matches the relevant aspects of "closed-source" software much more closely than "open-source": if a hostile exploiter finds a way to, say, leverage increased access and control of the computer through the NN, there is little-to-no chance of detecting or correcting the aspects of the NN's behaviour which allow that. I'll spend some time today seeing if I can rework the relevant paragraphs so that this conclusion can be more easily derived.

Replies from: Lumifer
comment by Lumifer · 2016-09-23T15:00:45.245Z · LW(p) · GW(p)

the part around the word 'horrifying'

That's not a threat model. A threat model is basically a list of adversaries and their capabilities. Typically, defensive measures help against some of them, but not all of them -- a threat model helps you figure out the right trade-offs and estimate who you are (more or less) protected from, and who you are vulnerable to.
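Purely to illustrate the form, a toy sketch whose adversaries are drawn from elsewhere in this thread (not from this comment), with made-up capability lists:

```python
# Toy threat model for an em: adversaries mapped to their capabilities
# and whether one proposed defence (open-source substrate software) helps.
threat_model = {
    "script kiddie on the internet": {
        "capabilities": ["automated mass scans", "default-password exploits"],
        "open_source_substrate_helps": True,   # auditable, patched faster
    },
    "owner of the physical substrate": {
        "capabilities": ["full hardware access", "copy or modify the em"],
        "open_source_substrate_helps": False,  # physical control trumps all
    },
    "malicious reviver": {
        "capabilities": ["ignore the revival document entirely"],
        "open_source_substrate_helps": False,
    },
}
```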

stuff that accesses hardware, shuffles swap space around, arranges memory addresses

That stuff usually goes by the name of "operating system". Why do you think that brain emulations will run on top of something that's closely related to contemporary operating systems?

a hostile exploiter

You seem to worry a lot about your brain emulation being hacked from the outside, but you don't worry as much about what the rightful owner of the hardware and the software on top of which your em lives might do?

Replies from: DataPacRat
comment by DataPacRat · 2016-09-23T19:12:42.854Z · LW(p) · GW(p)

A threat model is

I'm merely a highly-interested amateur. Would you be willing to help me work out the details of such a model?

Why do you think that brain emulations will run on top of something that's closely related to contemporary operating systems?

Because even as a scifi fan, I can only make so many guesses about alternatives, and it seems at least vaguely plausible that the same info-evolutionary pressures that led to the development of contemporary operating systems will continue to exist for at least the next couple of decades. At least, plausible enough that I should at least cover it as a possibility in the request-doc.

the rightful owner of the hardware

Without getting into the whole notion of property rights versus the right to revolution, if I thought whoever was planning to run a copy of me on a piece of hardware was fully trustworthy, why would I have included the 'neutral third-party' clause?

Replies from: Lumifer
comment by Lumifer · 2016-09-27T18:27:40.838Z · LW(p) · GW(p)

You are writing, basically, a living will for a highly improbable situation. Conditional on that situation happening, I think that since you have no idea what conditions you will wake up into, it's best to leave the decision to the future-you. Accordingly, the only thing I would ask for is the ability for your future-you to decide his fate (notably, including his right to suicide if he makes this choice).

Replies from: DataPacRat
comment by DataPacRat · 2016-09-28T01:33:52.584Z · LW(p) · GW(p)

living will

In the latest draft, I've rewritten at least half from scratch, focusing on the reasons why I want to be revived in the first place, and thus under which circumstances reviving me would help those reasons.

future-you

The whole point about being worried about hostile entities taking advantage of vulnerabilities hidden in closed-source software is that future-me might be even less trustable to work towards my values than the future-self of a dieter can be trusted not to grab an Oreo if any are left in their home. Note to self: include the word 'precommitment' in version 0.2.1.

Replies from: Lumifer
comment by Lumifer · 2016-09-28T14:22:46.153Z · LW(p) · GW(p)

is that future-me might be even less trustable to work towards my values

If whoever revives you deliberately modifies you, you're powerless to stop it. And if you're worried that future-you will be different from past-you, well, that's how life works. A future-you in five years will be different from current-you who is different from the past-you of five years ago.

As to precommitment, I don't think you have any power to precommit, and I don't think it's a good idea either. Imagine if a seven-year-old past-you somehow found a way to precommit the current-you to eating a pound of candy a day, every day...

Replies from: DataPacRat
comment by DataPacRat · 2016-09-28T21:57:05.656Z · LW(p) · GW(p)

If whoever revives you deliberately modifies you, you're powerless to stop it.

True, which is why I'm assuming a certain minimal amount of good-will on the part of whoever revives me. However, just because the reviver has control over the technology allowing my revival doesn't mean they're actually technically competent in matters of computer security - I've seen too many stories in /r/talesfromtechsupport of computer-company executives being utterly stupid in fundamental ways for that. The main threat I'm trying to hold off is, roughly, "good-natured reviver leaves the default password in my uploaded self's router unchanged, script-kiddie running automated attacks on the whole internet gains access, script turns me into a sapient bitcoin-miner-equivalent for that hacker's benefit". That's just one example of a large class of threats. No hostile intent by the reviver is required, just a manager-level understanding of computer security.

A future-you in five years will be different from current-you who is different from the past-you of five years ago.

Yes, I know. This is one reason that I am trying not to specify /what/ it is I value in the request-doc, other than 1) instrumental goals that are good for achieving many terminal goals, and 2) valuing my own life both as an instrumental and a terminal goal, which I confidently expect to remain as one of my fundamental values for quite some time to come.

I don't think you have any power to precommit

I'll admit that I'm still thinking on this one. Socially, precommitting is mainly useful as a deterrent, and I'm working out whether trying to precommit to work against anyone who modifies my mind without my consent, or any other variation of the tactic, would be worthwhile even if I /can/ follow through.

Replies from: Lumifer, DataPacRat
comment by Lumifer · 2016-09-29T14:39:22.058Z · LW(p) · GW(p)

leaves the default password in my uploaded self's router unchanged

Imagine a Faerie Queen popping into existence near you and saying: Yo, I have a favour to ask. See, a few centuries ago a guy wished to live in the far future, so I thought why not? it's gonna be fun! and I put him into stasis. It's time for him to wake up, but I'm busy so can you please reanimate him? Here is the scroll which will do it, it comes with instructions. Oh, and the guy wrote a lengthy letter before I froze him -- he seemed to have been very concerned about his soul being tricked by the Devil -- here it is. Cheers, love, I owe you one! ...and she pops out of existence again.

You look at the letter (which the Faerie Queen helpfully translated into more or less modern English) and it's full of details about consecrated ground, and wards against evil eyes, and witch barriers, and holy water, and what kind of magic is allowed anywhere near his body, and whatnot.

How seriously are you going to take this letter?

Replies from: DataPacRat
comment by DataPacRat · 2016-09-29T15:52:24.727Z · LW(p) · GW(p)

How seriously are you going to take this letter?

Language is a many-splendored thing. Even a simple shopping list contains more information than a mere list of goods; a full letter is exponentially more valuable. As one fictional character once put it, it's worth looking for the "underneath the underneath"; as another one put it, it's possible to deduce much of modern civilization from a cigarette butt. If you need a specific reason to pay attention to such a letter spelled out for you, then it could be looked at for clues as to how likely the reanimated fellow would need to spend time in an asylum before being deemed competent to handle his own affairs and released into modern society, or if it's safe to plan on just letting him crash on my couch for a few days.

And that's without even touching the minor detail that, if a Faerie Queen is running around, then the Devil may not be far behind her, and the resurrectee's concerns may, in fact, be completely justified. :)

PS: I like this scenario on multiple levels. Is there any chance I could convince you to submit it to /r/WritingPrompts, or otherwise do more with it on a fictional level? ;)

Replies from: gjm, Lumifer
comment by gjm · 2016-09-29T22:00:36.233Z · LW(p) · GW(p)

It looks like you've changed the subject a bit -- from whether the letter should be taken seriously in the sense of doing what it requests, to whether it should be taken seriously in the sense of reading it carefully.

Replies from: DataPacRat
comment by Lumifer · 2016-09-29T16:28:28.230Z · LW(p) · GW(p)

Oh, I'm sure the letter is interesting, but the question is whether you will actually set up wards and have a supply of holy water on hand before activating the scroll. Though the observation that the existence of the Faerie Queen changes things is a fair point :-)

I don't know if the scenario is all that exciting, it's a pretty standard trope, a bit tarted-up. If you want to grab it and run with it, be my guest.

comment by DataPacRat · 2016-09-29T01:36:40.956Z · LW(p) · GW(p)

I'm still working out various aspects, details, and suchlike, but so you can at least see what direction my thoughts are going (before I've hammered these into good enough shape to include in the revival-request doc), here's a few paragraphs I've been working on:

Sometimes, people will, with the best of intentions, perform acts that turn out to be morally reprehensible. As one historical example in my home country, with the stated justification of improving their lives, a number of First Nations children were sent to residential schools where the efforts to eliminate their culture ranged from corporal punishment for speaking the wrong language to instilling lessons that led the children to believe that Indians were worthless. While there is little I, as an individual, can do to make up for those actions, I can at least try to learn from them, to try to reduce the odds of more tragedies being done with the claim of "it was for their own good". To that end, I am going to attempt a strategy called "precommitment". Specifically, I am going to do two things: I am going to precommit to work against the interests of anyone who alters my mind without my consent, even if, after the alteration, I agree with it; and I am going to give my consent in advance to certain sharply-limited alterations, in much the way that a doctor can be given permission to do things to a body that would be criminal without that permission.

I value future states of the universe in which I am pursuing things I value more than I value futures in which I pursue other things. I do not want my mind to be altered in ways that would change what I value, and the least hypocritical way to prevent that is to discourage all forms of non-consensual mind-alteration. I am willing to agree that I, myself, should be subject to such forms of discouragement if I were to attempt such an act. I have been able to think of one single moral justification for such acts: clear evidence that doing so will reduce the odds of all sapience going permanently extinct. But given how easily people are able to fool themselves, if non-consensually altering someone's mind really is what is required to prevent extinction, then accepting responsibility for doing so, including whatever punishments result, would be a small price to pay; and so I am willing to accept such punishments even in this extreme case, in order to discourage the frivolous use of this justification.

While a rigid stance against non-consensual mind-alteration may be morally required in order to allow a peaceful society, there are also certain benefits to allowing consensual mind-alteration in certain cases. Most relevantly, it could be argued that scanning a brain and creating a software emulation of it could be counted as altering it, and it is obviously in my own self-interest to allow that as an option to help me be revived to resume pursuing my values. Thus, I am going to give my consent in advance to "alter" my mind to allow me to continue to exist, with the minimal amount of alteration possible, in two specific circumstances: 1) If such alterations are absolutely required to allow my mind to continue to exist at all, and 2) As part of my volunteering to be a subject for experimental mind-uploading procedures.

Replies from: Lumifer
comment by Lumifer · 2016-09-29T14:29:03.407Z · LW(p) · GW(p)

I am going to precommit to work against the interests of anyone who alters my mind without my consent

And how are you going to do this? Precommitment is not a promise, it's making it so that you are unable to choose in the future.

Replies from: DataPacRat
comment by DataPacRat · 2016-09-29T15:27:37.448Z · LW(p) · GW(p)

Well, if you don't mind my tweaking your simple and absolute "unable" into something more like "unable, at least without suffering significant negative effects, such as a loss of wealth", then I am aware of this, yes. Precommitment for something on this scale is a big step, and I'm taking a bit of time to think the idea over, so that I can become reasonably confident that I want to precommit in the first place. If I do decide to do so, then one of the simpler options could be to, say, pre-authorize whatever third-party agents have been nominated to act in my interests and/or on my behalf to use some portion of edited-me's resources to fund the development of a version of me without the editing.

Replies from: Lumifer
comment by Lumifer · 2016-09-29T15:33:37.660Z · LW(p) · GW(p)

If you're unable to protect yourself from being edited, what makes you think your authorizations will have any force or that you will have any resources? And if you actually can "fund the development of a version of me without the editing", don't you just want to do it unconditionally?

Replies from: DataPacRat
comment by DataPacRat · 2016-09-29T16:01:01.159Z · LW(p) · GW(p)

I think we're bumping up against some conflicting assumptions. At least at this stage of the drafting process, I'm focusing on scenarios where at least some of the population of the future has at least some reason to pay at least minimal attention to whatever requests I make in the letter. If things are so bad that someone is going to take my frozen brain and use it to create an edited version of my mind without my consent, and there isn't a neutral third-party around with a duty to try to act in my best interests... then, in such a future, I'm reasonably confident that it doesn't matter what I put in this request-doc, so I might as well focus my writing on other futures, such as ones in which a neutral third-party advocate might be persuadable to set up a legal instrument funneling some portion of my edited-self's basic-guaranteed-income towards keeping a copy of the original brain-scan safely archived until a non-edited version of myself can be created from it.

comment by moridinamael · 2016-09-20T14:15:02.785Z · LW(p) · GW(p)

If I were going to make such a document, I would make it minimally restrictive. I would rather be brought back even in less-than-ideal circumstances, so that I could observe how the world had developed, and then decide whether I wanted to stay. At least then I would have a me-like agent operating on my own behalf.

If they bring me back as a qualia-less em, then at least there's a chance that the em will be able to say, "Hey, this is cool and everything, but this isn't actually what my predecessor wanted. So even though I don't have qualia, I'll make it my personal mission to try to bring myself back with qualia." Precommitting to such an attitude now while you're alive boosts the odds of this. At worst, if it turns out to be impossible to revive the "observer", there's a thing-like-you running around in the future spreading your values, even if it doesn't have your consciousness, and I can't see that as a bad thing.

Replies from: Houshalter
comment by Houshalter · 2016-09-21T20:05:36.667Z · LW(p) · GW(p)

Well what if suicide is illegal in the future? And even if it isn't, suicide is really hard to go through with. A lot of people have preferences that they would prefer not to be revived with brain damage, but people with brain damage do not commonly kill themselves.

Replies from: Dagon, Lumifer
comment by Dagon · 2016-09-22T16:23:47.636Z · LW(p) · GW(p)

I see this combination of expressed preference and actions (would prefer not to live with brain damage, but then actually choose to live with brain damage) as a failure of imagination and incorrect far-mode statements, NOT as an indication that the prior statement was true but was thwarted by some outside force.

Future-me instances have massively more information about what they're experiencing in the future than present-me has now. It's ludicrous for present-me to try to constrain future-me's decisions, and even more so to try to identify situations where present-me's wishes will be honored but future-me's decisions won't.

You can prevent adverse revival by cremation or burial (in which case you also prevent felicitous revival). If an evil regime wants you, any contract language is useless. If an individual-respecting regime considers your revival, future-you would prefer to be revived and asked, rather than being held to a past-you document that cannot predict the details of the current situation very well.

comment by Lumifer · 2016-09-21T20:50:33.474Z · LW(p) · GW(p)

Well what if suicide is illegal in the future?

More to the point, what if suicide is impossible? It's not hard at all to prevent an em from committing suicide and, of course, if you have copies and backups, he can suicide all he wants...

comment by ChristianKl · 2016-09-20T10:25:35.012Z · LW(p) · GW(p)

You don't seem to describe what you would consider as a revived copy of you. How much of your personality has to stay intact?

comment by turchin · 2016-09-19T23:28:26.476Z · LW(p) · GW(p)

I would add lines about whether you would prefer to be revived together with your friends and family members, before them, or after them.

Maybe I would add a secret question to check if you are restored properly.

I would also add all my digital immortality back-up information, which could be used to fill gaps in case some information is lost.

I also expect that revival may happen maybe 20-30 years after my death, so I should add some kind of will about how to manage my property during my absence.

Replies from: DataPacRat
comment by DataPacRat · 2016-09-20T00:02:45.761Z · LW(p) · GW(p)

I'm afraid that none of my friends or family are interested in cryo.

I already created one recognition protocol, but it's more for multiple copies of myself meeting. I suppose it would be easy enough to include an MD5 hash of a keyphrase in this doc.

I already have provisions in place for my other data, which will end up in that "perpetual storage drawer" I mentioned.

Preserving assets while I'm dead is an entirely different kettle of fish, and assumes that I will have any worth preserving, which, given my financial situation, I don't expect to be the case.

Replies from: ChristianKl
comment by ChristianKl · 2016-09-20T10:26:30.095Z · LW(p) · GW(p)

I suppose it would be easy enough to include an MD5 hash of a keyphrase in this doc.

I think MD5 hashes will likely be broken by the time of any resurrection. MD5 already has collision problems today.
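For illustration only - a minimal sketch of generating such a keyphrase commitment with a stronger hash (SHA-256 here; the keyphrase is a made-up placeholder, not anyone's actual phrase):

```python
import hashlib

# Hypothetical keyphrase, known only to the person writing the document.
keyphrase = "correct horse battery staple"  # placeholder

# SHA-256 currently has no practical preimage or collision attacks,
# unlike MD5, which has had practical collision attacks since 2004.
digest = hashlib.sha256(keyphrase.encode("utf-8")).hexdigest()
print(digest)  # publish this hex digest in the doc; reveal the phrase on revival
```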

comment by scarcegreengrass · 2016-09-19T21:22:51.183Z · LW(p) · GW(p)

Interesting idea! I guess you could add a 'when in doubt' preference for whether you'd rather be revived in an early period (eg, if resurrection is possible with an 80% success rate) or be downprioritized until resurrection is very mature and safe.

Replies from: DataPacRat
comment by DataPacRat · 2016-09-20T00:04:17.814Z · LW(p) · GW(p)

It shouldn't be too hard to add some quantitative numbers, or at least which numbers I'd like potential revivers to consider.

comment by pcm · 2016-09-22T19:23:33.292Z · LW(p) · GW(p)

My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario.

Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party.

My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.

comment by siIver · 2016-09-20T15:49:27.438Z · LW(p) · GW(p)

Great idea. I will probably do a similar thing myself at some point, and it will probably look similar to yours.

The only thing I see that might be missing is advice for a scenario in which the odds of revival go down with time, creating pressure to revive you sooner rather than later. In that case your wishes may contradict each other (since later revival could still increase the odds of living indefinitely). That seems far-fetched but not entirely impossible.

Other than that, I'd say be more specific to avoid any possible misinterpretation. You never know how much bureaucracy will be involved in the process when it finally happens.

comment by [deleted] · 2016-09-25T06:18:03.628Z · LW(p) · GW(p)

Astrobiology bloggery got interrupted by a SEVERE bout of a sleep disorder, developing systems to measure metabolic states of single yeast cells in order to freaking graduate soonish, and having a bit of a life for a while.

Astrobiology bloggery resumes within 1 week, with my blog moved from thegreatatuin.blogspot.com to thegreatatuin.wordpress.com, blogger being completely unusable when it comes to inserting graphs and the like. Dear gods I'm excited, the last year has seen a massive explosion in origin of life research and study of certain outer solar system bodies. To the point that I'm pretty sure the metabolism of the last universal common ancestor has been figured out and the origin of the ribosome (and therefore protein-coding genetics) as well.

Advice on running a personal WordPress account welcomed.

comment by Throawey · 2016-09-20T04:54:25.385Z · LW(p) · GW(p)

For a while now, I have been working on a potentially impactful project. The main limiting factor is my own personal productivity- a great deal of the risk is frontloaded in a lengthy development phase. Extrapolating the development duration based on progress so far does not yield wonderful results. It appears I should still be able to finish it in a not-absurd timespan, it will just be slower than ideal.

I've always tried to improve my productivity, and I've made great progress in that compared to ten or even five years ago, but at this point I've picked most of the standard low-hanging fruit. I've already fiddled with some extremely easy and safe kinda-nootropics - melatonin, occasional caffeine pills - but not things like modafinil or amphetamines, or some of the less studied options.

And while thinking about this today, I decided to just run some numbers on amphetamines. Based on my current best estimates of market realities and the potential success and failure cases of the project, assuming amphetamines could improve my productivity by 30% on average, the expected value of taking amphetamines for the duration of development comes out to...

...a few hundred human lives.

And, in the best-reasonable case scenario, a lot more than that. This wasn't really unexpected, but it's surprisingly the first time I actually did the math.
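As a back-of-envelope sketch of that expected-value arithmetic, with entirely made-up numbers (the comment does not disclose the actual estimates):

```python
# Hypothetical inputs; none of these figures come from the original comment.
p_success_baseline = 0.10        # chance the project succeeds at current pace
p_success_boosted = 0.13         # chance given an assumed ~30% productivity gain
lives_saved_if_success = 10_000  # hypothetical impact of a successful project

# Expected lives gained by taking the productivity boost.
ev_gain = (p_success_boosted - p_success_baseline) * lives_saved_if_success
print(ev_gain)  # 300.0 -- the same "few hundred lives" order of magnitude
```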

So I imagine the God of Dumb Trolley Problems sits me down for a thought experiment and explains: "In a few years, there will be a building full of 250 people. A bomb will go off and kill all of them. You have two choices." The god leans in for dramatic effect. "Either you can do nothing, and let all of them die... or..." It lowers its head just enough for shadows to cast over its features... "You take this low, safe dose of Adderall for a few years, and the bomb magically gets defused."

This is not a difficult ethical problem. Even taking into account potential side effects, even assuming the amphetamines were obtained illegally and so carried legal liability, this is not a difficult ethical problem. When I look at this, I feel like the answer of what I should do is blindingly obvious.

And yet I have a strong visceral response of "okay yeah sure but no." I assume part of this is fairly extreme risk aversion to the idea of getting anything like amphetamines outside of a prescription. Legal trouble would be pretty disastrous, even if unlikely. And part of me is spooked about doing something like this without expert oversight.

But why not just try to get an actual prescription? For this, or some other advantageous semi-nootropic, at least. Once again, I just get a gross feeling about the idea of trying to manipulate the system. How about if I just explain the situation in full, with zero manipulation, to a sympathetic doctor? The response from my gut feels like a blank "... no."

So basically, I feel stuck. Part of me wants to recognize the risk aversion as excessive, and suggests I should at least take whatever steps I can safely. The other part is saying "but that is doing something waaaay out of the ordinary and maybe there's a reason for that that you haven't properly considered."

I am not even sure what I want to ask with this post. I guess if you've got any ideas or insights, I'd like to hear them.

Replies from: username2, hg00, moridinamael, Gurkenglas
comment by username2 · 2016-09-22T00:20:52.590Z · LW(p) · GW(p)

Have you ever taken Adderall? I greatly suspect you have not.

People who fight chronic akrasia because of various degrees of ADHD and related mental disorders have a different response to stimulants than "normal" individuals. For me, Adderall means cool, calm, clear focus. The kind of productive mode of being that most people get into by drinking a cup of coffee (except coffee makes me jittery and unfocused). Being on Adderall is just... "normal." Indeed, the first time I tried it I thought the dose was too low because I didn't feel a thing... until 8 hours later when I realized I was still cranking away at good code and able to focus instead of my normal bouts of mid-day akrasia. I could probably count on my hands the number of times I had a full day of highly focused work without feeling stress or burn-out afterwards... now it's the new normal :)

For such people low-dose amphetamines don't provide any high, nor are they accompanied by some sort of berserker productivity binge like popular media displays. In the correct dosages they also don't seem to come with any addiction or withdrawal -- I go off of it without any problems, other than reverting to the normal, vicious cycles of distraction and akrasia. (This isn't just anecdotal data -- the incidence rate of Adderall addiction among those following the prescribed plan is lost in the background noise of people who are abusing in these trials.)

Honestly, see a psychiatrist that specializes in these things and talk to them about your inability to focus, your history of trouble in completing complex, long tasks, how this is affecting your career and personal growth goals, etc. Be honest about your shortcomings, and chances are they will work with you to find a treatment plan that truly helps you. You're not manipulating anybody.

Seriously, ADHD is a real mental disorder. Your first step should be to recognize it as such, and accept the fact that you might actually have a real medical condition that needs treatment. You're not manipulating the system, you're exactly the kind of person the system is trying to help! Prescription drugs are for more than just people who hear voices...

Replies from: Throawey
comment by Throawey · 2016-09-23T04:54:10.927Z · LW(p) · GW(p)

You are correct that I have not taken Adderall, or any other amphetamines. I would probably be less hesitant if I already knew how I reacted to them.

I do fully recognize ADD/ADHD as real, though. I have spent a great deal of time around people with it. Some are very, very severely impacted. (I have to laugh a bit whenever I see implications that it's somehow 'fake'- it can be about as subtle as a broken bone.)

But my familiarity with it is also part of the reason why I have never really considered the possibility of having it. Even measured against 'normal' people, I seem to be very productive, and when I compare my difficulties with those of people I know with ADHD... It seems like mine would have to be a relatively mild case, or there would need to be some factor that is mitigating its impact.

That said, from a hereditary perspective it would be a little weird if I don't have it to some degree. The situation and low cost of asking basically demand that I give it further investigation, at least.

comment by hg00 · 2016-09-20T05:49:55.603Z · LW(p) · GW(p)

Drugs are prescribed based on a cost-benefit analysis. In general, the medical establishment is pretty conservative (there's little benefit to the doctor if your problem gets solved, but if they hurt you they're liable to get sued). In the usual case for amphetamines, the cost is the risk of side effects and the benefit is helping someone manage their ADHD. For you, the cost is the same but it sounds like the benefit is much bigger. So even by the standards of the risk-averse medical establishment, this sounds like a risk you should take.

You're an entrepreneur. A successful entrepreneur thinks and acts for themselves. This could be a good opportunity to practice being less scrupulous. Paul Graham on what makes founders successful:

Naughtiness

Though the most successful founders are usually good people, they tend to have a piratical gleam in their eye. They're not Goody Two-Shoes type good. Morally, they care about getting the big questions right, but not about observing proprieties. That's why I'd use the word naughty rather than evil. They delight in breaking rules, but not rules that matter. This quality may be redundant though; it may be implied by imagination.

Sam Altman of Loopt is one of the most successful alumni, so we asked him what question we could put on the Y Combinator application that would help us discover more people like him. He said to ask about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into computers. It has become one of the questions we pay most attention to when judging applications.

I'd recommend avoiding Adderall as a first option. I've heard stories of people whose focus got worse over time as tolerance to the drug's effects developed.

Modafinil, on the other hand, is a focus wonder drug. It's widely used in the nootropics community and bad experiences are quite rare. (/r/nootropics admin: "I just want to remind everyone that this is a subreddit for discussing all nootropics, not just beating modafinil to death.")

The legal risks involved with Modafinil seem pretty low. Check out Gwern's discussion.

My conclusion is that buying some Modafinil and trying it once could be really valuable, if only for comfort zone expansion and value of information. I have very little doubt that this is the right choice for you. Check out Gwern's discussion of suppliers. (Lying to your doctor is another option if you really want to practice being naughty.)

Replies from: fr00t, Throawey, ChristianKl
comment by fr00t · 2016-09-22T16:48:45.814Z · LW(p) · GW(p)

In general, the medical establishment is pretty conservative (there's little benefit to the doctor if your problem gets solved, but if they hurt you they're liable to get sued)

If they don't give me what I want after I say the correct sequence of words I won't be returning to them.

It's easy to find a doctor who will work with you.

comment by Throawey · 2016-09-20T21:41:32.831Z · LW(p) · GW(p)

Thanks for the links.

I do notice that the idea of trying modafinil does not result in nearly the same degree of automatic internal 'no' as amphetamines. That would suggest my inhibitions are somehow related to the relative perceived potency, or potential health effects... or I'm disinclined to do something that could signal 'drug abuser', which I associate much more strongly with amphetamines than modafinil. Hm.

I've also been going around and asking the more conservative people in my circle about this situation as well, to try to give a more coherent voice to my subverbal objections. So far I've found that they actually support me trying things, which suggests I really should try to recalibrate those gut reactions a bit.

Upon reflection, I think I could actually get modafinil completely legitimately. I feel a bit dumb for not resolving to do this sooner, given that I was fully aware of modafinil- even to the point of very nearly purchasing some a while ago, before I knew it was schedule 4- and given that I was fully aware of what modafinil was often used to treat. At this point, the choice is pretty massively overdetermined.

Replies from: Douglas_Knight, hg00
comment by Douglas_Knight · 2016-09-22T01:59:10.144Z · LW(p) · GW(p)

Amphetamine is officially more dangerous than modafinil (for good reason), but doctors actually respond worse to patients asking for modafinil than asking for amphetamine because it's weird. The easiest way to get modafinil is probably to start with amphetamine and later ask for modafinil because it's weaker and safer.

Replies from: Throawey
comment by Throawey · 2016-09-23T04:18:20.516Z · LW(p) · GW(p)

That's... pretty goofy. I would hope sleep specialists, at least, would tend to reach for modafinil before amphetamines.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2016-09-23T05:17:41.619Z · LW(p) · GW(p)

Yes, I'm sure that narcoleptics are referred to sleep specialists who know that it is on-label for narcolepsy. Probably that makes them more likely to prescribe it off-label.

But few people go to sleep specialists. Scott Alexander has written many times about how as a psychiatry resident he sees patients who need a stimulant, but can't take amphetamine. He brainstorms with his supervisor and suggests modafinil and even in this perfect setup, he gets pushback.

But I wasn't talking about sleep problems, which includes the approved use of modafinil. I was talking about using it in place of amphetamine for ADHD, which is further off-label.

comment by hg00 · 2016-09-21T03:50:10.071Z · LW(p) · GW(p)

Glad I could help :D

comment by ChristianKl · 2016-09-20T09:40:16.644Z · LW(p) · GW(p)

Drugs are prescribed based on a cost-benefit analysis. In general, the medical establishment is pretty conservative (there's little benefit to the doctor if your problem gets solved, but if they hurt you they're liable to get sued).

The idea that doctors who prescribe Adderall to ADHD patients are conservative about prescribing it seems to be an extraordinary claim.

How many doctors do you think get sued for giving patients Adderall?

There's a lot of money from drug companies lobbying so that drugs like Adderall don't get prescribed in a conservative fashion.

Replies from: hg00
comment by hg00 · 2016-09-21T03:47:58.642Z · LW(p) · GW(p)

How many doctors do you think get sued for giving patients Adderall?

I'm assuming you think the answer is "not many". If so, this shows it's not a very risky drug--it rarely causes side effects that are nasty enough for a patient to want to sue their doctor.

From what I've read about pharmaceutical lobbying, it consists primarily of things like buying doctors free meals in exchange for using the company's drug instead of a competitor's drug. I doubt many doctors are willing to run a serious risk of losing their career over some free meals.

Replies from: ChristianKl
comment by ChristianKl · 2016-09-21T09:39:58.579Z · LW(p) · GW(p)

From what I've read about pharmaceutical lobbying, it consists primarily of things like buying doctors free meals in exchange for using the company's drug instead of a competitor's drug.

No. It also consists of lobbying the relevant politicians to make it hard to sue doctors, and generally fighting policies that would reduce harms caused by drugs. Drugmakers fought state opioid limits amid crisis:

The makers of prescription painkillers have adopted a 50-state strategy that includes hundreds of lobbyists and millions in campaign contributions to help kill or weaken measures aimed at stemming the tide of prescription opioids, the drugs at the heart of a crisis that has cost 165,000 Americans their lives and pushed countless more to crippling addiction.

The drugmakers vow they're combating the addiction epidemic, but The Associated Press and the Center for Public Integrity found that they often employ a statehouse playbook of delay and defend that includes funding advocacy groups that use the veneer of independence to fight limits on the drugs, such as OxyContin, Vicodin and fentanyl, the narcotic linked to Prince's death.


If so, this shows it's not a very risky drug--it rarely causes side effects that are nasty enough for a patient to want to sue their doctor.

That argument assumes that only side effects that can be proven in court to be bad are meaningful to worry about. Given that establishing causation of drug effects usually takes millions of dollars to run well-controlled studies published in leading medical journals - journals that let drug companies publish studies that don't follow the best standards of science the journals pledged to honor (the CONSORT standards) - it's not easy to prove all causation.

comment by moridinamael · 2016-09-20T19:45:54.691Z · LW(p) · GW(p)

This is not intended to be snarky or backhanded or anything. You did ask for insights.

It sounds like you're seeking some kind of complex justification to do something that you want to do anyway. Currently your reasons are not-necessarily-rational and maybe not fully consciously acknowledged, but you feel the desire/compulsion anyway. I say just go ahead and do what your gut is suggesting, while keeping in mind that you can always go back. This isn't an irrevocable decision, so you lose almost nothing by trying.

Replies from: Throawey
comment by Throawey · 2016-09-20T21:02:46.953Z · LW(p) · GW(p)

There is probably some of that going on. More potent nootropics have long been a kind of forbidden fruit to me.

comment by Gurkenglas · 2016-09-21T03:11:48.918Z · LW(p) · GW(p)

Perhaps you expect to in the future be in a position where your expected impact is significantly larger, and so your gut tells you to be careful with anything whose long-term effects are not clear?

Replies from: Throawey
comment by Throawey · 2016-09-23T04:19:25.724Z · LW(p) · GW(p)

Possibly. I don't know if my gut is that smart and forward thinking, but that is a bit of a conscious concern.

comment by Elo · 2016-09-21T23:28:58.691Z · LW(p) · GW(p)

I have updated the list of common human goals.
http://lesswrong.com/r/discussion/lw/mnz/list_of_common_human_goals/

social looked like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy?

and now looks like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy? Do you seek opportunities to have soul to soul experiences with other people? Authentic connection?

This change came from feedback from someone who felt it wasn't covered and who had a strong goal of authentic connection.

http://bearlamp.com.au/list-of-common-human-goals/

Replies from: ChristianKl
comment by ChristianKl · 2016-09-23T13:54:25.732Z · LW(p) · GW(p)

soul to soul

Those are interesting semantics.

Replies from: Elo
comment by Elo · 2016-09-25T19:26:38.740Z · LW(p) · GW(p)

not necessarily in lw jargon, but it appeals to some.

comment by MattG2 · 2016-09-20T15:52:15.981Z · LW(p) · GW(p)

Let's say I have a set of students, and a set of learning materials for an upcoming test. My goal is to run an experiment to see which learning materials are correlated with better scores on the test via multiple linear regression. I'm also going to make the simplifying assumption that the effects of the learning materials are independent.

I'm looking for an experimental protocol with the following conditions:

  1. I want to be able to give each student as many learning materials as possible. I don't want a simple RCT, but a factorial experiment where students get many materials and the statistics tease out each material's effect via linear regression.

  2. I have a prior about which learning materials will do better; I'd like to utilize this prior by initially distributing those materials to more students.

  3. (Bonus) Students are constantly entering this class, and I'd love to be able to do some multi-armed bandit thingy where, as I get more data, I continually update this prior.

I've looked at most of the links going from https://en.wikipedia.org/wiki/Optimal_design but they mostly show the mathematical interpretation of each method, not a clear explanation of the conditions under which you'd use each method.

Thanks!

Replies from: gwern
comment by gwern · 2016-09-20T16:38:27.898Z · LW(p) · GW(p)

You want some sort of adaptive or sequential design (right?), so the optimal design not being terribly helpful is not surprising: they're more intended for fixed up-front designing of experiments. They also tend to be oriented towards overall information or reduction of variance, which doesn't necessarily correspond to your loss function. Having priors affects the optimal design somewhat (usually, you can spend fewer datapoints on the variables with prior information); for a Bayesian experimental design, you can simulate a set of parameters from your priors and then simulate drawing n datapoints with a particular experimental design, fit the model, find your loss or your entropy/variance, record the loss/design, and repeat many times; then find the design with the best average loss.
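
A minimal sketch of that simulation loop, assuming a linear model of scores and squared error in the recovered effects as the loss; everything here (priors, noise scale, candidate designs) is an illustrative assumption:

```python
# Monte Carlo Bayesian experimental design: simulate from the prior, simulate
# data under a candidate design, fit, score, repeat; keep the best design.
import numpy as np

rng = np.random.default_rng(0)
n_materials = 4
prior_means = np.array([2.0, 1.0, 0.5, 0.0])  # hypothetical prior beliefs
prior_sds = np.ones(n_materials)

def average_loss(design, n_students=100, n_sims=200):
    """design[j] = probability of assigning material j to a student."""
    losses = []
    for _ in range(n_sims):
        beta = rng.normal(prior_means, prior_sds)         # draw "true" effects from the prior
        X = (rng.random((n_students, n_materials)) < design).astype(float)
        y = X @ beta + rng.normal(0, 1, n_students)       # simulated test scores
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the linear model
        losses.append(np.mean((beta_hat - beta) ** 2))
    return np.mean(losses)

# Evaluate a few candidate designs and keep the one with the best average loss.
candidates = [np.full(n_materials, p) for p in (0.3, 0.5, 0.7)]
best = min(candidates, key=average_loss)
print("best assignment probabilities:", best)
```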

If you are running the learning material experiment indefinitely and want to maximize cumulative test scores, then it's a multi-armed bandit and so Thompson sampling on a factorial Bayesian model will work well & handle your 3 desiderata: you set your informative priors on each learning material, model as a linear model (with interactions?), and Thompson sample from the model+data.
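
A sketch of what that could look like for independent material effects with conjugate normal priors (the priors, noise variance, and decision rule here are all illustrative assumptions):

```python
# Thompson sampling on a factorial linear model: keep a posterior over the
# effect vector, sample from it per student, assign materials whose sampled
# effect is positive, then do a conjugate Bayesian regression update.
import numpy as np

rng = np.random.default_rng(0)
n_materials = 4
noise_var = 1.0
mean = np.zeros(n_materials)  # posterior over the effect vector: N(mean, cov)
cov = np.eye(n_materials)

def choose_assignment():
    # Thompson step: sample a plausible effect vector from the posterior and
    # give the student every material whose sampled effect is positive.
    beta = rng.multivariate_normal(mean, cov)
    return (beta > 0).astype(float)

def update(x, score):
    # Conjugate update for one observation: score ~ N(x . beta, noise_var).
    global mean, cov
    prec_old = np.linalg.inv(cov)
    cov = np.linalg.inv(prec_old + np.outer(x, x) / noise_var)
    mean = cov @ (prec_old @ mean + x * score / noise_var)

true_beta = np.array([2.0, 1.0, 0.5, -0.5])  # hypothetical ground truth
for _ in range(500):
    x = choose_assignment()
    update(x, x @ true_beta + rng.normal(0, np.sqrt(noise_var)))
print("posterior means:", mean.round(2))
```

To include interactions, you would add product columns to x; the same update applies.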

If you want to find what set of learning materials is optimal as fast as possible by the end of your experiment, then that's the 'best-arm identification' multi-armed bandit problem. You can do a kind of Thompson sampling there too: best-arm Thompson sampling: http://imagine.enpc.fr/publications/papers/COLT10.pdf https://www.escholar.manchester.ac.uk/api/datastream?publicationPid=uk-ac-man-scw:227658&datastreamId=FULL-TEXT.PDF http://nowak.ece.wisc.edu/bestArmSurvey.pdf http://arxiv.org/pdf/1407.4443v1.pdf https://papers.nips.cc/paper/4478-multi-bandit-best-arm-identification.pdf One version goes: with the full posteriors, find the action A with the best expected loss; for all the other actions B..Z, Thompson sample their possible value; take the action with the best loss out of A..Z. This explores the other arms in proportion to their remaining chance of being the best arm, better than A, while firming up the estimate of A's value.
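
Here's a minimal sketch of that best-arm rule for independent Bernoulli arms with Beta posteriors (a simplification of the factorial setting; the true success rates below are made up):

```python
# Best-arm Thompson sampling: hold the currently-best arm A at its posterior
# mean, Thompson-sample the challengers, and play whichever looks best.
import numpy as np

rng = np.random.default_rng(0)
n_arms = 5
alpha = np.ones(n_arms)  # Beta(alpha, beta) posterior per arm
beta = np.ones(n_arms)

def pick_arm():
    post_means = alpha / (alpha + beta)
    best = int(np.argmax(post_means))  # action A: best expected value
    sampled = rng.beta(alpha, beta)    # Thompson-sample every arm...
    sampled[best] = post_means[best]   # ...but hold A at its posterior mean
    return int(np.argmax(sampled))     # a challenger wins only if plausibly better

true_p = np.array([0.3, 0.5, 0.55, 0.6, 0.4])  # hypothetical success rates
for _ in range(2000):
    arm = pick_arm()
    reward = float(rng.random() < true_p[arm])
    alpha[arm] += reward
    beta[arm] += 1 - reward
print("estimated best arm:", int(np.argmax(alpha / (alpha + beta))))
```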

Replies from: MattG2
comment by MattG2 · 2016-09-22T19:17:29.279Z · LW(p) · GW(p)

You want some sort of adaptive or sequential design (right?), so the optimal design not being terribly helpful is not surprising: they're more intended for fixed up-front designing of experiments.

So after looking at the problem I'm actually working on, I realize an adaptive/sequential design isn't really what I'm after.

What I really want is a fractional factorial model that takes a prior (and minimizes regret between information learned and cumulative score). It seems like the goal of a multi-armed bandit is to do exactly that, but I only want to do it once, assuming a fixed prior which doesn't update over time.

Do you think your Monte Carlo Bayesian experimental design is the best way to do this, or can I use some of the insights from Thompson sampling to make this process a bit less computationally expensive (which is important for my particular use case)?

Replies from: gwern
comment by gwern · 2016-09-23T16:34:44.691Z · LW(p) · GW(p)

but I only want to do it once, assuming a fixed prior which doesn't update over time.

I still don't understand what you're trying to do. If you're trying to maximize test scores through picking textbooks, and this is done many times, you want a multi-armed bandit to help you find the best textbook over the many students exposed to different combinations. If you are throwing out the information from each batch and assuming the interventions are totally different each time, then your decision is made before you do any learning and your optimal choice is simply whatever your prior says: the value of information is the subsequent decisions it affects, and since you're not updating your prior, the information can't change any decisions after the first one and is worthless.

Do you think your Monte Carlo Bayesian experimental design is the best way to do this, or can I use some of the insights from Thompson sampling to make this process a bit less computationally expensive (which is important for my particular use case)?

Dunno. Simulation is the most general way of tackling the problem, which will work for just about anything, but can be extremely computationally expensive. There are many special cases which can reuse computations or have closed-form solutions, but must be considered on a case by case basis.

comment by Daniel_Burfoot · 2016-09-20T17:32:51.324Z · LW(p) · GW(p)

I have a question for LWers who are non-native English speakers.

I am working on a software system for linguistically sophisticated analysis of English text. At the core of the system is a sentence parser. Unlike most other research in NLP, a central goal of my work is to develop linguistic knowledge and then build that knowledge into the parser. For example, my system knows that the verb ask connects strongly to subjectized infinitive phrases ("I asked him to take out the trash"), unlike most other verbs.
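
(For illustration, here is a generic dependency parse of that example sentence using the off-the-shelf spaCy library; this is not my system, just a way to show the kind of structure a parser recovers.)

```python
# Not my parser -- just spaCy, to illustrate the grammatical structure a
# parser recovers from the example sentence.
# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I asked him to take out the trash")
for token in doc:
    # each word, its grammatical role, and the word it attaches to
    print(f"{token.text:7} {token.dep_:10} -> {token.head.text}")
```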

The system also has a nice parse visualization tool, which shows the grammatical structure of an input sentence. You can check it out here.

This work began as a research project and I am trying to figure out a way to commercialize it. One of my ideas is to use the system as a tool for helping students to learn English. Students could submit confusing sentences to the system and observe the parse tree, allowing them to understand the grammatical structure. They could also submit their own written sentences to the system, as a way of checking their grammar. Teachers of ESL students might also ask them to submit their class papers to the parser to check for obvious mistakes (apparently there are many people who can communicate well in spoken English but whose written English is full of mistakes).

I would also write up a series of articles about subtle points of English grammar, such as phrasal verbs, argument structure, verb tense, and so on. Students could then read the articles and experiment with using the relevant grammar patterns in the parser.

Does this sound like a plausible product that people would want to use? Are there products already on the market that do something similar? (I am aware of Grammarly, but it doesn't appear to show parse trees).

Replies from: Gunnar_Zarncke, MrMind
comment by Gunnar_Zarncke · 2016-09-20T22:12:45.329Z · LW(p) · GW(p)

Poll for it:

I'm a native speaker [pollid:1163]

Such a tool to visualize parse trees would be/have been helpful. [pollid:1164]

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-09-24T03:28:30.956Z · LW(p) · GW(p)

Thanks to Gunnar for setting up the poll and also to all who answered.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-09-24T15:13:34.685Z · LW(p) · GW(p)

You should really look at the correlations in the raw data.

Also: polls are easy. You really should have created your own.

See https://wiki.lesswrong.com/wiki/Comment_formatting#Polls

comment by MrMind · 2016-09-26T09:10:38.453Z · LW(p) · GW(p)

How would you use the grammar visualization tool to aid study? Many people answered "unsure" to the poll because it's not clear how the tool should be used, or "Not really" because the first uses they thought of were not helpful.
You should give users guidelines on how best to use your product.

Usually needs --> tools. Yours seems to be a case of the inverted implication: a tool in search of a need.

comment by ChristianKl · 2016-09-23T10:32:10.158Z · LW(p) · GW(p)

Did Zuckerberg make the right choice by letting a Berkeley, Stanford, and University of California collaboration decide how to spend his money? I guess BioHub will be similar to the NIH in how it allocates funding.

Zuckerberg could also have funded Aubrey de Grey, or funded research on how to make medical research better, the way the Laura and John Arnold Foundation does.

TechCrunch:

The technologies Zuckerberg listed were “AI software to help with imaging the brain…to make progress on neurological diseases, machine learning to analyze large databases of cancer genomes,

Last year we made progress in understanding that the brain contains lymphatic tissue because a surgeon found it. All the standard imaging hadn't brought us forward. Using machine learning to analyze large databases of cancer genomes is also an already well-funded research area.

Funding AI technology to create sub-$1000 body scans based on technology like Walabot would likely bring us much further in understanding our bodies than the kinds of research that are already well funded, like brain imaging and genome analysis.

Replies from: turchin, ChristianKl
comment by turchin · 2016-09-23T23:50:13.172Z · LW(p) · GW(p)

He did not. Also, the Buck Institute for Research on Aging is underfunded.

comment by ChristianKl · 2016-09-24T21:50:08.254Z · LW(p) · GW(p)

Having read a few more sources besides TechCrunch, I'm a bit more optimistic. Chan/Zuckerberg won't judge applications themselves, and tool building is a valid goal.

The Cell Atlas also looks like a valid project.

comment by morganism · 2016-09-20T00:13:53.799Z · LW(p) · GW(p)

Six plant extracts delay yeast chronological aging through different signaling pathways

"Our recent study has revealed six plant extracts that slow yeast chronological aging more efficiently than any chemical compound yet described."

http://www.impactjournals.com/oncotarget/index.php?journal=oncotarget&page=article&op=view&path[]=10689&path[]=33840

article http://www.kurzweilai.net/these-six-plant-extracts-could-delay-aging

comment by morganism · 2016-09-24T23:59:49.691Z · LW(p) · GW(p)

Sleep Learning: Your Brain Continues to Process Simple Tasks, Classify Words Subconsciously

"The experiment showed that when people were subjected to simple word classification tasks before sleeping, the brain continues to unconsciously make classifications even in sleep."

http://www.medicaldaily.com/sleep-learning-your-brain-continues-process-simple-tasks-classify-words-subconsciously-302746

Source: Kouider, Andrillon T, Barbosa L, et al. Inducing task-relevant responses to speech in the sleeping brain, Current Biology. 2014.

comment by morganism · 2016-09-24T22:29:52.789Z · LW(p) · GW(p)

The UK has posted its guidelines for robot ethics, though they're pay-walled at 200 bucks. Article below:

http://www.digitaltrends.com/computing/bsi-robot-ethics-guidelines/

edit: and a robot disarmed an entrenched shooter by stealing his rifle

http://www.latimes.com/local/lanow/la-me-ln-robot-barricaded-suspect-lancaster-20160915-snap-story.html

comment by Houshalter · 2016-09-23T10:40:57.274Z · LW(p) · GW(p)

Here is a real-world control problem: self-driving cars. Companies are currently taking dashcam footage of people driving and using it to train AIs to drive cars.

There is a serious problem with this. The AIs can learn to predict exactly what a human would do. But humans aren't actually optimal drivers. They make tons of mistakes. They have slow reaction times. They fail to notice things. They don't apply the optimal braking or acceleration, they speed, they don't make optimal turns, etc.

AIs trained on human data end up mimicking all of these imperfections. Combined with the AI's own imperfections, you get a subpar driver. At best, if the AI is perfect, you get a driver that is exactly as good as a human, but not necessarily any better.
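
A toy sketch of that ceiling, with linear regression standing in for the neural network; the "human policy" below (weights plus noise and mistakes) is entirely made up:

```python
# Behavioral cloning in miniature: the training signal is human steering,
# including human noise and mistakes, so the best possible model reproduces
# the human policy rather than an optimal one.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((1000, 64))                 # stand-in for dashcam features
human_policy = rng.normal(0, 1, 64)             # how the human actually steers
human_noise = rng.normal(0, 0.3, 1000)          # reaction lag, mistakes, etc.
steering = frames @ human_policy + human_noise  # the only training signal

# Fit the imitator; it can at best recover the human policy.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)
residual = steering - frames @ weights
print("irreducible imitation error:", residual.std().round(2))
```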

Self-driving cars are a perfect test case for AI control methods, and a perfect way to encourage mainstream researchers to consider the control problem. There will be many similar cases in the future as AIs start being applied to real-world problems in open-ended domains, or wherever there is a hard-to-define goal function to measure the AI by.

Replies from: ChristianKl
comment by ChristianKl · 2016-09-23T11:28:06.094Z · LW(p) · GW(p)

It's not my impression that self-driving cars simply try to copy what a human does in every case. The AIs don't violate speed limits and generally try to drive with as little risk as possible. Humans drive very differently.

Replies from: Houshalter
comment by Houshalter · 2016-09-23T12:47:29.627Z · LW(p) · GW(p)

You might be thinking of Google's self-driving car, which seems like it was designed from the ground up with traditional programming. I am thinking of systems like Comma.ai's, which use machine learning to train self-driving cars by predicting what a human driver would do.

Of course you can put a regulator on the gas pedal and prevent the AI from speeding. But other issues are more difficult to control. How do you enforce that the AI should "try to drive with as little risk as possible"? We have very few training examples of accidents, and we can't let the car experiment under real conditions.

My guess on how to solve this issue is to develop a way to "speak" with the AI, so we can see what it is thinking and tell it what we would prefer it to do. But this is difficult, and there is little research on methods for doing this yet.

Replies from: ChristianKl
comment by ChristianKl · 2016-09-23T14:14:32.906Z · LW(p) · GW(p)

The Google car also uses machine learning. That still doesn't mean it tries to emulate a human driver. The article doesn't say that the car predicts what a human driver would do.

How do you enforce that the Ai should "try to drive with as little risk as possible"?

There's the example of the Google car waiting for the woman in the wheelchair who was chasing ducks. That's behavior you get from the way Google's algorithm cares about safety, and that you wouldn't get from emulating human drivers.

Replies from: Houshalter
comment by Houshalter · 2016-09-23T14:41:17.437Z · LW(p) · GW(p)

Google uses machine learning, but its car isn't based on it. There is a difference between a special "stop sign detector" function and an "end to end" approach where a single algorithm learns everything.

Comma.ai's business model is to pay people to upload their dashcam footage and to train neural networks on it. As far as I know, what I described is their approach.

Replies from: ChristianKl
comment by ChristianKl · 2016-09-23T14:57:55.297Z · LW(p) · GW(p)

I would be surprised if they set up their system in a way where they can't tell a car to approach a red light using less fuel than human drivers use.

As far as accidents go, the idea that automatic braking should take over in emergency situations is already implemented in many cars on the road. It's unlikely that such a system would react the way a human-driven car would have reacted a decade ago.

comment by moridinamael · 2016-09-20T19:41:09.828Z · LW(p) · GW(p)

I have a name that I want to give my new product. That name is already trademarked for an unrelated use. Is it a bad idea to go ahead and use that product name? Is a trademark comprehensive enough that I should just pick a different name?

Replies from: ChristianKl, Elo
comment by ChristianKl · 2016-09-20T21:15:30.326Z · LW(p) · GW(p)

Registering a trademark doesn't cost that much. You can simply apply to register a trademark for your product and your usage, and see whether the government grants it.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-09-21T21:16:19.326Z · LW(p) · GW(p)

The cost depends on the number of classes and countries you want to register it in. For reference: there are 45 internationally agreed-on classes.

comment by Elo · 2016-09-21T23:30:07.934Z · LW(p) · GW(p)

Is it googleable? If you google the name, will you show up easily? That's what having a name is all about, right?

Replies from: ChristianKl
comment by ChristianKl · 2016-09-23T10:13:03.460Z · LW(p) · GW(p)

You also don't want to get sued and be forced to change your name.

comment by morganism · 2016-09-24T22:33:50.525Z · LW(p) · GW(p)

UN declares antibiotic resistance largest global threat

“Antimicrobial resistance poses a fundamental threat to human health, development, and security,”

http://news.nationalgeographic.com/2016/09/antibiotic-resistance-bacteria-disease-united-nations-health/?linkId=29137110

comment by ChristianKl · 2016-09-24T21:45:10.032Z · LW(p) · GW(p)

There seems to be a sizable number of people in the census who think there's a decent probability that there's another intelligent civilisation in our universe.

When it comes to existential risk discussions, there's often the argument that existential risk is important for the future of intelligent life. If there's other intelligent life out there, is existential risk still as important?

Replies from: None
comment by [deleted] · 2016-09-25T06:30:02.458Z · LW(p) · GW(p)

Important to whom? Any intelligent system wants to stick around as long as possible in an indifferent universe.

comment by turchin · 2016-09-23T23:58:44.084Z · LW(p) · GW(p)

It would be interesting to run a null experiment consisting only of two control groups, so we would know the average difference between two equal groups. It would also be interesting to add two control groups to each experiment, so we could see how strong the effect really is.

For example, if the difference between the main group and the control group is 10 percent, that can look like a strong result. But if a second control group differs from the first control group by 7 percent, our result is not so strong after all.

I think it is clear that you can't do this just by splitting the existing control group in two: the split could be done in many different ways and a researcher could choose the most favorable one; there could be interactions inside the control group; and the statistical power would be smaller.

Replies from: gwern
comment by gwern · 2016-09-24T00:50:48.539Z · LW(p) · GW(p)

I think it is clear that you can't do this just by splitting the existing control group in two: the split could be done in many different ways and a researcher could choose the most favorable one; there could be interactions inside the control group; and the statistical power would be smaller.

You can. Cross-validation, the bootstrap, permutation tests - these rely on that sort of procedure. They generate an empirical distribution of differences between groups or effect sizes, which replaces the assumption of two normal distributions, etc. It would be better to do this with both the experimental and control data, though.
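
For example, a minimal permutation test on two made-up control groups:

```python
# Shuffle group labels many times to get the empirical distribution of
# differences between two groups that are equal by construction, then see
# where the observed difference falls in it.
import numpy as np

rng = np.random.default_rng(0)
control_a = rng.normal(0.0, 1.0, 50)  # hypothetical first control group
control_b = rng.normal(0.0, 1.0, 50)  # hypothetical second control group

observed = control_a.mean() - control_b.mean()
pooled = np.concatenate([control_a, control_b])

diffs = []
for _ in range(10_000):
    rng.shuffle(pooled)
    diffs.append(pooled[:50].mean() - pooled[50:].mean())
diffs = np.array(diffs)

# Fraction of label-shuffled differences at least as extreme as the observed one.
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed diff: {observed:.3f}, permutation p-value: {p_value:.3f}")
```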