Open thread, June 12 - June 18, 2017

post by Thomas · 2017-06-12T05:36:02.328Z · LW · GW · Legacy · 101 comments
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

101 comments

Comments sorted by top scores.

comment by cousin_it · 2017-06-16T12:53:46.694Z · LW(p) · GW(p)

I've finally figured out why Eliezer was popular. He isn't the best writer, or the smartest writer, or the best writer for smart people, but he's the best writer for people who identify with being smart. This opportunity still seems open today, despite tons of rational fiction being written, because its authors are more focused on showing how smart they are, instead of playing on the self-identification of readers as Eliezer did.

It feels like you could do the same trick for people who identify with being kind, or brave, or loving, or individualist, or belonging to a particular nation... Any trait that they secretly feel might be undervalued by the world. Just fan it up and make it sound like the most important quality in a person. I wonder how many writers do it consciously.

Replies from: Viliam, Kaj_Sotala
comment by Viliam · 2017-06-17T23:30:41.273Z · LW(p) · GW(p)

The Sequences also contain criticism of smart people who are smart in the wrong way ("clever arguers"), and even of smart people in general ("why our kind can't cooperate").

Making smartness sound like the most important thing gives you Mensa or RationalWiki; you also need harsh lessons on how to do it right to create Less Wrong. Maybe so harsh that most people who identify with being X will actually turn against you, because you assigned low status to their way of doing X.

And by the way, effective altruism is already using this strategy in the field of... well, altruism.

comment by Kaj_Sotala · 2017-06-18T11:35:41.305Z · LW(p) · GW(p)

Can you give specific examples of him doing that?

Replies from: Viliam
comment by Viliam · 2017-06-19T15:11:00.292Z · LW(p) · GW(p)

Not the OP, but I suspect the parts that rub many people the wrong way are the following:

  • The quantum physics sequence; specifically, that Eliezer claims to know the right answer to the question of which interpretation of quantum physics is correct, even though professional quantum physicists can't all agree on one. ("I am so smart I know science better than the best scientists in the world, despite being a high-school dropout.")

  • Dismissing religion. ("I am so smart I know for sure that billions of people are wrong, including the theologians who spent their whole lives studying religion.")

  • The whole "sense that more is possible" approach. (Feels like bragging about abilities of you and your imaginary tribe of smart people. Supported by the fictional evidence of the Beisutsukai superheroes, to illustrate how high you think about yourself.)

I guess people with different attitudes will see the relative importance of these parts differently. If you start reading the book already not believing in the supernatural and not being emotionally invested in quantum physics, you will be like: "The supernatural is not-even-wrong? Yeah. Many worlds? I guess this is what biting the bullet really means, huh. Could we do better? Yeah, that's a nice dream, and perhaps not completely impossible." And then you focus on the parts about how to avoid specific mistakes.

But if you start reading the book believing strongly in the supernatural, or in the Copenhagen interpretation, or that nerds are inherently and irreparably losers, you will probably be like: "Oh, this guy is so wrong. And so overconfident. Oh, please someone slap him already to remind him of his real status, because this is so embarrassing. Jesus, this low-status nerd is now surrounded by other low-status nerds who worship him. What a cringe-fest!"

So different people can come with completely different interpretations of what the Sequences are actually about. If you dismiss all the specific advice, it seems like a status move, because when people write books about "other people are wrong, and I am right", it usually is a status move.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-19T17:04:34.984Z · LW(p) · GW(p)

I agree with these examples, but cousin_it said specifically that

its authors are more focused on showing how smart they are, instead of playing on the self-identification of readers as Eliezer did

and these examples all seem to be more "Eliezer showing off how smart he is" rather than "Eliezer making his readers feel smart".

Though now that it's been pointed out, I agree that there's a sense of Eliezer also doing the latter, and doing more of it than the average focused-on-the-former writer... but this distinction seems a little fuzzy to me and it's not entirely clear to me what the specific things that he does are.

comment by Daniel_Burfoot · 2017-06-12T15:50:41.901Z · LW(p) · GW(p)

Does anyone follow the academic literature on NLP sentence parsing? As far as I can tell, they've been writing the same paper, with minor variations, for the last ten years. Am I wrong about this?

Replies from: Darklight, MrMind
comment by Darklight · 2017-06-13T05:22:01.933Z · LW(p) · GW(p)

Well, as far as I can tell, the latest progress in the field has come mostly through throwing deep learning techniques like bidirectional LSTMs at the problem and letting the algorithms figure everything out. This obviously is not particularly conducive to advancing the theory of NLP much.
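For concreteness, here is a minimal sketch of the "throw a BiLSTM at it" recipe (my own illustration, assuming PyTorch; the hyperparameters are placeholders, not anyone's actual setup): encode each token with a bidirectional LSTM, then predict a per-token label, e.g. an arc label in a dependency parser.

```python
# Minimal BiLSTM token labeller, illustrative only.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, num_labels=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_labels)  # 2x: forward + backward states

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)                # (batch, seq_len, num_labels)

scores = BiLSTMTagger(vocab_size=10_000)(torch.randint(0, 10_000, (1, 12)))
```

Most of the recent parsing papers are variations on this encoder plus a different scoring head, which is the sense in which they "let the algorithm figure everything out".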

comment by MrMind · 2017-06-13T07:35:15.714Z · LW(p) · GW(p)

I'm not following NLP per se, but lately I've seen papers on grammar analysis based on the categorical semantics of quantum mechanics (that is, dagger-compact categories). Search for the latest papers by Coecke on the arXiv.
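For readers who haven't seen it, a rough sketch of the compositional-distributional ("DisCoCat") idea behind those papers, as I understand it: word meanings are vectors or tensors, and the grammar supplies the wiring (cups/caps in a dagger-compact category) that contracts them into a sentence meaning. For a transitive sentence, roughly:

```latex
\overrightarrow{\text{subj verb obj}}
  \;=\; (\epsilon_N \otimes 1_S \otimes \epsilon_N)\left(\overrightarrow{\text{subj}} \otimes \overline{\text{verb}} \otimes \overrightarrow{\text{obj}}\right),
\qquad \overline{\text{verb}} \in N \otimes S \otimes N .
```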

comment by [deleted] · 2017-06-12T13:59:05.621Z · LW(p) · GW(p)

Two things have been bugging me about LessWrong and its connection to other rationality diaspora/tangential places.

1) Criticism on LW is upvoted a lot, leading to major visibility. This happens even in the case where the criticism is quite vitriolic, like in Duncan's Dragon Army Barracks post. Currently, there are only upvotes for comments, and there aren't multiple reactions, like on Facebook, Vanilla Forums, or other places. So there's no clear way to say something like "you bring up good points, but also, your tone is probably going to make other people feel attacked, and that's not good". You either give an upvote or you don't.

I think this leads to potentially toxic comments that wouldn't survive elsewhere (FB, HackerNews, Reddit, etc.) being far more prominently visible. (A separate but related issue is my thought that the burden of not pissing people off lies with the commenter. Giving unnecessarily sharply worded criticism and then saying the other person isn't engaging well with you is bad practice.)

2) There seems to be a subset of people tangentially related to LW that really likes criticizing LW (?) My current exposure to several blogs / message boards suggests that it's fashionable/wise/something in some sense to call LW types childish/autistic/stupid (?) I'm curious why this is the case. It's true that some people in the community are lacking social skills, and this often shows in posts that try to overanalyze social patterns/behavior. But why keep bringing this up? Like, LW has also got some pretty cool people who have written some useful posts on beating procrastination, health, etc. But those positives don't seem to get as much attention?

Replies from: MrMind, Viliam
comment by MrMind · 2017-06-13T07:41:44.813Z · LW(p) · GW(p)

Critics are also a sign that the site is becoming more recognized and has started spreading around... You cannot control what other people choose to criticize, mainly because it's well known that people get a status kick out of taking down others.
When downvotes are resurrected, we'll have some means of judging nasty or undue criticism.

Replies from: Viliam
comment by Viliam · 2017-06-13T12:36:51.380Z · LW(p) · GW(p)

Also, it will be nice to have some tools to detect sockpuppets. Because if a nasty comment gets 20 upvotes, that doesn't necessarily mean that 20 people upvoted it.

Replies from: MrMind
comment by MrMind · 2017-06-14T06:59:05.384Z · LW(p) · GW(p)

Yes, there's also that... has the glitch allowing a sock-puppet army been discovered / fixed?

Replies from: Viliam
comment by Viliam · 2017-06-14T11:03:13.622Z · LW(p) · GW(p)

Well, how would you prevent someone registering multiple accounts manually? Going by IP could unfairly stop multiple people using the same computer (e.g. me and my wife) or even multiple people behind the same proxy server (e.g. the same job, or the same university).

I think the correct approach to this is to admit that you simply cannot prevent someone from creating hundreds of accounts, and design a system in a way that doesn't allow an army of hundred zombies to do significant damage. One option would be to require something costly before allowing someone to either upvote or downvote, typically to have karma high enough that you can't gain that in a week by simply posting three clever quotes to the Rationality Thread. Preferably, require high karma and, after that, personal approval by a moderator.
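A minimal sketch of that kind of gate (my own illustration; the threshold, field names, and approval flow are made up, not anything LessWrong actually implements):

```python
# Illustrative only: karma threshold plus moderator approval before voting rights.
from dataclasses import dataclass

VOTE_KARMA_THRESHOLD = 100  # hypothetical value

@dataclass
class User:
    name: str
    karma: int = 0
    approved_by_moderator: bool = False

def may_vote(user: User) -> bool:
    """Allow votes only for accounts that are costly to mass-produce."""
    return user.karma >= VOTE_KARMA_THRESHOLD and user.approved_by_moderator

print(may_vote(User("fresh_sockpuppet", karma=3)))                        # False
print(may_vote(User("regular", karma=450, approved_by_moderator=True)))   # True
```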

Maybe some of this will be implemented in LessWrong 2.0, I don't know.

Replies from: MrMind, Lumifer
comment by MrMind · 2017-06-15T07:08:42.604Z · LW(p) · GW(p)

Well, how would you prevent someone registering multiple accounts manually?

That's beside the point. Any user determined enough can create enough sock-puppets to be annoying. But I remember you saying that specifically in Eugene's case there was supposedly a glitch that allowed him to create multiple accounts automatically. The usual standard precautions would suffice here: a CAPTCHA at registration and unique email verification should be deterrent enough.

comment by Lumifer · 2017-06-14T14:53:15.845Z · LW(p) · GW(p)

you simply cannot prevent someone from creating hundreds of accounts

You can't, but you can make the process more difficult and slower. This is, more or less, infosec and here it's rarely feasible to provide guarantees of unconditional safety. Generally speaking, the goal of defence is not so much to stop the attacker outright, but rather change his cost-benefit calculation so that the attack becomes too expensive.

a way that doesn't allow an army of hundred zombies to do significant damage

The issue is detection: once you know they are zombies, their actions are not hard to undo.

Replies from: Viliam
comment by Viliam · 2017-06-14T15:58:40.497Z · LW(p) · GW(p)

The issue is detection: once you know they are zombies, their actions are not hard to undo.

Generally true, but with Reddit code and Reddit database schema, everything is hard (both detecting the zombies and undoing their actions). One of the reasons to move to LessWrong 2.0.

(This may be difficult to believe until you really try to download the Reddit/LW codebase and try to make it run on your home machine.)

comment by Viliam · 2017-06-12T15:48:59.866Z · LW(p) · GW(p)

Seems to me, when you find a vitriolic comment, there are essentially three options (other than ignoring it):

  • upvote it;
  • downvote it;
  • write a polite steelmanned version as a separate top-level comment, and downvote the original one.

The problem is, the third option is too much work. And the second option feels like: "omg, we can't take any criticism; we have become a cult just like some people have always accused us!". So people choose the first option.

Maybe a good approach would be if the moderators would write a message like: "I am going to delete this nasty comment in 3 hours; if anyone believes it contains valuable information, please report it as a separate top-level comment."

There seems to be a subset of people tangentially related to LW that really likes criticizing LW

Some of them also like to play the "damned if you do, damned if you don't" game (e.g. the Basilisk). Delete or not delete, you are a bad guy either way; you only have a choice of what kind of a bad guy you are -- the horrible one who keeps nasty comments on his website, or the horrible one who censors information exposing the dark secrets of the website.

My current exposure to several blogs / message boards suggests that it's fashionable/wise/something in some sense to call LW types childish/autistic/stupid (?) I'm curious why this is the case.

Trolling or status games, I guess. For people who don't have rationality as a value, it is fun (and more pageviews for their website) to poke the nerds and watch how they react. For people who have rationality as a value, it is a status move to declare that they are more rational than the stupid folks at LW.

At some moment, trying to interpret everything charitably and trying to answer politely and reasonably will make you a laughingstock. The most important lacking social skill is probably recognizing when you are simply being bullied. It is good to start with an assumption of good intent, but it is stupid to refuse to update in the face of overwhelming evidence.

For example, it is obvious that people on RationalWiki are absolutely not interested in evaluating the degree of rationality on LW objectively; they enjoy too much their "snarky point of view", which simply means bullying the outgroup; and they have already decided that we are an outgroup. Now that we stopped giving a fuck about them, and more or less called them stupid in return, they moved to editing the Wikipedia article about LW as their next step. Whatever. As they say, never wrestle with a pig, because you get dirty, and besides, the pig likes it. Any energy spent on debating them would be better spent e.g. writing new valuable content for LW.

Replies from: Pimgd
comment by Pimgd · 2017-06-13T13:44:47.130Z · LW(p) · GW(p)

And the second option feels like: "omg, we can't take any criticism; we have become a cult just like some people have always accused us!".

You mean "the second option is disabled". which would leave upvote or ignore.

Replies from: Viliam
comment by Viliam · 2017-06-13T13:50:16.394Z · LW(p) · GW(p)

True, but I guess some people were doing this even before the downvotes were disabled. Or sometimes we had a wave of downvotes first, then someone saying "hey, this contains some valid criticism, so I am going to upvote it, because we shouldn't just hide the criticism", then a wave of contrarian upvotes, then a meta-debate... eh.

comment by whpearson · 2017-06-12T09:41:29.589Z · LW(p) · GW(p)

I'm thinking about starting an AI risk meetup every other Tuesday in London. Anyone interested? Also, if you could signal boost to other Londoners you know, that would be good.

Replies from: philh
comment by philh · 2017-06-12T15:38:32.996Z · LW(p) · GW(p)

I think I'm unlikely to attend regularly, but what do you plan to do with the meetup? Lay discussion, technical discussion, attempts to make progress?

I'll link to this from the London rationalish group.

Replies from: whpearson
comment by whpearson · 2017-06-12T17:32:27.834Z · LW(p) · GW(p)

A mixture of things that I think aren't being done enough (if they are let me know).

  • A meeting point for people interested in the subject in London.
  • Discussion around some of the social issues (prevention of arms races)
  • Discussion on the nature of intelligence, and how we should approach safety in light of it
  • Discussion of interesting papers (including psychology/neuroscience) to feed into the above

Maybe forming a society to do these long term.

If people are interested in helping solve the normal computer control problem as a stepping stone to solving the super intelligence problem, that would be cool. But I'd rather keep the meetup generalist and have things spin off from it.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-18T11:38:07.509Z · LW(p) · GW(p)

If you want to have a reading group, there's an existing one with a nice list of stuff they've covered that can be used for inspiration.

comment by research_prime_space · 2017-06-15T19:51:32.352Z · LW(p) · GW(p)

I have a question about AI safety. I'm sorry in advance if it's too obvious, I just couldn't find an answer on the internet or in my head.

The way AI has bad consequences is through its drive to maximize (it destroys the world in order to produce paperclips more efficiently). Suppose you instead designed AIs to: 1) find a function/algorithm within an error range of the goal, 2) stop once that method is found, and 3) do 1) and 2) while minimizing the amount of resources they use and/or their effect on the outside world.

If the above could be incorporated as a convention into any AI design, would that mitigate the risk of AI going "rogue"?
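To make 1)–3) concrete, a toy sketch (the scoring interface and numbers are made-up placeholders, and this ignores the hard part of actually defining "effect on the outside world"):

```python
# Toy satisficer: search until a candidate is within `tolerance` of the goal,
# then stop, with a hard cap on resources (steps).
def satisfice(candidates, score, goal, tolerance, max_steps):
    for step, candidate in enumerate(candidates):
        if step >= max_steps:                          # 3) bounded resource use
            return None
        if abs(score(candidate) - goal) <= tolerance:  # 1) within error range of goal
            return candidate                           # 2) stop once found
    return None

# Example: find any x with x*x within 0.5 of 2.
result = satisfice((i / 100 for i in range(1000)),
                   score=lambda x: x * x, goal=2.0,
                   tolerance=0.5, max_steps=10_000)
print(result)
```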

Replies from: cousin_it
comment by cousin_it · 2017-06-15T20:15:13.586Z · LW(p) · GW(p)

It's one of the proposed plans. The main difficulty is that low impact is hard to formalize. For example, if you ask the AI to cure cancer with low impact, it might give people another disease that kills them instead, to keep the global death rate constant. Fully unpacking "low impact" might be almost as hard as the friendliness problem. See this page for more. The LW user who's doing the most work on this now is Stuart Armstrong.
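One generic way the "low impact" idea gets written down (a sketch of the family of proposals, not necessarily Armstrong's exact formulation) is to penalize deviation from a "do nothing" baseline:

```latex
U'(a) \;=\; U(a) \;-\; \lambda \, d\!\left(s_a,\; s_{\text{baseline}}\right),
```

where d is some distance between the world after action a and the world where the AI does nothing. The cancer example above is exactly the failure mode of a badly chosen d: a measure like "keep the global death rate constant" rewards perverse compensation rather than genuine low impact.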

comment by Lumifer · 2017-06-14T20:32:40.096Z · LW(p) · GW(p)

A highly recommended review of James Scott's Seeing like a State which, not coincidentally, has also been reviewed by Yvain.

Sample:

I think this is helpful to understand why certain aesthetic ideals emerged. Many people maybe started on the more-empirical side, but then noticed that all of the research started looking the same. I’ve called this “quantification”. It probably looked geometric, “simple” (think Occam’s razor), etc. Much like you’d imagine scientific papers to look today. When confronted with a situation where they didn’t have data, but still knowing that in the past all the good data looked like “quantification”, there’s a pretty natural instinct to assume it should have a certain look. This is, of course, separate from the obsession with grids – what I’m really interested in here is why empirical data overlays metis in activities, not the preference for what a “clean city” would look like.

Where does this get us? From the perspective of early 20th century rationalists: global commerce and dazzling new technologies vs. peasants who insisted on setting their own crops on fire for a bizarre religious ritual. That’s not hyperbole, by the way, that’s an example from the book. The rationalists were maybe aware that they had spotty or incomplete data on growing practices, but the data they did have was quantified and supported by labs, whereas the villagers couldn’t explain their practice using any kind of data. The peasants, it turns out, were right, but would you have guessed that? More importantly: would you guess that now?

Replies from: Viliam, MrMind
comment by Viliam · 2017-06-16T16:48:03.639Z · LW(p) · GW(p)

Finally read the review, and I am happy I did. Made me think about a few things...

Legibility has its costs. For example, I had to use Jira for tracking my time in many software companies, and one task is always noticeably missing, despite requiring significant time and attention from all team members, namely using Jira itself. How much time and attention does it require, in addition to doing the work, to make notes about what exactly you did when, whether it should be tracked as a separate issue, what meta-data to assign to that issue, who needs to approve it, communicating why they should approve it, explaining technical details of why the map drawn by the management doesn't quite match the territory, explaining that you are doing a "low-priority" task X because it is a prerequisite to a "high-priority" task Y, then explaining the same thing to yet another manager who noticed that you are logging time on low-priority tasks despite having high-priority tasks in the queue and decided to take initiative, negotiating whether you should log the time in your company's Jira or your company's customer's Jira or both, in extreme cases whether it is okay to use English when writing in your customer's Jira or you need to learn their language, etc. And as the review mentions, these costs are usually imposed by the people higher in the hierarchy, and paid by those lower in the hierarchy, so the people who have the power to improve this don't have an incentive to do so.

Legibility is often used as a reason to remove options. Previously you had options X, Y, Z, but 90% of people used X, and X is most known to people in power and they already have it better documented, so let's make things more legible by banning Y and Z, and perhaps by also banning the choice to use none of these services; now everyone uses X which we can easily track.

Depending on the country, the educational system may be a victim of this: homeschooling is either illegal or just strongly disliked by people in power, same for alternative schools; it is much simpler to use the same system for everyone. So although my daughter knew all the letters of the alphabet when she was 1, the government will try hard to teach her all those letters again, slowly, when she is 6, and of course it will pay for this complete waste of everyone's time and energy using my tax money. (Luckily homeschooling is kinda-legal here, if you interpret some laws creatively and deal with all the resulting inconvenience.) Because a country where everyone learns the alphabet exactly at the age of 6 is aesthetically more appealing to the people in positions to make the bureaucratic decisions about education. If I try to debate them, they will probably be tempted to just make a strawman who wants to keep their children illiterate, and debate that strawman instead of me.

Or there could be a problem with people who make a small irregular side income - such as a student or an employee who also has a web page with AdSense, or a game or two on the Android market - because the state would like you to clearly choose whether you are an employee (with all the related legal and tax consequences) or an entrepreneur (with another set of consequences), and not "mostly this, but also a bit of that". You may be unable to find an expert who really understands how to tax your AdSense / Android / Kickstarter income, because you are not supposed to be doing this unless you have a company for doing it.

Similarly, the state may roll its eyes if you are trying to find employment for 2 hours a day, or working only every other week, because it has the nice predefined category of an "employee" who works 8 hours a day, 5 days a week.

And the problem is that most people designing this system, and probably many left-leaning people hearing about your problems, which they don't share, will most likely react on an emotional level like: "Just choose one of the existing options, asshole, just like all normal people do! Why are you trying to complicate our work? Do you believe there should be special laws for you; do you consider yourself some kind of royalty?" When the truth is that no one needed a special law for them, until the state attempted to standardize things using a map that doesn't quite fit the territory.

Also, some of these problems could be solved by creating a less specific abstraction; for example, the state could check that your children have certain knowledge and skills, regardless of how specifically they gained them; or it could tax your total income, regardless of how specifically you gained it. Programmers call this design pattern "programming to an interface, not an implementation" (in this analogy, "education" would be an interface, and "classical school", "alternative school", "homeschooling", "unschooling" would be implementations; or "income" would be an interface, and "employment", "business", "kickstarter", "patreon" would be implementations). But it would probably still be difficult to convince the people in power of the need for this abstraction in the first place. Again, because of legibility -- with an interface, you have certain things you know, and the rest is a black box; but an implementation tempts you with unlimited possibilities of further inspection.
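A minimal illustration of that analogy (a toy example; the income sources and the flat 20% rate are made-up placeholders): a rule written against the interface only needs the total, while a rule written against the implementations has to enumerate every way income can arise.

```python
# Illustrative sketch of "program to an interface, not an implementation".
from abc import ABC, abstractmethod

class Income(ABC):                      # the "interface"
    @abstractmethod
    def amount(self) -> float: ...

class Employment(Income):               # one "implementation"
    def __init__(self, salary): self.salary = salary
    def amount(self): return self.salary

class AdsenseSideIncome(Income):        # another "implementation"
    def __init__(self, monthly): self.monthly = monthly
    def amount(self): return 12 * self.monthly

def tax_due(sources: list[Income]) -> float:
    # Depends only on the interface; new kinds of income need no new rules.
    return 0.20 * sum(s.amount() for s in sources)

print(tax_due([Employment(30_000), AdsenseSideIncome(10)]))  # 6024.0
```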

As usual, the impact is worse on poor people; when a middle-class employee cannot find a way to properly tax their monthly $10 from the Android game, they can either just give the game away for free, or take a risk, hoping that in case they are doing something wrong, the punishment is not going to be draconian. For a poor person, the monthly $10 makes a greater difference, but also involves a risk that e.g. if they are receiving unemployment benefits, having an extra untaxed income will be treated as an attempt to cheat the system, and may have more severe consequences. Similarly, a rich person has more options to make their homeschooling legal, e.g. by declaring their child a foreign citizen (of a country that allows homeschooling) and perhaps even "proving" it by buying a house in that country; or whatever it is that, according to the given country's laws, will make the bureaucrat satisfied.

Then there is the aspect of "if people in power don't understand it, it doesn't exist, and everyone who cares about it is acting irrationally". Whether a programmer acts like an asshole or helps people around him, as long as it is not reflected in Jira, it didn't happen. Of course the official answer is to put it in Jira, which includes all the trivial and not-so-trivial inconvenience. You may enjoy cooperating with one colleague and suffer when you are forced to cooperate with another one, but on the official level this does not exist, and you will be treated as irrational (I think the proper word in this context is "unprofessional") for simply expressing a preference for sitting next to person X instead of person Y, even if doing so would mean zero cost for the company and could actually increase your productivity. There are some allowed ways to express your job-unrelated human side, and they are generally called "teambuilding", i.e. doing what the people in power believe is fun.

On the state level, using your local knowledge that is not recognized by the state may be considered illegal discrimination. Better not to go into specific details.

Replies from: ChristianKl
comment by ChristianKl · 2017-06-17T21:30:27.823Z · LW(p) · GW(p)

Depending on the country, the educational system may be a victim of this: homeschooling is either illegal or just strongly disliked by people in power, same for alternative schools; it is much simpler to use the same system for everyone.

Many Western countries do allow alternative schools for the elite. The UK won't shut down Eton anytime soon.

Replies from: Viliam
comment by Viliam · 2017-06-17T23:42:52.234Z · LW(p) · GW(p)

Somehow people in power can always make an exception for themselves and for their families. Legibility only overrides the needs of everyone else. Sometimes you can also benefit from the exception, even if you are not one of them, if you happen to have exactly the same need. But the further from the elite you are, the less likely your specific needs are to fit the exception made for them.

comment by MrMind · 2017-06-15T07:22:02.673Z · LW(p) · GW(p)

We should adopt an acronym: YASLASR, yet another Seeing Like a State review. And we are crossing into meta-review territory already.
To be frank, I've never understood the wide appeal that the book enjoys. Sure, it's an important lesson that implicit knowledge sometimes is less wrong than scientific knowledge, but we (should) already know that: it's in Jaynes (the part about the peasants believing that meteors were rocks falling from the sky) and it's in the metaphor of evolution as the mad god Azathoth (referenced here many times). Perhaps it is less surprising to aspiring rationalists because we already know the limitations of scientific knowledge.

Replies from: Lumifer
comment by Lumifer · 2017-06-15T16:41:44.252Z · LW(p) · GW(p)

I can gesture in the direction of some points that make it appeal to me:

  • I like the concept of legibility. It's a new one for me and I find it useful
  • I like the lack of clear-cut heroes and villains -- it is complicated
  • I like the attention paid to what can be expressed in what language and the observation that there are real concerns which cannot be readily expressed in the language of rationality
  • I like the recognition of the role that power plays in social arrangements, regardless of what's "rational" or not
  • I like the pushback against the idea -- very popular among rationalists, mind you -- that we have a new shiny tool called math and logic which will solve everything so we can ignore the accumulated local knowledge deadwood

All in all, it's a smart book written by someone on the other side of the ideological fence (AFAIK James Scott is a Marxist, though not entirely an orthodox one), which makes it very interesting.

Replies from: MaryCh
comment by MaryCh · 2017-06-15T18:03:51.392Z · LW(p) · GW(p)

Goodness, you said something definite! :)

Replies from: Lumifer
comment by Lumifer · 2017-06-15T18:25:20.180Z · LW(p) · GW(p)

Ooops, sorry ma'am, won't happen again :-P

comment by MrMind · 2017-06-13T07:33:11.371Z · LW(p) · GW(p)

Well, if there's been any less accurate spam...

comment by BeleagueredPotential · 2017-06-18T08:10:22.035Z · LW(p) · GW(p)

I find myself in a potentially critical crossroads at the moment, one that could affect my ability to become a productive researcher for friendly AI in the future. I'll do my best to summarize the situation.

I had very strong mental capabilities 7 years ago, but a series of unfortunate health related problems including a near life threatening infection led to me developing a case of myalgic encephalomyelitis (chronic fatigue syndrome). This disease is characterized by extreme fatigue that usually worsens with physical or mental exertion, and is not significantly improved by rest. There are numerous other symptoms that are common to ME, I luckily escaped a great many of them. However I developed the concentration and memory problems which are common to ME to a very large degree.

I had somewhat bad ME until a few years ago when in conjunction with a mind/body specialist I was able to put it into partial remission. I am now able to do physically demanding activities without fatigue but I still have severe cognitive constraints; my intelligence now seems to be almost as sharp as it ever was despite deficits in mental energy, concentration, and memory (especially working memory). However having efficacious mental throughput relies so much on these attributes that support intelligence, and I am hardly useful at all as it stands. Therefore my primary concern these past few years has been to resolve my medical issues to a large enough degree to enable real productivity.

I am still in this state despite putting all of my effort towards remedying it. I have stuck to safer treatments (like bacteriotherapy or sublingual methylcobalamin) in order to prevent worsening my condition (although I have had some repercussions from following even this philosophy). I am wondering if I can reasonably expect to get better using this methodology, though. It could be that I need to take more extreme risks, because I won't do any good as I am and time continues to tick away. Looking at the big picture with a properly pessimistic outlook gives me the impression that friendly AI research does not have a lot of time to spare as it is.

There is a doctor who is recommended by a large number of people on an ME forum I frequent and who has exceptionally aggressive treatment protocols. His name is Dr. Kenny de Meirleir, and while I have misgivings about some of the stuff I've read about him, I've pretty much given up on trying to find someone who is both good and doesn't have a long wait list. I've gotten on the wait list of one practitioner who is local, but I do not have too much confidence in them. Dr. de Meirleir wasn't too difficult to get an appointment with because he travels to the USA for a few days every couple of months and these appointments are not widely known about.

However even the cost of initial tests and evaluation could be an unrecoverable failure for me if they don't pan out like I hope. It will cost thousands of dollars to pay for travel to the states, hotel, the consultation, and the comprehensive tests he is likely to run; even considering how much of the lab tests my own country will probably cover. Although at least then I could finally confirm a lot of unknowns about my health, such as whether there are infectious agents still affecting me. Despite all the testing I've gone through over the years he does a lot of tests I haven't gotten yet.

It really depends on the results of the tests, but I'm reading plenty of anecdotal reports that suggest a high likelihood of me getting put on multiple antibiotics by him. Plenty of people whose stories I have read have reported worsening conditions and relapses of ME due to antibiotics, and I know from my research that ME treatments in general often have these risks.

The quantity of symptoms I have has always been small, which might indicate that there is a lot more of my physiology that is working the way it should be compared to the average ME patient. My condition is also in partial remission already and I am still under 30, so I consider myself to have better odds of major recovery than the low rates of total remission this disease is usually predicted to have.

The question, then, is: as rationalists, what path do you think I should take here? If I choose to go to the appointment next weekend, I lose a large chunk of my limited capital but gain knowledge and possibilities for treatment. If I then proceed to do treatment of the type he often prescribes, I probably lose most or all of my remaining money on something that could stand the best chance of making me functional again but that could also do nothing or make me irrecoverably worse (or anything in between the two extremes). This is not money I can recover easily; work is still difficult and it could take me a lot of time to save, considering normal essential expenses. If I choose to do nothing, cancel the appointment, and continue on my safe but so far ineffective path, then I keep the status quo and avoid risking my health. Although if I do this, I waste precious time either waiting for one of my less risky solutions to work, or waiting for the unlikely possibility of researchers developing a cure anytime soon. The years it will take for me to finish developing and expanding my skills and knowledge after recovery have to be factored in as well; I cannot just jump into FAI research right away. There are no doubt other options and variables I cannot see at the moment, but I haven't found them as of yet.

Due to the aforementioned cognitive constraints, I know that my ideas and the research I have done on my condition are probably riddled with biases, errors, and gaps in knowledge. If anyone can offer suggestions or comments about this situation, it would be appreciated. It's safe to assume that the personal outcomes I face from this choice only matter in the context of whether it increases or decreases the probability of me being useful to friendly AI development in the future. Even if I only partially recover further and can contribute in other ways (like financially), I'll consider that worth the effort.

I might not get the chance to answer any responses in a timely manner because of how much strain writing causes me (and if I do decide not to cancel the appointment I will have to prepare for travel this coming weekend). However reading and thinking both cost me less energy so know that any responses posted will be considered by me as carefully as I can and it will give me more perspective to help decide what to do in this situation.

Replies from: Mitchell_Porter, ChristianKl
comment by Mitchell_Porter · 2017-06-18T11:33:53.788Z · LW(p) · GW(p)

I'm going to take a wild guess, and suggest that your attitude towards FAI research, and your experience of CFS, are actually related. I have no idea if this is a standard theory, but in some ways CFS sounds like depression minus the emotion - and that is a characteristic symptom in people who have a purpose they regard as supremely important, who find absolutely no support for their attempt to pursue it, but who continue to regard it as supremely important.

The point being that when something is that important, it's easy to devalue certain aspects of your own difficulties. Yes, running into a blank wall of collective incomprehension and indifference may have been personally shattering; you may be in agony over the way that what you have to do in order to stay alive, interferes with your ability to preserve even the most basic insights that motivate your position ... but it's an indulgence to think about these feelings, because there is an invisible crisis happening that's much more important.

So you just keep grinding away, or you keep crawling through the desert of your life, or you give up completely and are left only with a philosophical perspective that you can talk about but can't act on... I don't know all the permutations. And then at some point it affects your health. I don't want to say that this is solely about emotion, we are chemical beings affected by genetics, nutrition, and pathogens too. But the planes intersect, e.g. through autoimmune disorders or weakened disease resistance.

The core psychological and practical problem is, there's a difficult task - the great purpose, whatever it is - being made more difficult in ways that have no intrinsic connection to the problem, but are solely about lack of support, or even outright interference. And then on top of that, you may also have doubts and meta doubts to deal with - coming from others and from yourself (and some of those doubts may be justified!). Finally, health problems round out the picture.

The one positive in this situation, is that while all those negatives can reinforce each other, positive developments in one area can also carry across to another.

OK, so that's my attempt to reflect back to you, how you sound to me. As for practical matters, I have only one suggestion. You say

he travels to the USA for a few days every couple of months

so I suggest that you at least wait until his next visit, and use that extra time to understand better how all these aspects of your life intersect.

Replies from: BeleagueredPotential
comment by BeleagueredPotential · 2017-07-11T04:50:45.306Z · LW(p) · GW(p)

I spent a good year and a half trying to answer questions related to the points you brought up after first seeing the mind-body specialist, although you gave me some good perspective.

and that is a characteristic symptom in people who have a purpose they regard as supremely important, who find absolutely no support for their attempt to pursue it, but who continue to regard it as supremely important


And then at some point it affects your health.

Actually, I only discovered the purpose a couple of years after the myalgic encephalomyelitis set in; before that point my primary goal was to get better and to worry about other goals afterwards. I do not think that becoming more purpose-focused translated into me devaluing my difficulties; I was focused on myself and my health at the start of this thing, and that seems to have remained constant. It's just that suddenly those weren't the most important things to me anymore. My health became not just something intrinsically valuable but also a very important means to an end. Though I'll be mindful about how my goals affect me; even if they weren't initially involved in my health problems, they could be involved in their continuation if I take matters too seriously.

I don't want to say that this is solely about emotion, we are chemical beings affected by genetics, nutrition, and pathogens too. But the planes intersect

Exactly this; I keep learning over and over new ways in which the mind and body and all their subsystems can affect each other in very major ways. Several insights related to this concept put me into partial remission in the first place.

so I suggest that you at least wait until his next visit, and use that extra time to understand better how all these aspects of your life intersect

I wouldn't say that I've done all I can in figuring out how all these things interact with each other. I would say, though, that with the success of the partial remission and all the work I did afterwards towards figuring out mind-body interactions within myself, I am at the point of diminishing returns of results vs. effort, and I need to pursue other avenues at this point.

comment by ChristianKl · 2017-06-18T08:27:47.840Z · LW(p) · GW(p)

One of the core questions of rationality is: "Why do you believe what you believe?" Specifically, why do you believe that this doctor will be able to help you in a way that others can't?

You can also write down the likelihood of different outcomes.
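For concreteness, a bare-bones version of that exercise (all numbers here are placeholders to be replaced with your own estimates, not an assessment of your situation):

```python
# Toy expected-value tally for "go to the appointment" vs. "status quo".
# Probabilities and outcome values are made-up placeholders, not medical advice.
options = {
    "aggressive_treatment": [(0.30, +10), (0.50, 0), (0.20, -8)],  # (probability, value)
    "status_quo":           [(0.10, +10), (0.90, 0)],
}
for name, outcomes in options.items():
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities must sum to 1
    ev = sum(p * v for p, v in outcomes)
    print(f"{name}: expected value = {ev:+.2f}")
```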

Given that you had success at improving your condition with one mind-body paradigm, why not try others?

Given that you speak about travelling to the US, it would also be worthwhile to know where you are living at the moment, to know what's available to you.

Replies from: BeleagueredPotential
comment by BeleagueredPotential · 2017-07-11T04:27:12.088Z · LW(p) · GW(p)

This was extremely helpful in figuring out what to do, it hadn't occurred to me that a Bayesian calculation would be useful here. After tallying up all the variables I came to the conclusion that my current methods had a lower chance of helping me than I had always implicitly thought. What I referred to as “extreme risks” may not even be the truly risky options when considering other factors; like how the longer someone has myalgic encephalomyelitis the less likely it is they can get better. I realized the types of solutions I’ve been trying give me the mental satisfaction of having “done something” but they might not stand the best chance of actually working.

I trust this one doctor more because I am trying to treat this condition rather than manage it, and his patients have reported more actual reduction of symptoms than those of almost any other ME doctor I can find, except possibly Dr. Sarah Myhill (but she isn't accepting new patients). I have seen many health professionals (general practitioners, psychiatrists, dietitians, etc.) already, but only the one mind-body specialist has treated the ME rather than just managed the symptoms.

After careful consideration I decided in the end to go to the appointment. He ordered a lot of lab tests and it was quite a bit more expensive than I thought, but I had anticipated that possibility beforehand and went ahead with it even so. I should get all the results back in one or two months.

Given that you had success at improving your condition with one mind-body paradigm, why not try others?

This is precisely what I thought after seeing what the mind-body specialist achieved. I tried numerous approaches over the following year and a half; such as seeing a therapist, cognitive behavioral therapy from a book, and acting on suggestions from the specialist. Unfortunately I didn't see any further progress for my primary health concerns (there were some favorable, unrelated side benefits though). My guess is that only some of the physiological issues going on within me could be corrected this way, at least for any of the mind-body paradigms I've tried.

Given that you speak about travelling to the US, it would also be worthwhile to know where you are living at the moment, to know what's available to you.

I live in Calgary, Canada at the moment. I am currently on the waitlist for local doctor Beverley Tompkins although from my conversation with her receptionist I think she is probably someone who will mostly give me treatments I’ve already tried. The most highly recommended specialist in my city is Dr. Eleanor Stein but she is not currently accepting other patients due to a high volume waitlist. While I’ve done some searching for effective specialists in Canada I am not having too much luck, although I haven’t been thorough with my research yet. I thought I found a decent one with Dr. Alison Bested, but she left the Complex Chronic Disease Program a few years back and I can’t get a hold of her or even confirm where she is.

comment by justinpombrio · 2017-06-17T02:42:13.562Z · LW(p) · GW(p)

I'd like to ask a question about the Sleeping Beauty problem for someone that thinks that 1/2 is an acceptable answer.

Suppose the coin isn't flipped until after the interview on Monday, and Beauty is asked the probability that the coin has or will land heads. Does this change the problem, even though Beauty is woken up on Monday regardless? It seems to me to obviously be equivalent, but perhaps other people disagree?

If you accept that these problems are equivalent, then you know that P(Heads | Monday) = P(Tails | Monday) = 1/2, since if it's Monday then a fair coin is about to be flipped. From this we can learn that P(Monday) = 2 * P(Heads), by the calculation below.

This is inconsistent with the halfer position, because if P(Heads) = 1/2, then P(Monday) = 2 * 1/2 = 1.

EDIT: The calculation is that P(Heads) = P(Monday) P(Heads | Monday) + P(Tuesday) P(Heads | Tuesday) = 1/2 P(Monday) + 0 P(Tuesday), so P(Monday) = 2 * P(Heads).
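Writing the same algebra out:

```latex
P(\text{Heads}) \;=\; P(\text{Mon})\,P(\text{Heads}\mid\text{Mon}) \;+\; P(\text{Tue})\,P(\text{Heads}\mid\text{Tue})
             \;=\; \tfrac{1}{2}\,P(\text{Mon}) \;+\; 0
\quad\Longrightarrow\quad
P(\text{Mon}) \;=\; 2\,P(\text{Heads}).
```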

Replies from: entirelyuseless, Jiro
comment by entirelyuseless · 2017-06-18T16:23:54.384Z · LW(p) · GW(p)

I think that 1/2 is an acceptable answer, and in fact the only correct answer. Basically 1/2 corresponds to SSA, and 1/3 to SIA; and in my opinion SSA is right, and SIA is wrong.

We can convert the situation to an equivalent Incubator situation to see how SSA applies. We have two cells. We generate a person and put them in the first cell. Then we flip a coin. If the coin lands heads, we generate no one else. If the coin lands tails, we generate a new person and put them in the second cell.

Then we question all of the persons: "Do you think you are in the first cell, or the second?" "Do you think the coin landed heads, or tails?" To make things equivalent to your description we could question the person in the first cell before the coin is flipped, and the person in the second only if they exist, after it is flipped.

Estimates based on SSA:

  • P(H) = 0.5
  • P(T) = 0.5
  • P(1st cell) = 0.75 [there is a 50% chance I am in the first cell because of getting heads; otherwise there is a 50% chance I am the first person instead of the second]
  • P(2nd cell) = 0.25 [likewise]
  • P(H | 1st cell) = 2/3 [from above]
  • P(T | 1st cell) = 1/3 [likewise]
  • P(H | 2nd cell) = 0
  • P(T | 2nd cell) = 1

Your mistake is in the assumption that "P(Heads | Monday) = P(Tails | Monday) = 1/2, since if it's Monday then a fair coin is about to be flipped." The Doomsday style conclusion that I fully embrace is that if it is Monday, then it is more likely that the coin will land heads.

Replies from: justinpombrio
comment by justinpombrio · 2017-06-19T00:26:09.780Z · LW(p) · GW(p)

and in my opinion SSA is right, and SIA is wrong.

I'm curious: is this grounded on anything beyond your intuition in these cases?

SIA is grounded in frequency. In the Incubator situation, the SIA probabilities are:

  • P(1st cell) = 2/3
  • P(2nd cell) = 1/3
  • P(H | 1st cell) = 1/2
  • P(H | 2nd cell) = 0

(FYI, I find this intuitive, and find SSA in this situation unintuitive.)

These agree with the actual frequencies, in terms of expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you're a utilitarian that's what you care about. If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).

So your probabilities aren't grounded in frequency&utility. Is there something else they're grounded in that you care about? Or do you choose them only because they feel intuitive?

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-20T15:19:39.601Z · LW(p) · GW(p)

These agree with the actual frequencies, in terms of expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you're a utilitarian that's what you care about.

In a previous thread on Sleeping Beauty, I showed that if there are multiple experiments, SSA will assign intermediate probabilities, closer to the SIA probabilities. And if you run an infinite number, it will converge to the SIA probabilities. So you will partially get this benefit in any case; but apart from this, there is nothing to prevent a person from taking into account the whole situation when they decide whether to make a bet or not.

If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).

I agree with this, since there will always be someone in the first cell, and someone in the second cell only 50% of the time.

So your probabilities aren't grounded in frequency&utility. Is there something else they're grounded in that you care about? Or do you choose them only because they feel intuitive?

I care about truth, and I care about honestly reporting my beliefs. SIA requires me to assign a probability of 1 to the hypothesis that there are an infinite number of observers. I am not in fact certain of that, so it would be a falsehood to say that I am.

Likewise, if there is nothing inclining me to believe one of two mutually exclusive alternatives, saying "these seem equally likely to me" is a matter of truth. I would be falsely reporting my beliefs if I said that I believed one more than the other. In the Sleeping Beauty experiment, or in the incubator experiment, nothing leads me to believe that the coin will land one way or the other. So I have to assign a probability of 50% to heads, and a probability of 50% to tails. Nor can I change this when I am questioned, because I have no new evidence. As I stated in my other reply, the fact that I just woke up proves nothing; I knew that was going to happen anyway, even if, e.g. in the incubator case, there is only one person, since I cannot distinguish "I exist" from "someone else exists."

In contrast, take the incubator case, where a thousand people are generated if the coin lands tails. SIA implies that you are virtually certain a priori that the coin will land tails, or that when you wake up, you have some way to notice that it is you rather than someone else. Both things are false -- you have no way of knowing that the coin will land tails or is in any way more likely to land tails, nor do you have a way to distinguish your existence from the existence of someone else.

comment by Jiro · 2017-06-17T17:04:49.724Z · LW(p) · GW(p)

Adding P(Heads | Monday) and P(Tails | Monday) doesn't give you P(Monday), it gives you P(1 | Monday).

Replies from: justinpombrio
comment by justinpombrio · 2017-06-17T17:40:12.020Z · LW(p) · GW(p)

I didn't say it did. I said that P(Heads | Monday) = P(Tails | Monday) = 1/2, because it's determined by a fair coin flip that's yet to happen. This is in contrast to the standard halfer position, where P(Heads | Monday) > 1/2, and P(Tails | Monday) < 1/2. Everyone agrees that P(Heads | Monday) + P(Tails | Monday) = 1.

Or are you disagreeing with the calculation?

P(Heads) = P(Monday) P(Heads | Monday) + P(Tuesday) P(Heads | Tuesday) is just the law of total probability.

P(Heads | Tuesday) = 0, because if Beauty is awake on Tuesday then the coin must have landed tails.

P(Heads | Monday) = 1/2 by the initial reasoning.

Then P(Monday) = 2 * P(Heads) by a teeny amount of algebra.

Replies from: Jiro
comment by Jiro · 2017-06-17T20:15:02.029Z · LW(p) · GW(p)

The probability is 1/3 per awakening and 1/2 per experiment.

  • P(Heads | Monday) = 1/2
  • P(Tails | Monday) = 1/2
  • P(Heads | Tuesday) = 0
  • P(Tails | Tuesday) = 1

Per-experiment:

  • P(Monday) = 1
  • P(Tuesday) = 1/2
  • P(Heads) = 1/2, P(Tails) = 1/2

Per-awakening:

  • P(Monday) = 2/3
  • P(Tuesday) = 1/3
  • P(Heads) = 1/3, P(Tails) = 2/3

I don't see anything in either of those links claiming that P(Heads | Monday) > 1/2. I assume that your reasoning to get that is something like "P(Heads | Tuesday) is less than P(Heads), so it follows that P(Heads | Monday) is greater than P(Heads)." However, if you're calculating per-experiment, Monday and Tuesday are not mutually exclusive, so this reasoning doesn't work. (If you're calculating per-awakening, P(Heads) isn't 1/2 anyway.)
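A quick Monte Carlo check of the two tables above (my own sketch): counting per experiment gives P(Heads) ≈ 1/2, while counting per awakening gives P(Heads) ≈ 1/3 and P(Monday) ≈ 2/3.

```python
# Simulate the Sleeping Beauty setup and count both ways.
import random

N = 100_000
heads_experiments = 0
awakenings = []                  # (coin_is_heads, day) for every awakening

for _ in range(N):
    heads = random.random() < 0.5
    heads_experiments += heads
    awakenings.append((heads, "Mon"))
    if not heads:                # tails: a second awakening on Tuesday
        awakenings.append((False, "Tue"))

print("per experiment  P(Heads) ~", heads_experiments / N)                     # ~0.50
print("per awakening   P(Heads) ~",
      sum(c for c, _ in awakenings) / len(awakenings))                         # ~0.33
print("per awakening   P(Monday)~",
      sum(d == "Mon" for _, d in awakenings) / len(awakenings))                # ~0.67
```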

Replies from: entirelyuseless, justinpombrio
comment by entirelyuseless · 2017-06-18T16:38:04.245Z · LW(p) · GW(p)

Some additional support for the apparently unreasonable conclusion that if it is Monday, it is more likely that the coin will land heads. Suppose that on each awakening, the experimenter flips a second coin, and if the second coin lands heads, the experimenter tells Beauty what day it is, and does not do so if it is tails.

If Beauty is told that it is Tuesday, this is evidence (conclusive in fact) that the first coin landed tails. So conservation of expected evidence means that if she is told that it is Monday, she should treat this as evidence that the first coin will land heads.
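Spelled out (conditioning on Beauty being told the day at all, which the independent second coin makes uninformative about the first coin):

```latex
P(H \mid \text{told}) \;=\; P(H \mid \text{told Mon})\,P(\text{Mon} \mid \text{told}) \;+\; P(H \mid \text{told Tue})\,P(\text{Tue} \mid \text{told}),
```

and since P(H | told Tue) = 0 while P(Tue | told) > 0, the first term has to carry all the weight, which forces P(H | told Mon) > P(H).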

Replies from: Jiro
comment by Jiro · 2017-06-18T19:55:58.349Z · LW(p) · GW(p)

Some additional support for the apparently unreasonable conclusion that if it is Monday, it is more likely that the coin will land heads.

More likely than what?

Using per-awakening probabilities, the probability of heads without this information is 1/3.

The new information makes heads more likely than the 1/3 that the probability would be without the new information. It doesn't make it more likely than 1/2.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-18T20:19:50.206Z · LW(p) · GW(p)

I misplaced that comment. It was not a response to yours.

More likely than what?

More likely than .5. In fact I am saying the probability of getting heads is 2/3 after being told that it is Monday.

Using per-awakening probabilities, the probability of heads without this information is 1/3.

This is a frequentist definition of probability. I am using probability as a subjective degree of belief, where being almost certain that something is so means assigning a probability near 1, being almost certain that it is not means assigning a probability near 0, and being completely unsure means .5.

Here is how this works. If I am sleeping Beauty, on every awakening I am subjectively in the same condition. I am completely unsure whether the coin landed/will land heads or tails. So the probability of heads is .5, and the probability of tails is .5.

What is the subjective probability that it is Monday, and what is the subjective probability it is Tuesday? It is easier to understand if you consider the extreme form. Let's say that if the coin lands tails, I will be woken up 1,000,000 times. I will be quite surprised if I am told that it is day #500,000, or any other easily definable number. So my degree of belief that it is day #500,000 has to be quite low. On the other hand, if I am told that it is the first day, that will be quite unsurprising. But it will be unsurprising mainly because there is a 50% chance that will be the only awakening anyway. This tells me that before I am told what day it is, my estimate of the probability that it is the first day is a tiny bit more than 50% -- 50% of this is from the possibility that the coin landed heads, and a tiny bit more from the possibility that it landed tails but it is still the first day.

When we transition to the non-extreme form, being Monday is still less surprising than being Tuesday. In fact, before being told anything, I estimate a chance of 75% that it is Monday -- 50% from the coin landing heads, and another 25% from the coin landing tails. And when I am told that it is in fact Monday, then I think there is a chance of 2/3, i.e. 50/75, that the coin will land heads.
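In symbols, the bookkeeping described above is:

```latex
P(\text{Mon}) \;=\; P(H)\cdot 1 \;+\; P(T)\cdot\tfrac{1}{2} \;=\; \tfrac{3}{4},
\qquad
P(H \mid \text{Mon}) \;=\; \frac{P(H \wedge \text{Mon})}{P(\text{Mon})} \;=\; \frac{1/2}{3/4} \;=\; \tfrac{2}{3}.
```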

Replies from: Jiro
comment by Jiro · 2017-06-19T03:11:19.978Z · LW(p) · GW(p)

This tells me that before I am told what day it is, my estimate of the probability that it is the first day is a tiny bit more than 50%... When we transition to the non-extreme form, being Monday is still less surprising than being Tuesday.

In the non-extreme form, the chance of being Monday is 2/3 and the chance of being Tuesday is 1/3. An event with probability 2/3 is indeed less surprising than one with probability 1/3, so your reasoning is correct.

before being told anything, I estimate a chance of 75% that it is Monday -- 50% from the coin landing heads, and another 25% from the coin landing tails

Before being told anything, you should estimate a 2/3 chance that it's Monday (not a 75% chance). There are three possibilities: heads/Monday, tails/Monday, and tails/Tuesday, all of which are equally likely. Because tails results in two awakenings, and you are calculating probability per awakening, that boosts the probability of tails, so it would be incorrect to put 50% on heads/Monday and 25% on tails/Monday. Tails/Monday is not half as likely as heads/Monday; it is equally likely. Only in the scenario where you were woken up either on Monday or Tuesday, but not both, would the probability of tails/Monday be 25%.

And when I am told that it is in fact Monday, then I think there is a chance of 2/3, i.e. 50/75, that the coin will land heads.

When you are told that it is Monday, the chance is not 50/75, it's (1/3) / (2/3) = 50%. Being told that it is Monday does increase the probability that the result is heads; however, it increases it from 1/3 -> 1/2, not from 1/2 -> 2/3.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-20T15:08:21.777Z · LW(p) · GW(p)

Before being told anything, you should estimate a 2/3 chance that it's Monday (not a 75% chance). There are three possibilities: heads/Monday, tails/Monday, and tails/Tuesday, all of which are equally likely.

I disagree that these situations are equally likely. We can understand it better by taking the extreme example. I will be much more surprised to hear that the coin was tails and that we are now at day #500,000, than that the coin was heads and that it is the first day. So obviously these two situations do not seem equally likely to me. And in particular, it seems equally likely to me that the coin was or will be heads, and that it was or will be tails. Going back to the non-extreme form, this directly implies that it seems half as likely to me that it is Monday and the coin will be tails as that it is Monday and the coin will be heads. This results in my estimate of a 75% chance that it is Monday.

Because tails results in two awakenings, and you are calculating probability per awakening, that boosts the probability of tails, so it would be incorrect to put 50% on heads/Monday and 25% on tails/Monday. Tails/Monday is not half as likely as heads/Monday; it is equally likely.

I am not calculating "probability per awakening", but calculating in the way indicated above, which does indeed make Tails/Monday half as likely as heads/Monday.

Only in the scenario where you were woken up either on Monday or Tuesday, but not both, would the probability of tails/Monday be 25%.

I am not asking about the probability that the situation as a whole will somewhere or other contain tails/Monday; this has a probability of 50%, just like the corresponding claim about heads/Monday. I am being asked in a concrete situation, "do you think it is Monday?" And I am less sure it is Monday if the coin is going to be tails, because in that situation I will not be able to distinguish my situation from Tuesday. And this is surely the case even when I am woken up both on Monday and Tuesday. It will just happen twice that I am less sure it is Monday.

And based on the above reasoning, being told that it is Monday does indeed lead me to expect that the coin will land heads, with a probability of 2/3.

Replies from: Jiro
comment by Jiro · 2017-06-20T22:58:13.643Z · LW(p) · GW(p)

We can understand it better by taking the extreme example. I will be much more surprised to hear that the coin was tails and that we are now at day #500,000, than that the coin was heads and that it is the first day.

You should not be more surprised in that situation. The more days there are, the more the extra tails awakenings push down the probability of heads. With 500,000 awakenings, the probability gets pushed down by a lot. Now heads has a per-awakening probability of 1/500001, the same as tails-day-1 and tails-day-500000.
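As a quick sketch of this counting (assuming, as above, that every awakening gets equal weight), the per-awakening probability of heads as a function of the number of tails awakenings is:

    # Per-awakening probability of heads with n equally weighted tails awakenings.
    from fractions import Fraction as F

    def per_awakening_p_heads(n_tails_awakenings: int) -> F:
        # One heads awakening against n tails awakenings, all weighted equally.
        return F(1, 1 + n_tails_awakenings)

    assert per_awakening_p_heads(2) == F(1, 3)             # the usual two-day version
    assert per_awakening_p_heads(500_000) == F(1, 500_001)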

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-21T05:26:23.926Z · LW(p) · GW(p)

You are claiming that if I will be woken up 500,000 times if the coin lands tails, I should be virtually certain a priori that the coin will land tails. I am not; I would not be surprised at all if it landed heads. In fact, as I have been saying, the setup does not make me expect tails in any way. So at the start the probability remains 50% heads, 50% tails.

Replies from: Jiro
comment by Jiro · 2017-06-21T15:27:55.875Z · LW(p) · GW(p)

You are claiming that if I will be woken up 500,000 times if the coin lands tails, I should be virtually certain a priori that the coin will land tails

Yes, I am (assuming you mean per-awakening certainty).

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-21T15:30:51.573Z · LW(p) · GW(p)

assuming you mean per-awakening certainty

I do not. I mean reporting my opinion when someone asks, "Do you think the coin landed heads, or tails?" I will truthfully respond that I have no idea. The fact that I would be woken up multiple times if it landed tails did not make it any harder for the coin to land heads.

Replies from: justinpombrio
comment by justinpombrio · 2017-06-22T02:23:04.973Z · LW(p) · GW(p)

I'd recommend distinguishing between the probability that the coin landed heads (which happens exactly once) and the probability that, if you were to plan to peek, you would see heads (which would happen on average 250,000 times).

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-22T14:43:14.254Z · LW(p) · GW(p)

The problem is that you are counting frequencies, and I am not. It is true that if you run the experiment many times, my estimate will change, from the very moment that I know that the experiment will be run many times.

But if we are going to run the experiment only once, then even if I plan to peek, I would expect with 50% probability to see heads. That does not mean "per awakening" or any other method of counting. It means that if I saw heads, I would say, "Not surprising; that had a 50% chance of happening." I would not say, "What an incredible coincidence!!!!"

comment by justinpombrio · 2017-06-17T23:32:49.868Z · LW(p) · GW(p)

Thank you for walking me through this; I'm having a very hard time seeing the other perspective here.

I understand that P(Monday) is ambiguous. I meant to refer to "the probability that the current day, as Beauty is currently being interviewed, is Monday". Regardless of Beauty's perspective, she can ask whether the current day is Monday or Tuesday, and she does know that it is not currently both. And she can ask what the probability is that the coin landed tails given that the current day is Tuesday, etc. Yes? Given that, I'm not seeing what part of my reasoning doesn't work if you replace each instance of "Monday" with "IsCurrentlyMonday".

Replies from: Jiro
comment by Jiro · 2017-06-17T23:51:57.003Z · LW(p) · GW(p)

I meant to refer to "the probability that the current day, as Beauty is currently being interviewed, is Monday".

What you just described is a per-awakening probability. Per-awakening, P(Heads) = 1/3, so the proof that P(Heads | Monday) > 1/2 actually only proves that P(Heads | Monday) > 1/3, which is true since 1/2 > 1/3.

Replies from: justinpombrio
comment by justinpombrio · 2017-06-18T03:12:14.067Z · LW(p) · GW(p)

Sorry, you lost me completely. I didn't prove that P(Heads | Monday) > 1/2 at all.

Could you say which step (1-6) is wrong, if I am Beauty, and I wake up, and I reason as follows?

  1. The experiment is unchanged by delaying the coin flip until Monday evening.

  2. If the current day is Monday, then the coin is equally likely to land heads or tails, because it is a fair coin that is about to be flipped. Thus P(Heads | CurrentlyMonday) = 1/2.

  3. By the law of total probability, which is applicable because it cannot currently be both Monday and Tuesday:

    P(Heads) = P(CurrentlyMonday) P(Heads | CurrentlyMonday) + P(CurrentlyTuesday) P(Heads | CurrentlyTuesday)

  4. P(Heads | CurrentlyTuesday) = 0, because if it is Tuesday then the coin must have landed tails.

  5. Thus P(CurrentlyMonday) = 2 * P(Heads) by some algebra.

  6. It may not currently be Monday, thus P(CurrentlyMonday) != 1, thus P(Heads) < 1/2.
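For concreteness, here is how those steps play out under a per-awakening assignment that treats the three possible awakenings (heads/Monday, tails/Monday, tails/Tuesday) as equally likely. This is only a sketch of the algebra, not an argument that this assignment is the right one:

    # Steps 2-6 under the per-awakening assignment.
    from fractions import Fraction as F

    p_currently_monday = F(2, 3)    # heads/Monday + tails/Monday
    p_currently_tuesday = F(1, 3)   # tails/Tuesday

    p_heads_given_monday = F(1, 2)  # step 2: fair coin, not yet flipped on Monday evening
    p_heads_given_tuesday = F(0)    # step 4: a Tuesday awakening requires tails

    # Step 3: total probability over the mutually exclusive days.
    p_heads = (p_currently_monday * p_heads_given_monday
               + p_currently_tuesday * p_heads_given_tuesday)

    assert p_currently_monday == 2 * p_heads  # step 5
    assert p_heads == F(1, 3)                 # step 6: strictly below 1/2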

Replies from: Jiro
comment by Jiro · 2017-06-18T14:08:23.097Z · LW(p) · GW(p)

Sorry, you lost me completely. I didn't prove that P(Heads | Monday) > 1/2 at all.

You had said:

This is in contrast to the standard halfer position, where P(Heads | Monday) > 1/2

Neither of your links to the halfer position shows anyone claiming that. So I assumed you tried to deduce it from the halfer position. The obvious way to deduce it is wrong for the reason I stated.

Could you say which step (1-6) is wrong, if I am Beauty, and I wake up, and I reason as follows?

"CurrentlyMonday" as you have defined it is a per-awakening probability, not a per-experiment probability. So the P(Heads) that you end up computing by those steps is a per-awakening P(Heads). Per-awakening, P(Heads) is 1/3, which indeed is less than 1/2.

The halfer position assumes that the probability that is meaningful is a per-experiment probability.

(If you want to compute a per-experiment probability, you would have to define CurrentlyMonday as something like "the experiment contains a bet where, at the moment of the bet, it is currently Monday", and step 3 won't work since CurrentlyMonday and CurrentlyTuesday are not exclusive.)

Replies from: justinpombrio
comment by justinpombrio · 2017-06-19T00:59:36.043Z · LW(p) · GW(p)

"CurrentlyMonday" as you have defined it is a per-awakening probability

The halfer position assumes that the probability that is meaningful is a per-experiment probability.

To be clear, you're saying that, from a halfer position, "the probability that, when Beauty wakes up, it is currently Monday" is meaningless?

Neither of your links to the halfer position shows anyone claiming that.

Sorry, I wrote that without thinking much. I've seen that position, but it's definitely not the standard halfer position. (It seems to be entirelyuseless' position, if I'm not mistaken.)

The per-experiment probabilities you give make perfect sense to me: they're the probabilities you have before you condition on the fact that you're Beauty in an interview, and they're the probabilities from which I derived the "per-awakening" probabilities myself (three indistinguishable scenarios: HM, TM, TT, each with probability 1/2; thus they're all equally likely, though that's not the most rigorous reasoning).

I'm confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she's interviewed each time she wakes up. If instead, on Heads you let Beauty live and on Tails you kill her, then no one would have trouble saying that Beauty should say P(Heads) = 1 in an interview. Why is this different?

Thanks again for the discussion.

Replies from: Jiro, entirelyuseless
comment by Jiro · 2017-06-19T06:15:45.928Z · LW(p) · GW(p)

To be clear, you're saying that, from a halfer position, "the probability that, when Beauty wakes up, it is currently Monday" is meaningless?

It's meaningless in the sense that it doesn't have a meaning that matches what you're trying to use it for. Not that it literally has no meaning.

I'm confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she's interviewed each time she wakes up.

It depends on what you're trying to measure.

If you're trying to measure what percentage of experiments have heads, you need to use a per-experiment probability. It isn't obviously implausible that someone might want to measure what percentage of experiments have heads.

Replies from: justinpombrio
comment by justinpombrio · 2017-06-19T18:24:17.673Z · LW(p) · GW(p)

It's meaningless in the sense that it doesn't have a meaning that matches what you're trying to use it for. Not that it literally has no meaning.

What I'm trying to use it for is to compute P(Heads), from a halfer position, while carrying out my argument.

So in other words, P(per-experiment-heads | it-is-currently-Monday) is meaningless? And a halfer, who interpreted P(heads) to mean P(per-experiment-heads), would say that P(heads | it-is-currently-Monday) is meaningless?

Replies from: Jiro
comment by Jiro · 2017-06-20T22:54:35.609Z · LW(p) · GW(p)

The "per-experiment" part is a description of, among other things, how we are calculating the probability.

In other words, when you say "P(per-experiment event)" the "per-experiment" is really describing the P, not just the event. So if you say "P(per-experiment event|per-awakening event)" that really is meaningless; you're giving two contradictory descriptions to the same P.

Replies from: justinpombrio
comment by justinpombrio · 2017-06-22T02:50:39.769Z · LW(p) · GW(p)

THANK YOU. I now see that there are two sides of the coin.

However, I feel like it's actually Heads, and not P, that is ambiguous. There is the probability that the coin would land heads. The coin lands exactly once per experiment, and half the time it will land heads. If you count Beauty's answer to the question "what is the probability that the coin landed heads" once per awakening, you're sometimes double-counting her answer (on Tails). It's dishonest to ask her twice about an event that only happened once.

On the other hand, there is the probability that if Beauty were to peek, she would see heads. If she decided to peek, then she would see the coin once or twice. Under SIA, she's twice as likely to see tails. If you count Beauty's answer to the question "what is the probability that the coin is currently showing heads" once per experiment, you're sometimes ignoring her answer (on Tuesdays). It would be dishonest to only count one of her two answers to two distinct questions.

(Being more precise: suppose the coin lands tails, and you ask Beauty "What is the probability that the coin is currently showing heads?" on each day, but only count her answer on Monday. Well, you've asked her two distinct questions, because the meaning of "currently" changes between the two days, but only counted one of them. It's dishonest.)

Thus, this question isn't up for interpretation. The answer is 1/2, because the question (on Wikipedia, at least) asks about the probability that the coin landed heads. There are two interpretations - per experiment and per awakening - but the interpretation should be set by the question. Likewise, setting a bet doesn't help settle which interpretation to use: either interpretation is perfectly capable of figuring out how to maximize expectation for any bet; it just might consider some bets to be rigged.

Although this is subtle, and maybe I'm still missing things. For one, why is the law of total probability failing? I now know how to use it both to prove that P(Heads) < 1/2 and to prove that P(Heads) = 1/2, by marginalizing on either CurrentlyMonday/CurrentlyTuesday or on WillWakeUpOnTuesday/WontWakeUpOnTuesday. When you use

P(X) = P(X | A) * P(A) + P(X | B) * P(B)

you need A and B to be mutually exclusive (and together exhaustive). But this seems to be suggesting that there's some other subtle requirement as well that somehow depends on what X is.

It could be, as you say, that P is different. But P should only depend on your knowledge and priors. All the priors are fixed here (it's a fair coin, use SIA), so what are the two sets of knowledge?
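Spelling the two computations out side by side may help. This is a minimal sketch using exact fractions; the assumption (my reading of the discussion, not settled fact) is that each marginalization lives in its own probability assignment:

    from fractions import Fraction as F

    # Per-awakening assignment: the three possible awakenings, equally weighted.
    # Marginalizing on CurrentlyMonday / CurrentlyTuesday inside this assignment:
    p_monday, p_tuesday = F(2, 3), F(1, 3)
    p_heads_per_awakening = p_monday * F(1, 2) + p_tuesday * F(0)
    assert p_heads_per_awakening == F(1, 3)

    # Per-experiment assignment: one outcome per run of the experiment.
    # Marginalizing on WillWakeUpOnTuesday / WontWakeUpOnTuesday inside it:
    p_will_wake_tuesday = F(1, 2)   # exactly the tails runs
    p_wont_wake_tuesday = F(1, 2)   # exactly the heads runs
    p_heads_per_experiment = p_will_wake_tuesday * F(0) + p_wont_wake_tuesday * F(1)
    assert p_heads_per_experiment == F(1, 2)

    # Each use of P(X) = P(X|A)P(A) + P(X|B)P(B) is internally consistent;
    # the answers differ only because the two assignments weight outcomes differently.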

Replies from: Jiro
comment by Jiro · 2017-06-22T16:39:05.859Z · LW(p) · GW(p)

The answer is 1/2, because the question (on Wikipedia, at least) asks about the probability that the coin landed heads.

That doesn't help. "Coin landed heads" can still be used to describe either a per-experiment or per-awakening situation:

1) Given many experiments, if you selected one of those experiments at random, in what percentage of those experiments did the coin land heads?

2) Given many awakenings, if you selected one of those awakenings at random, in what percentage of those awakenings did the coin land heads?
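A rough Monte Carlo sketch of these two selection procedures (assuming a fair coin and the usual one-awakening-on-heads, two-awakenings-on-tails setup); the frequencies should come out near 1/2 and 1/3 respectively:

    import random

    N = 100_000
    experiments = [random.choice(["heads", "tails"]) for _ in range(N)]

    # 1) Pick an experiment at random: in what fraction did the coin land heads?
    per_experiment = sum(coin == "heads" for coin in experiments) / N

    # 2) Pick an awakening at random: in what fraction did the coin land heads?
    awakenings = []
    for coin in experiments:
        awakenings.append((coin, "monday"))
        if coin == "tails":
            awakenings.append((coin, "tuesday"))
    per_awakening = sum(coin == "heads" for coin, _day in awakenings) / len(awakenings)

    print(f"per-experiment frequency of heads: {per_experiment:.3f}")  # ~0.500
    print(f"per-awakening frequency of heads:  {per_awakening:.3f}")   # ~0.333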

Replies from: justinpombrio
comment by justinpombrio · 2017-06-25T04:20:23.946Z · LW(p) · GW(p)

In other words, when you say "P(per-experiment event)" the "per-experiment" is really describing the P, not just the event.

My understanding is that P depends only on your knowledge and priors. If so, what is the knowledge that differs between per-experiment and per-awakening? Or am I wrong about that?

That doesn't help. "Coin landed heads" can still be used to describe either a per-experiment or per-awakening situation:

Ok, yes, agreed.

Replies from: Jiro
comment by Jiro · 2017-06-27T17:17:46.146Z · LW(p) · GW(p)

My understanding is that P depends only on your knowledge and priors.

A per-experiment P means that P would approach the number you get when you divide the number of successes in a series of experiments by the number of experiments. Likewise for a per-awakening P. You could phrase this as "different knowledge" if you wish, since you know things about experiments that are not true of awakenings and vice versa.

comment by entirelyuseless · 2017-06-19T04:16:33.795Z · LW(p) · GW(p)

I'm confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she's interviewed each time she wakes up.

This is a SIA idea, and it's wrong. There's nothing to condition on because there's no new information, just as there's no new information when you find that you exist. You can never find yourself in a position where you don't exist or where you're not awake (assuming awake here is the same as being conscious.)

Replies from: justinpombrio
comment by justinpombrio · 2017-06-19T20:15:14.599Z · LW(p) · GW(p)

This is a SIA idea, and it's wrong.

Please don't make statements like this unless you really understand the other person's position (can you guess how I will respond?). For instance, notice that I haven't ever said that the halfer position is wrong.

There's nothing to condition on because there's no new information

This is just a restatement of SSA. By SIA there is new information, since you're more likely to be one of a larger set of people.

just as there's no new information when you find that you exist

Sure there is! Flip a coin and kill Beauty on tails. Now ask her what the coin flip said: she learns from the fact that she's alive that it landed heads.

I understand that SSA is a consistent position, and I understand that it matches your intuition if not mine. I'm curious how you'd respond to the question I asked above. It's in the post with "So your probabilities aren't grounded in frequency&utility."

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-20T14:54:12.032Z · LW(p) · GW(p)

For instance, notice that I haven't ever said that the halfer position is wrong.

And I didn't say (or even mean to say) that your position is wrong. I said the SIA idea is wrong.

Sure there is! Flip a coin and kill Beauty on tails. Now ask her what the coin flip said: she learns from the fact that she's alive that it landed heads.

You can learn something from the fact that you are alive, as in cases like this. But you don't learn anything from it in the cases where the disagreement between SSA and SIA comes up. I'll say more about this in replying to the other comments, but for the moment, consider this thought experiment:

Suppose that you wake up tomorrow in your friend Tom's body and with his memories and personality. He wakes up tomorrow in yours in the same way. The following day, you swap back, and so it goes from day to day.

Notice that this situation is empirically indistinguishable from the real world. Either the situation is meaningless, or you don't even have a way to know it isn't happening. The world would seem the same to everyone, including to you and him, if it were the case.

So consider another situation: you don't wake up tomorrow at all. Someone else wakes up in your place with your memories and personality.

Once again, this situation is either meaningless, or no one, including you, has a way to know it didn't already happen yesterday.

So you can condition on the fact that you woke up this morning, rather than not waking up at all. We can conclude from this, for example, that the earth was not destroyed. But you cannot condition on the fact that you woke up this morning instead of someone else waking up in your place; since for all you know, that is exactly what happened.

The application of this to SSA and SIA should be evident.

comment by whpearson · 2017-06-14T14:54:04.676Z · LW(p) · GW(p)

It seems (understandably) that to get people to take your ideas about intelligence seriously, there are incentives to actually build AI and show it doing things.

Then people will try and make it safe.

Can we do better at spotting ideas about intelligence that might differ from current AI, and at engaging with those ideas before they are instantiated?

Replies from: Lumifer
comment by Lumifer · 2017-06-14T15:18:07.355Z · LW(p) · GW(p)

ideas about intelligence that might be different compared to current AI

What kind of things are you thinking about? Any examples?

Replies from: whpearson
comment by whpearson · 2017-06-14T19:39:23.622Z · LW(p) · GW(p)

You can, hypothetically, build some pretty different interacting systems of ML programs inside the VM I've been building, which has not gotten a lot of interest. I've been thinking about it a fair bit recently.

But I think the general case still stands. How would someone who has made an AGI breakthrough convince the AGI risk community without building it?

Replies from: Lumifer
comment by Lumifer · 2017-06-14T20:42:25.025Z · LW(p) · GW(p)

How would someone who has made an AGI breakthrough convince the AGI risk community without building it?

In the usual way someone who has made a breakthrough convinces others. Reputation helps. Whitepapers help. Toy examples help. Etc., etc.

I don't understand the context, however. How does that someone know it's a breakthrough without testing it out? And why would he be so concerned with the opinion of the AI risk community (which isn't exactly held in high regard by most working AI researchers)?

Replies from: whpearson
comment by whpearson · 2017-06-15T16:27:37.644Z · LW(p) · GW(p)

Okay. A good metaphor might be the development of the atomic bomb. Lots of nuclear physicists thought that nuclear reactions couldn't be used for useful energy (e.g. Rutherford). Leo Szilard had the insight that you could do a chain reaction and that this might be dangerous. He did not build the bomb himself (he could not; fission had not yet been discovered) and assigned the patent to the Admiralty to keep it secret.

But he managed to convince other high-profile physicists that it might be dangerous without publicizing it too much (no whitepapers, etc.). He had the reputation, etc., and the physics of these things was far firmer than our wispy grasp of intelligence.

So that worked.

But how will it work for our hypothetical AI researcher who has the breakthrough, if they are not part of the in-group of AI risk people? They might be Chinese and not have a good grasp of English. They are motivated to try to get word of their breakthrough to, say, Elon Musk (or another influential, concerned person or group that might be able to develop it safely), but they want to keep the idea as secret as possible and do not have a pathway for reaching them.

Replies from: Lumifer
comment by Lumifer · 2017-06-15T17:17:21.983Z · LW(p) · GW(p)

One issue is that you're judging the idea of a chain reaction as a breakthrough post factum. At the time, it was just a hypothesis, interesting but unproven. I don't know the history of nuclear physics well enough, but I suspect there were other hypotheses, also quite interesting, which didn't pan out and we forgot about them.

A breakthrough idea is by definition weird and doesn't fit into the current paradigm. At the time it's proposed, it is difficult to separate real breakthroughs from unworkable craziness unless you can demonstrate that your breakthrough idea actually works in reality. And if you can't -- well, absent a robust theoretical proof, you will just have to be very convincing: we're back to the usual methods mentioned above (reputation, etc.).

Claimed breakthroughs sometimes are real and sometimes are not (e.g. cold fusion). I suspect the base rates will create a prior not favourable to accepting a breakthrough as real.

Replies from: whpearson
comment by whpearson · 2017-06-15T18:25:49.961Z · LW(p) · GW(p)

It was interesting enough that Einstein sent a letter to the president about it, which was taken seriously, before the bomb was built. I recommend reading up on it; it is a very interesting time in history.

It would be interesting to know how many other potential breakthroughs got that treatment. And how can we make sure that the right ones, the ones actually going to be made, get that treatment?

comment by MrMind · 2017-06-13T07:54:06.999Z · LW(p) · GW(p)

Has there been / will there be in the future / could there be a condition where transforming atoms is cheaper than transforming bits? Or is it a universal law that emulation is always developed before nanotechnology?

Replies from: whpearson
comment by whpearson · 2017-06-13T13:52:09.030Z · LW(p) · GW(p)

Flippant answer. Nanotech has come first! And we are made of it.

I'm not quite sure what you are getting at here. Are you asking whether it will be possible to recreate a human with nanotech more easily than to emulate one?

I ask this because not all atoms are equal. It is somewhat hard to pry two nitrogen atoms apart but easier to pry two oxygen atoms apart. So the energy cost to make the thing depends a lot on what your feedstock is.

Then there is the question of whether running the recreated human is cheaper than running an emulation, which is separate from the cost of recreating a human vs. emulating one. It depends on the amount of fidelity you require. If there are strange interactions between neurons mediated by electric fields that you want to capture, or you care about the exact way that the brain interacts with certain drugs, then I think recreation is probably going to be a lot cheaper.

comment by cousin_it · 2017-06-12T14:42:28.866Z · LW(p) · GW(p)

Is this true for anyone: "If you offered me X right now, I'd accept the offer, but if you first offered to let me precommit against taking X, I'd accept that offer and escape the other one"? For which values of X? Do you think most people have some value of X that would make them agree?

Replies from: Dagon, ChristianKl, whpearson, MrMind, Screwtape
comment by Dagon · 2017-06-12T15:00:36.725Z · LW(p) · GW(p)

Not exactly right now, but I've called in sick for work when I would have gone in with sufficient precommitment.

edit: for clarity - this is a decision that I would prefer to have escaped the night before, and the day after. A number of things I lump into the "akrasia" topic fit this pattern.

comment by ChristianKl · 2017-06-13T16:41:25.684Z · LW(p) · GW(p)

In many instances, a person who's on a diet might agree if X is "I give you a piece of cake."

I'm personally quite good at inhibiting myself from actions I don't want to take but less good at getting myself to do uncomfortable things, so there's no example that comes to mind immediately.

In general, I think that cases where System 1 wants to accept the offer but System 2 wants to reject it provide material for X. I would be surprised if you couldn't find examples that hold for most people.

comment by whpearson · 2017-06-13T13:55:31.319Z · LW(p) · GW(p)

Do they have to be examples of willingly yielding?

E.g. if there were a malign superintelligence in the box that I had to interact with, then I would probably yield to letting it out, but if I could, I would precommit to not letting it out.

Replies from: cousin_it
comment by cousin_it · 2017-06-13T14:03:33.210Z · LW(p) · GW(p)

Good example. "I would yield to a mind hack right now, but I would precommit to not yielding to a mind hack right now." Are there any simpler examples, or specific mind hacks that would work on you?

Replies from: Lumifer
comment by Lumifer · 2017-06-13T15:06:10.033Z · LW(p) · GW(p)

Hmmm... would you precommit to not giving an armed robber your wallet? Would it be a wise precommitment?

Replies from: None
comment by [deleted] · 2017-06-13T15:51:38.665Z · LW(p) · GW(p)

If the robber knew that, then such a precommitment means you never have to face them, yes?

Replies from: Lumifer
comment by Lumifer · 2017-06-13T15:55:19.043Z · LW(p) · GW(p)

No. You assume the robber is a rational homo economicus. Hint: in most cases this is not true.

Besides, this.

comment by MrMind · 2017-06-13T07:37:33.045Z · LW(p) · GW(p)

Could you rewrite it more clearly? I'm not sure exactly what you're asking... Besides, why would you offer to let me precommit against X? With what incentive?

Replies from: cousin_it
comment by cousin_it · 2017-06-13T09:24:48.870Z · LW(p) · GW(p)

I'm looking for examples of temptations that you would yield to, given the chance, and precommit against, given the chance. Basically things that make you torn and confused.

Replies from: MrMind
comment by MrMind · 2017-06-13T12:08:11.923Z · LW(p) · GW(p)

Oh well, that's easy:

  • snoozing
  • snacking
  • slacking at work
  • watching too much youtube
  • etc.
Replies from: cousin_it
comment by cousin_it · 2017-06-13T12:47:43.091Z · LW(p) · GW(p)

Note that the question tries to avoid the time inconsistency angle. You'd yield to one unit of X right now, given the chance, and you'd precommit against yielding to one unit of X right now, given the chance. Do any of your examples work like that?

Replies from: MrMind, entirelyuseless
comment by MrMind · 2017-06-14T07:01:02.332Z · LW(p) · GW(p)

Sometimes they do, yes. Not always though. There are times when I would like not to do something but some other subsystem is in control.

comment by entirelyuseless · 2017-06-13T13:51:04.567Z · LW(p) · GW(p)

I think some people would precommit to never telling lies, if they had the chance, but at the same time, they would lie in the typical Nazi at the door situation, given that they in fact cannot precommit. This has nothing to do with time inconsistency, because after you have lied in such a situation, you don't find yourself wishing you had told the truth.

comment by Screwtape · 2017-06-12T20:44:34.141Z · LW(p) · GW(p)

I'm not sure I'm parsing the question correctly. Attempting to set X = five dollars, I get "If you offered me five dollars right now, I'd accept the offer, but if you first offered to let me precommit against taking five dollars, I'd accept that offer and escape the other one." Precommitting against taking five dollars seems strange.

My best interpretation is "If you offered me X right now, I'd accept the offer, but if you first offered me Y to precommit against taking X, I'd accept that offer and later wouldn't take X." If that interpretation is close enough, then yes. If you offered me the opportunity to play Skyrim all day right now, I'd accept the offer, but if you first offered me a hundred dollars to precommit against playing Skyrim all day, I'd accept that offer and later wouldn't take the opportunity to play Skyrim all day. That seems too straightforward though, so I don't think I'm interpreting the question right.

comment by Thomas · 2017-06-12T05:37:24.681Z · LW(p) · GW(p)

A new problem

Replies from: cousin_it
comment by cousin_it · 2017-06-12T11:08:33.202Z · LW(p) · GW(p)

I think this article shows that you probably won't get a crisp answer.

Replies from: Luke_A_Somers, Thomas
comment by Luke_A_Somers · 2017-06-12T15:09:19.462Z · LW(p) · GW(p)

That's more about the land moving in response to the changes in ice, and a tiny correction for changing the gravitational force previously applied by the ice.

This is (probably?) about the way the water settles around a spinning oblate spheroid.

comment by Thomas · 2017-06-12T13:05:24.051Z · LW(p) · GW(p)

This article is pretty much bullshit.

Replies from: cousin_it
comment by cousin_it · 2017-06-12T16:07:40.449Z · LW(p) · GW(p)

Hmm, yeah, you're right. I got hypnotized by the yale.edu address.