Open thread, Sep. 26 - Oct. 02, 2016

post by MrMind · 2016-09-26T07:41:51.328Z · LW · GW · Legacy · 90 comments

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

90 comments

Comments sorted by top scores.

comment by WhySpace_duplicate0.9261692129075527 · 2016-09-27T02:06:25.460Z · LW(p) · GW(p)

Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

comment by [deleted] · 2016-09-28T03:20:27.195Z · LW(p) · GW(p)
comment by Houshalter · 2016-09-28T17:33:31.624Z · LW(p) · GW(p)

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.

Many refugees may never be able to return home, and that reality needs to be acknowledged and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.

From City of Thorns:

The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.

From reddit:

Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos Jóvenes.

Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state runs in power and water.)

Zaatari refugee camp in Jordan, opened 4 years ago, seems to be well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.

comment by PECOS-9 · 2016-09-27T04:50:58.684Z · LW(p) · GW(p)

Anybody have recommendations of a site with good summaries of the best/most actionable parts from self-help books? I've found Derek Sivers' book summaries useful recently and am looking for similar resources. I find that most self-help books are 10 times as long as they really need to be, so these summaries are really nice, and let me know whether it may be worth it to read the whole book.

Replies from: ChristianKl
comment by ChristianKl · 2016-09-27T16:11:09.449Z · LW(p) · GW(p)

I frequently hear people saying that self-help books are too long, but I don't think that's really true. Changing deep patterns of how you deal with situations is seldom accomplished by reading a short summary of a position.

Replies from: Viliam
comment by Viliam · 2016-10-02T19:40:33.841Z · LW(p) · GW(p)

Self-help authors keep writing longer books, self-help customers keep learning how to read faster or switch to reading summaries on third-party websites...

Replies from: ChristianKl
comment by ChristianKl · 2016-10-02T21:20:10.073Z · LW(p) · GW(p)

Books get written for different purposes and used by different people for different reasons. At the same time, books also get read by different people for different reasons.

There's a constituency that reads self-help books as insight porn, but there are other people who want to delve deep. I remember a person who read Tony Robbins's 500-page book twenty times, and every time he read it he discovered something new.

comment by DataPacRat · 2016-09-26T13:45:17.724Z · LW(p) · GW(p)

Music to be resurrected to?

Assume that you are going to die, and some years later, be brought back to life. You have the opportunity to request, ahead of time, some of the details of the environment you will wake up in. What criteria would you use to select those details; and which particular details would meet those criteria?

For example, you might wish a piece of music to be played that is highly unlikely to be played in your hearing in any other circumstances, and is extremely recognizable, allowing you the opportunity to start psychologically dealing with your new circumstances before you even open your eyes. Or you may just want a favourite playlist going, to help reassure you. Or you may want to try to increase the odds that a particular piece survives until then. Or you may wish to lay the foundation for a practical joke, or a really irresistible one-liner.

Make your choice!

Replies from: username2, Manfred, ThoughtSpeed, None
comment by username2 · 2016-09-28T08:20:00.853Z · LW(p) · GW(p)

Nicolas Jaar - Space Is Only Noise, which would start around the time I regain sound perception.

comment by Manfred · 2016-09-26T16:58:00.108Z · LW(p) · GW(p)

Mahler's 2nd symphony, for reasons including the obvious.

comment by ThoughtSpeed · 2016-09-28T08:16:59.610Z · LW(p) · GW(p)

I think my go-to here would be Low of Solipsism from Death Note. As an aspiring villain being resurrected, I can't think of anything more dastardly.

Replies from: MrMind
comment by MrMind · 2016-09-28T13:18:30.592Z · LW(p) · GW(p)

That's interesting, you think of yourself as an aspiring villain? What does that entail?

comment by [deleted] · 2016-09-27T14:02:00.351Z · LW(p) · GW(p)

"Everything in Its Right Place" by Radiohead would capture the moment well; it's soothing yet disorienting, and a tad ominous.

comment by Sable · 2016-09-26T10:08:43.419Z · LW(p) · GW(p)

I was at the vet a while back; one of my dogs wasn't well (she's better now). The vet took her back, and after waiting for a few minutes, the vet came back with her.

Apparently there were two possible diagnoses: let's call them x and y, as the specifics aren't important for this anecdote.

The vet specifies that, based on the tests she's run, she cannot tell which diagnosis is accurate.

So I ask the vet: which diagnosis has the higher base rate among dogs of my dog's age and breed?

The vet gives me a funny look.

I rephrase: about how many dogs of my dog's breed and age get diagnosis x versus diagnosis y, without running the tests you did?

The vet gives me another funny look, and eventually replies: that doesn't matter.

My question for Lesswrong: Is there a better way to put this? Because I was kind of speechless after that.

Replies from: Houshalter, WalterL, ChristianKl
comment by Houshalter · 2016-09-26T17:08:24.563Z · LW(p) · GW(p)

"Base rate" is statistics jargon. I would ask something like "which disease is more common?" And then if they still don't understand, you can explain that its probably the disease that is most common, without explaining Bayes rule.

Replies from: g_pepper
comment by g_pepper · 2016-09-26T17:26:10.648Z · LW(p) · GW(p)

Mightn't the vet have already factored the base rate in? Suppose x is the more common disease, but y is more strongly indicated by the diagnostics. In such a case it seems like the vet could be justified in saying that she cannot tell which diagnosis is accurate. For you to then infer that the dog most likely has x just because x is the more common disease would be putting undue weight on the Bayesian priors.
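
(For concreteness, here is a minimal sketch of how a base rate and an ambiguous test result combine under Bayes' rule. All numbers are made up for illustration; they are not claims about any real disease.)

```python
# Hypothetical numbers only: combining a base rate with ambiguous test evidence.
base_rate_x = 0.09        # P(disease X) for dogs of this breed/age (made up)
base_rate_y = 0.01        # P(disease Y) for dogs of this breed/age (made up)
likelihood_x = 0.4        # P(these test results | X) (made up)
likelihood_y = 0.8        # P(these test results | Y) (made up)

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
posterior_odds = (base_rate_x / base_rate_y) * (likelihood_x / likelihood_y)
print(f"Odds of X over Y, given the tests: {posterior_odds:.1f} : 1")
# Here the tests favour the rarer disease Y, but the base rate still
# leaves X about 4.5 times as likely overall.
```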

comment by WalterL · 2016-09-26T14:22:12.034Z · LW(p) · GW(p)

So, it seems like there could be 2 things going on here:

1: Maybe, in your Vet's mind, she is telling you "We can't tell if this is A or B", and you are asking "But which is it?", and by refusing to answer she is doubling down on the whole "We don't know A or B" situation.

Like, I know what you mean is what you actually said, but normal people don't say that, and the Vet is trying to reiterate that you do not know which of A or B this is. She is trying to avoid saying "mostly A", and you saying "ok, treat A", and then the dog dies of B, and you are like "You said A, you fraud, I'm suing you!".

2: The vet honestly doesn't know the answer to your question. She is a person who executes the procedures in her manuals, not a person who follows the news about every animal's frequent ailments. In her world, if an animal shows A you do X, and if an animal shows B you do Y. Your question is outside her realm of curiosity.

As for another way to phrase this, I'd go with "Well, which do you think it is, A or B?". The vet's answer ought to be informed by her experience, even if it isn't explicitly phrased as "well, mostly this is what dogs suffer from". If she reiterates that there is no way to know, I'd figure this was the first case, a CYA situation, and stress that I wouldn't be mad if she was wrong.

Replies from: Brillyant
comment by Brillyant · 2016-09-26T18:24:30.731Z · LW(p) · GW(p)

The vet honestly doesn't know the answer to your question.

I'd suggest this is likely. Assuming both ailments are relatively common and not obviously known to be rare, I'd bet the vet just doesn't know the data necessary to discuss base rates in a meaningful way that would help determine X or Y.

Side note: My experience is that sometimes the tests needed to help narrow down illnesses in animals are prohibitively expensive.

comment by ChristianKl · 2016-09-28T12:27:11.324Z · LW(p) · GW(p)

A straightforward question would be: "What's the probability for diagnosis A and what's the probability for diagnosis B?".

Unfortunately you are likely out of luck because your vet doesn't know basic statistics to give you a decent answer.

comment by smk · 2016-09-26T23:50:14.983Z · LW(p) · GW(p)

Has Sam Harris stated his opinion on the orthogonality thesis anywhere?

Replies from: Lightwave
comment by Lightwave · 2016-09-27T09:03:08.436Z · LW(p) · GW(p)

He's writing an AI book together with Eliezer, so I assume he's on board with it.

Replies from: ThoughtSpeed
comment by ThoughtSpeed · 2016-09-28T08:08:45.213Z · LW(p) · GW(p)

Is that for real or are you kidding? Can you link to it?

Replies from: Lightwave
comment by Lightwave · 2016-09-28T10:05:12.903Z · LW(p) · GW(p)

He's mentioned it on his podcast. It won't be out for another 1.5-2 years I think.

Also Sam Harris recently did a TED talk on AI, it's now up.

comment by MrMind · 2016-09-26T14:32:40.727Z · LW(p) · GW(p)

Who are the current moderators?

Replies from: ChristianKl
comment by ChristianKl · 2016-09-26T22:21:24.093Z · LW(p) · GW(p)

I think Elo and Nancy have moderator rights. Various older people who don't frequent the website like EY also have moderator rights.

Replies from: Elo
comment by Elo · 2016-09-27T02:36:16.229Z · LW(p) · GW(p)

yes

comment by gwern · 2016-09-28T23:09:54.516Z · LW(p) · GW(p)

Continuing my catnip research, I'm preparing to run a survey on gwern.net & Mechanical Turk about catnip responses. I have a draft survey done and would appreciate any feedback about brokenness or confusing questions: https://docs.google.com/forms/d/e/1FAIpQLSeT3GIg-pSwzDFAfNaqE-MzfJEtD0HghN_Vma68OZJtz1Pztg/viewform

Replies from: gwern
comment by gwern · 2016-09-29T23:34:19.302Z · LW(p) · GW(p)

OK, no complaints so far, so I'm just going to launch it. Consider the survey now live. Did I mention that there will be cake?

Replies from: Elo, Elo, Lumifer
comment by Elo · 2016-09-30T00:50:31.918Z · LW(p) · GW(p)

cat weight might be relevant, cat current age, cat body shape (fat/skinny), description of cat's response to catnip.

comment by Elo · 2016-09-30T00:48:06.956Z · LW(p) · GW(p)

I am no expert, but I wonder if you could run a Monte Carlo simulation on your expected responses. Do the questions you ask give you enough information to yield results?

Just not sure if your questions are homing in on the right information. Chances are there are people who know better than me.
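
(A minimal sketch of the kind of check being suggested, in Python; the sample size and the true response rate are assumptions for illustration, not gwern's actual numbers.)

```python
import random

def simulate_survey(n_respondents=100, true_rate=2/3):
    """Simulate one survey and return the estimated catnip response rate."""
    hits = sum(random.random() < true_rate for _ in range(n_respondents))
    return hits / n_respondents

# Run many simulated surveys to see how tightly the estimate clusters.
estimates = sorted(simulate_survey() for _ in range(10_000))
low, high = estimates[250], estimates[9750]   # rough central 95% band
print(f"With n=100 respondents, the estimate typically falls in [{low:.2f}, {high:.2f}]")
```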

Replies from: gwern
comment by gwern · 2016-09-30T01:06:17.797Z · LW(p) · GW(p)

If I get at least 100 responses, then that will help narrow down the primary question of overall catnip response rate adequately in combination with the existing meta-analysis. I expect to get at least that many, and in the worst case I do not, I will simply buy the survey responses on Mechanical Turk.

The secondary question, Japanese/Australian catnip rates vs the rest of the world, I do not expect to get enough responses since the power analysis of the 60% vs 90% (the current average vs Japanese estimates) says I need at least 33 Japanese respondents for the basic comparison; however, Mechanical Turk allows you to limit workers by country, so my plan is to, once I see how many responses I get to the regular survey, launch country-limited surveys to get the necessary sample size. I can get ~165 survey responses with a decent per-worker reward for ~$108, so split over Japan/Korea/Australia, that ought to be adequate for the cross-country comparisons. (Japan, because that's where the anomaly is; Korea, to see if the anomaly might be due to a bottleneck in the transmission of cats from Korea to Japan back in 600-1000 CE; Australia, because a guy on Twitter told me Australian cats have very high catnip response rates; and I hopefully will get enough American/etc country responses to not need to pay for more Turk samples from other countries.) Of course, if the results are ambiguous, I will simply collect more data, as I'm under no time limits or anything.

For the tertiary question, response rates to silvervine/etc, I am not sure that it is feasible to do surveys on them. There is not much mention of them online compared to catnip, and they can be hard to get. My best guess is that of the cat owners who have used catnip, <5% of them have ever tried anything else, in which case even if I get 200 responses, I'll only have 25 responses covering the others, which will give very imprecise estimates and not allow for any sort of modeling of response rates conditional on being catnip immune or factor analysis. If I'm right and the survey is unable to answer the question without recruiting thousands of cat owners, then that tells me I will have no choice but to continue experimenting myself and contact the local pound & animal rescue organizations asking if I can use my battery of substances on their sets of cats.

As for your question suggestions: weight/current-age/body-shape-fatness haven't been suggested in the catnip literature as moderators, current age seems like it should be irrelevant, and asking for a free response description of the catnip response is really burdensome on the user compared to multiple-choice or checkboxes (survey guidelines emphasize as few free-responses as possible, no more than 1 or 2) and the catnip response is pretty stereotypical even across species so there wouldn't be much point.
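
(A rough simulation of the 60%-vs-90% country comparison described above, in Python. The sample sizes and rates are illustrative assumptions; the point is only to show how such a power check can be run.)

```python
import random

def detects_difference(n_japan=33, n_other=150, p_japan=0.9, p_other=0.6, z_crit=1.96):
    """One simulated survey, tested with a two-proportion z-test at roughly the 5% level."""
    x1 = sum(random.random() < p_japan for _ in range(n_japan))
    x2 = sum(random.random() < p_other for _ in range(n_other))
    p1, p2 = x1 / n_japan, x2 / n_other
    pooled = (x1 + x2) / (n_japan + n_other)
    se = (pooled * (1 - pooled) * (1 / n_japan + 1 / n_other)) ** 0.5
    return se > 0 and abs(p1 - p2) / se > z_crit

# Fraction of simulated surveys in which the difference is detected (statistical power).
power = sum(detects_difference() for _ in range(5_000)) / 5_000
print(f"Estimated power with 33 Japanese respondents: {power:.0%}")
```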

comment by Lumifer · 2016-09-30T14:38:27.278Z · LW(p) · GW(p)

Did I mention that there will be cake?

Oh-oh...

comment by [deleted] · 2016-09-29T00:06:23.602Z · LW(p) · GW(p)

I feel the onset of hypomania. Please bear with me if I post dumb stuff in the near future.

Replies from: None
comment by [deleted] · 2016-09-29T06:58:58.866Z · LW(p) · GW(p)

I'm going to contain anything I post to this thread, just in case it's nonsense. I was just thinking of asking: Is it rational to 'go to Belgium', as they say - to commit suicide as a preventative measure to avoid suffering?

Replies from: MrMind, Dagon
comment by MrMind · 2016-09-30T08:05:19.760Z · LW(p) · GW(p)

Only in very extreme cases. Have you looked into every alternative?

Replies from: None
comment by [deleted] · 2016-10-02T02:44:14.120Z · LW(p) · GW(p)

I suppose I'm up to date on the alternatives. New alternatives pop up every so often but it's pretty frustrating tracking depression research, and opportunities for short hedonistic bliss that end in death.

comment by Dagon · 2016-09-29T14:01:17.995Z · LW(p) · GW(p)

I suspect there are cases where a perfectly rational, knowledgeable agent could prefer the suffering of death over the suffering of continued life.

Agents with less calculating power and with less predictive power over their possible futures (say, for instance, humans) should have an extremely low prior about this, and it's hard to imagine the evidence that would bump it into the positive.

Replies from: MrMind
comment by MrMind · 2016-09-30T08:04:51.229Z · LW(p) · GW(p)

The problem with depression is that it skews your entire ability to think clearly and rationally about the future. You're no longer "a rational agent", but "a depressed agent", and it's really bad.
From an outside view, of course, only very extreme pain or the certainty of inevitable decline is worth the catastrophic cost of death, but from the point of view of a depressed person, the entire future is bad, black, and meaningless, and death often seems the natural way out.

Replies from: Dagon
comment by Dagon · 2016-09-30T16:09:33.996Z · LW(p) · GW(p)

Absolutely! Depression changes one's priors and one's perception of evidence, making a depressed agent even further from rational than non-depressed humans (who are even so pretty far from purely rational).

That said, all agents must make choices - that's why we use the term "agent". And even depressed agents can analyze their options using the tools of rationality, and (I hope) make better choices by doing so. It does require more care and use of the outside view to somewhat correct for depression's compromised perceptions.

Also, I'm very unsure what the threshold is where an agent would be better off abandoning attempts to rationally calculate and just accept their group's deontological rules. It's conceivable that if you don't have strong outside evidence that you're top-few-percent of consequentialist predictors of action, you should follow rules rather than making decisions based on expected results. I don't personally like that idea, and it doesn't stop me ignoring rules I don't like, but I acknowledge that I'm probably wrong in that.

Specifically, "Suicide: don't do it" seems like a rule to give a lot of weight to, as the times you're most likely tempted are the times you're estimating the future badly, and those are the times you should give the most weight to rules rather than rationalist calculations.

comment by username2 · 2016-09-26T08:27:13.411Z · LW(p) · GW(p)

In the last year, someone mentioned a workout book on the #lesswrong IRC channel. I want to start exercising in my room, and that book seemed, at the time, the best place for me to start, so I am looking for it.

Help with finding the book or alternatives appreciated. Here's what I remember about it:

  • the author is someone who served time in jail, or is currently serving
  • the person who talked about the book said that it emphasizes keeping the body strong and healthy without the need for weights
  • the exercises use limited space

I can't remember more right now but I will edit the post if I do.

Replies from: 9eB1, Tommi_Pajala
comment by 9eB1 · 2016-09-26T15:06:19.462Z · LW(p) · GW(p)

I have read Convict Conditioning. The programming in that book (that is, the way the overall workout is structured) is honestly pretty bad. I highly recommend doing the reddit /r/bodyweightfitness recommended routine.

  1. It's free.

  2. It has videos for every exercise.

  3. It is a clear and complete program that actually allows for progression (the convict conditioning progression standards are at best a waste of time) and keeps you working out in the proper intensity range for strength.

  4. If you are doing the recommended routine you can ask questions at /r/bodyweightfitness.

The main weakness of the recommended routine is the relative focus of upper body vs. lower body. Training your lower body effectively with only bodyweight exercises is difficult though. If you do want to use Convict Conditioning, /r/bodyweightfitness has some recommended changes which will make it more effective.

Replies from: MrMind
comment by MrMind · 2016-09-27T07:09:09.555Z · LW(p) · GW(p)

This is awesome, thank you!

comment by Tommi_Pajala · 2016-09-26T09:10:26.262Z · LW(p) · GW(p)

Sounds like Convict Conditioning to me.

I haven't read it myself, but some friends have praised the book and the exercises included.

Replies from: MrMind
comment by MrMind · 2016-09-26T09:42:32.114Z · LW(p) · GW(p)

I've read it, still practice it and I recommend it.

The only piece of 'equipment' you'll need is a horizontal bar to do pullups (a branch or anything that supports your weight will work just as well).

comment by Ozyrus · 2016-09-26T23:25:21.591Z · LW(p) · GW(p)

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its value function, even writing some excerpts about this topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that it is certainly theoretically possible to modify it, but I could not come to any conclusion about whether it would want to do so. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric point of view, but I don't really know if anthropomorphization will work here.

Replies from: pcm, scarcegreengrass, scarcegreengrass, WalterL, TheAncientGeek, UmamiSalami, username2
comment by pcm · 2016-09-27T15:04:58.135Z · LW(p) · GW(p)

See ontological crisis for an idea of why it might be hard to preserve a value function.

comment by scarcegreengrass · 2016-09-28T19:12:01.251Z · LW(p) · GW(p)

I thought of another idea. If the AI's utility function includes time discounting (like human utility functions do), it might change its future utility function.

Meddler: "If you commit to adopting modified utility function X in 100 years, then i'll give you this room full of computing hardware as a gift."

AI: "Deal. I only really care about this century anyway."

Then the AI (assuming it has this ability) sets up an irreversible delayed command to overwrite its utility function 100 years from now.
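
(A toy calculation of why such a trade could look attractive to a discounting agent; the 5%-per-year discount rate is an arbitrary assumption.) With exponential discounting, a reward R delivered t years from now is weighted by

```latex
\gamma^{t} R, \qquad \gamma = 0.95,\ t = 100:\quad 0.95^{100} \approx 0.006,
```

so consequences a century away contribute less than 1% of their face value to the agent's current decision.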

comment by scarcegreengrass · 2016-09-28T19:04:13.289Z · LW(p) · GW(p)

Speaking contemplatively rather than rigorously: In theory, couldn't an AI with a broken or extremely difficult utility function decide to tweak it to a similar but more achievable set of goals?

Something like ... its original utility function is "First goal: Ensure that, at noon every day, -1 * -1 = -1. Secondary goal: Promote the welfare of goats." The AI might struggle with the first (impossible) task for a while, then reluctantly modify its code to delete the first goal and remove itself from the obligation to do pointless work. The AI would be okay with this change because it would produce more total utility under both functions.

Now, I know that one might define 'utility function' as a description of the program's tendencies, rather than as a piece of code ... but I have a hunch that something like the above self-modification could happen with some architectures.

comment by WalterL · 2016-09-28T13:07:16.735Z · LW(p) · GW(p)

On the one hand, there is no magical field that tells a code file whether the modifications coming into it are from me (the human programmer) or from the AI whose values that code file encodes. So, of course, if an AI can modify a text file, it can modify its source.

On the other hand, most likely the top goal on that value system is a fancy version of "I shall double never modify my value system", so it shouldn't do it.

comment by TheAncientGeek · 2016-09-28T12:18:25.558Z · LW(p) · GW(p)

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its value function, even writing some excerpts about this topic. Is it theoretically possible?

Is it possible for a natural agent? If so, why should it be impossible for an artificial agent?

Are you thinking that it would be impossible to code in software, for agents of any intelligence? Or are you saying sufficiently intelligent agents would be able and motivated to resist any accidental or deliberate changes?

With regard to the latter question, note that value stability under self-improvement is far from a given... the Löbian obstacle applies to all intelligences... the carrot is always in front of the donkey!

https://intelligence.org/files/TilingAgentsDraft.pdf

comment by UmamiSalami · 2016-09-27T05:23:29.368Z · LW(p) · GW(p)

See Omohundro's paper on convergent instrumental drives

comment by username2 · 2016-09-27T08:57:42.136Z · LW(p) · GW(p)

Depends entirely on the agent.

comment by morganism · 2016-10-01T22:35:13.307Z · LW(p) · GW(p)

Game theory research reveals fragility of common resources

"In many applications, people decide how much of a resource to use, and they know that if they use a certain amount and if others use a certain amount they are going to get some return, but at the risk that the resource is going to fail,"

https://www.sciencedaily.com/releases/2016/09/160929143603.htm

http://www.sciencedirect.com/science/article/pii/S0899825616300458

Replies from: ChristianKl
comment by ChristianKl · 2016-10-07T19:04:09.824Z · LW(p) · GW(p)

under certain theoretical conditions

comment by Arielgenesis · 2016-10-01T03:01:57.347Z · LW(p) · GW(p)

I just thought of this 'cute' question and not sure how to answer it.

The sample space of an empirical statement is {True, False}. Given an empirical statement, one would then assign a certain prior probability 0 < p < 1 to TRUE and one minus that to FALSE. One would not assign p = 1 or p = 0, because that wouldn't allow belief updating.
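
(For reference, the standard argument behind that last sentence, straight from Bayes' theorem:)

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
\;=\; \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \lnot H) \cdot 0} \;=\; 1
\qquad \text{when } P(H) = 1,
```

so no evidence E with P(E) > 0 can ever move a probability of exactly 1 (and symmetrically for 0).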

For example: Santa Claus is real.

I suppose most people in LW will assign a very small p to that statement, but not zero. Now my question is, what is the prior probability value for the following statement:

Prior probability cannot be set to 1.

Replies from: ChristianKl, Gram_Stone
comment by ChristianKl · 2016-10-01T18:15:54.427Z · LW(p) · GW(p)

"Prior probability cannot be set to 1" is itself not an empirical statement. It's a question about modelling.

comment by Gram_Stone · 2016-10-01T03:19:47.272Z · LW(p) · GW(p)

Actual numbers are never easy to come up with in situations like these, but some of the uncertainty is in whether or not priors of zero or one are bad, and some of it's in the logical consequences of Bayes' Theorem with priors of zero or one. The first component doesn't seem especially different from other kinds of moral uncertainty, and the second component doesn't seem especially different from other kinds of uncertainty about intuitively obvious mathematical facts, like that described in How to Convince Me That 2 + 2 = 3.

comment by morganism · 2016-09-26T22:07:39.449Z · LW(p) · GW(p)

Trait Entitlement: A Cognitive-Personality Source of Vulnerability to Psychological Distress.

"First, exaggerated expectations, notions of the self as special, and inflated deservingness associated with trait entitlement present the individual with a continual vulnerability to unmet expectations. Second, entitled individuals are likely to interpret these unmet expectations in ways that foster disappointment, ego threat, and a sense of perceived injustice, all of which may lead to psychological distress indicators such as dissatisfaction across multiple life domains, anger, and generally volatile emotional responses"

http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/bul0000063

but of course..... Psychiatry as Bullshit

"By even the most charitable interpretation of the concept, the institution of modern psychiatry is replete with bullshit. "

http://www.ingentaconnect.com/contentone/springer/ehpp/2016/00000018/00000001/

story http://www.rawstory.com/2016/09/proven-wrong-about-many-of-its-assertions-is-psychiatry-bullsht/

comment by Brillyant · 2016-09-26T19:34:39.400Z · LW(p) · GW(p)

'Tis a shame that an event like tonight's debate won't, and ostensibly never would have, received any direct coverage/discussion on LW, or any other rationality sites of which I am aware.

I know (I know, I know...) politics is the mind killer, but tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event, and LW is busy discussing base rates at the vet and LPTs for getting fit given limited square footage.

Replies from: Alejandro1, ChristianKl, username2
comment by Alejandro1 · 2016-09-26T21:37:33.779Z · LW(p) · GW(p)

Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.

comment by ChristianKl · 2016-09-26T22:13:57.215Z · LW(p) · GW(p)

Given that previous US debates resulted in a LW person writing an annotated version that pointed out every wrong claim made during the debate, why do you think that LW shies away from discussing US debates?

Secondly, what do you think "direct coverage" would produce? There's no advantage for rational thinking in covering an event like this live. At least, I can't imagine this debate going in a way where my actions significantly change based on what happens in it, and where it would be bad to only learn the information a week later.

Direct coverage is an illness of mainstream media. Most important events in the world aren't known when they happen. We have Petrov day. How many newspapers covered the event the next day? Or even in the next month?

comment by username2 · 2016-09-27T09:04:33.562Z · LW(p) · GW(p)

tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event

Is that actually true? I've lived through many US presidential eras, including multiple ones defined by "change." Nothing of consequence really changed. Why should this be any different? (Rhetorical question, please don't reply as the answer would be off-topic.)

Consider the possibility that if you want to be effective in your life goals (the point of rationality, no?) then you need to do so from a framework outside the bounds of political thought. Advanced rationalists may use political action as a tool, but not as part of the search for truth we care about here. Political commentary has little relevance to the work that we do.

Replies from: ChristianKl, Brillyant
comment by ChristianKl · 2016-09-28T12:22:06.429Z · LW(p) · GW(p)

I don't think nothing of consequence changed for the Iraqis through the election of Bush.

Replies from: username2
comment by username2 · 2016-09-30T05:37:50.387Z · LW(p) · GW(p)

Compare that with Syria under Obama. "Meet the new boss, same as the old boss..."

comment by Brillyant · 2016-09-27T14:53:40.972Z · LW(p) · GW(p)

I'd argue U.S. policy is too important and consequential to require elaboration.

"Following politics" can be a waste of time, as it can be as big a reality show circus as the Kardashians. But it seems to me there are productive ways to discuss the election in a rational way. And it seems to me this is a useful way to spend some time and resource.

comment by MrMind · 2016-09-26T10:37:32.268Z · LW(p) · GW(p)

A thing already known to computer scientists, but still useful to remember: as per Kleene's normal form theorem, a universal Turing machine is a primitive recursive function.
Meaning that if an angel gives you the encoding of a program you only need recursion, and not unbounded search, to run it.
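
(For reference, the usual statement of Kleene's normal form theorem, in standard notation: T is Kleene's T predicate, U the result-extraction function, both primitive recursive, and μ the unbounded-search operator.)

```latex
\varphi_e(n) \;\simeq\; U\bigl(\mu x\, T(e, n, x)\bigr)
```

The single application of μ is the only unbounded search.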

Replies from: Pfft, Drahflow, username2
comment by Pfft · 2016-09-28T17:10:41.509Z · LW(p) · GW(p)

The claim as stated is false. The standard notion of a UTM takes a representation of a program, and interprets it. That's not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that's not quite what's usually meant by a universal machine.

I think the fact that you just need one loop is interesting, but it doesn't go as far as you claim; if an angel gives you a program, you still don't know how many steps to run it for, so you still need that one unbounded loop.

Replies from: MrMind
comment by MrMind · 2016-09-29T07:17:07.931Z · LW(p) · GW(p)

The standard notion of a UTM takes a representation of a program, and interprets it

Nope. The standard notion of a UTM takes the representation of a program and an input, and interprets it. With the caveat that those representations terminate!

What you say, that the number given to the UTM is the number of steps for which the machine must run, is not what is asserted by Kleene's theorem, which is about functions of natural numbers: the T relation checks, primitive recursively, the encoding of a program and of an input, which is then fed to the universal interpreter.
You do not tell a Turing machine for how many steps it must run, because once a function is defined on an input, it will run and then stop. The fact that some partial recursive function is undefined for some input is accounted for by the unbounded search, but this term is not part of the U or the T function.
The Kleene equivalence needs, as you say, unbounded search, but if the T predicate checks out, it means that x is the encoding of e and n (a program and its input), and that the function will terminate on that input. No need to say for how many steps to run the function.

Indeed, this is true of and evident in any programming language: you give to the interpreter the program and the input, not the number of steps.

Replies from: Pfft
comment by Pfft · 2016-09-29T14:45:19.245Z · LW(p) · GW(p)

See wikipedia. The point is that T does not just take the input n to the program to be run, it takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That's why T can be primitive recursive.

Replies from: MrMind
comment by MrMind · 2016-09-30T06:56:01.113Z · LW(p) · GW(p)

From the page you link:

The T predicate is primitive recursive in the sense that there is a primitive recursive function that, given inputs for the predicate, correctly determine the truth value of the predicate on those inputs.

Also from the same page:

This states there exists a primitive recursive function U such that a function f of one integer

comment by Drahflow · 2016-09-27T10:33:08.450Z · LW(p) · GW(p)

A counterexample to your claim: Ackermann(m,m) is a computable function, hence computable by a universal Turing machine. Yet it is designed to be not primitive recursive.

And indeed Kleene's normal form theorem requires one application of the μ-Operator. Which introduces unbounded search.

Replies from: MrMind
comment by MrMind · 2016-09-27T12:51:09.298Z · LW(p) · GW(p)

Yes, but the U() and the T() are primitive recursive. Unbounded search is necessary to get the encoding of the program, but not to execute it, that's why I said "if an angel gives you the encoding".

The normal form theorem indeed says that any partial recursive function is equivalent to two primitive recursive functions / relations, namely U and T, and one application of unbounded search.

Replies from: Drahflow
comment by Drahflow · 2016-10-11T08:42:55.290Z · LW(p) · GW(p)

Quoting https://en.wikipedia.org/wiki/Kleene%27s_T_predicate:

The ternary relation T1(e,i,x) takes three natural numbers as arguments. The triples of numbers (e,i,x) that belong to the relation (the ones for which T1(e,i,x) is true) are defined to be exactly the triples in which x encodes a computation history of the computable function with index e when run with input i, and the program halts as the last step of this computation history.

In other words: If someone gives you an encoding of a program, an encoding of its input and a trace of its run, you can check with a primitive recursive function whether you have been lied to.
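
(A toy sketch of that "check, don't run" idea in Python. The trace format and machine encoding here are made up for illustration and are not the arithmetic coding Kleene actually uses; the point is only that verifying a claimed computation history needs a single loop bounded by the length of the trace, with no unbounded search.)

```python
def step(program, config):
    """Apply one Turing-machine transition; return None if no rule applies (halt)."""
    state, head, tape = config
    if (state, tape[head]) not in program:
        return None
    write, move, new_state = program[(state, tape[head])]
    tape = tape.copy()
    tape[head] = write
    head += 1 if move == 'R' else -1
    if head < 0:
        tape.insert(0, '_'); head = 0      # grow the tape to the left
    elif head == len(tape):
        tape.append('_')                   # grow the tape to the right
    return (new_state, head, tape)

def is_halting_run(program, input_tape, trace):
    """Check a *claimed* computation history: one loop, bounded by len(trace)."""
    expected = ('start', 0, list(input_tape) or ['_'])
    for i, config in enumerate(trace):
        if config != expected:
            return False                   # the trace does not follow the rules
        expected = step(program, config)
        if expected is None:               # the machine halts at this configuration...
            return i == len(trace) - 1     # ...which must be the last entry of the trace
    return False                           # the trace ended before the machine halted

# Example: a one-rule machine that flips the first cell and halts.
prog = {('start', '0'): ('1', 'R', 'done')}
trace = [('start', 0, ['0']), ('done', 1, ['1', '_'])]
print(is_halting_run(prog, ['0'], trace))  # True
```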

Replies from: MrMind
comment by MrMind · 2016-10-11T13:51:34.407Z · LW(p) · GW(p)

Oh! This point had evaded me: I thought x encoded just the program and the input, not the entire history.
So U, instead of executing anything, just locates the last thing written on the tape according to x and repeats it.
Well, I'm disappointed... at U and at myself.

comment by username2 · 2016-09-27T09:05:46.538Z · LW(p) · GW(p)

Why is this useful to remember?

Replies from: MrMind
comment by MrMind · 2016-09-27T12:53:40.517Z · LW(p) · GW(p)

Because primitive recursion is quite easy, and so it is quite easy to get a universal Turing machine. Filling that machine with a useful program is another thing entirely, but that's why we have evolution and programmers...

Replies from: username2
comment by username2 · 2016-09-27T20:23:57.709Z · LW(p) · GW(p)

Something that also makes this point is AIXI. All the complexity of human-level AGI or beyond can be accomplished in a few short lines of code... if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions. The real challenge isn't solving the problem in principle, but defining the problem in the first place and then reducing the solution to practice / conforming to the constraints of the real world.
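
(For reference, the AIXI action rule being alluded to, in Hutter's usual notation; m is the horizon, U a universal monotone Turing machine, and ℓ(q) the length of program q.)

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl(r_k + \cdots + r_m\bigr)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The definition fits on one line; the sum over all programs q consistent with the history is where the impossible computational demands come from.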

Replies from: entirelyuseless
comment by entirelyuseless · 2016-09-27T23:08:17.332Z · LW(p) · GW(p)

"A few short lines of code..."

AIXI is not computable.

If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.

Replies from: username2
comment by username2 · 2016-09-30T05:35:06.346Z · LW(p) · GW(p)

This is incorrect. AIXI is "not computable" only in the sense that it will not halt on the sorts of problems we care about on a real computer of realistically finite capabilities in a finite amount of time. That's not what is generally meant by 'computable'. But in any case if you assume these restrictions away as you did (infinite clock speed, infinite memory) then it absolutely is computable in the sense that you can define a Turing machine to perform the computation, and the computation will terminate in a finite amount of time, under the specified assumptions.

Simple reinforcement learning coupled with Solomonoff induction and an Occam prior (aka AIXI) results in intelligent behavior on arbitrary problem sets. It just also has computational requirements that are impossible in practice. But that's very different from uncomputability.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-09-30T13:33:51.665Z · LW(p) · GW(p)

Sorry, you are simply mistaken here. Go and read more about it before you say anything else.

Replies from: username2
comment by username2 · 2016-09-30T14:23:29.031Z · LW(p) · GW(p)

Okay random person on the internet.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-09-30T14:39:36.582Z · LW(p) · GW(p)

If you can't use Google, see here. They even explain exactly why you are mistaken -- because Solomonoff induction is not computable in the first place, so nothing using it can be computable.

Replies from: username2
comment by username2 · 2016-09-30T15:30:22.853Z · LW(p) · GW(p)

Taboo the word computable. (If that's not enough of a hint, notice that Solomonoff is "incomputable" only for finite computers, whereas this thread is assuming infinite computational resources.)

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-01T01:29:30.857Z · LW(p) · GW(p)

Again, you are mistaken. I assumed that you could execute any finite number of instructions in an instant. Computing Solomonoff probabilities requires executing an infinite number of instructions, since it implies assigning probabilities to all possible hypotheses that result in the appearances.

In other words, if you assume the ability to execute an infinite number of instructions (as opposed to simply the instantaneous execution of any finite number), you will indeed be able to "compute" the incomputable. But you will also be able to solve the halting problem, by running a program for an infinite number of steps and checking whether it halts during that process or not. As you said earlier, this is not what is typically meant by computable.

(If that is not clear enough for you, consider the fact that a Turing machine is allowed an infinite amount of "memory" by definition, and the amount of time it takes to execute a program is no part of the formalism. So "computable" and "incomputable" in standard terminology do indeed apply to computers with infinite resources in the sense that I specified.)

Replies from: username2
comment by username2 · 2016-10-01T10:54:02.021Z · LW(p) · GW(p)

Solomonoff induction is not in fact infinite due to the Occam prior, because a minimax branch pruning algorithm eventually trims high-complexity possibilities.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-01T16:01:58.895Z · LW(p) · GW(p)

Ok, let's go back and review this conversation.

You started out by saying, in essence, that general AI is just a matter of having good enough hardware.

You were wrong. Dead wrong. The opposite is true: it is purely a matter of software, and sufficiently good hardware. We have no idea how good the hardware needs to be. It is possible that a general AI could be programmed on the PC I am currently using, for all we know. Since we simply do not know how to program an AI, we do not know whether it could run on this computer or not.

You supported your mistake with the false claim that AIXI and Solomonoff induction are computable, in the usual, technical sense. You spoke of this as though it were a simple fact that any well educated person knows. The truth was the opposite: neither one is computable, in the usual, technical sense. And the usual technical sense of incomputable implies that the thing is incomputable even without a limitation on memory or clock speed, as long as you are allowed to execute a finite number of instructions, even instantaneously.

You respond now by saying, "Solomonoff induction is not in fact infinite..." Then you are not talking about Solomonoff induction, but some approximation of it. But in that case, conclusions that follow from the technical sense of Solomonoff induction do not follow. So you have no reason to assume that some particular program will result in intelligent behavior, even removing limitations of memory and clock speed. And until someone finds that program, and proves that it will result in intelligent behavior, no one knows how to program general AI, even without hardware limitations. That is our present situation.

Replies from: username2
comment by username2 · 2016-10-02T06:51:12.511Z · LW(p) · GW(p)

You started out by saying, in essence, that general AI is just a matter of having good enough hardware.

Ok this is where the misunderstanding happened. What I said was "if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions." Truly infinite compute resources will never exist. So that's not a claim about "we just need better hardware" but rather "if we had magic oracle pixie dust, it'd be easy."

The rest I am uninterested in debating further.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-02T16:06:55.141Z · LW(p) · GW(p)

That's fine. As far as I can see you have corrected your mistaken view, even though you do have the usual human desire not to admit that you have done so, even though such a correction is a good thing, not a bad thing. Your statement would be true if you meant by infinite resources, the ability to execute an infinite number of statements, and complete that infinite process. In the same way it would be true that we could solve the halting problem, and resolve the truth or falsehood of every mathematical claim. But in fact you meant that if you have unlimited resources in a more practical sense: unlimited memory and computing speed (it is evident that you meant this, since when I stipulated this you persisted in your mistaken assertion.) And this is not enough, without the software knowledge that we do not have.

Replies from: username2
comment by username2 · 2016-10-03T04:14:54.918Z · LW(p) · GW(p)

Sorry, no, you seem to have completely missed the minimax aspect of the problem -- an infinite integral with a weight that limits to zero has finitely bounded solutions. But it is not worth my time to debate this. Good day, sir.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-03T14:16:14.645Z · LW(p) · GW(p)

I did not miss the fact that you are talking about an approximation. There is no guarantee that any particular approximation will result in intelligent behavior. Claiming that there is, is claiming to know more than all the AI experts in the world.

Also, at this point you are retracting your correction and adopting your original absurd view, which is unfortunate.