Open thread, Jan. 18 - Jan. 24, 2016

post by MrMind · 2016-01-18T09:42:47.158Z · LW · GW · Legacy · 199 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

199 comments

Comments sorted by top scores.

comment by Gunnar_Zarncke · 2016-01-19T19:33:24.232Z · LW(p) · GW(p)

Another post about parenting with a lesswrong touch:

In Other People's Shoes

Part of Philosophy with Children Sequence

"Assume you promised your aunt to play with you nieces while she goes shopping and your friend calls and invites you to something you'd really like to do. What do you do?"

This was the first question I asked my two oldest sons this evening as part of the bedtime ritual. I had read about Constructive Development Theory and wondered if and how well they could place themselves in other people's shoes, what played a role in their decision, and how they'd deal with it. A good occasion to have some philosophical talk. This is the (shortened) dialog that ensued:

The immediate answer by A: "I will watch after the girls."

Me: "Why?"

A: "Because I promised it."

B: "Does A also promise it and get a call?"

Me: "This is about your nieces and your friend, not about your brother."

B: "But I need this for my answer."

Me: "I don't see why, but OK, assume that he is not involved."

B: "Because I would ask him whether he might play with the girls in exchange for a favor."

Me: "OK, but please assume that he is away."

B: "Then I could ask my aunt whether somebody else can watch for the girls or whether I could do it together with my friend."

Me: "Please assume that she doesn't find somebody and that she doesn't want somebody she doesn't know in her house."

B: "Then I'd do it."

Me: "Why?"

B: "Because I promised it. I'd tell my friend that we can do it another time."

We had another scenario: "Imagine that you and a fellow pupil C are guests at a friend's house and having a meal. You know that C is from a family that is very strict about not eating a kind of food that you like very much. Would you advise C to eat it or not?"

A (quickly): "I'd advise to not eat it."

Me: "Why?"

A: "I like rules."

B (after some consideration): "I'd advise to follow their heart."

Me: "And if you were C?"

B: "I'd at least try a bit."

(this was followed with a discussion about possible long-term consequences)

It was still not clear to me whether this implied that he followed only his own preferences or considered them in the context of the rules in the family. So I proposed a setting where he had to imagine being in another country with different laws. We settled on a rule he accepts here (indemnification) but that was much harsher in the other country. He asked whether he had the same feelings as here, which, after some clarification, I confirmed. He argued that he wouldn't like the rule in the other country because it set questionable incentives: "If the punishment is that strong that tells people that it is OK to punish equally strong normally."

comment by Viliam · 2016-01-18T21:11:27.899Z · LW(p) · GW(p)

Reading the preface to Science and Sanity by Korzybski:

From its very inception, the discipline of general semantics has been such as to attract persons possessing high intellectual integrity, independence from orthodox commitments, and agnostic, disinterested and critical inclinations. (...) For them, authority reposes not in any omniscient or omnipresent messiah, but solely in the dependability of the predictive content of propositions made with reference to the non-verbal happenings in this universe. They apply this basic rubric as readily to korzybskian doctrine as to all other abstract formulations and theories and, like good scientists, they are prepared to cast them off precisely as soon as eventualities reveal them to be incompetent, i.e., lacking in reliable predictive content. This circumstance in itself should abrogate once and for all the feckless charges sometimes made by ill-informed critics that general semantics is but one more of a long succession of cults, having its divine master, its disciples, a bible, its own mumbo-jumbo and ceremonial rites. (...) Far from being inclined to repel changes that appear to menace the make-up of general semantics, they actively anticipate them and are prepared to foster those that seem to promise better predictions, better survival and better adaptation to the vicissitudes of this earthly habitat.

One cannot help but be aware, in 1958, that there is far less suspicion and misgiving among intellectuals concerning general semantics and general semanticists than prevailed ten and twenty years ago. Indeed, a certain receptivity is noticeable. The term 'semantics' itself is now frequently heard on the radio, TV and the public speaking platform and it appears almost as frequently in the public print. It has even found a recent 'spot' in a Hollywood movie and it gives some promise of becoming an integral part of our household jargon. This in no sense means that all such users of the term have familiarized themselves with the restricted meaning of the term 'semantics,' much less that they have internalized the evaluative implications and guiding principles of action subsumed under general semantics. A comparable circumstance obtains, of course, in the layman's use of other terms, such as 'electronics.'

(...) The years since the close of World War II have similarly witnessed the access of general semantics not only to academic curricula of the primary, secondary and collegiate levels of the North and South American continents, parts of Western Europe, Britain, Australia and Japan, but to the busy realms of commerce, industry and transportation; of military organization and civil administration; of law, engineering, sociology, economics and religion. These constitute no negligible extensions of general semantics into the world of 'practical' affairs. Large business enterprises, looking toward the improvement of intra- and extramural relations, more satisfying resolutions of the complicated problems that arise between labor and management, and the enhancement of service to their immediate constituents and fellow men in general have found it rewarding, in many instances, to reorganize their entire structure so as to assure the incorporation of general semantic formulations. Several organizations now in existence make it their sole business to advise and provide help in the implementation of such changes. The core of their prescriptions consists in the appropriate application of general semantics. It is becoming a routine for the high and intermediate level executives of certain industries, advertising agencies, banking establishments and the like to retreat for several days at a time while they receive intensive instruction and participate in seminar-workshops designed to indoctrinate them with the principles of general semantics. Comparable courses of instruction have been provided within recent years for the officers of the U.S. Air Academy, the traffic officers of the Chicago Police Department and the sales forces of several large pharmaceutical and biochemical houses. These innovations in business procedure entail, of course, enormous outlays of time, energy and money. They must in time pay perceptible dividends or suffer abandonment. That they are steadily on the increase appears to offer eloquent testimony of their effectiveness.

(...) Membership in the two major organizations concerned with the development, teaching and utilization of general semantics, namely, the Institute of General Semantics located at Lakeville, Connecticut and the International Society for General Semantics, with its central office at Chicago, has slowly but steadily increased over the years and, gratifyingly, has generally avoided the 'lunatic fringe' that appears ever ready to attach itself to convenient nuclei. (...) numerous sectional conferences have been held in various cities each year and the number of courses sought and offered in general semantics is definitely on the increase.

All in all, then, a healthy state of affairs appears to prevail in respect of general semantics. The impact of Korzybski's work on Western culture is now unmistakable and there is every reason to be optimistic that his precepts will be read by ever-widening circles of serious students and that the latter, in their turn, must deeply influence generations of students yet to come. It remains to be seen what effects the regular implementation of these precepts will bring to mankind. Many of us are convinced that they will prove highly salutary.

Impressive! It's like reading about CFAR from a parallel universe. I wonder what happened in that parallel universe fifty years after this text was published. Can we use it as an outside view for the LW rationality movement fifty years after they achieve the successes listed here?

Replies from: cousin_it, username2, Gunnar_Zarncke, ChristianKl
comment by cousin_it · 2016-01-19T02:03:35.075Z · LW(p) · GW(p)

Yeah, I guess LW rationality should be filed under "intellectual fads" rather than "cults".

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2016-01-19T14:38:49.772Z · LW(p) · GW(p)

What are the dynamics that produce a fad rather than growth into the mainstream? It might be worth CFAR thinking about that.

Replies from: ChristianKl, TimS, username2
comment by ChristianKl · 2016-01-19T17:36:37.189Z · LW(p) · GW(p)

In the 20th century serious intellectual thought mostly became thought backed by academia. Academia had a custom of disliking interdisciplinary departments and tried to organize itself into nonoverlapping departments. Many departments were also pressured into doing research that's directly useful to corporations and that can produce patents.

General Semantics and Cybernetics are fields that lost out as a result.

I think there's a good case that times are changing. With the Giving Pledge there's a lot of billionaire money that wants to fund new structures. OpenAI is one example of a well-funded project that likely wouldn't have existed in that form in the past. Sam Altman also wants to fund other similar research projects.

The Open Philanthropy Project is sitting on a lot of money that it wants to funnel into effective projects without caring at all about departmental boundaries.

A world where a lot of people live in their own filter bubbles instead of living in the bubble created by mainstream media might also lack what we currently call a mainstream. Yesterday I was at a Circling meetup in Berlin. On the same day I read about the experience of an old Facebook friend who lives in the US and did a Circling trainer's training. In my filter bubble Circling is a global trend at the moment, but that doesn't mean that it's mainstream.

If we look at general semantics it's also worth noting that it both succeeded and failed. The phrase "The map is not the territory" is very influential. Neuro-linguistic Programming (NLP) is named that way because of Korzybski's usage of the term neuro-linguistic. NLP is based on general semantics but has evolved a lot from that point. NLP is today an influential intellectual framework outside of the academic mainstream.

Replies from: Viliam, mwengler
comment by Viliam · 2016-01-20T11:13:58.467Z · LW(p) · GW(p)

Academia had a custom of disliking interdisciplinary departments and tried to organize itself into nonoverlapping departments.

I could imagine some good reasons for doing so. Sometimes scientists who are experts in one field become crackpots in another field, and it may be difficult for their new colleagues to argue against them if the crackpot can Euler them with arguments from their old field.

On the other hand, there is the saying that a map is not the territory, and this seems to suggest that the existing maps can be modified in the middle, but the boundaries are fixed. But we have already seen e.g. computer science appearing at the boundary of mathematics; biochemistry appearing on the border between biology and chemistry; or game theory somewhere at the intersection of mathematics, economics, and psychology.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-20T11:50:13.869Z · LW(p) · GW(p)

I could imagine some good reasons for doing so.

My argument wasn't primarily about whether that historical development was good or bad. It was that there are reasons why certain memes won over others that aren't directly about the merit of the memes. Additionally, I make the prognosis that those reasons are less likely to hold over the next 40 years the way they did over the last 40.

But we have already seen e.g. computer science appearing at the boundary of mathematics

Today computer science is very much a subfield of math. Heinz von Foerster had a psychiatrist in his Biological Computer Laboratory. He wanted to study computing, systems theory, cybernetics, or whatever word you want to use, broadly.

The Biological Computer Laboratory was shut down when the military decided that funding it didn't produce militarily useful results.

But we have already seen e.g. computer science appearing at the boundary of mathematics; biochemistry appearing on the border between biology and chemistry; or game theory somewhere at the intersection of mathematics, economics, and psychology.

As far as I know we don't have professors of game theory as a discipline. We have economics professors who study game theory, mathematics professors who study it, and psychology professors who study it. We don't have departments of game theory.

comment by mwengler · 2016-01-24T17:40:25.806Z · LW(p) · GW(p)

Never heard of Circling until your post. Looked it up, and initially found nothing going on in San Diego (California, US). I wonder if it is more of a European thing?

If you know how I can find something local to San Diego CA US, please let me know.

comment by TimS · 2016-01-19T15:16:57.823Z · LW(p) · GW(p)

Likely strong factors include:

  • Ease of applicability. If the average middle manager cannot apply a technique easily or straightforwardly while working, the major pressure to use a technique will be social signalling (cf. corporate buzzword speak).

  • Measurable outcomes. If the average middle manager cannot easily observe that the technique makes her job easier (either the productivity of subordinates or her control over them), then she will have no reason to emotionally or intellectually invest in the technique.

comment by username2 · 2016-01-19T21:27:25.235Z · LW(p) · GW(p)

Becoming a niche is a third possibility if ideas are suitable to one area but hard to expand to different areas.

Replies from: mwengler
comment by mwengler · 2016-01-24T17:32:34.780Z · LW(p) · GW(p)

I do think rationality is a niche. I had a conversation with a not-particularly-bright administrative assistant at work where she expressed the teachings of the Jehovah's Witnesses as straightforward truth. She talked about some of the chaos of her life (drugs, depression) before joining them. As I expressed the abstract case for, essentially, being careful about what one believes, it seemed clear enough to me that she had little or nothing to gain by being "right" (or rather adopting my opinion, which is more likely to be true in a Bayesian sense) and she seemed fairly clearly to have something to lose. I, on the other hand, have a philosopho-physicist's values and also value finding regular (non-theological) truths by carefully rejecting my biases, so I was making a choice that (probably) makes sense for me.

When my 14 year old daughter (now 16 and doing much better) was "experimenting" with alcohol, marijuana, and shop-lifting, I had a "come to Jesus" talk with my religious cousin. She told me that I knew right from wrong and that I was doing my daughter no favors by teaching her skepticism above morality. I decided she was essentially correct, and that some of my own "skepticism" was actually self-serving, letting me off the hook for some stealing I had done from employers starting when I was about 15.

I view rationality as a thing we can do with our neocortex. But clearly we have a functional emotional brain that "knows" there are monsters or tigers when we are afraid of the dark and "knows" that girls we are attracted to are also attracted to us. I continue to question whether I am doing myself or my children any real favors by being as devoted to this particular feature of my neocortex as I am.

comment by username2 · 2016-01-18T22:38:14.402Z · LW(p) · GW(p)

Without numbers it sounds more like a sales pitch than an honest analysis.

Replies from: PipFoweraker
comment by PipFoweraker · 2016-01-19T22:01:09.118Z · LW(p) · GW(p)

I think that's a reasonable position for a preface to take.

comment by Gunnar_Zarncke · 2016-01-18T22:23:15.915Z · LW(p) · GW(p)

I had a comparable impression from reading Cybernetics (at least the parts I've gotten to so far) and other books on systems theory.

Replies from: Vaniver, Viliam
comment by Vaniver · 2016-01-19T16:32:42.127Z · LW(p) · GW(p)

I think cybernetics the practice / math is alive and well, even if cybernetics the name is mostly discarded. Take a look at Wiener's wiki page:

Wiener is considered the originator of cybernetics, a formalization of the notion of feedback, with implications for engineering, systems control, computer science, biology, neuroscience, philosophy, and the organization of society.

The right way to read that is that it's used in seven fields, not zero.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-19T17:50:30.104Z · LW(p) · GW(p)

Cybernetics is alive, but I think it's misleading to call it well. When talking about an issue like weight loss, the dominant paradigm is "calories in, calories out" and not a cybernetics-inspired paradigm.

We don't live in a world where any scale on the market automatically calculates the moving averages of the Hacker's Diet.
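
For readers who don't know it: a minimal sketch of the kind of smoothing the Hacker's Diet describes, an exponentially weighted moving average of daily weigh-ins (the 0.1 smoothing factor and the sample weights below are illustrative assumptions, not something any current scale implements):

    # Exponentially smoothed trend line in the style of the Hacker's Diet:
    # each day the trend moves a fixed fraction of the way toward the raw weight.
    def smoothed_trend(weights, smoothing=0.1):
        trend = weights[0]  # seed the trend with the first measurement
        result = []
        for w in weights:
            trend += smoothing * (w - trend)
            result.append(round(trend, 2))
        return result

    daily_weights = [80.0, 80.4, 79.8, 80.1, 79.6, 79.9, 79.5]
    print(smoothed_trend(daily_weights))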

Quantified Self as a movement is based on cybernetics. At the first European conference, Gary spoke about how cybernetics is not well.

I had an old professor at university who taught physiology based on regulatory-systems thinking (cybernetics, though he didn't use the word). According to him there was no textbook presenting that perspective that we could use for the course.

Replies from: Viliam
comment by Viliam · 2016-01-20T11:33:56.357Z · LW(p) · GW(p)

So it seems like cybernetics was dissected and some of its parts were digested by various disciplines, but the original spirit which connected those parts together was lost.

An analogy for the rationality movement would be if, in a few decades, some of the CFAR or MIRI lessons became accepted material in pedagogics, physics, or maybe even AI research, but the whole spirit of "tsuyoku naritai" was forgotten.

Some parts that I guess are likely to survive, because they can fit in the existing education:

  • treating emotions as rational or irrational depending on whether they relate to facts (psychology)
  • planning fallacy (management)
  • illusion of transparency (pedagogics)

Some parts that I guess are likely to be ignored, because they seem too trivial and don't fit into the existing educational system. They may be mentioned as a footnote in philosophy, but they will not be noticed, because philosophy already contains millions of mostly useless ideas:

  • making beliefs pay rent
  • noticing confusion
  • fake explanations
  • mysterious answers
  • affective spirals
  • fallacy of grey
  • dissolving the question
  • tsuyoku naritai
  • rationality as a common cause of many causes

EDIT: Reading my lists again, it seems like the main difference is between things you can describe and things you have to do. The focus of academia is to describe stuff, not to train people. Which makes sense, sure. Except for the paradoxical part where you have to train people to become better at correctly describing stuff.

comment by Viliam · 2016-01-19T09:46:56.077Z · LW(p) · GW(p)

I haven't read the book, but looking at the reviews on the page you linked...

First, it's funny what once passed for pop science. (...) at least 10% of the pages are devoted to difficult equations and proofs, and I had to skip a couple of chapters because the math was way, way over my head.

Wiener was both philosopher and scientist. As a scientist he was evidently peerless at the time; as a philosopher he reads as ... quirky. But at least he's trying. (...) his assertion that the body is a machine - a wonderfully complex machine, but a machine nevertheless - apparently had not been so internalized by his intended audience (again, a mathematically literate lay audience) that it was unnecessary to make the point.

(Wiener) was clearly committed to a program of ethical research and development. He warned of the danger of developing dangerous computing applications, and dismissed the idea that we can always "turn off" machines that we don't like, since it isn't always clear that the danger exists until after the damage is done.

That's like Eliezer from a parallel universe, except that in this parallel universe the alternative Eliezer was a professor of mathematics at MIT.

comment by ChristianKl · 2016-01-19T07:38:20.747Z · LW(p) · GW(p)

One cannot help but be aware, in 1958, that there is far less suspicion and misgiving among intellectuals concerning general semantics and general semanticists than prevailed ten and twenty years ago. Indeed, a certain receptivity is noticeable. The term 'semantics' itself is now frequently heard on the radio, TV and the public speaking platform and it appears almost as frequently in the public print.

Not everything that has 'semantics' written on it is 'general semantics'. Academic seminars on semantics see themselves rather in the tradition of linguistics.

Replies from: TimS
comment by TimS · 2016-01-19T15:10:03.173Z · LW(p) · GW(p)

Yes. That assertion threw up a red flag that the author was overstating the importance of the methodology.

comment by [deleted] · 2016-01-22T13:39:38.073Z · LW(p) · GW(p)

Please share something you consider a positive characteristic of another LessWronger that you haven’t shared elsewhere :)

Replies from: polymathwannabe, philh, None
comment by polymathwannabe · 2016-01-22T14:43:02.413Z · LW(p) · GW(p)

It may not seem so, but I actually enjoy debating Lumifer. He never fails to show me where I'm wrong.

Replies from: None
comment by [deleted] · 2016-01-23T00:41:33.139Z · LW(p) · GW(p)

that's a really good one. Lumifer is wonderful!

comment by philh · 2016-01-25T10:35:41.467Z · LW(p) · GW(p)

IlyaShpitser is really good at calling out people who are using bayes etc. as applause lights. It's valuable and entertaining.

comment by [deleted] · 2016-01-23T11:43:44.035Z · LW(p) · GW(p)

Vaniver and FrameBeningly answer my questions in an understandable and friendly way. (Other people do, too, I just have an easier time deciding to PM those two.) I also pattern-match them to a couple of very good friends in RL, which is why 1) I don't think your formulation is really meaningful, 2) I want them to know, if they are reading this, that I might go to greater lengths in helping them than is rationally expected considering we've never met.

comment by Gunnar_Zarncke · 2016-01-18T22:26:54.319Z · LW(p) · GW(p)

Is there any interest in posts about parenting with a lesswrong touch?

Example:

Mental Images

Part of Philosophy with Children

This evening my oldest asked me to test his imagination. Apparently he had played around with it and wanted some outside input to learn more about what he could do. We had talked about https://en.wikipedia.org/wiki/Mental_image before, and I knew that he could picture moving scenes composed of known images. So I suggested:

  • a five with green and white stripes, diagonally. That took some time; apparently the green was difficult for some reason, and he had to converge on it from black via dark green
  • three mice
  • three mice, one yellow, one red, and one green
  • the three colored mice running behind each other in circles (all no problem)
  • himself
  • himself in a mirror, viewed from behind (no problem)
  • two almost parallel mirrors with him in between (he claimed to see his image infinitely repeated; I think he just recalled such an experiment we did another time).
  • a street corner with him on one side and a bike leaning against the other wall, with the handlebar facing the corner and with a bicycle bell on the left side, such that he cannot see the bike.
  • ditto, with him looking into a mirror held before him so he can see the bike behind the corner.

The latter took quite some time, partly because he had to assign colors and such so that he could fully picture this and then the image in the mirror. I checked by asking where the handlebar and the bell were. I had significant difficulty imagining this myself and correctly placing the bell. I noticed that it is easier to just see the bell once the image in the mirror has gained enough detail (the walls before and behind me, the corner, the bike leaning on the corner, the handlebar).

I also asked for a square circle which got the immediate reply that it is logically impossible.

If you have difficulties doing these (or judge them trivial): this is one area where human experience varies a lot. So this is not intended to provide a reference point in ability but an approach to teach human difference, reflection and, yes, also to practice imagination - a useful tool if you have it. If not, you might be interested in what universal human experiences you are missing without realizing it.


I'm currently writing these daily and posting them on the LW slack and the less-wrong-parents group.

Replies from: NancyLebovitz, Baughn
comment by NancyLebovitz · 2016-01-19T10:10:26.652Z · LW(p) · GW(p)

Sparks of Genius has a lot of challenges for the imagination. What geometrical figure has a circular cross section and a square cross section? Circular, square, and triangular cross sections?

Replies from: Gunnar_Zarncke, Gunnar_Zarncke, moridinamael
comment by Gunnar_Zarncke · 2016-01-19T20:55:27.648Z · LW(p) · GW(p)

That book looks interesting. Added it to my wish list. Here is a summary: http://vnthomas1.blogspot.de/2009/06/sparks-of-genius-13-thinking-tools-of.html

comment by Gunnar_Zarncke · 2016-01-19T19:21:04.499Z · LW(p) · GW(p)

We talked about 3D objects being a square from one side and a circle from the other - for example a cylinder. But he rejected this approach (though he was able to visualize the form). He considered taking the circle and square apart and putting them back together into something like a rounded square, but rejected that too as neither square nor circle.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2016-01-19T19:44:37.025Z · LW(p) · GW(p)

My guess is that your son doesn't have a solid grasp of the idea of a cross section. Actually, I don't quite feel good about a cylinder having a square cross section. It's as though it's wrong to neglect the idea that a cylinder is round.

Replies from: Vaniver, Gunnar_Zarncke
comment by Vaniver · 2016-01-20T16:57:23.063Z · LW(p) · GW(p)

Actually, I don't quite feel good about a cylinder having a square cross section.

Consider a square in front of you with its edges horizontal and vertical. (Say, drawn on your monitor.) Then consider the line running from the top of the square to the bottom of the square that passes through the center of the square. What happens when you rotate the square around that line?
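
A minimal way to write this down (assuming a unit square for concreteness, rotated about the vertical line through its center):

    C = \{\, (x, y, z) : x^2 + y^2 \le \tfrac{1}{4},\ 0 \le z \le 1 \,\}

Every horizontal slice of C is a disc of diameter 1 (a circle), while the vertical slice through the axis, y = 0, is the original 1×1 square - so the swept solid is a cylinder with both circular and square cross sections.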

comment by Gunnar_Zarncke · 2016-01-19T20:49:48.924Z · LW(p) · GW(p)

That was also my first impression. But we talked about it a bit longer. I think it clicked when he mentioned how he looked (in imagination) at the form such that the top becomes a straight line (like looking at a sheet of paper from the side) and the same with the bottom.

comment by moridinamael · 2016-01-19T15:28:29.807Z · LW(p) · GW(p)

I think your link didn't happen correctly.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2016-01-19T16:18:07.469Z · LW(p) · GW(p)

Thanks for letting me know.

comment by Baughn · 2016-01-19T18:06:36.477Z · LW(p) · GW(p)

I also asked for a square circle which got the immediate reply that it is logically impossible.

I am now imagining a square circle. That's interesting.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-01-19T19:21:17.956Z · LW(p) · GW(p)

Can you describe it?

Replies from: Baughn
comment by Baughn · 2016-01-20T03:48:12.307Z · LW(p) · GW(p)

It's circular, and square.

That's literally all there is. I can't imagine it visually, the way I usually would. Wonder why. :P

Replies from: Gunnar_Zarncke, roystgnr
comment by Gunnar_Zarncke · 2016-01-20T07:00:21.814Z · LW(p) · GW(p)

Alice laughed. 'There's no use trying,' she said. 'One can't believe impossible things.'

'I daresay you haven't had much practice,' said the Queen. 'When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast. There goes the shawl again!'

― Lewis Carroll

See also this article discussing the usefulness of believing impossible things.

comment by roystgnr · 2016-01-20T06:05:47.347Z · LW(p) · GW(p)

I can imagine it. You just have to embed it in a non-Euclidean geometry. A great circle can be constructed from 4 straight lines, and thus is a square, and it still has every point at a fixed distance from a common center (okay, 2 common centers), and thus is a circle.

Replies from: gjm
comment by gjm · 2016-01-20T08:06:10.727Z · LW(p) · GW(p)

The four straight lines in your construction don't meet at right angles - consecutive arcs of the same great circle meet at a straight angle (180°), so the figure isn't a square in the usual sense.

comment by Lumifer · 2016-01-21T21:49:16.268Z · LW(p) · GW(p)

Oh, dear. A paper in PNAS says that the usual psychological experiments which show that people have a tendency to cooperate at the cost of not maximizing their own welfare are flawed. People are not cooperative, people are stupid and cooperate just because they can't figure out how the game works X-D

Abstract:

Economic experiments are often used to study if humans altruistically value the welfare of others. A canonical result from public-good games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data are that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whom they cannot be concerned with the welfare of; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.

Replies from: Kaj_Sotala, Gurkenglas, mwengler
comment by Kaj_Sotala · 2016-01-24T13:07:50.089Z · LW(p) · GW(p)

That sounds like it would contradict the results on IQ correlating positively with cooperation:

A series of experiments performed in (of all places) a truck driving school investigated a Window Game. Two players are seated at a desk with a partition between them; there is a small window in the partition. Player A gets $5 and may pass as much of that as she wants through the window to Player B. Player B may then pass as much as she wants back through the window to Player A, after which the game ends. All money that passes through the window is tripled; eg if Player A passes the entire $5 through it becomes $15, and if Player B passes the $15 back it becomes $45 – making passing a lucrative strategy but one requiring lots of trust in the other player. I got briefly nerd-sniped trying to figure out the best (morally correct?) strategy here, but getting back to the point: players with high-IQ were more likely to pass money through the window. They were also more likely to reciprocate – ie repay good for good and bad for bad. In a Public Goods Game (each of N players starts with $10 and can put as much or as little as they like into a pot; afterwards the pot is tripled and redistributed to all players evenly, contributors and noncontributors alike), high-IQ players put more into the pot. They were also more likely to vote for rules penalizing noncontributors. They were also more likely to cooperate and more likely to play closer to traditional tit-for-tat on iterated prisoners’ dilemmas. The longer and more complicated the game, the more clearly a pattern emerged: having one high-IQ player was moderately good, but having all the players be high-IQ was amazing: they all caught on quickly, cooperated with one another, and built stable systems to enforce that cooperation. In a ten-round series run by Jones himself, games made entirely of high-IQ players had five times as much cooperation as average.
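
(For concreteness, here is a minimal payoff sketch of the two games as the quote describes them; the $5 and $10 stakes and the tripling rule come from the quote, while the specific strategies in the examples are just illustrative assumptions.)

    # Payoff sketches for the two games described above.

    def window_game(pass_a, pass_b, stake=5):
        """Window (trust) game: A passes pass_a of her $5 stake and it triples;
        B then passes pass_b back, which also triples on the way back."""
        b_receives = 3 * pass_a
        payoff_a = (stake - pass_a) + 3 * pass_b
        payoff_b = b_receives - pass_b
        return payoff_a, payoff_b

    def public_goods_game(contributions, endowment=10):
        """Each of N players puts part of a $10 endowment into a pot; the pot is
        tripled and split evenly among everyone, contributors or not."""
        n = len(contributions)
        share = 3 * sum(contributions) / n
        return [endowment - c + share for c in contributions]

    print(window_game(pass_a=5, pass_b=15))   # everything passed both ways, as in the quote: (45, 0)
    print(public_goods_game([10, 10, 0, 0]))  # two contributors, two free-riders: [15.0, 15.0, 25.0, 25.0]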

Replies from: gwern
comment by gwern · 2016-01-24T18:28:15.860Z · LW(p) · GW(p)

Only if you assume that IQ is independent of altruism. Given that IQ covaries with altruism, patience, willingness to invest, willingness to trust strangers, etc, I don't see why you would make that assumption. I'm fine with believing that greater IQ also causes more cooperation and altruism and so high IQ players understand better how to exploit others but don't want to. If anything, the results suggest that the relationships may have been underestimated, because lower IQ subjects' responses will be a mix of incompetence & selfishness, adding measurement error.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-01-25T21:37:39.644Z · LW(p) · GW(p)

Good point.

comment by Gurkenglas · 2016-01-23T19:08:07.595Z · LW(p) · GW(p)

(ii) They may also be anthropomorphizing the computers. (iii) This just means that the sort of person who cooperates in this sort of game also treats humans and computers equally, right?

Replies from: Lumifer
comment by Lumifer · 2016-01-25T16:56:33.782Z · LW(p) · GW(p)

They may also be anthropomorphizing the computers.

I would count it as supporting evidence for the "they're just stoopid" hypothesis X-)

comment by mwengler · 2016-01-24T14:42:12.309Z · LW(p) · GW(p)

Does "value the welfare of others" necessarily mean "consciously value the welfare of others"? Is it wrong to say "I know how to interpret human sounds into language and meaning" just because I can do it? Or do I have to demonstrate I know how because I can deconstruct the process to the point that I can write an algorithm (or computer code) to do it?

The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative. If I can value the welfare of a stranger, then clearly the thing for which I value welfare is not defined too tightly. If a computer (running the right program) displays some of the features that signal me that a human is something I should value, why couldn't I value the computer? We watch animated shows and value and have empathy for all sorts of animated entities. In all sorts of stories we have empathy for robots or other mechanical things. The idea that we cannot value the welfare of a computer flies in the face of the evidence that we can empathize with all sorts of non-human things fictional and real. In real life, we value and have human-like empathy for animals, fishes, and even plants in many cases.

I think the interpretations or assumptions behind this paper are bad ones. Certainly, they are not brought out explicitly and argued for.

Replies from: Jiro, Lumifer
comment by Jiro · 2016-01-25T23:13:35.423Z · LW(p) · GW(p)

I actually read the paper.

It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. However, this interpretation would: (i) be inconsistent with other studies showing that people discriminate behaviorally, neurologically, and physiologically between humans and computers when playing simpler games (19, 56–58), (ii) not explain why behavior significantly correlated with understanding (Fig. 2B and Tables S3 and S4)..."

((iii) and (iv) apply to the general case of "people behave as if they are playing with humans", but not to the specific case of "people behave as if they are playing with humans, because of empathy with the computer").

comment by Lumifer · 2016-01-25T17:01:55.415Z · LW(p) · GW(p)

The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative.

I am always up for being ludicrous :-P

So, what is the welfare of a computer? Does it involve a well-regulated power supply? Good ventilation in a case? Is overclocking an example of inhumane treatment?

Or maybe you want to talk about software and the awful assault on its dignity by an invasive debugger...

comment by ChristianKl · 2016-01-18T13:02:58.775Z · LW(p) · GW(p)

Do we have a way to measure how happy farm animals happen to be? If we don't, then developing a metric might produce huge gains in animal welfare, because it would allow us to optimize for it better.

Replies from: Vaniver, Gunnar_Zarncke
comment by Vaniver · 2016-01-18T19:47:27.753Z · LW(p) · GW(p)

Temple Grandin has some work that's relevant, and argues for quantitative measures. One of the easy metrics to use now is bodily integrity, like the percentage of animals who are lame when they make it to the slaughterhouse. A lame animal is unlikely to be a happy or well-treated animal, and it seems easy to measure and compare.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2016-01-19T09:56:06.512Z · LW(p) · GW(p)

She's also done work on what animals are willing to take some trouble to get-- chickens apparently care more about having a secluded place to lay eggs than they care about getting outside.

comment by Gunnar_Zarncke · 2016-01-18T22:19:08.476Z · LW(p) · GW(p)

Which reminds me: Would it be ethical to reduce animal suffering by somehow breeding suffering out of animals? Or asked differently: Is suffering a simple concept (that can be selected against) or is it just negative value with all the associated complexity? Or doesn't this apply to animals?

Obligatory hitchhikers quote.

Replies from: Viliam, mwengler
comment by Viliam · 2016-01-19T08:51:30.976Z · LW(p) · GW(p)

This is tricky, because if we don't understand (on the technical level) how "qualia" work, we cannot be sure if we are breeding for "less suffering" or merely "less ability to express suffering".

In other words, now the humans could play the role of the unfriendly AI who "would rip off your face, wire it into a permanent smile, and start xeroxing".

Replies from: PipFoweraker
comment by PipFoweraker · 2016-01-19T22:06:29.738Z · LW(p) · GW(p)

I'm not certain if we need to understand how suffering works if we can simply remove the organs that house it.

It seems less tricky when a technological set of solutions comes along that allows delicious engineered meat to be grown without all the unnecessary and un-delicious bits.

I think the in vitro meat industry will have an extraordinarily good time when things develop to the point of being able to synthesize a lazy person's whole stuffed camel.

Replies from: Viliam, Dagon
comment by Viliam · 2016-01-20T11:37:07.854Z · LW(p) · GW(p)

In vitro meat -- okay.

Modifying animals so they can't scream -- not okay.

somehow breeding suffering out of animals

-- this is the part that IMHO depends on details we may not yet understand sufficiently.

comment by Dagon · 2016-01-19T23:22:00.769Z · LW(p) · GW(p)

In-vitro meat reduces suffering, but also reduces joy and brain-experienced life in general. I don't know how to evaluate whether a current cow's or chicken's life has negative value (to the animal) or not.

Replies from: PipFoweraker
comment by PipFoweraker · 2016-01-19T23:55:34.748Z · LW(p) · GW(p)

If I'm exclusively limiting myself to animals that are raised in an organised fashion for eventual slaughter, I don't think I need too much data to assign broadly negative values to lives that are unusually brutish, nasty and short compared to either non-existence or a hypothetical natural existence.

In my consideration, simple things like the registering of a pain stimulus and the complexity of behaviour to display distress are good enough indicators.

Replies from: AstraSequi, Dagon
comment by AstraSequi · 2016-01-20T02:14:29.892Z · LW(p) · GW(p)

I don't think I need too much data to assign broadly negative values to lives that are unusually brutish, nasty and short compared to either non-existence or a hypothetical natural existence.

I don't think you can make that decision so easily. They're protected from predators, well-fed, and probably healthier than they would be in the wild. (About health, the main point against is that diseases spread more rapidly. But farmers have an incentive to prevent that, and they have antibiotics and access to minimal veterinary treatment.)

'no pig' > 'happy pig + surprise axe'

This leads me to conclusions I disagree with - like if a person is murdered, then their life had negative value.

comment by Dagon · 2016-01-20T00:20:20.118Z · LW(p) · GW(p)

I don't think I need too much data to assign broadly negative values to lives that are unusually brutish, nasty and short compared to either non-existence or a hypothetical natural existence.

The comparison at hand is only to non-existence; you're not proposing any mechanism to improve such lives or to make them similar to a hypothetical nature, only to eliminate any experience of the life while still providing the meat.

As such, you don't need too much data, but you currently have none, nor even a theory about what data you'd want. Trying to determine a preference for non-existence in animals (or vegetables, for that matter, or lumps of vat-meat) when such units don't seem to have the concepts (or at least the communication ability) to make choices for themselves doesn't seem obvious at all to me.

Replies from: PipFoweraker
comment by PipFoweraker · 2016-01-20T01:38:37.930Z · LW(p) · GW(p)

When animals are created and destroyed solely for a purpose attributed to them by their human overlords, that reduces their utilisable preferences to zero or near zero. Unless a meat producer had reason to believe that inflicting pain on an animal improved the resulting meat product, that pain would almost certainly be a by-product of whatever the farmer chose rather than an exclusive intent. I personally know no farmers that inflict 'pointless' injury on their livestock.

Given any amount of suffering in the animal stock needed to feed, say, the US, compared to a zero amount of suffering for the in-vitro meat needed to feed the US, if we were basing decisions solely on the ethics of the situation the choice would be clear-cut. As it stands it is simply one amongst many trade-offs, the numbers and data of which I agree would be laborious to define.

The inability to communicate or even experience a preference for the concept of non-existence, compared to an experienced or ongoing pain, does not invalidate the experience of the pain. In this field of thought I am happy to start from a non-rigorous framework and then become more rigorous if need be. At a simple level, my model says [for SolvePorkHunger: 'no pig' > 'happy pig + surprise axe' > 'sad pig + surprise axe'].

The practical ways to improve such lives as already exist are, broadly speaking, answered by practitioners of veganism, vegetarianism, cooperative existence with animals (raising chooks, goats for milk, etc etc).

Replies from: MrMind, Dagon
comment by MrMind · 2016-01-20T08:42:45.800Z · LW(p) · GW(p)

for SolvePorkHunger: 'no pig' > 'happy pig + surprise axe' > 'sad pig + surprise axe'

Although I can understand the intuitiveness of this ordering, I think it should be pondered more deeply.
It's safe to say that 'no pig' involves no joy and no suffering, and that the sad pig experiences lots of suffering. Also it would seem intuitive that a happy pig dying of natural causes experiences lots of joy. From the point of view of the animal:
long lived sad pig < short lived sad pig < no pig < long lived happy pig
It is weird not to put the short-lived happy pig where it seems to belong, and I think it has to do with the fact that killing a happy pig carries a lot of negative moral weight.
Would you say the same about a pig genetically engineered to die of natural causes when it's most delicious?

Replies from: Dagon
comment by Dagon · 2016-01-20T15:07:00.936Z · LW(p) · GW(p)

Ooh! I love the point about it being morally heavier to kill and eat a happy animal than a sad one.

I tend to think even relatively sad lives are not absolutely negative - very nearly any life is better than none, and a good life better than a bad one, but it's going to give some of my fuzzy-vegetarian friends a good question to ponder.

comment by Dagon · 2016-01-20T15:00:36.434Z · LW(p) · GW(p)

When animals are created and destroyed solely for a purpose attributed to them by their human overlords

Does this argument apply to humans created or destroyed solely for purposes of evolutionary pressure or environmental accident? I'd argue that nothing happens solely for any purpose.

reduces their utilisable preferences to zero or near zero.

Measured how?

'no pig' > 'happy pig + surprise axe' > 'sad pig + surprise axe'

This seems to be the crux of your position. I don't buy it. Let's leave aside (unless you want to try to define terms) the difference between happy, sad, and more common mixed cases.

Let's focus on the main inequality of nonexistence vs some temporary happiness. Would you say 'no human' > 'happy human + surprise cancer'? I assert that neither human nor pig really frames things in terms of the farmer's or universe's motivations.

Replies from: mwengler
comment by mwengler · 2016-01-27T13:43:19.857Z · LW(p) · GW(p)

'no pig' > 'happy pig + surprise axe' > 'sad pig + surprise axe'

Would this also mean

'no pig' > 'happy pig + surprise predator' > 'sad pig + surprise predator'

I don't think nature is generally any better than (some kinds of) farming for prey animals. Should vegans be benefitting from lowering the birth rates among natural animals?

Or, for that matter, does it also mean 'no human' > 'happy human + eventual death' > 'sad human + eventual death'?

Even in nature, all life is alive, and then it dies, almost always in a way it would not choose or enjoy. Does life just suck? Are we bad actors for having children?

Replies from: Vaniver, IlyaShpitser
comment by Vaniver · 2016-01-27T14:28:13.262Z · LW(p) · GW(p)

I don't think nature is generally any better than (some kinds of) farming for prey animals.

The term to search for is 'wild animal suffering.'

Does life just suck? Are we bad actors for having children?

The term to search for is 'anti-natalism.'

comment by IlyaShpitser · 2016-01-27T14:26:58.684Z · LW(p) · GW(p)

People who worry that life sucks that much should make sure they correctly priced in the possibility that we can figure out how to arrange it so that life is super great in the future.

(But everyone here realizes this).

comment by mwengler · 2016-01-24T17:53:51.313Z · LW(p) · GW(p)

Would it be ethical to grow meat in a vat without a brain associated with it? Personally, I think pretty clearly yes.

So breeding suffering out of animals would seem to be between growing meat in a vat and what we have now. So it would seem to be a step in the right direction.

We, and animals, almost certainly have suffering because it had survival value for us and animals in the environment in which we evolved. Being farmed for meat is not that environment. I don't think removing suffering from our farmed animals has a downside. Of course, removing it from wild animals would probably not be a good thing, but would probably correct itself relatively quickly in the failure of non-suffering animals to survive.

Replies from: Gunnar_Zarncke, Jiro
comment by Gunnar_Zarncke · 2016-01-24T22:09:17.777Z · LW(p) · GW(p)

I wonder about more intermediate stages. Animals that suffer less is one, obviously. Animals with reduced nervous systems would be another (though probably not practical). More ideas?

comment by Jiro · 2016-01-25T00:15:07.645Z · LW(p) · GW(p)

Most vegetarians would think that activities that normally make animals suffer are bad in themselves. They may originally have used suffering as a reason to figure out that those activities are bad, but they're bad in themselves. You can't just take away the bad consequences and make them good.

Also, utilitarianism has a problem with blissful ignorance. Most vegetarians would probably think that animals that are engineered to be unable to suffer have a blissful ignorance problem; they are being harmed and just don't realize it.

Replies from: Lumifer, mwengler
comment by Lumifer · 2016-01-25T17:16:48.707Z · LW(p) · GW(p)

Most vegetarians would probably think that animals that are engineered to be unable to suffer have a blissful ignorance problem; they are being harmed and just don't realize it.

Do carrots have a blissful ignorance problem, then?

Replies from: Jiro
comment by Jiro · 2016-01-25T18:27:57.866Z · LW(p) · GW(p)

The problem only exists for beings with some sort of mind that has moral relevance. I would guess that most vegetarians believe that animals have such a mind, but not carrots.

Replies from: Lumifer
comment by Lumifer · 2016-01-25T20:16:56.386Z · LW(p) · GW(p)

So what happens when you engineer a "mind that has moral relevance" out of an animal?

And going a bit upthread, what do you mean by acts that are "bad in themselves"?

Replies from: Jiro
comment by Jiro · 2016-01-25T22:10:02.145Z · LW(p) · GW(p)

I'm not a vegetarian myself. I was just describing how people think. I don't know that they have a coherent concept of "acts that are bad in themselves".

comment by mwengler · 2016-01-27T13:31:08.723Z · LW(p) · GW(p)

Most vegetarians would think that activities that normally make animals suffer are bad in themselves.

Presumably the moral win in reducing or eliminating the suffering of farmed animals would have more to do with non-vegetarians than vegetarians. But really, is the point here to do something better than is already done, or is it to impress vegetarians?

comment by MrMind · 2016-01-18T09:44:42.161Z · LW(p) · GW(p)

I was on vacation, confident that Clarity would have opened the new open threads. Since that wasn't the case, I'm resuming, from today, the 'duty' of creating such threads. Happy LessWronging.

Replies from: None, Elo
comment by [deleted] · 2016-01-21T06:05:20.181Z · LW(p) · GW(p)

Evidence for an inefficient market for karma for public goods like open threads on LessWrong. I assumed we would adapt to it after I attracted controversy for starting that open thread the other day. Maybe I've taboo'd non-MrMinds from doing it instead and legitimised a kind of proprietary right to it for you!

Replies from: MrMind
comment by MrMind · 2016-01-21T09:08:35.170Z · LW(p) · GW(p)

Yeah, but why would it be inefficient?
Entry cost is negligible and the need for an open thread is very visible.
I would rather say that there's no market because there's no demand: nobody cares that much about gaining that karma. Or maybe it's not a high-status activity.

Maybe I've taboo'd non-MrMinds from doing it instead and legitimised a kind of proprietary right to it for you!

Perverse incentives are everywhere! ;)

comment by Elo · 2016-01-18T22:53:07.954Z · LW(p) · GW(p)

I wasn't watching... Should have jumped in myself... Sorry...

Replies from: MrMind
comment by MrMind · 2016-01-19T08:04:28.694Z · LW(p) · GW(p)

No problem, no harm has been effected, as far as I can see.

comment by [deleted] · 2016-01-18T12:53:31.870Z · LW(p) · GW(p)

Interested if anyone has thoughts/research on this question:

Are chickens affected by the hedonic treadmill? If so, are they equally, more, or less susceptible to it? What about pigs?

Replies from: None
comment by [deleted] · 2016-07-30T16:03:27.127Z · LW(p) · GW(p)

I have yet to find good research on this. However, if anyone out there believes that farm animals are affected by the hedonic treadmill, and that farm animal suffering causes great disvalue, prioritizing donations to the Humane Slaughter Association (HSA) might be a good idea. Part of HSA's mission is to reduce the suffering of animals during slaughter, and I find it unlikely that farm animals hedonically adapt during their short and often intensely painful deaths. It seems more likely that a chicken hedonically adapts during its time in a battery cage.

Brian Tomasik has a good piece on HSA here.

comment by [deleted] · 2016-01-23T11:45:17.624Z · LW(p) · GW(p)

What would be the optimal wording for a tattoo asking doctors to harvest one's organs for transplants if one happens to die?

Replies from: gwern, Tem42
comment by gwern · 2016-01-23T16:49:20.131Z · LW(p) · GW(p)

Have you checked first that tattoos do not affect organ donation eligibility, or have any legal/medical weight whatsoever compared to, say, an organ donor card or check on your driver's license?

Replies from: Tem42, None
comment by Tem42 · 2016-01-23T21:21:04.822Z · LW(p) · GW(p)

It would be worth double-checking your local regulations, but tattoos do not generally restrict you from organ donation. You should make sure you get your tattoo from a licensed business, of course.

As far as legal status -- that is a good question. I would think that as long as you updated it at least as often as you update your driver's licence, it would remain a valid indicator of your intent. That might mean adding a date to the tattoo, and adding another one every few years. You might contact your local hospital and see what they would do if they had a fresh corpse with no ID but an organ donor tattoo...

Replies from: ChristianKl
comment by ChristianKl · 2016-01-24T10:37:29.309Z · LW(p) · GW(p)

I would think that as long as you updated it at least as often as you update your driver's licence, it would remain a valid indicator of your intent.

Why? Doctors have procedures for how to deal with organ donations. That procedure means looking at driver's licenses and organ donor cards. There are huge legal risks for them in being creative.

Replies from: Tem42
comment by Tem42 · 2016-01-24T16:17:32.130Z · LW(p) · GW(p)

I had to give up on trying to find out if a tattoo can count as consent on its own -- I would guess that it would be iffy territory unless you had it notarized and witnessed.

It might still be worthwhile to have a tattoo; it does tell them that you have given consent, meaning that they will make an extra effort to look for consent (in the US this means a state database). This would only be relevant if you are found without your driver's license/ID. There are a number of fringe cases where you might be found dead or dying without easy access to your ID, but they are admittedly rare. They are also more likely to be cases where your organs aren't usable (fire, ravaged by bears, rip tide carries you out to sea). However, if the legal team gets any head start on finding a John Doe's organ donor status, on average this is likely to result in increased organ salvage.

Here's a revised suggestion, for social feasibility, effectiveness, and pain reduction: get a tattoo of a red heart and the words "organ donor" and your name in a protected area (e.g. on the side of your trunk, just below the armpit). Until RFID chips become common this is also probably one of your best protections against becoming a J. Doe (I mean, other than living a sane and safe life).

Replies from: ChristianKl
comment by ChristianKl · 2016-01-24T17:58:19.893Z · LW(p) · GW(p)

I had to give up on trying to find out if a tattoo can count as consent on its own

The core question isn't whether it can legally count as consent but whether the process that a medical team uses when it finds a dead body recognizes the tattoo.

Replies from: Tem42
comment by Tem42 · 2016-01-27T23:11:39.127Z · LW(p) · GW(p)

I am not a first responder, but if I had a pile of corpses and one of them had an organ donor tattoo, that corpse would definitely be flagged for special attention and quick transport to the morgue. I wouldn't count on it being legal for them to make an extra effort to ID one body before another just based on (suspected) organ donor status, but making it into the refrigerator a bit earlier is a benefit.

comment by [deleted] · 2016-01-23T19:12:33.426Z · LW(p) · GW(p)

I don't have a driver's license, but taking into account the possibility of ...eligibility and what Tem42 said, it would seem definitely better to go about getting myself a card.

Sorry, I just thought somebody could have already asked that before.

comment by Tem42 · 2016-01-23T16:55:41.752Z · LW(p) · GW(p)

I think that optimal design would include the red heart that is placed on driver licences (in most American states) and on NHS cards (in the UK), plus the words "Organ donor". You might also want to include your organ donor ID, but you might not... in the US this is (sometimes? usually?) your driver's licence number, which may not be something you need strangers to see when you are at the beach.

My understanding is that if you do not specify otherwise, it is assumed that they can take any organ they need, but if you wanted to clarify (or were worried that your relatives may get greedy about the parts you get buried with), I would expect that the words "no limitations" would be sufficient to allow the hospital to take any skin, eyes, etc., they feel they have a use for.

Optimal wording may be less important than optimal placement. I would assume on the chest over the heart would be least likely to be destroyed in an accident / most likely to be seen by first responders... Plus, if that is destroyed, the best organs are also likely to be damaged. However, if you want optimal, you should really get a set of tattoos -- one for the chest, one for the stomach, and one for the neck(?).

Replies from: None
comment by [deleted] · 2016-01-23T19:14:09.797Z · LW(p) · GW(p)

Damn, I'd better just get a card then. Thank you!

Replies from: Tem42
comment by Tem42 · 2016-01-23T21:13:36.203Z · LW(p) · GW(p)

I wasn't arguing against the tattoo! It sounds like a good idea, and more likely to be seen than the card. (However, you should get the card and then plot the tattoo. Being on the local database and having your wishes known by your next of kin is your best bet for your donation actually being effective.)

Replies from: None
comment by [deleted] · 2016-01-24T06:30:15.106Z · LW(p) · GW(p)

Yes, but I would rather suffer the small embarrassment of having the card with me on the beach than the pain of multiple tattoos, plus having to listen to my other-than-next-of-kin relatives' sighs and moans if they see the ones on appendages etc. (I have not had a single one yet, but I assume there is pain involved.)

comment by Panorama · 2016-01-21T20:42:45.073Z · LW(p) · GW(p)

Evidence for a distant giant planet in the Solar System

Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive. In this work we show that the orbits of distant Kuiper Belt objects (KBOs) cluster not only in argument of perihelion, but also in physical space. We demonstrate that the perihelion positions and orbital planes of the objects are tightly confined and that such a clustering has only a probability of 0.007% to be due to chance, thus requiring a dynamical origin. We find that the observed orbital alignment can be maintained by a distant eccentric planet with mass ≳10 m⊕ whose orbit lies in approximately the same plane as those of the distant KBOs, but whose perihelion is 180° away from the perihelia of the minor bodies. In addition to accounting for the observed orbital alignment, the existence of such a planet naturally explains the presence of high-perihelion Sedna-like objects, as well as the known collection of high semimajor axis objects with inclinations between 60° and 150° whose origin was previously unclear. Continued analysis of both distant and highly inclined outer solar system objects provides the opportunity for testing our hypothesis as well as further constraining the orbital elements and mass of the distant planet.

comment by CurtisSerVaas · 2016-01-19T23:04:17.261Z · LW(p) · GW(p)

There was a link (I think it was from Wedrifed) that allowed you to sort a particular user's posts/comments by karma (rather than by time). Does anybody know where that link is?

Replies from: gwern
comment by gwern · 2016-01-20T00:12:06.191Z · LW(p) · GW(p)

You mean Wei Dai's tool? eg http://www.ibiblio.org/weidai/lesswrong_user.php?u=gwern ? Works best with accounts with few comments...

Replies from: CurtisSerVaas
comment by CurtisSerVaas · 2016-01-20T01:51:46.232Z · LW(p) · GW(p)

Yep! Thanks!

Edit: I see what you mean about it being slow with accounts with lots of comments.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-01-22T18:10:45.323Z · LW(p) · GW(p)

And now lots of people are wondering who might be getting searched for insight. If it's Yvain, then there are curated lists for that...

Replies from: CurtisSerVaas
comment by CurtisSerVaas · 2016-01-23T04:30:40.811Z · LW(p) · GW(p)

I'm basically going back over the sequences and top posts of LW again. I'm already aware of curated lists for Yvain and Kaj, but I don't think there are curated lists for the other top all-time posters. Unfortunately, A. That tool doesn't really work well enough. B. The list of all time top-posters has disappeared, and I'm sure I'd forget some of them if I tried to go off the top of my head.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-01-23T09:15:06.720Z · LW(p) · GW(p)

There used to be a ZIP download with all the posts. I just can't find that link though. An alternative is to mirror the site with wget and grep that.
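
For the "grep that" step, here is a minimal Python sketch; it assumes you have already produced a local mirror with wget, and the directory name and search pattern are placeholders, not anything the site actually provides:

```python
import os
import re

# Hypothetical path to a local wget mirror of the site; point this at wherever
# `wget --mirror` saved the HTML files.
MIRROR_DIR = "lesswrong_mirror"
PATTERN = re.compile(r"Gunnar_Zarncke", re.IGNORECASE)  # example search term

# Walk the mirrored HTML files and print every file/line that matches.
for root, _dirs, files in os.walk(MIRROR_DIR):
    for name in files:
        if not name.endswith((".html", ".htm")):
            continue
        path = os.path.join(root, name)
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.strip()[:120]}")
```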

comment by Fluttershy · 2016-01-19T02:59:05.095Z · LW(p) · GW(p)

So, I only recently decided to start taking Vitamin D after reading Gwern's discussion of it here, and I've been wondering if there are other easy wins for extending one's healthspan/life expectancy/lifespan cheaply that we're collectively missing.

On one level, it seems like having individual LWers go out, read a number of research papers, and then do a cost-benefit analysis on an intervention has produced good research before, but this approach feels a bit unorganized to me.

So, part of me wonders if it might be a good idea to just pay someone (say, Gwern, or someone who used to work for MetaMed--not that I asked Gwern if he'd be up for the task before writing this) to go and see if there are any obvious interventions that we're not aware of. The writer could try to write a more complete version of Lifestyle Interventions to Increase Longevity, or they could just look for new interventions that we LWers have collectively overlooked, and publish a short summary of their findings, if any.

I'm mainly asking about this now to see if people think this is a good idea, but I hope that, in a year or so, I'd actually be able to put up a chunk of money for something like this to be done, if I still thought it was a good idea.

Replies from: James_Miller, Lumifer
comment by James_Miller · 2016-01-19T05:24:02.181Z · LW(p) · GW(p)

Other easy wins: The Squatty Potty, magnesium supplements, meditation, and donating blood if you are male.

Replies from: PipFoweraker, gwern, pepe_prime
comment by PipFoweraker · 2016-01-19T21:59:06.527Z · LW(p) · GW(p)

My experience with giving people the data behind squatting to go to the dunny is that their awkwardness about it strongly outweighs, initially, their willingness to experiment.

Which leads to the thought that there are probably some provably life-enhancing things that people don't even consider doing because it is so far outside their social mores that the possibility doesn't occur. I have had an entertaining few minutes trying to think of some that my great-descendants will be bewildered we didn't consider.

Replies from: James_Miller, ChristianKl
comment by James_Miller · 2016-01-20T01:15:23.913Z · LW(p) · GW(p)

Fecal transplants and cryonics.

comment by ChristianKl · 2016-01-19T22:03:22.091Z · LW(p) · GW(p)

Seth Roberts' suggestion of wearing nose clips while eating, for people who want to lose weight, probably falls under the kind of strong awkwardness that keeps people from even considering it.

comment by gwern · 2016-01-19T22:03:31.072Z · LW(p) · GW(p)

I gave squatting a try a few months back. You can do the same thing by grabbing two cinder blocks and positioning them on either side of the toilet with the seat up. It felt slightly easier to defecate, but I couldn't figure out how to use it with pants as easily as regular sitting; you need to get out of one leg, almost, for it to work. And taking off my pants every time I need to defecate is a pain in the ass.

Replies from: Tem42
comment by Tem42 · 2016-01-23T16:06:30.393Z · LW(p) · GW(p)

For many people who own their own homes it would actually be feasible to build or install a pit toilet. I do not know of anyone in America who has done so.

The cinder-block idea sounds unstable... but I haven't tried it. However, it seems that it should be fairly easy to train your body to go just before you take a shower, assuming you take showers on a predictable schedule, thus solving the undressing inconvenience.

Replies from: gwern, None
comment by gwern · 2016-01-23T16:46:34.192Z · LW(p) · GW(p)

No, the cinder-blocks were very stable. That was not the issue. I also think it's a little unreasonable to schedule your defecations and showers for the convenience of your squatting toilet rather than the other way around. Bidets are a big improvement but I'm not convinced by squatting for people without problems.

comment by [deleted] · 2016-01-24T06:49:16.179Z · LW(p) · GW(p)

An anecdote: it was easy to train myself to go before doing yoga-like exercises at home, which lasted more than an hour - although admittedly I was a teenager, one should have an instructor at hand at least in the beginning, one should shower after the exercises, and I did it 3-4 times a week.

However, it also (seemed to) improve sleep quality.

comment by pepe_prime · 2016-04-11T19:10:02.635Z · LW(p) · GW(p)

Could you elaborate on why squatting is a clear win? I took a brief look online and the evidence seems to favor squatting, but not hugely: https://skeptoid.com/blog/2015/09/26/squatty-potty/

Regardless, thanks for the list!

Replies from: James_Miller
comment by James_Miller · 2016-04-11T23:14:25.893Z · LW(p) · GW(p)

The cost of squatting is tiny, and part of the benefit is saved time so on net it seems like a clear win.

comment by Lumifer · 2016-01-19T05:59:39.457Z · LW(p) · GW(p)

Averages are pretty useless -- go to a doctor, ask for a full set of blood tests. And when I say "full", I mean ridiculously all-encompassing, if your doctor is OK with this. The printout of your results should take a couple of dozen pages.

Ask for copies of the lab results. Study them carefully and they will tell you personally what would be a good idea for your health.

Replies from: Tem42
comment by Tem42 · 2016-01-23T16:26:29.440Z · LW(p) · GW(p)

Is that working under the assumption that normalizing is better for your health? I don't think that I would trust myself or my doctor to optimize supplements based simply on what I am low in.

For example, normal vit. D3 levels are often set by the healthy level for Caucasians, with the result that Asians with healthy, normal levels for their genotype are flagged as dangerously low. This is not something that you can assume that your doctor is aware of.

However, the tests would give you some starting points for research. Also, I suspect that most doctors are not likely to offer much more than a chem-20, which I think is pretty useful across populations (IANAD) -- but also is probably not what you are recommending.

Replies from: Lumifer
comment by Lumifer · 2016-01-25T16:54:53.991Z · LW(p) · GW(p)

Is that working under the assumption that normalizing is better for your health?

No. That's working under the assumption that more information is better than less information.

This is not something that you can assume that your doctor is aware of.

I didn't say "listen to your doctor". I said "study them carefully".

most doctors are not likely to offer much more than a chem-20

Ask for specific, comprehensive panels. Do not go in saying "You think I should maybe get some tests?" :-/

comment by [deleted] · 2016-01-21T05:52:11.655Z · LW(p) · GW(p)

Slate Star Codex readers may remember the prime number factorisation experimental protocol (http://slatestarcodex.com/2015/04/21/universal-love-said-the-cactus-person/). It’s one of many dangerous but high-impact rationality experiments that I have had (no longer) an interest in testing. Before I got serious about rationality I was getting increasingly mentally ill. I was considered to be in the prodrome of schizophrenia and even experienced (though I was skeptical about the veracity of my memory, until this recent experience reminded me of the subjective experience) a ‘first episode of psychosis’. I believe that immersion in the rationality community helped me eventually get discharged from early-psychosis intervention, get taken off anti-psychotics, and no longer be considered psychotic in any way, shape or form. This is unprecedented, since it is generally believed that the progression of schizophrenia is such that there is no such remission. So I reckon it’s my social duty to explore this and see if my story can be of value to others: raising the sanity waterline, if you will.

This post will be the first in a multi-post series about the most recent such journey and why I have given it up. I hope it serves as a warning and caution to those with the gall to do similar things, and as some insight into the limits of rationality for more mainstream rationalists. If you are looking for something profound in this post and aren’t inclined to use your own powers to draw more from it than I saw straight up, I recommend you skip this one and keep an eye out for the follow-up posts, since this is basically background information.

Aboard the boat I had noticed a man wearing a plain pastel green shirt. No, that’s too generous: a plain pastel green tabard, a big nose and haphazardly braided hair. I assumed he was a kind of hippy tourist who was trying too hard and had gone out of his way to make his own clothing. I assumed a gentle nature. As I left the ship he shoved past me through a tight corridor. It would be the first in a string of assumptions I would make that night that would plunge me into and out of a harrowing world where I would explore the fringes of rationality.

Tonight was the culmination of years of planning and preparation. ‘Do you know where Mellow Mountain is?’ I asked the receptionist. She pointed at the raised platform in the dark of the rocks way above, looking over the waves. The stairs were rickety and lights didn’t pave the way. I had expected a happy-looking hut. I had passed two drunkards a km away who had pointed me in this direction and said that the mafia pay off the local authorities. I thought they were talking about the local mafia. I later had reason to believe they were referring to the Russian mafia. Open secrets. I had read online that the place was towards one end of the beach when I had chanced upon this place of accommodation. It was just what I was looking for: somewhere close by to retreat to. I thought it would be my sanctuary. I was so wrong.

There was a row of very dark-skinned Thais at a bench overlooking the steps. They didn’t look up as I passed; they all paid attention to their smartphones. The whole place was really odd. The music was lurid, not traditional drug music. It was a rather scary place. It was dark, and none of the Thais smiled. I looked at the first bar. There was marijuana art around. This wasn’t the place I was looking for. I went further in and saw a second bar, tucked into the corner. There were no signs, no prices, nothing advertised. This must be it.

I asked the bartender: ‘Can I have one?’ He asked, ‘What do you want?’ I felt a lump in my throat. If the rumours weren’t true, this could be a big mistake. Thailand takes drug crime very seriously. I asked for a ‘milkshake’. The man went to the back. It was an off-peak time, just like I wanted. There didn’t seem to be any eyes on me. I peeped over and observed. It looked like he and his coworker were preparing some kind of root by chopping it up. It was dark coloured. It didn’t look like a mushroom. The drink they prepared didn’t look anything like a milkshake: there was clearly no milk in it. It didn’t taste mushroomy. I drank it and retreated. The effects were initially mild. I waited to get a sense of how much lead time I had before I would need to retreat to a safer space, then downed a second drink. I saw boats driving around at sea. I thought maybe they were patrol boats sponsored by the local mafia to ensure there were no drug-related deaths at sea: a sure-fire way to curb tourism. From past experience I know I have an extreme tolerance for hallucinogenic experiences. For instance, with concentration and reason I can flux into and out of lucidity during DMT trips, irrespective of dose.

Soon I returned to those stairs, quite clearly affected, yet still not arousing the attention of the dark-skinned Thais above, who had probably seen and/or experienced all kinds of things in their day. I guessed they were the local mafia. In the lead-up to my return for seconds I had enjoyed embracing the sand, looked out at the waves from the beach, tried seeing whether I could have a more nuanced reading of Rationality: From AI to Zombies, and explored past memories, cognitive functioning and such. Now it was time for me to reach a so-called ‘Level 5 trip’ (https://en.wikipedia.org/wiki/Psychedelic_experience#Level_5_.28Ineffable.29).

What happened over the course of the night will have a separate post. This post has just been relevant background information. The rationality experiments and my interpretations are coming in the next one.

I will give the posts Latin names to make them easy to find. I don’t want to post too much, so I will combine the various maxims into, say, 2 separate posts. The maxims aptly describe the relevant themes:

Tantum nimirum ex publicis malis sentimus, quantum ad privatas res pertinet : nec in iis quicquam acrius quam pecuniae damnum stimulat. - We feel public misfortunes just so far as they affect our private circumstances, and nothing of this nature appeals more directly to us than the loss of money (Livy).

Bis interimitur qui suis armis perit - He is doubly destroyed who perishes by his own arms. (Syrus)

Acerrima proximorum odia - The hatred of those most nearly connected is the bitterest of all (Tacitus)

aegri somnia vana - a sick man's dream

Facilis descensus averno - The descent to Avernus (Hell) is easy (Virgil)

Graviora manent - Greater dangers await (Virgil)

Tempus edax rerum - Time is the devourer of things

Una salus victis nullam sperare salutem - The one safety for the vanquished is to abandon hope of safety (Virgil)

Acclinis falsis animus meliora recusat - The mind intent upon false appearances refuses to admit better things (Horace)

Replies from: None
comment by [deleted] · 2016-01-21T14:47:20.509Z · LW(p) · GW(p)

Ahh, screw the format. I’ll just post here to contain it in one place as it comes to me. I lie here in my hotel. Sometimes I think of relatively trivial matters compared to the recent tribulations: ‘I shouldn’t have prepaid my entire stay in this hotel. I should have taken it one night at a time,’ I think. That may be wrong. There were some parts of the events of last night where my intuitions were gravely wrong. Both the hallucinations and the delusions offered insights:

Let’s start with the comedic, but totally non-chronological. One gentleman at a bar I stopped at for some food seemed like mafia to me. It turned out he was a gay guy who, upon noticing me, did the surreptitious secret gay ritual for wanting me to follow him to his hotel room. If I had been in the mood and not drug-fucked, I might have obliged. Importantly, it’s unlikely that there would be an openly gay Russian mafioso, because of the homophobia among Russians and the machismo of criminals. I noticed upon leaving that the bar was called Same Same and the servers came across as pretty gay.

So I was delusional. My insight fluctuated throughout the night but remained under a threshold. For one, I couldn’t will myself to consciously test items on scales of insight that I otherwise can vaguely remember if I’m primed with the memory. And indeed, during the peak of the trip I did WANT to test myself since it often increases my self-awareness: I just couldn’t do it.

During that peak - what felt like several hours but I later inferred was under an hour, perhaps about 30 minutes or less - I pleaded out loud for anyone who might be able to hear me to evacuate me to a psychiatric ward or hospital immediately. It was, in retrospect, looking for positives, a respite from the paranoia about losing my valuables that has burdened me for the whole trip. Never mind that I could not reconcile my visual and auditory information, nor my olfactory sensory data, with one another. In a sense it was a moment of highly intentioned, high-stakes reason: reason uninhibited by cached thoughts associated with typical familiar sensory input.

I assume the auditory information was hallucinatory. I was outside later and there was a dying beach scene. It was already 10.30pm. Around 11.30pm, when I went outside again, I could see there was a huge party happening. This relieved a lot of my anxiety.

The greatest relief from this hell was talking to people. Once the hurdle of social anxiety was cleared, I landed in the real sanctuary: a safer, social environment. It took the group’s movement into the party side of the beach, filled with alcohol, for me to return to my room, finally having some semblance of peace. I loathe recreational self-harm such as alcohol and didn’t want to be part of it.

I consciously thought through several rationality maxims given in the Sequences. I tried various techniques I had thought of too. They just pushed me deeper into the rabbit hole. My anxiety built as I found them to be rather useless in this context. How had it got to this? Earlier I had been patting a nice cat that approached me while I sat on a bench up the road. I wasn’t at all scared of the dogs around me, including one missing a leg with a big bump on its head. I walked barefoot in quite an inappropriate place. It was a big downswing from when I literally hugged the beach sand. What marked the downswing for me was being shouted at by the Mellow Mountain milkshake seller. Or that’s what I remember remembering during my peak. It’s not clear to me whether that really happened. I was very afraid thereafter, and at that point was unsure what he was trying to tell me, but wasn’t able to figure out what to do to figure that out.

I waited to die. It was something to look forward to. I thought what positive thoughts I could: if I make it out of this, I have nothing to fear from, say, torture, since at least I can know that it is a finite thing with death as the end. I wasn’t even sure whether perhaps this psychological torture was the real reality, and what I had known before was actually a distorted memory, or some kind of false dimension, part of an infinite continuum of weirdness I was now in. At times, I suspected I was dead. At the time I was totally aware these ideas were absurd. They felt absurd, but they also felt like useful working hypotheses for figuring out how to play this game. I prayed to god. I tried closed-eye visualisation to hallucinate a meeting with god, who then proceeded to tell me that he had no more left to teach me. It was underwhelming, and I reckon it’s just my lack of creativity and pride speaking. I suppose it might be what they call closed-eye hallucinations.

Replies from: None
comment by [deleted] · 2016-01-21T15:17:34.224Z · LW(p) · GW(p)

I went on brief mental journeys. Things were still positive at this time. It was supposed to be a time of healing, retreating to my room. But it would be the lead-in to the negative parts. The mental journeys culminated in me making a few notes:

  • "My life is a fortress" - this was based on closed-eye hallucinations where everything was blocky, including my eyes.
  • "It's okay to try hard" - I don't remember the story behind this.
  • "And you cab (sic) make the profound out of nothing and ritual" - relating to how I was identifying meaning in trivial things. This conforms with priors around semiotics and psychiatry.

I had been to hell and back before: psychosis without drugs. This was a reminder of what it could be like and of the importance of mental health. I know that this probably won't cure depression, just give a brief serotonin boost - others have reported this - but I am now motivated to restart antidepressants and maybe antipsychotics.

I’m just on drugs, I thought. But that didn’t really help. I thought maybe it wasn’t mushrooms. It was the right duration, but so is LSD and some RCs (research chemicals). The dangers of the unknown substance really scared me. Also my history of mental health scared me, in that it may have precipitated an ongoing psychosis, or HPPD.

I think about the phone monkey cliff mafia. They might be disinformants.

Posts pop up on various Facebook groups urging foreigners not to comment to the media, or speak to any outsiders, until approval is given by key people on the island. Comments are deleted or self-censored. There is an appearance of a wall of silence, either for personal safety or to protect business interests. Various sock-puppet accounts appear on online message boards such as Thaivisa.com attempting to derail commentary on the incident, and the character assassination of the only witness begins.

Replies from: None
comment by [deleted] · 2016-01-22T05:20:38.079Z · LW(p) · GW(p)

Okay I CBF writing up a trip report so notes are dumped here and here in case you want to read into them.

comment by turchin · 2016-01-20T17:52:19.297Z · LW(p) · GW(p)

Rant mode on:

Whenever Hawking blurts something out, the mass media spread it around straight away. While he is probably on solid ground with black holes, when it comes to global risks his statements are not only false but, one could say, harmful.

So, today he has said that within the millennia to come we’ll face the threat of creating artificial viruses and a nuclear war. This statement brings all the problems to about the same distance as that to the nearest black hole.

In fact, both a nuclear war and artificial viruses are realistic right now and could be used during our lifetime with probability in the tens of percent.

Feel the difference between the chance of an artificial flu virus exterminating 90% of the population within 5 years (the rest would be finished off by other viruses) and suppositions regarding dangers over thousands of years.

The first thing is mobilizing, while the second one causes enjoyable relaxation.

He said: ‘Chances that a catastrophe on the Earth can emerge this year are rather low. However, they grow with time; so this undoubtedly will happen within the nearest one thousand or ten thousand years’

The scientist believes that the catastrophe will be the result of human activity: people could be destroyed by a nuclear disaster or the spread of an artificial virus. However, according to the physicist, mankind can still save itself. To this end, colonization of other planets is needed. Reportedly, Stephen Hawking has earlier stated that artificial intelligence would be able to surpass human intelligence in as little as 100 years.

Also, the statement that migration to other planets automatically means salvation is false. What catastrophe could we escape by having a colony on Mars? It would die off without supplies. If a world war started, nuclear missiles would reach it as well. In the case of a slow global pandemic, people would carry it there, just as they carry the AIDS virus now or used to carry plague on ships in the past. If a hostile AI appeared, it would instantly penetrate to Mars via communication channels. Even gray goo can fly from one planet to another. And even if the Earth were hit by a 20-km asteroid, the amount of debris thrown into space would be so great that it would reach Mars and fall there as a meteorite shower.

I understand that simple solutions are alluring, and a Mars colony is a romantic thing, but its usefulness would be negative. Even if we learned to build starships travelling at speeds close to that of light, they would primarily become a perfect kinetic weapon: the collision of such a starship with a planet would mean the death of the planet's biosphere.

Finally, some words about AI. Why 100 years, specifically? Talking about risks, we have to consider a lower time limit rather than a median. And the lower limit of the estimated time to create a dangerous AI is 5 to 15 years, not 100. http://www.sciencealert.com/stephen-hawking-says-a-planetary-disaster-on-earth-is-a-near-certainty

Rant mode off

Replies from: gjm, ChristianKl
comment by gjm · 2016-01-21T11:04:59.747Z · LW(p) · GW(p)

I think you're reading things into what he said that he never intended to put there.

His central claim is certainly not a reassuring, relaxing one: disaster is "a near certainty". He says it'll be at least a hundred years before we have "self-sustaining colonies in space" (note the words "self-sustaining"; he is not talking about a colony on Mars that "will die off without supplies") and that this means "we have to be very careful in this period".

Yes, indeed, the timescale on which he said disaster is a near certainty is "the next thousand or ten thousand years". I suggest that this simply indicates that he's a cautious scientific sort of chap and doesn't like calling something a "near certainty" if it's merely very probable. Let's suppose you're right about nuclear war and artificial viruses, and let's say there's a 10% chance that one of those causes a planetary-scale disaster within 50 years. (That feels way too pessimistic to me, for what it's worth.) Then the time until there's a 99% chance of such disaster -- which, for me, is actually not enough to justify the words "near certainty" -- is 50 log(0.01)/log(0.9) years ... or about 2000 years. Well done, Prof. Hawking!
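
For the arithmetic, here is a minimal sketch of that compounding calculation; the 10%-per-50-years figure is the assumption stated above, not a measured quantity:

```python
import math

p_per_period = 0.10   # assumed chance of planetary-scale disaster per 50-year period
period_years = 50
target = 0.99         # cumulative probability treated as "near certainty"

# Survival probability after n periods is (1 - p)^n; solve (1 - p)^n <= 1 - target.
n_periods = math.log(1 - target) / math.log(1 - p_per_period)
print(round(n_periods * period_years))  # about 2185 years
```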

Indeed, the statement that "migration to other planets automatically means salvation" is false. But that goes beyond what he actually said. A nuclear war or genetically engineered flu virus that wiped out most of the population on earth probably wouldn't also wipe out the population of a colony on, say, Mars. (You say "nuclear missiles would reach [Mars] as well", but why? Existing nuclear missiles certainly aren't close to having that capability, and there's a huge difference from the defensive point of view between "missiles have been launched and will hit us in a few minutes if not intercepted" and "missiles have been launched and will hit us in a few months if not intercepted".)

You ask "Why namely 100 years?" but, at least in the article you link to, Hawking is not quoted as saying that there are no AI risks on timescales shorter than that. Maybe he's said something similar elsewhere?

I really don't think many people are going to read that article and come away feeling more relaxed about humanity's prospects for survival.

Replies from: turchin
comment by turchin · 2016-01-21T18:03:01.586Z · LW(p) · GW(p)

I am commenting on the impression he conveys to the public: that risks are remote and space colonies will save us. He may privately have other thoughts on the topic, but this does not matter. We could reconstruct his line of reasoning, but most people will not do it. Even if he thinks risks are 1 per cent in the next 50 years, it results in 87 per cent in 10 000 years. I think that attempts to reconstruct someone's line of reasoning with the goal of getting a more comforting result are biased.
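
A quick check of that compounding arithmetic (a sketch that simply takes the stated 1-per-cent-per-50-years figure at face value):

```python
# Cumulative risk from 1% per 50-year period over 10,000 years (200 periods).
p, periods = 0.01, 10_000 // 50
print(round(1 - (1 - p) ** periods, 2))  # ~0.87
```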

For example, someone may say: "I want to kill children." But we know a priori that he is a clever and kind man. So maybe he started from the idea that he wants to make the world a better place, and maybe he just wanted to say that he would like to prevent overpopulation. But I prefer to take claims at face value. What matters is what was actually said, no matter who said it or what he may have thought but didn't say.

Self-sustaining Mars colonies would be able to create nukes on their own. If a war starts on Earth and we have several colonies on Mars built by different countries, they could start a war between each other too. In that case the travel time for nukes would still be minutes - from one point on Mars to another. The history of WW2 shows that war between metropoles often resulted in war in the colonies (North Africa).

Hawking has been quoted about AI in 100 years in another article about the same lectures.

Replies from: gjm
comment by gjm · 2016-01-21T19:39:46.499Z · LW(p) · GW(p)

87 per cent in 10 000 years

That's not my idea of "near certainty".

with the goal of getting a more comforting result

That is not my goal, and I have no idea why you suggest it is.

But I prefer to take claims at face value.

It doesn't appear to me that you are doing this in Hawking's case; rather, you are reading all sorts of implications into his words that they don't logically entail and don't seem to me to imply in weaker senses either.

they could start a war between each other too.

Sure, they could. But I don't see any particular reason to assume that they would.

Replies from: turchin
comment by turchin · 2016-01-21T19:59:51.268Z · LW(p) · GW(p)

We don't know what Hawking meant by "near certainty" - 90 per cent or 99.999 per cent - and depending on that we may come to different conclusions about what probability it implies for the next 100 years. Most readers will not do this type of calculation anyway. They will learn that global risk is something that could happen on a 1,000 - 10,000 year time frame, and will discount it as unimportant.

Your goal seems to be to prove that Hawking thinks global risks are real in the near-term future, while he said exactly the opposite.

A lot of media have started to report Hawking's claims in the following words: "Professor Stephen Hawking has warned that a disaster on Earth within the next thousand or ten thousand years is a ‘near certainty'." http://www.telegraph.co.uk/news/science/science-news/12107623/Prof-Stephen-Hawking-disaster-on-planet-Earth-is-a-near-certainty.html While the media may not be exact in repeating his claims and the wording is rather ambiguous, he hasn't clarified them publicly as far as I know.

About Mars: if colonies are built by nation states - say there are two colonies, American and Chinese - a war between the US and China will result in war between their colonies with high probability, because if one side chooses to completely destroy the other side and its second-strike capability, it has to destroy all of its remote military bases, which may have nukes.

Replies from: gjm
comment by gjm · 2016-01-21T20:41:44.967Z · LW(p) · GW(p)

He did not "say exactly opposite". He said: it'll be at least 100 years before we have much chance of mitigating species-level disasters by putting part of our species somewhere other than earth, so "we have to be very careful".

My goal is to point out that you are misrepresenting what Hawking said.

If colonies are built by nation states

If these are genuinely self-supporting colonies on another planet, I think it will not be long -- a few generations at most -- before they stop thinking of themselves as mere offshoots of whatever nation back on earth originally produced them. Their relations with other colonies on Mars (or wherever) will be more important to them than their relations with anyone back on earth. And I do not think they will be keen to engage in mutually assured destruction merely because their alleged masters back on earth tell them to.

(And if they are not genuinely self-supporting colonies, then they are not what Hawking was talking about.)

Replies from: turchin
comment by turchin · 2016-01-21T21:34:20.097Z · LW(p) · GW(p)

My criticism concentrates on two levels: on his wording and on his model of x-risks and their prevention. His wording is ambiguous when he speaks about tens of thousands of years - we don't have that long.

But I also think that his claims that we have 100 years (with small probability of extinction) and that space colonies are our best chance are both false.

Firstly, because we need strong AI and nanotech to create a truly self-sustaining colony. Self-replicating robots are the best way to build colonies. So we need to prevent the risks of AI and nanotech before we create such colonies. And I think that strong AI will be created in less than 100 years. The same may be said about most other risks - we could create a new flu virus even now, without any new technologies. A global catastrophe is almost certain in the next 100 years if we don't implement protective measures here on Earth.

Space colonies will not be safe from UFAI or from nanobots. Large spacecraft may be used as kinetic weapons against planets, so space exploration could create new risks. Space colonies will also not be safe from internal conflicts, as a large colony will be able to create nukes and viruses and use them against another planet, against another colony on the same planet, or even in internal terrorism. Only starships at near light speed may be useful as an escape mechanism, as they could help spread civilization through the Galaxy and create many independent nodes.

Our best options for preventing x-risks are international control systems for dangerous tech and, later, friendly AI; we need to do this now, and space colonies have only remote and marginal utility.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-22T10:03:28.477Z · LW(p) · GW(p)

But I also think that his claims that we have 100 years (with small probability of extinction)

His claim is that we have 100 years in which we have to be extra careful to prevent x-risk.

The same may be said about most other risks - we could create a new flu virus even now, without any new technologies.

With today's technology you could create a problematic new virus. On the other hand, that would hardly mean extinction. Wearing masks 24/7 to filter air isn't fun, but it's a possible step when we are afraid of airborne viruses.

Our best options for preventing x-risks are international control systems for dangerous tech and, later, friendly AI; we need to do this now, and space colonies have only remote and marginal utility.

It's not like Hawking doesn't call for AGI control.

comment by ChristianKl · 2016-01-21T09:58:33.642Z · LW(p) · GW(p)

So, today he has said that within the millennia to come we’ll face the threat of creating artificial viruses and a nuclear war. This statement brings all the problems to about the same distance as that to the nearest black hole.

Do you really think that it's Hawking's position that at the moment there's no threat of nuclear war?

Replies from: turchin
comment by turchin · 2016-01-21T17:41:14.345Z · LW(p) · GW(p)

I don't think that he thinks so. I am commenting on the impression he conveys to the public: that risks are remote and space colonies will save us. He may privately have other thoughts on the topic, but this does not matter.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-21T21:22:23.417Z · LW(p) · GW(p)

I don't think that it makes sense to give the full responsibility for a message to a person that's distinct from the author of an article.

That said, I don't think that saying "Although the chance of a disaster on planet Earth in a given year may be quite low, it adds up over time, becoming a near certainty in the next thousand or ten thousand years" makes any reader update to believe that the chance of nuclear war or genetically engineered viruses is lower than they previously expected.

Talking with mainstream media inherently requires simplifying your message. Focusing the message on the compounding of risk over time doesn't seem wrong to me.

Replies from: turchin
comment by turchin · 2016-01-21T21:42:43.434Z · LW(p) · GW(p)

If he wrote an article about his understanding of the x-risk timeframe and the prevention-measures timeframe with all the fidelity he uses to describe black holes, we could concentrate on that.

But for now it may be wise to say that the media have wrongly interpreted his words and that he (probably) meant exactly the opposite: that we must invest in x-risk prevention now. The media publications are the only thing with which we can argue. Also, I think he should take more responsibility when talking to the media, because he is a guru and everything he says may be taken uncritically.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-21T21:53:19.891Z · LW(p) · GW(p)

Even the article says we have to be extra careful with x-risk prevention in the next 100 years because we don't have a self-sustaining Mars base. I think you are misreading the article when you say it argues against investing in x-risk prevention now.

comment by Good_Burning_Plastic · 2016-01-27T19:16:17.393Z · LW(p) · GW(p)

Comments by The Lion show up on his overview page but no longer in their original context (the permalinks say "There doesn't seem to be anything here.") What gives? In particular, that of 27 January 2016 02:16:08AM turned my inbox icon red but didn't show up in my inbox, which confused me.

Replies from: polymathwannabe, polymathwannabe
comment by polymathwannabe · 2016-01-29T01:43:42.796Z · LW(p) · GW(p)

He has now created a new username, The Lion 2, to repost his old posts and threaten the moderators.

Edited to add: As of now, he has gathered 65 karma points (87% positive) in less than a full day.

Replies from: Zubon
comment by Zubon · 2016-01-29T20:34:27.465Z · LW(p) · GW(p)

That seems like really sloppy sockpuppetry. Wouldn't that just tell admins which other accounts are likely also the same person, so ban the lot of them?

Replies from: Vaniver
comment by Vaniver · 2016-01-29T20:37:24.174Z · LW(p) · GW(p)

Let's not publicly discuss flaws in plans to evade admin action.

Replies from: Lumifer
comment by Lumifer · 2016-01-29T20:56:54.576Z · LW(p) · GW(p)

Why not?

I am not buying this argument in the context of national security, why in the world would it apply here? I didn't pinky swear to uphold the power of moderators.

Besides, it's not like we're talking about non-obvious things.

Replies from: Vaniver
comment by Vaniver · 2016-01-29T21:35:26.744Z · LW(p) · GW(p)

I am not buying this argument in the context of national security, why in the world would it apply here?

There's an asymmetry between discussing flaws in the admin's plans and discussing flaws in the attacker's plans, which is significant enough that the first can be a public service and the second a public disservice.

The national security analog of the former is pointing out security holes, and the analog of the latter is giving helpful advice to terrorists.

Besides, it's not like we're talking about non-obvious things.

If it is truly obvious, then there is nothing to be gained by saying it; if it is not obvious, then there is something lost by saying it.

Replies from: Lumifer
comment by Lumifer · 2016-01-29T21:44:15.894Z · LW(p) · GW(p)

There's an asymmetry between discussing flaws in the admin's plans and discussing flaws in the attacker's plan

Not quite, the defence and the attack are a matching zero-sum pair. Aiding one disadvantages the other.

The national security analog of the former is pointing out security holes, and the analog of the latter is giving helpful advice to terrorists.

Pointing out security holes is routinely called "giving helpful advice to terrorists" (or other members of the unholy triad, child pornographers and drug dealers).

comment by polymathwannabe · 2016-01-27T19:42:09.817Z · LW(p) · GW(p)

I opened a dozen permalinks of replies to his comments, then clicked the upward "Parent" link, and all of them showed "Comment deleted." Someone has been systematically deleting his comments.

comment by [deleted] · 2016-01-22T02:39:27.207Z · LW(p) · GW(p)

I silently think I have conservative political values, yet my private lifestyle is anything but. In real life, I generally espouse fairly conservative views too, in contrast to my online posting behaviour. It’s one reason I am hesitant to be completely transparent about the link between my LessWrong/reddit identities and my everyday physical-world identity.

I reckon it’s okay to have different attitudes to public and private life, since governance is different from running your own life. However, the hypocrisy kinda unnerves me. My intuition is that conservative aesthetics make me less anxious: people around me will be more predictable, and that’s a society I will feel safer in. But I don’t trust the world to provide me with an acceptable life, so I live a free life of experimentation privately. Should I redefine my values or my lifestyle?

Replies from: polymathwannabe, Strangeattractor, Tem42
comment by polymathwannabe · 2016-01-22T15:28:22.306Z · LW(p) · GW(p)

I have liberal politics and choose to live a very restrained personal life. I don't see any incompatibility. Why do you feel hypocritical?

Replies from: gjm
comment by gjm · 2016-01-22T23:15:19.772Z · LW(p) · GW(p)

Oversimplifying:

  • Liberals say "do what you like". If you are liberal and choose not to do X, Y, and Z then you are not violating any of your espoused principles.
  • Conservatives say "don't do X, Y, or Z". If you are conservative and choose to do X, Y, and Z then you are violating your espoused principles.

(I repeat, the above is an oversimplification, and in fact there are things most liberals say not to do and most conservatives don't object to.)

comment by Strangeattractor · 2016-01-26T00:53:44.042Z · LW(p) · GW(p)

Is conservatism something you appreciate aesthetically? A tribe you like belonging to more than the "liberal" tribe? Something closer or farther away from what you perceive to be reality than the alternatives?

I don't know how to give advice based on the question as you've stated it. Or, rather, my advice would be to pose a better question, not necessarily here, but to yourself. Figure out where the conflicts are and evaluate each one individually, instead of generalizing to all of them.

Replies from: None
comment by [deleted] · 2016-01-26T03:59:22.229Z · LW(p) · GW(p)

Is conservatism something you appreciate aesthetically?

Yes

? A tribe you like belonging to more than the "liberal" tribe?

False choice. In Australia, the liberal and conservative tribes are one and the same for historical reasons. I'm from Australia.

Something closer or farther away from what you perceive to be reality than the alternatives?

It's about the possible utilities from the value system, rather than about reality in the sense of predictive validity.

Or, rather, my advice would be to pose a better question, not necessarily here, but to yourself.

I'll meditate on that

Figure out where the conflicts are and evaluate each one individually, instead of generalizing to all of them.

Ok

comment by Tem42 · 2016-01-23T16:30:07.716Z · LW(p) · GW(p)

One possible interpretation of this is that you are more liberal when surrounded by people whose judgement you trust -- which is a sane and defensible position. You should give more trustworthy (and more rational) people more leeway in their behaviors.

Replies from: None
comment by [deleted] · 2016-01-24T09:35:21.631Z · LW(p) · GW(p)

I like how that sounds

comment by Brillyant · 2016-01-18T18:47:07.973Z · LW(p) · GW(p)

Which is the best online dating site or dating app? Why?

Replies from: username2, Viliam, ChristianKl, CurtisSerVaas, MrMind, Gunnar_Zarncke
comment by username2 · 2016-01-18T22:29:17.998Z · LW(p) · GW(p)

OKCupid has way more people from geeky demographics than most other dating sites.

comment by Viliam · 2016-01-18T20:59:19.928Z · LW(p) · GW(p)

LessWrong meetups obviously. Unless you want to date irrational people. :D

comment by ChristianKl · 2016-01-18T23:27:47.654Z · LW(p) · GW(p)

That likely depends on where you live and what people you are seeking. In many areas OkCupid is good because a lot of interesting people sign up for it. Its matching algorithm also allows you to search for people with similar personalities.

comment by CurtisSerVaas · 2016-01-19T22:59:49.734Z · LW(p) · GW(p)

Surprisingly, I've had more success meeting nerdy types on Tinder than OkCupid.

I think this might be because there are more people on Tinder.

comment by MrMind · 2016-01-19T09:29:42.605Z · LW(p) · GW(p)

Are you good looking? What kind of relationship are you searching for? Where do you live?

Replies from: Brillyant
comment by Brillyant · 2016-01-20T14:46:19.811Z · LW(p) · GW(p)

I'm average. Monogamous. U.S.

I'm looking for more of a meta analysis, though.

Generally, my experience has been that ease of access is negatively correlated with the overall positivity of my experience.

Pay sites seem to eliminate a lot of insincere people, where apps like Tinder (free and almost instant setup) attract many flaky people, making it almost unusable in my experience.

Replies from: MrMind
comment by MrMind · 2016-01-21T09:01:02.989Z · LW(p) · GW(p)

Then I would say that your analysis is correct: in my experience, Tinder and the like are more suited to the good-looking, younger, fling-searching crowd; in your case you'd be better off with Match.com and eHarmony.

Replies from: Tem42
comment by Tem42 · 2016-01-23T16:39:38.077Z · LW(p) · GW(p)

My experience is that pay sites are geared most towards signaling professionalism and (moderate) wealth -- which is often shorthand for 'adult', and therefore useful. However, in my experience OKCupid has provided better signaling for intelligence and thoughtfulness, simply because it allows users to write commentary on any question you answer. Most users do not take advantage of this, but looking for 'explained answers' is one of the most useful metrics I have found on any dating site.

comment by Gunnar_Zarncke · 2016-01-18T22:20:38.091Z · LW(p) · GW(p)

That heavily depends on your country. For Germany see here.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-19T11:49:28.066Z · LW(p) · GW(p)

Why do you consider that link to be good? It's a website that makes money by directing people to dating sites with affiliate programs, and therefore doesn't list free websites like OkCupid or Finya.

Paid websites not only mean that you pay money, but also that there are people reading your messages and generally trying to manipulate you into spending money.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-01-19T19:18:12.557Z · LW(p) · GW(p)

Your criticism of the linked site is valid. Nonetheless it does list the platforms with the largest number of customers.

I tried Okcupid and the number of people there seems to be quite small compared to these platforms (I did like the flexible site mechanics though). Actually I will try one or two of these paid services shortly.

One reason people might pay for such services is because paying signals that you are willing to invest in dating and by extension the partner. We will see.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-19T19:44:58.043Z · LW(p) · GW(p)

A while ago I was at the community camp in Berlin, where the CEO of FriendScout24 and the CEO of ElitePartner were in attendance. That gave me an interesting perspective on how those sites work.

For them it's important that you pay for the service. That means they have humans reading your messages to check that you don't give the other person your email address in the first message. They also let you write messages to people who can see that they have received a message but need to upgrade their account to actually read it.

Apart from that I don't want to stop you from trying any of them as the invested money isn't that big compared to the possible benefit.

Nonetheless it does list the platforms with the largest number of customers.

Customers, in the sense that people pay for the service. Not customers in the sense of users. Finya has a large number of users in Germany.

One LW person recently wrote on his Facebook feed that a while ago he reasoned there were 1,000 people in the world who fit his list of potential partners. Then he went on to reason that that's no problem, because he can actually meet those people.

Multiple people I know in the German LW scene have talked about having OkCupid profiles, while I don't think anybody has talked about having a profile on other sites.

comment by gjm · 2016-01-27T17:36:09.462Z · LW(p) · GW(p)

Mark Zuckerberg of Facebook wants to build an AI. That's a Facebook link; for anyone who for whatever reason doesn't want to go there, the Hacker News discussion includes one comment containing all Zuckerberg's text.

comment by knb · 2016-01-20T03:45:52.529Z · LW(p) · GW(p)

Oil prices have recently fallen to near record lows. What are the risks and benefits?

Risks:

  • Increased instability in the Mideast as governments are less able to buy the loyalty of the people.
  • Likely slow the transition to electric vehicles, may slow the overall decline of carbon emissions.
  • Many job losses in petroleum-producing locations, especially higher cost producers.

Benefits:

  • Possibly stimulate the global economy, especially good for fast-developing oil importers like China and India.
  • Might push oil producers toward economic reforms that will be needed eventually anyway.
  • Might help avert some of the resource-war scenarios that have been described as inevitable by Malthusians.

Debatable:

  • Geopolitical power relatively shifts away from Russia, Iran, Venezuela, Saudi Arabia, toward NATO/OECD and China.

Any important dynamics I'm missing?

Replies from: MrMind, _rpd, Richard_Kennaway
comment by MrMind · 2016-01-20T08:16:22.676Z · LW(p) · GW(p)

Only two nuances:

  • Venezuela is / will very shortly also experience political instability;
  • Iran has just seen heavy international sanctions lifted, so while its oil will be valued less than it could have been, it will still provide a push to the economy.
Replies from: None
comment by [deleted] · 2016-01-21T06:07:21.819Z · LW(p) · GW(p)

Finally my readings of the Stratfor WikiLeaks documents about Venezuela are topical!

comment by _rpd · 2016-01-21T18:35:25.244Z · LW(p) · GW(p)

We can expect lower food prices. High food prices have been an important political stressor in developing nations.

comment by Richard_Kennaway · 2016-01-20T14:26:24.844Z · LW(p) · GW(p)

Any important dynamics I'm missing?

Saudi Arabia flooded the market in order to reduce the price, so as to combat the benefit to Iran of the lifting of sanctions.

Replies from: ChristianKl, knb, polymathwannabe
comment by ChristianKl · 2016-01-21T10:19:41.522Z · LW(p) · GW(p)

Saudi Arabia flooded the market

Their oil production didn't rise that much. They didn't really flood the market. They mainly decided not to cut their production.

Replies from: _rpd
comment by _rpd · 2016-01-21T18:25:10.261Z · LW(p) · GW(p)

They mainly decided not to cut their production.

And there is a good reason for this decision. Saudi Arabia tried cutting production in the '80s to lift prices, and it was disastrous for them. Here's a blog post with nice graphs showing what happened ...

Understanding Saudi Oil Policy: The Lessons of ‘79

comment by knb · 2016-01-21T09:13:20.581Z · LW(p) · GW(p)

That's one argument. Another common argument is that they want to increase their market share and kill the US tight oil industry.

comment by polymathwannabe · 2016-01-20T15:48:47.106Z · LW(p) · GW(p)

That's a strange move. Right now the last thing oil producers need is even lower prices.

Replies from: Lumifer, MrMind
comment by Lumifer · 2016-01-20T16:13:45.750Z · LW(p) · GW(p)

You're assuming they are maximizing money. That is not so.

comment by MrMind · 2016-01-21T08:49:54.074Z · LW(p) · GW(p)

Consider that Saudi Arabia and Iran are on opposing sides of Islam's sectarian divide, so the reason was to stunt the sudden economic growth of their enemy.
Yes, for centuries people have been killing each other over who was the correct successor of Muhammad, his cousin or his friend. Welcome to planet Earth...

Replies from: polymathwannabe
comment by polymathwannabe · 2016-01-21T16:50:36.915Z · LW(p) · GW(p)

the reason was to stunt the sudden economic growth of their enemy

I get that. But by cutting their own lifeblood? On what basis do Saudi strategists estimate that the damage to the Saudi economy will be small enough compared to the damage to the Iranian one?

comment by MrMind · 2016-01-19T09:31:24.051Z · LW(p) · GW(p)

I would love to see a CFAR SuperBetter pack.

comment by WhyAsk · 2016-01-24T17:36:04.427Z · LW(p) · GW(p)

Why does my Karma score keep increasing when I don't do anything? It's a disincentive to post. . .?

Replies from: tut, Dagon
comment by tut · 2016-01-24T19:43:38.598Z · LW(p) · GW(p)

Presumably people read your old comments and upvote them.

comment by Dagon · 2016-01-25T17:10:11.228Z · LW(p) · GW(p)

I think tut has it right: old articles still get some votes. But why is it a disincentive? I'd think it would make you (slightly) more willing to take a risk and post something that might get downvoted, since you don't expect a marginal downvote to matter as much based on your background karma income.

Replies from: gjm
comment by gjm · 2016-01-25T17:22:17.217Z · LW(p) · GW(p)

I think WhyAsk's model of the situation was "get karma while not doing anything" rather than "get karma independently of what you're doing". (WhyAsk, the reality is much nearer the latter than the former.)

Replies from: WhyAsk
comment by WhyAsk · 2016-01-25T19:41:17.591Z · LW(p) · GW(p)

A lot of what I was sure of, I'm not any longer... :D

comment by [deleted] · 2016-01-22T03:12:56.986Z · LW(p) · GW(p)

Why isn't there empirical evidence in the Wikipedia article on investment strategy? Are the hypotheses financial engineers make unscientific?

Replies from: IlyaShpitser, Lumifer
comment by IlyaShpitser · 2016-01-22T04:33:18.222Z · LW(p) · GW(p)

Markets are not like Nature, they are much more adversarial.

Replies from: None
comment by [deleted] · 2016-01-22T05:22:43.165Z · LW(p) · GW(p)

Even if they are better modeled as game-theoretic processes, surely that could still be inferred empirically?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2016-01-22T05:37:41.763Z · LW(p) · GW(p)

I don't know what you mean by "inferred empirically." If you mean "statistical inference," there are tons of unstated assumptions that basically assume the observed object is benign or indifferent. There is work in machine learning on learning in adversarial settings, but it's a much harder problem. Markets are super adversarial, and in addition there are incentives against publishing sensible analyses (why give away money to hostile/competing interests?)


edit: Sorry, I should say "tons of assumptions." People state them, and it's clear they are benign, e.g. samples are i.i.d.
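
A minimal illustration of the i.i.d. point above (my own sketch, not anything from the thread; all numbers and names are made up): a strategy whose edge is estimated from historical samples looks fine under the i.i.d. assumption, but if the process generating the data reacts to the strategy being deployed, the historical estimate says nothing about what comes next, and the standard machinery gives no warning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Benign / i.i.d. world: future "returns" come from the same distribution as the past.
past = rng.normal(loc=0.05, scale=1.0, size=5000)        # historical returns of some signal
future_iid = rng.normal(loc=0.05, scale=1.0, size=5000)

# Adversarial world: once the strategy is acted on, the edge is competed away
# (crudely modeled here as the mean flipping sign after the estimation period).
future_adversarial = rng.normal(loc=-0.05, scale=1.0, size=5000)

# The i.i.d. inference: expected future return is approximately the historical mean.
estimate = past.mean()

print(f"estimated edge from history:      {estimate:+.3f}")
print(f"realized edge, i.i.d. world:      {future_iid.mean():+.3f}")
print(f"realized edge, adversarial world: {future_adversarial.mean():+.3f}")
```

Nothing in the historical data distinguishes the two worlds until after you act, which is the sense in which markets are adversarial rather than merely changing.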

Replies from: None
comment by [deleted] · 2016-01-22T05:59:27.116Z · LW(p) · GW(p)

How interesting! I've seen work on game-theoretically optimal poker playing. I can only imagine how sophisticated the market versions would be. Looking forward to seeing a Wikipedia page on the topic one day :)

Replies from: ChristianKl
comment by ChristianKl · 2016-01-22T10:52:32.770Z · LW(p) · GW(p)

Poker is a game with known rules. In investing the rules are not known in the same way. Nassim Taleb calls equating the two the ludic fallacy.

Replies from: None
comment by [deleted] · 2016-01-22T11:49:47.387Z · LW(p) · GW(p)

Who?

Replies from: gjm
comment by gjm · 2016-01-22T12:04:01.318Z · LW(p) · GW(p)

Nassim Nicholas Taleb, author of the somewhat well-known book "The Black Swan". Former (successful) trader and (not so successful) hedge fund manager.

Replies from: None
comment by [deleted] · 2016-01-23T00:56:33.319Z · LW(p) · GW(p)

Interesting!

comment by Lumifer · 2016-01-22T03:25:52.550Z · LW(p) · GW(p)

Are the hypotheses financial engineers make unscientific?

Depends on your definition of "scientific".

Markets change. What used to be true ten years ago might not be true now.

Replies from: ChristianKl, None
comment by ChristianKl · 2016-01-22T10:54:10.316Z · LW(p) · GW(p)

Depends on your definition of "scientific".

In this case it seems to mean "What Wikipedia considers scientific enough to back up a claim".

comment by [deleted] · 2016-01-22T05:24:40.050Z · LW(p) · GW(p)

Nature changes too. In both cases, I believe, there are identified causal determinants of behaviour, and in both cases these can change over time. Still, the scientific process can be applied meta-inductively to markets to identify whether, and under what circumstances, financial news or academic papers give useful tips.

In fact, now I'm really curious to know about literature on that very topic.

comment by WhyAsk · 2016-01-18T19:45:53.634Z · LW(p) · GW(p)

Cost of being less wrong: increased cognitive load?

Benefit of being less wrong: longer life expectancy?

Risk of being less wrong: becoming a pariah in most crowds?

Replies from: username2, None, None
comment by username2 · 2016-01-18T22:26:21.387Z · LW(p) · GW(p)

Finding a crowd that allows your abilities to bloom is a useful skill as well :)

Replies from: WhyAsk
comment by WhyAsk · 2016-01-19T01:40:41.163Z · LW(p) · GW(p)

That's one reason I'm here, but in the limited time the mortality tables give me, I'd like to find a way to present myself favorably to almost any crowd.

In the past, very few have cheered me on and a more vocal few have fervently hoped I'd fail.

Replies from: TimS
comment by TimS · 2016-01-19T15:40:31.444Z · LW(p) · GW(p)

There's no reason you should accidentally become a pariah simply because you have clarified your goals or gotten better at implementing them.

One possibility: what has changed is your estimate of how many people are not friends to you. That sucks, but you can't force another person to be a good person to you.

Remember the right way to approach someone-is-wrong-on-the-internet, and apply the same principle to in-person interactions.

I'd like to find a way to present myself favorably to almost any crowd.

This is a much harder, and dramatically different goal, from not being a pariah.

Replies from: PipFoweraker
comment by PipFoweraker · 2016-01-19T22:03:12.840Z · LW(p) · GW(p)

There is value in having crowds that view you disfavourably, whether mildly or strongly, but much of this value depends on the rule of law in one's immediate environment.

comment by [deleted] · 2016-01-21T06:08:01.532Z · LW(p) · GW(p)

Cost of being less wrong: increased cognitive load?

I was very inefficient when I was more wrong. Definitely lower cognitive load now.

comment by [deleted] · 2016-01-19T20:34:55.285Z · LW(p) · GW(p)

Cost: Not much that I wasn't already doing, less optimally. Benefit: Social group that I can count on to at least TRY to be epistemically honest. Also, openness towards odd people. Risk: Pariah? I'm already considered odd by a lot of people.

Replies from: WhyAsk
comment by WhyAsk · 2016-01-20T21:15:17.495Z · LW(p) · GW(p)

:D

The textbooks written about my personality type say I'm "eccentric".