Open thread, Nov. 02 - Nov. 08, 2015

post by MrMind · 2015-11-02T10:07:16.681Z · LW · GW · Legacy · 195 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

195 comments

Comments sorted by top scores.

comment by ChristianKl · 2015-11-02T22:22:42.964Z · LW(p) · GW(p)

I'm in the process of reading Kuhn's "The Structure of Scientific Revolutions". It's interesting, but quite dated. Is there a 21st century book on the history and philosophy of science that you would recommend?

Replies from: Daniel_Burfoot, Strangeattractor, Douglas_Knight, username2
comment by Daniel_Burfoot · 2015-11-03T18:50:42.963Z · LW(p) · GW(p)

Maybe there are more up-to-date books, but it is hard for me to imagine any book having more insight per page than SSR. It is also incomparably well-written; even if you don't believe any of the philosophical claims, it is worthwhile simply as a lesson in how to write engagingly about scientific topics.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-03T21:17:59.790Z · LW(p) · GW(p)

but it is hard for me to imagine any book having more insight per page than SSR.

Yes, it's the book where I highlighted the most passages on my Kindle.

I don't want to discourage anybody from reading it. At the same time, I want to know what to read as a follow-up.

comment by Strangeattractor · 2015-11-06T08:49:52.649Z · LW(p) · GW(p)

It's not exactly on the same topic, but I liked The Nature of Technology by Brian Arthur. When I read it, at first it seemed like he was stating the obvious, but over time I've thought about it more, and realized that a lot of people don't understand what is in his book, and that his framework is more useful than many others.

comment by Douglas_Knight · 2015-11-09T00:11:07.344Z · LW(p) · GW(p)

I haven’t read it, and I’m not sure it’s really on the same topic, but a lot of people like The Golem by Collins and Pinch (1993/1998).

How can a work on the history and philosophy of science be outdated? I suppose new information could rewrite history, but I don’t think that happened. Philosophy is more likely to change, particularly as scientists respond to Kuhn, but largely, they didn’t.

Replies from: gwern, ChristianKl
comment by gwern · 2015-11-09T19:00:30.541Z · LW(p) · GW(p)

I suppose new information could rewrite history, but I don’t think that happened.

New information is possible, and so are new representations and analyses of old information. I don't remember if Kuhn himself focused on the case of Galileo, but a lot of people took him to be a paradigmatic case (sorry!), and Feyerabend undermined a lot of that through close re-examination of primary sources, in support of his own particular philosophy of science.

comment by ChristianKl · 2015-11-09T01:10:08.480Z · LW(p) · GW(p)

How can a work on the history and philosophy of science be outdated?

Mainly 50 years of new history happened. People came up with concepts like "evidence-based medicine" and a bunch of concepts about how science is supposed to progress.

Philosophy is more likely to change, particularly as scientists respond to Kuhn, but largely, they didn’t.

After dealing a bit more with HPS (history and philosophy of science), I get the impression that logical positivism simply ignored the arguments made against it. The New Atheist crowd simply rejects criticism of logical positivism as abstruse postmodernism, but I have never heard anyone actually engage with the kind of arguments that Kuhn makes.

After I wrote the post I found a lecture series by Hakob Barseghyan. He makes a lot of sense, and yet, for some reason, HPS isn't in popular culture. I don't understand why HPS doesn't get taught in high schools.

Replies from: Lumifer
comment by Lumifer · 2015-11-09T16:26:45.810Z · LW(p) · GW(p)

People came up with concepts like "evidence-based medicine"

That's not a new concept. That's a straightforward application of the scientific method (and some common sense) to the area which stubbornly resisted and continues to resist it.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-09T19:56:07.048Z · LW(p) · GW(p)

That's a straightforward application of the scientific method (and some common sense) to the area which stubbornly resisted and continues to resist it.

I think both Kuhn and Barseghyan would say that there isn't a single thing that's "the scientific method" and that believing in such a thing isn't defensible when you look at the history of science.

comment by [deleted] · 2015-11-02T11:39:03.723Z · LW(p) · GW(p)

Excerpts from "Explaining and inducing savant skills: privileged access to lower level, less-processed information" by Allan Snyder, available here

There are now several accounts of artificially induced savant-like skills, in drawing, proofreading, numerosity and false memory reduction, all by inhibiting the LATL with repetitive transcranial magnetic stimulation (rTMS; Snyder et al. 2003, 2006; Young et al. 2004; Gallate et al. 2009).

(i) Why does becoming more literal enhance numerosity?

We argue that the estimation of number by normal people is performed on information after it has been processed into meaningful patterns. The meaning we unconsciously assign to these patterns interferes with our accuracy of estimation, whereas savants, by virtue of being literal, have less interference.

This insight has an important generalization. The healthy brain makes hypotheses in order to extract meaning from the sensory input, hypotheses derived from prior experience (Gregory 1970, 2004; Snyder & Barlow 1988; Snyder et al. 2004). So judgements in general are likely to be performed on this hypothesized content, not on the actual raw sensory input. This suggests the possibility of artificially reducing certain types of false memories and prejudice by making a person more literal, as well as enhancing creativity (Snyder et al. 2004).

The majority of savants are autistic. Why not all? Autistic spectrum disorders encompass a hugely diverse population. However, it may well be that autistic savants represent autism in its purest form, uncontaminated by learned algorithms and other disorders that are frequently associated with autism. In other words, autistic savants typify an idealized, pure autism, most closely identified with Kanner's (1943) infantile autism—a mind in a protracted state of infancy (Snyder et al. 2004), a preconceptual mind that thinks in detail, rather than through concepts. This oversimplifying caricature goes some way to explain why all autistic people are not savants.

But, creativity would seem to require that we, at least momentarily, free ourselves of previous interpretations. Such literalness is a consequence of privileged access and thus gives insights into the so-called autistic genius (Snyder 2004) as well as hints to artificially enhance creativity (Snyder et al. 2004).

The classical portrait of autism is that of rigid insistence on sameness, rote memory and significant learning disabilities. Even autistic savants are the antithesis of creative, being largely imitative: ‘there are no savant geniuses about… Their mental limitations disallow and preclude an awareness of innovative developments’. (Hermelin 2001, p. 177).

The fact that genius might fall within the autistic spectrum challenges our deepest notions of creativity. Are there radically different routes to creativity: normal and autistic? The autistic mind builds from the parts to the whole—a strategy ideally suited to working within a closed system of specified rules. By contrast, the ‘healthy’ mind appears to make unexpected connections between seemingly disparate systems, inventing entire new systems rather than finding novelty within a previously prescribed space

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-11-03T16:36:00.039Z · LW(p) · GW(p)

Closely related: When the world becomes ‘too real’: a Bayesian explanation of autistic perception

Perceptual experience is influenced both by incoming sensory information and prior knowledge about the world, a concept recently formalised within Bayesian decision theory. We propose that Bayesian models can be applied to autism – a neurodevelopmental condition with atypicalities in sensation and perception – to pinpoint fundamental differences in perceptual mechanisms. We suggest specifically that attenuated Bayesian priors – ‘hypo-priors’ – may be responsible for the unique perceptual experience of autistic people, leading to a tendency to perceive the world more accurately rather than modulated by prior experience. In this account, we consider how hypo-priors might explain key features of autism – the broad range of sensory and other non-social atypicalities – in addition to the phenomenological differences in autistic perception.
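
To make the "attenuated priors" idea concrete, here is a minimal sketch of Gaussian cue combination (all numbers are illustrative placeholders of mine, not taken from the paper): weakening the prior's precision makes the posterior track the raw observation rather than prior expectation.

```python
# Toy Gaussian cue combination: the posterior mean is a precision-weighted
# average of the prior mean and the sensory observation.
# All values below are made up for illustration.

def posterior_mean(prior_mu, prior_var, obs, obs_var):
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / obs_var)
    return w_prior * prior_mu + (1 - w_prior) * obs

obs = 10.0        # raw sensory estimate
prior_mu = 4.0    # what prior experience says to expect
obs_var = 1.0

print(posterior_mean(prior_mu, prior_var=1.0, obs=obs, obs_var=obs_var))   # strong prior: ~7.0
print(posterior_mean(prior_mu, prior_var=25.0, obs=obs, obs_var=obs_var))  # attenuated ("hypo") prior: ~9.8
```

With the attenuated prior, perception ends up much closer to the actual input, which is the paper's proposed mechanism for "perceiving the world more accurately".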

comment by gjm · 2015-11-07T13:37:59.270Z · LW(p) · GW(p)

I just (in, say, the last couple of days) got something like 50 downvotes, presumably all on old comments since I don't see any sign of a lot of downvotes on recent comments of mine.

This is the kind of thing that has got people banned in the past; if any moderator happens to be reading this and considers it worth investigating, I'd be interested to know the result.

Replies from: NancyLebovitz, username2
comment by NancyLebovitz · 2015-11-07T16:17:50.601Z · LW(p) · GW(p)

I've contacted tech.

Replies from: gjm
comment by gjm · 2015-11-07T23:29:13.472Z · LW(p) · GW(p)

Thanks!

comment by username2 · 2015-11-07T17:03:43.951Z · LW(p) · GW(p)

I think that you should have PMed the moderators instead of posting about it in an open thread. You can get -50 karma if, say, 5 people downvoted your posts to Main or your meetup posts.

Replies from: gjm
comment by gjm · 2015-11-07T23:36:08.112Z · LW(p) · GW(p)

You can indeed, but I think I've posted exactly once to Main, it was over five years ago hence probably getting very few votes either way now, and existing votes on the post in question suggest that it would be very unlikely that five people would independently decide to downvote it in one day.

On the other hand, I've been hit by mass-downvoting more than once in the past, by a user who is widely (and with very good reason) believed to be active on LW with his (at least) third identity after the two previous ones were banned for downvote abuse.

The advantage of posting about it is that if indeed someone (whether or not the particular individual just alluded to) is mass-downvoting me, then they might be doing it to other people too, and some of those other people might see my comment and mention that it's happening to them, which might be useful to moderators if they're contemplating any sort of action.

comment by Liron · 2015-11-04T01:52:55.397Z · LW(p) · GW(p)

I made this joke site: https://flashcash.money

It's often rational to burn cash on positional goods like Rolexes and bottle service at clubs, but FlashCash.money takes that value proposition to the logical extreme.

Replies from: lmm
comment by lmm · 2015-11-06T19:27:55.995Z · LW(p) · GW(p)

The problem is the site looks cheap. If I'm showing off how rich I am, I want something that looks elegant and refined.

Replies from: Lumifer
comment by Lumifer · 2015-11-06T20:25:57.375Z · LW(p) · GW(p)

If I'm showing off how rich I am, I want something that looks elegant and refined.

People who value elegance and refinement are NOT the target demographic for that website X-)

comment by SodaPopinski · 2015-11-04T01:58:02.017Z · LW(p) · GW(p)

What is a computation? Intuitively some (say binary) states of the physical world are changed, voltage gates switched, rocks moved around (https://xkcd.com/505/), whatever.
Now, in general, if these physical changes were done with some intention (like in my CPU, or the guy moving the rocks in the xkcd comic), then I think of this as a computation, and consequently I would care, for example, about whether the computation I performed simulated a conscious entity.

However, surely my or my computer's intention can't be what makes the physical state changes count as a computation. But then how do we get around the slippery slope where everything is computing everything imaginable? There are billions of states I can interpret as 1's and 0's which get transformed in countless different ways every time I stir my coffee. Even worse, in quantum mechanics, the state of a point is given by a potentially infinitely wiggly function. What stops me from interpreting all of this as computation which, under some encoding, gives rise to countless Boltzmann-brain-type conscious entities and simulated worlds?

Replies from: Viliam, Houshalter
comment by Viliam · 2015-11-04T09:37:58.639Z · LW(p) · GW(p)

I think everything is a computation, and all computations happen... but somehow, some of those computations happen "more" and some of them happen "less". (Similarly to how in quantum mechanics any particle can be anywhere, but some combinations of particles "exist more" and some "exist less", so in real life we don't perceive literally everything, but only some specific situations.)

Without understanding the nature of this "more" and "less" it will not make much sense... and I don't really understand it.

Replies from: SodaPopinski
comment by SodaPopinski · 2015-11-04T19:41:12.271Z · LW(p) · GW(p)

If there are really infinite instances of conscious computations, then I don't think it is unreasonable to believe that there exists no more/less measure, and that we simply have no more reason to be surprised to be living in one type of simulation than another. I guess my interest in the question was whether there is any way to not throw the baby out with the bathwater, by having a reasonable, more restrictive notion of what a computation is.

Replies from: Viliam
comment by Viliam · 2015-11-05T08:13:43.979Z · LW(p) · GW(p)

I think having a measure is exactly the way to not throw the baby out with the bathwater. But I am not really an expert on this.

comment by Houshalter · 2015-11-05T22:21:40.234Z · LW(p) · GW(p)

I'm confused about your "interpretation". Let's say I throw together a bunch of random transistors. They compute a totally random function. What "encoding" can you possibly use to interpret this as a conscious mind?

Let's just say we already know what consciousness is and what algorithm the human brain uses. Maybe it's something like current neural networks. How would you find a computation of a neural network inside a random circuit?

I don't think you could. You'd need to find groups of logic gates which just happen to compute multiplication of two numbers. And other groups which compute addition. And another group which saves the state. And all of these groups would have to be connected in just the right way.

I think conscious minds are a very specific kind of computation. That's very unlikely to form by random chance.

Replies from: SodaPopinski
comment by SodaPopinski · 2015-11-05T22:52:13.788Z · LW(p) · GW(p)

Take the thermal noise generated in part of the circuit. By setting a threshold we can interpret it as a sequence 110101011 etc. Now if this sequence were enormous, we would eventually have a pixel-by-pixel description of any picture, a letter-by-letter description of every book, a state-after-state description of the tape of any Turing machine, etc. (basically a Library of Babel situation). Now of course we would need a crazy long sequence for this, but there is similar noise associated with the motion of every atom in the circuit; likewise the noise is far more complex if we don't truncate it to 0's and 1's; and finally there are many, many encodings of our resulting strings (does 110 represent the letter A, 0101 a blue pixel, and so on).

If I chose ahead of time the procedure for how the thermal noise fluctuates, and I seed in two instances of noise I think of as representing 2 and 3, and after a while it outputs thermal noise I think of as 5, then I am OK calling that a computation. But why should my naming of the noise and dictating how the system develops be required for computation to occur?
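
To sharpen the worry, here is a toy sketch (my own construction, not from the thread): if you are free to pick the encoding after the fact, any fixed noise string can be "decoded" into any answer you like, so the mapping, not the noise, is doing all the computational work.

```python
import random

random.seed(0)
noise = [random.randint(0, 1) for _ in range(10)]  # stand-in for thermal noise bits

target = "HELLO"  # the answer we want the noise to have "computed"

# Cherry-pick an encoding after seeing the noise: (position, bit) -> desired character.
encoding = {(i, noise[i]): ch for i, ch in enumerate(target)}

decoded = "".join(encoding[(i, noise[i])] for i in range(len(target)))
print(decoded)  # "HELLO", no matter what the noise bits actually were
```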

Replies from: Houshalter
comment by Houshalter · 2015-11-06T01:14:28.792Z · LW(p) · GW(p)

Random sequences aren't really interesting. Even the digits of pi are believed to contain every possible sequence of integers. The hard part is finding where each sequence is located. The index is likely to be longer than the sequence itself!

And a sequence of digits isn't computation. A recording of your neural activity isn't conscious. It's just a static object.

If I chose ahead of time the procedure of how the thermal noise fluctuates and I seed in two instances of noise I think of as representing 2 and 3, and after a while it outputs a thermal noise I think of as 5 then I am ok calling that a computation.

But there is no computation happening there. It's just random noise. It's just as likely to output 5 as 6 or 3. There is no causal link between you inputting "2+3" and the output.
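
A rough simulation of the "index is longer than the sequence" point above (my own sketch; the pattern choice and trial count are arbitrary): the first position at which an n-bit pattern shows up in a random bit stream grows roughly like 2^n, so pointing at the pattern costs about as many bits as the pattern itself contains.

```python
import random

def first_occurrence(pattern, rng):
    """Index at which `pattern` first appears in an unbounded random bit stream."""
    window = ""
    i = 0
    while True:
        window = (window + str(rng.randint(0, 1)))[-len(pattern):]
        i += 1
        if window == pattern:
            return i - len(pattern)  # start position of the match

rng = random.Random(42)
for n in (4, 8, 12):
    pattern = "1" * n
    avg = sum(first_occurrence(pattern, rng) for _ in range(20)) / 20
    print(n, avg)  # the average index grows roughly like 2**n
```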

Replies from: SodaPopinski
comment by SodaPopinski · 2015-11-06T01:42:37.462Z · LW(p) · GW(p)

I agree with your sentiment. I am hoping, though, that one can define formally what a computation is, given a physical system. Perhaps you are on to something with the causal requirement, but I think this is hard to pin down precisely. The noise is still being caused by the previous state of the system, so how can we sensibly talk about cause in a physical system? It seems like we would be more interested in 'causes' associated with more agent-like objects, like an engine, than with formless things like the previous state of a cloud of gas. Actually I think Caspar's article was trying to formalize something like this but I don't understand it that well: http://lesswrong.com/r/discussion/lw/msg/publication_on_formalizing_preference/

Replies from: Houshalter
comment by Houshalter · 2015-11-07T21:43:12.413Z · LW(p) · GW(p)

Read Causal Universes first if you haven't.

I think causality is the only requirement for "computation". Step A causes step B. A computation has happened. If A and B are independent, then there is no computation happening.

comment by Panorama · 2015-11-05T22:30:52.975Z · LW(p) · GW(p)

Gene editing saves girl dying from leukaemia in world first

For the first time ever, a person’s life has been saved by gene editing.

...

Layla’s doctors got permission to use an experimental form of gene therapy using genetically engineered immune cells from a donor. Within a month these cells had killed off all the cancerous cells in her bone marrow.

Replies from: None
comment by [deleted] · 2015-11-06T00:09:11.198Z · LW(p) · GW(p)

Acute lymphoblastic leukemia and other blood tumors in which B-cells become malignant are extremely well-suited to this approach. You can cook up a T-cell that will react against B-cell specific proteins, inject it, and it will sense all the B cells around it and grow up to large numbers and kill all the B-cells and B-cell derived tumor cells in the patient's body. You can live without B-cells (with a hit to immune system strength) and they have some very cell-specific proteins. Going after B-cell malignancies with modified immune cells has been successfully done before.

I am loving the new twist though - rather than going through the process of extracting the patient's own T-cells and modifying them, they took a T-cell line they already had and destroyed its ability to ever respond to anything but the targeted antigen, meaning that a tissue-compatibility mismatch was irrelevant because it would never go after any foreign things it encountered other than the one coded target. The cells were apparently also modded to be resistant to chemotherapy drugs. The same cell line could be used in multiple people - though I'm sure that if any of the patient's own immune system remained at all the foreign T cells would eventually be killed off rather than becoming a permanent part of the immune system as sometimes happens when the cells come from the patient themselves.

comment by Panorama · 2015-11-05T20:38:06.192Z · LW(p) · GW(p)

Zombie physics: 6 baffling results that just won't die

To celebrate Halloween, Nature brings you the undead results that physicists can neither prove — nor lay to rest.

When a scientific result seems to show something genuinely new, subsequent experiments are supposed to either confirm it — triggering a textbook rewrite — or show it to be a measurement anomaly or experimental blunder. But some findings seem to remain forever stuck in the middle ground between light and shadow. Even efforts to replicate these results — normally science’s equivalent of Valyrian steel — have little effect. Welcome to the realm of undead physics.

Ahead of Halloween, Nature guides you through some findings in physics, astronomy and cosmology that researchers have repeatedly left for dead — only to find that they keep coming back.

comment by Panorama · 2015-11-05T20:10:50.014Z · LW(p) · GW(p)

Laszlo Babai (University of Chicago): Graph Isomorphism in Quasipolynomial Time (Combinatorics and TCS seminar)

We outline an algorithm that solves the Graph Isomorphism (GI) problem and the related problems of String Isomorphism (SI) and Coset Intersection (CI) in quasipolynomial (exp(polylog n)) time.

The best previous bound for GI was exp(√(n log n)), where n is the number of vertices (Luks, 1983). For SI and CI the best previous bound was similar, exp(√(n) (log n)^c), where n is the size of the permutation domain (the speaker, 1983).
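
Spelling out the improvement for GI in symbols (my transcription of the bounds quoted above; n is the number of vertices):

```latex
\underbrace{\exp\!\left(O\!\left(\sqrt{n \log n}\,\right)\right)}_{\text{Babai--Luks, 1983}}
\quad\longrightarrow\quad
\underbrace{\exp\!\left((\log n)^{O(1)}\right)}_{\text{quasipolynomial, claimed new bound}}
```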

G. Phi. Fo. Fum. by Scott Aaronson

Earlier today, I was tipped off to what might be the theoretical computer science result of the decade. My source asked me not to break the news on this blog—but since other theory bloggers (and twitterers) are now covering the story, I guess the graph is out of the Babai.

According to the University of Chicago’s theory seminar calendar, on Tuesday of next week (November 10), the legendary Laszlo Babai will be giving a talk about a new algorithm that solves the graph isomorphism problem in quasipolynomial time. The previous fastest algorithm to decide whether two n-vertex graphs G and H are isomorphic—by Babai and Luks, back in 1983—ran in exp(√(n log n)) time. If we credit the announcement, Babai has now gotten that down to exp(polylog(n)), putting one of the central problems of computer science “just barely above P.” (For years, I’ve answered questions on this blog about the status of graph isomorphism—would I bet that it’s in BQP? in coNP? etc.—by saying that, as far as I and many others are concerned, it might as well just be in P. Of course I’m happy to reaffirm that conjecture tonight.)

comment by Lumifer · 2015-11-03T18:11:16.738Z · LW(p) · GW(p)

An unusually clear discussion of the failings of p-values and what you can (or cannot) expect from them. The author seems to have a slight allergy to the Bayesian approach, though he freely acknowledges that what he is using is, in fact, the Bayesian approach :-/

comment by passive_fist · 2015-11-02T20:45:04.669Z · LW(p) · GW(p)

What are time-efficient ways of finding people with similar interests and skills to cooperate with?

Replies from: MrMind, Fluttershy, ChristianKl, None
comment by MrMind · 2015-11-03T08:25:36.173Z · LW(p) · GW(p)

I'll try to throw some suggestions at you, see what sticks:

Online:

  • meetup.com
  • searching for a specific hashtag on Facebook and befriending the people who show up
  • Kickstarter

Offline

  • fliers with your contact info and the intended interests
  • a post on some wallboard in a crowded community (that is somehow related to the field of interest)
comment by Fluttershy · 2015-11-03T10:10:50.428Z · LW(p) · GW(p)

I've always been told something along the lines of "find a group based around a hobby that you like/ an interest that you have, and make friends through them," though I've been recently wondering if it's possible to guess in advance which groups might be more likely to contain the most potential close friends.

Personally, I've had an easier time making friends with bronies and HPMoR readers in meatspace than I have had with making friends with people participating in, say, service organizations or chemistry club. The most obvious explanation here is that I have more in common with people in the first two of these groups than I do with people in the last two of these groups. Still, I'm nevertheless tempted to posit something about the fact that signaling membership as an HPMoR reader, or as a brony, is reasonably costly to some people-- and that this might serve to filter out a portion of the would-be members of these groups who I'd be less likely to be friends with.

Replies from: Stingray
comment by Stingray · 2015-11-03T11:00:59.856Z · LW(p) · GW(p)

Another possibility: the first group of people have more free time than the latter and spending some free time together is quite important for building friendships.

comment by ChristianKl · 2015-11-02T20:54:22.707Z · LW(p) · GW(p)

The most time-efficient way is to be open about who you are and what kind of people you are seeking so that other people can find you.

comment by [deleted] · 2015-11-02T21:22:48.161Z · LW(p) · GW(p)

Find a community that someone else created of people with similar interests and skills (e.g., LessWrong if you're looking to cooperate with other people interested in rationality).

comment by stoat · 2015-11-02T15:10:04.428Z · LW(p) · GW(p)

I have a foggy memory of someone here (I think it was gwern) linking to an article about simulation interface design. It built up examples based on a bird's eye view of a car steering down a road. I haven't been able to find it, anyone know a link to the article?

Replies from: gwern
comment by gwern · 2015-11-02T17:18:10.034Z · LW(p) · GW(p)

Bret Victor's "Up and Down the Ladder of Abstraction"?

Replies from: stoat
comment by stoat · 2015-11-02T17:48:41.816Z · LW(p) · GW(p)

Thanks a bunch, that is the one!

comment by DanielDeRossi · 2015-11-06T14:41:58.881Z · LW(p) · GW(p)

Does anyone have any good recommended reading on being social? Stuff like understanding social situations and how to respond appropriately, understanding personality types and how to engage with them, how to make people like you, and how to keep people's attention. I'd really love to learn these skills, as I feel I am deficient at them.

Replies from: ChristianKl, username2
comment by ChristianKl · 2015-11-06T14:58:28.502Z · LW(p) · GW(p)

The Charisma Myth by Olivia Fox Cabane is a book that helped a bunch of people on LW.

Understanding personality types and how to engage with them.

I'm not sure that's a good approach to the issue. Don't label other people with a personality type and then try to engage with them based on the label.

Replies from: Vaniver
comment by Vaniver · 2015-11-06T16:37:41.215Z · LW(p) · GW(p)

I'm not sure that's a good approach to the issue. Don't label other people with a personality type and then try to engage with them based on the label.

I think I would actually recommend this. If other people are deeply mysterious to you, then reading up on personality types and trying to recognize them in the wild is helpful training and theory.

The trouble is twofold:

  1. The theory will be incomplete, and only give you broad understanding.

  2. The theory will be limiting, in that you will be more likely to notice observations that match the theory than observations that do not agree with the theory.

You can ameliorate both troubles by learning multiple theories, and trying to hold them in your head / evaluate people along different ones simultaneously.

(There's a longer conversation here, about how much learning should be system 1 vs. system 2, and how to tell what level of development you are in a skill, and so on, but that's probably enough for now.)

Replies from: Tem42, ChristianKl
comment by Tem42 · 2015-11-12T01:25:24.426Z · LW(p) · GW(p)

You might be able to do a bit better; learn a simple and catchy system like the True Colors personality spectrum (a simplified adaptation of the Myers-Briggs), and work on understanding why it works. (Or if you like, why it "works".) While you might guess someone's 'color' incorrectly, if you understand why everyone identifies at least a little with every color, you can start to use general, positive statements to identify what people like about themselves. It should be a productive exercise in understanding the average person's self concept.

comment by ChristianKl · 2015-11-06T16:51:10.076Z · LW(p) · GW(p)

You can ameliorate both troubles by learning multiple theories, and trying to hold them in your head / evaluate people along different ones simultaneously.

Success in social interaction is not about holding more things in your head but often about holding less things in your head.

It's better to do exercises that raise physical presence, such as the ones suggested in "The Charisma Myth".

Replies from: Kaj_Sotala, Vaniver
comment by Kaj_Sotala · 2015-11-06T19:15:43.697Z · LW(p) · GW(p)

Success in social interaction is not about holding more things in your head but often about holding less things in your head.

In the sense that you'll want to be able to model people and their reactions automatically and without needing to spend effort on it or it hogging up all your working memory, true. But if you're not good at modeling people yet, it may be better to practice it consciously until it becomes automatic.

It's better to do exercises that raise physical presence as the one's suggested in "The Charisma Myth".

These are not mutually exclusive.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-06T20:56:00.163Z · LW(p) · GW(p)

But if you're not good at modeling people yet, it may be better to practice it consciously until it becomes automatic.

Normal people don't model each other through putting each other in distinct mental categories (personality types) but via mirror neurons.

Being judgemental of other people doesn't get better by doing it automatically. I don't get anything when I have an automatic thought that tells me "the person I'm interacting with is an ISTP".

In social interaction, "Get out of your head" is good advice for the average nerd. Judging another person as an ISTP rather keeps you in your head.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-11-06T22:22:47.612Z · LW(p) · GW(p)

So, let's take autistic vs. neurotypical people as an example. As a general (but not iron-clad) rule, autistic people tend to read less social connotations into the meanings of words. As a result, they are often less likely to take offense from things that a neurotypical person might read as insulting. And as a result of that, they're more likely to prefer the kind of communication that's more direct and to the point. In contrast, with more neurotypical people, exactly the same kind of communication might come across as cold and blunt.

Knowing this lets me optimize my style of communication to the kind of person I'm talking with, more direct with autistic and more careful with neurotypical.

Now of course there are some autistic people you need to communicate carefully with, and some neurotypical people who prefer direct and blunt communication. But if I have a higher prior probability on someone preferring direct communication, that lets me make some cautious probes to measure their reaction to that style of communication. Probes which could have a negative expected utility if I put a higher prior probability on the person being easily offended by more direct language.
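
As a toy version of that expected-utility point (the payoffs and probabilities are made-up numbers of mine, purely for illustration):

```python
def probe_value(p_offended, gain_if_welcome=1.0, loss_if_offended=3.0):
    """Expected utility of trying a blunt, direct remark as a probe."""
    return (1 - p_offended) * gain_if_welcome - p_offended * loss_if_offended

print(probe_value(p_offended=0.1))  # prior says directness is probably welcome: +0.6
print(probe_value(p_offended=0.5))  # prior says offense is likely: -1.0
```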

This doesn't necessarily happen on a conscious level. Just having the background knowledge of neurotypical and autistic people differing on this dimension, helps me do this on a partially instinctive basis.

I wasn't explicitly taught this thing about how autistic and neurotypical people differ. It was something that I picked up by experience, from interacting with both kinds of people. But for learning this, it was important to have some kind of a mental handle for hanging the differing experiences on. If I hadn't known that there was such a concept as an autistic person, I couldn't have noticed the correlation between autism and the preferred style of communication. Rather my experience would just have been "different people react totally differently to the same kind of words, and it's totally mysterious to me when to use what kinds of words".

If you have more mental categories to put different people into, then if anything about them happens to correlate with those categories, it will become possible for you to notice that correlation. Without those categories, it'll be harder for you to generalize anything you learn about one person to other people you interact with. Maybe you learn that this particular person prefers direct communication and this other person prefers indirect communication, but when the third person shows up, you don't know what style to use with them. This will slow down the development of your social skills.

comment by Vaniver · 2015-11-06T21:56:18.075Z · LW(p) · GW(p)

Success in social interaction is not about holding more things in your head but often about holding less things in your head.

Sounds like we do need to go into the longer conversation.

I view most of these skills as something like the following: at level 0, you have no clue what's going on; at level 1, you have a system 2 model of what's going on that's too slow / clumsy to operate successfully in real time; at level 2, you have a system 1 model of what's going on that's fast and good enough to operate successfully in real time.

Most people go directly from level 0 to level 2, with some level 1 help. Most language speakers don't have an abstract grammatical model of their language in their heads, some constructions "just look weird" or don't come to mind, and they often can't articulate rules even if they use them correctly.

For example, in English, why is something "harder" instead of "more hard"? Why is it "more difficult" instead of "difficulter"?* (This came to mind because my mother is teaching ESL classes and had been surprised that there was a simple underlying rule, which I could not successfully identify before the question was spoiled, even though for any particular word I could correctly determine whether 'more' or 'er' was appropriate.)

But there are situations where it seems better to go through level 1. If you're teaching someone a second language, for example, they're much more likely to be able to make use of abstractly stated grammatical rules than children are. If someone has already been a child and yet not developed a 'normal' level of social intelligence, then the normal approach is inadequate, and we need to consider alternatives.

When developing those alternatives, it's worth noting that the right approach for going from level 0 to level 1 (learning more grammatical rules into system 2) is different from the right approach for going from level 1 to level 2 (practicing the grammatical rules into system 1). So yes, someone who is at level 1 would not get much out of holding more things in their head, but someone who is at level 0 would.

(To elaborate even further, I think someone at level 0 probably has some feature of their personality / communication style at a fundamental enough level of their model that they won't generate hypotheses that contradict it, and thus large parts of human interactions will be fundamentally mysterious to them. The point of reading about personality styles and communication styles and so on is that it generates alternative hypotheses at that level--many 'nerds' do not realize that there are people out there who interpret statements as about relationship closeness instead of as about factual accuracy, and pointing that out to them is the fastest way to level up their interaction ability.)

* Single syllable adjectives get an "er" or "est," multi-syllable adjectives get a "more" or "most," at least most of the time.

Replies from: ChristianKl, Douglas_Knight, 4hodmt
comment by ChristianKl · 2015-11-07T01:11:08.130Z · LW(p) · GW(p)

I think the problem is that you ignore the physiological effect of being in your head and how it makes people less likely to want to engage in social interaction with you.

A problem that is about not being in contact with one's emotions is not helped by having concepts with which you can label the person with whom you are interacting.

The point of reading about personality styles and communication styles and so on is that it generates alternative hypotheses at that level--many 'nerds' do not realize that there are people out there who interpret statements as about relationship closeness instead of as about factual accuracy, and pointing that out to them is the fastest way to level up their interaction ability.

I don't think it's useful to put people into the bracket of caring about relationship closeness versus caring about factual accuracy. Depending on the context of the conversation, the same person will focus on a different layer of the communication.

Schulz von Thun's model describes the issue well. You don't need to put people into categories for that.

comment by Douglas_Knight · 2015-11-09T00:50:25.706Z · LW(p) · GW(p)

I think that page oversimplifies the rule for constructing comparative forms. One-syllable adjectives definitely take suffixes and three-syllable adjectives take words, but two-syllable adjectives are difficult. I think this page is largely correct. For two-syllable adjectives, some terminal syllables (-y, -le) require suffixes and some (-ing, -ed, -ful, -less) require words. The rest are OK either way (quieter, more quiet).
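
A rough sketch of that heuristic as code (the vowel-group syllable counter is a crude stand-in of mine, and the rule itself has exceptions, as noted elsewhere in the thread):

```python
import re

def count_syllables(word):
    # Very crude stand-in: count vowel groups.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def comparative(adj):
    """Heuristic comparative form: short adjectives take -er, long ones take 'more'."""
    n = count_syllables(adj)
    if n == 1:
        return adj + "er"
    if n == 2:
        if adj.endswith(("y", "le")):                   # happier, simpler
            return adj[:-1] + "ier" if adj.endswith("y") else adj[:-1] + "er"
        if adj.endswith(("ing", "ed", "ful", "less")):  # more boring, more careful
            return "more " + adj
        return "more " + adj                            # either form is usually acceptable
    return "more " + adj

for w in ("hard", "difficult", "happy", "simple", "careful"):
    print(w, "->", comparative(w))
```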

comment by 4hodmt · 2015-11-08T00:38:23.961Z · LW(p) · GW(p)

This rule is incomplete. Most two-syllable adjectives ending in "y" can be converted to comparative form with "er". Some of these may be uncommon, but not all, and my spell checker agrees they are real words, in both British and American English.

E.g. angrier, heavier, cleverer, friendlier, happier, lazier, tidier, etc. And even three-syllable words can take "er": bubblier, foolhardier, jitterier, slipperier, many words starting with "un".

comment by username2 · 2015-11-06T15:00:06.553Z · LW(p) · GW(p)

This handbook is written by a person with Asperger Syndrome and it's intended for other people with Asperger Syndrome, but it is very good even if you aren't autistic, because it spells out everything in detail and makes you explicitly think about all the rules and interpersonal skills.

comment by OrphanWilde · 2015-11-03T16:26:55.464Z · LW(p) · GW(p)

Posting here to avoid introducing an irrelevant aside on one of the main [ETA: Discussion-main, not Main-main] threads, regarding the "retrocausality" of Newcomb-like problems.

Causality is always bidirectional. It is information which only goes in one direction. Once you dissolve that distinction, the question is one of information, which doesn't need to involve any strange loops at all; the behavior of Newcomb-like problems isn't produced by your actions changing history, but by information about what your action will be changing the future, or having already changed it.

What's lost is that this kind of forward-facing information is at play all the time; pretty much every single one of us constantly does low-level prediction of everyone around them (avoiding running into people by predicting where they are going to walk, for a particularly low-level example), and usually (but not always) gets it right. What's unusual in Newcomb-like problems isn't the predictive element, but the uncanny accuracy of that prediction. But if somebody could predict with uncanny accuracy somebody's behavior in the future, we wouldn't resort to retrocausality as an explanation, but rather very good predictive power.

Which is to say, once you notice there is nothing particularly interesting or necessarily reality-violating happening in Newcomb, the issue dissolves substantially, and two-boxing as a statement of the value of human autonomy is about as meaningful as turning suddenly and walking into traffic because the fact that the drivers of cars anticipate that you aren't going to do that is some kind of affront to human dignity.

comment by Panorama · 2015-11-05T20:46:51.806Z · LW(p) · GW(p)

Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say “Usually Not” by Andrew C. Chang and Phillip Li

We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files. Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research.

Replies from: jsteinhardt
comment by jsteinhardt · 2015-11-06T17:02:04.068Z · LW(p) · GW(p)

Note that their implicit definition of "replicable" is very narrow --- under their procedure, one can fail to be "replicable" simply by failing to reply to an e-mail from the authors asking for code. This is somewhat of a word play, since typically "failure to replicate" means that one is unable to get the same results as the authors while following the same procedure. Based on their discussion at the end of section 3, it appears that (at most) 9 of the 30 "failed replications" are due to actually running the code and getting different results.

Replies from: Lumifer
comment by Lumifer · 2015-11-06T17:25:16.675Z · LW(p) · GW(p)

Yes, there is a difference between "unable to replicate because we couldn't even attempt to replicate" (code and/or data are missing) and "unable to replicate because we tried and the results did not match". Either both or only the second case could be called "failure to replicate", depends on your preferred definition.

Still, while the second case is clearly "bad science" -- it's either mistakes or fraud -- the first case is "not science" because science doesn't work by trusting the word of the researcher. A well-known example of the first case is cold fusion.

comment by SodaPopinski · 2015-11-05T19:17:27.142Z · LW(p) · GW(p)

It is interesting to compare the Less Wrong and Wikipedia articles on recursive self-improvement: http://wiki.lesswrong.com/wiki/Recursive_self-improvement https://en.wikipedia.org/wiki/Recursive_self-improvement I still find the anti-foom arguments based on diminishing returns in the Wikipedia article to be compelling. Has there been any progress on modelling recursively self-improving systems beyond what we can find in the foom debate?

Replies from: Viliam
comment by Viliam · 2015-11-06T09:54:00.790Z · LW(p) · GW(p)

Quoting Wikipedia:

Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences, such as corporations.

Corporations are not superintelligences. Not in the narrow sense we use when we talk about AGI.

Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.

The whole idea of superhuman artificial general intelligence is that the machine would be better at everything that humans can do. Also, if the machine wants to specialize, it doesn't have to deal with humans: it could just create more copies of itself (okay, it would need to buy the hardware, but that's it) and let different copies specialize at doing different things.

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth.

Not sure if I understand it correctly, but this seems to me like an argument that "wealth = money, therefore the AI will trade with humans". If that is the intended meaning, I disagree. Money is just one way to get resources. You can also take them by force, steal them, convince people to donate to you, create them, discover them, et cetera. Just because the AI would use its intelligence to gather resources, it does not follow it will trade with humans.

Max More argues that if there were only a few superfast human-level AIs, they wouldn't radically change the world, because they would still depend on other people to get things done and would still have human cognitive constraints.

Why would the AGI still have human cognitive constraints? Just because we can't imagine otherwise? Why would it depend on other people to get things done? If it can get some people to build robots, the rest of the work can be done by those robots.

Okay, maybe some of these arguments need much more thinking than I have given them here, but I wrote this to explain why the arguments in Wikipedia seem completely unimpressive to me. Most of them seem to be based on rejecting the idea that anything, including any AI, could really be significantly smarter than humans. "The AI cannot do research in several fields at once, because humans can't. The AI will not have hands, and will therefore forever depend on humans. The only way an AI could get resources is to have a job, and then buy whatever it needs in the shop. This all ensures that the AI is just another human, so other humans will have no problem overpowering it, if needed."

comment by MarsColony_in10years · 2015-11-02T17:29:18.355Z · LW(p) · GW(p)

I've recently started using RSS feeds. Does anyone have LW-related feeds they'd recommend? Or for that matter, anything they'd recommend following which doesn't have an RSS feed?

Here's my short list so far, in case anyone else is interested:

  • Less Wrong Discussion

  • Less Wrong Main (ie promoted)

  • Slate Star Codex

  • Center for the Study of Existential Risk

  • Future of Life Institute [they have an RSS button, but it appears to be broken. They just updated their webpage, so I'll subscribe once there's something to subscribe to.]

  • Global Priorities Project

  • 80,000 Hours

  • SpaceX [an aerospace company, which Elon Musk refuses to take public until they've started a Mars colony]

These obviously have an xrisk focus, but feel free to share anything you think any Less-Wrongers might be interested in, even if it doesn't sound like I would be.

For anyone looking to start using RSS, I'd recommend using the Bamboo Feed Reader extension in FireFox, and deleting all the default feeds. I started out using Sage as a feed aggregator, but didn't like the sidebar style or the tiled reader.

Replies from: Viliam, username2, Stingray, polymathwannabe
comment by Viliam · 2015-11-02T20:04:33.043Z · LW(p) · GW(p)

You may like some of the sources mentioned in "List of Blogs" on LW wiki.

comment by username2 · 2015-11-05T14:48:38.241Z · LW(p) · GW(p)

The Overcoming Bias sidebar has a lot of interesting blogs.

comment by Stingray · 2015-11-05T18:30:42.616Z · LW(p) · GW(p)

You should also subscribe to this subreddit.

comment by polymathwannabe · 2015-11-03T13:21:25.350Z · LW(p) · GW(p)

I used to use Thunderbird Portable, but my flash drive died and I lost thousands of saved articles.

After Google Reader was discontinued, I switched to Feedly. This is my OPML. Note that I've included several Colombian media sources because that's where I live.

comment by cleonid · 2015-11-02T11:19:30.030Z · LW(p) · GW(p)

From Omnilibrium:

Replies from: passive_fist
comment by passive_fist · 2015-11-02T20:50:27.125Z · LW(p) · GW(p)

Climatologists who work for government agencies (that is, nearly all of them) face a strong pressure to conform to the consensus opinion.

Evidence? This is LessWrong.

So far, there is no evidence that climate models can predict weather changes over long term periods.

Based on what I've read and the contents of the IPCC report, the match between models and climate change has been pretty good so far, actually.

Replies from: Lumifer
comment by Lumifer · 2015-11-02T21:02:36.856Z · LW(p) · GW(p)

Based on what I've read and the contents of the IPCC report, the match between models and climate change has been pretty good so far, actually.

As you mentioned, this is LessWrong. So someone (like me) will go and look at your link, and find that it doesn't talk about forecasts; it is predominantly concerned with whether models can reproduce historically observed features of the climate. In other words, the issue is just trying to get a good fit to historical data.

By the way, if you look at p.771 and around, you'll notice that the models have a lot of difficulties with the current "hiatus".

Replies from: passive_fist
comment by passive_fist · 2015-11-02T21:28:19.649Z · LW(p) · GW(p)

And find that it doesn't talk about forecasts, it is predominantly concerned with whether models can reproduce historically observed features of the climate

Exactly; that's what I said.

EDIT: To clarify, I'm trying to say that past models have had a good fit to data so far, so it's reasonable to expect they will continue to perform. This is certainly evidence towards climate models being able to carry out predictions, and it's how science should be done.

In other words, the issue is just trying to get a good fit to historical data.

You make it sound as if they are just arbitrarily varying the parameters of the models to get a good fit. In reality, they are using model ensembles for various different emissions scenarios obtained from real-world data and seeing if the resulting predictions fall within a reasonable confidence interval of what was actually observed. The answer is: yes, they do.

By the way, if you look at p.771 and around, you'll notice that the models have a lot of difficulties with the current "hiatus".

There are several ways of interpreting this; I'd be glad to have a discussion about it if you're interested.

Replies from: Lumifer
comment by Lumifer · 2015-11-02T21:36:47.994Z · LW(p) · GW(p)

Exactly; that's what I said.

Why, then, are you talking about the models' fit when answering the question of whether the "climate models can predict weather changes over long term periods" (emphasis mine)?

You make it sound as if they are just arbitrarily varying the parameters of the models to get a good fit.

Not arbitrarily, of course, but "varying the parameters of the models" is the most common and a very general method of getting "a good fit".

and seeing if the resulting predictions

Do you mean something like cross-validation? I don't think they predict the future in this context.

Replies from: passive_fist
comment by passive_fist · 2015-11-03T04:14:04.277Z · LW(p) · GW(p)

Why, then, are you talking about the models' fit when answering the question of whether the "climate models can predict weather changes over long term periods" (emphasis mine)?

What other way is there? Building a time machine?

How else can you estimate the suitability of models in making predictions than testing their past predictions on current data?

Replies from: Salemicus, Lumifer
comment by Salemicus · 2015-11-03T15:58:01.073Z · LW(p) · GW(p)

One possible answer is to look at how the then-state-of-the-art models in (say) 1990, 1995, 2000, etc, predicted temperature changes going forwards.

The answer, in point-of-fact, is that they consistently predicted a considerably greater temperature rise than actually took place, although the actual temperature rise is just about within the error bars of most models.

Now, there are two plausible conclusions to this:

  • Those past mistakes have been appropriately corrected into today's models, so we don't need to worry too much about past failures.
  • This is like Paul Samuelson's economics textbook, which consistently (in editions published in the 50s, 60s, 70s and 80s) predicted that the Soviet Union would overtake the US economy in 25 years.
Replies from: passive_fist
comment by passive_fist · 2015-11-03T20:33:15.476Z · LW(p) · GW(p)

One possible answer is to look at how the then-state-of-the-art models in (say) 1990, 1995, 2000, etc, predicted temperature changes going forwards.

It's not as simple as that. Most models give predictions that are conditional on input data to the models (real rate of CO2 production, etc.). To analyze the predictions from, say, a model developed in 1990, you need to feed the model input data from after 1990. Otherwise you get too wide an error margin in your prediction.

The answer, in point-of-fact, is that they consistently predicted a considerably greater temperature rise than actually took place, although the actual temperature rise is just about within the error bars of most models.

True. As I said, this is definitely evidence towards the suitability of the models, and certainly seems to be counter to the claim that "there is no evidence that climate models are valuable in predicting future climate trends".

This is like Paul Samuelson's economics textbook, which consistently (in editions published in the 50s, 60s, 70s and 80s) predicted that the Soviet Union would overtake the US economy in 25 years.

That's definitely a possibility, but it's reasonable to think that the mathematics and science involved in the climate models stand on a firmer basis than economic analysis, and definitely a firmer basis than Samuelson's analysis.

comment by Lumifer · 2015-11-03T17:29:57.549Z · LW(p) · GW(p)

What other way is there?

The usual plain-vanilla way is to use out-of-sample testing -- check the model on data that neither the model nor the researchers have seen before. It's common to set aside a portion of the data before starting the modeling process explicitly to serve as a final check after the model is done.

In cases where the stability of the underlying process is in doubt, it may be that there is no good way other than waiting for a while and testing the (fixed in place) model on new data as it comes in.

The characteristics of the model's fit are not necessarily a good guide to the model's predictive capabilities. Overfitting is still depressingly common.
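
A minimal sketch of that holdout idea in general (toy linear data and a one-parameter fit, nothing to do with the actual climate models): the data set aside at the start is never used for fitting, and the final check is run on it only once the model is fixed.

```python
import random

random.seed(0)

# Toy data: y depends linearly on x, plus noise.
data = [(x, 2.0 * x + random.gauss(0, 1)) for x in range(100)]
random.shuffle(data)

train, holdout = data[:80], data[80:]   # the holdout set is never used for fitting

# "Fit" a one-parameter model on the training set only (least-squares slope through the origin).
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Evaluate on data the model has never seen.
mse = sum((y - slope * x) ** 2 for x, y in holdout) / len(holdout)
print(f"fitted slope ~ {slope:.2f}, out-of-sample MSE ~ {mse:.2f}")
```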

comment by Panorama · 2015-11-05T20:32:15.878Z · LW(p) · GW(p)

NASA Study: Mass Gains of Antarctic Ice Sheet Greater than Losses

A new NASA study says that an increase in Antarctic snow accumulation that began 10,000 years ago is currently adding enough ice to the continent to outweigh the increased losses from its thinning glaciers.

The research challenges the conclusions of other studies, including the Intergovernmental Panel on Climate Change’s (IPCC) 2013 report, which says that Antarctica is overall losing land ice.

According to the new analysis of satellite data, the Antarctic ice sheet showed a net gain of 112 billion tons of ice a year from 1992 to 2001. That net gain slowed to 82 billion tons of ice per year between 2003 and 2008.

“We’re essentially in agreement with other studies that show an increase in ice discharge in the Antarctic Peninsula and the Thwaites and Pine Island region of West Antarctica,” said Jay Zwally, a glaciologist with NASA Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the study, which was published on Oct. 30 in the Journal of Glaciology. “Our main disagreement is for East Antarctica and the interior of West Antarctica – there, we see an ice gain that exceeds the losses in the other areas.” Zwally added that his team “measured small height changes over large areas, as well as the large changes observed over smaller areas.”

Replies from: Manfred, knb
comment by Manfred · 2015-11-05T22:36:42.108Z · LW(p) · GW(p)

Interesting. Obviously if some place is still below freezing all year round (i.e. the bulk of East Antarctica), global warming can easily increase ice mass due to increased snowfall. But I'd thought decrease in total ice mass was pretty well-established.

Replies from: None
comment by [deleted] · 2015-11-06T00:11:22.000Z · LW(p) · GW(p)

The amount lost in the Arctic is about a factor of 3 larger than the net gain in Antarctica, and West Antarctica, as a subset of Antarctica, is losing ice on net in a way that is likely to accelerate in the future. Also, according to the source, Antarctica has been gaining ice on net for 10,000 years, and the recent increases in the loss rate have not yet caught up with that long-term gain rate.

Further quote from the article:

“If the losses of the Antarctic Peninsula and parts of West Antarctica continue to increase at the same rate they’ve been increasing for the last two decades, the losses will catch up with the long-term gain in East Antarctica in 20 or 30 years -- I don’t think there will be enough snowfall increase to offset these losses.”

Replies from: Manfred
comment by Manfred · 2016-01-15T07:23:23.675Z · LW(p) · GW(p)

A late follow-up. I read an article on the study, and it turns out that the difference from previous estimates (which basically all showed a decrease in Antarctic ice mass) comes from an interesting place. Everyone agrees on the height change in East Antarctica. But the studies that got a net decrease assumed that the change in height was due to recently increased snowfall, in which case the extra height will have the density of snow. This new study that gets a net mass increase assumes that the change in height is actually part of a long-term mass rebound from the last minimum, and if that's true then the density profile of the Antarctic ice sheet should be roughly constant, and the extra height will have the density of ice, which is ~3x that of snow.
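A rough back-of-the-envelope sketch of why the density assumption matters so much (all numbers below are made up for illustration and are not from the study): the same measured elevation change translates into roughly three times as much mass if it is attributed to ice rather than to fresh snow.

```python
# Illustrative only: mass change = area * height change * assumed density.
AREA_KM2 = 10_000_000       # assumed ice-sheet area, km^2
HEIGHT_CHANGE_CM = 1.0      # assumed average elevation gain, cm per year

RHO_SNOW = 300              # rough density of compacted fresh snow, kg/m^3
RHO_ICE = 917               # density of glacial ice, kg/m^3

area_m2 = AREA_KM2 * 1e6
dh_m = HEIGHT_CHANGE_CM / 100

for name, rho in (("snow", RHO_SNOW), ("ice", RHO_ICE)):
    mass_gt_per_year = area_m2 * dh_m * rho / 1e12   # kg -> gigatonnes
    print(f"assuming {name}: {mass_gt_per_year:.0f} Gt/year")
```

With these made-up inputs the "ice" interpretation yields about 92 Gt/year versus about 30 Gt/year for "snow" - the same ~3x factor described above.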

I think the disagreement over this shows how big our error bars are.

comment by knb · 2015-11-06T05:02:40.783Z · LW(p) · GW(p)

This seems significant, but I'm not sure how to interpret it... Is it good news that the ice sheet isn't shrinking, or bad news that the sea level rise apparently came from other sources without us noticing?

Replies from: Lumifer
comment by Lumifer · 2015-11-06T15:33:58.624Z · LW(p) · GW(p)

the sea level rise apparently came from other sources

Which sea level rise?

Replies from: None
comment by [deleted] · 2015-11-07T21:57:56.156Z · LW(p) · GW(p)

http://www.worldviewofglobalwarming.org/images11/SeaLevelRiseRateChart2010.jpg

This is the global mean. Rise measured at any given actual shoreline will be different and sometimes even falling, due to local geology altering elevations of land at not-dissimilar rates in some areas (especially areas where post-glacial rebound is still occurring) as well as thermal expansion being uneven.

Replies from: Good_Burning_Plastic, Lumifer
comment by Good_Burning_Plastic · 2015-11-08T10:40:02.519Z · LW(p) · GW(p)

Was something weird happening in the 1920s or is it just an optical illusion due to the black lines?

Replies from: None
comment by [deleted] · 2015-11-12T02:26:38.504Z · LW(p) · GW(p)

I think similar anomalies in 1880 and 1985 would stand out if you drew similar lines there.

comment by Lumifer · 2015-11-08T05:32:34.154Z · LW(p) · GW(p)

Yes. But the sea levels have been rising continuously since the time of the last glacial maximum. 10,000 years ago they were rising at a rather more dramatic rate, too.

Replies from: None
comment by [deleted] · 2015-11-08T09:29:34.135Z · LW(p) · GW(p)

Yep! My favorite bit of what went on during the end of the last glaciation is the way that it happened unevenly: a sedate constant flow of water from ice to the oceans, interrupted by centuries here and there where sea level rose by at least 2-5 centimeters a year. Presumably that's what happens once an ice sheet becomes unstable and pieces of it collapse quickly and nonlinearly.

Replies from: Lumifer
comment by Lumifer · 2015-11-09T16:19:35.342Z · LW(p) · GW(p)

That was one of those "interesting times to live in"? Still it's peanuts compared to the mother of all floods :-)

comment by Ritalin · 2015-11-07T01:15:35.736Z · LW(p) · GW(p)

This is the most terrifying comic SMBC has made yet. How much of a point does Zach have here? Could this be the shape of the future?

Replies from: Lumifer, knb, username2
comment by Lumifer · 2015-11-07T17:09:02.530Z · LW(p) · GW(p)

Luddites are not new.

comment by knb · 2015-11-09T03:07:23.945Z · LW(p) · GW(p)

That scenario doesn't seem terrifying to me, though it's pretty vague. He says there are job losses and revolution is impossible but so what? Realistically in this scenario people just vote to raise taxes on capital owners and give themselves a paycheck. Machine labor is apparently extremely capable and near-free in this scenario so owning even a small amount of capital makes you effectively rich in absolute but not relative terms. I guess he's assuming democracy breaks in a way that is pro-capital owners somewhere along the way but that isn't actually stated.

comment by username2 · 2015-11-07T17:05:46.533Z · LW(p) · GW(p)

Probably not, because people who do not like the new society can create a small closed society somewhere where population density is lower and live off the land.

comment by [deleted] · 2015-11-06T14:05:05.633Z · LW(p) · GW(p)

If you had to select just 5 mutually exclusive and collectively exhaustive variables to predict the outcome of something you have expert knowledge (relative to, say, me) about:

  • what is that situation?
  • what are the 5 things that best determine the outcome?

    Please tell us about multiple things if you are an expert at multiple things. No time for humility now; it is better that you are a kind teacher than a modest mute.

If you can come up with a better way I could ask this, please point it out! It sounds clumsy, but the question has a rather technical background to its composition:

Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, such as predicting the outcome of a thing, but consistently good at identifying the determinants that are important to consider within their field of expertise, such as which variables determine a given outcome. Human working memory can only hold 7+/-2 things at a given time. So, allowing for more stressful situations where our memory may be situationally reduced to a level of functioning commensurate with someone with a poorer memory, I want to ask for 5 things anyone could think about when they come across one of your niches of expertise, so that they can pay attention to them and gather the most relevant information from the experience.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-11-06T19:31:40.525Z · LW(p) · GW(p)

Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing,

Do you have a citation for this? My understanding was that in many fields experts perform better than nonexperts. The main thing that experts share in common with non-experts is overconfidence about their predictions.

Replies from: None
comment by [deleted] · 2015-11-06T23:34:57.629Z · LW(p) · GW(p)

Yep, follow the summary here to the Expert Judgement textbook if you like. I was skeptical too before I followed up this claim and actually read the textbook.

comment by [deleted] · 2015-11-05T23:31:46.554Z · LW(p) · GW(p)

Top 4 web resources about the application of machine learning to public health:

As taught coursework

As a field of research

As independent scientific movements

In one computationally intensive field of public health

Feel free to add any non significantly overlapping, high quality resources in the comments, or to comment otherwise.

comment by [deleted] · 2015-11-02T11:45:45.102Z · LW(p) · GW(p)

This may be a silly question: if complex regional pain syndrome is 47 out of 50 on the McGill pain scale, where would 'torture' be?

Replies from: James_Miller, None
comment by James_Miller · 2015-11-03T02:25:22.610Z · LW(p) · GW(p)

Reminds me of a joke where a kid in great pain is asked by a doctor to rate how much it hurts on a scale from 1 to 10 where 10 is the most pain he can imagine, and the kid says "1."

comment by [deleted] · 2015-11-02T21:24:50.893Z · LW(p) · GW(p)

What specific method of torture? I'd assume that many methods are designed to get as high as possible, but there are others that are much lower and instead involve other negative sensations besides pain.

Replies from: CBHacking
comment by CBHacking · 2015-11-06T01:31:01.513Z · LW(p) · GW(p)

Agreed. "Torture" as a concept doesn't describe any particular experience, so you can't put a specific pain level to it. Waterboarding puts somebody in fear for their life and evokes very well-ingrained terror triggers in our brain, but doesn't really involve pain (to the best of my knowledge). Branding somebody with a glowing metal rod would cause a large amount of pain, but I don't know how much - it probably depends in the size, location, and so on anyhow - and something very like this on a small scale this can be done as a medical operation to sterilize a wound or similar. Tearing off somebody's finger- and toenails is said to be an effective torture, and I can believe it, but it can also happen fairly painlessly in the ordinary turn of events; I once lost a toenail and didn't even notice until something touched where it should have been (though I'd been exercising, which suppresses pain to a degree).

If you want to know how painful it is to, say, endure the rack, I can only say I hope nobody alive today knows. Same if you want to know the pain level where an average person loses the ability to effectively defy a questioner, or anything like that...

comment by [deleted] · 2015-11-06T11:42:41.934Z · LW(p) · GW(p)

Identifying cognitive distortions

Psychologists sometimes hand out a worksheet with the names of cognitive distortions, e.g. mentalising. It is usually incumbent upon psychiatric patients to pick up on trends towards these distortions themselves from what their psychologists tell them.

For instance, my psychologist might intuit that something is paranoid while I don't. That certainly kills that one belief after a bit of reflection, but my mind remains generally predisposed to paranoid thinking in the next moments of conscious ideation too. They will never catch up with me that way.

I went to the library bathroom a few minutes ago. It's night, and someone just happened to be there. A fairly innocuous thought persisted in my imagination that the person who happened to already be there might think I was there to trouble him, since it seemed unlikely that two people would be in the bathroom at the same time at this time of night. A psychologist would usually point out that that's not strictly the case, and challenging assumptions is the best way of addressing these beliefs, as far as they, and we, have known since Socrates at least.

But the thought only became apparent as aberrant when I privileged the hypothesis that my thoughts are biased towards paranoia from past experience, and then considered the opposite of my thought. If I had entered the bathroom before him, it would have seemed, in my mind, like he was following me, or that I was planning for his arrival. Only this last thought, that someone might actually wait for someone, stood out as aberrant and odd at the time. But it is very similar in notion to the others, and they have since gradually become notions I consider highly unlikely.

I reckon there's a rationality technique I could generalise from this. Or I hope.

comment by [deleted] · 2015-11-05T12:20:31.380Z · LW(p) · GW(p)

There are 4 computer memory designers. One is a major phone manufacturer, one is a major computer manufacturer and one is a major car manufacturer. The fourth, crucially, makes computer memory for its competitors. I reckon this is the best, if only, example of a stable large-scale unregulated efficiency monopoly.

I wish I was clever enough to understand computer memory, get in on the open hardware movement and capitalise on Project Ara's forthcoming release early next year! Imagine, an app store for hardware, sponsored by Google for Android!

Replies from: ChristianKl
comment by ChristianKl · 2015-11-05T13:30:34.380Z · LW(p) · GW(p)

I reckon this is the best, if only, example of a stable large-scale unregulated efficiency monopoly.

No. Producing memory cheaply needs volume. Nothing that the open-hardware movement does changes the fact that you need scale to cheaply produce memory chips.

I'm also not sure whether you can produce memory chips without violating patents from those companies, so you would need additional millions of capital for that.

comment by Osho · 2015-11-04T17:03:42.519Z · LW(p) · GW(p)

I'm looking for an academic/science/technology enthusiast to cofound my new website with me.

Website is already built and nearing launch. I am looking for someone who can serve as an adviser and contribute directly to site content (I will explain the content in more detail to serious candidates). This will require knowing a lot about science, and a lot about how scientists think. He/she needs to be interested in new technologies and current events. From international affairs, economic policy, politics and ethical issues to cryptocurrency, biotech, physics and artificial intelligence- my ideal candidate will be a huge geek when it comes to these areas. I'm looking for an all around informed person who likes to research issues.

Ideally, you'll have experience in academia and/or science/tech.

I don't imagine this being a ton of work at first, it's just work that I need someone else to focus on. There will be equity. Message me if interested.

Replies from: ChristianKl, ChristianKl, Lumifer
comment by ChristianKl · 2015-11-04T21:06:41.884Z · LW(p) · GW(p)

I will explain the content in more detail to serious candidates

I would guess that you currently haven't said enough about your project to make anybody seriously interested in being part of it.

comment by ChristianKl · 2015-11-04T18:52:41.375Z · LW(p) · GW(p)

Using the nick Osho is quite strange for a website about science. What's your background?

What is the website supposed to be about? How does it differ from what is out there at the moment?

comment by Lumifer · 2015-11-04T17:13:16.270Z · LW(p) · GW(p)

Your post is all about what you want. What are you offering in exchange?

Replies from: Osho
comment by Osho · 2015-11-04T17:24:20.981Z · LW(p) · GW(p)

I said there would be equity.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-04T18:57:46.974Z · LW(p) · GW(p)

Equity is only useful if one considers the business to be a good idea. If you want someone to join and be paid in equity, you need to convince them that working on your project with you makes sense.

comment by [deleted] · 2015-11-03T11:34:28.211Z · LW(p) · GW(p)

I want to learn to play the Dataridoo. Is Swirl the right choice if I want to graduate into a career in machine learning, but failed at learning Python, and managed to learn Stata? Also I'm shit at concentrating, and it's the only learning platform that doesn't confuse me with all the features. I kid you not I took months to figure out how to use LessWrong. I may be stupid, but I am dedicated. Once I find the best platform for me, I'll stick to it. But good recommendations now may save me months later.

Replies from: gjm, ChristianKl
comment by gjm · 2015-11-03T12:02:48.949Z · LW(p) · GW(p)

Pardon my candour, but if you are "shit at concentrating", are readily confused by things, and found Python too difficult to learn, then you might want to consider whether machine learning is a good choice of career.

(I gravely doubt that you are in fact stupid, and given sufficient dedication I expect you could do it, but it seems like a lot of pain. Are you sure it's worth it?)

Replies from: None
comment by [deleted] · 2015-11-03T23:44:55.945Z · LW(p) · GW(p)

I am fairly confident it will be worth it. I'm not good at a whole lot of other things too. But, big data is said to be a profitable career trajectory in the long term. Most if not everything I do for my career is fairly painful, but I try to enjoy it as I do it and I'm grateful for the experience. The question I ask myself is: what could I be doing instead? And I honestly don't have a lot of better things to be doing anyway, haha.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-04T09:50:51.947Z · LW(p) · GW(p)

How strong are your math skills? Did you have IQ testing done?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-04T18:16:37.701Z · LW(p) · GW(p)

Mathematicians don't give a shit about IQ. When was the last time you heard Terry Tao talk about IQ?

Writing papers >>>>> psychometrics.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-11-04T18:46:46.566Z · LW(p) · GW(p)

Mathematicians don't give a shit about IQ.

That's because they are an exclusive high-IQ club to start with.

Take someone who scored in the 300s on his SAT -- would you recommend him to try to become a mathematician?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-04T19:00:27.352Z · LW(p) · GW(p)

What do you suppose Ramanujan's IQ was? Do you think Hardy cared?

Replies from: Lumifer
comment by Lumifer · 2015-11-04T19:06:42.849Z · LW(p) · GW(p)

What do you suppose Ramanujan's IQ was?

No one has any idea since he lived before IQ tests.

Do you think Hardy cared?

That's like saying a basketball coach doesn't care about the player's height, he only cares how high he can jump.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-04T19:11:34.116Z · LW(p) · GW(p)

IQ is not like height. Height is a fairly objective physical measurement that is directly relevant for basketball because of the game setup. IQ is the result of a projection of an extremely high dimensional space into a single number that is not directly relevant for mathematics (people do mathematics in an extremely heterogeneous way). Erdos, Ramanujan, and Grothendieck were all top notch and were all very very different from each other. Erdos I think couldn't tie his shoes. Ramanujan was an Indian peasant. Grothendieck wasn't exactly a high functioning individual.

A better analogy would be if a basketball coach cared about "hit points" (determined by whatever methods doctors use, slatestar would know more).

Replies from: Vaniver, Lumifer, username2
comment by Vaniver · 2015-11-04T21:08:58.135Z · LW(p) · GW(p)

Ramanujan was an Indian peasant.

Ramanujan was a Brahmin. "Peasant" isn't quite appropriate.

Replies from: Lumifer
comment by Lumifer · 2015-11-04T22:19:05.584Z · LW(p) · GW(p)

"Peasant" is also technically wrong since neither he nor his parents tilled the soil (his father was a clerk).

comment by Lumifer · 2015-11-04T19:31:04.614Z · LW(p) · GW(p)

IQ is the result of a projection of an extremely high dimensional space into a single number that is not directly relevant for mathematics

How is it not "directly relevant"? What do you think the average IQ of mathematicians is, do you imagine it's anywhere close to the population average?

Being able (or not) to tie one's shoes or being an Indian peasant are NOT indicators of IQ. Not being socially successful is not an indicator of low IQ either.

I understand your point that genius mathematicians are really, really weird people. But I see no contradiction there, it's perfectly possible to have high IQ and be really weird.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-04T19:42:08.769Z · LW(p) · GW(p)

My point isn't just that they are really weird, but that people think about mathematics in an extremely heterogeneous way, and reducing human brains to one number as some sort of "math hit points" is silly for that reason. You are just ignoring most relevant information.

What made Erdos good and what made Ramanujan good were weird complicated facts about their brains (I expect Ramanujan's IQ wouldn't be very different from his cohort in India, e.g. likely not super high, but for some reason he just "saw" natural numbers). This does not make Ramanujan or his cohort stupid. I just don't think "smart" and "stupid" is what IQ measures in any interesting way.

Telling people to go take an IQ test as a way to selecting themselves out from trying math is an especially toxic practice (especially if this advice is not coming from a mathematician).


I prove theorems for a living, and I say: ignore the haters, just read about math that interests you, try your hand at following and constructing arguments, etc. Math is hard (for everyone), don't worry about it. It's fun, too.

Replies from: Lumifer, None, VoiceOfRa
comment by Lumifer · 2015-11-04T20:01:05.340Z · LW(p) · GW(p)

people think about mathematics in an extremely heterogeneous way

Sure, that's true.

reducing human brains to one number as some sort of "math hit points" is silly

And I agree. But consider the setup: we have a person who doesn't quite know what he wants to do and who has shown no signs of possessing any "supernatural" math abilities. Could he turn out to be another Grothendieck? Well, sure, it's possible, but we are talking about the base population rate here, and the chances are, let's say, not very high.

Now, it so happens that most math professionals have high IQ. That's not a coincidence, of course -- if your brain is insufficiently weird to see math "directly", you have to rely on the same dimensions of performance (working memory, etc.) which are reflected in the IQ score.

Trying out a profession has costs, sometimes considerable. You can't try everything on the off chance that it might work out -- you want to focus on the areas where you expect to do well. And someone with an IQ of 130 has much, MUCH better chances of becoming a mathematician than someone with the IQ of 80.

comment by [deleted] · 2015-11-04T19:52:35.743Z · LW(p) · GW(p)

Aaaanyway I haven't had a crude IQ test done but I've had a tailored subset of psychometric tests including subscales from the WAIS from which IQ is derived which indicate my maths skills are above average. The same tests indicated my concentration skills are below average....

Replies from: Vaniver
comment by Vaniver · 2015-11-04T21:29:26.904Z · LW(p) · GW(p)

Aaaanyway I haven't had a crude IQ test done but I've had a tailored subset of psychometric tests including subscales from the WAIS from which IQ is derived which indicate my maths skills are above average. The same tests indicated my concentration skills are below average....

Hmm. I haven't put much thought into what professions are a good fit for someone with concentration as a comparative disadvantage.

I would suspect that research, and mathematics research in particular, is a bad bet. Much of a day will be spent just thinking about ideas, and being able to think about the same idea all day long is necessary to reach the end of long and complicated chains of reasoning. The difference between Newton and his contemporaries, for example, seems to have mostly been superior concentration ability on Newton's part, not considerably higher intelligence.

But people use machine learning many other places; you might be able to work as an industrial data scientist or analyst. It's not clear to me whether low concentration ability would be a smaller or larger handicap there.

Replies from: Lumifer, None
comment by Lumifer · 2015-11-04T21:42:50.068Z · LW(p) · GW(p)

what professions are a good fit for someone with concentration as a comparative disadvantage.

Sports commentator :-D

comment by [deleted] · 2015-11-05T08:14:05.403Z · LW(p) · GW(p)

Ooh, my main strengths are vocabulary and verbal abstract reasoning, which are apparently more than 3 standard deviations above several population means. Could you reassess possible career paths that might suit those strengths?

Other weaknesses include social cognition and memory. The neuropsychologist reckons the memory thing may be due to lapses in concentration though.

I'm highly skeptical of my verbal abstract reasoning results, since whenever I've done psychometric tests for jobs, or related tests when I was in school, my verbal abstract reasoning has ranged very broadly, including into the below-average group, albeit sometimes in competitive populations.

I am very confident in my vocabulary though. It's probably the strongest of anyone I've ever seen except spelling bee competition kids on TV, assuming they actually know what the words they're familiar with denote and connote in practice. I'm not so sure it's useful, since regular people often don't get what I'm saying when I slip in technical words. I reckon it would be good in cross-disciplinary technical communication. I don't know an example of that other than systems engineering, but I'm no engineer and engineering isn't the broadest category. Politicians span multiple portfolios, but I get totally stressed dealing with multiple issues or assignments at once. :S Thanks for your assistance everyone, once again.

I thought becoming an intelligence analyst would be a good choice. Military intelligence analysts in Australia may do shift work which isn't good for one's health.

Replies from: Vaniver, Viliam
comment by Vaniver · 2015-11-05T19:49:05.045Z · LW(p) · GW(p)

Sales is standard advice for people with high verbal ability, and there's plenty of sales jobs for technical subjects that do not require direct technical ability. (Someone sells MRI machines to hospitals, and they aren't an engineer.) There's a fairly large industry in machine learning enterprise solutions, where all you would need is the ability to tell apart Spark and Impala and R and Hive and Hadoop, not necessarily the ability to use any of them competently.

Two issues: 'social cognition' is rather important, and there will be multiple issues or assignments at once.

I think most other verbal fields are in a bad way and have declining prospects. Verbal + abstract reasoning has historically screamed law, but going to law school now is a terrible mistake. Similarly, journalism has very poor options that I suspect will continue to get worse.

Replies from: Lumifer
comment by Lumifer · 2015-11-05T20:01:34.347Z · LW(p) · GW(p)

'social cognition' is rather important

Yes -- sales at the corporate level is mostly about gladhanding and networking. People who can't seamlessly insinuate themselves into the local old-boy network will do poorly.

going to law school now is a terrible mistake

Well... going to some law school has been a terrible mistake for years by now. On the other hand, if you can get into a top-tier law school (and there are about half a dozen of those in the US), I would hesitate to call it a mistake.

Replies from: Vaniver
comment by Vaniver · 2015-11-05T20:26:25.454Z · LW(p) · GW(p)

On the other hand, if you can get into a top-tier law school (and there about half a dozen of those in the US), I would hesitate to call it a mistake.

Yes, there are still top tier law firms, and you have a chance of getting hired by one if you go to a top tier law school.

My point is more that even conditioned on knowing that you would survive law school and make it into a top tier law firm, it's not obvious to me that law is the best path to take: options in other industries may be far more valuable. (Consider claims about how doctors only get rich in real estate, or compare physics PhDs in academia and quantitative trading, or Peter Thiel narrowly missing out on a Supreme Court clerkship and founding a company instead.)

Replies from: Lumifer
comment by Lumifer · 2015-11-05T20:48:29.600Z · LW(p) · GW(p)

it's not obvious to me that law is the best path to take: options in other industries may be far more valuable

It all depends, of course. Each path has its risks and its rewards. However, if -- and that's a huge if -- you can get admitted to a top-tier law school, get hired by Biglaw, and spend a few years in, say, a white-shoe NYC law firm, that doesn't sound like a horrible fate to me (subject to the sensitivities of your soul, naturally).

comment by Viliam · 2015-11-05T08:18:24.799Z · LW(p) · GW(p)

Could you reassess possible career paths that might suit (...) vocabulary and verbal abstract reasoning

Philosophy? Journalism?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-05T19:09:45.351Z · LW(p) · GW(p)

Philosophy grad students tend to get very high combined analytic/verbal scores on tests.

comment by VoiceOfRa · 2015-11-10T00:41:25.587Z · LW(p) · GW(p)

My point isn't just that they are really weird, but that people think about mathematics in an extremely heterogeneous way

Heterogeneous compared to what? It's a lot more homogeneous than how people think about nearly any other subject.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-11T19:58:58.668Z · LW(p) · GW(p)

I suppose I will defer to your expertise on how people think about everything, Eugene.

comment by username2 · 2015-11-05T15:09:41.402Z · LW(p) · GW(p)

A better analogy would be if a basketball coach cared about "hit points" (determined by whatever methods doctors use, slatestar would know more).

In the NFL they do care about intelligence tests.

comment by ChristianKl · 2015-11-04T18:40:35.331Z · LW(p) · GW(p)

Mathematicians don't give a shit about IQ.

I don't claim that they do.

Clarity speaks of himself as stupid, and the fact that he failed to learn Python is an indication of that. If his IQ is <100, I think that would be valid grounds on which to advise him against seeking a career in machine learning.

That's exactly the purpose for which IQ test were designed.

Replies from: Viliam, IlyaShpitser
comment by Viliam · 2015-11-05T08:22:51.906Z · LW(p) · GW(p)

Clarity speaks of himself as stupid

This is only weak evidence for non-high IQ.

I know a few people who had bad opinion about their IQ, and when I convinced them to take the test, they scored above 130. It's because they believed the stereotype of "high IQ = math prodigy", and they happened to be average at math simply because they focused their lives on something else.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-05T09:47:17.135Z · LW(p) · GW(p)

This is only weak evidence for non-high IQ.

I haven't implied that it's strong evidence; for me the available evidence was enough to raise the question. The answer to that question matters for whether or not to tell him not to seek a career in machine learning.

I do think that for this purpose the testing that tells him that he's above average in math might be enough.

comment by IlyaShpitser · 2015-11-05T15:49:11.891Z · LW(p) · GW(p)

I think it would be useful to taboo "stupid." It is not a useful word.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-05T21:55:43.122Z · LW(p) · GW(p)

Tabooing "stupid" is what asking for IQ is about and why I asked about IQ in this context.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-05T22:17:19.302Z · LW(p) · GW(p)

Except you are not tabooing anything then, you are just substituting "low IQ" for "stupid." The point of tabooing "stupid" is to get the binary classification out of an inherently complicated multidimensional problem.

The request of tabooing in general is a request for more cognitive work.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-05T23:04:07.247Z · LW(p) · GW(p)

Scoring low on a specific test is something more complex than a label. Replacing a vague term with an operationalised term is something that often makes sense for tabooing.

I think you confuse cognitive work with explicitly describing cognitive work. When it comes to speaking about negative features of other people, it's worthwhile not to say every negative thing that can be said publicly.

comment by ChristianKl · 2015-11-03T11:54:04.843Z · LW(p) · GW(p)

What do you mean with "failed at learning Python"?

Replies from: None
comment by [deleted] · 2015-11-03T23:49:09.822Z · LW(p) · GW(p)

I tried to learn it on my own first, and didn't really pick up on anything.

I tried to learn it at university then, and failed that course.

As more and more helpful resources came online, I tried learning from them, and didn't end up learning it. I think my brain works very differently from most people's. There are some things which simply require a kind of functioning I really don't have. It seems languages are the frontier of that - where I have it in me to learn exceptions, whereas I can't learn most. It does seem to generalise to languages - I couldn't even become bilingual even though my parents speak another language and continuously tried to teach it to me, then sent me through school for it. At one point I learned how to read in this one other language and later forgot - I can't read that stuff at all now, which is kinda odd.

Anyway, I've managed to learn Stata. And R is for statistical programming like Stata. So, I suspect I could learn it. Though, Stata is more GUI-like and you can't do machine learning with it.

Replies from: ChristianKl, MrMind
comment by ChristianKl · 2015-11-04T14:21:19.778Z · LW(p) · GW(p)

The fact that made Stata easier to learn is that it's GUI-like. R isn't. I see no reason to believe that learning R to a level where you can do machine learning with it is easier than learning Python. Python has much better documentation than R. It has functions that are much more reasonably named.

comment by MrMind · 2015-11-04T08:29:03.428Z · LW(p) · GW(p)

I tried learning from them, and didn't end up learning it. I think my brain works very differently to most people. There are some things which simply require a kind of functioning I really don't have.

I don't really think this is the case, since you are using English correctly, and English is far more complicated than Python. Just to be clear: complicated = has more rules and is more ambiguous.

Replies from: Kaj_Sotala, None
comment by Kaj_Sotala · 2015-11-06T19:29:09.852Z · LW(p) · GW(p)

Formal languages require quite a different kind of thinking than natural ones do. It's not just a matter of comparing their complexity.

Replies from: MrMind
comment by MrMind · 2015-11-09T08:05:21.234Z · LW(p) · GW(p)

Unfortunately, there's only one study about the neuropsychology of programming languages, but it does contradict your assertion.
Or at least, if programming requires a different kind of thinking, that thinking is done with the same areas used for natural language.

comment by [deleted] · 2015-11-04T19:50:56.966Z · LW(p) · GW(p)

That's really not how primary vs. secondary language acquisition works. Also, K-complexity isn't the same as cognitive complexity.

Replies from: MrMind
comment by MrMind · 2015-11-05T09:05:46.624Z · LW(p) · GW(p)

I know of only one study on the neurobiology of programming language comprehension. It stacks evidence in favor of the theory that the brain uses the areas associated with natural language processing (BA 6/44).
On the other hand, studies of bilingual aphasia show conflicting evidence: some patients lose/recover only one of their languages following a brain lesion, while others show changes to both languages at the same time.
So, if you think you have neurological deficiencies regarding the acquisition of Python, I think (wild speculation ahead) that you should show other signs of impairment in the acquisition/use of a primary/secondary language. For example, were you able to learn Mata?

Regarding K complexity, the difference in cognitive load is exactly my point: if for you manipulating something that has low complexity has higher complexity, it means that something is wrong in the way you learned it.

Replies from: None
comment by [deleted] · 2015-11-08T05:23:32.890Z · LW(p) · GW(p)

*Great research. Thanks for looking at the evidence; I didn't know those things, and I'll try to take your approach in the future (my claim was, admittedly, a very poor and unbacked-up one, and I'm sorry for that).

*I have yet to try learning Mata - I'm unclear about its applications. But I've shown decent skill in the basic neuropsychological components of second language acquisition in military intelligence analysis testing. On the other hand, I've been fairly bad at learning languages at school. That may just have been the classroom format though!

Regarding K complexity, the difference in cognitive load is exactly my point: if for you manipulating something that has low complexity has higher complexity, it means that something is wrong in the way you learned it.

Didn't think of it that way. Wow!

Edit: It's just hit me how complex this phrase is:

if for you manipulating something that has low complexity has higher complexity, it means that something is wrong in the way you learned it.

I can't even conceive of what level of abstraction to place 'the way I learned a given thing' at, sandwiched between cognitive and K-complexity...

In fact, that may be because it's incommensurable within the domain of discourse of computational complexity.

comment by Jurily · 2015-11-03T03:41:43.252Z · LW(p) · GW(p)

So, apparently NLP is pseudoscience, and now I'm confused. Does anyone actually claim

  • Richard Bandler hasn't demonstrated even a single verifiable, undisputable result with his methods, and he's been fabricating things like this for decades?
  • his methods don't lead to his results in a way that matches his predictions?
  • the creator of NLP is not qualified to decide whether or not his methods are NLP?

If there are no claims to any of the above, what exactly was discredited?

Replies from: ChristianKl, MrMind, knb
comment by ChristianKl · 2015-11-03T10:54:23.142Z · LW(p) · GW(p)

Richard Bandler hasn't demonstrated even a single verifiable, undisputable result with his methods, and he's been fabricating things like this for decades?

There's research indicating that the NLP Fast Phobia Cure produces effects, but there's no research showing that it's better than other CBT techniques.

I consider basic claims by Bandler about rapport to be nowadays accepted by psychology in the form of mimicry of body language. As far as I can see, nobody cited Bandler for that, and mainstream psychology developed its ideas about mimicry separately, decades later.

The idea that there are eye accessing cues that are the same in every person, which NLP taught in its early days, has been shown to be false in methodologically bad studies, and it's not taught anymore by Bandler and good modern NLP trainers. You will, however, still find articles on the internet proclaiming the theory to be true as claimed in the early days of NLP.

In Bandler's latest book, he mostly talks about applying an idea about strengthening emotions that you want by spinning them in your body, and dissociating negative emotions. I'm not aware of good published research around those mechanisms.

Another significant claim of Bandler's is that he can cure schizophrenics. I don't know whether his approach with schizophrenics works, and as far as I know there's no research investigating it.

his methods don't lead to his results in a way that matches his predictions?

NLP trainers after Bandler are not in the habit of using language with the goal of saying things that are objectively true, but focus on saying things that they believe will produce positive change in the person they are talking with.

Bandler is not open about what he believes he's doing when he's training NLP trainers. Science itself rests on people openly stating what they believe.

the creator of NLP is not qualified to decide whether or not his methods are NLP?

Bandler does tell people at the end of his NLP trainer programs that there is no such thing as NLP, so the issue of whether he decides whether or not his methods are NLP is not straightforward.

NLP deals very differently with epistemological questions. It has a different approach from mainstream psychology to the question of how you teach a person the skills to be a good therapist.

Replies from: Jurily
comment by Jurily · 2015-11-03T15:31:17.680Z · LW(p) · GW(p)

I'm aware that Sturgeon's law is in full effect within the NLP community; my questions were specifically about Bandler and his results.

I fail to see how anything you said has an impact on the observation that Andy did not need to return to the mental institution. Unless you dispute at least that single claim, the lack of research is better explained by the hypothesis that the researchers failed to understand the topic well enough to account for enough variables, like how Bandler almost always teaches NLP in the context of hypnosis.

If whatever Bandler does is producing verifiable results, shouldn't it be at least an explicit goal of science to find out why it works for him, as opposed to whether it works if you throw an NLP manual at an undergrad? Shouldn't it be a goal of science to find out how he came up with his techniques, and how to do that better than him?

Replies from: jimmy, ChristianKl
comment by jimmy · 2015-11-03T19:03:35.828Z · LW(p) · GW(p)

If whatever Bandler does is producing verifiable results, shouldn't it be at least an explicit goal of science to find out why it works for him, as opposed to whether it works if you throw an NLP manual at an undergrad?

YES!

Personally, I wouldn't take Bandler very seriously because of the whole "narcissistic liar" thing and the fact that the one intervention of his I saw was thoroughly lacking in displayed skill (and noteworthy result), but yes, you should look at the experts, not at the undergrads handed a manual designed by the researcher who isn't an expert himself. It's much better to study "effectiveness of this expert", not "effectiveness of this technique". I'd just rather see someone like Steve Andreas studied.

I know from personal experience that even people with good intentions will strawman the shit out of you if you talk about this kind of thing because there's so much behind it that they just aren't gonna get. Ironically enough, Milton Erickson, the guy who Bandler modeled NLP after, allegedly had this exact complaint about NLP ("Bandler and Grinder think they have me in a nut shell, but all they have is a nutshell." )

Replies from: ChristianKl
comment by ChristianKl · 2015-11-03T22:52:18.092Z · LW(p) · GW(p)

Personally, I wouldn't take Bandler very seriously because of the whole "narcissistic liar" thing and the fact that the one intervention of his I saw was thoroughly lacking in displayed skill (and noteworthy result), but yes, you should look at the experts, not at the undergrads handed a manual designed by the researcher who isn't an expert himself. It's much better to study "effectiveness of this expert", not "effectiveness of this technique". I'd just rather see someone like Steve Andreas studied.

A while ago I would have agreed; today I'm not sure whether that would go anywhere. I think you need researchers with both scientific skills and actual abilities.

Part of the reason why I respect Danis Bois so much is that after he was successful at teaching bodywork, he went and worked through the proper academic route because he found the spiritual community too dogmatic. He got a real PhD and then a professorship.

For hypnosis it would likely have to be similar: someone who went deep into it, who lives in the mental world of hypnosis and does 90%+ of his day-to-day communication in that mode, but who then feels bad about the unscientific attitude of his community. A person like that who then starts a scientific career might really bring the field forward.

Replies from: jimmy
comment by jimmy · 2015-11-09T04:56:59.113Z · LW(p) · GW(p)

Yeah, I see the distinction you're getting at and completely agree. I was referring more to showing "hey, this can't be nonsense since somehow this guy actually gets results even though I have no idea what he's doing", which is an important step on its own, even if it's not scientific evidence behind individual teachable things.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-09T10:54:43.795Z · LW(p) · GW(p)

Look at the state of psychology today. They tried to replicate 100 findings. A third checked out. A third nearly checked out, and another third didn't check out at all.

If you are a psychologist at the moment and get embarrassed as a result, you want to move in a direction where more results replicate. Studying highly performing people like Steve Andreas could very well not help with that goal.

Replies from: jimmy
comment by jimmy · 2015-11-09T21:43:48.550Z · LW(p) · GW(p)

Right.

To me, that looks like a slightly different angle on the same thing. If you want to nail down some things so you can say "hey look, we know some things", then studying high performing people wouldn't be the way to go. If, on the other hand, you're pretty okay with saying "hey look, of course we don't know anything, that's why we're still in exploration mode, but look at all this cool shit we're sifting through!", then it starts to look a lot more appealing.

It certainly doesn't surprise me that this kind of research isn't being done, and I can empathize with that embarrassment and wanting to have something nailed down to show the naysayers. I also find it rather unfortunate. It strikes me as eating the marshmallow. Personally, I'd rather fast for a few days then drag back a moose.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-11-10T20:34:02.018Z · LW(p) · GW(p)

If, on the other hand, you're pretty okay with saying "hey look, of course we don't know anything, that's why we're still in exploration mode, but look at all this cool shit we're sifting through!", then it starts to look a lot more appealing.

That, actually, depends on whether this cool shit is a stable pattern or just transient noise. Looking at cool-shit noise is fine as an aesthetic experience, but I wouldn't call it science (or "exploration mode" either).

And, of course, there is the issue of intellectual honesty: saying "we found this weird thing that looks curious" is different from saying "we have conclusively demonstrated a statistically significant at the 0.0X level result".

Personally, I'd rather fast for a few days then drag back a moose.

I suspect you'll go off chasing butterflies and will never get anywhere, if we're getting into hunter-gatherer metaphors.

Replies from: jimmy
comment by jimmy · 2015-11-12T06:29:57.456Z · LW(p) · GW(p)

Looking at cool-shit noise is fine as an aesthetic experience,

That's a terrible aesthetic experience. Your sense of aesthetics is supposed to do something.

I suspect you'll go off chasing butterflies and will never get anywhere, if we're getting into hunter-gatherer metaphors.

That's a very reasonable thing to suspect. It's a less reasonable thing to take as given, especially considering the size of the prize and the ease of asking a hunter "ever killed anything?".

Replies from: Lumifer
comment by Lumifer · 2015-11-12T17:16:19.471Z · LW(p) · GW(p)

That's a terrible aesthetic experience.

LOL. Besides the whole going-meta-on-aesthetics thing, wouldn't that depend on how cool the shit is?

and the ease of asking a hunter "ever killed anything?"

The hunter will proudly show you his collection of butterflies, all nicely pinned and displayed in proper boxes. Proper boxes are very important, dontcha know?

I have a feeling we have different images in mind. You have a vision of intrepid explorers deep in the jungle, too busy collecting specimens and fighting off piranhas and anacondas to suitably process all they see -- the solid scientific work can wait until they return to the lab and can properly sort and classify all they brought back.

I see a medieval guild of piece workers, producing things. Some things are OK, some not really, but you must produce the pieces, otherwise you'll starve and never make it from apprentice to master. It would be, of course, very nice to craft a masterpiece, but if you can't, a steady flow of adequate (as determined by your peers, who are not exactly unbiased judges) pieces will be sufficient, and the more the better.

Replies from: jimmy
comment by jimmy · 2015-11-13T22:13:49.191Z · LW(p) · GW(p)

wouldn't that depend on how cool the shit is?

The point is that how "cool" something is is supposed to track the potential value there. In practice it doesn't always (carbon fiber decals are a thing), but that just means they're doing it wrong.

The hunter will proudly show you his collection of butterflies, all nicely pinned and displayed in proper boxes. Proper boxes are very important, dontcha know?

I'd find that very strange, but could happen. And if so, you can confirm your suspicion that they weren't getting anything interesting done. Still seems worth asking to me.

I have a feeling we have different images in mind.

It seems like you see me as implicitly asking "why do you guys keep making pieces instead of going on an adventure!?!?!" and answering with "you see epic adventure, but what they see is the necessity of making their pieces. If they didn't have to get their pieces made, and if there actually was epic adventure to have, of course they'd do that instead. It's that they don't agree with you".

I agree. That's why they do what they do - 'twas never a mystery to me. I see room for that and epic and lucrative anaconda fighting adventures. Or for fools chasing that fantasy and running off into the jungle to starve. Or all three and more.

I have a couple points here even before getting into what happens when you quit and seek adventure.

1) "you must produce the pieces". Really? You sure sure? What number do you put on that confidence? How you think you know?

Often people get caught up running from what seems like a "must" only for it to turn out to be not mission critical. Literal hunger makes for a perfect example. When people fast for a few days for the first time, it often really changes the way they think about the hunger signal. It's no longer "You must eat" and instead becomes more of just a suggestion.

2) "I'm not convinced adventuring is worth it". Of course not. You haven't done your research.

And from your mindset - if you really must produce the pieces, then you didn't need to ask. If I offer you a chance of a million dollars or a sure $500, but the mob is gonna kill you if you don't pay off your $500 debt, there's little point in asking what the chance is if you already know it isn't "all but guaranteed".

However, even if it's only a 15% chance, you're losing out on an expected $149,500. If there's any chance that 1) not producing the pieces isn't an immediate game ender or 2) it's not completely impossible to sell your chance for much more than $500, then you should probably at least ask what your chance of winning the million is before settling for $500.
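(For concreteness, a minimal sketch of that expected-value arithmetic, using the same hypothetical numbers:)

```python
p_win = 0.15                      # the assumed 15% chance of the million
gamble_ev = p_win * 1_000_000     # expected value of the gamble: $150,000
sure_thing = 500                  # the guaranteed alternative
print(gamble_ev - sure_thing)     # forgone expected value: $149,500
```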

So what I see is not "an adventure that is sure to pay off in excess and yeah it might be uncomfortable, but it's not like there's any real downside so don't be stupid", but rather "these people aren't being careful to consider their confidence levels when it's crucial, and so they are going to end up stuck as pieceworkers even if there's a way to have much much more".

Replies from: Lumifer
comment by Lumifer · 2015-11-16T01:58:17.942Z · LW(p) · GW(p)

The point is that how "cool" something is is supposed to track the potential value there

Nope. How useful something is is supposed to track the potential value. If I were to go meta, I'd say that "cool" implies a particular kind of signaling to a specific social sub-group. There isn't much "potential value" other than the value of the signal itself.

It seems like you see me as implicitly asking "why do you guys keep making pieces instead of going on an adventure!?!?!"

Still nope. Most people don't want to go on a real adventure -- it's too risky, dangerous, uncomfortable. Most people -- by far -- prefer the predictable job of producing the pieces so that they can pay the mortgage on their suburban McMansion. In the case of academia, going for broke usually results in your being broke (and tenure-less) while a steady production of published papers gives you quite good chances of remaining in academia. Maybe not in the Ivies, but surely there is a college in South Dakota that wants you as a professor :-/

"you must produce the pieces". Really?

If you want tenure, yes. If you don't want tenure, you can do whatever you want.

then you should probably at least ask what your chance of winning the million is before settling for $500.

Sure. The answer is a shrug and if you want a verbalization, it will go along the lines of "Nobody knows".

so they are going to end up stuck as pieceworkers even if there's a way to have much much more"

There is no way for all of them to "have much much more". Whether you think the trade-off is acceptable depends, among other things, on your risk tolerance, but in any case the mode -- the most likely outcome -- is still of you losing.

Replies from: jimmy
comment by jimmy · 2015-11-16T06:30:45.023Z · LW(p) · GW(p)

From here it looks like you aren't addressing what I'm actually saying and instead are responding to arguments you think I must be trying to get at.

Are you sure you're being sufficiently careful and charitable in your reading of my comments?

Replies from: Lumifer
comment by Lumifer · 2015-11-16T15:41:32.206Z · LW(p) · GW(p)

Are you sure you're being sufficiently careful and charitable in your reading of my comments?

Sufficiently? X-D Clearly not.

Replies from: jimmy
comment by jimmy · 2015-11-19T18:12:47.087Z · LW(p) · GW(p)

Heh, okay. I'll try again from another angle.

To be clear I do see the whole "intrepid explorers" thing pretty much exactly how you said it. I went that way myself and I'm super glad I did. It has been fun and had large payoff for me.

At the same time though, I realize that this is not how everyone sees it. I realize that a lot of the payoffs I've gotten can be interpreted other ways or not believed. I realize that other people want other things. I realize that I am in a sense lucky to not only get anything out of it, but to even be able to afford trying. And I realize why many people wouldn't even consider the possibility.

Given that, it'd be pretty stupid to run around saying "drop what you're doing and go on an adventure!" (or anything like it), as if it weren't the case that, from their perspective, not only is "adventure" almost certainly going to lead nowhere, but they must make the pieces. As if "adventure" actually is a good idea for them - for most people, all things considered, it probably isn't.

My point is entirely on the meta level. It's not even about this topic in particular. I frequently see people rounding "this is impossible within my current models" to "this is impossible". Pointing this out is rarely a "woah!" moment for people, because people generally realize that they could be wrong and at some point you have to act on your models. If you've looked and don't see any errors it doesn't mean none exist, but knowing that errors might exist doesn't exactly tell you where to look or what to do differently.

What I think people don't realize is how important it is to think through how you're making that decision - and what actually determines whether they round something off to impossible or not. I don't think people take seriously the idea that taking negligible in-model probabilities seriously will pay off on net - since they've never seen it happen and it seems like a negligible probability thing.

And who knows, maybe it won't pay off for them. Maybe I'm an outlier here too and even if people went through the same mental motions as me it'd be a waste. Personally though, I've noticed that not always but often enough those things that feel "impossible" aren't. I find that if I look hard enough, I often find holes in my "proof of impossibility" and occasionally I'll even find a way to exploit those holes and pull it off. And I see them all the time in other people - people being wrong where they don't even track the possibility that they're wrong and therefore there is no direct path to pointing out their error because they'll round my message to something that can exist in their worldview. I have other things to say about what's going on here that makes me really doubt they're right here, but I think this is sufficient for now.

Given that, I am very hesitant to round p=epsilon down to p=0, and if the stakes are potentially high I make damn sure that my low probability is stable upon more reflection and assumption questioning. I won't always find any holes in my "proof", nor will I always succeed if I do. Nor will I always try, of course. But the motions of consciously tracking the stakes involved and value of an accurate estimate has been very worthwhile for me.

The point I'm making is in the abstract, but one that I see as applying very strongly here. Given that this is one of the examples that seems to have paid off for me, it'd take something pretty interesting (and dare I say "cool"?) to convince me that it was never worth even taking the decision seriously :)

Replies from: Lumifer
comment by Lumifer · 2015-11-20T16:37:09.896Z · LW(p) · GW(p)

Yes, I agree that people sometimes construct a box for themselves and then become terribly fearful of stepping outside this box (="this is impossible"). This does lead to them either not considering at all the out-of-the-box options or assigning, um, unreasonable probabilities to what might happen once you step out.

The problem, I feel, is that there is no generally-useful advice that can be given. Sometimes your box is genuinely constricting and you'd do much better by getting out. But sometimes the box is really the best place (at least at the moment) and getting out just means you become lunch. Or you wander in the desert hoping for a vision but getting a heatstroke instead.

You say

I don't think people take seriously the idea that taking negligible in-model probabilities seriously will pay off on net

but, well, should they? My "in-model probabilities" tell me that I'm not going to become rich by playing the lottery. Should I take the lottery idea seriously? Negligible probabilities are often (but not always) negligible for a good reason.

Given that, I am very hesitant to round p=epsilon down to p=0

Sure. But things have costs. If the costs (in time, effort, money, opportunity) are high enough, you don't care whether it's epsilon or a true zero, the proposal fails the cost-benefit test anyway.

Replies from: jimmy
comment by jimmy · 2015-11-28T20:03:37.840Z · LW(p) · GW(p)

but, well, should they?

Yes. From the inside it can be very tough to tell, but from the outside they're clearly wrong about them all being low probability. They don't check for potential problems with the model before trusting it without reservation, and that causes them to be wrong a lot. Even if your "might as well be 100%" is actually 97% - which is extremely generous - you'll be wrong about these things on a regular basis. It's a separate question what - if anything - to do about it, but I'm not going to declare that I know there's nothing for me to do about it until I'm equally sure of that.

I think one of the real big things that makes the answer feel like "no" is that even if you learn you're wrong, if you can't learn how you're wrong and in which direction to update even after thinking about it, then there's no real point in thinking about it. If you can't figure it out (or figure out that you can trust that you've figured it out) even when it's pointed out to you, then there's less point in listening. I think "I don't see what I can do here that would be helpful" often gets conflated with "it can't happen", and that's a mistake. The proper way to handle those doesn't involve actively calling them "zero". It involves calling them "not worth thinking about" and the like. There is nothing to be gained by writing false confidences in mental stone and much to be lost.

My "in-model probabilities" tell me that I'm not going to become rich by playing the lottery. Should I take the lottery idea seriously? Negligible probabilities are often (but not always) negligible for a good reason.

Right. With the lottery, you have more than a vague intuitive "very low odds" of winning. You have a model that precisely describes the probability of winning, and you have a vague, intuitive, but well-backed "practically certain" confidence that your model is correct. If I were to ask "how do you know that your odds are negligible?" you'd have an answer, because you've already been there. If I were to ask "well, how do you know that your model of how the lottery works is right?" you could answer that too, because you've been there as well. You know how you know how the lottery works. Winning the lottery may be a very big win, but the expected value of thinking about it further is still very low, because you have detailed models and metamodels that put firm bounds on things.

At the end of the day, I'm completely comfortable saying "it is possible that it would be a very costly mistake to not think harder about whether winning the lottery might be doable or how I'd go about doing it if it were AND I'm not going to think about it harder because I have better things to do".

If I were gifted a lotto ticket and traded it for a burrito, I'd feel like it was a good trade. Even if the lottery ticket ended up winning the jackpot, I could stand there and say "I was right to trade that winning lotto ticket for a burrito" and not feel bad about it. It'd be a bit of a shock and I'd have to go back and make sure that I didn't err, but ultimately I wouldn't have any regrets.
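To put rough numbers on that trade - every figure below is invented purely for illustration, not a claim about real lottery odds or burrito prices - a minimal expected-value sketch looks like this:

```python
# A crude sketch with made-up numbers, just to make the ex ante comparison concrete.
p_win = 1 / 292_000_000    # assumed chance the gifted ticket wins the jackpot
jackpot = 100_000_000      # assumed jackpot size, in dollars
burrito_value = 8          # assumed value of a burrito to me, in dollars

ticket_ev = p_win * jackpot   # roughly $0.34
print(f"expected value of holding the ticket: ${ticket_ev:.2f}")
print(f"value of the burrito:                 ${burrito_value:.2f}")

# Ex ante the burrito wins by a wide margin, and that judgment doesn't
# become a mistake just because this particular ticket later turns out
# to have been the winner.
```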

If, say, it was given to me as a "lucky ticket" with a wink and a nod by some mob boss whose life I'd recently saved... and I traded it for a freaking burrito because "it's probably 1 in 100 million, and 1 in 100 million isn't worth taking seriously"... I'd be kicking myself real hard for not taking a moment to question the "probably" when I learned that I traded a winning ticket for a burrito.

And all those times the ticket from the mob boss didn't win (or I didn't realize it won because I traded it for a burrito) would still be tremendous mistakes. Just invisible mistakes if I don't stop to think and it doesn't happen to whack me in the head. The idea of making mistakes, not realizing, and then using that lack of realization as further evidence that I'm not overconfident is a trap I don't want to fall into.

My brief attempt at "general advice" is to make sure you actually think it through and would be not just willing but comfortable eating the loss if you're wrong. If you're not, there's your little hint that maybe you're ignoring something important.

When I point people to these considerations ("you say you're sure, so you'd be comfortable eating that loss if it turns out not to be the case?"), the vast majority of the time, when they stop deflecting and give a firm "yes" or "no", the answer is "no" - and they rethink things. There are all sorts of caveats here, but the main point stands - when it's important, most people conclude they're sure without actually checking to their own standards.

That's just not making bad decisions relative to your own best models/metamodels - you can still make bad decisions by more objective standards. This can't save you from that, but what it can do is make sure your errors stand out and don't get dismissed prematurely. In the process of coming to say "yes, and I can eat the loss if I'm wrong" you end up figuring out what kinds of things you don't expect to see and committing to the fact that your model predicts they shouldn't happen. That makes it a lot easier to notice when your model is wrong, and harder to let yourself get away with pretending it isn't.

Replies from: Lumifer
comment by Lumifer · 2015-11-30T17:44:36.241Z · LW(p) · GW(p)

From the inside it can be very tough to tell, but from the outside they're clearly wrong about them all being low probability.

I don't know about that. That clearly depends on the situation -- and while you probably have something in mind where this is true, I am not sure it is true in the general case. I am also not sure how you would recognize this type of situation without going circular or starting to mumble about Scotsmen.

if you learn you're wrong, if you can't learn how you're wrong and in which direction to update even after thinking about it

What do you mean, can you give some examples? Normally, if people have locked themselves in a box of their own making, they can learn that the box is not really there.

The idea of making mistakes, not realizing, and then using that lack of realization as further evidence that I'm not overconfident is a trap I don't want to fall into.

That's a good point -- I agree that if you don't realize what opportunity costs you are incurring, your cost-benefit analysis might be wildly out of whack. But again, the issue is how you reliably distinguish, ex ante, the cases where you need to examine things very carefully from those where you do not. I expect this distinguishing to be difficult.

"Actually thinking it through" is all well and good, but it basically boils down to "don't be stupid" and while that's excellent advice, it's not terribly specific. And "can you eat the loss?" is not helping much. For example, let's say one option is me going to China and doing a start-up there. My "internal model" says this is a stupid idea and I will fail badly. But the "loss" is not becoming a multimillionaire -- can I eat that? Well, on the one hand I can, of course, otherwise I wouldn't have a choice. On the other hand, would I be comfortable not becoming a multimillionaire? Um, let's say I would much prefer to become one :-) So should I spend sleepless nights contemplating moving to China?

Replies from: jimmy
comment by jimmy · 2015-12-02T02:45:47.736Z · LW(p) · GW(p)

I don't know about that. That clearly depends on the situation -- and while you probably have something in mind where this is true, I am not sure it is true in the general case. I am also not sure how you would recognize this type of situation without going circular or starting to mumble about Scotsmen.

I mean the whole group of things that any given person decides or would decide is "low probability". I see plenty of "p=0" cases turning out to be true, which is plenty to show that the group "p=0" as a whole is overconfident - I'm not trying to narrow it down to a group where they're probably wrong, just overconfident.

What do you mean, can you give some examples? Normally, if people have locked themselves in a box of their own making, they can learn that the box is not really there.

It's not that they can't learn that the box isn't really there, it's that even if they know it's not there they don't know how to climb out of it.

There are a lot of things I know I might be wrong about (and care about) that I don't look into further. It's not that I think it's unlikely that there's anything for me to find, but that it's unlikely I'll find it in the next unit of effort. Even if someone is working with an obviously broken model and making no attempt to improve it, it doesn't necessarily mean they haven't seriously considered the possibility that they're wrong. It might just mean that they don't know in which direction to update and are stuck working with a shitty model.

Some things are like saying "check your shoelaces". Others are like saying "check your shoelaces" to a kid too young to know how to tie his own shoes.

"Actually thinking it through" is all well and good, but it basically boils down to "don't be stupid" and while that's excellent advice, it's not terribly specific.

Heh. Yes, it is difficult and I expect that just comes with the territory. And yes, it kinda sorta just boils down to "don't be stupid". The funny thing is that when dealing with people who know me (and therefore get the affection and intent behind it) "don't be stupid" is often advice I give, and it gets the intended results. The specificity of "you're doing something stupid right now" is often enough.

And "can you eat the loss?" is not helping much. For example, let's say one option is me going to China and doing a start-up there. My "internal model" says this is a stupid idea and I will fail badly. But the "loss" is not becoming a multimillionaire -- can I eat that? Well, on the one hand I can, of course, otherwise I wouldn't have a choice. On the other hand, would I be comfortable not becoming a multimillionaire? Um, let's say I would much prefer to become one :-) So should I spend sleepless nights contemplating moving to China?

I'd much prefer to be a multimillionaire too, yet I'm comfortable with choosing not to pursue a startup in China, because I am sufficiently confident that it is not the best thing for me to pursue right now - and I'm sufficiently confident that I wouldn't change my mind if I looked into it a little further. It's not that I don't care about millions of dollars, it's that when multiplied by the intuitive chance that thinking one step further will lead to me having it, it rounds down to an acceptable loss.
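A minimal sketch of that rounding-down, with every number invented for illustration (the upside of the startup, the chance that one more step of thought flips the decision, and the value of the time that step would take):

```python
# Made-up numbers only - a sketch of "is one more step of thinking worth it?"
payoff_if_right = 5_000_000            # assumed upside if the China startup really is the right move, in dollars
p_more_thought_flips_decision = 1e-6   # assumed chance that another step of thought changes my answer
cost_of_thinking = 50                  # assumed value of the time that step would take, in dollars

expected_gain = p_more_thought_flips_decision * payoff_if_right   # $5 here
if expected_gain > cost_of_thinking:
    print("worth thinking one step further")
else:
    print("acceptable loss - drop it")
```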

If, on the other hand, when you look at it you hear this little voice that says "Eek! Millions of dollars is a lot! How do I know that I shouldn't be pursuing a China startup!?", then yes, I'd say you should think about it (or about how you make those kinds of decisions) until you're comfortable eating that potential loss instead of living your life by pushing it away.

You say "don't be stupid" as if it's something that we're beyond as a general rule. I see it as something that takes a whole lot of thought to figure out how not to be stupid this way. Once I started paying attention to these signs of incongruity, I started to recognizing it everywhere. Even in places that used to be or still are outside my "box".

comment by ChristianKl · 2015-11-09T23:51:11.050Z · LW(p) · GW(p)

Science itself is about the search for knowledge, not about sifting through cool shit. I also consider it okay that our society has academic psychologists who attempt to build reliable knowledge.

I think it's worthwhile to have different communities of people pursuing different strategies of knowledge generation.

Replies from: jimmy
comment by jimmy · 2015-11-10T20:21:07.523Z · LW(p) · GW(p)

I don't disagree with any of the statements you made, and none of them are required to be false for my point to be valid.

I'm kinda getting the impression that you aren't being very careful or charitable in your reading of my comments. Is that impression wrong?

Replies from: ChristianKl
comment by ChristianKl · 2015-11-10T20:55:59.545Z · LW(p) · GW(p)

I don't think the point of a post is to show how another person is wrong, or to say only things with which the person I'm talking to is likely to disagree.

comment by ChristianKl · 2015-11-03T16:37:30.616Z · LW(p) · GW(p)

I fail to see how anything you said has an impact on the observation that Andy did not need to return to the mental institute.

Given the current scientific framework you don't change a theory based on anecdotal evidence and single case studies. Especially when it comes to a person who's known to be at least partly lying about the anecdotes he tells.

If whatever Bandler does is producing verifiable results, shouldn't it be at least an explicit goal of science to find out why it works for him, as opposed to whether it works if you throw an NLP manual at an undergrad?

What do you mean by the phrase "explicit goal of science"? The goals that grant-funding agencies set when they give out grants? To the extent that you think studying people with high abilities is a good approach to advancing science, I wouldn't pick a person who's in the habit of lying and showmanship, but a person who values epistemically true beliefs and who's open about what they think they are doing.

I think the term pseudoscience doesn't really apply to Bandler. For me the term means a person who's pretending to play by the rules of science but who doesn't. Bandler isn't playing by the rules or pretending to do so. That doesn't mean that he's wrong or that what he teaches isn't effective, but at the same time it doesn't bring his work into science.

It's typical for New Atheists to reject everything that's not part of the scientific mosaic as useless, discredited pseudoscience. I don't think that's a useful way of looking at how the world works. If you want to go further in that direction of thought, a nice talk was recently shared on the Facebook LW group: Scientific Pluralism and the Mission of History and Philosophy of Science

For full disclosure, I do have a decent amount of NLP training with Chris Mulzer, who attended Bandler's trainer training program every year for a decade. I know multiple people who have attended seminars with Bandler.

Replies from: Jurily
comment by Jurily · 2015-11-06T00:48:57.182Z · LW(p) · GW(p)

Given the current scientific framework you don't change a theory based on anecdotal evidence and single case studies.

Oh, I see the problem now. You're waiting for research to allow you to decide to do the research you're waiting for. When the scientific framework tells you there isn't enough research to reach a conclusion, doesn't it also tell you to do more research? Picking a research topic should not be as rigorous a process as the research itself.

Even if all the anecdotal and single case studies are false, shouldn't you at least be interested in why so many people believe in it? NLP is not a religion; you pick it up as an adult. Even if the entire NLP/hypnosis/seduction/whatever industry is just a giant crackpot convention, they still demonstrate enough persuasion techniques to convince people it's real. Shouldn't you be swarming over that with the idea of eliminating your suicide rate?

Replies from: ChristianKl
comment by ChristianKl · 2015-11-06T12:00:34.435Z · LW(p) · GW(p)

What do you mean when you say "you"?

I have more formal credentials in NLP than in academic psychology.

Even if the entire NLP/hypnosis/seduction/whatever industry is just a giant crackpot convention

I have multiple friends who make their living in that industry. One of my best friends worked for a while as a salesperson for Bandler's seminars. I don't have as many friends with degrees in academic psychology.

I just understand both sides well enough to tell you about the situation we have at the moment.

comment by MrMind · 2015-11-03T08:20:22.165Z · LW(p) · GW(p)

NLP is arguably very difficult to analyze, because it's not a single body of coherent knowledge forming a model, but rather a mash-up of psychological techniques and some assertions about how the mind works.
I think that when you can extract something that is definitely an assertion about the mind, or about how some technique improves the lives of people who use it, then you can test it. And it's usually found to be false.
There are however some assertions that turned out to be 'true' (that is, an experiment showed some effect), like the mirroring effect, and others that were borrowed from other fields or experiments.
It's better not to be too hung up on the pseudoscience label: just know that when you talk about NLP, you are entering a field of not-necessarily-related beliefs which are mostly false.

comment by knb · 2015-11-03T06:13:26.378Z · LW(p) · GW(p)

Richard Bandler hasn't demonstrated even a single verifiable, undisputable result with his methods, and he's been fabricating things like this for decades?

I think homeopaths and faith-healers could probably dredge up a few convincing-seeming anecdotes as well.

If there are no claims to any of the above, what exactly was discredited?

The Wikipedia article you linked to presents numerous meta-analyses in support of the claim that NLP is a pseudoscience. If you want to know what they think they've discredited, read them.

comment by [deleted] · 2015-11-02T10:50:27.118Z · LW(p) · GW(p)

Health is good. Intelligence is sometimes associated with health. Wikipedia has two articles on the relationship between intelligence and health. The other is here. So wouldn't you want to know more about your intelligence?

Clarity's index of sometimes subtle neurological conditions that impair cognitive function

Kleine–Levin syndrome

a rare sleep disorder characterized by persistent episodic hypersomnia and cognitive or mood changes. Many patients also experience hyperphagia, hypersexuality and other symptoms.

KLS can be diagnosed when there is confusion, apathy, or derealization in addition to frequent bouts of extreme tiredness and prolonged sleep

Vascular dementia

Patients suffering from vascular dementia present with cognitive impairment, acutely or subacutely as in mild cognitive impairment, after one or many cerebrovascular events. The symptoms of dementia may progress gradually or step-wise after each small stroke. Some people may appear to improve between events and decline after more silent strokes

Patients develop progressive cognitive, motor and behavioural signs and symptoms. A significant proportion also develop affective symptoms. These changes typically occur over a period of 5–10 years. If the frontal lobes are affected, which is often the case, patients may present as apathetic or abulic. This is often accompanied by problems with attention, orientation...

Binswanger's disease

This disease is characterized by loss of memory and intellectual function and by changes in mood. These changes encompass what are known as executive functions of the brain. It usually presents between 54 and 66 years of age, and the first symptoms are usually mental deterioration or stroke

Cortical dementia is atrophy of the cortex which affects ‘higher’ functions such as memory, language, and semantic knowledge whereas subcortical dementia affects mental manipulation, forgetfulness, and personality/emotional changes. Binswanger’s Disease has shown correlations with impairment in executive functions, while episodic or declarative memory remains normal. Executive functions are brain processes that are responsible for planning, cognitive flexibility, abstract thinking, rule acquisition, initiating appropriate actions and inhibiting inappropriate actions, and selecting relevant sensory information. There have been many studies done comparing the mental deterioration of Binswanger patients and Alzheimer patients. It has been found in the Graphical Sequence Test that Binswanger patients have hyperkinetic perseveration errors which cause the patients to repeat motion even when not asked whereas Alzheimer patients have semantic preservation because when asked to write a word they will instead draw the object of the word.

Autism

Although individuals with Asperger syndrome acquire language skills without significant general delay and their speech typically lacks significant abnormalities, language acquisition and use is often atypical. Abnormalities include verbosity, abrupt transitions, literal interpretations and miscomprehension of nuance, use of metaphor meaningful only to the speaker, auditory perception deficits, unusually pedantic, formal or idiosyncratic speech, and oddities in loudness, pitch, intonation, prosody, and rhythm.[1] Echolalia has also been observed in individuals with AS. Pursuit of specific and narrow areas of interest is one of the most striking possible features of AS. Three aspects of communication patterns are of clinical interest: poor prosody, tangential and circumstantial speech, and marked verbosity. According to the Adult Asperger Assessment (AAA) diagnostic test, a lack of interest in fiction and a positive preference towards non-fiction is common among adults with AS. Speech may convey a sense of incoherence; the conversational style often includes monologues about topics that bore the listener, fails to provide context for comments, or fails to suppress internal thoughts. Individuals with AS may fail to detect whether the listener is interested or engaged in the conversation. The speaker's conclusion or point may never be made, and attempts by the listener to elaborate on the speech's content or logic, or to shift to related topics, are often unsuccessful. It maps well to general-processing theories such as weak central coherence theory, which hypothesizes that a limited ability to see the big picture underlies the central disturbance in ASD. Asperger called the condition "autistic psychopathy" and described it as primarily marked by social isolation

More on Central Coherence Theory here

Meningitis

In children there are several potential disabilities which may result from damage to the nervous system, including sensorineural hearing loss, epilepsy, learning and behavioral difficulties, as well as decreased intelligence. These occur in about 15% of survivors. Some of the hearing loss may be reversible. In adults, 66% of all cases emerge without disability. The main problems are deafness (in 14%) and cognitive impairment (in 10%).

The most common symptoms of meningitis are headache and neck stiffness associated with fever, confusion or altered consciousness, vomiting, and an inability to tolerate light (photophobia) or loud noises (phonophobia).

Meningitis can lead to serious long-term consequences such as deafness, epilepsy, hydrocephalus and cognitive deficits, especially if not treated quickly.

Toxic encephalopathy

Consequence:

Toxic encephalopathy has a wide variety of symptoms, which can include memory loss, small personality changes/increased irritability, insidious onset of concentration difficulties, involuntary movements, fatigue, seizures, arm strength problems, and depression

Accessibility:

In addition, chemicals, such as lead, that could instigate toxic encephalopathy are sometimes found in everyday products such as cleaning products, building materials, pesticides, air fresheners, and even perfumes. These harmful chemicals can be inhaled (in the case of air fresheners) or applied (in the case of perfumes)

Recovery:

Long term studies have demonstrated residual cognitive impairment (primarily attention and information-processing impairment resulting in dysfunction in working memory) up to 10 years following cessation of exposure

Replies from: ChristianKl
comment by ChristianKl · 2015-11-02T20:36:26.215Z · LW(p) · GW(p)

So wouldn't you want to know more about your intelligence? [...] Clarity's index of sometimes subtle neurological conditions

I don't think trying to diagnose oneself via a bunch of descriptions of rare illnesses is a good way to learn more about one's intelligence.

comment by [deleted] · 2015-11-03T05:39:17.731Z · LW(p) · GW(p)

I reckon the reason I have admired and imitated entrepreneurship, innovation and initiative above their market value, social and financial, is that they seem to signal responsibility in spite of circumstances where a solution to a particular problem may not be apparent. But there is a whole class of other values and behaviours that entail responsibility, and I reckon the quality of those particular signals as implying responsibility has been decaying as I mature. Duty and consistency are the traits I now value on a par with them, things I had neglected before.

comment by [deleted] · 2015-11-02T11:13:08.332Z · LW(p) · GW(p)

Has anyone tried Gene partner to identify genetically suitable dates? I don't think it uses 23andme data though; those money-grabbing rats want us to take their own test!

Even a MIRI researcher thinks the idea is good and profitable. The North American competitor is instant chem. There appears to be distrust of the idea; see Moran's comment in the criticism here

Also, any sperm donors on LW?

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2015-11-02T11:41:29.075Z · LW(p) · GW(p)

Please stop to edit post to completely replace their content with something else.

Replies from: gjm, Gunnar_Zarncke
comment by gjm · 2015-11-02T12:34:48.493Z · LW(p) · GW(p)

I would suggest that that sort of behaviour merits (some warning, and then if no change) a ban.

Having no magical moderatorial powers, I'll have to settle for just downvoting the offending comment and would encourage others to do likewise (if they are willing to believe ChristianKl's statement about what Clarity did).

Entirely tangentially (and with apologies if I cause offence), a linguistic note: English uses the gerund (Xing) rather than the infinitive (to X) in the construction "stop ..." -- we would write "please stop editing ..." rather than "please stop to edit". I'm not sure exactly how this generalizes -- you could say either "start editing" or "start to edit"; either "cease editing" or "cease to edit"; either "I like editing" or "I like to edit"; only "I have to edit" and not "I have editing". This may be of interest but doesn't shed much light on why you can "start to edit" but not "stop to edit". When I started writing this paragraph I hoped to be able to say something more general and therefore more useful, but having got this far I'll leave it in even though it seems actually to be quite specific to the verb "stop".

comment by Gunnar_Zarncke · 2015-11-02T19:44:44.993Z · LW(p) · GW(p)

I assume Clarity has rewritten their post because I can't clearly see the offense after skimming the links. Or what did I miss?

Replies from: ChristianKl
comment by ChristianKl · 2015-11-02T20:30:46.173Z · LW(p) · GW(p)

"Rewritten" would assume that the content that's now in the post has something to do with the original content. I don't think that's true.

I did quote one sentence:

Why do this? Cause intelligence isn't a terminal goal. For most people, health is.

I don't have a copy of the rest but it was about reducing one's own intelligence.

comment by ChristianKl · 2015-11-02T11:18:43.233Z · LW(p) · GW(p)

Why do this? Cause intelligence isn't a terminal goal. For most people, health is.

What makes you assume that you will get healthier if you do something to get less intelligent?

Replies from: entirelyuseless
comment by entirelyuseless · 2015-11-02T14:29:38.183Z · LW(p) · GW(p)

Health is no more of a terminal goal for me than intelligence is. Both of them are just useful for other things.

Replies from: gjm
comment by gjm · 2015-11-02T15:03:41.925Z · LW(p) · GW(p)

Maybe this depends on how you define "health". Many common ways of being unhealthy involve pain, tiredness, nausea, etc. Freedom from those things seems pretty much a terminal goal (at least for me). So, even if health isn't a terminal goal, not having symptoms of ill-health probably is. (Again, for me, but I doubt I'm unusual in this respect.)

Replies from: entirelyuseless
comment by entirelyuseless · 2015-11-02T15:08:08.196Z · LW(p) · GW(p)

I agree, but in that way intelligence is also a terminal goal; I like the feeling of understanding things and it seems to me this is pretty closely connected to intelligence. Of course I realize that it is possible to be so unintelligent that you incorrectly feel that you understand everything; but that is more like being in bad health and taking drugs that make you feel good.