Posts

A future for neuroscience 2018-08-19T23:58:21.807Z

Comments

Comment by mike-johnson on [deleted post] 2022-03-01T01:56:23.678Z

I’m really glad to see this. I can’t say I fully grasp your particular approach, but what you’ve written about model fragments has really resonated.

My intuition around value extrapolation is that if we extrapolate the topic itself, it will eventually come down to building fine-grained models of nervous system dynamics. I'll be curious to see how your work intersects with that, what it assumes about neuroscience, and what sort of neuroscience progress you think might make your work easier.

Good luck!

Comment by Mike Johnson (mike-johnson) on A future for neuroscience · 2018-08-21T20:04:08.276Z · LW · GW

I do think that lower-frequency harmonics will be both better defined and more useful for hanging functional or computational stories on. (Agreed that low-harmonics-as-operators-on-Bayesian-priors could be a very generative frame.) I'm a little skeptical of the current stories being told about functional localization; some of the localization could indeed be spatial, but some could be temporal (information tacitly encoded into harmonics). I think the proof is in the pudding in terms of what each hypothesis lets us do. Probably no one-size-fits-all answer.

Comment by Mike Johnson (mike-johnson) on A future for neuroscience · 2018-08-20T17:58:00.486Z · LW · GW

Thanks for the thoughtful comment. I would generally endorse the claims you make, but I'd differ on your analogy that psychologists don't need to know about advances in neuroscience for the same reason programmers don't need to know about transistors, and on the conclusion you draw from it.

First, I'd stand behind the theme that:

The problem facing neuroscience in 2018 is that we have a lot of experimental knowledge about how neurons work, and a lot of observational knowledge about how people behave, but few elegant compressions for how to connect the two. CSHW promises to do just that: to be a bridge from bottom-up neural dynamics (things we can measure) to high-level psychological/phenomenological/psychiatric phenomena (things we care about). A bottom-up bridge like this should allow continuous improvement as our understanding of the fundamentals improves, as well as significant unification across disciplines: instead of psychology, psychiatry, philosophy, and so on each having their own (slightly incompatible) ontologies, a true bottom-up approach can unify these different ways of knowing and serve as a common platform, a lingua franca for high-level brain dynamics.

In short, brain-stuff isn't neatly modularized like computer-stuff, and so advances in the lower levels of the stack can have big impacts on how things are (or should be) done higher up in the stack. If CSHW does turn out to be generative in the ways I list, I think it'll have a direct impact on psychology and psychiatry; they couldn't help but change. In particular, a theory which might allow unification across the different psychological sciences is a big deal.
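To make the "bridge" a bit more concrete: the core computation behind CSHW (following Atasoy et al.) is to treat the structural connectome as a graph, take the eigenvectors of its graph Laplacian as the "connectome harmonics," and then express measured activity as a weighted combination of those harmonics. The sketch below is my own minimal illustration of that decomposition, not code from the original work; the toy connectome, names, and sizes are purely hypothetical.

```python
import numpy as np

def connectome_harmonics(adjacency):
    """Connectome harmonics: eigenvectors of the graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues, harmonics = np.linalg.eigh(laplacian)
    return eigenvalues, harmonics  # columns ordered from low to high spatial frequency

def harmonic_coefficients(activity, harmonics):
    """Project a snapshot of neural activity onto the harmonic basis."""
    return harmonics.T @ activity  # contribution of each harmonic to the pattern

# Hypothetical toy example: a 5-node "connectome" and a random activity snapshot.
adjacency = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
_, harmonics = connectome_harmonics(adjacency)
activity = np.random.default_rng(0).normal(size=5)
print(harmonic_coefficients(activity, harmonics))
```

The point of the sketch is only that the same harmonic basis is derived from measurable structure (the connectome) and then used to describe measurable function (activity patterns), which is what lets the framework act as a bottom-up bridge.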

Re: your conclusion, I think it's easy to underestimate how risk-averse academia is, and the degree to which academic politics plays a role in which ideas gain traction and which don't. The idea that psychologists and psychiatrists are currently working from good models, that science will straightforwardly prove a better model is better when one comes around, and that the community will then naturally and quickly adopt it: it would be great if all of these things were true, but I have little confidence that any of them are.

Instead, I think there are huge structural problems which allow considerable arbitrage if you have a better-than-average model of what's going on. Granted, the bit about how "all neuroscientists, all philosophers, all psychologists, and all psychiatrists" should drop what they're doing and learn CSHW is hyperbole. But I think it's the correct direction to push.

Comment by Mike Johnson (mike-johnson) on A future for neuroscience · 2018-08-20T17:40:55.276Z · LW · GW

"I don't think folk psychology does a good job at ontology when it comes to speaking about subjects like depression or willpower."

I'd agree with that.

"How does what you propose there differ from General Semantics?"

I don't know enough about General Semantics to offer much here, but from a quick read of the Wikipedia article it feels like GS is aimed at a slightly different goal, and relies on a rather different algorithmic stack, than a CSHW-inspired theory of language and meaning would. I'd be glad to hear your thoughts.

Comment by Mike Johnson (mike-johnson) on A future for neuroscience · 2018-08-20T17:34:29.675Z · LW · GW

Hi shminux,

You're welcome to follow the academic literature trail I link to. CSHW is a new paradigm, so it would definitely benefit from a close critical review, if you're able to provide one. (If you'd rather just critique it as pattern-matching to "crackpot red flags" and "pretty pictures" you can do that too, but I find that a content-free way of avoiding engagement with my object-level and methodological claims, and I think it needlessly lowers the level of discussion.)

I mention my personal intuitions about "limitations and potential failures" near the end of my piece. My expectation is that CSHW, along with the predictive coding framework, is the most plausible route for neuroscience to develop knowledge in the five spheres I identified. ("Most plausible" does not mean "sure thing," of course.) The hard work still needs to be done. If you know of more plausible ways to unify neuroscience, I'd be happy to read about them.