Posts

How webs of meaning grow and change 2020-08-14T13:58:38.422Z
Gödel's Legacy: A game without end 2020-06-28T18:50:43.536Z
Neural Basis for Global Workspace Theory 2020-06-22T04:19:15.947Z
Research snap-shot: question about Global Workspace Theory 2020-06-15T21:00:28.749Z
The "hard" problem of consciousness is the least interesting problem of consciousness 2020-06-08T19:00:57.962Z
Legibility: Notes on "Man as a Rationalist Animal" 2020-06-08T02:18:01.935Z
Consciousness as Metaphor: What Jaynes Has to Offer 2020-06-07T18:41:16.664Z
Finding a quote: "proof by contradiction is the closest math comes to irony" 2019-12-26T17:40:41.669Z
ToL: Methods and Success 2019-12-10T21:17:56.779Z
ToL: This ONE WEIRD Trick to make you a GENIUS at Topology! 2019-12-10T21:02:40.212Z
ToL: The Topological Connection 2019-12-10T20:29:06.998Z
ToL: Introduction 2019-12-10T20:19:06.029Z
ToL: Foundations 2019-12-10T20:15:09.099Z
Books on the zeitgeist of science during Lord Kelvin's time. 2019-12-09T00:17:30.207Z
The Actionable Version of "Keep Your Identity Small" 2019-12-06T01:34:36.844Z
Hard to find factors messing up experiments: Examples? 2019-11-15T17:46:03.762Z
Books/Literature on resolving technical disagreements? 2019-11-14T17:30:16.482Z
Paradoxical Advice Thread 2019-08-21T14:50:51.465Z
The Internet: Burning Questions 2019-08-01T14:46:17.164Z
How much time do you spend on twitter? 2019-08-01T12:41:33.289Z
What are the best and worst affordances of twitter as a technology and as a social ecosystem? 2019-08-01T12:38:17.455Z
Do you use twitter for intellectual engagement? Do you like it? 2019-08-01T12:35:57.359Z
How to Ignore Your Emotions (while also thinking you're awesome at emotions) 2019-07-31T13:34:16.506Z
Where is the Meaning? 2019-07-22T20:18:24.964Z
Prereq: Question Substitution 2019-07-18T17:35:56.411Z
Prereq: Cognitive Fusion 2019-07-17T19:04:35.180Z
Magic is Dead, Give me Attention 2019-07-10T20:15:24.990Z
Decisions are hard, words feel easier 2019-06-21T16:17:22.366Z
Splitting Concepts 2019-06-21T16:03:11.177Z
STRUCTURE: A Hazardous Guide to Words 2019-06-20T15:27:45.276Z
Defending points you don't care about 2019-06-19T20:40:05.152Z
Words Aren't Type Safe 2019-06-19T20:34:23.699Z
Arguing Definitions 2019-06-19T20:29:44.323Z
What is your personal experience with "having a meaningful life"? 2019-05-22T14:03:39.509Z
Models of Memory and Understanding 2019-05-07T17:39:58.314Z
Rationality: What's the point? 2019-02-03T16:34:33.457Z
STRUCTURE: Reality and rational best practice 2019-02-01T23:51:21.390Z
STRUCTURE: How the Social Affects your rationality 2019-02-01T23:35:43.511Z
STRUCTURE: A Crash Course in Your Brain 2019-02-01T23:17:23.872Z
Explore/Exploit for Conversations 2018-11-15T04:11:30.372Z
Starting Meditation 2018-10-24T15:09:06.019Z
Thoughts on tackling blindspots 2018-09-27T01:06:53.283Z
Can our universe contain a perfect simulation of itself? 2018-05-20T02:08:41.843Z
Reducing Agents: When abstractions break 2018-03-31T00:03:16.763Z
Diffusing "I can't be that stupid" 2018-03-24T14:49:51.073Z
Request for "Tests" for the MIRI Research Guide 2018-03-13T23:22:43.874Z
Types of Confusion Experiences 2018-03-11T14:32:36.363Z
Hazard's Shortform Feed 2018-02-04T14:50:42.647Z
Explicit Expectations when Teaching 2018-02-04T14:12:09.903Z
TSR #10: Creative Processes 2018-01-17T03:05:18.903Z

Comments

Comment by hazard on crl826's Shortform · 2021-01-10T19:54:09.497Z · LW · GW

Rao offhandedly mentions that the Clueless are useful to put blame on when there's a "reorg". That didn't mean much to me until I read the first few chapters of Moral Mazes, where it went through several detailed examples of the politics of a reorg.

Comment by hazard on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2021-01-09T15:02:30.686Z · LW · GW

I'm the author, writing a review/reflection.

I wrote this post mainly to express myself and make more real my understanding of my own situation. In the summer of 2019 I was doing a lot of exploration of how I felt and experienced the world, and I was also doing lots of detective work trying to understand "how I got to now."

The most valuable thing it adds is a detailed example of what it feels like, from the inside, to mishandle advice about emotions. This was prompted by the fact that younger me "already knew" about dealing with his emotions, and I wanted to write a post that plausibly would have helped him.

I think this sort of data is incredibly important: understanding the actual details of your mind that prevented you from taking advantage of "good advice". I want more people sharing "here's the particular way I got this wrong for a long time", more so than "something other people get wrong is blah". This feels like the difference between "What? I guess you weren't paying attention when you read the sequences" and "Ah, your mind is in a way where you will reliably get this one important aspect of the sequences wrong, let's explore this."

I still reference this post a lot, to friends and in my own thinking. It's no longer the focal point of any of my self work, but it's a foundational piece of self-knowledge.

"Does this post make accurate claims" is the fun part :) I tried my hardest to make this 100% "here's a thing that happened to me" because I'm an expert on my own history. But real quick I'll try to pull out the external claims and give them a spot check:

  • Everyone could learn to wiggle their ears
    • Not exactly a booming field of research, but this had the little research I could find. I think I'd put 80% or something on this being true.
  • Certain mental/emotional skills that you haven't practiced your whole life have the same "flailing around in the dark" aspect as learning to wiggle your ears
    • "Flailing around in the dark" is defs a possible human experience. Maybe a better example would be blind people seeing through sensors on their tongue. It takes time to learn how to use such a device.
    • I'd expect most people to agree with me that as a developing infant, learning to actuate your body and mind involved a lot of time "flailing around in the dark". Though I imagine one could also say "yeah, but after you grow up that's not a problem any more. There aren't parts of my body that I'm mysteriously unable to move but have the potential to." Wiggling ears was supposed to be an example of such a part, but I still want to address this. Why wouldn't you have learned how to actuate all the parts of your mind? My answer is longer and I'm going to punt it to another comment.
  • The parent child model, and parts-work in general
    • Kaj's amazing sequence is where you should look for exploring the literal truth of these sorts of models.
    • pjeby and Kaj had a great comment discussion about when and where parts models help or get in the way of self-work. The central paradox of parts work is that even if you sensibly identify conflicting parts of yourself, it's still all you. It always has been. Mostly in accord with what pjeby says, I did in fact find the parent child model very useful, specifically because the level of self-judgment I had made it really hard not to attack myself for having these wants and needs, whereas when I frame things as a group I can tap into all the intuitions I've built over the years about how of course you need to listen to people and not beat them into silence.
      • In summary, parts models can have the effect of putting distance between you and desires and needs that you have. It is possible that you are currently self-judgemental enough that you won't be able to make much progress unless you find a way to distance these desires, at least long enough for your judgement to shut up, and possibly allow you to figure out how to deal with the judgement.

Right, onto follow up.

In a comment, raemon said he'd appreciate an exploration of "what bad stuff actually happens if you ignore your emotions in this or a similar way?" There are 3 great responses sharing snippets of different people's experiences. I think the most compelling extension I could add would be exploring more how "ignoring emotions" and "ignoring my ability to want" blend together, and how these processes combined to, for a long time, make it really hard for me to tell if something actually felt good, if I liked it or was interested in it. As a corollary, this made it easier for me to chase after substitutes (I can't tell if I like this, but it's impressive and everyone will reward me for it; but I'm also not aware that I can't tell if I like it, so I now do this thing and think I like it, even though my motivation/energy for it will not survive outside the realm of social reinforcement). I'm currently writing a post that explores some of those dynamics! I could certainly add a paragraph or two to this post.

In some comment, Lisa Feldman Barrett's work on emotions was mentioned. This also highlights how I don't really look at what emotions are in this post. I've since built a waaaay more detailed model of emotions, how to think about mind-body connection, how this relates to trauma, and how it all connects to clear thinking / not being miserable. Again, this would be a whole other post, possibly many.

Another follow up on how I relate to parts models. I think in parts way less often these days. Pretty sure this is a direct result of having defused a decent amount of judgement. But I can also see a lot of that judgement flare up again when I'm in social situations. So I'm generally able to, when by myself (which is often), feel safe accepting all of me, but I generally don't feel safe doing that around other people.

A few people have told me that they really wanted a section on "and here's what healthy emotional processing looks like", but I don't think I'm going to add one, because I can't. I think the most valuable stuff I can write is "here's a really detailed example of how it happened to me... that's all." And while I have grown better at processing and listening to emotions, I've yet to gain the distance to figure out which parts of what I've been doing were most essential for me, and what the overall arc/shape of my progress looks like. Plus, this would be a whole other giant post, not an addition.

Comment by hazard on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2021-01-09T14:45:34.397Z · LW · GW

I'm pondering this again. I expect, though I have not double checked, that the studied cases of pressure to find repressed memories leading to fake memories are mostly ones that involve, well, another person pressuring you. How often does this happen if you sit alone in your room and try it? A skilled assistant would almost certainly be better than an unskilled one, though I don't know how either compares to DIY once you add the complication of "can you tell if someone is skilled or not?"

Would be interested if anyone's got info about DIY investigations. 

Comment by hazard on Eli's shortform feed · 2021-01-04T02:29:10.914Z · LW · GW

I plan to blog more about how I understand some of these trigger states and how it relates to trauma. I do think there's a decent amount of written work, not sure how "canonical", but I've read some great stuff from sources I'm surprised I haven't heard more hype about. The most useful stuff I've read so far is the first three chapters of this book. It has hugely sharpened my thinking.

I agree that a lot of trauma discourse on our chunk of twitter is more focused on the personal experience/transformation side, and doesn't lend itself well to bigger Theory of Change type scheming.

http://www.traumaandnonviolence.com/chapter1.html

Comment by hazard on Hazard's Shortform Feed · 2020-12-16T15:31:40.685Z · LW · GW

The way I see "Politics is the Mind Killer" get used, it feels like the natural extension is "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own Is The Mind Killer".

From this angle, a commitment to prevent things from getting "too political" to "avoid everyone becoming angry idiots" is also a commitment to not having an impact.

I really like how jessica re-frames things in this comment. The whole comment is interesting, here's a snippet:

Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then "politics is the mind-killer" is the wrong framing. Rather, "politics is a domain where people often try to kill each other's minds" is closer.

Which would further transform my new, no longer catchy phrase to "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own will result in people trying to kill each other's minds."

Which has very different repercussions from the original saying.

Comment by hazard on What confusions do people have about simulacrum levels? · 2020-12-15T01:15:00.111Z · LW · GW

Your linked comment was very useful. To those who didn't click, here's a relevant snippet:

It seems like Simulacrum Levels were aiming to explore two related concepts:

  • How people's models/interactions diverge over time from an original concept (where that concept is gradually replaced by exaggerations, lies, and social games, which eventually bear little or no resemblance to the original)
  • How people relate to object level truth, as a whole, vs social reality

The first concept makes sense to call "simulacrum", and the second one I think ends up making more sense to classify in the 2x2 grid that I and Daniel Kokotajlo both suggested (and probably doesn't make sense to refer to as 'simulacrum')

Comment by hazard on Hazard's Shortform Feed · 2020-12-14T23:53:36.209Z · LW · GW

I started writing on LW in 2017, 64 posts ago. I've changed a lot since then, and my writing's gotten a lot better, and writing is becoming closer and closer to something I do. Because of [long detailed personal reasons I'm gonna write about at some point] I don't feel at home here, but I have a lot of warm feelings towards LW being a place where I've done a lot of growing :)

Comment by hazard on Cultural accumulation · 2020-12-06T16:10:55.954Z · LW · GW

This makes me wonder: for every experiment that's had a result of "X amount of people can't do Y task", how would that translate to "Z amount of people can/can't do Y task when we paid them to take 2 days / a week off of work and focus solely on it"?

Hard to test for obvious reasons.

Comment by hazard on Cultural accumulation · 2020-12-06T16:07:31.702Z · LW · GW

The article cited is also wrong about the line counts for some of the other groups it mentions; Google doesn't have 2000 billion lines, according to their own metrics.

Comment by hazard on Postmortem on my Comment Challenge · 2020-12-05T00:38:57.030Z · LW · GW

Love that you did this and learned something about some of the reasons discussions don't actually get started. I notice that I often don't comment in a discussion-conducive way because I don't enjoy trying to discuss with the time lag normally involved in LW comments. On twitter, I'm very quick to start convos, especially ones that are more speculative. That's partially because if we quickly strike a dead end (it was a bad question, I assumed something incorrect), it feels like no big deal. I'd be more frustrated having a garden path convo like that in LW comments.

Comment by hazard on Building up to an Internal Family Systems model · 2020-12-04T22:37:00.313Z · LW · GW

Really what I want is for Kaj's entire sequence to be made into a book. Barring that, I'll settle for nominating this post. 

Comment by hazard on Hazard's Shortform Feed · 2020-12-04T00:08:08.340Z · LW · GW

To everyone on the LW team, I'm so glad we do the year in review stuff! Looking over the table of contents for the 2018 book I'm like "damn, a whole list of bangers", and even looking at top karma for 2019 has a similar effect. Thanks for doing something that brings attention to previous good work.

Comment by hazard on Everybody Knows · 2020-12-03T23:41:23.879Z · LW · GW

Besides being a really great object level post, I think it's also a great example of pointing to a subtle conversational move that appears pretty innocuous but upon investigation is often being used to sabotage information flows, intentionally or otherwise. I think a large part of rationality is being able to spot and navigate around these moves.

Comment by hazard on System 2 as working-memory augmented System 1 reasoning · 2020-12-03T23:19:54.917Z · LW · GW

The S1/S2 dichotomy has proven very unhelpful for me.

  1. For some time it served as my "scientific validation" for taking a coercive-authoritarian attitude towards myself, resulting in plenty of pain.
  2. It's really easy to conflate S2 with "rational" with "gets correct answers". I now think that "garbage in -> garbage out" applies to S2. You can learn a bunch of explicit procedural thinking patterns that are shit at getting things right.
  3. In general, S1/S2 encourages conflating "motives" and "cognitive capacities". "S1 is fast and biased and S2 is slow and rational". If you think of slow/fast, intentional/unintentional, biased/rational, you are capable of doing cognition that combines any of these qualities. Unnecessarily grouping them together makes it easier to spin narratives where one "system" is a bad guy that must be overcome, and that's just not how your brain works.

This post (along with the rest of Kaj's amazing sequence) was a crucial nudge away from the S1/S2 frame and towards a way more gearsy model of the mind.

Comment by hazard on Power Buys You Distance From The Crime · 2020-12-03T22:44:31.278Z · LW · GW

This post makes a fairly straightforward point that has been very helpful for thinking about power. Having several grounding concrete examples really helped as well. The quote from Moral Mazes that gave examples of the sorts of wiping-hands-of-knowledge things executives actually say really helped make this more real to me.

Comment by hazard on Power Buys You Distance From The Crime · 2020-12-03T22:34:38.703Z · LW · GW

This is a very helpful comment, thank you!

Comment by hazard on Why I’m Writing A Book · 2020-11-10T23:13:29.444Z · LW · GW

Excited to see the final product, good luck!

Comment by hazard on The Treacherous Path to Rationality · 2020-10-11T03:36:33.475Z · LW · GW

This feels like an incredibly important point: the pressures are different when "the rationalists" are friends you debate with online vs when they are a close community you are dependent on.

Comment by hazard on Hazard's Shortform Feed · 2020-10-05T21:44:50.000Z · LW · GW

This flared up again recently. Besides "wanting insight" often I simply am searching for fluency. I want something that I can fluently engage with, and if there's an impediment to fluency, I bounce off. Wanting an experience of fluency is a very different goal from wanting to understand the thing. Rn I don't have too many domains where I have technical fluency. I'm betting if I had more of that, it would extend my patience/ability to slog through texts that are hard for me.

Comment by hazard on Maybe Lying Can't Exist?! · 2020-08-27T18:33:23.100Z · LW · GW

I'm glad you're bringing sender-receiver lit into this discussion! It's been useful for me to ground parts of my thinking. What follows is almost-a-post's worth of, "Yes, and also..."

Stable "Deception" Equilibrium

The firefly example showed how an existing signalling equilibrium can be hijacked by a predator. What once was a reliable signal becomes unreliable. As you let things settle into equilibrium, the signal of seeing a light should lose all informational content (or at least, it should not give any new information about whether or not the signal is coming from mate or predator). 

Part of what ensures this result is the totally opposed payoffs of prey and predator. In any signalling game where the payouts are zero-sum, there isn't going to be an equilibrium where the signals convey information.

More complex varied payouts can have more interesting results:

(figure from one of Skyrms' books)

Again, at the level of the sender-receiver game this is deception, but it still feels a good bit different from what I intuitively track as deception. This might be best stated as an example of "equilibrium of ambiguous communication as a result of semi-adversarial payouts"

Intention

I would not speculate on the mental life of bees; to talk of the mental life of bacteria seems absurd; and yet signalling plays a vital biological role in both cases.
-Skyrms

I want to emphasize that the sender-receiver model and Skyrms' use of "informational content" are not meant to provide an explanation of intention. Information is meant to be more basic than intent, and present in cases (like bacteria) where there seems to be no intent. Skyrms seems to be responding to some scholars who want to say "intent is what defines communication!", and like Skyrms, I'm happy to say that communication and signals seems to cover a broad class of phenomena, of which intent would be a super-specialized subset.

For my two-cents, I think that intent in human communication involves both goal-directedness and having a model of the signalling equilibrium that can be plugged into an abstract reasoning system.

In sender-receiver games, the learning of a signalling strategy often happens either through replicator dynamics or very simple Roth-Erev reinforcement learning. These are simple mechanisms that act quite directly and don't afford any reflection on the mechanism itself. Humans can not only reliably send a signal in the presence of a certain stimulus, but can also do "I'm bored, I know that if I shout 'FIRE!' Sarah is gonna jump out of her skin, and then I'll laugh at her being surprised." Another fun example that seems to rely on being able to reason about the signalling equilibrium itself is "what would I have to text you to covertly convey I've been kidnapped?"
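To make that concrete, here's a toy sketch (my own code, not from Skyrms, and the urn weights and round counts are arbitrary choices) of Roth-Erev reinforcement in a minimal two-state Lewis signalling game: both players keep "propensity urns", pick proportionally to the weights, and reinforce whatever they just did whenever the receiver's act matches the state.

```python
import random

# Roth-Erev reinforcement in a 2-state / 2-signal / 2-act Lewis
# signalling game with common interest: when the receiver's act
# matches the state, both players add weight to the choices they made.
N_STATES, N_SIGNALS, N_ACTS = 2, 2, 2

# Propensity "urns": start with one unit of weight per option.
sender = [[1.0] * N_SIGNALS for _ in range(N_STATES)]
receiver = [[1.0] * N_ACTS for _ in range(N_SIGNALS)]

def draw(weights):
    # Sample an index proportionally to its weight.
    return random.choices(range(len(weights)), weights=weights)[0]

def play_round():
    state = random.randrange(N_STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:  # success: reinforce what was just done
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
    return act == state

random.seed(0)
successes = sum(play_round() for _ in range(20000))
late = sum(play_round() for _ in range(1000))  # success rate late in the run
print(successes, late)
```

The point the quote is making shows up in the code: the players typically lock into one of the two perfect signalling systems without ever representing the equilibrium itself anywhere, the way a human reasoning about "what would covertly convey X?" would.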

I think human communication is always a mix of intentional and non-intentional communication, as I explore in another post. When it comes to deception, while a lot of people seem to want to use intention to draw the boundary between "should punish" and "shouldn't punish", I see it more as a question of "what sort of optimization system is working against me?" I'm tempted to say "intentional deception is more dangerous because that means the full force of their intellect is being used to deceive you, as opposed to just their unconscious" but that wouldn't be quite right. I'm still developing thoughts on this.

Far from equilibrium

I expect it's most fruitful to think of human communication as an open system that's far from equilibrium, most of the time. Thinking of equilibrium helps me think of directions things might move, but I don't expect everyone's behavior to be "priced into" most environments.

Comment by hazard on Hazard's Shortform Feed · 2020-08-14T14:24:38.940Z · LW · GW

HOLY shit! I just checked out the new concepts portion of the site that shows you all the tags. This feels like a HUGE step in the direction of the LW team's vision of a place where knowledge production can actually happen.

Comment by hazard on Diagramming "Replacing Guilt," Part 1 · 2020-08-07T12:29:12.381Z · LW · GW

Aah, it makes sense now! I'd forgotten the point of the og post it was related to, and the extra quotes you added were enough for it to click back together.

Comment by hazard on Diagramming "Replacing Guilt," Part 1 · 2020-08-06T01:39:38.873Z · LW · GW

I 100% support drawing pictures to have as memory aids for ideas of a post, and am glad you did this!

I don't get a few of the pictures (I don't get how the "you're allowed to fight for something" image matches the text), but I still support drawings and would love to see more.

Comment by hazard on NaiveTortoise's Short Form Feed · 2020-07-23T01:44:35.078Z · LW · GW

One way I think about things: everything that I've found in myself and close friends that looks and smells like "shoulds" is sorta sneaky. I keep finding shoulds which seem to have been absorbed from others and are less about "this is a good way to get a thing in the world that I want" and more about "someone said you need to follow this path and I need them to approve of me". The force I feel behind my shoulds is normally "You're SCREWED if you don't!", a sort of vaguely panicky, inflexible energy. It's rarely connected to the actual good qualities of the thing I "should" be doing.

Because my shoulds normally ground out in "if I'm not this way, people won't like me", if the pressure gets turned up, following a should takes me farther and farther away from things I actually care about. Unblocking stuff often feels like transcending the panicky fear that hides behind a should. It never immediately lets me be awesome at stuff. I still need to develop a real connection to the task and how it works into the rest of my life. There's still drudgery, but it's dealt with from a calmer place.

Comment by hazard on DARPA Digital Tutor: Four Months to Total Technical Expertise? · 2020-07-08T17:06:21.964Z · LW · GW

I really appreciate posts like this that give some useful details and info about a very specific thing. Thanks!

Comment by hazard on Gödel's Legacy: A game without end · 2020-06-29T12:41:35.024Z · LW · GW

I'm not sure what this means. Is this a question about if I'd prefer comments on LW instead of my other site? LW, since my other site has no comments section.

Comment by hazard on Don't Make Your Problems Hide · 2020-06-29T01:38:50.931Z · LW · GW

I support this point, and also wrote a post detailing my history with making my problems and emotions hide from me.

Comment by hazard on Gödel's Legacy: A game without end · 2020-06-28T23:17:55.141Z · LW · GW

Thanks, got it!

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-26T01:06:43.013Z · LW · GW

Some sort of "combination" seems plausible for perception. Baars actually mentions "The binding problem" (how is it that disparate features combine to make a cohesive singular perception) but I couldn't see how their idea addressed it.

This is actually one of the reasons I'm interested in looking for stuff that might be the "clock time" of any sort of bottleneck. Some amount of simultaneity of perception seems to be a post production thing. The psychological refractory period relates to experiments where you see and hear something and have to respond, and one seems to block the other for a moment (I haven't investigated this in depth, so I'm not v familiar with the experimental paradigm). But there are other things that totally seem like simultaneously experienced modalities of perception. I wonder what sorts of experiments would piece apart "actually happening at the same time" from "rapid concurrent switching + post production experience construction". I'm very interested in finding out.

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-25T14:40:27.495Z · LW · GW

I don't know the concrete details about what "taking on a global value" looks like, but I visualize a grid (like in Kevin Simler's Going Critical post) that has a few competing colors trying to spread, and it seems reasonable that you could tweak the settings of the network such that very quickly one signal dominates the entire network.

But I don't know how to actually make something like that.
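For what it's worth, here's a toy version of that grid picture (my own sketch, loosely inspired by Simler's post, not anything from the paper; the grid size and transmission probabilities are made-up settings): two colors spread from opposite corners, and giving one a higher transmission probability is enough for it to dominate most of the network.

```python
import random

SIZE = 30
P_SPREAD = {1: 0.9, 2: 0.5}  # color 1 transmits more aggressively
grid = [[0] * SIZE for _ in range(SIZE)]
grid[0][0], grid[SIZE - 1][SIZE - 1] = 1, 2  # two competing seeds

def step():
    new = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c]:
                continue  # colored cells keep their color
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc]:
                    # one infection attempt per step, taken from the
                    # first active neighbor found
                    if random.random() < P_SPREAD[grid[nr][nc]]:
                        new[r][c] = grid[nr][nc]
                    break
    for r in range(SIZE):
        grid[r] = new[r]

random.seed(1)
for _ in range(300):
    step()

counts = {1: sum(row.count(1) for row in grid),
          2: sum(row.count(2) for row in grid)}
print(counts)
```

This only shows spreading-until-collision, not the full winner-take-all "one signal floods everything" dynamic; getting that presumably needs something like cells being able to flip color under stronger incoming signals, which is exactly the part I don't know how to build.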

If you're interested in the TIN specifically, what I got from the paper was "here's a totally plausible candidate, and from what we know about self-organization in neural networks, it could totally do this functionality". 

The biggest reason to think that there's something that's winner-take-all with a global value is to explain bottlenecks that won't go away. Intentional conscious thought seems to be very serial, and the neural turing machine model does a decent job of showing how a global workspace is central to this. If there's no global workspace, and there's just the thalamus doing sensory gating and routing chunks of cortex to each other, I'd expect to see a lot more multitasking ability.

Also, this is more a property than a constraint: if global communication works by routing, then everything that's routed needs to know where it's going. This makes sense for some systems, but I think part of the cool flexibility in a GNW architecture is that all of the cortex sees the contents of the GNW, and subsystems that compute with that as an input can spontaneously arise.

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-22T22:49:41.687Z · LW · GW

That's really useful feedback! Picking the level to write at was a challenge and it's good to hear that this worked for someone.

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-22T22:48:03.892Z · LW · GW

No problem! 

Comment by hazard on The point of a memory palace · 2020-06-20T13:55:54.754Z · LW · GW

Been thinking about memory recently and where/if different mnemonic practice can fit into practical learning. Glad to hear these thoughts!

Comment by hazard on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-06-17T23:02:01.460Z · LW · GW

Yeah! Somehow I had the memory that the two of them actually wrote a book together on the topic, but I just checked and it looks like that's not the case.

Comment by hazard on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-06-17T19:37:49.385Z · LW · GW

Great post! 

Example #4: Logarithms. Jeff thinks we have a reference frame for everything! Every word, every idea, every concept, everything you know has its own reference frame, in at least one of your cortical columns and probably thousands of them. Then displacement cells can encode the mathematical transformations of logarithms, and the relations between logarithms and other concepts, or something like that. I tried to sketch out an example of what he might be getting at in the next section below. Still, I found that his discussion of abstract cognition was a bit sketchier and more confusing than other things he talked about. My impression is that this is an aspect of the theory that he's still working on.

George Lakoff's whole shtick is this! He's a linguist, so he frames it in terms of metaphors; "abstract concepts are understood via metaphors to concrete spatial/sensory/movement domains". His book "Where Mathematics Comes From" is an in-depth exploration of trying to show how various mathematical concepts ground out in mashups of physical metaphors.

Jeff's ideas seem like they would be the neurological backing to Lakoff's more conceptual analysis. Very cool connection!

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:28:34.708Z · LW · GW

Yeep, you + Kaj mentioning the basal ganglia are making me shift on this one.

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:25:50.302Z · LW · GW

Thank you for pointing to the basal ganglia's relation to motor control! This feels like one of those things that's obvious but I just didn't know because I haven't "broadly studied" neuroanatomy. Relatedly, if anyone knows of any resources on neuroanatomy that really dig into why we think of this or that region of the brain as being different, I'd love to hear. I know there's both a lot of "this area defs has this structure and does this thing" and "an fMRI went beep so this is the complain-about-ants part of your brain!", and I don't yet have the knowledge to tell them apart.

Also:

Connecting this with the GNW, several of the salience cues used in the model are perceptual signals, e.g. whether or not a wall or a cylinder is currently perceived. We also know that signals which get to the GNW have a massively boosted signal strength over ones that do not. So while the GNW does not "command" any particular subsystem to take action, salience cues that get into the GNW can get a significant boost, helping them win the action selection process.

This was a very helpful emphasis shift for me. Even though I wasn't conceptualizing the GNW as a commander, I was still thinking of it as a "destination", probably because of all the claims about its connection to consciousness. The "signal boosting" frame feels like a much better fit. Subsystems are already plugged into the various parts of your brain that they need to be connected to; the GNW is not a universal router. It's only when you're doing Virtual Machine-esque conscious thinking that it's a routing bottleneck. Other times it might look like a bottleneck, but maybe it's more "if you get a signal boost from the GNW, you're probs gonna win, and only one thing can get boosted at a time".
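The "boost, don't command" picture could be sketched as a toy model. To be clear, the subsystem names, salience values, and boost factor below are all made up for illustration; this is just my attempt to make the frame concrete, not anything from the GNW literature.

```python
def select_action(salience, gnw_winner=None, boost=5.0):
    """Toy action selection: every subsystem bids with its raw salience.
    The one signal currently broadcast on the GNW gets a big multiplicative
    boost, but the GNW itself never issues a command -- it just tilts
    the competition."""
    bids = {name: s * (boost if name == gnw_winner else 1.0)
            for name, s in salience.items()}
    return max(bids, key=bids.get)

# Without any GNW broadcast, the strongest raw salience cue wins.
print(select_action({"avoid_wall": 0.6, "grab_cylinder": 0.4}))
# -> avoid_wall

# A weaker cue that wins the GNW broadcast can out-bid a stronger one.
print(select_action({"avoid_wall": 0.6, "grab_cylinder": 0.4},
                    gnw_winner="grab_cylinder"))
# -> grab_cylinder
```

The point of the sketch: the subsystems stay wired to their own inputs and outputs; only the relative weighting of the competition passes through the GNW, and only one cue can hold the boost at a time.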

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:08:46.720Z · LW · GW

Other fun weird thing I forgot to mention: you can decrease the attentional blink effect by lightly distracting the subject (having them listen to music or something).

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:07:34.980Z · LW · GW

Hmm, yeah looks like I got PP attention backwards.

There's two layers! Predictive circuits are sorta "autonomously" creating a focus within the domain of what they predict, and then the "global" or top-down attention can either be an attentional subsystem watching the GNW, or the distributed attentional-relevancy gate around the GNW.

The pandemonium stuff is also a great model. In another comment I mentioned that I'm fuzzy on how tightly or loosely coupled different subsystems can be, and how they are organized, and I was unintentionally imagining them as quite monolithic entities.

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T15:47:40.278Z · LW · GW

Your GNW has an active generative model built out of lots of component models. I would say that the "tennis-match-flow" case entails little sub-sub-components asynchronously updating themselves as new information comes in—the tennis ball was over there, and now it's over here. By contrast the more typically "choppy" way of thinking involves frequently throwing out the whole manifold of generative models all at once, and activating a wholly new set of interlocking generative models. The latter (unlike the former) involves an attentional blink, because it takes some time for all the new neural codes to become active and synchronized, and in between you're in an incoherent, unstable state with mutually-contradictory generative models fighting it out.

Ahhhh this seems like an idea I was missing. I was thinking of the generative models as all being in a ready and waiting state, only ever swapping in and out of broadcasting on the GNW. But a model might take time to become active and/or do its work. I've been very fuzzy on how generative models are arranged and organized. You pointing this out makes me think that attentional blink (or "frame rate" stuff in general) is probably rarely limited by the actual "time it takes a signal to be propagated on the GNW" and much more related to the "loading" and "activation" of the models that are doing the work.
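The "flow vs. choppy" distinction could be sketched as a toy simulation. Everything here (the settle time, the stimulus stream, treating a model as a single label) is my own made-up illustration of the loading-takes-time assumption, not anything from the source models.

```python
def process(stimuli, settle_time=2):
    """Toy sketch: updating an already-active generative model is cheap,
    but swapping in a whole new model costs `settle_time` timesteps of
    'loading', during which incoming stimuli are missed -- an
    attentional-blink-like gap."""
    active = None
    busy_until = -1   # timestep until which we're still synchronizing
    seen = []
    for t, stim in enumerate(stimuli):
        if t <= busy_until:
            continue  # blink: new models still becoming active
        seen.append((t, stim))
        if stim != active:
            active = stim
            busy_until = t + settle_time  # pay the swap cost
    return seen

# "Tennis-match flow": one model updating in place misses almost nothing.
flow = process(["ball"] * 6)
# "Choppy" thinking: constant model swaps, so most frames fall in a blink.
choppy = process(["ball", "opponent"] * 3)
```

In this sketch the flow case captures most timesteps while the choppy case captures only a couple, matching the intuition that the bottleneck is model loading rather than broadcast time.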

Comment by hazard on Hazard's Shortform Feed · 2020-06-14T16:06:15.199Z · LW · GW

I'm ignoring that gap unless I find out that a bulk of the people reading my stuff think that way. I'm more writing to what feels like the edge of interesting and relevant to me.

Comment by hazard on We've built Connected Papers - a visual tool for researchers to find and explore academic papers · 2020-06-10T01:49:24.113Z · LW · GW

What would lead to this tool no longer working and how can people contribute to making those things not happen? e.g. can I donate money for server costs?

This is amazing, and I have all the same questions.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T15:15:10.265Z · LW · GW

Hmm, you did notice a point where I sorta simplified Chalmers to get the post done. 

Then one could answer "but why couldn't a computer system just have this enum-like system that had all the properties which match your subjective experience, without having that subjective experience?"

This is near a question I do think is interesting. I'm starting to think there's a sliding scale of "amount of subjective experience" a thing can have. And I am very curious about "what sorts of things will and won't have X amount of subjective experience".

I guess my beef is that when it's framed as "But why does XYZ system entail qualia?" I infer that even if in the far future I had a SUPER detailed understanding of "tweak this and you get X more units of experience, if you don't have ABC any experience is impossible, LMN architecture is really helpful, but not necessary", Chalmers would still be unimpressed and go "But why does any of this lead to qualia?"

Well, I don't actually think he'd say that. If I had that sorta detailed outline I think his mind would be blown and he'd be super excited.

But when I imagine the person who is still going "But why", I'm imagining that they must be thinking of qualia as this isolated, other, and separate thing.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T15:03:20.957Z · LW · GW

Truly the accent is one of the most powerful weapons any Daemon has in its arsenal. 

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T14:49:38.447Z · LW · GW

Hehe, yes.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T14:49:16.473Z · LW · GW

I've defs got socially formed priors on what things do and don't have experience. And when I try and move past those priors, and/or think "well, these priors came from somewhere, what were they originally tapping into?", I see that anyone making a judgement about this is doing so through what they could observe.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T14:45:52.069Z · LW · GW

Yeps. This feels like seeing one's experience of color as all of the things it's connected to. You've got a unique set of associations to diff colors, and that makes your experience different. 

What I've seen of the "hard problem of consciousness" is that it says "well yeah, but all those associations, all of what a color means to you, that's all separate from the qualia of the color", and that is the thing that I think is divorced from interesting stuff. All the things you mentioned are the interesting parts of the experience of color.

Comment by hazard on Legibility: Notes on "Man as a Rationalist Animal" · 2020-06-09T14:41:54.901Z · LW · GW

Thanks! I might want to make another post at some point that really digs into subtle differences between rationality and legibility. Because I think a lot of people's rationality is legibility. It's like the shadow side of rationality.

Comment by hazard on Hazard's Shortform Feed · 2020-06-08T14:54:18.757Z · LW · GW

tldr;

In high-school I read pop cogSci books like "You Are Not So Smart" and "Subliminal: How the Subconscious Mind Rules Your Behavior". I learned that "contrary to popular belief", your memory doesn't perfectly capture events like a camera would, but it's changed and reconstructed every time you remember it! So even if you think you remember something, you could be wrong! Memory is constructed, not a faithful representation of what happened! AAAAAANARCHY!!!

Wait a second, a camera doesn't perfectly capture events. Or at least, cameras definitely didn't when this analogy was first made. Do you remember red eye? Instead of philosophizing on the metaphysics of representation, I'm just gonna note that "X is a construct!" sorts of claims cache out in terms of "you can be wrong in ways that matter to me!".

There's something funny about loudly declaring "it's not impossible to be wrong!"

In high-school, "gender is a social construct!" was enough of a meme that it wasn't uncommon for something to be called a social construct to express that you thought it was dumb.

Me: "God, the cafeteria food sucks!"

Friend: "Cafeteria food is a social construct!"

Calling something a social construct either meant "I don't like it" or "you can't tell me what to do". That was my limited experience with the idea of social constructs. Something I didn't have experience with was the rich feminist literature describing exactly how gender is constructed, what its effects are, and how it's been used to shape and control people for ages.

That is way more interesting to me than just the claim "if your explanation involves gender, you're wrong". Similarly, these days the cogSci I'm reading is stuff like Predictive Processing theory, which posits that all of human perception is made through a creative construction process, and more importantly it gives a detailed description of the process that does this constructing.

For me, a claim that "X is a construct" or "X isn't a 100% faithful representation" can only be interesting if there's either an account of the forces that are trying to assert otherwise, or an account of how the construction works.

Put another way: "you can be wrong!" is what you shout at someone who is insisting they can't be, while trying to make things happen that you don't like. Some people need to have that shouted at them. I don't think I'm that person. If there's a convo about something being a construct, I want to jump right to the juicy parts and start exploring!

(note: I want to extra emphasize that it can be as useful to explore "who's insisting to me that X is infallible?" as it is to explore "how is this fallible?" I've been thinking about how your sense of what's happening in your head is constructed, noticed I want to go "GUYS! Consciousness IS A CONSTRUCT!" and when I sat down to ask "Wait, who was trying to insist that it 100% isn't and that it's an infallible access into your own mind?" I got some very interesting results.)

Comment by hazard on Legibility: Notes on "Man as a Rationalist Animal" · 2020-06-08T14:49:31.013Z · LW · GW

Cool! Yeah, I've gone over all of them a few times and started outlining this, but also lost steam and moved to other things. You noting interest is useful for me getting more out :)