Gödel's Legacy: A game without end 2020-06-28T18:50:43.536Z · score: 29 (11 votes)
Neural Basis for Global Workspace Theory 2020-06-22T04:19:15.947Z · score: 27 (10 votes)
Research snap-shot: question about Global Workspace Theory 2020-06-15T21:00:28.749Z · score: 13 (4 votes)
The "hard" problem of consciousness is the least interesting problem of consciousness 2020-06-08T19:00:57.962Z · score: 21 (10 votes)
Legibility: Notes on "Man as a Rationalist Animal" 2020-06-08T02:18:01.935Z · score: 14 (5 votes)
Consciousness as Metaphor: What Jaynes Has to Offer 2020-06-07T18:41:16.664Z · score: 14 (6 votes)
Finding a quote: "proof by contradiction is the closest math comes to irony" 2019-12-26T17:40:41.669Z · score: 9 (3 votes)
ToL: Methods and Success 2019-12-10T21:17:56.779Z · score: 9 (2 votes)
ToL: This ONE WEIRD Trick to make you a GENIUS at Topology! 2019-12-10T21:02:40.212Z · score: 11 (3 votes)
ToL: The Topological Connection 2019-12-10T20:29:06.998Z · score: 12 (4 votes)
ToL: Introduction 2019-12-10T20:19:06.029Z · score: 12 (7 votes)
ToL: Foundations 2019-12-10T20:15:09.099Z · score: 11 (3 votes)
Books on the zeitgeist of science during Lord Kelvin's time. 2019-12-09T00:17:30.207Z · score: 32 (5 votes)
The Actionable Version of "Keep Your Identity Small" 2019-12-06T01:34:36.844Z · score: 63 (28 votes)
Hard to find factors messing up experiments: Examples? 2019-11-15T17:46:03.762Z · score: 33 (14 votes)
Books/Literature on resolving technical disagreements? 2019-11-14T17:30:16.482Z · score: 13 (2 votes)
Paradoxical Advice Thread 2019-08-21T14:50:51.465Z · score: 13 (6 votes)
The Internet: Burning Questions 2019-08-01T14:46:17.164Z · score: 13 (6 votes)
How much time do you spend on twitter? 2019-08-01T12:41:33.289Z · score: 6 (1 votes)
What are the best and worst affordances of twitter as a technology and as a social ecosystem? 2019-08-01T12:38:17.455Z · score: 6 (1 votes)
Do you use twitter for intellectual engagement? Do you like it? 2019-08-01T12:35:57.359Z · score: 16 (6 votes)
How to Ignore Your Emotions (while also thinking you're awesome at emotions) 2019-07-31T13:34:16.506Z · score: 157 (78 votes)
Where is the Meaning? 2019-07-22T20:18:24.964Z · score: 22 (7 votes)
Prereq: Question Substitution 2019-07-18T17:35:56.411Z · score: 20 (7 votes)
Prereq: Cognitive Fusion 2019-07-17T19:04:35.180Z · score: 15 (6 votes)
Magic is Dead, Give me Attention 2019-07-10T20:15:24.990Z · score: 50 (29 votes)
Decisions are hard, words feel easier 2019-06-21T16:17:22.366Z · score: 9 (6 votes)
Splitting Concepts 2019-06-21T16:03:11.177Z · score: 8 (3 votes)
STRUCTURE: A Hazardous Guide to Words 2019-06-20T15:27:45.276Z · score: 7 (2 votes)
Defending points you don't care about 2019-06-19T20:40:05.152Z · score: 44 (18 votes)
Words Aren't Type Safe 2019-06-19T20:34:23.699Z · score: 24 (10 votes)
Arguing Definitions 2019-06-19T20:29:44.323Z · score: 13 (6 votes)
What is your personal experience with "having a meaningful life"? 2019-05-22T14:03:39.509Z · score: 23 (12 votes)
Models of Memory and Understanding 2019-05-07T17:39:58.314Z · score: 20 (5 votes)
Rationality: What's the point? 2019-02-03T16:34:33.457Z · score: 12 (5 votes)
STRUCTURE: Reality and rational best practice 2019-02-01T23:51:21.390Z · score: 6 (1 votes)
STRUCTURE: How the Social Affects your rationality 2019-02-01T23:35:43.511Z · score: 1 (3 votes)
STRUCTURE: A Crash Course in Your Brain 2019-02-01T23:17:23.872Z · score: 8 (5 votes)
Explore/Exploit for Conversations 2018-11-15T04:11:30.372Z · score: 38 (13 votes)
Starting Meditation 2018-10-24T15:09:06.019Z · score: 24 (11 votes)
Thoughts on tackling blindspots 2018-09-27T01:06:53.283Z · score: 45 (13 votes)
Can our universe contain a perfect simulation of itself? 2018-05-20T02:08:41.843Z · score: 21 (5 votes)
Reducing Agents: When abstractions break 2018-03-31T00:03:16.763Z · score: 42 (11 votes)
Diffusing "I can't be that stupid" 2018-03-24T14:49:51.073Z · score: 56 (18 votes)
Request for "Tests" for the MIRI Research Guide 2018-03-13T23:22:43.874Z · score: 70 (20 votes)
Types of Confusion Experiences 2018-03-11T14:32:36.363Z · score: 31 (9 votes)
Hazard's Shortform Feed 2018-02-04T14:50:42.647Z · score: 31 (9 votes)
Explicit Expectations when Teaching 2018-02-04T14:12:09.903Z · score: 53 (17 votes)
TSR #10: Creative Processes 2018-01-17T03:05:18.903Z · score: 16 (4 votes)
No, Seriously. Just Try It: TAPs 2018-01-14T15:24:38.692Z · score: 42 (14 votes)


Comment by hazard on Gödel's Legacy: A game without end · 2020-06-29T12:41:35.024Z · score: 2 (1 votes) · LW · GW

I'm not sure what this means. Is this a question about if I'd prefer comments on LW instead of my other site? LW, since my other site has no comments section.

Comment by hazard on Don't Make Your Problems Hide · 2020-06-29T01:38:50.931Z · score: 2 (1 votes) · LW · GW

I support this point, and also wrote a post detailing my history with making my problems and emotions hide from me.

Comment by hazard on Gödel's Legacy: A game without end · 2020-06-28T23:17:55.141Z · score: 2 (1 votes) · LW · GW

Thanks, got it!

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-26T01:06:43.013Z · score: 6 (3 votes) · LW · GW

Some sort of "combination" seems plausible for perception. Baars actually mentions "The binding problem" (how is it that disparate features combine to make a cohesive singular perception) but I couldn't see how their idea addressed it.

This is actually one of the reasons I'm interested in looking for stuff that might be the "clock time" of any sort of bottleneck. Some amount of simultaneity of perception seems to be a post-production thing. The psychological refractory period relates to experiments where you see and hear something and have to respond, and one seems to block the other for a moment (I haven't investigated this in depth, so I'm not very familiar with the experimental paradigm). But there are other things that totally seem like simultaneously experienced modalities of perception. I wonder what sorts of experiments would tease apart "actually happening at the same time" from "rapid concurrent switching + post-production experience construction". I'm very interested in finding out.

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-25T14:40:27.495Z · score: 5 (3 votes) · LW · GW

I don't know the concrete details about what "taking on a global value" looks like, but I visualize a grid (like in Kevin Simler's going critical post) that has a few competing colors trying to spread, and it seems reasonable that you could tweak the setting of the network such that very quickly one signal dominates the entire network.

But I don't know how to actually make something like that.
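As a gesture at it, here's a toy sketch (the update rule and every parameter here are my own invention, just illustrating the "competing colors on a grid" picture, not anything from the paper or from Simler's post): two signals seed opposite corners, spread to their neighbors, and a gain knob controls how sharply the local majority wins each cell. The hope is that with a high enough gain, one signal rapidly dominates the whole network.

```python
import random

def run_grid(size=20, steps=30, gain=1.0, seed=0):
    """Toy spreading-competition grid (invented for illustration).

    Two signals (1 and 2) start in opposite corners and spread to
    4-neighbors; `gain` controls how winner-take-all each local
    competition is. Returns the fraction of cells each signal holds."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    grid[0][0], grid[size - 1][size - 1] = 1, 2  # seed the two signals

    for _ in range(steps):
        new = [row[:] for row in grid]  # synchronous update
        for i in range(size):
            for j in range(size):
                counts = {1: 0, 2: 0}
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < size and 0 <= nj < size and grid[ni][nj]:
                        counts[grid[ni][nj]] += 1
                total = counts[1] + counts[2]
                if total == 0:
                    continue  # no active neighbors: cell stays as-is
                # Higher gain -> the local majority signal captures the
                # cell with probability approaching 1.
                p1 = counts[1] ** gain / (counts[1] ** gain + counts[2] ** gain)
                new[i][j] = 1 if rng.random() < p1 else 2
        grid = new

    flat = [c for row in grid for c in row]
    return flat.count(1) / len(flat), flat.count(2) / len(flat)
```

Comparing `run_grid(gain=0.5)` against `run_grid(gain=4.0)` is the kind of tweak I mean: same network, but the setting changes how quickly one signal takes over everything.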

If you're interested in the TIN specifically, what I got from the paper was "here's a totally plausible candidate, and from what we know about self-organization in neural networks, it could totally do this functionality". 

The biggest reason to think that there's something that's winner-take-all with a global value is to explain bottlenecks that won't go away. Intentional conscious thought seems to be very serial, and the neural turing machine model does a decent job of showing how a global workspace is central to this. If there's no global workspace, and there's just the thalamus doing sensory gating and routing chunks of cortex to each other, I'd expect to see a lot more multitasking ability.

Also, this is more a property than a constraint: if global communication works by routing, then everything that's routed needs to know where it's going. This makes sense for some systems, but I think part of the cool flexibility in a GNW architecture is that all of the cortex sees the contents of the GNW, and subsystems that compute with that as an input can spontaneously arise.

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-22T22:49:41.687Z · score: 2 (1 votes) · LW · GW

That's really useful feedback! Picking the level to write at was a challenge and it's good to hear that this worked for someone.

Comment by hazard on Neural Basis for Global Workspace Theory · 2020-06-22T22:48:03.892Z · score: 2 (1 votes) · LW · GW

No problem! 

Comment by hazard on The point of a memory palace · 2020-06-20T13:55:54.754Z · score: 3 (2 votes) · LW · GW

Been thinking about memory recently and where/if different mnemonic practice can fit into practical learning. Glad to hear these thoughts!

Comment by hazard on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-06-17T23:02:01.460Z · score: 2 (1 votes) · LW · GW

Yeah! Somehow I had the memory that the two of them actually wrote a book together on the topic, but I just checked and it looks like that's not the case.

Comment by hazard on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-06-17T19:37:49.385Z · score: 4 (2 votes) · LW · GW

Great post! 

Example #4: Logarithms. Jeff thinks we have a reference frame for everything! Every word, every idea, every concept, everything you know has its own reference frame, in at least one of your cortical columns and probably thousands of them. Then displacement cells can encode the mathematical transformations of logarithms, and the relations between logarithms and other concepts, or something like that. I tried to sketch out an example of what he might be getting at in the next section below. Still, I found that his discussion of abstract cognition was a bit sketchier and more confusing than other things he talked about. My impression is that this is an aspect of the theory that he's still working on.

George Lakoff's whole shtick is this! He's a linguist, so he frames it in terms of metaphors; "abstract concepts are understood via metaphors to concrete spatial/sensory/movement domains". His book "Where Mathematics Comes From" is an in-depth exploration of trying to show how various mathematical concepts ground out in mashups of physical metaphors.

Jeff's ideas seem like they would be the neurological backing to Lakoff's more conceptual analysis. Very cool connection!

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:28:34.708Z · score: 2 (1 votes) · LW · GW

Yeep, you + Kaj mentioning the basal ganglia are making me shift on this one.

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:25:50.302Z · score: 4 (2 votes) · LW · GW

Thank you for pointing to the basal ganglia's relation to motor control! This feels like one of those things that's obvious but I just didn't know because I haven't "broadly studied" neuroanatomy. Relatedly, if anyone knows of any resources on neuroanatomy that really dig into why we think of this or that region of the brain as being different, I'd love to hear about them. I know there's both a lot of "this area defs has this structure and does this thing" and "an fMRI went beep so this is the complain-about-ants part of your brain!", and I don't yet have the knowledge to tell them apart.


Connecting this with the GNW, several of the salience cues used in the model are perceptual signals, e.g. whether or not a wall or a cylinder is currently perceived. We also know that signals which get to the GNW have a massively boosted signal strength over ones that do not. So while the GNW does not "command" any particular subsystem to take action, salience cues that get into the GNW can get a significant boost, helping them win the action selection process.

This was a very helpful emphasis shift for me. Even though I wasn't conceptualizing GNW as a commander, I was still thinking of it as a "destination", probably because of all the claims about its connection to consciousness. The "signal boosting" frame feels like a much better fit. Subsystems are already plugged into the various parts of your brain that they need to be connected to; the GNW is not a universal router. It's only when you're doing Virtual-Machine-esque conscious thinking that it's a routing bottleneck. Other times it might look like a bottleneck, but maybe it's more "if you get a signal boost from the GNW, you're probs gonna win, and only one thing can get boosted at a time".

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:08:46.720Z · score: 2 (1 votes) · LW · GW

Another fun weird thing I forgot to mention: you can decrease the effect of AB by lightly distracting the subject (having them listen to music or something).

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T16:07:34.980Z · score: 2 (1 votes) · LW · GW

Hmm, yeah looks like I got PP attention backwards.

There's two layers! Predictive circuits are sorta "autonomously" creating a focus within the domain of what they predict, and then the "global" or top-down attention can either be an attentional subsystem watching the GNW, or the distributed attentional-relevancy gate around the GNW.

The pandemonium stuff is also a great model. In another comment I mentioned that I'm fuzzy on how tightly or loosely coupled different subsystems can be, and how they are organized, and I was unintentionally imagining them as quite monolithic entities.

Comment by hazard on Research snap-shot: question about Global Workspace Theory · 2020-06-16T15:47:40.278Z · score: 2 (1 votes) · LW · GW

Your GNW has an active generative model built out of lots of component models. I would say that the "tennis-match-flow" case entails little sub-sub-components asynchronously updating themselves as new information comes in—the tennis ball was over there, and now it's over here. By contrast the more typically "choppy" way of thinking involves frequently throwing out the whole manifold of generative models all at once, and activating a wholly new set of interlocking generative models. The latter (unlike the former) involves an attentional blink, because it takes some time for all the new neural codes to become active and synchronized, and in between you're in an incoherent, unstable state with mutually-contradictory generative models fighting it out.

Ahhhh this seems like an idea I was missing. I was thinking of the generative models as all being in a ready and waiting state, only ever swapping in and out of broadcasting on the GNW. But a model might take time to become active and/or do its work. I've been very fuzzy on how generative models are arranged and organized. You pointing this out makes me think that attentional blink (or "frame rate" stuff in general) is probably rarely limited by the actual "time it takes a signal to be propagated on the GNW" and much more related to the "loading" and "activation" of the models that are doing the work.

Comment by hazard on Hazard's Shortform Feed · 2020-06-14T16:06:15.199Z · score: 2 (1 votes) · LW · GW

I'm ignoring that gap unless I find out that a bulk of the people reading my stuff think that way. I'm more writing to what feels like the edge of interesting and relevant to me.

Comment by hazard on We've built Connected Papers - a visual tool for researchers to find and explore academic papers · 2020-06-10T01:49:24.113Z · score: 3 (2 votes) · LW · GW

What would lead to this tool no longer working and how can people contribute to making those things not happen? e.g. can I donate money for server costs?

This is amazing, and I have all the same questions. 

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T15:15:10.265Z · score: 4 (2 votes) · LW · GW

Hmm, you did notice a point where I sorta simplified Chalmers to get the post done. 

Then one could answer "but why couldn't a computer system just have this enum-like system that had all the properties which match your subjective experience, without having that subjective experience?"

This is near a question I do think is interesting. I'm starting to think there's a sliding scale of "amount of subjective experience" a thing can have. And I am very curious about "what sorts of things will and won't have X amount of subjective experience".

I guess my beef is that when it's framed as "But why does XYZ system entail qualia?" I infer that even if in the far future I had a SUPER detailed understanding of "tweak this and you get X more units of experience, if you don't have ABC any experience is impossible, LMN architecture is really helpful, but not necessary", Chalmers would still be unimpressed and go "But why does any of this lead to qualia?"

Well, I don't actually think he'd say that. If I had that sorta detailed outline I think his mind would be blown and he'd be super excited.

But when I imagine the person who is still going "But why", I'm imagining that they must be thinking of qualia as this isolated, other, and separate thing.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T15:03:20.957Z · score: 2 (1 votes) · LW · GW

Truly the accent is one of the most powerful weapons any Daemon has in its arsenal. 

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T14:49:38.447Z · score: 2 (1 votes) · LW · GW

Hehe, yes.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T14:49:16.473Z · score: 4 (2 votes) · LW · GW

I've defs got socially formed priors on what things do and don't have experience. And when I try to move past those priors, or think "well, these priors came from somewhere, what were they originally tapping into?", I see that anyone making a judgement about this is doing so through what they could observe.

Comment by hazard on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-09T14:45:52.069Z · score: 2 (1 votes) · LW · GW

Yeps. This feels like seeing one's experience of color as all of the things it's connected to. You've got a unique set of associations to diff colors, and that makes your experience different. 

What I've seen of the "hard problem of consciousness" is that it says "well yeah, but all those associations, all of what a color means to you, that's all separate from the qualia of the color", and that is the thing that I think is divorced from interesting stuff. All the things you mentioned are the interesting parts of the experience of color.

Comment by hazard on Legibility: Notes on "Man as a Rationalist Animal" · 2020-06-09T14:41:54.901Z · score: 9 (3 votes) · LW · GW

Thanks! I might want to make another post at some point that really digs into subtle differences between rationality and legibility. Because I think a lot of people's rationality is legibility. It's like the shadow side of rationality.

Comment by hazard on Hazard's Shortform Feed · 2020-06-08T14:54:18.757Z · score: 9 (5 votes) · LW · GW


In high-school I read pop cogSci books like "You Are Not So Smart" and "Subliminal: How the Subconscious Mind Rules Your Behavior". I learned that "contrary to popular belief", your memory doesn't perfectly capture events like a camera would, but it's changed and reconstructed every time you remember it! So even if you think you remember something, you could be wrong! Memory is constructed, not a faithful representation of what happened! AAAAAANARCHY!!!

Wait a second, a camera doesn't perfectly capture events. Or at least, they definitely didn't when this analogy was first made. Do you remember red eye? Instead of philosophizing on the metaphysics of representation, I'm just gonna note that "X is a construct!" sorts of claims cache out in terms of "you can be wrong in ways that matter to me!".

There's something funny about loudly declaring "it's not impossible to be wrong!"

In high-school, "gender is a social construct!" was enough of a meme that it wasn't uncommon for something to be called a social construct to express that you thought it was dumb.

Me: "God, the cafeteria food sucks!"

Friend: "Cafeteria food is a social construct!"

Calling something a social construct either meant "I don't like it" or "you can't tell me what to do". That was my limited experience with the idea of social constructs. Something I didn't have experience with was the rich feminist literature describing exactly how gender is constructed, what its effects are, and how it's been used to shape and control people for ages.

That is way more interesting to me than just the claim "if your explanation involves gender, you're wrong". Similarly, these days the cogSci I'm reading is stuff like Predictive Processing theory, which posits that all of human perception is made through a creative construction process, and more importantly it gives a detailed description of the process that does this constructing.

For me, a claim that "X is a construct" or "X isn't a 100% faithful representation" can only be interesting if there's either an account of the forces that are trying to assert otherwise, or an account of how the construction works.

Put another way: "you can be wrong!" is what you shout at someone who is insisting they can't be and is trying to make things happen that you don't like. Some people need to have that shouted at them. I don't think I'm that person. If there's a convo about something being a construct, I want to jump right to the juicy parts and start exploring that!

(note: I want to extra emphasize that it can be as useful to explore "who's insisting to me that X is infallible?" as it is to explore "how is this fallible?" I've been thinking about how your sense of what's happening in your head is constructed, noticed I want to go "GUYS! Consciousness IS A CONSTRUCT!" and when I sat down to ask "Wait, who was trying to insist that it 100% isn't and that it's an infallible access into your own mind?" I got some very interesting results.)

Comment by hazard on Legibility: Notes on "Man as a Rationalist Animal" · 2020-06-08T14:49:31.013Z · score: 2 (1 votes) · LW · GW

Cool! Yeah, I've gone over all of them a few times and started outlining this, but also lost steam and moved to other things. You noting interest is useful for me getting more out :)

Comment by hazard on Growing Independence · 2020-06-08T01:17:11.692Z · score: 5 (3 votes) · LW · GW

Thanks for a big list of concrete examples from your life! I find stuff like this really useful/insightful.

Comment by hazard on Consciousness as Metaphor: What Jaynes Has to Offer · 2020-06-07T20:52:54.836Z · score: 6 (3 votes) · LW · GW

Yeah, I think we're more similar than dissimilar. I'm sure the dissimilarities will pop up organically over time :)

Unrelated, I was recently enjoying some of your posts on the neo-cortex! Good stuff.

Comment by hazard on Visual Babble and Prune · 2020-06-05T02:58:52.714Z · score: 5 (3 votes) · LW · GW

++ for experimenting around with this! Enjoyed reading your experience.

Comment by hazard on On the construction of the self · 2020-06-02T16:05:05.295Z · score: 4 (2 votes) · LW · GW

As always, thank you for your service :)

Comment by hazard on Writing Causal Models Like We Write Programs · 2020-05-06T00:06:09.981Z · score: 6 (4 votes) · LW · GW

Have you used SystemVerilog or some other hardware description language? Your clunk model of the ripple adder looks suspiciously like Verilog code I wrote to make a ripple adder in a class. I can't recall enough deets to tell how different they are, but you might gain some insights from investigating.
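For anyone reading along who hasn't seen one, the shared structure I mean looks roughly like this (a hypothetical Python sketch of mine, not John's actual model and not my old Verilog): one-bit full adders chained together, with the carry "rippling" from each stage into the next.

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder(xs, ys):
    """Add two equal-length little-endian bit lists by chaining full
    adders; each stage's carry-out feeds the next stage's carry-in."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]  # final carry becomes the top bit
```

The Verilog version is the same picture: a `full_adder` module instantiated N times with the carry wire threaded through, which is presumably why the two look suspiciously alike.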

Comment by hazard on An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes · 2020-05-05T23:13:22.633Z · score: 4 (2 votes) · LW · GW

Thanks! In my head, I was using the model of "flip 100 coins, exact value of all coins is micro states, heads-tails count is macro state". In that model, the macro states form disjoint sets, so it's probably not a good example.

I think I get your point in abstract, but I'm struggling to form an example model that fits it. Any suggestions?
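For reference, here's the coin model I was using, spelled out in code (my own toy setup, not anything from the paper):

```python
from math import comb, log

n = 100
# Microstate: the exact heads/tails value of every individual coin.
# Macrostate: just the total heads count k.
multiplicity = {k: comb(n, k) for k in range(n + 1)}  # microstates per macrostate
entropy = {k: log(multiplicity[k]) for k in range(n + 1)}  # log-multiplicity

# Macrostates near k = 50 contain overwhelmingly more microstates, which
# is why "about half heads" is the overwhelmingly likely observation.
```

In this model each microstate sits in exactly one macrostate, which is the "disjoint sets" property that makes it a bad example for your point.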

Comment by hazard on An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes · 2020-05-04T21:34:31.105Z · score: 4 (2 votes) · LW · GW

I liked this paper and summary, and was able to follow most of it except for the actual physics :)

I feel like I missed something important though:

If we are trying to judge , what's the use of knowing the entropy of state ? The thrust I got was "Give weight to possible in accordance with their entropy, and somehow constrain that with info from ", but I didn't get a sense of what using as constraints looked like (I expect that it would make more sense if I could do the physics examples).

Comment by hazard on Negative Feedback and Simulacra · 2020-04-29T15:30:31.778Z · score: 3 (2 votes) · LW · GW

I think the sentiment was "even things that look like they might only be operating at level 1, they are also operating at other levels".

The fact that the stranger responds at all to your request for the bathroom signifies an amount of "We are on the same side enough to not physically attack each other". There are places where you can ask a stranger a question and they straight up won't answer you, or won't give you a true answer.

Comment by hazard on Unrolling social metacognition: Three levels of meta are not enough. · 2020-04-18T15:47:08.259Z · score: 4 (2 votes) · LW · GW

If you like this post but want more examples, Knots by R.D. Laing is a book full of them.

Comment by hazard on Causal Abstraction Intro · 2020-04-12T14:59:52.866Z · score: 4 (2 votes) · LW · GW

Great video! It was easier to understand than the previous posts, and it got your point across well. I've been dwelling on similar ideas recently, and will be pointing to this video as a reference.

Comment by hazard on Hazard's Shortform Feed · 2020-02-28T06:33:31.872Z · score: 2 (1 votes) · LW · GW

See this for the best example of rapid brainstorming, and the closest twitter has to long form content.

Comment by hazard on Hazard's Shortform Feed · 2020-02-28T06:31:21.237Z · score: 2 (1 votes) · LW · GW

I've been writing A LOT on twitter lately. It's been hella fun.

One thing that seems clear: Twitter threads are not the place to hash out deep disagreements start to finish. When you start multi-threading, it gets chaotic real fast, and the character limit is a limiting force.

On the other side of things, it feels great for gestating ideas, and getting lots of leads on interesting ideas.

1) Leads: It helps me increase my "known unknowns". There's a lot of topics, ideas, and disciplines I see people making offhand comments about, and while it's rarely enough to piece together the whole idea, I often can pick up the type signature and know where the idea relates to other ideas I am familiar with. This is dope. Expand your anti-library.

2) Gestation: there's a limit to how much you can squeeze into a single tweet, but threading really helps to shotgun-blast out ideas. It often ends up being less a step-by-step carefully reasoned arg, and more lots of quasi-independent thoughts on the topic that you then connect. Also, I easily get 5x engagement on twitter, and other people throwing in their thoughts is really helpful.

I know Raemon and crew have mentioned trying to help with more gestation and development of ideas (without sacrificing overall rigor). post-rat-twitter / strangely-earnest-twitter feels like it's nailed the gestation part. Might be something to investigate.

Comment by hazard on The Relational Stance · 2020-02-12T17:17:24.437Z · score: 4 (3 votes) · LW · GW


Comment by hazard on A Cautionary Note on Unlocking the Emotional Brain · 2020-02-09T15:00:17.580Z · score: 8 (4 votes) · LW · GW

Thanks for sharing! ++ for "I tried the thing, this is how it went" post

Comment by hazard on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T14:19:40.282Z · score: 9 (6 votes) · LW · GW

It might be useful to know that I'm not that sold on a lot of singularity stuff, and the parts of rationality that have affected me the most are some of the more general thinking principles. "Look at the truth even if it hurts" / "Understanding tiny amounts of evo and evo-psych ideas" / "Here's 18 different biases, now you can tear down most people's arguments".

It was those ideas (a mix of the naive and sophisticated form of them) + my own idiosyncrasies that caused me a lot of trouble. So that's why I say "rationalist memes". I guess that if I bought more singularity stuff I might frame it as "weird but true ideas".

Comment by hazard on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T22:17:48.575Z · score: 6 (6 votes) · LW · GW

I found this a very useful post. It feels like a key piece in helping me think about CFAR, but also it sharpens my own sense of what stuff in "rationality" feels important to me. Namely "Helping people not have worse lives after interacting with rationalist memes"

Comment by hazard on "human connection" as collaborative epistemics · 2020-01-13T03:17:19.449Z · score: 6 (3 votes) · LW · GW
Bar the lone soul on a heroic dissent, I don't think most of us are able to keep meaningfully developing our worldview if there is no one to enthusiastically share our findings with.

Some version of this feels pretty important.

Comment by hazard on Hazard's Shortform Feed · 2020-01-13T02:26:09.987Z · score: 4 (3 votes) · LW · GW

So a thing Galois theory does is explain:

Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)?

Which makes me wonder; would there be a formula if you used more machinery than the normal stuff and radicals? What does "more than radicals" look like?
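Partially answering myself after some searching (hedge: this is from a quick skim, not something I've worked through): one classical answer is the Bring radical. You adjoin a single new one-argument function,

```latex
\operatorname{BR}(a) \;:=\; \text{the unique real root of}\quad x^{5} + x + a = 0,
```

and since Tschirnhaus transformations built out of ordinary radicals reduce any quintic to the Bring–Jerrard form $t^{5} + t + a = 0$, radicals plus $\operatorname{BR}(\cdot)$ are enough to solve every fifth-degree equation. So "more than radicals" can be as small as one extra named root-extraction function, though degree six and up needs yet more machinery again.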

Comment by hazard on Hazard's Shortform Feed · 2020-01-12T18:31:03.346Z · score: 2 (1 votes) · LW · GW

I'm noticing an even more granular version of this. Things that I might do casually (reading some blog posts) have a significant effect on what's loaded into my mind the next day. Smaller than the week level, I'm noticing a 2-3 day cycle of "the thing that was most recently in my head" and how it affects the question of "If I could work on anything rn what would it be?"

This week on Tuesday I picked Wednesday as the day I was going to write a sketch. But because of something I was thinking before going to bed, on Wednesday my head was filled with thoughts on urbex. So I switched gears, and urbex thoughts ran their course through Wednesday, and on Thursday I was ready to actually write a sketch (comedy thoughts need to be loaded for that)

Comment by hazard on Hazard's Shortform Feed · 2020-01-05T14:33:06.023Z · score: 5 (3 votes) · LW · GW

I've been writing on twitter more lately. Sometimes when I'm trying to express an idea, to generate progress I'll think "What's the shortest sentence I can write that convinces me I know what I'm talking about?" This is different from "What's a simple but no simpler explanation for the reader?"

Starting a twitter thread and forcing several tweet-sized chunks of ideas out is quite helpful for that. It helps get the concept clearer in my head, and then I have something out there and I can dwell on how I'd turn it into a consumable for others.

Comment by hazard on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T03:33:05.735Z · score: 5 (3 votes) · LW · GW
[...] and yet suppose that I were invited to write for a venue where my ideas would never be challenged, where my writing were not subjected to scrutiny, where no interested and intelligent readers would ask probing questions… shouldn’t I expect my writing (and my ideas!) to degrade?

I'm not completely swayed either way, but I want to acknowledge this as an important and interesting point.

Comment by hazard on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T22:39:53.339Z · score: 5 (3 votes) · LW · GW

Very useful comment, in that I have not previously imagined that this was your, or anyone else's, normative view on responding to comments.

Comment by hazard on Moloch Hasn’t Won · 2019-12-28T18:32:25.441Z · score: 6 (3 votes) · LW · GW

I'm quite interested in the rest of this. Though I did find the idea of Moloch useful for responding to the most naive forms of "If we all did X everything would be perfect", I also have a vague feel that rationalists' belief in Moloch being all-powerful prevents them from achieving totally achievable levels of group success.

Comment by hazard on Values Assimilation Premortem · 2019-12-28T18:21:43.669Z · score: 3 (2 votes) · LW · GW

More or less. Here are some related pieces of content:

There's a twitter thread by Qiaochu that ostensibly is about addiction, but has the idea "It's more useful to examine what you're running from, than what you're running to." In the context of our conversation, the Christianity and Rationalism would be "what you've been running to" and "what you're running from" (for me) has been social needs not being met, not having a lot of personal agency, etc.

Meaningness is an epic tome by David Chapman on different attitudes towards meaning that one can take and their repercussions.

Regarding examples and generalizing: I've been finding that it's really hard to feel like I've changed my mind in any substantive way unless I can find the examples and memories of events that led me to believe a general claim in the first place, and address those examples. Matt Goldenberg has a sequence on a specific version of this idea.

Comment by hazard on Values Assimilation Premortem · 2019-12-26T18:46:35.316Z · score: 9 (6 votes) · LW · GW

Hi, welcome to LW! Fellow deconverted christian here. I've both gone through some crisis mode deconverting from christianity, and some crisis mode when exploring and undoing some of the faux-rational patches I had made during the first crisis. Can't wait for round three :)

I'm happy to give some more thoughts, though it might be useful for you to enumerate a few example beliefs / behaviors that you are adopting and now rethinking. "rationalist" is a pretty big space and there's many different strokes for many different folks.

As a very general thought, I'm currently exploring the idea that most of my problems aren't related to big picture philosophy / world-view stuff, and are more matters of increasing personal agency (e.g. "Do I feel stressed from not enough money?" "Am I worried about the security of my job?" "Can I reliably have fun conversations?" "Can I spend time with people who love me?" "Does my body feel good?" etc). Though admittedly, I had to arrive at this stance via big picture world-view style thinking. Might be useful to dwell on.