Comment by hazard on MLU: New Blog! · 2019-06-12T21:59:42.516Z · score: 2 (1 votes) · LW · GW

Yay making stuff yourself!

Comment by hazard on What is a good moment to start writing? · 2019-05-30T18:08:22.574Z · score: 13 (6 votes) · LW · GW
What is a good moment to start writing?

Now.

Pithy replies aside, I think it's likely the answer you could do with hearing. Echoing what some other people have said, I recommend finding what feels like a super low-stakes place to write (a personal blog, a shortform feed, a private google doc, emails to a friend) and working on writing a lot. I find that having an output channel for thoughts makes it easier to have thoughts in the first place. If you are frequently in the practice of trying to crystallize ideas, you'll get better at it.

I find that whenever I try to "do a post justice" it kills my ability and motivation. I think what's happening in my head is that I assume a mantle of "well, I'm supposed to be a careful-thinking, well-written master rationalist, so anything that isn't that can't get out, or else how can I act authoritative?" It helps to remind myself that my intent is not to be an authority, but to express my ideas.

Perpetual draft style might also help with getting over "but what if I'm wrong?".

Also, specifically speaking to:

I know much more than a year ago. If I explain my ideas now, I'm going to be embarrassed by it next year.

I have a lot of notebooks that I've kept over the last 5 years that contain a lot of whatever interesting-to-me thing I was thinking at the time. It's been incredibly useful to have this record! A lot of the breakthroughs, epiphanies, and slow-shifts-in-thought aren't completely unique and different from thoughts I've had in the past. It's been super useful to ask "What has changed since the last time I thought about this?" and actually be able to find a record of how I used to think, and see the difference between now and then.

If you can switch from "I'm embarrassed at what I used to think" to "See how much I've grown!" you might find it much easier to write.

Comment by hazard on What is a good moment to start writing? · 2019-05-30T17:32:42.272Z · score: 4 (2 votes) · LW · GW

I second having a shortform feed! I try to collect a lot of "minimum intriguing ideas". One way I interact with my shortform is to not work on a post over time. If I have a brief nugget of an idea that seems like it might be self-contained or interesting, I try to just force it out. If I get it, great. If I get a mess, I ditch it and don't worry about coming back to finish it later. (Note: this was a comment on how I shortform, not really my advice for toon)

Comment by hazard on What is your personal experience with "having a meaningful life"? · 2019-05-24T01:37:15.066Z · score: 2 (1 votes) · LW · GW

Thanks! Noting your concern with conversations about meaning being about signaling: do you think that the "[...] and therefore are praise-worthy" comes from the fact that most people lack meaning and do consider it important? Or do you think it's just a signal that is no longer attached to the reality it was once (if ever) grounded in?


Comment by hazard on What is your personal experience with "having a meaningful life"? · 2019-05-24T01:27:11.065Z · score: 2 (1 votes) · LW · GW

To clarify, what you've outlined is your current guess at the mechanism in human minds that is the root of humans experiencing a thing we call meaning? (as opposed to "I've found meaning in my life via becoming less confused")

If so, thanks, and do you know if anyone else has written about this? Separately, I'd also be interested in knowing more about how you personally experience meaning. Given that you've put explicit thought into the mechanisms behind meaning, has that changed how you experience it?

Comment by hazard on What is your personal experience with "having a meaningful life"? · 2019-05-24T01:03:28.738Z · score: 2 (1 votes) · LW · GW

Thanks! I'm pretty sure I understand the broad strokes you've given about how you think/feel on this, and I'd be interested about prying into the details.

  • Are these rewrites little corrective tweaks that you do fairly quickly, or are they more drawn out processes that you need to devote time to?
  • Were there any particular experiences that hammered in "these stories are mostly made up"?
  • I'm guessing from what you've said that it doesn't bother you that your stories are made up. Can you talk about why that doesn't bother you, or why you think it might bother other people, and why those reasons don't apply to you? (I know questions like "why doesn't ____ bother you?" are weird and hard to answer, so answer however you like)
Comment by hazard on What is your personal experience with "having a meaningful life"? · 2019-05-24T00:53:33.630Z · score: 2 (1 votes) · LW · GW

Thanks! Some questions on what caring about stuff feels like for you:

  • Do you immediately/readily tell whether or not you care about something, or do you think about and "decide" what things you care about?
  • Have you had any things that you cared about for some time, then something happened that made you not care about it anymore? If so, what did that feel like?
Comment by hazard on Go Do Something · 2019-05-22T16:32:56.834Z · score: 16 (3 votes) · LW · GW

I'm not sure I'm thinking about the same thing you are, so let me know what you think of these examples:

"Become a well known writer/blogger"

"Start a popular meetup for Y topic"

"Get respected in a community"

"Make a viral video"

Me phrasing what I think is your point:

Some of the most readily imaginable "things to do" are identified by their effects on social reality (make something popular, be respected). Learning to shape social reality is a skill in itself, but if you mistakenly believe that you are learning how to shape reality you will hit problems when you are confronted with a problem that requires shaping reality.

Comment by hazard on What is your personal experience with "having a meaningful life"? · 2019-05-22T14:16:47.690Z · score: 4 (2 votes) · LW · GW

Growing up I had a narrative of being a scrappy protagonist in a fantasy YA novel, with a dual theoretical backing of "god exists and is nice" and "I read that people who think too much on the meaning of life go crazy, so I'll just not do that".

I've had one huge meltdown that was a culmination of previous structures of meaning collapsing in my life (my uncanny valley of rationality). The strong fallout lasted 3 months, strong background unpleasantness lasted for 2 years, and various lingering small threads are still being dealt with today (3.5 years later).

Once a week(ish) I have some small-to-medium sense of listlessness.

Every other month or so I get a one-to-three-day funk of feeling like the things I'm doing aren't interesting/don't matter anymore.

I do feel a need for meaning and endorse that need. I often feel my life is most meaningful when I can work on a project that provides a decent amount of flow + clear indicators of progress, and when I get to spend lots of time with people I like.

What is your personal experience with "having a meaningful life"?

2019-05-22T14:03:39.509Z · score: 22 (11 votes)
Comment by hazard on Complex Behavior from Simple (Sub)Agents · 2019-05-11T12:42:55.136Z · score: 12 (4 votes) · LW · GW

Yay model building and experimenting! I like, and would love to see more of, people building simple models to experiment with the plausibility of ideas and build intuition. You also seem to have approached this with a good epistemic attitude: yes, this does not constitute strong evidence that humans are implemented with subagents, but it does demonstrate that familiar humany behaviours can arise from some form of subagents.

Comment by hazard on Totalitarian ethical systems · 2019-05-08T16:25:55.590Z · score: 2 (1 votes) · LW · GW

"It feels good and right for me to have a life where I'm producing more than I'm consuming. Wait, if it was actually a good thing to produce more than I consume, wouldn't that mean we should have a society where everyone is pumping out production that never gets used by anyone?"

The above is not something I'm very concerned with, but it did feel easy to jump to "this is now a question of the effects of this policy instantiated across all humans."

Comment by hazard on Declarative Mathematics · 2019-05-07T17:41:36.527Z · score: 2 (1 votes) · LW · GW

I was going to just write a comment, but it turned into a post. Here, I outlined the models I was using to think about this, and what that said about my reaction to ignoring "under the hood" stuff.

Models of Memory and Understanding

2019-05-07T17:39:58.314Z · score: 20 (5 votes)
Comment by hazard on Hazard's Shortform Feed · 2019-05-06T21:55:38.401Z · score: 9 (4 votes) · LW · GW

A forming thought on post-rationality. I've been reading more samzdat lately and thinking about legibility and illegibility. Me paraphrasing one point from this post:

State driven rational planning (episteme) destroys local knowledge (metis), often resulting in metrics getting better, yet life getting worse, and it's impossible to complain about this in a language the state understands.

The quip that most readily comes to mind is "well if rationality is about winning, it sounds like the state isn't being very rational, and this isn't a fair attack on rationality itself" (this comment quotes a similar argument).

Similarly, I was having a conversation with two friends once. Person A expressed that they were worried if they started hanging around more EA's and rationalists, they might end up having a super boring optimized life and never do fun things like cook meals with friends (because soylent) or go dancing. Friend B expressed, "I dunno, that sounds pretty optimal to me."

I don't think friend A was legitimately worried about the general concept of optimization. I do think they were worried about what they expected their implementation (or their friends' implementation) of "optimality" would look like in their own lives.

Current most charitable claim I have of the post-rationalist mindset: the best and most technical specifications that we have for what things like optimal/truth/rational might look like contain very little information about what to actually do. In your pursuit of "truth"/"rationality"/"the optimal" as it pertains to your life, you will be making up most of your art along the way, not deriving it from first principles. Furthermore, thinking in terms of the truth/rationality/optimality will [somehow] lead you to make important errors you wouldn't have made otherwise.

A more blasé version of what I think the post-rationalist mindset is: you can't handle the (concept of the) truth.

Comment by hazard on Hazard's Shortform Feed · 2019-05-05T13:26:04.280Z · score: 2 (1 votes) · LW · GW

Short framing on one reason it's often hard to resolve disagreements:

[with some frequency] disagreements don't come from the same place that they are found. Your brain is always running inference on "what other people think". From a statement like, "I really don't think it's a good idea to homeschool", your mind might already be guessing at a disagreement you have 3 concepts away, yet only ping you with a "disagreement" alarm.

Combine that with a decent ability to confabulate. You ask yourself "Why do I disagree about homeschooling?" and you are given a plethora of possible reasons to disagree and start talking about those.

Comment by hazard on Totalitarian ethical systems · 2019-05-04T18:24:06.243Z · score: 20 (4 votes) · LW · GW

Highlighting the parts that felt important:

I think the frame in which it's important to evaluate global states using simple metrics is kind of sketchy and leads to people mistakenly thinking that they don't know what's good locally.
[...]
Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric, so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god.
[...]
"I don't know what a single just soul looks like, so let's figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that."

I can see ways in which my own thinking has fallen into the frame you mention in the first quote. It's an interesting and subtle transition, going from asking, "What is it best for me to do?" to "What is it best for a human to do?"/"What would it be best for everyone to be doing?". I notice that I feel very compelled to make this transition when thinking.

Comment by hazard on Declarative Mathematics · 2019-04-18T20:58:55.683Z · score: 4 (2 votes) · LW · GW
Users don’t need to know what’s going on under the hood; the algorithms and proofs generally “just work” without the user needing to worry about the details. The user’s job is to understand the language of the framework, the interface, and translate their own problems into that language.

Interesting, my gut reaction to this approach as applied to math was "ugh, that sounds horrible, I don't want to ignore the under-the-hood details, the whole point of math is understanding what's going on".

Yet when I consider the same approach to programming and computer sciency stuff my reaction is "well duh, of course we're trying to find good abstractions to package away as much of the nitty gritty details as possible, otherwise you can't make/build really big interesting stuff."
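To make that contrast concrete, here's a minimal sketch (my own illustration, not anything from the post) of the declarative style: state the problem and let the framework pick the algorithm. The matrix and numbers are made up for the example.

```python
import numpy as np

# Declarative use: state the problem "solve Ax = b" and trust the framework.
# You never have to watch the LU decomposition running under the hood.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)
print(x)  # [2. 3.], since 3*2 + 1*3 = 9 and 1*2 + 2*3 = 8
```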

I'll think more about why these feel different.

Comment by hazard on Deconfusing Logical Counterfactuals · 2019-04-09T01:49:44.924Z · score: 4 (2 votes) · LW · GW

I really appreciate "Here's a collection of a lot of the work that has been done on this over the years, and important summaries" type posts. Thanks for writing this!

Comment by hazard on The Simple Solow Model of Software Engineering · 2019-04-09T01:44:37.945Z · score: 3 (2 votes) · LW · GW

I appreciate that you outlined what predictions are made from the Solow model applied to software. Do you know of any other models that might be applied?

Comment by hazard on Towards a Quieter Life · 2019-04-08T13:07:35.134Z · score: 4 (3 votes) · LW · GW

Yay intents for quiet! Hope you get something out of it.

Comment by hazard on Hazard's Shortform Feed · 2019-04-01T01:35:19.021Z · score: 4 (2 votes) · LW · GW

Reasons why I currently track or have tracked various metrics in my life:

1. A mindfulness tool. Taking the time to record and note some metric is itself the goal.

2. Have data to be able to test a hypothesis about how some intervention would affect my life. (e.g., did waking up earlier give me less energy in the day?)

3. Have data that enables me to make better predictions about the future (mostly related to time tracking, "how long does X amount of work take?")

4. Understanding how [THE PAST] was different from [THE PRESENT] to help defeat the Deadly Demons of Doubt and Shitty Serpents of Should (à la Deliberate Once).

I have not always had these in mind when deciding to track a metric. Often I tracked because "that's wut productive people do, right?". When I keep these in mind, tracking gets more useful.

Comment by hazard on Hazard's Shortform Feed · 2019-04-01T01:27:35.292Z · score: 6 (3 votes) · LW · GW

The idea was less "individual humans are ontologically basic" and more: I've seen that talking about broad groups of people has often been less useful than dropping down to talk about interactions I've had with individual people.

In writing the comment I was focusing more on what the action I wanted to take was (think about specific encounters with people when evaluating my impressions) and less on my ontological claims of what exists. I see how my lax opening sentence doesn't make that clear :)

Comment by hazard on Hazard's Shortform Feed · 2019-04-01T01:22:38.055Z · score: 7 (3 votes) · LW · GW

Me circa March 2018

"Should"s only make sense in a realm where you are divorced from yourself. Where you are bargaining with some other being that controls your body, and you are threatening it.

Update: This past week I've had an unusual amount of spontaneous introspective awareness of moments when I was feeling pulled by a should, especially one that came from comparing myself to others. I've also been meeting these thoughts with an "Oh interesting, I wonder why this made me feel a should?" as opposed to a standard "endorse or disavow" response.

Meta Thoughts: What do I know about "should"s that I didn't know in March 2018?

I'm more aware of how incredibly pervasive "should"s are in my thinking. Last Saturday alone I counted over 30 moments of feeling the negative tug of some "should".

I now see that even for things I consider cool, dope, and virtuous, I've been using "you should do this or else" to get myself to do them.

Since CFAR last fall I've gained a lot of metis on aligning myself, a task that I'd previously trivialized or brought in "willpower" to conquer. Last year I was more inclined to go, "Well okay fine, I'm still saying I should do XYZ, but the part of me that is resisting that is actually just stupid and deserves to be coerced."

Comment by hazard on Starting Meditation · 2019-04-01T01:07:18.131Z · score: 4 (2 votes) · LW · GW

I've missed seven days of journaling in the last month-plus (non-consecutive, though).

Thoughts: I've gotten some good insight from this time. Towards the end, it became more, "What are the important things that happened recently?" journaling.

I've put much less ritual-intent into this habit than with meditation. In the past week I changed my sleep schedule (I now sleep in till whenever instead of getting up with an alarm at 7am), which makes it slightly harder to ensure the sanctity of morning journaling, but I'm currently okay with that because sleeping more and getting up at my own pace has had a wonderfully positive effect this past week (keeping a keen eye on whether that trend continues).

It feels vaguely important to not go into this journaling with an agenda. I get more rewarding journaling when I wait until the thing that catches my interest most works its way to the top of my mind.

Comment by hazard on Hazard's Shortform Feed · 2019-03-23T22:52:08.737Z · score: 10 (3 votes) · LW · GW

The general does not exist, there are only specifics.

If I have a thought in my head, "Texans like their guns", that thought got there from a finite number of specific interactions. Maybe I heard a joke about Texans. Maybe my family is from Texas. Maybe I hear a lot about it on the news.

"People don't like it when you cut them off mid sentence". Which people?

At a local meetup we do a thing called encounter groups, and one rule of encounter groups is "there is no 'the group', just individual people". Having conversations in that mode has been incredibly helpful to realize that, in fact, there is no "the group".

Comment by hazard on Hazard's Shortform Feed · 2019-03-23T22:47:03.207Z · score: 4 (2 votes) · LW · GW

(Less a reply and more just related)

I often think a sentence like, "I want to have a really big brain!". What would that actually look like?

  • Not experiencing fear or worry when encountering new math.
  • Really quick to determine what I'm most curious about.
  • Not having my head hurt when I'm thinking hard, and generally not feeling much "cognitive strain".
  • Be able to fill in the vague and general impressions with the concrete examples that originally created them.
  • Doing a hammers and nails scan when I encounter new ideas.
  • Having a clear, quickly accessible understanding of the "proof chains" of ideas, as well as the "motivation chains".
    • I don't need to know all the proofs or motivations, but I do have a clear sense of what I understand myself, and what I've outsourced.
  • Instead of feeling "generally confused" by things or just "not getting them", I always have concrete "This doesn't make sense because BLANK" expressions that allow me to move forward.
Comment by hazard on Hazard's Shortform Feed · 2019-03-21T02:15:05.297Z · score: 6 (3 votes) · LW · GW

What are the barriers to having really high "knowledge work output"?

I'm not capable of "being productive on arbitrary tasks". One winter break I made a plan to apply for all the small $100 essay scholarships people were always telling me no one applied for. After two days of sheer misery, I had to admit to myself that I wasn't able to be productive on a task that involved making up bullshit opinions about topics I didn't care about.

Conviction is important. From experiments with TAPs and a recent bout of meditation, it seems like when I bail on an intention, on some level I am no longer convinced the intention is a good idea/what I actually want to do. Strong conviction feels like confidence all the way up in the fact that this task/project is the right thing to spend your time on.

There's probably a lot in the vein of having good chemistry: sleep well, eat well, get exercise.

One of the more mysterious quantities seems to be "cognitive effort". Sometimes thinking hard feels like it hurts my brain. This post has a lot of advice in that regard.

I've previously hypothesized that a huge chunk of painful brain fog is the experience of thinking at a problem, but not actually engaging with it (similar to how Mark Forster has posited that the resistance one feels to a given task is proportional to how many times it has been rejected).

Having the rest of your life together and time-boxing your work is insanely important for reducing the frequency with which your brain promotes "unrelated" thoughts to your consciousness (if there's important stuff that isn't getting done, and you haven't convinced yourself that it will be handled adequately, your mind's tendency is to keep it in a loop).

I've got a feeling that there's a large amount of gains at the 5-second level. I would be super interested in seeing anyone's thoughts or writings on the 5-second level of doing better work and avoiding cognitive fatigue.

Comment by hazard on Some thoughts after reading Artificial Intelligence: A Modern Approach · 2019-03-20T17:11:51.878Z · score: 13 (8 votes) · LW · GW

Yay self-study! Positive reinforcement!

Comment by hazard on Verifying vNM-rationality requires an ontology · 2019-03-13T03:52:13.877Z · score: 5 (3 votes) · LW · GW

I was just thinking about this earlier today while re-reading a similar point by Stuart Armstrong.

Comment by hazard on Hazard's Shortform Feed · 2019-03-12T18:18:34.198Z · score: 2 (1 votes) · LW · GW

FAI worry: A human-in-the-loop AI that only takes actions that get human approval (and whose expected outcomes have human approval) hits big problems when the context the AI is acting in is very different from the context where our values were trained.

Is there any way around this besides simulating people having their values re-organized given the new environment? Is this what CEV is about?

Comment by hazard on Hazard's Shortform Feed · 2019-03-12T18:12:00.491Z · score: 2 (1 votes) · LW · GW

Weirdness that comes from reflection:

In this frame, I can self-reflect on a given circuit and ask, "Does this circuit actually push me towards what I think is good?" When doing this, I'll be using some more meta/higher-order circuits (concepts I've built up over time about what a "good" brain looks like) but I'll also be using lower level circuits, and I might even end up using the evaluated circuit itself in this evaluation process.

Sometimes this reflection process will go smoothly. Sometimes it won't. But one takeaway/claim is that you have this complex, roundabout process for re-evaluating your values when some circuits begin to think that other circuits have diverged from "good".

Because of this ability to reflect and change, it seems correct to say that "I value things conditional on my environment" (where environment has a lot of flex, it could be as small as your work space, or as broad as "any existing human culture").

Example: Let's say there was literally no scarcity of survival goods (food, water, etc.). It seems like a HUGE chunk of my values and morals is built-up inferences and solutions to resource-allocation problems. If resource scarcity was magically no longer a problem, many of my values would have lost their connection to reality. From what I've seen so far of my own self-reflection process, it seems likely that over time I would come to reorganize my values in such a post-scarcity world. I've also currently got no clue what that reorganization would look like.

Comment by hazard on Hazard's Shortform Feed · 2019-03-12T17:57:35.669Z · score: 2 (1 votes) · LW · GW

Current beliefs about how human value works: various thoughts and actions can produce a "reward" signal in the brain. I also have lots of predictive circuits that fire when they anticipate a "reward" signal is coming as a result of what just happened. The predictive circuits have been trained to use the patterns of my environment to predict when the "reward" signal is coming.

Getting an "actual reward" and a predictive circuit firing will both be experienced as something "good". Because of this, predictive circuits can not only track "actual reward" but also the activation of other predictive circuits. (So far this is basically "there's terminal and instrumental values, and they are experienced as roughly the same thing")

The predictive circuits are all doing some "learning process" to keep their firing correlated to what they're tracking. However, the "quality" of this learning can vary drastically. Some circuits are more "hardwired" than others, and less able to update when they begin to become uncorrelated from what they are tracking. Some are caught in interesting feedback loops with other circuits, such that you have to update multiple circuits simultaneously, or in a particular order.

Though everything that feels "good" feels good because at some point or another it was tracking the base "reward" signal, it won't always be a good idea to think of the "reward" signal as the thing you value.

Say you have a circuit that tracks a proxy of your base "reward". If something happens in your brain such that this circuit ceases to update, you basically value this proxy terminally.

Said another way, I don't have a nice clean ontological line between terminal values and instrumental values. The less updatable a predictive circuit, the more "terminal" the value it represents.
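As a toy sketch of that last claim (my own illustration with made-up numbers, not a model of real neural learning): treat each circuit as an estimator with its own learning rate, and model a "hardwired" circuit as one whose learning rate has dropped to zero.

```python
class Circuit:
    """A predictive circuit that tracks some signal at a given learning rate."""

    def __init__(self, learning_rate):
        self.learning_rate = learning_rate
        self.estimate = 0.0  # how strongly this circuit fires

    def update(self, signal):
        # Move the estimate toward whatever signal this circuit tracks.
        self.estimate += self.learning_rate * (signal - self.estimate)

flexible = Circuit(learning_rate=0.5)
hardwired = Circuit(learning_rate=0.5)

# Phase 1: both circuits learn to anticipate a reward that is present.
for _ in range(20):
    flexible.update(1.0)
    hardwired.update(1.0)

hardwired.learning_rate = 0.0  # this circuit ceases to update

# Phase 2: the environment changes and the base reward disappears.
for _ in range(20):
    flexible.update(0.0)
    hardwired.update(0.0)

print(round(flexible.estimate, 3))   # ~0.0: re-correlated with the base reward
print(round(hardwired.estimate, 3))  # ~1.0: still fires -- an effectively terminal value
```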

Comment by hazard on Do we need a high-level programming language for AI and what it could be? · 2019-03-06T20:06:27.910Z · score: 8 (5 votes) · LW · GW

My thoughts are similar to Paul's comment.

You might be interested in the difference between high-level languages and Domain-Specific Languages (DSLs). High-level languages are intended to be general-purpose languages that abstract away some of the nitty-gritty details of lower levels (C doesn't have to think about registers while assembly does; Python doesn't have to think about memory management while C does). DSLs are languages specially designed for certain problem domains. Stan and Nile are examples of DSLs for probabilistic programming and displaying graphics, respectively. You seem to be suggesting a DSL for AI as opposed to a generic high-level language.

To generalize, it seems like DSLs and high-level languages are useful for making something you already know how to do more intuitive/simple/straightforward. Making AGI and FAI are things we don't know how to do. We could likely make a DSL that would allow you to think more fluently about existing AI concepts, and it's possible that might let you reach various insights quicker. But you still have to do the work.
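To make the distinction concrete, here's a small example (my own, not from the question) using regular expressions, a classic DSL embedded in a general-purpose language: the notation makes a known kind of work compact, but it doesn't do any new kind of work for you.

```python
import re  # the regex mini-language is a DSL for text patterns

# Declare *what* pattern you want; the regex engine decides *how* to scan.
pattern = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # ISO-style dates

print(pattern.findall("logged 2019-03-06 and 2019-05-07"))
# ['2019-03-06', '2019-05-07']
```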

Comment by hazard on mAIry's room: AI reasoning to solve philosophical problems · 2019-03-06T00:40:59.591Z · score: 7 (4 votes) · LW · GW

Wanted to note that I really enjoyed this and thought it was quite lucid. A great example of how thinking about possible detailed implementations of a mind helps you dodge a lot of the fudging that comes with various philosophical puzzles.

Comment by hazard on Starting Meditation · 2019-02-25T14:26:56.065Z · score: 5 (3 votes) · LW · GW

I'm going to put meditation on pause, but still keep my protected 30 min morning time. I'm going to swap in "Thinking, introspecting, and dwelling on how I relate to other people."

The decision comes from a few things:

  • How I relate to people is one of the most frequent subjects of mind-wandering during meditation.
  • I'm currently in a relationship that has made me a lot more aware of various habits of behavior and thinking that I have.
  • I feel well poised to make a lot of progress on that front.
  • When I notice I'm mind-wandering, I try to follow "Why did my attention go from my breath to this topic?" I frequently find that I pretty deeply feel that spending time on relations would benefit me more than the meditation (à la this comment).

I'm going to give this a one-month trial period. Expect an update March 31, 2019

Comment by hazard on Hazard's Shortform Feed · 2019-02-22T19:56:26.056Z · score: 2 (1 votes) · LW · GW

Often I have an idea three times in various forms before it makes it to the territory of "well-thought-out idea that I'm actually acting upon and having good stuff come from it".

By default, I follow a pattern of "semi-randomly expose myself to lots of ideas, don't worry a lot about screening for repetitive stuff, let the most salient ideas at any given moment float up to receive tidbits of conscious thought, then forget about them till the next semi-random event triggers thinking about them."

I'd be interested if there was a better protocol for "this thing I've encountered seems extra important/interesting, let me dwell on it more and more intentionally integrate it into my thinking."

Comment by hazard on Starting Meditation · 2019-02-07T17:11:33.146Z · score: 6 (3 votes) · LW · GW

It's been 112 days on this habit.

5 days have been missed, none happened consecutively. "Don't skip twice" has been a good star to follow.

For the past month and a half I've noticed a good deal more mind-wandering than in the first month and a half. This seemed to be related to not giving as much attention to why I am doing this, forgetting the exciting motivation, and treating it as "routine". This past week has seen a slight uptick in attention. We'll see what happens.

Comment by hazard on Rationality: What's the point? · 2019-02-04T03:25:22.887Z · score: 2 (1 votes) · LW · GW

Would you buy the claim you can "Be more right and get less wrong"? (asked because I feel like I'm pointing to the same thing as the first bullet, but the first bullet is not phrased super well)

On the question of "does understanding your mind make you fail less often": I notice that there are 3+ cases that immediately jump to mind that match "I didn't fail because I learned more about my mind". Do you think that in a lot of those cases I didn't fail for reasons other than understanding my mind, or do you expect that I'm racking up new, different failures as a result of understanding my mind more?

On post rationality, I just now read a bit more, and my reaction was, "Wait, wasn't that a key piece of rationality in the first place?" I'm interested to see if I've secretly always been of a post-rationalist persuasion, or if I'm not seeing the main angle of post-rationality.

Comment by hazard on STRUCTURE: A Crash Course in Your Brain · 2019-02-03T23:51:46.224Z · score: 2 (1 votes) · LW · GW

Cool. There's nothing I currently want transferred. This was more a forward-thinking thought. I'll pm you if it comes up.

Comment by hazard on STRUCTURE: A Crash Course in Your Brain · 2019-02-03T16:58:49.213Z · score: 2 (1 votes) · LW · GW

Thanks! As of now, is there any tool for migrating comments from one post to another? I just revised my approach to this sequence in a way that I expect to reduce the need for that (more clearly separating structure and content posts), but I'd imagine it could be useful.

Comment by hazard on STRUCTURE: Reality and rational best practice · 2019-02-03T16:51:33.227Z · score: 2 (1 votes) · LW · GW

My first plan was that this and the previous posts were "outlines of the main ideas I want to cover in however many posts, to be written later". When there was enough detail of "here's a discrete post-worthy idea I've decided I want to cover", I would then create a new post which would be the fleshed-out outline of the idea. This would then be edited and revised into a normal, human-readable content post.

You made me notice that I didn't have a clear sense of how I wanted comments to work. If there's discussion of the ideas/content outlined in this post, would I want to / have to port that to the actual post I eventually make?

Current plan is to clearly differentiate "structure" and "content" posts. This is a structure post, and I'd like comments to be about, "What do you think of this chunk of idea space being an intro to rationality?". Content posts will be, "What do you think of this idea/content/how I wrote about it?"

Rationality: What's the point?

2019-02-03T16:34:33.457Z · score: 12 (5 votes)
Comment by hazard on Hazard's Shortform Feed · 2019-02-03T14:09:49.051Z · score: 6 (3 votes) · LW · GW

From Gwern's about page:

I personally believe that one should think Less Wrong and act Long Now, if you follow me.

Possibly my favorite catch-phrase ever :) What do I think is hiding there?

  • Think Less Wrong
    • Self anthropology- "Why do you believe what you believe?"
    • Hugging the Query and not sinking into confused questions
    • Litany of Tarski
    • Notice your confusion - "Either the story is false or your model is wrong"
  • Act Long Now
    • Cultivate habits and practice routines that seem small / trivial on a day/week/month timeline, but will result in you being superhuman in 10 years.
    • Build abstractions where you are acutely aware of where it leaks, and have good reason to believe that leak does not affect the most important work you are using this abstraction for.
    • What things trigger "Man, it sure would be helpful if I had data on XYZ from the past 8 years"? Start tracking that.

STRUCTURE: Reality and rational best practice

2019-02-01T23:51:21.390Z · score: 6 (1 votes)

STRUCTURE: How the Social Affects your rationality

2019-02-01T23:35:43.511Z · score: 1 (3 votes)

STRUCTURE: A Crash Course in Your Brain

2019-02-01T23:17:23.872Z · score: 8 (5 votes)
Comment by hazard on Hazard's Shortform Feed · 2019-01-27T02:36:37.610Z · score: 10 (3 votes) · LW · GW

Concrete example: when I'm full, I'm generally unable to imagine meals in the future as being pleasurable, even if I imagine eating a food I know I like. I can still predict and expect that I'll enjoy having a burger for dinner tomorrow, but if I just stuffed myself on french fries, I just can't run a simulation of tomorrow where the "enjoying the food experience" sense is triggered.

I take this as evidence that my internal food-experience simulator has "code" that just asks "If you ate XYZ right now, how would it feel?" and spits back the result.

This makes me wonder how many other mental systems I have that I think of as "Trying to imagine how I'd feel in the future" are really just predicting how I'd feel right now.

More specifically, the fact that I literally can't do a non-"what-I'm-feeling-right-now" food simulation makes me expect that I'm currently incapable of predicting future feelings in certain domains.

Comment by hazard on Hazard's Shortform Feed · 2019-01-13T01:04:11.419Z · score: 2 (1 votes) · LW · GW

True if you squint at it right: learning more about "how things work" is a journey that starts at "life is a simple and easy game with random outcomes" and ends at "life is a complex and thought-intensive game with deterministic outcomes".

Comment by hazard on Starting Meditation · 2019-01-10T14:40:16.887Z · score: 4 (2 votes) · LW · GW

When I was doing 6 months of TAPs, one of the big things I noticed was that when I disengaged from a TAP (or when the TAP was bubbling up to the top layer of conscious awareness but was dismissed before getting there, while there was still some awareness on it), there was a feeling of "Oh, I'm not actually convinced of the purpose and mission of this TAP right now". Once or twice I gained insight from exploring that lack of conviction, and found a hole in the logic of my TAP.

I notice a similar experience in meditation. On the mornings when I feel the most disengaged, and returning to the breath is the most "chore like", I notice that there's some way in which I'm no longer convinced that returning to the breath is "the best use of my time".

Comment by hazard on The 3 Books Technique for Learning a New Skilll · 2019-01-10T14:34:58.418Z · score: 2 (1 votes) · LW · GW

I think I can relate. You made me notice that there are two things I could point to when talking of "deep learning". One is "making a piece of knowledge or information a deeply ingrained, easily accessible piece of me" and the other is "not having any hole in my conceptual understanding; every piece of info is well connected in my knowledge graph and well motivated, and things make sense in a powerful way".

Comment by hazard on Sequence introduction: non-agent and multiagent models of mind · 2019-01-10T02:21:13.947Z · score: 7 (3 votes) · LW · GW

I'm very excited to see the rest of this! Last spring I wrote the first post for a sequence that had very similar intents. You posting this has given me a nudge to move forward with mine. Here's a brief outline of things I was going to look at (might be useful for you to further clarify to yourself the specific chunks of this topic you are trying to explore)

  • Give some computer architecture arguments for why it's hard to get something to be agent like, and why those arguments might apply to our minds.
  • Explore how social pressure to "act like an agent" and conform to the personhood interface makes it difficult to notice one's own non-agentyness.
  • For me (and I'd guess others) a lot of my intentional S2 frames for valuing people seems to put a lot of weight on how "agenty" someone is. I would like to dwell on a "rescuing the utility function" like move for agency.
Comment by hazard on The 3 Books Technique for Learning a New Skilll · 2019-01-10T02:02:05.818Z · score: 43 (25 votes) · LW · GW

I really like the "positive reviews should look like X, negative reviews should look like Y" information. I've never seen it before, and I expect it to actually be useful when looking for resources.

I'm confused by how "deep" and "surface" are being used in your first picture. From how the "What" and "How" books are described (and from the examples you give), I would have called "What" the deep resource and "How" the "surface level" resource. How are you thinking of it?

Comment by hazard on Learning-Intentions vs Doing-Intentions · 2019-01-06T17:54:52.157Z · score: 2 (1 votes) · LW · GW
"learned something along the way" is the wrong level

Agreed. When I used to think of "learning something along the way", it was a very passive sort of framing. I wasn't able to think of search/exploit as a very active, "fractal and recursive" activity.

Comment by hazard on Learning-Intentions vs Doing-Intentions · 2019-01-03T14:34:25.137Z · score: 5 (3 votes) · LW · GW

Though, like you said, I've heard of these ideas in startup land before, I found your post particularly lucid. Last spring when I tried a TAP a week, I had the learning-intention and also had a hard time articulating that.

I notice there's also an uncomfortable sort of suffering I experience when I approach a task/project/goal that is fundamentally a learning/explore objective but think of it as a doing/exploit one. It feels like me getting hyper-focused on the outcome/production, and if I don't get the one I want, I dismiss thoughts of "Well, you learned something along the way!" as grasping at straws/justification.

Comment by hazard on Akrasia is confusion about what you want · 2018-12-29T12:44:13.947Z · score: 6 (3 votes) · LW · GW

I skimmed the exercise since I couldn't think of a recent concrete example, but I really enjoyed this post! I like how you took something that I've seen tossed around for a while, "There's an important way in which akrasia isn't real", and expanded it fully. Previously, I've wanted to express this to a friend but didn't have a way of doing so that I thought would be useful; now I'll be referencing this post.

Explore/Exploit for Conversations

2018-11-15T04:11:30.372Z · score: 36 (12 votes)

Starting Meditation

2018-10-24T15:09:06.019Z · score: 22 (9 votes)

Thoughts on tackling blindspots

2018-09-27T01:06:53.283Z · score: 45 (13 votes)

Can our universe contain a perfect simulation of itself?

2018-05-20T02:08:41.843Z · score: 21 (5 votes)

Reducing Agents: When abstractions break

2018-03-31T00:03:16.763Z · score: 42 (11 votes)

Diffusing "I can't be that stupid"

2018-03-24T14:49:51.073Z · score: 55 (17 votes)

Request for "Tests" for the MIRI Research Guide

2018-03-13T23:22:43.874Z · score: 68 (18 votes)

Types of Confusion Experiences

2018-03-11T14:32:36.363Z · score: 31 (9 votes)

Hazard's Shortform Feed

2018-02-04T14:50:42.647Z · score: 31 (9 votes)

Explicit Expectations when Teaching

2018-02-04T14:12:09.903Z · score: 53 (17 votes)

TSR #10: Creative Processes

2018-01-17T03:05:18.903Z · score: 16 (4 votes)

No, Seriously. Just Try It: TAPs

2018-01-14T15:24:38.692Z · score: 42 (14 votes)

TSR #9: Hard Rules

2018-01-09T14:57:15.708Z · score: 32 (10 votes)

TSR #8 Operational Consistency

2018-01-03T02:11:32.274Z · score: 20 (8 votes)

TSR #7: Universal Principles

2017-12-27T01:54:39.974Z · score: 23 (8 votes)

TSR #6: Strength and Weakness

2017-12-19T22:23:57.473Z · score: 3 (3 votes)

TSR #5 The Nature of Operations

2017-12-12T23:37:06.066Z · score: 16 (5 votes)

Learning AI if you suck at math

2017-12-07T15:15:15.480Z · score: 10 (4 votes)

TSR #4 Value Producing Work

2017-12-06T02:44:27.822Z · score: 20 (8 votes)

TSR #3 Entrainment: Discussion

2017-12-01T16:46:35.718Z · score: 25 (9 votes)

Changing habits for open threads

2017-11-26T12:54:27.413Z · score: 9 (4 votes)

Increasing day to day conversational rationality

2017-11-16T21:18:37.424Z · score: 26 (10 votes)

Acknowledging Rationalist Angst

2017-11-06T05:26:45.505Z · score: 30 (12 votes)

Trope Dodging

2017-10-21T18:43:34.729Z · score: 4 (4 votes)