Comment by hazard on Declarative Mathematics · 2019-04-18T20:58:55.683Z · score: 4 (2 votes) · LW · GW
Users don’t need to know what’s going on under the hood; the algorithms and proofs generally “just work” without the user needing to worry about the details. The user’s job is to understand the language of the framework, the interface, and translate their own problems into that language.

Interesting, my gut reaction to this approach as applied to math was "ugh, that sounds horrible, I don't want to ignore the under-the-hood details, the whole point of math is understanding what's going on".

Yet when I consider the same approach to programming and computer sciency stuff, my reaction is "well duh, of course we're trying to find good abstractions to package away as many of the nitty-gritty details as possible, otherwise you can't make/build really big interesting stuff."

I'll think more about why these feel different.

Comment by hazard on Deconfusing Logical Counterfactuals · 2019-04-09T01:49:44.924Z · score: 4 (2 votes) · LW · GW

I really appreciate "Here's a collection of a lot of the work that has been done on this over the years, and important summaries" type posts. Thanks for writing this!

Comment by hazard on The Simple Solow Model of Software Engineering · 2019-04-09T01:44:37.945Z · score: 3 (2 votes) · LW · GW

I appreciate that you outlined what predictions are made from the Solow model applied to software. Do you know of any other models that might be applied?

Comment by hazard on Towards a Quieter Life · 2019-04-08T13:07:35.134Z · score: 3 (2 votes) · LW · GW

Yay intents for quiet! Hope you get something out of it.

Comment by hazard on Hazard's Shortform Feed · 2019-04-01T01:35:19.021Z · score: 4 (2 votes) · LW · GW

Reasons why I currently track or have tracked various metrics in my life:

1. A mindfulness tool. Taking the time to record and note some metric is itself the goal.

2. Have data to test a hypothesis about how some intervention affects my life (e.g., did waking up earlier give me less energy during the day?).

3. Have data that enables me to make better predictions about the future (mostly related to time tracking, "how long does X amount of work take?")

4. Understanding how [THE PAST] was different from [THE PRESENT] to help defeat the Deadly Demons of Doubt and Shitty Serpents of Should (a la Deliberate Once).

I have not always had these in mind when deciding to track a metric. Often I tracked because "that's wut productive people do, right?". When I keep these in mind, tracking gets more useful.

Comment by hazard on Hazard's Shortform Feed · 2019-04-01T01:27:35.292Z · score: 4 (2 votes) · LW · GW

The idea was less "individual humans are ontologically basic" and more: I've noticed that talking about broad groups of people has often been less useful than dropping down to talk about interactions I've had with individual people.

In writing the comment I was focusing more on what action I wanted to take (think about specific encounters with people when evaluating my impressions) and less on my ontological claims about what exists. I see how my lax opening sentence doesn't make that clear :)

Comment by hazard on Hazard's Shortform Feed · 2019-04-01T01:22:38.055Z · score: 7 (3 votes) · LW · GW

Me circa March 2018

"Should"s only make sense in a realm where you are divorced form yourself. Where you are bargaining with some other being that controls your body, and you are threatening it.

Update: This past week I've had an unusual amount of spontaneous introspective awareness of moments when I was feeling pulled by a should, especially one that came from comparing myself to others. I've also been meeting these thoughts with an "Oh interesting. I wonder why this made me feel a should?" as opposed to a standard "endorse or disavow" response.

Meta Thoughts: What do I know about "should"s that I didn't know in March 2018?

I'm more aware of how incredibly pervasive "should"s are in my thinking. Last Saturday alone I counted over 30 moments of feeling the negative tug of some "should".

I now see that even for things I consider cool, dope, and virtuous, I've been using "you should do this or else" to get myself to do them.

Since CFAR last fall I've gained a lot of metis on aligning myself, a task that I've previously trivialized or brought in "willpower" to conquer. Last year I was more inclined to go, "Well okay fine, I'm still saying I should do XYZ, but the part of me that is resisting that is actually just stupid and deserves to be coerced."

Comment by hazard on Starting Meditation · 2019-04-01T01:07:18.131Z · score: 4 (2 votes) · LW · GW

I've missed seven days of journaling in the last month plus (non-consecutive, though).

Thoughts: I've gotten some good insight from this time. Towards the end, it became more, "What are the important things that happened recently?" journaling.

I've put much less ritual-intent into this habit than with meditation. In the past week I changed my sleep schedule (I now sleep in till whenever instead of getting up with an alarm at 7am), which makes it slightly harder to ensure the sanctity of morning journaling, but I'm currently okay with that because sleeping more and getting up at my own pace has had a wonderful positive effect this past week (keeping a keen eye on whether that trend continues).

It feels vaguely important to not go into this journaling with an agenda. Journaling is more rewarding when I wait until the thing that catches my interest most works its way to the top of my mind.

Comment by hazard on Hazard's Shortform Feed · 2019-03-23T22:52:08.737Z · score: 4 (2 votes) · LW · GW

The general does not exist, there are only specifics.

If I have a thought in my head, "Texans like their guns", that thought got there from a finite number of specific interactions. Maybe I heard a joke about Texans. Maybe my family is from Texas. Maybe I hear a lot about it on the news.

"People don't like it when you cut them off mid sentence". Which people?

At a local meetup we do a thing called encounter groups, and one rule of encounter groups is "there is no 'the group', just individual people". Having conversations in that mode has been incredibly helpful to realize that, in fact, there is no "the group".

Comment by hazard on Hazard's Shortform Feed · 2019-03-23T22:47:03.207Z · score: 4 (2 votes) · LW · GW

(Less a reply and more just related)

I often think a sentence like, "I want to have a really big brain!". What would that actually look like?

  • Not experiencing fear or worry when encountering new math.
  • Really quick to determine what I'm most curious about.
  • Not having my head hurt when I'm thinking hard, and generally not feeling much "cognitive strain".
  • Be able to fill in the vague and general impressions with the concrete examples that originally created them.
  • Doing a hammers and nails scan when I encounter new ideas.
  • Having a clear, quickly accessible understanding of the "proof chains" of ideas, as well as the "motivation chains".
    • I don't need to know all the proofs or motivations, but I do have a clear sense of what I understand myself, and what I've outsourced.
  • Instead of feeling "generally confused" by things or just "not getting them", I always have a concrete "This doesn't make sense because BLANK" expression that allows me to move forward.

Comment by hazard on Hazard's Shortform Feed · 2019-03-21T02:15:05.297Z · score: 6 (3 votes) · LW · GW

What are the barriers to having really high "knowledge work output"?

I'm not capable of "being productive on arbitrary tasks". One winter break I made a plan to apply for all the small $100 essay scholarships people were always telling me no one applied for. After two days of sheer misery, I had to admit to myself that I wasn't able to be productive on a task that involved making up bullshit opinions about topics I didn't care about.

Conviction is important. From experiments with TAPs and a recent bout of meditation, it seems like when I bail on an intention, on some level I am no longer convinced the intention is a good idea/what I actually want to do. Strong conviction feels like confidence all the way up in the fact that this task/project is the right thing to spend your time on.

There's probably a lot in the vein of having good chemistry: sleep well, eat well, get exercise.

One of the more mysterious quantities seems to be "cognitive effort". Sometimes thinking hard feels like it hurts my brain. This post has a lot of advice in that regard.

I've previously hypothesized that a huge chunk of painful brain fog is the experience of thinking at a problem, but not actually engaging with it (similar to how Mark Forster has posited that the resistance one feels to a given task is proportional to how many times it has been rejected).

Having the rest of your life together and time boxing your work is insanely important for reducing the frequency with which your brain promotes "unrelated" thoughts to your consciousness (if there's important stuff that isn't getting done, and you haven't convinced yourself that it will be handled adequately, your mind's tendency is to keep it in a loop).

I've got a feeling that there's a large amount of gains in the 5-second level. I would be super interested in seeing anyone's thoughts or writings on the 5-second level of doing better work and avoiding cognitive fatigue.

Comment by hazard on Some thoughts after reading Artificial Intelligence: A Modern Approach · 2019-03-20T17:11:51.878Z · score: 13 (8 votes) · LW · GW

Yay self-study! Positive reinforcement!

Comment by hazard on Verifying vNM-rationality requires an ontology · 2019-03-13T03:52:13.877Z · score: 5 (3 votes) · LW · GW

I was just thinking about this earlier today while re-reading a similar point by Stuart Armstrong.

Comment by hazard on Hazard's Shortform Feed · 2019-03-12T18:18:34.198Z · score: 2 (1 votes) · LW · GW

AFI worry: A human-in-the-loop AI that only takes actions that get human approval (and whose expected outcomes have human approval) hits big problems when the context the AI is acting in is very different from the context where our values were trained.

Is there any way around this besides simulating people having their values re-organized given the new environment? Is this what CEV is about?

Comment by hazard on Hazard's Shortform Feed · 2019-03-12T18:12:00.491Z · score: 2 (1 votes) · LW · GW

Weirdness that comes from reflection:

In this frame, I can self-reflect on a given circuit and ask, "Does this circuit actually push me towards what I think is good?" When doing this, I'll be using some more meta/higher-order circuits (concepts I've built up over time about what a "good" brain looks like) but I'll also be using lower level circuits, and I might even end up using the evaluated circuit itself in this evaluation process.

Sometimes this reflection process will go smoothly. Sometimes it won't. But one takeaway/claim is that you have this complex, roundabout process for re-evaluating your values when some circuits begin to think that other circuits have diverged from "good".

Because of this ability to reflect and change, it seems correct to say that "I value things conditional on my environment" (where environment has a lot of flex, it could be as small as your work space, or as broad as "any existing human culture").

Example. Let's say there was literally no scarcity for survival goods (food, water, etc.). It seems like a HUGE chunk of my values and morals is built-up inferences and solutions to resource allocation problems. If resource scarcity were magically no longer a problem, much of my values would have lost their connection to reality. From what I've seen so far of my own self-reflection process, it seems likely that over time I would come to reorganize my values in such a post-scarcity world. I've also currently got no clue what that reorganization would look like.

Comment by hazard on Hazard's Shortform Feed · 2019-03-12T17:57:35.669Z · score: 2 (1 votes) · LW · GW

Current beliefs about how human value works: various thoughts and actions can produce a "reward" signal in the brain. I also have lots of predictive circuits that fire when they anticipate a "reward" signal is coming as a result of what just happened. The predictive circuits have been trained to use the patterns of my environment to predict when the "reward" signal is coming.

Getting an "actual reward" and a predictive circuit firing will both be experienced as something "good". Because of this, predictive circuits can not only track "actual reward" but also the activation of other predictive circuits. (So far this is basically "there's terminal and instrumental values, and they are experienced as roughly the same thing")

The predictive circuits are all doing some "learning process" to keep their firing correlated to what they're tracking. However, the "quality" of this learning can vary drastically. Some circuits are more "hardwired" than others, and less able to update when they begin to become uncorrelated from what they are tracking. Some are caught in interesting feedback loops with other circuits, such that you have to update multiple circuits simultaneously, or in a particular order.

Though everything that feels "good" feels good because at some point or another it was tracking the base "reward" signal, it won't always be a good idea to think of the "reward" signal as the thing you value.

Say you have a circuit that tracks a proxy of your base "reward". If something happens in your brain such that this circuit ceases to update, you basically value this proxy terminally.

Said another way, I don't have a nice clean ontological line between terminal values and instrumental values. The less updatable a predictive circuit, the more "terminal" the value it represents.
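
As a toy sketch of that last claim (illustrative only; the circuit structure and learning rates here are made up, not a model of real neural hardware), treat each circuit as an exponential-moving-average tracker of reward, where its learning rate is how "hardwired" it is:

```python
# Toy sketch: circuits track a reward proxy with different learning rates.
# A learning rate of 0 never updates -- a de facto terminal value.

def run(steps=1000):
    reward_from_proxy = 1.0  # the proxy starts out genuinely rewarding
    # circuit name -> [current value estimate, learning rate]
    circuits = {"flexible": [0.0, 0.1], "hardwired": [0.0, 0.001], "frozen": [1.0, 0.0]}
    for t in range(steps):
        if t == steps // 2:
            reward_from_proxy = 0.0  # the proxy decouples from base reward
        for name, (value, lr) in circuits.items():
            circuits[name][0] = value + lr * (reward_from_proxy - value)
    for name, (value, _) in circuits.items():
        print(f"{name:9s} still values the proxy at {value:.2f}")

run()  # flexible ~0.00, hardwired ~0.24, frozen 1.00
```

The "flexible" circuit stops valuing the proxy once it decouples; the "frozen" one keeps valuing it forever, which is what "basically valuing the proxy terminally" cashes out to here.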

Comment by hazard on Do we need a high-level programming language for AI and what it could be? · 2019-03-06T20:06:27.910Z · score: 8 (5 votes) · LW · GW

My thoughts are similar to Paul's comment.

You might be interested in the difference between high-level languages and Domain Specific Languages (DSLs). High-level languages are intended to be general-purpose languages that abstract away some of the nitty-gritty details of lower levels (C doesn't have to think about registers while assembly does; Python doesn't have to think about memory management while C does). DSLs are languages specially designed for certain problem domains. Stan and Nile are examples of DSLs for probabilistic programming and displaying graphics, respectively. You seem to be suggesting a DSL for AI as opposed to a generic higher-level language.

To generalize, it seems like DSLs and higher-level languages are useful for making something you already know how to do more intuitive/simple/straightforward. Making AGI and AFI are things we don't know how to do. We could likely make a DSL that would allow you to think more fluently about existing AI concepts, and it's possible that might allow you to reach various insights quicker. But you still have to do the work.
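
To make the distinction concrete, here's a minimal sketch (everything in it — `Model`, `normal`, the sampling-only "inference" — is invented for illustration, not any real library's API) of an embedded DSL: the user writes in the domain's own vocabulary while the plumbing is hidden underneath.

```python
import random

class Model:
    """Tiny embedded 'probabilistic DSL': collects named random draws.
    Under the hood it only forward-samples; a real DSL like Stan would
    hide an actual inference engine behind similar surface syntax."""
    def __init__(self):
        self.trace = {}

    def normal(self, name, mu, sigma):
        self.trace[name] = random.gauss(mu, sigma)
        return self.trace[name]

# The user states the generative story in domain terms...
m = Model()
mu = m.normal("mu", 0.0, 1.0)   # prior
x = m.normal("x", mu, 0.5)      # likelihood
print(m.trace)

# ...instead of hand-writing samplers each time. A general-purpose
# high-level language gives you functions and objects; a DSL gives you
# the domain's own nouns and verbs.
```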

Comment by hazard on mAIry's room: AI reasoning to solve philosophical problems · 2019-03-06T00:40:59.591Z · score: 7 (4 votes) · LW · GW

Wanted to note that I really enjoyed this and thought it was quite lucid. A great example of how thinking about possible detailed implementations of a mind helps you dodge a lot of the fudging that comes with various philosophical puzzles.

Comment by hazard on Starting Meditation · 2019-02-25T14:26:56.065Z · score: 5 (3 votes) · LW · GW

I'm going to put meditation on pause, but still keep my protected 30 min morning time. I'm going to swap in "Thinking, introspecting, and dwelling on how I relate to other people."

The decision comes from a few things:

  • How I relate to people is one of the most frequent subjects of mind-wandering during meditation.
  • I'm currently in a relationship that has made me a lot more aware of various habits of behavior and thinking that I have.
  • I feel well poised to make a lot of progress on that front.
  • When I notice I'm mind-wandering, I try to follow, "Why did my attention go from my breath to this topic?" I frequently find that I pretty deeply feel that spending time on relations would benefit me more than the meditation (a la this comment).

I'm going to give this a one-month trial period. Expect an update March 31, 2019

Comment by hazard on Hazard's Shortform Feed · 2019-02-22T19:56:26.056Z · score: 2 (1 votes) · LW · GW

Often I have an idea three times in various forms before it makes it to the territory of "well thought out idea that I'm actually acting upon and having good stuff come from it."

By default, I follow a pattern of "semi-randomly expose myself to lots of ideas, not worry a lot about screening for repetitive stuff, let the most salient ideas at any given moment float up to receive tid-bits of conscious thought, then forget about them till the next semi-random event triggers them being thought about."

I'd be interested if there was a better protocol for "This thing I've encountered seems extra important/interesting, let me dwell on it more and more intentionally integrate it into my thinking."

Comment by hazard on Starting Meditation · 2019-02-07T17:11:33.146Z · score: 6 (3 votes) · LW · GW

It's been 112 days on this habit.

5 days have been missed, none happened consecutively. "Don't skip twice" has been a good star to follow.

For the past month and a half I've felt a good deal more mind-wandering than in the first month and a half. This seemed to be related to not giving as much attention to why I am doing this, forgetting the exciting motivation, and treating it as "routine". This past week has seen a slight uptick in attention. We'll see what happens.

Comment by hazard on Rationality: What's the point? · 2019-02-04T03:25:22.887Z · score: 2 (1 votes) · LW · GW

Would you buy the claim you can "Be more right and get less wrong"? (asked because I feel like I'm pointing to the same thing as the first bullet, but the first bullet is not phrased super well)

On the question of "does understanding your mind make you fail less often": I notice that there are 3+ cases that immediately jump to mind that match "I didn't fail because I learned more about my mind". Do you think that in a lot of those cases I didn't fail for reasons other than understanding my mind, or do you expect that I'm racking up new, different failures as a result of understanding my mind more?

On post-rationality: I just now read a bit more, and my reaction was, "Wait, wasn't that a key piece of rationality in the first place?" I'm interested to see whether I've secretly always been of a post-rationalist persuasion, or whether I'm not seeing the main angle of post-rationality.

Comment by hazard on STRUCTURE: A Crash Course in Your Brain · 2019-02-03T23:51:46.224Z · score: 2 (1 votes) · LW · GW

Cool. There's nothing I currently want transferred. This was more a forward-thinking thought. I'll PM you if it comes up.

Comment by hazard on STRUCTURE: A Crash Course in Your Brain · 2019-02-03T16:58:49.213Z · score: 2 (1 votes) · LW · GW

Thanks! As of now, is there any tool for migrating comments from one post to another? I just revised my approach to this sequence in a way that I expect to reduce the need for that (more clearly separating structure and content posts), but I'd imagine it could be useful.

Comment by hazard on STRUCTURE: Reality and rational best practice · 2019-02-03T16:51:33.227Z · score: 2 (1 votes) · LW · GW

The 1st plan was that this and the previous posts were "outlines of the main ideas I want to cover in however many posts, to be written later". When there was enough detail of "here's a discrete, post-worthy idea I've decided I want to cover", I would then create a new post, which would be the fleshed-out outline of the idea. This would then be edited and revised into a normal, human-readable content post.

You made me notice that I didn't have a clear sense of how I wanted comments to work. If there's discussion of the ideas/content outlined in this post, would I want to / have to port that to the actual post I eventually make?

Current plan is to clearly differentiate "structure" and "content" posts. This is a structure post, and I'd like comments to be about, "What do you think of this chunk of idea space being an intro to rationality?". Content posts will be, "What do you think of this idea/content/how I wrote about it?"

Rationality: What's the point?

2019-02-03T16:34:33.457Z · score: 12 (5 votes)
Comment by hazard on Hazard's Shortform Feed · 2019-02-03T14:09:49.051Z · score: 6 (3 votes) · LW · GW

From Gwern's about page:

I personally believe that one should think Less Wrong and act Long Now, if you follow me.

Possibly my favorite catch-phrase ever :) What do I think is hiding there?

  • Think Less Wrong
    • Self-anthropology: "Why do you believe what you believe?"
    • Hugging the Query and not sinking into confused questions
    • Litany of Tarski
    • Notice your confusion - "Either the story is false or your model is wrong"
  • Act Long Now
    • Cultivate habits and practice routines that seem small / trivial on a day/week/month timeline, but will result in you being superhuman in 10 years.
    • Build abstractions while being acutely aware of where each one leaks, and have good reason to believe that the leak does not affect the most important work you are using the abstraction for.
    • What things trigger "Man, it sure would be useful if I had data on XYZ from the past 8 years"? Start tracking that.

STRUCTURE: Reality and rational best practice

2019-02-01T23:51:21.390Z · score: 6 (1 votes)

STRUCTURE: How the Social Affects your rationality

2019-02-01T23:35:43.511Z · score: 1 (3 votes)

STRUCTURE: A Crash Course in Your Brain

2019-02-01T23:17:23.872Z · score: 8 (5 votes)
Comment by hazard on Hazard's Shortform Feed · 2019-01-27T02:36:37.610Z · score: 4 (2 votes) · LW · GW

Concrete example: when I'm full, I'm generally unable to imagine meals in the future as being pleasurable, even if I imagine eating a food I know I like. I can still predict and expect that I'll enjoy having a burger for dinner tomorrow, but if I just stuffed myself on french fries, I just can't run a simulation of tomorrow where the "enjoying the food experience" sense is triggered.

I take this as evidence that my internal food-experience simulator has "code" that just asks, "If you ate XYZ right now, how would it feel?" and spits back the result.

This makes me wonder how many other mental systems I have that I think of as "Trying to imagine how I'd feel in the future" are really just predicting how I'd feel right now.

More specifically, the fact that I literally can't do a non-what-I'm-feeling-right-now food simulation makes me expect that I'm currently incapable of predicting future feelings in certain domains.

Comment by hazard on Hazard's Shortform Feed · 2019-01-13T01:04:11.419Z · score: 2 (1 votes) · LW · GW

True if you squint at it right: Learning more about "how things work" is a journey that starts at "Life is a simple and easy game with random outcomes" and ends in "Life is a complex and thought intensive game with deterministic outcomes"

Comment by hazard on Starting Meditation · 2019-01-10T14:40:16.887Z · score: 4 (2 votes) · LW · GW

When I was doing 6 months of TAPs, one of the big things I noticed was that when I disengaged from a TAP (or the TAP was bubbling up to the top layer of conscious awareness but was dismissed before getting to the top, while there was still some awareness on it), there was a feeling of, "Oh, I'm not actually convinced of the purpose and mission of this TAP right now". Once or twice I gained insight from exploring that lack of conviction, and found a hole in the logic of my TAP.

I notice a similar experience in meditation. On the mornings when I feel the most disengaged, and returning to the breath is the most "chore like", I notice that there's some way in which I'm no longer convinced that returning to the breath is "the best use of my time".

Comment by hazard on The 3 Books Technique for Learning a New Skilll · 2019-01-10T14:34:58.418Z · score: 2 (1 votes) · LW · GW

I think I can relate. You made me notice that there are two things I could point to when talking of "deep learning". One is "making a piece of knowledge or information a deeply ingrained, easily accessible piece of me" and the other is "not having any hole in my conceptual understanding; every piece of info is well connected in my knowledge graph and well motivated, and things make sense in a powerful way".

Comment by hazard on Sequence introduction: non-agent and multiagent models of mind · 2019-01-10T02:21:13.947Z · score: 7 (3 votes) · LW · GW

I'm very excited to see the rest of this! Last spring I wrote the first post for a sequence that had very similar intents. You posting this has given me a nudge to move forward with mine. Here's a brief outline of things I was going to look at (might be useful for you to further clarify to yourself the specific chunks of this topic you are trying to explore)

  • Give some computer architecture arguments for why it's hard to get something to be agent like, and why those arguments might apply to our minds.
  • Explore how social pressure to "act like an agent" and conform to the person-hood interface makes it difficult to notice one's own non-agentyness.
  • For me (and I'd guess others) a lot of my intentional S2 frames for valuing people seem to put a lot of weight on how "agenty" someone is. I would like to dwell on a "rescuing the utility function"-like move for agency.

Comment by hazard on The 3 Books Technique for Learning a New Skilll · 2019-01-10T02:02:05.818Z · score: 43 (25 votes) · LW · GW

I really like the "positive reviews should look like X, negative reviews should look like Y" information. I've never seen it before, and I expect it to actually be useful when looking for resources.

I'm confused by how "deep" and "surface" are being used in your first picture. From how the "What" and "How" books are described (and from the examples you give), I would have called "What" the deep resource and "How" the surface-level resource. How are you thinking of it?

Comment by hazard on Learning-Intentions vs Doing-Intentions · 2019-01-06T17:54:52.157Z · score: 2 (1 votes) · LW · GW
"learned something along the way" is the wrong level

Agreed. When I used to think of "learning something along the way", it was a very passive sort of framing. I wasn't able to think of search/exploit as a very active, "fractal and recursive" activity.

Comment by hazard on Learning-Intentions vs Doing-Intentions · 2019-01-03T14:34:25.137Z · score: 5 (3 votes) · LW · GW

Though, like you said, I've heard of these ideas in startup land before, I found your post particularly lucid. Last spring when I tried a TAP a week, I had the learning-intention and also had a hard time articulating that.

I notice there's also an uncomfortable sort of suffering I experience when I approach a task/project/goal that is fundamentally a learning/explore objective, but I think of it as a doing/exploit one. It feels like me getting hyper-focused on the outcome/production, and if I don't get the one I want, I dismiss thoughts of "Well, you learned something along the way!" as grasping at straws/justification.

Comment by hazard on Akrasia is confusion about what you want · 2018-12-29T12:44:13.947Z · score: 6 (3 votes) · LW · GW

I skimmed the exercise since I couldn't think of a recent concrete example, but I really enjoyed this post! I like how you took something that I've seen tossed around for a while, "There's an important way in which akrasia isn't real", and expanded it fully. Previously, I've wanted to express this to a friend but didn't have a way of doing so that I thought would be useful, but now I'll be referencing this post.

Comment by hazard on Open and Welcome Thread December 2018 · 2018-12-23T15:22:44.378Z · score: 12 (5 votes) · LW · GW

I vaguely remember a comment, possibly from a post in the last year or two, where someone said something like, "The highest return, under appreciated, life improvements you could make right now are fixing the relationships with your family and those close to you [... some other stuff...]". Does anyone remember this comment and or have a link to it?

Comment by hazard on Hazard's Shortform Feed · 2018-12-23T14:41:20.595Z · score: 4 (2 votes) · LW · GW

The slogan version of some thoughts I've been having lately are in the vein of "Hurry is the root of all evil". Thinking in terms of code. I've been working in a new dev environment recently and have felt the siren song of, "Copy the code in the tutorial. Just import all the packages they tell you to. Don't sweat the details man, just go with it. Just get it running." All that as opposed to "Learn what the different abstractions are grounded in, figure out what tools do what, figure out exactly what I need, and use whatever is necessary to accomplish it."

When I ping myself about why the former has a pull, I come up with 1) a tiny fear of not being capable of understanding the fine details, and 2) a tiny fear that if understanding is possible, it will take a lot of time and WE'RE RUNNING OUT OF TIME!

Which is interesting, because this is just a side project that I'm doing for fun over winter break, which is specifically designed to get me to learn more.

Comment by hazard on Hazard's Shortform Feed · 2018-12-22T22:24:25.547Z · score: 4 (2 votes) · LW · GW

Had a similar style of bug while programming today. I caught it much faster, though I can't say whether that can be attributed to previously identifying this pattern. But I did think of the previous bug as soon as I made the mental leap to figure out what was wrong this time.

Comment by hazard on What self-help has helped you? · 2018-12-21T19:23:13.609Z · score: 8 (5 votes) · LW · GW

Time Blocking + No Timepieces

(technically two different techniques, but they play into each other so well)

Context: I'm a student, so all of my "work" is school stuff or fun side projects. Also, I just need to get things done by my deadlines, and have no "I'm at the place of work for X hours and need to appear active for all of them."

Time blocking is just going "I'm going to work on X for Y minutes/hours" and then doing it. If you get done early, great, you can stop. If you had a task in mind to complete, but didn't finish it in Y minutes, that's too bad, you have to stop working.

To be more flexible, I sometimes set up time blocks with conditionals. "I've got time in my schedule for a 2 hour morning block, and one after lunch. I expect X to only take 2 hours, but if it takes longer I've got the back up block." I'm cool with changing plans, but a key idea is to not end up "just working forever".

Time Blocking feels distinct from Pomodoros, which I know someone else mentioned. You can easily turn a Pomodoro

No Timepieces is this: when doing work blocks, I strictly use timers instead of watching the clock. This actually extends beyond time blocks. I've gotten rid of my watch and the clock on my computer. I try to make decisions

I do this because I noticed that it becomes much easier to lose steam if I realize that I'm 5-15 min away from the end of a time block. I also do it because I noticed that whenever I started to run up against hard problems, the very first distraction I would engage in would be to look at the clock (probably because it only required a head turn, and it doesn't feel like a distraction). When I was still habituating to No Timepieces, I would often look at my empty wrist, be surprised, and then realize, "Oh, yeah, I'm encountering a hard problem and was trying to distract myself." Nowadays I don't look at my wrist, but there is still a mental loop of "I wonder what time it is?", and it helps me notice distraction.

Hypothesized Mechanisms:

Time blocking seems to mostly have the effect of making my commitments very concrete and clear. I'm not sitting down to "work for a while", I'm sitting down to work on X for 3 hours. In making a time block, I've already freed the time, so even if 20 minutes in I feel "Oh shit, I'm going to make very little progress" that's okay, because I've already checked that it's okay to spend 3 hours banging my head against something.

No Timepieces has given a lot of insight into how I distract myself. I've come to believe that almost all instances of me checking the time are some form of me trying to escape from the reality I'm in. "Maybe if I look at my watch, I'll see that it's time to stop working, and then I can stop!" or similarly it provides the escape hatch of, "hm, it's 2:30pm. Wasn't something happening at 4? Dinner? No, meeting Michael. Ooh, Michael's in town, I wonder what he's been working on?"
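
For what it's worth, the timer half of this takes almost no tooling. A minimal sketch (the 90-minute default and the terminal-bell alert are my own arbitrary choices, not anything from the original setup):

```python
# Silent countdown for a time block: nothing to glance at until it ends.
import sys
import time

minutes = float(sys.argv[1]) if len(sys.argv) > 1 else 90  # block length
time.sleep(minutes * 60)       # stays silent: no visible clock to check
print("\a\a\a" + "Time block over -- stop working.")  # bell + message
```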

Comment by hazard on Systems Engineering and the META Program · 2018-12-21T13:56:06.384Z · score: 5 (3 votes) · LW · GW

Thank you for posting this! I'm excited to go through the links later this weekend.

Comment by hazard on Starting Meditation · 2018-12-13T15:41:14.955Z · score: 4 (2 votes) · LW · GW

12-13-18 Update:

Have kept a continuous streak. On a handful of days, something happened in the morning and I meditated in the evening, which was a good bit harder.

I started to use the Seinfeld calendar streak idea (replacing "X"s with green arrows) and it's been surprisingly pleasant. I get way more joy than anticipated from being able to put a green arrow on the wall every day, and from seeing a long chain of green arrows on said wall. One day over Thanksgiving, morning meditation didn't work out, and I was going to drop it that day (it felt okay on a principled level, since Thanksgiving was when I was originally intending to re-calibrate my intentions), but as soon as I imagined my calendar not having those green arrows I felt a huge emotional shift. Also interesting: it didn't feel like a guilt trip. It felt more like, "I can't allow this to happen, because then I don't get my amazingly beautiful green arrows and there will be an ugly hole on the calendar, and that's just not how the world is going to be if I have anything to say about it".

The only update on meditation "progress" is that I feel little to no resistance to spending time meditating. There's no ugh field around "having" to do it when I wake up; when I sit down I'm mostly excited, and when it's getting to the end of the 30 minutes I'm rarely restless / "just waiting for it to be over". Mind-wandering vs breath-focus time doesn't seem to have changed much.

I still haven't read all the way through The Mind Illuminated. That might happen over winter break, but it's not a super high priority.

I think my new intention is to continue meditating daily into the foreseeable future. The shape of my time is regular enough that I don't expect this to be a huge challenge, and since I've gotten more comfortable with spending time meditating I'd be very surprised by some internal jumping-ship.

If things ever get more chaotic, on Malcolm Ocean's recommendation I think I'm going to try the don't-skip-twice approach (Raemon also mentions it in Sunset at Noon, which is still a great post a year later).

Comment by hazard on Playing Politics · 2018-12-06T01:03:18.763Z · score: 5 (4 votes) · LW · GW

This post and some other LW post (I think a Zvi "problems with the Bay" post) are the only times I've encountered a story of someone showing up several hours late to something. I really don't want to trivialize the problem of general group coordination, but stories like that (which to me feel like HUGE problems) make me lean towards the belief that there's some basic "how to group" education that is missing.

To anyone who is part of multiple dissimilar social groups, I would be interested in hearing if you've noticed anything like, "Oh, my XYZ group always has coordination problems 1), 2), and 3), but my ABC group has almost none."

Explore/Exploit for Conversations

2018-11-15T04:11:30.372Z · score: 36 (12 votes)
Comment by hazard on Productivity: Instrumental Rationality · 2018-11-11T20:42:48.521Z · score: 2 (1 votes) · LW · GW

For the past one and a half years I've done time tracking of work for school (and less rigorous tracking of time towards other pursuits). This was also a time period where I was training myself to be better at deep work.

My main intent for time tracking was to know how long work actually took so I'd get better at making future time estimates. Also during this time, each week I would time block out when I was going to work on what for the entire week, and I had something like a 95% success rate with that. (Deviations came mostly from a mix of "I'm not in a good physical state because of an unusual reason for staying up late" (which was fixed by having enough slack to just take a nap) or a few "lost purpose / this work sucks" slumps (dealt with by realizing what felt dumb about the work and letting myself feel okay with doing the minimum).)

I'd ask how you're framing "productive hours". I generally only get 3-5 deep work hours in a day. I've yet to try (though now I'm curious), but I'd be confident that I couldn't do 12 deep work hours a day. Though I could definitely do "12 hours a day that I reflectively endorse and that were all aimed towards things I care about".

What sort of "12 productive hours" are you shooting for?

Comment by hazard on Hazard's Shortform Feed · 2018-11-04T13:37:47.843Z · score: 6 (3 votes) · LW · GW

Claim: There's a headspace you can be in where you don't have a bucket for explore/babble. If you are entertaining an idea or working through a plan, it must be because you already expect it to work/be interesting. If your prune filter is also growing in strength and quality, then you will be abandoning ideas and plans as soon as you see any reasonable indicator that they won't work.

Missing that bucket and enhancing your prune filter might feel like you are merely growing up, getting wiser, or maybe more cynical. This will be really strongly felt if the previous phase in your life involved you diving into lots of projects only to realize some time and money later that they won't work out. The mental motion of, "Aha! This plan leaves ABC completely unspecified and I'd probably fall apart when reaching that roadblock," will be accompanied by a, "Man, I'm so glad I noticed that, otherwise I would have wasted a whole day/week/month. Go prune!".

Until you get a new bucket for explore, any attempts to get you to "think big" and "get creative" and "let it all out in a brainstorm" will feel like attacks on your valuable time. Somehow, you need to get a strong felt sense of explore being its own, completely viable option, which in no way obliges you to act on what you've explored.

Next thoughts: What is needed for me to deeply feel explore as an option, and what things might be stopping me from doing so? *tk*

Comment by hazard on Hazard's Shortform Feed · 2018-11-01T15:40:03.282Z · score: 2 (1 votes) · LW · GW

Idea that I'm going to use in these short form posts: for ideas/things/threads that I don't feel are "resolved", I'm going to write "*tk*" by the most relevant sentence for easy search later. (I vaguely remember Tim Ferriss talking about using "tk" as a substitute for "do research and put the real numbers in", since "tk" is not a letter pair that shows up much in English words.)

Comment by hazard on Hazard's Shortform Feed · 2018-11-01T15:36:35.055Z · score: 2 (1 votes) · LW · GW

Ahh, the "meta-thoughts" idea in seems like a useful thing to apply if/when this happens again.

(Which begs the question: when I wrote the above comment, why didn't I have the meta-thought that I did in the linked comment? (I don't feel up to thinking about that in this moment.)) *tk*

Comment by hazard on Hazard's Shortform Feed · 2018-11-01T00:07:26.021Z · score: 2 (1 votes) · LW · GW

Thinking about belief in belief.

You can have things called "beliefs" which are of type action. "Having" this belief is actually your decision to take certain actions in certain scenarios. You can also have things called "beliefs" which are of type probability, and are part of your deep felt sense of what is and isn't likely/true.

A belief-action that has a high EV (and feels "good") will probably feel the same as a belief-probability that is close to 1.

Take a given sentence/proposition. You can put a high EV on the belief-action version of that sentence (mayhaps it has important consequences for your social groups) while putting a low probability on the belief-probability version of the sentence.

Meta Thoughts: The above idea is not fundamentally different from belief in belief or crony beliefs, both of which I've read a year or more ago. What I just wrote felt like a genuine insight. What do I think I understand now that I don't think I understood then?

I think that recently (past two months, since CFAR) I've had better luck with going into "Super-truth" mode, looking into my own soul and asking, "Do you actually believe this?"

Now, I've got many more data points of, "Here's a thing that I totally thought that I believed(probability) but actually I believed(action)."

Maybe the insight is that it's easy to get mixed up between belief-prob and belief-action because the felt sense of probability and EV are very very similar, and genuinely non-trivial to peel apart.

^Yeah, that feels like it. I think previously I thought, "Oh cool, now that I know that belief-action and belief-prob are different things, I just won't do belief-action". Now, I believe that you need to teach yourself to feel the difference between them; otherwise you will continue to mistake belief-actions for belief-probs.

Meta-Meta-Thought: The meta-thoughts was super useful to do, and I think I'll do it more often, given that I often have a sense of, "Hmmmm, isn't this basically [insert post in The Sequences here] re-phrased?"

Comment by hazard on Hazard's Shortform Feed · 2018-10-26T20:44:02.692Z · score: 6 (3 votes) · LW · GW

You can have infinite aspirations, but infinite plans are often out to get you.

When you make new plans, run more creative "what if?" inner-sims, sprinkle in more exploit, and ensure you have bounded loss if things go south.

When you feel like quitting, realize you have the opportunity to learn and update by asking, "What's different between now and when I first made this plan?"

Make your confidence in your plans explicit, so if you fail you can be surprised instead of disappointed.

If the thought of giving up feels terrible, you might need to learn how to lose.

And of course, if you can't afford to lose,

![](https://i.imgur.com/80acRCF.jpg)

Comment by hazard on Hazard's Shortform Feed · 2018-10-25T18:47:37.442Z · score: 4 (2 votes) · LW · GW

The fact that utility and probability can be transformed while maintaining the same decisions matches what the algorithm feels like from the inside. When thinking about actions, I often just feel like a potential action is "bad", and it takes effort to tease apart whether I don't think the outcome is super valuable, or whether there's a good outcome that I don't think is likely.
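
One concrete version of that invariance (a minimal sketch; the actions, probabilities, and utilities are made-up numbers, and the original point may be broader than affine rescaling):

```python
# Expected-utility decisions are unchanged by positive affine
# transformations of utility, so "how bad this feels" underdetermines
# whether the probabilities or the utilities are doing the work.

actions = {
    "A": [(0.9, 10), (0.1, -5)],    # (probability, utility) pairs
    "B": [(0.5, 20), (0.5, -20)],
}

def best(acts, f=lambda u: u):
    ev = {a: sum(p * f(u) for p, u in outs) for a, outs in acts.items()}
    return max(ev, key=ev.get), ev

print(best(actions))                         # ('A', {'A': 8.5, 'B': 0.0})
print(best(actions, f=lambda u: 3 * u + 7))  # still 'A' under u -> 3u + 7
```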

Comment by hazard on Starting Meditation · 2018-10-25T16:40:41.098Z · score: 4 (2 votes) · LW · GW

Was thinking about Getting Got in terms of goals, and realized that my target was implicitly, "Meditate every day forever", which makes anything less (even if it's a Good Idea) Giving Up (which feels Bad). So my new intention is that until Thanksgiving, daily meditation is a "The world must explode before I give up" target, at which point I'll reconsider what my intention is.

Starting Meditation

2018-10-24T15:09:06.019Z · score: 22 (9 votes)

Thoughts on tackling blindspots

2018-09-27T01:06:53.283Z · score: 45 (13 votes)

Can our universe contain a perfect simulation of itself?

2018-05-20T02:08:41.843Z · score: 21 (5 votes)

Reducing Agents: When abstractions break

2018-03-31T00:03:16.763Z · score: 42 (11 votes)

Diffusing "I can't be that stupid"

2018-03-24T14:49:51.073Z · score: 55 (17 votes)

Request for "Tests" for the MIRI Research Guide

2018-03-13T23:22:43.874Z · score: 68 (18 votes)

Types of Confusion Experiences

2018-03-11T14:32:36.363Z · score: 31 (9 votes)

Hazard's Shortform Feed

2018-02-04T14:50:42.647Z · score: 31 (9 votes)

Explicit Expectations when Teaching

2018-02-04T14:12:09.903Z · score: 53 (17 votes)

TSR #10: Creative Processes

2018-01-17T03:05:18.903Z · score: 16 (4 votes)

No, Seriously. Just Try It: TAPs

2018-01-14T15:24:38.692Z · score: 42 (14 votes)

TSR #9: Hard Rules

2018-01-09T14:57:15.708Z · score: 32 (10 votes)

TSR #8 Operational Consistency

2018-01-03T02:11:32.274Z · score: 20 (8 votes)

TSR #7: Universal Principles

2017-12-27T01:54:39.974Z · score: 23 (8 votes)

TSR #6: Strength and Weakness

2017-12-19T22:23:57.473Z · score: 3 (3 votes)

TSR #5 The Nature of Operations

2017-12-12T23:37:06.066Z · score: 16 (5 votes)

Learning AI if you suck at math

2017-12-07T15:15:15.480Z · score: 10 (4 votes)

TSR #4 Value Producing Work

2017-12-06T02:44:27.822Z · score: 20 (8 votes)

TSR #3 Entrainment: Discussion

2017-12-01T16:46:35.718Z · score: 25 (9 votes)

Changing habits for open threads

2017-11-26T12:54:27.413Z · score: 9 (4 votes)

Increasing day to day conversational rationality

2017-11-16T21:18:37.424Z · score: 26 (10 votes)

Acknowledging Rationalist Angst

2017-11-06T05:26:45.505Z · score: 30 (12 votes)

Trope Dodging

2017-10-21T18:43:34.729Z · score: 4 (4 votes)