Comment by mr-hire on Rest Days vs Recovery Days · 2019-03-21T12:28:35.478Z · score: 1 (1 votes) · LW · GW

I assume this is where a "gut-check" came from.

Comment by mr-hire on Active Curiosity vs Open Curiosity · 2019-03-17T20:42:51.354Z · score: 7 (4 votes) · LW · GW

I'm reminded of Malcolm Ocean's article "questions are not just for asking." Open curiosity feels more like holding a question, while active curiosity is asking it. He also links to my favorite webcomic ever, which seems to advocate a sort of open curiosity.

Comment by mr-hire on Rule Thinkers In, Not Out · 2019-03-17T11:50:57.774Z · score: 1 (1 votes) · LW · GW

Off topic but... Is there something I don't know about Einstein's preferred pronouns? Did he prefer ey and eir over he and him?

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-15T19:08:49.814Z · score: 1 (1 votes) · LW · GW

I am familiar with derivatives. I don't remember the properties of logarithms, but I half remember the base change one :).
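
For reference, a quick reminder of the change-of-base identity being half-remembered here (standard math, not anything from the original post):

```latex
\log_b x = \frac{\log_k x}{\log_k b}
\qquad \text{e.g.} \qquad
\log_2 x = \frac{\ln x}{\ln 2}
```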

Comment by mr-hire on Active Curiosity vs Open Curiosity · 2019-03-15T11:55:24.322Z · score: 4 (3 votes) · LW · GW

Babble vs. prune.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-15T10:08:51.948Z · score: 1 (1 votes) · LW · GW

I'm not sure if this is the way I would think of it, but I can kind of see it. I think of them more as different responses to the same sorts of stressors.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-15T09:56:58.256Z · score: 1 (1 votes) · LW · GW

After having someone else on the EA forum also point me to the data on commodities, I'm now updating the post.

Comment by mr-hire on A Neglected Cause: Altruism Cultivation Practices · 2019-03-14T20:41:29.081Z · score: 3 (2 votes) · LW · GW

I was at a talk at the EA hotel that claimed there's evidence that a specific type of compassion meditation for 30 minutes a day for a few weeks has large effect sizes on compassion. I would be surprised, however, if this caused people to work on large global problems. I wouldn't be surprised if the combination of interventions that improve compassion and interventions that improve rationality caused more people to work on large global problems.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-14T18:48:29.549Z · score: 1 (1 votes) · LW · GW

> How my own driving skill differs from the average person feels to me a straightforward known unknown.

I didn't think of a model where this mattered. I was thinking more of a model like "number of mistakes goes up linearly with alcohol consumption" than "number of mistakes gets multiplied by alcohol consumption". If the latter, then this becomes an opaque risk (that can be measured by measuring your number of mistakes in a given time period).
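
To make the contrast concrete, here is a minimal sketch of the two toy models I have in mind; the numbers and functional forms are made up for illustration, not taken from the post:

```python
# Two toy models of how driving mistakes scale with alcohol.
# All parameters are hypothetical, chosen only to show the two shapes.

BASE_MISTAKES = 2.0       # mistakes per hour of driving while sober
SLOPE = 1.5               # extra mistakes per drink (additive model)
FACTOR_PER_DRINK = 1.8    # growth factor per drink (multiplicative model)

def mistakes_additive(drinks: float) -> float:
    """Mistakes go up linearly with alcohol consumption."""
    return BASE_MISTAKES + SLOPE * drinks

def mistakes_multiplicative(drinks: float) -> float:
    """Mistakes get multiplied (compounded) by alcohol consumption."""
    return BASE_MISTAKES * FACTOR_PER_DRINK ** drinks

for d in range(5):
    print(d, mistakes_additive(d), round(mistakes_multiplicative(d), 2))
```

In the additive model your sober baseline washes out after a few drinks; in the multiplicative model it matters a lot, which is what turns the risk opaque and worth measuring (e.g. by counting your own mistakes over a given period).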

> For a business that sells crops it's reasonable to buy options to protect against risk that come from the uncertainty about future prices.

Agreed. It also seems reasonable, when selecting what commodity to sell, to do a straight up expected value calculation based on historical data and choose the one that has the highest expected value. When thinking about it, perhaps there are "semi-transparent risks" that are not that dynamic or adversarial but do have black swans, and that should be its own category above transparent risks, under which commodities and utilities would go. However, I think the better way to handle this is to treat the chance of a black swan as model uncertainty that has Knightian risk, and otherwise treat the investment as transparent based on historical data.
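
As a minimal sketch of the "straight up expected value calculation based on historical data" step described above (the commodity names and return figures below are placeholders, not real data):

```python
# Choose the commodity whose historical returns have the highest naive
# expected value. All numbers are made up purely for illustration.

historical_returns = {
    "wheat":  [0.04, -0.02, 0.06, 0.03],
    "corn":   [0.08, -0.10, 0.12, 0.02],
    "coffee": [0.02,  0.01, 0.03, 0.02],
}

def expected_value(returns):
    """Naive expected value: the mean of past returns."""
    return sum(returns) / len(returns)

best = max(historical_returns, key=lambda c: expected_value(historical_returns[c]))
print(best, round(expected_value(historical_returns[best]), 4))
```

The black-swan worry is exactly what this naive mean leaves out, which is why it gets treated separately above as Knightian model uncertainty.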

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-14T18:29:29.111Z · score: 2 (2 votes) · LW · GW

Sort of both. Both Optionality and the Pilot-in-the-Plane principle are like "guiding principles" of anti-fragility and effectuation, from which the subsequent principles fall out. However, they're also good principles in their own right and subsets of the broader concept. It might be that I should change the picture to reflect the second thing instead of the first, to prevent confusions like this one.

A good exercise to see if you grok anti-fragility or effectuation is to go through each principle and explain how it follows from either Optionality or the Pilot-in-the-Plane principle, respectively.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-14T18:24:53.474Z · score: 1 (1 votes) · LW · GW

Thanks! I do get the purpose/idea behind the Kelly criterion, but I don't get how to actually do the math, nor how to intuitively think about it when making decisions the way I intuitively think about expected value.
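
For what it's worth, the math for the simplest binary-bet case is short; this is the textbook Kelly formula rather than anything specific to the post:

```python
def kelly_fraction(p: float, b: float) -> float:
    """
    Fraction of bankroll to stake on a binary bet.
    p: probability of winning
    b: net odds on a win (stake 1, win b)
    Textbook Kelly: f* = (b*p - (1 - p)) / b
    """
    return (b * p - (1 - p)) / b

# 60% chance of winning an even-money bet (b = 1): stake 20% of bankroll.
print(kelly_fraction(0.6, 1.0))   # 0.2

# A negative result means the bet has no edge: stake nothing.
print(kelly_fraction(0.4, 1.0))   # -0.2
```

The intuition relative to plain expected value: Kelly maximizes the expected growth rate of your bankroll (expected log wealth) rather than the expected payoff of a single bet, so it shrinks bet sizes enough to avoid ruin from a string of losses.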

Comment by mr-hire on You Have About Five Words · 2019-03-14T13:05:10.401Z · score: 1 (1 votes) · LW · GW

I didn't make the leap from bits of information to feedback loops, but it makes intuitive sense. Transmitting information in compressed form, by giving people the tools to figure out the rest themselves, seems useful.

Comment by mr-hire on Understanding information cascades · 2019-03-14T10:00:46.372Z · score: 10 (5 votes) · LW · GW

I also have this visceral feeling. It feels like a "subquestions" feature could fix both these issues.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-13T12:10:48.892Z · score: 3 (3 votes) · LW · GW

That claim is something that often seems to be true, but it's one of the things I'm unsure of as a general rule. I do know that in practice, when I try to mitigate risk in my own projects and I think of anti-fragile and effectuative strategies, they tend to be at odds with each other (this is true of both the "0 to 1 Companies" and "AGI Risk" examples below).

The difference between hormesis and the lemonade principle is one of mindset.

In general, the anti-fragile mindset is "you don't get to choose the game, but you can make yourself stronger according to the rules." Hormesis from that mindset is "Given the rules of this game, how can I create a policy that tends to make me stronger against the different types of risks?"

The effectuative mindset is "rig the game, then play it." From that perspective, the lemonade principle looks more like "Given that I failed to rig this game, how can I use the information I just acquired to rig a new game?"

You're a farmer of a commodity and there's an unexpected drought. The hormetic mindset is "store a bit more water in the future" (and do this every time there's a drought). The lemonade mindset is "start a drought insurance company that pays out in water."

Comment by mr-hire on What Vibing Feels Like · 2019-03-12T22:08:27.348Z · score: 1 (1 votes) · LW · GW

Looking forward to this. Feel free to send me an invite to look over the google doc.

Comment by mr-hire on Plans are Recursive & Why This is Important · 2019-03-12T17:17:55.777Z · score: 4 (3 votes) · LW · GW

Really enjoyed this. I've found myself using this concept a few times in my thoughts just the past couple days since I read this.

Comment by mr-hire on What Vibing Feels Like · 2019-03-12T16:43:27.929Z · score: 3 (2 votes) · LW · GW

I mostly agree with this. If rationality means "systematized winning" then I'm comfortable including vibing in it, but if it means something more specific then I wouldn't include this in rationality. However, I still think it belongs on LessWrong, which is more about creating common knowledge to allow for systematized winning.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-12T14:55:23.637Z · score: 2 (2 votes) · LW · GW

Yes, I think I have different intuitions than Taleb here. When you think about risk in terms of the strategies you use to deal with it, it doesn't make sense to use, for instance, anti-fragility to deal with drunk driving on a personal level. It might make sense to use anti-fragility in general for risks of death, but the inputs for your anti-fragile decision should basically take the statistics for drunk driving at face value. I think it's pretty similar to a lottery ticket in that 99% of the risk is transparent, and the remaining small amount is model uncertainty due to unknown unknowns (maybe someone will rig the lottery). The ludic fallacy in that sense applies to every risk, because there's always some small amount of model uncertainty (maybe a malicious demon is confusing me).

One way to think about this is that your base risk is transparent and your model uncertainty is Knightian - this is a sensible way to approach all transparent risks, and it's part of the justification for the barbell strategy.

Comment by mr-hire on How to Understand and Mitigate Risk · 2019-03-12T12:32:01.267Z · score: 3 (3 votes) · LW · GW

I would say almost all global catastrophic risks would be classified as Knightian Risk. An exception might be something like an asteroid strike, which would be more opaque.

Edit: changed meteor to asteroid.

Comment by mr-hire on What Vibing Feels Like · 2019-03-12T11:53:37.121Z · score: 3 (2 votes) · LW · GW

Yes, this is very related to the benefits of what vibing is. I think "communication with emotional flow" is as close to a succinct description of vibing as I've gotten. By respecting the emotional energy in the room, you can be honest without breaking the vibe of flow.

On a more meta note, I really appreciate that all of your comments on my posts seem to make an effort to model the norms I put in my commenting guidelines. I don't know if it's intentional or not but it is appreciated.

How to Understand and Mitigate Risk

2019-03-12T10:14:19.873Z · score: 47 (13 votes)
Comment by mr-hire on What Vibing Feels Like · 2019-03-12T09:27:21.335Z · score: 1 (1 votes) · LW · GW

One thing that's important about this post is that it's meant to evoke a particular set of felt senses more than be literally accurate. For instance, under certain definitions of "thinking", the only way to avoid it would be to be unconscious or dead, which I'm obviously not trying to state here. This post was merely trying to evoke a certain phenomenological state, without delving into why that state was useful.

There are a few reasons I think that vibing is useful for rationality, besides being an affordance that can allow you to enjoy communication in a new way (which I think would be reason enough). Note that many of these depend on models that I haven't written up yet, so I don't have the means to show why I believe them.

  • Vibing allows groups to communicate in a way that minimizes rationalizations and defensiveness in the discussion
  • Vibing allows a sort of proto "Looking" where you can see the world and your own psychology more for what it is as a group
  • Vibing allows groups to quickly see if a person will fit into their culture
  • Vibing gives you a better sense of the values people are actually optimizing for, including your own

I think that this sort of list can be dangerous because consciously trying to achieve these things can prevent you from vibing, in a similar way that consciously trying to practice reading people can harm circling, even though it's a benefit of circling.

Comment by mr-hire on What Vibing Feels Like · 2019-03-12T08:40:44.884Z · score: 1 (1 votes) · LW · GW

It's tough to engage in the sort of rationality that the Berkeley rationality community likes to engage in.

A good analogy is circling, which is a style of communication that the Berkeley rationality community HAS picked up. You might say "it's tough to engage in rationality without logic being a central component of communication," but I think a central thing that makes circling so good at getting at important truths is that it allows and encourages you to engage in communication where you don't feel the need to justify everything you say logically. (Someone will disagree with me here; just substitute "logic" for some other thing that exists in normal rationality discourse but not in circling.)

Vibing is importantly different from circling, but it has this same quality of getting at a "truthiness" that typical rationality discussion can't get at, by throwing away some of the tenets of traditional rationality communication. I don't think you should necessarily be talking about rationality when vibing, but I do think that you're engaging in rationality when vibing, in a similar manner to circling.

Comment by mr-hire on What Vibing Feels Like · 2019-03-12T08:31:17.997Z · score: 3 (2 votes) · LW · GW

No, there isn't any sort of drug needed for vibing. The way certain people describe/experience Molly makes me think it helps them with vibing, although I don't really get the same effect.

Similarly, I think for some people who tend to be anxious, alcohol can help with vibing (it helps me), but it also reduces the awareness aspect, which is important.

What Vibing Feels Like

2019-03-11T20:10:30.017Z · score: 9 (20 votes)
Comment by mr-hire on Renaming "Frontpage" · 2019-03-10T13:25:52.946Z · score: 1 (1 votes) · LW · GW

I like Whiteboard the best. The only ones I feel an aversion to are Science and the sparkly purple ball. Default and Common feel no better than Frontpage.

Comment by mr-hire on Motivation: You Have to Win in the Moment · 2019-03-08T16:55:41.473Z · score: 3 (2 votes) · LW · GW

I still think there are cruxes there that you're not seeing. My approach just accentuated the problems of looking at things at the level of a motivation system; they're still there even if you have the idea of harmony... they stick around until you realize that the harmony is the thing, and the motivation system analogy is just crudely approximating that. (Of course, I'm sure the harmony is just crudely approximating something even more fundamental.) Note that this is the same thing that stuck out to me during your ACT presentation - missing that the harmony was the thing, not the ability to take actions.

I don't think there's much more of a gap that can be bridged here, at least not with my skills. I won't be replying anymore, but I appreciate you engaging :).

Comment by mr-hire on Motivation: You Have to Win in the Moment · 2019-03-07T14:11:21.943Z · score: 19 (5 votes) · LW · GW

> mr-hire also states simpler ideas worked well for a really long time (though I'm not sure which simpler ideas or what counts as "brute force").

I'm very much interested in the object level of this post, and want to return to that.

To be more explicit about the levels of development here.

At some point, I was all about pragmatics. Every single change I could make that made me more likely to take my endorsed actions and less likely to take my unendorsed actions was used. I had a Pavlok. I used Beeminder. I had blocking software. I used social pressure when it helped and avoided it when it didn't. I reframed my beliefs to be more powerful. Comfort zone expansion was my default - when something scared me, I felt the fear and did it anyway. I even used techniques that would become central in the next stage of development - looking at beliefs, using introspection, using mindfulness and being in the moment - but the framing of it was all in the idea of a big pragmatic "use the things that make me more likely to take my intended actions."

At some point, this type of thinking just hit a brick wall. It led me to crashes, where I would follow my endorsed actions for months, and then crash, unable to force myself to go forward even with all of the pragmatic motivation tools I had set up. It also caused me to get myself into trouble one too many times - one too many subconscious Chesterton fences that I ignored in the pursuit of taking the action that was "obviously correct."

It became clear that there was something being missed in the simple piling on of pragmatic motivational tools. At this point, it became necessary to delve deeper into the relation between subconscious beliefs and actions taken. Introspection became very important, as did understanding how tools like mindfulness related to how I oriented to my internal beliefs. Tools like the parts model became much more useful, and understanding the good that came from situations became important. I started seeing the previous motivational tools as "brute forcing", trying to go against the grain of the more fundamental influences of beliefs, parts, and belief orientations. I used them more sparingly, surgically, here and there as tools to shape beliefs and get things done pragmatically, while being aware of the pitfalls.

Hopefully that gives a bit clearer picture of where I (and I suspect Gordon) am coming from.

Edit: This post gives some more explicit pointers towards my current model, although it's obviously a bit behind: https://www.lesswrong.com/posts/mFvuQTzHQiBCDEKw6/a-framework-for-internal-debugging

Comment by mr-hire on S-Curves for Trend Forecasting · 2019-03-05T12:22:04.228Z · score: 4 (3 votes) · LW · GW

> Is this falsifiable?

Innovation research is notoriously hard to falsify and subject to just-so stories and post-hoc justifications.

One of the things I find compelling about S-curves is just how frequently they show up in innovation research coming from different angles and using different methodologies.

Some examples:

  • Everett Rogers is a communication professor trying to figure out how ideas spread. So he finds measurements for ownership of different technologies, like television and radio, throughout society. Finds S-curves.
  • Clayton Christensen is interested in how new firms overtake established firms in the market. Decides to study the transistor market because there's easy measurements and it moves quickly. Finds S-curves.
  • Carlota Perez is interested in broad shifts in society and how new innovations affect the social context. She maps out these large shifts using historical records. Finds S-curves.
  • Genrich Altshuller is interested in how engineers create novel inventions. So he pores through thousands of patents, looks for the ones that show real inventiveness, and tries to find patterns. Finds S-curves.
  • Simon Wardley is interested in the stages that software goes through as it becomes commoditized. He takes recent tech innovations that were commoditized and categorizes the news stories about them, then plots their frequency. Finds S-curves.

> How do S-curves help me make predictions, or, alternately, tell me when I shouldn't try predicting?

By understanding the separate patterns, they can give you an idea of the most likely future of different technologies. For instance, here's a question on LW that I was able to better understand and predict because of my understanding of S-curves and how innovations stack.

> How do I know when some trend isn't made of S-curves?

I think understanding how to work with fake frameworks is a key skill here. Something like S-curves isn't used in a proof to get to the right answer. Rather, you can use it as evidence pointing you towards certain conclusions. You know that they tend to apply in an environment with self-reinforcing positive feedback loops and constraints on those feedback loops. You know they tend to apply for diffusion and innovation. When things have more of these features, you can expect them to be more useful. When things have less of these features, you can expect them to be less useful. By holding up a situation to lots of your fake frameworks, and seeing how much each applies, you can "run the Bayesian Gauntlet" and decide how much probability mass to put on different predictions.
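
For concreteness, the usual way "self-reinforcing feedback loop plus a constraint" gets formalized is the logistic equation; this is standard math, not something taken from any of the studies above:

```latex
\frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right)
\qquad\Longrightarrow\qquad
x(t) = \frac{K}{1 + \frac{K - x_0}{x_0}\, e^{-r t}}
```

Growth looks exponential while $x \ll K$ (the feedback loop dominates) and flattens out as the constraint $K$ binds, which is the S shape all of the researchers above keep finding.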

Comment by mr-hire on Personalized Medicine For Real · 2019-03-04T23:04:20.797Z · score: 10 (8 votes) · LW · GW

> I think we were pretty successful at finding these kinds of mismatches between medical science and medical practice.  By their nature, though, these kinds of solutions are hard to scale to reach lots of people.

I'm curious about the cause of this. It seems like this is relatively straightforward to scale: simply use marketing to get them to be more standard practice at hospitals and doctors' offices.

There are two potential reasons I can imagine off the top of my head, but I would really like to hear from you why these were so hard to scale.

1. They weren't hard to scale, but they were hard to make money with. If this is the case, maybe a non-profit could do it?

2. The fact that they weren't already standard practice meant that most of them had some other reason not to scale (the treatment was weird, it added extra liability, etc).

Was there some other reason these types of interventions wouldn't scale?

Comment by mr-hire on To understand, study edge cases · 2019-03-04T17:19:15.016Z · score: 3 (2 votes) · LW · GW

Good concept.

There's a corresponding principle in design, which is to design for extremes. If you want to make a pair of scissors easy to use for the average person, think about how to make them easy to use for both a person without a thumb, and a person with only a thumb.

Comment by mr-hire on Motivation: You Have to Win in the Moment · 2019-03-04T14:35:30.133Z · score: 8 (2 votes) · LW · GW

>If you can’t stop thinking about transistors, you will find it hard to focus on and fully appreciate the boolean algebra you’re executing on your logic gates made out of transistors.

I think the point Gordon was making was the opposite. You've described a leaky abstraction of logic gates that works at a base level, but that doesn't pass muster when you actually look at the transistors.

For me for instance, a basic strategy of "make the alternatives I endorse really easy and highly rewarding, and the alternatives I don't really hard and highly punishing" worked really really well for me for a long time, and was sufficient to overcome some of my most obvious bottlenecks.

However, that kind of thinking actively became harmful at a certain point in my development, when I hit diminishing returns on brute forcing my motivation system (I encountered problems that couldn't be brute forced that way, and these problems were my bottlenecks) and had to take a step back to understand what was actually going on, understanding my internal parts, belief orientations and awareness, etc.

Comment by mr-hire on Karma-Change Notifications · 2019-03-02T18:52:36.887Z · score: 4 (3 votes) · LW · GW

Can you explain this strategy more? In theory, a random reinforcement schedule shouldn't create any less dopamine when rewarded, and should be the most resistant to extinction. I'm having trouble understanding what you mean by "dopamine neutral", I think.

Comment by mr-hire on S-Curves for Trend Forecasting · 2019-03-02T10:36:46.279Z · score: 1 (1 votes) · LW · GW

I wanted to work in this idea of getting ahead of constraints but it didn't really seem to fit anywhere.

I agree Eugene is among my top 3 business strategists to follow online and I learned a lot about how to think about strategy by reading his blog.

Idea - a thread for the best blogs to follow on every subject.

Comment by mr-hire on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-03-02T10:29:50.125Z · score: 2 (2 votes) · LW · GW

Man, at first I thought you were saying that your top level comment was generated by GPT-2 and I thought you were on a whole nother level of meta.

Comment by mr-hire on Policy-Based vs Willpower-Based Intentions · 2019-02-28T15:55:02.864Z · score: 5 (3 votes) · LW · GW

I'm curious about the phenomenology of your Policy-based intentions... do they feel similar to habits you haven't chosen consciously? Is it simply the same thing as CFAR calls a TAP, and most everyone else calls a habit, or is it something different?

P.S. I wrote something about the phenomenology of "willpower based" intentions here:

https://www.lesswrong.com/posts/QmcFeZtwSRhcsYDZu/video-the-phenomenology-of-intentions

Comment by mr-hire on Why didn't Agoric Computing become popular? · 2019-02-19T01:30:55.404Z · score: 3 (3 votes) · LW · GW

Most people do this for other utilities all the time, though (like power).

Comment by mr-hire on Avoiding Jargon Confusion · 2019-02-18T15:43:05.683Z · score: 4 (4 votes) · LW · GW

I suspect the majority of drift in concepts comes from simple misunderstandings/playing a game of telephone and memetic selection. This leads to more politically charged and simpler definitions, without any need for negative intent. I suspect providing other related concepts won't help with this, besides maybe delaying the drift by a few weeks, but it at least seems worth trying.

However, when actually trying to put your suggestions into practice, I'm at a loss as to how to do that. For instance, I'm writing a post naming and explaining different types of risk. It's not clear to me how to make a type of risk sound good. I'm also unsure how to go about introducing other, more general words that are clearly bad in the same post, without having the post become completely off-topic and rambling.

Comment by mr-hire on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:03:41.581Z · score: 2 (2 votes) · LW · GW

So after rereading, it seems like what you're saying is: have the AGI do the resolutions? Which means people are predicting what an AGI's probabilities will be on hard questions (assuming the AGI isn't omniscient, it will still only be able to give probabilities on these items and not certainties). This makes a bit more sense, in that instead of a resolution date it gives a resolution event. However, you lose the ability to weight people's answers by their accuracy, since nothing ever gets resolved until the AGI comes, and it seems to fall prey to the "predicting what someone smarter than me would do" problem.

Comment by mr-hire on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T12:47:13.268Z · score: 2 (2 votes) · LW · GW

I think I'm missing a key inferential step here.

I'm having trouble seeing the benefit of something like this over simply a regular prediction market/poll with long time horizons. Any existing prediction market/poll will by definition become a post-AGI prediction market/poll once AGI is developed. This of course won't be able to ask questions dependent on AGI being developed (without explicitly stating those in the question), but many of the questions in your examples don't seem to be those sorts of dependent questions.

I'm also having trouble seeing how you would resolve some of the questions you asked in a traditional prediction market/poll system. It seems more at that point like just asking an AGI what their probabilities are on specific things, without being able to measure their accuracy. It seems like having a list of questions that it would be useful to ask an AGI is a worthwhile goal in itself, but it seems like you have something else in mind that I'm not quite getting.

Comment by mr-hire on Why didn't Agoric Computing become popular? · 2019-02-17T16:21:32.695Z · score: 2 (2 votes) · LW · GW

This argument doesn't apply to the Agoric computing case though, in which the microtransactions are being decided by the programs and not the human.

Comment by mr-hire on Why didn't Agoric Computing become popular? · 2019-02-16T14:19:52.486Z · score: 12 (8 votes) · LW · GW

The limiting factor on a thing being charged for as a utility is that it has evolved enough and is understood enough that the underlying architecture won't change (and thus leave all the consumers of that utility with broken products). We've now basically gotten there with storage, and computing time is next on the chopping block, as the next wave of competitive advantage comes from moving to serverless architecture.

Once serverless becomes the de facto standard, the next step will be to commoditize particular common functions (starting with obvious ones like user login/permission systems/etc). Once these functions begin to be commoditized, you essentially have an agoric computing architecture for webapps. The limiting factor is simply the technological breakthroughs, evolution of practice, and understanding of customer needs that allowed first storage, then compute, and eventually computer functions to become commoditized. Understanding S-curves and Wardley mapping is key here to understanding the trajectory.

Comment by mr-hire on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T13:20:37.551Z · score: 3 (3 votes) · LW · GW

I had a thought. One way to make this taxonomy more robust might be to use your RAIN framework to talk about the types of tradeoffs each document is trying to make (while assuming that all posts are trying to optimize for importance).

Clarification posts: aim to optimize novelty at the expense of robustness and accessibility.

Explanatory posts: aim to optimize accessibility and robustness at the expense of novelty.

Academic posts: aim to emphasize robustness at the expense of accessibility and novelty

Some other categories this suggests:

Meme spreading/popularization: emphasizes accessibility at the expense of novelty and robustness.

Original research: aims to emphasize robustness and novelty at the expense of accessibility.

Exploratory/Speculation posts: Aim to optimize novelty and accessibility at the expense of robustness.

Comment by mr-hire on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-13T22:45:34.662Z · score: 1 (1 votes) · LW · GW

Makes sense. Thanks!

Comment by mr-hire on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-13T21:39:40.333Z · score: 1 (1 votes) · LW · GW

Curious how you arrived at this ontology. It doesn't seem obvious to me as a natural grouping.

Comment by mr-hire on How to stay concentrated for a long period of time? · 2019-02-05T19:47:07.027Z · score: 3 (3 votes) · LW · GW

I retracted it because I accidentally posted as a full post comment instead of as a reply.

Comment by mr-hire on How to stay concentrated for a long period of time? · 2019-02-03T14:27:14.099Z · score: 4 (4 votes) · LW · GW

Critch mentioned that when he was working on focusing for long periods of time on math, one thing that helped him was tying the pursuit of math to as many terminal values and deep needs as he could (he had a list of those needs from a theory of human motivation, but the general idea should work without the list). I've since had varying levels of success with that technique, the key being really having my system 1 get how this particular activity is tied to what it wants.

The theory being that often distractions are to meet a need that's not being met, and if you're already getting (or realize you will get) that need met from your current activity, there's no reason to switch tasks.

I think the old "urge propagation" CFAR technique was doing something like this.

Comment by mr-hire on How to stay concentrated for a long period of time? · 2019-02-03T13:39:08.932Z · score: 3 (3 votes) · LW · GW

Note that people with ADHD often have hyperfocus, and can concentrate on certain things for much longer than two hours (e.g. videogames), they just have poor attentional control.

Comment by mr-hire on Building up to an Internal Family Systems model · 2019-01-27T12:17:27.171Z · score: 3 (3 votes) · LW · GW

I've come to a similar conclusion that subagents are something like belief clusters, which themselves are a closer-to-the-metal leaky abstraction of what's actually going on. However, I'm open to the idea that Kaj's model is the right one here.

Comment by mr-hire on The 3 Books Technique for Learning a New Skilll · 2019-01-26T15:25:30.682Z · score: 2 (2 votes) · LW · GW

This is an interesting question. I can imagine the technique being useful for acquiring the general skill of language learning, but for a language itself I can only really see the "what" and "how" books being useful, not the "why".

Nor can I imagine this technique being very helpful to learn how to ride a bike, although I imagine it could be useful to become a competitive bike racer.

The distinction between skill and knowledge seems a good start, but it seems like there's more going on here.

Comment by mr-hire on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-26T15:04:06.541Z · score: 4 (4 votes) · LW · GW

To me, DeepMind is simply trying to paint themselves in the best light. I'm not particularly surprised by the behavior; I would expect it from a for-profit company looking to get PR. Nor am I particularly upset about the behavior; I don't see any outright lying going on, merely an attempt to frame the facts in the best possible way for them.

S-Curves for Trend Forecasting

2019-01-23T18:17:56.436Z · score: 87 (27 votes)

A Framework for Internal Debugging

2019-01-16T16:04:16.478Z · score: 20 (10 votes)

The 3 Books Technique for Learning a New Skilll

2019-01-09T12:45:19.294Z · score: 125 (66 votes)

Symbiosis - An Intentional Community For Radical Self-Improvement

2018-04-22T23:15:06.832Z · score: 29 (7 votes)

How Going Meta Can Level Up Your Career

2018-04-14T02:13:02.380Z · score: 40 (19 votes)

Video: The Phenomenology of Intentions

2018-01-09T03:40:45.427Z · score: 34 (9 votes)

Video - Subject - Object Shifts and How to Have Them

2018-01-04T02:11:22.142Z · score: 11 (4 votes)