Posts

When Gears Go Wrong 2020-08-02T06:21:25.389Z · score: 26 (7 votes)
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z · score: 11 (5 votes)
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z · score: 15 (5 votes)
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z · score: 12 (5 votes)
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z · score: 31 (6 votes)
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z · score: 18 (7 votes)
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z · score: 8 (2 votes)
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z · score: 25 (11 votes)
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z · score: 25 (11 votes)
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z · score: 49 (15 votes)
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z · score: 16 (4 votes)
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z · score: 24 (9 votes)
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z · score: 32 (6 votes)
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z · score: 9 (3 votes)
The Case for The EA Hotel 2019-03-31T12:31:30.969Z · score: 66 (23 votes)
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z · score: 57 (17 votes)
What Vibing Feels Like 2019-03-11T20:10:30.017Z · score: 17 (27 votes)
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z · score: 103 (39 votes)
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z · score: 43 (20 votes)
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z · score: 160 (94 votes)
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z · score: 29 (7 votes)
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z · score: 44 (23 votes)
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z · score: 37 (11 votes)
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z · score: 14 (4 votes)

Comments

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-10-01T00:46:59.658Z · score: 2 (1 votes) · LW · GW

I have a visceral negative reaction to the comments on this post.

It really annoys me that rationalists are so bad at understanding and using analogy.

https://www.lesswrong.com/posts/HzDcLf2LJg4x66fcH/not-all-communication-is-manipulation-chaperones-don-t

Comment by mr-hire on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-10-01T00:41:38.872Z · score: 2 (1 votes) · LW · GW

Thanks, this analogy is interesting.

Comment by mr-hire on AllAmericanBreakfast's Shortform · 2020-09-30T19:10:44.428Z · score: 2 (1 votes) · LW · GW

> My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system.

This just kicks the can down the road to defining pleasure; all of my points still apply.

If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure.

That is, I think it's possible to say that pleasure kicks in around values that we really want, rather than vice versa.

Comment by mr-hire on AllAmericanBreakfast's Shortform · 2020-09-29T17:52:19.495Z · score: 2 (1 votes) · LW · GW

> "values" is a term for the types of stories that give us pleasure.

It really depends on what you mean by "pleasure".  If pleasure is just "things you want", then almost tautologically meaning comes from pleasure, since you want meaning.

If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure. I think that there are also people that just WANT the pleasure, and if they could get it while ignoring their values, they would.

I call this the "Heaven/Enlightenment" dichotomy, and I think it's a frequent misunderstanding.

I've seen some people say "all we care about is feeling good, and people who think they care about the outside world are confused." I've also seen people say "All we care about is meeting our values, and people who think it's about feeling good are confused."

Personally, I think that people are more towards one side of the spectrum or the other along different dimensions, and I'm inclined to believe both sides about their own experience.

Comment by mr-hire on "Win First" vs "Chill First" · 2020-09-29T15:19:55.881Z · score: 2 (1 votes) · LW · GW

He says good startup teams need people with both attitudes.

Comment by mr-hire on supposedlyfun's Shortform · 2020-09-29T00:49:16.239Z · score: 2 (1 votes) · LW · GW

It just feels like "biases" sit at such a high level of abstraction, and are so based on basic brain architecture, that to get rid of them would be like creating a totally different design.

Comment by mr-hire on How often do series C startups fail to exit? · 2020-09-29T00:27:10.358Z · score: 2 (1 votes) · LW · GW

> The problem is that the company is trying to grow, and will increase it's ARR by an order of magnitude in pursuit of that growth.


I got confused reading this a few times, because increasing your annual recurring revenue by an order of magnitude IS growth (most of the time).

I think this is supposed to say something like increasing burn rate by an order of magnitude.

Comment by mr-hire on AllAmericanBreakfast's Shortform · 2020-09-29T00:10:51.240Z · score: 2 (1 votes) · LW · GW

I looked through that post but didn't see any support for the claim that meaning comes from pleasure.

My own theory is that meaning comes from values, and both pain and pleasure are a way to connect to the things we value, so both are associated with meaning.

Comment by mr-hire on "Win First" vs "Chill First" · 2020-09-28T21:30:27.769Z · score: 2 (1 votes) · LW · GW

Yeah, when Thiel talks about cooperation vs. competition, he's also not talking about being a team player vs. needing to be the star. He's talking about either ignoring competition and just focusing on creating a good product, or specifically worrying about your competitors and figuring out how you can beat them.

Comment by mr-hire on What are good rationality exercises? · 2020-09-28T20:52:43.904Z · score: 12 (4 votes) · LW · GW

"Doing impossible things"

  • Get 100 strangers to show up at a specific place at a specific time.
  • Make $5,000 counterfactual dollars in a weekend.
  • Be featured in a major print publication in less than a month.
  • etc.

Comment by mr-hire on "Win First" vs "Chill First" · 2020-09-28T20:49:11.295Z · score: 14 (4 votes) · LW · GW

Interesting. I'm trying to square this with Peter Thiel's thought that startups need BOTH competitive spirits (win first) and cooperative spirits (maybe not chill first, but perhaps "cooperate first").

One way to point at the difference in perspectives is to simply say that "cooperative" and "chill first" are different... but in my mind they seem to be  pointing at similar perspectives.

Comment by mr-hire on MikkW's Shortform · 2020-09-28T17:53:42.414Z · score: 5 (2 votes) · LW · GW

This seems to be the common rationalist position, but it does seem to be at odds with:

  1. The common rationalist position to vote on UDT grounds.
  2. The common rationalist position to eschew contextualizing because it ruins the commons.

I don't see much difference between voting because you want others to also vote the same way, or choosing stocks because you want others to choose stocks the same way.

I also think it's pretty orthogonal to talk about telling the truth for long term gains in culture, and only giving money to companies with your values for long term gains in culture.

Comment by mr-hire on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T22:56:06.868Z · score: 22 (7 votes) · LW · GW

It seems like the lessons are more about credulity and basic opsec?  

Comment by mr-hire on supposedlyfun's Shortform · 2020-09-26T22:52:23.630Z · score: 5 (3 votes) · LW · GW

Do you think genetic editing could remove biases?  My suspicion is that they're probably baked pretty deeply into our brains and society, and you can't just tweak a few genes to get rid of them.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-09-26T18:21:05.019Z · score: 2 (1 votes) · LW · GW

Mods are asleep, post pictures of mushroom clouds.

Comment by mr-hire on niplav's Shortform · 2020-09-25T18:52:09.285Z · score: 3 (2 votes) · LW · GW

They also have negative externalities, moving websites from price-discrimination models that are available to everyone to direct-pay models that are only available to people who can afford them.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-09-23T22:36:44.186Z · score: 3 (2 votes) · LW · GW

Is there much EA work into tail risk from GMOs ruining crops or ecosystems?

If not, why not?

Comment by mr-hire on How often do series C startups fail to exit? · 2020-09-23T16:37:38.222Z · score: 2 (1 votes) · LW · GW

I'm not an expert on this but I believe the owners have a fiduciary duty to try to make the investors whole.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-09-22T20:23:11.672Z · score: 4 (5 votes) · LW · GW

It seems like the spirit of the Litany of Gendlin is basically false?

Owning up to what's true makes things way worse if you don't have the psychological immune system to handle the negative news/deal with the trauma or whatever.

And it's precisely the things that you are avoiding looking at that are  most likely to be those things you can't handle, as that's WHY you developed the response of not looking at them.

Comment by mr-hire on How often do series C startups fail to exit? · 2020-09-21T21:20:32.757Z · score: 2 (1 votes) · LW · GW

If the company is at Series C, but they're not a home run, they have no way to pay back their investors.

Furthermore, their valuation by definition was too high... so you have the same "disappearing value" mentioned elsewhere, and your options as an employee are likely not worth that much. As an employee in such a company, you're also at risk of being laid off, as VC funds were likely put into hiring in anticipation of VC-scale growth.

Comment by mr-hire on How often do series C startups fail to exit? · 2020-09-21T21:14:19.586Z · score: 9 (5 votes) · LW · GW

> Most Series C companies are worth in the 100-200M range, the one I'm at is worth 270M. How does all the value just evaporate? What happens to the companies that "fail"?


There's no way to short Series C startups, and the market is not open. It's not an efficient market, so I wouldn't equate "VCs valuing the company at X" with "the company being worth X". Recall that most funds don't make money.

Even if it was an efficient market, you have to remember that VCs are black swan farming.  So they're investing in a whole bunch of companies at a valuation $x00,000,000, with the expectation that many of those will go to 0 or be lower than their valuation, in order to get their few unicorns.
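
The black-swan-farming arithmetic is easy to sketch. The probabilities and multiples below are invented purely for illustration (nothing here comes from the thread): most checks are written off, yet the rare huge exit can still carry the portfolio.

```python
import random

random.seed(0)

def simulate_fund(n_companies=20, check_size=10e6):
    """Toy VC fund: each company returns a multiple drawn from a
    crude, made-up power-law-ish distribution."""
    total = 0.0
    for _ in range(n_companies):
        r = random.random()
        if r < 0.70:
            multiple = 0.0    # write-off
        elif r < 0.95:
            multiple = 1.5    # modest exit, below the entry valuation's hopes
        else:
            multiple = 50.0   # the rare unicorn
        total += check_size * multiple
    return total

invested = 20 * 10e6
returns = [simulate_fund() for _ in range(10_000)]
frac_profitable = sum(r > invested for r in returns) / len(returns)
print(frac_profitable)  # roughly 1 - 0.95**20 ≈ 0.64: a fund lives or dies on landing a unicorn
```

Note that in this toy setup each check has a positive expected multiple (0.25 · 1.5 + 0.05 · 50 ≈ 2.9x) even though the median company goes to zero, which is exactly how "valuation" and "worth" come apart.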

Comment by mr-hire on Mati_Roy's Shortform · 2020-09-20T17:16:48.239Z · score: 2 (1 votes) · LW · GW

I'm tired because I didn't sleep well.

Comment by mr-hire on avturchin's Shortform · 2020-09-17T18:10:08.038Z · score: 2 (1 votes) · LW · GW

Ok, tabooing the word ontology here.  All that's needed is an understanding of Bayesianism to answer the question of how you combine the chance of all other explanations.

Comment by mr-hire on avturchin's Shortform · 2020-09-17T16:47:27.709Z · score: 2 (1 votes) · LW · GW

But "all the other explanations combined" was talking about the probabilities. We're not combining the explanations, that wouldn't make any sense.

The only ontology that is required is Bayesianism, where explanations can have probabilities of being correct.
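
As a toy illustration (the explanations and numbers here are invented): once each explanation carries a probability, "all the other explanations combined" is just a sum; no richer ontology is needed.

```python
# Made-up prior over mutually exclusive explanations, ordered by simplicity.
priors = {
    "simplest": 0.40,
    "simplest + tiny nuance": 0.25,
    "alternative A": 0.20,
    "alternative B": 0.15,
}
assert abs(sum(priors.values()) - 1.0) < 1e-9  # a proper distribution

# Probability that some explanation other than the simplest is correct.
p_rest = sum(p for name, p in priors.items() if name != "simplest")
print(round(p_rest, 2), p_rest > priors["simplest"])  # 0.6 True: the rest combined outweigh the single best
```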

Comment by mr-hire on capybaralet's Shortform · 2020-09-17T01:37:09.484Z · score: 3 (2 votes) · LW · GW

> But, that is indeed a clunkier statement, and probably defeats the point of you being able to casually mention it in the first place.)

Also like, if you're in something like guess culture, and someone tells you "I'm just telling you this with no expectation," they will still be trying to guess what you may want from that.

Comment by mr-hire on rohinmshah's Shortform · 2020-09-15T20:13:22.438Z · score: 4 (2 votes) · LW · GW

I recently interviewed someone who has a lot of experience predicting systems, and they had 4 steps similar to your two above.

  1. Observe the world and see if it's sufficiently similar to other systems to predict based on intuitive analogies.
  2. If there's not a good analogy, understand the first principles, then try to reason about the equilibria from those.
  3. If that doesn't work, assume the world will stay in a stable state, and try to reason from that.
  4. If that doesn't work, figure out the worst-case scenario and plan from there.

I think 1 and 2 are what you do with expertise, and 3 and 4 are what you do without expertise.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-09-15T19:31:07.985Z · score: 2 (1 votes) · LW · GW

I'm doing interviews for this now.

I've gotten great feedback from people I've interviewed, saying it gave them a better understanding of themselves.

If you're interested in being interviewed, sign up here.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-09-15T19:27:22.111Z · score: 9 (2 votes) · LW · GW

Trying to describe a particular aspect of Moloch I'm calling hyper-inductivity:


The machine is hyper-inductive. Your descriptions of the machine are part of the machine. The machine wants you to escape, that is part of the machine. The machine knows that you know this. That is part of the machine.

Your trauma fuels the machine. Healing your trauma fuels the machine. Traumatizing your kids fuels the machine. Failing to traumatize your kids fuels the machine.

Defecting on the prisoner's dilemma fuels the machine. Telling others not to defect on the prisoner's dilemma fuels the machine.

Your intentional community is part of the machine. Your meditation practice is part of the machine. Your art installation is part of the machine. Your protest is part of the machine.

A select few will escape the machine. That is part of the machine. The machine will simplify, the machine will distort, the machine will politicize, the machine will consumerize.

Jesus is part of the machine. Buddha is part of the machine. Elijah is part of the machine. Zeus is part of the machine.

Your Kegan-5 ability to see outside the machine is part of the machine. Your mental models are part of the machine. Your bayesianism is part of the machine. Your shitposts are part of the machine.

The machine devours. The machine creates. Your attempts to protect your ideas from the machine are part of the machine.

Your attempts to fix the machine are part of the machine. Your attempts to see that the machine is an illusion are part of the machine. Your attempts to use the machine for your own purposes are part of the machine.

The machine's goal is to grow the machine. The machine does not have a goal. The machine is designed to be anti-fragile. The machine is not designed.

This post is part of the machine.

Comment by mr-hire on avturchin's Shortform · 2020-09-11T15:03:13.583Z · score: 2 (0 votes) · LW · GW

I'm struggling to think of a situation where, on priors (with no other information), I expect the simplest explanation to be more likely than all other explanations combined (including the simplest explanation with a tiny nuance).

Can you give an example of #1?

Comment by mr-hire on romeostevensit's Shortform · 2020-08-31T18:32:29.640Z · score: 6 (3 votes) · LW · GW

It seems you could apply this in reverse for non-acceptance as well. Thinking that it's not ok for the boat to leak does not imply a belief that the boat is not leaking. (Often this is the argument of people who think a doctrine of non-acceptance implies not seeing clearly.)

Comment by mr-hire on What posts on finance would your find helpful or interesting? · 2020-08-24T15:44:52.042Z · score: 2 (1 votes) · LW · GW

Hmm, is there an app for that?

Comment by mr-hire on What posts on finance would your find helpful or interesting? · 2020-08-23T02:13:39.023Z · score: 4 (2 votes) · LW · GW

I see people throwing around words with numbers attached, like "I shorted the stock at $x" or "I bought at 5x leverage", and I only vaguely know what the words mean and how they're related to the numbers being thrown about.

Comment by mr-hire on What posts on finance would your find helpful or interesting? · 2020-08-22T22:27:23.103Z · score: 11 (8 votes) · LW · GW

Honestly the very basics.  How does short selling actually work?  How does leverage actually work? As someone who has never really gone into finance I'd love a LW sequence explaining what all the numbers actually mean.

Comment by mr-hire on Epistemic Comparison: First Principles Land vs. Mimesis Land · 2020-08-22T20:30:20.877Z · score: 4 (2 votes) · LW · GW

> To make "Mimesis Land" work at all, I'd posit that there is some amount of experimentation of techniques and beliefs happening at all times.


It feels like this gives Mimesis Land too much of an edge. They get to experiment with their practices AND pass on the best ones, while First Principles Land just has to act on their best guess given the evidence they have at the time.

I think to make it a fairer comparison, neither can have access to "Experimentation Land": the only change in Mimesis Land comes from accidents and misunderstandings when passing down beliefs and practices, whereas the only change in First Principles Land comes from happenstance direct observations.

Comment by mr-hire on mingyuan's Shortform · 2020-08-21T18:57:52.993Z · score: 2 (1 votes) · LW · GW

This same thing can often happen with debugging, but internally. You think it's about dishes but actually it's about not having your mother's love.

I've observed that different pair debuggers tend to focus on finding the root internal or external causes, and the best can hone in on which is more relevant.

Comment by mr-hire on ricraz's Shortform · 2020-08-20T22:22:52.535Z · score: 3 (2 votes) · LW · GW

> In the half-formed thoughts stage, I'd expect to see a lot of literature reviews, agendas laying out problems, and attempts to identify and question fundamental assumptions. I expect that (not blog-post-sized speculation) to be the hard part of the early stages of intellectual progress, and I don't see it right now.

I would expect that later in the process. Agendas laying out problems and fundamental assumptions don't spring from nowhere (at least for me); they come from conversations where I'm trying to articulate some intuition and I recognize some underlying pattern. The pattern and structure don't emerge spontaneously; they come from trying to pick around the edges of a thing, get thoughts across, explain my intuitions, and see where they break.

I think it's fair to say that crystallizing these patterns into a formal theory is a "hard part", but the foundation for making it easy is laid out in the floundering and flailing that came before.

Comment by mr-hire on ricraz's Shortform · 2020-08-20T16:11:44.972Z · score: 11 (3 votes) · LW · GW

> And we're trying to produce reliable answers to much harder questions by, what, writing better blog posts, and hoping that a few of the best ideas stick? This is not what a desperate effort to find the truth looks like.

It seems to me that maybe this is what a certain stage in the desperate effort to find the truth looks like?

Like, the early stages of intellectual progress look a lot like thinking about different ideas and seeing which ones stand up robustly to scrutiny.  Then the best ones can be tested more rigorously and their edges refined through experimentation.  

It seems to me like there needs to be some point in the desperate search for truth in which you're allowing for half-formed thoughts and unrefined hypotheses, or else you simply never get to a place where the hypotheses you're creating even brush up against the truth.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-08-18T03:53:20.450Z · score: 5 (3 votes) · LW · GW

Are there big takeaways from Moral Mazes that you don't get from The Gervais Principle?

Comment by mr-hire on grumpyfreyr's Shortform · 2020-08-17T22:21:23.024Z · score: 2 (1 votes) · LW · GW

Isn't that the whole point? You can't prove that rightness is impossible, the same way you can't prove rightness is possible.

Comment by mr-hire on Partially Enlightened AMA · 2020-08-17T21:49:18.467Z · score: 3 (2 votes) · LW · GW

Just to check, are you making a claim like The Secret that without taking any actions, without expressing the contents of your thoughts in any way, without any sort of traditional "causal connection", your thoughts will shape the universe to get you what you want?

I think you have made a classic enlightenment mistake of failing to separate your experience of the world from the world itself. The map is indeed not the territory, even when you're enlightened.

Comment by mr-hire on Why haven't we celebrated any major achievements lately? · 2020-08-17T21:44:25.526Z · score: 13 (16 votes) · LW · GW

Perhaps progress has accelerated so much that we've become a bit numb?   

If you come from an assumption that progress is accelerating, it would stand to reason you could get celebration/awe fatigue if the introduction of something wondrous was so commonplace that wonder itself became habituated.

Comment by mr-hire on Partially Enlightened AMA · 2020-08-16T23:06:10.835Z · score: 2 (1 votes) · LW · GW

I mean, excess time and attention would mean that you used your lack of bias to craft a life where you could put your time and attention in places that you fundamentally enjoy.

Comment by mr-hire on Partially Enlightened AMA · 2020-08-16T21:13:03.138Z · score: 2 (1 votes) · LW · GW

If you're so unbiased, do you have an excess of time and attention?

Comment by mr-hire on Partially Enlightened AMA · 2020-08-16T20:06:05.766Z · score: 2 (1 votes) · LW · GW

If you're so unbiased, are you rich?

Comment by mr-hire on grumpyfreyr's Shortform · 2020-08-15T14:31:58.670Z · score: 4 (4 votes) · LW · GW

Yeup, all models are wrong, even this one.

From some perspective there's a way in which 2 + 2 = 4 is  just "right."

Comment by mr-hire on How to Lose a Fair Game · 2020-08-15T03:51:05.670Z · score: 4 (2 votes) · LW · GW

> For an even-money bet, the formula is simply

> f* = 2p − 1

This article says that while this is often quoted, it's only true for an even-money bet where you lose everything, which is a shame, because it would have been a fairly easy heuristic.
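
For reference (standard Kelly math, not taken from the linked article): maximizing expected log wealth p·log(1 + f·b) + q·log(1 − f·a), where b is the gain per dollar staked on a win and a the fraction of the stake lost on a loss, gives f* = p/a − q/b. The often-quoted even-money formula 2p − 1 is the special case a = b = 1, which is why it breaks when you don't lose everything:

```python
import math

def kelly_fraction(p, b=1.0, a=1.0):
    """Closed-form Kelly fraction: win probability p, net odds b
    (gain per $1 staked on a win), loss fraction a (share of stake lost)."""
    return p / a - (1 - p) / b

def expected_log_growth(f, p, b=1.0, a=1.0):
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f * a)

p = 0.6

# Even-money, lose-everything bet: reduces to the familiar 2p - 1.
assert abs(kelly_fraction(p) - (2 * p - 1)) < 1e-12

# Even-money bet where a loss only costs half the stake: 2p - 1 is now wrong.
fs = [i / 10_000 for i in range(100, 15_000)]  # grid over f in [0.01, 1.5)
best = max(fs, key=lambda f: expected_log_growth(f, p, b=1.0, a=0.5))
print(round(kelly_fraction(p, b=1.0, a=0.5), 3), round(best, 3))  # both ≈ 0.8, far from 2p - 1 = 0.2
```

The grid search is just a sanity check that the closed form matches the maximizer of expected log growth.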

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-08-15T03:25:25.118Z · score: 2 (1 votes) · LW · GW

What can I do to get an intuitive grasp of Kelly betting? Are there apps I can play or exercises I can try?

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-08-13T01:22:44.455Z · score: 2 (1 votes) · LW · GW

But I'm not sure how to do it with their affiliate link creator. The default link they give me is not smile.

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-08-12T18:58:57.524Z · score: 2 (1 votes) · LW · GW

^ Affiliate links.  Feel free to search them on your own if you don't want some of the money to go to me.  If affiliate links are against the rules, let me know mods!

Comment by mr-hire on Matt Goldenberg's Short Form Feed · 2020-08-12T18:56:10.471Z · score: 9 (2 votes) · LW · GW

Recently went on a quest to find the best way to minimize the cord clutter, cord management, and charging anxiety that create a dozen trivial inconveniences throughout the day.

Here's what worked for me:

1. For each area that is a wire maze, I get one of these surge protectors with 18 outlets and 3 usb slots: https://amzn.to/33UfY7i

2. For everywhere I'm likely to want to charge something, I fill 1-3 of the slots with these 6ft multi-charging usb cables (more slots if I'm likely to want to charge multiple things). I get a couple extras for travel so that I can simply leave them in my travel bag: https://amzn.to/33RV48T

3. For everywhere I'm likely to want to plug in my laptop, I get one of these universal laptop chargers. I save the attachments somewhere safe for future laptops, and leave the attachment that works for my laptop plugged in at each place. I get an extra to keep in my travel bag: https://amzn.to/3iwHjkf

4. I run the USB cords and laptop cord through these nifty little cord clips, so they stay in place: https://amzn.to/31KdcPA

5. All the excess wiring, along with the surge protector, is put into this cord box. I use the twisty ties with that to secure wires from dangling, and ensure they go into the box neatly. Suddenly, the wires are super clean: https://amzn.to/2PIGbxA

6. (Bonus Round) I have a charging case for my phone, so the only time I have to worry about charging is at night. I use this one for my Pixel 3A, but you'll have to find one that works for your phone: https://amzn.to/31MuxHn

7. (Bonus Round 2): Work to go wireless for things that have that option, like headphones.

This will set you back $200 - $500 (depending on how much of each thing you need), but man is it nice to not ever have to worry about finding a charging cord, moving a cord around, remembering to pack your charger, tripping over wires, or having the wire jungle distract you.