romeostevensit's Shortform

post by romeostevensit · 2019-08-07T16:13:55.144Z · score: 2 (1 votes) · LW · GW · 21 comments

Comments sorted by top scores.

comment by romeostevensit · 2019-09-19T00:47:54.420Z · score: 34 (12 votes) · LW(p) · GW(p)

A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.

comment by elityre · 2019-09-28T08:48:37.547Z · score: 4 (2 votes) · LW(p) · GW(p)

Hahahahahaha.

comment by romeostevensit · 2020-02-21T01:22:57.238Z · score: 18 (8 votes) · LW(p) · GW(p)

A willingness to lose doubled my learning rate. I recently started quitting games faster when I wasn't having fun (or predicted low future fun from playing the game out). I felt bad about this because I might have been cutting off some interesting comebacks, etc. However, after playing the new way for several months (in many different games), I found I had about doubled the number of games I could play per unit time, and therefore upped my learning rate by a lot. This came not only from the fact that many of the quit games were exactly those slogs that take a long time, but also from the fact that the willingness to just quit if I stopped enjoying myself made me more likely to experiment rather than play conservatively.

This is similar to the 'fail fast' credo.

comment by romeostevensit · 2020-02-22T02:01:39.735Z · score: 2 (1 votes) · LW(p) · GW(p)

This also applies to books.

comment by romeostevensit · 2019-08-07T16:13:55.342Z · score: 12 (5 votes) · LW(p) · GW(p)

Your self model only contains about seven moving parts.

Your self model's self model only contains one or two moving parts.

Your self model's self model's self model contains zero moving parts.

Insert UDT joke here.

comment by romeostevensit · 2019-09-10T15:39:48.488Z · score: 11 (3 votes) · LW(p) · GW(p)

A short heuristic for self inquiry:

  • write down things you think are true about important areas of your life
  • produce counterexamples
  • write down your defenses/refutations of those counterexamples
  • come back later, when you are less defensive, and review whether your defenses were reasonable
  • if not, why not? whence the motivated reasoning? what is being protected from harm?

comment by romeostevensit · 2020-01-03T03:56:09.898Z · score: 10 (6 votes) · LW(p) · GW(p)

Flow is a sort of cybernetic pleasure: the pleasure of being in tight feedback with an environment that has fine-grained intermediary steps, allowing you to learn faster than you can even think.

comment by romeostevensit · 2019-11-29T21:17:08.329Z · score: 10 (6 votes) · LW(p) · GW(p)

The most important inversion I know of is cause and effect. Flip them in your model and see if suddenly the world makes more sense.

comment by romeostevensit · 2020-01-01T18:43:43.630Z · score: 9 (4 votes) · LW(p) · GW(p)

The arguments against IQ boosting on the grounds that evolution is an efficient search of the space of architectures given constraints would have applied equally well to arguing that injectable steroids usable in humans would never be developed.

comment by interstice · 2020-01-01T23:35:02.105Z · score: 12 (4 votes) · LW(p) · GW(p)

Steroids do fuck a bunch of things up, like fertility, so they make evolutionary sense. This suggests we should look to potentially dangerous or harmful alterations to get real IQ boosts. Greg Cochran has a post suggesting gout might be like this.

comment by Raemon · 2020-01-01T23:40:07.914Z · score: 3 (1 votes) · LW(p) · GW(p)

My understanding is those arguments are usually saying "you can't easily get boosts to IQ that wouldn't come with tradeoffs that would have affected fitness in the ancestral environment." I'm actually not sure what the tradeoffs of steroids are – are they a free action everyone should be taking apart from any legal concerns? Do they come with tradeoffs that you think would still be a net benefit in the ancestral environment?

[fake edit: interstice beat me to it]

comment by romeostevensit · 2020-01-03T03:25:16.804Z · score: 7 (4 votes) · LW(p) · GW(p)

The smaller an area you're trying to squeeze the probability fluid into, the more watertight your inferential walls need to be.

comment by romeostevensit · 2019-10-29T00:48:57.963Z · score: 6 (4 votes) · LW(p) · GW(p)

We have fewer decision points than we naively model and this has concrete consequences. I don't have 'all evening' to get that thing done. I have the specific number of moments that I think about doing it before it gets late enough that I put it off. This is often only once or twice.

comment by romeostevensit · 2019-12-16T19:13:26.160Z · score: 5 (2 votes) · LW(p) · GW(p)

Social orders function on the back of unfakeably costly signals. Proof of Work social orders encourage people to compete to burn more resources; Proof of Stake social orders encourage people to invest more into the common pool. PoS requires reliable reputation tracking and capital formation. They aren't mutually exclusive, as both kinds of orders are operating all the time. People heavily invested in one will tend to view those heavily invested in the other as defectors. There is a market for narratives that help villainize the other strategy.

comment by romeostevensit · 2019-09-10T15:36:39.587Z · score: 4 (2 votes) · LW(p) · GW(p)

When young, you mostly play within others' reward structures. Many choose which structure to play in based on max reward. This is probably a mistake. You want to optimize for the opportunity to learn how to construct reward structures.

comment by romeostevensit · 2020-01-10T03:01:44.538Z · score: 2 (1 votes) · LW(p) · GW(p)

You can't straightforwardly multiply point estimates of uncertainty from different domains to propagate uncertainty through a model. Point estimates of differently shaped distributions can mean very different things, e.g. the difference between the mean of a normal, a bimodal, and a fat-tailed distribution. This gets worse when there are potential sign flips in various terms as we try to build a causal model out of the underlying distributions.
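
A minimal sketch of the failure mode, with made-up inputs: two independent quantities share the same point estimate (mean 1.0) but have very different shapes, and sampling the full distributions (roughly what tools like Guesstimate automate) reveals what the single summary number hides.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative, made-up inputs: same mean (1.0), very different shapes.
a = rng.normal(loc=1.0, scale=0.1, size=n)          # narrow normal
b = rng.lognormal(mean=-0.5, sigma=1.0, size=n)     # fat-tailed; E[b] = exp(-0.5 + 0.5) = 1.0

# Propagate by sampling rather than by multiplying summary numbers.
product = a * b

print("product of point estimates:", a.mean() * b.mean())         # ~1.0
print("mean of sampled product:   ", product.mean())              # ~1.0 (independent inputs)
print("median of sampled product: ", np.median(product))          # ~0.6
print("95th percentile:           ", np.percentile(product, 95))  # ~3.1
```

The point estimates multiply to 1.0, and since the inputs are independent the mean of the product is also about 1.0, yet the typical (median) outcome is around 0.6 while the tail runs several times higher. A single number can't carry that information.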

comment by romeostevensit · 2020-01-10T03:08:45.833Z · score: 2 (1 votes) · LW(p) · GW(p)

(I guess this is why Guesstimate exists.)

comment by Pattern · 2020-01-11T07:26:27.268Z · score: 1 (1 votes) · LW(p) · GW(p)

How does Guesstimate help?

comment by romeostevensit · 2020-01-11T07:52:45.120Z · score: 4 (3 votes) · LW(p) · GW(p)

Guesstimate propagates full distributions for you.