Comments

Comment by Causal Chain (causal-chain) on Saving the world sucks · 2024-01-14T01:06:19.355Z · LW · GW

I interpret upvotes/downvotes as

  1. Do I want other people to read this post?
  2. Do I want to encourage the author and others to write more posts like this?

And I favour this post for both of those reasons.

I agree that this post doesn't make a philosophical argument for its position, but I don't require that of every post. I value it as an observation of how the EA movement has affected this particular person, and as criticism.

A couple of strongly anti-EA friends of mine became so due to a similar moral burnout, so it's particularly apparent to me how little emphasis is put on mental health.

Comment by Causal Chain (causal-chain) on Conflicts between emotional schemas often involve internal coercion · 2023-05-18T00:58:03.631Z · LW · GW

This dynamic reminds me of arguments-as-soldiers from The Scout Mindset. If people are used to wielding arguments as soldiers on themselves, then it seems relatively easy to extend those patterns to reasoning with others.

Testing this hypothesis seems tricky. One avenue is the prediction "people with more internal conflicts are more predisposed to the soldier mindset", though I can see a couple of in-model ways for this to be untrue.

Comment by Causal Chain (causal-chain) on New User's Guide to LessWrong · 2023-05-17T02:52:48.327Z · LW · GW

Some typos:

rationality lessons we've accumulated and made part of our to our thinking

Seems like some duplicated words here.

weird idea like AIs being power and dangerous in the nearish future.

 Perhaps: "weird ideas like AIs being powerful and dangerous"

Comment by causal-chain on [deleted post] 2023-04-14T02:27:40.371Z

Similar to being able to reply freely to comments on our posts, it would be nice if we could reply freely to comments on our own comments.

Comment by Causal Chain (causal-chain) on Simulacrum 3 As Stag-Hunt Strategy · 2022-12-10T21:45:12.859Z · LW · GW

This seems like a reasonable mechanism, but I thought we already had one: belief-in-belief makes it easier to lie without being caught.

Comment by Causal Chain (causal-chain) on Do meta-memes and meta-antimemes exist? e.g. 'The map is not the territory' is also a map · 2022-08-10T22:14:06.383Z · LW · GW

The phrase "the map is not the territory" is not just a possibly conceivable map, it's part of my map.

Thinking in terms of programming, it's vaguely like I have a class instance s where one of the elements p is a pointer to the instance itself. So I can write *(s.p) == s. Or go further and write *(*(s.p).p) == s.

I can go as far as I want with only the tools offered to me by my current map.
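The pointer analogy above can be sketched in Python; the class name `Map` and attribute `p` are purely illustrative:

```python
class Map:
    def __init__(self):
        self.p = self  # p points back at the instance itself


s = Map()

# Following the pointer any number of times lands back on s,
# the analogue of *(s.p) == s and *(*(s.p).p) == s.
assert s.p is s
assert s.p.p is s
```

Python makes this easy because attributes hold references, so the self-reference costs nothing and can be chained indefinitely.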

Comment by Causal Chain (causal-chain) on Writing this post as rationality case study · 2022-07-31T01:50:02.968Z · LW · GW

My immediate mental response was that I value this post, but it doesn't fit the mood of LessWrong. Which is kind of sad, because this seems practical. But that impression is heavily biased by how upvotes are divvied out, since I typically read highly-upvoted posts.

 > It seems less likely to maximize my happiness or my contribution to society, but it doesn't make me not want it

I thought this was clear to me, but then I thought some more and I no longer think it's straightforward. It pattern-matched against

  • high value vs low probability
  • personalities are inbuilt biases in human strategy

But deductions from these seem of dubious use.

I agree that it's a good idea to give things a try and collect data before making longer-term plans. Since you're explicitly exploring rather than exploiting, I suggest trying low-effort wacky ideas in many different directions (e.g. not on LessWrong).

Comment by Causal Chain (causal-chain) on What Are You Tracking In Your Head? · 2022-07-07T23:23:28.024Z · LW · GW

This reminds me of dual N-back training. Under this frame, dual N-back would improve your ability to track extra things. It's still unclear to me whether training it actually improves mental skills in other domains.

Comment by Causal Chain (causal-chain) on What Are You Tracking In Your Head? · 2022-07-07T12:53:00.652Z · LW · GW

The improvement to my intuitive predictive ability is definitely a factor in why I find it comforting; I don't know what fraction of it is aesthetics. I'd say a poorly calibrated 30%. Like maybe it reminds me of games where I could easily calculate the answer, so my brain assumes I'm in that situation as long as I don't test that belief.

I'm definitely only comparing the sizes of changes to the same stat. My intuition also assumes diminishing returns for everything except defense, which has accelerating returns, and knowing the size of each step helps inform this.

Comment by Causal Chain (causal-chain) on What Are You Tracking In Your Head? · 2022-07-06T11:35:35.622Z · LW · GW

 > Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player.

When I imagine a game which offers "+armor" or "+accuracy" vs a game which offers "+5 armor" or "+10 accuracy", the latter feels far more comfortable even if I do not intend to do the maths. I suspect it gives something for my intuition to latch onto, to give me a sense of scale.

Comment by Causal Chain (causal-chain) on 14 Techniques to Accelerate Your Learning · 2022-05-21T00:39:39.645Z · LW · GW

Possible typo

Here is our last set of flashcards to provide you with the key takeaways from the section “Project-based Learning.”

Should be

Here is our last set of flashcards to provide you with the key takeaways from the section “put your learning into practice.”

Comment by Causal Chain (causal-chain) on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2021-12-01T22:24:23.580Z · LW · GW

Well, you check whether it's a multiple of every prime below sqrt(11009) ~= 105.

Though if you suspect he's intentionally chosen a tricky number, a product of two large primes, you can look at the square numbers just above 11009. In this case 11025 = 105^2, and 11025 - 11009 = 16 = 4^2, so 11009 = 105^2 - 4^2 = (105+4)(105-4) = 109×101.
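This difference-of-squares trick is Fermat's factorization method; a minimal Python sketch (an illustration for odd composites with two close-ish factors, not a general-purpose factoring routine):

```python
import math


def fermat_factor(n):
    """Find a >= sqrt(n) such that a*a - n is a perfect square b*b,
    giving the factorization n = (a + b) * (a - b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1  # start at the smallest a with a^2 >= n
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:  # b2 is a perfect square
            return a + b, a - b
        a += 1


print(fermat_factor(11009))  # (109, 101)
```

For 11009 the very first candidate a = 105 already works, which is exactly why the number looks tricky for trial division but falls instantly to this approach.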