eigen's Shortform

post by eigen · 2019-08-28T16:27:08.446Z · LW · GW · 38 comments

Comments sorted by top scores.

comment by eigen · 2019-11-30T13:21:11.630Z · LW(p) · GW(p)

Eliezer has the Sequences, Scott the Codex; what does Robin Hanson have? Can someone point me to where I could start reading his posts in a way that makes sense? I found this post: https://www.lesswrong.com/posts/SSkYeEpTrYMErtsfa/what-are-some-of-robin-hanson-s-best-posts [LW · GW], which may be helpful; does anyone have an opinion on it?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-11-30T13:47:28.089Z · LW(p) · GW(p)

Robin Hanson has a book, Elephant in the Brain, that does a good job getting his basic views across.

Replies from: eigen
comment by eigen · 2019-12-01T14:28:25.780Z · LW(p) · GW(p)

Thank you. I did not consider the book. Have you or someone read it? I think I'm going to go the route of the articles mentioned in the post I linked.

Replies from: Hazard
comment by Hazard · 2019-12-01T20:31:33.723Z · LW(p) · GW(p)

Would recommend the book. I frequently use the models and frames he puts forward in it, and as someone who's read only a small number of Robin's blog posts, it seems like a lot of his blogging is connected to the ideas he puts in that book.

comment by eigen · 2019-12-01T14:32:16.003Z · LW(p) · GW(p)

Has anyone re-read the Sequences? Did you find value in doing so?

Further, I do think the comments on each of the essays are worth reading, something I did not do the first time. I can pinpoint a few comments from people in this community that were very insightful! I wonder if I lost something by not participating, or by not having read all the comments while I was reading the Sequences.

Replies from: AnnaSalamon, Hazard, Viliam, habryka4
comment by AnnaSalamon · 2019-12-02T06:53:22.351Z · LW(p) · GW(p)

I've reread portions of the Sequences, and have derived notable additional value from it. Particularly fruitful at one point (many years ago) was when I reread a bunch of the "Map and territory" stuff (Noticing Confusion; Mysterious Answers to Mysterious Questions; Fake Beliefs) while substituting in examples of "my beliefs about myself" in place of all of Eliezer's examples -- because somehow that was a different domain I hadn't trained the concepts on when I read it the first time.

I plan to probably do more such exercises soon. I've found "check where my trigger-action patterns are and aren't matching the normative patterns suggested by the Sequences, and design exercises to investigate this" pretty useful in general, and it's been ~5 years since I've done it, which seems like time for a re-do.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-02T07:09:28.435Z · LW(p) · GW(p)

I'd love to see exercises for "Lonely Dissent" [? · GW].

comment by Hazard · 2019-12-01T20:22:48.352Z · LW(p) · GW(p)

I haven't done a full re-read, but I have re-read certain chapters. It was hella helpful. The experience was often, "Ohhhh, I only got the shadow of the idea on my first pass; it's grown since then but has been scattered, and the reread let me unify the ideas and feel confident I'm now getting the core idea and its repercussions."

comment by Viliam · 2019-12-01T20:48:52.847Z · LW(p) · GW(p)

I did, but my last re-reading was long ago, so I don't remember the exact impression.

The comments are sometimes nice, but together they make the already-too-long Sequences ten times longer. It would be nice to pick (and edit, if necessary) the best comments for the book version. I usually recommend reading the book instead of the web version, precisely because it is better to read the entire book than to read 10% of the web version and then decide it is too much. So I think you didn't lose much by not reading the comments.

Replies from: Pattern
comment by Pattern · 2019-12-02T01:18:13.281Z · LW(p) · GW(p)
It would be nice to pick (and edit, if necessary) the best comments for the book version.

A roundup like that would be valuable.

I usually recommend reading the book instead of web, precisely because it is better to read the entire book than to read 10% of the web version and then decide it is too much.

Does this consideration apply to re-reads as strongly?

Replies from: Viliam, habryka4
comment by Viliam · 2019-12-04T17:33:21.998Z · LW(p) · GW(p)
Does this consideration apply to re-reads as strongly?

Reading the entire Sequences with all comments seems like an enormous waste of time; that's a ton of text. Your time would be better spent reading a few other books, I think.

That's just my opinion, though; see other comments.

comment by habryka (habryka4) · 2019-12-02T01:20:57.259Z · LW(p) · GW(p)

Hmm, I like this idea. I've been thinking of ways to curate and synthesize comment sections for a while, and the original sequences might be a good place to put that in action. 

Replies from: Viliam
comment by Viliam · 2019-12-04T17:38:03.607Z · LW(p) · GW(p)

It would be nice to have a "comment synthesis" that is written sufficiently long after the debate ended (not sooner than one month after publishing the original article?).

By the way, if you do this for many articles in the Sequences, perhaps you could also afterwards join those reactions into one big "community reaction to the Sequences", as a new article where people could read it all in one place.

comment by habryka (habryka4) · 2019-12-01T19:14:00.747Z · LW(p) · GW(p)

I've reread them about 3-4 times. Two of those times were with comments (the first time and the most recent time). I found reading the comments quite valuable.

comment by eigen · 2019-08-28T16:27:08.939Z · LW(p) · GW(p)

Last week a rather random thought came immediately to mind. It was about the things I use and visit daily. It was something like this:

What has reddit.com provided you this week?

I could not find a single thing from looking at Reddit every day that benefited me; I could not take a single insight from all the news and discussion I read. Of course, what came naturally afterwards was to cut off all interaction with Reddit.

Instead of visiting Reddit I shifted my focus and started reading books I had on hold, and, needless to say, I have much more to show for these books than for lurking the web aimlessly.

I want to hold off a little on using this question for other things (like lesswrong.com), but I know that I will do it regardless.

So what has x provided you this week?

Replies from: matthew-barnett, Viliam, Raemon
comment by Matthew Barnett (matthew-barnett) · 2019-08-28T22:02:41.948Z · LW(p) · GW(p)

I tend to get "nothing" from Reddit in the sense you described. In other words, I can't distill any insight from what I've been reading. However, I think this is more general than something that just happens on Reddit or time-wasting websites.

Sometimes I'll study something for an hour or two and still can't distill what I learned into a few sentences. I think the human brain is better at retrieving knowledge upon inquiry than at generating it on demand. Summarizing what we learned is a much harder thing to do for any type of task.

Replies from: eigen
comment by eigen · 2019-08-28T22:29:16.624Z · LW(p) · GW(p)

I agree, but I think the answer to the immediate-inquiry question is clearer if I shift my time to books or specific blogs instead of a subreddit, where I may be liable to read mindless conversations (sometimes even engage in them!).

About on-demand retrieval (this is somewhat off-topic, but it relates to how much we can retrieve after learning, and how often we plateau): I've found that by embedding Anki in my learning, I can stop worrying about immediate retrieval (go on learning, even changing subjects) and the Anki cards will take care of that!

comment by Viliam · 2019-08-30T20:39:09.826Z · LW(p) · GW(p)

I don't read Reddit, but I have a similar experience with Hacker News. While I am reading it, it seems interesting, but when I afterwards try to remember anything useful, I can't.

My explanation is that I spend my time reading, but I don't spend my time processing what I have just read, because I am immediately moving to the next topic. Passivity is bad for remembering. (Compare with how spaced repetition learning software requires you to guess the correct answer, before telling you. Or how the mere act of note-taking improves remembering, even if you don't read your notes afterwards.) But again, reading without actively working with the topic seems to be the default approach when reading sites such as Reddit that throw a lot of content at you. With active engagement, my procrastination sessions wouldn't take an hour or two, but the entire day.

Seems like the rule is that you can only meaningfully process a limited amount of topics during a day. Reading a book seems like about the right amount. Also, the things in the book are related to each other, it is not a random mix of unrelated facts. (Related things are easier to remember than unrelated ones. Even if you make up a silly relation between them; a few mnemonic techniques are based on that.)
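Viliam's point about guessing the answer before being shown it is the core of how spaced-repetition schedulers like Anki work. As a rough illustration only (a simplified SM-2-style update; the names and constants here are made up for the sketch and are not Anki's actual algorithm):

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # growth factor applied on each success

def review(card: Card, recalled: bool) -> Card:
    """Update a card after one active-recall attempt.

    The guess-first step matters: the scheduler only learns whether
    you could *produce* the answer, not whether you recognized it
    after being told.
    """
    if recalled:
        card.interval *= card.ease            # push the next review further out
        card.ease = min(card.ease + 0.1, 3.0)
    else:
        card.interval = 1.0                   # lapsed: restart the ladder
        card.ease = max(card.ease - 0.2, 1.3)
    return card
```

Each successful recall multiplies the interval, which is why, as eigen describes below, you can drop a subject and move on: the scheduler resurfaces the material just before you would otherwise forget it.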

Replies from: eigen
comment by eigen · 2019-08-31T00:42:19.603Z · LW(p) · GW(p)

You are very on point with passivity being bad for remembering, completely agree.

Seems like the rule is that you can only meaningfully process a limited amount of topics during a day.

I think I'm starting to disagree with this. (Weird phrasing, but I'll explain.)

For the longest time I thought I had at most only a few hours to learn/study in a day. But what happened was that I pretty much overloaded my working memory with a particular subject and then tried to keep building on that, and it reached a point where I just could not keep up (maybe four hours straight on one subject). When I started changing subjects (and using much more Anki, which plays the biggest role here), I found that I could keep going and dedicate another four hours to another subject, knowing that Anki takes care that I don't forget anything from either subject.

I think, more or less, the same idea applies here, as you remark:

  • Twitter: One tweet, a few comments, and then I drop it all from my memory. On to the next tweet.
  • Reddit: One post, a few comments, and on to the next post.

What I'm trying to say is that you can read a book, drop it, and then go on to the next, and the same applies to learning. You don't have to read just one book, and you don't have to study only one subject in a day.
Replies from: Viliam
comment by Viliam · 2019-08-31T12:05:48.990Z · LW(p) · GW(p)

I agree; reading a book... and then reading a book on a different topic when you've had too much of the former... seems like a good approach.

Actually, school seems to be designed this way, if you assume that 45 minutes is the optimal time to spend on one subject. (Which is probably wrong, and also depends on age, subject, etc. But the idea of "focus on X for a nontrivial time, then focus on Y" is there.)

comment by Raemon · 2019-08-28T21:18:18.009Z · LW(p) · GW(p)

Somewhat surprised the answer for Reddit was nothing. It didn't provide you with jokes, or an opportunity to chill while reading moderately interesting comments?

(Which is not to say that those things are worth it, just surprised that they weren't on the list to be evaluated)

Replies from: eigen
comment by eigen · 2019-08-28T21:50:42.581Z · LW(p) · GW(p)

Reddit, of course, is an example; the same can be asked of Facebook, Twitter, a group of friends and of course Lesswrong.com.

But in the case of Reddit, I usually frequent subreddits like /r/slatestarcodex, /r/MachineLearning, maybe communities like /r/rust, and I don't dare go anywhere near the front page or /r/popular; it's like someone shoving a magazine in my face while I'm walking down the street. (I'm trying to be more deliberate about what I consume around the internet, so I don't go anywhere near feeds, such as the YouTube home page or things like that; an extension like Distract Free Youtube for Chrome works great here.)

Indeed I find value on Reddit, but only in restricted and very focused discussions that I'm already searching for, like entering /r/SeanCarroll to see what people are saying about a certain podcast episode. As for funny comments (usually my friends or family send me memes, and I cannot avoid those!), I think I may be better off considering a stand-up special by Dave Chappelle or something like that!

Or there's always the other option, which is that I will end up going back; but at least I can say that I did the test!

Replies from: Raemon
comment by Raemon · 2019-08-28T22:02:27.559Z · LW(p) · GW(p)

I think that all makes sense. My response was prompted by some kind of wariness around "if one only acknowledges 'virtuous sounding' things that reddit/facebook/etc has provided you, you may be setting yourself up to be at war with yourself. If you systematically remove things that are 'merely' mindless fun, you may find yourself suddenly depressed or unmotivated without understanding why."

When I asked "what has Facebook provided me this last week", several answers immediately came to mind which weren't, like, super-obviously important, or better than whatever I'd have gotten without Facebook, but they included amusement, and at least slight connection to friends I don't normally see.

I think it's quite good to notice things like "the stuff Facebook/Reddit/etc. provides isn't actually very good compared to what else I could be getting." But if your answer is "literally zero", I think you're more likely to be rounding things off to "what can I legibly understand as good", which is a very different question than "what has X provided me with?"

Replies from: eigen
comment by eigen · 2019-08-28T22:35:19.962Z · LW(p) · GW(p)

Yes, your comment makes me think maybe the post should be named "Beware of demands of goodness" à la Scott [? · GW]. But I have tried this before (not systematized like I'm suggesting here, but rather in a nonchalant way), and I have found that the thing I swap in for, say, Reddit is usually better by general standards. I've done this with Facebook, maybe TV shows, etc...

The good thing about being mindful is that it catches us if we're going adrift. Like, if I can tell I'm missing something, then the thing I cut is probably it.

comment by eigen · 2021-04-25T12:44:23.183Z · LW(p) · GW(p)

Unsong from Scott Alexander is a masterpiece. 

An unexplored land within LessWrong is where the objective world meets the narrative world. Science dedicates its time almost exclusively to objective facts (what is), but that is hardly our everyday life. The world we live in is full of emotions, motivations, pain, and joy. This is the world of stories that constrain and inform action. Rather than brush it aside, we should explore it as an important puzzle piece of instrumental rationality. (I think this is what Unsong is kind of about, along with other works in the aptly named category of "Rational Fiction.")

Replies from: eigen
comment by eigen · 2021-04-25T18:41:47.673Z · LW(p) · GW(p)

The idea of ontological flexibility [LW · GW] hints at this.

comment by eigen · 2019-12-24T16:42:58.598Z · LW(p) · GW(p)

Happy Christmas and Merry Chanukah!

comment by eigen · 2021-04-25T00:37:31.994Z · LW(p) · GW(p)

One of the things that enticed me about LessWrong is having a concrete and easy way to call someone a "rationalist": namely, someone who has read the three books from the Library section (the Sequences, HPMOR, and The Codex).

After that, the curated sequences and the Concepts page. I just think it's a wonderfully easy way to define concepts and create a shared vocabulary while building on top of it. I hope that with time it gets expanded to encompass more books and sequences.

  • The term "rationalist" then falls short, but this is how I can easily draw the line; I am sympathetic to those who want to change the name.

comment by eigen · 2024-11-25T14:53:30.519Z · LW(p) · GW(p)

I'm curious to know, for anyone who has read a lot of Yudkowsky's and Scott Alexander's writings (I even read them for entertainment), how they are feeling about the advances in AI -- all happening so fast and at such magnitude.

Replies from: Seth Herd, elityre
comment by Seth Herd · 2024-11-25T22:39:23.245Z · LW(p) · GW(p)

Yudkowsky's views can now be found mostly on Twitter. He is very pessimistic, for reasons described in detail in his List of Lethalities [LW · GW] and better summarized by Zvi [LW · GW]. I'm curious about Alexander's current views - I don't keep up on Astral Codex Ten.

To me it seems that Yudkowsky's reasons for pessimism are all good ones, but do not stack up to nearly the 99%+ p(doom) he's espoused. I've attempted to capture why that is in essentially all of my posts, but in brief form in Cruxes of disagreement on alignment difficulty [LW(p) · GW(p)], The (partial) fallacy of dumb superintelligence [LW · GW] and in a little more detail on one important point of disagreement in Conflating value alignment and intent alignment is causing confusion [LW · GW].

None of those address one of his important reasons for pessimism: humans have so far shown themselves to be just terrible at taking the dangers of AGI and the difficulties of alignment seriously. Here I think EY is too pessimistic; humans are short-sighted and argumentative as hell, but they are capable of taking serious issues seriously when they're staring them in the face. Attitudes will change when AI is obviously important, and our likely timelines are long enough for that to make at least some difference.

comment by Eli Tyre (elityre) · 2024-11-26T19:38:54.135Z · LW(p) · GW(p)

Read ~all the sequences. Read all of SSC (don't keep up with ACX).

Pessimistic about survival, but attempting to be aggressively open-minded about what will happen instead of confirmation-biasing my views from 2015.

comment by eigen · 2019-12-23T23:49:39.222Z · LW(p) · GW(p)

I've heard some critiques of the parts of the Sequences concerning quantum mechanics and consciousness, but I always considered those a demonstration of applied rationality, say, "How do we get to the correct answer by applying what we've learned?"

This is way more obvious and way clearer in Inadequate Equilibria: take a problem, a question, and deconstruct it completely. It was concise and to the point; I think it's one of the best things Eliezer has written. I cannot recommend it enough.

Replies from: agai
comment by agai · 2019-12-24T19:03:14.056Z · LW(p) · GW(p)

Comment removed for posterity.

Replies from: agai
comment by agai · 2020-01-12T07:15:15.546Z · LW(p) · GW(p)

Comment removed for posterity.

comment by eigen · 2022-04-15T04:00:41.077Z · LW(p) · GW(p)

What books are you reading? Podcast you are watching? Talks/articles to recommend? 

Replies from: yitz
comment by Yitz (yitz) · 2022-04-15T16:51:37.125Z · LW(p) · GW(p)

Reading Infinite Jest for the first time—it’s really good! I wish I could describe it in a sentence or two, but the thing is so complex I’m not sure that I can.

comment by eigen · 2019-12-23T23:53:19.979Z · LW(p) · GW(p)

I'm looking for a post by /u/wei_dai; it had something to do with deciding what to work on (or do, or study) week by week, and then updating/changing after the week (maybe in a post about UDT?). Does anyone know what I'm talking about? The search function, Wei Dai's posts, and Google have turned up nothing. Thanks for anyone's help!

Replies from: eigen