Posts

Analogies and General Priors on Intelligence 2021-08-20T21:03:18.882Z
riceissa's Shortform 2021-03-27T04:51:43.513Z
Timeline of AI safety 2021-02-07T22:29:00.811Z
Discovery fiction for the Pythagorean theorem 2021-01-19T02:09:37.259Z
Gems from the Wiki: Do The Math, Then Burn The Math and Go With Your Gut 2020-09-17T22:41:24.097Z
Plausible cases for HRAD work, and locating the crux in the "realism about rationality" debate 2020-06-22T01:10:23.757Z
Source code size vs learned model size in ML and in humans? 2020-05-20T08:47:14.563Z
How does iterated amplification exceed human abilities? 2020-05-02T23:44:31.036Z
What are some exercises for building/generating intuitions about key disagreements in AI alignment? 2020-03-16T07:41:58.775Z
What does Solomonoff induction say about brain duplication/consciousness? 2020-03-02T23:07:28.604Z
Is it harder to become a MIRI mathematician in 2019 compared to in 2013? 2019-10-29T03:28:52.949Z
Deliberation as a method to find the "actual preferences" of humans 2019-10-22T09:23:30.700Z
What are the differences between all the iterative/recursive approaches to AI alignment? 2019-09-21T02:09:13.410Z
Inversion of theorems into definitions when generalizing 2019-08-04T17:44:07.044Z
Degree of duplication and coordination in projects that examine computing prices, AI progress, and related topics? 2019-04-23T12:27:18.314Z
Comparison of decision theories (with a focus on logical-counterfactual decision theories) 2019-03-16T21:15:28.768Z
GraphQL tutorial for LessWrong and Effective Altruism Forum 2018-12-08T19:51:59.514Z
Timeline of Future of Humanity Institute 2018-03-18T18:45:58.743Z
Timeline of Machine Intelligence Research Institute 2017-07-15T16:57:16.096Z
LessWrong analytics (February 2009 to January 2017) 2017-04-16T22:45:35.807Z
Wikipedia usage survey results 2016-07-15T00:49:34.596Z

Comments

Comment by riceissa on Writing On The Pareto Frontier · 2021-09-17T23:57:11.275Z · LW · GW

Robert Heaton calls this (or a similar enough idea) the Made-Up-Award Principle.

Comment by riceissa on Eli's shortform feed · 2021-09-17T21:09:55.725Z · LW · GW

Maybe this? (There are a few subthreads on that post that mention linear regression.)

Comment by riceissa on riceissa's Shortform · 2021-09-12T05:38:31.755Z · LW · GW

I think Discord servers based around specific books are an underappreciated form of academic support/community. I have been part of such a Discord server (for Terence Tao's Analysis) for a few years now and have really enjoyed being a part of it.

Each chapter of the book gets two channels: one to discuss the reading material in that chapter, and one to discuss the exercises in that chapter. There are also channels for general discussion, introductions, and a few other things.

Such a Discord server has elements of university courses, Math Stack Exchange, Reddit, independent study groups, and random blog posts, but is different from all of them:

  • Unlike courses (but like Math SE, Reddit, and independent study groups), all participation is voluntary so the people in the community are selected for actually being interested in the material.
  • Unlike Math SE and Reddit (but like courses and independent study groups), one does not need to laboriously set the context each time one wants to ask a question or talk about something. It's possible to just say "the second paragraph on page 76" or "Proposition 6.4.12(c)" and expect to be understood, because there is common knowledge of what the material is and the fact that everyone there has access to that material. In a subject like real analysis where there are many ways to develop the material, this is a big plus.
  • Unlike independent study groups and courses (but like Math SE and Reddit), there is no set pace or requirement to join the study group at a specific point in time. This means people can just show up whenever they start working on the book without worrying that they are behind and need to catch up to the discussion, because there is no single place in the book everyone is at. This also makes this kind of Discord server easier to set up because it does not require finding someone else who is studying the material at the same time, so there is less cost to coordination.
  • Unlike random forum/blog posts about the book, a dedicated Discord server can comprehensively cover the entire book and has the potential to be "alive/motivating" (it's pretty demotivating to have a question about a blog post which was written years ago and where the author probably won't respond; I think reliability is important for making it seem safe/motivating to ask questions).

I also like that Discord has an informal feel to it (less friction to just ask a question) and can be both synchronous and asynchronous.

I think these Discord servers aren't that hard to set up and maintain. As long as there is one person there who has worked through the entire book, the server won't seem "dead" and it should accumulate more users. (What's the motivation for staying in the server if you've worked through the whole book? I think it provides a nice review/repetition of the material.) I've also noticed that earlier on I had to answer more questions in early chapters of the book, but now there are more people who've worked through the early chapters who can answer those questions, so I tend to focus on the later chapters now.

I am uncertain how well this format would work for less technical books where there might not be a single answer to a question (which leaves room for people to give their opinions more).

(Thanks to people on the Tao Analysis Discord, especially pecfex for starting a discussion on the server about whether there are any similar servers, which gave me the idea to write this post, and Segun for creating the Tao Analysis Discord.)

Comment by riceissa on How to turn money into AI safety? · 2021-08-26T02:07:48.069Z · LW · GW

I learned about the abundance of available resources this past spring.

I'm curious what this is referring to.

Comment by riceissa on MIRI/OP exchange about decision theory · 2021-08-26T02:01:26.085Z · LW · GW

Rob, are you able to disclose why people at Open Phil are interested in learning more decision theory? It seems a little far away from the AI strategy reports they've been publishing in recent years, and it also seemed like they were happy to keep funding MIRI (via their Committee for Effective Altruism Support) despite disagreements about the value of HRAD research, so the sudden interest in decision theory is intriguing.

Comment by riceissa on Set image dimensions using markdown · 2021-08-20T20:25:54.971Z · LW · GW

I am also running into this problem now with the Markdown editor. I switched over from the new rich editor because that one didn't support footnotes, whereas the Markdown one does. It seems like there is no editor that can both scale images and do footnotes, which is frustrating.

Edit: I ended up going with the rich editor despite broken footnotes since that seemed like the less bad of the two problems.

Comment by riceissa on Rob B's Shortform Feed · 2021-08-10T20:27:26.863Z · LW · GW

Re (a): I looked at chapters 4 and 5 of Superintelligence again, and I can kind of see what you mean, but I'm also frustrated that Bostrom seems really non-committal in the book. He lists a whole bunch of possibilities but then doesn't seem to actually come out and give his mainline visualization/"median future". For example, he looks at historical examples of technology races and compares how much lag there was, which seems a lot like the kind of thinking you are doing, but then he also says things like "For example, if human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without even touching the intermediary rungs." which sounds like the deep math view. Another relevant quote:

Building a seed AI might require insights and algorithms developed over many decades by the scientific community around the world. But it is possible that the last critical breakthrough idea might come from a single individual or a small group that succeeds in putting everything together. This scenario is less realistic for some AI architectures than others. A system that has a large number of parts that need to be tweaked and tuned to work effectively together, and then painstakingly loaded with custom-made cognitive content, is likely to require a larger project. But if a seed AI could be instantiated as a simple system, one whose construction depends only on getting a few basic principles right, then the feat might be within the reach of a small team or an individual. The likelihood of the final breakthrough being made by a small project increases if most previous progress in the field has been published in the open literature or made available as open source software.

Re (b): I don't disagree with you here. (The only part that worries me is that I don't have a good idea of what percentage of "AI safety people" shifted from one view to the other, whether there were also new people with different views coming into the field, etc.) I realize the OP was mainly about failure scenarios, but it did also mention takeoffs ("takeoffs won't be too fast") and I was most curious about that part.

Comment by riceissa on AGI will drastically increase economies of scale · 2021-08-10T00:00:42.669Z · LW · GW

I was reading parts of Superintelligence recently for something unrelated and noticed that Bostrom makes many of the same points as this post:

If the frontrunner is an AI system, it could have attributes that make it easier for it to expand its capabilities while reducing the rate of diffusion. In human-run organizations, economies of scale are counteracted by bureaucratic inefficiencies and agency problems, including difficulties in keeping trade secrets. These problems would presumably limit the growth of a machine intelligence project so long as it is operated by humans. An AI system, however, might avoid some of these scale diseconomies, since the AI’s modules (in contrast to human workers) need not have individual preferences that diverge from those of the system as a whole. Thus, the AI system could avoid a sizeable chunk of the inefficiencies arising from agency problems in human enterprises. The same advantage—having perfectly loyal parts—would also make it easier for an AI system to pursue long-range clandestine goals. An AI would have no disgruntled employees ready to be poached by competitors or bribed into becoming informants.

Comment by riceissa on Rob B's Shortform Feed · 2021-07-30T07:36:52.329Z · LW · GW

Ok I see, thanks for explaining. I think what's confusing to me is that Eliezer did stop talking about the deep math of intelligence sometime after 2011 and then started talking about big blobs of matrices, as you say, starting around 2016, but as far as I know he has never gone back to his older AI takeoff writings and been like "actually I don't believe this stuff anymore; I think hard takeoff is actually more likely to be due to EMH failure and natural lag between projects". (He has done similar things for his older writings that he no longer thinks are true, so I would have expected him to do the same for takeoff stuff if his beliefs had indeed changed.) So I've been under the impression that Eliezer actually believes his old writings are still correct, and that somehow his recent remarks and old writings are all consistent. He also hasn't (as far as I know) written up a more complete sketch of how he thinks takeoff is likely to go given what we now know about ML. So when I see him saying things like what's quoted in Rob's OP, I feel like he is referring to the pre-2012 "deep math" takeoff argument. (I also don't remember if Bostrom gave any sketch of how he expects hard takeoff to go in Superintelligence; I couldn't find one after spending a bit of time.)

If you have any links/quotes related to the above, I would love to know!

(By the way, I was a lurker on LessWrong starting back in 2010-2011, but was only vaguely familiar with AI risk stuff back then. It was only around the publication of Superintelligence that I started following along more closely, and only much later in 2017 that I started putting significant amounts of my time into AI safety and making it my overwhelming priority. I did write several timelines though, and recently did a pretty thorough reading of AI takeoff arguments for a modeling project, so that is mostly where my knowledge of the older arguments comes from.)

Comment by riceissa on Rob B's Shortform Feed · 2021-07-28T23:28:19.795Z · LW · GW

Thanks! My understanding of the Bostrom+Yudkowsky takeoff argument goes like this: at some point, some AI team will discover the final piece of deep math needed to create an AGI; they will then combine this final piece with all of the other existing insights and build an AGI, which will quickly gain in capability and take over the world. (You can search "a brain in a box in a basement" on this page or see here for some more quotes.)

In contrast, the scenario you imagine seems to be more like (I'm not very confident I am getting all of this right): there isn't some piece of deep math needed in the final step. Instead, we already have the tools (mathematical, computational, data, etc.) needed to build an AGI, but nobody has decided to just go for it. When one project finally decides to go for an AGI, this EMH failure allows them to maintain enough of a lead to do crazy stuff (conquistadors, persuasion tools, etc.), and this leads to DSA. Or maybe the EMH failure isn't even required, just enough of a clock time lead to be able to do the crazy stuff.

If the above is right, then it does seem quite different from Paul+Katja, but also different from Bostrom+Yudkowsky, since the reason why the outcome is unipolar is different. Whereas Bostrom+Yudkowsky say the reason one project is ahead is because there is some hard step at the end, you instead say it's because of some combination of EMH failure and natural lag between projects.

Comment by riceissa on Rob B's Shortform Feed · 2021-07-28T01:23:03.838Z · LW · GW

Which of the "Reasons to expect fast takeoff" from Paul's post do you find convincing, and what is your argument against what Paul says there? Or do you have some other reasons for expecting a hard takeoff?

I've seen this post of yours, but as far as I know, you haven't said much about hard vs soft takeoff in general.

Comment by riceissa on riceissa's Shortform · 2021-07-24T18:33:34.541Z · LW · GW

(I have only given this a little thought, so wouldn't be surprised if it is totally wrong. I'm curious to hear what people think.)

I've known about deductive vs inductive reasoning for a long time, but only recently heard about abductive reasoning. It now occurs to me that what we call "Solomonoff induction" might better be called "Solomonoff abduction". From SEP:

It suggests that the best way to distinguish between induction and abduction is this: both are ampliative, meaning that the conclusion goes beyond what is (logically) contained in the premises (which is why they are non-necessary inferences), but in abduction there is an implicit or explicit appeal to explanatory considerations, whereas in induction there is not; in induction, there is only an appeal to observed frequencies or statistics.

In Solomonoff induction, we explicitly refer to the "world programs" that provide explanations for the sequence of bits that we observe, so according to the above criterion it fits under abduction rather than induction.
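
For concreteness, here is the usual form of the Solomonoff prior (my notation: U is a fixed monotone universal machine, ℓ(p) is the length of program p, and M is the prior), which makes the sum over world programs explicit:

```latex
% Solomonoff prior over a finite bit string x: sum over all programs p whose
% output on the monotone universal machine U begins with x, weighted by length.
M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-\ell(p)}
```

Each program p in the sum is a candidate explanation that generates the observed bits, which is why the procedure looks more like inference to the best explanation than like an appeal to observed frequencies.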

Comment by riceissa on Jimrandomh's Shortform · 2021-07-18T22:47:55.103Z · LW · GW

What alternatives to "split-and-linearly-aggregate" do you have in mind? Or are you just identifying this step as problematic without having any concrete alternative in mind?

Comment by riceissa on Raj Thimmiah's Shortform · 2021-04-27T22:14:00.787Z · LW · GW

There is a map on the community page. (You might need to change something in your user settings to be able to see it.)

Comment by riceissa on You Can Now Embed Flashcard Quizzes in Your LessWrong posts! · 2021-04-19T18:06:33.160Z · LW · GW

I'm curious why you decided to make an entirely new platform (Thought Saver) rather than using Andy's Orbit platform.

Comment by riceissa on Using Flashcards for Deliberate Practice · 2021-04-15T01:54:41.445Z · LW · GW

Messaging sounds good to start with (I find calls exhausting so only want to do it when I feel it adds a lot of value).

Comment by riceissa on Using Flashcards for Deliberate Practice · 2021-04-15T01:36:15.122Z · LW · GW

Ah ok cool. I've been doing something like this for the past few years (this post is somewhat similar to the approach I've been using for reviewing math), so I was curious how it was working out for you.

Comment by riceissa on Using Flashcards for Deliberate Practice · 2021-04-14T19:54:30.796Z · LW · GW

Have you actually tried this approach, and if so for how long and how has it worked?

Comment by riceissa on Progressive Highlighting: Picking What To Make Into Flashcards · 2021-03-30T20:14:15.739Z · LW · GW

So there's a need for an intermediate stage between creating an extract and creating a flashcard. This need is what progressive highlighting seeks to address.

I haven't actually done incremental reading in SuperMemo so I'm not sure about this, but I believe extract processing is meant to be recursive: first you extract a larger portion of the text that seems relevant; then, when you encounter it again, the extract is treated like an original article, so you might extract just a single sentence; then, when you encounter that sentence again, you might make a cloze deletion or Q&A card.

Comment by riceissa on Progressive Highlighting: Picking What To Make Into Flashcards · 2021-03-30T06:02:37.102Z · LW · GW

This sounds a lot like (a subset of) incremental reading. Instead of highlighting, one creates "extracts" and reviews those extracts over time to see if any of them can be turned into flashcards. As you suggest, there is no pressure to immediately turn things into flashcards on a first-pass of the reading material. These two articles about incremental reading emphasize this point. A quote from the first of these:

Initially, you make extracts because “Well it seems important”. Yet to what degree (the number of clozes/Q&As) and in what formats (cloze/Q&A/both) are mostly fuzzy at this point. You can’t decide wisely on what to do with an extract because you lack the clarity and relevant information to determine it. In other words, you don’t know the extract (or in general, the whole article) well enough to know what to do with it.

In this case, if you immediately process an extract, you’ll tend to make mistakes. For example, for an extract, you should have dismissed it but you made two clozed items instead; you may have dismissed it when it’s actually very important to you, unbeknown to you at that moment. With lowered quality of metamemory judgments, skewed by all the cognitive biases, the resulting clozed/Q&A item(s) is just far from optimal.

Comment by riceissa on riceissa's Shortform · 2021-03-27T04:51:43.779Z · LW · GW

Does life extension (without other technological progress to make the world in general safer) lead to more cautious lifestyles? The longer the expected years left, the more value there is in just staying alive compared to taking risks. Since death would mean missing out on all the positive experiences for the rest of one's life, I think an expected value calculation would show that even a small risk is not worth taking. Does this mean all risks that don't get magically fixed due to life extension (for example, activities like riding a motorcycle or driving on the highway seem risky even if we have life extension technology) are not worth taking? (There is the obvious exception where if one knows when one is going to die, then one can take more risks just like in a pre-life extension world as one reaches the end of one's life.)
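
As a rough back-of-the-envelope sketch of the expected-value point (all numbers are made up purely for illustration, not real risk statistics):

```python
# Expected life-years lost from one year of a risky activity, under different
# amounts of expected remaining lifespan. All numbers are made up.

def expected_life_years_lost(p_death_per_year: float, years_remaining: float) -> float:
    """Expected loss if the activity kills you with probability p_death_per_year this year."""
    return p_death_per_year * years_remaining

p_death_per_year = 1e-4  # hypothetical annual death risk from the activity

for years_remaining in (50, 500, 5000):
    loss = expected_life_years_lost(p_death_per_year, years_remaining)
    print(f"{years_remaining:>5} expected years left: ~{loss:.2f} expected life-years lost per year of the activity")
```

The expected cost of the risk scales linearly with remaining lifespan, while the benefit of the activity presumably stays about the same, which is the intuition behind expecting more caution.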

I haven't thought about this much, and wouldn't be surprised if I am making a silly error (in which case, I would appreciate having it pointed out to me!).

Comment by riceissa on [deleted post] 2021-03-12T21:56:45.469Z

I like this tag! I think the current version of the page is missing the insight that influence gained via asymmetric weapons/institutions is restricted/inflexible, i.e. an asymmetric weapon not only helps only the "good guys" but also constrains the "good guys" into only being able to do "good things". See this comment by Carl Shulman. (I might eventually come back to edit this in, but I don't have the time right now.)

Comment by riceissa on [deleted post] 2021-03-03T00:10:00.158Z

The EA Forum wiki has stubs for a bunch of people, including a somewhat detailed article on Carl Shulman. I wonder if you feel similarly unexcited about the articles there (if so, it seems good to discuss this with people working on the EA wiki as well), or if you have different policies for the two wikis.

Comment by riceissa on Spaced Repetition Systems for Intuitions? · 2021-02-27T01:49:48.414Z · LW · GW

I also just encountered Flashcards for your soul.

Comment by riceissa on Probability vs Likelihood · 2021-02-26T18:43:10.246Z · LW · GW

Ah ok, that makes sense. Thanks for clarifying!

Comment by riceissa on Open & Welcome Thread – February 2021 · 2021-02-26T05:27:53.236Z · LW · GW

It seems to already be on LW.

Edit: oops, looks like the essay was posted on LW in response to this comment.

Comment by riceissa on [deleted post] 2021-02-26T00:04:19.519Z

I'm unable to apply this tag to posts (this tag doesn't show up when I search to add a tag).

Comment by riceissa on Learn Bayes Nets! · 2021-02-24T20:28:07.557Z · LW · GW

For people who find this post in the future, Abram discussed several of the points in the bullet-point list above in Probability vs Likelihood.

Comment by riceissa on Probability vs Likelihood · 2021-02-24T20:22:05.341Z · LW · GW

Regarding base-rate neglect, I've noticed that in some situations my mind seems to automatically do the correct thing. For example if a car alarm or fire alarm goes off, I don't think "someone is stealing the car" or "there's a fire". L(theft|alarm) is high, but P(theft|alarm) is low, and my mind seems to naturally know this difference. So I suspect something more is going on here than just confusing probability and likelihood, though that may be part of the answer.
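
To make the alarm example concrete, here is a small Bayes calculation (the numbers are made up purely for illustration):

```python
# Why a high likelihood can coexist with a low posterior: the base rate of
# theft is tiny and false alarms dominate. All numbers are made up.

p_theft = 1e-4                 # prior (base rate) that a theft is in progress
p_alarm_given_theft = 0.95     # likelihood L(theft | alarm) = P(alarm | theft), high
p_alarm_given_no_theft = 0.01  # false-alarm rate, individually small but dominant in aggregate

p_alarm = p_alarm_given_theft * p_theft + p_alarm_given_no_theft * (1 - p_theft)
p_theft_given_alarm = p_alarm_given_theft * p_theft / p_alarm

print(f"P(theft | alarm) = {p_theft_given_alarm:.2%}")  # about 0.94%: low, despite the high likelihood
```

So even a fairly reliable alarm leaves the posterior probability of theft low, which matches the intuitive reaction described above.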

Comment by riceissa on Probability vs Likelihood · 2021-02-24T19:59:39.003Z · LW · GW

I understood all of the other examples, but this one confused me:

A scenario is likely if it explains the data well. For example, many conspiracy theories are very likely because they have an answer for every question: a powerful group is conspiring to cover up the truth, meaning that the evidence we see is exactly what they'd want us to see.

If the conspiracy theory really was very likely, then we should be updating on this to have a higher posterior probability on the conspiracy theory. But in almost all cases we don't actually believe the conspiracy theory is any more likely than we started out with. I think what's actually going on is the thing Eliezer talked about in Technical Explanation where the conspiracy theory originally has the probability mass very spread out across different outcomes, but then as soon as it learns the actual outcome, it retroactively concentrates the probability mass on that outcome. So I want to say that the conspiracy theory is both unlikely (because it did not make an advance prediction) and improbable (very low prior combined with the unlikeliness). I'm curious if you agree with that or if I've misunderstood the example somehow.

Comment by riceissa on [deleted post] 2021-02-02T23:17:15.592Z

Thanks, I like your rewrite and will post questions instead in the future.

I think I understand your concerns and agree with most of them. One thing that does still feel "off" to me is: given that there seems to be a lot of in-person-only discussions about "cutting edge" ideas and "inside scoop"-like things (that trickle out via venues like Twitter and random Facebook threads, and only much later get written up as blog posts), how can people who primarily interact with the community online (such as me) keep up with this? I don't want to have to pay attention to everything that's out there on Twitter or Facebook, and would like a short document that gets to the point and links out to other things if I feel curious. (I'm willing to grant that my emotional experience might be rare, and that the typical user would instead feel alienated in just the way you describe.)

Comment by riceissa on Spaced Repetition Systems for Intuitions? · 2021-01-30T03:43:45.634Z · LW · GW

The closest thing I've seen is Unusual applications of spaced repetition memory systems.

Comment by riceissa on Judgment Day: Insights from 'Judgment in Managerial Decision Making' · 2021-01-24T19:51:20.231Z · LW · GW

For those reading this thread in the future, Alex has now adopted a more structured approach to reviewing the math he has learned.

Comment by riceissa on The new Editor · 2021-01-19T03:37:35.909Z · LW · GW

Thanks, that worked and I was able to fix the rest of the images.

Comment by riceissa on The new Editor · 2021-01-19T02:13:14.799Z · LW · GW

I just tried doing this in a post, and while the images look fine in the editor, they come out huge once the post is published. Any ideas on what I can do to fix this? (I don't see any option in the editor to resize the images, and I'm scared of converting the post to markdown.)

Comment by riceissa on [deleted post] 2021-01-18T20:48:30.790Z

Some thoughts in response:

  • I agree that it's better to focus on ideas instead of people. I might have a disagreement about how successfully LessWrong has managed this, so that from your perspective it looks like this page is pushing the status quo toward something we don't want, whereas from my perspective it looks like it's just doing things more explicitly/transparently (which I prefer).
  • I agree that writing about people can be dicey. I might have a disagreement about how well this problem can be avoided.
  • Maybe I'm misunderstanding what you mean by "defensible style", but I'm taking it to mean something like "obsession with having citations from respected sources for every assertion, like what you see on Wikipedia". So the concern is that once we allow lots of pages about people, that will force us to write defensibly, and this culture will infect pages not about people to also be written similarly defensibly. I hadn't thought of this, and I'm not sure how I feel about it. It seems possible to have separate norms/rules for different kinds of pages (Wikipedia does in fact have extra rules for biographies of living persons). But I also don't think I can point to any particularly good examples of wikis that cover people (other than Wikipedia, which I guess is sort of a counterexample).
  • I agree that summarizing his ideas or intellectual culture would be better, but that takes way more work, e.g. figuring out what this culture is/how to carve up the space, how to name it, and what his core ideas are.

Comment by riceissa on [deleted post] 2021-01-18T20:03:30.047Z

Currently the wiki has basically no entries for people (we have one for Eliezer, but none for Scott Alexander or Lukeprog for example)

There do seem to be stubs for both Scott Alexander and Lukeprog, both similar in size to this Vervaeke page. So I think I'm confused about what the status quo is vs what you are saying the status quo is.

Comment by riceissa on [deleted post] 2021-01-18T03:56:04.906Z

I'm not sure what cluster you are trying to point to by saying "wiki pages like this".

For this page in particular: I've been hearing more and more about Vervaeke, so I wanted to find out what the community has already figured out about him. It seems like the answer so far is "not much", but as the situation changes I'm excited to have some canonical place where this information can be written up. He seems like an interesting enough guy, or at any rate he seems to have caught the attention of other interesting people, and that seems like a good enough reason to have some place like this.

If that's not a good enough reason, I'm curious to hear of a concrete alternative policy and how it applies to this situation. Vervaeke isn't notable enough to have a page on Wikipedia. Maybe I could write a LW question asking something like "What do people know about this guy?" Or maybe I could write a post with the above content. A shortform post would be easy, but seems difficult to find (not canonical enough). Or maybe you would recommend no action at all?

Comment by riceissa on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2021-01-18T03:12:56.350Z · LW · GW

Thanks!

Comment by riceissa on Wiki-Tag FAQ · 2021-01-17T21:53:22.593Z · LW · GW

I tried creating a wiki-tag page today, and here are some questions I ran into that don't seem to be answered by this FAQ:

  • Is there a way to add wiki-links like on the old wiki? I tried using the [[double square brackets]] like on MediaWiki, but this did not work (at least on the new editor).
  • Is there a way to quickly see if a wiki-tag page on a topic already exists? On the creation page, typing something in the box does not show existing pages with that substring. What I'm doing right now is to look on the all tags page (searching with my browser) and also looking at the wiki 1.0 imported pages list and again searching there. I feel like there must be a better way than this, but I couldn't figure it out.
  • Is there a way to add MediaWiki-like <ref> tags? Or is there some preferred alternative way to add references on wiki-tag pages?

Comment by riceissa on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2021-01-17T21:39:24.117Z · LW · GW

The Slack invite link seems to have expired. Is there a new one I can use?

Comment by riceissa on Matt Goldenberg's Short Form Feed · 2020-12-05T20:48:04.866Z · LW · GW

That makes sense, thanks for clarifying. What I've seen most often on LessWrong is to come up with reasons for preferring simple interpretations in the course of trying to solve other philosophical problems such as anthropics, the problem of induction, and infinite ethics. For example, if we try to explain why our world seems to be simple we might end up with something like UDASSA or Scott Garrabrant's idea of preferring simple worlds (this section is also relevant). Once we have something like UDASSA, we can say that joke interpretations do not have much weight since it takes many more bits to specify how to "extract" the observer moments given a description of our physical world.

Comment by riceissa on The LessWrong 2019 Review · 2020-12-03T04:15:33.868Z · LW · GW

Thanks! That does make me feel a bit better about the annual reviews.

Comment by riceissa on The LessWrong 2019 Review · 2020-12-03T04:00:27.412Z · LW · GW

I see, that wasn't clear from the post. In that case I am wondering if the 2018 review caused anyone to write better explanations or rewrite the existing posts. (It seems like the LessWrong 2018 Book just included the original posts without much rewriting, at least based on scanning the table of contents.)

Comment by riceissa on The LessWrong 2019 Review · 2020-12-03T03:46:46.049Z · LW · GW

This is a minor point, but I am somewhat worried that the idea of research debt/research distillation seems to be getting diluted over time. The original article (which this post links to) says:

Distillation is also hard. It’s tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.

I think the kind of cleanup and polish that is encouraged by the review process is insufficient to qualify as distillation, and insufficient to adequately deal with research debt. (I know this post didn't use the word "distillation", but it does talk about research debt, and distillation is presented as the solution to debt in the original article.)

There seems to be a pattern where a term is introduced first in a strong form, then it accumulates a lot of positive connotations, and that causes people to stretch the term to use it for things that don't quite qualify. I'm not confident that is what is happening here (it's hard to tell what happens in people's heads), but from the outside it's a bit worrying.

I actually made a similar comment a while ago about a different term.

Comment by riceissa on Introduction to Cartesian Frames · 2020-12-01T21:24:21.245Z · LW · GW

So the existence of this interface implies that A is “weaker” in a sense than A’.

Should that say B instead of A', or have I misunderstood? (I haven't read most of the sequence.)

Comment by riceissa on Matt Goldenberg's Short Form Feed · 2020-12-01T10:15:45.172Z · LW · GW

Have you seen Brian Tomasik's page about this? If so what do you find unconvincing, and if not what do you think of it?

Comment by riceissa on Daniel Kokotajlo's Shortform · 2020-11-24T05:48:29.630Z · LW · GW

Would this work across different countries (and if so how)? It seems like if one country implemented such a tax, the research groups in that country would be out-competed by research groups in other countries without such a tax (which seems worse than the status quo, since now the first AGI is likely to be created in a country that didn't try to slow down AI progress or "level the playing field").

Comment by riceissa on Embedded Interactive Predictions on LessWrong · 2020-11-23T00:33:49.642Z · LW · GW

Is there a way to see all the users who predicted within a single "bucket" using the LW UI? Right now when I hover over a bucket, it will show all users if the number of users is small enough, but it will show a small number of users followed by "..." if the number of users is too large. I'd like to be able to see all the users. (I know I can find the corresponding prediction on the Elicit website, but this is cumbersome.)

Comment by riceissa on Open & Welcome Thread – November 2020 · 2020-11-19T02:48:48.148Z · LW · GW

Ok. Since visiting your office hours is somewhat costly for me, I was trying to gather more information (about e.g. what kind of moral uncertainty or prior discussion you had in mind, why you decided to capitalize the term, whether this is something I might disagree with you on and might want to discuss further) to make the decision.

More generally, I've attended two LW Zoom events so far, both times because I felt excited about the topics discussed, and both times I felt like I didn't learn anything/would have preferred the info to just be a text dump so I could skim and move on. So now I feel like I should be more confident that I will find an event useful before attending.