Posts

Drake Morrison's Shortform 2022-12-11T01:23:51.385Z

Comments

Comment by Drake Morrison (Leviad) on The Third Fundamental Question · 2024-11-15T18:48:48.201Z · LW · GW

I like this! Especially the Past, Present, Future framing. I usually split along epistemic and instrumental lines. So my fundamental questions were:
1. Epistemic: What do you think you know and how do you think you know it?
2. Instrumental: What are you trying to protect, and how are you trying to protect it?

I've had some notion of a third thing, but now I've got a better handle on it, thanks!

Comment by Drake Morrison (Leviad) on TurnTrout's shortform feed · 2024-10-23T06:23:20.984Z · LW · GW

I'm fond of saying, "your ethics are only opinions until it costs you to uphold them"

Comment by Drake Morrison (Leviad) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-24T00:30:53.377Z · LW · GW

The reason I think this is important is because "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates": if you write 3000 words inveighing against people who think comparative advantage means that horses can't get sent to glue factories, that doesn't license the conclusion that superintelligence Will Definitely Kill You if there are other reasons why superintelligence Might Not Kill You that don't stop being real just because very few people have the expertise to formulate them carefully.


There's a time for basic arguments, and a time for advanced arguments. I would like to see Eliezer's take on the more complicated arguments you mentioned, but this post is clearly intended to argue basics.

Comment by Drake Morrison (Leviad) on Kinds of Motivation · 2024-07-14T17:57:30.266Z · LW · GW

I believe DaystarEld was talking about this in various places at LessOnline. They've got a sequence going in more depth here: Procedural Executive Function, Part 1 

Comment by Drake Morrison (Leviad) on Drake Morrison's Shortform · 2024-06-21T16:11:03.399Z · LW · GW

If they don't tell you how to hold them accountable, it's a Chaotic intention, not a Lawful commitment.

Comment by Drake Morrison (Leviad) on Nathan Young's Shortform · 2024-05-24T20:40:01.798Z · LW · GW

What do you mean by "necessary truth" and "epistemic truth"? I'm sorta confused about what you are asking.

I can be uncertain about the 1000th digit of pi. That doesn't make the digit being 9 any less valid. (Perhaps what you mean by necessary?) Put another way, the 1000th digit of pi is "necessarily" 9, but my knowledge of this fact is "epistemic". Does this help?

Comment by Drake Morrison (Leviad) on LessWrong's (first) album: I Have Been A Good Bing · 2024-05-14T04:24:50.912Z · LW · GW

For what it's worth, I find the Dath Ilan song to be one of my favorites. Upon listening I immediately wanted this song to be played at my funeral. 

There's something powerful there, which can be dangerous, but it's a kind of feeling that I draw strength and comfort from. I specifically like the phrasing around sins and forgiveness, and expect it to be difficult to engender the same comfort or strength in me without it. Among my friends I'm considered a bit weird in how much I think about grief and death and loss. So maybe it's a weird psychology thing. 

Comment by Drake Morrison (Leviad) on What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety? · 2024-04-30T19:40:32.446Z · LW · GW

If you can code, build a small AI with the fast.ai course. This will (hopefully) be fun while also showing you particular holes in your knowledge to improve, rather than a vague feeling of "learn more". 

If you want to follow along with more technical papers, you need to know the math of machine learning: linear algebra, multivariable calculus, and probability theory. For Agent Foundations work, you'll need more logic and set theory type stuff. 

MIRI has some recommendations for textbooks here. There's also the Study Guide and this sequence on leveling up.

3blue1brown's YouTube channel has good videos for a lot of this, if that's the medium you like. 

If you like non-standard fiction, some people like Project Lawful.

At the end of the day, it's not a super well-defined field that has clear on-ramps into the deeper ends. You just gotta start somewhere, and follow your curiosity. Have fun!

Comment by Drake Morrison (Leviad) on Open Thread Spring 2024 · 2024-04-17T20:18:35.953Z · LW · GW

Feature Suggestion: add a number to the hidden author names.

I enjoy keeping the author names hidden when reading the site, but find it difficult to follow comment threads when there isn't a persistent ID for each poster. I think a number would suffice while keeping the names hidden.
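
A rough sketch of what I mean (made-up names and hashing, not anything LessWrong actually does): give each hidden author a number that's stable within a post but doesn't link them across posts.

```typescript
// Sketch: a stable per-post number for each hidden author. The same person
// keeps one number throughout a comment thread, but gets a different number
// on other posts, so the names stay effectively hidden. Illustrative only.
function anonNumber(authorId: string, postId: string, modulus = 1000): number {
  const input = `${postId}:${authorId}`;
  let hash = 5381; // simple djb2-style string hash; anything keyed would do
  for (let i = 0; i < input.length; i++) {
    hash = ((hash * 33) ^ input.charCodeAt(i)) >>> 0;
  }
  return hash % modulus;
}

console.log(anonNumber("user-abc", "post-123")); // same number every time on this post
console.log(anonNumber("user-abc", "post-456")); // different number on another post
```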

Comment by Drake Morrison (Leviad) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T14:50:24.331Z · LW · GW

This has unironically increased the levels of fun in my life

Comment by Drake Morrison (Leviad) on 'Empiricism!' as Anti-Epistemology · 2024-03-15T17:18:07.100Z · LW · GW

If you already have the concept, you only need a pointer. If you don't have the concept, you need the whole construction. [1]

  1. ^
Comment by Drake Morrison (Leviad) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T14:42:43.697Z · LW · GW

Yay! I've always been a big fan of the art you guys did on the books. The Least Wrong page has a sort of official-magazine feel that I like, thanks to the extra design. 

Comment by Drake Morrison (Leviad) on Deep and obvious points in the gap between your thoughts and your pictures of thought · 2024-02-23T14:54:04.802Z · LW · GW

Similar to Wisdom cannot be Unzipped.

Comment by Drake Morrison (Leviad) on 2023 Unofficial LessWrong Census/Survey · 2023-12-02T22:30:44.961Z · LW · GW

Completed the survey. I liked the additional questions you added, and the overall work put into this. Thanks!

Comment by Drake Morrison (Leviad) on A thought about the constraints of debtlessness in online communities · 2023-10-14T02:19:32.639Z · LW · GW

Oh, got it. 

I mean, that still sounds fine to me? I'd rather know about a cool article because it's highly upvoted (and the submitter getting money for that) than not know about the article at all. 

If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like Substack.)

Comment by Drake Morrison (Leviad) on A thought about the constraints of debtlessness in online communities · 2023-10-14T01:50:50.616Z · LW · GW

You get money for writing posts that people like. Upvoting posts doesn't get you money. I imagine that creates an incentive to write posts. Maybe I'm misunderstanding you?

Comment by Drake Morrison (Leviad) on A thought about the constraints of debtlessness in online communities · 2023-10-08T03:28:12.156Z · LW · GW

non.io is a Reddit clone that costs $1 to subscribe, and it splits that money among the users you upvote, in proportion to how much you upvote them. I think it's an interesting idea worth watching.
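
As I understand the model (a rough sketch of the general idea, with made-up names, not non.io's actual code or fee structure), the split is just proportional to your upvotes:

```typescript
// Sketch: divide one subscriber's fee among the users they upvoted,
// in proportion to how many upvotes each user received from them.
function splitSubscription(
  feeDollars: number,
  upvotesByUser: Record<string, number>
): Record<string, number> {
  const total = Object.values(upvotesByUser).reduce((a, b) => a + b, 0);
  const payouts: Record<string, number> = {};
  for (const [user, count] of Object.entries(upvotesByUser)) {
    payouts[user] = total > 0 ? (feeDollars * count) / total : 0;
  }
  return payouts;
}

// A subscriber who upvoted alice 6 times and bob 2 times this month:
console.log(splitSubscription(1, { alice: 6, bob: 2 }));
// { alice: 0.75, bob: 0.25 }
```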

Comment by Drake Morrison (Leviad) on Cohabitive Games so Far · 2023-09-29T19:56:53.468Z · LW · GW

Maybe? I've not played it all that much, honestly. I was simply struck by the neat way it interacted with multiple players. 

I think it could be easily tweaked or house-ruled to be a peacewager game by just revealing all the hidden information. Next time I play I'll probably try it out this way. 

Comment by Drake Morrison (Leviad) on Cohabitive Games so Far · 2023-09-28T23:59:38.431Z · LW · GW

War of Whispers is a semi-cooperative game where you play as cults directing nations in their wars. The reason it's cooperative is that each player's cult can change which nation it supports. So you can end up negotiating and cooperating with other players to boost a particular nation, because you both get points for it. 

Both times I've played people started on opposite sides, then ended up on the same or nearly the same side. In one of the games two players tied. 

There is still point-counting, so it doesn't quite fit what you're going for here, but it's the closest game I know of where multiple players can start negotiating for mutual aid and both win. 

Comment by Drake Morrison (Leviad) on The point of a game is not to win, and you shouldn't even pretend that it is · 2023-09-28T23:31:46.318Z · LW · GW

I think this is pointing at something real. Have you looked at any of the research with the MDA Framework used in video game development?

There are lots of reasons a group (or individual) goes to play a game. This framework found the reasons clustering into these 8 categories: 

  1. Sensation (the tactile senses: enjoying the shiny coins, or the clacking of dice)
  2. Challenge (the usual "playing to win" but also things like speedrunners)
  3. Narratives (playing for the story, the characters and their actions)
  4. Fantasy (enjoyment of a make-believe world. Escapism)
  5. Fellowship (hanging out with your buds, insider jokes, etc.)
  6. Discovery (learning new things about the game, revealing a world and map, metroidvania-style games)
  7. Expression (spending 4 hours in the character creation menu)
  8. Abnegation (cookie cutter games, games to rest your mind and not think about things)

The categories are not mutually exclusive by any means, and I think the framework is pointing at the same thing this post is pointing at: namely, where the emotional investment of the player lies. 

Comment by Drake Morrison (Leviad) on Navigating an ecosystem that might or might not be bad for the world · 2023-09-28T02:41:30.712Z · LW · GW

Oh, that's right. I keep forgetting that LessWrong karma does the weighting thing. 

Comment by Drake Morrison (Leviad) on Navigating an ecosystem that might or might not be bad for the world · 2023-09-16T05:47:55.782Z · LW · GW

Has anyone tried experimenting with EigenKarma? It seems like it or something like it could be a good answer for some of this.
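
My rough understanding of the idea (a generic sketch of eigenvector-style trust propagation, not EigenKarma's actual algorithm) is that trust flows through the graph of who endorses whom, PageRank-style, seeded by your own judgments:

```typescript
// Sketch: personalized-PageRank-style trust over an endorsement graph.
// trust[i][j] is the fraction of user i's endorsement going to user j
// (each row sums to 1). Generic illustration, not EigenKarma's code.
function propagateTrust(
  trust: number[][],
  seed: number[], // your direct trust in each user, summing to 1
  damping = 0.85,
  iterations = 50
): number[] {
  const n = seed.length;
  let scores = [...seed];
  for (let iter = 0; iter < iterations; iter++) {
    const next = new Array(n).fill(0);
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        next[j] += damping * scores[i] * trust[i][j];
      }
    }
    for (let j = 0; j < n; j++) {
      next[j] += (1 - damping) * seed[j]; // keep pulling back toward your own seed
    }
    scores = next;
  }
  return scores; // higher = more trusted, from your vantage point
}

// Three users; you directly trust user 0, who mostly endorses user 2.
const graph = [
  [0, 0.2, 0.8],
  [0.5, 0, 0.5],
  [0.5, 0.5, 0],
];
console.log(propagateTrust(graph, [1, 0, 0]));
```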

Comment by Drake Morrison (Leviad) on Assume Bad Faith · 2023-08-27T01:06:25.141Z · LW · GW

I think this elucidates the "everyone has motives" issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me. I'm confused how psychoanalysis is supposed to work without devolving. 

For example, let's say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?

In the simple case, I know what my intentions are and they either trust me when I tell them or they don't. 

It's the cases when people can't explain themselves that are tricky. Not everyone has the introspective skill, or verbal fluency, to explain their reasoning. I'm not really sure what to do in those cases other than asking the person I'm psychoanalyzing if that's what's happening. 

Comment by Drake Morrison (Leviad) on Book Launch: "The Carving of Reality," Best of LessWrong vol. III · 2023-08-17T22:08:49.341Z · LW · GW

Someone did a lot of this already here. Might be worth checking their script to use yourself.

Comment by Drake Morrison (Leviad) on Practical ways to actualize our beliefs into concrete bets over a longer time horizon? · 2023-04-21T01:52:16.587Z · LW · GW

I think what you are looking for is prediction markets. The ones I know of are:

  1. Manifold Markets - play-money markets that are easy and simple to use
  2. Metaculus - a more serious forecasting platform with more complex tools (reputation and track records rather than real-money bets)
  3. PredictIt - real-money markets, focused on US politics

Comment by Drake Morrison (Leviad) on Moderation notes re: recent Said/Duncan threads · 2023-04-19T23:24:19.663Z · LW · GW

I don't see all comments as criticism. Many comments are of the building up variety! It's that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times. 

Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.

The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by only allowing 3 comments per day per post. 

Comment by Drake Morrison (Leviad) on Moderation notes re: recent Said/Duncan threads · 2023-04-19T00:01:57.764Z · LW · GW

Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking. 

Both types of content are needed. Writing posts pattern-matches with Babbling/Building, whereas writing comments matches more closely with Pruning/Breaking. In my mind anyway. (update: prediction market)

Inspired by this post I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way if you have a disagreement or are misunderstanding a post there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turnover.)

Obviously the exact ratio doesn't have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.
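
To make the mechanism concrete, here's a minimal sketch of the per-post limit I have in mind (made-up names, not an actual LessWrong feature):

```typescript
// Sketch: rate-limit each user's comments per post (not site-wide).
// Illustrative only; the limit is whatever ratio the mods pick.
const COMMENTS_PER_POST_LIMIT = 3;

// commentCounts maps "userId:postId" -> comments made on that post so far.
function canComment(
  commentCounts: Map<string, number>,
  userId: string,
  postId: string,
  limit = COMMENTS_PER_POST_LIMIT
): boolean {
  return (commentCounts.get(`${userId}:${postId}`) ?? 0) < limit;
}

function recordComment(
  commentCounts: Map<string, number>,
  userId: string,
  postId: string
): void {
  const key = `${userId}:${postId}`;
  commentCounts.set(key, (commentCounts.get(key) ?? 0) + 1);
}
```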

  1. ^

    I'm not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post. 

Comment by Drake Morrison (Leviad) on Moved from Moloch's Toolbox: Discussion re style of latest Eliezer sequence · 2023-03-28T05:45:10.960Z · LW · GW

If you feel like it should be written differently, then write it differently! Nobody is stopping you. Write a thousand roads to Rome

Could Eliezer have written it differently? Maybe, maybe not. I don't have access to his internal writing cognition any more than you do. Maybe this is the only way Eliezer could write it. Maybe he prefers it this way, I certainly do.

Light a candle, don't curse the darkness. Build, don't burn. 

Comment by Drake Morrison (Leviad) on Open & Welcome Thread — March 2023 · 2023-03-15T03:16:32.107Z · LW · GW

I used this link to make my own, and it seems to work nicely for me thus far. 

Comment by Drake Morrison (Leviad) on Open & Welcome Thread — February 2023 · 2023-02-28T21:02:14.763Z · LW · GW

This sequence has been a favorite of mine for finding little drills or exercises to practice overcoming biases.

Comment by Drake Morrison (Leviad) on Beginning to feel like a conspiracy theorist · 2023-02-28T20:57:13.100Z · LW · GW

https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness

"Cult" and "Not-Cult" aren't two separate categories; cultishness is a spectrum that all human groups live on. 

Comment by Drake Morrison (Leviad) on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-28T20:40:30.338Z · LW · GW

I agree wholeheartedly that the intent of the guidelines isn't enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?

If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better. 

An analogy I keep thinking of is the TypeScript vs JavaScript tradeoff when programming with a team. Unless you have a weird special case, it's just straight up more useful to work with other people's code when the type signatures are explicit. There's less guessing, and therefore fewer mistakes. Yes, there are tradeoffs: you gain better understanding at the slight cost of a bit more implementation code. 

The thing is, you pay that cost anyway. You either pay it upfront, and people can make smoother progress with fewer mistakes, or they make mistakes and have to figure out the type signatures the hard way. 
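
A toy example of the tradeoff (my own illustration, not from the post):

```typescript
// In plain JavaScript, every reader has to reverse-engineer what this
// expects and returns:
//   function settle(account, txns) { ... }

// With the signature written out once, the guessing is paid for up front
// by the author instead of repeatedly by every reader.
interface Transaction {
  amountCents: number;
  posted: boolean;
}

function settle(balanceCents: number, txns: Transaction[]): number {
  return txns
    .filter((t) => t.posted)
    .reduce((total, t) => total + t.amountCents, balanceCents);
}

// A reader immediately knows amounts are integer cents and that unposted
// transactions are ignored: no guessing, fewer mistakes.
console.log(settle(10_000, [{ amountCents: 250, posted: true }])); // 10250
```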

Either people distinguish explicitly between their observations and their inferences, or everyone spends extra time and makes predictable mistakes until the participants in the discourse figure out the distinction during the course of the conversation. If they can't, then the conversation doesn't go anywhere on that topic. 

I don't see any way of getting around this if you want to avoid making dumb mistakes in conversation. Not every change is an improvement, but every improvement is necessarily a change. If we want to raise the sanity waterline and have discourse that more reliably leads to us winning, we have to change things. 

Comment by Drake Morrison (Leviad) on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-27T22:01:56.480Z · LW · GW

Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:

  • Measure twice before you cut the steel
  • Double-check your fittings before you test the engine
  • Keep track of which direction the axle is supposed to turn for the type of engine you are making
  • etc.

The point of the guidelines isn't to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable mistakes. The guidelines are about helping people move past those mistakes so they can actually build an engine that has a chance of working.

The point of "rationalist guidelines" isn't to enforce a norm of making particular types of beliefs. They exist to help groups of people stay connected to reality at all. People make consistent, predictable mistakes. The guidelines are for helping people avoid them. Regardless of what those beliefs are. 

Comment by Drake Morrison (Leviad) on On Investigating Conspiracy Theories · 2023-02-23T17:33:57.295Z · LW · GW

As always, the hard part is not saying "Boo! conspiracy theory!" and "Yay! scientific theory!"

The hard part is deciding which is which.

Comment by Drake Morrison (Leviad) on You Don't Exist, Duncan · 2023-02-04T01:41:22.172Z · LW · GW

Wow, this hit home in a way I wasn't expecting. I ... don't know what else to say. Thanks for writing this up, seriously. 

Comment by Drake Morrison (Leviad) on Basics of Rationalist Discourse · 2023-01-27T06:24:13.801Z · LW · GW

see the disconnect—the reason I think X is better than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad."

I think the X's and Y's got mixed up here. 

Otherwise, this is one of my favorite posts. Some of the guidelines are things I had already figured out and try to follow but most of them were things I could only vaguely grasp at. I've been thinking about a post regarding robust communication and internet protocols. But this covers most of what I wanted to say, better than I could say it. So thanks!

Comment by Drake Morrison (Leviad) on Lars Doucet's Georgism series on Astral Codex Ten · 2023-01-16T05:41:22.090Z · LW · GW

The Georgism series was my first interaction with a piece of economic theory that tried to make sense by building a different model than anything I had seen before. It was clear and engaging. It has been a primary motivator in my learning more about economics. 

I'm not sure how the whole series would work in the books, but the review of Progress and Poverty was a great introduction to all the main ideas. 

Comment by Drake Morrison (Leviad) on Sazen · 2022-12-21T19:29:17.817Z · LW · GW

Related: Wisdom cannot be unzipped

Reading Worth the Candle with a friend gave us a few weird words that are sazen in and of themselves. Being able to put a word to something lets you get a handle on it so much better. Thanks for writing this up. 

Comment by Drake Morrison (Leviad) on What's the best time-efficient alternative to the Sequences? · 2022-12-16T21:54:53.822Z · LW · GW

If the Highlights are too long, then print off a single post from each section. If that's too long, print off your top three. If that's too long, print off one post. 

Summarizing the post usually doesn't help, as you've discovered. So I'm not really sure what else to tell you. You have a lot of curated options to choose from to start. The Highlights, the Best of LessWrong, the Curated Sequences, Codex. Find stuff you like, and print it off for your friend. 

Or, alternatively, tell them about HPMOR. That's how I introduced myself to the concepts, in a story where the protagonist had need of them, so the techniques stuck with me. 

Comment by Drake Morrison (Leviad) on What's the best time-efficient alternative to the Sequences? · 2022-12-16T21:17:05.247Z · LW · GW

If you have some of the LessWrong books, I would recommend those. They are small little books that you can easily lend out. That's what I've thought of doing before. 

Really, starting is the hard part. Once I saw the value I was getting out of the sequences and other essays, I wanted to read more. So share a single essay, or lend a small book. Start small, and then if you are getting value out of it, continue. 

You don't have to commit to reading the whole Sequences before you start. Just start with one essay from the highlights, when you feel like it. They're not super long. The enduring, net positive change that you are looking for cannot be shortcut. After all, Wisdom Cannot Be Unzipped. 

Think of the sequences as a full course on rationality. You don't introduce a friend who doesn't know calculus to math by showing them the whole textbook and telling them they should read it. You show them a little problem, and demonstrate that the tools you learned in calculus help you solve that problem. Do the same with rationality. 

The art must have an end other than itself or it collapses into infinite recursion. Have a problem in mind when you read the sequences, try and see what will help you solve it. Having a problem gives you a reason to apply it, and can motivate you into learning more. Have some fun while you're at it! This stuff is cool!

Comment by Drake Morrison (Leviad) on Drake Morrison's Shortform · 2022-12-11T01:23:51.661Z · LW · GW
  • Robust communication requires feedback: knowing you received all the packets of information, and checking that what you received matches what was sent. 
  • Building ideas vs breaking ideas. Related to Babble and Prune, but for communities. Shortform seems like a good place for ideas to develop, or babble: a place for ideas to be built together before you critique them. You can destroy a half-built idea, even if it's a good idea. 

Comment by Drake Morrison (Leviad) on The LessWrong 2021 Review: Intellectual Circle Expansion · 2022-12-11T01:20:25.635Z · LW · GW

I wrote a bunch of reviews before I realized I wasn't eligible. Oops. Maybe the review button could be disabled for folks like me?

(I don't care whether my reviews are kept or discarded, either way is fine with me)

Comment by Drake Morrison (Leviad) on How To Write Quickly While Maintaining Epistemic Rigor · 2022-12-11T00:58:12.408Z · LW · GW

Writing up your thoughts is useful. Both for communication and for clarification to oneself. Not writing for fear of poor epistemics is an easy failure mode to fall into, and this post clearly lays out how to write anyway. More writing equals more learning, sharing, and opportunities for coordination and cooperation. This directly addresses a key point of failure when it comes to groups of people being more rational. 

Comment by Drake Morrison (Leviad) on Self-Integrity and the Drowning Child · 2022-12-10T22:19:05.162Z · LW · GW

This post felt like a great counterpoint to the drowning child thought experiment, and as such I found it a useful insight. A reminder that it's okay to take care of yourself is important, especially in these times and in a community of people dedicated to things like EA and the Alignment Problem. 

Comment by Drake Morrison (Leviad) on Making Vaccine · 2022-12-10T21:56:19.837Z · LW · GW

A great example of taking the initiative and actually trying something that looks useful, even when it would be weird or frowned upon in normal society. I would like to see a post-review, but I'm not even sure if that matters. Going ahead and trying something that seems obviously useful, but weird and no one else is doing is already hard enough. This post was inspiring. 

Comment by Drake Morrison (Leviad) on Your Cheerful Price · 2022-12-10T21:42:35.022Z · LW · GW

This was a useful and concrete example of a social technique I plan on using as soon as possible. Being able to explain why is super useful to me, and this post helped me do that. Explaining explicitly the intuitions behind communication cultures is useful for cooperation. This post feels like a step in the right direction in that regard.

Comment by Drake Morrison (Leviad) on Simulacrum 3 As Stag-Hunt Strategy · 2022-12-10T21:16:40.771Z · LW · GW

A great explanation of something I've felt, but not been able to articulate. Connecting the ideas of Stag-Hunt, Coordination problems, and simulacrum levels is a great insight that has paid dividends as an explanatory tool. 

Comment by Drake Morrison (Leviad) on The Point of Trade · 2022-12-10T21:13:22.131Z · LW · GW

I really enjoyed this. Taking the time to lay this out feels more useful than just reading about it in a textbook or lecture, the same way doing a math or code problem makes it stick in my head more. One of the biggest takeaways for me was realizing that it was possible to break economic principles down this far in a concrete way that felt graspable. I think this is a good demonstration of that kind of work. 

Comment by Drake Morrison (Leviad) on In Defence of Optimizing Routine Tasks · 2022-12-10T21:04:36.146Z · LW · GW

Clearly articulating the extra costs involved is valuable. I have seen the time tradeoff before, but I hadn't thought through the other costs that I, as a human, also incur. 

Comment by Drake Morrison (Leviad) on Slack Has Positive Externalities For Groups · 2022-12-10T21:02:46.204Z · LW · GW

I really enjoyed this post as a compelling explanation of slack in a domain that I don't see referred to that often. It helped me realize the value of having "unproductive" time that is unscheduled. It's now something I consider when previously I did not.