Posts

Drake Morrison's Shortform 2022-12-11T01:23:51.385Z

Comments

Comment by Drake Morrison (Leviad) on Open Thread Spring 2024 · 2024-04-17T20:18:35.953Z · LW · GW

Feature Suggestion: add a number to the hidden author names.

I enjoy keeping the author names hidden when reading the site, but find it difficult to follow comment threads when there isn't a persistent id for each poster. I think a number would suffice while keeping the hiddenness.

Comment by Drake Morrison (Leviad) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T14:50:24.331Z · LW · GW

This has unironically increased the levels of fun in my life

Comment by Drake Morrison (Leviad) on 'Empiricism!' as Anti-Epistemology · 2024-03-15T17:18:07.100Z · LW · GW

If you already have the concept, you only need a pointer. If you don't have the concept, you need the whole construction.
Comment by Drake Morrison (Leviad) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T14:42:43.697Z · LW · GW

Yay! I've always been a big fan of the art you guys did on the books. The Least Wrong page has a sort of official magazine feel I like due to the extra design. 

Comment by Drake Morrison (Leviad) on Deep and obvious points in the gap between your thoughts and your pictures of thought · 2024-02-23T14:54:04.802Z · LW · GW

Similar to Wisdom cannot be Unzipped.

Comment by Drake Morrison (Leviad) on 2023 Unofficial LessWrong Census/Survey · 2023-12-02T22:30:44.961Z · LW · GW

Completed the survey. I liked the additional questions you added, and the overall work put into this. Thanks!

Comment by Drake Morrison (Leviad) on A thought about the constraints of debtlessness in online communities · 2023-10-14T02:19:32.639Z · LW · GW

Oh, got it. 

I mean, that still sounds fine to me? I'd rather know about a cool article because it's highly upvoted (and the submitter getting money for that) than not know about the article at all. 

If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like substack)

Comment by Drake Morrison (Leviad) on A thought about the constraints of debtlessness in online communities · 2023-10-14T01:50:50.616Z · LW · GW

You get money for writing posts that people like. Upvoting posts doesn't get you money. I imagine that creates an incentive to write posts. Maybe I'm misunderstanding you?

Comment by Drake Morrison (Leviad) on A thought about the constraints of debtlessness in online communities · 2023-10-08T03:28:12.156Z · LW · GW

non.io is a reddit clone that costs $1 to subscribe, and then splits your subscription among the users whose content you upvote most. I think it's an interesting idea worth watching.

Comment by Drake Morrison (Leviad) on Cohabitive Games so Far · 2023-09-29T19:56:53.468Z · LW · GW

Maybe? I've not played it all that much, honestly. I was simply struck by the neat way it interacted with multiple players. 

I think it could be easily tweaked or houseruled into a peacewager game by just revealing all the hidden information. Next time I play I'll probably try it out this way.

Comment by Drake Morrison (Leviad) on Cohabitive Games so Far · 2023-09-28T23:59:38.431Z · LW · GW

War of Whispers is a semi-cooperative game where you play as cults directing nations in their wars. The reason it's cooperative is that each player's cult can change the nation it is supporting. So you can end up negotiating and cooperating with other players to boost a particular nation, because you both get points for it.

Both times I've played people started on opposite sides, then ended up on the same or nearly the same side. In one of the games two players tied. 

There is still the counting of points, so it doesn't quite fit what you are going for here, but it is the closest game I know of where multiple players can start negotiating for mutual aid and both win.

Comment by Drake Morrison (Leviad) on The point of a game is not to win, and you shouldn't even pretend that it is · 2023-09-28T23:31:46.318Z · LW · GW

I think this is pointing at something real. Have you looked at any of the research with the MDA Framework used in video game development?

There are lots of reasons a group (or individual) goes to play a game. This framework found the reasons clustering into these 8 categories: 

  1. Sensation (the tactile senses: enjoying the shiny coins, or the clacking of dice)
  2. Challenge (the usual "playing to win", but also things like speedrunning)
  3. Narrative (playing for the story, the characters and their actions)
  4. Fantasy (enjoyment of a make-believe world; escapism)
  5. Fellowship (hanging out with your buds, insider jokes, etc.)
  6. Discovery (learning new things about the game, revealing a world and map, metroidvania-style games)
  7. Expression (spending 4 hours in the character creation menu)
  8. Abnegation (cookie-cutter games; games to rest your mind and not think about things)

The categories are not mutually exclusive by any means, and I think this is pointing at the same thing this post is pointing at. Namely, where the emotional investment of the player is. 

Comment by Drake Morrison (Leviad) on Navigating an ecosystem that might or might not be bad for the world · 2023-09-28T02:41:30.712Z · LW · GW

Oh, that's right. I keep forgetting that LessWrong karma does the vote-weighting thing.

Comment by Drake Morrison (Leviad) on Navigating an ecosystem that might or might not be bad for the world · 2023-09-16T05:47:55.782Z · LW · GW

Has anyone tried experimenting with EigenKarma? It seems like it or something like it could be a good answer for some of this.

Comment by Drake Morrison (Leviad) on Assume Bad Faith · 2023-08-27T01:06:25.141Z · LW · GW

I think this elucidates the "everyone has motives" issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me. I'm confused how psychoanalysis is supposed to work without devolving. 

For example, let's say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?

In the simple case, I know what my intentions are and they either trust me when I tell them or they don't. 

It's the cases when people can't explain themselves that are tricky. Not everyone has the introspective skill, or verbal fluency, to explain their reasoning. I'm not really sure what to do in those cases other than asking the person I'm psychoanalyzing if that's what's happening. 

Comment by Drake Morrison (Leviad) on Book Launch: "The Carving of Reality," Best of LessWrong vol. III · 2023-08-17T22:08:49.341Z · LW · GW

Someone did a lot of this already here. Might be worth checking their script to use yourself.

Comment by Drake Morrison (Leviad) on Practical ways to actualize our beliefs into concrete bets over a longer time horizon? · 2023-04-21T01:52:16.587Z · LW · GW

I think what you are looking for is prediction markets. The ones I know of are:

  1. Manifold Markets - play-money that's easy and simple to use
  2. Metaculus - more serious one with more complex tools (maybe real money somehow?)
  3. PredictIt - just for US politics? But looks like real money?
Comment by Drake Morrison (Leviad) on Moderation notes re: recent Said/Duncan threads · 2023-04-19T23:24:19.663Z · LW · GW

I don't see all comments as criticism. Many comments are of the building up variety! It's that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times. 

Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions, and relations.

The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by allowing only 3 comments per day per post.

Comment by Drake Morrison (Leviad) on Moderation notes re: recent Said/Duncan threads · 2023-04-19T00:01:57.764Z · LW · GW

Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking. 

Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind anyway. (update: prediction market)

Inspired by this post I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way if you have a disagreement or are misunderstanding a post there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turn-over.)

Obviously the exact ratio doesn't have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.

  1. ^

    I'm not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post. 
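The proposal above could be sketched as a simple per-user, per-post counter. (All names here are hypothetical illustrations, not LessWrong's actual implementation.)

```typescript
// Hypothetical per-post rate limiter: each user gets a fixed number of
// comments on any given post before further replies are blocked there.
const COMMENTS_PER_POST = 3; // the ratio is a tunable parameter

// Key is `${userId}:${postId}`, value is how many comments so far.
const commentCounts = new Map<string, number>();

function canComment(userId: string, postId: string): boolean {
  const key = `${userId}:${postId}`;
  return (commentCounts.get(key) ?? 0) < COMMENTS_PER_POST;
}

function recordComment(userId: string, postId: string): void {
  const key = `${userId}:${postId}`;
  commentCounts.set(key, (commentCounts.get(key) ?? 0) + 1);
}
```

Because the counter is keyed on the user-post pair, hitting the limit on one post leaves the same user free to comment anywhere else on the site.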

Comment by Drake Morrison (Leviad) on Moved from Moloch's Toolbox: Discussion re style of latest Eliezer sequence · 2023-03-28T05:45:10.960Z · LW · GW

If you feel like it should be written differently, then write it differently! Nobody is stopping you. Write a thousand roads to Rome

Could Eliezer have written it differently? Maybe, maybe not. I don't have access to his internal writing cognition any more than you do. Maybe this is the only way Eliezer could write it. Maybe he prefers it this way, I certainly do.

Light a candle, don't curse the darkness. Build, don't burn. 

Comment by Drake Morrison (Leviad) on Open & Welcome Thread — March 2023 · 2023-03-15T03:16:32.107Z · LW · GW

I used this link to make my own, and it seems to work nicely for me thus far. 

Comment by Drake Morrison (Leviad) on Open & Welcome Thread — February 2023 · 2023-02-28T21:02:14.763Z · LW · GW

This sequence has been a favorite of mine for finding little drills or exercises to practice overcoming biases.

Comment by Drake Morrison (Leviad) on Beginning to feel like a conspiracy theorist · 2023-02-28T20:57:13.100Z · LW · GW

https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness

Cult or Not-Cult aren't two separate categories. They are a spectrum that all human groups live on. 

Comment by Drake Morrison (Leviad) on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-28T20:40:30.338Z · LW · GW

I agree wholeheartedly that the intent of the guidelines isn't enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?

If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better. 

An analogy I keep thinking of is the TypeScript vs JavaScript tradeoff when programming with a team. Unless you have a weird special case, it's just straight-up more useful to work with other people's code when the type signatures are explicit. There's less guessing, and therefore fewer mistakes. Yes, there are tradeoffs. You gain better understanding at the slight cost of writing the annotations.

The thing is, you pay that cost anyway. You either pay it upfront, and people can make smoother progress with fewer mistakes, or they make mistakes and have to figure out the type signatures the hard way.

People either distinguish between their observations and inferences explicitly, or you spend extra time, and make predictable mistakes, until the participants in the discourse figure out the distinction during the course of the conversation. If they can't, then the conversation doesn't go anywhere on that topic. 
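The TypeScript analogy can be made concrete with a toy sketch (the function and type names here are hypothetical, invented for illustration):

```typescript
// Untyped version: every caller has to guess what `user` contains and
// what comes back.
//   function formatName(user) { return user.first + " " + user.last; }

// Typed version: the contract is written down once, at the definition site.
interface User {
  first: string;
  last: string;
}

function formatName(user: User): string {
  return `${user.first} ${user.last}`;
}

// A caller who passes the wrong shape is caught at compile time,
// instead of producing "undefined undefined" at runtime.
const greeting = formatName({ first: "Ada", last: "Lovelace" });
```

The annotation cost is paid once by the author; without it, each reader pays the guessing cost separately, which is the asymmetry the analogy is pointing at.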

I don't see any way of getting around this if you want to avoid making dumb mistakes in conversation. Not every change is an improvement, but every improvement is necessarily a change. If we want to raise the sanity waterline and have discourse that more reliably leads to us winning, we have to change things. 

Comment by Drake Morrison (Leviad) on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-27T22:01:56.480Z · LW · GW

Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:

  • Measure twice before you cut the steel
  • Double-check your fittings before you test the engine
  • Keep track of which direction the axle is supposed to turn for the type of engine you are making
  • Etc.

The point of the guidelines isn't to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable mistakes. The guidelines are about helping people move past those mistakes so they can actually build an engine that has a chance of working.

The point of "rationalist guidelines" isn't to enforce a norm of making particular types of beliefs. They exist to help groups of people stay connected to reality at all. People make consistent, predictable mistakes. The guidelines are for helping people avoid them. Regardless of what those beliefs are. 

Comment by Drake Morrison (Leviad) on On Investigating Conspiracy Theories · 2023-02-23T17:33:57.295Z · LW · GW

As always, the hard part is not saying "Boo! Conspiracy theory!" and "Yay! Scientific theory!"

The hard part is deciding which is which.

Comment by Drake Morrison (Leviad) on You Don't Exist, Duncan · 2023-02-04T01:41:22.172Z · LW · GW

Wow, this hit home in a way I wasn't expecting. I ... don't know what else to say. Thanks for writing this up, seriously. 

Comment by Drake Morrison (Leviad) on Basics of Rationalist Discourse · 2023-01-27T06:24:13.801Z · LW · GW

see the disconnect—the reason I think X is better than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad."

I think the X's and Y's got mixed up here.

Otherwise, this is one of my favorite posts. Some of the guidelines are things I had already figured out and try to follow, but most of them were things I could only vaguely grasp at. I've been thinking about a post regarding robust communication and internet protocols, but this covers most of what I wanted to say, better than I could say it. So thanks!

Comment by Drake Morrison (Leviad) on Lars Doucet's Georgism series on Astral Codex Ten · 2023-01-16T05:41:22.090Z · LW · GW

The Georgism series was my first interaction with a piece of economic theory that tried to make sense by building a different model than anything I had seen before. It was clear and engaging. It has been a primary motivator in my learning more about economics. 

I'm not sure how the whole series would work in the books, but the review of Progress and Poverty was a great introduction to all the main ideas. 

Comment by Drake Morrison (Leviad) on Sazen · 2022-12-21T19:29:17.817Z · LW · GW

Related:  Wisdom cannot be unzipped

Reading Worth the Candle with a friend gave us a few weird words that are sazen in and of themselves. Being able to put a word to something lets you get a handle on it so much better. Thanks for writing this up. 

Comment by Drake Morrison (Leviad) on What's the best time-efficient alternative to the Sequences? · 2022-12-16T21:54:53.822Z · LW · GW

If the Highlights are too long, then print off a single post from each section. If that's too long, print off your top three. If that's too long, print off one post. 

Summarizing the post usually doesn't help, as you've discovered. So I'm not really sure what else to tell you. You have a lot of curated options to choose from to start. The Highlights, the Best of LessWrong, the Curated Sequences, Codex. Find stuff you like, and print it off for your friend. 

Or, alternatively, tell them about HPMOR. That's how I introduced myself to the concepts, in a story where the protagonist had need of them, so the techniques stuck with me.

Comment by Drake Morrison (Leviad) on What's the best time-efficient alternative to the Sequences? · 2022-12-16T21:17:05.247Z · LW · GW

If you have some of the LessWrong books, I would recommend those. They are small little books that you can easily lend out. That's what I've thought of doing before. 

Really, starting is the hard part. Once I saw the value I was getting out of the sequences and other essays, I wanted to read more. So share a single essay, or lend a small book. Start small, and then if you are getting value out of it, continue. 

You don't have to commit to reading the whole Sequences before you start. Just start with one essay from the highlights, when you feel like it. They're not super long. The enduring, net positive change that you are looking for cannot be shortcut. After all, Wisdom Cannot Be Unzipped. 

Think of the sequences as a full course on rationality. You don't introduce your friend who doesn't know calculus into math by showing them the whole textbook and telling them they should read it. You show them a little problem. And demonstrate that the tools you learned in calculus help you solve that problem. Do the same with rationality. 

The art must have an end other than itself or it collapses into infinite recursion. Have a problem in mind when you read the sequences, try and see what will help you solve it. Having a problem gives you a reason to apply it, and can motivate you into learning more. Have some fun while you're at it! This stuff is cool!

Comment by Drake Morrison (Leviad) on Drake Morrison's Shortform · 2022-12-11T01:23:51.661Z · LW · GW
  • Robust communication requires feedback. Knowing you received all the packets of information, and checking whether what you received matches what they sent. 
  • Building ideas vs breaking ideas. Related to Babble and Prune, but for communities. Shortform seems like a good place for ideas to develop, or babble. For ideas to be built together, before you critique things. You can destroy a half built idea, even if it's a good idea. 
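The first bullet is the acknowledge-and-verify loop from network protocols. A toy sketch, under the assumption that the receiver echoes back a checksum of what arrived (the hash here is a deliberately simple illustration, not a real protocol):

```typescript
// Toy checksum-and-ack: the receiver reports a checksum of the message it
// got, and the sender compares it against the checksum of what was sent.
function checksum(message: string): number {
  let sum = 0;
  for (const ch of message) {
    // Simple polynomial rolling hash, kept small with a prime modulus.
    sum = (sum * 31 + ch.charCodeAt(0)) % 1_000_003;
  }
  return sum;
}

function receiptMatches(sent: string, ackChecksum: number): boolean {
  return checksum(sent) === ackChecksum;
}
```

The conversational analogue is paraphrasing back what you heard: a cheap summary that lets the "sender" detect whether the message survived transmission.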
Comment by Drake Morrison (Leviad) on The LessWrong 2021 Review: Intellectual Circle Expansion · 2022-12-11T01:20:25.635Z · LW · GW

I wrote a bunch of reviews before I realized I wasn't eligible. Oops. Maybe the review button could be disabled for folks like me?

(I don't care whether my reviews are kept or discarded, either way is fine with me)

Comment by Drake Morrison (Leviad) on How To Write Quickly While Maintaining Epistemic Rigor · 2022-12-11T00:58:12.408Z · LW · GW

Writing up your thoughts is useful. Both for communication and for clarification to oneself. Not writing for fear of poor epistemics is an easy failure mode to fall into, and this post clearly lays out how to write anyway. More writing equals more learning, sharing, and opportunities for coordination and cooperation. This directly addresses a key point of failure when it comes to groups of people being more rational. 

Comment by Drake Morrison (Leviad) on Self-Integrity and the Drowning Child · 2022-12-10T22:19:05.162Z · LW · GW

This post felt like a great counterpoint to the drowning child thought experiment, and as such I found it a useful insight. A reminder that it's okay to take care of yourself is important, especially in these times and in a community of people dedicated to things like EA and the Alignment Problem. 

Comment by Drake Morrison (Leviad) on Making Vaccine · 2022-12-10T21:56:19.837Z · LW · GW

A great example of taking the initiative and actually trying something that looks useful, even when it would be weird or frowned upon in normal society. I would like to see a post-review, but I'm not even sure if that matters. Going ahead and trying something that seems obviously useful, but weird and no one else is doing is already hard enough. This post was inspiring. 

Comment by Drake Morrison (Leviad) on Your Cheerful Price · 2022-12-10T21:42:35.022Z · LW · GW

This was a useful and concrete example of a social technique I plan on using as soon as possible. Being able to explain why is super useful to me, and this post helped me do that. Explaining explicitly the intuitions behind communication cultures is useful for cooperation. This post feels like a step in the right direction in that regard.

Comment by Drake Morrison (Leviad) on Simulacrum 3 As Stag-Hunt Strategy · 2022-12-10T21:16:40.771Z · LW · GW

A great explanation of something I've felt, but not been able to articulate. Connecting the ideas of Stag-Hunt, Coordination problems, and simulacrum levels is a great insight that has paid dividends as an explanatory tool. 

Comment by Drake Morrison (Leviad) on The Point of Trade · 2022-12-10T21:13:22.131Z · LW · GW

I really enjoyed this. Taking the time to lay this out feels more useful than just reading about it in a textbook lecture. The same way doing a math or code problem makes it stick in my head more. One of the biggest takeaways for me was realizing that it was possible to break economic principles down this far in a concrete way that felt graspable. I think this is a good demonstration of that kind of work. 

Comment by Drake Morrison (Leviad) on In Defence of Optimizing Routine Tasks · 2022-12-10T21:04:36.146Z · LW · GW

Clearly articulating the extra costs involved is valuable. I have seen the time tradeoff before, but I hadn't thought through the other costs that I, as a human, also pay.

Comment by Drake Morrison (Leviad) on Slack Has Positive Externalities For Groups · 2022-12-10T21:02:46.204Z · LW · GW

I really enjoyed this post as a compelling explanation of slack in a domain that I don't see referred to that often. It helped me realize the value of having "unproductive" time that is unscheduled. It's now something I consider when previously I did not. 

Comment by Drake Morrison (Leviad) on On silence · 2022-12-02T21:54:04.903Z · LW · GW

This is the best explanation I've ever seen for this phenomenon. I have always had a hard time explaining what it is like to people, so thanks!

Comment by Drake Morrison (Leviad) on Ruling Out Everything Else · 2022-12-02T20:01:08.710Z · LW · GW

This is a great post that exemplifies what it is conveying quite well. I have found it very useful when talking with people and trying to understand why I am having trouble explaining or understanding something. 

Comment by Drake Morrison (Leviad) on The Onion Test for Personal and Institutional Honesty · 2022-11-17T05:52:00.849Z · LW · GW

I think that's the gist of it. I categorize them as Secret and Private, where Secret information is something I deny knowing (and therefore fails to pass the onion test), and Private information is something that people can know exists, even if I won't tell them what it is (thereby passing the onion test).

Also, see this which I found relevant.

Comment by Drake Morrison (Leviad) on 2022 LessWrong Census? · 2022-11-09T17:13:05.371Z · LW · GW

It might also be interesting if someone were to set up a prediction market for the results of the census. I'm not really sure how to do that, otherwise I'd do it myself.  You probably need some idea of what the census will be about?

Comment by Drake Morrison (Leviad) on The Onion Test for Personal and Institutional Honesty · 2022-10-09T20:52:08.936Z · LW · GW

I agree. I'll try to be more careful and clear about the wording in the future. 

Comment by Drake Morrison (Leviad) on The Onion Test for Personal and Institutional Honesty · 2022-10-09T19:19:11.363Z · LW · GW

I feel like there is a difference between a Secret That You Must Protect, and information that is status-restricted. 

Say you are preparing a secret birthday party for Alice, and they ask you if you have plans on their birthday. If the birthday is a Secret You Must Protect, then you would be Meta-Honest, and tell Alice you don't have plans. If it's just status-restricted, then you could tell Alice that you have something planned, but you can't say more or ruin the surprise. Thereby passing the Onion test. 

I think the danger is in confusing the two types of information. If you make a secret smelly, so that people know what kind of thing it is, then it loses a lot of the protection of being a secret in the first place. Half of a secret is the fact there is one, right? On the other hand, if you make everything a Secret You Must Protect then people may be surprised and feel betrayed when information was not sign-posted. 

Comment by Drake Morrison (Leviad) on Open & Welcome Thread - July 2022 · 2022-07-08T06:15:02.796Z · LW · GW

Hello! I've been here lurking for a bit, but never quite introduced myself. I found myself commenting for the first time and figured I should go ahead and write up my story.

I don't quite remember how I first stumbled upon this site, but I was astonished. I skimmed a few of the front page articles and read some of the comments. I was impressed by the level of dialogue and clear thought. I thought it was interesting but I should check it out when I had some more time.

One day I found myself trying to explain something to a friend that I had read here, but I couldn't do it justice. I hadn't internalized the knowledge, it wasn't a part of me. That bothered me. I felt like I should have been able to understand better what I read, or explain as I remembered reading it.

So I decided to dig in, I wanted to understand things, to be able to explain the concepts, to know them well enough to write about them and be understood. I like reading fantasy, so I decided to start with HPMOR.

I devoured that book. I found myself stunned with how much I thought like Harry. It was like reading what I had always felt but never been able to put into words. The more I read, the more impressed I was, I had to keep reading. I finished the book, and immediately started on the Sequences. I felt like this was a great project I could only have wished for, and yet here it was.

I started trying to apply the things I learned to myself, and found it very difficult. Rationality was not as easy as reading up on how it all works; I had to actually change my mind. For me, the first great test of my rationality was religious. I had had many questions about my faith for a long time. Reading the Sequences gave me the courage I needed to finally face the scariest questions. I finally had tools that could apply to the foundational questions I had.

The answers I came to were not pretty. Facing the questions had changed me. In finding answers to my questions I had lost my belief in the claims of religion. I found myself with a clarity that I hadn't thought possible. I had some troubling issues to confront, now that my religious conception of the world had fallen away.

I found myself confident, in ways I had never been before. I could kind of explain where the evidence for my beliefs was, instead of having no answer at all. I have all kinds of mental models and names for concepts now that I wish I had found earlier. I had found a path that would take me where I wanted to go. I'm not very far along that path, but I found it.

Of course, I'm still learning. And I'm still not all that good at practicing my rationality. But I'm getting better, a little bit at a time. My priorities have changed. I've got money on the line now for some of my goals, thanks to Beeminder. I've been writing more, trying to get better at communicating. I can't thank enough all the people who contribute and maintain this site. It's a wonderful place of sanity in a mad world, and I have become better, and less wrong, because of it. 

Comment by Drake Morrison (Leviad) on Open & Welcome Thread - July 2022 · 2022-07-08T04:49:39.331Z · LW · GW

The Sequences are very long, but worth it. I would recommend reading the Highlights, and then reading more of the sections that spark your curiosity. 

(I only found out about that today, and I've been lurking here for a little bit. Is there a way for the Highlights to be seen next to the Rationality: A - Z page?)