Beware unfinished bridges 2024-05-12T09:29:07.808Z
Cohesion and business problems 2024-04-19T00:45:00.269Z
The Mom Test: Summary and Thoughts 2024-04-18T03:34:21.020Z
Project idea: an iterated prisoner's dilemma competition/game 2024-02-26T23:06:20.699Z
Status-oriented spending 2024-01-25T06:46:47.029Z
A discussion of normative ethics 2024-01-09T23:29:11.467Z
Is it justifiable for non-experts to have strong opinions about Gaza? 2024-01-08T17:31:21.934Z
The Sugar Alignment Problem 2023-12-24T01:35:20.226Z
Moral Mountains 2023-12-14T10:40:06.179Z
Experiments as a Third Alternative 2023-10-29T00:39:31.399Z
Best effort beliefs 2023-10-21T22:05:59.382Z
Some reasons why I frequently prefer communicating via text 2023-09-18T21:50:48.620Z
On being downvoted 2023-09-17T01:59:52.743Z
Asking for help as an O(1) lookup 2023-09-17T00:25:58.135Z
What is science? 2023-08-11T00:00:34.884Z
What is ontology? 2023-08-02T00:54:14.432Z
Open Mic - August 2023 2023-08-01T19:24:33.351Z
Socialism in large organizations 2023-07-30T07:25:57.736Z
Think like a consultant not a salesperson 2023-07-22T19:31:48.676Z
A brief history of computers 2023-07-19T02:59:19.679Z
Negativity enhances positivity 2023-07-02T02:47:12.201Z
Meta-conversation shouldn't be taboo 2023-06-05T00:19:41.015Z
Project Idea: Challenge Groups for Alignment Researchers 2023-05-27T20:10:12.001Z
Sea Monsters 2023-05-22T00:58:52.353Z
How should one feel morally about using chatbots? 2023-05-11T01:01:39.211Z
What's wrong with being dumb? 2023-05-07T18:31:49.218Z
Alignment vs capabilities 2023-04-11T04:35:22.144Z
Slack Group: Rationalist Startup Founders 2023-04-03T00:44:54.791Z
Proposal: Butt bumps as a default for physical greetings 2023-04-01T12:48:30.554Z
How To Get Startup Ideas: A Brief Lit Review and Analysis 2023-03-30T20:33:04.179Z
Why self-improvement? 2023-03-16T02:49:32.124Z
Startups are like firewood 2023-03-05T23:09:35.066Z
Substitute goods for leisure are abundant 2023-03-05T03:45:32.705Z
Politics is the Fun-Killer 2023-02-25T23:29:41.072Z
Nod posts 2023-02-25T21:53:31.996Z
Rationality-related things I don't know as of 2023 2023-02-11T06:04:16.183Z
Saying things because they sound good 2023-01-31T00:17:26.772Z
Core Concept Conversation: What is technology? 2023-01-15T09:40:47.955Z
Core Concept Conversation: What is wealth? 2023-01-15T09:07:21.336Z
Core Concept Conversations 2023-01-15T07:17:13.978Z
The need for speed in web frameworks? 2023-01-03T00:06:15.737Z
Microstartup Stories: Initial Thoughts 2022-11-27T01:22:49.401Z
The many types of blog posts 2022-11-26T03:57:43.293Z
Choosing the right dish 2022-11-19T01:38:00.040Z
Reflective Consequentialism 2022-11-18T23:56:52.756Z
Developer experience for the motivation 2022-11-16T07:12:19.893Z
Consider your appetite for disagreements 2022-10-08T23:25:44.096Z
Losing the root for the tree 2022-09-20T04:53:53.435Z
Why Portland 2022-07-10T07:20:39.785Z
User research as a barometer of software design 2022-07-08T06:02:58.563Z


Comment by Adam Zerner (adamzerner) on My Dating Heuristic · 2024-05-21T17:44:26.718Z · LW · GW

Why? Because extra information could help me impress them.

I've always been pretty against the idea of trying to impress people on dates.

It risks false positives. I.e. it risks a situation where you succeed at impressing them, go on more dates or have a longer relationship than you otherwise would, and then realize that you aren't compatible and break up. That isn't necessarily a bad thing, but I think more often than not it is.

Impressing your date also reduces the risk of false negatives, which is a good thing. I.e. it helps avoid the scenario where someone who you're compatible with rejects you. Maybe this is too starry-eyed, but I like to think that if you just bring your true self to the table, are open-minded, and push yourself to be a little vulnerable, the risk of such false negatives is pretty low.

I think this is especially relevant because I think the emotionally healthy person heuristic probably says to try to impress your date.

Comment by Adam Zerner (adamzerner) on Beware unfinished bridges · 2024-05-13T17:31:51.291Z · LW · GW

Hm yeah, I feel the same way. Good point.

Comment by Adam Zerner (adamzerner) on Beware unfinished bridges · 2024-05-12T09:40:07.180Z · LW · GW

America's response to covid seems like one example of this.

If I'm remembering correctly from Zvi's blog posts, he criticized the US's policy for being a sort of worst-of-both-worlds middle ground. A strong, decisive requirement to enforce things like masking and distancing might have actually eradicated the virus and thus been worthwhile. But if you're not going to take an aggressive enough stance, you should just forget it: half-hearted mitigation policies don't do enough to "complete the bridge" and so aren't worth the economic and social costs.

It's not a perfect example. The "unfinished bridge" here provides positive value, not zero value. But I think the amount of positive value is low enough that it would be useful to round it down to zero. The important thing is that you get a big jump in value once you cross some threshold of progress.

I think a lot of philanthropic causes are probably in a similar boat.

When there are lots of small groups spread around making very marginal progress on a bunch of different goals, it's as if they're building a bunch of unfinished bridges. This too isn't a perfect example because the "unfinished bridges" provide some value, but like the covid example, I think the amount of value is small enough that we can just round it to zero.

On the other hand, when people get a little barbaric and rally around a single cause, there might be enough concentration of force to complete the bridge.

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-05-12T07:19:31.684Z · LW · GW

Project idea: virtual water coolers for LessWrong

Previous: Virtual water coolers

Here's an idea: what if there was a virtual water cooler for LessWrong?

  • There'd be Zoom chats with three people per chat. Each chat is a virtual water cooler.
  • The user journey would begin by the user expressing that they'd like to join a virtual water cooler.
  • Once they do, they'd be invited to join one.
  • I think it'd make sense to restrict access to users based on karma. Maybe only 100+ karma users are allowed.
  • To start, that could be it. In the future you could do some investigation into things like how many people there should be per chat.
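To make the mechanics concrete, here's a minimal sketch of what the matching logic could look like. This is just an illustration: the function and field names are made up; only the karma threshold and group size come from the bullets above.

```python
import random

KARMA_THRESHOLD = 100  # assumption from above: only 100+ karma users allowed
GROUP_SIZE = 3         # three people per virtual water cooler

def assign_water_coolers(waiting_users):
    """Group eligible waiting users into chats of GROUP_SIZE.

    `waiting_users` is a list of (username, karma) pairs. Users below the
    karma threshold are filtered out; eligible users who don't fill a
    complete group are returned as leftovers and keep waiting.
    """
    eligible = [name for name, karma in waiting_users if karma >= KARMA_THRESHOLD]
    random.shuffle(eligible)  # mix people up rather than matching by join order
    cutoff = len(eligible) - len(eligible) % GROUP_SIZE
    groups = [eligible[i:i + GROUP_SIZE] for i in range(0, cutoff, GROUP_SIZE)]
    return groups, eligible[cutoff:]
```

Everything else (spinning up the Zoom call, notifying the users) would hang off of whatever groups this returns.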

Seems like an experiment that is both cheap and worthwhile.

If there is interest I'd be happy to create an MVP.

(Related: it could be interesting to abstract this and build a sort of "virtual water cooler platform builder", such that e.g. LessWrong could use the builder to build a virtual water cooler platform for LessWrong and OtherCommunity could use it to build one for their community.)

Comment by Adam Zerner (adamzerner) on Experiments as a Third Alternative · 2024-05-12T07:13:00.913Z · LW · GW

Update: I tried a few doses of Adderall, up to 15mg. I didn't notice anything.

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-05-06T19:07:51.473Z · LW · GW

I was envisioning that you can organize a festival incrementally, investing more time and money into it as you receive more and more validation, and that taking this approach would de-risk it to the point where overall, it's "not that risky".

For example, to start off you can email or message a handful of potential attendees. If they aren't excited by the idea you can stop there, but if they are then you can proceed to start looking into things like cost and logistics. I'm not sure how pragmatic this iterative approach actually is though. What do you think?

Also, it seems to me that you wouldn't have to actually risk losing any of your own money. I'd imagine that you'd 1) talk to the hostel, agree on a price, have them "hold the spot" for you, 2) get sign ups, 3) pay using the money you get from attendees.

Although now that I think about it I'm realizing that it probably isn't that simple. For example, the hostel cost ~$5k, and maybe the money from the attendees would have covered it all, but maybe fewer attendees signed up than expected and the organizers ended up having to pay out of pocket.

On the other hand, maybe there is funding available for situations like these.

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-05-06T03:45:34.178Z · LW · GW

Virtual watercoolers

As I mentioned in some recent Shortform posts, I recently listened to the Bayesian Conspiracy podcast's episode on the LessOnline festival and it got me thinking.

One thing I think is cool is that Ben Pace was saying how the valuable thing about these festivals isn't the presentations, it's the time spent mingling in between the presentations, and so they decided with LessOnline to just ditch the presentations and make it all about mingling. Which got me thinking about mingling.

It seems plausible to me that such mingling can and should happen more online. And I wonder whether an important thing about mingling in the physical world is that, how do I say this, you're just in the same physical space, next to each other, with nothing else you're supposed to be doing, and in fact what you're supposed to be doing is talking to one another.

Well, I guess it's not that you're supposed to be talking to one another. It's also cool if you just want to hang out and sip on a drink or something. It's similar to the office water cooler: it's cool if you're just hanging out drinking some water, but it's also normal to chit chat with your coworkers.

I wonder whether it'd be good to design a virtual watercooler. A digital place that mimics aspects of the situations I've been describing (festivals, office watercoolers).

  1. By being available in the virtual watercooler it's implied that you're pretty available to chit chat with, but it's also cool if you're just hanging out doing something low key like sipping a drink.
  2. You shouldn't be doing something more substantial though.
  3. The virtual watercooler should be organized around a certain theme. It should attract a certain group of people and filter out people who don't fit in. Just like festivals and office water coolers.

In particular, this feels to me like something that might be worth exploring for LessWrong.

Note: I know that there are various Slack and Discord groups but they don't meet conditions (1) or (2).

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-05-06T03:28:54.392Z · LW · GW

More dakka with festivals

In the rationality community people are currently excited about the LessOnline festival. Furthermore, my impression is that similar festivals are generally quite successful: people enjoy them, have stimulating discussions, form new relationships, are exposed to new and interesting ideas, express that they got a lot out of it, etc.

So then, this feels to me like a situation where More Dakka applies. Organize more festivals!

How? Who? I dunno, but these seem like questions worth discussing.

Some initial thoughts:

  1. Assurance contracts seem like quite the promising tool.
  2. You probably don't need a hero license to go out and organize a festival.
  3. Trying to organize a festival probably isn't risky. It doesn't seem like it'd involve too much time or money.

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-05-05T06:09:06.559Z · LW · GW

I wish there were more discussion posts on LessWrong.

Right now it feels like it weakly if not moderately violates some sort of cultural norm to publish a discussion post (similar but to a lesser extent on the Shortform). Something low effort of the form "X is a topic I'd like to discuss. A, B and C are a few initial thoughts I have about it. What do you guys think?"

It seems to me like something we should encourage though. Here's how I'm thinking about it. Such "discussion posts" currently happen informally in social circles. Maybe you'll text a friend. Maybe you'll bring it up at a meetup. Maybe you'll post about it in a private Slack group.

But if it's appropriate in those contexts, why shouldn't it be appropriate on LessWrong? Why not benefit from having it be visible to more people? The more eyes you get on it, the better the chance someone has something helpful, insightful, or just generally useful to contribute.

The big downside I see is that it would screw up the post feed. Like when you go to the frontpage and see the list of posts, you don't want that list to have a bunch of low quality discussion posts you're not interested in. You don't want to spend time and energy sifting through the noise to find the signal.

But this is easily solved with filters. Authors could mark/categorize/tag their posts as being a low-effort discussion post, and people who don't want to see such posts in their feed can apply a filter to filter these discussion posts out.
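The filter itself would be close to a one-liner. A sketch of the idea (the tag name and post fields here are hypothetical, not LessWrong's actual schema):

```python
def visible_posts(posts, hidden_tags):
    """Return only the posts whose tags don't overlap the viewer's hidden tags."""
    return [post for post in posts if not set(post["tags"]) & set(hidden_tags)]

posts = [
    {"title": "An effortful essay", "tags": ["essay"]},
    {"title": "X is a topic I'd like to discuss", "tags": ["discussion"]},
]

# A reader who hides low-effort discussion posts sees only the essay.
feed = visible_posts(posts, hidden_tags={"discussion"})
```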

Context: I was listening to the Bayesian Conspiracy podcast's episode on LessOnline. Hearing them talk about the sorts of discussions they envision happening there made me think about why that sort of thing doesn't happen more on LessWrong. Like, whatever you'd say to the group of people you're hanging out with at LessOnline, why not publish a quick discussion post about it on LessWrong?

Comment by Adam Zerner (adamzerner) on The Mom Test: Summary and Thoughts · 2024-04-27T00:50:27.479Z · LW · GW

Hm, maybe.

Sometimes it can be a win-win situation. For example, if the call leads to you identifying a problem they're having and solving it in a mutually beneficial way.

But often times that isn't the case. From their perspective, the chances are low enough that, yeah, maybe the cold call just feels spammy and annoying.

I think that cold calls can be worthwhile from behind a veil of ignorance though. That's the barometer I like to use. If I were behind a veil of ignorance, would I endorse the cold call? Some cold calls are well targeted and genuine, in which case I would endorse them from behind a veil of ignorance. Others are spammy and thoughtless, in which case I wouldn't endorse them.

Comment by Adam Zerner (adamzerner) on Cohesion and business problems · 2024-04-21T21:01:42.249Z · LW · GW

I agree with everything you've said. Let me try to clarify where it is that I think we might be disagreeing.

I am of the opinion that some "narrow problems" are "good candidates" to build "narrow solutions" for but that other "narrow problems" are not good candidates to build "narrow solutions" for and instead really call for being solved as part of an all-in-one solution.

I think you would agree with this. I don't think you would make the argument that all "narrow problems" are "good candidates" to build "narrow solutions" for.

Furthermore, as I argue in the post, I think that the level of "cohesion" often plays an important role in how "appropriate" it is to use a "narrow solution" for a "narrow problem". I think you would agree with this as well.

I suspect that our only real disagreement here is how we would weigh the tradeoffs. I think I lean moderately more in the direction of thinking that cohesiveness is important enough to make various "narrow problems" insufficiently good candidates for a "narrow solution" and you lean moderately more in the direction of thinking that cohesiveness isn't too big a deal and the "narrow problem" still is a good candidate for building a "narrow solution" for.

To be clear, I don't think that any of this means that I should attempt to build all-in-one products. I think it means that in my calculus for what "narrow problem" I should attempt to tackle I should factor in the level of cohesion.

Comment by Adam Zerner (adamzerner) on Cohesion and business problems · 2024-04-21T19:20:08.590Z · LW · GW

In practice, all-in-one tools always need a significant degree of setup, configuration and customization before they are useful for the customer. Salesforce, for example, requires so much customization, you can make a career out of just doing Salesforce customization.

I can see that being true for all-in-one tools like Salesforce that are intended to be used across industries, but what about all-in-one tools that are more targeted?

For example, Bikedesk is an all-in-one piece of software that is specifically for bike shops and I would guess that the overall amount of setup and configuration for a shop using Bikedesk is lower than that of a bike shop using a handful of more specific tools.

The tradeoff is between a narrowly focused tool that does one job extremely well immediately, with little or no setup

I suppose the "little or no setup" part is sometimes the case, but it seems to me that often it is not, particularly when the level of cohesiveness is high.

Using the bike shop as an example, inventory management software that isn't part of an all-in-one solution needs inventory data to be fed to it and thus will require a moderate amount of setup and configuration.

Comment by Adam Zerner (adamzerner) on Thoughts on seed oil · 2024-04-21T01:06:38.546Z · LW · GW

See also Adam Ragusea's podcast episode on the topic.

Comment by Adam Zerner (adamzerner) on I'm open for projects (sort of) · 2024-04-21T00:33:48.536Z · LW · GW

Hm, gotcha.

It's tough, I think there are a lot of tradeoffs to navigate.

  • You could join a big company. You'll 1) get paid, 2) work on something that lots of people use, but 3) you'll be a small cog in a large machine, and it sounds like that's not really what you're looking for. It sounds like you enjoy autonomy and having a meaningful and large degree of ownership.
  • You could work on your own project. That addresses 3. But then 1 and 2 become pretty big risks. It's hard to build something that makes good money and lots of people use.
  • You could join an open source project that lots of people use and is lacking contributors. But there's often not really a path to getting paid there.
    • Something interesting: Fresh. I really like what they're doing. I personally think it's the best web framework out there. And there's only one person working on it. He's an incredible developer. Deno is paying him to work on it. I'm not sure if they'd be open to paying a second contributor. And I am not too optimistic that Fresh will become something that many people use.
  • Working on LessWrong is an interesting possibility. After all, you're a longtime user and have the right skillset. However, 1) I'm not sure how good the prospects are for getting paid, 2) it's a relatively small community so you wouldn't be getting that "tons and tons of people use something I built" feeling, and 3) given that it's later stage and there's a handful of other developers working on it, I'm not sure if it'd provide you enough feeling of ownership.
  • Joining a small company seems like the most realistic way to get 1, 2 and 3, but the magnitude of each might not be ideal: smaller companies tend to pay less, have fewer users, and still have enough employees such that you don't really have that much ownership.

My best guess is that starting your own company would be best. Something closer to an indie hacker-y lifestyle SaaS business than a "swing for the fences" VC-backed business. The latter is probably better if you're earning to give and looking to maximize impact, but since you're leaning more towards designing a good life for yourself, I think the former is better, and I also think most people would agree with that. I've seen a lot of VC's be very open about the fact that the "swing for the fences" approach is frequently not actually in the founder's interest.

I'm looking to do the lifestyle SaaS business thing right now btw. If you're interested in that I'd love to chat: shoot me a DM.

Comment by Adam Zerner (adamzerner) on Cohesion and business problems · 2024-04-21T00:14:45.298Z · LW · GW

I was thinking that too actually. And at the time I was thinking that for cohesion-related reasons, it's often the case that there just isn't a market for narrow tools like inventory software and instead the market demands an all-in-one tool, in which case there wouldn't be a demand for a tool that solves the problem of many formats of POS system data.

But now I'm not so sure. I'm feeling pretty agnostic. I'm not clear on how often the market demand is largely for all-in-one solutions vs how often there is a market demand for narrow solutions.

Comment by Adam Zerner (adamzerner) on Cohesion and business problems · 2024-04-21T00:09:16.843Z · LW · GW

I guess it's a matter of pros and cons and tradeoffs.

On the one hand a product that solves a narrow and specific problem can focus more on that problem and do a better job of addressing it than a general, all-in-one product can. But then on the other hand it still seems to me that what I propose about cohesion stands.

Using Anrok as an example, on the one hand the fact that Anrok is narrowly focused on tax and thus is able to do a better job of solving tax-related problems works in Anrok's favor. But on the other hand, there are cohesion-related things that work against Anrok such as having to integrate with other tools and such as customers having to spend more time shopping (with an all-in-one solution they just buy one thing and are done).

I suppose you'd agree that there are in fact tradeoffs at play here and that the real question is what direction the scale tends to lean. And I suppose you are of the opinion that the scale tends to lean in favor of narrower, more targeted solutions than broader, more all-in-one solutions. Is all of that true? If so, would you mind elaborating more on why you are of that belief?

Comment by Adam Zerner (adamzerner) on I'm open for projects (sort of) · 2024-04-18T20:13:48.376Z · LW · GW

Kudos for writing this post. I know it's promotional/self-interested, but I think that's fine. It's also pro-social. Having the rule/norm to encourage this type of post seems unlikely to be abused in a net-negative sort of way (assuming some reasonable restrictions are in place).

Comment by Adam Zerner (adamzerner) on I'm open for projects (sort of) · 2024-04-18T20:09:48.399Z · LW · GW

What are your goals? Money? Impact? Meaning? To what extent?

I think it'd also be helpful to elaborate on your skillset. Front end? Back end? Game design? Mobile apps? Design? Product? Data science?

Comment by Adam Zerner (adamzerner) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-15T05:15:05.223Z · LW · GW

I'll provide a dissenting perspective here. I actually came away from reading this feeling like Metz' position is maybe fine.

Everybody saw it. This is an influential person. That means he's worth writing about. And so once that's the case, then you withhold facts if there is a really good reason to withhold facts. If someone is in a war zone, if someone is really in danger, we take this seriously.

It sounds like he's saying that the Times' policy is that you only withhold facts if there's a "really" good reason to do so. I'm not sure what type of magnitude "really" implies, but I could see the amount of harm at play here falling well below it. If so, then Metz is in a position where his employer has a clear policy and doing his job involves following that policy.

As a separate question, we can ask whether "only withhold facts in warzone-type scenarios" is a good policy. I lean moderately strongly away from thinking it's a good policy. It seems to me that you can apply some judgement and be more selective than that.

However, I have a hard time moving from "moderately strongly" to "very strongly". To make that move, I'd need to know more about the pros and cons at play here, and I just don't have that good an understanding. Maybe it's a "customer support reads off a script" type of situation. Let the employee use their judgement; most of the time it'll probably be fine; once in a while they do something dumb enough to make it not worth letting them use their judgement. Or maybe journalists won't be dumb if they are able to use judgement here, but maybe they'll use that power to do bad things.

I dunno. Just thinking out loud.

Circling back around, suppose hypothetically we assume that the Times does have a "only withhold facts in a warzone-type scenario" policy, that we know that this is a bad and overall pretty harmful policy, and that Metz understands and agrees with all of this. What should Metz do in this hypothetical situation?

I feel unclear here. On the one hand, it's icky to be a part of something unethical and harmful like that, and if it were me I wouldn't want to live my life like that, so I'd want to quit my job and do something else. But on the other hand, there are various personal reasons why quitting your job might be tough. It's also possible that he should take a loss here with the doxing so that he is in a position to do some sort of altruistic thing.

Probably not. He's probably in the wrong in this hypothetical situation if he goes along with the bad policy. I'm just not totally sure.

Comment by Adam Zerner (adamzerner) on What's with all the bans recently? · 2024-04-10T00:41:48.402Z · LW · GW

I strongly suspect that spending time building features for rate limited users is not valuable enough to be worthwhile. I suspect this mainly because:

  1. There aren't a lot of rate limited users who would benefit from it.
  2. The value that the rate limited users receive is marginal.
  3. It's unclear whether doing things that benefit users who have been rate limited is a good thing.
  4. I don't see any sorts of second order effects that would make it worthwhile, such as non-rate-limited people seeing these features and being more inclined to be involved in the community because of them.
  5. There are lots of other very valuable things the team could be working on.

Comment by Adam Zerner (adamzerner) on On Complexity Science · 2024-04-05T23:03:00.108Z · LW · GW

Hm, good points.

I didn't mean to propose the difficulty frame as the answer to what complexity is really about. Although I'm realizing now that I kinda wrote it in a way that implied that.

I think what I'm going for is that "theorizing about theorizers" seems to be pointing at something more akin to difficulty than truly caring about whether the collection of parts theorizes. But I expect that if you poke at the difficulty frame you'll come across issues (like you have begun to see).

Comment by Adam Zerner (adamzerner) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-05T22:55:44.475Z · LW · GW

I actually never really understood More Dakka until listening to the song!

Comment by Adam Zerner (adamzerner) on On Complexity Science · 2024-04-05T22:33:55.718Z · LW · GW

I spent a bit of time reading the first few chapters of Complexity: A Guided Tour. The author (also at the Santa Fe Institute) claimed that, basically, everyone has their own definition of what "complexity" is, the definitions aren't even all that similar, and the field of complexity science struggles because of this.

However, she also noted that it's nothing to be (too?) ashamed of: other fields have been in similar positions, have come out ok, and that we shouldn't rush to "pick a definition and move on".

We have to theorize about theorizers and that makes all the difference.

That doesn't really seem to me to hit the nail on the head.

I get the idea of how in physics, if billiard balls could think and decide what to do it'd be much tougher to predict what will happen. You'd have to think about what they will think.

On the other hand, if a human does something to another human, that's exactly the situation we're in: to predict what the second human will do we need to think about what the second human is thinking. Which can be difficult.

Let's abstract this out. Instead of billiard balls and humans we have parts. Well, really we have collections of parts. A billiard ball isn't one part, it consists of many atoms. Many other parts. So the question is what one collection of parts will do after it is influenced by some other collection of parts.

If the system of parts can think and act, it makes it difficult to predict what it will do, but that's not the only thing that can make it difficult. It sounds to me like difficulty is the essence here, not necessarily thinking.

For example, in physics suppose you have one fluid that comes into contact with another fluid. It can be difficult to predict whether things like eddies or vortices will form. And this happens despite the fact that there is no "theorizing about theorizers".

Another example: it is often actually quite easy to predict what a human will do even though that involves theorizing about a theorizer. For example, if Employer stopped paying John Doe his salary, I'd have an easy time predicting that John Doe would quit.

Comment by Adam Zerner (adamzerner) on Some Things That Increase Blood Flow to the Brain · 2024-03-28T21:04:32.589Z · LW · GW

The subtext here seems to be that such references are required. I disagree that it should be.

It is frequently helpful but also often a pain to dig up, so there are tradeoffs at play. For this post, I think it was fine to omit references. I don't think the references would add much value for most readers and I suspect Romeo wouldn't have found it worthwhile to post if he had to dig up all of the references before being able to post.

Comment by Adam Zerner (adamzerner) on Shortform · 2024-03-24T16:51:35.157Z · LW · GW

Ah yeah, that makes sense. I guess utility isn't really the right term to use here.

Comment by Adam Zerner (adamzerner) on Shortform · 2024-03-24T06:56:39.938Z · LW · GW

Yeah, I echo this.

I've gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?

On the one hand, yes, I think lc is right about how in some situations people exhibit a striking lack of altruism and sympathy. But on the other hand, there are other situations where people do the opposite: they'll, I dunno, jump into a lake at a risk to their own life to save a drowning stranger. Or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11).

I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they'll act extremely selfishly.

The way that I like to think about this is in terms of "moral weight". How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with "moral weights" to assign to different types of people. But I think that people don't really assign a moral weight and then act consistently. In some situations they'll act as if their answer to my previous question is 100,000, and in other situations they'll act like it's 0.00001.
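As a toy sketch of that trade question (the numbers and the bare-bones decision rule are of course a simplification, made up for illustration):

```python
def accepts_trade(own_cost, their_gain, moral_weight):
    """Would you give up `own_cost` of your utilons so the other person
    gains `their_gain` of theirs? `moral_weight` is how much one of their
    utilons is worth to you, measured in your own utilons."""
    return their_gain * moral_weight > own_cost

# Someone whose answer to "how many of John Doe's utilons per one of
# yours?" is 100 implicitly uses a moral weight of 1/100:
accepts_trade(own_cost=1, their_gain=100_000, moral_weight=1 / 100)   # accepts
accepts_trade(own_cost=1, their_gain=10, moral_weight=1 / 100)        # declines
```

The inconsistency I'm describing is people acting as if `moral_weight` swings wildly from one situation to the next.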

Comment by Adam Zerner (adamzerner) on Shortform · 2024-03-24T06:42:47.328Z · LW · GW

I would be willing to pay someone to help draft a LessWrong post for me about this; I think it's important but my writing skills are lacking.

I'm not looking to write a post about this, but I'd be happy to go back and forth with you in the comments about it (no payment required). Maybe that back and forth will help you formulate your thoughts.

For starters, I'm not sure if I understand the bias that you are trying to point to. Is it that people assume others are more altruistic than they actually are? Do any examples come to your mind other than this?

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-03-21T17:43:37.533Z · LW · GW

Discord threads do have a significant drawback of lowering visibility by a lot, and people don't like to write things that nobody ever sees.

Meh. If you start a thread under the message "Parent level message" in #the-channel, the UI will indicate that there are "N Messages" in a thread belonging to "Parent level message". It's true that those messages aren't automatically visible to people scrolling through the main channel; they'd have to click to open the thread. But if your audience isn't motivated to do that, it seems to me like they aren't worth interacting with in the first place.

I do prefer how Slack treats threads though. They're lighter-weight and more convenient to use in Slack.

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-03-21T00:41:20.629Z · LW · GW

I wish more people used threads on platforms like Slack and Discord. And I think the reason to use threads is very similar to the reason why one should aim for modularity when writing software.

Here's an example. I posted this question in the #haskell-beginners Discord channel asking whether it's advisable for someone learning Haskell to use a linter. I got one reply, but it wasn't as a thread. It was a normal message in #haskell-beginners. Between the time I asked the question and got a response, there were probably a couple dozen other messages. So then, I had to read and scroll through those to get to the response I was interested in, and to see if there were any other responses.

Each of the messages was part of a different conversation. I think of it as something like this:

Conversation A; message 1
Conversation A; message 2
Conversation B; message 1
Conversation C; message 1
Conversation A; message 3
Conversation C; message 2
Conversation A; message 4
Conversation B; message 2

There is a linear structure for something that is more naturally structured as a tree.

Functional Programming Discord server
  #haskell-beginners channel
    Conversation A
      Message 1
      Message 2
      Message 3
      Message 4
    Conversation B
      Message 1
      Message 2
    Conversation C
      Message 1
      Message 2

In writing software, imagine that you have three sub-problems that you need to solve. And imagine if you approached this by doing something like this:

// stuff for sub-problem #1
// stuff for sub-problem #1
// stuff for sub-problem #2
// stuff for sub-problem #3
// stuff for sub-problem #1
// stuff for sub-problem #3
// stuff for sub-problem #1
// stuff for sub-problem #2

We generally prefer to avoid writing code this way. Instead, we prefer to take a more modular approach and do something like this:


function solveSubProblemOne() {
  // stuff for sub-problem #1
}

function solveSubProblemTwo() {
  // stuff for sub-problem #2
}

function solveSubProblemThree() {
  // stuff for sub-problem #3
}

By writing the code in a modular fashion, you can easily focus on the code related to sub-problem #1 and not have to sift through code that is unrelated to sub-problem #1. On the other hand, the more imperative non-modular version makes it difficult to tell what code is related to what sub-problem.
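To make the contrast concrete, here's a toy runnable sketch of the modular version. The specific sub-problems (parsing, summing, formatting) are made up for illustration:

```javascript
// Each sub-problem gets its own function, so related logic stays together.
function parseInput(raw) {
  // Sub-problem #1: turn the raw string into numbers.
  return raw.split(",").map(Number);
}

function sumValues(values) {
  // Sub-problem #2: combine the numbers into a total.
  return values.reduce((total, v) => total + v, 0);
}

function formatResult(total) {
  // Sub-problem #3: present the answer.
  return `total: ${total}`;
}

// The top level reads as an outline of the whole solution.
function solve(raw) {
  return formatResult(sumValues(parseInput(raw)));
}

console.log(solve("1,2,3")); // "total: 6"
```

If you're debugging the parsing, you look at `parseInput` and ignore the rest, just as a thread lets you read one conversation and ignore the others.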

Similarly, using threads on platforms like Slack and Discord makes it easy to see what messages belong to what conversations.

And like software, the importance of this gets larger as the "codebase" becomes more involved and complex. Imagine a Slack channel with lots and lots of conversations happening simultaneously without threads. That is difficult to manage. But if it's a small channel with only two or three conversations happening simultaneously, that is more manageable.

Comment by Adam Zerner (adamzerner) on My Clients, The Liars · 2024-03-08T02:23:21.106Z · LW · GW

It sounds like with "factual lies" you're saying that certain lies are about something that can easily be verified, and thus you're unlikely to convince other people that you're being truthful. Is that accurate? If so, that definitely makes sense. It seems like it's almost always a bad idea to lie in such situations.

Why do you say that sympathy lies are not very consequential (assuming they are successful)? My model is that defense attorneys have a pretty large range for how hard they could work on a case, that working harder increases the odds of winning by a good amount, and that how hard they work depends a good amount on how sympathetic they are towards the defendant.

And yes, absolutely my job relies heavily on building trust and rapport with my clients. It occupies at least around 80% of my initial conversations with a client.

Gotcha. Makes sense. It's interesting how frequently a job that is on its surface about X is largely, even mainly, about Y. With X being "legal stuff" and Y being "emotional stuff" here (I'm being very hand-wavy).

Another example: I'm a programmer and I think that for programming, X is "writing code" and Y is "empathizing with users and working backwards from their most pressing needs". In theory there is a division of labor and the product manager deals with the Y, but in practice I've found that even in companies that try to do this heavily (smaller, more startup-y companies don't aim to divide the labor as much), Y is still incredibly important. Probably even more important than X.

Comment by Adam Zerner (adamzerner) on My Clients, The Liars · 2024-03-08T02:11:57.798Z · LW · GW

Very good point. I mistakenly assumed that the only goal is to communicate one's ideas, but in retrospect it is obvious that things like -- I'm not sure how to describe this. Aesthetics? Artfulness? How well it flows? -- matter as well, and that such things are a big part of what you were going for in this post. Therefore I take back what I said and think it makes a lot of sense to use colorful, non-simple words.

I'm glad I learned this. I'm going to keep it in mind when I read things and hopefully incorporate it into my own writing as well.

Comment by Adam Zerner (adamzerner) on My Clients, The Liars · 2024-03-08T00:29:02.539Z · LW · GW

I am not a lawyer and don't know (much) more about how this stuff works than the average person. From my perspective, there are pros and cons to a defendant lying to a public defender.


Pros:

  • Assuming your lie is successful and it earns you sympathy, the public defender might:
    • Work harder.
    • Spend some political capital they have access to on your case.
    • Avoid working against you. Maybe if you don't explicitly earn their sympathy they'll be "in bed with the prosecutors" and share what you tell them in confidence with the prosecutors in an attempt to get you convicted.

Cons:

  • Assuming your lie is successful:
    • The prosecution might realize the truth, and your lawyer will be unprepared to defend you against their arguments.
  • Assuming your lie is unsuccessful:
    • The inverse of the "Pros" section, pretty much.

It doesn't seem to me like "be completely honest with your lawyer" is always the right approach to take. It seems likely that how sympathetic they are to you is very important and I can imagine realistic situations where there are lies you can tell that a) are unlikely to be figured out and b) earn you a lot of sympathy in such a way that the pros probably outweigh the cons.

Separately, there is the question of what is reasonable for the average defendant to expect. Maybe I am wrong, but it doesn't seem to me that the average defendant has access to enough information to justifiably expect this. I think they'd need to know much more about a) how court cases work and b) the culture that public defenders are a part of.

There is also the point that being under so much stress, the defendants are probably cognitively impaired in some meaningful way, and so expectations of their ability to reason and make good decisions should be correspondingly lower.

But at the same time... yes, I'm sure that a lot of defendants lie in situations where they are pretty likely to get caught, and where it is pretty clearly a bad idea to do so. My guess is that some form of wishful thinking is what explains this. ("I really, really, really don't want anyone to know that I touched that gun! Maybe I can just tell the lawyer that I didn't touch it and no one will ever figure it out.").

If so, I'd imagine that a big part of the job of a defense attorney would be something along the lines of what therapists do: building rapport, earning trust, developing a "therapeutic alliance".

Comment by Adam Zerner (adamzerner) on My Clients, The Liars · 2024-03-08T00:04:08.228Z · LW · GW

Nit: I found myself not knowing what various words in the post mean (marionette, chicanery) and not being super comfortable with others (surreptitiously). I strongly suspect that a non-trivial proportion of other readers are in the same boat and that using simpler words would be an improvement (see Write Simply by Paul Graham).

Comment by Adam Zerner (adamzerner) on Productivity: Working towards a summary of what we know · 2024-03-07T18:23:53.110Z · LW · GW

That's great to hear!

Comment by Adam Zerner (adamzerner) on If you weren't such an idiot... · 2024-03-04T18:24:09.341Z · LW · GW

Good point. Makes sense that it'd be important for such people.

Comment by Adam Zerner (adamzerner) on If you weren't such an idiot... · 2024-03-04T18:17:57.078Z · LW · GW

Laptop chargers are also an object for which it's trivial to own multiple, at a low cost and high (potential) advantage.

I don't see why there is a high potential advantage here. I'd expect:

  • Most people to be able to find a friend or a nice person at a coffee shop with a charger they can borrow.
  • Most people to be able to get a new charger within a day or so (in person store or online + pay for faster shipping).
  • Going a day or so without a laptop not to sacrifice much in terms of fun. I actually expect it to be a net positive there since it'd force you to do something like go for a walk or read a book. It also has the benefit of exercising your "boredom muscles".
  • Going a day or so without a laptop not to sacrifice much in terms of your career. Maybe your boss is frustrated with you in the short term, but I don't expect that to lead to any actual consequences like being meaningfully more likely to get fired or not get a promotion.
Comment by Adam Zerner (adamzerner) on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-02T20:07:09.355Z · LW · GW

I really enjoyed this exercise. I had to think a bunch about it, and I'm not even sure how good my response is. After all, the responses that people contributed in the comments are all pretty varied IMO. I think this points towards it being a good exercise. I'd love to see more exercises like this.

Comment by Adam Zerner (adamzerner) on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-02T19:14:50.468Z · LW · GW

Student: That sounds like a bunch of BS. Like we said, you can't go back after the fact and adjust the theory's predictions.

Comment by Adam Zerner (adamzerner) on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-02T10:35:48.956Z · LW · GW

Student: Ok. I tried that and none of my models are very successful. So my current position is that the Newtonian model is suspect, my other models are likely wrong, and there is some accurate model out there that I haven't found yet. After all, the space of possible models is large, and as a mere student I'm having trouble pruning it.

Comment by Adam Zerner (adamzerner) on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-02T10:30:29.087Z · LW · GW

I have a feeling that there is something deep here that is going over my head. If so, would you mind elaborating (with the elaboration wrapped in a spoiler so it doesn't ruin the fun for others)?

Comment by Adam Zerner (adamzerner) on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-02T10:06:27.614Z · LW · GW

I'd have two main things to say.

The first is something along the lines of an inadequacy analysis (a la Inadequate Equilibria). Given the incentives people face, if Newtonian mechanics was this flawed, would we expect it to have been exposed?

I think we can agree that the answer is an extremely confident "yes". There is a lot of prestige to be gained, prestige is something people want, and there aren't high barriers to doing the experiment and subsequent writeup. So then, I have a correspondingly extremely strong prior that Newtonian mechanics is not that flawed. Strong enough that even this experimental result isn't enough to move me much.

The second is surrounding things that I think you can assume are implied in a stated theory. In this pendulum example, I think it's implied that the prediction is contingent on there not being a huge gust of wind that knocks the stand over, for example. I think it's reasonable to assume that such things are implied when one states their theory.

And so, I don't see anything wrong with going back and revising the theory to something like "this is what we'd predict if the stand remains in place". This sort of thing can be dangerous if eg. the person theorizing is proposing a crackpot medical treatment, keeps coming up with excuses when the treatment doesn't work, and says "see it works!" when positive results are observed. But in the pendulum example it seems fine.

(I'd also teach them about the midwit meme and valleys of bad rationality.)

Comment by Adam Zerner (adamzerner) on Experiments as a Third Alternative · 2024-02-27T18:55:05.856Z · LW · GW

I'm in the process of being evaluated for ADHD. I was diagnosed with it as a kid, but that was over 20 years ago and the psychiatrist wanted me to be re-evaluated. It's taken a very long time to get an appointment and then go through the process, but hopefully I'm only a few weeks away now and will try to remember to report back!

Comment by Adam Zerner (adamzerner) on Open Thread – Winter 2023/2024 · 2024-02-26T22:17:04.664Z · LW · GW

I am seeing a new "Quick Takes" feature on LessWrong. However, I can't find any announcement or documentation for it. I tried searching for "quick takes" and looking in the FAQ. Can someone describe "Quick Takes"?

Comment by Adam Zerner (adamzerner) on CFAR Takeaways: Andrew Critch · 2024-02-17T17:55:19.520Z · LW · GW

I'm remembering the following excerpt from The Scout Mindset. I think it's similar to what I say above.

My path to this book began in 2009, after I quit graduate school and threw myself into a passion project that became a new career: helping people reason out tough questions in their personal and professional lives. At first I imagined that this would involve teaching people about things like probability, logic, and cognitive biases, and showing them how those subjects applied to everyday life. But after several years of running workshops, reading studies, doing consulting, and interviewing people, I finally came to accept that knowing how to reason wasn't the cure-all I thought it was.

Knowing that you should test your assumptions doesn't automatically improve your judgement, any more than knowing you should exercise automatically improves your health. Being able to rattle off a list of biases and fallacies doesn't help you unless you're willing to acknowledge those biases and fallacies in your own thinking. The biggest lesson I learned is something that's since been corroborated by researchers, as we'll see in this book: our judgment isn't limited by knowledge nearly as much as it's limited by attitude.

Comment by Adam Zerner (adamzerner) on story-based decision-making · 2024-02-17T09:36:07.599Z · LW · GW

Yeah. This matches my (limited) experience chatting with investors. They're a lot less smart than I was anticipating.

I'm reminded of something I recall Paul Graham saying (and I think I remember others saying the same thing): that you can think of investors as an iceberg. The tip above water are investors who provide a real value-add with their wisdom and guidance in addition to their money, and the bulk underwater are investors you should treat as providing no value-add beyond the money itself.

Comment by Adam Zerner (adamzerner) on CFAR Takeaways: Andrew Critch · 2024-02-17T09:10:58.166Z · LW · GW

Surprise 1: People are profoundly non-numerate. 

I wonder whether Humans are not automatically strategic is the deeper issue here.

It's one thing if you intend to be strategic about things and fail to do so in part due to lack of numeracy. It's another if you aren't even trying to be strategic in the first place. I suspect that a large majority of the time the issue is not being strategic.

Furthermore, I suspect that most people aren't strategic because they find being strategic distasteful in some way. I've experienced this a lot in my life.

  • I'll want to skim through Yelp for 10 minutes before choosing a restaurant to eat at.
  • Or spend 20 minutes watching trailers and googling around before picking a movie to watch.
  • Or spend 30 minutes on The Wirecutter before making a purchase for a few hundred dollars.
  • Or spend however many dozens of hours researching all sorts of stuff about different cities before moving to one.

I've found that various people see these sorts of things as being, depending on what type of mood they're in, "overly analytical" or "Adam being Adam".

On the other hand, I think there is a smaller but not super small subset of people who don't find it particularly distasteful and would be pretty receptive to a proposal of "you're currently not being strategic about lots of things in your life, being strategic about them would benefit you greatly, and so you should start being strategic about them".

I think that it is important to identify what the real blocker or blockers are here. If there are, for example, multiple blockers and you solve one of them, then you end up in a situation where progress is merely latent. It doesn't really lead to observable results. For example, if someone is both 1) innumerate and 2) not motivated to be strategic, if you teach them to be numerate, (2) will still be a blocker and the person will not achieve better outcomes.

Comment by Adam Zerner (adamzerner) on CFAR Takeaways: Andrew Critch · 2024-02-17T08:53:32.437Z · LW · GW

If your personality type is "writing doesn't work for me", one of your biggest bottlenecks is to make writing work for you.

Thanks for the reminder here. I've thought a lot in the past about the value of writing, but for whatever reason I feel like I've drifted away from it. I think I should spend more time writing and am feeling motivated to start doing so now.

Comment by Adam Zerner (adamzerner) on adamzerner's Shortform · 2024-02-17T08:04:21.311Z · LW · GW

Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives. 

- Scott Alexander, Meditations on Moloch

Comment by Adam Zerner (adamzerner) on The Good Balsamic Vinegar · 2024-01-28T08:55:24.670Z · LW · GW

I feel like a better way to approach this would be to stand on the shoulders of others and search around for product recommendations, e.g. this one from America's Test Kitchen.

I am now incrementally more powerful at grocery shopping.

I apologize if this ruins any subtlety you were going for, but I'm thinking mostly about how these learnings can be applied more generally.

You kinda did what I think most people would do. The product is in bottles. There's no obvious way to tell how good the product is. So you use price point as a heuristic, and call it a day. But it turned out that with a little bit of thought, there was a reasonable way of judging the quality of the product.

So maybe the lesson is to give things a little bit of thought before assuming that they're actually difficult? This can be tricky though. "Operating on automatic" has its benefits. If we always "took the wheel" in situations like these it'd be excessive, I suppose.

But I think the balsamic vinegar was a good example of a situation that might on first approximation seem "excessive"[1] to "take the wheel" on, but it turned out to be worth it. And I get the sense that there are a lot of similar situations where most people could benefit by "taking the wheel".


  1. ^

    I remember early in our relationship, one of the first times that I went grocery shopping with my girlfriend we had an argument about this. I was "taking the wheel" and approaching everything pretty strategically. She didn't want to think so hard.

Comment by Adam Zerner (adamzerner) on David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud. · 2024-01-28T04:56:55.924Z · LW · GW

I see, that all makes a lot of sense. I take back my objection then. It seems at least plausible that Burns is correct here.