When I was a student at Fullstack Academy, a coding bootcamp, they had us all do this (mapping it to the control key), along with a few other settings changes, like making the key repeat rate faster. I think I got this script from them.
My instinct is that it's not the type of thing to hack at with workarounds without buy-in from the LW team.
If there were buy-in from them, I expect that it wouldn't be much effort to add some sort of functionality. At least not for a version one; iterating on it could definitely take time, but you could hold off on spending that time if there isn't enough interest, so the initial investment wouldn't be high effort.
I think this is a great idea, at least in the distillation aspect.
Thanks!
Having briefer statements of the most important posts would be very useful in growing the rationalist community.
I think you're right, but I think it's also important to think about dilution. Making things lower-effort and more appealing to the masses brings down the walls of the garden, which "dilutes" things inside the garden.
But I'm just saying that this is a consideration. And there are lots of considerations. I feel confused about how to enumerate them, weigh them, and figure out which way the arrow points: towards being more appealing to the masses or less appealing. I know I probably indicated that I lean towards the former when I talked about "summaries, analyses and distillations" in my OP, but I want to clarify that I feel very uncertain and if anything probably lean towards the latter.
But even if we did want to focus on having taller walls, I think the "more is possible" point that I was ultimately trying to gesture at in my OP still stands. It's just that the "more" part might mean things like higher quality explanations, more and better examples of what the post is describing, knowledge checks, and exercises.
Since we don't currently have that list of distilled posts (AFAIK - anyone?)
There is the Sequence Highlights, which has an estimated reading time of eight hours.
Sometimes when I'm reading old blog posts on LessWrong, like old Sequence posts, I have something that I want to write up as a comment, and I'm never sure where to write that comment.
I could write it on the original post, but if I do that it's unlikely to be seen and to generate conversation. Alternatively, I could write it on my Shortform or on the Open Thread. That would get a reasonable amount of visibility, but... I dunno... something feels defect-y and uncooperative about that for some reason.
I guess what's driving that feeling is probably the thought that in a perfect world conversations about posts would happen in the comments section of the post, and by posting elsewhere I'm contributing to the problem.
But now that I write that out, I'm feeling like that's a bit of a silly thought. Fixing the problem would take a larger concentration of force than just me posting a few comments on old Sequence posts once in a while. By posting my comments in the comments sections of the corresponding posts, I'm not really moving the needle. So I don't think I endorse any feelings of guilt here.
I would like to see people write high-effort summaries, analyses and distillations of the posts in The Sequences.
When Eliezer wrote the original posts, he was writing one blog post a day for two years. Surely you could do a better job presenting the content that he produced in one day if you, say, took four months applying principles of pedagogy and iterating on it as a side project. I get the sense that more is possible.
This seems like a particularly good project for people who want to write but don't know what to write about. I've talked with a variety of people who are in that boat.
One issue with such distillation posts is discoverability. Maybe you write the post, it receives some upvotes, some people see it, and then it disappears into the ether. Ideally when someone in the future goes to read the corresponding sequence post they would be aware that your distillation post is available as a sort of sister content to the original content. LessWrong does have the "Mentioned in" section at the bottom of posts, but that doesn't feel like it is sufficient.
I recently started going through some of Rationality from AI to Zombies again. A big reason why is the fact that there are audio recordings of the posts. It's easy to listen to a post or two as I walk my dog, or a handful of posts instead of some random hour-long podcast that I would otherwise listen to.
I originally read (most of) The Sequences maybe 13 or 14 years ago when I was in college. At various times since then I've made somewhat deliberate efforts to revisit them. Other times I've re-read random posts as opposed to larger collections of posts. Anyway, the point I want to make is that it's been a while.
I've been a little surprised by my feelings as I re-read them. Some of them feel notably less good than what I remember. Others blow my mind and are incredible.
The Mysterious Answers sequence is one that I felt disappointed by. I felt like the posts weren't very clear and that there wasn't much substance. I think the main overarching point of the sequence is that an explanation can't say that all outcomes are equally probable. It has to say that some outcomes are more probable than others. But that just seems kinda obvious.
I think it's quite plausible that there are "good" reasons why I felt disappointed as I re-read this and other sequences. Maybe there are important things that are going over my head. Or maybe I actually understand things too well now after hanging around this community for so long.
One post that hit me kinda hard and that I really enjoyed re-reading was Rationality and the English Language, along with the follow-up post, Human Evil and Muddled Thinking. The posts helped me grok how powerful language can be.
If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience. You say, “Unreliable elements were subjected to an alternative justice process.”
I'm pretty sure that I read those posts before, along with a bunch of related posts and stuff, but for whatever reason the re-read still meaningfully improved my understanding of the concept.
I assume you mean wearing a helmet while in a car to reduce the risk of car-related injuries and deaths. I actually looked into this, and from what I remember, helmets do more harm than good. They have the benefit of protecting you from hitting your head against something, but the harm in accidents comes much more from whiplash, and by adding more weight to (the top of) your head, a helmet makes whiplash worse. That cost outweighs the benefits by a fair amount.
Yes! I've always been a huge believer in this idea that the ease of eating a food is important and underrated. Very underrated.
I'm reminded of this clip of Anthony Bourdain talking about burgers and how people often put slices of bacon on a burger, but that in doing so it makes the burger difficult to eat. Presumably because when you go to take a bite, the whole slice of bacon often ends up sliding off the burger.
Am I making this more enjoyable by adding bacon? Maybe. How should that bacon be introduced into the question? It's an engineering and structural problem as much as it is a flavor experience. You really have to consider all of those things. One of the greatest sins in "burgerdom" I think is making a burger that's just difficult to eat.
I've noticed that there's a pretty big difference in the discussion that follows from me showing someone a draft of a post and asking for comments and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn't usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you're talking to people who you know. But I actually don't suspect that this plays much of a role, at least on LessWrong. As an anecdote, I've had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free and I had never talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
Thanks Marvin! I'm glad to hear that you enjoyed the post and that it was helpful.
Imho your post should be linked to all definitions of the sunk cost fallacy.
I actually think the issue was more akin to the planning fallacy. Like when I'd think to myself "another two months to build this feature and then things will be good", it wasn't so much that I was compelled because of the time I had sunk into the journey, it was more that I genuinely anticipated that the results would be better than they actually were.
It isn't active, sorry. See the update at the top of the post.
See also: https://www.painscience.com/articles/strength-training-frequency.php.
Summary:
Strength training is not only more beneficial for general fitness than most people realize, it isn’t even necessary to spend hours at the gym every week to get those benefits. Almost any amount of it is much better than nothing. While more effort will produce better results, the returns diminish rapidly. Just one or two half hour sessions per week can get most of the results that you’d get from two to three times that much of an investment (and that’s a deliberately conservative estimate). This is broadly true of any form of exercise, but especially so with strength training. In a world where virtually everything in health and fitness is controversial, this is actually fairly settled science.
Oh I see, that makes sense. In retrospect it's a little obvious that you don't have to choose one or the other :)
So does the choice of which type of fiber to take boil down to the question of the importance of constipation vs microbiome and cholesterol? It seems to me that if the former is more important you should take soluble non-fermentable fiber, that if the latter is more important you should take soluble fermentable fiber (or eat it in a whole food), and that insoluble fiber is rarely if ever the best option.
Funny. I have a Dropbox folder where I store video tours of all the apartments I've ever lived in. Like, I spend a minute or two walking around the apartment and taking a video with my phone.
I'm not sure why, exactly. Partly because it's fun to look back. Partly because I don't want to "lose" something that's been with me for so long.
I suspect that such video tours are more appropriate for a large majority of people. 10 hours and $200-$500 sounds like a lot. And you could always convert the video tour into digital art some time in the future if you find the nostalgia is really hitting you.
Hm. I hear ya. Good point. I'm not sure whether I agree or disagree.
I'm trying to think of an analogy and came up with the following. Imagine you go to McDonald's with some friends and someone comments that their burger would be better if they used prime ribeye for their ground beef.
I guess it's technically true, but something also feels off about it to me that I'm having trouble putting my finger on. Maybe it's that it feels like a moot point to discuss things that would make something better that are also impractical to implement.
I just looked up Gish gallops on Wikipedia. Here's the first paragraph:
The Gish gallop (/ˈɡɪʃ ˈɡæləp/) is a rhetorical technique in which a person in a debate attempts to overwhelm an opponent by abandoning formal debating principles, providing an excessive number of arguments with no regard for the accuracy or strength of those arguments and that are impossible to address adequately in the time allotted to the opponent. Gish galloping prioritizes the quantity of the galloper's arguments at the expense of their quality.
I disagree that focusing on the central point is a recipe for Gish gallops and that it leads to Schrodinger's importance.
Well, I think that in combination with a bunch of other poor epistemic norms it might be a recipe for those things, but a) not by itself, and b) I think the norms would have to be pretty poor. Like, I don't expect that you need 10/10 level epistemic norms in the presence of focusing on the central point to shield from those failure modes; I think you just need something more like 3/10 level epistemic norms. Here on LessWrong I think our epistemic norms are strong enough that focusing on the central point doesn't put us at risk of things like Gish gallops and Schrodinger's importance.
I actually disagree with this. I haven't thought too hard about it and might just not be seeing it, but on first thought I am not really seeing how such evidence would make the post "much stronger".
To elaborate, I like to use Paul Graham's Disagreement Hierarchy as a lens to look through for the question of how strong a post is. In particular, I like to focus pretty hard on the central point (DH6) rather than supporting and tangential points. I think the central point plays a very large role in determining how strong a post is.
Here, my interpretation of the central point(s) is something like this:
- Poverty is largely determined by the weakest link in the chain.
- Anoxan is a helpful example to illustrate this.
- It's not too clear what drives poverty today, and so it's not too clear that UBI would meaningfully reduce poverty.
I thought the post did a nice job of making those central points. Sure, something like a survey of the research in positive psychology could provide more support for point #1, for example, but I dunno, I found the sort of intuitive argument for point #1 to be pretty strong, I'm pretty persuaded by it, and so I don't think I'd update too hard in response to the survey of positive psychology research.
Another thing I think about when asking myself how strong a post is: how "far along" it is. Is it an off the cuff conversation starter? An informal write up of something that's been moderately refined? A formal write up of something that has been significantly refined?
I think this post was somewhere towards the beginning of the spectrum (note: it was originally a tweet, not a LessWrong post). So then, for things like citations supporting empirical claims, I don't think it's reasonable to expect very much from the author, and so I lean away from viewing the lack of citations as something that (meaningfully) weakens the post.
What would it be like for people to not be poor?
I reply: You wouldn't see people working 60-hour weeks, at jobs where they have to smile and bear it when their bosses abuse them.
I appreciate the concrete, illustrative examples used in this discussion, but I also want to recognize that they are only the beginnings of a "real" answer to the question of what it would be like to not be poor.
In other words, in an attempt to describe what he sees as poverty, I think Eliezer has taken the strategy of pointing to a few points in Thingspace and saying "here are some points; the stuff over here around these points is roughly what I'm trying to gesture at". He hasn't taken too much of a stab at drawing the boundaries. I'd like to take a small stab at drawing some boundaries.
It seems to me that poverty is about QALYs. Let's wave our hands a bit and say that QALYs are a function of 1) the "cards you're dealt" and 2) how you "play your hand". With that, I think that we can think about poverty as happening when someone is dealt cards that make it "difficult" for them to have "enough" QALYs.
This happens in our world when you have to spend 40 hours a week smiling and bearing it. It happens in Anoxan when you take shallow breaths to conserve oxygen for your kids. And it happened to hunter-gatherers in times of scarcity.
There are many circumstances that can make it difficult to live a happy life. And as Eliezer calls out, it is quite possible for one "bad apple circumstance", like an Anoxan resident not having enough oxygen, to spoil the bunch. That is, you can enjoy abundance in a lot of areas but scarcity in one or a few others, and that scarcity can be enough to drive poverty despite the abundance. I suppose then that poverty is driven in large part by the strength of the "weakest link".
Note that I don't think this dynamic needs to be very conscious on anyone's part. I think that humans instinctively execute good game theory because evolution selected for it, even if the human executing just feels a wordless pull to that kind of behavior.
Yup, exactly. It makes me think back to The Moral Animal by Robert Wright. It's been a while since I read it so take what follows with a grain of salt, because I could be butchering some stuff, but that book makes the argument that this sort of thing goes beyond friendship and into all types of emotions and moral feelings.
Like if you're at the grocery store and someone just cuts you in line for no reason, one way of looking at it is that the cost to you is negligible -- you just need to wait an additional 45 seconds for them to check out -- and so the rational thing would be to just let it happen. You could confront them, but what exactly would you have to gain? Suppose you are traveling and will never see any of the people in the area ever again.
But we have evolved such that this situation would evoke some strong emotions regarding unfairness, and these emotions would often drive you to confront the person who cut you in line. I forget if this stuff is more at the individual level or the cultural level.
Why? Because extra information could help me impress them.
I've always been pretty against the idea of trying to impress people on dates.
It risks false positives. Ie. it risks a situation where you succeed at impressing them, go on more dates or have a longer relationship than you otherwise would, and then realize that you aren't compatible and break up. Which isn't necessarily a bad thing, but I think it usually is.
Impressing your date also reduces the risk of false negatives, which is a good thing. Ie. it helps avoid the scenario where someone who you're compatible with rejects you. Maybe this is too starry-eyed, but I like to think that if you just bring your true self to the table, are open-minded, and push yourself to be a little vulnerable, the risk of such false negatives is pretty low.
I think this is especially relevant because I think the emotionally healthy person heuristic probably says to try to impress your date.
Hm yeah, I feel the same way. Good point.
America's response to covid seems like one example of this.
If I'm remembering correctly from Zvi's blog posts, he criticized the US's policy for being a sort of worst-of-both-worlds middle ground. A strong, decisive effort to enforce things like masking and distancing might have actually eradicated the virus and thus been worthwhile. But if you're not going to take an aggressive enough stance, you should just forget it: half-hearted mitigation policies don't do enough to "complete the bridge" and so aren't worth the economic and social costs.
It's not a perfect example. The "unfinished bridge" here provides positive value, not zero value. But I think the amount of positive value is low enough that it would be useful to round it down to zero. The important thing is that you get a big jump in value once you cross some threshold of progress.
I think a lot of philanthropic causes are probably in a similar boat.
When there are lots of small groups spread around making very marginal progress on a bunch of different goals, it's as if they're building a bunch of unfinished bridges. This too isn't a perfect example because the "unfinished bridges" provide some value, but like the covid example, I think the amount of value is small enough that we can just round it to zero.
On the other hand, when people get a little barbaric and rally around a single cause, there might be enough concentration of force to complete the bridge.
Project idea: virtual water coolers for LessWrong
Previous: Virtual water coolers
Here's an idea: what if there was a virtual water cooler for LessWrong?
- There'd be Zoom chats with three people per chat. Each chat is a virtual water cooler.
- The user journey would begin with the user expressing that they'd like to join a virtual water cooler.
- Once they do, they'd be invited to join one.
- I think it'd make sense to restrict access to users based on karma. Maybe only 100+ karma users are allowed.
- To start, that could be it. In the future you could do some investigation into things like how many people there should be per chat.
Seems like an experiment that is both cheap and worthwhile.
If there is interest I'd be happy to create an MVP.
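To make that a bit more concrete, here's a minimal sketch of the kind of matching logic an MVP might use, assuming groups form as soon as three eligible users are waiting. Everything here (the waitingUsers queue, createZoomChat, invite) is a hypothetical placeholder, not an existing API:

// Hypothetical sketch: match LessWrong users into three-person water cooler chats.
const KARMA_THRESHOLD = 100; // assumption: only 100+ karma users are allowed
const CHAT_SIZE = 3; // three people per virtual water cooler

const waitingUsers = []; // users who have expressed that they'd like to join

function requestToJoin(user) {
  // Ignore users below the karma threshold.
  if (user.karma < KARMA_THRESHOLD) return;
  waitingUsers.push(user);
  // Once three eligible people are waiting, spin up a chat and invite them.
  if (waitingUsers.length >= CHAT_SIZE) {
    const group = waitingUsers.splice(0, CHAT_SIZE);
    const chatUrl = createZoomChat();
    group.forEach((member) => invite(member, chatUrl));
  }
}

function createZoomChat() {
  // Placeholder: a real MVP would call some video chat API and return a join link.
  return "https://example.com/water-cooler/placeholder";
}

function invite(user, chatUrl) {
  // Placeholder: a real MVP would notify the user, e.g. via an on-site message.
  console.log("Inviting " + user.name + " to " + chatUrl);
}

// Example: the third eligible request triggers a chat.
requestToJoin({ name: "alice", karma: 250 });
requestToJoin({ name: "bob", karma: 40 }); // filtered out by the karma check
requestToJoin({ name: "carol", karma: 120 });
requestToJoin({ name: "dave", karma: 800 }); // alice, carol, and dave get invited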
(Related: it could be interesting to abstract this and build a sort of "virtual water cooler platform builder" such that eg. LessWrong could use the builder to build a virtual water cooler platform for LessWrong and OtherCommunity could use the builder to build a virtual water cooler platform for their community.)
Update: I tried a few doses of Adderall, up to 15mg. I didn't notice anything.
I was envisioning that you can organize a festival incrementally, investing more time and money into it as you receive more and more validation, and that taking this approach would de-risk it to the point where overall, it's "not that risky".
For example, to start off you can email or message a handful of potential attendees. If they aren't excited by the idea you can stop there, but if they are then you can proceed to start looking into things like cost and logistics. I'm not sure how pragmatic this iterative approach actually is though. What do you think?
Also, it seems to me that you wouldn't have to actually risk losing any of your own money. I'd imagine that you'd 1) talk to the hostel, agree on a price, have them "hold the spot" for you, 2) get sign ups, 3) pay using the money you get from attendees.
Although now that I think about it, I'm realizing that it probably isn't that simple. For example, the hostel cost ~$5k, and maybe the money from the attendees would have covered it all, but maybe fewer attendees signed up than expected and the organizers ended up having to pay out of pocket.
On the other hand, maybe there is funding available for situations like these.
Virtual watercoolers
As I mentioned in some recent Shortform posts, I recently listened to the Bayesian Conspiracy podcast's episode on the LessOnline festival and it got me thinking.
One thing I think is cool is that Ben Pace was saying how the valuable thing about these festivals isn't the presentations, it's the time spent mingling in between the presentations, and so they decided with LessOnline to just ditch the presentations and make it all about mingling. Which got me thinking about mingling.
It seems plausible to me that such mingling can and should happen more online. And I wonder whether an important thing about mingling in the physical world is that, how do I say this, you're just in the same physical space, next to each other, with nothing else you're supposed to be doing, and in fact what you're supposed to be doing is talking to one another.
Well, I guess you don't have to be talking to one another. It's also cool if you just want to hang out and sip on a drink or something. It's similar to the office water cooler: it's cool if you're just hanging out drinking some water, but it's also normal to chit chat with your coworkers.
I wonder whether it'd be good to design a virtual watercooler. A digital place that mimics aspects of the situations I've been describing (festivals, office watercoolers).
1. By being available in the virtual watercooler it's implied that you're pretty available to chit chat with, but it's also cool if you're just hanging out doing something low key like sipping a drink.
2. You shouldn't be doing something more substantial though.
3. The virtual watercooler should be organized around a certain theme. It should attract a certain group of people and filter out people who don't fit in. Just like festivals and office water coolers.
In particular, this feels to me like something that might be worth exploring for LessWrong.
Note: I know that there are various Slack and Discord groups but they don't meet conditions (1) or (2).
More dakka with festivals
In the rationality community people are currently excited about the LessOnline festival. Furthermore, my impression is that similar festivals are generally quite successful: people enjoy them, have stimulating discussions, form new relationships, are exposed to new and interesting ideas, express that they got a lot out of it, etc.
So then, this feels to me like a situation where More Dakka applies. Organize more festivals!
How? Who? I dunno, but these seem like questions worth discussing.
Some initial thoughts:
- Assurance contracts seem like quite the promising tool.
- You probably don't need a hero license to go out and organize a festival.
- Trying to organize a festival probably isn't risky. It doesn't seem like it'd involve too much time or money.
I wish there were more discussion posts on LessWrong.
Right now it feels like it weakly if not moderately violates some sort of cultural norm to publish a discussion post (similar but to a lesser extent on the Shortform). Something low effort of the form "X is a topic I'd like to discuss. A, B and C are a few initial thoughts I have about it. What do you guys think?"
It seems to me like something we should encourage though. Here's how I'm thinking about it. Such "discussion posts" currently happen informally in social circles. Maybe you'll text a friend. Maybe you'll bring it up at a meetup. Maybe you'll post about it in a private Slack group.
But if it's appropriate in those contexts, why shouldn't it be appropriate on LessWrong? Why not benefit from having it be visible to more people? The more eyes you get on it, the better the chance someone has something helpful, insightful, or just generally useful to contribute.
The big downside I see is that it would screw up the post feed. Like when you go to lesswrong.com and see the list of posts, you don't want that list to have a bunch of low quality discussion posts you're not interested in. You don't want to spend time and energy sifting through the noise to find the signal.
But this is easily solved with filters. Authors could mark/categorize/tag their posts as being a low-effort discussion post, and people who don't want to see such posts in their feed can apply a filter to filter these discussion posts out.
Context: I was listening to the Bayesian Conspiracy podcast's episode on LessOnline. Hearing them talk about the sorts of discussions they envision happening there made me think about why that sort of thing doesn't happen more on LessWrong. Like, whatever you'd say to the group of people you're hanging out with at LessOnline, why not publish a quick discussion post about it on LessWrong?
Hm, maybe.
Sometimes it can be a win-win situation. For example, if the call leads to you identifying a problem they're having and solving it in a mutually beneficial way.
But oftentimes that isn't the case. From their perspective, the chances are low enough that, yeah, maybe the cold call just feels spammy and annoying.
I think that cold calls can be worthwhile from behind a veil of ignorance though. That's the barometer I like to use. If I were behind a veil of ignorance, would I endorse the cold call? Some cold calls are well targeted and genuine, in which case I would endorse them from behind a veil of ignorance. Others are spammy and thoughtless, in which case I wouldn't endorse them.
I agree with everything you've said. Let me try to clarify where it is that I think we might be disagreeing.
I am of the opinion that some "narrow problems" are "good candidates" to build "narrow solutions" for but that other "narrow problems" are not good candidates to build "narrow solutions" for and instead really call for being solved as part of an all-in-one solution.
I think you would agree with this. I don't think you would make the argument that all "narrow problems" are "good candidates" to build "narrow solutions" for.
Furthermore, as I argue in the post, I think that the level of "cohesion" often plays an important role in how "appropriate" it is to use a "narrow solution" for a "narrow problem". I think you would agree with this as well.
I suspect that our only real disagreement here is how we would weigh the tradeoffs. I think I lean moderately more in the direction of thinking that cohesiveness is important enough to make various "narrow problems" insufficiently good candidates for a "narrow solution" and you lean moderately more in the direction of thinking that cohesiveness isn't too big a deal and the "narrow problem" still is a good candidate for building a "narrow solution" for.
To be clear, I don't think that any of this means that I should attempt to build all-in-one products. I think it means that in my calculus for what "narrow problem" I should attempt to tackle I should factor in the level of cohesion.
In practice, all-in-one tools always need a significant degree of setup, configuration and customization before they are useful for the customer. Salesforce, for example, requires so much customization, you can make a career out of just doing Salesforce customization.
I can see that being true for all-in-one tools like Salesforce that are intended to be used across industries, but what about all-in-one tools that are more targeted?
For example, Bikedesk is an all-in-one piece of software that is specifically for bike shops and I would guess that the overall amount of setup and configuration for a shop using Bikedesk is lower than that of a bike shop using a handful of more specific tools.
The tradeoff is between a narrowly focused tool that does one job extremely well immediately, with little or no setup
I suppose the "little or no setup" part is sometimes the case, but it seems to me that oftentimes it is not. Specifically, when the level of cohesiveness is high, it seems to me that it is probably not the case.
Using the bike shop as an example, inventory management software that isn't part of an all-in-one solution needs inventory data to be fed to it and thus will require a moderate amount of setup and configuration.
See also Adam Ragusea's podcast episode on the topic.
Hm, gotcha.
It's tough, I think there are a lot of tradeoffs to navigate.
- You could join a big company. You'll 1) get paid, 2) work on something that lots of people use, but 3) you'll be a small cog in a large machine, and it sounds like that's not really what you're looking for. It sounds like you enjoy autonomy and having a meaningful and large degree of ownership.
- You could work on your own project. That addresses 3. But then 1 and 2 become pretty big risks. It's hard to build something that makes good money and that lots of people use.
- You could join an open source project that lots of people use and is lacking contributors. But there's often not really a path to getting paid there.
- Something interesting: https://fresh.deno.dev/. I really like what they're doing. I personally think it's the best web framework out there. And there's only one person working on it. He's an incredible developer. Deno is paying him to work on it. I'm not sure if they'd be open to paying a second contributor. And I am not too optimistic that Fresh will become something that many people use.
- Working on LessWrong is an interesting possibility. After all, you're a longtime user and have the right skillset. However, 1) I'm not sure how good the prospects are for getting paid, 2) it's a relatively small community so you wouldn't be getting that "tons and tons of people use something I built" feeling, and 3) given that it's later stage and there's a handful of other developers working on it, I'm not sure if it'd provide you enough feeling of ownership.
- Joining a small company seems like the most realistic way to get 1, 2 and 3, but the magnitude of each might not be ideal: smaller companies tend to pay less, have fewer users, and still have enough employees that you don't really have that much ownership.
My best guess is that starting your own company would be best. Something closer to an indie hacker-y lifestyle SaaS business than a "swing for the fences" VC-backed business. The latter is probably better if you're earning to give and looking to maximize impact, but since you're leaning more towards designing a good life for yourself, I think the former is better, and I also think most people would agree with that. I've seen a lot of VC's be very open about the fact that the "swing for the fences" approach is frequently not actually in the founder's interest.
I'm looking to do the lifestyle SaaS business thing right now btw. If you're interested in that I'd love to chat: shoot me a DM.
I was thinking that too actually. And at the time I was thinking that for cohesion-related reasons, it's often the case that there just isn't a market for narrow tools like inventory software and instead the market demands an all-in-one tool, in which case there wouldn't be a demand for a tool that solves the problem of many formats of POS system data.
But now I'm not so sure. I'm feeling pretty agnostic. I'm not clear on how often the market demand is largely for all-in-one solutions vs how often there is a market demand for narrow solutions.
I guess it's a matter of pros and cons and tradeoffs.
On the one hand, a product that solves a narrow and specific problem can focus more on that problem and do a better job of addressing it than a general, all-in-one product can. But then on the other hand, it still seems to me that what I propose about cohesion stands.
Using Anrok as an example, on the one hand the fact that Anrok is narrowly focused on tax and thus is able to do a better job of solving tax-related problems works in Anrok's favor. But on the other hand, there are cohesion-related things that work against Anrok such as having to integrate with other tools and such as customers having to spend more time shopping (with an all-in-one solution they just buy one thing and are done).
I suppose you'd agree that there are in fact tradeoffs at play here and that the real question is which direction the scale tends to lean. And I suppose you are of the opinion that the scale tends to lean in favor of narrower, more targeted solutions rather than broader, more all-in-one solutions. Is all of that true? If so, would you mind elaborating more on why you are of that belief?
Kudos for writing this post. I know it's promotional/self-interested, but I think that's fine. It's also pro-social. Having the rule/norm to encourage this type of post seems unlikely to be abused in a net-negative sort of way (assuming some reasonable restrictions are in place).
What are your goals? Money? Impact? Meaning? To what extent?
I think it'd also be helpful to elaborate on your skillset. Front end? Back end? Game design? Mobile apps? Design? Product? Data science?
I'll provide a dissenting perspective here. I actually came away from reading this feeling like Metz' position is maybe fine.
Everybody saw it. This is an influential person. That means he's worth writing about. And so once that's the case, then you withhold facts if there is a really good reason to withhold facts. If someone is in a war zone, if someone is really in danger, we take this seriously.
It sounds like he's saying that the Times' policy is that you only withhold facts if there's a "really" good reason to do so. I'm not sure what type of magnitude "really" implies, but I could see the amount of harm at play here falling well below it. If so, then Metz is in a position where his employer has a clear policy and doing his job involves following that policy.
As a separate question, we can ask whether "only withhold facts in warzone-type scenarios" is a good policy. I lean moderately strongly away from thinking it's a good policy. It seems to me that you can apply some judgement and be more selective than that.
However, I have a hard time moving from "moderately strongly" to "very strongly". To make that move, I'd need to know more about the pros and cons at play here, and I just don't have that good an understanding. Maybe it's a "customer support reads off a script" type of situation. Let the employee use their judgement; most of the time it'll probably be fine; once in a while they do something dumb enough to make it not worth letting them use their judgement. Or maybe journalists won't be dumb if they are able to use judgement here, but maybe they'll use that power to do bad things.
I dunno. Just thinking out loud.
Circling back around, suppose hypothetically we assume that the Times does have a "only withhold facts in a warzone-type scenario" policy, that we know that this is a bad and overall pretty harmful policy, and that Metz understands and agrees with all of this. What should Metz do in this hypothetical situation?
I feel unclear here. On the one hand, it's icky to be a part of something unethical and harmful like that, and if it were me I wouldn't want to live my life like that, so I'd want to quit my job and do something else. But on the other hand, there's various personal reasons why quitting your job might be tough. It's also possible that he should take a loss here with the doxing so that he is in position to do some sort of altruistic thing.
Probably not, though: he's probably in the wrong in this hypothetical situation if he goes along with the bad policy. I'm just not totally sure.
I strongly suspect that spending time building features for rate limited users is not valuable enough to be worthwhile. I suspect this mainly because:
- There aren't a lot of rate limited users who would benefit from it.
- The value that the rate limited users receive is marginal.
- It's unclear whether doing things that benefit users who have been rate limited is a good thing.
- I don't see any sorts of second order effects that would make it worthwhile, such as non-rate-limited people seeing these features and being more inclined to be involved in the community because of them.
- There are lots of other very valuable things the team could be working on.
Hm, good points.
I didn't mean to propose the difficulty frame as the answer to what complexity is really about. Although I'm realizing now that I kinda wrote it in a way that implied that.
I think what I'm going for is that "theorizing about theorizers" seems to be pointing at something more akin to difficulty than truly caring about whether the collection of parts theorizes. But I expect that if you poke at the difficulty frame you'll come across issues (like you have begun to see).
I actually never really understood More Dakka until listening to the song!
I spent a bit of time reading the first few chapters of Complexity: A Guided Tour. The author (also at the Santa Fe Institute) claimed that, basically, everyone has their own definition of what "complexity" is, the definitions aren't even all that similar, and the field of complexity science struggles because of this.
However, she also noted that it's nothing to be (too?) ashamed of: other fields have been in similar positions and have come out ok, and we shouldn't rush to "pick a definition and move on".
We have to theorize about theorizers and that makes all the difference.
That doesn't really seem to me to hit the nail on the head.
I get the idea of how in physics, if billiard balls could think and decide what to do, it'd be much tougher to predict what will happen. You'd have to think about what they will think.
On the other hand, if a human does something to another human, that's exactly the situation we're in: to predict what the second human will do we need to think about what the second human is thinking. Which can be difficult.
Let's abstract this out. Instead of billiard balls and humans we have parts. Well, really we have collections of parts. A billiard ball isn't one part; it consists of many atoms. Many other parts. So the question is what one collection of parts will do after it is influenced by some other collection of parts.
If the system of parts can think and act, it makes it difficult to predict what it will do, but that's not the only thing that can make it difficult. It sounds to me like difficulty is the essence here, not necessarily thinking.
For example, in physics suppose you have one fluid that comes into contact with another fluid. It can be difficult to predict whether things like eddies or vortices will form. And this happens despite the fact that there is no "theorizing about theorizers".
Another example: it is often actually quite easy to predict what a human will do even though that involves theorizing about a theorizer. For example, if Employer stopped paying John Doe his salary, I'd have an easy time predicting that John Doe would quit.
The subtext here seems to be that such references are required. I disagree that they should be.
They are frequently helpful but also often a pain to dig up, so there are tradeoffs at play. For this post, I think it was fine to omit references. I don't think the references would add much value for most readers and I suspect Romeo wouldn't have found it worthwhile to post if he had to dig up all of the references before being able to post.
Ah yeah, that makes sense. I guess utility isn't really the right term to use here.
Yeah, I echo this.
I've gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?
On the one hand, yes, I think lc is right about how in some situations people exhibit just an extraordinary amount of altruism and sympathy: they'll, I dunno, jump into a lake at a risk to their own life to save a drowning stranger, or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11). But on the other hand, there are other situations where people do just the opposite.
I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they'll act extremely selfishly.
The way that I like to think about this is in terms of "moral weight". How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with "moral weights" to assign to different types of people. But I think that people don't really assign a moral weight and then act consistently. In some situations they'll act as if their answer to my previous question is 100,000, and in other situations they'll act like it's 0.00001.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it's important but my writing skills are lacking.
I'm not looking to write a post about this, but I'd be happy to go back and forth with you in the comments about it (no payment required). Maybe that back and forth will help you formulate your thoughts.
For starters, I'm not sure if I understand the bias that you are trying to point to. Is it that people assume others are more altruistic than they actually are? Do any examples come to your mind other than this?
Discord threads do have a significant drawback of lowering visibility by a lot, and people don't like to write things that nobody ever sees.
Meh. If you start a thread under the message "Parent level message" in #the-channel, the UI will indicate that there are "N Messages" in a thread belonging to "Parent level message". It's true that those messages aren't automatically visible to people scrolling through the main channel, they'd have to click to open the thread, but if your audience isn't motivated to do that it seems to me like they aren't worth interacting with in the first place.
I do prefer how Slack treats threads though. They're lighter and more convenient to use in Slack.
I wish more people used threads on platforms like Slack and Discord. And I think the reason to use threads is very similar to the reason why one should aim for modularity when writing software.
Here's an example. I posted this question in the #haskell-beginners Discord channel asking whether it's advisable for someone learning Haskell to use a linter. I got one reply, but it wasn't as a thread. It was a normal message in #haskell-beginners. Between the time I asked the question and got a response, there were probably a couple dozen other messages. So then, I had to read and scroll through those to get to the response I was interested in, and to see if there were any other responses.
Each of the messages was part of a different conversation. I think of it as something like this:
Conversation A; message 1
Conversation A; message 2
Conversation B; message 1
Conversation C; message 1
Conversation A; message 3
Conversation C; message 2
Conversation A; message 4
Conversation B; message 2
There is a linear structure for something that is more naturally structured as a tree:
Functional Programming Discord server
    #haskell-beginners channel
        Conversation A
            Message 1
            Message 2
            Message 3
            Message 4
        Conversation B
            Message 1
            Message 2
        Conversation C
            Message 1
            Message 2
In writing software, imagine that you have three sub-problems that you need to solve. And imagine if you approached this by doing something like this:
// stuff for sub-problem #1
// stuff for sub-problem #1
// stuff for sub-problem #2
// stuff for sub-problem #3
// stuff for sub-problem #1
// stuff for sub-problem #3
// stuff for sub-problem #1
// stuff for sub-problem #2
We generally prefer to avoid writing code this way. Instead, we prefer to take a more modular approach and do something like this:
solveSubProblemOne();
solveSubProblemTwo();
solveSubProblemThree();
function solveSubProblemOne() {
...
}
function solveSubProblemTwo() {
...
}
function solveSubProblemThree() {
...
}
By writing the code in a modular fashion, you can easily focus on the code related to sub-problem #1 and not have to sift through code that is unrelated to sub-problem #1. On the other hand, the more imperative non-modular version makes it difficult to tell what code is related to what sub-problem.
Similarly, using threads on platforms like Slack and Discord makes it easy to see what messages belong to what conversations.
And like software, the importance of this gets larger as the "codebase" becomes more involved and complex. Imagine a Slack channel with lots and lots of conversations happening simultaneously without threads. That is difficult to manage. But if it's a small channel with only two or three conversations happening simultaneously, that is more manageable.
It sounds like with "factual lies" you're saying that certain lies are about something that can easily be verified, and thus you're unlikely to convince other people that you're being truthful. Is that accurate? If so, that definitely makes sense. It seems like it's almost always a bad idea to lie in such situations.
Why do you say that sympathy lies are not very consequential (assuming they are successful)? My model is that defense attorneys have a pretty large range for how hard they could work on a case, that working harder increases the odds of winning by a good amount, and that how hard they work depends a good amount on how sympathetic they are towards the defendant.
And yes, absolutely my job relies heavily on building trust and rapport with my clients. It occupies at least around 80% of my initial conversations with a client.
Gotcha. Makes sense. It's interesting how frequently a job that is on its surface about X is largely, even mainly, about Y. With X being "legal stuff" and Y being "emotional stuff" here (I'm being very hand-wavy).
Another example: I'm a programmer and I think that for programming, X is "writing code" and Y is "empathizing with users and working backwards from their most pressing needs". In theory there is a division of labor and the product manager deals with the Y, but in practice I've found that even in companies that try to do this heavily (smaller, more startup-y companies don't aim to divide the labor as much), Y is still incredibly important. Probably even more important than X.