Reflections on Connect Developers 2022-01-16T00:20:48.883Z
Conversation as path traversal 2021-12-27T08:41:31.187Z
Anxiety and computer architecture 2021-12-11T10:37:30.898Z
Tentative Anger 2021-11-27T04:38:28.087Z
Using blinders to help you see things for what they are 2021-11-11T07:07:41.961Z
My current thinking on money and low carb diets 2021-10-29T06:50:38.543Z
Impressive vs honest signaling 2021-10-26T07:16:24.478Z
What health-related tips do you have for buying meat? 2021-10-23T07:09:36.744Z
Appropriately Gray Products 2021-10-06T00:56:41.586Z
Secular Therapy Project 2021-09-25T20:34:37.439Z
How much should we value life? 2021-09-06T22:30:29.976Z
A brief review of The Scout Mindset 2021-08-26T20:47:08.731Z
When Programmers Don't Understand Code, Don't Blame The User 2021-08-18T19:59:04.285Z
A Qualitative and Intuitive Explanation of Expected Value 2021-08-10T03:31:13.314Z
When writing triggers memory reconsolidation 2021-07-25T22:10:26.517Z
Believing vs understanding 2021-07-24T03:39:44.168Z
Preparing for ambition 2021-07-19T06:13:10.477Z
Happy paths and the planning fallacy 2021-07-18T23:26:30.920Z
Bad names make you open the box 2021-06-09T03:19:14.107Z
Why don't long running conversations happen on LessWrong? 2021-05-30T22:36:03.951Z
Don't feel bad about not knowing basic things 2021-05-24T01:49:57.637Z
Is driving worth the risk? 2021-05-11T05:04:47.935Z
Taking the outside view on code quality 2021-05-07T04:16:52.912Z
Naming and pointer thickness 2021-04-28T06:35:08.865Z
Bayes' theorem, plausible deniability, and smiley faces 2021-04-11T20:41:10.324Z
Think like an educator about code quality 2021-03-27T05:43:52.579Z
The best frequently don't rise to the top 2021-03-25T06:10:20.278Z
The best things are often free or cheap 2021-03-18T02:57:15.012Z
Five examples 2021-02-14T02:47:07.317Z
How should you go about valuing your time? 2021-01-10T06:54:56.372Z
Babble Thread 2021-01-09T21:52:12.383Z
Thoughts on Mustachianism 2021-01-09T09:27:36.839Z
Conversation, event loops, and error handling 2021-01-08T08:05:49.224Z
Give it a google 2020-12-29T05:30:39.133Z
adamzerner's Shortform 2020-12-16T09:51:03.460Z
Why I love stand up comedy 2020-12-16T09:34:22.198Z
Bad reductionism 2020-12-16T08:21:33.944Z
Debugging the student 2020-12-16T07:07:09.470Z
Map and Territory: Summary and Thoughts 2020-12-05T08:21:07.031Z
Writing to think 2020-11-17T07:54:44.523Z
When socializing, to what extent does walking reduce the risk of contracting Covid as opposed to being stationary? 2020-11-16T00:39:30.182Z
What are some good examples of fake beliefs? 2020-11-14T07:40:19.776Z
What is the right phrase for "theoretical evidence"? 2020-11-01T20:43:38.747Z
What is our true life expectancy? 2020-10-23T23:17:13.414Z
Should we use qualifiers in speech? 2020-10-23T04:46:10.075Z
Blog posts as epistemic trust builders 2020-09-27T01:47:07.830Z
Losing the forest for the trees with grid drawings 2020-09-24T21:13:35.180Z
Updates Thread 2020-09-09T04:34:20.509Z
More Right 2020-07-22T03:36:54.007Z
In praise of contributing examples, analogies and lingo 2020-07-13T06:43:48.975Z


Comment by adamzerner on List of Probability Calibration Exercises · 2022-01-23T05:26:56.306Z · LW · GW

Funny timing. I'm actually in the process of working on one, and am planning to post some sort of initial alpha release to LessWrong soon! I need to seed the database with more questions first though. Right now there are only 10. I have a script and approach that should make it easy enough to get tens of thousands soon enough. This is helpful though. I'll look through the existing resources and see if there's anything I can use to improve my app.

Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-22T17:57:26.362Z · LW · GW

My prediction is that the rise in diagrams would be much larger, based on the following model. 1) Making diagrams is currently not a thing that crosses people's minds, but if it were an option in the text editor it would cross their minds. 2) Having to save and upload a file is a trivial inconvenience that is a large barrier for people.

Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-22T01:39:35.988Z · LW · GW

A majority of the users won't use this feature because their favourite software is better in some way.

I'm not clear on what you mean here. It sounds like you are saying that even if Excalidraw were integrated into the LW text editor, users would still find their favorite drawing software and use it instead. But that almost never happens currently, so I don't see why adding Excalidraw would change that.

But I think the delta usability/ delta dev time is low. The software needs to have all the basic features and have a good UI before it starts offering any advantage at all.

If what they said in the docs is correct, it wouldn't actually require too much dev time. Usability-wise, I've found Excalidraw to be one of the rare pieces of software that is intuitive to use out of the box, and doesn't have much of a learning curve.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-17T17:56:30.118Z · LW · GW

Yeah, I see the appeal in that. But for this app it didn't seem worth investing in that sort of either/or functionality, and if I have to pick one, I expect people are more interested in matching with those who agree than those who disagree.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-17T17:52:56.200Z · LW · GW

No and no. I didn't think it'd be worth doing either of those things before I knew there'd actually be enough users to justify the investment of time. So I ended up just sending the emails by hand.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-17T17:51:40.313Z · LW · GW

Agreed that the name is pretty bad. I don't suspect that it is part of the problem though. In my Hacker News submissions, the text was "Show HN: Video chat with like-minded developers (". And the header on the website is "Video chat with like-minded developers". So "Connect Developers" doesn't seem like it is being used in any meaningful way. The reason I stuck with the name is because I couldn't think of something better and wanted to be agile/lean about not letting it be a timesink.

Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-17T17:31:50.803Z · LW · GW

I am not, it looks awesome, thanks for sharing! I will pass it along to my friend.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-16T21:34:59.454Z · LW · GW

My experience with the EA chats project was that, although there were lots of people who'd be happy to connect with likeminded folks, they were nervous about taking the initiative to reach out to specific people and ask to chat. Having a mechanism by which they could be assured of being connected with others who wanted the same thing seemed to eliminate this problem. Likewise, while I'm sure there's an internet message board for almost every interest at this point, that doesn't mean there's a way for people interested in forming real-life relationships to do so.

Great point, that makes sense.

In fact, that might be a way to start seeking out passionate people to connect via such a service: just find subreddits with a high activity-to-members ratio.

That does sound like a good idea! There are a few other projects I'm feeling more excited about right now so I don't think I'm going to pursue it. But I have added it to my list of ideas.

If the tech catches on, how will you deal with trolls, thieves, complaints, and so on?

Yeah that is an important question. It seems like a very difficult problem in general. I think big companies like Facebook have some sort of AI thing that tries to detect it, but my impression is that they haven't had much success.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-16T20:13:14.671Z · LW · GW

great happiness and congratulations for "did a thing"!

Thank you!

I think the "write a business plan already" is absolutely key here. And really, you often only need a business sketch, not a plan.

Yeah, I agree. As my thoughts settle after this experience, that's the main thing that keeps swimming around in the back of my mind. And that's so true about only needing a sketch, not a plan.

Btw, it's good to get this data point of someone really liking the "write a business plan already" idea. The post didn't receive many upvotes, which surprised me and makes me question whether there is something unwise about it. I guess what I'm saying is that I notice some confusion, and so data points here are helpful to me.

What customers/developers have this need to connect, and why is this method any better than the hundreds of other community and discussion sites that exist?

Well, there are places to chat with people, but a) it's usually centered around some topic. Like if you join a Discord group for a particular programming language, the types of conversations that are expected are different from the types of conversations you'd have if you sat down to get coffee with someone. I'm not actually sure where you would go if you were looking for the latter type of conversation. Anything come to mind for you?

b) I think having those types of coffee shop conversations via text is different from having them via video chat (which is different from having them via email, which is different from having them in person). Perhaps the differences aren't very large, and there is a substitute good sort of thing going on. OTOH, perhaps not. I don't feel particularly confident that text is sufficient for the large majority of people, and that they wouldn't be excited about video chat.

What IS success for this?

I think success is utilons generated. If you have fun filling out the survey and then decide not to follow through and actually do the video call, that generates a pretty small amount of utilons. If you have one call and it is ok, then that generates some utilons. If you make a long term friend, that generates a ton of utilons.

I agree that it is hard to tell these things though. I was thinking that you can ballpark it and assume that something like 50% of the matches end up having a video call, and 1% of those end up being friends. And that if the app actually did get a lot of users, I could email them asking them to report on their experiences. Definitely not perfect, but a) I don't see better alternatives, and b) it doesn't seem imperfect enough to make this idea a non-starter.

That light blue group may be a VERY thin tail. I think "chatting with strangers over the internet" is probably NOT attractive to the vast majority of people, and software developers even less likely to want that.

Yeah, the more I think about it the more I think you might be right about that.

You also have the problem that this thing is poisoned by a few bad actors, and that happens SO frequently in other domains that it's a fair assumption that if I give any contact information to a stranger, I'll regret it.

Ah, that's a good point. Well, what exactly did you have in mind? The app feels extremely side-project-y, so I wouldn't expect people to be worried about data harvesting. But now I'm thinking about stuff like chat roulette and how there is such a high proportion of weirdos there. Seems like moderately strong evidence that it'd be an issue for Connect Developers too.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-16T19:49:27.638Z · LW · GW

You can answer as many or as few questions as you'd like. Perhaps I should have made this more clear though.

Many (20%?) Of the questions used jargon on at least one end of the scale. I happened to know about half of it, but that's quite a filter even within the programming community

Yeah, that's something I expected. Eg. a front end developer not knowing what an ORM is. But I figured that it'd be ok since it's all optional.

This isn't eHarmony, I don't need a compatible life partner, just someone vaguely in the same mindspace.

I actually disagree with this. I expect people to be attracted to the idea of filling out a long-ish survey and then being able to be matched with someone who is very similar, rather than vaguely similar. Vaguely similar feels notably less exciting. I don't feel strongly though, maybe I'm wrong. Good to get this data point from you.

A 10 point scale, while common, didn't fit on my phone.

Huh, that is surprising. I actually did try to make it look decent on small screens. The buttons are supposed to wrap, eg. 1-5 are on a top row and 6-10 are on a bottom row. Is that not what happened for you?

Comment by adamzerner on Reflections on Connect Developers · 2022-01-16T19:35:11.363Z · LW · GW

I was actually envisioning connecting niche communities as just a separate thing entirely from what I was trying to do in connecting people from a broader community. But I think what you're saying about how the former often paves a path to the latter makes total sense, and Facebook is a great example! I'm not sure why I didn't see that at first; it's funny how many things become obvious in retrospect.

That second version of the simplify idea is interesting. I agree that the difficulty is in finding that group of people. The first things that come to mind are niche programming languages/libraries/frameworks. But there the people who feel passionate about them very well might already be in touch with each other.

The internet used to feel more wild, perhaps, and so open-ended exploration felt more valuable. Now, finding the needle of interest in the haystack of content is more challenging.

Hm, an analogy that is coming to my mind is walking around a town with lots of mom-and-pop shops versus walking around a city with lots of chain stores and fast food places. Stumbling across a mom-and-pop shop feels kinda random because it's new, whereas seeing another McDonalds doesn't feel random. Is that what you're going for? Do you also remember you or people you know connecting with strangers in a way that led to eg. phone calls?

Comment by adamzerner on Reflections on Connect Developers · 2022-01-16T02:19:07.574Z · LW · GW

The thing that jumped out to me about this is that it seems too open-ended. In my experience, meeting with strangers is much easier and less scary if there is some point or goal other than "meeting with strangers who have similar views as you".

For sure! This is something I thought about actually. I agree that it's a little tough to just bootstrap the conversation. Having prompts would be better. I wanted to do something like, but for friendship. I just couldn't figure it out quickly, and didn't want to sink too much time into the project. But if I or anyone else does figure out the right prompts for friendship, that strikes me as the seed of an interesting project.

Comment by adamzerner on Reflections on Connect Developers · 2022-01-16T02:15:48.853Z · LW · GW

One thing you might have tried is to simplify. Instead of asking all the survey questions, just randomly match developers with no additional criteria.

Yeah, maybe. It didn't take very long to deal with the survey part of it, and my sense is that it added value versus having a short description, because it seems cooler to match with someone who is like-minded versus someone random.

A friend and I did this for the EA community, and it got quite a bit of use. People also sent us pretty extensive feedback, indicating they were getting substantial value out of it.

That is super cool that you and your friend did this successfully for the EA community! Kudos to you guys! That makes me happy to hear. I'm thinking now that doing it for small communities like that makes a lot more sense than what I tried. I expect that in small communities like the EA community, where there is some existing sense of connection and where it is a pretty safe assumption that people are like-minded, people would be a lot more willing to meet over video-chat, and would probably get more out of it.

Seems applicable to LessWrong as well. If it had success in the EA community, that seems like very strong evidence that it would also have success with LessWrong. Anyone wanna give it a go for LessWrong? If not I probably will at some point.

The internet used to have much more in the way of random connection with strangers for open-ended conversation, and I think people miss it.

Huh. Care to elaborate, or point me to any resources? I'm interested to hear more about that.

Comment by adamzerner on I have COVID, for how long should I isolate? · 2022-01-13T22:42:49.947Z · LW · GW

One thing you could do is use microCOVID to look at how risky various events are when the other person has covid, and use that as a baseline to make adjustments off of. For example:

From there you can make some educated guesses about how much the risk goes down on day N. Microcovid had a blog post a while ago, pre-delta, that has this image:

From what I remember about omicron, the infectiousness window is smaller. Ballparking it, I'll throw out some numbers and guess that on day 5 maybe you are 20% as infectious, and on day 10 you are 1% as infectious. Which would yield updated numbers of 30, 300 and 7,000 microcovids respectively for the above scenarios on day 5, and then 1.5, 15 and 350 microcovids for day 10.

So that gives you a sense of how much risk you'd be exposing others to in various situations, which can inform your conclusion on the broader question of how long and in what ways you should isolate for.
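The adjustment described above can be sketched as a simple scaling. Note that the baseline values below are back-calculated from the day-5 figures in this comment (30, 300 and 7,000 at 20% infectiousness) and are assumptions for illustration, not numbers pulled from microCOVID itself:

```python
# Scale each scenario's baseline risk by how infectious the person still is.
# Baselines are back-calculated from the day-5 numbers above; the scenario
# labels are placeholders, since the original examples referenced an image.
BASELINES = {"scenario A": 150, "scenario B": 1_500, "scenario C": 35_000}

RELATIVE_INFECTIOUSNESS = {5: 0.20, 10: 0.01}  # rough guesses from the comment

def adjusted_microcovids(baseline: float, day: int) -> float:
    """Risk in microCOVIDs on a given day after infection."""
    return baseline * RELATIVE_INFECTIOUSNESS[day]

for day in (5, 10):
    print(day, [adjusted_microcovids(b, day) for b in BASELINES.values()])
# day 5:  [30.0, 300.0, 7000.0]
# day 10: [1.5, 15.0, 350.0]
```

The point of writing it this way is that once you have a baseline from microCOVID, the day-by-day isolation question reduces to one made-up-but-adjustable number: the relative infectiousness on that day.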

Comment by adamzerner on Omicron Post #14 · 2022-01-13T19:00:12.623Z · LW · GW

From various sources, I have become convinced that rapid tests taken from nose swabs are likely to often be several days slower at detecting infections than rapid tests that use throat swabs.

I'm confused about how this works. Does the test have to be designed for throat swabs rather than nasal swabs? I see some language about adding a throat swab, which would indicate that the answer is "no", but other language feels like it points towards a "yes".

Comment by adamzerner on adamzerner's Shortform · 2022-01-10T17:31:05.876Z · LW · GW

Noticing confusion about the nucleus

In school, you learn about forces. You learn about gravity, and you learn about the electromagnetic force. For the electromagnetic force, you learn about how likes repel and opposites attract. So two positively charged particles close together will repel, whereas a positively and a negatively charged particle will attract.

Then you learn about the atom. It consists of a bunch of protons and a bunch of neutrons bunched up in the middle, and then a bunch of electrons orbiting around the outside. You learn that protons are positively charged, electrons negatively charged, and neutrons have no charge. But if protons are positively charged, how can they all be bunched together like that? Don't like charges repel?

This is a place where people should notice confusion, but they don't. All of the pieces are there.

I didn't notice confusion about this until I learned about the explanation: something called the strong nuclear force. Yes, since likes repel, the electromagnetic force is pushing the protons away from each other. But on the other hand, the strong nuclear force attracts them together, and apparently it's strong enough to overcome the electromagnetic force in this instance.

In retrospect, this makes total sense. Of course the electromagnetic force is repelling those protons, so there's gotta be some other force that is stronger. The only other force we learned about was gravity, but the masses in question are way too small to explain the nucleus being held together. So there's got to be some other force that they haven't taught us about yet that is in play. A force that is very strong and that applies at the nuclear level. Hey, maybe it's even called the strong nuclear force!

Comment by adamzerner on Covid 1/6/22: The Blip · 2022-01-06T20:09:12.059Z · LW · GW

young people are still very unlikely to die and shouldn’t take minimizing death risk as a major life task except when considering doing actively risky things like skydiving, or putting oneself at risk of violence.

This is probably assuming that people have "normal" expected lifespans of something like 80-100 years. But if we take seriously ideas like the singularity and accelerating technological progress, perhaps we should be expecting much longer lifespans, in which case minimizing death risk would be an important life task.

Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-05T00:12:24.568Z · LW · GW

I was talking to a friend recently who is an experienced software developer looking to get into AI safety. Both of us have been reading LessWrong for a long time, but were unclear on various things. For example, where can you go to see a list of all job and funding opportunities? Would jobs be ok with someone with a software engineering background learning AI related things on the job? Would grants be ok with that? What remote opportunities are available? What if there is a specific type of work you are interested in? What does the pay look like?

These are just a few of the things we were unclear on. And I expect that if you interviewed other people in similar boats, there would be different things that they are unclear on, and that this results in lots of people not entering the field of AI safety who otherwise would. So then, perhaps having some sort of comprehensive career guide would be a high level action that would result in lots more people entering the field.

Or, perhaps there are good resources available, and I am just unaware of them. Anyone have any tips? I found 80,000 hours' career review of AI safety technical research and johnswentworth's post How To Get Into Independent Research On Alignment/Agency, but neither seems comprehensive enough.

Edit: As an alternative, we could also have some sort of page with a list of people in the field of AI safety who are willing to chat on the phone with those who are looking to enter the field and answer questions. Now that I think about it, I suspect this would be both a) more effective at "converting" new "leads", and b) something that those in the field of AI safety would be more willing to do.

Why do I believe (a)? Having a career guide that is comprehensive enough where you get all of your questions addressed is hard. And there's something about speaking with a real person. Why do I believe (b)? Chatting with people is fun. Especially when you are able to help them. It also is low-commitment and doesn't take very long. On the other hand, writing and (especially) maintaining a guide is a lot of work.

So then, here is a Google Doc:

  • If you're in the field of AI safety, it would be awesome if you added your contact info.
  • If you know someone in the field of AI safety, it would be awesome if you brought this to their attention.
  • I just threw this together haphazardly. If someone is willing to take over the project and/or make the doc a little nicer, do something better in Notion, or create a real website for this, that would be awesome. I'd pursue this myself if there was enough interest (I'm a programmer and would build a real website de
  • If you are a LessWrong moderator, it'd be cool if you considered linking to this prominently. I feel like that might be a necessary condition for this succeeding. Otherwise it feels like the sort of thing that would rely on word of mouth to know that it exists, and that it probably wouldn't spread well enough to survive long enough.
  • If you are someone looking to get into the field of AI safety research, it would be great if you could share your thoughts and experiences, positive or negative, so we can update our beliefs about what the pain points really are.
Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-04T01:58:20.056Z · LW · GW

My biggest thought is that the bar for experimenting is a lot lower than the bar for, say, committing to this site-wide for 12 months. And with that said, it's hard for me to imagine this not being promising enough to experiment with. Eg. by enabling it on select posts and seeing what the results and feedback are.

Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-03T23:34:10.440Z · LW · GW

Feature idea: Integrating Excalidraw into the editor so that users can quickly and easily draw sketches and diagrams. I have been doing so a little bit, eg. this diagram in this post.

I'm a big fan of visual stuff. I think it is pretty useful. And their GitHub repo says it isn't that hard to integrate.

Try out @excalidraw/excalidraw. This package allows you to easily embed Excalidraw as a React component into your apps.

Comment by adamzerner on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-03T18:56:34.339Z · LW · GW

Reaction-ballot voting has a "you make what you measure" feel to me.

  1. You make what you measure.

I learned this one from Joe Kraus. [3] Merely measuring something has an uncanny tendency to improve it. If you want to make your user numbers go up, put a big piece of paper on your wall and every day plot the number of users. You'll be delighted when it goes up and disappointed when it goes down. Pretty soon you'll start noticing what makes the number go up, and you'll start to do more of that. Corollary: be careful what you measure.

If people can vote on your comments along an axis of eg. seeking truth vs conflict, I expect that users will spend more effort to seek truth rather than conflict.

However, there is a risk of unintended consequences. For example, the presence of the truth vs conflict axis might push people away from babble-y and contrarian comments. I actually expect that this would happen in a non-trivial way with the current axes. But if there was an additional axis like "bold vs timid", I think that would offset the effect. Eg. in the sense of how sticking your neck out is a rationalist virtue, as opposed to using language like "X might be the case".

Comment by adamzerner on adamzerner's Shortform · 2022-01-03T01:20:53.389Z · LW · GW

Good point. I was actually thinking about that and forgot to mention it.

I'm not sure how to articulate this well, but my diagram and OP were mainly targeted at gears level models. Using the atheism example, the world's smartest theist might have a gears level model that is further along than mine. However, I expect that the world's smartest atheist has a gears level model that is further along than the world's smartest theist.

Comment by adamzerner on adamzerner's Shortform · 2022-01-01T08:07:13.397Z · LW · GW

Closer to the truth vs further along

Consider a proposition P. It is either true or false. The green line represents us believing with 100% confidence that P is true. On the other hand, the red line represents us believing with 100% confidence that P is false.

We start off not knowing anything about P, so we start off at point 0, right at that black line in the middle. Then, we observe data point A. A points towards P being true, so we move upwards towards the green line a moderate amount, and end up at point 1. After that we observe data point B. B is weak evidence against P. We move slightly further from the green line, but still above the black line, and end up at point 2. So on and so forth, until all of the data relevant to P has been observed, and since we are perfect Bayesians, we end up being 100% confident that P is, in fact, true.

Now, compare someone at point 3 to someone at point 4. The person at point 3 is closer to the truth, but the person at point 4 is further along.

This is an interesting phenomenon to me. The idea of being further along, but also further from the truth. I'm not sure exactly where to take this idea, but two thoughts come to mind.

The first thought is of valleys of bad rationality. As we make incremental progress, it doesn't always make us better off.

The second thought is of how far along I actually am in my beliefs. For example, I am an atheist. But what if I had to debate the smartest theist in the world? Would I win that debate? I think I would, but I'm not actually sure. Perhaps they are further along than me. Perhaps I'm at point 3 and they're at point 7.
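The walk between the red and green lines can be sketched in log-odds space, where each data point shifts our position by the log of its Bayes factor. The factors below are made-up illustrative numbers, not anything from the shortform:

```python
import math

def update(log_odds: float, bayes_factor: float) -> float:
    """One Bayesian update: add the log of the data point's Bayes factor."""
    return log_odds + math.log(bayes_factor)

def probability(log_odds: float) -> float:
    """Convert log-odds back to our credence that P is true."""
    return 1 / (1 + math.exp(-log_odds))

position = 0.0                    # point 0: total ignorance, credence 0.5
position = update(position, 4.0)  # data point A: moderate evidence for P
position = update(position, 0.7)  # data point B: weak evidence against P
print(round(probability(position), 2))  # 0.74: above the midline, short of certainty
```

One nice thing about this framing is that "closer to the truth vs further along" falls out naturally: someone who has seen more data points has added more terms to the sum, but depending on which terms they've seen, their running total can still sit on the wrong side of where a better-informed total would be.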

Comment by adamzerner on AnnaSalamon's Shortform · 2022-01-01T05:10:28.154Z · LW · GW

posts about things I presently care a lot about coming to a better understanding of (but where my thoughts are not so organized yet, and so trying to write about it involves much much use of the backspace, and ~80% of the time leads to me realizing the concepts are wrong, and going back to the drawing board).

This is something that I've been thinking about. Currently I sense that the overwhelming majority of people are hesitant to write about ideas that are in this exploratory phase. But collaboration at the exploratory phase is important! I suspect that the main way this collaboration currently happens is that people text their friends, but I feel like we can do better than that.

I'm not exactly sure how. I think it's largely a social problem. Ie. people need to feel like it is ok to post early stage exploratory thoughts that are likely to have problems. And the way to get to that point is probably to see other (high status) members of the community doing so. There's a chicken-egg problem there, but it could probably be bootstrapped by just convincing a critical mass of people to just do it.

I should point out that the LessWrong team has tried to solve this problem with the shortform and by making personal blog posts a thing that is very babble-y. I think that is failing though because the social convention hasn't changed, and the social convention is the crux of the problem.

Another possibility is that this type of exploratory conversation just doesn't happen "in public". It needs to happen in small, tight-knit groups no larger than, say, four people. In which case it would be an interesting idea for eg. LessWrong to connect people and form such groups, that are limited in size and have the explicit goal of being for discussing exploratory ideas.

Edit: A big reason why I'm excited about the possibility of (drastically) improving this exploratory phase is because of how high a level of action it is. It should trickle down and have positive effects in many places. In theory.

Comment by adamzerner on Omicron: My Current Model · 2021-12-30T05:08:37.659Z · LW · GW

My guess is that it's because previous infection seems to provide significant (rather than weak or moderate) protection, and there will be a lot more people who have been previously infected next time a new variant rolls around.

Comment by adamzerner on What would you like from How valuable would it be to you? · 2021-12-29T19:29:40.820Z · LW · GW

Gotcha, thanks.

Comment by adamzerner on What would you like from How valuable would it be to you? · 2021-12-29T18:29:38.668Z · LW · GW

Good to know, thanks! My understanding is that with exercise, going from nothing to something has a huge benefit, but after that the returns diminish pretty rapidly. I'm being very qualitative here, but maybe eg. going from something to solid exercise is decent, and then solid to intense is small. Does that match what you found?

Comment by adamzerner on What would you like from How valuable would it be to you? · 2021-12-29T18:22:05.396Z · LW · GW

I'm curious what would changes you would make, based on the information? The things that affect driving risk are generally well known and Josh took a stab at quantifying them; what would you do differently if you found certain numbers were off by 20%?

In general I don't care too much about being off by 20%. There are some caveats/comments though.

  1. There are a lot of variables, and it feels to me like if each of them could be off by ~20%, the overall calculation could be off by, idk, a factor of 1-2? That matters somewhat to me, but still not too too much. I'm more interested in orders of magnitude differences, or at least factors of more like 3-5.
  2. I value life a lot more highly than others. And with a higher value on life, differences like 20% start to matter more. Still not too much, and if I'm being honest they probably still aren't the types of differences that would actually change my behavior.
  3. I suppose the things that affect driving risks are well known, but are their magnitudes well known? I have two rationalist friends in particular I'm thinking of who believe/suspect that being a safe driver can make an orders-of-magnitude difference. On the other hand, I don't share that impression, and it looks like you, along with Josh Jacobson, don't either. But none of us have spent much time investigating this question, so I'd think our confidences are all relatively low. Another example is driving speed. I did a quick investigation and it seems like the sort of thing that could have an orders-of-magnitude impact. If so, that could actually be pretty influential for me, making local trips at low speed limits something I'm ok with. And maybe there are other big-impact things we are overlooking that would show up in a closer investigation. That's part of the value I see in a "microcovid for cars/other things": knowing that others have investigated it thoroughly, I can feel comfortable that we're not missing anything important.
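To make the first point above a bit more concrete, here's a toy sketch (my own illustrative numbers, not from Josh's model) of how per-variable errors compound in a multiplicative estimate:

```python
# Toy sketch: if an estimate multiplies together several inputs, and each
# input can be off by ~20% in the same direction, the worst-case overall
# error compounds multiplicatively.
n_variables = 4          # hypothetical number of multiplicative inputs
per_variable_error = 1.2 # each input off by 20%

worst_case = per_variable_error ** n_variables  # all errors pointing one way
print(round(worst_case, 2))  # 2.07, i.e. roughly a factor of 2
```

In practice errors partly cancel rather than all pointing the same way, so "a factor of 1-2" for four-ish inputs at ±20% each seems about right.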

Not strictly what you asked for but you might be interested in Dan Luu's write-up on car safety

I am interested in the question of how much the car you're in affects your risk of death, but I'm not really getting that from his article.

UPDATE: I made a guesstimate and it turns out that if you're already a basically safe driver, the safest car has to be really, stupidly safe for the difference between it and a decent car to affect your risk of death much. The safety number is made up right now; I have a request out to Dan for his estimate of the safety increase, but otherwise am not planning on pursuing it, since it makes so little difference to my life.

If you use the typical $10M valuation for life, then a micromort costs $10. You arrived at 40 micromorts/year, so $400/year. If your ballpark of the safety of a car affecting mortality by a factor of 1 is accurate, and if you own a car for, say, 5 years, then you might save something like $400/year * 5 years = $2,000 by choosing a safer car, but this probably isn't worth the investment of time or money. If you 10x the value you place on life it starts to matter though. I don't get this impression, but it's also possible that the safety of the car affects mortality by a factor of 5-10 instead of 0.5-2.
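The back-of-envelope above can be written out directly, using the same numbers from the comment:

```python
# Back-of-envelope micromort math, using the numbers from the comment.
value_of_life = 10_000_000                   # the typical $10M valuation
micromort_cost = value_of_life / 1_000_000   # $10 per micromort

annual_micromorts = 40  # the estimate arrived at for driving
years_owned = 5

annual_cost = annual_micromorts * micromort_cost  # $400/year
max_saving = annual_cost * years_owned            # upper bound if a safer
                                                  # car eliminated all of it
print(annual_cost, max_saving)  # 400.0 2000.0
```

Note that $2,000 is an upper bound: it assumes the safer car eliminates the driving risk entirely, whereas a more realistic factor-of-2 improvement would save half that.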

Comment by adamzerner on What would you like from How valuable would it be to you? · 2021-12-29T08:54:31.820Z · LW · GW

I might have come across it in the past but I don't remember. Thanks!

That last row, which adjusts for things like impairment, is particularly useful. I would still be willing to pay some good money for something a) more detailed (eg. driving speed is something I've come across that seems important and would be cool to see info on) and b) where more time was invested. At less than 1.5 hours, I feel worried about the reliability.

Comment by adamzerner on What would you like from How valuable would it be to you? · 2021-12-29T08:09:15.381Z · LW · GW

The thing I'd get most value from for microcovid would be good information on how much (in dollars) a microcovid "costs". Yes this is personal, but you could have users enter in info about various person-specific parameters, like how much they value life, and help in answering questions like that. I'm not sure how much I'd pay for it. $100? $250?

More specific information about how risky activities are probably isn't that useful to me. I just need a rough sense.

Comment by adamzerner on What would you like from How valuable would it be to you? · 2021-12-29T08:05:13.685Z · LW · GW

This isn't really an answer to the question at hand, but I'd really like to see something similar for other risks like driving. If it was good I could see myself paying $1,000 for it.

Comment by adamzerner on Should I blog on LessWrong? · 2021-12-29T00:40:27.483Z · LW · GW

The bar for experimenting on things is low. Give it a shot! Once you start experimenting with posts you'll be in a better position to decide whether you want to continue.

If anonymity is the concern, you can always use a full pseudonym. I suspect pretty strongly that it isn't actually a big deal for future employers though. I am a programmer and in my experience interviewing, interviewers spend very little effort actually digging into this stuff. I personally have a long history of blog posts, YouTube videos, a podcast, and two startups that I founded, plus a bunch of stuff on various internet forums, but very, very few interviewers have actually looked at any of it. If anything it's just skimming a blog post or two. Academia is different of course, and isn't something I'm familiar with, but my guess is that it is relatively similar.

Comment by adamzerner on Should I blog on LessWrong? · 2021-12-29T00:36:56.553Z · LW · GW

If you do post, I would suggest limiting posts that mostly talk about yourself and contain little information that is of general interest. I suggest focusing on the question "how can I add value to others".

I actually disagree with this, and so does the LW team. From the FAQ:

What can I post on LessWrong?

Posts on practically any topic are welcomed on LessWrong. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be.

However, to maintain its overall focus while still allowing posts on any topic, LessWrong classifies posts as either Personal blogposts or as Frontpage posts. See more in the post on Personal Blogpost vs Frontpage Posts.

Comment by adamzerner on Omicron: My Current Model · 2021-12-28T21:02:38.614Z · LW · GW

I've skimmed almost all of your covid posts, so in theory this shouldn't really be teaching me anything new, but I found it to be a very useful compilation. Thank you!

Comment by adamzerner on Conversation as path traversal · 2021-12-28T07:21:06.948Z · LW · GW

I haven't heard of that game, but it's a very interesting idea! I'll have to give it a shot!

It sounds incredibly difficult though. The space of possible people you might be is very large. Plus the other person is fighting against you, trying to figure out who they are before you can figure out who you are.

Interruption would be impolite, of course, so you have to be very subtle in steering the conversation.

Now that I think about it, maybe it isn't so obvious that interruption is impolite. I am thinking back to a trip I had back home. My girlfriend and mom established a routine where they would continually interrupt each other right before the other one was finished speaking. I don't like doing that, so I never really said anything when the three of us were hanging out.

Then I brought it up. I didn't have this path traversal analogy at the time, but I basically tried to say what I was saying in this blog post about how it doesn't give me a chance to have any input on where the conversation goes, and there are often times where they would steer it in one direction at times when I wasn't ready to make that turn. They said I should just interrupt then. I said how that would be rude. They said no it isn't. They both realized that they were constantly interrupting each other, but genuinely didn't find it off-putting at all. Think girl talk, I guess?

This is an anecdote of course, but thinking about it now, I feel like it is a convention I have seen in others before, and isn't too uncommon.

That all is talking descriptively about whether it is impolite. Prescriptively, I think there are various situations where it shouldn't be considered impolite.

Comment by adamzerner on Conversation as path traversal · 2021-12-28T07:09:35.683Z · LW · GW

The conversation would then involve looking for holes in each other's mental maps (regions of high uncertainty) and cooperating to fill them in.

Absolutely! Although I'm not sure how well this particular path traversal analogy fits that idea. I like the one I used in Debugging the student more. I think the differences between the two are subtle but there.

You seemed to imply that conversations can have goals, i.e. destinations that participants in the conversation can try to steer it towards.

Yeah, I think so. An example that comes to my mind is that recently I was texting with a friend. We were talking about how there are so many covid cases in the NBA. I said how it feels weird to me given how disproportionate it is vs the general population. My friend said stuff about how athletes travel a lot and often do risky things like going out clubbing. I said that's probably true but it doesn't seem strong enough to explain it. Then he said how they are constantly getting covid tests. That led to a lightbulb going off in my head. "Of course! That's it!" We both felt happy with that as the explanation for the phenomenon, and using this analogy, we reached the destination.

But to address what you bring up later in your comment, I don't think conversations always do or always should have this sort of singular destination as the goal. Witty banter is a good example of that. I think that like most things, it's a spectrum. Sometimes there is a very clear and singular destination that everyone knows they are after, but that is an extreme. Other times the conversation participants know they are headed in some general direction, but aren't sure exactly where the destination is. Ie. there is an element of babbling.

They can simulate conversations, but they can't really participate in genuine conversation-space traversals in the sense of deliberately looking for gaps in understanding and for ways to fill those gaps.

I actually know almost nothing about how language models like GPT-3 work, but it at least seems like it should be possible for them to do this, no?

By the way, how would your model handle other types of conversation that have purposes other than conveying or seeking information, such as witty banter, small talk, or giving/receiving orders? Would such conversations still involve traversals in the same space, or would it look qualitatively different? Would there still be goal states or just open-ended evolution?

To address this more explicitly, I think the model still fits.

  • In the witty banter case, the participants continue to take tangents, never pursuing one particular destination too hard. They do so because they enjoy the exploring and/or the novelty of going in new directions.
  • In the giving/receiving orders case, the authority figure has a lot of control over where the conversation goes. And they restrict the paths that the subordinate can take. Eg. by requiring a yes or no answer. Or often times only giving the subordinate one choice for an answer, eg. "Yes sir!".
  • In small talk, it is taboo to go down certain paths. Eg. this clip makes fun of the fact that moving from small talk to "medium talk" is such a big taboo. In the context of small talk, the other paths to things like medium talk ("How's your marriage?") still exist, you can still go down them, it's just that doing so is taboo.

Comment by adamzerner on Internet Literacy Atrophy · 2021-12-27T19:50:40.110Z · LW · GW

This is a good example of a situation where I believe the principle of charity is being applied too strongly. The author's claim was that it is a trap, not that it is possible to see it as a trap. The structure of that first paragraph is "Claim that it is a trap. Points about being an authority figure on the topic." (FWIW I don't mean any of this contentiously, just constructive criticism.)

Comment by adamzerner on Internet Literacy Atrophy · 2021-12-27T00:25:48.316Z · LW · GW

Downvoted for being purely an argument from authority.

Comment by adamzerner on Internet Literacy Atrophy · 2021-12-26T22:09:08.356Z · LW · GW

The OP was hypothesizing that a lack of keeping up with tech trends leads to you "falling behind" and eventually reaching a point where it feels insurmountable to learn new tech things. It is possible that this hypothesis is true, and that young people have such a huge advantage in learning new things that this advantage outweighs their similar lack of background knowledge.

I don't get that sense though. There are some places where 40-year-olds have an advantage over 5-year-olds in learning new things. There are other places where 5-year-olds have the advantage. Then there's the question of how wide the gap is between 5-year-olds and 40-year-olds. Language comes to mind as a place where the gap would be massive, but new tech doesn't feel like it should have a massive gap. My epistemic status is just musing though.

Comment by adamzerner on Internet Literacy Atrophy · 2021-12-26T19:51:36.682Z · LW · GW

but I think I need to get over my dislike of recordings of my own voice to the point I can listen to them

Apparently this is extremely common and there is a scientific explanation for it. And as an additional data point, I experienced it myself.

I have a hypothesis that I’m staring down the path my boomer relatives took. New technology kept not being worth it to them, so they never put in the work to learn it, and every time they fell a little further behind in the language of the internet – UI conventions, but also things like the interpersonal grammar of social media – which made the next new thing that much harder to learn. Eventually, learning new tech felt insurmountable to them no matter how big the potential payoff.

This doesn't explain why young people with a similar lack of experience, eg. the three year old mentioned in the post, have a vastly easier time learning new tech-related things.

Comment by adamzerner on Law of No Evidence · 2021-12-21T17:15:17.646Z · LW · GW

I skimmed it but I guess I missed those suggestions.

If the story is that nobody has ever investigated snake oil, and you have no strong opinion on it, and for some reason that’s newsworthy, use the words “either way”: “No Evidence Either Way About Whether Snake Oil Works”.

"No evidence either way" I'm surprised to hear suggested from him, especially in the context of that article. That suggests "it doesn't exist" vs "we haven't looked for it". There's a big difference there. For example, imagine your partner asks if there's any milk left, and you haven't opened the fridge yet to check. You wouldn't say that there's no milk, you'd just say that you don't know yet because you haven't checked.

If the story is that all the world’s top doctors and scientists believe snake oil doesn’t work, then say so. “Scientists: Snake Oil Doesn’t Work”. This doesn’t have the same faux objectivity as “No Evidence Snake Oil Works”. It centers the belief in fallible scientists, as opposed to the much more convincing claim that there is literally not a single piece of evidence anywhere in the world that anyone could use in favor of snake oil. Maybe it would sound less authoritative. Breaking an addiction to false certainty is as hard as breaking any other addiction. But the first step is admitting you have a problem.

I like the thinking here. However, a natural question is "How strongly do these scientists feel?" I think that it is important to start including things like weak, moderate and strong in communication.

Comment by adamzerner on Omicron Post #8 · 2021-12-21T05:20:02.289Z · LW · GW


Comment by adamzerner on Omicron Post #8 · 2021-12-20T23:46:11.865Z · LW · GW

I have fallen mildly ill, as have my wife and son. So far we don’t have a positive Covid-19 test, and everyone is maximally vaccinated, but given the timing the obvious conclusions do seem likely.

Sorry to hear that. Good luck. Would you mind sharing what if anything you are doing to prepare, given the symptoms? Vitamins/supplements? Drugs?

Comment by adamzerner on Law of No Evidence · 2021-12-20T18:06:37.565Z · LW · GW

I wonder what a better phrase would be. "No conclusive evidence" is the first thing that comes to mind, but that word "conclusive" is too strong.

Edit: Maybe "No strong evidence"?

Comment by adamzerner on Law of No Evidence · 2021-12-20T18:02:29.610Z · LW · GW

It purports to treat evidence the way it would be treated in a court of criminal law, where only some facts are ‘admissible’ and the defendant is to be considered innocent until proven guilty using only those facts. Other facts don’t count.

Relevant: Scientific Evidence, Legal Evidence, Rational Evidence. It might be good to mention the reason why the domains of science and law have different standards for evidence, like Eliezer's article does. I think they take those standards way too far, but it does seem helpful to have that context.

This is not an ‘honest’ mistake. This is a systematic anti-epistemic superweapon engineered to control what people are allowed and not allowed to think based on social power, in direct opposition to any and all attempts to actually understand and model the world and know things based on one’s information. Anyone wielding it should be treated accordingly.

I don't get this impression. I have a hard time articulating why though. I just get the impression that people are genuinely confused. They are taught stuff about how science works. They think things "don't count" until they pass some arbitrary threshold. Something like the doctor in this Overcoming Bias post. Some context: I'd guess I'm in the 95th percentile or higher amongst rationalists in how pissed I get when I hear "no evidence".

Comment by adamzerner on Occupational Infohazards · 2021-12-19T04:16:32.076Z · LW · GW

What proportion of, say, random intellectually curious graduate students do you think would suffer this way if put into this environment?

This seems like the sort of thing that we would have solid data on at this point. Seems like it'd be worth it for eg. MIRI to do an anonymous survey. If the results indicate a lot of suffering, it'd probably be worth having some sort of mental health program, if only for the productivity benefits. Or perhaps this is already being done.

Comment by adamzerner on Occupational Infohazards · 2021-12-19T03:12:44.908Z · LW · GW

I have a decently strong sense that I would end up suffering from similar mental health issues. I think it has a lot to do with a tendency to Take Ideas Seriously. Or, viewed less charitably, having a memetic immune disorder.

Xrisk and, a term that is new to me, srisk, are both really bad things. They're also quite plausible. Multiplying how bad they are by how likely they are, I think the rational feeling is some form of terror. (In some sense of the term "rational". Too much terror of course would get in the way of trying to fix it, and of living a happy life.) It reminds me of how in HPMoR, everyone's patronus was an animal, because death is too much for a human to bear.

Comment by adamzerner on Occupational Infohazards · 2021-12-19T01:57:44.668Z · LW · GW

Yay for experimenting!

Comment by adamzerner on Embedded Interactive Predictions on LessWrong · 2021-12-16T03:04:58.521Z · LW · GW

I liked this post a lot. In general, I think that the rationalist project should focus a lot more on "doing things" than on writing things. Producing tools like this is a great example of "doing things". Other examples include starting meetups and group houses.

So, I liked this post a) for being an example of "doing things", but also b) for being what I consider to be a good example of "doing things". Consider that quote from Paul Graham about "live in the future and build what's missing". To me, this has gotta be a tool that exists in the future, and I appreciate the effort to make it happen.

Unfortunately, as I write this on 12/15/21, the tool is down. That makes me sad. It doesn't mean the people who worked on it did a bad job though. The analogy of a phase change in chemistry comes to mind.

If you are trying to melt an ice cube and you move the temperature from 10℉ to 31℉, you were really close, but you ultimately came up empty handed. But you can't just look at the fact that the ice cube is still solid and judge progress that way. I say that you need to look more closely at the change in temperature. I'm not sure how much movement in temperature happened here, but I don't think it was trivial.

As for how it could have been better, I think it would have really helped to have lots and lots of examples. I'm a big fan of examples, sorta along the lines of what the specificity sequence talks about. I'm talking dozens and dozens of examples. I think that helps people grok how useful this can be and when they might want to use it. As I've mentioned elsewhere though, coming up with examples is weirdly difficult.

As for followup work, I don't know what the Elicit team did and I don't want to be presumptuous, but I don't recall any followup posts on LessWrong or iteration. Perhaps something like that would have led to changes that caused more adoption. I still stand by my old comments about there needing to be 1) a way to embed the prediction directly from the LessWrong text editor, and 2) things like a feed of recent predictions.

Comment by adamzerner on adamzerner's Shortform · 2021-12-15T17:20:42.940Z · LW · GW

The original question is based on the observation that a lot of people, including me, including rationalists, do things like spending an hour of time to save $5-10 when their time is presumably worth a lot more than that, and in contexts where burnout or dips in productivity wouldn't explain it. So my question is whether or not this is something that makes sense.

I feel moderately strongly that it doesn't actually make sense, and that what Eliezer alludes to in Money: The Unit of Caring is what explains the phenomenon.

Many people, when they see something that they think is worth doing, would like to volunteer a few hours of spare time, or maybe mail in a five-year-old laptop and some canned goods, or walk in a march somewhere, but at any rate, not spend money.

Believe me, I understand the feeling. Every time I spend money I feel like I'm losing hit points. That's the problem with having a unified quantity describing your net worth: Seeing that number go down is not a pleasant feeling, even though it has to fluctuate in the ordinary course of your existence. There ought to be a fun-theoretic principle against it.