Posts

The Peter Attia Drive podcast episode #102: Michael Osterholm, Ph.D.: COVID-19—Lessons learned, challenges ahead, and reasons for optimism and concern 2020-04-04T05:19:38.304Z · score: 7 (3 votes)
"Preparing for a Pandemic: Stage 3: Grow Food if You Can [COVID-19, hort, US, Patreon]" 2020-04-03T17:57:58.826Z · score: 7 (5 votes)
How much do we know about how brains learn? 2020-01-24T14:46:47.185Z · score: 8 (4 votes)
[Link] "Doing being rational: polymerase chain reaction" by David Chapman 2019-12-13T23:54:45.189Z · score: 11 (6 votes)
Link: An exercise: meta-rational phenomena | Meaningness 2019-10-21T16:56:24.443Z · score: 9 (4 votes)
Paper on qualitative types or degrees of knowledge, with examples from medicine? 2019-06-15T00:31:56.912Z · score: 5 (2 votes)
Flagging/reporting spam *posts*? 2018-05-23T16:14:11.515Z · score: 6 (2 votes)

Comments

Comment by kenny on Pulse and Glide Cycling · 2020-08-03T04:43:57.719Z · score: 1 (1 votes) · LW · GW

I'm piling on with others pointing out that your seat seems too low. I was also surprised that you're having knee pain, but seeing the photos of you riding explains it.

If you like bicycling, I'd also suggest looking into getting a better bike. Yours looks heavy, which might also explain your pain and discomfort.

I think someone else also made this point, but I've personally noticed that too low of a gear can be less comfortable/efficient as well. Professional cyclists maintain extremely high rotational speeds, but I don't think that's sensible for anyone else generally. (And some pros use higher gears and slower pedaling anyway.)

Comment by kenny on Survival in the immoral maze of college · 2020-07-28T17:25:24.465Z · score: 2 (2 votes) · LW · GW

From the post:

Figure out ways to do the tedious busywork as quickly as possible while still getting an acceptable result (and acceptable might still mean straight As).

I'm suspicious of the direction of causality in what you described:

Within the useful classes, doing more than the bare minimum on projects made a very big difference, and the people who obsessively improved their trivial programming projects became the same people who found it easy to get an internship, and then eventually the people who skipped the "Junior Engineer" job title and jumped directly into "real programmer jobs".

Some people enjoy programming independent of schooling. That's meme-level widespread (i.e. common) knowledge among programmers – so much so that the resentment of (professional) programmers who don't enjoy hobby programming is itself meme-level widespread.

I don't think advice to 'do more than the bare minimum on projects' or 'obsessively improve your trivial projects' is really any good. The people who seem to benefit from the advised behavior don't need any additional motivation to do it, and everyone else isn't going to benefit from doing what is, to them, just more "tedious busywork" (and without any short-term payoff).

Comment by kenny on Partially Stepping Down Isolation · 2020-07-27T21:08:03.831Z · score: 1 (1 votes) · LW · GW

NYC

Comment by kenny on Something about the Pinker Cancellation seems Suspicious · 2020-07-27T21:06:15.461Z · score: -1 (2 votes) · LW · GW

Maybe a GPT-2/3 'open letter to cancel a prominent public intellectual' that was accidentally shared/published?

Comment by kenny on Partially Stepping Down Isolation · 2020-07-25T17:56:31.642Z · score: 1 (1 votes) · LW · GW

Thanks for this post!

I've been thinking about this – there's a particular person I'd like to start seeing in-person, unrestricted (e.g. inside, without a mask, hugging OK) but my concern is that my network of contacts isn't very strict at all.

After reading your post, I'm leaning towards waiting to see the particular person until I can move to a more strictly isolated network.

I am tho (slowly) updating towards fewer restrictions being reasonable. I've observed many people whose behavior is probably pretty close to pre-pandemic in terms of unrestricted contact, and with many strangers, and I'm surprised that that doesn't seem to be spreading the virus – or, if it is, there are no significant observable consequences (AFAICT).

Have you thought about milestones for stepping down isolation further?

Comment by kenny on The silence is deafening – Devon Zuegel · 2020-07-21T15:44:56.134Z · score: 1 (1 votes) · LW · GW

Sorry I wasn't clearer – "engineer" was intended to encompass things like social conventions, not just software.

Comment by kenny on The silence is deafening – Devon Zuegel · 2020-07-07T18:53:28.779Z · score: 1 (1 votes) · LW · GW

I don't think the post was about LessWrong specifically (at all); think Twitter or Facebook or random blog comments.

Here on this site, yes, both downvotes and the absence of upvotes are strong, mostly-legible signals.

Comment by kenny on The silence is deafening – Devon Zuegel · 2020-07-07T18:51:43.155Z · score: 1 (1 votes) · LW · GW

People only receive feedback from people that are engaged enough to give it.

On The Internet, that's generally true. But that's not so true IRL, face-to-face. And the point of the post is that we could engineer feedback-by-default like the reactions people mostly can't help having when they're visible (or audible) in small groups.

Comment by kenny on A reply to Agnes Callard · 2020-07-03T19:22:35.972Z · score: 1 (1 votes) · LW · GW

I agree that there might not be anything wrong with supporting a specific X without also supporting (or while opposing) all X in general. But that all depends on the reasons why you support the specific X but don't support (or oppose) the general X. Why did you sign the petition but not support the general policy? (Also, what do you think the general policy is, exactly?)

I don't personally have strong feelings or convictions pertaining to all of this. I don't want the NYT to publish Scott's full legal name, but I don't have any particularly strong objections to them or anyone else doing that in general. I do oppose the specific politics that I think is motivating them to publish his name. I also don't think there are any good reasons to publish his name that aren't motivated by a desire to hurt or harm him.

Comment by kenny on A reply to Agnes Callard · 2020-06-28T19:00:03.167Z · score: 1 (1 votes) · LW · GW

You signed the petition purely out of instrumental concerns, and any principles about petitions and how news organizations should or should not respond to them are entirely independent? Admitting that – even judged just instrumentally – seems counter-productive.

The relevant principle seems pretty clear (to me): of course people should be generally open to being swayed by (reasoned) argumentation, e.g. via petition – unless there's some concern(s) that override it, like a principled pre-commitment to ignore some types of influence (for very good reasons).

Comment by kenny on Radical Probabilism [Transcript] · 2020-06-28T18:53:08.416Z · score: 1 (1 votes) · LW · GW

Thanks again!

Your point about "Bayesianism at a distance" makes a lot of sense.

Comment by kenny on Radical Probabilism [Transcript] · 2020-06-28T00:27:06.011Z · score: 1 (1 votes) · LW · GW

Thanks! That answers a lot of my questions even without a concrete example.

I found this part of your reply particularly interesting:

if you don't have (2), updates are not very constrained by Dutch-book type rationality. So in general, Jeffrey argued that there are many valid updates beyond Bayes and Jeffrey updates.

The abstract example I came up with after reading that was something like: 'I think A at 60%. If I observe X, then I'd update to A at 70%. If I observe Y, then I'd update to A at 40%. If I observe Z, I don't know what I'd think.'

I think what's a little confusing is that I imagined these kinds of adjustments were already incorporated into 'Bayesian reasoning'. Like, for the canonical 'cancer test result' example, we could easily adjust our understanding of 'receives a positive test result' to include uncertainty about the evidence itself, e.g. maybe the test was performed incorrectly or the result was misreported by the lab.
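Here's the kind of contrast I have in mind, with made-up numbers for the canonical cancer-test example (assuming I understand Jeffrey updates correctly – this is just my own sketch, not anything from the post):

```python
p_cancer = 0.01          # prior P(cancer)
p_pos_if_cancer = 0.9    # P(positive test | cancer)
p_pos_if_healthy = 0.05  # P(positive test | healthy)

p_pos = p_pos_if_cancer * p_cancer + p_pos_if_healthy * (1 - p_cancer)
p_cancer_if_pos = p_pos_if_cancer * p_cancer / p_pos
p_cancer_if_neg = (1 - p_pos_if_cancer) * p_cancer / (1 - p_pos)

# Bayes update: we become certain the test really was positive.
bayes_posterior = p_cancer_if_pos  # ~0.154

# Jeffrey update: we only shift our credence that the test was positive to,
# say, 0.8 (maybe the lab sometimes misreports results).
q = 0.8
jeffrey_posterior = p_cancer_if_pos * q + p_cancer_if_neg * (1 - q)  # ~0.123

print(bayes_posterior, jeffrey_posterior)
```

If that's right, then the 'adjustment' I described isn't a separate mechanism at all – the Bayes update is just the q = 1 special case.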

Do the 'same' priors cover our 'base' credence in different types of evidence? How are probabilities reasonably, or practically, assigned or calculated for different types of evidence? (Do we need to further adjust our confidence in those assignments or calculations?)

Maybe I do still need a concrete example to reach a decent understanding.

Comment by kenny on Radical Probabilism [Transcript] · 2020-06-27T18:22:46.151Z · score: 7 (4 votes) · LW · GW

Are there any other detailed descriptions of what a "Jeffrey update" might look like or how one would perform one?

I think I get the point of there being "rationality constraints" that don't, by implication, strictly require Bayesian updates. But are Jeffrey updates the entire set of possible updates those constraints permit?

Can anyone describe a concrete example contrasting a Bayesian update and a Jeffrey update for the same circumstances, e.g. prior beliefs and new information learned?

It kinda seems like Jeffrey updates are 'possibly rational updates' but they're only justified if one can perform them for no possible (or knowable) reason. That doesn't seem practical – how could that work?

Comment by kenny on [META] Building a rationalist communication system to avoid censorship · 2020-06-27T18:08:55.588Z · score: 1 (1 votes) · LW · GW

I can't think of a way that could work that couldn't be automated away, e.g. reduced to a barrier consisting solely of 'install this browser extension'. (Or at least not without being a relatively non-trivial annoyance to the 'trusted' users too.)

Comment by kenny on Is there a good way to simultaneously read LW and EA Forum posts and comments? · 2020-06-27T18:05:37.951Z · score: 1 (1 votes) · LW · GW

I think there's a way to submit feature requests to the LW dev(s). An RSS feed for 'Recent Discussion' should be possible.

And, as I wrote before, I sometimes look at posts on their original sites – LW is one for which I do this a lot of the time (e.g. so I can vote or comment).

Comment by kenny on [META] Building a rationalist communication system to avoid censorship · 2020-06-25T14:49:13.656Z · score: 5 (4 votes) · LW · GW

Users on this "network" are capable of being pseudonymous. Anonymity is probably also possible, tho (much?) harder. We don't seem to have attracted too many people "spewing nonsense", or that many at all.

Requiring a personal connection to existing users will shut out a lot of potential users. And it's probably better for plausible deniability that we continue to allow anyone to signup.

I'm not – and I'd guess most other users are not – doing enough to reliably avoid de-anonymization. It requires very strict opsec in general.

And I don't know how you could possibly calculate the probability of being de-anonymized, even with perfect information about everything you've leaked. Relying on your feelings is probably the only practical option, besides not sharing any info.

Comment by kenny on [META] Building a rationalist communication system to avoid censorship · 2020-06-25T14:32:40.890Z · score: 1 (1 votes) · LW · GW

Is a commitment to entertain controversial or unpopular or odious ideas (or to advocate for them) separate from or integral to rationalism?

Integral – for epistemological rationality anyways, and arguably for instrumental rationality as well.

Is a mental health professional's preference to maintain enough anonymity so that their blog does not interfere with their practice or their safety separate from or integral to rationality?

I don't think it's "separate from" as much as 'mostly orthogonal'. Scott is largely to blame for his relative lack of pseudonymity – he's published, publicly, a lot of evidence of his identity. What he's trying to avoid is losing enough of what remains that his (full) legal name is directly linked to Scott Alexander – e.g. in the top Google search results for his legal name.

When it comes to the general idea that anonymity is needed to discuss certain or any topics, I'm more skeptical.

You're right, it's not needed to discuss anything – at least not once. The entire issue is whether one can do so indefinitely. And, in that case, it sure seems like anonymity/pseudonymity is needed, in general.

I don't think there's a lot of anonymity here on LessWrong, but it's certainly possible to be pseudonymous. I don't think most people bother to maintain it particularly strictly. But I find the comments here to be much better than anonymous/pseudonymous comments in other places, or even – as you seem to agree – on or via Facebook or Twitter (or whatever). This place is special. And I think this place really is vulnerable to censorship, i.e. pressure NOT to discuss what's discussed here now. The people here – some of them anyways – really would refrain from discussing some things were they to suffer for it like they fear.

Comment by kenny on [META] Building a rationalist communication system to avoid censorship · 2020-06-25T02:30:49.230Z · score: 1 (1 votes) · LW · GW

There's no purely technological solution to censorship, especially indirect forms like what's arguably happening to Scott.

Bruce Schneier, a famous cryptography expert and advocate, eventually realized that cryptography, while good, was often beside the point – it almost always made sense to attack the other parts of a system instead of trying to break its encryption algorithms. He wrote a good essay about this, and there's a matching XKCD.

To the degree that we want to avoid censorship, we need to either adopt sufficient opsec – much better than Scott and even better than Gwern, who wrote to someone blackmailing them (by threatening to dox them):

It would be annoying to have my name splashed all over, but I resigned myself to that back in ~2010 when I decided to set up my website; I see Gwern as a pen name now, and not a real pseudonym. I’m glad it managed to last to 2015.

– or we need to prevent censorship IRL. Obviously, we can (and should) do both to some extent.

But really, to avoid the kind of censorship that inspired this, one would need to remain strictly anonymous/pseudonymous, which is hard (and lonely – IRL meetups are a threat vector!).

Comment by kenny on [META] Building a rationalist communication system to avoid censorship · 2020-06-25T02:09:15.981Z · score: 1 (1 votes) · LW · GW

I don't think that would work, because a definite identity is needed for people to follow Scott. I don't think I could possibly track 'Scott', or anyone else, and notice that there was a specific identity there, if I couldn't follow a named identity, i.e. a specific account.

Part of who Scott is to a lot of us is someone, i.e. a specific identity, that's worth following, tracking, or paying attention to over long periods. Using throwaway accounts makes that harder, even ignoring the kind of fuckery that someone could pull with something like GPT-3 at hand – maybe it's even (effectively) impossible now.

And to the degree that we all could continue to track Scott, even with him posting via throwaway accounts – and even assuming otherwise perfect opsec from Scott too – so could people who feel malice towards him. We'd see 'likely Scott posts' lists. We wouldn't be able to prevent people from speculating as to which posts were his. Scott would only be protected to the degree that we couldn't tell it was him.

There's probably a math/compsci/physics theory that covers this, but there's no way for Scott to be able to maintain both his pseudonymity and his identity – to the degree he wants (or needs) – if his (full) legal name is linked to his pseudonym on the first page of web search results for the legal name.

The safer option would be for Scott to create a new pseudonym and try to maintain (very) plausible deniability about any connection to Scott Alexander. But that would mean starting over and would probably be very hard – maybe impossible. It might be impossible for anyone to write in a different enough style, consistently, and avoid detection, especially at the lengths he typically posts. It's probably that much harder (or impossible) for people who have written as much as he has publicly.

Comment by kenny on Is there a good way to simultaneously read LW and EA Forum posts and comments? · 2020-06-25T01:44:19.718Z · score: 3 (2 votes) · LW · GW

Are you sure RSS won't work for you? I follow LW and a bunch of other sites/feeds in a feed reader (Feedly). I scan all of my feeds in a big 'oldest first' list, read most in the reader, but sometimes look at posts on their original sites (via a keyboard shortcut consisting of a single key) – the latter seems like a perfectly fine way to read comments (when I want to).

Or do you need or want to follow comments too, i.e. comments made after you read the original post? I think a lot of sites have comments feeds, so that should be possible via RSS too. I don't do this tho. I do selectively subscribe to comments for specific posts, but typically via email. On some sites, including LW, I only get new comments via on-site or in-app notifications, which mostly seems fine.

Comment by kenny on [META] Building a rationalist communication system to avoid censorship · 2020-06-23T17:53:07.063Z · score: 3 (3 votes) · LW · GW

Anonymity is hard – the posts themselves, e.g. word choice, phrasing, etc., are all extremely vulnerable 'side channels' for breaking anonymity. Defending against those kinds of attacks is probably both impractical and extremely difficult, if not effectively impossible.

The most important step in defending against attacks is (at least roughly) determining a threat model. What kind of attacks do you expect? What attacks are worth defending against – and which aren't?

Comment by kenny on ‘Maximum’ level of suffering? · 2020-06-22T16:52:44.956Z · score: 3 (2 votes) · LW · GW

My point about 'capping' the (dis)utility of pain was that one – a person or mind that isn't a malevolent (super-)intelligence – wouldn't want to be able to be 'held hostage' by something like a malevolent super-intelligence in control of some other mind that could experience 'infinite pain'. You probably wouldn't want to sacrifice everything for a tiny chance at preventing the torture of a single being, even if that being were capable of experiencing infinite pain.

I don't think it's possible, or even makes sense, for a mind to experience an infinite amount/level/degree of pain (or suffering). Infinite pain might be possible over an infinite amount of time, but that seems (at least somewhat) implausible, e.g. given that the universe doesn't seem to be infinite, seems to contain a finite amount of matter and energy, and seems likely to die of an eventual heat death (and thus not able to support life or computation indefinitely).

Even assuming that a super-intelligence could rewire human minds to just increase the amount of pain they can experience, a reasonable generalization is to a super-intelligence creating (e.g. simulating) minds (human or otherwise). That seems to me to be the same (general) moral/ethical catastrophe as your hypothetical(s).

But I don't think these hypotheticals really alter the moral/ethical calculus with respect to our decisions, i.e. the possibility of the torture of minds that can experience infinite pain doesn't automatically imply that we should avoid developing AGI or super-intelligences entirely. (For one, if infinite pain is possible, so might infinite joy/happiness/satisfaction.)

Comment by kenny on ‘Maximum’ level of suffering? · 2020-06-20T23:25:55.327Z · score: 3 (2 votes) · LW · GW

(You need a space between the > and the text being quoted to format it as a quote in Markdown.)

Sure, we can assume a malevolent super-intelligence could prevent people from going into shock and thus cause much more pain than otherwise.

But it's not clear how (or even whether) we can quantize pain (or suffering). From the perspective of information processing (or something similar), it seems like there would probably be a maximum amount of non-disabling pain, i.e. a 'maximum priority override' to focus all energy and other resources on escaping that pain as quickly as possible. It also seems unclear why evolution would result in creatures able to experience pain more intensely than such a maximum.

Let's assume pain has no maximum – I'd still expect a reasonable utility function to cap the (dis)utility of pain. If it didn't, the (possible) torture of just one creature capable of experiencing arbitrary amounts/degrees/levels of pain would effectively be 'Pascal's hostage' (something like a utility monster under the control of a malevolent super-intelligence).
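A minimal sketch of the kind of cap I mean, assuming a bounded disutility function – the functional form and the constants are made up purely for illustration:

```python
import math

CAP = 100.0   # illustrative: maximum disutility any one being's pain can contribute
SCALE = 10.0  # illustrative: how quickly disutility saturates as pain grows

def disutility(pain: float) -> float:
    """Monotonically decreasing in pain, but bounded below by -CAP."""
    return -CAP * (1 - math.exp(-pain / SCALE))

# Even arbitrarily large pain can't hold the utility function hostage:
print(disutility(1.0), disutility(100.0), disutility(1e12))  # approaches -100.0
```

More pain is always worse under a function like this, but no single being's pain can outweigh everything else, no matter how large it gets.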

But yes, a malevolent super-intelligence, or even just one that's not perfectly 'friendly', would be terrible and the possibility is incredibly scary to me too!

Comment by kenny on ‘Maximum’ level of suffering? · 2020-06-20T19:20:06.179Z · score: 3 (2 votes) · LW · GW

Presumably pain works in some specific way (or some relatively narrow distribution of ways), so there probably is a maximum amount of pain that could be experienced in any circumstance. Real-life animals can and do die of shock, which seems like it might be some maximum 'pain' threshold being exceeded.

But suffering seems much more general than pain. Creating (e.g. simulating) a consciousness or mind and torturing it increases global suffering. Creating multiple minds and torturing them would increase suffering further.

What seems to be different about suffering (to at least some degree – real-life beings also seem to suffer sympathetic pain) is that additional suffering can be, in effect, created simply by informing other minds of suffering of which they were not previously aware. Some suffering is created by knowledge or belief, i.e. spread by information. (This post has a good perspective one can adopt to avoid being 'mugged' by this kind of information.)

The creation or simulation of minds is presumably bounded by physical constraints, thus there probably is some maximum amount of suffering possible.

Are there possible minds that can experience an infinite amount of pain or suffering? I think not. At a 'gears' level, it doesn't seem like pain or suffering could literally ever be infinite, even over an infinite span of time – tho I admit that rests on other things that do seem true, e.g. that there's a finite amount of matter in the universe, and that minds cannot exist for an infinite amount of time (e.g. because of the eventual heat death of the universe).

But even assuming minds can exist for an infinite amount of time or that minds could be arbitrarily 'large', I'd expect the amount of pain or suffering that any one mind could experience to be finite. But, under those same assumptions (or similar ones), the total amount of pain or suffering experienced could be infinite.

Comment by kenny on What is meant by Simulcra Levels? · 2020-06-17T16:02:54.033Z · score: 5 (3 votes) · LW · GW

I don't think I've seen this idea being used in different ways.

Comment by kenny on Simulacra Levels and their Interactions · 2020-06-15T21:37:53.712Z · score: 13 (5 votes) · LW · GW

Wait a few years and Dwarf Fortress might have implemented those gears.

Comment by kenny on May Gwern.net newsletter (w/GPT-3 commentary) · 2020-06-08T17:44:17.672Z · score: 3 (2 votes) · LW · GW

I'd suggest that you consider writing independently of how you (ultimately) choose to publish whatever it is you write.

Without "web dev" experience, any common 'DIY' solution you pick is going to potentially require more than "a reasonable amount of time".

I'd suggest looking at one of the following two alternative avenues:

  1. A completely 'hosted' blog, e.g. WordPress.
  2. Static HTML (and probably CSS) files.

For [1], you'd just need to figure out the blog software (and maybe, a little, the host's software). You'd otherwise 'get for free' features like an index page, an RSS feed (for people to follow new posts in feed readers), and date-organized 'archives' (e.g. lists of posts). You might need to spend an 'unreasonable' amount of time getting things like comments to 'work' (e.g. to NOT be inundated with spam). And changing the blog software would be effectively impossible, outside of developing your own extensions (e.g. 'plugins', 'themes').

For [2], you'd just need a public web server. Wherever you're going to school probably already provides you with one. You'd just need to upload your HTML and CSS. You might be able to host server-side code, e.g. PHP, too, but that would depend entirely on what your school supports. Alternatively, you could pay for your own VPS (virtual private server) from one of many different providers, but that may involve an 'unreasonable' amount of time on your part. If you're hosting your own 'hand-coded' HTML+CSS (and/or server-side code) files, you'll have to implement things like an index or comments (or an RSS feed) yourself. But an index could just be a single static page to which you add a link when you publish a new blog post (which is also just a static file).

There are (of course!) many 'intermediary' solutions available too – but, even for professional web-dev people, just sorting thru them can take an 'unreasonable' amount of time.

One 'genre' you might want to look into is what are called 'static site generators', i.e. code that generates a (mostly) static site, e.g. HTML/CSS/JS files, from some more abstract 'source' – see the sketch below for how little 'generating' is strictly required. A lot of them allow/require you to write things like posts as Markdown files. Some of them can work with GitHub/GitLab/etc. 'pages' for publishing to the Internet. (You could even use GitHub for comments. One interesting idea I saw floated was to require commenters to create a GitHub pull request to modify a post and add a comment.)
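To make that concrete, here's a minimal sketch of a static site generator, assuming a hypothetical posts/ directory of plain-text files (real tools like Jekyll or Hugo do far more, e.g. Markdown rendering, templates, and feeds):

```python
# Turn a directory of plain-text posts into static HTML pages plus an index.
from pathlib import Path
import html

posts = sorted(Path("posts").glob("*.txt"))  # hypothetical source directory
out = Path("site")
out.mkdir(exist_ok=True)

links = []
for src in posts:
    title = src.stem.replace("-", " ")
    body = html.escape(src.read_text())
    page = f"<html><body><h1>{title}</h1><pre>{body}</pre></body></html>"
    (out / f"{src.stem}.html").write_text(page)
    links.append(f'<li><a href="{src.stem}.html">{title}</a></li>')

# The index is itself just another static page, regenerated on each run.
index = "<html><body><h1>Posts</h1><ul>" + "".join(links) + "</ul></body></html>"
(out / "index.html").write_text(index)
```

Publishing is then just uploading the site/ directory to any static host.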

And somewhat outside all of the above possibilities, you could just post whatever you write on this site (which supports 'personal' posts), on Reddit or a suitable forum, or in publicly accessible Google Docs documents. There are a lot of possibilities like this, and one of them might be the overall most sensible solution (if you definitely don't want to spend an unreasonable amount of time learning the tools that make up the more prototypical 'blogging' solutions).

Comment by kenny on Covid-19: My Current Model · 2020-06-01T00:13:37.469Z · score: 3 (2 votes) · LW · GW

it is totally unclear to me how we would plausibly create a vaccine on a short (“a few months”) timescales.

"Create a vaccine" is somewhat ambiguous. There are already several (dozens?) of vaccines undergoing clinical trials so, in one sense, we've already created vaccines. There are several vaccine technologies that are well understood that can be used to quickly produce a vaccine. In the sense of developing, producing (at scale), and administering a vaccine – to a large number of people – that's (much) less plausible (or possible) given 'civilizational (in)adequacy'.

Comment by kenny on Get It Done Now · 2020-05-23T22:10:08.522Z · score: 1 (1 votes) · LW · GW

What does Pozen discuss in terms of constraints on how OHIO works in practice? I might have all the "decision-relevant information" about a project, but not have the resources, e.g. the time, to start and complete the project immediately.

I'm not sure how well this would fall under that principle, but I'll often outline a project right away, e.g. with tasks to book a flight, reserve a hotel room, etc., if I decide to attend some event to which I've been invited (and making that decision would depend on checking my calendar first).

Comment by kenny on Get It Done Now · 2020-05-23T22:06:25.713Z · score: 3 (2 votes) · LW · GW

I mostly came to the same conclusion as you regarding a schedule, but I'm still struggling to develop one that's supportive without feeling constraining (and that I thus resent or don't stick to anyways).

I found a (financial) budget to be very helpful for me, in the same way that I expect a schedule would be.

Comment by kenny on Haskenthetical · 2020-05-23T21:07:30.042Z · score: 3 (2 votes) · LW · GW

Aside from the ease of meta-programming with Lisp syntax – as I mentioned in this comment on this post – the other major (historical) reasons why Lisp was nice to use have been greatly copied by newer languages since.

I've found functional programming languages to be roughly as nice as the Lisps I've used previously, and with more 'standard' syntaxes.

But meta-programming can be extremely powerful and thus anything that makes it easier can be pretty useful too.

Clojure was the most recent Lisp (or Lisp-like) language I used. It's very nice and much more 'batteries included' than other Lisps I've played with in the past.

I've been doing a lot of work with Elixir lately. It doesn't have Lisp syntax, but I find it to be very nice in a lot of the ways that Lisp languages often are too.

Comment by kenny on Haskenthetical · 2020-05-19T22:55:44.219Z · score: 4 (3 votes) · LW · GW

Nice!

Related to [6], I have a vague hunch that the chief benefit of 'Lisp syntax' is that it's easy to parse and represent as Lisp data. Writing Lisp is much easier with 'paredit' plugins/features in one's editor. I often heavily format others' code to match my idiosyncratic 'visually scannable' style (tho not just in Lisp or Lisp-like languages).

Comment by kenny on "Preparing for a Pandemic: Stage 3: Grow Food if You Can [COVID-19, hort, US, Patreon]" · 2020-05-19T22:49:35.382Z · score: 1 (1 votes) · LW · GW

I don't think things are too bad, but I don't have a lot of evidence even for my own area. I've only bought food a handful of times in the past two months.

I didn't notice anything 'missing' from the grocery store the last time I was there a week or so ago.

Comment by kenny on Nicotinamide riboside and SARS-CoV-2 · 2020-05-03T03:30:21.842Z · score: 3 (3 votes) · LW · GW

The first link is broken (the URL starts with noticed:).

Comment by kenny on Against strong bayesianism · 2020-04-30T18:48:22.123Z · score: 5 (3 votes) · LW · GW

Are you against Bayesianism or 'Bayesianism'?

I do agree that most things people identify as tenets of bayesianism are useful for thinking about knowledge; but I claim that they would be just as useful, and better-justified, if we forced each one to stand or fall on its own.

This makes me think that you're (mostly) arguing against 'Bayesianism', i.e. effectively requesting that we 'taboo' that term and discuss its components ("tenets") separately.

One motivation for defending Bayesianism itself is that the relevant ideas ("tenets") are sufficiently entangled that they can or should be considered effectively inseparable.

I also have a sense that the particular means by which intelligent entities like ourselves can, incrementally, approach thinking like an 'idealized Bayesian intelligence' is very different from what you sketched in your dialog. I think a part of that is something like maintaining a 'network' of priors and performing (approximate) Bayesian updates on specific 'modules' in that network and, more infrequently, propagating updates thru (some portion of) the network. (There's a toy sketch of what I mean below, after the quoted dialog.) Because of that, I didn't think this last part of the dialog was warranted:

A: So why do people advocate for the importance of bayesianism for thinking about complex issues if it only works in examples where all the variables are well-defined and have very simple relationships?

B: I think bayesianism has definitely made a substantial contribution to philosophy. It tells us what it even means to assign a probability to an event, and cuts through a lot of metaphysical bullshit.

In my own reasoning, and what I consider to be the best reasoning I've heard or read, about the COVID-19 pandemic, Bayesianism seems invaluable. And most of the value is in explicitly considering both evidence and the lack of evidence, how it should be interpreted based on (reasonably) explicit prior beliefs within some specific 'belief module', and what updates to other belief modules in the network are warranted. One could certainly do all of that without explicitly believing that Bayesianism is overall effective, but it also seems like a weird 'epistemological move' to make.
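Here's the toy sketch I mentioned, assuming each 'module' is just a single binary proposition – the module names and all the numbers are purely illustrative:

```python
def bayes(prior: float, p_e_if_h: float, p_e_if_not_h: float) -> float:
    """Posterior P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    return p_e_if_h * prior / (p_e_if_h * prior + p_e_if_not_h * (1 - prior))

def soft_update(p_h_if_e: float, p_h_if_not_e: float, new_p_e: float) -> float:
    """Jeffrey-style update for when our credence in the 'evidence' is only new_p_e."""
    return p_h_if_e * new_p_e + p_h_if_not_e * (1 - new_p_e)

# Two linked belief modules about the pandemic (illustrative numbers only).
p_asymptomatic_spread = 0.3   # module A: prior that asymptomatic spread is common
p_masks_help_if_a = 0.9       # P(module B | A)
p_masks_help_if_not_a = 0.5   # P(module B | ~A)

# New evidence lands in module A: an ordinary Bayesian update within that module...
p_asymptomatic_spread = bayes(p_asymptomatic_spread, 0.8, 0.2)

# ...and propagates to module B as a soft update, since B is never certain
# about A, only about our current credence in A.
p_masks_help = soft_update(p_masks_help_if_a, p_masks_help_if_not_a,
                           p_asymptomatic_spread)
print(p_asymptomatic_spread, p_masks_help)  # ~0.63, ~0.75
```

The expensive part – the part done only infrequently – is re-propagating updates through more of the network than the immediately affected modules.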

If you agree that most of the tenets of a big idea are useful (or true), in what important sense is it useful to say you're against the big idea? Certainly any individual tenet can be more or less useful or true but, in helping each one stand or fall on its own, when are you sharpening the big idea versus tearing it down?

Comment by kenny on My experience with the "rationalist uncanny valley" · 2020-04-24T16:04:20.928Z · score: 3 (2 votes) · LW · GW

Nope!

I'll consider writing and publishing a longer post, but here's a quick summary off the top of my head:

There's a real tension – in my own mind anyways – between epistemological and instrumental rationality, particularly in areas dominated by 'psychology' and 'sociology', i.e. when interacting with other people, either alone or in groups. Epistemological rationality is, or at least feels, easier. This tension is, I think, either the sole cause or the main cause of the uncanny valley. The first item of the "frontpage comment guidelines" hints at this:

Aim to explain, not persuade

Knowing when to avoid persuading, or recognizing when one is doing that, is hard!

And even explaining is difficult! At some point, I find myself trying to persuade others to accept my attempted explanations, or at least to understand them to my satisfaction. This is a big reason why I empathize with this statement in your post: "I now find talking to non-rationalists much less interesting". Rationalists at least have norms that depend on differentiating the two. I find that a lot of non-rationalists almost inevitably pattern-match what I intend as an explanation as attempted persuasion.

Because of uncertainty, chaos, and path dependence, even just picking targets against which to judge one's own effectiveness is a seemingly inevitably and permanently nebulous project. I try to maintain the idea that my own effectiveness is bounded by constraints, including my own psychology, and that I don't know all of the constraints. But another idea that accompanies that one is that I might be limiting my effectiveness by handicapping myself in my own thoughts, as a means of preserving (some amount of) my self-esteem.

I also struggle with integrating my own preferences into my own judgements about my effectiveness.

I don't think I've climbed out of the rationalist uncanny valley. I think I have been descending into and then climbing out of several uncanny local minima in the landscape of my personal effectiveness. I also think that I descended into a valley (at least once) before I found Overcoming Bias, and then Less Wrong – and before either existed. (I'm 38 years old.) I also feel like my 'effectiveness record' is very mixed. I don't think I've ever been, overall, ineffective, and I think I've definitely scored some clear victories, so, in a sense, there are many different dimensions and it's only in some that I consider myself, at any one time, to be in a valley (or not).

Comment by kenny on Analysis of COVID-19 superspread events (linkpost) · 2020-04-24T03:59:26.949Z · score: 1 (1 votes) · LW · GW

That's frustrating. Do we need to build an alternative utilitarian medical tradition to be able to sensibly handle pandemics properly?

Comment by kenny on My Covid-19 Thinking: 4/23 pre-Cuomo Data · 2020-04-24T03:55:47.965Z · score: 5 (3 votes) · LW · GW

So what’s missing from the subway that’s not missing from other stories of how people all get infected?

Talking. People on the subway don’t talk.

And remember that choir practice where everyone distanced and it didn’t matter?

Hmm.

I've updated modestly against surface transmission or fully (or even partially) aerosolized transmission because of this (and other things). I am still very reluctant to go to my nearby very busy grocery store (in Brooklyn).

Comment by kenny on Analysis of COVID-19 superspread events (linkpost) · 2020-04-23T21:24:27.728Z · score: 2 (2 votes) · LW · GW

Based on the linked post, and other sources, I've updated modestly away from being worried about surface transmission or transmission outside (absent close contact to others).

Given the sheer scale of the crisis, it seems like tests involving directly 'challenging' human subjects (i.e. exposing people to the virus) would be helpful, particularly for different modes of transmission and transmission by people that are infected but asymptomatic. I'm guessing the main obstacle is that the people that would otherwise perform these tests would refuse to do anything that might directly infect anyone. That's understandable, but still seems overall tragic. Is there a better explanation? Is this kind of testing considered unnecessary for some reason?

Comment by kenny on My experience with the "rationalist uncanny valley" · 2020-04-22T21:31:54.539Z · score: 4 (2 votes) · LW · GW

Thanks for this post!

One reason I found it interesting is in spurring me to think about my own journey thru 'the rationalist uncanny valley'.

Comment by kenny on Solar system colonisation might not be driven by economics · 2020-04-22T21:27:19.708Z · score: 1 (1 votes) · LW · GW

Yeah, I wasn't thinking of the circumstances for which you, rightly, point out that it would be easier to create 'closed habitable systems' on Earth.

The only examples I can think of would be, in essence, planet-scale impacts or disruptions, e.g. a very large meteor, a rogue planet, or a rogue black hole.

I'd expect any closed habitable system to be more at risk of failing if it were on Earth when a disaster occurred there, tho – e.g. due to 'invasions' by others fleeing the disaster who know of the closed system.

Maybe you're right that the costs of creating closed habitable systems off-Earth will 'always' be greater than just protecting Earth, but my intuition is that there's still a non-zero probability that doing so would pay off.

Certainly tho, sufficiently far in the future (e.g. many billions of years from now), that cost-benefit analysis might change radically (e.g. when the Sun dies).

I'd expect large benefits from developing the knowledge and understanding sufficient to being able to create closed habitable systems too.

It's also possible that the relevant constraints for avoiding or adapting to some disasters aren't strictly economic or financial but social or organizational, in which case multiple closed habitable systems could pay off in that sense too.

In these comments, I am, maybe obviously, thinking of people living off-Earth as hedging. From that perspective, the optimal cost paid towards developing the ability for people to live off-Earth doesn't seem to be literally zero.

From the perspective of weathering large disasters, and not just ensuring the survival of people at all, even not-strictly-closed habitable systems might be worth investing in. If we could grow food in space, it could possibly help feed people on Earth in the aftermath of a global nuclear war, even if it's not strictly necessary to ensure that anyone survives.

Comment by kenny on Solar system colonisation might not be driven by economics · 2020-04-21T21:08:31.760Z · score: 1 (1 votes) · LW · GW

It's currently hard to know with any certainty what the marginal costs of space mining might be. Currently, everything is manufactured on Earth and then must be launched into space. It's not clear what the path(s) to escaping that constraint might be, how long they'll take, or at what cost.

Just one possible cost that might remain relatively high indefinitely is protecting the Earth from 'mis-delivery' of mined resources. Being able to 'land' large masses on Earth is a very powerful weapon! Defending against even accidental 'attacks' might remain prohibitively expensive indefinitely.

Also, outside view says there are various companies hoping to do some sort of extraterrestrial mining, so that means some experts must think it'll be profitable. And surely they'll have anticipated the objection that the prices will drop once they start producing.

This seems like slightly weaker evidence than it might otherwise be, because I'd expect interest in extraterrestrial mining to be driven by very different motivations than terrestrial mining opportunities are.

But yes, I'm sure they're running the numbers for the scenarios in which their additional supply of whatever they're able to mine causes the relevant prices to drop.

Comment by kenny on Solar system colonisation might not be driven by economics · 2020-04-21T20:53:46.705Z · score: 3 (2 votes) · LW · GW

Is there any economics 'research' on redundancy, i.e. humanity surviving Earth ceasing to be habitable? I'd think that would dominate sufficiently long-term considerations.

I agree with you that there are no strict economic benefits to space colonization – not in the 'near-term' anyways.

Comment by kenny on Leaders support global truce to turn fire on coronavirus · 2020-04-20T22:05:15.440Z · score: 1 (1 votes) · LW · GW

Thanks!

That seems like moderate evidence that a ceasefire or truce could do some good, even if it's only partially observed or observed by some factions.

Comment by kenny on Leaders support global truce to turn fire on coronavirus · 2020-04-19T18:52:51.702Z · score: 1 (1 votes) · LW · GW

Thanks!

I don't know the scale of conflicts in those places, particularly involving non-local state actors, but I would weakly estimate that there isn't 'a lot' just based on my not being aware of it. But I also think that that's very weak evidence, for lots of reasons.

So, if a truce is agreed to, and it's observed to a considerable extent, I'd expect conflict to shift towards local actors, both state and non-state. It might also encourage actors not party to the truce to seize the initiative against actors that are party to it, e.g. because the latter might be expected to retreat in the face of fighting rather than defend their current positions.

I definitely don't feel confident in any estimates I have about the effects of the truce. Are there any detailed accounts you can share?

Comment by kenny on Leaders support global truce to turn fire on coronavirus · 2020-04-18T18:54:53.551Z · score: 8 (2 votes) · LW · GW

I wasn't aware there was significant fighting between state actors currently, or recently, or likely to be any in the near-term.

What specific conflicts would this resolution likely affect?

Comment by kenny on Leaders support global truce to turn fire on coronavirus · 2020-04-16T22:17:10.157Z · score: 1 (1 votes) · LW · GW

I'm very confused by this. How would these countries impose a "truce" on Yemen or Syria or anywhere else?

Comment by kenny on Deminatalist Total Utilitarianism · 2020-04-16T20:33:52.892Z · score: 3 (4 votes) · LW · GW

This is interesting – thanks!

Comment by kenny on How should we model complex systems? · 2020-04-14T22:00:00.544Z · score: 1 (1 votes) · LW · GW

Thanks! That's very interesting to me.

It seems like it might be an example of relatively small structures having potentially arbitrarily large long-term effects on the state of the entire system.

It could be the case tho that the overall effects of cyclones are still statistical at the scale of the entire planet's climate.

Regardless, it's a great example of the kind of thing for which we don't yet have good general learning algorithms.

Comment by kenny on Is this viable physics? · 2020-04-14T21:54:11.100Z · score: 1 (1 votes) · LW · GW

I think – very tentatively – that it could be viable.

I highly recommend Wolfram's previous book, available for free on one of his sites.

I recommend it both on its own as well as crucial context for his recent post.

Wolfram's statement about needing to "find the specific rule for our universe" describes a problem that any theory of everything is likely to have. String theory notably had this same problem.