Was SARS-CoV-2 actually present in March 2019 wastewater samples? 2020-07-07T23:08:34.353Z · score: 4 (2 votes)
landfish lab 2020-02-20T00:20:52.322Z · score: 6 (2 votes)
Absent coordination, future technology will cause human extinction 2020-02-03T21:52:55.764Z · score: 24 (6 votes)
Does the US nuclear policy still target cities? 2019-10-02T00:18:10.261Z · score: 67 (31 votes)


Comment by landfish on How credible is the theory that COVID19 escaped from a Wuhan Lab? · 2020-07-07T23:31:22.305Z · score: 1 (1 votes) · LW · GW

I think their claim is that labs only (or usually) work with viruses that have been described / that they have published the sequences for. And furthermore that they would have published such GoF work if they had done it (?). Like I said, not very compelling claims, especially because they're general and unclear.

Comment by landfish on Editor Mini-Guide · 2020-07-07T23:26:00.287Z · score: 1 (1 votes) · LW · GW

I found this post by Googling: "how to include images in lesswrong posts"

Based on the advice, I tried to upload my photo to Google Drive and share it, but it looks like Google Drive no longer supports this kind of URL-embeddable sharing, if I understand correctly. Next time I will try Dropbox, but if you can confirm that Google Drive no longer supports this, updating the post to reflect it would be helpful to others. Including a link on how to upload and then share a link to an image would also save time for future people who reach this post via the same Google query.

Comment by landfish on Was SARS-CoV-2 actually present in March 2019 wastewater samples? · 2020-07-07T23:19:32.020Z · score: 3 (2 votes) · LW · GW

@jimrandomh pointed towards this HN comment thread:

Comment by landfish on How credible is the theory that COVID19 escaped from a Wuhan Lab? · 2020-04-03T22:39:37.213Z · score: 6 (3 votes) · LW · GW

I would like to see someone collect information about this hypothesis in a more organized fashion (not a YouTube video), specifically to outline which labs are a possibility, who the people were at the labs, what their prior publications were, etc.

Also, how did the other zoonoses arise? (I.e., in a city? In the country? etc.) Same question for other lab escapes.

This Nature article argues that two new features of SARS-CoV-2 look like they've undergone selection for humans or human-like hosts: the "receptor-binding motif (RBM) that directly contacts ACE2" and the "polybasic (furin) cleavage site". They argue that the virus had to acquire these features somewhere other than bats, and investigate several hypotheses:

1. Natural selection in an animal host before zoonotic transfer
2. Natural selection in humans following zoonotic transfer
3. Selection during passage

They think the third option is unlikely, though I don't entirely follow their argument:

"In theory, it is possible that SARS-CoV-2 acquired RBD mutations (Fig. 1a) during adaptation to passage in cell culture, as has been observed in studies of SARS-CoV11. The finding of SARS-CoV-like coronaviruses from pangolins with nearly identical RBDs, however, provides a much stronger and more parsimonious explanation of how SARS-CoV-2 acquired these via recombination or mutation19.

The acquisition of both the polybasic cleavage site and predicted O-linked glycans also argues against culture-based scenarios. New polybasic cleavage sites have been observed only after prolonged passage of low-pathogenicity avian influenza virus in vitro or in vivo17. Furthermore, a hypothetical generation of SARS-CoV-2 by cell culture or animal passage would have required prior isolation of a progenitor virus with very high genetic similarity, which has not been described. Subsequent generation of a polybasic cleavage site would have then required repeated passage in cell culture or animals with ACE2 receptors similar to those of humans, but such work has also not previously been described. Finally, the generation of the predicted O-linked glycans is also unlikely to have occurred due to cell-culture passage, as such features suggest the involvement of an immune system"

I think their argument boils down to "it's more parsimonious that SARS-CoV-2 ended up with RBD sites with ACE2 affinity via recombination with a pangolin virus than that it acquired them via selection in animal or cell culture, given the virus had not previously been described". I think this argument could be made cleaner, and that better steelman arguments for both "lab escape" and "zoonosis" origins could be produced.

Comment by landfish on landfish lab · 2020-03-19T01:30:15.191Z · score: 3 (2 votes) · LW · GW

Government and tech companies / tracking

Palantir is already helping the government track cases -- the article notes that the government can get location data from telecoms, but that Google has even more precise data from Maps and Android, which the government can also ask for in an emergency.

Comment by landfish on LessWrong Coronavirus Agenda · 2020-03-19T00:21:23.091Z · score: 2 (2 votes) · LW · GW

My collection of links to the projects I know about in this space and some news coverage of them.

Comment by landfish on landfish lab · 2020-03-19T00:18:56.660Z · score: 20 (6 votes) · LW · GW

COVID-19 Contact tracing efforts

US Efforts:

Covid Watch

Private Kit, Safe Paths

South Korea’s Tracking Effort

Their app

SMS messages about cases locations, etc.

Other Articles

Wired’s reporting on S Korea and China’s use of apps

Open letter asking tech companies to implement opt-in contact tracing:

Israeli intelligence efforts to track people and use that data for epidemiological purposes

Comment by landfish on LessWrong Coronavirus Agenda · 2020-03-18T20:37:29.656Z · score: 33 (11 votes) · LW · GW

Contact Tracing at Scale!

One thing we need, that the Less Wrong community could likely help with, is contact tracing capability at scale. I know of one such project in the US - The Covid Watch project, based out of Stanford.

I think the major tech companies need to set up and throw a ton of engineering and design resources at contact tracing efforts. They currently control the software supply chain to most mobile devices on earth, and thus are ideally placed to help track the spread of infections.

The more testing we have, the more effective contact tracing will be, so this needs to be paired with an increase in testing world-wide, as previously mentioned in the thread.

Comment by landfish on landfish lab · 2020-02-28T04:10:24.702Z · score: 1 (1 votes) · LW · GW

The limited choice is not good, but they're at least competing on the overall experience rather than just on engagement inside of a browser. In my experience, OSes seem to have more usability features than social apps do (night mode, do not disturb, etc.).

Comment by landfish on landfish lab · 2020-02-28T04:03:13.817Z · score: 3 (2 votes) · LW · GW

I don't want everyone and their grandmother to join, but I would like to see a lot more of the rationalist facebook content on LessWrong. Basically low-medium effort posts that abide by the spirit of truth-seeking norms. If I'm sharing memes I'll do it somewhere else, but if I'm brainstorming about a nuclear winter hypothesis it would be cool to do it here.

Comment by landfish on Open & Welcome Thread - February 2020 · 2020-02-28T03:54:34.562Z · score: 4 (3 votes) · LW · GW

He is indeed.

Comment by landfish on Open & Welcome Thread - February 2020 · 2020-02-28T03:53:45.232Z · score: 8 (4 votes) · LW · GW

I'd be curious how people relate to this Open Thread compared to their personal ShortForm posts. I'm trying to get more into LessWrong posting and don't really understand the differences between these.

This has probably already been discussed, and if so please link me to that discussion if it's easy.

Comment by landfish on landfish lab · 2020-02-21T03:25:29.954Z · score: 3 (2 votes) · LW · GW

Wellll, I just signed up, and so far the interface and experience look terrible. It seems designed around sharing news articles, and that's not very interesting or useful or better than Reddit. I would not call it even Google Plus level of good.

I agree that it might take a large amount of funding to get something off the ground that has a chance of competing.

Honestly, I'd be pretty happy to see LessWrong shortform evolve more features to rival Facebook's discussion space in some way. I'm not sure that's actually the right direction, but I am saying I'm interested in that direction.

Comment by landfish on Ikaxas' Shortform Feed · 2020-02-20T00:22:19.868Z · score: 1 (1 votes) · LW · GW

Nuclear arms control & anti-proliferation efforts are a big one here. Other forms of arms control are important too.

Comment by landfish on landfish lab · 2020-02-20T00:20:52.552Z · score: 13 (3 votes) · LW · GW

Our mixed-motive conflict with social media apps

Modern computers are trash. I'm ready for better interfaces and better AI capabilities that are more aligned with our interests.

I'm going to talk about my phone as a "computer" rather than a collection of (mostly social media) apps, because the thing I want to interface with is the computer, not just the apps.

Because that's exactly part of the problem. I don't have enough control over how I interact with the apps. The apps attempt to exert control over my attention. In some ways this is okay -- I do want apps to be high quality and useful, and I want my attention to be drawn to high quality useful things. However, the apps try to draw my attention using short term reward cycles that I often do not endorse upon reflection. This is a kind of superstimulus that we didn't evolve to handle. I want my phone's software to help me with this. I want this to be an Operating System feature and not an app feature, because I don't trust the apps. I have a mixed-motive conflict with the apps, and I want more leverage.

A mixed-motive conflict is one where many interests align but some do not. The name comes from Thomas Schelling's work, The Strategy of Conflict, and can be applied in many domains: Nuclear game theory, advertising, and of course social media apps.
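The idea can be illustrated with a toy payoff matrix. This is a minimal sketch with entirely hypothetical numbers (not from the original text): user and app both gain when the user stays engaged at all, but conflict over whether the app optimizes for the user's goals ("align") or for raw engagement ("exploit").

```python
# payoffs[user_action][app_action] = (user_payoff, app_payoff)
# All numbers are illustrative assumptions.
payoffs = {
    "use":  {"align": (3, 2), "exploit": (1, 3)},
    "quit": {"align": (0, 0), "exploit": (0, 0)},
}

def app_best_response(user_action):
    """App picks the design maximizing its own payoff, given the user's choice."""
    return max(payoffs[user_action], key=lambda a: payoffs[user_action][a][1])

def user_best_response(app_action):
    """User picks use/quit maximizing their own payoff, given the app's design."""
    return max(payoffs, key=lambda u: payoffs[u][app_action][0])

# Interests partially align: both parties prefer (use, align) to the user quitting...
assert payoffs["use"]["align"] > payoffs["quit"]["align"]
# ...but conflict remains: given continued use, the app prefers "exploit",
# while the user does better when the app chooses "align".
assert app_best_response("use") == "exploit"
assert user_best_response("align") == "use"
```

The point of the sketch is just that neither pure cooperation nor pure opposition describes the relationship; the bargaining happens over the dimension where interests diverge.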

Now, in an ideal world I shouldn't have to turn to an OS to give me greater control over the content apps provide me with. Ideally, the incentives would be aligned between me, the customer, and the app's designers and maintainers. If this were the case, I posit Facebook would look very different. There would be far more controls that would allow me to select the things I want to see that I endorse as good, rather than just the ones that keep me maximally engaged. (I know Facebook has changed their algorithms to optimize factors other than screen time, but I'm including this in the sense of 'engaged').

There really should be a data layer which Facebook presents via an API that my OS can control, allowing me to tweak things like the feed, events I see, etc. Facebook would prefer to control the interface, both because it's easier for them to develop and because it's more effective at keeping me engaged. Except, it may not be.

I pledge to look for social media platforms that allow me greater control over my own data. Initially, this may be limited to in-app control over my feed. But in the long term, I want an OS that will interface with my social data feeds and give me options for control. I am aware that browser plugins exist to assist with this, but they rely on hacks that prevent a smooth experience, and Facebook deliberately prevents them from providing many features. I pledge to look for social media companies whose incentives align more closely with my own.

I'm not inherently anti-Facebook. If Facebook decides to give me far more control over my social data and interactions, I would consider paying for this service. However, I'm not optimistic about these prospects.

I'm not pledging to abandon Facebook in favor of Mastodon or equivalent, or even to become an active Mastodon user. My commitment is of a longer term nature. The social media apps are social. They require network effects to be useful. They're about building communities. I want to let my community know that I'm unhappy with the equilibrium we find ourselves in and want something better. Not just with Facebook, but the whole ecosystem of OSes and phones. The future of AI should be an enriching and enabling one, and that requires navigating the myriad challenges of mixed-motive conflicts and organizing together with our social networks to use the bargaining power we possess.

Comment by landfish on Absent coordination, future technology will cause human extinction · 2020-02-10T03:26:15.831Z · score: 5 (3 votes) · LW · GW

I haven't yet formed clear hypotheses around what is preventing effective coordination around climate change. My current approach is to examine what led to the fairly successful nuclear arms control treaties and what is causing them to fail now. I have found Thomas Schelling's work quite useful for thinking about international cooperation, but I'm missing a lot of models around internal state politics that enables or prevents those states from being able to negotiate effectively.

One area I'm quite interested in, in regards to climate coordination / conflict, is geoengineering. Several high-impact geoengineering methods seem economically feasible to do unilaterally at scale. This seems like a complicated mixed-motive conflict. I'm not clear where the Schelling Points will be, but I am going to try to figure this out. I'd love to see other people do their own analyses here!

Comment by landfish on Absent coordination, future technology will cause human extinction · 2020-02-07T09:32:22.134Z · score: 2 (2 votes) · LW · GW

"First, I am not at all sure history shows international coordination has ever done anything about limiting war."

I think there's a decent case that the Peace of Westphalia is a case of this. It wasn't strong centralized coordination, but it was a case of major powers getting together and engineering a peace that lasted for a long time. I agree that both the League of Nations and the UN have not been successful at the large-scale peacekeeping that their founders hoped for. I do think there are some arguments that the post-WWII US + allies prevented large scale wars. Obviously nuclear deterrence was a big part of that, but it doesn't seem like the only part. I wouldn't call this a big win for explicit international cooperation, but it is an example of a kind of prevention. I recognize that the kind of coordination I'm calling for is unprecedented, and it's unclear whether it's possible.

What I like about the urn metaphor is the recognition that the process is ongoing and it's very hard to model the effects of technologies before we invent them. It's very simplified, but it illustrates that particular point well. We don't know what innovation might lead to an intelligence explosion. We don't know if existentially-threatening biotech is possible, and if so what that might look like. I think the metaphor doesn't capture the whole landscape of existential threats, but does illustrate one class of them.

Comment by landfish on Absent coordination, future technology will cause human extinction · 2020-02-07T09:19:39.354Z · score: 1 (1 votes) · LW · GW

This sounds roughly right to me. There is the FAI/UFAI threshold of technological development, and after humanity passes that threshold, it's unlikely that coordination will be a key bottleneck in humanity's future. Many would disagree with this take -- those who think multi-polar worlds are more likely and that AGI systems may not cooperate well -- but I think the view is roughly correct.

The main thing I'm pointing at in my post is 5) and 3)-transition-to-5). It seems quite possible to me that SAI will be out of reach for a while due to hardware development slowing, and that the application of other technologies could threaten humanity in the meantime.

Comment by landfish on Absent coordination, future technology will cause human extinction · 2020-02-04T09:17:18.529Z · score: 1 (1 votes) · LW · GW

I'd be surprised if a Chernobyl/Fukushima/Mayak-level disaster every fifty years led to human extinction over 500 years. Why do you think that is the case?

Comment by landfish on Absent coordination, future technology will cause human extinction · 2020-02-03T22:00:10.976Z · score: 12 (6 votes) · LW · GW

Exchange from my Facebook between Robin Hanson and myself:

Robin Hanson "Will" is WAY too strong a claim.

Jeffrey Ladish The key assumption is that tech development will continue in key areas, like computing and biotech. I grant that if this assumption is false, the conclusion does not follow.

Jeffrey Ladish On short-medium (<100-500 years) timescales, I could see scenarios where tech development does not reach "black marble" levels of dangerous. I'd be quite surprised if on long time scales 1k - 100k years we did not reach that level of development. This is why I feel okay making the strong claim, though I am also working on a post about why this might be wrong.

Robin Hanson You are assuming something much stronger than merely that tech improves.

Jeffrey Ladish However, I think we may have different cruxes here. I think you may believe that there can be fast tech development (i.e. Age of Em), without centralized coordination of some sort (I think of markets as kinds of decentralized coordination), without extinction.

Jeffrey Ladish I'm assuming that if tech improves, humans will discover some autopoietic process that will result in human extinction. This could be an intelligence explosion, it could be synthetic biotech ("green goo"), it could be some kind of vacuum decay, etc. I recognize this is a strong claim.

Robin Hanson Jeffrey, a strong assumption quite out of line with our prior experience with tech.

Jeffrey Ladish That's right.

Jeffrey Ladish Not out of line with our prior experience of evolution though.

Robin Hanson Species tend to improve, but they don't tend to destroy themselves via one such improvement.

Jeffrey Ladish They do tend to destroy themselves via many improvements. Specialists evolve then go extinct.
Though I think humans are different because we can engineer new species / technologies / processes. I'm pointing at reference classes like biotic replacement events:

Jeffrey Ladish I'm working on a longform argument about this, will look forward to your criticism / feedback on it.

Robin Hanson The risk of increasing specialization creating more fragility is not at all what you are talking about in the above discussion.

Jeffrey Ladish Yes, that was sort of a pedantic point. I do think it's related but not very directly. But the second point, about the biotic replacement reference class, is the main one.

Comment by landfish on Absent coordination, future technology will cause human extinction · 2020-02-03T21:56:58.989Z · score: 3 (2 votes) · LW · GW

I didn't really write this in "lesswrong style", but I think it's still appropriate to put this here. There are a number of assumptions implicit in this post that I don't spell out, but plan to with future posts.

Comment by landfish on Does the US nuclear policy still target cities? · 2019-10-02T21:21:45.533Z · score: 3 (2 votes) · LW · GW

I do find the destruction of capital cities from "decapitation strikes" especially worrying, for three reasons.

1) they disrupt NC3 systems

2) they remove the highest levels of leadership and thus make command hierarchies less clear to both sides

3) as you note, they involve the destruction of cities. I would be very surprised if a US - Russia nuclear war broke out without Washington DC and Moscow being hit with multiple nuclear weapons.

The question becomes -- can the destruction of most cities be avoided even with a few being destroyed? It seems unclear. Airports are another very problematic target, as they're always located near cities and provide backup runways for military aircraft. Huge fallout problem for cities.

Comment by landfish on Does the US nuclear policy still target cities? · 2019-10-02T02:54:32.218Z · score: 5 (3 votes) · LW · GW

I didn't want to go into arguments about whether WWII strategic bombing was effective because it's a point historians have argued about a fair bit, and I wanted to focus on the nuclear targeting question. I do think it's an interesting / important question. I believe the original justification, at least for Britain and the United States, was to destroy the industrial capacity of the nation. The Norden bombsight was hoped to enable more targeted bombing. Then air defenses proved too powerful for day bombing, so the British and American air forces switched to night bombing, in which accurate bombing was impossible. My recollection is that the justification at the time (especially for the Americans?) was still partially to "destroy industrial capacity", even though this was clearly more of a terror / demoralizing strategy in practice.

I think separately from the justification is the question of whether it actually succeeded in helping to win the war, either by

A) Eroding the capacity to make war, especially industrial capacity

B) Eroding morale / inducing surrender

It would not surprise me if the claims of those championing strategic bombing were false or overstated. It may be that, especially in Germany, strategic bombing mostly killed civilians and accomplished no military objective. It seems far less clear in Japan, especially given Japan did surrender after most of their major cities were destroyed. I would be surprised if the bombing of Japan, both conventional and nuclear, had no impact on their decision to surrender. (I am not making any normative claim about whether any power should have engaged in aerial bombardment, conventional or nuclear.)

Comment by landfish on The Relationship Between the Village and the Mission · 2019-05-14T21:59:58.239Z · score: 3 (2 votes) · LW · GW

Ray, let's compare notes about group houses in SF offline. I know of a couple but not many, and I'd be interested to know of more. (And I prefer to talk about people's homes in a less public forum).

I'm noticing an error I've been making, which is to be sort of fatalistic about community in SF rather than gathering data and making plans.

Comment by landfish on The Relationship Between the Village and the Mission · 2019-05-14T21:45:52.633Z · score: 8 (4 votes) · LW · GW

My experience in Seattle was 2x - 3x more Village-like than my experience in Berkeley. Caveat that I also didn't live in Berkeley, first I lived in Oakland near Leverage and now I live in San Francisco.

Seattle's community is small enough to have one primary group house where parties happen and people congregate, so it really felt like one extended social group, whereas in Berkeley it feels like there are many. Some people in Seattle also feel very proud of their community (myself included, even though I've moved here), which to me suggests a village-ness. I get the sense that in Seattle the focus is more the Village than the Mission, which then has the problem you mentioned of agenty mission-oriented people moving to other places.

I do think Seattle, like Berkeley, should aspire to be a "true village", since many people there desire this, and the benefits are large. I also think having multiple successful villages would strengthen the [global] community overall. I think Seattle has the advantage that it is small and centralized, and Berkeley has the advantage that it has more Mission energy.

Comment by landfish on The Relationship Between the Village and the Mission · 2019-05-13T23:41:01.762Z · score: 4 (2 votes) · LW · GW

[Comment copied from fb]:

I think there is a distinction between village and home, and that they can have somewhat different focuses: a home can be home-centered while a village can be mission-centered. I'm not sure this is the ideal arrangement, but I put some weight on it being so.

The alternative is to live in a village that is not mission centered. I'm worried that will preclude many kinds of successful missions.

Comment by landfish on The Relationship Between the Village and the Mission · 2019-05-13T23:13:48.453Z · score: 10 (5 votes) · LW · GW

1. How do you think about San Francisco / Oakland / other parts of the bay area, as they relate to the Berkeley community? Personally, I wish there were more centers of community in SF. Both areas are near enough to each other that I think it's possible to make a village that contains people in the adjacent towns, but the commute and network dynamics make this a bit tricky. I haven't figured out an ideal vision for this, but I have the sense that there are opportunities here (in SF and other parts of the bay area) that haven't been explored.

2. Community based religions have churches in many places. I grew up Seventh Day Adventist, and there were Adventist churches in most states and many countries. Whenever a Seventh Day Adventist moves, they find their nearest church and start attending. I wonder if it would be possible / desirable to cultivate this type of network of communities. There already exists some of this between Seattle, the bay, New York, Boston, and elsewhere, but I think it could be intentionally cultivated.

Mission requires people in different places. Oxford, the bay area, and DC are three places where there are clusters of longterm-future-Mission-oriented people, who all believe they need to be in those places in order to be able to work on their mission effectively.

Comment by landfish on Seek Fair Expectations of Others’ Models · 2017-10-19T07:36:54.565Z · score: 3 (3 votes) · LW · GW

I appreciated how Zvi presented different models of paths to AGI. People do believe many of these different models -- I hear people discuss them in in-person conversations -- but I haven't seen many of them presented on the internet, apart from random Facebook discussion. Even if models are wrong, if people have put effort into them it's useful to articulate them.