Short story about this from a few years ago: Your DietBet Destroyed the World. Mirror bacteria are developed to produce L-Glucose, and everything is fine until there's an accident.
Here is a now-public example of how a biological infection could kill us all: Biological Risk from the Mirror World.
I don't think this makes much sense. In a regulated industry, you want to build up a positive reputation and working relationship with the regulators, where they know what to expect from you, are familiar with your work and approach, have a sense of where you're going, and generally like and trust you. Engaging with them early and then repeatedly over a long period seems like a way better strategy than waiting until you have something extremely ambitious to try to get them to approve.
Funny! I almost deleted the cross-post because it seemed too short to be interesting here.
Put particles in the air and measure how quickly they're depleted. ex: Evaluating a Corsi-Rosenthal Filter Cube
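If you want to turn the decay measurement into a number, here's a rough sketch (the readings, room size, and measurement interval are all made up, and this isn't the exact procedure from the linked post): with well-mixed air, particle counts decay exponentially, and the fitted decay rate times the room volume gives a clean air delivery rate.

```python
import math

# Hypothetical particle counter readings, taken every five minutes after
# filling the room with particles (e.g. from a blown-out candle).
minutes = [0, 5, 10, 15, 20]
counts = [12000, 7400, 4600, 2900, 1800]  # particles per liter

# Least-squares fit of log(count) vs time gives the decay rate.
n = len(minutes)
ys = [math.log(c) for c in counts]
x_mean = sum(minutes) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(minutes, ys)) \
    / sum((x - x_mean) ** 2 for x in minutes)
air_changes_per_hour = -slope * 60

# For a real test you'd also measure the decay rate with the purifier
# off and subtract it, since particles settle and leak out on their own.
room_volume_ft3 = 12 * 15 * 8  # assumed room size
cadr_cfm = air_changes_per_hour * room_volume_ft3 / 60
print(f"~{air_changes_per_hour:.1f} air changes/hour, CADR ~{cadr_cfm:.0f} CFM")
```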
Sounds like I should try repeating this with someone with a higher voice!
I think that's right! Not a reason to take up vaping, though.
There's probably a way to do this with physics, but I do a lot with trial and error ;)
I do think expanding the ceiling fan air purifier would work well. You could make a frame that takes furnace filters, and purify a lot of air very efficiently and relatively cheaply.
If I were doing this again I would extend the filters down below the plane of the fan, now that I know more about how the Bernoulli principle applies.
I assume this is for one location, so have you done any modeling or estimations of what the global prevalence would be at that point? If you get lucky, it could be very low. But it also could be a lot higher if you get unlucky.
We haven't done modeling on this, but I did write something a few months ago (Sample Prevalence vs Global Prevalence) laying out the question. It would be great if someone did want to work on this!
Have you done any cost-effectiveness analyses?
An end-to-end cost-effectiveness analysis is quite hard because it depends critically on how likely you think someone is to try to create a stealth pandemic. We've done modeling on "how much would it cost to detect a stealth pandemic before X% of people are infected" but we're not unusually well placed to answer "how likely is a stealth pandemic" or "how useful is it for us to raise the alarm".
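As a toy illustration of the shape of that question (this is not our actual modeling; every parameter below is made up): simulate exponential growth, accumulate the pathogen reads you'd expect from weekly sequencing, and flag when they cross a threshold.

```python
# Toy model of "detect before X% are infected": exponential growth plus
# weekly metagenomic sequencing. All parameters are illustrative only.
doubling_days = 7.0
weekly_reads = 1e9            # total reads sequenced per week
ra_at_full_prevalence = 1e-4  # pathogen fraction of reads if everyone is infected
detect_threshold = 100        # cumulative pathogen reads needed to flag
cost_per_week = 5_000         # sequencing + operations, dollars (made up)

cum_pathogen_reads = 0.0
for week in range(200):
    prevalence = min(1.0, 1e-8 * 2 ** (7 * week / doubling_days))
    cum_pathogen_reads += weekly_reads * prevalence * ra_at_full_prevalence
    if cum_pathogen_reads >= detect_threshold:
        print(f"flagged in week {week}: prevalence {prevalence:.2%}, "
              f"spend ${(week + 1) * cost_per_week:,}")
        break
```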
What's the core reason why the NAObservatory currently doesn't provide that data?
Good question!
For wastewater the reason is that the municipal treatment plants which provide samples for us have very little to gain and a lot to lose from publicity, so they generally want things like pre-review before publishing data. This means that getting to where they'd be ok with us making the data (or derived data, like variant tracking) public on an ongoing basis is a bit tricky. I do think we can make progress here, but it also hasn't been a priority.
For nasal swabs the reason is that we are currently doing very little sampling and sequencing: (a) we're redoing our IRB approval after spinning out from MIT and it's going slowly, (b) we don't yet have a protocol that is giving good results, and (c) we aren't yet sampling anywhere near the number of people you'd need to know what diseases are going around.
when in the future would you expect that kind of data to be easily accessible from the NAObservatory website?
The nasal swab sampling data we do have is linked from https://data.securebio.org/sampling-metadata/ as raw reads. The raw wastewater data may or may not be available to researchers depending on how what you want to do interacts with what our partners need: https://naobservatory.org/data
Here's another one: HN
In this case, it looks like a security lockout, where the poster has 2fa enabled with a phone number they migrated away from in 2022.
Fixed! It should have read "We are sequencing"
In general, at any given level of child maturity and parental risk tolerance, devices like this watch let children have more independence.
What has changed over the last few decades is primarily a large decrease in parental risk tolerance. I don't know what's driving this, but it's probably downstream from increasing wealth, lower child mortality, and the demographic transition.
Interesting to read through! Thoughts:

- I really don't like the no-semicolons JS style. I've seen the arguments that it's more elegant, but a combination of "it looks wrong" and "you can get very surprising bugs in cases where the insertion algorithm doesn't quite match our intuitions" is too much.
- What's the advantage of making `alreadyClicked` a set instead of keeping it as a property of the things it's clicking on?
- In this case I'm not at all worried about memory leaks, since the tab will only exist for a couple seconds.
- The `getExpandableComments` simplification is nice!
- I haven't tested it, but I think your `collectComments` has a bug in it where it will include replies as if they were top-level comments, in addition to including them as replies to the appropriate top-level comments. See the sketch after this list.
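To make the suspected bug concrete, here's a minimal Python sketch of the pattern (the `id`/`parent_id` fields are hypothetical, not the script's actual data model):

```python
# Hypothetical flat comment records; parent_id is None for top level.
comments = [
    {"id": 1, "parent_id": None, "text": "top"},
    {"id": 2, "parent_id": 1, "text": "reply"},
    {"id": 3, "parent_id": None, "text": "another top"},
]

# The suspected bug: treating every collected comment as a root, so
# comment 2 would appear both under comment 1 and as its own top-level
# entry. The fix is to only keep comments whose parent wasn't collected:
collected_ids = {c["id"] for c in comments}
roots = [c for c in comments if c["parent_id"] not in collected_ids]

for c in roots:
    replies = [r for r in comments if r["parent_id"] == c["id"]]
    print(c["text"], "->", [r["text"] for r in replies])
```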
mostly my suggestions will be minor refactors at best ... post it as a pull request
I'm happy to look at a PR, but I think I'm unlikely to merge one that's minor refactors: I've evaluated the current code through manual testing, and if I were going to make changes to it I'd need another round of manual testing to verify it still worked. Which isn't that much work, but the benefit is also small.
One general suggestion I have is to write some test code that can notify you when something breaks
It's reasonably fast for me to evaluate it manually: pick a post that should have some comments (including nested ones) and verify that it does in fact gather them. Each time it runs it tells me the number of comments it found (via the title bar) and this is usually enough for me to tell if it is working.
I think this is an unusually poor fit for automated tests? I don't need to keep the code functional while people other than the original author work on it, writing tests won't keep dependencies from breaking it, the operation is simple enough for manual evaluation, the stakes are low, and it's quite hard to make a realistic test environment.
Sharing the code in case others are curious, but if you have suggestions on how to do it better I'd be curious to hear them!
Done; thanks!
I tried to find an official pronoun policy for LessWrong, LessOnline, EA Global, etc, and couldn't.
The EA Forum has an explicit policy that you need to use the pronouns the people you're talking about prefer. EAG(x) doesn't explicitly include this in its code of conduct, but the code is short and I expect it's interpreted so that non-accidental misgendering counts as a special case of "offensive, disruptive, or discriminatory actions or communication". I vaguely remember seeing someone get a warning on LW for misgendering, but I'm not finding anything now.
I think it's a pretty weak hit, though not zero. There are so many things I want to look into that I don't have time for that having this as another factor in my prioritization doesn't feel very limiting to my intellectual freedom.
I do think it is good to have a range of people in society who are taking a range of approaches, though!
Nice of you to offer! I expect, however, that pressure in this direction will come from non-LW non-EA directions.
The "don't look into dragons" path often still involves hiding info, since often your brain takes a guess anyhow
In many cases I have guesses, but because I just have vague impressions they're all very speculative. This is consistent with being able to say "I haven't looked into it" and "I really don't know", and because these are all areas where the truth is not decision relevant it's been easy to leave it at that. Perhaps people notice I have doubts, but at least in my social circles that's acceptable if not made explicit.
I think that's an important norm and I support it, but until it is well established it's not something I (or others) can rely on.
No one has given me a hard time about it. I say things like "I haven't looked into it" and we move on. The next time it happens I will additionally be able to link to this post.
Interesting! Would the original EdgeRank be an algorithm, or is it too simple?
That's still an algorithm, it's just a very simple one.
Personally, I prefer to have the posts I see be the product of a sophisticated algorithm (ex: there are some people I follow who post a lot, and for those people I would like to only see their best posts) but I want it to be one that is in my interest.
Can you give examples of curriculum elements that you think are aimed at the world of 20 years ago? The usual criticism I see is that school is barely connected to the needs of the working world.
Is there something special about 5GB?
What's wrong with streaming?
The remote host supports SCP and SFTP, but not shell access over SSH.
Part of them getting cheaper is becoming higher output, which means the same labor cost gets you more power. For example, in 2018 we got 360W panels while in 2024 we got 425W ones. But I agree this isn't the main component.
whether it is spending on transmission or generation is a fiction dictated by the regulator
That's the key place where we disagree: my understanding is that the "generation" charges are actual money leaving the utility for a competitive market, and this is a real division.
you assert that you know what the monopoly spends.
First, don't we know that? It's a public company and it has to report what it spends.
But more importantly, I do generally think getting a regulated monopoly like this to become more efficient is intractable, at least in the short to medium term.
Maybe we're meaning different things by "cost"? If a large monopoly spends $X to do Y then even if they're pretty inefficient in how they do Y I'd still describe $X as the cost. We might discuss ways to get the cost down closer to what we think it should be possible to do Y for (changing regulations, subjecting the monopoly to market forces in other ways, etc) but "cost" still seems like a fine word for it?
I don't know this area well, but my understanding is that the "generation" portion represents a market where different companies can compete to provide power, while the other portion is the specific company that has wires to my house operating as a regulated monopoly. So while I don't trust the detailed breakdown of the different monopoly charges (I suspect the company has quite a bit of freedom in practice to move costs between buckets) the high-level generation-vs-the-rest breakdown seems trustworthy.
How so?
At $1000/kWh it's (just barely) not worth even buying batteries to shift energy from daytime generation to night consumption, while at $700/kWh it definitely is worthwhile.
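Roughly, the arithmetic behind that (the lifetime, efficiency, and value of a shifted kWh below are my assumptions for illustration, not precise numbers):

```python
# Cost to shift one kWh from day to night, amortized over battery life.
cycles = 10 * 365             # assume one cycle per day for ten years
round_trip_efficiency = 0.90  # assumed round-trip losses

def cost_per_shifted_kwh(battery_cost_per_kwh):
    return battery_cost_per_kwh / (cycles * round_trip_efficiency)

for price in (1000, 700):
    print(f"${price}/kWh battery: "
          f"${cost_per_shifted_kwh(price):.3f} per shifted kWh")

# Compare against what a shifted kWh saves you: e.g. the full retail
# rate, if the daytime solar would otherwise go to waste.
```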
Doesn't this depend heavily on local utility rates, and so any discussion of crossover points should include rates? Ex: I'm at $0.33/kWh while a friend in TX is at half that.
releasing the people that CEA bars with nondisclosure agreements about that one episode with Leverage about which we unfortunately don't know more than that there are nondisclosure agreements
I don't know the details in the Leverage case, but usually the way this sort of non-disclosure works is that both parties in a dispute, including employees, have non-disclosure obligations. But one party isn't able to release their (ex) employees unilaterally; the other party would need to agree as well.
That is, I suspect the agreements are structured such that CEA releasing people the way you propose (without Leverage also agreeing, which I doubt they would) would be a willful contract violation.
Have you checked if you are receiving some subsidies for it?
I think natural gas in the US is effectively subsidized by underinvesting in export infrastructure? This country produces a lot of gas.
Talking to a friend who works in the energy industry, this is already happening in Puerto Rico. Electricity prices are high enough that it makes sense for a very large fraction of people to get solar, which then pushes prices up even higher for the remainder, and it spirals.
If the cost of power generation were the main contributor to the overall cost of the system then I think you'd be right: economies of scale and the ability to generate in cheap places and sell in expensive places would do a lot to keep people on the grid. But looking at my bill (footnote [1]) the non-generation costs are high enough that if current trends continue that should flip; see my response to cata, above.
I'm not claiming here that it's currently cheaper, but that it will soon be cheaper in a lot of places. Only 47% of my bill is the actual power generation, and the non-generation charges total $0.18/kWh. That's still slightly more expensive than solar+batteries here, but with current cost trends that should flip in a year or two.
Looking at their breakdown (footnote [1]) it seems to be mostly the cost of getting the electricity to the consumer. Since they're a monopoly there's not much pressure on them to be efficient here, operating a high-uptime anything is expensive, and MA is an expensive place to do anything.
That's a very different product, using UV inside HVAC systems as an alternative or supplement to traditional filtration. Because the rate at which HVAC delivers air, as a fraction of all the air in the room, is so much lower than the fraction of air above people in a high-ceilinged room, this is much less valuable.
Very roughly, the main ways people use UV to clean air to reduce spread of diseases are HVAC / in duct, far UV, and upper room. I'm only trying to talk about the last of these here.
The way you demonstrate that there are no long-term side effects is that we can measure UV very accurately, so you can show that the system being on vs off has a negligible impact on the amount of UV where people are. Long-term impacts would be downstream of this kind of easily detectable effect.
(I think this is very different for far UV, where you intentionally shine it in a way that does include the people. That is potentially a much better approach, because you can clean the air between people instead of only above them, but while the research on far UVC safety looks pretty good to me, it's a much harder system to gather safety evidence on.)
You do need to pay attention to what paint is on the ceiling and measure to verify that levels are low in the places people are, but pointing UVC up is something we've done safely for a long time in many places.
35min is pretty optimized if I'm starting with "all components loose in a bag" -- it used to take me almost an hour. It's a lot of stuff that needs to get plugged in: https://www.jefftk.com/p/rhythm-stage-setup-components
But setting up a pedalboard so I don't need to manually connect dozens of things (this post!) is also optimizing the process.
Thanks; fixed!
Following Emma's advice on FB I exploded the tracks, healed splits, sorted them by take, and then comped manually, ignoring Reaper's features that are supposed to make this easier. It worked great, and I now have a rough mix ready for Lily's feedback!
Nit: your "Duplo" diagram shows a ratio of 1:3 which would make it a (hypothetical!) Triplo. Real Duplos are 1:2, and (discontinued) Quatro are 1:4 with Lego and 1:2 with Duplo.
(This is all in each dimension, so the overall volume ratio is 1:8)