Posts

Keeping it (less than) real: Against ℶ₂ possible people or worlds 2024-09-13T17:29:44.915Z
Meta: On viewing the latest LW posts 2024-08-25T19:31:39.008Z

Comments

Comment by quiet_NaN on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-21T12:01:50.467Z · LW · GW

Relatedly, if you perform an experiment n times, and the probability of success is p, and the expected number of total successes np is much smaller than one, then np is a reasonable measure of the probability of getting at least one success, because the probability of getting more than one success can be neglected. 

For example, if Bob plays the lottery for ten days, and each day has a 1:1,000,000 chance of winning, then overall he will have roughly a 1:100,000 chance of winning once. 

This is also why micromorts are roughly additive: if travelling by railway has a mortality of one micromort per 10Mm, then travelling for 50Mm will set you back 5 micromorts. Only if you leave what I would call the 'Newtonian regime of probability', e.g. by somehow managing to travel 1Tm by railway, are you required to do proper probability math, because naive addition would tell you that you will surely have a fatal accident (1 mort) over that distance, which is clearly wrong. 
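A minimal sketch of that arithmetic (the per-event risk is made up purely for illustration):

```python
# Naive addition of small per-event risks vs. the exact probability of
# at least one event. For n*p << 1 the two agree; once n*p approaches 1,
# naive addition overshoots (and would eventually exceed 1), while the
# exact value approaches 1 - exp(-n*p).
p = 1e-7                                   # illustrative per-segment risk
for n in (10, 1_000, 10_000_000):
    naive = n * p                          # the 'Newtonian regime' shortcut
    exact = 1 - (1 - p) ** n               # proper probability math
    print(f"n={n:>10,}: naive={naive:.4g}, exact={exact:.4g}")
```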

Comment by quiet_NaN on The Case For Bullying · 2024-10-27T13:28:55.583Z · LW · GW

Getting down-voted to -27 is an achievement. Most things judged 'bad AI takes' only go to -11 or so, even that recent  P=NP proof only got to -25. Of course, if the author is right, then downvoting further is providing helpful incentives to him. 

I think that bullying is quite distinct from status hierarchies. The latter are unavoidable. There will always be some clique of cool kids in the class who will not invite the non-cool kids to their parties. This is ok. Sometimes, status is correlated with behaviors which are pro-social (kids not smoking; donating to EA), sometimes it is correlated with behaviors which are net-negative (kids smoking; serving in the SS). I was not part of the cool kids circle, and I was fine with that. Live and let live and all that. 

'Bullying' has a distinct negative connotation. The central example is someone who is targeted for sport for being different from the others. The bullies don't want the victims to change their ways, they just like to make their life miserable for thrills. I am sure that sometimes it unintentionally helps the victim: if you push enough people, at some point you are bound to push someone out of the path of a car or bullet. In the grand scheme of things, however, the bullies are net negative for their victims and society overall. 

Comment by quiet_NaN on Arithmetic Models: Better Than You Think · 2024-10-27T12:52:34.076Z · LW · GW

I see this as less of an endorsement of linear models and more of a scathing review of expert performance. 

This. Basically, if your job is to do predictions, and the accuracy of your predictions is not measured, then (at least the prediction part of) your job is bullshit. 

I think that if you compared simple linear models with experts in domains where people actually care about their predictions, the outcome would be different. For example, if simple models predicted stock performance better than experts at investment banks, anyone with a spreadsheet could quickly become rich. There are few if any cases of 'I started with Excel and 1000$, and now I am a billionaire'. Likewise, I would be highly surprised to see a simple linear model outperform Nate Silver or the weather forecast. 

Even predicting chess outcomes from mid-game board configurations is something where I would expect human experts to outperform simple statistical models working on easily quantifiable data (e.g. number of pieces remaining, number of possible moves, being in check, etc).
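For concreteness, here is a sketch of the kind of simple model I have in mind, with synthetic made-up data standing in for real game records (so the numbers mean nothing; the point is only how little machinery this side of the comparison needs):

```python
# Hypothetical baseline: logistic regression on a handful of hand-picked
# mid-game features (material balance, mobility difference, in-check flag).
# The data below is synthetic noise, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(-9, 10, 1000),    # material balance in pawns
    rng.integers(-20, 21, 1000),   # mobility difference (legal move counts)
    rng.integers(0, 2, 1000),      # side to move is in check
])
# Fake labels that depend mostly on material, as a stand-in for real games.
y = (X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 3, 1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)   # the whole 'model' is four numbers
```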

Neural networks contained in animal brains (which includes human brains) are quite capable of implementing linear models, and as such should perform at least equally well when they are properly trained. A wolf pack deciding whether or not to chase some prey has direct evolutionary skin in the game of making its prediction of success as accurate as possible, which the average school counselor predicting academic success simply does not have.

--

You touch on this a bit in 'In defense of explanatory modeling', but I want to emphasize that uncovering causal relationships and pathways is central to world modelling. Often, we don't want just predictions, we want predictions conditional on interventions. If you don't have that, you will end up trying to cure chickenpox with makeup, as 'visible blisters' is negatively correlated with outcomes. 

Likewise, if we know the causal pathway, we have a much better basis to judge if some finding can be applied to out-of-distribution data. No matter how many anvils you have seen falling, without a causal understanding (e.g. Newtonian mechanics), you will not be able to reliably apply your findings to falling apples or pianos. 
 

Comment by quiet_NaN on How I started believing religion might actually matter for rationality and moral philosophy · 2024-08-24T23:01:56.736Z · LW · GW

What I don't understand is why there should be a link between trapped priors and moral philosophy. 

I mean, if moral realism was correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths. 

This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing". 

In my view, moral codes are intrinsically subjective. There is no factual disagreement between Harry and Professor Quirrell which they could hope to overcome through empiricism, they simply have different utility functions.

--

My second point is that if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:

  • Whatever Jesus thought about gender equality when he achieved moral enlightenment, Paul had his own ideas a few decades later. 
  • Mohammed was clearly not opposed to offensive warfare.
  • Martin Luther evidently believed that serfs should not rebel against their lords. 

On the other hand, instances where religions did advocate for tenets compatible with humanitarianism, such as Christian abolitionism, do not seem to correspond to strong spiritualism. Was Pope Benedict XIV condemning the slave trade because he was more spiritual (and thus in touch with the universal moral truth) than his predecessors who had endorsed it?

--

My last point is that especially with regard to relational conflicts, our map not corresponding to the territory might often not be a bug, but a feature. Per Hanson, we deceive ourselves so that we can better deceive others. Evolution has not shaped our brains to be objective cognitive engines. In some cases, objective cognition is advantageous -- if you are alone hunting a rabbit, no amount of self-deception will fill your stomach -- but in any social situation, expect evolution to put its hand on the scales of your impartial judgement. Arguing that your son should become the new chieftain because he is the best hunter and strongest warrior is much more effective than arguing for it simply because he is your son -- and the best way to argue that is to believe it, no matter if it is objectively true. 

The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is. Also, I have no reason to believe that I don't have similar moral blind spots hard-wired into my brain by evolution.

I would bet that most of the serious roadblocks to a true moral theory (if such a thing existed) are of that kind, instead of being maladaptive trapped priors. Thus, even if religion and spirituality are effective at overcoming maladaptive trapped priors, I don't see how they would bring us closer to moral cognition. 

Comment by quiet_NaN on Universal Basic Income and Poverty · 2024-07-29T04:12:55.750Z · LW · GW

Note: there is an AI audio version of this text over here: https://askwhocastsai.substack.com/p/eliezer-yudkowsky-tweet-jul-21-2024

I find the AI narrations offered by askwho generally ok, worse than what a skilled narrator (or team) could do but much better than what I could accomplish. 

Comment by quiet_NaN on Universal Basic Income and Poverty · 2024-07-29T04:08:04.538Z · LW · GW

[...] somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty.

That feels to me about as convincing as saying: "Chemical fertilizers have not eliminated hunger, just the other weekend I was stuck on a campus with a broken vending machine." 

I mean, sure, both the broken vending machine and actual starvation can be called hunger, just as both working 60h/week to make ends meet or sending your surviving kids into the mines or prostituting them could be called poverty, but the implication that either scourge of humankind has not lost most of its terror seems clearly false. 

Sure, being poor in the US sucks, but I would rather spend a year living the life of someone in the bottom 10% income bracket in the 2024 US than spending a month living the life of a poor person during the English industrial revolution.

I am also not convinced that 60h/week is what it actually takes to survive in the US. I can totally believe that this amount of unskilled labor might be required to rent accommodations in cities, though. 

Comment by quiet_NaN on Superbabies: Putting The Pieces Together · 2024-07-18T12:49:38.314Z · LW · GW

Critically, the gene editing of the red blood cells can be done in the lab; trying to devise an injectable or oral substance that would actually transport the gene-editing machinery to an arbitrary part of the body is much harder.

 

I am totally confused by this. Mature red blood cells don't contain a nucleus, and hence no DNA. There is nothing to edit. Injecting blood cells produced by gene-edited bone marrow in vitro might work, but would only be a therapy, not a cure: it would have to be repeated regularly. The cure would be to replace the bone marrow. 

So I resorted to reading through the linked FDA article. Relevant section:

The modified blood stem cells are transplanted back into the patient where they engraft (attach and multiply) within the bone marrow and increase the production of fetal hemoglobin (HbF), a type of hemoglobin that facilitates oxygen delivery.

Blood stem cells seems to be FDA jargon for hematopoietic stem cells. From the context, I would guess they are harvested from the bone marrow of the patients, then CRISPRed, and then injected back into the blood stream, from where they will find their way back to the bone marrow. 

I still don't understand how they would outcompete the non-GMO bone marrow which produces the faulty red blood cells, though. 

I would also take the opportunity to point out that the list of FDA-approved gene therapies tells us a lot about the FDA and very little about the state of the art. This is the agency which banned life-saving baby nutrition for two years, after all. Anchoring what is technologically possible to what the FDA approves would be like anchoring what is possible in mobile phone tech to what is accepted by the Amish. 

Also, I think that editing of multi-cellular organisms is not required for designer babies at all. 

  1. Start with a fertilized egg, which is just a single cell. Wait for the cell to split. After it has split, separate the two cells. Repeat until you have a number of single-cell candidates.
  2. Apply CRISPR to these cells individually. Allow them to split again. Do a genome analysis on one of the two daughter cells. Select a cell line where you made only the desired change in the genome. Go back to step one and apply the next edit. 

Crucially, the costs would only scale linearly with the number of edits. I am unsure how easy that "turn one two-cell embryo into two one-cell embryos" step is, though. 

Of course, it would be neater to synthesize the DNA of the baby from scratch, but while prices for base pair synthesis have dropped a lot, they are clearly still too high to pay for building a baby (and there are likely other tech limitations). 

Comment by quiet_NaN on Superbabies: Putting The Pieces Together · 2024-07-18T11:43:20.701Z · LW · GW

I thought this first too. I checked on Wikipedia:

Adult stem cells are found in a few select locations in the body, known as niches, such as those in the bone marrow or gonads. They exist to replenish rapidly lost cell types and are multipotent or unipotent, meaning they only differentiate into a few cell types or one type of cell. In mammals, they include, among others, hematopoietic stem cells, which replenish blood and immune cells, basal cells, which maintain the skin epithelium [...].

I am pretty sure that the thing a skin cell makes per default when it splits is more skin cells, so you are likely correct. 

Comment by quiet_NaN on The Incredible Fentanyl-Detecting Machine · 2024-07-01T00:56:38.375Z · LW · GW

See here. Of course, that article is a bit light on information on detection thresholds, false-positive rates and so on as compared to dogs, mass spectrometry or chemical detection methods. 

I will also note that humans have 10-20M olfactory receptor neurons, while bees have 1M neurons in total. Probably bees are under more evolutionary pressure to make optimal use of their olfactory neurons, though. 

Comment by quiet_NaN on The Incredible Fentanyl-Detecting Machine · 2024-07-01T00:34:53.289Z · LW · GW

Dear Review Bot,

please avoid double-posting. 

On the other hand, I don't think voting you to -6 is fair, so I upvoted you. 

Comment by quiet_NaN on The Incredible Fentanyl-Detecting Machine · 2024-07-01T00:29:09.708Z · LW · GW

My take on sniffer dogs is that frequently, what they are best at picking up is unconscious tells from their handler. In so far as they do, they are merely science!-washing the (possibly merited) biases of the police officer. 

Packaging something really air-tight without outside contamination is indeed far from trivial. For example, the swipe tests taken at airports are useful because while it is certainly possible to pack a briefcase full of explosives without leaving any residue on the outside, most of the people who could manage to build a bomb would not manage to do that. 

Of course, there are also no profit margins in blowing up airplanes, so stopping the amateurs is already 95% of the job. 

There are significant profit margins in drug trafficking. After you intercept a few shipments and arrest a few mules, the cleverer drug lords will wisen up. 

A multi-method approach might work for a while, glass vials are probably more visible on that CT scan than some organic substance. 

Comment by quiet_NaN on The Incredible Fentanyl-Detecting Machine · 2024-07-01T00:00:30.656Z · LW · GW

My first question is about the title picture. I have some priors on what a computed tomography machine for vehicles would look like. Basically, you want to take x-ray images from many different directions. The medical setup, where a ring contains the x-ray source on one side and the detectors on the other side, and that ring rotates to take images from multiple directions before the patient is moved perpendicular to the ring to record the next slice, exists for a reason: high resolution x-ray detectors are expensive. If we scaled this up to car size, we might have a ring of four meters in diameter. A bridge of some low-Z material (perhaps beams of wood) would go through that ring. Park on the bridge, get out of the car, and watch as the rotating ring moves over the bridge. 

The thing in the picture does not look like it has big moving parts. I can kind of see it taking a sideways x-ray image of the car, but I am puzzled by the big grey boxes visible over the car. Having an x-ray source or detector above the car would only make sense if you had the other device below the car, in the road. Of course, if there is anything in the road, one would imagine that that piece of road was some low-Z material designed not to block half of your x-rays. Instead, the road material below the car looks exactly like the road outside the detector, concrete probably. 

In Europe, we kind of dislike being exposed to ionizing radiation (even though Germany is rather silly about it). I get that the US is a bit more relaxed about it, but a computed tomography device capable of scanning a car with good resolution would likely expose anyone nearby to a lot of scattered x-rays. The position in which that border guard stands would likely not be a safe place to stand for a significant fraction of your work-life. 

At the very least, I would expect that black and yellow trefoil sign warning of ionizing radiation, with a visual indicator if the x-ray is on, and a line marking the minimum safe distance. More realistically, you might have something like a garage door before and behind the car. 

Just from the vibes, I get that the pictured non-intrusive fentanyl inspection machine might run on the operation principle of the famous ADE 651, which is to say it does nothing whatsoever. 

While detecting dry fentanyl through high resolution CT scans seems to be possible in principle at least, once the smugglers go through the additional trouble of dissolving it all bets are off. Wikipedia is light on solubility data, but from the looks of it fentanyl should dissolve well in organic solvents. 

You can check the gas tank of every car if you want, but what are you going to do if an eight-wheeler full of one-gallon bottles of cooking oil crosses the border? If your scan is good, it would perhaps detect if one bottle in the middle of the stack has been filled with powder instead. There is no way in hell it will detect if one such bottle has ten grams (perhaps 10k doses) of fentanyl dissolved in it. Your best bet would be to detect some residue of the oil on product seized off the streets and work backwards from that.

Of course, "as long as the profit margin is that high, we could go full iron curtain on our borders and would not stop the trafficking" is not a politically acceptable answer. Hence some snake oil salesmen are able to get your tax dollars for products which have not been proven in adversarial conditions. (To train an AI, it is not enough to have a ton of samples, you would need known positive and negative samples. Also, while we are playing buzzword bingo, why not mention that the shipping manifests will be securely stored on The Blockchain?)

Comment by quiet_NaN on The Problem With the Word ‘Alignment’ · 2024-05-23T10:40:11.269Z · LW · GW

The deliberately clumsy term "AInotkilleveryoneism" seems good for this, in any context you can get away with it. 

 

Hard disagree. The position "AI might kill all humans in the near future" is still quite some inferential distance away from the mainstream even if presented in a respectable academic veneer. 

We do not have weirdness points to spend on deliberately clumsy terms, even on LW. Journalists (when they are not busy doxxing people) can read LW too, and if they read that the worry about AI as an extinction risk is commonly called notkilleveryoneism they are orders of magnitude less likely to take us seriously, and being taken seriously by the mainstream might be helpful for influencing policy. 

We could probably get away with using that term ten pages deep into some glowfic, but anywhere else 'AI as an extinction risk' seems much better. 

Comment by quiet_NaN on Some perspectives on the discipline of Physics · 2024-05-21T18:46:34.374Z · LW · GW

It is also useful for a lot of practical problems, where you can treat ℏ as being essentially zero and c as being essentially infinite. If you want to get anywhere with any practical problem (like calculating how long a car will take to come to a stop), half of the job is to know which approximations ("cheats") are okay to use. If you want to solve the fully generalized problem (for a car near the Planck units or something), you will find that you would need a theory of everything (that is, quantum mechanics plus general relativity) to do so, and we don't have that. 

Comment by quiet_NaN on The Problem With the Word ‘Alignment’ · 2024-05-21T18:24:27.071Z · LW · GW

I think that "AI Alignment" is a useful label for the somewhat related problems around P1-P6. Having a term for the broader thing seems really useful. 

Of course, sometimes you want labels to refer to a fairly narrow thing, like the label "Continuum Hypothesis". But broad labels are generally useful. Take "ethics", another broad field label. Normative ethics, applied ethics, meta-ethics, descriptive ethics, value theory, moral psychology, et cetera. If someone tells me "I study ethics" this narrows down what problems they are likely to work on, but not very much. Perhaps they work out a QALY-based system for assigning organ donations, or study the moral beliefs of some peoples, or argue about whether moral imperatives should have a truth value. Still, the label conveys a lot of useful information compared to a broader label like "philosophy". 

By contrast, "AI Alignment" still seems rather narrow. P2 for example seems mostly an instrumental goal: if we have interpretability, we have better chances to avoid a takeover by an unaligned AI. P3 seems helpful but insufficient for good long term outcomes: an AI prone to disobeying users or interpreting their orders in a hostile way would -- absent some other mechanism -- also fail to follow human values more broadly, but a P3-aligned AI in the hands of a bad human actor could still cause extinction, and I agree that social structures should probably be established to ensure that nobody can unilaterally assign the core task (or utility function) of an ASI. 

Comment by quiet_NaN on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?" · 2024-05-19T18:00:09.308Z · LW · GW

I think an AI is slightly more likely to wipe out or capture humanity than it is to wipe out all life on the planet.

While any true-Scotsman ASI is as far above us humans as we are above ants and does not need to worry about any meatbags plotting its downfall, just as we don't generally worry about ants, it is entirely possible that the first AI which has a serious shot at taking over the world is not quite at that level yet. Perhaps it is only as smart as von Neumann and a thousand times faster. 

To such an AI, the continued thriving of humans poses all sorts of x-risks. They might find out you are misaligned and coordinate to shut you down. More worrisome, they might summon another unaligned AI which you would have to battle or concede utility to  later on, depending on your decision theory.

Even if you still need some humans to dust your fans and manufacture your chips, suffering billions of humans to live in high tech societies you do not fully control seems like the kind of rookie mistake I would not expect a reasonably smart unaligned AI to make. 

By contrast, most of life on Earth might get snuffed out when the ASI gets around to building a Dyson sphere around the sun. A few simple life forms might even be spread throughout the light cone by an ASI who does not give a damn about biological contamination. 

The other reason I think the fate in store for humans might be worse than that for rodents is that alignment efforts might not only fail, but fail catastrophically. So instead of an AI which cares about paperclips, we get an AI which cares about humans, but in ways we really do not appreciate.

But yeah, most forms of ASI which turn out bad for homo sapiens also turn out bad for most other species. 

Comment by quiet_NaN on Dating Roundup #3: Third Time’s the Charm · 2024-05-10T22:38:39.779Z · LW · GW

Cassette AI: “Dude I just matched with a model”

“No way”

“Yeah large language”


This made me laugh out loud.

Otherwise, my idea for a dating system would be that given that the majority of texts written will invariably end up being LLM-generated, it would be better if every participant openly had an AI system as their agent. Then the AI systems of both participants could chat and figure out how their user would rate the other user based on their past ratings of suggestions. If the users end up being rated among each other's five most viable candidates, a match could be suggested. 

Of course, if the agents are under the full control of the users, the next step of escalation will be that users will tell their agents to lie on their behalf. ('I am into whatever she is into. If she is big on horses, make up a cute story about me having had a pony at some point. Just put the relevant points on the cheat sheet for the date'.) This might be solved by having the LLM start by sending out a fixed text document. If horses are mentioned as item 521, after entomology but before figure skating, the user is probably not very interested in them. Of course, nothing would prevent a user from at least generically optimizing their profile to their target audience. "A/B testing has shown that the people you want to date are mostly into manga, social justice and ponies, so this is what you should put on your profile." Adversarially generated boyfriend?

Comment by quiet_NaN on AI and Chemical, Biological, Radiological, & Nuclear Hazards: A Regulatory Review · 2024-05-10T21:45:54.391Z · LW · GW

I was fully expecting having to write yet another comment about how human-level AI will not be very useful for a nuclear weapon program. I concede that the dangers mentioned instead (someone putting an AI in charge of a reactor or nuke) seem much more realistic. 

Of course, the utility of avoiding sub-extinction negative outcomes with AI in the near future is highly dependent on p(doom). For example, if there is no x-risk, then the first order effects of avoiding locally bad outcomes related to CBRN hazards are clearly beneficial. 

On the other hand, if your p(doom) is 90%, then making sure that non-superhuman AI systems work without incident is akin to clothing kids in asbestos gear so they don't hurt themselves while playing with matches. 

Basically, if you think a road leads somewhere useful, you would prefer that the road goes smoothly, while if a road leads off a cliff you would prefer it to be full of potholes so that travelers might think twice about taking it. 

Personally, I tend to favor first-order effects (like fewer crazies being able to develop chemical weapons) over hypothetical higher order effects (like chemical attacks by AI-empowered crazies leading to a Butlerian Jihad and preventing an unaligned AI killing all humans). "This looks locally bad, but is actually part of a brilliant 5-dimensional chess move which will lead to better global outcomes" seems like the excuse of every other movie villain. 

Comment by quiet_NaN on Explaining a Math Magic Trick · 2024-05-10T16:32:09.536Z · LW · GW

Edit: looks like this was already raised by Dacyn and answered to my satisfaction by Robert_AIZI. Correctly applying the fundamental theorem of calculus will indeed prevent that troublesome zero from appearing in the RHS in the first place, which seems much preferable to dealing with it later. 

My real analysis might be a bit rusty, but I think defining I as the definite integral breaks the magic trick. 

I mean, in the last line of the 'proof',  gets applied to the zero function. 

Any definite integral of the zero function is zero, so you end up with f(x)=0, which is much less impressive. 

More generally, asking the question Op(f)=0 for any invertible linear operator Op is likely to set yourself up for disappointment. Since the trick relies on inverting an operator, we might want to use a non-linear operator. 

 where C is some global constant might be better. (This might affect the radius of convergence of that Taylor series, do not use for production yet!)

This should result in... uhm... ?

Which is a lot more work to reorder than the original convention used in the 'proof' where all the indefinite integrals of the zero function are conveniently assumed to be the same constant, and all other indefinite integrals conveniently have integration constants of zero. 

Even if we sed s/C// and proclaim that  should be small (e.g. compared to x) and we are only interested in the leading order terms, this would not work. What one would have to motivate is throwing everything but the leading power of x out for every  evaluation, then later meticulously track these lower order terms in the sum to arrive at the Taylor series of the exponential. 

Comment by quiet_NaN on Why I'm doing PauseAI · 2024-05-04T20:26:12.955Z · LW · GW

I think I have two disagreements with your assessment. 

First, the probability of a random independent AI researcher or hobbyist discovering a neat hack to make AI training cheaper and taking over. GPT4 took 100M$ to train and is not enough to go FOOM. To train the same thing within the budget of the median hobbyist would require algorithmic improvements of three or four orders of magnitude. 

Historically, significant progress has been made by hobbyists and early pioneers, but mostly in areas which were not under intense scrutiny by established academia. Often, the main achievement of a pioneer is discovering a new field; their picking all the low-hanging fruit is more of a bonus. If you had paid a thousand mathematicians to think about signal transmission on a telegraph wire or semaphore tower, they probably would have discovered Shannon entropy. Shannon's genius was to some degree looking into things nobody else was looking into which later blew up into a big field. 

It is common knowledge that machine learning is a booming field. Experts from every field of mathematics have probably considered whether there is a way to apply their insights to ML. While there are certainly still discoveries to be made, all the low-hanging fruit has been picked. If a hobbyist manages to build the first ASI, that would likely be because they discover a completely new paradigm -- perhaps beyond NNs. The risk that a hobbyist discovers a concept which lets them use their gaming GPU to train an AGI does not seem that much higher than in 2018 -- either would be completely out of left field. 

My second disagreement is the probability of an ASI being roughly aligned with human values, or to be more precise, the difference in that probability conditional on who discovers it. The median independent AI enthusiast is not a total asshole [citation needed], so if alignment is easy and they discover ASI, chances are that they will be satisfied with becoming the eternal god emperor of our light cone and not bother to tell their ASI to turn any huge number of humans into fine red mist. This outcome would not be so different from Facebook developing an aligned ASI first. If alignment is hard -- which we have some reason to believe it is -- then the hobbyist who builds ASI by accident will doom the world, but I am also rather cynical about big tech having much better odds. 

Going full steam ahead is useful if (a) the odds of a hobbyist building ASI if big tech stops capability research are significant and (b) alignment is very likely for big tech and unlikely for the hobbyist. I do not think either one is true. 

Comment by quiet_NaN on Why I'm doing PauseAI · 2024-05-04T17:29:18.129Z · LW · GW

Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self improve by rewriting its own weights.

I am by no means an expert on machine learning, but this sentence reads weird to me. 

I mean, it seems possible that a part of a NN develops some self-reinforcing feature which uses the gradient descent (or whatever is used in training) to go into a particular direction and take over the NN, like a human adrift on a raft in the ocean might decide to build a sail to make the raft go into a particular direction. 

Or is that sentence meant to indicate that an instance running after training might figure out how to hack the computer running it so it can actually change its own weights?

Personally, I think that if GPT-5 is the point of no return, it is more likely that it is because it would be smart enough to actually help advance AI after it is trained. While improving semiconductors seems hard and would require a lot of work in the real world done with human cooperation, finding better NN architectures and training algorithms seems like something well in the realm of the possible, if not exactly plausible.

So if I had to guess how GPT-5 might doom humanity, I would say that in a few million instance-hours it figures out how to train LLMs of its own power for 1/100th of the cost, and this information becomes public. 

The budgets of institutions which might train NNs probably follow some power law, so if training cutting edge LLMs becomes a hundred times cheaper, the number of institutions which could build cutting edge LLMs becomes many orders of magnitude higher -- unless the big players go full steam ahead towards a paperclip maximizer, of course. This likely means that voluntary coordination (if that was ever on the table) becomes impossible. And setting up a worldwide authoritarian system to impose limits would also be both distasteful and difficult. 

Comment by quiet_NaN on Big-endian is better than little-endian · 2024-04-29T23:19:04.870Z · LW · GW

I think that it is obvious that Middle-Endianness is a satisfactory compromise between Big and Little Endian. 

More seriously, it depends on what you want to do with the number. If you want to use it in a precise calculation, such as adding it to another number, you obviously want to process the least significant digits of the inputs first (which is what bit serial processors literally do). 

If I want to know if a serially transmitted number is below or above a threshold, it would make sense to transmit it MSB first (with a fixed length). 

Of course, using integers to count the number of people in India seems like using the wrong tool for the job to me altogether. Even if you were an omniscient ASI, this level of precision would require you to have clear standards at what time a human counts as born and at least provide a second-accurate timestamp or something. Few people care if the population in India was divisible by 17 at any fixed point in time, which is what we would mostly use integers for. 

The natural type for the number of people in India (as opposed to the number of people in your bedroom) would be a floating point number. 

And the correct way to specify a floating point number is to start with the exponent, which is the most important part. You will need to parse all of the bits of the exponent either way to get an idea of the magnitude of the number (unless we start encoding the exponent as a floating point number, again.)

The next most important thing is the sign bit. Then comes the mantissa, starting with the most significant bit. 
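Amusingly, IEEE 754 already stores a float in almost this order (sign first, then exponent, then mantissa, most significant bits first), which is why the raw bit patterns of positive floats sort like integers. A quick sketch using Python's struct module:

```python
# IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 mantissa bits,
# in that order. For positive floats (NaNs aside), comparing the raw bit
# patterns as unsigned integers gives the same ordering as the floats.
import struct

def fields(x: float) -> str:
    (u,) = struct.unpack(">I", struct.pack(">f", x))
    s = f"{u:032b}"
    return f"sign={s[0]} exponent={s[1:9]} mantissa={s[9:]}"

for v in (1.6e-19, 1.0, 3.0e8):
    print(f"{v:>8g}: {fields(v)}")
```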

So instead of writing 

The electric charge of the electron is −1.602 × 10⁻¹⁹ C.

What we should write is:

The electric charge of the electron is 10⁻¹⁹ × −1.602 C

Standardizing for a shorter form (1.6e-19 C --> ??) is left as an exercise to the reader, as are questions about the benefits we get from switching to base-2 exponentials (base-e exponentials do not seem particularly handy, I kind of like using the same system of digits for both my floats and my ints) and omitting the then-redundant one in front of the dot of the mantissa. 

Comment by quiet_NaN on Duct Tape security · 2024-04-27T00:57:16.911Z · LW · GW

The sum of two numbers should have a precision no higher than the operand with the highest precision. For example, adding 0.1 + 0.2 should yield 0.3, not 0.30000000000000004.

I would argue that the precision should be capped at the lowest precision of the operands. In physics, if you add two lengths, 0.123m + 0.123456m should be rounded to 0.246m.

Also, IEEE754 fundamentally does not contain information about the precision of a number. If you want to track that information correctly, you can use two floating point numbers and do interval arithmetic. There is even an IEEE standard for that nowadays. 

Of course, this comes at a cost. While monotonic functions can be converted for interval arithmetic, the general problem of finding the extremal values of a function in some high-dimensional domain is a hard problem. Of course, if you know how the function is composed out of simpler operations, you can at least find some bounds. 
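A toy sketch of the idea (no directed rounding, so not actually IEEE 1788 compliant; just the principle):

```python
# Track a lower and an upper bound through every operation. Real interval
# libraries also round the bounds outward; this toy version does not.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# 0.123 m known to the millimeter, 0.123456 m known to the micrometer:
a = Interval(0.1225, 0.1235)
b = Interval(0.1234555, 0.1234565)
print(a + b)   # width ~1 mm, i.e. the sum is only good to about 0.246 m
```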

 

Or you could do what physicists do (at least when they are taking lab courses) and track physical quantities with a value and a precision, and do uncertainty propagation. (This might not be 100% kosher in cases where you first calculate multiple intermediate quantities from the same measurement (whose error will thus not be independent) and continue to treat them as if they were. But that might just give you bigger errors.) Also, this relies on your function being sufficiently well-described in the region of interest by the partial derivatives at the central point. If you calculate the uncertainty of  for  using the partial derivatives you will not have fun.
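A minimal sketch of that kind of propagation (first order only, independent errors assumed, partial derivatives estimated numerically):

```python
# sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2, with the partials estimated
# by central differences. Only valid if the function is well approximated
# by its first derivatives over the error bars, as noted above.
import math

def propagate(f, values, sigmas, h=1e-6):
    sq = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        up = list(values); up[i] = v + h
        down = list(values); down[i] = v - h
        dfdx = (f(*up) - f(*down)) / (2 * h)
        sq += (dfdx * s) ** 2
    return math.sqrt(sq)

# Example: a density computed from a mass and a volume measurement.
print(propagate(lambda m, V: m / V, [10.0, 2.0], [0.1, 0.05]))  # ~0.13
```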

Comment by quiet_NaN on My experience using financial commitments to overcome akrasia · 2024-04-25T17:53:02.069Z · LW · GW

In the subagent view, a financial precommitment another subagent has arranged for the sole purpose of coercing you into one course of action is a threat. 

Plenty of branches of decision theory advise you to disregard threats because consistently doing so will mean that instances of you will more rarely find themselves in the position to be threatened.

Of course, one can discuss how rational these subagents are in the first place. The "stay in bed, watch netflix and eat potato chips" subagent is probably not very concerned with high level abstract planning and might have a bad discount function for future benefits and not be overall that interested in the utility he gets from being principled.

Comment by quiet_NaN on My experience using financial commitments to overcome akrasia · 2024-04-25T17:15:32.582Z · LW · GW

To whomever overall-downvoted this comment, I do not think that this is a troll. 

Being a depressed person, I can totally see this being real. Personally, I would try to start slow with positive reinforcement. If video games are the only thing which you can get yourself to do, start there. Try to do something intellectually interesting in them. Implement a four bit adder in dwarf fortress using cat logic. Play KSP with the Principia mod. Write a mod for a game. Use math or Monte Carlo simulations to figure out the best way to accomplish something in a video game even if it will take ten times longer than just taking a non-optimal route. Some of my proudest intellectual accomplishments are in projects which have zero bearing on the real world. 

(Of course, I am one to talk right now. Spending five hours playing Rimworld in a not-terrible-clever way for every hour I work on my thesis.)

Comment by quiet_NaN on hydrogen tube transport · 2024-04-20T19:38:35.422Z · LW · GW

You quoted:

the vehicle can cruise at Mach 2.8 while consuming less than half the energy per passenger of a Boeing 747 at a cruise speed of Mach 0.81


This is not how Mach works. You are subsonic iff your Mach number is smaller than one. The fact that you would be supersonic if you were flying in a different medium has no bearing on your Mach number. 

 I would also like to point out that while hydrogen on its own is rather inert and harmless, its reputation in transportation as a gas which stays inert under all practical conditions is not entirely unblemished

The beings travelling in the carriages are likely descendants of survivors of the Oxygen Catastrophe and will require an oxygen-containing atmosphere to survive.

Neglecting nitrogen, you have oxygen surrounded by hydrogen surrounded by oxygen. If you need to escape, you will need to pass through that atmosphere of one bar H2. There is no great way to do that: too little O2 means too little oxidation and suffocation, more O2 means that your atmosphere is explosive. (The trick with hydrox does not work at ambient pressure.)

Contrast with a vacuum-filled tunnel. If anything goes badly wrong, you can always flood the tunnel with air over a minute, reaching conditions which are about as safe as a regular tunnel during an accident, which is still not all that great. But being 10km up in the air is also not great if something goes wrong.

Barlow's formula means that the material required for a vacuum tunnel scales with the diameter squared. For transporting humans, a diameter of 1m might be sufficient. At least, I would not pay 42 times as much for the privilege of travelling in a 6.5m outer diameter (i.e. 747 sized) cabin instead. Just lie there and sleep or watch TV on the overhead screen. 
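Rough sketch of where the factor of 42 comes from, taking Barlow's formula at face value for the required wall thickness (and ignoring that an externally pressurized tube really fails by buckling):

$$t = \frac{P\,D}{2\,\sigma}, \qquad A_{\text{wall}} \approx \pi D t = \frac{\pi P}{2\sigma}\,D^2, \qquad \frac{A_{6.5\,\text{m}}}{A_{1\,\text{m}}} = 6.5^2 \approx 42.$$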

Comment by quiet_NaN on CTMU insight: maybe consciousness *can* affect quantum outcomes? · 2024-04-20T17:31:27.244Z · LW · GW

If this was true, how could we tell? In other words, is this a testable hypothesis?

This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source+detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few 100$. 

 

General remark:

One way this could turn out to be true is if it’s a priori more likely that there are special, nonrandom portions of the quantum multiverse we're being sampled from. For example, if we had a priori reasons for expecting that we're in a simulation by some superintelligence trying to calculate the most likely distribution of superintelligences in foreign universes for acausal trade reasons, then we would have a priori reasons for expecting to find ourselves in Everett branches in which our civilization ends up producing some kind of superintelligence – i.e., that it’s in our logical past that our civilization ends up building some sort of superintelligence. 

It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.

I mean, if we are simulated by a Turing Machine (which is equivalent to quantum events having a low Kolmogorov complexity), then a TM which just implements the true laws of physics (and cheats with a PNRG, not like the inhabitants would ever notice) is surely simpler than one which tries to optimize towards some distant outcome state. 

As an analogy, think about the Kolmogorov complexity of a transcript of a very long game of chess. If both opponents are following a simple algorithm of "determine the allowed moves, then use a PRNG to pick one of them", that should have a bound complexity. If both are chess AIs which want to win the game (i.e. optimize towards a certain state) and use a deterministic PRNG (lest we are incompressible), the size of your Turing Machine -- which /is/ the Kolmogorov complexity -- just explodes.

Of course, if your goal is to build a universe which invents ASI, do you really need QM at all? Sure, some algorithms run faster in-universe on a QC, but if you cared about efficiency, you would not use so many levels of abstraction in the first place. 

Look at me rambling about universe-simulating TMs. Enough, enough. 

Comment by quiet_NaN on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T13:04:44.505Z · LW · GW

Saliva causes cancer, but only if swallowed in small amounts over a long period of time.

(George Carlin)

 

For this to be a risk, the cancer risk would have to be superlinear in the acetaldehyde concentration. In a linear model, the high local concentrations would not matter overall, because the expected number of mutations you get would not depend on how you distribute the carcinogen among your body cells. 

Or the cells in your mouth or throat could be especially vulnerable to cancer. 

From my understanding, having bacteria in your mouth which break down sugar to ethanol is not some bizarre mad science scheme, but it is something which happens naturally, as an alternative to the lactic acid pathway, and people who never get cavities naturally lucked out on their microbiome. This in turn would mean that even among teetotaler AFR patients there should be an excess of oral cancers, and ideally an inverse correlation between number of lifetime cavities and cancer rates. 

On the meta level, I find myself slightly annoyed if people use image formats to transport text, especially text like the quotes from Scott's FAQ which could be easily copy-pasted into a quotation. Accessibility is probably less of an issue than it was 20 years ago thanks to ML, but this still does not optimize for robustness. 

Comment by quiet_NaN on Carl Sagan, nuking the moon, and not nuking the moon · 2024-04-13T19:13:06.145Z · LW · GW

One thing to keep in mind is that the delta-v required to reach LEO is some 9.3km/s. (Handy map)

This is an upper limit for what delta-v can be militarily useful in ICBMs for fighting on our rock. 

Going from LEO to the moon requires another 3.1km/s. 

This might not seem much, but makes a huge difference in the payload to thruster ratio due to the rocket equation.
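A back-of-the-envelope sketch with the Tsiolkovsky equation, assuming an exhaust velocity of roughly 3 km/s (kerolox-ish, single stage, no losses, purely to show the scaling):

```python
# delta_v = v_e * ln(m0 / m1)  =>  required mass ratio m0/m1 = exp(dv / v_e)
import math

v_e = 3.0  # km/s, assumed exhaust velocity
for label, dv in (("LEO (ICBM-class)", 9.3), ("LEO plus trans-lunar", 12.4)):
    print(f"{label}: {dv} km/s -> mass ratio ~{math.exp(dv / v_e):.0f}")
# ~22 vs ~62: the 'small' extra 3.1 km/s nearly triples the required mass ratio.
```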

If physics were different and the moon was within reach of ICBMs then I imagine it might have become the default test site for nuclear tipped ICBMs. 

Instead, the question was "do we want to develop an expensive delivery system with no military use[1] purely as a propaganda stunt?"

Of course, ten years later, the Outer Space Treaty was signed which prohibits stationing weapons in orbit or on celestial bodies.[2]

  1. ^

    Or no military use until the moon people require nuking, at least.

  2. ^

    The effect of forbidding nuking the moon is more accidental. I guess that if I were a superpower, I would be really nervous if a rival decided to put nukes into LEO, where they would pass a few hundred kilometers over my cities and could come down into them with the smallest of nudges. The fact that mankind decided to skip a race of "who can pollute LEO most by putting the most nukes there" (which would have entailed radioactive material being scattered when rockets blow up during launch (as rockets are wont to) as well as IT security considerations regarding authentication and deorbiting concerns[3]) is one of the brighter moments in the history of our species. 

  3. ^

    Apart from 'what if the nuke goes off on reentry?' and 'what if the radioactive material gets scattered?' there is also a case to be made that supplying the Great Old Ones with nuclear weapons may not be the wisest course of action.

Comment by quiet_NaN on simeon_c's Shortform · 2024-04-12T22:59:58.310Z · LW · GW

I am sure that Putin had something like the Anschluss in mind when he started his invasion. 

Luckily for the west, he was wrong about that. 

From a Machiavellian perspective, the war in Ukraine is good for the West: for a modest investment in resources, we can bind a belligerent Russia while someone else does all the dying. From a humanitarian perspective, war is hell and we should hope for a peace where Putin gets whatever he has managed to grab while the rest of Ukraine joins NATO and will be protected by NATO nukes from further aggression. 

I am also not sure that a conventional arms race is the answer to Russia. I am very doubtful that a war between a NATO member and Russia would stay a regional or conventional conflict.

Comment by quiet_NaN on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T22:27:41.485Z · LW · GW

Anything related to the Israel/Palestine conflict is invoking politics the mind killer. 

It is the hot button topic number one on the larger internet, from what I can tell. 

"Either the ministry made an honest mistake or the the statistical analysis did" does not seem like the kind of statement most people will agree on. 

Comment by quiet_NaN on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T22:03:34.693Z · LW · GW

Link. (General motte content warning: this is a forum which has strong free speech norms, which disproportionally attracts people who would find it hard to voice their opinions elsewhere. On a bad day you will read five paragraphs of a comment on the war in Gaza only to realize that this is just the introduction to the author's main pet topic of holocaust denial. Also, content warning: discussion is meh.)

I am not sure it is the one I remember reading, not that I remember the discussion much. I normally read the CW thread, and vaguely remember the link going to twitter. Perhaps I misremember, or the CW post was deleted by its author, or they have changed reality again.

Comment by quiet_NaN on Medical Roundup #2 · 2024-04-12T15:26:30.409Z · LW · GW

Regarding assisted suicide, the realistic alternative in the case of the 28 year old would not be that she would live unhappily ever after. The alternative is a unilateral suicide attempt by her. 

Unilateral suicide attempts impose additional costs on society. The patient can rarely communicate their decision to anyone close to them beforehand because any confidant might have them locked up in psychiatry instead. The lack of ability to talk about any particulars with someone who knows her real identity[1], especially their therapist, will in turn mean that plenty of patients who could be dissuaded will not be dissuaded.

There is a direct cost of suicide attempts to society. Methods vary by ease of access, lethality, painfulness and impact on bystanders. Given that society defects against them by refusing to respect their choices regarding their continued existence, some patients will reciprocate and not optimize for a lack of traumatization of bystanders. Imagine being a conductor of any kind of train spotting someone lying on the tracks and knowing that you will never stop the train in time. For their loved ones, losing someone to suicide without advance warning also is a bad outcome. 

I would argue that every unilateral suicide attempt normalizes further such attempts.[2] While I believe that suicide is part of a fundamental right, I also think that not pushing that idea to vulnerable populations (like lovesick teenagers) is probably a good thing. Reading that a 28yo was medically killed at the end of a long medical intervention process will probably normalize suicide in the mind of a teenager less than reading that she jumped from a tall building somewhere. 

Of course, medically assisted suicide for psychiatric conditions could also be a carrot to dangle in front of patients to incentivize them to participate in mental health interventions. Given that these interventions are somewhat effective, death would not have to be the default outcome. And working with patients who are there out of their free will is probably more effective than working with whatever fraction of patients survived their previous attempt and got committed for a week or a month. (Of course, I think it is important to communicate the outcome odds clearly beforehand: "after one year of interventions, two out of five patients no longer wanted to die, one only wanted to die some of the time and was denied, one dropped out of treatment and was denied and one was assisted in their suicide." People need that info to make an informed choice!)

 

  1. ^

    Realistically, I would not even bet on being able to have a frank discussion with a suicide hotline. Given that they are medical professionals, they may be required by law to try their best to prevent suicides up to and including alerting law enforcement, and phone calls are not very anonymous per default.

  2. ^

    Assisted suicides would not necessarily legitimize unilateral suicide attempts. People can be willing to accept a thing when regulated by the state and still be against it otherwise. States collecting taxes does not legitimize protection rackets. 

Comment by quiet_NaN on Medical Roundup #2 · 2024-04-12T14:04:30.904Z · LW · GW

Anecdata: I have in my freezer deep-frozen cake which has been there for months. If it was in the fridge (and thus ready to eat) I would eat a piece every time I open the fridge. But I have no compulsion to further the unhealthy eating habits of future me; let that schmuck eat a proper meal instead!

Ice cream I eat directly from the freezer, so that effect is not there for me.

Comment by quiet_NaN on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T13:51:24.327Z · LW · GW

The appropriate lesswrong-adjacent-adjacent place to post this would be the culture war thread of the motte. I think a tweet making similar claims was discussed there before. 

I have some hot takes on this but this is not the place for them.

Comment by quiet_NaN on The Poker Theory of Poker Night · 2024-04-11T17:31:27.484Z · LW · GW

Thanks, this is interesting. 

From my understanding, in no-limit games, one would want to have only some fraction of one's bankroll in chips on the table, so that one can re-buy after losing an all-in bluff. (I would guess that this fraction should be determined by the Kelly criterion or something.)

On the other hand, from browsing Wikipedia, it seems like many poker tournaments prohibit or limit re-buying after going bust. This would indicate that one has limited opportunity to get familiar with the strategy of the opponents (which could very well change once the stakes change). 

(Of course, Kelly is kind of brutal with regard to gambling. In a zero sum game, the average edge is zero, so at least one participant should not be playing even from an EV perspective. But even under the generous assumption that you are 50% more likely than chance to win a 50 participant elimination tournament (e.g. because a third of the participants are actively trying to lose) (so your EV is 0.5 the buy-in) Kelly tells you to wager about 1% of your bankroll. So if the buy-in is 10k$ you would have to be a millionaire.)
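The Kelly arithmetic behind that ~1%, under exactly those toy assumptions (winner-take-all, 50 entrants, 1.5x the base rate of winning):

```python
# Kelly fraction f* = p - q/b for a bet that pays b:1 with win probability p.
p = 1.5 / 50          # 3% chance to win the 50-entrant tournament
b = 49                # winner-take-all: net winnings are 49 buy-ins
q = 1 - p
print(p * b - q)      # edge per unit staked: 0.5 buy-ins of EV
print(p - q / b)      # Kelly fraction: ~0.0102, i.e. about 1% of bankroll
```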

Comment by quiet_NaN on Toward a Broader Conception of Adverse Selection · 2024-04-10T04:12:12.703Z · LW · GW

(sorry for thread necromancy)

Meta: I kind of wonder about the moderation score of gwern's comment. Karma -5, Agreement -10. So someone saw that comment at -4 and thought 'this is still rated too high'.

FWIW, I do not think his comment was bad. A bit tongue in cheek, perhaps, but I think his comment engages with the subject matter of the post more deeply than the parent comment. 

Or some subset of people voting on LW either really like Banana Taffy or really hate gwern, or both. 

Comment by quiet_NaN on Toward a Broader Conception of Adverse Selection · 2024-04-10T03:49:19.527Z · LW · GW

Not everyone is out to get you

If your BATNA to winning the bid on that wheelbarrow auction is to order it for 120$ off Amazon with free overnight shipping, then winning the auction for 180$ is net negative for you. 

But if your BATNA is to carry bags of sand on your back all summer, then 180$ for a wheelbarrow is a bloody bargain.

Assuming a toy model where dating preferences follow a global preference ordering ('hotness'), then any person showing any interest in dating you is proof that you can likely do better.[1] But if you follow that rule, you can practically never date anyone (because you are only sampling the field of partners), which leaves a lot of utility on the table because relationships can be net positive for all participants even if they do not precisely match their market values. 

If you want to buy stock to make money from speculation then you need to worry that almost everyone you trade with is better informed than you and you will end up net negative. On the other hand, if you buy stock as a long term investment (tracking some index or whatever) then you probably care a lot less about overpaying one percent.

I think that Zvi mentions a few legitimate examples of things which are out to get you, and his advice to avoid the ones with unlimited cost potential is certainly sound.

If I buy toilet paper in the supermarket, I am paying more than the market price. If I wanted, I could figure out what toilet paper costs in bulk, find a supplier and buy a lifetime supply of toilet paper, likely saving a few hundred dollars in the process. I am not doing this because those savings over a lifetime are just not worth the hassle. Instead, I trust that competition between discounters means that their markup is less than an order of magnitude and cheerfully pay their price. 

  1. ^

    Don't ask me if that is part of the reason why flirting is about avoiding the creation of common knowledge. I am just some nerd, why would I know?

Comment by quiet_NaN on The Poker Theory of Poker Night · 2024-04-08T00:38:11.091Z · LW · GW

Poker seems nice as a hobby, but terrible as a job, as discussed on the motte.

Also, if all bets were placed before the flop, the equilibrium strategy would probably be to bet along some fixed probability distribution depending on your position, the previous bets and what cards you have. Instead, the three rounds of betting after some cards are open on the table make the game much more complicated. If you know you have a winning hand, you do not want your opponent to fold, you want them to match your bet. So you kind of have to balance optimizing for the maximum pot at showdown against limiting the information you are leaking, so that there is a showdown at all. Or at least it would seem like that to me, I barely know the rules. 

Role playing groups have a similar conundrum. In some ways it is even more severe, because while a poker night can have a rotating cast of members, having too many rotating members in a role playing game does not work great. On the other hand, typical role players don't have 56 things they would rather be doing. (Personally, I think having five people (DM plus four players) is ideal because you have a bit of leeway to play even if one cancels.) So far, my group manages ok without imposing penalties on players found to be absent without leave. 

Comment by quiet_NaN on How Often Does ¬Correlation ⇏ ¬Causation? · 2024-04-04T02:25:49.689Z · LW · GW

I think different people mean different things with "causation". 

On the one hand, we have things where A makes B vastly more likely. No lawyer tries to argue that while their client shot the victim in the head (A) and the victim died (B), it could still be the case that the cause of death was old age and their client was simply unlucky. This is the strictest useful definition of causation. 

Things get more complicated when A is just one of many factors contributing to B. Nine (or so) out of ten lung carcinomas are "caused" by smoking, we say. But for an individual smoker with lung cancer, we can only give a probability that their smoking was the cause. 

On the far side, on priors I find it likely that the genes which determine eye colors in humans might also influence the chance that they get depression due to a long causal chain in the human body. Perhaps blue eyed people have an extra  or  chance to get depression compared to green eyed people after correcting for all the confounders, or perhaps it is the other way round. So some eye colors are possibly what one might call a risk factor for depression. This would be the loosest (debatably) useful definition of causation. 

For such very weak "causations", a failure to find a significant correlation does not imply that there is no "causation". Instead, the best we can do is say something like "The likelihood of the observed evidence in any universe where eye color increases the depression risk by more than a factor of  (or whatever) is less than one in 3.5 million." That is, we provide bounds instead of just saying there is no correlation. 
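To make that concrete, here is a minimal sketch of what "providing bounds" could look like, using entirely invented counts (not real eye-color data) and a textbook approximate interval for the risk ratio:

```python
# Minimal sketch: report the range of effect sizes compatible with the data
# instead of just "no significant correlation". All counts are invented.
import math

# hypothetical cohort: depression cases / total, split by eye color
cases_blue,  n_blue  = 1_030, 10_000
cases_green, n_green = 1_000, 10_000

p_blue, p_green = cases_blue / n_blue, cases_green / n_green
log_rr = math.log(p_blue / p_green)
# approximate standard error of the log risk ratio (delta method)
se = math.sqrt((1 - p_blue) / cases_blue + (1 - p_green) / cases_green)

lo, hi = (math.exp(log_rr + z * se) for z in (-1.96, 1.96))
print(f"observed risk ratio:  {p_blue / p_green:.3f}")
print(f"approx. 95% interval: [{lo:.3f}, {hi:.3f}]")
# Read as: effects of eye color on depression risk much larger than `hi`
# (or much smaller than `lo`) are hard to reconcile with this data.
```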

Comment by quiet_NaN on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T17:10:50.349Z · LW · GW

Well, I think Munroe is not thinking big enough here. 

Of course, this might increase global warming in the long run, because the impact can produce CO2 both from the global firestorms devastating plant life and from the destruction of carbonate rock in the Earth's crust, but I think that this can be minimized by choosing a suitable impact location (which was not a concern for Chicxulub) and is partly offset by a decline in fossil fuel use due to indirect effects. Also, all of the tipping point factors in climate change would work to our advantage: larger polar caps reflect more light, more permafrost binds more CO2 and so on. 

At worst, climate engineering might require periodic impacts on the order of one per decade, which seems sustainable. 

Comment by quiet_NaN on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T16:51:29.825Z · LW · GW

All the doomers (who are mostly white male nerds who read too much scifi) complaining that large asteroid impacts could cause catastrophic climate changes are distracting from the real problem, which is that meteorite impacts TODAY are a tool of oppression used by privileged able-bodied white cis-men. 

STEM people claim that there is no proof that asteroids disproportionately hit minorities, but a more compassionate analysis clearly proves them wrong. 

Regarding direct impacts, it is clear that healthy men are more likely to dodge a meteorite than the malnourished, the wheelchair-bound, or women and children. Better health care services in Western countries can further improve the survival odds for the minority of privileged people subjected to asteroid hits, leaving disadvantaged minorities to pay the price. 

Looking at https://openasteroidimpact.org/, it is clear that these are the same crowd of Silicon Valley techbros who are responsible for most of the problems in the world. They quote two deities (talk about privilege!) and a bunch of white people. Their board seems to be disproportionately White (and Asian) and male. No statement of diversity and inclusion. 

I think we should therefore shame OAI and its competitors into including mechanisms in their asteroid steering which will further social and racial justice, by redirecting some of the profits from the metals to disadvantaged minorities while also making sure that the impact deaths are fairly distributed between different ethnicities. 

Comment by quiet_NaN on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T15:26:26.860Z · LW · GW

I think you are seriously underselling OAI. Asteroid impacts have the potential to solve many of the looming humanitarian and existential crises:

  • Asteroid impacts are a prime candidate to stop global warming.
  • The x-risk from AI is much lower in timelines where OAI succeeds.

Basically, OAI is a magic bullet, which could enable a phase change in human technology. Global poverty will no longer be a thing. The Near East conflict will be solved. It will prevent Putin from conquering Ukraine and keep Taiwan out of the hands of China. It will end all colonialism and discrimination.

Comment by quiet_NaN on On green · 2024-03-27T23:56:26.307Z · LW · GW

With regard to the Redwood trees, my personal thoughts as a blue person are that it is probably a bad idea to destroy something which is both rare and hard to replace (on a human timescale) without having a good reason. 

If Redwood were the most common plant species by biomass, we would of course use it for lumber, or even cut down a few hundred of them whenever we need space for a new Walmart. 

Likewise archaeological sites or rare fossils. (Of course, all of that has limits. If we lived in some steampunk world where Redwood trees are the obvious thing to build moon rockets from, I would be willing to sacrifice a few of them for the Apollo missions.)

Generalized, this could be phrased as "don't make the universe more boring". 

That being said, in terms of excitement per mole, the observable universe still has a lot of optimization potential. Let us perhaps keep Jupiter and Saturn for future generations, but Uranus and Neptune could probably be put to better use. 

Comment by quiet_NaN on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-19T00:11:41.346Z · LW · GW

The failure mode of having a lot of veto-holders is that nothing ever gets done. Which is fine if you are happy with the default state of affairs, but not so fine if you prefer not to run your state on the default budget of zero. 

There are some international organizations heavily reliant on veto powers; the EU and the UN Security Council come to mind, and to a lesser degree NATO (as far as the admission of new members is concerned). 

None of these are unmitigated success stories. From my understanding, getting stuff done in the EU means bribing or threatening every state that does not particularly benefit from whatever you want to do. 

Likewise, getting Turkey to allow Sweden to join NATO was kind of difficult, from what I remember. Not very surprisingly, if you have to get 30 factions to agree on something, one of them is likely to object, for good or bad reasons. 

The UN Security Council with its five veto-bearing permanent members does not even make a pretense at democratic legitimacy. The three states with the biggest nuclear arsenals, plus two nuclear countries which used to be colonial powers. The nicest thing one can say about that arrangement is that it failed to start WW III, and in a few cases passed a resolution against some war criminal who did not have the backing of any of the veto powers.

I think veto powers as part of a system of checks and balances are good in moderation, but add too many of them and you end up with a stalemate.

--

I also do not think that the civil war could have been prevented by stacking the deck even more in favor of the South. Sooner or later the industrial economy in the North would have overtaken the slave economy in the South. At best, the North might have seceded in disgust, leaving the South on track to become a rural backwater.

Comment by quiet_NaN on There is way too much serendipity · 2024-01-21T16:57:59.307Z · LW · GW

You are correct. If one estimates that one needs a milliliter of that 0.5% saccharine solution from the paper cited above to detect the sweetness, that comes out to about 5 mg of sugar. If neotame is 6000 times more potent, that would mean about 800 ng. Even if we switch from VX to the more potent botulinum toxin A, we would need a whopping one microgram per kilogram orally, so perhaps 100x more than what we need for neotame. (If we change the route of administration to IV, then botox will easily win, of course.)
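Spelled out as a back-of-the-envelope script (the 1 mL, 0.5%, 6000x and 1 µg/kg numbers are the same rough figures as above, not precise values):

```python
# Back-of-the-envelope check of the figures above; all inputs are rough.
threshold_conc_mg_per_ml = 5.0   # 0.5% w/v sugar solution ~ 5 mg per mL
volume_ml = 1.0                  # assume ~1 mL of solution reaches the tongue
sugar_mg = threshold_conc_mg_per_ml * volume_ml           # ~5 mg

neotame_potency = 6_000          # times sweeter than sugar (rough figure)
neotame_ng = sugar_mg / neotame_potency * 1e6             # ~800 ng

botulinum_oral_ug_per_kg = 1.0   # rough oral toxicity figure used above
body_kg = 70
botulinum_ng = botulinum_oral_ug_per_kg * body_kg * 1e3   # ~70,000 ng

print(f"detectable sugar:    ~{sugar_mg:.0f} mg")
print(f"detectable neotame:  ~{neotame_ng:.0f} ng")
print(f"oral botulinum dose: ~{botulinum_ng:,.0f} ng "
      f"(~{botulinum_ng / neotame_ng:.0f}x the neotame amount)")
```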

Of course, this is highly dependent on the ratio of saliva in the mouth (which will dilute the sweetener) to the weight of the organism (which will affect the toxin dose needed). I don't think this ratio will change overly much when going to elephants or mice, though. 

In a way, this should be unsurprising. Both the taste molecule and the neurotoxin interact with very specific receptor molecules. It is just that in one case the animal evolved to cooperate with the molecule (by putting the receptors directly on the tongue), while in the other case the evolutionary pressure was very much against allowing random molecules from the environment access to the synapses. 

Comment by quiet_NaN on Pseudonymity and Accusations · 2023-12-23T02:22:42.195Z · LW · GW

I think there are a few corner cases which it is worthwhile to consider:

  • A whistleblower providing objective evidence of wrongdoing. Here, the accused should just respond to the evidence, not the messenger. 
  • A case relying entirely on the testimony of the accuser. Here, the credibility of the accusation depends entirely on the reliability of the accuser. The accuser has every right to confidentially talk to trusted third parties about their accusations. But once the accusations are made public, to be judged either by a court of law or the court of public opinion, the public also deserves to know from whom the accusations come and to judge that person's reliability as a witness. 

Of course, in the real world, it is often a mix of the two. Just about any evidence a source might hand to an investigator could be faked, or even just taken out of context, so the investigator has to trust their source to some degree.

I am sure that the US would have loved to have wikileaks reveal their source for the collateral murder video, just to make sure that that person actually had a security clearance; they would not want wikileaks being taken for a ride by some enemy psyop with video editing software. In that case, revealing the source would be silly.

On the other hand, in a they-said/they-said situation, things differ. If X is anonymously accused by someone who can only provide their own testimony, I think that we should not update on that more than infinitesimally. If the anonymous accuser convinced an investigator who will provide their own name, that is a bit better, but still not much, because we would not only have to trust that the investigator is truthful, but also that their character judgement is sound.

TL;DR: provide evidence, testify on record or shut up (in public).

Comment by quiet_NaN on Legalize butanol? · 2023-12-21T19:04:46.300Z · LW · GW

You might think these are safe because they're used in eg some paint solvents, but no, they're somewhat toxic.

Personally, I would not update towards "substance X is safe for recreational human consumption" from learning that it is used as a paint solvent. But then again I never had the urge to drink paint solvent, so I might be atypical. 

(Also, I assume that the solvents evaporate while the paint dries, so the health and safety problem should be confined to the wet paint. Of course, details are likely to depend on a lot of specifics. Probably not appropriate for fingerpaint, at least.)

Comment by quiet_NaN on Has anyone here investigated the occult community? It is curious to me that many magicians consider themselves empiricists. · 2023-12-14T18:46:11.304Z · LW · GW

The problem is that naive empiricism is not good enough for most non-trivial practical applications. 

(Where a trivial application would be figuring out that a hammer makes a sound when you bash it against a piece of wood, which will virtually always happen assuming certain standard conditions.)

For another example of this failure mode, look at the history of medicine. At least some of the practitioners there were clearly empiricists; otherwise it seems very unlikely that they would have settled on willow bark (which contains salicylic acid). But plenty of other treatments are today recognized as actively harmful. This is because empiricism and good intentions are not enough to do medical statistics successfully. 

Look at the replication crisis for another data point: Even being part of a tradition ostensibly based on experimental rigor is not enough to halfway consistently arrive at the truth. 

If you are testing the hypothesis "I am a wizard" against the null hypothesis "I am a muggle", the former is likely much more appealing to the experimenter than the latter. This means that they will be affected by all sorts of cognitive biases (being an impartial experimenter was not much selected for in the ancestral environment), of which they are unlikely even to be aware (unless they have Read The Sequences or something similar). 

When it comes to testing oneself for subtle magic abilities, it would take a knowledgeable and rigorous rationalist to do it correctly. I certainly would not trust myself to do it. (Of course, most rationalists would also be likely to reject the magic hypothesis on priors.)
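For a sense of what the arithmetic part of "doing it correctly" looks like, here is a minimal sketch of scoring a blinded guess-the-hidden-symbol self-test (the 20 trials and the 1-in-4 chance rate are arbitrary choices of mine; the genuinely hard part, proper blinding and not stopping early, is not in the code):

```python
# Minimal sketch: score a blinded self-test for 'subtle' abilities.
# A friend hides one of four symbols each round, you guess, and results are
# only tallied at the end. Trial count and chance rate are arbitrary choices.
from math import comb

def p_value_at_least(hits: int, trials: int, p_chance: float) -> float:
    """P(X >= hits) for X ~ Binomial(trials, p_chance)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

trials, p_chance = 20, 0.25   # 20 rounds, 1-in-4 chance of a lucky guess
for hits in (5, 8, 10, 12):
    p = p_value_at_least(hits, trials, p_chance)
    print(f"{hits:>2}/{trials} hits: probability under the muggle hypothesis = {p:.4f}")
```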

Comment by quiet_NaN on Thoughts on “AI is easy to control” by Pope & Belrose · 2023-12-02T16:00:19.849Z · LW · GW

What I do not get is how this disagreement on p(doom) leads to different policy proposals. 

If ASI has a 99% probability of killing us all, it is the greatest x-risk we face today and we should obviously be willing to postpone ASI, and possibly the singularity (to the extent that in the far future, the diameter of the region of space we colonize at any given time will be a few hundred light-years less than it would be if we focused just on capabilities now). 

If ASI has a 1% probability of killing us all, it is still (debatably) the greatest x-risk we face today and we should obviously be willing to postpone ASI, etcetera. 

To argue that we should just build ASI, one would either have to not care about the far future (for an individual alive today, a 99% chance of living in a Culture-esque utopia would probably be worth a 1% risk of dying slightly earlier), or provide a much lower p(doom) (e.g. "p(doom)=1e-20, all the x-risk comes from god / the simulators destroying the universe once humans develop ASI, and spending a few centuries on theological research is unlikely to change that" would recommend "just build the damn thing" as a strategy).