Comments

Comment by quiet_NaN on Why I'm doing PauseAI · 2024-05-04T20:26:12.955Z · LW · GW

I think I have two disagreements with your assessment. 

First, the probability that a random independent AI researcher or hobbyist discovers a neat hack to make AI training cheaper and takes over. GPT-4 took some $100M to train, and it is not enough to go FOOM. To train the same thing within the budget of the median hobbyist would require algorithmic advances of three or four orders of magnitude. 

Historically, significant progress has been made by hobbyists and early pioneers, but mostly in areas which were not under intense scrutiny by established academia. Often, the main achievement of a pioneer is discovering a new field; picking all the low-hanging fruit is more of a bonus. If you had paid a thousand mathematicians to think about signal transmission on a telegraph wire or semaphore tower, they probably would have discovered Shannon entropy. Shannon's genius was to some degree looking into things nobody else was looking into, which later blew up into a big field. 

It is common knowledge that machine learning is a booming field. Experts from every field of mathematics have probably considered whether there is a way to apply their insights to ML. While there are certainly still discoveries to be made, the low-hanging fruit has been picked. If a hobbyist manages to build the first ASI, that would likely be because they discovered a completely new paradigm -- perhaps beyond NNs. The risk that a hobbyist discovers a concept which lets them use their gaming GPU to train an AGI does not seem that much higher than in 2018 -- either would be completely out of left field. 

My second disagreement is the probability of an ASI being roughly aligned with human values, or to be more precise, the difference of that probability conditional on who discovers it. The median independent AI enthusiast is not a total asshole [citation needed], so if alignment is easy and they discover ASI, chances are that they will be satisfied with becoming the eternal god emperor of our light cone and not bother to tell their ASI to turn any huge number of humans to fine red mist. This outcome would not be so different from the one where Facebook develops an aligned ASI first. If alignment is hard -- which we have some reason to believe it is -- then a hobbyist who builds ASI by accident will doom the world, but I am rather cynical about big tech's odds being much better. 

Going full steam ahead is useful if (a) the odds of a hobbyist building ASI if big tech stops capability research are significant and (b) alignment is very likely for big tech and unlikely for the hobbyist. I do not think either one is true. 

Comment by quiet_NaN on Why I'm doing PauseAI · 2024-05-04T17:29:18.129Z · LW · GW

Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self improve by rewriting its own weights.

I am by no means an expert on machine learning, but this sentence reads weird to me. 

I mean, it seems possible that a part of a NN develops some self-reinforcing feature which uses gradient descent (or whatever is used in training) to go in a particular direction and take over the NN, like a human adrift on a raft in the ocean might decide to build a sail to make the raft go in a particular direction. 

Or is that sentence meant to indicate that an instance running after training might figure out how to hack the computer running it so it can actually change its own weights?

Personally, I think that if GPT-5 is the point of no return, it is more likely that it is because it would be smart enough to actually help advance AI after it is trained. While improving semiconductors seems hard and would require a lot of work in the real world done with human cooperation, finding better NN architectures and training algorithms seems like something well in the realm of the possible, if not exactly plausible.

So if I had to guess how GPT-5 might doom humanity, I would say that in a few million instance-hours it figures out how to train LLMs of its own power for 1/100th of the cost, and this information becomes public. 

The budgets of institutions which might train NNs probably follow some power law, so if training cutting-edge LLMs becomes a hundred times cheaper, the number of institutions which could build cutting-edge LLMs becomes many orders of magnitude higher -- unless the big players go full steam ahead towards a paperclip maximizer, of course. This likely means that voluntary coordination (if that was ever on the table) becomes impossible. And setting up a worldwide authoritarian system to impose limits would be both distasteful and difficult. 

Comment by quiet_NaN on Big-endian is better than little-endian · 2024-04-29T23:19:04.870Z · LW · GW

I think that it is obvious that Middle-Endianness is a satisfactory compromise between Big and Little Endian. 

More seriously, it depends on what you want to do with the number. If you want to use it in a precise calculation, such as adding it to another number, you obviously want to process the least significant digits of the inputs first (which is what bit serial processors literally do). 

If I want to know if a serially transmitted number is below or above a threshold, it would make sense to transmit it MSB first (with a fixed length). 
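To make the two directions concrete, here is a toy sketch (my own illustration, not from the post) of both operations on bit streams:

```python
# Toy sketch: addition consumes bits least-significant-first (ripple carry),
# while comparison can stop early if bits arrive most-significant-first.

def add_lsb_first(a, b):
    """a, b: equal-length lists of bits with the LSB at index 0."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & 1)
        carry = s >> 1
    out.append(carry)
    return out  # LSB first, one bit longer

def compare_msb_first(a, b):
    """a, b: equal-length lists of bits with the MSB at index 0."""
    for x, y in zip(a, b):
        if x != y:  # first differing bit decides; the rest can be ignored
            return -1 if x < y else 1
    return 0

print(add_lsb_first([0, 1, 1], [1, 0, 1]))      # 6 + 5 = 11 -> [1, 1, 0, 1]
print(compare_msb_first([1, 1, 0], [1, 0, 1]))  # 6 > 5 -> 1
```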

Of course, using integers to count the number of people in India seems like the wrong tool for the job to me altogether. Even if you were an omniscient ASI, this level of precision would require you to have clear standards for when a human counts as born, and to provide at least a second-accurate timestamp or something. Few people care whether the population of India was divisible by 17 at any fixed point in time, which is what we would mostly use integers for. 

The natural type for the number of people in India (as opposed to the number of people in your bedroom) would be a floating point number. 

And the correct way to specify a floating point number is to start with the exponent, which is the most important part. You will need to parse all of the bits of the exponent either way to get an idea of the magnitude of the number (unless we start encoding the exponent as a floating point number, again.)

The next most important thing is the sign bit. Then comes the mantissa, starting with the most significant bit. 

So instead of writing 

The electric charge of the electron is $-1.602 \times 10^{-19}\,\mathrm{C}$.

What we should write is:

The electric charge of the electron is $10^{-19} \times (-1.602)\,\mathrm{C}$.

Standardizing for a shorter form (1.6e-19 C --> ??) is left as an exercise to the reader, as are questions about the benefits we get from switching to base-2 exponentials (base-e exponentials do not seem particularly handy, I kind of like using the same system of digits for both my floats and my ints) and omitting the then-redundant one in front of the dot of the mantissa. 
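For what it is worth, IEEE 754 already stores its fields in almost the proposed order, except that the sign bit comes before the exponent rather than after it. A small sketch (my illustration) of pulling the fields out of a double:

```python
# Sketch: IEEE 754 doubles store sign, then biased exponent, then mantissa,
# MSB first -- the proposed order, except the sign bit leads.
import struct

def float_fields(x):
    bits = int.from_bytes(struct.pack(">d", x), "big")  # big-endian bytes
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the exponent bias
    mantissa = bits & ((1 << 52) - 1)          # fraction; leading 1 is implicit
    return sign, exponent, mantissa

print(float_fields(-1.602e-19))  # sign 1, exponent -63, plus 52 fraction bits
```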

Comment by quiet_NaN on Duct Tape security · 2024-04-27T00:57:16.911Z · LW · GW

The sum of two numbers should have a precision no higher than the operand with the highest precision. For example, adding 0.1 + 0.2 should yield 0.3, not 0.30000000000000004.

I would argue that the precision should be capped at the lowest precision of the operands. In physics, if you add two lengths, 0.123m + 0.123456m should be rounded to 0.246m.

Also, IEEE754 fundamentally does not contain information about the precision of a number. If you want to track that information correctly, you can use two floating point numbers and do interval arithmetic. There is even an IEEE standard for that nowadays. 

Of course, this comes at a cost. While monotonic functions can be converted for interval arithmetic, the general problem of finding the extremal values of a function in some high-dimensional domain is a hard problem. Of course, if you know how the function is composed out of simpler operations, you can at least find some bounds. 
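As a minimal sketch of the idea (my illustration; real implementations, e.g. ones following the IEEE 1788 standard, also have to round the two endpoints outwards):

```python
# Minimal interval arithmetic: track a lower and an upper bound per quantity.
# (Illustration only; a real implementation rounds endpoints outwards.)

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# two lengths, each known only to within +/- 0.0005 resp. +/- 0.00005:
print(Interval(0.1225, 0.1235) + Interval(0.123406, 0.123506))
# -> roughly [0.245906, 0.247006]
```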

 

 Or you could do what physicists do (at least when they are taking lab courses) and track physical quantities with a value and a precision, and do uncertainty propagation. (This might not be 100% kosher in cases where you first calculate multiple intermediate quantities from the same measurement (whose errors will thus not be independent) and continue to treat them as if they were. But that might just give you bigger errors.) Also, this relies on your function being sufficiently well-described in the region of interest by the partial derivatives at the central point. If you calculate the uncertainty of  for  using the partial derivatives you will not have fun.
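For reference, the standard first-order propagation formula (assuming independent errors and a locally linear $f$) is

$\sigma_f^2 \approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2,$

and it is exactly the locally-linear assumption which breaks down when the partial derivatives vary strongly over the uncertainty range.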

Comment by quiet_NaN on My experience using financial commitments to overcome akrasia · 2024-04-25T17:53:02.069Z · LW · GW

In the subagent view, a financial precommitment another subagent has arranged for the sole purpose of coercing you into one course of action is a threat. 

Plenty of branches of decision theory advise you to disregard threats because consistently doing so will mean that instances of you will more rarely find themselves in the position to be threatened.

Of course, one can discuss how rational these subagents are in the first place. The "stay in bed, watch netflix and eat potato chips" subagent is probably not very concerned with high level abstract planning and might have a bad discount function for future benefits and not be overall that interested in the utility he get from being principled.

Comment by quiet_NaN on My experience using financial commitments to overcome akrasia · 2024-04-25T17:15:32.582Z · LW · GW

To whomever overall-downvoted this comment, I do not think that this is a troll. 

Being a depressed person, I can totally see this being real. Personally, I would try to start slow with positive reinforcement. If video games are the only thing which you can get yourself to do, start there. Try to do something intellectually interesting in them. Implement a four bit adder in dwarf fortress using cat logic. Play KSP with the Principia mod. Write a mod for a game. Use math or Monte Carlo simulations to figure out the best way to accomplish something in a video game even if it will take ten times longer than just taking a non-optimal route. Some of my proudest intellectual accomplishments are in projects which have zero bearing on the real world. 

(Of course, I am one to talk right now. Spending five hours playing Rimworld in a not-terrible-clever way for every hour I work on my thesis.)

Comment by quiet_NaN on hydrogen tube transport · 2024-04-20T19:38:35.422Z · LW · GW

You quoted:

the vehicle can cruise at Mach 2.8 while consuming less than half the energy per passenger of a Boeing 747 at a cruise speed of Mach 0.81


This is not how Mach works. You are subsonic iff your Mach number is smaller than one. The fact that you would be supersonic if you were flying in a different medium has no bearing on your Mach number. 

 I would also like to point out that while hydrogen on its own is rather inert and harmless, its reputation in transportation as a gas which stays inert under all practical conditions is not entirely unblemished. 

The beings travelling in the carriages are likely descendants of survivors of the Oxygen Catastrophe and will require an oxygen-containing atmosphere to survive.

Neglecting nitrogen, you have oxygen surrounded by hydrogen surrounded by oxygen. If you need to escape, you will need to pass through that atmosphere of one bar H2. There is no great way to do that: too little O2 means too little oxidation and suffocation, more O2 means that your atmosphere is explosive. (The trick with hydrox does not work at ambient pressure.)

Contrast this with a vacuum-filled tunnel. If anything goes badly wrong, you can always flood the tunnel with air over a minute, getting to conditions which are as safe as a regular tunnel during an accident (which is still not all that great). But being 10km up in the air is also not great if something goes wrong.

Barlow's formula means that the material required for a vacuum tunnel scales with the diameter squared. For transporting humans, a diameter of 1m might be sufficient. At least, I would not pay 42 times as much for the privilege of travelling in a 6.5m outer diameter (i.e. 747 sized) cabin instead. Just lie there and sleep or watch TV on the overhead screen. 
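Spelling out the scaling (my sketch, taking Barlow's formula at face value, with $P$ the pressure difference, $D$ the diameter and $\sigma$ the allowable stress of the wall material): the required wall thickness is $t = PD/(2\sigma)$, so the material cross-section per unit length of tunnel is roughly $\pi D t = \pi P D^2/(2\sigma)$, i.e. quadratic in the diameter.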

Comment by quiet_NaN on CTMU insight: maybe consciousness *can* affect quantum outcomes? · 2024-04-20T17:31:27.244Z · LW · GW

If this was true, how could we tell? In other words, is this a testable hypothesis?

This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source + detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars. 

 

General remark:

One way this could turn out to be true is if it’s a priori more likely that there are special, nonrandom portions of the quantum multiverse we're being sampled from. For example, if we had a priori reasons for expecting that we're in a simulation by some superintelligence trying to calculate the most likely distribution of superintelligences in foreign universes for acausal trade reasons, then we would have a priori reasons for expecting to find ourselves in Everett branches in which our civilization ends up producing some kind of superintelligence – i.e., that it’s in our logical past that our civilization ends up building some sort of superintelligence. 

It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.

I mean, if we are simulated by a Turing Machine (which is equivalent to quantum events having a low Kolmogorov complexity), then a TM which just implements the true laws of physics (and cheats with a PRNG, not like the inhabitants would ever notice) is surely simpler than one which tries to optimize towards some distant outcome state. 

As an analogy, think about the Kolmogorov complexity of a transcript of a very long game of chess. If both opponents are following a simple algorithm of "determine the allowed moves, then use a PRNG to pick one of them", that should have a bounded complexity. If both are chess AIs which want to win the game (i.e. optimize towards a certain state) and use a deterministic PRNG (lest the transcript be incompressible), the size of your Turing Machine -- which /is/ the Kolmogorov complexity -- just explodes.

Of course, if your goal is to build a universe which invents ASI, do you really need QM at all? Sure, some algorithms run faster in-universe on a QC, but if you cared about efficiency, you would not use so many levels of abstraction in the first place. 

Look at me rambling about universe-simulating TMs. Enough, enough. 

Comment by quiet_NaN on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T13:04:44.505Z · LW · GW

Saliva causes cancer, but only if swallowed in small amounts over a long period of time.

(George Carlin)

 

For this to be a risk, the cancer risk would have to be superlinear in the acetaldehyde concentration. In a linear model, the high local concentrations would not matter overall, because the expected number of mutations you get would not depend on how you distribute the carcinogen among your body cells. 

Or the cells in your mouth or throat could be especially vulnerable to cancer. 

From my understanding, having bacteria in your mouth which break down sugar to ethanol is not some bizarre mad science scheme, but it is something which happens naturally, as an alternative to the lactic acid pathway, and people who never get cavities naturally lucked out on their microbiome. This in turn would mean that even among teetotaler AFR patients there should be an excess of oral cancers, and ideally an inverse correlation between number of lifetime cavities and cancer rates. 

On the meta level, I find myself slightly annoyed if people use image formats to transport text, especially text like the quotes from Scott's FAQ which could be easily copy-pasted into a quotation. Accessibility is probably less of an issue than it was 20 years ago thanks to ML, but this still does not optimize for robustness. 

Comment by quiet_NaN on Carl Sagan, nuking the moon, and not nuking the moon · 2024-04-13T19:13:06.145Z · LW · GW

One thing to keep in mind is that the delta-v required to reach LEO is some 9.3km/s. (Handy map)

This is an upper limit for what delta-v can be militarily useful in ICBMs for fighting on our rock. 

Going from LEO to the moon requires another 3.1km/s. 

This might not seem like much, but it makes a huge difference in the payload-to-propellant ratio due to the rocket equation.
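To spell that out (my numbers, assuming a storable-propellant stage with an effective exhaust velocity of roughly 3 km/s): the rocket equation gives a required mass ratio of $m_0/m_f = e^{\Delta v / v_e}$, so $e^{9.3/3} \approx 22$ for LEO versus $e^{12.4/3} \approx 62$ for the moon, i.e. nearly three times the launch mass for the same payload.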

If physics were different and the moon was within reach of ICBMs then I imagine it might have become the default test site for nuclear tipped ICBMs. 

Instead, the question was "do we want to develop an expensive delivery system with no military use[1] purely as a propaganda stunt?"

Of course, ten years later, the Outer Space Treaty was signed which prohibits stationing weapons in orbit or on celestial bodies.[2]

  1. ^

    Or no military use until the moon people require nuking, at least.

  2. ^

    The effect of forbidding nuking the moon is more accidental. I guess that if I were a superpower, I would be really nervous if a rival decided to put nukes into LEO, where they would pass a few hundred kilometers over my cities and could be brought down into them with the smallest of nudges. The fact that mankind decided to skip a race of "who can pollute LEO most by putting the most nukes there" (which would have entailed radioactive material being scattered when rockets blow up during launch (as rockets are wont to do) as well as IT security considerations regarding authentication and deorbiting concerns[3]) is one of the brighter moments in the history of our species. 

  3. ^

    Apart from 'what if the nuke goes off on reentry?' and 'what if the radioactive material gets scattered?' there is also a case to be made that supplying a Great Old One with nuclear weapons may not be the wisest course of action.

Comment by quiet_NaN on simeon_c's Shortform · 2024-04-12T22:59:58.310Z · LW · GW

I am sure that Putin had something like the Anschluss in mind when he started his invasion. 

Luckily for the west, he was wrong about that. 

From a Machiavellian perspective, the war in Ukraine is good for the West: for a modest investment in resources, we can bind a belligerent Russia while someone else does all the dying. From a humanitarian perspective, war is hell and we should hope for a peace where Putin gets whatever he has managed to grab while the rest of Ukraine joins NATO and will be protected by NATO nukes from further aggression. 

I am also not sure that a conventional arms race is the answer to Russia. I am very doubtful that a war between a NATO member and Russia would stay a regional or conventional conflict.

Comment by quiet_NaN on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T22:27:41.485Z · LW · GW

Anything related to the Israel/Palestine conflict is invoking politics, the mind-killer. 

It is the hot button topic number one on the larger internet, from what I can tell. 

"Either the ministry made an honest mistake or the the statistical analysis did" does not seem like the kind of statement most people will agree on. 

Comment by quiet_NaN on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T22:03:34.693Z · LW · GW

Link. (General motte content warning: this is a forum which has strong free speech norms, which disproportionally attracts people who would find it hard to voice their opinions elsewhere. On a bad day you will read five paragraphs of a comment on the war in Gaza only to realize that this is just the introduction to the author's main pet topic of holocaust denial. Also, content warning: discussion is meh.)

I am not sure it is the one I remember reading, not that I remember the discussion much. I normally read the CW thread, and vaguely remember the link going to twitter. Perhaps I misremember, or the CW post was deleted by its author, or they have changed reality again.

Comment by quiet_NaN on Medical Roundup #2 · 2024-04-12T15:26:30.409Z · LW · GW

Regarding assisted suicide, the realistic alternative in the case of the 28 year old would not be that she lives unhappily ever after. The alternative is a unilateral suicide attempt by her. 

Unilateral suicide attempts impose additional costs on society. The patient can rarely communicate their decision to anyone close to them beforehand, because any confidant might have them committed to a psychiatric ward instead. The inability to talk about any particulars with someone who knows their real identity[1], especially their therapist, will in turn mean that plenty of patients who could be dissuaded will not be dissuaded.

There is a direct cost of suicide attempts to society. Methods vary by ease of access, lethality, painfulness and impact on bystanders. Given that society defects against these patients by refusing to respect their choices regarding their continued existence, some of them will reciprocate and not optimize for a lack of traumatization of bystanders. Imagine being the conductor of any kind of train, spotting someone lying on the tracks and knowing that you will never stop the train in time. For their loved ones, losing someone to suicide without advance warning is also a bad outcome. 

I would argue that every unilateral suicide attempt normalizes further such attempts.[2] While I believe that suicide is part of a fundamental right, I also think that not pushing that idea to vulnerable populations (like lovesick teenagers) is probably a good thing. Reading that a 28yo was medically killed at the end of a long medical intervention process will probably do less to normalize suicide in the mind of a teenager than reading that she jumped from a tall building somewhere. 

Of course, medically assisted suicide for psychiatric conditions could also be a carrot to dangle in front of patients to incentivize them to participate in mental health interventions. Given that these interventions are somewhat effective, death would not have to be the default outcome. And working with patients who are there of their own free will is probably more effective than working with whatever fraction of patients survived their previous attempt and got committed for a week or a month. (Of course, I think it is important to communicate the outcome odds clearly beforehand: "after one year of interventions, two out of five patients no longer wanted to die, one only wanted to die some of the time and was denied, one dropped out of treatment and was denied and one was assisted in their suicide." People need that info to make an informed choice!)

 

  1. ^

    Realistically, I would not even bet on being able to have a frank discussion with a suicide hotline. Given that they are medical professionals, they may be required by law to try their best to prevent suicides, up to and including alerting law enforcement, and phone calls are not very anonymous by default.

  2. ^

    Assisted suicides would not necessarily legitimize unilateral suicide attempts. People can be willing to accept a thing when regulated by the state and still be against it otherwise. States collecting taxes does not legitimize protection rackets. 

Comment by quiet_NaN on Medical Roundup #2 · 2024-04-12T14:04:30.904Z · LW · GW

Anecdata: I have deep-frozen cake in my freezer which has been there for months. If it was in the fridge (and thus ready to eat) I would eat a piece every time I open the fridge. But I have no compulsion to further the unhealthy eating habits of future me; let that schmuck eat a proper meal instead!

Ice cream I eat directly from the freezer, so that effect is not there for me.

Comment by quiet_NaN on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T13:51:24.327Z · LW · GW

The appropriate lesswrong-adjacent-adjacent place to post this would be the culture war thread of the motte. I think a tweet making similar claims was discussed there before. 

I have some hot takes on this but this is not the place for them.

Comment by quiet_NaN on The Poker Theory of Poker Night · 2024-04-11T17:31:27.484Z · LW · GW

Thanks, this is interesting. 

From my understanding, in no-limit games, one would want to have only some fraction of one's bankroll in chips on the table, so that one can re-buy after losing an all-in bluff. (I would guess that this fraction should be determined by the Kelly criterion or something.)

On the other hand, from browsing Wikipedia, it seems like many poker tournaments prohibit or limit re-buying after going bust. This would indicate that one has limited opportunities to get familiar with the strategy of the opponents (which could very well change once the stakes change). 

(Of course, Kelly is kind of brutal with regard to gambling. In a zero-sum game, the average edge is zero, so at least one participant should not be playing even from an EV perspective. But even under the generous assumption that you are 50% more likely than chance to win a 50-participant elimination tournament (e.g. because a third of the participants are actively trying to lose) (so your EV is 0.5 times the buy-in), Kelly tells you to wager about 1% of your bankroll. So if the buy-in is 10k$ you would have to be a millionaire.)
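Spelling out that Kelly estimate (my arithmetic, using the assumptions above): win probability $p = 1.5/50 = 0.03$, net odds $b = 49$ (the winner takes all 50 buy-ins), so

$f^* = p - \frac{1-p}{b} = 0.03 - \frac{0.97}{49} \approx 0.0102,$

i.e. about 1% of the bankroll per tournament.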

Comment by quiet_NaN on Toward a Broader Conception of Adverse Selection · 2024-04-10T04:12:12.703Z · LW · GW

(sorry for thread necromancy)

Meta: I kind of wonder about the moderation score of gwern's comment. Karma -5, Agreement -10. So someone saw that comment at -4 and thought 'this is still rated too high'.

FWIW, I do not think his comment was bad. A bit tongue in cheek, perhaps, but it engages with the subject matter of the post more deeply than the parent comment. 

Or some subset of people voting on LW either really like Banana Taffy or really hate gwern, or both. 

Comment by quiet_NaN on Toward a Broader Conception of Adverse Selection · 2024-04-10T03:49:19.527Z · LW · GW

Not everyone is out to get you

If your BATNA to winning the bid on that wheelbarrow auction is to order it for 120$ off Amazon with free overnight shipping, then winning the auction for 180$ is net negative for you. 

But if your BATNA is to carry bags of sand on your back all summer, then 180$ for a wheelbarrow is a bloody bargain.

Assuming a toy model where dating preferences follow a global preference ordering ('hotness'), any person showing any interest in dating you is proof that you can likely do better.[1] But if you follow that rule, you can practically never date anyone (because you are forever just sampling the field of partners), which leaves a lot of utility on the table, because relationships can be net positive for all participants even if they do not precisely match each other's market values. 

If you want to buy stock to make money from speculation then you need to worry that almost everyone you trade with is better informed than you and you will end up net negative. On the other hand, if you buy stock as a long term investment (tracking some index or whatever) then you probably care a lot less about overpaying one percent.

I think that Zvi mentions a few legitimate examples of things which are out to get you, and their advice to avoid ones with unlimited cost potential is certainly sound.

If I buy toilet paper in the supermarket, I am paying more than the market price. If I wanted, I could figure out what toilet paper costs in bulk, find a supplier and buy a lifetime supply, likely saving a few hundred dollars in the process. I am not doing this because these amounts of savings over a lifetime are just not worth the hassle. Instead, I trust that competition between discounters means that their markup is less than an order of magnitude, and cheerfully pay their price. 

  1. ^

    Don't ask me if that is part of the reason why flirting is about avoiding the creation of common knowledge. I am just some nerd, why would I know?

Comment by quiet_NaN on The Poker Theory of Poker Night · 2024-04-08T00:38:11.091Z · LW · GW

Poker seems nice as a hobby, but terrible as a job, as discussed on the motte. 

Also, if all bets were placed before the flop, the equilibrium strategy would probably be to bet along some fixed probability distribution depending on your position, the previous bets and what cards you have. Instead, the three rounds of betting after some cards are open on the table make the game much more complicated. If you know you have a winning hand, you do not want your opponent to fold, you want them to match your bet. So you kinda have to balance optimizing for the maximum pot at showdown with limiting the information you are leaking so that there is a showdown. Or at least it would seem like that to me; I barely know the rules. 

Role playing groups have a similar conundrum. In some way, it is even more severe because while you can have switching members of a poker night, having too many switching members in a role playing game does not work great. On the other hand, typical role players don't have 56 things they would rather be doing. (Personally, I think having five people (DM plus four players) is ideal because you have a bit of leeway to play even if one cancels.) So far, my group manages ok without imposing penalties on players found to be absent without leave. 

Comment by quiet_NaN on How Often Does ¬Correlation ⇏ ¬Causation? · 2024-04-04T02:25:49.689Z · LW · GW

I think different people mean different things with "causation". 

On the one hand, we have things where A makes B vastly more likely. No lawyer tries to argue that while their client shot the victim in the head (A) and the victim died (B), it could still be the case that the cause of death was old age and their client was simply unlucky. This is the strictest useful definition of causation. 

Things get more complicated when A is just one of many factors contributing to B. Nine (or so) out of ten lung carcinomas are "caused" by smoking, we say. But for the individual smoking cancer patient, we can only give a probability that their smoking was the cause. 

On the far side, on priors I find it likely that the genes which determine eye colors in humans might also influence the chance that they get depression due to a long causal chain in the human body. Perhaps blue eyed people have an extra  or  chance to get depression compared to green eyed people after correcting for all the confounders, or perhaps it is the other way round. So some eye colors are possibly what one might call a risk factor for depression. This would be the loosest (debatably) useful definition of causation. 

For such very weak "causations", a failure to find a significant correlation does not imply that there is no "causation". Instead, the best we can do is say something like "The likelihood of the observed evidence in any universe where eye color increases the depression risk by more than a factor of  (or whatever) is less than one in 3.5 million." That is, we provide bounds instead of just saying there is no correlation. 

Comment by quiet_NaN on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T17:10:50.349Z · LW · GW

Well, I think Munroe is not thinking big enough here. 

Of course, this might increase global warming in the long run, because the impact crater can produce CO2 both from the global firestorms devastating plant life and from the destruction of carbonate rock in the earth's crust, but I think that this can be minimized by choosing a suitable impact location (which was not a concern for Chicxulub) and is partly offset by a decline in fossil fuel use due to indirect effects. Also, all of the tipping point factors in climate change would work to our advantage: larger polar caps reflect more light, more permafrost binds more CO2 and so on. 

At the worst, climate engineering might require periodic impacts on a scale of one per decade, which seems sustainable. 

Comment by quiet_NaN on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T16:51:29.825Z · LW · GW

All the doomers (who are mostly white male nerds who read too much scifi) complaining that large asteroid impacts could cause catastrophic climate changes are distracting from the real problem, which is that meteorite impacts TODAY are a tool of oppression used by privileged able-bodied white cis-men. 

STEM people claim that there is no proof that asteroids disproportionally hit minorities, but a more compassionate analysis clearly proves them wrong. 

Regarding direct impacts, it is clear that healthy men are more likely to dodge a meteorite than the malnourished, the wheelchair-bound or women and children. Better health care services in Western countries can further improve the survival odds for the minority of privileged people subjected to asteroid hits, leaving disadvantaged minorities to pay the price. 

Looking at https://openasteroidimpact.org/ it is clear that these are the same crowd of Silicon Valley techbros which are responsible for most of the problems in the world. They quote two deities (talk about privilege!) and a bunch of white people. Their board seems to be disproportionally White (and Asian) and male. No statement of diversity and inclusion. 

I think we should therefore shame OAI and its competitors into including mechanisms in their asteroid steering which will further social and racial justice by redirecting some of the profits from the metals to disadvantaged minorities, while also making sure that the impact deaths are fairly distributed between different ethnicities. 

Comment by quiet_NaN on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T15:26:26.860Z · LW · GW

I think you are seriously underselling OAI. Asteroid impacts have the potential to solve many of the looming humanitarian and existential crises:

  • Asteroid impacts are a prime candidate to stop global warming.
  • The x-risk from AI is much lower in timelines where OAI succeeds.

Basically, OAI is a magic bullet, which could enable a phase change in human technology. Global poverty will no longer be a thing. The Near East conflict will be solved. It will prevent Putin from conquering Ukraine and keep Taiwan out of the hands of China. It will end all colonialism and discrimination.

Comment by quiet_NaN on On green · 2024-03-27T23:56:26.307Z · LW · GW

With regard to the Redwood trees, my personal thoughts as a blue person are that it is probably a bad idea to destroy something which is both rare and hard to replace (on a human timescale) without having a good reason. 

If Redwood was the most common plant species by biomass, we would of course use it for lumber, or even cut down a few hundreds of them whenever we need space for a new Walmart. 

Likewise archaeological sites or rare fossils. (Of course, all of that has limits. If we lived in some steampunk world where Redwood trees are the obvious thing to build moon rockets from, I would be willing to sacrifice a few of them for the Apollo missions.)

Generalized, this could be phrased as "don't make the universe more boring". 

That being said, in terms of excitement per mole, the observable universe still has a lot of optimization potential. Let us perhaps keep Jupiter and Saturn for future generations, but Uranus and Neptune could probably be put to better use. 

Comment by quiet_NaN on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-19T00:11:41.346Z · LW · GW

The failure mode of having a lot of veto-holders is that nothing ever gets done. Which is fine if you are happy with the default state of affairs, but not so fine if you prefer not to run your state on the default budget of zero. 

There are some international organizations heavily reliant on veto powers; the EU and the UN Security Council come to mind, and to a lesser degree NATO (as far as the admission of new members is concerned). 

None of these are unmitigated success stories. From my understanding, getting stuff done in the EU means bribing or threatening every state who does not particularly benefit from whatever you want to do. 

Likewise, getting Turkey to allow Sweden to join NATO was kind of difficult, from what I remember. Not very surprisingly: if you have to get 30 factions to agree on something, one is likely to object for good or bad reasons. 

The UN Security Council with its five veto-bearing permanent members does not even make a pretense of democratic legitimacy. The three states with the biggest nuclear arsenals, plus two nuclear countries which used to be colonial powers. The nicest thing one can say about that arrangement is that it failed to start WW III, and in a few cases passed a resolution against some war criminal who did not have the backing of any of the veto powers.

I think veto powers as part of a system of checks and balances are good in moderation, but add too many of them and you end up with a stalemate.

--

I also do not think that the civil war could have been prevented by stacking the deck even more in favor of the South. Sooner or later the industrial economy in the North would have overtaken the slave economy in the South. At best, the North might have seceded in disgust, leaving the South on track to become some rural backwater.

Comment by quiet_NaN on There is way too much serendipity · 2024-01-21T16:57:59.307Z · LW · GW

You are correct. If one estimates that one requires a milliliter of that 0.5% saccharine solution from the paper cited above to detect the sweetness, that would come out to about 5mg of sugar. If neotame is 6000 times more potent, that would mean about 800ng. Even if we switch from VX to the more potent botulinum toxin A, we would need a whole whopping microgram per kilogram orally, so perhaps 100x more than what we need for neotame. (If we change the route of administration to IV, then botox will easily win, of course.)
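Spelling out the arithmetic (my own, assuming an ~80 kg human): $1\,\mathrm{mL} \times 0.5\% \approx 5\,\mathrm{mg}$ of sugar; $5\,\mathrm{mg} / 6000 \approx 0.8\,\mathrm{\mu g}$ of neotame; and $1\,\mathrm{\mu g/kg} \times 80\,\mathrm{kg} = 80\,\mathrm{\mu g}$ of botulinum toxin, which gives the claimed factor of roughly 100.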

Of course, this is highly dependent on the ratio of saliva in the mouth (which will dilute the sweetener) to the weight of the organism (which will affect the toxin dose needed). I don't think this ratio will change overly much when going to elephants or mice, though. 

In a way, this should be unsurprising. Both the taste molecule and the neurotoxin interact with very specific receptor molecules. Only, in one case the animal evolved to cooperate with the molecule (by putting the receptors directly on the tongue), while in the other case the evolutionary pressure was very much not to allow random molecules from the environment access to the synapses. 

Comment by quiet_NaN on Pseudonymity and Accusations · 2023-12-23T02:22:42.195Z · LW · GW

I think there are a few corner cases which it is worthwhile to consider:

  • A whistleblower providing objective evidence of wrongdoing. Here, the accused should just respond to the evidence, not the messenger. 
  • A case relying entirely on the testimony of the accuser. Here, the credibility of the case depends entirely on the reliability of the accuser. The accuser has every right to confidentially talk to trusted third parties about their accusations. But once the accusations are made public, to be judged either by a court of law or by the court of public opinion, the public also deserves to know from whom the accusations come, and to judge their reliability as a witness. 

Of course, in the real world, it is often a mix of the two. Just about any evidence a source might hand to an investigator could be faked, or even just taken out of context, so the investigator has to trust their source to some degree.

I am sure that the US would have loved to have wikileaks reveal their source for the collateral murder video, just to make sure that that person actually had a security clearance; they would not want wikileaks being taken for a ride by some enemy psyop with video editing software. In that case, revealing the source would be silly.

On the other hand, in a they said / they said situation, things differ. If X is anonymously accused by someone who can only provide their own testimony, I think that we should not update on that more than infinitesimally. If the anonymous accuser convinced an investigator who will provide their own name, that is a bit better, but still not much, because we would not only have to trust that the investigator is truthful, but also that their character judgement is sound.

TL;DR: provide evidence, testify on record or shut up (in public).

Comment by quiet_NaN on Legalize butanol? · 2023-12-21T19:04:46.300Z · LW · GW

You might think these are safe because they're used in eg some paint solvents, but no, they're somewhat toxic.

Personally, I would not update towards "substance X is safe for recreational human consumption" from learning that is used as a paint solvent. But then again I never had the urge to drink paint solvent, so I might be atypical. 

(Also, I assume that the solvents evaporate while the paint dries, so the health and safety problem should be confined to the wet paint. Of course, details are likely to depend on a lot of specifics. Probably not appropriate for fingerpaint, at least.)

Comment by quiet_NaN on Has anyone here investigated the occult community? It is curious to me that many magicians consider themselves empiricists. · 2023-12-14T18:46:11.304Z · LW · GW

The problem is that naive empiricism is not good enough for most non-trivial practical applications. 

(Where a trivial application would be figuring out that a hammer makes a sound when you bash it against a piece of wood, which will virtually always happen assuming certain standard conditions.)

For another example of this failure mode, look at the history of medicine. At least some of the practitioners there were clearly empiricists; otherwise it seems very unlikely that they would have settled on willow bark (which contains a precursor of salicylic acid). But plenty of other treatments are today recognized as actively harmful. This is because empiricism and good intentions are not enough to do medical statistics successfully. 

Look at the replication crisis for another data point: Even being part of a tradition ostensibly based on experimental rigor is not enough to halfway consistently arrive at the truth. 

If you are testing the hypothesis "I am a wizard" against the null hypothesis "I am a muggle", it is likely that the experimenter much prefers the former to the latter. This means that they will be affected by all sorts of cognitive biases (as being an impartial experimenter was not much selected for in the ancestral environment) which they are unlikely to even be aware of (unless they have Read The Sequences or something of the sort). 

If it comes to testing oneself for subtle magic abilities, it would take a knowledgeable and rigorous rationalist to do that correctly. I certainly would not trust myself to do it. (Of course, most rationalists would also be likely to reject the magic hypothesis on priors.)

Comment by quiet_NaN on Thoughts on “AI is easy to control” by Pope & Belrose · 2023-12-02T16:00:19.849Z · LW · GW

What I do not get is how this disagreement on p(doom) leads to different policy proposals. 

If ASI has a 99% probability of killing us all, it is the greatest x-risk we face today and we should obviously be willing to postpone ASI, and possibly the singularity (to the extent that in the far future, the diameter of the region of space we colonize at any given time will be a few hundred light years less than what it would be if we focused just on capabilities now). 

If ASI has a 1% probability of killing us all, it is still the (debatably) greatest x-risk we face today and we should obviously be willing to postpone ASI etcetera. 

To make "just build ASI" look safe, one would either have to not care about the far future (for an individual alive today, a 99% chance of living in a Culture-esque utopia would probably be worth a 1% risk of dying slightly earlier) or provide a much lower p(doom) (e.g. "p(doom)=1e-20, all the x-risk comes from god / the simulators destroying the universe once humans develop ASI, and spending a few centuries on theological research is unlikely to change that" would recommend "just build the damn thing" as a strategy.)

Comment by quiet_NaN on Kolmogorov Complexity Lays Bare the Soul · 2023-12-02T15:34:25.690Z · LW · GW

First off, the most important practical property of the Kolmogorov complexity is that it is not computable. This means that there is no general algorithm to determine the shortest computer program which generates a given output. (The crux is that you can not run all programs below a certain length to see if any of them will generate that output, because some of these programs will run forever, while others might run for an uncomputably long time and eventually print that output and terminate.)

This is the reason why we have myriads of heuristic compression algorithms instead of just using Kolmogorov compression, which would yield the best possible results. 
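To illustrate the asymmetry (my sketch): any off-the-shelf compressor gives an upper bound on the Kolmogorov complexity (up to the additive constant for the decompressor), but no algorithm can give you the exact value:

```python
# Heuristic compression upper-bounds Kolmogorov complexity (up to the
# constant-size decompressor), but never tells us the true minimum.
import os
import zlib

structured = b"ab" * 5000       # very regular: compresses to a few dozen bytes
random_ish = os.urandom(10000)  # incompressible with overwhelming probability

print(len(zlib.compress(structured)))  # tiny
print(len(zlib.compress(random_ish)))  # slightly *more* than 10000
```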

It is also completely unique, being the shortest possible string.

No, there can be more than one such string. 

Even leaving issues of quantum physics aside, macroscopic physical objects like humans are unlikely to be very compressible (information-wise, that is). The author might feel that the number of lead atoms in their tooth 36 (a molar) is not part of their Kolmogorov string, but I would argue that it is certainly part of a complete description. 

Humans are not the product of running some simple starting conditions in some deterministic cellular automaton, but shaped by a chaotic environment full of probabilistic interactions. Mess not math, if you will. 

In practice, that fine level of detail is not actually what I care about. Just like I listen to lossy compressed music, I would be fine with being uploaded into a somewhat lossy representation of myself where I don't have any lead atoms in my teeth.

Comment by quiet_NaN on So you want to save the world? An account in paladinhood · 2023-11-23T12:23:11.247Z · LW · GW

Yes, having a general principle of being kind to others is downstream of that, because a paladin who is known to be kind and helpful will tend to have more resources to save the world with.

Well, instrumental convergence is a thing. If there is a certain sweet spot for kindness with regard to resource gain, I would expect the paladin, the sellsword and the hellknight all to arrive at similar kindness levels. 

There is a spectrum between pure deontology and pure utilitarianism. I agree with the author and EY that pure utilitarianism is not suitable for humans. In my opinion, one failure mode of pure deontology is refusing to fight evil in any way that is not totally inefficient, while one failure mode of pure utilitarianism is losing sight of the main goal while focusing on some instrumental subgoal. 

Of course, in traditional D&D, paladins are generally characterized as deontological sticklers for their rules ("lawful stupid").

Let's say you're concerned about animal suffering. You should realize that what is gonna have the most impact on how much animal suffering the future will contain is, by far, determined by what kind of AI is the one that inevitably takes over the world, and then you should decide to work on something which impacts what kind of AI is the AI that inevitably takes over the world.

If ASI comes along, I expect that animal suffering will no longer be a big concern. Economic practices which cause animal suffering tend to decline in importance as we get to higher tech levels. The plow may be pulled by the oxen, but no sensible spaceship engine will ever run on puppy torture. This leaves the following possibilities:

* An ASI which is orthogonal to animal (including human) welfare. It will simply turn the animals into something more useful to its goals, thereby ending animal suffering.

* An ASI which is somewhat aligned to human interests. This could plausibly result in mankind increasing the total number of pets and wild animals by orders of magnitude, for human reasons. But even today, only a small minority of humans prefers to actively hurt animals for their own sake. Even if we do not get around to fixing pain for some reason, the expected suffering per animal will not be more than what we consider acceptable today. (Few people advocate driving any wild species extinct because its members just suffer too much.)

* An ASI whose end goal is to cause animal suffering. I Have No Mouth And I Must Scream, But With Puppies. This is basically a null set among all possible ASIs. I concede that we might create a human-torturing ASI if we mess up alignment badly enough by making a sign error or whatever, but even that seems like a remote possibility. 

So if one's goal is to minimize the harm per animal conditional on it existing, and one believes that ASI is within reach, the correct focus would seem to be to ignore alignment and focus on capabilities. Either you end up with a paperclip maximizer which is certain to reduce non-human animal suffering as compared to our current world of factory farming within a decade, or you end up with a friendly AI which will get rid of factory farming because it upsets some humans (and is terribly inefficient for food production). 

Of course, if you care about the total number of net-happy animals, and species not going extinct and all of that, then alignment will start to matter. 

Comment by quiet_NaN on Wireheading and misalignment by composition on NetHack · 2023-10-29T12:25:16.789Z · LW · GW

The agent is asked to get near a character named “the oracle”, which typically appears in later levels of the game, that can only be reached with significant exploration and survival efforts.

The oracle resides on a dungeon level between five and nine. I guess for a starting player that could be called a deeper level, but in the grand scheme of things (nethack having 45-53 dungeon levels) it is still quite near the surface. 

Also, you are very unlikely to hallucinate being next to the oracle; there are about 385 monster types in the game (plus you can hallucinate some fictional ones, like Luggage). To get to a 30% chance of hallucinating the oracle, you would have to spend around 137 turns next to a monster. Unless that monster happens to be your pet, that is not very survivable for a level one character. (And pets move away from you and are faster than the player character, so you would spend some of your 200 turns of hallucination chasing it.)
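(The 137 comes from assuming the displayed monster is re-rolled uniformly among the 385 types each turn: $1 - (1 - 1/385)^n \geq 0.3$ requires $n \geq \ln 0.7 / \ln(384/385) \approx 137$.)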

Or does the oracle condition only check for the symbol of the oracle (@, which includes some 80 monsters)? In that case, I would assume that the easiest way to fulfill the condition is to stand next to a shopkeeper on level 2 or 3.

We found that agents that optimize Motif’s intrinsic reward function exhibit behaviors that are quite aligned with human’s intuitions on how to play NetHack: they are attracted to explorative actions and play quite conservatively, getting experience and only venturing deeper into the game when it is safe

"Safe" is a very relative term in nethack. While exploring a level before going down to the next one is obviously a good idea (unless you are doing a speed run), nutrition conditions prevent players from staying on level 1 indefinitely: eating the odd goblin or lichen corpse are not enough to prevent starvation. 

Comment by quiet_NaN on Hyperreals in a Nutshell · 2023-10-20T12:06:09.417Z · LW · GW

Sorry, I was only quoting parts of the sentence. 

What I meant was that I would change

Conversely, if I∉U, this implies that N/I∈U, which means that a, b disagree at almost all positions, so they probably shouldn't be equal.

to

Conversely, if I∉U, this implies that N∖I∈U, which means that a, b disagree at almost all positions, so they probably shouldn't be equal.

Comment by quiet_NaN on Hyperreals in a Nutshell · 2023-10-18T01:37:37.011Z · LW · GW

Some random thoughts. 

First, it would be nice if one could go from the rationals to the hyperreals directly, without having to define the reals in between (especially for people with limit allergies, as the reals are sometimes defined as limits of Cauchy sequences). I don't see a straightforward way to do so, though: you can hardly allow people to encode their reals as sequences of rationals, otherwise the sequence would have to be equivalent to zero instead of an infinitesimal. 

Also, one could split the hyperreals into equivalence classes within which the Archimedean property holds. Using the big-O adjacent notation, the reals would be $\Theta(1)$, and the hyperreal called $\omega$ above would be $\Theta(\omega)$. Stretching the big-O notation, one could call the equivalence class of $\omega^2$ something like $\Theta(\omega^2)$. So one has a rather large zoo of these equivalence classes. This would imply that there is no Archimedean equivalence class for the smallest infinite hyperreal. If a hyperreal $a$ is infinite (that is, $a_n$ diverges), then $\sqrt{a}$ is a smaller infinite hyperreal. 

I am well used to there being no biggest infinity, but there being no smallest infinity would indicate that these things are neither equivalent to cardinals nor ordinals. 

Comment by quiet_NaN on Hyperreals in a Nutshell · 2023-10-18T00:40:59.860Z · LW · GW

Thanks, this is helpful to point out.

Of course, this makes all of this rather abstract. It looks to me like for almost any two hyperreals (e.g. a, b as above), the answer to "which of them is larger?" is "It depends on the ultrafilter. Also, I can not tell you whether a given set is part of any specific ultrafilter. But fear not: for any given ultrafilter, the hyperreals are totally ordered."

Basically for any usable theorem, one would have to prove that the result is independent of the actual ultrafilter used, which means that numbers such as a and b will probably not feature in them a lot. 

I can not fault my analysis 1 professor for opting to stick to the reals (abstract as they already are) instead. 

Comment by quiet_NaN on Hyperreals in a Nutshell · 2023-10-17T23:04:14.778Z · LW · GW

Furthermore, after

Conversely, if I∉U, this implies that the complement of I

the slash is used for the setminus operation. I think using \setminus there (which generates a backslash) would be a more standard notation less likely to be mistaken for quotient structures. 

Comment by quiet_NaN on Sum-threshold attacks · 2023-10-02T03:06:38.105Z · LW · GW

Here's how you can make the sum large without Bob noticing: just add a small amount to each coordinate, like . Then Bob could attribute each coordinate's offset to noise, and you've made the sum of  greater by 100.

 

This would assume either that Bob is unaware that an attack might be happening, or that he can't be bothered to do statistical analysis on his vector. 

If the original value (without the noise chosen by Alice) is non-obvious to the attacker, but obvious to Bob (e.g., they use redundancy and encryption -- it is a well-known fact that Alice and Bob like cryptography), and the magnitude of the noise is common knowledge, then all attempts to modify the message will, on average, increase the standard deviation of the noise as measured by Bob. If my math is right, the attacker could modify each value by about half the width of the noise and end up with an expected sum of 112 instead of 100, which will probably not be suspicious to Bob. 

If Bob has an idea of the attackers objective, detecting tampering will get much easier. If Bob suspects that the attacker wants a huge sum, he can just calculate the sum of the noise terms and compare that to the expected distribution. Then any deniable tampering would have to be within expected random fluctuations. (Of course, for every vector, there is some base in which it looks very suspicious.)
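As a sketch of that test (my formulation, assuming Bob recovers the true values via redundancy and knows the noise is i.i.d. with a given standard deviation):

```python
# If the noise is i.i.d. with mean 0 and standard deviation sigma, the sum
# of n residuals has standard deviation sigma * sqrt(n); a large z-score
# betrays an attacker who shifted many coordinates in the same direction.
import math

def tamper_z_score(observed, true_values, sigma):
    residual_sum = sum(o - t for o, t in zip(observed, true_values))
    return residual_sum / (sigma * math.sqrt(len(observed)))

# Under the null hypothesis, |z| > 3 happens roughly 0.3% of the time, so
# Alice's deniable budget is a total shift of about 3 * sigma * sqrt(n).
```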

Often, we have an idea what the objective of an adversary using a sum-threshold attack might be. There is more utility in influencing who get's to be president of a country than in influencing who will become the tenth-ranking janitor in their residence. Some bosses would like to pressure their employees into having sex, few if any want to condition them to speak sentences with a prime number of syllables.