Comments

Comment by zrezzed (kyle-ibrahim) on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-24T17:25:23.404Z · LW · GW

That behind every single thing ChatGPT can do, there's a human who implemented that functionality and understands it.

Or, at the very least, that it's written in legible, human-readable and human-understandable format, and that we can interfere on it in order to cause precise, predictable changes.

 

I’m fairly sure you have, in fact, made the same mistake you pointed out! Most people… have exactly no idea what a computer is. They do not understand what software is, or that it is something an engineer implements. They do not understand the idea of a “predictable” computer program.

I find it deeply fascinating most of the comments here are providing pushback in the other direction :)

Comment by zrezzed (kyle-ibrahim) on Originality vs. Correctness · 2023-12-08T19:59:03.323Z · LW · GW

Which do you agree would be better? I’m assuming the latter, but correct me if I’m wrong.

I haven’t thought this through, but a potential argument against: 1) agreement/alignment on what the heavy-tail problems are and their relative weights is a necessary condition for the latter to be the better strategy; 2) neither this community nor broader society has that; thus 3) we should still focus on correctness overall.

That does reflect my own thinking about these things.

Comment by zrezzed (kyle-ibrahim) on Originality vs. Correctness · 2023-12-08T04:35:29.190Z · LW · GW

And idk, maybe I am kind of convinced? Like, consistency checks are a really powerful tool and if I imagine a young person being like "I will just throw myself into intellectual exploration and go deep wherever I feel like without trying to orient too much to what is going on at large, but I will make sure to do this in two uncorrelated ways", then I do notice I feel a lot less stressed about the outcome

 

I worry this gives up too much. Being embedded in multiple communities/cultures with differing or even conflicting values and world views is exceedingly common. Noticing that, and explicitly playing with the idea of “holding multiple truths” in your head is less common, but still something perhaps even most people would recognize.

But most people would not recognize the importance of this dialogue. Navigating the tension between academics and politics does not seem sufficient to correctly orient humanity. The conflicting values of business and family do not seem to produce anything that resembles truth seeking.

I feel pretty strongly that letting go of correctness in favor of any heuristic means you will end up with the wrong map, not just a smaller or fuzzier one. I don’t think that’s advice that should be universally given, and I’m not even sure how useful it is at all.

Comment by zrezzed (kyle-ibrahim) on Thomas Kwa's MIRI research experience · 2023-10-09T22:02:50.101Z · LW · GW

One other thing to mention is that the speed of sound of the exhaust matters quite a lot. Given the same area ratio nozzle and same gamma in the gas, the exhaust mach number is constant; a higher speed of sound thus yields a higher exhaust velocity.

 

My understanding is that this effect is a re-framing of what I described: for a similar temperature and gamma, a lower molecular weight (equivalently, a higher mass-specific heat) will result in a higher speed of sound (and thus a higher exit velocity).

However, I feel like this framing fails to provide a good intuition for the underlying mechanism. At the limit, $v_e \to \sqrt{2 c_p T_c}$ anyways, so it's harder (for me at least) to understand how gas properties relate to sonic properties. Yes, holding other things constant, a lower molecular weight increases the speed of sound. But crucially, it means there's more kinetic energy to be extracted to start with.
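To make this concrete, here's a minimal sketch of both framings (assuming ideal gas behavior; the gamma, temperature, and molecular weights are illustrative stand-ins, not real engine values):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def speed_of_sound(gamma, T, M):
    """Speed of sound in an ideal gas: a = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / M)

def exhaust_velocity_limit(gamma, T, M):
    """Ideal exhaust velocity in the full-expansion limit (p_e/p_c -> 0):
    v_e = sqrt(2*gamma/(gamma-1) * R*T/M) = a * sqrt(2/(gamma-1))."""
    return math.sqrt(2 * gamma / (gamma - 1) * R * T / M)

gamma, T = 1.2, 3500.0    # illustrative chamber conditions (assumed)
for M in (0.018, 0.010):  # ~pure steam vs. a hydrogen-rich product mix, kg/mol
    a = speed_of_sound(gamma, T, M)
    v_e = exhaust_velocity_limit(gamma, T, M)
    print(f"M = {M * 1000:.0f} g/mol: a = {a:.0f} m/s, v_e -> {v_e:.0f} m/s")
```

Both quantities scale as $\sqrt{T/M}$, which is the sense in which the two framings are the same fact; the energy framing just makes it clearer why lower molecular weight helps.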

Is that not right?

Comment by zrezzed (kyle-ibrahim) on Thomas Kwa's MIRI research experience · 2023-10-09T04:06:01.431Z · LW · GW

Fun nerd snipe! I gave it a quick go and was mostly able to deconfuse myself, though I'm still unsure of the specifics. I would still love to hear an expert take.

First, what exactly is the confusion?

For an LOX/LH2 rocket, the most energy-efficient fuel ratio is stoichiometric, at 8:1 by mass. However, real rockets apparently use ratios with an excess of hydrogen to boost $I_{sp}$[1] -- somewhere around 4:1[2] seems to provide the best overall performance. This is confusing, as my intuition is telling me: for the same mass of propellant, a non-stoichiometric fuel ratio is less energetic. Less energy being put into a gas with more mols should mean sufficiently lower temperatures that the exhaust velocity should also be lower, thus lower thrust and $I_{sp}$.

So, where is my intuition wrong?

The total fuel energy per unit mass is indeed lower, nothing tricky going on there. There's less loss than I expected though. Moving from an 8:1 -> 4:1 ratio only results in a theoretical 10% energy loss[3], but an 80% increase in products (by mol).
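A quick sanity check of the footnote-3 numbers (a sketch; the enthalpy figure is ΔHf of liquid water, −285.8 kJ/mol, which is what those totals imply):

```python
M_H2, M_O2 = 2.016, 31.998  # molar masses, g/mol
dHf_H2O = -285.8            # kJ/mol, formation enthalpy of liquid H2O

def burn(n_H2, n_O2):
    """React 2 H2 + O2 -> 2 H2O with whatever runs out first (O2 here)."""
    n_H2O = 2 * min(n_H2 / 2, n_O2)
    mass = n_H2 * M_H2 + n_O2 * M_O2  # g of propellant
    energy = -n_H2O * dHf_H2O         # kJ released
    mols_out = n_H2O + (n_H2 - n_H2O) # products: H2O plus leftover H2
    return mass, energy, mols_out

for label, n_H2, n_O2 in [("8:1 stoich", 20, 10), ("4:1 fuel-rich", 36, 9)]:
    mass, energy, mols = burn(n_H2, n_O2)
    print(f"{label}: {mass:.0f} g, {energy:.0f} kJ "
          f"({energy / mass * 1000:.0f} kJ/kg), {mols:.0f} mol of products")
```

Per unit mass, the fuel-rich mixture releases about 10% less energy, spread over 80% more mols of product.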

However, assuming that lower energy implies lower temperatures was at least partially wrong. Given a large enough difference in specific heat, less energy could result in a higher temperature. In our case though, the excess hydrogen actually increases the heat capacity of the products, meaning a stoichiometric ratio will always produce the higher temperature[4].

But as it turns out, a stoichiometric ratio of LOX/LH2 burns so hot that a portion of the H2O dissociates into various species, significantly reducing efficiency. A naive calculation of the stoichiometric flame temperature gives around 5,800K, vs ~3,700K when these details are taken into account[5]. Additionally, this inefficiency drops off quickly as temperatures lower, meaning a 4:1 ratio burns much more efficiently and can still generate temperatures over 3,000K.
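A back-of-envelope version of the naive number (a sketch assuming a constant cp for steam of ~42 J/(mol·K) and no dissociation; real calculations integrate temperature-dependent cp over equilibrium species):

```python
# Naive adiabatic flame temperature for stoichiometric H2/O2, ignoring dissociation.
dH_per_mol = 241.8e3  # J released per mol of gaseous H2O formed
cp_steam = 42.0       # J/(mol*K), rough high-temperature average (assumed)
T0 = 298.0            # K, initial temperature

T_naive = T0 + dH_per_mol / cp_steam
print(f"naive flame temperature ~ {T_naive:.0f} K")  # ~6100 K, ballpark of the 5,800 K figure
```

Dissociation of H2O into OH, H, O, and so on absorbs a large share of that energy, which is what pulls the real number down to ~3,700K.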

This seems to be the primary mechanism behind the improved $I_{sp}$: a 4:1 fuel ratio is able to generate combustion temperatures close enough to the stoichiometric ratio's, in a gas with a high enough heat capacity, to yield a higher exhaust velocity. And indeed, plugging rough numbers into the exhaust velocity equation[6], this bears out.
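For example, comparing the two mixtures with the rough temperatures above (a sketch; it assumes gamma and the pressure ratio are similar for both mixtures, so everything but $\sqrt{T_c/\bar{M}}$ cancels out of the comparison):

```python
import math

def ve_proxy(T_c, M_bar):
    """Exhaust velocity up to a shared constant: v_e ~ sqrt(T_c / M_bar)."""
    return math.sqrt(T_c / M_bar)

# Stoichiometric: pure steam, M ~ 18 g/mol, T_c ~ 3700 K (dissociation included).
# 4:1 fuel-rich: 18 H2O + 18 H2 -> mean M ~ (18*18 + 18*2) / 36 = 10 g/mol, T_c ~ 3000 K.
stoich = ve_proxy(3700, 18.0)
rich = ve_proxy(3000, 10.0)
print(f"fuel-rich / stoichiometric velocity ratio ~ {rich / stoich:.2f}")  # ~1.21
```

So even burning a few hundred kelvin cooler, the lighter exhaust comes out meaningfully faster.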

The differences in molecular weight and heat capacity also affect how efficiently a real rocket nozzle can convert the heat energy into kinetic energy, which is what the other terms in the exhaust velocity equation correct for. But as far as I can tell, this is not the dominant effect, and it actually reduces exhaust velocity for the 4:1 mixture (though I'm very uncertain about this).

The internet is full of 1) poor, boldly incorrect, angry explainers on this topic, and 2) incredibly high-quality rocket science resources and tools (this was some of the most dissonant non-CW discourse I've had to wade through). With all the great resources that do exist, though, I was surprised I couldn't find any existing intuitive explanations! I found either muddled thinking around this specific example, or clear thinking about the math in the abstract.

... or who knows, maybe my reading comprehension is just poor!

  1. ^

    Intense heat and the danger of un-reacted, highly oxidizing O2 in the exhaust also motivate excess-hydrogen ratios.

  2. ^

    The Space Shuttle Main Engine used a 6.03:1 ratio, in part because a 4:1 ratio would require a much, much larger LH2 tank. 

  3. ^

    20 H2 + 10 O2 → 20 H2O: ΔH ≈ −5716 kJ, vs 36 H2 + 9 O2 → 18 H2O + 18 H2: ΔH ≈ −5144.4 kJ (both ≈360 g of propellant, so the comparison is at equal mass).

  4. ^

    For the fuel-rich mixture, if we were somehow able to heat only the water product, the temperature would equal the stoichiometric flame temp. Once the excess H2 must also be heated, temperatures are necessarily lower. Charts showing the flame temp of various fuel ratios support this: http://www.braeunig.us/space/comb-OH.htm

  5. ^

    See the SSME example here: https://www.nrc.gov/docs/ML1310/ML13109A563.pdf. Ironically, they incorrectly use a stoichiometric ratio in their calculations. But as they show, the reaction inefficiencies explain the vast majority of the temperature discrepancy.

  6. ^

Comment by zrezzed (kyle-ibrahim) on Which rationality posts are begging for further practical development? · 2023-07-24T02:39:13.791Z · LW · GW

I think this is a great goal, and I’m looking forward to what you put together!

This may be a bit different than the sort of thing you’re asking about, but I’d love to see more development/thought around topics related to https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline .

Rationality is certainly a skill, and better / more concise exposition on rationality itself can help people develop it. But once you learn to think right, what are some of the most salient object-level ideas that come next? How do we better realize values in the real world, and make use of / propagate these better ways of thinking? Why is this so hard, and what are strategies to make it easier?

SSC/ACX is a great example of better exploring object-level ideas, and I’d love to see more of that type of work pulled back into the community.

Comment by zrezzed (kyle-ibrahim) on Adumbrations on AGI from an outsider · 2023-05-29T19:59:54.444Z · LW · GW

What could a million perfectly-coordinated, tireless copies of a pretty smart, broadly skilled person running at 100x speed do in a couple years?

 

I think this feels like the right analogy to consider.

And in considering this thought experiment, I'm not sure trying to solve alignment is the only/best way to reduce risks. This hypothetical seems open to reducing risk by 1) better understanding how to detect these actors operating at large scale, and 2) researching resilient plug-pulling strategies.

Comment by zrezzed (kyle-ibrahim) on Adumbrations on AGI from an outsider · 2023-05-29T19:36:09.669Z · LW · GW

Moreover, even if these things don't work that way and we get a slow takeoff, that doesn't necessarily save humanity. It just means that it will take a little longer for AI to be the dominant form of intelligence on the planet. That still sets a deadline to adequately solve alignment.

 

If a slow takeoff is all that's possible, doesn't that open up other options for saving humanity besides solving alignment?

I imagine far more humans will agree p(doom) is high if they see AI isn't aligned and it's growing to be the dominant form of intelligence that holds power. In a slow takeoff, people should be able to realize this is happening, and effect non-alignment-based solutions (like bombing compute infrastructure).

Comment by zrezzed (kyle-ibrahim) on Adumbrations on AGI from an outsider · 2023-05-29T18:42:34.618Z · LW · GW

a superintelligence will be at least several orders of magnitude more persuasive than character.ai or Stuart Armstrong.

 

Believing this seems central to believing in a high p(doom).

But, I think it's not a coherent enough concept to justify believing it. Yes, some people are far more persuasive than others. But how can you extrapolate that far beyond the distribution we observe in humans? I do think AI will prove to be better than humans at this, and likely much better.

But "much" better isn't the same as "better enough to be effectively treated as magic".

Comment by zrezzed (kyle-ibrahim) on Book Review: How Minds Change · 2023-05-28T18:19:38.680Z · LW · GW

This isn't where the community is supposed to have ended up. If rationality is systematized winning, then the community has failed to be rational.

 

Great post, and timely, for me personally. I found myself having similar thoughts recently, and this was a large part of why I recently decided to start engaging with the community more (so apologies for coming on strong in my first comment, while likely lacking good norms).

Some questions I'm trying to answer, and this post certainly helps a bit:

  • Is there general consensus on the "goals" of the rationalist community? I feel like there implicitly is something like "learn and practice rationality as a human" and "debate and engage well to co-develop valuable ideas".
  • Would a goal more like "helping raise the overall sanity waterline" ultimately be a more useful and successful "purpose" for this community? I tentatively think so. Among other reasons, as bc4026bd4aaa5b7fe points out, there are a number of forces that trend this community towards being insular, and an explicit goal against that tendency would be useful.