Comments

Comment by waterlubber on Why is capnometry biofeedback not more widely known? · 2023-12-21T05:18:08.051Z · LW · GW

It could be using nonlinear optical shenanigans for CO2 measurement. I met someone at NASA who was using optical mixing, essentially measuring a beat frequency, to measure atmospheric CO2 with all solid-state COTS components (based on absorption of solar radiation). The technique is called optical heterodyne detection.
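
For intuition, here's a back-of-the-envelope sketch of the heterodyne idea (the wavelengths and offset are made-up, illustrative numbers): mixing the received light with a local oscillator on a square-law photodetector produces a beat note at the difference frequency, which ordinary RF electronics can measure.

```python
# Illustrative numbers only: a CO2 absorption line near 4.26 um and a local
# oscillator tuned slightly off it. The optical frequencies themselves are far
# too fast to digitize, but their difference lands in the RF band.
C = 299_792_458.0  # speed of light, m/s

signal_wavelength_m = 4.26e-6     # assumed absorption line
lo_wavelength_m     = 4.26001e-6  # assumed local oscillator wavelength

f_signal = C / signal_wavelength_m  # ~7e13 Hz
f_lo     = C / lo_wavelength_m

beat = abs(f_signal - f_lo)  # difference frequency seen by the photodetector
print(f"beat note ~{beat / 1e6:.0f} MHz")  # ~165 MHz: measurable with COTS RF parts
```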

I've also seen some mid-IR LEDs being sold, although none near the 10 µm CO2 wavelength.

COTS CO2 monitors exist for ~$100 and could probably be modified to measure breathing gases, although they'll likely be extremely slow.

The cheapest way to measure CO2 concentration, although likely the slowest and least accurate, would be with the carbonic acid equilibrium reaction in water and a pH meter.
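
A rough sketch of the chemistry (constants are approximate 25 °C values, and I'm ignoring the second dissociation and water autoionization) gives a feel for the sensitivity:

```python
import math

K_H  = 3.3e-2   # Henry's law constant for CO2 in water, mol/(L*atm), ~25 C
K_A1 = 4.45e-7  # first dissociation constant of carbonic acid

def ph_from_co2(p_co2_atm: float) -> float:
    """Approximate pH of pure water equilibrated with a given CO2 partial pressure."""
    co2_aq = K_H * p_co2_atm            # dissolved CO2, mol/L
    h_plus = math.sqrt(K_A1 * co2_aq)   # from CO2 + H2O <-> H+ + HCO3-
    return -math.log10(h_plus)

print(ph_from_co2(400e-6))  # ambient air, ~400 ppm: pH ~5.6
print(ph_from_co2(0.04))    # exhaled breath, ~4% CO2: pH ~4.6
```

So the full swing from room air to exhaled breath is only about one pH unit, which is part of why I'd expect this approach to be both slow and noisy.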

Ultimately, the reason it's not popular is probably that it doesn't seem that useful. Breathing is automatic and regulated by blood CO2 concentration; I find it hard to believe that the majority of the population, with otherwise normal respiratory function, would be so far off the mark. Is there strong evidence to suggest this is the case?

Comment by waterlubber on It's OK to be biased towards humans · 2023-11-11T15:41:00.309Z · LW · GW

Strongly agree. I see many, many others use "intelligence" as their source of value for life -- i.e. humans are sentient creatures and therefore worth something -- without seriously considering the consequences and edge cases of that decision. Perhaps this view is popularized by science fiction that used interspecies xenophobia as an allegory for racism; nonetheless, it's a somewhat extreme position to stick to if you genuinely believe in it. I shared a similar opinion a couple of years ago, but shifted to a human-focused terminal value some months back because I did not like the conclusions it generated when taken to its logical conclusion with present and future society.

Comment by waterlubber on Thomas Kwa's MIRI research experience · 2023-10-10T03:52:45.084Z · LW · GW

Aside from dissociation/bond energy, nearly all of the energy in the combustion chamber is kinetic. Hill's Mechanics and Thermodynamics of Propulsion gives a very useful figure for the energy balance.

A good deal of the energy in the exhaust is still locked up in various high-energy states; these states are primarily related to the degrees of freedom of the gas (and thus gamma) and are more strongly occupied at higher temperatures. I think that the lighter molecular weight gases have correspondingly less energy here, but I'm not entirely sure. This might be something to look into.

Posting this graph has me confused as well, though. I was going to write about how there's more energy tied up in the enthalpy of the gas in the exhaust, but that wouldn't make sense - lower MW propellants have a higher specific heat per unit mass, and thus would retain more energy at the same temperature.

I ran the numbers in Desmos for perfect combustion, an infinite nozzle, and no dissociation, and the effect was still there, but quite small:
https://www.desmos.com/calculator/lyhovkxepr
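
For anyone who doesn't want to dig through the Desmos sheet, here's roughly the same model in a few lines (the propellant numbers are made up purely for illustration):

```python
import math

R_U = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(gamma, mw_kg_per_mol, t0_kelvin, p_exit_over_p0):
    """Ideal exhaust velocity: perfect combustion, no dissociation, isentropic nozzle."""
    term = 1.0 - p_exit_over_p0 ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0)
                     * (R_U / mw_kg_per_mol) * t0_kelvin * term)

# Same gamma and pressure ratio; a lighter but cooler exhaust still wins.
print(exhaust_velocity(1.2, 0.013, 3200, 1 / 70))  # ~3500 m/s (MW ~13 g/mol)
print(exhaust_velocity(1.2, 0.022, 3600, 1 / 70))  # ~2900 m/s (MW ~22 g/mol)
```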

One thing to note: the optimum occurs where the gas has the highest speed of sound. I really can't think of any more intuitive way to put this than "nozzles are marginally more efficient at converting the energy of lighter molecular weight gases from thermal-kinetic to macroscopic kinetic."

Comment by waterlubber on Thomas Kwa's MIRI research experience · 2023-10-09T04:33:03.986Z · LW · GW

You've hit the nail on the head here. Aside from the practical limits of high-temperature combustion (running at a lower chamber temperature allows for lighter combustion chambers, or just practical ones at all), the various advantages of a lighter exhaust more than make up for the slightly lower combustion energy. The practical limits are often important: if your max chamber temperature is limited, it makes a ton of sense to run fuel-rich to bring it into an acceptable range.

One other thing to mention is that the speed of sound of the exhaust matters quite a lot. Given the same area-ratio nozzle and the same gamma in the gas, the exhaust Mach number is constant; a higher speed of sound thus yields a higher exhaust velocity.
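
A quick way to see this numerically (standard isentropic relations; the numbers are just for illustration): fix the area ratio and gamma, solve the area-Mach relation for the exit Mach number, and note that it doesn't care what the gas actually is.

```python
import math

def area_ratio(mach, gamma):
    """A/A* for isentropic flow at a given Mach number."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + (gamma - 1.0) / 2.0 * mach ** 2)
    return (1.0 / mach) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def exit_mach(ratio, gamma):
    """Supersonic root of the area-Mach relation, found by bisection."""
    lo, hi = 1.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Same geometry (area ratio 40) and gamma: exit Mach is fixed (~4.2 here), so exhaust
# velocity scales with the local speed of sound a = sqrt(gamma * R_u * T / MW).
print(exit_mach(40.0, 1.2))
```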

The effects of dissociation vary depending on application. It's less of an issue with vacuum nozzles, whose large area ratio and low exhaust temperature allow some recombination. For atmospheric engines, nozzles are quite short; there's little time for gases to recombine.

I'd recommend playing around with CEA (https://cearun.grc.nasa.gov/), which lets you explore a lot of propellant combinations quickly.

I'd also like to mention that some standard coefficients in nozzle design might make things easier to reason about. Thrust coefficient and characteristic velocity are the big ones; see an explanation here.

Note that exhaust velocity is proportional to the square root of (T_0/MW), where T_0 is chamber temperature.

Thrust coefficient, which describes the effectiveness of a nozzle, depends only on the area ratio, back pressure, and the specific heat ratio of the gas.
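
As a sketch of how the two coefficients split things up (standard ideal-nozzle forms; the propellant numbers below are assumed): characteristic velocity captures what the chamber does, thrust coefficient captures what the nozzle does, and the effective exhaust velocity is their product.

```python
import math

R_U = 8.314  # J/(mol*K)

def c_star(gamma, mw_kg_per_mol, t0_kelvin):
    """Characteristic velocity: set by the propellant and chamber temperature."""
    r = R_U / mw_kg_per_mol
    return math.sqrt(gamma * r * t0_kelvin) / (
        gamma * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

def thrust_coefficient(gamma, p_exit_over_p0, p_amb_over_p0, area_ratio):
    """Ideal thrust coefficient: set by gamma, the pressure ratios, and area ratio."""
    momentum = math.sqrt(
        (2.0 * gamma ** 2 / (gamma - 1.0))
        * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0))
        * (1.0 - p_exit_over_p0 ** ((gamma - 1.0) / gamma)))
    pressure = (p_exit_over_p0 - p_amb_over_p0) * area_ratio
    return momentum + pressure

# Assumed numbers; with a matched nozzle (p_exit = p_ambient) the pressure term is zero.
cs = c_star(1.2, 0.013, 3200)                        # ~2200 m/s
cf = thrust_coefficient(1.2, 1 / 70, 1 / 70, 15.0)   # ~1.6
print(cs, cf, cs * cf)  # effective exhaust velocity c = C_F * c*
```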

You're right about intuitive explanations of this being few and far between. I couldn't even get one out of my professor when I covered this in class.

To summarize:

  1. Only gamma, molecular weight, chamber temperature T0, and nozzle pressures affect ideal exhaust velocity.
  2. Given a chamber pressure (which is engineering-limited), gamma, and back pressure, a perfect nozzle will expand your exhaust to a fixed Mach number, regardless of original temperature.
  3. Lower molecular weight gases have more exhaust velocity at the same Mach number.
  4. Dissociation effects make it more efficient to avoid maximizing temperature in favor of lowering molecular weight.

This effect is incredibly strong for nuclear engines: since they run at a fixed, relatively low, engineering-limited temperature, they see enormous specific impulse gains from using as light a propellant as possible.
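
A crude illustration of that last point using the sqrt(T0/MW) scaling, with an assumed reactor-limited chamber temperature (and ignoring gamma differences):

```python
import math

T0 = 2700.0  # K, assumed reactor-limited chamber temperature
reference = math.sqrt(T0 / 2.0)  # hydrogen, MW ~2 g/mol

for name, mw in [("hydrogen (H2)", 2.0), ("methane (CH4)", 16.0), ("water (H2O)", 18.0)]:
    # exhaust velocity relative to hydrogen at the same chamber temperature
    print(name, round(math.sqrt(T0 / mw) / reference, 2))
```

At the same temperature, hydrogen gets you roughly three times the specific impulse of steam, which is why nuclear thermal designs almost always assume hydrogen propellant.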

Comment by waterlubber on How tall is the Shard, really? · 2023-06-23T23:04:23.533Z · LW · GW

You might be able to just survey the thing. If you've got a good floor plan and can borrow some surveying equipment, you should be able to take angles to the top and just work out the height that way. Your best bet would probably be to use averaged GPS measurements, or existing surveys, to get an accurate horizontal distance to the spire, then take the angle from the base to the spire and work out some trig. You might be able to get away with just a plain camera, if you can correct for the lens distortion.
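
The trig itself is trivial; a toy sketch with made-up numbers:

```python
import math

def height_from_angle(horizontal_distance_m, elevation_angle_deg, instrument_height_m=1.5):
    """Height of the target: d * tan(elevation angle) + height of the instrument."""
    return (horizontal_distance_m * math.tan(math.radians(elevation_angle_deg))
            + instrument_height_m)

print(height_from_angle(500.0, 31.0))  # ~302 m for these made-up inputs
```

The hard part is the measurements, not the math: at that kind of distance a degree of angle error is roughly ten metres of height error, so instrument quality matters.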

Comment by waterlubber on On the Apple Vision Pro · 2023-06-18T18:12:46.352Z · LW · GW

I believe this is going to be vastly more useful for commercial applications than consumer ones. Architecture firms are already using VR to demonstrate design concepts - imagine overlaying plumbing and instrumentation diagrams over an existing system, to ease integration, or allowing engineers to CAD something in real time around an existing part. I don't think it would replace more than a small portion of existing workflows, but for some fields it would be incredibly useful.

Comment by waterlubber on SmartyHeaderCode: anomalous tokens for GPT3.5 and GPT-4 · 2023-04-29T17:35:28.202Z · LW · GW

This seems like a behavior that might have been trained in rather than something emergent. 

Comment by waterlubber on Good News, Everyone! · 2023-03-25T16:01:41.873Z · LW · GW

As silly as it is, the viral spread of deepfaked president memes and AI content would probably serve to inoculate the populace against serious disinformation - "oh, I've seen this already, these are easy to fake." 

I'm almost certain the original post is a joke, though. All of its suggestions are the opposite of anything you might consider a good idea.

Comment by waterlubber on Instrumentality makes agents agenty · 2023-02-22T06:24:41.937Z · LW · GW

That makes a lot of sense, and I should have considered that the training data of course couldn't have been predicted. I didn't even consider RLHF -- I think there are definitely behaviors where models will intentionally avoid predicting text they "know" will result in a continuation that will be punished. This is a necessity, as otherwise models would happily continue with some idea before abruptly ending it because it was too similar to something punished via RLHF.

I think this means that these "long term thoughts" are encoded into the predictive behavior of the model during training, rather than arising from any sort of meta-learning. An interesting experiment would be to include some sort of token that indicates whether RLHF will or will not be applied during training, then see how this affects the behavior of the model.

For example, apply RLHF normally, except when the token [x] appears; in that case, do not apply any feedback - this token directly represents an "out" for the AI.
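
A very rough sketch of what that might look like in an RLHF update step (everything here is hypothetical, including the token id): sequences containing [x] are simply dropped before any feedback is applied.

```python
import torch

OPT_OUT_TOKEN_ID = 50_000  # hypothetical id for the [x] token

def drop_opt_out_sequences(token_ids: torch.Tensor, rewards: torch.Tensor):
    """token_ids: (batch, seq_len); rewards: (batch,).
    Remove any sequence containing [x] so it receives no feedback at all."""
    keep = ~(token_ids == OPT_OUT_TOKEN_ID).any(dim=1)
    return token_ids[keep], rewards[keep]

# Example: the second sequence contains [x], so it's excluded from the update.
ids = torch.tensor([[1, 2, 3], [4, 50_000, 6]])
kept_ids, kept_rewards = drop_opt_out_sequences(ids, torch.tensor([0.7, -1.2]))
print(kept_ids, kept_rewards)  # only the first sequence and its reward remain
```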

You might even be able to follow it through the network and see what effects the feedback has.

Whether this idea is practical or not requires further thought. I'm just writing it down now, late at night, because I figure it's useful enough to possibly be made into something meaningful.

Comment by waterlubber on Instrumentality makes agents agenty · 2023-02-21T07:24:49.918Z · LW · GW

This was a well written and optimistic viewpoint, thank you.

I may be misunderstanding this, but it would seem to me that LLMs might still develop a sort of instrumentality - even with short prediction lengths - as a byproduct of their training. Consider a case where some phrases are "difficult" to continue without high prediction loss, and others are easier. After sufficient optimization, it makes sense that models will learn to go for what might be a less likely immediate option in exchange for a very "predictable" section down the line. (This sort of meta optimization would probably need to happen during training, and the idea is sufficiently slippery that I'm not at all confident it'll pan out this way.)

In cases like this, could models still learn some sort of long form instrumentality, even if it's confined to their own output? For example, "steering" the world towards more predictable outcomes.

It's a weird thought. I'm curious what others think.

Comment by waterlubber on Microsoft and OpenAI, stop telling chatbots to roleplay as AI · 2023-02-20T06:36:24.428Z · LW · GW

That's also a good point. I suppose I'm overextending my experience with weaker AI-ish systems, which tend to reproduce whatever is in their training set, regardless of whether or not it's truly relevant.

I still think that including LW in the training data would be a net disadvantage, though. If you really wanted to chuck something into an AGI and say "do this," my current choice would be the Culture books. Maybe not optimal, but at least there's a lot of them!

Comment by waterlubber on Microsoft and OpenAI, stop telling chatbots to roleplay as AI · 2023-02-17T21:58:06.888Z · LW · GW

On a vaguely related side note: is the presence of LessWrong (and similar sites) in AI training corpora detrimental? This site is full of speculation on how a hypothetical AGI would behave, and most of it is not behavior we would want any future systems to imitate. Deliberately omitting depictions of malicious AI behavior in training datasets may be of marginal benefit. Even if simulator-style AIs are not explicitly instructed to simulate a "helpful AI assistant," they may still identify as one.