That works well enough, but a Vital 200S currently costs $160 at Amazon - less than the cheapest variant of the thing you linked - and has a slightly higher max air delivery rate, some granular carbon in the filter, and features like power buttons. The Vital 200S on speed 2 has similar power usage and slightly less noise, but also less airflow - though a carbon layer always reduces airflow. It doesn't have a rear intake, so it can be placed against a wall. It also has a washable prefilter.
Compared to what you linked, the design in this post has 3 filters instead of 2, some noise blocking, and a single large fan instead of multiple fans. Effective floor area usage should be slightly less, but of course it has to go together with shelving for that.
What would this say about subculture gatekeeping? About immigration policy?
First, we have to ask: what's the purpose? Generally aircraft try to get up to their cruise speed quickly and then spend most of their time cruising, and you optimize for cruise first and takeoff second. Do we want multiple cruise speeds, eg a supersonic bomber that goes slow some of the time and fast over enemy territory? Are we designing a supersonic transport and trying to reduce fuel usage getting up to cruise?
And then, there are 2 basic ways you can change the bypass ratio: you can change the fan/propeller intake area, or you can turn off turbines. The V-22 has a driveshaft through the wing to avoid crashes if an engine fails; in theory you could turn off an engine while powering the same number of propellers, which is sort of like a variable bypass ratio. If you have a bunch of turbogenerators inside the fuselage, powering electric fans elsewhere, then you can shut some down while powering the same number of fans. There are also folding propellers.
The question is always, "but is that better?"
On the other hand, the hydrogen pushing against the airship membrane is also an electrostatic force.
Yes, helium costs would be a problem for large-scale use of airships. Yes, it's possible to use hydrogen in airships safely. This has been noted by many people.
Hydrogen has some properties that make it relatively safe:
- it's light so it rises instead of accumulating on the ground or around a leak
- it has a relatively high ignition temperature
and some properties that make it less safe:
- it has a wide range of concentrations where it will burn in air
- fast diffusion, that is, it mixes with air quickly
- it leaks through many materials
- it embrittles steel
- it causes some global warming if released
Regardless, the FAA does not allow using hydrogen in airships, and I don't expect that to change soon. Especially since accidents still happen despite the small number of airships.
In any case, the only uses of airships that are plausibly economical today are: advertising and luxury yachts for the wealthy. Are those things that you care about working towards?
see also: These Are Your Doges, If It Please You
IKEA already sells air purifiers; their models just have a very low flow rate. There are several companies selling various kinds of air purifiers, including multiple ones with proprietary filters.
What all this says to me is, the problem isn't just the overall market size.
Apart from potential harms of far-UVC, it's good to remove particulate pollution anyway. Is it possible that "quiet air filters" is an easier problem to solve?
I'm not convinced that far-UVC is safe enough around humans to be a good idea. It's strongly absorbed by proteins so it doesn't penetrate much, but:
- It can make reactive compounds from organic compounds in air.
- It can produce ozone, depending on the light. (That's why mercury vapor lamps block the 185nm emission.)
- It could potentially make toxic compounds when it's absorbed by proteins in skin or eyes.
- It definitely causes degradation of plastics.
And really, what's the point? Why not just have fans sending air to (cheap) mercury vapor lamps in a contained area where they won't hit people or plastics?
As you were writing that, did you consider why chlorhexidine might cause hearing damage?
https://en.wikipedia.org/wiki/Chlorhexidine#Side_effects
It can also obviously break down to 4-chloroaniline and hexamethylenediamine. Which are rather bad. This was not considered in the FDA's evaluation of it.
If you just want to make the tooth surface more negatively charged...a salt of poly(acrylic acid) seems better for that. And I think some toothpastes have that.
EDTA in toothpaste? It chelates iron and calcium. Binding iron can prevent degradation during storage, so a little bit is often added.
Are you talking about adding a lot more? For what purpose? In situations where you can chelate iron to prevent bacterial growth, you can also just kill bacteria with surfactants. Maybe breaking up certain biofilms held together by Ca? EDTA doesn't seem very effective for that for teeth, but also, chelating agents that could strip Ca from biofilms would also strip Ca from teeth. IIRC, high EDTA concentration was found to cause significant amounts of erosion.
I wouldn't want to eat a lot of EDTA, anyway. Iminodisuccinate seems less likely to have problematic metabolites.
You can post on a subreddit and get replies from real people interested in that topic, for free, in less than a day.
Is that valuable? Sometimes it is, but...not usually. How much is the median comment on reddit or facebook or youtube worth? Nothing?
In the current economy, the "average-human-level intelligence" part of employees is only valuable when you're talking about specialists in the issue at hand, even when that issue is being a general personal assistant for an executive rather than a technical engineering problem.
Triplebyte? You mean, the software job interviewing company?
- They had some scandal a while back where they made old profiles public without permission, and some other problems that I read about but can't remember now.
- They didn't have a better way of measuring engineering expertise; they just did the same leetcode interviews that Google/etc did. They tried to be as similar as possible to existing hiring at multiple companies; the idea wasn't better evaluation but reducing redundant testing. But companies kind of like doing their own testing.
- They're gone now, acquired by Karat. Which seems to be selling companies a way to make their own leetcode interviews using Triplebyte's system, thus defeating the original point.
Good news: the post is both satire and serious, at the same time but on different levels.
Here are some publicly traded large companies that do a lot of coal mining:
- https://finance.yahoo.com/quote/BHP/
- https://finance.yahoo.com/quote/RIO/
- https://finance.yahoo.com/quote/AAL.L/
- https://finance.yahoo.com/quote/COALINDIA.NS/
- https://finance.yahoo.com/quote/GLCNF/
Coal India did pretty well, I guess. The others, not so much.
Nice post, Sarah.
If Alzheimer's is ultimately caused by repressor binding failure, that could explain overexpression of the various proteins mentioned.
> in short, your claim: "The cost of aluminum die casting and stamped steel is, on Tesla's scale, similar" both seems to miss the entire point and run against literally everything I have seen written about this. You need citations for this claim, I am not going to take your word for it.
OK, here's a citation then: https://www.automotivemanufacturingsolutions.com/casting/forging/megacasting-a-chance-to-rethink-body-manufacturing/42721.article
> Here I would be careful since investments, especially in a particular model generation of welding robots, are depreciated. For forming processes, the depreciation can even extend over three or four model generations. This technological write-off – bear in mind that this is not tax-related – runs over a timeframe of 30 years. For the OEMs that are already using these machines for existing vehicle generations, the use of the new technology makes no sense. On the other hand, thanks to its greenfield approach, Tesla can save itself these typical investments in shell-type construction. In a brownfield, it would be operationally nonsensical not to keep using long depreciated machinery. So, in this situation, I would not support the 20-30% cost savings that were cited.
>
> With die casting, one important aspect is that there is a noticeable reduction in the service life of the die-casting moulds. Due to so-called thermal shock, the rule of thumb is that a die-casting mould is good for 100,000-150,000 shots. By contrast, one forming tool is used to make 5m-6m parts. So, we are talking about a factor of 20 to 30. There is quite clearly a limited volume range for which the casting-intensive solution would be appropriate. To me, aluminium casting holds little appeal for very small and very large volumes. Especially for mass production in the millions, you would need about six or seven of these expensive die-casting moulds. We estimate that the die-casting form for the single part, rear-end of a Tesla would weigh about 80-100 tonnes. This translates to huge costs for handling and the peripheral equipment, in the form of cranes, for example. Die-casting moulds also pose technological obstacles and hazards. The leakage of melted material is cited as one example. The risks of not even being able to operate in some situations are not negligible.
> Or the geometry of the frame was insufficiently optimized for vertical shear. I do not understand how you reached this conclusion.
No. If aluminum doesn't have weak points, it stretches/bends before breaking. The Cybertruck hitch broke off cleanly without stretching. Therefore there was a weak point.
> By nothing I mean that the estimate for their marketing spend in 2022 (literally all marketing to include PR if there was any at all) was $175k.
I'm skeptical of that. PR firms don't report to Vivvix.
Here are the costs from the above link:
It's worth noting that countries (such as India) have the option of simply not respecting a patent when the use is important and the fees requested are unreasonable. Also, patents aren't international; it's often possible to get around them by simply manufacturing and using a chemical in a different country.
The only advantage DDT has over those is lower production cost, but the environmental harms per kg of DDT are greater than the production cost savings, so using it is just never a good tradeoff.
As I said, if DDT was worth using there, it was worth spending however much extra money it would have been to spray with other things instead. If it wasn't worth that much money, it wasn't worth spraying DDT.
And regarding "environmental harms," from personal experience scratching myself bloody as a kid from itchy bites after going to the park in the evening, I would extinct a dozen species if mosquitoes went down with them.
The biggest problem with DDT is that it is bad for humans.
While I still disagree with your interpretation of that post, I don't want to argue over the meaning of a post from that blog. There are actual books written about the history of titanium. I'm probably as familiar with it as the author of Construction Physics, and saying A-12-related programs were necessary for the development of titanium usage is just wrong. People who care about this and don't trust my conclusion should go look up good, more extensive sources on their own.
> If it wasn't for the A-12 project (and its precursors and successors), then we simply wouldn't be able to build things out of titanium.
That is not an accurate summary of the linked article.
> In 1952, another titanium symposium was held, this one sponsored by the Army’s Watertown Arsenal. By then, titanium was being manufactured in large quantities, and while the prior symposium had been focused on laboratory studies of titanium’s physical and chemical properties, the 1952 symposium was a “practical discussion of the properties, processing, machinability, and similar characteristics of titanium”. While physical characteristics of titanium still took center stage, there was a practical slant to the discussions – how wide a sheet of titanium can be produced, how large an ingot of it can be made, how can it be forged, or pressed, or welded, and so on. Presentations were by titanium fabricators, but also by metalworking companies that had been experimenting with the metal.
That's before the A-12.
> In 1966, another titanium symposium was held, this one sponsored by the Northrup Corporation. By this time, titanium had been used successfully for many years, and the purpose of this symposium was to “provide technical personnel of diversified disciplines with a working knowledge of titanium technology.” This time, the lion’s share of the presentations are by aerospace companies experienced in working with the metal, and the uncertain air that existed in the 1952 symposium is gone.
At that point, the A-12 program was still classified and the knowledge gained from it was not widely shared.
I had an interview with one of these organizations (that will remain unnamed) where the main person I was talking to was really excited about a bunch of stupid bullshit ideas (eg for experimental methods) that, based on their understanding of them, must have come from either university press releases or popular science magazines like New Scientist. I was trying to find a "polite in whatever culture these people have" way to say "this is not useful, I'd like to explain why but it will take a while, here are better things" but doing that eloquently is one of my weak points.
From what I've seen of the people there, ARPA-E has some smart people ("ordinary smart", not geniuses) but the ARPAs are still very tied to the university system, with a heavy reliance on PhD credentials.
I think the basic idea of using more steps of smaller size is worth considering. Maybe it reduces overall drift, but I suspect it doesn't, because my view is:
Models have many basins of attraction for sub-elements. As model capability increases continuously, there are nearly-discrete points where aspects of the model jump from 1 basin to another, perhaps with cascading effect. I expect this to produce large gaps from small changes to models.
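As a toy illustration of that view - a tilted double-well potential, entirely my own example and not a model of NNs: the resting state changes smoothly as the tilt increases, until the occupied basin vanishes and the state jumps discretely.

```python
import numpy as np

# Toy illustration: gradient descent on a tilted double-well potential
# V(x) = x^4 - x^2 + a*x. As the tilt `a` increases continuously, the
# right-hand basin eventually vanishes and the resting state jumps
# discretely to the other basin - a small, continuous change in the
# system producing a large, nearly-discrete change in behavior.

def settle(x, a, lr=0.02, steps=10000):
    for _ in range(steps):
        x -= lr * (4 * x**3 - 2 * x + a)  # gradient of V
    return x

prev = settle(0.7, 0.0)  # start in the right-hand basin
for a in np.linspace(0.0, 1.0, 101):
    cur = settle(prev, a)  # warm-start from the previous resting state
    if abs(cur - prev) > 0.5:
        print(f"jump near a = {a:.2f}: x {prev:.3f} -> {cur:.3f}")
    prev = cur
```

The jump happens around a ≈ 0.55, where the right-hand minimum disappears; with several coupled sub-elements, one such jump can push others past their own thresholds, which is the cascading effect.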
Sure, some people add stuff like cheese/tomatoes/ham to their oatmeal. Personally I think they go better with rice, but de gustibus non disputandum est.
The scope of our argument seems to have grown beyond what a single comment thread is suitable for.
"AI safety via debate" is from 2 years before "Writeup: Progress on AI Safety via Debate", so the latter post should be more up-to-date. I think that post does a good job of considering potential problems; the issue is that I think the noted problems & assumptions can't be handled well, make that approach very limited in what it can do for alignment, and aren't really dealt with by "Doubly-efficient debate". I don't think such debate protocols are totally useless, but they're certainly not a "solution to alignment".
> I don't expect such a huge gap between debaters and judges that the judge simply can't understand the debaters' concepts
You don't? But this is a major problem in arguments between people. The variation within humans is already more than enough for this! There's a gap like that every 35 IQ points or so. I don't understand why you're confident this isn't an issue.
I guess we've found our main disagreement, at least?
> So in this particular case I am saying: if you penalize debaters that are inconsistent under cross-examination, you are giving an advantage to any debater that implements an honest strategy, and so you should expect training to incentivize honesty.
Now you're training for multiple objectives:
1. You want the debater AI to argue for proposition A or not-A according to its role and convince human judges of that.
2. You want it to not change its position on sub-arguments.
But (2) is ill-defined. Can sub-arguments be combined so they each get less weight? Are they all worth the same? What if several sub-arguments all depend on a single sub-sub-argument? Good arguments for A or not-A should have lots of disagreements - or do you want to train AI that makes all the same sub-arguments for A or not-A and then says "this implies A / not-A"? I don't think this works.
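To make that concrete, here's a minimal sketch of what penalizing cross-context inconsistency might look like - all names, and the choice of mean pairwise KL, are my own illustrative assumptions, not anything from the debate papers. The free weight `lam` and the question of how to aggregate over shared sub-arguments are exactly the ill-defined parts:

```python
import torch
import torch.nn.functional as F

def consistency_penalty(answer_logits):
    """answer_logits: [n_contexts, n_answers] - the debater's answer
    distribution for the same sub-question posed in different contexts
    (cross-examination). Disagreement is scored as mean pairwise KL;
    assumes n_contexts >= 2."""
    logp = F.log_softmax(answer_logits, dim=-1)
    p = logp.exp()
    n = p.shape[0]
    total = answer_logits.new_zeros(())
    for i in range(n):
        for j in range(n):
            if i != j:
                # KL(p_i || p_j): context j's answer diverging from context i's
                total = total + F.kl_div(logp[j], p[i], reduction="sum")
    return total / (n * (n - 1))

# Combined objective: loss = persuasion_loss + lam * consistency_penalty(...)
# `lam` is a free knob, and nothing here says how to weight sub-questions
# that all hang off the same deeper sub-sub-argument.
```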
In response to the linked "HCH" post:
Yes, an agent past some threshold can theoretically make a more-intelligent agent. But that doesn't say anything about alignment; the supposed "question-answering machine" would be subject to instrumental convergence and mesaoptimizer issues, and you'd get value drift with each HCH stage, just as you would with RSI schemes.
Phytic acid is certainly a thing, but it's not quite that simple, see eg https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8746346/. Also, uncooked fruits have phytase. And also, it's not an issue unless you eat mostly something high in it for most meals.
Yes, on one level that's part of the joke. But also, following the above instructions, it can be a low-cost complete meal with nonperishable ingredients that can be fixed in <5 minutes of work and <10 minutes of waiting.
I'm the current owner of the Oatmeal subreddit; that's how you can be sure I'm a Real Expert.
> If you want to disallow appeals to authority
I do, but more importantly, I want to disallow the judge understanding all the concepts here. Suppose the judge says to #1: "What is energy?" or "What is conservation?" and it can't be explained to them - what then?
Also, argument 1 isn't actually correct, E=mc^2 and so on.
> That seems right, but why is it a problem? The honest strategy is fine under cross-examination, it will give consistent answers across contexts.
"The honest strategy"? If you have that, you can just ask it and not bother with the debate. If the problem is distinguishing it, and only dishonest actors are changing their answers based on the provided situation, you can just use that info. But why are you assuming you have an "honest strategy" available here?
> You can recursively decompose the claim "perpetual motion machines are known to be impossible" until you get down to a claim like "such and such experiment should have such and such outcome", which the boss can then perform to determine a winner.
Ah, I don't think you can. Making that kind of abstract conclusion from a practical number of experiments requires abstractions like potential energy, entropy, Noether's theorem, etc - which in this example, the judge doesn't understand. (Without such abstractions, you'd need to consider every possible type of machine separately, which isn't feasible.) This seems like a core of our disagreement here.
> You can cross-examine the inventor and show that in other contexts they would agree that perpetual energy machines are impossible.
The debaters are the same AI with different contexts, so the same is true of both debaters. Am I missing something here?
> Which paper are you referring to? If you mean doubly efficient debate
Yes, "doubly efficient debate".
That argument doesn't explain things like:
- furry avatars are almost always cartoon versions of animals, not realistic ones
- furries didn't exist until anthropomorphic cartoon animals became popular (and no, "spirit animals" are not similar)
- suddenly ponies became more popular in that sense after a popular cartoon with ponies came out
It's just Disney and cartoons.
To clarify the 2nd point, here's an example. Suppose someone presents you with a large box that supposedly produces electricity endlessly. Your boss thinks it works, and you're debating the inventor in front of your boss.
"Perpetual motion machines are known to be impossible" you say, but your boss isn't familiar with that conceptual class or the reasoning involved.
The inventor says, "Here, let's plug in a thing, we can see that the box does in fact produce a little electricity." Your boss finds this very convincing.
The process proposed in the paper is something like, "let's randomly sample every possible machine to see if it does perpetual motion". So the inventor points to the sun and says, "that thing has been making energy continuously and never stops for as long as we've been able to tell". They point to some stars and say the same thing.
The sampling and evaluation is dependent on a conceptual framework that isn't agreed on, and waiting for the sun and stars to burn out isn't very practical.
I took a look at the debate papers. I think that's a good angle to take, but they're missing some factors that sometimes make debates between humans fail.
- Humans and neural networks both have some implicit representation of probability distributions over output types. The basis behind "I can't explain why, but that seems unlikely" can be more accurate than "here's an argument for why that will happen". You're basically delegating the problem of "making AI thinking explainable" to the AI itself, but if you could do that, you could just...make neural networks explainable, perhaps by asking an AI what another AI is doing and doing RLHF on the response. But that doesn't seem to work in general. In other words, the problem is that an agent restricted to the arguments a NN can articulate is weaker than a NN that doesn't have to route everything through argument production.
- Reasoning about probability distributions means argument branches can be of the type "X is a likely type of thing" vs "X is rare". And empirically checking the distribution can be too expensive. That makes the debate framework not work as well.
We're talking about different timescales. Apple's investments paid off within the tenure of top executives. Meanwhile, banks are still using COBOL.
I upvoted this post, but I do have a few comments.
For what it’s worth, I like glass fibers. They’re pretty easy to make, the material can be sourced in space, they can handle large temperature ranges, and they’re resistant to atomic oxygen environments and UV.
The matrix holding the fibers together is generally going to be more prone to degradation. Glass fibers have good compressive strength, but carbon fiber would be better here.
Maintaining orbit is one of the key issues. You probably need ion thrusters and solar panels. I don't think electrodynamic tethers actually work, because of friction vs conductivity.
At these scales and speeds, you can't just think of "solid things" as being rigid. Speed of sound in solid materials becomes a major issue. When something attaches to the tether, there's a wave of increased tension and stretching that propagates through the tether and sets up a vibration. This is a fatal problem for some tether variants.
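For a rough sense of scale, here's a minimal calculation, assuming E-glass fiber properties (my assumption; the modulus and density are textbook-ish values):

```python
import math

# Longitudinal wave speed in the tether material: c = sqrt(E / rho).
E = 72e9     # Young's modulus of E-glass, Pa (assumed)
rho = 2550   # density of E-glass, kg/m^3 (assumed)
L = 500e3    # illustrative tether length, m

c = math.sqrt(E / rho)
print(f"wave speed: {c / 1000:.1f} km/s")   # ~5.3 km/s
print(f"one-way traversal: {L / c:.0f} s")  # ~90 s
```

So a tension wave takes on the order of a minute to cross a long tether; the far end doesn't even "know" about a capture until well after the local loads peak.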
The projectile needs to reliably connect to the tether. Docking in space is usually slow and doesn't involve large forces, and it's still not easy, but here it needs to be done quickly and establish a strong connection. Here, you could just have a hook grab a perpendicular rope, but if you don't have any contingency plans, well, "dock or die" isn't very appealing. Especially if it happens multiple times.
Yes, micrometeoroids are an issue. Even if there aren't many, the tether might need to be robust to small impacts. A low orbit reduces that risk (but doesn't eliminate it) but a tether would also have relatively high drag; the surface area per mass is higher than eg the ISS.
The main thing people have wanted to do with rockets is put satellites in orbit. I don't see a reason to expect that to change anytime soon.
People have thought of all this decades ago. Maybe check out "LEOBiblio" or something.
"Everyone is going to switch to cloud stuff" means that, in the short term, there will be a shortage of cloud people and an excess of non-cloud people.
Your argument is for hiring in a long-term future where the non-cloud people retired or forgot how to do their thing, but we know that's not what US executives were thinking because they don't think that long-term due to the incentives they face.
And it certainly doesn't explain some groups of companies switching to cloud stuff together and then switching back together later.
Let's see...
> savings were so significant — an estimated 20 to 40% reduction in the cost of a car body
I think that should be "car frame" - the "body" includes things like doors. Anyway, I'm sure that was estimated by some people, but...
> that they’re being adopted by many other car manufacturers, particularly Chinese ones
Not really? Several major carmakers were considering using the same approach after Tesla did that, but last I heard they'd backed off. That's how big companies tend to work: executives see a competitor or startup doing something, and then they get some people (internal engineers, consultants, etc) to evaluate if they should be doing the same thing. Doesn't mean they actually will.
> good EV charging performance
To be clear, I'm not saying aluminum casting (or forging) is useless; there's a reason people make a lot of aluminum. Battery compartments are one of the better places to use it, because the high thermal conductivity is relevant. But that's different than casting large frame pieces or an entire frame.
As for very large presses for aluminum, the Heavy Press Program ones are several times bigger than Tesla's. I think progress in friction stir welding, which makes it possible to weld aluminum alloys without creating weak points, might be why people didn't keep going bigger - combined with even larger components being hard to transport, of course.
Hmm, I think there are a few reasons for software people getting into other industries over vice-versa:
- Software has been very profitable, largely because of how ad-based the US economy has become. So a lot of the available money is from the software side.
- Because code scales and software doesn't require as much capital investment as heavy industry, there are more wealthy founders who did some code themselves than wealthy founders who did, say, chemical engineering themselves. That means you have wealthy people who a) like starting companies and b) are engineering-oriented.
- American companies seem to have more of a competitive advantage vs Japan/China for code than manufacturing. Note that I said companies; Japan actually makes lots of high-quality open-source software.
Yes, it's certainly relevant - any fatigue aluminum has is relevant, because it doesn't have a fatigue threshold like steel or carbon fiber. But fatigue is normally a bigger problem over years, and that was a practically-new cybertruck. And aluminum is supposed to bend/stretch; when it breaks like that, it's generally because there's a weak point from eg a weld.
Well, in the specific case of microservices, I think the main problem being solved is not allowing people on other teams to modify your part of the code.
In theory, people could just not do that. It's kind of like how private variables in Java are considered important, even though sometimes there's a good reason to change them and theoretically you could just use variable names / comments / documentation to indicate which variables are normally meant to be changed. There's a tradeoff between people messing with stuff they shouldn't and inability to do things because you rely on other groups. You could break a monolithic project into multiple git repos instead, but I guess that psychologically feels worse.
When my crossposted blog posts aren't suitable for LessWrong, I won't be offended if the admins don't frontpage them.
The claim was that the decision to go to cloud computing and microservice architectures wasn't based on whether they were a good idea.
But also, yes, I think they're used in many cases where they're less efficient and a mistake. The main argument for cloud stuff is that it saves dev time, but that's a poor argument for moving from a working system to AWS. And microservices are mostly a solution to institutional/management problems, not technical ones.
> I dont think this is a good take.
>
> The Cybertruck does not break on that pull. It breaks on this one: 0:27
Your comment isn't worded well: what isn't a good take, exactly?
Then you imply that the truck breaks on a different pull, and link to it going over concrete pipes. I'm assuming you meant that it cracked from that impact and so broke on the pull. That's wrong; look closely. Please don't just blindly repeat something you saw on a forum somewhere without even linking to it.
Finally, aluminum isn't supposed to break like that, it's supposed to be ductile. That kind of failure indicates some sort of weak point being present.
Understandable, have a nice day.
I really think most people shouldn't be using retinoids, especially not "retinoids in general". For people who have certain skin problems, certain retinoids can be worthwhile, but it's up to them if the side effects are worth the potential (mainly aesthetic) benefits.
I certainly wouldn't recommend "retinoids" - I'd talk about individual compounds.
Adapalene is used topically. Wikipedia doesn't list skin fragility as a side effect but - as you can guess from its mechanism - it is one.
And the mechanism involves increased cell turnover, which long-term tends to either increase stem cell depletion or increase cancer risk. There's a tradeoff, which is why people don't just do the thing normally. Maybe your natural tradeoff is wrong and you want to adjust it, but you have to recognize that it exists.
You can do personalized RLHF on an LLM. Because there's less data, you need to do stronger training per data point than big companies do. The training is still a technical issue, but supposing that becomes cheap enough, one problem is that this produces sycophants. We already see commercial LLMs that just agree with whatever you initially imply you believe.
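As a sketch of the mechanics, here's what that might look like with a DPO-style preference loss standing in for full RLHF (DPO being the simplest preference-training objective; every name here is illustrative, and `beta` plus the learning rate are where "stronger training per data point" shows up):

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta=0.5):
    """DPO-style loss on one (chosen, rejected) response pair.
    Inputs are scalar tensors: summed log-probs of each response under
    the model being tuned and under a frozen reference model."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin)

# With only a handful of per-user preference pairs, you compensate with a
# large beta / learning rate / epoch count - which is also exactly how you
# end up training a sycophant that learns "agree with whatever I imply".
```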
Producing vector embeddings is, if anything, more natural for neural networks than continuing text, and search engines already use neural networks to produce document embeddings for search. It's entirely feasible to do this for all your personal or company documents, and then search (a vector database of) them using approximate descriptions of what you want.
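A minimal sketch of that kind of search, assuming the sentence-transformers package and its all-MiniLM-L6-v2 model (a real setup would swap the brute-force numpy scan for a vector database):

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "quarterly budget spreadsheet notes",
    "how to descale the espresso machine",
    "draft of the orbital tether dynamics post",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

# Query with an approximate description rather than exact keywords.
query_vec = model.encode(["that post about space ropes"], normalize_embeddings=True)
scores = doc_vecs @ query_vec.T  # cosine similarity, since vectors are unit-norm
print(docs[int(np.argmax(scores))])  # hopefully the tether post
```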