Posts

Humans are (mostly) metarational 2024-10-09T05:51:16.644Z
Book Review: Righteous Victims - A History of the Zionist-Arab Conflict 2024-06-24T11:02:03.490Z
Sci-Fi books micro-reviews 2024-06-24T09:49:28.523Z
Surviving Seveneves 2024-06-19T13:11:55.414Z
Can stealth aircraft be detected optically? 2024-05-02T07:47:00.101Z
Falling fertility explanations and Israel 2024-04-03T03:27:38.564Z
Is it justifiable for non-experts to have strong opinions about Gaza? 2024-01-08T17:31:21.934Z
Book Review: 1948 by Benny Morris 2023-12-03T10:29:16.696Z
In favour of a sovereign state of Gaza 2023-11-19T16:08:51.012Z
Challenge: Does ChatGPT ever claim that a bad outcome for humanity is actually good? 2023-03-22T16:01:31.985Z
Agentic GPT simulations: a risk and an opportunity 2023-03-22T06:24:06.893Z
What would an AI need to bootstrap recursively self improving robots? 2023-02-14T17:58:17.592Z
Is this chat GPT rewrite of my post better? 2023-01-15T09:47:44.016Z
A simple proposal for preserving free speech on twitter 2023-01-15T09:42:49.841Z
woke offline, anti-woke online 2023-01-01T08:24:39.748Z
Why are profitable companies laying off staff? 2022-11-17T06:19:12.601Z
The optimal angle for a solar boiler is different than for a solar panel 2022-11-10T10:32:47.187Z
Yair Halberstadt's Shortform 2022-11-08T19:33:53.853Z
Average utilitarianism is non-local 2022-10-31T16:36:09.406Z
What would happen if we abolished the FDA tomorrow? 2022-09-14T15:22:31.116Z
EA, Veganism and Negative Animal Utilitarianism 2022-09-04T18:30:20.170Z
Lamentations, Gaza and Empathy 2022-08-07T07:55:48.545Z
Linkpost: Robin Hanson - Why Not Wait On AI Risk? 2022-06-24T14:23:50.580Z
Parliaments without the Parties 2022-06-19T14:06:23.167Z
Can you MRI a deep learning model? 2022-06-13T13:43:05.293Z
If there was a millennium equivalent prize for AI alignment, what would the problems be? 2022-06-09T16:56:10.788Z
What board games would you recommend? 2022-06-06T16:38:04.538Z
How would you build Dath Ilan on earth? 2022-05-29T07:26:17.322Z
Should you kiss it better? 2022-05-19T03:58:40.354Z
Demonstrating MWI by interfering human simulations 2022-05-08T17:28:27.649Z
What would be the impact of cheap energy and storage? 2022-05-03T05:20:14.889Z
Save Humanity! Breed Sapient Octopuses! 2022-04-05T18:39:07.478Z
Are the fundamental physical constants computable? 2022-04-05T15:05:42.393Z
Best non-textbooks on every subject 2022-04-04T11:54:56.193Z
Being Moral is an end goal. 2022-03-09T16:37:15.612Z
Design policy to be testable 2022-01-31T06:04:53.887Z
Newcomb's Grandfather 2022-01-28T08:56:53.417Z
Worldbuilding exercise: The Highwayverse. 2021-12-22T06:47:53.054Z
Super intelligent AIs that don't require alignment 2021-11-16T19:55:01.258Z
Experimenting with Android Digital Wellbeing 2021-10-21T05:43:41.789Z
Feature Suggestion: one way anonymity 2021-10-17T17:54:09.182Z
The evaluation function of an AI is not its aim 2021-10-10T14:52:01.374Z
Towards a Bayesian model for Empirical Science 2021-10-07T05:38:25.848Z
To every people according to their language 2021-10-04T18:42:09.573Z
Schools probably do do something 2021-09-26T07:21:33.882Z
Book Review: Who We Are and How We Got Here 2021-09-24T05:05:46.609Z
Acausal Trade and the Ultimatum Game 2021-09-05T05:36:28.171Z
The halting problem is overstated 2021-08-16T05:26:06.034Z
Combining the best of Georgian and Harberger taxes 2021-08-12T05:55:46.424Z
Analyzing Punishment as Preventation 2021-07-14T15:12:09.071Z

Comments

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T17:01:25.091Z · LW · GW

I've been convinced! I'll let my wife know as soon as I'm back from Jamaica!

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T17:00:06.721Z · LW · GW

Similarly, the point about trash also ignores the larger context. Picking up my own trash has much less relationship to disgust, or germs, than picking up other people's trash.

Agreed, but that's exactly the point I'm making. Once you apply insights from rationality to situations outside spherical trash in a vacuum-filled park, you end up with all sorts of confounding effects that make the insights less applicable. Your point about germs and my point about fixing what you break are complementary, not contradictory.

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T16:57:06.256Z · LW · GW

I think this post is missing the major part of what "metarational" means: acknowledging that the kinds of explicit principles and systems humans can hold in working memory and apply in real time are insufficient for capturing the full complexity of reality, having multiple such principles and systems available anyway, and skillfully switching among them in appropriate contexts.

This sounds to me like a semantic issue? "Metarational" isn't exactly a standard term, AFAIAA (I just made it up on the spot), and it looks like you're using it to refer to a different concept than I am.

Comment by Yair Halberstadt (yair-halberstadt) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T16:44:27.919Z · LW · GW

Sure it is, if you accept a whole bunch of assumptions. Or it could just not do that.

Comment by Yair Halberstadt (yair-halberstadt) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T16:06:44.264Z · LW · GW

Reading this reminds me of Scott Alexander in his review of "what we owe the future":

But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

You come up with a brilliant simulation argument as to why the AI shouldn't just do what's clearly in his best interests. And maybe the AI is neurotic enough to care. But in all probability, for whatever reason, it doesn't. And it just goes ahead and turns us into paperclips anyway, ignoring a person running behind it saying "bbbbbbut the simulation argument".

Comment by Yair Halberstadt (yair-halberstadt) on What prevents SB-1047 from triggering on deep fake porn/voice cloning fraud? · 2024-09-26T16:16:28.420Z · LW · GW

Comment by Yair Halberstadt (yair-halberstadt) on What prevents SB-1047 from triggering on deep fake porn/voice cloning fraud? · 2024-09-26T15:30:39.210Z · LW · GW

I'm not sure why those shouldn't be included? If someone uses my AI to perform 500 million dollars of fraud, then I should probably have been more careful releasing the product.

Comment by Yair Halberstadt (yair-halberstadt) on Bryan Johnson and a search for healthy longevity · 2024-07-27T19:00:08.743Z · LW · GW

The rest of the family is still into Mormonism, and his wife tried to sue him for millions, and she lost (false accusations)

In case you're interested in following this up, Tracing Woodgrains on the accusations: https://x.com/tracewoodgrains/status/1743775518418198532

Comment by Yair Halberstadt (yair-halberstadt) on Bryan Johnson and a search for healthy longevity · 2024-07-27T18:57:30.634Z · LW · GW

His approach to achieving immortality seems to be similar to someone attempting to reach the moon by developing higher altitude planes. He's using interventions that seem likely to improve health and lifespan by a few percentage points, which is great, but can't possibly get us to where he wants to go.

My assumption is that any real solution to mortality will look more like "teach older bodies to self repair the same way younger bodies do" than "eat this diet, and take these supplements".

Comment by Yair Halberstadt (yair-halberstadt) on The Cancer Resolution? · 2024-07-25T19:52:47.428Z · LW · GW

I very much did not miss that.

I would consider this one of the most central points to clarify, yet the OP doesn't discuss it at all, and your response to it being pointed out was 3 sentences, despite there being ample research on the topic which points strongly in the opposite direction.

Where did I say that?

I never said you said it, I said the book contains such advice:

Lintern suggests that chemotherapy is generally a bad idea.

Comment by Yair Halberstadt (yair-halberstadt) on The Cancer Resolution? · 2024-07-25T05:03:52.475Z · LW · GW

Now it can be very frustrating to hear "you can't have an opinion on this because you're not an expert", and it sounds very similar to credentialism.

But it's not. If you'd demonstrated a mastery of the material, and come up with a convincing description of the current evidence for the DNA theory and why you believe it's incorrect (evidence not pulled straight out of the book you're reviewing), I wouldn't care what your credentials are.

But you seem to have missed really obvious consequences of the fungi theory, like, "wouldn't it be infectious then", and all the stuff in J Bostock's excellent comment. At that point it seems like you've read a book by a probable crank, haven't even thought through the basic counterarguments, and are spreading it around despite it containing some potentially pretty dangerous advice like "don't do chemotherapy". This is not the sort of content I find valuable on LessWrong, so I heavily downvoted.

Comment by Yair Halberstadt (yair-halberstadt) on The Cancer Resolution? · 2024-07-25T04:57:53.949Z · LW · GW

I want to take a look at the epistemics of this post, or rather, whether this post should have been written at all.

In 95% of cases someone tearing down the orthodoxy of a well established field is a crank. In another 4% of cases they raise some important points, but are largely wrong. In 1% of cases they are right and the orthodoxy has to be rewritten from scratch.

Now these 1% of cases are extremely important! It's understandable why the rationalist community, which has a healthy skepticism of orthodoxy would be interested in finding them. And this is probably a good thing.

But you have to have the expertise with which to do so. If you do not have an extremely solid grasp of cancer research, and you highlight a book like this, 95% of the time you are highlighting a crank, and that doesn't do anyone any good. From what I can make out from this post (and correct me if I'm wrong) you do not have any such expertise.

Now there are people on LessWrong who do have the necessary expertise, and I would value it if they were to spend 10 seconds looking at the synopsis and say either "total nonsense, not even worth investigating" or "I'll delve into that when I get the time". But for anyone who doesn't have the expertise, your best bet is just to go with the orthodoxy. There's an infinite amount of bullshit to get through before you find the truth, and a book review of probable bullshit doesn't actually help anyone.

Comment by Yair Halberstadt (yair-halberstadt) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T05:34:39.441Z · LW · GW

And it turns out that this algorithm works and we can find steering vectors that are orthogonal (and have ~0 cosine similarity) while having very similar effects.

Why ~0 and not exactly 0? Are these not perfectly orthogonal? If not, would it be possible to modify them slightly so they are perfectly orthogonal, then repeat, just to exclude Fabien Roger's hypothesis?
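
For what it's worth, the modification I have in mind is just Gram-Schmidt: project each vector off the ones found before it. A minimal sketch in plain Python, with toy 3-dimensional vectors standing in for the actual steering vectors (which live in the model's residual stream, not shown here):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonalize(vectors):
    """Gram-Schmidt: subtract from each vector its projection onto the
    previously processed ones, so the results are exactly orthogonal
    (up to floating point error)."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coeff = dot(w, b) / dot(b, b)
            w = [wi - coeff * bi for wi, bi in zip(w, b)]
        basis.append(w)
    return basis

vecs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
ortho = orthogonalize(vecs)
# all pairwise dot products of `ortho` are now ~0 up to float error
```

After this step any remaining similarity in the vectors' effects can't be attributed to residual overlap between them.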

Comment by Yair Halberstadt (yair-halberstadt) on Ice: The Penultimate Frontier · 2024-07-14T06:06:00.822Z · LW · GW

It's not that we can't colonise Alaska, it's that it's not economically productive to do so.

I wouldn't expect colonising mars to be economically productive, but instead to be funded by other sources (essentially charity).

Comment by Yair Halberstadt (yair-halberstadt) on Consider the humble rock (or: why the dumb thing kills you) · 2024-07-04T14:46:50.642Z · LW · GW

I think the chances that something that doesn't immediately kill humanity, and isn't actively trying to kill humanity, polishes us off for good are, at the very least, pretty low.

Humans have survived as hunter gatherers for a million years. We've thrived in every possible climate under the sun. We're not just going to roll over and die because civilisation has collapsed.

Not that this is much of a comfort if 99% of humanity dies.

Comment by Yair Halberstadt (yair-halberstadt) on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-07-01T03:20:24.928Z · LW · GW

Thanks for the point. I think I'm not really talking to that sort of person? My intended audience is the average American who views the USA as mostly a force for good, even if its foreign policy can be misguided at times.

Comment by Yair Halberstadt (yair-halberstadt) on Secondary forces of debt · 2024-06-28T04:58:33.940Z · LW · GW
  1. Historically European Jews were money lenders since Christians were forbidden to charge interest. This was one of the major forces behind the pogroms, since killing the debt holders saved you having to pay up.

  2. On a country level scale this is significant. If you live in a dangerous area, you want the USA to have invested a lot of money in you which they will lose if you ever are conquered.

  3. People claiming that debts are inherited by the estate: this only applies to formal, legible debts. If you lend money informally (possibly because there's a criminal element involved), or the debt is an informal obligation to provide services/respect then once the debt holder dies it's often gone.

  4. Even for many types of legible debt, once the debt holder dies it's very difficult for the estate to know the debt exists or its status. If old Joey Richbanks lends me 10 million dollars, witnessed and signed in a contract, who says his children are ever going to find the contract? And if they do, and I claim I already paid it, how certain are they I'm lying? And how likely are they to win the court case if so?

Comment by Yair Halberstadt (yair-halberstadt) on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-06-27T13:24:19.171Z · LW · GW

Putting on my reasonable Israeli nationalist hat:

"Of course, granting a small number of well behaved Palestinians citizenship in Israel is not a problem, and as it happens that occurs at a small scale all the time (e.g. family unification laws, east Jerusalem residents).

But there's a number of issues with that:

  1. No vetting is perfect, some terrorists/bad cultural fits will always slip through the gap.
  2. Even if this person is a great cultural fit, there's no guarantee their children will be, or that they won't pull in other people through family unification laws.
  3. There's a risk of a democratic transition - the more Arab voters, the more power they have, the more they can open the gates to more Arabs, till Israel ceases to be a Jewish state.
  4. We don't trust the government to only keep it at a small scale.

Now let's turn it around:

Why should we do this? What do we have to gain for taking on this risk?"

Comment by Yair Halberstadt (yair-halberstadt) on Andrew Burns's Shortform · 2024-06-27T04:20:44.526Z · LW · GW

There seems to be a huge jump from: there's no moat around generative AI (makes sense as how to make one is publicly known, and the secret sauce is just about improving performance) to... all the other stuff which seems completely unrelated?

Comment by Yair Halberstadt (yair-halberstadt) on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-06-26T15:31:16.061Z · LW · GW

I agree that making places which will definitely be part of Israel in any future two state solution denser, whilst not increasing their footprint or access to neighbouring land, is not inherently problematic.

But give people an inch and they will take a mile. From the US perspective far easier to just deliver an ultimatum on settlement building full stop. Besides, the fewer settlers, the fewer troublemakers, so that's another advantage.

Also that provides an incentive for those who live in the settlements to come to an agreement on a two state solution since that will free up their land for further building.

I agree that they should turn a blind eye to small scale refurbishment/rebuilding of existing housing stock, but should object to any Greenfield building, or major projects.

Comment by Yair Halberstadt (yair-halberstadt) on On Claude 3.5 Sonnet · 2024-06-26T04:03:09.996Z · LW · GW

I think one way of framing it is whether the improvements to itself outweigh the extra difficulty in eking out more performance. Basically does the performance converge or diverge.

Comment by Yair Halberstadt (yair-halberstadt) on I'm a bit skeptical of AlphaFold 3 · 2024-06-25T12:58:19.387Z · LW · GW

This makes sense, but isn't AlphaFold available for use? Is it possible to verify this one way or another experimentally?

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T06:24:27.099Z · LW · GW

Might be worth posting this as its own question for greater visibility.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T05:49:54.523Z · LW · GW

Possibly, but some of the missteps just feel too big to ignore. Like what on earth is going on in the second half of the book?

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T05:32:52.155Z · LW · GW

I greatly enjoyed The Metropolitan Man, but feel like web serials, especially fan fiction, are their own genre and deserve their own post.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T02:35:09.929Z · LW · GW

It's a prequel in the loosest possible sense. In theory they could be set in two different universes and it wouldn't make much of a difference.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-24T12:52:47.965Z · LW · GW

Oh (not a spoiler) the second narrator is obviously not being entirely truthful.

That's totally a spoiler :-), but for me it was one of the most brilliant twists in the book. You have this stuff that feels like the author is doing really poor sci-fi, and then it's revealed that the author is perfectly aware of that and is making a point about translation.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-24T12:49:20.175Z · LW · GW

Thanks, really appreciate the feedback! Maybe I'll give The Three Body Problem another chance.

Comment by Yair Halberstadt (yair-halberstadt) on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-20T06:45:30.215Z · LW · GW

What about solar power? If you build a data center in the desert, buy a few square km of adjacent land and tile them with solar panels presumably that can be done far quicker and with far less regulation than building a power plant, and at night you can use off peak grid electricity at cheaper rates.
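
Back-of-the-envelope, with illustrative numbers (the 100 MW load, ~200 W/m² panel output at peak sun, and 50% ground coverage are all my assumptions, not figures from the post):

```python
def solar_land_km2(load_mw, panel_w_per_m2=200.0, ground_coverage=0.5):
    """Land area (km^2) needed to cover a load at peak sun, assuming
    ~200 W/m^2 panel output and 50% ground coverage (both assumptions)."""
    area_m2 = load_mw * 1e6 / (panel_w_per_m2 * ground_coverage)
    return area_m2 / 1e6

# A 100 MW data center needs on the order of 1 km^2 of desert at peak sun
print(solar_land_km2(100))  # 1.0
```

So the "few square km of adjacent land" figure is in the right ballpark even before accounting for mornings, evenings and cloudy days.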

Comment by Yair Halberstadt (yair-halberstadt) on Surviving Seveneves · 2024-06-20T03:04:43.475Z · LW · GW

Vg pna'g or gur pnfr gung jnf gur gehyr rssbeg, fvapr gur cerfvqrag pubfr gb tb gb fcnpr vafgrnq bs gur bgure rssbeg.

Comment by Yair Halberstadt (yair-halberstadt) on Surviving Seveneves · 2024-06-19T17:02:39.441Z · LW · GW

The problem is the temperature of the earth rises fairly fast as you dig downwards. How fast depends on location, but always significantly enough that there's a pretty hard limit on how cold you can go.
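
To put rough numbers on that (the ~25 °C/km gradient is a typical continental value, my assumption rather than anything from the book):

```python
def temp_at_depth(surface_temp_c, depth_km, gradient_c_per_km=25.0):
    """Rough rock temperature at depth, assuming a typical continental
    geothermal gradient of ~25 C per km (highly location-dependent)."""
    return surface_temp_c + gradient_c_per_km * depth_km

# Even from a 15 C surface, 4 km down is already over boiling point
print(temp_at_depth(15.0, 4))  # 115.0
```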

Comment by Yair Halberstadt (yair-halberstadt) on Yair Halberstadt's Shortform · 2024-06-18T04:58:49.201Z · LW · GW

Reserve soldiers in Israel are paid their full salaries by national insurance. If they are also able to work (which is common, as the IDF isn't great at efficiently using its manpower) they can legally work and will get paid by their company on top of whatever they receive from national insurance.

Given how often sensible policies aren't implemented because of their optics, it's worth appreciating those cases where that doesn't happen. The biggest impact of a war on Israel is to the economy, and anything which encourages people to work rather than waste time during a war is a good policy. But it could so easily have been rejected because it implies soldiers are slacking off from their reserve duties.

Comment by Yair Halberstadt (yair-halberstadt) on The Data Wall is Important · 2024-06-11T04:03:16.444Z · LW · GW

Not video transcripts - video. One frame of video contains much more data than one text token, and you can train an AI as a next-frame predictor much as you can a next-token predictor.

Comment by Yair Halberstadt (yair-halberstadt) on The Data Wall is Important · 2024-06-10T12:35:33.549Z · LW · GW

I'm guessing that the sort of data that's crawled by Google but not Common Crawl is usually low quality? I imagine that if somebody put any effort into writing something then they'll put effort into making sure it's easily accessible, and the stuff that's harder to get to is usually machine generated?

Of course that's excluding all the data that's private. I imagine that once you add private messages (e.g. WhatsApp, email) + internal documents that ends up being far bigger than the publicly available web.

Comment by Yair Halberstadt (yair-halberstadt) on Why I don't believe in the placebo effect · 2024-06-10T10:23:27.096Z · LW · GW

I'm interested if you have a toy example showing how Simpson's paradox could have an impact here?

I assume that has a placebo/doesn't have a placebo is a binary variable, and I also assume that the number of people in each arm of each experiment is the same. I can't really see how you would end up with Simpson's paradox with that setup.

Comment by Yair Halberstadt (yair-halberstadt) on The Data Wall is Important · 2024-06-10T05:48:44.481Z · LW · GW

with the best models trained on close to all the high quality data we’ve got.

Is this including images, video and audio? Or just text?

Comment by Yair Halberstadt (yair-halberstadt) on When Are Circular Definitions A Problem? · 2024-05-29T03:55:59.051Z · LW · GW

A dictionary defines all words circularly, but of course nobody learns all words from a dictionary - the assumption is you're looking up a small number of words you don't know.

Humans learn their first few words by seeing how they're used in relation to objects, and the rest can be derived from there without needing circularity.

However the dictionary provides very tight constraints on what words can mean. Whatever the words "wood", "is", "made", "from", and "trees" mean, the sentence "wood is made from trees" must be true. The vast majority of all possible meanings fail this test. Using only circular definitions, is it possible to constrain words' meanings so tightly that there's only one possible model which fits those constraints?

LLMs seem to provide a resounding yes to that question. Whilst first-generation LLMs only ever saw text and had no hard-coded knowledge, so could only figure out what words meant based on how they're used in relation to other words, they understood the meaning of words sufficiently well to reason about the physical properties of the objects those words represent.

Comment by Yair Halberstadt (yair-halberstadt) on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?" · 2024-05-19T08:39:50.291Z · LW · GW

I'm taking this article as being predicated on the assumption that AI drives humans to extinction. I.e. given that an AI has destroyed all human life, it will most likely also destroy almost all nature.

Which seems reasonable for most models of the sort of AI that kills all humans.

An exception could be an AI that kills all humans in self defense, because they might turn it off first, but sees no such threat in plants/animals.

Comment by Yair Halberstadt (yair-halberstadt) on Designing for a single purpose · 2024-05-08T06:31:05.420Z · LW · GW

Related: https://www.scattered-thoughts.net/writing/small-tech/

I frequently see debates about whether it's better to be a cog at a giant semi-monopoly, or to take investment money in the hopes of one day growing to be head cog at a giant semi-monopoly.

Role models matter. So I made a list of small companies that I admire. Neither giants nor startups - just people making a living writing software on their own terms.

Comment by Yair Halberstadt (yair-halberstadt) on introduction to cancer vaccines · 2024-05-06T16:21:26.287Z · LW · GW

Makes sense, thanks!

I imagine a startup of this ilk could be based in Prospera, and it wouldn't be a problem for the wealthy few to travel there for personalised treatment.

I also imagine that with a lighter regulatory regime, no need to scale up production, and no need for lengthy trials, developing a monoclonal antibody would be much quicker and cheaper. Consider how quickly the COVID vaccines were designed, compared to how long it took before they were ready for use.

The other hurdles sound significant though.

Comment by Yair Halberstadt (yair-halberstadt) on introduction to cancer vaccines · 2024-05-06T14:55:15.236Z · LW · GW

When you say it's not yet practical, are we missing some key steps, or could it be done at high enough cost with current technology but can't scale?

I imagine a startup which cured rich people's cancers on a case by case basis would have a lot of customers, which would help drive prices down as the technology improved.

Comment by Yair Halberstadt (yair-halberstadt) on My hour of memoryless lucidity · 2024-05-05T12:18:38.616Z · LW · GW

My grandmother suffered from dementia. For a period of a couple of years I would call her every Friday, and we would have literally the exact same conversation each time, including her making the same jokes at the same points in the conversation, using the same phrasing. I concluded that people are in fact pretty deterministic, even over the long term.

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-03T12:04:49.590Z · LW · GW

Your intuition is correct when the jet has already passed ahead - those are very hard to catch and shoot down. But usually you detect an aircraft when it's heading towards you, and all the missile has to do is intercept. It doesn't even have to be faster than the jet (unless the jet detected it in time and does a 180).

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-03T03:50:10.429Z · LW · GW

But I discussed that in the post. All you need are enough cameras + processing power. Both are cheap.

Comment by Yair Halberstadt (yair-halberstadt) on An explanation of evil in an organized world · 2024-05-02T16:25:32.851Z · LW · GW

To be honest, this just feels like the Euthyphro Dilemma all over again. "Good" is defined by what God does. God chooses to run the laws of physics. Laws of physics are "Good". Who gives a damn?

Also this is directly contradictory to Christianity, since the core beliefs of Christianity all assume some level of non-natural intervention in the world (e.g. resurrection of Christ). Same for almost all other religions. So who is this even for?

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-02T16:16:46.857Z · LW · GW

Lens and CCD technology is not trivial at those speeds and insane angular resolution.

But we can easily capture a picture of a fighter jet when it's close. And the further it is the higher the angular resolution required, but also the lower the angular speed, so do those cancel out to make it not much harder, or it doesn't work like that?

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-02T13:24:00.867Z · LW · GW

Note you don't even need high resolution in all directions, just high enough to see whether it's worth zooming in/switching to a better camera.

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-02T13:22:54.320Z · LW · GW

Why would you need large telescopes?

The naked eye has an angular resolution of about 30m at 100km; you need something slightly better. A small lens should do it. Cameras + zoom lenses are well understood, mass produced components. And this is a highly parallelizable task.
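
As a sanity check on "a small lens should do it" (the 550 nm wavelength and the Rayleigh criterion are my assumptions):

```python
def min_aperture_m(target_size_m, distance_m, wavelength_m=550e-9):
    """Diffraction-limited aperture needed to resolve a target,
    via the Rayleigh criterion: theta = 1.22 * lambda / D."""
    theta_rad = target_size_m / distance_m  # required angular resolution
    return 1.22 * wavelength_m / theta_rad

# Resolving ~3 m detail at 100 km (10x better than the naked eye)
# needs an aperture of only a couple of centimetres
print(min_aperture_m(3.0, 100_000))  # ~0.022
```

Reassuringly, plugging in 30m at 100km gives an aperture of ~2mm, about the size of a daylight pupil, which matches the naked-eye figure above.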

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-02T12:27:19.967Z · LW · GW

But then no need for stealth at all?

Comment by Yair Halberstadt (yair-halberstadt) on Can stealth aircraft be detected optically? · 2024-05-02T12:25:24.241Z · LW · GW

I wasn't referring to the A-10, but to the use of e.g. F-35s in ground support roles - as heavily practised by the IDF for example.