Posts

How to put California and Texas on the campaign trail! 2024-11-06T06:08:25.673Z
Why our politicians aren't Median 2024-11-03T14:03:33.779Z
Humans are (mostly) metarational 2024-10-09T05:51:16.644Z
Book Review: Righteous Victims - A History of the Zionist-Arab Conflict 2024-06-24T11:02:03.490Z
Sci-Fi books micro-reviews 2024-06-24T09:49:28.523Z
Surviving Seveneves 2024-06-19T13:11:55.414Z
Can stealth aircraft be detected optically? 2024-05-02T07:47:00.101Z
Falling fertility explanations and Israel 2024-04-03T03:27:38.564Z
Is it justifiable for non-experts to have strong opinions about Gaza? 2024-01-08T17:31:21.934Z
Book Review: 1948 by Benny Morris 2023-12-03T10:29:16.696Z
In favour of a sovereign state of Gaza 2023-11-19T16:08:51.012Z
Challenge: Does ChatGPT ever claim that a bad outcome for humanity is actually good? 2023-03-22T16:01:31.985Z
Agentic GPT simulations: a risk and an opportunity 2023-03-22T06:24:06.893Z
What would an AI need to bootstrap recursively self improving robots? 2023-02-14T17:58:17.592Z
Is this chat GPT rewrite of my post better? 2023-01-15T09:47:44.016Z
A simple proposal for preserving free speech on twitter 2023-01-15T09:42:49.841Z
woke offline, anti-woke online 2023-01-01T08:24:39.748Z
Why are profitable companies laying off staff? 2022-11-17T06:19:12.601Z
The optimal angle for a solar boiler is different than for a solar panel 2022-11-10T10:32:47.187Z
Yair Halberstadt's Shortform 2022-11-08T19:33:53.853Z
Average utilitarianism is non-local 2022-10-31T16:36:09.406Z
What would happen if we abolished the FDA tomorrow? 2022-09-14T15:22:31.116Z
EA, Veganism and Negative Animal Utilitarianism 2022-09-04T18:30:20.170Z
Lamentations, Gaza and Empathy 2022-08-07T07:55:48.545Z
Linkpost: Robin Hanson - Why Not Wait On AI Risk? 2022-06-24T14:23:50.580Z
Parliaments without the Parties 2022-06-19T14:06:23.167Z
Can you MRI a deep learning model? 2022-06-13T13:43:05.293Z
If there was a millennium equivalent prize for AI alignment, what would the problems be? 2022-06-09T16:56:10.788Z
What board games would you recommend? 2022-06-06T16:38:04.538Z
How would you build Dath Ilan on earth? 2022-05-29T07:26:17.322Z
Should you kiss it better? 2022-05-19T03:58:40.354Z
Demonstrating MWI by interfering human simulations 2022-05-08T17:28:27.649Z
What would be the impact of cheap energy and storage? 2022-05-03T05:20:14.889Z
Save Humanity! Breed Sapient Octopuses! 2022-04-05T18:39:07.478Z
Are the fundamental physical constants computable? 2022-04-05T15:05:42.393Z
Best non-textbooks on every subject 2022-04-04T11:54:56.193Z
Being Moral is an end goal. 2022-03-09T16:37:15.612Z
Design policy to be testable 2022-01-31T06:04:53.887Z
Newcomb's Grandfather 2022-01-28T08:56:53.417Z
Worldbuilding exercise: The Highwayverse. 2021-12-22T06:47:53.054Z
Super intelligent AIs that don't require alignment 2021-11-16T19:55:01.258Z
Experimenting with Android Digital Wellbeing 2021-10-21T05:43:41.789Z
Feature Suggestion: one way anonymity 2021-10-17T17:54:09.182Z
The evaluation function of an AI is not its aim 2021-10-10T14:52:01.374Z
Towards a Bayesian model for Empirical Science 2021-10-07T05:38:25.848Z
To every people according to their language 2021-10-04T18:42:09.573Z
Schools probably do do something 2021-09-26T07:21:33.882Z
Book Review: Who We Are and How We Got Here 2021-09-24T05:05:46.609Z
Acausal Trade and the Ultimatum Game 2021-09-05T05:36:28.171Z
The halting problem is overstated 2021-08-16T05:26:06.034Z

Comments

Comment by Yair Halberstadt (yair-halberstadt) on Remap your caps lock key · 2024-12-16T16:48:41.484Z · LW · GW

Chromebooks replace the caps lock key with a search key - which is functionally equivalent to the Windows key on Windows. E.g. Search+Right goes to the end of the line.

Comment by Yair Halberstadt (yair-halberstadt) on Algebraic Linguistics · 2024-12-08T15:21:24.213Z · LW · GW

Yep, and when you run out of letters in a section you use the core letter from the section with a subscript.

Comment by Yair Halberstadt (yair-halberstadt) on Algebraic Linguistics · 2024-12-08T15:11:31.685Z · LW · GW

Also:

m: used for a second whole number when n is already taken.

p: used for primes.

q: used for a second prime.

Comment by Yair Halberstadt (yair-halberstadt) on Drexler's Nanotech Software · 2024-12-08T07:24:28.086Z · LW · GW

Only if the aim of the AI is to destroy humanity, which is possible but unlikely. Whereas by instrumental convergence, all AIs, no matter their aims, will likely seek to destroy humanity and thereby reduce risk and competition for resources.

Comment by Yair Halberstadt (yair-halberstadt) on Drexler's Nanotech Software · 2024-12-04T09:46:27.851Z · LW · GW

I would have concerns about suitably generic, flexible and sensitive humanoid robots, yes.

Comment by Yair Halberstadt (yair-halberstadt) on Drexler's Nanotech Software · 2024-12-03T10:15:19.241Z · LW · GW

One thing to consider is how hard an AI needs to work to break out of human dependence. There's no point destroying humanity if that then leaves you with no one to man the power stations that keep you alive.

If limited nanofactories exist, it's much easier to bootstrap them into whatever you want than it is if those nanofactories don't exist and robotics hasn't developed enough for you to create one without the human touch.

Comment by Yair Halberstadt (yair-halberstadt) on Bigger Livers? · 2024-11-09T19:11:50.035Z · LW · GW

Presumably because there's a hope that having a larger liver could help people lose weight, which is something a lot of people struggle to do?

Comment by Yair Halberstadt (yair-halberstadt) on Could orcas be (trained to be) smarter than humans?  · 2024-11-05T09:28:05.175Z · LW · GW

I imagine that part of the difference is because orcas are hunters, and need much more sophisticated sensors + controls.

A gigantic jellyfish wouldn't have the same number of neurons as a similarly sized whale, so it's not just about size, but how you use that size.

Comment by Yair Halberstadt (yair-halberstadt) on Could orcas be (trained to be) smarter than humans?  · 2024-11-05T03:31:15.484Z · LW · GW

Douglas Adams answered this long ago of course:

For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.

Comment by Yair Halberstadt (yair-halberstadt) on Why our politicians aren't Median · 2024-11-03T18:54:05.149Z · LW · GW

Thanks - I've overhauled that section. Note a Condorcet method is not sufficient here, as the counter-example I give shows.

Comment by Yair Halberstadt (yair-halberstadt) on Why our politicians aren't Median · 2024-11-03T15:05:12.705Z · LW · GW

Why? That's a fact about voting preferences in our toy scenario, not a normative statement about what people should prefer.

Comment by Yair Halberstadt (yair-halberstadt) on electric turbofans · 2024-11-03T14:15:01.119Z · LW · GW

Thanks for this!

What are the chances of a variable bypass engine at some point? Any opinions?

Comment by Yair Halberstadt (yair-halberstadt) on Trading Candy · 2024-11-01T04:50:14.291Z · LW · GW

Counterpoint: when I was about 12, I was too old to collect candy at my Synagogue on Simchat Torah, so I would beg a single candy from someone, then trade it up (Dutch book style) with naive younger kids until I had a decent stash. I was particularly pleased whenever my traded up stash included the original candy.

Comment by Yair Halberstadt (yair-halberstadt) on Examples of How I Use LLMs · 2024-10-15T04:14:43.178Z · LW · GW

The single most useful thing I use LLMs for is telling me how to do things in bash. I use bash all the time for one-off tasks, but not quite enough to build familiarity with it + learn all the quirks of the commands + language.

90% of the time it gives me a working bash script first shot, each time saving me between 5 minutes and half an hour.

Another thing LLMs are good at is taking a picture of e.g. a screw, and asking what type of screw it is.

They're also great at converting data from one format to another: here's some JSON, convert it into YAML. Now prototext. I forgot to mention, use maps instead of nested structs, and use Pascal case. Also the JSON is hand-written and not actually legal.
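
To give a flavour of the simplest version of that kind of conversion, here's a minimal Python sketch of my own (it assumes PyYAML is installed, the field names are made up, and it obviously doesn't handle the illegal-JSON or prototext cases that the LLM copes with):

```python
import json
import yaml  # assumes PyYAML is installed


def pascal_case(key: str) -> str:
    """snake_case -> PascalCase."""
    return "".join(part.capitalize() for part in key.split("_"))


def convert(node):
    """Recursively rename dict keys; lists and scalars pass through unchanged."""
    if isinstance(node, dict):
        return {pascal_case(k): convert(v) for k, v in node.items()}
    if isinstance(node, list):
        return [convert(v) for v in node]
    return node


raw = '{"user_name": "yair", "post_count": 52}'  # made-up example input
print(yaml.safe_dump(convert(json.loads(raw)), sort_keys=False))
# UserName: yair
# PostCount: 52
```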

Similarly, they're good at fuzzy data querying tasks: here's this giant error response including a full stack trace and lots of irrelevant fields; where's the actual error, and what lines of the file should I look at?

Comment by Yair Halberstadt (yair-halberstadt) on Prices are Bounties · 2024-10-14T09:05:42.485Z · LW · GW

Buyers have to pay a lot more, but sellers receive a lot more. It's not clear that buyers at high prices are worse off than sellers, so its egalitarian impact is unclear.

Whereas when you stand in line, that time you wasted is gone. Nobody gets it. Everyone is worse off.

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T17:01:25.091Z · LW · GW

I've been convinced! I'll let my wife know as soon as I'm back from Jamaica!

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T17:00:06.721Z · LW · GW

Similarly, the point about trash also ignores the larger context. Picking up my own trash has much less relationship to disgust, or germs, than picking up other people's trash.

Agreed, but that's exactly the point I'm making. Once you apply insights from rationality to situations outside spherical trash in a vacuum-filled park, you end up with all sorts of confounding effects that make the insights less applicable. Your point about germs and my point about fixing what you break are complementary, not contradictory.

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T16:57:06.256Z · LW · GW

I think this post is missing the major part of what "metarational" means: acknowledging that the kinds of explicit principles and systems humans can hold in working memory and apply in real time are insufficient for capturing the full complexity of reality, having multiple such principles and systems available anyway, and skillfully switching among them in appropriate contexts.

This sounds to me like a semantic issue? Metarational isn't exactly a standard term AFAIAA (I just made it up on the spot), and it looks like you're using it to refer to a different concept from me.

Comment by Yair Halberstadt (yair-halberstadt) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T16:44:27.919Z · LW · GW

Sure it is, if you accept a whole bunch of assumptions. Or it could just not do that.

Comment by Yair Halberstadt (yair-halberstadt) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T16:06:44.264Z · LW · GW

Reading this reminds me of Scott Alexander in his review of "what we owe the future":

But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

You come up with a brilliant simulation argument as to why the AI shouldn't just do what's clearly in its best interests. And maybe the AI is neurotic enough to care. But in all probability, for whatever reason, it doesn't. And it just goes ahead and turns us into paperclips anyway, ignoring a person running behind it saying "bbbbbbut the simulation argument".

Comment by Yair Halberstadt (yair-halberstadt) on What prevents SB-1047 from triggering on deep fake porn/voice cloning fraud? · 2024-09-26T16:16:28.420Z · LW · GW

Comment by Yair Halberstadt (yair-halberstadt) on What prevents SB-1047 from triggering on deep fake porn/voice cloning fraud? · 2024-09-26T15:30:39.210Z · LW · GW

I'm not sure why those shouldn't be included? If someone uses my AI to perform 500 million dollars of fraud, then I should probably have been more careful releasing the product.

Comment by Yair Halberstadt (yair-halberstadt) on Bryan Johnson and a search for healthy longevity · 2024-07-27T19:00:08.743Z · LW · GW

The rest of the family is still into Mormonism, and his wife tried to sue him for millions, and she lost (false accusations)

In case you're interested in following this up, Tracing Woodgrains on the accusations: https://x.com/tracewoodgrains/status/1743775518418198532

Comment by Yair Halberstadt (yair-halberstadt) on Bryan Johnson and a search for healthy longevity · 2024-07-27T18:57:30.634Z · LW · GW

His approach to achieving immortality seems to be similar to someone attempting to reach the moon by developing higher altitude planes. He's using interventions that seem likely to improve health and lifespan by a few percentage points, which is great, but can't possibly get us to where he wants to go.

My assumption is that any real solution to mortality will look more like "teach older bodies to self repair the same way younger bodies do" than "eat this diet, and take these supplements".

Comment by Yair Halberstadt (yair-halberstadt) on The Cancer Resolution? · 2024-07-25T19:52:47.428Z · LW · GW

I very much did not miss that.

I would consider this one of the most central points to clarify, yet the OP doesn't discuss it at all, and your response to it being pointed out was 3 sentences, despite there being ample research on the topic which points strongly in the opposite direction.

Where did I say that?

I never said you said it, I said the book contains such advice:

Lintern suggests that chemotherapy is generally a bad idea.

Comment by Yair Halberstadt (yair-halberstadt) on The Cancer Resolution? · 2024-07-25T05:03:52.475Z · LW · GW

Now it can be very frustrating to hear "you can't have an opinion on this because you're not an expert", and it sounds very similar to credentialism.

But it's not. If you'd demonstrated a mastery of the material, and come up with a convincing description of the current evidence for the DNA theory and why you believe it's incorrect, evidence which is not pulled straight out of the book you're reviewing, I wouldn't care what your credentials are.

But you seem to have missed really obvious consequences of the fungi theory, like, "wouldn't it be infectious then", and all the stuff in J Bostock's excellent comment. At that point it seems like you've read a book by a probable crank, haven't even thought through the basic counterarguments, and are spreading it around despite it containing some potentially pretty dangerous advice like "don't do chemotherapy". This is not the sort of content I find valuable on LessWrong, so I heavily downvoted.

Comment by Yair Halberstadt (yair-halberstadt) on The Cancer Resolution? · 2024-07-25T04:57:53.949Z · LW · GW

I want to take a look at the epistemics of this post, or rather, whether this post should have been written at all.

In 95% of cases someone tearing down the orthodoxy of a well-established field is a crank. In another 4% of cases they raise some important points, but are largely wrong. In 1% of cases they are right and the orthodoxy has to be rewritten from scratch.

Now these 1% of cases are extremely important! It's understandable why the rationalist community, which has a healthy skepticism of orthodoxy, would be interested in finding them. And this is probably a good thing.

But you have to have the expertise with which to do so. If you do not have an extremely solid grasp of cancer research, and you highlight a book like this, 95% of the time you are highlighting a crank, and that doesn't do anyone any good. From what I can make out from this post (and correct me if I'm wrong) you do not have any such expertise.

Now there are people on LessWrong who do have the necessary expertise, and I would value it if they were to spend 10 seconds looking at the synopsis and say either "total nonsense, not even worth investigating" or "I'll delve into that when I get the time". But if you don't have the expertise, your best bet is just to go with the orthodoxy. There's an infinite amount of bullshit to get through before you find the truth, and a book review of probable bullshit doesn't actually help anyone.

Comment by Yair Halberstadt (yair-halberstadt) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T05:34:39.441Z · LW · GW

And it turns out that this algorithm works and we can find steering vectors that are orthogonal (and have ~0 cosine similarity) while having very similar effects.

Why ~0 and not exactly 0? Are these not perfectly orthogonal? If not, would it be possible to modify them slightly so they are perfectly orthogonal, then repeat, just to exclude Fabien Roger's hypothesis?
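
For concreteness, the tweak I have in mind is just standard Gram-Schmidt orthogonalisation. A minimal numpy sketch (the random 4096-dimensional vectors here are stand-ins of my own, not the post's actual steering vectors):

```python
import numpy as np


def orthogonalize(vectors, eps=1e-8):
    """Classical Gram-Schmidt: subtract from each vector its components along
    the vectors already kept, so the returned set is pairwise orthogonal."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for b in basis:
            w -= (w @ b) * b              # remove the component along b
        norm = np.linalg.norm(w)
        if norm > eps:                    # drop (near-)linearly-dependent vectors
            basis.append(w / norm)
    return np.stack(basis)


rng = np.random.default_rng(0)
steering = rng.standard_normal((3, 4096))  # stand-ins for the steering vectors
ortho = orthogonalize(steering)
print(np.round(ortho @ ortho.T, 6))        # identity matrix: pairwise cosine similarity 0
```

(Of course, in floating point even this only gets you to within rounding error of zero, so perhaps ~0 is all one can expect anyway.)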

Comment by Yair Halberstadt (yair-halberstadt) on Ice: The Penultimate Frontier · 2024-07-14T06:06:00.822Z · LW · GW

It's not that we can't colonise Alaska, it's that it's not economically productive to do so.

I wouldn't expect colonising Mars to be economically productive, but instead to be funded by other sources (essentially charity).

Comment by Yair Halberstadt (yair-halberstadt) on Consider the humble rock (or: why the dumb thing kills you) · 2024-07-04T14:46:50.642Z · LW · GW

I think the chances that something that doesn't immediately kill humanity, and isn't actively trying to kill humanity, polishes us off for good are pretty low, at the very least.

Humans have survived as hunter gatherers for a million years. We've thrived in every possible climate under the sun. We're not just going to roll over and die because civilisation has collapsed.

Not that this is much of a comfort if 99% of humanity dies.

Comment by Yair Halberstadt (yair-halberstadt) on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-07-01T03:20:24.928Z · LW · GW

Thanks for the point. I think I'm not really talking to that sort of person? My intended audience is the average American who views the USA as mostly a force for good, even if its foreign policy can be misguided at times.

Comment by Yair Halberstadt (yair-halberstadt) on Secondary forces of debt · 2024-06-28T04:58:33.940Z · LW · GW
  1. Historically, European Jews were moneylenders, since Christians were forbidden to charge interest. This was one of the major forces behind the pogroms, since killing the debt holders saved you having to pay up.

  2. On a country-level scale this is significant. If you live in a dangerous area, you want the USA to have invested a lot of money in you, which they will lose if you are ever conquered.

  3. People claiming that debts are inherited by the estate: this only applies to formal, legible debts. If you lend money informally (possibly because there's a criminal element involved), or the debt is an informal obligation to provide services/respect, then once the debt holder dies it's often gone.

  4. Even for many types of legible debt, once the debt holder dies it's very difficult for the estate to know the debt exists or its status. If old Joey Richbanks lends me 10 million dollars, witnessed and signed in a contract, who says his children are ever going to find the contract? And if they do, and I claim I already paid it, how certain are they that I'm lying? And how likely are they to win the court case if so?

Comment by Yair Halberstadt (yair-halberstadt) on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-06-27T13:24:19.171Z · LW · GW

Putting on my reasonable Israeli nationalist hat:

"Of course, granting a small number of well behaved Palestinians citizenship in Israel is not a problem, and as it happens that occurs at a small scale all the time (e.g. family unification laws, east Jerusalem residents).

But there's a number of issues with that:

  1. No vetting is perfect, some terrorists/bad cultural fits will always slip through the cracks.
  2. Even if this person is a great cultural fit, there's no guarantee their children will be, or that they won't pull in other people through family unification laws.
  3. There's a risk of a democratic transition - the more Arab voters, the more power they have, the more they can open the gates to more Arabs, till Israel ceases to be a Jewish state.
  4. We don't trust the government to only keep it at a small scale.

Now let's turn it around:

Why should we do this? What do we have to gain for taking on this risk?"

Comment by Yair Halberstadt (yair-halberstadt) on Andrew Burns's Shortform · 2024-06-27T04:20:44.526Z · LW · GW

There seems to be a huge jump from "there's no moat around generative AI" (which makes sense, as how to make one is publicly known, and the secret sauce is just about improving performance) to... all the other stuff, which seems completely unrelated?

Comment by Yair Halberstadt (yair-halberstadt) on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-06-26T15:31:16.061Z · LW · GW

I agree that making places which will definitely be part of Israel in any future two-state solution denser, whilst not increasing their footprint or access to neighbouring land, is not inherently problematic.

But give people an inch and they will take a mile. From the US perspective it's far easier to just deliver an ultimatum on settlement building, full stop. Besides, the fewer settlers, the fewer troublemakers, so that's another advantage.

Also, that provides an incentive for those who live in the settlements to come to an agreement on a two-state solution, since that will free up their land for further building.

I agree that they should turn a blind eye to small-scale refurbishment/rebuilding of existing housing stock, but should object to any greenfield building or major projects.

Comment by Yair Halberstadt (yair-halberstadt) on On Claude 3.5 Sonnet · 2024-06-26T04:03:09.996Z · LW · GW

I think one way of framing it is whether the improvements to itself outweigh the extra difficulty in eking out more performance. Basically, does the performance converge or diverge?
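
A toy model of what I mean (my own framing, nothing from the post): suppose each round of self-improvement multiplies the capability gain of the previous round by some factor r.

```python
# Toy model: the capability gain per round shrinks (r < 1) or grows (r >= 1).
def total_capability(r, rounds=1000, first_gain=1.0):
    capability, gain = 0.0, first_gain
    for _ in range(rounds):
        capability += gain
        gain *= r
    return capability


print(total_capability(0.9))   # ~10: converges towards first_gain / (1 - r)
print(total_capability(1.1))   # astronomically large: diverges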

Comment by Yair Halberstadt (yair-halberstadt) on I'm a bit skeptical of AlphaFold 3 · 2024-06-25T12:58:19.387Z · LW · GW

This makes sense, but isn't AlphaFold available for use? Is it possible to verify this one way or another experimentally?

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T06:24:27.099Z · LW · GW

Might be worth posting this as its own question for greater visibility.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T05:49:54.523Z · LW · GW

Possibly, but some of the missteps just feel too big to ignore. Like what on earth is going on in the second half of the book?

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T05:32:52.155Z · LW · GW

I greatly enjoyed The Metropolitan Man, but feel like web serials, especially fan fiction, are their own genre and deserve their own post.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-25T02:35:09.929Z · LW · GW

It's a prequel in the loosest possible sense. In theory they could be set in two different universes and it wouldn't make much of a difference.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-24T12:52:47.965Z · LW · GW

Oh (not a spoiler) the second narrator is obviously not being entirely truthful.

That's totally a spoiler :-), but for me it was one of the most brilliant twists in the book. You have this stuff that feels like the author is doing really poor sci-fi, and then it's revealed that the author is perfectly aware of that and is making a point about translation.

Comment by Yair Halberstadt (yair-halberstadt) on Sci-Fi books micro-reviews · 2024-06-24T12:49:20.175Z · LW · GW

Thanks, really appreciate the feedback! Maybe I'll give The Three Body Problem another chance.

Comment by Yair Halberstadt (yair-halberstadt) on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-20T06:45:30.215Z · LW · GW

What about solar power? If you build a data center in the desert, buy a few square km of adjacent land, and tile them with solar panels, presumably that can be done far quicker and with far less regulation than building a power plant, and at night you can use off-peak grid electricity at cheaper rates.
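
Back-of-envelope sketch (all the numbers here are my own rough assumptions, not anything from the post):

```python
area_km2 = 4                       # "a few square km"
peak_irradiance_w_per_m2 = 1000    # rough desert midday solar irradiance
panel_efficiency = 0.20
packing_fraction = 0.5             # spacing, access roads, inverters, etc.

peak_power_mw = (area_km2 * 1e6 * peak_irradiance_w_per_m2
                 * panel_efficiency * packing_fraction) / 1e6
print(f"~{peak_power_mw:.0f} MW at peak")   # ~400 MW, comparable to a mid-sized power plant
```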

Comment by Yair Halberstadt (yair-halberstadt) on Surviving Seveneves · 2024-06-20T03:04:43.475Z · LW · GW

Vg pna'g or gur pnfr gung jnf gur gehyr rssbeg, fvapr gur cerfvqrag pubfr gb tb gb fcnpr vafgrnq bs gur bgure rssbeg.

Comment by Yair Halberstadt (yair-halberstadt) on Surviving Seveneves · 2024-06-19T17:02:39.441Z · LW · GW

The problem is that the temperature of the earth rises fairly fast as you dig downwards. How fast depends on location, but it's always significant enough that there's a pretty hard limit on how cold you can go.
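
Rough numbers (my own assumptions, not the book's): a typical continental geothermal gradient is something like 25-30 °C per kilometre of depth.

```python
gradient_c_per_km = 25   # assumed typical continental geothermal gradient
surface_temp_c = 15      # assumed average surface temperature

for depth_km in (1, 2, 4):
    print(f"{depth_km} km down: ~{surface_temp_c + gradient_c_per_km * depth_km} °C")
# 1 km down: ~40 °C;  2 km down: ~65 °C;  4 km down: ~115 °C
```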

Comment by Yair Halberstadt (yair-halberstadt) on Yair Halberstadt's Shortform · 2024-06-18T04:58:49.201Z · LW · GW

Reserve soldiers in Israel are paid their full salaries by national insurance. If they are also able to work (which is common, as the IDF isn't great at efficiently using its manpower), they can legally work and will get paid by their company on top of whatever they receive from national insurance.

Given how often sensible policies aren't implemented because of their optics, it's worth appreciating those cases where that doesn't happen. The biggest impact of a war on Israel is to the economy, and anything which encourages people to work rather than waste time during a war is a good policy. But it could so easily have been rejected because it implies soldiers are slacking off from their reserve duties.

Comment by Yair Halberstadt (yair-halberstadt) on The Data Wall is Important · 2024-06-11T04:03:16.444Z · LW · GW

Not video transcripts - video. One frame of video contains much more data than one text token, and you can train an AI as a next-frame predictor much as you can a next-token predictor.
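
Very roughly (my own numbers, and raw bytes overstate the useful information content, since video compresses extremely well):

```python
frame_bytes = 1280 * 720 * 3   # one uncompressed 720p RGB frame ≈ 2.8 MB
token_bytes = 4                # a text token is on the order of a few characters
print(frame_bytes // token_bytes)   # ~700,000x more raw bytes per frame than per token
```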

Comment by Yair Halberstadt (yair-halberstadt) on The Data Wall is Important · 2024-06-10T12:35:33.549Z · LW · GW

I'm guessing that the sort of data that's crawled by Google but not Common Crawl is usually low quality? I imagine that if somebody put any effort into writing something then they'll put effort into making sure it's easily accessible, and the stuff that's harder to get to is usually machine generated?

Of course that's excluding all the data that's private. I imagine that once you add private messages (e.g. WhatsApp, email) + internal documents that ends up being far bigger than the publicly available web.

Comment by Yair Halberstadt (yair-halberstadt) on Why I don't believe in the placebo effect · 2024-06-10T10:23:27.096Z · LW · GW

I'd be interested if you have a toy example showing how Simpson's paradox could have an impact here?

I assume that "has a placebo/doesn't have a placebo" is a binary variable, and I also assume that the number of people in each arm in each experiment is the same. I can't really see how you would end up with Simpson's paradox with that setup.