Posts

Book Review: Safe Enough? A History of Nuclear Power and Accident Risk 2024-07-09T01:12:28.730Z
Red Pill vs Blue Pill, Bayes style 2023-08-16T15:23:24.911Z
Link Summary: Top 10 Replicated Findings from Behavioral Genetics 2020-04-19T01:32:43.000Z
Operationalizing Newcomb's Problem 2019-11-11T22:52:52.835Z
Is the World Getting Better? A brief summary of recent debate 2019-02-06T17:38:43.631Z

Comments

Comment by ErickBall on On the subject of in-house large language models versus implementing frontier models · 2024-09-25T00:40:38.175Z · LW · GW

I would think things are headed toward these companies fine-tuning an open-source near-frontier LLM: cheaper than building one from scratch, but with most of the advantages.

Comment by ErickBall on Superbabies: Putting The Pieces Together · 2024-07-18T23:03:07.917Z · LW · GW

Yeah, something along the lines of an Elo-style rating would probably work better for this. You could put lots of hard questions on the test and then, instead of just ranking people, compare which questions they missed, etc.
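As a toy sketch of what I mean (the question-as-player framing, the 1500 starting ratings, and the K-factor are all just conventional illustrative defaults): treat each question as a "player", and a correct answer as a win for the test-taker.

```python
def elo_update(person, question, correct, k=32):
    """Standard Elo update; a correct answer counts as a 'win' for the person."""
    expected = 1 / (1 + 10 ** ((question - person) / 400))
    delta = k * (correct - expected)
    return person + delta, question - delta

# Two people face the same hard question; their ratings diverge based on who missed it.
alice, bob, q_hard = 1500.0, 1500.0, 1500.0
alice, q_hard = elo_update(alice, q_hard, correct=1)  # Alice answers correctly
bob, q_hard = elo_update(bob, q_hard, correct=0)      # Bob misses it
```

With enough shared questions, hard items drift to high ratings and the people who crack them drift above everyone else, which is exactly the "compare who missed what" idea rather than a raw score.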

Comment by ErickBall on Superbabies: Putting The Pieces Together · 2024-07-18T18:42:58.595Z · LW · GW

This works for corn plants because the underlying measurement "amount of protein" is something that we can quantify (in grams or whatever) in addition to comparing two different corn plants to see which one has more protein. IQ tests don't do this in any meaningful sense; think of an IQ test more like the Mohs hardness scale, where you can figure out a new material's position on the scale by comparing it to a few with similar hardness and seeing which are harder and which are softer. If it's harder than all of the previously tested materials, it just goes at the top of the scale.

Comment by ErickBall on Superbabies: Putting The Pieces Together · 2024-07-18T18:32:27.666Z · LW · GW

I wasn't saying it's impossible to engineer a smarter human. I was saying that if you do it successfully, then IQ will not be a useful way to measure their intelligence. IQ denotes where someone's intelligence falls relative to other humans, and if you make something smarter than any human, their IQ will be infinity and you need a new scale.

Comment by ErickBall on Superbabies: Putting The Pieces Together · 2024-07-17T15:16:27.687Z · LW · GW

it’s not even clear what it would mean to be a 300-IQ human

IQ is an ordinal score, not a cardinal one--it's defined to have a mean of 100 and a standard deviation of 15. So all it means is that this person would be smarter than all but about 1 in 10^40 natural-born humans. It seems likely that the range of intelligence for natural-born humans is limited by basic physiological factors like the space in our heads, the energy available to our brains, and the speed of our neurotransmitters. So a human with IQ 300 is probably about the same as IQ 250 or IQ 1000 or IQ 10,000, i.e. at the upper limit of that range.
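(For the curious, the 1-in-10^40 figure follows directly from the normal tail: IQ 300 is (300-100)/15 ≈ 13.3 standard deviations above the mean. A quick stdlib check, assuming the usual mean-100/SD-15 convention:)

```python
import math

def iq_tail_probability(iq, mean=100.0, sd=15.0):
    """Upper-tail probability of a normal distribution at the given IQ."""
    z = (iq - mean) / sd
    return math.erfc(z / math.sqrt(2)) / 2

p = iq_tail_probability(300)  # z ≈ 13.3; p is on the order of 10**-40
```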

Comment by ErickBall on Another medical miracle · 2024-07-02T14:55:51.746Z · LW · GW

I've heard doctors ask questions like this but I don't think they usually get very helpful answers. "My diet's okay I guess, pretty typical, a lot of times I don't sleep great, and yeah I have a pretty stressful job." Great, what do you do with that?

Comment by ErickBall on The Incredible Fentanyl-Detecting Machine · 2024-07-01T21:42:20.544Z · LW · GW

"Food" in general is about the easiest and most natural thing for a dog to identify. Distinguishing illegal drugs from all the other random stuff a person might be carrying (soap, perfume, medicine, etc.) at least requires a lot better training than finding food.

Comment by ErickBall on On Claude 3.5 Sonnet · 2024-06-28T02:56:07.950Z · LW · GW

It's interesting that 3.5 Sonnet does not seem to match, let alone beat, GPT-4o on the leaderboard (https://chat.lmsys.org/?leaderboard). Currently it shows GPT-4o with an Elo of 1287 and Claude 3.5 Sonnet at 1271.

Comment by ErickBall on Enriched tab is now the default LW Frontpage experience for logged-in users · 2024-06-22T17:08:52.472Z · LW · GW

Although it would also be nice to distinguish that from "I read this post already somewhere else"

Comment by ErickBall on Enriched tab is now the default LW Frontpage experience for logged-in users · 2024-06-22T17:03:26.343Z · LW · GW

I would love to have a checkbox or something next to each post to indicate "I saw this and I don't want to click on it"

Comment by ErickBall on Thoughts on seed oil · 2024-04-22T18:17:51.353Z · LW · GW

As a counterpoint, take a look at this article: https://peterattiamd.com/protein-anabolic-responses/

The upshot is that the studies saying your body can only use 45g of protein per meal for muscle synthesis are mostly based on fast-acting whey protein shakes. Stretching out the duration of protein metabolism (by switching protein sources and/or combining it with other foods in a gradually-digested meal) can mitigate the problem quite a bit.

Comment by ErickBall on Thoughts on seed oil · 2024-04-22T18:09:27.616Z · LW · GW

Saturated fats are definitely manageable in small amounts. For most of history, and still in many places today, the biggest concern for an infant was getting sufficient calories, and saturated fat is a great choice for that. When you look at modern hunter-gatherer diets, they contain animal products, but in most cases those do not make up the majority of calories (exceptions usually involve lots of seafood), the meats are wild and therefore fairly lean, and BMI stays generally quite low. Under those conditions, heart disease risk is small, and whether it is slightly increased by the saturated fat in one's diet is mostly irrelevant. There is a big difference between chasing down the occasional antelope and pulling up to the drive-through for a cheeseburger. So the evolutionary argument really is not strong evidence that saturated fats are harmless.

I agree that the studies we have are mostly inadequate, but I don't think using hunter-gatherer diets as a control would be very useful either. If you change everything at once, you can't isolate specific causal factors. What we really need (but can't have) is a bunch of large scale trials that have many groups with many different interventions and combinations of interventions, and statistical power to distinguish outcomes between each group.

Comment by ErickBall on Using axis lines for good or evil · 2024-03-23T15:53:18.953Z · LW · GW

Real can of worms that deserves its own post, I would think.

Comment by ErickBall on Using axis lines for good or evil · 2024-03-23T15:49:02.327Z · LW · GW

I think in this case just spacing them out would help more.

Comment by ErickBall on "Deep Learning" Is Function Approximation · 2024-03-22T02:46:40.416Z · LW · GW

Downvoted because I waded through all those rhetorical shenanigans and I still don't understand why you didn't just say what you mean.

Comment by ErickBall on Cohabitive Games so Far · 2024-03-22T02:23:08.264Z · LW · GW

Separate clocks would be a pain to manage in a board game, but in principle "the game ends once 50% of players have run out of time" seems like a decent condition.

Comment by ErickBall on Cohabitive Games so Far · 2024-03-21T00:36:07.670Z · LW · GW

Oh, good point, I had forgotten about the zero-sum victory points. The extent to which the other parts are zero sum depends a lot on how large the game board is relative to the number of players, so it could be adjusted. I was thinking about having a time limit instead of a round limit, to encourage the play to move quickly, but maybe that's too stressful. If you want the players to choose to end the game, then you'd want to build in a mechanic that works against all of them more and more as the game progresses, so that at some point continuing becomes counterproductive...

Comment by ErickBall on Cohabitive Games so Far · 2024-03-20T04:49:01.735Z · LW · GW

Would a good solution be to just play Settlers, but instead of saying "the goal is to get more points than anyone else," say "this is a variant where the goal is to get the highest score you can, individually"? That seems like it would change the negotiation dynamics in a potentially interesting way without having to make or teach a brand new game. Does this miss the point somehow?

Comment by ErickBall on My Clients, The Liars · 2024-03-11T19:20:27.814Z · LW · GW

So, then it seems like the client's best move in this scenario is to lie to you strategically, or at least omit information strategically. They could say "I know for sure you won't find any fingerprints or identifiable face in the camera footage" and "I think my friends will confirm that I was playing video games with them", and as long as they don't actually tell you that's a lie, you can put those friends on the stand, right?

Comment by ErickBall on My Clients, The Liars · 2024-03-11T17:53:22.974Z · LW · GW

You say that lying to you can only hurt them but "There is a kernel of an exception that is almost not worth mentioning" because it is rarely relevant. I find this pretty hard to believe. If your client tells you "yeah I totally robbed that store, but I was wearing a ski mask and gloves so I think a jury will have reasonable doubt assuming my friends say I was playing video games with them the whole time", would you be on board with that plan? There must be plenty of cases where the cops basically know who did it but have trouble proving it. Maybe those just don't get to the point of a public defender getting assigned?

Comment by ErickBall on Succession · 2023-12-27T05:24:53.273Z · LW · GW

That's like saying that because we live in a capitalist society, the default plan is to destroy every bit of the environment and fill every inch of the world with high rise housing projects. It's... true in some sense, but only as a hypothetical extreme, a sort of economic spherical cow. In reality, people and societies are more complicated and less single minded than that, and also people just mostly don't want that kind of wholesale destruction.

Comment by ErickBall on Succession · 2023-12-26T22:36:37.602Z · LW · GW

I didn't think the implication was necessarily that they planned to disassemble every solar system and turn it into probe factories. It's more like... seeing a vast empty desert and deciding to build cities in it. A huge universe, barren of life except for one tiny solar system, seems not depressing exactly but wasteful. I love nature and I would never want all the Earth's wilderness to be paved over. But at the same time I think a lot of the best the world has to offer is people, and if we kept 99.9% of it as a nature preserve then almost nobody would be around to see it. You'd rather watch the unlifted stars, but to do that you have to exist.

Comment by ErickBall on How "Pause AI" advocacy could be net harmful · 2023-12-26T20:38:47.802Z · LW · GW

I don't think governments have yet committed to trying to train their own state-of-the-art foundation models for military purposes, probably partly because they (sensibly) guess that they would not be able to keep up with the private sector. That means government interest/involvement has relatively little effect on the pace of advancement at the bleeding edge.

Comment by ErickBall on AI Girlfriends Won't Matter Much · 2023-12-24T07:26:05.665Z · LW · GW

Fair point, but I can't think of a way to make an enforceable rule to that effect. And even if you could make that rule, a rogue AI would have no problem with breaking it.

Comment by ErickBall on The Shortest Path Between Scylla and Charybdis · 2023-12-24T04:15:38.929Z · LW · GW

I think if you could demonstrably "solve alignment" for any architecture, you'd have a decent chance of convincing people to build it as fast as possible, in lieu of other avenues they had been pursuing.

Comment by ErickBall on Welcome to Baltimore LessWrong [Edit With Your Details] · 2023-12-22T22:35:30.437Z · LW · GW

Since our info doesn't seem to be here already: We meet on Sundays at 7pm, alternating between virtual and in-person in the lobby of the UMBC Performing Arts and Humanities Building. For more info, you can join our Google group (message the author of this post, bookinchwrm).

Comment by ErickBall on Redirecting one’s own taxes as an effective altruism method · 2023-12-01T06:03:09.770Z · LW · GW

I found this post interesting, mostly because it illustrates deep flaws in the US tax system that we should really fix. I downvoted it because I think it is a terrible strategy for giving more money to charity. Many other good objections have been raised in the comments, and the post itself admits that lack of effectiveness is a serious problem. One problem I did not see addressed anywhere is reputational risk. The world is not static, and a technique that works for an individual criminal or a few conscientious objectors probably will not work consistently for a large and coordinated group of donors, because society will notice and react. What effect would this behavior have on the charities you give to? I suspect most of them, if they knew about it, would justifiably refuse the money. What effect would it have on other organizations you might be associated with? They are now involved with and perhaps encouraging a known criminal, albeit one who probably won't be prosecuted.

In conclusion, I really wish I could vote to disagree with this post without downvoting to make it less visible. I think readers should be able to see it and also see that practically everyone disagrees with it.

Comment by ErickBall on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-22T16:46:51.502Z · LW · GW

I always thought it would be great to have one set of professors do the teaching, and then a different set come in from other schools just for a couple weeks at the end of the year to give the students a set of intensive written and oral exams that determines a big chunk of their academic standing.

Comment by ErickBall on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T21:38:05.904Z · LW · GW

Here's a market; I'm not sure how to define "linchpin", but we can at least predict whether he'll be part of it.

https://manifold.markets/ErickBall/will-the-first-agi-be-built-by-sam?r=RXJpY2tCYWxs

Comment by ErickBall on How did you integrate voice-to-text AI into your workflow? · 2023-11-20T15:28:31.628Z · LW · GW

I can now get real-time transcripts of my Zoom meetings (via a Python wrapper of the OpenAI API), which makes it much easier to track the important parts of a long conversation. I tend to zone out sometimes and miss little pieces otherwise, as well as forget stuff.

Comment by ErickBall on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-20T15:23:18.782Z · LW · GW

That's fair, most of them were probably never great teachers.

Comment by ErickBall on OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns · 2023-11-20T15:22:53.476Z · LW · GW

You are attributing a lot more deviousness and strategic boldness to the so-called deep state than the US government is organizationally capable of. The CIA may have tried a few things like this in banana republics but there's just no way anybody could pull it off domestically.

Comment by ErickBall on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-20T06:01:55.635Z · LW · GW

Professors being selected for research is part of it. Another part is the tenure you mentioned - some professors feel like once they have tenure they don't need to pay attention to how well they teach. But I think a big factor is another one you already mentioned: salaries. $150k might sound like a lot to a student, but to the kind of person who can become a math or econ professor at a top research university this is... not tiny but not close to optimal. They are not doing it for the money. They are bought into a culture where the goal is building status in academic circles, and that's based on research.

I also think you've had some bad luck. I had a lot of good professors and a handful of bad ones as an undergrad (good school but not a research university), and in grad school maybe a little more equal between good professors and those who didn't care much. But even in the latter cases, I rarely felt like I didn't learn anything. It just took a little more effort on my part to read the book if the lectures were a snooze (and yes, there were a few profs whose voices could literally put me to sleep in an instant).

Comment by ErickBall on The other side of the tidal wave · 2023-11-06T23:54:35.132Z · LW · GW

But that sort of singularity seems unlikely to preserve something as delicately balanced as the way that (relatively well-off) humans get a sense of meaning and purpose from the scarcity of desirable things.

I think our world actually has a great track record of creating artificial scarcity for the sake of creating meaning (in terms of enjoyment, striving to achieve a goal, sense of accomplishment). Maybe "purpose" in the most profound sense is tough to do artificially, but I'm not sure that's something most people feel a whole lot of anyway?

I'm pretty optimistic about our ability to adapt to a society of extreme abundance by creating "games" (either literal or social) that become very meaningful to those engaged in them.

Comment by ErickBall on Autonomic Sanity · 2023-09-27T13:16:07.834Z · LW · GW

Excellent, I think I will give something like that a try

Comment by ErickBall on Don't take the organizational chart literally · 2023-09-26T01:40:36.955Z · LW · GW

I know this is an old thread but I think it's interesting to revisit this comment in light of what happened at Twitter. Musk did, in fact, fire a whole lot of people. And he did, in fact, unban a lot of conservatives without much obvious delay or resistance within the company. I'm not sure how much of an implication that has about your views of the justice department, though. Notably, it was pretty obvious that the decisions at Twitter were being made at the top, and that the people farther down in the org chart had to implement those decisions or be fired. That sort of thing is less often true in government, especially when the actions are on the far end of questionably legal. 

Let's take NSA surveillance of American phone records as an example - plenty of people felt that it was unconstitutional. Without getting into any details, the end result was that it ended up being a political decision whether this sort of thing is acceptable. As far as I know, nobody at the NSA got fired, let alone charged, for allowing such a program. Contrast that with convincing someone to bury the results of an autopsy. They know perfectly well that if that comes out they'll be charged with a crime; formal authority is basically useless. Even if that person is generally loyal to the organization, that loyalty is contingent on a belief that the agency's goals are aligned with the person's goals. And that alignment can change very quickly. Then the person in charge is left with the option of threatening to fire people (do you know how hard it is to fire a civil servant?) or maybe just not promote them (until the next administration comes around), and even that would require a paper trail that I don't think they would risk. Soft power can go very far, but almost never as far as covering up a murder.

Comment by ErickBall on Autonomic Sanity · 2023-09-26T00:07:05.697Z · LW · GW

Thanks! I'd love to hear any details you can think of about what you actually do on a daily basis to maintain mental health (when it's already fairly stable). Personally I don't really have a system for this, and I've been lucky that my bad times are usually not that bad in the scheme of things, and they go away eventually.

Comment by ErickBall on Red Pill vs Blue Pill, Bayes style · 2023-08-17T15:03:44.311Z · LW · GW

I'm not sure how I would work it out. The problem is that presumably you don't value one group more because they chose blue (it's because they're more altruistic in general) or because they chose red (it's because they're better at game theory or something). The choice is just an indicator of how much value you would put on them if you knew more about them. Since you already know a lot about the distribution of types of people in the world and how much you like them, the Bayesian update doesn't really apply in the same way. It only works on what pill they'll take because everyone is deciding with no knowledge of what the others will decide.

In the specific case where you don't feel altruistic towards people who chose blue specifically because of a personal responsibility argument ("that's their own fault"), then trivially you should choose red. Otherwise, I'm pretty confused about how to handle it. I think maybe only your level of altruism towards the blue choosers matters.

Comment by ErickBall on Red Pill vs Blue Pill, Bayes style · 2023-08-17T09:41:15.455Z · LW · GW

Doesn't "trembling hand" mean it's a stable equilibrium even if there are?

Comment by ErickBall on Red Pill vs Blue Pill, Bayes style · 2023-08-16T22:46:03.601Z · LW · GW

I mean definitely most people will not use a decision procedure like this one, so a smaller update seems very reasonable. But I suspect this reasoning still has something in common with the source of the intuition a lot of people have for blue, that they don't want to contribute to anybody else dying.

Comment by ErickBall on Red Pill vs Blue Pill, Bayes style · 2023-08-16T22:14:33.052Z · LW · GW

Sure, if you don't mind the blue-choosers dying then use the stable NE.

Comment by ErickBall on Red Pill vs Blue Pill, Bayes style · 2023-08-16T21:18:19.435Z · LW · GW

People are all over the place but definitely not 50/50. The qualitative solution I have will hold no matter how weak the correlation with other people's choices (for large enough values of N).

If you make the very weak assumption that some nonzero number of participants will choose blue (and you prefer to keep them alive), then this problem becomes much more like a prisoner's dilemma where the maximum payoff can be reached by coordinating to avoid the Nash equilibrium.
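To make that update concrete, here's a toy simulation: suppose everyone's pick is driven by a shared, unknown propensity toward blue, so your own choice is evidence about what everyone else did. (The shared-propensity model, uniform prior, and player count are illustrative assumptions on my part; the 50% survival threshold is from the original puzzle.)

```python
import random

def pill_simulation(n_players=200, trials=5000, seed=0):
    """Toy correlation model: a shared 'mood' theta sets everyone's chance of
    picking blue, so my own pick is evidence about theta and hence the outcome."""
    rng = random.Random(seed)
    die_if_blue = die_if_red = n_blue = n_red = 0
    for _ in range(trials):
        theta = rng.random()                      # uniform prior on blue-propensity
        i_pick_blue = rng.random() < theta
        others_blue = sum(rng.random() < theta for _ in range(n_players - 1))
        blue_dies = (others_blue + i_pick_blue) < n_players / 2  # blue under 50% -> dies
        if i_pick_blue:
            n_blue += 1
            die_if_blue += blue_dies
        else:
            n_red += 1
            die_if_red += blue_dies
    return die_if_blue / n_blue, die_if_red / n_red

p_given_blue, p_given_red = pill_simulation()
# Conditioning on your own pick shifts the estimated risk to the blue-choosers,
# even though your single vote barely moves the total.
```

Under these assumptions the estimated probability that blue-choosers die is much lower given that you yourself picked blue, which is the Bayesian version of the "blue choosers protect each other" intuition.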

Comment by ErickBall on video games > IQ tests · 2023-08-16T12:41:41.770Z · LW · GW

I think optimizer-type jobs are a modest subset of all useful or non-bullshit office jobs. Many call more for creativity, or reliably executing an easy task. In some jobs, basically all the most critical tasks are new and dissimilar to previous tasks, so there's not much to optimize. There's no quick feedback loop. It's more about how reliably you can analyze the new situation correctly. 

I had an optimizing job once, setting up computers over the summer in college. It was fun. Programming is like that too. I agree that if optimizing is a big part of the job, it's probably not bullshit. 

But over time I've come to think that even though occasional programming is the most fun part of my job, the inscrutable parts that you have to do in a vacuum are probably more important. 

Comment by ErickBall on video games > IQ tests · 2023-08-05T18:07:12.553Z · LW · GW

I think one of the major purposes of selecting employees based on a college degree (aside from proving intelligence and actually learning skills) is to demonstrate ability to concentrate over extended periods (months to years) on boring or low-stimulation work, more specifically reading, writing, and calculation tasks that are close analogues of office work. A speedrun of a video game is very different. The game is designed for visual and auditory stimulation. You can clearly see when you're making progress and how much, a helpful feature for entering a flow state. There is often a competitive aspect. And of course you don't have to read or write or calculate anything, or even interact with other people in a productive way. Probably the very best speed runners are mostly smart people who could be good at lots of things, because that's true of almost any competition. But I doubt skill at speedrunning otherwise correlates much with success at most jobs.

Comment by ErickBall on The ants and the grasshopper · 2023-06-09T18:11:45.596Z · LW · GW

The math doesn't necessarily work out that way. If you value the good stuff linearly, the optimal course of action will either be to spend all your resources right away (because the high discount rate makes the future too risky) or to save everything for later (because you can get such a high return on investment that spending any now would be wasteful). Even in a more realistic case where utility is logarithmic with, for example, computation, anticipation of much higher efficiency in the far future could lead to the optimal choice being to use essentially the bare minimum right now.
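The bang-bang structure is easy to see numerically: with linear utility, the value of spending everything at period t scales as (growth x survival)^t, which is maximized at one endpoint or the other. (The rates and horizon below are made up purely for illustration.)

```python
def best_spend_time(growth, survival, horizon=50):
    """Linear utility: value of spending all resources at period t is
    (growth * survival) ** t, so the optimum is always an endpoint."""
    values = [(growth * survival) ** t for t in range(horizon + 1)]
    return max(range(horizon + 1), key=values.__getitem__)

best_spend_time(growth=1.10, survival=0.95)  # compounding wins: spend at the horizon
best_spend_time(growth=1.02, survival=0.90)  # risk dominates: spend immediately
```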

I think there are reasonable arguments for putting some resources toward a good life in the present, but they mostly involve not being able to realistically pull off total self-deprivation for an extended period of time. So finding the right balance is difficult, because our thinking is naturally biased to want to enjoy ourselves right now. How do you "cancel out" this bias while still accounting for the limits of your ability to maintain motivation? Seems like a tall order to achieve just by introspection.

Comment by ErickBall on Arguments Against Fossil Future? · 2023-06-04T18:29:25.803Z · LW · GW

Positive externalities is a bit of an odd way to phrase it--if it's just counting up the economic value (i.e. price) of the fossil fuels, doesn't it also disregard the consumer surplus? In other words, they've demonstrated that the negative externalities of pollution outweigh the value added on the margin, but if we were to radically decrease our usage of fossil fuels then the cost of energy (especially for certain uses with no good substitute, as you discussed above) would go way up, and the tradeoff on the margin would look very different.

Comment by ErickBall on Accidental Terraforming · 2023-04-30T19:36:55.005Z · LW · GW

I see your point about guilt/blame, but I'm just not sure the term we use to describe the phenomenon is the problem. We've already switched terms once (from "global warming" to "climate change") to sound more neutral, and I would argue that "climate change" is about the most neutral description possible--it doesn't imply that the change is good or bad, or suggest a cause. "Accidental terraforming", on the other hand, combines two terms with opposite valence, perhaps with the intent that they cancel out? Terraforming is supposed to describe a desirable (for humans) change to the environment, while an accident is usually bad.

But the controversy, blame, and anger don't arise from the moniker; they are a natural consequence of trying to change behavior. In fact, people now like to say "anthropogenic climate change" precisely because they intend to put the blame explicitly on polluting industry. How can we take control of our effects on the climate if we don't first acknowledge them, and then add a moral valence? Without a "should", there is no impetus to action. Telling people they should do something different (and costly) will upset them, yes, but then again, you can't make an omelet without breaking some eggs.

Comment by ErickBall on Discovering Language Model Behaviors with Model-Written Evaluations · 2023-03-10T22:35:28.050Z · LW · GW

How would a language model determine whether it has internet access? Naively, it seems like any attempt to test for internet access is doomed because if the model generates a query, it will also generate a plausible response to that query if one is not returned by an API. This could be fixed with some kind of hard coded internet search protocol (as they presumably implemented for Bing), but without it the LLM is in the dark, and a larger or more competent model should be no more likely to understand that it has no internet access.

Comment by ErickBall on AGI in sight: our look at the game board · 2023-02-21T23:24:23.920Z · LW · GW

If the NRO had Sentient in 2012, then it wasn't even a deep learning system. Probably they have something now that's built from transformers (I know other government agencies are working on things like this for their own domain-specific purposes). But it's got to be pretty far behind the commercial state of the art, because government agencies don't have the in-house expertise or the budget flexibility to move quickly on large-scale basic research.

Comment by ErickBall on AGI in sight: our look at the game board · 2023-02-21T23:10:47.387Z · LW · GW

Those are... mostly not AI problems? People like to use kitchen-based tasks because current robots are not great at dealing with messy environments, and because a kitchen is an environment heavily optimized for the specific physical and visuospatial capabilities of humans. That makes doing tasks in a random kitchen seem easy to humans, while being difficult for machines. But it isn't reflective of real world capabilities.

When you want to automate a physical task, you change the interface and the tools to make it more machine-friendly. Building a Roomba is ten times easier than building a robot that can navigate a house while operating an arbitrary stick vacuum. If you want dishes cleaned with minimal human input, you build a dishwasher that doesn't require placing each dish carefully in a rack (e.g., https://youtube.com/watch?v=GiGAwfAZPo0).

Some people have it in their heads that AI is not transformative or is no threat to humans unless it can also do all the exact physical tasks that humans can do. But a key feature of intelligence is that you can figure out ways to avoid doing the parts that are hardest for you, and still accomplish your high level goals.