Posts

Dependencies and conditional probabilities in weather forecasts 2022-03-07T21:23:12.696Z
Money creation and debt 2020-08-12T20:30:42.321Z
Superintelligence and physical law 2016-08-04T18:49:19.145Z
Scope sensitivity? 2015-07-16T14:03:31.933Z
Types of recursion 2013-09-04T17:48:55.709Z
David Brooks from the NY Times writes on earning-to-give 2013-06-04T15:15:26.992Z
Cryonics priors 2013-01-20T22:08:58.582Z

Comments

Comment by AnthonyC on Are there any naturally occurring heat pumps? · 2024-07-26T21:10:59.374Z · LW · GW

I have wondered something very similar to this myself. I think (at least in most cases) it is easier, on evolutionary timescales, to adapt to local climate conditions than to develop the machinery (and spend the metabolic energy) to fight against those conditions.

As far as I know, there are also no organisms that directly extract metabolic energy from wind, wave, tidal, or other mechanical motion. Chemosynthesis based on thermal gradients AFAIK only happens in bacteria near hydrothermal vents. I assume any biological heat pumps that could exist would need to be macroscopic to be useful, but really insulation, coloring, and evaporation are just simpler.

Comment by AnthonyC on Best in Class Life Improvement · 2024-07-25T22:29:22.699Z · LW · GW

Yes it is, for sure. I told a nurse at a sleep study that that was my dose. She mentioned she took half a 100mg pill once and stayed up for over 24 hours straight. For me it was barely enough to stay awake through a normal day. It took those 4 years and more to find enough of the root causes to not need to be on it anymore.

Comment by AnthonyC on Best in Class Life Improvement · 2024-07-23T22:29:08.022Z · LW · GW

I was prescribed modafinil for sleep issues for 4 years, 200mg/day. It definitely promoted wakefulness, but also made me more irritable.

Comment by AnthonyC on Best in Class Life Improvement · 2024-07-23T22:26:34.579Z · LW · GW

Personal note from my own experiences swimming: if you have breathing problems this is a lot less enjoyable. That said, it is definitely excellent exercise.

Comment by AnthonyC on "If You're Not a Holy Madman, You're Not Trying" · 2024-07-19T19:41:35.376Z · LW · GW

My claim is that Nate's position is much less puzzling on classical decision-theoretic grounds. Beliefs are "for" decisionmaking. If you're putting some insulation between your beliefs and your decisions, you're probably acting on some hidden beliefs.

I agree with this.

It feels a bit like the Scott position is doing separation of concerns wrong. If your beliefs and your actions disagree, I think it better to revise one or the other, rather than coming up with principles about how it's fine to say one thing and do another. 

I see it more as, not Scott, but human minds doing separation of concerns wrong. A well designed mind would probably work differently, plausibly more in line with decision theoretic assumptions, but you go to war with the army you have. What I have is a brain, coughed up by evolution, built from a few GB of source code, trained on a lifetime of highly redundant low-resolution sensory data, and running on a few tens of watts of sugar. How I should act is downstream  of what I happen to be and what constraints I'm forced to optimize under.

I think the idea of distinguishing CEV-principles as a separate category is a good point. Suppose we follow the iterative-learning-over-a-lifetime process to its logical endpoint, and assume an agent has crafted a fully-fleshed-out, articulable set of principles that they endorse reflectively in 100% of cases. I agree this is possible and would be very excited to see the result. If I had it, what would this mean for my actions?

Well, what I ideally want is to take the actions that the CEV-principles say are optimal. But, I am an agent with limited data and finite compute, and I face the same kind of tradeoffs as an operating system deciding when (and for how long) to run its task scheduler. At one extreme, it never gets run, and whatever task comes along first gets run until it quits. At the other extreme, it runs indefinitely, and determines exactly what action would have been optimal, but not until long after the opportunity to use that result has passed. Both extremes are obviously terrible. In between are a global optimum and some number of local optima, but you only ever have estimates of how close you are to them, and estimates of how much it would cost (in compute or in data acquisition effort) to get better estimates.
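
To make the tradeoff concrete, here is a toy sketch in Python (the diminishing-returns curve, the function name, and all numbers are made up purely for illustration, not a real model of deliberation):

```python
def net_value_of_deliberating(steps, improvement=1.0, cost_per_step=0.1):
    """Toy model: each extra step of deliberation improves the eventual
    decision by less than the last one (diminishing returns), while each
    step costs the same amount of time/compute."""
    quality = sum(improvement / (k + 1) for k in range(steps))
    return quality - cost_per_step * steps

# Neither extreme wins: zero deliberation leaves value on the table, and
# deliberating forever pays ever-growing costs for ever-smaller gains.
for steps in (0, 1, 5, 10, 50, 500):
    print(steps, round(net_value_of_deliberating(steps), 2))
```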

Given that, what I can actually do is make a series of approximations that are more tractable and rapidly executable, and that are usually close to optimal in the conditions where I usually need to apply them, knowing that those approximations are liable to break in extreme cases. I then deliberately avoid pushing those approximations too hard in directions I predict would Goodhart them in ways I'd have a hard time predicting. Even my CEV-principles would (I expect) endorse this, because they would necessarily contain terms for the cost of devoting more resources to making better decisions.

So, from my POV, I have an implicitly encoded seed for generating my CEV-principles, which I use to internalize and reflect on a set of meta-ethical general principles, which I use to generate a set of moral principles to guide actions. I share many of those (but not all) with my society, which also has informal norms and formal laws. Each step in that chain smooths out and approximates the potentially unboundedly complex edges of the underlying CEV-principles, in order to accommodate the limited compute budget allocated to judging individual cases. 

I think one of the reasons for moral progress over time is that, as we become wealthier and get better available data, we can make and evaluate and act on less crude approximations, individually and societally. I suspect this is also a part of why smarter people are, on average, more trusting and prosocial (if the studies I've read that say this are wrong, please let me know!).

This doesn't mean no one should ever become a holy madman. It just means the bar for doing so should be set higher than a simple expected value calculation would suggest. Similarly, in business, sometimes the right move is to bet the company, and in war, sometimes the right move is one that risks the future of your civilization. But, the bar for doing either needs to be very high, much higher than just "This is the highest expected payoff move we can come up with."

Comment by AnthonyC on Housing Roundup #9: Restricting Supply · 2024-07-19T15:21:08.003Z · LW · GW

This is worth considering, but I don't (currently) share your conclusion. In many cases, the (often past, not current) residents made the areas valuable by doing things they've since disallowed, things which would in fact continue to make the area (including their own property) more valuable. One of the key questions is: who gets to determine the cost or benefit to current residents of new development? 

If a developer is asking to buy my property, then I do, that's easy. 

Otherwise, if we're talking about compensating property owners for externalities caused by actions to develop land they don't own, then how far do we push the reasoning? If developers need to compensate nearby property owners for negative externalities imposed on them, then who compensates the developer for the positive externalities they cause on properties the developer doesn't own? Because if the answer is "no one" then this is a pure disincentive on development, even development that would net-benefit every single person in the community and surrounding communities. 

Right now builders face an enormous number of veto points in any construction process. Of course, so do current homeowners looking to improve their own property. I once looked into converting a half bath in my home to a full bath. I was told 1) You're not allowed to have a full bath on any level that doesn't contain a legally-recognized bedroom, because someone once decided that this makes it possible to be used as an illegal apartment, and 2) You won't be allowed to get any room on a walkout basement level recognized as a bedroom regardless of code definitions, for the same reason. So an action that imposes no cost on any residents, and in fact would likely drive up home values nearby, is disallowed, and overcoming the restrictions was far more costly than the value to me.

Comment by AnthonyC on Housing Roundup #9: Restricting Supply · 2024-07-18T23:50:25.225Z · LW · GW

The more I think about the "spare bedroom" stuff, the more of a fiasco it is in my mind. 1) Plenty of married couples don't or can't sleep in the same bed, more than you might expect, for all sorts of reasons. 2) If you have a 15 year old son and a newborn baby daughter, guess who's not getting a full night's sleep while in high school but also has to keep the lights and all devices off after 8pm? And the decor disagreements! 3) No one is ever allowed to plan for the future. "I just had twins, a boy and a girl, I know they'll need 2 rooms in 10 years by these standards." Too bad! Move then, and not a day sooner or later! 4) At least in the US, rooms are legally bedrooms or not based on means of egress and minimum size. Not by use. You think this is a home office? Nope, bedroom, sorry. 

Comment by AnthonyC on "If You're Not a Holy Madman, You're Not Trying" · 2024-07-18T17:23:41.130Z · LW · GW

To clarify: I don't think there are principles expressible in reasonable-length English sentences that we should be sure of. I actually think no such sentence can be "right" in the sense of conforming-to-what-we-actually-believe. But, I do think there is some set of underlying principles, instantiated in our minds, that we use in practice to decide what events or world states or approximate-and-expressible-principles are good or bad, or better or worse, and to what degree. I use my built-in "what's good?" sense to judge the questions that get asked further down in the hierarchy of legibility.

Comment by AnthonyC on "If You're Not a Holy Madman, You're Not Trying" · 2024-07-14T13:38:02.135Z · LW · GW

But this is really weird from a decision-theoretic perspective. An agent should be unsure of principles, not sure of principles but unsure about applying them.

I don't agree. Or at least, I think there's some level-crossing here of the axiology/morality/legality type (personally I've started to think of that as a 5 level distinction instead, axiology/metaethics/morality/cultural norms/legality). I see it as equivalent to saying you shouldn't design an airplane using only quantum field theory. Not because it would be wrong, but because it would be intractable. We, as embodied beings in the world, may have principles we're sure of - principles that would, if applied, accurately compare world states and trajectories. These principles may be computationally intractable given our limited minds, or may depend on information we can't reliably obtain. So we make approximations, and try to apply them while remembering that they're approximations and occasionally pausing when things look funny to see if the approximations are still working.

Comment by AnthonyC on In Defense of Lawyers Playing Their Part · 2024-07-03T11:00:37.663Z · LW · GW

All of these arguments make what I think is a false assumption: that all cases will be tried in the courts, and the main thing is to make the courts more unbiased in deciding cases brought to them. If you make it harder for defense attorneys to defend the guilty, then the guilty will go to greater lengths to avoid being brought to court in the first place. That could mean a whole lot of things in practice, with effects pointing in many directions. I don't know what they all add up to. Maybe not much of anything, but I'd find that very surprising.

Edit to add: changing this norm could also have some... potentially interesting... effects when applied to civil disobedience and unjust laws. If lawyers can be held accountable for knowingly defending the guilty, what happens to the ACLU? What would have happened to the NAACP and the Civil Rights Movement?

Comment by AnthonyC on Economics Roundup #2 · 2024-07-03T10:44:39.200Z · LW · GW

I took the second point to mean, "You do not want to put your political reputation and standing on the line to take control of a difficult decision where there is not an obvious right choice and you are not the expert and even the right choice will make a lot of people unhappy."

Comment by AnthonyC on What distinguishes "early", "mid" and "end" games? · 2024-06-27T02:17:13.612Z · LW · GW

Early on, you're far enough from your opponents that you can't really meaningfully compete with them. You're competing with the environment, and random events. It isn't until you expand enough to actually run into each other and need to capture resources and territory from each other that conflict becomes significant. 

Then again, maybe I'm wrong and this is why I'm not very good at Civ.

Comment by AnthonyC on Monthly Roundup #19: June 2024 · 2024-06-26T17:39:24.143Z · LW · GW

UK hotels engage in weekly fire alarm tests that everyone treats as not real and they look at you funny if you don’t realize. Never sound an alarm with the intention of people not responding, even or especially as a test.

All across America, counties test their tornado sirens weekly or monthly, at a standard time (to ensure equipment works), and everyone who lives there knows to ignore it. These tests can be important, even if they aren't always. Usually (but not always), the test uses lower volume or a totally different sound than a real warning.

On non-competes: The last time I left a job I was covered by a non-compete. My employer refused to provide any example companies or roles at companies that would or would not run afoul of the clause, including when asked about the specific company and role they knew I was moving to. As written, it was so broad as to include a huge majority of large corporations, VCs, think tanks, tech start-ups, and regulatory agencies on the planet. Obviously absurd, and no way they had the resources to bother trying to fight me on it even if they wanted to. I think a large fraction of non-competes are more like this than could ever make sense.

Comment by AnthonyC on Childhood and Education Roundup #6: College Edition · 2024-06-26T17:11:15.809Z · LW · GW

I haven't been paying much attention to Harvard since graduation, but I was class of '09. On Math 55: very disappointed to hear this. AFAICT they still have 5 different intro math sequences (multivariable calculus + linear algebra; Math 1a and 1b are the equivalent of AP Calculus AB and BC and most skip them) with different levels of rigor, so I have to wonder what Math 19, 21, 23, and 25 are now like. I took 21 and wish I'd taken 23, because even in 2005 I underestimated how much watering down had already happened.

Comment by AnthonyC on When Are Circular Definitions A Problem? · 2024-06-26T15:49:17.150Z · LW · GW

I agree with almost everything in this post, except that (ironically) I think it draws too narrow a boundary around the concept of "mathematics." I do very little formal mathematics, but use mathematical styles of reasoning very often, to good effect. To my understanding, math is the study of patterns, and to point other people at useful patterns, definitions can be a valuable starting point. This is especially true if you explicitly point out that the definition is approximate or fuzzy. If you're trying to inform or educate or advise people, then you need to do it (in part) with words, and you'll need to give enough definition (with examples, yes, but not only with examples) to get the process started. 

What you shouldn't do is use definitions to debate someone else when they have a good underlying point.

That said, there have also been a few times in my career where the most valuable thing I've been able to observe is, "this word shouldn't exist because it doesn't refer to a natural category," or "people stop using this word to describe a thing when the thing starts working properly, so they always think things in the category don't work." My main personal examples of these are smart materials, metamaterials, and nanotech. There is a useful underlying concept in each case, but real-world usage can be so inconsistent that it needs definition at the start of any conversation for the conversation to be useful.

Comment by AnthonyC on Notes on Potential Future AI Tax Policy · 2024-06-26T15:02:26.603Z · LW · GW

An optimal and consistent application of the idea would presumably also apply a Pigouvian subsidy to actions with positive externalities, which would give you more of those actions in proportion to how good they are. If you did this for everything perfectly then you wouldn't need to explicitly track which taxes pay for which subsidies. Every cost/price would correctly reflect all externalities and the market would handle everything else. In principle, "everything else" could (as with some of Robin Hanson's proposals) even include things like law enforcement and courts, with a government that basically only automatically and formulaically cashes and writes checks. I don't expect any real-world  regime could (or should try to) actually approach this limit, in practice, though.
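
As a minimal sketch of that logic (hypothetical function and made-up numbers, just to illustrate the alignment): setting the tax equal to the external cost, or a negative tax for positive externalities, makes the private decision coincide with the social one without any further bookkeeping.

```python
def decisions(private_benefit, private_cost, external_cost, tax=0.0):
    """An actor takes an action when their private payoff is positive;
    society wants it taken when the total payoff (including externalities)
    is positive. A Pigouvian tax equal to external_cost makes these match."""
    private = private_benefit - private_cost - tax > 0
    social = private_benefit - private_cost - external_cost > 0
    return private, social

# An action worth 10 to the actor, costing them 6, imposing 7 on bystanders:
print(decisions(10, 6, external_cost=7))         # (True, False): taken, but net harmful
print(decisions(10, 6, external_cost=7, tax=7))  # (False, False): the tax aligns the choice
```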

I would note that, in aggregate, the government's net revenue is not the thing government, or tax policy, is optimized for. Surplus, deficit, and neutrality can all be optimal at different times. If the government wanted to maximize net revenue in the long run, I doubt the approach would look much like taxation. Maybe more like a sovereign wealth fund.

As an example: If a carbon tax had its desired effect, it would collect money now (though the money might be immediately spent on related subsidies or tax breaks), but in the long run, if it's successful, we'd hope to reach a point where it's never collected again, because non-GHG-emitting options became universally better. 

Comment by AnthonyC on What distinguishes "early", "mid" and "end" games? · 2024-06-26T13:40:27.353Z · LW · GW

I would point out that these concepts only exist in finite games. Yes, "survive the development of AGI" is very much a finite game we have to win, but "then continue to thrive afterwards" is an infinite game. Life, in general, is an infinite game. For infinite games, these boundaries blur or vanish. In some sense it's all midgame, however many transitions and phases the midgame includes.

Comment by AnthonyC on What distinguishes "early", "mid" and "end" games? · 2024-06-26T13:33:37.192Z · LW · GW

I don't think this story about Backgammon reveals anything about how to play Chess, or StarCraft, or Civilization.  Most games have phase transitions, but most games don't have the particular phase transition from conflict-dominant to conflict-irrelevant.

I would say that Civilization, if anything, has the opposite transition, though still less sharp.

Comment by AnthonyC on LLM Generality is a Timeline Crux · 2024-06-26T13:26:50.614Z · LW · GW

This issue is further complicated by the fact that humans aren't fully general reasoners without tool support either.

I think the discussion, not specifically here but just in general, vastly underestimates the significance of this point. It isn't like we expect humans to solve meeting planning problems in our heads. I use Calendly, or Outlook's scheduling assistant and calendar. I plug all the destinations into our GPS and re-order them until the time looks low enough. One of the main reasons we want to use LLMs for these tasks at all is that, even with tool support, they are not trivial or instant for humans to solve.
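
The GPS re-ordering step, for instance, is a small search problem I hand to a tool rather than solve in my head. A minimal sketch of that kind of tool, with an assumed toy travel-time table (all names and numbers are made up):

```python
from itertools import permutations

def best_order(travel_time, stops, start):
    """Brute-force the visiting order that minimizes total travel time,
    starting from `start` -- the kind of search we offload to tools."""
    def total(route):
        legs = [start, *route]
        return sum(travel_time[a][b] for a, b in zip(legs, legs[1:]))
    return min(permutations(stops), key=total)

# Assumed travel times (minutes) between four places:
travel_time = {
    "home":  {"bank": 10, "store": 20, "gym": 15},
    "bank":  {"home": 10, "store": 12, "gym": 25},
    "store": {"home": 20, "bank": 12, "gym": 8},
    "gym":   {"home": 15, "bank": 25, "store": 8},
}
print(best_order(travel_time, ["bank", "store", "gym"], start="home"))
# ('bank', 'store', 'gym') -- 30 minutes, vs. 57 for the worst ordering
```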

There is also a reason why standardized tests for kids so often include essay questions on breaking down tasks step by step, like (to pick an example from my own past) "describe in as much detail as possible how you would make a peanut butter and jelly sandwich." Even aspiring professional chefs have to learn proper mise en place to keep on top of their (much more complex) cooking tasks. I won't bother listing more examples, but most humans are not naturally good at these tasks.

Yes, current LLMs are worse on many axes. IDK if that would be true if we built wrappers to let them use the planning tools humans rely on in practice, and if we put them through the kinds of practice humans use to learn these skills IRL. I suspect they still would be, but to a much lesser degree. But then I also can't help thinking about the constant stream of incredible-lack-of-foresight things I see other humans do on a regular basis, and wonder if I'm just overestimating us.

FWIW, after I wrote this comment, I asked Gemini what it thought. It came up with a very similar POV about what its limitations were, what tools would help it, and how much those tools would close the gap with humans.  Also, it linked this blog post in its reply. https://gemini.google.com/app/a72701429c8d830a

Comment by AnthonyC on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-06-22T02:03:13.287Z · LW · GW

I know I can immediately think of innocent, logical explanations for this behavior, but it's still incredibly creepy and disturbing to read.

Comment by AnthonyC on Sticker Shortcut Fallacy — The Real Worst Argument in the World · 2024-06-18T15:59:53.188Z · LW · GW

In the space of all possible languages and words, sure, but real languages are (or at least always have been to date on Earth) created and used by humans, with human minds/tendencies. Despite massive cultural/historical/individual variation, that still moves us quite a ways away from truly, fully arbitrary word meanings. I see two counterpoints to your observation of how different languages split and lump concepts:

1) Languages tend to cluster in the ways they do or don't split. The number of basic color words varies, but the order in which they appear as the number increases is mostly constant. The number of relationship words varies, but as it increases the splitting happens along the axes of gender, degree of relatedness, and which line we're related through. Most times a language has a hard-to-translate word, it can still be translated as a compound word, and if not, it can usually be roughly defined in a sentence or two, and work as a loanword from then on.

2) Languages tend to enable new word formation by native speakers with relatively easy agreement on their meaning. English has (had?) no single word meaning "niece and nephew," but many people hear or read the word "nibling" for the first time and figure out the intended meaning with no explanation whatsoever. There are also things like the bouba/kiki effect, which points towards how some of the ways sound ties to meaning are related to synesthesia and metaphor. On the other hand, jargon (in any field, including law) - where words are actually defined by inhuman category boundaries - feels much less natural to most people. From an objective perspective the jargon words are the least arbitrary. However, the ease of learning natural-language words suggests they're not so much arbitrary as running along grooves natural to human thought rather than to the physical, non-human world.

Comment by AnthonyC on AI #68: Remarkably Reasonable Reactions · 2024-06-14T22:42:11.015Z · LW · GW

I think we might be talking past each other. I agree that the specific capability I mentioned, even if I can write it down as an advance prediction of a thing that will happen, will have consequences I do not and plausibly cannot expect. In principle, if I had 1000 hours to analyze 1 hour of thinking by 1 instance of such a system, I might be able to figure out what it's doing and why. I won't, but it's possible-in-principle. To me, that means it is an expected capability that is nevertheless transformational. Unexpected capabilities would be a change in quality of thought that I could not bridge with any amount of effort, because it is simply beyond me.

Comment by AnthonyC on Underrated Proverbs · 2024-06-14T22:24:19.864Z · LW · GW

Thanks for the correction, looks like my whole last paragraph is wrong. Whoops!

Comment by AnthonyC on AI #68: Remarkably Reasonable Reactions · 2024-06-14T22:09:34.789Z · LW · GW

I agree with how different they are and how different the world with them will/would be. I disagree with calling them unexpected because, in fact, many of us expect them.

Comment by AnthonyC on Sticker Shortcut Fallacy — The Real Worst Argument in the World · 2024-06-14T10:46:05.624Z · LW · GW

I have found that whenever I have a disagreement about category membership, it's usually still pretty easy to define an ordinal scale between "clear member" and "clear nonmember" that gives clear agreement. The disagreements tend to be about how tightly to draw the boundary of when we accept the use of the category word. In that sense, centrality seems like a real thing that anyone not being actively deceptive can discuss and notice and come to agreement on.

Unfortunately society as a whole almost never does this except in the most contentious cases. Law often requires binary distinctions that assume words have usable strict definitions, even if that's not how words work. But in case you were curious, in Fort Wayne, Indiana, burritos and tacos are definitively a kind of sandwich and the fate of a restaurant depended on deciding where to draw the "made to order sandwich" category boundary. 

Comment by AnthonyC on AI #68: Remarkably Reasonable Reactions · 2024-06-14T10:33:23.894Z · LW · GW

If it was not going to get any unexpected capabilities why is it transformational?

I can at least see a plausible viewpoint in this one. "AI can do everything any human can do, but 1000x faster and cheaper, with arbitrarily many instances in parallel" seems plenty transformational without being unexpected.

Comment by AnthonyC on Four Futures For Cognitive Labor · 2024-06-13T20:38:49.359Z · LW · GW

This is also why Ricardian comparative advantage won't apply. If the AI side has a choice of trading with humans for something, vs. spending the same resources on building AIs to produce the same thing cheaper, then the latter option is more profitable.

Maybe it's equivalent, but I have been thinking of this as "The price at which humans have comparative advantage will become lower than subsistence, so humans refuse the job and/or die out anyway." AKA this is what happened to most horses after cars got cheap.
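
A toy version of that framing, with entirely made-up numbers: even if a human keeps the comparative advantage in some good, no buyer will pay more per unit than it would cost the AI side to produce it, which caps the human's possible income.

```python
def max_human_income(ai_cost_per_unit, human_units_per_day):
    """Upper bound on what a human can earn per day from the good they hold
    comparative advantage in: the price is capped by the AI's cost to make it."""
    return ai_cost_per_unit * human_units_per_day

# Assumed numbers: the AI can make a unit for 0.02 resource-units, a human can
# make 0.7 units/day, and a human needs 0.5 resource-units/day to subsist.
income = max_human_income(ai_cost_per_unit=0.02, human_units_per_day=0.7)
print(round(income, 3), income >= 0.5)  # 0.014 False -- below subsistence, as with horses
```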

Comment by AnthonyC on "Full Automation" is a Slippery Metric · 2024-06-13T20:31:48.618Z · LW · GW

Very much agreed, and keep in mind how much we might Jevons Paradox ourselves if we just try to count heads. I use LLMs a lot for basic background research at work, and the way I describe it to coworkers is "Imagine you have an unpaid intern, at 9am on their first day, who has no experience in your field, but is very widely read, and can give you very fast top-of-mind thoughts." This kind of research probably, on its own, improves my productivity 10-20% by freeing up time that I can spend on other things. I'm in a field where human time is expensive, and reducing the human time required probably lets enough additional clients afford our product to more than offset the labor savings from rising productivity.

Comment by AnthonyC on Probably Not a Ghost Story · 2024-06-13T20:20:40.674Z · LW · GW

If nothing else, you can check the oven clock resetting by briefly flipping a circuit breaker and seeing what happens.

I've had a handful of things happen in my 37 years approximately this weird or slightly weirder, and so have many people I've met and many family members. I think it's well within normal bounds without needing to suppose anything extraordinary or supernatural.

As far as where the line would be where I personally start questioning my mental state, I'm not sure precisely, but I can easily think of things that would be clearly beyond it. Consistency, stability, and community are all factors. Is whatever is happening recurring? Does it pass various tests I and others can concoct? Can others perceive it? How does it make me feel, other than seeming eerie?

Possibly silly example: If you've ever seen the movie What Women Want, there's a scene where Mel Gibson asks Bette Midler to think of a number between one and a million so he can prove he can hear her thoughts. Best example I can think of where media showed a quick and reliable empirical test of a supposed supernatural phenomenon. Very different result from the Cactus Person prime factorization test.

Comment by AnthonyC on Underrated Proverbs · 2024-06-13T18:40:16.547Z · LW · GW

“Don’t judge a book by its cover” or “No pain, no gain.” Others have an opposite proverb that’s similarly common and reasonable.

"If it looks like a duck and quacks like a duck" is likely an opposite for the first, and "The best things in life are free" is at least plausibly counter to the ladder.

See also: https://www.wired.com/2013/04/zizek-on-proverbs/

Some proverbs are actually autoantonyms, or at least have come to mean the opposite of the original intent. For example, "Blood is thicker than water" nowadays means that the deepest bonds we form are family-related-by-blood. But I've read that originally the saying was "The blood of the covenant is thicker than the water of the womb," aka Christians should feel deeper bonds to other Christians than to their own siblings in cases where the two conflict.

Comment by AnthonyC on Status quo bias is usually justified · 2024-06-08T20:01:07.156Z · LW · GW

It is true that wanting things not to change is often a reasonable desire. Part of the error in the bias, I think, is that we assume that the status quo stays the same by default. In reality, at least nowadays, a lot of things change by default unless we put in deliberate effort, and make additional changes, to keep them the same. There isn't actually a no-change option, but we pretend there is.

Comment by AnthonyC on What's a better term now that "AGI" is too vague? · 2024-05-28T21:11:42.241Z · LW · GW

I'm not convinced the gap between tool and agent is all that large, and the last couple of years have made me think this more, not less. Finding ways to use LLMs as agents is something I see people doing everywhere, constantly. They're still limited agents because they have limited capabilities, but the idea of creating tools and somehow ensuring there's an uncrossable gulf to agency that no company/nation/user will bridge is, or should be, really hard to take seriously.

Comment by AnthonyC on Scientific Notation Options · 2024-05-18T21:43:25.121Z · LW · GW

Makes me wonder if there's an equivalent notation for languages that use other number word intervals. Multiples of 4 would work better in Mandarin, for example.

Although I guess it's more important that it aligns with SI prefixes?
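
A sketch of the grouping idea (the function is hypothetical, just to show the difference): engineering-style notation keeps the exponent a multiple of 3, matching English thousand/million/billion and the SI prefixes, while a step of 4 would match Mandarin's 万 (10^4) and 亿 (10^8).

```python
import math

def grouped(x, step):
    """Write x with an exponent that is a multiple of `step`."""
    exp = step * math.floor(math.log10(abs(x)) / step)
    return f"{x / 10**exp:g}e{exp}"

print(grouped(123_456_789, 3))  # '123.457e6' -- about 123 million
print(grouped(123_456_789, 4))  # '1.23457e8' -- about 1.23 亿
```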

Comment by AnthonyC on AI #64: Feel the Mundane Utility · 2024-05-17T02:52:47.859Z · LW · GW

The problem is not that the answer of 12 cents is wrong, or even that the answer is orders of magnitude wrong.

Ah yes, strong "Verizon can't do math" vibes here.

Comment by AnthonyC on Teaching CS During Take-Off · 2024-05-15T00:38:54.712Z · LW · GW

Is that a good answer to a 17 year old? Is there a good answer to this?

That depends on the student. It definitely would have been a wonderful answer for me at 17. I will also say, well done, because I can think of at most 2 out of all my teachers, K-12, who might have been capable of giving that good of an answer to pretty much any deep question about any topic this challenging (and of those two, only one who might have put in the effort to actually do it).

Comment by AnthonyC on Four Unrelated Is Over · 2024-05-10T01:04:36.601Z · LW · GW

Nicely done, it really is an absurd rule. When I graduated and moved in with my then significant other and two roommates in Medford, back in 2009, this is why I had to be kept off the lease. (As a side effect, this made me ineligible for a resident parking permit for the first few months, but the town had no problem giving me tickets for using a non-resident parking permit as a resident).

Comment by AnthonyC on Dyslucksia · 2024-05-10T00:58:33.949Z · LW · GW

This was really interesting! You probably already know this, but reading out loud was the norm, and silent reading unusual, for most of history: https://en.wikipedia.org/wiki/Silent_reading That didn't really start to change until well after the invention of the printing press.

For most of my life, even now once in a while, I would subvocalize my own inner monologue. Definitely had to learn to suppress that in social situations.

Comment by AnthonyC on GDP per capita in 2050 · 2024-05-06T22:10:57.678Z · LW · GW

I think there's a good chance the degree to which the world of 2050 looks different to the average person might have very little to do with GDP.

On the one hand, a large chunk of the GDP growth I expect will come from changes in how we produce, distribute, and use energy and chemicals and water and food and steel and concrete etc. But for most people what will mostly feel the same is that their home is warm in winter and cool in summer, and they can get from place to place reasonably easily, and they have machines that do their basic household chores.

On the other hand, something like self-driving cars, or augmented or virtual reality, or 3D printed organs, could be hugely transformative for society without necessarily impacting GDP growth much at all.

Comment by AnthonyC on Examples of Highly Counterfactual Discoveries? · 2024-04-29T14:35:06.169Z · LW · GW

I think it's worth noting that small delays in discovering new things would, in aggregate, be very impactful. On average, how far apart are the duplicate discoveries? If we pushed all the important discoveries back a couple of years by eliminating whoever was in fact historically first, then the result is a world that is perpetually several years behind our own in everything. This world is plausibly 5-10% poorer for centuries, maybe more if a few key hard steps have longer delays, or if the most critical delays happened a long time ago and were measured in decades or centuries instead.
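
A rough back-of-the-envelope for that 5-10% figure (the growth rate here is an assumption, purely illustrative): a world perpetually N years behind ours is poorer by roughly the growth it hasn't had yet.

```python
growth = 0.025  # assumed long-run per-capita growth rate
for lag_years in (2, 3, 4):
    shortfall = 1 - (1 + growth) ** -lag_years
    print(f"{lag_years} years behind -> ~{shortfall:.1%} poorer")
# ~4.8%, ~7.1%, ~9.4% -- roughly the 5-10% range above
```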

Comment by AnthonyC on Thoughts on seed oil · 2024-04-21T23:43:59.648Z · LW · GW

This was a great post, really appreciate the summary and analysis! And yeah, no one should have high certainty about nutritional questions this complicated.

For myself, I mostly eliminated these oils from my diet about 4 years ago, along with reducing industrially-processed food in general. Not 100%, I'm not a purist, but other than some occasional sunflower oil none of these are in foods I keep at home, and I only eat anywhere else 0-2 times per week. I did lose a little weight in the beginning, maybe 10 lbs, but then stabilized. But what I have mostly noticed is that when I eat lots of fried food, depending on the oil used (which to some degree you can taste), I'm either fine, or feel exhausted/congested/thirsty for basically a full day. I think you may have a point about trans fats from reusing oil, since anecdotally this seems even worse for leftovers.

Of course, another thing I did at the same time is switch to grass-fed butter  and pasture-raised eggs. Organic meats and milk, not always pasture raised. Conventional cheeses. I've read things claiming the fatty acid composition is significantly different for these foods depending on what the animals eat, in terms of omega 3/6 ratios, saturated/unsaturated fat ratios, and fatty acid chain lengths. I've never looked too deeply into checking those claims, because for me the fact that they taste better is reason enough. As far as I can tell, it wasn't until WWII or later that we really started feeding cows corn and raising chickens in dense indoor cages with feed? Yet another variable/potential confounder for studies.

Comment by AnthonyC on How to know whether you are an idealist or a physicalist/materialist · 2024-04-20T15:08:13.070Z · LW · GW

In what sense are these two viewpoints in tension?

This seems more a question of "observable by whom" vs "observable in principle."

Comment by AnthonyC on Transportation as a Constraint · 2024-04-19T00:08:42.131Z · LW · GW

This works too, yeah.

Comment by AnthonyC on Transportation as a Constraint · 2024-04-18T23:40:47.085Z · LW · GW

Yeah, I was thinking it's hard to beat dried salted meat, hard cheese, and oil or butter. 

You also don't have to assume that all the food travels the whole way. If (hypothetically) you want to send 1 soldier's worth of food and water 7 days away, and each person can only carry 3 days' worth at a time, then you can try to have 3 days' worth deposited 6 days out, and then have a porter make a 2-day round trip carrying 1 day's worth to leave for that soldier to pick up on day 7. Then someone needs to have carried that 3 days' worth to 6 days out, which you can do by having more porters make 1-day round trips from 5 days out, etc. Basically, you need exponentially more people and supplies the farther out your supply chains stretch. I think I first read about this in the context of the Incas, because potatoes are less calorie dense per pound than dried grains, so it's an even bigger problem? Being able to get water along the way, and ideally to pillage the enemy's supplies, is also a very big deal.
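
A minimal sketch of why the requirement blows up exponentially, under even cruder assumptions than the relay scheme above (every porter carries 3 days' worth, eats exactly 1 day's worth per day of walking out and back, and nothing can be foraged en route):

```python
def food_leaving_base(deliver_days, distance_days, carry=3):
    """How much food must leave the base camp so that `deliver_days` worth
    arrives `distance_days` of travel away, if food is relayed forward one
    day at a time and each porter eats 1 day's worth per day walked."""
    food = deliver_days
    for _ in range(distance_days):
        # Advancing food by one day costs 2 days of eating (out and back)
        # for every `carry - 2` days of food actually moved forward.
        food *= carry / (carry - 2)
    return food

# Delivering 1 day of rations 7 days out: 3**7 = 2187 days' worth must leave base.
print(food_leaving_base(1, 7))
```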

Comment by AnthonyC on Transportation as a Constraint · 2024-04-18T23:30:48.893Z · LW · GW

I think at that point the limiting factors become the logistics of food, waste, water, and waste heat. In Age of Em Robin Hanson spends time talking about fractal plumbing systems and the like, for this kind of reason.

Comment by AnthonyC on Cooperation is optimal, with weaker agents too  -  tldr · 2024-04-18T23:21:36.653Z · LW · GW

All good points, many I agree with. If nothing else, I think that humanity should pre-commit to following this strategy whenever we find ourselves in the strong position. It's the right choice ethically, and may also be protective against some potentially hostile outside forces.

However, I don't think the acausal trade case is strong enough that I would expect all sufficiently powerful civilizations to have adopted it. If I imagine two powerful civilizations with roughly identical starting points, one of which expanded while being willing to pay costs to accommodate weaker allies while the other did not and instead seized whatever they could, then it is not clear to me who wins when they meet. If I imagine a process by which a civilization becomes strong enough to travel the stars and destroy humanity, it's not clear to me that this requires it to have the kinds of minds that will deeply accept this reasoning. 

It might even be that the Fermi paradox makes the case stronger - if sapient life is rare, then the costs paid by the strong to cooperate are low, and it's easier to hold to such a strategy/ideal.

Comment by AnthonyC on Cooperation is optimal, with weaker agents too  -  tldr · 2024-04-18T19:45:32.150Z · LW · GW

This seems to completely ignore transaction costs for forming and maintaining an alliance? Differences in the costs to create and sustain different types of alliance-members? Differences in the potential to replace some types of alliance-members with other or new types? There can be entities for whom forming an alliance that contains humanity will cause them to incur greater costs than humanity's membership can ever repay.

Also, I agree that in a wide range of contexts this strategy is great for the weak and for the only-locally-strong. But if any entity knows it is strong in a universal or cosmic sense, this would no longer apply to it. Plus everyone less strong would also know this, and anyone who truly believed they were this strong would act as though this no longer applied to them either. I feel like there's a problem here akin to the unexpected hanging paradox that I'm not sure how to resolve except by denying the validity of the argument.

Comment by AnthonyC on Monthly Roundup #17: April 2024 · 2024-04-15T13:51:20.799Z · LW · GW

On screen space:

When, if ever, should I expect actually-useful smart glasses or other tech to give me access to arbitrarily-large high-res virtual displays without needing to take up a lot of physical space, or prevent me from sitting somewhere other than a single, fixed desk?

On both the Three Body Problem and economic history: It really is remarkably difficult to get people to see that 1) Humans are horrible, and used to be more horrible, 2) Everything is broken, and used to be much more broken, and 3) Actual humans doing actual physical things have made everything much better on net, and in the long run "on net" is usually what matters.

Comment by AnthonyC on Fertility Roundup #3 · 2024-04-02T21:32:23.026Z · LW · GW

On the Paul Ehrlich organization: Even if someone agrees with these ideas, do they not worry about how this makes kids feel about themselves? Like, I can just see it: "But I'm the youngest of 3! My parents are horrible and I'm the worst of all!"

And this, like shame-based cultural norm enforcement, disproportionately punishes those who care enough to want to be pro-social and conscientious, with extra suffering.

Comment by AnthonyC on Modern Transformers are AGI, and Human-Level · 2024-03-28T17:04:29.066Z · LW · GW

Got it, makes sense, agreed.

Comment by AnthonyC on Modern Transformers are AGI, and Human-Level · 2024-03-28T12:48:57.072Z · LW · GW

I agree that filling a context window with worked sudoku examples wouldn't help for solving hidouku. But, there is a common element here to the games. Both look like math, but aren't about numbers except that there's an ordered sequence. The sequence of items could just as easily be an alphabetically ordered set of words. Both are much more about geometry, or topology, or graph theory, for how a set of points is connected. I would not be surprised to learn that there is a set of tokens, containing no examples of either game, combined with a checker (like your link has) that points out when a mistake has been made, that enables solving a wide range of similar games.
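
As a sketch of what "not about the numbers" means here (a toy backtracking solver, not a claim about what an LLM does internally): for sudoku-style games, the entire structure lives in which groups of cells must hold distinct symbols, and the symbols themselves can be digits, letters, or anything else. (This only covers the sudoku side; hidouku's constraints are different, which is part of the point that examples of one don't transfer to the other.)

```python
def solve(cells, groups, symbols, assignment=None):
    """Backtracking solver for 'all symbols distinct within each group'
    puzzles. Sudoku is just one choice of `cells` and `groups`."""
    assignment = dict(assignment or {})
    unassigned = [c for c in cells if c not in assignment]
    if not unassigned:
        return assignment
    cell = unassigned[0]
    for sym in symbols:
        # A symbol is allowed if no group containing this cell already uses it.
        if all(assignment.get(other) != sym
               for g in groups if cell in g for other in g):
            result = solve(cells, groups, symbols, {**assignment, cell: sym})
            if result is not None:
                return result
    return None

# A 4x4 "sudoku" written purely as groups of cells; swapping "ABCD" for
# "1234" changes nothing about the search.
cells = [(r, c) for r in range(4) for c in range(4)]
rows = [{(r, c) for c in range(4)} for r in range(4)]
cols = [{(r, c) for r in range(4)} for c in range(4)]
boxes = [{(r + dr, c + dc) for dr in range(2) for dc in range(2)}
         for r in (0, 2) for c in (0, 2)]
print(solve(cells, rows + cols + boxes, "ABCD", {(0, 0): "A"}))
```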

I think one of the things humans do better than current LLMs is that, as we learn a new task, we vary what counts as a token and how we nest tokens. How do we chunk things? In sudoku, each box is a chunk, each row and column is a chunk, the board is a chunk, "sudoku" is a chunk, "checking an answer" is a chunk, "playing a game" is a chunk, and there are probably lots of others I'm ignoring. I don't think just prompting an LLM with the full text of "How to Solve It" in its context window would get us to a solution, but at some level I do think it's possible to make explicit, in words and diagrams, what it is humans do to solve things, in a way legible to it. I think it largely resembles repeatedly telescoping in and out, to lower and higher abstractions, applying different concepts and contexts, locally sanity checking ourselves, correcting locally obvious insanity, and continuing until we hit some sort of reflective consistency. Different humans have different limits on what contexts they can successfully do this in.