It's impossible to prove that an arbitrary program, which someone else gave you, is correct. That's halting-problem equivalent, or Rice's theorem, etc.
Yes, we can prove various properties of programs we carefully write to be provable, but the context here is that a black-box executable Crowdstrike submits to Microsoft cannot be proven reliable by Microsoft.
There are definitely improvements we can make. Counting just the ones made in some other (bits of) operating systems, we could:
- Rewrite in a memory-safe language like Rust
- Move more stuff to userspace. Drivers for e.g. USB devices can and should be written in userspace, using something like libusb (see the sketch after this list). This goes for every device that doesn't need performance-critical code or to manage device-side DMA access, which still leaves a bunch of things, but it's a start.
- Sandbox more kinds of drivers in a recoverable way, so they can do the things they need to efficiently access hardware, but are still prevented from accessing the rest of the kernel and userspace, and can 'crash safe'. For example, Windows can recover from crashes in graphics drivers specifically - which is an amazing accomplishment! Linux eBPF can't access stuff it shouldn't.
- Expose more kernel features via APIs so people don't have to write drivers to do things that aren't literally driving a piece of hardware. Then even if Crowdstrike has super-duper-permissions, a crash in Crowdstrike itself doesn't bring down the rest of the system; it would have to do that intentionally.
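As a small illustration of the userspace-drivers point in the list above (my own sketch, nothing to do with Crowdstrike): with pyusb on top of libusb, a simple device can be driven from an ordinary process, and a bug there crashes only that process, not the kernel. The vendor/product IDs and endpoint address below are placeholders.

```python
# Minimal sketch of a userspace USB "driver" using pyusb (libusb backend).
# The vendor/product IDs and endpoint address are placeholders, not a real device.
import usb.core

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # hypothetical device
if dev is None:
    raise ValueError("device not found")

dev.set_configuration()                   # use the device's default configuration
data = dev.read(0x81, 64, timeout=1000)   # read up to 64 bytes from IN endpoint 0x81
print(bytes(data))
# If this code crashes, only this process dies; the kernel and the rest of the
# system keep running -- unlike a buggy kernel-mode driver.
```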
Of course any such changes both cost a lot and take years or decades to become ubiquitous. Windows in particular has an incredible backwards compatibility story, which in practice means backwards compatibility with every past bug they ever had. But this is a really valuable feature for many users who have old apps and, yes, drivers that rely on those bugs!
Addendum 2: this particular quoted comment is very wrong, and I expect this is indicative of the quality of the quoted discussion, i.e. these people do not know what they are talking about.
Luke Parrish: Microsoft designed their OS to run driver files without even a checksum and you say they aren’t responsible? They literally tried to execute a string of zeroes!
Luke Parrish: CrowdStrike is absolutely to blame, but so is Microsoft. Microsoft’s software, Windows, is failing to do extremely basic basic checks on driver files before trying to load them and give them full root access to see and do everything on your computer.
The reports I have seen (of attempted reverse-engineering of the Crowdstrike driver's segfault) say it did not attempt to execute the zeroes from the file as code, and the crash was unrelated, likely while trying to parse the file. Context: the original workaround for the problem was to delete a file which contains only zeroes (at least on some machines, reports are inconsistent), but there's no direct reason to think the driver is trying to execute this file as code.
And: Windows does not run drivers "without a checksum"! Drivers have to be signed by Microsoft, and drivers with early-loading permissions have to be super-duper-signed in a way you probably can't get just by paying them a few thousand dollars.
But it's impossible to truly review or test a compiled binary for which you have no source code or even debug symbols, and which is deliberately obfuscated in many ways (as people have been reporting when they looked at this driver crash) because it's trying to defend itself against reverse-engineers designing attacks. And of course it's impossible to formally prove that a program is correct. And of course it's written in a memory-unsafe language, i.e. C++, because every single OS kernel and its drivers are written in such a language.
Furthermore, the Crowdstrike product relies on very quickly pushing out updates to (everyone else's) production to counter new threats / vulnerabilities being exploited. Microsoft can't test anything that quickly. Whether Crowdstrike can test anything that quickly, and whether you should allow live updates to be pushed to your production system, is a different question.
Anyway, who's supposed to pay Microsoft for extensive testing of Crowdstrike's driver? They're paid to sign off on the fact that Crowdstrike are who they say they are, and at best that they're not a deliberately malicious actor (as far as we know they aren't). Third party Windows drivers have bugs and security vulnerabilities all the time, just like most software.
Finally, Crowdstrike to an extent competes with Microsoft's own security products (i.e. Microsoft Defender and whatever the relevant enterprise-product branding is); we can't expect Microsoft to invest too much in finding bugs in Crowdstrike!
Addendum: Crowdstrike also has MacOS and Linux products, and those are a useful comparison in the matter of whether we should be blaming Microsoft.
On MacOS they don't have a kernel module (called a kext on MacOS), for two reasons: first, kexts are now disabled by default (I think you have to go into recovery mode to turn them on?), and second, the kernel provides APIs to accomplish most things without having to write a kext. So Crowdstrike doesn't need to (hypothetically) guard against malicious kexts, because those are not nearly as much of a threat as malicious or plain buggy kernel drivers are on Windows.
One reason why this works well is that MacOS only supports a small first-party set of hardware, so they don't need to allow a bunch of third party vendor drivers like Windows does. Microsoft can't forbid third party kernel drivers, there are probably tens of thousands of legitimate ones that can't be replaced easily or at all, even if someone was available to port old code to hypothetical new userland APIs. (Although Microsoft could provide much better userland APIs for new code; e.g. WinUSB seems to be very limited.)
(Note: I am not a Mac user and this part is not based on personal expertise.)
On Linux, Crowdstrike uses eBPF, which is a (relatively novel) domain-specific language for writing code that will execute inside the Linux kernel at runtime. eBPF is sandboxed in the kernel, and while it can (I think) crash it, it cannot e.g. access arbitrary memory. And so you can't use eBPF to guard against malicious Linux kernel modules.
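For a concrete feel of the eBPF approach, here is a minimal sketch (assuming the bcc toolkit and root privileges; this is not anything Crowdstrike actually ships): a tiny program is verified and then run inside the kernel, here just logging execve() calls, which is the kind of visibility an endpoint security tool wants, while the verifier limits what it can touch.

```python
# Minimal eBPF sketch using the bcc toolkit (requires root and bcc installed).
# Logs every execve() call -- endpoint-security-style visibility -- while the
# in-kernel verifier constrains what the loaded program is allowed to do.
from bcc import BPF

prog = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
b.trace_print()  # stream trace output until interrupted (Ctrl-C)
```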
This is indeed a superior approach, but it's hard to blame Microsoft for not having an innovation in place that nobody had ten years ago and that hasn't exactly replaced most preexisting drivers even on Linux, and removing support for custom drivers entirely on Windows would probably stop it from working on most of the hardware out there.
Then again, most Linux systems aren't running a hardened configuration, and getting userspace root access is game over anyway - the attacker can just install a new kernel for the next boot, if nothing else. To a first approximation, Linux systems are secure by configuration, not by architecture.
ETA: I'm seeing posts [0] that say Crowdstrike updates broke Linux installations, multiple times over the years. I don't know how they did that specifically, but it doesn't require a kernel module to make a machine crash or become unbootable... But I have not checked those particular reports.
Either way, no-one is up in arms saying to blame Linux for having a vulnerable kernel design!
[0] https://news.ycombinator.com/item?id=41005936, and others.
Did Microsoft massively screw up by not guarding against this particular failure mode? Oh, absolutely, everyone agrees on that.
I'm sorry, this is wrong, and that everyone thinks so is also wrong - some people got this right.
Normal Windows kernel drivers are sandboxed to some extent. If a driver segfaults, it will be disabled on the next boot and the user informed; if that fails for some reason, you can tell the computer to boot into 'safe mode', and if that fails, there is recovery mode. None of these options require the manual, tedious, error-prone recovery procedure that the Crowdstrike bug does.
This is because the Crowdstrike driver wants to protect you from malware in other drivers (i.e. malicious kernel modules). (ETA: And it wants to inspect and potentially block syscalls from user-space applications too.) So it runs in early-loading mode, before any other drivers, and it has special elevated privileges normal drivers do not get, in order to be able to inspect other drivers loaded later.
(ETA: I've seen claims that Crowdstrike also adds code to UEFI and maybe elsewhere to prevent anyone from disabling it and to reenable it; which is another reason the normal Windows failsafes for crashing drivers wouldn't work. UEFI is, roughly, the system code running before your OS and bootloader.)
The fact that you cannot boot Windows if Crowdstrike does not approve it is not a bug in either Windows or Crowdstrike. It's by design, and it's an advertised feature! It's supposed to make sure your boot process is safe, and if that fails, you're not supposed to be able to boot!
Crowdstrike are to be blamed for releasing this bug at all - without testing, without a rolling release, without a better fallback/recovery procedure, etc. Crowdstrike users who run mission-critical stuff like emergency response are also to blame for using Crowdstrike; as many people have pointed out, for the individuals making that choice it's usually pure box-ticking security theater. And Crowdstrike is absolutely not at the quality level of the core Windows kernel, which is not a big surprise to experts.
But Crowdstrike - the product, as advertised - cannot be built without the ability to render your computer unbootable, at least not without a radically different (micro)kernel design that no mainstream OS kernel in the world actually has. (A more reasonable approach these days would be to use a hypervisor, which can be much smaller and simpler and so easier to vet for bugs, as the recovery mechanism.)
About the impossibility result, if I understand correctly, that paper says two things (I'm simplifying and eliding a great deal):
- You can take a recognizable, possibly watermarked output of one LLM, use a different LLM to paraphrase it, and not be able to detect the second LLM's output as coming from (transforming) the first LLM.
- In the limit, any classifier that tries to detect LLM output can be beaten by an LLM that is sufficiently good at generating human-like output. There's evidence that LLMs can soon become that good. And since emulating human output is an LLM's main job, capabilities researchers and model developers will make them that good.
The second point is true but not directly relevant: OpenAI et al are committing not to make models whose output is indistinguishable from humans.
The first point is true, BUT the companies have not committed themselves to defeating it. Their own models' output is clearly watermarked, and they will provide reliable tools to identify those watermarks. If someone else then provides a model that is good enough at paraphrasing to remove that watermark, that is that someone else's fault, and they are effectively not abiding by this industry agreement.
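To make the watermarking point a bit more concrete, here is a toy sketch of the 'green list' style of statistical watermark detection (my guess at the general idea; the hashing scheme, split ratio, and threshold are made up, and no provider's actual scheme is implied). The generator biases each token toward a pseudorandom half of the vocabulary seeded by the previous token; the detector checks whether far more than half of the observed tokens landed in those halves. Paraphrasing with a different model rescrambles the token sequence, which is exactly what destroys this statistic.

```python
# Toy detector for a "green list" statistical watermark (illustrative only;
# the hash, split ratio, and decision threshold are made up for this sketch).
import hashlib
import math

def is_green(prev_token: int, token: int, vocab_size: int = 50000, ratio: float = 0.5) -> bool:
    """A token is 'green' if it falls in the pseudorandom slice of the
    vocabulary selected by hashing the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    return (token + seed) % vocab_size < ratio * vocab_size

def watermark_z_score(token_ids: list[int], ratio: float = 0.5) -> float:
    """How many standard deviations the green-token count sits above chance.
    Unwatermarked text scores near 0; watermarked text scores high."""
    n = len(token_ids) - 1
    if n < 1:
        return 0.0
    hits = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    return (hits - ratio * n) / math.sqrt(n * ratio * (1 - ratio))

# Example: treating a z-score above ~4 as strong evidence of the watermark.
```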
If open source / widely available non-API-gated models become good enough at this to render the watermarks useless, then the commitment scheme will have failed. This is not surprising; if ungated models become good enough at anything contravening this scheme, it will have failed.
There are tacit but very necessary assumptions in this approach and it will fail if any of them break:
- The ungated models released so far (eg llama) don't contain forbidden capabilities, including output and/or paraphrasing that's indistinguishable from human, but also of course notkillingeveryone, and won't be improved to include them by 'open source' tinkering that doesn't come from large industry players
- No-one worldwide will release new more capable models, or sell ungated access to them, disobeying this industry agreement; and if they do, it will be enforced (somehow)
- The inevitable use of more capable models, that would be illegal if released publicly, by some governments, militaries, etc. will not result in the public release of such capabilities; and also, their inevitable use of e.g. indistinguishable-from-human output will not cause such (public) problems that this commitment not to let private actors do it will become meaningless
OpenAI post with more details here.
Charles P. Steinmetz saw a two-hour working day on the horizon—he was the scientist who made giant power possible
What is giant power? I can't figure this out.
So we can imagine AI occupying the most "cushy" subset of former human territory
We can definitely imagine it - this is a salience argument - but why is it at all likely? Also, this argument is subject to reference class tennis: humans have colonized much more and more diverse territory than other apes, or even all other primates.
Once AI can flourish without ongoing human support (building and running machines, generating electricity, reacting to novel environmental challenges), what would plausibly limit AI to human territory, let alone "cushy" human territory? Computers and robots can survive in any environment humans can, and in some where we at present can't.
Also: the main determinant of human territory is inter-human social dynamics. We are far from colonizing everywhere our technology allows, or (relatedly) breeding to the greatest number we can sustain. We don't know what the main determinant of AI expansion will be; we don't even know yet how many different and/or separate AI entities there are likely to be, and how they will cooperate, trade or conflict with each other.
Nuclear power has the highest chance of The People suddenly demanding it be turned off twenty years later for no good reason. Baseload shouldn't be hostage to popular whim.
Thanks for pointing this out!
A few corollaries and alternative conclusions to the same premises:
- There are two distinct interesting things here: a magic cross-domain property that can be learned, and an inner architecture that can learn it.
- There may be several small efficient architectures. The ones in human brains may not be like the ones in language models. We have plausibly found one efficient architecture; this is not much evidence about unrelated implementations.
- Since the learning is transferable to other domains, it's not language specific. Large language models are just where we happened to first build good enough models. You quote discussion of the special properties of natural language statistics but, by assumption, there are similar statistical properties in other domains. The more a property is specific to language, or necessary because of the special properties of language, the less it's likely to be a universal property that transfers to other domains.
Thanks! This, together with gjm's comment, is very informative.
How is the base or fundamental frequency chosen? What is special about the standard ones?
the sinking of the Muscovy
Is this some complicated socio-political ploy denying the name Moskva / Moscow and going back to the medieval state of Muscovy?
I'm a moral anti-realist; it seems to me to be a direct inescapable consequence of materialism.
I tried looking at definitions of moral relativism, and it seems more confused than moral realism vs. anti-realism. (To be sure there are even more confused stances out there, like error theory...)
Should I take it that Peterson and Harris are both moral realists and interpret their words in that light? Note that this wouldn't be reasoning about what they're saying, for me, it would be literally interpreting their words, because people are rarely precise, and moral realists and anti-realists often use the same words to mean different things. (In part because they're confused and are arguing over the "true" meaning of words.)
So, if they're moral realists, then "not throwing away the concept of good" means not throwing away moral realism; I think I understand what that means in this context.
Also known as: the categories were made for man.
When Peterson argues religion is a useful cultural memeplex, he is presumably arguing for all of (Western monotheistic) religion. This includes a great variety of beliefs, rituals, practices over space and time - I don't think any of these have really stayed constant across the major branches of Judaism, Christianity and Islam over the last two thousand years. If we discard all these incidental, mutable characteristics, what is left as "religion"?
One possible answer (I have no idea if Peterson would agree): the structure of having shared community beliefs and rituals remains, but not the specific beliefs, or the specific (claimed) reasons for holding them; the distinctions of sacred vs. profane, of priests vs. laymen, and of religious law vs. freedom of action in other areas remain, but no specifics of what is sacred or what priests do; the idea of a single, omniscient, omnipotent God remains, but not that God's attributes, other than being male; that God judges and rewards or punishes people, but no particulars of what is punished or rewarded, or what punishments or rewards might be.
ETA: it occurs to me that marriage-as-a-sacrament, patriarchy, and autocracy, have all been stable common features of these religions. I'm not sure if they should count as features of the religion, or of a bigger cultural package which has conserved these and other features.
Atheists reject the second part of the package, the one that's about a God. But they (like everyone) still have the first part: shared beliefs and rituals and heresies, shared morals and ethics, sources of authority, etc. (As an example, people sometimes say that "Science" often functions as a religion for non-scientists; I think that's what's meant; Science-the-religion has priests and rituals and dogmas and is entangled with law and government, but it has no God and doesn't really judge people.)
But that's just what I generated when faced with this prompt. What does Peterson think is the common basis of "Western religion over the last two thousand years" that functions as a memeplex and ignores the incidentals that accrue like specific religious beliefs?
They are both pro free speech and pro good where "good" is what a reasonable person would think of as "good".
I have trouble parsing that definition. You're defining "good" by pointing at "reasonable". But people who disagree on what is good, will not think each other reasonable.
I have no idea what actual object-level concept of "good" you meant. Can you please clarify?
For example, you go on to say:
They both agree that religion has value.
I'm not sure whether religion has (significant, positive) value. Does that make me unreasonable?
Amazon using an (unknown secret) algorithm to hire or fire Flex drivers is not an instance of "AI", not even in the buzzword sense of AI = ML. For all we know it's doing something trivially simple, like combining a few measured properties (how often they're on time, etc.) with a few manually assigned weights and thresholds. Even if it's using ML, it's going to be something much more like a bog-standard Random Forest model trained on 100k rows with no tuning than a scary powerful language model with a runaway growth trend.
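Purely to illustrate what 'trivially simple' could look like here (hypothetical property names, weights and threshold; obviously not Amazon's actual system):

```python
# Hypothetical, trivially simple driver-scoring rule: a weighted sum of a few
# measured properties plus a cutoff. Nothing about this deserves the label "AI".
def flex_driver_score(on_time_rate: float, completion_rate: float,
                      complaints_per_100_deliveries: float) -> float:
    return (0.6 * on_time_rate
            + 0.4 * completion_rate
            - 0.05 * complaints_per_100_deliveries)  # made-up weights

def should_deactivate(score: float, threshold: float = 0.75) -> bool:
    return score < threshold  # made-up threshold
```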
Even if some laws are passed about this, they'd be expandable in the directions of "Bezos is literally an evil overlord [which is a quote from the linked article], our readers/voters love to hate him, we should hurt him some more"; and "we already have laws establishing protected characteristics in hiring/firing/housing/etc; if black-box ML models can't prove they're not violating the law, then they're not allowed". The latter has a very narrow domain of applicability so would not affect AI risk.
What possible law or regulation, now or in the future, would differentially impede dangerous AI (on the research path leading to AGI) and all other software, or even all other ML? A law that equally impedes all ML would never get enough support to pass; a law that could be passed would have to use some narrow discriminating wording that programmers could work around most of the time, and so accomplish very little.
Epistemic status: wild guessing:
- If the US has submarine locators (or even a theory or a work-in-progress), it has to keep them secret. The DoD or Navy might not want to reveal them to any Representatives. This would prevent them from explaining to those Representatives why submarine budgets should be lowered in favor of something else.
- A submarine locator doesn't stop submarines by itself; you still presumably need to bring ships and/or planes to where the submarines are. If you do this ahead of time and just keep following the enemy subs around, they are likely to notice, and you will lose strategic surprise. The US has a lot of fleet elements and air bases around the world (and allies), so it plausibly has an advantage over its rivals in terms of being able to take out widely dispersed enemy submarines all at once.
- Even if others also secretly have submarine locators, there may be an additional anti-sub-locator technology or strategy that the US has developed and hopes its rivals have not, which would keep US submarines relevant. Building a sub-locator might be necessary but not sufficient to building an anti-sub-locator.
Now write the scene where Draco attempts to convince his father to accept Quirrel points in repayment of the debt.
"You see, Father, Professor Quirrel has promised to grant any school-related wish within his power to whoever has the most Quirrel points. If Harry gives his points to me, I will have the most points by far. Then I can get Quirrel to teach students that blood purism is correct, or that it would be rational to follow the Dark Lord if he returns, or to make me the undisputed leader of House Slytherin. That is worth far more than six thousand galleons!"
Lord Malfoy looked unconvinced. "If Quirrel is as smart as you say, why would he promise to grant such an open-ended wish? He warned you that Quirrel points were worth only one-tenth of House points, a popularity contest designed to distract fools from true politics and dominated by Quidditch seekers. For every plot you buy from Quirrel with your points, he will hatch a greater counter-plot to achieve what he himself truly wants. You must learn, my son, not to rely overmuch on those greater than yourself to serve as your willing agents; the power loaned by them is never free, and it is not truly yours in the end."
I don't see an advantage
A potential advantage of an inactivated-virus vaccine is that it can raise antibodies against all viral proteins and not just a subunit of the spike protein, which would make it harder for future strains to evade the immunity. I think this is also the model implicitly behind this claim that natural immunity (from being infected with the real virus) is stronger than the immunity gained from subunit (e.g. mRNA) vaccines. (I make no claim that that study is reliable, and just on priors it probably should be ignored.)
direct sources are more and more available to the public... But simultaneously get less and less trustworthy.
The former helps cause the latter. Sources that aren't available to the public, or are not widely read by the public for whatever reason, don't face the pressure to propagandize - either to influence the public, and/or to be seen as ideologically correct by the public.
Of course influencing the public is only one of several drives to distort or ignore the truth, and less public fora are not automatically trustworthy.
Suppose that TV experience does influence dreams - or the memories or self-reporting of dreams. Why would it affect specifically and only color?
Should we expect people who watch old TV to dream in low resolution and non-surround sound? Do people have poor reception and visual static in their black and white dreams? Would people who grew up with mostly over-the-border transmissions dream in foreign languages, or have their dreams subtitled or overdubbed? Would people who grew up with VCRs have pause and rewind controls in their dreams?
Some of these effects are plausible. Anecdotally, I watched a lot of anime, and I had some dreams in pseudo-Japanese (I don't speak Japanese). I don't remember ever dreaming subtitles though.
Does either explanation of the black-and-white effect make predictions about which other effects should be present, and why?
Epistemic status: anecdote.
Most of the dreams I've ever had (and remembered in the morning) were not about any kind of received story (media, told to me, etc). They were all modified versions of my own experiences, like school, army, or work, sometimes fantastically distorted, but recognizably about my experiences. A minority of dreams has been about stories (eg a book I read), usually from a first person point of view (eg. a self insert into the book).
So for me, dreams are stories about myself. And I wonder: if these people had their dreams influenced by the form of media, were they influenced by the content as well? Or did they dream about their own lives in black and white? The latter would be quite odd.
He's saying that it's extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it's similarly unobvious whether we should worry about the suffering of edge detectors.
Being concerned implies 1) something has experiences 2) they can be negative / disliked in a meaningful way 3) we morally care about that.
I'd like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to understand? Is there a principled or at least reasonable and consistent definition? Is there a reason to privilege edge detectors made from neurons over, say, a simple edge detector program made from code? Could other (complex, input-processing) tissues and organs have experience, or only those made from neurons?
Could the brain be logically divided in N different ways, such that we'd worry about the experience of a certain sub-network using division A, and not worry about a different sub-network using division B, but actually they're composed mostly of the same neurons, we just model them differently?
We talk about edge detectors mostly because they're simple and "stand-alone" enough that we located and modeled them in the brain. There are many more complex and less isolated parts of the brain we haven't isolated and modeled well yet; should that make us more or less concerned that they (or parts of them) have relevant experiences?
Finally, if very high-level parts of my brain ("I") have a good experience, while a theory leads us to think that lots of edge detectors inside my brain are having bad experiences ("I can't decide if that's an edge or not, help!"), how might a moral theory look that would resolve these or trade them off against each other?
This is a question similar to "am I a butterfly dreaming that I am a man?". Both statements are incompatible with any other empirical or logical belief, or with making any predictions about future experiences. Therefore, the questions and belief-propositions are in some sense meaningless. (I'm curious whether this is a theorem in some formalized belief structure.)
For example, there's an argument about B-brains that goes: simple fluctuations are vastly more likely than complex ones; therefore almost all B-brains that fluctuate into existence will exist for only a brief moment and will then chaotically dissolve in a kind of time-reverse of their fluctuating into existence.
Should a B-brain expect a chaotic dissolution in its near future? No, because its very concepts of physics and thermodynamics that cause it to make such predictions are themselves the results of random fluctuations. It remembers reading arguments and seeing evidence for Boltzmann's theorem of entropy, but those memories are false, the result of random fluctuations.
So a B-brain shouldn't expect anything at all (conditioning on its own subjective probability of being a B-brain). That means a belief in being a B-brain isn't something that can be tied to other beliefs and questioned.
Title typo: cvoid.
Let's take the US government as a metaphor. Instead of saying it's composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary
Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big powerful modules that aggregate sub-modules? Or is it merely that others posit merely a few large modules, whereas Kurzban thinks we must model both small and large agents at once?
We don't ask "what is it like to be an edge detector?", because there was no evolutionary pressure to enable us to answer that question. It could be most human experience is as mysterious to our conscious minds as bat experiences.
If "human experience" includes the experience of an edge detector, I have to ask for a definition of "human experience". Is he saying an edge detector is conscious or sentient? What does it mean to talk of the experience of such a relatively small and simple part of the brain? Why should we care what its experience is like, however we define it?
Finding the percentage of "immigrants" is misleading, since it's immigrants from Mexico and Central America who are politically controversial, not generic "immigrants" averaged over all sources.
I'm no expert on American immigration issues, but I presume this is because most immigrants come in through the (huge) south land border, and are much harder for the government to control than those coming in by air or sea.
However, I expect immigrants from any other country outside the Americas would be just as politically controversial if large numbers of them started arriving, and an open borders policy with Europe or Asia or Africa would be just as unacceptable to most Americans.
Are Americans much more accepting of immigrants from outside Central and South America?
immigrants are barely different from natives in their political views, and they adopt a lot of the cultural values of their destination country.
The US is famous for being culturally and politically polarized. What does it even mean for immigrants to be "barely different from natives" politically? Do they have the same (polarized) spread of positions? Do they all fit into one of the existing political camps without creating a new one? Do they all fit into the in-group camp for Caplan's target audience?
And again:
[Caplan] finds that immigrants are a tiny bit more left-wing than the general population but that their kids and grandkids regress to the political mainstream.
If the US electorate is polarized left-right, does being a bit more left-wing mean a slightly higher percentage of immigrants than of natives are left-wing, but immigrants are still as polarized as the natives?
bad configurations can be selected against inside the germinal cells themselves or when the new organism is just a clump of a few thousand cells
Many genes and downstream effects are only expressed (and can be selected on) after birthing/hatching, or only in adult organisms. This can include whole organs, e.g. mammal fetuses don't use their lungs in the womb. A fetus could be deaf, blind, weak, slow, stupid - none of this would stop it from being carried to term. An individual could be terrible at hunting, socializing, mating, raising grandchildren - none of that would stop it from being born and raised to adulthood.
There's no biological way to really test the effect of a gene ahead of time. So it's very valuable to get genes that have already been selected for beneficial effects outside of early development.
That's in addition to p.b.'s point about losing information.
When you get an allele from sex, there are two sources of variance. One is genes your (adult) partner has that are different from yours. The other is additional de novo mutations in your partner's gametes.
The former has already undergone strong selection, because it was part of one (and usually many) generations' worth of successfully reproducing organisms. This is much better than getting variance from random mutations, which are more often bad than good, and can be outright fatal.
Selecting through many generations of gametes, like (human) sperm do, isn't good enough; it doesn't filter out bad mutations in genes that aren't expressed in sperm cells.
Lateral gene transfer might be as good as sex, but I don't see how higher mutation rates can compete. I believe that empirically, mutations that weaken one of the anti-mutation DNA preservation mechanisms in gametes are usually deleterious and are not selected.
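A toy simulation of the point above about partner alleles having already passed through selection (my own illustration; the effect-size distribution and selection rule are made up):

```python
# Toy model: fitness effects of fresh de novo mutations vs. variants that have
# already survived one generation of selection. All numbers are made up.
import random

def mutation_effect() -> float:
    # New mutations are mildly harmful on average, occasionally beneficial.
    return random.gauss(-0.02, 0.05)

def survives_selection(effect: float) -> bool:
    # Crude viability/fertility selection: more harmful -> less likely to reproduce.
    return random.random() < min(1.0, max(0.0, 0.5 + 5.0 * effect))

random.seed(0)
de_novo = [mutation_effect() for _ in range(100_000)]
inherited = [e for e in de_novo if survives_selection(e)]  # one round of filtering

print(f"mean effect, de novo mutations: {sum(de_novo) / len(de_novo):+.4f}")
print(f"mean effect, already-selected:  {sum(inherited) / len(inherited):+.4f}")
# The already-selected pool has a noticeably less harmful average effect,
# which is the sense in which variance from a partner is 'higher quality'.
```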
I propose using computational resources as the "reference" good.
I don't understand the implications of this, can you please explain / refer me somewhere? How is the GDP measurement resulting from this choice going to be different from another choice like control of matter/energy? Why do we even need to make a choice, beyond the necessary assumption that there will still be a monetary economy (and therefore a measurable GDP)?
In the hypothetical future society you propose, most value comes from non-material goods.
That seems very likely, but it's not a necessary part of my argument. Most value could keep coming from material goods, if we keep inventing new kinds of goods (i.e. new arrangements of matter) that we value higher than past goods.
However, these non-material goods are produced by some computational process. Therefore, buying computational resources should always be marginally profitable. On the other hand, the total amount of computational resources is bounded by physics. This seems like it should imply a bound on GDP.
There's a physical bound on how much computation can be done in the remaining lifetime of the universe (in our future lightcone). But that computation will necessarily take place over a very very long span of time.
For as long as we can keep computing, the set of computation outputs (inventions, art, simulated-person-lifetimes, etc) each year can keep being some n% more valuable than the previous year. The computation "just" needs to keep coming up with better things every year instead of e.g. repeating the same simulation over and over again. And this doesn't seem impossible to me.
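Some illustrative arithmetic on how modest compounding adds up (my numbers, chosen only to show the scale; the 8000-year horizon is the one discussed elsewhere in this thread):

```python
# Illustrative only: a modest constant growth rate compounded over 8000 years.
growth_rate = 0.02     # assume 2% more value per year (made-up figure)
years = 8000
factor = (1 + growth_rate) ** years
print(f"total growth factor after {years} years: {factor:.1e}")   # ~6.3e68
# The economy would need to become ~1e68-1e69 times more valuable -- not by
# using vastly more atoms, but by finding ever better arrangements of them.
```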
I think that most people would prefer facing a 10e-6 probability of death to paying 1000 USD.
The sum of 1000 USD comes from the average wealth of people today. Using (any) constant here encodes the assumption that GDP (per-capita wealth times population) won't keep growing.
If we instead suppose a purely relative limit, e.g. that a person is willing to pay a 1e-6 part of their personal wealth to avoid a 1e-6 chance of death, then we don't get a bound on total wealth.
you imagine that the rate at which new "things" are produced hits diminishing returns
The rate at which new atoms (or matter/energy/space more broadly) are added will hit diminishing returns, at the very least due to speed of light.
The rate at which new things are produced won't necessarily hit diminishing returns, because we can keep cannibalizing old things to make better new things. Often, re-configurations of existing atoms produce value without consuming new resources, except for the (much smaller) amount of resources used to rearrange them. If I invent email, which replaces postal mail, I produce value while reducing the number of atoms used.
this value growth has to hit a ceiling pretty soon anyway, because things can only be that much valuable
Eventually yes, but I don't think they have to hit a ceiling soon, e.g. in a timeframe relevant to the OP. Maybe it's probable they will, but I don't know how to quantify it. The purely physical ceiling on ascribable value is enormously high (other comment on this and also this).
Like you, I don't know what to make of intuition pumps like your proposed Pascal's Ceiling of Value. Once you accept that actual physics don't practically limit value, what's left of the OP is a similar-looking argument from incredulity: can value really grow exponentially almost-forever just by inventing new things to do with existing atoms? I don't know that it will keep growing, but I don't see a strong reason to think it can't, either.
I agree, and want to place a slightly different emphasis. A "better" education system is a two-place function; what's better for a poor country is different from what's better for a rich Western one. And education in Western countries looked different back when they were industrializing and still poor by modern standards.
(Not that the West a century ago is necessarily a good template to copy. The point is that the education systems rich countries have today weren't necessarily a part of what made them rich in the first place.)
A lot (some think most) of Western education is also a credentialing and signalling system. It can also promote social integration (shared culture), and it serves as daycare for lower grades. These things don't directly help a poor country get richer.
Signalling is a zero sum game competing over the top jobs in a poor economy. Sequestering teenagers reduces available workforce for a net economic loss. Community daycare is economically valuable, but requiring qualified teachers is expensive and can make it a net loss.
So poor countries can copy Western education systems faithfully and still not benefit. What they are cargo culting is not (just) the elements of how to do "education", but the function of the education system in broader society. Faithfully reproducing modern Western education doesn't necessarily make your country rich: that's cargo culting.
Please see my other reply here. Yes, value is finite, but the number of possible states of the universe is enormously large, and we won't explore it in 8000 years. The order of magnitude is much bigger.
(Incidentally, our galaxy is ~ 100,000 light years across; so even expanding to cover it would take much longer than 8000 years, and that would be creating value the old-fashioned way by adding atoms, but it wouldn't support continued exponential growth. So "8000 years" and calculations based off the size of the galaxy shouldn't be mixed together. But the order-of-magnitude argument should work about as well for the matter within 8000 light-years of Earth.)
in their expected lifespan
Or even in the expected lifetime of the universe.
perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.
That's a good point, but how would we know? We would need to prove that a given configuration is of maximal (and tile-able) utility without evaluating the (exponentially bigger) number of configurations of bigger size. And we don't (and possibly can't, or shouldn't) have an exact (mathematical) definition of a Pan-Human Utility Function.
However, a proof isn't needed to make this happen (for better and for worse). If a local configuration is created which is sufficiently more (universally!) valuable than any other known local configuration, neighbors will start copying it and it will tile the galaxy, possibly ending progress if it's a stable configuration - even if this configuration is far from the best one possible locally (let alone globally).
In practice, "a wonderful thing was invented, everyone copied it of their own free will, and stayed like that forever because human minds couldn't conceive of a better world, leaving almost all possible future value on the table" doesn't worry me nearly as much as other end-of-progress scenarios. The ones where everyone dies seem much more likely.
In the limit you are correct: if a utility function assigns a value to every possible arrangement of atoms, then there is some maximum value, and you can't keep increasing value forever without adding atoms because you will hit the maximum at some point. An economy can be said to be "maximally efficient" when value can't be added by rearranging its existing atoms, and we must add atoms to produce more value.
However, physics provides very weak upper bounds on the possible value (to humans) of a physical system of a given size, because the number of possible physical arrangements of a finite-sized system is enormous. The Bekenstein bound is approximately 2.6e43 * M * R bits, with the mass M in kilograms and the radius R in meters. Someone who understands QM should correct me here, but just as an order-of-magnitude-of-order-of-magnitude estimation, our galaxy masses around 1e44 kg with a radius of 1e18 meters, so its arrangement in a black hole can contain up to 2.6e105 bits of information.
Those are bits; the number of states is 2^(2.6e105). That is much, much bigger than the OP's 3e70; we can grow the per-atom value of the overall system state by a factor much bigger than 3e70.
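Spelling out the back-of-the-envelope arithmetic (reusing the same rough figures from this comment, which are only order-of-magnitude guesses):

```python
# Back-of-the-envelope Bekenstein-bound arithmetic, reusing the rough figures
# from the comment above (order-of-magnitude guesses, not careful astrophysics).
import math

BEKENSTEIN_BITS_PER_KG_M = 2.6e43   # approximate coefficient, bits per (kg * m)
galaxy_mass_kg = 1e44               # rough figure used above
galaxy_radius_m = 1e18              # rough figure used above

max_bits = BEKENSTEIN_BITS_PER_KG_M * galaxy_mass_kg * galaxy_radius_m
print(f"max information content: {max_bits:.1e} bits")                  # ~2.6e105
print(f"log10 of the number of states: {max_bits * math.log10(2):.1e}") # ~7.8e104
# The number of distinguishable states is 2**(2.6e105) -- a 1 followed by
# roughly 8e104 zeros, vastly more than the OP's 3e70.
```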
Of course this isn't a tight argument and there are lots of other things to consider. For example, to get the galaxy into some valuable configuration, we'd need to "use up" part of the same galaxy in the process of changing the configuration of the rest. But from a purely physical perspective, the upper bound on value per atom is enormously high.
ETA: replaced mind-boggling numbers with even bigger mind-boggling numbers after a more careful reading of Wikipedia.
The OP's argument is general: it says essentially that (economic) value is bounded linearly by the number of atoms backing the economy. Regardless of how the atoms are translated to value. This is an impossibility argument. My rebuttal was also general, saying that value is not so bounded.
Any particular way of extracting value, like electronics, usually has much lower bounds in practice than 'linear in the amount of atoms used' (even ignoring different atomic elements). So yes, today's technology that depends on 'rare' earths is bounded by the accessible amount of those elements.
But this technology is only a few decades old. The economy has been growing at some % a year for much longer than that, across many industries and technological innovations that have had very different material constraints from each other. And so, while contemporary rare-earth-dependent techniques won't keep working forever, the overall trend of economic growth could continue far beyond any one technology's lifespan, and for much longer than the OP projects.
Technology and other secular change doesn't always increase value; often it is harmful. My argument is that economy can keep growing for a long time, not that it necessarily will, or that all (or even most) changes over time are for the best. And GDP is not a good measure of human wellbeing to begin with; we're measuring dollars, not happiness, and when I talk about "utility" I mean the kind estimated via revealed preferences.
The rate of value production per atom can be bounded by physics. But the amount of value ascribed to the thing being produced is only strictly bounded by the size of the number (representing the amount of value) that can be physically encoded, which is exponential in the number of atoms, and not linear.
By "proportionately more" I meant more than the previous economic-best use of the same material input, which the new invention displaced (modulo increasing supply). For example, the amount of value derived by giving everyone (every home? every soldier? every car?) a radio is much greater than any other value the same amount of copper, zinc etc. could have been used for before the invention of radio. We found a new way to get more value from the same material inputs.
For material outputs (radio sets, telegraph wire, computers), of course material inputs are used. But the amount of value we get from the inputs is not really related to, or bounded by, the amount of input material. A new way of using material can have an arbitrarily high value-produced-to-materials-consumed ratio.
I'll run with your example of semiconductor factories. A factory costs between $1 billion and $20 billion to build. The semiconductor industry has a combined yearly revenue of $500 billion (2018). Doesn't sound like a huge multiplier so far.
But then consider that huge amounts of modern technology (= value) require semiconductors as an input. The amount of semiconductor industry inputs, and material waste byproducts, was similar in 1990 and 2020 (same order of magnitude). But the amount of value enabled by using those semiconductors was enormously larger in 2020. Whole new markets were created thanks to the difference in capability between 1990 semiconductors ($100 per megabyte DRAM) and 2020 ($0.003 per MB). Smartphones, PCs, modern videogames, digital video and audio, digital cameras, most of the way the Internet and Web are used today; but also all modern devices with chips inside, from cars to satellites; the list is almost endless.
All of these require extra inputs besides semiconductors, and those inputs cost time and money. But the bill of materials for a 2020 smartphone is smaller and cheaper than that of an early 1990 cellphone, while the value to the owner is much greater. (A lot of the value comes from software and digital movies and music, which don't consume atoms in the relevant sense, because they can be copied on demand.)
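For what it's worth, the DRAM figures quoted above already make the point numerically (my arithmetic on those two numbers):

```python
# Simple arithmetic on the DRAM price figures quoted above.
price_per_mb_1990 = 100.0    # USD per megabyte of DRAM, ~1990
price_per_mb_2020 = 0.003    # USD per megabyte of DRAM, ~2020

improvement = price_per_mb_1990 / price_per_mb_2020
print(f"memory per dollar, 2020 vs 1990: ~{improvement:,.0f}x")   # ~33,333x
# Inputs of the same order of magnitude, four to five orders of magnitude more
# usable memory: the extra value comes from arrangement, not from more atoms.
```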
GDP growth is measured in money, a measure of value. Value does not have to be backed by a proportional amount of matter (or energy, space or time), because we can value things as much as we like - more than any fixed number of utilons per gram-second.
Suppose I invent an algorithm that solves a hard problem and sell it as a service. The amount people will be willing to pay for it - and the amount the economy grows - is determined by how much people want it and how much money there is, but nobody cares how many new atoms I used to implement it. If I displace older, less efficient algorithms, then I produce value while reducing the number of atoms (or watts) backing the economy!
Material goods and population size can't keep growing forever, but value can. Many recent developments that produced a lot of value, like radio, computing, and the Internet, didn't do it by using proportionally more atoms. An algorithm is a convenient example but this applies to non-digital services just as much.
This is not a novel argument, but I can't recall its source or name.
Sorry, who is GBS?
Also: if Orwell thought vegetarians expected to gain 5 years of life, that would be an immense effect, well worth some social disruption. And boo Orwell for mocking them merely for being different, and not for any substance of the way they were different. It's not as if people eating different food intrudes on others (or even makes them notice, most of the time), unlike e.g. nudists, or social-reforming feminists.
I strongly agree that the methodology should have been presented up front. lsusr's response is illuminating and gives invaluable context.
But my first reaction to your comment was to note the aggressive tone and what feels like borderline name-calling. This made me want to downvote and ignore it at first, before I thought for a minute and realized that yes, on the object level this is a very important point. It made it difficult for me to engage with it.
So I'd like to ask you what exactly you meant (because it's easy to mistake tone on the internet) and why. Calling the LW audience (i.e. including me) 'alarmist and uninformed' I can understand (if not necessarily agree with) but 'an AGI death cult'? That seems to mean a cult that wants to bring about death through AGI but that's the opposite of what LW is about and so I'm pretty sure you didn't mean that. Please clarify.
In addition to this there is the horrible—the really disquieting—prevalence of cranks wherever Socialists are gathered together. One sometimes gets the impression that the mere words 'Socialism' and 'Communism' draw towards them with magnetic force every fruit-juice drinker, nudist, sandal-wearer, sex-maniac, Quaker, 'Nature Cure' quack, pacifist and feminist in England.
It's interesting to see how this aged. 85 years later, sex-maniacs and quacks are still considered 'cranks'; pacifism and nudists are not well tolerated by most societies, whereas sandal-wearing is more often respected; and vegetarianism and (1930s) feminism are completely mainstream.
Also, I was surprised to learn that Orwell thinks people typically become vegetarian to extend their lifespan, and not for ethical reasons. Was this true in 1930s England? Did Western vegetarianism use to be a fad diet on par with Orwell's "fruit-juice drinkers"?
The link to "Israeli data" is wrong; it goes to the tweet by @politicalmath showing the Houston graph you inlined later.
What is the most rational way to break ice?
- Does the cost to get a drug approved depend on how novel or irreplaceable it might be? Did it cost the same amount to approve Silenor for insomnia as it would cost to approve a really novel drug much better at combating insomnia than any existing one?
If the FDA imposes equal costs on any new drug, then it's not "imposing [costs] on a company trying to [...] parasitize the healthcare system". It's neutrally imposing costs on all companies developing drugs. And this probably does a lot more harm on net (fewer drugs marketed) than it does good (punishes some drugs that harm society).
Silenor may be a bad example for the anti-FDA narrative, but I don't think this is strong evidence against the narrative, given all the other (hopefully good) examples that we have.
To be clear, it's very important and beneficial to call out bad examples in a narrative, thank you for doing that. We should update on this information. But I don't agree with your conclusions.
- Pharma companies can probably estimate the cost of bringing a new drug to market, and make a rational cost-benefit decision (citation needed). Somaxon presumably made a bad decision with Silenor, and was 'punished' by losing money.
That's what happens to any companies in a market. Even if it was cheap to bring a drug to market, companies would still make money on some drugs but lose money on others. Why do we need an agency like the FDA imposing extra costs?
One of the complaints about the FDA is that only big and well-established companies can afford to bring a drug to market. It's a moat against new competitors, and a net harm to society because fewer good drugs are developed and approved.
Suppose the FDA found a way to make drug approval cost 50% less, while still approving the same drugs in the same amount of time. That is, pharma companies would pay half what they do now to go through the process. Most people would say this is a good thing, i.e. less dead loss. Would you call it a bad thing because it would reduce the 'punishment' of companies? If so, do you think the cost should be increased, or does it happen to be just right?
Bullshit is what comes out of the mouth of someone who values persuasion over truth. [...] The people with a need to obscure the truth are those with a political or social agenda.
Almost all humans, in almost all contexts, value persuasion over truth and have a social agenda. Condemning all human behavior that is not truth-seeking is condemning almost all human behavior. This is a strong (normative? prescriptive? judgmental?) claim that should be motivated, but you seem to take it for given.
Persuasion is a natural and desirable behavior in a social, cooperative species that is also competitive on the individual level. The main alternative to persuasion is force, and in most cases I'm glad people use persuasion rather than force. Truth-seeking would also fare worse in a more violent world, because truth has some persuasion value but little violence-value.
Truth is instrumentally useful to persuasion insofar as people are able to identify truth and inclined to prefer it. I'm all for increasing these two characteristics and otherwise "raising the sanity waterline". But that is very far from a blanket condemnation of "valuing persuasion over truth".