Posts

And was it for rational, object-level reasons? 2020-03-17T10:30:20.070Z
80% of data in Chinese clinical trials have been fabricated 2016-10-02T07:38:05.278Z
[LINK] Updating Drake's Equation with values from modern astronomy 2016-04-30T22:08:07.858Z
Meetup : Tel Aviv Meetup: solving anthropic puzzles using UDT 2015-07-20T17:37:37.359Z
Meetup : Tel Aviv Meetup: Social & Board Games 2015-07-01T17:53:21.516Z
When does heritable low fitness need to be explained? 2015-06-10T00:05:10.338Z
Meetup : Tel Aviv Meetup: Social & Board Games 2015-05-05T10:07:51.037Z
Meetup : Less Wrong Israel Meetup: Social and Board Games 2015-04-12T14:43:59.290Z
Meetup : Less Wrong Israel Meetup: Social and Board Games 2015-03-30T08:28:10.122Z
Meetup : Tel Aviv: Slightly Less Hard Problems of Consciousness 2015-03-15T21:07:49.159Z
Meetup : Less Wrong Israel Meetup: social and board games 2015-03-06T10:34:01.202Z
Meetup : Less Wrong Israel Meetup: Social and Board Games 2015-01-10T09:48:33.654Z
Meetup : Israel Less Wrong Meetup - Social, Board Games 2014-11-10T14:00:51.188Z
Meetup : Less Wrong Israel Meetup (Herzliya): Social and Board Games 2014-09-04T13:17:23.800Z
[LINK] Behind the Shock Machine: book reexamining Milgram obedience experiments 2013-09-13T13:20:44.900Z
Meetup : LessWrong Israel September meetup 2013-08-06T12:11:12.797Z
Meetup : Israel LW meetup 2013-06-25T15:44:39.851Z
Does evolution select for mortality? 2013-02-23T19:33:12.534Z
I want to save myself 2011-05-20T10:27:25.788Z
Choose To Be Happy 2011-01-01T22:50:56.697Z
Proposal: Anti-Akrasia Alliance 2011-01-01T21:52:31.760Z

Comments

Comment by DanArmak on News : Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI · 2023-07-23T13:21:18.751Z · LW · GW

About the impossibility result, if I understand correctly, that paper says two things (I'm simplifying and eliding a great deal):

  1. You can take a recognizable, possibly watermarked output of one LLM, use a different LLM to paraphrase it, and not be able to detect the second LLM's output as coming from (transforming) the first LLM.

  2. In the limit, any classifier that tries to detect LLM output can be beaten by an LLM that is sufficiently good at generating human-like output. There's evidence that LLMs can soon become that good. And since emulating human output is an LLM's main job, capabilities researchers and model developers will make them that good.

The second point is true but not directly relevant: OpenAI et al are committing not to make models whose output is indistinguishable from humans.

The first point is true, BUT the companies have not committed themselves to defeating it. Their own models' output is clearly watermarked, and they will provide reliable tools to identify those watermarks. If someone else then provides a model that is good enough at paraphrasing to remove that watermark, that is that someone else's fault, and they are effectively not abiding by this industry agreement.
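
For intuition, here's a toy sketch of the kind of statistical test a watermark-identification tool can use, assuming a "green list" scheme in the style of Kirchenbauer et al. (2023); the function and its parameters are illustrative, not any company's actual method:

```python
# Toy sketch: detecting a "green list" watermark with a one-sided z-test.
# A watermarking model biases sampling toward a pseudorandom "green" subset
# of the vocabulary (a fraction gamma); unwatermarked text hits that subset
# only at the base rate.
import math

def watermark_z_score(green_tokens: int, total_tokens: int, gamma: float = 0.5) -> float:
    """How many standard deviations the observed green-token count sits
    above what unwatermarked text would produce by chance."""
    expected = gamma * total_tokens
    stddev = math.sqrt(total_tokens * gamma * (1.0 - gamma))
    return (green_tokens - expected) / stddev

# e.g. 700 of 1000 tokens on the green list gives z ~ 12.6:
# overwhelming evidence of a watermark.
print(watermark_z_score(700, 1000))
```

A paraphrasing model defeats exactly this kind of test by resampling tokens off the green list, which is why the first point above matters.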

If open source / widely available non-API-gated models become good enough at this to render the watermarks useless, then the commitment scheme will have failed. This is not surprising; if ungated models become good enough at anything contravening this scheme, it will have failed.

There are tacit but very necessary assumptions in this approach and it will fail if any of them break:

  1. The ungated models released so far (e.g. LLaMA) don't contain forbidden capabilities - including output and/or paraphrasing that's indistinguishable from human, but also of course notkillingeveryone - and won't be improved to include them by 'open source' tinkering that doesn't come from large industry players
  2. No-one worldwide will release new more capable models, or sell ungated access to them, disobeying this industry agreement; and if they do, it will be enforced (somehow)
  3. The inevitable use by some governments, militaries, etc. of more capable models (which would be illegal if released publicly) will not result in the public release of such capabilities; and also, their inevitable use of e.g. indistinguishable-from-human output will not cause such (public) problems that this commitment not to let private actors do it becomes meaningless
Comment by DanArmak on News : Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI · 2023-07-21T18:35:44.715Z · LW · GW

OpenAI post with more details here.

Comment by DanArmak on Why didn't we get the four-hour workday? · 2023-03-18T22:00:20.567Z · LW · GW

Charles P. Steinmetz saw a two-hour working day on the horizon—he was the scientist who made giant power possible

What is giant power? I can't figure this out.

Comment by DanArmak on Success without dignity: a nearcasting story of avoiding catastrophe by luck · 2023-03-18T15:51:09.995Z · LW · GW

So we can imagine AI occupying the most "cushy" subset of former human territory

We can definitely imagine it - this is a salience argument - but why is it at all likely? Also, this argument is subject to reference class tennis: humans have colonized much more and more diverse territory than other apes, or even all other primates.

Once AI can flourish without ongoing human support (building and running machines, generating electricity, reacting to novel environmental challenges), what would plausibly limit AI to human territory, let alone "cushy" human territory? Computers and robots can survive in any environment humans can, and in some where we at present can't.

Also: the main determinant of human territory is inter-human social dynamics. We are far from colonizing everywhere our technology allows, or (relatedly) breeding to the greatest number we can sustain. We don't know what the main determinant of AI expansion will be; we don't even know yet how many different and/or separate AI entities there are likely to be, and how they will cooperate, trade or conflict with each other.

Comment by DanArmak on What are some good arguments against building new nuclear power plants? · 2022-08-14T16:20:10.891Z · LW · GW

Nuclear power has the highest chance of The People suddenly demanding it be turned off twenty years later for no good reason. Baseload shouldn't be hostage to popular whim.

Comment by DanArmak on Transformer language models are doing something more general · 2022-08-04T15:33:12.293Z · LW · GW

Thanks for pointing this out!

A few corollaries and alternative conclusions to the same premises:

  1. There are two distinct interesting things here: a magic cross-domain property that can be learned, and an inner architecture that can learn it.
  2. There may be several small efficient architectures. The ones in human brains may not be like the ones in language models. We have plausibly found one efficient architecture; this is not much evidence about unrelated implementations.
  3. Since the learning is transferable to other domains, it's not language specific. Large language models are just where we happened to first build good enough models. You quote discussion of the special properties of natural language statistics but, by assumption, there are similar statistical properties in other domains. The more a property is specific to language, or necessary because of the special properties of language, the less it's likely to be a universal property that transfers to other domains.
Comment by DanArmak on What Is a Major Chord? · 2022-04-28T13:52:36.217Z · LW · GW

Thanks! This, together with gjm's comment, is very informative.

How is the base or fundamental frequency chosen? What is special about the standard ones?

Comment by DanArmak on Ukraine Post #11: Longer Term Predictions · 2022-04-25T15:07:47.503Z · LW · GW

the sinking of the Muscovy

Is this some complicated socio-political ploy denying the name Moskva / Moscow and going back to the medieval state of Muscovy?

Comment by DanArmak on The Jordan Peterson vs Sam Harris Debate · 2022-04-06T08:13:38.966Z · LW · GW

I'm a moral anti-realist; it seems to me to be a direct inescapable consequence of materialism.

I tried looking at definitions of moral relativism, and it seems more confused than moral realism vs. anti-realism. (To be sure there are even more confused stances out there, like error theory...)

Should I take it that Peterson and Harris are both moral realists and interpret their words in that light? Note that this wouldn't be reasoning about what they're saying; for me, it would be literally interpreting their words, because people are rarely precise, and moral realists and anti-realists often use the same words to mean different things. (In part because they're confused and are arguing over the "true" meaning of words.)

So, if they're moral realists, then "not throwing away the concept of good" means not throwing away moral realism; I think I understand what that means in this context.

Comment by DanArmak on The Jordan Peterson vs Sam Harris Debate · 2022-04-05T21:03:15.364Z · LW · GW

Also known as: the categories were made for man.

Comment by DanArmak on The Jordan Peterson vs Sam Harris Debate · 2022-04-05T21:02:32.961Z · LW · GW

When Peterson argues religion is a useful cultural memeplex, he is presumably arguing for all of (Western monotheistic) religion. This includes a great variety of beliefs, rituals, practices over space and time - I don't think any of these have really stayed constant across the major branches of Judaism, Christianity and Islam over the last two thousand years. If we discard all these incidental, mutable characteristics, what is left as "religion"?

One possible answer (I have no idea if Peterson would agree): the structure of having shared community beliefs and rituals remains, but not the specific beliefs, or the specific (claimed) reasons for holding them; the distinctions of sacred vs. profane remains, and of priests vs. laymen, and of religious law vs. freedom of actions in other areas, but no specifics of what is sacred or what priests do; the idea of a single, omniscient, omnipotent God, but not that God's attributes, other than being male; that God judges and rewards or punishes people, but no particulars of what is punished or rewarded, or what punishments or rewards might be.

ETA: it occurs to me that marriage-as-a-sacrament, patriarchy, and autocracy have all been stable common features of these religions. I'm not sure if they should count as features of the religion, or of a bigger cultural package which has conserved these and other features.

Atheists reject the second part of the package, the one that's about a God. But they (like everyone) still have the first part: shared beliefs and rituals and heresies, shared morals and ethics, sources of authority, etc. (As an example, people sometimes say that "Science" often functions as a religion for non-scientists; I think that's what's meant; Science-the-religion has priests and rituals and dogmas and is entangled with law and government, but it has no God and doesn't really judge people.)

But that's just what I generated when faced with this prompt. What does Peterson think is the common basis of "Western religion over the last two thousand years" that functions as a memeplex and ignores the incidentals that accrue like specific religious beliefs?

Comment by DanArmak on The Jordan Peterson vs Sam Harris Debate · 2022-04-05T20:21:17.974Z · LW · GW

They are both pro free speech and pro good where "good" is what a reasonable person would think of as "good".

I have trouble parsing that definition. You're defining "good" by pointing at "reasonable". But people who disagree on what is good, will not think each other reasonable.

I have no idea what actual object-level concept of "good" you meant. Can you please clarify?

For example, you go on to say:

They both agree that religion has value.

I'm not sure whether religion has (significant, positive) value. Does that make me unreasonable?

Comment by DanArmak on Should we push for banning making hiring decisions based on AI? · 2022-04-05T20:10:51.488Z · LW · GW

Amazon using an (unknown secret) algorithm to hire or fire Flex drivers is not an instance of "AI", not even in the buzzword sense of AI = ML. For all we know it's doing something trivially simple, like combining a few measured properties (how often they're on time, etc.) with a few manually assigned weights and thresholds. Even if it's using ML, it's going to be something much more like a bog-standard Random Forest model trained on 100k rows with no tuning, than a scary powerful language model with a runaway growth trend.

Even if some laws are passed about this, they'd be expandable in the directions of "Bezos is literally an evil overlord [which is a quote from the linked article], our readers/voters love to hate him, we should hurt him some more"; and "we already have laws establishing protected characteristics in hiring/firing/housing/etc; if black-box ML models can't prove they're not violating the law, then they're not allowed". The latter has a very narrow domain of applicability so would not affect AI risk.

What possible law or regulation, now or in the future, would differentially impede dangerous AI (on the research path leading to AGI) and all other software, or even all other ML? A law that equally impedes all ML would never get enough support to pass; a law that could be passed would have to use some narrow discriminating wording that programmers could work around most of the time, and so accomplish very little.

Comment by DanArmak on Russian x-risk newsletter March 2022 update · 2022-04-03T16:51:15.555Z · LW · GW

Epistemic status: wild guessing.

  1. If the US has submarine locators (or even a theory or a work-in-progress), it has to keep them secret. The DoD or Navy might not want to reveal them to any Representatives. This would prevent them from explaining to those Representatives why submarine budgets should be lowered in favor of something else.

  2. A submarine locator doesn't stop submarines by itself; you still presumably need to bring ships and/or planes to where the submarines are. If you do this ahead of time and just keep following the enemy subs around, they are likely to notice, and you will lose strategic surprise. The US has a lot of fleet elements and air bases around the world (and allies), so it plausibly has an advantage over its rivals in terms of being able to take out widely dispersed enemy submarines all at once.

  3. Even if others also secretly have submarine locators, there may be an additional anti-sub-locator technology or strategy that the US has developed and hopes its rivals have not, which would keep US submarines relevant. Building a sub-locator might be necessary but not sufficient for building an anti-sub-locator.

Comment by DanArmak on Blatant Plot Hole in HPMoR [Spoilers] · 2022-04-02T09:18:26.653Z · LW · GW

Now write the scene where Draco attempts to convince his father to accept Quirrel points in repayment of the debt.

"You see, Father, Professor Quirrel has promised to grant any school-related wish within his power to whoever has the most Quirrel points. If Harry gives his points to me, I will have the most points by far. Then I can get Quirrel to teach students that blood purism is correct, or that it would be rational to follow the Dark Lord if he returns, or to make me the undisputed leader of House Slytherin. That is worth far more than six thousand galleons!"

Lord Malfoy looked unconvinced. "If Quirrel is as smart as you say, why would he promise to grant such an open-ended wish? He warned you that Quirrel points were worth only one-tenth of House points, a popularity contest designed to distract fools from true politics and dominated by Quidditch seekers. For every plot you buy from Quirrel with your points, he will hatch a greater counter-plot to achieve what he himself truly wants. You must learn, my son, not to rely overmuch on those greater than yourself to serve as your willing agents; the power loaned by them is never free, and it is not truly yours in the end."

Comment by DanArmak on Why do people avoid vaccination? · 2022-02-12T16:52:34.979Z · LW · GW

I don't see an advantage

A potential advantage of an inactivated-virus vaccine is that it can raise antibodies for all viral proteins and not just a subunit of the spike protein, which would make it harder for future strains to evade the immunity. I think this is also the model implicitly behind this claim that natural immunity (from being infected with the real virus) is stronger than the immunity gained from subunit (e.g. mRNA) vaccines. (I make no claim that that study is reliable, and just on priors it probably should be ignored.)

Comment by DanArmak on On Bounded Distrust · 2022-02-06T23:01:24.936Z · LW · GW

direct sources are more and more available to the public... But simultaneously get less and less trustworthy.

The former helps cause the latter. Sources that aren't available to the public, or are not widely read by the public for whatever reason, don't face the pressure to propagandize - either to influence the public, or to be seen as ideologically correct by it.

Of course influencing the public is only one of several drives to distort or ignore the truth, and less public fora are not automatically trustworthy.

Comment by DanArmak on Before Colour TV, People Dreamed in Black and White · 2022-02-02T14:06:07.860Z · LW · GW

Suppose that TV experience does influence dreams - or the memories or self-reporting of dreams. Why would it affect specifically and only color?

Should we expect people who watch old TV to dream in low resolution and non-surround sound? Do people have poor reception and visual static in their black and white dreams? Would people who grew up with mostly over-the-border transmissions dream in foreign languages, or have their dreams subtitled or overdubbed? Would people who grew up with VCRs have pause and rewind controls in their dreams?

Some of these effects are plausible. Anecdotally, I watched a lot of anime, and I had some dreams in pseudo-Japanese (I don't speak Japanese). I don't remember ever dreaming subtitles though.

Does either explanation of the black and white effect make predictions about which other effects should be present, and why?

Comment by DanArmak on Before Colour TV, People Dreamed in Black and White · 2022-02-02T13:50:22.705Z · LW · GW

Epistemic status: anecdote.

Most of the dreams I've ever had (and remembered in the morning) were not about any kind of received story (media, told to me, etc). They were all modified versions of my own experiences, like school, army, or work, sometimes fantastically distorted, but recognizably about my experiences. A minority of dreams have been about stories (e.g. a book I read), usually from a first-person point of view (e.g. a self-insert into the book).

So for me, dreams are stories about myself. And I wonder: if these people had their dreams influenced by the form of media, were they influenced by the content as well? Or did they dream about their own lives in black and white? The latter would be quite odd.

Comment by DanArmak on Book Review: Why Everyone (Else) Is a Hypocrite · 2021-10-15T07:08:06.794Z · LW · GW

He's saying that it's extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it's similarly unobvious whether we should worry about the suffering of edge detectors.

Being concerned implies that 1) something has experiences, 2) they can be negative / disliked in a meaningful way, and 3) we morally care about that.

I'd like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to understand? Is there a principled or at least reasonable and consistent definition? Is there a reason to privilege edge detectors made from neurons over, say, a simple edge detector program made from code? Could other (complex, input-processing) tissues and organs have experience, or only those made from neurons?

Could the brain be logically divided in N different ways, such that we'd worry about the experience of a certain sub-network using division A, and not worry about a different sub-network using division B, but actually they're composed mostly of the same neurons, we just model them differently?

We talk about edge detectors mostly because they're simple and "stand-alone" enough that we located and modeled them in the brain. There are many more complex and less isolated parts of the brain we haven't isolated and modeled well yet; should that make us more or less concerned that they (or parts of them) have relevant experiences?

Finally, if very high-level parts of my brain ("I") have a good experience, while a theory leads us to think that lots of edge-detectors inside my brain are having bad experiences ("I can't decide if that's an edge or not, help!"), how might a moral theory look that would resolve or trade off these against each other?

Comment by DanArmak on Do you think you are a Boltzmann brain? If not, why not? · 2021-10-15T06:53:48.247Z · LW · GW

This is a question similar to "am I a butterfly dreaming that I am a man?". Both statements are incompatible with any other empirical or logical belief, or with making any predictions about future experiences. Therefore, the questions and belief-propositions are in some sense meaningless. (I'm curious whether this is a theorem in some formalized belief structure.)

For example, there's an argument about B-brains that goes: simple fluctuations are vastly more likely than complex ones; therefore almost all B-brains that fluctuate into existence will exist for only a brief moment and will then chaotically dissolve in a kind of time-reverse of their fluctuating into existence.

Should a B-brain expect a chaotic dissolution in its near future? No, because its very concepts of physics and thermodynamics that cause it to make such predictions are themselves the results of random fluctuations. It remembers reading arguments and seeing evidence for Boltzmann's theorem of entropy, but those memories are false, the result of random fluctuations.

So a B-brain shouldn't expect anything at all (conditioning on its own subjective probability of being a B-brain). That means a belief in being a B-brain isn't something that can be tied to other beliefs and questioned.

Comment by DanArmak on Covid 10/14: Less Long Cvoid · 2021-10-14T15:38:14.068Z · LW · GW

Title typo: cvoid.

Comment by DanArmak on Book Review: Why Everyone (Else) Is a Hypocrite · 2021-10-12T12:54:08.646Z · LW · GW

Let's take the US government as a metaphor. Instead of saying it's composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary

Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big powerful modules that aggregate sub-modules? Or is it merely that others posit only a few large modules, whereas Kurzban thinks we must model both small and large agents at once?

We don't ask "what is it like to be an edge detector?", because there was no evolutionary pressure to enable us to answer that question. It could be most human experience is as mysterious to our conscious minds as bat experiences.

If "human experience" includes the experience of an edge detector, I have to ask for a definition of "human experience". Is he saying an edge detector is conscious or sentient? What does it mean to talk of the experience of such a relatively small and simple part of the brain? Why should we care what its experience is like, however we define it?

Comment by DanArmak on Book Review: Open Borders · 2021-10-12T12:44:24.859Z · LW · GW

Finding the percentage of "immigrants" is misleading, since it's immigrants from Mexico and Central America who are politically controversial, not generic "immigrants" averaged over all sources.

I'm no expert on American immigration issues, but I presume this is because most immigrants come in through the (huge) southern land border, and are much harder for the government to control than those coming in by air or sea.

However, I expect immigrants from any other country outside the Americas would be just as politically controversial if large numbers of them started arriving, and an open borders policy with Europe or Asia or Africa would be just as unacceptable to most Americans.

Are Americans much more accepting of immigrants from outside Central and South America?

Comment by DanArmak on Book Review: Open Borders · 2021-10-10T15:12:00.684Z · LW · GW

immigrants are barely different from natives in their political views, and they adopt a lot of the cultural values of their destination country.

The US is famous for being culturally and politically polarized. What does it even mean for immigrants to be "barely different from natives" politically? Do they have the same (polarized) spread of positions? Do they all fit into one of the existing political camps without creating a new one? Do they all fit into the in-group camp for Caplan's target audience?

And again:

[Caplan] finds that immigrants are a tiny bit more left-wing than the general population but that their kids and grandkids regress to the political mainstream.

If the US electorate is polarized left-right, does being a bit more left-wing mean a slightly higher percentage of immigrants than of natives are left-wing, but immigrants are still as polarized as the natives?

Comment by DanArmak on Contra Paul Christiano on Sex · 2021-10-06T16:26:31.891Z · LW · GW

bad configurations can be selected against inside the germinal cells themselves or when the new organism is just a clump of a few thousand cells

Many genes and downstream effects are only expressed (and can be selected on) after birthing/hatching, or only in adult organisms. This can include whole organs, e.g. mammal fetuses don't use their lungs in the womb. A fetus could be deaf, blind, weak, slow, stupid - none of this would stop it from being carried to term. An individual could be terrible at hunting, socializing, mating, raising grandchildren - none of that would stop it from being born and raised to adulthood.

There's no biological way to really test the effect of a gene ahead of time. So it's very valuable to get genes that have already been selected for beneficial effects outside of early development.

That's in addition to p.b.'s point about losing information.

Comment by DanArmak on Contra Paul Christiano on Sex · 2021-10-01T17:54:03.585Z · LW · GW

When you get an allele from sex, there are two sources of variance. One is genes your (adult) partner has that are different from yours. The other is additional de novo mutations in your partner's gametes.

The former has already undergone strong selection, because it was part of one (and usually many) generations' worth of successfully reproducing organisms. This is much better than getting variance from random mutations, which are more often bad than good, and can be outright fatal.

Selecting through many generations of gametes, like (human) sperm do, isn't good enough; it doesn't filter out bad mutations in genes that aren't expressed in sperm cells.

Lateral gene transfer might be as good as sex, but I don't see how higher mutation rates can compete. I believe that empirically, mutations that weaken one of the anti-mutation DNA preservation mechanisms in gametes are usually deleterious and are not selected.

Comment by DanArmak on This Can't Go On · 2021-09-28T17:52:28.072Z · LW · GW

I propose using computational resources as the "reference" good.

I don't understand the implications of this, can you please explain / refer me somewhere? How is the GDP measurement resulting from this choice going to be different from another choice like control of matter/energy? Why do we even need to make a choice, beyond the necessary assumption that there will still be a monetary economy (and therefore a measurable GDP)?

In the hypothetical future society you propose, most value comes from non-material goods.

That seems very likely, but it's not a necessary part of my argument. Most value could keep coming from material goods, if we keep inventing new kinds of goods (i.e. new arrangements of matter) that we value higher than past goods.

However, these non-material goods are produced by some computational process. Therefore, buying computational resources should always be marginally profitable. On the other hand, the total amount of computational resources is bounded by physics. This seems like it should imply a bound on GDP.

There's a physical bound on how much computation can be done in the remaining lifetime of the universe (in our future lightcone). But that computation will necessarily take place over a very very long span of time.

For as long as we can keep computing, the set of computation outputs (inventions, art, simulated-person-lifetimes, etc) each year can keep being some n% more valuable than the previous year. The computation "just" needs to keep coming up with better things every year instead of e.g. repeating the same simulation over and over again. And this doesn't seem impossible to me.

Comment by DanArmak on This Can't Go On · 2021-09-23T18:22:04.340Z · LW · GW

I think that most people would prefer facing a 10e-6 probability of death to paying 1000 USD.

The sum of 1000 USD comes from the average wealth of people today. Using (any) constant here encodes the assumption that GDP (per-capita wealth times population) won't keep growing.

If we instead suppose a purely relative limit, e.g. that a person is willing to pay a 1e-6 part of their personal wealth to avoid a 1e-6 chance of death, then we don't get a bound on total wealth.

Comment by DanArmak on This Can't Go On · 2021-09-23T11:21:32.577Z · LW · GW

you imagine that the rate at which new "things" are produced hits diminishing returns

The rate at which new atoms (or matter/energy/space more broadly) are added will hit diminishing returns, at the very least due to speed of light.

The rate at which new things are produced won't necessarily hit diminishing returns because we can keep cannibalizing old things to make better new things. Often, re-configurations of existing atoms produce value without consuming new resources except for the (much smaller) amount of resources used to rearrange them. If I invent email which replaces post mail I produce value while reducing atoms used.

this value growth has to hit a ceiling pretty soon anyway, because things can only be that much valuable

Eventually yes, but I don't think they have to hit a ceiling soon, e.g. in a timeframe relevant to the OP. Maybe it's probable they will, but I don't know how to quantify it. The purely physical ceiling on ascribable value is enormously high (other comment on this and also this).

Like you, I don't know what to make of intuition pumps like your proposed Pascal's Ceiling of Value. Once you accept that actual physics don't practically limit value, what's left of the OP is a similar-looking argument from incredulity: can value really grow exponentially almost-forever just by inventing new things to do with existing atoms? I don't know that it will keep growing, but I don't see a strong reason to think it can't, either.

Comment by DanArmak on Weird models of country development? · 2021-09-23T10:29:26.107Z · LW · GW

I agree, and want to place a slightly different emphasis. A "better" education system is a two-place function; what's better for a poor country is different from what's better for a rich Western one. And education in Western countries looked different back when they were industrializing and still poor by modern standards.

(Not that the West a century ago is necessarily a good template to copy. The point is that the education systems rich countries have today weren't necessarily a part of what made them rich in the first place.)

A lot (some think most) of Western education is also a credentialing and signalling system. It can also promote social integration (shared culture) and serve as daycare for lower grades. These things don't directly help a poor country get richer.

Signalling is a zero sum game competing over the top jobs in a poor economy. Sequestering teenagers reduces available workforce for a net economic loss. Community daycare is economically valuable, but requiring qualified teachers is expensive and can make it a net loss.

So poor countries can copy Western education systems faithfully and still not benefit. What they are cargo culting is not (just) the elements of how to do "education", but the function of the education system in broader society. Faithfully reproducing modern Western education doesn't necessarily make your country rich: that's cargo culting.

Comment by DanArmak on This Can't Go On · 2021-09-21T15:49:01.344Z · LW · GW

Great point, thanks!

Comment by DanArmak on This Can't Go On · 2021-09-20T21:49:44.438Z · LW · GW

Please see my other reply here. Yes, value is finite, but the number of possible states of the universe is enormously large, and we won't explore it in 8000 years. The order of magnitude is much bigger.

(Incidentally, our galaxy is ~ 100,000 light years across; so even expanding to cover it would take much longer than 8000 years, and that would be creating value the old-fashioned way by adding atoms, but it wouldn't support continued exponential growth. So "8000 years" and calculations based off the size of the galaxy shouldn't be mixed together. But the order-of-magnitude argument should work about as well for the matter within 8000 light-years of Earth.)

Comment by DanArmak on This Can't Go On · 2021-09-20T17:12:24.121Z · LW · GW

in their expected lifespan

Or even in the expected lifetime of the universe.

perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.

That's a good point, but how would we know? We would need to prove that a given configuration is of maximal (and tile-able) utility without evaluating the (exponentially bigger) number of configurations of bigger size. And we don't (and possibly can't, or shouldn't) have an exact (mathematical) definition of a Pan-Human Utility Function.

However, a proof isn't needed to make this happen (for better and for worse). If a local configuration is created which is sufficiently more (universally!) valuable than any other known local configuration, neighbors will start copying it and it will tile the galaxy, possibly ending progress if it's a stable configuration - even if this configuration is far from the best one possible locally (let alone globally).

In practice, "a wonderful thing was invented, everyone copied it of their own free will, and stayed like that forever because human minds couldn't conceive of a better world, leaving almost all possible future value on the table" doesn't worry me nearly as much as other end-of-progress scenarios. The ones where everyone dies seem much more likely.

Comment by DanArmak on This Can't Go On · 2021-09-20T14:01:12.150Z · LW · GW

In the limit you are correct: if a utility function assigns a value to every possible arrangement of atoms, then there is some maximum value, and you can't keep increasing value forever without adding atoms because you will hit the maximum at some point. An economy can be said to be "maximally efficient" when value can't be added by rearranging its existing atoms, and we must add atoms to produce more value.

However, physics provides very weak upper bounds on the possible value (to humans) of a physical system of given size, because the number of possible physical arrangements of a finite-sized system is enormous. The Bekenstein bound is approximately 2.6e43 * M * R (mass times radius) bits per kg * m. Someone who understands QM should correct me here, but just as an order-of-magnitude-of-order-of-magnitude estimation, our galaxy masses around 1e44 kg with a radius of 1e18 meters, so its arrangement in a black hole can contain up to 2.6e105 bits of information.

Those are bits; the number of states is 2^(2.6e105). That is much, much bigger than the OP's 3e70; we can grow the per-atom value of the overall system state by a factor much bigger than 3e70.
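
As a sanity check on the arithmetic (not the physics), here's a minimal sketch using the same rough mass and radius figures as above:

```python
# Bekenstein bound: I <= 2*pi*c*M*R / (hbar * ln 2) bits, for mass M (kg)
# and radius R (m).
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s

def bekenstein_bits(mass_kg: float, radius_m: float) -> float:
    return 2 * math.pi * C * mass_kg * radius_m / (HBAR * math.log(2))

print(f"{bekenstein_bits(1e44, 1e18):.1e} bits")  # ~2.6e+105
# The number of distinguishable states is 2 ** (2.6e105), which dwarfs 3e70.
```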

Of course this isn't a tight argument and there are lots of other things to consider. For example, to get the galaxy into some valuable configuration, we'd need to "use up" part of the same galaxy in the process of changing the configuration of the rest. But from a purely physical perspective, the upper bound on value per atom is enormously high.

ETA: replaced mind-boggling numbers with even bigger mind-boggling numbers after a more careful reading of Wikipedia.

Comment by DanArmak on This Can't Go On · 2021-09-20T13:23:25.877Z · LW · GW

The OP's argument is general: it says essentially that (economic) value is bounded linearly by the number of atoms backing the economy. Regardless of how the atoms are translated to value. This is an impossibility argument. My rebuttal was also general, saying that value is not so bounded.

Any particular way of extracting value, like electronics, usually has much lower bounds in practice than 'linear in the amount of atoms used' (even ignoring different atomic elements). So yes, today's technology that depends on 'rare' earths is bounded by the accessible amount of those elements.

But this technology is only a few decades old. The economy has been growing at some % a year for much longer than that, across many industries and technological innovations that have had very different material constraints from each other. And so, while contemporary rare-earth-dependent techniques won't keep working forever, the overall trend of economic growth could continue far beyond any one technology's lifespan, and for much longer than the OP projects.

Technology and other secular change doesn't always increase value; often it is harmful. My argument is that the economy can keep growing for a long time, not that it necessarily will, or that all (or even most) changes over time are for the best. And GDP is not a good measure of human wellbeing to begin with; we're measuring dollars, not happiness, and when I talk about "utility" I mean the kind estimated via revealed preferences.

Comment by DanArmak on This Can't Go On · 2021-09-20T13:14:28.552Z · LW · GW

The rate of value production per atom can be bounded by physics. But the amount of value ascribed to the thing being produced is only strictly bounded by the size of the number (representing the amount of value) that can be physically encoded, which is exponential in the number of atoms, and not linear.

Comment by DanArmak on This Can't Go On · 2021-09-19T16:23:25.911Z · LW · GW

By "proportionately more" I meant more than the previous economic-best use of the same material input, which the new invention displaced (modulo increasing supply). For example, the amount of value derived by giving everyone (every home? every soldier? every car?) a radio is much greater than any other value the same amount of copper, zinc etc. could have been used for before the invention of radio. We found a new way to get more value from the same material inputs.

For material outputs (radio sets, telegraph wire, computers), of course material inputs are used. But the amount of value we get from the inputs is not really related to, or bounded by, the amount of input material. A new way of using material can have an arbitrarily high value-produced-to-materials-consumed ratio.

I'll run with your example of semiconductor factories. A factory costs between $1 billion and $20 billion to build. The semiconductor industry has a combined yearly revenue of $500 billion (2018). Doesn't sound like a huge multiplier so far.

But then consider that huge amounts of modern technology (= value) require semiconductors as an input. The amount of semiconductor industry inputs, and material waste byproducts, was similar in 1990 and 2020 (same order of magnitude). But the amount of value enabled by using those semiconductors was enormously larger in 2020. Whole new markets were created thanks to the difference in capability between 1990 semiconductors ($100 per megabyte DRAM) and 2020 ($0.003 per MB). Smartphones, PCs, modern videogames, digital video and audio, digital cameras, most of the way the Internet and Web are used today; but also all modern devices with chips inside, from cars to satellites; the list is almost endless.

All of these require extra inputs besides semiconductors, and those inputs cost time and money. But the bill of materials for a 2020 smartphone is smaller and cheaper than that of an early 1990 cellphone, while the value to the owner is much greater. (A lot of the value comes from software and digital movies and music, which don't consume atoms in the relevant sense, because they can be copied on demand.)

Comment by DanArmak on This Can't Go On · 2021-09-19T14:49:19.974Z · LW · GW

GDP growth is measured in money, a measure of value. Value does not have to be backed by a proportional amount of matter (or energy, space or time) because we can value things as much as we like - more than any constant number of utilons per gram-second.

Suppose I invent an algorithm that solves a hard problem and sell it as a service. The amount people will be willing to pay for it - and the amount the economy grows - is determined by how much people want it and how much money there is, but nobody cares how many new atoms I used to implement it. If I displace older, less efficient algorithms, then I produce value while reducing the number of atoms (or watts) backing the economy!

Material goods and population size can't keep growing forever, but value can. Many recent developments that produced a lot of value, like radio, computing, and the Internet, didn't do it by using proportionally more atoms. An algorithm is a convenient example but this applies to non-digital services just as much.

This is not a novel argument but I can't recall its source or name.

Comment by DanArmak on How factories were made safe · 2021-09-16T10:27:08.302Z · LW · GW

Sorry, who is GBS?

Also: if Orwell thought vegetarians expected to gain 5 years of life, that would be an immense effect well worth some social disruption. And boo Orwell for mocking them merely for being different and not for any substance of the way they were different. It's not as if people eating different food intrudes on others (or even makes them notice, most of the time), unlike e.g. nudists, or social-reforming feminists.

Comment by DanArmak on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T10:22:17.816Z · LW · GW

I strongly agree that the methodology should have been presented up front. lsusr's response is illuminating and gives invaluable context.

But my first reaction to your comment was to note the aggressive tone and what feels like borderline name-calling. This made me want to downvote and ignore it at first, before I thought for a minute and realized that yes, on the object level this is a very important point. It made it difficult for me to engage with it.

So I'd like to ask you what exactly you meant (because it's easy to mistake tone on the internet) and why. Calling the LW audience (i.e. including me) 'alarmist and uninformed' I can understand (if not necessarily agree with) but 'an AGI death cult'? That seems to mean a cult that wants to bring about death through AGI but that's the opposite of what LW is about and so I'm pretty sure you didn't mean that. Please clarify.

Comment by DanArmak on How factories were made safe · 2021-09-15T20:52:19.532Z · LW · GW

In addition to this there is the horrible—the really disquieting—prevalence of cranks wherever Socialists are gathered together. One sometimes gets the impression that the mere words 'Socialism' and 'Communism' draw towards them with magnetic force every fruit-juice drinker, nudist, sandal-wearer, sex-maniac, Quaker, 'Nature Cure' quack, pacifist and feminist in England.

It's interesting to see how this aged. 85 years later, sex-maniacs and quacks are still considered 'cranks'; pacifism and nudism are not well tolerated by most societies, whereas sandal-wearing is more often respected; and vegetarianism and (1930s) feminism are completely mainstream.

Also, I was surprised to learn that Orwell thinks people typically become vegetarian to extend their lifespan, and not for ethical reasons. Was this true in 1930s England? Did Western vegetarianism use to be a fad diet on par with Orwell's "fruit-juice drinkers"?

Comment by DanArmak on Covid 8/26: Full Vaccine Approval · 2021-08-26T22:16:51.956Z · LW · GW

The link to "Israeli data" is wrong; it goes to the tweet by @politicalmath showing the Houston graph you inlined later.

Comment by DanArmak on What are some good rationalist ice breaker questions? · 2021-08-25T13:47:42.406Z · LW · GW

What is the most rational way to break ice?

Comment by DanArmak on A deeper look at doxepin and the FDA · 2021-08-13T19:51:01.336Z · LW · GW

  1. Does the cost to get a drug approved depend on how novel or irreplaceable it might be? Did it cost the same amount to approve Silenor for insomnia as it would cost to approve a really novel drug much better at combating insomnia than any existing one?

    If the FDA imposes equal costs on any new drug, then it's not "imposing [costs] on a company trying to [...] parasitize the healthcare system". It's neutrally imposing costs on all companies developing drugs. And this probably does a lot more harm on net (fewer drugs marketed) than it does good (punishes some drugs that harm society).

    Silenor may be a bad example for the anti-FDA narrative, but I don't think this is strong evidence against the narrative, given all the other (hopefully good) examples that we have.

    To be clear, it's very important and beneficial to call out bad examples in a narrative, thank you for doing that. We should update on this information. But I don't agree with your conclusions.

  2. Pharma companies can probably estimate the cost of bringing a new drug to market, and make a rational cost-benefit decision (citation needed). Somaxon presumably made a bad decision with Silenor, and was 'punished' by losing money.

    That's what happens to any companies in a market. Even if it was cheap to bring a drug to market, companies would still make money on some drugs but lose money on others. Why do we need an agency like the FDA imposing extra costs?

    One of the complaints about the FDA is that only big and well-established companies can afford to bring a drug to market. It's a moat against new competitors, and a net harm to society because fewer good drugs are developed and approved.

    Suppose the FDA found a way to make drug approval cost 50% less, while still approving the same drugs in the same amount of time. That is, pharma companies would pay half what they do now to go through the process. Most people would say this is a good thing, i.e. less dead loss. Would you call it a bad thing because it would reduce the 'punishment' of companies? If so, do you think the cost should be increased, or does it happen to be just right?

Comment by DanArmak on Deliberately Vague Language is Bullshit · 2021-05-15T13:22:10.083Z · LW · GW

Bullshit is what comes out of the mouth of someone who values persuasion over truth. [...] The people with a need to obscure the truth are those with a political or social agenda.

Almost all humans, in almost all contexts, value persuasion over truth and have a social agenda. Condemning all human behavior that is not truth-seeking is condemning almost all human behavior. This is a strong (normative? prescriptive? judgmental?) claim that should be motivated, but you seem to take it as given.

Persuasion is a natural and desirable behavior in a social, cooperative species that is also competitive on the individual level. The main alternative to persuasion is force, and in most cases I'm glad people use persuasion rather than force. Truth-seeking would also fare worse in a more violent world, because truth has some persuasion value but little violence-value.

Truth is instrumentally useful to persuasion inasfar as people are able to identify truth and inclined to prefer it. I'm all for increasing these two characteristics and otherwise "raising the sanity waterline". But that is very far from a blanket condemnation of "valuing persuasion over truth".

Comment by DanArmak on Zvi's Law of No Evidence · 2021-05-15T12:56:29.066Z · LW · GW

If someone says there is "no evidence" of something then it is because they are trying to pass off "nobody looked for Bigfoot and nobody found him" as "explorers looked for Bigfoot and nobody found him".

A "no evidence" argument doesn't have to be made in bad faith. It's claiming that we've looked into the people who said they saw Bigfoot (as opposed to looking for Bigfoot itself), and concluded those claims have no good evidence behind them. And so, without evidence, we should rule out Bigfoot, because the prior for Bigfoot is very low. We would need positive evidence to raise the Bigfoot hypothesis to the level of conscious consideration, and we claim there is no such evidence.

Yes, a claim of "no evidence" is - in this context - a social attack on the people who were talking about the subject (and so implicitly claiming "yes evidence"). In the highly politicized context Zvi is discussing, almost all factual arguments are disguised social attacks; rhetoric, meant to persuade people, with facts and logic being instrumental but not the goal.

And so we can justly ignore the whole discussion because we think it's not about facts and arguments and real "evidence" and it never was. But if we want to engage with the discussion using our own arguments and evidence (or to pretend to do so for our own social goals), then we should acknowledge that a valid factual claim is being made here, which we can evaluate without dismissing it as purely rhetorical manipulation ("passing off argument A as argument B").

Zvi wrote,

No evidence should be fully up there with “government denial” or “I didn’t do it, no one saw me do it, there’s no way they can prove anything.” If there was indeed no evidence, there’d be no need to claim there was no evidence, and this is usually a move to categorize the evidence as illegitimate and irrelevant because it doesn’t fit today’s preferred form of scientism.

I disagree with this. If people claim Bigfoot exists, and I think they have no evidence for that claim, then yes I will say there is no evidence. The mere fact that people claim A is not in itself evidence for A, because people are not pure truth-seekers, and if I acknowledge any claim as itself constituting evidence, they will proceed to claim lots of things without evidence behind them. I don't need to "categorize the evidence as illegitimate and irrelevant", I should be able to say plainly that there is no evidence to begin with. It's not because "it doesn't fit today's preferred form of scientism", it's because seeing a vague outline in a snow-storm really truly isn't evidence for Bigfoot.

When people we don't like claim things that are clearly wrong, we may want to dismiss their arguments as rhetorically invalid or malicious or made in bad faith - to claim that the very form of such arguments indicates they are being made in bad faith. But that is engaging on their terms - analyzing why they're making the arguments, instead of analyzing the arguments themselves (simulacra levels!). These two discussions are both necessary but they should be kept apart. On the object level, we should be able to keep saying - the arguments are not "wrongly shaped", they are just factually wrong.

Comment by DanArmak on Covid 4/1: Vaccine Passports · 2021-04-02T16:46:39.271Z · LW · GW

We're talking past one another, trying to solve different problems. I'm a software engineer by profession and I understand how public-key cryptography works. I also assumed you were not a software engineer because your comment didn't make sense for the problem as I understand it.

The QR code contains a cryptographically signed attestation that "DanArmak" is vaccinated. Not "whoever displays this code is vaccinated".

That works fine, and is the system used in Israel and proposed in some EU countries. But it's not what I understand Zvi to be arguing for. Zvi wants a system which doesn't let verifiers identify the person in front of them, only learn that they're vaccinated. He clarifies this in this comment.

If the QR proves "DanArmak is vaccinated", then I also need to prove I'm DanArmak. E.g. by displaying a state ID. This lets verifiers track me, simply because they learn who I am, and businesses regularly sell or share data on customers / visitors. The application verifying the QR codes can make this even easier - most businesses install the same verifier application, and it uploads info about the people whose IDs it verifies. IIUC, the US doesn't have any privacy laws that would forbid private entities from such colluding, tracking, and selling of data, even without disclosure.
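
For concreteness, a minimal sketch of that signed-attestation design, assuming an Ed25519 issuer key; the JSON payload is made up, and real deployments (e.g. the EU scheme) use different encodings, but the signature logic is the same:

```python
# Minimal sketch: a health authority signs a vaccination attestation,
# and any verifier app holding the authority's public key checks it offline.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # held only by the health authority
payload = b'{"name": "DanArmak", "vaccinated": true}'
signature = issuer_key.sign(payload)
# the QR code would encode payload + signature

public_key = issuer_key.public_key()        # shipped inside every verifier app
try:
    public_key.verify(signature, payload)   # raises InvalidSignature if forged
    print("valid attestation:", payload)
except InvalidSignature:
    print("forged or corrupted QR code")
```

The privacy problem is visible right in the payload: the verifier necessarily learns the name in order to match it against a state ID.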

Comment by DanArmak on Covid 4/1: Vaccine Passports · 2021-04-02T16:37:05.897Z · LW · GW

we have proof by example

What's the example you're thinking of? I'm sorry if you mentioned it before and I missed it.

We need something harder to fake than a Fake ID, where the QR code doesn't reveal who you are, so you can't be tracked beyond the existing ability to track cell phones.

If I understand correctly, you don't want the QR code to prove that "John Doe, ID #123456789, is vaccinated" and then have the verifier ask to see a separate, pre-existing ID that shows you're John Doe. Which is how the actual and proposed vaccination passports in Israel and some of the EU work. (Hence I don't know what example you're thinking of.)

Instead you want the QR code to prove that "the bearer of this code is vaccinated". That implies the code must be secret and not trivially shareable between many different people. But copying images and taking screenshots is trivial. So the code must not be a single permanent QR per person, but generated by the application: either frequently replaced (like OTP) or on-demand (challenge-response protocol).

This could work if installing or activating the app required approval from a central database / service. This approach has difficulties I noted before, including proving to the app you're you, and multiple activations. And it still lets the app owner track you, since the app stays active.
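
To illustrate the "frequently replaced (like OTP)" option, a toy sketch assuming a hypothetical per-person secret provisioned at activation time (which is exactly the hard part noted above):

```python
# Toy sketch: an OTP-style rotating code. The QR payload changes every
# 30 seconds, so a screenshot shared with someone else goes stale quickly.
import hashlib
import hmac
import struct
import time

def current_code(secret: bytes, period_s: int = 30) -> str:
    counter = int(time.time()) // period_s
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    return mac[:8].hex()  # this short code would be embedded in the QR

# A verifier knowing the same secret recomputes and compares:
# hmac.compare_digest(current_code(secret), code_from_qr)
```

Note that the symmetric secret means any verifier who knows it could impersonate the holder; that's one reason a real design would need a central verification service or an asymmetric challenge-response, with the tracking implications noted above.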

What approach are you thinking of?

Comment by DanArmak on Bureaucracy is a world of magic · 2021-04-02T09:56:54.809Z · LW · GW

Both things are true. An attacker can find poorly protected keys that are easier to steal (although key protection may weakly correlate with key value). And a defender can invest to make their own key much harder to steal.