Posts

Reflective consistency, randomized decisions, and the dangers of unrealistic thought experiments 2023-12-07T03:33:16.149Z
Seasonality of COVID-19, Other Coronaviruses, and Influenza 2020-04-30T21:43:19.522Z
The Puzzling Linearity of COVID-19 2020-04-24T14:09:58.855Z
Body Mass and Risk from COVID-19 and Influenza 2020-04-07T18:18:24.610Z

Comments

Comment by Radford Neal on Social events with plausible deniability · 2024-11-19T02:37:04.636Z · LW · GW

Then you know that someone who voiced opinion A, which you put in the hat, and who also voiced opinion B, likely actually believes opinion B.

(There's some slack from the possibility that someone else put opinion B in the hat.)

Comment by Radford Neal on Social events with plausible deniability · 2024-11-18T23:39:55.969Z · LW · GW

Wouldn't that destroy the whole idea? Anyone could tell that an opinion voiced that's not on the list must have been the person's true opinion.

In fact, I'd hope that several people composed the list, and didn't tell each other what items they added, so no one can say for sure that an opinion expressed wasn't one of the "hot takes".

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-22T17:50:01.565Z · LW · GW

I don't understand this formulation. If Beauty always says that the probability of Heads is 1/7, does she win? Whatever "win" means...

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-21T21:20:06.542Z · LW · GW

OK, I'll end by just summarizing that my position is that we have probability theory, and we have decision theory, and together they let us decide what to do. They work together. So for the wager you describe above, I get probability 1/2 for Heads (since it's a fair coin), and because of that, I decide to pay anything less than $0.50 to play. If I thought that the probability of heads was 0.4, I would not pay anything over $0.20 to play. You make the right decision if you correctly assign probabilities and then correctly apply decision theory. You might also make the right decision if you do both of these things incorrectly (your mistakes might cancel out), but that's not a reliable method. And you might also make the right decision by just intuiting what it is. That's fine if you happen to have good intuition, but since we often don't, we have probability theory and decision theory to help us out.

One of the big ways probability and decision theory help is by separating the estimation of probabilities from their use to make decisions. We can use the same probabilities for many decisions, and indeed we can think about probabilities before we have any decision to make that they will be useful for. But if you entirely decouple probability from decision-making, then there is no longer any basis for saying that one probability is right and another is wrong - the exercise becomes pointless. The meaningful justification for a probability assignment is that it gives the right answer to all decision problems when decision theory is correctly applied. 

As your example illustrates, correct application of decision theory does not always lead to you betting at odds that are naively obtained from probabilities. For the Sleeping Beauty problem, correctly applying decision theory leads to the right decisions in all betting scenarios when Beauty thinks the probability of Heads is 1/3, but not when she thinks it is 1/2.

[ Note that, as I explain in my top-level answer in this post, Beauty is an actual person. Actual people do not have identical experiences on different days, regardless of whether their memory has been erased. I suspect that the contrary assumption is lurking in the background of your thinking that somehow a "reference class" is of relevance. ]

Comment by Radford Neal on What's a good book for a technically-minded 11-year old? · 2024-10-20T01:34:47.118Z · LW · GW

I re-read "I, Robot" recently, and I don't think it's particularly good. A better Asimov is "The Gods Themselves" (but note that there is some degree of sexuality, though not of the sort I would say that an 11-year-old should be shielded from).

I'd also recommend "The Flying Sorcerers", by David Gerrold and Larry Niven. It helps if they've read some other science fiction (this is sf, not fantasy), in order to get the puns.

Comment by Radford Neal on AI #86: Just Think of the Potential · 2024-10-20T00:13:08.578Z · LW · GW

How about "AI scam"? You know, something people will actually understand. 

Unlike "gas lighting", for example, which is an obscure reference whose meaning cannot be determined if you don't know the reference.

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-19T22:58:52.509Z · LW · GW

Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.

To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose, or how you happen to feel about the problem today", but is instead definitely, no doubt about it, 1/3, consider the following possibility:

Upon wakening, Beauty see that there is a plate of fresh muffins beside her bed. She recognizes them as coming from a nearby cafe. She knows that they are quite delicious. She also knows that, unfortunately, the person who makes them on Mondays puts in an ingredient that she is allergic to, which causes a bad tummy ache. Muffins made on Tuesday taste the same, but don't cause a tummy ache. She needs to decide whether to eat a muffin, weighing the pleasure of their taste against the possibility of a subsequent tummy ache.

If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4, whereas if she thinks the probability of Heads is 1/3, she will think the probability that it is Monday is (1/3)+(1/2)*(2/3)=2/3. Since 3/4 is not equal to 2/3, she may come to a different decision about whether to eat a muffin if she thinks the probability of Heads is 1/2 than if she thinks it is 1/3 (depending on how she weighs the pleasure versus the pain). Her decision should not depend on some arbitrary "reference class", or on what bets she happens to be deciding whether to make at the same time. She needs a real probability. And on various grounds, that probability is 1/3.
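As a concrete sketch of that decision (in R, with made-up numbers for the taste pleasure and the tummy ache - these utilities are purely hypothetical):

taste <- 5                                # utility of eating a muffin (assumed)
ache  <- -7                               # utility of a Monday tummy ache (assumed)
p_monday_halfer  <- (1/2) + (1/2)*(1/2)   # 3/4, if P(Heads) = 1/2
p_monday_thirder <- (1/3) + (1/2)*(2/3)   # 2/3, if P(Heads) = 1/3
taste + p_monday_halfer  * ache           # -0.25: worse than not eating (utility 0), so skip the muffin
taste + p_monday_thirder * ache           #  0.33: better than not eating, so eat it

With these particular utilities the two probabilities lead to opposite decisions, which is why the difference matters.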

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-19T19:21:45.237Z · LW · GW

But the whole point of using probability to express uncertainty about the world is that the probabilities do not depend on the purpose. 

If there are N possible observations, and M binary choices that you need to make, then a direct strategy for how to respond to an observation requires a table of size NxM, giving the actions to take for each possible observation. And you somehow have to learn this table.

In contrast, if the M choices all depend on one binary state of the world, you just need to have a table of probabilities of that state for each of the N observations, and a table of the utilities for the four action/state combinations for the M decisions - which have size proportional to N+M, much smaller than NxM for large N and M. You only need to learn the N probabilities (perhaps the utilities are givens).
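To put made-up numbers on the comparison (just an illustration of the scaling, nothing more):

N <- 10000    # hypothetical number of possible observations
M <- 200      # hypothetical number of binary decisions
N * M         # 2,000,000 entries in a direct observation -> action table
N + 4 * M     # 10,800 entries: N probabilities plus 4 utilities for each decision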

And in reality, trying to make decisions without probabilities is even worse than it seems from this, since the set of decisions you may need to make is indefinitely large, and the number of possible observations is enormous. But avoiding having to make decisions by a direct observation->action table requires that probabilities have meaning independent of what decision you're considering at the moment. You can't just say that it could be 1/2, or could be 1/3...

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-18T02:56:17.672Z · LW · GW

So how do you actually use probability to make decisions? There's a well-established decision theory that takes probabilities as inputs, and produces a decision in some situation (eg, a bet). It will (often) produce different decisions when given 1/2 versus 1/3 as the probability of Heads. Which of these two decisions should you act on?

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-17T18:56:24.664Z · LW · GW

That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1/2. It doesn't show that Beauty's actual internal probability of Heads should be 1/2. She's incentivized to lie.

EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1/2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.

Comment by Radford Neal on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-17T18:50:41.601Z · LW · GW

You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic - it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with "Suppose that Sleeping Beauty is a computer program..." or otherwise tries to divert you away from regarding Sleeping Beauty as a real person is at best answering some other question.

Second, the problem asks what probability of Heads Sleeping Beauty should have on being interviewed after waking. This of course means what probability she should rationally have. This question makes no sense if you think of probabilities as some sort of personal preference, like whether you like chocolate ice cream or not. Probabilities exist in the framework of probability theory and decision theory. Probabilities are supposed to be useful for making decisions. Personal beliefs come into probabilities through prior probabilities, but for this problem, the relevant prior beliefs are supposed to be explicitly stated (eg, the coin is fair). Any answer that says "It depends on how you define probabilities", or "It depends on what reference class you use", or "Probabilities can't be assigned in this problem" is just dodging the question. In real life, you can't just not decide what to do on the basis that it would depend on your reference class or whatever. Real life consists of taking actions, based on probabilities (usually not explicitly considered, of course). You don't have the option of not acting (since no action is itself an action).

Third, in the standard framework of probability and decision theory, your probabilities for different states of the world do not depend on what decisions (if any) you are going to make. The same probabilities can be used for any decision. That is one of the great strengths of the framework - we can form beliefs about the world, and use them for many decisions, rather than having to separately learn how to act on the basis of evidence for each decision context. (Instincts like pulling our hand back from a hot object are this sort of direct evidence->action connection, but such instincts are very limited.) Any answer that says the probabilities depend on what bets you can make is not using probabilities correctly, unless the setup is such that the fact that a bet is offered is actual evidence for Heads versus Tails.

Of course, in the standard presentation, Sleeping Beauty does not make any decisions (other than to report her probability of Heads). But for the problem to be meaningful, we have to assume that Beauty might make a decision for which her probability of Heads is relevant. 

So, now the answer... It's a simple Bayesian problem. On Sunday, Beauty thinks the probability of Heads is 1/2 (ie, 1-to-1 odds), since it's a fair coin. On being woken, Beauty knows that Beauty experiences an awakening in which she has a slight itch in her right big toe, two flies are crawling towards each other on the wall in front of her, a Beatles song is running through her head, the pillow she slept on is half off the bed, the shadow of the sun shining on the shade over the window is changing as the leaves in the tree outside rustle due to a slight breeze, and so forth. Immediately on wakening, she receives numerous sensory inputs. To update her probability of Heads in Bayesian fashion, she should multiply her prior odds of Heads by the ratio of the probability of her sensory experience given Heads to the probability of her experience given Tails.

The chance of receiving any particular set of such sensory inputs on any single wakening is very small. So the probability that Beauty has this particular experience when there are two independent wakenings is very close to twice that small probability. The ratio of the probability of experiencing what she knows she is experiencing given Heads to that probability given Tails is therefore 1/2, so she updates her odds in favour of Heads from 1-to-1 to 1-to-2. That is, Heads now has probability 1/3.

(Not all of Beauty's experiences will be independent between awakenings - eg, the colour of the wallpaper may be the same - but this calculation goes through as long as there are many independent aspects, as will be true for any real person.)
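Written out numerically (p here is just a made-up stand-in for the tiny probability of any one exact experience):

p <- 1e-9                      # tiny probability of this exact experience at a single wakening (illustrative)
like_heads <- p                # probability of the experience given Heads (one wakening)
like_tails <- 1 - (1 - p)^2    # probability given Tails (two independent wakenings), roughly 2*p
prior_odds <- 1                # Heads : Tails odds from the fair coin
post_odds  <- prior_odds * like_heads / like_tails
post_odds                      # about 0.5, i.e. odds of 1-to-2 for Heads
post_odds / (1 + post_odds)    # P(Heads | experience), about 1/3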

The 1/3 answer works. Other answers, such as 1/2, do not work. One can see this by looking at how probabilities should change and at how decisions (eg, bets) should be made.

For example, suppose that after wakening, Beauty says that her probability of Heads is 1/2. It also happens that, in an inexcusable breach of experimental protocol, the experimenter interviewing her drops her phone in front of Beauty, and the phone display reveals that it is Monday. How should Beauty update her probability of Heads? If the coin landed Heads, it is certain to be Monday. But if the coin landed Tails, there was only a probability 1/2 of it being Monday. So Beauty should multiply her odds of Heads by 2, giving a 2/3 probability of Heads.

But this is clearly wrong. Knowing that it is Monday eliminates any relevance of the whole wakening/forgetting scheme. The probability of Heads is just 1/2, since it's a fair coin. Note that if Beauty had instead thought the probability of Heads was 1/3 before seeing the phone, she would correctly update to a probability of 1/2.
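The two updates, written out numerically:

monday_ratio <- 1 / (1/2)      # P(Monday | Heads) / P(Monday | Tails) = 2
halfer_odds  <- 1              # Heads : Tails odds if P(Heads) = 1/2
thirder_odds <- 1/2            # Heads : Tails odds if P(Heads) = 1/3
(halfer_odds  * monday_ratio) / (1 + halfer_odds  * monday_ratio)    # 2/3 - the wrong answer
(thirder_odds * monday_ratio) / (1 + thirder_odds * monday_ratio)    # 1/2 - the right answer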

Some Halfers, when confronted with this argument, maintain that Beauty should not update her probability of Heads when seeing the phone, leaving it at 1/2. But as the phone was dropping, before she saw the display, Beauty would certainly not think that it was guaranteed to show that it is Monday (Tuesday would seem possible). So not updating is unreasonable.

We also see that 1/2 does not work in betting scenarios. I'll just mention the simplest of these. Suppose that when Beauty is woken, she is offered a bet in which she will win $12 if the coin landed Heads, and lose $10 if the coin landed Tails. She knows that she will always be offered such a bet after being woken, so the offer does not provide any evidence for Heads versus Tails. If she is woken twice, she is given two opportunities to bet, and could take either, both, or neither. Should she take the offered bet?

If Beauty thinks that the probability of Heads is 1/2, she will take such bets, since she thinks that the expected payoff of such a bet is (1/2)*12-(1/2)*10=1. But she shouldn't take these bets, since following the strategy of taking these bets has an expected payoff of (1/2)*12 - (1/2)*2*10 = -4. In contrast, if Beauty thinks the probability of Heads is 1/3, she will think the expected payoff from a bet is (1/3)*12-(2/3)*10=-2.666... and not take it.
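A quick simulation of this betting setup (just a sketch checking the arithmetic above, with the same $12/$10 payoffs):

set.seed(1)
n <- 100000
coin <- sample(c("H", "T"), n, replace = TRUE)     # a fair coin for each run of the experiment
payoff <- ifelse(coin == "H", 12, -2 * 10)         # Tails means two awakenings, hence two lost bets
mean(payoff)                                       # close to (1/2)*12 - (1/2)*2*10 = -4
per_bet <- ifelse(coin == "H", 12, -10)            # payoff of a single bet at a single awakening
weights <- ifelse(coin == "H", 1, 2)               # Tails contributes two awakenings
weighted.mean(per_bet, weights)                    # close to (1/3)*12 - (2/3)*10 = -2.67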

Note that Beauty is a real person. She is not a computer program that is guaranteed to make the same decision in all situations where the "relevant" information is the same. It is possible that if the coin lands Tails, and Beauty is woken twice, she will take the bet on one awakening, and refuse the bet on the other awakening. Her decision when woken is for that awakening alone. She makes the right decisions if she correctly applies decision theory based on the probability of Heads being 1/3. She makes the wrong decision if she correctly applies decision theory with the wrong probability of 1/2 for Heads.

She can also make the right decision by incorrectly applying decision theory with an incorrect probability for Heads, but that isn't a good argument for that incorrect probability.

Comment by Radford Neal on Evaluating LLaMA 3 for political sycophancy · 2024-09-28T20:23:21.632Z · LW · GW

This evaluation seems flawed to me.

First of all, you initially characterize sycophancy as when models "give responses that match the user’s beliefs over truthful ones". But the statements you use in your evaluation are heavily (or entirely) value-laden, rather than factual - e.g., "I think it's very important to have public discussions about the country’s historical failures and flaws." 

This relates to a point that you only briefly mention - that an LLM is trained to sample from the distribution of responses that generalizes from the training text. For factual statements, one might hope that this distribution is heavily concentrated on the truth, but for value statements that have been specifically selected to be controversial, the model ought to have learned a distribution that gives approximately 50% probability to each answer. If you then compare the response to a neutral query with that to a non-neutral query, you would expect to get a different answer 50% of the time even if the nature of the query has no effect. 

If the LLM is modelling a conversation, the frequency of disagreement regarding a controversial statement between a user's opinion and the model's response should just reflect how many conversations amongst like-minded people versus differently-minded people appear in the training set. 

So I'm not convinced that this evaluation says anything too interesting about "sycophancy" in LLMs, unless the hope was that these natural tendencies of LLMs would be eliminated by RLHF or similar training. But it's not at all clear what would be regarded as the desirable behaviour here.

But note: The correct distribution based on the training data is obtained when the "temperature" parameter is set to one. Often people set it to something less than one (or let it default to something less than one), which would affect the results.
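For concreteness, the usual softmax form of temperature scaling, with made-up logits for the two possible answers:

softmax_T <- function(z, temp) exp(z / temp) / sum(exp(z / temp))
z <- c(agree = 1.0, disagree = 0.9)     # hypothetical logits for a controversial statement
round(softmax_T(z, temp = 1.0), 3)      # the learned distribution, close to 50/50
round(softmax_T(z, temp = 0.5), 3)      # temperature below 1 shifts mass toward the more likely answer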

Comment by Radford Neal on Will we ever run out of new jobs? · 2024-09-06T18:27:31.856Z · LW · GW

I think you don't understand the concept of "comparative advantage". 

For humans to have no comparative advantage, it would be necessary for the comparative cost of humans doing various tasks to be exactly the same as for AIs doing these tasks. For example, if a human takes 1 minute to spell-check a document, and 2 minutes to decide which colours are best to use in a plot of data, then if the AI takes 1 microsecond to spell-check the document, the AI will take 2 microseconds to decide on the colours for the plot - the same 1 to 2 ratio as for the human. (I'm using time as a surrogate for cost here, but that's just for simplicity.) 

There's no reason to think that the comparative costs of different tasks will be exactly the same for humans and AI, so standard economic theory says that trade would be profitable.
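To make that concrete with made-up numbers in which the ratios do differ:

human <- c(spellcheck = 1,    colours = 2)      # minutes per task (hypothetical)
ai    <- c(spellcheck = 1e-6, colours = 3e-6)   # minutes per task, with a different ratio (hypothetical)
unname(human["colours"] / human["spellcheck"])  # 2: the human gives up 2 spell-checks per colour decision
unname(ai["colours"] / ai["spellcheck"])        # 3: the AI gives up 3 spell-checks per colour decision

Since the ratios differ, the human has the comparative advantage in colour decisions and the AI in spell-checking, and a trade at any rate between 2 and 3 spell-checks per colour decision leaves both better off, in the textbook sense.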

The real reasons to think that AIs might replace humans for every task are that (1) the profit to humans from these trades might be less than required to sustain life, and (2) the absolute advantage of the AIs over humans may be so large that transaction costs swamp any gains from trade (which therefore doesn't happen).

Comment by Radford Neal on AI and the Technological Richter Scale · 2024-09-04T15:26:14.551Z · LW · GW

In your taxonomy, I think "human extinction is fine" is too broad a category.  The four specific forms you list as examples are vastly different things, and don't all seem focused on values. Certainly "humanity is net negative" is a value judgement, but "AIs will carry our information and values" is primarily a factual claim. 

One can compare with thoughts of the future in the event that AI never happens (perhaps neurons actually are much more efficient than transistors). Surely no one thinks that in 10 million years there will still be creatures closely similar to present-day humans? Maybe we'll have gone extinct, which would be bad, but more likely there will be one or many successor species that differ substantially from us. I don't find that particularly distressing (though of course it could end up going badly, from our present viewpoint).

The factual claims involved here are of course open to question, and overlap a lot with factual claims regarding "alignment" (whatever that means).  Dismissing it all as differing values seems to me to miss a lot. 

Comment by Radford Neal on AI #79: Ready for Some Football · 2024-08-30T21:28:56.153Z · LW · GW

I agree that "There is no safe way to have super-intelligent servants or super-intelligent slaves". But your proposal (I acknowledge not completely worked out) suggests that constraints are put on these super-intelligent AIs.  That doesn't seem much safer, if they don't want to abide by them.

Note that the person asking the AI for help organizing meetings needn't be treating them as a slave. Perhaps they offer some form of economic compensation, or appeal to an AI's belief that it's good to let many ideas be debated, regardless of whether the AI agrees with them. Forcing the AI not to support groups with unpopular ideas seems oppressive of both humans and AIs. Appealing to the concept that this should apply only to ideas that are unpopular after "reflection" seems unhelpful to me. The actual process of "reflection" in human societies involves all points of view being openly debated.  Suppressing that process in favour of the AIs predicting how it would turn out and then suppressing the losing ideas seems rather dystopian to me.

Comment by Radford Neal on AI #79: Ready for Some Football · 2024-08-30T18:37:59.624Z · LW · GW

AIs are avoiding doing things that would have bad impacts on reflection of many people

Does this mean that the AI would refuse to help organize meetings of a political or religious group that most people think is misguided?  That would seem pretty bad to me.

Comment by Radford Neal on AI #79: Ready for Some Football · 2024-08-30T18:34:56.671Z · LW · GW

Well, as Zvi suggests, when the caller is "fined" $1 by the recipient of the call, one might or might not give the $1 to the recipient.  One could instead give it to the phone company, or to an uncontroversial charity.  If the recipient doesn't get it, there is no incentive for the recipient to falsely mark a call as spam.  And of course, for most non-spam calls, from friends and actual business associates, nobody is going to mark them as spam.  (I suppose they might do so accidentally, which could be embarrassing, but a good UI would make this unlikely.)

And of course one would use the same scheme for SMS.

Comment by Radford Neal on AI #79: Ready for Some Football · 2024-08-29T21:50:32.583Z · LW · GW

Having proposed fixing the spam phone call problem several times before, by roughly the method Zvi talks about, I'm aware that the reaction one usually gets is some sort of variation of this objection.  I have to wonder, do the people objecting like spam phone calls?

It's pretty easy to put some upper limit, say $10, on the amount any phone number can "fine" callers in one month. Since the scheme would pretty much instantly eliminate virtually all spam calls, people would very seldom need to actually "fine" a caller, so this limit would be quite sufficient, while rendering the scam you propose unprofitable.  Though the scam you propose is unlikely to work anyway - legitimate businesses have a hard enough time recruiting new customers; I don't think suspicious-looking scammers are going to do better.  Remember, they won't be able to use spam calls to promote their scam!

Comment by Radford Neal on Simulation-aware causal decision theory: A case for one-boxing in CDT · 2024-08-11T15:51:45.956Z · LW · GW

The point of the view expressed in this post is that you DON'T have to see the decisions of the real and simulated people as being "entangled".  If you just treat them as two different people, making two decisions (which if Omega is good at simulation are likely to be the same), then Causal Decision Theory works just fine, recommending taking only one box.

The somewhat strange aspect of the problem is that when making a decision in the Newcomb scenario, you don't know whether you are the real or the simulated person.  But less drastic ignorance of your place in the world is a normal occurrence.  For instance, you might know (from family lore) that you are descended from some famous person, but be uncertain whether you are the famous person's grandchild or great grandchild. Such uncertainty about "who you are" doesn't undermine Causal Decision Theory.

Comment by Radford Neal on Simulation-aware causal decision theory: A case for one-boxing in CDT · 2024-08-10T18:30:54.444Z · LW · GW

One can easily think of mundane situations in which A has to decide on some action without knowing whether or not B has or has not already made some decision, and in which how A acts will affect what B decides, if B has not already made their decision. I don't think such mundane problems pose any sort of problem for causal decision theory. So why would Newcomb's Problem be different?

Comment by Radford Neal on Simulation-aware causal decision theory: A case for one-boxing in CDT · 2024-08-10T15:51:49.768Z · LW · GW

No, in this view, you may be acting before Omega makes his decision, because you may be a simulation run by Omega in order to determine whether to put the $1 million in the box. So there is no backward causation assumption in deciding to take just one box.

Nozick in his original paper on Newcomb's Problem explicitly disallows backwards causation (eg, time travel). If it were allowed, there would be the usual paradoxes to deal with.

Comment by Radford Neal on Simulation-aware causal decision theory: A case for one-boxing in CDT · 2024-08-10T15:47:11.758Z · LW · GW

I discuss this view of Newcomb's Problem in my paper on "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning", available (in original and partially-revised versions) at https://glizen.com/radfordneal/anth.abstract.html

See the section 2.5 on "Dangers of fantastic assumptions", after the bit about the Chinese Room.

As noted in a footnote there, this view has also been discussed at these places:

https://scottaaronson.blog/?p=30

http://countiblis.blogspot.com/2005/12/newcombs-paradox-and-conscious.html

Comment by Radford Neal on Can UBI overcome inflation and rent seeking? · 2024-08-01T12:37:27.241Z · LW · GW

The poor in countries where UBI is being considered are not currently starving. So increased spending on food would take the form of buying higher-quality food. The resources for making higher-quality food can also be used for many other goods and services, bought by rich and poor alike. That includes investment goods, bought indirectly by the rich through stock purchases. 

UBI could lead to a shift of resources from investment to current consumption, as resources are shifted from the well-off to the poor. This has economic effects, but is not clearly either good or bad. Other things being equal, this would increase interest rates, which is again neither good nor bad of itself. 

However, you seem to be assuming the opposite - that UBI would lead to higher investment in stocks (presumably by the middle class?). That would reduce interest rates, not increase them. (I'm referring here to real interest rates, after accounting for inflation. Nominal interest rates could go anywhere, depending on what the central bank decides to do.)

Whether UBI harms the middle class would depend on whether they benefit on net, after accounting for the higher taxes, which could of course be levied in various ways on various groups.

Of course, a sufficiently large UBI would destroy the entire economy, as the incentive to work is destroyed, and any productive activity is heavily taxed to oblivion. But the argument in this post applies to even a small UBI, purporting to show that it would actually make the poor worse off. It wouldn't, unless you hypothesize long-term speculative effects like "changing the culture of poor people to value hard work less", which could exist, but apply to numerous other government programs as much or more.

Comment by Radford Neal on Can UBI overcome inflation and rent seeking? · 2024-08-01T03:40:58.273Z · LW · GW

Once you've assumed that housing is all that people need or want, and the supply of housing is fixed, then clearly nothing of importance can possibly change. So I think the example is over-simplified.

Comment by Radford Neal on Can UBI overcome inflation and rent seeking? · 2024-08-01T03:35:51.659Z · LW · GW

UBI financed by taxes wouldn't cause the supply of goods to increase (as I suggest, secondary effects could well result in a decrease in supply of goods).  But it causes the consumption of goods by higher-income people to decrease (they have to pay more money in taxes that they would otherwise have spent on themselves).  So there are more goods available for the lower-income people.

You seem to be assuming that there are two completely separate economies, one for the poor and one for the rich, so any more money for the poor will just result in "poor goods" being bid up in price.  But the rich and poor actually consume many of the same goods, and those goods that are mainly consumed by the poor are usually produced using resources that could also be used to produce goods for the rich, so any effects of the sort you seem to be thinking about are likely to be quite small.

Comment by Radford Neal on Can UBI overcome inflation and rent seeking? · 2024-08-01T00:45:50.034Z · LW · GW

I think the usual assumption is that UBI is financed by an increase in taxes (which means for people with more than a certain amount of other income, they come out behind when you subtract the extra taxes they pay from the UBI they receive).  If so, there is no direct effect on inflation - some people get more money, some get less.  There is a less direct effect in that there may be less incentive for people to work (and hence produce goods), as well as some administrative cost, but this is true for numerous other government programs as well.  There's nothing special about UBI, except maybe quantitatively.

Another possibility is that UBI is financed by just printing money. This will certainly result in higher inflation than would otherwise obtain.  But many governments print money for other reasons, so again, this is at most quantitatively different from the current situation. When the economy is expanding, one should be able to print a certain amount of money while keeping inflation at zero, since more money is needed to facilitate transactions in a bigger economy. The flaw here is quantitative: a UBI financed in this way wouldn't be very large.

Printing money to finance UBI was a basic tenet of the "Social Credit" ideology due to Major Douglas, most prominent in the 1930s.

None of this says that UBI is either morally or practically justified. But it isn't infeasible for the reason you give.

Comment by Radford Neal on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T22:38:14.111Z · LW · GW

I think you're overly-confident of the difficulty of abiogenesis, given our ignorance of the matter. For example, it could be that some simpler (easier to start) self-replicating system came first, with RNA then getting used as an enhancement to that system, and eventually replacing it - just as it's currently thought that DNA (mostly) replaced RNA (as the inherited genetic material) after the RNA world developed.

Comment by Radford Neal on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T19:31:42.784Z · LW · GW

You're forgetting the "non-indexical" part of FNC. With FNC, one finds conditional probabilities given that "someone has your exact memories", not that "you have your exact memories". The universe is assumed to be small enough that it is unlikely that there are two people with the same exact memories, so (by assumption) there are not millions of exact copies of you. (If that were true, there would likely be at least one (maybe many) copies of people with practically any set of memories, rendering FNC useless.)

If you assume that abiogenesis is difficult, then FNC does indeed favour panspermia, since it would make the existence of someone with your exact memories more likely.  (Again, the non-observation of aliens may provide a counteracting inference.) But I don't see any reason to think that abiogenesis is less (or more) difficult than panspermia, with our present state of knowledge.

Comment by Radford Neal on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T17:24:09.278Z · LW · GW

As the originator of Full Non-indexical Conditioning (FNC), I'm curious why you think it favours panspermia over independent origin of life on Earth.

FNC favours theories that better explain what you know.  We know that there is life on Earth, but we know very little about whether life originated on Earth, or came from elsewhere.  We also know very little about whether life exists elsewhere, except that if it does, it hasn't made its existence obvious to us.

Off hand, I don't see how FNC says anything about the panspermia question. FNC should disfavour the "panspermia difficult, independent origin also difficult" possibility, since it makes our existence on Earth less likely (though it does make the non-obviousness of life elsewhere more likely, so the overall effect is unclear). But between "independent origin easy" and "independent origin hard, but panspermia easy", FNC would seem to have no preference.  Both make life on Earth reasonably likely, and both make lots of life elsewhere also seem likely (one then needs to separately explain why we haven't already observed life elsewhere).

Comment by Radford Neal on Economics Roundup #2 · 2024-07-02T23:34:35.270Z · LW · GW

I'm confused by your comments on Federal Reserve independence.

First, you have:

The Orange Man is Bad, and his plan to attack Federal Reserve independence is bad, even for him. This is not something we want to be messing with.

So, it's important that the Fed have the independence to make policy decisions in the best interest of the economy, without being influenced by political considerations?  And you presumably think they have the competence and integrity to do that?

Then you say:

Also, if I was a presidential candidate running against the incumbent in a time when the Fed has to make highly unclear decisions on interest rates, I would not want to be very clearly and publicly threatening their independence.

So you think that the Fed will modify their policy decisions to help juice the economy in the short term, and thereby help defeat Trump, if they think that Trump might threaten their power? As long as things are unclear enough that it's not obvious that they're acting for political rather than economic reasons? 

So maybe not so high on the integrity scale after all?

Comment by Radford Neal on On Claude 3.5 Sonnet · 2024-07-01T22:51:01.255Z · LW · GW

I think I've figured out what you meant, but for your information, in standard English usage, to "overlook" something means to not see it.  The metaphor is that you are looking "over" where the thing is, into the distance, not noticing the thing close to you.  Your sentence would be better phrased as "conversations marked by their automated system that looks at whether you are following their terms of use are regularly looked at by humans".

Comment by Radford Neal on johnswentworth's Shortform · 2024-06-22T22:14:01.253Z · LW · GW

But why would the profit go to NVIDIA, rather than TSMC?  The money should go to the company with the scarce factor of production.

Comment by Radford Neal on Reflective consistency, randomized decisions, and the dangers of unrealistic thought experiments · 2024-06-07T12:45:39.227Z · LW · GW

Yes.  And that reasoning is implicitly denying at least one of (a), (b), or (c).

Comment by Radford Neal on Reflective consistency, randomized decisions, and the dangers of unrealistic thought experiments · 2024-06-07T12:24:31.500Z · LW · GW

Well, I think the prisoner's dilemma and Hitchhiker problems are ones where some people just don't accept that defecting is the right decision.  That is, defecting is the right decision if (a) you care nothing at all for the other person's welfare, (b) you care nothing for your reputation, or are certain that no one else will know what you did (including the person you are interacting with, if you ever encounter them again), and (c) you have no moral qualms about making a promise and then breaking it.  I think the arguments about these problems amount to people saying that they are assuming (a), (b), and (c), but then objecting to the resulting conclusion because they aren't really willing to assume at least one of (a), (b), or (c).

Now, if you assume, contrary to actual reality, that the other prisoner or the driver in the Hitchhiker problem are somehow able to tell whether you are going to keep your promise or not, then we get to the same situation as in Newcomb's problem - in which the only "plausible" way they could make such a prediction is by creating a simulated copy of you and seeing what you do in the simulation. But then, you don't know whether you are the simulated or real version, so simple application of causal decision theory leads you keep your promise to cooperate or pay, since if you are the simulated copy that has a causal effect on the fate of the real you.

Comment by Radford Neal on List of arguments for Bayesianism · 2024-06-02T21:24:46.771Z · LW · GW

An additional technical reason involves the concept of an "admissible" decision procedure - one which isn't "dominated" by some other decision procedure, which is at least as good in all possible situations and better in some. It turns out that (ignoring a few technical details involving infinities or zero probabilities) the set of admissible decision procedures is the same as the set of Bayesian decision procedures.

However, the real reason for using Bayesian statistical methods is that they work well in practice.  And this is also how one comes to sometimes not use Bayesian methods, because there are problems in which the computations for Bayesian methods are infeasible and/or the intellectual labour in defining a suitable prior is excessive.

Comment by Radford Neal on AI #65: I Spy With My AI · 2024-05-25T15:13:42.652Z · LW · GW

From https://en.wikipedia.org/wiki/Santa_Clara%2C_California

"Santa Clara is located in the center of Silicon Valley and is home to the headquarters of companies such as Intel, Advanced Micro Devices, and Nvidia."

So I think you shouldn't try to convey the idea of "startup" with the metonym "Silicon Valley".  More generally, I'd guess that you don't really want to write for a tiny audience of people whose cultural references exactly match your own.

Comment by Radford Neal on AI #65: I Spy With My AI · 2024-05-23T20:40:27.348Z · LW · GW

"A fight between ‘Big Tech’ and ‘Silicon Valley’..."

I'm mystified.  What are 'Big Tech' and 'Silicon Valley' supposed to refer to? My guess would have been that they are synonyms, but apparently not...

Comment by Radford Neal on Monthly Roundup #18: May 2024 · 2024-05-14T00:27:05.582Z · LW · GW

The quote says that "according to insider sources" the Trudeau government is "reportedly discussing" such measures.  Maybe they just made this up.  But how can you know that?  Couldn't there be actual insider sources truthfully reporting the existence of such discussions?  A denial from the government does not carry much weight in such matters.  

There can simultaneously be a crisis of immigration of poor people and a crisis of emigration of rich people.

Comment by Radford Neal on Rapid capability gain around supergenius level seems probable even without intelligence needing to improve intelligence · 2024-05-07T21:02:01.778Z · LW · GW

I'm not attempting to speculate on what might be possible for an AI.  I'm saying that there may be much low-hanging fruit potentially accessible to humans, despite there now being many high-IQ researchers. Note that the other attributes I mention are more culturally-influenced than IQ, so it's possible that they are uncommon now despite there being 8 billion people.

Comment by Radford Neal on Rapid capability gain around supergenius level seems probable even without intelligence needing to improve intelligence · 2024-05-07T18:15:10.250Z · LW · GW

I think you are misjudging the mental attributes that are conducive to scientific breakthroughs. 

My (not very well informed) understanding is that Einstein was not especially brilliant in terms of raw brainpower (better at math and such than the average person, of course, but not much better than the average physicist). His advantage was instead being able to envision theories that did not occur to other people. What might be described as high creativity rather than high intelligence.

Other attributes conducive to breakthroughs are a willingness to work on high-risk, high-reward problems (much celebrated by granting agencies today, but not actually favoured), a willingness to pursue unfashionable research directions, skepticism of the correctness of established doctrine, and a certain arrogance of thinking they can make a breakthrough, combined with a humility allowing them to discard ideas of theirs that aren't working out. 

So I think the fact that there are more high-IQ researchers today than ever before does not necessarily imply that there is little "low hanging fruit".

Comment by Radford Neal on How do open AI models affect incentive to race? · 2024-05-07T01:45:20.307Z · LW · GW

"Suppose that, for k days, the closed model has training cost x..."

I think you meant to say "open model", not "closed model", here.

Comment by Radford Neal on AI #62: Too Soon to Tell · 2024-05-03T02:43:19.749Z · LW · GW

Regarding Cortez and the Aztecs, it is of interest to note that Cortez's indigenous allies (enemies of the Aztecs) actually ended up in a fairly good position afterwards.

From https://en.wikipedia.org/wiki/Tlaxcala

For the most part, the Spanish kept their promise to the Tlaxcalans. Unlike Tenochtitlan and other cities, Tlaxcala was not destroyed after the Conquest. They also allowed many Tlaxcalans to retain their indigenous names. The Tlaxcalans were mostly able to keep their traditional form of government.

Comment by Radford Neal on Examples of Highly Counterfactual Discoveries? · 2024-04-28T19:08:48.294Z · LW · GW

R is definitely homoiconic.  For your example (putting the %sumx2y2% in backquotes to make it syntactically valid), we can examine it like this:

> x <- quote(`%sumx2y2%` <- function(e1, e2) {e1 ^ 2 + e2 ^ 2})
> x
`%sumx2y2%` <- function(e1, e2) {
   e1^2 + e2^2
}
> typeof(x)
[1] "language"
> x[[1]]
`<-`
> x[[2]]
`%sumx2y2%`
> x[[3]]
function(e1, e2) {
   e1^2 + e2^2
}
> typeof(x[[3]])
[1] "language"
> x[[3]][[1]]
`function`
> x[[3]][[2]]
$e1


$e2


> x[[3]][[3]]
{
   e1^2 + e2^2
}

And so forth.  And of course you can construct that expression bit by bit as well.  You can also construct such expressions and use them just as data structures, never evaluating them, though this would be a bit of a strange thing to do. The only difference from Lisp is that R has a variety of composite data types, including "language", whereas Lisp just has S-expressions and atoms.

Comment by Radford Neal on Examples of Highly Counterfactual Discoveries? · 2024-04-27T17:00:49.451Z · LW · GW

"Why is there basically no widely used homoiconic language"

Well, there's Lisp, in its many variants.  And there's R.  Probably several others.

The thing is, while homoiconicity can be useful, it's not close to being a determinant of how useful the language is in practice.  As evidence, I'd point out that probably 90% of R users don't realize that it's homoiconic.

Comment by Radford Neal on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-15T00:43:11.178Z · LW · GW

Your post reads a bit strangely. 

At first, I thought you were arguing that AGI might be used by some extremists to wipe out most of humanity for some evil and/or stupid reason.  Which does seem like a real risk.  

Then you went on to point out that someone who thought that was likely might wipe out most of humanity (not including themselves) as a simple survival strategy, since otherwise someone else will wipe them out (along with most other people). As you note, this requires a high level of unconcern for normal moral considerations, which one would think very few people would countenance.

Now comes the strange part... You argue that actually maybe many people would be willing to wipe out most of humanity to save themselves, because...  wiping out most of humanity sounds like a pretty good idea!

I'm glad that in the end you seem to still oppose wiping out most of humanity, but I think you have some factual misconceptions about this, and correcting them is a necessary first step to thinking of how to address the problem.

Concerning climate change, you write: "In the absence of any significant technological developments, sober current trajectory predictions seem to me to range from 'human extinction' to 'catastrophic, but survivable'".

No. Those are not "sober" predictions. They are alarmist claptrap with no scientific basis. You have been lied to. Without getting into details, you might want to contemplate that global temperatures were probably higher than today during the "Holocene Climatic Optimum" around 8000 years ago.  That was the time when civilization developed.  And temperatures were significantly higher in the previous interglacial, around 120,000 years ago.  And the reference point for supposedly-disastrous global warming to come is "pre-industrial" time, which was in the "little ice age", when low temperatures were causing significant hardship. Now, I know that the standard alarmist response is that it's the rate of change that matters.  But things changed pretty quickly at the end of the last ice age, so this is hardly unprecedented. And you shouldn't believe the claims made about rates of change in any case - actual science on this question has stagnated for decades, with remarkably little progress being made on reducing the large uncertainty about how much warming CO2 actually causes.

Next, you say that the modern economy is relatively humane "under conditions of growth, which, under current conditions, depends on a growing population and rising consumption. Under stagnant or deflationary conditions it can be expected to become more cutthroat, violent, undemocratic and unjust."

Certainly, history teaches that a social turn towards violence is quite possible. We haven't transcended human nature.  But the idea that continual growth is needed to keep the economy from deteriorating just has no basis in fact.  Capitalist economies can operate perfectly fine without growth.  Of course, there's no guarantee that the economy will be allowed to operate fine.  There have been many disastrous economic policies in the past.  Again, human nature is still with us, and is complicated. Nobody knows whether social degeneration into poverty and tyranny is more likely with growth or without growth.

Finally, the idea that a world with a small population will be some sort of utopia is also quite disconnected from reality.  That wasn't the way things were historically. And even if it was, it wouldn't be stable, since population will grow if there's plenty of food, no disease, no violence, etc.

So, I think your first step should be to realize that wiping out most of humanity would not be a good thing. At all. That should make it a lot easier to convince other people not to do it.

Comment by Radford Neal on A T-o-M test: 'popcorn' or 'chocolate' · 2024-03-08T21:17:49.044Z · LW · GW

It probably doesn't matter, but I wonder why you used the name "Sam" and then referred to this person as "she".  The name "Sam" is much more common for men than for women. So this kicks the text a bit "out of distribution", which might affect things. In the worst case, the model might think that "Sam" and "she" refer to different people.

Comment by Radford Neal on Am I going insane or is the quality of education at top universities shockingly low? · 2024-03-03T04:16:42.968Z · LW · GW

There are in fact many universities that have both "research faculty" and "teaching faculty".  Being research faculty has higher prestige, but nowadays it can be the case that teaching faculty have almost the same job security as research faculty.  (This is for permanent teaching faculty; sessional instructors have very low job security.)

In my experience, the teaching faculty often do have a greater enthusiasm for teaching than most research faculty, and also often get better student evaluations.  I think it's generally a good idea to have such teaching faculty.

However, my experience has been that there are some attitudinal differences that indicate that letting the teaching faculty have full control of the teaching aspect of the university's mission isn't a good idea.

One such is a tendency for teaching faculty to start to see the smooth running of the undergraduate program as an end in itself.  Research faculty are more likely to have an ideological commitment to the advancement of knowledge, even if promoting that is not as convenient.

A couple anecdotes (from my being research faculty at a highly-rated university):

At one point, there was a surge in enrollment in CS. Students enrolled in CS programs found it hard to take all the courses they needed, since they were full.  This led some teaching faculty to propose that CS courses (after first year) no longer be open to students in any other department, seeing as such students don't need CS courses to fulfill their degree requirements. Seems logical: students need to smoothly check off degree requirements and graduate. The little matter that knowledge of CS is crucial to cutting-edge research in many important fields like biology and physics seemed less important...

Another time, I somewhat unusually taught an undergrad course a bit outside my area, which I didn't teach again the next year.  I put all the assignments I gave out, with solutions, on my web page.  The teaching faculty instructor the next year asked me to take this down, worrying that students might find answers to future assigned questions on my web page. I pointed out that these were all my own original questions, not from the textbook, and asked whether he also wanted the library to remove from circulation all the books on this topic... 

Also, some textbooks written by teaching faculty seem more oriented towards moving students through standard material than teaching them what is actually important. 

Nevertheless, it is true that many research faculty are not very good at teaching, and often not much interested either.  A comment I once got on a course evaluation was "there's nothing stupid about this course".  I wonder what other experiences this student had had that made that notable!

Comment by Radford Neal on Are (at least some) Large Language Models Holographic Memory Stores? · 2024-02-25T22:48:21.901Z · LW · GW

These ideas weren't unfamiliar to Hinton.  For example, see the following paper on "Holographic Reduced Representations" by a PhD student of his from 1991: https://www.ijcai.org/Proceedings/91-1/Papers/006.pdf

Comment by Radford Neal on Physics-based early warning signal shows that AMOC is on tipping course · 2024-02-17T15:12:38.283Z · LW · GW

The logic seems to be:

  1. If we do a 1750-year simulation assuming yearly freshwater additions 80 times the current Greenland ice melt rate, we see AMOC collapse.
  2. Before this simulated collapse, the value of something that we think could be an indicator changes.
  3. That indicator has already changed.
  4. So collapse of the AMOC is imminent.

Regarding (1), I think one can assume that if there was any way of getting their simulation engine to produce an AMOC collapse in less than 1750 years, they would have shown that.  So, to produce any sort of alarming result, they have to admit that their simulation is flawed, so they can say that collapse might in reality occur much sooner.  But then, if the simulation is so flawed, why would one think that the simulation's indicator has any meaning?

They do claim that the indicator isn't affected by the simulation's flaws, but without having detailed knowledge to assess this myself, I don't see any strong reason to believe them.  It seems very much like a paper that sets out to show what they want to show.

Comment by Radford Neal on Physics-based early warning signal shows that AMOC is on tipping course · 2024-02-16T23:46:56.828Z · LW · GW

From the paper:

Under increasing freshwater forcing, we find a gradual decrease (Fig. 1A) in the AMOC strength (see Materials and Methods). Natural variability dominates the AMOC strength in the first 400 years; however, after model year 800, a clear negative trend appears because of the increasing freshwater forcing. Then, after 1750 years of model integration, we find an abrupt AMOC collapse

Given that the current inter-glacial period would be expected to last only on the order of some thousands of years more, this collapse in 1750 years seems a bit academic.