Posts

Is LLM Translation Without Rosetta Stone possible? 2024-04-11T00:36:46.568Z
Are Intelligence and Generality Orthogonal? 2022-07-18T20:07:44.694Z

Comments

Comment by cubefox on Generalized Stat Mech: The Boltzmann Approach · 2024-04-13T10:14:03.314Z · LW · GW

One problem with Boltzmann's derivation of the second law of thermodynamics is that it "proves too much": an analogous derivation also says that entropy "increases" in the past direction, not just in the future direction. So we should assume that entropy is at its lowest right now (as you are reading these words), rather than at the beginning. It basically says that the past looked like the future, just mirrored at the present moment; e.g. we grow older in both the past and the future direction. Our memories to the contrary just emerged out of nothing (after we emerged out of a grave), just like we will forget them in the future.

This problem went largely unnoticed for many years (though some famous physicists did notice it, as Barry Loewer, Albert's philosophical partner, points out in an interesting interview with Sean Carroll), until David Albert pointed it out more explicitly some 20 years ago. To "fix" the issue, we have to add, as an ad-hoc assumption, the Past Hypothesis, which simply asserts that entropy at the beginning of the universe was minimal.

The problem here is that the Past Hypothesis can't be supported by empirical evidence like we would naively expect, as its negation predicts that all our records of the past are misleading. So we have to resort to more abstract arguments in its favor. I haven't seen such an account though. David Albert has a short footnote on how assuming a high entropy past would be "epistemically unstable" (presumably because the entropy being at its lowest "now" is a moving target), but that is far from a precise argument.

Comment by cubefox on Is LLM Translation Without Rosetta Stone possible? · 2024-04-12T13:00:27.646Z · LW · GW

From the post:

Suppose that we want to translate between English and an alien language (Klingon). We have plenty of Klingon text, and separately we have plenty of English text, but it’s not matched up and there are no bilingual speakers.

We train GPT on a mix of English and Klingon text and find that it becomes fluent in both. In some sense this model “knows” quite a lot about both Klingon and English, and so it should be able to read a sentence in one language, understand it, and then express the same idea in the other language. But it’s not clear how we could train a translation model.

So he talks about the difficulty of judging whether an unsupervised translation is good, since there are no independent raters who understand both English and Alienese, so translations can't be improved with RLHF.

He posted this before OpenAI succeeded in applying RLHF to LLMs. I now think RLHF generally doesn't improve translation ability much anyway compared to prompting a foundation model. Based on what we have seen, it seems generally hard to improve raw LLM abilities with RLHF. Even if RLHF does improve translation relative to good prompting, I would assume doing RLHF on some known translation pairs (like English and Chinese) would also help for other pairs which weren't mentioned in the RLHF data, e.g. by encouraging the model to mention its uncertainty about the meaning of certain terms when doing translations. Though again, this could likely be achieved with prompting as well.

He also mentions the more general problem of language models not knowing why they believe what they believe. If a model translates X as Y rather than as Z, it can't provide the reasons for its decision (like pointing to specific statistics about the training data), except via post hoc rationalisation / confabulation.

Comment by cubefox on Is LLM Translation Without Rosetta Stone possible? · 2024-04-11T12:58:18.131Z · LW · GW

I guess my question would then be whether the translation would work if neither language contained any information on microphysics or advanced math. Would the model be able to translate e.g. "z;0FK(JjjWCxN" into "fruit"?

Comment by cubefox on Is LLM Translation Without Rosetta Stone possible? · 2024-04-11T12:41:05.274Z · LW · GW

I think this is almost impossible for humans to do, even with a group of humans and decades of research. Otherwise we wouldn't have needed the Rosetta Stone to read Egyptian hieroglyphs, and we would long since have deciphered the Voynich manuscript.

Comment by cubefox on Is LLM Translation Without Rosetta Stone possible? · 2024-04-11T03:14:25.949Z · LW · GW

Interesting reference! So an unsupervised approach from 2017/2018, presumably somewhat primitive by today's standards, already works quite well for English/French translation. This provides some evidence that the (more advanced?) LLM approach, or something similar, would actually work for English/Alienese.

Of course English and French are historically related, and arose on the same planet while being used by the same type of organism. So they are necessarily quite similar in terms of the concepts they encode. English and Alienese would be much more different and harder to translate.

But if it worked, it would mean that sufficiently long messages, with enough effort, basically translate themselves. A spiritual successor to the Pioneer plaque and the Arecibo message, instead of some galaxy brained hopefully-universally-readable message, would simply consist of several terabytes of human written text. Smart aliens could use the text to train a self-supervised Earthling/Alienese translation model, and then use this model to translate our text.

Comment by cubefox on The 2nd Demographic Transition · 2024-04-10T21:49:44.649Z · LW · GW

Time spent on care per child could be an effect of decreasing fertility instead of a cause. The fewer children you have, the more time you can spend on each one.

Comment by cubefox on How We Picture Bayesian Agents · 2024-04-10T08:48:21.054Z · LW · GW

Do you have an example?

Say I have the visual impression of a rose, presumably caused by a rose in front of me. Do I then update beliefs involving this rose? And afterwards beliefs about things which caused the rose to exist? E.g. about the gardener? Or perhaps one could say my observation of a rose was caused by my own behavior? Head movements, plans etc.

Comment by cubefox on How We Picture Bayesian Agents · 2024-04-09T22:47:22.195Z · LW · GW

Updates are performed via some sort of message-passing; we expect that the messages don’t typically need to propagate very far.

I'm not sure I understand this -- very far from where? Where do we start with updating? Which beliefs/latents are updated first?

Comment by cubefox on The 2nd Demographic Transition · 2024-04-08T12:57:40.673Z · LW · GW

To the person who disagree voted: I assume you know a better or equally good explanation for the drop?

Comment by cubefox on The 2nd Demographic Transition · 2024-04-08T07:21:59.136Z · LW · GW

Of course this isn't a proof, but it's the best explanation for the drop I have heard so far.

Comment by cubefox on The 2nd Demographic Transition · 2024-04-08T06:58:41.792Z · LW · GW

Unfortunately I don't have such data. But regarding Japan, I found this chart:

Japanese fertility

Comment by cubefox on The Case for Predictive Models · 2024-04-07T14:58:39.600Z · LW · GW

Anything that can be measured can be predicted, but the inverse is also true. Whatever can’t be measured is necessarily excluded. A model that is trained to predict based on images recorded by digital cameras likely learns to predict what images will be recorded by digital cameras – not the underlying reality. If the model believes that the device recording a situation will be hacked to show a different outcome, then the correct prediction for it to make will be that false reading.

LeCun expects that future models which do self-supervised learning on sensory data instead of text won't predict this sensory data directly, but instead only an embedding. He calls this Joint Embedding Predictive Architecture. The reason is that, unlike text in LLMs, sensory data is much higher dimensional and has a large amount of unpredictable noise and redundancy, which makes the usual token-prediction approach unfeasible.
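To make the contrast concrete, here is a minimal toy sketch of the JEPA idea (my own illustration, not LeCun's actual architecture; the layer sizes, the MSE loss, and the frozen target encoder are simplifying assumptions):

```python
# Toy JEPA-style training step: predict the *embedding* of the target, not the raw data.
import torch
import torch.nn as nn

dim = 64
context_encoder = nn.Sequential(nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim))
target_encoder = nn.Sequential(nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

context = torch.randn(32, 128)  # e.g. the observed part of the sensory input
target = torch.randn(32, 128)   # e.g. the part to be predicted

with torch.no_grad():           # target encoder not trained by this loss (illustrative choice)
    target_embedding = target_encoder(target)

predicted_embedding = predictor(context_encoder(context))

# The loss lives in embedding space, so the model never has to reproduce
# pixel-level noise or redundant detail of the raw sensory data.
loss = nn.functional.mse_loss(predicted_embedding, target_embedding)
loss.backward()
```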

If that is right, future predictive models won't try to predict exact images of a camera anyway. What it predicts may be similar to a belief in humans. Then the challenge is presumably to translate its prediction from representation space back to something humans can understand, like text, in a way that doesn't incentivize deception. This translation may then well mention things like the camera being hacked.

Comment by cubefox on The 2nd Demographic Transition · 2024-04-06T20:27:31.468Z · LW · GW

Looking just at a U-shaped graph isn't very informative, as it neglects the relative size of the population. You can actually see from the circle sizes in your third graph that there are far fewer people on the right (high-income) side of the U shape. This doesn't warrant optimism. One has to actually look at a scatter plot and at the correlation coefficient.

At least on a country level, the correlation between IQ and fertility is strongly negative:

The Pearson correlations between national IQ scores and the three national fertility indicators were as follows; Total Fertility Rate (r = − 0.71, p < 0.01), Birth Rate (r = − 0.75, p < 0.01), and Population Growth Rate (r = − 0.52, p < 0.01). source

This looks very bad.

You also mention data supporting an apparent reversal of the trend in a few high-income countries. These aren't a lot of data points, so I don't know how strong and significant those correlations are -- probably not very, as they include only 13 data points. Moreover, they don't include South Korea, which has seen a massive decline in fertility. Also note that the chart you include shows two graphs with different y-axis scales, which makes present fertility look higher than it would on a common scale -- which is somewhat misleading.

You also say that the relationship between fertility and GDP is U-shaped, but it rather appears to be only L-shaped, with much more population weight on the left side, which is bad.

I would also highlight that the opportunity cost theory is usually a bit more sophisticated than presented here. The theory is that women tend to determine the decision of whether to have children and prefer to raise them themselves, and that they tend to prefer men with higher income than themselves. So if the woman earns significantly less than the man, her opportunity cost for having children instead of a career is small, because the man is the main breadwinner. If the income of the woman is the same or higher than of her partner, her opportunity cost of having children is high.

This theory says fertility isn't so much about absolute income, but about relative income between men and women. It explains why more gender egalitarian countries have lower fertility rates: Because the income of men and women is more similar due to women having careers. It also predicts that any possible positive relationship between high percentile IQ and fertility is determined by couples where the man has a higher IQ than the woman, but not the other way round -- because the latter case would likely mean that the woman is the main breadwinner, in which case she would be less likely to have children.

Comment by cubefox on Best in Class Life Improvement · 2024-04-04T13:58:20.348Z · LW · GW

nicotine is habit-building more than it is directly addictive

This seems doubtful. Various other sources have described nicotine as highly addictive, comparable to various "hard" drugs. One piece of evidence: coffee drinking also seems "habit building", yet it is empirically much, much easier to quit caffeine than to quit nicotine.

Comment by cubefox on Open Thread Spring 2024 · 2024-04-03T13:36:34.308Z · LW · GW

Is it really desirable to have the new "review bot" in all the 100+ karma comment sections? To me it feels like unnecessary clutter, similar to injecting ads.

Comment by cubefox on Modern Transformers are AGI, and Human-Level · 2024-03-28T19:33:16.235Z · LW · GW

Well, backpropagation alone wasn't even enough to make efficient LLMs feasible. It took decades, till the invention of transformers, to make them work. Similarly, knowing how to make LLMs is not yet sufficient to implement predictive coding. LeCun talks about the problem in a short section here from 10:55 to 14:19.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T00:21:53.229Z · LW · GW

If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.

I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one's support of in the light of this, they end up involving highly politicized taboos.

It is not surprising when a widely held false belief is sustained by a taboo -- otherwise the belief would probably already have been corrected, or wouldn't have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:56:16.133Z · LW · GW

If you mean by "statement" an action (a physical utterance) then I disagree. If you mean an abstract object, a proposition, for which someone could have more or less evidence, or reason to believe, then I agree.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:47:41.959Z · LW · GW

But it wasn't a cancellation attempt.

In effect Cade Metz indirectly accused Scott of racism. Which arguably counts as a cancellation attempt.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:28:42.382Z · LW · GW

Beliefs can only be epistemically legitimate, actions can only be morally legitimate. To "bring something up" is an action, not a belief. My point is that this action wasn't legitimate, at least not in this heavily abridged form.

Comment by cubefox on rhollerith_dot_com's Shortform · 2024-03-27T23:03:36.767Z · LW · GW

I also find them irksome for some reason. They feel like pollution. Like AI generated websites in my Google results.

An exception was the ghost cartoon here. The AI spelling errors added to the humor, similar to the bad spelling of lolcats.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T22:24:31.773Z · LW · GW

What you're suggesting amounts to saying that on some topics, it is not OK to mention important people's true views because other people find those views objectionable.

It's okay to mention an author's taboo views on a complex and sensitive topic when they are discussed in a longer format which does justice to how they were originally presented. Just giving a necessarily offensive-sounding short summary is useful only as a weapon to damage the author's reputation.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T21:58:44.903Z · LW · GW

On the one hand you say

So I think it's actually pretty legitimate for Metz to bring up incidences like this

but also

This is not to say that I think Scott should be "canceled" for these views or whatever, not at all

which seems like a double standard. E.g. assume the NYT article had actually led to Scott's cancellation -- which wasn't an implausible outcome for Metz to expect.

(As a historical analogy, Scott's case seems quite similar to that of Baruch Spinoza. Spinoza could be (and was) accused of employing a similar strategy: getting, with his pantheist philosophy, the highly taboo topic of atheism into the mainstream philosophical discourse. If so, the strategy was successful.)

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T21:24:16.402Z · LW · GW

Wait a minute. Please think through this objection. You are saying that if the NYT encountered factually true criticisms of an important public figure, it would be immoral of them to mention this in an article about that figure?

No, not in general. But in the specific case at hand, yes. We know Metz did read quite a few of Scott's blog posts, and all the necessary context and careful subtlety with which he (Scott) approaches this topic (e.g. in Against Murderism) are totally lost in an offhand remark in a NYT article. It's like someone in the 17th century writing about Spinoza and mentioning, as a sidenote, "and oh by the way, he denies the existence of a personal God", before moving on to something else. Shortening his position like this, to the point where it must seem outrageous and immoral, is in effect defamatory.

If some highly sensitive topic can't be addressed in a short article with the required carefulness, it should simply not be addressed at all. That's especially true for Scott, who wrote about countless other topics. There is no requirement to mention everything. (For Spinoza an argument could be made that his, at the time, outrageous position plays a fairly central role in his work, but that's not the case for Scott.)

Does it bother you that your prediction didn't actually happen? Scott is not dying in prison!

Luckily Scott didn't have to fear legal consequences. But substantial social consequences were very much on the table. We know of other people who lost their job or entire career prospects for similar reasons. Nick Bostrom probably dodged the bullet by a narrow margin.

Comment by cubefox on Wei Dai's Shortform · 2024-03-27T11:04:28.521Z · LW · GW

There are several levels in which humans can be bad or evil:

  1. Doing bad things because they believe them to be good
  2. Doing bad things while not caring whether they are bad or not
  3. Doing bad things because they believe them to be bad (Kant calls this "devilish")

I guess when humans are bad, they usually do 1). Even Hitler may have genuinely thought he was doing the morally right thing.

Humans also sometimes do 2), for minor things. But rarely if the anticipated bad consequences are substantial. People who consistently act according to 2) are called psychopaths. They have no inherent empathy for other people. Most humans are not psychopathic.

Humans don't do 3), they don't act evil for the sake of it. They aren't devils.

Comment by cubefox on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T10:20:18.950Z · LW · GW

Imagine you are a philosopher in the 17th century, and someone accuses you of atheism, or says "He aligns himself with Baruch Spinoza". This could easily have massive consequences for you. You may face extensive social and legal punishment. You can't even honestly defend yourself, because the accusation of heresy is an asymmetric discourse situation. Is your accuser off the hook when you end up dying in prison? He can just say: Sucks for him, but it's not my fault, I just innocently reported his beliefs.

Comment by cubefox on Modern Transformers are AGI, and Human-Level · 2024-03-26T23:39:27.772Z · LW · GW

No, I was talking about the results. lsusr seems to use the term in a different sense than Scott Alexander or Yann LeCun. In their sense it's not an alternative to backpropagation, but a way of constantly predicting future experience and constantly updating a world model depending on how far off those predictions are. Somewhat analogous to conditionalization in Bayesian probability theory.

LeCun talks about the technical issues in the interview above. In contrast to next-token prediction, the problem of predicting appropriate sense data is not yet solved for AI. Apart from doing it in real time, the other issue is that (e.g.) for video frames a probability distribution over all possible experiences is not feasible, in contrast to text tokens. The space of possibilities is too large, and some form of closeness measure is required, or imprecise predictions, that only predict "relevant" parts of future experience.

In the meantime OpenAI did present Sora, a video generation model. But according to the announcement, it is a diffusion model which generates all frames in parallel. So it doesn't seem like a step toward solving predictive coding.

Edit: Maybe it eventually turns out to be possible to implement predictive coding using transformers. Assuming this works, it wouldn't be appropriate to call transformers AGI before that achievement was made. Otherwise we would have to identify the invention of "artificial neural networks" decades ago with the invention of AGI, since AGI will probably be based on ANNs. My main point is that AGI (a system with high generality) is something that could be scaled up (e.g. by training a larger model) to superintelligence without requiring major new intellectual breakthroughs, breakthroughs like figuring out how to get predictive coding to work. This is similar to how a human brain seems to be broadly similar to a dog brain, but larger, and thus didn't involve a major "breakthrough" in the way it works. Smarter animals are mostly smarter in the sense that they are better at prediction.

Comment by cubefox on Modern Transformers are AGI, and Human-Level · 2024-03-26T18:41:27.716Z · LW · GW

I agree it is not sensible to make "AGI" a synonym for superintelligence (ASI) or the like. But your approach to compare it to human intelligence seems unprincipled as well.

In terms of architecture, there is likely no fundamental difference between humans and dogs. Humans are probably just a lot smarter than dogs, but not significantly more general. Similar to how a larger LLM is smarter than a smaller one, but not more general. If you doubt this, imagine we had a dog-level robotic AI. Plausibly, we soon thereafter would also have human-level AI by growing its brain / parameter count. For all we know, our brain architectures seem quite similar.

I would go so far as to argue that most animals are about equally general. Humans are more intelligent than other animals, but intelligence and generality seem orthogonal. All animals can do both inference and training in real time. They can do predictive coding, which Yann LeCun calls the dark matter of intelligence. Their domain is the real world: they fully implement the general AI task of "robotics".

Raw transformers don't really achieve that. They don't implement predictive coding, and they don't work in real time. LLMs may be, in some sense (e.g. language understanding), more intelligent than animals, but that was already true, in some sense, of the even more narrow AI AlphaGo. AGI signifies a high degree of generality, not necessarily a particularly high degree of intelligence in a less general (more narrow) system.

Edit: One argument for why predictive coding is so significant is that it can straightforwardly scale to superintelligence. Modern LLMs get their ability mainly from trying to predict human-written text, even when they additionally process other modalities. Insofar as text is a human artifact, this imposes a capability ceiling. Predictive coding instead tries to predict future sensory experiences. Sensory experiences causally reflect base reality quite directly, unlike text produced by humans. An agent with superhuman predictive coding would be able to predict the future, including conditioned on possible actions, much better than humans.

Comment by cubefox on All About Concave and Convex Agents · 2024-03-25T19:51:26.915Z · LW · GW

Whether it is possible to justify Kelly betting even when your utility is linear in money (SBF said it was for him) is very much an open research problem. There are various posts on this topic when you search LessWrong for "Kelly". I wouldn't assume Wikipedia contains authoritative information on this question yet.
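To make the tension concrete, here is a small Monte Carlo sketch (my own illustration with made-up parameters, not from the post or Wikipedia): a repeated even-money bet with win probability 0.6, where the Kelly fraction is 2p - 1 = 0.2. Betting the whole bankroll maximizes expected wealth (analytically 1.2^10 ≈ 6.2 here), but its median outcome is ruin; Kelly has a lower mean but a much better median. Whether an agent whose utility really is linear in money should nevertheless bet Kelly is exactly the open question.

```python
import random

def simulate(fraction, p=0.6, rounds=10, trials=100_000):
    # Repeated even-money bets, staking a fixed fraction of current wealth each round.
    wealths = []
    for _ in range(trials):
        w = 1.0
        for _ in range(rounds):
            stake = w * fraction
            w += stake if random.random() < p else -stake
        wealths.append(w)
    wealths.sort()
    return sum(wealths) / trials, wealths[trials // 2]  # (mean, median)

print("Kelly  (f=0.2):", simulate(0.2))  # mean ~1.5, median ~1.2
print("All-in (f=1.0):", simulate(1.0))  # mean ~6.2, median = 0.0
```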

Comment by cubefox on All About Concave and Convex Agents · 2024-03-25T11:48:42.661Z · LW · GW

These classifications are very general. Concave utility functions seem more rational than convex ones. But can we be more specific?

Intuitively, it seems a rational, simple relation between resources and utility should be such that the same relative increase in resources is assigned the same utility. So doubling your current resources should be assigned the same utility (desirability) irrespective of how many resources you currently have. E.g. doubling your money while already rich seems approximately as good as doubling your money while not rich.

Can we still be more specific? Arguably, quadrupling (x4) your resources should be judged twice as good (assigned twice as much utility) as doubling (x2) your resources.

Can we still be more specific? Arguably, the prospect of halving your resources should be judged as bad as doubling your resources is good. If you are faced with a choice between two options A and B, where A does nothing, and B either halves your resources or doubles them, depending on a fair coin flip, you should assign equal utility to choosing A and to choosing B.

I don't know what this function is in formal terms. But it seems that rational agents shouldn't have utility functions that are very dissimilar to it.

The strongest counterargument I can think of is that the prospect of losing half your resources may seem significantly worse than the prospect of doubling your resources. But I'm not sure this has a rational basis. Imagine you are not dealing with uncertainty between two options, but with two things happening sequentially in time. Either you first double your money and then halve it, or you first halve your money and then double it. In either case, you end up with the same amount you started with. So doubling and halving seem to cancel out in terms of utility, i.e. they should be regarded as having equal and opposite utility.
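For what it's worth, all three constraints above are satisfied by a logarithmic utility function; a quick check, assuming U(x) = log x:

$$U(2x) - U(x) = \log 2 \quad\text{(the same for every } x\text{)}$$
$$U(4x) - U(x) = 2\log 2 = 2\,\bigl(U(2x) - U(x)\bigr)$$
$$U(x/2) - U(x) = -\log 2 = -\bigl(U(2x) - U(x)\bigr)$$

In particular, a 50/50 gamble between halving and doubling then has the same expected utility as doing nothing.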

Comment by cubefox on Victor Ashioya's Shortform · 2024-03-24T04:44:43.598Z · LW · GW

A shorter, more high level alternative is Axis of Ordinary, which is also available via Facebook and Telegram.

Comment by cubefox on Shortform · 2024-03-23T17:16:08.783Z · LW · GW

"The x are more y than they actually are" seems like a contradiction?

Comment by cubefox on Shortform · 2024-03-15T03:56:32.303Z · LW · GW

You mentioned positions I described as straw men or weak men. Darwinist utilitarianism would be more like a steel man.

Comment by cubefox on Shortform · 2024-03-15T03:27:16.481Z · LW · GW

Probably because from the outset, only one sort of answer is inside the realm of acceptable answers. Anything else would be far outside the Overton window. If they already know what sort of answer they have to produce, doing the actual calculations has no benefit. It's like a theologian evaluating arguments about the existence of God.

Comment by cubefox on Shortform · 2024-03-15T02:59:24.146Z · LW · GW

The above seems like a strawman or weakman argument. Consider instead Nietzsche's Critique of Utilitarianism:

Thus Nietzsche thinks utilitarians are committed to ensuring the survival and happiness of human beings, yet they fail to grasp the unsavory consequences which that commitment may entail. In particular, utilitarians tend to ignore the fact that effective long-run utility promotion might require the forcible destruction of people who either enfeeble the gene pool or who have trouble converting resources into utility—incurable depressives, the severely handicapped, and exceptionally fastidious people all seem potential targets.

Comment by cubefox on 'Empiricism!' as Anti-Epistemology · 2024-03-14T22:27:09.693Z · LW · GW

I'll add that sometimes, there is a big difference between verbally agreeing with a short summary, even if it is accurate, and really understanding and appreciating it and its implications. That often requires long explanations with many examples and looking at the same issue from various angles. The two Scott Alexander posts you mentioned are a good example.

Comment by cubefox on 'Empiricism!' as Anti-Epistemology · 2024-03-14T22:01:38.831Z · LW · GW

Yeah, but I do actually think this paragraph is wrong about the existence of easy rules. It is a bit like saying: there are only the laws of fundamental physics, so don't bother trying to find high-level laws; you just have to do the hard work of learning to apply fundamental physics when you are trying to understand a pendulum or a hot gas. Or biology.

Similarly, for induction there are actually easy rules applicable to certain domains of interest. Like Laplace's rule of succession, which assumes random i.i.d. sampling, which in turn implies the sample distribution tends to resemble the population distribution. The same assumption is made by supervised learning about the training distribution, which works very well in many cases. There are other examples, like the Lindy effect (mentioned in another comment) and various popular models in statistics. Induction heads also come to mind.
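For reference, Laplace's rule under the i.i.d. assumption, after observing s successes in n trials:

$$P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials}) = \frac{s+1}{n+2}$$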

Even if there is just one, complex, fully general method applicable to science or induction, there may still exist "easy" specialized methods, with applicability restricted to a certain domain.

Comment by cubefox on 'Empiricism!' as Anti-Epistemology · 2024-03-14T13:06:32.471Z · LW · GW

Yeah, one has to correct, when possible, for the likelihood of observing a particular part of the lifetime of the trend. Though absent any further information, our probability distribution over where we are in that lifetime should arguably be uniform. Which does suggest there is indeed a sort of "straight rule" of induction when extrapolating trends, as the scientist in the dialogue suspected. It is just that it serves as a weak prior that is easily changed by additional information.

Comment by cubefox on 'Empiricism!' as Anti-Epistemology · 2024-03-14T03:24:02.813Z · LW · GW

There does actually seem to be a simple and general rule of extrapolation that can be used when no other data is available: If a trend has so far held for some timespan t, it will continue to hold, in expectation, for another timespan t, and then break down.

In other words, if we ask ourselves how long an observed trend will continue to hold, it does seem, absent further data, a good indifference assumption to think that we are currently in the middle of the trend; that we have so far seen half of it.

Of course it is possible that we are currently near the beginning of the trend, in which case it would continue longer than it has held so far; or near the end, in which case it would continue less long than it has held so far. But on average we should expect that we are in the middle.
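Here is a small sketch (my own illustration) of that indifference assumption: if the fraction of the trend's total lifetime that we have already observed is uniformly distributed, the median remaining duration equals the elapsed duration (the mean is dominated by the long tail, so the median is the more robust summary here).

```python
import random

elapsed = 2.0  # e.g. a trend that has held for two years so far
remaining = []
for _ in range(100_000):
    f = 1.0 - random.random()   # fraction of the lifetime already observed, uniform in (0, 1]
    total = elapsed / f         # implied total lifetime of the trend
    remaining.append(total - elapsed)
remaining.sort()
print(remaining[len(remaining) // 2])  # median remaining time: approximately 2.0
```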

So if we know nothing about the investment scheme in the post, except that it has worked for two years so far, our expectation should be that it breaks down after a further two years.

Comment by cubefox on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-13T21:06:41.686Z · LW · GW

Thanks. My question wasn't bait. It comes from repurposing the innocent but (for a two-boxer) uncomfortable "why ain'tcha winning?" question, by applying it to the population level. As a population, South Korea (TFR=0.72 and falling) doesn't look like it's winning the Malthusian game. 2.04 sounds almost sustainable. And Africa has a TFR>4.

No, I think it's still a bad thing because (as with most religions) it fuels beliefs that prevent people from even considering trying to solve problems like aging and death because "heaven will be better than mortality", "God will make everything better", etc.

Yeah, fair enough. Something like that would be my response too. Though I would add that solving aging is not quite the same as solving a low total fertility rate. There is also the broader issue of dysgenic trends, with a negative correlation between TFR and IQ, but that takes us too far afield here.

Comment by cubefox on OpenAI's Sora is an agent · 2024-03-13T19:35:15.476Z · LW · GW

Can you say a bit more on how the idea in this post relates to DeepMind's SIMA?

Comment by cubefox on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-13T18:14:38.174Z · LW · GW

Currently the fertility rate is collapsing around the world. In most industrialized countries it is far below 2.1 children per woman, which suggests that these societies will go extinct, unless some magical AI solution appears. Even Mormon fertility rates are plummeting, but they are still higher than those of most other people in the US. Which suggests Mormons are actually less misaligned with the "goal" of evolution than supposedly more rational people. Mormons are also less responsible for a potential future disappearance of the society they live in.

Do you think this gives Mormonism some practical, if not epistemic, justification?

Comment by cubefox on 0th Person and 1st Person Logic · 2024-03-12T23:02:38.943Z · LW · GW

Suppose I tell a stranger, "It's raining." Under possible worlds semantics, this seems pretty straightforward: I and the stranger share a similar map from sentences to sets of possible worlds, so with this sentence I'm trying to point them to a certain set of possible worlds that match the sentence, and telling them that I think the real world is in this set.

Can you tell a similar story of what I'm trying to do when I say something like this, under your proposed semantics?

So my conjecture of what happens here is: you and the stranger assume a similar confirmation relation between the sentence "It's raining" and possible experiences. For example, you both expect visual experiences of raindrops, when looking out of the window, to confirm the sentence pretty strongly. Or rain-like sounds on the roof. So by asserting this sentence you tell the stranger that you predict/expect certain forms of experiences, which presumably makes the stranger predict similar things (if they assume you are honest and well-informed).

The problem with agents mapping a sentence to certain possible worlds is that this mapping has to occur "in our head", internally to the agent. But possible worlds / truth conditions are external, at least for sentences about the external world. We can only create a mapping between things we have access to. So it seems we cannot create such a mapping. It's basically the same thing Nate Showell said in a neighboring comment.

(We could replace possible worlds / truth conditions themselves with other beliefs, presumably a disjunction of beliefs that are more specific than the original statement. Beliefs are internal, so a mapping is possible. But beliefs have content (i.e. meaning) themselves, just like statements. So how then to account for these meanings? To explain them with more beliefs would lead to an infinite regress. It all has to bottom out in experiences, which is something we simply have as a given. Or really any robot with sensory inputs, as Adele Lopez remarked.)

No, in that post I also consider interpretations of probability where it's subjective. I linked to that post mainly to show you some ideas for how to quantify sizes of sets of possible worlds, in response to your assertion that we don't have any ideas for this. Maybe try re-reading it with this in mind?

Okay, I admit I have a hard time understanding the post. To comment on the "mainstream view":

"1. Only one possible world is real, and probabilities represent beliefs about which one is real."

(While I wouldn't personally call this a way of "estimating the size" of sets of possible worlds,) I think this interpretation has some plausibility. And I guess it may be broadly compatible with the confirmation/prediction theory of meaning. This is speculative, but truth seems to be the "limit" of confirmation or prediction, something that is approached, in some sense, as the evidence gets stronger. And truth is about what the external world is like. Which is just a way of saying that there is some possible way the world is, which rules out other possible worlds.

Your counterargument against interpretation 1 seems to be that it is merely subjective and not objective, which is true. Though this doesn't rule out the existence of some unknown rationality standards which restrict the admissible beliefs to something more objective.

Interpretation 2, I would argue, is confusing possibilities with indexicals. These are really different. A possible world is not a location in a large multiverse world. Me in a different possible world is still me, at least if not too dissimilar, but a doppelganger of me in this world is someone else, even if he is perfectly similar to me. (It seems trivially true to say that I could have had different desires, and consequently have had something else for dinner. If this is true, it is possible that I could have wanted something else for dinner. Which is another way of saying there is a possible world where I had a different preference for food. So this person in that possible world is me. But to say there are certain possible worlds is just a metaphysical-sounding way of saying that certain things are possible. Different counterfactual statements could be true of me, but I can't exist at different locations. So indexical location is different from possible existence.)

I don't quite understand interpretation 3. But interpretation 4 I understand even less. Beliefs seem to be clearly different from desires. The desire that p is different from the belief that p. They can even be seen as opposites in terms of direction of fit. I don't understand what you find plausible about this theory, but I also don't know much about UDT.

Comment by cubefox on 0th Person and 1st Person Logic · 2024-03-12T18:00:35.814Z · LW · GW

Yeah, this is a good point. The meaning of a statement is explained by experiences E, so the statement can't be assumed from the outset to be a proposition (the meaning of a statement), as that would be circular. We have to assume that it is a potential utterance, something like a probabilistic disposition to assent to it. The synonymity condition can be clarified by writing the statements in quotation marks:

For all possible experiences E: P("A"|E) = P("B"|E)

Additionally the quantifier ranges only over experiences E, which can't be any statements, but only potential experiences of the agent. Experiences are certain once you have them, while ordinary beliefs about external affairs are not.

By the way, the above is the synonymity condition which defines when two statements are synonymous or not. A somewhat awkward way to define the meaning of an individual statement would be as the equivalence class of all synonymous statements. But a possibility to define the meaning of an individual statement more directly would be to regard the meaning as the set of all pairwise odds ratios between the statement and any possible evidence. The odds ratio measures the degree of probabilistic dependence between two events. Which accords with the Bayesian idea that evidence is basically just dependence.

Then one could define synonymity alternatively as the meanings of two statements, their odds ratio sets, being equal. The above definition of synonymity would then no longer be required. This would have the advantage that we don't have to assign some mysterious unconditional value to P("A"|E)=P("A") if we think A and E are independent. Because independence just means OddsRatio("A",E)=1.
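For reference, the standard definition of the odds ratio between a statement "A" and a piece of evidence E (not spelled out in the thread):

$$\mathrm{OddsRatio}(\text{"A"}, E) = \frac{P(\text{"A"} \mid E)\,/\,P(\neg\text{"A"} \mid E)}{P(\text{"A"} \mid \neg E)\,/\,P(\neg\text{"A"} \mid \neg E)}$$

It equals 1 exactly when "A" and E are probabilistically independent, and it is symmetric in its two arguments.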

Another interesting thing to note is that Yudkowsky sometimes seems to express his theory of "anticipated experiences" in the reverse of what I've done above. He seems to think of prediction instead of confirmation. That would reverse things:

For all possible experiences E: P(E|"A") = P(E|"B")

I don't think it makes much of a difference, since probabilistic dependence is ultimately symmetric, i.e. OddsRatio(X,Y)=OddsRatio(Y,X).

Maybe there is some other reason though to prefer the prediction approach over the confirmation approach. Like, for independence we would, instead of P("A"|E)=P("A"), have P(E|"A")=P(E). The latter refers to the unconditional probability of an experience, which may be less problematic than to rely on the unconditional probability of a statement.

And how does someone compute the degree to which they expect some experience to confirm a statement? I leave that outside the theory. The theory only says that what you mean with a statement is determined by what you expect to confirm or disconfirm it. I think that has a lot of plausibility once you think about synonymity. How could we say two different statements have different meanings when we regard them as empirically equivalent under any possible evidence?

The approach can be generalized to account for the meaning of sub-sentence terms, i.e. individual words. A standard solution is to say that two words are synonymous iff they can be substituted for each other in any statement without affecting the meaning of the whole statement. Then there are tautologies, which are independent of any evidence, so they would be synonymous according to the standard approach. I think we could say their meanings differ in the sense that the meanings of the individual words differ. For other sentence types, like commands, we could e.g. rely on evidence that the command is executed, instead of evidence that it is true, as with statements. An open problem is how to account for the meaning of expressions that don't have any obvious satisfaction conditions (like being true or executed), e.g. greetings.

Regarding "What Are Probabilities, Anyway?". The problem you discuss there is how to define an objective notion of probability. Subjective probabilities are simple, they are are just the degrees of belief of some agent at a particular point in time. But it is plausible that some subjective probability distributions are better than others, which suggests there is some objective, ideally rational probability distribution. It is unclear how to define such a thing, so this remains an open philosophical problem. But I think a theory of meaning works reasonably well with subjective probability.

Comment by cubefox on 0th Person and 1st Person Logic · 2024-03-12T01:41:01.166Z · LW · GW

You can interpret them as subjective probability functions, where the conditional probability P(A|B) is the probability you currently expect for A under the assumption that you are certain that B. With the restriction that P(A and B)=P(A|B)P(B)=P(A)P(B|A).

I don't think possible worlds help us to calculate either of the two values in the ratio P(A and B)/P(B). That would only be possible if you could say something about the share of possible worlds in which "A and B" is true, or "B".

Like: "A and B" is true in 20% of all possible worlds, "B" is true in 50%, therefore "A" is true in 40% of the "B" worlds. So P(A|B)=0.4.

But that obviously doesn't work. Each statement is true in infinitely many possible worlds and we have no idea how to count them to assign numbers like 20%.

Comment by cubefox on 0th Person and 1st Person Logic · 2024-03-11T23:13:46.448Z · LW · GW

Instead of directly having a 0P-preference for "a square of the grid is red," the robot would have to have a 1P-preference for "I believe that a square of the grid is red."

It would be more precise to say the robot would prefer to get evidence which raises its degree of belief that a square of the grid is red.

Comment by cubefox on 0th Person and 1st Person Logic · 2024-03-11T22:40:56.745Z · LW · GW

This approach implies there are two possible types of meanings: Sets of possible worlds and sets of possible experiences. A set of possible worlds would constitute truth conditions for "objective" statements about the external world, while a set of experience conditions would constitute verification conditions for subjective statements, i.e. statements about the current internal states of the agent.

However, it seems like a statement can mix both external or internal affairs, which would make the 0P/1P distinction problematic. Consider Wei Dai's example of "I will see red". It expresses a relation between the current agent ("I") and its hypothetical "future self". "I" is presumably an internal object, since the agent can refer to itself or its experiences independently of how the external world turns out to be constituted. The future agent, however, is an external object relative to the current agent which makes the statement. It must be external because its existence is uncertain to the present agent. Same for the future experience of red.

Then the statement "I will see red" could be formalized as follows, where ("I"/"me"/"myself") is an individual constant which refers to the present agent:

Somewhat less formally: "There is an x such that I will become x, and there is an experience of red y such that x sees y."

(The quantifier is assumed to range over all objects irrespective of when they exist in time.)

If there is a future object and a future experience that make this statement true, they would be external to the present agent who is making the statement. But i is internal to the present agent, as it is the (present) agent itself. (Consider Descartes' demon currently misleading you about the existence of the external world. Even in that case you could be certain that you exist. So you aren't something external.)

So Wei's statement seems partially internal and partially external, and it is not clear whether its meaning can be either a set of experiences or a set of possible worlds on the 0P/1P theory. So it seems a unified account is needed.


Here is an alternative theory.

Assume the meaning of a statement is instead a set of experience/degree-of-confirmation pairs. That is, two statements have the same meaning if they get confirmed/disconfirmed to the same degree by all possible experiences E. So statement A has the same meaning as a statement B iff:

For all possible experiences E: P("A"|E) = P("B"|E)

where P is a probability function describing conditional beliefs. (See Yudkowsky's anticipated experiences. Or Rudolf Carnap's liberal verificationism, which considers degrees of confirmation instead of Wittgenstein's strict verification.)

Now this arguably makes sense for statements about external affairs: If I make two statements, and I would regard them to be confirmed or disconfirmed to the same degree by the same evidence, that would plausibly mean I regard them as synonymous. And if two people disagree regarding the confirmation conditions of a statement, that would imply they don't mean the same (or completely the same) thing when they express that statement, even if they use the same words.

It also makes sense for internal affairs. I make a statement about some internal affair, like "I see red", formally Sees(i, r). Here i refers to myself and r to my current experience of red. Then this statement is true iff there is some piece of evidence E which is equivalent to it, namely the experience that I see red. Then P("Sees(i, r)"|E) = 1 if E is that experience, and P("Sees(i, r)"|E) = 0 otherwise.

Again, the "I" here is logically an individual constant internal to the agent, likewise the experience . That is, only my own experience verifies that statement. If there is another agent, who also sees red, those experiences are numerically different. There are two different constants which refer to numerically different agents, and two constants which refer to two different experiences.

That is even the case if the two agents are perfectly correlated, qualitatively identical doppelgangers with qualitatively identical experiences (on, say, some duplicate versions of Earth, far away from each other). If one agent stubs its toe, the other agent also stubs its toe, but the first agent only feels the pain caused by the first agent's toe, while the second only feels the pain caused by the second agent's toe, and neither feels the experience of the other. Their experiences are only qualitatively but not numerically identical. We are talking about two experiences here, as one could have occurred without the other. They are only contingently correlated.

Now then, what about the mixed case "I will see red"? We need an analysis here such that the confirming evidence is different for statements expressed by two different agents who both say "I will see red". My statement would be (now) confirmed, to some degree, by any evidence (experiences) suggesting that a) I will become some future person x such that b) that future person will see red. That is different from the internal "I see red" experience that this future person would have themselves.

An example. I may see a notice indicating that a red umbrella I ordered will arrive later today, which would confirm that I will see red. Seeing this notice would constitute such a confirming experience. Again, my perfect doppelganger on a perfect twin Earth would also see such a notice, but our experiences would not be numerically identical. Just like my doppelganger wouldn't feel my pain when we both, synchronously, stub our toes. My experience of seeing the umbrella notice is caused (explained) by the umbrella notice here on Earth, not by the far away umbrella notice on twin Earth. When I say "this notice" I refer to the hypothetical object which causes my experience of a notice. So every instance of the indexical "this" involves reference to myself and to an experience I have. Both are internal, and thus numerically different even for agents with qualitatively identical experiences. So if we both say "This notice says I will see a red umbrella later today", we would express different statements. Their meaning would be different.


In summary, I think this is a good alternative to the 0P/1P theory. It provides a unified account of meanings, and it correctly deals with distinct agents using indexicals while having qualitatively identical experiences. Because it has a unified account of meaning, it has no in-principle problem with "mixed" (internal/external) statements.

It does omit possible worlds. So one objection would be that it would assign the same meaning to two hypotheses which make distinct but (in principle) unverifiable predictions. Like, perhaps, two different interpretations of quantum mechanics. I would say that a) these theories may differ in other aspects which are subject to some possible degree of (dis)confirmation, and b) if even such indirect empirical comparisons are excluded a priori, regarding them as synonymous doesn't sound so bad.

The problem with using possible worlds to determine meanings is that you can always claim that the meaning of "The mome raths outgrabe" is the set of possible worlds where the mome raths outgrabe. Since possible worlds (unlike anticipated degrees of confirmation by different possible experiences) are objects external to an agent, there is no possibility of a decision procedure which determines that an expression is meaningless. Nor can there, with the possible worlds theory, be a decision procedure which determines that two expressions have the same or different meanings. It only says the meaning of "Bob is a bachelor" is determined by the possible worlds where Bob is a bachelor, and that the meaning of "Bob is an unmarried man" is determined by the worlds where Bob is an unmarried man, but it doesn't say anything which would allow an agent to compare those meanings.

Comment by cubefox on Why correlation, though? · 2024-03-10T18:00:13.165Z · LW · GW

That's not quite right. It measures the strength of monotonic relationships, which may also be linear. So this measure is more general than Pearson correlation. It just measures whether, if one value increases, the other value increases as well, not whether they increase at the same rate.
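To illustrate the monotonic-vs-linear distinction, here is a quick example with made-up data, using Spearman's rank correlation as one such measure (assuming that is the kind of measure in question):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(1, 10, 50)
y = np.exp(x)  # strictly increasing in x, but far from linear

r, _ = pearsonr(x, y)
rho, _ = spearmanr(x, y)
print(r)    # noticeably below 1: the relationship is not linear
print(rho)  # exactly 1: the relationship is perfectly monotonic
```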

Comment by cubefox on Completion Estimates · 2024-03-10T00:15:08.068Z · LW · GW

One way to estimate completion times of a person or organization is to compile a list of their own past predictions and actual outcomes, and compute in each case how much longer (or shorter) the actual time to completion was in relative terms.

Since an outcome that took 100% longer than predicted (twice as long) and an outcome that took 50% less time than predicted (half as long) should intuitively cancel out, the geometric mean has to be used to compute the average. In this case that would be the square root of the product of those two factors, (2 * 0.5)^(1/2) = 1. So we should multiply future completion estimates by 1, i.e. leave them as is.
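A minimal sketch of that calculation, with made-up numbers:

```python
import math

# Ratios of actual / predicted completion time from past projects (hypothetical data).
ratios = [2.0, 0.5, 1.5, 3.0]

# Geometric mean, so that a 2x overrun and a 0.5x "underrun" cancel out.
correction = math.prod(ratios) ** (1 / len(ratios))

new_estimate_months = 6
print(new_estimate_months * correction)  # corrected completion estimate in months
```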

That only works if we have some past history of time estimates and outcomes. Another way would be to look at prediction markets, should one exist for the event at hand. Though that is ultimately a way of outsourcing the problem rather than one of computing it.

Comment by cubefox on Using axis lines for good or evil · 2024-03-09T13:24:10.386Z · LW · GW

These suggestions seem plausible. A few notes:

  • Tick marks for years are ambiguous. Does the tick mark indicate the start of the year? The middle of the year? The end of the year? I have worked with chart libraries, and sometimes it's even the current date, n years ago -- like a tick mark labelled "2010" meaning March 9, 2010. A better alternative to tick marks is "year separators", where the "2010" is placed between two tick marks rather than under one, so that the marks can only be interpreted as the start and end of the year.
  • Regarding temperature. Physically speaking, only 0 Kelvin is an objective zero point, such that something at 20 Kelvin has "twice as much" temperature as something at 10 Kelvin. Kelvin is a "ratio" scale. Celsius and Fahrenheit are only "interval" scales, so 20°C is not twice as hot as 10°C, but only ~3.5% hotter (see the quick check after this list). (See also this interesting Wikipedia article on the various types of scales.) This is even though 0°C (water freezes) seems more objective than 0°F.
    Nonetheless, Kelvin is not relevant to what we perceive as "small" and "large" differences in everyday life. We wouldn't say 20°C only feels a mere 3.5% warmer than 10°C.
    I guess it helps to include two familiar "reference points" in the temperature axis of a chart, like freezing and boiling of water (0°C and 100°C, or explicit labels for Fahrenheit) or "fridge temperature" and "room temperature". That should give some intuitive sense of distance in the temperature axis.
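A quick check of the ~3.5% figure from the temperature bullet above:

$$\frac{20\,^{\circ}\mathrm{C}}{10\,^{\circ}\mathrm{C}} = \frac{293.15\ \mathrm{K}}{283.15\ \mathrm{K}} \approx 1.035$$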