Comments

Comment by artifex on Universal Basic Income and Poverty · 2024-07-27T06:22:52.710Z · LW · GW

I do not see anything in the continued existence of 60-hour weeks that cannot be explained by the relative strength of the income and substitution effects. This doesn’t need to tell us about a poverty equilibrium; it can just tell us about people’s preferences?

Comment by artifex on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T10:07:48.881Z · LW · GW

I agree with most of this, but as a hardline libertarian take on AI risk it is incomplete since it addresses only how to slow down AI capabilities. Another thing you may want a government to do is speed up alignment, for example through government funding of R&D for hopefully safer whole brain emulation. Having arbitration firms, private security companies, and so on enforce proof of insurance (with prediction markets and whichever other economic tools seem appropriate to determine how to set that up) answers how to slow down AI capabilities but doesn’t answer how to fund alignment.

One libertarian take on how to speed up alignment is that

(1) speeding up alignment / WBE is a regular public good / positive externality problem (I don’t personally see how you do value learning in a non-brute-force way without doing much of the work that is required for WBE anyway, so I just assume that “funding alignment” means “funding WBE”; this is a problem that can be solved with enough funding; if you don’t think alignment can be solved by raising enough money, no matter how much money and what the money can be spent on, then the rest of this isn’t applicable)

(2) there are a bunch of ways in which markets fund public goods (for example, many information goods are funded by bundling ads with them) and coordination problems involving positive or negative externalities or other market failures (all of which, if they can in principle be solved by a government implementing some kind of legislation, can be seen as or converted into public goods problems, if nothing else the public goods problem of funding the operations of a firm that enforces exactly what such legislation would say; so the only kind of market failure that truly needs to be addressed is public goods problems)

(3) ultimately, if none of the ways in which markets fund public goods works, it should always still be possible to fall back on Coasean bargaining or some variant on dominant assurance contracts, if transaction costs can be made low enough

(4) transaction costs in free markets will be lower due, among other reasons, to not having horridly inefficient state-run financial and court systems

(5) prediction markets and dominant assurance contracts and other fun economic technologies don’t, in free markets, have the status of being vaguely shady and perhaps illegal that they have in societies with states

(6) if transaction costs cannot be made low enough for the problem to be solved using free markets, it will not be solved using free markets

(7) in that case, it won’t be solved by a government that makes decisions through, directly or indirectly, some kind of voting system, either. Voters voting for good governments that do good things like funding WBE R&D instead of bad things like funding wars is itself an underfunded public good with positive externalities. The coordination problem faced by voters involves transaction costs just as great as those faced by potential contributors to a dominant assurance contract (or to a bundle of dominant assurance contracts), since the number of parties, the amount of research and communication needed, and so on are just as great and usually greater. This remains true no matter the kind of voting system used, whether that involves futarchy or range voting or quadratic voting or other attempts at solving relatively minor problems with voting. So using a democratic government to solve a public goods or externality problem effectively just replaces it with another one that is equally hard or harder to solve.

In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".

Yes, it makes a lot of sense to say that, but not a lot of sense for a democratic government to be making that assessment and enforcing it (not that democratic governments that currently exist have any interest in doing that). Which I think is why you see some libertarians criticize calls for government-enforced AI slowdowns.

Comment by artifex on The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts) · 2023-08-31T20:03:23.133Z · LW · GW

Either I am missing a point somewhere, or this probably doesn't work as well outside of textbook examples.

In the example, Frank was "blackmailed" into paying, because the builder knew that there were exactly 10 villagers, and knew that Frank needs the street paved. In real life, you often do not have this kind of knowledge.

Yes, you need to solve two problems (according to Tabarrok) to solve public goods provision, one of which is the free-rider problem. Dominant assurance contracts only solve the free-rider problem, but you need to also solve what he calls the information problem to know how to set the parameters of the contract.
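To make the two problems concrete, here is a toy sketch in Python (the numbers and the exact refund scheme are invented for illustration, not taken from Tabarrok):

```python
# Toy dominant assurance contract. Each of N villagers values the paved
# street at V and must contribute C for the project to be funded; the
# entrepreneur promises each contributor a bonus F if the threshold fails.

V = 100   # value of the public good to each villager
C = 80    # required contribution per villager
F = 5     # failure bonus paid to each contributor

def payoff(contributes: bool, everyone_else_contributes: bool) -> int:
    if contributes and everyone_else_contributes:
        return V - C   # threshold met: good produced, share paid
    if contributes and not everyone_else_contributes:
        return F       # threshold missed: refund plus bonus
    return 0           # didn't contribute: no bonus, and (with a threshold
                       # of all N contributors) the good isn't produced

# Contributing strictly dominates not contributing:
assert payoff(True, True) > payoff(False, True)    # 20 > 0
assert payoff(True, False) > payoff(False, False)  # 5 > 0
```

That handles the free-rider problem; but notice that choosing V, C, F, and the threshold correctly is exactly the information problem, which the contract itself does nothing to solve.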

Comment by artifex on Assume Bad Faith · 2023-08-26T00:21:34.472Z · LW · GW

Oh that’s fun, Wikipedia caused me to believe for so many years that “bad faith” means something different from what it means and I’m only learning that now.

Comment by artifex on Noting an error in Inadequate Equilibria · 2023-02-10T13:25:16.406Z · LW · GW

Trillions of dollars in lost economic growth just seems like hyperbole. There’s some lost growth from stickiness and unemployment but of course the costs aren’t trillions of dollars.

Comment by artifex on Noting an error in Inadequate Equilibria · 2023-02-09T23:44:00.134Z · LW · GW

They did not, in fact, go far enough. Japanese GNI per capita growth from 2013 to 2021 was 1.02%; the prescription would be something like 4%.

Comment by artifex on Why didn't we get the four-hour workday? · 2023-01-07T16:00:55.840Z · LW · GW

I disagree that total working hours have decreased. Average weekly hours per person from 1950 to 2000 were “roughly constant”. Work weeks are shorter, but more people are working.

Comment by artifex on Beware boasting about non-existent forecasting track records · 2022-12-05T23:54:37.291Z · LW · GW

As an example to explain why, I predict (with 80% probability) that there will be a five-year shortening in the median on the general AI question at some point in the next three years. And I also predict (with 85% probability) that there will be a five-year lengthening at some point in the next three years.

Both of these things have happened. The community prediction was June 28, 2036 at one time in July 2022, July 30, 2043 in September 2022 and is March 13, 2038 now. So there has been a five-year shortening and a five-year lengthening.
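A quick check of the date arithmetic, using the figures quoted above:

```python
from datetime import date

jul_2022 = date(2036, 6, 28)   # community prediction at one point in July 2022
sep_2022 = date(2043, 7, 30)   # community prediction in September 2022
now      = date(2038, 3, 13)   # community prediction now

print((sep_2022 - jul_2022).days / 365.25)  # ~7.1 years: the lengthening
print((sep_2022 - now).days / 365.25)       # ~5.4 years: the shortening
```

Both moves exceed five years, so both predictions resolve positively.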

Comment by artifex on Humans do acausal coordination all the time · 2022-11-03T22:16:03.179Z · LW · GW

Even voting online takes more than five minutes in total.

Anyway, I’d rather sell my votes for money. I believe you can find thousands of people, current non-voters, who would vote for whatever you want them to, if you paid them only a little more than the value of their time.

If the value of voting is really in the expected benefits (according to your own values) of good political outcomes brought forth through voting, and these expected benefits really are greater than the time costs and other costs of voting, shouldn’t paying people with lower value of their time to vote the way you want be much more widespread?

You might not be able to verify that they did vote the way you wanted, or that they wouldn’t have voted that way otherwise, but, still, unless the ratio is only a little greater than one, it seems it should be much more widespread?
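A back-of-the-envelope version of the ratio argument (every number here is invented for illustration):

```python
# If a buyer values an election outcome highly and assigns even a tiny
# probability to one extra vote being pivotal, the expected value per vote
# can easily exceed what a low-wage non-voter would accept as payment.

price_per_vote = 20.0        # seller's time cost plus a small premium, in $
value_of_outcome = 1e9       # buyer's subjective value of the better outcome
pivot_probability = 1e-7     # chance one extra vote flips the result

expected_value_per_vote = value_of_outcome * pivot_probability  # $100
print(expected_value_per_vote > price_per_vote)  # True
```

If the instrumental theory of voting were right, ratios like this should make vote-buying common, which is the puzzle.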

If, however, the value of voting is expressive, or it comes in a package in which you adapt your identity to get the benefits of membership in some social club, that explains why there are so many people who don’t vote and why the ones who do vote don’t seem interested in buying the votes of others. And it also explains why the things people vote for are so awful.

Comment by artifex on Humans do acausal coordination all the time · 2022-11-02T23:55:29.922Z · LW · GW

Voting takes much more than five minutes, and if you think otherwise you haven’t added up all the lost time. And determining how you should vote, if you want to vote for things that lead to good outcomes, requires vastly more than five minutes.

Comment by artifex on [deleted post] 2022-09-26T07:29:22.090Z

I don’t know what purpose it serves in the post. There are more significant reasons why copies of deceased persons would never be exact anyway, without needing to go into anything beyond classical physics.

Comment by artifex on [deleted post] 2022-09-26T06:39:03.082Z

It’s the mainstream view, but not the only one and not necessarily quite correct. The Standard Model is a quantum field theory incorporating special relativity and the particles are thought of as being quanta of fields. Regardless of whether the particles are entirely reducible to fields, fields are clearly more important overall than particles.

Comment by artifex on [deleted post] 2022-09-26T05:16:15.853Z

This unfortunately means that copies could never be absolutely exact as a consequence of Heisenberg’s Uncertainty Principle

The uncertainty principle doesn’t mean what you think: to replicate a person exactly, you just need to replicate exactly the values of each classical field at each point of space occupied by the person (the world is made of fields, not particles). You probably can’t do that, but it’s not the uncertainty principle that says you can’t do that.

What the uncertainty principle says is more like this: for a quantum system evolving according to a Schrödinger equation, there is no wave function such that the density given by the Born rule is concentrated on one value of a variable while simultaneously being concentrated on one value of another variable, when the two variables are a pair of conjugate variables. For the density to be concentrated on one value of one of the variables automatically implies a combination of amplitudes for the other variable whose density is not concentrated on a single value.
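In standard notation, for the position-momentum pair, this is the statement that no wave function makes both Born densities narrow at once:

$$\sigma_x \, \sigma_p \ \ge\ \frac{\hbar}{2},$$

where $\sigma_x$ and $\sigma_p$ are the standard deviations of $|\psi(x)|^2$ and $|\tilde\psi(p)|^2$, and $\tilde\psi$ is the Fourier transform of $\psi$: concentrating the one density automatically spreads the other.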

The uncertainty principle is about what’s mathematically possible, rather than about what you can know. You can know what the wave function is and that’s really all there is to know. It’s just that the wave function isn’t going to have definite values simultaneously for both of a pair of conjugate variables.

Comment by artifex on Against population ethics · 2022-08-20T22:23:14.327Z · LW · GW

The only real connection seems to be wanting to do math on how good things are?

Yes, utilitarian ethical theories do usually seem to me more interested in formalizing things. That is probably part of their appeal. Moral philosophy is confusing, so people seek to formalize it in the hope of understanding things better (that’s the good reason to do it, at least; often the motivation is instead academic, or signaling, or obfuscation). Consider Tyler Cowen’s review of Derek Parfit’s arguments in On What Matters:

Parfit at great length discusses optimific principles, namely which specifications of rule consequentialism and Kantian obligations can succeed, given strategic behavior, collective action problems, non-linearities, and other tricks of the trade. The Kantian might feel that the turf is already making too many concessions to the consequentialists, but my concern differs. I am frustrated with this very long and very central part of the book, which cries out for formalization or at the very least citations to formalized game theory.

If you’re analyzing a claim such as — “It is wrong to act in some way unless everyone could rationally will it to be true that everyone believes such acts to be morally permitted” (p.20) — words cannot bring you very far, and I write this as a not-very-mathematically-formal economist.

Parfit is operating in the territory of solution concepts and game-theoretic equilibrium refinements, but with nary a nod in their direction. By the end of his lengthy and indeed exhausting discussions, I do not feel I am up to where game theory was in 1990.

Comment by artifex on Against population ethics · 2022-08-20T03:48:40.836Z · LW · GW

I don’t see “utility” or “utilitarianism” as meaningless or nearly meaningless words. “Utility” often refers to von Neumann–Morgenstern utilities and always refers to some kind of value assigned to something by some agent from some perspective that they have some reason to find sufficiently interesting to think about. And most ethical theories don’t seem utilitarian, even if perhaps it would be possible to frame them in utilitarian terms.

Comment by artifex on Against population ethics · 2022-08-19T15:32:20.044Z · LW · GW

Would you say you are one?

Yes, I consider it very likely correct to care about paths. I don’t care what percentage of utilitarians have which kinds of utilitarian views because the most common views have huge problems and are not likely to be right. There isn’t that much that utilitarians have in common other than the general concept of maximizing aggregate utility (that is, maximizing some aggregate of some kind of utility). There are disagreements over what the utility is of (it doesn’t have to be world states), what the maximization is over (doesn’t have to be actions), how the aggregation is done (doesn’t have to be a sum or an average or even to use any cardinal information, and don’t forget negative utilitarianism fits in here too), which utilities are aggregated (doesn’t have to be people’s own preference utilities, nor does it have to be happiness, nor pleasure and suffering, nor does it have to be von Neumann–Morgenstern utilities), or with what weights (if any; and they don’t need to be equal). I find it all pretty confusing. Attempts by some smart people to figure it out in the second half of the 20th century seem to have raised more questions than they have produced answers. I wouldn’t be very surprised if there were people who knew the answers and they were written up somewhere, but if so I haven’t come across that yet.

Comment by artifex on Against population ethics · 2022-08-17T04:20:06.450Z · LW · GW

Utilitarianism is pretty broad! There are utilitarians who care about the paths taken to reach an outcome.

Comment by artifex on Unifying Bargaining Notions (1/2) · 2022-07-26T09:08:01.001Z · LW · GW

To put it mildly, this is not really a desiderata at all, it's actually an extremely baffling property.

How can we decide whether an axiom used to pin down a bargaining solution is intuitive or baffling without first having a goal in mind? Which axioms are sound for the bargaining solution used to pick deals depends on the purpose that led us to want to apply bargaining theory to the problem. If you’re designing a file sharing protocol, you don’t care about bargaining chips; you just want the files to be distributed quickly. The same goes if you’re designing a standard for network equipment and want to minimize spectrum congestion or wireless interference, knowing that you can’t trust the owners of the equipment not to be selfish at the expense of other users. You want the solution that works best, and if some solution that isn’t the one that works best becomes unavailable, that doesn’t change which solution you consider best. Independence of irrelevant alternatives is sound for some of the goals we want to apply bargaining theory to.
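For reference, the axiom in its standard form, where $f(S, d)$ is the solution for feasible set $S$ and disagreement point $d$:

$$T \subseteq S \ \text{ and } \ f(S, d) \in T \ \implies\ f(T, d) = f(S, d).$$

Removing alternatives that would not have been chosen anyway does not change the solution, which is exactly the “the best solution stays best when other options disappear” property appealed to above.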

Comment by artifex on Potato diet: A post mortem and an answer to SMTM's article · 2022-07-17T07:31:34.884Z · LW · GW

Other than water, potatoes are mostly starch, which becomes easily digestible after cooking. This makes your blood sugar level go up and down fast and makes you feel hungry again quickly after eating them. I don’t know what eating potatoes does in the long run to how hungry you feel generally.

Comment by artifex on A time-invariant version of Laplace's rule · 2022-07-16T11:17:32.674Z · LW · GW

This is a fantastic post. Thank you for writing it!

Comment by artifex on A time-invariant version of Laplace's rule · 2022-07-16T11:16:22.532Z · LW · GW

In the case where there are zero observed successes (so 𝑆 = 0) in the last 𝑛 years, Gott’s formula

𝑃(𝑇 > 𝑍 ∣ 𝑇 > 𝑛) = 𝑛/𝑍 (with 𝑇 the total waiting time until the next success)

for the probability that the next success happens in the next 𝑚 = 𝑍 − 𝑛 years gives

𝑃 = 1 − 𝑛/𝑍 = 𝑚/(𝑛 + 𝑚),

which ends up being exactly the same as the time-invariant Laplace’s rule. The same happens if there was a success (𝑆 = 1) but we chose not to update on it because we chose to start the time period with it. So the time-invariant Laplace’s rule is a sort of generalization of Gott’s formula, which is neat.

Comment by artifex on Potato diet: A post mortem and an answer to SMTM's article · 2022-07-16T07:44:00.213Z · LW · GW

“Eat nothing but 𝑋 for 𝑛 weeks” diets (where 𝑋 is a single food item that isn’t a meal replacement) are pretty bad diets that we wouldn’t want to follow even in the cases where they are effective at losing weight, and when they are effective it is probably for all sorts of bad reasons. You have more important concerns than losing weight. I wouldn’t follow such a diet for one week, and would pay a good amount not to have to follow it for four weeks or longer; I don’t think people should be willing to participate in such studies for free.

That it’s an all-𝑋 diet is the biggest problem, but potatoes are also not a great choice: they have a high glycemic load and worse nutrient content than many non-starchy vegetables; they make you feel hungry again quickly, meaning you’re probably going to eat too much; and they contain a dangerous poison, and removing that poison means peeling them to some extent, so you lose a significant amount of what nutrient content they do have. Their proportion of the amino acids our bodies can’t synthesize satisfactorily and their protein digestibility aren’t bad, but there are better sources.

Comment by artifex on How do AI timelines affect how you live your life? · 2022-07-12T06:23:05.159Z · LW · GW

There is no reason why you would want to convert stock to cash in a way related to how (or how much) dividends get paid, so it's purely an inconvenience. And the FIRE safe withdrawal rate is similarly in general unrelated to the dividend rate. Dividends are not relevant to anything.

No, because stock prices are more dependent than dividends on state variables that you don’t care about as a diversified long-term investor. See how smooth dividends are compared to stock prices: the dividends are approximately a straight line on log scale while the price is very volatile. Price declines often come with better expected returns going forward, so they’re not a valid reason to reduce your spending if the dividends you’re receiving aren’t changing.

If you’re just going to hold stocks to eat the dividends (and other cash payments) without ever selling them, how much do you care what happens to the price? The main risk you care about is economic risk causing real dividends to fall. Like if you buy bonds and eat the coupons: you don’t care what happens to the price, if it doesn’t indicate increased risk of default. Sure, interest rates go up and your bond prices go down. You don’t care. The coupons are the same—you receive the same money. Make it inflation-indexed and you receive the same purchasing power. The prices are volatile—it seems like these bonds are risky, right? But you receive the same fixed purchasing power no matter what happens—so, no, they aren’t risky, not in the way you care about.
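A toy illustration of the bond point (the numbers are made up):

```python
# A 10-year bond with a fixed 4% coupon. The price moves with the discount
# rate, but the cash the holder receives does not.

def bond_price(face, coupon, rate, years):
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + rate) ** years

face, coupon, years = 1000, 40, 10

print(round(bond_price(face, coupon, 0.02, years), 2))  # ~1179.65: rates low, price high
print(round(bond_price(face, coupon, 0.06, years), 2))  # ~852.80: rates up, price down
# Either way the holder receives 40 per year plus 1000 at maturity.
```

The price is volatile; the coupon stream is not, and only the latter matters to a buy-and-hold investor worried about default rather than market prices.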

There are many reasons you probably don’t want to just eat the dividends. By using appropriate rules of thumb and retirement planning you can create streams of cash payments that are better suited to your goals: choosing how much to withdraw gives you much more flexibility, and you have more information (your life expectancy, for example) than the companies deciding how to smooth their streams of dividends. But there are also good reasons why many people took dividends from large companies in the past and today use funds designed for high dividend yield, retirement income, and so on.

Comment by artifex on ITT-passing and civility are good; "charity" is bad; steelmanning is niche · 2022-07-05T10:28:26.426Z · LW · GW

I agree that steelmanning is bad, and I don’t know what to think of the “charity” cluster of principles. I at least think you should strive to understand what people said and respond as exactly as possible to what they said, not to what seems to you to be the strongest and most rational interpretation. Strength should only be a consideration for interpreting correctly what they said, insofar as an interpretation being stronger makes it more likely to be theirs. Doing otherwise would not be worth it even if only because it caused misunderstandings; you’re also liable to be wrong about which interpretation is strongest, and understanding people correctly is hard enough already that you should not give yourself that kind of additional work, because the additional effort is better invested in understanding better. But I also generally don’t like the framing of argumentative virtues, or the concern for diplomacy, when those work against common discourse patterns. If some discourse patterns are very common in debates, instead of working hopelessly against them, you can find ways to make use of them for your benefit. For example, you can apply specialization and trade to arguments. Two big bottlenecks on figuring out the truth through rational argument are manpower and biases, and this helps with both (and especially with manpower, which I think is probably the most important bottleneck anyway).

The situation where this can benefit you is when argument spaces are large: for example, when there are a lot of arguments and counterarguments on many sides of a complex issue, often including many incompatible lines of argument for the same conclusions, and you can’t explore the full space of arguments on even a single side yourself, unless perhaps you spend weeks doing so as your main activity. So there is no way you can explore the arguments on all sides.

Instead, you can adopt the view that seems most likely to be true to you (you can revise that as you get more information), try to find arguments supporting that view, and not try very hard to find arguments opposing it. This is the opposite of the advice usually given (the usual advice is bad in this situation). And you should argue with people who have other views. These people are more likely than you to focus on the weakest points in your arguments and on the weakest assumptions you’ve made that you haven’t justified or didn’t think you needed to justify (I know this is not always true; I only claim it’s more likely). And they’re probably going to do a better job of finding the best arguments against your position than you would yourself (also not always true; I just think these two points are more true than not when averaging over all cases). But these two points aren’t that important. The cases where they don’t apply are cases where you might be doing something wrong: if you are aware of better arguments against your position than similarly smart and rational people who disagree with it, you’ve probably spent more time and effort than you needed exploring arguments against your position, which you could have spent exploring arguments for your position, or arguments about other things, or just doing other things than exploring arguments about stuff.

The most important point is that a greater part of the space of arguments can be explored if each person only explores the arguments that support their position, and then they exchange by arguing. A deeper search can be done if each person specializes rather than both exploring the arguments on all sides. And doing a deeper search overall means getting closer to the truth in expectation. Arguing with other people allows exchanging only the best arguments so it should take less time than exploring yourself.

In this situation, you don’t need to be too worried about looking for arguments against your position, since you can just leave that to the people who disagree with you. It’s sensible to worry about being biased, but the primary motivation you should get from that worry is a motivation not to make excuses for not spending time arguing with people who disagree with you, rather than a motivation to spend time looking for arguments against your position yourself.

And you should privilege debating people who disagree with you, since they have explored different spaces of arguments than you have. (Arguing with people who share your conclusions for different reasons and disagree with your reasons is very good too, and I count them as “people who disagree with you”: the disagreements don’t have to be about the final conclusions.) Privilege people who are smart and rational and have thought much about the topic, so they’ll have done a deeper and better search; and people whose positions are uncommon and unpopular and aren’t those of people you’ve already debated. There will often be more than two mutually exclusive positions, and many incompatible lines of argument leading to them; you benefit more in expectation from debating people who have explored things you haven’t already heard about, which means things that are uncommon and unpopular or that you haven’t debated people about before.

Some other things that can improve the quality of the search are debates being in written form and asynchronous, so people have time to think and can look up the best information and arguments on the Web and check things on Wikipedia. And you should sometimes redebate the same things with the same people, because l’esprit de l’escalier is a very important thing and you should make it possible for other people to use it to your benefit (including without having to admit that they’re doing so and that they didn’t think of the best response the first time around, because its being known that they didn’t could be embarrassing to them, and you don’t want them to double down on a worse line of argument because of that).

Comment by artifex on Air Conditioner Test Results & Discussion · 2022-06-23T16:55:57.615Z · LW · GW

The air conditioner was intended as an example in which a product is shitty in ways the large majority of consumers don’t notice, and therefore market pressures don’t fix it.

But they do: among air-to-air heat pumps, dual-hose air conditioners exist (though one hose versus two is a huge gain in convenience), as do window air conditioners, which are more efficient (but cannot be installed in all windows), as do heat pumps with split indoor and outdoor units, which are much better (but more expensive). And ground-source heat pumps, which are better still, exist as well (but are still more expensive upfront and often not subsidized by utility companies and governments the way air-to-air heat pumps are; like the regulations on the units and on the people installing them, this depends on location, and there are places where they are widely used for heating and air conditioning). And simple fans, which are not even air conditioners, also exist. The market offers the entire range of possible tradeoffs between efficiency, convenience, and cost, and different consumers are using products across this entire range.

… though at the same time, a counter has incremented in the back of my head, and I do have a slight concern that I’m avoiding evidence against the “people don’t notice major problems” model.

You are avoiding evidence against that model, but not in the way you think. It’s because you were looking at air conditioner ratings on Amazon, which give you an impression of consumer preferences that is biased toward convenience.

There are a lot of people using more efficient systems for air conditioning that they also use for heating. Searching for air conditioners on Amazon gives you a distorted picture because it selects against systems that are also meant for heating and systems that usually require professional installation; these are the most efficient systems, so searching on Amazon gives you a strong selection bias against efficiency and in favor of convenience. But that doesn’t mean that the majority of consumers don’t notice which products are more efficient. It’s just that Amazon search results for air conditioners aren’t representative of the market: the most efficient air conditioners aren’t marketed as air conditioners, and consumers don’t purchase them on Amazon.

Comment by artifex on Is there a worked example of Georgian taxes? · 2022-06-17T04:12:36.509Z · LW · GW

Are there any examples of how much to tax a few properties in a real (or real-ish) example?

Land values are lower than they would be without income taxes, so attempts to estimate how much can be raised with this methodology will underestimate the real number. Market prices for vacant plots ignore most of the value of land, because privileges and regulations suppress most of that value. Real estate appraisals ignore even more value, because assessors often use as a basis the income generated by current land use without accounting for capital value, and because they use historical values that lag behind.

Comment by artifex on AGI Ruin: A List of Lethalities · 2022-06-07T16:33:34.646Z · LW · GW

Great post. Many of these arguments are fairly convincing.

Comment by artifex on Beware boasting about non-existent forecasting track records · 2022-05-23T04:55:53.061Z · LW · GW

My first 'dunk' on April 18, about a 5-year shortening of Metaculus timelines in response to evidence that didn't move me at all, asking about a Metaculus forecast of the Metaculus forecast 3 years later, implicitly predicts that Metaculus will update again within 3 years.

I do however claim it as a successful advance prediction, if something of a meta one

Wait: unless I misunderstand you, there’s a reasoning mistake here. You request epistemic credit for implicitly predicting that the Metaculus median was going to drop by five years at some point in the next three years. But that’s a prediction that the majority of Metaculites would also have made, and it was a given that it was going to happen in an interval of time as long as three years. It’s a correct advance prediction, if you did make it (let’s assume so and not get into inferring implicit past predictions with retrospective text analysis), but it’s not one that is even slightly impressive.

As an example to explain why, I predict (with 80% probability) that there will be a five-year shortening in the median on the general AI question at some point in the next three years. And I also predict (with 85% probability) that there will be a five-year lengthening at some point in the next three years.

I’m predicting both that Metaculus timelines will shorten and that they will lengthen! What gives? Well, I’m predicting volatility… Should I be given much epistemic credit if I later turned out to be right on both predictions? No, it’s very predictable and you don’t need to be a good forecaster to anticipate it. If you think you should get some credit for your prediction, I should get much more from these two predictions. But it’s not the case that I should get much, nor that you should.
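To see how little credit such a pair of predictions deserves, here is a rough simulation sketch; the random-walk model and the volatility figure are assumptions for illustration, not a claim about how the Metaculus median actually moves:

```python
import random

def both_moves_happen(years=3, steps_per_year=52, weekly_sd=0.5):
    """Model the median (in years until AGI) as a random walk; check whether
    it both falls 5 years below some earlier high (a shortening) and rises
    5 years above some earlier low (a lengthening) within the window."""
    level, lo, hi = 0.0, 0.0, 0.0
    shortened = lengthened = False
    for _ in range(years * steps_per_year):
        level += random.gauss(0, weekly_sd)
        if hi - level >= 5: shortened = True
        if level - lo >= 5: lengthened = True
        lo, hi = min(lo, level), max(hi, level)
    return shortened and lengthened

trials = 10_000
print(sum(both_moves_happen() for _ in range(trials)) / trials)
```

For any volatility in a plausible ballpark, both five-year moves co-occurring within three years is common, which is the point.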

Are there inconsistencies in the AGI questions on Metaculus? Within the forecast timeline, with other questions, with the resolution criteria? Yes, there are plenty! Metaculus is full of glaring inconsistencies. The median on one question will contradict the median on another. An AI question with stronger operationalization will have a lower median than a question with weaker operationalization. The current median says there is a four percent chance that AGI was already developed. The resolution criteria on a question will say it can’t resolve at the upper bound and the median will have 14% for it resolving at the upper bound anyway.

It’s commendable to notice these inconsistencies, and right to downgrade your opinion of Metaculus because of them. But the fact that you can observe such glaring inconsistencies frequently, and can predict in advance that specific ones will happen (including changes over time in the median that are predictable even in expected value, after accounting for skew), doesn’t make it right to conclude, even with weak confidence, that you are a better forecaster on even just AGI questions than most of the Metaculites forecasting on these questions (and the implicit claim of being “a slightly better Bayesian” actually seems far stronger and more general than that).

Why? Because Metaculites know there are glaring inconsistencies everywhere, they identify them often, they know that there are more, and they can find them, and fix most of them, easily. It’s not that you’re a better forecaster, just that you have unreasonable expectations of a community of forecasters who are almost all effectively unpaid volunteers.

It’s not surprising that the Metaculus median will change over time in specific and predictable ways that are inconsistent with good Bayesianism. That doesn’t mean they’re that bad: let us see you do better, after all. It’s because people’s energy and interest are scarce. The questions in tournaments with money prizes get more engagement, as do questions about things that are currently in the news. There are still glaring inconsistencies in these questions, because even that is not enough engagement to fix them all. (Also because the tools take a lot of time to use for making and checking your distributions.)

There are only 601 forecasters who have more than 1000 points on Metaculus: that means only 601 forecasters who have done even a pretty basic amount of forecasting. One of the two forecasters with exactly 1000 points has made predictions on only six questions, for example. You can do that in less than one hour, so it’s really not a lot.

If 601 sounds like a lot, consider that there are thousands of questions on the site, each with a wall of text describing the background and the resolution criteria. Predictions need to be updated constantly! The most active predictors on the site burn out because it takes so much time.

It’s not reasonable to expect not to see inconsistencies, predictable changes in the median, and so on. It’s not that they’re bad forecasters. Of course you can do better on one or a few specific questions, but that doesn’t mean much. If you want even just a small but worthwhile amount of evidence, from correct advance predictions, that you are a better forecaster than other Metaculites, you need, for example, to go and win a tournament: one of the tournaments with money prizes that many people are participating in.

Evaluating forecasting track records in practice is hard and very dependent on the scoring rule you use (rankings for PredictionBook vary a lot with the methodology for evaluating relative performance, for example). You need a lot of high-quality data to get significant evidence. If you have low-quality data, and only a little of it, you just aren’t going to get a useful amount of evidence.

Comment by artifex on My Morality · 2022-05-16T04:47:42.605Z · LW · GW

If morality is subjective, why do I form moral opinions and try to act on them? I think I do that for the same reason I think I do anything else. To be happy.

What makes you happy is objective, so if that’s how you ground your theory of morality, it is objective in that sense. It’s subjective only in that it depends on what makes you happy rather than what makes other possible beings happy.

If morality is a thing we have some reason to be interested in and care about, it’s going to have to be grounded in our preferences. Our preferences, not any possible intelligent being’s preferences—so it’s subjective in that sense. But we can’t make up anything, either. We already have a complete theory of how we should act, given by our preferences & our decision theory. Morality needs to be part of or implied by that in some way.

To figure out what’s moral, there is real work that needs to be done: evolutionary psychology, game theoretic arguments, revealed preferences, social science experiments, etc. Stuff needs to be justified. Any aggregation procedure we choose to use, any weights we choose to use in said aggregation procedure, need to be grounded—there has to be a reason we are interested in that aggregation procedure and these weights.

There are multiple kinds of utilities that have moral import for different reasons, some of them interpersonally comparable and others not. Preference utilities are not interpersonally comparable and we care about them for game theoretic reasons that would apply just as well to many agents very different from us (who would use different weights however); what weights and aggregation procedure to use must be grounded in these game theoretic reasons. However they are to be aggregated, it can’t be weighted-sum utilitarianism, since the utilities aren’t interpersonally comparable (which doesn’t mean they can’t be aggregated by other means). But pleasure utilities (dependent on any positive mental or emotional state) often are interpersonally comparable:

An [individual’s] inability to weigh between pleasures is an epistemic problem. [Some] pleasures are greater than others. The pleasure of eating food one really enjoys is greater than that of eating food one doesn’t really enjoy. We can make similar interpersonal comparisons. We know that one person being tortured causes more suffering than another stubbing their toe. (HT: Bentham’s bulldog)

At least it should be the case that some mental states can be biologically quantified in ways that should be interpersonally comparable. And they can have moral import. Why not? It all depends on what evolution did or didn’t do. We need to know in what ways people care about other beings (which state or thing related to these beings they care about), which of those beings, and to what degrees (and there can be multiple true answers to these questions).

How do we know? Well, there are things like ultimatum game experiments, the dictator game, kin altruism, and so on. The details matter and there seems to be much controversy over interpretation.

Can we just know through introspection? It would be awfully convenient if so, but that requires that evolution has given us a way to introspect on our preferences regarding other people and reliably get the real answers instead of social desirability bias. How do we know if that’s the case? Two ways.

Way one: by comparing the answers people claim to get through introspection with their actual behavior. If introspection is reliable, the two should probably match to a high degree.

Way two: by seeing how much variation there is in the answers people claim to get through introspection. We still need to interpret that variation. Is it more plausible that people have very different moralities than that their answers are very different for other reasons (which ones?)?

This fog is too thick for me to see through. Many smart people have tried, probably much harder than me, and sometimes have said a few smart things: [1] [2] [3]. There must be people who have figured much more out; if their thinking is written up somewhere, I would highly appreciate links.

Comment by artifex on Inequality is inseparable from markets · 2022-05-14T22:11:26.693Z · LW · GW

Why is inequality morally relevant?

Comment by artifex on How to be skeptical about meditation/Buddhism · 2022-05-02T02:25:30.713Z · LW · GW

I don’t, but… I’d like to see some indication that real knowledge is generated by discussion or investigation of meditation or Buddhism here. For example, global workspace theory, predictive processing, cognitive psychology, EEG, neuroscience: these weren’t motivated by meditation and Buddhism, I don’t think? Yes, there are neuroscientists who will write books about meditation and talk about interesting things in these books, and also about less interesting things like their profound spiritual insights, and I’m afraid the latter part of these books is the one motivated by meditation and Buddhism. Sometimes these books contain very good presentations of their subjects, and rationalists write good reviews of them, and that has value. But this indicates that there’s a market for such books, not really that meditation and Buddhism generate useful knowledge. It’s not a justification for investigating meditation and Buddhism to a particularly greater degree.

Comment by artifex on How to be skeptical about meditation/Buddhism · 2022-05-01T23:52:17.938Z · LW · GW

Meditation and Buddhism are of low interest to most rationalists who have not interacted with any of the in-person rationalist communities. My preference for how to approach these topics in the rationalist community would be: don’t, or do it somewhere other than the LessWrong frontpage, or do it much less than this. These hypotheses are being unreasonably privileged and overdiscussed on LessWrong relative to the ~nil amount of real knowledge that the discussion and investigation so far have generated.

Comment by artifex on Is Metaculus Slow to Update? · 2022-03-27T01:28:02.130Z · LW · GW

Thank you for doing this analysis!

Comment by artifex on A Roadmap to a Post-Scarcity Economy · 2021-10-30T21:32:27.159Z · LW · GW

A post-scarcity society can be defined as a society in which all the basic needs of the population are met and provided for free.

https://oll.libertyfund.org/title/universal-economics#lf1674_label_165

https://oll.libertyfund.org/title/universal-economics#lf1674_head_187

Comment by artifex on Making decisions under moral uncertainty · 2019-12-30T23:04:11.184Z · LW · GW

I think I like this post, but not the approaches.

A correct solution to moral uncertainty must not depend on cardinal utility, and it requires some rationality. So the Borda rule doesn’t qualify. Parliamentary model approaches are more interesting because they rely on intelligent agents to do the work.

An example of a good approach is the market mechanism. You do not assume any cardinal utility. In fact, you do not do anything directly with the preferences you have a probability distribution over at all. You have an agent for each set of preferences and extrapolate what that agent would do, if they had no uncertainty over their preferences and rationally pursued them, when put in a carefully designed environment that allows them to make arbitrary binding consensual precommitments (“contracts”) with other agents, and you weight each agent’s influence over the outcomes that the agents care about according to your probabilities.

What is tricky is making the philosophical argument that this is indeed the solution to moral uncertainty that we are interested in. I’m not saying it is the correct solution. But it follows some insights that any correct solution should follow:

  • do not use cardinal utility, use partial orders;
  • do not do anything with the preferences yourself, you are at high risk of doing something incoherent;
  • use tools that are powerful and universal: intelligent agents; let them bargain using full Turing machines. You need strong properties, not (for example) mere Pareto efficiency.

Comment by artifex on Perfect Competition · 2019-12-30T04:07:28.879Z · LW · GW

Fragility of value is used correctly only to make very different points from the one you are making here: points about how different the preference orderings you obtain are from the original preference orderings if you make changes to the complex computation that the values are. Consumer preferences in general equilibrium theory are represented by a real-valued function whose domain is the consumption set, a subset of a full commodity space. This function can be used to define an order relation that represents the consumer’s preferences, and each such relation is represented by any of an infinity of functions, since you can compose a representing function with any strictly increasing function. Consumer preferences are not the same thing as agents’ preferences or values, which are not at all related to commodity bundles and don’t have a consumption set as domain, even though they too can be used to define order relations. You cannot make this argument by confusing goods and values: the values that are fragile are not the consumer preferences. As far as the actual preferences determine consumer preferences over commodity bundles, they determine the consumer’s demand function according to prices and consumer endowments or wealth, and translate into buying and selling decisions, and that is relevant to perfect competition. The rest of the preferences is entirely orthogonal to perfect competition; if it weren’t, then it would, contradicting our assumption, have contributed to determining consumer preferences.
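For reference, the representation fact this relies on: a utility function on the consumption set carries only ordinal information, since

$$x \succeq y \iff u(x) \ge u(y) \iff f(u(x)) \ge f(u(y)) \quad \text{for any strictly increasing } f,$$

so demand behavior pins down only the ordering, never a particular function $u$.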

Comment by artifex on Effect of Advertising · 2019-11-26T23:04:25.081Z · LW · GW
because apparently the strongest evidence for "being the kind of person who buys X" is having bought X recently

In general, that you’ve bought something is evidence that you’re the kind of person who buys that thing. Furthermore, if you’ve bought certain items recently, you are far more likely to buy a similar product (for example, you regret the purchase and want to replace it) than someone who hasn’t.

Comment by artifex on Pricing externalities is not necessarily economically efficient · 2019-11-09T19:25:20.589Z · LW · GW

The statement says “if transaction costs are zero, the market produces the efficient outcome”, but what is most interesting is the equivalent contrapositive “if the market didn’t produce the efficient outcome, it was because of transaction costs”.

I would add that the problem is not only transaction costs but also irrationality. You will not get the efficient outcome if the transaction costs are sufficiently low but the agents are not rational enough to think of the transaction or to consent to it. Also, some transaction costs can be worked around, so the problem is irreducible transaction costs and irrationality.

I would also add that I think the conclusion applies to other coordination problems, market failures, and games in general, not just externalities. Many aggregation mechanisms can produce the efficient outcome in most or all such problems if transaction costs are low enough and the agents are rational enough. The market mechanism is not the only one: if you allow all agents to self-modify and prove to other agents that they did so, that should also be able to solve these problems if transaction costs are low enough and agents rational enough.

But no mechanism will always be able to produce an efficient outcome when transaction costs are high or rationality is bounded. For example, I think we can conceive of games in which producing an efficient outcome requires logical omniscience and a halting oracle (we might design a game in which producing an efficient outcome requires knowing the googolplexth Mersenne prime). Such a game might be solved by the market mechanism only if the agents were as rational as AIXIs.

Comment by artifex on Algorithms of Deception! · 2019-10-19T22:00:21.113Z · LW · GW

Category gerrymandering doesn’t seem like a different algorithm from selective reporting. In both cases, the reporter is providing only part of the evidence.