Posts

Three reasons to expect long AI timelines 2021-04-22T18:44:17.041Z
A new acausal trading platform: RobinShould 2021-04-01T16:56:07.488Z
Conspicuous saving 2021-03-20T20:59:50.749Z
Defending the non-central fallacy 2021-03-09T21:42:17.068Z
My guide to lifelogging 2020-08-28T21:34:40.397Z
Preface to the sequence on economic growth 2020-08-27T20:29:24.517Z
What specific dangers arise when asking GPT-N to write an Alignment Forum post? 2020-07-28T02:56:12.711Z
Are veterans more self-disciplined than non-veterans? 2020-03-23T05:16:18.029Z
What are the long-term outcomes of a catastrophic pandemic? 2020-03-01T19:39:17.457Z
Gary Marcus: Four Steps Towards Robust Artificial Intelligence 2020-02-22T03:28:28.376Z
Distinguishing definitions of takeoff 2020-02-14T00:16:34.329Z
The case for lifelogging as life extension 2020-02-01T21:56:38.535Z
Inner alignment requires making assumptions about human values 2020-01-20T18:38:27.128Z
Malign generalization without internal search 2020-01-12T18:03:43.042Z
Might humans not be the most intelligent animals? 2019-12-23T21:50:05.422Z
Is the term mesa optimizer too narrow? 2019-12-14T23:20:43.203Z
Explaining why false ideas spread is more fun than why true ones do 2019-11-24T20:21:50.906Z
Will transparency help catch deception? Perhaps not 2019-11-04T20:52:52.681Z
Two explanations for variation in human abilities 2019-10-25T22:06:26.329Z
Misconceptions about continuous takeoff 2019-10-08T21:31:37.876Z
A simple environment for showing mesa misalignment 2019-09-26T04:44:59.220Z
One Way to Think About ML Transparency 2019-09-02T23:27:44.088Z
Has Moore's Law actually slowed down? 2019-08-20T19:18:41.488Z
How can you use music to boost learning? 2019-08-17T06:59:32.582Z
A Primer on Matrix Calculus, Part 3: The Chain Rule 2019-08-17T01:50:29.439Z
A Primer on Matrix Calculus, Part 2: Jacobians and other fun 2019-08-15T01:13:16.070Z
A Primer on Matrix Calculus, Part 1: Basic review 2019-08-12T23:44:37.068Z
Matthew Barnett's Shortform 2019-08-09T05:17:47.768Z
Why Gradients Vanish and Explode 2019-08-09T02:54:44.199Z
Four Ways An Impact Measure Could Help Alignment 2019-08-08T00:10:14.304Z
Understanding Recent Impact Measures 2019-08-07T04:57:04.352Z
What are the best resources for examining the evidence for anthropogenic climate change? 2019-08-06T02:53:06.133Z
A Survey of Early Impact Measures 2019-08-06T01:22:27.421Z
Rethinking Batch Normalization 2019-08-02T20:21:16.124Z
Understanding Batch Normalization 2019-08-01T17:56:12.660Z
Walkthrough: The Transformer Architecture [Part 2/2] 2019-07-31T13:54:44.805Z
Walkthrough: The Transformer Architecture [Part 1/2] 2019-07-30T13:54:14.406Z

Comments

Comment by Matthew Barnett (matthew-barnett) on Matthew Barnett's Shortform · 2021-06-19T06:14:32.682Z · LW · GW

It's now been about two years since I started seriously blogging. Most of my posts are on LessWrong, and most of the rest are scattered across my Substack, the Effective Altruism Forum, and Facebook. I like writing, but I have a problem that I feel holds me back greatly.

In short: I often post garbage.

Sometimes when I post garbage, it isn't until much later that I learn it was garbage. When that happens, it's not so bad, because at least I've grown as a person since then.

But the usual case is that I realize that it's garbage right after I'm done posting it, and then I keep thinking, "oh no, what have I done!" as the replies roll in, explaining to me that it's garbage.

Most times when this happens, I just delete the post. I feel bad when this happens because I generally spend a lot of time writing and reviewing my posts. Some of the time, I don't delete the post because I still stand by the main thesis, even though the delivery or logical chain of reasoning was not very good, and so I still feel bad about it.

I'm curious how other writers deal with this problem. I'm aware of "just stop caring" and "review your posts more." But I'm sometimes in awe of people who seem to consistently never post garbage, and maybe they're doing something right that can be learned.

Comment by Matthew Barnett (matthew-barnett) on AI-Based Code Generation Using GPT-J-6B · 2021-06-17T04:16:26.304Z · LW · GW

Have you tried, y'know, testing your belief? ;)

You can Google its answers. I've been googling its answers and am not generally finding direct copy-pastes for each, though I'm also a bit confused about why Google returns no results for short strings such as "(s[0:length] == s[length::-1])".
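
(For the curious: that string is a fragment of a Python palindrome check. A minimal reconstruction of the kind of snippet under discussion -- my own sketch, not the model's verbatim output -- would be something like this:)

    def is_palindrome(s):
        length = len(s)
        # For slices, a start index past the end is clamped, so s[length::-1]
        # is just the reversed string; the comparison tests for a palindrome.
        return s[0:length] == s[length::-1]

    assert is_palindrome("racecar")
    assert not is_palindrome("hello")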

ETA: even if it's copying code but modifying it slightly so that the variable names match, it seems like (1) this is itself pretty impressive if it actually works reliably, and (2) I don't think the claim is that current tech is literally shovel-ready to replace programmers. That would be a strawman. It's about noticing the potential of this tech before it reaches its destination.

Comment by Matthew Barnett (matthew-barnett) on AI-Based Code Generation Using GPT-J-6B · 2021-06-17T04:05:33.230Z · LW · GW

I no longer consider agents with superhuman performance in competitive programming to be a ridiculous thing to pursue. 

Dan Hendrycks, Steven Basart, et al. recently released APPS, a benchmark for measuring the performance of ML models at the task of writing code. One part of the benchmark measures performance on competitive programming questions. I wrote a Metaculus question on when people expect this benchmark to be solved -- operationalized as getting above 80% strict accuracy on the competitive programming section.

Initial results are encouraging. GPT-Neo 2.7B passes nearly 20% of test cases on average for introductory coding problems, when the model is allowed to give 5 attempts (see Table 4 in the paper). A fine-tuned GPT-J-6B is likely to be even better.
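
For readers unfamiliar with the metric: strict accuracy counts a problem as solved only if the generated program passes every one of its test cases. Here is a toy sketch of the computation (my own illustration, not the APPS evaluation code; the names are made up):

    def strict_accuracy(per_problem_results):
        """per_problem_results: one list of booleans per problem, one boolean
        per test case (True = the generated code passed that case). A problem
        counts as solved only if *all* of its test cases pass."""
        solved = sum(1 for results in per_problem_results if all(results))
        return solved / len(per_problem_results)

    # Two problems with three test cases each; only the first is fully solved,
    # so strict accuracy is 0.5 even though 5 of 6 individual cases pass.
    print(strict_accuracy([[True, True, True], [True, True, False]]))  # 0.5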

Comment by Matthew Barnett (matthew-barnett) on Three reasons to expect long AI timelines · 2021-04-24T18:39:02.023Z · LW · GW

Wow, that chart definitely surprised me. Yes, this caused me to update.

Comment by Matthew Barnett (matthew-barnett) on Three reasons to expect long AI timelines · 2021-04-24T18:35:38.915Z · LW · GW

Nuclear power is not 10x cheaper.  It carries large risks so some regulation cannot be skipped.  I concur that there is some unnecessary regulation, but the evidence such as the linked source just doesn't leave "room" for a 10x gain.  Currently the data suggests it doesn't provide an economic gain over natural gas unless the carbon emissions are priced in, and they are not in most countries.

I recommend reading the Roots of Progress article I linked to in the post. Most of the reason nuclear power costs so much is burdensome regulation. Of course, regulation is not uniformly bad, but judging from Devanney Figure 7.11 in the article, we could have relatively safe nuclear energy for a fraction of its current price.

Comment by Matthew Barnett (matthew-barnett) on Three reasons to expect long AI timelines · 2021-04-23T20:16:13.249Z · LW · GW

Thanks for the useful comment.

You might say "okay, sure, at some level of scaling GPTs learn enough general reasoning that they can manage a corporation, but there's no reason to believe it's near".

Right. This is essentially the same way we might reply to Claude Shannon if he said that some level of brute-force search would solve the problem of natural language translation.

one of the major points of the bio anchors framework is to give a reasonable answer to the question of "at what level of scaling might this work", so I don't think you can argue that current forecasts are ignoring (2).

Figuring out how to make a model manage a corporation involves a lot more than scaling a model until it has the requisite general intelligence to do it in principle if its motivation were aligned.

I think it will be hard to figure out how to actually make models do stuff we want. Insofar as this is simply a restatement of the alignment problem, I think this assumption will be fairly uncontroversial around here. Yet, it's also a reason to assume that we won't simply obtain transformative models the moment they become theoretically attainable.

It might seem unfair that I'm treating safety and control as an input in our model for timelines, if we're using the model to reason about the optimal time to intervene. But I think on an individual level it makes sense to just try to forecast what will actually happen.

Comment by Matthew Barnett (matthew-barnett) on Three reasons to expect long AI timelines · 2021-04-23T07:08:53.804Z · LW · GW

These arguments prove too much; you could apply them to pretty much any technology (e.g. self-driving cars, 3D printing, reusable rockets, smart phones, VR headsets...).

I suppose my argument has an implicit premise: "current forecasts are not taking these arguments into account." If people actually were taking my arguments into account, and still concluding that we should have short timelines, then this would make sense. But I made these arguments because I haven't seen people talk about these considerations much. For example, I deliberately avoided the argument that, according to the outside view, timelines might be expected to be long, since that's an argument I've already seen many people make, and therefore we can expect a lot of people to take it into account when they make forecasts.

I agree that the things you say push in the direction of longer timelines, but there are other arguments one could make that push in the direction of shorter timelines

Sure. I think my post is akin to someone arguing for a scientific theory. I'm just contributing some evidence in favor of the theory, not conducting a full analysis for and against it. Others can point to evidence against it, and overall we'll just have to sum over all these considerations to arrive at our answer.

Comment by Matthew Barnett (matthew-barnett) on Three reasons to expect long AI timelines · 2021-04-23T04:38:57.662Z · LW · GW

I'm uncertain. I lean towards the order I've written them in as the order of relative importance. However, the regulation thing seems like the biggest uncertainty to me. I don't feel like I'm good at predicting how people and government will react to things; it's possible that technological advancement will occur so rapidly and will be celebrated so widely that people won't want it to stop.

Comment by matthew-barnett on [deleted post] 2021-04-18T04:39:43.791Z

This does not work without a drastic reduction in total government expenditure.

I agree, though I'm not sure whether this will always be true. Look at the last section of my post.

the majority of "wealth" is in the form of individuals' earnings potential

I meant wealth as in physical assets, especially capital.

Comment by matthew-barnett on [deleted post] 2021-04-18T04:38:30.170Z

The last section of my post comes to roughly the same conclusion.

Comment by matthew-barnett on [deleted post] 2021-04-18T00:16:40.253Z

Can you clarify what your point is? ChristianKl said that my proposal "creates a strong incentive for the government to make sure that the companies it owns can outcompete the other companies." I partially agreed but gave one reason to disagree; namely, that the government isn't profit-maximizing. 

You seem to be asserting the opposite of what ChristianKl said, that is, that the government has no incentive to outcompete other companies. Can you explain why?

Comment by matthew-barnett on [deleted post] 2021-04-17T23:32:27.309Z

The government might have an incentive to outcompete other companies, but the strength of this incentive is unclear to me. The US government already competes with the private sector in, e.g., delivering mail, but this hasn't led to the end of FedEx. Unlike private companies, the government generally isn't profit-maximizing in any normal sense, and so it's not clear why it would benefit from monopolistic practices.

Comment by Matthew Barnett (matthew-barnett) on A new acausal trading platform: RobinShould · 2021-04-01T21:04:22.268Z · LW · GW

For you, our patented superintelligent prediction algorithm anticipated that you would want an account, so we already created one for you. Unfortunately, it also predicted that you would make very bad investments in literal galaxy-brain hot takes. Therefore, we decided to terminate your account.

Comment by Matthew Barnett (matthew-barnett) on The EMH is False - Specific Strong Evidence · 2021-03-22T21:10:55.762Z · LW · GW

The "obvious" rationalist investments into Tesla and AMD are an after-the-fact story to make it sound like that was the right thing to do back then.

I'm also unconvinced by this evidence. As other comments here noted, the rise of AMD and GPU stocks has very little to do with deep learning. What would make me more persuaded is if the author made specific public predictions about stock prices, with justification, and then analyzed them later.

In other words, I want to see what tech stock picks deluks917 currently thinks are undervalued. Then we'll see if they're right in a few years.

Comment by Matthew Barnett (matthew-barnett) on The EMH is False - Specific Strong Evidence · 2021-03-22T20:55:39.151Z · LW · GW

You can't buy the S&P 500, at least not directly. Instead, you can buy funds that try to match the performance of the S&P 500 (while charging a small fee), like VFINX and VOO.

VTSAX, by contrast, doesn't track the S&P 500, but this isn't necessarily a bad thing. From the Vanguard website,

Vanguard Total Stock Market Index Fund is designed to provide investors with exposure to the entire U.S. equity market, including small-, mid-, and large-cap growth and value stocks.

In other words, it's more diversified than an S&P 500 fund.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-22T19:17:32.865Z · LW · GW

There may be some groups for which this is true, but it hasn't been my experience in any of the US or UK work or social subcultures I've been part of.

Sure, many different social environments use different measures for status. In the post I talked about how people rank themselves based on wealth. Here, I mentioned how some people use school and jobs.

My main point was that we already have status rankings. It's true that we don't have a total order, global status ranking. But locally speaking, I don't see what's wrong with introducing a new metric for ranking status. I'm reminded of something I saw recently as a response to social anarchists who want to abolish hierarchies: people will just create new hierarchies along different axes in response to the revolution. We might as well just ask which hierarchies are best to have.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-21T21:19:33.228Z · LW · GW

Another concern I have is that, what you're proposing could make net worth the most salient feature of an individual--this information is out there in a database, rather than something one needs to spend time and effort to get to know the person to find out. This could lead to a hyper-competitive environment where individuals' worth is reduced to a number, and social status is done through rankings.

It seems like most social environments are already like this to some extent. People are ranked according to the school or job they are at, and the grades they got. The first is public information, and the second can generally be inferred from the first.

I agree that we shouldn't try to make people feel like their social status is reducible to a single number. But if people already think that their social status is reducible to a single number, we might as well think about what number it ought to be, rather than just pretending that the number doesn't already exist.

I also don't know why it's better that social status be reducible to a set of numbers rather than just one number.

In addition, much of the concerns about wealth inequality in today's world still apply here.

I actually think my proposal should be enthusiastically supported by those concerned with wealth inequality. For one, it makes people's wealth much more salient, which will probably make people more willing to redistribute wealth once they viscerally understand just how well some people are doing.

Comment by Matthew Barnett (matthew-barnett) on Mati_Roy's Shortform · 2021-03-21T00:29:13.927Z · LW · GW

The standard example of a public good is national defense. In that case, you're probably right that the market can't provide it, since a private provider would be viewed as being in competition with the government military, and therefore would probably be seen as a threat to national security.

For other public goods, I'm not sure why the government would have a monopoly. Scientific research is considered a public good, and yet the government doesn't put many restrictions on what types of science you can perform (with some possible exceptions like nuclear weapons research). 

Wikipedia lists common examples of public goods. We could go through them one by one. I certainly agree that for some of them, your hypothesis holds.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-20T22:42:29.635Z · LW · GW

Also, in my experience, middle-aged and older people tend to downplay their wealth and not brag about it (why? Not entirely sure).

I think a lot of people have made the observation that old people care less about status than young people. I'm not sure why that might be, but I could come up with some possible evolutionary explanations. My general hypothesis would probably be something like: young people have much more to lose by being low status. They want to get a promising career and have the best romantic partner. For old people, the stakes are lower.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-20T22:30:36.154Z · LW · GW

Saying "people prefer buying status to saving money" is no different from saying "people prefer buying fancy cars to saving money". Pointing out that people are buying status as well as cars doesn't explain why they consistently prefer to buy status now rather than save for status later.

Yeah, I should clarify that I don't think the myopia explanation and the conspicuous-consumption explanation are at odds with each other. Ultimately it's myopia that causes people to spend rather than invest right now. But we might also want to know why people are spending so much on the things they do, so that we can figure out how to encourage them to save instead.

Also, people definitely do buy fancy (i.e. expensive) cars. And houses, clothes, jewelry, etc. You say that people "don't want to be seen as bragging", but when someone wears a $100,000 diamond wedding ring, what else is that but bragging?

In the case of cars, people will say, "I have a car so that I can go from point A to point B." A house "is there so that I have shelter to live in." Clothes are "to protect me from the elements."

The only example you mentioned that doesn't seem to have an alternative motive is jewelry. But even in the case of a diamond ring, people usually say that they're buying it to show how much they care about their partner, not to show how wealthy they are.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-20T21:49:32.061Z · LW · GW

I was able to find articles that talked about Norway's system requiring people to disclose their tax returns to others, and other articles about how you can view someone's salary in Sweden. I don't know if these were the policies you were referring to. If they were, then I must say that these are interesting systems and worth reading about, but they're also not quite what I am proposing, since they don't seem to be about encouraging people to save money.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-20T21:42:01.416Z · LW · GW

To be successful this would need some sort of mechanism making wealth-visibility the default. But that might not be enough, because probably rich people would be most likely to opt out, making opting out itself a signal of wealth.

Yeah, that does seem tricky. My guess is that you might be right that rich people would be the most likely to opt out. But I'm not so sure. Opting out might be more akin to countersignaling, in that the only people doing it would be either people who are actually poor and therefore don't want to be embarrassed, or very wealthy people who want to signal that they're above it all. If that's the case, then being in the system still serves some purpose for people in the middle of the distribution.

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-20T21:36:14.572Z · LW · GW

A couple of other issues: 1. Not all wealth is in the form of things with an immediately available price. What is the wealth-looking-up system going to do with houses, privately-held companies, artworks that were last traded 50 years ago, etc.?

Of course, the database wouldn't have to be perfect. It's perfectly fine for people to have other goods that aren't represented in the database, since the operating assumption of the database is just that people should more readily signal wealth that they've saved -- not that they should perfectly signal their wealth.

The privacy implications are worse than you may think: if the system gives a snapshot of a person's wealth, then you can track how that changes from day to day. You can work around that to some extent by reporting some sort of smoothed and/or deliberately noise-corrupted value, but doing that in a way that genuinely doesn't leak more information than you want it to is tricky.

Yeah, that makes sense. Another possibility is to show low-resolution information, such as the amount in someone's account rounded to the nearest $10,000. Something like that would leak about as much information as current conspicuous behavior, which already allows people to estimate someone's economic status via noisy variables like "how many cars they bought recently" and "how big their house is." Ultimately, I think that being able to opt out is the crucial part here that makes these concerns less pressing.
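
As a toy illustration of what low-resolution reporting might look like (the bucket size and noise level here are made-up parameters, not a concrete proposal):

    import random

    def displayed_wealth(net_worth, bucket=10_000, noise_frac=0.05):
        """Coarsen a net-worth figure before display: jitter it slightly,
        then round to the nearest bucket, so small day-to-day changes
        don't leak through repeated lookups."""
        jittered = net_worth * (1 + random.uniform(-noise_frac, noise_frac))
        return round(jittered / bucket) * bucket

    print(displayed_wealth(137_450))  # e.g. 140000, varying run to run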

Comment by Matthew Barnett (matthew-barnett) on Conspicuous saving · 2021-03-20T21:20:35.632Z · LW · GW

Myopia is the ultimate explanation, but it's also incomplete. As explained in the linked paper, there's a wealth of data showing that people spend a lot on consumption that doesn't seem to raise their physical welfare. For example, people spend a lot on funerals even in places where people aren't getting adequate nutrition.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-12T20:38:03.524Z · LW · GW

I need to read that Huemer book, it sounds very interesting from what you've quoted in this and the other thread here.

Awesome. I should say that I don't agree with all of the book's conclusions.

I have quite a lot of ... sympathy(?) for the actual philosophical movements' conclusions, however I still think it collapses to being a bunch of heuristics on top of utilitarian arguments in the end. Also I think objectivism (libertarianism's radical grandkid?) is ... evil? Not utilitarian compatible, at least.

I think I'm pretty much in the same spot here. I think the utilitarian arguments for libertarianism are the strongest, but in the end, I think utilitarians have given good arguments that cast a ton of doubt on the libertarian project. I'm getting a bit off-topic from the post I wrote, but I'll briefly summarize my views below.

In my opinion, the strongest argument for libertarianism is that it leads to higher medium-run economic growth. My views here are closely aligned with Tyler Cowen's in his book Stubborn Attachments. Straightforwardly, living standards have gone way up in the last 200 years, primarily due to technological progress and the economic liberalization that has enabled it. Libertarian governance lowers regulatory burden, reduces the deadweight loss associated with taxation, and leaves businesses with more financial capital available for investment. These effects arguably raise economic growth more than the downsides of libertarian governance might lower it.

However, while economic growth is important on medium timescales, existential risk is far more important in the long run. Bostrom outlined the fundamental reason why about 20 years ago. His vulnerable world paper provides a direct argument for centralized government.

Since I don't actually agree with deontological libertarianism, I can't fundamentally defend it against some of the objections you've raised. I could play Devil's advocate for a while, but I don't feel like I know enough about the topic to go much further.

And going full circle, actual political movements that use the label seem built around objectivists, in that they're willing to say: the principle is more important than the utilitarian outcome of its application. Taxation is bad even if it helps people. Except, of course, modern political movements are awful and don't say that in public; that's just for the inner circle. In public they just lie (in my opinion) and claim that all government activity is net utility destroying.

Your critique of the actual political movement seems valid to me. FWIW, I think it's usually a cheap shot to argue against the median activist for a given cause. In large political movements, it's typical for >90% of the people who identify with the movement to have very bad arguments for their position. That's why I think we should actually look at what the most well-known and respected philosophers are saying about the issue.

In this case, I would recommend David Friedman's essay "Market Failure: An Argument for and Against Government". His central argument against government is not at all deontological. Instead, it's based on the conjunction of three basic points,

  • Market solutions are not always best because of a large class of market failures.
  • If we look at the source of market failures, we find that they're generally due to markets failing to internalize the costs and benefits of market interactions.
  • However, in a democracy, the costs and benefits of political interactions are even less internalized than in market interactions.

Therefore, democracy is not a solution to market failure. I think this is quite different from the claim that "all government activity is net utility destroying", and it's one I would be quite happy to see a solid counterargument to.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-12T06:23:54.617Z · LW · GW

Your argument here is both circular, and committing the noncentral fallacy!

Then you may be interested in Aaron Bergman's defense of circular arguments (begging the question) and my defense of the non-central fallacy here.

Jokes aside, I didn't think robbery was essential to my argument there. I could have said "Suppose that you want to move to Hawaii because it's so beautiful, but you know (because you saw something on the internet) that upon arrival, someone will kill you," and the structure of my argument would be identical.

My point was that merely taking an action that predictably results in some effect does not imply that you consented to that effect. If I take an action that predictably leads to my enemy capturing me in battle (since I have no better options), that does not mean that I am consenting to enemy capture.

It would be difficult to precisely define consent. However, I think under any common sense definition of consent, something can still be non-consensual even if you knew it would happen to you as a result of taking an action.

If you want to strengthen your argument, limit it to: 'nonconsensual taxation is theft'.

I agree, although I assumed that was already implicit in what I had said.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-12T03:16:05.924Z · LW · GW

I was replying to ShemTealeaf

Oh, makes sense. Because of how I was notified, I thought you were replying to me. Read my comment as if I thought you were. :)

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-11T23:26:13.214Z · LW · GW

Granted even if you leave one country you'd still have to be accepted by some other country where you end up paying some taxes.... buuuut and this is why I hate this argument so much.... that's because citizens have 'collective' private property ownership over the sovereign nation they are a part of. The libertarian argument against taxation reduces to abolition of private property! 

I think you may be giving short shrift to the actual philosophical arguments made by libertarians. Michael Huemer responds to your argument that "the state owns the land you are on" in section 2.5.1 of The Problem of Political Authority. I'll quote some of his reply here,

Even if we granted that the state owns its territory, it is debatable whether it may expel people who reject the social contract (compare the following: if anyone who leaves my party before it is over is doomed to die, then, one might think, I lose the right to kick people out of my party). But we need not resolve that issue here; we may instead focus on whether the state in fact owns all the territory over which it claims jurisdiction. If it does not, then it lacks the right to set conditions on the use of that land, including the condition that occupants should obey the state’s laws.

For illustration, consider the case of the United States. In this case, the state’s control over ‘its’ territory derives from (1) the earlier expropriation of that land by European colonists from the people who originally occupied it and (2) the state’s present coercive power over the individual landowners who received title to portions of that territory, handed down through the generations from the original expropriators. This does not seem to give rise to a legitimate property right on the part of the U.S. government.

Now one may conclude, as you do, that this argument leads to a more general "abolition of private property." After all, many property claims in the world are either the result of conquest and expropriation, or the result of the inheritance of such expropriation. However, this response would ignore two facts.

First, the vast majority of property in the world can't actually be traced back to expropriation, except indirectly. For example, the high market price of a smartphone is driven almost entirely by the actual engineering and construction of its components, rather than the value of the raw materials that make it up. While the raw materials might be stolen property, the labor used to make it arguably isn't (though Marxists famously disagree).

Second, it is perfectly consistent to argue for property rights in the abstract while holding that most actual claims to property in the real world are illegitimate. Libertarian Robert Nozick defended what he called the "Lockean proviso" in Anarchy, State, and Utopia, under which property rights claims are only valid under a specific set of circumstances. Otherwise, we must forfeit them. One article summarizes some radical implications of this view,

One of the most controversial ideas contained in this work is Nozick’s defense of the “Lockean proviso,” which requires the “rectification” of outcomes that result from the unjust appropriation of property. Reparations redressing the effects of slavery, however, may just be the tip of the iceberg should we accept this robust conception of the Lockean proviso. Going back into the past in search of wrongs could lead to the discovery of innumerable injustices that conceivably demand contemporary rectification by their perpetrators’ descendants. It could also serve as the rationale for radically altering (and even abolishing) the global capitalist system altogether. For this reason, the proviso has been far more popular on the Left than the Right.

Nozick usually comes across as a rather traditional libertarian, arguing that “principles of justice” regarding the acquisition and transfer of property exclude theft, fraud, and enslavement. A just appropriation of property must be the result of voluntary and informed consent that both parties to the exchange of property freely exercise. However, he also emphasizes the importance of understanding the “original” or “historical” manner in which the ownership of property came about. If “past injustices” (including theft, fraud, and enslavement) enabled this ownership, then “rectification” of these crimes must take place. Nozick consistently argues that parties that have been made “worse off” by actions like these deserve compensation.

Comment by matthew-barnett on [deleted post] 2021-03-11T20:54:52.618Z

Agreed. Lifelogging and quantified self have a lot of overlap, but they generally refer to distinct clusters. Lifelogging is more about capturing high-resolution data about someone's life, such as audio or video. Quantified self refers to low-resolution statistics, such as heart rate and time spent sleeping. The aims of the two are often different as well. Lifeloggers (especially on LessWrong) will probably be more interested in lifelogging as a means of life extension, whereas that rarely seems to be the goal for people interested in quantified self.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-11T20:30:51.453Z · LW · GW

So this is what's going to stop the rich person from opting out. The threat of violence if they do so. In that light - can we still say they are allowed to opt out?

Well, in the hypothetical, yes, they can opt out. We will assume that the people would not rise up against the rich person if they voluntarily opted out of governance under some hypothetical agreement.

Can you clarify your point more? I am unsure whether you are merely making some general observation about why rich people can't opt out of taxes in the real world, or whether you are making some theoretical argument for the implausibility of one of the assumptions in my thought experiment.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-10T18:59:52.270Z · LW · GW

If there were some way of opting out of the contract (which also means you opt out of the services above), the vast majority of people would not do so

It depends on what you mean by this. Imagine a community of 99 poor people, and one rich person. Every year, the people conduct a vote on whether to tax the one rich person and redistribute his wealth. Sure enough, most people vote for the policy, and most people like the benefits that this governance structure provides. If given the choice, the vast majority of people in the community would not opt out. But that's leaving out something important.

If everyone really were given a choice to opt out, then precisely one person would: the rich person. After opting out, the community would lose a large tax base and would therefore need to tax the next richest person. This next richest person would probably then want to opt out too.

Put another way, governance is an iterated game. If given the choice, the vast majority of people would prefer not to opt out in the first round. After sufficient iterations, however, it seems most would prefer to opt out.
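
Here is a toy simulation of that unraveling dynamic (all the numbers -- the wealth distribution, the tax rate, and the value of membership -- are made up for illustration):

    # Each round the community taxes its richest remaining member. If the tax
    # bill exceeds the (assumed) benefit of membership, that member opts out,
    # and the burden shifts to the next richest person.
    wealths = [2 ** k for k in range(10, 0, -1)]  # 1024, 512, ..., 2
    BENEFIT = 20    # assumed value each member gets from community services
    TAX_RATE = 0.5  # assumed tax rate applied to the richest member

    members = list(wealths)
    while members:
        richest = max(members)
        if TAX_RATE * richest <= BENEFIT:
            print(f"Stable: members with wealth {members} all stay")
            break
        print(f"Member with wealth {richest} opts out")
        members.remove(richest)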

And that's not even getting into the objection that one of the main reasons people would not opt out of governance is that they've been indoctrinated into believing government is good. Given the choice to opt out of aging, many say they would not want to. However, if we grew up in a world where aging was always known to be optional, I'm sure the statistics would be different.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-10T18:03:30.308Z · LW · GW

If you want a more detailed reply to your objection, it might be worth picking up a copy of Huemer's book, The Problem of Political Authority. The problem with most of these cases is that they only appear like strong arguments if we're already committed to the premise that we should treat state actors and non-state actors differently. In other words, they only appear strong if we begin with the conclusion we set out to prove. For instance,

Empirically lots of people do agree to the contract by explicitly getting a visa and coming to the country and later becoming citizens

Suppose that you want to move to Hawaii because it's so beautiful, but you know (because you saw something on the internet) that upon arrival, someone will rob you. If knowing this information, you still move to Hawaii, does this mean that you are consenting to being robbed? Even if when you actually get to Hawaii, you make sure to explain to every potential robber that you really really don't want to be robbed?

You do use the services provided to you by the government (roads, utilities, fire department, police, parks, etc)

As Huemer points out, this fact can't be strong evidence that I am consenting to be governed, because nearly everyone knows that they'll be forced to pay taxes whether or not they use those services. Likewise, if you offer your kidnapped victims food, and they accept, that does not imply that they agreed to be kidnapped.

Personally, I don't think the contract argument is the best argument for governance. I'd be more inclined to make the consequentialist argument for government: that is, that governance provides greater utility overall compared to the alternative. That's also the argument that Scott Alexander seems to want people to use. Huemer also directly replies to this argument in chapter 5 and part 2 of his book, if you're curious.

Comment by Matthew Barnett (matthew-barnett) on Defending the non-central fallacy · 2021-03-10T17:28:28.333Z · LW · GW

I feel like the obvious response is "there is something like consent with taxation, because people have agreed to a contract in which they pay taxes as long as everyone else pays taxes, and force is used to enforce the contract". It still seems like you're still playing games with emotional reactions unless you address this point.

Is there a contract? I certainly never signed one. Yet I still have to pay taxes. FWIW, Michael Huemer responds to this objection directly in chapters 2 and 3 of his book, The Problem of Political Authority. He concludes,

The social contract theory cannot account for political authority. The theory of an actual social contract fails because no state has provided reasonable means of opting out – means that do not require dissenters to assume large costs that the state has no independent right to impose. All modern states, in refusing to recognize explicit dissent, render their relationships with their citizens nonvoluntary.

Most accounts of implicit consent fail, because nearly all citizens know that the government’s laws would be imposed upon them regardless of whether they performed the particular acts by which they allegedly communicate consent. In the case of those governments that deny any obligation to protect individual citizens, the contract theory fails for the additional reason that, if there ever was a social contract, the government has repudiated its central obligation under the contract, thereby releasing its citizens from the obligations they would have had under that contract.

The central moral premise of the traditional social contract theory is commendable: human interaction should be carried out, as far as possible, on a voluntary basis. But the central factual premise flies in the face of reality.


I also think this is just a rephrasing of what Scott said, but in the language used by Huemer.

Yes, although I was quoting Huemer for what he said after that quoted paragraph.

Comment by Matthew Barnett (matthew-barnett) on Current cryonics impressions · 2021-02-27T08:05:03.979Z · LW · GW

There is a decent chance that we are already at the freezing part of 2. For instance, a defrosted vitrified rabbit brain apparently appeared to be in good order, though I assume we don’t know how to reattach brains to rabbits, alas.

The reference was to aldehyde-stabilized cryopreservation, which is quite a bit different from a typical vitrification procedure. In particular, you can't just rewarm the tissue and expect it to function at all. When aldehyde-stabilized cryopreservation won the Large Mammal BPF Prize in 2018, the authors of the announcement had this to say about the technique,

It is important to understand that the researchers did not actually revive a pig or pig brain. The first step in the ASC procedure is to perfuse the brain’s vascular system with the toxic fixative glutaraldehyde, thereby instantly halting metabolic processes by covalently crosslinking the brain’s proteins in place, and leading to death by contemporary standards (but not necessarily information-theoretic standards). Glutaraldehyde is sometimes used as an embalming fluid, but is more commonly used by neuroscientists to prepare brain tissue for the highest resolution electron microscopic and immunofluorescent examination. It should be obvious that such irreversible crosslinking results in a very, very dead brain making future revival of biological function impossible. So, it is reasonable to ask: “What is the point of a procedure that can preserve the nanoscale structure of a person’s brain when biological revival is impossible?” The answer lies in the possibility of future non-biological revival.

A growing number of scientists and technologists believe that future technology may be capable of scanning a preserved brain’s connectome and using it as the basis for constructing a whole brain emulation, thereby uploading that person’s mind into a computer controlling a robotic, virtual, or synthetic body. The Brain Preservation Prize challenged the scientific community to develop a ‘bridge’ to that future mind uploading technology. The similarity to cryonics is obvious, but in this case the possibility of biological revival was dismissed as currently not feasible. Focus was instead directed toward provably preserving the information content of the brain as encoded within the connectome. Quoting from a recent video presentation by BPF President Kenneth Hayworth: “Aldehyde-Stabilized Cryopreservation is cryonics for uploaders.”

Comment by Matthew Barnett (matthew-barnett) on It’s not economically inefficient for a UBI to reduce recipient’s employment · 2021-02-26T21:19:36.380Z · LW · GW

I expect a lot of people will find this argument to be confusing because they don't have an intuitive sense of what it means for something to be "economically efficient." To clarify what I believe to be your argument, I propose a thought experiment:

Suppose the American government discovered a portal to another universe that, when opened, spews forth an enormous amount of wealth (final consumer goods, not dollar bills) at some rate. After some tabulation, we find that the portal gives us so much free wealth that distributing its bounty equally among US citizens would provide everyone $1000 a month. In response, politicians pass a new law describing how the wealth ought to be distributed: we auction off the goods the portal gives us, and then distribute the revenue from the auctions equally, so as to provide everyone a fair share of the pot (i.e., $1000 a month for all).

Now, suppose someone claims that we should close this portal. Their argument: if everyone were given $1000 a month, then many would sit at home and do nothing. The portal decreases the incentive to work, and we therefore must not receive any of its benefits.

I'd imagine most people would not accept this argument for closing the portal. But indeed, this argument is precisely the one that people often give against UBI.

Comment by Matthew Barnett (matthew-barnett) on Anti-Aging: State of the Art · 2021-01-11T01:20:39.627Z · LW · GW

You're right about (1). I seem to have misread the chart, presumably because I was focused on worms.

Concerning (2), I don't see how your argument implies that the marginal returns to new resources are high. Can you clarify?

Comment by Matthew Barnett (matthew-barnett) on Two explanations for variation in human abilities · 2021-01-10T17:29:38.711Z · LW · GW

Formulations are basically just lifted from the post verbatim, so the response might be some evidence that it would be good to rework the post a bit before people vote on it. 

But I think I already addressed the fundamental reply at the beginning of section 2. The theses themselves are lifted from the post verbatim; however, I state that they are incomplete.

Maybe you'd class that under "background knowledge"? Or maybe the claim is that, modulo broken parts, motivation, and background knowledge, different people can meta-learn the same effective learning strategies? 

I would really rather avoid making strict claims about learning rates being "roughly equal" and would prefer to talk about how, given the same learning environment (say, a lecture) and backgrounds, human learning rates are closer to equal than human performance in learned tasks.

Comment by Matthew Barnett (matthew-barnett) on Two explanations for variation in human abilities · 2021-01-10T00:11:45.794Z · LW · GW

I think it's important to understand that the two explanations I gave in the post can work together. After more than a year, I would state my current beliefs as something closer to the following thesis:

Given equal background and motivation, there is a lot less inequality in the rates at which humans learn new tasks, compared to the inequality in how humans perform learned tasks. By "less inequality" I don't mean "roughly equal" as your prediction-specifications would indicate; the reason is that human learning rates are still highly unequal, despite the fact that nearly all humans have similar neural architectures. As I explained in section two of the post, a similar architecture does not imply similar performance. A machine with a broken part is nearly structurally identical to a machine with no broken parts, yet it does not work.

Comment by Matthew Barnett (matthew-barnett) on Anti-Aging: State of the Art · 2021-01-02T00:21:17.322Z · LW · GW

The personal strategies for slowing aging are interesting, but I was under the impression that your post's primary thesis was that we should give money to, work for, and volunteer for anti-aging organizations. It's difficult to see how doing any of that would personally make me live longer, unless we're assuming unrealistic marginal returns to more effort.

In other words, it's unclear why you're comparing anti-aging and cryonics in the way you described. In the case of cryonics, people are looking for a selfish return. In the case of funding anti-aging, people are looking for an altruistic return. A more apt comparison would be about prioritizing cryonics vs. personal anti-aging strategies, but your main post didn't discuss personal anti-aging strategies.

Comment by Matthew Barnett (matthew-barnett) on Anti-Aging: State of the Art · 2021-01-01T22:43:06.117Z · LW · GW

I appreciate the detailed and thoughtful reply. :)

I and others think that anti-aging and donating to SENS is probably a more important cause area than most EA cause areas (especially short-term ones) besides X-risk for the reasons below.

I agree that anti-aging is neglected in EA compared to other short-term, human-focused cause areas. The reason is likely that the people who would be most receptive to anti-aging move to other fields. As Pablo Stafforini said,

Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.

I wrote a post about how anti-aging might be competitive with longtermist charities here.

Data from human trials suggest many of these approaches have already been shown to reduce the rate of cognitive impairment, cancer, and many other features of aging in humans. Given these changes are highly correlated with biological aging, the evidence strongly suggests the capacity for the approaches mentioned to slow biological aging in humans.

Again, this is nice, and I think it's good evidence that we could achieve modest success in the coming decades. But in the post you painted a different picture. Specifically, you said,

The 'white mirror' of aging is a world in which biological age is halted at 20-30 years, and people maintain optimal health for a much longer or indefinite period of time. Although people will still age chronologically (exist over time) they will not undergo physical and cognitive decline associated with biological aging. At chronological ages of 70s, 80s, even 200s, they would maintain the physical appearance and much lower disease risk of a 20-30-year-old.

If humans make continuous progress, then eventually we'll get here. I have no issue with that prediction. But my objection concerned the pace and tractability of research. And it seems like there's going to be a ton of work going from modest treatments for aging to full cures.

One possible response is that the pace of research will soon speed up dramatically. Aubrey de Grey has argued along these lines on several occasions. In his opinion, there will be a point at which humanity wakes up from its pro-aging trance. From this perspective, the primary value of research in the present is to advance the timeline when humanity wakes up and gets started on anti-aging for real.

Unfortunately, I see no strong evidence for this theory. People's minds tend to change gradually in response to gradual technological change. The researchers who said this year that "I'll wait until you have robust mouse rejuvenation" will just say "I'll wait until you have results in humans" when you have results in mice. Humans aren't going to just suddenly realize that their whole ethical system is flawed; that rarely ever happens.

More likely, we will see gradual progress over several decades. I'm unsure whether the overall project (i.e., longevity escape velocity) will succeed within my own lifetime, but I'm very skeptical that it will happen within, e.g., 20 years.

In addition, in the past 2 years, human biological aging has already been reversed using calorie restriction, and with thymic rejuvenation, as measured by epigenetic (DNAm) aging.

I don't think either of these results are strong evidence of recent progress. Calorie restriction has been known about for at least 85 years. The thymic rejuvenation result was a tiny trial with ten participants, and the basic results have been known since at least 1992.

The recent progress in epigenetic clocks is promising, and I do think that's been one of the biggest developments in the field. But it's important to see the bigger picture. When I open up old Alcor Magazine archives, or old longevity books from the 1980s and 1990s, I find pretty much the same arguments that I hear today for why a longevity revolution is near. People tend to focus on a few small laboratory successes without considering whether the rate of laboratory successes has gone up, or whether it's common to quickly go from laboratory success to clinical success.

Given that 86 percent of clinical trials eventually fail, and the marginal returns to new drug R&D have gone down exponentially over time, I want to know what specifically should make us optimistic about anti-aging that's different from previous failed predictions.

I understand that the number of longevity biotech companies may (wrongly) suggest that the field is well-funded. But this number is not an accurate proxy for the relative funding received by the basic geroscience needed to develop cures for aging, from which these companies are spun out.

If the number of companies working on rejuvenation biotechnology did not accurately represent the amount of total effort in the field, then what was the point of bringing it up in the introduction?

I think many EAs assume academia is an efficient market that will self-correct to prioritise research with the greatest potential impact

Interestingly, I get the opposite impression. But maybe we talk to different EAs.

Aubrey de Grey who has significant insight into the landscape of funding for anti-aging believes that $250-500 million over 10 years is required to kickstart the field sufficiently so that larger sources of funding will flow in.

I don't doubt Aubrey de Grey's expertise or his intentions. But I've heard him say this line too, and I've never heard him give any strong arguments for it. Why isn't the number $10 billion or $1 trillion? If you think about comparably large technological projects in the past, $500 million is a paltry sum; yet I don't see a good reason to believe that this field is different from all the others. Moreover, there is a well-known bias that people within a field are more optimistic about their work than people outside of it.

For example, a drug or cocktail of therapies that extend life of all humans on Earth by 10 years essentially allows 10-years' worth of people who would otherwise have died of aging (~400 million people) to potentially reach the point at which AI solves aging and hence, longevity escape velocity.

This is only true so long as the drug can be distributed widely almost instantaneously. By comparison, it usually takes vaccines several decades to be widely distributed. I also find it very unlikely that any currently researched treatment will add 10 years of healthy life discontinuously. Again, progress tends to happen gradually.

Comment by Matthew Barnett (matthew-barnett) on Anti-Aging: State of the Art · 2021-01-01T21:55:10.280Z · LW · GW

Oops, that was a typo. I meant curing cancer. And I overlooked the typo twice! Oops.

Comment by Matthew Barnett (matthew-barnett) on Anti-Aging: State of the Art · 2021-01-01T19:19:08.282Z · LW · GW

This seems untrue on its face. What we mean by "curing aging" is negligible senescence.

And presumably what the cancer researcher meant by curing cancer was something like, "Can reliably remove tumors without them growing back"? Do you have evidence that we have not done this in mice?

Comment by Matthew Barnett (matthew-barnett) on Against GDP as a metric for timelines and takeoff speeds · 2021-01-01T08:00:21.811Z · LW · GW

In addition to the reasons you mentioned, there's also empirical evidence that technological revolutions generally precede the productivity growth that they eventually cause. In fact, economic growth may even slow down as people pay costs to adopt new technologies. Philippe Aghion and Peter Howitt summarize the state of the research in chapter 9 of The Economics of Growth,

Although each [General Purpose Technology (GPT)] raises output and productivity in the long run, it can also cause cyclical fluctuations while the economy adjusts to it. As David (1990) and Lipsey and Bekar (1995) have argued, GPTs like the steam engine, the electric dynamo, the laser, and the computer require costly restructuring and adjustment to take place, and there is no reason to expect this process to proceed smoothly over time. Thus, contrary to the predictions of real-business-cycle theory, the initial effect of a “positive technology shock” may not be to raise output, productivity, and employment but to reduce them.

Comment by Matthew Barnett (matthew-barnett) on Anti-Aging: State of the Art · 2021-01-01T06:42:41.100Z · LW · GW

As an effective altruist, I like to analyze how altruistic cause areas fare on three different axes: importance, tractability and neglectedness. The arguments you gave for the importance of aging are compelling to me (at least from a short-term, human-focused perspective). I'm less convinced that anti-aging efforts are worth it according to the other axes, and I'll explain some of my reasons here.

The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans.
[...]
In the lab, we have demonstrated that various anti-aging approaches can extend healthy lifespan in many model organisms including yeast, worms, fish, flies, mice and rats. Life extension of model organisms using anti-aging approaches ranges from 30% to 1000%

When looking at the graph you present, a clear trend emerges: the more complex and larger the organism, the less progress we have made on slowing aging for that organism. Given that humans are much more complex and larger than the model organisms you presented, I'd caution against extrapolating lab results to them.

I once heard from a cancer researcher that we had, for all practical purposes, cured cancer in mice, but the results have not yet translated into humans. Whether or not this claim is true, it's clear that progress has been slower than the starry-eyed optimists had expected back in 1971.

That's not to say that there hasn't been progress in cancer research, or biological research more broadly. It's just that progress tends to happen gradually. I don't doubt that we can achieve modest success; I think it's plausible (>30% credence) that we will have FDA-approved anti-aging treatments by 2030. But I'm very skeptical that these modest results will trigger an anti-aging revolution that substantially affects lifespan and quality of life in the way that you have described.

Most generally, scientific fields tend to have diminishing marginal returns, since all the low-hanging fruit tends to get plucked early on. In the field of anti-aging, even the lowest-hanging fruit (i.e., the treatments you described) doesn't seem very promising. At best, it might deliver an impact roughly equivalent to adding a decade or two of healthy life. At that level, human life would be meaningfully affected, but the millennia-old cycle of birth-to-death would remain almost unchanged.

Today, there are over 130 longevity biotechnology companies

From the perspective of altruistic neglectedness, this fact counts against anti-aging as a promising field to go into. The fact that there are 130 companies working on the problem, with only minor laboratory success in the last decade, indicates that the marginal returns to new inputs are low. One more researcher or one more research grant will add little to the rate of progress.

In my opinion, if robust anti-aging technologies do exist in say, 50 years, the most likely reason would be that overall technological progress sped up dramatically (for example, due to transformative AI), and progress in anti-aging was merely a side effect of this wave of progress. 

It's also possible that anti-aging science is a different kind of science than most fields, and we have reason to expect a discontinuity in progress some time soon (for one potential argument, see the last several paragraphs of my post here). The problem is that this argument is vulnerable to the standard reply usually given against arguments for technological discontinuities: they're rare.

(However I do recommend reading some material investigating the frequency of technological discontinuities here. Maybe you can find some similarities with past technological discontinuities? :) )

Comment by Matthew Barnett (matthew-barnett) on Forecasting Thread: AI Timelines · 2020-09-04T00:53:07.514Z · LW · GW
  • Your percentiles:
    • 5th: 2040-10-01
    • 25th: above 2100-01-01
    • 50th: above 2100-01-01
    • 75th: above 2100-01-01
    • 95th: above 2100-01-01

XD

Comment by Matthew Barnett (matthew-barnett) on Forecasting Thread: AI Timelines · 2020-09-03T23:59:37.594Z · LW · GW

If AGI is taken to mean the first year in which there is radical economic, technological, or scientific progress, then these are my AGI timelines.

My percentiles

  • 5th: 2029-09-09
  • 25th: 2049-01-17
  • 50th: 2079-01-24
  • 75th: above 2100-01-01
  • 95th: above 2100-01-01

I have a somewhat lower probability for near-term AGI than many people here do. I model my biggest disagreement as being about how much work is required to move from high-cost impressive demos to real economic performance. I also have an intuition that it is really hard to automate everything, and that progress will be bottlenecked by tasks that are essential but very hard to automate.

Comment by Matthew Barnett (matthew-barnett) on Reflections on AI Timelines Forecasting Thread · 2020-09-03T10:21:16.733Z · LW · GW

Here, Metaculus predicts when transformative economic growth will occur. Current status:

25% chance before 2058.

50% chance before 2093.

75% chance before 2165.

Comment by Matthew Barnett (matthew-barnett) on My guide to lifelogging · 2020-08-28T22:28:54.651Z · LW · GW

Other pros of some body cams: goes underwater without a casing blocking the mic (I think)

I haven't tried it, but I don't think it can go underwater. It is built to be water-resistant, but I'm not confident it can be completely submerged. Therefore, if you are a frequent snorkeler, I recommend getting an action camera.

Comment by Matthew Barnett (matthew-barnett) on Forecasting Thread: AI Timelines · 2020-08-27T03:57:18.595Z · LW · GW

It's unclear to me what "human-level AGI" is, and it's also unclear to me why the prediction is about the moment an AGI is turned on somewhere. From my perspective, the important thing about artificial intelligence is that it will accelerate technological, economic, and scientific progress. So, the more important thing to predict is something like, "When will real economic growth rates reach at least 30% worldwide?"

It's worth comparing the vagueness in this question with the specificity in this one on Metaculus. From the Twelve Virtues of Rationality,

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade.

Comment by Matthew Barnett (matthew-barnett) on What specific dangers arise when asking GPT-N to write an Alignment Forum post? · 2020-07-28T06:01:37.307Z · LW · GW
To me the most obvious risk (which I don't ATM think of as very likely for the next few iterations, or possibly ever, since the training is myopic/SL) would be that GPT-N in fact is computing (e.g. among other things) a superintelligent mesa-optimization process that understands the situation it is in and is agent-y.

Do you have any idea of what the mesa-objective might be? I agree that this is a worrisome risk, but I was more interested in the type of answer that specifies, "Here's a plausible mesa-objective given the incentives." Mesa-optimization is a more general risk that isn't specific to the narrow training scheme used by GPT-N.