The post anchors on the Christiano vs Eliezer models of takeoff, but am I right that the goal more generally is to disentangle the shape of progress from the timeline for progress? I strongly support disentangling dimensions of the problem. I have spoken against using p(doom) for similar reasons.
Because that method rejects everything about prices. People consume more of something the lower the price is, even more so when it is free: consider the meme about all the games that have never been played in people's Steam libraries because they buy them in bundles or on sale days. There are ~zero branches of history where they sell as many units at retail as are pirated.
A better-but-still-generous method would be to do a projection of the increased sales in the future under the lower price curve, and then claim all of that as damages, reasoning that all of this excess supply deprived the company of the opportunity to get those sales in the future.
This is not an answer, but I register a guess: the number relies on claims about piracy, which is to say illegal downloads of music, movies, videogames, and so on. The problem is that the conventional numbers for this are utter bunk, because the way it gets calculated by default is that they take the number of downloads, multiply it by the retail price, and call that the cost.
This would be how they get the cost of cybercrime to significantly exceed the value of the software industry: they can do something like take the whole value of the cybersecurity industry, better-measured losses like from finance and crypto, and then add bunk numbers for piracy losses from the entertainment industry on top of it.
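To make the contrast concrete, here is a toy comparison of the naive damages figure against a demand-projection figure; all numbers (downloads, retail price, conversion rate) are made up for illustration:

```python
# All numbers are hypothetical, for illustration only.
downloads = 1_000_000
retail_price = 60.0

# Naive method: treat every illegal download as a lost retail sale.
naive_damages = downloads * retail_price

# Projection method: at a price of zero people consume far more, so
# assume only a small fraction would ever have bought at retail.
conversion_rate = 0.05  # assumed fraction of downloaders who would have paid
projected_damages = downloads * conversion_rate * retail_price

print(naive_damages, projected_damages)  # 60000000.0 3000000.0
```

Even the projection method is generous to the rights-holder, since the conversion rate is a guess with no price signal behind it.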
This feels like a bigger setback than the generic case of good laws failing to pass.
What I am thinking about currently is momentum, which is surprisingly important to the legislative process. There are two dimensions that make me sad here:
- There might not be another try. It is extremely common for bills to disappear or get stuck in limbo after being rejected in this way. The kind of bills which keep appearing repeatedly until they succeed are those with a dedicated and influential special interest behind them, which I don't think AI safety qualifies for.
- There won't be any mimicry. If SB 1047 had passed, it would have been a model for future regulation. Now it won't be, except where that regulation is being driven by the same people and orgs behind SB 1047.
I worry that the failure of the bill will go as far as to discredit the approaches it used, and will leave more space for more traditional laws which are burdensome, overly specific, and designed with winners and losers in mind.
We'll have to see how the people behind SB 1047 respond to the setback.
As for OpenAI dropping the mask: I devoted essentially zero effort to predicting this, though my complete lack of surprise implies it is consistent with the information I already had. Even so:
Shit.
I wonder how the consequences to reputation will play out after the fact.
- If there is a first launch, will the general who triggered it be downvoted to oblivion whenever they post afterward for a period of time?
- What if it looks like they were ultimately deceived by a sensor error, and believed themselves to be retaliating?
- If there is mutual destruction, will the general who triggered the retaliatory launch also be heavily downvoted?
- Less than, more than, or about the same as the first strike general?
- Would citizens who gained karma in a successful first strike condemn their 'victorious' generals at the same rate as everyone else?
- Should we call this pattern of behavior, however it turns out, the Judgment of History?
It does, if anything, seem almost backwards - getting nuked means losing everything, and successfully nuking means gaining much but not all.
However, that makes the game theory super easy to solve, and doesn't capture the opposing team dynamics very well for gaming purposes.
I think this is actually wrong, because of synthetic data letting us control what the AI learns and what they value, and in particular we can place honeypots that are practically indistinguishable from the real world.
This sounds less like the notion of the first critical try is wrong, and more like you think synthetic data will allow us to confidently resolve the alignment problem beforehand. Does that scan?
Or is the position stronger, more like we don't need to solve the alignment problem in general, due to our ability to run simulations and use synthetic data?
Following on this:
Moreover, even when that dataset does exist, there often won’t be even the most basic built-in tools to analyze it. In an unusually modern manufacturing startup, the M.O. might be “export the dataset as .csv and use Excel to run basic statistics on it.”
I wonder how feasible it would be to build a manufacturing/parts/etc company whose value proposition is solving this problem from the jump. That is to say, redesigning parts with the sensors built in, with accompanying analysis tools, preferably as drop-in replacements where possible. In this way companies could undergo a "digital transformation" gradually, at pretty much their regular operations speed.
It occurs to me we can approach the problem from a completely data-centric perspective: if we want tool AIs to be able to control manufacturing more closely, we can steal a page from the data-center-as-computer people and think of the job of the machines themselves as being production sensors in a production center.
Wrangling the trade-offs would be tricky though. How much better will things be with all this additional data as compared to, say, a fixed throughput or efficiency increase? If we shift further and think in terms of how the additional data can add additional value, are we talking about redesigning machines such that they have more degrees of freedom, ie every measurement corresponds to an adjustable variable on the machine?
I also think he's wrong in the particulars, but I can't quite square it back to his perspective once the particulars are changed.
The bluntest thing that is wrong is that you can specify as precise a choice as you care to in the prompt, and the models usually respond. The only hitch is that you have to know those choices beforehand, whereas it would be reasonable to claim that someone like a photographer is compelled to make choices they did not know about a priori. If that winds up being the important part, then it is more like the artist has to both make and execute their choices, even very simple ones like picking shading in Photoshop or pushing the camera button.
I could see an alternative framework where even the most sophisticated prompt is more like a customer giving instructions to an artist than an artist using a tool to make art, but that seems to push further in the direction of AI makes art.
Lastly, if we take his claims at face value, someone should write an opinion piece with the claim that AI is in fact rescuing art, because once all the commercial gigs are absorbed by the machine then true artists will be spared the temptation of selling out. I mean I won't write it, but I would chuckle to read it.
Great job writing an oops post, with a short and effective explanation. Strong upvote for you!
The Ted Chiang piece, on closer reading, seems to be about denying the identity of the AI prompter as an artist rather than speaking to the particular limitations of the tool. For those who did not read, his claim is:
- Being an artist is about making interesting choices in your medium (paintings, novels, photography, digital).
- AI tools make all the choices for you; therefore you cannot be an artist when using AI tools.
- Further, the way the AI makes choices precludes them from being interesting, because they are a kind of arbitrary average and therefore cannot be invested with meaning.
To set himself apart from Luddites and the usual naysayers, he uses Adobe Photoshop as an example of a tool with which you can be an artist: it is a computer tool; it used to be derided by photographers as not being real art; but now it is accepted, and the reason is that people learned to make interesting choices with it.
He appears to go as far as to say two people could generate an identical digital picture, one via Photoshop and one via AI, and the former gets to be an artist while the latter does not.
I strongly endorse you writing that post!
Detailed histories of field development in math or science are case studies in deconfusion. I feel like we have very little of this in our conversation on the site relative to the individual researcher perspective (like Hamming’s You & Your Research) or an institutional focus (like Bell Labs).
That’s very interesting - could you talk a bit more about that? I have a guess about why, but would rather hear it straight than risk poisoning the context.
Could you talk a bit about how much time and effort you have invested into writing the wikipedia articles?
I think it would be helpful by making it easier for other people to judge whether they can have an impact this way, and whether it would be worth their time.
The claim that zoning restrictions are not a taking also goes against the expert consensus among economists about the massive costs that zoning imposes on landowners.
I would like to know more about how the law views opportunity costs. For most things, such as liability, it seems to accept only costs in the literal sense of having to pay some amount out of pocket; for other things, like worker's comp, it is a defined calculation of lost future gains, but only from pre-existing arrangements like the job a person already had. It feels like the only time I see opportunity costs is when they are lumped in with other intangibles like pain and suffering.
Independently of the other parts, I like this notion of poverty. I head-chunk the idea as: any external thing of which a person is struggling to keep the minimum; that lack is poverty.
This seems very flexible, because it isn't a fixed bar like an income level. It also seems very actionable, because it is asking questions of the object-level reality instead of hand-wavily abstracting everything into money.
Over at Astral Codex Ten is a book review of Progress and Poverty with three follow-up blog posts by Lars Doucet. The link goes to the first blog post because it has links to the rest right up front.
I think it is relevant because Progress and Poverty is the book about:
...strange and immense and terrible forces behind the Poverty Equilibrium.
The pitch of the book is that the fundamental problem is economic rents deriving from private ownership over natural resources, which in the book means land. As a practical matter the focus on land rents in the book heavily overlaps the modern discussion around housing. The canonical example of the problem is what a landlord charges to rent an apartment.
One interesting point is that while UBI is suggested (here called a Citizen's Dividend), it is for justice reasons, and is not proposed as a solution to poverty. I expect Henry George would agree that a UBI would not eliminate poverty, and would predict it mostly gets gobbled up by the immense and terrible forces behind the Poverty Equilibrium.
I feel like the absence of large effects is to be expected during a short-term experiment. It would be deeply shocking to me if there was a meaningful shift in stuff like employment or housing in any experiment that doesn't run for a significant fraction of the duration of a job or lease/rental agreement. For a really significant study you'd want to target the average in an area, I expect.
This is why San Francisco was chosen as the example - at least over the last decade or so it has been one of the most inelastic housing supplies in the U.S.
You are therefore exactly correct: it does not comply with basic supply and demand. This is because basic supply and demand usually do not apply for housing in American cities due to legal constraints on supply, and subsidies for demand.
I acknowledge the bow-out intention, and I'll just answer what look like the cruxy bits and then leave it.
There's no actual price signal or ground truth for that portion of the value.
Fortunately we have solved this problem! Slightly simplified: what a vacant lot sells for is the land value, and how much more a developed lot next to it sells for is the value of the improvements. Using the prices from properties recently sold is how they usually calculate this.
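A minimal sketch of that comparable-sales logic, with made-up prices:

```python
# Hypothetical comparable sales on the same block.
vacant_lot_price = 200_000      # empty lot: sale price is (roughly) pure land value
developed_lot_price = 650_000   # similar lot next door, with a building on it

land_value = vacant_lot_price
improvement_value = developed_lot_price - land_value
print(land_value, improvement_value)  # 200000 450000
```

In practice assessors average over many recent sales rather than a single pair, but the decomposition is the same.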
If it's NOT just using a land-value justification to raise the dollar amounts greatly, please educate me.
Let the record reflect that I totally expect someone to do this. But as for doing it in a non-insane fashion, we take what used to be a property tax and turn it into a sales tax, which is levied on sales of property.
There's a good guest blog post over at AstralCodexTen which digs into the value assessment problem which I think you would like. That whole series of guest blog posts was fascinating.
And with that, we'll call it!
My objection is to the core of the proposal that it's taxed at extremely high levels, based on theoretical calculations rather than actual use value.
I'm a little confused by what the theoretical calculations are in your description. The way I understand it - which is scarcely authoritative but does not confuse me - is that we have several steps:
- Theory: a lot of the value of a piece of property is not because of work done by the owner, but instead because of other people being nearby.
- Theory: this is bad. We should remove all the value provided by other people who aren't the owner.
- Practical: we need to calculate what fraction of the value is provided just by other people.
- Practical: we tax that fraction of the price, and of the income from the property.
- Practical: we adjust the fraction to allow for measurement errors or whatever other consideration.
So my understanding is there are no calculations until you get to the practical considerations of applying the tax.
Your suspected answer is how current implementations of the system work, but they are also regular property taxes in the sense of being a very tiny fraction of the value which means the payments are manageable under normal circumstances, and don't take aim at eliminating the economic rent.
Just scaling up property taxes the way they work now with the new values would be an epically bad move, in the same vein as taxing unrealized capital gains at a high rate.
The way I understand it, once we agree the focus is on the economic rent problem, we shift to taxing the sale price and the income from the property because these are how economic rents are captured, rather than an impossibly high annual payment. A homeowner doesn't capture economic rent just by living on the property, as I see it.
So for myself I would not vote for a measure that just scaled up property taxes, but would vote for one that did the sale/revenue taxation.
That isn't how the taxes are assessed, as a practical matter. The value of the land and the value of buildings are assessed, mostly using market data, and then the applied tax is the ratio of the land value to the property value, so for example in an apartment building that fraction is taxed out of the rent payments, and when a property is sold that fraction is taxed from the sale price.
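Under this reading, the assessment can be sketched with made-up numbers; the land/building split, the rent roll, and the sale price are all assumptions:

```python
# All numbers hypothetical.
land_value = 250_000
property_value = 1_000_000                    # land plus buildings, from market data
land_fraction = land_value / property_value   # 0.25

annual_rent = 50_000
sale_price = 1_200_000

tax_from_rent = land_fraction * annual_rent   # taxed out of rent payments
tax_from_sale = land_fraction * sale_price    # taxed at point of sale
print(tax_from_rent, tax_from_sale)  # 12500.0 300000.0
```

The key design choice is that the tax rides on realized income and sale proceeds, scaled by the land fraction, rather than on an annual assessment of the full land value.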
I do notice that we don't have any recent examples of the realistically-full land tax interacting with individual home ownership; everywhere we see it is treated the same as a regular property tax. While it seems reasonable to me that non-income-producing properties should not require regular payments and instead only be taxed at point of sale, this is an adaptation rather than a strict application of the theory.
They can only make decisions that generate enough income to pay the taxes
Out of curiosity, how is this different from current property taxes, or from mortgages for that matter?
“Government will make better resource decisions than profit-motivated private entities”
I think you landed on the crux of it - under the Georgist model, individuals (or firms) still make the decisions about what to do with the resources. What the government does is set a singular huge incentive, strongly in the direction of "add value to the land."
I don't have an answer to this question, but I would register a prediction:
- Georgism believers < communism believers
- Georgism popularity > communism popularity
The latter is mostly because there are a bunch of people who really hate communism and will prefer almost literally anything else in a survey.
I feel like suburban homeowners are the key group here. They have the most votes among landowning groups, and the strongest motivation to oppose anything that reduces property values because their home represents all of their wealth. There is also the way-of-life question, because the economics of the suburbs seem really difficult under Georgism.
Something like an exemption for the first sale after the tax is passed would be a simple solution to taking some of the sting out, and makes sure no one has to unilaterally lose their investment to a tax.
Huzzah for enriched!
Things to consider:
- Pays out faster than regular real estate development on a within-sector basis.
- Here I envision industrial/infrastructure uses, but for areas that are sufficiently difficult (New York or California, for example) it could be viable for things like housing in combination with traditional development (permit company -> development company -> homebuyer).
- You almost definitely cannot get "general construction" permits whereby you can build anything, which on the one hand is limiting and requires strategic choices in permitting.
- On the other hand it allows some strategic influence over the development of useful sites.
Special Purpose Permit Companies
How many kinds of environmental and/or building permits could be gotten by company A, and then used for company B's project, after company B buys company A? For example, could we form a special purpose company to pursue navigating all the NEPA stuff for building a power plant at a particular site, and then sell that company to a power company, who could then begin construction immediately?
Consider real estate developers: one company buys the land and develops it, which means do all the construction, and then sells it to other companies who use the buildings.
I propose an extension of this model, where one company will buy the land and do the legal/regulatory work, and then sell the site+permits to another company who will do the construction of the facility.
The advantage here is that rather than one company taking the whole risk of the land and permitting process through to completion of the needed facility, the land and permitting can be handled by a special purpose company which can be sold, construction-ready, to a market of potential buyers. Because there are multiple potential buyers, the risks are lower than they are for one company doing the whole process alone, and normally you can't sell permits/licenses/etc once they are granted so if the project falls through for some reason the work done on the regulatory stuff is wasted.
If widely adopted as a practice, what I hope this would allow is for a lot of potential building sites for needed infrastructure to be prepared for construction simultaneously. While this will do nothing to speed up projects starting now, it would potentially cut years off the construction time of the next generation of facilities, as an abundance of sites would already be waiting for them.
The most frequent expression I see used for this is alpha, by which people mean the temporary advantage that comes from new things in finance. It has since moved into entrepreneurship lingo. However, I note that alpha speaks to a competitive environment, where you lose the advantage because other people are doing it; it isn't normally used to cover the situations you describe where a company shoots themselves in the foot, for example.
Even if their abstractions are different from ours, a key valuable thing is to predict us and thus model our abstractions, so even if you started out with alien abstractions you would then also want something closer to ours?
I am closer to John than Eliezer on this one so it is not a crux for me, but speaking just to this reasoning: it seems to me that by the time the AI is deciding via its alien concepts whether or not to simulate us, the issue is completely out of our hands and it doesn't matter whether we can interpret its simulations. Why wouldn't the concepts by which it would judge the simulation also be the concepts we want to govern in the first place?
While I do not use the platform myself, what do you think of people doing their thinking and writing offline, and then just using it as a method of transmission? I think this is made even easier by the express strategic decision to create an account for AI-specific engagement.
For example, when I look at tweets at all it is largely as links to completed threads or off-twitter blogs/articles/papers.
- An atom isn't a chair.
- Adding an atom to something that's not a chair doesn't make it a chair.
- Therefore, nothing is a chair.
I've decided to call this Zeno's System Paradox.
I wonder how feasible it would be to make something that would allow people to coordinate for consumption like unions for bargaining, parties for politics, or corporations for production. In particular I have in mind targeting entertainment and social media consumption because of the prevalence of recommendation algorithms here.
The intuition is that these algorithms are political computations: who gets the advertising traffic, what type of content is in demand, which consumers are catered to.
The thing being coordinated on is the consumer behavior; namely what people click on.
Examples:
- like a union, but instead of collective bargaining to sell labor, it is collective behavioring[1] to consume content.
- like the old website Massdrop, but instead of coordinated buying it is coordinated clicking.
- like a political party, but instead of coordinating votes, it coordinates clicks.
- like a company, but instead of coordinating work, it coordinates consumption.
Concretely, I have in mind something like a browser widget that overlays indicators of the most impactful consumption choice to make to influence the algorithm in the direction you want, such as an icon or a color filter. This would run over things like youtube, instagram, tiktok, etc.
Another way to think of this: a value handshake between whatever algorithm the company is using and an algorithm you would choose instead.
[1] behavioring - by this I mean consciously conforming to a pattern of behavior so it shows up in behavior analysis, in this case the consumer behavior tracking that is part of the function of recommendation algorithms.
I read a story once about a billionaire who managed to flout a zoning limitation by building himself an absurdly huge high-rise single apartment. Once that was done, he waited a bit, and either using a loophole that already existed or one he finagled into the law, he added a bunch of walls inside his personal mega-apartment and then sold off all these new units as regular apartments.
Now I think of this in the context of the American dredging and shipping laws. There are a lot of rules surrounding dredges and shipping vessels, but far fewer surrounding luxury items like yachts. So imagine a group of investors building a big, state-of-the-art shipyard that builds yachts, but which can be converted at little or no additional cost to build dredges or vessels suitable for river shipping and the like.
A simpler alternative scheme is to just comply with the laws but use yachts or similar less-regulated vessels as a lead product to build capacity for the more economic ones, analogous to how Tesla strategically built an electric sports car to generate interest and funds for eventually building electric sedans.
Reading this after some months it looks like the majority of the commenters interpreted the post as being "I found a bunch of things that increase IQ," but it feels like the point of the post is more like "Anyone can increase their IQ by trying a bunch of plausible things."
If I am right, for your purposes, a better experiment would be other people trying different batches of interventions at a similar intensity for a similar length of time. Does that sound right?
I was absolutely certain I had responded to this, because I had taken the trouble to search for and locate a description of the procedure used in particle physics, which appears to be the central place where likelihood functions are the preferred tool.
Seems I wrote it but never submitted it, so in this here placeholder comment I vouchsafe to hunt that resource down again and put it here in an edit.
Edit: As I promised, the resource: https://ep-news.web.cern.ch/what-likelihood-function-and-how-it-used-particle-physics
This is a short article by a person from CERN, Robert Cousins. It covers in brief what likelihood is and how it is different from probability, then gives a short description of three different methods of using a likelihood function (listed as Likelihoodist, Neyman-Pearson, and Bayesian), and then moves on to a slightly more advanced example. Its references include some papers from the work on identifying the Higgs boson, and some of Cousins' own relevant papers.
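As a toy illustration of the distinction the article draws (not an example from the article itself): for a single observed Poisson count, the likelihood is a function of the rate parameter with the data held fixed, and maximizing it over candidate rates recovers the observed count.

```python
import math

def poisson_likelihood(mu, n):
    """Likelihood of rate mu given a single observed count n."""
    return mu ** n * math.exp(-mu) / math.factorial(n)

n_obs = 7
# Scan candidate rates on a grid; for one Poisson observation the
# maximum-likelihood estimate is just the observed count itself.
candidates = [k / 10 for k in range(1, 151)]
mle = max(candidates, key=lambda mu: poisson_likelihood(mu, n_obs))
print(mle)  # 7.0
```

Note the likelihood is not a probability distribution over mu; it does not integrate to 1 in mu, which is exactly the likelihood-versus-probability distinction the article opens with.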
Suddenly more poignant:
Out of curiosity, were any patterns discovered during this process? For example, were the writing styles similar among the ones the AI could convert into successful music, or did ones by the same author churn out songs with specific similarities, or what have you?
This is great, bookmarked for future warm and fuzzies. I've just had my second, a son, on February 8th. My first, a daughter, is six next week.
Let it be known to all and sundry that kids are fantastic and fatherhood is wondrous. It is much work and a high cost in money and sleep, in exchange for which you are endowed with glorious purpose and wireheaded to the future.
Also there is the love. Strongly recommended.
Some relevant details for the American government case:
Popular election of the Senate began in 1913. Before that, each state’s Senators were elected by the state legislature. This means factional dominance of the Senate was screened off, and was actually determined at the state level.
This is because the dominant group analysis at the time the constitution was written was people, state government, and federal government, and the conversation was about how to prevent a single group from gaining control over all of government.
The slave vs free grouping played out at the state level. Continuing the group analysis, state-level politics is viewed as having been largely a contest between urban and rural interests. In the South the rural interests - plantation owners - usually won, and in the North, urban industrialists usually did. The canonical example of the legacy of this divide is that the state capital is rarely the largest city in the state; capitals are normally much smaller cities.
This brings us down to the local level, which in the US is where most of the competition between traditional divisions like race, religion and ethnicity played out.
I think at least in the American case, I model the key development as the creation of more and different groupings through federalism, rather than a veto mechanism for traditional groups.
On the other hand, I have separately heard the idea that traditional groups were weaker in the US than in Europe because of the disruption caused by the US’ colonial structure and immigration, so I could be misled by these peculiar circumstances. I would need a much better understanding of the democratization of other European countries, and preferably some outside of Europe. Unfortunately the data is pretty sparse there, as those democracies are usually very young and don’t have many cycles of competition to compare.
A few years after the fact: I suggested Airborne Contagion and Air Hygiene for Stripe’s [reprint program](https://twitter.com/stripepress/status/1752364706436673620).
One measure of status is how far outside the field of accomplishment it extends. Using American public education as the standard, Leibniz is only known for calculus.
there is not any action that any living organism, much less humans, take without a specific goal
Ah, here is the crux for me. Consider these cases:
- Compulsive behavior: it is relatively common for people to take actions without understanding why, and for people with OCD this even extends to actions that contradict their specific goals.
- Rationalizing: virtually all people actively lie to themselves about what their goals are when they take an action, especially in response to prodding about the details of those goals after the fact.
- Internal Family Systems and related therapies: the claim on which these treatments rest is that every person intrinsically has multiple conflicting goals of which they are generally unaware, and learning how to mediate them explicitly is supposed to help.
- The hard problem of consciousness: similar to the above, one of the proposed explanations for consciousness is that it serves as a mechanism for mediating competing biological goals.
These are situations where either the goal is not known, or it is fictionalized, or it is contested (between goals that are also not known). Even in the case of everyday reactions, how would the specific goal be defined?
I can clearly see an argument along the lines of evolutionary forces providing us with an array of specific goals for almost every situation, even when we are not aware of them or they are hidden from us through things like self-deception. That may be true, but even given that it is true I come to the question of usefulness. Consider things like food:
- I claim that most of the time we eat because we eat. As a goal it is circular.
- We might eat to relieve our stomach growling, or to be polite to our host, and these are specific goals, but these are the minority cases.
Or sex:
- Also circular, the goal is usually sex qua sex.
- Speaking for myself, even when I had a specific goal of having children (making explicit the evolutionary goal!), what was really happening under the hood is I was having sex qua sex and just very excited about the obvious consequences.
It doesn't feel to me like thinking of these actions in terms of manipulation adds anything to them as a matter of description or analysis. Therefore when talking about social things I prefer to use the word manipulation for things that are strategic (by which I mean we have an explicit goal and we understand the relationship between our actions and that goal) and unaligned (which I mean in the same sense you described in your earlier comment, the other person or group would not have wanted the outcome).
Turning back to the post, I have a different lens for how to view How To Win Friends and Influence People. I suggest that these are habits of thought and action that work in favor of coordination with other people; I say it works the same way rationality works in favor of being persuaded by reality.
I trouble to note that this is not true in general of stuff about persuasion/influence/etc. A lot of materials on the subject do outright advocate manipulation even as I use the term. But I claim that Carnegie wrote a better sort of book, that implies pursuing a kind of pro-sociality in the same way we pursue rationality. I make an analogy: manipulators are to people who practice the skills in the book as Vulcan logicians are to us, here.
A sports analogy is Moneyball.
The counterfactual impact of a researcher is analogous to the insight that professional baseball players are largely interchangeable because they are all already selected from the extreme tail of baseball playing ability, which is to say the counterfactual impact of a given player added to the team is also low.
Of course, in Moneyball they used this insight to get good-enough talent within budget, which is not the same as the researcher case. All of fantasy sports is exactly a giant counterfactual exercise; I wonder how far we could get with 'fantasy labs' or something similar.
I agree that processor clock speeds are not what we should measure when comparing the speed of human and AI thoughts. That being said, I have a proposal for the significance of the fact that the smallest operation for a CPU/GPU is much faster than the smallest operation for the brain.
The crux of my belief is that having faster fundamental operations means you can get to the same goal using a worse algorithm in the same amount of wall-clock time. That is to say, if the speed difference between the CPU and the neuron is ~10x, then the CPU can match human performance in the same wall-clock time using an algorithm with 10x as many steps as the algorithm humans actually use.
If we view algorithms with more steps than the human ones as sub-human (because they are less computationally efficient), and view a completed run of an algorithm that generates an output as a thought, this implies the AI can achieve superhuman performance using sub-human thoughts.
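The arithmetic behind this trade-off can be sketched in a few lines. The numbers here are purely hypothetical (the ~10x ratio from the comment above; the step counts are made up for illustration):

```python
# Hypothetical, illustrative numbers only.
# Times are in microseconds per fundamental operation.
human_op_us = 1000            # assumed time for one neural "step"
cpu_op_us = 100               # assumed 10x faster fundamental operation

human_steps = 1_000           # steps in the efficient human algorithm
cpu_steps = 10 * human_steps  # a 10x less efficient (sub-human) algorithm

# Wall-clock time = number of steps * time per step.
human_wall_clock_us = human_steps * human_op_us
cpu_wall_clock_us = cpu_steps * cpu_op_us

# The worse algorithm finishes in the same wall-clock time.
assert cpu_wall_clock_us == human_wall_clock_us
print(cpu_wall_clock_us)  # 1000000
```

The point is just that the efficiency deficit of the algorithm and the speed advantage of the substrate cancel: any ratio of fundamental-operation speeds buys an equal ratio of algorithmic slack at constant wall-clock time.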
A mechanical analogy: instead of the steps in an algorithm consider the number of parts in a machine for travel. By this metric a bicycle is better than a motorcycle; yet I expect the motorcycle is going to be much faster even when it is built with really shitty parts. Alas, only the bicycle is human-powered.
It isn't quoted in the above selection of text, but I think this quote from the same chapter addresses your concern:
“I instantly saw something I admired no end. So while he was weighing my envelope, I remarked with enthusiasm: "I certainly wish I had your head of hair." He looked up, half-startled, his face beaming with smiles. "Well, it isn't as good as it used to be," he said modestly. I assured him that although it might have lost some of its pristine glory, nevertheless it was still magnificent. He was immensely pleased. We carried on a pleasant little conversation and the last thing he said to me was: "Many people have admired my hair." I'll bet that person went out to lunch that day walking on air. I'll bet he went home that night and told his wife about it. I'll bet he looked in the mirror and said: "It is a beautiful head of hair." I told this story once in public and a man asked me afterwards: "What did you want to get out of him?" What was I trying to get out of him!!! What was I trying to get out of him!!! If we are so contemptibly selfish that we can't radiate a little happiness and pass on a bit of honest appreciation without trying to get something out of the other person in return - if our souls are no bigger than sour crab apples, we shall meet with the failure we so richly deserve.”
Out of curiosity, what makes this chapter seem Dark-Artsy to you?
So the smarter one made rapid progress in novel (to them) environments, then revealed it was unaligned, and then the first round of well-established alignment strategies caused it to employ deceptive alignment strategies, you say.
Hmmmm.
I don't see this distinction as mattering much: how many ASI paths are there which somehow never go through human-level AGI? On the flip side, every human-level AGI is an ASI risk.
I would perhaps urge Tyler Cowen to consider raising certain other theories of sudden leaps in status, then? That is, to actually reason out the consequences of such technological advancements, to ask what happens.
At a guess, people resist doing this because predictions about technology are already very difficult, and doing lots of them at once would be very very difficult.
But would it be possible to treat increasing AI capabilities as an increase in model or Knightian uncertainty? It feels like questions of the form "what happens to investment if all industries become uncertain at once? If uncertainty increases randomly across industries? If uncertainty increases according to some distribution across industries?" should definitely be answerable. My gut says the obvious answer is that investment shifts from the most uncertain industries into AI, but how much, how fast, and at what thresholds are all things we would want to predict.