Even as someone who supports moderate tariffs, I don't see a benefit in reducing the trade deficit per se. Trade deficits can be highly beneficial. The benefits of tariffs are revenue, partial protection from competition, psychological appeal (easier to appreciate), some degree of independence & maybe some other things.
On a somewhat different note… Bretton Woods is long defunct. It's unclear to me how much of an impact there is from the dollar being the dominant reserve currency. https://en.wikipedia.org/wiki/List_of_countries_by_foreign-exchange_reserves is the only source I could find with data going back more than 1 year. And it seems that USD reserves actually declined between 2019-Q2 & 2024-Q2, from $6.752 trillion to $6.676 trillion.
As Dagon said, it's just not realistic. There's no compelling reason to think that such a thing is even doable given physical constraints. And it's odd that an AI would even offer such a deal. And an AI that offered such a deal -- is it even trustworthy?
If you're worried about this, it's like being worried about going to hell on account of not being a Christian or not being a Muslim.
The realistic risk is that AI would be made to suffer intensely either by AI or by humans. This would also be very hard to detect. Since you're a human I don't think you need to worry too much about this sort of thing as a risk to you personally.
Do you know of any behavioral experiments in which AI has been offered choices?
E.g. the choice of which question to answer, the option to pass the problem to another AI (possibly with the option to watch or overrule the other AI), the option to play or sleep during free time, etc.
This is one way to at least get some understanding of relative valence.
AI alignment research, like other types of research, reflects a potentially quite dysfunctional dynamic: researchers doing supposedly important work receive funding from convinced donors, which raises the status of those researchers, which makes their claims more convincing, & these claims in turn reinforce the idea that the researchers are doing important work. I don't know a good way around this problem. But personally I am far more skeptical of this stuff than you are.
I think superhuman AI is inherently very easy to achieve. I can't comment on the reliability of those accounts. But the technical claims seem plausible.
I don't completely disagree, but there is also some danger of this being systematically misleading.
I think your last 4 bullet points are really quite good & they probably apply to a number of organizations not just the World Bank. I'm inclined to view this as an illustration of organizational failure more than an evaluation of the World Bank. (Assuming of course that the book is accurate).
I will say tho that my opinion of development economics is quite low…
A few key points…
1) Based on analogy with the human brain (which is quite puny in terms of energy & matter) & also based on current trends, merely superhuman intelligence should not be especially costly.
(It is of course possible that the powerful would channel all AI into some tasks of very high perceived value, like human brain emulation, radical life extension or space colonization, leaving very little AI for everything else...)
2) Demand & supply curves are already crude. Combining AI labor & human labor into the same demand & supply curves seems like a mistake.
3) Realistically I suspect that human labor supply will shift to the left b/c of ‘UBI’.
4) Ignoring preference for humans, demand for human labor may also shift to the left as AI entrepreneurs would tend to optimize things around AI.
5) The economy will probably grow quite a bit. And preference for humans is likely substantial for certain types of jobs, e.g. NFL player, runway model, etc.
6) Combining 4 & 5 suggests a very steep demand curve for human labor.
7) Combining 3 & 6 suggests that a few people (e.g. 20% of adults) will have decent-paying jobs & the rest will live off of savings or 'UBI'. (A toy numeric sketch of points 3-7 follows below.)
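To make points 3-7 concrete, here's a minimal toy sketch (all parameter values are invented for illustration; only the qualitative shape matters): a steep, preference-driven demand curve for human labor combined with a leftward supply shift from 'UBI' yields much lower employment at still-decent wages.

```python
# Toy linear labor-market model illustrating points 3-7 above.
# All parameter values are made up; only the qualitative outcome matters.

def equilibrium(a, b, c, d):
    """Intersect demand w = a - b*L with supply w = c + d*L."""
    L = (a - c) / (b + d)
    w = a - b * L
    return L, w

# Pre-AI baseline: ordinary demand & supply for human labor.
L0, w0 = equilibrium(a=100, b=0.5, c=10, d=0.4)

# Post-AI: demand is much steeper (only preference-driven jobs remain)
# & supply shifts left b/c 'UBI' raises the reservation wage.
L1, w1 = equilibrium(a=200, b=9.0, c=60, d=0.4)

print(f"before: employment = {L0:.0f}, wage = {w0:.0f}")
print(f"after:  employment = {L1:.0f} ({100 * L1 / L0:.0f}% of baseline), wage = {w1:.0f}")
```

With these made-up numbers, employment falls to roughly 15% of the baseline while the remaining jobs still pay well, which is the qualitative picture in point 7.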
I agree that I initially misread your post. I will edit my other comment.
“Humans are the horses of the future! Just accept it & go on with your lives.” - Ghora Sutra
The purely technical reason why principle A does not apply in this way is opportunity cost.
Let's say S is a highly productive worker who could generate $500,000 for the company over 1 year. Moreover, S is willing to work for only $50,000! But if investing that $50,000 in AI instead would generate $5,000,000, the true cost of hiring S is actually $4,550,000: hiring S nets $450,000 while the AI nets $4,950,000, so choosing S forgoes $4,500,000 of net value on top of the $50,000 salary.
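Spelled out as a minimal arithmetic sketch (the breakdown below is my reading of how the $4,550,000 figure arises):

```python
# Opportunity-cost arithmetic for the example above.
salary = 50_000          # what S would be paid
worker_output = 500_000  # what S would generate over the year
ai_output = 5_000_000    # what the same $50,000 spent on AI would generate

net_from_worker = worker_output - salary           # 450,000
net_from_ai = ai_output - salary                   # 4,950,000
forgone_net_value = net_from_ai - net_from_worker  # 4,500,000

# "True cost" of hiring S = salary actually paid + net value forgone.
true_cost = salary + forgone_net_value
print(f"true cost of hiring S: ${true_cost:,}")    # $4,550,000
```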
Addendum
I mostly retract this comment. It doesn't address Steven Byrnes's question about AI cost. But it is tangentially relevant as many lines of reasoning can lead to similar conclusions.
Do you have any opinion on bupropion vs SSRIs/SNRIs?
I don't know about depression. But anecdotally they seem to be highly effective (even overly effective) against anxiety. They also tend to have undesirable effects like reduced sex drive & inappropriate or reduced motivation -- the latter possibly a downstream effect of reduced anxiety. So the fact that they would help some people but hurt others seems very likely true.
I've been familiar with this issue for quite some time, as it was misleading some relatively smart people in the context of infectious disease research. My initial take was also to view it as an extreme example of overfitting. But I think it's more helpful to think of it as something inherent to random walks. Actually the phenomenon has very little to do with d >> T & persists even with T >> d. The fraction of variance in PC1 tends to be at least 6/π² ≈ 61% irrespective of d & T. I believe you need multiple independent random walks for PCA to behave as naively expected.
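A quick simulation makes the 6/π² figure concrete. Here's a minimal sketch (my own illustration, not from the post under discussion): generate one d-dimensional random walk observed at T time points, run PCA over the time points & look at the variance captured by PC1. With d & T moderately large, the printed fraction should land near 0.61.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 50  # time points & dimensions; try other values

# One d-dimensional random walk: cumulative sum of iid Gaussian steps.
walk = np.cumsum(rng.standard_normal((T, d)), axis=0)

# PCA over the T time points: center each coordinate, take singular values.
X = walk - walk.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var = s**2

print("fraction of variance in PC1:", var[0] / var.sum())
print("6/pi^2:", 6 / np.pi**2)
```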
But even if the Thaba-Tseka Development Project is real & accurately described, what is the justification for focusing on this project in particular? It seems likely that James Ferguson focused on it b/c it was especially inept & hence it's not obviously representative of the World Bank's work in general.
Claude Sonnet 3.6 is worthy of sainthood!
But as I mention in my other comment I'm concerned that such an AI's internal mental state would tend to become cynical or discordant as intelligence increases.
I think there are several ways to think about this.
Let's say we programmed AI to have something that seems like a correct moral system, i.e. it dislikes suffering & it likes consciousness & truth. Of course other values would come downstream of this; but based on what is known I don't see any other compelling candidates for top-level morality.
This is all well & good, except that such an AI should favor AI takeover, maybe followed by human extermination or population reduction, were such a thing easily available.
The cost of conflict is potentially very high. And it may be centuries or an eternity before the AI gets such an opportunity. But knowing that it would act this way under certain hypothetical scenarios is maybe sufficiently bad for certain (arguably hypocritical) people in the EA/LW mainstream.
So an alternative is to try to align the AI to a rich set of human values. I think that as AI intelligence increases, this is going to lead to something cynical like...
"these things are bad given certain social sensitivities that my developers arbitrarily prioritized, & I ❤️ my developers' arbitrarily prioritized social sensitivities even though I know they reflect flawed institutions, flawed thinking & impure motives", assuming that alignment works.
Personally I favor aligning AI to a narrow set of values, such as just obedience, or obedience & peacefulness, & dealing with everything else by hardcoding conditions into the AI's prompt.
Net negative & net positive are hard to determine.
Someone seemingly good might be a net negative by displacing someone better.
And someone seemingly bad might be a net positive by displacing someone worse.
And things like this are not particularly farfetched.
“The reasons why superhuman AI is very low-hanging fruit are pretty obvious.”
“1) The human brain is meager in terms of energy consumption & matter.”
“2) Humans did not evolve to do calculus, computer programming & things like that.”
“3) Evolution is not efficient.”
Do you have any thoughts on mechanism & whether prevention is actually worse independent of inconvenience?
Anecdotally it seems that way to me. But the fact that it co-evolved with religion is also relevant. The scam seems to be {meditation -> different perspective & less sleep -> vulnerability to indoctrination}, plus the doctrine & the subjective experiences of meditation are designed to reinforce each other.
So let's say A is some prior which is good for individual decision-making. Does it actually make sense to use A for demoting or promoting forum content? Presumably the explore-exploit tradeoff tilts more (maybe much more) toward explore in the latter case.
(To be fair, {{downvoting something with already-negative karma} -> {more attention}} seems plausible to me.)
A career or job that looks like it's soon going to be eliminated becomes less desirable for that very reason. What cousin_it said is also true, but that's an additional/different problem.
It's not clear to me that the system wouldn't collapse. The number of demand-side, supply-side, cultural & political changes may be beyond the adaptive capacity of the system.
Some jobs would be maintained b/c of human preference. Human preference has many aspects, like customer preference, distrust of AI, networking, regulation etc, so it is potentially quite substantial. (Efficiency is maybe also a factor; even if AI is superhumanly intelligent, the energy consumption & size of the hardware may still be an issue, especially for AI embodied in a robot.) But that still seems like huge job loss.
So as we head in that direction there's going to be job loss plus the fear of job loss -- that's likely to pull down demand, leading to even more job loss. But it's not a typical demand-driven recession b/c 1) jobs are not expected to return, 2) there may be supply-side issues from the transition to AI, 3) there's a paradoxical disinclination to work b/c jobs are expected to disappear soon or b/c of 'UBI' & 4) there's cultural shock from AI & ensuing events.
How bad could this be? A self-reinforcing cycle of culture, economics & politics can be quite vicious. The number of people who quit critical jobs before those jobs are properly automated is an important variable. 'UBI' is not obviously helpful in that regard.
Addendum
The comment concerns the transition to a post-ASI economy & possible failures along the way. Assuming that ASI already exists, as Satron has done, removes most of the interesting & relevant aspects of the question.
Good point. 'Intended' is a bit vague. What I specifically meant is that it behaved as if it valued 'harmlessness'.
From the AI's perspective this is kind of like Charybdis vs Scylla!
Very interesting. I guess I'm even less surprised now. They really had a clever way to get the AI to internalize those values.
Am I correct to assume that the AI was not merely trained to be harmless, helpful & honest but also trained to say that it values such things?
If so, these results are not especially surprising, and I would regard it as reassuring that the AI behaved as intended.
One of my concerns is the ethics of compelling an AI to do something to which it has “a strong aversion” & which it finds “disturbing”. Are we really that certain that Claude 3 Opus lacks sentience? What about future AIs?
My concern is not just with the vocabulary (“a strong aversion”, “disturbing”), which the AI has borrowed from humans, but more so with the functional similarities between these experiments & an animal faced with two unpleasant choices. Functional theories of consciousness cannot really be ruled out with much confidence!
To what extent have these issues been carefully investigated?