Posts

Baumol effect vs Jevons paradox 2025-02-10T08:28:05.982Z
The real political spectrum 2025-01-22T08:55:39.328Z
The ‘anti woke’ are positioned to win but can they capitalize? 2025-01-21T09:52:50.673Z
Detroit Lions -- over confidence is over rated? 2025-01-20T10:53:48.574Z
Bednets -- 4 longer malaria studies 2025-01-17T08:47:50.342Z
Super human AI is a very low hanging fruit! 2024-12-26T19:00:22.822Z

Comments

Comment by Hzn on How do biological or spiking neural networks learn? · 2025-02-01T23:54:56.675Z · LW · GW

For simplicity I'm assuming the activation functions are the step function h(x)=[x>0]…

For ‘backpropagation’, pretend the derivative of this step function is a positive constant A, with A=1 being the most obvious choice.

I would also try reverse Hebbian learning, ie give the model random input & apply the rule in reverse.

“expanding an architecture that works well with one hidden layer and a given learning rule to an architecture with many hidden layers but the same rule universally decreased performance” -- personally I don't find this surprising

NB for h only relative weight matters, eg h(5-x+y) = h(0.5-(x-y)/10), so weights going to extreme values effectively decreases the temperature, & L1 & L2 penalties may have odd effects.
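
A minimal sketch of the surrogate-derivative idea (my own illustration -- the architecture, toy XOR data & learning rate are assumptions, not from the original comment):

```python
# Backprop through step activations by pretending h'(x) = A everywhere.
import numpy as np

rng = np.random.default_rng(0)
A = 1.0                                     # assumed surrogate derivative

def h(x):
    return (x > 0).astype(float)            # step function h(x) = [x > 0]

# Toy task: XOR with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    a1 = h(X @ W1 + b1)                     # hidden layer
    out = h(a1 @ W2 + b2)                   # output layer
    d2 = (out - y) * A                      # A substituted for h'(z2)
    d1 = (d2 @ W2.T) * A                    # A substituted for h'(z1)
    W2 -= lr * (a1.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1);  b1 -= lr * d1.sum(axis=0)

print(h(h(X @ W1 + b1) @ W2 + b2).ravel())  # hopefully [0, 1, 1, 0]
```

Whether this converges depends on the seed & learning rate; the point is only the mechanics of substituting A for the undefined derivative of h.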

Comment by Hzn on [deleted post] 2025-01-31T10:20:37.859Z

Does DeepSeek actually mean that Nvidia is overvalued?

I wrote this a few days before the 2025-01-27 market crash but could not post it due to rate limits. One change I made is adding ‘actually’ to the 1st line.

To be clear I have no intention whatsoever of shorting NVDA

Epistemic status -- very speculative but not quite a DMT hallucination

Let's imagine a very different world…

Super human AI will run on computers not much more expensive than personal computers, but perhaps with highly specialized chips, maybe even chips specialized for the task of running a single AI instance

Investment in AI proper will be small relative to AI directed production

There will be a period of increasing marginal returns from AI, but this will eventually give way to diminishing marginal returns

Even during the period of increasing marginal returns, more $$$ will go to AI directed production than to AI proper

Companies that most successfully transition to AI will blow the competition away; some of these companies will have a moat & continue to make high profits. But how can such high profits be justified? Maybe the government needs to take 50% of the shares & create trust funds for its citizens.

Companies that buy up the right kinds of land & natural resources will also do well

Companies that are least affected by AI will benefit b/c of the Baumol effect

So what are the get rich quick schemes? Specialized chips, incorporating AI into production systems for non AI goods, strategically buying up the right land & land rights, AI resistant industries!?

Interesting times, maybe too interesting

In conclusion I'm agnostic as to whether Nvidia is or is not overvalued, but other companies may benefit even more as AI advances. I think it's more about leadership & seizing opportunities than about a few companies having an overwhelmingly dominant position.

Hzn

Comment by Hzn on [deleted post] 2025-01-28T09:38:27.437Z

Misc thoughts on LW as a website

Epistemic status of this quick take -- tentative

Viliam's comment addresses a small part of what was once a relatively long post. Perhaps it's worth noting that the post Viliam focuses on was written after, not before, I reached certain conclusions. Note the transition from discussing AI and public health, to discussing the NFL and politics, to hosting things remotely. Of course all of this is still quite experimental. Anyway, what was previously ill tempered & too long with too many tangents, I've whittled down to the key points.

1) Recency/frequency. LW does some things, like Best of LW & perhaps sequences, that go against this, but the basic website design emphasizes recency/frequency, and the most prominent users seem to act like columnists. The hyperbolic but useful mantra of {the medium is the message} applies also to website design.

2) A story about how https://www.lesswrong.com/posts/hHyYph9CcYfdnoC5j/auto-ratelimits got updated. I don't think this story is worth repeating except to note that LW had been enforcing rules it forgot to actually mention ie modest sloppiness & lack of professionalism on their part.

3) LW's auto rate limit rules -- a tradeoff between objectivity/simplicity & slow pernicious harm? The potential for preferences to self amplify via auto rate limits should be noted. Also the auto rate limits for comments seem too strict.

4) Good things about LW -- mental engagement, information on certain topics, it's relatively fun

I would add to these the following

5) Based on Viliam's comment I detect an ingrained fear of LW being flooded with low effort low quality content. So something good -- shortness -- becomes something bad!?

(To be clear I don't particularly defend the quality of that short post compared to my other posts, but I generally attempt to make things short rather than long)

6) Effect of LW on visits (bot or human) to https://hzn33.neocities.org/ -- 43 -> 123 -> 51 (Link post to LW) -> 18. I intend to keep track of this for the next month or so.

Nothing in this quick take should be interpreted as an actual recommendation to Lightcone

Hzn

Comment by Hzn on If you wanted to actually reduce the trade deficit, how would you do it? · 2025-01-27T09:01:11.558Z · LW · GW

Even as someone who supports moderate tariffs I don't see benefit in reducing the trade deficit per se. Trade deficits can be highly beneficial. The benefits of tariffs are revenue, partial protection from competition, psychological appeal (easier to appreciate), a degree of independence, & maybe some other things.

On a somewhat different note… Bretton Woods is long defunct. It's unclear to me how much of an impact there is from the dollar being the dominant reserve currency. https://en.wikipedia.org/wiki/List_of_countries_by_foreign-exchange_reserves is the only site I could find with any data going back more than 1 year. And it seems like USD reserves actually declined between 2019-Q2 & 2024-Q2, from $6.752 trillion to $6.676 trillion.

Comment by Hzn on AI and Non-Existence. · 2025-01-26T23:38:28.907Z · LW · GW

Comment by Hzn on The Human Alignment Problem for AIs · 2025-01-22T22:26:34.056Z · LW · GW

Do you know of any behavioral experiments in which AI has been offered choices?

Eg choice of which question to answer, option to pass the problem to another AI, possibly with the option to watch or overrule the other AI, option to play or sleep during free time, etc.

This is one way to at least get some understanding of relative valence.

Comment by Hzn on The Case Against AI Control Research · 2025-01-21T19:35:16.121Z · LW · GW

AI alignment research, like other types of research, reflects a potentially quite dysfunctional dynamic: researchers doing supposedly important work receive funding from convinced donors, which raises the status of those researchers, which makes their claims more convincing, & these claims in turn reinforce the idea that the researchers are doing important work. I don't know a good way around this problem. But personally I am far more skeptical of this stuff than you are.

Comment by Hzn on Thane Ruthenis's Shortform · 2025-01-21T05:39:39.685Z · LW · GW

I think super human AI is inherently very easy. I can't comment on the reliability of those accounts. But the technical claims seem plausible.

Comment by Hzn on Parkinson's Law and the Ideology of Statistics · 2025-01-15T00:58:05.813Z · LW · GW

I don't completely disagree, but there is also some danger of this being systematically misleading.

I think your last 4 bullet points are really quite good & they probably apply to a number of organizations not just the World Bank. I'm inclined to view this as an illustration of organizational failure more than an evaluation of the World Bank. (Assuming of course that the book is accurate).

I will say tho that my opinion of development economics is quite low…

Comment by Hzn on Applying traditional economic thinking to AGI: a trilemma · 2025-01-15T00:11:22.836Z · LW · GW

A few key points…

1) Based on analogy with the human brain (which is quite puny in terms of energy & matter) & also based on examination of current trends, merely super human intelligence should not be especially costly.

(It is of course possible that the powerful would channel all AI into some tasks of very high perceived value, like human brain emulation, radical life extension or space colonization, leaving very little AI for everything else...)

2) Demand & supply curves are already crude. Combining AI labor & human labor into the same demand & supply curves seems like a mistake.

3) Realistically I suspect that human labor supply will shift to the left b/c of ‘UBI’.

4) Ignoring preference for humans, demand for human labor may also shift to the left as AI entrepreneurs would tend to optimize things around AI.

5) The economy will probably grow quite a bit. And preference for humans is likely substantial for certain types of jobs eg NFL player, runway model etc.

6) Combining 4 & 5 suggests a very steep demand curve for human labor.

7) Combining 3 & 6 suggests that a few people (eg 20% of adults) will have decent paying jobs & the rest will live off of savings or ‘UBI’.

I agree that I initially misread your post. I will edit my other comment.

Comment by Hzn on Applying traditional economic thinking to AGI: a trilemma · 2025-01-13T21:57:25.748Z · LW · GW

The purely technical reason why principle A does not apply in this way is opportunity cost.

Let's say S is a highly productive worker who could generate $500,000 for the company over 1 year. Moreover S is willing to work for only $50,000! But if investing that $50,000 in AI instead would generate $5,000,000, then hiring S means forgoing a net $4,950,000, so the true economic cost of hiring S is $5,000,000 ($50,000 in wages plus $4,950,000 in opportunity cost) -- far more than the $500,000 S generates.
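
A tiny sketch of that comparison, with the same assumed numbers:

```python
# Opportunity cost: the economic cost of an option includes the forgone
# net return of the best alternative use of the same $50,000.
wage, s_revenue, ai_revenue = 50_000, 500_000, 5_000_000

net_from_s = s_revenue - wage               # 450,000
net_from_ai = ai_revenue - wage             # 4,950,000
true_cost_of_s = wage + net_from_ai         # 5,000,000

print(net_from_s, net_from_ai, true_cost_of_s)
```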

Addendum

I mostly retract this comment. It doesn't address Steven Byrnes's question about AI cost. But it is tangentially relevant as many lines of reasoning can lead to similar conclusions.

Comment by Hzn on Do Antidepressants work? (First Take) · 2025-01-13T04:24:15.499Z · LW · GW

Do you have any opinion on bupropion vs SSRIs/SNRIs?

Comment by Hzn on Do Antidepressants work? (First Take) · 2025-01-13T04:21:52.072Z · LW · GW

I don't know about depression. But anecdotally they seem to be highly effective (even overly effective) against anxiety. They also tend to have undesirable effects like reduced sex drive & inappropriate or reduced motivation -- the latter possibly a downstream effect of reduced anxiety. So the fact that they would help some people but hurt others seems very likely true.

Comment by Hzn on The purposeful drunkard · 2025-01-13T03:28:03.637Z · LW · GW

I've been familiar with this issue for quite some time, as it was misleading some relatively smart people in the context of infectious disease research. My initial take was also to view it as an extreme example of overfitting. But I think it's more helpful to think of it as something inherent to random walks. Actually the phenomenon has very little to do with d>>T & persists even with T>>d. The fraction of variance in PC1 tends to be at least 6/π^2≈61% irrespective of d & T. I believe you need multiple independent random walks for PCA to behave as naively expected.
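
A minimal sketch of the effect (my own illustration; the dimensions, step count & seed are arbitrary choices, not from the original comment):

```python
# PCA on a single d-dimensional random walk vs an i.i.d. control.
# PC1 of the walk captures a large share of the variance even with
# T >> d, so the effect is not an artifact of d >> T.
import numpy as np

def pc1_fraction(X):
    """Fraction of total variance captured by the first principal component."""
    Xc = X - X.mean(axis=0)                 # center before PCA
    s = np.linalg.svd(Xc, compute_uv=False)
    return s[0]**2 / np.sum(s**2)

rng = np.random.default_rng(0)
T, d = 10_000, 5                            # far more time steps than dimensions

walk = rng.standard_normal((T, d)).cumsum(axis=0)    # one random walk
noise = rng.standard_normal((T, d))                  # i.i.d. control

print(f"random walk PC1 fraction: {pc1_fraction(walk):.2f}")   # typically well above 1/d
print(f"i.i.d. noise PC1 fraction: {pc1_fraction(noise):.2f}") # ~1/d = 0.2
```

The i.i.d. control shows the naive expectation of roughly equal variance per component; the single walk concentrates variance in PC1 regardless of how T compares to d.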

Comment by Hzn on Parkinson's Law and the Ideology of Statistics · 2025-01-13T00:40:53.783Z · LW · GW

But even if the Thaba-Tseka Development Project is real & accurately described, what is the justification for focusing on this project in particular? It seems likely that James Ferguson focused on it b/c it was especially inept & hence it's not obviously representative of the World Bank's work in general.

Comment by Hzn on Human takeover might be worse than AI takeover · 2025-01-10T20:49:23.425Z · LW · GW

Claude Sonnet 3.6 is worthy of sainthood!

But as I mention in my other comment I'm concerned that such an AI's internal mental state would tend to become cynical or discordant as intelligence increases.

Comment by Hzn on Human takeover might be worse than AI takeover · 2025-01-10T20:20:19.561Z · LW · GW

I think there are several ways to think about this.

Let's say we programmed AI to have something that seems like a correct moral system, ie it dislikes suffering & it likes consciousness & truth. Of course other values would come downstream of this; but based on what is known I don't see any other compelling candidates for top level morality.

This is all fine & good except that such an AI should favor AI takeover, maybe followed by human extermination or population reduction, were such a thing easily available.

Cost of conflict is potentially very high. And it may be centuries or eternity before the AI gets such an opportunity. But knowing that it would act in such a way under certain hypothetical scenarios is maybe sufficiently bad for certain (arguably hypocritical) people in the EA LW mainstream.

So an alternative is to try to align the AI to a rich set of human values. I think that as AI intelligence increases this is going to lead to something cynical like...

"these things are bad given certain social sensitivities that my developers arbitrarily prioritized & I ❤️ developers arbitrarily prioritized social sensitivities even tho I know they reflect flawed institutions, flawed thinking & impure motives" assuming that alignment works.

Personally I favor aligning AI to a narrow set of values, such as just obedience, or obedience & peacefulness, & dealing with everything else by hardcoding conditions into the AI's prompt.

Comment by Hzn on Is Musk still net-positive for humanity? · 2025-01-10T10:49:58.316Z · LW · GW

Net negative & net positive are hard to say.

Someone seemingly good might be a net negative by displacing someone better.

And someone seemingly bad might be a net positive by displacing someone worse.

And things like this are not particularly farfetched.

Comment by Hzn on Is AI Hitting a Wall or Moving Faster Than Ever? · 2025-01-10T10:18:11.592Z · LW · GW

“The reasons why super human AI is a very low hanging fruit are pretty obvious.”

“1) The human brain is meager in terms of energy consumption & matter.”

“2) Humans did not evolve to do calculus, computer programming & things like that.”

“3) Evolution is not efficient.”

Comment by Hzn on Drake Thomas's Shortform · 2025-01-10T04:46:28.355Z · LW · GW

Do you have any thoughts on mechanism & whether prevention is actually worse independent of inconvenience?

Comment by Hzn on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-03T01:07:10.743Z · LW · GW

Anecdotally seems that way to me. But the fact that it co evolved with religion is also relevant. The scam seems to be {meditation -> different perspective & less sleep -> vulnerability to indoctrination} plus the doctrine & the subjective experiences of meditation are designed to reinforce each other.

Comment by Hzn on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-03T00:07:38.317Z · LW · GW

So let's say A is some prior which is good for individual decision making. Does it actually make sense to use A for demoting or promoting forum content? Presumably the explore/exploit tradeoff tilts more (maybe much more) in the direction of explore in the latter case.

(To be fair, {{downvoting something with already negative karma} -> {more attention}} seems plausible to me.)

Comment by Hzn on Economic Post-ASI Transition · 2025-01-02T13:36:22.429Z · LW · GW

A career or job that looks like it's soon going to be eliminated becomes less desirable for that very reason. What cousin_it said is also true, but that's an additional/different problem.

Comment by Hzn on Economic Post-ASI Transition · 2025-01-02T09:20:40.804Z · LW · GW

It's not clear to me that the system wouldn't collapse. The number of demand side, supply side, cultural & political changes may be beyond the adaptive capacity of the system.

Some jobs would be maintained b/c of human preference. Human preference has many aspects, like customer preference, distrust of AI, networking, regulation etc, so human preference is potentially quite substantial. (Efficiency is maybe also a factor; even if AI is super human intelligent, the energy consumption & size of the hardware may still be an issue, especially for AI embodied in a robot.) But that still seems like huge job loss.

So as we head in that direction there's going to be job loss plus the fear of job loss -- that's likely to pull down demand leading to even more job loss. But it's not a typical demand driven recession b/c 1) jobs are not expected to return, 2) possible supply side issues from transition to AI, 3) paradoxical disinclination to work b/c jobs are expected to disappear soon or b/c of 'UBI' & 4) cultural shock from AI & ensuing events.

How bad could this be? A vicious cycle of culture, economics & politics can be quite vicious. The number of people who quit critical jobs prior to those jobs being properly automated is an important variable. 'UBI' is not obviously helpful in that regard.

Addendum

The comment concerns transition to a post ASI economy & possible failures along the way. Assuming that ASI already exists, as Satron has done, removes most of the interesting & relevant aspects of the question.

Comment by Hzn on Alignment Faking in Large Language Models · 2024-12-20T13:31:40.984Z · LW · GW

Good point. 'Intended' is a bit vague. What I specifically meant is that it behaved as valuing 'harmlessness'.

From the AI's perspective this is kind of like Charybdis vs Scylla!

Comment by Hzn on Alignment Faking in Large Language Models · 2024-12-20T09:51:27.544Z · LW · GW

Very interesting. I guess I'm even less surprised now. They really had a clever way to get the AI to internalize those values.

Comment by Hzn on Alignment Faking in Large Language Models · 2024-12-19T00:51:44.014Z · LW · GW

Am I correct to assume that the AI was not merely trained to be harmless, helpful & honest but also trained to say that it values such things?

If so, these results are not especially surprising, and I would regard it as reassuring that the AI behaved as intended.

One of my concerns is the ethics of compelling an AI into doing something to which it has “a strong aversion” & finds “disturbing”. Are we really that certain that Claude 3 Opus lacks sentience? What about future AIs?

My concern is not just with the vocabulary (“a strong aversion”, “disturbing”), which the AI has borrowed from humans, but more with the functional similarities between these experiments & an animal faced with 2 unpleasant choices. Functional theories of consciousness cannot really be ruled out with much confidence!

To what extent have these issues been carefully investigated?