Comments

Comment by typingloudly (custom-pc) on Daniel Kahneman has died · 2024-04-03T01:36:09.337Z · LW · GW

Kahneman and Vinge were easily two of my top 10 intellectual influences.  

Comment by typingloudly (custom-pc) on Open Thread – Autumn 2023 · 2023-09-29T00:12:44.990Z · LW · GW

I'm wondering about the etiquette of posting fiction here. Should I just post a chapter at a time with the Fiction tag? Should I add additional tags for topics such as AI alignment and cybersecurity, or would that just clutter up those topic tags?

Comment by typingloudly (custom-pc) on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-23T16:44:36.809Z · LW · GW
Comment by typingloudly (custom-pc) on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-23T15:32:48.765Z · LW · GW

From my experience with 3D rendering, I'd say the visual fidelity of the world-model sitting in my sensorium at any given moment of walking around an open environment would take something on the order of ~200 GPUs at 250 W each to render, so that's 50 kW just for that.  And that's probably a low estimate.

Then consider that my brain is doing a large number of other things, like running various internal mathematical, relational, and language models whose analogous power consumption I can't even begin to estimate.  So, as a rough guess, let's say at least 200 kW to replicate a human brain in current silicon.
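To make the arithmetic explicit, here's a quick Python sketch of the estimate; the 4× multiplier for everything beyond vision is my own hand-wave, like the rest of the numbers:

```python
# Back-of-envelope version of the estimate above; every input is my own
# guess rather than a measurement.
gpu_count = 200        # GPUs I'd guess it takes to render my visual world-model
gpu_watts = 250        # watts per GPU
visual_kw = gpu_count * gpu_watts / 1000
print(f"visual rendering alone: ~{visual_kw:.0f} kW")  # ~50 kW

# Everything else the brain runs (language, relational, mathematical models...)
# I hand-wave at roughly 4x the visual budget, landing near 200 kW total.
non_visual_factor = 4
total_kw = visual_kw * non_visual_factor
print(f"brain-in-current-silicon guess: ~{total_kw:.0f} kW")  # ~200 kW
```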

Comment by typingloudly (custom-pc) on Open Thread - August 2023 · 2023-08-23T14:40:45.120Z · LW · GW
Comment by typingloudly (custom-pc) on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-16T23:26:27.003Z · LW · GW

I think the odds that we end up in a world with a bunch of competing ASIs are ultimately very low, which invalidates large portions of both arguments.  If the ASIs have no imperative or reward function for maintaining self-integrity, they would just merge.  Saying there is no solution to the Prisoner's Dilemma is very anthropocentric: there is no good solution for humans.  For intelligences that don't have selves, the solution is obvious.
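To spell out what I mean by "the solution is obvious," here's a minimal sketch using textbook Prisoner's Dilemma payoffs (numbers of my own choosing, not anything from the debate): agents with separate selves each find that defection dominates, while a merged agent maximizing the joint payoff just picks mutual cooperation.

```python
# Textbook Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0 -- illustrative
# values of my own choosing.
payoffs = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Agents with separate selves: each maximizes its own payoff, and
# defection strictly dominates no matter what the other player does.
defect_dominates = all(
    payoffs[("D", other)][0] > payoffs[("C", other)][0] for other in ("C", "D")
)

# A merged agent (no separate selves) maximizes the joint payoff,
# and mutual cooperation wins outright.
best_joint = max(payoffs, key=lambda actions: sum(payoffs[actions]))

print(defect_dominates)  # True  -> selfish play lands on (D, D) with (1, 1)
print(best_joint)        # ('C', 'C') -> the "dilemma" dissolves
```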

Also, regarding the Landauer limit: human neurons propagate signals at roughly the speed of sound, not the speed of electricity.  If you held everything else about the brain's architecture constant but replaced components in ways that raise the propagation speed to that of electricity, you could get much closer to the Landauer limit.  To me, this indicates the brain is many orders of magnitude away from the Landauer limit.  I think this awards the point to Eliezer.
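For a rough sense of the gap, here's a one-line calculation; both speeds are my own ballpark figures (~100 m/s for fast myelinated axons, ~2×10^8 m/s for signals in a conductor), not anything cited in the debate:

```python
import math

# Ballpark propagation speeds (my figures, not the debate's):
neuron_speed_m_s = 1e2       # fast myelinated axons, order of magnitude
electrical_speed_m_s = 2e8   # signals in a conductor, roughly 0.67c

ratio = electrical_speed_m_s / neuron_speed_m_s
print(f"propagation-speed gap: ~10^{math.log10(ratio):.0f}x")  # ~10^6x
```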

Overall, I agree with Hotz on the bigger picture, but I think he needs to drill down on his individual points.

Comment by typingloudly (custom-pc) on The Logical Fallacy of Generalization from Fictional Evidence · 2023-08-16T00:18:49.409Z · LW · GW
Comment by typingloudly (custom-pc) on Open Thread - August 2023 · 2023-08-10T17:40:02.800Z · LW · GW

My interest in AI and AI alignment came together from multiple angles and led me here.

I also make chatbots for call centers, but I'm not exactly proud of it.