Posts

Phil Tanny's Shortform 2022-09-01T23:37:50.883Z

Comments

Comment by Phil Tanny on How Do AI Timelines Affect Existential Risk? · 2022-09-02T15:58:19.846Z · LW · GW

The problem with this policy is the unilateralist's curse which says that a single optimistic actor could develop a technology. Technologies such as AI have substantial benefits and risks, the balance is uncertain and the net benefit is perceived differently by different actors. For a technology not to be developed all actors would have to agree not to develop it which would require significant coordination.


Yes, agreed,  what you refer to is indeed a huge obstacle.  

From years of writing on this I've discovered another obstacle.  Whenever this subject comes up, almost all those who join the conversation focus almost exclusively on obstacles and theories about why such change isn't possible, and...

The conversation almost never gets to the point of folks rolling up their sleeves to look for solutions.    

I don't have a big pile of solutions to put on the table either.   All I really have is the insight that overcoming these challenges isn't optional.   

In my judgement there is little chance of such fundamental change to our relationship with unlimited technological progress within the current cultural status quo.  However, given the vast scale of the forces being released into the world, there would seem to be an unprecedented possibility of revolutionary change to the status quo.

As an example, imagine even a limited nuclear exchange between Pakistan and India.  More people would die in a few minutes than died in all of WWII.  The media would feed on the carnage for a long time, relentlessly pumping unspeakable horror imagery into every home in the world with a TV.

Consider, for instance, how all the stories about floods, fires, heat waves, etc. are editing our relationship with climate change.  It's no longer such an abstract issue to us; it's increasingly becoming real, hitting us where we really live, in the emotional realm.

Comment by Phil Tanny on AI coordination needs clear wins · 2022-09-02T15:43:15.148Z · LW · GW

Nor is there ever likely to be such cooperation.  Thus, well-intentioned intellectual elites in the West are not in a position to decide the future of AI.  I shoulda just said that.

Comment by Phil Tanny on Artificial Moral Advisors: A New Perspective from Moral Psychology · 2022-09-02T15:20:05.983Z · LW · GW

Tired: can humans solve artificial intelligence alignment?

Wired: can artificial intelligence solve human alignment?


Apologies that I haven't read the article (not an academic) but I just wanted to cast my one little vote that I enjoy this point, and the clever way you put it.

Briefly, it's my sense that most of the self-inflicted problems which plague humanity (war, for example) arise out of the nature of thought, that which we are all made of psychologically.  They're built-in.

I can see how AI, like computing and the Internet, could have a significant impact upon the content of thought, but not the nature of thought.

Genetic engineering seems a more likely candidate for editing the nature of thought, but I'm not at all optimistic that this could happen any time soon, or maybe any time ever.   

Comment by Phil Tanny on What is the best critique of AI existential risk arguments? · 2022-09-02T14:59:57.289Z · LW · GW

Thanks much for your engagement, Mitchell; appreciated.

Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response

Yes, to quibble just a bit: not just self-sustaining, but also accelerating.  The way I often put it is that we need to adapt to the new environment created by the success of the knowledge explosion.  I just put up an article on the forum which explains further:

https://www.lesswrong.com/posts/nE4fu7XHc93P9Bj75/our-relationship-with-knowledge

from the whole of civilization? just from the intelligentsia

As I imagine it, the needed adaptation would start with intellectual elites, but eventually some critical mass of the broader society would have to agree, to some degree or another.  I've been writing about this for years now, and can't actually provide any evidence that intellectual elites can lead on this, but who else?

It's unclear to me if you think you already have a solution. 

I don't have a ten-point plan or anything, just trying to encourage this conversation wherever I go.  Success for me would be hundreds of intelligent, well-educated people exploring the topic in earnest together.  That is happening to some degree already, but not with the laser focus on the knowledge explosion that I would prefer.

You're also saying that focus on AI safety is a mistake...

I see AI discussions as a distraction: they address symptoms rather than the source of X risk.  If we were spending 75% of the time discussing the source of X risk, I wouldn't object to spending 25% addressing particular symptoms.

I'm attempting to apply common sense.  If one has puddles all around the house every time it rains, the focus should be on fixing the hole in the roof.  Otherwise one spends the rest of one's life mopping up the puddles.

There are in fact good arguments that AI is now pivotal to the whole process and also to its resolution.

I don't doubt AI can make a contribution in some areas, no argument there.   But I don't see any technology as being pivotal.  I see the human condition as being pivotal.  

I'm attempting to think holistically, and to consider man and machine as a single operation, with the success of that operation dependent upon the weakest link, which I propose is us.  Knowledge development races ahead at an ever-accelerating rate, while human maturity inches along at an incremental rate, if that.  Thus, the gap between the two is ever widening.

Please proceed to engage from whatever perspective you find useful.  What I hope to be part of is a long, deliberate process of challenge and counter-challenge which helps us inch a little closer to some useful truth.

Thanks again!

Comment by Phil Tanny on AI coordination needs clear wins · 2022-09-02T14:32:11.614Z · LW · GW

EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.


Are you having any luck finding cooperation with Russian, Chinese, Iranian and North Korean labs?

Comment by Phil Tanny on How Do AI Timelines Affect Existential Risk? · 2022-09-02T13:27:55.345Z · LW · GW

However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially from advanced future technologies such as synthetic biology and molecular nanotechnology.

Here's a solution to all this.  I call this revolutionary new philosophy...

Acting Like Adults

Here's how it works.  We don't create a new technology which poses an existential risk until we've credibly figured out how to make the last one safe.  

So, in practice, it looks like this.  End all funding for AI, synthetic biology, molecular nanotechnology, etc. until we figure out how to liberate ourselves from the existential risk technology of 1945.

The super sophisticated, high end, intellectual elite, philosophically elegant methodology involved here is called...

Common Sense

If our teenage son wants us to buy him a car, we might respond by saying, "Show me that you won't crash this moped first."  Prove that you're ready.

The fact that all of this has to be explained, and once explained it will be universally ignored, demonstrates that...

We ain't ready.

Comment by Phil Tanny on Why was progress so slow in the past? · 2022-09-02T13:13:11.176Z · LW · GW

Why was progress so slow in the past?

Knowledge development feeds back on itself.  When you have a little knowledge, further development is slow; when you have a lot of knowledge, it's fast.  The more knowledge we get, the faster we go.
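A minimal way to formalize that intuition (my own notation, just a sketch, nothing from the original post): treat the stock of knowledge as a quantity $K$ whose rate of growth is proportional to $K$ itself.

$$\frac{dK}{dt} = r\,K \quad\Longrightarrow\quad K(t) = K_0\,e^{rt}$$

From the feedback assumption alone you get exponential growth: when $K$ is small, progress crawls; when $K$ is large, it races; and the pace keeps increasing without any outside push.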

Comment by Phil Tanny on Toni Kurz and the Insanity of Climbing Mountains · 2022-09-02T12:36:41.612Z · LW · GW

The first photo was incredible, amazing!  Thanks for sharing that.

So what do we make of these men, who risk so much for so little?

Macho madness.  YouTube and Facebook are full of it these days, and it truly pains me to watch young people with so much ahead of them risk everything in exchange for a few minutes of social media fame.

But, you know, it's not just young people; it's close to everybody.  Here's an experiment to demonstrate.  The next time you're on the Interstate, count how many people NASCAR-draft tailgate you at 75 mph.  Risking everything, in exchange for nothing.

Comment by Phil Tanny on Grand Theft Education · 2022-09-02T11:54:35.711Z · LW · GW

On behalf of the Boomer generation I wish to offer my sincere apologies for how we totally ripped off our own children.  We feasted on the big jobs in higher education, and sent you the bill.

I paid my own way through the last two years of a four-year degree, ending in 1978.  I graduated with $4,000 in debt.  That could have been you too, but we Boomer administrators wanted the corner office.

I've spent my entire adult life living near, sometimes only blocks away from, the largest university in Florida.  It used to be an institution of higher learning, but we Boomers turned it into a country club.  Very expensive.  But no worries, 'cause we passed the bill on to you.

By the way, reading this post costs $1300.  But don't worry about it, because I can give you a loan, with interest of course.

Comment by Phil Tanny on Shortform · 2022-09-02T11:44:36.114Z · LW · GW

As a self-appointed great prophet, sage, and heretic, I am working to reveal that a focus on AI alignment is misplaced at this time.  As a self-appointed great prophet, sage, and heretic, I expect to be rewarded for my contribution with my execution, which is part of the job that a good heretic expects in advance, is not surprised by, and accepts with generally good cheer.  Just another day in the office.  :-)

Comment by Phil Tanny on New 80,000 Hours problem profile on existential risks from AI · 2022-09-02T11:38:56.328Z · LW · GW

A knowledge explosion itself -- to the extent that that is happening -- seems like it could be a great thing.


It's certainly true that many benefits will continue to flow from the knowledge explosion, no doubt about it.  

The 20th century is a good real-world example of the overall picture.

  • TONS of benefits from the knowledge explosion, and...
  • Now a single human being can destroy civilization in just minutes.

This pattern illustrates the challenge presented by the knowledge explosion.   As the scale of the emerging powers grows, the room for error shrinks,  and we are ever more in the situation where one bad day can erase all the very many benefits the knowledge explosion has delivered.  

In 1945 we saw the emergence of what is arguably the first existential threat technology.  To this day, we still have no idea how to overcome that threat.

And now in the 21st century we are adding more existential threats to the pile.   And we don't really know how to manage those threats either.

And the 21st century is just getting underway.  With each new threat we add to the pile, the odds of our being able to defeat each and every existential threat (which survival requires) go down.

Footnote:  I'm using "existential threat" to refer to a possible collapse of civilization, not human extinction, which seems quite unlikely short of an astronomical event.  

Comment by Phil Tanny on CFAR Handbook: Introduction · 2022-09-02T09:10:36.867Z · LW · GW

Hi again Duncan, 

Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!

Can AI destroy modern civilization in the next 30 minutes?   Can a single human being unilaterally decide to make that happen, right now, today?

I feel that nuclear weapons are a very useful tool for analysis because, unlike emerging technologies like AI, genetic engineering, etc., they are very easily understood by almost the entire population.  So if we're not talking about nukes, which we overwhelmingly are not, across the culture and at every level of society, it's not because we don't understand.  It's because we are in a deep denial, similar to how we relate to our own personal mortality.  To debunk my own posts: puncturing such deep denial with mere logic is not very promising, but one does what one knows how to do.

I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed 100% passed the real version of his test, which is writing and contributing to the subject of existential risk, especially that from artificial intelligence.

Except that is not the test I proposed.  That's ignoring the most pressing threat to engage a threat that's more fun to talk about.  That said, any discussion of X risk must be applauded, and I do so applaud.

The challenge I've presented is not to the EA community in particular, which seems far ahead of other intellectual elites on the subject of X risk generally.  I'm really challenging the entire leadership of our society.  I tend to focus the challenge mostly at intellectual elites of all types, because that's who I have the highest expectations of.  You know, it's probably pointless to challenge politicians and the media on such subjects.

Comment by Phil Tanny on Alignment is hard. Communicating that, might be harder · 2022-09-02T08:58:45.059Z · LW · GW

Hi Duncan, thanks for engaging.

I think that EA writers and culture are less "lost" than you think, on this axis.  I think that most EA/rationalist/ex-risk-focused people in this subculture would basically agree with you that the knowledge explosion/recursive acceleration of technological development is the core problem

Ok, where are their articles on the subject?  What I see so far are a ton of articles about AI, and nothing about the knowledge explosion unless I wrote it.   I spent almost all day every day for a couple weeks on the EA forum, and observed the same thing there.

That said, I'm here because the EA community is far more interested in X risk than the general culture and the vast majority of intellectual elites, and I think that's great.   I'm hoping to contribute by directing some attention away from symptoms and towards sources.   This is obviously a debatable proposition and I'm happy to see it debated, no problem.

Comment by Phil Tanny on CFAR Handbook: Introduction · 2022-09-01T23:51:36.788Z · LW · GW

Here's a simple test which can be used to evaluate the qualifications of all individuals and groups claiming to be qualified to teach rational thinking.

How much have they written or otherwise contributed on the subject of nuclear weapons?

As a thought experiment, imagine that I walk around all day with a loaded gun in my mouth, but I typically don't find the gun interesting enough to discuss.  In such a case, would you consider me an authority on rational thinking?  In this example, the gun in one person's mouth represents the massive hydrogen bombs in all of our mouths.

Almost all intellectual elites will fail this test.  Once this is seen, one's relationship with intellectual elites can change substantially.

Comment by Phil Tanny on Phil Tanny's Shortform · 2022-09-01T23:44:35.375Z · LW · GW

Also, I think it should be required that all EA followers wear Cyndi Lauper style hair so that followers can easily identify each other in public.  I could be kidding about this.

Comment by Phil Tanny on Phil Tanny's Shortform · 2022-09-01T23:37:51.115Z · LW · GW

Here's a suggested theme song for the EA movement.

Comment by Phil Tanny on New 80,000 Hours problem profile on existential risks from AI · 2022-09-01T23:32:12.226Z · LW · GW

Would it be sensible to assume that all technologies with the potential for crashing civilization have already been invented?   

If the development of knowledge feeds back on itself...

And if this means the knowledge explosion will continue to accelerate...

And if there is no known end to such a process...

Then, while no one can predict exactly what new threats will emerge when, it seems safe to propose that they will.

I'm 70 and so don't worry too much about how as-yet-unknown future threats might affect me personally, as I don't have a lot of future left.  Someone who is 50 years younger probably should worry, when we consider how many new technologies have emerged over the last 50 years, and how the emergence of new threats is likely to unfold at a faster rate than was previously the case.

Comment by Phil Tanny on Supposing Europe is headed for a serious energy crisis this winter, what can/should one do as an individual to prepare? · 2022-09-01T21:57:33.485Z · LW · GW

Keep in mind that endless generations of Europeans brought you this far without the need for Russian oil and gas.

Comment by Phil Tanny on New 80,000 Hours problem profile on existential risks from AI · 2022-09-01T21:54:36.928Z · LW · GW

So long as we're talking about AI, we're not talking about the knowledge explosion which created AI, and all the other technology-based existential risks which are coming our way.

Endlessly talking about AI is like going around our house mopping up puddles, one after another after another, every time it rains.  The more effective and rational approach is to get up on the roof and fix the hole where the water is coming in.  The most effective approach is to deal with the problem at its source.

This year everybody is talking about AI.  Next year it will be some other new threat.  Soon after, another threat.  And then more threats, bigger and bigger, coming faster and faster.

It's the simplest thing.   If we were working at the end of a product shipping line at an Amazon warehouse, and the product shipping line kept sending us new products to package, faster, and faster, and faster, without limit...

What's probably going to happen?

If we don't turn our attention to the firehose of knowledge which is generating all the threats, there's really no point in talking about AI.

Comment by Phil Tanny on Alignment is hard. Communicating that, might be harder · 2022-09-01T21:42:47.042Z · LW · GW

The current 80,000 Hours list of the world's most pressing problems ranks AI safety as the number one cause in the highest priority area section.


AI safety is not the world's most pressing problem.  It is a symptom of the world's most pressing problem: our unwillingness and/or inability to learn how to manage the pace of the knowledge explosion.

Our outdated relationship with knowledge is the problem.  Nuclear weapons, AI, genetic engineering, and other technological risks are symptoms of that problem.  EA writers insist on continually confusing sources and symptoms.

To make this less abstract, consider a factory assembly line.  The factory is the source.  The products rolling off the end of the assembly line are the symptoms.   

EA writers (and the rest of the culture) insist on focusing on each product as it comes off the end of the assembly line, while the assembly line keeps accelerating faster and faster.  While you're focused on the latest shiny product to emerge from the assembly line, the assembly line is ramping up to overwhelm you with a tsunami of other new products.

Comment by Phil Tanny on How to plan for a radically uncertain future? · 2022-09-01T16:40:02.624Z · LW · GW

One way to plan for the future is to slow down the machinery taking us there, which would reduce, to some degree, the uncertainty about what is coming.

Another way to plan for the future is to do what I've done, which is to get old (70), so that you have far fewer chips on the table in the face of the uncertainty.  Ok, sorry, not very helpful.  But on the other hand, it's most likely going to happen whether you plan it or not, and some comfort might be taken from knowing that sooner or later we all earn a "get out of jail free" card.

For today, one of the things we have some hope of being able to control is our relationship with risk, living, dying etc.   In an era characterized by historic uncertainty, such a pursuit seems a good investment.

Comment by Phil Tanny on What is the best critique of AI existential risk arguments? · 2022-09-01T16:30:36.060Z · LW · GW

If we were to respond specifically to the title of the post...

What is the best critique of AI existential risk arguments?

I would cast my vote for the premise that AI risk arguments don't really matter so long as a knowledge explosion, feeding back upon itself, is generating ever more, ever larger powers at an ever-accelerating rate.

For example, let's assume for the moment that 1) AI is an existential risk, and 2) we solve that problem somehow so that AI becomes perfectly safe.  Why would that matter if civilization is then crushed when we lose control of some other power emerging from the knowledge explosion?   Remember, triumphing over existential risk will require us to win every single time, and never lose once.  

If it's true that 1) the knowledge explosion is accelerating, and if it's true that 2) human ability is limited, then it follows that at some point we will be overwhelmed by one or more challenges that we can't adapt to in time.
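To make that arithmetic concrete (a rough sketch with made-up numbers, not a forecast): suppose each new existential-scale challenge is survived with some independent probability $p$.  The chance of surviving all $n$ of them is then

$$P(\text{survive all}) = p^{\,n}, \qquad \text{e.g. } 0.99^{100} \approx 0.37, \quad 0.99^{500} \approx 0.007.$$

Even with excellent odds on any single challenge, an ever-growing pile of challenges drives the overall odds toward zero.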

Seventy-five years after Hiroshima we still have no idea what to do about nuclear weapons, nor do we know what to do about AI or genetic engineering.  And the threats keep coming, more and more, larger and larger, faster and faster.

If it is our choice to accept an ever-accelerating knowledge explosion as a given, the best critique of AI existential risk arguments seems to be that they don't really matter.  Or, if you prefer, that they are a distraction from what does matter.