Feelings, Nothing More than Feelings, About AI

post by PaulBecon · 2023-11-14T18:50:24.275Z · LW · GW · 0 comments

Contents

  FOBO is the Fear of Becoming Obsolete

FOBO is the Fear of Becoming Obsolete

ChatGPT only came out at the end of 2022, yet over 100 million people have already fooled around and found out for themselves at least a little about LLMs (Large Language Models). The arc of fascination has even roller-coastered past the initial infatuation, as some people now feel the model has been getting dumbed down. The cognitive belt notches continue to accumulate as it blows past AP exams, conquers the bar exam, and matches radiologists in diagnostic accuracy.

Instead of discussing the IQ of AIs, I'm far more interested in humans and their complex emotional responses to suddenly encountering these alien minds. "Algorithm aversion," a term coined by behavioral scientists in 2015, captures the harsher standards humans insist upon when decision-making is algorithmic. We're currently seeing this with California regulators' suspension of driverless car company Cruise's permits to carry passengers in San Francisco. Merely being far safer than human drivers is not sufficient. When algorithms make mistakes, people lose confidence in them more quickly than they do with human error.

What are we willing to hand off to blind computer processing? Currently, programmers are the ones most intimately exposed to the significant productivity impact of using GPTs to supplement their traditional ways of working. Memorizing gnarly, arcane language conventions is now as obsolete as tricks for calculating square roots were in the wake of handheld calculators. Instead of picking through thorny tangles of code to find bugs, a code "co-pilot" can ingest a hairball of code and generate its own 'best guess' at the right code.

The term 'co-pilot' finesses the question of who's in the driver's seat. Coders initiate the conversation with a GPT, direct the AI to focus on particular lines, and make the final decision about whether to incorporate, reject, or tweak the AI's proposal.

Humans can get snagged on preserving their own sense of autonomy. We get touchy/tetchy when computers drive human decisions. The cost of algorithm aversion can be measured by how differently financial recommendations are treated when presented as the output of a computer versus a human. Decades of evidence has piled up showing that human financial advisors, as a group, consistently fail to outperform robo-automated, well-diversified, low-cost index funds. Yet people, as investors, do not simply want money or better evidence by which to make money. Even brokers who are simply grifting sell their customers desirable feelings: trust, a face, a way to delegate responsibility w/o loss of oversight, an illusion of control. The appeal of delegating to a human who works for you is that you can contract out the need to think about that domain.

The specific domain of robo-advice generalizes easily to the wild world of contexts where humans will increasingly interact with computer-generated content. The flood tide of new compute affects every specialized field one might list: doctors, lawyers, designers, developers, marketers, et al. must all re-assess what component of their value survives contact with the AI.

I'm interested primarily in how the humans will feel. What psychological responses will erupt among different individuals from different demographics (knowledge work vs. manual labor; boomers vs. the rest of us)? The workers most affected will be those at least tangentially involved in making sense of fuzzy stuff, particularly where domain expertise involves a lot of factual knowledge.

The emotional reactions to GPTs are currently focused on the dark scenarios where mass unemployment strikes 21st-century knowledge workers. When DALL-E started impressing people with its capacity to transform text into evocative images, who had anticipated that designers would be some of the first to face disemployment by AI? Designers, who devote many hours of tweaking to reach the fiddly final form, can now outsource all but the intention. The expert will know what to derive from the toolset of options available inside Figma or the Adobe suite. A lot of the fastidious review and summarization work across many expert fields can now be sped up. LLMs generate a world where everyone has a research assistant on tap, willing to do an inexhaustible amount of grunt work, so long as you can somehow signal to the AI what you want it to try to do for you. Presumably, until the next epoch of AI arrives, experts will have their brainpower amplified by harnessing the GPT guess machinery. Writing code, for example, with GitHub's AI pair programmer, Copilot.

At the moment, Copilot's coverage is uneven: it is strongest in mainstream languages like Python and spottier in the niche statistical languages, like SAS, that many data scientists still lean on. Developers who have had a chance to work in an environment "enriched" by an LLM tasked with "pair programming" alongside them commonly report a 50% boost in output. As reliable, longitudinal data streams in, the findings should verify Rutherford's principle: "If you needed statistics to find it, it wasn't important enough to be worth attention." The lift in coding ability is not strange, given the particular strengths of GPTs. With today's level of LLM competency, the human and the paired AI-coder step through code in basic chunks. The supervisory role of the human in the loop relaxes into a semi-automated process: have the GPT generate a chunk of code, run it to see if the GPT got it right, and if not, tell the GPT what broke and ask it to debug its own output. It was once thought that AI's ability to self-debug would trigger the take-off "foom" scenario. Fortunately, GPTs still lack the capacity to take initiative.
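To make that loop concrete, here is a minimal Python sketch of the generate-run-debug cycle described above. The `ask_llm` helper is a hypothetical stand-in for whatever chat-completion call you actually use, and the round limit and timeout are arbitrary assumptions; the point is the shape of the loop, not any particular API.

```python
import subprocess
import tempfile


def ask_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to your LLM of choice and return its raw code output."""
    raise NotImplementedError


def generate_and_debug(task: str, max_rounds: int = 3) -> str:
    """Ask the model for code, run it, and feed any traceback back for another try."""
    prompt = f"Write a Python script that does the following:\n{task}"
    for _ in range(max_rounds):
        code = ask_llm(prompt)
        # Dump the candidate chunk into a throwaway file and run it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # "Got it right" here just means it ran; a human still reviews it.
        # Otherwise, hand the error back and ask the model to debug its own output.
        prompt = (
            f"This code:\n{code}\n\nfailed with:\n{result.stderr}\n"
            "Find the bug and return a corrected version."
        )
    raise RuntimeError("No working code within the round limit")
```

Note how little the human does inside the loop itself: they only read the version that finally runs, which is exactly the relaxed, supervisory role described above.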
