Review of the Challenge

post by SD Marlow (sd-marlow) · 2022-11-05T06:38:58.899Z

Contents

  All of this AI stuff is a misguided sideshow.
  We should be even more focused on AI.
  Different aspects of the problem. 
  Scrutiny is mutiny.

All of this AI stuff is a misguided sideshow.

The disconnect between what Machine Learning represents and the desired or "hyped" abilities is very real. The flashy, headline-grabbing results of the past decade are certainly a sideshow, but rather than hiding how far these systems are from actual cognition (and especially from being sentient), they obscure the simple nature of what the xNN models represent: a vast probability table with a search function. The impressive outputs are based on the impressive inputs that went into the training process. There is no active mind in the middle. There is no form of cognition taking place at the time of training.
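To make the "probability table with a search function" framing concrete, here is a minimal sketch (a toy bigram model; the corpus, names, and scale are all invented for illustration, and real xNN models are enormously more elaborate):

```python
# Toy sketch: a probability table built from counts, queried by a search
# (argmax) function. Everything here is illustrative, not any real system.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# The "probability table": how often each word follows each other word.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def search(prev_word):
    """The 'search function': return the most probable next word."""
    counts = table[prev_word]
    word, count = counts.most_common(1)[0]
    return word, count / sum(counts.values())

print(search("the"))  # ('cat', 0.5): lookup plus argmax, no mind in the middle
```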

We should be even more focused on AI.

When focus is based on the latest arXiv paper, or this week's SOTA, then general sight has already been lost. Rather than follow behind the circus animals, effort should be made to stop and orient oneself. Where is AI today? Where was it 60 years ago? What direction should it be going? Is the circus, the sideshow, really where all time and effort should be invested? What else is out there? If AI research is a journey from Los Angeles to New York, ML is Las Vegas. Sentient-animal research might be Albuquerque or Denver, and early childhood development, Chicago.

Different aspects of the problem. 

My original post attempted to address this very point. Current efforts to predict arrival times or plot a development path based on compute scaling laws are a game of Russian Roulette. You only get it right after finding the magic bullet, and by then, it’s too late. 

Take a leap of faith and assume advances in Machine Learning are specific to ML alone. Looking for the right kind of markers of progress toward science-fiction levels of AI, the kind that are not just incrementally better than the current year's examples, requires understanding AI itself. Not what is current. Not what came before. An AI that understood basic math would make zero errors on a math test. That current systems score some percentage below that tells us they don't REALLY understand.
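The "zero errors" marker can be stated as a pass/fail test: anything below a perfect score fails. A minimal sketch (the harness and all names here are hypothetical, invented for illustration):

```python
# Hedged sketch of the "zero errors" test: score a solver on arithmetic
# questions; by the marker above, "understanding" means a score of 1.0.
import random

def make_test(n=100, seed=0):
    rng = random.Random(seed)
    return [(a, b, a + b)
            for a, b in ((rng.randint(0, 999), rng.randint(0, 999))
                         for _ in range(n))]

def exact_solver(a, b):
    return a + b  # deterministic arithmetic: no slips, no transposed digits

def score(solver, test):
    return sum(solver(a, b) == ans for a, b, ans in test) / len(test)

print(score(exact_solver, make_test()))  # 1.0, the bar being proposed
```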

It's beyond the scope of EA efforts, but I need to add that examples of high-level thinking are what current systems try to mimic. Actual research doesn't start with language translation or solving mathematical conjectures. As with the human mind, a lot of effort goes into building a structure that supports abstract logic, and solving this "critical mass" problem is likely to leave a very tiny footprint.

Scrutiny is mutiny.

While the premise is to have the community's assumptions "exposed to external scrutiny," there is a strong correlation between "popular" posts and those that support existing assumptions. I don't think selection bias is going to improve anything.

Do you really want your mind changed? If the AI challenge is solved in a manner you didn't plan for, then tens of millions spent on "wrong method" alignment will have been wasted. It seems like you can literally afford to keep your options open.

5 comments


comment by Ruby · 2022-11-05T18:58:22.621Z

"...plot a development path based on compute scaling laws are a game of Russian Roulette. You only get it right after finding the magic bullet, and by then, it's too late."

"Scrutiny is mutiny."

Quick note as a mod for the site. I feel this post's ratio of substance to witty/snarky metaphor and wordplay isn't high enough. I downvoted, and as part of an effort to nudge the site more towards great content, I'm applying a one post/comment per day rate limit to your account (in light of this post and other downvoted posts).

We're a little more trigger-ready with Future Fund Worldview Prize posts because it seems the quality is lower than average for LW. And I don't think that's just because we're resistant to contrary opinions.

comment by ChristianKl · 2022-11-05T15:40:10.369Z

"An AI that understood basic math would make zero errors on a math test."

Do humans who understand basic math make zero errors on math tests? I don't think that's the case. Part of human intelligence involves making all sorts of random errors.

If you think this is a major current problem, how certain are you that a scaled-up Gato won't be able to do all math at the level of a high school student?

comment by SD Marlow (sd-marlow) · 2022-11-05T17:49:19.084Z

Human errors come from transposing or misreading numbers, placing the decimal in the wrong location, etc. The machine mind has a "perfect cache" to hold numbers, concepts, and the steps involved. Math is just a simple example of their ability. Such machine minds will be able to hold every state and federal law in their mind, and could re-draft legislation that is "clean" while porting case law from the old to the new legal references.

*For an example of current tech getting simple math wrong: https://twitter.com/KordingLab/status/1588625510804119553?s=20&t=lIJcvTaFTLK8ZlgEfT-NfA
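A toy sketch of the contrast being drawn here (my illustration only; the slip probability and all names are invented): a copier that occasionally transposes adjacent digits, versus exact storage that never drifts:

```python
# Illustrative only: a "human-like" copier that sometimes swaps adjacent
# digits, versus a machine's exact ("perfect cache") copy of the same number.
import random

def noisy_copy(n, rng, p=0.3):
    digits = list(str(n))
    if len(digits) >= 2 and rng.random() < p:
        i = rng.randrange(len(digits) - 1)  # pick an adjacent pair to swap
        digits[i], digits[i + 1] = digits[i + 1], digits[i]
    return int("".join(digits))

rng = random.Random(0)
value = 1234
slips = sum(noisy_copy(value, rng) != value for _ in range(1000))
print(f"human-style copies wrong: {slips}/1000")  # roughly 300 expected
print("perfect-cache copies wrong: 0/1000")       # exact storage never slips
```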

comment by ChristianKl · 2022-11-05T18:36:10.008Z

It's part of human intelligence to make errors. Making errors is a sign of human-like intelligence.

You could imagine an AGI that doesn't make any mistakes, but the presence of errors is no argument against it achieving human-like performance. 

It's interesting that you completely ignored the question about what you believe will be the likely capabilities of near-future technology like Gato.

comment by SD Marlow (sd-marlow) · 2022-11-05T14:19:09.752Z

*You can't align something that doesn't really work (it not really working is the current danger). A better question is: can you take a working AI and brainwash it? The unhackable (machine) mind? A good system in the wrong hands, and all that.

**Again, free yourself from the monolithic view of current architectures.