Htarlov's Shortform

post by Htarlov (htarlov) · 2024-12-21T19:16:25.829Z · LW · GW · 2 comments


Comments sorted by top scores.

comment by Htarlov (htarlov) · 2025-01-29T19:33:18.689Z · LW(p) · GW(p)

Thought on short timelines. Opinionated.

I think that AGI timelines might be very short, based on an argument that comes at the question from a different direction.

We can all agree that humans have general intelligence. If we look at how our general intelligence evolved from the simpler, more specialized forms of intelligence typical of animals, it does not seem to have come from complex interactions or unusually high evolutionary pressure. There were basically two drivers of that progress. The first was the ability to pass knowledge on through generations (culture) — something we share with some other animals, including our cousins the chimpanzees. The second was intersexual selection (mate choice): at some point in the past, our species developed sexual preferences based on the ability to gossip and talk. This is still with us, even if we are not fully aware of it — our courtship, known as dating, consists mostly of meeting and talking, and people who are not talkative, or who are introverted, tend to have a hard time dating even if they are otherwise successful.
These two things seem to have been the major drivers behind developing both more sophisticated language and better general intelligence.

It seems to me that this implies there are not many pieces missing between what animals do — acting on current observations with some general heuristics — and full-fledged general intelligence.

It also suggests that you need some set of functions or heuristics, possibly a small one, together with a form of external memory, to tackle any general problem by dividing it into smaller pieces and rejoining the sub-solutions into an overall solution — like a processor or Turing machine that has a small set of basic operations but can, in principle, run any program. A toy sketch of this idea follows.
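
To make that concrete, here is a minimal sketch (my own illustration, not something from the comment above): a tiny solver with a fixed three-operation repertoire — SPLIT, a trivial base case, and JOIN — plus explicit external memory in the form of two stacks (pending work and partial results). The choice of sorting as the problem, and all the names, are hypothetical; the point is only that a small, fixed operation set plus external memory is enough to decompose and recombine arbitrarily large instances.

```python
def solve(problem):
    """Sort `problem` using only SPLIT / JOIN steps and explicit stacks."""
    memory = [("SPLIT", problem)]   # external memory 1: pending work
    results = []                    # external memory 2: solved sub-results
    while memory:
        op, data = memory.pop()
        if op == "SPLIT":
            if len(data) <= 1:      # base case: trivially solved
                results.append(data)
            else:
                mid = len(data) // 2
                memory.append(("JOIN", None))      # rejoin after both halves
                memory.append(("SPLIT", data[mid:]))
                memory.append(("SPLIT", data[:mid]))
        elif op == "JOIN":          # merge two solved sub-solutions
            right, left = results.pop(), results.pop()
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
            merged.extend(left[i:]); merged.extend(right[j:])
            results.append(merged)
    return results[0]

print(solve([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Note that the recursion lives entirely in the external memory (the two stacks), not in the operation set — which is the Turing-machine-like property the comment points at.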

comment by Htarlov (htarlov) · 2024-12-21T19:16:25.981Z · LW(p) · GW(p)

In many publications, posts, and discussions about AI, I see an unstated assumption that intelligence is all about prediction power.

  • The simulation hypothesis assumes that there probably exist vastly powerful, intelligent agents that run full-world simulations to make better predictions.
  • Some authors, like Jeff Hawkins, use that assumption more or less directly.
  • Many people, when talking about AI risk, treat the ability to predict as the foundation of an AI's power. Some proposed failure modes seem to be derived from, or at least amplified by, this assumption.
  • Bayesian reasoning is often held up as the best possible way to reason, because it maximizes prediction power — at an exponential cost in computation (see the sketch below).
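
To illustrate that cost claim, here is a minimal sketch (my own toy model, not from the post): exact Bayesian updating over n binary unknowns requires enumerating all 2^n joint states, so the table — and the work — grows exponentially with n. The parity-through-a-noisy-channel model below is hypothetical; any likelihood would show the same scaling.

```python
import itertools

def exact_posterior(n, prior, likelihood, evidence):
    """Exact Bayesian update over all 2**n joint states of n binary variables."""
    states = list(itertools.product([0, 1], repeat=n))            # 2**n states
    unnormalized = {s: prior(s) * likelihood(evidence, s) for s in states}
    z = sum(unnormalized.values())                                # normalizer
    return {s: p / z for s, p in unnormalized.items()}

# Hypothetical toy model: the evidence is the parity of n hidden bits,
# observed through a channel that flips the reported parity 10% of the time.
prior = lambda s: 1.0                             # uniform (unnormalized) prior
likelihood = lambda e, s: 0.9 if sum(s) % 2 == e else 0.1

posterior = exact_posterior(n=10, prior=prior, likelihood=likelihood, evidence=1)
print(len(posterior))  # 1024 entries for n=10; n=40 would need ~10**12
```

At n = 40 the same exact update would already need about a trillion table entries — the kind of cost the heuristics discussed below avoid paying.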

I think this take is mistaken and the assumption does not hold. It rests on a further, hidden assumption: that the cost of intelligence is negligible, or that progress will eventually push it below any practical limit.

This does not fit the observed curve of AI capability versus the cost of the resources required — even well-optimized systems like our brains, with cells that are essentially very efficient nanites, run into limits.

The problem is that the costs of computation in resources (material, energy) and time should enter the optimization itself. This means the most intelligent system will rely on many heuristics that are "good enough" for real-world problems, targeting not the best prediction power but the best use of resources. This is also what we humans do — we mostly don't perform exact Bayesian or other strict reasoning. We mostly use heuristics (many of which produce biases).

The decision to think longer, or to simulate something precisely, is itself a decision about resources. Choosing whether to spend more resources and time to predict better, or less to decide faster, is therefore part of being intelligent. A very intelligent system should be good at matching resources to the problem and rescaling that allocation as its knowledge changes. It should not over-commit to having the most perfect predictions; where possible, it should use heuristics and techniques like clustering (including, but not limited to, the clustered fuzzy concepts of language) instead of a direct simulation approach. A minimal sketch of this kind of stop-or-continue metareasoning follows.
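
As a concrete (and entirely hypothetical) illustration of that trade-off, here is a sketch of an anytime Monte Carlo estimator that keeps computing only while the expected gain in precision is worth the cost of the next batch of samples. The `value_of_precision` and `cost_per_batch` knobs are invented stand-ins for whatever a real agent's utility and resource model would supply.

```python
import random
import statistics

def estimate(sample_fn, value_of_precision=1.0, cost_per_batch=0.001,
             batch=100, max_samples=100_000):
    """Refine a Monte Carlo mean estimate only while it pays for itself."""
    samples = [sample_fn() for _ in range(batch)]
    while len(samples) < max_samples:
        spread = statistics.stdev(samples)
        se_now = spread / len(samples) ** 0.5             # current standard error
        se_next = spread / (len(samples) + batch) ** 0.5  # after one more batch
        # Stop when the payoff for the extra precision no longer covers
        # the price of computing the next batch.
        if value_of_precision * (se_now - se_next) < cost_per_batch:
            break
        samples.extend(sample_fn() for _ in range(batch))
    return statistics.fmean(samples), len(samples)

# Example: estimating E[X] for X ~ Uniform(0, 1); the loop spends only as
# much computation as the (assumed) value of extra precision justifies.
mean, n_used = estimate(lambda: random.random())
print(f"estimate={mean:.4f} using {n_used} samples")
```

The design choice worth noting: the stopping rule compares the marginal value of more thinking against its marginal cost, so "how hard to think" becomes an output of the optimization rather than a fixed setting.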

Just a thought.