Posts

Is AI Alignment a pseudoscience? 2022-01-23T10:32:08.826Z

Comments

Comment by mocny-chlapik on New User's Guide to LessWrong · 2023-07-03T19:54:19.966Z · LW · GW

Hey, I wonder what your policy on linking blog posts is. I have some texts that might be interesting to this community, but I don't really want to copy everything out of the HTML and duplicate the content here. At the same time, I know that some communities don't like people promoting their own content. What are the best practices here?

Comment by mocny-chlapik on Will we run out of ML data? Evidence from projecting dataset size trends · 2022-11-15T12:53:27.139Z · LW · GW

In general, LM-generated text is still easily distinguishable by other LMs. Even though we humans cannot tell the difference, the way they generate text is not really human-like. They are much more predictable, simply because they are not trying to convey information as humans do; they are guessing the most probable sequence of tokens.

Humans are less predictable because they always have something new to say; LMs, on the other hand, are like the most cliché person ever.
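A rough sketch of how this kind of detection can work, assuming the HuggingFace transformers library and GPT-2 as the scoring model (both are my illustrative choices, not something claimed above): score the text's per-token predictability and treat suspiciously low perplexity as a sign of machine generation.

```python
# A minimal sketch of detecting LM-generated text by its predictability.
# Assumes the HuggingFace `transformers` library and GPT-2 as the scoring model;
# both are illustrative choices, not a specific detector referenced above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the scoring model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # The model shifts labels internally, so the loss is the mean
        # negative log-likelihood of each token given its prefix.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Machine-generated text tends to sit at lower perplexity (more predictable)
# than human text on a similar topic; a threshold would be tuned on known samples.
for sample in ["text suspected to be generated", "text written by a human"]:
    print(sample[:30], perplexity(sample))
```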

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-30T08:57:03.633Z · LW · GW

"No indication" in this context means that:

  1. Our current paradigm is almost depleted. We are hitting the wall with both data (PaLM uses 780B tokens; there are about 3T tokens publicly available, and additional trillions can be found in closed systems, but that's it) and compute (we will soon hit Landauer's limit, so there will be no more exponentially cheaper computation; current technology is only about three orders of magnitude above this limit, see the back-of-the-envelope sketch after this list).
  2. What we currently have is very similar to what we will ultimately be able to achieve with the current paradigm. And it is nowhere near AGI. We need to solve either the data problem or the compute problem.
  3. There is no practical possibility of solving the data problem => We need a new AI paradigm that does not depend on existing big data.
  4. I assume that we are using existing resources nearly optimally and that no significantly more powerful AI paradigm will be created until we have significantly more powerful computers. To have significantly more powerful computers, we need to sidestep Landauer's limit, e.g. by using reversible computing or some other completely different hardware architecture.
  5. There is no indication that such an architecture is currently in development and ready to use. It will probably take decades for such an architecture to materialize, and it is not even clear whether we are able to build such a computer with our current technologies.
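A back-of-the-envelope sketch of the headroom mentioned in point 1; the per-bit energy figure for current hardware is my own illustrative assumption, not a measured value.

```python
# Back-of-the-envelope check of the Landauer limit mentioned in point 1.
# The "current hardware" energy per bit operation is a rough illustrative
# assumption, not a measured figure from the comment above.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

landauer_limit = k_B * T * math.log(2)   # minimum energy to erase one bit, ~2.9e-21 J

assumed_current_energy_per_bit = 3e-18   # J per bit operation (illustrative assumption)

print(f"Landauer limit:        {landauer_limit:.2e} J/bit")
print(f"Assumed current tech:  {assumed_current_energy_per_bit:.2e} J/bit")
print(f"Headroom: ~{assumed_current_energy_per_bit / landauer_limit:.0f}x "
      "(roughly three orders of magnitude)")
```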

We will need several technological revolutions before we are able to increase our compute significantly. This will hamper the development of AI, perhaps indefinitely. We might need significant advances in materials science, quantum science, etc. to be theoretically able to build computers that are significantly better than what we have today. Then we will need to develop the AI algorithms to run on them and hope that this is finally enough to reach AGI levels of compute. Even then, it might take additional decades to actually develop the algorithms.

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-30T06:21:44.837Z · LW · GW

There is no indication for many catastrophic scenarios and truthfully I don't worry about any of them.

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-29T23:08:47.108Z · LW · GW

I don't see any indication of AGI, so it does not really worry me at all. The recent scaling research shows that we need a non-trivial number of orders of magnitude more data and compute to match human-level performance on some benchmarks (with the huge caveat that matching performance on some benchmark might still not produce intelligence). On the other hand, we are all out of data (especially high-quality data with some information value, not random product reviews or NSFW subreddit discussions), and our compute options are also not looking that great (Moore's law is dead; the fact that we are now relying on HW accelerators is not a good thing, it is a sign that after 70 years CPU performance scaling is no longer a viable option; and there are physical limitations that we might not be able to break anytime soon).

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-29T16:00:10.003Z · LW · GW

I believe that fixating on benchmarks such as chess is ignoring the G part of AGI. A truly intelligent agent should be general at least in the environment it resides in, considering the limitations of its form. E.g. if a robot is physically able to work with everyday objects, we might apply the Wozniak test and expect that an intelligent robot is able to cook dinner in an arbitrary house, or do any other task that its form permits.

If we assume that right now we are developing purely textual intelligence (without agency, a persistent sense of self, etc.), we might still expect this intelligence to be general, i.e. able to solve an arbitrary task if that seems reasonable considering its form. In this context, for me, an intelligent agent is able to understand common language and act accordingly, e.g. if a question is posed, it can provide a truthful answer.

BIG-bench has recently shown us that our current LMs are able to solve some problems, but they are nowhere near general intelligence. They are not able to solve even very simple problems if those actually require some sort of logical reasoning rather than just associative memory; this is a nice case:

https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/symbol_interpretation

You can see in the Model performance plots section that scaling did not help at all with tasks like these. This is a very simple task, but it was not seen in the training data, so the models struggle to solve it and produce random results. If LMs start to solve general linguistic problems like this, then we will actually have intelligent agents on our hands.

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-29T13:27:28.974Z · LW · GW

It's not goalpost moving, it's the hype that's moving. People reduce intelligence to arbitrary skills or problems that are currently being solved, and then they are let down when they find out that the skill was actually not a good proxy.

I agree that LMs are conceptually more similar to ELIZA than to AGI.

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-29T08:33:21.041Z · LW · GW

I believe that over time we will understand that producing human-like text is not a sign of intelligence. In the past, people believed that only intelligent agents are able to solve math equations (naturally, since only people can do it and animals can't). Then came computers, and they were able to do all kinds of calculations much faster and without errors. However, from our current point of view we now understand that doing math calculations is not really that intelligent, and even really simple machines can do it. Chess playing is a similar story: we thought that you have to be intelligent, but we found a heuristic that does it really well. People were afraid that chess-algorithm-like machines could be programmed to conquer the world, but from our perspective, that's a ridiculous proposition.

I believe that text generation will be a similar case. We think that you have to be really intelligent to produce human-like outputs, but in the end, with enough data, you can produce something that looks nice and can even be useful sometimes, yet there is no intelligence in there. We will slowly develop an intuition about what the capabilities of large-scale ML models are. I believe that in the future we will think about them as basically fuzzy databases that we can query with natural language. I don't think we will think about them as intelligent agents capable of autonomous actions.

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-28T22:14:30.028Z · LW · GW

> in order for this to occupy any significant probability mass, I need to hear an argument for how our current dumb architectures do as much as they do, and why that does not imply near-term weirdness. Like, "large transformers are performing {this type of computation} and using {this kind of information}, which we can show has {these bounds} which happens to include all the tasks it has been tested on, but which will not include more worrisome capabilities because {something something something}."

What about: state-of-the-art models with 500+B parameters still can't do 2-digit addition with 100% reliability. For me, this shows that the models are perhaps learning some associative rules from the data, but there is no sign of intelligence. An intelligent agent should notice how addition works after learning from TBs of data. Associative memory can still be useful, but it's not really AGI.
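To make the claim concrete, a probe for this looks roughly like the sketch below; `query_model` is a hypothetical stand-in for whatever API serves the model, stubbed here with a deliberately imperfect "model" so the script runs end to end.

```python
# A sketch of exhaustively probing 2-digit addition reliability.
# `query_model` is a hypothetical stand-in for a real LM API call; here it is
# stubbed with a deliberately imperfect "model" purely for illustration.
import random
import re

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an actual model call (illustration only)."""
    a, b = map(int, re.findall(r"\d+", prompt))
    answer = a + b if random.random() < 0.99 else a + b + 1   # occasional mistake
    return f" {answer}"

def two_digit_addition_accuracy() -> float:
    correct, total = 0, 0
    for a in range(10, 100):
        for b in range(10, 100):
            completion = query_model(f"Q: What is {a} + {b}?\nA:")
            correct += str(a + b) in completion   # count exact sums appearing in the answer
            total += 1
    return correct / total

# 100% reliability would mean accuracy == 1.0 over all 90 * 90 = 8100 prompts.
print(f"2-digit addition accuracy: {two_digit_addition_accuracy():.4f}")
```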

Comment by mocny-chlapik on Why I think strong general AI is coming soon · 2022-09-28T20:29:33.514Z · LW · GW

The post starts with the realization that we are actually bottlenecked by data and then proceeds to talk about HW acceleration. Deep learning is in a sense a general paradigm, but so is random search. It is actually quite important to have the necessary scale of both compute and data, and right now we are not sure about either of them. Not to mention that it is still not clear whether DL actually leads to anything truly intelligent in a practical sense, or whether we will simply end up with very good token predictors with very limited use.

Comment by mocny-chlapik on Language models seem to be much better than humans at next-token prediction · 2022-08-12T07:11:50.177Z · LW · GW

I believe I read a paper about superhuman performance of LSTM LMs maybe 4 years ago. The fact that LMs are better than humans at this is not that surprising. With the amount of data they have seen, even relatively simple models are able to precisely calculate the probabilities for individual words. But the comparison to humans does not make much sense here. People are not really doing language modeling in their day-to-day communication. When we speak, we are not predicting what our next word will be, we are communicating ideas and selecting words that will represent those ideas. When we hear someone speaking, we are using context clues to understand their language, we are not making predictions based solely on the words being said at that moment. When thinking about language, we are not sorting all the words from the vocabulary in our heads, we are usually selecting the one word that fits our needs best. Language modeling as used in computer science today is completely unnatural to human thinking and useless in communication.
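To be explicit about what "language modeling" means in this comparison: even a toy bigram model outputs a full probability distribution over the next word, which is exactly the task humans are being scored on here. A minimal self-contained sketch (the corpus is just a placeholder):

```python
# A toy bigram language model, just to make explicit what "language modeling"
# means in this comparison: given a context, output a probability for every
# candidate next word. The corpus here is an illustrative placeholder.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(prev: str) -> dict[str, float]:
    """P(next word | previous word), estimated from raw counts."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

print(next_word_distribution("the"))   # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_distribution("sat"))   # {'on': 1.0}
```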

Comment by mocny-chlapik on Deepmind's Gato: Generalist Agent · 2022-05-16T12:53:45.680Z · LW · GW

> So many S-curves and paradigms hit an exponential wall and explode, but DL/DRL still have not.

Don't the scaling laws use a logarithmic axis? That would suggest that the phenomenon is indeed exponential in its nature. If we need X times more compute with X times more data for additional improvements, we will hit the wall quite soon. There is only so much useful text on the Web and only so much compute that labs are willing to spend on this, considering the diminishing returns.
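Roughly what I mean, assuming the usual power-law form of the scaling laws; the constants below are purely illustrative, not fitted values.

```python
# Why a straight line on log-log axes implies rapidly growing costs.
# Assumes the usual power-law form of scaling laws, L(C) = a * C**(-alpha);
# the constants below are purely illustrative, not fitted values.
a = 10.0
alpha = 0.05   # illustrative exponent

def loss(compute: float) -> float:
    return a * compute ** (-alpha)

# Each constant multiplicative improvement in loss requires multiplying
# compute by a constant (large) factor, i.e. exponentially more resources.
for compute in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute={compute:.0e}  loss={loss(compute):.2f}")
```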

Comment by mocny-chlapik on Is AI Progress Impossible To Predict? · 2022-05-16T09:25:54.939Z · LW · GW

According to the current understanding of scaling laws, performance on most tasks follows a sigmoid w.r.t. model size. As we increase model size, we have a slow start, followed by a rapid improvement, followed by a slow saturation towards maximum performance. But each task has a different shape based on its difficulty. Therefore, on some tasks you might be in the rapid improvement phase when you do one comparison and in the saturated phase when you do another. The results you are seeing are to be expected so far. I would visualize absolute performance on each task for a series of models to see how the performance actually behaves.
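A small illustration of the shape I am describing; all constants are arbitrary, the point is only the three phases.

```python
# Illustrative sigmoid of task performance vs. log model size, showing the
# slow start, rapid improvement, and saturation phases. All constants are
# arbitrary; the point is only the shape, not any real task's numbers.
import math

def task_performance(n_params: float, midpoint: float = 10.0, steepness: float = 2.0) -> float:
    """Performance in [0, 1] as a sigmoid of log10(model size)."""
    x = math.log10(n_params)
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> performance {task_performance(n):.3f}")
```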

Comment by mocny-chlapik on The case for becoming a black-box investigator of language models · 2022-05-12T07:17:05.909Z · LW · GW

There is already a sizable amount of research done in this direction, the so-called BERTology. I believe the methodology that is being developed is useful, but knowing about specific models is probably superfluous. In a few months or years we will have new models, and anything you know about the current ones will not generalize.

Comment by mocny-chlapik on 12 interesting things I learned studying the discovery of nature's laws · 2022-03-09T08:54:57.363Z · LW · GW

You might enjoy reading _The Structure of Scientific Revolutions_. #9 is explicitly discussed there. It is often the case that the old, incorrect theory has a lot of work behind it and many of the anomalies are explained by additional mechanisms, e.g. the geocentric theory had a lot of bells and whistles in the end and was quite precise in some cases. When the heliocentric theory was created, it was actually worse at predicting the movement of celestial bodies because it was too simplistic and was not able to handle various edge cases. Related to your remark about gravity, it took more than 50 years to successfully apply the theory of gravity to predict how the Moon will behave.

Comment by mocny-chlapik on Is AI Alignment a pseudoscience? · 2022-01-24T15:12:16.528Z · LW · GW

Yeah, that is somewhat my perception.

Comment by mocny-chlapik on Is AI Alignment a pseudoscience? · 2022-01-24T10:39:38.849Z · LW · GW

Thanks, this looks very good.

Comment by mocny-chlapik on Is AI Alignment a pseudoscience? · 2022-01-24T10:38:55.749Z · LW · GW

Are you being passive-aggressive or am I reading this wrong? :)

The user Hickey is making a different argument. He is arguing about the falsifiability of the "superintelligence is coming" claim. This is also an interesting question, but I was not talking about that claim in particular.

Comment by mocny-chlapik on Is AI Alignment a pseudoscience? · 2022-01-24T10:21:49.459Z · LW · GW

I think that AI Safety can be a subfield of AI Alignment; however, I see a distinction between AI as current ML models and AI as theoretical AGI.

Comment by mocny-chlapik on Is AI Alignment a pseudoscience? · 2022-01-24T10:18:40.015Z · LW · GW

Thanks for your reply. I am aware of that, but I didn't want to reduce the discussion to particular papers. I was curious about how other people read this field as a whole and what their opinion about it is. One particular example I had in mind is the Embedded Agency post, often mentioned as good introductory material to AI Alignment. The text often mentions complex mathematical concepts, such as the halting problem, Gödel's theorems, Goodhart's law, etc., in a very abrupt fashion and uses these concepts to evoke certain ideas. But a lot is left unsaid, e.g. if Turing completeness is invoked, is there an assumption that AGI will be a deterministic state machine? Is this an assumption for the whole paper or only for that particular passage? What about other types of computation, e.g. theoretical hypercomputers? I think it would be beneficial for the field if these assumptions were stated somewhere in the writing. You need to know what the limitations of individual papers are, otherwise you don't know what kind of questions were actually covered previously. E.g. if a paper covers only Turing-computable AGI, it should be clearly stated, so others can work on other types of computation.

Comment by mocny-chlapik on Is AI Alignment a pseudoscience? · 2022-01-23T18:41:56.510Z · LW · GW

Thanks for your reply. Popper-falsifiable does not mean experiment-based in my book. Math is falsifiable -- you can present a counterexample, an error in reasoning, a paradoxical result, etc. Similarly with history, you can often falsify certain claims by providing evidence against them. But you cannot falsify a field where every definition is hand-waved and nothing is specified in detail. I agree that AI Alignment has pre-paradigmatic features as far as Kuhn goes. But Kuhn also says that pre-paradigmatic science is rarely rigorous or true, even though it might produce some results that will lead to something interesting in the future.

Comment by mocny-chlapik on Why haven't we celebrated any major achievements lately? · 2020-11-10T08:30:29.230Z · LW · GW

Is it only technical achievements that are not getting celebrated anymore? Sometimes in old books you can read that a certain celebrity was greeted by a huge crowd when they came to the USA by boat. Can you imagine crowds waiting for celebrities nowadays? Sure, you can have some fans, but certainly not crowds waiting for someone. I believe that social media are simply replacing crowd celebrations and people have no need to actually go outside to celebrate anymore. You can see the event live with great video coverage (while you usually don't see much from within a crowd) and you can also interact with all your friends (not with a bunch of random onlookers). This makes social media much more comfortable and accessible.