Posts

Transformative AI is a process 2023-06-08T08:57:36.421Z
How will they feed us 2023-06-01T08:49:51.645Z

Comments

Comment by meijer1973 on The (local) unit of intelligence is FLOPs · 2023-06-09T18:51:07.078Z · LW · GW

This emphasis on generality makes deployment of future models a lot easier. We first build a GPT-4 ecosystem. When GPT-5 comes out, it will be easy to slot in (e.g. AutoGPT can run just as easily on GPT-5 as on GPT-4). The necessary adaptations are very small, so very fast deployment of future models is to be expected.
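To make the point concrete: in an OpenAI-style chat API (as it existed in mid-2023), the swap can be a one-line change. A minimal sketch, assuming a hypothetical "gpt-5" model name becomes available:

```python
import openai

def ask(prompt: str, model: str = "gpt-4") -> str:
    """Send one prompt to a chat model and return the reply."""
    response = openai.ChatCompletion.create(
        model=model,  # the rest of the ecosystem is untouched; only this changes
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Today:  ask("Summarize this contract.", model="gpt-4")
# Later:  ask("Summarize this contract.", model="gpt-5")  # hypothetical name
```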

Comment by meijer1973 on The (local) unit of intelligence is FLOPs · 2023-06-07T19:17:28.978Z · LW · GW

Fine-tuning, whether using RL or not, is the proverbial “cherry on the cake” and the pre-trained model captures more than 99.9% of the intelligence of the model. 

 

I am still amazed by the strength of general models. People use the no-free-lunch theorem to argue that we will probably end up with specialized AIs because they will be better. Current practice seems to contradict this.

Comment by meijer1973 on Transformative AGI by 2043 is <1% likely · 2023-06-07T14:15:18.937Z · LW · GW

AI will probably displace a lot of cognitive workers in the near future, while physical labor might take a while to get below $25/hr.

  • For most tasks, human-level intelligence is not required.
  • Most highly valued jobs include many tasks that do not require high intelligence.
  • Automating 95% of all tasks could come a lot sooner (10-15 years earlier) than 100%. See autonomous driving: getting to 95% safe versus 99.9999% safe is a big difference.
  • Physical labor by robots will probably remain expensive for a long time (e.g. a robot plumber). A robot CEO will probably be cheaper in the future than the robot plumber.
  • Just take GPT-4, fine-tune it, and you can already automate a lot of cognitive labor.
  • Deployment of cognitive work automation (a software update) is much faster than deployment of physical robots.

I agree that AI might not replace swim instructors by 2030. It is in cognitive work that the big leaps will come.

Comment by meijer1973 on Algorithmic Improvement Is Probably Faster Than Scaling Now · 2023-06-07T13:57:41.316Z · LW · GW

An interesting recent development is synthetic data. This is also a sort of algorithmic improvement, because the data is generated by algorithms. For example, the "Let's Verify Step by Step" paper combines synthetic data with human labelling.

At first this seemed counterintuitive to me: the current model is used to create data for the next model, which feels like bootstrapping. But it starts to make sense now. Better prompting (like CoT or ToT) is one way to get better data; another is a second model trained to pick the best answer out of a thousand. Either can produce data good enough to improve the model.
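A minimal sketch of that best-of-N idea (`model.sample` and `scorer.score` are hypothetical stand-ins for a generator model and a learned verifier/reward model):

```python
def make_synthetic_example(model, scorer, prompt, n=1000):
    """Sample many candidate answers; keep the best as new training data.

    `model.sample` and `scorer.score` are hypothetical placeholders for a
    generator model and a verifier/reward model.
    """
    candidates = [model.sample(prompt) for _ in range(n)]
    best = max(candidates, key=lambda ans: scorer.score(prompt, ans))
    return {"prompt": prompt, "completion": best}  # feed into the next training run
```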

Demis Hassabis said in his interview with Lex Fridman that they used synthetic data when developing AlphaFold. They took outputs of AlphaFold that they had great confidence in, fed them back in as training input, and the model improved (this gives you more high-confidence data; repeat).

Comment by meijer1973 on What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5? · 2023-06-06T09:00:11.942Z · LW · GW

Specific Resources (Access to a DGX data center): Even if an AI had access to such resources, it would still need to understand how to use them effectively, which would require capabilities beyond what GPT-4 or a hypothetical GPT-5 have.

To my knowledge, resource management in data centers is already done by AIs. It is the humans who cannot do this; the AI already can.

Comment by meijer1973 on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-06T08:57:20.125Z · LW · GW

Algorithmic improvement has more FOOM potential. Hardware always has a lag. 

Comment by meijer1973 on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-06T08:55:45.483Z · LW · GW

Hanson's probability of human extinction is close to 100%; he just thinks it happens more slowly. He is optimistic about something that most would call a dystopia (a very interesting technological race that conquers the stars before the grabby aliens do). A discussion between Yudkowsky and Hanson is about whether we are dying fast or slow. It is not really a doomer vs non-doomer debate from my perspective (still a very interesting debate btw, both have good arguments).

I do appreciate the Hanson perspective. It is well thought out and coherent. I just would not call it optimistic (because of the extinction). I have no ready example of a coherent non-extinction view of the future. Does anybody have a good one?

Comment by meijer1973 on implications of NN design for education · 2023-06-06T08:43:42.619Z · LW · GW

If I understand you correctly, you mean the transfer between machine learning and human learning, which is an interesting topic.

When I learned about word2vec a few years ago, I was quite impressed. It felt a lot like how humans store information according to cognitive psychology, where a latent space or a word vector would be called a semantic representation. Semantic representations are mental representations of the meaning of words or concepts. They are thought to be stored in the brain as distributed representations: not represented by a single unit of activation, but by a pattern of activation across many units.
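You can see these distributed representations for yourself. A toy sketch with gensim (the corpus here is far too small for good neighbors; it just shows the mechanics):

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "teacher", "explains", "economics"],
    ["the", "student", "learns", "mathematics"],
    ["economics", "and", "mathematics", "are", "subjects"],
]

# Each word becomes a dense vector; its meaning is spread across all dimensions.
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=100)

print(model.wv["economics"][:5])           # a slice of the distributed representation
print(model.wv.most_similar("economics"))  # nearest neighbors in the latent space
```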

That was sort of my "oh shit, this is going to be a thing" moment. I realized there are similarities between human and machine understanding. This is a way to build a world model.

Now I can really probe the differences between GPT-4 and PaLM 2. To learn how they think, I give them the same questions as my students, and when they make mistakes I guide them like I would guide a student. It is interesting to see that, within a chat, they can learn to improve themselves with guidance.

What I find interesting is that their understanding is sometimes quite different from ours, and sometimes similar. The answers, and the responses to guidance, differ from those of students, yet are similar enough to be human-like.

Can this help us understand human learning? I think it can. Comparing human learning to machine learning makes the properties of human learning more salient (1+1=3). As an example: I studied economics and mathematics, and oftentimes it felt like I did three times the learning, because I not only learned mathematics and economics but also the similarities and differences between the two.

The above is a different perspective on your question than my previous answer. I would appreciate feedback on whether I am on the right track here. I am very interested in the topic regardless of the perspective taken, so we could also explore other perspectives.

Comment by meijer1973 on implications of NN design for education · 2023-06-05T19:27:28.632Z · LW · GW

I work in education (roughly high school/AP macroeconomics level).

possible implications:

  • upskilling: faster learning through better information, more help, AI tutoring etc.
  • deskilling: students let the AI do the work (the learning, writing, homework etc.)
  • reskilling: developing new skillsets that are relevant to today's world
  • relevance: in a world where AI does the work, what is the relevance of education?

The last is the most important, I think. What is the place of education in today's world? What should a fifteen-year-old learn to be prepared for what is coming? I don't know, because I don't know what is coming.

One thing I do know: learning from a machine is a paradox. Yes, you can learn better and faster with the help of a machine. But if the machine can teach it to you, then the machine can probably do it. And why would we want to learn things a machine can do? To learn the things a machine cannot do, we need humans. But that only works if there are things a machine cannot do.

The kid of fifteen will be 25 in ten years, and ten years is a lot. I do not know what to tell them, because I do not know. I'd love to hear more input on this.

Comment by meijer1973 on Optimization happens inside the mind, not in the world · 2023-06-05T19:11:43.901Z · LW · GW

Your model has some uncertainty, but you know the statistical distributions. For example, with probability 80% the world is in state X, with probability 20% it is in state Y.

Nice way of putting it. 

Comment by meijer1973 on Optimization happens inside the mind, not in the world · 2023-06-05T19:07:39.542Z · LW · GW
  • Mathematical definition: Optimization is the process of finding the best possible solution to a problem, given a set of constraints.
  • Practical definition: Optimization is the process of improving the performance of a system, such as by minimizing costs, maximizing profits, or improving efficiency.

In my comment I focused on the second interpretation (by focusing on iteration). The first definition does not require a perfect model of the world.

In the real world we always have limited information and compute, so the best possible solution is always an approximation. The person with the most compute and information will probably optimize faster and win.
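To make the first definition concrete with the 80%/20% example from the post (a toy sketch; the payoffs are made up):

```python
# The model says: state X with p=0.8, state Y with p=0.2.
p = {"X": 0.8, "Y": 0.2}
payoff = {  # hypothetical payoff of each action in each world state
    "action_a": {"X": 10, "Y": -5},
    "action_b": {"X": 4, "Y": 6},
}

def expected_value(action):
    return sum(p[s] * payoff[action][s] for s in p)

best = max(payoff, key=expected_value)
print(best, expected_value(best))  # action_a: 0.8*10 + 0.2*(-5) = 7.0
```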

I agree that this is a very good post and it helps me sharpen my views. 

Comment by meijer1973 on Optimization happens inside the mind, not in the world · 2023-06-04T10:31:56.285Z · LW · GW

Strong world-optimization only happens if there is a robust and strong correlation between the world-model and reality.

 

Humans and corporations do not have perfect world models. Our knowledge of the world, and therefore our world models, are very limited. Still, humans and corporations manage to optimize. Mostly this happens by trial and error (and by copying successful behaviors of others).

So I wonder if strong world-optimization could occur as an iterative process based on an imperfect model of the world. This, however, assumes interaction with the world rather than a "just in your head" process.

As a thought experiment I propose a corporation evading tax law. Over time, corporations always manage to minimize the amount of tax paid, but I don't think this is based on a perfect world model. It is an iterative process whereby people predict, try things, and learn along the way. (Another example is the scientific method: also iterative, not just in your head, involving interaction with the world.)

My claim, however, assumes that optimization does not occur just in your head: interaction with the real world is necessary for optimization. So maybe I am missing the point of your argument here.
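A toy sketch of what I mean by iterating against the world instead of solving everything in your head: the "world" below is a hidden, noisy function the optimizer can only query, never inspect.

```python
import random

def world_feedback(x):
    """Hidden from the optimizer: the real (noisy) consequence of trying x."""
    return -(x - 3.7) ** 2 + random.gauss(0, 0.1)

# Trial and error: no world model needed; propose, test, keep what works.
best_x, best_score = 0.0, world_feedback(0.0)
for _ in range(1000):
    candidate = best_x + random.gauss(0, 0.5)  # small variation on the current best
    score = world_feedback(candidate)          # interaction with the world
    if score > best_score:                     # keep successful behavior
        best_x, best_score = candidate, score

print(best_x)  # ends up near 3.7 without ever knowing the objective's form
```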

Comment by meijer1973 on What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5? · 2023-06-04T10:12:09.742Z · LW · GW

People are finding ways to push the boundaries of GPT-4's capabilities and are quite successful at that (in reasoning, agency etc.). These algorithmic improvements will probably also work on GPT-5.

A lot of infrastructure built for GPT-4 will also work with GPT-5 (like plug-ins). We do not need to build new plug-ins for GPT-5; we just swap the underlying foundation model (greatly increasing the adoption of GPT-5 compared to GPT-4).

This also works for agency shells like AutoGPT. AutoGPT is independent of the foundation model (it works with GPT-3.5 and GPT-4, and will work with GPT-5). By the time GPT-5 is released these agency shells will be greatly improved, and we just swap out the underlying engine to get a lot more oomph.

Same for memory models like vector databases. 
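A minimal sketch of such a vector memory (cosine similarity over embeddings; `embed` is a hypothetical stand-in for any embedding model). Like the agency shells above, nothing in it depends on which foundational model sits underneath:

```python
import numpy as np

class VectorMemory:
    """Toy vector store: remember texts, recall the most similar ones."""

    def __init__(self, embed):
        self.embed = embed  # hypothetical function: text -> np.ndarray
        self.texts, self.vecs = [], []

    def remember(self, text):
        self.texts.append(text)
        self.vecs.append(self.embed(text))

    def recall(self, query, k=3):
        q = self.embed(query)
        sims = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
                for v in self.vecs]
        top = np.argsort(sims)[::-1][:k]  # indices of the k most similar memories
        return [self.texts[i] for i in top]
```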

I think the infrastructure part will make a big difference. A year from now we will have a lot of applications, use cases, experience, better prompts etc. That could make the impact and deployment speed of GPT-5 (or Gemini) a lot bigger and faster than GPT-4's.

Comment by meijer1973 on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-04T09:59:20.419Z · LW · GW

Here is a summary of the Hanson position (by Hanson himself). He is very clear about humanity being replaced by AI.

https://www.overcomingbias.com/p/to-imagine-ai-imagine-no-ai

Comment by meijer1973 on Full Automation is Unlikely and Unnecessary for Explosive Growth · 2023-06-02T12:15:18.800Z · LW · GW

I like your motivation; robotics can bring a lot of good. It is good to work on automating the boring and dangerous work.

I see this as a broken promise. For a long time the message was that we would automate the boring and dangerous work. But now we are automating valuable jobs like STEM, journalism, art etc. These are the jobs that give meaning to life, and they provide positive externalities. I like to talk to different kinds of people, so I value meeting the critical journalist, the creative artist, the passionate teacher etc.

E.g. we need a fraction of the population to be journalists so the population as a whole can boost its critical thinking; the same goes for STEM, art etc. (these people generate positive externalities). Humanity is a superintelligence, but it needs variety in the parts that make up the whole.

Comment by meijer1973 on Full Automation is Unlikely and Unnecessary for Explosive Growth · 2023-06-02T12:04:55.271Z · LW · GW

Thanks for the post. I would like to add that I see a difference in the automation speed of cognitive work and physical work. In physical work, productivity growth is rather constant. With cognitive work there is a sudden jump from few use cases to many (like a sigmoid). Physical labour also has speed limits, and costs, generality and deployment differ as well.

It is very difficult to create a useful AI for legal or programming work. But once you are over the threshold (as we are now), there are many use cases and productivity growth is very fast. Robotics in car manufacturing took a long time and progressed steadily. A few years ago the first real applications of legal AI emerged, and now we have a computer that can pass the bar exam. This time frame is much shorter.

The other difference is speed. A robot building a car is limited in physical speed. Compare this to a legal AI summarizing legal texts (a 1000x+ increase in speed). AI doing cognitive work is crazy fast, and has the potential to become increasingly faster with more and cheaper compute.

The cost is also different. The marginal costs for robots are higher than for a legal AI. Robots will always be rather narrow and expensive (a Roomba is about as expensive as a laptop). Building the first robo-lawyer will be very expensive, but after that, copying it is very cheap (low marginal costs). Once you are over the threshold, the cost of deployment is very low.

The generality of AI knowledge workers is somewhat of a surprise. It was thought that specialized AIs would be better, cheaper etc. Maybe a legal AI will be a somewhat fine-tuned GPT-4, but that model would still be a decent programmer and accountant. A more general AI is much easier to deploy, and there might be use cases for a lawyer-programmer-accountant we have not thought of yet.

Deployment speed is faster for cognitive work, and this has implications for growth. When the next GPT generation is introduced, all deployed models are easily replaced by the better and faster one. When you invent a better robot to manufacture cars, it takes decades before it is implemented in every factory. But changing the base model of your legal AI from GPT-4 to GPT-5 might be just a software update.

In summary there are differences for automating cognitive work with regard to:

  • growth path (sigmoid instead of linear)
  • speed of executing work
  • cost (low marginal cost)
  • generality (the robo lawyer, programmer, accountant)
  • deployment speed (just a software update)

Are there more differences that affect speed? Am I being too bullish?

Comment by meijer1973 on What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5? · 2023-06-02T09:25:35.285Z · LW · GW

I recently became aware of the progress made in synthetic data and other algorithmic improvements. We have not pushed GPT-4 to the max yet.

e.g. this paper https://arxiv.org/abs/2305.20050

It details how rewarding each step of step-by-step reasoning, as opposed to only rewarding the end result, can give significant improvements. And there is much more.
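Schematically, the difference between outcome and process supervision looks something like this (a sketch of the idea, not the paper's actual code; `reward_model` is a hypothetical per-step scorer):

```python
def outcome_reward(solution):
    """Outcome supervision: one signal for the whole chain of reasoning."""
    return 1.0 if solution.final_answer_is_correct else 0.0

def process_rewards(solution, reward_model):
    """Process supervision: score every intermediate step separately."""
    return [reward_model.score_step(step) for step in solution.steps]

# Per-step rewards tell the model *where* the reasoning went wrong,
# not just *that* it went wrong, which is why it can help so much.
```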

Comment by meijer1973 on [deleted post] 2023-06-02T09:06:09.726Z

Agreed, one of the objectives of a game is to not die during the game. This is also true for potentially fatal experiments like inventing AGI: you have one or only a few shots to get it right. To win, you have to stay in the game.

Comment by meijer1973 on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-02T09:00:09.559Z · LW · GW

Note that Hanson currently thinks the chances of AI doom are < 1%, while Yudkowsky thinks that they are > 99%.

 

It is good to note that the optimistic version of Hanson would be considered doom by many (including Yudkowsky). Yudkowsky's definition of doom/utopia is not the same as Hanson's.

This is important in many discussions. Many non-doomers have definitions of utopia that many others would consider dystopian. E.g. AI replaces humans and creates a very interesting future in which AIs conquer the stars; some think this is positive, others think it is doom because there are no humans left.

Comment by meijer1973 on How will they feed us · 2023-06-01T20:24:31.076Z · LW · GW

Thanks for the addition. Vertical and indoor farming should improve on the current fragility of the agricultural industry (thus adding robustness). Feeding 8 billion people will still cost a lot of resources.

Mining, however, is different in that mining costs will keep increasing due to decreasing ore quality and ores being mined in places that are harder to reach. This effect can only be offset by technological progress for a limited time (unless we go to the stars). Vast improvements in recycling could be a solution, but that requires a lot of energy.

Solving the energy problem via fusion would really help a lot with the more utopian scenarios.

Comment by meijer1973 on What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5? · 2023-06-01T10:48:57.208Z · LW · GW

Agency is advancing pretty fast. It is hard to tell how hard this problem is, but there is a lot of overhang: we are not seeing GPT-4 at its maximum potential.

Comment by meijer1973 on What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5? · 2023-06-01T08:59:05.112Z · LW · GW

Agreed, human-in-the-loop systems are very valuable and probably temporary. HITL systems provide valuable training data, enabling the next step. AI alone is indeed much faster and cheaper.

Comment by meijer1973 on Humans, chimpanzees and other animals · 2023-05-31T12:43:40.040Z · LW · GW

One difference, I suspect, could be generality over specialization in the cognitive domain. It is assumed that specialization is better, but this might only be true for the physical domain. In the cognitive domain, general reasoning skills might be more important. E.g. the specialized knowledge of a lawyer might look small from the perspective of an ASI.

Comment by meijer1973 on Humans, chimpanzees and other animals · 2023-05-31T12:06:52.946Z · LW · GW

Good point. As I understood it, humans have an OOM more parameters than chimps. But chimps also have an OOM over dogs. So not all OOMs are created equal (so I agree with your point).

I am very curious about the qualitative differences between humans and superintelligence. Are there qualitative emergent capabilities above human-level intelligence that we cannot imagine or predict at the moment?

Comment by meijer1973 on Request: stop advancing AI capabilities · 2023-05-26T20:00:46.319Z · LW · GW

According to DeepMind, we should aim at that little spot at the top (I added the yellow arrow). This spot is still dangerous, btw. Seems tricky to me.

Image from the DeepMind paper on extreme risk (https://arxiv.org/pdf/2305.15324.pdf).

 

Comment by meijer1973 on What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5? · 2023-05-26T11:07:00.101Z · LW · GW

The biggest issue, I think, is agency. In 2024 large improvements will be made to memory (a lot is happening in this regard). I agree that GPT-4 already has a lot of capability; especially with fine-tuning, it should do well on a lot of the individual tasks relevant to AI development.

But the executive function will probably still be lacking in 2024. Combining the tasks into a whole job will be challenging. Improving data is agency-intensive rather than intelligence-intensive: you need to contact organizations, scrape the web, sift through the data etc. The AI would also need to order the training run, get the compute for inference time, pay the bills etc. These require more agency than intelligence.

However, humans can help with the planning etc. And GPT-5 will probably boost productivity of AI developers. 

Note: depending on your definition of intelligence, agency or executive function would/should be part of intelligence.

Comment by meijer1973 on Do humans still provide value in correspondence chess? · 2023-05-26T10:32:44.018Z · LW · GW

True, it depends on the ratio of mundane to high-stakes decisions, although there are high-stakes decisions that are also time-dependent. See the example about high-frequency trading (no human in the loop, and the algorithm makes trades in the millions).

Furthermore, your conclusion that time-independent high-stakes decisions will be the tasks where humans provide the most value seems true to me. AI will easily be superior when there are time constraints; absent such constraints, humans have a better chance of competing with AI. And economic strategic decisions are oftentimes not extremely time-constrained (there are at least a couple of hours or days of slack).

In economic situations the number of high-stakes decisions will be limited (only a few people make decisions about large sums of money and strategy). In a multinational with 100,000 employees, only very few take high-stakes decisions. But these decisions might have a significant impact on competitiveness; thus a multinational with a human CEO might outcompete a full-AI company.

In a strategic situation, time might give more of an advantage (I am an economist, not a military expert, so I am really guessing here). My guess would be that a drone without a human in the loop could have a significant advantage (thus pressure might rise to push for high-stakes decision-making by drones, with human lives at stake).

Comment by meijer1973 on Do humans still provide value in correspondence chess? · 2023-05-24T08:02:40.265Z · LW · GW

Time should also be a factor when comparing the strength of an AI alone to that of an AI-human team. Humans might add value in correspondence chess, but it costs them a significant amount of time; human-AI teams are very slow compared to AI alone.

For example, in low-latency algorithmic stock trading, reaction times are below 10ms, while human reaction time is about 250ms. A human-AI team of stock traders would have a minimum reaction time of 250ms (if the human immediately agrees when the AI suggests a trade). This is way too slow and means a serious competitive disadvantage.
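The back-of-the-envelope math (illustrative numbers only):

```python
ai_latency_ms = 10      # algorithmic trader, upper bound from above
human_in_loop_ms = 250  # best case: human approves instantly on reflex

slowdown = human_in_loop_ms / ai_latency_ms
print(f"human-in-the-loop is at least {slowdown:.0f}x slower")  # 25x
```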

Take this to a strategically aware AI compared to a human working with a strategically aware AI, and suppose the human can improve the strategic decision if given enough time. The AI alone would still be at least 100x faster than the AI-human team, a serious advantage for the AI alone.

For the more mundane human-in-the-loop applications, speed and cost will probably be the deciding factors. If chess were a job, then most of the time a Magnus Carlsen-level move in a few seconds for a few cents would be sufficient. In rare cases (e.g. cutting-edge science) it might be valuable to go for the absolute best decision at a higher cost in time and money.

So my guess is that human-in-the-loop solutions will be a short phase in the coming transition. The human-in-the-loop phase will provide valuable data for the AI, but soon monetary and time costs will push processes towards an AI-alone setup instead of humans in the loop.

Even if AI-human teams are better in correspondence chess, this probably does not transfer to many real-world applications.