Posts

Is there any metric measuring ~"proportion of people creating extra value"? 2023-08-03T22:54:10.028Z
[Linkpost] Scaling Laws for Generative Mixed-Modal Language Models 2023-01-12T14:24:00.921Z
Is ChatGPT TAI? 2022-12-30T19:44:50.508Z
Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda? 2022-11-15T22:50:22.968Z
What will the scaled up GATO look like? (Updated with questions) 2022-10-25T12:44:39.184Z

Comments

Comment by Amal (asta-vista) on Announcing Dialogues · 2023-10-08T20:48:35.077Z · LW · GW

Sure - I'm actually not suggesting that it should necessarily be a feature of dialogues on LW; it was just a suggestion for a different format (my comment got almost opposite karma/agreement votes, so maybe that's the reason?). It also depends on how often you'd use the branching - my guess is that most conversations don't need it at every point, but a few branches over a whole conversation might be useful.

Comment by Amal (asta-vista) on Announcing Dialogues · 2023-10-08T20:43:00.481Z · LW · GW

Yeah, definitely - there could be an option for quoting/linking answers from other branches. I haven't seen any UI that supports something like that, but my guess is that it wouldn't be too difficult to build one. My thinking was that there would be one main branch and several smaller branches connecting to it, so that some points can be discussed in greater depth. Also, the branching probably shouldn't happen every time, just when both participants occasionally agree on it.

Comment by Amal (asta-vista) on Announcing Dialogues · 2023-10-07T12:08:45.200Z · LW · GW

It seems to me that these types of conversations would benefit from being trees rather than chains. When two people disagree or hold different points of view, there is usually some root cause of the disagreement. When the conversation is a chain, one person explains her arguments and makes several points, the other has to expand on each, and at some point, to avoid massively long comments, the participants have to paraphrase, summarise, or ignore some of the arguments to stay concise. An option to split the conversation into several branches at some point could make the comments shorter, easier to read, and able to go deeper. It would also be easier for a reader not participating in the conversation to follow it and get to the main point influencing their view.

I'm not sure if something like this was done before and it would obviously require a lot more work on the UI, but I just wanted to share the idea as it might be worth considering.

Comment by Amal (asta-vista) on Expectations for Gemini: hopefully not a big deal · 2023-10-02T19:06:38.501Z · LW · GW

My guess is that it will be a scaled-up Gato - https://www.lesswrong.com/posts/7kBah8YQXfx6yfpuT/what-will-the-scaled-up-gato-look-like-updated-with. I think there might be some interesting features once the models are fully multimodal - e.g. being able to play games, perform simple actions on a computer, etc. Based on the announcement from Google I would expect full multimodal training - image, audio, video, and text in/out. Based on DeepMind's hiring needs I would expect they also want it to generate audio/video and to extend the model to robotics (the brain of something similar to a Tesla Bot) in the near future. Elon claims that training just on video input/output can yield full self-driving, so I'm very curious what training on YouTube videos can achieve. If they've managed to make solid progress in long-term planning/reasoning and can deploy the model with sufficiently low latency, it might be quite a significant release that could simplify many office jobs.

Comment by Amal (asta-vista) on Is there any metric measuring ~"proportion of people creating extra value"? · 2023-08-04T20:31:29.464Z · LW · GW

Sure, I agree that writing is a tough gig and that the distribution of how much each piece gets read is Pareto-shaped; still, the writers contribute to the chance of improving the top writings that are read the most.

I think I'm much less interested in how deeply people benefit, and more in how many of them can potentially benefit and whether this scales roughly with effort. Professions where spending X effort serves Y people, and serving 2Y people requires spending 2X effort (chef/teacher/hairdresser...), don't fall into the same category as writing.

Maybe a better way of thinking about it is as follows: if 1000 new people with the usual distribution of skills/professions are added to the population, what portion of their work would contribute to the rest of the already existing population, versus work needed just for them to self-sustain and cater to themselves? (Obviously with substitutions - e.g. if someone from the "new" people cooks food for the "old" ones while someone from the "old" group has to cook for someone from the "new", that does not count as contributing.)

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2023-07-21T12:17:38.246Z · LW · GW

Some of my updates:
At least one version with several trillion parameters, and a context window at least 100k tokens long (with embeddings etc., seemingly 1 million). Otherwise, I am quite surprised that I mostly still agree with my predictions regarding multimodal/RL capabilities. I think robotics could still face some latency challenges, but there should nevertheless be significant progress in tasks not requiring fast reactions - e.g. picking things up, cleaning a room, etc. Things like SuperAGI might become practically useful, and controlling a computer with text/voice would seem easy.

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2023-07-21T12:04:56.590Z · LW · GW

I believe we can now say with a high level of confidence that the scaled-up GATO will be Google's Gemini model, to be released in the next few months. Does anyone want to add/update their predictions?

Comment by Amal (asta-vista) on [Linkpost] Scaling Laws for Generative Mixed-Modal Language Models · 2023-01-12T14:34:06.594Z · LW · GW

It is fixed now, thanks!

Comment by Amal (asta-vista) on 2022 was the year AGI arrived (Just don't call it that) · 2023-01-04T18:48:09.493Z · LW · GW

It could be sparse... a GPT-4 with 175B parameters and 90 percent sparsity could be essentially equivalent to a 1.75T-parameter GPT-3. Also, I am not exactly sure, but my guess is that if it is multimodal, the scaling laws change (essentially you get more varied data, instead of always training on text prediction, where the text is repetitive and likely only a small percentage contains new useful information to learn).
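A minimal sketch of that back-of-the-envelope arithmetic (my own illustration, assuming "sparsity" means the fraction of zero weights):

```python
# At 90% sparsity only 10% of weights are nonzero, so for the same number
# of active parameters a sparse model can carry 10x the total parameter
# count of a dense one.

dense_active_params = 175e9   # GPT-3-sized count of active (nonzero) weights
sparsity = 0.90               # assumed fraction of zero weights

sparse_total_params = dense_active_params / (1 - sparsity)
print(f"{sparse_total_params:.2e}")  # 1.75e+12, i.e. ~1.75T total parameters
```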

Comment by Amal (asta-vista) on Open & Welcome Thread - November 2022 · 2022-11-28T14:52:45.037Z · LW · GW

Stupid beginner question: I noticed that, while interesting, many of the posts here are very long and go deep into the topic, often without a tl;dr. I'm just curious - how do the writers/readers find time for it? Are they paid? If someone lazy like me wants to participate, is there a more Twitter-like version of LessWrong?

Comment by Amal (asta-vista) on Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda? · 2022-11-17T15:33:02.513Z · LW · GW

My understanding is that they fully separate computation from memory storage. So while traditional architectures need some kind of cache to store large amounts of data for model partitions, of which only a small portion is used for computation at any single point in time, the CS-2 only requests what it needs, so the bandwidth doesn't have to be as big.
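A toy sketch of the weight-streaming idea as I understand it (my own plain-numpy illustration, not Cerebras' actual API): the weights live off-device and are fetched one layer at a time, so the device never has to cache a model partition it isn't currently computing with.

```python
import numpy as np

def forward_weight_streaming(host_weights, x):
    """Forward pass that only ever holds one layer's weights 'on-device'."""
    for W in host_weights:              # all weights are stored externally
        W_dev = np.array(W)             # stand-in for host -> device streaming
        x = np.maximum(W_dev @ x, 0.0)  # compute with just this layer resident
        del W_dev                       # free "device" memory before the next layer
    return x

host_weights = [np.random.randn(64, 64) for _ in range(4)]
print(forward_weight_streaming(host_weights, np.random.randn(64)).shape)  # (64,)
```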

Comment by Amal (asta-vista) on Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda? · 2022-11-16T10:21:25.313Z · LW · GW

I am certainly not an expert, but I am still not sure about your claim that it's only good for running small models. The main advantage they claim to have is "storing all model weights externally and stream them onto each node in the cluster without suffering the traditional penalty associated with off chip memory. weight streaming enables the training of models two orders of magnitude larger than the current state-of-the-art, with a simple scaling model." (https://www.cerebras.net/product-cluster/ , weight streaming). So they explicitly claim that it should perform well with large models.
 

Furthermore, in their white paper (https://f.hubspotusercontent30.net/hubfs/8968533/Virtual%20Booth%20Docs/CS%20Weight%20Streaming%20White%20Paper%20111521.pdf), they claim that the CS-2 architecture is much better suited for sparse models (e.g. those motivated by the Lottery Ticket Hypothesis), and on page 16 they show that a sparse GPT-3 could be trained in 2-5 days.

This would also align with tweets from OpenAI that "trillion is the new billion", and with rumors about the new GPT-4 being a similarly big jump as GPT-2 -> GPT-3 was - a colossal number of parameters and a sparse paradigm (https://thealgorithmicbridge.substack.com/p/gpt-4-rumors-from-silicon-valley). I could imagine that sparse parameters deliver much stronger results than dense ones, and this might change the scaling laws a bit.

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2022-11-13T14:04:29.874Z · LW · GW

Oh, and besides IQ tests, I predict it would also be able to pass most current CAPTCHA-like tests (though humans would still be better at some).

Comment by Amal (asta-vista) on Has anyone increased their AGI timelines? · 2022-11-07T12:52:29.766Z · LW · GW

What are your reasons for thinking AGI is so far away?

Comment by Amal (asta-vista) on Has anyone increased their AGI timelines? · 2022-11-06T14:02:02.199Z · LW · GW

Nah... I still believe that the future AGI will invent a time machine and then invent itself before 2022.

Comment by Amal (asta-vista) on Should I Pursue a PhD? · 2022-11-06T13:57:47.447Z · LW · GW

Why do you think TAI is decades away?

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2022-11-05T18:59:37.752Z · LW · GW

I should also make a prediction for the nearer version of GATO, to actually answer the questions from the post. So if a new version of GATO appears in the next 4 months, I predict:

80% confidence interval: Gato will have 50B-200B params. The context window will be 2-4x larger (similar to GPT-3).

50%: No major algorithmic improvements, RL advances, or memory. Maybe use of a Perceiver. Likely some new tokenizers. The improvements would come more from new data and scale.

80%: More text, images, video, audio. More games and new kinds of data, e.g. special prompting to do something in a game, draw a picture, perform some action.

75%: Visible transfer learning. Gato trained on more tasks and pre-trained on video would perform better in most but not all games, compared to a model of similar size trained just on the particular task. The language model would be able to describe the shapes of objects better after being trained together with images/video/audio.

70%: Chain-of-thought reasoning would perform better compared to an LLM of similar size. The improvement won't be huge, though, and I wouldn't expect it to gain any surprisingly sophisticated new LLM capabilities.

80%: It won't be able to play new Atari games at a human level, but there would be visible progress - the actions would be less random and more directed towards the goal of the game. With sophisticated prompting, e.g. "Describe first what the goal of this game is, how to play it, and what the best strategy is", significant improvements would be seen, but still sub-human.

Comment by Amal (asta-vista) on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-05T17:15:29.959Z · LW · GW

Isn't the risk coming from insufficient AGI alignment relatively small compared to the vulnerable world hypothesis? I would expect that even without the invention of AGI, or with aligned AGI, it would still be possible for us to use some more advanced AI techniques as research assistants that help us invent some kind of smaller/cheaper/easier-to-use atomic bomb that would destroy the world anyway. Essentially the question is: why so much focus on AGI alignment instead of a general slowing down of technological progress?

This seems quite underexplored to me. The fact that it is hard to slow down progress doesn't mean it isn't necessary, or that this option shouldn't be researched more.

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2022-11-05T12:32:45.929Z · LW · GW

I see. I will update the post with some questions. I find it quite difficult, though, to forecast how the performance metrics would improve in percentage terms, compared to just predicting capabilities, as the datasets are probably not that well known.

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2022-11-05T00:18:09.452Z · LW · GW

Ok, I was thinking about this a bit and finally got some time to write it down. I realized that it is quite hard to make predictions about the first version of GATO, as it depends on what the team prioritizes in development. Therefore I'll try to predict some attributes/features of a GATO-like model that should be available in the next two years, while expecting that many will appear sooner - it is just difficult to say which ones. I'm not a professional ML researcher, so I might get some factual things wrong, and I would be happy to hear from people with more insight who can correct me.

First, the prediction regarding the size of the model: I would expect a GATO-like architecture to have greater commercial success/usefulness than e.g. GPT-3, so the investment should also be higher. Furthermore, I would guess there will be several significant improvements to training infrastructure, e.g. from companies such as Cerebras and Graphcore. Therefore I estimate the model will use somewhere between 10x and 100x more compute than GPT-3. This might result in the model having more parameters, a larger context window, or most likely both. I predict the most likely context window size to be ~40,000 tokens (10x more) and the parameter count to be ~1T (6x compared to GPT-3). Regarding the context window - I think there will be some algorithmic improvements, so it won't work the same as before (see below).

Since GATO is multimodal, I would expect the scaling laws to change a bit due to transfer learning. E.g. since the model won't need to extract all information about the shapes of objects from text, but can instead just look at the images, it should be much easier for it to answer questions such as "Can scissors be inserted into a glass bottle?", and it should require a significantly smaller amount of data. The scaling laws would thus also need to be multi-dimensional, answering what the optimal ratio of audio/text/image/video is to achieve the best results. For example, to improve the language-model part of GATO, we may counterintuitively need to train on more images instead of text.
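Purely as an illustration of what "multi-dimensional" could mean here (my own notation, not taken from any paper), the loss could depend on the parameter count N and the per-modality data amounts D_m, with cross-modal transfer terms:

L(N, D_1, ..., D_M) ≈ E + A/N^α + Σ_m B_m / (D_m + Σ_{m'≠m} T_{m'→m} · D_{m'})^{β_m}

where T_{m'→m} would measure how much data in modality m' substitutes for data in modality m; a positive image→text transfer term would be exactly the "train on more images to improve the language model" effect.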

I predict that GATO will be trained on text, images, audio and video. I believe they will also try image/audio/video generation, which they didn't do in the current version, essentially by just predicting the next image/video token instead of using diffusion models. The context window size seems too small for video; however, I believe there are two reasons why this won't be a huge problem. First, by using something like a Perceiver, where more processing happens on recently seen tokens and only a small amount of computation is spent on older, far-away tokens, the context window could be increased significantly (or some other kind of memory could be added).

Second, I don't think the model needs to see the whole image/video. When humans look at something, only a very small part of the image is sharp and the rest is blurry. Similarly, I think Gato will get image information from just a small number of tokens that describe a small rectangle of the picture/video sharply, plus a small number of tokens describing the blurred rest. There would further be action tokens describing the "eye movement" as the focus shifts to a different part of the image. In this way I think GATO will be able to watch/generate videos or read/write long books.
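A toy sketch of how such foveated tokenization could look (entirely my own construction, not anything from the Gato paper): one full-resolution crop around the focus point plus a heavily subsampled version of the whole frame, so each "glimpse" costs far fewer tokens than the full image.

```python
import numpy as np

def foveate(image, cx, cy, fovea=16, subsample=8):
    """Split an HxW image into a sharp central patch and a blurry context."""
    h, w = image.shape
    x0 = min(max(cx - fovea // 2, 0), w - fovea)  # clamp the crop inside the image
    y0 = min(max(cy - fovea // 2, 0), h - fovea)
    sharp = image[y0:y0 + fovea, x0:x0 + fovea]   # full resolution at the focus
    blurry = image[::subsample, ::subsample]      # crude stand-in for a blur
    return sharp, blurry

img = np.random.rand(128, 128)
sharp, blurry = foveate(img, cx=64, cy=64)
# 2 * 16*16 = 512 values per glimpse vs 128*128 = 16384 for the full frame
print(sharp.shape, blurry.shape)  # (16, 16) (16, 16)
```

An "eye movement" action token would then simply pick the next (cx, cy) to attend to.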

Furthermore, I think that in general, RL could be used to make the model "think slow" when predicting tokens. For example, instead of the task of predicting the next token, GATO could be trained on an RL task: "find a set of actions by which you determine what the next token is". So to predict the next image token, next word, or next action in a game, it would first look around the image/video/book and only emit the right token after collecting the relevant information. Possibly it could also emit tokens summarizing the information it has seen so far from a large amount of data that it might need in the future. Of course it would probably still be trained on Atari games (likely now with actual RL) or in some coding environment with predefined inputs/outputs, but I think these will be much less significant compared to the "information-finding RL". Maybe a smaller feature would be that GATO could emit commands modifying its context window, deleting/adding tokens to it.
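A toy sketch of that "information-finding RL" framing (entirely my own construction): the agent may take look actions to reveal parts of a hidden sequence before committing to a prediction, and is rewarded only for the final emitted token.

```python
import random

def run_episode(sequence, policy, max_looks=3):
    """Agent takes ("look", i) actions to reveal tokens, then ("predict", t)."""
    revealed = {}
    for _ in range(max_looks + 1):
        action = policy(revealed)
        if action[0] == "look":
            i = action[1]
            revealed[i] = sequence[i]            # gather information first...
        else:
            target = sequence[-1]                # ...then emit the final token
            return 1.0 if action[1] == target else 0.0
    return 0.0                                   # ran out of looks without predicting

def peek_policy(revealed):
    # Trivial policy: look at the answer position once, then predict it.
    if -1 not in revealed:
        return ("look", -1)
    return ("predict", revealed[-1])

seq = [random.randint(0, 9) for _ in range(10)]
print(run_episode(seq, peek_policy))  # 1.0 - reward for the correct token
```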

So some capabilities I would predict in 2 years: Generation of images, video, audio, and text, even quite long ones - e.g. the size of a book, a 5-minute-long video, etc. Instead of "Let's think step by step" we would have "Let's sketch the solution", to draw diagrams etc. Gato would be able to reasonably operate a computer using text commands - e.g. search the internet, use Paint, an IDE/debugger, and so on. It will be much better at coding by combining various prompting methods, and would perform in the top 10% of competitive programmers (compared to AlphaCode being at the 50th percentile). Solve some IMO problems (probably wouldn't get a gold medal, but maybe bronze by ignoring combinatorics). Act like a smart research assistant - e.g. finding relevant papers, pointing out their strengths/weaknesses, suggesting improvements/ideas. Learn to play entirely new (Atari) games about as fast as humans - this would probably, however, require RL instead of just being a prediction model. Complete an IQ test and get an above-average result.

Capabilities I don't expect: generating novel jokes; outperforming the best humans at long-term planning, research, math, or coding. Image/video generation wouldn't match reality. Similarly, AI-generated books wouldn't sell well. Empathy - it wouldn't make a very good friend. It will be slow and expensive, so real-time robotics will probably still be a challenge. Also, it won't be a very reliable doctor, despite being connected to the internet.

Comment by Amal (asta-vista) on What will the scaled up GATO look like? (Updated with questions) · 2022-10-26T16:37:20.026Z · LW · GW

This has generated much less engagement than I thought it would... what am I doing wrong?

Comment by Amal (asta-vista) on AGI in our lifetimes is wishful thinking · 2022-10-24T18:09:48.360Z · LW · GW

Thanks for this post! I think it is always great when people share their opinions about timelines, and more people (even those not directly involved in ML) should be encouraged to express their views freely, without the fear of being held accountable in case they are wrong. In my opinion, even people directly involved in ML research seem too reluctant to share their timelines and how these impact their work, which might be useful for others. Essentially, I think people should share their view whenever it is going to somehow influence their decision-making, rather than only when they feel it crosses some level of rigour/certainty; therefore posts like this one should receive a bit more praise (and LW should have two types of voting for posts too, not just comments).

While I disagree with the overall point of the post, I agree that there is probably a lot of wishful thinking/curiosity driving this forum and impacting some predictions. However, even despite this, I still think that AGI is very close. My prediction is that TAI will happen in the next 2-5 years (70%) and AGI in the next 8 (75%). I guess it will be based on something like a scaled-up GATO pre-trained on YouTube videos, with RL and some memory. The main reason for this is that deep learning was operating on a very small scale just two years ago (less than a billion parameters), which made it very difficult to test some ideas. The algorithmic improvements seem to me just too easy to come up with. For example, almost all important problems - e.g. language, vision, audio, RL - were solved/almost solved in a very short time, and the ideas there didn't require much ingenuity.

Just a slight exaggeration - if you take a five-year-old and ask him to draw a random diagram, chances are quite high that, if scaled up, it is a SOTA architecture for something. It is just hard to test the ideas, because of the engineering difficulty and the lack of compute. However, this is likely to be overcome soon, with either more money being thrown at the problem or hardware improvements - e.g. Cerebras and Graphcore seem to be doing some promising work here.

Comment by Amal (asta-vista) on DeepMind is hiring for the Scalable Alignment and Alignment Teams · 2022-06-01T15:53:38.387Z · LW · GW

Hi Rohin, how long does it usually take to hear back if selected for the next stage? I applied two weeks ago but haven't received any further mail yet, so I was just curious whether I still have a chance or was not selected.