Posts

How does bee learning compare with machine learning? 2021-03-04T01:59:06.067Z

Comments

Comment by guicosta on How does bee learning compare with machine learning? · 2021-03-05T19:19:53.310Z · LW · GW

The current SOTA models do very well (~90% accuracy) at few-shot learning tasks on the CIFAR-FS dataset [source], which has a resolution comparable to the images seen by bees, so I think this task is quite solvable. Both the bees and the models I discussed perform well above chance.
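As a toy illustration of what a few-shot evaluation episode involves (not the actual SOTA method, and with random features standing in for a learned embedding network), here is a sketch of a nearest-centroid classifier on a single 5-way 1-shot episode:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_episode(support, support_labels, queries, n_way):
    """Classify query embeddings by distance to per-class support centroids.

    support: (n_way * k_shot, d) embeddings of the labeled support examples
    queries: (n_query, d) embeddings to classify
    """
    centroids = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_way)
    ])
    # Euclidean distance from each query to each class centroid
    dists = np.linalg.norm(queries[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 5-way 1-shot episode: random class centers stand in for real image embeddings
n_way, d = 5, 16
class_centers = rng.normal(size=(n_way, d))
support = class_centers + 0.1 * rng.normal(size=(n_way, d))
support_labels = np.arange(n_way)
queries = class_centers + 0.1 * rng.normal(size=(n_way, d))

preds = nearest_centroid_episode(support, support_labels, queries, n_way)
accuracy = (preds == np.arange(n_way)).mean()
print(accuracy)
```

Reported few-shot accuracies are averages over many such randomly sampled episodes; the hard part, which this sketch omits, is learning an embedding in which new classes separate this cleanly.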

Interesting to learn that compute figures can be brought down so much without accuracy loss! Could you point me to some reading material about this?

Comment by guicosta on How does bee learning compare with machine learning? · 2021-03-05T19:03:00.808Z · LW · GW

I think Rohin's second point makes sense. Bees are actually pretty good at classifying abstract shapes (I mention a couple of studies on this in the appendix about my choice of benchmark, such as Giurfa (1996)), so they might plausibly generalize to stylized images.

Comment by guicosta on How does bee learning compare with machine learning? · 2021-03-04T14:52:36.880Z · LW · GW

Hey Ben! Thanks for formatting the doc into the post, it looks great!

Comment by guicosta on Draft report on AI timelines · 2020-11-04T23:45:39.454Z · LW · GW

Hey everyone! I’m Guilhermo. I’m doing an AI timelines internship with Open Phil and am going to investigate that topic over the next few months.

It seems plausible to a lot of people that simply scaling up current ML architectures with more data and compute could lead to transformative AI. In particular, the recent successes of GPT-3 and the impressive scaling behavior it exhibits suggest that a scaled-up language model could have a transformative impact. This hypothesis can be modeled within the framework of Ajeya’s report by assuming that a transformative model would have the same effective horizon length as GPT-3 and that its training compute will follow the same scaling laws as current Transformer models.

I’ve added an anchor corresponding to this view in an updated version of the quantitative model, which can be found (together with the old one) here; the filenames corresponding to the updated model begin with “(GPT-N)”. Please note that Ajeya’s best-guess sheet hasn’t been updated to take this new anchor into account. A couple of minor numerical inconsistencies between the report and the quantitative model were also fixed.
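To make the shape of that kind of anchor concrete, here is a toy sketch of extrapolating training compute for scaled-up GPT-style models under a Kaplan-style power law. All constants below are my own illustrative assumptions (GPT-3-like reference values and the D ∝ N^0.74 data-scaling exponent from Kaplan et al. 2020), not figures from Ajeya’s report or the linked spreadsheet:

```python
# Toy extrapolation of training compute for scaled-up GPT-style models.
# Constants are illustrative assumptions, not values from the report
# or the quantitative model.

def training_flop(n_params, tokens_per_param_exponent=0.74,
                  ref_params=175e9, ref_tokens=300e9):
    """Estimate training FLOP as ~6 * N * D, where dataset size D scales
    as a power law in parameter count N, anchored at rough GPT-3-like
    values (175B parameters, 300B training tokens)."""
    tokens = ref_tokens * (n_params / ref_params) ** tokens_per_param_exponent
    return 6 * n_params * tokens

for scale in (1, 10, 100):
    n = scale * 175e9
    print(f"{n:.0e} params -> ~{training_flop(n):.1e} FLOP")
```

At the reference point this reproduces the commonly cited ~3e23 FLOP for GPT-3; larger scales then follow mechanically from the assumed exponent, which is exactly the kind of knob the anchor framework lets you vary.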