Comments

Comment by Rodrigo Heck (rodrigo-heck-1) on Possible OpenAI's Q* breakthrough and DeepMind's AlphaGo-type systems plus LLMs · 2023-11-23T03:22:29.925Z · LW · GW

The hypothesis that a breakthrough had happened crossed my mind, but the one thing that didn't seem to fit this narrative was Ilya regretting his decision. If such a new technology has emerged and he thinks Sam is not the man to guide the company in this new scenario, why would he reverse his position? It doesn't seem wise given what is assumed to be at stake.

Comment by Rodrigo Heck (rodrigo-heck-1) on Adventist Health Study-2 supports pescetarianism more than veganism · 2023-06-19T00:56:10.549Z · LW · GW

The Adventist Study is interesting, but surely not definitive. It's not even an RCT. To my knowledge, the only respectable RCT ever conducted analyzing a diet pattern is PREDIMED. It doesn't test veganism specifically (the treatment group adopts a Mediterranean diet), but it does strengthen the association between plant-based diets and lower overall mortality risk. Overall, I think the evidence against meat is only suggestive, but since these trials are so time-consuming and expensive to conduct, I don't expect much further light to come from investigations using endpoints like mortality, heart attacks, and strokes. I think epigenetic clocks will in the future be a much better way to quickly analyze the effects of diet interventions, and I suspect plant-based diets will have an advantage over other diet patterns.

Comment by Rodrigo Heck (rodrigo-heck-1) on Change my mind: Veganism entails trade-offs, and health is one of the axes · 2023-06-02T22:33:38.439Z · LW · GW

How is health a trade-off when the longest-living populations are the ones eating mostly plant-based diets?

Comment by Rodrigo Heck (rodrigo-heck-1) on Request: stop advancing AI capabilities · 2023-05-28T16:33:35.614Z · LW · GW

Not advancing AI is highly antisocial since it prolongs the human misery that could be addressed and resolved by a powerful technology.

Comment by Rodrigo Heck (rodrigo-heck-1) on any good rationalist guides to nutrition / healthy eating? · 2023-03-06T02:33:30.638Z · LW · GW

Now with epigenetic clocks we can see how dietary modifications impact health more broadly, and they do show that the nutritionist consensus (something like a Mediterranean or plant-based diet) is pointing in the right direction. Your skepticism isn't well founded, in my opinion.

Comment by Rodrigo Heck (rodrigo-heck-1) on Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc. · 2023-02-01T19:20:35.325Z · LW · GW

Well, Eliezer is the one making extraordinary claims, so I think I am justified in applying a high dose of skepticism before evidence of AI severely acting against humanity's best interest pops up.

Comment by Rodrigo Heck (rodrigo-heck-1) on Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc. · 2023-02-01T00:01:18.023Z · LW · GW

That's still a theoretical problem; something we should consider but not overly update on, in my opinion. Besides, can you think of any technology whose development people foresaw and for which specialists managed to successfully plan a framework before implementation? That wasn't the case even with nuclear bombs.

Comment by Rodrigo Heck (rodrigo-heck-1) on Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc. · 2023-01-31T22:10:22.502Z · LW · GW

That's exactly my point. We don't even know what these future technologies will look like. Gain-of-function research has potential major negative effects right now, so I think it's reasonable to be cautious. AI is not currently at that point. It may be in the future, but by then we will be better equipped to deal with it and to assess the risk-benefit profile we are willing to put up with.

Comment by Rodrigo Heck (rodrigo-heck-1) on Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc. · 2023-01-31T00:28:15.349Z · LW · GW

AI risk is still at another level of concern. If you ask me to list what can go wrong with gain-of-function research, I can probably cite a lot of things. If you ask me what dangers LLMs pose to humanity, my list will be far more innocuous.

Comment by Rodrigo Heck (rodrigo-heck-1) on Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc. · 2023-01-30T23:40:00.073Z · LW · GW

I am with him on this. The level of AI alarmism being put forward, especially in this community, is uncalled for. I was just reading Yudkowsky and Scott's chat exchange, and all the doom arguments I caught were of the form "what if?". Why don't we just return to the way we do engineering: keep building and innovating, and deal with negative side effects along the way?

Comment by Rodrigo Heck (rodrigo-heck-1) on Scaling laws vs individual differences · 2023-01-13T18:12:22.668Z · LW · GW

I think option 2 is the most reasonable explanation. We call "complex" the things some humans can perform but others can't. Things that everyone can do seem too simple, while things no one can do seem basically impossible. So we have a very narrow definition of "complex" that doesn't transfer to different levels of intelligence.

Comment by Rodrigo Heck (rodrigo-heck-1) on Whisper's Wild Implications · 2023-01-03T21:12:06.371Z · LW · GW

A better approach, IMO, is to tokenize audio directly and then find a clever way to align text tokens with audio tokens during training, without relying on 100% transcription.
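To make the idea concrete, here is a minimal sketch of what that could look like, assuming a hypothetical VQ-style audio tokenizer and made-up special tokens (this is not Whisper's actual pipeline):

```python
# A minimal sketch, not any existing system's API. Assume a hypothetical
# vector-quantized audio tokenizer that turns a waveform into discrete codes,
# plus a standard text tokenizer. Training sequences interleave the two
# streams with boundary tokens, so the model can learn the alignment itself
# instead of needing a full transcription for every clip.

AUDIO_START, AUDIO_END = "<audio>", "</audio>"
TEXT_START, TEXT_END = "<text>", "</text>"

def build_training_sequence(audio_codes, text_tokens):
    """Concatenate audio codes and (possibly partial) text tokens into one
    token stream for a decoder-only model."""
    audio_part = [AUDIO_START] + [f"a{c}" for c in audio_codes] + [AUDIO_END]
    text_part = [TEXT_START] + list(text_tokens) + [TEXT_END]
    return audio_part + text_part

# Example: six made-up audio codes paired with a partial transcript.
print(build_training_sequence([12, 87, 87, 3, 41, 9], ["hello", "world"]))
```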

Comment by Rodrigo Heck (rodrigo-heck-1) on What will the scaled up GATO look like? (Updated with questions) · 2022-10-27T06:08:10.885Z · LW · GW

My guess is they will make tweaks on the tokenization side. If I were given the task of adding more modalities to a Transformer, I would probably be scratching my head wondering whether there is a way to universally tokenize any type of data. But that's an optimistic scenario; I don't expect them to come up with such a solution right now. So I think they will just add new state-of-the-art tokenizers, like ViT-VQGAN for images. Other than that, I am mostly curious whether we will be able to observe transfer learning more clearly when increasing the model size. That, I think, is the most important information that can come from GATO 2, because then we will be better equipped to achieve Chinchilla's scaling laws without some potential slowdown. I am betting that we will, as long as the tasks are not too far from each other and we take the time to build good tokenizers.
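A rough sketch of what the "one tokenizer per modality" route could look like; the vocabulary sizes and offsets below are illustrative, not taken from the GATO paper:

```python
# Each modality-specific tokenizer maps its inputs into its own slice of one
# shared discrete vocabulary, so the Transformer sees a single token stream.
# Codebook sizes here are made up for illustration.

TEXT_VOCAB = 32_000    # e.g. a SentencePiece vocabulary
IMAGE_VOCAB = 8_192    # e.g. a ViT-VQGAN codebook
AUDIO_VOCAB = 1_024    # e.g. a neural audio codec codebook

OFFSETS = {
    "text": 0,
    "image": TEXT_VOCAB,
    "audio": TEXT_VOCAB + IMAGE_VOCAB,
}

def to_shared_vocab(modality, local_ids):
    """Shift modality-local token ids into disjoint ranges of the shared vocabulary."""
    return [OFFSETS[modality] + i for i in local_ids]

print(to_shared_vocab("image", [0, 5, 4095]))  # -> [32000, 32005, 36095]
```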

Comment by Rodrigo Heck (rodrigo-heck-1) on The heritability of human values: A behavior genetic critique of Shard Theory · 2022-10-21T01:16:23.892Z · LW · GW

I have an implicit bias toward discounting theories that try to underplay the influence of genetics, and your article is a nice reinforcer.

Comment by Rodrigo Heck (rodrigo-heck-1) on Why are we sure that AI will "want" something? · 2022-09-18T01:41:30.072Z · LW · GW

AI won't have wishes or desires. There is no correlation in the animal kingdom between desires and cognitive ability (the desire to climb the social hierarchy or to have sex is preserved no matter the level of intelligence). Dumb humans want basically the same things as bright humans. All of this suggests that predictive modeling of the world is totally decoupled from wishes and desires.

I suppose it is theoretically possible to build a system that also incorporates desires, but why would we do that? We want von Neumann's cognitive abilities, not von Neumann's personality.

Comment by Rodrigo Heck (rodrigo-heck-1) on Why Do People Think Humans Are Stupid? · 2022-09-16T01:52:23.471Z · LW · GW

No. It performs much worse than AI systems.

Comment by Rodrigo Heck (rodrigo-heck-1) on Why Do People Think Humans Are Stupid? · 2022-09-15T03:16:20.004Z · LW · GW

Can you predict the shape of a protein from the sequence of its amino acids? I can't, and I suspect no human (even with the most powerful non-AI software) can. There is so much we are unable to understand. Another example is how we still seem to struggle to make advances in quantum physics.

Comment by Rodrigo Heck (rodrigo-heck-1) on A Mechanistic Interpretability Analysis of Grokking · 2022-08-16T18:47:01.987Z · LW · GW

> It seems less obvious to me that human grokking looks like 'stare at the same data points a bunch of times until things click'

Personally, I don't think it's that different, at least for language. When I read an unrecognizable word in a foreign language, my mind first tries to retrieve other times I have seen this word but haven't understood it. Suppose I can remember 3 of those instances. Now I have 3 + 1 examples in my mind and, extracting the context they share, I can finally deduce the meaning.

Comment by Rodrigo Heck (rodrigo-heck-1) on chinchilla's wild implications · 2022-07-31T06:14:35.165Z · LW · GW

A possible avenue to explore is expanding these models to multilingual data. There is perhaps a lot of high-quality text uniquely available in other languages (news, blogs, etc.). Anyway, IMO this effort should probably be directed less at acquiring the largest amount of data and more at acquiring high-quality data. Chinchilla's scaling law doesn't include quality as a distinctive property, but we have reasons to believe that more challenging text is much more informative and can compensate for low-data environments.
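For context, here is a back-of-the-envelope version of the scaling rule in question, using the common approximations C ≈ 6·N·D FLOPs and roughly 20 tokens per parameter (the coefficients are approximate, and nothing in it accounts for data quality, which is the point above):

```python
import math

def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Return (params, tokens) that roughly balance model size and data
    under C = 6 * N * D with D = tokens_per_param * N."""
    n = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    return n, tokens_per_param * n

params, tokens = chinchilla_optimal(5.76e23)  # roughly Chinchilla's compute budget
print(f"~{params / 1e9:.0f}B parameters, ~{tokens / 1e12:.1f}T tokens")
```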

Comment by Rodrigo Heck (rodrigo-heck-1) on Book Review: Talent · 2022-06-04T05:09:26.947Z · LW · GW

I don't understand why or how the authors reached the conclusion that IQ is not that important from this study on Swedish CEOs. The CEOs are 1.5 standard deviations above the mean. That is a huge effect.
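For scale, assuming the usual IQ metric (mean 100, SD 15):

```python
from statistics import NormalDist

# 1.5 standard deviations above the mean on a mean-100, SD-15 scale is
# roughly IQ 122, and only about 7% of the population scores higher.
iq = 100 + 1.5 * 15
top_share = 1 - NormalDist().cdf(1.5)
print(f"IQ ~ {iq:.0f}; top {top_share:.0%} of the distribution")
```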