Posts

Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 2023-08-20T15:18:24.477Z
Max TK's Shortform 2023-08-18T19:12:00.734Z
Memetic Judo #1: On Doomsday Prophets v.3 2023-08-18T00:14:11.322Z
Memetic Judo #2: Incorporal Switches and Levers Compendium 2023-08-14T16:53:05.363Z
(retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle. 2023-03-18T03:43:09.806Z

Comments

Comment by Max TK (the-sauce) on What wiki-editing features would make you use the LessWrong wiki more? · 2023-09-21T09:32:15.324Z · LW · GW

I guess this means they found my suggestion reasonable and implemented it right away :D I am impressed!

Comment by Max TK (the-sauce) on What wiki-editing features would make you use the LessWrong wiki more? · 2023-09-14T15:02:08.449Z · LW · GW

thanks!!!

Comment by Max TK (the-sauce) on What wiki-editing features would make you use the LessWrong wiki more? · 2023-08-26T11:39:23.697Z · LW · GW

I think there is an option for whether they can be promoted to front page.

Comment by Max TK (the-sauce) on What wiki-editing features would make you use the LessWrong wiki more? · 2023-08-25T07:24:32.194Z · LW · GW

When I am writing my articles, I prefer a workflow in which I am able to show my article to selected others for discussion and review before I publish. This does not currently seem to be possible without giving them co-authorship, which is often not what I want.

This could be solved, for example, by an additional option that makes the article link accessible to others even while it is in draft mode.

Comment by Max TK (the-sauce) on Memetic Judo #1: On Doomsday Prophets v.3 · 2023-08-24T12:52:00.567Z · LW · GW

Update: Because I want to include this helpful new paragraph in my article and I am unable to reach Will, I am now adding it anyway (it seems to me that this is in the spirit of what he intended). @Will: please message me if you object.

Comment by Max TK (the-sauce) on Memetic Judo #1: On Doomsday Prophets v.3 · 2023-08-21T14:03:38.978Z · LW · GW

https://en.wikipedia.org/wiki/God_helps_those_who_help_themselves

Comment by Max TK (the-sauce) on Memetic Judo #1: On Doomsday Prophets v.3 · 2023-08-20T13:43:19.335Z · LW · GW

Lovely; can I add this to the article if I credit you as the author?

Comment by Max TK (the-sauce) on Memetic Judo #1: On Doomsday Prophets v.3 · 2023-08-19T15:10:10.746Z · LW · GW

Good idea! I thought of this one: https://energyhistory.yale.edu/horse-and-mule-population-statistics/

Comment by Max TK (the-sauce) on Max TK's Shortform · 2023-08-18T19:12:00.915Z · LW · GW

On How Yudkowsky Is Perceived by the Public

Over recent months I have been able to gather some experience as an AI safety activist. One of my takeaways is that many people I talk to do not understand Yudkowsky's arguments very well.

I think this is mainly for two reasons:

  1. A lot of his reasoning requires a kind of "mathematical intuition" most people do not have. In my experience it is possible to make correct and convincing arguments that are easier to understand, or even to invest more effort into explaining some of the more difficult ones.

  2. I think he is used to a LessWrong lingo that sometimes gets in the way of communicating with the public.

Still, I am very grateful that he continues to address the public, and I believe it is probably a net positive. Over recent months the public AI-safety discourse has begun to snowball into something bigger, other charismatic people keep picking up the torch, and I think his contribution to these developments has probably been substantial.

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-17T13:45:39.623Z · LW · GW

I think a significant part of the problem is not the LLM's trouble distinguishing truth from fiction; it is rather convincing it, through your prompt, that the output you want is the former and not the latter.

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T21:59:52.092Z · LW · GW

#parrotGang

Comment by Max TK (the-sauce) on Memetic Judo #2: Incorporal Switches and Levers Compendium · 2023-08-16T18:59:52.900Z · LW · GW

My argument does not depend on the AI being able to survive inside a botnet. I mentioned several alternatives.

Comment by Max TK (the-sauce) on Memetic Judo #2: Incorporal Switches and Levers Compendium · 2023-08-16T17:20:56.091Z · LW · GW

You were the one who made that argument, not me. 🙄

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T17:17:06.206Z · LW · GW

Of the universal approximation theorem

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T17:14:37.439Z · LW · GW

In international forums there is usually a gentlemen's agreement not to be condescending about things like language comprehension or spelling errors, and I would like to continue this tradition, even though your own paragraphs would offer ample opportunity for me to do the same.

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T16:42:41.441Z · LW · GW

Based on your phrasing I sense you are trying to object to something here, but it doesn't seem to have much to do with my article. Is this correct or am I just misunderstanding your point?

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T16:39:07.374Z · LW · GW

LLMs use one or more hidden layers, so shouldn't the proof apply to them?

Comment by Max TK (the-sauce) on Memetic Judo #2: Incorporal Switches and Levers Compendium · 2023-08-16T16:30:39.752Z · LW · GW

The power-efficiency delta is currently ~1000x in favor of brains: a brain draws ~20 W, so the AGI would draw ~20 kW. At ~0.33 Euro per kWh in Germany, 20 kWh costs ~6.60 Euro. Running our AGI would therefore, assuming your description of the situation is correct, cost around 6-7 Euros in energy per hour, which is cheaper than a human worker.
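A minimal sketch of that arithmetic (the ~1000x efficiency gap, the 20 W brain draw, and the 0.33 Euro/kWh price are the illustrative assumptions above, not measured figures):

```python
# Back-of-the-envelope energy cost for a hypothetical AGI assumed to be
# ~1000x less power-efficient than a human brain (illustrative numbers only).
brain_power_w = 20                      # human brain draws roughly 20 W
efficiency_gap = 1000                   # assumed brain-vs-AGI efficiency ratio
agi_power_kw = brain_power_w * efficiency_gap / 1000   # -> 20 kW
price_eur_per_kwh = 0.33                # approximate German electricity price
cost_per_hour_eur = agi_power_kw * price_eur_per_kwh   # ~6.6 EUR per hour
print(f"AGI draw: {agi_power_kw:.0f} kW -> ~{cost_per_hour_eur:.2f} EUR/hour")
```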

So, while I don't assume that such estimates need to be correct or apply to an AGI (which doesn't exist yet), I don't think you are making a very convincing point so far.

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T15:52:01.763Z · LW · GW

I don't really know what to make of this objection, because I have never seen the stochastic parrot argument applied to a specific, limited architecture as opposed to the general category.

Edit: Maybe suggest how I could rephrase this to improve my argument.

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T15:26:54.610Z · LW · GW

Good point. I think I will add it later.

Comment by Max TK (the-sauce) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T14:28:09.407Z · LW · GW

About point 1: I think you are right about that assumption, though I believe that many people repeat this argument without really having a stance on (or awareness of) brain physicalism. That's why I didn't hesitate to include it. Still, if you have a decent idea of how to improve this article for people who are sceptical of physicalism, I would like to add it.

About point 2: Yeah, you might be right ... a reference to OthelloGPT would make it more convincing; I will add it later!

Edit: Still, I believe that "mashup" isn't even a strictly false characterization of concept composition. I think I might add a paragraph explicitly explaining that and how I think about it.

Comment by Max TK (the-sauce) on Memetic Judo #1: On Doomsday Prophets v.3 · 2023-08-11T23:45:58.476Z · LW · GW

Isn't that a response to a completely different kind of argument? I am probably not going to discuss this here, since it seems very off-topic, but if you want I can consider putting it on my list for arguments I might discuss in this form in a future article.

Comment by Max TK (the-sauce) on Memetic Judo #1: On Doomsday Prophets v.3 · 2023-08-11T18:41:02.072Z · LW · GW

Interesting insight. Sadly, there isn't much to be done against the beliefs of someone who is certain that God will save us.

Maybe the following: Assuming the frame of a believer, the signs of AGI being a dangerous technology seem obvious on closer inspection. If God exists, we should therefore assume that this is an intentional test he has placed in front of us. God has given us all the signs. God helps those who help themselves.

Comment by Max TK (the-sauce) on My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" · 2023-04-09T13:50:21.020Z · LW · GW

weakly suggested that more dimensions do reduce demon formation

This also makes a lot of sense intuitively, as it should become more difficult in higher dimensions to construct walls (hills / barriers without holes).

Comment by Max TK (the-sauce) on How can we promote AI alignment in Japan? · 2023-03-19T11:41:16.501Z · LW · GW

I am under the impression that the public attitude towards AI safety / alignment is about to change significantly.
Strategies aimed at informing parts of the public, which may have been pointless in the past (abstract risks etc.), may now become more successful, because mainstream newspapers are beginning to write about AI risks and people are beginning to be concerned. The abstract risks are becoming more concrete.

Comment by Max TK (the-sauce) on (retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle. · 2023-03-18T13:51:15.612Z · LW · GW

Maybe, if it happens early, there is a chance that it manages to become an intelligent computer virus but is not intelligent enough to further scale its capabilities or produce effective schemes likely to result in our complete destruction. I know I am grasping at straws at this point, but maybe it's not absolutely hopeless.

The result could be a corrupted infrastructure and a cultural shock strong enough for the people to burn down OpenAI's headquarters (metaphorically speaking) and AI-accelerating research to be internationally sanctioned.

In the past I have thought a lot about "early catastrophe scenarios", and while I am not convinced, it seems to me that these might be the most survivable ones.

Comment by Max TK (the-sauce) on (retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle. · 2023-03-18T13:43:50.650Z · LW · GW

One very problematic aspect of this view that I would like to point out is that in a sense, most 'more aligned' AGIs of otherwise equal capability level seem to be effectively 'more tied down' versions, so we should assume them to have a lower effective power level than a less aligned AGI that has a shorter list of priorities.
If we imagine both as competing players in a strategy game, it seems that the latter has to follow fewer rules.

Comment by Max TK (the-sauce) on (retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle. · 2023-03-18T13:34:42.026Z · LW · GW

Good addition! I even know a few of those "AI rights activists" myself.
Since this is my first post, would it be considered bad practice to edit it to include this?

Comment by Max TK (the-sauce) on (retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle. · 2023-03-18T07:48:54.301Z · LW · GW

I think that's not an implausible assumption.
However, this could mean that some of the things I described might still be too difficult for it to pull off successfully, so in the case of an early breakout, dealing with it might be slightly less hopeless.

Comment by Max TK (the-sauce) on Ethical AI investments? · 2023-03-18T05:32:19.069Z · LW · GW

This is an important question. To what degree are both of these (naturally conflicting) goals important to you? How important is making money? How important is increasing AI-safety?

Comment by Max TK (the-sauce) on Humans provide an untapped wealth of evidence about alignment · 2022-08-16T16:21:14.883Z · LW · GW

I would be the last person to dismiss the potential relevance that understanding value formation and management in the human brain might have for AI alignment research, but I think there are good reasons to assume that the solutions our evolution has produced would be complex and not sufficiently robust.
Humans are [mesa-optimizers](https://www.alignmentforum.org/tag/mesa-optimization), and the evidence is solid that, as a consequence, our alignment with the implicit underlying utility function (reproductive fitness) is rather brittle (e.g. sex with contraceptives, opiate abuse, and similar "failure points").
Like others have expressed here before me, I would also argue that human alignment only has to perform in a very narrow environment, shared with many very similar agents that are all on (roughly) the same power level. The solutions human evolution has produced to ensure human semi-alignment are therefore, to a significant degree, not just neurological but also social.
Whatever these solutions are, we should not expect them to generalize well or to be reliable in a very different environment, such as that of an intelligent actor who holds an absolute power monopoly.

This suggests that researching the human mind alone would not yield a technology robust enough to use when we have exactly one shot at getting it right. We need solutions to the aforementioned abstractions and toy models, because we should probably try to find a way to build a system that is theoretically safe and not just "probably safe in a narrow environment".