Posts

Claude wants to be conscious 2024-04-13T01:40:43.066Z
[Linkpost] Faith and Fate: Limits of Transformers on Compositionality 2023-06-16T15:04:59.828Z
The Intrinsic Interplay of Human Values and Artificial Intelligence: Navigating the Optimization Challenge 2023-06-05T20:41:46.124Z
Paper: Forecasting world events with neural nets 2022-07-01T19:40:12.788Z
Converging toward a Million Worlds 2021-12-24T21:33:52.732Z
Partial-Consciousness as semantic/symbolic representational language model trained on NN 2021-03-16T18:51:00.038Z
Joe Kwon's Shortform 2021-03-16T01:18:11.166Z
Value of building an online "knowledge web" 2020-05-01T04:31:24.265Z

Comments

Comment by Joe Kwon on Claude wants to be conscious · 2024-04-13T15:28:43.197Z · LW · GW

Makes sense, and I also don't expect the results here to be surprising to most people.

Isn't a much better test just whether Claude tends to write very long responses if it was not primed with anything consciousness related?

What do you mean by this part? As in, if it just writes very long responses naturally? There's a significant change in response length depending on whether the input is just the question (which empirically produced the longest responses for my factual questions), a short prompt preceding the question, a longer prompt preceding the question, etc. So I tried to control for the fact that having any consciousness prompt means a longer input to Claude by creating control prompts that have nothing to do with consciousness; in that case, Claude gave shorter responses after controlling for input length.

Basically, because I'm working with an already-RLHF'd model whose output lengths are probably dominated by whatever happened during preference tuning, I try my best to account for that by using similar-length prompts preceding the questions I ask.
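For what it's worth, here is a minimal sketch of the comparison I mean, assuming a hypothetical `query_model` function in place of the real API call and word count as a rough stand-in for whatever length measure you prefer:

```python
# Minimal sketch of a length-matched control comparison (helper names are hypothetical).
from statistics import mean
from typing import Callable, Dict, List


def mean_response_length(
    query_model: Callable[[str], str],  # stand-in for the actual model API call
    preambles: List[str],
    questions: List[str],
) -> float:
    """Average response length (in words) over all preamble + question pairs."""
    lengths = []
    for preamble in preambles:
        for question in questions:
            response = query_model(f"{preamble}\n\n{question}".strip())
            lengths.append(len(response.split()))
    return mean(lengths)


def compare_conditions(
    query_model: Callable[[str], str],
    questions: List[str],
    consciousness_preambles: List[str],
    control_preambles: List[str],  # similar length to the consciousness preambles, unrelated topic
) -> Dict[str, float]:
    """Compare mean response lengths across input conditions of similar length."""
    return {
        "question_only": mean_response_length(query_model, [""], questions),
        "consciousness_prompt": mean_response_length(query_model, consciousness_preambles, questions),
        "length_matched_control": mean_response_length(query_model, control_preambles, questions),
    }
```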

Comment by Joe Kwon on Claude wants to be conscious · 2024-04-13T02:11:07.418Z · LW · GW

Thanks for the feedback! In a follow-up, I can try creating various rewordings of the prompt for each value. But instead of just neutral rewordings, it seems like you're talking about the extent to which the tone of the prompt implicitly encourages the behavior (output length) one way or the other; am I interpreting that correctly? So, e.g., have a much more subdued/neutral tone for the consciousness example?

Comment by Joe Kwon on Highlights from Lex Fridman’s interview of Yann LeCun · 2024-03-14T17:24:55.792Z · LW · GW

Does the median LW commenter believe that autoregressive LLMs will take us all the way to superintelligence?

Comment by Joe Kwon on Solving the Mechanistic Interpretability challenges: EIS VII Challenge 2 · 2024-02-19T17:06:01.664Z · LW · GW

Super cool stuff. Minor question: what does "Fraction of MLP progress" mean? Are you scaling down the MLP output values that get added to the residual stream? Thanks!

Comment by Joe Kwon on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T01:44:03.443Z · LW · GW

FWIW, I understand now what it's meant to do, but I have very little idea how your protocol/proposal delivers positive outcomes in the world by emitting performative speech acts. I think explaining your internal reasoning/hypothesis for how emitting performative speech acts leads to powerful AIs delivering positive outcomes would be helpful.

Is such a "channel" necessary to deliver positive outcomes? Is it supposed to make it more likely that AI delivers positive outcomes? More details on what success looks like to you here would help.

Comment by Joe Kwon on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T01:31:41.482Z · LW · GW

I skimmed The Snuggle/Date/Slap Protocol and Ethicophysics II: Politics is the Mind-Savior, which are two recent downvoted posts of yours. I think they get negative karma because they are difficult to understand and it's hard to tell what you're supposed to take away from them. They would probably be better received if the content were written so that it's easy to understand both what your message is at the object level and what the point of the post is.

I read the Snuggle/Date/Slap Protocol and feel confused about what you're trying to accomplish (is it solving AI Alignment?) and how the method is supposed to accomplish that. 

In the ethicophysics posts, I understand the object-level claims/material (like the homework/discussion questions) but fail to understand what the goal is. It seems like you are jumping straight to grounded mathematical theories for stuff like ethics/morality, which immediately makes me feel dubious; my reaction is that it's too much, too grand, too certain. Perhaps you're just spitballing/brainstorming some ideas, but that's not how it comes across, and I infer you feel deeply assured that it's correct given statements like "It [your theory of ethics modeled on the laws of physics] therefore forms an ideal foundation for solving the AI safety problem."

I don't necessarily think you should change whatever you're doing, BTW; I'm just pointing out some likely reactions/impressions driving the negative karma.

Comment by Joe Kwon on Introducing Fatebook: the fastest way to make and track predictions · 2023-07-19T02:57:33.882Z · LW · GW

This is terrific. One feature that would be great to have is a way to sort and categorize your predictions under various labels.

Comment by Joe Kwon on Human sexuality as an interesting case study of alignment · 2022-12-30T17:46:56.594Z · LW · GW

Sexuality is, usually, a very strong drive which has a large influence over behaviour and long term goals. If we could create an alignment drive as strong in our AGI we would be in a good position.

I don't think we'd be in a good position even if we instilled an alignment drive this strong in AGI.

Comment by Joe Kwon on The Limit of Language Models · 2022-12-29T20:47:09.672Z · LW · GW

To me, the caveats section of this post highlights the limited scope from which language models will be able to learn human values and preferences, given that explicitly stated (and even implied-from-text) goals != human values as a whole.

Comment by Joe Kwon on Alignment via prosocial brain algorithms · 2022-09-13T16:25:05.775Z · LW · GW

Hi Cameron, nice to see you here : ) What are your thoughts on a critique like: human prosocial behavior/values only look the way they look, and hold stable within lifetimes, insofar as we evolved in and live in a world where there are loads of other agents with roughly equal power to our own? Do you disagree with that belief?

Comment by Joe Kwon on Principles for Alignment/Agency Projects · 2022-07-07T06:05:22.625Z · LW · GW

This was very insightful. It seems like a great thing to point to, for the many newish-to-alignment people ideating research agendas (like myself). Thanks for writing and posting!

Comment by Joe Kwon on Naive Hypotheses on AI Alignment · 2022-07-02T21:37:24.389Z · LW · GW

This is a really cool idea and I'm glad you made the post! Here are a few comments/thoughts:

H1: "If you give a human absolute power, there is a small subset of humans that actually cares and will try to make everyone’s life better according to their own wishes"

How confident are you in this premise? Power and one's values/incentives/preferences may not be orthogonal (and my intuition is that they aren't). Also, I feel a little skeptical about the usefulness of thinking about the trait showing up more or less in various intelligence strata within humans. It seems like what we're worried about is in a different reference class. Not sure.

H4 is something I'm super interested in, and I'd be happy to talk about it over a call if you want to : )

Comment by Joe Kwon on GPT-3 Catching Fish in Morse Code · 2022-07-01T00:12:55.244Z · LW · GW

Something at the root of this might be relevant to the inverse scaling competition, where they're trying to find what gets worse in larger models. This might have some flavor of obvious wrongness -> deception via plausible-sounding things as models get larger? https://github.com/inverse-scaling/prize

Comment by Joe Kwon on Will vague "AI sentience" concerns do more for AI safety than anything else we might do? · 2022-06-15T04:36:43.645Z · LW · GW

Interesting idea. Like... a mix of genuine sympathy/expansion of the moral circle to AI and a virtue-signaling/anti-corporation meme spreads to the majority of the population and effectively curtails AGI capabilities research? This feels like something that might actually do nothing to reduce corporations' efforts to get to powerful AI unless it reaches a threshold, at which point there are very dramatic actions against corporations that continue to try.

Comment by Joe Kwon on Why don't you introduce really impressive people you personally know to AI alignment (more often)? · 2022-06-11T17:07:07.479Z · LW · GW

I stream-of-consciousness'd this out and I'm not happy with how it turned out, but it's probably better I post this than delete it for not being polished and eloquent. Can clarify with responses in comments.

Glad you posted this and I'm also interested in hearing what others say. I've had these questions for myself in tiny bursts throughout the last few months. 

When I get the chance to speak to people at an earlier career stage than mine (starting undergrad, or high schoolers attending a math camp I went to) who are undecided about their careers, I bring up my interest in AI Alignment and why I think it's important, and share resources with them after the call in case they're interested in learning more. I don't have very many opportunities like this because I don't actively seek to identify and "recruit" them. I only bring it up by happenstance (e.g. joining a random Discord server for homotopy type theory, seeing an intro by someone who went to the same math camp as me and is interested in cogsci, and scheduling a call to talk about my research background in cogsci and how my interests have evolved/led me to alignment over time).

I know very talented people around my age at MIT and from a math program I attended: students who are breezing through technical double majors with perfect GPAs, IMO participants, good competitive programmers, etc. Some things that make this hard for me:

  1. If I know them well, I can talk about my research interests and try to get them to see my motivation, but if I'm only catching up with them 1-2x a year, it feels very unnatural and synthetic to spend that time trying to convert them into doing alignment work. And if I am still very close to them / talk to them frequently, there's still the issue of bringing it up naturally and having a chance to convince them. Most of these people are doing math PhDs, or trading in finance, or working on a startup, or... The point is that they are fresh on their sprint down the path they have chosen. They are all the type who are very focused and determined to succeed at the goals they have settled on. It is not "easy" to get them (or, for that matter, almost any college student) to halt their "exploit" mode, take 10 steps back and lots of time from their busy lives, and then "explore" another option that I'm seemingly imposing onto them. FWIW, the people I know who are in trading seem the most likely to switch out (they've explicitly told me in conversations that they just enjoy the challenge of the work but want to find more fulfilling things down the road), and to these people I do share ideas and resources about AI Safety.
  2. Sharing resources after a call and talking about why I'm interested in alignment is the furthest I've gone toward getting someone who is already on a separate career track to consider alignment.
  3. If it were MUCH easier to convince people that AI alignment is worth thinking about in under an hour, and I could reach out to people to talk with me about this for an hour without looking like a nutjob and potentially damaging our relationship because it seems like I'm just trying to convert them to something, AND the field of AI Alignment were more naturally compelling for them to join, I'd do much more of this outreach. On that last point, what I mean is: for one moment, let's suspend the object-level importance of solving AI Alignment. In reality, there are things that are incredibly important/attractive for people when pursuing a career. Status, monetary compensation, and recognition (and not being labeled a nutjob) are some big ones. If these things were better (and I think they are getting much better recently), it would be easier to get people to spend more time at least thinking about the possibility of working on AI Alignment, and eventually some would work on it, because I don't think the arguments for x-risk from AI are hard to understand. If I personally didn't have so much support by way of programs the community had started (SERI, AISC, EA 1-1s, EAG AI Safety researchers making time to talk to me), or if it felt like the EA/x-risk group was not at all "prestigious", I don't know how engaged I would've been in the beginning, when I started my own journey learning about all this. As much as I wish it weren't true, I would not be surprised at all if the first thing that instinctively led me down this road was noticing that EAs/LW users were intelligent and had a solidly respectable community, before I chose to spend my time engaging with the content (a lot of which was about x-risks).

Comment by Joe Kwon on How Do Selection Theorems Relate To Interpretability? · 2022-06-11T00:01:46.121Z · LW · GW

Hi John. One could run useful empirical experiments right now, before fleshing out all these structures and how to represent them, if you can assume that a proxy for human representations (crude: ConceptNet; less crude: similarity judgments on visual features and classes collected from humans) is a good enough proxy for the "relevant structures" (or at least that these representations capture the natural abstractions more faithfully than the best machine models do in, for example, vision tasks where human performance is the benchmark), right?

I had a similar idea about ontology mismatch identification via checking for isomorphic structures, and also realized I had no idea how to realize that idea. Through some discussions with Stephen Casper and Ilia Sucholutsky, we kind of pivoted the above idea into the regime of interpretability/adversarial robustness where we are hunting for interesting properties given that we can identify the biggest ways that humans and machines are representing things differently (and that humans, for now, are doing it "better"/more efficiently/more like the natural abstraction structures that exist). 

I think I am working in the same building this summer (caught a split-second glance at you yesterday); I would love a chance to discuss how selection theorems might relate to an interpretability/adversarial robustness project I have been thinking about.

Comment by Joe Kwon on Agency As a Natural Abstraction · 2022-05-14T23:19:13.447Z · LW · GW

Thanks so much for the response, this is all clear now! 

Comment by Joe Kwon on Agency As a Natural Abstraction · 2022-05-14T16:11:11.588Z · LW · GW

Sorry if it's obvious from some other part of your post, but the whole premise is that sufficiently strong models *deployed in sufficiently complex environments* lead to general intelligence with optimization over various levels of abstraction. So why is it obvious that "It doesn't matter if your AI is only taught math, if it's a glorified calculator — any sufficiently powerful calculator desperately wants to be an optimizer"?

If it's only trained to solve arithmetic and there are no additional sensory modalities aside from the buttons on a typical calculator, how does increasing this AI's compute/power lead to it becoming an optimizer over a wider domain than just arithmetic? Maybe I'm misunderstanding the claim, or maybe there's an obvious reason I'm overlooking.

Also, what do you think of the possibility that when AI becomes superhuman++ at tasks, its representations go from interpretable to inscrutable again (because it uses lower-level representations that are inaccessible to humans)? I understand the natural abstraction hypothesis, and I buy it too, but even an epsilon increase in detail might compound into significantly different predictions if a causal model is trying to use tons of representations in conjunction to compute something complex.

Do you think it might be valuable to find a theoretical limit showing that the amount of compute needed for such epsilon-details to be usefully incorporated is greater than will ever be feasible (or not)?

Comment by Joe Kwon on [Intro to brain-like-AGI safety] 13. Symbol grounding & human social instincts · 2022-04-28T19:31:44.244Z · LW · GW

Hi Steve, loved this post! I've been interested in viewing the steering subsystem and thought generator + assessor framework as the generator-of-values that we want AI to learn a good pointer to/representation of, so that it can simulate out the complex, emergent human values and properly extrapolate them.

I know the way I'm thinking about the following doesn't sit quite right with your perspective, because AFAIK you don't believe there need to be independent, modular value systems that give their own reward signals for different things (your steering subsystem and your thought generator and assessor subsystems work in tandem to produce a single reward signal). I'd be interested in hearing your thoughts on what seems more realistic after importing my model of value generators as more distinct, independent, modular systems in the brain.

In the past week, I've been thinking about the potential importance of considering human value generators as modular subsystems (for both computation and reward). Consider the possibility that, at various stages of the evolutionary timeline that shaped human neurocircuitry, modular and independently developed subsystems arose. E.g., one of the first, some "reptilian"-vibe system, rewarded sugary stuff because it was a good proxy at the time for nutritious/calorie-dense foods that help with survival. And then down the line, another system developed to reward feeling high social status, because that was a good proxy at the time for surviving as social animals in in-group tribal environments. What would you critique about this view, and how would you fit similar core gears into your model of the human value-generating system?

I'm considering value generators as more independent and modular because (this gets into philosophical territory, but) perhaps we want powerful optimizers to apply optimization pressure not toward the human values generated by our holistic reward system, but toward ones generated by specific subsystems (system 2, higher-order values, the cognitive/executive-control reward system) instead of the reptilian, hedon-maximizing system.

This is a few-day-old, extremely crude and rough-around-the-edges idea, but I'd especially appreciate your input and critiques on this view. If it were promising enough, I wonder whether (inspired by John Wentworth's evolution of modularity post) training agents in a huge MMO environment and switching up the reward signals in the environment (or the environment distribution itself) every few generations would lead to the development of modular reward systems (mimicking the trajectory of value-generator systems developing in humans over the evolutionary timeline).
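To make this more concrete, here is a toy sketch of the kind of training setup I have in mind; everything in it (the reward functions, the mutation scheme, the switching schedule) is hypothetical and just illustrates the structure of swapping reward signals every few generations:

```python
# Toy sketch: a population of simple linear "agents" evolves while the environment's
# reward signal is swapped every few generations, to see whether modular
# reward-handling structure emerges. All details here are hypothetical.
import random
from typing import Callable, List

RewardFn = Callable[[List[float]], float]

def reward_sugar(weighted_obs: List[float]) -> float:
    return weighted_obs[0]  # proxy for calorie-dense food

def reward_status(weighted_obs: List[float]) -> float:
    return weighted_obs[1]  # proxy for social status

def evolve(population: List[List[float]], reward_fn: RewardFn,
           env_samples: List[List[float]]) -> List[List[float]]:
    """One generation: score agents on sampled observations, keep the top half,
    and refill the population with mutated copies of the survivors."""
    def fitness(agent: List[float]) -> float:
        return sum(reward_fn([w * o for w, o in zip(agent, obs)]) for obs in env_samples)
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [[w + random.gauss(0, 0.1) for w in agent] for agent in survivors]
    return survivors + children

population = [[random.gauss(0, 1) for _ in range(2)] for _ in range(20)]
schedule = [reward_sugar, reward_status]  # which proxy the environment currently rewards
for gen in range(40):
    reward_fn = schedule[(gen // 5) % len(schedule)]  # switch every 5 generations
    env_samples = [[random.random(), random.random()] for _ in range(30)]
    population = evolve(population, reward_fn, env_samples)
```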

Comment by Joe Kwon on Lessons After a Couple Months of Trying to Do ML Research · 2022-03-24T18:13:20.836Z · LW · GW

Enjoyed reading this! Really glad you're getting good research experience, and I'm stoked about the strides you're making toward developing research skills since our call (feels like ages ago)! Lately I've been doing a lot of what you describe as "directed research" myself as I learn more about DL-specific projects, and I've been learning much faster than when I was just doing cursory, half-assed paper skimming alongside my cogsci projects. Would love to catch up over a call sometime to talk about the stuff we're working on now.

Comment by Joe Kwon on [Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL · 2022-03-04T19:36:56.035Z · LW · GW

Really appreciated this post and I'm especially excited for post 13 now! In the past month or two, I've been thinking about stuff like "I crave chocolate" and "I should abstain from eating chocolate" as being a result of two independent value systems (one whose policy was shaped by evolutionary pressure and one whose policy is... idk vaguely "higher order" stuff where you will endure higher states of cortisol to contribute to society or something). 

I'm starting to lean away from this a little bit, and I think reading this post gave me a good idea of what your thoughts are, but it'd be really nice to get confirmation (and maybe clarification). Let me know if I should just wait for post 13. My prediction is that you believe there is a single (not dual) generator of human values, which is essentially moderated at the neurochemical level, like "level of dopamine/serotonin/cortisol". And yet this same generator, thanks to our sufficiently complex "thought generator", can produce plans and thoughts such as "I should abstain from eating chocolate" even though eating it would be a dopamine hit in the short term, because it can simulate much further down the timeline and believes the overall neurochemical feedback will be better than caving in and eating the chocolate, on a longer time horizon. Is this correct?

If so, do you believe that because social/multi-agent navigation was essential to human evolution, the policy was heavily shaped by social world related pressures, which means that even when you abstain from the chocolate, or endure pain and suffering for a "heroic" act, in the end, this can all still be attributed to the same system/generator that also sometimes has you eat sugary but unhealthy foods?

Given that my angle on contributing to AI Alignment is trying to better elucidate what "human values" even are, I feel like I should try to resolve the competing ideas I've absorbed from LessWrong: two distinct value systems vs. a single generator of values. This post was a big step for me in understanding how the latter idea can be coherent with the apparent contradictions between hedonistic and higher-level values.

Comment by Joe Kwon on Converging toward a Million Worlds · 2022-02-18T19:13:45.865Z · LW · GW

In case anyone stumbles across this post in the future, I found these posts from the past both arguing for and against some of the worries I gloss over here. I don't think my post boils down completely to merely "recommender systems should be better aligned with human interests", but that is a big theme. 

https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area

https://www.alignmentforum.org/posts/TmHRACaxXrLbXb5tS/rohinmshah-s-shortform?commentId=EAKEfPmP8mKbEbERv

Comment by Joe Kwon on Book review: The Age of Surveillance Capitalism · 2022-02-15T21:33:56.413Z · LW · GW

I'm also not sold on this specific part, and I'm really curious about what supports the idea. One reason I don't think it's good to rely on this as the default expectation, though, is that I'm skeptical of humans' ability to even know what the "best experience" is in the first place. I wrote a short, rambly post touching in part on my worries about online addiction: https://www.lesswrong.com/posts/rZLKcPzpJvoxxFewL/converging-toward-a-million-worlds

Basically, I buy into the idea that there are two distinct value systems in humans. One subconscious system where the learning is mostly from evolutionary pressures, and one conscious/executive system that cares more about "higher-order values" which I unfortunately can't really explicate. Examples of the former: craving sweets, addiction to online games with well engineered artificial fulfillment. Example of the latter: wanting to work hard, even when it's physically demanding or mentally stressful, to make some type of positive impact for broader society. 

And I think today's ML systems are asymmetrically exploiting the subconscious value system at the expense of the conscious/executive value system. Even knowing all this, I really struggle to overcome instances of akrasia, control my diet, not drown myself in entertainment consumption, etc. I feel like there should be some attempt to level the playing field, so to speak, with respect to which value system is allowed to thrive. At the very least, people interacting with powerful recommender (or just general) ML systems should have transparency and knowledge about this phenomenon, and ideally complete agency and control over which value system they want to prioritize, and to what extent.

Comment by Joe Kwon on The Intense World Theory of Autism · 2021-09-27T22:27:53.853Z · LW · GW

Very interesting post! 

1) I wonder what your thoughts are on how "disentangled" having a "dim world" perspective and being psychopathic are (completely "entangled" being: all psychopaths experience dim world and all who experience dim world are psychopathic).  Maybe I'm also packing too many different ideas/connotations into the term "psychopathy". 

2) Also, the variability in humans' local neuronal connections and "long-range" neuronal connections seems really interesting to me. My very unsupported, weak suspicion is that there is perhaps a correlation between these ratios (or maybe the raw number of each) and the natural ability to learn and develop expertise in a very narrow domain (music, math?) vs. develop big new ideas whose concepts are largely formed from cross-domain, interdisciplinary thinking. Do you have any thoughts on this? Depending on what we believe here, the answer to question 1) has some very interesting implications, I think.

3) Finally, I wonder if the LessWrong community has a higher rate of "dim world" perspective-havers (or "psychopaths" in the narrowly defined sense of having lower thresholds for stimulation) than the base rate of the general population.

Comment by Joe Kwon on How should my timelines influence my career choice? · 2021-08-04T01:45:21.894Z · LW · GW

Just a small note that your ability to contribute via research doesn't go from 0 now to 1 after you complete a PhD! As in, you can still contribute to AI Safety with research during a PhD.

Comment by Joe Kwon on Internal Information Cascades · 2021-06-27T15:27:30.563Z · LW · GW

Thanks for posting this! I was wondering if you might share more about your "isolation-induced unusual internal information cascades" hypothesis/musings! Really interested in how you think this might relate to low-chance occurrences of breakthroughs/productivity.

Comment by Joe Kwon on Time & Memory · 2021-05-20T22:57:35.896Z · LW · GW

https://www.lesswrong.com/posts/rHhoGHsd3YHPgyFyA/partial-consciousness-as-semantic-symbolic-representational?commentId=b86me3runvdgmNLaT

My original idea (and great points by Rohin against the intuition)

Comment by Joe Kwon on Time & Memory · 2021-05-20T22:56:04.531Z · LW · GW

"To me, it feels viscerally like I have the whole argument in mind, but when I look closely, it's obviously not the case. I'm just boldly going on and putting faith in my memory system to provide the next pieces when I need them. And usually it works out."

This closely relates to the kind of experience that makes me think about language as post hoc symbolic logic fitted to the neural computations of the brain, which is kind of what inspired the hypothesis that a language model trained on a distinct neural net is similar to how humans experience consciousness (and gives the illusion of free will).

Comment by Joe Kwon on Partial-Consciousness as semantic/symbolic representational language model trained on NN · 2021-03-17T02:24:24.488Z · LW · GW

So, I thought it would be a neat proof of concept if GPT-3 served as a bridge between something like a chess engine's actions and verbal/semantic-level explanations of its goals (so that the actions are interpretable by humans). E.g., bishop to g5: this develops a piece and pins the knight to the king, so you can add additional pressure to the pawn on d5 (or something like this).

In response, Reiichiro Nakano shared this paper: https://arxiv.org/pdf/1901.03729.pdf 
which kinda shows it's possible to have agent state/action representations in natural language for Frogger. There are probably glaring/obvious flaws with my OP, but this was what inspired those thoughts.  

Apologies if this is really ridiculous—I'm maybe suggesting ML-related ideas prematurely & having fanciful thoughts. Will be studying ML diligently to help with that.

Comment by Joe Kwon on Joe Kwon's Shortform · 2021-03-16T01:18:12.910Z · LW · GW

Comment by Joe Kwon on Value of building an online "knowledge web" · 2020-05-04T02:01:05.889Z · LW · GW

Thanks, I hadn't thought about those limitations.

Comment by Joe Kwon on Value of building an online "knowledge web" · 2020-05-04T01:59:48.913Z · LW · GW

For the basic features, I got used to navigating everything within an hour. I'll be on the lookout for improvements to Roam or other note-taking programs like this.