Posts

Teaching My Younger Self to Program: A case study of how I'd pass on my skill at self-learning 2024-12-01T21:05:15.602Z
Is School of Thought related to the Rationality Community? 2024-10-15T12:41:33.224Z
Runner's High On Demand: A Story of Luck & Persistence 2024-09-29T17:15:29.494Z
Explore More: A Bag of Tricks to Keep Your Life on the Rails 2024-09-28T21:38:52.256Z
Where is the Learn Everything System? 2024-09-27T21:30:16.379Z
Introduction to Super Powers (for kids!) 2024-09-20T17:17:27.070Z
DM Parenting 2024-07-16T08:50:08.144Z
Bed Time Quests & Dinner Games for 3-5 year olds 2024-06-22T07:53:38.989Z
Dyslucksia 2024-05-09T19:21:33.874Z
Predicting Alignment Award Winners Using ChatGPT 4 2024-02-08T14:38:37.925Z
Discussion Meetup 2024-02-07T10:03:04.958Z
New Years Meetup (Zwolle) 2023-12-30T11:23:33.414Z
Mini-Workshop on Applied Rationality 2023-10-11T09:13:00.325Z
United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress 2023-04-20T23:19:01.229Z
March - Social Meetup 2023-03-04T20:19:30.626Z
Short Notes on Research Process 2023-02-22T23:41:45.279Z
February Online Meetup 2023-02-11T05:45:09.464Z
Reflections on Deception & Generality in Scalable Oversight (Another OpenAI Alignment Review) 2023-01-28T05:26:49.866Z
A Simple Alignment Typology 2023-01-28T05:26:36.660Z
Optimizing Human Collective Intelligence to Align AI 2023-01-07T01:21:25.328Z
Announcing: The Independent AI Safety Registry 2022-12-26T21:22:18.381Z
New Years Social 2022-12-26T01:22:31.930Z
Loose Threads on Intelligence 2022-12-24T00:38:41.689Z
Research Principles for 6 Months of AI Alignment Studies 2022-12-02T22:55:17.165Z
Three Alignment Schemas & Their Problems 2022-11-26T04:25:49.206Z
Winter Solstice - Amsterdam 2022-10-13T12:52:22.337Z
Deprecated: Some humans are fitness maximizers 2022-10-04T19:38:10.506Z
Let's Compare Notes 2022-09-22T20:47:38.553Z
Overton Gymnastics: An Exercise in Discomfort 2022-09-05T19:20:01.642Z
Novelty Generation - The Art of Good Ideas 2022-08-20T00:36:06.479Z
Cultivating Valiance 2022-08-13T18:47:08.628Z
Alignment as Game Design 2022-07-16T22:36:15.741Z
Research Notes: What are we aligning for? 2022-07-08T22:13:59.969Z
Naive Hypotheses on AI Alignment 2022-07-02T19:03:49.458Z
July Meet Up - Utrecht 2022-06-22T21:46:13.752Z

Comments

Comment by Shoshannah Tekofsky (DarkSym) on Explore More: A Bag of Tricks to Keep Your Life on the Rails · 2024-12-05T09:06:36.899Z · LW · GW

Thanks! Glad to hear it :D

Comment by Shoshannah Tekofsky (DarkSym) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T17:17:24.877Z · LW · GW

Oh shit. It's even worse: I read the decimal separators as thousands separators.

I'm gonna just strike through my comment.

Thanks for noticing ... <3

Comment by Shoshannah Tekofsky (DarkSym) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T08:21:30.971Z · LW · GW

As someone who isn't really in a position to donate much at all, and who feels rather silly about the small amount I could possibly give, and what a tiny drop that is compared to the bucket this post is sketching...

I uh ... sat down and did some simple math. If everyone who ever votes (>12M) donates $10, then you'd have >$120 million covered. If we follow the bullshit statistics of internet activity, where it's said that 99% of all content is generated by 1% of all people, then this heuristic would get us $1.2M from people paying this one-time "subscription" fee. Now I also feel, based on intuition and ass-numbers, that LW folk have a better ratio than that, so let's multiply by 2: then we could get $2.4 million together from small donations.
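The back-of-envelope numbers above can be sketched in a few lines. All inputs are the comment's own guesses (variable names are mine, and the >12M figure later turns out to be the number of votes rather than the number of voters):

```python
# Rough sketch of the donation back-of-envelope math above.
voters = 12_000_000        # ">12M" figure from the comment (actually votes, not voters)
donation = 10              # one-time $10 "subscription" fee
active_fraction = 0.01     # "1% of people generate 99% of content" heuristic
lw_multiplier = 2          # guess that LW's active ratio is better than average

upper_bound = voters * donation                         # $120M if everyone gives
active_only = int(voters * active_fraction) * donation  # $1.2M from the active 1%
adjusted = active_only * lw_multiplier                  # $2.4M with the 2x guess
print(upper_bound, active_only, adjusted)
```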

Now on the pure power of typical mind ... I personally like people knowing when I do a nice thing - even a stupidly small thing.

So I'm commenting about it.

I find this embarrassing, and I'm working through the embarrassment to make it easier for others to farm this nutrient too, and to just normalize it, in case that helps with getting a critical mass of small donations of the $10 variety.

Basically my point to readers is: 'Everyone' paying a one-time $10 subscription fee would solve the problem.

The trick is mostly to help each other generate the activation energy to do this thing. If it helps to post, high five, or wave about it, please do! Visibility of small donations may help with activation energy and get us to critical mass! Group action is awesome. Using your natural reward centers for it is great! <3 Hi :D Wanna join?

Thanks, abstractapplic, for noticing the first error in my calculation: it's the number of votes, not the number of people voting. Additionally, I noticed I applied the power of dyslexia to the decimal point and read it as a thousands separator. So ignore the errored-out math, give what you can, and maybe upvote each other for support on giving as much as possible?

PS: I would prefer if actually-big donors got upvoted more than my comment of error math. Feel free to downvote my comment just to achieve a better ordering of comments. Thanks. <3

PPS: Note to the writer: maybe remove decimal numbers entirely throughout the graphs? This is what it looked like for me, and it led to the error. And this image is way zoomed in compared to what I see naturally on my screen.

Comment by Shoshannah Tekofsky (DarkSym) on Is School of Thought related to the Rationality Community? · 2024-11-29T14:22:38.548Z · LW · GW

Thanks for the explanation! Are you familiar with the community here and around Astral Codex Ten (ACX)? There are meetups and events (and a lot of writers) who focus on the art and skill of rationality. That was what led to my question originally.

Comment by Shoshannah Tekofsky (DarkSym) on Explore More: A Bag of Tricks to Keep Your Life on the Rails · 2024-11-07T16:54:47.242Z · LW · GW

This made me unreasonably happy. Thank you :D

Comment by Shoshannah Tekofsky (DarkSym) on Explore More: A Bag of Tricks to Keep Your Life on the Rails · 2024-11-05T15:09:25.943Z · LW · GW

Thank you for the in-depth thoughts!

Comment by Shoshannah Tekofsky (DarkSym) on Explore More: A Bag of Tricks to Keep Your Life on the Rails · 2024-11-05T15:06:21.494Z · LW · GW

Thank you!

It was a joke :) I had been warned by my friends that the joke was either only mildly funny or just entirely confusing. But I personally found it hilarious so kept it in. Sorry for my idiosyncratic sense of humor ;)

Comment by Shoshannah Tekofsky (DarkSym) on Is School of Thought related to the Rationality Community? · 2024-10-15T19:45:30.146Z · LW · GW

Oh cool!

I was asking for any connection of any type. The overlap just seemed so great that I’d expect there to be a connection of some sort. The Clearer Thinking link makes sense and is an example, thank you!

Comment by Shoshannah Tekofsky (DarkSym) on Is School of Thought related to the Rationality Community? · 2024-10-15T14:18:41.046Z · LW · GW

Oh and also, thank you for checking and sharing your thoughts! :)

Comment by Shoshannah Tekofsky (DarkSym) on Is School of Thought related to the Rationality Community? · 2024-10-15T14:18:16.602Z · LW · GW

I didn't look deeply into the material, but good branding gives people a good feeling about a thing, and I think rationality could use some better branding. In my experience a lot of people bounce off a lot of the material cause they have negative associations with it or it's not packaged in a way that appeals to them. I think even if the material is too superficial to be useful as content (I didn't check), it's still useful for increasing people's affinity for, and positive associations with, rationality.

Comment by Shoshannah Tekofsky (DarkSym) on Parental Writing Selection Bias · 2024-10-13T14:50:25.939Z · LW · GW

Yeah, I can second this entire sentiment. I try to write up parenting tricks that work for me, but only the ones that clearly won't reflect negatively on my kids or feel too personal. And then I realized that a lot of the most valuable information I could read as a parent, I'll never find, cause a parent with high integrity is not going to write down very negative experiences they had with their kids and all the ways they failed to respond optimally. It reminds me a little of Duncan's social dark matter concept.

Comment by Shoshannah Tekofsky (DarkSym) on MakoYass's Shortform · 2024-10-11T10:53:40.143Z · LW · GW

Oh this is amazing. I can never keep the two apart cause of the horrible naming. I think I’m just going to ask people if they mean intuition or reason from now on.

Comment by Shoshannah Tekofsky (DarkSym) on Where is the Learn Everything System? · 2024-10-09T11:35:06.153Z · LW · GW

Thank you for the clarification!

I think I agree this might be more a matter of semantics than underlying world model. Specifically:

Bill.learning = "process of connecting information not known, to information that is known"

Shoshannah.learning = "model [...] consisting of 6 factors - Content, Knowledge Representation, Navigation, Debugging, Emotional Regulation, and Consolidation." (Note: I'm considering a 7th factor at the moment, which is transfer learning. This factor may actually bridge our two models.)

Bill.teaching = "applying a delivery of information for the learner with a specific goal in mind for what that learner should learn"

Shoshannah.teaching = [undefined so far], but actually "Another human facilitating steps in the learning process of a given human"

---

With those as our word-concept mappings, I'm mostly wondering what "learning" bottoms out to in your model? Like, how does one learn?

One way to conceptualize my model is as:
Data -> encoding -> mapping -> solution search -> attention regulation -> training runs

And the additional factor would be "transfer learning" or I guess fine-tuning (yourself) by noticing how what you learn applies to other areas as well.

And a teacher would facilitate this process by stepping in and providing content/support/debugging for each step that needs it.

I'm not sure why you are conceptualizing the learning goal as being part of the teacher and not the learner? I think they both hold goals, and I think learning can happen goal-driven or 'free', which I think is analogous to the "play" versus "game" distinction in ludology - and slightly less tightly analogous to exploration versus exploitation behavior.

I'm curious if you agree with the above.

Comment by Shoshannah Tekofsky (DarkSym) on What is it like to be psychologically healthy? Podcast ft. DaystarEld · 2024-10-07T09:11:29.411Z · LW · GW

Hmmm, I think ‘healthy’ is saying too much. This is one particular way of being psychologically healthy, but in my model you can be psychologically healthy and suffer more than 5 minutes per week and experience inner conflict some of the time. I think this is implicitly making the target too narrow for people that care about getting there and might consider this a reference point.


Also, I’m curious if the depression comment also refers to adaptive depression, like when someone very close to you dies and you need to adapt? (I’m not making a case that prolonged grief is good but I would make the case that grieving for 6 months or so is not psychologically unhealthy).


All the other points seem fine to me ❤️

Comment by Shoshannah Tekofsky (DarkSym) on Where is the Learn Everything System? · 2024-10-04T12:07:27.840Z · LW · GW

Thanks, Bill! I appreciate the reframe. I agree teaching and learning are two different activities. However, I think the end goal is that the user can learn whatever they need to learn, in whatever way they can learn it. As such, the learner activity is more central than the teaching activity - Having an ideal learning activity will result in the thing we care about (-> learning). Having the ideal teaching experience may still fall flat if the connection with the learner is somehow not made.

I'm curious what benefits you notice from applying the reframe to focusing on the teaching activity first. Possibly more levers to pull on as it's the only side of the equation we can offer someone from the outside?

Comment by Shoshannah Tekofsky (DarkSym) on Runner's High On Demand: A Story of Luck & Persistence · 2024-09-29T22:04:18.208Z · LW · GW

I never run longer than an hour, and it always lasts till the end of my run. It disappears near-instantly when I stop running. Even tying my shoelaces or whatever is really obstructive, cause it takes me a minute or two to get back into it after.

I do have after-workout glow and have always had that. Like I feel good after a decent workout for a couple of hours no matter what I do. It’s not related to the runners high. But it means it’s not like my state goes back to baseline when the runners high fades.

Comment by Shoshannah Tekofsky (DarkSym) on Runner's High On Demand: A Story of Luck & Persistence · 2024-09-29T21:36:00.934Z · LW · GW

How does that runner's high feel?

Like taking good painkillers, being high energy but calm, having great focus, having a clear mind free of rumination or worry, empowering like nothing can stop me.

Because your method of getting there sounds like hell on earth. I'd want to know what the payoff is.

I mean, yeah. The method is gruelling. FWIW, I do have anecdotal data that such "bootcamp"-like workouts can more often push people through a plateau in their physical fitness. I'm guessing there are preconditions involved, though.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-07-19T08:36:16.614Z · LW · GW

Interesting! Thank you for sharing

Comment by Shoshannah Tekofsky (DarkSym) on DM Parenting · 2024-07-17T10:49:04.595Z · LW · GW

Aw glad to hear it! That brought a smile to my face! :D

Comment by Shoshannah Tekofsky (DarkSym) on DM Parenting · 2024-07-17T05:12:56.021Z · LW · GW

Lol, thanks! :D

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-14T11:46:03.787Z · LW · GW

Oh wow, I love this! Thank you for looking in to this and sharing!

It lines up with my intuitions and my experience trying to learn Japanese. I found all of it as baffling as any new language I've tried to learn, except kanji. I noticed I found learning kanji far easier than learning any words in hiragana or katakana (both phonetic instead of pictorial), and also that I found learning kanji easier than most non-dyslectic English speakers I ran into (I didn't run into many Dutch speakers).

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-13T09:12:50.512Z · LW · GW

I was low-key imagining you speaking German like Rammstein and then Japanese like Baby Metal.

My inner comedian notwithstanding, that sounds awesome!

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-13T09:12:10.161Z · LW · GW

oh huh ... It hadn't occurred to me to use it for memorization. I should try that, considering I think I have subpar memory for non-narrative/non-logical information like strings of numbers. Good point!

Conversely, I think I have above-average memory for narrative and logically coherent information, like how things work or events that happened in the past. It feels like that type of information has a ton of "hooks", such that I can use any one of a dozen of them to recall the entire package, while a string of numbers has no hooks. It's like someone is asking me to repeat white noise. But phone numbers and codes and whatnot are exactly that. Let alone trying to keep track of the numbers on something like a graphics card or processor (I gave up).

Comment by Shoshannah Tekofsky (DarkSym) on Selfmaker662's Shortform · 2024-05-12T13:39:16.276Z · LW · GW

These are quizzes you make yourself. Did OKC ever have those? It's not for a matching percentage.

A quiz in Paiq is 6 questions: 3 multiple choice and 3 open. If someone gets the multiple-choice answers right, then you get to see their open-question answers as a match request, and you can accept or reject the match based on that. I think it's really great.

You can also browse other people's tests and see if you want to take any. The tests seem more descriptive of someone than most written profiles I've read, cause it's much harder to misrepresent personal traits in a quiz than in a self-declared profile.
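The matching rule described above can be sketched roughly like this (all names and exact rules are my guesses, not Paiq's actual implementation):

```python
# Hypothetical sketch of the quiz-matching flow described above.
# Class and function names are invented for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Quiz:
    mc_correct: List[str]    # correct answers to the 3 multiple-choice questions

@dataclass
class Attempt:
    mc_answers: List[str]    # taker's multiple-choice answers
    open_answers: List[str]  # taker's 3 free-text answers

def match_request(quiz: Quiz, attempt: Attempt) -> Optional[List[str]]:
    """If the multiple-choice answers are all correct, the quiz owner
    sees the taker's open answers as a match request; otherwise nothing."""
    if attempt.mc_answers == quiz.mc_correct:
        return attempt.open_answers
    return None
```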

Comment by Shoshannah Tekofsky (DarkSym) on Selfmaker662's Shortform · 2024-05-11T14:17:09.596Z · LW · GW

I discovered the Netherlands actually has a good dating app that doesn't exist outside of it... I'm rather baffled. I have no idea how they started. I've messaged them asking if they will localize and expand and they thanked me for the compliment so... Dunno?

It's called Paiq and has a ton of features I've never seen before, like speed dating, picture hiding by default, quizzes you make for people that they can try to pass to get a match with you, photography contests that involve taking pictures of stuff around you and getting matched on that, and a few other things... It's just this grab bag of every way to match people that is not your picture or a blurb. It's really good!

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-11T11:42:35.298Z · LW · GW

That sounds great! I have to admit that I still get a far richer experience from reading out loud than subvocalizing, and my subvocalizing can't go faster than my speech. So it sounds like you have an upgraded form with more speed and richness, which is great!

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-11T11:40:13.122Z · LW · GW

Thanks! :D

Attention is a big part of it for me as well, yes. I feel it's very easy to notice when I skip words when reading out loud, and getting the cadence of a sentence right only works if you have a sense of how it relates to the previous and next one.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-11T06:41:41.554Z · LW · GW

Yeah, that's my understanding as well.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T12:57:53.675Z · LW · GW

Oh interesting! Maybe I'm wrong. I'm more curious about something like a survey on the topic now.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T09:21:03.529Z · LW · GW

This is really good! Thank you for sharing! Competition drive and wanting to achieve certain things are great motivations, and I think in any learning process the motivation one can tap into is at least as important as the actual learning technique. I'm glad you had access to that.

I tend to feel a little confused about the concept of "intelligence", as I guess my post already illustrated, haha. I think the word as we use it is very imprecise for cases like this. I'd roughly expect people with higher general intelligence to be much faster and more successful at finding workarounds for their language processing issues, but I'd also expect the variance to be so high as to make plotting your general intelligence against "how quickly did you tame your dyslexia" not make much sense.

Then again, I do agree with a comment somewhere else here that Typical Minding is a thing, and my intuitions here may be wrong cause I'm failing to understand what it's like for other minds and I might have overcorrected due to 25 years of incorrectly concluding I was kind of dumb. Lol.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T07:59:04.992Z · LW · GW

Interesting! Thank you for sharing! I'd love to know the answer as well.

Anecdotally, I can say that I did try to learn Japanese a little, and I found kanji far easier to learn than words in hiragana or katakana, cause relating a "picture" to a word seemed far easier for me to parse and remember than "random phonetic encodings". I'm using quotation marks to indicate my internal experience, cause I'm a little mistrustful by now of whether I'm even understanding how other people parse words and language.

Either way, that anecdote would point to my pictorial->meaning wiring being stronger than my phoneme-encoding->meaning wiring. Which might explain why processing language as drawings helped me. I really have no idea how much this would generalize. But I agree people must run into this when learning new alphabets.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T07:53:49.756Z · LW · GW

[mind blown]

Minds are so interesting! Thank you for sharing!

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T07:53:03.489Z · LW · GW

Yeah, that sounds about right. Dutch culture additionally reinforces the typical mind fallacy quite strongly, cause being "different" in any direction is considered uncomfortable or unsocial, and everyone is encouraged to conform to the norm. There is a lot of reference to how all humans are essentially the same, and how you shouldn't think you are somehow different or special. I think I absorbed these values quite a bit, and then applied some motivated cognition to not notice the differences in how I was processing information compared to my peers.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T07:50:53.876Z · LW · GW

Thank you! I appreciate you sharing that!

My mother is/was very aware of historical practices and I think she often normalized my reading out loud with these types of references as well :)

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T07:47:02.173Z · LW · GW

I'm now going to admit your question made me realize I'm not sure "subvocalize" refers to the same thing for everyone ... I could always read in my head, but the error rate was huge. Only in my early 20s did I switch to a way of reading in my head that also does cadence and voices etc. The latter is what I mean by subvocalizing: The entire richness of an audiobook, generated by my own voice, but just so softly no one else can hear. It's a gradient from normal speech volume, to whisper, to whispering so softly no one can hear, to moving my lips and no sound coming out, to entire subvocalization.

Anyway, my prediction is that non-dyslectics do not subvocalize - it's much too slow. You can't read faster than you speak in that case.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-10T07:42:32.761Z · LW · GW

Thank you for sharing!

Would it be correct to say that the therapy gave you the tools to read and write correctly with effort, and that the bullet point list shows motivations you experienced to actually apply that effort?

Cause my problem was mostly that I didn't know how to even notice the errors I was making, let alone correct for them. Once I knew how to notice them, I was, apparently, highly motivated to do so.

Comment by Shoshannah Tekofsky (DarkSym) on Dyslucksia · 2024-05-09T21:02:12.132Z · LW · GW

Aaaaw, thank you for saying that! I appreciate it!

Comment by Shoshannah Tekofsky (DarkSym) on Predicting Alignment Award Winners Using ChatGPT 4 · 2024-02-08T17:36:19.802Z · LW · GW

Oh, that does help to know, thank you!

Comment by Shoshannah Tekofsky (DarkSym) on New Years Meetup (Zwolle) · 2024-01-13T10:37:25.746Z · LW · GW

Hi! Commenting so everyone gets a message about this:

Location is The Refter in Zwolle at Bethlehemkerkplein 35a, on the first floor!

If you have trouble finding it, feel free to ping me here, on the Discord, or in the WhatsApp group. A link to the Discord can be found below!

Comment by Shoshannah Tekofsky (DarkSym) on Mini-Workshop on Applied Rationality · 2023-10-21T11:22:17.395Z · LW · GW

We are moving to Science Park Library

Comment by Shoshannah Tekofsky (DarkSym) on Mini-Workshop on Applied Rationality · 2023-10-20T21:38:34.113Z · LW · GW

The ACX meeting on the same day is unfortunately cancelled. For that reason we are extending the sign-up deadline:

If you have a confirmation email, then you can definitely get in.

Otherwise, fill out the form and we'll select 3 people for the remaining spots. If people show up without signing up, they can get in if we are below 20. If we are at 20 or more, then no dice :D

(Currently 17)

Comment by Shoshannah Tekofsky (DarkSym) on Mini-Workshop on Applied Rationality · 2023-10-18T11:12:55.981Z · LW · GW

Update: So far 11 people have been confirmed for the event. If you filled out the sign up form, but did not receive an email with confirmation, and you think you should, please DM me here on LW.

The last review cycle will be Friday morning, so if you want to attend, be sure to fill out the form before then.

Looking forward to seeing you there!

Comment by Shoshannah Tekofsky (DarkSym) on Mini-Workshop on Applied Rationality · 2023-10-16T14:43:47.744Z · LW · GW

Here is the sign-up form. Please fill it out before Friday. People who are accepted into the workshop will receive an email to that effect.

Comment by Shoshannah Tekofsky (DarkSym) on Mini-Workshop on Applied Rationality · 2023-10-14T15:32:30.823Z · LW · GW

We have hit 15 signups!

Keep an eye on your inboxes for the signup form.

Comment by Shoshannah Tekofsky (DarkSym) on United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress · 2023-04-22T06:08:00.063Z · LW · GW

Well damn... Well spotted.

I found the full-text version and will dig into this next week to see what's up exactly.

Comment by Shoshannah Tekofsky (DarkSym) on United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress · 2023-04-21T18:12:52.230Z · LW · GW

Thank you! I wholeheartedly agree to be honest. I've added a footnote to the claim, linking and quoting your comment. Are you comfortable with this?

Comment by Shoshannah Tekofsky (DarkSym) on United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress · 2023-04-21T05:28:33.323Z · LW · GW

Oooh gotcha. In that case, we are not remotely any good at avoiding the creation of unaligned humans either! ;)

Comment by Shoshannah Tekofsky (DarkSym) on United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress · 2023-04-21T01:48:09.415Z · LW · GW

Could you paraphrase? I'm not sure I follow your reasoning... Humans cooperate sufficiently to generate collective intelligence, and they cooperate sufficiently due to a range of alignment mechanics between humans, no?

Comment by Shoshannah Tekofsky (DarkSym) on Fucking Goddamn Basics of Rationalist Discourse · 2023-02-04T05:41:32.910Z · LW · GW

Should we have a "rewrite the Basics of Rationalist Discourse" contest?

Not that I think anything is gonna beat this. But still :D

PS: entries can be content and/or style

Comment by Shoshannah Tekofsky (DarkSym) on A Simple Alignment Typology · 2023-01-30T23:50:04.547Z · LW · GW

Thank you! I appreciate the in-depth comment.

Do you think any of these groups hold that all of the alignment problem can be solved without advancing capabilities?