Posts

Could transformer network models learn motor planning like they can learn language and image generation? 2023-04-23T17:24:08.952Z
Pitching an Alignment Softball 2022-06-07T04:10:45.023Z

Comments

Comment by mu_(negative) on What TMS is like · 2024-11-19T03:49:30.193Z · LW · GW

Hey, I remember your medical miracle post. I enjoyed it!

"Objectively" for me would translate to "biomarker" i.e., a bio-physical signal that predicts a clinical outcome. Note that for depression and many psychological issues this means that we find the biomarkers by asking people how they feel...but maybe this is ok because we do huge studies with good controls, and the biomarkers may take on a life of their own after they are identified.

I'm assuming you mean biomarkers for psychological / mental health outcomes specifically. This is spiritually pretty close to what my lab studies - ways to predict how TMS will affect individuals, and adjust it to make it work better in each person. Our philosophy - which I had to think about for a bit to even articulate, it's so baked into our thinking - is that the effects of an intervention will manifest most reliably in reactions to very simple cognitive tasks like vigilance, working memory, and so on. Most serious health issues impact your reaction times, accuracy, bias, etc. in subtle but statistically reliable ways. Measuring these with random sampling from a phone app and doing good statistics on the data is probably your best bet for objectively assessing interventions. Maybe that is what Quantified Mind does, I'm not sure?
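
To make "doing good statistics" slightly more concrete, here is a minimal sketch of the kind of comparison you might run on randomly sampled reaction-time data from a phone app, before and after an intervention. The numbers, sample sizes, and the simple two-sample test are all assumptions for illustration; a real analysis would use a within-person repeated-measures or mixed-effects model and control for time of day, practice effects, and so on.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical vigilance-task reaction times (ms), randomly sampled
# throughout the day by a phone app, before and after an intervention.
pre = rng.normal(loc=420, scale=60, size=200)    # baseline samples
post = rng.normal(loc=400, scale=60, size=200)   # post-intervention samples

# Simple two-sample comparison (illustrative only); subtle but reliable
# shifts show up once there are enough samples per condition.
t, p = stats.ttest_ind(pre, post)
print(f"mean pre  = {pre.mean():.1f} ms")
print(f"mean post = {post.mean():.1f} ms")
print(f"t = {t:.2f}, p = {p:.4f}")
```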

The short answer is that if this were easy, it would already be popular, because we clearly need it. A lot of academic labs and industry people are trying to do this all the time. There is growing success, but it's slow going and fraught with non-replicable work.

Comment by mu_(negative) on What TMS is like · 2024-11-19T03:18:01.681Z · LW · GW

Took me a while to get back to this question. I didn't know the answer so I looked up some papers. The short answer is that knowing this requires long follow-up periods, which studies are generally not good at, so we don't have great answers. Definitely a significant number of people don't stay better.

The longer answer is that probably about half of people need some form of maintenance treatment to stay non-depressed for more than a year, but our view of this is very confounded. Some studies have used normal antidepressant medications for maintenance, and some have tried additional rounds of TMS, both of which work really well. Up to a third of patients experience "symptom worsening," meaning that after an initial improvement from TMS their symptoms actually get worse than when they started, but apparently more TMS can fix this in most people? I wasn't completely sure what they were saying here. So yeah, it isn't great. A lot of people need maintenance of some kind. This could very well correlate with whether your depression is the "life circumstances" kind or the "intrinsic brain chemistry" kind, not that we have a great handle on differentiating those two either.

Furthermore, (1) there are a few modes of TMS therapy out there, including most notably the accelerated course, and there may be different relapse rates across these treatment modes. There is some handwaving that the accelerated course may be more effective in this regard but I don't think we know yet. And (2) another important issue with interpreting these data is that many of the studies are done on people who are treatment resistant, such as yourself. It's unclear how much the results translate to the general population of depressed people.

Overall this is probably not a very satisfying answer, I don't really have the specialist inside view on this one.

FYI the most targeted paper I found on this topic is the citation below. Note that it's from 2016. There is probably something more recent, I just didn't have more time to dig.

Sackeim, H. A. (2016). Acute continuation and maintenance treatment of major depressive episodes with transcranial magnetic stimulation. Brain Stimulation: Basic, Translational, and Clinical Research in Neuromodulation, 9(3), 313-319.

Comment by mu_(negative) on What TMS is like · 2024-11-08T04:56:09.273Z · LW · GW

Hi Sable, I'm a TMS (+EEG) researcher. I'm happy to see some TMS discussed here and this is a nice introductory writeup. If you had any specific questions about TMS or the therapy I'd be happy to answer them or point you in the right direction. Depression is not my personal area of study or expertise, but it's hard not to know a lot about depression treatment if you study TMS for a living because it's the most successful application of the technique.

Two specific things you mentioned - first, that TMS depression therapy does not require or use an MRI. You're right to point this out, because doesn't it seem obvious that if we targeted based on the brain's structural or functional properties we could give better therapy? The answer is yes, and it's frankly a miracle that TMS works for depression without this kind of targeting. Still, there is a strong intuition that the current therapy leaves a lot on the table, and lots of labs are studying how to make it better in depression, in other dysfunctions, and in healthy cognition. This is close to my area of research, which is broadly to investigate what spatial or temporal information about the brain is useful for making a given therapy or research intervention better, and to develop techniques for actually using it.

The second thing I noticed was your frustration that you have to fail medications to get approved for TMS. This is super frustrating, but I think it's just about caution in the medical community rather than any first-order conspiracy by pharma. My sense is that medications still work better for most people, where "work better" is a conjunction of the raw effectiveness of the therapy and treatment adherence (and as you mentioned, TMS for depression is not convenient). A related issue is that the medical community knows how well medications work, and so is under some kind of Hippocratic obligation to make people try those first before offering the "experimental" thing. Even though it's over a decade old now, TMS depression therapy is still considered pretty experimental, especially the kind it sounds like you got, which is an accelerated course.

I'm really glad the therapy worked for you, thanks for bringing this content to LessWrong.

Comment by mu_(negative) on Career Scouting: Dentistry · 2022-11-21T07:14:57.227Z · LW · GW

Wanted to say that I enjoyed this and found it much more enlightening than I expected to, given that I have no intrinsic interest in dentistry. I would value a large cross-discipline sample of this question set and think it would have been very useful to my younger self. I think the advice millennials were given when considering college degrees and careers was generally unhelpful magical thinking. These practical questions are helpful. I'd be interested in slightly longer form answers. Are these edited, or was this interviewee laconic?

Comment by mu_(negative) on What "The Message" Was For Me · 2022-10-13T15:26:10.430Z · LW · GW

Yes, the notion of being superseded does disturb me. Not in principle, but pragmatically. I read your point, broadly, to be that there are a lot of interesting potential non-depressing outcomes to AI, up to advocating for a level of comfort with the idea of getting replaced by something "better" and bigger than ourselves. I generally agree with this! However, I'm less sanguine than you that AI will "replicate" its way to evolving a consciousness that leads to one of these non-depressing outcomes. There's no guarantee we get to be subsumed, cyborged, or even superseded. The default outcome is that we get erased by an unconscious machine that tiles the universe with smiley faces and keeps that as its value function until heat death. Or it's at least a very plausible outcome we need to react to. So caring about the points you noted you care about, in my view, translates to caring about alignment and control.

Comment by mu_(negative) on What "The Message" Was For Me · 2022-10-13T04:21:07.680Z · LW · GW

"For example, if I were making replicators, I'd ensure they were faithful replicators "

Isn't this the whole danger of unaligned AI? It's intelligent, it "replicates" and it doesn't do what you want.

Besides physics-breaking 6, I think the only tenuous link in the chain is 5: that AI ("replicators") will want to convert everything to computronium. But that seems like at least a plausible value function, right? That's basically what we are trying to do. It's either that or paperclips, I'd expect.

(Note, applaud your commenting to explain downvote.)

Comment by mu_(negative) on What "The Message" Was For Me · 2022-10-13T04:13:47.025Z · LW · GW

While I may or may not agree with your more fantastical conclusions, I don't understand the downvotes. The analogy between biological, neural, and AI systems is not new, but it is well presented. I particularly enjoyed the analogy that computronium is "habitable space" to AI. Minus the physics-as-we-know-it-breaking steps, which are polemical and not crucial to the argument's point, I'd call on downvoters to be explicit about what they disagree with or find unhelpful.

Speculatively, perhaps at least some find the presentation of AI as the "next stage of evolution" infohazardous. I'd disagree. I think it should start discussion along the lines of "what we mean by alignment". What's the end state for a human society with "aligned" AI? It probably looks pretty alien to our present society. It probably tends towards deep machine-mediated communication blurring the lines between individuals. I think it's valuable to envision these futures.

Comment by mu_(negative) on The Alignment Problem Needs More Positive Fiction · 2022-08-23T22:16:21.698Z · LW · GW

Netcentrica, in this letter your explicit opinion is that fiction with a deep treatment of the alignment problem will not be palatable to a wider audience. I think this is not necessarily true. I think compelling fiction is perhaps the prime vector for engaging a wider, naive audience. Even the Hollywood treatment of I, Robot touched on it and was popular. Not deep or nuanced, sure. But it was there. Maybe more intelligent treatments could succeed if produced with talent.

I mostly stopped reading sci-fi after the era of Asimov and Bradbury. I'd be interested in comments on which modern, popular authors have written or produced AI fiction with the most intelligent treatment of the alignment issue (or related issues), to establish a baseline.

Comment by mu_(negative) on [Link] "The madness of reduced medical diagnostics" by Dynomight · 2022-06-19T00:20:23.039Z · LW · GW

Hmm, yeah, I guess that's a good point. I was thinking myopically at a systems level. The post is useful advice for a patient who is willing to do their own research, confident they can do it thoroughly, and not afraid to "stare into the abyss," i.e., risk getting freaked out or overwhelmed.

Although, I also wonder if insurance companies might try to exploit a patient's prior decision to decline recommended treatment/tests as a reason to not cover future costs...

Comment by mu_(negative) on [Link] "The madness of reduced medical diagnostics" by Dynomight · 2022-06-17T14:30:27.612Z · LW · GW

I don't disagree with you exactly, but I think the focus on rational decision making misses the context the decisions are being made in. Isn't this just an unaligned incentives problem? When a patient complains of an issue, doctors face exposure to liability if they do not recommend tests to clarify the issue. If the tests indicate something, doctors face liability for not recommending corrective procedures. They generally face less liability for positively recommending tests and procedures because the risk is quantifiable beforehand and the patient makes the decision. If they decline a recommended test, the doctor can't be blamed.

The push to do less testing makes sense in that context. It has to emerge at the level of a movement so that the doctors have safety in numbers.

I am not in healthcare, perhaps this is cynical?

Edit: I see that Gwern already mentioned lawsuits briefly in a comment. But I think it deserves a lot more focus, and it obviates "you're not dealing with fully rational agents." I mean, maybe not, but that assumption isn't necessary to get this result.

Comment by mu_(negative) on Pitching an Alignment Softball · 2022-06-09T01:27:38.224Z · LW · GW

Thanks for that link! I agree that there is a danger this pitch doesn't get people all the way to X-risk. I think that risk might be worth it, especially if EA notices popular support failing to grow fast enough - i.e., beyond people with obviously related background and interests. Gathering more popular support for taking small AI-related dangers seriously might move the bigger x-risk problems into the Overton window, whereas right now I think they are very much not. Actually I just realized that this is a great summary of my entire idea, basically, "move the Overton window with softballs before you try to pitch people the fastball."

But also as you said, that approach does model the problem as a war of attrition. If we really are metaphorically moments from the final battle, hail-mary attempts to recruit powerful allies are the right strategy. The problem is that these two strategies are pretty mutually exclusive. You can't be labeled as both a thoughtful, practical policy group with good ideas and also pull the fire alarms. Maybe the solution is to have two organizations pursuing different strategies, with enough distance between them that the alarmists don't tarnish the reputation of the moderates.

Comment by mu_(negative) on Pitching an Alignment Softball · 2022-06-09T01:03:45.578Z · LW · GW

Whoops, apologies, none of the above. I meant to use the adage "you can't wake someone who is pretending to sleep" similarly to the old "It is difficult to make a man understand a thing when his salary depends on not understanding it." A person with vested interests is like a person pretending to sleep. They are predisposed not to acknowledge arguments misaligned with their vested interests, even if they do in reality understand and agree with the logic of those arguments. The most classic form of bias.

I was trying to express that in order to make any impression on such a person you would have to enter the conversation on a vector at least partially aligned with their vested interests, or risk being ignored at best and creating an enemy at worst. Metaphorically, this is like entering into the false "dream" of the person pretending to sleep.

Comment by mu_(negative) on Pitching an Alignment Softball · 2022-06-08T14:51:53.222Z · LW · GW

Although I do like ACC, I haven't read any of the Rama series. It sounds like you're asking if I am advocating for a top down authoritarian society. It's hard to tell what triggered this impression without more detail from you, but possibly it was my mention of creating an "always-good-actor" bot that guards against other unaligned AGIs.

If that's right, please see my update to my post: I make no claim to have good ideas about alignment, and should have flagged that better. The AGA bot is my best understanding of what Eliezer advocates, but that understanding is very weak and vague, and doesn't suggest more than extremely general policy ideas.

If you meant something else, please elaborate!

Comment by mu_(negative) on Pitching an Alignment Softball · 2022-06-08T03:47:14.656Z · LW · GW

Thanks for your reply! I like your compressed version. That feels to me like it would land on a fair number of people. I like to think about trying to explain these concepts to my parents. My dad is a healthcare professional, very competent with machines, can do math, can fix a computer. If I told him superintelligent AI would make nanomachine weapons, he would glaze over. But I think he could imagine having our missile systems taken over by a "next-generation virus."

My mom has no technical background or interests, so she represents my harder test. If I read her that paragraph she'd have no emotional reaction or lasting memory of the content. I worry that many of the people who are the most important to convince fall into this category. 

Comment by mu_(negative) on Pitching an Alignment Softball · 2022-06-08T03:20:05.994Z · LW · GW

Thanks for your replies! I'm really glad my thoughts were valuable. I did see your post promoting the contest before it was over, but my thoughts on this hadn't coalesced yet.

At this time, I don't know how much sense it makes to risk posing as someone you're not (or, at least, accidentally making a disinterested policymaker incorrectly think that's what you're doing).

Thanks especially for this comment. I noticed I was uncomfortable while writing that part of my post, and I should have paid more attention to that signal. I think I didn't want to water down the ending because the post was already getting long. I should have put a disclaimer that I didn't really know how to conclude, and that section is mostly a placeholder for what people who understand this better than me would pitch. To be clearer here: I do not intend to express any opinion on what to tell policymakers about solutions to these problems. I know hardly anything about practical alignment, just the general theory of why it is important. (I'm going to edit my post to point at this comment to make sure that's clear.)

What you're talking about, bypassing talk of superintelligence or recursive self-improvement, is something that I agree would be pure gold but only if it's possible and reasonable to skip that part. Hyperintelligent AI is sorta the bread and butter of the whole thing [...]

Yup, I agree completely. I should have said in the post that I only weakly endorse my proposed approach. It would need to be workshopped to explore its value - especially, which signals from the listener suggest going deeper into the rabbit hole versus popping back out into impacts on present-day issues. My experience talking to people outside my field is that at the first signal someone doesn't take your niche issue seriously, you had better immediately connect it back to something they already care about or you've lost them. I wrote with the intention to provide the lowest-common-denominator set of arguments to get someone to take anything in the problem space seriously, so they at least have a hope of being worked slowly towards the idea of the real problem. I also wrote it as an ELI5-level pitch for politicians who think the internet still runs on telephones. So like a "worst case scenario" conversation. But if this approach got someone worrying about the wrong aspect of the issue or misunderstanding critical pieces, it could backfire.

If I were going to update my pitch to better emphasize superintelligence, my intuition would be to lean into the video-spoofing angle. It doesn't require any technical background to imagine a fake person socially engineering you on a Zoom call. GPT-3 examples are already sufficient to drive home the Turing Test "this is really already happening" point. So the missing pieces are just seamless audio/video generation, and the ability of the bot to improvise its text generation towards a goal as it converses. It's then a simple further step to envision the bad-actor bot's improvisation getting better and better until it doesn't make mistakes, is smarter than a person, and can manipulate us into doing horrible things - especially because it can be everywhere at once. This argument scales from there to however much "AI-pill" the listener can swallow. I think the core strength of this framing is that the AI is embodied. Even if it takes the form of multiple people, you can see it and speak to it. You could experience it getting smarter, if that happened slowly enough. This should help someone naive get a handle on what it would feel like to be up against such an adversary.

The problem is that this body of knowledge is very, very cursed. There are massive vested interests, a ton of money and national security, built on a foundation of what is referred to as "bots" in this post. 

Yeah, absolutely...I was definitely tiptoeing around this in my approach rather than addressing it head on. That's because I don't have good ideas about that and suspect there might not be any general solutions. Approaching a person with those interests might just require a lot more specific knowledge and arguments about those interests to be effective. There is that old saying "You cannot wake someone who is pretending to sleep." Maybe you can, but you have to enter their dream to do it.

Comment by mu_(negative) on We will be around in 30 years · 2022-06-07T05:51:55.433Z · LW · GW

Cool, I just wrote a post with an orthogonal take on the same issue. Seems like Eliezer's nanotech comment was pretty polarizing. Self-promotion: Pitching an Alignment Softball

I worry that the global response would be impotent even if the AGI was sandboxed to twitter. Having been through the pandemic, I perceive at least the United States' political and social system to be deeply vulnerable to the kind of attacks that would be easiest for an AGI - those requiring no physical infrastructure.

This does not directly conflict with or even really address your assertion that we'll all be around in 30 years. It seems like you were very focused here on a timeline for actual extinction. I guess I'm looking for a line to draw about "when will unaligned AGI make life no longer worth living, or at least destroy our ability to fight it?" I find this a much more interesting question, because at that point it doesn't matter if we have a month or 30 years left - we're living in caves on borrowed time.

My expectation is that we don't even need AGI or superintelligence, because unaligned humans are going to provide the intelligence part. The missing doomsday ingredient is ease of attack, which is getting faster, better, and cheaper every year.

Comment by mu_(negative) on Pitching an Alignment Softball · 2022-06-07T02:53:44.858Z · LW · GW

Hi Moderators, as this is my first post I'd appreciate any help in giving it appropriate tags. Thanks