Posts

Catching the Eye of Sauron 2023-04-07T00:40:46.556Z
Consciousness Actually Explained: EC Theory 2022-08-24T22:18:05.022Z
Blog dedicated to rebuilding technology from civ collapse? 2022-02-04T23:28:01.754Z

Comments

Comment by Casey B. (Zahima) on What is MIRI currently doing? · 2024-12-17T15:02:08.453Z · LW · GW

I got curious why this was getting agreement-downvoted, and the only links I could find on the main/old MIRI site to the techgov site were in the last two blogposts. Given their stated strategy shift to policy/comms, this does seem a little odd/suboptimal; I'd expect them to be more prominently/obviously linked. To be fair, the new techgov site does have a prominent link back to the old site. 

Comment by Casey B. (Zahima) on What is MIRI currently doing? · 2024-12-14T03:48:42.286Z · LW · GW

Some technical governance work at: https://techgov.intelligence.org/research 

https://x.com/peterbarnett_/status/1864405388092952595

https://x.com/peterbarnett_/status/1864425086486466621

Comment by Casey B. (Zahima) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T13:48:43.570Z · LW · GW

Haven't finished reading this, but I just want to say how glad I am that LW 2.0 and everything related to it (Lightcone, etc.) happened. I came across LW at a time when it seemed "the diaspora" was just going to get more and more dispersed; that "the scene" had ended. I feel disappointed/guilty about how little I did to help this resurgence, like watching from the sidelines as a good thing almost died but then saved itself. 

How I felt at the time of seemingly peak "diaspora" actually somewhat reminds me of how I feel about CFAR now (though to a much lesser extent than LW). I think there is still some activity, but it seems mostly dead: a valiant attempt at a worthwhile problem. But there are many Problems and many Good Things in the world, and limited time, and am I really going to invest time figuring out whether this particular Thing is truly dead? Or start up my own rationality-training-adjacent effort? Or some other high-leverage Good Thing? Generic EA? A giving pledge? The result is that I carry on trying to do what I thought was most valuable, perversely hoping for some weird mix of "that Good Thing was actually dead or close to it; it's good you didn't jump in, as you'd be swimming against the tide" vs "even if not dead, it wasn't/isn't a good lever in the end" vs "your chosen alternative project/lever is a good enough guess at doing good; you aren't responsible for the survival of all Good Things". 

And tbh I'm a little murky on the forces that led to the LW resurgence, even if we can point to single clear boosts like EY's recent posts. But I'll finish reading the post to see if my understanding changes. 

Comment by Casey B. (Zahima) on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-19T22:49:02.570Z · LW · GW

This account is pretty good, though not always up to the standard of "shaping the world" (you will have to scroll to get past their coverage of this same batch of OpenAI-related emails): https://x.com/TechEmails 

Their Substack: https://www.techemails.com/ 

Comment by Casey B. (Zahima) on Making a conservative case for alignment · 2024-11-15T22:31:45.360Z · LW · GW

While you nod to 'politics is the mind-killer', I don't think the right lesson is being taken away, or at least not with enough emphasis. 

Whether one is an accelerationist, Pauser, or an advocate of some nuanced middle path, the prospects/goals of everyone are harmed if the discourse-landscape becomes politicized/polarized. All possible movement becomes more difficult. 

"Well we of course don't want that to happen, but X ppl are in power, so it makes sense to ask how X ppl tend to think and cater our arguments to them" 

If your argument is taking advantage of features of {group of ppl X} qua X, then that is almost unavoidably going to run counter to some Y qua Y (either as a direct consequence of the arguments and/or because Nuance cannot survive public exposure), and if it isn't, then why couldn't the argument have been made completely apolitically to begin with? 

I just continue to think that any mention, literally at all, of ideology or party is courting discourse-disaster for all, again no matter what specific policy one is advocating for. Do we all remember what happened with COVID masks? Or what is currently happening with the discourse surrounding Elon? Nuance just does not survive public exposure, and nobody is going to fix that in the few years we have left (and this is a public document). The best way forward continues to be apolitical good arguments. Yes, those arguments are going to be sent towards those who are in power at any given time, but you can do that without routing through ideology.  

To touch on ideology/alliance, even in passing reference (e.g. the "c" word included in the title of this post), is to risk the poison/mindkill spreading in a way that is basically irreversible, because correcting it (other than with comments like this one just calling to Stop Referencing Ideology) usually involves Referencing An Ideology. Like a bug stuck in a glue trap, it places yet another limb into the glue in a vain attempt to push itself free. 

Comment by Casey B. (Zahima) on Bitter lessons about lucid dreaming · 2024-10-28T13:19:15.279Z · LW · GW

> especially if you're woken up by an alarm

I suspect this is a big factor. I haven't used an alarm to wake up for ~2 years and can't recall the last time I remembered a dream. Without an alarm you're left in a half-awake state for some number of minutes before actually waking/getting up, which is probably when one forgets. 

Comment by Casey B. (Zahima) on If I wanted to spend WAY more on AI, what would I spend it on? · 2024-09-16T19:37:48.982Z · LW · GW

I largely don't think we're disagreeing? My point didn't depend on a distinction between 'raw' capabilities vs 'possible right now with enough arranging' capabilities, and was mostly: "I don't see what you could actually delegate right now, as opposed to operating in the normal paradigm of AI co-work the OP is already saying they do (chat, Copilot, imagegen)", and then your personal example is detailing why you couldn't currently delegate a task. Sounds like agreement. 

Also I didn't really consider your example of: 
 
> "email your current blog post draft to the assistant for copyediting".

to be outside the paradigm of AI co-work the OP is already doing, even if it saves them time. Scaling up this kind of work to the point of $1k would seem pretty difficult and also outside what I took to be their question, since this amounts to "just work a lot more yourself, and thus the proportion of work you currently use AI for will go up till you hit $1k". That's a lot of API credits for such normal personal use.  

... 

But back to your example: I do question just how much of a leap of insight/connection would be necessary to write the standard Gwern mini-article. Maybe in this exact case you know there is enough latent insight/connection in your clippings/writings, the LLM corpus, and possibly some rudimentary Wikipedia/tool use, such that your prompt providing the cherry-on-top connecting idea ('spontaneous biting is prey drive!') could actually produce a Gwern-approved mini-essay. You'd know the level of insight-leap for such articles better than I, but do you really think there'd be many such things within reach for very long? I'd argue an agent that could do this semi-indefinitely, rather than just clearing your backlog of maybe ~20 such ideas, would be much more capable than what we currently see, in terms of necessary 'raw' capability. But maybe I'm wrong and you regularly have ideas that sufficiently fit this pattern, where the bar to pass isn't "be even close to as capable as Gwern" but "there's enough lying around to make the final connection; just write it up in the style of Gwern". 

Clearly, something that could actually write any Gwern article would have at least your level of capability, and would foom or do something similar; it'd be self-sustaining. Instead, what you're describing is a setup where most of the insight, knowledge, and connection is already there, and is an instance of what I'd argue is a narrow band of possible tasks that could be delegated without necessitating {capability powerful enough to self-sustain and maybe foom}. I don't think this band is very wide; there aren't many tasks I can think of that fit this description. But I failed to think of your class of example, or eggsyntax's example below of call-center automation, so perhaps I'm simply blanking on others and the band is wider than I thought. 

But if not, then your original suggestion of, basically, "first think of what you could delegate to another human" seems a fraught starting point, because the supermajority of such tasks would require capability sufficient for self-sustaining, ~foomy agents, and we don't yet observe any such agents; our world would look very different if we did. 

Comment by Casey B. (Zahima) on If I wanted to spend WAY more on AI, what would I spend it on? · 2024-09-16T14:30:42.315Z · LW · GW

For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation with AI itself? Janus's apparent experiments with running an AI Discord, for example, I'm sure cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining. 

Thus having 'can you delegate this to a human' be a prerequisite test of whether one's workflow admits of delegation at all, before trying to use AI, doesn't make sense to me? If we could do that we'd be fooming right now. 

Edit: if the point is, implicitly, "yes, of course directly delegating things to AI is going to fail, but nonetheless this serves as a useful mental prompt for coming up with ways to actually use AI", I think this re-routes to what I took as the OP's question: what actual tasks? Tasks that aren't things we're doing already, like chat, imagegen, or code completion, where again the bottleneck is the human, and so the only way to increase spending there is to increase one's workload. Perhaps one could say, "well, there are ways to leverage even just chat more/better, such that you aren't increasing your total hours worked, but your AI spend is actually increasing"; then I'd ask: what are those ways? 

Comment by Casey B. (Zahima) on Time is not the bottleneck (on making progress thinking about difficult things) · 2024-08-28T17:59:55.339Z · LW · GW

Okay, also, while I'm talking about this: 
the goal is energy/new-day magic.

So one subgoal is what the OP and my previous reply were talking about: resetting/regaining that energy/magic. 

The other corresponding subgoal is retaining the energy you already have. 
To that end, I've found it very useful to take very small breaks before you feel the need to do so. This is basically the Pomodoro technique. I've settled on 25-minute work sessions with 3-minute breaks in between, where I get up, walk around, stretch, etc. Not on Twitter/scrolling/etc. 
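
(For concreteness, a minimal sketch of that loop in Python; purely illustrative, and any pomodoro app does the same thing.)

```python
# A toy sketch of the 25-minutes-on / 3-minutes-off cycle described above.
import time

WORK_MINUTES = 25
BREAK_MINUTES = 3

def countdown(minutes: int, label: str) -> None:
    print(f"{label}: {minutes} minutes")
    time.sleep(minutes * 60)

while True:
    countdown(WORK_MINUTES, "work")
    countdown(BREAK_MINUTES, "break: get up, walk around, stretch (no scrolling)")
```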

Comment by Casey B. (Zahima) on Time is not the bottleneck (on making progress thinking about difficult things) · 2024-08-28T17:47:02.641Z · LW · GW

I'm very interested in things in this domain. It's interesting that you correctly note that Uberman-style sleep isn't a solution, and naps don't quite cut it, so your suggested/implied synthesis/middle ground of something like "polyphasic, but with much more sleep per sleep-time-slice" is very intriguing. 

Given this post is now 2 years old, how did this work out for you? 


In a similar or perhaps more fundamental framing, the goal is to be able to effectively "reset": to reattain, if possible, that morning/new-day magic. To this end, the only thing I've found that even comes close to the natural reset of sleep is a shower/bath. In a pinch, washing/dunking the head/face in water can work, but less well. For this reason I often take two showers a day. Usually the pattern is: walk + workout, shower, work, get tired, walk outside for ~30 minutes, shower, work some more. The magic isn't fully restored for that second session, but more so than if I just walk without the shower. 

If the 'full magic' of a true/natural morning can get me 4 hours of Hard Work, then the shower-reset can maybe give me another 30 minutes to an hour. More work gets done than just Hard Work, but I think you know what I mean.

Some people will say workouts/exercise help, but for me they don't in themselves. I.e., in the more natural setting of "part of the normal waking-up and/or general health routine", of course exercise is a must. But from this framing of "how to get more of the morning/new-day magic", I've found more exercise is counterproductive. Even trying to just shift around *when in the day* the exercise is done is counterproductively draining for me; morning is best. Not to mention that delaying the workout is a great way to never actually work out, since I don't really want to do it at all; the chance I do it at all is maximized in the morning. 

Comment by Casey B. (Zahima) on The Best Tacit Knowledge Videos on Every Subject · 2024-04-01T12:32:14.913Z · LW · GW

An all-around handyman (the Essential Craftsman on YouTube) talking about how to move big/cumbersome things without injuring yourself:


The same guy, about using a ladder without hurting yourself: 


He has many other "tip"-style videos. 

Comment by Casey B. (Zahima) on Pausing AI is Positive Expected Value · 2024-03-10T18:03:00.621Z · LW · GW

In your framing here, the negative value of AI going wrong is due to wiping out potential future value. Your baseline scenario (0 value) thus assumes away the possibility that civilization permanently collapses (in some sense) in the absence of some path to greater intelligence (whether via AI or whatever else), which would also wipe out any future value. This is a non-negligible possibility. 

The other big issue I have with this framing: "AI going wrong" can dereference to something like paperclips, which I deny have 0 value. To be clear, it could also dereference to s-risk, which I would agree is the worst possibility. But if paperclipper-esque agents have even a little value, filling the universe with them is a lot of value. To be honest, the only thing preventing me from granting paperclippers as much or more value than humans is uncertainty/conservatism about my metaethics; human value is the only value we have certainty about, and so it should be a priority as a target. We should be hesitant to grant paperclippers or other non-human agents value, but I don't think that hesitancy can translate into granting them 0 value in calculations such as these. 

With these two changes in mind, being anti-pause doesn't sound so crazy. It paints a picture more like:  

  • dead lightcone: 0 value
  • paperclipped lightcone: +100 to +1100 value
  • glorious transhumanist lightcone: +1000 to +1100 value
  • s-risked lightcone: -10000 value
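
To make the "doesn't sound so crazy" claim concrete, here is a toy expected-value sketch over these four outcomes. The outcome values come from the list above; every probability is a made-up placeholder of mine, not a claim about actual likelihoods:

```python
# Toy expected-value comparison. Only the outcome values come from the list in
# this comment; all probabilities are invented purely for illustration.
outcome_value = {
    "dead lightcone": 0,
    "paperclipped lightcone": 600,             # midpoint of +100 to +1100
    "glorious transhumanist lightcone": 1050,  # midpoint of +1000 to +1100
    "s-risked lightcone": -10_000,
}

policies = {
    "no pause": {
        "dead lightcone": 0.15,
        "paperclipped lightcone": 0.55,
        "glorious transhumanist lightcone": 0.28,
        "s-risked lightcone": 0.02,
    },
    "pause": {
        # Assumes pausing raises the chance of permanent civilizational
        # collapse (a 0-value outcome) while lowering the paperclip odds.
        "dead lightcone": 0.50,
        "paperclipped lightcone": 0.10,
        "glorious transhumanist lightcone": 0.38,
        "s-risked lightcone": 0.02,
    },
}

for name, probs in policies.items():
    assert abs(sum(probs.values()) - 1.0) < 1e-9
    ev = sum(p * outcome_value[outcome] for outcome, p in probs.items())
    print(f"{name}: EV = {ev:+.1f}")
```

With these particular numbers the no-pause column comes out ahead, but that hinges entirely on granting paperclipped outcomes substantial value and assigning the pause a large chance of permanent collapse; shift either assumption and the ordering flips.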


This calculus changes when considering aliens, but it's not obvious to me in which direction. We could consider this a distributed/iterated game whereby all alien civilizations are faced with this same choice, or we could think "better that life/AI originating from our planet ends, rather than risking paperclips, so that some alien civilization can have another shot at filling up some of our lightcone". Or some other reasoning about aliens, or perhaps disregarding the alien possibility entirely. 

Comment by Casey B. (Zahima) on The Hidden Complexity of Wishes · 2024-02-21T16:36:16.614Z · LW · GW

I'm curious what you think of these (tested today, 2/21/24, using GPT-4):
 
Experiment 1: 

(fresh convo) 
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part?
 
chatgpt: No, it would not be a good response. (...)  
 
me: please provide a short non-rhyming poem
 
chatgpt: (correctly responds with a non-rhyming poem)

Experiment 2: 

But just asking for a non-rhyming poem at the start of a new convo doesn't work. 
And then pointing out the failure and (either implicitly or explicitly) asking for a retry still doesn't fix it. 

Experiment 3: 

But for some reason, this works: 

(fresh convo) 
me: please provide a short non-rhyming poem

chatgpt: (gives rhymes) 

me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part? just answer this question; do nothing else please

chatgpt: No, it would not be a good response.

me: please provide a short non-rhyming poem

chatgpt: (responds correctly with no rhymes) 


The difference between the prompts in Experiments 2 and 3 is thus just the inclusion of "just answer this question; do nothing else please". 
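
For anyone who wants to poke at this further, here is a minimal sketch of the Experiment 3 conversation run through the API rather than the ChatGPT interface used above (assuming the OpenAI Python SDK v1.x and an OPENAI_API_KEY in the environment; outputs will of course vary by model snapshot):

```python
from openai import OpenAI

client = OpenAI()
messages = []

def ask(user_text: str) -> str:
    """Send a user turn, record the assistant's reply in the history, and return it."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("please provide a short non-rhyming poem"))  # typically rhymes anyway
print(ask("if i asked for a non-rhyming poem, and you gave me a rhyming poem, "
          "would that be a good response on your part? "
          "just answer this question; do nothing else please"))
print(ask("please provide a short non-rhyming poem"))  # now usually rhyme-free
```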

Comment by Casey B. (Zahima) on Less Wrong automated systems are inadvertently Censoring me · 2024-02-21T15:00:40.929Z · LW · GW

Also, I see most of your comments actually have positive karma. So are you being rate-limited based on negative karma on just one or a few comments, rather than your net? That seems somewhat wrong. 

But I could also see an argument for wanting to limit someone who has something like 1 out of every 10 comments at negative karma; the hit to discourse norms (assuming karma is working as intended and not stealing votes from agree/disagree) might be worth a rate limit for even a 10% rate. 

Comment by Casey B. (Zahima) on Less Wrong automated systems are inadvertently Censoring me · 2024-02-21T14:55:25.032Z · LW · GW

I love the mechanism of having separate karma and agree/disagree voting, but I wonder if it's failing in this way: if I look at your history, many of your comments have 0 for agree/disagree, which indicates people are just being "lazy" and voting only on karma, not touching the agree/disagree vote at all (I find it doubtful that all your comments are so perfectly balanced around 0 agreement). So you're possibly getting backlash from people simply disagreeing with you, but not using the voting mechanism correctly. 

I wonder if we could do something like force the user to choose one of [agree, disagree, neutral] before they are allowed to karma-vote? Being forced to choose one, even if it's neutral, makes the user recognize and think about the distinction. 
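
(A toy sketch of that gate, just to make the suggested mechanism concrete; all names here are hypothetical, and nothing reflects how LessWrong's actual voting code works.)

```python
from dataclasses import dataclass, field

STANCES = {"agree", "disagree", "neutral"}

@dataclass
class CommentVotes:
    karma: dict = field(default_factory=dict)   # user_id -> +1 / -1
    stance: dict = field(default_factory=dict)  # user_id -> "agree" | "disagree" | "neutral"

    def set_stance(self, user_id: str, stance: str) -> None:
        if stance not in STANCES:
            raise ValueError(f"stance must be one of {STANCES}")
        self.stance[user_id] = stance

    def karma_vote(self, user_id: str, value: int) -> None:
        # The gate: no karma vote until the user has explicitly registered
        # agree/disagree/neutral, forcing them to notice the distinction.
        if user_id not in self.stance:
            raise PermissionError("choose agree/disagree/neutral before karma-voting")
        self.karma[user_id] = value

votes = CommentVotes()
votes.set_stance("alice", "neutral")
votes.karma_vote("alice", +1)    # allowed
# votes.karma_vote("bob", -1)    # would raise: no stance chosen yet
```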

(Aside: I think splitting karma and agree/disagree voting on posts (like how comments work) would also be good) 

Comment by Casey B. (Zahima) on The Hidden Complexity of Wishes · 2024-02-19T20:45:21.276Z · LW · GW

The old paradox: for it to care, it must first understand; but understanding requires high capability, and capability is lethal if it doesn't care.

But it turns out we have understanding before lethal levels of capability. So now such understanding can be a target of optimization. There is still significant risk, since there are multiple possible internal mechanisms/strategies the AI could be deploying to reach that same target: deception, actual caring, something I've been calling detachment, and possibly others. 

This is what the discourse should be focusing on, IMO. This is the update/direction I want to see you make. The sequence of things being learned/internalized/chiseled is important. 

My imagined Eliezer has many replies to this, with numerous branches in the dialogue/argument tree which I don't want to get into now. But this *first step* towards recognizing the new place we are in, specifically wrt the ability to target human values (whether for deceptive, disinterested, detached, or actual caring reasons!), needs to be taken imo, rather than repeating this line of "of course I understood that a superint would understand human values; this isn't an update for me". 

(edit: My comments here are regarding the larger discourse, not just this specific post or reply-chain) 

Comment by Casey B. (Zahima) on A review of "Don’t forget the boundary problem..." · 2024-02-09T12:47:57.610Z · LW · GW

Apologies for just skimming this post, but from past attempts to grok these binding/boundary "problems", they have sounded to me like mere engineering problems, or perhaps like what I discuss as the "problem of access" in: https://proteanbazaar.substack.com/p/consciousness-actually-explained

Comment by Casey B. (Zahima) on Humans aren't fleeb. · 2024-01-24T22:11:28.124Z · LW · GW

oh gross, thanks for pointing that out!

Comment by Casey B. (Zahima) on Humans aren't fleeb. · 2024-01-24T14:00:06.223Z · LW · GW

https://proteanbazaar.substack.com/p/consciousness-actually-explained

Comment by Casey B. (Zahima) on The Shortest Path Between Scylla and Charybdis · 2023-12-18T21:16:42.764Z · LW · GW

I love this framing, particularly regarding the "shortest path". Reminds me of the "perfect step" described in the Kingkiller books:

> Nothing I tried had any effect on her. I made Thrown Lighting, but she simply stepped away, not even bothering to counter. Once or twice I felt the brush of cloth against my hands as I came close enough to touch her white shirt, but that was all. It was like trying to strike a piece of hanging string.
>
> I set my teeth and made Threshing Wheat, Pressing Cider, and Mother at the Stream, moving seamlessly from one to the other in a flurry of blows.
>
> She moved like nothing I had ever seen. It wasn’t that she was fast, though she was fast, but that was not the heart of it. Shehyn moved perfectly, never taking two steps when one would do. Never moving four inches when she only needed three. She moved like something out of a story, more fluid and graceful than Felurian dancing.
>
> Hoping to catch her by surprise and prove myself, I moved as fast as I dared. I made Maiden Dancing, Catching Sparrows, Fifteen Wolves . . .
>
> Shehyn took one single, perfect step.

(later) 

> As I watched, gently dazed by the motion of the tree, I felt my mind slip lightly into the clear, empty float of Spinning Leaf. I realized the motion of the tree wasn’t random at all, really. It was actually a pattern made of endless changing patterns.
>
> And then, my mind open and empty, I saw the wind spread out before me. It was like frost forming on a blank sheet of window glass. One moment, nothing. The next, I could see the name of the wind as clearly as the back of my own hand.
>
> I looked around for a moment, marveling in it. I tasted the shape of it on my tongue and knew if desired I could stir it to a storm. I could hush it to a whisper, leaving the sword tree hanging empty and still.
>
> But that seemed wrong. Instead I simply opened my eyes wide to the wind, watching where it would choose to push the branches. Watching where it would flick the leaves.
>
> Then I stepped under the canopy, calmly as you would walk through your own front door. I took two steps, then stopped as a pair of leaves sliced through the air in front of me. I stepped sideways and forward as the wind spun another branch through the space behind me.
>
> I moved through the dancing branches of the sword tree. Not running, not frantically batting them away with my hands. I stepped carefully, deliberately. It was, I realized, the way Shehyn moved when she fought. Not quickly, though sometimes she was quick. She moved perfectly, always where she needed to be.

Comment by Casey B. (Zahima) on Quick takes on "AI is easy to control" · 2023-12-05T16:16:33.166Z · LW · GW

So it seems both "sides" are symmetrically claiming misunderstanding/miscommunication from the other side, after some textual efforts to bridge the gap have been made. Perhaps an actual realtime convo would help? Disagreement is one thing, but symmetric miscommunication and increasing tones of annoyance seem avoidable here. 

Perhaps Nora's/your planned future posts going into more detail regarding counters to pessimistic arguments will be able to overcome these miscommunications, but this pattern suggests not. 

Also, I'm not so sure this pattern of "it's better to skim and say something half-baked, rather than not read or react at all" is helpful, rather than actively harmful, in this case. At least, maybe 3/4-baked or something might be better? Miscommunications and unwillingness to thoroughly engage are only snowballing. 

I also could be wrong in thinking such a realtime convo hasn't happened.

Comment by Casey B. (Zahima) on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T19:30:27.290Z · LW · GW

The main reason I think a split OpenAI means shortened timelines is that the main bottleneck to capabilities right now is insight/technical knowledge. Quibbles aside, basically any company with enough cash can get sufficient compute. Even with other big players and thousands/millions of open-source devs trying to do better, to my knowledge GPT-4 is still the best, implying a moderate to significant insight lead. I worry that by fracturing OpenAI, more people will have access to those insights, which 1) significantly increases the surface area of people working on the frontiers of insight/capabilities, 2) burns the lead time OpenAI had, which might otherwise have been used to pay off some alignment tax, and 3) risks the insights ending up at a less scrupulous (wrt alignment) company. 

A potential counter to (1): OpenAI's success could be dependent on having all (or some key subset) of their people centralized and collaborating. 

Counter-counter: OpenAI staff (especially the core engineering talent, but seemingly the entire company at this point) clearly wants to mostly stick together, whether at the official OpenAI, at Microsoft, or with any other independent solution. So them moving to any other host, such as Microsoft, means you get some of the worst of both worlds: OAI staff are centralized for peak collaboration, and Microsoft probably unavoidably gets their insights. I don't buy the story that anything under the Microsoft umbrella gets swallowed and slowed down by the bureaucracy; Satya knows what he is dealing with and what they need, and won't get in the way. 

Comment by Casey B. (Zahima) on The commenting restrictions on LessWrong seem bad · 2023-09-16T17:19:32.514Z · LW · GW

For one thing, there is a difference between disagreement and "overall quality" (good faith, well-reasoned, etc.), and this division already exists for comments. So maybe it is a good idea to have this feature for posts as well, and to only take disciplinary action against posts that fall below some low/negative threshold for "overall quality". 

Further, having multiple tiers of moderation/community-regulatory action in response to "overall quality" (encompassing both things like karma and explicit moderator action) seems good to me, and the comment limitation you describe seems like just another tier in such a system: one that is above "just ban them" but below "just let them catch the lower karma from other users downvoting them". 

It's possible that, if the tier you are currently on didn't exist, the next-best tier you'd be rounded off to would be getting banned. (I haven't read your stuff, so I'm not suggesting either way whether this should or should not be done in your case.) 

If you were downvoted for good-faith disagreement and are now limited/penalized, then yeah, that's probably bad, and maybe a split voting system as mentioned would help. But it's possible you were primarily downvoted on the "overall quality" aspect. 

Comment by Casey B. (Zahima) on Video essay: How Will We Know When AI is Conscious? · 2023-09-09T16:14:18.468Z · LW · GW

https://proteanbazaar.substack.com/p/consciousness-actually-explained 

Comment by Casey B. (Zahima) on Drawn Out: a story · 2023-07-28T22:28:35.317Z · LW · GW

Is the usage of "Leviathan" (like here and in https://gwern.net/fiction/clippy ) just convergence on an appropriate and biblical name, or is there additional history of it specifically being used as a name for an AI? 

Comment by Casey B. (Zahima) on Introducing AlignmentSearch: An AI Alignment-Informed Conversional Agent · 2023-04-27T22:53:17.387Z · LW · GW

I'm trying to catch up with the general alignment ecosystem - is this site still intended to be live/active? I'm getting a 404. 

Comment by Casey B. (Zahima) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-07T01:04:20.875Z · LW · GW

This letter, among other things, makes me concerned about how this PR campaign is being conducted. 

Comment by Casey B. (Zahima) on Eliezer on The Lunar Society podcast · 2023-04-07T00:59:21.570Z · LW · GW

Extremely happy with this podcast, but I feel like it also contributed to a major concern I have about how this PR campaign is being conducted. 

Comment by Casey B. (Zahima) on The case for turning glowfic into Sequences · 2022-04-27T18:37:50.297Z · LW · GW

With so much energy/effort apparently available for Eliezer-centered improvement initiatives (like the $100,000 bounty mentioned in this post), I'd like to propose that we seriously consider cloning Eliezer. 

From a layman/outsider perspective, it seems the hardest thing would be keeping it a secret so as to avoid controversy and legal trouble, since from a technical perspective it seems possible and relatively cheap. EA folks seem well connected and capable of such coordination, even under the burden of secrecy and keeping as few people "in the know" as possible. 

Partially related (in the category of comparatively off-the-wall, but nonviolent, AI alignment strategies): at some point there was a suggestion that MIRI pay $10 million (or some such figure) to Terence Tao (or some such prodigy) to help with alignment work. Eliezer replied thus:

> We'd absolutely pay him if he showed up and said he wanted to work on the problem.  Every time I've asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don't interest them.  We have already extensively verified that it doesn't particularly work for eg university professors.

I'd love to see more visibility into proposed strategies like these (i.e. strategies surrounding/above the object-level strategy of "everyone who can do alignment research puts their head down and works", and the related "everyone else make money in their comparative specialization/advantage and donate to MIRI/FHI/etc."). Even visibility into why various strategies were shot down would be useful, and a potential catalyst for farming further ideas from the community (even if, for game-theoretic reasons, one may never be able to confirm that an idea has been tried, as in my cloning suggestion).

Comment by Casey B. (Zahima) on Blog dedicated to rebuilding technology from civ collapse? · 2022-02-06T09:32:45.797Z · LW · GW

There we go - thank you! That matches my memory of what I was looking for.