Posts

Optimistic Assumptions, Longterm Planning, and "Cope" 2024-07-17T22:14:24.090Z
Fluent, Cruxy Predictions 2024-07-10T18:00:06.424Z
80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) 2024-07-03T20:34:50.741Z
What percent of the sun would a Dyson Sphere cover? 2024-07-03T17:27:50.826Z
What distinguishes "early", "mid" and "end" games? 2024-06-21T17:41:30.816Z
"Metastrategic Brainstorming", a core building-block skill 2024-06-11T04:27:52.488Z
Can we build a better Public Doublecrux? 2024-05-11T19:21:53.326Z
some thoughts on LessOnline 2024-05-08T23:17:41.372Z
Prompts for Big-Picture Planning 2024-04-13T03:04:24.523Z
"Fractal Strategy" workshop report 2024-04-06T21:26:53.263Z
One-shot strategy games? 2024-03-11T00:19:20.480Z
Rationality Research Report: Towards 10x OODA Looping? 2024-02-24T21:06:38.703Z
Exercise: Planmaking, Surprise Anticipation, and "Baba is You" 2024-02-24T20:33:49.574Z
Things I've Grieved 2024-02-18T19:32:47.169Z
CFAR Takeaways: Andrew Critch 2024-02-14T01:37:03.931Z
Skills I'd like my collaborators to have 2024-02-09T08:20:37.686Z
"Does your paradigm beget new, good, paradigms?" 2024-01-25T18:23:15.497Z
Universal Love Integration Test: Hitler 2024-01-10T23:55:35.526Z
2022 (and All Time) Posts by Pingback Count 2023-12-16T21:17:00.572Z
Raemon's Deliberate (“Purposeful?”) Practice Club 2023-11-14T18:24:19.335Z
Hiring: Lighthaven Events & Venue Lead 2023-10-13T21:02:33.212Z
"The Heart of Gaming is the Power Fantasy", and Cohabitive Games 2023-10-08T21:02:33.526Z
Related Discussion from Thomas Kwa's MIRI Research Experience 2023-10-07T06:25:00.994Z
Thomas Kwa's MIRI research experience 2023-10-02T16:42:37.886Z
Feedback-loops, Deliberate Practice, and Transfer Learning 2023-09-07T01:57:33.066Z
Open Thread – Autumn 2023 2023-09-03T22:54:42.259Z
The God of Humanity, and the God of the Robot Utilitarians 2023-08-24T08:27:57.396Z
Book Launch: "The Carving of Reality," Best of LessWrong vol. III 2023-08-16T23:52:12.518Z
Feedbackloop-first Rationality 2023-08-07T17:58:56.349Z
Private notes on LW? 2023-08-04T17:35:37.917Z
Exercise: Solve "Thinking Physics" 2023-08-01T00:44:48.975Z
Rationality !== Winning 2023-07-24T02:53:59.764Z
Announcement: AI Narrations Available for All New LessWrong Posts 2023-07-20T22:17:33.454Z
What are the best non-LW places to read on alignment progress? 2023-07-07T00:57:21.417Z
My "2.9 trauma limit" 2023-07-01T19:32:14.805Z
Automatic Rate Limiting on LessWrong 2023-06-23T20:19:41.049Z
Open Thread: June 2023 (Inline Reacts!) 2023-06-06T07:40:43.025Z
Worrying less about acausal extortion 2023-05-23T02:08:18.900Z
Dark Forest Theories 2023-05-12T20:21:49.052Z
[New] Rejected Content Section 2023-05-04T01:43:19.547Z
Tuning your Cognitive Strategies 2023-04-27T20:32:06.337Z
"Rate limiting" as a mod tool 2023-04-23T00:42:58.233Z
LessWrong moderation messaging container 2023-04-22T01:19:00.971Z
Moderation notes re: recent Said/Duncan threads 2023-04-14T18:06:21.712Z
LW Team is adjusting moderation policy 2023-04-04T20:41:07.603Z
Abstracts should be either Actually Short™, or broken into paragraphs 2023-03-24T00:51:56.449Z
Tabooing "Frame Control" 2023-03-19T23:33:10.154Z
Dan Luu on "You can only communicate one top priority" 2023-03-18T18:55:09.998Z
"Carefully Bootstrapped Alignment" is organizationally hard 2023-03-17T18:00:09.943Z
Prizes for the 2021 Review 2023-02-10T19:47:43.504Z

Comments

Comment by Raemon on Daniel Kokotajlo's Shortform · 2024-07-25T00:39:40.549Z · LW · GW

...in the last 24 hours? Or, like, a while ago in a previous context?

Comment by Raemon on Nathan Young's Shortform · 2024-07-24T17:36:11.110Z · LW · GW

Well, an alternate framing is "does the big stick turn out to have the effect you want?"

Comment by Raemon on Nathan Young's Shortform · 2024-07-24T01:07:46.530Z · LW · GW

I guess the actual resolution here will eventually come from seeing the final headlines and whether they're actually reasonable.

Comment by Raemon on jacobjacob's Shortform Feed · 2024-07-23T21:39:29.130Z · LW · GW

I'd be interested in a few more details/gears. (Also, are you primarily replying about the immediate parent, i.e. domestication of dissent, or also about the previous one?)

Two different angles of curiosity I have are:

  • what sort of things might you look out for, in particular, to notice if this was happening to you at OpenAI or similar?
  • something like... what's your estimate of the effect size here? Do you have personal experience feeling captured by this dynamic? If so, what was it like? Or did you observe other people seeming to be captured, and what was your impression (perhaps in vague terms) of the diff that the dynamic was producing?
Comment by Raemon on Towards more cooperative AI safety strategies · 2024-07-23T18:43:41.631Z · LW · GW

My take atm is "seems right that this shouldn't be a permanent norm, there are definitely costs of disclaimer-ratcheting that are pretty bad. I think it might still be the right thing to do of your own accord in some cases, which is, like, supererogatory."

I think there's maybe a weird thing with this post, where it's trying to be the timeless, abstract version of itself. It's certainly easier to write the timeless abstract version than the "digging into specific examples and calling people out" version. But, I think the digging into specific examples is actually kind of important here – it's easy to come away with vague takeaways that disagree with each other, where everyone nods along but then mostly thinks it's Those Other Guys who are being power-seeking.

Given that it's probably 10-50x harder to write the Post With Specific Examples, I think actually a pretty okay outcome is "ship the vague post, and let discussion in the comments get into the inside-baseball details." And, then, it'd be remiss for the post-author's own role in the ecosystem not to come up as an example to dig into.

Comment by Raemon on johnswentworth's Shortform · 2024-07-23T16:33:10.942Z · LW · GW

They can believe in catastrophic but non-existential risks. (Like, AI causes something like the CrowdStrike outage periodically if you're not trying to prevent that.)

Comment by Raemon on johnswentworth's Shortform · 2024-07-23T05:27:27.486Z · LW · GW

I think people mostly don't believe in extinction risk, so the incentive isn't nearly as real/immediate.

Comment by Raemon on Towards more cooperative AI safety strategies · 2024-07-22T20:38:33.799Z · LW · GW

Part of the whole point of CEV is to discover at least some things that current humanity is confused about but would want if fully informed, with time to think. It'd be surprising to me if CEV-existing-humanity didn't turn out to want some things that many current humans are opposed to. 

Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-22T19:40:38.808Z · LW · GW

So, I do think I've definitely got some confirmation bias here – I know because the first thing I thought when I saw it was "man this sure looks like the thing Eliezer was complaining about", and it was a while later, thinking it through, that I was like "this does seem like it should make you really doomy about any agent-foundations-y plans, or other attempts to sidestep modern ML and cut towards 'getting the hard problem right on the first try.'"

I did (later) think about that a bunch and integrate it into the post.

I don't know whether I think it's reasonable to say "it's additionally confirmation-bias-indicative that the post doesn't talk about general doom arguments." As Eli says, the post is mostly observing a phenomenon that seems more about planmaking than general reasoning.

(fwiw my own p(doom) is more like 'I dunno man, somewhere between 10% and 90%, and I'd need to see a lot of things going concretely right before my emotional center of mass shifted below 50%')

Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-19T22:14:52.334Z · LW · GW

Yeah. I tried to get at this in the Takeaways but I like your more thorough write-up here.

Comment by Raemon on Friendship is transactional, unconditional friendship is insurance · 2024-07-19T18:03:28.493Z · LW · GW

In the world where people had exactly $30 to spend every hour and they’d either spend it or it disappeared, would you object to calling that spending money? I feel like many of my spending intuitions would still basically transfer to that world.

Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-19T17:43:43.587Z · LW · GW

Curious for details.

Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-18T23:04:07.750Z · LW · GW

People varied in how much Baba-Is-You experience they had. Some of them were completely new, and did complete the first couple levels (which are pretty tutorial-like) using the same methodology I outline here, before getting to a level that was a notable challenge.

They actually did complete the first couple levels successfully, which I forgot when writing this post. This does weaken the rhetorical force, but also, the first couple levels are designed more to teach the mechanics and are significantly easier. I'll update the post to clarify this.

Some of them had played before, and were starting a new level from around where they left off.

Comment by Raemon on Towards more cooperative AI safety strategies · 2024-07-18T22:36:10.801Z · LW · GW

...fwiw I think it's not grossly inaccurate. 

I think MIRI did put a lot of effort into being cooperative about the situation (i.e. Don't leave your fingerprints on the future, doing the 'minimal' pivotal act that would end the acute risk period, and when thinking about longterm godlike AI, trying to figure out fair CEV sorts of things).

But, I think it was also pretty clear that "have a controllable, safe AI that's just powerful enough to take some action that prevents anyone else from building a more powerful and more dangerous AI" was not in the Overton window. I don't know what Eliezer's actual plan was since he disclaimed "yes I know melt all the GPUs won't work", but, like, "melt all the GPUs" implies a level of power over the world that is really extreme by historical standards, even if you're trying to do the minimal thing with that power.

Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-18T20:09:28.431Z · LW · GW

Also, the second section makes an argument in favor of backchaining. But that seems to contradict the first section, in which people tried to backchain and it went badly.

This didn't come across in the post, but – I think people in the experiment were mostly doing things closer to (simulated) forward chaining, and then getting stuck, and then generating the questionable assumptions. (which is also what I tended to do when I first started this experiment). 

An interesting thing I learned is that "look at the board and think without fiddling around" is actually a useful skill to have even when I'm doing the more openended "solve it however seems best." It's easier to notice now when I'm fiddling around pointlessly instead of actually doing useful cognitive work.

Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-18T19:37:44.668Z · LW · GW

I had a second half of this essay that felt like it was taking too long to pull together and I wasn't quite sure who I was arguing with. I decided I'd probably try to make it a second post. I generally agree it's not that obvious what lessons to take.

The beginning of the second-half/next-post was something like:

There's an age-old debate about AI existential safety, which I might summarize as the viewpoints:

1. "We only get one critical try, and most alignment research dodges the hard part of the problem, with wildly optimistic assumptions."

vs

2. "It is basically impossible to make progress on remote, complex problems on your first try. So, we need to somehow factor the problem into something we can make empirical progress on."

I started out mostly thinking through lens #1. I've updated that, actually, both views may be "hair on fire" levels of important. I have some frustrations both with some doomer-y people who seem resistant to incorporating lens #2, and with people who seem to (in practice) be satisfied with "well, iterative empiricism seems tractable, and we don't super need to incorporate frame #1."

I am interested in both:

  • trying to build "engineering feedback loops" that represent the final problem as accurately as we can, and then iterating on "solving representative problems against our current best engineered benchmarks" while also "continuing to build better benchmarks." (Automating Auditing and Model Organisms of Misalignment seem like attempts at this)
  • trying to develop training regimens that seem like they should help people plan better in Low-Feedback Domains – which includes theoretical work, empirical research that keeps its eye on the longterm ball better, and the invention of benchmarks à la the previous bullet.
Comment by Raemon on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-18T00:55:42.081Z · LW · GW

Games I was particularly thinking of were They Are Billions and Slay the Spire. I guess also Factorio, although the shape of that is a bit different.

(to be clear, these are fictional examples that don't necessarily generalize, but, when I look at the AI situation I think it-in-particular has an 'exponential difficulty' shape)

Comment by Raemon on Turning Your Back On Traffic · 2024-07-17T16:39:21.922Z · LW · GW

I also just realized the actual reason I do this is not because it works better, but because I felt too awkward merely turning my back.

Comment by Raemon on Turning Your Back On Traffic · 2024-07-17T05:34:17.765Z · LW · GW

I take it a step farther and just start walking down the sidewalk away from the road until they pass, and then turn around.

Comment by Raemon on Nathan Young's Shortform · 2024-07-17T00:55:05.636Z · LW · GW

Though, curious to hear an instance of it actually playing out.

Comment by Raemon on Alexander Gietelink Oldenziel's Shortform · 2024-07-16T23:50:14.961Z · LW · GW

See also: https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice 

Comment by Raemon on Nathan Young's Shortform · 2024-07-16T19:35:29.062Z · LW · GW

Nice.

Comment by Raemon on Fluent, Cruxy Predictions · 2024-07-14T00:21:05.829Z · LW · GW

Woo, great. :) 

Whether or not this works out, I quite appreciate you laying out the details. Hope it's useful for you!

Comment by Raemon on Fluent, Cruxy Predictions · 2024-07-12T23:52:39.587Z · LW · GW

Curious to hear what sort of things you end up predicting about, if you're up for sharing. :)

Comment by Raemon on Reliable Sources: The Story of David Gerard · 2024-07-12T02:22:18.132Z · LW · GW

Maybe I'm unusual and few other readers don't have this problem. I suspect that's not the case, but given that I don't know, I'll just say that I find this writing style to be a little too Dark Artsy and symmetrical for my comfort.

fyi I also felt this. (Don't have much more to add. I just wanted to note it). 

Comment by Raemon on Thoughts to niplav on lie-detection, truthfwl mechanisms, and wealth-inequality · 2024-07-12T00:54:28.605Z · LW · GW

Quick mod note – this post seems like a pretty earnest, well-intentioned version of "address a dialogue to someone who hasn't opted into it". But, it's the sort of thing I'd expect to often be kind of annoying. I haven't chatted with other mods yet about whether we want to allow this sort of thing longterm, but, flagging that we're tracking it as an edge case to think about.

Comment by Raemon on Poker is a bad game for teaching epistemics. Figgie is a better one. · 2024-07-11T20:26:16.859Z · LW · GW

I'm curating this post, both for the post itself and for various followup discussion in the post disclaimer and comments that I found valuable.

I think the question of "how do we quickly/efficiently train epistemic skills?" is a very important one. I'm interested in the holy grail of training full-generality epistemic skills, and I'm interested in training more specific clusters of skills (such as ones relevant for trading). I agree with kave's comment that this post equivocates between "epistemics" and "trading", but I'm generally excited for LessWrong folk to develop the art of "designing games that efficiently teach nuanced skills that can transfer".

I like rossry's attitude of "the main feedbackloop of the game should help players become unconfused".

Comment by Raemon on 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) · 2024-07-09T17:16:54.959Z · LW · GW

Yeah, I read those lines, and also "Want to use your engineering skills to push the frontiers of what state-of-the-art language models can accomplish", and remain skeptical. I think OpenAI tends to equivocate on how they use the word "alignment" (or: they use it consistently, but not in a way that I consider obviously good). Like, I think the people working on RLHF a few years ago probably contributed to ChatGPT being released earlier, which I think was bad.*

*I like the part where the world feels like it's actually starting to respond to AI now, but, I think that would have happened later, with more serial-time for various other research to solidify.

(I think this is a broader difference in guesses about what research/approaches are good, which I'm not actually very confident about, esp. compared to habryka, but, is where I'm currently coming from)

Comment by Raemon on 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) · 2024-07-09T04:37:33.155Z · LW · GW

I was thinking of things like the Alignment Research Science role. If they talked up "this is a superalignment role", I'd have an estimate higher than 55%. 

We are seeking Researchers to help design and implement experiments for alignment research. Responsibilities may include:

  • Writing performant and clean code for ML training
  • Independently running and analyzing ML experiments to diagnose problems and understand which changes are real improvements
  • Writing clean non-ML code, for example when building interfaces to let workers interact with our models or pipelines for managing human data
  • Collaborating closely with a small team to balance the need for flexibility and iteration speed in research with the need for stability and reliability in a complex long-lived project
  • Understanding our high-level research roadmap to help plan and prioritize future experiments
  • Designing novel approaches for using LLMs in alignment research

You might thrive in this role if you:

  • Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter
  • Want to use your engineering skills to push the frontiers of what state-of-the-art language models can accomplish
  • Possess a strong curiosity about aligning and understanding ML models, and are motivated to use your career to address this challenge
  • Enjoy fast-paced, collaborative, and cutting-edge research environments
  • Have experience implementing ML algorithms (e.g., PyTorch)
  • Can develop data visualization or data collection interfaces (e.g., JavaScript, Python)
  • Want to ensure that powerful AI systems stay under human control
Comment by Raemon on Buck's Shortform · 2024-07-09T00:28:02.914Z · LW · GW

I tend to dismiss scenarios where it's obvious, because I expect the demonstration of strong misaligned systems to inspire a strong multi-government response

I think covid was clear-cut, and it did inspire some kind of government response, but not a particularly competent one.

Comment by Raemon on On saying "Thank you" instead of "I'm Sorry" · 2024-07-08T04:03:50.343Z · LW · GW

I think I have some tendency to apologize the way this post warns about, and have heard the "say thank you" advice and considered it in the past. But, I'm curious to hear from anyone who's been on the receiving end of the "thank you" apology substitutes and how it feels to them.

Comment by Raemon on 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) · 2024-07-08T03:55:17.695Z · LW · GW

I'm not Elizabeth and probably wouldn't have worded my thoughts quite the same, but my own position regarding your first bullet point is:

"When I see OpenAI list a 'safety' role, I'm like 55% confident that it has much to do with existential safety, and maybe 25% that it produces more existential safety than existential harm." 

Comment by Raemon on Reflections on Less Online · 2024-07-07T06:09:29.302Z · LW · GW

This was really nice to read, thank you!

Re: 

I see no obvious way on the site to send Lightcone money, or to otherwise contribute to this happening again, and I would like to. What do I do?

For now, the best place is https://www.lesswrong.com/donate. (We used to link this from the sidebar but people didn't use it often enough to really justify the screen real-estate)

Comment by Raemon on Reflections on Less Online · 2024-07-07T05:48:19.018Z · LW · GW

Minor note:

His name is Leo. As best I could tell from asking others, he’s not attached to the site, he hails from one of the adjacent properties and just likes the people. I was going to nominate him as the LessOnline mascot, but must admit that Agendra might be more appropriate.

Leo's owner is one of the maintenance-folk who help keep the venue in good repair. :)

Comment by Raemon on 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) · 2024-07-06T17:31:14.097Z · LW · GW

tbh I typically find those bots annoying too.

Comment by Raemon on Habryka's Shortform Feed · 2024-07-05T20:31:23.692Z · LW · GW

Are there currently board members who are meaningfully separated, in terms of incentive-alignment, from Daniela or Dario? (I don't know that it's possible for you to answer in a way that'd really resolve my concerns, given what sort of information is possible to share. But, "is there an actual way to criticize Dario and/or Daniela in a way that will realistically be given a fair hearing by someone who, if appropriate, could take some kind of action" is a crux of mine)

Comment by Raemon on What percent of the sun would a Dyson Sphere cover? · 2024-07-03T19:06:00.001Z · LW · GW

I agree we can't get exact numbers here but it'd be surprising to me if modern material science wasn't capable of generating some upper/lower bounds.

Comment by Raemon on What percent of the sun would a Dyson Sphere cover? · 2024-07-03T19:05:08.456Z · LW · GW

As a caveat, I would suggest that if the AI is "nice" enough to spare Earth, it's likely to be nice enough to beam some reconstituted sunlight over to us.

Yeah seems right. I still find myself curious, as well as strategically interested in "man, I just really don't know how the future is likely to play out, so getting more clarity on physical limits of this sort of system feels like it helps constrain possible future scenarios." That might just be cope though.

Comment by Raemon on What percent of the sun would a Dyson Sphere cover? · 2024-07-03T18:28:12.958Z · LW · GW

Nod, but, this doesn't answer the actual question.

Comment by Raemon on What percent of the sun would a Dyson Sphere cover? · 2024-07-03T18:27:40.010Z · LW · GW

A thing I'm still not sure about after reading that is "what percent of the light is getting through?" Like, how dense are the reflector modules?

Later in the paper it says "The Dyson sphere is assumed to have an efficiency of one third", which could mean "realistically you only capture about 1/3rd of the energy in the first place" or "the capturing/redirecting process loses 2/3rds of the energy."
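
To make the ambiguity concrete, here's a minimal sketch of the two readings (my own framing, not the paper's), writing $L_\odot$ for the Sun's total luminosity:

  • Reading A (sparse swarm): $P_{\text{captured}} \approx \tfrac{1}{3} L_\odot$, and the remaining $\approx \tfrac{2}{3} L_\odot$ passes through the gaps between collectors.
  • Reading B (dense swarm, lossy conversion): $P_{\text{intercepted}} \approx L_\odot$ but $P_{\text{useful}} \approx \tfrac{1}{3} L_\odot$, so essentially no sunlight gets through unless it's deliberately re-emitted.

These give very different answers to "what percent of the light is getting through?"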

Comment by Raemon on MIRI 2024 Communications Strategy · 2024-06-30T17:15:35.539Z · LW · GW

Thinking a bit more, scenarios that seem at least kinda plausible:

  • "misuse" where someone is just actively trying to use AI to commit genocide or similar. Or, we get into an humans+AI vs human+AI war. 
  • the AI economy takes off, it has lots of extreme environmental impact, and it's sort of aligned but we're not very good at regulating it fast enough, but, we get it under control after a billion death.
Comment by Raemon on MIRI 2024 Communications Strategy · 2024-06-30T01:39:08.045Z · LW · GW

(To be clear, I think there is a substantial chance of at least 1 billion people dying and that AI takeover is very bad from a longtermist perspective.)

Is there a writeup somewhere of how we're likely to get "around a billion people die" that isn't extinction, or close to it? Something about this phrasing feels weird/suspicious to me. 

Like I have a few different stories for everyone dying (some sooner, or later). 

I have some stories where like "almost 8 billion people" die and the AI scans the remainder. 

I have some stories where the AI doesn't really succeed and maybe kills millions of people, in what is more like "a major industrial accident" than "a powerful superintelligence enacting its goals".

Technically "substantial chance of at least 1 billion people dying" can imply the middle option there, but it sounds like you mean the central example to be closer to a billion than 7.9 billion or whatever. That feels like a narrow target and I don't really know what you have in mind.

Comment by Raemon on Higher-effort summer solstice: What if we used AI (i.e., Angel Island)? · 2024-06-28T18:13:40.513Z · LW · GW

I think the idea of an island ritual is really cool. (I also think it's a fine thing to try one year even if it turns out not to make sense as a permanent thing)

One of the things that feels hesitationy/cruxy to me is that I think it requires a larger team than one might expect (even after an intuitive scale-up from the normal Summer Solstices). 

I think an issue with Summer Solstice is that it requires a lot of basic logistical infrastructure to be "basically functional" / "Maslow Hierarchy level 1." And this ends up consuming most of the energy to run it. I think in recent years there hasn't been much spare capacity for planning ritual aspects.

A thing I expect with Angel Island is that you'll need even more infrastructure to be "basically functional" (handling ferry rides, making sure to have more food than usual, etc). And then developing ritual that really capitalizes on the island will be an additional large batch of effort. So I expect it to need more like 3x the number of organizers, rather than the 2x I might naively guess.

Comment by Raemon on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-27T19:48:36.996Z · LW · GW

Curated.

The overall point here seems true and important to me.

I think I either disagree, or am agnostic about, some of the specific examples given in the Myth vs Reality section. I don't think they're loadbearing for the overall point. I may try to write those up in more detail later.

Comment by Raemon on What distinguishes "early", "mid" and "end" games? · 2024-06-26T19:55:50.967Z · LW · GW

Nod.

I can't remember if I said this already, but the way I'm looking at this is "take stock of various clusters of strategy heuristics or frameworks, and think about which-if-any apply to stuff that I care about." So, less looking for universal principles, more "try on different strategic lenses and see what shakes out."

Comment by Raemon on Metastrategy get-started guide · 2024-06-26T19:18:31.227Z · LW · GW

I appreciate the writeup/followup!

I maybe want to flag that this is "one particular leg/trunk/ear of 'the elephant that is metastrategy'". My preferred way to intro people to it is with a full week of workshop classes that highlight different skills that interrelate with each other.

I think "have at least two plans that are pretty fleshed out and feel 'real' to you" is a major cornerstone of my personal practice, but I think the core element is "dedicate any significant fraction of time for thinking about 'how to do strategy', at all." See: "Metastrategic Brainstorming", a core building-block skill

Comment by Raemon on LLM Generality is a Timeline Crux · 2024-06-25T18:06:33.898Z · LW · GW

It so happens I hadn't seen your other posts, although I think there is something this post was aiming at that yours weren't quite pointed at, which is laying out "this is a crux for timelines, and these are the subcomponents of the crux." (But, I haven't yet read your posts in detail or thought about what else they might be good at that this post wasn't aiming for)

Comment by Raemon on LLM Generality is a Timeline Crux · 2024-06-24T20:51:08.180Z · LW · GW

Curated.

This is a fairly straightforward point, but one I haven't seen written up before, and one I've personally been wondering a bunch about. I appreciated this post both for laying out the considerations pretty thoroughly, including a bunch of related reading, and for laying out some concrete predictions at the end.

Comment by Raemon on What distinguishes "early", "mid" and "end" games? · 2024-06-24T17:05:11.419Z · LW · GW

Sure, but those areas aren’t the ones that have me interested in gaming metaphors to figure out how to solve my problems.

‘Found a startup’ is a bit more of an established process that ‘counts’ for my purposes here. There’s a lot of reading and learning I can do before getting started (compared to ‘build a functioning alignment community’). But even there, I think it’s less like playing a game I’ve already studied such that the early game is memorized, and more like sitting down to play a multiplayer game for the first time, which shares structure with other games but still involves a lot of learning on the fly. (I bet this is still reasonably true on your second or third startup, though maybe not if you literally are running Y Combinator.) Though I’m interested in hearing from people who have run multiple startups to see if they think that tracks.

Comment by Raemon on What distinguishes "early", "mid" and "end" games? · 2024-06-23T16:35:26.187Z · LW · GW

The "early game is what you have memorized" makes sense for literal games, but doesn't actually help much with my current use-case, which is "and this translates into real life." (when I'm thinking about these in game-form, I'm generally thinking about one-shot gaming, where you're trying hard to win your first time playing a game, such that figuring out the early game is part of the challenge)