Posts

Which LessWrong/Alignment topics would you like to be tutored in? [Poll] 2024-09-19T01:35:02.999Z
How do we know that "good research" is good? (aka "direct evaluation" vs "eigen-evaluation") 2024-07-19T00:31:38.332Z
Friendship is transactional, unconditional friendship is insurance 2024-07-17T22:52:41.967Z
Enriched tab is now the default LW Frontpage experience for logged-in users 2024-06-21T00:09:30.441Z
[New Feature] Your Subscribed Feed 2024-06-11T22:45:00.000Z
LW Frontpage Experiments! (aka "Take the wheel, Shoggoth!") 2024-04-23T03:58:43.443Z
Jobs, Relationships, and Other Cults 2024-03-13T05:58:45.043Z
The "context window" analogy for human minds 2024-02-13T19:29:10.387Z
Throughput vs. Latency 2024-01-12T21:37:07.632Z
Taking responsibility and partial derivatives 2023-12-31T04:33:51.419Z
The proper response to mistakes that have harmed others? 2023-12-31T04:06:31.505Z
Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" 2023-11-21T17:39:17.828Z
Is the Wave non-disparagement thingy okay? 2023-10-14T05:31:21.640Z
Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) 2023-09-28T02:48:58.994Z
Joseph Bloom on choosing AI Alignment over bio, what many aspiring researchers get wrong, and more (interview) 2023-09-17T18:45:28.891Z
What is the optimal frontier for due diligence? 2023-09-08T18:20:03.300Z
Conversation about paradigms, intellectual progress, social consensus, and AI 2023-09-05T21:30:17.498Z
Announcement: AI Narrations Available for All New LessWrong Posts 2023-07-20T22:17:33.454Z
Some reasons to not say "Doomer" 2023-07-09T21:05:06.585Z
Open Thread - July 2023 2023-07-06T04:50:06.735Z
Reacts now enabled on 100% of posts, though still just experimenting 2023-05-28T05:36:40.953Z
New User's Guide to LessWrong 2023-05-17T00:55:49.814Z
Thoughts on LessWrong norms, the Art of Discourse, and moderator mandate 2023-05-11T21:20:52.537Z
[New] Rejected Content Section 2023-05-04T01:43:19.547Z
Open & Welcome Thread - May 2023 2023-05-02T02:58:01.690Z
What 2025 looks like 2023-05-01T22:53:15.783Z
Should LW have an official list of norms? 2023-04-25T21:20:53.624Z
[Feedback please] New User's Guide to LessWrong 2023-04-25T18:54:40.379Z
LW moderation: my current thoughts and questions, 2023-04-12 2023-04-20T21:02:54.730Z
A Confession about the LessWrong Team 2023-04-01T21:47:11.572Z
[New LW Feature] "Debates" 2023-04-01T07:00:24.466Z
LW Filter Tags (Rationality/World Modeling now promoted in Latest Posts) 2023-01-28T22:14:32.371Z
The LessWrong 2021 Review: Intellectual Circle Expansion 2022-12-01T21:17:50.321Z
Petrov Day Retrospective: 2022 2022-09-28T22:16:20.325Z
LW Petrov Day 2022 (Monday, 9/26) 2022-09-22T02:56:19.738Z
Which LessWrong content would you like recorded into audio/podcast form? 2022-09-13T01:20:06.498Z
Relationship Advice Repository 2022-06-20T14:39:36.548Z
Open & Welcome Thread - May 2022 2022-05-02T23:47:21.181Z
A Quick Guide to Confronting Doom 2022-04-13T19:30:48.580Z
March 2022 Welcome & Open Thread 2022-03-02T19:00:43.263Z
[Beta Feature] Google-Docs-like editing for LessWrong posts 2022-02-23T01:52:22.141Z
[New Feature] Support for Footnotes! 2022-01-04T07:35:21.500Z
Open & Welcome Thread November 2021 2021-11-01T23:43:55.006Z
Petrov Day Retrospective: 2021 2021-10-21T21:50:40.042Z
Book Review Review (end of the bounty program) 2021-10-15T03:23:04.300Z
Petrov Day 2021: Mutually Assured Destruction? 2021-09-22T01:04:26.314Z
LessWrong is paying $500 for Book Reviews 2021-09-14T00:24:23.507Z
You can get feedback on ideas and external drafts too 2021-09-09T21:06:04.446Z
LessWrong is providing feedback and proofreading on drafts as a service 2021-09-07T01:33:10.666Z
(apologies for Alignment Forum server outage last night) 2021-08-25T14:45:06.906Z

Comments

Comment by Ruby on AIs Will Increasingly Attempt Shenanigans · 2024-12-19T07:27:29.935Z · LW · GW

Curated. This is a good post and in some ways ambitious as it tries to make two different but related points. One point – that AIs are going to increasingly commit shenanigans – is in the title. The other is a point regarding the recurring patterns of discussion whenever AIs are reported to have committed shenanigans. I reckon those patterns are going to be tough to beat, as strong forces (e.g. strong pre-existing conviction) cause people to take up the stances they do, but if there's hope for doing better, I think it comes from understanding the patterns.

There's a good round-up of recent results in here that's valuable on its own, but the post goes further and sets out to do something pretty hard: advocating for the correct interpretation of the results. This is hard because I think the correct interpretation is legitimately subtle and nuanced, with the correct update depending on your starting position (as Zvi explains). The post sets out to do this and, I think, succeeds.

Lastly, I want to express my gratitude for Zvi's hyperlinks to lighter material, e.g. "Not great, Bob" and "Stop it!" These AI topics make for a heavy world, and the lightness makes the pill go down easier. Thanks!

Comment by Ruby on LessWrong FAQ · 2024-12-13T18:40:42.881Z · LW · GW

Yes, true, fixed, thanks!

Comment by Ruby on avturchin's Shortform · 2024-10-31T17:35:36.795Z · LW · GW

Dog: "Oh ho ho, I've played imaginary fetch before, don't you worry."

Comment by Ruby on Occupational Licensing Roundup #1 · 2024-10-30T17:38:50.866Z · LW · GW

My regular policy is to not frontpage newsletters; however, I frontpaged this one as it's the first in the series and I think it's neat for more people to know this is a series Zvi intends to write.

Comment by Ruby on A bird's eye view of ARC's research · 2024-10-27T17:24:03.213Z · LW · GW

Curated! I think it's generally great when people explain what they're doing and why in a way legible to those not working on it. Great because it lets others potentially get involved, build on it, expose flaws or omissions, etc. This one seems particularly clear and well written. While I haven't read all of the research, nor am I particularly qualified to comment on it, I like the idea of a principled/systematic approach behind it, in comparison to a lot of work that doesn't come from a deeper, bigger framework.

(While I'm here though, I'll add a link to Dmitry Vaintrob's comment that Jacob Hilton described as the "best critique of ARC's research agenda that I have read since we started working on heuristic explanations". Eliciting such feedback is the kind of good thing that comes out of writing up agendas – it's possible or likely Dmitry was already tracking the work and already had these critiques, but a post like this seems like a good way to propagate them and have a public back and forth.)

Roughly speaking, if the scalability of an algorithm depends on unknown empirical contingencies (such as how advanced AI systems generalize), then we try to make worst-case assumptions instead of attempting to extrapolate from today's systems.

I like this attitude. The human standard, I think often in alignment work too, is to argue for why one's plan will work and to find stories supporting that; adopting the opposite methodology, especially given the unknowns, is much needed in alignment work.

Overall, this is neat. Kudos to Jacob (and the rest of the team) for taking the time to put this all together. It doesn't seem all that quick to write, and I think it'd be easy to conclude they ought not to take time away from further object-level research to write it. Thanks!

Comment by Ruby on New User's Guide to LessWrong · 2024-10-26T18:37:38.606Z · LW · GW

Thanks! Fixed

Comment by Ruby on Why I’m not a Bayesian · 2024-10-19T17:43:48.961Z · LW · GW

Curated. I really like that even though LessWrong is 1.5 decades old now and has Bayesianism as an assumed background paradigm while people discuss everything else, we can nonetheless have good exploration of our fundamental epistemological beliefs.

The descriptions of unsolved problems, or at least of the incompleteness of Bayesianism, strike me as technically correct. Like others, I'm not convinced by Richard's favored approach, but it's interesting. In practice, I don't think these problems undermine the use of Bayesianism in typical LessWrong thought. For example, I never thought of credences as being rigorously applied to "propositions", but rather to "hypotheses" or possibilities for how things are, which could already be framed as models. Context-dependent terms like "large", or quantities without explicit tolerances like "500ft", are the kind of thing you taboo or reduce if necessary, whether for your own reasoning or for a bet.

That said, I think the claims about mistakes and downstream consequences of the way people do Bayesianism are interesting. I'm reading a claim here I don't recall seeing before. Although we already knew that bounded reasoners aren't logically omniscient, Richard is adding a claim (if I'm understanding correctly) that this means that no matter how much strong evidence we technically have, we shouldn't have really high confidence in any domain that requires heavy processing of that evidence, because we're not that good at processing. I do think that leaves us with a question of judging when there's enough evidence to be conclusive without complicated processing.

Something I might like a bit more factored out is the distinction between the rigorous gold-standard epistemological framework and the manner in which we apply our epistemology day to day.

I fear this curation notice would be better if I'd read all the cited sources on critical rationalism, Knightian uncertainty, etc., and I've added them to my reading list. All in all, kudos for putting some attention on the fundamentals.

Comment by Ruby on Open Thread Fall 2024 · 2024-10-17T18:03:18.308Z · LW · GW

Welcome! Sounds like you're on the one hand at the start of a significant journey, but also that you've come a long distance already. I hope you find much helpful stuff on LessWrong.

I hadn't heard of Daniel Schmachtenberger, but I'm glad to have learned of him and his work. Thanks.

Comment by Ruby on 2024 Petrov Day Retrospective · 2024-09-29T03:22:28.082Z · LW · GW

The actual reason why we lied in the second message was "we were in a rush and forgot." 

My recollection is we sent the same message to the majority group because:

  1. Treating it differently would require special-casing it, and that would have taken more effort.
  2. If selectors of different virtues had received different messages, we wouldn't have been able to properly compare their behavior.
  3. [At least in my mind], this was a game/test, and when playing games you lie to people in the context of the game to make things work. Alternatively, it's like how scientific experimenters mislead subjects for the sake of the study.

Comment by Ruby on Ruby's Quick Takes · 2024-09-29T00:53:18.142Z · LW · GW

Added!

Comment by Ruby on Ruby's Quick Takes · 2024-09-29T00:52:40.556Z · LW · GW

Added!

Comment by Ruby on Ruby's Quick Takes · 2024-09-29T00:51:09.251Z · LW · GW

Money helps. I could probably buy a lot of dignity points for a billion dollars. With a trillion, variance definitely goes up because you could try crazy stuff that could backfire (true for a billion too), but the EV of such a world is better.

I don't think there's anything that's as simple as writing a check though.

US Congress gives money to specific things. I do not have a specific plan for a trillion dollars.

I'd bet against Terence Tao being some kind of amazing breakthrough researcher who changes the playing field.

Comment by Ruby on Ruby's Quick Takes · 2024-09-28T17:14:37.045Z · LW · GW

Your access should be activated within 5-10 minutes. Look for the button in the bottom right of the screen.

Comment by Ruby on Ruby's Quick Takes · 2024-09-28T15:57:33.524Z · LW · GW

Not an original observation but yeah, separate from whether it's desirable, I think we need to be planning for it.

Comment by Ruby on Ruby's Quick Takes · 2024-09-28T00:22:24.305Z · LW · GW

Just thinking through simple stuff for myself, very rough, posting in the spirit of quick takes
 

  1. At present, we are making progress on the Technical Alignment Problem[2] and could probably solve it within 50 years.

  2. Humanity is on track to build ~lethal superpowerful AI in more like 5-15 years.
  3. Working on technical alignment (direct or meta) only matters if we can speed up overall progress by 10x (or some lesser factor if AI capabilities progress is delayed from its current trajectory). Improvements of 2x are not likely to get us to an adequate technical solution in time.
  4. Working on slowing things down is only helpful if it results in delays of decades.
    1. Shorter delays are good in so far as they give you time to buy further delays.
  5. There is technical research that is useful for persuading people to slow down (and maybe also solving alignment, maybe not). This includes anything that demonstrates scary capabilities or harmful proclivities, e.g. a bunch of mech interp stuff, all the evals stuff.
  6. AI is in fact super powerful, and people who perceive there being value to be had aren't entirely wrong[3]. This results in a very strong motivation to pursue AI and resist efforts to be stopped.

    1. These motivations apply to both businesses and governments.

  7. People are also developing stances on AI along ideological, political, and tribal lines, e.g. being anti-regulation. This generates strong motivations for AI topics even separate from immediate power/value to be gained.
  8. Efforts to agentically slow down the development of AI capabilities are going to be matched by agentic efforts to resist those efforts and push in the opposite direction.
    1. Efforts to convince people that we ought to slow down will be matched by people arguing that we must speed up.
    2. Efforts to regulate will be matched by efforts to block regulation. There will be efforts to repeal or circumvent any passed regulation.
    3. If there are chip controls or whatever, there will be efforts to get around them. If there are international agreements, there will be efforts to clandestinely evade them.
    4. If there are successful limitations on compute, people will compensate and focus on algorithmic progress.
  9. Many people are going to be extremely resistant to being swayed on topics of AI, no matter what evidence is coming in. Much rationalization will be furnished to justify proceeding no matter the warning signs.
  10. By and large, our civilization has a pretty low standard of reasoning.
  11. People who want to speed up AI will use falsehoods and bad logic to muddy the waters, and many people won’t be able to see through it[4]. No matter the evals or other warning signs, there will be people arguing it can be fixed without too much trouble and we must proceed.

  12. In other words, there's going to be an epistemic war and the other side is going to fight dirty[5]. I think even a lot of clear evidence will have a hard time against people's motivations/incentives and bad arguments.

  13. When there are two strongly motivated sides, it seems likely we end up in a compromise state, e.g. regulation passes but it's not the regulation as originally designed, which even in its original form was only maybe enough.
  14. It's unclear to me whether "compromise regulation" will be adequate, or whether any regulation that costs people billions in anticipated profit will end with them giving up.

Further Thoughts

  1. People aren’t thinking or talking enough about nationalization.
    1. I think it’s interesting because I expect that a lot of regulation about what you can and can’t do stops being enforceable once the development is happening in the context of the government performing it.

What I Feel Motivated To Work On

Thinking through the above, I feel less motivated to work on things that feel like they’ll only speed up technical alignment problem research by amounts < 5x. In contrast, maybe there’s more promise in:

  • Cyborgism or AI-assisted research that gets 5x speedups but applies differentially to technical alignment research
  • Things that convince people that we need to radically slow down
    • good writing
    • getting in front of people
    • technical demonstrations
    • research that shows the danger
      • why the whole paradigm isn’t safe
      • evidence of deception, etc.
  • Development of good (enforceable) “if-then” policy that will actually result in people stopping in response to various triggers, and not just result in rationalization for why actually it’s okay to continue (ignore signs) or just a bandaid solution
  • Figuring out how to overcome people’s rationalization
  • Developing robust policy stuff that’s set up to withstand lots of optimization pressure to overcome it
  • Things that cut through the bad arguments of people who wish to say there’s no risk and discredit the concerns
  • Stuff that prevents national arms races / gets into national agreements
  • Thinking about how to get 30 year slowdowns
  1. ^

     By "slowing down", I mean all activities and goals which are about preventing people from building lethal superpowerful AI, be it via getting them to stop, getting them to go slower because they're being more cautious, limiting what resources they can use, setting up conditions for stopping, etc.

  2. ^

     How to build a superpowerful AI that does what we want.

  3. ^

     They’re wrong about their ability to safely harness the power, but not if you could harness, you’d have a lot of very valuable stuff.

  4. ^

     My understanding is that a lot of falsehoods were used to argue against SB1047 by e.g. a16z.

  5. ^

     Also some people arguing for AI slowdown will fight dirty too, eroding trust in AI slowdown people, because some people think that when the stakes are high you just have to do anything to win, and are bad at consequentialist reasoning.

Comment by Ruby on Ruby's Quick Takes · 2024-09-27T23:41:01.479Z · LW · GW

The “Deferred and Temporary Stopping” Paradigm

Quickly written. Probably missed where people are already saying the same thing.

I actually feel like there’s a lot of policy and research effort aimed at slowing down the development of powerful AI–basically all the evals and responsible scaling policy stuff.

A story for why this is the AI safety paradigm we’ve ended up in is because it’s palatable. It’s palatable because it doesn’t actually require that you stop. Certainly, it doesn’t right now. To the extent companies (or governments) are on board, it’s because those companies are at best promising “I’ll stop later when it’s justified”. They’re probably betting that they’ll be able to keep arguing it’s not yet justified. At the least, it doesn’t require a change of course now and they’ll go along with it to placate you.

Even if people anticipate they will trigger evals and maybe have to delay or stop releases, I would bet they’re not imagining they have to delay or stop for all that long (if they’re even thinking it through that much). Just long enough to patch or fix the issue, then get back to training the next iteration. I'm curious how many people imagine that once certain evaluations are triggered, the correct update is that deep learning and transformers are too shaky a foundation. We might then need to stop large AI training runs until we have much more advanced alignment science, and maybe a new paradigm.

I'd wager that if certain evaluations are triggered, there will be people vying for the smallest possible argument to get back to business as usual. Arguments about not letting others get ahead will abound. Claims that it's better for us to proceed (even though it's risky) than the Other who is truly reckless. Better us with our values than them with their threatening values.

People genuinely concerned about AI are pursuing these approaches because they seem feasible compared to an outright moratorium. You can get companies and governments to make agreements that are “we’ll stop later” and “you only have to stop while some hypothetical condition is met”. If the bid was “stop now”, it’d be a non-starter.

And so the bet is that people will actually be willing to stop later to a much greater extent than they’re willing to stop now. As I write this, I’m unsure of what probabilities to place on this. If various evals are getting triggered in labs:

  • What probability is there that the lab listens to this vs ignores the warning sign and it doesn’t even make it out of the lab?
  • If it gets reported to the government, how strongly does the government insist on stopping? How quickly is it appeased before training is allowed to resume?
  • If a released model causes harm, how many people skeptical of AI doom concerns does it convince to change their mind and say “oh, actually this shouldn’t be allowed”? How many people, how much harm?
  • How much do people update that AI in general is unsafe vs that particular AI from that particular company is unsafe, and only they alone should be blocked?
  • How much do people argue that even though there are signs of risk here, it'd be more dangerous to let others pull ahead?
  • And if you get people to pause for a while and focus on safety, how long will they agree to a pause for before the shock of the damaged/triggered eval gets normalized and explained away and adequate justifications are assembled to keep going?

There are going to be people who fight tooth and nail, weight and bias, to keep the development going. If we assume that they are roughly as motivated and agentic as us, who wins? Ultimately we have the harder challenge in that we want to stop others from doing something. I think the default is that people get to do things.

I think there's a chance that various evals and regulations do meaningfully slow things down, but I write this to express the fear that they're false reassurance–there's traction only because people who want to build AI are betting this won't actually require them to stop.

Related:

Comment by Ruby on Skills from a year of Purposeful Rationality Practice · 2024-09-26T07:13:34.220Z · LW · GW

Curated. I think Raemon's been doing a lot of work in the last year pushing this stuff, and this post pulls together in one place a lot of good ideas, advice, and approaches.

I would guess that because of slow or absent feedback loops, people don't realize how bad human reasoning and decision-making is when operating outside of familiar domains with quick feedback. That's many domains, but certainly the whole AI situation. Ray is going after the hard stuff here.

At the same time, this stuff ends up feeling like the "eat your vegetables" of reasoning and decision-making. It's not sexy, or at least it's not that fun to sit down and e.g. try to brainstorm further plans when you already have one that's appealing, or to backchain from your ostensible goal. I think we'd be in a better place if these skills and practices were normalized, in the sense that there's a norm that you do these things and if you don't, then you're probably screwing up.

Comment by Ruby on Ruby's Quick Takes · 2024-09-22T19:55:31.590Z · LW · GW

Yeah, I think a question is whether I want to say "that kind of wireheading isn't myopic" vs "that isn't wireheading". Probably fine either way if you're consistent / taboo adequately.

Comment by Ruby on Lighthaven Sequences Reading Group #3 (Tuesday 09/24) · 2024-09-22T16:19:35.230Z · LW · GW

My guess is Ben created the event while on the East Coast and 6pm got timezone converted for West Coast. I've fixed it.

Comment by Ruby on Ruby's Quick Takes · 2024-09-22T16:16:31.304Z · LW · GW

Since I'm rambling, I'll note another thought I've been mulling over:

My notion of value is not the same as the value that my mind was optimized to pursue. Meaning that I ought to be wary that typical human thought patterns might not be serving me maximally.

That's of course on top of the fact that evolution's design is flawed even by its own goals; humans rationalize left, right, and center, are awfully myopic, and we'll likely all die because of it.

Comment by Ruby on Ruby's Quick Takes · 2024-09-22T16:13:16.380Z · LW · GW

There's an age-old tension between ~"contentment" and ~"striving" with no universally accepted compelling resolution, even if many people feel they have figured it out. Related:

In my own thinking, I've been trying to ground things out in a raw consequentialism: one's cognition (including emotions) is just supposed to take you towards more value (boring, but reality is allowed to be)[1].

I fear that a lot of what people do is ~"wireheading". The problem with wireheading is that it's myopic. You feel good now (a small amount of value) at the expense of greater value later. Historically, this has made me instinctively wary of various attempts to experience more contentment, such as gratitude journaling. Do such things curb the pursuit of value in exchange for feeling better (less unpleasant discontent) in the moment?

Clarity might come from further reduction of what "value" is. The primary notion of value I operate with is preference satisfaction: the world is how you want it to be. But also a lot of value seems to flow through experience (and the preferred state of the world is one where certain experiences happen).

A model whereby gratitude journaling (or general "attend to what is good" motions) maximizes value rather than the opposite is that it's about turning 'potential value' into 'experienced actual value'. The sunset on its own is merely potential value; it becomes experienced actual value when you stop and take it in. The same goes for many good things in one's life you might have just gotten used to, but which could be enjoyed and savored (harvested) again by attending to them.

Relatedly, I've thought about a distinction between actions that "sow value" vs "reap value", roughly mapping onto actions that are instrumental vs terminal to value, roughly mapping to "things you do to get enjoyment later" vs "things you actually enjoy[2] now".

My guess is that to maximize value over one's lifetime (the "return" in RL terms), one shouldn't defer reaping/harvesting value until the final timestep. Instead you want to be doing a lot of sowing but also reaping/harvesting as you go, and gratitude-journaling-esque, focus-on-what-you've-already-got stuff facilitates that, and is part of value maximization, not simply wireheading.

It's a bit weird in our world, because the future value you can be sowing for (i.e. the entire cosmic endowment not going to waste) is so overwhelming that it kinda feels like maybe it should outweigh any value you might reap now. My handwavy answer is something something human psychology: it doesn't work to do that.

I'm somewhat rederiving standard "obvious" advice, but I don't think it actually is obvious, and figuring out better models and frameworks might ultimately resolve the contentment/striving tension (/ the focus-on-what-you've-got vs focus-on-what-you-don't tension).

 

  1. ^

    And as usual, that doesn't mean one tries to determine the EV of every individual mental act. It means that when setting up policies, habits, principles, etc., ultimately the thing that determines whether those are good is the underlying value consequentialism.

  2. ^

    To momentarily speak in terms of experiential value vs preference satisfaction value.

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T02:17:34.965Z · LW · GW

Applied Game Theory

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T02:16:32.645Z · LW · GW

CFAR-style Rationality Techniques

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T01:48:33.963Z · LW · GW

Anthropics

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T01:48:13.441Z · LW · GW

Decision Theory

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T01:47:19.752Z · LW · GW

Agent Foundations

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T01:46:57.784Z · LW · GW

Natural Latents

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T01:45:10.836Z · LW · GW

Infra-Bayesianism

Comment by Ruby on Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 2024-09-19T01:40:34.033Z · LW · GW

Poll for LW topics you'd like to be tutored in
(please use agree-react to indicate you'd personally like tutoring on a topic, I might reach out if/when I have a prototype)

Note: Hit cmd-f or ctrl-f (whatever normally opens search) to automatically expand all of the poll options below.

Comment by Ruby on Ruby's Quick Takes · 2024-09-16T19:41:18.050Z · LW · GW

Added! (Can take a few min to activate though.) My advice: for each one of those, ask it in a new separate/fresh chat, because it'll only do a single search per chat.

Comment by Ruby on Building an Inexpensive, Aesthetic, Private Forum · 2024-09-09T20:01:34.102Z · LW · GW

I'm not sure where to rate it on ease of setup. It's not as out-of-the-box as other services.

Comment by Ruby on Perhaps Try a Little Therapy, As a Treat? · 2024-09-07T06:42:01.632Z · LW · GW

At this point it hardly seems to add anything to write it explicitly, since I think observers are reaching the same conclusion without help, but I would be utterly horrified to be targeted by Caleb Ditchfield in the way that it seems Duncan has been.

It strikes me as an inadequacy of our civilization that someone can perpetrate harms like this and not be stopped. To the extent I have any power to prevent harms like this, e.g. my ~vote on whether Caleb is allowed in communal spaces, I do vote to remove him for the benefit of others.

It is a very sad situation. It seems that Caleb is just very ill, and the worst kind of ill where there's massive denial and he'll fight all efforts to persuade him to get help. I hope that writing this comment, while not really changing anything, at least pushes in the right direction.

Comment by Ruby on Ruby's Quick Takes · 2024-09-03T21:17:34.678Z · LW · GW

You now have access to the LW LLM Chat prototype!

also think the low-friction integration might make it useful for clarifying math- or programming-heavy posts, though I'm not sure I'll want this often.

That's actually one of my favorite use-cases.

Comment by Ruby on Ruby's Quick Takes · 2024-09-03T21:16:36.839Z · LW · GW

You've been granted access to the LW LLM Chat prototype! 

No need to provide an API key (we haven't even set that up; I was just explaining why we're having people manually request access rather than making it immediately available more broadly).

Comment by Ruby on Ruby's Quick Takes · 2024-09-01T19:21:12.367Z · LW · GW

Cheers! Comments here are good, so is LW DM, or Intercom.

Comment by Ruby on Ruby's Quick Takes · 2024-09-01T02:33:51.333Z · LW · GW

Not available on mobile at this time, I'm afraid.

Comment by Ruby on Ruby's Quick Takes · 2024-08-31T18:47:12.652Z · LW · GW

@Neel Nanda @Stephen Fowler @Saul Munn – you've been added.

I'm hoping to get a PR deployed today that'll make a few improvements:
- narrow the width so it doesn't overlap the post on smaller screens than before
- load more posts into the context window by default
- upweight embedding distance relative to karma in the embedding search for relevant context to load in (roughly the kind of weighting sketched below)
- various additions to the system response to improve tone and style
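
For illustration only, here's a rough sketch of the kind of embedding-distance-vs-karma weighting mentioned above. The function name, weights, and normalization are hypothetical assumptions, not the actual LessWrong scoring code:

```python
import numpy as np

def relevance_score(query_vec, post_vec, karma,
                    similarity_weight=0.8, karma_weight=0.2):
    """Hypothetical blend of semantic similarity and post karma."""
    # Cosine similarity between the query and post embeddings
    sim = float(np.dot(query_vec, post_vec) /
                (np.linalg.norm(query_vec) * np.linalg.norm(post_vec) + 1e-9))
    # Log-scale karma so highly upvoted posts don't swamp semantic relevance
    karma_term = np.log1p(max(karma, 0)) / np.log1p(1000)
    return similarity_weight * sim + karma_weight * karma_term
```

In a scheme like this, "upweighting embedding distance relative to karma" just means increasing similarity_weight relative to karma_weight.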

Comment by Ruby on Ruby's Quick Takes · 2024-08-31T18:43:42.822Z · LW · GW

Added! That's been one of my go-to questions for testing variations of the system; I'd suggest just trying it yourself.

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T22:02:54.712Z · LW · GW

I'll add you now, though I'm in the middle of some changes that should make it better for lit search.

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T18:47:39.638Z · LW · GW

You are added!

Claude 3.5 Sonnet is the chat client, and yes, with RAG using OpenAI text-embedding-3-large for embeddings.
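
For anyone curious what that pipeline might look like end to end, here's a minimal sketch assuming posts already have precomputed embeddings. The function names, top-k cutoff, and prompt wording are illustrative assumptions, not the actual LW implementation:

```python
import numpy as np
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude = anthropic.Anthropic()

def embed(text: str) -> np.ndarray:
    # Embed text with OpenAI's text-embedding-3-large
    resp = openai_client.embeddings.create(model="text-embedding-3-large", input=[text])
    return np.array(resp.data[0].embedding)

def answer(query: str, posts: list[dict]) -> str:
    # posts: [{"title": str, "body": str, "embedding": np.ndarray}, ...]
    q = embed(query)
    # Rank posts by cosine similarity to the query
    ranked = sorted(
        posts,
        key=lambda p: float(np.dot(q, p["embedding"]) /
                            (np.linalg.norm(q) * np.linalg.norm(p["embedding"]))),
        reverse=True,
    )
    # Load the top few posts into Claude's context and ask the question
    context = "\n\n".join(f"# {p['title']}\n{p['body']}" for p in ranked[:5])
    message = claude.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system="Answer using the provided LessWrong posts as context.",
        messages=[{"role": "user", "content": f"{context}\n\nQuestion: {query}"}],
    )
    return message.content[0].text
```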

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T18:38:02.302Z · LW · GW

Oh, you access it with the sparkle button in the bottom right.
 

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T15:12:05.054Z · LW · GW

Sounds good! I'd recommend pasting in the actual contents together with a description of what you're after.

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T15:10:56.447Z · LW · GW

@Chris_Leong @Jozdien @Seth Herd @the gears to ascension @ProgramCrafter 

You've all been granted access to the LW integrated LLM Chat prototype. Cheers!

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T01:44:36.755Z · LW · GW

Added!

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T01:44:23.904Z · LW · GW

Added!

Comment by Ruby on Ruby's Quick Takes · 2024-08-30T01:13:57.262Z · LW · GW

Seeking Beta Users for LessWrong-Integrated LLM Chat

Comment here if you'd like access. (Bonus points for describing ways you'd like to use it.)


A couple of months ago, a few of the LW team set out to see how LLMs might be useful in the context of LW. It feels like they should be useful at some point before the end; maybe that point is now. My own attempts to get Claude to be helpful for writing tasks weren't particularly succeeding, but LLMs are pretty good at reading a lot of things quickly, and they can also be good at explaining technical topics.

So I figured just making it easy to load a lot of relevant LessWrong context into an LLM might unlock several worthwhile use-cases. To that end, Robert and I have integrated a Claude chat window into LW, with the key feature that it will automatically pull in relevant LessWrong posts and comments to what you're asking about.

I'm currently seeking beta users. 

Since using the Claude API isn't free and we haven't figured out a payment model, we're not rolling it out broadly. But we are happy to turn it on for select users who want to try it out. 

Comment here if you'd like access. (Bonus points for describing ways you'd like to use it.)

Comment by Ruby on Quick look: applications of chaos theory · 2024-08-19T04:26:05.837Z · LW · GW

Mandelbrot’s work on phone line errors is more upstream than downstream of fractals, but produced legible economic value by demonstrating that phone companies couldn’t solve errors via their existing path of more and more powerful phone lines. Instead, they needed redundancy to compensate for the errors that would inevitably occur. Again I feel like it doesn’t take a specific mathematical theory to consider redundancy as a solution, but that may be because I grew up in a post-fractal world where the idea was in the water supply. And then I learned the details of TCP/IP where redundancy is baked in.

Huh, I thought all of this was covered by Shannon information theory already.

Comment by Ruby on What is "True Love"? · 2024-08-18T18:25:35.908Z · LW · GW

I think if you find the true name of true love, it'll be pretty clear what it is and how to avoid it.

Comment by Ruby on jacquesthibs's Shortform · 2024-08-13T18:09:03.951Z · LW · GW

Same question as above.

Comment by Ruby on jacquesthibs's Shortform · 2024-08-13T18:08:43.992Z · LW · GW

I'm curious how much you're using this and whether it's turning out to be useful on LessWrong. I'm interested because we've been thinking about integrating LLM stuff like this into LW itself.