Posts

Can LessWrong provide me with something I find obviously highly useful to my own practical life? 2023-07-07T03:08:58.183Z

Comments

Comment by agrippa on A new option for building lumenators · 2023-12-14T08:34:38.003Z · LW · GW

flood lights seem best?

Comment by agrippa on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-18T00:27:40.793Z · LW · GW

However, Annie has not yet provided what I would consider direct / indisputable proof that her claims are true. Thus, rationally, I must consider Sam Altman innocent.

This is an interesting view on rationality that I hadn't considered

Comment by agrippa on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-08-19T21:43:14.969Z · LW · GW

Omen does decouple the two, but it has prohibitive gas costs and sees essentially no usage as a result.

Comment by agrippa on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-08-19T21:42:44.014Z · LW · GW

Augur was a total failboat. Almost all of these projects couple the market protocol to the resolution protocol, which is stupid, especially if you are Augur and your ideas about making resolution protocols are really dumb.
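To illustrate the decoupling point in code, here is a minimal sketch. It is hypothetical and not Augur's or Omen's actual interface: the idea is just that the market only depends on a narrow oracle interface, so the resolution mechanism can be swapped out without touching the market logic.

```python
# Hypothetical sketch of a decoupled design (not any real protocol's API).
from abc import ABC, abstractmethod
from typing import Optional


class ResolutionOracle(ABC):
    """Anything that can eventually answer a yes/no question."""

    @abstractmethod
    def outcome(self, question_id: str) -> Optional[bool]:
        """Return True/False once resolved, or None while still open."""


class BinaryMarket:
    """Toy binary market that defers resolution to a pluggable oracle."""

    def __init__(self, question_id: str, oracle: ResolutionOracle):
        self.question_id = question_id
        self.oracle = oracle
        self.yes_stake = 0.0
        self.no_stake = 0.0

    def bet(self, side: bool, amount: float) -> None:
        # Market logic knows nothing about how the question gets resolved.
        if side:
            self.yes_stake += amount
        else:
            self.no_stake += amount

    def payout_per_unit(self) -> Optional[float]:
        """Winnings per unit staked on the winning side, once resolved."""
        result = self.oracle.outcome(self.question_id)
        if result is None:
            return None  # unresolved; the market doesn't care why
        pool = self.yes_stake + self.no_stake
        winning = self.yes_stake if result else self.no_stake
        return pool / winning if winning > 0 else None


class TrustedReporter(ResolutionOracle):
    """One possible resolution mechanism among many."""

    def __init__(self):
        self._answers = {}

    def report(self, question_id: str, answer: bool) -> None:
        self._answers[question_id] = answer

    def outcome(self, question_id: str) -> Optional[bool]:
        return self._answers.get(question_id)
```

The point of the sketch is that `TrustedReporter` could be replaced by a dispute game, a token vote, or anything else implementing `ResolutionOracle`, and the market code wouldn't change. Coupling the two means a bad resolution design drags the whole market protocol down with it.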

Comment by agrippa on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-08-19T21:40:12.064Z · LW · GW

Your understanding is correct. I built one which is currently offline, I'll be in touch soon.

Comment by agrippa on Can LessWrong provide me with something I find obviously highly useful to my own practical life? · 2023-07-10T05:32:30.577Z · LW · GW

I found the stuff about relationship success in Luke's first post here to be useful, thanks!

Comment by agrippa on Can LessWrong provide me with something I find obviously highly useful to my own practical life? · 2023-07-07T06:26:04.878Z · LW · GW

Ok, this kind of tag is exactly what I was asking about. I'll have a look at these posts.

Comment by agrippa on Grant applications and grand narratives · 2023-07-07T02:43:18.339Z · LW · GW

Thanks for giving an example of a narrow project, I think it helps a lot. I have been around EA for several years, and at this point grandiose projects and narratives alienate me; hearing about projects like yours makes my ears perk up and makes me feel like maybe I should devote more time and attention to the space.

Comment by agrippa on Going Crazy and Getting Better Again · 2023-07-06T07:32:10.683Z · LW · GW

I guess it’s good to know it’s possible to be both a LW-style rationalist and quite mentally ill.

Not commenting on distributions here, but it sure as fuck is possible. 

Comment by agrippa on Moderation notes re: recent Said/Duncan threads · 2023-07-06T06:55:07.822Z · LW · GW

I liked the analogy and I also like weird bugs

Comment by agrippa on My Time As A Goddess · 2023-07-06T03:53:23.875Z · LW · GW

While normal from a normal perspective, this post is strange from a rationalist perspective, since the lesson you describe is that X is bad, but the evidence given is that you had a good experience with X aside from mundane interpersonal drama that everyone experiences and that doesn't sound particularly exacerbated by X. Aside from that, you say it contributed to psychosis years down the line, but it's not very clear to me that there is a strong causal relationship, or any at all.

(of course, your friend's bad experience with cults is a good reason to update against cults being safe to participate in)

I am not really a cult advocate. But it is okay (and certainly Bayesian) to just have a good personal experience with something and conclude that it can be safer or nicer than people generally think. Just because you're crazy doesn't mean everything you did was bad.

Edit: This is still on my mind so I will write some more. I feel like the attitude in your post, especially your addendum, is that it's fundamentally, obviously wrong to feel like your experience was okay or an okay thing to do. And that the fact you feel/felt okay about it is strong evidence that you need to master rationality more, in order to be actually okay. And that once you do master rationality, you will no longer feel it was okay.

But "some bad things happened and also some good things, I guess it was sort of okay" is in fact a reasonable way to feel. It does sound like some bad things happened, some good things, and that it was just sort of okay (if not better). There is outside view evidence about cults being bad. Far be it from me to say that you should not avoid cults. We should certainly incorporate the outside view into our choices. But successfully squashing your inside view because it contradicts the outside view is not really an exercise in rationality, and is often the direct opposite. Also, it makes me sad.

Comment by agrippa on The Dictatorship Problem · 2023-06-11T06:56:08.778Z · LW · GW

How are you personally preparing for this?

Comment by agrippa on Why hasn't deep learning generated significant economic value yet? · 2022-05-18T14:39:45.842Z · LW · GW

Recently I learned that Pixel phones actually contain TPUs. This is a good indicator of how much deep learning is being used in practice (particularly by the camera, I think).

Comment by agrippa on What an actually pessimistic containment strategy looks like · 2022-04-22T20:07:17.003Z · LW · GW

Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why. 

Comment by agrippa on What an actually pessimistic containment strategy looks like · 2022-04-22T20:06:15.501Z · LW · GW

Open tolerance of the people involved with the status quo, and fear of alienating / making enemies of powerful groups, is a core part of current EA culture! Steve's top comment on this post is an example of enforcing/reiterating this norm.

It's an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow-blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled Covid. That fits what I'd consider a taboo: something any socially savvy person would pick up on and internalize if they were around it.

Maybe this norm of open tolerance is downstream of the implications of truly considering some people to be your adversaries (which you might do if you thought delaying AI development by even an hour was a considerable moral victory, as the OP seems to). Doing so does expose you to danger. I would point out that lc's post analogizes their relationship with AI researchers to Israel's relationship with Iran, and when I think of Israel's resistance to Iran, nonviolence is not the first thing that comes to mind.

Comment by agrippa on What an actually pessimistic containment strategy looks like · 2022-04-22T19:08:01.886Z · LW · GW

So the first step to good outreach is not treating AI capabilities researchers as the enemy. We need to view them as our future allies, and gently win them over to our side by the force of good arguments that meets them where they're at, in a spirit of pedagogy and truth-seeking.

 

To this effect I have advocated that we should call it "Different Altruism" instead of "Effective Altruism", because by leading with the idea that a movement involves doing altruism better than the status quo, we are going to trigger and alienate people who are part of the status quo whom we could have instead won over by being friendly and gentle.

I often imagine a world where we had ended up with a less aggressive and impolite name attached to our arguments. I mean, think about how virality works: making every single AI researcher even slightly more resistant to engaging with your movement (by priming them to be defensive) is going to have a massive impact on the probability of ever reaching critical mass.

Comment by agrippa on What an actually pessimistic containment strategy looks like · 2022-04-05T06:39:36.785Z · LW · GW

Thanks a lot for doing this and posting about your experience. I definitely think that nonviolent resistance is a weirdly neglected approach; "mainstream" EA certainly seems against it. I am glad you are getting results, and not even that surprised.

You may be interested in discussion here, I made a similar post after meeting yet another AI capabilities researcher at FTX's EA Fellowship (she was a guest, not a fellow): https://forum.effectivealtruism.org/posts/qjsWZJWcvj3ug5Xja/agrippa-s-shortform?commentId=SP7AQahEpy2PBr4XS
 

Comment by agrippa on MIRI announces new "Death With Dignity" strategy · 2022-04-05T05:09:11.974Z · LW · GW

I'm interested in working on dying with dignity.

Comment by agrippa on MIRI announces new "Death With Dignity" strategy · 2022-04-03T16:13:04.542Z · LW · GW

I actually feel calmer after reading this, thanks. It's nice to be frank. 

For all the handwringing in the comments about whether somebody might find this post demotivating, I wonder if there are any such people. It seems to me like reframing a task from something that is not in your control (saving the world) to something that is (dying with personal dignity) is exactly the kind of reframing that people find much more motivating.

Comment by agrippa on Worth checking your stock trading skills · 2021-11-11T23:52:38.897Z · LW · GW

Related post: https://www.lesswrong.com/posts/ybQdaN3RGvC685DZX/the-emh-is-false-specific-strong-evidence

One relevant thing here is the baseline P(beats market) given [rat / smart] & [tries to beat market]. In my own anecdotal dataset of about 15 people the probability here is about 100%, and the amount of wealth among these people is also really high. The obvious selection effects are, well, obvious. But the EMH is just a heuristic, and you probably have access to stronger evidence.
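To make the base-rate framing concrete, here is a toy Bayes-odds calculation. Every number in it is a made-up assumption for illustration only, not a claim about the real base rate or about how much my anecdata should move anyone.

```python
# Toy Bayesian update with entirely made-up numbers: the EMH supplies a prior,
# personal observations supply a likelihood ratio, and selection effects should
# shrink how strong that likelihood ratio is allowed to be.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Convert a prior probability and a likelihood ratio into a posterior."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior_beats_market = 0.05    # assumed EMH-ish prior that a given smart trier beats the market
lr_naive = 20.0              # "~15 of ~15 friends who tried succeeded", taken at face value
lr_selection_adjusted = 4.0  # the same evidence heavily discounted for selection effects

print(posterior(prior_beats_market, lr_naive))               # ~0.51
print(posterior(prior_beats_market, lr_selection_adjusted))  # ~0.17
```

The shape of the argument is just that even a discounted version of this kind of evidence can move you meaningfully away from the "nobody beats the market" prior, while how far it moves you depends mostly on how hard you discount for selection.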

Comment by agrippa on Speaking of Stag Hunts · 2021-11-09T17:03:32.126Z · LW · GW

I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of.

I want to affirm that this seems to me like something that should alarm you. To me, a big part of rationality is being resilient to this phenomenon, and a big part of successful rationality norms is banning the tools that produce it.

Comment by agrippa on Speaking of Stag Hunts · 2021-11-09T02:29:31.310Z · LW · GW

Great, thanks.

Comment by agrippa on Speaking of Stag Hunts · 2021-11-09T02:19:32.196Z · LW · GW

I was not aware of any examples of anything anyone would refer to as prejudicial mobbing that had real consequences. I'd be curious to hear about your experience of prejudicial mobbing.

Comment by agrippa on Speaking of Stag Hunts · 2021-11-09T00:47:18.496Z · LW · GW

Maybe there is some norm everyone agrees with that you should not have to distance yourself from your friends if they turn out to be abusers, or should not have to be open about the fact that you were their friend, or something. Maybe people are worried about the chilling effects of that.

If this norm is the case, then imo it is better enforced explicitly. 

But to put it really simply, it does seem like I should care about whether it is true that Duncan and Brent were close friends if I am going to be taking advice from him about how to interpret and discuss accusations made in the community. So if we are not enforcing a norm that such relationships should not enter the discussion, then I am unclear about the basis for the downvoting here.

Comment by agrippa on Speaking of Stag Hunts · 2021-11-09T00:19:22.620Z · LW · GW

Your OP is way too long (or not sufficiently indexed) for me to determine, without considerable strain, how much or how meaningfully I think this claim is true. Relatedly, I don't know what you are referring to here.

Comment by agrippa on Speaking of Stag Hunts · 2021-11-09T00:14:33.799Z · LW · GW

Maybe it is good to clarify: I'm not really convinced that LW norms are particularly conducive to bad faith or psychopathic behavior. Maybe there are some patches to apply. But mostly I am concerned about naivety. LW norms aren't enough to make truth win and bullies / predators lose. If people think they are, that alone is a problem independent of possible improvements. 
 

since you might just have different solutions in mind for the same problem.

I think that Duncan is concerned about prejudicial mobs being too effective, and I am concerned about systematically preventing information about abuse from surfacing. To some extent I do just see this as a conflict rooted in differing interests: Duncan is concerned about the threat of being mobbed and advocating tradeoffs accordingly, and I'm concerned about being abused / my friends being abused and advocating tradeoffs accordingly. But to me it doesn't seem like LW is particularly afflicted by prejudicial mobs, while it is afflicted by abuse to a nonzero degree.

I don't think Duncan acknowledges the presence of tradeoffs here, but IMO there absolutely have to be tradeoffs. To me, the generally upvoted and accepted responses to jessicata's post are making a tradeoff that protects MIRI against mudslinging, disinformation, and mobbing while also making it scarier to try to speak up about abuse. Maybe the right tradeoff is being made and we have to really come down on jessicata for being too vague and equivocating too much, or for being a fake victim of some kind. But I also think we should not take advocacy regarding these tradeoffs at face value, which, yeah, LW norms seem to really encourage.

Comment by agrippa on Speaking of Stag Hunts · 2021-11-08T23:35:49.499Z · LW · GW

If you do happen to feel like listing a couple of underappreciated norms that you think do protect rationality, I would like that.

 

Brevity

Comment by agrippa on Speaking of Stag Hunts · 2021-11-08T21:53:33.354Z · LW · GW

I think that smart people can hack LW norms and propagandize / point-score / accumulate power with relative ease. I think this post is pretty much an example of that:
- a lot of time is spent gesturing / sermonizing about the importance of fighting biases etc., with no particularly informative or novel content (it is, after all, intended to "remind people of why they care"). I personally find it difficult to engage critically with this kind of high volume and low density.
- ultimately the intent seems to be an effort to coordinate power against the types of posters that Duncan doesn't like

I just don't see how most of this post is supposed to help me be more rational. The droning on makes it harder to engage as an adversary than if the post were just "here are my terrible ideas", but it does so in an arational way.

I bring this up in part because Duncan seems to be advocating that his adherence to LW norms means he can't just propagandize etc.

If you read the OP and do not choose to let your brain project all over it, what you see is, straightforwardly, a mass of claims about how I feel, how I think, what I believe, and what I think should be the case.

I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you're going to dismiss all of the "I" statements as being mere window dressing or something (I'm not sure that's what you're doing, but it seems like something like that is necessary, to pretend that they weren't omnipresent in what I wrote), you need to do so explicitly.  You need to argue for them not-mattering; you can't just jump straight to ignoring them, and pretending that I was propagandizing.

If people here really think you can't propagandize or bad-faith accumulate points/power while adhering to LW norms, well, I think that's bad for rationality.

I am sure that Duncan will be dissatisfied with this response because it does not engage directly with his models or engage very thoroughly by providing examples from the text etc. I'm not doing this stuff because I just don't actually think it serves rationality to do so.

While I'm at it:

Duncan:

I'm not trying to cause appeals-to-emotion to disappear.  I'm not trying to cause strong feelings oriented on one's values to be outlawed.  I'm trying to cause people to run checks, and to not sacrifice their long-term goals for the sake of short-term point-scoring.

To me it seems really obvious that if I said to Duncan, in response to something, "you are just sacrificing long-term goals for the sake of short-term point-scoring", then (if he chose to respond) he would write about how I am making a bald assertion and blah blah blah. How I should retract it and instead say "it feels to me you are [...]" and blah blah blah. But look, in this quote there is a very clear and "uncited" / unevidenced claim that people are sacrificing their long-term goals for the sake of short-term point-scoring. I am not saying it's bad to make such assertions, just that Duncan can and does make such assertions baldly while adhering to the norms.

To zoom out: I feel that in the OP and in this thread Duncan is enforcing norms that he is good at leveraging but that don't actually protect rationality. But these norms seem to have buy-in. Pooey!

I continuously add more to this stupid post in part because I feel the norms here require that a lot of ink gets spilled and that I substantiate everything I say. It's not enough to just say "you know, it seems like you are doing [x thing I find obvious]". Duncan is really good at enforcing this norm and adhering to it.

But the fact is that this post was a stupid usage of my time that I don't actually value having written, completely independent of how right I am about anything I am saying or how persuasive.

Again I submit:

I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you're going to dismiss all of the "I" statements as being mere window dressing or something (I'm not sure that's what you're doing, but it seems like something like that is necessary, to pretend that they weren't omnipresent in what I wrote), you need to do so explicitly.  You need to argue for them not-mattering; you can't just jump straight to ignoring them, and pretending that I was propagandizing.

Look, if I have to reply to every single attack on a certain premise before I am allowed to use that premise, then I am never going to be allowed to use the premise, because Duncan has more time allocated to this stuff than I do, and seemingly more than most people who criticize this OP. But that seems like a really stupid norm.

I made this top level because, even though I think the norm is stupid, among other norms I have pointed out, I also think that Duncan is right that all of them are in fact the norm here.

Comment by agrippa on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T17:20:33.322Z · LW · GW

Thank you SO MUCH for writing this. 

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

I think this is so well put and important.

I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itself to be world-saving. Any such org is going to need to proactively combat this fear if they want people to speak out. To me this is totally obvious. 

Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.  (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

I feel that this is a very important point.

I want to hear more experiences like yours. That's not "I want to hear them [before I draw conclusions]." I just want to hear them. I think this stuff should be known. 

I think most of LW believes we should not risk ostracizing (with respect to the rest of the world) a group that might save the world by publicizing a few broken eggs. If that's the case, much of this discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR-type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction.

Comment by agrippa on Zoe Curzi's Experience with Leverage Research · 2021-10-17T16:46:28.520Z · LW · GW

"If you apply to this grant, and get turned down, we'll write about why we don't like it publically for everyone to see."

I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.

Comment by agrippa on Zoe Curzi's Experience with Leverage Research · 2021-10-17T06:44:32.536Z · LW · GW

[1] I don’t particularly blame them, consider the alternative.

I think the alternative is actually much better than silence!

For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced. 

Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those "in the know" matter; they lead, and I think it's better for everyone if that leadership happens in the light.

Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. 

I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.

Comment by agrippa on Common knowledge about Leverage Research 1.0 · 2021-10-04T03:59:34.886Z · LW · GW

I will say that the EA Hotel, during my 7 months of living there, was remarkably non-cult-like.  You would think otherwise given Greg's forceful, charismatic presence /j

Comment by agrippa on No, Really, I've Deceived Myself · 2021-06-11T07:38:32.351Z · LW · GW

I find it hard to imagine people sleeping in on Sundays. Not even the most hardened criminal will steal when the policeman's right in front of him and the punishment is infinite.

I'm a little late on this one, but another clear example is that theists don't have the relationship with death that you would expect someone to have if they believed that post-death was the good part. "You want me to apologize to the bereaved family for murder? They should be thanking me!"