Comments

Comment by Geoff_Anders on Zoe Curzi's Experience with Leverage Research · 2021-11-13T07:52:10.873Z · LW · GW

It was published this evening. Here is a link to the letter, and here is the announcement on Twitter.

Comment by Geoff_Anders on Zoe Curzi's Experience with Leverage Research · 2021-10-17T11:23:08.521Z · LW · GW

Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB

Comment by Geoff_Anders on Zoe Curzi's Experience with Leverage Research · 2021-10-17T11:21:31.301Z · LW · GW

Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.

It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice and tools and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, or ability to rest, or take time off, or choose how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.

I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.

Comment by Geoff_Anders on Zoe Curzi's Experience with Leverage Research · 2021-10-14T01:33:09.398Z · LW · GW

Hi everyone. I wanted to post a note to say, first, that I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.

I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.

My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.

Comment by Geoff_Anders on [deleted post] 2021-01-12T00:39:45.232Z

Author of the post here. I edited the post by:

(1) adding an introduction — for context, and to make the example in Part I less abrupt

(2) editing the last section — the original version was centered on my conversations with Rationalists in 2011-2014; I changed it to be a more general discussion, so as to broaden the post's applicability and make the post more accessible

Comment by Geoff_Anders on [deleted post] 2019-09-29T19:02:30.068Z

Good point. I think they are prima facie orthogonal. Empirically, though, my current take is that many deep psychological distortions affect attention in a way that makes them much harder to manage on short time scales than on longer ones.

Imagine, for instance, that you have underlying resignation that causes your S1 to put 5x the search power into generating plausible failure scenarios as into plausible success scenarios. This might be really hard to detect on the 5-second level, especially if you don't have a good estimate of the actual prevalence of plausible failure and success scenarios (or at least of their prevalence as accessible by your own style of thinking). But on longer time scales, you can notice yourself skewing too pessimistic and start to investigate why. That might then turn up the resignation.

Comment by Geoff_Anders on [deleted post] 2019-09-27T17:04:55.783Z

I think I'm willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though seemingly true, it feels like the kind of thing that people who try to get you to commit bad epistemic moves like to raise [1].

There's a tricky balance to maintain here. On one hand, we don't want to commit bad epistemic moves. On the other hand, failing to acknowledge the empirical basis of something when the evidence of its being empirical is presented is itself a bad epistemic move.

With epistemic dangers, I think there is a choice between "confront" and "evade". Both are dangerous. Confronting the danger might harm you epistemically, and is frequently the wrong idea — like "confronting" radiation. But evading the danger might harm you epistemically, and is also frequently wrong — like "evading" a treatable illness. Ultimately, whether to confront or evade is an empirical question.

Allowing questions of motivation to factor into one's truth-seeking process feels most perilous to me, mostly because it seems too easy to claim that one's motivation will be adversely affected in order to justify any desired behavior. I don't deny that certain moves might destroy motivation, but it seems the risks of allowing such a fear to justify changing behavior are much worse. Granted, that's an empirical claim I'm making.

One good test here might be: Is a person willing to take hits to their morale for the sake of acquiring the truth? If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and are instead trading off too hard against their epistemics. Another good test might be: If the person avoids useful behavior X in order to maintain their motivation, do they have a plan to get to a state where they won't have to avoid behavior X forever? If not, that might be a cause for concern.

Comment by Geoff_Anders on [deleted post] 2019-09-27T16:31:09.412Z

I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.

My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as "probably wrong, just an idea", and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
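
As a rough illustration of (2), one could log tagged claims and how they turn out, then compute how often each person's use of a tag pans out. The sketch below is a minimal, hypothetical way to do that bookkeeping (the function names and data layout are illustrative only, not anything Leverage actually uses):

```python
from collections import defaultdict

# Record how claims carrying a given epistemic tag from a given person
# actually turn out, so the tag's empirical meaning ("usually correct",
# "usually very illuminating", ...) can be learned over time.
outcomes = defaultdict(list)  # (person, tag) -> list of bools: did the claim hold up?

def record_claim(person, tag, held_up):
    # Log whether a tagged claim held up on later examination.
    outcomes[(person, tag)].append(held_up)

def tag_reliability(person, tag):
    # Empirical rate at which this person's use of the tag panned out.
    results = outcomes[(person, tag)]
    return sum(results) / len(results) if results else None

# Example: person X tags ideas "probably wrong, just an idea",
# yet they hold up two times out of three.
record_claim("X", "probably wrong, just an idea", True)
record_claim("X", "probably wrong, just an idea", True)
record_claim("X", "probably wrong, just an idea", False)
print(tag_reliability("X", "probably wrong, just an idea"))  # ~0.67
```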

In cases where it looks like people will irrationally update on a proposition even with appropriate tags, it might be better not to discuss that proposition (or to discuss it in a smaller, safer group) until it has achieved adequately good epistemic status.

Comment by Geoff_Anders on Open & Welcome Thread - September 2019 · 2019-09-25T17:02:28.486Z · LW · GW

Hi everyone! For those who don’t know me, I’m Geoff Anders. I’ve been the leader of a community adjacent to the rationalist community for many years, a community centered around my research organization Leverage Research. I engaged mostly with the rationalist community in 2011-2014. I visited SingInst in March 2011, taught at the Rationality Boot Camp in June and July 2011, attended the July 2012 CFAR workshop, and then was a guest instructor at CFAR from 2012-2014.

For the past many years, I’ve been primarily focused on research. Leverage has now undergone a large change, and as part of that I’m switching to substantially more public engagement. I’m planning to write up a retrospective on the first eight and a half years of Leverage’s work and put that on my personal blog.

In the meantime, I thought it would be good to start engaging with people more and I thought the rationalist community and LessWrong was a good place to start. As part of my own pursuit of truth, I’ve developed methods, techniques, and attitudes that could be thought of as an approach to “rationality”. These techniques, methods, etc., differ from those I’ve seen promulgated by rationalists, so hopefully there’s room for a good discussion, and maybe we can bridge some inferential distance :).

Also, I’m mindful that I’m coming in from a different intellectual culture, so please let me know if I accidentally violate any community norms; it’s not intentional.

Comment by Geoff_Anders on Best causal/dependency diagram software for fluid capture? · 2013-04-12T21:09:12.229Z · LW · GW

Here are instructions for setting up the defaults the way some people have found helpful:

  1. Open yEd.
  2. Create a new document.
  3. Click the white background; a small yellow square should appear on the canvas.
  4. Click the small yellow square so as to select it.
  5. Click and drag one of the corners of the yellow square to resize it. Make it the default size you'd like your text boxes to be. You will be able to change this later.
  6. Make sure the yellow square is still selected.
  7. Look at the menu in the lower right. It is called "Properties View". It will show you information about the yellow square.
  8. Click the small yellow square in the menu next to the words "Fill Color".
  9. Select the color white for the Fill Color.
  10. Lower in the menu, under "Label", there is an item called "Placement". Find it. Change Placement to "Internal" and "Center".
  11. Right below Placement in the menu is "Size". Find it. Change Size to "Fit Node Width".
  12. Right below Size is "Configuration". Find it. Change Configuration to "Cropping".
  13. Right below Configuration is "Alignment". Find it. Ensure that Alignment is "Center".
  14. In the upper toolbar, click "File" then "Preferences".
  15. A menu will come up. Click the "Editor" tab.
  16. You will see a list of checkboxes. "Edit Label on Create Node" will be unchecked. Check it.
  17. Click Apply.
  18. In the upper toolbar, click "Edit" then "Manage Palette".
  19. A menu will come up. In the upper left there will be a button called "New Section". Click it.
  20. Name the new section after yourself.
  21. Verify that the new section has been created by locating it in the righthand list of "Displayed Palette Selections".
  22. Close the Palette Manager menu.
  23. Double-click your white textbox to edit its label.
  24. Put in something suitably generic to indicate a default textbox. I use "[text]" (without the quotes).
  25. Select your white textbox. Be sure that you have selected it, but are not now editing the label.
  26. Right click the white textbox. A menu will appear.
  27. On the menu, mouse over "Add to Palette", then select the palette you named after yourself.
  28. On the righthand side of the screen, there will be a menu at the top called "Palette". Find it.
  29. Scroll through the palettes in the Palette menu until you find the palette you named after yourself. Expand it.
  30. You will see your white textbox in the palette you have named after yourself. Click it to select it.
  31. Right click the white textbox in the palette. Select "Use as Default".
  32. To check that you have done everything properly, click on the white background canvas. Did it create a white textbox like your original, and then automatically allow you to edit the label? If so, you're done.

Then...

  a. Click the white background to create a box.
  b. Click a box and drag to create an arrow.
  c. Click an already existing box to select it. Once selected, click and drag to move it.
  d. Double-click an already existing box to edit its label.
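
For readers who want to inspect the result outside the GUI, here is a rough sketch that writes a GraphML file approximating the white default textbox configured above. The element and attribute names (y:ShapeNode, y:Fill, y:NodeLabel, autoSizePolicy, and so on) are a best guess at yEd's GraphML dialect rather than something verified here; the most reliable reference is the file yEd itself saves after you follow the steps.

```python
# Hypothetical sketch: write a GraphML file resembling yEd's output for a
# white, rectangular node with an internal, centered, width-fitted label.
# NOTE: the attribute names/values are assumptions about yEd's format and
# may differ by version; compare against a file exported from yEd.
GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
         xmlns:y="http://www.yworks.com/xml/graphml">
  <key id="d0" for="node" yfiles.type="nodegraphics"/>
  <graph id="G" edgedefault="directed">
    <node id="n0">
      <data key="d0">
        <y:ShapeNode>
          <y:Fill color="#FFFFFF" transparent="false"/>
          <y:NodeLabel alignment="center" modelName="internal"
                       modelPosition="c" autoSizePolicy="node_width">[text]</y:NodeLabel>
          <y:Shape type="rectangle"/>
        </y:ShapeNode>
      </data>
    </node>
  </graph>
</graphml>
"""

with open("default_textbox.graphml", "w", encoding="utf-8") as f:
    f.write(GRAPHML)
```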

Enjoy!

Comment by Geoff_Anders on A Critique of Leverage Research's Connection Theory · 2012-09-23T12:48:37.136Z · LW · GW

For at least 2 years prior to January 2009, I procrastinated between 1 and 3 hours a day reading random internet news sites. After I created my first CT chart, I made the following prediction: "If I design a way to gain information about the world that does not involve reading internet news sites that also does not alter my way of achieving my other intrinsic goods, then I will stop spending time reading these internet news sites." The "does not alter my way of achieving my other intrinsic goods" was unpacked. It included: "does not alter my way of gaining social acceptance", "does not alter my relationships with my family members", etc. The specifics were unpacked there as well.

This prediction was falsifiable - it would have failed if I had kept reading internet news sites. It was also bold - cogsci folk and good random human psychologists would have predicted no change in my internet news reading behavior. And it was also successful - after implementing the recommendation in January 2009, I stopped procrastinating as predicted. Now, of course there are multiple explanations for the success of the prediction, including "CT is true" and "you just used your willpower". Nevertheless, this is an example of a falsifiable, bold, successful prediction.

Comment by Geoff_Anders on A Critique of Leverage Research's Connection Theory · 2012-09-22T15:08:25.082Z · LW · GW

If I recall correctly, I was saying that I didn't know how to use CT to predict simple things of the form "Xs will always Y" or "Xs will Y at rate Z", where X and Y refer to simple observables like "human", "blush", etc. It would be great if I could do this, but unfortunately I can't.

Instead, what I can do is use the CT charting procedure to generate a CT chart for someone and then use CT to derive predictions from the chart. This yields predictions of the form "if a person with chart X does Y, Z will occur". These predictions frequently do not overlap with what existing cognitive science would have one expect.

The way I could have evidence in favor of CT would be if I had created CT charts using the CT procedure, used CT to derive predictions from the charts, and then tested the predictions. And I've done this.

Comment by Geoff_Anders on On Leverage Research's plan for an optimal world · 2012-01-11T23:57:14.510Z · LW · GW

Connection Theory is not the main thing that we do. It's one of seven main projects. I would estimate that about 15% of our effort currently goes directly into CT. It's true that having a superior understanding of the human mind is an important part of our plan, and it's true that CT is the main theory we're currently looking at. So that is one reason people are focusing on it. But it's also one of the better-developed parts of our website right now. So that's probably another reason.

Comment by Geoff_Anders on Introducing Leverage Research · 2012-01-11T15:25:40.327Z · LW · GW

I can usually do any type of work. Sometimes it becomes harder for me to write detailed documents in the last couple hours of my day.

Comment by Geoff_Anders on On Leverage Research's plan for an optimal world · 2012-01-11T15:22:19.215Z · LW · GW

We've tried to fill in step 3 quite a bit. Check out the plan and also our backup plan. We're definitely open to suggestions for ways to improve, especially places where the connection between the steps is the most tenuous.

Comment by Geoff_Anders on On Leverage Research's plan for an optimal world · 2012-01-10T17:10:22.550Z · LW · GW

Unfortunately, I'm not familiar with Ayn Rand's ideas on psychology.

Comment by Geoff_Anders on On Leverage Research's plan for an optimal world · 2012-01-10T16:46:27.715Z · LW · GW

There are no Objectivist influences that I am aware of.

Comment by Geoff_Anders on Introducing Leverage Research · 2012-01-10T15:45:21.672Z · LW · GW

Short answer: Yes, CT is falsifiable. Here's how to see this. Take a look at the example CT chart. By following the procedures stated in the Theory and Practice document, you can produce and check a CT chart like the example chart. Once you've checked the chart, you can make predictions using CT and the CT chart. From the example chart, for instance, we can see that the person sometimes plays video games and tries to improve and sometimes plays video games while not trying to improve. From the chart and CT, we can predict: "If the person comes to believe that he stably has the ability to be cool, as he conceives of coolness, then he will stop playing video games while not trying to improve." We would measure belief here primarily by the person's belief reports. So we have a concrete procedure that yields specific predictions. In this case, if the person followed various recommendations designed to increase his ability to be cool, ended up reporting that he stably had the ability to be cool, but still reported playing video games while not trying to improve, CT would be falsified.

Longer answer: In practice, almost any specific theory can be rendered consistent with the data by adding epicycles, positing hidden entities, and so forth. Instead of falsifying most theories, then, what happens is this: You encounter some recalcitrant data. You add some epicycles to your theory. You encounter more recalcitrant data. You posit some hidden entities. Eventually, though, the global theory that includes your theory becomes less elegant than the global theory that rejects your theory. So, you switch to the global theory that rejects your theory and you discard your specific theory. In practice with CT, so far we haven't had to add many epicycles or posit many hidden entities. In particular, we haven't had the experience of having to frequently change what we think a person's intrinsic goods are. If we found that we kept having to revise our views about a person's intrinsic goods (especially if the old posited intrinsic goods were not instrumentally useful for achieving the new posited intrinsic goods), this would be a serious warning sign.

Speaking more generally, we're following particular procedures, as described in the CT Theory and Practice document. We expect to achieve particular results. If in a relatively short time frame we find that we can't, that will provide evidence against the claim "CT is useful for achieving result X". For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.

Thanks for raising the issue of falsifiability. I'm going to add it to our CT FAQ.

Comment by Geoff_Anders on Introducing Leverage Research · 2012-01-10T05:33:10.670Z · LW · GW

Oops, I forgot to answer your question about how central Connection Theory is to what we're doing.

The answer is that CT is one part of what some of us believe is our best current answer to the question of how the human mind works. I say "one part" because CT does not cover emotions. In all contexts pertaining to emotions, everyone uses something other than CT. I say "some of us" because not everyone in Leverage uses CT. And I say "best current answer" because all of us are happy to throw CT away if we come up with something better.

In terms of our projects, some people use CT and others don't. Some parts of some training programs are designed with CT in mind; other parts aren't. In some contexts, it is very hard to do anything at all without relying on some background psychological framework. In those contexts, some people rely on CT and others don't.

In terms of our overall plan, CT is potentially extremely useful. That said, CT itself is inessential. If it ends up breaking, we can find new psychological tools. And we actually have a backup plan in case we ultimately can't figure out much at all about how the mind works.

Comment by Geoff_Anders on Introducing Leverage Research · 2012-01-10T04:29:38.497Z · LW · GW

Hi Luke,

I'm happy to talk about these things.

First, in answer to your third question, Leverage is methodologically pluralistic. Different members of Leverage have different views on scientific methodology and philosophical methodology. We have ongoing discussions about these things. My guess is that probably two or three of our more than twenty members share my views on scientific and philosophical methodology.

If there’s anything methodological we tend to agree on, it’s a process. Writing drafts, getting feedback, paying close attention to detail, being systematic, putting in many, many hours of effort. When you imagine Leverage, don’t imagine a bunch of people thinking with a single mind. Imagine a large number of interacting parallel processes, aimed at a single goal.

Now, I’m happy to discuss my personal views on method. In a nutshell: my philosophical method is essentially Cartesian; in science, I judge theories on the basis of elegance and fit with the evidence. (“Elegance”, in my lingo, is like Occam’s razor, so in practice you and I actually both take Occam’s razor seriously.) My views aren’t the views of Leverage, though, so I’m not sure I should try to give an extended defense here. I’m going to write up some philosophical material for a blog soon, though, so people who are interested in my personal views should check that out.

As for Connection Theory, I could say a bit about where it came from. But the important thing here is why I use it. The primary reason I use CT is because I’ve used it to predict a number of antecedently unlikely phenomena, and the predictions appear to have come true at a very high rate. Of course, I recognize that I might have made some errors somewhere in collecting or assessing the evidence. This is one reason I’m continuing to test CT.

Just as with methodology, people in Leverage have different views on CT. Some people believe it is true. (Not me, actually. I believe it is false; my concern is with how useful it is.) Others believe it is useful in particular contexts. Some think it’s worth investigating, others think it’s unlikely to be useful and not worth examining. A person who thought CT was not useful and who wanted to change the world by figuring out how the mind really works would be welcome at Leverage.

So, in sum, there are many views at Leverage on methodology and CT. We discuss these topics, but no one insists on any particular view and we’re all happy to work together.

I'm glad you like that we're producing public-facing documents. Actually, we're going to be posting a lot more stuff in the relatively near future.

Comment by Geoff_Anders on Students asked to defend AGI danger update in favor of AGI riskiness · 2011-10-21T00:53:28.208Z · LW · GW

Fixed.

Comment by Geoff_Anders on Students asked to defend AGI danger update in favor of AGI riskiness · 2011-10-19T18:56:07.018Z · LW · GW

Hi everyone. Thanks for taking an interest. I'm especially interested in (a) errors committed in the study, (b) what sorts of follow-up studies would be the most useful, and (c) how the written presentation of the study could be clarified.

On errors, Michaelos already found one - I forgot to delete some numbers from one of the tables. That error has been fixed and Michaelos has been credited. Can anyone see any other errors?

On follow-up studies, lessdazed has suggested some. I don't know if we need to see what happens when nothing is presented on AGI; I think our "before" surveys are sufficient here. But trying to teach some alternative threat is an interesting idea. I'm interested in other ideas as well.

On clarity of presentation, it will be worth clarifying a few things. For instance, the point of the study was to test a method of persuasion, not to see what students would do with an unbiased presentation of evidence. I'll try to make that more obvious in the next version of the document. It would be good to know what other things might be misunderstood.

Comment by Geoff_Anders on Students asked to defend AGI danger update in favor of AGI riskiness · 2011-10-19T17:28:10.269Z · LW · GW

Thanks for pointing this out. There was in fact an error. I've fixed the error and updated the study. Some of the conclusions embedded in tables change; the final conclusions reported stay the same.

I've credited you on p.3 of the new version. If you want me to credit you by name, please let me know.

Thanks again!