Posts

Why don't governments seem to mind that companies are explicitly trying to make AGIs? 2021-12-26T01:58:20.467Z
Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits 2021-11-19T17:55:27.119Z
Disagreeables and Assessors: Two Intellectual Archetypes 2021-11-05T09:05:07.056Z
Prioritization Research for Advancing Wisdom and Intelligence 2021-10-18T22:28:48.730Z
Intelligence, epistemics, and sanity, in three short parts 2021-10-15T04:01:27.680Z
Information Assets 2021-08-24T04:32:40.087Z
18 possible meanings of "I Like Red" 2021-08-23T23:25:24.718Z
AI Safety Papers: An App for the TAI Safety Database 2021-08-21T02:02:55.220Z
Contribution-Adjusted Utility Maximization Funds: An Early Proposal 2021-08-04T17:09:25.882Z
Two Definitions of Generalization 2021-05-29T04:20:28.115Z
The Practice & Virtue of Discernment 2021-05-26T00:34:08.932Z
Oracles, Informers, and Controllers 2021-05-25T14:16:22.378Z
Questions are tools to help answerers optimize utility 2021-05-24T19:30:30.270Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:35.920Z
Forecasting Prize Results 2021-02-19T19:07:09.420Z
Prize: Interesting Examples of Evaluations 2020-11-28T21:11:22.190Z
Squiggle: Technical Overview 2020-11-25T20:51:00.098Z
Squiggle: An Overview 2020-11-24T03:00:32.872Z
Working in Virtual Reality: A Review 2020-11-20T23:14:28.707Z
Epistemic Progress 2020-11-20T19:58:07.555Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:12:39.009Z
Are the social sciences challenging because of fundamental difficulties or because of imposed ones? 2020-11-10T04:56:13.100Z
Open Communication in the Days of Malicious Online Actors 2020-10-07T16:30:01.935Z
Can we hold intellectuals to similar public standards as athletes? 2020-10-07T04:22:20.450Z
Expansive translations: considerations and possibilities 2020-09-18T15:39:21.514Z
Multivariate estimation & the Squiggly language 2020-09-05T04:35:01.206Z
Epistemic Comparison: First Principles Land vs. Mimesis Land 2020-08-21T22:28:09.172Z
Existing work on creating terminology & names? 2020-01-31T12:16:32.650Z
Terms & literature for purposely lossy communication 2020-01-22T10:35:47.162Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T12:51:01.339Z
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:23:47.229Z
ozziegooen's Shortform 2019-08-31T23:03:24.809Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z
Ideas for Next Generation Prediction Technologies 2019-02-21T11:38:57.798Z
Predictive Reasoning Systems 2019-02-20T19:44:45.778Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T00:46:25.912Z
Can We Place Trust in Post-AGI Forecasting Evaluations? 2019-02-17T19:20:41.446Z
The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work 2019-02-14T16:21:13.564Z
Short story: An AGI's Repugnant Physics Experiment 2019-02-14T14:46:30.651Z
Three Kinds of Research Documents: Exploration, Explanation, Academic 2019-02-13T21:25:51.393Z
The RAIN Framework for Informational Effectiveness 2019-02-13T12:54:20.297Z
Overconfident talking down, humble or hostile talking up 2018-11-30T12:41:54.980Z
Stabilize-Reflect-Execute 2018-11-28T17:26:39.741Z
What if people simply forecasted your future choices? 2018-11-23T10:52:25.471Z
Current AI Safety Roles for Software Engineers 2018-11-09T20:57:16.159Z
Prediction-Augmented Evaluation Systems 2018-11-09T10:55:36.181Z
Critique my Model: The EV of AGI to Selfish Individuals 2018-04-08T20:04:16.559Z
Expected Error, or how wrong you expect to be 2016-12-24T22:49:02.344Z

Comments

Comment by ozziegooen on Use Normal Predictions · 2022-01-09T21:31:26.793Z · LW · GW

The more sophisticated system is Squiggle. It's basically a prototype. I haven't updated it since the posts I made about it last year.
https://www.lesswrong.com/posts/i5BWqSzuLbpTSoTc4/squiggle-an-overview 

Comment by ozziegooen on Information Assets · 2022-01-08T02:04:33.379Z · LW · GW

Update: 
I think some of the graphs could be better represented with upfront fixed costs.

When you buy a book, you pay for it via your time to read it, but you also have the fixed initial fee of the book.

This fee isn't that big of a deal for most books that you have a >20% chance of reading, but it definitely is for academic articles or similar.
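
To make the tradeoff concrete, here's a toy sketch (the function and all numbers are made up for illustration; none of this is from the original post):

```typescript
// Toy model (hypothetical numbers): expected total cost of acquiring a
// document, combining the upfront fixed fee with the expected time cost
// of actually reading it.
function expectedCost(
  price: number, // upfront fixed fee, in dollars
  readProbability: number, // chance you actually read it
  hoursToRead: number, // reading time if you do read it
  hourlyTimeValue: number, // dollar value of an hour of your time
): number {
  // You pay the fee regardless; the time cost is paid only if you read it.
  return price + readProbability * hoursToRead * hourlyTimeValue;
}

// A $20 book with a 30% read chance, ~8 hours to read, time valued at $50/hour:
// the fee ($20) is small next to the expected time cost ($120).
console.log(expectedCost(20, 0.3, 8, 50));

// A $40 paywalled article with a 10% read chance and ~1 hour to read:
// the fee ($40) dominates the expected time cost ($5).
console.log(expectedCost(40, 0.1, 1, 50));
```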

Comment by ozziegooen on Get Set, Also Go · 2021-12-24T00:03:39.834Z · LW · GW

(Also want to say I've been reading them all and am very thankful)

Comment by ozziegooen on Can we hold intellectuals to similar public standards as athletes? · 2021-12-19T23:07:56.895Z · LW · GW

I enjoyed writing this post, but think it was one of my lesser posts. It's pretty ranty and doesn't bring much real factual evidence. I think people liked it because it was very straightforward, but I personally think it was a bit overrated (compared to other posts of mine, and many posts of others).

I think it fills a niche (quick takes have their place), and some of the discussion was good. 

Comment by ozziegooen on More power to you · 2021-12-16T15:53:18.051Z · LW · GW

Good point! I feel like I have to squint a bit to see it, but that's how exponentials sometimes look early on. 

Comment by ozziegooen on More power to you · 2021-12-16T15:51:58.776Z · LW · GW

To be clear, I care about clean energy. However, if energy production can be done without net-costly negative externalities, then it seems quite great. 

I found Matthew Yglesias's take, and Jason's writings, interesting.

https://www.slowboring.com/p/energy-abundance

All that said, if energy on the net leads to AGI doom, that could be enough to offset any gain, but my guess is that clean energy growth is still a net positive. 

Comment by ozziegooen on More power to you · 2021-12-16T15:49:06.817Z · LW · GW

but I think this is actually a decline in coal usage.

Ah, my bad, thanks!

They estimate ~35% increase over the next 30 years

That's pretty interesting. I'm somewhat sorry to see it's linear (I would have hoped solar/battery tech would improve more, leading to much faster scaling, 10-30 years out), but it's at least better than some alternatives.

Comment by ozziegooen on More power to you · 2021-12-16T00:12:32.942Z · LW · GW

I found this last chart really interesting, so did some hunting. It looks like electricity generation in the US grew linearly until around 2000. In the last 10 years though, there's been a very large decline in "petroleum and other", along with a strong increase in natural gas, and a smaller, but significant, increase in renewables.

I'd naively guess things will continue to be flat for a while as petroleum use decreases further; but at some point, I'd expect energy use to increase again.

That said, I'd of course like for it to increase much, much faster (more like China). :)

https://www.eia.gov/energyexplained/electricity/electricity-in-the-us-generation-capacity-and-sales.php
 

Comment by ozziegooen on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-15T02:45:12.273Z · LW · GW

I liked this post a lot, though of course, I didn't agree with absolutely everything. 

These seemed deeply terrible. If you think the best use of funds, in a world in which we already have billions available, is to go trying to convince others to give away their money in the future, and then hoping it can be steered to the right places, I almost don’t know where to start. My expectation is that these people are seeking money and power,

I'm hesitant about this for a few reasons.

  1. Sure, we have a few billion available, and we're having trouble donating that right now. But we're also not exactly doing a ton of work to donate our money yet. (This process gave out $10 million, with volunteers.) In the scheme of important problems, a few (~40-200) billion really doesn't seem like that much to me. Marginal money, especially lots of money, still seems pretty good.
  2. My expectation is that these people are seeking money and power -> I don't know which specific groups applied or their specific details. I can say that my impression is that lots of EAs really just don't know what else to do. It's tough to enter research, and we just don't have that much in terms of "these interventions would be amazing, please someone do them" for longtermism. I've seen a lot of orgs get created with something like, "This seems like a pretty safe strategy, it will likely come into use later on, and we already have the right connections to make it happen." This, combined with a general impression that marginal money is still useful in the long term, could, I think, present a more sympathetic take than the one you describe.

The default strategy for lots of non-EA entrepreneurs I know has been something like, "Make a ton of money/influence, then try to figure out how to use it for good. Because people won't listen to me or fund my projects on my own". I wish more of these people would do direct work (especially in the last few years, when there's been more money), but can sympathize with that strategy. Arguably, Elon Musk is much better off having started with "less ambitious" ventures like Zip2 and Paypal; it's not clear if he would have been funded to start with SpaceX/Tesla when he was younger.

All that said, the fact that EAs have so little idea of what exactly is useful seems like a pretty burning problem to me. (This isn't unique to EAs, to be clear.) On the margin, it seems safe to heavily emphasize "figuring stuff out" instead of "making more money, in hopes that we'll eventually figure stuff out". However, "figuring stuff out" is pretty hard and not nearly as tractable as we'd like it to be.
 

"I would hire assistance to do at least the following"

I've been hoping that the volunteer funders (EA Funds, SFF) would do this for a while now. Seems valuable to at least try out for a while. In general, "funding work" seems really bottlenecked to me, and I'd like to see anything that could help unblock it.
 

definitely a case of writing a longer letter

I'm impressed by just how much you write on things like this. Do you have any posts outlining your techniques? Is there anything special, like speech-to-text, or do you spend a lot of time on it, or are you just really fast?

Comment by ozziegooen on Why indoor lighting is hard to get right and how to fix it · 2021-12-13T22:50:42.549Z · LW · GW

Thanks! 
Just checking; I think you might have sent the wrong link though?

Comment by ozziegooen on Why indoor lighting is hard to get right and how to fix it · 2021-12-12T22:53:24.452Z · LW · GW

Quick question: 
When you say, "Yuji adjustable-color-temperature LED strips/panels"

Do you mean these guys?
https://store.yujiintl.com/products/yujileds-high-cri-95-dim-to-warm-led-flexible-strip-1800k-to-3000k-168-leds-m-pack-5m-reel

It looks kind of intimidating to set up, and is pricey, but maybe it's worth it.

Comment by ozziegooen on Improving on the Karma System · 2021-11-15T10:12:42.926Z · LW · GW

Just want to say; I'm really excited to see this.

I might suggest starting with an "other" list that can be pretty long. With Slack, different subcommunities focus heavily on different emojis for different functional things. Users sometimes figure out neat innovations and those proliferate. So if it's all designed by the LW team, you might be missing out.

That said, I'd imagine 80% of the benefit is just having anything like this, so I'm happy to see that happen.

Comment by ozziegooen on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-06T13:03:30.881Z · LW · GW

That's interesting to know, thanks!

Comment by ozziegooen on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-06T13:00:27.006Z · LW · GW

I just (loosely) coined "disagreeables" and "assessors" literally two days ago.

I suggest coming up with any name you think is a good fit.

Comment by ozziegooen on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T22:18:48.558Z · LW · GW

I wouldn't read too much into my choice of word there.

It's also important to point out that I was trying to have a model that assumed interestingness. The "disagreeables" I mention are the good ones, not the bad ones. The ones worth paying attention to are, I think, pretty decent here; really, that's the one thing they have that justifies paying attention to them.

Comment by ozziegooen on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T12:42:47.474Z · LW · GW

Good point, agreed.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-11-02T08:52:37.823Z · LW · GW

A few quick thoughts:

1) This seems great, and I'm impressed by the agency and speed.

2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine the same applies to accusations/whistleblowing about other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.

In light of this, if more money were available, it seems easy to justify a fair bit more. Or even better could be something like, "We'll help fund lawyers in case you're attacked legally, or anti-harassment teams if you're harassed or trolled." This is similar to how the EFF helps with cases where individuals or small groups are attacked by big companies.

I don't mean to complain; I think any steps here, especially ones taken so quickly, are fantastic.

3) I'm afraid this will get lost in this comment section. I'd be excited about a list of "things to keep in mind" like this being made prominent repeatedly somehow. For example, I could imagine that at community events or similar, there could be standard materials like "Know Your Rights as a Rationalist/EA", which flag how individuals can report bad actors and behavior.

4) Obviously a cash prize can encourage lying, but I think this can be decently managed. (It's a small community, so with good moderation, $15K would be very little compared to the social stigma that would come from being found out to have destructively lied for $15K.)

Comment by ozziegooen on Intelligence, epistemics, and sanity, in three short parts · 2021-10-25T03:31:13.308Z · LW · GW

The latter option is more of what I was going for.

I’d agree that the armor/epistemics people often aren’t great at coming up with new truths in complicated areas. I’d also agree that they are extremely unbiased and resistant to both bad-faith arguments and good-faith but systematically misleading arguments (these are many of the demons the armor protects against, if that wasn’t clear).

When I said that they were soft-spoken and poor at arguing, I’m assuming that they have great calibration and are likely arguing against people who are very overconfident, so in comparison they seem meager. I think of a lot of superforecasters in this way; they’re quite thoughtful and reasonable, but not often bold enough to sell a lot of books. Other people with top epistemics sometimes recognize their skills (especially when they have empirical track records, as in forecasting systems), but that’s right now a meager minority.

Comment by ozziegooen on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T23:32:10.610Z · LW · GW

When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups.

I tried to make it clear that I was referring to groups with the phrase, "of humanity", as in, "as a whole", but I could see how that could be confusing. 

the wisdom and intelligence[1] of humanity

 

For those interested in increasing humanity’s long-term wisdom and intelligence[1]


I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.

I imagine there's a lot of overlap. I'd also be fine with multiple prioritization research projects, but think it's early to decide that. 

This makes me wonder how nascent this really is?

I'm not arguing that there haven't been successes in the field (I think there's been a ton of progress over the last few hundred years, and that's terrific). I would argue though that there's very little formal prioritization of such progress. Similar to how EA has helped formalize the prioritization of global health and longtermism, we have yet to see similar efforts for "humanity's wisdom and intelligence".

I think that there are likely still strong marginal gains in at least some of the intervention areas.

Comment by ozziegooen on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T18:26:17.164Z · LW · GW

That's an interesting perspective. It does already assume some prioritization though. Such experimentation can only really be done in very few of the intervention areas.

I like the idea, but am not convinced of the benefit of this path forward, compared to other approaches. We've already had a lot of experiments in this area, many of which cost a lot more than $15,000; exciting marginal ones aren't obvious to me.

But I'd be up for more research to decide if things like that are the best way forward :)

Comment by ozziegooen on In the shadow of the Great War · 2021-10-19T16:17:14.866Z · LW · GW

The first few chapters of "The Existential Pleasures of Engineering" detail some of the optimism, then pessimism, about technocracy in the US, at least.

I think the basic story there was that after WW2, in the US, people were still pretty excited about tech. But in the 70s (I think), with environmental issues, military innovations, and general malaise, people became disheartened.

https://www.amazon.com/Existential-Pleasures-Engineering-Thomas-Dunne-ebook/dp/B00CBFXLWQ

I'm sure I'm missing details, but I found the argument interesting. It is true that in the US at least, there seemed to be a lot of techno-optimism post-WW2. 

Comment by ozziegooen on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T02:18:16.113Z · LW · GW

Ah, thanks!

Comment by ozziegooen on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T20:45:01.426Z · LW · GW

Thanks for the opinion, and I find the take interesting.

I'm not a fan of the line, "How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?", in large part because of the phrase "not yet good enough". This is a really thorny topic that seems to have several assumptions baked into it that I'm uncomfortable with.

I also think that many here like at least some drugs that are "technically illegal", in part, because the FDA/federal rules move slowly. Different issue though.

I like points 2 and 3, I imagine if you had a post just with those two it would have gotten way more upvotes.

Comment by ozziegooen on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T20:18:14.380Z · LW · GW

There's an "EA Mental Health Navigator" now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator

I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.

Comment by ozziegooen on Feature Suggestion: one way anonymity · 2021-10-17T21:09:53.658Z · LW · GW

I really like things like this. I think it's possible we could do a "decent enough" job, though it's impossible to have a solution without risk.

One thing I've been thinking about is a browser extension. People would keep a list of entries like "User XYZ is Greg Hitchenson", and then when the extension sees XYZ, it adds an annotation.
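
As a minimal sketch of what that content script might look like (TypeScript; the name map is a hypothetical example, not a real product):

```typescript
// Content-script sketch for a hypothetical annotation extension.
// The pseudonym-to-name map here is an invented example.
const knownPseudonyms: Record<string, string> = {
  "User XYZ": "Greg Hitchenson",
};

// Walk every text node under `root` and append "[real name]" after any
// known pseudonym it contains.
function annotate(root: Node): void {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const textNodes: Text[] = [];
  while (walker.nextNode()) {
    textNodes.push(walker.currentNode as Text);
  }
  for (const node of textNodes) {
    for (const [alias, realName] of Object.entries(knownPseudonyms)) {
      if (node.textContent?.includes(alias)) {
        node.textContent = node.textContent.replaceAll(alias, `${alias} [${realName}]`);
      }
    }
  }
}

annotate(document.body);
```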

Lots of people are semi-anonymous already. They have pseudonyms that most people don't know, but "those in the know" do. This sort of works, but isn't formalized, and can be a pain. (Lots of asking around: "Who is X?")

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-17T18:38:38.263Z · LW · GW

That's good to know. 

I imagine grantmakers would be skeptical about people who would say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, costing a lot of extra time.

However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-17T15:48:36.226Z · LW · GW

I agree that it would have been really nice for grantmakers to communicate more with the EA Hotel, and with other orgs, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time to spend on this now. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication between grantmakers and small orgs is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible.)

I think the fact that we have so few grantmakers right now is a big bottleneck that I'm sure basically everyone would love to see improved. (The situation isn't great for current grantmakers, who often have to work long hours). But "figuring out how to scale grantmaking" is a bit of a separate discussion. 

Around making the information public specifically, that's a whole different matter. Imagine the value proposition: "If you apply to this grant and get turned down, we'll write about why we don't like it publicly, for everyone to see." Fewer people would apply, and many would complain a whole lot when it happens. The LTFF already gets flak for writing somewhat-candid information on the groups they do fund.

(Note: I was a guest manager on the LTFF for a few months, earlier this year)

Comment by ozziegooen on Book Review: Why Everyone (Else) Is a Hypocrite · 2021-10-16T16:02:36.051Z · LW · GW

Thanks for the review here. I found this book highly interesting and relevant. I've been surprised at how much it seems to have been basically ignored. 

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-16T15:32:44.656Z · LW · GW

I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. QAnon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities.

I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-16T15:30:39.268Z · LW · GW

Agreed, though I think that the existence of many groups makes it a more obvious problem, and a more complicated problem.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-16T06:32:27.108Z · LW · GW

To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.

For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-16T06:29:06.396Z · LW · GW

I very much agree about the worry. My original comment was meant to make the easiest case quickly, but I think more extensive cases apply too. For example, I’m sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be some. (I’m not saying this based on particular evidence about these orgs; it's more that the base rate for similar projects seems bad, and these orgs don’t strike me as absolutely above these issues.)

One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.

I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; these could be public rating or evaluation systems.

This post by Nuno was partially meant as a test for this:

https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations

Another thing to do, of course, would be to just do some amount of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. I think that in the case of Leverage, there really should have been some deep investigation a few years ago, perhaps after a separate setup to flag possible targets of investigation. Back then things were much more disorganized and more poorly funded, but now we’re in a much better position for similar efforts going forward.

[1] I don’t particularly blame them, considering the alternative.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-15T04:21:26.900Z · LW · GW

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information; it's more that, technically, I (or others) could do more investigation to figure things out better.)

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-14T14:42:45.482Z · LW · GW

As someone who's been close to several of these groups: some had a few related issues, but Leverage seemed much more extreme along many of these dimensions to me.

However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.

Comment by ozziegooen on Zoe Curzi's Experience with Leverage Research · 2021-10-14T14:32:14.528Z · LW · GW

As someone who's part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically that exact conversation about them around 10 times.

As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at just for saying, at a conference, that they worked at Leverage.) On the other hand, they definitely had support from a few rationalist/EA orgs and several higher-ups of different kinds.

They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.

I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.

(I should also note that I had friends who worked at or close to Leverage, I attended around 2 events there early on, and I applied to work there around 6 years ago.)

Comment by ozziegooen on Common knowledge about Leverage Research 1.0 · 2021-10-08T17:38:42.140Z · LW · GW

+1 for the detail. Right now there's very little like this explained publicly (or accessible in other ways to people like myself). I found this really helpful.

I agree that the public discussion on the topic has been quite poor.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2021-10-05T00:18:56.086Z · LW · GW

Some updates:

  1. I'm now using it a bit here and there, but I changed rooms and the connection isn't as good, so it's much more painful to use.
  2. There's a new VR headset being made specifically for Linux, which looks very neat. https://simulavr.com/
  3. Here's a much more in-depth blog by someone who's been doing this for many hours: https://blog.immersed.team/working-from-orbit-39bf95a6d385

Comment by ozziegooen on GPT-Augmented Blogging · 2021-09-16T04:34:29.157Z · LW · GW

I was fairly excited for this book for a second there

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-11T16:59:08.630Z · LW · GW

Is there any culture in which power structures aren't systemic and deeply ingrained into our culture? Even a tribe of hunter-gatherers has its cultural norms that regulate the power between individuals.

I agree. I think there's a whole lot of stuff deeply ingrained in the culture of every group. 

I would expect that most people at LessWrong don't have a problem with power structures provided they fulfill criteria like being meritocratic and a few other criteria.

It's hard for me to understand your argument here; I expect that this would have to be a much longer discussion. I'm not saying that there aren't cases where power structures are justified. But I think there are pretty clearly some that almost all of us would agree were unjustified, and I think that a lot of racial/historical cases work like that.

Comment by ozziegooen on You can get feedback on ideas and external drafts too · 2021-09-10T02:52:40.980Z · LW · GW

Good to know, thanks so much!

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-10T02:00:23.431Z · LW · GW

Great question! I have some books I personally enjoyed, and also would like to encourage others to recommend texts. I'm sure that my understanding is vastly less than what I'd really want. However, there are a few books that come to mind.

I think the big challenge, for me, is "attempting to empathize with and understand African Americans". This is incredibly, ridiculously difficult! Cultures are very different from one another. I grew up in an area with a large mix of ethnic groups, and I think that was useful, but the challenge is far greater than that.

I really liked "So You Want to Talk about Race", a few years ago. 
https://www.goodreads.com/book/show/35099718-so-you-want-to-talk-about-race?ac=1&from_search=true&qid=Q2Zay18Jca&rank=1

I thought Black Like Me was great, though it's by a white author, and he doesn't have as good an understanding (though he comes from a similar place to many white readers)
https://www.goodreads.com/book/show/42603.Black_Like_Me?ac=1&from_search=true&qid=qI4fgVu3E5&rank=1

In pop culture, I found "Dear White People", both the movie, and the TV show (mostly the first 2 seasons), to be pretty interesting. 

I really like James Baldwin, though enjoyed his speeches more than his books, so far.

Honestly, African American Studies is just a gigantic field with lots of great work. It can be interesting as a way to better understand African Americans, but there are also a lot of other takeaways, like understanding severe cognitive biases and motivated reasoning from a very different angle.

https://en.wikipedia.org/wiki/African_American_studies

Of course, many of these resources are somewhat specific to American problems. 

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-10T01:48:00.541Z · LW · GW

Thanks so much for clarifying! Sorry to have misinterpreted that.

I think this topic is particularly toxic for online writing. People can be intensely attacked for taking either side here. This means that people feel more inclined to hint at their positions rather than state them directly, which correspondingly means that I'm more inclined to read text as hinting.

If you or others want to have a private video call about these topics I'd be happy to do so (send me a PM), I just hate public online discussion for topics like these.

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-10T01:38:49.718Z · LW · GW

Thanks for the longer comments here!

Quick thoughts, on my end:

But he's also stating that he thinks I have literally nothing to offer him by way of new information and vice-versa. That's pretty low!

This is definitely not how I saw it. 

I'm sure everyone has a lot to learn from everyone else. The big challenge is that this learning is costly and we have extremely limited resources. There's an endless number of discussions we could be part of, and we all have very limited time left in total (for discussions and other things). So if I try to gently leave a conversation, it's mainly a signal of "I don't think that this is the absolutely most high-value thing for me to be doing now", which is a high bar!

Second, I think you might have been taking this a bit personally, as if my trying to hold off on the conversation was a personal evaluation of you as a person.

Again, I know very little about you, and I used to know even less (when you made the original comment). This is the comment in question:

Defending a position by pointing out that a portion (however big or small) of the critics of the position are 'vitriolic' isn't actually a valid argument. If people really hate something so much so that they get emotional about it that's still pretty good evidence that the something is bad.

This really doesn't give me much insight into your position or background. Basically all I know about you is that you wrote these two sentences here, and have written a few comments on LessWrong in the past. My prior for "person with an anonymous name on LessWrong, a few previous comments there, and so on" doesn't make me incredibly excited to spend a lot of time going back and forth. I've been burned in the past, a few times, by people who match similar characteristics.

Often people who use anonymous accounts wind up being terrific; it's just hard to discern which are which early on.

About that last line; I'm fine with you replying or not replying. I wish you the best in the continuation of your intellectual journey. 

Lastly, I'll note that this "White Fragility" topic is a very sensitive one that I'm not excited to chat about publicly on forums like this. (In part because my comments on it get downvoted a lot, and in part because this sort of discussion can easily be used as ammunition later on by anyone interested, against either myself or any of the other commenters who respond.) My identity is clearly public, so there is real risk.

I write blog posts on LessWrong that are far less controversial, and am much more happy to publicly discuss those topics.

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T19:39:14.831Z · LW · GW

I think it means the reaction to the book is not really the reaction to the book itself, but rather to the political powers this book represents.

I think it's very likely that you're right here. I do wish this could be said more. It's totally fine to argue against political powers and against potential situations. Ideally, though, that argument would be kept distinct from discussion of this particular book/author.

What is more likely to happen, is someone reading the book, and then yelling at me for not agreeing with some idea in the book. Possibly in a situation where this might get me in trouble

I agree that there are lots of ideas in the book that are probably wrong. To be clear, I could also easily imagine many situations where unreasonable people would either take the wrong ideas too far, or put their own spin on them and take that far too far. I imagine that in either case, the results can be highly destructive.

I hope that these sorts of fears don't prevent us from understanding interesting/useful ideas from such material. I think they make this massively harder, but there might be some decent strategies.

I would be curious whether people here have recommendations on how they would like to see these ideas discussed in ways that minimize the potential hazards of getting people into trouble for unreasonable reasons or creating tons of anxiety. I think this book has generated a lot of high-anxiety discussion that's clearly not very effective at delivering better understanding.

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T19:07:10.491Z · LW · GW

I'm really sorry if I hurt or offended you. I assumed that a brief description of where I was at would be preferred to not replying at all. I clearly was incorrect about that.

I disagree with some of your specific implications. I'm fairly sure though that you'd disagree with my responses. I could easily imagine that you've already predicted them, well enough, and wouldn't find them very informative, particularly for what I could write in a few sentences. 

This isn't unusual for me. I try to stay out of almost all online discussion. I have things to do, I'm sure you have things to do as well. Online discussion is costly, and it's especially costly when people know very little about each other[1], and the conversation topic (White Fragility) is as controversial as this one is.

[1]:  I know almost nothing about you. I feel like I'd have a very difficult time feeling comfortable saying things in ways I can predict you'd be receptive to, or things that you wouldn't actively attack me for. I find that I've had a difficult time modeling people online; particularly people who I barely know. This could easily lead to problems of several different kinds. It's very, very possible that none of this applies to you, but it would take a fair amount of discussion for me to find that out and feel safe with my impressions of you. This also applies for all the other people I don't know, but who might be watching this conversation or jump in at any point.

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T17:59:35.625Z · LW · GW

Position noted, but I don't feel like more back-and-forth here will be productive.

Comment by ozziegooen on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-07T16:21:10.975Z · LW · GW

I feel like both sides of the "White Fragility" debate have some of this going on.

I don't feel like I've exactly seen rationalists on these sides (in large part because the discussion generally hasn't been very prominent), but I've seen lots of related people on both sides, and I expect rationalists to have similar beliefs to those people. (Myself included) 

https://www.lesswrong.com/posts/pqa7s3m9CZ98FgmGT/i-read-white-fragility-so-you-don-t-have-to-but-maybe-you?commentId=wEuAmC2kYWsCg4Qsr

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T16:16:27.662Z · LW · GW

From reading some of the other comments here on the LessWrong post, I'm a bit worried that this might be turning into a flame war.

I'd note that this particular book is probably not the best one around which to debate this issue. The book seems to be quite a bit more sensationalist and moralistic, and less scientific, than I'd really like, which I think makes it very difficult to discuss. This seems like a subject that would attract lots of motte-and-bailey thinking on both sides (the more reasonable claims serving as the motte and the outlandish ones as the bailey, switched on each side).

This is clearly a highly sensitive issue. No one wants to be (publicly especially!) associated with either racism or cancel culture. 

Public discussion is far more challenging than private discussion. For example, we simply don't know who is watching these discussions or who might be trying to use anything posted here for antagonistic purposes. (Someone could copy several comments and post them without much context, accusing the author of either racism or cancel culture.)

Very sadly, public discussion of topics like these right now is thoroughly challenging for many reasons. My guess is that it's often just not worth it. 

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T16:06:47.869Z · LW · GW

I'm really not sure what you're trying to do here, but I feel like your phrasing could be interpreted as creating a dichotomy between:
1. People who this impacts (in near mode), who will be very much hurt by this work.

2. Armchair, ivory-tower intellectuals who smirk and find the same sorts of interest in this book that they would get from the next "provocative" Game of Thrones book.

As such, the clear implication that some readers might take away is that I sit very much in camp (2), which just finds it interesting because the issues don't actually matter much to me. So my opinion probably shouldn't matter as much as the opinions of those in (1).

It's possible that such a criticism, if it were meant, might be justified! I've been wrong before, many times. But I wanted to be more clear if this is what you were intending before responding.


I'd note that being interesting-and-provocative in far mode, as I used the phrase, often means that for some people it will be difficult in near mode.

Previous discussions introducing atheism/veganism/altruism also really upset a lot of people. They clearly led to a whole lot of change that was incredibly challenging or devastating to different people.

Often interesting-and-provocative could be very bad, like both extreme left-wing and extreme right-wing literature.

Comment by ozziegooen on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-06T23:43:19.697Z · LW · GW

Thanks for writing this up. I was a bit nervous when reading the title because I was expecting that this would have been an "edgy takedown", but it wasn't.

I haven't read the book, but I've seen a few talks by Robin DiAngelo, and found them generally reasonable. They at least brought up several points I thought were interesting and provocative, which is a high bar for public presentations.

I then saw numerous reviews from sources I previously deemed decent that treated the book with extreme vitriol. 

I found the hate leveled at this book to be frightening. There are a lot of "mediocre popular science books", but this one was truly disdained by large communities. (Right-wing ones, of course, but also some fairly politically neutral or left-leaning crowds.)

The basic ideas of "racism" being systemic in our culture, but occasionally very difficult to notice directly (especially for those in power), strike me as very similar to ideas about implicit bias. The Elephant in the Brain comes to mind. I think the Rationality community and similar groups should be well equipped to discuss some of these issues.

My impression is that this book isn't rigorous in the ways that most of us here would hope for. It doesn't seem to have nearly as much nuance as I'd probably want, but books with nuance typically don't become popular. It's a bit of a pity; it is an important topic, so it would be great to have work here we could trust to be fairly non-biased (either way) and thoughtful. However, I think I'm still happy that this book was written. I'm sure that Robin DiAngelo has probably faced gigantic amounts of harassment for writing it; perhaps this will lessen the burden for other people doing work in the area.
 

It seems like there are two big issues here:

1. Racism and power structures are systemic and deeply ingrained into our culture

2. This book presents a scientifically rigorous account of many details around the situation.

My impression is that #1 has a lot of truth to it, but #2 is lacking. In fairness, lots of books are terrible at #2, but this one might be particularly bad (given the broad claims). Unfortunately, I get the impression that a lot of reviews argue that because #2 is poor, #1 is wrong, and that seems cheap to me.
 

I considered writing my own review of the book on LessWrong to generate discussion, but was too wimpy to do so myself. I was very nervous about possible flame wars. (This makes me more thankful that you've done it.)


For examples of the vitriol I'm talking about, see the Goodreads reviews:
https://www.goodreads.com/book/show/43708708-white-fragility?ac=1&from_search=true&qid=sSB9PhQyYt&rank=1