Posts

ozziegooen's Shortform 2019-08-31T23:03:24.809Z · score: 14 (3 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z · score: 41 (10 votes)
Ideas for Next Generation Prediction Technologies 2019-02-21T11:38:57.798Z · score: 16 (14 votes)
Predictive Reasoning Systems 2019-02-20T19:44:45.778Z · score: 26 (11 votes)
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T00:46:25.912Z · score: 21 (3 votes)
Can We Place Trust in Post-AGI Forecasting Evaluations? 2019-02-17T19:20:41.446Z · score: 23 (9 votes)
The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work 2019-02-14T16:21:13.564Z · score: 42 (14 votes)
Short story: An AGI's Repugnant Physics Experiment 2019-02-14T14:46:30.651Z · score: 9 (7 votes)
Three Kinds of Research Documents: Clarification, Explanatory, Academic 2019-02-13T21:25:51.393Z · score: 23 (6 votes)
The RAIN Framework for Informational Effectiveness 2019-02-13T12:54:20.297Z · score: 40 (13 votes)
Overconfident talking down, humble or hostile talking up 2018-11-30T12:41:54.980Z · score: 45 (20 votes)
Stabilize-Reflect-Execute 2018-11-28T17:26:39.741Z · score: 31 (9 votes)
What if people simply forecasted your future choices? 2018-11-23T10:52:25.471Z · score: 18 (5 votes)
Current AI Safety Roles for Software Engineers 2018-11-09T20:57:16.159Z · score: 82 (31 votes)
Prediction-Augmented Evaluation Systems 2018-11-09T10:55:36.181Z · score: 43 (15 votes)
Critique my Model: The EV of AGI to Selfish Individuals 2018-04-08T20:04:16.559Z · score: 50 (13 votes)
Expected Error, or how wrong you expect to be 2016-12-24T22:49:02.344Z · score: 9 (9 votes)
Graphical Assumption Modeling 2015-01-03T20:22:21.432Z · score: 23 (18 votes)
Understanding Who You Really Are 2015-01-02T08:44:50.374Z · score: 9 (19 votes)
Why "Changing the World" is a Horrible Phrase 2014-12-25T06:04:48.902Z · score: 28 (40 votes)
Reference Frames for Expected Value 2014-03-16T19:22:39.976Z · score: 5 (23 votes)
Creating a Text Shorthand for Uncertainty 2013-10-19T16:46:12.051Z · score: 6 (11 votes)
Meetup : San Francisco: Effective Altruism 2013-06-23T21:48:34.365Z · score: 3 (4 votes)

Comments

Comment by ozziegooen on ozziegooen's Shortform · 2019-09-11T10:54:50.623Z · score: 2 (1 votes) · LW · GW

Yea, in cases like these, having intermediate metrics seems pretty essential.

Comment by ozziegooen on ozziegooen's Shortform · 2019-09-10T09:51:00.245Z · score: 2 (1 votes) · LW · GW

Yep, good points. Ideally one could do a proper or even estimated error analysis of some kind.

Having good units (like, ratios) seems pretty important.

Comment by ozziegooen on ozziegooen's Shortform · 2019-09-10T09:49:17.509Z · score: 8 (2 votes) · LW · GW

I think that prediction markets can help us select better proxies, but the initial setup (at least) will require people who are pretty clever with ontologies.

For example, say a group comes up with 20 proposals for specific ways of answering the question, "How much value has this organization created?". A prediction market could then predict the effectiveness of each proposal.

I'd hope that over time people would put together lists of "best" techniques to formalize questions like this, so doing it for many new situations would be quite straightforward.

Comment by ozziegooen on ozziegooen's Shortform · 2019-09-09T10:00:23.854Z · score: 2 (1 votes) · LW · GW

Hm... At this point I don't feel like I have a good intuition for what you find intuitive. I could give more examples, but don't expect they would convince you much right now if the others haven't helped.

I plan to eventually write more about this, and hopefully we'll soon have working examples up (where people are predicting things). Things should make more sense to you then.

Short comments back-and-forth are a pretty messy communication medium for such work.

Comment by ozziegooen on ozziegooen's Shortform · 2019-09-07T19:34:29.495Z · score: 4 (2 votes) · LW · GW

If you had a precise definition of "effectiveness" this shouldn't be a problem.

Coming up with a precise definition is difficult, especially if you want multiple groups to agree. Those specific questions are relatively low-level; I think we should ask a bunch of questions like that, but think we may also want some vaguer ones.

For example, say I wanted to know how good/enjoyable a specific movie would be. Predicting the ratings according to movie reviewers (evaluators) is an approach I'd regard as reasonable. I'm not sure what a precise definition for movie quality would look like (though I would be interested in proposals), but am generally happy enough with movie reviews for what I'm looking for.

"How much value has this organization created?"

Agreed that that itself isn't a forecast; I meant the more general case, for questions like, "How much value will this organization create next year?" (as you pointed out). I probably should have used that more specific example; apologies.

And, although clearly defining value can be tedious (and prone to errors), I don't think that problem can be avoided.

Can you be more explicit about your definition of "clearly"? I'd imagine that almost any proposed value function would have some vagueness. Certificates of Impact get around this by just leaving that for the review of some eventual judges, which is similar to what I'm proposing.

Why would you do that? What's wrong with the usual prediction markets?

The goal for this research isn't fixing something with prediction markets, but just finding more useful things for them to predict. If we had expert panels that agreed to evaluate things in the future (for instance, they are responsible for deciding on the "value organization X has created" in 2025), then prediction markets and similar could predict what they would say.

Comment by ozziegooen on How Can People Evaluate Complex Questions Consistently? · 2019-09-02T10:50:46.855Z · score: 4 (2 votes) · LW · GW

I attempted to summarize some of the motivation for this here: https://www.lesswrong.com/posts/Df2uFGKtLWR7jDr5w/?commentId=tdbfBQ6xFRc7j8nBE

Comment by ozziegooen on [deleted post] 2019-08-31T23:33:53.074Z

Elizabeth recently posted some LessWrong questions on this topic, the top one is here: https://www.lesswrong.com/posts/6g6pRNBZadT9J3APM/how-can-people-evaluate-complex-questions-consistently

Comment by ozziegooen on ozziegooen's Shortform · 2019-08-31T23:03:24.990Z · score: 26 (8 votes) · LW · GW

Questions around Making Reliable Evaluations

Most existing forecasting platform questions are for very clearly verifiable questions:

  • "Who will win the next election?"
  • "How many cars will Tesla sell in 2030?"

But many of the questions we care about are much less verifiable:

  • "How much value has this organization created?"
  • "What is the relative effectiveness of AI safety research vs. bio risk research?"

One solution attempt would be to have an "expert panel" assess these questions, but this opens up a bunch of issues. How could we know how much we could trust this group to be accurate, precise, and understandable?

The topic of, "How can we trust that a person or group can give reasonable answers to abstract questions" is quite generic and abstract, but it's a start.

I've decided to investigate this as part of my overall project on forecasting infrastructure. I've recently been working with Elizabeth on some high-level research.

I believe that this general strand of work could be useful both for forecasting systems and also for the more broad-reaching evaluations that are important in our communities.


Early concrete questions in evaluation quality

One concrete topic that's easy to study is evaluation consistency. If the most respected philosopher gives wildly different answers to "Is moral realism true?" on different dates, it makes you question the validity of their belief. Or perhaps their belief is fixed, but we can determine that there was significant randomness in the process that determined it.

Daniel Kahneman apparently thinks a version of this question is important enough to be writing his new book on it.

Another obvious topic is in the misunderstanding of terminology. If an evaluator understands "transformative AI" in a very different way to the people reading their statements about transformative AI, they may make statements that get misinterpreted.

These are two specific examples of questions, but I'm sure there are many more. I'm excited about better understanding existing work in this overall space, and getting a better sense of where things stand and what the right next questions are to ask.

Comment by ozziegooen on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-08-01T13:12:34.236Z · score: 6 (3 votes) · LW · GW

So what I would like to see from forecasting platforms, companies, and projects is a lot more specifics about how forecasting relates to the decisions that need to be made, and how it improves them

My general impression is that there's a lot of creative experimentation to be done here. Right now there doesn't seem to be that much of this kind of exploration. In general though, there really aren't that many people focusing on forecasting infrastructure work of these types.

Comment by ozziegooen on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-30T21:11:24.812Z · score: 9 (5 votes) · LW · GW

I just want to note that this transcript is probably kind of hard to read without much more context. Before this I gave a short pitch on my ideas, which is not included here.

Much of this thinking comes from work I've been doing, especially in the past few months since joining RSP. I've written up some of my thoughts on LessWrong, but have yet to write up most of it. There's a decent amount, and it takes a while to organize and write it.

Recently my priority has been to build a system and start testing out some of the ideas. My impression was that this would be more promising than writing up thoughts for other people to hopefully eventually do. I hope to announce some of that work shortly.

Happy to answer any quick questions here. Also happy to meet/chat with others who are specifically excited about forecasting infrastructure work.

https://www.lesswrong.com/s/YX6dCo6NSNQJDEwXR

Comment by ozziegooen on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-30T21:07:08.086Z · score: 7 (3 votes) · LW · GW

I think it would probably take a while to figure out the specific cruxes of our disagreements.

On your "aesthetic disagreement", I'd point out that there are, say, three types of forecasting work with respect to organizations.

  1. Organization-specific, organization-unique questions. These are questions such as, "Will this specific initiative be more successful than this other specific initiative?" Each one needs to be custom made for that organization.

  2. Organization-specific, standard questions. These are questions such as, "What is the likelihood that employee X will leave in 3 months"; where this question can be asked at many organizations and compared as such. A specific instance is unique to an organization, but the more general question is quite generic.

  3. Inter-organization questions. These are questions such as, "Will this common tool that everyone uses get hacked by 2020?". Lots of organizations would be interested.

I think right now organizations are starting traditional judgemental forecasting for type (1), but there are several standard tools already for type (2). For instance, there are several startups that help businesses forecast key variables, like engineering timelines, sales, revenue, and HR issues. https://www.liquidplanner.com/

I think type (3) is most exciting to me; that's where PredictIt and Metaculus are currently. Getting the ontology right is difficult, but possible. Wikipedia and Wikidata are two successful (in my mind) examples of community efforts with careful ontologies that are useful to many organizations; I see many future public forecasting efforts in a similar vein. That said, I have a lot of uncertainty, so would like to see everything tried more.

I could imagine, in the "worst" case, that the necessary team for this could just be hired. You may be able to do some impressive things with just 5 full-time equivalents, which isn't that expensive in the scheme of things. The existing forecasting systems don't seem to have that many full-time equivalents to me (almost all forecasters are very part time).

Comment by ozziegooen on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-30T20:56:39.026Z · score: 2 (1 votes) · LW · GW

Yep. I don't think much NLP is needed for a lot of interesting work, if things are organized well with knowledge graphs. I haven't thought much about operationalizing questions using ML, but have been thinking that by focusing on questions that could be scaled (like GDP/population of every country for every year), we could get a lot of useful information without a huge amount of operationalization work.

Comment by ozziegooen on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-30T20:50:02.864Z · score: 4 (2 votes) · LW · GW

Yea, I've been thinking about this too, though more for college students. I think that hypothetically forecasting could be a pretty cool team activity; perhaps different schools/colleges could compete with each other. Not only would people develop track records, but the practice of getting good at forecasting seems positive for epistemics and similar.

Comment by ozziegooen on "Shortform" vs "Scratchpad" or other names · 2019-07-25T08:14:12.995Z · score: 11 (3 votes) · LW · GW

Quick thought (not about the name)

The Karma on shortform posts (not the comments) strikes me as a bit awkward; as it's basically a popularity contest. I imagine removing Karma on shortform posts could be pretty sensible.

If you need to replace it, one option could just be a sum of the karma of the top-level comments by that user on that post.
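A minimal sketch of that replacement option. The comment data shape and field names here are hypothetical, purely for illustration, not a real LessWrong API:

```python
# Score a shortform post by summing the karma of the author's top-level
# comments on it, instead of using the post's own karma.
def shortform_score(comments, author):
    return sum(
        c["karma"]
        for c in comments
        if c["author"] == author and c["parent"] is None  # top-level only
    )

comments = [
    {"author": "ozziegooen", "parent": None, "karma": 8},
    {"author": "ozziegooen", "parent": None, "karma": 2},
    {"author": "someone_else", "parent": None, "karma": 5},
    {"author": "ozziegooen", "parent": "c1", "karma": 3},  # a reply, excluded
]
print(shortform_score(comments, "ozziegooen"))  # 10
```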

Comment by ozziegooen on Long Term Future Fund applications open until June 28th · 2019-06-11T20:57:04.235Z · score: 6 (4 votes) · LW · GW

I made a Question for this on the EA forum. https://forum.effectivealtruism.org/posts/NQR5x3rEQrgQHeevm/what-new-ea-project-or-org-would-you-like-to-see-created-in

Comment by ozziegooen on Personalized Medicine For Real · 2019-03-11T10:32:00.934Z · score: 4 (3 votes) · LW · GW

I'd second this, but, to be fair, I think predictions are basically the answer to everything, so this may not be a big update.

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-03-07T18:03:13.772Z · score: 3 (2 votes) · LW · GW

Thanks, and good to hear!

Density applies only to some situations; it just depends on how you look at things. It felt quite different from the other attributes.

For instance, you could rate a document "per information content" according to the RAIN framework, in which case you would essentially decouple it from density. Or you could rate it per "entire document", in which case the density would matter.

Comment by ozziegooen on Karma-Change Notifications · 2019-03-03T20:09:04.966Z · score: 12 (4 votes) · LW · GW

This feature seems pretty useful, and I really appreciate how you put thought into not making it too addicting. Having good incentives seems like a good way of allowing our community to "win", and I'm happy to see that pay off in practice.

Comment by ozziegooen on Predictive Reasoning Systems · 2019-02-21T11:35:18.888Z · score: 1 (1 votes) · LW · GW

I was thinking of political policies. Government bills can often be 800+ pages, and seem to contain many specific decisions per page. I could easily imagine one having 20 possible decisions per page, each with 10 options, for 500 pages (assuming some pages are padding), meaning 100k simple decisions total. This is obviously a very rough estimate.

I hope that better examples will become obvious in future writing, will keep that in mind. Thanks for the feedback!

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-21T11:23:19.754Z · score: 1 (1 votes) · LW · GW

Just a quick 2 cents: I think it's possible to have really poor or not-useful ontologies. One could easily make a decent ontology of the Greek Gods, for instance. However, if their Foundational Understanding wasn't great (say, they actually believed in the Greek Gods), then that ontology won't be that useful.

In this case, I would classify the DSM as mainly a taxonomy (a kind of ontology), but I think many people would agree it could be improved. Much of this improvement would hopefully come through what is here called Foundational Understanding.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-20T19:30:46.046Z · score: 1 (1 votes) · LW · GW

Interesting, that makes sense, thanks for the examples.

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-19T00:04:00.139Z · score: 3 (3 votes) · LW · GW

For the first question, I'm happy we identified this as an issue. I think it is quite different. If you think there's a good chance you will die soon, then your marginal money will likely not be that valuable to you. It's a lot more valuable in the case that you survive.

For example, say you found out tomorrow that there's a 50% chance everyone will die in one week. (Gosh, this is a downer example.) You also get to place an investment for $50 that will pay out $70 in two weeks. Is the expected value of the bet really equivalent to (70/2) - 50 = -$15? If you don't expect to spend all of your money in one week, I think it's still a good deal.
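That arithmetic can be checked with a toy sketch. The numbers come from the example above; the framing (money in the extinction branch being worth nothing to you) is my assumption for illustration, not a claim about how such bets should always be valued:

```python
# Toy model of the bet: a $50 investment paying $70 in two weeks, with a
# 50% chance that everyone dies in one week.
p_survive = 0.5
cost = 50.0
payout = 70.0

# Naive expected value, counting money in the extinction branch at full value:
naive_ev = p_survive * payout - cost  # 0.5 * 70 - 50 = -15.0

# Value in the only branch where the money is actually spendable (survival):
value_if_survive = payout - cost  # +20.0

print(naive_ev, value_if_survive)
```

The naive calculation says the bet loses $15 in expectation, but conditional on the worlds where money matters at all, it gains $20, which is the intuition in the paragraph above.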

I'd note that Superforecasters have performed better than Prediction Markets, in what I believe are relatively small groups (<20 people). While I think that Prediction Markets could theoretically work, I'm much more confident in systems like those of Superforecasters, where they wouldn't have to make explicit bets. That said, you could argue that their time is the cost, so the percentage chance still matters. (Of course, the alternative, of giving them money to enjoy for 5-15 years before 50% death, also seems pretty bad)

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T15:20:00.961Z · score: 8 (4 votes) · LW · GW

If the reason your questions won't resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.

That said, one major ask is that the forecasters believe the AGI will happen in between, which seems to me like an even bigger issue :)

I'd estimate there's a 2% chance of this being considered "useful" in 10 years, and in those cases would estimate it to be worth $10k to $20 million of value (90% CI). Would you predict <0.1%?

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:55:27.257Z · score: 2 (2 votes) · LW · GW

I'd agree this would work poorly in traditional Prediction Markets. Not so sure about Prediction Tournaments, or other Prediction Market systems that could exist. Others could be heavily subsidized, and the money on hold could be invested in more standard asset classes.

*(Note: I said >20%, not exactly 20%)

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:53:22.591Z · score: 3 (3 votes) · LW · GW

I'm saying that the AGI would be helpful to do the resolutions; any world post-AGI could be significantly better at answering such questions. I'm not sure if it's a useful distinction though between "The AGI evaluates the questions" and "An evaluation group uses the AGI to evaluate the questions."

You're right it has the issue of "predicting what someone smarter than me would do." Do you know of much other literature on that one issue? I'm not sure how much of an issue to expect it to be.

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-17T22:03:32.564Z · score: 3 (3 votes) · LW · GW

Thanks for the considered comment.

I think the main crux here is how valuable money will be post-AGI. My impression is that it will still be quite valuable. Unless there is a substantial redistribution effort (which would have other issues), I imagine economic growth will make the rich more money than the poor. I'd also think that even though it would be "paradise", many people would care about how many resources they have. Having one-millionth of all human resources may effectively give you access to one-millionth of everything produced by future AGIs.

Scenarios where AGI is friendly (not killing us) could be significantly more important to humans than ones in which it is not. Even if it has a 1% chance of being friendly, in that scenario, it's possible we could be alive for a really long time.

Last, it may not have to be the case that everyone thinks money will be valuable post-AGI, but that some people with money think so. In those cases, they could exchange with others pre-AGI to take that specific risk.

So I generally agree there's a lot of uncertainty, but think it's less than you do. That said, this is, of course, something to apply predictions to.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-15T19:04:52.456Z · score: 3 (1 votes) · LW · GW

I like exploration. Could also see synonyms of exploration. "discovery", "disclosure", "origination", "introduction"

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:11:06.691Z · score: 1 (1 votes) · LW · GW

Thanks! Good point about the division. I agree that the different parts could be done by different groups, though I'm not sure what the best way of doing each one is. My guess is that some experts should be incorporated into the foundational understanding process, but that they would want to use many other tools (like the ones you mention). I would imagine all could be done in either the private or public sector.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:08:44.130Z · score: 1 (1 votes) · LW · GW

I didn't mean that all analogies around diseases were good, especially around psychology. The challenges are quite hard; even in cases where a lot of good work goes into making ontologies, there could be tons of edge cases and similar.

That said, I think medicine is one of the best examples of the importance and occasional effectiveness of large ontologies. If one is doing independent work in the field, I would imagine it is rare that they would be best served by doing their own ontology development, given how much has been done so far.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:02:00.619Z · score: 5 (1 votes) · LW · GW

Good point, thanks. Even though the data could have values from multiple sources, that could still be more useful than nothing; but it's probably better, where possible, to use specific sources like the CIA World Factbook, if you trust that they will have the information in the future.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:01:38.059Z · score: 5 (1 votes) · LW · GW

I think it'll be quite doable, but it would take some infrastructural work, of course. Could you be a bit more specific about the predictions you want to make?

Comment by ozziegooen on Short story: An AGI's Repugnant Physics Experiment · 2019-02-14T16:18:11.653Z · score: 2 (2 votes) · LW · GW

Yup, in general, I agree.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T15:06:23.909Z · score: 1 (1 votes) · LW · GW

I really like this formulation, will consider the specifics more. It kind of reminds me of Pokemon types or Guilds in Magic.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T11:33:09.747Z · score: 1 (1 votes) · LW · GW

Thanks for the ideas!

I was also hesitant to use "clarification", but do kind of think about it as one "clarifying" their messy thoughts on paper, and clarifying them with a few people. I feel like "sketch" implies incompleteness, which is not exactly the case, and description and delineation are not descriptive enough.

Some other related terms. Blueprint, survey, first pass, embryonic, pioneering, preliminary, germinal, prosaic.

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-14T11:19:03.369Z · score: 3 (2 votes) · LW · GW

Good to know. Can you link to another resource that states this? Wikipedia says "the amount a decision maker would be willing to pay for information prior to making a decision", LessWrong has something similar "how much answering a question allows a decision-maker to improve its decision".

https://en.wikipedia.org/wiki/Value_of_information https://www.lesswrong.com/posts/vADtvr9iDeYsCDfxd/value-of-information-four-examples

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-14T01:01:01.731Z · score: 1 (1 votes) · LW · GW

Changed. I originally moved accessibility to the bottom, out of order, just because the other three are more similar to each other, but don't have a strong preference.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T00:12:01.774Z · score: 6 (2 votes) · LW · GW

That sounds interesting, I look forward to eventually reading more about it, if that is published online.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T00:11:04.278Z · score: 1 (1 votes) · LW · GW

Good question. It's more technical than most of the ones I was considering and I have a harder time judging it because I haven't really gone through it. I think with pure-math posts the line is blurry because if it's sufficiently formal, it can be hard to make more explanatory, for dedicated readers.

I imagine that there are very math-heavy posts on Agent foundations that are both optimized for readability from other viewers, and ones more made as a first pass at writing down the ideas. That specific post seems like it's doing a good amount of work to be clear and connect it to other useful work, so I would tend to think of it as explanatory. Of course, if it's the first time the main content is online, then to viewers it would fulfill both purposes of being the original source, and also the most readable source.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-13T21:48:32.019Z · score: 2 (2 votes) · LW · GW

Good to know. These are the main three types I've been thinking about working on so it seemed personally useful, I assumed the grouping may be useful to others as well. I could definitely imagine many other kinds of groupings for other purposes.

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-13T20:13:04.531Z · score: 1 (1 votes) · LW · GW

There's no real difference; RAIN was just a quick choice because I figured people might prefer it. Happy to change if people have preferences; I'd agree ARIN sounds cool.

Do other commenters have thoughts here?

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-13T19:32:11.182Z · score: 3 (2 votes) · LW · GW

Glad you like it :)

There's definitely a ton of stuff that comes to mind, but I don't want to spend too much time on this (have other posts to write), but a few quick thoughts.

Novelty
The Origin of Consciousness in the Breakdown of the Bicameral Mind
On the Origin of Species

Accessibility
Zen and the Art of Motorcycle Maintenance
3Blue1Brown's Video Series

Robustness
Euclid's Elements
The Encyclopædia Britannica

Importance is more relative to the reader and is about positive expected value, so is harder to say. Perhaps one good example is an 80,000 Hours article that gets one to change their career.

I'm also interested in whether others here have recommendations or good examples.

Comment by ozziegooen on Current AI Safety Roles for Software Engineers · 2019-01-20T22:16:14.256Z · score: 4 (3 votes) · LW · GW

It's kind of hard to describe. In my mind, people who are passionate about advanced mathematics, LessWrong/Eliezer's writing, and AI safety should be a good fit. You could probably tell a lot just by reading about their current team and asking yourself if you'd feel like you fit in with them.

https://intelligence.org/team/

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-12-02T19:50:18.104Z · score: 11 (4 votes) · LW · GW

It's a good point.

The options are about how you talk to others, rather than how you listen to others. So if you talk with someone who knows more than you, "humble" means that you don't act overconfidently, because they could call you out on it. It does not mean that you aren't skeptical of what they have to say.

I definitely agree that you should often begin skeptical. Epistemic learned helplessness seems like a good phrase, thanks for the link.

One specific area I could see this coming up is when you have to debate someone you are sure is wrong, but has way more practice debating. They may know all the arguments and counter-arguments, and would destroy you in any regular debate, but that doesn't mean you should trust them, especially if you know there are better experts on the other side. You could probably find great debaters on all controversial topics, on both sides.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-12-01T13:59:22.600Z · score: 2 (2 votes) · LW · GW

Good points;

I would definitely agree that people are generally reluctant to blatantly deceive themselves. There is definitely some cost to incorrect beliefs, though it can vary greatly in magnitude depending on the situation.

For instance, just say all of your friends go to one church, and you start suspecting your local minister of being less accurate than others. If you actually don't trust them, you could either pretend you do and live as such, or be honest and possibly have all of your friends dislike you. You clearly have a strong motivation to believe something specific here, and I think generally incentives trump internal honesty.[1]

On the end part, I don't think that "hostile talking up" is what the hostile actors want to be seen as doing :) Rather, they would be trying to make it seem like the people previously above them are really below them. To them and their followers, they seem to be at the top of their relevant distribution.

1) There's been a lot of discussion recently about politics being tribal, and I think it makes a lot of pragmatic sense. link

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-12-01T12:54:51.468Z · score: 2 (2 votes) · LW · GW

In response to your last point, I didn't really get into differences between similar areas of knowledge in this post; it definitely becomes a messy topic. I'd definitely agree that for "making a suspension bridge", I'd look at people who seem to have knowledge in "making suspension bridges" rather than knowledge in "physics, in general."

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T18:51:52.908Z · score: 6 (4 votes) · LW · GW

Dangit, fixed. I switched between markdown and the other format a few times, I think that was responsible.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:53:31.940Z · score: 3 (3 votes) · LW · GW

To be a bit more specific; I think there are multiple reasons why you would communicate in different ways to people on different levels of knowledge. One is because you could "get away with more" around people who know less than you. But another is that you would expect people at different parts of the curve to know different things and talk in different ways, so if you just optimized for their true learning, the results would be quite different.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:37:00.473Z · score: 4 (3 votes) · LW · GW

That's a good point. My communication changes a lot too and it's one reason why I'm often reluctant to explain ideas in public rather than in private; it's much harder to adjust the narrative and humility-level.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:35:24.276Z · score: 4 (4 votes) · LW · GW

Perhaps, if you have a broad definition of politicized. To me this applies to many areas where people are overconfident (which happens everywhere). Lots of entrepreneurs, academics, "thought leaders", and all the villains of Expert Political Judgment.

To give you a very different example, take a tour guide of San Francisco. They probably know way more about SF history than the people they teach. If they happen to be overconfident for different reasons, no one is necessarily checking them. I would imagine that if they ever give tour guides to SF history experts, their stated level of confidence in their statements would be at least somewhat different.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:26:47.304Z · score: 1 (1 votes) · LW · GW

It's a frequency distribution ordered by amount of knowledge on a topic. The Y axis for a distribution is frequency, but the units aren't very useful for these (the shape is the important part, because it's normalized to total 1).
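For concreteness, here is a tiny sketch of what "normalized to total 1" means; the bucket counts are made up for illustration:

```python
import numpy as np

# Made-up counts of people per knowledge bucket, least to most knowledgeable.
counts = np.array([5.0, 20.0, 40.0, 25.0, 10.0])

# Normalize so the frequencies sum to 1; only the shape carries meaning.
freq = counts / counts.sum()
print(freq)  # shape is what matters, e.g. peaked in the middle here
```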