Comment by ozziegooen on Personalized Medicine For Real · 2019-03-11T10:32:00.934Z · score: 4 (3 votes) · LW · GW

I'd second this, but, to be fair, I think predictions are basically the answer to everything, so this may not be a big update.

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-03-07T18:03:13.772Z · score: 3 (2 votes) · LW · GW

Thanks, and good to hear!

Density applies only to some situations; it just depends on how you look at things. It felt quite different from the other attributes.

For instance, you could rate a document "per information content" according to the RAIN framework, in which you would essentially decouple it from density. Or you could rate it per "entire document", in which case the density would matter.

Comment by ozziegooen on Karma-Change Notifications · 2019-03-03T20:09:04.966Z · score: 12 (4 votes) · LW · GW

This feature seems pretty useful, and I really appreciate that you put thought into not making it too addictive. Having good incentives seems like a good way of allowing our community to "win", and I'm happy to see that pay off in practice.

Ideas for Next Generation Prediction Technologies

2019-02-21T11:38:57.798Z · score: 12 (13 votes)
Comment by ozziegooen on Predictive Reasoning Systems · 2019-02-21T11:35:18.888Z · score: 1 (1 votes) · LW · GW

I was thinking of political policies. Government bills can often run 800+ pages and seem to contain many specific decisions per page. I could easily imagine 20 decision points per page, each with 10 options, across 500 substantive pages (assuming the rest are padding), which works out to 20 × 10 × 500 = 100,000 simple decisions in total. This is obviously a very rough estimate.

I hope that better examples will become obvious in future writing; I'll keep that in mind. Thanks for the feedback!

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-21T11:23:19.754Z · score: 1 (1 votes) · LW · GW

Just a quick 2 cents: I think it's possible to have really poor or not-useful ontologies. One could easily make a decent ontology of Greek Gods, for instance; but if one's Foundational Understanding isn't great (say, one actually believes in the Greek Gods), that ontology won't be very useful.

In this case, I would classify the DSM as mainly a taxonomy (a kind of ontology), but I think many people would agree it could be improved. Much of this improvement would hopefully come through what is here called Foundational Understanding.

Predictive Reasoning Systems

2019-02-20T19:44:45.778Z · score: 23 (9 votes)
Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-20T19:30:46.046Z · score: 1 (1 votes) · LW · GW

Interesting, that makes sense, thanks for the examples.

Impact Prizes as an alternative to Certificates of Impact

2019-02-20T00:46:25.912Z · score: 21 (3 votes)
Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-19T00:04:00.139Z · score: 3 (3 votes) · LW · GW

For the first question, I'm happy we identified this as an issue. I think it is quite different. If you think there's a good chance you will die soon, then your marginal money will likely not be that valuable to you. It's a lot more valuable in the case that you survive.

For example, say you found out tomorrow that there's a 50% chance everyone will die in one week. (Gosh, this is a downer example.) You also get the chance to make a $50 investment that will pay out $70 in two weeks. Is the expected value of the bet really (0.5 × $70) - $50 = -$15? If you don't expect to spend all of your money in one week, I think it's still a good deal.
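To make the point concrete, here's a minimal numeric sketch (same toy numbers as above; treating the $50 as worthless in the extinction world is just one simple way to formalize "money is only valuable if you survive"):

```python
# Toy numbers from the example above: 50% chance everyone dies in a week,
# and a $50 investment that pays out $70 in two weeks.
p_survive = 0.5
cost, payout = 50, 70

# Naive expected dollars, treating money as equally valuable in every world:
naive_ev = p_survive * payout - cost        # 0.5 * 70 - 50 = -15

# If the $50 would have been useless to you in the extinction world anyway,
# what matters is the payoff conditional on surviving:
ev_if_money_matters = payout - cost         # +20

print(naive_ev, ev_if_money_matters)
```

Under the naive accounting the bet looks like -$15; conditional on the worlds where the money actually matters to you, it's +$20.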

I'd note that Superforecasters have performed better than Prediction Markets, in what I believe are relatively small groups (<20 people). While I think Prediction Markets could theoretically work, I'm much more confident in systems like those used by Superforecasters, where they wouldn't have to make explicit bets. That said, you could argue that their time is the cost, so the percentage chance still matters. (Of course, the alternative of giving them money to enjoy for 5-15 years before a 50% chance of death also seems pretty bad.)

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T15:20:00.961Z · score: 8 (4 votes) · LW · GW

If the reason your questions won't resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.

That said, one major ask is that the forecasters believe AGI will happen in the interim, which seems to me like an even bigger issue :)

I'd estimate there's a 2% chance of this being considered "useful" in 10 years, and in those cases I'd estimate it to be worth $10k to $20 million of value (90% CI). Would you predict <0.1%?
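As a quick back-of-the-envelope on what that implies (just my naive arithmetic on the two endpoints; it ignores the shape of the distribution inside the CI):

```python
# Naive expected value implied by the estimate above, multiplying the 2%
# chance through the two ends of the 90% CI.
p_useful = 0.02
low, high = 10_000, 20_000_000   # conditional value in dollars
print(p_useful * low, p_useful * high)   # $200 to $400,000 in expectation
```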

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:55:27.257Z · score: 2 (2 votes) · LW · GW

I'd agree this would work poorly in traditional Prediction Markets. I'm not so sure about Prediction Tournaments, or other Prediction Market systems that could exist. Those could be heavily subsidized, and the money on hold could be invested in more standard asset classes.

*(Note: I said >20%, not exactly 20%)

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:53:22.591Z · score: 3 (3 votes) · LW · GW

I'm saying that the AGI would be helpful for doing the resolutions; any world post-AGI could be significantly better at answering such questions. I'm not sure the distinction between "the AGI evaluates the questions" and "an evaluation group uses the AGI to evaluate the questions" is a useful one, though.

You're right it has the issue of "predicting what someone smarter than me would do." Do you know of much other literature on that one issue? I'm not sure how much of an issue to expect it to be.

Comment by ozziegooen on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-17T22:03:32.564Z · score: 3 (3 votes) · LW · GW

Thanks for the considered comment.

I think the main crux here is how valuable money will be post-AGI. My impression is that it will still be quite valuable. Unless there is a substantial redistribution effort (which would have other issues), I imagine economic growth will benefit the rich more than the poor. I'd also think that even if the result would be "paradise", many people would care about how many resources they have. Having one-millionth of all human resources may effectively give you access to one-millionth of everything produced by future AGIs.

Scenarios where AGI is friendly (not killing us) could be significantly more important to humans than ones in which it is not. Even if it has a 1% chance of being friendly, in that scenario, it's possible we could be alive for a really long time.

Last, it may not be necessary that everyone thinks money will be valuable post-AGI, only that some people with money think so. In that case, they could make exchanges with others pre-AGI to take on that specific risk.

So I generally agree there's a lot of uncertainty, but think it's less than you do. That said, this is, of course, something to apply predictions to.

Can We Place Trust in Post-AGI Forecasting Evaluations?

2019-02-17T19:20:41.446Z · score: 23 (9 votes)
Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-15T19:04:52.456Z · score: 3 (1 votes) · LW · GW

I like "exploration". Could also see synonyms of exploration: "discovery", "disclosure", "origination", "introduction".

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:11:06.691Z · score: 1 (1 votes) · LW · GW

Thanks! Good point about the division. I agree that the different parts could be done by different groups; I'm not sure what the best way of doing each one is. My guess is that some experts should be incorporated into the Foundational Understanding process, but that they would want to use many other tools (like the ones you mention). I would imagine all of this could be done in either the private or public sector.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:08:44.130Z · score: 1 (1 votes) · LW · GW

I didn't mean that all analogies to diseases were good, especially around psychology. The challenges are quite hard; even in cases where a lot of good work goes into making ontologies, there can be tons of edge cases and similar problems.

That said, I think medicine is one of the best examples of the importance and occasional effectiveness of large ontologies. If one is doing independent work in the field, I would imagine it is rare that they would be best served by doing their own ontology development, given how much has been done so far.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:02:00.619Z · score: 5 (1 votes) · LW · GW

Good point, thanks. Even though the data could have values from multiple sources, that could still be more useful than nothing, but it's probably better where possible to use specific sources like the CIA World Factbook, if you trust that they will have the information in the future.

Comment by ozziegooen on The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work · 2019-02-15T15:01:38.059Z · score: 5 (1 votes) · LW · GW

I think it'll be quite doable, but it would take some infrastructural work, of course. Could you be a bit more specific about the predictions you want to make?

The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work

2019-02-14T16:21:13.564Z · score: 40 (12 votes)
Comment by ozziegooen on Short story: An AGI's Repugnant Physics Experiment · 2019-02-14T16:18:11.653Z · score: 2 (2 votes) · LW · GW

Yup, in general, I agree.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T15:06:23.909Z · score: 1 (1 votes) · LW · GW

I really like this formulation, will consider the specifics more. It kind of reminds me of Pokemon types or Guilds in Magic.

Short story: An AGI's Repugnant Physics Experiment

2019-02-14T14:46:30.651Z · score: 9 (7 votes)
Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T11:33:09.747Z · score: 1 (1 votes) · LW · GW

Thanks for the ideas!

I was also hesitant to use "clarification", but I do kind of think of it as one "clarifying" their messy thoughts on paper, and clarifying them with a few people. I feel like "sketch" implies incompleteness, which is not exactly the case, and "description" and "delineation" are not descriptive enough.

Some other related terms: blueprint, survey, first pass, embryonic, pioneering, preliminary, germinal, prosaic.

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-14T11:19:03.369Z · score: 3 (2 votes) · LW · GW

Good to know. Can you link to another resource that states this? Wikipedia says "the amount a decision maker would be willing to pay for information prior to making a decision", and LessWrong has something similar: "how much answering a question allows a decision-maker to improve its decision".

https://en.wikipedia.org/wiki/Value_of_information https://www.lesswrong.com/posts/vADtvr9iDeYsCDfxd/value-of-information-four-examples
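For reference, the formalization I have in mind (my own notation for the standard expected-value-of-information definition; not a quote from either source):

$$\text{VoI} = \mathbb{E}_{x}\left[\max_{a}\, \mathbb{E}[U \mid a, x]\right] - \max_{a}\, \mathbb{E}[U \mid a]$$

That is, the expected improvement in the best available decision from learning x before choosing, which is what the "willing to pay prior to making a decision" phrasing is pointing at.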

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-14T01:01:01.731Z · score: 1 (1 votes) · LW · GW

Changed. I originally moved accessibility to the bottom, out of order, just because the other three are more similar to each other, but don't have a strong preference.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T00:12:01.774Z · score: 6 (2 votes) · LW · GW

That sounds interesting, I look forward to eventually reading more about it, if that is published online.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-14T00:11:04.278Z · score: 1 (1 votes) · LW · GW

Good question. It's more technical than most of the ones I was considering, and I have a harder time judging it because I haven't really gone through it. I think with pure-math posts the line is blurry, because if something is sufficiently formal, it can be hard to make it more explanatory, even for dedicated readers.

I imagine that there are very math-heavy posts on Agent Foundations that are optimized for readability by other viewers, and others made more as a first pass at writing down the ideas. That specific post seems like it's doing a good amount of work to be clear and to connect to other useful work, so I would tend to think of it as explanatory. Of course, if it's the first time the main content is online, then to viewers it would fulfill both purposes: being the original source, and also the most readable source.

Comment by ozziegooen on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-13T21:48:32.019Z · score: 2 (2 votes) · LW · GW

Good to know. These are the main three types I've been thinking about working on, so the grouping seemed personally useful; I assumed it might be useful to others as well. I could definitely imagine many other kinds of groupings for other purposes.

Three Kinds of Research Documents: Clarification, Explanatory, Academic

2019-02-13T21:25:51.393Z · score: 23 (6 votes)
Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-13T20:13:04.531Z · score: 1 (1 votes) · LW · GW

There's no real difference; RAIN was just a quick choice because I figured people might prefer it. Happy to change if people have preferences; I'd agree ARIN sounds cool.

Do other commenters have thoughts here?

Comment by ozziegooen on The RAIN Framework for Informational Effectiveness · 2019-02-13T19:32:11.182Z · score: 3 (2 votes) · LW · GW

Glad you like it :)

There's definitely a ton of stuff that comes to mind. I don't want to spend too much time on this (I have other posts to write), but here are a few quick thoughts.

Novelty
The Origin of Consciousness in the Breakdown of the Bicameral Mind
On the Origin of Species

Accessibility
Zen and the Art of Motorcycle Maintenance
3Blue1Brown's Video Series

Robustness
Euclid's Elements
The Encyclopædia Britannica

Importance is more relative to the reader and is about positive expected value, so it's harder to give examples for. Perhaps one good example is an 80,000 Hours article that gets someone to change their career.

I'm also interested in whether others here have recommendations or good examples.

The RAIN Framework for Informational Effectiveness

2019-02-13T12:54:20.297Z · score: 40 (13 votes)
Comment by ozziegooen on Current AI Safety Roles for Software Engineers · 2019-01-20T22:16:14.256Z · score: 4 (3 votes) · LW · GW

It's kind of hard to describe. In my mind, people who are passionate about advanced mathematics, LessWrong/Eliezer's writing, and AI safety should be a good fit. You could probably tell a lot just by reading about their current team and asking yourself whether you'd feel like you fit in with them.

https://intelligence.org/team/

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-12-02T19:50:18.104Z · score: 11 (4 votes) · LW · GW

It's a good point.

The options are about how you talk to others, rather than how you listen to others. So if you talk with someone who knows more than you, "humble" means that you don't act overconfidently, because they could call you out on it. It does not mean that you aren't skeptical of what they have to say.

I definitely agree that you should often begin skeptical. Epistemic learned helplessness seems like a good phrase, thanks for the link.

One specific area I could see this coming up is when you have to debate someone you are sure is wrong, but has way more practice debating. They may know all the arguments and counter-arguments, and would destroy you in any regular debate, but that doesn't mean you should trust them, especially if you know there are better experts on the other side. You could probably find great debaters on all controversial topics, on both sides.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-12-01T13:59:22.600Z · score: 2 (2 votes) · LW · GW

Good points.

I would definitely agree that people are generally reluctant to blatantly deceive themselves. There is definitely some cost to incorrect beliefs, though it can vary greatly in magnitude depending on the situation.

For instance, say all of your friends go to one church, and you start suspecting your local minister of being less accurate than others. If you actually don't trust them, you could either pretend you do and live as such, or be honest and possibly have all of your friends dislike you. You clearly have a strong motivation to believe something specific here, and I think incentives generally trump internal honesty.[1]

On the end part, I don't think that "hostile talking up" is what the hostile actors want to be seen as doing :) Rather, they would be trying to make it seem like the people previously above them are really below them. To them and their followers, they seem to be at the top of their relevant distribution.

1) There's been a lot of discussion recently about politics being tribal, and I think it makes a lot of pragmatic sense. link

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-12-01T12:54:51.468Z · score: 2 (2 votes) · LW · GW

In response to your last point, I didn't really get into differences between similar areas of knowledge in this post; it definitely becomes a messy topic. I'd definitely agree that for "making a suspension bridge", I'd look to people who seem to have knowledge of "making suspension bridges" rather than knowledge of "physics in general."

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T18:51:52.908Z · score: 6 (4 votes) · LW · GW

Dangit, fixed. I switched between markdown and the other format a few times, I think that was responsible.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:53:31.940Z · score: 3 (3 votes) · LW · GW

To be a bit more specific: I think there are multiple reasons why you would communicate in different ways to people at different levels of knowledge. One is that you could "get away with more" around people who know less than you. But another is that you would expect people at different parts of the curve to know different things and talk in different ways, so if you just optimized for their true learning, the results would be quite different.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:37:00.473Z · score: 4 (3 votes) · LW · GW

That's a good point. My communication changes a lot too and it's one reason why I'm often reluctant to explain ideas in public rather than in private; it's much harder to adjust the narrative and humility-level.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:35:24.276Z · score: 4 (4 votes) · LW · GW

Perhaps, if you have a broad definition of "politicized". To me this applies to many areas where people are overconfident (which happens everywhere): lots of entrepreneurs, academics, "thought leaders", and all the villains of Expert Political Judgment.

To give you a very different example, take a tour guide in San Francisco. They probably know way more about SF history than the people they guide. If they happen to be overconfident for whatever reasons, no one is necessarily checking them. I would imagine that if they ever gave tours to SF history experts, their stated level of confidence in their statements would be at least somewhat different.

Comment by ozziegooen on Overconfident talking down, humble or hostile talking up · 2018-11-30T17:26:47.304Z · score: 1 (1 votes) · LW · GW

It's a frequency distribution ordered by amount of knowledge on the topic. The Y axis of a distribution is frequency, but the units aren't very useful here (the shape is the important part, because it's normalized to sum to 1).

Comment by ozziegooen on Current AI Safety Roles for Software Engineers · 2018-11-30T14:47:46.034Z · score: 2 (2 votes) · LW · GW

Good point, fixed. I think of the terms "AI safety community" and "EA safety community" as focusing on the same thing, and sometimes use them interchangeably.

Overconfident talking down, humble or hostile talking up

2018-11-30T12:41:54.980Z · score: 45 (20 votes)
Comment by ozziegooen on Critique my Model: The EV of AGI to Selfish Individuals · 2018-11-28T19:36:24.169Z · score: 1 (1 votes) · LW · GW

Good point.

Stabilize-Reflect-Execute

2018-11-28T17:26:39.741Z · score: 31 (9 votes)
Comment by ozziegooen on Conversational Cultures: Combat vs Nurture · 2018-11-28T11:38:34.057Z · score: 3 (2 votes) · LW · GW

I found the ideas behind Radical Candor to be quite useful. I think they're similar to the ones here. Link

Comment by ozziegooen on Critique my Model: The EV of AGI to Selfish Individuals · 2018-11-26T11:56:41.258Z · score: 7 (2 votes) · LW · GW

Another update: apparently, my assumption that the universe would only last around another 6 billion years was very incorrect. It seems possible that useful computation could be done for up to 10^2500 years, which is much better.

https://en.wikipedia.org/wiki/Future_of_an_expanding_universe

Comment by ozziegooen on What if people simply forecasted your future choices? · 2018-11-25T20:26:46.260Z · score: 1 (1 votes) · LW · GW

I'm imagining that the predictors would often fall in line with the user, especially if the user were reasonable enough to be making decisions using them.

Comment by ozziegooen on What if people simply forecasted your future choices? · 2018-11-23T22:43:00.756Z · score: 1 (1 votes) · LW · GW

Agreed, it could be gamed in net-negative ways if there were enough incentive in the prediction system. I think that in many practical cases, the incentives are going to be much smaller than the deltas between decisions (otherwise it seems surprisingly costly to have them).

Predictor meddling is also a thing in the other prediction alternatives, like decision markets; individuals could try to sabotage outcomes selectively. I don't believe any of these approaches are perfectly safe. I'm definitely recommending them for humans only at this point, though perhaps with a lot of testing we could get a better sense of what the exact incentives would be, and use that knowledge for simple AI use.

Comment by ozziegooen on What if people simply forecasted your future choices? · 2018-11-23T12:22:21.062Z · score: 1 (1 votes) · LW · GW

To be a bit more specific, it's answering a question by having other people predict which answer you will choose; but yes, it's very bootstrap-y.

I consider this proposal an alternative to decision markets and prediction-augmented evaluations, so I don't think this system suffers from the challenge of information more than those two proposals. All are of course limited to a significant extent by information.

One nice point for these systems is that individuals are often predictably biased, even when they are knowledgeable. So in many cases it seems like more ignorant but less biased predictors, armed with a few base rates for a problem, can do better.

I imagine that if there were a bunch of forecasters doing this, they would eventually collect and organize tables of public data on the base rates at which agents make decisions. I expect that public data would be really useful if it were properly organized. After that, agents could, of course, choose to provide additional information.
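As a minimal sketch of how a forecaster might combine such a public base-rate table with a specific person's history (a toy approach of my own, using simple beta-binomial shrinkage; all names and numbers are hypothetical):

```python
# Toy sketch: combine a population base rate with an individual's own
# history to forecast whether they'll make a given choice.
# The base rate acts as a Beta prior; personal history updates it.

def forecast_choice(base_rate: float, prior_strength: float,
                    personal_yes: int, personal_total: int) -> float:
    """Posterior-mean probability under a beta-binomial model."""
    alpha = base_rate * prior_strength + personal_yes
    beta = (1 - base_rate) * prior_strength + (personal_total - personal_yes)
    return alpha / (alpha + beta)

# Hypothetical example: 30% of people in the reference class switch jobs
# within a year; this person has switched in 2 of their last 3 such decisions.
print(forecast_choice(base_rate=0.30, prior_strength=10,
                      personal_yes=2, personal_total=3))  # ~0.38
```

The idea is just that a crude prior from public base rates, lightly updated on personal history, could plausibly do better than a confident but biased self-prediction.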

What if people simply forecasted your future choices?

2018-11-23T10:52:25.471Z · score: 18 (5 votes)
Comment by ozziegooen on Prediction-Augmented Evaluation Systems · 2018-11-19T10:30:30.488Z · score: 3 (2 votes) · LW · GW

Interesting. Looks like a book is coming out too: https://www.thebookseller.com/news/william-collins-scoops-kahnemans-book-7-figure-pre-empt-752276

Comment by ozziegooen on Prediction-Augmented Evaluation Systems · 2018-11-17T15:41:52.438Z · score: 2 (2 votes) · LW · GW

I'm happy to talk theoretically, though I suspect there are a whole lot of different ways to approach this problem, and experimentation really is the most tractable way to make progress on it.

That said, ideally a prediction system would include ways of predicting the EVs of predictions and predictors, and people could get paid somewhat accordingly; in this world, high-EV predictions would be ones that may influence decisions counterfactually. You may be able to have a mix of judgments from situations that will never happen, and ones that are more precise but only applicable to those that do.

I would likewise be suspicious that naive decision markets using one or two techniques like that would be enough to really make a system robust, but I could imagine those ideas being integrated with others into something useful.

Comment by ozziegooen on Prediction-Augmented Evaluation Systems · 2018-11-17T15:30:53.041Z · score: 2 (2 votes) · LW · GW

Good find. I didn't see that post (it came out a day after I published this, coincidentally). I'm surprised it came out so recently but imagine he probably had similar ideas, and likely wrote them down, much earlier. I definitely recommend it for more details on the science aspect.

From the post: "For each scientific paper, there is a (perhaps small) chance that it will be randomly chosen for evaluation in, say, 30 years. If it is chosen, then at that time many diverse science evaluation historians (SEH) will study the history of that paper and its influence on future science, and will rank it relative to its contemporaries. To choose this should-have-been prestige-rank, they will consider how important was its topic, how true and novel were its claims, how solid and novel were its arguments, how influential it actually was, and how influential it would have been had it received more attention.

....

Using these assets, markets can be created wherein anyone can trade in the prestige of a paper conditional on that paper being later evaluated. Yes traders have to wait a long time for a final payoff. But they can sell their assets to someone else in the meantime, and we do regularly trade 30 year bonds today. Some care will have to be taken to make sure the base asset that is bet is stable, but this seems quite feasible."

Comment by ozziegooen on Prediction-Augmented Evaluation Systems · 2018-11-16T22:01:39.302Z · score: 1 (1 votes) · LW · GW

I'm not too optimistic about traditional prediction markets; I have feelings similar to Zvi's. I haven't seen prediction markets be well subsidized for even a few dozen useful variables; in prediction-augmented evaluation systems they would have to work for thousands of variables or more. They seem like more overhead per variable than simply stating one's probability and moving on.

My next step is just messing around a lot with my own prediction application and seeing what seems to work. I plan to gradually invite people, but let them mostly do their own testing. At this point, I want to get an intuitive idea of what seems useful, similar to my experience making other experimental applications. I'm really not sure what ideas I may come up with after more experimentation.

That said, I am particularly excited about estimating expected values of things, but realize I may not be able to make all of these public, or may have to keep things very apolitical. I expect it to be really easy to anger people if estimates that are actually important are public.

https://www.lesswrong.com/posts/a4jRN9nbD79PAhWTB/prediction-markets-when-do-they-work

Comment by ozziegooen on Prediction-Augmented Evaluation Systems · 2018-11-16T21:54:59.105Z · score: 3 (2 votes) · LW · GW

Thanks for the feedback! I was unsure about the structure; my main goal here was to set up a categorization system and get the information explained, even if it wasn't particularly understandable. I'll mess around with other techniques in future posts.

Comment by ozziegooen on Current AI Safety Roles for Software Engineers · 2018-11-10T11:37:49.419Z · score: 5 (3 votes) · LW · GW

Thanks for the updates. Sorry about getting your organization wrong; I changed that part.

Current AI Safety Roles for Software Engineers

2018-11-09T20:57:16.159Z · score: 82 (31 votes)
Comment by ozziegooen on Prediction-Augmented Evaluation Systems · 2018-11-09T10:59:03.678Z · score: 3 (2 votes) · LW · GW

If you have other ideas for things to be evaluated / other uses, please post them below!

Prediction-Augmented Evaluation Systems

2018-11-09T10:55:36.181Z · score: 43 (15 votes)
Comment by ozziegooen on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-16T17:51:34.983Z · score: 1 (1 votes) · LW · GW

I'm not sure how to solve such an equation, though doing it for simple cases seems simple enough. I'll admit I don't understand logical induction nearly as well as I would like, and mean to remedy that some time.

Critique my Model: The EV of AGI to Selfish Individuals

2018-04-08T20:04:16.559Z · score: 50 (13 votes)

Expected Error, or how wrong you expect to be

2016-12-24T22:49:02.344Z · score: 9 (9 votes)

Graphical Assumption Modeling

2015-01-03T20:22:21.432Z · score: 23 (18 votes)

Understanding Who You Really Are

2015-01-02T08:44:50.374Z · score: 9 (19 votes)

Why "Changing the World" is a Horrible Phrase

2014-12-25T06:04:48.902Z · score: 28 (40 votes)

Reference Frames for Expected Value

2014-03-16T19:22:39.976Z · score: 5 (23 votes)

Creating a Text Shorthand for Uncertainty

2013-10-19T16:46:12.051Z · score: 6 (11 votes)

Meetup : San Francisco: Effective Altruism

2013-06-23T21:48:34.365Z · score: 3 (4 votes)