Posts

Comments

Comment by eric-rogstad on [deleted post] 2017-03-25T02:45:39.000Z

Nm, the longer explanation later in the page answered my question.

Comment by eric-rogstad on [deleted post] 2017-03-25T02:26:37.000Z

If Wisconsin is trading cheese with Ohio, and then Michigan becomes much better at producing cheese, this can harm the economy of Wisconsin. It should not be possible for Wisconsin to be harmed by trading with Michigan unless something weird is going on.

Was "Wisconsin" supposed to be "Ohio" in the second sentence? Or are you contrasting between Wisconsin trading with Ohio and Wisconsin trading with Michigan?

Comment by eric-rogstad on [deleted post] 2017-02-20T13:53:17.000Z

Not currently.

Comment by eric-rogstad on [deleted post] 2017-01-22T09:18:09.000Z

This is silly

Perhaps

then you ought to focus predominantly on something else

This does not seem inconsistent with the post. (Contributing $1 per day towards something hardly seems to preclude focusing predominantly on other things.) Do you disagree with that?

Comment by eric-rogstad on [deleted post] 2017-01-22T08:59:21.000Z

It seems that classifiers trained on adversarial examples may be finding (more) conservative concept boundaries:

We also found that the weights of the learned model changed significantly, with the weights of the adversarially trained model being significantly more localized and interpretable

Explaining and Harnessing Adversarial Examples
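
For context, here's a minimal sketch of the fast gradient sign method that paper uses to generate adversarial examples for training. This is my own illustrative code, not from the paper, and it assumes a differentiable PyTorch model and loss:

```python
import torch

def fgsm_example(model, loss_fn, x, y, epsilon=0.1):
    """Fast gradient sign method: perturb x in the direction that
    increases the loss, bounded by epsilon in the L-infinity norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Adversarial training then mixes these perturbed examples into the training set; that extra constraint seems to be what pushes the learned boundaries toward being more conservative.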

Comment by eric-rogstad on [deleted post] 2017-01-17T11:05:45.000Z

Benquo, given your analysis, I'm surprised by your vote of 50%. You took what was given as a conservative estimate, added in additional moderating factors, and still got a 10x margin of safety. Is this just because of a strong prior towards discounting cost-effectiveness estimates?

How much would one have to donate for you to be 90% sure that it would offset the cost of eating meat?

Comment by eric-rogstad on [deleted post] 2017-01-16T14:31:25.000Z

For counterpoint, see: http://effective-altruism.com/ea/ry/ethical_offsetting_is_antithetical_to_ea/.

Comment by eric-rogstad on [deleted post] 2017-01-16T12:41:03.000Z

For reference, Lewis Bollard estimates that recent corporate cage-free campaigns "will spare about 250 hens a year of cage confinement per dollar spent."

Comment by eric-rogstad on [deleted post] 2017-01-13T09:29:25.000Z

So I suppose I should attempt a real reply.

I think:

  • information hazards should be avoided
  • people should be allowed to develop opinions in private so that they can think freely
  • there's tremendous value in public discussions (where ideas can be evaluated by and/or spread to many people)
Comment by eric-rogstad on [deleted post] 2017-01-13T09:13:08.000Z

But I do think it is a good question.

Comment by eric-rogstad on [deleted post] 2017-01-13T09:12:36.000Z

A probability doesn't seem like the right way to measure this.

Comment by eric-rogstad on [deleted post] 2017-01-07T07:42:25.000Z

See also: http://www.metaculus.com/questions/377/will-donald-trump-be-the-president-of-the-united-states-in-2018/.

Comment by eric-rogstad on [deleted post] 2017-01-07T07:39:25.000Z

Note that PredictIt currently thinks there's a 7% chance Trump will be impeached within the first 100 days.

That seems high to me for the first 100 days, since Republicans control both the House and the Senate. However, things could change at the midterm elections in 2018.

Overall I'm going with a 1 in 6 chance during the first term.

Comment by eric-rogstad on [deleted post] 2017-01-04T15:09:47.000Z

Fair to paraphrase as: donor-as-silent-partner?

Comment by eric-rogstad on [deleted post] 2017-01-04T10:41:41.000Z

Current thinking is that we should allow claims to be edited, but that past users' votes will appear grayed out (so it's clear that they voted on a previous version of the claim). As of today, this hasn't been implemented yet.

Comment by eric-rogstad on [deleted post] 2017-01-04T05:41:41.000Z

The question of tradeoffs between X and Y and winners' curses reminds me of Bostrom's paper, The Unilateralist's Curse.

From the abstract:

In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will move forward more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to some objections that could be raised against it.

Comment by eric-rogstad on [deleted post] 2017-01-04T05:31:14.000Z

Is the idea that a single organization should pursue X or Y and not worry about the fact that any given donor will value both X and Y to varying degrees?

(If so, I might have called this organization-independence, or single-focus.)

Comment by eric-rogstad on [deleted post] 2017-01-04T05:24:36.000Z

I'm not sure what you mean about an exchange rate. Isn't a Pareto improvement something that makes everyone better off (or rather: someone better off and no one worse off)?

Comment by eric-rogstad on [deleted post] 2016-12-23T13:17:33.000Z

This is one of the claims that Benquo made in his post, so I think we should leave the wording as is, unless he wants to change it.

(I've added a note explaining where the claim comes from.)

Comment by eric-rogstad on [deleted post] 2016-12-23T11:33:36.000Z

I agree that there are some x-risks (like global warming) that are helped by a colony, but most aren't.

Alexei, what are some of the ones (besides AI x-risk) that you think are not?

Comment by eric-rogstad on [deleted post] 2016-12-23T08:48:34.000Z

From the FB thread:

Nathan Bouscal: Note that I haven't heard significant disagreement about a colony being useless-ish against AI x-risk. The argument is that it helps with (almost) every other x-risk.

Robert Wiblin: Even then the disagreement isn't that a Mars colony couldn't help, it's that you can get something similarly valuable on Earth for a fraction of the price and difficulty.

Paul Crowley: The proper disagreement to measure is something like "A permanent, self-sustaining off-Earth colony would be a much more effective mitigation of x-risk than even an equally well funded system of disaster shelters on Earth."

Comment by eric-rogstad on [deleted post] 2016-12-23T08:41:31.000Z

"has some resistance to Eternal September" -> "is resistant to Eternal September" ?

Comment by eric-rogstad on [deleted post] 2016-12-23T07:37:33.000Z

I agree with the claim "For mitigating AI x-risk, an off-Earth colony would be about as useful as a warm scarf."

Otherwise, I think this does seem like the kind of thing you would do to mitigate a broad class of risks. Namely, those that arise on Earth and don't lend themselves to interplanetary travel (e.g. pandemics, nukes, and some of the unknown unknowns).

Comment by eric-rogstad on [deleted post] 2016-12-21T05:33:54.000Z

First use of "we" should indicate who "we" are, e.g. "We at Arbital..."

Comment by eric-rogstad on [deleted post] 2016-12-17T06:24:12.000Z

is utility the donor gets from donating money "smooth" with respect to the amount raised

Ideally the utility the donor gets (on reflection) is closely related to the utility the charity gets :-) But I agree that it's important to take donor "hedonics" into account.

Comment by eric-rogstad on [deleted post] 2016-12-17T06:16:51.000Z

I'd be interested to know if you find yourself having that feeling a lot while interacting with claims.

If it's a small minority of the time, I think the solution is a "wrong question" button. If it happens a lot, we might need another object type, something like a prompt-for-discussion rather than a claim-to-be-agreed-with.

Comment by eric-rogstad on [deleted post] 2016-12-17T05:55:29.000Z

In other words, promoting this claim as worded is misleading?

Comment by eric-rogstad on [deleted post] 2016-12-17T04:25:41.000Z

Maybe "gradual" would be a better term. I mean that there aren't sharp transitions where e.g. raising 48k is not very valuable, but 50k+ is valuable.

Comment by eric-rogstad on [deleted post] 2016-12-17T03:12:01.000Z

Alexei, can you say more about why you endorse this proposal? In particular, would you change your mind if you believed this claim?

Comment by eric-rogstad on [deleted post] 2016-12-17T03:09:04.000Z

Organizations' utility curves for money are usually smooth (fundraiser milestones notwithstanding)

Comment by eric-rogstad on [deleted post] 2016-12-17T03:01:43.000Z

I don't usually have this concern because I assume that the utility from extra money for an organization grows smoothly as the amount of money increases, and that there are not sharp cutoffs or thresholds (even if the fundraiser declares "milestone" amounts).
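
As a purely hypothetical illustration of the difference (my own made-up curves, not anyone's actual estimates), contrast a smooth concave utility function with a sharp threshold:

\[
  u_{\text{smooth}}(m) = \log(1 + m), \qquad
  u_{\text{step}}(m) =
    \begin{cases}
      0 & m < 50{,}000 \\
      1 & m \ge 50{,}000
    \end{cases}
\]

Under the smooth curve every extra dollar helps a little (with diminishing returns), whereas under the step curve anything raised short of the $50k threshold is worth almost nothing; the latter is the kind of cutoff I'm assuming away.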

Comment by eric-rogstad on [deleted post] 2016-12-16T09:03:19.000Z

Even if we expect to implement an indirect approach to specifying ethics for our AI systems, it's still valuable to gain a better understanding of our own ethical reasoning (and ethics in general), because:

  1. The better we understand ethics, the less likely we are to take some mistaken assumption for granted when designing the process of extrapolating our ethics.
  2. The better we understand ethics, the more confidently we'll be able to generate test cases for the AI's ethical reasoning.
Comment by eric-rogstad on [deleted post] 2016-12-16T08:38:38.000Z

Better?

Comment by eric-rogstad on [deleted post] 2016-12-16T06:59:57.000Z

I think this should be a claim.

Comment by eric-rogstad on [deleted post] 2016-12-16T06:44:26.000Z

I would add an "I assume" here in parentheses, so you're not putting words in their mouth or projecting feelings into their heads.

Comment by eric-rogstad on [deleted post] 2016-12-15T08:35:51.000Z

I would like to see an operationalization.

Who is our community? How many of us should move?

Comment by eric-rogstad on [deleted post] 2016-12-14T06:04:32.000Z

Overall, I think the post covers most of the important points, but I'd want to cut some parts.

I'll try making an outline of what I think the key points are.

Comment by eric-rogstad on [deleted post] 2016-12-14T05:57:41.000Z

I might rephrase this to "initial target," so it's clear that it was intended as a step along the path, not as our entire vision.

Comment by eric-rogstad on [deleted post] 2016-12-14T05:54:21.000Z

I think this section conflates two things: 1) the role LW used to play, and 2) the role ultimate-Arbital will play.

I think 1 is a subset of 2.

In particular, I don't think LW had solved the problem you describe here: "If someone wants to catch up on the state of the debate, they either need to get a summary from someone, figure it out as they go along, or catch up by reading the entire discussion."

Comment by eric-rogstad on [deleted post] 2016-12-13T14:10:27.000Z

Not sure if it makes sense for this one to be a probability bar.

Here's an alternate version with an Agreement bar.

Comment by eric-rogstad on [deleted post] 2016-12-01T06:40:53.000Z

Here's another comment.

Comment by eric-rogstad on [deleted post] 2016-12-01T06:38:11.000Z

Testing out replies.

Comment by eric-rogstad on [deleted post] 2016-11-30T10:52:20.000Z

I am a real comment. Don't delete me please!

Comment by eric-rogstad on [deleted post] 2016-08-24T02:18:32.000Z

This page is an outline for the Universal Property project.

Progress on the project will be measured by tracking the state of the pages linked below, as they transition from redlinks to stubs, etc.

Comment by eric-rogstad on [deleted post] 2016-08-17T07:20:05.000Z

We're going to feature whatever we choose as the current project on the front page, and I want to include some intro text. What do you think of the following (adapted from the first paragraph above):

Help us build an intuitive explanation of this fundamental concept in category theory!

Category theory is famously very difficult to understand, even for people with a relatively high level of mathematical maturity. With this project, we want to produce an explanation that will clearly communicate a core concept in category theory, the universal property, to a wide audience of learners.

See below for our current progress on the project, as well as how you can contribute.

Comment by eric-rogstad on [deleted post] 2016-08-11T15:37:16.000Z

Omit the 'as'

Comment by eric-rogstad on [deleted post] 2016-08-04T03:29:47.000Z

If these are included, I think it would be good to also explain why each one is wrong.

Comment by eric-rogstad on [deleted post] 2016-07-30T02:25:35.000Z

This is a clear explanation, but I think some formatting changes could enable readers to grok it even more quickly.

Suppose a reader understands two of the three requirements and just needs an explanation of the third. It would be cool if they could find the sentences they're looking for w/o having to scan a whole paragraph looking for the words, "first", "second", or "third".

I think we can achieve this by A) moving each explanation right under the equation / inequality it's talking about, or B) putting the three explanations in a second numbered list, or C) leaving the three explanations in a paragraph but using the numerals 1, 2, and 3 within the paragraph. Might require some experimentation to see what looks best.

Comment by eric-rogstad on [deleted post] 2016-07-29T15:38:46.000Z

Did you just swap the pronouns here? In the previous sentences the speaker was the seller and the listener was the buyer, but now it sounds like it's the other way around.

Comment by eric-rogstad on [deleted post] 2016-07-29T15:25:17.000Z

and I can toss it?