Posts

Confabulation Bias 2012-09-28T01:27:19.040Z

Comments

Comment by EricHerboso on 2017 LessWrong Survey · 2017-09-18T03:52:32.032Z · LW · GW

I have taken the survey.

Comment by EricHerboso on Lesswrong 2016 Survey · 2016-04-01T05:54:15.445Z · LW · GW

I have taken the survey.

Comment by EricHerboso on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-02-28T22:53:05.548Z · LW · GW

He learned that he can will his own transfigurations to end wandlessly and without spoken words.

Comment by EricHerboso on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T04:37:21.144Z · LW · GW

If the snitch is both the trigger and the epicenter of this spell in progress, then this would explain how the three wishes will be granted by "a single plot". The game is mostly played and watched by Slytherin and Ravenclaw students, so mostly Slytherin and Ravenclaw students would die. I can see a school like Hogwarts then giving both houses the House Cup as a way to honor the lost children and help surviving students deal with the trauma. So that's all three wishes: both houses win the House Cup, and the snitch is removed from Quidditch, all using "a single plot".

(from Iron_Nightingale on r/hpmor)

Comment by EricHerboso on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T04:59:19.240Z · LW · GW

I agree that the legilimensed Sprout's magic is activating the sense of doom. But the troll was not legilimensed, so there's no reason for the sense of doom to activate.

I may be wrong, but intuitively it seems that when Voldemort causes Sprout to cast a spell, that spell counts as originating from Voldemort, not Sprout -- and that is what makes it activate the sense of doom. The troll, by contrast, was acting of its own accord, and so didn't activate the sense of doom.

Comment by EricHerboso on 2014 Survey of Effective Altruists · 2014-05-01T19:41:50.778Z · LW · GW

Just to be clear, it wouldn't be "LW affiliation"; it would be "heard of EA through LW". I'm sure there are quite a few like me who learned about LW through EA, not the other way around.

Comment by EricHerboso on Effective Altruism Summit 2014 · 2014-04-30T16:30:29.540Z · LW · GW

While your application correctly identifies Animal Charity Evaluators by its current name, the main EA Summit webpage lists ACE under its old name of "Effective Animal Activism". Is there any chance you could update the page to use the new name?

Comment by EricHerboso on What do professional philosophers believe, and why? · 2013-05-01T16:29:31.102Z · LW · GW

After comparing my own answers to the clusters Bourget & Chalmers found, I don't appear to fit well into any one of the seven categories.

However, I did find the correlations between philosophical views outlined in section 3.3 of the paper to be fairly predictive of my own views. Nearly everything in Table 4 that I agree with on the left side corresponds to an accurate prediction of what I'd think about the issue on the right side.

Interestingly, not all of these correlations seem to have an underlying reason why they should logically go together. Does this mean that I've fallen prey to agreeing with the greens over the blues for something other than intellectual reasons?

Comment by EricHerboso on Risk-aversion and investment (for altruists) · 2013-02-28T15:31:53.165Z · LW · GW

Blindness affects cats less negatively than starving affects humans.

Comment by EricHerboso on Philosophical Landmines · 2013-02-13T03:00:33.512Z · LW · GW

I've never seen that as an additional ambiguity. I've always understood "OP" to mean "the original article", and never "the top level comment". But maybe this is because I've just never encountered the other use (or didn't notice when someone meant it to refer to the top level comment).

Comment by EricHerboso on If Many-Worlds Had Come First · 2013-02-12T19:24:53.441Z · LW · GW

Maybe he's counting the lack of an objective state as additional information?

Comment by EricHerboso on Assessing Kurzweil: the results · 2013-01-31T20:47:05.612Z · LW · GW

In the future, we might distinguish "difficult" predictions from trivial ones by checking whether they differ from the predictions others were making at the same time. This is easy to do when evaluating contemporary predictions.

But I have no idea how to accomplish this when looking back on past predictions. I can't help but feel that some of Kurzweil's predictions are trivial, yet how can we tell for sure?

Comment by EricHerboso on AidGrade - GiveWell finally has some competition · 2013-01-24T19:15:03.540Z · LW · GW

Case in point: Charity Navigator, which places unreasonable importance on irrelevant statistics like administrative overhead. There are already charity effectiveness evaluators out there that are doing counter-productive work.

Personally, I think adding another good charity evaluator to the mix as competition to GiveWell/Giving What We Can is important to the overall health of the optimal philanthropy movement.

Comment by EricHerboso on AidGrade - GiveWell finally has some competition · 2013-01-24T17:33:15.380Z · LW · GW

I agree with the spirit of this comment, but I think you are perhaps undervaluing the usefulness of helping with instrumental goals.

I am a huge fan of GiveWell/Giving What We Can, but one of the problems that many outsiders have with them is that they seem to have already made subjective value judgments on which things are more important. Remember that not everyone is into consequentialist ethics, and some find problems just with the concept of using QALYs.

Such people, when they first decide to start comparing charities, will not look at GiveWell/GWWC. They will look at something atrocious, like Charity Navigator. They will actually prefer Charity Navigator, since CN doesn't introduce subjective value judgments, but just ranks by unimportant yet objective stuff like overhead costs.

Though I've only just browsed their site, I view AidGrade as a potential way to reach those people: the people who want straight numbers; people who maybe aren't utilitarians, but recognize anyway that saving more is better than saving less, and so would use AidGrade to direct their funding to a better charity within whatever category they were going to donate to anyway. These people may not be swayed by traditional optimal philanthropy groups' arguments for mosquito nets over HIV drugs. But by listening to AidGrade, perhaps they will at least redirect their funding from bad charities to better ones within whatever category they choose.

Comment by EricHerboso on AidGrade - GiveWell finally has some competition · 2013-01-23T00:55:43.995Z · LW · GW

That speaks in GWWC's favor, I think. It would be odd for them not to take into account research done by GiveWell.

Remember that they don't agree on everything (e.g., cash transfers). When they do agree, I take it as evidence that GWWC has looked into GiveWell's recommendation and found it to be a good analysis. I don't really view it as parroting, which your comment might unintentionally imply.

Comment by EricHerboso on Assessing Kurzweil: the results · 2013-01-22T19:03:12.549Z · LW · GW

I am only one of the contributors, but you're welcome to view my comments. I doubt it will be helpful for your purpose, though.

Comment by EricHerboso on Assessing Kurzweil: the results · 2013-01-22T18:59:32.268Z · LW · GW

As a (perhaps) trivial example, consider the pair of predictions:

  • "Intelligent roads are in use, primarily for long-distance travel."
  • "Local roads, though, are still predominantly conventional."

As one of the people who participated in this study, I marked the first as false and the second as true. Yet the second "true" prediction seems like it is only trivially true. (Or perhaps not; I might be suffering from hindsight bias here.)

Comment by EricHerboso on Assessing Kurzweil: the results · 2013-01-22T18:53:44.312Z · LW · GW

As one of the people who contributed to this project by assessing his predictions, I do want to point out that several of the predictions marked as "True" seemed very obvious to me. Of course, this might be the result of hindsight bias, and it may in fact be genuinely impressive for him to have predicted things like the following examples:

  • "[Among portable computers,] Memory is completely electronic, and most portable computers do not have keyboards."
  • "However, nanoengineering is not yet considered a practical technology."
  • "China has also emerged as a powerful economic player."

Note also that some of the statements marked "True" are only vacuously true. For example, one of his wrong predictions was that "intelligent roads are in use...for long-distance travel". But he follows this up with the following prediction which got marked as "True":

"Local roads, though, are still predominantly conventional."

As you can see, I do not think that looking just at the percentage of true predictive statements he made is enough. Some of those predictions seem almost trivial. And yet we can't just dismiss them out of hand, because the reason I think they are trivial might just be that I'm looking at them after the fact. Counterfactually, if intelligent roads had come about, but local roads were still conventional, would I still call the prediction trivial? What if local roads weren't conventional? Would I then still call it a trivial prediction?

We had no choice but to mark such statements as true and count them in the percentage he got correct, because there's just no way I know of to disregard such "trivial" predictions. And this means we shouldn't really be looking at the percentage marked as true except to compare it with Kurzweil's own self-assessment of accuracy. Using the percentage marked as true for other purposes, such as deciding whether to trust Kurzweil's predictive power more than others', seems like a misuse of this data.

Comment by EricHerboso on [Link] Noam Chomsky Killed Aaron Schwartz · 2013-01-16T17:37:23.581Z · LW · GW

While I don't agree with much of the linked post, the line portraying civil disobedience as an application of "might makes right" really hits hard for me. I need to do more thinking on this to see if there is justification for me to update my current beliefs.

Comment by EricHerboso on Assessing Kurzweil: the gory details · 2013-01-16T13:53:44.284Z · LW · GW

My initial impression was that the volunteer completion rate would be higher among a group like LW members. But now I realize that was a naive assumption to make.

Comment by EricHerboso on Morality: Theory and Practice · 2013-01-16T00:18:14.567Z · LW · GW

Whether something is doable is irrelevant when it comes to determining whether it is right.

A separate question is what we should do, which is different from what is right. We should definitely do the most right thing we possibly can, but just because we can't do something does not mean that it is any less right.

A real example: there's nothing we can realistically do to stop much of the suffering that wild animals undergo through predation. Yet the suffering of prey is very real and has ethical implications. Here we see something which has moral standing even though there appears to be nothing we can do (beyond some trivial amount) to help the situation.

Comment by EricHerboso on January 2013 Media Thread · 2013-01-10T15:03:36.845Z · LW · GW

While I appreciate the recommendation and understand why you recommended it after just now watching it on Netflix, I honestly can't get over the laugh track. How do people watch shows with laughter in the background like this? I find it not only extremely distracting but also a bit insulting to have the show cue me on when I should find things funny.

Comment by EricHerboso on Licensing discussion for LessWrong Posts · 2012-12-24T22:26:27.749Z · LW · GW

I can't edit a poll, but obviously option 2 was meant to read "allow", not "require".

Comment by EricHerboso on Licensing discussion for LessWrong Posts · 2012-12-24T22:24:53.404Z · LW · GW

I'd like to second a change so that all future posts are explicitly under whatever license is needed. The mission of LW involves outreach, and you can't effectively conduct outreach if, every time a book is published or a podcast is made, every author has to be contacted individually for explicit permission.

How do others feel about making this change for all future submissions?

[pollid:376]

Comment by EricHerboso on [LINK] 23andme is 99$ now · 2012-12-12T23:33:08.632Z · LW · GW

Others have already pointed to HN comments arguing that 23andMe is mostly for novelty, but for those just skimming LW discussion who don't want to wade through pages of material, I'll highlight the strongest argument against taking 23andMe seriously:

Recent research hints that 10% of ordinary healthy people have genes that we currently understand to be indicative of major disease. In other words, if these people bought 23andMe's service, they would receive results that would be extraordinarily distressing, even though they are nonetheless healthy.

See the study in question. Relevant quote: "[O]ur current best mean estimates of ∼400 damaging variants and ∼2 bona fide disease mutations per individual [is an underestimate]". (The study was brought to my attention by NPR. Note that I have not read the actual paper, but only listened to a news report on it and read the abstract.)

Comment by EricHerboso on 2012 Survey Results · 2012-12-04T00:10:26.996Z · LW · GW

I see no reason to throw out their responses. They appear simply not to be familiar with the terminology. Someone who does not know that a "fair coin" is defined as having probability .5 for each side might envision it as a real, physical coin that merely doesn't have two heads.

Comment by EricHerboso on LessWrong podcasts · 2012-12-03T22:20:14.583Z · LW · GW

Seriously? Are you sure you've been comparing good narrators to that TTS voice?

For me, a good narrator will win out in an overwhelming majority of cases where I can choose between TTS and a good narrator.

Comment by EricHerboso on The substrate · 2012-12-02T00:33:22.479Z · LW · GW

I assume there's got to be a ground universe somewhere in the chain.

I'm not saying you're wrong to think this is likely, but I don't think this is as necessary a condition as some people are taking it to be. So long as each simulation is simulated from somewhere, there's no reason why it can't be the case that every simulator is also simulated. I can think of no reason why the universe would be like this, but I can also think of no reason why it can't be that way.

Comment by EricHerboso on [META] Retributive downvoting: Why? · 2012-11-28T02:03:30.418Z · LW · GW

Several months ago, another user offered to set up a fork of the Reddit Enhancement Suite that could implement this and other features for users interested in them, but the project never took off. Arguably, this is a poor way of solving the problem, because it requires opting in, and most users would continue to see the old look instead. But it would be better, perhaps, than doing nothing.

Comment by EricHerboso on CFAR and SI MOOCs: a Great Opportunity · 2012-11-14T03:06:30.463Z · LW · GW

I get the impression that they already have years worth of demand lined up, and so investing in supply improvements will have far higher returns on their end.

I'd hate for this to be the reason why CFAR decides not to pursue putting out an online course on rationality. Even if demand really is as high as you say, doing an online course would dramatically increase the number of people able to go through the curriculum at all, which I assume would be good progress toward CFAR's mission. Even if CFAR couldn't fully take advantage of the extra demand for camps that this would drive, I still think Konkvistador & Wrongnesslessness' idea is worthwhile for the organization.

Comment by EricHerboso on Group rationality diary, 11/13/12 · 2012-11-14T01:08:57.309Z · LW · GW

I recently took the time to compile a list of my favorite philosophy podcasts, and in the process finally realized that I spend a disproportionate amount of time on podcasts in general. However, since I've been pretty happy with how much time I spend on podcasts, I'm unsure whether changes to my current behavior are warranted.

My current plan is to cut out the bottom third of my podcast list and see how I feel. If it turns out that I'm just as content with only 2/3 of the time invested, that'll definitely free up some time I can spend on other projects. But my prediction is that I'll miss a lot of them and just end up re-adding them after a few weeks' hiatus. I guess I'll find out in a month or two.

Comment by EricHerboso on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-12T21:43:11.584Z · LW · GW

You're correct; I was confusing the 80k pledge with the GWWC pledge. I retract all previous comments made in this thread on this point. Sorry for being stubborn earlier without rechecking the source.

Comment by EricHerboso on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-12T19:36:41.317Z · LW · GW

Remember that the pledge is not to give money to GWWC; it's a pledge to give to effective charities in general. So those who want to focus on just humans will be giving only to human-based charities, while those who give to animal welfare charities will have their money spent on animal welfare.

Although I agree the pledge wording would perhaps be too deceptive, I do not agree that anyone would ever feel tricked, since they still individually choose where to send their money. Conservatives would probably give to the human welfare orgs GWWC recommends, while others would give to the animal welfare orgs EAA recommends.

Comment by EricHerboso on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-12T01:33:06.083Z · LW · GW

To clarify, I meant changing the pledge from:

"to donate 10% of their income to the charities that they believe will most effectively help people living in poverty"

to:

"to donate 10% of their income to the charities that they believe will most effectively help persons living in poverty".

I don't think the usage in this context refers to the actors with the means and inclination to take altruistic action; the focus instead is on those acted upon. (Of course, this is not a very good way of saying it, especially as there is ample evidence that giving money directly to the poor in developing countries might be better than developed countries giving what they incorrectly think the poor need, but this is beside the point.)

When conservative people read "persons in poverty", they will automatically think "humans living in poverty", whereas those more familiar with the use of "person" being inclusive with non-humans might instead interpret "persons living in poverty" much more liberally. (I realize this is nonstandard usage of the term, but my intent here is to allow a liberal interpretation while maintaining specificity.)

Comment by EricHerboso on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-12T00:50:15.750Z · LW · GW

It might be slightly deceptive (and thus not worth doing), but what about changing "people" to "persons"? Those who think about animal welfare more liberally would recognize "persons" as referring to both humans and non-humans, while those who are more conservative that GWWC is trying to reach will just automatically assume it means "people".

I would prefer this to your reformulation of "do good" because it explicitly takes other types of "doing good" out of the equation. (Unless there's some reason why a more inclusive "doing good" is worthwhile in such a pledge? At first glance, it seems to me that specificity is important in pledges of this kind.)

Comment by EricHerboso on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-12T00:40:13.936Z · LW · GW

I agree with the idea that EAA seems likely to be more effective than 80k for the reasons you stated. However, I disagree that this is sufficient reason to encourage earmarking.

It's true that I'd prefer to give to EAA directly, and the only way to do this currently is to write a check to the "Tides Foundation" and earmark it for EAA. But I think the far better way of doing this is for EAA to be separate not just from Tides, but also 80k (which has a confusingly distinct mission focused on careers and lifetime charitable donations, not animal welfare). Until they're separate, I can see why earmarking is justified, but you said it should be encouraged, which is an entirely different thing. I would NOT encourage earmarking; I'd earmark regretfully, and only until they separate out the organizations so that I can donate toward the mission I consider to be genuinely more effective.

Comment by EricHerboso on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-12T00:31:36.308Z · LW · GW

Actually, I think this is a technical problem they have, and should not be construed as a positive endorsement of earmarking. It looks like what they want are separate organizations (80k, GWWC), but the way their org is set up, they can only be tax deductible if you donate to the "Tides Foundation" instead.

Although technically this looks like earmarking, the intent seems to be that they wanted to have separate organizations with separate funding but have so far not actually separated them for the purposes of tax deductibility.

Comment by EricHerboso on Struck with a belief in Alien presence · 2012-11-11T01:48:42.222Z · LW · GW

Ascribing a prior probability of zero to these claims is like saying we should ignore all previous evidence and start over from scratch. But this is inappropriate; there is a long history of "aliens on Earth"-type claims that have been made over the years, and they have all been shown to be insufficient. So when a new "aliens on Earth"-type claim arises (like your linked video, which I have not yet clicked on), it is entirely appropriate to assign it a low prior.

Comment by EricHerboso on Voting is like donating thousands of dollars to charity · 2012-11-05T23:30:46.600Z · LW · GW

Quick correction:

(90% − 30%) × 1/(3.5 million) × ($7 trillion) = $1.2 million

The beginning of this should be 90%-10%, which changes the projected value to $1.6 million, not $1.2 million.
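Written out in full with the post's own figures, the corrected calculation is:

$$(0.90 - 0.10) \times \frac{\$7\ \text{trillion}}{3.5\ \text{million}} = 0.8 \times \$2\ \text{million} = \$1.6\ \text{million}$$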

Comment by EricHerboso on 2012 Less Wrong Census/Survey · 2012-11-04T04:09:55.350Z · LW · GW

I answered every question, and enjoyed doing so. Thank you for putting this together. (c:

Comment by EricHerboso on Open Thread, November 1-15, 2012 · 2012-11-03T02:40:30.897Z · LW · GW

It's been a few years since I heard this pronounced aloud, but my old undergrad prof's pronunciation of "3^^^3" was "3 hyper5 3". The "hyper5" part refers to the fact that three up-arrows is pentation. Similarly, "x^^y" is "x hyper4 y", because two up-arrows indicate tetration.

In general, add 2 to the number of up-arrows, and that's the hyper number you'd use.

(I should mention that I've never heard it used by anyone other than him, so it might have been just his way of saying it, as opposed to the way of saying it.)
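For concreteness, here's a minimal sketch of the hyperoperation sequence in Python (my own illustration, not anything from that professor; the function name `hyper` is just a hypothetical helper):

```python
def hyper(n, a, b):
    # Hyperoperation n: hyper(3, a, b) = a ** b (exponentiation),
    # hyper(4, ...) = tetration, hyper(5, ...) = pentation.
    # k up-arrows in Knuth notation corresponds to hyper number k + 2.
    if n == 3:
        return a ** b
    result = a
    for _ in range(b - 1):  # fold the next-lower operation b - 1 times
        result = hyper(n - 1, a, result)
    return result

print(hyper(4, 3, 2))  # 3^^2 = 3^3 = 27
print(hyper(4, 3, 3))  # 3^^3 = 3^27 = 7625597484987
# hyper(5, 3, 3) would be 3^^^3: a power tower of 7625597484987 threes,
# far too large to ever actually evaluate.
```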

Comment by EricHerboso on Checking Kurzweil's track record · 2012-10-31T01:08:01.699Z · LW · GW

I'll commit to doing 20 questions.

Comment by EricHerboso on Omega lies · 2012-10-24T12:45:21.874Z · LW · GW

Omega could tell you "Either I am simulating you to gauge your response, or this is reality and I predicted your response" - and the problem would be essentially the same.

This is essentially the same only if you care only about reality. But if you care about outcomes in simulations, too, then this is not "essentially the same" as the regular formulation of the problem.

If I care about my outcomes when I am "just a simulation" in a similar way to when I am "in reality", then the phrasing you've used for Omega would not lead to the standard Newcomb problem. If I'm understanding this correctly, your reformulation of what Omega says will result in justified two-boxing with CDT.

Either I'm a simulation, or I'm not. Since I might possibly choose to one-box or two-box according to a probability distribution (e.g., one-box 70% of the time; otherwise two-box), Omega must simulate me several times. This means I'm much more likely to be a simulation than not. And if we're in a simulation, Omega has not yet predicted our response. Therefore two-boxing really is genuinely better than one-boxing.
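To put a rough number on "much more likely": if Omega runs N independent simulations plus the single real instance (N here is purely an illustrative assumption), then

$$P(\text{simulation}) = \frac{N}{N+1}, \qquad \text{e.g. } N = 10 \implies P \approx 0.91.$$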

In other words, while Newcomb's problem is usually an illustration for why CDT fails by saying we should two-box, under your reformulation, CDT correctly says we should two-box. (Under the assumption that we value simulated utilons as we do "real" ones.)

Comment by EricHerboso on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-19T23:53:24.672Z · LW · GW

Lumping moral skepticism into "none of the above" seems very inappropriate to me. I know that technically, if the other options cover all the moral realist bases (which I agree they do), then "none of the above" is linguistically correct and has moral skepticism as its referent.

But it seems dismissive to call it "none of the above". It feels to me like describing it that way has semantic content embedded in the phrasing of the question that I disagree with.

I would prefer "moral skepticism" as an option for the same reason I'd prefer "atheism" as an option under the religious question. Calling it "none of the above" might be formally accurate, but it nevertheless feels inappropriate to phrase it that way, as it makes the question itself feel biased.

Comment by EricHerboso on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-19T02:17:22.943Z · LW · GW

Under "Part Five", you list SAT scoring, but not ACT scoring. I know far less people use the ACT, but if you're going to add in an option for SAT scores, I would also include a place for ACT scores.

Comment by EricHerboso on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-19T02:15:26.190Z · LW · GW

With which of these moral philosophies do you MOST identify?

  • There is no such thing as "morality"

Can you please rephrase this to "moral skepticism"? Or is there some benefit to saying it in the way you have?

Note that moral skepticism does not necessarily equate to nihilism -- error theories, fictionalist accounts, and moral revisionism all talk about doing what others would call "the right thing", even though they are all morally skeptical theories.

Also, don't you think this section is a bit coarsely defined? I'd love to see a breakdown of moral skeptics categorized as revisionists, fictionalists, etc. You can always include a "general moral skeptic" option for those people who stop thinking about metaethics once they decide moral skepticism is correct. Similarly, I'd love to see more finely grained options under consequentialism and the other broad categories of this section.

Comment by EricHerboso on Happy Ada Lovelace Day · 2012-10-18T02:52:56.971Z · LW · GW

These are excellent points. Unfortunately, I'm a bit hampered by the fact that I stole the chart in question from the original study (pdf), and they used only "dynamite plots" in their paper. After reading your links on the topic, I can definitely see why this is bad. I'm appending a short note to this effect as an edit to my original article.

Thank you for bringing this stuff to my attention.

Comment by EricHerboso on Happy Ada Lovelace Day · 2012-10-17T00:23:36.886Z · LW · GW

Rather than writing about a specific person, I wrote a blog post on Why Ada Lovelace Day is Important. It includes a review of a thorough study on gender bias among science faculty published a few months ago. It's really distressing to me that even in 2012 there exists this much male privilege in science academia.

Comment by EricHerboso on Problem of Optimal False Information · 2012-10-16T03:02:44.659Z · LW · GW

Except that after executing the code, you'd know it was an FAI and not a video game, which goes against the OP's rule that you continue honestly believing the falsehood.

I guess it works if you replace "FAI" in your example with "FAI who masquerades as a really cool video game to you and everyone you will one day contact" or something similar, though.

Comment by EricHerboso on Problem of Optimal False Information · 2012-10-16T02:50:50.174Z · LW · GW

Point taken.

Yet I would maintain that what I place high value on is belief in true facts paired with the other things I value. If I pair those other things with belief in falsehoods, their overall value is much, much less. In this way, I maintain a very high value on belief in true facts without committing myself to maximizing accuracy the way a paperclipper maximizes paperclips.

(Note that I'm confabulating here; the above paragraph is my attempt to salvage my intuitive beliefs, and is not indicative of how I originally formulated them. Nevertheless, I'm warily submitting them as my updated beliefs after reading your comment.)