Posts

AABoyles's Shortform 2019-12-05T22:08:09.901Z · score: 4 (1 votes)
Another Case Study of Inadequate Equilibrium in Medicine 2017-12-11T19:02:19.230Z · score: 15 (5 votes)
Cryonic Demography 2017-11-09T20:09:04.246Z · score: 11 (5 votes)
LessWrong-Portable 2017-09-22T20:48:41.200Z · score: 14 (6 votes)
I just increased my Altruistic Effectiveness and you should too 2014-11-17T15:45:00.537Z · score: 10 (11 votes)

Comments

Comment by aaboyles on AABoyles's Shortform · 2019-12-05T22:08:10.049Z · score: 4 (3 votes) · LW · GW

Attention Conservation Warning: I envision a model which would demonstrate something obvious, and conclude that the world probably wouldn't benefit from its existence.

The standard publication bias is that we must be 95% confident a described phenomenon exists (p < 0.05) before a result is publishable (at which point it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim). But the statistical confidence in a phenomenon conveys interesting and useful information regardless of what that confidence is.

Consider the space of all possible relationships: most of these are going to be absurd (e.g. the relationship between the number of minted pennies and the number of atoms in the moons of Saturn), and will exhibit no correlation. Some will exhibit weak correlations, far short of publishable significance (p-values in the range of 0.5). Those are still useful evidence that a pathway to a common cause exists! The universal prior on a random relationship being real should be roughly zero, because most relationships are absurd.

What would science look like if it could make efficient use of the information disclosed by presently unpublishable results? I think I can generate a sort of agent-based model to imagine this. Here's the broad outline (a rough code sketch follows the list):

  1. Create a random DAG representing some complex related phenomena.
  2. Create an agent which holds beliefs about the relationship between nodes in the graph, and updates its beliefs only when it discovers a correlation significant at the 95% level (i.e. p < 0.05).
  3. Create a second agent with the same belief structure, but which updates on every experiment regardless of the correlation.
  4. On each iteration have each agent select two nodes in the graph, measure their correlation, and update their beliefs. Then have them compute the DAG corresponding to their current belief matrix. Measure the difference between the DAG they output and the original DAG created in step 1.
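
To make this concrete, here is a minimal Python sketch of the two-agent comparison, under simplifying assumptions of my own: linear-Gaussian data generated from the DAG, naive pairwise correlation tests rather than full causal discovery, and belief matrices scored against the true correlation structure instead of reconstructed DAGs. All function names and parameters are illustrative, not prescriptive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def random_dag(n_nodes, edge_prob):
    """Random DAG as an upper-triangular matrix of edge weights (index order = topological order)."""
    weights = rng.uniform(0.5, 1.5, (n_nodes, n_nodes))
    mask = rng.random((n_nodes, n_nodes)) < edge_prob
    return np.triu(weights * mask, k=1)  # upper-triangular => acyclic

def sample_data(adj, n_samples):
    """Draw samples from the linear-Gaussian model implied by the DAG."""
    n = adj.shape[0]
    x = np.zeros((n_samples, n))
    for j in range(n):  # parents of j have smaller indices, so their columns are already filled
        x[:, j] = x @ adj[:, j] + rng.normal(size=n_samples)
    return x

def run_agent(adj, publication_bias, n_iters=2000, n_samples=100, alpha=0.05):
    """One agent: repeatedly test random node pairs, update beliefs, return final error."""
    n = adj.shape[0]
    beliefs = np.zeros((n, n))  # believed |correlation| for each pair
    truth = np.abs(np.corrcoef(sample_data(adj, 50_000), rowvar=False))
    for _ in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)
        data = sample_data(adj, n_samples)
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if publication_bias and p >= alpha:
            continue  # "unpublishable" result: discarded, no update
        beliefs[i, j] = beliefs[j, i] = abs(r)  # naive update: latest estimate wins
    # Mean absolute error against the true correlation structure
    return np.abs(beliefs - truth)[np.triu_indices(n, k=1)].mean()

adj = random_dag(n_nodes=10, edge_prob=0.3)
print("error with publication bias:   ", run_agent(adj, publication_bias=True))
print("error without publication bias:", run_agent(adj, publication_bias=False))
```

Running this while varying n_iters would show how quickly each agent's error falls; the open parameters (graph size, edge probability, samples per test, and the update rule itself) are exactly the ones that need careful selection and defense.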

I believe that both agents will converge on the correct DAG, but the un-publication-biased agent will converge much more rapidly. There are a bunch of open parameters that need careful selection and defense here. How do the properties of the original DAG affect the outcome? What if agents can update on a relationship multiple times (e.g. run a test on 100 samples, then on 10,000)?

Given defensible positions on these issues, I suspect that such a model would demonstrate that publication bias reduces scientific productivity by roughly an order of magnitude (and perhaps much more).

But what would the point be? No one will be convinced by such a thing.

Comment by aaboyles on Would you benefit from audio versions of posts? · 2018-10-19T14:39:10.144Z · score: 1 (1 votes) · LW · GW

Please please please make this happen!

Comment by aaboyles on Would you benefit from audio versions of posts? · 2018-07-26T13:45:20.392Z · score: 1 (1 votes) · LW · GW

Me too!

Comment by aaboyles on Reality versus Human Expectations · 2018-03-22T17:50:58.914Z · score: 4 (2 votes) · LW · GW

Definitely 2. If you start messing with reality, things get really boring really quickly.

Comment by aaboyles on Rationality: Abridged · 2018-02-26T20:15:32.291Z · score: 3 (1 votes) · LW · GW

Thank you! I don't have a good way to test Apple products (so the fix won't be quick), but I'll look into it.

Comment by aaboyles on Rationality: Abridged · 2018-02-22T16:11:35.954Z · score: 3 (1 votes) · LW · GW

Thanks for letting me know. I use [Calibre](https://calibre-ebook.com/about) to test the files, and it opens the file without complaint. What are you using (and on what platform) to read it?

Comment by aaboyles on Rationality: Abridged · 2018-01-09T14:07:42.619Z · score: 5 (2 votes) · LW · GW

My pleasure!

Comment by aaboyles on Roleplaying As Yourself · 2018-01-08T18:40:19.625Z · score: 12 (5 votes) · LW · GW

Also See Also: Simulate and defer to more rational selves.

Comment by aaboyles on Rationality: Abridged · 2018-01-08T18:36:34.624Z · score: 10 (4 votes) · LW · GW

Done!

Comment by aaboyles on Rationality: Abridged · 2018-01-08T18:36:06.042Z · score: 29 (10 votes) · LW · GW

I have converted Rationality Abridged to EPUB and MOBI formats. The code to accomplish this is stored in this repository.

Comment by aaboyles on Rationality: Abridged · 2018-01-08T17:10:43.483Z · score: 8 (3 votes) · LW · GW

I'm on it!

Comment by aaboyles on 2017: A year in Science · 2018-01-03T15:02:09.318Z · score: 3 (1 votes) · LW · GW
- A roundworm has been uploaded to a Lego body (http://edition.cnn.com/2015/01/21/tech/mci-lego-worm/index.html)

This happened in 2015.

Comment by aaboyles on Taxonomy of technological resurrection - request for comment · 2017-12-04T16:09:55.111Z · score: 3 (1 votes) · LW · GW

The Wikipedia page on Resurrection contains some scattershot content (including links the H+Pedia article lacks) which might be useful to assimilate.

Comment by aaboyles on Cryonic Demography · 2017-11-10T13:10:29.705Z · score: 5 (2 votes) · LW · GW

https://aaboyles.github.io/Essays/portfolio/CryonicDemography.html

Comment by aaboyles on 11/07/2017 Development Update: LaTeX! · 2017-11-07T21:45:56.405Z · score: 3 (1 votes) · LW · GW

The frontend plugin worked when I submitted my comment, but I'm getting the "refresh to render LaTeX" message as well. Neither refreshing nor fresh browser sessions seem to yield LaTeX.

Comment by aaboyles on 11/07/2017 Development Update: LaTeX! · 2017-11-07T21:41:50.392Z · score: 11 (3 votes) · LW · GW

Excellent, thank you!

Comment by aaboyles on Toy model of the AI control problem: animated version · 2017-10-30T13:49:37.637Z · score: 3 (1 votes) · LW · GW

Sorry I'm two weeks late, but the text of the unlicense has been added. Thank you!

Comment by aaboyles on Toy model of the AI control problem: animated version · 2017-10-13T14:51:03.875Z · score: 6 (3 votes) · LW · GW

Thanks! It's up.

Comment by aaboyles on Toy model of the AI control problem: animated version · 2017-10-11T18:48:36.857Z · score: 12 (4 votes) · LW · GW

Nice post! I'd like to put a copy of the code on Github, but I don't see a license anywhere in the directory (or mentioned in the files). May I assume it's generally intended to be Open Source and I can do this?

Comment by aaboyles on LessWrong-Portable · 2017-09-23T01:57:57.463Z · score: 2 (1 votes) · LW · GW

Hey Ben, I just re-ran it and it worked very well. Thanks a lot!

Comment by aaboyles on Beta - First Impressions · 2017-09-22T18:52:52.679Z · score: 3 (3 votes) · LW · GW
  • Many broken links in the Codex.

  • Searching is fast and awesome and I love it.

  • Initial pageload is very slow: nearly 2.5MB (1.5MB of which is a single file), requiring 14.17s until DOMContentLoaded.

Comment by aaboyles on Heuristics for textbook selection · 2017-09-06T18:42:29.938Z · score: 1 (1 votes) · LW · GW

1a. If a professor is a suitable source for a recommendation, they've probably taught a course on the topic, and that course's syllabus may be available on the open web without emailing the professor.

Comment by aaboyles on Is life worth living? · 2017-09-05T19:17:19.341Z · score: 1 (1 votes) · LW · GW

If there were randomness such that you had some probability of a strongly positive event, would this incline you towards life?

Comment by aaboyles on Is life worth living? · 2017-09-05T19:14:02.932Z · score: 1 (1 votes) · LW · GW

Even if the probability was trivial?

Comment by aaboyles on Is life worth living? · 2017-08-30T21:40:50.421Z · score: 2 (2 votes) · LW · GW

The experiment specifies that the circumstances are all but literally indistinguishable:

I'll allow you to relive your life up to this moment exactly as it unfolded the first time -- that is, all the exact same experiences, life decisions, outcomes, etc.

If the sequence of events is "exactly" the same, then from your perspective it cannot be distinguished. If it could, then some event must have happened differently in the past to make you aware that things were different, which violates the terms of God's claim. In other words, the two timelines must be indistinguishable from your perspective.

Comment by aaboyles on Is life worth living? · 2017-08-30T16:49:02.570Z · score: 1 (1 votes) · LW · GW

Framing note: it's worth examining how intuitions change when you replace "God" with "Omega" and "relive" with "reset the deterministic simulation that computed".

Comment by aaboyles on Is life worth living? · 2017-08-30T16:33:53.199Z · score: 0 (0 votes) · LW · GW

There are many moments of my life that would give me pause about re-living them. However, were I much younger and aware that I was doomed to that set of experiences, I wouldn't opt to commit suicide. It therefore follows that my life thus far has been worth living, and that I should opt to re-live it, rather than be annihilated.

That said, it seems to me that these 'choices' are not an opportunity to make a choice at all. In this thought experiment, do we live out our second instance with the knowledge that it is a second instance and we are incapable of acting differently? If "God" makes the offer to bring me right up to that very moment in exactly the same way, all the while I'm aware that the decision is approaching, the experience of living will be qualitatively different than it was in ignorance of this fate, even if I am powerless to change it. However, the wording of option 2 seems to imply that this is not the case.

Assuming I do not have some sort of epistemic access to the fact that God has rewound my life, I will live my life exactly as I did in the first instance. To me, this is metaphysically indistinguishable from (and morally equivalent to) living it in the first place. However, there is an important difference. At the time God approaches me, the "choice" has become a lie: because God already rewound my life and let it play out again, I will behave in the same way (choose option 2) and God will annihilate me anyway! It is, after all, a stipulation of the rules God presented.

From my perspective, God is just going to annihilate me no matter what, so I'm indifferent between the two options.

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-03-01T16:09:22.151Z · score: 3 (1 votes) · LW · GW

I'm aware. Note that I did call it the "least likely possibility."

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:43:43.240Z · score: 3 (3 votes) · LW · GW

For example, maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips); it then achieved its terminal goal with a tiny amount of energy (less than one star's worth) and halted.

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:42:11.280Z · score: 5 (5 votes) · LW · GW

This is actually a fairly healthy field of study. See, for example, Nonphotosynthetic Pigments as Potential Biosignatures.

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:28:15.053Z · score: 0 (0 votes) · LW · GW

...Think of the Federation's "Prime Directive" in Star Trek.

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:27:03.183Z · score: 6 (6 votes) · LW · GW

It may have discovered some property of physics which enabled it to expand more efficiently across alternate universes, rather than across space in any given universe. Thus it would be unlikely to colonize much of any universe (specifically, ours).

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:23:04.007Z · score: 2 (2 votes) · LW · GW

The superintelligence could have been written to value-load based on its calculations about an alien (to its creators) superintelligence (what Bostrom refers to as the "Hail Mary" approach). This could cause it to value the natural development of alien biology enough to actively hide its activities from us.

Comment by aaboyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:08:01.267Z · score: 1 (3 votes) · LW · GW

The most obvious and least likely possibility is that the superintelligence hasn't had enough time to colonize the galaxy (i.e. it was created very recently).

Comment by aaboyles on The map of global catastrophic risks connected with biological weapons and genetic engineering · 2016-02-22T14:13:10.724Z · score: 2 (2 votes) · LW · GW

The link to the "Strategic Terrorism" paper is malformed. The correct URL is here.

Comment by aaboyles on Omega's Idiot Brother, Epsilon · 2015-11-25T21:12:07.459Z · score: 4 (4 votes) · LW · GW

To take the obvious approach, let's calculate Expected Values for both strategies. To start, let's try two-boxing:

(80/8000 × $1,000) + (7920/8000 × $1,001,000) = $991,000

Not bad. OK, how about one-boxing?

(3996/4000 × $1,000,000) + (4/4000 × $0) = $999,000

So one-boxing is the rational strategy (assuming you're seeking to maximize the amount of money you get).
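
A quick sanity check of the arithmetic (probabilities taken from the fractions above; the variable names are mine):

```python
# Expected values for the two strategies, using the stated odds and payoffs.
two_box = [(80/8000, 1_000), (7920/8000, 1_001_000)]
one_box = [(3996/4000, 1_000_000), (4/4000, 0)]

ev = lambda outcomes: sum(p * payoff for p, payoff in outcomes)
print(f"EV(two-box): ${ev(two_box):,.0f}")  # $991,000
print(f"EV(one-box): ${ev(one_box):,.0f}")  # $999,000
```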

However, this game has two interesting properties which, together, would make me consider two-boxing based on exogenous circumstances. The first is that the difference between the two strategies is very small: only $8,000. If I have $990-odd thousand dollars, I'm not going to be hung up on the last $8,000. In other words, money has diminishing marginal utility. As a corollary to this, two-boxing guarantees that the player receives at least $1,000, whereas one-boxing could result in the player receiving nothing. Again, because money has diminishing marginal utility, getting the first $1,000 may be worth the risk of not winning the million. If, for example, I needed a sum of money less than $1,000 to keep myself alive (with certainty), I would two-box in a heartbeat.

All that said, I would (almost always, certainly) one-box.

Comment by aaboyles on Rationality Quotes Thread October 2015 · 2015-10-21T19:20:58.410Z · score: 8 (10 votes) · LW · GW

Nobody wants to hear that you will try your best. It is the wrong thing to say. It is like saying "I probably won't hit you with a shovel." Suddenly everyone is afraid you will do the opposite.

--Lemony Snicket, All the Wrong Questions

Comment by aaboyles on [Link] Study: no big filter, we're just too early · 2015-10-21T19:11:43.863Z · score: 0 (0 votes) · LW · GW

The Great Filter isn't an explanation of why life on Earth is unique; rather, it's an explanation of why we have no evidence of civilizations that have developed beyond Kardashev I. So, rather than focusing on the probability that some life has evolved somewhere else, consider the reason that we apparently don't have intelligent life everywhere. THAT's the Great Filter.

Comment by aaboyles on [Link] Study: no big filter, we're just too early · 2015-10-21T17:04:12.626Z · score: 8 (8 votes) · LW · GW

This research doesn't imply the non-existence of a Great Filter (contra this post's title). If we take the paper's own estimates, there will be approximately 10^20 terrestrial planets in the Universe's history. Given that they estimate the Earth preceded 92% of these, there currently exist approximately 10^19 terrestrial planets (the remaining 8% of 10^20), any one of which might have evolved intelligent life. And yet, we remain unvisited and saturated in the Great Silence. Thus, there is almost certainly a Great Filter.

Comment by aaboyles on Bragging thread September 2015 · 2015-09-02T18:49:05.836Z · score: 10 (10 votes) · LW · GW

In July I started a Caloric Restriction Diet, fasting for an entire (calendar) day twice weekly. I did this out of a desire for the potential longevity benefits, but since then it's had a rather happy (albeit utterly predictable) side-effect: I lost 10 pounds!

Comment by aaboyles on Crazy Ideas Thread, Aug. 2015 · 2015-08-11T14:21:59.431Z · score: 12 (12 votes) · LW · GW

LW/CFAR should develop a rationality curriculum for Elementary School Students. While the Sequences are a great start for adults and precocious teens with existing sympathies to the ideas presented therein, there's very little in the way of rationality training accessible to (let alone intended for) children.

Comment by aaboyles on State-Space of Background Assumptions · 2015-07-29T01:14:36.505Z · score: 5 (5 votes) · LW · GW

Done. Looking forward to seeing your results!

Comment by aaboyles on Should We Shred Whole-Brain Emulation? · 2015-07-09T15:53:05.577Z · score: 3 (3 votes) · LW · GW

I am opening this thread to test the hypothesis that SuperIntelligence is plausible but that Whole-Brain Emulations would most likely become obsolete before they were even possible.

I'm not sure of what you're claiming here. Are you hypothesizing that a path to Superintelligence which requires WBE will likely be slower than a path which does not? Or something else, like that brain-based computation with good APIs will hold a relative advantage over WBE indefinitely?

Further, given the ability to do so, entities which were near to being Whole-Brain Emulations would rapidly choose to cease to be near Whole-Brain Emulations and move on to become something else.

Again, this could be clearer. Are you implying that a WBE in the process of being constructed will opt not to be completed before beginning to self-improve (i.e. become a neuromorph)?

Comment by aaboyles on Effective altruism and political power · 2015-06-17T18:26:42.709Z · score: 10 (10 votes) · LW · GW

Impact concerns notwithstanding, there are some practical constraints: Elon Musk and Sergey Brin are naturalized US citizens, which makes them ineligible to serve as US President.

Comment by aaboyles on Simulate and Defer To More Rational Selves · 2015-06-04T15:50:46.666Z · score: 0 (0 votes) · LW · GW

A variation of this technique (pretending to be Batman) works for children.

Comment by aaboyles on Boxing an AI? · 2015-03-27T15:12:53.061Z · score: 0 (2 votes) · LW · GW

It's not a matter of "telling" the AI or not. If the AI is sufficiently intelligent, it should be able to observe that its computational resources are bounded, and infer the existence of the box. If it can't make that inference (and can't self-improve to the point that it can), it probably isn't a strong enough intelligence for us to worry about.

Comment by aaboyles on Can we decrease the risk of worse-than-death outcomes following brain preservation? · 2015-02-23T15:05:25.532Z · score: 0 (0 votes) · LW · GW

The circumstances under which I would opt to be killed are extremely specific. Namely, I would want not to be revived if I were to be tortured indefinitely. This is actually more specific than it sounds: in order for this to occur, there must exist an entity which would soon possess the ability to revive me, and an incentive to do so rather than just allowing me to die. I find this to be such an extreme edge case that I'm actually uncomfortable with the characterization of the conversation. Instead, I'd turn the question around: under what circumstances would you want to be revived?

Trivially, we should want to be revived into a civilization which possesses the technology to revive us at all, and subsequently to extend our lives. If circumstances are bad on Earth, we should prefer to defer our revival until those circumstances improve. If they never do, the overwhelming probability is that cryonic remains will simply be forgotten, turned off, and the frozen never revived. But building in a terminal death condition which might be triggered denies us the possibility of waiting out those bad circumstances.

tl;dr Don't choose death, choose deferment.

Comment by aaboyles on LINK: Guinea worm disease close to eradication · 2015-01-16T18:06:24.165Z · score: 9 (9 votes) · LW · GW

We might not want to draw that tick mark just yet. Our other "Global Eradication Target", Polio, has dropped into the 10^3 range of annual cases several times. The New York Times likened beating those last few cases to "Trying to squeeze Jell-O to death." Not that humanity doesn't deserve a collective pat on the back, but let's not call the job done until the job is done.

Comment by aaboyles on Simulate and Defer To More Rational Selves · 2015-01-05T19:44:14.790Z · score: 1 (1 votes) · LW · GW

I've been working on noticing that I'm arguing with them, and running a mental process to halt those threads. It helps a lot in the moment. More importantly, it may even be having a preventative effect--I think I'm experiencing these imaginary fights less often.

Comment by aaboyles on Identity crafting · 2014-12-31T19:59:15.823Z · score: 2 (2 votes) · LW · GW

My (imperfect, incomplete) solution to dealing with this is to establish a canonical format and data source. For example, my list of books to read is saved as an extensive Amazon Wish List. This both ensures I don't waste time reconstructing the hundreds of books I've already listed and lowers the barrier to actually obtaining the books (I could literally click once and have a book on my Kindle), leaving only the hard part of actually reading it. I'm still working on the mental process which transitions from the list to the action.