## Posts

There's an Awesome AI Ethics List and it's a little thin 2020-06-25T13:43:20.858Z
The Bentham Prize at Metaculus 2020-01-27T14:27:21.172Z
AABoyles's Shortform 2019-12-05T22:08:09.901Z
Another Case Study of Inadequate Equilibrium in Medicine 2017-12-11T19:02:19.230Z
Cryonic Demography 2017-11-09T20:09:04.246Z
LessWrong-Portable 2017-09-22T20:48:41.200Z
I just increased my Altruistic Effectiveness and you should too 2014-11-17T15:45:00.537Z

Comment by AABoyles on Anti-Aging: State of the Art · 2021-01-07T17:34:47.195Z · LW · GW

Fair point. It does seem like "pandemic" is a more useful category if it doesn't include a whole bunch of "things that happened but didn't kill a lot of people."

Comment by AABoyles on Anti-Aging: State of the Art · 2021-01-06T14:20:07.823Z · LW · GW

Without aging, COVID-19 would not be a global pandemic, since the death rate in individuals below 30 years old is extremely low.

A pandemic is an epidemic that occurs across multiple continents. Note that we can envision a pandemic with a death rate of zero, but a pandemic nonetheless. Accordingly, I think you've somewhat overstated the punchline about aging and COVID-19, though I agree with the broader point that if aging were effectively halted at 30, the death rates would be much, much lower.

Comment by AABoyles on D&D.Sci · 2020-12-08T21:16:17.985Z · LW · GW

If I weren't trying to not-spend-time-on-this, I would fit a Random Forest or a Neural Network (rather than a logistic regression) to capture some non-linear signal, and, if it predicted well, fire up an optimizer to see which stats (and how much of each) really help.

Comment by AABoyles on D&D.Sci · 2020-12-08T21:09:51.240Z · LW · GW

Fun! I wish I had a lot more time to spend on this, but here's a brief and simple basis for a decision:

```r
library(readr)
library(dplyr)
library(magrittr)

# Assumes the challenge data has already been loaded into `training`,
# e.g. training <- readr::read_csv("path/to/data.csv")

training %<>%
  dplyr::mutate(outcome = ifelse(result == "succeed", 1, 0))

model <- glm(outcome ~ cha + con + dex + int + str + wis, data = training, family = "binomial")

summary(model)

start <- data.frame(str = c(6), con = c(14), dex = c(13), int = c(13), wis = c(12), cha = c(4))
predict.glm(model, start, type = "response")
# > 0.3701247

wise <- data.frame(str = c(6), con = c(15), dex = c(13), int = c(13), wis = c(20), cha = c(5))
predict.glm(model, wise, type = "response")
# > 0.7314005

charismatic <- data.frame(str = c(6), con = c(14), dex = c(13), int = c(13), wis = c(12), cha = c(14))
predict.glm(model, charismatic, type = "response")
# > 0.6510629

wiseAndCharismatic <- data.frame(str = c(6), con = c(14), dex = c(13), int = c(13), wis = c(20), cha = c(6))
predict.glm(model, wiseAndCharismatic, type = "response")
# > 0.73198
```


Gonna go with wiseAndCharismatic (+8 Wisdom, +2 Charisma).

Comment by AABoyles on Developmental Stages of GPTs · 2020-07-28T21:12:06.732Z · LW · GW

It would also be very useful to build some GPT feature "visualization" tools ASAP.

Do you have anything more specific in mind? I see the Image Feature Visualization tool, but in my mind it's basically doing exactly what you're already doing by comparing GPT-2 and GPT-3 snippets.

Comment by AABoyles on AABoyles's Shortform · 2020-05-26T15:46:29.527Z · LW · GW

If it's not fast enough, it doesn't matter how good it is

Sure! My brute-force bitwise algorithm generator won't be fast enough to generate any algorithm of length 300 bits, and our universe probably can't support any representation of any algorithm of length greater than (the number of atoms in the observable universe) ~ 10^82 bits. (I don't know much about physics, so this could be very wrong, but think of it as a useful bound. If there's a better one (e.g. number of Planck volumes in the observable universe), substitute that and carry on, and also please let me know!)
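To make the scale concrete, here's a quick sanity check (the 10^82 figure is just the rough atom-count bound used above, not a precise physical constant):

```python
# Brute-force enumeration of all bitstring programs up to length n
# must visit 2^(n+1) - 2 candidates.  Compare with the ~10^82 atoms
# used above as a generous storage bound for the observable universe.
n = 300
candidates = 2 ** (n + 1) - 2
atom_bound = 10 ** 82
print(candidates > atom_bound)  # True: even n = 300 is far out of reach
```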

Part of the issue with this might be programs that don't work or don't do anything. (Beyond the trivial cases, it's not clear how to select for this, outside of something like AlphaGo.)

Another class of algorithms that causes problems is those that don't do anything useful for some number of computations, after which they begin to output something useful. We don't really get to know whether they will halt, so if the useful structure emerges only after some large number of steps, we may not be willing or able to run them that long.

Comment by AABoyles on AABoyles's Shortform · 2020-05-26T14:37:06.056Z · LW · GW

Anything sufficiently far away from you is causally isolated from you. Because of the fundamental constraints of physics, information from there can never reach here, and vice versa; you may as well be in separate universes.

The performance of AlphaGo got me thinking about algorithms we can't access. In the case of AlphaGo, we implemented the algorithm (AlphaGo) which discovered some strategies we could never have created. (Go Master Ke Jie famously said "I would go as far as to say not a single human has touched the edge of the truth of Go.")

Perhaps we can imagine a sort of "logical causal isolation." An algorithm is logically causally isolated from us if we cannot discover it (e.g. in the case of the Go strategies that AlphaGo used) and we cannot specify an algorithm to discover it (except by random accident) given finite computation over a finite time horizon (i.e. in the lifetime of the observable universe).

Importantly, we can devise algorithms which search the entire space of algorithms (e.g. generate all possible strings of bits of length less than n as n approaches infinity), but there's little reason to expect such a strategy to produce any useful outputs: as noted above, there are only around 10^82 atoms in the observable universe, which bounds the length of any algorithm we could ever physically represent, let alone exhaustively enumerate.
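For what it's worth, the exhaustive enumerator itself is trivial to specify; the bottleneck is scale, not specification. A toy sketch in Python:

```python
from itertools import product

def all_bitstrings(max_len):
    """Yield every bitstring of length 1..max_len -- the exhaustive
    'search the whole space of algorithms' strategy described above."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

programs = list(all_bitstrings(3))
print(len(programs))  # 2 + 4 + 8 = 14 candidate programs
```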

There's one important weakness in LCI (that doesn't exist in Physical Causal Isolation): we can randomly jump to algorithms of arbitrary lengths. This stipulation gives us the weird ability to pull stuff from outside our LCI-cone into it. Unfortunately, we cannot do so with the expectation of arriving at a useful algorithm. (There's an interesting question, which I haven't yet thought through, about the distribution of useful algorithms of a given length.) Hence we must add the caveat to our definition of LCI "except by random accident."

We aren't LCI'd from the strategies AlphaGo used, because we created AlphaGo and AlphaGo discovered those strategies (even if human Go masters may never have discovered them independently). I wonder what algorithms exist beyond not just our horizons, but the horizons of all the algorithms which descend from everything we are able to compute.

Comment by AABoyles on The Bentham Prize at Metaculus · 2020-02-05T16:59:26.550Z · LW · GW

A second round is scheduled to begin this Saturday, 2020-02-08. New predictors should have a minor advantage in later rounds as the winners will have already exhausted all the intellectual low-hanging fruit. Please join us!

Comment by AABoyles on The Bentham Prize at Metaculus · 2020-02-04T19:06:59.345Z · LW · GW

Thanks! Also, thanks to Pablo_Stafforini, DanielFilan and Tamay for judging.

Comment by AABoyles on CFAR Participant Handbook now available to all · 2020-01-14T20:56:26.783Z · LW · GW

Thank you!

Comment by AABoyles on CFAR Participant Handbook now available to all · 2020-01-13T18:32:55.822Z · LW · GW

I would also like to convert it to a more flexible e-reader format. It appears to have been typeset using ... Would it be possible to share the source files?

Comment by AABoyles on Many Worlds, One Best Guess · 2019-12-31T16:04:48.699Z · LW · GW

It's time to test the Grue Hypothesis! Anyone have some Emeralds handy?

Comment by AABoyles on AABoyles's Shortform · 2019-12-31T14:51:05.403Z · LW · GW

It occurs to me that the world could benefit from more affirmative fact-checking. Existing fact checkers are appropriately rude to people who publicly make false claims, but there's not much in the way of celebration of people who make difficult true claims. For example, PolitiFact awards "Pants on Fire" for bald lies, but only "True" for bald truths. I think there should be an even higher-status classification for true claims that run counter to the interests of the speaker. For example, we could award "Bayesian Stars" to public figures who publicly update on new evidence, or "Bullets Bitten" to public figures who promulgate true evidence that weakens their own arguments.

Comment by AABoyles on AABoyles's Shortform · 2019-12-26T15:59:36.802Z · LW · GW

It occurs to me that "Following one's passion" is terrible advice at least in part because of the lack of diversity in the activities we encourage children to pursue. It follows that encouraging children to participate in activities with very high-competition job markets (e.g. sports, the arts) may be a substantial drag on economic growth. After 5 minutes of search, I could not find research on this relationship. (It seems the state of scholarship on the topic is restricted to models in which participation in extracurriculars early in childhood leads to better metrics later in childhood.) This may merit a more careful assessment.

Comment by AABoyles on AABoyles's Shortform · 2019-12-05T22:08:10.049Z · LW · GW

Attention Conservation Warning: I envision a model which would demonstrate something obvious, and decide the world probably wouldn't benefit from its existence.

The standard publication bias is that we must be 95% certain a described phenomenon exists before a result is publishable (at which time it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim). But the statistical confidence of a phenomenon conveys interesting and useful information regardless of what that confidence is.

Consider the space of all possible relationships: most of these are going to be absurd (e.g. the relationship between number of minted pennies and number of atoms in moons of Saturn), and exhibit no correlation. Some will exhibit weak correlations (in the range of p = 0.5). Those are still useful evidence that a pathway to a common cause exists! The universal prior on random relationships should be roughly zero, because most relationships will be absurd.

What would science look like if it could make efficient use of the information disclosed by presently unpublishable results? I think I can generate a sort of agent-based model to imagine this. Here's the broad outline:

1. Create a random DAG representing some complex related phenomena.
2. Create an agent which holds beliefs about the relationships between nodes in the graph, and updates its beliefs only when an experiment yields a correlation significant at the 95% level.
3. Create a second agent with the same belief structure, but which updates on every experiment regardless of the correlation.
4. On each iteration have each agent select two nodes in the graph, measure their correlation, and update their beliefs. Then have them compute the DAG corresponding to their current belief matrix. Measure the difference between the DAG they output and the original DAG created in step 1.
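A minimal sketch of this simulation in Python. The DAG size, the noise rates on each "experiment," and the simple tallying rule are all placeholder assumptions, not part of the original outline:

```python
import random

def make_dag(n, p_edge, seed=0):
    """Step 1: a random DAG over a fixed topological order."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p_edge}

def run_agent(true_edges, n, trials, threshold, seed=1):
    """Steps 2-4: an agent repeatedly 'measures' random node pairs.
    Each noisy experiment detects a real edge with prob 0.7 and
    false-alarms with prob 0.2 (placeholder rates).  The publication-
    biased agent discards any result whose confidence falls below
    `threshold`; the unbiased agent (threshold=0) updates on every
    experiment.  Beliefs are a simple tally of positive vs. negative
    results per pair."""
    rng = random.Random(seed)
    tallies = {}
    for _ in range(trials):
        i, j = sorted(rng.sample(range(n), 2))
        positive = rng.random() < (0.7 if (i, j) in true_edges else 0.2)
        confidence = rng.random()  # stand-in for 1 - p
        if confidence >= threshold:
            tallies[(i, j)] = tallies.get((i, j), 0) + (1 if positive else -1)
    inferred = {pair for pair, t in tallies.items() if t > 0}
    return len(inferred ^ true_edges)  # edges wrong vs. the true DAG

n = 8
dag = make_dag(n, p_edge=0.3)
biased_error = run_agent(dag, n, trials=5000, threshold=0.95)
unbiased_error = run_agent(dag, n, trials=5000, threshold=0.0)
print(biased_error, unbiased_error)
```

The interesting comparison is how the two error curves fall as `trials` grows: both agents see the same experiments, but the biased agent throws most of them away.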

I believe that both agents will converge on the correct DAG, but the un-publication-biased agent will converge much more rapidly. There are a bunch of open parameters that need careful selection and defense here. How do the properties of the original DAG affect the outcome? What if agents can update on a relationship multiple times (e.g. run a test on 100 samples, then on 10,000)?

Given defensible positions on these issues, I suspect that such a model would demonstrate that publication bias reduces scientific productivity by roughly an order of magnitude (and perhaps much more).

But what would the point be? No one will be convinced by such a thing.

Comment by AABoyles on Would you benefit from audio versions of posts? · 2018-10-19T14:39:10.144Z · LW · GW

Comment by AABoyles on Would you benefit from audio versions of posts? · 2018-07-26T13:45:20.392Z · LW · GW

Me too!

Comment by AABoyles on Reality versus Human Expectations · 2018-03-22T17:50:58.914Z · LW · GW

Definitely 2. If you start messing with reality, things get really boring really quickly.

Comment by AABoyles on Rationality: Abridged · 2018-02-26T20:15:32.291Z · LW · GW

Thank you! I don't have a good way to test Apple products (so the fix won't be quick), but I'll look into it.

Comment by AABoyles on Rationality: Abridged · 2018-02-22T16:11:35.954Z · LW · GW

Thanks for letting me know. I use [Calibre](https://calibre-ebook.com/about) to test the files, and it opens the file without complaint. What are you using (and on what platform) to read it?

Comment by AABoyles on Rationality: Abridged · 2018-01-09T14:07:42.619Z · LW · GW

My pleasure!

Comment by AABoyles on Roleplaying As Yourself · 2018-01-08T18:40:19.625Z · LW · GW

Comment by AABoyles on Rationality: Abridged · 2018-01-08T18:36:06.042Z · LW · GW

I have converted Rationality Abridged to EPUB and MOBI formats. The code to accomplish this is stored in this repository.

Comment by AABoyles on Rationality: Abridged · 2018-01-08T17:10:43.483Z · LW · GW

I'm on it!

Comment by AABoyles on 2017: A year in Science · 2018-01-03T15:02:09.318Z · LW · GW
- A roundworm has been uploaded to a Lego body (http://edition.cnn.com/2015/01/21/tech/mci-lego-worm/index.html)

This happened in 2015.

Comment by AABoyles on Taxonomy of technological resurrection - request for comment · 2017-12-04T16:09:55.111Z · LW · GW

The Wikipedia page on Resurrection contains some scattershot content (including links the H+Pedia article lacks) which might be useful to assimilate.

Comment by AABoyles on 11/07/2017 Development Update: LaTeX! · 2017-11-07T21:45:56.405Z · LW · GW

The frontend plugin worked when I submitted my comment, but I'm getting the "refresh to render LaTeX" message as well. Neither refreshing nor fresh browser sessions seem to yield LaTeX.

Comment by AABoyles on 11/07/2017 Development Update: LaTeX! · 2017-11-07T21:41:50.392Z · LW · GW

Excellent, thank you!

Comment by AABoyles on Toy model of the AI control problem: animated version · 2017-10-30T13:49:37.637Z · LW · GW

Sorry I'm two weeks late, but the text of the unlicense has been added. Thank you!

Comment by AABoyles on Toy model of the AI control problem: animated version · 2017-10-11T18:48:36.857Z · LW · GW

Nice post! I'd like to put a copy of the code on Github, but I don't see a license anywhere in the directory (or mentioned in the files). May I assume it's generally intended to be Open Source and I can do this?

Comment by AABoyles on LessWrong-Portable · 2017-09-23T01:57:57.463Z · LW · GW

Hey Ben, I just re-ran it and it worked very well. Thanks a lot!

Comment by AABoyles on Beta - First Impressions · 2017-09-22T18:52:52.679Z · LW · GW
• Many broken links in the Codex.

• Searching is fast and awesome and I love it.

• Initial pageload is very slow: nearly 2.5MB, requiring 14.17s until DOMContentLoaded, 1.5MB of which is a single file.

Comment by AABoyles on Heuristics for textbook selection · 2017-09-06T18:42:29.938Z · LW · GW

1a. If a professor is a suitable source for a recommendation, they've probably taught a course on the topic, and that course's syllabus may be available on the open web without emailing the professor.

Comment by AABoyles on Is life worth living? · 2017-09-05T19:17:19.341Z · LW · GW

If there were randomness such that you had some probability of a strongly positive event, would this incline you toward life?

Comment by AABoyles on Is life worth living? · 2017-09-05T19:14:02.932Z · LW · GW

Even if the probability was trivial?

Comment by AABoyles on Is life worth living? · 2017-08-30T21:40:50.421Z · LW · GW

The experiment specifies that the circumstances are all but literally indistinguishable:

I'll allow you to relive your life up to this moment exactly as it unfolded the first time -- that is, all the exact same experiences, life decisions, outcomes, etc.

If the sequence of events is "exactly" the same, then from your perspective it cannot be distinguished. If it could, then some event must have happened differently in the past to make it such that you were aware things were different, which violates the tenets of God's claim. In other words, the two timelines basically must be indistinguishable from your perspective.

Comment by AABoyles on Is life worth living? · 2017-08-30T16:49:02.570Z · LW · GW

Framing note: it's worth examining how intuitions change when you replace "God" with "Omega" and "relive" with "reset the deterministic simulation that computed".

Comment by AABoyles on Is life worth living? · 2017-08-30T16:33:53.199Z · LW · GW

There are many moments of my life that would give me pause about re-living them. However, were I much younger and aware that I was doomed to that set of experiences, I wouldn't opt to commit suicide. It therefore follows that my life thus far has been worth living, and that I should opt to re-live it, rather than be annihilated.

That said, it seems to me that these 'choices' are not an opportunity to make a choice at all. In this thought experiment, do we live out our second instance with the knowledge that it is a second instance and we are incapable of acting differently? If "God" makes the offer to bring me right up to that very moment in exactly the same way, all the while I'm aware that the decision is approaching, the experience of living will be qualitatively different than it was in ignorance of this fate, even if I am powerless to change it. However, the wording of option 2 seems to imply that this is not the case.

Assuming I do not have some sort of epistemic access to the fact that God has rewound my life, I will live my life exactly as I did in the first instance. To me, this is metaphysically indistinguishable from (and morally equivalent to) living it in the first place. However, there is an important difference. At the time God approaches me, the "choice" has become a lie: because God already rewound my life and let it play out again, I will behave in the same way (choose option 2) and God will annihilate me anyway! It is, after all, a stipulation of the rules God presented.

From my perspective, God is just going to annihilate me no matter what, so I'm indifferent between the two options.

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-03-01T16:09:22.151Z · LW · GW

I'm aware. Note that I did call it the "least likely possibility."

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:43:43.240Z · LW · GW

For example, maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips), it achieved its terminal goal with a tiny amount of energy (less than one star's worth), and halted.

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:42:11.280Z · LW · GW

This is actually a fairly healthy field of study. See, for example, Nonphotosynthetic Pigments as Potential Biosignatures.

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:28:15.053Z · LW · GW

...Think of the Federation's "Prime Directive" in Star Trek.

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:27:03.183Z · LW · GW

It may have discovered some property of physics which enabled it to expand more efficiently across alternate universes, rather than across space in any given universe. Thus it would be unlikely to colonize much of any universe (specifically, ours).

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:23:04.007Z · LW · GW

The superintelligence could have been written to value-load based on its calculations about an alien (to its creators) superintelligence (what Bostrom refers to as the "Hail Mary" approach). This could cause it to value the natural development of alien biology enough to actively hide its activities from us.

Comment by AABoyles on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-26T18:08:01.267Z · LW · GW

The most obvious and least likely possibility is that the superintelligence hasn't had enough time to colonize the galaxy (i.e. it was created very recently).

Comment by AABoyles on The map of global catastrophic risks connected with biological weapons and genetic engineering · 2016-02-22T14:13:10.724Z · LW · GW

The link to the "Strategic Terrorism" paper is malformed. The correct URL is here.

Comment by AABoyles on Omega's Idiot Brother, Epsilon · 2015-11-25T21:12:07.459Z · LW · GW

To take the obvious approach, let's calculate Expected Values for both strategies. To start, let's try two-boxing:

Two-boxing: (80/8000 × $1,000) + (7920/8000 × $1,001,000) = $991,000

Not bad. OK, how about one-boxing?

One-boxing: (3996/4000 × $1,000,000) + (4/4000 × $0) = $999,000

So one-boxing is the rational strategy (assuming you're seeking to maximize the amount of money you get).
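The expected-value arithmetic above, reproduced as a quick check (probabilities exactly as quoted in the comment):

```python
# Expected value of each strategy, in dollars.
two_box = (80 / 8000) * 1_000 + (7920 / 8000) * 1_001_000
one_box = (3996 / 4000) * 1_000_000 + (4 / 4000) * 0
print(round(two_box), round(one_box))  # 991000 999000
```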

However, this game has two interesting properties which, together, would make me consider two-boxing under certain exogenous circumstances. The first is that the difference between the two strategies is very small: only $8,000. If I have $990-odd thousand, I'm not going to be hung up on the last $8,000. In other words, money has a diminishing marginal utility. As a corollary to this, two-boxing guarantees that the player receives at least $1,000, where one-boxing could result in the player receiving nothing. Again, because money has a diminishing marginal utility, securing the first $1,000 may be worth the risk of not winning the million. If, for example, I needed a sum of money less than $1,000 to keep myself alive (with certainty), I would two-box in a heartbeat.

All that said, I would almost certainly one-box.