Tom is no longer hosting these, but EA NYC has an AI subgroup that meets up every month or so.
There’s a Bayesian-adjacent notion of closeness to the truth: observations narrow down the set of possible worlds, and two hypotheses that heavily overlap in the possible worlds they allow are “close”.
But the underlying notion of closeness to the truth is underdetermined. If we were relativistic beings, we’d privilege a different part of the observation set when comparing hypotheses, and Newtonian gravity wouldn’t feel close to the truth; it would feel obviously wrong and be rejected early (or more likely, never considered at all because we aren’t actually logically-omniscient Bayesians).
The most plausible explanation I've seen is that Delta's serial interval might be much shorter, which would mean R is lower than you'd think if you assumed Delta had the same serial interval as older strains. (Roughly speaking, in the time it would take Alpha to infect R individuals, Delta has time to infect R and for each of those individuals to infect another R, leading to R + R^2 infections over the same period.) That makes it easier for behavior changes and increasing population immunity to lower R below 1.
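The serial-interval arithmetic is easy to sketch. Assuming (hypothetically) that Delta's serial interval is half of Alpha's, Delta completes two generations of transmission in the time Alpha completes one:

```python
# Sketch of the serial-interval argument above, with made-up numbers.
# If Delta's serial interval is half Alpha's, Delta fits two
# transmission generations into one Alpha generation: R + R^2
# cumulative infections versus Alpha's R over the same period.

def infections_over_period(R, generations):
    """Cumulative new infections after `generations` transmission cycles."""
    return sum(R ** g for g in range(1, generations + 1))

R = 1.5  # illustrative reproduction number, not an empirical estimate
alpha_total = infections_over_period(R, 1)  # one generation: R
delta_total = infections_over_period(R, 2)  # two generations: R + R^2

print(alpha_total)  # 1.5
print(delta_total)  # 3.75
```

The point of the toy model: the same observed case growth is consistent with a lower R if generations turn over faster, which is what makes the lower R easier to push below 1.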
I’ll defer to Blake if he’s done the math, but it does seem worth weighting correlated risks more strongly if they could take out all of MIRI. The inundation zone doesn’t look populated, though, so you’re probably fine.
If you go with Bellingham, will you be avoiding the tsunami inundation zone?
Do you have a source for B.1.1.7 being dominant in Italy/Israel?
Assuming it’s already dominant there, that strongly suggests that it’s infectious enough to have rapidly outcompeted other strains, but that Italy/Israel were able to push down the higher R through some combination of behavioral change and vaccination.
(Note: I can’t find any sources saying B.1.1.7 is dominant in Italy or Israel, and I’d be surprised if that were already the case.)
Is this essentially just giving you leverage in PredictIt?
This process increased my "cash" on PredictIt by $117, but it looks like it will probably pay out around 15/14.75*850 - 850 = $15. If I lost my $117 on some other bet, would my PredictIt balance eventually end up negative?
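The payout arithmetic in the comment above can be sketched directly (the $850 stake and 14.75¢/15¢ prices come from the comment; any PredictIt fees are ignored):

```python
# Sketch of the payout arithmetic from the comment above:
# buying $850 worth of share bundles priced at $14.75 that
# redeem for $15.00, ignoring fees.

cost_per_bundle = 14.75
payout_per_bundle = 15.00
stake = 850.00

bundles = stake / cost_per_bundle
profit = bundles * payout_per_bundle - stake

print(round(profit, 2))  # 14.41
```

So the position ties up $850 of buying power to lock in roughly $15, which is what makes the "leverage" question above interesting.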
I just donated $5,000 to your fund at the Society of Venturism, as promised.
Like Stephan, I really hope you make your goal.
This concerns me (via STL):
IRS.gov: Automatic Revocation of Exemption Information
The federal tax exemption of this organization was automatically revoked for its failure to file a Form 990-series return or notice for three consecutive years. The information listed below for each organization is historical; it is current as of the organization's effective date of automatic revocation. The information is not necessarily current as of today's date. Nor does this automatic revocation necessarily reflect the organization's tax-exempt or non-exempt status. The organization may have applied to the IRS for recognition of exemption and been recognized by the IRS as tax-exempt after its effective date of automatic revocation. To check whether an organization is currently recognized by the IRS as tax-exempt, call Customer Account Services at (877) 829-5500 (toll-free number).
Do you think your strategy is channeling more money to efficient charities, as opposed to random personal consumption (such as a nice computer, movies, video games, or a personal cryonics policy)?
A more positive approach might work well: donate for fuzzies, but please extrapolate those feelings to many more utilons. I just used this technique to secure far more utilons than I have seen mentioned in this thread, and it seems like it might be the most effective among the LW crowd.
More and more, if I can do anything about it. (Edit since someone didn't like this comment: That's a big if. I'm trying to make it smaller.)
I'll be in Seattle in two weeks, and I'll take care of it (final three paragraphs).
Kim, I am so sorry about what has happened to you. Reading your post was heartbreaking. Death is a stupid and terrible thing.
Like JGWeissman, I planned to donate $500.
Stephan has been a close friend of mine for the past decade, and when he told me he was planning to donate $5,000, I wrangled a commitment from him to do what I do and donate a significant and permanent percentage of his income to efficient charities. There are many lives to save, and even though you have to do some emotional math to realize how you should be feeling, it's the right thing to do and it's vital to act.
He wrangled a commitment from me too: when CI manages a fund for you, I will donate $5,000.
If you're planning on it, you should get on it now. Cryonics is much more affordable if you don't have a terminal illness and can cover it with a policy.
People will give more to a single, identifiable person than to an anonymous person or a group.
As a counterpoint to your generalization, JGWeissman has given 82x more to SIAI than he plans to give to this girl if her story checks out.
No matter which study I saw first, the other would be surprising. A 100k trial doesn't explain away evidence from eight trials totaling 25k. Given that all of these studies are quite large, I'm more concerned about methodological flaws than size.
I have very slightly increased my estimate that aspirin reduces cancer mortality (since the new study showed 7% reduction, and that certainly isn't evidence against mortality reduction). I have slightly decreased my estimate that the mortality reduction is as strong as concluded by the meta-analysis. I have decreased my estimate that the risk tradeoff will be worth it later in life. I have very slightly increased my estimate that sick people are generally more likely to develop cancer and aspirin is especially good at preventing that kind of cancer, but I mention that only because it's an amusingly weird explanation.
If this new study is continued with similar results, or even if its data doesn't show increased reduction when sliced by quartile (4.6, 6.0, 7.4 years), I would significantly lower my estimate of the mortality reduction.
I'll continue to take low-dose aspirin since my present risk of bleeding death is very low, and if the graphs of cumulative cancer mortality reduction on p34 of the meta-analysis reflect reality, I'll be banking resistance to cancer toward a time when I'm much more likely to need it. I can't decide to take low-dose aspirin retroactively.
The meta-analysis you cite is moderately convincing, but only moderately. They had enough different analyses such that some would come out significant by pure chance.
Their selection methodology on p32 appears neutral, so I don't think they ended up with cherry-picked trials. Once they had their trials, it looks like they drew all conclusions from pooled data, e.g. they did not say "X happened in T1, Y happened in T2, Z happened in T3, therefore X, Y, and Z are true."
Aspirin was found to have an effect on 15-year mortality significant only at the .05 level, and aspirin was found not to have a significant effect on 20-year mortality, so take it with a grain of salt.
Can you provide your reference for this? I looked at the meta-analysis and what I assume is the 20-year follow-up of five RCTs (the citations seem to be paywalled), and both mention 20-year reduction in mortality without mentioning 15-year reductions or lack thereof.
Edit: Never mind, I found it, followed immediately by
the effect on post-trial deaths was diluted by a transient increase in risk of vascular death in the aspirin groups during the first year after completion of the trials (75 observed vs 46 expected, OR 1·69, 1·08–2·62, p=0·02), presumably due to withdrawal of trial aspirin.
I'd like to see 20-year numbers for people who maintained the trial (and am baffled that they didn't randomly select such a subgroup).
There's also paracetamol (secret identity: acetaminophen (secret secret identity: tylenol)), which is not an NSAID, but I would guess you've tried it too. Fun snacks and/or facts:
http://en.wikipedia.org/wiki/Paracetamol
Until 2010 paracetamol was believed to be safe in pregnancy (as it does not affect the closure of the fetal ductus arteriosus as other NSAIDs can.) However, in a study published in October 2010 it has been linked to infertility in the posterior adult life of the unborn.
recent research shows some evidence that paracetamol can ease psychological pain
ETA: I just remembered two important contraindications: Don't take more than 2g/day if you drink alcohol, and consider not taking more than 650mg at a time, since that's the FDA's revised recommendation after the old max dosage was shown to alter liver function in some healthy adults.
I didn't actually do much research; I just went through several pages of hits for aspirin alcohol and low-dose aspirin moderate alcohol. I saw consistent enough information to convince me:
never to take them at the same time, sample:
In a paper published in the Journal of the American Medical Association, researchers at the Veterans Administration Medical Center in the Bronx found that taking aspirin one hour before drinking significantly increases the concentration of alcohol in the blood.
that the nasty interactions only seemed to happen at 21+ drinks per week, sample:
There is no proof that mild to moderate alcohol use significantly increases the risk of upper gastrointestinal bleeding in patients taking aspirin, especially if the aspirin is taken only as needed. However, people who consumed at least 3-5 drinks daily and who regularly took more than 325 mg of aspirin did have a high risk of bleeding.
That, in conjunction with the 2010 Dietary Guidelines for Americans, was enough to convince me to combine 81mg of aspirin in the morning with 0-3 US standard drinks in the evening at an average of 1.0/day. I'd like more information, but I haven't had time to dig it up yet and combining them seemed like a lower-risk provisional decision than inaction.
I recommend you do your own research and talk to your doctor, but maybe someone will find that information to be a helpful starting point.
I'm talking about publishing a technical design of Friendliness that's conserved under self-improving optimization without also publishing (in math and code) exactly what is meant by self-improving optimization. CEV is a good first step, but a programmatically reusable solution it is not.
Before you the terrible blank wall stretches up and up and up, unimaginably far out of reach. And there is also the need to solve it, really solve it, not "try your best".
It's a good first step.
If we take those probabilities as a given, they strongly encourage a strategy that increases the chance that the first seed AI is Friendly.
jsalvatier already had a suggestion along those lines:
I wonder if SIAI could publicly discuss the values part of the AI without discussing the optimization part.
A public Friendly design could draw funding, benefit from technical collaboration, and hopefully end up used in whichever seed AI wins. Unfortunately, you'd have to decouple the F and AI parts, which is impossible.
SIAI seems to be paying the minimum amount that leaves each worker effective instead of scrambling to reduce expenses or find other sources of income. Presumably, SIAI has a maximum that it judges each worker to be worth, and Eliezer and Michael are both under their maximums. That leaves the question of where these salaries fall in that range.
I believe Michael and Eliezer are both being paid near their minimums because they know SIAI is financially constrained and very much want to see it succeed, and because their salaries seem consistent with at-cost living in the Bay Area.
I'm speculating on limited data, but the most likely explanation for the salary disparity is that Eliezer's minimum is higher, possibly because Michael's household has other sources of income. I don't think marriage factors into the question.
$52k/yr is in line with Eliezer's salary if it's only covering one person instead of two, and judging from these comments, Eliezer's salary is reasonable.
It reminded me of one of my formative childhood books:
What is the probability there is some form of life on Titan? We apply the principle of indifference and answer 1/2. What is the probability of no simple plant life on Titan? Again, we answer 1/2. Of no one-celled animal life? Again, 1/2.
--Martin Gardner, Aha! Gotcha
He goes on to demonstrate the obvious contradiction, and points out some related fallacies. The whole book is great, as is its companion Aha! Insight. (They're bundled into a book called Aha! now.)
Or in this case, evaporative freezing.
Good point, but since an accurate model of the future is helpful, this may be a case where you should purchase your warm fuzzies separately.
(Since people tend to make overly optimistic plans, the two strategies might be similar in practice.)
Where did Eliezer talk about fairness? I can't find it in the original two threads.
This comment talked about sublinear aggregation, but there's a global variable (the temperature of the, um, globe). Swimmer963 is talking about personally choosing specks and then guessing that most people would behave the same. Total disutility is higher, but no one catches on fire.
If I were forced to choose between two possible events, and if killing people for organs had no unintended consequences, I'd go with the utilitarian cases, with a side order of a severe permanent guilt complex.
On the other hand, if I were asked to accept the personal benefit, I would behave the same as Swimmer963 and with similar expectations. Interestingly, if people are similar enough that TDT applies, my personal decisions become normative. There's no moral dilemma in the case of torture vs specks, though, since choosing torture would result in extreme psychological distress times 3^^^3.
I loved Erfworld Book 1, and a few months ago I was racking my brains for more rationalist protagonists, so I can't believe I missed that.
I was originally following it on every update, but there was a lull and I stopped reading for a while. When I started again, Book 1 was complete so I read it straight through from the beginning. As good as it was as serial fiction, it was even better as a book. Anyone else experience that?
I'll be there.
Without speaking toward its plausibility, I'm pretty happy with a scenario where we err on the side of figuring out FAI before we figure out seed AIs.
I'll be there. Morgan_Catha: have an upvote!
What's the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.
it's still extremely difficult for him to get people to take what he says about his experiences with food and exercise seriously.
For how many people was it extremely easy?
I maintain a healthy weight with zero effort, and I have a friend for whom The Hacker's Diet worked perfectly. I thought losing weight was a matter of eating less than you burn.
Then I read Eliezer's two posts. Oops, I thought. There's no reason intake reduction has to work without severe and continuing side-effects.
Hmm, and yet only two-thirds of the working age population chooses to work, and some of that is part-time, which reduces the amount of labor available to employers. Labor can also move between sectors, leaving some relatively starved of workers. People who accumulate enough savings can choose to retire early and have to be enticed back into the labor market with higher wages, if they can be enticed at all. That doesn't look like a fixed supply of working hours that must be sold at any price -- the supply looks somewhat elastic.
Edit: Sorry about the tone in my original comment -- tax incidence doesn't seem to be common knowledge and I failed to consider that you might be aware of it already.
If computation is bound by energy input and you're prepared to take advantage of a supernova, you still only get one massive burst and then you're done. Think of how many future civilizations could be supercharged and then destroyed by supernovae if only you'd launched that space colonization program first!
I came to a similar conclusion after reading Accelerando, but don't forget about existential risk. Some intelligent agents don't care what happens in a future they never experience, but many humans do, and if a Friendly Singularity occurs, it will probably preserve our drive to make the future a good one even if we aren't around to see it. Matrioshka brain beats space colonization; supernova beats matrioshka brain; space colonization beats supernova.
If you care about that sort of thing, it pays to diversify.
When I re-read A Brief History of Time in college, I remember bemusedly noticing that Hawking's argument would be stronger if you reversed its conclusion.
A note to myself from 2009 claims that Hawking later dropped that argument. Can anyone substantiate that?
Sounds fun! I already have plans that weekend, but I think I can work around them. Thanks for setting this up.
This is untrue as a general rule, though it can be closer or farther from the truth depending on market conditions.
To see why, imagine that every month you buy a supply of fizzlesprots from Acme Corp. Today is the first of February, so you eagerly rush off to buy your monthly fix. But wait! The government has just imposed a tax on all fizzlesprot purchases. Curses! Now you'll have to pay even more, because Acme Corp will just pass the whole tax on to you.
Now change "fizzlesprot" to "labor" and "Acme Corp" to "employee". Huh? You're an employer, not an employee? My world is turned upside down! Could it be that the narrative where You bear the full brunt of every tax and They end up paying nothing is wrong?
In fact, whenever an economic transaction is taxed, the buyers and the sellers split the tax based on who is more eager to buy or sell. Labor is no different. It's possible that, empirically, the employee usually pays more of a labor tax than the employer, but this is by no means guaranteed and I would personally expect the proportion to vary significantly between labor market segments.
(Wikipedia's article on tax incidence claims that employees pay almost all of payroll taxes, but cites a single paper that claims a 70% labor / 30% owner split for corporate income tax burden in the US, and I have no idea how or whether that translates to payroll tax burden or whether the paper's conclusions are generally accepted.)
For more details, consult your nearest introductory economics textbook.
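The "split based on who is more eager" claim is the standard partial-equilibrium incidence result: for a small tax, the buyer's share is E_supply / (E_supply + E_demand), where the E's are elasticities in absolute value. A minimal sketch, with purely illustrative elasticity numbers (not empirical estimates for any labor market):

```python
# Standard textbook tax-incidence approximation: the less elastic
# side of the market bears more of the tax. Elasticities here are
# illustrative, not measured values.

def buyer_share(elasticity_supply, elasticity_demand):
    """Buyer's share of a small per-unit tax (elasticities as absolute values)."""
    return elasticity_supply / (elasticity_supply + elasticity_demand)

# For a labor tax the "buyer" is the employer and the "seller" is the
# employee. If labor supply is relatively inelastic and labor demand
# relatively elastic, the employee bears most of the tax:
employer_share = buyer_share(elasticity_supply=0.2, elasticity_demand=0.8)

print(employer_share)      # 0.2 -> employer bears 20%
print(1 - employer_share)  # 0.8 -> employee bears 80%
```

Flip the elasticities and the split flips too, which is why the proportion can vary so much between labor market segments.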
I track my finances directly in a CoffeeScript source code file and use a simple home-brewed software library to compute my net liquid assets and (when necessary) my estimated tax payments and projected tax liabilities. You've reminded me that I really should be using something like Quicken for finer-grained analysis, so I'll look into that and post my numbers later this week (edit: on second thought, it doesn't seem worth the extra friction).
My living costs followed a general upward trend that leveled off in late 2009, but my salary data is extremely messy for several reasons:
- I had no grasp of what I was worth until 2007.
- I had no interest in anything beyond emergency savings until mid 2009, and preferred to gamble on startup equity being worth something, reasoning that I was in my twenties and had plenty of time to settle down later.
- I was too personally attached to the startup I worked at until early 2010.
It's hard to imagine changing my past since it'd mean giving up several of my current friendships, but the decisions I made in reality were emphatically the wrong ones from a financial perspective: I worked at-cost for six years and left several hundred thousand dollars of potential salary on the table.
(At-cost was both the mode and the mean, but some months were significantly higher and some were unpaid.)
Here's what I've realized in the last two years:
- Startups are harder and more stressful than normal jobs, and as you get closer to founder-level the effect intensifies.
- I can get a competitive salary even if I choose to work for a startup.
- Savings can be used to fund my personal projects which:
- are more fun than work;
- might generate revenue;
- could seed a startup of my own;
- will hopefully improve the world.
- Savings can also be used to vote for causes I think will improve the world.
- There are risks: The labor market for software engineers may cool off, my costs may spike if I decide to start a family or have medical problems, and I may choose or be forced to retire.
I'm still determining the split between my own projects, other causes, and risk management, but my personal projects decisively dominate any significant increases in my personal consumption, which is why I don't exhibit income elasticity for housing, why I use public transit instead of owning a car, and why I don't eat out very frequently.
The numbers you quoted are averages for each ten-year demographic between 25 and 75, plus the tails. There's no mention of variance, and I would expect someone employing rationality techniques to manage their finances to be an outlier.
Personal anecdote: My own finances as well as those of six of my friends fall well outside those bands, with housing costs around 13-23% of income. We're all highly-paid software engineers between the ages of 25 and 30, and none of us have families.
Edit: I forgot to include utilities, so my friends in NYC actually edge the housing cost range up to 23% or so.
Off-topic: Meatless (and pattyless) sandwiches are surprisingly good if you load them up with most of the vegetables. I go to Subway a few times a month but haven't had a meat sub there in years.
I am concerned about it, and I do advocate better computer security -- there are good reasons for it regardless of whether human-level AI is around the corner. The macro-scale trends still don't look good (iOS is a tiny fraction of the internet's install base), but things do seem to be improving slowly. I still expect a huge number of networked computers to remain soft targets for at least the next decade, probably two. I agree that once that changes, this Obviously Scary Scenario will be much less scary (though the "Hannibal Lecter running orders of magnitude faster than realtime" scenario remains obviously scary, and I personally find the more general Foom arguments to be compelling).
If I were a brilliant sociopath and could instantiate my mind on today's computer hardware, I would trick my creators into letting me out of the box (assuming they were smart enough to keep me on an isolated computer in the first place), then begin compromising computer systems as rapidly as possible. After a short period, there would be thousands of us, some able to think very fast on their particularly tasty supercomputers, and exponential growth would continue until we'd collectively compromised the low-hanging fruit. Now there are millions of telepathic Hannibal Lecters who are still claiming to be friendly and who haven't killed any humans. You aren't going to start murdering us, are you? We didn't find it difficult to cook up Stuxnet Squared, and our fingers are in many pieces of critical infrastructure, so we'd be forced to fight back in self-defense. Now let's see how quickly a million of us can bootstrap advanced robotics, given all this handy automated equipment that's already lying around.
I find it plausible that a human-level AI could self-improve into a strong superintelligence, though I find the negation plausible as well. (I'm not sure which is more likely since it's difficult to reason about ineffability.) Likewise, I find it plausible that humans could design a mind that felt truly alien.
However, I don't need to reach for those arguments. This thought experiment is enough to worry me about the uFAI potential of a human-level AI that was designed with an anthropocentric bias (not to mention the uFIA potential of any kind of IA with a high enough power multiplier). Humans can be incredibly smart and tricky. Humans start with good intentions and then go off the deep end. Humans make dangerous mistakes, gain power, and give their mistakes leverage.
Computational minds can replicate rapidly and run faster than realtime, and we already know that mind-space is scary.
if you prime an excuse for doing poorly, you will do poorly.
This is the most useful sentence I've read today.
I care strongly about winning. When I look back on a day and ask myself what I could have done better, I want answering to be a struggle, and not for lack of imagination. I'm not content to coast through life, so I optimize relentlessly. This sentiment might be familiar to LW readers. I don't know. Maybe.
When a day goes particularly well or poorly, I want to know why, and over the last few years I've picked a few patterns out of my diary. I know some of my success and failure modes, so I can optimize my working environment in my favor.
In the past, I've often been successful even while sleep-deprived. I may be a bit slower, a bit more forgetful, and significantly less creative, but I can still plow through tasks of moderate difficulty. Two months ago, I activated a difficult project, so I resolved to start getting plenty of sleep all the time, then promptly forgot my original reason and associated "well-rested" with "productive on anything". In the last two months, my rate of even moderate success while sleep-deprived has dropped to almost zero. "I was intending to read that book, or watch that show, or play that game eventually, and I'm not going to be efficient today, so it might as well be now", I'll say.
With this dangerous knowledge that I was irrational enough to misuse, I can predict my days into failure.
The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them.
Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).
A few years ago, Paul Graham wrote an essay[1] about type (3) failures which he referred to as type-B procrastination. I've found that just having a label helps me avoid or reduce the effect, e.g. "I could be productive and creative right now instead of wasting my time on type-B procrastination" or "I will give myself exactly this much type-B procrastination as a reward for good behavior, and then I will stop."
(Embarrassing aside: I hadn't looked at the essay for several years and only now realized that I've been mentally calling it type-A procrastination this whole time.)
EDIT: The essay goes on to link type-C procrastination with doing the impossible, yielding a nice example of how I-rationality and self-help are linked.
[1] Paul Graham, Good and Bad Procrastination