Yes, right, so to continue this line of thought: since more diversified means less risk, Gwern would want to buy VTSAX if he needs to spend that money over a relatively short time horizon. If that isn't the reason, though: from what I gathered from a personal finance book I read years ago, funds tracking the S&P 500 have always outperformed funds tracking the entire U.S. equity market over long periods (is this actually true?). So I was curious why Gwern made this choice, in case my hypothesis (that he is investing money he may need in the shorter term) was wrong and there are actually good reasons to buy funds tracking the total U.S. equity market even when saving long term.
Once you have dealt with signaling, one other huge problem remains. I have met just one person IRL who actually invests (my brother). Everyone else is unaware that safe investment options exist, and they just keep everything in a bank account.
Also, in my experience, middle-aged and older people tend to downplay their wealth and not brag about it (why? Not entirely sure). Younger people instead seem more braggy... but most young people aren't very wealthy. This is just my experience though. I wonder if it is actually common.
Italy. A household of 5 people, in a city with around 1k cases per day for a few months. One person goes to school, sees friends, and invites friends into the house. Another travels abroad or within the country for a few days every 10 days or so and doesn't always get tested on returning; when he is in the house he also invites his girlfriend over, eats out, sees friends, etc. On the microCOVID site I entered a 5-person household with 10 close contacts, for lack of better options. Does that sound reasonable?
Edit: Italy's vaccination rate sucks. Not gonna see a vaccine for me or anyone in the house with risky behavior till 2022
My risk comes out between a 19% and an 82% probability of infection over the next six months, and that's if I always remain in the house. To avoid that, I'd have to put my life on hold and get a full-time job I dislike. And people say I'm exaggerating and crazy, both IRL and online. Long-term consequences of Covid are what worry me the most. I honestly don't know how to deal with this; genuinely asking.
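For reference, here is a minimal sketch of how a fixed per-week infection risk compounds into a six-month probability, assuming independent weeks (the usual way microCOVID-style estimates are aggregated). The weekly values below are illustrative numbers chosen to roughly reproduce the 19% and 82% figures, not outputs of the microCOVID calculator itself.

```python
def cumulative_risk(weekly_risk: float, weeks: int = 26) -> float:
    """Probability of at least one infection over `weeks` independent weeks."""
    return 1.0 - (1.0 - weekly_risk) ** weeks

# A ~0.8% weekly risk compounds to roughly 19% over 26 weeks;
# a ~6.4% weekly risk compounds to roughly 82%.
print(round(cumulative_risk(0.008), 2))  # ~0.19
print(round(cumulative_risk(0.064), 2))  # ~0.82
```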
Level 10's more ground-level explanation should be something like: "If I say 'There's not a lion across the river,' I get downvoted, and if I say 'There's a lion across the river,' I get upvoted?"
I'm trying not to use the expression "trying to mind control you". But stated this way, it's not that different from level 3. Maybe level 10 is level 3 plus an explicit selection mechanism such as upvotes and downvotes. But one could argue that such incentives exist regardless of how explicit they are.
Does the fact that Alcor is co-owner create difficulties if you want to change cryonics provider at some point? Example: new tech (e.g. aldehyde stabilized cryopreservation) gets offered somewhere else but not at Alcor, so you want to change.
To become an Associate Member use our Associate Membership Form to send a check, money order, or credit card information ($5 per month or $60 per year) to Alcor Life Extension Foundation, 7895 E. Acoma Dr., Suite 110, Scottsdale, Arizona 85260, or call Marji Klima at (480) 905-1906 ext. 101 with your credit card information.
Or you can pay online via PayPal using the Join button below (quarterly option is not available this way). Please note that this will set up automatic recurring charges of either $5 per month or $60 per year. You do not need a PayPal account to make a payment to Alcor (however, your browser will need to accept a harmless PayPal cookie or you will get an error message). If you want to receive Cryonics magazine, be sure to include your name and mailing address.
From what is written, I gather that setting up the PayPal payment is enough to become an Associate Member and that filling out the form is not necessary. Is that correct?
Btw, thanks A LOT for this sequence. I currently live in Italy, my home country, but this is still HUGELY useful for me. I plan to move to the US eventually, and even if I decide to postpone actually signing up until I'm there and have an actual income, this sequence will guide my decisions going forward.
This reminds me of the sentiment Eliezer expresses here:
When someone politely presents themselves with a careful argument, does your cultural software tell you that you're supposed to listen and make a careful response, or make fun of the other person and then laugh about how they're upset? What about when your own brain tries to generate a careful argument? Does your cultural milieu give you any examples of people showing how to really care deeply about something (i.e. debate consequences of paths and hew hard to the best one), or is everything you see just people competing to be loud in their identification?
I really, really love this initiative. Reading LW in book form is just better for me. Online I get distracted and read things as procrastination instead of deliberate effort. I've read the first two books of the Sequences and HPMOR on Kindle, and the experience is not even comparable with reading in a browser.
My posts here are basically all evaluations or considerations useful for cost-effectiveness evaluations. They are crossposted from the EA Forum. The most interesting ones for your purpose are probably:
I don't know if this has already been discussed, but why are daily deaths in every European country a tenth or less of their lockdown-era levels, while daily cases are two or three times higher? In the rest of the world, daily deaths still seem to track daily cases, except in the US and, to a lesser extent, Japan, where daily deaths are about half of what they were in May (still not as extreme a divergence as in Europe). I may be unaware of other countries where this is the case.
This is an interesting comment, I think you bring up good points.
One reason I didn't focus much on crowdfunding is that the money that goes there is not really LEAF's, and crowdfunding is just one of their many focuses. If an EA gives money to LEAF (through the recurring campaign or through a grant, for example), that money will probably not go to a crowdfunding campaign and would probably not make much of a difference in how they decide whom to crowdfund; it would go to their other projects. When donating to a campaign, you donate to the specific org that benefits from the campaign's project, not to LEAF. Unlike orgs such as Open Phil, LEAF doesn't make grants directly; it only organizes campaigns so that people can bring money to a project.
You probably already knew everything in the paragraph above, so: I think your point is correct. Exactly where they direct money by choosing whom to finance matters for ascertaining whether the research that wouldn't otherwise have happened is actually making an impact (is it making an impact at all? Given the characteristics of this field, yes). A plus from my POV is that they seem internally sympathetic to SENS' approach (this is obvious from their introductory articles), although they have also financed different approaches (one campaign is for a project involving NMN supplementation led by David Sinclair, a couple of others are on biomarkers...). But I admit that's not much, and a more detailed look would be ideal. For now, if you are more concerned about the science than about YouTube/internet advocacy, policy influencing, etc., it is probably best to donate directly to orgs doing specific scientific research.
Since I couldn't evaluate much by looking at crowdfunding alone, I followed the methodology of trying to gauge the ratio of donations to money brought into the field, which I've seen used a lot for evaluating advocacy charities inside EA.
Maybe we'll be able to ascertain their decision-making regarding crowdfunding better (although probably not a lot better) after the interview, since the first question is about that.
In the past I've donated to them and supported them in Project4Awesome, but I'm not inside the org. Basically, this is a post trying to evaluate it from an EA standpoint, similar to what I did for SENS. Their budget should consist of the recurring campaign and single donations (which I don't expect to amount to much); I hope the interview will make this clearer.
Edit: the post is probably not very on topic for LW, but since I crossposted my analysis of aging research from an EA standpoint I wanted to put this here too for completeness.
I'm inclined to think that if junk media (social media, news) were only useful for news, completely disregarding them would probably be the best action. Considering every other use, though, I'm inclined to think the optimum is a compromise of at most 20 minutes per day, although I'm not sure that's possible without getting addicted. If it isn't, it might just be best to get away entirely, but I'm unsure.
There haven't yet been historical events that prompted me to react earlier than everyone else (not even Covid: my city has never been the center of a big enough outbreak, and I just abided by the lockdown rules. I can imagine an earlier reaction could have been better had I lived in another country/city). The historical events it is important to react to early are probably the ones that would put me/my family/everyone else around us in relatively sudden danger: war, political instability, coups, dangerous diseases, and probably other things. Things that happened just a handful of times in now-developed countries during the twentieth century (maybe they won't happen again, but...).
I wouldn't have been this nervous 5 years ago, but it seems to me that the world is socially evolving faster now, and I think it's possible not to react fast enough to a historical event. But maybe I've just become more anxious? One other thing: many times my life has changed thanks to great fucking information I found while farting around on the Internet, but at the same time this comes with all the drawbacks Isusr rightly identified. There is also the feeling that I have witnessed society and even art evolve by staying consistently online, and stopping feels like jumping off a moving train. I'm not sure how to act.
Do you keep up with news of any kind? If so, how? Aren't you afraid of missing something important you should act upon (whether good news, bad news, or not even news but simply information)? Not necessarily politics or general news, of course.
Comply's foam tips: they replace the more common tips for in-ear earphones and isolate you from the outside world much more than noise canceling. It's basically having earphones + earplugs, thanks to the foam. If you live in a noisy environment they may radically change your life for the better. You need to learn the correct procedure to fit them properly (it's easy, you can find videos on how to do it for earplugs. It is the same procedure). I recommend them for watching movies or reading/studying while listening to nature sounds.
A gaming console: I bought a PS4 a couple of years ago and it has been one of the best decisions in the last four years. This is valid if you already plan to allocate some time to gaming and if you manage not to get addicted to it.
A beach chair, or a big chair with the same inclination, to read more comfortably and not fall asleep (which is the problem if you read in bed). I can't recommend a specific chair, because I don't know exactly where you can find my own.
Tablet: much better for reading anything on the internet. I find it strains my eyes much less. I can't recommend a particular one... I owned two and both were great.
A Kindle e-reader: much cheaper to read, easier due to less weight and having many books in the same place. As a result you will probably read more.
A big height-adjustable desk paired with a big height-adjustable chair.
I really like this post. I think it is probably also relevant from an Effective Altruism standpoint (you identify a tractable and neglected approach which might have a big impact). I think you should probably crosspost this on the EA Forum, and think about if your other articles on the topic are apt to be published there. What do you think?
If you read my profile both here and on the EA Forum you'll find a lot of articles in which I'm trying to evaluate aging research. I'm making this suggestion because I think you are adding useful pieces.
He would probably say that he doesn't care (he works for others, not for himself) and that alcohol doesn't affect him, since people have already kind of pointed this out and those were his answers. But tbh, this whole thing is not that interesting to me, and I would classify it as weak evidence about what he believes or doesn't. Usually it is mainly gossip.
Wow, ok, thank you. This is useful information. I didn't take your ADHD/ADD hypothesis seriously to be honest, but now that you specify the nature of the test to diagnose it, it makes much more sense. I will research more and get tested.
No, my experience is just the gameplay videos I have seen. From those, it seems very easy to communicate (via voice chat) and interact with the environments, which are also very customizable. I don't know anything beyond this.
Regarding "If a survey is performed, most people in the United States will say that curing aging is undesirable. 85%": one similar survey has already been done. The result depends on whether you specify that an unlimited lifespan would be spent in good health rather than in increasing frailty. If you do, more than 40% of respondents opt for an unlimited lifespan; otherwise, only 1% do. https://www.frontiersin.org/articles/10.3389/fgene.2015.00353/full
I know this conversation is very old and Holden's outlook on the subject has since matured (see Open Philanthropy's grants to aging research and Open Philanthropy's analysis of aging research, though the latter is still dismissive of SENS), but I still want to point out what I think were the mistakes he made here.
Holden didn't seem to grasp how different in scope SENS' plan is from the kind of research that a single brilliant researcher can bring forward in the traditional way. SENS requires a plethora of different therapies whose development would need an entire NIA to itself... and even that would cover only the first phases of research, not clinical trials. I don't get how he could be confused about this. Quoting Holden:
You [Aubrey] state that you have a high-expected-value plan that the academic world can't recognize the value of because of shortcomings such as "balkanisation" and risk aversion. I believe it may be true that the academic world has such problems to a degree; however, I also believe that there are a lot of extremely talented people in academia and that they often (though not necessarily always) find ways to move forward on promising work.
Also, I'm confused about why Holden put so much weight on Dario Amodei's opinion over Aubrey's. Dario is an AI researcher.
[...] And as my summary of our conversation shows, he [Dario] acknowledges that the world of biomedical research may have certain suboptimal incentives, but didn't seem to think that these issues are leaving specific, visible outstanding research programs on the table the way that your email implies. [...]
Thankfully, the present-day, Open Phil-era Holden obviously no longer thinks this is the case.
Any question such that a correct answer to it would very clearly benefit both humanity and the Oracle. Even if the Oracle has preferences we can't fully guess, we can probably still say that such questions could concern the survival of both humanity and the Oracle, or the survival of only the Oracle or its values. This is because even if we don't know exactly what the Oracle is optimising for, we can guess that under the vast majority of its possible preferences it will not want to be destroyed. So it will give humanity more power to protect both, or only the Oracle.
Example 1: let's say we discover the location of an alien civilisation and we want to minimise the chance of it destroying our planet. Then we must decide what actions to take. Let's say the Oracle can only answer "yes" or "no"; then we can submit questions such as whether we should take a particular action. I suspect this kind of situation falls within a more general case of "use the Oracle to avoid a threat to the entire planet, Oracle included", inside which questions should be safe.
Example 2: let's say we want to minimise the chance that the Oracle breaks down due to accidents. We can ask it what is the best course of action among a set of ideas we come up with. In this case we should make sure beforehand that nothing on the list makes the Oracle impossible or too difficult for humans to shut down.
Example 3: let's say we become practically sure that the Oracle is aligned with us. Then we could ask it to choose the best course of action among a list of strategies devised to make sure it doesn't become misaligned. In this case the answer benefits both us and the Oracle, because the Oracle should have an incentive not to let its own values change. I think this one is sketchier and possibly dangerous because of the premise: the Oracle could obviously pretend to be aligned. But granting the premise, it should be a good question, although I don't know how useful it is as a submission under this post (maybe it's too obvious, or too unrealistic given the premise).
The definition of LEV I used in the previous post is: "Longevity Escape Velocity (LEV) is the minimum rate of medical progress such that individual life expectancy is raised by at least one year per year if medical interventions are used". So it doesn't lead to an unbounded life expectancy. In fact, with a simplified calculation in the first post, I estimated life expectancy after LEV to be approximately 1000 years. 1000 years is what comes up using the same idea as your hydra example (risk of death held flat at that of a young person), but in reality it should be slightly less, because the calculation leaves out the period just after hitting LEV when the risk of death is still falling. We are not dealing with infinite utilities.
The main measure of impact I gave in the post comes from these three values, plus some corrections:
1000 QALYs: the life expectancy of a person after hitting LEV
36,500,000 deaths per year due to aging
The expected number of years by which a given project brings LEV closer
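The 1000-year figure can be sketched as the mean of a geometric distribution: if the annual risk of death stays flat after hitting LEV (the hydra assumption), remaining life expectancy is simply one over that risk. The 0.1% annual mortality figure for a young adult is an illustrative assumption, not a number from the post.

```python
def post_lev_life_expectancy(annual_death_risk: float) -> float:
    """Expected remaining years given a constant annual death probability.

    With a flat yearly death probability p, years survived follow a
    geometric distribution with mean 1/p (the hydra assumption).
    """
    return 1.0 / annual_death_risk

# Illustrative ~0.1% annual mortality risk for a young adult:
print(round(post_lev_life_expectancy(0.001)))  # ~1000 years
```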