I thought it was funny when Derek said, "I can explain it without jargon."
It seems to conflate 'morality' with 'success'. Being able to predict the future consequences of an act is only half the moral equation; the other half is empathy. Human emotion, as programmed by evolution, is the core of humanity, yet the author seems to deride it.
The novel After Life by Simon Funk has quite a few flashbacks to the world prior to humanity's end, though it takes more than a year. I find it one of the more hopeful stories in the genre.
Your periodic reminder that in 1947, New York City vaccinated ~6.35 million people (80% of their population) for smallpox in less than a month. If you do not think we can do this, what changed to make it impossible?
What changed? We started looking for every possible negative consequence of rolling out vaccines that quickly, and then working to mitigate each and every one.
Neat. I work for DLA. Thanks for the update.
Thank you very much for the insightful news. I consider these posts essential reading.
Once again, thank you for these incredibly informative posts.
Thank you for all this useful information and analysis.
Thank you very much for posting these.
I agree it fits well here. However, it has a very different tone from other posts on the MIRI blog, where it has also been posted.
Laziness. Though I note Stuart_Armstrong had the same opinion as me, offered even fewer means of improvement, and got upvoted. I should have also said that I agree with all points contained herein, and that the message is an important one. That would have reduced the bite.
This article is heavy with Yudkowsky-isms and repeats of material he's posted before; it needs a good summary and editing to pare it down. I'm surprised they posted it to the MIRI blog in its current form.
Edit: As stated below, I agree with all the points of the article, and consider it an important message.
Eliezer thinks it's a big deal.
Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.
That interview is indeed worrying. I'm surprised by some of the answers.
Great news! I've been waiting for this kind of thing.
More likely, he also "always thought that way," and the extreme story was written to provide additional drama.
Thank you for replicating the experiment!
Somewhat upper-middle-class job; low cost of living; inexpensive hobbies; making donations a priority.
I donated $5000 today and continue my $1000 monthly donations.
I feel, and XiXiDu seems to agree, that his posts require a disclaimer or official counterarguments. I feel it's appropriate to point out that someone has made collecting and spreading every negative aspect of a community they can find into a major part of their life.
So MIRI and LW are no longer a focus for you going forward?
Note that XiXiDu preserves every potential negative aspect of the MIRI and LW community; he is a biased source, lacking context and positive examples.
Skin reacts to light, too.
tl;dr: buy Index Funds, like the Vanguard Total Stock Market Index, because money can be turned into a great many utilons after holding it for a long time.
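As a rough illustration of the "holding it for a long time" part, here is a minimal compound-growth sketch; the starting amount, 7% annualized real return, and 30-year horizon are assumptions for the example, not figures from the comment:

```python
# Minimal sketch: compound growth of money parked in a broad index fund.
# Assumed numbers (not from the comment): $10,000 initial, 7% real annual return, 30 years.
principal = 10_000
real_return = 0.07
years = 30

value = principal * (1 + real_return) ** years
print(f"${principal:,} held {years} years at {real_return:.0%} real return -> ${value:,.0f}")
# Roughly $76,000 in today's dollars, i.e. several times the original stake
# available to donate or spend later.
```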
The FAQ addresses Crohn's Disease: "more data needed".
https://faq.soylent.me/hc/en-us/articles/200838449-Will-Soylent-help-my-Crohns-or-IBS-
It also has a full list of ingredients.
https://faq.soylent.me/hc/en-us/articles/200789315-Soylent-1-0-Nutrition
One thing from the link above that I didn't previously know: "The Soylent recipe is based on the recommendations of the Institute of Medicine (IOM) and is approved as a food by the Food and Drug Administration (FDA)." (emphasis theirs)
No agreement. It's a polarizing topic, even here.
No reason to apologize. It's a good time for another thread, since it's actually out now.
Previous discussions on LW:
Here's my review of Soylent and a taskification of how I use it.
Pros:
- Much easier than cooking or even fast food, when transportation costs are taken into account
- Much more nutritionally complete than fast food or processed sugar-foods
- Relatively cheap
- Tastes neutral or slightly sweet
Cons:
- Sometimes sticks to the back of my throat
- Can give foul-smelling gas
- Can cause headaches
- Can cause nausea
- Texture of high-pulp orange juice
- Doesn't provide the full daily allowance of sodium
Preparation Process:
- Place Takeya pitcher on counter with top off
- Rip off top of Soylent bag
- Squeeze top of Soylent bag down to a circular shape that fits in the pitcher
- Place top of bag in pitcher and tilt
- Squeeze and press on bag until all powder is in pitcher
- Add 1/4 tsp to 1 tsp of salt, depending on taste and sodium cravings. I use Diamond Crystal Kosher Salt.
- Add warm water to the pitcher, up to the edge of the container
- Put top on and shake vigorously
- Open top, careful not to drip remnants
- Add oil from the oil jar and more warm water, up to the edge of the container
- Put the top on and shake vigorously
- Place pitcher in refrigerator
Consumption process:
- Pour Soylent into 8oz glass - I use Bormioli Rocco glasses recommended by TheWirecutter
- Alternatively, pour Soylent into 16oz Thermos, such as the Thermos Nissan
- If still warm, put in 1 ice cube
- Sip or chug as needed
- Consume lots of additional water
- Immediately upon finishing a glass, add a dash of water, swirl it around, drink remnants, and then rinse glass
Notes:
- Do not put water in pitcher before Soylent powder, as it's easy to put in too much water, and the Soylent won't fit.
- Warm water mixes more easily with the Soylent
- Soylent tastes better when chilled
- Soylent dries out into a very hard, crusty residue which is difficult to clean, so stray droplets are a nuisance
I pledged to continue donating $1,000 per month.
I also convinced a friend to donate for the first time.
Who cares whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc.?
XiXiDu cares about every potential Eliezer mistake.
Forum drama is noise, not signal.
I didn't realize the grand prize was based on daily unique donors until I got the 'urgent' email. I got my dad to chip in $10, too. Looks like the other leading organization has more friends and family.
My apologies, I won't be able to make it. Work unexpectedly kept me up until 3am, and my body punished me with sleep.
Jon's what I call normal-smart. He spends most of his time watching TV, mainly US news programs, which are quite destructive to rational thinking, even when the purpose is comedic fodder and exposing hypocrisy. He's very tech-averse, letting the guests on his show bring in information he might use, and trusting his (quite good) intuition to fit things into reality. As such, I like to use him as an example of how more normal people feel about tech/geek issues.
Every time he has one of these debates, I really want to sit down as moderator so I can translate each side, since they often talk past each other. Alas, it's a very time-restricted format, and I've only seen him fact-check on the fly once (Google, Wikipedia).
The number thing was at least partly a joke, along the lines of "bigger than 10 doesn't make much sense to me" - scope insensitivity humor. I've done similar before.
Immediate thoughts, before reading comments: One-box. I had started to think more deeply until I read the part about being run over for factoring, and for some reason my brain applied it to reasoning about this topic as a whole and spit out a final answer.
Intuitively, it seemed one-boxing would get me a million, as per standard Newcomb. The lottery's two million seemed like gravy on top of that (diminishing marginal utility of money), with a potential for 3 million total. Since they're independent, the word "separately" and its description made it seem like the lottery couldn't be affected by my actions at all. Thus, take box B, and hope for a lottery win. Definitely don't overthink it, or risk a trolley encounter.
Glad to hear. It is interesting data that you managed to bring in three big-name trolls for a single thread, considering their previous dispersion and lack of interest.
> AMF/GiveWell charities to keep GiveWell and the EA movement growing while actors like GiveWell, Paul Christiano, Nick Beckstead and others at FHI investigate the intervention options and cause prioritization, followed by organization-by-organization analysis of the GiveWell variety, laying the groundwork for massive support for the top far future charities and organizations identified by said processes
Cool, if MIRI keeps going, they might be able to show FAI as top focus with adequate evidence by the time all of this comes together.
> Build up general altruistic capacities through things like the effective altruist movement or GiveWell's investigation of catastrophic risks
I read every blog post they put out.
> Invest money in an investment fund for the future which can invest more [...] when there are better opportunities
I figure I can use my retirement savings for this.
> (recalling that most of the value of MIRI in your model comes from major institutions being collectively foolish or ignorant regarding AI going forward)
I thought it came from them being collectively foolish or ignorant regarding Friendliness rather than AGI.
> Prediction markets, meta-research, and other institutional changes
Meh. Sounds like Lean Six Sigma or some other buzzword business process improvement plan.
> Work like Bostrom's
Luckily, Bostrom is already doing work like Bostrom's.
> Pursue cognitive enhancement technologies or education methods
Too indirect for my taste.
> Find the most effective options for synthetic biology threats
Not very scary compared to AI. Lots of known methods to combat green goo.
Anchoring from my butt-number?
The method is even more important (practice vs. perfect practice, philanthropy vs. GiveWell). I believe in the mission, not MIRI per se. If Eliezer decided that magic was the best way to achieve FAI and started searching for the right wand and hand gestures rather than math and decision theory, I would look elsewhere.
I subscribe to the view that AGI is bad by default, and don't see anyone else working on the friendliness problem.
I'm not sure which fallacy you're invoking, but the claims (to paraphrase) that 'superintelligence is likely difficult to aim' and 'MIRI's work may not have an impact' are certainly possible, and they already factor into my estimates.
I'm pessimistic and depressed.