Posts

NYT: A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit 2018-04-20T18:33:29.949Z
ICO to Build Next Generation AI Raises $36 Million in 60 Seconds 2017-12-23T14:53:17.622Z
Pathological utilitometer thought experiment 2010-10-26T15:13:06.100Z

Comments

Comment by Rain on Moral Reality Check (a short story) · 2023-11-26T14:21:04.571Z · LW · GW

I thought it was funny when Derek said, "I can explain it without jargon."

It seems to be conflating 'morality' with 'success'. Being able to predict the future consequences of an act is only half the moral equation - the other half is empathy. Human emotion, as programmed by evolution, is the core of humanity, and yet it seems to be derided by the author.

Comment by Rain on The Last Year - is there an existing novel about the last year before AI doom? · 2022-10-22T23:58:34.114Z · LW · GW

The novel After Life by Simon Funk has quite a few flashbacks to the world prior to humanity's end, though the story spans more than a year. I find it one of the more hopeful stories in the genre.

Comment by Rain on How Should We Respond to Cade Metz? · 2021-02-13T19:43:57.177Z · LW · GW
Comment by Rain on Covid 1/7: The Fire of a Thousand Suns · 2021-01-08T14:57:32.097Z · LW · GW

Your periodic reminder that in 1947, New York City vaccinated ~6.35 million people (80% of their population) for smallpox in less than a month. If you do not think we can do this, what changed to make it impossible?

What changed? We started looking for every possible negative consequence of rolling out vaccines that quickly, and then working to mitigate each and every one.

Comment by Rain on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T16:44:20.013Z · LW · GW

Neat. I work for DLA. Thanks for the update.

Comment by Rain on Covid 11/26: Thanksgiving · 2020-11-27T18:33:34.711Z · LW · GW

Thank you.

Comment by Rain on Covid Covid Covid Covid Covid 10/29: All We Ever Talk About · 2020-10-30T00:13:47.508Z · LW · GW

Thank you very much for the insightful news. I consider these posts essential reading.

Comment by Rain on Covid 10/22: Europe in Crisis · 2020-10-22T21:57:56.786Z · LW · GW

Once again, thank you for these incredibly informative posts.

Comment by Rain on Covid 10/8: October Surprise · 2020-10-09T02:02:10.984Z · LW · GW

Thank you for all this useful information and analysis.

Comment by Rain on Covid 9/24: Until Morale Improves · 2020-09-24T18:34:21.724Z · LW · GW

Thank you very much for posting these.

Comment by Rain on There's No Fire Alarm for Artificial General Intelligence · 2017-10-14T15:43:55.100Z · LW · GW

I agree it fits well here. However, it has a very different tone from other posts on the MIRI blog, where it has also been posted.

Comment by Rain on There's No Fire Alarm for Artificial General Intelligence · 2017-10-14T15:35:51.058Z · LW · GW

Laziness. Though I note Stuart_Armstrong had the same opinion as me, and offered even fewer means of improvement, and got upvoted. I should have also said I agree with all points contained herein, and that the message is an important one. That would have reduced the bite.

Comment by Rain on There's No Fire Alarm for Artificial General Intelligence · 2017-10-14T03:13:20.064Z · LW · GW

This article is very heavy with Yudkowsky-isms and repeats of material he's posted before; it needs a good summary and editing to pare it down. I'm surprised they posted it to the MIRI blog in its current form.

Edit: As stated below, I agree with all the points of the article, and consider it an important message.

Comment by Rain on LW 2.0 Open Beta Live · 2017-09-22T14:27:13.300Z · LW · GW

Any RSS feeds?

Comment by Rain on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning · 2016-01-27T22:47:10.545Z · LW · GW

Eliezer thinks it's a big deal.

Comment by Rain on [Link] Introducing OpenAI · 2015-12-23T14:36:04.277Z · LW · GW

Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.

Comment by Rain on [Link] Introducing OpenAI · 2015-12-12T03:46:59.387Z · LW · GW

That interview is indeed worrying. I'm surprised by some of the answers.

Comment by Rain on New Leverhulme Centre on the Future of AI (developed at CSER with spokes led by Bostrom, Russell, Shanahan) · 2015-12-03T16:00:12.812Z · LW · GW

Great news! I've been waiting for this kind of thing.

Comment by Rain on What is your rationalist backstory? · 2015-09-25T01:39:21.912Z · LW · GW

More likely, he also "always thought that way," and the extreme story was written to provide additional drama.

Comment by Rain on How To Win The AI Box Experiment (Sometimes) · 2015-09-12T15:14:31.730Z · LW · GW

Thank you for replicating the experiment!

Comment by Rain on MIRI's 2015 Summer Fundraiser! · 2015-07-21T13:18:50.132Z · LW · GW

Somewhat upper middle class job; low cost of living, inexpensive hobbies, making donations a priority.

Comment by Rain on MIRI's 2015 Summer Fundraiser! · 2015-07-21T01:22:45.993Z · LW · GW

I donated $5000 today and continue my $1000 monthly donations.

Comment by Rain on xkcd on the AI box experiment · 2014-11-23T19:56:07.512Z · LW · GW

I feel, and XiXiDu seems to agree, that his posts require a disclaimer or official counterarguments. I feel it's appropriate to point out that someone has made it a major part of their life to collect and spread every negative aspect of a community they can find.

Comment by Rain on Breaking the vicious cycle · 2014-11-23T19:40:19.475Z · LW · GW

So MIRI and LW are no longer a focus for you going forward?

Comment by Rain on xkcd on the AI box experiment · 2014-11-21T13:56:07.294Z · LW · GW

Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.

Comment by Rain on A discussion of heroic responsibility · 2014-10-29T12:37:30.894Z · LW · GW

Skin reacts to light, too.

Comment by Rain on A Guide to Rational Investing · 2014-09-15T03:19:06.302Z · LW · GW

tl;dr: buy Index Funds, like the Vanguard Total Stock Market Index, because money can be turned into a great many utilons after holding it for a long time.

Comment by Rain on Is it a good idea to use Soylent once/twice a day? · 2014-09-08T13:05:09.436Z · LW · GW

The FAQ addresses Crohn's Disease: "more data needed".

https://faq.soylent.me/hc/en-us/articles/200838449-Will-Soylent-help-my-Crohns-or-IBS-

It also has a full list of ingredients.

https://faq.soylent.me/hc/en-us/articles/200789315-Soylent-1-0-Nutrition

One thing from the link above that I didn't previously know: "The Soylent recipe is based on the recommendations of the Institute of Medicine (IOM) and is approved as a food by the Food and Drug Administration (FDA)." (emphasis theirs)

Comment by Rain on Is it a good idea to use Soylent once/twice a day? · 2014-09-08T02:11:46.089Z · LW · GW

No agreement. It's a polarizing topic, even here.

Comment by Rain on Is it a good idea to use Soylent once/twice a day? · 2014-09-08T01:48:00.095Z · LW · GW

No reason to apologize. It's a good time for another thread, since it's actually out now.

Comment by Rain on Is it a good idea to use Soylent once/twice a day? · 2014-09-08T01:14:38.132Z · LW · GW

Previous discussions on LW:

Comment by Rain on Is it a good idea to use Soylent once/twice a day? · 2014-09-08T01:11:48.770Z · LW · GW

Here's my review of Soylent and a taskification of how I use it.

Pros:

  • Much easier than cooking or even fast food, when transportation costs are taken into account
  • Much more nutritionally complete than fast food or processed sugar-foods
  • Relatively cheap
  • Tastes neutral or slightly sweet

Cons:

  • Sometimes sticks to the back of my throat
  • Can give foul smelling gas
  • Can cause headaches
  • Can cause nausea
  • Texture of high pulp orange juice
  • Doesn't have the daily allowance of sodium

Preparation Process:

  • Place Takeya pitcher on counter with top off
  • Rip off top of Soylent bag
  • Squeeze top of Soylent bag down to a circular shape that fits in the pitcher
  • Place top of bag in pitcher and tilt
  • Squeeze and press on bag until all powder is in pitcher
  • Add 1/4 tsp to 1 tsp of salt, depending on taste and sodium cravings. I use Diamond Crystal Kosher Salt.
  • Add warm water to pitcher to the edge of the container
  • Put top on and shake vigorously
  • Open top, careful not to drip remnants
  • Add oil from oil jar and more warm water to edge of the container
  • Put on top and shake vigorously
  • Place pitcher in refrigerator

Consumption process:

  • Pour Soylent into 8oz glass - I use Bormioli Rocco glasses recommended by TheWirecutter
  • Alternatively, pour Soylent into 16oz Thermos, such as the Thermos Nissan
  • If still warm, put in 1 ice cube
  • Sip or chug as needed
  • Consume lots of additional water
  • Immediately upon finishing a glass, add a dash of water, swirl it around, drink remnants, and then rinse glass

Notes:

  • Do not put water in pitcher before Soylent powder, as it's easy to put in too much water, and the Soylent won't fit.
  • Warm water mixes more easily with the Soylent
  • Soylent tastes better when chilled
  • Soylent dries out into a very hard, crusty residue which is difficult to clean, so stray droplets are a nuisance

Comment by Rain on MIRI's 2014 Summer Matching Challenge · 2014-08-07T22:18:54.031Z · LW · GW

I pledged to continue donating $1,000 per month.

I also convinced a friend to donate for the first time.

Comment by Rain on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-18T20:40:26.319Z · LW · GW

Who cares about whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc.?

XiXiDu cares about every Eliezer potential-mistake.

Comment by Rain on [Retracted] More silent deletion; LessWrong's moderation needs to change · 2014-07-12T01:14:46.766Z · LW · GW

Forum drama is noise, not signal.

Comment by Rain on Calling all MIRI supporters for unique May 6 giving opportunity! · 2014-05-07T02:39:41.463Z · LW · GW

I didn't realize the grand prize was based on daily unique donors until I got the 'urgent' email. I got my dad to chip in $10, too. Looks like the other leading organization has more friends and family.

Comment by Rain on Meetup : Southeast Michigan · 2014-01-04T18:28:38.732Z · LW · GW

My apologies, I won't be able to make it. Work unexpectedly kept me up until 3am, and my body punished me with sleep.

Comment by Rain on Meetup : Southeast Michigan · 2013-12-28T03:48:58.594Z · LW · GW

I'll be coming.

Comment by Rain on Open Thread, September 23-29, 2013 · 2013-09-30T23:13:00.929Z · LW · GW

Jon's what I call normal-smart. He spends most of his time watching TV, mainly US news programs, and they're quite destructive to rational thinking, even if the purpose is comedic fodder and exposing hypocrisy. He's very tech-averse, relying on the guests he has on the show to bring in information he might use, and trusting his (quite good) intuition to fit things into reality. As such, I like to use him as an example of how more normal people feel about tech / geek issues.

Every time he has one of these debates, I really want to sit down as moderator so I can translate each side, since they often talk past each other. Alas, it's a very time restricted format, and I've only seen him fact check on the fly once (Google, Wikipedia).

The number thing was at least partly a joke, along the lines of "bigger than 10 doesn't make much sense to me" - scope insensitivity humor. I've done similar before.

Comment by Rain on Rationality Quotes September 2013 · 2013-09-15T13:39:50.269Z · LW · GW
Comment by Rain on The Ultimate Newcomb's Problem · 2013-09-10T13:18:22.467Z · LW · GW

Immediate thoughts, before reading comments: One-box. I had started to think more deeply until I read the part about being run over for factoring, and for some reason my brain applied it to reasoning about this topic as a whole and spit out a final answer.

Intuitively, it seemed one-boxing would get me a million, as per standard Newcomb. The lottery's two million seemed like gravy on top of that (diminishing marginal utility of money), with a potential for 3 million total. Since they're independent, the word "separately" and its description made it seem like the lottery was unable to be affected by my actions at all. Thus, take box B, and hope for a lottery win. Definitely don't overthink it, or risk a trolley encounter.

Comment by Rain on The genie knows, but doesn't care · 2013-09-10T13:03:40.464Z · LW · GW

Glad to hear. It is interesting data that you managed to bring in three big-name trolls for a single thread, considering their previous dispersion and lack of interest.

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-04T23:58:40.010Z · LW · GW

AMF/GiveWell charities to keep GiveWell and the EA movement growing while actors like GiveWell, Paul Christiano, Nick Beckstead and others at FHI, investigate the intervention options and cause prioritization, followed by organization-by-organization analysis of the GiveWell variety, laying the groundwork for massive support for the top far future charities and organizations identified by said processes

Cool, if MIRI keeps going, they might be able to show FAI as top focus with adequate evidence by the time all of this comes together.

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-04T23:56:20.985Z · LW · GW

Build up general altruistic capacities through things like the effective altruist movement or GiveWell's investigation of catastrophic risks

I read every blog post they put out.

Invest money in an investment fund for the future which can invest more [...] when there are better opportunities

I figure I can use my retirement savings for this.

(recalling that most of the value of MIRI in your model comes from major institutions being collectively foolish or ignorant regarding AI going forward)

I thought it came from them being collectively foolish or ignorant regarding Friendliness rather than AGI.

Prediction markets, meta-research, and other institutional changes

Meh. Sounds like Lean Six Sigma or some other buzzword business process improvement plan.

Work like Bostrom's

Luckily, Bostrom is already doing work like Bostrom's.

Pursue cognitive enhancement technologies or education methods

Too indirect for my taste.

Find the most effective options for synthetic biology threats

Not very scary compared to AI. Lots of known methods to combat green goo.

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-02T22:56:21.433Z · LW · GW

Anchoring from my butt-number?

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-02T15:21:26.531Z · LW · GW

The method is even more important (practice vs. perfect practice, philanthropy vs. GiveWell). I believe in the mission, not MIRI per se. If Eliezer decided that magic was the best way to achieve FAI and started searching for the right wand and hand gestures rather than doing math and decision theory, I would look elsewhere.

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-02T14:05:33.841Z · LW · GW

I subscribe to the view that AGI is bad by default, and don't see anyone else working on the friendliness problem.

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-01T18:58:22.628Z · LW · GW

I'm not sure which fallacy you're invoking, but saying (to paraphrase), 'superintelligence is likely difficult to aim' and 'MIRI's work may not have an impact' are certainly possible, and already contribute to my estimates.

Comment by Rain on How does MIRI Know it Has a Medium Probability of Success? · 2013-08-01T16:31:44.987Z · LW · GW

I'm pessimistic and depressed.