Posts

Link: Rob Bensinger on Less Wrong and vegetarianism 2014-11-13T17:09:11.021Z

Comments

Comment by Sysice on Financial Effectiveness Repository · 2014-11-24T11:21:43.510Z · LW · GW

"active investment with an advisor is empirically superior to passive non-advised investment for most people." Can you source this?

Comment by Sysice on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-24T11:13:05.227Z · LW · GW

This isn't necessarily true- if you have to think about using that link as charity while shopping, it could decrease your likelihood of doing other charitable things (which is why you should set up a redirect so you don't have to think about it, and always use it every time!)

Comment by Sysice on Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission · 2014-11-24T11:10:41.804Z · LW · GW

Does this stack with AmazonSmile? For example, how much money goes where when comparing this link to this one?

Comment by Sysice on xkcd on the AI box experiment · 2014-11-21T08:36:40.770Z · LW · GW

It might be useful to feature a page containing what we, you know, actually think about the basilisk idea. Although the RationalWiki page seems to be pretty solidly on top of Google's search results, we might catch a couple of people looking for the source.

If any XKCD readers are here: Welcome! I assume you've already googled what "Roko's Basilisk" is. For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

Comment by Sysice on Link: Rob Bensinger on Less Wrong and vegetarianism · 2014-11-14T11:00:25.078Z · LW · GW

You seem to be saying that people can't talk, think about, or discuss topics unless they're currently devoting their life towards that topic with maximum effectiveness. That seems... incredibly silly.
Your statements seem especially odd considering that there are people currently doing all of the things you mentioned (which is why you knew to mention them).

Comment by Sysice on 2014 Less Wrong Census/Survey · 2014-10-28T07:57:32.787Z · LW · GW

Did the survey (a couple of days ago).

I wasn't here for the last survey- are the results predominantly discussed here and on Yvain's blog?

Comment by Sysice on Link: Elon Musk wants gov't oversight for AI · 2014-10-28T07:52:00.348Z · LW · GW

I find it very useful to have posts like these as an emotional counter to the echo chamber effect. Obviously this has little or no effect on the average LW reader's factual standpoint, but it reminds us both of how absurd our ideas look from the outside, and of how much we have left to accomplish.

Comment by Sysice on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-13T21:41:52.678Z · LW · GW

True. I've always read things around that speed by default, though, so it's not related to speedreading techniques, and I don't know how to improve the average person's default speed.

Comment by Sysice on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-13T19:39:25.520Z · LW · GW

This matches my experience. Speed-reading software like Textcelerator is nice when I want to go through a fluff story at 1200 WPM, but anything remotely technical requires me to drop to 400-600 WPM at most, and speed-reading does not fundamentally change this limit.

Comment by Sysice on Positive Book and Other Media Recommendations for a Teen Audience · 2014-10-12T16:49:05.555Z · LW · GW

HPMOR is an excellent choice.

What's your audience like? A book club (presumed interest in books, but not significantly higher maturity or interest in rationality than baseline), a group of potential LW readers, some average teenagers?

The Martian (Andy Weir) would be a good choice for a book-club-level group- it's very entertaining to read and promotes useful values. Definitely not of the "awareness raising" genre, though.

If you think a greater-than-average number of them would be interested in rationality, I'd consider spending some time on Ted Chiang's work- only short stories at the moment, but they're very well received, great to read, and bring up some very good points that I'd bet most of your audience hasn't considered.

Edit: Oh, also think about Speaker for the Dead.

Comment by Sysice on What's the right way to think about how much to give to charity? · 2014-09-24T23:40:35.503Z · LW · GW

Giving What We Can recommends donating at least 10% of income. I currently donate what I can spare when I don't need the money, and have precommitted to 50% of my post-tax income in the event that I acquire a job that pays over $30,000 a year (read: once I graduate college). The problem with applying that in your case is that you already have a steady income and have arranged your life around it- it's much easier not to raise expenses in response to new income than it is to lower expenses from a set income.

Like EStokes said, however, the important thing isn't to get caught up in how much you should be donating in order to meet some moral requirement. It's to actually give, in whatever way you yourself can. We all do what we can :)

Comment by Sysice on Simulation argument meets decision theory · 2014-09-24T23:34:10.259Z · LW · GW

How I interpreted the problem: it's not that identical agents have different utility functions, it's just that different things happen to them. In reality, what's behind the door is behind the door, while the simulation rewards X with something else. X is only unaware of whether or not he's in a simulation before he presses the button- obviously, once he actually receives the utility, he can tell the difference. Although the fact that nobody else has stated this makes me unsure. OP, can you clarify a little bit more?

Comment by Sysice on Simulation argument meets decision theory · 2014-09-24T14:09:32.690Z · LW · GW

It's tempting to say that, but I think pallas actually meant what he wrote. Basically, hitting "not-sim" gets you a guaranteed 0.9 utility. Hitting "sim" gets you about 0.2 utility, getting closer as the number of copies increases. Even though each person strictly prefers "sim" to "not-sim," and a CDT agent would choose "sim," it appears that choosing "not-sim" gets you more expected utility.

Edit: not-sim has higher expected utility for an entirely selfish agent who does not know whether he is simulated or not, because his choice affects not only his utility payout, but also acausally affects his state of simulation. Of course, this depends on my interpretation of anthropics.
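To make the numbers concrete, here's a minimal sketch. The payoff structure is my own assumption, chosen only to reproduce the 0.9 and 0.2 figures above (the actual setup is in pallas's post): pressing "not-sim" runs no simulations, while pressing "sim" creates n copies alongside the one real agent, with a uniform anthropic prior over all n+1 agents.

```python
def expected_utility(choice, n_copies, real_payoff=1.0, sim_payoff=0.2):
    """Expected utility for an agent who doesn't know if it's simulated.

    Assumed payoffs (placeholders, not pallas's actual numbers):
    - "not-sim": no simulations are run, so the agent is certainly real
      and receives a guaranteed 0.9.
    - "sim": n_copies simulations run alongside the one real agent; under
      a uniform anthropic prior the agent is real with prob 1/(n_copies+1).
    """
    if choice == "not-sim":
        return 0.9
    p_real = 1.0 / (n_copies + 1)
    return p_real * real_payoff + (1 - p_real) * sim_payoff

for n in (1, 10, 1000):
    print(n, expected_utility("sim", n))  # 0.6, 0.27..., 0.2008: tends to 0.2
# expected_utility("not-sim", n) is always 0.9, so "not-sim" wins in expectation.
```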

Comment by Sysice on CEV-tropes · 2014-09-23T03:38:33.988Z · LW · GW

Most of what I know about CEV comes from the 2004 Yudkowsky paper. Considering how many of his views have changed over similar timeframes, and how the paper states multiple times that CEV is a changing work in progress, this seems like a bad thing for my knowledge of the subject. Have there been any significant public changes since then, or are we still debating based on that paper?

Comment by Sysice on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T03:30:27.671Z · LW · GW

I'm interested in your statement that "other people" have estimates that are only a few decades off from optimistic trends. Although not very useful for this conversation, my impression is that a significant portion of informed but uninvolved people place a <50% chance of significant superintelligence occurring within the century. For context, I'm a LW reader and a member of that personality cluster, but none of the people I am exposed to are. Can you explain why your contacts make you feel differently?

Comment by Sysice on Questions on the human path and transhumanism. · 2014-08-13T07:22:34.802Z · LW · GW

I don't disagree with you- this would, indeed, be a sad fate for humanity, and certainly a failed utopia. But the failing here is not inherent to the idea of an AGI that takes action on its own to improve humanity- it's a failing of one that doesn't do what we actually want it to do, a failure to actually achieve Friendliness.

Speaking of what we actually want: I want something more like what's hinted at in the Fun Theory sequence than an AGI that only slowly improves humanity over decades, which seems to be what you're describing here. (Tell me if I misunderstood, of course.)

Comment by Sysice on The greatest good for the greatest number - starting soonest, or ending last, or lasting longest? · 2014-08-09T04:07:42.377Z · LW · GW

...Which, of course, this post also accomplishes. On second thought, continue!

Comment by Sysice on The greatest good for the greatest number - starting soonest, or ending last, or lasting longest? · 2014-08-09T04:05:21.441Z · LW · GW

The answer is, as always, "it depends." Seriously, though- I time discount to an extent, and I don't want to stop totally. I prefer more happiness to less, and I don't want to stop. (I don't care about the ending date, and I'm not sure why I would.) If a trade-off exists between starting date, quality, and duration of a good situation, I'll prefer one situation over the other based on my utility function. A better course of action would be to try to get more information about my utility function, rather than debating which value is more sacred than the rest.

Comment by Sysice on MIRI 2014 Summer Matching Challenge and one-off opportunity to donate *for free* · 2014-08-04T04:07:50.605Z · LW · GW

I've voted, but for sake of clear feedback- I just made my first donation ($100) to MIRI, directly as a result of both this thread and the donation-matching. This thread alone would not have been enough, but I would not have found out about the donation-matching without this thread. I had no negative feelings from having this thread in my recent posts list.

Consider this a positive pattern reinforced :)

Comment by Sysice on Knightian Uncertainty and Ambiguity Aversion: Motivation · 2014-07-22T13:25:12.830Z · LW · GW

MMEU (maximin expected utility) fails as a decision theory that we'd actually want for the same reason that laypeople's intuitions about AI fail: it's rare to have a proper understanding of how powerful the words "maximum" and "minimum" are. As a quick example, actually following MMEU means that a vacuum metastability event is the best thing that could possibly happen to the universe, because it removes the possibility of humanity being tortured for eternity. Add in the fact that it doesn't let you deal with tiny probabilities correctly (e.g. Pascal's Wager should never fail to convince an MMEU agent), and I'm seriously confused as to the usefulness of this.
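For concreteness, here's a toy sketch of the rule being criticized; the actions, hypotheses, and utility numbers are all invented by me for illustration. An MMEU agent scores each action by its minimum expected utility across its set of priors, so the bounded-downside action wins no matter how bad it is on average.

```python
# Toy MMEU agent. Each action maps to its expected utility under each
# probability distribution in the agent's set of priors (numbers invented).
actions = {
    # Business as usual: fine under one prior, astronomically bad under
    # another (e.g. a future containing eternal torture).
    "status quo": [100.0, -1e9],
    # Vacuum metastability event: utility 0 under every prior, since nothing
    # is left to have experiences. It caps the downside.
    "vacuum decay": [0.0, 0.0],
}

# MMEU rule: choose the action whose worst-case expected utility is highest.
best = max(actions, key=lambda a: min(actions[a]))
print(best)  # -> "vacuum decay", which is exactly the objection above
```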