Posts

Comments

Comment by wiresnips on Rationality Quotes November 2013 · 2013-11-04T06:15:33.210Z · score: 6 (10 votes) · LW · GW

Agreed. Squicky dilemmas designed to showcase utilitarianism are not generally found in real life (as far as I know). And a human probably couldn't be trusted to make a sound judgement call even if one were found. Running on untrusted hardware and such.

Ah- and this is the point of the quote. Oh, I like that.

Comment by wiresnips on Rationality Quotes November 2013 · 2013-11-03T19:05:40.781Z · score: 11 (11 votes) · LW · GW

Utilitarianism isn't a description of human moral processing, it's a proposal for how to improve it.

Comment by wiresnips on Rationality Quotes October 2013 · 2013-10-05T22:38:09.281Z · score: 14 (16 votes) · LW · GW

That's not necessarily false, but it's a dangerous thing to say to yourself. Mostly when I find myself thinking it, I've just wasted a great deal of time, and I'm trying to convince myself that it wasn't really wasted. It's easy to tell myself, hard to verify, and more pleasant than thinking my time-investment was for nothing.

Comment by wiresnips on Post ridiculous munchkin ideas! · 2013-05-19T20:01:46.014Z · score: 1 (1 votes) · LW · GW

This is transformative. Thank you.

Comment by wiresnips on Rationality Quotes April 2013 · 2013-04-09T17:39:21.173Z · score: -2 (12 votes) · LW · GW

This may not be strictly true. Consider the basilisk.

Comment by wiresnips on Rationality Quotes February 2012 · 2012-02-10T22:55:20.115Z · score: 1 (1 votes) · LW · GW

Either both are true, or neither.

Comment by wiresnips on Sarah Connor and Existential Risk · 2011-05-01T20:25:46.197Z · score: 1 (1 votes) · LW · GW

Anyone smart enough to be dangerous is smart enough to be safe? I'm skeptical- folksy wisdom tells me that being smart doesn't protect you from being stupid.

But in general, yes- the threat becomes more and more tangible as the barrier to AI gets lower and the number of players increases. At the moment, it seems pretty intangible, but I haven't actually gone out and counted dangerously smart AI researchers- I might be surprised by how many there are.

To be clear, I was NOT trying to imply that we should actually right now form the Turing Police.

Comment by wiresnips on Sarah Connor and Existential Risk · 2011-05-01T20:04:37.579Z · score: 2 (2 votes) · LW · GW

Edited, in the interest of caution.

However, this is exactly the issue I'm trying to discuss. It looks as though, if we take the threat of uncaring AI seriously, this is a real problem and it demands a real solution. The only solution that I can see is morally abhorrent, and I'm trying to open a discussion looking for a better one. Any suggestions on how to do this would be appreciated.

Comment by wiresnips on [Altruist Support] How to determine your utility function · 2011-05-01T17:55:30.813Z · score: 4 (6 votes) · LW · GW

If we accept that what someone 'wants' can be distinct from their behaviour, then "what do I want?" and "what will I do?" are two different questions (unless you're perfectly rational). Presumably, a FAI scanning a brain could answer either question.

Comment by wiresnips on Rationality Quotes: April 2011 · 2011-04-04T17:35:44.318Z · score: 0 (2 votes) · LW · GW

The question of which is kind of still there, though. Procrastination is lazy, but getting drunk at work is irresponsible.

Comment by wiresnips on Toronto Less Wrong Meetup - Thursday Feb 17 · 2011-02-11T22:12:46.519Z · score: 1 (3 votes) · LW · GW

One more for Ottawa. Interest is yes.

Comment by wiresnips on Procedural Knowledge Gaps · 2011-02-08T06:22:51.263Z · score: 2 (4 votes) · LW · GW

You tip when you pay, whether you're running a bill or buying drinks one by one.

If you're paying by card, usually the little card-swipey-machine(?) will ask if you want to tip, and how much. Nice and easy.

If you're paying cash, you can drop some into a visible tip jar, or leave a little pile on the bar/table. It's convenient to overpay and then use some or all of your change for this. You don't need to stick around to watch it be picked up. Edit: absolutely agree with JoshuaZ- you should wait for your change. Once you've accepted it, you don't need to be present when the bartender collects the tip.

Sometimes, more in semi-classy restaurants, a waiter/ess will ask if you want change- if you say no, the difference is tip.

Comment by wiresnips on Procedural Knowledge Gaps · 2011-02-08T05:20:45.014Z · score: 0 (2 votes) · LW · GW

I'll take a swing at it- let me know if it's helpful at all.

Ordering at a bar is easiest if you're friendly with the bartender. A jovial attitude, a confession of ignorance, and a vague description of a target drink (e.g., "colorful and with rum", or "something delicious") will prompt a short exchange wherein the tender narrows the options down a little. Err towards generous tipping.

Note that I stick to quiet establishments. This probably doesn't work nearly as well in a very busy bar.

Comment by wiresnips on Rationality Quotes: January 2011 · 2011-01-03T20:40:04.341Z · score: 26 (28 votes) · LW · GW

Whatever elaborate, and grotesquely counter-intuitive, underpinnings there might be to familiar reality, it stubbornly continues to be familiar. When Rutherford showed that atoms were mostly empty space, did the ground become any less solid? The truth itself changes nothing.

-- Greg Egan, Quarantine

Comment by wiresnips on Rationality Quotes: December 2010 · 2010-12-04T02:45:48.506Z · score: 3 (5 votes) · LW · GW

"properly applied" qualifies it as practice

Comment by wiresnips on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-02T02:54:55.204Z · score: 0 (2 votes) · LW · GW

Isn't beauty a set of built-in fitness testing heuristics? If so, fitness really does cause beauty.

It's worth pointing out that beauty also really does cause fitness. The runaway cycle is the peacock effect.

Comment by wiresnips on Which parts of philosophy are worth studying from a pragmatic perspective? · 2010-10-01T15:22:29.306Z · score: 0 (2 votes) · LW · GW

I have a guess:

Let's say that studying philosophy is gratifying in and of itself. That would make the study of philosophy an intrinsic good. There might be some parts of philosophy whose study yields an instrumental good. These would be the "pragmatic" parts.

Comment by wiresnips on Epilogue: Atonement (8/8) · 2010-09-30T01:13:55.314Z · score: 3 (5 votes) · LW · GW

if you can translate them, they're hardly untranslatable

Comment by wiresnips on Anti-akrasia remote monitoring experiment · 2010-09-28T20:20:09.982Z · score: 1 (3 votes) · LW · GW

I explicitly uninstalled my other browsers, in point of fact. Reinstalling them is enough trouble that it's not worth it. I know that I've known about the disable-the-addon trick, but I definitely forgot about it.

It'll be interesting to see if you've just sabotaged me with the reminder.

Comment by wiresnips on Anti-akrasia remote monitoring experiment · 2010-09-28T19:28:14.377Z · score: 2 (4 votes) · LW · GW

Solo, I've had pretty good results with aggressive LeechBlock settings. My habitual timesinks are only accessible for a half-hour block each day.

https://addons.mozilla.org/en-US/firefox/addon/4476/

Comment by wiresnips on Unknown knowns: Why did you choose to be monogamous? · 2010-06-27T02:11:27.335Z · score: 4 (6 votes) · LW · GW

Polygamy is definitely to women's advantage. Since there's no real limit to the number of children a man can father, women can agree to share the very best male genetic material amongst each other and leave all the other men out in the cold. Think of the private harems that any number of rulers have maintained. In a monogamous culture, any given sub-excellent male has a much better chance of mating.

Comment by wiresnips on Open Thread: May 2010 · 2010-05-04T17:07:30.310Z · score: 5 (7 votes) · LW · GW

I don't know that we've ever successfully assimilated a feral human either.

Comment by wiresnips on Epistemic Luck · 2010-02-08T15:20:20.290Z · score: 1 (3 votes) · LW · GW

Widening the spread of your mentors should reduce this bias, as long as you didn't choose mentors that agree with each other. Obviously, there isn't really enough time to be taught from a wide enough sample of perspectives to properly eliminate it.

Comment by wiresnips on The Second Best · 2009-07-27T23:42:01.858Z · score: 3 (5 votes) · LW · GW

I don't think libertarians have nearly as much to say about optimization as they do about regulation. The libertarian answer would be: "If you and Fred want to work something out, fine, but Sally has no business telling either of you what to do with your fish."

Comment by wiresnips on Absolute denial for atheists · 2009-07-17T07:05:12.612Z · score: 3 (7 votes) · LW · GW

I'm quite sure that the idea won't go away, if only because in at least some cases, it'll be flagrantly true- season with a dash of confirmation bias and serve hot.

Comment by wiresnips on Well-Kept Gardens Die By Pacifism · 2009-04-21T17:16:29.500Z · score: -1 (3 votes) · LW · GW

The mediocre button should be the same as simply not voting, I think. Especially since it'd have to be used quite often, no-one wants to be pushing a button for every mediocre comment. Maybe a similar effect could be reached if comments gradually accumulate negative karma with time?