Posts

What would an Incandescence about FAI look like? 2011-05-01T20:30:43.049Z · score: 1 (4 votes)
Seeking matcher for SIAI donation 2010-10-08T05:38:41.316Z · score: 2 (5 votes)

Comments

Comment by vnkket on Meetup : Love and Sex in Salt Lake City · 2013-02-06T06:50:29.861Z · score: 1 (1 votes) · LW · GW

I'm guessing RobertLumley's "Why would you downvote a meetup post?" caused people to upvote. I know I like to upvote when someone points out unnecessary-seeming downvotes.

Comment by vnkket on Just One Sentence · 2013-01-12T06:44:41.001Z · score: 1 (1 votes) · LW · GW

Oh, yes, I've agreed with you about that for a long time. The grandparent comment wasn't actually my (only) reminder.

Comment by vnkket on Just One Sentence · 2013-01-05T05:54:55.913Z · score: 6 (6 votes) · LW · GW

Speaking of copies, I keep meaning to write a LOCKSS plugin for LessWrong. This comment will be my note-to-self, or anyone else who wants to do it first.

Comment by vnkket on CFAR’s Inaugural Fundraising Drive · 2012-12-18T02:53:55.742Z · score: 4 (4 votes) · LW · GW

(interested in hearing how other donors frame allocation between SI and CFAR)

I still only donate to SI. It's great that we can supposedly aim the money at FAI now, due to the pivot towards research.

But I would also love to see EY's appeal to MoR readers succeed:

I don’t work for the Center for Applied Rationality and they don’t pay me, but their work is sufficiently important that the Singularity Institute (which does pay me) has allowed me to offer to work on Methods full-time until the story is finished if HPMOR readers donate a total of $1M to CFAR.

Comment by vnkket on Welcome to Less Wrong! (July 2012) · 2012-07-20T02:18:24.240Z · score: 2 (2 votes) · LW · GW

Upvoted for explaining how polls work.

Comment by vnkket on [LINK] Using procedural memory to thwart "rubber-hose cryptanalysis" · 2012-07-20T02:08:35.067Z · score: 5 (5 votes) · LW · GW

And after skimming the paper, the only thing I could find in response to your point is:

Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is under duress. Alternately, we might imagine other modes of detection of duress, including video monitoring, voice stress detection, and skin conductance monitoring [8, 16, 1]. The idea here would be to detect by out-of-band techniques the effects of coercion. Together with in-band detection of altered performance, we may be able to reliably detect coerced users.

Comment by vnkket on Questions on SI Research · 2012-06-02T04:36:55.255Z · score: 6 (6 votes) · LW · GW

How easily could the SI lose important data (like unpublished research) in, say, a fire or computer malfunction?

Comment by vnkket on Holiday giving thread · 2012-01-01T01:30:38.476Z · score: 2 (2 votes) · LW · GW

Thank you! And I just gave $200 to SI on top of the $50/month they automatically get from me.

Comment by vnkket on New 'landing page' website: Friendly-AI.com · 2011-12-13T07:59:54.706Z · score: 1 (1 votes) · LW · GW

In FAQ #6:

Friendly AI is a problem of cognitive science.

I think this un-argued-for assertion makes the site seem (and be) less rigorous. Unfortunately, I can't think of a better concise justification for why FAI researchers should read about cognitive science.

Comment by vnkket on New 'landing page' website: Friendly-AI.com · 2011-12-13T07:52:13.326Z · score: 0 (0 votes) · LW · GW

Nice!

From FAQ #5:

Independent researchers associated with the Singularity Institute: Daniel Dewey, Kaj Sotala, Peter de Blanc, Joshua Fox, Steve Rayhawk, and others

Would it be feasible to make this list exhaustive so that you can delete the "and others"? I think the "and others" makes the site seem less prestigious.

Comment by vnkket on Q&A with new Executive Director of Singularity Institute · 2011-11-08T04:39:20.772Z · score: 5 (7 votes) · LW · GW

Good question. And for people who missed it, this refers to money that was reported stolen on SI's tax documents a few years ago. (relevant thread)

Comment by vnkket on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics · 2011-11-07T02:08:38.125Z · score: 1 (1 votes) · LW · GW

You got me reading that chapter.

Comment by vnkket on Felicifia: a Utilitarianism Forum · 2011-11-04T19:38:27.592Z · score: 3 (5 votes) · LW · GW

Is this comment supposed to be pleasant or unpleasant?

Edit: I asked because "have a good day, thank you for posting" is often used to mean "shut up", but now that I've looked at your past comments, I assume you're being friendly.

Comment by vnkket on Content writing offer · 2011-10-01T21:14:38.768Z · score: 1 (1 votes) · LW · GW

How much payment are you offering for how much work?

Comment by vnkket on Hard problem? Hack away at the edges. · 2011-09-26T21:15:43.671Z · score: 5 (5 votes) · LW · GW

Of the things on your list, I'm most surprised by cognitive science and maybe game theory, unless you're talking about the fields' current insights rather than their expected future insights. In that case, I'm still somewhat surprised game theory is on this list. I'd love to learn what led you to this belief.

It's possible I only know the basics, so feel free to say "read more about what the fields actually offer and it'll be obvious if you've been on Less Wrong long enough."

Comment by vnkket on How to Save the World · 2011-08-30T02:37:46.546Z · score: 1 (1 votes) · LW · GW

I know the pain of being someone who has had sex before, and then being reminded of how awesome sex is without having an outlet for it at the time, and having it leave me feeling unbelievably miserable. I didn't want to leave even a single person reading my article in a place like that.

This thought is very much appreciated.

Comment by vnkket on Help Fund Lukeprog at SIAI · 2011-08-29T21:04:26.166Z · score: 2 (2 votes) · LW · GW

Because of this, I set up a $50/month pledge (instead of a one-time donation of $500), and I hope this drive causes lots of people to do the same.

Comment by vnkket on Really good education podcasts · 2011-07-31T21:54:44.092Z · score: 2 (2 votes) · LW · GW

If you're interested in how your body works, I recommend Gerald Cizadlo's lectures. They are biology classes for nursing students at an American religious college. Because of his pathophysiology and physiology podcasts, I'm now able to explain the way nerves transmit signals (for example).

(Edited; I originally called nerves insane.)

Comment by vnkket on The $125,000 Summer Singularity Challenge · 2011-07-31T00:22:39.713Z · score: 12 (12 votes) · LW · GW

Nice! And for anyone freaked out by the "current balance of my bank account" part, there's an explanation here.

Comment by vnkket on SIAI Fundraising · 2011-07-02T07:26:24.899Z · score: 2 (8 votes) · LW · GW

Is the Singularity Institute supporting her through your salary?

I hope you're not too put out by the rudeness of this question. I've decided that I'm allowed to ask because I'm a (small) donor. I doubt your answer will jeopardize my future donations, whatever it is, but I do have preferences about this.

(Also, it's very good to hear that you're taking health seriously! Not that I expected otherwise.)

Comment by vnkket on Is it possible to prevent the torture of ems? · 2011-07-02T06:24:37.283Z · score: 1 (1 votes) · LW · GW

I suspect that value systems that simply seek to minimize pain are poor value systems.

Fair enough, as long as you're not presupposing that our value systems -- which are probably better than "minimize pain" -- are unlikely to have strong anti-torture preferences.

As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It's at least obvious that anti-em-torture is harder to enforce, but are you thinking it's also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?

Comment by vnkket on Nonperson Predicates · 2011-07-02T06:17:17.509Z · score: 1 (1 votes) · LW · GW

Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable.

Your conjecture seems to follow from Rice's theorem, assuming the personhood of a running computation is a property of the partial function its algorithm computes. Also, I think you can prove your conjecture by taking a certain proof that the Halting Problem is undecidable and replacing 'halts' with 'is conscious'. I can track this down if you're still interested.

But this doesn't mess up Eliezer's plans at all: you can have "nonhalting predicates" that output "doesn't halt" or "I don't know", analogous to the nonperson predicates proposed here.
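A sketch of that "replace 'halts' with 'is conscious'" reduction, assuming (as Rice's theorem requires) that consciousness is a nontrivial property of the computation, and where $P$ is a hypothetical known-conscious program:

```latex
% Sketch, assuming consciousness is a nontrivial semantic property
% (the hypothesis Rice's theorem requires):
\begin{align*}
&\text{Suppose a total decider } C \text{ exists, with } C(M) = 1 \iff M \text{ runs a conscious computation.} \\
&\text{Given any machine } M \text{ and input } x, \text{ construct } W_{M,x} := \text{``run } M(x)\text{; then run } P\text{.''} \\
&\text{Then } C(W_{M,x}) = 1 \iff M(x) \text{ halts,} \\
&\text{so } C \text{ would decide the halting problem, which is impossible.}
\end{align*}
```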

Comment by vnkket on Is it possible to prevent the torture of ems? · 2011-07-01T04:57:01.143Z · score: 0 (0 votes) · LW · GW

I think you're right that many of the relevant empirical facts will be about your preferences. At risk of repeating myself, though, there are other facts that matter, like whether ems are conscious, how much it costs to prevent torture, and what better things we could be directing our efforts towards.

To partially answer your question ("how much effort is it worth to prevent the torture of ems?"): I sure do want torture to not happen, unless I'm hugely wrong about my preferences. So if preventing em torture turns out to not be worth a lot of effort, I predict it's because there are other bad things that can be more efficiently prevented with our efforts.

But I'm still not sure how you wanted your question interpreted. Are you, for example, wondering whether you care about ems as much as non-em people? Or whether you care about torture at all? Or whether the best strategy requires putting our efforts somewhere else, given that you care about torture and ems?

Comment by vnkket on Is it possible to prevent the torture of ems? · 2011-06-30T06:18:11.677Z · score: 2 (2 votes) · LW · GW

Are you unsure about whether em torture is as bad as non-em torture? Or do you just mean to express that we take em torture too seriously? Or is this a question about how much we should pay to prevent torture (of ems or not), given that there are other worthy causes that need our efforts?

Or, to ask all those questions at once: do you know which empirical facts you need to know in order to answer this?

Comment by vnkket on Making Reasoning Obviously Locally Correct · 2011-03-18T06:41:34.240Z · score: 0 (0 votes) · LW · GW

Is it easy to accidentally come up with criteria for "locally correct" that will still let us construct globally wrong results?

This comment was brought to you by a surface analogy with the Penrose triangle.

Comment by vnkket on Singularity Institute now accepts donations via Bitcoin · 2011-03-01T02:15:53.425Z · score: 3 (3 votes) · LW · GW

This makes me happy. Now, here's a question that is probably answered in the technical paper, but I don't have time to read it:

"New coins are generated by a network node each time it finds the solution to a certain calculational problem." What is this calculational problem? Could it easily serve some sinister purpose?
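For context, the "calculational problem" is hashcash-style proof-of-work: repeatedly hashing the block data with a changing nonce until the SHA-256 digest falls below a difficulty target. A minimal sketch (toy difficulty so it finishes quickly; real mining uses a full 256-bit target and much larger block data):

```python
import hashlib

def find_nonce(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose double-SHA-256 hash of (data + nonce)
    is below a target with `difficulty_bits` leading zero bits
    (a toy version of Bitcoin's proof-of-work)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this nonce "solves" the block
        nonce += 1

# Low difficulty so the search completes in well under a second:
nonce = find_nonce(b"example block", 12)
```

The work is easy to verify (one hash) but expensive to produce, which is the point: it rate-limits coin creation without the hashes themselves computing anything else.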

Comment by vnkket on Tallinn-Evans $125,000 Singularity Challenge · 2011-02-26T04:32:09.754Z · score: 6 (6 votes) · LW · GW

I donated $250 on the last day of the challenge.

Comment by vnkket on Efficient Charity: Do Unto Others... · 2010-12-27T23:49:03.888Z · score: 0 (0 votes) · LW · GW

I finally remembered to post this here

Good timing, though: now this is fresh in our minds during the challenge.

Comment by vnkket on How to Save the World · 2010-12-12T07:10:06.963Z · score: 0 (0 votes) · LW · GW

Oh, we agree, I was just unclear about my objection. Fixed.

Comment by vnkket on How to Save the World · 2010-12-12T07:09:10.110Z · score: 0 (0 votes) · LW · GW

Upvoted for pointing out why people who I agree with were disagreeing with me.

Comment by vnkket on How to Save the World · 2010-12-12T07:08:08.297Z · score: 0 (0 votes) · LW · GW

Oh, oops, we were talking about different things. I think you're right to mention matching donations (especially after hearing your anecdote), but I wonder if there's room for a warning like, "It's more important to pick the right charity than to get someone to match your donation. (Do both if you can, of course.)"

Comment by vnkket on Kazakhstan's president urges scientists to find the elixir of life · 2010-12-12T05:51:51.513Z · score: 1 (1 votes) · LW · GW

Good idea!

Comment by vnkket on How to Save the World · 2010-12-01T23:40:36.835Z · score: 3 (3 votes) · LW · GW

Thank you for this post! One thing:

  1. Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.

If GiveWell's cost-benefit calculations are remotely right, you should downplay matching donations even more than just making this item second-last. I fear that matching donations are so easy to think about that they will distract people from picking good charities.

Comment by vnkket on Seeking matcher for SIAI donation · 2010-10-16T03:14:26.353Z · score: 1 (1 votes) · LW · GW

Thank you! And your commit-then-wait method sounds obviously good.

Comment by vnkket on Seeking matcher for SIAI donation · 2010-10-15T01:52:54.878Z · score: 0 (0 votes) · LW · GW

You are awesome.

And let me know if you want more evidence of my little donation.

Comment by vnkket on Seeking matcher for SIAI donation · 2010-10-10T04:35:55.423Z · score: 0 (0 votes) · LW · GW

And if they take long enough, I'll match that person myself. :)

Comment by vnkket on Seeking matcher for SIAI donation · 2010-10-09T03:38:28.079Z · score: 1 (1 votes) · LW · GW

I see no reason to disagree with you. (By the way, the other time I did this was non-backwards.)

A note for potential matchers: if you match this donation, you'll make me more likely to donate in the future. (I'll be like, "Not only would I be helping out, but I could probably get someone to match this donation as well. ") I was relying on this being obvious.

Comment by vnkket on Five-minute rationality techniques · 2010-08-12T16:10:48.704Z · score: 1 (1 votes) · LW · GW

Candidate: Hold off on proposing solutions.

This article is way more useful than the slogan alone, and it's short enough to read in five minutes.

Comment by vnkket on Five-minute rationality techniques · 2010-08-12T16:09:15.683Z · score: 2 (2 votes) · LW · GW

You changed my mind. I'm worried my candidate will hurt more than it helps because people will conflate "bad idea generators" with "disreputable idea generators" -- they might think, "that idea came to me in my sleep, so I guess that means I'm supposed to ignore it."

A partially-fixed candidate: If an idea was generated by a clearly bad method, the idea is probably bad.

Comment by vnkket on Open Thread, August 2010 · 2010-08-10T21:11:34.704Z · score: 1 (1 votes) · LW · GW

I know. That made me happy.

Comment by vnkket on Open Thread, August 2010 · 2010-08-10T19:43:18.690Z · score: 0 (0 votes) · LW · GW

Thank you very much. I matched it.

I honestly wouldn't be able to tell if you faked your confirmation e-mail, unless there's some way for random people to verify PayPal receipt numbers. So don't worry about the screenshot. Hopefully I'll figure out some convenient authentication method that works for the six donations in this scheme.

Comment by vnkket on Five-minute rationality techniques · 2010-08-10T16:53:31.878Z · score: 5 (5 votes) · LW · GW

Candidate: Don't pursue an idea unless it came to your attention by a method that actually finds good ideas. (Paraphrased from here.)

Comment by vnkket on Open Thread, August 2010 · 2010-08-07T00:56:40.182Z · score: 0 (0 votes) · LW · GW

Is anyone willing to be my third (and last) matching donor?

Comment by vnkket on Open Thread, August 2010 · 2010-08-03T02:46:12.442Z · score: 2 (4 votes) · LW · GW

Since this site has such a high sanity waterline, I'd like to see comments about important topics even if they aren't directly rationality-related. Has anyone figured out a way to satisfy both me and RobinZ without making this site any less convenient to contribute to?

(Upvoted for explaining your objection.)

Comment by vnkket on Open Thread: May 2010, Part 2 · 2010-07-27T22:18:00.517Z · score: 1 (1 votes) · LW · GW

I have nothing to add, but I want to tell you I'm happy you wrote this post, so that you don't get discouraged by the lack of comments.

Comment by vnkket on The President's Council of Advisors on Science and Technology is soliciting ideas · 2010-07-22T15:05:17.107Z · score: 1 (1 votes) · LW · GW

I'm disappointed but still very happy you made those comments about catastrophic risks and aging.

Comment by vnkket on Open Thread: July 2010, Part 2 · 2010-07-21T21:28:45.490Z · score: 12 (14 votes) · LW · GW

Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?

(Edited for clarity.)

Comment by vnkket on What Intelligence Tests Miss: The psychology of rational thought · 2010-07-12T16:50:18.542Z · score: 2 (2 votes) · LW · GW

Your reviews have convinced me to read this book in hopes of using it to introduce people to rationality.

Comment by vnkket on What Intelligence Tests Miss: The psychology of rational thought · 2010-07-11T23:26:46.492Z · score: 1 (1 votes) · LW · GW

(I'm guessing there's a missing word in "but that doesn't [negate?] the fact that people overvalue it".)

Comment by vnkket on Open Thread: July 2010 · 2010-07-02T21:59:14.588Z · score: 3 (3 votes) · LW · GW

I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment:

In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.

Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.