Comments
I'm guessing RobertLumley's "Why would you downvote a meetup post?" caused people to upvote. I know I like to upvote when someone points out unnecessary-seeming downvotes.
Oh, yes, I've agreed with you about that for a long time. The grandparent comment wasn't actually my (only) reminder.
Speaking of copies, I keep meaning to write a LOCKSS plugin for LessWrong. This comment will be my note-to-self, or anyone else who wants to do it first.
(interested in hearing how other donors frame allocation between SI and CFAR)
I still only donate to SI. It's great that we can supposedly aim the money at FAI now, due to the pivot towards research.
But I would also love to see EY's appeal to MoR readers succeed:
I don’t work for the Center for Applied Rationality and they don’t pay me, but their work is sufficiently important that the Singularity Institute (which does pay me) has allowed me to offer to work on Methods full-time until the story is finished if HPMOR readers donate a total of $1M to CFAR.
Upvoted for explaining how polls work.
And after skimming the paper, the only thing I could find in response to your point is:
Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is under duress. Alternately, we might imagine other modes of detection of duress, including video monitoring, voice stress detection, and skin conductance monitoring [8, 16, 1]. The idea here would be to detect by out-of-band techniques the effects of coercion. Together with in-band detection of altered performance, we may be able to reliably detect coerced users.
How easily could the SI lose important data (like unpublished research) in, say, a fire or computer malfunction?
Thank you! And I just gave $200 to SI on top of the $50/month they automatically get from me.
In FAQ #6:
Friendly AI is a problem of cognitive science.
I think this un-argued-for assertion makes the site seem (and be) less rigorous. Unfortunately, I can't think of a better concise justification for why FAI researchers should read about cognitive science.
Nice!
From FAQ #5:
Independent researchers associated with the Singularity Institute: Daniel Dewey, Kaj Sotala, Peter de Blanc, Joshua Fox, Steve Rayhawk, and others
Would it be feasible to make this list exhaustive so that you can delete the "and others"? I think the "and others" makes the site seem less prestigious.
Good question. And for people who missed it, this refers to money that was reported stolen on SI's tax documents a few years ago. (relevant thread)
You got me reading that chapter.
Is this comment supposed to be pleasant or unpleasant?
Edit: I asked because "have a good day, thank you for posting" is often used to mean "shut up", but now that I've looked at your past comments, I assume you're being friendly.
How much payment are you offering for how much work?
Of the things on your list, I'm most surprised by cognitive science and maybe game theory, unless you're talking about the fields' current insights rather than their expected future insights. In that case, I'm still somewhat surprised game theory is on this list. I'd love to learn what led you to this belief.
It's possible I only know the basics, so feel free to say "read more about what the fields actually offer and it'll be obvious if you've been on Less Wrong long enough."
I know the pain of being someone who has had sex before, and then being reminded of how awesome sex is without having an outlet for it at the time, and having it leave me feeling unbelievably miserable. I didn't want to leave even a single person reading my article in a place like that.
This thought is very much appreciated.
Because of this, I set up a $50/month pledge (instead of a one-time donation of $500), and I hope this drive causes lots of people to do the same.
If you're interested in how your body works, I recommend Gerald Cizadlo's lectures. They are biology classes for nursing students at an American religious college. Because of his pathophysiology and physiology podcasts, I'm now able to explain the way nerves transmit signals (for example).
(Edited; I originally called nerves insane.)
Nice! And for anyone freaked out by the "current balance of my bank account" part, there's an explanation here.
Is the Singularity Institute supporting her through your salary?
I hope you're not too put out by the rudeness of this question. I've decided that I'm allowed to ask because I'm a (small) donor. I doubt your answer will jeopardize my future donations, whatever it is, but I do have preferences about this.
(Also, it's very good to hear that you're taking health seriously! Not that I expected otherwise.)
I suspect that value systems that simply seek to minimize pain are poor value systems.
Fair enough, as long as you're not presupposing that our value systems -- which are probably better than "minimize pain" -- are unlikely to have strong anti-torture preferences.
As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It's at least obvious that anti-em-torture is harder to enforce, but are you thinking it's also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?
Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable.
Your conjecture seems to follow from Rice's theorem, assuming the personhood of a running computation is a property of the partial function its algorithm computes. Also, I think you can prove your conjecture by taking a certain proof that the Halting Problem is undecidable and replacing 'halts' with 'is conscious'. I can track this down if you're still interested.
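Here's a minimal sketch of that substitution in Python, purely for illustration. The names (`make_halting_decider`, `is_conscious`, `conscious_program`) are hypothetical, and the only extra assumptions are that at least one program is conscious and that the program that never halts is not:

    # Sketch: if a total decider `is_conscious` existed, we could decide halting.
    # This mirrors the standard halting-problem / Rice's-theorem reduction with
    # 'halts' swapped for 'is conscious'. All names are hypothetical.
    def make_halting_decider(is_conscious, conscious_program):
        def halts(program, argument):
            def combined(y):
                program(argument)            # runs forever iff `program` never halts on `argument`
                return conscious_program(y)  # otherwise behaves exactly like `conscious_program`
            # If `program` halts on `argument`, `combined` computes the same partial
            # function as `conscious_program` (conscious, by assumption); otherwise it
            # computes the empty function (not conscious, by assumption). So deciding
            # the consciousness of `combined` decides halting -- a contradiction.
            return is_conscious(combined)
        return halts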
But this doesn't mess up Eliezer's plans at all: you can have "nonhalting predicates" that output "doesn't halt" or "I don't know", analogous to the nonperson predicates proposed here.
I think you're right that many of the relevant empirical facts will be about your preferences. At risk of repeating myself, though, there are other facts that matter, like whether ems are conscious, how much it costs to prevent torture, and what better things we could be directing our efforts towards.
To partially answer your question ("how much effort is it worth to prevent the torture of ems?"): I sure do want torture to not happen, unless I'm hugely wrong about my preferences. So if preventing em torture turns out to not be worth a lot of effort, I predict it's because there are other bad things that can be more efficiently prevented with our efforts.
But I'm still not sure how you wanted your question interpreted. Are you, for example, wondering whether you care about ems as much as non-em people? Or whether you care about torture at all? Or whether the best strategy requires putting our efforts somewhere else, given that you care about torture and ems?
Are you unsure about whether em torture is as bad as non-em torture? Or do you just mean to express that we take em torture too seriously? Or is this a question about how much we should pay to prevent torture (of ems or not), given that there are other worthy causes that need our efforts?
Or, to ask all those questions at once: do you know which empirical facts you need to know in order to answer this?
Is it easy to accidentally come up with criteria for "locally correct" that will still let us construct globally wrong results?
This comment was brought to you by a surface analogy with the Penrose triangle.
This makes me happy. Now, here's a question that is probably answered in the technical paper, but I don't have time to read it:
"New coins are generated by a network node each time it finds the solution to a certain calculational problem." What is this calculational problem? Could it easily serve some sinister purpose?
I donated $250 on the last day of the challenge.
I finally remembered to post this here
Good timing, though: now this is fresh in our minds during the challenge.
Oh, we agree, I was just unclear about my objection. Fixed.
Upvoted for pointing out why people who I agree with were disagreeing with me.
Oh, oops, we were talking about different things. I think you're right to mention matching donations (especially after hearing your anecdote), but I wonder if there's room for a warning like, "It's more important to pick the right charity than to get someone to match your donation. (Do both if you can, of course.)"
Good idea!
Thank you for this post! One thing:
- Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.
If GiveWell's cost-benefit calculations are remotely right, you should downplay matching donations even more than just making this item second-last. I fear that matching donations are so easy to think about that they will distract people from picking good charities.
Thank you! And your commit-then-wait method sounds obviously good.
You are awesome.
And let me know if you want more evidence of my little donation.
And if they take long enough, I'll match that person myself. :)
I see no reason to disagree with you. (By the way, the other time I did this was non-backwards.)
A note for potential matchers: if you match this donation, you'll make me more likely to donate in the future. (I'll be like, "Not only would I be helping out, but I could probably get someone to match this donation as well.") I was relying on this being obvious.
Candidate: Hold off on proposing solutions.
This article is way more useful than the slogan alone, and it's short enough to read in five minutes.
You changed my mind. I'm worried my candidate will hurt more than it helps because people will conflate "bad idea generators" with "disreputable idea generators" -- they might think, "that idea came to me in my sleep, so I guess that means I'm supposed to ignore it."
A partially-fixed candidate: If an idea was generated by a clearly bad method, the idea is probably bad.
I know. That made me happy.
Thank you very much. I matched it.
I honestly wouldn't be able to tell if you faked your confirmation e-mail, unless there's some way for random people to verify PayPal receipt numbers. So don't worry about the screenshot. Hopefully I'll figure out some convenient authentication method that works for the six donations in this scheme.
Candidate: Don't pursue an idea unless it came to your attention by a method that actually finds good ideas. (Paraphrased from here.)
Is anyone willing to be my third (and last) matching donor?
Since this site has such a high sanity waterline, I'd like to see comments about important topics even if they aren't directly rationality-related. Has anyone figured out a way to satisfy both me and RobinZ without making this site any less convenient to contribute to?
(Upvoted for explaining your objection.)
I have nothing to add, but I want to tell you I'm happy you wrote this post, so that you don't get discouraged by the lack of comments.
I'm disappointed but still very happy you made those comments about catastrophic risks and aging.
Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?
(Edited for clarity.)
Your reviews have convinced me to read this book in hopes of using it to introduce people to rationality.
(I'm guessing there's a missing word in "but that doesn't [negate?] the fact that people overvalue it".)
I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment:
In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.
Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.