Comments

Comment by Neph on Post ridiculous munchkin ideas! · 2014-06-16T11:57:33.597Z · LW · GW

I've got one. I actually came up with this on my own, but I'm gratified to see that EY has adopted it:

cashback credit cards. these things essentially reduce the cost of all expenditures by 1%.

...but that's not where they get munchkiny. where they get munchkiny is when you basically arbitrage two currencies of equal value.

as a hypothetical example, say you buy $1000 worth of dollar bills for $1000. by paying with the credit card, it effectively costs $990, since you get $10 back. you then take the bills to the bank and deposit them for $1000, making a $10 profit. wash, rinse, repeat.

the catch is, most of them have an annual fee attached, so it's a use-it-enough-or-it's-not-worth-it scenario (note, though, that for most people, if they use it for rent and nothing else, they'll save about the same as the annual fee). also, most of them need good credit to acquire, so if you're a starving college student with loans, kiss that goodbye. also, you cannot directly withdraw cash and get the 1%, so you have to come up with a way to efficiently exchange a purchasable resource for money.
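
putting the example and the annual-fee caveat together, here's a toy calculation; the 1% rate comes from the comment above, but the cycle count and the fee are purely made-up illustrative numbers.

# toy model of the cashback loop described above (illustrative numbers only)
cashback_rate = 0.01          # the 1% cashback assumed above
purchase_per_cycle = 1000.00  # hypothetical $1000 cash-equivalent purchase
annual_fee = 95.00            # hypothetical annual fee
cycles_per_year = 52          # hypothetical: run the loop once a week

profit_per_cycle = purchase_per_cycle * cashback_rate                # $10 per cycle
net_annual_profit = profit_per_cycle * cycles_per_year - annual_fee  # $425 per year
print(f"per cycle: ${profit_per_cycle:.2f}, net per year: ${net_annual_profit:.2f}")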

Comment by Neph on Post ridiculous munchkin ideas! · 2014-06-16T11:28:37.309Z · LW · GW

it definitely worked in at least one happily married case

so did "find god's match for you"

if we're looking at all the successful cases but none of the unsuccessful ones, of course we're going to get positive results. also, as positive results go, "at least one" success is hardly reassuring.

Comment by Neph on Siren worlds and the perils of over-optimised search · 2014-06-15T14:13:42.916Z · LW · GW
def checkMorals():
    [simulate philosophy student's brain]
    if [simulated brain's state is offended]:
        return False
    else:
        return True

if checkMorals():
    [keep doing AI stuff]

there. that's how we tell an AI capable of being an AI and capable of simulating a brain not to take actions which the simulated brain thinks offend against liberty, as implemented in Python.

Comment by Neph on How habits work and how you may control them · 2014-06-15T13:17:49.363Z · LW · GW

relevant: http://xkcd.com/906/

Comment by Neph on Rationality Quotes June 2014 · 2014-06-15T13:10:35.281Z · LW · GW

does anyone else find it ironic that we're using fictional evidence (a story about homeopathic writers that don't exist) to debate fictional evidence?

Comment by Neph on Is it immoral to have children? · 2013-10-25T03:27:32.552Z · LW · GW

I previously made a comment that mistakenly argued against the wrong thing, so to answer the real question: no.

the person who replied to my comment said "$50 to the AMF gets someone around an additional year of healthy life."

but here's the thing: there's no reason it couldn't give another person, possibly a new child, an additional year of healthy life.

a life is a life, and $50 is $50, so unless the charity is ridiculously more efficient than you could be (in which case you should be looking at how to become that efficient yourself), the utility would be about the same whether you give the $50 to AMF or do the same thing AMF does for someone who may or may not be your child.

however, with the having-a-child option there is one more life, and all the utility therein, than with the charity option: the people the charity would benefit exist in either case. and since we've just shown that it doesn't really matter whether you donate to AMF or do the same thing as AMF for someone yourself, that puts having a child at greater utility.
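
to make the structure of that argument explicit, here is a toy sketch; every number in it is purely illustrative, not something from the original comment.

# toy utility comparison for the argument above (all values illustrative)
u_baseline = 100.0  # utility of everyone who exists regardless of your choice
u_per_50 = 1.0      # extra utility one person gets from $50 of AMF-style help
u_new_life = 50.0   # hypothetical utility of a new child's existence itself

u_donate = u_baseline + u_per_50                    # option A: give $50 to AMF
u_have_child = u_baseline + u_per_50 + u_new_life   # option B: have a child, spend the $50 on them

# under these assumptions, option B wins whenever u_new_life > 0
assert u_have_child > u_donate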

Comment by Neph on Is it immoral to have children? · 2013-10-24T06:38:12.107Z · LW · GW

I'm going to assume that we're comparing "having a child" vs. "adopting a child" as opposed to "having a child" vs. "lolrandomly dumping a ton of money into [insert charity here]" ...although, arguably, the utility generated by adoption and donation should be pretty close to equal.

adopting a child has obvious positive utility benefits. there's the joy of getting out of the orphanage, the toys you buy him, the food he eats, etc. etc. etc.

having a child has those same benefits, BUT the child you would have adopted would still exist and would still have positive utility of his own (unless you are in a completely primitive society in which orphans have literally NO chance of success), thus resulting in higher utility.

Comment by Neph on Creating an Optimal Future · 2013-10-24T06:15:21.378Z · LW · GW

(puts on Morpheus glasses) what if I told you... many of this site's members are also members of those sites?

Comment by Neph on Confusion about science and technology · 2013-10-24T05:53:19.723Z · LW · GW

I know this may come off as a "no true scotsman" argument, but this is a bit different- bear with me. consider christianity (yes, I'm bringing religion into this, sort of...). in the beginning, we have a single leader preaching a set of morals that is (arguably) correct from a utilitarian standpoint, and calling all who follow that set "christians." by so doing, he created what Influence: Science and Practice would call "the trappings of morality." so basically, fast-forward a few hundred years, and we have people who think they can do whatever they like and it'll be morally right, so long as they wear a cross while doing it.

parallel to the current situation: we set up science- a set of rules that will always result in truth, if followed. by so doing, we created the trappings of right-ness. fast forward to now, and we have a bunch of people who think they can decide whatever they want, and it'll be right, so long as they wear a labcoat while doing it.

understand, that's a bit of a metaphor; in truth, these "scientists" (scoff) simply learned the rules of science by rote without really understanding what they mean. to them, reproducible results are just something nice to have as part of the ritual of science, instead of something completely necessary to get the right answer.

...all of this, by the way, is also said in one of the core sequences, but I'm not sure which one. I may reply to myself later with a link to the sequence in question.

Comment by Neph on The Modesty Argument · 2013-09-15T17:43:48.436Z · LW · GW

remember that Bayesian evidence never reaches 100%, which leaves middle ground. upon hearing another rationalist's viewpoint, instead of not shifting at all (as you suggest) or averaging your estimate and theirs together (as AAT suggests), why not adjust your viewpoint based on how likely the other rationalist is to have assessed correctly? i.e., you believe X is 90% likely to be true; the other rationalist believes X is 90% likely to be false. suppose this rationalist is very reliable, say in the neighborhood of 75% accurate: you should adjust down to "X is 75% likely to be 10% likely to be true, and 25% likely to be 90% likely to be true," or around 30% likely overall, assuming I did my math right. now suppose he's not very reliable, say a creationist talking about evolution, at around 10%: you should adjust to "X is 10% likely to be 10% likely and 90% likely to be 90% likely," or about 82%. ...of course this doesn't factor in your own fallibility.
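
a quick sketch of the mixture calculation the paragraph above is doing; the 75% and 10% reliability figures are the ones assumed there, and the function name is just for illustration.

def adjusted_probability(my_estimate, their_estimate, their_reliability):
    # weight their estimate by how likely they are to be right,
    # and keep your own estimate with the remaining weight
    return their_reliability * their_estimate + (1 - their_reliability) * my_estimate

# reliable interlocutor: 0.75 * 0.10 + 0.25 * 0.90 = about 0.30 (around 30%)
print(adjusted_probability(0.90, 0.10, 0.75))

# unreliable interlocutor: 0.10 * 0.10 + 0.90 * 0.90 = 0.82 (about 82%)
print(adjusted_probability(0.90, 0.10, 0.10))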

Comment by Neph on Open Thread, October 16-31, 2012 · 2012-10-24T08:56:00.555Z · LW · GW

hello, all. first post around here =^.^= I've been working my way through the core sequences, slowly but surely, and I ran into a question I couldn't solve on my own. please note that this question is probably the stupidest in the universe.

what is the difference between the Bayesian and Frequentist points of view?

let me clarify: in Eliezer Yudkowsky's explanation of Bayes' theorem, he presented an iconic problem:

"1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?"

to my understanding of the Bayesian perspective, the answer would be 7.8% and would represent the degree of belief that the subject has breast cancer.

to my understanding of the Frequentist perspective, the answer would be 7.8% and would represent the long-run frequency of cancer among subjects who tested positive.
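
for what it's worth, a minimal sketch of the arithmetic behind that 7.8% figure, using the numbers quoted above:

# Bayes' theorem applied to the mammography problem quoted above
p_cancer = 0.01              # prior: 1% of this group have breast cancer
p_pos_given_cancer = 0.80    # 80% of women with cancer test positive
p_pos_given_healthy = 0.096  # 9.6% of women without cancer test positive

p_positive = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy
p_cancer_given_positive = p_cancer * p_pos_given_cancer / p_positive
print(p_cancer_given_positive)  # about 0.078, i.e. roughly 7.8%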

a keen observer will see where my confusion comes from: on my way through the core sequences, I have heard much from the Bayesian side but nothing from the Frequentist side, which makes the Frequentist view seem artificially nonexistent.