Comments

Comment by Autodidact420 on Engineering Religion · 2015-12-11T20:56:15.863Z · LW · GW

Disclosure: I haven't read the full thread of comments.

I'm pretty sure you're a bit off on the Islamic side of things though.

a valid source of laws and doesn't think that God has given human kings the right to make laws the way Christianity does with the divine right of kings.

Kings' claim to rule seems to be fairly similar to that of an Islamic caliph, who is supposed to be a prophet selected by God himself and is able to create laws etc. as He would want, basically.

Comment by Autodidact420 on Maximizing Donations to Effective Charities · 2015-12-11T05:44:46.816Z · LW · GW

I'm in the middle of writing an essay due tomorrow morning, so pardon the slightly off-topic and short reply (I'll get back to you on the other matters later), but I am particularly curious about one topic that, as far as I can tell, comes up here a lot in discussions of existential risk: AI and its relation to existential risk. By the sounds of it I may hold an extremely unpopular opinion: while I acknowledge that AI could pose an existential risk, my personal view (which I don't have the time to discuss here, or the points required to make a full post on) is that an AI is probably our best bet at mitigating existential risk and maximizing the utility, security, and knowledge I mentioned previously. Does that put me at odds with the general consensus on the issue here?

Comment by Autodidact420 on Mark Zuckerberg plans to give away 99% of his facebook wealth over his lifetime · 2015-12-10T20:02:00.563Z · LW · GW

I've heard other criticisms that he is just going to give it to a charity fund in a similar manner to other billionaires, who place their children in control of the charity and then use it as a way to pass on wealth to their kids without any taxation. I'm not entirely sure of the credibility of the claim that Mark is doing this, but I do know that this scheme has been tried before and has worked for others.

Comment by Autodidact420 on Maximizing Donations to Effective Charities · 2015-12-10T19:54:11.047Z · LW · GW

I agree to some extent, depending on how efficient advertising for a specific charity through a meta-charity is. I see what you're saying now after re-reading it; to be honest, I had only very briefly skimmed it last night/this morning. Out of curiosity, do you have any stats on how effective Intentional Insights is at gathering more money for these other charities than is given to them directly?

Also, how does Intentional Insights decide whether something is mitigating existential risk? I'm not overly familiar with the topic, but donations to the Against Malaria Foundation and the others mentioned don't sound like the specific sort of charity I'm most interested in.

Comment by Autodidact420 on Maximizing Donations to Effective Charities · 2015-12-10T08:34:04.962Z · LW · GW

I don't have a lot of time, so this comment will be rather short and won't fully address your post. That said, I tend to side with the idea presented in this article: http://www.nickbostrom.com/astronomical/waste.html

Essentially, I fail to see how anything other than advancing technology at the present could be the most effective route. How would you defend your claims of effective charity against the idea that advancing technology and minimizing existential risks instead of giving to those currently in need are ultimately the most effective ways for humans to raise utility long-term?

EDIT: I suppose it would be worth noting here that I have a fairly specific value set in place already. Basically, I favor a particular view of utilitarianism with three component values I've decided (and would argue) are each important: intelligence, happiness, and security. In my thinking these three form a sort of triangle, with intelligence [and knowledge] leading to "higher happiness" and allowing for "higher security" (intentionally adapting to threats), while also being intrinsically valuable. Security, in a general sense, basically means the ability to resist threats and exist for an extended time, bolstering happiness and knowledge by preserving them for extended periods. Happiness, of course, is the typical utilitarian ideal and is inherently good. And, as previously mentioned, knowledge allows higher-level happiness while security allows prolonged happiness.

Given this model, or a more standard one (since I don't have time to fully articulate my idea), the charities you listed seem somewhat ineffective compared to more direct attempts at increasing security and knowledge, which I would argue are the two values we should currently focus on increasing, even at the cost of present-day happiness.

This isn't to diminish what you're doing, as it is still much better than not giving anything at all, or giving to less effective charities, given your goal; it's more that you'd need to convince me to donate to these charities instead of otherwise using my money.

Comment by Autodidact420 on Engineering Religion · 2015-12-10T08:10:41.171Z · LW · GW

I feel like intelligence is similar to logic or grammar and essentially faces the Dunning-Kruger effect full force. As they state in the abstract of their work: "Their lack of skill deprives them not only of the ability to produce correct responses, but also of the expertise necessary to surmise that they are not producing them."

If you're able to "fake" being intelligent, you need both the ability to produce the "intelligent" response and the ability to recognize when you're not being intelligent. So if you don't have those, you can't really fake it... I mean, unless you're moderately skilled and meticulously research and craft your responses specifically for effect, but even then that means you're able to do so effectively...

Comment by Autodidact420 on Engineering Religion · 2015-12-10T06:15:45.805Z · LW · GW

I'm new here and not sure exactly what you expect when someone posts a link, but it seems like you guys are generally intelligent, so:

http://www.enotes.com/research-starters/sociological-theories-religion-structural

It sounds like what you're asking (with regard to the function of religion) is something that has been covered a great deal by the structural-functionalist approach in sociology. If you're willing to read up on it, there's a lot of information out there on the topic. Hope that helps! If you'd prefer I answer your question here more directly, feel free to ask; I'm in the middle of finals and haven't read up too much on the topic myself, so I'd have to do some research before getting back to you.

Comment by Autodidact420 on Smarter humans, not artificial intellegence · 2015-12-10T06:11:04.741Z · LW · GW

IQ testing is controversial in some ways but supported in others.

In support of IQ, some forms of IQ test ('g'-loaded tests) tend to reproduce similar scores for the same individual. Further, this score is linked to various life outcomes - higher numbers of patents created, higher academic success rates, higher income, less time in jail, etc. On top of that, IQ has been found to be heritable through twin studies. A lot of the literature suggests that whatever IQ measures, even if it's not intelligence, it's useful to have in Western societies.

But here's why it's controversial. Firstly, there is potential gender and racial bias: certain races tend to do better than others on average, even controlling for socioeconomic status and the like, and men tend to be at the extreme ends of the scale, with many more falling into the high-scoring ranges (2+ standard deviations) than women, as well as into the low-scoring ranges. Secondly, language barriers are a large problem for any verbal-based IQ test, restricting those tests' ability to accurately gauge a test taker whose native tongue is not English. Thirdly, there are arguments about whether a single number can accurately represent all of human intelligence. In tandem with this, there is debate about what constitutes intelligence and how we should group it. Should emotional intelligence count? Should physical (kinetic) intelligence count? Should math count as much as verbal? Should problem solving count? Etc.

Against the last point, and ignoring the less traditional sorts of intelligence (e.g. kinetic [bodily movement] intelligence), 'g'-loaded tests support the idea that even if you think you're bad at math, a score well above average on 'g'-loaded verbal or logical reasoning tests means you'll likely still be above average at math. So even if you're deficient or exceptionally good in one area, there is some underlying factor that helps explain at least some of the difference across the traditional realms, and that underlying factor is what 'g'-loaded tests are supposed to assess.

Also worth noting are the diminishing returns after about the second standard deviation. Although IQ does continue to have increasing effects in some areas, the benefits in other areas start to drop off. It has been argued that IQ can help identify a limiting factor, but beyond that limiting factor (around 120-130 IQ on a scale with SD 15) it stops being as useful for prediction. To put it another way, "genius" has stopped being linked to a specific IQ. Instead, it's thought that a minimum IQ of around 120-130 is needed to be a genius, but there is no set IQ at which you are automatically a genius. You could have a 180 IQ and not be a genius, or a 120 IQ and be a genius.
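To put those standard-deviation figures in concrete terms, here is a minimal sketch (my own illustration, not part of the original comment) of how IQ scores map onto z-scores and population percentiles, assuming the conventional scaling of mean 100 and standard deviation 15 on a normal distribution:

```python
# Minimal sketch: mapping IQ scores to standard deviations (z-scores) and
# percentiles, assuming the conventional normal model with mean 100, SD 15.
from math import erf, sqrt

MEAN, SD = 100, 15  # assumed conventional IQ scaling

def z_score(iq: float) -> float:
    """Standard deviations above (or below) the population mean."""
    return (iq - MEAN) / SD

def percentile(iq: float) -> float:
    """Fraction of a normal population scoring at or below this IQ."""
    return 0.5 * (1 + erf(z_score(iq) / sqrt(2)))

for iq in (100, 120, 130, 145):
    print(f"IQ {iq}: z = {z_score(iq):+.1f}, percentile ~ {100 * percentile(iq):.1f}%")
```

Under that assumption, the "2+ standard deviations" mentioned above corresponds to an IQ of 130 (roughly the 98th percentile), and the 120-130 band sits around the 91st to 98th percentiles.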

As far as I know (I might be wrong), IQ is especially useful at identifying exceptionally low-skilled individuals.

So it's largely controversial as a claim to represent a universal intelligence; it's less controversial as some sort of useful construct that predicts a great deal of life outcomes in Western society with decent accuracy, at least within the groups it was designed to test (Western-cultured English speakers in particular).