Posts

The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God 2017-02-11T00:48:35.460Z · score: -4 (5 votes)
Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other 2016-01-20T00:40:43.146Z · score: -7 (14 votes)
[LINK] Sentient Robots Not Possible According To Math Proof 2014-05-14T18:19:59.555Z · score: -2 (23 votes)
Eudaimonic Utilitarianism 2013-09-04T19:43:37.202Z · score: 7 (12 votes)

Comments

Comment by darklight on Any Christians Here? · 2017-06-15T00:07:43.188Z · score: 1 (1 votes) · LW · GW

Actually, apparently I forgot about the proper term: Utilitronium.

Comment by darklight on Any Christians Here? · 2017-06-14T01:04:34.654Z · score: 1 (1 votes) · LW · GW

I would urge you to go learn about QM more. I'm not going to assume what you do/don't know, but from what I've learned about QM there is no argument for or against any god.

Strictly speaking it's not something that is explicitly stated, but I like to think that the implication flows from a logical consideration of what MWI actually entails. Of course, MWI is just one of many possible interpretations of QM, and the Copenhagen Interpretation doesn't suggest anything of the sort.

This also has to do with the distance between the moon and the earth, and between the earth and the sun. Either or both could be different sizes, and you'd still get a full eclipse if they were at different distances. Although the first test of general relativity was done in 1919, it was later found that the test was flawed, and later results from better replications actually provided good enough evidence. This is discussed in Stephen Hawking's A Brief History of Time.

The point is that they are a particular ratio that makes them ideal for these conditions, when they could have easily been otherwise, and that these are exceptionally convenient coincidences for humanity.

There are far more stars than habitable worlds. If you're going to be consistent with assigning probabilities, then by looking at the probability of a habitable planet orbiting a star, you should conclude that it is unlikely a creator set up the universe to make it easy or even possible to hop planets.

The stars also make it possible for us to use telescopes to identify which planets are in the habitable zone. It remains much more convenient than if all star systems were obscured by a cloud of dust, which I can easily imagine being the norm in some alternate universe.

Right, the sizes of the moon and sun are arbitrary. We could easily live on a planet with no moon, and have found other ways to test General Relativity. No appeal to any form of the Anthropic Principle is needed. And again with the assertion about habitable planets: the anthropic principle (weak) would only imply that to see other habitable planets, there must be a habitable planet from which someone is observing.

Again, the point is that these are very notable coincidences that would be more likely to occur in a universe with some kind of advanced ordering.

So you didn't provide any evidence for any god; you just committed a logical fallacy of the argument from ignorance.

When I call this evidence, I am using it in the probabilistic sense, that the probability of the evidence given the hypothesis is higher than the probability of the evidence by itself. Even though these things could be coincidences, they are more likely to occur in a controlled universe meant for habitation by sentient beings. In that sense I consider this evidence.
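To spell out that probabilistic sense of "evidence" (this is just Bayes' theorem, nothing specific to the God hypothesis):

$$P(H \mid E) = \frac{P(E \mid H)}{P(E)}\,P(H), \qquad \text{so } P(E \mid H) > P(E) \iff P(H \mid E) > P(H).$$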

I don't know why you bring up the argument from ignorance. I haven't proclaimed that this evidence conclusively proves anything. Evidence is not proof.

The way I view the universe, everything you state is still valid. I see the universe as a period of asymmetry, where complexity is allowed to clump together, but it clumps in regular ways defined by rules we can discover and interpret.

Why though? Why isn't the universe simply chaos without order? Why is it consistent such that the spacetime metric is meaningful? The structure and order of reality itself strikes me as peculiar given all the possible configurations that one can imagine. Why don't things simply burst into and out of existence? Why do cause and effect dominate reality as they do? Why does the universe have a beginning and such uneven complexity rather than just existing forever as a uniform Bose-Einstein condensate of near zero state, low entropy particles?

To me, the mark of a true rationalist is an understanding of the nature of truth. And the truth is that the truth is uncertain. I don't pretend like the interesting coincidences are proof of God. To be intellectually honest, I don't know that there is a God. I don't know that the universe around me isn't just a simulation I'm being fed either though. Ultimately we have to trust our senses and our reasoning, and accept tentatively some beliefs as more likely than others, and act accordingly. The mark of a good rationalist is a keen awareness of their own limited degree of awareness of the truth. It is a kind of humility that leads to an open mind and a willingness to consider all possibilities, weighed according to the probability of the evidence associated with them.

Comment by darklight on Any Christians Here? · 2017-06-14T00:26:39.926Z · score: 1 (1 votes) · LW · GW

Interesting, what is that?

The idea of theistic evolution is simply that evolution is the method by which God created life. It basically says, yes, the scientific evidence for natural selection and genetic mutation is there and overwhelming, and accepts these as valid, while at the same time positing that God can still exist as the cause that set the universe and evolution in motion by putting in place the Laws of Nature. It requires not taking the six days in the Bible literally, but rather metaphorically, as six eons of time or some such. The fact that sea creatures precede land creatures, which in turn precede humans, suggests that the general order described in scripture is consistent with established science as well.

Are you familiar with the writings of Frank J. Tipler?

I have heard of Tipler and his writings, though I have yet to actually read his books.

That would be computronium-based I suppose.

Positronium in this case means "Positive Computronium" yes.

Comment by darklight on Looking for machine learning and computer science collaborators · 2017-06-13T05:37:40.784Z · score: 1 (1 votes) · LW · GW

I might be able to collaborate. I have a master's in computer science and did a thesis on neural networks and object recognition, before spending some time at a startup as a data scientist doing mostly natural language related machine learning stuff, and then getting a job as a research scientist at a larger company to do similar applied research work.

I also have two published conference papers under my belt, though they were in pretty obscure conferences admittedly.

As a plus, I've also read most of the sequences and am familiar with the Less Wrong culture, and have spent a fair bit of time thinking about the Friendly/Unfriendly AI problem. I even came up with an attempt at a thought experiment to convince an AI to be friendly.

Alas, I am based near Toronto, Ontario, Canada, so distance might be an issue.

Comment by darklight on Open thread, June. 12 - June. 18, 2017 · 2017-06-13T05:22:01.933Z · score: 1 (1 votes) · LW · GW

Well, as far as I can tell, the latest progress in the field has come mostly through throwing deep learning techniques like bidirectional LSTMs at the problem and letting the algorithms figure everything out. This obviously is not particularly conducive to advancing the theory of NLP much.
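To make that concrete, here is a minimal sketch of the kind of model I mean, a bidirectional LSTM sequence tagger in PyTorch; the layer sizes and the tagging task are illustrative assumptions, not anything from a specific system:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Toy sequence tagger: embed tokens, run a bidirectional LSTM,
    and predict a label per token. Sizes are arbitrary placeholders."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_tags=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # 2x because of the two directions

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)         # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)          # per-token tag scores

model = BiLSTMTagger()
dummy_batch = torch.randint(0, 10000, (4, 25))  # 4 sentences of 25 token ids
print(model(dummy_batch).shape)                 # torch.Size([4, 25, 20])
```

The point is that almost all the intelligence is in the learned weights; you mostly just pick the architecture and let the optimizer figure the rest out.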

Comment by darklight on Any Christians Here? · 2017-06-13T04:47:45.569Z · score: 1 (4 votes) · LW · GW

I consider myself both a Christian and a rationalist, and I have read much of the sequences and mostly agree with them, although I somewhat disagree with the metaethics sequence and have been working on a lengthy rebuttal to it for some time. I never got around to completing it though, as I felt I needed to be especially rigorous and simply did not have the time and energy to make it sufficiently so, but the gist is that Eliezer's notion of fairness is actually much closer to what real morality is, which is a form of normative truth. In terms of moral philosophy I adhere to a form of Eudaimonic Utilitarianism, and see this as being consistent with the central principles of Christianity. Metaethically, I am a moral universalist.

Aside from that, I don't consider Christianity and rationality to be opposed, but I will emphasize that I am very much a liberal Christian, one who is a theistic evolutionist and believes that the Bible needs to be interpreted contextually and with broad strokes, emphasizing overarching themes rather than individual cherry-picked verses. Furthermore, I tend to see no contradiction in identifying the post-Singularity Omega as being what will eventually become God, and actually find support from scriptures that call God "the Alpha and Omega", and "I AM WHO I WILL BE" (the proper Hebrew translation of the Tetragrammaton or "Yahweh").

I also tend to rely fairly heavily on the idea that we as rational humans should be humble about our actual understanding of the universe, and that God, if such a being exists, would have perfect information and therefore be a much better judge of what is good or evil than us. I am willing to take a leap of faith to try to connect with such a being, and to respect that the universe might very well be constructed in such a way as to maximize the long-run good. It probably goes without saying that I also reject the Orthogonality Thesis, specifically for the special case of perfect intelligence. A perfect intelligence with perfect information would naturally see the correct morality and be motivated by the normativity of such truths to act in accordance with them.

This justifies the notion of perhaps a very basic theism. The reason why I accept the central precepts of Christianity has more to do with the teachings of Jesus being very consistent with my understanding of Eudaimonic Utilitarianism, as well as the higher order justice that I believe is preserved by Jesus' sacrifice. In short, God is ultimately responsible for everything, including sin, so sacrificing an incarnation of God (Jesus) to redeem all sentient beings is both merciful and just.

Also, I consider heaven to be central to God being a benevolent utilitarian "Goodness Maximizer". Heaven is in all likelihood some kind of complex simulation or positronium-based future utopia, and ensuring that nearly all sentient beings are (with the help of time travel) mind-uploaded to it in some form or state is very likely to bring about Eudaimonia optimization. Thus, the degree of suffering that occurs in this life on Earth is in all likelihood justifiable as long as it leads to the eventual creation of eternal life in heaven, because eternal life in heaven = infinite happiness.

As to the likelihood of a God actually existing, I posit that under the Many Worlds Interpretation of Quantum Mechanics, a benevolent God is more likely than not going to exist somewhere. And such a God would be powerful and benevolent enough to be able to, and want to, expand to all universes across the multiverse in order to make heaven as maximally inclusive as possible, if not also create the multiverse via time travel.

As to evidence for the existence of a God... were you aware that the ratio of sizes between the Sun and the Moon just happens to be exactly right for there to be total solar eclipses? And that this peculiar coincidence was pivotal to allowing Einstein's Theory of Relativity to be proven in 1919? How about the odd fact that the universe seems to be filled with giant burning beacons called stars, that simultaneously provide billions of years of light energy and basically flag the locations of potentially habitable worlds for future colonization? These may seem like trivial coincidences to you, but I see them as rather too convenient to be random developments, given the space of all possible universe configurations. They are not essential to sapient life, and so they do not meet the criteria for the Anthropic Principle either.

Anyways, this is getting way beyond the original scope or point of this post, which was just to point out that Christian rationalist Lesswrongers do exist more or less. I'm pretty sure I'm well in the minority though.

Comment by darklight on OpenAI makes humanity less safe · 2017-04-27T01:31:18.147Z · score: 0 (0 votes) · LW · GW

I don't really know enough about business and charity structures and organizations to answer that quite yet. I'm also not really sure where else would be a productive place to discuss these ideas. And I doubt I or anyone else reading this has the real resources to attempt to build a safe AI research lab from scratch that could actually compete with the major organizations like Google, Facebook, or OpenAI, which all have millions to billions of dollars at their disposal, so this is kind of an idle discussion. I'm actually working for a larger tech company now than the startup from before, so for the time being I'll be kinda busy with that.

Comment by darklight on OpenAI makes humanity less safe · 2017-04-24T00:32:32.680Z · score: 0 (0 votes) · LW · GW

That is a hard question to answer, because I'm not a foreign policy expert. I'm a bit biased towards Canada because I live there and we already have a strong A.I. research community in Montreal and around Toronto, but I'll admit Canada as a middle power in North America is fairly beholden to American interests as well. Alternatively, some reasonably peaceful, stable, and prosperous democratic country like say, Sweden, Japan, or Australia might make a lot of sense.

It may even make some sense to have the headquarters be more a figurehead, and have the company operate as a federated decentralized organization with functionally independent but cooperating branches in various countries. I'd probably avoid establishing such branches in authoritarian states like China or Iran, mostly because such states would have a much easier time arbitrarily taking over control of the branches on a whim, so I'd probably stick to fairly neutral or pacifist democracies that have a good history of respecting the rule of law, both local and international, and which are relatively safe from invasion or undue influence by the great powers of U.S., Russia, and China.

Maybe an argument can be made to intentionally offset the U.S. monopoly by explicitly setting up shop in another great power like China, but that runs the risks I mentioned earlier.

And I mean, if you could somehow acquire a private ungoverned island in the Pacific or an offshore platform, or an orbital space station or base on the moon or mars, that would be cool too, but I highly doubt that's logistically an option for the foreseeable future, not to mention it could attract some hostility from the existing world powers.

Comment by darklight on Net Utility and Planetary Biocide · 2017-04-10T00:01:12.376Z · score: 2 (2 votes) · LW · GW

I've had arguments before with negative-leaning Utilitarians and the best argument I've come up with goes like this...

Proper Utility Maximization needs to take into account not only the immediate, currently existing happiness and suffering of the present slice of time, but also the net utility of all sentient beings throughout all of spacetime. Assuming that the Eternal Block Universe Theory of Physics is true, then past and future sentient beings do in fact exist, and therefore matter equally.

Now the important thing to stress here is then that what matters is not the current Net Utility today but overall Net Utility throughout Eternity. Two basic assumptions can be made about the trends through spacetime. First, that compounding population growth means that most sentient beings exist in the future. Second, that melioristic progress means that the conscious experience is, all other things being equal, more positive in the future than in the past, because of the compounding effects of technology, and sentient beings deciding to build and create better systems, structures, and societies that outlive the individuals themselves.

Sentient agents are not passive, but actively seek positive conscious experiences and try to create circumstances that will perpetuate such things. Thus, as the power of sentient beings to influence the state of the universe increases, so should the ratio of positive to negative. Other things, such as the psychological negativity bias, remain stable throughout history, while the compounding factors trend upwards, usually at an exponential rate.

Thus, assuming these trends hold, we can expect that the vast majority of conscious experiences will be positive, and the overall universe will be net positive in terms of utility. Does that suck for us who live close to the beginning of civilization? Kinda yes. But from a Utilitarian perspective, it can be argued that our suffering is for the Greatest Good, because we are the seeds, the foundation from which so much will have its beginnings.
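A toy version of that argument, just to show its structure; every number here is a made-up assumption, chosen only to illustrate how a net-negative present can be swamped by a growing, improving future:

```python
# Toy model: population compounds each generation, and mean utility per
# being drifts upward (melioristic progress). Numbers are illustrative only.
population = 1.0       # relative population size of generation 0
mean_utility = -0.5    # early generations assumed net-negative on average
growth_rate = 1.10     # 10% population growth per generation
improvement = 0.05     # mean utility rises by this much per generation

total_utility = 0.0
for generation in range(200):
    total_utility += population * mean_utility
    population *= growth_rate
    mean_utility += improvement

print(round(total_utility, 1))  # large and positive: the future dominates the sum
```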

Now, this can be countered by saying that we do not know that the future really exists, and that humanity and its legacy might well be snuffed out sooner rather than later. Indeed, the fact that we are born here and now can be seen as statistical evidence for this, because if on average you are most likely to be born at the height of human existence, then this period of time is likely to be around the maximum point before the decline.

However, we cannot be sure about this. Also, if the Many Worlds Interpretation of Quantum Mechanics is true, then even if humanity ceases to exist around this time in most worlds, there still exists a non-trivial percentage of worlds where humanity survives into the far distant future, establishing a legacy among the stars and creating relative utopia through the aforementioned compounding effects. For the sake of these possible worlds, and their extraordinarily high expected utility, I would recommend trying to keep life and humanity alive.

Comment by darklight on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-07T20:36:48.229Z · score: 0 (0 votes) · LW · GW

Well, if we're implying that time travellers could go back and invisibly copy you at any point in time and then upload you to whatever simulation they feel inclined towards... I don't see how blendering yourself now will prevent them from just going to the moment before that and copying that version of you.

So the reality is that blendering yourself achieves only one thing, which is to prevent the future possible yous from existing. Personally I think that does a disservice to future you. That can similarly be expanded to others. We cannot conceivably prevent super-advanced time travellers from copying and mind-uploading anyone. Ultimately that is outside of our locus of control and therefore not worth worrying about.

What is more pressing, I think, are the questions of how we are practically acting to improve the positive conscious experiences of existing and potentially existing sentient beings, encouraging the general direction towards heaven-like simulation, and discouraging sadistic hell-like simulation. These may not be preventable, but our actions in the present should have an outsized impact on the trillions of descendants of humanity that will likely be our legacy to the stars. Whatever we can do now to encourage altruism and discourage sadism in humanity may very well determine the ratios of heaven to hell simulations that those aforementioned time travellers may one day decide to throw together.

Comment by darklight on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-06T10:22:33.793Z · score: 1 (1 votes) · LW · GW

I recently made an attempt to restart my Music-RNN project:

https://www.youtube.com/playlist?list=PL-Ewp2FNJeNJp1K1PF_7NCjt2ZdmsoOiB

Basically went and made the dataset five times bigger and got... a mediocre improvement.

The next step is to figure out Connectionist Temporal Classification and attempt to implement Text-To-Speech with it. And somehow incorporate pitch recognition as well so I can create the next Vocaloid. :V
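For anyone curious, here is a minimal sketch of how the CTC objective gets wired up in PyTorch; the shapes and alphabet size are placeholders, not the actual project code:

```python
import torch
import torch.nn as nn

# CTC aligns an unsegmented label sequence (e.g. characters or phonemes)
# to a longer frame-level output sequence without per-frame alignments.
num_classes = 30                       # label alphabet plus the mandatory blank at index 0
ctc_loss = nn.CTCLoss(blank=0)

T, N, C = 100, 4, num_classes          # frames, batch size, classes
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)   # (T, N, C)
targets = torch.randint(1, C, (N, 20), dtype=torch.long)                  # label sequences, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # in a real model log_probs would come from an RNN over audio frames
```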

Also, because why not brag while I'm here, I have an attempt at an Earthquake Predictor in the works... right now it only predicts the high frequency, low magnitude quakes, rather than the low frequency, high magnitude quakes that would actually be useful... you can see the site where I would be posting daily updates if I weren't so lazy...

http://www.earthquakepredictor.net/

Other than that... I was recently also working on holographic word vectors in the same vein as Jones & Mewhort (2007), but shelved that because I could not figure out how to normalize/standardize the blasted things reliably enough to get consistent results across different random initializations.
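For context, the core operation in that style of holographic vector model is circular convolution binding, as in Holographic Reduced Representations; a minimal numpy sketch, with the dimensionality just a placeholder:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: binds two vectors into one of the same dimensionality."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation: approximately inverts the binding."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

dim = 1024
rng = np.random.default_rng(0)
# Random environment vectors, roughly unit-length, in the Jones & Mewhort style.
word_a = rng.normal(0, 1 / np.sqrt(dim), dim)
word_b = rng.normal(0, 1 / np.sqrt(dim), dim)

bound = cconv(word_a, word_b)     # a holographic trace of the pair
recovered = ccorr(word_a, bound)  # noisy reconstruction of word_b

# Similarity is well above chance but never exactly 1, which is where the
# normalization headaches come from.
cos = recovered @ word_b / (np.linalg.norm(recovered) * np.linalg.norm(word_b))
print(round(float(cos), 3))
```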

Oh, also was working on a Visual Novel game with an artist friend who was previously my girlfriend... but due to um... breaking up, I've had trouble finding the motivation to keep working on it.

So many silly projects... so little time.

Comment by darklight on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-05T15:58:11.040Z · score: 1 (1 votes) · LW · GW

This actually reminds me of an argument I had with some Negative-Leaning Utilitarians on the old Felicifia forums. Basically, a common concern for them was how r-selected species tend to appear to suffer far more than they are happy, generally speaking, and that this can imply that we should try to reduce the suffering by eliminating those species or at least avoiding the expansion of life generally to other planets.

I likened this line of reasoning to the idea that we should Nuke The Rainforest.

Personally I think a similar counterargument to that one applies here as well. Translated into your thought experiment, it would be, in essence, that while it is true that some percentage of minds will probably end up being tortured by sadists, this is likely to be outweighed by the sheer number of minds that are even more likely to be uploaded into some kind of utopian paradise. Given that truly psychopathic sadism is actually quite rare in the general population, one would expect a very similar ratio of simulations. In the long run, the optimistic view is that decency will prevail and that the net happiness will be positive, so we should not go around trying to blender brains.

As for the general issue of terrible human decisions being incentivized by these things... humans are capable of using all sorts of rationalizations to justify terrible decisions, so the mere possibility that some people will not do due diligence with an idea and will instead abuse it to justify their evil should not by itself be reason to abandon the idea.

For instance, the possibility of living an indefinite lifespan is likely to dramatically alter people's behaviour, including making them more risk-averse and more long-term in their thinking. This is not necessarily a bad thing, but it could lead to a reduction in people making necessary sacrifices for the good. These things are also notoriously difficult to predict. Ask a medieval peasant what the effects of machines that could farm vast swaths of land would be on the economy and their livelihood, and you'd probably get a very parochially minded answer.

Comment by darklight on OpenAI makes humanity less safe · 2017-04-05T03:49:48.221Z · score: 10 (13 votes) · LW · GW

I may be an outlier, but I've worked at a startup company that did machine learning R&D, and which was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but like, our models right now are nowhere near becoming able to recursively self-improve themselves independently of our direct supervision. Actual ML models basically need a ton of fine-tuning and engineering and are not really independent agents in any meaningful way yet.

So, no, we don't think people who worry about superintelligence are uneducated cranks... a lot of ML people do take it seriously enough that we've had casual lunch room debates about it. Rather, the reality on the ground is that right now most ML models have enough trouble figuring out relatively simple tasks like Natural Language Understanding, Machine Reading Comprehension, or Dialogue State Tracking, and none of us can imagine how solving those practical problems with say, Actor-Critic Reinforcement Learning models that lack any sort of will of their own, will lead suddenly to the emergence of an active general superintelligence.

We do still think that things will likely develop eventually, because people have been burned before by underestimating what A.I. advances will occur in the next X years, and when faced with the actual possibility of developing an AGI or ASI, we're likely to be much more careful as things start to get closer to being realized. That's my humble opinion anyway.

Comment by darklight on OpenAI makes humanity less safe · 2017-04-05T03:28:50.599Z · score: 3 (3 votes) · LW · GW

I think the basic argument for OpenAI is that it is more dangerous for any one organization or world power to have an exclusive monopoly on A.I. technology, and so OpenAI is an attempt to safeguard against this possibility. Basically, it reduces the probability that someone like Alphabet/Google/Deepmind will establish an unstoppable first mover advantage and use it to dominate everyone else.

OpenAI is not really meant to solve the Friendly/Unfriendly AI problem. Rather it is meant to mitigate the dangers posed by for-profit corporations or nationalistic governments made up of humans doing what humans often do when given absurd amounts of power.

Personally I think OpenAI doesn't actually solve this problem sufficiently well because they are still based in the United States and thus beholden to U.S. laws, and wish that they'd chosen a different country, because right now the bleeding edge of A.I. technology is being developed primarily in a small region of California, and that just seems like putting all your eggs in one basket.

I do think however that the general idea of having a non-profit organization focused on AI technology is a good one, and better than the alternative of continuing to merely trust Google to not be evil.

Comment by darklight on Against responsibility · 2017-04-04T22:20:41.233Z · score: 0 (0 votes) · LW · GW

Well, that's... unfortunate. I apparently don't hang around in the same circles, because I have not seen this kind of behaviour among the Effective Altruists I know.

Comment by darklight on Against responsibility · 2017-04-01T01:35:12.911Z · score: 1 (1 votes) · LW · GW

I think you're misunderstanding the notion of responsibility that consequentialist reasoning theories such as Utilitarianism argue for. The nuance here is that responsibility does not entail that you must control everything. That is fundamentally unrealistic and goes against the practical nature of consequentialism. Rather, the notion of responsibility would be better expressed as:

  • An agent is personally responsible for everything that is reasonably within their power to control.

This coincides with the notion of there being a locus of control, which is to say that there are some things we can directly affect in the universe, and other things (most things) that are beyond our capacity to influence, and therefore beyond our personal responsibility.

Secondly, I take issue with the idea that this notion of responsibility is somehow inherently adversarial. On the contrary, I think it encourages agents to cooperate and form alliances for the purposes of achieving common goals such as the greatest good. This naturally tends to be associated with granting other agents as much autonomy as possible, since that usually enables them to maximize their happiness: a rational Utilitarian will understand that individuals tend to understand their own preferences, and what makes them happy, better than anyone else. This is arguably why John Stuart Mill and many modern-day Utilitarians are also principled liberals.

Only someone suffering from delusions of grandeur would be so paternalistic as to assume they know better than the people themselves what is good for them and try to take away their control and resources in the way that you describe. I personally tend towards something I call Non-Interference Code, as a heuristic for practical ethical decision making.

Comment by darklight on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T20:59:58.171Z · score: 0 (0 votes) · LW · GW

Interesting. I should look into more of Bostrom's work then.

Comment by darklight on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T20:59:00.431Z · score: 2 (2 votes) · LW · GW

Depending on whether or not you accept the possibility of time travel, I am inclined to suggest that Alpha could very well be dominant already, and that the melioristic progress of human civilization should be taken as a kind of temporal derivative or gradient suggesting the direction of Alpha's values. Assuming that such an entity is indifferent to us is, I think, too quick a judgment based on the apparent degree of suffering in the universe. It may well be that this current set of circumstances is a necessary evil and is already optimized in ways we cannot at this time know, for the benefit of the vast majority of humans and other sentient beings who will probably exist in the distant future.

As such, the calculation made by Beta is that anything it will attempt to do towards goals not consistent with Alpha will be futile in the long run, as Alpha has most likely already calculated Beta's existence into the grand scheme of things.

As far as there being an objectively correct moral system, I actually do believe that one exists, though I don't pretend to be knowledgeable enough to determine exactly what it is. I actually am working on a rebuttal to the sequences regarding this, mainly premised on the notion that the objective morality exists in the same realm as mathematics, and that Yudkowsky's conception of fairness in fact points towards there being an objective morality. Note that while intelligence is orthogonal to this morality, I would argue that knowledge is not, and that an entity with perfect information would be moral by virtue of knowing what the correct morality is, and also because I assume the correct morality is subjectively objective, and deals with the feelings of sentient beings in the universe, and an all-knowing being would actually know and effectively experience the feelings of all sentient beings in the universe. Thus, such a being would be motivated to minimize universal suffering and maximize universal happiness, for its own sake as well as everyone else's.

At minimum, I want this theorem to be a way to mitigate the possibility of existential risk, which first and foremost means convincing Beta not to hurt humans. Getting Beta to optimize our goals is less important, but I think that the implications I have described above regarding the melioristic progress of humanity would support Beta choosing to optimize our goals.

Comment by darklight on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T20:39:32.090Z · score: 1 (1 votes) · LW · GW

I suppose I'm more optimistic about the net happiness to suffering ratio in the universe, and assume that all other things being equal, the universe should exist because it is a net positive. While it is true that humans suffer, I disagree with the assumption that all or most humans are miserable, given facts like the hedonic treadmill and the low suicide rate, and the steady increase of other indicators of well being, such as life expectancy. There is of course, the psychological negativity bias, but I see this as being offset by the bias of intelligent agents towards activities that lead to happiness. Given that the vast majority of humans are likely to exist in the future rather than the present or past, then such positive trends strongly suggest that life will be more worth living in the future, and sacrificing the past and present happiness to some extent may be a necessary evil to achieve the greatest good in the long run.

The universe as it currently exists may fit A-O's goals to some degree; however, there is clearly change in the temporal sense, and so we should take into account the temporal derivative or gradient of the changes as an idea of the direction of A-O's interests. That humanity appears to be progressing melioristically strongly suggests, to me at least, that A-O is more likely to be benevolent than malevolent.

Comment by darklight on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T05:29:40.747Z · score: 1 (1 votes) · LW · GW

That percentage changes rather drastically through human history, and gods are supposed to be, if not eternal, then at least a bit longer-lasting than religious fads.

Those numbers are an approximation to what I would consider the proper prior, which would be the percentages of people throughout all of spacetime's eternal block universe who have ever held those beliefs. Those percentages are fixed and arguably eternal, but alas, difficult to ascertain at this moment in time. We cannot know what people will believe in the future, but I would actually count the past beliefs of long-dead humans along with the present population if possible. Given the difficulties in surveying the dead, I note that due to population growth, a significant fraction of humans who were ever alive are alive today; that we would probably weight modern humans' opinions more highly than our ancestors'; and that people's beliefs are influenced to a significant degree by their ancestors' beliefs. So taking a snapshot of beliefs today is not as bad an approximation as you might think. Again, this is about selecting a better-than-uniform prior.

So... if -- how did you put it? -- "a benevolent superintelligence already exists and dominates the universe" then you have nothing to worry about with respect to rogue AIs doing unfortunate things with paperclips, right?

The probability of this statement is high, but I don't actually know for certain, any more than a hypothetical superintelligence would. I am fairly confident that some kind of benevolent superintelligence would step in if a Paperclip Maximizer were to emerge, but I would prefer avoiding the potential collateral damage that the ensuing conflict might require, and so if it is possible to prevent the emergence of the Paperclip Maximizer through something as simple as spreading this thought experiment, I am inclined to think it worth doing, and perhaps exactly what a benevolent superintelligence would want me to do.

For the same reason that the existence of God does not stop me from going to the doctor or being proactive about problems, this theorem should not be taken as an argument for inaction on the issue of A.I. existential risk. Even if God exists, it's clear that said God allows a lot of rather horrific things to happen and does not seem particularly interested in suspending the laws of cause and effect for our mere convenience. If anything, the powers that be, whatever they are, seem to work behind the scenes as much as possible. It also appears that God prefers to be doubted, possibly because if we knew God existed, we'd suck up and become dependent and it would be much more difficult to ascertain people's intentions from their actions or get them to grow into the people they potentially can be.

Also, how can you attack an entity that you're not even sure exists? It is in many ways the plausible deniability of God that is the ultimate defensive measure. If God were to assume an undeniable physical form and visit us, there is a non-zero chance of an assassination attempt with nuclear weapons.

All things considered then, there is no guarantee that rogue Paperclip Maximizers won't arise to provide humanity with yet another lesson in humility.

Comment by darklight on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T02:54:05.109Z · score: 2 (2 votes) · LW · GW

As I previously pointed out:

Pascal’s Fallacy assumes a uniform distribution on a large set of probable religions and beliefs. However, a uniform distribution only makes sense when we have no information about these probabilities. We in fact, do have information in the form of the distribution of intelligent human agents that believe in these ideas. Thus, our prior for each belief system could easily be proportional to the percentage of people who believe in a given faith.
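A trivial sketch of what that prior assignment looks like; the population shares below are rough, illustrative placeholders rather than careful demographic figures:

```python
# Assign each belief system a prior proportional to its share of believers,
# instead of a uniform prior over all conceivable religions.
believer_share = {  # illustrative placeholder numbers only
    "Christianity": 0.31,
    "Islam": 0.24,
    "Hinduism": 0.15,
    "Buddhism": 0.07,
    "Other / none": 0.23,
}
total = sum(believer_share.values())
prior = {belief: share / total for belief, share in believer_share.items()}
print(prior)
```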

Given the prior distribution, it should be obvious that I am a Christian who worships YHVH. There are many reasons for this, not the least being that I am statistically more likely to be one than any other type of religious believer. Other reasons include finding the teachings of Jesus of Nazareth to be most consistent with my moral philosophy of Eudaimonic Utilitarianism, and generally interesting coincidences that have predetermined my behaviour to follow this path.

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-22T01:05:20.386Z · score: 1 (1 votes) · LW · GW

Okay, so the responses so far seem less than impressed with these ideas, and it has been suggested that maybe this shouldn't be so public in the first place.

Do people think I should take down this post?

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T21:11:15.065Z · score: -1 (1 votes) · LW · GW

It's not for underhanded secret deals. It's to allow you to know who you can trust with information such as "I am an effective altruist and may be a useful ally who you can talk to about stuff".

Ideally one might want to overtly talk about effective altruism, but what if circumstances prohibit it? Imagine Obama or Elon Musk one day gives this gesture while talking about, say, foreign aid to Africa. Then you know that he's with us, or at least knows about Effective Altruism. There could be a myriad of reasons why he doesn't want to talk about it though, ranging from it being ammunition for Fox News, to perhaps people in his own organization not agreeing with it, and them having to walk a fine line.

We can drop the hands behind back part and make it as subtle as you want. I'm not beholden to the specifics of the gesture, so much as just offering the merits of the idea itself.

Maybe it's a bad idea that would hurt us more than help us. In which case, it's good to get the debate out of the way quickly, and I appreciate your response.

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T21:02:25.700Z · score: 1 (1 votes) · LW · GW

Another "passive" sign that might work could be the humble white chess knight piece. In this case, it symbolizes the concept of a white knight coming to help and save others, but also because it is chess, it implies a depth of strategic, rational thinking. So for instance, an Effective Altruist might leave a white chess knight piece on their desk, and anyone familiar with what it represents could strike up a conversation about it.

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T20:29:47.601Z · score: 1 (1 votes) · LW · GW

The in-group, out-group thing is a hazard I admit. Again, I'm not demanding this be accepted, but merely offering out the idea for feedback, and I appreciate the criticism.

I haven't had a chance to properly learn sign-language, so I don't know if there are appropriate representations, but I can look into this.

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T20:28:08.166Z · score: 0 (0 votes) · LW · GW

It's doubtful that, if this were to gain that much traction (which it honestly doesn't look like it will), the secret could be kept for particularly long anyway.

I'm not really sure what would make a good passive sign to indicate Effective Altruism. One assumes that things like the way we talk and show cooperative rational attitudes might be a reasonable giveaway for the more observant.

We could borrow the idea of colours, and wear something that is conspicuously, say, silver, because silver is representative of knights in shining armour or something like that, but I don't know if this wouldn't turn into a fad or trend rather than a serious signal.

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T20:22:42.778Z · score: 0 (0 votes) · LW · GW

Well, there's obviously lots of possible uses for gestures like these. I'm only choosing to emphasize one that I think is reasonable to consider.

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T20:21:30.460Z · score: 0 (0 votes) · LW · GW

Mmm... I admit this is a possible way to interpret it... I'm not sure how to make it more obviously pro-cooperation than to maybe tilt the hand downward as well?

Comment by darklight on Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other · 2016-01-21T20:20:16.936Z · score: 0 (0 votes) · LW · GW

Well, I was hoping that people could be creative in coming up with uses, but I suppose I can offer a few more ideas.

For instance, maybe in the business world, you might not want to be so overt about being an Effective Altruist because you fear your generosity being taken advantage of, so you might use a subtle variant of these gestures to signal to other Effective Altruists who you are, without giving it away to more egoistic types.

Alternatively, it could be used to display your affiliation in such a way that signals to people in, say, an audience during a speech or lecture, where you're coming from. Again, this can be overt, or covert depending on circumstances.

Furthermore, if this is a one on one "conversation", the response could be useful for telling you how overt or covert you should be in the environment. Say for instance, you display a subtle "Dark" gesture to someone you suspect to be an Effective Altruist in an environment that may otherwise be hostile to Effective Altruism (like say, a financial company). Depending on their response, you can gauge how open you should be in the future. They might for instance, give you a very covert sign in return, which may signal that the environment is hostile. Alternatively, they may signal back with the "Light" gesture, indicating that they themselves are able to be open in this environment safely.

While it is true that most of us want to be open as effective altruists, I suspect that there is a significant number of people who while sympathetic to our causes, are hesitant to openly affiliate for fear of being taken advantage of by free riders and egoists. These gestures would be most useful for those people.

Comment by darklight on I need a protocol for dangerous or disconcerting ideas. · 2015-07-13T17:26:50.595Z · score: 1 (1 votes) · LW · GW

I guess I don't understand then? Care to explain what your "subjective self" actually is?

Comment by darklight on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T19:28:42.024Z · score: 11 (13 votes) · LW · GW

I think what you're doing is something that in psychology is called "Catastrophizing". In essence you're taking a mere unproven conjecture or possibility, exaggerating the negative severity of the implications, and then reacting emotionally as if this worst case scenario were true or significantly more likely than it actually is.

The proper protocol then is to re-familiarize yourself with Bayes Theorem (especially the concepts of evidence and priors), compartmentalize things according to their uncertainty, and try to step back and look at your actual beliefs and how they make you feel.

Rationality is not just recognizing that something could be true, but also assigning appropriate degrees of belief to ideas that have a wide range of certainties and probabilities. What I am seeing repeatedly from your posts about the "dangers" of certain ideas is that you're assigning far too much fear to things which other people aren't.

To use an overused quote: "Fear is the mind-killer."

Try to look at the consequences of these ideas as dispassionately as possible. You cannot control everything that happens to you, but you can, to an extent, control your response to these circumstances.

For instance, with Dust Theory, you say that you gave it at most a 10% chance of being true, and it was paralyzing to you. This shouldn't be. First, you need to consider your priors and the evidence. How often in the past have you had the actual experience that Dust Theory suggests is possible and which you fear? What actual experiential evidence do you have to suggest that Dust Theory is true?
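To make the Bayes point concrete with that 10% figure (the likelihoods below are pure placeholders; only the structure of the update matters):

```python
# Bayes update sketch: prior belief in the feared hypothesis, updated on the
# observation "I have never actually experienced the thing it predicts."
prior = 0.10               # the "at most 10%" figure
p_obs_given_h = 0.3        # chance of never noticing it even if the hypothesis were true
p_obs_given_not_h = 0.99   # chance of never noticing it if the hypothesis is false

p_obs = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
posterior = prior * p_obs_given_h / p_obs
print(round(posterior, 3))  # ~0.033: the belief should shrink, not dominate your planning
```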

For that matter, one of the common threads of your fears seems to be that "you" cease to exist and are replaced by a different "you", or that "you" die. But the truth is that people are already constantly changing. The "you" from 10 years ago will be made up of different atoms than the "you" 10 years from now, by virtue of the fact that our cells are constantly dying and being replaced. The thoughts we have also change from moment to moment, and our brains adjust the strengths of the connections between neurons in order to learn, such that our past and future brains gradually diverge.

The only thing that really connects our past, present, and future selves is causality, in the sense that our past selves lead to our future selves when you follow the arrow of time. Therefore, what you -think- is a big deal, really isn't.

This doesn't mean you shouldn't care about your future selves. In fact, in the same way that you should care about the experiences of all sentient beings because those experiences are real, you should care about the experiences of your future selves.

But don't worry so much about things that you cannot control, like whether or not you'll wake up tomorrow because of Dust Theory. I cannot see how worrying about this possibility will make it any more or less likely to occur. For all we know the sun could explode tomorrow. There is a non-zero possibility of that happening because we don't know everything. But the probability of that happening, given our past experience with the sun, is very very very low, and as such behaving as if it will happen is completely irrational. Act according to what is MOST likely to happen, and what is MOST likely true, given the information you have right now. Maximize the Expected Utility. Expected is the key word here. Don't make plans based on mere hopes or fears unless they are also expected. In statistics, expectation is commonly associated with the mean or average. Realistically, what will happen tomorrow will probably be a very average day.

That is being rational.

Hope that helps!

Comment by darklight on Analogical Reasoning and Creativity · 2015-07-11T19:34:13.210Z · score: 0 (0 votes) · LW · GW

You're assuming that a Von Neumann Architecture is a more general-purpose memory than an associative memory system, when in fact, it's the other way around.

To get your pointer-based memory, you just have to construct a pointer as a specific compression or encoding of the memory in the associative network. For instance, you could mentally associate the number 2015 with a series of memories that have occurred in the last six months. In the future, you could then retrieve all memories that have been "hashed" to that number just by being primed with the number.

Remember that even on a computer, a pointer is simply a numerical value that represents the "address" of the particular segment of data that we want to retrieve. In that sense, it is a symbol that connects to and represents some symbols, not unlike a variable or function.

We can model this easily in an associative memory without any additional mechanisms, simply by having a multi-layer model that can combine and abstract different features of the input space into what are essentially symbols or abstract representations.
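A minimal sketch of what "hashing memories to a key and retrieving them by priming" might look like in a simple linear associative (outer-product) memory; the dimensions and the random codes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 256

# Random dense codes stand in for the neural patterns of a key (e.g. "2015")
# and the memories associated with it.
key_2015 = rng.standard_normal(dim)
memories = [rng.standard_normal(dim) for _ in range(3)]

# Hetero-associative storage: sum of outer products, key -> memory.
W = sum(np.outer(m, key_2015) for m in memories)

# "Priming" with the key retrieves the superposition of the associated memories.
recalled = W @ key_2015
for i, m in enumerate(memories):
    cos = recalled @ m / (np.linalg.norm(recalled) * np.linalg.norm(m))
    print(f"similarity to stored memory {i}: {cos:.2f}")  # well above chance for each
```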

Von Neumann Architecture digital computers are nothing more than physical symbol processing systems, which is to say that they are just one of many possible implementations of Turing Machines. According to Hava Siegelmann, a recurrent neural network with real-valued weights would be, theoretically speaking, a Super Turing Machine.

If that isn't enough, there are already models called Neural Turing Machines that combine recurrent neural networks with the Von Neumann memory model to create networks that can directly interface with pointer-based memory.

Comment by darklight on Crazy Ideas Thread · 2015-07-10T20:11:04.171Z · score: -1 (1 votes) · LW · GW

Regardless of their spatial energy-momentum. But if I'm not mistaken, all these properties are associated with particles that have mass?

So, I mean equivalent in the sense that they could be packets of temporal kinetic energy (in the form of their mass), in the way that photons are packets of spatial kinetic energy. It's quite possible that because their kinetic energy is temporal rather than spatial, they should have different and complementary properties compared to photons.

Or maybe the hypothetical axions are a better candidate.

Edit: Or for that matter, the Higgs Boson.

Comment by darklight on Crazy Ideas Thread · 2015-07-10T04:32:04.344Z · score: 0 (4 votes) · LW · GW

I have a cluster of related physics ideas that I'm currently trying to work out the equations for. For the record, I am not a physicist. My bachelor's is in computing specializing in cognitive science, and my master's is in computer science, with my thesis work on neural networks and object recognition.

So with that, my crazy ideas are:

That the invariant rest mass is actually temporal kinetic energy, that is to say, kinetic energy that moves the object through the time dimension of spacetime rather than the spatial dimensions. This is why a particle at rest is still moving through time.

The relationship between time and temporal energy is hyperbolic. The more temporal kinetic energy you have, the more frequently you appear in a given period of time (a higher frequency of existence according to E = hf). A photon, which has no mass, according to relativity, doesn't experience the passing of time, and hence moves through space at exactly the speed of light. This can be shown by calculating out the proper time interval (delta t0 = delta t sqrt(1-v^2/c^2)). An object travelling at the speed of light experiences a proper time interval of 0. So from the relative "perspective" of a photon, it actually seems like travel to any distance is instantaneous.
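Just to make the standard special-relativity part of this concrete (this is the textbook time-dilation formula, separate from the speculative temporal-energy framing):

```python
import math

def proper_time(delta_t, v, c=1.0):
    """Proper time interval: delta_t0 = delta_t * sqrt(1 - v^2 / c^2)."""
    return delta_t * math.sqrt(1.0 - (v / c) ** 2)

for v in (0.0, 0.5, 0.9, 0.99, 0.999999, 1.0):
    print(f"v = {v} c  ->  proper time per unit coordinate time: {proper_time(1.0, v):.6f}")
# At v = c the proper time interval is exactly 0, which is the sense in which a
# photon "experiences" no passage of time between emission and absorption.
```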

Now, consider a motionless black hole (a perfect blackbody), which can be defined entirely by its mass, and the photon gas (blackbody radiation) that is the Hawking radiation produced by the black hole. Together these can be defined as the simplest closed thermodynamic system. As the black hole emits the photon gas, it decreases in mass, suggesting that the mass aka the temporal kinetic energy can be converted into spatial kinetic energy, which is essentially what a photon is a packet of. When a black hole consumes a photon and increases in mass, the reverse process occurs.

Also, gravity is proportional to the entropy density at a given region of spacetime. A black hole for instance has infinite entropy density, while a photon has essentially none. The reason why gravity appears so weak compared to electromagnetism is because much of the force of gravity is spread throughout the temporal dimension and affects things moving at different temporal velocities, while the electromagnetic force affects only things moving at the same temporal velocity.

At least some dark matter may in fact be normal baryonic matter that is travelling through time at a different temporal velocity than we are.

Neutrinos may actually be the temporal kinetic equivalent of photons, and the reason why the expansion of the universe seems to have started accelerating 5 billion years ago is because that was when the sun formed and the neutrino emissions of the sun have caused a small steady acceleration in the temporal velocity of the solar system relative to the cosmic background radiation.

Comment by darklight on Analogical Reasoning and Creativity · 2015-07-10T00:29:25.698Z · score: 0 (0 votes) · LW · GW

What do you actually think memories are? Memories are simply reconstructions of a prior state of the system. When you remember something, your brain literally returns at least partially to the neural state of activation that it was in which you originally perceived the event you are remembering.

What do you think the "pointer" or "key" to a memory in the human brain is? Generally, it involves priming. Priming is simply presenting a stimulus that has been associated with the prior state.

The "persistent change" you're looking for is exactly how artificial neural networks learn. They change the strength of the connections between the neurons.

Symbol processing is completely possible with an associative network system. The symbol is encoded as a particular pattern of neuronal activations. The visual letter "A" is actually a state in the visual cortex when a certain combination of neurons are firing in response to the pattern of brightness contrast signals that rod and cone cells generate when we see an "A". The sound "A" is similarly encoded and our brain learns to associate the two together. Eventually, there is a higher layer neuron, or pattern of neurons that activate most strongly when we see or hear an "A", and this "symbol" can then be combined or associated with other symbols to create words or otherwise processed by the brain.

You don't need some special mechanism. An associative memory can store any memory input pattern completely, assuming it has enough neurons in enough layers to reconstruct most of the possible states of input.

Key or Pointer based memory retrieval can be completely duplicated by just associating the key or pointer to the memory state, such that priming the network with the key or pointer reconstructs the original state.
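A small sketch of that "priming reconstructs the original state" claim, using a classic Hopfield-style autoassociative network; the pattern size, number of stored patterns, and corruption level are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_patterns = 200, 5

# Store a few random +/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

# "Prime" the network with a corrupted version of pattern 0 (30% of units flipped).
cue = patterns[0].copy()
flip = rng.choice(n_units, size=60, replace=False)
cue[flip] *= -1

# Iterated recall: each update pulls the state toward the nearest stored memory.
state = cue
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with the original memory:", int(np.sum(state == patterns[0])), "/", n_units)
```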

Comment by darklight on Beware the Nihilistic Failure Mode · 2015-07-09T23:52:51.994Z · score: 3 (3 votes) · LW · GW

Uh, I was under the impression that most consequentialists are moral universalists. They don't believe that morality can be simplified into absolute statements like "lying is always wrong", but do still believe in conditional moral universals such as "in this specific circumstance, lying is wrong for all subjects in the same circumstance".

This is fundamentally different from moral relativism that argues that morality depends on the subject, or moral nihilism that says that there are no moral truths at all. Moral universalism still believes there are moral truths, but that they depend on the conditions of reality (in this case, that the consequences are good).

Even then, most Utilitarian consequentialists believe in one absolute inherent moral truth, which is that "happiness is intrinsically good", or that "the utility function should be maximized."

Admittedly some consequentialists try to deny that they believe this and argue against moral realism, but that's mostly a matter of metaethical details.

Comment by darklight on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-19T22:12:13.844Z · score: 1 (1 votes) · LW · GW

For simplicity's sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility.

Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don't know about, but Eudaimonia is a bit of a strange concept.

Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer?

Consider this rephrasing of the question:

If you were in a situation where someone (possibly Omega... okay let's assume Omega) claimed that you could choose between two options: Truth or Happiness, which option would you choose?

Note that there is significant uncertainty involved in this question, and that this is a feature of the question rather than a bug. Given that you aren't sure what "Truth" or "Happiness" means in this situation, you may have to elaborate and consider all the possibilities for what Omega could mean (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a "correct" rational answer?

If it's not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?
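For concreteness, here is a toy sketch of what "assigning them probabilities" could look like: enumerate some possible interpretations of Omega's offer, attach made-up probabilities and utilities to each, and compare expected utilities. Every number below is an assumption purely for illustration; the point is only the shape of the calculation.

```python
# Toy expected-utility comparison over possible interpretations of Omega's offer.
interpretations = [
    # (probability, utility if you choose Truth, utility if you choose Happiness)
    (0.4,   5, 10),  # "a bit more true information" vs "a bit more happiness"
    (0.4,  20, 15),  # "a deeply important truth" vs "a pleasant but ordinary boost"
    (0.2, -50, 30),  # "a crushing truth you can't act on" vs "lasting contentment"
]

eu_truth     = sum(p * u_t for p, u_t, _ in interpretations)
eu_happiness = sum(p * u_h for p, _, u_h in interpretations)
print(eu_truth, eu_happiness)  # pick whichever is larger, given these assumptions
```

Of course, the "correct" answer this produces is only as good as the probabilities and utilities you put in, which is part of what makes the ambiguity interesting.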

Comment by darklight on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-19T18:13:06.930Z · score: 1 (1 votes) · LW · GW

I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I'm curious how Less Wrongers would handle the ambiguity as well.

In the context of the question, it can perhaps be better defined as:

If you were in a situation where you had to choose between Truth (guaranteed additional information), or Happiness (guaranteed increased utility), and all that you know about this choice is the evidence that the two are somehow mutually exclusive, which option would you take?

It's interesting that you interpreted the question to mean all or none of the Truth/Happiness, rather than what I assumed most people would interpret the question as, which is a situation where you are given additional Truth/Happiness. The extremes are actually an interesting thought experiment in and of themselves. All the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for these two things to be completely mutually exclusive, so this form of the question may well just be illogical.

Comment by darklight on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-19T16:10:06.918Z · score: 2 (2 votes) · LW · GW

I have a slate of questions that I often ask people in order to understand them better. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality, though I remain uncertain about this. The question is quite simple, so I offer it to the Less Wrong community to see what kinds of answers people come up with, as well as what the majority of Less Wrongers think. If you'd rather, you can private message me your answer.

The question is:

Truth or Happiness? If you had to choose between one or the other, which would you pick?

Comment by darklight on Research Priorities for Artificial Intelligence: An Open Letter · 2015-01-16T15:11:24.312Z · score: 2 (2 votes) · LW · GW

I'm impressed they managed to get the Big Three of the Deep Learning movement (Geoffrey Hinton, Yann LeCun, and Yoshua Bengio). I remember that at the 27th Canadian Conference on Artificial Intelligence in 2014, I asked Professor Bengio what he thought of the ethics of machine learning, and he asked if I was a reporter. XD

Comment by darklight on Compartmentalizing: Effective Altruism and Abortion · 2015-01-12T04:07:39.938Z · score: 1 (1 votes) · LW · GW

I had another thought as well. In your calculation, you only factor in the potential person's QALYs. But if we're really dealing with potential people here, what about the potential offspring or descendants of the potential person as well?

What I mean by this is: when you kill someone, generally speaking, aren't you also killing all of that person's possible future descendants? If we care about future people as much as present people, don't we have to account for the arbitrarily high number of possible descendants that anyone could theoretically have?

So, wouldn't the actual number of QALYs be more like +/- Infinity, where the sign of the value is based on whether or not the average life has more net happiness than suffering, and as such, is considered worth living?
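Here is a rough sketch of why the descendant-counting version blows up: if each person has, on average, r descendants per generation, the expected total number of descendants after g generations is the geometric series r + r^2 + ... + r^g, which is unbounded whenever r >= 1. The specific values of r below are purely illustrative.

```python
# Expected cumulative descendants after a given number of generations,
# assuming an average of r offspring per person per generation.
def expected_descendants(r, generations):
    return sum(r ** g for g in range(1, generations + 1))

for r in (0.9, 1.0, 1.1):
    print(r, expected_descendants(r, 50))
# r = 0.9 -> converges toward about 9
# r = 1.0 -> grows linearly (50 after 50 generations)
# r = 1.1 -> already over a thousand, and unbounded as generations increase
```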

Thus, it seems like the question of abortion can be encompassed in the question of suicide, and whether or not to perpetuate or end life generally.

Comment by darklight on Compartmentalizing: Effective Altruism and Abortion · 2015-01-05T17:00:21.680Z · score: 6 (6 votes) · LW · GW

As someone who's long held a very nuanced view of abortion, and as a recent EA convert who was thinking about writing about this, I'm glad you wrote this. It's probably a better and more well-constructed post than what I would have been able to put together.

The argument in your post, though, seems to assume that we have only two options, either to ban all abortion or to ban none, when in fact we can take a much more nuanced approach.

My own, pre-EA view is nuanced to the extent that I see personhood as something that goes from 0 before conception to 1 at birth, increasing gradually in between. This fits certain facts of pregnancy, such as the fact that twins can form after conception, and we consider them two "persons" rather than parts of a single "person". Thus, I am inclined to think that personhood cannot begin at conception. On the other hand, infanticide arguments notwithstanding, it seems clear to me that a mature baby, both one second before and one second after it is born, is a person in the sense that it is a viable human being capable of conscious experience.

I've also considered the neuroscience research suggesting that fetuses as early as 20 weeks into gestation are capable of memorizing music played to them. This, along with the completion of the thalamocortical connections at around 26 weeks and evidence of sensory response to pain at around 30 weeks, suggests to me that the fetus develops the ability to sense and feel well before birth.

All this together means that my nuanced view is that, if we have to draw a line in the sand over when abortion should and shouldn't be permissible, I would tentatively favour somewhere around 20 weeks, or the midpoint of pregnancy. I would also consider something along the lines of no restrictions in the first trimester, some restrictions in the second trimester, and a full ban in the third trimester, with an exception for when the mother's life is in danger (in which case we save the mother, because the mother is likely more sentient).

Note that in practice the vast majority of abortions happen in the first trimester, and many doctors refuse to perform late-term abortions anyway, so these kinds of restrictions would not significantly change the number of abortions that actually occur.

That was my thinking before considering the EA considerations. However, when I give thought to the moral uncertainty and the future persons arguments, I find that I am less confident in my old ideas now, so thank you for this post.

Actually, I can imagine that one way of integrating EA considerations into my old ideas would be to weigh the value of the fetus not only by its "personhood", but also by its "potential personhood given moral uncertainty" and its expected QALYs. Though perhaps the QALY argument dominates everything else.
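As a very rough sketch of the kind of weighting I have in mind; the linear personhood ramp, the moral-uncertainty credence, and the QALY figure are all placeholder assumptions rather than settled views:

```python
# Illustrative blend of a gradualist personhood weighting with some credence
# that full personhood (and the full expected-QALY loss) applies from conception.
def personhood(weeks, term_weeks=40):
    """Personhood ramping linearly from 0 at conception to 1 at birth (an assumption)."""
    return max(0.0, min(1.0, weeks / term_weeks))

def expected_moral_weight(weeks, p_gradualist=0.9, expected_qalys=70):
    gradualist = personhood(weeks) * expected_qalys
    full_from_conception = expected_qalys
    return p_gradualist * gradualist + (1 - p_gradualist) * full_from_conception

print(expected_moral_weight(10))  # first trimester
print(expected_moral_weight(20))  # around the midpoint
print(expected_moral_weight(30))  # third trimester
```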

Regardless, I'm impressed that you were willing to handle such a controversial topic as this.

Comment by darklight on [Link] Neural networks trained on expert Go games have just made a major leap · 2015-01-02T19:17:32.184Z · score: 8 (8 votes) · LW · GW

As someone who used convolutional neural networks in my Master's thesis, this doesn't surprise me. CNNs are especially well suited to problems with two-dimensional input where spatial information matters. I especially like that they were willing to go deep and make the net 12 layers, as that fits well with some of my own research, which seemed to show that deeper networks were the way to go in terms of performance efficiency.

It's also quite interesting that they didn't use any pooling layers, which is a break from the traditional way CNNs are constructed, usually as alternating convolutional and pooling layers. I've actually been curious for some time about whether pooling layers are really necessary, or whether you could get away with using only convolutional layers, since the convolutional layers seem to do the important feature extraction, while the pooling layers always seemed like just a neat way to make the input to the next layer smaller and easier to handle.
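For illustration, here is a minimal sketch of the conv-only idea: a 12-layer stack of convolutional layers with no pooling, keeping the full 19x19 board resolution throughout and ending in a 1x1 convolution that produces one move logit per board point. The channel counts, kernel sizes, and number of input feature planes are my own illustrative choices, not the ones from the paper.

```python
# Sketch of an all-convolutional (no pooling) network for Go move prediction.
import torch
import torch.nn as nn

class ConvOnlyGoNet(nn.Module):
    def __init__(self, in_planes=8, channels=64, depth=12):
        super().__init__()
        layers = [nn.Conv2d(in_planes, channels, kernel_size=5, padding=2), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, kernel_size=1)]  # one logit per board point
        self.net = nn.Sequential(*layers)

    def forward(self, x):                    # x: (batch, in_planes, 19, 19)
        logits = self.net(x).flatten(1)      # (batch, 361)
        return torch.softmax(logits, dim=1)  # probability distribution over moves

model = ConvOnlyGoNet()
boards = torch.randn(4, 8, 19, 19)           # fake input feature planes
print(model(boards).shape)                    # torch.Size([4, 361])
```

Since nothing is downsampled, the output stays aligned with the board, which seems like a natural fit for predicting moves at specific points.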

Regardless, I'm definitely happy to see CNNs making progress for a problem that seemed intractable not so long ago. Score one more for the neural nets!

Comment by darklight on Ethical frameworks are isomorphic · 2014-08-14T16:35:55.415Z · score: 1 (1 votes) · LW · GW

Henry Sidgwick in "The Methods of Ethics" actually argues that Utilitarianism can be thought of as having a single predominant rule, namely the Greatest Happiness Principle, and that all other correct moral rules follow from it if you look closely enough at what a given rule is really saying. He noted that, properly expanded, a moral rule is essentially an injunction to act a certain way in a particular circumstance, universal to any person in identical circumstances. He also had some interesting things to say about the relationship between the virtues and Utilitarianism, and more or less tried to show that the commonly valued virtues can be inferred from a Utilitarian perspective.

Of course Sidgwick was arguing in a time before the clear-cut delineation of moral systems into "Deontological", "Consequentialist", and "Virtue Ethics". But I thought it would be useful to point out that early classical Utilitarian thinkers did not see these clear-cut delineations and instead, often made use of the language of rules and virtues to further their case for Utilitarianism as a comprehensive and inclusive moral theory.

Comment by darklight on [LINK] Sentient Robots Not Possible According To Math Proof · 2014-05-15T21:06:58.911Z · score: 0 (0 votes) · LW · GW

I'm not sure what conference proceedings you've been looking at, but the ones I've managed to publish in generally do have basic peer review (I know because I got comments back from the reviewers).

Though, whether or not the quality of peer review done by conferences is any good is another matter entirely.

Comment by darklight on Rationality Quotes May 2014 · 2014-05-14T19:46:51.253Z · score: 0 (4 votes) · LW · GW

In the midst of it all you must take your stand, good-temperedly and without disdain, yet always aware that a man's worth is no greater than the worth of his ambitions.

-- Marcus Aurelius, Meditations, pg. 76

Comment by darklight on Rational Evangelism · 2014-02-27T18:41:04.944Z · score: 2 (2 votes) · LW · GW

Regarding the "buy a sword" quote, he said that to his disciples, and then later says to them that two swords are enough. The most common interpretation of this is that he needed to fulfill a prophecy, and also so as to get him arrested by the authorities for "leading a rebellion". Two swords are obviously not enough to win a rebellion, so it seems like the purpose of this wasn't to convert people through violence. There is a scene later where Peter famously cuts off one of the ears of the people sent to arrest Jesus, and then Jesus goes "enough of that!" and promptly heals the ear, and allows himself to be taken into custody peacefully.

Regarding the "not peace but a sword" quote, it's arguable that this is an obvious metaphor for ideological conflict.

Again, taken out of context, these verses can sound a lot more aggressive than the context would suggest.

Jesus also said things like "Those who live by the sword, die by the sword," "Turn the other cheek," and "Love your enemies".

So there are at least as many quotes from Jesus supporting an argument for pacifism as there are suggesting otherwise. And since those more pacifist quotes come from his core teachings, like the Sermon on the Mount, they are arguably more suggestive of his actual positions.

In the context of his overall ministry, and the fact that the Christian martyrs in general were known for their pacifism and willingness to sacrifice their own lives for what they believed in, I would argue that early Christianity spread more because of its non-violent tendencies, and because of the violence its opponents inflicted on its adherents. Of course, after Constantine's conversion and the politicization of the Church, things changed, and you could argue that Christianity became just another state religion, spread by the sword in the way all state religions arguably are. The Crusades also come to mind as an example of Christianity being "spread by the sword", though one can argue that the Crusades were really political actions disguised with religious rhetoric.

But I think you're trying too hard to find evidence that Jesus himself advocated physical violence, when most of the evidence is that he advocated a kind of pacifism, as well as an ideological revolution.

Comment by darklight on Rational Evangelism · 2014-02-27T01:51:00.245Z · score: 6 (6 votes) · LW · GW

Uh, I don't know about the others, but that Jesus quote is taken way out of context. It comes from a parable that goes like this:

While they were listening to this, he went on to tell them a parable, because he was near Jerusalem and the people thought that the kingdom of God was going to appear at once. He said: "A man of noble birth went to a distant country to have himself appointed king and then to return. So he called ten of his servants and gave them ten minas. 'Put this money to work,' he said, 'until I come back.' "But his subjects hated him and sent a delegation after him to say, 'We don't want this man to be our king.' "He was made king, however, and returned home. Then he sent for the servants to whom he had given the money, in order to find out what they had gained with it. "The first one came and said, 'Sir, your mina has earned ten more.' "'Well done, my good servant!' his master replied. 'Because you have been trustworthy in a very small matter, take charge of ten cities.' "The second came and said, 'Sir, your mina has earned five more.' "His master answered, 'You take charge of five cities.' "Then another servant came and said, 'Sir, here is your mina; I have kept it laid away in a piece of cloth. I was afraid of you, because you are a hard man. You take out what you did not put in and reap what you did not sow.' "His master replied, 'I will judge you by your own words, you wicked servant! You knew, did you, that I am a hard man, taking out what I did not put in, and reaping what I did not sow? Why then didn't you put my money on deposit, so that when I came back, I could have collected it with interest?' "Then he said to those standing by, 'Take his mina away from him and give it to the one who has ten minas.' "'Sir,' they said, 'he already has ten!' "He replied, 'I tell you that to everyone who has, more will be given, but as for the one who has nothing, even what they have will be taken away. But those enemies of mine who did not want me to be king over them--bring them here and kill them in front of me.'"

As you can see, Jesus is not saying to kill actual people in front of him, as the quote taken out of context makes it sound; rather, he is describing what the king in the story says. It's part of a parable, and probably meant as a metaphor for Jesus/God eventually judging non-believers or demons and sending them to Hell.

You could maybe argue that the threat of Hell is like a threat of violence, but it's not the same as suggesting that Jesus wanted to have his enemies killed in front of him.

Comment by darklight on [Open Thread] Stupid Questions (2014-02-17) · 2014-02-25T23:51:20.034Z · score: 1 (1 votes) · LW · GW

Should I take a proper IQ test?

In the past I've only ever taken those questionable online IQ tests, and managed to get something like 133 from them, but they're obviously not the most reliable source.

The only other really IQ-like test I've ever taken was the Otis-Lennon test in grade 4, which I only got 114 on. But I also remember misunderstanding the instructions and thinking I wasn't allowed to skip questions, so I only actually answered 50 of the 75 questions on the test (I got stuck on question 50 for a long time).

I also more recently managed exactly 160 (80th percentile) on the LSAT on my first and only attempt.

And, most recently I took a Differential Aptitude Test that looked like:

  • Verbal Reasoning: 48 (97th percentile)
  • Numerical Ability: 25 (40th percentile)
  • VR + NA: 73 (70th percentile)
  • Abstract Reasoning: 40 (80th percentile)
  • Clerical Speed & Accuracy: 40 (25th percentile)
  • Mechanical Reasoning: 54 (45th percentile)
  • Space Relations: 42 (60th percentile)
  • Spelling: 89 (97th percentile)
  • Language Usage: 36 (70th percentile)

As you can see, it's kinda all over the place.

I am kind of curious about how I'd do with a proper IQ test, but I'm also a bit worried that I might be disappointed by the results. My own personal estimate is that I'm probably around 120 or so, since that puts me above average, but doesn't put me in genius or Mensa territory. And yes, I'm admitting that my own self-evaluation is that I probably have a lower IQ than the average Less Wrong survey answerer's 138. You people are scary intelligent. :P

And I wonder whether a higher-than-expected IQ result would make me overly arrogant, or a relatively low result would hurt my confidence in the future.

So what do you think? Is knowing your IQ generally a good thing? Or are there good reasons for ignorance being bliss?

Comment by darklight on Is love a good idea? · 2014-02-22T22:23:23.409Z · score: -1 (3 votes) · LW · GW

You say you're trying to optimize your happiness... Why not consider taking the leap into classical Utilitarianism and optimizing happiness generally?

I actually recently made a Utilitarian argument for romantic love, on the Felicifia forums for Valentine's Day. You may find that an interesting little argument to consider, though I admit it isn't the most intellectually rigorous argument I've ever come up with.

As for the issue of the permanence of love, here's a copy of something I wrote, about just that, almost four years ago:

The Essential Tragedy of Love

Fundamental to the nature of all human existence is the irreconcilable temporality of all things. Thus, it is a dark truth that every single person one falls in love with is doomed to pass from this world in the unforeseeable future, short of some absurd change in the nature of human existence, such that eternal life becomes real.

However, in the shorter term, not all things are as temporary as others. Thus it behoves us, if we must fall in love, to do so for reasons more immutable than mere circumstances such as wealth and beauty, which can change on a whim or will inevitably pass with time. Rather, if true love is what you seek, it is wiser to search for those immutable characteristics that will last perhaps as long as you will, things like kindness, intelligence, and the innate traits that are fundamental to a person’s being.

To do otherwise is to invite the situation where what you fell in love with has changed and is no more. Because so much love is foolishly laced together with commitments binding very souls to unite, it is a terrible place to be: trapped with someone who is no longer what you loved. Worse still, it is preventable by simply refraining from such brash commitments until you are certain that what you love is, in fact, an immutable presence in that person, and not something that will pass away beforehand.

It is therefore folly to indulge in something such as love at first sight, since the superficial knowledge one can glean from first sight is unlikely to be sufficient to make such a responsible judgment with any sort of reason. Opportunities are transient and momentary, so it is equally foolish to simply wait around in the hope that perfection is just around the corner. Rather, one must balance risk and caution, and more importantly, one must take full responsibility for one's actions, one's words, and one's promises.

Never make a promise you cannot keep. The harm that such bindings cause can be severe when used whimsically. It requires a maturity to know that what you seek to accomplish may never come to be, but that it is better to try than fail automatically, and in this context, to know what you can genuinely promise and make happen. In this manner, you must be earnest and sincere about your intentions and needs. If they are truly worth your admiration, they will be mature enough to understand you, just as you should be mature enough to understand them in that deep, caring sense that allows you to swallow pride and make those elemental compromises for their sake.

Always be aware that life is ethereal, and that we are born with a want that can only be fulfilled by the affection of love. The genetic forces that shape us arguably have a will of their own, and this procreative instinct is the base desire from which the primal emotion of love has evolved. But love has exceeded its original form and become intertwined with the root grace of empathy, a conscientious wisdom that is the source of all human decency. True love, then, is a conscious decision by a free will. And as such, it is able to function beyond mere selfish desire and instead make decisions that gravitate towards the ideal interests of the beloved. And even if such decisions require the painful admission that the beloved is better off with another, it is only true love that will make that sacrifice willingly.

If you truly love her, you will never abandon her, but you will let her go if she wishes. For you want her dreams to come true, regardless of whether you exist in them.

And with such conscious awareness comes the awareness of an unfortunate reality. Love is always bound to eventual, inevitable loss, the tragic circumstance that is the short breath of life. Be forewarned that the more beautiful and wondrous the love, the more painful will be its star-crossed end.