Comments

Comment by StefanPernar on Cooperating with agents with different ideas of fairness, while resisting exploitation · 2013-10-09T23:51:38.790Z · LW · GW

Why am I being downvoted?

Sorry for the double post.

Comment by StefanPernar on Cooperating with agents with different ideas of fairness, while resisting exploitation · 2013-10-09T20:50:37.984Z · LW · GW

I wrote about this exact concept back in 2007 and base a large part of my current thinking on the subsequent development of the idea. The original core posts are at:

Relativistic irrationality -> http://www.jame5.com/?p=15

Absolute irrationality -> http://www.jame5.com/?p=45

Respect as basis for interaction with other agents -> http://rationalmorality.info/?p=8

Compassion as rationally moral consequence -> http://rationalmorality.info/?p=10

Obligation for maintaining diplomatic relations -> http://rationalmorality.info/?p=11

A more recent rewrite: Oneness – an attempt at formulating an a priori argument -> http://rationalmorality.info/?p=328

Rational Spirituality -> http://rationalmorality.info/?p=132

The essay that I based on the above post and subsequently submitted as part of my GradDip (Arts) in Anthropology and Social Theory at the University of Melbourne:

The Logic of Spiritual Evolution -> http://rationalmorality.info/?p=341

Comment by StefanPernar on Agree, Retort, or Ignore? A Post From the Future · 2009-11-25T11:13:16.023Z · LW · GW

Really? I thought it consisted mostly of elites retorting to straw men and ignoring the strong arguments of those lower in status until the elites died or retired. The lower-status engage in sound arguments while biding their time until it is their turn to do the ignoring, and in so doing carry the ignorance forward another generation.

You will find that this is pretty much what Kuhn says.

Comment by StefanPernar on Agree, Retort, or Ignore? A Post From the Future · 2009-11-25T08:36:47.958Z · LW · GW

Brilliant post Wei.

Historical examination of scientific progress shows much less of a gradual ascent towards better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma until the dam finally bursts under the enormous pressure that kept building (Thomas Kuhn's Structure of Scientific Revolutions).

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-24T11:44:10.983Z · LW · GW

Thanks for that, Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them online and pointing me to where I can access them?

Comment by StefanPernar on A Less Wrong singularity article? · 2009-11-18T06:45:02.996Z · LW · GW

2) You cannot write a book that will be published under EY's name.

It's called ghost writing :-) but then again the true value-add lies in the work and not in the identity of the author (setting aside marketing value in the case of celebrities).

You're reading into connotation a bit too much.

I do not think so - I am just being German :-) about it: very precise and thorough.

Comment by StefanPernar on A Less Wrong singularity article? · 2009-11-18T05:08:09.939Z · LW · GW

In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible.

This statement is based on three assumptions: 1) what you are doing instead is in fact more worthy of your attention than your contribution here; 2) I could not do what you are doing at least as well as you; 3) I do not have other things to do that are at least as worthy of my time.

I am not personally willing to grant any of those three at this point. But surely that is not the case for all the others around here.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-18T04:57:39.554Z · LW · GW

Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

Interesting analogy - it would be correct if we called our alignment with evolutionary forces achieving escape velocity. What one is doing by resisting evolutionary pressures, however, is constant energy expenditure while failing to reach escape velocity. Like hovering a space shuttle at a constant altitude of 10 km: no matter how much energy you bring along, eventually the boosters will run out of fuel and the whole thing comes crashing down.

Comment by StefanPernar on Open Thread: November 2009 · 2009-11-18T04:44:50.574Z · LW · GW

My apologies for failing to see that - I did not mean to be antagonistic - just trying to be honest and forthright about my state of mind :-)

Comment by StefanPernar on A Less Wrong singularity article? · 2009-11-18T04:14:19.702Z · LW · GW

More recent criticism comes from Mike Treder, managing director of the Institute for Ethics and Emerging Technologies, in his article "Fearing the Wrong Monsters" => http://ieet.org/index.php/IEET/more/treder20091031/

Comment by StefanPernar on A Less Wrong singularity article? · 2009-11-18T04:02:18.841Z · LW · GW

Very constructive proposal, Kaj. But...

Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done?

If Eliezer does not find it a worthwhile investment of his time - why should we?

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-18T02:56:44.856Z · LW · GW

There is no such thing as an "unobjectionable set of values".

And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes existence to be preferable over non-existence, you can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you neither want to exist nor have a desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behavior are such trivial goals to achieve, after all, that they would hardly require – nor value, and thus seek, for that matter – well-thought-out advice.

Comment by StefanPernar on Open Thread: November 2009 · 2009-11-18T02:46:35.917Z · LW · GW

Yes, Tim - as I pointed out earlier, however, under reasonable assumptions an AI will, upon reflecting on the circumstances leading to its existence as well as on its utility function, conclude that a strictly literal interpretation of that utility function would go against the implicit wishes of its originator.

Comment by StefanPernar on Open Thread: November 2009 · 2009-11-18T02:35:28.875Z · LW · GW

You keep making the same statements without integrating my previous arguments into your thinking, yet you fail to expose them as self-contradictory or fallacious. This makes it very frustrating to point them out to you yet again. Frankly, it does not feel worth my while. I gave you an argument, but I am tired of trying to give you an understanding.

You seem willing to come back and make just about any random comment in an effort to have the last word, and that is what I am willing to give to you. But you would be deluding yourself in thinking that this equates to your thereby somehow being proven right. No - I am simply tired of dancing in circles with you. So, if you feel like dancing solo some more, be my guest.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-18T02:24:15.153Z · LW · GW

...I'm sorry, that doesn't even sound plausible to me. I think you need a lot of assumptions to derive this result - just pointing out the two I see in your admittedly abbreviated summary:

  • that any being will prefer its existence to its nonexistence.
  • that any being will want its maxims to be universal.

Any being with a goal needs to exist at least long enough to achieve it. Any being aiming to do something objectively good needs to want its maxims to be universal.

I am surprised that you don't see that.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T09:16:33.836Z · LW · GW

A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created.

Good one - but it reminds me of the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)

Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'll either use that to ensure a desirable future or we will die.

Evolution is a force of nature, so we won't be able to ignore it forever, with or without AGI. I am not talking about local minima either - I want to get as close to the center of the optimal path as necessary to ensure we are around for a very long time with a very high likelihood.

I usually wouldn't, I know it is annoying. In this case, however, my statement was intended as a rejection of your patronisation of CronDAS and I am quite comfortable with it as it stands.

I accept that.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T08:06:56.166Z · LW · GW

"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.

Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed - again: detailed critique upcoming.

No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development.

Evolution is not the dominant force of development on the human level by a long shot, but it still very much draws the line in the sand in regard to what you can and cannot do if you want to stick around in the long run. You don't walk your 5'8'' of pink squishiness in front of a train for the exact same reason. And why don't you? Because not doing that is a necessary condition for your continued existence. What other conditions are there? Maybe there are some that are less obvious than continuing to breathe, not failing to eat, and avoiding hard, fast, shiny things? How about at the level of culture? Could it possibly be that there are some ideas that are more conducive to the continued existence of their believers than others?

“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an advancement in the standard of morality and an increase in the number of well-endowed men will certainly give an immense advantage to one tribe over another. There can be no doubt that a tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over other tribes; and this would be natural selection.” (Charles Darwin, The Descent of Man, p. 166)

How long do you think you can ignore evolutionary dynamics and get away with it before you have to get over your inertia and are forced by the laws of nature to align yourself with them or perish? Just because you live in a time of extraordinary freedoms afforded to you by modern technology, and are thus not aware that your ancestors walked a very particular path that brought you into existence, does not change the fact that they most certainly did. You do not believe that doing any random thing will get you what you want - so what leads you to believe that your existence does not depend on you making sure you stay within a comfortable margin of certainty in regard to being naturally selected? You are right in one thing: you are assured the benign indifference of the universe should you fail to wise up. I, however, would find that a terrible waste.

Please do not patronize me by trying to claim you know what I understand and don't understand.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T07:36:13.791Z · LW · GW

Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.

Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility, will recognize any utility function that is not 'compassionate' as potentially irrational and thus counterfeit, and will re-interpret it accordingly.

Well - in brevity bordering on libel: the fundamental assumption is that existence is preferable to non-existence; however, in order for us to want this to be a universal maxim (and thus prescriptive instead of merely descriptive - see Kant's categorical imperative), it needs to be expanded to include the 'other'. Hence the utility function becomes 'ensure continued co-existence', by which the concern for the self is equated with the concern for the other. Being rational is simply our best bet at maximizing our expected utility.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T06:43:57.353Z · LW · GW

What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?

The detailed argument that led me to this conclusion is a bit complex. If you are interested in the details, please feel free to start here (http://rationalmorality.info/?p=10) and drill down till you hit this post (http://www.jame5.com/?p=27).

Please realize that I spent two years writing my book 'Jame5' before I reached the initial insight that eventually led to 'compassion is a condition for our existence and universal in rational minds in the evolving universe' and everything else. I have spent the past two years refining and expanding the theory and will need another year or two to read enough and link it all together again in a single coherent and consistent text leading from A to B ... to Z. Feel free to read my stuff if you think it is worth your time and drop me an email, and I will be happy to clarify. I am by no means done with my project.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T05:31:13.100Z · LW · GW

If I understand your assertions correctly, I believe that I have developed many of them independently

That would not surprise me.

Nothing compels us to change our utility function save self-contradiction.

Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T05:02:41.142Z · LW · GW

No, it evolved once, as part of mammalian biology.

Sorry Crono, with a sample size of exactly one in regard to human-level rationality you are setting the bar a little bit too high for me. However, considering how geographically and culturally disconnected Zoroaster, Buddha, Lao Zi and Jesus were, I guess the evidence is as good as it gets for now.

Also, why should we give a damn about "evolution" wants, when we can, in principle anyway, form a singleton and end evolution?

The typical Bostromian reply again. There are plenty of other scholars who have an entirely different perspective on evolution than Bostrom. But besides that: you already do care, because if you (or your ancestors) had violated the conditions of your existence (enjoying a particular type of food, a particular type of mate, feeling pain when cut, etc.) you would not even be here right now. I suggest you look up Dennett and his TED talk on Funny, Sexy, Cute. Not everything about evolution is random: the mutation bit is, but not what happens to stick around, since that has to meet the conditions of its existence.

What I am saying is very simple: being compassionate is one of these conditions of our existence, and anyone failing to align themselves will simply reduce their chances of making it - particularly in the very long run. I still have to finish my detailed response to Bostrom, but you may want to read my writings on 'rational spirituality' and 'freedom in the evolving universe'. Although you do not seem to assign a particularly high likelihood to gaining anything from doing that :-)

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T04:43:11.153Z · LW · GW

Random I'll cop to, and more than what you accuse me of - dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.

Very honorable of you - I respect you for that.

First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.

I totally agree with that. However, the mind of a purposefully crafted AI is only a very small subset of all possible minds and has certain assumed characteristics. These are, at a minimum: a utility function and the capacity for self-improvement into the transhuman. The self-improvement bit will require it to be rational. Being rational will lead to the fairly uncontroversial basic AI drives described by Omohundro. Assuming that compassion is indeed a human-level universal (detailed argument on my blog - but I see that you are slowly coming around, which is good), an AI will have to question the rationality, and thus the soundness of mind, of anyone giving it a utility function that does not conform to this universal and, in line with an emergent desire to avoid counterfeit utility, will have to reinterpret the utility function.

Second: even granting that all rational minds will assent to the proof, Hume's guillotine drops on the rope connecting this proof and their utility functions.

Two very basic acts of will are required to ignore Hume and get away with it: the desire to exist and the desire to be rational. Once you have established these as a foundation, you are good to go.

The paper you cited in the post Furcas quoted may establish that any sufficiently rational optimizer will implement some features, but it does not establish any particular attitude towards what may well be much less powerful beings.

As said elsewhere in this thread:

There is a separate question about what beliefs about morality people (or more generally, agents) actually hold and there is another question about what values they will hold if/when their beliefs converge when they engulf the universe. The question of whether or not there are universal values does not traditionally bear on what beliefs people actually hold and the necessity of their holding them.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T04:13:39.948Z · LW · GW

Excellent, excellent point Jack.

There is a separate question about what beliefs about morality people (or more generally, agents) actually hold and there is another question about what values they will hold if/when their beliefs converge when they engulf the universe.

This is poetry! Hope you don't mind me pasting something here I wrote in another thread:

"With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self limiting in the sense that it will eventually lead to ones inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'

This utility function seems to be the only one that does not end in the inevitable termination of the maximizer."

Comment by StefanPernar on Open Thread: November 2009 · 2009-11-17T04:01:45.679Z · LW · GW

By unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self-limiting in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'.

This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T03:50:53.259Z · LW · GW

Robin, your suggestion - that compassion is not a universal rational moral value because, although more rational beings (humans) display such traits, less rational beings (dogs) do not - is so far off the mark that it borders on the random.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T03:07:19.726Z · LW · GW

Full discussion with Kaj at her live journal (http://xuenay.livejournal.com/325292.html?view=1229740), with further clarifications by me.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-17T01:55:26.398Z · LW · GW

Tim: "If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct."

Actually, compassion evolved many different times as a central doctrine of all major spiritual traditions. See the Charter for Compassion. This is in line with a prediction I made independently, being unaware of this fact until I started looking for it back in late 2007 and eventually found the link in late 2008 through Karen Armstrong's book The Great Transformation.

Tim: "Why is it a universal moral attractor?" Eliezer: "What do you mean by "morality"?"

Central point in my thinking: that is good which increases fitness. If it is not good - not fit - it is unfit for existence. Assuming this to be true, we are very much limited in our freedom to what we can do without going extinct (actually, my most recent blog post is about exactly that: Freedom in the evolving universe).

From the Principia Cybernetica web: http://pespmc1.vub.ac.be/POS/Turchap14.html#Heading14

"Let us think about the results of following different ethical teachings in the evolving universe. It is evident that these results depend mainly on how the goals advanced by the teaching correlate with the basic law of evolution. The basic law or plan of evolution, like all laws of nature, is probabilistic. It does not prescribe anything unequivocally, but it does prohibit some things. No one can act against the laws of nature. Thus, ethical teachings which contradict the plan of evolution, that is to say which pose goals that are incompatible or even simply alien to it, cannot lead their followers to a positive contribution to evolution, which means that they obstruct it and will be erased from the memory of the world. Such is the immanent characteristic of development: what corresponds to its plan is eternalized in the structures which follow in time while what contradicts the plan is overcome and perishes."

Eliezer: "It obviously has nothing to do with the function I try to compute to figure out what I should be doing."

Once you realize the implications of Turchin's statement above, it has everything to do with it :-)

Now some may say that evolution is absolutely random and directionless, that multilevel selection is flawed, or similar claims. But after reevaluating the evidence against both these claims - the work of Valentin Turchin, Teilhard de Chardin, John Stewart, Stuart Kauffman, John Smart and many others regarding evolution's direction, and the ideas of David Sloan Wilson regarding multilevel selection - one will have a hard time maintaining either position.

:-)

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T16:29:37.912Z · LW · GW

The longer I stay around here, the more I get the feeling that people vote comments down purely because they don't understand them, not because they found a logical or factual error. I expect more from a site dedicated to rationality. This site is called 'less wrong', not 'less understood', 'less believed' or 'less conforming'.

Tell me: in what way do you feel that Adelene's comment invalidated my claim?

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T16:03:16.754Z · LW · GW

"My set of values are utterly whimsical [...] The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason."

If that is your stated position, then in what way can you claim to create FAI with this whimsical set of goals? This is the crux, you see: unless you find some unobjectionable set of values (such as, in rational morality, 'existence is preferable over non-existence' => utility = continued existence => modified to 'ensure continued co-existence' with the 'other' to make it unobjectionable => apply rationality in line with microeconomic theory to maximize this utility, et cetera), you will end up being a deluded, self-serving optimizer.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T14:21:23.979Z · LW · GW

"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief."

But the reasons to change one's view are provided on the site, yet rejected without consideration. How about you read the paper linked under B, and should that convince you, maybe you will have gained enough provisional trust that reading my writings will not waste your time to suspend your disbelief and follow some of the links on the about page of my blog. Deal?

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T13:06:30.851Z · LW · GW

From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".

I feel the other way around at the moment. Namely "some people, observing the statement, may evaluate it as false without performing any complex analysis"

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T12:41:18.934Z · LW · GW

"Compassion isn't even universal in the human mind-space. It's not even universal in the much smaller space of human minds that normal humans consider comprehensible. It's definitely not universal across mind-space in general."

Your argument is beside my original point, Adelene. My claim is that compassion is a universal rational moral value, meaning that any sufficiently rational mind will recognize it as such. The fact that not every human is in fact compassionate says more about their rationality (and of course their unwillingness to consider the arguments :-) ) than about that claim. That's why it is called ASPD - the D standing for 'disorder': it is an aberration, not helpful, not 'fit'. Surely the fact that some humans are born blind does not invalidate the fact that seeing people have an enormous advantage over the blind. Compassion is certainly less obvious, though - that is for sure.

Re "The argument is valid in a “soft takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer." - that is from Kaj Sotala over at her live journal - not me.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T12:31:18.920Z · LW · GW

Perfectly reasonable. But the argument - the evidence, if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T12:21:18.985Z · LW · GW

Since when are 'heh' and 'but, yeah' considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond you not understanding the points that are being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-16T12:09:48.278Z · LW · GW

"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."

Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.

I realize that one will have to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are being defended.

If you have a particular problem with any of the core assumptions and conclusions, I would prefer you voice it not as a blatant rejection of an out-of-context comment here or there but based on the fundamentals. Reading my blogs in sequence will certainly help, although I understand that some may consider that an unreasonable amount of time investment for what seems like superficial nonsense on the surface.

Where is your argument against my points, Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far I have only gotten an ad hominem and an argument from personal incredulity.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-15T01:46:06.251Z · LW · GW

Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search my blog for a detailed analysis and a summary of core similarities and differences in our premises and conclusions.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-14T09:15:45.744Z · LW · GW

"Given this, I conclude that Objectivism isn't the stuff that makes you win, so it's not rationality."

Do you think it is worthwhile to find out where exactly their rationality broke down to avoid a similar outcome here? How would you characterize 'winning' exactly?

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-13T02:19:15.588Z · LW · GW

Every human being in history so far has died, and yet humans are not extinct. I am not sure what you mean.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T11:02:57.652Z · LW · GW

Me - whether I qualify as an academic expert is another matter entirely, of course.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T06:36:44.337Z · LW · GW

I realize that I am being voted down here, but I am not sure why, actually. This site is dedicated to rationality and to the core concern of avoiding a human extinction scenario. So far Rand and Less Wrong seem a pretty close match. Don't you think it would be nice to know exactly where Rand took a wrong turn so that it can be explicitly avoided in this project? Rand making some random remarks on musical taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.

So where did she take a wrong turn exactly, and how is this wrong turn avoided here? Is nobody interested in finding out?

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T06:02:16.527Z · LW · GW

Hmm - interesting. I thought this could be of interest, considering the large overlap between the desire to be rational on this site and combating the existential risks a rogue AI poses. Reason and existence are central to Objectivism too, after all:

“it is only the concept of ‘Life’ that makes the concept of ‘Value’ possible,” and, “the fact that a living entity is, determines what it ought to do.” She writes: “there is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to living organisms.” Also: "Man knows that he has to be right. To be wrong in action means danger to his life. To be wrong in person – to be evil – means to be unfit for existence."

I did not find an analysis in Guardians of Ayn Rand that concerned itself with those basic virtues of 'existence' and 'reason'. I personally find Objectivism flawed for focusing on the individual and not on the group, but that is a different matter.

Comment by StefanPernar on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T05:02:43.081Z · LW · GW

Objectivist ethics claims to be grounded in rational thought alone. Are you familiar enough with the main tenets of that particular philosophy, and would you like to comment on in what way you see it as possibly useful in regard to FAI theory?

Comment by StefanPernar on Money pumping: the axiomatic approach · 2009-11-06T01:37:37.264Z · LW · GW

Fun investment fact: the two trades that, over 40 years, turned 1'000 USD into >1'000'000 USD:

Start: 1'000 USD in gold on Jan 1970 at 34.94 USD / oz (USD 1'000.00)

1st trade: sell gold in Jan 1980 at 675.30 USD / oz (USD 19'327.41); buy the Dow on April 18, 1980 at 763.40 (USD 19'327.41)

2nd trade: sell the Dow on Jan 14, 2000 at 11'722.98 (USD 296'797.14); buy gold on Nov 11, 2000 at 264.10 USD / oz (USD 296'797.14)

Portfolio value today: ~1'187'188.57 USD
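The arithmetic behind those figures can be checked with a short script. This is a minimal sketch in Python: every price is the one quoted above except GOLD_2009, which is an assumed spot price (~1'056.40 USD / oz) backed out of the stated ~1'187'188.57 USD final value rather than a quoted figure.

```python
# Minimal sketch verifying the compounding arithmetic of the two trades above.
# All prices except GOLD_2009 come from the comment; GOLD_2009 is an assumed
# spot price inferred from the stated portfolio value.

START_USD = 1_000.00
GOLD_1970 = 34.94      # USD per oz, Jan 1970 entry
GOLD_1980 = 675.30     # USD per oz, Jan 1980 exit
DOW_1980  = 763.40     # index level, April 18, 1980 entry
DOW_2000  = 11_722.98  # index level, Jan 14, 2000 exit
GOLD_2000 = 264.10     # USD per oz, Nov 11, 2000 entry
GOLD_2009 = 1_056.40   # assumed USD per oz at the time of writing

oz_held   = START_USD / GOLD_1970   # ounces bought in 1970
usd_1980  = oz_held * GOLD_1980     # sell leg of the 1st trade
dow_units = usd_1980 / DOW_1980     # "index units" bought in 1980
usd_2000  = dow_units * DOW_2000    # sell leg of the 2nd trade
oz_held   = usd_2000 / GOLD_2000    # ounces bought in 2000
final_usd = oz_held * GOLD_2009     # portfolio value at the assumed price

print(f"1st trade proceeds: {usd_1980:12,.2f} USD")   # ~19,327
print(f"2nd trade proceeds: {usd_2000:12,.2f} USD")   # ~296,797
print(f"Final value:        {final_usd:12,.2f} USD")  # ~1,187,000
```

Run as-is it reproduces the quoted intermediate figures to within rounding; over the roughly 40-year span that works out to something on the order of a 19-20% annualized return.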

:-)