Comments

Comment by haig2 on Another Call to End Aid to Africa · 2009-04-04T06:25:32.000Z · LW · GW

Again, ambiguous language seems to derail the conversation. I'm sure she doesn't mean "stop caring about Africa, turn a blind eye, go about your way, and we'll take care of ourselves" (though the data may suggest that such a course of action would have been more productive). She means stop blindly donating money and goods that at first seem to help but in reality do more harm than good, with the exception of satisfying the donor's need to commiserate. It follows that she would love for people to think of more rational ways to help, to think about the end results of charity more than the act of being charitable.

Comment by haig2 on Higher Purpose · 2009-01-24T07:07:20.000Z · LW · GW

Altruism doesn't only mean preventing suffering; it also means increasing happiness. If all suffering were ended, altruists would still have purpose in providing creativity, novelty, happiness, etc. "Suffering" then becomes not experiencing unthinkable levels of insert_positive_emotion_here, and philanthropists will be devoted to ensuring that all sentient entities experience all they can. The post-singularity Make-A-Wish Foundation would grow rapidly, expanding both its services and its volunteer base as it operates full-time with repeat customers.

Comment by haig2 on Amputation of Destiny · 2008-12-30T06:27:45.000Z · LW · GW

Doesn't this line of thinking make the case for Intelligence Augmentation (IA) over FAI? Let me qualify that: when I say IA, I really mean friendly intelligence augmentation, as opposed to friendly artificial intelligence. If you could 'level up' all of humanity to the wisdom and moral ethos of 'friendliness', wouldn't that be the most important step to take first and foremost? If you could reorganize society and reeducate humans so as to make a friendly system at our current level of scientific knowledge and technology, that would cut the probability of existential threats to a minimum (not entirely, but as best we can) and allow for a sustainable, eudaimonic increase of intelligence towards a positive singularity outcome. Yes, that is a hard problem, but surely not harder than FAI (probably a lot less hard). It will probably take generations, and we might have to take a few steps backwards before we take further steps forward (non-existential catastrophes might provide those backward steps regardless of our choosing), but it seems like the best path. The only reasons to choose an FAI plan instead are that you 1.) think an existential threat is likely to occur very soon, 2.) want to be alive for the singularity and don't want to risk cryonics, or 3.) just fancy the FAI idea for personal, non-rational reasons.

Comment by haig2 on High Challenge · 2008-12-19T05:52:04.000Z · LW · GW

What you describe as targets over '4D states' reminds me of Finite and Infinite Games by James Carse. For example, playing a game of basketball with a winner and loser after an hour of play is a finite game; the sport of basketball overall, however, is an infinite game. Likewise, playing a specific video game to reach a score or pass the final level is a finite game, but being a 'gamer' is an infinite game, allowing ever more types of gaming to take place.

Comment by haig2 on Disappointment in the Future · 2008-12-01T11:07:44.000Z · LW · GW

Prediction can't be anything but a naive attempt at extrapolating past and current trends out into the future. Most of Kurzweil's accurate predictions are just trends about technology that most people can easily notice. Social trends are much more complex, and those predictions of Kurzweil's are off. Also, the occasional black swan is unpredictable by definition, and is usually what causes the most significant changes in our societies.

I like how sci-fi authors describe their writing not as predicting what the future will look like (that's impossible, and getting more so), but as using the future to critique the present.

Lastly, Alan Kay's quote always comes in handy when talking about future forecasting: "The best way to predict the future is to invent it."

Comment by haig2 on Chaotic Inversion · 2008-11-30T04:34:12.000Z · LW · GW

It is interesting that no one in this group of empirical materialists has suggested looking at this problem from the perspective of human physiology. If I tried painting a house for hours on end I would need to rest--my hand would be sore, I'd be weak and tired, and I'd generally lack the energy to continue. Why would exercising your brain be significantly different? If Eliezer is doing truly mentally strenuous work for hours, it is not simply a problem of willpower but of mental energy. Maintaining a high level of cognitive load will physically wear you out. The US military is experimenting with fNIRS-based neuroimaging devices to see if they can measure how much cognitive load can be put on workers in high-performance mental situations, the same way you measure how much weight a person can lift or how far someone can run.

If the problem were that he could not get going at all, then it would be more of a psychological problem such as procrastination. But it seems that he just wants to sustain long stretches of high-performance cognitive work, which unfortunately the brain cannot do. Switching to watching a video or browsing the web is your brain dropping from a run to a walk until it has rested enough.

Comment by haig2 on The Weak Inside View · 2008-11-18T23:05:26.000Z · LW · GW

How do periods of stagnant growth, such as extinction-level events in Earth's history, affect the graphs? As the dinosaurs went extinct, did we jump straight to the start of the mammalian s-curve, or was there a prolonged growth plateau that, when averaged out in the combined s-curve meta-graph, doesn't show up as significant?

A singularity-type phase shift being so steep, even if growth were to grind down in the near future and stay stagnant for hundreds of years, wouldn't the meta-graph still show an overall fit when averaged out, provided the singularity occurred after some global catastrophe?

I guess I want to know what effect periods of <= 0 growth have on these meta-graphs.
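
To make concrete what I mean by "averaged out," here is a toy sketch in Python--entirely made-up numbers, nothing from the actual growth-mode data--of how a multi-century plateau can nearly disappear when one curve is fitted over a long enough span:

```python
import numpy as np

# Toy model: steady exponential growth with a 500-year stagnation inserted.
# All numbers are illustrative, not taken from any real growth-mode data.
years = np.arange(10_000)                # 10,000 years of "history"
growth = np.full(10_000, 0.001)          # ~0.1% growth per year baseline
growth[4_000:4_500] = 0.0                # a 500-year plateau of zero growth
log_output = np.cumsum(growth)           # log of total output over time

# Fit one exponential (a straight line in log space) to the whole series.
slope, intercept = np.polyfit(years, log_output, 1)
residual = log_output - (slope * years + intercept)

print(f"fitted rate {slope:.5f} vs. true baseline 0.00100")
print(f"max deviation from the fit: {np.abs(residual).max():.3f} log units")
# The plateau barely moves the fitted rate and shows up only as a small
# bump in the residuals -- over a long enough span it "averages out."
```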

Comment by haig2 on Whither OB? · 2008-11-18T00:42:08.000Z · LW · GW

I've only been reading this blog consistently for a few months, but if there weren't thoughtful mini-essay-style posts from EY, Hanson, or someone similar, I doubt I'd stay. I actually think a weekly frequency, as opposed to daily, would be slightly better, since my attention and schedule are increasingly being taxed. The most important value this blog provides is, first, the quality of the posts and, second, the quality of the comments and discussions pertaining to them. Don't create a community for the sake of creating a community; maintain quality at all costs. That is your competitive advantage. If that isn't likely, then better to freeze the site at its height and leave it for posterity than to tarnish it.

Comment by haig2 on Selling Nonapples · 2008-11-15T07:54:48.000Z · LW · GW

So are you claiming that Brooks' whole plan was, on a whim, to just do the opposite of what the neats had been doing up till then? I thought his inspiration for the subsumption architecture was nature--the embodied intelligence of evolved biological organisms, the only existence proof of higher intelligence we have so far. To me it seems like the neats are the ones searching the larger design space, not the other way around. The scruffies have identified some kind of solution to creating intelligent machines in nature and are targeting a constrained design space inspired by it; the neats, on the other hand, are trying to create intelligence seemingly out of the Platonic world of forms.

Comment by haig2 on Efficient Cross-Domain Optimization · 2008-10-29T22:54:04.000Z · LW · GW

Jeff Hawkins, in his book On Intelligence, says something similar to Eliezer. He says intelligence IS prediction. But Eliezer says intelligence is steering the future, not just predicting it. Steering is a behavior of agency, and if you cannot peer into the source code but only see the behaviors of an agent, then intelligence would necessarily be a measure of steering the future according to preference functions. This is behaviorism, is it not? I thought behaviorism had been deprecated as a useful framework in the cognitive sciences?
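
(Eliezer has in fact proposed measuring exactly this from the outside: ask how rare the achieved outcome is in the agent's preference ordering, in bits. A rough sketch of that idea, with made-up numbers:)

```python
import math
import random

# Rough sketch of the optimization-power measure: how rare is the achieved
# outcome in the agent's preference ordering?  Only outcomes are observed,
# never source code.  All numbers here are made up.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]  # utilities of unsteered futures
achieved = 4.0                                                 # utility the agent actually hit

fraction = sum(u >= achieved for u in baseline) / len(baseline)
print(f"optimization power ~ {-math.log2(fraction):.1f} bits")
# ~15 bits: the agent squeezed the future into roughly the top 1/30,000th
# of its preference ordering.
```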

I can see where Eliezer is going with all this. The most moral/ethical/friendly AGI cannot take orders from any human, let alone be modeled on human agency to a large degree itself, and we also definitely do not want this agency to be a result of the same horrendous process of natural selection red in tooth and claw that created us.

That rules out an anthropomorphic AI, rules out evolution through natural selection, and rules out an unchecked oracle/genie-type wish-granting intelligent system (though I personally feel that a controlled (friendly?) version of the oracle AI is the best option, because I am skeptical that Eliezer or anyone else will come up with a formal theory of friendliness to impart on an autonomous agent). ((Can an oracle-type AI create a friendly AI agent? Is that a better path towards friendliness?))

Adam's comment above is misplaced, because I think Eliezer's recursively self-improving friendly intelligence optimization is a type of evolution, just not as blind as the natural selection that has played out through natural history on our Earth.

Comment by haig2 on Traditional Capitalist Values · 2008-10-17T21:04:52.000Z · LW · GW

@Eliezer: what 'evil' person has ever admitted to being 'evil'? I'm not talking about the petty thief who is sorry for his actions, but the bin Ladens or Stalins of the world, who are convinced that the ends justify their means and that they are really doing good for their people/country/future/god/whatever.

Rand's objectivism and capitalism are criticized by people who reflexively see 'selfish' and equate it with greed and all the problems of capitalism. But those critics are deluding themselves, or they just don't understand human nature and our built-in modules for self-interest. And what altruism we do have is reciprocal; that doesn't make it any less 'good', it just makes it a 'good' form of self-interested behavior that benefits both or many parties and allows for societies instead of small warring bands.

Socialism is bad because it doesn't work well--capitalism is bad because it works too well. Socialism goes against our natural instincts, capitalism in its unregulated form amplifies those natural instincts to unsustainable levels. And truly, all our economies in this world are mixed economies anyway, so that should tell you something about a 'one true way'.

Again referring to my previous post, I think most of the problems lie not with capitalist ideals but with the money systems that warp those ideals. Money is important, because bartering doesn't scale and is inefficient, and absent a post-scarcity reality where everything is abundant, money will not go away. But in its current form it is far from optimal for maximum friendliness for all.

Comment by haig2 on Traditional Capitalist Values · 2008-10-17T02:43:53.000Z · LW · GW

I think when most people complain about capitalism, it has more to do with the monetary policies and banking systems of certain implementations of capitalism than with the capitalist ideals themselves.

For instance, fiat money created by the governing body, with interest charged on the lending of that money, is a specific condition that allows for wealth inequality and the aggregation of power. I think alternative currencies have absolutely amazing potential to change society, but governments violently suppress any competition that would result in their losing control and power over the system.

Interest was lauded by the founders of the USA--the famous line that compound interest is the 'eighth wonder of the world' is variously (and probably apocryphally) attributed to Franklin--but as fascinating as compound interest is, it ultimately creates cycles of debt and unsustainable behavior.
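
As a trivial illustration of the arithmetic (made-up numbers): unpaid debt under compound interest grows exponentially, which is exactly what makes it so punishing for the borrower:

```python
# Made-up numbers, just to show the arithmetic of compounding debt.
principal = 1_000.0   # initial debt
rate = 0.07           # 7% annual interest, left unpaid

debt, years = principal, 0
while debt < 2 * principal:
    debt *= 1 + rate
    years += 1

print(f"{principal:.0f} owed at {rate:.0%} becomes {debt:.2f} in {years} years")
# Doubles in about 10-11 years (the 'rule of 72': 72 / 7 ~ 10), then doubles
# again, and again -- while incomes tend to grow far more slowly.
```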

A case that is often cited is the use of alternative currencies in the towns of Spain during the civil war. When the banks shut down and money was unavailable, the towns began to fall apart and slide into deep depression. In response, the towns took the initiative to create local exchange trading systems, and almost immediately their situations began to improve and prosperity started to return. The governing bodies, deeply afraid of losing control, shut these systems down, choosing to let the towns sink back into depression rather than lose power.


Now, if you believe that friendly AGI is around the corner and will bring about the singularity sometime before our current social systems collapse, then you probably do not want to rock the boat. You would not care much for sustainability; you would want to keep accelerating innovation at all costs and keep the system as-is until our friendly AGI savior arrives to save the world.

If, on the other hand, you are skeptical that such a singularity is inevitable, or you assign a higher probability to the collapse of the social systems enabling such accelerating innovation before the singularity arrives, then thinking about sustainability starts to become really important.

Comment by haig2 on Entangled Truths, Contagious Lies · 2008-10-16T07:32:45.000Z · LW · GW

A new method of 'lie detection' is being perfected using functional near-infrared imaging of the prefrontal cortex:

http://www.biomed.drexel.edu/fNIR/Contents/deception/

In this technique the device actually measures whether a certain memory is being recalled or generated on the spot. For example, if you are interrogating a suspect who denies ever having been at a crime scene, and you show them a picture of the scene, you can deduce whether they have actually seen it by measuring whether their brain is recalling sensory data from memory or newly creating and storing it.

Comment by haig2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T03:04:32.000Z · LW · GW

What you are getting at is that the ends justify the means only when the means don't affect the ends. When a human is part of the means, carrying out the means may change the human and thus change the ends. In summary, reflexivity is a bitch. This is a reason why social science and economics are so hard--the subjects being modeled change as a result of the modeling process.
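
Here is a deliberately silly toy model of that reflexivity (my own construction, in the spirit of the El Farol bar problem): a forecaster publishes a crowd prediction, people react to the published prediction, and the forecast thereby changes the very thing it is forecasting:

```python
# Toy reflexivity: publishing a forecast changes the outcome being forecast.
def attendance(predicted_crowd: float) -> float:
    """People avoid the venue in proportion to how crowded it is predicted to be."""
    base_interest = 100.0
    return base_interest - 0.8 * predicted_crowd

prediction = 100.0  # naive forecast: everyone interested shows up
for step in range(8):
    actual = attendance(prediction)
    print(f"step {step}: predicted {prediction:5.1f}, actual {actual:5.1f}")
    prediction = actual  # update on the outcome... which shifts the next outcome

# The forecaster chases a target that its own forecasts keep moving: the
# subjects being modeled change as a result of the modeling process.
```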

This is a problem with any sufficiently self-reflective mind, not with AIs that do not change their own rules. A simple mechanical narrow AI, programmed to roam about collecting sensory data, weigh the risk of people dying in traffic collisions, and step in only to minimize the number of deaths, would be justified in allowing, or even causing, the smaller number of deaths.

The concept of corruption doesn't exist in that context; the act is just a mechanism. A person can transition from an uncorrupted state to a corrupted state only because the rules governing the person's behavior are subject to modification, in a fashion so complex that it can happen under the radar of the very person it is happening to--because the person is the behavior caused by the rules, and when the rules change, the person changes. We are not in as much control as we would like to think.

When the Eastern religions preach that the ego is the root of all our problems, they may be more right than we give them credit for. Ego is self-identity, which arises out of the ability to introspect and separate the aggregate of particles constituting 'I' from the rest of the particles in the environment. How would you go about building an AGI that doesn't have the false duality of self and non-self? Without ego, corruption does not exist.

Imagine that instead of an embodied AGI, or even a software AGI running on some black-box computational machine sitting in a basement, the friendly AGI takes the form of an intelligent environment--say, a superintelligent house. In the house there exist safeguards that disallow any unfriendly action. The house isn't conscious; it just adds a layer of friendliness on top of harsh reality. This may be a fruitful way of thinking about friendliness that avoids all the messy reflexivity.

Fun stuff this. I am enjoying these discussions.

Comment by haig2 on Why Does Power Corrupt? · 2008-10-14T11:18:15.000Z · LW · GW

Sorry for the triple post, one more addition. Larry Lessig just gave a lecture on corruption and the monetary causes of certain types of corruption prevalent in our society.

http://www.lessig.org/blog/2007/10/corruption_lecture_alpha_versi_1.html

Comment by haig2 on Why Does Power Corrupt? · 2008-10-14T11:11:08.000Z · LW · GW

Short addition to my previous post.

I've been thinking about how to apply the notion of recursive self-improvement to social structures instead of machines. I think it actually offers (though counterintuitively) a simpler setting in which to think about friendliness optimization. If anyone else is interested, feel free to email me. I'm planning on throwing up a site/wiki about this topic and may need help.

haig51 AT google mail

Comment by haig2 on Why Does Power Corrupt? · 2008-10-14T11:00:32.000Z · LW · GW

That is why systems of checks and balances were eventually created (i.e., democracy). Such social systems try to quell the potential for power aggregation and abuse, though as current events show, there will always be ways for power-hungry people to game the system (and the best way to game the system is to run it and change the rules in your favor, creating the illusion that you still abide by them).

I always felt that the best system would be one of two extremes: 1.) a benevolent dictator (friendly superintelligence?) or 2.) massively decentralized libertarian socialism (or similar).

Notice that it's an all-or-nothing dichotomy based on the absolute 'goodness' of a potential benevolent dictator--meaning that if it is possible to have an absolutely perfect benevolent dictator, then it would be best to concentrate power with it, but the absence of perfection requires the exact opposite: spreading power out as widely as possible.

Someone contemplating changing the world for the better (or best) would necessarily need to decide which camp they fall in, #1 or #2. If #1 (like EY I presume), your most important duty would be to make the creation of this benevolent dictator your highest priority. If in camp #2 (like myself), your highest priority would be to create a system that uses the knowledge of human cognition, biases, and tendencies to diffuse power aggregation/abuse while trying to maximize the pursuit of happiness.

I think work on both can be done concurrently and may be complementary. Working on #2 might help keep the world from blowing up until #1 can be completed, and work on #1 can give insights into how to tune #2 (like EY's writings inspiring me to work on #2 etc.).

Given a choice, we would all want #1 as soon as possible, but being a pragmatist, #2 might be the more fruitful position for most people.

Comment by haig2 on Shut up and do the impossible! · 2008-10-08T23:38:53.000Z · LW · GW

Did Einstein try to do the impossible? No, yet looking back it seems like he accomplished an impossible (for that time) feat, doesn't it? So what exactly did he do? He worked on something that 1.) he felt was important and, probably more to the point, 2.) he was passionate about.

Did he run the probabilities of whether he would accomplish his goal? I don't think so; if anything, he used the fact that the problem had not yet been solved, and was of such difficulty, only to fuel his curiosity and desire to work on it even more. He worked at it every day because he was receiving value simply by doing the work, from being on the journey. He couldn't, or wouldn't want to, be doing anything else (the patent clerk job paid the bills, but his mind was elsewhere).

So instead of worrying about whether you are going to solve an impossible problem, just worry about whether you are doing something you love; if you are a smart and sincere person, that thing you love will more often than not turn out to be pretty important.

Ben Franklin wrote something relevant when talking about playing games: "...the persons playing, if they would play well, ought not much to regard the consequence of the game, for that diverts and makes the player liable to make many false open moves; and I will venture to lay it down for an infallible rule, that, if two persons equal in judgment play for a considerable sum, he that loves money most shall lose; his anxiety for the success of the game confounds him. Courage is almost as requisite for the good conduct of this game as in a real battle; for, if he imagines himself opposed by one that is much his superior in skill, his mind is so intent on the defensive part, that an advantage passes unobserved."

Comment by haig2 on Make an Extraordinary Effort · 2008-10-08T07:48:55.000Z · LW · GW

I think most people's feedback threshold requires some return on their efforts within a relatively short time period. It takes monk-like patience to work on something indefinitely without any interim returns. So I don't think the point in contention is whether people are willing to make an extraordinary effort; it is whether they are willing to make an extraordinary effort without extraordinary returns within a time span matching their feedback threshold. Even in Eastern cultures, where many people believe that enlightenment in the strong sense is possible by meditating your whole life, there is a reason there are only a few practicing monks.

Comment by haig2 on Beyond the Reach of God · 2008-10-06T11:29:00.000Z · LW · GW

On the existential question of our pointless existence in a pointless universe, my perspective tends to oscillate between two extremes:

1.) In the more pessimistic (and currently the only rationally defensible) case, I view my mind and existence as just a pattern of information processing running on messy organic wetware, and that is all 'I' will ever be. Uploading is not immortality; it just duplicates that specific mind-pattern at that specific instant. An epsilon of time after the 'upload' event, that mind-pattern is no longer 'me' and will quickly diverge as it acquires new experiences. An alternative would be a destructive copy, where the original me (i.e., the me typing this right now) is destroyed at or after the instant of upload. Or I might gradually replace each synapse of my brain, one by one, with a simulator wirelessly transmitting the dynamics to the upload computer, until all of 'me' is in there and the shell of my former self is discarded. Either way, 'I' am destroyed eventually--maybe uploading is a fancier form of preserving one's thoughts for posterity, as creating culture and forming relationships is pre-singularity, but it does not change the fact that the original meatspace brain is eventually destroyed, no matter what.

2.) The second case, what I might call an optimistic appeal to ignorance, is to believe that though the universe appears pointless according to our current evidence, there may be some data point in the future that reveals something we are ignorant of at the moment. Though our current map reveals a neutral territory, the map might be incomplete. One speculative position taken directly from physics is the idea that I am a Boltzmann brain. If such an idea can be taken seriously (and it is), then surely there are other theoretically defensible positions in which my consciousness persists in some timeless form one way or another. (Even Bostrom's simulation argument gives another avenue of possibility.)

I guess my two positions can be simplified into:
1.) What we see is all there is and that's pretty fucked up, even in the best case scenario of a positive singularity.

2.) We haven't seen the whole picture yet, so just sit back, relax, and as long as you have your towel handy, don't panic.

Comment by haig2 on The Magnitude of His Own Folly · 2008-10-01T08:08:00.000Z · LW · GW

I'm relatively new to this site and have been trying to read the backlog this past week, so maybe I've missed some things, but from my vantage point it seems like what you are trying to do, Eliezer, is come up with a formalized theory of friendly AGI that will later be implemented in code using, I assume, current software development tools on current computer architectures. Also, your approach to this AGI is some sort of Bayesian optimization process that is 'aligned' properly so as to 'level up' in such a way as to become and stay 'friendly', or benevolent, towards humanity and presumably all sentient life and the environment that supports it. Oh yeah, and this Bayesian optimization process is apparently recursively self-improving, so that you would only need to code some seedling of it (a generative process, like a Mandelbrot set) and know that it will blossom along the right course. That, my friends, is a really tall order, and I do not envy anyone who takes on such a formidable task. I'm tempted to say that it is not even humanly possible (without a Manhattan Project, and even then maybe not), but I'll be Bayesian and say the probability is extremely low.

I think you are a very bright and thoughtful young guy, and from what I've read you seem like more of a philosopher than an engineer or scientist. That isn't a bad thing, but the transition from philosophizing to engineering is not trivial, especially when the philosophizing concerns such complex issues.

I can't even imagine trying to create some trivial piece of new software without prototyping and playing around with drafts before I had some idea of what it would look like. This isn't Maxwell's equations; this is messy, self-reflective, autonomous general intelligence, and there is no simple, elegant theory for such a system. So get your hands dirty and take on a more agile work process. Couldn't you at least create a particular component of the AI, such as a machine vision module, to show that your general approach is feasible? Or do you fear that it would spontaneously turn into Skynet? Does your architecture even have modules, or are you planning some super-elegant Bayesian quine? Do you even have an architecture in mind?

Anyway, good luck, and I'll continue reading, if for nothing else than the entertainment.