Positioning oneself to make a difference

post by Mitchell_Porter · 2010-08-18T23:54:38.901Z · LW · GW · Legacy · 52 comments

Last weekend, while this year's Singularity Summit took place in San Francisco, I was turning 40 in my Australian obscurity. 40 is old enough to be thinking that I should just pick a SENS research theme and work on it, and also move to wherever in the world is most likely to have the best future biomedicine (that might be Boston). But at least since the late 1990s, when Eliezer first showed up, I have perceived that superintelligence trumps life extension as a futurist issue. And since 2006, when I first grasped how something like CEV could be an answer to the problem of superintelligence, I've had it before me as a model of how the future could and should play out. I have "contrarian" ideas about how consciousness works, but they do not contradict any of the essential notions of seed AI and friendly AI; they only imply that those notions would need to be adjusted and fitted to the true ontology, whatever that may be.

So I think this is what I should be working on - not just the ontological subproblem, but all aspects of the problem. The question is, how to go about this. At the moment, I'm working on a lengthy statement of how I think a Friendly Singularity could be achieved - a much better version of my top-level posts here, along with new material. But the main "methodological" problem is economic and perhaps social - what can I live on while I do this, and where in the world and in society should I situate myself for maximum insight and productivity. That's really what this post is about.

The obvious answer is, apply to SIAI. I'm not averse to the idea, and on occasion I raise the possibility with them, but I have two reasons for hesitation.

The first is the problem of consciousness. I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard "scientific" way of thinking about it is a form of property dualism, even if people won't admit this to themselves. All the quantum stuff you hear from me is just an idea about how to restore a type of monism. I actually think it's a conservative solution to a very big problem, but to believe that you would have to agree with me that the other solutions on offer can't work (as well as understanding just what it is that I propose instead).

This "reason for not applying to SIAI" leads to two sub-reasons. First, I'm not sure that the SIAI intellectual environment can accommodate my approach. Second, the problem with consciousness is of course not specific to SIAI, it is a symptom of the overall scientific zeitgeist, and maybe I should be working there, in the field of consciousness studies. If expert opinion changes, SIAI will surely notice, and so I should be trying to convince the neuroscientists, not the Friendly AI researchers.

The second top-level reason for hesitation is simply that SIAI doesn't have much money. If I can accomplish part of the shared agenda while supported by other means, that would be better. Mostly I think in terms of doing a PhD. A few years back I almost started one with Ben Goertzel as co-supervisor, which would have looked at implementing a CEV-like process in a toy physical model, but that fell through at my end. Lately I'm looking around again. In Australia we have David Chalmers and Marcus Hutter. I know Chalmers from my quantum-mind days in Arizona ten years ago, and I met with Hutter recently. The strong interdisciplinarity of my real agenda makes it difficult to see where I could work directly on the central task, but also implies that there are many fields (cognitive neuroscience, decision theory, various quantum topics) where I might be able to limp along with partial support from an institution.

So that's the situation. Are there any other ideas? (Private communications can go to mporter at gmail.)

52 comments

Comments sorted by top scores.

comment by cousin_it · 2010-08-19T10:16:40.736Z · LW(p) · GW(p)

Why do you think consciousness should be explained by quantum phenomena at all?

We already have a perfectly fine, worked-out sort of "dualism": algorithm vs substrate. When a computer runs a quicksort algorithm, the current pivot position certainly "exists" from the program's point of view, but trying to find an "ontologically basic" physical entity corresponding to it will make you massively confused! In fact, even trying to determine whether such-and-such machine implements such-and-such algorithm, whether two identical computations count as one, etc. will likely lead you into confusion. Even Eliezer fell for that.
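
A minimal Python sketch of the quicksort example (an illustration; the function and variable names are not from the comment): the pivot_index variable is perfectly real at the level of the algorithm, yet nothing in the running hardware is "the current pivot position" in any ontologically basic way.

    # In-place quicksort using a Lomuto-style partition. The variable
    # pivot_index belongs to the algorithmic description; physically there
    # is only charge moving through transistors.
    def quicksort(items, lo=0, hi=None):
        if hi is None:
            hi = len(items) - 1
        if lo >= hi:
            return items
        pivot = items[hi]          # the pivot value
        pivot_index = lo           # the "current pivot position"
        for i in range(lo, hi):
            if items[i] < pivot:
                items[i], items[pivot_index] = items[pivot_index], items[i]
                pivot_index += 1
        items[pivot_index], items[hi] = items[hi], items[pivot_index]
        quicksort(items, lo, pivot_index - 1)
        quicksort(items, pivot_index + 1, hi)
        return items

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]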

My intuition says that a successful path to solving such questions shouldn't begin with taking them at face value. Rather, you should try to find a perspective that makes the answer follow (yes, mathematically) from simpler assumptions that don't take the answer for granted in advance. I'm not very good at this game, but I was pretty happy upon realizing how exactly the appearance of probabilities can arise in a purely deterministic world, and how exactly the appearance of the Born rule might arise from a deterministic MWI world with no such rule. Likewise, I hold that the puzzle of consciousness must someday be solvable normally, by a direct mathematical construction that doesn't appeal to philosophy. If you don't see a way, this doesn't mean there is no way. Free will and probability looked just as mysterious, after all.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-08-20T09:47:42.262Z · LW(p) · GW(p)

My main criterion for whether a computational property is objectively present in a physical system, or is a matter of interpretation, is whether it involves semantics. Pure physics only gives you state machines with no semantics. In this case, I think quicksort comes quite close to being definable at the state-machine level. "List" sounds representational, because usually it means "list of items represented by computational tokens", but if you think of it simply as a set of physical states with an ordering produced by the intrinsic physical dynamics, then "sorting a list" can refer to a meta-dynamics which rearranges that ordering, and "current pivot position" can be an objective and strictly physical property.

The property dualism I'm talking about occurs when basic sensory qualities like color are identified with such computational properties. Either you end up saying "seeing the color is how it feels" - and "feeling" is the extra, dual property - or you say there's no "feeling" at all - which is denial that consciousness exists. It would be better to be able to assert identity, but then the elements of a conscious experience can't really be coarse-grained states of neuronal ensembles, etc - that would restore the dualism.

We need an ontology which contains "experiences" and "appearances" (for these things undoubtedly exist), which doesn't falsify their character, and which puts them in interaction with the atomic aggregates we know as neurons, which presumably also exist. Substance dualism was the classic way to do this - the soul interacting with the pineal gland, as in Descartes. The baroque quantum monadology I've hinted at is the only way I know to introduce consciousness into physical causality that avoids both substance dualism and property dualism. Maybe there's some other way to do it, but it's going to be even weirder, and seems like it should still involve what we would now call quantum effects, because the classical ontology just does not contain minds.

I identify with your desire to solve the problem "mathematically" to a certain point. Husserl, the phenomenologist, said that distinct ontological categories are to be understood by different "eidetic sciences". Mathematics, logic, computer science, theoretical physics, and maybe a few other disciplines like decision theory, probability theory, and neoclassical economics, are all eidetic. Husserl's proposition was that there should also be eidetic sciences for all the problematic aspects of consciousness. Phenomenology itself was supposed to be the eidetic science of consciousness, as well as the wellspring of the other eidetic sciences, because all ontology derives from phenomenology somehow, and the eidetic sciences study "regional ontologies", aspects of being.

The idea is not that everything about reality is to be discovered a priori and through introspection. Facts still have to come through experience. But experience takes a variety of forms: along with sensory experience, there's logical experience, reflective experience, and perhaps others. Of these, reflective experience is the essence of phenomenology, and the key to developing new eidetic sciences; that is, to developing the concepts and methods appropriate to the ontological aspects that remain untheorized, undertheorized, or badly theorized. We need new ideas in at least two areas: the description of consciousness, and the ontology of the conscious object. We need new and better ideas about what sort of a thing could "be conscious", "have experiences" like the ones we have, and fit into a larger causal matrix. And then we need to rethink physical ontology so that it contains such things. Right now, as I keep asserting, we are stuck with property dualism because the things of physics, in any combination, are fundamentally unlike the thing that is conscious, and so an assertion of identity is not possible.

For more detail, see everything else I've written on this site, or wait for the promised paper. :-)

comment by timtyler · 2010-08-19T06:25:07.139Z · LW(p) · GW(p)

I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard "scientific" way of thinking about it is a form of property dualism, even if people won't admit this to themselves.

I am among those who "won't admit this to themselves". In fact, I think it is nonsense - there is no "gap".

Replies from: PhilGoetz
comment by PhilGoetz · 2010-08-19T18:06:44.782Z · LW(p) · GW(p)

Tim, that's an astonishing assertion. It sounds to me like you just claimed that we fully understand the mechanism that generates consciousness.

Replies from: timtyler
comment by timtyler · 2010-08-19T18:52:45.587Z · LW(p) · GW(p)

Science doesn't understand how the brain works well enough to make one - but that is no more a "hole in the scientific understanding of the world" than embryology, or muscles, or metabolism - which we don't completely understand either.

I don't agree with the bits about "quantum entanglement" or "dualism" either - this material is just all wrong, in my view.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-08-26T22:56:48.623Z · LW(p) · GW(p)

I wouldn't worry about the "quantum entanglement" part, which is not needed for the more general claim that we don't know squat about consciousness.

Anyway, I disagree - embryology, muscles, and metabolism are not mysteries in the way that consciousness is. I find it astonishing you would even make the comparison.

comment by rwallace · 2010-08-19T18:36:03.610Z · LW(p) · GW(p)

Granted that we don't yet understand consciousness, and supposing for the sake of argument it might require special physics (though I don't think this is the case), I still think you had it right the first time.

Understanding consciousness is not possible yet. It's not the kind of problem that's going to be solved by armchair thought. We need an example working in a form where we can see the moving parts. In other words, we need uploading. (Or, if you believe the requirement for special physics means uploading can't work, we need to get to the point where we have the technology to do it, and observe it failing where standard theory would have had it succeeding, or otherwise find an experiment to distinguish the special physics from the currently understood kind.)

And understanding consciousness is not necessary yet. It's something that can quite happily be left for another few generations. It's not on the critical path to anything. As an AI researcher, I can tell you that if an explanation of consciousness fell into my lap tomorrow, I would be intellectually fascinated, but it would do nothing to solve any of the problems I'm currently facing.

SENS, by contrast, obviously is on the critical path to, well, everything else -- it's hard to solve any problems at all when you're dead.

So my recommendation is to go back to your original plan.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-20T08:18:02.495Z · LW(p) · GW(p)

As an AI researcher, I can tell you that if an explanation of consciousness fell into my lap tomorrow, I would be intellectually fascinated, but it would do nothing to solve any of the problems I'm currently facing.

Part of what we'd like an AI to be able to do is minimize pain and maximize pleasure. How do we go about building such an AI, if we don't know which patterns of neuron firings (or chemical reactions, or information processing, or whatever) constitute pain, and which constitute pleasure? Do you not consider that to be part of the problem of consciousness, or related to it?

(Well, one way is if the AI could itself solve such problems, but I'm assuming that's not what you meant...)

Replies from: rwallace
comment by rwallace · 2010-08-20T09:08:06.465Z · LW(p) · GW(p)

Huh? We already know that, we've known it since the 1950s. As far as I'm aware, the knowledge hasn't really helped us solve our problems.

comment by Perplexed · 2010-08-19T04:15:51.027Z · LW(p) · GW(p)

If you are considering

  1. Doing a PhD

  2. In philosophy

  3. In Oz

  4. At the age of 40

then you need to talk to the albino gorilla.

Replies from: CarlShulman, wedrifid
comment by CarlShulman · 2010-08-20T08:03:33.514Z · LW(p) · GW(p)

Are you just referring to the rough philosophy job market, or some specific posts at that blog?

comment by wedrifid · 2010-08-19T04:59:05.293Z · LW(p) · GW(p)

Out of motivated curiosity, are you familiar with other elements of academia in Australia?

Replies from: Perplexed
comment by Perplexed · 2010-08-19T05:14:23.773Z · LW(p) · GW(p)

Not at all, sorry. I'm a Yank. I just know Wilkins by way of various web fora, and the names of a few other philosophers through Wilkins.

Replies from: jswilkins
comment by jswilkins · 2010-08-23T10:22:53.020Z · LW(p) · GW(p)

I would love to have a PhD student of promise, but it would not be good for a student to do a doctorate with me while I am at a small university. Much better to do a thesis via a good quality university and have me as an ancillary advisor, and then only if the topic is squarely in my field (philosophy of biology). The future of a good candidate depends, unfairly perhaps, upon the status of the university and department in which the doctorate is done.

Replies from: Perplexed, komponisto
comment by Perplexed · 2010-08-23T12:54:17.859Z · LW(p) · GW(p)

Thx, John. That kind of advice is exactly the reason I suggested that Mr. Porter get in touch with you.

Just curious: Did you stumble upon rational Harry Potter as an indirect result of my dragging you into this?

comment by komponisto · 2010-08-23T10:54:23.521Z · LW(p) · GW(p)

Much better to do a thesis via a good quality university. ... The future of a good candidate depends, unfairly perhaps, upon the status of the university and department in which the doctorate is done.

Out of curiosity, how high does a university's prestige level need to be in order to qualify as "good quality" in your view?

comment by PhilGoetz · 2010-08-19T02:55:50.734Z · LW(p) · GW(p)

The problem of consciousness is really hard, and IMHO we don't dare try to make a friendly AI until we've got it solidly nailed down. (Well, IMHO making the "One Friendly AI to Rule them All" is just about the worst thing we could possibly do; but I'm speaking for the moment from the viewpoint of an FAI advocate.)

The idea of positioning yourself brings to mind chess as a metaphor for life. When I started playing chess, I thought chess players planned their strategies many moves in advance. Then I found out that I played best when I just tried to make my position better on each move, and opportunities for checkmate would present themselves. Then I tried to apply this to my life. Then, after many years of improving my position in many different ways without having a well-researched long-term plan, I realized that the game of life has too large a board for this to work as well as it does in chess.

I'd like to see posts on life lessons learned from chess and go, if someone would care to write them.

Replies from: None, None, xamdam, Morendil
comment by [deleted] · 2010-08-19T14:52:50.290Z · LW(p) · GW(p)

I don't play chess, but it occurs to me that what you're talking about sounds like applying greedy algorithms to life. And I realized recently that that's what I do. At any given moment, take the action that is the biggest possible step towards your goal.

For example: You're trying to cut expenses. The first step you make is to cut your biggest optional expense. (Analogously: first deal with your biggest time sink, your biggest worry.) A lot of people start with the little details or worry about constructing a perfect long-term plan; my instinct is always to take the biggest step in the right direction that's possible right now.
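
A toy sketch of that greedy rule (the expense names and numbers are invented for the illustration): repeatedly cut whichever optional expense is currently the largest, until a savings target is met.

    # Greedy expense-cutting: always remove the single biggest optional
    # expense first, stopping once the savings target is reached.
    def greedy_cuts(expenses, target_savings):
        # expenses: dict mapping an optional expense to its monthly cost
        remaining = dict(expenses)
        saved, cuts = 0.0, []
        while saved < target_savings and remaining:
            name = max(remaining, key=remaining.get)  # biggest remaining expense
            saved += remaining.pop(name)
            cuts.append(name)
        return cuts, saved

    cuts, saved = greedy_cuts(
        {"cable": 90.0, "eating out": 250.0, "gym": 40.0, "streaming": 30.0},
        target_savings=300.0,
    )
    print(cuts, saved)  # ['eating out', 'cable'] 340.0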

comment by [deleted] · 2010-08-19T13:40:03.387Z · LW(p) · GW(p)

The prime tenet of successful strategy in any domain - chess, life, whatever - is "always act to increase your freedom of action". In essence, the way to deal with an uncertain future is to give yourself as many ways of compensating for it as possible. (Edit: removed confused relationship to utility maximization).

It's much more difficult to apply this to a life-sized board, but it's still a very strong heuristic.

Replies from: ciphergoth, orthonormal
comment by Paul Crowley (ciphergoth) · 2010-08-19T14:01:10.307Z · LW(p) · GW(p)

(It can also be thought of as a 'dumb' version of utility maximization, where the utility of every possibility is set to 1).

No, this gives a utility of 1 to every action. You have to find some way to explicitly encode for the diversity of options available to your future self.

Replies from: Emile, None
comment by Emile · 2010-08-19T14:20:04.060Z · LW(p) · GW(p)

If you're programming a chess AI, that would translate into a heuristic for the "expected utility" of a position as a function of the number of moves you can make in that position (in addition to being a function of the number of pieces the other player has).
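
A hypothetical sketch of such a heuristic (the Position fields and weights are invented placeholders; a real engine would compute them from the board state): score a position by your mobility minus the opponent's, plus a material term.

    from dataclasses import dataclass

    @dataclass
    class Position:
        my_moves: int        # number of legal moves available to me
        their_moves: int     # number of legal moves available to the opponent
        my_material: int     # e.g. summed piece values
        their_material: int

    def mobility_eval(pos, w_mobility=0.1, w_material=1.0):
        # Higher scores favour positions with more options and more material.
        mobility = pos.my_moves - pos.their_moves
        material = pos.my_material - pos.their_material
        return w_mobility * mobility + w_material * material

    # With equal material, the position that keeps more options open scores higher.
    a = Position(my_moves=34, their_moves=20, my_material=39, their_material=39)
    b = Position(my_moves=22, their_moves=28, my_material=39, their_material=39)
    print(mobility_eval(a) > mobility_eval(b))  # True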

comment by [deleted] · 2010-08-19T17:05:50.590Z · LW(p) · GW(p)

Hrm, I'm not sure if I just miscommunicated or I'm misunderstanding something about utility calculations. Can you clarify your correction?

Replies from: WrongBot, ciphergoth
comment by WrongBot · 2010-08-19T17:31:34.429Z · LW(p) · GW(p)

Utility calculations are generally used to find the best course of action, i.e. the action with the highest expected utility. If every possible outcome has a utility set to 1, a utility maximizer will choose at random because all actions have equal expected utility. I think you're proposing maximizing the total utility of all possible future actions, but I'm pretty sure that's incompatible with reasoning probabilistically about utility (at least in the Bayesian sense). 0 and 1 are forbidden probabilities and your distribution has to sum to 1, so you don't ever actually eliminate outcomes from consideration. It's just a question of concentrating probabilities in the areas with highest utility.

Does that make any sense at all?

(Ciphergoth's answer to your question is approximately a more concise version of this comment.)

Replies from: None
comment by [deleted] · 2010-08-19T17:42:32.841Z · LW(p) · GW(p)

You're right both in my intended meaning and why it doesn't make sense - thanks.

comment by Paul Crowley (ciphergoth) · 2010-08-19T17:27:45.683Z · LW(p) · GW(p)

The expected utility is the sum of utilities weighted by probability. The probabilities sum to 1, and since the utilities are all 1, the weighted sum is also 1. Therefore every action scores 1. See Expected utility hypothesis.
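
A quick numerical check of that point (the action names and probabilities are invented): with every utility set to 1, any probability distribution over outcomes gives an expected utility of exactly 1, so no action is preferred over any other.

    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs whose probabilities sum to 1
        return sum(p * u for p, u in outcomes)

    actions = {
        "keep options open": [(0.5, 1.0), (0.25, 1.0), (0.25, 1.0)],
        "commit early":      [(0.75, 1.0), (0.25, 1.0)],
    }
    for name, outcomes in actions.items():
        print(name, expected_utility(outcomes))  # both print 1.0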

Replies from: None
comment by [deleted] · 2010-08-19T17:37:54.074Z · LW(p) · GW(p)

Thanks. (Edit: My intended meaning doesn't make sense, since # of possible outcomes doesn't change, only their probabilities do. Still a useful heuristic, but tying it to utility is incorrect).

comment by orthonormal · 2010-08-24T18:02:31.584Z · LW(p) · GW(p)

The way chess is different from life is that it's inherently adversarial; reducing your opponent's freedom of action is as much of a win as increasing yours (especially when you can reduce your opponent's options to "resign" or "face checkmate").

And I don't think that heuristic applies without serious exceptions in life either.

comment by xamdam · 2010-08-19T09:08:33.017Z · LW(p) · GW(p)

I'd like to see posts on life lessons learned from chess and go, if someone would care to write them.

Here is a whole book of them (pretty good, IMO): How Life Imitates Chess by Garry Kasparov.

comment by Morendil · 2010-08-19T08:15:02.530Z · LW(p) · GW(p)

I'd like to see posts on life lessons learned from chess and go, if someone would care to write them.

Go has many lessons, but you have to be somewhat tentative about taking them to heart, at least until you reach those ethereal high dan levels of learning. (That's one lesson right there.)

comment by Johnicholas · 2010-08-19T03:47:30.935Z · LW(p) · GW(p)

If you accept funding to do something to help the world, you're not helping the world unless you're underpaid, and the degree you're helping the world is proportional to the degree you're underpaid.

I'd advocate that you become self-funding ASAP, in a peripherally related field. This has a few benefits. First, your paid work will require you to obtain some real skills and provide some reality checks - countering ivory-tower-ish tendencies to some extent. Second, the ideas you bring to the table from your paid work will add diversity to the SIAI/LessWrong/existential risks community. Third, you will have fewer structural incentives to defend SIAI/LessWrong's continued existence.

Replies from: rwallace, wedrifid, Mitchell_Porter, Vladimir_Nesov, Jonathan_Graehl, PhilGoetz
comment by rwallace · 2010-08-19T18:20:39.278Z · LW(p) · GW(p)

If you accept funding to do something to help the world, you're not helping the world unless you're underpaid, and the degree you're helping the world is proportional to the degree you're underpaid.

This is not remotely true. The implicit assumption is that life is a zero-sum game, and payment constitutes the annihilation of wealth. In reality, payment constitutes the transfer of wealth: when you spend your money, you're providing income to the people who provide you with goods and services, and enabling them to purchase goods and services from their own suppliers, etc. Economics is a positive-sum game.

Replies from: Johnicholas
comment by Johnicholas · 2010-08-19T18:44:46.763Z · LW(p) · GW(p)

Okay - I grant you that economics is a positive-sum game, but charitable work is not different from other work in this way.

Drawing a salary from working for a nonprofit organization isn't (by your argument) more benevolent than drawing a salary from working at a for-profit organization.

Replies from: rwallace
comment by rwallace · 2010-08-19T19:14:23.893Z · LW(p) · GW(p)

Of course. The only reason to prefer working for a nonprofit organization is if it happens to be the case that the job where you think you can make the largest positive difference is only being done by nonprofit organizations (or if only that kind has an opening for you).

comment by wedrifid · 2010-08-19T04:32:49.755Z · LW(p) · GW(p)

If you accept funding to do something to help the world, you're not helping the world unless you're underpaid, and the degree you're helping the world is proportional to the degree you're underpaid.

OR, you manage to secure funding from people or sources that would otherwise have been wasted or used inefficiently.

Replies from: Johnicholas
comment by Johnicholas · 2010-08-19T05:04:02.570Z · LW(p) · GW(p)

I don't understand. Surely someone who volunteers as a grant-getter is being more benevolent than someone who accepts a wage to work as a grant-getter. Unpaid volunteering is exactly analogous to accepting a wage and then turning around and donating it, which is surely praiseworthy.

Replies from: wedrifid
comment by wedrifid · 2010-08-19T05:07:01.853Z · LW(p) · GW(p)

I don't disagree with either of those statements, or your overall position. Indeed, I'm actually taking your suggested approach myself. I would go as far as to say that any direct participation that I have in research in my chosen field is less benevolent than a pure focus on wealth creation (and harvesting).

I should have included disclaimers to that effect when exploring, as I was, a technical curiosity.

comment by Mitchell_Porter · 2010-08-20T07:33:33.816Z · LW(p) · GW(p)

you're not helping the world unless you're underpaid

I guess you mean "paid less than you want" or "paid less than the industry standard", rather than "not paid enough". Obviously, to do a job you need to be paid enough to do the job. I have been genuinely poor my whole life and it makes everything difficult. It was a late and horrifying discovery when I saw that people with PhDs had an annual income greater than my decadal income, and realized what a difference that would have made. Essentially, I have always done the bare minimum necessary to keep myself alive, and then tried to work directly on whatever seemed the most important at the time. I have had no academic career to speak of; if I had published papers, rather than just writing emails and blog comments, my situation would be completely different.

comment by Vladimir_Nesov · 2010-08-19T17:15:41.450Z · LW(p) · GW(p)

If you accept funding to do something to help the world, you're not helping the world unless you're underpaid, and the degree you're helping the world is proportional to the degree you're underpaid.

Therefore, you can provide unlimited help to the world by refusing to be paid at all.

Replies from: Johnicholas
comment by Johnicholas · 2010-08-19T17:30:38.096Z · LW(p) · GW(p)

Was this facetious? Surely someone who donates all of their time is donating a finite value equivalent to the cost of replacing them.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-19T19:50:14.556Z · LW(p) · GW(p)

You are conflating work and money donation. You can implicitly donate money to a cause by accepting less payment, but it's not related to how well you are furthering the cause through your work.

Replies from: steven0461
comment by steven0461 · 2010-08-19T21:28:31.970Z · LW(p) · GW(p)

I think Johnicholas is assuming a model where your pay comes from an organization that would otherwise use the money to help the world in other ways, and you're "underpaid" if and only if the work you are paid to do is more helpful than the alternative uses of the money would have been.

Of course, if you're being paid by an organization that would not otherwise use the money well, it's extremely easy to be "underpaid" in this sense.
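
A toy version of that model (all numbers invented): on this reading, you are "underpaid" exactly when the value of the work you are paid to do exceeds what the funder could have bought by spending your salary on its next-best alternative.

    def underpaid(value_of_my_work, salary, value_per_dollar_alternative):
        # "Underpaid" in this sense: my work beats the counterfactual
        # use of the same money.
        return value_of_my_work > salary * value_per_dollar_alternative

    # Salary of 50,000 and an alternative use worth 1.2 "units" per dollar:
    print(underpaid(value_of_my_work=80_000, salary=50_000,
                    value_per_dollar_alternative=1.2))  # True: 80,000 > 60,000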

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-19T21:38:16.308Z · LW(p) · GW(p)

In this sense, all useful work is "underpaid".

comment by Jonathan_Graehl · 2010-08-19T04:33:20.744Z · LW(p) · GW(p)

This comment puzzled me at first. I agree with the principle behind it, but I don't think the conclusion follows.

The principle seems to be: if everyone had full knowledge of the utility they owed to every other person's actions, and there were a mechanism for frictionlessly negotiating a sort of blackmail (I'll stop doing it unless I get compensated exactly for the utility I give others, out of principle), then what people end up getting paid is in some ideal sense the proper amount. I feel like this probably represents some sort of optimality under an implied expressed utility. I may have mangled my economics significantly here.

So, saying that someone who receives money to do charitable work ought to be underpaid if they want to feel that any part of the charity is morally due to them just means "underpaid" in the highly theoretical sense I tried to outline above. Their moral credit should probably equal the amount by which they're underpaid.

I don't think it makes sense to argue that you should avoid being funded (well enough to be comfortable financially) if you do work that's thought of as charitable. But I suppose you should be suspicious of your own self-serving bias about how much good you're doing, the higher your stipend.

comment by PhilGoetz · 2010-08-19T18:08:25.729Z · LW(p) · GW(p)

If you accept funding to do something to help the world, you're not helping the world unless you're underpaid

Why? Are you using Marx's theory of labor value (fair wage = added value)?

comment by Craig_Morgan · 2010-08-20T07:03:43.917Z · LW(p) · GW(p)

Hello from Perth! I'm 27, have a computer science background, and have been following Eliezer/Overcoming Bias/Less Wrong since finding LOGI circa 2002. I've also been thinking about how I can "position myself to make a difference", and have finally overcome my akrasia; here's what I'm doing.

I'll be attending the 2010 Machine Learning Summer School and Algorithmic Learning Theory Conference for a few reasons:

  • To meet and get to know some people in the AI community. Marcus Hutter will be presenting his talk on Universal Artificial Intelligence at MLSS2010.
  • To immerse myself in the current topics of the AI research community.
  • To figure out whether I'm capable of contributing to that research.
  • To figure out whether contributing to that research will actually help in the building of an FAI.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-08-20T16:03:21.071Z · LW(p) · GW(p)

Uf. I hope you have a large supply of coffee (or something stronger), and a high tolerance for PowerPoint presentations.

comment by thomblake · 2010-08-19T01:00:28.309Z · LW(p) · GW(p)

Regarding the doctorate, if you want to stay on the philosophical end, you can't do much better than Chalmers. If you have an in, that's a good way to go. Whatever people say, we do need some sane philosophy done on that front. Though the interesting results are likely to come from the sciences.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-19T16:42:26.520Z · LW(p) · GW(p)

Whatever people say, we do need some sane philosophy done on that front.

I don't believe Mitchell Porter's qualifies.