Posts

Any taxonomies of conscious experience? 2021-07-18T18:28:50.444Z
Reasonably Fun 2010-09-04T12:07:51.113Z

Comments

Comment by JohnDavidBustard on Any taxonomies of conscious experience? · 2021-07-19T12:31:06.083Z · LW · GW

Thanks for the comment. I think it is very interesting to consider the minimum-complexity algorithm that could plausibly have each conscious experience. The fact that we remember events, talk about them, and can describe how they are similar (e.g. blue is cold and sad) implies that our internal mental representations, and the connections we can make between them, must be structured in a certain way. It is fascinating to think about what the simplest 'feeling' algorithm might be, and exciting to think that we may someday be able to create new conscious sensations by integrating our minds with new algorithms.

Comment by JohnDavidBustard on Any taxonomies of conscious experience? · 2021-07-19T12:26:14.854Z · LW · GW

Thanks, it is very handy to get something that is compatible with SUMO.

Comment by JohnDavidBustard on Any taxonomies of conscious experience? · 2021-07-19T12:25:46.487Z · LW · GW

Thank you for the thoughtful comments. I am not certain that the approach I am suggesting will be successful, but I am hoping that more complex experiences may be explainable from simpler essences, much as the behaviour of fluids emerges from simpler atomic rules. I am currently working from the assumption that the brain is similar to a modern reinforcement learning algorithm, where there are one or more large learnt structures and a relatively simple learning algorithm. The first thing I am hoping to look at is whether all conscious experiences could be explained purely by behaviours associated with the learning algorithm. Even better if, in trying to do this, it suggests new structures that the learning algorithm should take. For example, we have strong memories of sad events and choices we regret, which implies that we rank the importance of past experiences based on these situations and weight them more heavily when learning from them. We might avoid a strategy because our intuition says it makes us sad (it is like other situations that made us sad) rather than because it is simply a poor strategy for achieving our goals.
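(To make this concrete, here is a minimal, hypothetical sketch in Python of what I mean. The class name and the 'salience' signal are my own illustrative inventions, not an established algorithm: a replay buffer that samples past experiences in proportion to an emotional weight, so that regretted or sad events dominate what is learnt from.)

    import random

    class SalienceWeightedReplay:
        """Replay buffer that favours emotionally salient memories.
        Illustrative only: 'salience' stands in for a regret/sadness signal."""

        def __init__(self):
            self.memories = []  # list of (experience, salience) pairs

        def store(self, experience, salience):
            # Larger salience (e.g. more regret) means the event is remembered more strongly.
            self.memories.append((experience, salience))

        def sample(self, k):
            # Sample past experiences with probability proportional to salience,
            # so learning updates are dominated by the events we 'feel' most about.
            experiences, saliences = zip(*self.memories)
            return random.choices(experiences, weights=saliences, k=k)

    # A mundane event barely recurs in learning; a regretted one recurs often.
    buffer = SalienceWeightedReplay()
    buffer.store({"state": "s0", "action": "wait", "reward": 0.0}, salience=0.1)
    buffer.store({"state": "s1", "action": "risky_bet", "reward": -10.0}, salience=5.0)
    batch = buffer.sample(4)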

Comment by JohnDavidBustard on Any taxonomies of conscious experience? · 2021-07-19T12:16:33.483Z · LW · GW

Great links, thank you. I hadn't considered the drug effects before; that is an interesting perspective on positive sensations. Also, I wanted to say I am a big fan of your work, particularly your media synthesis stuff. I use it when teaching deep learning, to show examples of how to use academic source code to explore cutting-edge techniques.

Comment by JohnDavidBustard on Any taxonomies of conscious experience? · 2021-07-19T12:14:32.816Z · LW · GW

Perfect, thank you

Comment by JohnDavidBustard on Interesting talk on Bayesians and frequentists · 2010-10-26T14:50:41.995Z · LW · GW

A high level post on its use would be very interesting.

I think my main criticism of the Bayes approach is that it leads to the kind of work you are suggesting i.e. have a person construct a model and then have a machine calculate its parameters.

I think that much of what we value in intelligent people is their ability to form the model themselves. By focusing on parameter updating we aren't developing the AI techniques necessary for intelligent behaviour. In addition, because correct updating does not guarantee good performance (the model properties dominate), we will always have to judge methods based on experimental results.

Because we always come back to experimental results, whatever general AI strategy we develop, its structure is more likely to be one that searches for new ways to learn (with Bayesian model updating and SVMs as examples) and validates these strategies using experimental data (replicating the behaviour of the AI field as a whole).

I find it useful to think about how people solve problems and examine the huge gulf between specific learning techniques and these approaches. For example, to replicate a Bayesian AI researcher, an AI needs to take a small amount of data and an incomplete informal model of the process that generates it (e.g. based on informal metaphors of physical processes the author is familiar with), then find a way of formalising this informal model (so that its behaviour under all conditions can be calculated), possibly doing some theorem proving to investigate properties of the model. They then apply potentially standard techniques to determine the model's parameters and judge its worth based on experiment (potentially repeating the whole process if it doesn't work).

By focusing on Bayesian approaches we aren't developing techniques that can replicate this kind of lateral and creative thinking. Saying there is only one valid form of inference is absurd because it doesn't address these problems.

I feel that trying to force our problems to suit our tools is unlikely to make much progress. For example, unless we can model (and therefore largely solve) all of the problems we want an AI to address we can't create a "Really Good Model".

Rather than manually developing formalisations of specific forms of similarity, we need an algorithm that learns different types of similarity and then constructs the formalisation itself (or not, as I don't think we actually formalise our notions of similarity, yet we can still solve problems).

Automated theorem proving is a good example where the problems are well defined yet unique, so any algorithm that can construct proofs needs to see meta-patterns in other proofs and apply them. This brings home the difficulty of identifying what it means for things to be similar, and also emphasises the incompleteness of a probabilistic approach: the proof the AI is trying to construct has never been encountered before, so in order to benefit from experience it needs to invent a type of similarity to map the current problem to the past.

Comment by JohnDavidBustard on Help: Is there a quick and dirty way to explain quantum immortality? · 2010-10-25T15:25:20.942Z · LW · GW

Eh not impossible... just very improbable (in a given world) and certain across all worlds.

I would have thought the more conventional explanation is that the other versions are not actually you (just very like you). This sounds like the issue of economists being the only people who act the way economists model people. I would suspect that only people who fixate on such matters would confuse a copy with themselves.

I suspect that people who are vulnerable to these ideas leading to suicide are in fact generally vulnerable to suicide. There are lots of better reasons to kill yourself that most people ignore. If you think you're at risk of this I recommend you seek therapy; thought experiments should not have such drastic effects on your actions.

Comment by JohnDavidBustard on Interesting talk on Bayesians and frequentists · 2010-10-25T14:41:51.336Z · LW · GW

Thanks for your reference; it is good to get down to some more specific examples.

Most AI techniques are model-based by necessity: it is not possible to generalise from samples unless the sample is used to inform the shape of a model, which then determines the properties of other samples. In effect, AI is model fitting. Bayesian techniques are one scheme for updating a model from data. I call them incomplete because they leave a lot of the intelligence in the hands of the user.

For example, in the thesis referenced, the author designs a model of transformations on handwritten letters that (thanks to the author's intelligence) is similar to the set of transformations applied to numeric characters. The primary reason the technique is effective is that the author has constructed a good transformation. The only way to determine if this is true is through experimentation; I doubt the Bayesian updating is contributing significantly to the results, and if another scheme such as an SVM were chosen I would expect it to produce similar recognition results.
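(As a concrete illustration of 'the only way to determine this is through experimentation', here is a minimal sketch in Python using scikit-learn. The dataset and models are my own choices for illustration, not those used in the thesis: the same features are given to a simple Bayesian classifier and to an SVM, and the recognition rates are simply compared.)

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    # Identical features for both models; only the fitting scheme differs,
    # so any gap in accuracy is down to the model choice, not the 'inference law'.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, model in [("naive Bayes", GaussianNB()), ("SVM", SVC())]:
        model.fit(X_train, y_train)
        print(name, model.score(X_test, y_test))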

The point is that the legitimacy or otherwise of the model parameter updating scheme is relatively insignificant in comparison to the difficulty of selecting a good model in the first place. As far as I am aware, since there is a potentially infinite set of models, Bayesian techniques cannot be applied to select between them, leaving the real intelligence to be provided by the user in the form of the model. In contrast, SVMs are an attempt to construct experimentally useful models from samples and so are much closer to being intelligent, in the sense of being able to produce good results with limited human interaction. However, neither technique addresses the fundamental difficulty of replicating the intelligence used by the author in creating the transformation in the first place. Fixating on a particular approach to model updating when model selection is not addressed is to miss the point; it may be meaningful for gambling problems, but for real AI challenges the difference it makes appears to be irrelevant to actual performance.

I would love to discuss what the real challenges of GAI are and explore ways of addressing them, but often the posts on LW seem to focus on seemingly obscure game theory or gambling based problems which don't appear to be bringing us closer to a real solution. If the model selection problem can't be addressed then there is no way to guarantee that whatever we want an AI to value, it won't create an internal model that finds something similar (like paperclips) and decides to optimise for that instead.

Silently down voting criticism of Bayesian probability without justification is not helpful either.

Comment by JohnDavidBustard on Interesting talk on Bayesians and frequentists · 2010-10-24T21:56:34.092Z · LW · GW

From what I understand, in order to apply Bayesian approaches in practical situations it is necessary to make assumptions which have no formal justification, such as the distribution of priors or the local similarity of analogue measures (so that similar but not exact predictions can be informative). This changes the problem without necessarily solving it. In addition, it doesn't address the issue of AI problems not based on repeated experience, e.g. automated theorem proving. The advantage of statistical approaches such as SVMs is that they produce practically beneficial results with limited parameters. With parameter search techniques they can achieve fully automated predictions that often have good experimental results. Regardless of whether Bayesianism is the law of inference, if such approaches cannot be applied automatically they are fundamentally incomplete and only as valid as the assumptions they are used with. If Bayesian approaches carry a fundamental advantage over these techniques why is this not reflected in their practical performance on real world AI problems such as face recognition?

Oh and bring on the down votes you theory loving zealots :)

Comment by JohnDavidBustard on Discuss: How to learn math? · 2010-10-16T22:32:46.339Z · LW · GW

Thank you very much for your great reply. I'll look into all of the links. Your comments have really inspired me in my exploration of mathematics. They remind me of the aspect of academia I find most surprising: how it can so often be ideological, defensive and secretive whilst also supporting those who sincerely, openly and fearlessly pursue the truth.

Comment by JohnDavidBustard on Discuss: How to learn math? · 2010-10-16T16:58:57.493Z · LW · GW

Thank you, my main goal at the moment is to get a handle on statistical learning approaches and probability. I hope to read Jaynes's book and The Nature of Statistical Learning Theory once I have some time to devote to them. However, I would love to find an overview of mathematics, particularly one which focuses on practical applications or problems. One of the other posts mentioned the Princeton Companion to Mathematics, and that sounds like a good start. I think what I would like is to read something that could explain why different fields of mathematics are important, and how I would concretely benefit from understanding them.

At the moment I have a general unease about my partial mathematical blindness. I understand the main mathematical ideas underlying the work in my own field (computer vision), and I'm pretty happy with the subjects in Numerical Recipes and some optimisation theory. I'm fairly sure that I don't need to know more, but it bothers me that I don't. At the same time I don't want to spend a lot of time wading through proofs that are unlikely to ever be relevant to me. I have also yet to find a concrete example in AI where an engineering approach with some relatively simple applied maths has been substantially weaker than an approach that requires advanced mathematical techniques, making me suspect that mathematics is as it is because it appeals to those who like puzzles, rather than necessarily providing profound insight into a problem. Although I'd love to be proved wrong on that point.

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-16T15:37:03.372Z · LW · GW

So, assuming survival is important, a solution that maximises survival plus wireheading would seem to solve that problem. Of course it may well just delay the inevitable heat death ending but if we choose to make that important, then sure, we can optimise for survival as well. I'm not sure that gets around the issue that any solution we produce (with or without optimisation for survival) is merely an elaborate way of satisfying our desires (in this case including the desire to continue to exist) and thus all FAI solutions are a form of wireheading.

Comment by JohnDavidBustard on Discuss: How to learn math? · 2010-10-16T10:44:45.292Z · LW · GW

One frustration I find with mathematics is that it is rarely presented like other ideas. For example, few books seem to explain why something is being explained prior to the explanation. They don't start with a problem, outline its solution, provide the solution, and then summarise this process at the end. They present one 'interesting' proof after another, requiring a lot of faith and patience from the reader. Likewise, they rarely include grounded examples within the proofs so that the underlying meaning of the terms can be maintained. It is as if the field is constructed so that it is in the form of puzzles, rather than providing a sincere attempt to communicate ideas as clearly as possible. Another analogy would be programming without the comments.

A book like Numerical Recipes, or possibly Jaynes's book on probability, is the closest I've found so far. Has anyone encountered similar books?

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-16T10:29:33.667Z · LW · GW

I'm not sure I understand the distinction between an answer that we would want and a wireheading solution. Are not all solutions wireheading, with an elaborate process to satisfy our status concerns? I.e. is there a real difference between a world that satisfies what we want and directly altering what we want? If the wire in question happens to be an elaborate social order rather than a direct connection, why is that different? What possible goal could we want pursued other than the one which we want?

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-16T01:01:46.246Z · LW · GW

Ok, so how about this workaround.

The current approach is to have a number of human intelligences continue to explore this problem until they enter a mental state C (for convinced they have the answer to FAI). The next stage is to implement it.

We have no route to knowledge other than our internal sense of being convinced, i.e. no oracle to tell us if we are right or not.

So what if we formally define what this mental state C consists of and then construct a GAI which provably pursues only the objective of creating this state? The advantage is that we then have a means of judging our progress, because we have a formally defined, measurable criterion for success. (In fact this process is a valuable goal regardless of the use of AI, but it now makes it possible to use AI techniques to solve it.)

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-15T22:28:50.413Z · LW · GW

Interesting. If I understand correctly, the idea is to find a theoretically correct basis for deciding on a course of action given existing knowledge, then to make this calculation efficient, and then to direct it towards a formally defined objective.

As distinct from a system which, potentially suboptimally, attempts solutions and tries to learn improved strategies, i.e. one in which the theoretical basis for decision making is ultimately discovered by the agent over time (e.g. as we have done with the development of probability theory). I think the perspective I'm advocating is to produce a system that is more like an advanced altruistic human (with a lot of evolutionary motivations removed) than a provably correct machine. Ideally such a system could itself propose solutions to the FAI problem that would be convincing, as a result of an increasingly sophisticated understanding of human reasoning and motivations.

I realise there is a fear that such a system could develop convincing yet manipulative solutions. However, the output need only be more trustworthy than a human's response to be legitimate (for example, if an analysis of its reasoning algorithm shows it lacks a Machiavellian capability, unlike humans).

Or put another way, can a robot Vladimir (Eliezer, etc.) be made that solves the problem faster than their human counterparts do? And is there any reason to think this process is less safe (particularly when AI developments will continue regardless)?

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-15T21:07:47.162Z · LW · GW

If there is an answer to the problem of creating an FAI, it will result from a number of discussions and ideas that lead a set of people to agreeing that a particular course of action is a good one. By modelling psychology it will be possible to determine all the ways this can be done. The question then is why choose one over any of the others? As soon as one is chosen it will work and everyone will go along with it. How could we rate each one? (they would all be convincing by definition). Is it meaningful to compare them? Is the idea that there is some transcendent answer that is correct or important that doesn't boil down to what is convincing to people?

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-15T20:23:21.477Z · LW · GW

When I say feel, I include:

I feel that is correct. I feel that is proved etc.

Regardless of the answer, it will ultimately involve our minds expressing a preference. We cannot escape our psychology. If our minds are deterministic computational machines within a universe without any objective value, all our goals are merely elaborate ways to make us feel content with our choices and a possibly inconsistent set of mental motivations. Attempting to model our psychology seems like the most efficient way to solve this problem. Is the idea that there is some other kind of answer? How could it be shown to be legitimate?

I suspect that the desire for another answer is preventing practical progress in creating any meaningful solution. There are many problems and goals that would be relatively uncontroversial for an AI system to attempt to address. The outcome of the work need only be better than what we currently have to be useful; we don't have to solve all problems before addressing some of them, and indeed without attempting to address some of them I doubt we will make significant progress on the rest.

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-15T18:43:33.397Z · LW · GW

Ok, I certainly agree that defining the goal is important, although I think there is a definite need for a balance between investigation of the problem and attempts at its solution (as each feeds into the other), much as how academia currently functions. For example, any AI will need a model of human and social behaviour in order to make predictions. Solving how an AI might learn this would represent a huge step towards solving FAI and a huge step in understanding the problem of being friendly. I.e. whatever the solution is, it will involve some configuration of society that maintains and maximises some set of measurable properties.

If the system can predict how a person will feel in a given state it can solve for which utopia we will be most enthusiastic about. Eliezer's posts seem to be exploring this problem manually, without really taking a stab at a solution, or proposing a route to reaching one. This can be very entertaining but I'm not sure it's progress.

Comment by JohnDavidBustard on Strategies for dealing with emotional nihilism · 2010-10-14T22:34:05.515Z · LW · GW

I suggest just getting some casual exercise or watching some good films and tv shows. They're full of emotionally motivating experiences.

I think there is a worrying tendency to promote puritan values on LW. I personally see no moral problem with procrastination, or even feeling bad every so often. I feel worried that I might not hit deadlines, or might experience some practical consequence from not working on a task, but I wouldn't want to add moral guilt to that. I think if people lose sight of the pleasures in life they become nihilistic, which in turn leads them to be selfish and cruel as an expression of their pain.

Feel good about yourself, and recognise that the positive, playful fun that can come with idle pleasures might actually be the point. They represent the one value system that does seem pretty sensible. If you can enjoy them, you can feel the emotional energy to be nice and supportive to others. I certainly don't want a friendly AI enforcing the morality of anti-procrastination, anti-unhealthy eating, anti-indulgence or any other form of self-flagellating self improvement. Let's just be supportive of one another and try to have a good time.

Comment by JohnDavidBustard on Recommended Reading for Friendly AI Research · 2010-10-14T22:14:31.533Z · LW · GW

I am sure these are interesting references for studying pure mathematics but do they contribute significantly to solving AI?

In particular, it is interesting that none of your references mention any existing research on AI. Are there any practical artificial intelligence problems that these mathematical ideas have directly contributed towards solving?

E.g. Vision, control, natural language processing, automated theorem proving?

While there is a lot of focus on specific, mathematically defined problems on LessWrong (usually based on some form of gambling), there seems to be very little discussion of the actual technical problems of GAI or a practical assessment of progress towards solving them. If this site is really devoted to rationality, should we not at least define our problem and measure progress towards its solution? Otherwise we risk being merely a mathematical social club, or worse, a probability-based religion.

Comment by JohnDavidBustard on Which parts of philosophy are worth studying from a pragmatic perspective? · 2010-10-03T20:36:25.203Z · LW · GW

I'm not sure of the merit of studying philosophy as opposed to just personally thinking about philosophical ideas. For me, the most profound pragmatic benefit has been to deeply alter my own psychology as a result of examining ideas like free will and morality. I had a lot of unexamined assumptions and strongly felt conventions and taboos that I managed to overcome through examining my own feelings in a philosophical way. This is very different from the kind of learnt understanding that can be obtained by reading other people's ideas. I think it is very common for people to parrot philosophical statements without them actually altering a person's behaviour in any significant way. In particular, I find a lot of classic philosophy unsatisfying as it seems much less relevant in our society. I think this is particularly the case if you adopt a 'science is enough to explain everything' and 'the brain is a computer' perspective. I'd love to read some philosophy that gave me a new and distinct perspective that might alter my priorities, and thus my behaviour, but I have yet to read such material. The closest I've encountered would be Status Anxiety, which helpfully reframes classic philosophical statements concerning status and value in a modern context. This really brought home the extent to which our values are culturally specific. It helped me to pursue my life with my own priorities, as well as giving me the perspective to examine what I actually value. In that sense I consider these philosophical ideas to be much more practically useful in determining my life than any piece of technical or theoretical understanding.

Comment by JohnDavidBustard on Do you believe in consciousness? · 2010-10-03T15:29:44.339Z · LW · GW

Each time a question like this comes up it seems to get downvoted as a bad question. I think it's a great question, just one for which there are no obviously satisfactory answers. Dennett's approach seems to be to say that if you just word things differently it's all fine, nothing to see here. But to me this is a weird avoidance of the question.

We feel there is a difference between living things and inanimate ones. We believe that other people and some animals are feeling things that are similar to the feelings we have. Many people would find it absurd to think that devices or machines were feeling anything. Yet whatever computational model of our minds we create, it is hard to identify the point at which it starts to feel. It is easy to create a virtual character that appears to feel, but most people doubt that it is doing any more than simulating feelings, similar to the inauthentic patterns of behaviour we form when we are acting or lying. I think one can imagine what life would feel like if we were constantly acting, performing reasoned interactions without sincere emotion; if at heart we are computational, why does all interaction not feel this way?

To me this distinction is what makes consciousness distinct and special. I think it is a fascinating consequence of a certain pattern of interacting systems, implying that conscious feelings occur all over the place; perhaps every feedback system is feeling something.

My justification for this theory is an attempt to provide a simple explanation of the origin of conscious experience, based on a belief that explanations should be simple and lack special cases (I don't find the idea that human beings are fundamentally distinct from other structures particularly elegant).

Comment by JohnDavidBustard on Automated theorem proving · 2010-10-03T14:57:31.142Z · LW · GW

There have been (and continue to be) many approaches to this; in fact the term 'Good Old-Fashioned AI' basically refers to it. It is very interesting that significant progress has not been made with this approach. This has led to greater use of statistical techniques, such as support vector machines or Bayesian networks. A basic difficulty with any approach to AI is that many techniques merely change the problem of learning, generalisation and problem solving into another form rather than solving it. E.g. formal methods for software development move the problem from that of programming to that of formal specification. It's not clear that creating the specifications is any less error-prone than the creation of the original program.

On the other hand there are enormous benefits to turning any knowledge into a form that a computer can manipulate. There has been some progress in doing this for mathematics. It is possibly one of the most depressing consequences of Gödel's theorem that there has not been more work in this area. Writing proofs informally on paper doesn't get around Gödel's theorem; it just makes the work harder to validate.

It's also a good challenge for those who feel that AI is just a matter of correctly applying Bayes' theorem, as it is not clear how it could be applied to solve this problem.

Comment by JohnDavidBustard on Proposal for a structured agreement tool · 2010-10-03T14:23:48.503Z · LW · GW

Has anyone encountered a formal version of this? I.e. a site for the creation of formal logical arguments, where users can create axioms, assign their confidence to them, and structure arguments using them. Users could then see the logical consequences of their beliefs. I think it would make a very interesting format for turning debate into a competitive game, whose results are rigorous, machine-readable arguments.
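(I have not found one, but here is a rough Python sketch of the data structure I have in mind; the proposition names and the confidence-combination rule are placeholders of my own, not a worked-out design. Each user asserts axioms with a confidence, arguments are implications over those axioms, and the consequences inherit a crude combined score.)

    # Hypothetical sketch: axioms carry a user's confidence, and an implication's
    # conclusion inherits a crude combined score (here simply the product of its
    # premises' confidences -- a placeholder, not a principled combination rule).

    axioms = {
        "minds_are_physical": 0.95,
        "physical_processes_are_simulable": 0.8,
    }

    implications = [
        # (premises, conclusion)
        (["minds_are_physical", "physical_processes_are_simulable"], "minds_are_simulable"),
    ]

    def propagate(axioms, implications):
        beliefs = dict(axioms)
        for premises, conclusion in implications:
            if all(p in beliefs for p in premises):
                score = 1.0
                for p in premises:
                    score *= beliefs[p]
                beliefs[conclusion] = max(beliefs.get(conclusion, 0.0), score)
        return beliefs

    print(propagate(axioms, implications))  # the conclusion appears with a score of about 0.76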

Comment by JohnDavidBustard on Reflections on a Personal Public Relations Failure: A Lesson in Communication · 2010-10-03T13:43:11.406Z · LW · GW

I think this comment highlights the distinction between popular and good.

High ranked posts are popular, good may or may not have anything to do with it.

Personally I find all this kowtowing to the old guard a bit distasteful. One of my favorite virtues of academia is the double-blind submission process. Perhaps there are similar approaches that could be taken here.

Comment by JohnDavidBustard on The Meaning of Life · 2010-09-22T12:35:19.945Z · LW · GW

Interesting points.

I suspect that predicting the economy with economics is like predicting a person's behaviour by studying their biology. My desire for wisdom is in the form of perspective: I want to know the rough landscape of the economy (like the internal workings of a body).

For example, I have little grasp of the industries contributing most to GDP or the taxes within my (or any other) country. In terms of government spending this site provides a nice overview for the UK, but it is only the start. I would love to know the chain of businesses and systems that provide the products I use each day. In particular, I'm very interested in the potential for technologically supported self-sufficiency as a means of providing a robust underpinning to society. To do this effectively it's necessary to understand the systems that we depend upon.

While such understanding might not enable prediction, I think it does provide perspective on potential opportunities and threats (just as biology does). It also helps to focus on relative importance, similar to how concentrating on cash flow helps prioritise business decisions. E.g. the social equivalent of worrying about too much paper usage in office printers when there are entire business units that aren't profitable. Or similarly, being blind to opportunities that could render many other problems irrelevant (such as easy self-sufficiency reducing the necessity for potentially problematic government infrastructure).

Comment by JohnDavidBustard on Less Wrong Should Confront Wrongness Wherever it Appears · 2010-09-22T08:25:04.030Z · LW · GW

I fear this would reduce LessWrong to referencing research papers. Perhaps there is more value in applying rigour as disagreements emerge, i.e. a process of going from two people flatly disagreeing to establishing criteria to choose between them: a norm for reaching reasonable conclusions on a controversial topic. In this way there would be greater emphasis on turning ambiguous issues into reasonable ones, which I view as one of the main benefits of rationality.

Comment by JohnDavidBustard on The Meaning of Life · 2010-09-19T17:42:48.521Z · LW · GW

Thank you, I also agree with your comments on your posting. I generally prefer a balance of pragmatic action with theory. In fact, I view the 'have a go' approach to theoretical understanding to be very useful as well. I think just roughly listing ones thoughts on a topic and then categorising them can be very revealing and really help provide perspective. I recently had a go at my priorities (utility function) and came up with the following:

  • To be loved
  • To be wise
  • To create things that I am proud of
  • To be entertained
  • To be respected
  • To be independent (ideally including being safe, relatively healthy and financially secure)

This is probably not perfect but it is something to build on (and a list I wouldn't mind a friendly AI optimising for either).

Also, as with the positive effects mentioned in your article, I've found giving to charity makes it easier for me to feel love (or at least friendship) towards others and to feel more cared for in return (perhaps simply because giving to charity makes me slightly nicer towards everyone I meet).

My current focus is wisdom; I feel uncomfortable that I don't have perspective on problems in society or the structure of the economy (i.e. how my quality of life is maintained). When I mention these ideas to others, their reaction is generally to describe the problems as being too hard or impossible. I think this is a very interesting form of rationality failure, because the same people would go to enormous lengths to construct a solution to a technical problem if they were told it was not possible. Why don't creative, intellectual and rational people apply their problem-solving skills to these kinds of issues? Why don't they 'have a go'?

Comment by JohnDavidBustard on Rationality Power Tools · 2010-09-19T16:27:33.303Z · LW · GW

True, and in fact despite my comments I am optimistic about the potential for progress in some of these areas. I think one significant problem is the inability to collaborate on improving them. For example, research projects in robotics are hard to build on because replicating them requires building an equivalent robot, which is often impractical. RoboCup is a start, as at least it has common criteria to measure progress with. I think a standardised simulator would help (with challenges that can be solved and shared within it), but even more useful would be to create robot designs that could be printed with a 3D printer (plus some assembly, like Lego) so that progress could be rapidly shared. I realise this is much less capable than human machinery, but I feel there is a lot further to go with the software and AI side.

Comment by JohnDavidBustard on Rationality Power Tools · 2010-09-19T15:45:04.408Z · LW · GW

The real difficulty with both these control problems is that we lack a theory for how to ensure the stability of learning-based control systems. Systems that appear stable can self-destruct after a number of iterations. A number of engineering projects have attempted to incorporate learning; however, because of a few high-profile disasters, such systems are generally avoided.

Comment by JohnDavidBustard on Rationality Power Tools · 2010-09-19T12:05:59.895Z · LW · GW

In terms of emulation, the resolution is currently good enough to identify molecules communicating across synapses. This enables an estimate of synapse strengths as well as a full wiring diagram of physical nerve shape. There are emulators for the electrical interactions of these systems. Also, our brains are robust enough that significant brain damage and major chemical alteration (ecstasy etc.) are recoverable from, so if anything brains are much more robust than electronics. AI, in contrast, has real difficulty with anything but very specific problem areas, which rarely generalise. For example, we cannot get a robot to walk and run in a robust way (BigDog is a start, but it will be a while before it's doing martial arts), and we can't create a face recognition algorithm that matches human performance. We can't even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably). All our learning algorithms have human-tweaked parameters to achieve good results, and hardly any of them can perform online learning beyond the constrained, manually fed training data used to construct them. As a result there are very few commercial applications of AI that operate unaided (i.e. not as a specific tool equivalent to a word processor). I would love to imagine otherwise, but I don't understand where the confidence in AI performance is coming from. Does anyone even have a set of partial Turing-test-like steps that might lead to an AI (dangerous or otherwise)?

Comment by JohnDavidBustard on The Meaning of Life · 2010-09-19T10:48:01.049Z · LW · GW

I really like this post. It touches on two topics that I am very interested in:

How society shapes our values (domesticates us)

and

What should we value (what is the meaning of life?)

I find the majority of discussions extremely narrow, focusing on details while rarely attempting to provide perspective. Like doing science without a theory, just performing lots of specific experiments without context or purpose.

1 Why are things the way they are and why do we value the things we value? A social and psychological focus. Less Wrong touches on these issues but appears focused on specific psychological studies rather than any overall perspective (I suspect this would start to touch on politics and so would not be discussed). I think our understanding of the system we are a part of significantly shapes our sense of meaning and purpose and, as a result, strongly influences our society.

I would go so far as to suggest we are psychologically incapable of pursuing goals that are inconsistent with our understanding of how the universe functions (sorry Clippy), i.e. if we are selfish-gene Darwinists we will value winning and reproductive success. If we have a Confucian belief that the universe is a conflict between order and chaos we will pursue social stability and tradition. I have my own take on this for those who are interested (How we obtain our values, the meaning of life).

2 What problems do we want to solve? It seems much easier to find problems to solve than goals to obtain. A recent post about charity mentioned GiveWell. This organisation at least evaluates whether progress is made, but as far as I am aware there is no economics of suffering, no utilitarian (or otherwise) analysis of the relative significance of different problems. Is a destructive AI worse than global warming, or cancer, or child abuse, or obesity, or terrorism? Is there a rational means to evaluate this for a given utility function? Has anyone tried? (This is an area I'm looking into, so any links would be greatly appreciated.)

3 What can we do? Within instrumental rationality and related fields there are a lot of discussions of actions to achieve improvements in capability. Likewise for charity: lots of good causes. However, there seems to be relatively little discussion of what is likely to be achieved as a result of the action, as if any progress is justification enough to focus on it. For example, what will be the difference in quality of life if I pursue a maximally healthy lifestyle vs a typical no-exercise slacker life? In particular, do I want to die of a heart attack, or of cancer and Alzheimer's (which given my family history are the two ways I'm likely to go)? If we had a realistic assessment of return on investment, as well as of how psychologically likely we are to achieve things, we could focus our actions rationally.

I suggest that if we know how things work, what the problems are and what we can do about them, then we have a pretty good start on the meaning of life. I am frequently frustrated by the lack of perspective on these issues; we seem culturally conditioned to focus on action and specific theoretical points rather than trying to get a handle on it all. Of course that might be more fun, and that might be a sensible utility function. But for my own peace of mind I'd like to check there isn't an alternative.

Comment by JohnDavidBustard on The Affect Heuristic, Sentiment, and Art · 2010-09-16T22:59:53.475Z · LW · GW

I think this section of your post is part of what makes me feel bad about your comment. The reason I said I like it, is because I think it's important that people can talk about these things and the fact that your comments affect me in that way highlights that they are important to me.

"I would have worded this more strongly, myself. In my experience, people who are themselves inclined towards reasoned debate, even civilly, drastically overestimate how much other people are also inclined towards debate and argument."

I can't speak for anyone else, but personally I don't think I drastically overestimate others' interest in debate; I'm painfully aware of how much hostility there is to making direct statements about even slightly controversial issues. When I talk that way with others, I'm not doing it to fit in, I'm doing it because I want to and because I feel driven to. I feel frustrated at having a different personality from the majority and don't view others' lifestyles as inherently more legitimate than my own. In particular, I have a desire to understand why society and my community work as they do. I feel there is a great deal of unspoken social dynamics and traditions which act as a mask for unjustified status hierarchies and passive-aggressive conflict. I love the directness of reasoned argument because I feel that it is basically fair. It can quickly sear away self-delusions and unjustified assumptions, getting to a lasting truth. A truth that, while unpalatable, is, at its best, independent of who has said it and how it has been said, avoiding the undesirable (for me at least) political manoeuvring that seems to dominate so much of society.

For me, I'm looking for a community which is honest and fearless with itself and others. I'm less interested in productivity or instrumental rationality than simply being able to discuss issues in a direct way so that I can get a better understanding of them for my own satisfaction. Without this opportunity, I feel I am engaging in a social dance that never satisfies my desire to find what is true and what is important.

In terms of a neutral opposite something like:

Psychologically accommodating

might be good. It emphasises the fact that the communication is designed to be easy to absorb without implying manipulation. Both sound like they would be useful and both subtly imply their weaknesses (i.e. insult and compromise).

Oh and I should add, I like your forest : )

Comment by JohnDavidBustard on Bayes' rule =/= Bayesian inference · 2010-09-16T21:08:56.556Z · LW · GW

Thank you, that's very interesting, and comforting.

Comment by JohnDavidBustard on Bayes' rule =/= Bayesian inference · 2010-09-16T19:23:37.270Z · LW · GW

Thank you, this is very interesting. I'm not sure of the etiquette, but I'm reposting a question from an old article, that I would really appreciate your thoughts on.

Is it correct to say that the entropy prior is a consequence of creating an internally consistent formalisation of the aesthetic heuristic of preferring simpler structures to complex ones?

If so, I was wondering if it could be extended to reflect other aesthetics. For example, if an experiment produces a single result that is inconsistent with an existing simple physics theory, it may be that the simplest theory that explains this data is to treat this result as an isolated exception; however, aesthetically we find it more plausible that this exception is evidence of a larger theory that the sample is one part of.

In contrast, when attempting to understand the rules of a human system (e.g. a bureaucracy), constructing a theory that lacked exceptions seems unlikely ("that's a little too neat"). Indeed, when stated informally the phrase might go "in my experience, that's a little too neat", implying that we formulate priors based on learned patterns from experience. In the case of the bureaucracy, this may stem from a probabilistic understanding of the types of system that result from a particular 'maker' (i.e. politics).

However, this moves the problem to one of classifying contexts and determining which contexts are relevant. If this process is considered part of the theory, then it may considerably increase its complexity, always preferring theories which ignore context. Unless of course the theory is complete (incorporating all contexts), in which case the simplest theory may share these contextual models and thus become the universal simplest model. It would therefore not be rational to apply Kolmogorov complexity to a problem in isolation. I.e. probability and reductionism are not compatible.
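(For reference, the formal version of the 'prefer simpler structures' heuristic I have in mind, if the entropy prior is read as a universal/minimum-description-length style prior, is something like

    P(h) \propto 2^{-K(h)}, \qquad P(h \mid D) \propto P(D \mid h)\, 2^{-K(h)},

where K(h) is the Kolmogorov complexity of hypothesis h relative to some description language. Two hypotheses that fit the data equally well then differ in posterior odds by a factor of 2^{K(h_2) - K(h_1)}, and the contextual worry above amounts to asking whether K should be measured against the problem in isolation or against a description language that already encodes everything else we believe.)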

Comment by JohnDavidBustard on Bayes' rule =/= Bayesian inference · 2010-09-16T18:53:39.710Z · LW · GW

Is there a bound on the amount of data that is necessary to adjust a prior of a given error magnitude? Likewise, if the probability is the result of a changing system, I presume it could well be the case that the pdf estimates will be consistently inaccurate, as they are constantly adjusting to events whose local probability is changing. Does the Bayesian approach help over, say, model fitting to arbitrary samples? Is it, in effect, an example of a model fitting strategy no more reasonable than any other?
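(On the first question, a standard conjugate worked example, offered here only as my own illustration, gives a rough bound. With a Beta(\alpha, \beta) prior on a coin's bias and k successes in n trials, the posterior mean is

    \mathbb{E}[\theta \mid k, n] = \frac{\alpha + k}{\alpha + \beta + n} = \frac{k}{n} + O\!\left(\frac{\alpha + \beta}{n}\right),

so the prior's contribution shrinks roughly like (\alpha + \beta)/n, and pushing it below some error \epsilon takes on the order of (\alpha + \beta)/\epsilon observations. As you say, though, this only holds if the underlying probability is not itself drifting.)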

Comment by JohnDavidBustard on Bayes' rule =/= Bayesian inference · 2010-09-16T18:43:05.901Z · LW · GW

I suppose the question is how to calculate the priors so they do make sense. In particular, how can an AI estimate priors? I'm sure there is a lot of existing work on this. The problem with making statements about priors that don't have a formal process for their calculation is that there is no basis for comparing two predictions. In the worst case, by adjusting the prior the resulting probabilities can be adjusted to any value, making the approach a formal technique which potentially just hides the unknowns in the priors, in effect being no more reasonable because the priors are a guess.

Comment by JohnDavidBustard on The Affect Heuristic, Sentiment, and Art · 2010-09-16T16:12:56.226Z · LW · GW

I like your post because it makes me feel bad.

What I mean by that is that it gets at something really important that I don't like. The problem is that I get more pleasure from debates than almost anything else. I search for people who don't react in the intensely negative way you describe, and I find it hard to empathise with those that do. I don't do this because I think one method is 'right' and the other 'wrong'; I just don't enjoy trying to conform to others' expectations and prefer to find others who can behave in the same way. I think for most people, deep down, community is more important than ideology (or indeed achieving anything), but a community where you cannot be yourself is one in which you always feel uncomfortable, whether it is intellectually confrontational or indirect. Does anyone know of any other environments like Less Wrong where an intellectually direct way of communicating won't get you flamed to death?

Comment by JohnDavidBustard on Bayes' rule =/= Bayesian inference · 2010-09-16T15:56:35.111Z · LW · GW

Wow, this really brings home the arbitrary nature of the Bayesian approach. If we're trying to get an AI to determine what to do, it can't guess meaningful priors (and neither can we, come to that). I presume when it is applied there are theoretical approaches to prior model estimation, or is a uniform prior just used as a default? In which case, are there other occasions when frequentist and Bayesian probability estimates differ?

Comment by JohnDavidBustard on Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines · 2010-09-12T18:21:22.410Z · LW · GW

Thank you for your reply. It really highlights the difficulty of making an appropriate choice. There is also the difficulty that a lot of professions require specialised training before they can be experienced.

I did not find any of the careers guidance information at school or university to be particularly helpful. However, after working in games for a number of years it was clear that there were a number of types with very similar backgrounds. I think it would be very valuable to read honest autobiographical accounts of different professions, and ideally some form of personality assessment that meaningfully matches them. The closest I have found is the book "What Type Am I?", which guides the reader through a Myers-Briggs personality test and indicates common professions for each type. My current career (academic) was selected from this list and is a much better choice for me.

I find the balance of emphasis in existing research and books disturbing. There is a lot of emphasis on productivity, being a great manager and making lots of money, but not so much on finding a good fit for one's personality. Perhaps there is a need for more scientists and rationalists to focus on these sorts of issues, issues that directly affect the enjoyment of the majority of people's lives, much as how positive psychology has started to redress the fixation on pathology.

Comment by JohnDavidBustard on Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines · 2010-09-12T09:02:20.436Z · LW · GW

I think your very first step, Identify, is the key to all this.

Is it rational to pursue an irrational goal rationally?

Our culture focuses on external validation, achievement and winning. My concern is that this is a form of manipulation, focused on improving a society's economic measures of value over an individual's personal satisfaction.

In contrast, the science of happiness seems like a good start. This work seems to focus on developing techniques to come to feel satisfaction with one's current state. Perhaps a next step is to look at how communities and organisations can be structured to support this. Speaking for myself, I naively assumed that making computer games would be an enjoyable career because I thought that making a game and playing a game would be similar; this is not the case. Does anyone have any suggestions for careers or lifestyles where one can feel a sustained sense of satisfaction? Or indeed a rational means to select/create one?

Comment by JohnDavidBustard on Reason is not the only means of overcoming bias · 2010-09-10T10:45:13.245Z · LW · GW

Thank you! That's a great link I'll look into it.

Comment by JohnDavidBustard on Reason is not the only means of overcoming bias · 2010-09-10T07:36:32.947Z · LW · GW

For about 3 years now I've been giving to a number of charities through a monthly standing order. Initially setting it up was very satisfying, and choosing the charities was a little like purchasing a new gadget: assuming hands-on experience is not available and there are no trusted reviewers, I look at the various options and go with the ones whose advertising most closely reflects my personality and who look the least like charlatans. With gadget purchases I find these indirect signals much more informative about the experience with the product than any enumeration of features; while individual statistics can convey value (and certainly attract attention), they often obscure the interaction experience or final quality.

So it is with charities: I cannot easily experience for myself whether they convey benefit, nor have I found a trusted source of recommendations. Charities don't tend to focus on the statistics of their work, often preferring to focus on the emotions of the problem, perhaps reflecting the reality that naive, emotional people are the most likely to give to charity. Even when statistics are used they are like the feature lists for the gadgets: I feel no confidence that they reflect the whole picture.

While I have received some satisfaction from my donations, I do feel uncomfortable with the lack of confidence I have that my donations are actually causing an improvement, or whether they are directed towards genuine priorities. From my experiences in work I am familiar with the enormous waste within many organisations and how small steps of progress can be eradicated by later poor decision making.

What I really want is a kind of economics of suffering, a measurement of the various problems that I can apply my own weights and hypotheses to. I would like to see a comparison of the expected loss of life, and loss of quality of life, due to temperature rises, famines, AIDS, cancer etc., as well as the expected rate of return on my investment: have any of the charities I can give money to achieved progress, or do they just maintain a status quo? How many charities have done the equivalent of eradicating smallpox (i.e. solved a problem)? Does anyone know of anything like this? I think it would be a much more valuable step than using rationality to empower the emotional arms race between charities.

Comment by JohnDavidBustard on Humans are not automatically strategic · 2010-09-09T13:37:47.246Z · LW · GW

Thanks for the link.

You make a good point about the lack of a clear distinction, and at a fundamental level I believe that our genes and external environment determine our behaviour (I am a determinist, i.e. I don't believe in free will). However, I think that it is also possible to be highly motivated about different things which can cause a lot of mental stress and conflict. I think this occurs because we have a number of distinct evolved motivations which can drive us in opposing ways (e.g. the desire to eat, the status desire of being thin, the moral desire to eat healthily etc.). What I mean by "deep down" is the result of balancing these motivations to provide a satisfying compromise. The reason I emphasise culture is because I feel that society has developed powerful means of manipulating our motivations. This is good to the extent that it can make our sense of motivation (and enjoyment) more intense but can also lead to these strong internal conflicts, which, at least for myself, are not enjoyable.

I am fascinated by how these manipulations of our motivation occur and like yourself experience a strong resistance towards authority. I think the strength of these feelings is a reflection of my personality. On a Myers Briggs assessment I am an ENTP and descriptions of this type indicate a common resistance to authority. In part I suspect this is because I don't find arguments not based on reason to be that legitimate. I'm not sure whether this personality is 'more natural' or is merely one form of survival strategy reflected by the interaction of my genes with the environment.

I do feel a strong disparity between the world as it is and how I think it could (should?) be. In particular I think there is a great difference between people's internal stories of why they act as they do and the true dynamics of how they have been influenced. For example, I find the ideas of Adam Curtis, John Taylor Gatto and Alain de Botton very interesting. I recognise that the society we currently have may well require the kind of values and conditioning described by these authors, but I think it would be preferable to have a society with less of it, or at least to have it performed much more openly and explicitly. I also feel that a stable society is possible with a much greater degree of emotional 'freedom' than we currently experience, particularly through the use of technology. For example, by providing comfortable, technologically based self-sufficiency so that a competitive, externally rewarded role is viewed as a lifestyle option rather than a necessity.

Comment by JohnDavidBustard on Less Wrong: Open Thread, September 2010 · 2010-09-09T12:33:35.145Z · LW · GW

Thanks for the link, very interesting.

Comment by JohnDavidBustard on Humans are not automatically strategic · 2010-09-08T21:17:59.615Z · LW · GW

I've wrestled with this disparity myself, the distance between my goals and my actions. I'm quite emotional, and when my goals and my emotions are aligned I'm capable of rapid and tireless productivity. At the same time my passions are fickle and frequently fail to match what I might reason out. Over the years I've tried to exert my will over them, developing emotionally powerful personal stories and habits to try and control them. But every time I have done so it tends to cause more problems than it fixes. I experience a lot of stress fighting with myself in this way and quickly lose the ability to maintain perspective or, more importantly, to prioritise. My reason becomes a tunnel-visioned rationalisation and, rather than being a tool for appropriate action, becomes a tool to reinforce an unwise initial judgement of my priorities.

More recently, I've come to accept that my conscious reasoning self is, to an extent, a passenger in an emotional mind. What's more, that that emotional mind often has a much more sophisticated understanding of what will lead to a satisfying future than my own reasoning can provide. If I have the patience to listen (and occasionally offer it suggestions) I seem to get much closer to solving creative and technical problems, and more importantly, much closer to contentment, than if I try to force myself to follow an existing plan.

I think there is a real risk of having one's culture and community define goals for us that are not actually what we want, causing us to feel a sense of duty towards values that, deep down, we don't share. Is our reasoning flawed, or do we just not understand our utility function?

Comment by JohnDavidBustard on Rationality quotes: September 2010 · 2010-09-07T13:56:14.212Z · LW · GW

Thank you, it's such a pleasure to find so many interesting discussions of these ideas.

Comment by JohnDavidBustard on Rationality quotes: September 2010 · 2010-09-07T13:04:47.338Z · LW · GW

I’m glad you like it : )

I suppose the question is, to what extent can ideas be separated from social dynamics, such as status and legitimacy, and therefore not carry with them the risk of causing anger and fear.

Well ideas can certainly create positive as well as negative responses. For example, more accurate understanding and the communication of practically useful approaches are often intrinsically enjoyable. As is the communication of experience that might help determine the correct course of action or help avoid problems (i.e. personal stories, news). Provided these don’t threaten our status and legitimacy rules they remain positive and rewarding. They can also serve to validate our choices and serve to bolster our self esteem, or even to reduce the importance of those who threaten our values (satire). These can be viewed as improving our feelings that we have status (value) and legitimacy (goodness), i.e. the opposite of the fear causing uses I mentioned above.

However, ideas can also influence these factors more indirectly. For example, in the entertainment industry the term ‘social currency’ is sometimes used. This is used to describe the value that people place on communicating ideas as a means for establishing relationships (mutually nurturing through making each other feel good) and raising status (being relatively more important because you convey the pleasure of entertainment). A process mirrored here through the karma points. As such it can carry fears associated with threatening those in an alpha position or a conflict in the rules that underlie the status, for example as expressed in the phrase “you’re just saying that to be popular”.

Academics and other creative roles have the added pressure that the continuous generation of ideas is the basis for their livelihood. This is likely to lead to ideas being a major factor in their self esteem, adding an extra intensity to the fears and pleasures associated with having them (perhaps explaining why Bertrand Russell said the quote in the first place).

More significantly, formal ideas such as proofs can be extremely threatening because they introduce a method of influence that conflicts with many existing status hierarchies. It breaks many people's sense of legitimacy to say that anyone can make a formal statement for any reason and yet still determine other people's actions, or even more significantly, that an opinion of a person can be considered to have no value if a formal statement demonstrates that it is false. Taken to the extreme, it could lead to a position where a person had no influence (and thus no status) because they were incapable of ever making a statement that was not proved false. I would suggest it is more common to view the legitimacy of statements as deriving from status (which is why we tend to value quotes the way we do), and also to have status determined by tradition, which in some communities leads almost all novelty to be treated as illegitimate, thus reducing conversation to small talk or the repetition of traditional sayings (anything else would be 'weird'). The power of the scientific method (and this site) is in creating a set of traditions that enable novelty to be introduced in a legitimate way, so that these two positions are less likely to conflict (although I suspect they do so quite frequently).

I am in the unfortunate position of enjoying conversations about status and legitimacy, which are almost always fraught with the risks of taboo violation. Not to mention the fact that thinking about status and legitimacy tends to make your values different from those around you, making interactions a lot like walking on thin ice (exhilarating and terrifying) : ).

Comment by JohnDavidBustard on Reasonably Fun · 2010-09-07T10:40:32.290Z · LW · GW

Thank you for the link.

I think the discussion distinguishing like and want is the beginning of the answer. My view is that there are a number of distinct, complementary motivations which can cause subtly different emotions, each of which can be referred to as happiness, with each motivation having evolved because it contributes towards our survival. These distinctions become clearer when trying to create enjoyment experiences, and I'll elaborate on my take on them in the next article.

What I think is so interesting (and important) about this is that without understanding why we are motivated (and made happy) we cannot accurately make predictions of human behaviour or indeed form a meaningful utility function that is based on enjoyment. In this way deciphering human motivation is essential in order to apply reason to our lives.