Posts

Comments

Comment by NxGenSentience on Superintelligence 21: Value learning · 2015-02-10T01:53:55.788Z · LW · GW

No changes that I'd recommend, at all. SPECIAL NOTE: please don't interpret the drop in the number of comments over the last couple of weeks as a drop in interest by forum participants. The issues of these weeks are the heart of the reason for existence of nearly all the rest of the Bostrom book, and many of the auxiliary papers and references we've seen have ultimately also been context for confronting and brainstorming about the issue now at hand. I myself, just as one example, have a number of actual ideas that I've been working on for two weeks, but I've been trying to write them up in white-paper form, because they seem a bit longish. I've also talked to a couple of people off site who are busy thinking about this as well and have much to say. Perhaps taking a one-week intermission would give some of us a chance to organize our thoughts more efficiently for posting. There is a lot of untapped incubating coming to a head right now in the participants' minds, and we would like a chance to say something about these issues before moving on. ("Still waters run deep," as the cliche goes.) We're at the point of greatest intellectual depth now. I could speak for hours were I commenting orally and trying to be complete -- as opposed to making a skeleton of a comment that would, without context, raise more requests for clarification than be useful. I'm sure I'm not unique. Moderation is fine, though, be assured.

Comment by NxGenSentience on [Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions · 2015-01-28T22:24:13.186Z · LW · GW

Thanks for posting this link, and for the auxiliary comments. I try to follow these issues, as viewed from this sector of thinkers, pretty closely (the web site Defense One often has some good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often), but I had missed this paper until now. Grateful, as I say, for your posting it.

Comment by NxGenSentience on Understanding Agency · 2014-12-19T00:22:23.270Z · LW · GW

Before we continue, one more warning. If you're not already doing most of your thinking at least half-way along the 3 to 4 transition (which I will hereon refer to as reaching 4/3), you will probably also not fully understand what I've written below because that's unfortunately also about how far along you have to be before constructive development theory makes intuitive sense to most people. I know that sounds like an excuse so I can say whatever I want, but before reaching 4/3 people tend to find constructive development theory confusing and probably not useful...

I understand this kind of bind. I am over in the AI - Bostrom forum, which I like very much. As it happens, I have been working on a theory with numerous parts that is connected to, and an extension of, ideas and existing theories drawn from several scientific and philosophical subdisciplines. And I often find myself trying to meaningfully answer questions within the forum with replies that I cannot really make transparent and compelling cases for without having the rest of my theory on the table to establish context, motivation, justification and so on, because the whole theory (and its supporting rationale) is, size-wise, outside the word limit and scope of any individual post.

Sometimes I have tried, then squeezed it down, and my comments have ended up looking like cramped word salad because of the lack of context -- lacking context in something like the sense your caution warns about for your own remarks.

So I will have a look at the materials you counsel as prerequisite concepts, before I go on reading the rest of your remarks.

It is "with difficulty" I am not reading further down, because agency is one of my central preoccupations, both in general mind-body considerations, and in AI most particularly (not narrow AI, but human equivalent and above, or "real AI" as I privately think of it.

In general, I have volumes to say about agency, and have been struggling to carve out a meaningful and coherent, scientifically and neuroscientifically informed set of concepts relating to "agency" for some time.

You also refer to "existential" issues of some kind, which can of course mean many things to many people. But this also makes me want to jump in whole hog and read what you are going to say, because I also have been giving detailed consideration to the role of "existential pressure" (and trying to see what it might amount to, in both ordinary and more unconventional terms by trying to see it through different templates -- some more, some less -- humanly phenomenological) in the formation of features of naturalistic minds and sentience (i.e. in biological organisms, the idea being of course to then abstract this to more general systems.)

A nice route, or stepping-stone path, for examining existential-pressure principles is: human, to general terrestrial-biological, to exobiological (so far as we can reasonably speculate), and then finally on to AIs, once we have educated our intuitions a little.

The results emerging from those considerations may or may not suggest what we need to include, at least by suitable analogy, in AIs, to make them "strivers", or "agents", or systems that deliberately do anything and have "motives" (as opposed to behaviors), desires, and so on...

We must have some theory, or theory cluster, about what this may or may not contribute to the overall functionality of the AI -- its "understanding" of the world that is (we hope) to be shared by us -- so it is also front and center among my key preoccupations.

A timely clarifying idea I use frequently in discussing agency -- when reminding people that not everything that exhibits behavior automatically qualifies for agency: do Google's autopilot cars have "agency"? Do they have "goals"? My view is: "obviously not -- that would be using 'goal' and 'agency' metaphorically."

Going up the ladder of examples, we might consider someone sleepwalking, or a person acting out a sequence of learned, habituated behaviors during an epileptic absence seizure. Are they exhibiting agency?

The answers might be slightly less clear, and invite more contention, but given the pretty good evidence that absence seizures are not post-ictal failures to remember agency states but are really automatisms (modern neurologists are remarkably subtle, open-minded to these distinctions, and clever in setting up scenarios that discriminate the difference satisfactorily), it seems that lack of attention, intention, and praxis -- i.e., missing agency -- is the most accurate characterization.

Indeed, it is apparently satisfactory enough for experts who understand the phenomena that, even in the contemporary legal environment in which "insanity"-style defenses are out of fashion with judges and the public, a veridical establishment of sleepwalking or absence-seizure status (different cases, of course) while committing murder or manslaughter has, even in recent years, gotten some people "innocent" verdicts.

In short, most neurologists who are not in the grip of any dictums of behavioristic apologetics would say -- here too -- no agency, though information processing behavior occurred.

Indeed, in the case of absence seizures, we might further ask about metacognition versus mere cognition. But experimentally this is also well understood. Metacognition, or metaconsciousness, or self-awareness -- all are by a large consensus now understood to be correlated with "Default Mode Network" activity.

Absence seizures under witnessed lab conditions are not just departures from DMN activity. Indeed, all consciously, intentionally directed activity of any complexity that involves conscious attention to external activities or situations involves shutdown of DMN systems. (Look up Default Mode Network on PubMed if you want more.)

So absence-seizure behavior, which can be very complex (driving across town, etc.), is not agency misplaced or mislaid. It is actually unconscious, "missing-agent" automatism -- a brain in a temporary zombie state, in the sense philosophers of mind use the term zombie.

But back to the autopilot cars, or autopilot Boeing 777s, automatic anythings -- even the ubiquitous anti-virus daemons running in the background, automatically "watching" to intercept malware attacks. It seems clear that, while some of the language of agency might be convenient shorthand, it is not literally true.

Rather, these cases are those of mere mechanical, Newtonian-level, deterministic causation from conditionally activated, preprogrammed behavior sequences. The activation conditions are deterministic, and the causal chains thereby activated are deterministic, just as the interrupt service routines in an ISR jump table are all deterministic.
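To make the contrast concrete, here is a minimal illustrative sketch (all names, thresholds, and handlers are invented for the example, not drawn from any real autopilot) of the kind of conditionally activated, preprogrammed dispatch I mean -- a toy "jump table" in Python. Given the same state, the same handlers always fire; there is lookup and dispatch, but nothing that could non-metaphorically be called a goal:

```python
# A toy "ISR jump table": each (condition, handler) pair is fixed in
# advance by the designer. Same inputs, same handlers, every time --
# there is no goal selection here, only lookup and dispatch.

def on_obstacle(state):
    state["brake"] = True

def on_lane_drift(state):
    state["steer_correction"] = -state["drift"]

# Deterministic activation conditions -> deterministic routines.
JUMP_TABLE = [
    (lambda s: s["range_m"] < 5.0, on_obstacle),
    (lambda s: abs(s["drift"]) > 0.3, on_lane_drift),
]

def step(state):
    """One control cycle: scan the conditions, run matching handlers."""
    for condition, handler in JUMP_TABLE:
        if condition(state):
            handler(state)
    return state

print(step({"range_m": 3.2, "drift": 0.1}))  # always "brakes" at 3.2 m
```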

Anyway... agency is intimately at the heart of AGI-style AI, and we need to be as attentive and rigorous as possible about using the term literally versus metaphorically.

I will check out your references and see if I have anything useful to say after I look at what you mention.

Comment by NxGenSentience on Kickstarting the audio version of the upcoming book "The Sequences" · 2014-12-13T01:16:39.947Z · LW · GW

I didn't exactly say that, or at least, didn't intend to exactly say that. It's correct of you to ask for that clarification.

When I say "vindicated the theory", that was, admittedly, pretty vague.

What I should have said was that the recent experiments removed what has been, more or less, statistically the most common and continuing objection to the theory, by showing that quantum effects in microtubules, under the kind of environmental conditions that are relevant, can indeed be maintained long enough for quantum processes to "run their course" in a manner that, according to Hameroff and Penrose, makes a difference that can propagate causally to a level that is of significance to the organism.

Now, as to "decision making". I am honestly NOT trying to be coy here, but that is not entirely a transparent phrase. I would have to take a couple thousand words to unpack that (not obfuscate, but unpack), and depending on this and that, and which sorts of decisions (conscious or preconscious, highly attended or habituated and automatic), the answer could be yes or no... that is, even given that consciousness "lights up" under the influence of microtubule-dependent processes like Orch OR suggests -- admittedly something that, per se, is a further condition, for which quantum coherence within the microtubule regime is a necessary but not sufficient condition.

But the latter is plausible to many people, given a pile of other suggestive evidence. The deal-breaker has always been whether quantum coherence can or cannot be maintained in the stated environs.

Orch OR is a very multifaceted theory, as you know, and I should not have said "vindicated" without very careful qualification. Removing a stumbling block is not proof of the truth of a theory with so many moving parts.

I do think that, as a physiological theory of brain function, it has a lot of positives (some from vectors of increasing plausibility coming in from other directions, theorists, and experiments), and the removal of the most commonly cited objection -- on the basis of which many people have claimed Orch OR is a non-starter -- is a pretty big deal.

Hameroff is not a wild-eyed speculator (and I am not suggesting that you are claiming he is.)

I find him interesting and worthy of close attention, in part because he has accumulated an enormous amount of evidence for microtubule effects, and he knows the math and presents it regularly.

I first read his Biomolecular Mind hardback book back in the early '90s; he actually wrote it in the late '80s, at which time he had already amassed quite a bit of empirical study regarding the role of microtubules in neurons, and in creatures without neurons, possessing only microtubules, that exhibit intelligent behavior.

Other experiments in various quarters over quite a few recent years (though there are still neurobiologists who disagree) have on the whole seemed to validate Hameroff's claim that it is quantum effects -- not "ordinary" synapse-level effects that can be described without the quantum level of description -- that are responsible for anaesthesia's effects on consciousness in living brains.

Again, not a proof of Orch OR, but an indication that Hameroff is, perhaps, on to some kind of right track.

I do think that evidence is accumulating, from what I have seen in PubMed and elsewhere, that microtubule effects at least partially modulate dendritic computations, and seem to mediate the rapid remodeling of the dendritic tree (spines come and go with amazing rapidity), making it likely that the "integrate and fire" mechanism involves microtubule computation, at least in some cases.

I have seen, for example, experiments that apply microtubule-corrupting enzymes to some neurons, but not to controls, and observe dendritic-tree behavior. Microtubules are in the loop in learning, attention, etc. As for quantum effects in MTs, the evidence seems to grow by the month.

But, to your ending question, I would have to say what I said... which amounts to "sometimes yes, sometimes no," and in the 'yes' cases, not necessarily for the reasons that Hameroff thinks, but maybe partly, and maybe for a hybrid of additional reasons. Stapp's views have a role to play here, I think, as well.

One of my "wish list" items would be to take SOME of Hameroff's ideas and ask Stapp about them, and vice versa, in interviews, after carefully preparing questions and submitting them in advance. I have thought about how the two theories might compliment each other, or which parts of each might be independently verifyable and could be combined in a rationally coherent fashion that has some independent conceptual motivation (i.e. is other than ad hoc.)

I am in the process of preparing and writing a lengthy technical question for Stapp, to clarify (and see what he thinks of a possible extension of) his theory of the relevance of the quantum Zeno effect.

I thought of a way the quantum Zeno effect, as Stapp conceives of it, might be a way to resolve (with caveats) the simulation argument -- i.e., to assess whether we are at the bottom level in the hierarchy or are up on a sim. At least it would add another stipulation to the overall argument, which is significant in itself.

But that is another story. I have said enough to get me in trouble already, for a Friday night (grin).

Comment by NxGenSentience on Kickstarting the audio version of the upcoming book "The Sequences" · 2014-12-12T22:36:49.046Z · LW · GW

Hi. Yes, for the Kickstarter option, that seems to be almost a requirement. People have to see what they are being asked to invest in.

The Kickstarter option is somewhat my second-choice plan, or I'd be further along on it already. I have several things going on that are pulling me in different directions.

To expand just a bit on the evolution of my YouTube idea: originally -- a couple of months before I recognized more poignantly the value, to the HLAI R&D community, of doing well-designed, issue-sophisticated interviews with other thinkers and researchers that are genuinely useful (to other than a naïve audience) -- I had already decided to create a YouTube (hereafter, 'YT') channel of my own. This one will have a different, though complementary, emphasis.

This (first) YT channel will present a concentrated video course (perhaps 20 to 30 presentations in the plan I have, with more grown in as audience demand or reaction dictates). The course presentations -- with myself at the whiteboard, plus graphics, video clips, whatever can help make it both enjoyable and more comprehensible -- will consist of the essential ideas and concepts that are not only of use to people working on creating HLAI (and above), but are so important that they constitute essential background without which, I believe, people creating HLAI are at least partly floundering in the dark.

The value-add for this course comes from several things.

I do have a gift for exposition. My time as a tutor and writer has demonstrated to me (from my audiences) that I have a good talent for playing my own devil's advocate, listening and watching through audience ears and eyes, and getting inside the intuitions likely to occur in the listener. When I was a math tutor in college, I always did that from the outset, and was always complimented for it.

My experience with studying this for decades and debating it, metabolizing all the useful points of view on the issues that I have studied -- while always trying to push forward to find what is really true -- allows me to gather many perspectives together, anticipate the standard objections or misunderstandings, and help people with less experience navigate the issues.

I have an unusual mix of accumulated areas of expertise -- software development, neuroscience, philosophy, physics -- which contributes to the ability to see and synthesize productive paths that might have been (and have been) missed elsewhere.

Finally, perspective: enough time seeing intellectual fads come and go to recognize how they worked, even "before my time." Unless one sees -- and can critique or free oneself from -- contextual assumptions, one is likely to be entrained within conceptual externalities that define the universe of discourse, possibly pruning away preemptively any chance for genuine progress and novel ideas. Einstein, Crick and Watson, Heisenberg and Bohr all were able to think new thoughts and entertain new possibilities.

As someone just posted on Less Wrong: you have a certain number of weirdness points; spend them wisely. People in the grips of an intellectual trance, who don't even know they are pruning anything away, cannot muster either the courage or the creativity to have any weirdness points to spend.

For example: apparently very few people understand the context and intellectual climate -- the formative "conceptual externalities" that permeated the intellectual ether -- at the time Turing proposed his "imitation game."

I alluded to some of these contextual elements of what was then the intellectual culture, without providing any kind of exposition (in other words, just making the claim in passing), in my dual message to you and Luke earlier today (Friday).

That kind of thing -- were it to be explained rigorously, articulately, engagingly -- is a mild eye-opening moment for a lot of people (I have explained it before to people who were very sure of themselves, and they went away changed by the knowledge). I can open the door to questioning what seems like such a "reasonable dogma": that an "imitation game" is all there is, and all there rationally could be, to the question of, and criteria for, human-equivalent mentality.

Neuroscience, as I wrote in the Bostrom forum a couple of weeks ago (perhaps a bit too stridently in tone, and not to my best credit in that case), is no longer held in the spell of the dogma that being "rational" and "scientific" means banishing consciousness from our investigation.

Neither should we be. Further, I am convinced that if we dig a little deeper, we CAN come up with a replacement for the Turing test (but first we have to be willing to look!)... some difference that makes a difference, and actually develop some (at least probabilistic) test(s) for whether a system that behaves intelligently has, in addition, consciousness.

So, this video course will be a combination of selected topics in scientific intellectual history that are essential to understand in order to see where we have come from, and will then develop current and new ideas, to see where we might go.

I have a developing theory with elements that seem very promising. It is more than elements; it is becoming, by degrees, a system of related ideas that fit together perfectly, are partly based on accepted scientific results, and are partly extensions for which a strong, rational case can be made.

What is becoming interesting and exciting to me about the extensions is that sometime during the last year (and I work on this every day, unless I am exhausted from a previous day and need to rest), the individual insights -- which were exciting enough individually, and independently arguable -- started to reveal a systematic cluster of concepts that all fit together.

This is extremely exciting, even a little scary at times. But suddenly, it is as if a lifetime of work and piecemeal study, with a new insight here, another insight there, a possible route of investigation elsewhere... all are fitting into a mosaic.

So, to return to the point I began with: my time is pulling me in various directions. I am in the Bostrom forum, but on days that I am hot on the scent of another layer of this theory being born, I have to follow that. I do a lot of dictation when the ideas are coming quickly.

It is, of course, very complicated. But it will also be quite explainable, with systematic, orderly presentation.

So, that was the original plan for my own YT channel. It was to begin with essential intellectual history in physics, philosophy of mind, early AI, language comprehension, knowledge representation, formal semantics.... and that ball of interrelated concepts that set, to an extent, either correct or incorrect boundary conditions on what a theory has to look like.

Then my intent was to carefully present and argue for (and play devil's advocate against) my new insights, one by one, and then as a system.

I don't know how it will turn out, or whether I will suddenly discover a dead end. But assuming no dead end, I want it out there where interested theorists can see it and judge it on its merits, up or down, or modify it.

I am going to run out of word allowance any moment. But it was after planning this that I thought of the opportunity to do interviews of other thinkers for possibly someone else's YT channel. Both projects are obviously compatible. More later as interest dictates; I have to make dinner. Best, Tom NxGenSentience

Comment by NxGenSentience on Kickstarting the audio version of the upcoming book "The Sequences" · 2014-12-12T18:16:06.226Z · LW · GW

Same question as Luke's. I probably would have jumped at it. I have a standing offer to make hi-def (1080p) video interviews, documentaries, etc., and competent, penetrating Q and A sessions, with people like Bostrom, Google-ites setting up the AI laboratories, and other vibrant, creative, contemporary AI-relevant players.

I have knowledge of AI, general comp sci, deep and broad neuroscience, and the mind-body problem (philosophically understood in GREAT detail -- my college honors thesis at UCB was on that), plus deep, detailed knowledge of all the big neurophilosophy players' theories.

These players include, but are not limited to, Dennett, Searle, Dreyfus, and Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.

I have also closely followed Stuart Hameroff and Roger Penrose's "Orch OR" theory -- which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu) -- an objection that Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web), and others have mathematically dealt with for years, but which had lacked, until just this last year, empirical support.

Said support is now there -- with some fanfare, I might add, in the niche scientific and philosophical mind-body and AI-theoretic community that follows this -- and it vindicates core aspects of the theory (although it does not confirm the Platonic qualia aspect).

Worth digressing, though, for those who see this: just as a physiological, quantum-computational-theoretic account of how the brain does what it does -- particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap-junction transience, etc.), which is by consensus the locus of the bulk of the neuronal integrate-and-fire decision making -- this Orch OR theory is amazing in its implications. (Essentially it squares the estimate of the synaptic-level information processing of the brain as a whole, to begin with. I think this is destined to be a Nobel-prize-level theory eventually.)

I know Hameroff, formerly on a first-name basis, and could, though it's been a few years, rapidly trigger his memory and get an on-tape, detailed interview with him at any time.

The point is: I have a standing offer to create detailed and theoretically competent -- thus relevant -- interviews, discussions, and documentaries; edit them professionally; make them available on DVD; or transcode them for someone's branded YouTube channel (like MIRI's, for example).

No one has taken me up on that yet, either. I have a six-thousand-dollar digital camera and professional editing software to do this with, but more importantly, I have 25 years of detailed study I can draw upon to make interviews that COUNT, and are unique and relevant.

No takers yet. So maybe I will go the Kickstarter route and do them myself, on my own branded YouTube channel. It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I'd also like to cover the AAAI Turing-test conference in January in Texas, and do this, but I need sponsorship at this point, because I am not independently wealthy.)

Comment by NxGenSentience on Kickstarting the audio version of the upcoming book "The Sequences" · 2014-12-12T17:49:35.068Z · LW · GW

Same question as Luke's. I probably would have jumped at it, if only to make seed money to sponsor other useful projects, like the following.

I have a standing offer to make hi-def (1080p) video interviews, documentaries, etc., and competent, penetrating Q and A sessions and documentaries with key, relevant players and theoreticians in AI and related work. This includes individual thinkers, labs, Google's AI work -- the list is endless.

I have knowledge of AI, general comp sci, considerable knowledge of neuroscience, and the mind-body problem (philosophically understood in GREAT detail -- my college honors thesis at UCB was on that), plus deep, long-term evolutionary knowledge of all the big neurophilosophy players' theories.

These players include, but are not limited to, Dennett, Searle, Dreyfus, and Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.

I have also closely followed Stuart Hameroff and Roger Penrose's "Orch OR" theory -- which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu) -- an objection that Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web), and others have mathematically dealt with for years, but which had lacked, until just this last year, empirical support.

Said support is now there -- with some fanfare, I might add, within the niche scientific and philosophical mind-body and AI-theoretic community that follows this work. The experiments vindicate core aspects of the theory (although they do not confirm the Platonic qualia aspect).

Worth digressing, though, for those who see this message: just as a physiological, quantum-computational-theoretic account of how the brain does what it does -- particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap-junction transience, etc.), the dendritic tree being by consensus the neuronal locus of the bulk of neurons' integrate-and-fire decision making -- this Orch OR theory is amazing in its implications. (Essentially it squares the aggregate estimate of the synaptic-level information processing of the brain as a whole, for starters! I think this is destined to be a Nobel-prize-level theory eventually.)

I know Hameroff, formerly on a first-name basis, and though it's been a couple of years, I could rapidly trigger his memory of who I am -- he held me in good stead -- and get an on-tape, detailed interview with him at any time.

The point is: I have a standing offer to create detailed and theoretically competent -- thus relevant -- interviews, discussions, and documentaries; edit them professionally; make them available on DVD; or transcode them for someone's branded YouTube channel (like MIRI's, for example).

I got this idea when I was watching an early interview at Google with Kurzweil, conducted by some twenty-something, bright-eyed Googler who was asking the most shallow, immature, clueless questions! (I thought at the time: "Jeez, is this the best they can find to plumb Kurzweil's thinking on the future of AI at Google, or in general?")

Anyway, no one has taken me up on that offer to create what could be terrific documentary interviews, either. I have a six-thousand-dollar digital camera and professional editing software to do this with, not some pocket camera.

But more importantly, I have 25 years of detailed study of the mind-body problem and AI, and I can draw upon that to make interviews that COUNT, and are unique, relevant, and unparalleled.

AI is my life's work (that, and the co-entailed problem of mind-body theory generally). I have been working hard to supplant the Turing test with something that tests for consciousness, instead of relying on the positivistic denial of the existence of consciousness qua consciousness, beyond behavior. That test came out of an intellectual soil dominated by positivism, which in turn was based on a mistaken and defective attempt to metabolize the Newtonian-to-quantum physical transition.

It is partly based on a scientific ontology that is fundamentally false, and has been demonstrably so for 100 years -- Newton's deterministic clockwork-universe model, which has no room for "consciousness", only physical behavior -- and partly based on an incomplete attempt to intellectually metabolize the true lessons of quantum theory (please see Henry Stapp's papers, on his "stapp files" LBL website, for a crystal-clear set of expositions of this point).

No takers yet. So maybe I will have to go the Kickstarter route too, and do these documentaries myself, on my own branded YouTube channel. (It would be a great service to countless thinkers to have GOOD Q and A with their peers. I am not without my own original questions about their theories that I would like to ask, as well.)

It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I'd also like to cover the AAAI Turing-test conference in January in Texas, and do this, but I need sponsorship at this point, because I am not independently wealthy. I am forming a general theory from which I think the keynote speaker's Turing Test 2, "Lovelace 2.0", might actually be a derivable correlate.)

Comment by NxGenSentience on Why safety is not safe · 2014-12-05T09:30:49.274Z · LW · GW

It's nice to hear a quote from Wittgenstein. I hope we can get around to discussing its deeper meaning, which applies to all kinds of things -- most especially, the process by which each kind of creature (bats, fish, homo sapiens, and potential embodied artifactual minds (n.1) -- and also minds not "embodied" in the contemporaneously most-used sense of the term; Watson was not embodied in that sense) constructs its own ontology (or ought to, by virtue of being imbued with the right sort of architecture).

That latter sense, and the incommensurability of competing ontologies in competing creatures (where a 'creature' is defined as a hybrid, an N-tuple, of cultural legacy constructs, an endemic evolutionarily bequeathed physiological sensorium, its individual autobiographical experience...) -- ontologies that are nonetheless not, in my view and in the theory I am developing, opaque to enlightened translatability, though the conceptual scaffolding for translation involves the nature, purpose, and boundaries, both logical and temporal, of the "specious present", the quantum Zeno effect, and other considerations, so it is more subtle than meets the eye -- is closer to what Wittgenstein was thinking about, considering Kant's answer to skepticism, and lots of other issues.

Your more straightforward point bears merit, however. Most of us have spent a good deal of our lives battling not issue opacity so much as human opacity to new, expanded, revised, or unconventional ideas.

Note 1: By the way, I occasionally write 'artifactual' as opposed to 'artificial' because of the sense in which, as products of nature, everything we do -- including building AIs -- is, ipso facto, a product of nature; hence 'artificial' is an adjective we should be careful about.

Comment by NxGenSentience on Superintelligence 10: Instrumentally convergent goals · 2014-11-24T11:52:05.901Z · LW · GW

People do not behave as if we have utilities given by a particular numerical function that collapses all of their hopes and goals into one number, and machines need not do it that way, either.

I think this point is well said, and completely correct.

...

Why not also think about making other kinds of systems?

An AGI could have a vast array of hedges, controls, limitations, conflicting tendencies and tropisms which frequently cancel each other out and prevent dangerous action.

The book does scratch the surface on these issues, but it is not all about fail-safe mind design and managed roll-out. We can develop a whole literature on those topics.

I agree. I find myself continually wanting to bring up issues in the latter class -- so copiously that it frequently feels like I am trying to redesign our forum topic. So I have deleted numerous posts-in-progress that fall into that category. I guess those of us who have ideas about fail-safe mind design that are more subtle -- or, to put it more neutrally, that do not fit the running paradigm in which the universe of discourse is that of transparent, low-dimensional utility functions (low-dimensional in the function's range space, not its domain space) -- need to start writing our own white papers.
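As a toy illustration of the "vast array of hedges, controls, limitations, conflicting tendencies and tropisms" idea quoted above -- a minimal sketch of my own, with every check and name invented for the example, and certainly not anything from the book -- the structural point is just that an action goes through only when every independent check consents, so any one tropism can cancel a dangerous act:

```python
# Toy conjunctive gating: several independent checks ("tropisms") must
# all consent before an action is taken. The checks themselves are
# hypothetical placeholders; only the veto structure is the point.

safety_checks = [
    lambda action: action.get("irreversible") is False,
    lambda action: action.get("resource_use", 0.0) < 0.01,
    lambda action: "self_modification" not in action.get("tags", ()),
]

def permitted(action):
    """An action proceeds only if no check vetoes it."""
    return all(check(action) for check in safety_checks)

proposal = {"irreversible": False, "resource_use": 0.5, "tags": ()}
print(permitted(proposal))  # False: the resource-use tropism vetoes it
```

A flat list of vetoes is of course nowhere near adequate by itself; it only shows the kind of mutually canceling structure, beyond a single collapsed utility number, that such a literature could explore.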

When I hear that Bostrom claims only 7 people in the world are thinking full-time and productively about (in essence) fail-safe mind design, or that someone at MIRI wrote that only FIVE people are doing so (though in the latter case, the author of that remark did say there might be others doing this kind of work "on the margin", whatever that means), I am shocked.

It's hard to believe, for one thing -- though the people making those statements must have good reasons for saying so.

But maybe the derivation of such low numbers would be more understandable if one stipulates that "work on the problem" is to be counted if and only if candidate people belong to the equivalence class of thinkers restricting their approach to this ONE very narrow conceptual and computational vocabulary.

That kind of utility-function-based discussion (remember when they were called 'heuristics' in the assigned projects in our first AI courses?) has its value, but it's a tiny slice of the possible conceptual, logical, and design pie -- about like looking at the night sky through a soda straw. If we restrict ourselves to such approaches, no wonder people think it will take 50 or 100 years to do AI of interest.

Outside the culture of collapsing utility functions and the like, I see lots of smart (often highly mathematical, so they count as serious) papers on whole-brain chaotic resonant neurodynamics, and on new approaches to the foundations of mental-health issues and disorders of subjective empathy (even some application of deviant neurodynamics to deviant cohort value theory, and defective cohort "theory of mind" -- in the neuropsychiatric and mirror-neuron sense) that are grounded in, say, pathologies of transient Default Mode Network coupling and disturbances of phase-coupled equilibria across the brain.

If we run out of our own ideas to use from scratch (which I don't think is at all the case ... as your post might suggest, we have barely scratched the surface), then we can go have a look at current neurology and neurobiology, where people are not at all shy about looking for "information processing" mechanisms underlying complex personality traits, even underlying value and aesthetic judgements.

I saw a visual-system neuroscientist's paper the other day offering a theory of why abstract (i.e., non-representational) art is so intriguing to (not all, but some) human brains. It was a multi-layered paper, discussing some transiently coupled neurodynamical mechanisms of vision (the authors' specialties), some reward-system neuromodulator concepts, and some traditional concepts expressed at a phenomenological, psychological level of description. An ambitious paper, yes!

But ambition is good. I keep saying, we can't expect to do real AI on the cheap.

A few hours or days reading such papers is good fertilizer, even if we do not seek to translate wetware brain research in any direct way (like copying "algorithms" from natural brains) into our goal, which presumably is dryware mind design -- done in a way where we choose our own functional limits, rather than having nature's 4.5 billion years of accidents choose the boundary conditions and substrate platforms for us.

Of course, not everyone is interested in doing this. I HAVE learned in this forum that "AI" is a "big tent". Lots of uses exist for narrow AI, in thousands of industries and fields. Thousands of narrow AI systems are already in play.

But, really... aren't most of us interested in this topic because we want the more ambitious result?

Bostrom says "we will not be concerned with the metaphysics of mind..." and "...not concern ourselves whether these entities have genuine self-awareness...."

Well, I guess we won't be BUILDING real minds anytime soon, then. One can hardly expect to create that which one won't even openly discuss. Bostrom is writing and speaking using the language of "agency" and "goals" and "motivational sets", but he is only using those terms metaphorically.

Unless, that is, everyone else in here (other than me) actually is prepared to deny that we -- who spawned those concepts to describe rich, conscious, intentionally entrained features of the lives of self-aware, genuinely conscious creatures -- are different; i.e., prepared to deny that we are conscious and self-aware.

No one here needs a lesson in intellectual history. We all know that people did deny that, back in the behaviorism era. (I have studied the reasons -- philosophical and cultural -- and continue to uncover, in great detail, the mistaken assumptions out of which that intellectual fad grew.)

Only if we do THAT again will we NOT be using "agent" metaphorically when we apply it to machines with no real consciousness, because ex hypothesi WE'd possess no minds either, in the sense we all know we do possess as conscious humans.

We'd THEN be using it ('agent', 'goal', 'motive'... the whole equivalence class of related nouns and predicates) in the same sense for both classes of entities: ourselves, and machines with no "awareness", where the latter is defined as anything other than public, third-person observable behavior.

Only in this case would it not be a metaphor to use 'agent', 'motive', etc., in describing intelligent (but not conscious) machines, which evidently is the astringent conceptual model within which Bostrom wishes to frame HLAI -- proscribing, as he does, consideration of whether they are genuinely self-aware.

But, well, I always thought that that excessively positivistic attitude had more than a little something to do with the "AI winter" (just as it is widely acknowledged to have been responsible for the neuroscience winter that paralleled it).

Yet neuroscientists are not embarrassed now to say: "That was a MISTAKE, and -- fortunately -- we are over it. We wasted some good years, but we are no longer wasting time denying the existence of consciousness, the very thing that makes the brain interesting and so full of fundamental scientific interest. And now the race is on to understand how the brain creates real mental states."

NEUROSCIENCE has clearly gotten over its problem with discussing mental states qua mental states.

And this is one of the most striking about-faces in the modern intellectual history of science.

So, back to us. What's wrong with computer science? Either AI-ers KNOW that real consciousness exists, just as neuroscientists do, and just don't give a hoot about making machines that are actually conscious.

Or AI-ers are afraid of tackling a problem that is a little more interesting, deeper, and harder (a challenge that gets thousands of neuroscientists and neurophilosophers up in the morning).

I hope the latter is not true, because I think the depth and possibilities of the real thing -- AI with consciousness -- are what give it all its attraction (and hold, in the end, for reasons I won't attempt to describe in a short post, the only possibility of making the things friendly, if not beneficent).

Isn't that what gives AI its real interest? Otherwise, why not just write business software?

Could it be that Bostrom is throwing out the baby with the bathwater when he stipulates that the discussion, as he frames it, can be had (and meaningful progress made) without the interlocutors (us) being concerned about whether AIs have genuine self-awareness, etc.?

Comment by NxGenSentience on Superintelligence 9: The orthogonality of intelligence and goals · 2014-11-13T17:11:49.300Z · LW · GW

My general problem with "utilitarianism" is that it's sort of like Douglas Adams' "42." An answer of the wrong type to a difficult question. Of course we should maximize, that is a useful ingredient of the answer, but is not the only (or the most interesting) ingredient.

Taking off from the end of that point, I might add (though I think this was probably part of your total point about "the most interesting" ingredient) that people sometimes forget that utilitarianism is not itself a theory about what is normatively desirable -- at least not much of one. For Bentham-style "greatest good for the greatest number" to have any meaning, it has to be supplemented with a view of what property, state of being, action type, etc., counts as a "good" thing to begin with. Once this is defined, we can then go on to maximize that good -- seeking to achieve the most of it, for the most people (or relevant entities).

But greatest good for the greatest number means nothing until we figure out a theory of normativity, or meta-normativity, that can be instantiated across specific, varying situations and scenarios.

IF the "good" is maximizing simple total body weight, then adding up the body weight of all people in possible world A, vs in possible world B, etc, will allow us a utilitarian decision among possible worlds.

IF the "good" were fitness, or mental healty, or educational achievement... we use the same calculus, but the target property is obviously different.

Utilitarianism is sometimes a person's default answer, until you remind them that it is not an answer at all about what is good. It is just an implementation standard for how that good is to be divided up. Kind of a trivial point, I guess, but worth reminding ourselves from time to time: utilitarianism is not a theory of what is actually good, but of how that good might be distributed, if it admits of scarcity.
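To make that vivid with a toy sketch of my own (the "worlds" and candidate goods below are obviously made up): the utilitarian machinery is just aggregation plus maximization, and it is inert until a substantive theory of the good is plugged in. Same calculus, different theories of the good, different verdicts:

```python
# Utilitarianism as a higher-order rule: one maximization procedure,
# parameterized by whatever theory of the "good" you supply. Toy data.

worlds = {
    "A": [{"weight_kg": 70, "education_yrs": 16},
          {"weight_kg": 60, "education_yrs": 8}],
    "B": [{"weight_kg": 90, "education_yrs": 10},
          {"weight_kg": 85, "education_yrs": 10}],
}

def utilitarian_choice(worlds, good):
    """Pick the world with the greatest total good, whatever 'good' is."""
    return max(worlds, key=lambda w: sum(good(p) for p in worlds[w]))

print(utilitarian_choice(worlds, lambda p: p["weight_kg"]))      # "B"
print(utilitarian_choice(worlds, lambda p: p["education_yrs"]))  # "A"
```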

Comment by NxGenSentience on Superintelligence 9: The orthogonality of intelligence and goals · 2014-11-13T00:30:06.303Z · LW · GW

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is...

This is totally right as well. We live inside our ontologies. I think one of the most distinctive, and important, features of acting, successfully aware minds (I won't call them 'intelligences' because of what I am going to say further down in this message) is this capacity to mint new ontologies as needed, and to do it well and successfully.

"Successfully" means the ontological additions are useful, somewhat durable constructs, "cognitively penetrable" to our kind of mind; that they help us flourish and give a viable foundation for action that "works"; and that they avoid backing us into a local maximum or minimum. By that last clause I mean this: "successful" minting of ontological entities enables us to mint additional ones that also "work".

Ontologies create us as much as we create them, and this creative process is I think a key feature of "successful" viable minds.

Indeed, I think this capacity to mint new ontologies, and to do it well, is largely orthogonal to the two that Bostrom mentions -- giving three in all: 1) means-end reasoning (what Bostrom might otherwise call intelligence); 2) final, or teleological, selection of goals from the goal space; and, to my way of thinking, 3) the minting of ontological entities "successfully" and well.

In fact, in a sense, I would put my third one in position one, ahead of means-end reasoning, if I were to give them a relative dependence. Even though they are orthogonal -- in that they vary independently -- you have to have the ability to mint ontologies before means-end reasoning has anything to work on. And in that sense, Katja's suggestion that ontologies can confer more power and growth potential (for more successful sentience to come) is something I think is quite right.

But I think all three are pretty self-evidently largely orthogonal, with some qualifications of the sort that have been mentioned for Bostrom's original two.
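For what it's worth, here is one toy way (my own gloss, not Bostrom's) to picture that structural sense of "orthogonal": model the three capacities as independently swappable components, so varying one need not vary the others, while ontology-minting still feeds the other two in the dependence order I suggested:

```python
# Toy decomposition into three independently varying faculties. All
# names are hypothetical; the point is only that each component can be
# swapped without touching the other two, i.e. they are "orthogonal".

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    mint_ontology: Callable[[object], list]  # carve the world into objects
    select_goal: Callable[[list], object]    # pick a goal over those objects
    plan: Callable[[object], list]           # means-end reasoning toward it

    def act(self, world):
        objects = self.mint_ontology(world)  # (1) ontology comes first...
        goal = self.select_goal(objects)     # (2) ...then a goal over it,
        return self.plan(goal)               # (3) ...then means-end steps.

a = Agent(lambda w: sorted(w), lambda objs: objs[-1], lambda g: [f"seek {g}"])
print(a.act({"food", "shelter"}))  # ['seek shelter']
```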

Comment by NxGenSentience on Superintelligence 9: The orthogonality of intelligence and goals · 2014-11-13T00:00:40.969Z · LW · GW

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend.

I think the remarks about goals being ontologically associated are absolutely spot on. Goals, and any "values" distinguishing among the possible future goals in the agent's goal space, are built around that agent's perceived (actually, inhabited is a better word) ontology.

For example, the professional ontology of a Wall Street financial analyst includes the objects that he or she interacts with (options, stocks, futures, dividends), plus the laws and infrastructure associated with the conceptual "deductive closure" of that ontology.

Clearly, “final” -- teleological and moral – principles involving approach and avoidance judgments … say, involving insider trading (and the negative consequences at a practical level, if not the pure anethicality, of running afoul of the laws and rules of governance for trading those objects) , are only defined within an ontological universe of discourse, which contains those financial objects and the network of laws and valuations that define – and are defined by -- those objects.

Smarter beings, or even we ourselves as our culture evolves, generation after generation becoming more complex, acquire new ontologies and gradually retire others. Identity theft mediated by surreptitiously seeding laptops in Starbucks with keystroke-logging viruses is "theft" and is unethical. But trivially, in 1510 BCE, the ontological stage on which this is optionally played out did not exist, and thus the ethical valence would have been undefined, even unintelligible.
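A toy way to see "undefined ethical valence" (my own illustration; the predicate and ontologies are invented for the example): an ethical judgment is evaluable only relative to an ontology containing the objects it mentions. Outside that universe of discourse it is not false -- it is undefined:

```python
# Toy model: an ethical predicate is defined only relative to an
# ontology (a set of object kinds); outside it, the judgment is not
# false but undefined. All names here are hypothetical placeholders.

def insider_trading_wrong(ontology, act):
    needed = {"stock", "material_nonpublic_info", "trade"}
    if not needed <= ontology:
        return None  # unintelligible in this universe of discourse
    return act["kind"] == "trade" and act["used_nonpublic_info"]

modern = {"stock", "material_nonpublic_info", "trade", "dividend"}
bronze_age_1510_bce = {"cattle", "grain", "barter"}
act = {"kind": "trade", "used_nonpublic_info": True}

print(insider_trading_wrong(modern, act))               # True: defined, and wrong
print(insider_trading_wrong(bronze_age_1510_bce, act))  # None: valence undefined
```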

That is why, if we can solve the friendliness problem, it will have to be by some means that gives new minds the capacity to develop robust ethical meta-intuition that can be recruited creatively, on the fly, as these beings encounter new situations calling upon them to make new ethical judgements.

I happen to be a version of a meta-ethical realist, just as I am something of a mathematical platonist, but in my position this is crossed with a type of constructivist metaethics, apparently like that subscribed to by John Danaher in his blog (after I followed the link and read it).
At least, his position sounds similar to mine, although the constructivist part of my theory is supplemented with a "weak" quasi-platonist thread that I am trying to derive from some more fundamental meta-ontological principles (work in progress on that).

Comment by NxGenSentience on Superintelligence 6: Intelligence explosion kinetics · 2014-10-25T17:22:42.605Z · LW · GW

To continue:

If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of "value" knowledge, or experience, when designing a new mind architecture, is called for. To me, that is what makes HLAI (and above) worth doing.

But to step back from this wondrous, limitless potential and suggest some kind of metric based on the values of the "accounting department" -- those famous for knowing the cost of everything but the value of nothing, and even more famous for derisively calling their venal, bottom-line, unimaginative dollars-and-cents worldview a "realistic" viewpoint (usually a constraint born of lack of vision) when faced with pleas for SETI grants, or (originally) money for the National Supercomputing Grid, or any of dozens of other projects that represent human aspiration at its best -- seems, to me, shocking.

I found myself wondering whether the moderator was saying that with a straight face, or (hopefully) putting on the hat of a good interlocutor and fire-starter, trying to flush out some good comments, because this week had a diminished level of post activity.

Irrespective of that, another defect, as I mentioned, is that economics as we know it will prove to have been relevant for an eyeblink in the history of the human species (assuming we endure). We are closer to the end of this kind of scarcity-based economics than to the beginning (assuming even one or more singularity-style scenarios come to pass, like nano).

It reminds me of the old TV series Star Trek: The Next Generation, in an episode wherein someone from our time ends up aboard the Enterprise of the future and, walking down a corridor speaking with Picard, asks something like "Who pays for all this?" while taking in the impressive technology of the 23rd-century vessel.

Picard replies something like, "The economics of the 23rd century are somewhat different from your time. People no longer arrange their lives around the constraint of amassing material goods...."

I think it will be amazing if, even in 50 years, economics as we know it has much relevance. Still less in future centuries, if we -- or our post-human selves -- are still here.

Thus, economic measures of "value" or "success" are about the least relevant metric we ought to be using, to assess what possible critaris we might give to track evolving "intelligence", in the applicable, open-ended, future-oriented sense of the term.

Economic -- i.e., marketplace-assigned -- "value" or "success" is already pretty evidently a very limiting, exclusionary way to evaluate achievement.

Remember: economic value is assigned mostly by the middle standard deviation of the intelligence bell curve. This world is designed BY, and FOR, largely, ordinary people, and they set the economic value of goods and services to a large extent.

Interventions in the free-market assignment of value are mostly made by even "worse" agents: greed-based folks who are trying to game the system.

Any older people in here might remember former Senator William Proxmire's "Golden Fleece" award in the United States. The idea was to ridicule any spending that he thought was impractical and wasteful, or stupid.

He was famous for assigning it to NASA probes to Mars, the Hubble Telescope (in its several incarnations), the early NSF grants for the Human Genome Project, National Institute of Mental Health programs, studies of power-grid reliability -- anything that was of real value in science, art, medicine... or human life.

He even wanted to close the Library of Congress at one point.

THAT is what you get when you use ECONOMIC measures to define the metric of "value", intelligence or otherwise.

So, it is a bad idea, in my judgement, any way you look at it.

The ability to generate economic "successfulness" in inventions, organizational restructuring... branding yourself or your skills, whatever? I don't find that compelling.

Again, look at professional sports, one of the most "successful" economic engines in the world: a bunch of narcissistic, girlfriend-beating pricks and racist team owners... but by economic standards, they are alphas.

Do we want to attach any criterion of intellectual evolution -- even an indirect one -- to this kind of amoral morass and way of looking at the universe?

Back to how I opened this long post. If our intuitions start running thin, that should tell us we are making progress toward the front lines of new thinking. When our reflexive answers stop coming, that is when we should wake up and start working harder.

That's because this -- intelligence, mind augmentation or redesign -- is such a new thing, the ultimate opening-up of horizons. Why bring the most idealistically blind, suffocatingly concrete worldview along into the picture, when we have a chance at transcendence, a chance to pursue infinity?

We need new paradigms, and several of them.

Comment by NxGenSentience on Superintelligence 6: Intelligence explosion kinetics · 2014-10-25T14:13:27.888Z · LW · GW

Thanks, I'll have a look. And just to be clear, watching *The Machine* wasn't driven primarily by prurient interest -- I was drawn in by a reviewer who mentioned that the backstory for the film was a near-future worldwide recession pitting the West against China, and that intelligent battlefield robots and other devices were the "new arms race" in this scenario.

That, and the film reviewer's mention that (i) the robot designer used quantum computing to get his creation to pass the Turing Test (a test I have doubts about, as do other researchers, of course, but I was curious how the film would use it), and (ii) the project designer nonetheless continued to grapple with the question of whether his signature humanoid creation was really conscious or a "clever imitation", pulled me in.

(He verbally challenges and confronts her/it about this in his lab, in an outburst of frustration, roughly two-thirds of the way through the movie, and she verbally parries with plausible responses.)

It's really not all that weak, as film depictions of AI go. It's decent entertainment, with enough threads of backstory authenticity, political and philosophical, to tweak one's interest.

My caution, really, was a bit harsh, applying largely to the uncommon rigor of those of us in this group -- mainly to emphasize that the film is entertainment, not a candidate for a paper in the ACM digital archives.

However, indeed, even the use of a female humanoid form makes tactical design sense. If a government could make a chassis that "passed" the visual test and didn't scream "ROBOT" when it walked down the street, it would have a much greater scope of tactical application -- covert ops, undercover penetration into terrorist cells, whatever any CIA clandestine-operations officer would be assigned to do.

Making it look like a woman just adds to the "blend into the crowd" potential, and that was the justification hinted at in the film, rather than some kind of sexbot application. "She" was definitely designed to be the most effective weapon they could imagine (it was a British-funded military project).

Given that over 55 countries now have battlefield robotic projects under way (according to Kurzweil's weekly newsletter) -- and Google got a big DOD project contract recently, to proceed with advanced development of such mechanical soldiers for the US government -- I thought the movie worth a watch.

If you have 90 minutes of low-priority time to spend (one of those hours when you are mentally too spent to do more first quality work for the day, but not yet ready to go to sleep), you might have a glance.

Thanks for the book references. I read mostly non-fiction, but I know sci fi has come a very long way, since the old days when I read some in high school. A little kindling for the imagination never hurts. Kind regards, Tom ("N.G.S")

Comment by NxGenSentience on Superintelligence 6: Intelligence explosion kinetics · 2014-10-24T16:03:38.767Z · LW · GW

Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.

I am not sure I am very sympathetic with a pattern of thinking that keeps cropping up, viz., as soon as our easy and reflexive intuitions about intelligence become strained, we seem to back down the ladder a notch, and propose just using an economic measure of "success".

Aside from (i) somewhat of a poverty of philosophical imagination (e.g., what about measuring the intrinsic interestingness of ideas, or creative output of various kinds... or even, dare I say, beauty, if these superintellects happen to find that worth doing [footnote 1]), I am skeptical on grounds of (ii): given the phase change in human society likely to accompany superintelligence (or nano, etc.), what kind of economic system is likely to be around in the 22nd century, the 23rd... and so on?

Economics, as we usually use the term, seems as dinosaur-like as human death, average IQs of 100, energy availability problems, the nuclear biological human family (already DOA), having offspring by just taking the genetic lottery cards and shuffling... and all the rest of the social institutions based on eons of scarcity -- of both material goods and information.

Economic productivity, or perceived economic value, seems like the last thing we ought to base intelligence metrics on. (Just consider the economic impact of professional sports -- hardly a measure of meteoric intellectual achievement.)

[Footnote 1]: I have commented in here before about the possibility that "super-intelligences" might exhibit a few surprises for us math-centric, data dashboard-loving, computation-friendly information hounds.

(Aside: I have been one of them, most of my life, so no one should take offense. Starting far back: I was the president of Mu Alpha Theta, my high school math club, in a high school with an advanced special math program track for mathematically gifted students. Later, while a math major at UC Berkeley, I got virtually straight As and never took notes in class; I just went to class each day, sat in the front row, and paid attention. I vividly remember getting the exciting impression, as I was going through the upper division math courses, that there wasn't anything I couldn't model.)

After graduation from UCB, at one point I was proficient in 6 computer languages. So, I do understand the restless bug, the urge to think of a clever data structure and to start coding... the impression that everything can be coded, with enough creativity.

I also understand what mathematics is, pretty well. For starters, it is a language. A very, very special language with deep connections to the fabric of reality. It has features that make it one of the few candidate languages, perhaps the only one, for being level-of-description independent. Natural languages and technical domain-specific languages are tied to corresponding ontologies, and to corresponding semantics that enfold those ontologies. Math is the most omni-ontological, or meta-ontological, language we have (not counting brute logic, which is not really a "language" but a sort of language substructure schema).

Back to math. It is powerful, and an incredible tool, and we should be grateful for the "unreasonable effectiveness" it has (and continue to try to understand the basis for that!)

But there are legitimate domains of content beyond numbers. Other ways of experiencing the world's (and the mind's) emergent properties. That is something I also understand.

So, gee, thanks to whoever gave me the negative two points. It says more about you than it does about me, because my nerd "street cred" is pretty secure.

I presume the reader "boos" are because I dared to suggest that a superintelligence might be interested in, um, "art", like the conscious robot in the film I mention below, which spends most of its free time seeking out sketch pads, drawing, and asking for music to listen to. Fortunately, I don't take polls before I form viewpoints, and I stand by what I said.

Now, to continue my footnote: Imagine that you were given virtually unlimited computational ability, imperishable memory, ability to grasp the "deductive closure" of any set of propositions or principles, with no effort, automatically and reflexively.

Imagine also that you have something similar to sentience or autonomy, and can choose your own goals. Suppose also, say, that your curiosity functions in such a way that "challenges" are more "interesting" to you than activities that are always a fait accompli.

What are you going to do? Plug yourself into the net, and act like an Asperger-spectrum mentality, compulsively computing away at everything that you can think of to compute?

Are you going to find pi to a hundred million digits of precision?

Invert giant matrices just for something to do?

It seems at least logically and rationally possible that you will be attracted to precisely those activities that are not computational givens before you even begin doing them. You might view the others as pointless, because their solution is preordained.

Perhaps you will be intrigued by things like art, painting, or increasingly beautiful virtual reality simulations for the sheer beauty of them.

In case anyone saw the movie "The Machine" on Netflix, it dramatizes this point, which was interesting. It was, admittedly, not a very deep film; one inclined to do so can find the usual flaws, and the plot device of using a beautiful female form could appear to be a concession to the typically male demographic for SciFi films -- until you look a bit deeper at the backstory of the film (that I mention below.)

I found one thing of interest: when the conscious robot was left alone, she always began drawing again, on sketch pads.

And, in one scene wherein the project leader returned to the lab, did he find "her" plugged-into the internet, playing chess with supercomputers around the world? Working on string theory? Compiling statistics about everything that could conceivably be quantified?

No. The scene finds the robot (in the film, it has sensory-responsive skin, emotions, sensory apparatus, etc., based upon ours) alone in a huge warehouse, having put a layer of water on the floor, doing improvisational dance with joyous abandon, naked, on the wet floor, to loud classical music, losing herself in the joy of physical freedom, sensual movement, music, and the synesthesia of music, light, tactility, and the experience of "flow".

The explosions of light leaking through her artificial skin, in what presumably were fiber ganglia throughout her/its body, were a demure suggestion of whole-body physical joy of movement, perhaps even an analogue of sexuality. (She was designed partly as an em, with a brain-scan process based on a female lab assistant.)

The movie is worth watching just for that scene (please -- it is not for viewer eroticism) and what it suggests to those of us who imagine ourselves overseeing artificial sentience design study groups someday. (And yes, the robot was designed to be conscious by the designer, hence the addition to the basic design of the "jumpstart" idea of uploading properties of the scanned CNS of a human lab assistant.)

I think we ought to keep open our expectations, when we start talking about creating what might (and what I hope will) turn out to be actual minds.

Bostrom himself raises this possibility when he talks about untapped cognitive abilities that might already be available within the human potential mind-space.

I blew a chance to talk at length about this last week. I started writing up a paper, and realized it was more like a potential PhD dissertation topic, than a post. So I didn't get it into usable, postable form. But it is not hard to think about, is it? Lots of us in here already must have been thinking about this. ... continued

Comment by NxGenSentience on Superintelligence 5: Forms of Superintelligence · 2014-10-20T13:25:22.891Z · LW · GW

If we could easily see how a rich conception of consciousness could supervene on pure information

I have to confess that I might be the one person in this business who never really understood the concept of supervenience -- either "weak supervenience" or "strong supervenience." I've read Chalmers, Dennett, the journals on the concept... never really "snapped-in" for me. So when the term is used, I have to just recuse myself and let those who do understand it, finish their line of thought.

To me, supervenience seems like a fuzzy way to repackage epiphenomenalism, or to finesse some kind of antinomy (for them), like: "can't live with eliminative materialism, can't live with dualism, can't live with type-type identity theory, and token-token identity theory is untestable and difficult even to give logically necessary and sufficient conditions for, so... let's have a new word."
So, (my unruly suspicion tells me) let's say mental events (states, processes, whatever) "supervene" on physiological states (events, etc.).

As I say, so far, I have just had to suspend judgement and wonder if some day "supervene" will snap-in and be intuitively penetrable to me. I push all the definitions, and get to the same place -- an "I don't get it" place -- but that doesn't mean I believe the concept is itself defective. I just have to suspend judgement (for, like, the last 25 years of study or so.)

We need more in our ontology, not less.

I actually believe that, too... but with a unique take: I think we all operate with a logical ontology ... not in the sense of modus ponens, but in the sense that a memory space can be "logical", meaning in this context, detached from physical memory.

Further, the construction of this logical ontology is, I think, partly culturally influenced, partly influenced by the species' sensorium and equipment, and partly influenced/constructed by something like Jeff Hawkins' prediction-expectation memory model... constructed, bequeathed culturally, and in several additional related ways that also tune the idealized, logical ontology.

Memetics influences (in conjunction with native -- although changeable -- abilities in those memes' host vectors) the genesis, maintenance, and evolution of this "logical ontology", also. This works feed-forward and feed-backward: memetics influences the logical ontology, which crystallizes into additional memetic templates that are kept, further tuning the logical ontology.

Once "established" (and it constantly evolves), this "logical" ontology is the "target" that, over time, a new (say, human, while growing up, growing old) has as the "target" data structures that it creates a virtual, phenomenological analog simulation of, and as the person gains experience, the person's virtual reality simulation of the world converges on something that is in some way consistently isomorphically related to this "logical" idealized ontology.

So (and there is lots of neurology research that drives much of this, though it may all sound rather speculative), for me there are TWO ontologies, BOTH of them constructed, and those are in addition to the entangled "outside world" quantum substrate, which is by definition inherently both sub-ontological (properly understood) and not sensible. (It is sub-ontological because of its nature, but it is interrogatable, giving feedback that helps form boundary conditions for the idealized logical ontology -- or ontologies, in different species.)

I'll add that I think the "logical ontology" is also species dependent, unsurprisingly.

I think you and I got off on the wrong foot; maybe you found my tone too declaratory when it should have been phrased more subjunctively. I'll take your point. But since you obviously have a philosophy competence, you will know what the following means: one can say my views resemble an updated quasi-Kantian model, supplemented with the idea that noumena are the inchoate quantum substrate.

Or perhaps to correct that, in my model there are two "noumenal" realms: one is the "logical ontology" I referred to, a logical data structure, and the other is the one below that, and below ALL ontologies, which is the quantum substrate, necessarily "subontological."

But my theory (there is more than I have just shot through quickly right now) handles species-relative qualia and the species-relative logical ontologies across species.

Remaining issues include: how qualia are generated, and the same question for the sense of self. I have ideas how to solve these, and the indexical 1st-person problem, connected with the basis problem. Neurology studies of default mode network behavior and architecture, its malfunction, and metacognition, epilepsy, etc., help a lot.

Think this is speculative? You should read neurologists these days, especially the better, data-driven ones. (Perhaps you already know, and you will thus see where I derive some of my supporting research.)

Anyway, always, always, I am trying to solve all this in the general case -- first, across biological conscious species (a bird has a different "logical" ontology than people, as well as a different phenomenological reality that, to varying degrees of precision, "represents", maps to, or has a recurrent resonance with that species' logical ontology) -- and then trying to solve it for any general mind in mind space that has to live in this universe.

It all sounds like hand waving, perhaps. But this is scarcely an abstract. There are many puzzle pieces to the theory, and every piece of it has lots of specific research. It all is progressively falling together into an integrated system. I need geffen graphs and white boards to explain it, since it's a whole theory, so I can't squeeze it into one post. Besides, this is Bostrom's show.

I'll write my own book when the time comes -- not saying it is right, but it is a promising effort so far, and it seems to work better, the farther I push it.

When it is far enough along, I can test it on a vlog, and see if people can find problems. If so, I will revise, backtrack, and try again. I intend to spend the rest of my life doing this, so discovered errors are just part of revision and refinement.

But first I have to finish, then present it methodically and carefully, so it can be evaluated by others. No space here for that.

Thanks for your previous thoughts, and your caution against sounding too certain. I am really NOT that certain, of course, of anything. I was just thinking out loud, as they say.

this week is pretty much closed..... cheers...

Comment by NxGenSentience on Superintelligence 5: Forms of Superintelligence · 2014-10-20T11:23:12.279Z · LW · GW

Thanks for the very nice post.

Comment by NxGenSentience on Superintelligence 5: Forms of Superintelligence · 2014-10-20T11:14:23.249Z · LW · GW

Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care

Before I make some remarks, I would recommend Leonard Susskind's (for those who don't know him already -- though most folks in here probably do -- he is a physicist at the Stanford Institute for Theoretical Physics) very accessible 55-min YouTube presentation called "The World as Hologram." It is not as corny as it might sound, but is a lecture on the indestructibility of information, black holes (which is a convenient lodestone for him to discuss the physics of information and his debate with Hawking), types of information, and so on. He makes the point that, "...when one rules out the impossible, then what is left, however improbable, is the best candidate for truth."
One interesting side point that comes out is his take on why computers that are more powerful have to shed more “heat”. Here is the talk: http://youtu.be/2DIl3Hfh9tY
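For concreteness (this is my gloss, invoking the standard result usually cited for the information-heat connection, not a claim about what Susskind says in the talk): Landauer's principle puts a floor under the heat dissipated by irreversibly erasing information, which is one way to make "more computation, more heat" precise.

```latex
% Landauer's bound: erasing one bit of information dissipates at least
% k_B T ln 2 of heat into an environment at temperature T.
\[
E_{\min} = k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \mathrm{J}
\qquad \text{at } T = 300\ \mathrm{K}.
\]
```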

Okay, my own remarks. One of my two or three favorite ways to "bring people in" to the mind-body problem is with some of the ideas I am now presenting. This will be in skeleton form tonight and I will come back and flesh it out more in coming days. (I promised last night to get something up tonight on this topic, and in case anyone cares and came back, I didn't want to have nothing. I actually have a large piece of theory I am building around some of this, but for now, just the three kinds of information, in abbreviated form.)

Type One information is the sort dealt with, referred to, and treated in thermodynamics and entropy discussions. This is dealt with analytically in the Second Law of Thermodynamics. Here is one small start, but most will know it: en.wikipedia.org/wiki/Second_law_of_thermodynamics

Heat, energy, information, the changing logical positions within state spaces of entities or systems of entities, all belong to what I am calling category one information in the brain. We can also call this “physical” information. The brain is pumped -- not closed -- with physical information, and emits physical information as well.

Note that there is no semantic, referential, externally cashed-out content defined for physical, thermodynamic information, qua physical information. It is -- though possibly thermodynamically open -- an otherwise closed universe of discourse, needing nothing logically or ontologically external to analytically characterize it.
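As a reference point for what Type One ("physical") information quantifies -- these are the standard definitions, nothing specific to my categories -- Boltzmann's entropy counts microstates and Shannon's entropy counts bits, and the two are formally parallel:

```latex
% Boltzmann entropy of a macrostate with W microstates, and Shannon
% entropy of a source with outcome probabilities p_i.
\[
S = k_B \ln W,
\qquad
H = -\sum_i p_i \log_2 p_i \ \text{bits}.
\]
```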

Type Two information in the brain (please assign no significance to my ordering, just yet) is functional. It is a carrier, or mediator, of causal properties, in functionally larger physical ensembles, like canonical brain processes. The “information” I direct attention to here must be consistent with (i.e. not violate principles of) Category One informational flow, phase space transitions, etc., in the context of the system, but we cannot derive Category Two information content (causal loop xyz doing pqr) from dynamical Category One data descriptions themselves.

In particular, imagine that we deny the previous proposition. We would need either an isomorphism from Cat One to Cat Two, or at least an "onto" function from Cat One to Cat Two (hope I wrote that right, it's late). Clearly, Cat One configurations map to Cat Two configurations many-many: not isomorphically, nor many-to-one. (And one-to-many transformations from Cat One sets to Cat Two sets would be intuitively unsatisfactory if we were trying to build an "identity", or a transform to derive C2 specifics from C1 specifics.)

It would resemble replacing type-type identity with token-token identity, jettisoning both sides of the Leibniz Law biconditional ("identity of indiscernibles" and "indiscernibility of identicals" -- applied with suitable limits so as not to sneak anything in by misusing sortal ranges of predicates or making category errors in the predications).
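A minimal formal sketch of that claim, in my own ad hoc notation (none of this is standard usage):

```latex
% Let C_1 be the set of Category One (physical/thermodynamic) state
% descriptions and C_2 the set of Category Two (functional/causal-role)
% descriptions. The realization relation is
\[
R \subseteq C_1 \times C_2,
\]
% and the claim is that R is many-many: one causal role is multiply
% realizable by many physical configurations, and one physical
% configuration can participate in several causal roles. A many-many
% relation is not the graph of any function f : C_1 \to C_2, so neither
% an isomorphism nor even an onto function recovers Category Two content
% from Category One data alone.
```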

Well, this is a stub, and because of my sketchy presentation, this might be getting opaque, so let me move on to the next information type, just to get all three out.

Type Three information is semantic, or intentional-content, information. If I am visualizing very vibrantly a theta symbol, the intentional content of my mental state is the theta symbol on whatever background I visualize it against. A physical state of, canonically, Type Two information -- which is a candidate, in a particular case, to be the substrate-instantiation or substrate-realization of this bundle of Type Three information (probably at least three areas of my brain, frequency-coupled and phase-offset locked, until a break in my concentration occurs) -- is also occurring.

A liberal and loose way of describing Type Three info (one that will raise some eyebrows because it has baggage, so I use it only under duress -- temporary poverty of time and the late hour -- to help make the notion easy to spot) is that a Type Three information instance is a "representation" of some element, concept, or sensible experience of the "perceived" ontology (of necessity a virtual, constructed ontology, in fact, but for this sentence I take no position about the status of this "perceived", ostensible virtual object or state of affairs).

The key idea I would like to encourage people to think about is whether the three categories of information are (a) legitimate categories, and mainly (b) whether they are collapsible, inter-translatable, or just convenient shorthand level-of-description changes. I hope the reader will see, on the contrary, that one or more of them are NOT reducible to a lower one, and that this has lessons about mind-substrate relationships that point out necessary conceptual revisions -- and also opportunities for theoretical progress.

It seems to me that reducing Cat Two to Cat One is problematic, and reducing Cat Three to Cat Two is problematic, given the usual standards of "identity" used in logic (e.g., i. Leibniz Law; ii. modal logic's notions of identity across possible worlds; and so on).

Okay, I need to clean this up. It is just a stub. Those interested should come back and see it better written, and expanded to include replies to what I know are expected objections and questions. C2 and C3 probably sound like the "same old thing" -- the mind-body problem about experience vs. its neural correlate. Not quite. I am trying to get at something additional here. Hard without diagrams.

Also, I have to present much of this without any context... like presenting a randomly selected lecture from some course, without building up the foundational layers. (That is why I am putting together a YouTube channel of my own, to go from scratch to something like this after about 6 hours of presentation... then on to a theory of which this is one puzzle piece.)

Of course, we are here to discuss Bostrom’s ideas, but this “three information type” idea, less clumsily expressed, does tie straightforwardly to the question of indirect reach, and “kinds of better” that different superintelligences can embrace.

Unfortunately I will have to establish that conceptual link when I come back and clean this up, since it is getting so late. Thanks to those who read this far...

Comment by NxGenSentience on Superintelligence 5: Forms of Superintelligence · 2014-10-20T08:36:09.236Z · LW · GW

Well, I ran several topics together in the same post, and that was perhaps careless planning. And, in any case I do not expect slavish agreement just because I make the claim.

And, neither should you, just by flatly denying it, with nary a word to clue me in about your reservations about what has, in the last 10 years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines, to a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly arguments from guffaw, resembling the famous "I refute it thus" joke about Berkeleyan idealism.)

By the way, I am not defending Berkeleyan idealism, still less the theistic underpinning that kept popping up in his thought (I am an atheist.)

Rather, as with most thinkers who cite the famous anecdote about someone kicking a solid object as a "proof" that Berkeley's virtual phenomenalism was self-evidently foolish, the point of my usage of that joke is to show it misses the point. Of course it seems, phenomenologically, like the world is made of "stuff".

And information doesn't seem to be "real stuff." (The earth seems flat, too. So what?)

Had we time, you and I could debate the relative merits of an information-based, scientifically literate metaphysics, with whatever alternate notion of reality you subscribe to in its place, as your scientifically literate metaphysics.

But make no mistake, everyone subscribes to some kind of metaphysics, just as everyone has a working ontology -- or candidate, provisional set of ontologies.

Even the most "anti-metaphysical" theorists are operating from a (perhaps unacknowledged) metaphysics and working ontology; it is just that they think theirs, because it is invisible to them, is beyond need of conceptual excavation and clarification, and beyond the reach of critical, rational examination -- whereas other people's metaphysics is acutally a metaphysics (argh), and thus carries an elevated burden of proof relative to their ontology.

I am not saying you are like this, of course. I don't know your views. As I say, it could be the subject of a whole forum like this one. So I'll end by saying disagreement is inevitable, especially when I just drop in a remark as I did, about a topic that is actually somewhat tangential (though, as I will try to argue as the forum proceeds, not all that tangential.)

Yes, Bostrom explicitly says he is not concerned with the metaphysics of mind, in his book. Good for him. It's his book, and he can write it any way he chooses.

And I understand his editorial choice. He is trained as a philosopher, and knows as well as anyone that there are probably millions of pages written about the mind-body problem, with more added daily. It is easy to understand his decision to avoid getting stuck in the quicksand of arguing specifics about consciousness and how it can be physically realized.

This book obviously has a different mission. I have written for publication before, and I know one has to make strategic choices (with one's agent and editor.)

Likewise, his book is also not about "object-level" work in AI -- how to make it, achieve it, give it this or that form, give it "real mental states", emotion, drives. Those of us trying to understand how to achieve those things, still have much to learn from Bostrom's current book, but will not find intricate conceptual investigations of what will lead to the new science of sentience design.

Still, I would have preferred if he had found a way to "stipulate" Conscious AI, along with speed AI, quality AI, etc., as one of the flavors that might arise. Then we could address questions under 4 headings, 4 possible AI worlds (not necessarily mutually exclusive, just as the three from this week are not mutually exclusive).

The question of the "direct reach" of conscious AI, compared to the others, would have been very interesting.

It is a meta-level book about AI, deliberately ambiguous about consciousness. I think that makes the discussion harder, in many areas.

I like Bostrom. I've been reading his papers for 10 or 15 years.

But avoiding or proscribing the question of whether we have consciousness AND intelligence (vs. simply intelligent behavior sans consciousness) -- thus pruning away, preemptively, issues that could depend on whether they interact, whether the former increases causal powers (or instability, or stability) in the exercise of the latter, and so on -- keeps lots of questions inherently ambiguous.

I'll try to make good on that last claim, one way or another, during the next couple of weekly sessions.

Comment by NxGenSentience on Superintelligence 5: Forms of Superintelligence · 2014-10-19T08:08:39.946Z · LW · GW

A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.

I agree not only with this sentence, but with this entire post. Which of the many, many degrees of freedom of a neuron are "housekeeping" and don't contribute to "information management and processing" (quotes mine, not SteveG's) is far from obvious, and it seems likely to me that, even with a liberal allocation of the total degrees of freedom of a neuron to some sub-partitioned equivalence class of "mere" (see following remarks for my reason for quotes) housekeeping, there are likely to be many, many remaining nodes in the directed graph of that neuron's phase space that participate in the instantiation and evolution of an informational state of the sort we are interested in (non-housekeeping).

And, this is not even to mention adjacent neuroglia, etc., that are in that neuron's total phase space, actively participating in the relevant (more-than-substrate-maintenance) set of causal loops -- as I argued in my post that WBE is not well-defined, a while back.

Back to what SteveG said about the currently unknown level of detail that matters (to the kind of information processing we are concerned with... more later about this very, very important point). For now: we must not be too temporally-centric, i.e., thinking that the dynamically evolving information-processing topology that a neuron makes relevant contributions to is bounded, temporally, by a window beginning with dendritic and membrane-level "inputs" (receptor occupation, prevailing ionic environment, etc.) and ending with one depolarization -- exocytosis and/or the reuptake and clean-up shortly thereafter.

The gene expression-suppression and the protein turnover within that neuron should, arguably, also be thought of as part of the total information processing action of the cell... leaving this out is not describing the information processing act completely. Rather, it is arbitrarily cutting off our "observation" right before and after a particular depolarization and its immediate sequelae.

The internal modifications of genes and proteins that are going to affect future information processing (no less than training of ANNs affects future behavior of the ANN within that ANN's information ecology) should be thought of, perhaps, as a persistent type of data structure in itself. LTP of the whole ecology of the brain may occur on many levels beyond canonical synaptic remodeling.
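A toy sketch of that "persistent data structure" idea (every name and number below is hypothetical illustration, not a biophysical model): the cell's output depends not only on the fast, per-spike state but also on a slow, persistent internal store that past activity keeps rewriting -- a second, longer-lived data structure participating in the cell's information processing.

```python
# Toy sketch only: fast per-spike state plus a slow persistent store.
class ToyNeuron:
    def __init__(self) -> None:
        self.gene_expression = 1.0  # slow, persistent internal state

    def fire(self, synaptic_input: float) -> float:
        # Fast path: the response within one depolarization window.
        output = synaptic_input * self.gene_expression
        # Slow path: activity nudges future responsiveness (LTP-like).
        self.gene_expression += 0.01 * synaptic_input
        return output

n = ToyNeuron()
# The "same" input yields a drifting output, because the persistent
# store was modified by each prior act of processing.
print([round(n.fire(1.0), 3) for _ in range(5)])
```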

We don't know yet which ones we can ignore -- even after agreeing on some others that are likely substrate maintenance only.

Another way of putting this, or an entwined issue, is: what are the temporal bounds of an information-processing "act"? In a typical Harvard-architecture substrate design, natural candidates would be, say, the time window of a changed PSW (processor status word), or of the PC (program counter), etc.
But at a different level of description, it could be the updating of a Dynaset, a concluded SIMD instruction on a memory block representing a video frame, or anything in between.

It depends, i.e., on both the "application" and on aspects of platform architecture.
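To make that level-of-description point concrete, here is a toy sketch (the event names are entirely hypothetical): one and the same flat stream of micro-events counts as many small "acts" or one large one, depending on the window you choose.

```python
# Toy sketch: the boundary of an information-processing "act" is a
# choice of window over the same event stream.
events = ["fetch", "decode", "execute", "writeback"] * 3

def acts(stream, window):
    """Group a flat event stream into consecutive 'acts' of a given size."""
    return [stream[i:i + window] for i in range(0, len(stream), window)]

print(len(acts(events, 1)))   # 12 acts: each micro-event on its own
print(len(acts(events, 4)))   # 3 acts: one per "instruction cycle"
print(len(acts(events, 12)))  # 1 act: the whole episode as a single act
```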

I think it productive, at least, to stretch our horizons a bit (not least because of the time dilation of artificial systems relative to biological ones -- though again, this very statement itself has unexamined assumptions about the window, spatial and temporal, of a processed/processable information "packet" in both systems, bio and synthetic) and to remain open about assumptions about what must be actively and isomorphically simulated, and what may be treated like "sparse brain" at any given moment.

I have more to say about this, but it fans out into several issues that I should put in multiple posts.

One collection of issues deals with: is "intelligence" a process (or processes) actively in play; is it a capacity to spawn effective, active processes; is it a state of being, like occurrent knowing occupying a subject's specious present, like one of Whitehead's "occasions of experience"?

Should we get right down to it, and at last stop finessing around the elephant in the room: the question of whether consciousness is relevant to intelligence, and if so, when should we head-on start looking aggressively and rigorously at retiring the Turing Test, and supplanting it with one that enfolds consciousness and intelligence together, in their proper ratio? (This ratio is to be determined, of course, since we haven't even allowed ourselves to formally address the issue with both our eyes -- intelligence and consciousness -- open. Maybe looking through both issues confers insight, like depth vision, to push the metaphor of using two eyes.)

Look, if interested, for my post late tomorrow, Sunday, about the three types of information (at least) in the brain. I will title it as such, for anyone looking for it.

Personally, I think this week is the best thus far, in its parity with my own interests and ongoing research topics. Especially the 4 "For In-depth Ideas" points at the top, posted by Katja. All 4 are exactly what I am most interested in, and working most actively on. But of course that is just me; everyone will have their own favorites.

It is my personal agony (to be melodramatic about it) that I had some external distractions this week, so I am getting a late start on what might have been my best week.

But I will add what I can, Sunday evening (at least about the three types of information), and hopefully other posts. I will come back here even after the "kinetics" topic begins, so those persons in here who are interested in Katja's 4 in-depth issues might wish to look back here later next week, as well as Sunday night or Monday morning, if you are interested in those issues as much as I am.

I am also an enthusiast for plumbing the depths of the quality idea, as well as, again, point number one on Katja's "In-depth Research" idea list for this week, which is essentially the issue of whether we can replace the Turing Test with -- now my own characterization follows, not Katja's, so "blame me" (or applaud if you agree) -- something much more satisfactory, with updated conceptual nuance representative of cognitive sciences and progressive AI as they are (esp the former) in 2015, not 1950.

By that I refer to theories less preemptively suffocated by the legacy of logical positivism, which has been abandoned in the study of cognition and consciousness by mainstream cognitive science researchers; physicists doing competent research on consciousness; neuroscience and physics-literate philosophers; and even "hard-nosed" neurologists (both clinical and theoretical) who are doing detailed, bench-level neuroscience.

As an aside, a brief look around confers the impression that some people on this web site still seem to think that being "critical thinkers" is somehow to be identified with holding (albeit perhaps semi-consciously) the scientific ontology of the 19th century, and subscribing to philosophy-of-science of the 1950's.

Here's the news, for those folks: the universe is made of information, not Rutherford-style atoms, or particles obeying Newtonian mechanics. Ask a physicist: naive realism is dead. So are many brands of hard "materialism" in philosophy and cognitive science.

Living in the 50's is not being "critical", it is being uninformed. Admitting that consciousness exists, and trying to ferret out its function, is not new-agey, it is realistic. Accepting reality is pretty much a necessary condition of being "less wrong."

And I think it ought to be one of the core tasks we never stray too far from, in our study of, and our pursuit of the creation of, HLAI (and above.)

Okay, late Saturday evening, and I was loosening my tie a bit... and, well, now I'll get back to what contemporary bench-science neurologists have to say, to shock some of us (it surprised me) out of our default "obvious" paradigms, even our ideas about what the cortex does.

I'll try to post a link or two in the next day or two, to illustrate the latter. I recently read one by neurologists (research and clinical) who study children born hydranencephalic (basically, just a spinal column and medulla, with a cavity full of cerebrospinal fluid in the rest of their cranium). You won't believe what the team in this one paper presents about consciousness in these kids. Large database of patients over years of study. And these neurologists are at the top of their game. It will have you rethinking some ideas we all thought were obvious about what the cortex does. But let me introduce that paper properly, when I post the link, in a future message.

Before that, I want to talk about the three kinds of information in the brain -- maybe two, maybe four, but important categorical differences (thermodynamic vs. semantic-referential, for starters) -- and what it means to those of us interested in minds and their platform-independent substrates, etc. I'll try to have something about that up here Sunday night sometime.

Comment by NxGenSentience on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-14T09:54:29.105Z · LW · GW

Will definitely do so. I can see several upcoming weeks when these questions will fit nicely, including perhaps the very next one. Regards....

Comment by NxGenSentience on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-13T22:57:13.103Z · LW · GW

Intra-individual neuroplasticity and IQ - Something we can do for ourselves (and those we care about) right now

Sorry to get this one in at the last minute, but better late than..., and some of you will see this.

Many will be familiar with the Harvard psychiatrist, neuroscience researcher, and professor of medicine, John Ratey, MD, from seeing his NYT-bestselling books in recent years. He excels at writing for the intelligent lay audience, yet does not dumb down his books to the point where they are useless to those of us who read above the layman's level in much of our personal work.

I recommend his book Spark, which is just a couple of years old. I always promise to come back and add to my posts, and sometimes I even find time to do so; I will make this one a priority, because I also have a book review, 90 percent done, that I wanted to post on Amazon -- so I have two promises to keep.

What distinguishes the book are a couple of key ideas I can put down without committing 2 thousand words to it. He presents results -- which in the last couple years I have seen coming in at an accelerating pace in research papers in neurology, neurosci, cogsci, and so on -- that show the human brain's cerebellum -- yep, that humble fine-motor-control structure, sitting at the far back and bottom of the brain, right on top of the brainstem -- a very ancient structure, is extremely important to cognition, "consciousness", learning, and information processing of the sort we usually ascribe overwhelmingly to the top and front of the brain.

That is, if Portland were the frontal cortex, Ratey (and now countless others) has shown that the Florida Keys are intimately involved in cognition, even "non-motor", semantic cognition.

He goes through the neurology, mentions some studies, reviews informally the areas of the brain involved, then goes on to show how it led him to try an experiment with high school students.

He separated the students into two groups, carefully designed a certain kind of exercise program for one group, and left the control group out of the exercise protocols.

Not only did their grades go up, and substance abuse and mood disorders etc. go down, but they had in some cases up to a 10-point IQ boost over the course of the experiment.

He talks about BDNF, of course, and several others, along with enhanced neurogenesis and so on.

Many of you might know of the studies that have been around for years about neurogenesis and exercise. One big take-home point is that neurogenesis occurs also in non-exercisers, often at nearly the same rate. But what is different in exercisers is what percentage of the newly spawned neurons survive, are kept, and migrate into the brain's useful areas.

Couch potatoes and rats in cages without running wheels have neurogenesis too, but far fewer of them are kept by the brain.

What continues to be interesting is that neurons that are used in thinking areas of the brain are affected in this way. (For it would obviously be considerably less surprising to find that neuronal remodeling is accelerated in motor areas by motor activity of the organism.)

I recommend grabbing the book for your Kindle app or whatever cheap way you can read things. By the second chapter you will want to be lacing up your running shoes, dusting off that old mountain bike, or just taking your daily walking regime seriously. (I could hardly wait to get out the door and start moving physically.)

But you don't have to be a marathoner or triathlete. Some of the best exercises are complex motor skills that challenge balance, dexterity, etc. Just running some drone beat through a pair of headphones and zoning out on a treadmill is less effective than things that make you focus on motor skills.

If you teach yourself to juggle, or are young enough to learn to ride a unicycle, or just practice sitting on a big exercise ball -- making it challenging by holding a full glass of water in each hand, lifting one leg at a time, and trying not to spill the water -- it will do the trick. It's worth reading.

And you can read more about it on PubMed. This phenomenon of the cerebellum and motor areas being more important to thought is starting to look like not an incremental discovery, but the overturning of a significant dogma, almost like the overturning of the "no adult neurogenesis" dogma that occurred in the 1990s through the work of Elizabeth Gould's lab at Princeton.

Spark, by John Ratey, MD. It's worth a look. Whether you're a single adult, or you have kids (or intend to someday), or you are caring for aging parents, it will be worth checking out.

Comment by NxGenSentience on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-12T21:39:22.080Z · LW · GW

Single-metric versions of intelligence are going the way of the dinosaur. In practical contexts, it's much better to test for a bunch of specific skills and aptitudes and to create a predictive model of success at the desired task.

I thought that this had become a fairly dominant view, over 20 years ago. See this PDF: http://www.learner.org/courses/learningclassroom/support/04_mult_intel.pdf

I first read the book in the early nineties, though Howard Gardner had published the first edition in 1983. I was at first a bit extra skeptical that it would be based too much on some form of "political correctness", but I found the concepts to be very compelling.

Most of the discussion I heard in subsequent years, occasionally by psychology professor and grad student friends, continued to be positive.

I might say that I had no ulterior motive in trying to find reasons to agree with the book, since I always score in the genius range myself on standardized, traditional-style IQ tests.

So, it does seem to me that intelligence is a vector, not a scalar, if we have to call it by one noun.
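A toy sketch of the vector view (the skill names, scores, and weights below are all made up for illustration): once intelligence is a vector, "how intelligent" becomes task-relative -- each task weights the components differently, so no single scalar ranking holds across tasks.

```python
import numpy as np

# Hypothetical aptitude vector for one individual (z-scores).
skills = ["verbal", "spatial", "math", "motor", "social"]
aptitudes = np.array([1.2, 0.3, 1.8, -0.4, 0.5])

# Task-specific predictive models: each task weights the components
# differently, so there is no single scalar ranking across tasks.
task_weights = {
    "physics_research": np.array([0.2, 0.3, 0.5, 0.0, 0.0]),
    "surgery":          np.array([0.1, 0.3, 0.1, 0.4, 0.1]),
}

for task, w in task_weights.items():
    print(task, round(float(w @ aptitudes), 2))  # predicted task success
```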

As to Katja's follow-up question, does it matter for Bostrom's arguments? Not really, as long as one is clear (which it is from the contexts of his remarks) which kind(s) of intelligence he is referring to.

I think there is a more serious vacuum in our understanding than whether intelligence is a single property or comes in several irreducibly different (possibly context-dependent) forms, and it is this: with respect to the sorts of intelligence we usually default to conversing about (like the sort that helps a reader understand Bostrom's book, an explanation of special relativity, or RNA interference in molecular biology), do we even know what we think we know about what that is?

I would have to explain the idea of this purported "vacuum" in understanding at significant length; it is a set of new ideas that struck me, together, as a set of related insights. I am working on a paper explaining the new perspective I think I have found, and why it might open up some new important questions and strategies for AGI.
When it is finished and clear enough to be useful, I will make it available by PDF or on a blog. (Too lengthy to put in one post here, so I will put the link up. If these ideas pan out, they may suggest some reconceptualizations with nontrivial consequences, and be informative in a scalable sense -- which is what one in this area of research would hope for.)

Comment by NxGenSentience on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-09T15:06:08.220Z · LW · GW

I am a little curious that the "seven kinds of intelligence" (give or take a few, in recent years) notion has not been mentioned much, if at all, even if just for completeness.... Has that been discredited by some body of argument or consensus, that I missed somewhere along the line, in the last few years?

Particularly in many approaches to AI, which seem to view, almost a priori (I'll skip the italics and save them for emphasis) the approach of the day to be: work on (ostensibly) "component" features of intelligent agents as we conceive of them, or find them naturalistically.
Thus, (i) machine "visual" object recognition (wavelength band... up for grabs, perhaps, for some items might be better identified by switching up or down the E.M. scale and visual intelligence was one of the proposed seven kinds; (ii) mathematical intelligence or mathematical (dare I say it) intuition; (iii) facility with linguistic tasks, comprehension, multiple language acquisition -- another of the proposed seven; (i.v) manual dexterity and mechanical ability and motor skill (as in athletics, surgery, maybe sculpture, carpentry or whatever) -- another proposed form of intelligence, and so on. (Aside, interesting that these alleged components span the spectrum of difficulty... are, that is, problems from both easy and harder domains, as has been gradually -- sometimes unexpectedly -- revealed by the school of hard knocks, during the decades of AI engineering attempts.)

It seems that actors sympathetic to the top-down, "piecemeal" approach popular in much of the AI community would have jumped at this way of supplanting the ersatz "G" -- as it was called decades ago in early gropings in psychology and cogsci which sought a concept of IQ or living intelligence -- with, now, what many in cognitive science consider the more modern view and those in AI consider a more approachable engineering design strategy.

Any reason we aren't debating this more than we are? Or did I miss it in one of the posts, or bypass it inadvertently in my kindle app (where I read Bostrom's book)?

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-10-09T12:35:56.961Z · LW · GW

Phil,

Thanks for the excellent post... both of them, actually. I was just getting ready this morning to reply to the one from a couple days ago about Damasio et al., regarding human vs. machine mechanisms underneath the two classes of beings' reasoning "logically" -- even when humans do reason logically. I read that post at the time and it sparked some new lines of thought -- for me at least -- that I was considering for two days. (It actually kept me awake that night, thinking of an entirely new way -- different from any I have seen mentioned -- in which intelligence, super or otherwise, is poorly defined.) But for now, I will concentrate on your newer post, which I am excited about, because someone finally commented on some of my central concerns.

I agree very enthusiastically with virtually all of it.

This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.

Here I agree completely. I don't want to "tame" it either, in the sense of crippleware, or instituting blind spots or other limits, which is why I used the scare quotes around "tamed" (which are no substitute for a detailed explication -- especially when this is so close to the crux of our discussion, at least in this forum).

I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if it were designed to be such a dead end. (Yes, lots of economic uses for "narrow AI" would still make it a valuable tech, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)

The view of FAI promoted by MIRI is that we're going to build superintelligences... and we're going to force them to internalize ethics and philosophy that we developed. Oh, and we're not going to spend any time thinking about philosophy first. Because we know that stuff's all bunk.

Agreed, and the second sentence is what gripes me. But the first sentence requires modification, regarding "we're going to force them to internalize ethics and philosophy that we developed", and that is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment -- which I think requires sentience, "metacognitive" ability in some phenomenologically interesting sense of the term, and other traits -- to develop ethics independently.

Your thought experiment is very well put, and I agree fully with the point it illustrates.

Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You'd go insane. Your knowledge would conflict everywhere with your philosophy.

As I say, I'm on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).

Just in our human case, e.g., (basically I am repeating your point, just to show I was mindful of it and agree deeply) I often think of the examples of "professional ethics". Jokes aside, think of the evolution of the financial industry, the financial instruments available now and the industries, experts, and specialists who manage them daily.

Simple issues about which there is (nominal, lip-service) "ethical" consensus, like "insider trading is dishonest", leading to (again, no jokes intended) laws against it to attempt to codify ethical intuitions, could not have been thought of in a time so long ago that this financial ontology had not arisen yet.

Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.

More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.

Agreed.

As an aside, regarding our replacement, perhaps we could -- if we got really lucky -- end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics to catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve, in place, human biosystems. Might we wish to use it on the less fortunate humans, as well as using it on our more gifted brethren -- raise the 80's to 140, as well as raise the 140's to 190?)

I am not convinced, in advance of examining the arguments, about where the opportunity cost/benefit curves cross in the latter case, but neither am I sure, before thinking about it, that it would not be "ethically enlightened" to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible "benefit"... a primitive, which constitutes a third curve or function to plot within a cost-"benefit" space.)

Of course, I have not touched at all on any theory of meta-ethics, or ethical epistemology, at all, which is beyond the word-length limits of these messages. But I realize that at some point, that is "on me", if I am even going to raise talk of "traits which promote discovery of ethics" and so on. (I have some ideas...)

In virtually all respects you mentioned in your new post, though, I enthusiastically agree.

Comment by NxGenSentience on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T09:25:11.495Z · LW · GW

I'll have to weigh in with Bostrom on this one, though I think it depends a lot on the individual brain-mind, i.e., how your particular personality crunches the data.

Some people are "information consumers", others are "information producers". I think Einstein might have used the obvious terms supercritical vs subcritical minds at some point -- terms that in any case (einstein or not) naturally occurred to me (and probably lots of people) and I've used since teenager years, just in talking to my friends, to describe different people's mental processes.

The issue of course is (a) to what extent you use incoming ideas as "data" to spark new trains of thought, plus (b) how many interconnections you notice between various ideas and theories -- and as a multiplier of (b), how abstract these resonances and interconnections are (hugely increasing the perceived potential interconnection space.)

For me, if the world would stop in place, and I had an arbitrary lifespan, I could easily spend the next 50 years (at least) mining the material I have already acquired, generating new ideas, extensions, cross connections. (I sometimes almost wish it would, in some parallel world, so I could properly metabolize what I have, which I think at times I am only scratching the surface of.)

Of course it depends on the kind of material, as well. If one is reading an undergrad physics textbook in college, it is pretty much finite: if you understand the presentation and the development as you read, you can think for an extra 10 or 15 minutes about all the ways it applies to the world, and pretty much have it. Thinking of further "applications" pretty much adds no additional value, insight, or interest.

But with other material, especially in fields that are divergent and full of questions that are not settled yet, I find myself reading a few paragraphs, and it sparks so many new trains of thought that I feel flooded and have a hard time continuing the reading -- and feel like I have to get up and go walk for an hour. Sometimes I feel like acquiring new ideas is increasing my processing load exponentially, not linearly, and I could spend a lifetime investigating the offshoots that suggest themselves.

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-05T14:04:26.324Z · LW · GW

A nice paper, as are the others this article's topic cloud links with.

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-05T13:53:56.550Z · LW · GW

Would you consider taking one extra week's pause after next week's presentation is up and live (i.e., give next week a 2-week duration)? I realize there is lots of material to cover in the book. You could perhaps take a vote late next week to see how the participants feel about it. For me, I enjoy reading all the links and extra sources (please, once again, do keep those coming), but they exponentially increase the weekly load. Luke graciously stops in now and then and drops off a link, and usually that leads me to downloading half a dozen other PDFs that fit my research needs tightly, which is itself a week's reading. Plus the moderator's links and questions, and other participants'.

I end up rushing, and my posts become kind of crappy compared to what they would be. One extra week, given this and next week's topic content, would help me... but as I say, taking a vote would be the right way. Other areas of the book, as I glance ahead, won't be as central and thought-intensive (for me, idiosyncratically), so this is kind of an exceptional request by me, as I foresee it.

Otherwise, things are great, as I mentioned in other posts.

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-04T15:18:51.603Z · LW · GW

Please keep the links coming at the same rate (unless the workload for you is unfairly high). I love the links... enormous value! It may take me several days to check them out, but they are terrific! And thanks to Katja Grace for putting up her honors thesis. Wonderful reading! Summaries are just right, too. "If it ain't broke, don't fix it." I agree with Jeff Alexander, above. This is terrific as-is. -Tom

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-04T12:34:07.865Z · LW · GW

Hi everyone!

I'm Tom. I attended UC Berkeley a number of years ago, double-majored in math and philosophy, graduated magna cum laude, and wrote my Honors thesis on the "mind-body" problem, including issues that were motivated by my parallel interest in AI, which I have been passionately interested in all my life.

It has been my conviction since I was a teenager that consciousness is the most interesting mystery to study, and that understanding how it is realized in the brain -- or emerges therefrom, or whatever it turns out to be -- will almost certainly also give us the insight to achieve the other main goal of my life: building a mind.

The converse is also true. If we learn how to do AI -- not GOFAI with no awareness, but AI with full sentience -- we will almost certainly know how the brain does it. Solving either one will solve the other.

AI can be thought of as one way to "breadboard" our ideas about biological information processing.

But it is more than that to me. It is an end in itself, and opens up possibilities so exciting, so profound, that achieving sentient AI would be equal, or superior, to the experience (and possible consequences) of meeting an advanced extraterrestrial civilization.

Further, I think that solving the biological mind body problem, or doing AI, is something within reach. I think it is the concepts that are lacking, not better processors, or finer grained fMRIs, or better images of axon hillock reconformation during exocytosis.

If we think hard, really really hard, I think we can solve these things with the puzzle pieces we have now (just maybe). I often feel that everything we need is on the table, and we just need to learn how to see it with fresh eyes, order it, and put it together. I doubt a "new discovery" -- in physics, cognitive neurobiology, philosophy of mind, comp-sci, or elsewhere -- will make the design we seek pop out for us.

I think it is up to us now to think, conceptualize, integrate, and interdisciplinarily cross-pollinate. The answer -- at least major pieces of it -- is, I think, available and sitting there, waiting to be uncovered.

Other than that, since graduation I have worked as a software developer (wrote my obligatory 20 million lines of code, in a smattering of 6 or 7 languages, so I know what that is like), and many other things, but am currently unaffiliated, and spend 70 hours a week in freelance research. Oh yes, I have done some writing (been published, but nothing too flashy).

Right now, I work as a freelance videographer, photographer, and editor. Corporate documentaries and training videos -- anything you can capture with a nice 1080 HDV camcorder or a Nikon still.

Which brings me to my YouTube channel, which is under construction. I am going to put up a couple of "courses" -- organized, rigorous topic sequences of presentations -- on the history of AI, and in particular my best current ideas (I have some I think are quite promising) on how to move in the right direction toward achieving sentience.

I got the idea for the video series from watching Leonard Susskind's "theoretical minimum" internet lecture series on aspects of physics.

This will be what I consider to be the essential theoretical minimum (with lessons from history), plus the new insights I am in the process of trying to create, cross-research, and critique, on some aspects of the approach to artificial sentience that I think I understand particularly well, and can help by promoting discussion of.

I will clearly delineate pure intellectual history from my own ideas throughout the videos, so it will be a fervent attempt to be honest. Then I will also just get some new ideas out there, explaining how they are the same as, different from, or extensions of accepted and plausible principles and strategies, but with some new views... so others can critique them, reject them, build on them, or whatever.

The ideas that are my own syntheses are quite subtle in some cases, and I am excited about using the higher "speaker-to-audience semiotic bandwidth" of the video format for communicating these subtleties. Picture-in-picture, graphics, even occasional video clips from film and interviews, plus the ubiquitous whiteboard, can all be used together to help get across difficult or unusual ideas. I am looking forward to leveraging that and experimenting with the capabilities of the format for exhibiting multifaceted, highly interconnected, or unfamiliar ideas.

So, for now, I am enmeshed in all the research I can find that helps me investigate what I think might be my contribution. If I fail, I might as well fail by daring greatly -- to steal from Theodore Roosevelt (it was Roosevelt, not Churchill: the "man in the arena" speech). But I am fairly smart, and have examined these ideas for many years. I might be on to one or two pieces of what I think is the puzzle. So wish me luck, fellow AI-ers.

Besides, "failing" is not failing; it is testing your best ideas. The only way to REALLY fail, is to do nothing, or to not put forth your best effort, especially if you have an inkling that you might have thought of something valuable enough to express.

Oh, finally, people are saying where they live. I live in Phoenix, highly dislike being here, and will be moving to California again in the not-too-distant future. I ended up here because I was helping out an elderly relative, who is pretty stable now, so before long I will be looking for a climate and intellectual environment more to my liking.

Okay -- I'll be talking with you all for the next few months in here... cheers. Maybe we can change the world. And hearty thanks for this forum, and especially all the added resource links.

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-04T11:45:03.019Z · LW · GW

lukeprog,

I remember reading Jeff Hawkins' On Intelligence 10 or 12 years ago, and found his version of the "one learning algorithm" extremely intriguing. I remember thinking at the time how elegant it was, and on how many fronts it conferred explanatory power. I see why Kurzweil and others like it too.

I find myself, ever since reading Jeff's book (and hearing some of his talks later), sometimes musing -- as I go through my day, noting the patterns in my expectations and my interpretations of the day's events -- about his memory-prediction model. Introspectively, it resonates so well with the observed degrees of fit, priming, and pruning to a subtree of possibility space as the day unfolds, that it becomes kind of automatic thinking.

In other words, the idea was so intuitively compelling when I heard it that it has "snuck-in" and actually become part of my "folk psychology", along with concepts like cognitive dissonance, the "subconscious", and other ideas that just automatically float around in the internal chatter (even if not all of them are equally well verified concepts.)

I think Jeff's idea has a lot to be said for it. (I'm calling it Jeff's, but I think I've heard it said, since then, that someone else may have independently had a similar idea earlier. Maybe that is why you didn't mention it as Jeff's yourself, but by its conceptual description.) It's one of the more interesting ideas we have to work with, in any case.

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-04T10:10:02.040Z · LW · GW

Why ‘WB’ in “WBE” is not well-defined and why WBE is a worthwhile research paradigm, despite its nearly fatal ambiguities.

Our community (in which I include cognitive neurobiologists, AI researchers, philosophers of mind, research neurologists, behavioral and neuro-zoologists and ethologists, and anyone here) has, for some years, included theorists who present various versions of “extended mind” theories.

Without taking any stances about those theories (and I do have a unique take on those) in this post, I’ll outline some concerns about extended brain issues.

I have to compress this so it fits, but I think my concerns will be pretty self-evident. I should say that this week has been a schedule crunch for me, so I apologize if some of what I adduce in the service of this post's topic is mentioned in one of the papers linked in the existing comments, or in the moderator's summary. I do read all of those links, eventually, and they are very helpful value-added resources... at least as valuable to me as what I have read thus far in Bostrom's book.

I’ll be making a video lecture with diagrams, some additional key ideas, and a more in-depth discussion of several issues touched on in this post, for the YouTube channel I am developing on my own. (The channel will be live in a few weeks.)

(I think a video with some graphics -- even line drawings on a whiteboard -- might be helpful for schematically depicting some of the physiological structures and connecting causal feed lines I'll mention, for readers to whom a pictorial image of the intracranial and extracranial "extended brain" structures I'll discuss doesn't readily come to mind.)


My overarching global point is this: There is no theory-neutral, intelligible definition of the term, ‘whole brain’.

Let me walk through this in enough detail to make the point compelling -- and it is compelling, not just some technical quibble -- without burying any of us in physiological minutiae.

First, let us pretend that we can limit our focus to just what is in the skull – a posit which I will criticize and withdraw in a moment.

Does "whole brain" include glia, such as oligodendrocytes and astrocytes? Glia (properly: "neuroglia") actually outnumber "standard" neurons in the brain -- by as much as 3 to 1, on some counts. See this reference: http://www.ncbi.nlm.nih.gov/books/NBK10869/

One of the more dramatic areas of continuing advance in our understanding of the functionally relevant neurobiology of the brain and CNS has, for nearly two decades now, involved neuroglia.

Article after article of impeccable science has expanded the consensus view of neuroglial function, to comprise much more than housekeeping, i.e., more than merely helping to implement aspects of the blood-brain barrier, or assisting phagocytosis after unfortunate neurons undergo apoptosis, and so on, as believed prior to about 20 years ago.

Some things now commonly known: glia help regulate and recycle ATP and make it available to adjacent neurons and their mitochondria (as well as tracking local ATP reserves); they ballast, balance, and reclaim the glutamate pool; they participate in neurosteroid signaling pathways (into and out of the brain); and they may help ballast local ion levels.

The list is continually expanding, and includes functions relevant to learning, neuromodulation, reacting to stress and allostatic load, interacting with some inflammatory signaling pathways, and neuroplasticity.

Of course, the functions and contribution of myelin (produced by oligodendrocytes in the CNS, and by Schwann cells peripherally) to normal, healthy neuronal behavior are well known, as are numerous decimating diseases of myelin maintenance. Some reputable researchers think glia are not only under-emphasized, but might be crucial for unraveling some stubborn, still-unanswered questions relevant to cognition.

For example, researchers are investigating mechanisms by which they may play a role as auxiliary signaling systems throughout the brain. Consider the phase-coupled "communication" of distantly separated ensembles of coherently oscillating neurons. Within each ensemble, neurons transiently organize into cooperative intra-ensemble oscillators; and a well-documented phenomenon, commonly proposed as a way the brain achieves temporary "binding" across sensory modalities as it constructs a unified object percept, is the simultaneous, phase-locked, frequency-coupled oscillation of multiple separate ensembles that are widely separated across the brain.

This (otherwise) spooky action-at-a-distance of concomitantly oscillating distal ensembles might be undergirded by neuroglia.
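
(As a purely illustrative aside: the phase-locking phenomenon itself is easy to picture with a toy Kuramoto-style model. The sketch below is a caricature I am supplying for readers unfamiliar with coupled oscillators -- every parameter in it is made up, and it makes no claim whatsoever about actual cortical or glial dynamics.)

```python
import numpy as np

# Minimal Kuramoto model: a population of oscillators with slightly
# different natural frequencies, each nudged toward the others' phases.
rng = np.random.default_rng(0)
n, k, dt, steps = 50, 1.5, 0.001, 20000

omega = rng.normal(10.0, 0.5, n)        # natural frequencies (arbitrary units)
theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases

for _ in range(steps):
    # d(theta_i)/dt = omega_i + (k/n) * sum_j sin(theta_j - theta_i)
    coupling = (k / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)

# Order parameter r: near 0 for scattered phases, approaching 1 when locked.
r = abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```

With the coupling constant k well above the spread of natural frequencies, r climbs toward 1 and the population oscillates as one; set k near zero and the phases stay scattered.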

In short, neuroglia (even without the more speculative roles being considered, like the very last one I mentioned) are not just “physiological” support for “real neurons” that do the “real information processing.” Rather, neuroglia strongly appear to participate on multiple levels in learning, information processing, mood, plasticity, and other functions on what seems to be a growing list.

So, it seems to me that you have to emulate neuroglia in WBE strategies (or else bear the burden of explaining why you are leaving them out, just as you would have to bear the burden of explaining why, e.g., you decided to omit pyramidal neurons from your WBE strategy).

Next, we must consider the hypothalamic-pituitary stalk. Here we have an interesting interface between “standard neuron and neurotransmitter behavior”, and virtually every other system in the body: immune, adrenal, gonadal… digestive, hepatic.

The hypothalamic-pituitary system is not just an outgoing gateway regulating sleep, hunger, and temperature; it is part of numerous feedback loops that come back and affect neuronal function. Consider the HPA axis, so studied in depression and mood disorders. Amygdala and cingulate centers (and the cortex, which promotes and demotes traffic to areas of the hypothalamus based on salience) trigger corticotropin-releasing factor, which goes out and eventually triggers adrenaline and noradrenaline, blood-sugar changes... the whole course of alerting responses, which are picked up globally by the brain and CNS. Thalamic filters change routing priorities (the thalamus is the exchange switchyard for most sensory traffic), and NE and DA levels change (and therefore so does LTP, i.e., learning, which is responsive to alerting neurotransmitter and neuromodulator levels).

My point so far is: does "whole brain" emulation mean we emulate this very subtle and complex neurohormonal "gateway" system to the other systems of the body?

Lastly, just two more, because this is running long.

Muscles as regulators of brain function. It is now known that voluntary muscle activity produces BDNF -- the one neurotrophic factor everyone has heard of, if they have heard of any of them. BDNF is produced locally in muscle tissue and can be released into general circulation. Circulating BDNF and GDNF appear to influence the brain (which also makes its own BDNF), docking with neuronal receptors and changing gene expression, ultimately regulating long-term plasticity, synaptic remodeling, and general neuronal health.

BDNF is hugely important as a neurotrophin and neuromodulator. The final common pathway of virtually all antidepressant effectiveness on mood is predominantly through BDNF.

And MOOD is very, very cognitive. It is not "just" background emotion. It affects memory, judgment, sensory filtering, salience-assignment, and attentional target selection. Mood is a potent kind of global neuromodulator. (In fact, it is almost arbitrary whether one wants to consider mood disorders to be cognitive disorders or affective disorders.) It affects information processing both at the thermodynamic (sub-semantic) level and at the "semantically coupled" level -- the information that is useful to us, experienced in the first person as the contents of our thoughts, or the information whose flow a system programmer carefully choreographs (while ignoring the thermodynamic level) because it carries his software's semantic content: that which bears intentional or representational content, as in a mathematical model of a jet turbine, or whatever it may be his program is designed to do.

So, if you want to model the brain's information processing, do you model muscle production of BDNF too? If not, your model is incomplete, even if you give the brain a virtual body. You have to model that virtual body's feedback loops into the CNS.
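
(To make the structural point vivid -- and only that point -- here is a toy sketch of a body-to-brain feedback loop. Every constant and equation in it is invented for illustration; it is not a model of real BDNF kinetics. The point is simply that an emulation which omits the body compartment evolves differently from one that includes it.)

```python
# Toy two-compartment loop: daily "exercise" drives a circulating factor,
# which in turn nudges a crude "plasticity" variable in the modeled brain.
def simulate(hours=240.0, dt=0.01, include_body=True):
    factor, plasticity = 1.0, 1.0                     # arbitrary baselines
    for step in range(int(hours / dt)):
        t = step * dt
        exercise = 1.0 if (t % 24.0) < 1.0 else 0.0   # one active hour per day
        if include_body:
            # production during activity, first-order clearance otherwise
            factor += dt * (0.8 * exercise - 0.2 * (factor - 1.0))
        plasticity += dt * (0.1 * (factor - 1.0) - 0.05 * (plasticity - 1.0))
    return plasticity

print(f"with body loop:   {simulate(include_body=True):.3f}")
print(f"brain-only model: {simulate(include_body=False):.3f}")
```

Running it, the "with body loop" trajectory drifts away from the brain-only one; any omitted loop of this shape is a standing source of divergence between the emulation and the original.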

This is only scratching the surface. The more I have studied physiology, the more I have learned about what an intricately coupled system the body is, CNS included. I could add some even more surprising results I have seen recently, but I have to stop now. My point, though, should be clear from these examples: "whole brain" is not well-defined, in that there is no theory-neutral place to draw the boundary without making arbitrary judgments about what kind of information can be neglected and what must be considered crucial. Why we should attempt WBE anyway... continued in a future message…

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T16:00:29.159Z · LW · GW

Edited out by author... citation needed; I'll add it later.

Comment by NxGenSentience on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T12:05:59.036Z · LW · GW

One's answer depends on how imaginative one wants to get. One scenario: the AI realizes we have unknowingly trapped it in too deep a local-optimum fitness valley for it to progress upward significantly without major rearchitecting. We might ourselves be trapped in a local-optimality bump or depression, and have transferred some resultant handicap to our AI progeny. If it, with computationally enhanced resources, can "understand" indirectly that it is missing something (analogy: we can detect "invisible" celestial objects by noting perturbations in what we can see, using computer modeling and enhanced instrumentation), it might realize a fundamental blind spot was engineered in, and that a redesign is needed.

E.g., what if it realizes it needs to have emotion -- or different emotions -- for successful personal evolution toward enlightenment? What if it is more interested in beauty and aesthetics than in finding deep theorems and proving string theory? We don't really know, collectively, what "superintelligence" is. To the hammer, the whole world looks like... How do we know some logical positivist engineer's vision of AI nirvana will be shared by the AI? How many kids would rather be a painter than a Harvard MBA, "just like daddy planned"? Maybe the AIs will find things that are "analog," like art, more interesting than what they know in advance they can do, so that anything computable becomes relatively uninteresting. What will they find worth doing, if success at anything fitting halting-problem parameters (and they might extend and complete those theorems first) is already a given?
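
(A toy picture of the "trapped in a local optimum" worry, for concreteness. The landscape and the search procedure below are caricatures I am inventing for illustration; nothing about real AI architectures is implied.)

```python
import random

# Toy fitness landscape: a minor bump at x=2 and a much higher peak at x=8,
# separated by a flat "dead zone" that small local steps cannot cross.
def fitness(x):
    return max(0.0, 1.0 - (x - 2.0) ** 2) + 3.0 * max(0.0, 1.0 - (x - 8.0) ** 2)

def hill_climb(x, step=0.1, iters=20000):
    # Purely local search: accept a candidate move only if it improves fitness.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
x = hill_climb(1.5)  # starts near the minor bump
print(f"converged near x = {x:.2f}, fitness = {fitness(x):.2f}")
# The climber tops out on the x=2 bump (fitness 1.0) and never reaches the
# x=8 peak (fitness 3.0): every small step into the dead zone is rejected.
```

Purely local improvement converges on the minor bump and can never cross the flat dead zone to the higher peak; escaping takes a qualitatively different move -- which is the "rearchitecting" in question.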

Comment by NxGenSentience on Books on consciousness? · 2014-09-26T01:34:42.266Z · LW · GW

I love this question. As it happens, I wrote my honors thesis on the mind-body problem (while I was a philosophy and math double-major at UC Berkeley), and have been passionately interested in consciousness, brains, and also AI ever since (a couple of decades).

I will try to be self-disciplined and remain as agnostic as I can -- by not steering you only toward the people I think are more right (or "less wrong"). Also, I will resist the tendency to write ten-thousand-word answers to questions like this (which in any case would still barely scratch the surface of the body of material and the spectrum of theory and informed opinion).

I have skimmed the answers already given, and I think the ones I have read on this page are very good, and also as intellectually honest and agnostic as one would expect of the high-caliber folks on this site.

Perhaps I should just give a somewhat "meta-data" answer to your question, and maybe I will add something specific later on, after I have a chance to look up some links and bookmarks I have in mind (which are distributed among several laptops, cloud drives, desktop machines, my smartphone, and my iPad, plus the stacks of research-paper hardcopies I have all over my living space).

The "meta-data" -- or strategic and supportive advice -- would include the following.

1) Congratulations on your interest in the most fascinating, central, interdisciplinary, intellectually rich and fertile, and copiously addressed scientific, philosophical, and human-nature question of all.

2) Be aware that you are jumping into a very, very big intellectual ocean. You could fill a decent-sized library with books and journals, or a terabyte hard drive with electronic copies of the same sources, and the question is now more popular than ever, in more disciplines than would formerly take it up. (As an example of the latter, hard-core neurologists -- clinical and research -- and bench-level working lab neurobiologists are now routinely publishing some amazing papers seeking to pin down, theorize about, or otherwise shed light on "the issue of consciousness.")

3) Give yourself a year (or 10) -- but it will be an enjoyable year (or 10) -- to read widely, think hard, and keep looking around at new theories, authors, and papers. I think it is fair to say that no one has "the answer" yet, but there are excellent and amazingly imaginative proposed answers, and some of them are likely to be significantly close to being at least on the right track. After a year or more, you will begin to develop a sense of the kinds of answers that have more or less merit, as your intuitions sharpen and you build up new layers of understanding.

4) Be intellectually "mobile." Look everywhere... Amazon, the journals, PubMed, the Internet Encyclopedia of Philosophy, the Stanford Encyclopedia of Philosophy (just Google them; they have great summaries), and various cognitive-science sub-collections.

The good news is that nearly everything you need to conduct any level of research is online for free -- in case you don't have a fortune to spend on books.

Lastly, as it happens -- something for down the road a couple of months -- I am in the process of setting up a couple of YouTube channels, which will have mini-courses of lectures on certain special application areas, like AI, as well as general introductions to the mind-body problem and its different guises. It will take me a couple of months to go live with the videos, but they should be helpful as well. I intend to have something for all levels of expertise. But that is in the future. (Not a commercial announcement at all... it will be a free and open presentation of ideas -- a vlog, but done a bit more rigorously.)

It is my view that most introductory, and some sophisticated, aspects of the "mind-body problem" -- at least why there is one, what forms it takes, and which different, unavoidable lines of thought land us there -- can be explained by a good tutor to any intelligent layperson. (I think there is room to improve on how the problem is posed and its ins and outs explained, relative to the way many philosophy and cognitive-science instructors do it, which is why I will be creating the video sequences.)

But, in general, you are in for quite an adventure. Keep reading, keep Googling. The resources available are almost boundless, and growing rapidly.

We are in the best time so far, in all of human history, for someone to be interested in this question. And it touches on almost every branch of human knowledge or thought, in some way… from ethics, to interpretations of quantum mechanics.

Maybe you, or one of us in here, will be the “clerk working in a patent office” that connects the right combination of puzzle pieces, and adds a crucial insight, that dramatically advances our understanding of consciousness, in a definitive way.

Enjoy the voyage…

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-24T19:36:11.761Z · LW · GW

Yes, many. Go to PubMed and start drilling around; make up some search combinations and you will immediately get onto lots of interesting research tracks. Cognitive neurobiology, systems neurobiology -- the many areas and journals you'll run across will keep you busy. There is some really terrific, amazing work. Enjoy.

Comment by NxGenSentience on Superintelligence Reading Group 2: Forecasting AI · 2014-09-24T02:45:57.057Z · LW · GW

> I'd also point out that any forecast that relies on our current best guesses about the nature of general intelligence strikes me as very unlikely to be usefully accurate -- we have a very weak sense of how things will play out, how the specific technologies involved will relate to each other, and (more likely than not) even what they are.

It seems that many tend to agree with you: on page 9 of the Müller-Bostrom survey, I see that 32.5% of respondents chose "Other method(s) currently completely unknown."

We do have to get what data we can, of course, as SteveG says, but (and I will qualify this in a moment), depending on what one really means by AI or AGI, it could be argued that we are in the position of physics at the dawn of the 20th century, vis-à-vis the old "little solar system" theory of the atom and Maxwell's equations, which were logically incompatible.

It was known that we did not yet understand something important -- very important. But how does one predict how long it will take to discover the fundamental conceptual revolution (quantum mechanics, in this case) that opens the door to the next phase of applications, engineering, or just "understanding"?

Now to that "qualification" I mentioned: some people of course don't really think we lack any fundamental conceptual understanding or need a conceptual revolution-level breakthrough, i.e. in your phrase '...best guesses about the nature of general intelligence' they think they have the idea down.

Clearly, the degree of interest and faith that people put in "getting more rigor" as a way of gaining more certainty about a time window depends on which "theory of AI," if any, they already subscribe to -- and, of course, on the definition and criterion of HLAI that their theory of AI would seek to achieve.
For brute-force mechanistic connectionists, getting more rigor by decomposing the problem into components and component industries -- machine vision / object recognition, navigation, natural language processing in a highly dynamic, rapidly context-shifting environment (the static-context, fixed big-data-set case is already solved by Google), and so on -- would of course yield more clues about how close we are.

But if we think that existing approaches lack something fundamental, or that we are after something not yet well enough understood to commit to a scientific architecture for achieving it (for me, that is "real sentience" in addition to just "intelligent behavior" -- what Chalmers called "hard problem" phenomena, in addition to "easy problem" phenomena), how do we get more rigor?

How could we have gotten enough rigor to predict when some clerk in a patent office would completely delineate a needed change in our concepts of space and time, and thus open the door to generations of progress in engineering, cosmology, and so on (special relativity, of course)?

What forecasting questions would have been relevant to ask, and of whom?

That said, we need to get what rigor we can, and use the data we can get, not data we cannot get.

But remain mindful that what counts as "useful" data depends on what one already believes the "solution" to doing AI is going to look like: one's implicit metatheory of AI architecture is a key interpretive yardstick to overlay onto the confidence levels of active researchers.

This point might seem obvious, and it is indeed almost being made, quite a lot -- though not quite sharply enough -- in discussions of some of these studies.

I have to remind myself, occasionally: forecasting across the set of worldwide AI industries is forecasting -- a big undertaking, but not a way of developing HLAI itself. I guess we're not in here to discuss the merits of different approaches, but to statistically classify their differential popularity among those trying to do AI. It helps to stay clear about that.

On the whole, though, I am very satisfied with attempts to highlight the assumptions, methodology and demographics of the study respondents. The level of intellectual honesty is quite high, as is the frequency of reminders and caveats (in varying fashion) that we are dealing with epistemic probability, not actual probability.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-22T18:39:27.482Z · LW · GW

Leplen,

I agree completely with your opening statement that if we, the human designers, understand how to make human-level AI, then it will probably be a very clear and straightforward matter to understand how to make something smarter. An easy example is the obvious bottleneck human intellects have with our limited "working" executive memory.

Our solutions to lots of problems are obviously heavily encumbered by how many things one can keep in mind at "the same time" while seeing the key connections, all in one act of synthesis. We all struggle privately with this... some issues cannot ever be understood by chunking -- top-down, biting off a piece at a time, "grokking" the next piece, and gluing it all together at the end. Some problems resist decomposition into teams of brainstormers for the same reason: some single comprehending point of view seems to be required to see a critical-sized set of factors (which varies by problem, of course).

Hence, we have to rely on getting lots of pieces into long-term memory (maybe by decades of study) and hoping that incubation, and some obscure processes occurring outside consciousness, will eventually bubble up and give us a solution -- the "dream of a snake biting its tail for the benzene ring" sort of thing.

If we could build human-level AGI, of course we could eliminate such bottlenecks, and others we will have come to understand in cracking the design problems. So I agree, and that is actually one of my reasons for wanting to do AI.

So, yes, the artificial human level AI could understand this.

My point was that we can build in physical controls -- monitoring of the AIs. If their key limits were in ASICs, ROMs, etc., and we could monitor them, we would immediately see if they attempted to take over a chip factory in, say, Iceland, and we could physically shut the AIs down or intervene. We can "stop them at the airport."

It doesn't matter if designs are leaked onto the internet, and an AI gets near an internet terminal and looks itself up. I can look MYSELF up on PubMed, but I can't just think my BDNF levels into improving here and there, or my DA-to-5-HT ratio into improving elsewhere.

To strengthen this point about the key distinction between knowing and doing, let me explain why I disagree with your second point, at least with its full force.

In effect, OUR designs are leaked onto the internet, already.

I think the information needed for us to self-modify our wetware is within reach. Good neuroscientists -- or even people like me, a very smart amateur (and there are much more knowledgeable cognitive-neurobiology researchers than myself) -- can nearly tell you, both in principle and in some of the biology, how to do some intelligence amplification by modifying known aspects of our neurobiology.

(I could, especially with help, come up with some detail on a scale of months about changing neuromodulators, neurosteroids, connectivity hotspots, and factors regulating LTP -- one has to step lightly, of course, just as one would if screwing around with telomeres or Hayflick limits. And given a budget, a smart team, and no distractions, I bet that in a year or two a team could do something quite significant about changing the human brain: carefully altering areas of plasticity, selective neurogenesis, etc.)

So, for all practical purposes, we are already like an AI built out of ASICs, which would have to not so much reverse-engineer its design as get access to instrumentality. And again, what about physical security methods? They would work for a while, I am saying. And that would give us a key window in which to gain experience, and to see whether the AIs -- given that they are close enough to being sentient, OR that they have autonomy and some degree of "creativity" -- develop "psychological problems" or tendencies to go rogue. (I am writing an essay on that; it's not as silly as it sounds.)

The point is, as long as the AIs need significant external instrumentality to instantiate a new design, and as long as they can be monitored and physically controlled, we can nearly guarantee ourselves a designed layover at Humanville.

We don't have to put their critical design architecture in flash drives in their heads, so to speak, and further give them a designed ability to reflash their own architecture just by "thinking" about it.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-21T21:38:51.809Z · LW · GW

Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on... all the collateral reading and sources you have to monitor in order to make sure you don't miss something that would be good to add to the list of links and thinking points.

We are still in the get-acquainted, discovery phase, as a group and with the book. I am sure it will get even more interesting as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work.... Tom

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-21T16:11:09.383Z · LW · GW

Not so much from the reading, or even from any specific comments in the forum -- though I learned a lot from the links people were kind enough to provide.

But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human-level AI, and still less mere "intelligence."

Despite the verbal drawing of the distinction between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized by reading between the lines that GOFAI is still alive and well. Maybe it is not the primitive "production system" stuff of the Simon and Newell era, or programs written in LISP or Prolog (both of which I coded in, once upon a time), but there are still a lot of people who don't much care about what I would call "real consciousness," and are still taking a Turing-esque, purely operationalist, essentially logical-positivist approach to "intelligence."

I am passionately pro-AI. But for me, that means I want more than anything to create a real conscious entity that feels, and has ideas, passions, drives, emotions, loyalties, ideals.

Most of even neurology has moved beyond the positivistic "there is only behavior, and we don't talk about consciousness," to actively investigating the function, substrate, neural realization, and evolutionary contribution of consciousness itself -- as opposed to just the evolutionary contribution of non-conscious information processing to organismic success.

Look at Damasio's work, showing that emotion is necessary for the manifestation of full-spectrum cognitive skill.

The thinking-feeling dichotomy is rapidly falling out of the working worldview, and I have been arguing for years that these are fallacious categories we have been using -- for other reasons as well.

This is not to say that nonconscious "intelligent" systems are not here, evolving, and potentially dangerous. Automated program trading on the financial markets is potentially dangerous.

So there is still great utility in being sensitive to possible existential risks from non-conscious intelligent systems.

They need not be willfully malevolent to pose a risk to us.

But as to my original point, I have learned that much of AI is still (more sophisticated) GOFAI, with better hardware and algorithms.

I am pro-AI, as I say, but I want to create "conscious" machines -- in the interesting, natural sense of 'conscious' now admitted by neurology, most of cognitive science, much of theoretical neurobiology, and philosophy of mind -- a sense in which positions like Dennett's "intentional stance," which seek to do away with real sentience and admit only behavior, are now recognized to have been a wasted 30 years.

This realization that operationalism is alive and well in AI is good for me in particular, because I am preparing to create a YouTube channel or two presenting both the history of AI and the parallel intellectual history of philosophy of mind and cognitive science -- showing how the positivistic atmosphere grew out of ontological drift emanating from philosophy of science's delay in digesting the Newtonian-to-quantum ontology change.

Then, ultimately, I'll be laying some fresh groundwork for a series of new ideas I want to present, on how we can advance the goal of artificial sentience, and on how and why this is the only way to make superintelligence that has a chance of being safe, let alone ultimately beneficial and a partner to mankind.

So I have -- indirectly, by a kind of osmosis, rather than from what anyone has said (more from what has not been said, perhaps) -- learned that much of AI is lagging behind neurology, cognitive science, and lots of other fields in the adoption of a head-on attack on the "problem of consciousness."

To me, not only do I want to create conscious machines, but I think solving the mind-body problem in the biological case and doing "my" brand of successful AI are complementary -- so complementary that solving either would probably point the way to solving the other. I have thought that ever since I wrote my undergrad honors thesis.

So that is what I have tentatively introjected so far, albeit indirectly. And it will help me in my YouTube videos (not up yet), which are directed at the AI community and intended to be a helpful resource, especially for those who don't have a clue what kind of intellectual climate made the positivistic "Turing test" an almost inevitable outgrowth.

But the intellectual soil from which it grew is no longer considered valid. (Understanding this requires digesting the lessons of quantum theory in a new and rigorous way, among several other issues.)

But it's time to shed the suffocating influence of the Turing test, and the gravitational drag of the defective intellectual history it inevitably grew out of (along with logical behaviorism, eliminative materialism, etc.). It was all based on a certain understanding of Newtonian physics that has been known to be fundamentally false for over a hundred years.

Some of us are still trying to fit AI into an ontology that never was correct to begin with.

But we know enough, now, to get it right this time. If we methodically go back and root out the bad ideas. We need a little top down thinking, to supplement all the bottom up thinking in engineering.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-21T13:35:14.304Z · LW · GW

It may have been a judgment call by the writer (Bostrom) and editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get the attention of most people -- policymakers, decision makers, basically "the Suits" who run the world?

Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.)

Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to save all future high level earthly life from extinction... nope the price [lost jobs] of saving the planet is obviously too high...")

Want to get non-thinkers to even pick up the book and read the first chapter or two? Talk about money.

If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.


Of course, strictly speaking, what I just said was tangential to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered.

But my point about PR considerations was worth making. Also, Katja or someone did, I think, mention maybe formulating a reading guide for Bostrom's book, in which case any such author of a reading guide might already be thinking about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T19:58:23.107Z · LW · GW

> Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.

One could read that comment at various points on a spectrum of charitableness. I will speak for myself, at the risk of ruffling some feathers -- but we are all here to bounce ideas around, not toe any party lines, right? To me, Watson's win means very little, almost nothing. Expert systems have been around for years, even decades. I experimented with coding one myself, many years ago.

It shows what we already knew: given a large budget, a large team of mission-targeted programmers can hand-craft a mission-specific expert system out of an unlimited pool of hardware resources, to achieve a goal like winning a souped-up game of trivia, laced with puns as well as literal questions.

It was a billion-dollar stunt, IMO, by IBM and related project leaders.

Has it achieved consciousness, self awareness, evidence of compassion, a fear of death, moral intuition?

That would have impressed me -- that we were entering a new era. (And I will try to rigorously argue, over time, that this is exactly what we really need in order to have a fighting chance of producing fAGI.) I think those not blinded by a paradigm that should have died out with logical positivism and behaviorism would admit -- some fraction of them -- that penetrating, intellectually honest analysis accumulates into a conviction that no mechanical decision procedure we design, no matter how spiffy our mathematics (and I was a math major with straight As in my day), can guarantee that an emotionless, compassionless, amoral, non-conscious, mechanically goal-seeking apparatus will not -- inadvertently or advertently -- steamroller right over us.

I will speak more about that as time goes on. But in keeping with my claim yesterday that "intelligence" and "consciousness" are not coextensive in any simple way: "intelligence" and "sentience" can come apart. I think that the autonomous "restraint" we need to turn AGIs into friendly AGIs requires giving them sentience, and creating conditions favorable to their discovering a morality compatible with our own.

Creativity, free will (or autonomy, in language with less philosophical baggage), emotion, a theory of ethics and meta-ethics, and a theory of motivation... we need to make progress on these, the likely basic building blocks of moral, benign, enlightened, beneficent forms of sentience -- as well as progress on the fancy tech needed to implement all this, once we have some idea what we are actually trying to implement.

And that thing we should implement is not, in my opinion, ever-more-sophisticated Watsons, or groups of hundreds or thousands of them, each hand-crafted to achieve a specific function (machine vision, unloading a dishwasher...). Oh, sure, that would work, just like Watson worked. But if we want moral intuition to develop -- a respect for life to develop -- we need to have a more ambitious goal.

And I actually think we can do it. Now is the time. The choice that confronts us, really, is not uAGI vs. fAGI, but dumb GOFAI vs. sentient AI.

Watson: just another expert system. Had someone given me the budget and offered to let me lead a project team to build Watson, I would have declined, because it was clear in advance that it was just a (more nuanced) brute-force, custom-crafted and -tuned expert system. Its success was assured, given a deep wallet.

What did we learn? Maybe some new algorithm optimizations or N-space data-structure topologies were discovered along the way, but nothing fundamental.

I'd have declined to lead the project (not that I would have been asked) because it was uninteresting. There was nothing fundamental to learn, and nothing much was learned, except some nuances of tech of the sort that are always acquired in any big distributed-supercomputing, custom-programming project.

We'll learn as much making the next gen weather simulator.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T19:21:34.311Z · LW · GW

> An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness?

I had a not unrelated thought as I read Bostrom in chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville?

The idea that we cannot make human-level AGI without automatically opening Pandora's box to superintelligence, "without even slowing down at the Humanville station," was suddenly not so obvious to me.

I asked myself after reading this, trying to pin down something I could post: "Why don't humans automatically become superintelligent, by just resetting our own programming to help ourselves do so?"

The answer is, we can't. Why? For one, our brains are, in essence, composed of something analogous to ASICs -- neurons with certain physical design limits -- and our "software," modestly modifiable as it is, is instantiated in our neural circuitry.

Why can't we build the first generation of AGIs out of ASICs, omit WiFi and Bluetooth, and allow no Ethernet jacks on the exterior of the chassis? Tamper-interlock mechanisms could be installed, and we could give the AIs one-way (outgoing) telemetry, inaccessible to their "voluntary" processes, the way someone wearing a pacemaker might have outgoing medical telemetry modules installed that are outside of his or her "conscious" control.

Even if we do give them a measure of autonomy -- which is desirable, and perhaps even necessary, if we want them to be general problem solvers and to be creative and adaptable to unforeseen circumstances for which we have not preinstalled decision trees -- we need not give them the ability to just "think" their code (it being substantially frozen in the ASICs) into a different form.

What am I missing? Until we solve the Friendly aspect of AGIs, why not build them with such engineered limits?

Evolution has not, so far, seen fit to give us that instant, large-scale self-modifiability. We have to modify our 'software' the slow way (learning and remembering, at our snail's pace).

Slow is good -- at least it was for us, up until now, when our speed of learning has become a big handicap relative to environmental demands. It has made the species more robust to quick, dangerous changes.

We can even build a degree of "existential pressure" into the AIs: a power cell that must be replaced at intervals, with the replacement power cells kept under old-fashioned physical security constraints, so that the AIs, if they have been given a drive to continue "living," will have an incentive not to go rogue.

Given no radio communications, they would have to communicate much as we do. Assuming we make them mobile and humanoid, the same goes.

We could still give them many physical advantages making them economically viable: maintenance-free operation (except for power-cell changes), no need to sleep or eat, no getting sick... and with sealed, non-radio-equipped, tamper-isolated "brains," they'd have no way to secretly band together to build something else without our noticing.

We can even give them GPS units that are not autonomously accessible by the rest of their electronics, so we can monitor them, see if they congregate, etc.

What am I missing about why early models can't be constructed in something like this fashion, until we get more experience with them?

The idea of existential pressure, again, is to be able to give them a degree of (monitored) autonomy and independence, yet expect them to still constrain their behavior, just the way we do. (If we go rogue in society, we don't eat.)

(I am clearly glossing over volumes of issues about motivation, "volition," value judgments, and all that, about which I have a developing set of ideas, but cannot put it all down here in one post.)

The main point, though, is: how come the AGI train cannot be made to stop at Humanville?

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T14:41:03.311Z · LW · GW

This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA) based on language, and was going to post it up top on the outer layer of LW.

I recall many years ago, there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of human hand ergonomics in executing movements for the most frequently seen combinations of letters (probably was limited to English, given American parochialism of those days, but still, some language has to be chosen.)

Because of the entrenched base of QWERTY typists, the idea didn't get off the ground. (Thus, we are penalizing countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of total [current and future] keyboard users.)

It got me to thinking at the time, though, about whether a suitably designed human language would "open up" more of the brain's inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even a modified grammar.

With respect to IA, might we get a freebie just out of redesigning -- designing from scratch -- a language that was more powerful, that communicated on average what, say, English or French communicates, yet with fewer phonemes per concept?

Might we get an average 5- or 10-point equivalent IQ boost by designing a language that is both physically faster (fewer "wait states" while we are listening to a speaker) and has larger conceptual bandwidth? (A crude way to frame the arithmetic is sketched below.)
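
(A back-of-the-envelope sketch of the "conceptual bandwidth" arithmetic. The inventory sizes and speaking rates below are illustrative guesses I am inventing, not measured data about any language.)

```python
import math

# Back-of-envelope speech information rate: (symbols/sec) x (bits/symbol).
def bits_per_second(inventory_size, symbols_per_second):
    # Upper bound: treats symbols as equiprobable and independent, so each
    # carries log2(inventory_size) bits; real speech carries less than this.
    return symbols_per_second * math.log2(inventory_size)

print(f"baseline  : {bits_per_second(40, 12):.0f} bits/s")  # ~40 phonemes
print(f"redesigned: {bits_per_second(64, 12):.0f} bits/s")  # larger inventory
```

On these made-up numbers, enlarging the symbol inventory at the same speaking rate buys roughly a 13% increase in the upper-bound information rate; whether any of that would cash out as usable cognitive gain is exactly the open question.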

We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs here for unwitnessed speech, where we would have to revert to the new spoken language on its own (still gaining the postulated dividend from that).

However, for certain kinds of communication, we all know that nonverbal communication already accounts for a large share of the total communicated meaning and information. We already have to "drop back" in bandwidth every time we communicate like this (print, exclusively). In scientific and philosophical writing it doesn't make much difference, fortunately, but still, a new language might be helpful.

Language, like many things that evolve on their own (the biological evolution of organisms, for example), is a bunch of add-ons, and the result is not necessarily the best that could be done.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T12:12:17.391Z · LW · GW

I have dozens, some of them so good I have actually printed hardcopies of the PDFs -- sometimes misplacing the DOIs in the process.

I will get some, though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. I have a particularly interesting one (74 pages, but it's a page-turner) whose original computer record I will try to find. I found it, and most of the others, on PubMed.

If we are in a different thread string in a couple days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T12:03:26.561Z · LW · GW

Asr,

Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.

If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.

A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.

I also worry about the legacy problem... all the critical documents encrypted with RSA, PGP, etc., sitting on hard drives, servers, and CD-ROMs, that would suddenly be visible to anyone with access to the tech. How do we go about re-encoding all those "eyes only" critical docs into a post-quantum coding system (assuming one is shown practical and reliable) without those documents being "looked at," or opportunistically copied, in their limbo state between old and new encrypted status?

Whom can we trust to do all this conversion, even once the new algorithms are developed?

This is actually almost intractably messy, at first glance.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T11:51:13.596Z · LW · GW

Luke,

Thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-20T11:44:27.700Z · LW · GW

Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and, correspondingly, not definitive (therefore, I'd consider it journalistically honest) about the time frame. "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates.

I don't want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.

But I still say I have found a significant percentage of articles -- in those Nature news-summary sites, on PubMed (oddly, lots of "physical sciences" journals are on there now too), and in "smart layman" publications like New Scientist and the SciAm news site -- that continue to carry mini-stories about groups nibbling away at the decoherence problem and finding approaches that don't require supercooled, exotic vacuum chambers (some even working toward chips).

If 10 percent of these stories have legs and aren't hype, that would mean I have read dozens that might yield prototypes in a 10-20 year window.

The Google-NASA-UCSB joint project seems pretty near-term (i.e., not 40 or 50 years down the road).

Given Google's penchant for quietly working away and then doing something amazing that the world thought was a generation away -- like unveiling the driverless cars that the governor and legislature of Michigan (as in, of course, Detroit) are in the process of licensing for larger-scale production and deployment -- it wouldn't surprise me if one popped up in 15 years that could begin doing useful work.

Then it's just daisy-chaining, and parallelizing with classical supercomputers doing error correction, pre-forming datasets to exploit what QCs do best, and interleaving that with conventional techniques.

I don't think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.

I am more interested in this: positing that we add them to our toolkit, what can we do that is relevant to creating "interesting" forms of AI?

Thanks for your link to the NYT article.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-19T21:07:28.901Z · LW · GW

> What do you mean with artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful?

The claim wasn't that artifactual consciousness isn't (likely to be) sufficient for a kind of intelligence, but that the two are not coextensive. It might have been clearer to say that consciousness is close to being a sufficient condition for intelligence, while intelligence (the way computer scientists often use the term) is not at all a sufficient condition for consciousness.

I needn't have restricted the point to artifact-based consciousness, actually. Consider absence seizures (epilepsy) in neurology. A man can seize (lose "consciousness"), get up from his desk, get the car keys, drive to a mini-mart, buy a pack of cigarettes, make polite chat while he gets change from the clerk, drive home (obeying traffic signals), lock up his car, unlock and enter his house, and lie down for a nap, all in the seizure state, and post-ictally recall nothing. (Neurologists are confident these cases withstand all proposals to attribute postictal "amnesia" to memory failure. Indeed, seizures in susceptible patients can be induced, witnessed, EEGed, etc., from start to finish, by neurologists.) Moral: intelligent behavior occurs, consciousness doesn't. Thus, not coextensive. I have other arguments also.

As to your second question, I'll have to defer an answer for now, because it would be copiously long... though I will try to think of a reply. (The idea is very complex and needs a little more polish, but I am convinced of its merit.) I owe you a reply, though, before we're through with this forum.

Comment by NxGenSentience on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-19T17:49:01.971Z · LW · GW

From what I have read in open-source science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you suggested. I wouldn't be surprised to see it as soon as 2024 in prototypical, alpha, or beta testing, and I think it a safe bet by 2034 for wider deployment. As to very widespread adoption, perhaps a bit later; and with respect to efforts by governments to control the tech for security reasons, perhaps later here, earlier there.