Posts

[Proposed Paper] Predicting Machine Super Intelligence 2012-11-20T07:15:05.275Z

Comments

Comment by JaySwartz on Intuitions Aren't Shared That Way · 2012-11-30T19:47:06.717Z · LW · GW

I think a semantic check is in order. Intuition can be defined as an immediate cognition of a thought that is not inferred by a previous cognition of the same thought. This definition allows for prior learning to impact intuition. Trained mathematicians will make intuitive inferences based on their training, these can be called breakthroughs when they are correct. It would be highly improbable for an untrained person to have the same intuition or accurate intuitive thoughts about advanced math.

Intuition can also be defined as untaught, non-inferential, pure knowledge. This would seem to invalidate the example above since the mathematician had a cognition that relied on inferences from prior teachings. Arriving at an agreement on which definition this thread is using will help clarify comments.

Comment by JaySwartz on Intuitions Aren't Shared That Way · 2012-11-30T18:43:16.566Z · LW · GW

More specifically, epistemology is a formal field of philosophy. Epistemologists study the interaction of knowledge with truth and belief. Basically, what we know and how we know it. They work to identify the source and scope of knowledge. An epistemological statement example goes something like this; I know I know how to program because professors who teach programming, authoritative figures, told me so by giving me passing grades in their classes.

Comment by JaySwartz on Wanting to Want · 2012-11-29T15:52:23.912Z · LW · GW

Quite right about attachment. It may take quite a few exceptions before it is no longer an exception. Particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but no so much as to offset higher levels of reinforcement.

Comment by JaySwartz on Wanting to Want · 2012-11-28T21:33:44.239Z · LW · GW

While the Freudian description is accurate relative to sources, I struggle to order them. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.

Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on our actions. When we discover fire burns our skin, we don't need to repeat the experience very often to weigh fire heavily as something we don't want touching our skin.

If we constantly hear, "blonde people are dumb," each repetition increases the weight of this concept. Upon encountering an intelligent blond named Sandy, the weighting of the concept is decreased and we create a new pattern for "Sandy is intelligent" that attaches to "Sandy is a person" and "Sandy is blonde." If we encounter Sandy frequently, or observe many intelligent blonde people, the weighting of the "blonde people are dumb" concept is continually reduced.

Coincidentally, I believe this is the motivation behind why religious leaders urge their followers to attend services regularly, even if subconsciously. The service maintains or increases weighting on the set of religious concepts, as well as related concepts such as peer pressure, offsetting any weighting loss between services. The depth of conviction to a religion can potentially be correlated with frequency of religious events. But I digress.

Eventually, the impact of the concept "blonde people are dumb" on decisions becomes insignificant. During this time, each encounter strengthens the Sandy pattern or creates new patterns for blondes. At some level of weighting for the "intelligent" and "blonde" concepts associated to people, our brain economizes by creating a "blond people are intelligent" concept. Variations of this basic model is generally how beliefs are created and the weights of beliefs are adjusted.

As with fire, we are extremely averse to incongruity. We have a fundamental drive to integrate our experiences into a cohesive continuum. Something akin to adrenaline is released when we encounter incongruity, driving us to find a way to resolve the conflicting concepts. If we can't find a factual explanation, we rationalize one in order to return to balanced thoughts.

When we make a choice of something over other things, we begin to consider the most heavily weighted concepts that are invoked based on the given situation. We work down the weighting until we reach a point where a single concept outweighs all other competing concepts by an acceptable amount.

In some situations, we don't have to make many comparisons due to the invocation of very heavily weighted concepts, such as when a car is speeding towards us while we're standing in the roadway. In other situations, we make numerous comparisons that yield no clear dominant concept and can only make a decision after expanding our choice of concepts.

This model is consistent with human behavior. It helps to explain why people do what they do. It is important to realize that this model applies no division of concepts into classes. It uses a fluid ordering system. It has transient terminal goals based on perceived situational considerations. Most importantly, it bounds the recursion requirements. As the situation changes, the set of applicable concepts to consider changes, resetting the core algorithm.

Comment by JaySwartz on [Proposed Paper] Predicting Machine Super Intelligence · 2012-11-28T19:51:45.835Z · LW · GW

Kaj,

Thank you. I had noticed that as well. It seems the LW group is focused on a much longer time horizon.

Comment by JaySwartz on A definition of wireheading · 2012-11-28T04:13:07.995Z · LW · GW

In every human endeavor, humans will shape their reality, either physically or mentally. They go to schools where their type of people go and live in neighborhoods where they feel comfortable based on a variety of commonalities. When their circumstances change, either for the better or the worse, they readjust their environment to fit with their new circumstances.

The human condition is inherently vulnerable to wireheading. A brief review of history is rich with examples of people attaining power and money who subsequently change their values to suit their own desires. The more influential and wealthy they become, enabling them to exist unfettered, the more they change their value system.

There are also people who simply isolate themselves and become increasingly absorbed in their own value system. Some amount of money is needed to do this, but not a great amount. The human brain is also very good at compartmentalizing value sets such that they can operate by two (or more) radically different value systems.

The challenge in AI is to create an intelligence that is not like ours and not prone to human weaknesses. We should not attempt to replicate human thinking, we need to build something better. Our direction should be to create an intelligence that includes the desirable components and leaves out the undesirable aspects.

Comment by JaySwartz on Raising the waterline · 2012-11-28T02:32:15.583Z · LW · GW

Well, I'm a sailor and raising the waterline is a bad thing. You're underwater when the waterline gets too high.

Comment by JaySwartz on [Proposed Paper] Predicting Machine Super Intelligence · 2012-11-23T18:02:42.002Z · LW · GW

Thanks for the feedback. I agree on the titling; I started with the title on the desired papers list, so wanted some connection with that. I wasn't sure if there was some distinction I was missing, so proceeded with this approach.

I know it is controversial to say super intelligence will appear quickly. Here again, I wanted some tie to the title. It is a very complex problem to predict AI. To theorize about anything beyond that would distract from the core of the paper.

While even more controversial, my belief is that the first AGI will be a super intelligence in its own right. An AGI will have not have one pair of eyes, but as many as it needs. It will not have just one set of ears, it will immediately be able to listen to many things at once. The most significant aspect is an AGI will immediately be able to hold thousands of concepts in the equivalent of our short term memory, as opposed to the typical 7 or so for humans. This alone will enable it to comprehend immensely complex problems.

Clearly, we don't know how AGI will be implemented or if this type of limit can be imposed on the architecture. I believe an AGI will draw its primary power from data access and logic (i.e., the concurrent concept slots). Bounding an AGI to an approximation of human reasoning is an important step.

This is a major aspect of friendly AI because one of the likely ways to ensure a safe AI is to find a means to purposely limit the number of concurrent concept slots to 7. Refining an AGI of this power into something friendly to humans could be possible before the limit is removed, by us or it.

I just wanted to express some thoughts here. I do not intent to cover this in the paper as it is a topic for several focused papers to explore.

Comment by JaySwartz on Wanting to Want · 2012-11-22T09:47:29.313Z · LW · GW

I struggle with conceiving wanting to want, or decision making in general, as a tiered model. There are a great many factors that modify the ordering and intensity of utility functions. When human neurons fire they trigger multiple concurrent paths leading to a set of utility functions. Not all of the utilities are logic-related.

I posit that our ability to process and reason is due to this pattern ability and any model that will approximate human intelligence will need to be more complex than a simple linear layer model. The balance of numerous interactive utilities combine to inform decision making. A multiobjective optimization model, such as PIBEA, is required.

I'm new to LW, so I can't open threads just yet. I'm hoping to find some discussions around evolutionary models and solution sets relative to rational decision processing.

Comment by JaySwartz on Circular Altruism · 2012-11-22T00:45:58.121Z · LW · GW

Granted. My point is the function needs to comprehend these factors to come to a more informed decision. Simply doing a compare of two values is inadequate. Some shading and weighting of the values is required, however subjective that may be. Devising a method to assess the amount of subjectivity would be an interesting discussion. Considering the composition of the value is the enlightening bit.

I also posit that a suite of algorithms should be comprehended with some trigger function in the overall algorithm. One of our skills is to change modes to suit a given situation. How sub-utilities impact the value(s) served up to the overall utility will vary with situational inputs.

The overall utility function needs to work with a collection of values and project each value combination forward in time, and/or back through history, to determine the best selection. The nature of the complexity of the process demands using more sophisticated means. Holding a discussion at the current level feels to me to be similar to discussing multiplication when faced with a calculus problem.

Comment by JaySwartz on Circular Altruism · 2012-11-22T00:02:09.726Z · LW · GW

For a site promoting rationality this entire thread is amazing for a variety of reasons (can you tell I'm new here?). The basic question is irrational. The decision for one situation over another is influenced by a large number of interconnected utilities.

A person, or an AI, does not come to a decision based on a single utility measure. The decision process draws on numerous utilities, many of which we do not yet know. Just a few utilities are morality, urgency, effort, acceptance, impact, area of impact and value.

Complicating all of this is the overlay of life experience that attaches a function of magnification to each utility impact decision. There are 7 billion, and growing, unique overlays in the world. These overlays can include unique personal, societal or other utilities and have dramatic impact on many of the core utilities as well.

While you can certainly assign some value to each choice, due to the above it will be a unique subjective value. The breadth of values do cluster in societal and common life experience utilities enabling some degree of segmentation. This enables generally acceptable decisions. The separation of the value spaces for many utilities preclude a single, unified decision. For example, a faith utility will have radically different value spaces for Christians and Buddhists. The optimum answer can be very different when the choices include faith utility considerations.

Also, the circular example of driving around the Bay Area is illogical from a variety of perspectives. The utility of each stop is ignored. The movement of the driver around the circle does not correlate to the premise that altruistic actions of an individual are circular.

For discussions to have utility value relative to rationality, it seems appropriate to use more advanced mathematics concepts. Examining the vagaries created when decisions include competing utility values or are near edges of utility spaces are where we will expand our thinking.

Comment by JaySwartz on GiveWell and the Centre for Effective Altruism are recruiting · 2012-11-21T02:51:09.429Z · LW · GW

I am disappointed that my realistic and fact based observation generated a down vote.

At the risk of an additional down vote, but in the interest of transparent honest exchange, I am pointing out a verifiable fact, however unsavory it may be interpreted.

If over time the time cost of intermediaries (additional handling and overhead costs) remains below the cost of the steps to eliminate intermediaries (the investment required to establish a 501(c)(3)) then I stand corrected. While an improbable situation, it could well be possible.

Comment by JaySwartz on How minimal is our intelligence? · 2012-11-21T02:34:52.506Z · LW · GW

200k years ago when Homo Sapiens first appeared, fundamental adaptability was the dominant force. The most adaptable, not the most intelligent, survived. While adaptability is a component of intelligence, intelligence is not a component of adaptability. The coincidence with the start of the ice age is consistent with this. The ice age is a relatively minor extinction event, but none the less the appearance and survival of Homo Sapiens is consistent, where less adaptable life forms did not survive.

Across the Hominidae family Homo Sapiens proved to be most adaptable. During the ice age the likely focus was simply to survive. When a temperate climate returned there are some who believe Homo Sapiens, much as future Aztecs and others, began to systematically eliminate their competition.

Concurrently, another phenomenon was occurring. Homo Sapiens was learning and steadily increasing their understanding of the world. While there is not evidence that has survived the years, it would be reasonable to posit that learning continued in much the same fashion as today; new knowledge building on established knowledge. Being less organized than later situations it would progress more slowly.

Our improved knowledge likely increased our survival rates through the second ice age. When temperate climates returned, the stage was set for the advancement of mankind to organized farming, written language and Ur.

Somewhere in this time frame, intelligence began to overtake adaptability as the dominant force. This also marked the shift from evolutionary pressure to societal pressure as the underlying force behind advancement and survivability. The random nature of evolutionary advances gave way to a more complex society-driven selection process.

It's also important to draw a subtle distinction. The advances were not a function of increase in general IQ. They were a function of integration of the concepts envisioned by a subset of high IQ individuals into society; i.e., a societal variant of evolutionary adaptability.

Comment by JaySwartz on Logical Pinpointing · 2012-11-21T00:01:08.464Z · LW · GW

I'm new here, so watch your toes...

As has been mentioned or alluded to, the underlying premise may well be flawed. By considerable extrapolation, I infer that the unstated intent is to find a reliable method for comprehending mathematics, starting with natural numbers, such that an algorithm can be created that consistently arrives at the most rational answer, or set of answers, to any problem.

Everyone reading this has had more than a little training in mathematics. Permit me to digress to ensure everyone recalls a few facts that may not be sufficiently appreciated. Our general education is the only substantive difference between Homo Sapiens today and Homo Sapiens 200,000 years ago.

With each generation the early education of our offspring includes increasingly sophisticated concepts. These are internalized as reliable, even if the underlying reasons have been treated very lightly or not at all. Our ability to use and record abstract symbols appeared at about the same time as farming. The concept that "1" stood for a single object and "2" represented the concept of two objects was establish along with a host of other conceptual constructs. Through the ensuing millennia we now have an advanced symbology that enables us to contemplate very complex problems.

The digression is to point out that very complex concepts, such as human logic, require a complex symbology. I struggle with understanding how contemplating a simple artificially constrained problem about natural numbers helps me to understand how to think rationally or advance the state of the art. The example and human rationality are two very different classes of problem. Hopefully someone can enlighten me.

There are some very interesting base alternatives that seem to me to be better suited to a discussion of human rationality. Examining the shape of the Pareto front generated by PIBEA (Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems) runs for various real-world variables would facilitate discussions around how each of us weights each variable and what conditional variables change the weight (e.g., urgency).

Again, I intend no offense. I am seeking understanding. Bear in mind that my background is in application of advanced algorithms in real-world situations.

Comment by JaySwartz on [Proposed Paper] Predicting Machine Super Intelligence · 2012-11-20T21:27:24.741Z · LW · GW

Joshua,

Thank you for the feedback.

I do need to increase the emphasis on the focus, which is the first premise you mentioned. I did not do that in this draft with the intent of eliciting feedback on the viability and interest in the model concept.

I will use formal techniques, which one(s) I have not yet settled on. At the moment, I am leaning to the processes around use case development to decompose current AI models into the componentry. For the weighting and gap calculations some statistical methods should help.

I am mulling over Bill Hibbard's 2012 AGI papers, "Avoiding Unintended AI Behaviors" and "Decision Support for Safe AI Design" http://www.ssec.wisc.edu/~billh/g/mi.html as well as some PIBEA findings, e.g., http://www.cs.umb.edu/~jxs/pub/cec11-prospect.pdf to use as a framework for the component model. The Pareto front element is particularly interesting when considered with graph theory.

I am considering how the rate modifiers can be incorporated into the predictive model. This will help to identify what events for the community to look for and how a rate modifier occurrence in one area, e.g., pattern recognition, impacts other aspects of the model. We clearly do not know all of the components, but we do know the major disciplines that will contribute. As noted, the model will be extensible to allow discoveries to be incorporated, increasing the accuracy.

The general idea is to establish a predictive model with assumed margins of error and functionality. To put a formalized "stick in the ground" from which improvements are made. If maintained and enhanced with discoveries the margin of error will continue to decline and confidence levels will increase. Such a model also provides context for research and identifies potential areas of study.

One potential aspect of the model is to identify aspects of research that may be obviated. If a requirement is consistently satisfied through unexpected methods, it can be removed from consideration in the area where it was originally conceived. This also has the potential to provide insights to the original space.

Comment by JaySwartz on What does the world look like, the day before FAI efforts succeed? · 2012-11-20T01:49:05.181Z · LW · GW

The sheer volume of the scenarios diminishes the likelihood of any one of them. The numerous variations indicate an intractable predictability. While subject to conjunction bias, a more granular approach is the only feasible method to determine even a hint of the pre-FAI environment. Only a progressively refined model can provide information of value.

Comment by JaySwartz on GiveWell and the Centre for Effective Altruism are recruiting · 2012-11-20T01:37:05.023Z · LW · GW

As I noted on the 80,000 Hours thread, intermediaries are nearly always an added expense on the distribution side. In this case, distribution of donations. The immediate impact is that fewer donation dollars (or whatever currency) find their way to the target organizations. The exception is if an intermediate organization facilitates a 100% pass-through, due to other funding or altruistic efforts.

Comment by JaySwartz on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-20T01:28:19.765Z · LW · GW

I am compelled to point to a fundamental supply chain issue; intermediary drag. Simply stated, the greater the number of steps, the greater the overhead expense. While aggregators have some advantage on the purchasing side, they are an added expense on the distribution side in the vast majority of cases. If they enable some form of extended access, intermediaries may have a value, but the limited nature of charitable donations would make intermediaries an unlikely advantage.

Comment by JaySwartz on Welcome to Less Wrong! (July 2012) · 2012-11-19T23:12:52.305Z · LW · GW

Hello,

I am Jay Swartz, no relation to Aaron. I have arrived here via the Singularity Institute and interactions with Louie Helm and Malo Bourgon. Look me up on Quora to read some of my posts and get some insight to my approach to the world. I live near Boulder, Colorado and have recently started a MeetUp; The Singularity Salon, so look me up if you're ever in the area.

I have an extensive background in high tech, roughly split between Software Development/IT and Marketing. In both disciplines I have spent innumerable hours researching human behavior and thought processes in order to gain insights into how to create user interfaces and how to describe technology in concise ways to help people to evaluate the merits of the technology. I've spent time at Apple, Sun, Seagate, Mensa, Osborne and a few start-ups applying my ever-deepening understanding of the human condition.

Over the years, I have watched synthetic intelligence (I much prefer the more precise SI over AI) grow in fits and starts. I am increasing my focus in this area because I believe we are on the cusp of general SI (GSI). There is a good possibility that within my life time I will witness the convergence of technology that leads to the appearance of GSI. This will in part be facilitated by advances in medicine that will extend my lifespan well past 100 years.

I am currently building my first SI web crawler that will begin building a corpus to be mined by some SciPy applications I have on my list of things to do. These efforts will provide me with technical insights on the SI challenge. There is even the possibility, however slight, that they can be matured to make a contribution to the creation of SI.

Finally, I am working on a potential paper for the Singularity Institute. I just posted a first outline/draft, Predicting Machine Super Intelligence, but do not yet know the details on how anyone finds it or how I see any responses. Having been on more than a few sites similar to this, I know I will be able to quickly sort thing out.

I am looking forward to reading and exchanging ideas here. I will strive to contribute as much as I receive.

Jay