Posts

How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it 2011-07-31T18:27:33.014Z

Comments

Comment by BenRayfield on How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it · 2011-08-02T04:23:05.162Z · LW · GW

Yes, there is a strong collective mind made of communication through words, but it's a very self-deceptive mind. It tries to redefine common words, and thereby redefine ideas, that other parts of the mind do not intend to redefine, and those parts of the mind later find their memory has been corrupted. That's why people start expecting to pay money when they agree to get something "free". Intuition is much more honest: it's based on floating-point values at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people's brains, Human AI Net will bypass the conscious level and reach the core of the problems that lead to conscious disagreements. Words are a corrupted interface, so any AI built on them will have errors.

To the LessWrong and Singularity communities, I offered an invitation to influence this plan for a singularity by designing its details. Downvoting an invitation will not cancel the event, but if you can convince me that my plan may result in UnFriendly AI, then I will cancel it. Since I have considered many possibilities, I do not expect such a reason exists. Would your time be better spent calculating the last digit of the friendliness probability over all of mind space, or working to fix any problems you see in a singularity plan that is already in progress and will finish before yours?

Comment by BenRayfield on How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it · 2011-08-01T00:39:46.513Z · LW · GW

My plan will have approximately the same effect as connecting many people's temporal lobes (where audio first enters the brain) to other people's temporal lobes, the same way brain regions normally wire to or disconnect from one another, forming a bigger mind. The massively multiplayer audio game exists to make those connections.

It's like tunneling a network program through SSL, but much more complex, because it tunnels statistical thoughts through mouse movements, audio software, ears, brain, mouse movements again (and keeps looping), then over the internet through audio software, ears, brain, mouse movements, audio software, ears, brain (and keeps looping), and back along the same path to the first person and many other people. A toy sketch of one link in that loop is shown below.
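As a rough illustration only, here is a minimal sketch of one link in that loop, assuming nothing about the real game or protocol: a cursor position mapped to a short sine tone and written to a WAV file. The parameter ranges, mapping, and file name are invented for illustration.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def cursor_to_tone(x, y, seconds=0.25):
    """Map a cursor position in [0, 1] x [0, 1] to a short sine tone.

    x controls pitch (200-2000 Hz), y controls loudness.
    These ranges are arbitrary choices for the sketch.
    """
    freq = 200.0 + 1800.0 * x
    amp = 0.1 + 0.8 * y
    n = int(SAMPLE_RATE * seconds)
    return [amp * math.sin(2.0 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(n)]

def write_wav(samples, path="loop_link.wav"):
    """Write mono 16-bit PCM samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples)
        w.writeframes(frames)

if __name__ == "__main__":
    # Pretend the player's cursor was observed at (0.3, 0.7).
    write_wav(cursor_to_tone(0.3, 0.7))
```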

If we all shared neural connections, that would be closer to CEV than the volition of any one person or group. Since it is an overall increase in the coherence of volition on Earth, it is purely a move toward CEV and away from UnFriendly AI.

It is better to increase the coherence of volition of a-bunch-of-guys-on-the-internet than not to. Or do you want everyone to continue disagreeing with each other by approximately equal amounts until most of those disagreements can be solved all at once with the normal kind of CEV?

Comment by BenRayfield on How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it · 2011-07-31T20:22:29.065Z · LW · GW

Networking Human minds together subconsciously (through feedback loops of mouse movements and generated audio) either (1) doesn't work, or (2) causes people to think some amount x more like one mind, and is therefore x amount of progress toward Coherent Extrapolated Volition. The design is not nearly smart enough to create superhuman intelligence without being a CEV, so there is no danger of UnFriendly AI.

Comment by BenRayfield on How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it · 2011-07-31T19:03:37.548Z · LW · GW

It's not a troll. It's a very confusing subject, and I don't know how to explain it better unless you ask specific questions.

Comment by BenRayfield on With whom shall I diavlog? · 2011-05-30T18:56:11.486Z · LW · GW

When he says "intelligent design", he is not referring to the common theory that there is some god that is not subject to the laws of physics which created physics and everything in the universe. He says reality created itself as a logical consequence of having to be a closure. I don't agree with everything he says, but based only on the logical steps that lead up to that, him and Yudkowsky should have interesting things to talk about. Both are committed to obey logic and get rid of their assumptions, so there should be no unresolvable conflicts, but I expect lots of conflicts to start with.

Comment by BenRayfield on With whom shall I diavlog? · 2011-05-30T04:51:31.752Z · LW · GW

I suggest Christopher Michael Langan, as roland said. His "Cognitive-Theoretic Model of the Universe (CTMU)" (download it at http://ctmu.org) is very logical and conflicts in interesting ways with how Yudkowsky thinks of the universe at the most abstract level. Langan derives the need for an emergent unification of "syntax" (like the laws of physics) and "state" (like positions and times of objects), and argues that the universe must be a closure. I think he means the only possible states/syntaxes are very abstractly similar to quines. He proposes a third category, neither deterministic nor random but somewhere in between, that fits into his logical model in subtle ways.

QUOTE: The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.

When he says a system tends toward a "generalized utility function", I think he means something like this: our physics follows geodesics, so the geodesic would be its utility function.

Comment by BenRayfield on Newcomb's Problem standard positions · 2011-01-02T21:24:40.786Z · LW · GW

This is my solution to Newcomb's Paradox.

Causal decision theory is a subset of evidential decision theory. We have much evidence that information flows from past to future. If we observe new evidence that information flows the other direction, or that the world works differently than we think in a way that allows Omega (or anyone else) to repeatedly react to the future before it happens, then we should give more weight to the non-causal parts of decision theory. Depending on what we observe, our thoughts can move gradually between the various types of decision theory, using evidential decision theory as the meta-algorithm that chooses the weighting of the other algorithms.

Observations are all we have. They may show that information flows from past to future, or that Omega predicts accurately, or some combination. In this kind of decision theory, estimate the amount of evidence for each kind of decision theory.

The evidence for causal decision theory is large, but it can be estimated as log base 2 of the number of synapses in a Human brain (10^15) multiplied by Dunbar's number (150) (http://en.wikipedia.org/wiki/Dunbar%27s_number). The evidence may be more, but that product is a limit on how advanced a thing a group of people of any size can learn (without changing how we learn). The result is around 57 bits.

The game played in Newcomb's Paradox has 2 important choices, one-boxing and two-boxing, so I used log base 2. Combining the evidence from all previous games and all other ways Newcomb's Paradox is played: if the evidence that Omega is good at predicting builds up to exceed 57 bits, then in choices related to that, I would be more likely to one-box. If there have been only 56 observations, and in all of them two-boxing lost or one-boxing won, then I would still be more likely to two-box, because there are more observations that information flows from past to future and that Omega doesn't know what I will do. A sketch of this threshold rule is shown below.
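As a check on the arithmetic and a minimal sketch of the threshold rule, assuming (as a simplification I am adding, not part of the original estimate) that each decisive observed game counts as one bit of evidence about Omega:

```python
import math

# Estimated evidence that information flows from past to future:
# log base 2 of (synapses per Human brain * Dunbar's number).
SYNAPSES = 1e15
DUNBAR = 150
causal_bits = math.log2(SYNAPSES * DUNBAR)

def choose_boxes(omega_wins):
    """Decide one-box vs. two-box given how many observed games Omega
    predicted correctly, counting one bit per decisive game."""
    omega_bits = float(omega_wins)
    return "one-box" if omega_bits > causal_bits else "two-box"

print(round(causal_bits, 2))  # 57.06
print(choose_boxes(56))       # two-box: evidence for Omega is still below the threshold
print(choose_boxes(58))       # one-box: evidence for Omega exceeds the threshold
```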

The Newcomb Threshold of 57 bits is only an estimate for one specific Newcomb problem. For each choice, we should reconsider the evidence for the different kinds of decision theory, so we can learn to win Newcomb games more often than we lose.

Comment by BenRayfield on Cached Selves · 2010-11-18T01:28:41.089Z · LW · GW

The cache problem is worst for language because language is usually made entirely of cache. Most words and phrases are understood by example instead of by reading a dictionary or thinking of your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated by making you think about something relevant to the phrase which is not in most people's cache of it: something which, by definition, should be included but for other reasons usually is not.

"Affirmative action" means for certain categories including religion and race, those who tend to be discriminated against are given preference when the choices are approximately equal.

Most people have caches for common races and religions, especially about black people in the USA because of the history of slavery there. A higher quantity of relevant events gets more cache, and more cache makes the phrase harder to define.

Someone who thinks they act in affirmative-action ways for religion would usually redefine "affirmative action" when they sneeze and, instead of hearing "God bless you", they hear "Devil bless you. I hope you don't discriminate against devil worshippers." Usually the definition is updated to end with "except for devil worshippers", and/or an exclusion is added to the cache. Only then might one reconsider previous, incorrect uses of the phrase "affirmative action": the cache did not mean what they thought it meant.

We should distrust all language until we convert it from cache to definitions.

Language usually is not verified and stays as cache. It appears to be low-pressure because no pressure is remembered; it is expected to always stay cache. It is experienced as high-pressure when one chooses a different definition. High pressure is what causes us to reevaluate our beliefs, and with language, reevaluating our beliefs is what creates the high pressure. Neither of those things tends to come first, so usually neither happens. Many things are that way, but it applies to language the most.

An example of changing cache to definition, resulting in high pressure to change back to cache: using the same words for both sides of a war, regardless of which side your country is on, can be the result of defining those words. A common belief is that soldiers should be respected and enemy combatants deserve what they get. Language is full of stateful words like those. If you think in stateful words, then the cost of learning is multiplied by the number of states at each branch in your thinking. If you don't convert cache to definition (so you can verify later caches of the same idea), then such trees of assumptions and contexts go unverified, merge with other such trees, and form a tangled mess of exceptions to every rule which eventually prevents you from defining anything based on those caches. That's why most people think it's impossible to have no contradictions in your mind, which is why they choose to believe new things which they know have unsolvable contradictions.

Comment by BenRayfield on Nonperson Predicates · 2010-09-21T08:42:36.647Z · LW · GW

Humans evolved from the ancestors of Monkeys, therefore there is no sharp line between person and nonperson. There are many ways to measure it, but all correct ways are continuous functions. More generally, the equations of quantum physics are continuous. There is a continuous path from any possible state of the universe to any other possible state of the universe. Therefore, for any 2 possible life forms, there is a continuous path of quantum wavefunctions (states of the universe) between them, which would look like a video morphing continuously between 2 pictures, but morphing between living patterns instead of pictures. For example, there is a continuous path between both possible states (alive and dead in the box) of Schrodinger's Cat, but it's more important that there are an infinite number of continuous paths, not just the path that crosses the point in spacetime where it is decided whether the cat lives or dies. For what I'm explaining here, it does not matter whether all these possibilities exist. It only matters that they can be defined in logic, even if we do not know the definition. To solve hard problems, it's useful to know a solution exists.

Starting from the knowledge that there are definable functions that can approximate continuous measures between any 2 life forms, I will explain a sequence of tasks that starts at something simple enough that we know how to do it, continues as tasks of increasing difficulty, and finally defines a task that calculates a Nonperson Predicate, the subject of this thread. It is very slow and uses a lot of computer memory, but to define it at all is progress.

I am not defining ethics. I am writing a more complex version of "select * from..." in a database language, except this process defines how to select something that's not a person. That is a completely different question from whether it's right or wrong to simulate people and delete the simulations.

The second-to-last step is to define a continuous function that returns 0 for the average Monkey, returns 1 for the average Human, and returns a fraction for anything evolutionarily between them (if such transitional species were still alive to measure), and to define many similar functions that measure between Human and many other things.

All of these functions must return approximately the same number for a simulation as for a simulation of a simulation, to any recursive depth.

A computer can run a physics simulation of another computer which runs a simulation of a life form. Such a recursive simulation is inside quantum physics. Quantum physics equations are continuous and have an infinite number of paths between all possible states of the universe. Therefore continuous functions can be defined that measure between a simulation and a simulation of a simulation. That does not depend on whether it has ever been done. I only need to explain that it can be defined abstractly.

The "continuous function that returns 0 for the average Monkey and returns 1 for the average Human" problem, counting simulations and simulations of simulations equally, are much too hard to solve directly, so start at a similar and extremely simpler problem:

Define a continuous function that returns 0 for the average electron and returns 1 for the average photon, counting simulations of electrons/photons the same as simulations of simulations of electrons/photons.

Just the part about counting a simulation the same as a simulation of a simulation (to any recursive depth) is enough to send most people "screaming and running away from the problem". No need to get into the Human parts. Even the same question about simple particles in physics, which we have well-known equations for, is more than we know how to do. Learn to walk before you learn to run.

Choose many things as training data, including electrons, photons, atoms, molecules, protein folding, tissues, bacteria, plants, insects, and animals, eventually approaching Humans without ever getting there. Calculate continuous functions between pairs of these things, and build a web of functions that approaches a Nonperson Predicate without ever simulating a person. For the last step, extrapolate from Monkey to Human the same way you can use statistical clustering to extrapolate from simpler animals to Monkey. A toy numerical sketch of that interpolate-and-extrapolate step is shown below.
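Here is a minimal numerical sketch of the interpolate-and-extrapolate step, under heavy assumptions that are mine, not the comment's: each organism is reduced to one hand-picked feature, the target scores in [0, 1] are assigned by fiat, and the continuous function is a plain logistic curve fit by gradient descent. It only illustrates fitting a continuous score on non-person training data and then evaluating it near the Human end without ever training on a person.

```python
import math

# Invented feature: log10 of an organism's rough "part count" (cells,
# neurons, whatever proxy you like), with invented continuous target
# scores in [0, 1]. No person appears in the training data.
training = [
    (2.0, 0.05),   # bacterium-scale
    (5.0, 0.20),   # insect-scale
    (8.0, 0.55),   # small-mammal-scale
    (10.5, 0.90),  # monkey-scale
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(data, steps=100000, lr=0.01):
    """Fit score = sigmoid(a * feature + b) by gradient descent on the
    cross-entropy between the target scores and the prediction."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in data:
            err = sigmoid(a * x + b) - y
            ga += err * x
            gb += err
        a -= lr * ga
        b -= lr * gb
    return a, b

a, b = fit(training)
# Evaluate the continuous score near the Human end of the scale
# (feature 11.2, also an invented number) without having trained on it.
print(round(sigmoid(a * 11.2 + b), 3))
```

In the real proposal the measuring functions would come from physics-level simulations and much richer clustering; the sketch only fixes the shape of the computation: fit continuous measures on safe training data, then extrapolate the last step.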

That's how you calculate a Nonperson Predicate without ever simulating a person.

Also, near the last few steps, because of the way it can simulate and predict the brains of animals and the simpler behaviors of people, this algorithm, including details about the clustering and evolution of continuous measuring functions to be figured out later, may converge to a Coherent Extrapolated Volition (CEV) algorithm and therefore generate a Friendly AI, if you had the unreasonably large number of computers needed to run it.

It's basically an optimization process for simulating everything from physics up to animals and extrapolating that to higher life forms like people. It's not practical to build and use this. Its purpose is to define a solution so we can think of faster ways to do it later.

Comment by BenRayfield on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-08T00:15:54.489Z · LW · GW

I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.

I misunderstood. I thought you were saying it was your goal to prove that, rather than that you thought it would not be proven. My question does not make sense.

Comment by BenRayfield on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-03T16:32:44.181Z · LW · GW

Why do you consider the possibility of smarter-than-Human AI at all? The difference between the AI we have now and that is bigger than the difference between the 2 technologies you are comparing.

Comment by BenRayfield on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-03T16:29:03.701Z · LW · GW

It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks.

I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

That's exactly the high awareness I was talking about, and most people don't have it. I wouldn't be surprised if most people here failed at it, if it presented itself in their real lives.

Most people would not act like a Friendly AI; therefore "Whole Brain Emulation" only leads to "fewer risks" if you know exactly which brains to emulate and have the ability to choose them.

If whole brain emulation (for a specific brain) is expensive, the emulated brain might end up being that of a person who starts wars and steals from other countries to get rich.

Most people prefer that 999 people from their country live even if it costs the lives of 1000 people from another country, given no other known differences between those 1999 people. Also, unlike a "Friendly AI", their choices are not consistent. Most people will leave the choice at whatever was going to happen if they did not choose, even if they know there are no other consequences (like jail) for choosing. If the 1000 people were going to die, unknown to any of them, to save the 999, then most people would think "It's none of my business, maybe god wants it to be that way" and let the extra 1 person die. A "Friendly AI" would maximize lives saved if nothing else is known about all those people.

There are many examples of why most people are not close to acting like a "Friendly AI", even if we removed all the bad influences on them. We should build software to be a "Friendly AI" instead of emulating brains, and only emulate brains for other reasons, except maybe the few brains that already think like a "Friendly AI". It's probably safer to do it completely in software.

Comment by BenRayfield on Far & Near / Runaway Trolleys / The Proximity Of (Fat) Strangers · 2010-01-26T02:01:33.310Z · LW · GW

My Conclusions: It seems there is Far Near and Near Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode... then you're actually in Far mode. And so I will be more suspicious of the hypothetical thought experiments from now on.

When one watches the movie series called "Saw", they will experience the "near mode" of thinking much more than with the examples given in this thread. "Saw" is about people trapped in various situations, enforced by mechanical means only (no psychotic person to beg for mercy, the same way you can't beg the train to stop), where they must choose which things to sacrifice to save a larger number of lives, sometimes including their own life. For example, the first "Saw" movie starts with 2 dying people trapped in an abandoned basement, with their legs chained to the wall, and the only way the first person can escape is to cut off his foot with the saw. Many times in the movie series, the group of trapped people chose whose turn it was to go into the next dangerous area to get the key to the next room. Similarly, the psychotic person who puts the people in those situations thinks he is doing it for their own good, because he chooses people who have little respect for their own lives, and through the process of escaping his horrible traps some of them have a better state of mind after escaping than before. I'm not saying that would really work, but that's the main subject of the movies, and it is shown in many ways simultaneously. These are good examples of how to avoid "meta thinking" and really think in "near mode": watch the "Saw" movies.

Comment by BenRayfield on Eliezer Yudkowsky Facts · 2010-01-01T01:41:40.153Z · LW · GW

Not necessarily. It's just that we are very far from being perfectly rational.

You're right. I wrote "rational minds" in general when I was thinking about the few most rational people today. I did not mean that any perfectly rational mind exists.

Most or all Human brains tend to work better if they experience certain kinds of things that may include wasteful parts, like comedy, socializing, and dreaming. It's not rational to waste more than you have to. Today we do not have enough knowledge of and control over our minds to optimize away all our wasteful or suboptimal thoughts.

I have no reason to think that, in the "design space" of all possible minds, there exist 0, or more than 0, perfectly rational minds that tend to think more efficiently after experiencing comedy.

I do have a reason to bias it slightly toward "there exist more than 0", because Humans and monkeys have a sense of humor that helps them think better if used at least once per day, but when thinking about exponentially larger intelligence, that slight bias becomes an epsilon. Epsilon can be important if you're completely undecided, but usually it's best to look for ideas somewhere else before considering an epsilon-sized chance. What people normally call "smarter-than-Human intelligence" is also an epsilon-sized intelligence in this context, so the 2 things are not epsilon when compared to each other.

The main thing I've figured out here is to be more open-minded about whether comedy (and similar things) can increase the efficiency of a rational mind. I let an assumption get into my writing.

Comment by BenRayfield on Eliezer Yudkowsky Facts · 2009-12-31T20:40:07.944Z · LW · GW

We are Borg. You will be assimilated. Resistance is futile. If Star Trek's Borg Collective came to assimilate everyone on Earth, Eliezer Yudkowsky would engage them in logical debate until they agreed to come back later, after our technology had increased exponentially for some number of years and become a more valuable thing for them to assimilate. Also, he would underestimate how fast our technology increases by just enough that when the Borg came back, we would be the stronger force.

Why is this posted to LessWrong? What does it have to do with being less wrong or sharpening our rationality?

Rational minds need comedy too, or they go insane. Much of this is vaguely related to rationality subjects, so it does not fit well on other websites.