FAI FAQ draft: What is the Singularity?

post by lukeprog · 2011-11-16T19:11:14.253Z · LW · GW · Legacy · 14 comments

I invite your feedback on this snippet from the forthcoming Friendly AI FAQ. This one is an answer to the question "What is the Singularity?"

_____


There are many types of mathematical and physical singularities, but in this FAQ we use the term 'Singularity' to refer to the technological singularity.

There are also many things someone might have in mind when they refer to a 'technological Singularity' (Sandberg 2010). Below, we’ll explain just three of them (Yudkowsky 2007):

  1. Intelligence explosion
  2. Event horizon
  3. Accelerating change


Intelligence explosion

Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica (MacKenzie 1995). By the late 1990s, 'expert systems' had surpassed human skill for a wide range of tasks (Nilsson 2009). In 1997, IBM's Deep Blue computer beat the world chess champion (Campbell et al. 2002), and in 2011 IBM's Watson computer beat the best human players at a much more complicated game: Jeopardy! (Markoff 2011). Recently, a robot named Adam was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results (King et al. 2009; King 2011).

Computers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence (Good 1965).
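
This feedback-loop argument can be made concrete with a toy model. Below is a minimal sketch (the constant growth factor k is an illustrative assumption, not a forecast) of how capability compounds when each generation of AI improves on its designer:

```python
# A minimal sketch of Good's feedback loop. It assumes, purely for
# illustration, that each generation of AI improves on its predecessor
# by a constant factor k; real returns on intelligence are unknown.
def explosion(i0=1.0, k=1.2, generations=25):
    """Iterate i_{n+1} = k * i_n and return the full trajectory."""
    trajectory = [i0]
    for _ in range(generations):
        trajectory.append(trajectory[-1] * k)
    return trajectory

print(explosion()[-1])  # roughly 95x the starting level after 25 generations
```

Even a modest per-generation gain compounds quickly; whether self-improvement would actually sustain such a gain is, of course, the contested question.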


Event horizon

Vernor Vinge (1993) wrote that the arrival of machine superintelligence represents an 'event horizon' beyond which humans cannot model the future, because events beyond the Singularity will be stranger than science fiction: too weird for human minds to predict. So far, all social and technological progress has resulted from human brains, but humans cannot predict what future radically different and more powerful intelligences will create. He made an analogy to the event horizon of a black hole, beyond which the predictive power of physics at the gravitational singularity breaks down.


Accelerating change

A third concept of technological singularity refers to accelerating change in technological development.

Ray Kurzweil (2005) has done the most to promote this idea. He suggests that although we expect technological change to be linear, progress in information technology is exponential, and so the future will be more different than most of us expect. Technological progress enables even faster technological progress. Kurzweil suggests that technological progress may become so fast that humans cannot keep up unless they amplify their own intelligence by integrating themselves with machines.
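
Kurzweil's linear-versus-exponential point is easy to check numerically. A small sketch (the 5% annual growth rate is an arbitrary illustration, not Kurzweil's figure):

```python
# Compare a linear extrapolation with exponential (compounding) growth.
# The 5% annual rate is an arbitrary illustration, not a measured figure.
years = range(51)
linear = [1 + 0.05 * t for t in years]     # what linear intuition expects
compound = [1.05 ** t for t in years]      # what compounding actually gives
print(linear[-1], round(compound[-1], 1))  # 3.5 vs 11.5 after 50 years
```

The two trajectories look nearly identical at first, which is why, on Kurzweil's account, linear intuitions keep underestimating long-run change.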

14 comments


comment by Shmi (shminux) · 2011-11-16T20:41:49.638Z · LW(p) · GW(p)

It is standard in a rational discourse to include and address opposing arguments, provided your audience includes anyone other than supporters already. At a minimum, one should state an objection and cite a discussion of it. Here are a number of points that might be worth mentioning:

We may one day design a machine that surpasses human skill at designing artificial intelligences.

Are there any alternatives?

superintelligence represents an 'event horizon' beyond which humans cannot model the future

We have trouble modeling the future already (our world is probably rather unlike what experts had predicted 25 years ago). If the horizon is the limit of the shrinking predictability timescale, what are the arguments for and against this scale being a monotonically decreasing function?

Technological progress enables even faster technological progress.

Similar to the one above. Sometimes it slows down, halts, or reverses for decades or centuries.

I assume that your citations address these questions, but it is useful to state the obvious objections, so the reader is not left hanging.

A technical point:

He made an analogy to the event horizon of a black hole, beyond which the predictive power of physics at the gravitational singularity breaks down.

Physics works mighty fine at the event horizon, predicting what happens to something crossing it with any desired accuracy. It only breaks down at or near the singularity, whether or not it is shrouded by a horizon (not all singularities have to be). While the event horizon is a cute popsci analogy, it should be treated as such, without making false physical statements.

Replies from: steven0461, spuckblase
comment by steven0461 · 2011-11-17T00:31:29.263Z · LW(p) · GW(p)

Agreed on the technical point at the end.

Vinge's 1993 article doesn't mention black holes or event horizons; attributing "event horizon" to him in quotes is certainly wrong, at least if it's referring to that article. Wikipedia claims Vinge said something about black holes, but it cites SingInst's "What is the Singularity?" for that claim, which in turn doesn't cite anything. Did the black hole analogy originate in some other Vinge writing?

comment by spuckblase · 2011-11-17T09:06:22.011Z · LW(p) · GW(p)

It is standard in a rational discourse to include and address opposing arguments, provided your audience includes anyone other than supporters already. At a minimum, one should state an objection and cite a discussion of it.

This is not a rational discourse but part of an FAQ, providing explanations/definitions. Counterarguments would be misplaced.

comment by kilobug · 2011-11-17T11:31:11.905Z · LW(p) · GW(p)

My initial objection to the Singularity was "how do we know making something more intelligent is not an exponentially more difficult task, preventing the feedback loop from growing fast or even causing it to stall at a low limit?"

The AI Foom debate did mostly answer that objection, but I think addressing it in a FAQ about the Singularity would be a good idea.
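
As a toy illustration of the objection (all numbers made up), if each successive self-improvement is harder to find than the last, the loop levels off instead of exploding:

```python
# Toy model, not a prediction: each self-improvement step adds a gain,
# but successive improvements get harder by a constant factor.
def self_improvement(initial_gain=1.0, difficulty_factor=2.0, steps=50):
    intelligence, gain = 1.0, initial_gain
    for _ in range(steps):
        intelligence += gain
        gain /= difficulty_factor   # each step is twice as hard as the last
    return intelligence

print(self_improvement())                       # converges to ~3.0: a low ceiling
print(self_improvement(difficulty_factor=0.9))  # gains grow each step: explosion
```

Whether real improvements behave more like the first case or the second is exactly what that debate argues about.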

comment by antigonus · 2011-11-16T22:12:55.825Z · LW(p) · GW(p)

There are also many things someone might have in mind when they refer to a 'technological Singularity' (Sandberg 2010). Below, we’ll explain just three of them (Yudkowsky 2007):

Are there really more than those? Significantly more? I personally don't think I've come across any others, and your wording makes it sound like you have several readily at hand.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-17T00:10:46.495Z · LW(p) · GW(p)

My impression is that many versions of the Singularity are hybrids of these. Kurzweil's envisioned Singularity, for example, includes AI as a major point, and in "The Singularity is Near" he makes points about prediction issues that are close to Vinge's claim.

comment by billswift · 2011-11-16T21:17:30.209Z · LW(p) · GW(p)

If you are not going to provide links, at least provide enough information so someone can find the referenced papers. I just tried to find one of your references on Google Scholar, and after the fifth page of obvious non-matches came back to leave this comment.

Replies from: lukeprog
comment by lukeprog · 2011-11-22T22:04:24.654Z · LW(p) · GW(p)

Sources will be provided in the final document. Current list of sources for all sections of the FAI FAQ is below:

Allen, Varner, & Zinser (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12: 251-261.

Allen (2002). Calculated morality: Ethical computing in the limit. In I. Smit & G. Lasker (eds.), Cognitive, emotive and ethical aspects of decision making and human action, vol I. Baden/IIAS.

Allhoff, Lin, & Moore (2010). What is nanotechnology and why does it matter? Wiley-Blackwell.

Anderson & Anderson, eds. (2006). IEEE Intelligent Systems, 21(4).

Anderson & Anderson, eds. (2011). Machine Ethics. Cambridge University Press.

Azevedo, Carvalho, Grinberg, et al. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. The Journal of Comparative Neurology, 513: 532-41.

Bainbridge (2005). Survey of NBIC Applications. In Bainbridge & Roco (eds.), Managing nano-bio-info-cogno innovations: Converging technologies in society. Springer.

Barch & Carter (2005). Amphetamine improves cognitive function in medicated individuals with schizophrenia and in healthy volunteers. Schizophrenia Research, 77: 43–58.

Baum, Goertzel, & Goertzel (2011). How Long Until Human-Level AI? Results from an Expert Assessment. Technological Forecasting & Social Change, 78: 185-195.

Block (1981). Psychologism and behaviorism. Philosophical Review, 90: 5–43.

Bostrom (1998). How long before superintelligence? International Journal of Futures Studies, 2.

Bostrom (2003). Ethical issues in advanced artificial intelligence. In Smit, Lasker & Wallach (eds.), Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence, vol II. IIAS, Windsor.

Bostrom & Cirkovic (2008). Global Catastrophic Risks. Oxford University Press.

Bostrom & Sandberg (2009). Cognitive enhancement: Methods, ethics, regulatory challenges. Science and Engineering Ethics, 15: 311–334.

Bostrom & Yudkowsky (2011). The ethics of artificial intelligence. In Ramsey & Frankish (eds.), The Cambridge Handbook of Artificial Intelligence.

Bruner, Shapiro, & Tagiuri. The Meaning of Traits in Isolation and in Combination. In Tagiuri & Petrullo (eds.), Person Perception and Interpersonal Behavior. Stanford University Press.

Butler (1863). Darwin among the machines. The Press (Christchurch, New Zealand), June 13.

Caldwell, Caldwell, et al. (2000). A double-blind, placebo-controlled investigation of the efficacy of modafinil for sustaining the alertness and performance of aviators: A helicopter simulator study. Psychopharmacology (Berlin), 150: 272–282.

Campbell (1932). The Last Evolution. Amazing Stories.

Campbell, Hoane, & Hsu (2002). Deep Blue. Artificial Intelligence, 134: 57-83.

Capurro, Hausmanninger, Weber, Weil, Cerqui, Weber, & Weber (2006). International Review of Information Ethics, Vol. 6: Ethics in Robots.

Chico, Benedict, Louie, & Cohen (1996). Quantum conductance of carbon nanotubes with defects. Physical Review B, 54: 2600-2606.

Clarke (1968). The mind of the machine. Playboy, December 1968.

Daley & Onwuegbuzie (2011). Race and intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 293-307). Cambridge University Press.

Danielson (1992). Artificial morality: Virtuous robots for virtual games. Routledge.

Davidson & Kemp (2011). Contemporary models of intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 58-83). Cambridge University Press.

Drexler (1987). Engines of Creation. Anchor.

Dreyfus (1972). What Computers Can’t Do. Harper & Row.

Eden, Soraker, Moor, & Steinhart, eds. (2012). The Singularity Hypothesis: A Scientific and Philosophical Assessment. Springer.

Finke, Dodds, et al. (2010). Effects of modafinil and methylphenidate on visual attention capacity: a TVA-based study. Psychopharmacology, 210: 317-329.

Floridi & Sanders (2004). On the morality of artificial agents. Minds and Machines, 14: 349-379.

Gibbs & D’Esposito (2005). Individual capacity differences predict working memory performance and prefrontal activity following dopamine receptor stimulation. Cognitive & Affective Behavioral Neuroscience, 5: 212–221.

Gill, Haerich, et al. (2006). Cognitive performance following modafinil versus placebo in sleep-deprived emergency physicians: A double-blind randomized crossover study. Academic Emergency Medicine, 13: 158–165.

Gilovich, Griffith, & Kahneman, eds. (2002). Heuristics and Biases: The Psychology of Human Judgment. Cambridge University Press.

Glimcher (2010). Foundations of Neuroeconomic Analysis. Oxford University Press.

Gödel (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38: 173–198.

Goertzel & Pennachin (2007). Artificial General Intelligence. Springer.

Good (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6: 31-88.

Graimann, Allison, & Pfurtscheller (2011). Brain-Computer Interfaces: Revolutionizing Human-Computer Interaction. Springer.

Gron, Kirstein, et al. (2005). Cholinergic enhancement of episodic memory in healthy young adults. Psychopharmacology (Berlin), 182: 170–179.

Hall (2000). Ethics for machines.

Halpern, Beninger, & Straight (2011). Sex differences in intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 253-272). Cambridge University Press.

Hanson (1994). If uploads come first: The crack of a future dawn. Extropy, 6:2.

Hanson (2008). Economics of the singularity. IEEE Spectrum, June: 37‐42.

Hochberg, Serruya, Friehs, Mukand, Saleh, Caplan, Branner, Chen, Penn, & Donoghue (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442: 164-171.

Replies from: lukeprog
comment by lukeprog · 2011-11-22T22:04:51.544Z · LW(p) · GW(p)

PART TWO:

Johnston (2004). Healthy, wealthy and wise? A review of the wider benefits of education. Report 04/04. New Zealand Treasury.

Kandel, Schwartz, & Jessell (2000). Principles of neural science, 4th edition. McGraw-Hill.

Kimberg, D’Esposito, & Farah (1997). Effects of bromocriptine on human subjects depend on working memory capacity. Neuroreport, 8: 3581–3585.

Kimberg, Aguirre, et al. (2001). Cortical effects of bromocriptine, a D-2 dopamine receptor agonist, in human subjects, revealed by fMRI. Human Brain Mapping, 12: 246–257.

Kimberg & D’Esposito (2003). Cognitive effects of the dopamine receptor agonist pergolide. Neuropsychologia, 41: 1020–1027.

King et al. (2009). The automation of science. Science, 324: 85-89.

King (2011). Rise of the robo scientists. Scientific American, January 2011.

Kringelbach & Berridge, eds. (2009). Pleasures of the Brain. Oxford University Press.

Kurzweil (2005). The Singularity is Near. Viking.

Legg & Hutter (2007). A collection of definitions of intelligence. In Goertzel (ed.), Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms (pp. 17–24). IOS Press.

Legg (2008). Machine Superintelligence. PhD Thesis. IDSIA.

Lenat (1983). Eurisko: A program that learns new heuristics and domain concepts. Artificial Intelligence, 21: 61-98.

Lin & Shung (1995). Magnetoconductance of carbon nanotubes. Physical Review B, 51: 7592-7597.

Lokhorst (2011). Computational meta-ethics: Towards the meta-ethical robot. Minds and Machines, 21: 261-274.

Lucas (1961). Minds, machines, and Gödel. Philosophy, 36: 112–127.

Lukasiewicz (1974). The ignorance explosion. Leonardo, 7: 159-163.

Luke (2011). Essentials of Metaheuristics. Lulu.

MacKenzie (1995). The Automation of Proof: A Historical and Sociological Exploration. IEEE Annals, 17(3): 7-29.

Mahoney (2010). A Model for Recursively Self Improving Programs.

Marcus (2008). Kluge: The Haphazard Construction of the Human Mind. Houghton Mifflin.

Markoff (2011). Computer Wins on Jeopardy!; Trivial, It’s Not. New York Times, February 17th 2011: A1.

Mattay, Callicott, et al. (2000). Effects of dextroamphetamine on cognitive performance and cortical activation. Neuroimage, 12: 268–275.

Mattay, Goldberg, et al. (2003). Catechol O-methyltransferase val158-met genotype and individual variation in the brain response to amphetamine. Proceedings of the National Academy of Sciences USA, 100: 6186–6191.

McLaren (2005). Lessons in Machine Ethics from the Perspective of Two Computational Models of Ethical Reasoning. AAAI Technical Report FS-05-06: 70-77.

McMorris, Harris, Swain, Corbett, Collard, Dyson, et al. (2006). Effect of creatine supplementation and sleep deprivation, with mild exercise, on cognitive and psychomotor performance, mood state, and plasma concentrations of catecholamines and cortisol. Psychopharmacology, 185: 93-103.

Mehta, Swainson, et al. (2001). Improved short-term spatial memory but impaired reversal learning following the dopamine D(2) agonist bromocriptine in human volunteers. Psychopharmacology (Berlin), 159: 10–20.

Mondadori (1996). Nootropics: Preclinical results in the light of clinical effects; comparison with tacrine. Critical Reviews in Neurobiology, 10: 357–370.

Muehlhauser & Helm (2012). The Singularity and machine ethics. In The Singularity Hypothesis: A Scientific and Philosophical Assessment.

Muller, von Cramon, & Pollmann (1998). D1- versus D2-receptor modulation of visuospatial working memory in humans. Journal of Neuroscience, 18: 2720–2728.

Muller, Steffenhagen, et al. (2004). Effects of modafinil on working memory processes in humans. Psychopharmacology, 177: 161–169.

Neisser (1979). The Concept of Intelligence. Intelligence, 3: 217–227.

Newhouse, Potter, & Singh (2004). Effects of nicotinic stimulation on cognitive performance. Current Opinion in Pharmacology, 4: 36-46.

Niu & Brass (2011). Intelligence in worldwide perspective. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 623-646). Cambridge University Press.

Nilsson (2009). The Quest for Artificial Intelligence. Cambridge University Press.

Nyberg, Sandblom, Jones, Neely, Petersson, Ingvar, et al. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100: 13728–13733.

Olum (2002). http://hnn.us/articles/171.html

Replies from: lukeprog
comment by lukeprog · 2011-11-22T22:05:06.323Z · LW(p) · GW(p)

PART THREE:

Penrose (1994). Shadows of the Mind. Oxford University Press.

Phoenix (2005). Molecular manufacturing: What, why, and how. Center for Responsible Nanotechnology.

Popov, Kyotani, Nemanich, & Koga (2002). Superhard phase composed of single-wall carbon nanotubes. Physical Review B, 65: 033408.

Rae, Digney, McEwan, & Bates (2003). Oral creatine monohydrate supplementation improves brain performance: A double-blind, placebo-controlled, cross-over trial. Proceedings of the Royal Society of London Series B, Biological Sciences, 270: 2147–2150.

Repantis, Schlattmann, et al. (2010). Modafinil and methylphenidate for neuroenhancement in healthy individuals: A systematic review. Pharmacological Research, 62: 187-206.

Roache (2008). Ethics, speculation, and values. Nanoethics, 2: 317-327.

Roesch-Ely, Scheffel, et al. (2005). Differential dopaminergic modulation of executive control in healthy subjects. Psychopharmacology (Berlin), 178: 420–430.

Rusted, Trawley, Heath, Kettle, & Walker (2005). Nicotine improves memory for delayed intentions. Psychopharmacology (Berl), 182: 355–365.

Sandberg (2010). An overview of models of technological singularity. Presented at the “Roadmaps to AGI and the future of AGI” workshop following the AGI 2010 conference in Lugano, Switzerland.

Sandberg & Bostrom (2008). Whole Brain Emulation: A Roadmap. Technical Report 2008-3. Future of Humanity Institute, Oxford University.

Sandberg & Bostrom (2011). Machine intelligence survey, Technical Report 2011-1, Future of Humanity Institute, Oxford.

Sawyer (2007). Robot ethics. Science, 318: 1037.

Schmidhuber (2007). Gödel Machines: Fully Self-Referential Optimal Universal Self-Improvers. In Goertzel & Pennachin (eds.), Artificial General Intelligence (pp. 199–226). Springer.

Schwartz (1987). Limits of Artificial Intelligence. In Shapiro & Eckroth (eds.), Encyclopedia of Artificial Intelligence, Vol. 1 (pp. 488-503). John Wiley and Sons, Inc.

Searle (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3: 417–424.

Schroeder (2004). Three Faces of Desire. Oxford University Press.

Stanovich (2010). Rationality and the Reflective Mind. Oxford University Press.

Sternberg (1985). Implicit Theories of Intelligence, Creativity, and Wisdom. Journal of Personality and Social Psychology, 49: 607–627.

Sternberg, Conway, Ketron, & Bernstein (1981). People’s Conceptions of Intelligence. Journal of Personality and Social Psychology, 41: 37–55.

Sternberg & Grigorenko (2006). Cultural Intelligence and Successful Intelligence. Group & Organization Management, 31: 27–39.

Tang, Shimizu, Dube, Rampon, Kerchner, Zhuo, Liu, & Tsien (1999). Genetic enhancement of learning and memory in mice. Nature, 401: 63–69.

Turing (1950). Computing machinery and intelligence. Mind, 59: 433-460.

Turing (1951/2004). Intelligent machinery, a heretical theory. In Copeland (ed.), The Essential Turing (2004). Oxford University Press. Originally presented in 1951 as a lecture for the ‘51 society in Manchester.

Turner, Clark, Dowson, Robbins, & Sahakian (2004). Modafinil improves cognition and response inhibition in adult attention-deficit/hyperactivity disorder. Biological Psychiatry, 55: 1031-1040.

Vinge (1981). True Names. In Dell Binary Star #5.

Vinge (1993). The coming technological singularity: How to survive in the post-human era. Whole Earth Review, winter 1993. New Whole Earth.

Wallach & Allen (2009). Moral Machines. Oxford University Press.

Wallach, Allen, & Smit (2008). Machine morality: Bottom-up and top-down approaches for modeling human moral faculties. AI and Society, 22: 565–582.

Watanabe, Kato, & Kato (2002). Effects of creatine on mental fatigue and cerebral hemoglobin oxygenation. Neuroscience Research, 42(4), 279–285.

Williams & Herrup (1988). The control of neuron number. Annual Review of Neuroscience, 11: 423–53.

Yudkowsky (1996). Staring into the Singularity.

Yudkowsky (2000). Creating a Transhuman AI.

Yudkowsky (2001). Creating Friendly AI.

Yudkowsky (2002). The AI-Box Experiment.

Yudkowsky (2004). Coherent extrapolated volition.

Yudkowsky (2007). Three Major Singularity Schools.

Yudkowsky (2008). Artificial intelligence as a negative and positive factor in global risk. In Bostrom & Cirkovic (eds.), Global Catastrophic Risks. Oxford University Press.

Zanello, Zhao, Hu, & Haddon (2006). Nano Letters, 6: 562-567.

Zentall (2011). Animal intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 309-327). Cambridge University Press.

comment by lessdazed · 2011-11-17T16:30:47.799Z · LW(p) · GW(p)

After that, this machine could improve its own intelligence faster and better than humans can

After that, this machine could improve its own intelligence faster and better than humans can improve human intelligence(?)

After that, this machine could improve its own intelligence faster and better than humans can improve the machine's intelligence(?)

After that, this machine could improve its own intelligence faster and better than humans could improve the machine's intelligence.

the machine quickly becomes

the machine would quickly become

...breaks down.

...breaks down. The idea that the future will be unpredictable after the creation of sufficiently intelligent machines is the second meaning often intended by "technological singularity".

comment by timtyler · 2011-11-16T20:35:49.544Z · LW(p) · GW(p)

Computers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence (Good 1965).

It is worth noting that the correct story is of a man-machine symbiosis - with the human part gradually diminishing, and the machine part gradually rising - through the well-known process of automation.

In this context, contrasting machine ability with that of unmodified humans seems silly and irrelevant. Humans have not been designing computers "on their own" for decades. Rather, there's a man-machine symbiosis.

The difference between these two pictures really matters. A symbiosis in which the humans decline gradually results in a different growth curve from one in which computers reach "human level" and then suddenly "take off". The second picture - while naive and inaccurate - is good for scaring small children with. I'm sure we don't want to be scaring small children with fabricated stories.

comment by JoshuaZ · 2011-11-16T19:33:22.896Z · LW(p) · GW(p)

Minor formatting issue- in the section on "Event horizon" the url links need spaces between the words so they don't appear as " theevent horizonof" and " thegravitational singularitybreaks". There are some similar issues with some of the italics.

I would also suggest that it may make sense to have a direct link to Eliezer's bit about three schools.

A major part of Kurzweil's approach is the idea that this linear v. exponential issue causes people to underestimate technological change, and this seems worth mentioning.

When you say:

Computers remain far short of human intelligence

this jars slightly with the sentence right before. I would suggest saying "In general, computers remain far short of human intelligence" or something similar.

Replies from: lukeprog
comment by lukeprog · 2011-11-16T19:45:08.144Z · LW(p) · GW(p)

The formatting issue is actually a LW bug, but I fixed it; thanks.