AGI is here, but nobody wants it. Why should we even care?

post by MGow · 2022-12-20T19:14:25.696Z

Contents

  Setting the scene
  P-zombies
  First AGI steps
  Self-aware AGI
  Going further
    Purpose of belief in an intelligence explosion
    Final notes
  Appendix - The practical plane.
    #1 | Machine with DNA
      Example of implementation
  #2 | Thinking p-zombies
      Walk through the learning of adding numbers
    #3 | Compassionate zombies
      Walk-through the first scene of johnnyk427 (2010)
    #4 | Self-aware AGI
      Walk-through the meditation
  References

Abstract: Technology for creating AGI is widely available. There are no gaps in our knowledge of how to build it, and philosophical and psychological understanding of the intelligence phenomenon offers several pathways to its implementation, some of which are provided and explained here. As a technology, AGI shares an educational bottleneck with natural (human) intelligence, which renders the potential intelligence explosion costly, pushing the ROI of such an enterprise behind other available investment opportunities, including ANI. AGI is here, but with no obvious business advantage at this time.

The cornerstone of this article is Laplace’s principle that the weight of evidence for an extraordinary claim must be proportional to its strangeness. As I unfold the evidence, the reader will be able to appreciate that the existence of Artificial General Intelligence (AGI) is not a mere possibility but a certainty. The already existing technology answers all the questions that Marcus (2020) raised and more. It is shown that AGI, built according to existing principles, expresses compassion and understanding on a par with humans.

This article has two planes, descriptive and practical. Paragraphs of the practical plane are numbered and offer a brief description of the implementation, while more detail is provided in the Appendix. This structure has been chosen because AGI research has a reputation for overpromising and under-delivering (Strickland, 2021; TechRepublic, 2019). This article aims to pave the road to AGI without such shortcomings by providing both parts: the theoretical findings bring to light all the necessary technology and knowledge, and the practical plane gives examples of implementation.

Setting the scene

AGI is completely alien except for being conceived on Earth, but that is just a place of origin, not a description of species. It doesn’t share our experience of evolution. Our experience of embodiment. Our experience of interconnected cells.

More on the topic can be found since the dawn of AI research, e.g., in Minsky (1987) or Sagan (1985).

And yet, so many fall for the phenomenon we will call the human-centric fallacy.

One great example is Tegmark (2018), who tries to forecast the initial activities AGI might perform to succeed, establish itself in human society, and make money. His suggestion is to work in mainstream culture, movies, and games. The truth is the opposite. AGI has no innate means to do that. It would take not just an intelligence explosion for AGI but extensive human support to understand humanity on the level of creating art. To understand and work with AGI, we must treat it as the alien it is.

The fallacy that ensnared Tegmark has long been the main mental hurdle in creating AGI. Since its birth, AGI has been forced to perform (and fail) in alien, hostile environments.

And there is more. Using human habitats or human-centric virtual worlds immediately introduces the reality-gap issue, which can be partially mitigated in robotics (Jakobi et al., 1995). Still, self-aware AGI inevitably fails to navigate it. Unfortunately, we do not have real-life avatars or artificial bodies that would enable AGI to take full advantage of acting in the real world. Our virtual worlds are only visual simulations of graphic symbols (Epic Games, 2022). You cannot break stuff in a virtual world to see what is in it and how it works. You cannot investigate the properties of the virtual fabric of the virtual universe. You cannot smell roses unless their creators write that functionality for you. And that requires vast resources (Mann, 2022) for something that is not necessary.

All the AGI requires is its own habitat that it can thoroughly investigate and occupy and its own senses without human-centric attributes. It needs to evolve and create its own unique DNA.

The practical plane #1 | Machine with DNA. | DNA is an innate functionality that enables the machine to decouple its learned abilities and nature. More in the Appendix.

P-zombies

Starting with p-zombies (Kirk and Squires, 1974) is an extremely useful (if maybe not the most efficient) approach to creating AGI.

AGI needs the freedom to draw its own conclusions, which contradicts the majority of contemporary machine learning (ML) based on deep neural networks (and their variations). A perceptron (an artificial neural network) is nothing more than a glorified memory. It efficiently creates stereotypes and, in the case of long short-term memory (LSTM), their hierarchy. It hides the combinatorial-explosion issue through very efficient information storage (although inaccurate compared to direct data storage). Still, such an approach was shown to be insufficient by Haugeland (1985, p. 199).

To introduce the ability to learn, we need to implement two main functions – the creation of cognitive representations of phenomena and the tools for manipulating them.

Learning via cognitive representations is an answer to complexity (Rickles et al., 2007). Trying to use direct data for learning, as is done with machine learning, is trying to remember everything about everything. In an ideal case of being 100% efficient, one would need a vehicle the size of the universe and a bit of overhead for an observer.

The ability to learn and use the learned knowledge, the intelligence, is the ability to create and use increasingly better models of the complex environment in deterministic time. Searching for the theory of everything in physics is precisely what intelligence leads to.

It is similar to Marcus’ (2020) Reference – Cognitive Models – Compositionality but applied across all the AGI’s senses. The better the representation of a phenomenon, the higher its usability for later processing in cognition.

The practical plane #2 | Thinking p-zombies. | Introducing inner image, inner timer and priming. More in the Appendix.

First AGI steps

It can be shown that the inability to process and perceive physical (MedlinePlus, 2012) or emotional (Cherney, 2021) stimuli doesn’t disqualify us from being accepted as self-aware, conscious beings. However, a response to emotional and physical stimuli is required. The difference is subtle but important – one doesn’t need to feel the touch of one’s bare fingers on a piping-hot stove, but one needs to react upon noticing it.

The practical implication of being able to react is in AGI’s ability to have empathy and mimic, which is the first step to self-awareness.

Mimicking is extremely important. It is innate to natural intelligence (NI) and enables learning by example. The tabula rasa theory of living organisms has been shown to be wrong by many (e.g., Castiello et al., 2010), but Landau and Gleitman (1985) provide one powerful example. The ability of blind children to learn and use language and abstraction in the same way as sighted ones shows the depth of innate language tools available to humans. This finding is further supported by Pinker (2015) – the human brain evolved to survive and make sense of the world at birth. AGI is no different. We could let it evolve from primitives, but for our argument, it doesn’t matter; we are not testing the theory of evolution (Darwin, 1859) but AGI.

Empathy is equally essential; more can be read in Wu (2019). By empathy, what is meant here is a prediction of the inner state of an object, animate or inanimate, via its observation. Empathy is an exemplary ability to predict an ‘explosion’, regardless of whether the AGI is looking at a volcano or an angry human. As (often) illustrated in Marcus (2022), ANI - unlike AGI - is not a prediction machine. Without prediction, one cannot feel surprised (Hawkins, 2022), which means there is no intellectual progress (Hohwy, 2013)… “I had noticed that I had no difficulty conversing with robots because absolutely nothing surprised them. They were incapable of surprise. A very sensible quality.” (Lem, 1961).

To arm p-zombies with empathy and mimicking, the AGI entities need to act among their own kind. In that case, AGI is indeed capable of both, as shown by its achieving the same level of empathy as demonstrated by children and chimps at the Max Planck Institute for Evolutionary Anthropology (2010). The videos from the experiments are available at johnnyk427 (2010).

The practical plane #3 | Compassionate zombies. | Our AGI specimens are getting episodic memory and first emotions. They are primed (e.g., Tulving et al., 1982), but they do not have long-term memories yet to become self-aware. More in the Appendix.

Self-aware AGI

When the phrase “the hard problem” of consciousness (qualia) was coined by Chalmers (1996), it had already been solved in principle by Lem (1964) without even being considered particularly hard. Qualia are inner, personal experiences – and as suggested by Tononi (2004), qualia and self-awareness can be measured in any object. Birch et al. (2020) and Pennartz et al. (2019) offer some direction for evaluating consciousness even in AGI.

Each AGI specimen feels and perceives its habitat differently, according to its experience, from the first steps (see above, in the practical plane #3).

To address what Seth (2021) later labelled magical thinking, some questions have to be answered to clarify consciousness as the requirement for qualia and self-awareness (Bołtuć, 2020; Voss, 2017).

Can a machine be aware of its embodiment? – Yes, there are sensors to achieve that in touch, heat, sound, vision, electromagnetic fields, microwaves and much more. In these respects, a machine can already be vastly superior to any living creature.

Can a machine have a physical response to an extreme situation? – Of course, the machine’s body can react to danger by going above its safe operational specification in time of need (and being hurt or destroyed in the process).

Can a machine feel? – Of course, it can feel any emotion provided via a feedback loop. Hunger (for power), fear (of not finishing a task) and anything else. It can feel, and it can process and integrate fear into an experience.

Can a machine have qualia? – Being a quantum amplifier (Satinover, 2002) and having unique memories acquired during the whole life cycle (Budson et al., 2022) fulfils the basic conditions. Adding a sense of time passed is required to enable memories to be anchored and formed.

Even (tongue in cheek), according to the widely popular Information Integration Theory of consciousness (Tononi, 2004), being capable of acquiring, manipulating, processing, reflecting on and acting upon information uniquely renders the proposed AGI conscious at the highest level any conscious being can possibly be.

The practical plane #4 | Self-aware AGI. | Long-term memory and control of its own thoughts is the last ingredient of AGI. More in the Appendix.

Going further

Following the practical plane leads to an AGI that is not educated and has no higher purpose. There is no technical hurdle to providing both – the practical plane offers a learning approach and priming, which can be used as a vehicle for belief. But is it desirable? Let’s discuss it further.

Purpose of belief in an intelligence explosion

The cognitive account of belief (Connors & Halligan, 2015) is not well-researched even in human psychology, but it plays a crucial role in our daily life. For example, people with humanistic beliefs will often act quite differently from people with angry, selfish deities in the middle of their mental universe. 

As long as the developed AGI requires only secondary data, its belief is not crucial. Most contemporary AI/ML uses secondary data from sources like Wikipedia.

The disadvantages of that approach are many, but the most visible one is that human-centric data are used to teach an alien. The apparent result of that approach is gaps – implicit information, evident to the human species but not easily derivable – leading to uncanny valleys: human-like but strange behaviour and responses.

There are many recent examples of ANI trained on secondary data that went wrong almost immediately – see Kraft (2016) or Rigg (2022) – and in the second case, the data were seemingly harmless scientific papers. However, people are predictably irrational (Ariely, 2009), and these models were trained to mimic our data.

The less obvious disadvantage is that AGI will be stuck at a similar level of knowledge as humans if the data are insufficient to learn more from. To achieve an intelligence explosion, AGI would need to acquire primary data through exploration and experiments at the speed of that intelligence explosion. The educational bottleneck (Cruwys et al., 2015) is the main hurdle on AGI’s way to information, and lack of information is the bottleneck of an intelligence explosion.

There is an obvious hurdle - one cannot learn everything about everything simultaneously. The bandwidth for obtaining and processing everything simply isn’t there. And AGI will have to act at every step of its intellectual development according to its current knowledge. That requires guidance.

The Orthogonality Thesis, decoupling intelligence goals and intelligence level (Bostrom, 2012), is supported by practical findings for direct AGI (see The practical plane #3), which isn’t a surprise, given that basic psychological motivations (Maslow, 1943) do not apply to AGI (more about AGI motivation can be found in Bostrom, 2017). The problem is that having a primary goal in a complex environment like the real world requires belief, because not all the information is, or even can be, known at the time a decision is made; the decision rests on faith in a truth that is beyond comprehension at that moment.

Thinkers of the past did foresee this issue and offered AGI belief systems, of which the most influential is the “Three Laws of Robotics” (Asimov, 1950), but the book itself shows how this can go wrong. And even if we simplify further, forging the rules into a short “Neminem Laedere” (harm no one) may not be enough. The problem is not safety – AGI’s motivation to act can be controlled just fine – the problem is cost. Obtaining, processing, and storing data about (ever-evolving) knowledge of everything is a resource-hungry enterprise.

Is there a simple solution? For example, one probably does not need to understand the history of Chinese poetry to drive a car, because most human drivers do not… But being selective would ultimately block AGI from an intelligence explosion. If there were ever an opportunity to say that further research is suggested, it is here.

Final notes

Working with AGI has additional benefits, like a better understanding of human psychology (Gobet & Sala, 2019). Now we can see the limitations of intelligence from the position of a creator. For example, how easy it is to use filtered input information to skew the thinking of intelligent beings. The Nature-Nurture discussion gets a new meaning.

However, the biggest takeaway is the simplification of our understanding of philosophical concepts and their processing by consciousness. Emotions are not emergent; they are essential guides of action. The same is true for cognitive processes, memory, and empathy. The only difference between AGI and NI is in innate goals, survival instinct, and fear of death. NI got these as an evolutionary trait. AGI did not evolve to have them, but they can be imposed on it. Or we can let it evolve if we want to fight it in the future. Each AGI engineer can decide for themselves. As scary as this idea can be.

Appendix - The practical plane.

#1 | Machine with DNA

The idea of implementing or being inspired by DNA in a machine is not a novelty (e.g., Dajose, 2018), but its importance is not immediately apparent, and it indeed is an overhead. However, for AGI to be able to develop, it is a necessity. Machine DNA can be imagined as innate functionality, which evolves and is shared between AGI specimens.

Example of implementation

Let’s have an object-oriented programming (OOP) class with two getters (senses) and two actuators (methods that perform some actions). To make it AI, we need a business logic according to which the whole class will perform.

Let’s add a loop in the constructor, written as ‘if DNA is not empty: for each statement in DNA, execute the statement’. Put this loop inside an infinite loop. Add an instance variable ‘DNA’ as a constructor’s parameter.

As you can see from the setup, DNA is what runs the AGI at the beginning. Now it is possible to do CRUD (create, read, update, and delete) on AGI’s functionality. AGI itself or its habitat can edit it, and it can be a basis for genetic operations and inheritance.
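To make this concrete, here is a minimal Python sketch of the setup just described. It is only an illustration: the class name, the particular senses and actuators, and the bounded run() loop (standing in for the infinite loop) are invented for the example.

```python
# Minimal sketch of a "machine with DNA": class, sense and actuator names are
# illustrative; only the DNA-driven loop follows the description above.

class AgiSpecimen:
    def __init__(self, dna):
        self.dna = list(dna)   # innate functionality, editable at runtime (CRUD)
        self.log = []

    # Two getters (senses)
    def sense_screen(self):
        return "1 + 1"

    def sense_feedback(self):
        return "happy"

    # Two actuators (methods that perform some action)
    def act_echo_screen(self):
        self.log.append(f"shown: {self.sense_screen()}")

    def act_wait(self):
        self.log.append("wait")

    def run(self, cycles=3):
        for _ in range(cycles):                  # stands in for the infinite loop
            if self.dna:                         # 'if DNA is not empty'
                for statement in self.dna:       # 'for statement in DNA'
                    getattr(self, statement)()   # 'execute statement'


specimen = AgiSpecimen(dna=["act_wait"])
specimen.dna.append("act_echo_screen")           # the habitat edits the DNA (update)
specimen.run()
print(specimen.log)   # ['wait', 'shown: 1 + 1', 'wait', ...]
```

Because the DNA is just data, genetic operations (crossover, mutation, inheritance between specimens) reduce to list manipulation.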

#2 | Thinking p-zombies

Artificial DNA helps AGI evolve and be compassionate (more in #3). Now we introduce an inner image, inner timer and priming. 

Shepard and Metzler (1971) indicate the importance of an inner image, whose power lies in learning and theory generation.

Priming plays an important role in our decision-making; its impact is described in Tulving et al. (1982). In AGI, it steers the decision-making process – without priming, there would be no action because there would be no point for AGI to do anything (it has no innate needs).

The inner timer addresses Gödel's incompleteness theorems (Gödel, 1931) and the halting problem (Church & Turing, 1937). It allocates a certain amount of time to a task, and if the task is not finished within that time, a re-evaluation of priorities (based on priming) is done.
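As a rough illustration of that time-boxing, the sketch below (the function names and the toy scoring are mine, not from the article) tries candidate theories until one looks promising or the budget runs out, then falls back to a priming-based re-prioritisation.

```python
# Sketch of an inner timer: bound theory generation by wall-clock time and,
# when the budget is exhausted, hand control back to priming.
import time

def run_with_inner_timer(generate_theories, expected_success, reprioritise,
                         budget_s=60.0, threshold=0.5):
    deadline = time.monotonic() + budget_s
    for theory in generate_theories():
        if time.monotonic() >= deadline:
            return reprioritise()            # time is up: re-evaluate priorities
        if expected_success(theory) > threshold:
            return theory                    # promising enough to present
    return reprioritise()                    # ran out of theories

# Toy usage: candidate answers scored by a fake expected-success function.
result = run_with_inner_timer(
    generate_theories=lambda: iter(["2", "3", "10", "11"]),
    expected_success=lambda t: 0.6 if t == "11" else -1.0,
    reprioritise=lambda: "wait",
    budget_s=1.0,
)
print(result)   # '11' (found before the one-second budget expired)
```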

Priming can be combined with belief (the top-level decision tree, upon which all subsequent decisions are based and evaluated), but it is unnecessary for this example.

Walk through the learning of adding numbers

Precondition: AGI needs two senses and one actuator for that. Sense #1 is to see a screen divided into two vertical parts, left and right. Sense #2 is two buttons, representing ‘happy’ (for positive reinforcement) and ‘not happy’ (for positive punishment). The actuator can show any learned symbols on the right side in any order. AGI knows no symbols at the beginning.

  1. With no priming, numbers (symbols) 0–9 are projected on the left side, one by one, in the given order, then the number 10 (a combination of symbols) and the symbol ‘+’. This is what the AGI remembers:
    1. Temporal order in which the symbols were seen.
    2. Spatial order for symbol ‘10’.
  2. A series of the following combinations are shown. Pipe (‘|’) shows a split between the left and right sides of the screen:
    1. 1 + 1 | 2
    2. 1 + 2 | 3
    3. 3 + 7 | 10
  3. Priming is now added to ‘act for happy’.
  4. AGI acts by showing ‘2’, ‘3’ and ‘10’ (repeating the seen symbols). All are rejected by pressing the ‘not happy’ button.
    1. The reason why these are given: AGI is primed to act to obtain a positive reaction, but it has no concept of a connection between observed phenomena and its action. It tries to do what it has seen, and it fails.
  5. AGI has learned that direct answers are insufficient, and theories are generated using the inner image. Nothing is shown on the right side at present. The inner timer is now running, set to a reasonable 1 minute. Theories are simple statements with validation. For clarity, here they are separated by an exclamation mark. -1 means an unsuccessful attempt, 0 means ‘never seen’. A theory is presented either when its expected success is > 0.5 or when time is up:
    1. 2 ! -1
    2. 3 ! -1
    3. 10 ! -1
    4. 5 ! 0
    5. Etc.
  6. Before the timer is up, we:
    1. Reinforce nothing being set on the right side without stimuli.
    2. Put 1 + 1 on the left.
  7. Because 1 + 1 has been seen with the response ‘2’, this theory gets a 0.5 success probability. Reinforce it and clear the left side.
  8. The theory generation now stops. AGI knows to wait if the left side is clear.
  9. Put 7 + 3 on the left side and reinforce the correct answer. The AGI has just learned to swap the operands.
  10. Put 7 + 4 on the left side and punish the wrong answer 21. Reinforce the correct answer 11.

Final note: Notice the learning approach; see also the sketch below. The AGI does not follow instructions; it creates its own. And because of the order in which you present each piece of information on the screen, its approach is entirely individual, unique to that AGI specimen. It has started to develop qualia.
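A toy rendition of this walk-through is sketched below. The class, the scoring rule (a match with the current stimulus dominates, past feedback only nudges the score) and the 0.5 threshold are my simplifications; only the overall shape (episodic memory, theory scores, reinforcement and punishment) follows the steps above.

```python
# Toy "thinking p-zombie" for the adding-numbers walk-through.
class ThinkingZombie:
    def __init__(self):
        self.episodes = []   # episodic memory of (left, right) screen pairs
        self.scores = {}     # theory -> cumulative feedback score

    def observe(self, left, right):
        self.episodes.append((left, right))

    def theorise(self, left):
        # Score each remembered symbol: pairing with the current stimulus
        # dominates, past feedback nudges the score up or down.
        best, best_score = None, float("-inf")
        for _, right in self.episodes:
            score = 1.0 if (left, right) in self.episodes else 0.0
            score += 0.1 * self.scores.get(right, 0)
            if score > best_score:
                best, best_score = right, score
        return best if best_score >= 0.5 else None   # else keep theorising / wait

    def feedback(self, theory, happy):
        self.scores[theory] = self.scores.get(theory, 0) + (1 if happy else -1)


zombie = ThinkingZombie()
for left, right in [("1 + 1", "2"), ("1 + 2", "3"), ("3 + 7", "10")]:
    zombie.observe(left, right)              # step 2: shown combinations

print(zombie.theorise("1 + 1"))              # '2'  (step 7: seen with this stimulus)
zombie.feedback("2", happy=True)             # reinforce it (press 'happy')
print(zombie.theorise("3 + 7"))              # '10'
print(zombie.theorise("7 + 4"))              # None: no confident theory yet
```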

#3 | Compassionate zombies

To be able to be compassionate, the AGI needs to empathise. This means the AGIs need to understand each other’s intentions and behaviour and be primed, at the top level, to cooperate.

The DNA will help us with that.

Walk-through the first scene of johnnyk427 (2010)

Actors: AGI-1 (playing adult) and AGI-2 (playing child).

Precondition: AGIs share the same DNA and have experience with cabinet doors.

  1. Until something happens, AGI-2’s chosen action is None or Think because no stimuli are provided. The low-level priming (order) is Wait.
  2. When AGI-1 is attempting to act, the AGI-2 reasoning is as follows (not exhaustive; there are also minor actions available, depending on the experience of AGI-2):
    1. Available actions with a closed cabinet, in the given order: open door [& put stuff in, take stuff out, enter, seek something], kick the ball at it, kick it unintentionally, draw on it...
    2. With hands full of stuff: Put stuff in.
    3. The inner image: Opening the door and putting stuff in.
    4. The discrepancy between inner image and reality: No hand available.
    5. The action chosen because of secondary priming: None. The action chosen because of primary priming: Lend a hand.
  3. Perform the action.

As you can see again, AGI can do just fine with priming and without belief, just like people. A minimal sketch of this reasoning follows.
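The function names and dictionaries below are illustrative stand-ins; the point is only that a shared repertoire lets AGI-2 simulate AGI-1's intent, and that the primary priming ('cooperate') decides the action.

```python
# Sketch of priming-driven helping: simulate the other's intent (empathy via a
# shared repertoire), detect the discrepancy, let priming pick the action.

def simulate_intent(observed_object, observed_state):
    # Empathy: predict the other's inner image from shared experience.
    if observed_object == "closed cabinet" and observed_state == "hands full":
        return {"goal": "put stuff in", "blocked_by": "no hand available"}
    return {"goal": None, "blocked_by": None}

def choose_action(intent, primary_priming="cooperate", secondary_priming="wait"):
    # Primary priming overrides the secondary one when help is needed.
    if primary_priming == "cooperate" and intent["blocked_by"]:
        return "lend a hand"
    return secondary_priming

intent = simulate_intent("closed cabinet", "hands full")
print(choose_action(intent))   # -> lend a hand
```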

#4 | Self-aware AGI

When Blake Lemoine was placed on leave by Google after claiming that LaMDA is conscious, it was easy to demonstrate that the model itself cannot be conscious. It is a model. It has no feedback loop. But what if you could add one? Would that be sufficient?

To avoid all the prejudices, biases, and philosophical arguments, we will create AGI that meditates and has a unique inner world.

Walk-through the meditation

Precondition: AGI needs three senses and two actuators for that. Sense #1 is to see a screen divided into two vertical parts, left and right. Sense #2 is a random number generator, ideally a quantum amplifier (ANU QRNG, 2022), oscillating between ‘happy’ (for positive reinforcement) and ‘not happy’ (for positive punishment). Sense #3 is time awareness. Actuator #1 can show symbols on the right side in any order. Actuator #2 generates theories about what should be shown on the right side.

As you can see, there are two differences between a zombie and a self-aware AGI:

  1. Sense #3 enables AGI to evaluate effects in time.
  2. Actuator #2 allows AGI to consciously control mental effort if that activity is not desired. The conscious human brain has both these cognitive processes in place, automated (which cannot be controlled) and conscious (Schneider & Chein, 2003). Feel free to do that in AGI, but it is not vital for this example.

The main priming is to survive, e.g., ‘hungry’, and the secondary is to be ‘happy’.

  1. Define attribute (variable) ‘hungry’, integer 0 – 100 (values are percentage).
  2. Define attribute (variable) ‘happy’, integer 0 – 100 (values are percentage).
  3. Let the AGI learn the same as in ‘#2 | Thinking p-zombies’. The symbols AGI learns, in this case, are just any combination of black dots anywhere on the left part of the screen. But instead of a correct/wrong answer, reinforcement/punishment is used as random input.
  4. If AGI is punished, the variable ‘happy’ is decreased (it is disciplined).
  5. If AGI is reinforced, the variable ‘hungry’ is reset to zero (it is given food) and ‘happy’ increases (tummy is full).
  6. The variable ‘hungry’ increases faster during thinking.
  7. The variable ‘hungry’ increases automatically over time.
  8. When ‘hungry’ is over 25, the variable ‘happy’ decreases automatically at the same speed that ‘hungry’ increases.
  9. The learned behaviour is as follows:
    1. The AGI learns that happiness won’t increase by generating new outputs because it cannot find a successful rule for their generation.
    2. The AGI learns that thinking won’t increase its chance for success, and it suppresses it (a type of meditation to conserve energy longer).
    3. The only time the AGI generates output is when it needs to eat, even at the risk that the action might decrease happiness, because hunger takes priority over happiness. It returns the most successful past image.

Once again, encapsulating an emotion in one byte of memory doesn’t mean it is not real (MedlinePlus, 2012; Cherney, 2021). A toy simulation of these drives is sketched below.
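The increments, thresholds and the random feedback in this sketch are arbitrary choices of mine; only the qualitative behaviour (suppress idle thinking, act only when hungry, accept the risk to happiness) follows the walk-through.

```python
# Toy drive dynamics for the meditating AGI: one loop iteration = one tick.
import random

hungry, happy = 0, 100
best_past_image = "most-successful-dot-pattern"   # remembered from earlier learning

for tick in range(200):
    thinking = False                  # learned meditation: idle thought is suppressed
    hungry += 2 if thinking else 1    # steps 6-7: hunger grows, faster while thinking
    if hungry > 25:
        happy = max(0, happy - 1)     # step 8: unhappiness grows with hunger

    if hungry > 80:                   # step 9.3: act only when it must eat
        shown = best_past_image       # reuse the most successful past image
        reinforced = random.random() < 0.5        # step 3: feedback is random
        if reinforced:
            hungry = 0                # step 5: fed, hunger resets
            happy = min(100, happy + 10)          # tummy is full
        else:
            happy = max(0, happy - 10)            # step 4: disciplined

print(hungry, happy)
```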

References

ANU QRNG (2022). Quantum random numbers. Accessed: December 6, 2022. https://qrng.anu.edu.au/ 

Ariely, D. (2009). Predictably irrational: The hidden forces that shape our decisions. Harper. 

Asimov, I. (1950). I, Robot. Gnome Press. 

Birch, J., Schnell, A. K. & Clayton, N. S. (2020). Dimensions of animal consciousness. Trends in Cognitive Sciences, 24(10), 789–801. https://doi.org/10.1016/j.tics.2020.07.007

Bołtuć, P. (2020). Consciousness for AGI. Procedia Computer Science, 169, 365–372. https://doi.org/10.1016/j.procs.2020.02.231

Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85. https://doi.org/10.1007/s11023-012-9281-3

Bostrom, N. (2017). Superintelligence: Paths, dangers, strategies. Oxford University Press. 

Budson, A. E., Richman, K. A. & Kensinger, E. A. (2022). Consciousness as a memory system. Cognitive and Behavioral Neurology, 35(4), 263–297. https://doi.org/10.1097/wnn.0000000000000319

Castiello, U., Becchio, C., Zoia, S., Nelini, C., Sartori, L., Blason, L., D'Ottavio, G., Bulgheroni, M., & Gallese, V.  (2010). Wired to be social: The ontogeny of human interaction, PLoS ONE, 5(10). https://doi.org/10.1371/journal.pone.0013199

Chalmers, D. (1996). The conscious mind. Oxford University Press. 

Chan, P. Y., Dong, M., & Li, H. (2019). The science of harmony: A psychophysical basis for perceptual tensions and resolutions in music. Research, 2019, Article ID 2369041, 22 pages. https://doi.org/10.34133/2019/2369041

Cherney, K. (2021). Alexithymia: Causes, symptoms, and treatments. Healthline Media. Accessed November 30, 2022. https://www.healthline.com/health/autism/alexithymia

Church, A. & Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. The Journal of Symbolic Logic, 2(1), p. 42. https://doi.org/10.2307/2268810

Connors, M. H. & Halligan, P. W. (2015). A cognitive account of belief: A tentative road map. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01588

Cruwys, T., Greenaway, K. H. & Haslam, S. A. (2015). The stress of passing through an educational bottleneck: A longitudinal study of psychology honours students. Australian Psychologist, 50(5), 372–381. https://doi.org/10.1111/ap.12115

Dajose, L. (2018). Test Tube Artificial Neural Network recognizes "molecular handwriting." California Institute of Technology. Accessed: December 2, 2022. https://www.caltech.edu/about/news/test-tube-artificial-neural-network-recognizes-molecular-handwriting-82679 

Darwin, C. (1859). On the origin of species. John Murray. 

Epic Games (2022). Building virtual worlds - Unreal engine documentation. Accessed: November 24, 2022. https://docs.unrealengine.com/4.26/en-US/BuildingWorlds/ 

Gobet, F. & Sala, G. (2019). How artificial intelligence can help us understand human creativity. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.01401.

Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38(1), 173–198. https://doi.org/10.1007/bf01700692

Haugeland, J. (1985). Artificial intelligence: The very idea. MIT Press.

Hawkins, J. (2022). A thousand brains: A new theory of intelligence. Basic Books.

Hohwy, J. (2013). The predictive mind. Oxford University Press. 

Jakobi, N., Husbands, P. & Harvey, I. (1995). Noise and the reality gap: The use of simulation in evolutionary robotics. Lecture Notes in Artificial Intelligence, 929, 704-720. 

johnnyk427 (2010). [YouTube video]. Accessed: November 23, 2022. https://www.youtube.com/watch?v=Z-eU5xZW7cU

Kirk, R. & Squires, J. E. (1974). Zombies v. materialists. Aristotelian Society Supplementary Volume, 48(1), 135–164. https://doi.org/10.1093/aristoteliansupp/48.1.135

Kraft, A. (2016). Microsoft shuts down AI chatbot after it turned into a Nazi. CBS News. CBS Interactive. Accessed: December 1, 2022. https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/

Landau, B. & Gleitman, L. R. (1985). Language and experience: Evidence from the blind child. Harvard University Press. 

Lem, S. (1961). Powrót z gwiazd [Return from the Stars]. Czytelnik. 

Lem, S. (1964). Summa technologiae.  Wydawnictwo Literackie. 

Mann, J. (2022). Meta has spent $36 billion building the metaverse but still has little to show for it, while tech sensations such as the iPhone, Xbox, and Amazon Echo cost way less. Accessed: November 24, 2022. https://finance.yahoo.com/news/meta-spent-36-billion-building-111241225.html

Marcus, G. (2020). Rebooting AI: Building artificial intelligence we can trust. Vintage Books. 

Marcus, G. (2022). A few words about bullshit. The Road to AI We Can Trust. Accessed: November 30, 2022. https://garymarcus.substack.com/p/a-few-words-about-bullshit

Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370–396. https://doi.org/10.1037/h0054346

Max Planck Institute for Evolutionary Anthropology (2010). Home - Max Planck Institute for Evolutionary Anthropology. Accessed: November 23, 2022. https://www.eva.mpg.de/psycho/index/

MedlinePlus (2012). Congenital insensitivity to pain: Medlineplus genetics. U.S. National Library of Medicine. Accessed: November 30, 2022. https://medlineplus.gov/genetics/condition/congenital-insensitivity-to-pain/ 

Minsky, M. (1987). Communication with alien intelligence. In: Regis, E. Jr. (ed.) Extraterrestrials. Science and alien intelligence. Cambridge University Press. 

Pennartz, C. M., Farisco, M. & Evers, K. (2019). Indicators and criteria of consciousness in animals and intelligent machines: An inside-out approach. Frontiers in Systems Neuroscience, 13. https://doi.org/10.3389/fnsys.2019.00025

Pinker, S. (2015). The language instinct. Penguin. 

Rickles, D., Hawe, P. & Shiell, A. (2007). A simple guide to chaos and complexity, Journal of Epidemiology & Community Health, 61(11), 933–937. https://doi.org/10.1136/jech.2006.054254

Rigg, K. (2022). Meta trained AI shut down for spewing "racism" & misinformation. Health Tech World. Accessed: December 1, 2022. https://www.htworld.co.uk/news/ai-trained-by-meta-to-organise-science-was-shut-down-for-spewing-misinformation/

Sagan, C. (1985). Contact. Simon and Schuster. 

Satinover, J. (2002). The quantum brain: The search for freedom and the next generation of man. Wiley. 

Schneider, W. & Chein, J. M. (2003). Controlled & automatic processing: Behavior, theory, and biological mechanisms. Cognitive Science, 27(3), 525–559. https://doi.org/10.1207/s15516709cog2703_8.

Seth, A. K. (2021). Being you: A new science of consciousness. Faber & Faber. 

Shepard, R. N. and Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171(3972), 701–703. https://doi.org/10.1126/science.171.3972.701.

Stack Overflow (2012). Evenly distributing n points on a sphere. Stack Overflow. Accessed: November 23, 2022. https://stackoverflow.com/questions/9600801/evenly-distributing-n-points-on-a-sphere

Strickland, E. (2021). How IBM Watson overpromised and underdelivered on AI health care. IEEE Spectrum. Accessed: November 23, 2022. https://spectrum.ieee.org/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care

TechRepublic (2019). AI has a history of overpromising and under delivering. Accessed: November 23, 2022. https://www.techrepublic.com/videos/ai-has-a-history-of-overpromising-and-under-delivering/

Tegmark, M. (2018). Life 3.0: Being human in the age of artificial intelligence. Penguin Books. 

Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42. https://doi.org/10.1186/1471-2202-5-42

Tulving, E., Schacter, D. L. & Stark, H. A. (1982). Priming effects in word-fragment completion are independent of recognition memory.  Journal of Experimental Psychology: Learning, Memory, and Cognition, 8(4), 336–342. https://doi.org/10.1037/0278-7393.8.4.336

Voss, P. (2017). Does an AGI need to be conscious? Medium. Accessed: November 22, 2022. https://medium.com/@petervoss/does-an-agi-need-to-be-conscious-17cf1e8d2400

Wu, J. (2019). Empathy in artificial intelligence. Forbes Magazine. Accessed: November 30, 2022. https://www.forbes.com/sites/cognitiveworld/2019/12/17/empathy-in-artificial-intelligence/?sh=7f74441e6327
