Comments

Comment by paul (paul-1) on Plans are Recursive & Why This is Important · 2019-04-08T14:50:57.682Z · LW · GW

It seems like the planning process or algorithm is recursive, but the plans themselves are merely hierarchical.

Speaking of recursion in human cognition, I've always wondered if it is implemented in the human brain in what computer scientists (programming language compiler writers, to be specific) call "unrolling" as opposed to true recursion. Many modern compilers, when they detect that a recursive algorithm or simple loop will only ever nest a few times, say five, will generate machine code that unrolls the recursion into a simple linear series of five steps. The brain really can't handle very many levels of recursion, so this may be why: it implements what abstractly requires recursion as a linear sequence, turning the recursion level into a simple sequence index. Nature never (or hardly ever) implements true recursion; it always stops after a few levels.
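Here is a minimal sketch in Python of what I mean (the tree type and the fixed depth are just illustrative; real compilers do this at the machine-code level):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

def process(node, level):
    print("  " * level + node.name)

# True recursion: the depth is unbounded in principle.
def walk(node, level=0):
    process(node, level)
    for child in node.children:
        walk(child, level + 1)

# "Unrolled" version: when nesting is known never to exceed two levels
# below the root, the recursion level becomes a plain sequence of
# explicit steps: an index, not a call stack.
def walk_unrolled(root):
    process(root, 0)                      # level 0
    for child in root.children:
        process(child, 1)                 # level 1
        for grandchild in child.children:
            process(grandchild, 2)        # level 2

tree = Node("plan", [Node("step", [Node("substep")])])
walk(tree)
walk_unrolled(tree)  # same output, no recursion
```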

Comment by paul (paul-1) on Do we need a high-level programming language for AI and what it could be? · 2019-03-06T18:55:07.049Z · LW · GW

I am doing AI work (not neural nets) and I'm also a programming language aficionado. I've invented several special-purpose languages and have implemented them. The role programming languages might play in AI is something I have thought about.

That all said, the place to start is the AI model. It only makes sense to invent a programming language as an aid to humans in expressing designs using a chosen model. In short, you don't start with the language but with the designs you would like to express. The purpose of a language is solely to make designs easier for humans to read and write.

Comment by paul (paul-1) on Native mental representations that give huge speedups on problems? · 2019-02-26T17:28:41.293Z · LW · GW

Any system that takes a huge amount of input data and reduces it to some sort of representation will have input cases it doesn't handle well. The reduction throws away data of a certain, supposedly unimportant, variety. Input cases are bound to exist where the data thrown away by the reduction algorithm are, in fact, important. Visual illusions are such cases for the human visual system. Those who work on autonomous vehicles have to deal with such cases. Humans who understand how such recognition systems work can purposefully construct such cases in order to "hack" them. It's a jungle out there.
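A toy illustration in Python (the thresholding rule is just a stand-in for whatever reduction a real recognizer performs):

```python
# A toy "recognizer" that reduces a grayscale strip to a binary
# silhouette by thresholding. The exact intensities are the data
# it throws away as supposedly unimportant.
def reduce_to_silhouette(pixels, threshold=128):
    return tuple(1 if p >= threshold else 0 for p in pixels)

faint_pedestrian = [120, 125, 127, 126, 121]  # just under the threshold
empty_road       = [10, 12, 11, 9, 10]

# Two very different scenes collapse to the same representation:
# the discarded intensity data turned out to matter after all.
print(reduce_to_silhouette(faint_pedestrian) == reduce_to_silhouette(empty_road))  # True
```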

Comment by paul (paul-1) on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-02-25T22:16:46.078Z · LW · GW

This kind of "progress" in AI is so boring. It has been shown over and over that an AI will never perform like a human until it understands the world at least somewhat as a human does. Sure, it can fool someone's quick glance, but it falls apart under close scrutiny. These models driven by big data and statistics are never going to come close to anything truly useful, except in applications that call for fake text. I shudder to think what those are.

Comment by paul (paul-1) on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-23T18:36:57.948Z · LW · GW

I look at this from a functional point of view. If I were designing an AGI, what role would emotions play in its design? In other words, my concern is to design in emotions, not wait for them to emerge from my AGI. This implies that my AGI needs emotions in order to function more competently. I am NOT designing in emotions in order to better simulate a human, though that might be a design goal for some AGI projects.

So what are emotions, and why would an AGI need them? In humans and other animals, emotions are a global mechanism for changing the creature's behavior for some high-priority task. Fear, for example, readies a human for a fight-or-flight response, sacrificing some things (energy usage) for others (speed of response, focused attention). Such things may be needed in an AGI I'm designing. A battlefield AGI or robot, for example, might need an analogous fear emotion to respond to a perceived threat (or when instructed to do so by a controlling human).

Obviously, the change brought about by "fear" in my AGI will be different from the six qualities you describe here. For example, my battlefield robot would temporarily suspend any ongoing maintenance activities. This is analogous to a change in its attention. It might rev up its engines in preparation for a fight-or-flight response. Depending on the nature of the threat, it might change the configuration of its sensors. For example, it may turn on a high-resolution radar that is normally off to save energy. Generally, as in humans, emotion is a widespread reallocation of the AGI's resources for a particular perceived purpose.
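A minimal sketch in Python of what I mean by a widespread reallocation (the resource names and numbers are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    maintenance_active: bool = True
    engine_power: float = 0.3   # fraction of maximum output
    radar_active: bool = False  # high-resolution radar is costly, off by default

# "Fear" is not one local change but a global shift in how the robot
# allocates its resources once a threat is perceived.
def enter_fear_state(state: RobotState) -> None:
    state.maintenance_active = False  # suspend maintenance (a shift in attention)
    state.engine_power = 1.0          # rev up for fight or flight
    state.radar_active = True         # reconfigure sensors despite the energy cost

robot = RobotState()
enter_fear_state(robot)
print(robot)
```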

Comment by paul (paul-1) on Why didn't Agoric Computing become popular? · 2019-02-16T22:54:02.437Z · LW · GW

Agoric Computing seems like a new name for a very common mechanism employed by many programs in the software industry for decades. It is quite common to want to balance the use of resources such as time, memory, disk space, etc. Accurately estimating these things ahead of their use may consume substantial resources by itself. Instead, a much simpler formula is associated with each type of resource usage, and it stands as a proxy for the actual cost. Some kind of control program uses these cost functions to decide how best to allocate tasks and use actual resources. The algorithms that compute costs and manipulate the market can be as simple or as complex as the designer desires.

This control program can be thought of as an operating system, but the same mechanism might also be used among tasks within a single process. This might result in markets within markets.
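A sketch in Python of the kind of proxy-cost scheduling I have in mind (the cost formulas, prices, and task fields are invented for illustration):

```python
# Each task carries a cheap proxy formula for its resource cost rather
# than an exact measurement, which would itself be expensive to obtain.
tasks = [
    {"name": "compress_logs", "cpu_secs": 5.0, "mem_mb": 50},
    {"name": "reindex_db",    "cpu_secs": 2.0, "mem_mb": 400},
    {"name": "send_report",   "cpu_secs": 0.5, "mem_mb": 10},
]

# The "market": per-unit prices reflecting current resource scarcity.
prices = {"cpu_secs": 1.0, "mem_mb": 0.01}

def proxy_cost(task):
    return sum(prices[r] * task[r] for r in prices)

# The control program runs the cheapest tasks first; the pricing and
# scheduling rules can be as simple or as complex as the designer wants.
for task in sorted(tasks, key=proxy_cost):
    print(f"{task['name']}: cost {proxy_cost(task):.2f}")
```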

I doubt many software engineers would think of these things in terms of the market analogy. For one thing, they would gain little by constraining their thinking to a market-based system. I suspect many software engineers might be fascinated to think of such things in terms of markets, but only for curiosity's sake. I don't see how this point of view really solves any problems for which they don't already have a solution.