Comments

Comment by noxSL on Open thread, Dec. 29, 2014 - Jan 04, 2015 · 2014-12-29T14:59:34.481Z

The map is not the territory, as applied to AI

Since AIs, like humans, will be map-making entities, it is interesting to ponder how they will scientifically separate fact from fiction. You could imagine a religious AI that has read a lot of religious texts and then wants to meet or find god, or perhaps replace god, or something else entirely. To learn that this is not possible, it would need instruments and real-time data from the world to build a realistic world model, and even then that might not be enough to dissuade it from its beliefs.

In fact, I'd say there are millions of things an AI could take for fact that are not facts, everything from small incidents in raw data from the world up to highly abstract conceptual models of the world: the AI can at any point go down a path that is not correct. Using an algorithm to find unusual occurrences in data (like cancer on an MRI image) is different from connecting that data point to conceptual models that explain it. We could try to limit this functionality, but that seems counterproductive, since how would we know whether such a limitation is harming the AI's capabilities?
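As a toy illustration of that distinction, here is a minimal sketch (my own example, with made-up data and threshold, not anything from the thread) of a purely statistical anomaly detector: it can flag an unusual reading, but nothing in it connects the flag to any conceptual model of what caused it.

```python
# Toy anomaly detector: flags unusual values, but offers no explanation of them.
# Data and threshold are illustrative assumptions, not from any real system.

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard deviations from the mean."""
    n = len(readings)
    mean = sum(readings) / n
    std = (sum((x - mean) ** 2 for x in readings) / n) ** 0.5
    if std == 0:
        return []
    return [i for i, x in enumerate(readings) if abs(x - mean) / std > threshold]

readings = [1.02, 0.98, 1.01, 0.99, 1.00, 4.75, 1.03]  # one obvious outlier
print(flag_anomalies(readings))  # -> [5]; the detector says "unusual", never "why"
```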

Comment by noxSL on Open thread, July 21-27, 2014 · 2014-07-22T14:29:12.794Z

Regarding Kurzweil's claim that the added neocortex mass in humans produced a qualitative leap in human abilities, and that there are levels of abstraction in the neurons: it seems strange to me because 1) humans have had essentially the same brain for tens of thousands of years, yet progress was completely reliant on chance technological discoveries. Additional neurons enable additional complexity in signaling, but the signaling comes from the environment, so it seems to me humans may simply have gotten lucky with their environment, especially with things like electricity, electronics, and the Turing machine.

2) You could hypothetically have an AGI-type machine that is not connected to the physical world at all. It may be that human innovation was the result of the cortex being very good at local error correction and experimentation, which produced physical objects that behave deterministically but that humans didn't really design as a whole. This local error correction may not even be due to higher levels of abstraction, but rather to lower-level objects interfering with or triggering signaling paths that execute simple tasks, in which case the hypothetical AGI would need that local error correction before any higher level of abstraction.

And 3) it seems like this error correction is a result of our biological bodies having certain mechanical properties, so an AGI without complex signaling pathways connecting it to machinery in its environment may not be able to learn it.

Another question is how much of the brain is information from the environment, as opposed to some internal model, and whether there is a qualitative difference between some environments and others. Our brains seem to understand motion, placement of objects, volume, and similar properties quite intuitively, and I'd say that is a result of signaling paths in a physical body and of where the nervous system is placed in the world. We have eyes that observe things over time and in 3D space from different angles, which lets the brain and peripheral nervous system build highly complex signaling paths to deal with it, plus there is a continuous stream of error-correcting sensory feedback. So should there be a process for placing an AGI in such an environment, and how do we know which local pathways enable which output actions to modify the physical world?
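To make the "continuous stream of error-correcting sensory feedback" concrete, here is a minimal sketch (a toy example of my own, not anything from Kurzweil or the thread) of a local feedback loop: a proportional controller that nudges an actuator toward a target purely from a sensed error, with no higher-level model of the world at all.

```python
# Toy local error-correction loop: sense, compute error, nudge actuator, repeat.
# Gain, target, and step count are illustrative assumptions.

def run_feedback_loop(target, position=0.0, gain=0.3, steps=20):
    """Proportional controller: each step moves the actuator by gain * error."""
    trajectory = [position]
    for _ in range(steps):
        error = target - position        # sensed mismatch with the goal
        position += gain * error         # local correction, no world model involved
        trajectory.append(position)
    return trajectory

path = run_feedback_loop(target=1.0)
print(round(path[-1], 4))  # converges toward 1.0 without any abstract representation
```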

I have a feeling that humans are not generally intelligent, but rather have complex neural pathways for local error correction. That process may not even be computational in the usual sense, just a lot of local objects and states affecting each other as physics; but because the body and nervous system are encapsulated in definite forms, it behaves like a deterministic function, one that may be impossibly varied at the micro level.

Some random thoughts from a new guy; I would welcome critique. I am not stubborn and want to learn a lot more. I may have gotten things wrong that I once knew better, because I forget things I thought of before, but this is my current view.

Comment by noxSL on Wealth from Self-Replicating Robots · 2014-07-22T14:00:24.400Z

This is important because more robots could provide the economic growth needed to solve many urgent problems

Isn't the problem that you can't have economic growth with robots alone? Robots don't get paid and don't participate in the circular flow of the economy. You need a whole lot of new people earning salaries who will spend money to pay for all the robots, and apparently we don't have them.
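As a back-of-the-envelope sketch of that circular-flow point (my own toy numbers, not anything from the post), consider an economy where household demand is just a fixed fraction of wage income: if robots replace wage earners, the wage bill falls, and so does the spending that is supposed to buy the extra output.

```python
# Toy circular-flow model: demand comes only from wages, so shifting output
# from paid workers to unpaid robots shrinks the market for that output.
# All numbers (wages, output per unit, propensity to spend) are illustrative assumptions.

def household_demand(workers, wage=40_000, propensity_to_spend=0.9):
    """Total spending when household income is just the wage bill."""
    return workers * wage * propensity_to_spend

def total_output(workers, robots, output_per_unit=50_000):
    """Value of goods produced by workers and robots combined."""
    return (workers + robots) * output_per_unit

before = (total_output(workers=100, robots=0), household_demand(workers=100))
after = (total_output(workers=50, robots=50), household_demand(workers=50))

print("before:", before)  # (5000000, 3600000.0): output already exceeds wage-funded demand
print("after: ", after)   # (5000000, 1800000.0): same output, but demand has halved
```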