Comments

Comment by DuncanSharpe on What makes people intellectually active? · 2023-10-20T07:24:20.997Z · LW · GW

The main difference is having a thorough map of the territory. The stage before having worthwhile ideas is a mapping exercise: finding out what is already known about a topic, learning what existing practitioners in the field are able to do, and how they understand it. As you learn how the different aspects of the field are connected, you can start having your own useful ideas - and most of the time you'll find your idea is already known and part of the field. But as you continue to map and explore, you may come across ideas that don't seem to be in the literature, and one of these may be a genuinely original idea.

There is a big difference between being intellectually curious enough to know something about a field and having dug through it sufficiently to have a decent map of it. Every field also attracts crackpot ideas, and the easiest test for them is often to ask the originator of the idea to explain the current mainstream theory. If they can't, you can save your time on their alternative.

For me one motivator is this: there is no real difference between having an idea that turns out to be known already and having an original idea. It's the same act of thinking, just performed before you've reached the frontier of the known world. It was my idea, and it was original once. And an idea that turns out to be already established is much more likely to be correct than an idea that nobody else is talking about yet - independently rediscovering it is a sign of being on the right track.

Comment by DuncanSharpe on [deleted post] 2023-04-11T17:53:45.138Z

To me the very notion of an AI system not having any goals at all seems inherently wrong. If the system is doing something - even if that something is just reasoning - it must have some means of deciding what to do out of the infinite pool of things that could possibly be done. Whatever that decision mechanism is, it defines the goal.

Goal-directed behaviour can be as simple as what a central heating thermostat does. An AI could very possibly have no internal representation of what its own goal is, but if it is carrying out computations, it almost certainly has something which directs what sort of computations it's expected to carry out, and that is quite enough to define a goal for it.
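
To make the thermostat point concrete, here is a minimal sketch of such a system (the function names, parameters, and dynamics are my own illustration, not anything from the comment). The goal - hold the room near the setpoint - is never stored or represented anywhere; it exists only implicitly in the comparison rule that selects the next action.

```python
# Minimal sketch: a thermostat as the simplest goal-directed system.
# The "goal" is implicit in the comparison rule below; the system has
# no internal representation of it anywhere.

def thermostat_step(temp: float, setpoint: float, heating: bool, band: float = 0.5) -> bool:
    """Decide whether the heater should be on for the next time step."""
    if temp < setpoint - band:
        return True           # too cold: switch the heater on
    if temp > setpoint + band:
        return False          # warm enough: switch it off
    return heating            # inside the band: keep the previous state (hysteresis)

def simulate(hours: int = 10, setpoint: float = 20.0) -> None:
    temp, heating = 15.0, False
    for hour in range(hours):
        heating = thermostat_step(temp, setpoint, heating)
        # Crude room dynamics: heating adds 1.5 C/hour, losses remove 0.5 C/hour.
        temp += (1.5 if heating else 0.0) - 0.5
        print(f"hour {hour}: temp={temp:.1f} C, heater={'on' if heating else 'off'}")

if __name__ == "__main__":
    simulate()
```

Nothing in this code stores "keep the room at 20 degrees" as data; the goal is entirely implicit in which branch gets taken. That is the point above about AI: whatever mechanism selects which computations get carried out is already enough to define a goal.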