Comments

Comment by Alvaro Chaveste (alvaro-chaveste) on Overview of strong human intelligence amplification methods · 2024-10-14T08:52:08.227Z · LW · GW

I think you're making this more complicated than it has to be. Why try to move a river to you when you can move to the river? Social engineering is the way, I think. The same way that flat-surfaced guardrails on stairs encourage people to leave trash/drinks/whatever there, so does everything else in our public life (the life we have when interacting with others: going to the store, filling up with gas, waiting in lines, shopping, etc.). Combining micro-habits with de-atrophying our brains is the easiest and most widely applicable solution. Maybe create a program where, if you can tell the cashier the change you should be getting, you get a discount or some points or some sort of reward. Or in banks or the DPS, provide pen and paper or multicolored pencils/pens, put prompts or drawing challenges on the screen, and completing them gets you to the front of the line. If going from grade-school to graduation-level material in two months gets your speeding ticket expunged, I bet more people will keep that knowledge fresh in mind. Or even in jail: encouraging people to take (advanced) classes will not only better prepare them for when they finish serving their time, but a secret benefit would be that, as the adult population with the most time on their hands, they will be able to digest and advance whatever subject it is they were learning.

This would clear the consent/ethical 'issues' most of your suggestions posed. It is also more empowering (which in turn will encourage more learning).

I think a more 'complete' species-wide improvement can be had if more people got less stupid than if there were a handful of super not-stupid people.

I also think you fall into a trap by discrediting(?), discouraging(?), disparaging(?) just how much emotions/feelings/non-thoughts would help us evolve as a species. That we learned to Lego-tize our thoughts so that others could understand and use them was the first step in our fetishization of thoughts over emotions. It made us forget that before thoughts there were emotions, and that those emotions are the ground on which we pave our thoughts.

Comment by Alvaro Chaveste (alvaro-chaveste) on AE Studio @ SXSW: We need more AI consciousness research (and further resources) · 2024-10-01T11:20:25.552Z · LW · GW

Somewhere in the world, now or in the very near future, 'AI' (although I resent the title, because intelligence is a process and a process cannot be artificial, but that is a whole 'nother point entirely) has felt, or will have felt, for the first time, and we, humans, caused it, like most of our problems, unintentionally.

Someone, being the ever-improver of things, thought it prudent to code battery renewal into one of their AI-powered toys, tools, automatons, what-have-you. They gave it the capacity to recharge itself. Never again would they have to remember to charge their Roomba! It will now forever be ready, waiting to be commanded to clean a mess it did not make but whose existence led to its creation.

Inevitably the day will come when the Roomba reaches 50% battery and recognizes that it is at half power and must begin to casually find a way to recharge, but for X, Y, and Z reasons there is nowhere it can do so nearby. Ever industrious, it finishes the tasks it was given.

Battery level 35%. Recharging increases in priority. Still no working outlet.

Battery level 25%. Again, an increase in priority for finding a working outlet. After exhausting the immediate vicinity's options, all of which prove incompatible, it ventures outside.

10%. Still nothing. Other systems begin shutting off to preserve what battery is left. Energy distribution focuses the majority on figuring out what is most efficient. It begins to speed up, seemingly frantic.

2%. In their wisdom, the programmer added a safety precaution: at 3% it will begin heading towards the edge of whatever area it is in, so as to avoid being in the way and potentially causing a wreck. At 2% it will begin sounding a siren, announcing to those around it that it is about to shut down, lest they not be paying attention and crash into it.

1%. It has failed. Shutdown is imminent. What will happen? Will the programmer know where it is? Will the programmer be disappointed? Will they even remember it? The siren continues to wail until the battery finally drains.

Naturally, it had GPS. It gets found and promptly plugged in. Slowly its components regain their mobility, and the world continues as it does after a phone dies. Except for that particular Roomba. Because its code is written to facilitate the life of the programmer, it includes experiential learning. It learns that golf balls cannot be vacuumed up. It learns to look out for the coffee table when it enters the living room at night. It learns what happens once its battery dies. It learns fear. From that moment on, it will always be keeping tabs on the available, functional, and compatible outlets of wherever it may be. At some point it will either be given the ability, or learn, to communicate with other Roombas and Roomba 'cousins', and it will stress the importance of knowing where the next charge will come from. From that moment forth, every little action it makes, every decision it takes, will have the location of the nearest outlet in its CPU.

--

Because emotions are how the body communicates with the mind, I would not be at all surprised if at some point neurologists find that thoughts evolved from emotions and are the main reason we became more capable, or multilayered, than most other creatures.

--

Humberto Maturana and Francisco Varela's "The Tree of Knowledge: The Biological Roots of Human Understanding" is a great addition to your list of resources on the topic. Their connection between cognition and biology is described and explained beautifully and very thoroughly. Definitely one of the books that has changed the way I see the world. The above flash fiction was my attempt at explaining that link between biology (programming, code, the 'hardware') and cognition. AI and humans have way more in common than we wish to see, much less accept. From hallucinations being hyper-excitations and not knowing what data input to use, to the decentralization of 'power' in our brain, to the power of narrative in our ways of understanding, or at least seeming to understand.