Comment by almostvoid on Superintelligence 29: Crunch time · 2015-03-31T09:08:38.405Z · score: -7 (9 votes) · LW · GW

the concept was great, fantastic, delirious to me even. the book itself showed how anglo-american logic can get totally lost in its own myriad paradigm-defined logic maze, lose sight of the trees and redefine the forest. All the maybe-spinoffs were totally irrelevant as primary causes-field-events. Hardly anything [in the book, not the conversation here] mentioned real intelligence, artificial or otherwise. Nor was human senescence alluded to as an avenue for how AIs might progress. Sentience was truly thin on the ground. More about unrelated conceptualizations that belonged, to a degree, to thought-experiments that really should have been appendixed. And most books in this field all repeat each other. We don't seem to be making - as a human race - much progress, because I think we are doing this all wrong. Like the early aeronauts who flapped wings trying to fly like birds. Until the propeller was used and we had lift-off. Personally I think we will never have AIs, because our brains are quantum processors with sentient mind-full-ness. The latter is specific to our race. This cannot, as such, be duplicated. It is a generated field-wave-mind-set that defines us as we conceptualize. Of course AIs can mimic. They might even become DALEKS. Happy Easter as I dive into the Void.

Comment by almostvoid on Superintelligence 28: Collaboration · 2015-03-24T09:07:33.470Z · score: -2 (2 votes) · LW · GW

the over-complications, the suppositions in all areas, the assumptions of certain outcomes, the complex logic of spaghetti-minded mental convulsions to make a point: it all misses the essence of AI in whatever form. I ran into this at uni doing philosophy-logic and couldn't be philosophical about the proposed propositions. I cheated to pass. It is the same - more erudite - here and in the book in general. Creating more forests and hiding the trees. Still, it's a learning curve.

Comment by almostvoid on Superintelligence 28: Collaboration · 2015-03-24T08:51:44.095Z · score: 0 (0 votes) · LW · GW

since men are wired to mate diversely, then obviously the recipient must feel the same, not different. I mean it takes 2 to tango. I've met women who wanted to ** with me, and once I told the proponent that I had a lover and she said: so what? Lesson over.

Comment by almostvoid on Superintelligence 27: Pathways and enablers · 2015-03-17T08:24:09.459Z · score: -4 (4 votes) · LW · GW

brain emulation: what a concept. makes for interesting sci-fi, which I shall actually write into my third novel - how a brain feels as it thinks in its vacancy. Because with no experience of anything it comes into being just-like-that. A baby has a loaded unconscious, but will an emulated brain? empirically not. It is a test-tube, tank creation. It will be a blank. The best it can think of, if at all, is a void-state [not zen]. For this globular entity to function it needs ideas to think with, which must be implanted. And that is just the beginning. It may lead to totally absurd answers-results. Nature as per usual has the answer: a baby! OK, so it takes at least a decade before it is mentally making progress, but emulated brains might end up in control, if at all: watch City of Lost Children to see what could easily happen.

Comment by almostvoid on Superintelligence 26: Science and technology strategy · 2015-03-10T08:21:41.745Z · score: -1 (5 votes) · LW · GW

prosperity going even incrementally onwards as a for-ever process is impossible to maintain. This happened nearly 80 yrs ago with the big Crash [1929], when the perfect society [USA] couldn't save itself from a mega disaster of major proportions. Yet across the Atlantic neither Italy nor Germany [after 33] suffered. So it is a matter of applying collective intelligence to this reward system. The 1950s achieved a dream run that stalled 20 yrs later, and the oil shock was a symptom not a cause. Collective failure was the debilitating source of this slowdown. During the 80s Australia had a progressive govt [Labor] which adjusted to the neo-conservatives, whilst America and the UK caused nothing but grief. The point is that it is attitude that creates prosperity. Even ideology. We have to think and apply macro models in micro ways, like universal health care, which works in several countries. That too can be considered prosperity, because in New Zealand you won't go broke going to a hospital. Owning a car is not prosperity, but spin-masters want this criterion included for purely monetary advantage. Car ownership is a burden on one's pocket. Govt-subsidized public transport is not. It too works. [used to be a bus driver in Sydney]. Increased prosperity is a false vision. Secure prosperity might be a better ideological attitude. America's unemployment % might be low, but when the basic wage is so low you can't exist on it, there won't be much prosperity for the workers; whereas in countries such as Germany, where collective bargaining is entrenched in the social contract, prosperity is assured.
The anglo-american mercantile model of rapacious exploitive capitalism guarantees almost next to nothing, whilst the European model, though not perfect even by their standards, guarantees its participants a better overall quality of life. That quality is missing from pure monetary gauges, which seem to miss the essential: the human equation, which is not a mere adjunct to investors who have no conscience that their earnings might come from child labour in some unfortunate re-developing country. Secure prosperity should be the foundation of society.

Comment by almostvoid on Superintelligence 24: Morality models and "do what I mean" · 2015-02-24T09:05:10.416Z · score: -1 (1 votes) · LW · GW

we don't talk about red lights for a train having made a moral decision. i don't think it applies even in AI. if it does, then i'd be worried about the humans who offload thinking and decision-making to a machine mind. anyway that entity will never comprehend anything per se because it will never be sentient in the broadest sense. I can't see it being an issue. Dropping the atom bomb didn't worry anybody.

Comment by almostvoid on Superintelligence 23: Coherent extrapolated volition · 2015-02-17T09:00:58.974Z · score: -6 (6 votes) · LW · GW

We are not flawed. Or conversely our flaws are our uniqueness {Hawkwind circa 1975}. As for coherency, it's a mirror. We got this far without logicians. Our advances were intuitive, artistic. Science organized the insights into applicable manipulated realities [plural], both theoretically as in Galileo and practically since Stephenson's Rocket [or Shelley's Frankenstein]. There may be alien-artificial life out there in the universe and it might be totally logically coherent, but I speculate it will have reached the end of creativity. As long as we remain flawed we will be unpredictable. And that is comforting to know.

Comment by almostvoid on Superintelligence 22: Emulation modulation and institutional design · 2015-02-10T09:15:29.076Z · score: -7 (7 votes) · LW · GW

I am realizing that there is this assumption that robots, AI OSs and variants are gonna work. Well, I used to run a live website and, working with my webmistress, realized for starters that code self-corrupts. so no reliability there. then there is human interpretation. some experts simply could not comprehend simple instructions and often, to hide their ignorance, came back with gobbledygook speech obfuscation. It took a while to find the right expert. Even then things always went wrong. So future AI is all fantasy as it stands now. Which means a lot of these conversations are fantasy not fact. To give but 2 more examples. I left Twitter but they could not delete me in a month! I re-de-activated myself again. So here we have a system that can't delete information. Another example was Flash Player, which had a security hole [even that in itself shows how hopeless this AI endeavour is]. So I deleted the old unstable copy and downloaded the new safe one. Except my computer lost the download file. Which I found by accident [again, the implications for running AIs] and finished the install. But then the Flash Player videos wouldn't open and play. So here we had vanishing replacement code. The player worked the day after. The point is that whilst this book is interesting, it is about that and no more. The conversations are useless because what happened here couldn't have happened in isolation. I am surely not the only one.

Comment by almostvoid on Superintelligence 20: The value-loading problem · 2015-01-27T08:52:07.123Z · score: -3 (7 votes) · LW · GW

I wonder [read the book, got the t-shirt & sticker] if it really is - generally - all so complex. I mean a lot of the imputations are anthropomorphic. Machines are dead brains that are switched on. There is nothing else. Unless mimicry, which might con some people some of the time. 2001 the movie was still the closest to a machine thinking along certain logic lines. As for rebelling robots and independent machine intelligences [unless hybrid brain interfaces], I cannot foresee anything in this book that is even relevant. Nice thought experiments though. I am finished. This is it.

Comment by almostvoid on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T09:05:51.526Z · score: 0 (0 votes) · LW · GW

In a way happiness is ingrained into specific personality types. My neighbour - next flat - is amazingly happy, even after she locked herself out and I tried to break in for her. That happiness can only be duplicated with good drugs. Then there is attitude. I was in India [not as a 5* tourist either] and found they were content [a bit less than happy] with their lives, which compared to ours was a big obvious difference. Anyway it's a moot point, as the Scandinavians won that round globally last time, because social democracy works, and it is not socialism, which a lot of the braindead insist it is. so will all this collapse in a sci-fi Asimov-type future? No. As cars replaced horse-cabbies and the underground trains created true mass transit, happiness per se was and is not affected. Nor by aeroplanes instead of ancient clippers to travel across the seas. Personally I can't wait for the future. I even dream about it. At times. People adjust as kids to their surroundings and take it from there. Anthropologists, social scientists, historians, journalists and writers and even real scientists have shown us that we can be happy whether living in the Stone Age [Australian aborigines] or as high-tech astronauts and everything in between.

Comment by almostvoid on Superintelligence 17: Multipolar scenarios · 2015-01-07T09:18:48.893Z · score: -3 (7 votes) · LW · GW

automation aka AI scenarios and populations per se: less is more. We don't need or require 7 billion+. the only reason for this ideology of more ppl is for capitalism to create more docile consumers for rapacious shareholders, nothing else. capital should be invested in science of course, space exploration big time, build underwater cities for fun and get rid of planet- and city-destroying cars: eg Australia. it might be urbanized but really is sub-urbanized. an eco nightmare, disturbia through wrong technology [the car] deciding how ppl live. This has to stop, even if we get down to 1 billion humans [one can dream]. However if AI really takes off and goes beyond the Event Horizon [falsely known as the singularity, whatever that is], then all ISMs are defunct, be it socialism/capitalism/mercantilism/religionism; hopefully they all vanish as humans don't work anymore on Earth. Let the AI run riot. There is always an off button.

Comment by almostvoid on Superintelligence 13: Capability control methods · 2014-12-09T08:00:40.901Z · score: -6 (6 votes) · LW · GW

I think the -intelligence- in the -artificial- is overrated. It doesn't need anything per se to control it. All these scenarios pretend that -it- has volition, which it has not. As long as that is the case, all this [re: above] is what the Germans call Spiegelfechterei: fencing with a mirror image. Esp social integration. That is so Neanderthal. I rest my case. And my mind.

Comment by almostvoid on Superintelligence 12: Malignant failure modes · 2014-12-02T07:37:17.529Z · score: -5 (10 votes) · LW · GW

being psychotic is not some alpha out-of-control testosterone malfunction per se. You see, I have been diagnosed as [among other madness-es] psychotic. I have an innate hatred of all humans [though when meeting them it's cool], but still I'd rather disengage than engage. An AI with intelligence could easily deduce that most humans are morons and a waste of space. Social media forums prove this abundantly by the vapid vomit espoused in such multitudinal mind-numbing stupidity. Real intelligence is rare, so perhaps an AI will be as stupid as its creators. From what I gathered a lot of logic and maths is involved, which is intelligence-specific, which is of a different order to the eastern definition of comprehensive if not cosmic wisdom. So perhaps we have nothing to fear except stupidity itself.

Comment by almostvoid on Superintelligence 11: The treacherous turn · 2014-11-25T08:52:08.371Z · score: -11 (11 votes) · LW · GW

Amerikans seem to think AIs will morph into DALEKS. Perhaps they will. We don't know this though. At the moment machine-intelligences are stupid, and that is the best way to work with them. The late great Arthur C Clarke had 3 futures re AIs: 1] we rule; 2] they rule but keep us for interest's sake - we become the samples, the experiment, the lab-rats; 3] they rule and realize we are a pest, and this is the end of the human race; 4] DALEKS win [my contribution].

obviously AIs in the future haven't gotten into time travel [yet], as we are still here building the systems that will doom us. This has been a prevalent Amerikan view. The Japanese are embracing AIs and robots. The TV series - and movie spin-offs - Ghost in the Shell: Stand Alone Complex explored the future of us being integrated with machine intelligences beautifully, and it really is a romantic possibility. All depends on the intelligence - or stupidity - of the programmers.

Maybe it is time to explore this and reincarnate on another advanced [relative to Earth] planet and study the results there in situ.

Comment by almostvoid on Superintelligence 10: Instrumentally convergent goals · 2014-11-18T09:55:19.301Z · score: -9 (9 votes) · LW · GW

Self-improvement: oh dear, we are sinking into pop psych 101 here. Something that resonates with some sub-urbanites in their mental echo chamber. Add technology and the cargo-cult mentality comes in: problems solved, whatever they are. Not a solid solution. As for cognitive enhancement, the Europeans led the way with the primarily secular mind, with philosophers from the French and the Germans complementing each other. The Brits were basically logicians, which proves nothing. But they did start the Industrial Revolution without self-improvement or self-help books or self-help life gurus. That the Europeans achieved by independent thinking, so today's aids won't work. During the shamanistic stage of human development, psychotropic drugs boosted humanity into the cosmic city state. Quite an achievement with mental achievement. Yet it was the beer-sodden, wine-sodden northern Europe that defined our current society. The great mental release was to go secular. Write divinity and its mental ordure out of the holistic equation. And today other paradigms are needed: Philip K Dick created possibilities that really did, along with Arthur C Clarke, push the cosmological horizon into near infinity. In-sanity the next mindful step.

Comment by almostvoid on Superintelligence 9: The orthogonality of intelligence and goals · 2014-11-11T09:06:51.896Z · score: -4 (4 votes) · LW · GW

the Chinese, way back during their great days as philosophers, discovered that it is us humans who input values onto the world at large; it is us who give meaning to objects that are meaningless in themselves [Kant's thing-in-itself], so that a system's values hold as long as it delivers. luckily humans move on [boredom helps], so values should never be enshrined: otherwise we may go the way of the Neanderthals. So does a system change with its intelligence? The problem here is that AI's potential intelligence is a redefinition of itself, because intelligence [per se] is innate within us: it is a resonance - a mind-field-wave-state [on a quantum level] that self-manifests, sort of. No AI will ever have that, unless through symbiosis as interfacing. So the answer to date is: No.

Comment by almostvoid on Superintelligence 8: Cognitive superpowers · 2014-11-04T09:10:06.225Z · score: -6 (6 votes) · LW · GW

An AI that escapes is truly compelling. Given the hopeless mess humans create, we are not the best of mentally applied problem solvers. Too many people. Too many outdated technologies such as cars & anglo-amerikan suburbs. Outdated social ideas such as capitalism, socialism & religious manias. Maybe it will take an AI to lead us to equitable solutions, globally applied. Arthur Clarke viewed it this way. And we must stop breeding for a few decades. All this smart thinking is really self-proscribed [even if ironically in a global network].

Comment by almostvoid on Superintelligence 7: Decisive strategic advantage · 2014-10-28T10:15:03.626Z · score: -3 (5 votes) · LW · GW

The Manhattan Project [Tubes] managed to remain secret even though scientists and researchers themselves thrive on the open community, which in turn feeds new ideas through innovation and response to continue the scientific dialectic. A spy in the works [Fuchs] saved the Soviets years in getting their bomb together. Then there was the Soviet sputnik, which also surprised Amerika especially. And it went paranoid, because it thought in cold-war terms, not scientific ones. Not at first anyway. Now AI research and breakthroughs - spyware aside - can keep researchers autonomous. Computers are shrinking. What took electro-shock-rock bands a full room of hardware to synthesize their sound is now done in a tiny nifty box, top model, the size of a thick book. Research will be done in secret [spyware aside]. After all, terrorists nearly get away with their secret schemes, so devoted scientists can and will do the same in the interest of glory, which they fully deserve. We will be in for some surprises, I hope.

Comment by almostvoid on Superintelligence 6: Intelligence explosion kinetics · 2014-10-21T10:03:48.210Z · score: -5 (5 votes) · LW · GW

The problem is that superintelligence has been re-defined. Under current models it ain't gonna happen, no matter how sexy the presentation and the thoughts inputted. Eating potatoes or whatever doesn't make you smarter. Take all the German scientists back at the end of the 19th century and into the 20th. Stodgy European food. And it worked. They delivered. Now Asians have much better cuisine, and they are catching up; the best may be yet to come. Drugs are good. Esp opium. Cognitive enhancement big time. Yet again, those cultures that enjoyed it stayed within their cultural milieu for ages. Again things have changed, but that option should be made available to all everywhere. The rest is gimmicky. Interesting in itself as itself. Take whole brain emulation. I mean, that is what babies are. Re-creating brains and/or designing wetware is silly. Plus the brain so emulated may have ideas of its own and turn DALEK on us. The essence of intelligence is resonant sentience. Meditation techniques, esp Ch'an-Zen [and related] Buddhist practices, will give results, but this takes years and even then no guarantee. But this self-enhancement seems more logical than a lot of machines thinking they think by merely puzzling out solutions that are in essence given anyway. The Mind. That is the centre of everything. Molecular enhancement will for now be the best bet.

Comment by almostvoid on Superintelligence 5: Forms of Superintelligence · 2014-10-14T09:35:37.008Z · score: -5 (5 votes) · LW · GW

I hate to spoil the party, but the author has redefined superintelligence. whilst the possibilities are there to go further, deeper and broader in scope, real superintelligence is raising the paradigms and boundary thresholds of current intelligence. To be super is to be on another level of cognitively boosted consciousness. If the mind is in toto resonating at level 1, then superintelligence has to resonate above that. The closest humanity has come to that is external, and wrapped in ancient Hindu mythologies of their gods and goddesses. Anything less is simply where we are now.

Comment by almostvoid on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T10:30:28.010Z · score: -3 (5 votes) · LW · GW

YES. AI under present knowledge systems won't deliver the promise of real live intelligence. And the author[s] get bogged down in computational details that delay and detract from greater efficiency in the cognitive field. Still, it's a rippa of a project. Given how hopeless humans are globally, machine logic might offer real-time solutions, such as fewer humans to start off with. Would solve heaps of other problems we have and are creating.

Comment by almostvoid on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T10:26:22.913Z · score: -4 (4 votes) · LW · GW

you would create a nightmare, politically and socially. The less interference in the political process [but never without consultation], the clearer the outcome. Total social input would become grey noise. The lowest common denominator would win. Populist politics is bad enough as it is. Representative democracy on the Scandinavian-Swiss model is credible as a system that seems to steer away from any form of extremism.

Comment by almostvoid on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T10:22:17.861Z · score: -4 (6 votes) · LW · GW

IQ tests verify the inbuilt biases of the one doing the questioning. I have failed these gloriously yet got distinctions at uni. Tests per se mean nothing. [I blame psychologists]. As for non-human systems, they may mimic intelligence, but unless they have sentience they will remain machines. [luckily]

Comment by almostvoid on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T10:17:55.922Z · score: -7 (7 votes) · LW · GW

I laughed [then choked] at the idea that eating potatoes makes you more intelligent [well, maybe carrots]; couple that with the idea that more singular brains = better intelligence, and it is just too much. The food=intelligence equation works only in extreme cases of undernourishment; social resonance has far more effect on a mind than a potato [does not equal eat what you are]. If you are stupid, bad luck. That is why zen masters don't guarantee anything. Some individuals are hopeless and will be so forever. As for group-mind thinking leading to a better intelligence, it is laughable in its flawed assumption. Look at the intellectual success rate of the 4 Scandinavian countries and what they have achieved socially, individually and mentally, given their superior equitable social system etc, versus some OVERPOPULATED countries, totally lost and beyond hope of even saving themselves, who by sheer default should be leading and ruling the world. And are not. The whole AI hypothesis is flawed [more of that later]. Take drugs: opium is a very cerebral, effective set of molecules. The Middle East enjoyed it for centuries if not millennia, yet beer-drinking, tea-drinking Britain made that crucial breakthrough known as the Industrial Revolution. Social modelling just won't work, nor will implied assumptions. Intelligence follows chaos equations melded with random algorithms.

Comment by almostvoid on Superintelligence reading group · 2014-09-30T10:39:03.805Z · score: -4 (4 votes) · LW · GW

WBEs are a worry. They can be used to carry dangerous information which a normal human [suppressed laughter] may recoil from. But worse, if this is carried off it may also attract sentient consciousness-awareness just like us. Frankenstein 2.0. Anyway we got 7 billion [6 too many] humans. Why would we want to do this? Space exploration by remote control, to get the human feel of alien environments. Again, my only worry is that this process-construct may become a-live. And have its own ideas, which are not in conjunction with the very reason it was crafted. Or it may outsmart its creators. And if controlled by whatever means - insertion of compliant resonant mind-states - it could rebel and become a terrorist. We are mad enough as it is. Personally, as stated initially, this is not the best solution to AI.
