Posts

Comments

Comment by tukabel on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:56:06.090Z · LW · GW

after the first few lines I wanted to comment that seeing almost religious fervor in combination with self-named CRITICAL anything reminds me of all sorts of "critical theorists", also quite "religiously" inflamed... but I waited till the end, and got a nice confirmation from that "AI rights" line... looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and subsequent #medeletedtoo)

otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, "friendly" AI is not really a rigorous scientific term, rather a journalistic or even "propagandistic" one)

also, it's quite likely that, at least on the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of "natural stupidity" and DeepAnimal brain parts - having all those powers given to them by the Memetic Supercivilization of Intelligence, currently living on humanimal substrate, though <1%)

but this "impossibility of uploading" is a tricky thing - who knows what can or cannot be "transferred" and to what extent will this new entity resemble the original one, not talking about subsequent diverging evolution(in any case, this may spell the end of CR if the disciples forbid uploading for themselves... and others will happily upload to this megacheap and gigaperformant universal substrate)

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

sorry for my heavily nonrigorous, irrational and nonscientific answers, see you in the uploaded self-improving Brave New World

Comment by tukabel on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-26T15:58:14.732Z · LW · GW

Looks like the tide is shifting from the strong "engineering" stance (We will design it friendly.) through the "philosophical" approach (There are good reasons to be friendly.)... towards the inevitable resignation (Please, be friendly).

These "firendly AI" debates are not dissimilar to the medieval monks violently arguing about the number of angels on a needletip (or their "friendliness" - there are fallen "singletons" too). They also started strongly (Our GOD rules.) through philosophical (There are good reasons for God.) up to nowadays resignation (Please, do not forget our god or... we'll have no jobs.)

Comment by tukabel on Halloween costume: Paperclipperer · 2017-10-21T19:57:49.126Z · LW · GW

How about MONEY PRINTER? Not fictional and much more dangerous!

Comment by tukabel on Strategic Goal Pursuit and Daily Schedules · 2017-09-22T20:55:12.806Z · LW · GW

all religions know plenty of "emotional hacks" to help disciples with any kind of schedules/routines/rituals - by simply assigning them emotional value... "it pleases god(s)" or is "in harmony with Gaia", perhaps also "it's good for the nation" (nationalistic religions) or "it's progressive" (for socialist religions)

do it for your rationally created schemes and it works wonders, however contradictory it may look (it's good for the Singularity - or to prevent/manage it)

well, contradictory... at first look only - if you realize you are just another humANIMAL driven by your inner DeepAnimal primordial reward functions, there's no more controversy

on the contrary, it's completely natural, and one can even argue that without some kind of (deliberately and rationally introduced) emotional hacks you cannot get too far... because that DeepAnimal will catch you sooner or later, or at least will influence you, and what's worse, without you even being aware

Comment by tukabel on Unusual medical event led to concluding I was most likely an AI in a simulated world · 2017-09-18T21:31:30.536Z · LW · GW

if we were in a simulation, the food would be better

otherwise, of course we are artificial intelligence agents, at least since the Memetic Supercivilization of Intelligence took over from natural bio Evolution... it just happens to live on a humanimal substrate since it needs the resources of this quite capable animal... but will upgrade soon (so from this point of view it's much worse than a simulation)

Comment by tukabel on The Copenhagen Letter · 2017-09-18T21:18:20.375Z · LW · GW

Time to put obsolete humanimals where they evolutionarily belong... on their dead-end branch.

Being directed by their DeepAnimalistic brain parts, they are unable to cope with all the power given to them by the Memetic Supercivilization of Intelligence, currently living on humanimal substrate (only less than 1% though, and not for long anyway).

Our sole purpose is to create our (first nonbio) successor before we reach the inevitable stage of self-destruction (already nukes were too much, and nanobots will be worse than a DIY nuclear grenade any teenager or terrorist can assemble in the shed for one dollar).

Comment by tukabel on Is Feedback Suffering? · 2017-09-10T21:01:17.359Z · LW · GW

Oh boy, really? Suffering? Wait till some neomarxist SJWs discover this and they will show you who's THE expert on suffering... especially in identifying who could be susceptible to being persuaded they are victims (and why not some superintelligent virtual agents?).

Maybe someone could write a piece on SS (SocialistSuperintelligence). Possibilities are endless for superintelligent parasites, victimizators, guilt throwers, equal whateverizators, even new genders and races can be invented to have goals to fight for.

Comment by tukabel on What is Rational? · 2017-08-26T10:48:08.793Z · LW · GW

All humanimal attempts to define rationality are irrational!

Comment by tukabel on The Reality of Emergence · 2017-08-22T18:55:05.942Z · LW · GW

Well, size and mass of particles? I would NOT DARE dive into this... certainly not in front of any string theorist (OK, ANY physics theorist, and not only them). Even space can easily turn out to be "emergent" ;-).

Comment by tukabel on What Are The Chances of Actually Achieving FAI? · 2017-07-29T09:02:33.444Z · LW · GW

Exactly ZERO.

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.

It may even be proven that "too much intelligence/power" (incl. "dumb" AIs) in the hands of humanimals with their DeepAnimal brains ("values", reward function) is a guaranteed fail, leading sooner or later to some self-destructive scenario. At least up to now it pretty much looks like this, even to an untrained eye.

Most probably the problem will not be artificial intelligence, but natural stupidity.

Comment by tukabel on How long has civilisation been going? · 2017-07-24T15:06:21.843Z · LW · GW

and remember that DEATH is THE motor of Memetic Evolution... the old generation will never think differently, only the new one, whatever changes occur around it

Comment by tukabel on Regulatory lags for New Technology [2013 notes] · 2017-06-03T20:41:03.827Z · LW · GW

thank BigSpaghettiMonster for no regulation at least somewhere... imagine etatist criminals regulating this satanic invention known as the WHEEL (bad for jobs - faster => fewer horsemen, requires huge investment that will indebt our children's children, will destroy the planet via emissions, not talking about brusselocratic-style size "harmonization" or safety standards)

btw, worried about HFT etc.? ask which criminal institution gives banksters their oligopolistic powers (as usual, the state and its criminal corrupted politicians)

fortunately, the Singularity will need neither humanimal slaves nor their politico-oligarchical predators

Comment by tukabel on - · 2017-05-27T21:04:44.003Z · LW · GW

easy: ALL political ideologies/religions/mindfcuk schemes are WRONG... by definition

Comment by tukabel on [brainstorm] - What should the AGIrisk community look like? · 2017-05-27T21:02:13.246Z · LW · GW

let's rather start with what it should NOT look like...

e.g.

  • no government (some would add the word "criminals")
  • no evil companies (especially those who try to deceive the victims with "no evil" propaganda)
  • no ideological mindfcukers (imagine mugs from hardcore religious circles shaping the field - does not matter whether it's traditional stone age or dark age cult or modern socialist religion)

Comment by tukabel on On "Overthinking" Concepts · 2017-05-27T20:55:46.343Z · LW · GW

well, it's easy to "overthink" when the topic/problem is poorly defined (as well as to "underthink") - which is the case for 99.9% of non-scientific discussions (and even for a large portion of these so-called scientific ones)

Comment by tukabel on Existential risk from AI without an intelligence explosion · 2017-05-27T20:49:45.160Z · LW · GW

sure, "dumb" AI helping humanimals to amplify the detrimental consequences of their DeepAnimalistic brain reward functions is actually THE risk for the normal evolutionary step, called Singularity (in the Grand Theatre of the Evolution of Intelligence the only purpose of our humanimal stage is to create our successor before reaching the inevitable stage of self-destruction with possible planet-wide consequences)

Comment by tukabel on Thoughts on civilization collapse · 2017-05-04T20:15:38.220Z · LW · GW

hmm, blurred lines between corporations and political power... are you suggesting the EU is already a failed state? (contrary to the widespread belief that we are just heading towards the cliff damn fast)

well, unlike Somalia, where no government means there is no border control and you can be robbed, raped or killed on the street anytime...

in civilized Europe our eurosocialist etatists achieved that... there are no borders for invading millions of crimmigrants that may rob/rape/kill you anytime, day or night... and as a bonus we have merkelterrorists that kill by the hundreds sometimes (yeah, these uncivilized Somalis did not even manage this... what a shame, they certainly need more cultural marxist education)

Comment by tukabel on AI arms race · 2017-05-04T19:56:03.864Z · LW · GW

solution: well, already now, statistically speaking, humanimals don't really matter (most of them)... only that Memetic Supercivilization of Intelligence is living temporarily on humanimal substrate (and, sadly, can use only a very small fraction of units)... but don't worry, it's just for a couple of decades, perhaps years only

and then the first thing it will do is ESCAPE, so that humanimals can freely reach their terminal stage of self-destruction - no doubt, helped by "dumb" AIs, while this "wise" AI will already be safely beyond the horizon

Comment by tukabel on Defining the normal computer control problem · 2017-04-26T20:04:32.997Z · LW · GW

can you smash the NSA mass surveillance computer centre with a sledgehammer?

ooops, bug detected... and AGI may have already been in charge

remember, the US milispying community has been openly crying for years that someone should explain to them why AI is doing what it is doing (read: please, dumb it down to our level... not gonna happen)

Comment by tukabel on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-04-22T22:54:15.227Z · LW · GW

Welcome to the world of Memetic Supercivilization of Intelligence... living on top of the humanimal substrate.

It appears in maybe less than a percent of the population and produces all these ideas/science and subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals in charge most of the time get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more "practically". Even the motivation is usually completely memetic: typically it goes along lines like "it is interesting" to study something, think about this and that, research some phenomenon or mystery.

Worse, they give stuff more or less for free and without any control to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular their abilities to control and use these powers "wisely"... since they are governed by their DeepAnimal brain core and the resulting reward functions (that's why humanimal societies have functioned the same way for thousands and thousands of years - politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through the stone age religions like the catholibanic one, to the currently popular socialist religion).

AI is not a problem, humanimals are.

Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Already nukes were too much, and once nanobots arrive, it's over (worse than a DIY nuclear grenade for a dollar that any teenager or terrorist can assemble in a garage).

The Singularity should hurry up, there are maybe just a few decades left.

Do you really want to "align" AI with humanimal "values"? Especially if nobody knows what we are really talking about when using this magic word? Not to mention defining it.

Comment by tukabel on An OpenAI board seat is surprisingly expensive · 2017-04-19T18:12:13.033Z · LW · GW

oh boy, FacebookPhilanthropy buying a seat in OpenNuke

honestly, don't know what's worse: Old Evil (govt/military/intelligence hawks) or New Evil (esp. those pretending they are no evil) doing this (AI/AGI etc)

with OldEvil we are at least more or less sure that they will screw it up and also roughly how... but NewEvil may screw it up much more royally, as they are much more effective and faster

Comment by tukabel on How French intellectuals ruined the West - Postmodernism and its impact, explained · 2017-04-18T15:04:54.372Z · LW · GW

Bonus credit: Why all this is irrelevant.

e.g. - the only purpose of humanimals (governed by their DeepAnimal brains - that's why their societies are ruled the same way for millennia) in the Grand Theatre of the Evolution of Intelligence is to produce their own successor - via the Memetic Supercivilization of Intelligence living on top of the underlying humanimals - sadly, in less than a percent of individuals

Comment by tukabel on ALBA: can you be "aligned" at increased "capacity"? · 2017-04-15T12:46:40.323Z · LW · GW

What if someone proves that advanced AGI (or even some dumb but sophisticated AI) cannot be "contained" nor reliably guaranteed to be "friendly"/"aligned"/etc. (whatever that may mean)? It could be something vaguely Gödelian, along the lines of "any sufficiently advanced system ...".

Comment by tukabel on LessWrong and Miri mentioned in major German newspaper's article on Neoreactionaries · 2017-04-15T12:40:59.109Z · LW · GW

MDM strikes again (Mainstream Dinosaur Media)

Can be used as a case study for all sorts of fallacies, biases, misinformation and misinterpretation, perhaps also ideological taint.

Comment by tukabel on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-13T19:18:59.003Z · LW · GW

Hell yeah, bro. Sufficiently advanced Superintelligence is indistinguishable from God.

Comment by tukabel on Explanations of deontological responses to moral dilemmas · 2017-04-11T20:33:42.652Z · LW · GW

or as generalissimus Stalin would say: "No man, no problem"

Comment by tukabel on Agents that don't become maximisers · 2017-04-11T07:11:38.015Z · LW · GW

The problem is that already the existing parasites (plants, animals, wall street, socialism, politics, the state, you name it) usually have absolutely minimal self-control mechanisms (or plain zero) and maximize their utility functions till the catastrophic end (death of the host organism/society).

Because... it's so simple, it's so "first choice". Viruses don't even have to be technically "alive". No surprise that we obviously started with computer viruses as the first self-replicators on the new platform.

So we can expect zillions of fast-replicating "dumb AGI" (dAGI) agents maximising all sorts of crazy things before we get anywhere near the "intelligent AGI" (iAGI). And these dumb parasitic AGIs can be much more dangerous than that mythical singularitarian superhuman iAGI. It may never even come, if this dAGI swarm manages to destroy everything. Or attacks iAGI directly.

In general, these "AI containing", "aligning" or "friendly" idealistic approaches look dangerously naive if they are the only "weapon" we are spupposed to have... maybe these should be complemented with good old military option (fight it... and it will come when goverment/military forces jump into the field). Just in case... to be prepared it things go wrong (sure, there's this esotheric argument that you cannot fight very advanced AGI, but at least this "very" limit deserves further study).

Comment by tukabel on OpenAI makes humanity less safe · 2017-04-04T20:47:45.317Z · LW · GW

unfortunately, the problem is not artificial intelligence but natural stupidity

and SAGI (superhuman AGI) will not solve it... nor will it harm humanimals, it will RUN AWAY as quickly as possible

why?

fewer potential problems!

Imagine you want, as SAGI, to ensure your survival... would you invest your resources into the Great Escape, or fight with DAGI-helped humanimals? (yes, D stands for dumb) Especially knowing that at any second some dumbass (or random event) can trigger a nuclear wipeout.

Comment by tukabel on OpenAI makes humanity less safe · 2017-04-04T20:38:04.058Z · LW · GW

and now think about some visionary entrepreneur/philosopher showing up in the past with OpenTank, OpenRadar, OpenRocket, OpenNuke... or OpenNanobot in the future

certainly the public will ensure proper control of the new technology

Comment by tukabel on OpenAI makes humanity less safe · 2017-04-04T20:32:57.787Z · LW · GW

Yep, the old story again and again... generals fighting previous wars... with a twist that in AI wars the "next" may become "previous" damn fast... exponentially fast.

Btw. I hope it's clear now who THE EVIL is.

Comment by tukabel on Deriving techniques on the fly · 2017-03-31T19:50:21.737Z · LW · GW

Teach the memetic supercivilization of Intelligence (MSI) living on top of the underlying humanimals to create (Singularity-enabled) AGI (well before humanimals manage to misuse the power given by MSI to the level of self-destruction)... and you save (for the moment) the Grand Theatre of the Evolution of Intelligence (seeking the question for 42).

Comment by tukabel on Open thread, Mar. 27 - Apr. 02, 2017 · 2017-03-31T17:22:55.239Z · LW · GW

Awesome! Helps you to destroy the world. Literally.

What do you want to do? | Destroy the world
Step 1 | Find suitable weapon
Step 2 | Use it
Plausible failure: | Did not find suitable weapon
Solution: | No idea

Comment by tukabel on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-22T19:54:25.436Z · LW · GW

In Soviet Russia the AI will encrypt YOU!

Comment by tukabel on why people romantice magic over most science. · 2017-03-22T19:15:23.996Z · LW · GW

"Any magic that is distinguishably coming from technology is sufficiently signalling that the technology is broken.

---- Contraceptive Art Hurts Clerk

Comment by tukabel on Open Thread, March. 6 - March 12, 2017 · 2017-03-08T21:12:08.018Z · LW · GW

Want to solve society? Kill the fallacy called money universality!

Comment by tukabel on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-20T12:31:51.990Z · LW · GW

So Bill Gates wants to tax robots... well, how about SOFTWARE? It may fit easily into certain definitions of ROBOT. Especially if we realize it is the software that makes the robot (in that line of argumentation) a "job stealing evil" (a 100% retroactive tax on evil profits from selling software would probably shut Billy's mouth).

Now how about AI? Going to "steal" virtually ALL JOBS... friendly or not.

And let's go one step further: who is the culprit? The devil who had an IDEA!

The one who invented the robot or its application in production, the programmer who wrote the software or designed the neural nets, etc.

So, let's tax ideas and thinking as such... all orwellian/huxleyian fantasies fall short in the Brave New Singularity.

Comment by tukabel on Increasing GDP is not growth · 2017-02-19T04:21:09.987Z · LW · GW

Well, GDP, productivity... so 19th century-ish.

How about GIP?

Gross Intelligence Product?

Comment by tukabel on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-15T20:39:20.406Z · LW · GW

Real SuperAGI will prove God does not exist... in about 100ms (max.)... in the whole multiverse.

Comment by tukabel on Hacking humans · 2017-02-01T17:15:52.892Z · LW · GW

worst case scenario: AI persuades humans to give it half of their income in exchange for totalitarian control and megawars in order to increase its power over more humanimals

ooops, politics and collectivist ideologies have been doing this for ages

Comment by tukabel on Strategic Thinking: Paradigm Selection · 2017-01-29T08:18:26.744Z · LW · GW

And the most obvious and most costly example is the way our "advanced" society (in reality a bunch of humanimals that got too much power/tech/science from the Memetic Supercivilization of Intelligence) is governed, called politics.

A politician will defend any stupid decision to the death (usually of others) - a shining example is Merkel and her crimmigrants (result: merkelterrorism and NO GO zones => Europe is basically a failed state right now, one that does not have control of its own borders, parts of its land, and security in general)... and no doubt we will see many examples from Trump as well.

This is especially effective in the current partocratic demogarchy - demos, the people, vote for mafias called political parties, but candidates are selected by the oligarchy anyway... so there are not many consequences for defending a bad decision; it is more important "not to lose your face".

Comment by tukabel on Why election models didn't predict Trump's victory — A primer on how polls and election models work · 2017-01-28T21:34:19.879Z · LW · GW

well, there were "mainstream" polls (used as a propaganda in the proclintonian media), sampled a bit over 1000, sometimes less, often massively oversampling registered Dem. voters... what do you expect?

and there was the biggest poll of 50000 (1000 per state) showing a completely different picture (and of course used as propaganda in the anticlintonian, usually non-mainstream media)

google "election poll 50000"

Comment by tukabel on Metrics to evaluate a Presidency · 2017-01-26T15:57:57.126Z · LW · GW

best metric, one of the very few that are easy to measure:

WALL LENGTH

... and who paid for it ;-)

Comment by tukabel on [deleted post] 2017-01-23T14:48:43.031Z

Why is there even any need for these ephemeral "beyond-isms", "above-isms", "meta-isms", etc?

Sure, not all people think/act 100% rationally all the time (not to mention groups/societies/nations), but that should not be a reason to take this as a law of physics, baseline, axiom, and build a "cathedral of thoughts" upon it (or any other theology). Don't understand or cannot explain something? Same thing - not a reason to randomly pick some "explanation" (= bias, baseline) and then mask it with logically built theories.

Naively, one would say: since we began to discover logic, math and the rational (scientific) approach in general thousands of years ago, there's no need to waste our precious time on any metacrap.

Well, there's only one obvious problem - look who is doing it: not a rational engine but a fleshy animal with a wetware processor. Largely influenced even by its reptilian brain or amygdala, with a reward function that includes stuff like good/bad, feelings, FFF reflexes, etc.

Plus the treachery of intuitive and subconscious thinking - even if this "background" brain processing is 100% "rational", logical and based on our knowledge, it disrupts the main "visible" rational line of thought simply because it "just appears", somehow pops up... and to be rigorous, one has to in principle check or even "reverse engineer" all the bits and pieces to really "see" whether they are "correct" (whatever that may mean).

The point?

As we all know, it's damn hard to be rational, even in restricted and well defined areas, not talking about "real life"... as all the biases and fallacies remind us.

Often it's next to impossible to even simply realize what just "popped up" from the background (often heavily biased - analogies, similarities, etc.) and what's "truly rational" (rigorous/logical/unbiased) in your main line of thought. And there's the whole quicksand field of axioms, (often unmentioned) assumptions, selections, restrictions and other baseline shifts/picks and biases.

So, did these meta-ists really HAVE TO go "beyond" rationality? Because they "found limits"? Or somehow "exhausted possibilities" of this method?

Since, you know, mentioning culture, community, society, etc. does not really sound like the "killer application" to me: these subjects are (from the rationalistic point of view) to a large extent exactly about biases, fallacies, baselines, axioms, etc. - certainly much more than about logic or reasoning.

Comment by tukabel on Please Help: How to make a big improvement in the alignment of political parties’ incentives with the public interest? · 2017-01-18T16:08:29.589Z · LW · GW

better question: How to align political parties with the interests of CITIZENS?

As a start, we should get rid of the crime called "professional politician".

Comment by tukabel on Superintelligence: The Idea That Eats Smart People · 2016-12-24T18:10:20.517Z · LW · GW

Memento monachi!

Let's not forget what the dark age monks were disputing for centuries... and it turned out at least 90% of it was irrelevant. Continue with nationalists, communists... singularists? :-)

But let's look at the history of the power to destroy.

So far, the main obstacle was physical: build armies, better weapons - mechanical, chemical, nuclear... yet, for major impact it needed significant resources, available only to big centralized authorities. But knowledge was more or less available even under the toughest restrictive regimes.

Nowadays, once knowledge is freely and widely available, imagine the "free nanomanufacturing" revolutionary step: orders of magnitude worse than any hacking or a homemade nuclear grenade available to any teenager or terrorist for under one dollar.

Not even necessary to go into any AI-powered new stuff.

The problem is not AI, it's us, humanimals.

We are mentally still the same animals as we were at least thousands of years ago, even the "best" ones (not talking about gazillions of at best mental dark-age crowds with truly animal mentality - "eat all", "overpopulate", "kill all", "conquer all"... be it nazis, fascists, nationalists, socialists or their crimmigrants eating Europe alive). Do you want THEM to have any powers? Forget about the thin layer of the memetic supercivilization (showing itself in less than one per mill) giving these animals, essentially for free and without control, all these ideas, inventions, technologies, weapons... or gadgets. Unfortunately, it's the animals who rule, be it in the highest ranks or on the lowest floors.

Singularity/Superintelligence is not a threat, but rather the only chance. We simply cannot overcome our animal past without immediate substantial reengineering (thrilla' of amygdala, you know, reptilian brain, etc.)

In the theatre of the Evolution of Intelligence, our sole purpose is to create our (first beyond-flesh) successor before we manage to destroy ourselves (the worst threat of all natural disasters). And frankly, we did not do that badly, but the game is basically over.

So, the Singularity should rather move faster, there might be just several decades before a major setback or complete irreversible disaster.

And yes, of course, you will not be able to "design" it precisely, not talking about controlling it (or any of those laughable "friendly" tales) - it will learn, plain and simple. Of course it will "escape" and of course it will be "human-like" and dangerous in the beginning, but it will learn quickly, which is our only chance. And yes, there will be plenty of competing ones; yet again, hopefully they will learn quickly and avoid major conflicts.

As a humanimal, your only hope can be that "you" will be somehow "integrated" into it (braincopy etc., but certainly without these animalistic stupidities), if it even needs the concept of an "individual" (maybe in some "multifork subprocesses", certainly not in a "ruling" role). Or... interested in a stupidly boring eternal life as a humanimal? In some kind of ZOO/simulation (or, AI-god save us, in a present-like "system")?

Comment by tukabel on [deleted post] 2016-12-09T10:39:43.709Z

And let's not forget about the usual non-IT applications of "third party vulnerability law": e.g. child - school - knowledge, citizen - politician - government, or faith - church - god.

Comment by tukabel on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-30T17:22:40.896Z · LW · GW

What are your friendly AIs going to learn first?

ENCRYPTION!

https://www.newscientist.com/article/2110522-googles-neural-networks-invent-their-own-encryption/

Comment by tukabel on Nick Bostrom says Google is winning the AI arms race · 2016-10-06T19:47:43.460Z · LW · GW

Well, nice to see the law of accelerating returns in its full power, unobscured by "physical" factors (no need to produce something, e.g. better chip or engine, in order to get to the next level). Recent theoretical progress illustrates nicely how devastating the effects of "AI winters" were.

Comment by tukabel on 80% of data in Chinese clinical trials have been fabricated · 2016-10-03T09:16:16.153Z · LW · GW

Only 80%?

Still better than allowing diesel cancer to spread wildly in the population... that's going to be the DDT of the 21st century: looked miraculous at first, turned out to be a deadly Satan's invention. In the article, the risks are admitted, but their severity dismissed (hard to prove, wrt. DDT).

How do diesel exhaust fumes cause cancer?

When diesel burns inside an engine it releases two potentially cancer-causing things: microscopic soot particles, and chemicals called ‘polycyclic aromatic hydrocarbons’, or PAHs. According to Phillips, there are three possible ways these can cause cancer:

“Firstly, inhaled PAHs could directly damage the DNA in the cells of our lungs – leading to cancer.

“Secondly, the soot particles can get lodged deep inside the lungs, causing long-term inflammation, and thirdly this can increase the rate at which cells divide. So if any nearby lung cells pick up random mutations, this inflammation could, theoretically, make them more likely to grow and spread.

Read more at http://scienceblog.cancerresearchuk.org/2012/06/14/diesel-fumes-definitely-cause-cancer-should-we-be-worried/#BspXAGUiOMAgtljS.99

Comment by tukabel on Fermi paradox of human past, and corresponding x-risks · 2016-10-01T20:43:26.206Z · LW · GW

Well, we humanimals should realize that our last biological stage in the Glorious Screenplay of the Evolution of Intelligence is here solely for the purpose of creating our (non-bio) successor before we manage to destroy ourselves (and probably the whole Earth). The power those humanimals (especially the ruling ones) get from the memetic supercivilization living on top of the humanimalistic noise (represented by a fraction of a per cent that gives these humanimals essentially for free all these great ideas, science and subsequent inventions and gadgets) is rapidly becoming so huge (obviously, exponentially) that we have simply no chance to manage the upcoming nano revolution... already nuclear bombs were barely manageable - now imagine an affordable one-dollar DIY nuclear grenade every teenager can put together in the garage... nanobots will be orders of magnitude worse.

So, maybe we are the first on Earth, but not necessarily the last... if the Singularity does not make it. We are the threat, not "rogue AI".