Questions on the human path and transhumanism.
post by HopefullyCreative · 2014-08-12T20:34:10.746Z · LW · GW · Legacy · 44 comments
I had a waking nightmare. I know some of you reading this just went "Oh great, here we go..." but bear with me. I am a man who loves to create and build; it is what I have dedicated my life to. One day, prompted by the Less Wrong community, I asked: "What if they are successful in creating an artificial general intelligence whose intellect dwarfs our own?"
My mind raced and imagined the creation of an artificial mind designed to be creative and subservient to man, but also to anticipate our needs and desires. In other words, I imagined what would happen if current AGI engineers accomplished the creation of the greatest thing ever. Of course this machine would see how we loathe tiresome, repetitive work, and would design and build a host of machines to do it for us. But then the horror of the implications set in. The AGI would become smarter and smarter through its own engineering, and soon it would anticipate human needs and produce things no human being could dream of. Suddenly man has no work to do: no back-breaking labor, and no glorious creative work of engineering, exploring, and experimentation either. Instead our army of AGI has robbed us of that.
Let me be clear that this is not a statement amounting to "Let's not make AGI," for we all know AGI is coming. Then what is my point in expressing it? To lay out a train of thought that ends in questions which have yet to be answered, in the hope that in-depth discussion may shed some light.
I realized that the only meaning left for man in a world run by AGI would be to order the AGI to make man himself better. Instead of focusing on having the AGI design a world for us, we should use that intellect, which unmodified we cannot compare with, to design a means of putting us on its own level. In other words, the goal of creating an AGI should not be the AGI itself, but a tool so powerful we can use it to make man better. Now, I'm quite certain the audience here is well aware of transhumanism. However, there are some important questions to be answered on the subject:
Mechanical or biological modification? I know many would think, "Are you stupid?! Of course cybernetics would be better than genetic alteration!" Yet the balance of advantages is not as clear as one would think. Let's consider cybernetics for a moment: many implants would require maintenance, they would need to be designed and manufactured, and they would therefore be quite expensive. They would also need to be installed. Initially, possibly for decades, only the rich could afford such a thing, creating a titanic rift in power. That power gap would of course widen the already substantial resentment between ordinary folk and the rich, creating political and social instability we can ill afford in a world with the destructive power of nuclear arms.
Genetic alteration comes with a whole new set of problems: a titanic realm of genetic variables in which tweaking one thing may unexpectedly alter and damage another. Research in this area could take much longer because of the experimentation required. The advantage, however, is that genetic alteration can be accomplished with the help of viruses in controlled environments. There would be no mechanic required to maintain the new being we have created, and if designed properly the modifications could be passed down to the next generation. Instead of paying to upgrade each successive generation, we only pay to upgrade a single one. The rich would still be the first to afford the procedure, but it could spread quickly across the globe thanks to its potentially lower cost once development costs have been seen to. The problem is that we would be fundamentally, and possibly irreversibly, altering our genetic code. We could keep a gene bank as a record of what we were, in the hope of undoing the changes and reverting if the worst happened, yet even that is not the greatest problem with this path. We cannot even get the public to accept genetically altered crops; how do we get the world to accept its own genes being altered? The instability created by pushing such a thing too hard, or the power gap between those who have upgraded and those who have not, could again cause instability that is globally dangerous.
So now I ask you, the audience: genetic or cybernetic? What are the problems with each, and how would we solve the political problems associated with both?
44 comments
Comments sorted by top scores.
comment by [deleted] · 2014-08-12T22:34:11.989Z · LW(p) · GW(p)
Given that a friendly AGI’s intellect dwarfs our own, why not ask it how to improve ourselves? It will consider all your concerns and more. Once you get close to the AGI’s intellect, then you will encounter and appreciate unsolved problems. Personally, I’d prefer to be a wirehead.
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T00:33:37.457Z · LW(p) · GW(p)
That's more or less what I stated was the only solution to the problem of finding meaning in a world with such an AGI. This really all comes down to the purpose of the AGI in the first place.
Replies from: None
↑ comment by [deleted] · 2014-08-13T01:59:43.212Z · LW(p) · GW(p)
OK, so the second half of your post discussing the pros and cons of mechanical and biological modification assumes a world without AGI? Otherwise, the endeavor is useless because the AGI could simply figure it out for us.
My primary purpose for an AGI: to create a perpetual state of bliss for all current humans (maybe future generations and other sentient beings as well, but that's a longer discussion). I'll trade "creative glorious work" for Heaven any day of the week. Even if you require the satisfaction of working to achieve bliss, the AGI can oblige you.
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T06:46:28.833Z · LW(p) · GW(p)
I liked this post for the fact that you noticed the AGI would probably figure out all the pros and cons for us. I did figure it would still be enjoyable for us, in our world that currently lacks any AGI, to discuss them :).
Anyway, I can't really relate to that desired goal for an AGI. I would much rather do an eternity in hell, with all its cognitive stimulation, than rot in "heaven." Look at our experience with the elderly we consign to homes, where their minds literally rot from lack of use.
I am merely pointing out the horror of never having to actually think for yourself or actually do anything. Suddenly any purpose we can find in the world is gone, and our bodies as well as our minds begin to rot as we use them less and less.
comment by Viliam_Bur · 2014-08-13T06:36:37.217Z · LW(p) · GW(p)
So, we have an "artificial general intelligence whose intellect dwarfs our own" which can "anticipate our needs and desires"... but we still have poverty and the risk of nuclear war? How is that possible?
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T19:20:59.372Z · LW(p) · GW(p)
The existence of a superintelligent AGI would not somehow magic the knowledge of nuclear ordnance out of existence, nor would it make the massive stockpiles of existing ordnance disappear. Getting governments to destroy those stockpiles is, for the foreseeable future, a political impossibility. The existence of a grand AGI doesn't change the nature of humanity, nor does it change how politics works.
The same goes for the rich and the working classes: the existence of a superintelligent AGI does not mean the world will magically transform overnight into a communist paradise. You do have a sound point that once the AGI has reached a certain level, and its working machines are sophisticated and common enough, such a paradise becomes possible to create. That does not mean it would be politically expedient enough to actually form.
However, let's assume that a communist paradise is formed, and it is at this point that mankind realizes the AGI is doing everything and we have very little meaning in our own existence. If we then go down the path of transhumanism with cybernetics, there will still be a period in which these technologies are quite rare and therefore rationed. What many don't realize is that a communist system and a capitalist system behave similarly when there is a resource or production shortfall. The only difference is that in a capitalist system money determines who gets the limited resource, while in a communist system politics determines it.
So in the end, even in a world in which we laze about, money doesn't exist, and the AGI builds everything for us, new technologies that are still limited in number mean there will be people who have more than others. More than that, I do not see people submitting to an AGI to determine who gets what, so the distribution of the AGI's output would be born of a human political system, and clearly some people would game that system better, gaining far more resources than everyone else, just as some people are better at business in our modern capitalist world.
Replies from: Lumifer
↑ comment by Lumifer · 2014-08-13T19:36:18.459Z · LW(p) · GW(p)
I think most people here envision a full-blown AGI as being in control and not constrained by politics. If a government were to refuse to surrender its nuclear-weapons stockpile, the AGI would tell it "You're not responsible enough to play with such dangerous toys, silly" and just take it away.
Replies from: None, HopefullyCreative
↑ comment by [deleted] · 2014-08-14T16:25:51.990Z · LW(p) · GW(p)
This is what we call 'superlative futurism', and it is basically theological thinking applied to the topic.
When you assume the superlative, you can handwave such things. It's no different from "god is the greatest thing that can exist" and all the flawed arguments that follow from it.
Replies from: Lumifer
↑ comment by HopefullyCreative · 2014-08-13T19:43:44.155Z · LW(p) · GW(p)
My nightmare was a concept of how things would rationally be likely to happen, not how they ideally would happen. I had envisioned an AGI that was subservient to us and was everything that mankind hopes for. However, I also took into account human sentiment, which would not tolerate the AGI simply taking nuclear weapons away, or really the AGI forcing us to do anything.
As soon as the AGI makes any visible move to command and control people, the population of the world would scream that the AGI is trying to "enslave" humanity. Efforts to destroy the machine would begin almost instantly.
Human sentiment and politics need always be taken into account.
Replies from: Lumifer
↑ comment by Lumifer · 2014-08-13T20:01:26.036Z · LW(p) · GW(p)
I had envisioned an AGI that was subservient to us and was everything that mankind hopes for.
What exactly will the AGI be subservient to? A government? Why wouldn't that government tell the AGI to kill all its enemies and make sure no challenges to its power ever arise?
A slave AGI is a weapon and will be used as such.
The common assumption on LW is that after a point humanity will not be able to directly control an AGI (in the sense of telling it what to do) and will have to, essentially, surrender its sovereignty to it. That is exactly why so much attention is paid to ensuring that this AGI will be "friendly".
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T22:20:21.276Z · LW(p) · GW(p)
A weapon is no more than a tool: a thing that, when controlled and used properly, magnifies the force its user is capable of. For this reason, an AGI that is subservient to man is not in itself a weapon, for a weapon is a tool with which to do violence, that is, to apply physical force to another. An AGI is rather a tool that can be turned into many kinds of tools. One of those is indeed a weapon, but as I have pointed out, that does not mean the AGI will always be a weapon.
Power is neither good nor ill. Uncontrolled, uncontested power, however, is dangerous. Would you start a fire without anything to contain it? For sentient beings we possess social structures, laws, and reprisals to manage and regulate the behavior of that powerful force that is man. If even man is managed and controlled in the most intelligent manner we can muster, then why would an AGI be free of any such restraint? If sentient being A, because of its power, cannot be trusted to operate without rules, then how can we trust sentient being B, which is far more powerful, to operate without any constraints? It's a hole in the logic.
A gulf in power is every bit as dangerous. When the power of two groups is too disparate, there is potentially dangerous instability. As such, it is important for mankind to seek to improve itself so as to shrink that gap. Controlling an AGI and using it as a tool to improve man is one way to shrink this potential gulf in power.
comment by ArisKatsaris · 2014-08-12T21:57:14.927Z · LW(p) · GW(p)
"Instead our army of AGI has robbed us of that."
If by the human valuation system this would be a loss compared to the alternative, and if the AGI accurately promoted human values, doesn't it follow that it would not choose to so "rob" us?
Replies from: HopefullyCreative, Jan_Rzymkowski, polymathwannabe
↑ comment by HopefullyCreative · 2014-08-13T00:44:24.084Z · LW(p) · GW(p)
Suppose we created an AGI, the greatest mind ever conceived, and we created it to solve humanity's greatest problems. An ideal methodology for the AGI would be to ask for factories to produce physical components so it can copy itself over and over. The AGI then networks its copies all over the world, creating a global mind, and generates a horde of "mobile platforms" from which to observe, study, and experiment with the world for its designed purpose.
The "robbery" is not intentional; the machine does not intend to make mankind meaningless. It is merely meeting its objective of doing its utmost to find solutions to humanity's problems. The horror is that as the machine mind expands, networking its copies together and sending its mobile platforms out into the world, human discovery and invention would eventually be dwarfed by this being. Unless social and political forces destroy or dismantle the machine (quite likely), human beings would ultimately face a problem: with the machine thinking of everything for us, and its creations doing all the hard work, we really have nothing to do. In order to have anything to do, we must improve ourselves to at the very least have a mind that can compete.
Basically this is all a look at what the world would be like if our current AGI researchers did succeed in building their ideal machine and what it would mean for humanity.
Replies from: Sysice
↑ comment by Sysice · 2014-08-13T07:22:34.802Z · LW(p) · GW(p)
I don't disagree with you- this would, indeed, be a sad fate for humanity, and certainly a failed utopia. But the failing here is not inherent to the idea of an AGI that takes action on its own to improve humanity- it's the failing of one that doesn't do what we actually want it to do, a failure to actually achieve friendliness.
Speaking of what we actually want, I want something more like what's hinted at in the fun theory sequence than one that only slowly improves humanity over decades, which seems to be what you're talking about here. (Tell me if I misunderstood, of course.)
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T07:35:08.776Z · LW(p) · GW(p)
You actually hit the nail on the head in terms of understanding the AGI I was referencing.
I thought about problems such as: why would a firm researching crop engineering to solve world hunger bother paying a full and very expensive staff? Wouldn't an AGI that not only crunches the numbers but also manages mobile platforms for physical experimentation be more cost-effective? The AGI would be smarter and would run around the clock testing, postulating, and experimenting. Researchers would quickly find themselves out of a job if the ideal AGI were born for this purpose.
Of course, if men took on artificial enhancements, their own cognitive abilities could improve to compete. They could even digitally network ideas, or manage mobile robotic platforms with their minds as well. It seems, therefore, that the best solution to the potential labor competition with AGI is simply to use the AGI to help research, or outright research, methods of making men mentally and physically better.
↑ comment by Jan_Rzymkowski · 2014-08-12T22:34:11.431Z · LW(p) · GW(p)
It's not impossible that human values are themselves conflicted. The mere existence of an AGI would "rob" us of that, because even if the AGI refrained from doing all the work for humans, it would still be "cheating" - the AGI could do all of it better, so human achievement is still pointless. And since we may not want to be fooled (to be made to think that this is not the case), it is possible that in this regard even the best optimization must result in a loss.
Anyway - I can think of at least two more ways. The first is creating games that simulate the "joy of work". The second, my favourite, is humans becoming part of the AGI - in other words, the AGI sharing parts of its superintelligence with humans.
↑ comment by polymathwannabe · 2014-08-12T22:09:29.103Z · LW(p) · GW(p)
It depends. Is the excitement of hard work a terminal or an instrumental value?
comment by polymathwannabe · 2014-08-12T20:51:33.215Z · LW(p) · GW(p)
Your nightmare scenario is the plot of WALL-E. The solution was to create a re-terraforming challenge by entirely abolishing the comfort zone.
comment by Baughn · 2014-08-12T20:43:09.176Z · LW(p) · GW(p)
Well, there are quite a few other options.
Just for instance, the AGI might not solve all our problems for us. I think I'd be okay to have it working in the background - as a sort of operating system - without actually telling us spoilers for all those scientific problems you mentioned. There are problems with that, but let's spend a little more time thinking before deciding that transhumanism is the only answer.
Not that I won't be all over transhumanism almost as soon as I can.
comment by Shmi (shminux) · 2014-08-12T20:40:55.695Z · LW(p) · GW(p)
Once you have small enough nanotech, there is no difference between genetic and cybernetic.
Replies from: advancedatheist
↑ comment by advancedatheist · 2014-08-13T03:54:30.055Z · LW(p) · GW(p)
When will geeks give up on this "nanotech" delusion and work on feasible technologies which don't engage in quantum-mechanics denialism?
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-13T05:06:36.948Z · LW(p) · GW(p)
What is "quantum-mechanics denialism"?
Replies from: advancedatheist
↑ comment by advancedatheist · 2014-08-13T16:08:45.203Z · LW(p) · GW(p)
Eric Drexler argued back in the 1980s that we could apply the principles of macroscopic mechanical engineering to create molecular machines which behave as if quantum effects don't exist on that scale.
The fact that this idea has gotten pretty much nowhere in the last 30 years says a lot about the lack of integrity of MIT's Media Lab in granting Drexler a Ph.D. based on this notion.
By contrast, technologies which exploit quantum mechanics - lasers, magnetic resonance imaging, LED lighting, semiconductors, etc. - pretty much invent themselves.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-13T16:52:55.613Z · LW(p) · GW(p)
I see. I was not talking about Drexler's classical nanotech, though quantum effects are indeed irrelevant on the scale of, say, blood cells, save for essentially quantum-random ion channel gating and similar transitions. It may well be possible to design nanotech starting with, say, quantum dots, or custom fullerene molecules. Our technology is not there yet, but it's going in the right direction. It's not inconceivable that artificial prion-like structures which are not viruses can be used to modify the DNA in a living organism. Whether this requires a superintelligence or just human intelligence is a separate question.
comment by passive_fist · 2014-08-13T00:41:08.412Z · LW(p) · GW(p)
I've pretty much stopped thinking about AGI in terms of how it will affect "our way of life", because no matter how you slice it, the unavoidable conclusion is that AGI will completely destroy everything we take for granted (that's not necessarily a bad thing - we could all wind up much happier afterwards). So how can we guarantee a good outcome?
Most people take the 'head in sand' approach: since AGI is going to dramatically rearrange (and possibly destroy) human civilization, it can't possibly happen, because the current state of affairs has been going on for millennia. They may not say this outright, but it's the way they think. They might read the occasional paper on the dangers of AGI, go "huh", and then forget about it and go back to assuming that we will still live pretty much the same way we do now in 100 years.
More enlightened people seem to fall into the "We should delay AGI as long as possible until we completely guarantee friendliness" camp (this includes most of this site). A respectable opinion. The only problem is that we have no idea if this is even possible. All indications seem to be that it isn't.
Only very very few people are of the same opinion as me, which is that unless we choose total technological stagnation, AGI is coming, and there is no way to guarantee friendliness, or even that we are correct in desiring friendliness. In fact, I can't think of any real difference between stagnation and trying to guarantee friendliness. Our choice is to pick living in the dark ages or embracing the unknown. We've already been through the dark ages, and it wasn't very pleasant. So bring on the unknown.
Few people agree with me and I've been repeatedly downvoted or 'corrected' for my opinions on this site (usually by people who have no idea what I'm even saying). I expect this post to get a lot of downvotes too.
Replies from: skeptical_lurker, gjm
↑ comment by skeptical_lurker · 2014-08-13T00:53:11.151Z · LW(p) · GW(p)
there is no way to guarantee friendliness
There are never any guarantees. So maximise your odds and take a chance!
Replies from: passive_fist
↑ comment by passive_fist · 2014-08-14T05:50:23.898Z · LW(p) · GW(p)
Well for purposes of discussion, I'm equating 'guaranteeing' with 'guaranteeing up to a high degree of certainty'.
But even if we have reasonable guarantees, I'm not even sure if successful friendliness itself is all that different from stagnation.
Replies from: skeptical_lurker
↑ comment by skeptical_lurker · 2014-08-14T10:24:20.126Z · LW(p) · GW(p)
Ok, so why do you believe that there is no way to guarantee friendliness? The idea seems to be to pursue mathematically provable FAI, which AFAICT ought to have a very high probability of success, if it can be developed in time.
I'm not even sure if successful friendliness itself is all that different from stagnation.
In this case I would say that this is not what I would consider friendliness, because I, and a fairly large subset of humanity, place a lot of value in things not stagnating.
Replies from: passive_fist
↑ comment by passive_fist · 2014-08-15T01:33:21.702Z · LW(p) · GW(p)
Only one side of the friendliness coin is the 'mathematical proof' (verification) part. The other side of the coin is the validation part. That is, the question of whether our design goals are really the right design goals to have. A lot of the ink that has been spilled on topics like CEV centers mostly around this aspect.
I, and a fairly large subset of humanity, place a lot of value in things not stagnating.
People say that but it's usually just empty words. If progress necessitated that you be destroyed (or, at the very least, accept being unconditionally ruled over), would you prefer progress, or the status quo?
Try to imagine if humanity were bound by the ethical codes of chimpanzees, then you begin to see what I mean.
Replies from: skeptical_lurker
↑ comment by skeptical_lurker · 2014-08-15T17:36:18.342Z · LW(p) · GW(p)
If progress necessitated that you be destroyed (or, at the very least, accept being unconditionally ruled over), would you prefer progress, or the status quo?
I'm pretty sure there is a potential continuous transform between myself and a Jupiter brain (assuming continuity of personality makes sense). Add one more brain cell or make a small alteration and I'm still myself, so by induction you could add an arbitrarily large number of brain cells, up until fundamental physical constraints kick in.
And even supposing there are beings I can never evolve into, well, the universe is quite big, so live and let live?
That is, the question of whether our design goals are really the right design goals to have. A lot of the ink that has been spilled on topics like CEV centers mostly around this aspect.
Well, there are aspects of CEV that worry me, but I would say it seems to be far better than an arbitrary (e.g. generated by evolutionary simulations) utility function.
Try to imagine if humanity were bound by the ethical codes of chimpanzees, then you begin to see what I mean.
Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps, wouldn't they eventually progress and probably develop a vaguely human-like civilisation?
Replies from: passive_fist
↑ comment by passive_fist · 2014-08-18T09:10:59.410Z · LW(p) · GW(p)
well, the universe is quite big, so live and let live?
Our interests will eventually conflict. Look at ants. We don't go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.
Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps, wouldn't they eventually progress and probably develop a vaguely human-like civilisation?
That's a good hypothetical. What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I'm sure many here wouldn't)
Replies from: skeptical_lurker
↑ comment by skeptical_lurker · 2014-08-20T16:41:48.844Z · LW(p) · GW(p)
Our interests will eventually conflict. Look at ants. We don't go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.
To continue the metaphor, some environmentalists do protest the building of motorways, although not to all that much effect, and rarely for the benefit of insects. But, we have no history of signing treaties with insects, nor does anyone reminisce about when they used to be an insect.
Regardless of whether posthumans would value humans, current humans do value humans, and also value continuing to value humans, so a correct implementation of CEV would not put humanity on a path where humanity gets wiped out. I think this is the sort of point at which TDT comes in, and so CEV could morph into CEV-with-constraints-added-at-initial-runtime. For instance, perhaps the values actually applied at time t are CEV_applied(t) = C·CEV(0) + (1−C)·CEV(t), where CEV(t) means CEV evaluated at time t, and C is a constant fixed at t=0, determining how much values should remain unchanged.
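A minimal sketch of that anchoring rule, in hypothetical notation I'm introducing here (V stands for the extrapolated value function, not how the comment writes it):

```latex
\documentclass{article}
\begin{document}
% Sketch of the anchored-CEV mixture described above (hypothetical notation).
% V_0        : values extrapolated at initial runtime (t = 0)
% V_t        : values that would be extrapolated at time t
% C in [0,1] : anchoring constant fixed once, at t = 0
\[
  V^{\mathrm{applied}}_{t} = C\, V_{0} + (1 - C)\, V_{t}
\]
% Whatever happens to V_t over time, the applied values keep at least
% weight C on the original human values, bounding how far they can drift.
\end{document}
```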
What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I'm sure many here wouldn't)
Sounds good to me! I think post-singularity it might be good to fork yourself, with some copies heading off towards superintelligence quickly and others taking the scenic route and exploring baseline human activities first.
comment by ChristianKl · 2014-08-12T21:09:32.837Z · LW(p) · GW(p)
You missed the most straightforward way of enhancing humans: Education.
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T00:30:55.729Z · LW(p) · GW(p)
This is a statement that is deeper than it first appears. It actually poses the question: are the current limits on human intelligence due to the human being's genetic design, or are they due to poor education?
As in, are I.Q. limitations as we observe them due to lack of education?
Of course education is already improving. What is at issue is that eventually we will have a world populated with magnificent artificial intelligences that make us look stupid. It's highly probable that our minds have physical limits well below the sea of intelligence we are about to birth. Therefore we must examine our role, our very sense of purpose and meaning, in a potential future where we are no longer capable of being the smartest and therefore the "leader".
Replies from: advancedatheist, ChristianKl
↑ comment by advancedatheist · 2014-08-13T04:09:45.834Z · LW(p) · GW(p)
It actually poses the question, are the current limits on human intelligence due to the human being's genetic design or is it due to poor education?
Education can't make dumb people smart, but it can work wonders for naturally smart people. The other day I suggested that if we had to put a badly run country into receivership and straighten it out, I would pick North Korea because of the natural experiment that has happened on that peninsula. Their cousins to the south show that the extended Korean tribe has the genetic goods to benefit from the investment.
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T06:40:45.958Z · LW(p) · GW(p)
What I was fundamentally wondering in the above post was about the relationship between developmental education and eventual I.Q. For instance, given identical genetic characteristics, would heightened mental stimulation during early brain development greatly improve I.Q. over the control?
↑ comment by ChristianKl · 2014-08-13T10:19:31.570Z · LW(p) · GW(p)
Focusing on IQ even misses the point. Most bad decisions that humans make are not due to low IQ but due to insufficient knowledge or bad mental habits.
CFAR focuses on education that allows people to make better decisions, but it doesn't focus on increasing IQ.
Making better decisions is not about keeping the decision-making methods the same and just turning up the IQ knob.
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-13T19:36:42.659Z · LW(p) · GW(p)
Education can certainly give someone a platform to stand on. I wasn't concerned with it because even if you spend thirty years educating someone, they are still limited by their own intelligence when it comes to discovery, creativity, and decision making.
Spending time studying philosophy has greatly improved my ability to understand logical structures and has helped me make better decisions. However, there are still limits set upon me by my own biological design. More than that, I am limited in how much education I can receive and still be able to work off the debt in a single lifetime. Even with state-funded education it is an investment: the student must generate more value in a lifetime than the cost of the education for it to be worthwhile.
The pace of education is limited by a great number of variables, including the student's IQ, so we cannot simply solve the problem by trying to educate at a faster pace. The other solution is a form of transhumanism: alter my body so that I may live longer, in order to be worth the cost of a longer education. But postulating such a long and substantial education ignores whether there is a point at which education has no further effect and the only remaining option is actual hands-on experience in life.
We can logically see that we cannot magically educate every problem away. Education is and will be the primary means of improving the human mind. However, if we need to improve our natural limits on how quickly we can learn and so forth, physical alteration of the human body may be necessary.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-08-13T23:00:22.539Z · LW(p) · GW(p)
Therefore we cannot simply solve that problem by trying to educate at a faster pace.
The point is not about educating faster but educating better.
We can logically see that we cannot magically educate every problem away.
You also can't magically solve problems with genetic engineering or cybertech.
However postulating about such a long and substantial education is ignoring whether or not there is a point when education has no effect and the only other option is actual hands on experience in life.
When I speak about education I don't mean "getting a college degree"; I mean actual learning, improving structures of reasoning. Hands-on experience also provides education.
Replies from: HopefullyCreative
↑ comment by HopefullyCreative · 2014-08-14T01:07:56.145Z · LW(p) · GW(p)
Human alteration certainly won't magically improve human beings' mental capabilities all on its own. That's why I included the qualifier that education "is and will be the primary means of improving the human mind".
I was pointing out that, when faced with an artificial intelligence that can continually upgrade itself, the only way the human mind can compete is to upgrade as well. At some point current human physical limitations will be too limiting, and human beings will fall by the wayside into uselessness in the face of artificial intelligence.