Singularity Mindset
post by alkjash · 2018-01-19T00:32:00.839Z · LW · GW · 16 comments

This is a link post for https://radimentary.wordpress.com/2018/01/18/singularity-mindset/
Contents

- AI Worries
- Take Ideas Seriously
- 16 comments
In a fixed mindset, people believe their basic qualities, like their intelligence or talent, are simply fixed traits. In a growth mindset, people believe that their basic qualities can be modified gradually via dedication and incremental progress. Scott Alexander has cast some doubt on the benefits of growth mindset, but I think it still has merit, if only because it is closer to the truth.
Growth mindset is a good thing that doesn't do enough. The situation calls for More Dakka. I present: Singularity Mindset.
In a Singularity Mindset, people believe they are self-modifying intelligences well past the singularity threshold, that their basic qualities can be multiplied by large constants by installing the right algorithms and bug fixes, and that even the best-optimized people are nowhere near hardware limitations. People who apply Singularity Mindset come to life with startling rapidity.
The seed of this post was planted in my head by Mind vs. Machine, which I read as a call-to-arms to be radically better humans, or at least better conversationalists. The seed sprouted when I noticed the other day that I've already blogged more words in 2018 (today is January 18) than in 2017.
Apparently, Kurzweil beat me to the name with Singularity University and Exponential Mindset, but (a) that's geared towards businesses and technology instead of individuals, and (b) I'm agnostic about the exact shape of the foom, so I'll stick to Singularity Mindset.
AI Worries
A tiny note of confusion noticed but unresolved can, over the course of years, fester into an all-consuming possession. One such question possessing me has come to a head:
Why is Eliezer Yudkowsky so much more worried about AI risk than I am?
I quickly came up with a standard response, which goes something like this: after years of steadfastly correcting his own biases, acquiring information, and thinking clearly about the world's biggest problems, he came to the right conclusion.
It's a good explanation which provokes in me at least a pretense of updating models, but today I will entertain another narrative.
I think the difference between us is that Eliezer has the lived experience of Singularity Mindset, of deliberately self-modifying to the point of becoming unrecognizably intelligent and productive, and the simultaneous lived experience of seeing his own values drift and require extraordinary effort to keep in line.
Meanwhile, I've been plodding up the incremental steps of the Temple of Growth Mindset, humbly and patiently picking up the habits and mental hygiene that arise organically.
And so the difference between our worries about AI-risk might be described as us individually typical-minding an AI. Eliezer's System 1 says, "If AI minds grow up the way I grew up, boy are we in trouble." My System 1 says, "Nah, we'll be fine."
Take Ideas Seriously
Singularity Mindset is taking ideas seriously:
- I took Jordan Peterson's advice seriously and cleaned my room. Turns out I actually love making the bed and decorating. A print of Kandinsky's Composition VIII is the best thing that happened to my room.
- I went to see Wicked on Broadway, took it as seriously as possible, and was mesmerized. I ended up bawling my eyes out for the entire second hour.
- I took More Dakka seriously and doubled the amount I blog every day until it became physically fatiguing.
- I took my own advice about Babble and constrained writing seriously and wrote three short stories sharing the same dialogue.
Great ideas are not just data points. They are (at bare minimum) algorithms, software updates for your upcoming Singularity. To integrate them properly so that they become yours - to take them with the seriousness they deserve - requires not just a local update to the map, but at the very least the design of a new cognitive submodule. In all likelihood, getting maximum mileage out of an idea requires a full-stack restructuring of the mind, from the map down to the perceptual structures.
Take this quote of Jung's that I treasure:
Modern men cannot find God because they will not look low enough.
(More and more I find myself in the absurd position of writing on the ideas of this man, whom I find impossible to read directly but from whom I have derived such wisdom via second-hand sources.)
It is an injunction to be humble in directions orthogonal to the eighth virtue. To take Jung seriously deserves its own post, but in brief I read this quote in at least three directions.
Look low enough by focusing your mental energy on things that seem beneath you. Feed yourself properly, fix your sleep, and get exercise. Perhaps the most important thing you could be doing with your extraordinary intellectual capacity is to resolve the dysfunction within your immediate family. Perhaps the most important thing you could be writing involves repeating, summarizing, and coming up with catchy concept handles for the ideas of better men and women. Whatever it is, take it seriously, do it properly, and only good will come of it.
Look low enough by confronting the darkness in your personal hell. There are shadows there that you instinctively recoil from: horror movies, historical nightmares, the homeless on the street. Perhaps the most important thing you could be doing is admitting the existence of and then mastering your sadistic urges and delusions of genocide that lie just under the surface, ready to flood forth at the least opportune moment. Perhaps the demon is instead a spiral of anxiety and self-doubt that sends you into sobbing fits in the fetal position. What you need in your life is exactly where you least want to look. Wield your attention against the darkness whenever you have the slack. Only light can defeat shadow.
Look low enough by looking to your inner child for guidance. Oftentimes, progress curves look like "naive, cynical, naive but wise":
- For mathematicians, the curve is pre-rigor, rigor, post-rigor.
- Picasso said, "It took me four years to paint like Raphael, but a lifetime to paint like a child."
- Scott Alexander foretold that idealism is the new cynicism.
- Knowing about biases can hurt you.
If you've plateaued for a long time in the cynical stage, look low enough by reconstituting your inner child. Relinquish your cynicism with the same quickness with which you relinquished your naiveté. Despite your "better" judgment, trust and forgive people. Feel small when you stand beside the ocean. Babble like a baby. Try stupid shit.
Taking ideas seriously is terrifying. It requires that at the drop of a hat, you are willing to extend such charity to a casual remark as to rebuild your whole mental machine on it if it proves true.
Extraordinary people take ideas with extraordinary seriousness. I will read a paper by skimming the abstract. "Huh, that sounds vaguely true." Scott Alexander will read the paper and write three detailed criticisms, each longer than the paper itself. As for me, in the last five years I've read more words in Scott's book reviews than in the books themselves. What I'm after is that gripping but elusive experience of watching a mind take ideas seriously and completely synthesize them into a vast ocean of knowledge.
Is there a deep truth that caught your fancy recently, that you toss around with your friends the way Slytherins toss Remembralls? You thought it through once and you think you've done your due diligence?
Take that idea seriously. Reorganize your mind and life around it. Travel the world looking for examples of it. At the very least, write a thousand words about it. God knows I want to hear about it.
16 comments
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2018-01-19T23:56:36.648Z · LW(p) · GW(p)
Andrew Critch told me once to explicitly think of myself as a recursively self-improving friendly intelligence. I mostly haven't done it, but I like the advice.
Edit: Also, I wrote the comment above before actually reading the entire post, and wow, that is not the first thing I would have said after reading the whole thing, boy howdy.
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-01-19T14:50:47.027Z · LW(p) · GW(p)
"Look low enough by focusing your mental energy on things that seem beneath you... Look low enough by looking to your inner child for guidance."
I have conflicting emotions about this advice. There was something about my own personality that I tried to bury for a long time, because I believed it was a distraction from focusing on my work. My work is important for mankind as a whole (at least I believe so), whereas the "something" in question is only important for me personally. However, recently the "something" exploded and took over my life, and I'm still not sure: did I make a terrible mistake by letting go of control and indulging that desire, which spiralled into my current condition, or was it inevitable anyway, and should I have gone forward with this even sooner, since without fixing my own soul it is hard to find the energy to fix the world at large?
Replies from: Qiaochu_Yuan, Nisan, alkjash, commissar Yarrick

↑ comment by Qiaochu_Yuan · 2018-01-19T23:57:47.450Z · LW(p) · GW(p)
I am personally super against people burying parts of their own personalities and super in favor of doing things that matter only to them, because they matter too. So for what it's worth, this sounds great to me.
↑ comment by Nisan · 2020-09-04T19:51:12.298Z · LW(p) · GW(p)
2 years later, do you have an answer to this?
Replies from: vanessa-kosoy

↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-09-04T20:50:31.852Z · LW(p) · GW(p)
Well, first nowadays I endorse my own selfishness. I still want to save the world, but I wouldn't sacrifice myself entirely for just a tiny chance of success. Second, my life is much more stable now, even though I went through a very rough period. So, I'm definitely happy about endorsing the "something".
↑ comment by alkjash · 2018-01-19T23:54:07.008Z · LW(p) · GW(p)
This is very much above my pay grade, but I tried to answer your question as seriously as possible in my new post.
↑ comment by Crazy philosopher (commissar Yarrick) · 2024-10-19T07:03:39.145Z · LW(p) · GW(p)
Can you tell us what exactly led to the "something" exploding? Did something change in your life beforehand?
comment by Chris_Leong · 2018-01-19T10:59:38.781Z · LW(p) · GW(p)
"Perhaps the most important thing you could be writing involves repeating, summarizing, and coming up with catchy concept handles for the ideas of better men and women. Whatever it is, take it seriously, do it properly, and only good will come of it." - I have no doubt that LW is a mine full of ideas that weren't expressed well or fully developed and someone could produce an incredible amount of value by digging up these unloved ideas.
Replies from: alkjash

↑ comment by alkjash · 2018-01-19T23:36:33.886Z · LW(p) · GW(p)
Someone has to, but no one else will. Will you rise to the challenge?
Replies from: Chris_Leong

↑ comment by Chris_Leong · 2018-01-20T14:35:44.407Z · LW(p) · GW(p)
The odds are against it. I've got ideas for a huge number of other writing projects that might even deliver more value.
comment by SquirrelInHell · 2018-01-19T09:02:06.643Z · LW(p) · GW(p)
This is very well done :) Thanks for the Terence Tao link - it's amusing that he describes exactly the same meta-level observation which I expressed in this post.
Replies from: alkjash, Gunnar_Zarncke

↑ comment by alkjash · 2018-01-19T21:56:40.788Z · LW(p) · GW(p)
Yes, I think it's possible that an entire field like machine learning could still be in Stage 2, the technical results going so much farther, faster, and more sporadically that the systematic intuition-coalescing and metaphor-building has yet to catch up.
↑ comment by Gunnar_Zarncke · 2018-01-21T20:53:04.119Z · LW(p) · GW(p)
I think this goes beyond math and is really a general pattern about learning by System 1 and System 2 interacting. It's just more clearly visible with math because it is necessarily more precise. I once described it here (before knowing about the System 1 and 2 terminology): http://wiki.c2.com/?FuzzyAndSymbolicLearning
Replies from: SquirrelInHell

↑ comment by SquirrelInHell · 2018-01-22T11:42:07.398Z · LW(p) · GW(p)
Yes, and also it's even more general than that - it's sort of how progress works on every scale of everything. See e.g. tribalism/rationality/post-rationality; thesis/antithesis/synthesis; life/performance/improv; biology/computers/neural nets. The OP also hints at this.
comment by Swerve · 2018-01-22T07:25:07.759Z · LW(p) · GW(p)
Imagine my pleasure at seeing something that's been swimming around in the back of my head for at least a month laid out in excellent writing. I found this post valuable for several reasons:
1.) This describes what may be an extremely worthwhile meta-skill. The whole sentiment of "take an idea seriously" seems to have a lot of recursive power. As one takes the singularity mindset seriously, one is in fact practicing the singularity mindset. This not only gives itself a positive feedback loop, but allows other potentially helpful concepts or ideas to piggyback on the loop and offer utility in other ways. Your example of "More Dakka" made me realize I read that post nigh on two months ago and have little to show for it. That's not for lack of the concept's usefulness, but more for my own lack of trying to find an application for it. There is also probably a sub-skill or synergistic skill of identifying which concepts are worth really working at.
2.) This isn't quite as related to the meat of the post, but it is still a very valuable aspect to me. There's something to be said for narrativemancy and the aesthetics of trying to act like an X or Y, or an organic self-improving AI. It's just low-hanging utility fruit that one gets from conceptualizing oneself in a certain way.
comment by flipflopchip · 2018-01-21T19:36:50.109Z · LW(p) · GW(p)
I really enjoyed reading this, thank you for writing it.
I especially liked your point about Yudkowsky's own improvement giving him insight into AI.
I once wished for additional intelligence in the same way most people wish they could fly.
But in the last few weeks, I've decided to actually do something about it, and this post gave me another nudge in the right direction.