How to Dissolve It
post by TurnTrout · 2018-03-07T06:19:22.923Z · LW · GW · 6 comments
In the last month and a half, I've had more (of what I believe to be) profound, creative insights into technical problems than in the five years prior. For example, I independently came up with the core insight behind DenseNets during the second lecture on convolutional neural nets in my Deep Learning class. I've noticed that these insights occur as byproducts of good processes and of having a conducive mindset and physiology at the time. I'm going to focus on the former in this post.
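(For concreteness: the core DenseNet idea is that each layer receives the concatenated feature maps of every earlier layer. A minimal PyTorch-style sketch - not what I actually wrote down in class, and with purely illustrative channel sizes:)

```python
# Minimal sketch of the DenseNet idea (assumes PyTorch; sizes are illustrative):
# each layer receives the concatenation of every earlier layer's feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                      kernel_size=3, padding=1)
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Dense connectivity: each conv sees everything produced so far.
            out = torch.relu(layer(torch.cat(features, dim=1)))
            features.append(out)
        return torch.cat(features, dim=1)
```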
End in Mind
Often, when I'm confused about a technical problem, I realize that I don't even know what a solution would look like. Sure, I could verify a solution if one were handed to me on a platter, but in these situations, I generally lack a detailed mental model of how I'd want a solution to behave. This may sound rather obvious once brought to your attention, but I'd like to emphasize how easy it is to spend hours in murky, muddled, pattern-matching gradient descent, mindlessly tweaking the first ideas that came to you.
Incredibly, generating a detailed solution model is often enough to quickly resolve (or at least reduce) even formidable-seeming problems.
Backward Chaining
The aforementioned process is backward chaining (as opposed to forward chaining, which is what you're doing with the work-from-what-I-have-now approach). Take the time to actually visualize what a solution to the problem would look like. How would it behave in response to different inputs? Are you offloading all of the work onto a weasel word (like emergence)?
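(If the term is unfamiliar: in classical AI, backward chaining means starting from the goal and asking what would have to hold to establish it, recursing until you hit known facts. A minimal sketch with made-up rules and facts, just to pin down the direction of search:)

```python
# Minimal sketch of backward chaining over if-then rules.
# The rules and facts are made up; assumes the rule set has no cycles.
def backward_chain(goal, rules, facts):
    """Can `goal` be derived from `facts` using `rules`?"""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal:
            # Work backward: to establish the goal, establish each premise.
            if all(backward_chain(p, rules, facts) for p in premises):
                return True
    return False

rules = [
    (("has_feathers", "can_fly"), "is_bird"),
    (("is_bird",), "lays_eggs"),
]
facts = {"has_feathers", "can_fly"}
print(backward_chain("lays_eggs", rules, facts))  # True
```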
When you're done, map-territory confusions should be ironed out, it should be clear whether current approaches truly tackle the core issue or just something similar-looking, and black-box terms should be disintegrated. If you can't do this, you probably can't solve the problem yet - you should spend more time working on your understanding. I encourage you to read this multiple times - the bulk of this post's value resides in this section.
For example, yesterday I thought about a smaller problem for about 15 minutes and had nothing to show for it. I almost resigned myself to asking a more mathematically experienced friend if he had any ideas, but instead forced myself to go through the above process. The answer was immediately obvious.
In my experience, proper solution formulations are accompanied by a satisfying mental 'click'; this is when you eagerly chase down all the freshly-exposed avenues of attack. Push the frontier of new understanding as far as it will go, repeat the process when you notice your thought process again becoming muddled, think with paper if the problem is complicated, and enjoy the rush.
Qualia of Discovery
I can't impart to you the exact feeling of using this technique properly, but I can make its fruits concrete by selecting a few things I've dissolved recently:
- I've observed samples from a normal distribution and recorded the mean and variance. New observations are going to come in, and I want to be able to assign a scalar quantifying how unusual each one looks for the distribution.
- Challenge: I have not yet had the opportunity to thoroughly study a statistics textbook. How can I produce a solution without having to post this on StackExchange or guess the name of the actual technique?
- Answer: Srrq gur fgnaqneq qrivngvba bs gur arj bofreingvba vagb gur gnau npgvingvba shapgvba gb trg n qvssreragvnoyr zrnfher bs qvfgnapr-sebz-qvfgevohgvba.
- A binary classifier has observed a bunch of 'dog' pictures, and we want it to be able to robustly communicate when it sees something new - knowing what it doesn't know.
- Challenge: How can we robustly represent the 'unknown' label in classification?
- Answer: Purpx bhg zl cbfg ba nzovthvgl qrgrpgvba sbe n cbgragvny nafjre.
- I'm a reinforcement-learning agent named R̸̵̕Ȩ͟Ḑ́A͝͏̢C̷͜T̸҉E͘D̶͘͟, and I'd love to fetch things from other parts of your house for you, as quickly as possible.
- Challenge: How can we robustly disincentivize agents from messing up the environment while optimizing their objective?
- Answer: I won't even ROT13 this one - I'm not sharing this until I've polished my work more. Attempting to deduce my response is left as an exercise to the reader; I promise it doesn't require more than 96 insights in series.
Look out for my review of Artificial Intelligence: A Modern Approach, coming soon!
6 comments
comment by Swerve · 2018-03-25T00:43:01.455Z · LW(p) · GW(p)
The first paragraphs of the "Backward Chaining" section of the post are exactly the place where abstract instructions can be helpful, but concrete step-by-step instructions of the technique are arguably even more important for learning to do the thing in the first place. You appear to attempt to ameliorate this by including examples of things you could apply this to, but I think [93%] this isn't as helpful as walkthroughs of the technique would be.
For example, you could include examples of problems you had to solve and used this technique on: what it felt like to use the technique, what the actual process and results were, etc.
I say this because I think [89%] you're onto something really helpful as a problem-solving tool, but as it stands it's hard (not impossible) to extract practical value from the instructions (i.e., I'm asking you to assume good faith on the part of my criticism).
Thanks for making the post, I got a pretty large amount of value out of it.
Replies from: TurnTrout
comment by CronoDAS · 2018-03-11T04:59:35.479Z · LW(p) · GW(p)
Did you ever end up looking at a statistics book and seeing if there is a standard technique? The hyperbolic tangent has a lot of the properties you would want (it's an increasing function that goes from -1 to +1 over the reals) but it gets really flat very quickly. I guess ideally you'd use something based on the normal distribution itself, but you'll find tanh built into more software.
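For concreteness, a minimal sketch of the two options - tanh of the z-score versus a measure built from the normal CDF (the sample statistics and the new observation here are made up):

```python
# Compare tanh of the z-score with a surprise measure based on the normal CDF.
# The recorded mean/std and the new observation are illustrative only.
import math

mean, std = 4.2, 1.3   # statistics recorded from earlier observations
x = 7.0                # new observation

z = (x - mean) / std   # standardized distance from the distribution

tanh_score = math.tanh(z)   # increasing, bounded in (-1, 1), but saturates quickly

# Two-sided "unusualness" from the normal CDF: 0 = typical, approaches 1 far out.
normal_cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
cdf_score = 2 * abs(normal_cdf - 0.5)

print(tanh_score, cdf_score)
```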
Replies from: TurnTrout
comment by Aorou (Adnll) · 2018-03-08T18:08:49.993Z · LW(p) · GW(p)
Hi,
Somewhat unrelated, but my question is about dissolution. What is the empirical evidence behind it? Could someone point me to it, preferably something short about brain structures?
Otherwise, it would seem too subject to hindsight bias: you've seen people make a mistake, and you build a brain model that makes that mistake. But it could be another brain model; you just don't know, because your dissolution is unfalsifiable.
Thank you!
Replies from: TurnTrout
↑ comment by TurnTrout · 2018-03-08T19:16:58.224Z · LW(p) · GW(p)
Thanks for the question!
I'm not making a global claim that people who do this always - or even usually - do better than people who reason forwards (although I don't see why the approaches need be mutually exclusive). I do suspect that this is the case for many problems. Considering the meat of the approach - fixing a success condition clearly in your mind - it seems reasonable to do that whenever solving a problem. In fact, not doing so would be an obvious failure mode for all but the most trivial of problems, unless you want to solve problems by a somewhat random walk through solution-space.
To be clear, my evidence is anecdotal, as are my claims. For me personally, this anecdotal evidence is rather strong - I have indeed been more insightful in the last few months. In that time, my IQ, mental health, and other obvious confounds have not changed; I'm doing my best to isolate what has changed so it can be replicated and reused to the benefit of the community-at-large.
So if I can't be sure, what's the point - why share this? As I understand it, one of the driving forces behind the instrumental rationality project is that the scientific study of achievement-maximization and thinking clearly about really hard problems has been woefully under-prioritized. So I'm doing my part by sharing things I'm fairly sure explain changes for me and seeing if they generalize. I'd love for others to try this approach and report their results; both affirmative and negative results would be evidence for the question of whether this is just an incorrect post facto explanation.