Facets of Problems v0.1
post by whpearson · 2017-01-28 · 18 comments
Problems can be decomposed into parts that are shared among different problems; I'll call these parts facets (for want of a better word; if there is a term of art I am ignorant of, let me know). Each facet fundamentally affects how you approach a problem, by changing the class of problem being solved. Facets can be seen as parts of paradigms extended into everyday life.
For example, when trying to find a path given only a map, you may use something like the A* algorithm. But if you have a map and an oracle that tells you the optimal path runs through certain points, you can use that information to decompose the problem into solving the shortest path between those points. Having that oracle is a facet of the problem. Another facet might be that you know of an automated doorway on a shortcut that is open and closed at different times. You no longer have a fixed map, so the A* algorithm is not appropriate; you'll have to represent the doorway probabilistically, or try to figure out the pattern of its opening so you can predict exactly when it is open.
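A minimal sketch of the oracle decomposition, assuming a 2D grid world with unit move costs (the grid representation and all function names here are illustrative assumptions, not part of the original example):

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of 0 (free) and 1 (wall) cells, Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

def path_via_waypoints(grid, start, goal, waypoints):
    """The oracle facet: known waypoints split one big search into short legs."""
    path, here = [start], start
    for target in list(waypoints) + [goal]:
        leg = astar(grid, here, target)
        if leg is None:
            return None
        path.extend(leg[1:])  # skip the joint node, already in the path
        here = target
    return path
```

Each leg searches a much smaller space than the full start-to-goal problem, which is the advantage the oracle facet buys you.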
There are a number of distinct ways that facets can impact your problem solving. These can be because:
- a facet suggests new sources of information to solve a problem (Epistemic)
- a facet constrains the problem in a hard-to-discover way that makes it easier to solve (Constraints)
- a facet makes things harder to solve, but means a good solution is more likely to be found if the facet is true (Inconveniences)
- a facet means you have to manipulate your other facets (Meta)
A problem can have many facets, and they interact in a non-trivial fashion. Having the wrong facets can be very bad: they form the system's inductive bias.
I think facets can impact different things:
- How we approach the world ourselves (Are we making use of all the facets that we can? How do the facets we are exploiting interfere? Do we have damaging facets?).
- How we design systems that interact with the world. Enumerating the facets of a problem is the first step to trying to solve it.
Epistemic status: Someone has probably thought of this stuff before. Hoping to find it. If they haven't and people find it useful I'll do a second version.
Base Assumptions: That throwing lots of resources at a single approach with no regard to facets is impractical (no AIXI-type solution)
Epistemic
These facets can allow you to move more quickly to solving your problem by giving you information about the world, or how you should act.
Exploration - There are known unknowns. Go and see what is there, and what is useful, in those locations. You have a function F(x) and you know the values of F(1), F(2), F(3); what is the value of F(10,000)? Exploration is a known phase of reinforcement learning and it can interfere with exploitation.
Generalisation - There are patterns in the unknowns, so it makes sense to try to generalise and discover the true function. More exploration can help with generalisation.
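To make the F(x) example concrete, a toy sketch (the observed values and the polynomial model class are invented for illustration): generalisation means committing to a model and extrapolating, and further exploration is what tests that commitment.

```python
import numpy as np

# Exploration so far: we have paid to observe F at x = 1, 2, 3.
xs = np.array([1.0, 2.0, 3.0])
ys = np.array([2.0, 5.0, 10.0])  # hypothetical observations of F

# Generalisation: assume F is a low-degree polynomial and fit it.
coeffs = np.polyfit(xs, ys, deg=2)
predict = np.poly1d(coeffs)

print(predict(10_000))  # an extrapolation resting entirely on the model class

# More exploration (observing F at, say, x = 100) would either support
# the polynomial assumption or force a different model.
```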
Being Taught - There is an agent out there that can model you in some way and provide feedback on how you are doing or processing data, whether as labeled input-data pairs or as feedback in a language, like "Stop thinking so hard" or "A is for Apple".
Copying agents - There are agents out there doing stuff. They may not be able to tell you what to do, or they may tell you to do the wrong thing (if they are adversarial), but you can copy what they do. See mirror neurons. This may be a prerequisite for "Being Taught" if you don't want to pre-encode a language.
Research - There is linguistic information out there that may be useful; find it. This is made more complex by some of the information coming from agents with the wrong view of reality, or from overtly hostile agents.
Commissioned Learning - You can convince another existing agent to do some of the above for you and give you the results.
Creation of Learning systems - You can create a system that does some of the above for you and gives you the results.
Constraints
These facets allow you to look for certain patterns and make use of them more easily than having to derive them from experience or first principles.
Construction - You can break a problem down into parts, solve those sub-parts, and re-assemble. See modular programming. Minimising the things you have to think about at once can make things more tractable.
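A small, self-contained illustration of the construction facet, using the textbook divide-and-conquer example (merge sort); this is one instance of the pattern, not anything specific from the post:

```python
def merge_sort(items):
    """Construction: split the problem, solve the parts, re-assemble."""
    if len(items) <= 1:
        return items                   # a trivially solved sub-part
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # solve each sub-part independently
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0            # re-assemble the sub-solutions
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

At every point you only reason about one small sub-problem, which is the tractability gain the facet describes.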
Positive sum (Social) - There are other people you can communicate with, and you share enough motivation with them to encourage them to work with you. This adds complex things like managing other people's beliefs about you or your organisation: are you competent? Do you know what you are doing? This can interact with paradigm shifting: unless you are explicitly trying to paradigm shift, you are not motivated to visibly search for things that undermine your current goal (you will look bad, and people will no longer want to work with or support you).
Adversarial/Zero-Sum - You win if another person loses. If the agent you are competing against is using "Construction", then figuring out the things they are trying to construct is important to being able to disrupt them. See minimax. Individual strategies might be anti-inductive if you are competing against strong optimizers.
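A bare-bones minimax sketch; the game itself is abstracted behind hypothetical callbacks (`moves`, `result`, `score`), since no particular game is assumed here:

```python
def minimax(state, maximizing, moves, result, score, depth):
    """Minimax over a generic two-player zero-sum game.

    `moves(state)` lists legal moves, `result(state, move)` is the state
    after a move, and `score(state)` is a static evaluation from the
    maximizer's point of view. All three are caller-supplied placeholders.
    """
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    values = [minimax(result(state, m), not maximizing, moves, result,
                      score, depth - 1) for m in options]
    return max(values) if maximizing else min(values)
```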
Optimisation/Exploitation - You know a bunch of stuff; pick the best possible action. It might be non-obvious and require lots of processing to figure out the best action. I'm including this to contrast with satisficing, or with weird non-problems where every action is the same. Most interesting problems have some element of this.
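The contrast with satisficing fits in a few lines (the function names and the threshold are illustrative):

```python
def optimise(actions, value):
    """Exploitation: evaluate every action, pick the best one."""
    return max(actions, key=value)

def satisfice(actions, value, good_enough):
    """The contrast case: stop at the first action clearing a threshold."""
    return next((a for a in actions if value(a) >= good_enough), None)
```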
Randomness - You know there is a source of randomness, so you have to manage your expectations/beliefs.
Game building - You want to create a system that encourages other agents/agent-like things to behave in certain ways (prediction markets, school ranking metrics, company management). You care less about what individual agents know; you just want to know enough about their goals so you can fit them into the game.
Introspective - There are things wrong with your internal programs. Explore them, make them more explicit, and try to improve them. This might include "Game Building" if you model your internal state as a bunch of agents.
Physics - Your whole world is defined by discoverable rules and you can use them for predictions: s_{n+1} = U(s_n); what is the update function U? This is great: it means everything that happens in the world can be evidence for or against U.
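A toy illustration of "everything can be evidence for or against U" (the world, the trajectory, and the candidate rules are all made up):

```python
def evidence_for(U, trajectory):
    """Count observed transitions s_n -> s_{n+1} that U predicts exactly."""
    return sum(U(s) == s_next for s, s_next in zip(trajectory, trajectory[1:]))

# Hypothetical world whose hidden rule is "double the state".
observed = [1, 2, 4, 8, 16]
candidates = {"double": lambda s: 2 * s, "add one": lambda s: s + 1}

for name, U in candidates.items():
    print(name, evidence_for(U, observed), "of", len(observed) - 1)
# "double" is confirmed by all 4 transitions, "add one" by only 1.
```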
Scientific constraints - There are rules that allow you to constrain the types of hypotheses you have for science: like conservation of energy, or that the update function is time-reversible, or Turing-computable.
Optimiser Building - You are trying to build an optimiser to optimise the world for you. You define a goal, a search space, prior knowledge, and a search algorithm. All these things embody a set of facets; this is classic AI. Note that it doesn't allow the system to move through facets, which is needed for AGI and general intelligence augmentation (GAI).
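A sketch of those four fixed ingredients as a random-search optimiser; the parameter names and the 50/50 explore/exploit split are assumptions made for illustration:

```python
import random

def build_optimiser(goal, search_space, prior_knowledge, step):
    """Classic-AI optimiser: goal, search space, prior, and algorithm,
    all fixed up front. `goal` scores a candidate, `search_space` samples
    a fresh one, `prior_knowledge` seeds the search, and `step` proposes
    a neighbour of the current best."""
    def optimise(iterations):
        best = prior_knowledge if prior_knowledge is not None else search_space()
        for _ in range(iterations):
            candidate = step(best) if random.random() < 0.5 else search_space()
            if goal(candidate) > goal(best):
                best = candidate
        return best
    return optimise

# Hypothetical use: maximise -(x - 3)^2 over real x.
opt = build_optimiser(
    goal=lambda x: -(x - 3) ** 2,
    search_space=lambda: random.uniform(-10, 10),
    prior_knowledge=0.0,
    step=lambda x: x + random.gauss(0, 0.5),
)
print(opt(1000))  # approaches 3; note the facets themselves never change
```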
Optimising Optimiser Building - You are trying to build an optimiser that builds an optimiser to optimise the world for you. You define a goal, a type of optimiser, a search space of possible optimisers, prior knowledge, and a search algorithm.
Super-optimiser control - You have a super-optimiser; it needs no facets to solve problems. Try to build a goal that it can maximise and still do what you want.
Person Building - You are trying to build a person. People can adopt different facets, so this is not like optimiser building. People also don't have coherent goals throughout their lifetime. There is a rough purpose, survive and propagate, but that is hard to extract from the day-to-day activity. The day-to-day activity can look like optimising, so it is good to adopt the intentional stance towards them on a short timescale. However, the intentional stance breaks down on the developmental timescale, due to the lack of coherent goals mentioned before. People are the only known General Intelligence.
Intelligence Augmentation - You are trying to figure out enough about the nature of how you work that you can expand it. You are looking for an identity function (not to be confused with the mathematical identity function) such that I(x) = I(x + c), and an aptitude function such that A(x + c) > A(x), where x is you, c is some computational resource, + is some method of connection, and A is an aptitude test you care about. This is only like optimiser building if x is an optimiser.
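The two conditions can be restated as a single predicate; this is purely a rewriting of the formulas above, with every function a hypothetical placeholder for an open problem:

```python
def augments(x, c, connect, identity, aptitude):
    """Does resource c augment agent x without changing who x is?

    `connect`, `identity`, and `aptitude` stand in for the post's +, I,
    and A; none of them is something we currently know how to compute.
    """
    augmented = connect(x, c)
    return identity(augmented) == identity(x) and aptitude(augmented) > aptitude(x)
```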
Inconveniences
These don't help you at all. They just mean that solutions that look good if you ignore them may in fact not be good.
Survival - If you make the wrong step, you can't make any more (although in some cases, if you can't take a step you are also likely to not be able to make any more). This means you cannot, or do not want to, explore the entire search space. It also means that if your knowledge of what will cause death is faulty, you will fail to explore some useful regions.
Embedding within physics - Not only might you die, but your actions and internal actions all have costs and impacts you might not expect. Also, people might be able to influence your internal state in ways you might not expect. This interacts with optimisation: the process of optimisation itself now has a cost. Time can also be a problem. Optimisation might not be optimal!
Judging Religion - There is a god judging every action and thought you have.
Evolutionary - You know that you and the other agents are products of evolution.
Meta
These facets refer to other facets.
Paradigm shifting (Kuhnian sense) - You start with a model of the world (maybe acquired from research or from being taught). Things don't seem consistent with your world view. Attempting to paradigm shift is trying to change your world view by gathering the inconsistencies and finding models that fit. For a part of the world your view is F(x) = y; however, you observe F(x) != y. Maybe you need to include other data (you should be creating a two-place function and you need to find the other variable), or y is not a function of x at all, or you need to add another facet to your view of the world. There are things that seem to need paradigm shifting currently (physics, with the split between the quantum and gravity, and AI/consciousness). If you try to paradigm shift you are likely to be wrong, as you are going beyond your current model of reality and the world of possible models is vast. This can be seen as altering your hypothesis space and/or input space, or your facets.
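A toy version of the "maybe you need to include other data" branch (all data here is invented): a one-place model of a two-place world leaves a stubborn residual, and adding the missing variable dissolves it.

```python
import numpy as np

# Hypothetical data: we believe y is a function of x alone...
x = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([0.0, 1.0, 0.0, 1.0])  # a second variable we aren't tracking
y = 2 * x + 3 * z                   # ...but reality also depends on z

one_place = np.polyfit(x, y, deg=1)       # best F(x) = y story available
residuals = y - np.polyval(one_place, x)
print(residuals)                          # persistent misfit: the anomaly

# The shift: treat it as a two-place function G(x, z) = y.
A = np.column_stack([x, z])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)                             # roughly [2, 3]; the misfit is gone
```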
Teaching/Research - The epistemic facets, when they carry linguistic information, can allow you to switch facets too. You can encode facets linguistically (as I am doing here) and propagate them. This is something that is impossible to do with pure feedback (good/bad) or labeled input.
I'm sure there are lots of others. There might be ways of adding ethics and morality into this framework, but I'm not sure it would be useful. What else can people think of?
Disclaimer: I wrote this so I could have a vocabulary to talk about the rationalist community.
18 comments
Comments sorted by top scores.
comment by RomeoStevens · 2017-01-29
I think of these as dimensions, because that allows their representation to vary along something.
comment by whpearson · 2017-01-29
Not quite. Dimensions cover some of it. However, playing social games, you add many new dimensions of particular types for each new participant: you need to know their goals/personality and knowledge in order to be able to optimise. So adding one facet can add more than one dimension. Any facets involving language, or trying to paradigm shift, can also alter the number of dimensions.
I might try and contrast what I am talking about with dimensions in the second version, so thanks for the feedback.
comment by RomeoStevens · 2017-02-01
Curious how a cluster of dimensions that you assign something would differ from a facet in that case. Trying to figure out what intuitions facet is capturing as a concept.
comment by JenniferRM · 2017-02-03
What you call "facets" here seem somewhat heterogeneous to me. Some of it reminds me of techniques from Polya's How To Solve It on teaching and learning to solve math problems. Other aspects remind me of Chapman's paper on Cognitive Cliches.
comment by whpearson · 2017-02-03
Hmm. Thanks for the feedback, I think this needs a few more iterations.
'Facets' are supposed to be commonalities in problem descriptions, where problems are a much broader category than envisaged by Polya or Chapman (at least in the cognitive cliches paper linked; I'd not come across that before, so thanks, I'll read more of his work to see if it is expanded).
comment by chaosmage · 2017-01-29
Am I supposed to learn something from this or is this more of a memo to yourself?
comment by whpearson · 2017-01-29
It is supposed to create a way of talking about things.
For example, we can examine what facets the rationalist community is good at using (good at research, good at optimisation, trying to get better at positive-sum social stuff, not so good at encouraging paradigm shifting IMO).
Or you can try and unpick the history of intelligence in terms of facets. You can then talk about what happened at the birth of humanity and figure out its applicability as evidence about the creation of AI. On the facet view of things, we adopted a new epistemic facet: becoming able to absorb and also output linguistic information. Are the things we posit AIs can do that we can't as big a deal as this kind of epistemic facet, or bigger? Should we expect them to be better paradigm shifters than us?
comment by ChristianKl · 2017-01-30
What makes you think that this community isn't good at paradigm shifting? Which kinds of communities do you consider good at paradigm shifting, and how do you evaluate that they are better?
comment by whpearson · 2017-01-30
Normal science. For what it is worth.
Most paradigm shifters fail. But science at least gives them some space, lots of prestige and a record of their failure.
The rationalist community's emphasis on expected utility (especially on the individual scale), with things like giving what you can, recommending comfortable jobs, and predictable charities, means that the brightest and best might not be trying to paradigm shift.
Who knows what the utility for humanity of unifying quantum physics and relativity would be? We don't know if it will enable new inventions or just be pleasing maths. It is a known unknown; there are probably still unknown unknowns out there.
Our requirements for evidence and references also don't fit well with paradigm shifting, as you might need to spend time discussing and refining ideas before you can figure out how to build equipment to collect evidence for them. There is not a good space in the rationalist community for that.
Other things that might help paradigm shifters: Having good documents of all the failed ideas and descriptions of why they failed might stop people repeating themselves and show people fallow areas to avoid.
I think the karma ranking system also does not help paradigm-shifting discussion on LW. You would likely be downvoted by people who are invested in the paradigm you are trying to shift (as Kelvin pronounced X-rays a hoax). You are explicitly talking about things that seem unlikely and/or that you don't have a good language for.
I think we could do better than normal science.
comment by ChristianKl · 2017-01-31
If you believe Kuhn, then normal science doesn't constantly change its paradigms. It's something that happens less than once per decade.
Most paradigm shifters fail. But science at least gives them some space, lots of prestige and a record of their failure.
That's not true. A person who doesn't play according to the rules of the established paradigm gets no prestige in science.
I have a hard time getting a Wikipedia page for Thomas Hanna considered notable (and thus getting it transferred from draft space to the Wikipedia main space), despite him being accomplished as a creator of paradigms.
If I look at new paradigms, then I see MIRI's approach to AI, which is a different paradigm than what existed before. MIRI does get funding despite not being a predictable charity. MIRI even gets funding from OpenPhil.
The same goes for CFAR. It uses a different paradigm than academics do when they research decision making.
MetaMed was a new paradigm but failed.
Effective Altruism is a new paradigm.
A month ago I wrote a paradigm post on LW. It doesn't have that many upvotes, but it has engagement. I also got engagement earlier, when I circulated the draft among LW Berlin.
That paradigm in that form is the result of five years of thinking since I did a lot for Quantified Self in 2011. I did a lot of experimenting with credence calibration in different forms and had discussions about it till the idea of the "prediction based medicine" paradigm arose.
Our requirements for evidence and references also don't fit well with paradigm shifting, as you might need to spend time discussing and refining ideas before you can figure out how to build equipment to collect evidence for them. There is not a good space in the rationalist community for that.
A while ago a poster on LW asked a question about how to estimate the probability that a woman he asks out for a date says "Yes". Another poster complained that this is an example of LW doing everything wrong and being too nerdy and simply not getting what dating is about. The paradigm in which you write down a probability for the likelihood of getting a "Yes" feels foreign. The practical value of writing down the probability isn't apparent.
The fact that the practical value isn't apparent didn't prevent this from being discussed on LW. The lack of evidence or references also didn't prevent the idea from being discussed on LW. There is a place for posts like that on LW. Discussing them leads to refinement of ideas.
Other things that might help paradigm shifters: Having good documents of all the failed ideas and descriptions of why they failed might stop people repeating themselves and show people fallow areas to avoid.
I don't see the point of that when it comes to changing paradigm in Kuhn's sense.
comment by whpearson · 2017-01-31
That's not true. A person who doesn't play according to the rules of the established paradigm gets no prestige in science.
Physicists seem to be able to go off paradigm and explore promising paradigms like string theory and still be prestigious. A long-standing discrepancy and a known history of paradigm shifts probably make them more open to it. There are things that are hard for them to explore, though, like data that violates conservation of energy. Mainstream ML/AI seems a lot more conservative in comparison.
My statements about the rationalist community are based a bit on trying to get a single locus of community (e.g. LessWrong or Arbital, etc.). See my reply to Viliam for what I think someone should try.
My understanding of Kuhnian revolutions is that they happen in the following way.
1) There is either a) a collection of data to try to refine current in-paradigm measurements, and a surprise occurs, or b) an existing phenomenon not predicted by the in-paradigm theories.
2) Someone creates some language to try to explain that measurement/phenomenon.
3) There is a period of language refinement and measurement refinement until a new language is stable.
Agreed? None of the paradigm shifts you mention quite fit this mould. They do change the language around things, but not to improve people's predictions about the world (at least in a scientific sense). They seek to improve how we interact with the world.
I also think they were created outside the harsh reality of the online rationalist community (the only offline rationalist community I have experience with is London's).
The concepts and language for MIRI started on the Extropian and SL4 mailing lists (before even Overcoming Bias), which were very free-wheeling and allowed far-out ideas.
Similarly, the concepts and language for EA have probably been marinating in whatever social groups Peter Singer was hanging around with since he started writing about rationality and charities ('96?). I don't know what that was like; probably philosophical, which is all about the discussion.
Was Metamed born inside LW or apart from it? Could we improve how things like that are born and learn from their mistakes?
I've not been to any CFAR workshop, so I'm not sure of its content and genesis.
I like your prediction-based medicine post. It would be nice if there was a website where that sort of thing was the norm. And we could build expertise on how to test small parts of it.
I don't see the point of that when it comes to changing paradigm in Kuhn's sense.
If you consider human society as exploring the world of ideas, it makes sense to leave enough information so that people can see where you have been before. If you apply the no-free-lunch theorems of search, avoiding sampling the same spots of the search space again is the only way to guarantee that your algorithm is better (on average) than any other.
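A toy sketch of that claim (entirely illustrative; `visited` stands in for the shared record of failed ideas):

```python
import random

def search_without_resampling(candidates, score, budget, visited):
    """Sample the idea space, skipping points already recorded as explored.

    `visited` is a set of previously tried candidates, shared between
    searchers; all names here are illustrative.
    """
    best, best_score = None, float("-inf")
    pool = [c for c in candidates if c not in visited]
    for c in random.sample(pool, min(budget, len(pool))):
        visited.add(c)  # leave a record for future searchers
        if score(c) > best_score:
            best, best_score = c, score(c)
    return best
```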
comment by ChristianKl · 2017-02-01
Physicists seem to be able to go off paradigm and explore promising paradigms like string theory and still be prestigious.
Are the paradigms of the various allowed physical theories really that different in nature? It seems to me like all those physicists basically agree on a method, like building the Large Hadron Collider to find out things in detail, and generally doing a lot of mathematical proofs.
But it might be that I simply don't know enough about theoretical physics.
I personally care more about biology, medicine, and general decision making, and I'm fine with having holes in my knowledge about physics.
Was Metamed born inside LW or apart from it?
I think that depends on how broadly you see LW. It wasn't directly created by online discussion. On the other hand, the discussion about Bayesianism on LessWrong was likely very influential in them getting their concept together.
They do change the language around things, but not to improve people's predictions about the world (at least in a scientific sense). They seek to improve how we interact with the world
GiveWell's way of evaluating charities improves the ability of the people to make predictions about which charities will produce worthwhile effects in the future.
An idea like "room for funding" allows us to make better predictions about the effects of funding a charity.
GiveWell is a science project in the Kuhnian sense. GiveWell makes progress on getting better at evaluating causes. At least I believe that there's progress.
If you consider human society as exploring the world of ideas, it makes sense to leave enough information so that people can see where you have been before.
Many startups fail because they are too early. Today I can buy pet food online, but that wasn't a good business model in the DotCom boom. If somebody had had the idea of molecular biology in the 1920s, his paradigm would have failed.
http://kingsreview.co.uk/articles/how-academia-and-publishing-are-destroying-scientific-innovation-a-conversation-with-sydney-brenner/ is worth reading for the dawn of molecular biology. Sydney Brenner was one of the people who brought that paradigm to life.
Money quote (and part of the reason why I believe the current scientific system is bad at paradigm change): "It is alarming that so many Nobel Prize recipients have lamented that they would never have survived this current academic environment. What are the implications of this on the discovery of future scientific paradigm shifts and scientific inquiry in general? I asked Professor Brenner to elaborate.
SB: He wouldn’t have survived. Even God wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments (creating the universe), but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently? And a third would say, to top it all, he published it all in an un-refereed journal (The Bible).
So you know we now have these performance criteria, which I think are just ridiculous in many ways. But of course this money has to be apportioned, and our administrators love having numbers like impact factors or scores. Singapore is full of them too. Everybody has what are called key performance indicators. But everybody has them. You have to justify them.
I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people that are engaged on very long term, and to the funders, extremely risky work."
If you consider human society as exploring the world of ideas, it makes sense to leave enough information so that people can see where you have been before.
Sydney Brenner : I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant. Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.
The AirBnB folks didn't even know of CouchSurfing when they started AirBnB.
Reading a long list about why ideas fail gives you the perspective of how the person who evaluated them considered them to have failed. It doesn't necessarily give you a new perspective.
comment by whpearson · 2017-02-03
GiveWell's way of evaluating charities improves the ability of the people to make predictions about which charities will produce worthwhile effects in the future.
I'll have to think more about this.
I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people that are engaged on very long term, and to the funders, extremely risky work.
I agree with this. But I still think science has something that the rationality community currently lacks: a place for people with shared interests to meet, Schelling points for people who might paradigm-shift a certain subject. If you can fit your interest into LW or EA, or happen to have people in your area, you are okay. Otherwise you are out of luck.
Sydney Brenner : I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant. Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.
Reading a long list about why ideas fail gives you the perspective of how the person who evaluated them considered them to have failed. It doesn't necessarily give you a new perspective
I don't think young people should be forced to read everything other people have done before discussing or formulating hypotheses. But they should be able to explain how their hypothesis differs from things that have gone before if they want large amounts of funding. That should be what literature reviews are about, IMO.
The AirBnB folks didn't even know of CouchSurfing when they started AirBnB.
With companies, the environment is always changing, so there is less value in knowing about previous attempts. How many rationalist enterprises fit that mould rather than the scientific one, I don't know.
comment by ChristianKl · 2017-02-04
With companies, the environment is always changing, so there is less value in knowing about previous attempts. How many rationalist enterprises fit that mould rather than the scientific one, I don't know.
In science the available tools are also always changing. Molecular biology couldn't have been done fifty years earlier. Paradigm change is quite often due to new tools that allow new ways of doing research.
If you can fit your interest into LW or EA, or happen to have people in your area, you are okay.
I don't think the goal of providing places to meet for every possible interest is valuable. You can't discuss every kind of question at a university either. The kind of questions that GiveWell investigates aren't traditional academic questions.
I think the domain of questions that can be asked on LW and in EA is quite broad.
When it comes to places to meet, events like the European Community Weekend are good. EA Global also exists for getting people to meet.
The Accelerator Project might create a more permanent place for people to go and find people to do projects with.
comment by Viliam · 2017-01-31
I understand this to mean that focusing on expected utility may lead to premature optimization, especially with regard to the unknown unknowns: rejecting a whole line of thought just because, from the outside, the probability of it being useful seems very low.
Makes sense. But I think that maybe this is more (or also) about the size of the community than about its attitude. I mean, if you had 1000 active LW members, then even if you present an idea that 99% of them reject, the remaining 10 people can organize their own discussion and explore the idea.
comment by whpearson · 2017-01-31
But I think that maybe this is more (or also) about the size of the community than about its attitude. I mean, if you had 1000 active LW members, then even if you present an idea that 99% of them reject, the remaining 10 people can organize their own discussion and explore the idea.
You'd need a change in karma so that a post that was rejected wasn't buried, and so that other people who see the post aren't put off from posting speculative things by the karma hit.
Also, are you expecting 1000 users in discussion? Because I don't think speculative posts will get into main.
I think it would be better if there were a separate forum to LessWrong that allowed/encouraged builders of systems and data hunters to gather with theory builders to work off the data, with a public record of the failures, so that people who are searching can know what has been built and what failed to find anything interesting. Maybe it could be pseudonymous so that people can work without fear for their reputation. Or maybe we should work harder at not worrying about being wrong, as long as we were wrong in good faith. Being secretly wrong is a sad thing to have to be in a rationalist community.
And LessWrong could be a literature review of the successes (meant and otherwise), so that people who are trying to execute can get the best knowledge without lots of cruft.